date (stringlengths 10–10) | nb_tokens (int64 60–629k) | text_size (int64 234–1.02M) | content (stringlengths 234–1.02M) |
---|---|---|---|
2018/03/21 | 867 | 3,682 | <issue_start>username_0: ```
func tableView(_ tableView: UITableView, cellForRowAt indexPath: IndexPath) -> UITableViewCell {
    guard let cell = tableView.dequeueReusableCell(withIdentifier: "CollectionDetailsItem", for: indexPath) as? CharacterCollectionDetailsTableCell else {
        fatalError("Dequeued cell is not an instance of CharacterDetailsTableViewCell class.")
    }
    return cell
}

func tableView(_ tableView: UITableView, willDisplay cell: UITableViewCell, forRowAt indexPath: IndexPath) {
    guard let cell = tableView.dequeueReusableCell(withIdentifier: "CollectionDetailsItem", for: indexPath) as? CharacterCollectionDetailsTableCell else {
        fatalError("Dequeued cell is not an instance of CharacterDetailsTableViewCell class.")
    }
    if let character = character {
        cell.setCollectionViewDataSourceDelegate(dataType: dataTypes[indexPath.row], characterEntity: character)
    }
}
```
I really don't understand why this error happens. Could someone please help me figure out what I am doing wrong?<issue_comment>username_1: The error is being caused by your misuse of `dequeueReusableCell` in `willDisplayCell`. You must only ever use that in `cellForRowAt`.
Besides, the cell is already given to you as a parameter to `willDisplayCell`.
Update to:
```
func tableView(_ tableView: UITableView, willDisplay cell: UITableViewCell, forRowAt indexPath: IndexPath) {
    if let character = character {
        let myCell = cell as! CharacterCollectionDetailsTableCell
        myCell.setCollectionViewDataSourceDelegate(dataType: dataTypes[indexPath.row], characterEntity: character)
    }
}
```
And simply force-cast the cell type. If you set up your code incorrectly, it will crash just as well as using `guard` and `fatalError`.
Upvotes: 1 <issue_comment>username_2: You are getting the error because of an incorrect implementation of the data source and delegate methods of the `tableview`. `dequeueReusableCell` is used for creating reusable cells in the `tableview`, so it should only be implemented in the `cellForRow` data source method. You are doing it fine in the first method, but this is where you are going wrong:
```
func tableView(_ tableView: UITableView, willDisplay cell: UITableViewCell, forRowAt indexPath: IndexPath) {
    guard let cell = tableView.dequeueReusableCell(withIdentifier: "CollectionDetailsItem", for: indexPath) as? CharacterCollectionDetailsTableCell else {
        fatalError("Dequeued cell is not an instance of CharacterDetailsTableViewCell class.")
    }
    if let character = character {
        cell.setCollectionViewDataSourceDelegate(dataType: dataTypes[indexPath.row], characterEntity: character)
    }
}
```
This method is not where you create the cell; instead, it is where you can perform various tasks, based on your requirements, when the cell is about to be displayed. So based on what you asked, it could be like this:
```
func tableView(_ tableView: UITableView, cellForRowAt indexPath: IndexPath) -> UITableViewCell {
    guard let cell = tableView.dequeueReusableCell(withIdentifier: "CollectionDetailsItem", for: indexPath) as? CharacterCollectionDetailsTableCell else {
        fatalError("Dequeued cell is not an instance of CharacterDetailsTableViewCell class.")
    }
    return cell
}

func tableView(_ tableView: UITableView, willDisplay cell: UITableViewCell, forRowAt indexPath: IndexPath) {
    if let cell = cell as? CharacterCollectionDetailsTableCell {
        guard let character = character else {
            return
        }
        cell.setCollectionViewDataSourceDelegate(dataType: dataTypes[indexPath.row], characterEntity: character)
    }
}
```
Thanks.
Upvotes: 3 [selected_answer] |
2018/03/21 | 1,083 | 4,251 | <issue_start>username_0: I have a child view controller being presented programmatically by its parent view controller. I want to constrain the child view's CenterXAnchor to the parent's CenterXAnchor, but I want the child view to have a specific height and width.
```
childViewController.view.centerXAnchor.constraint(equalTo: parentViewController.view.centerXAnchor)
```
How do I go about doing this?
I have tried setting a frame for the childViewController but that messes with the anchors.<issue_comment>username_1: You can use a height and width anchor to set the view to a certain height/width. The height can also be set relative to another view's height.
In your example:
```
childViewController.view.heightAnchor.constraint(equalToConstant: 50)
childViewController.view.widthAnchor.constraint(equalToConstant: 50)
```
EDIT: Also remember to use '.isActive = true' to activate the constraint.
This makes your view a height and width of 50.
You can read more on it here:
<https://developer.apple.com/documentation/uikit/uiview/1622590-heightanchor>
<https://developer.apple.com/documentation/uikit/uiview/1622605-widthanchor>
Upvotes: 1 <issue_comment>username_2: Overview
========
Given below is an example showing how to do the following:
* Add a child view controller
* Setup constraints (layout anchors) for the child view
Code
====
```
private func showChildView() {
    let childViewController = UIViewController()
    let parentViewController = self
    childViewController.view.backgroundColor = .brown
    parentViewController.addChildViewController(childViewController)
    let parentView = parentViewController.view!
    let childView = childViewController.view!
    // For a programmatically created view, set translatesAutoresizingMaskIntoConstraints = false
    childView.translatesAutoresizingMaskIntoConstraints = false
    parentView.addSubview(childView)
    childView.centerXAnchor.constraint(equalTo: parentView.centerXAnchor).isActive = true
    childView.centerYAnchor.constraint(equalTo: parentView.centerYAnchor).isActive = true
    childView.widthAnchor.constraint(equalToConstant: 100).isActive = true
    childView.heightAnchor.constraint(equalToConstant: 200).isActive = true
    childViewController.didMove(toParentViewController: parentViewController)
}
```
Further reading:
================
* Please read about AutoLayout. Refer - <https://developer.apple.com/library/content/documentation/UserExperience/Conceptual/AutolayoutPG/>
* When you use the autolayout you must not set the frame directly.
* Autolayout is like a set of rules / constraints, so the frame would be determined based on the rules you set.
Upvotes: 2 [selected_answer]<issue_comment>username_3: **Please refer to the sample code**
```
// Initialising the child view controller
let controller = UIViewController()
controller.view.frame = CGRect.init(x: 0, y: 0, width: 150, height: 150)
controller.view.backgroundColor = UIColor.red
addChildViewController(controller)
view.addSubview(controller.view)
controller.didMove(toParentViewController: self)
// Setting constraints programmatically
controller.view.translatesAutoresizingMaskIntoConstraints = false
let widthConstraint = NSLayoutConstraint(item: controller.view, attribute: .width, relatedBy: .equal,
                                         toItem: nil, attribute: .notAnAttribute, multiplier: 1.0, constant: 150)
let heightConstraint = NSLayoutConstraint(item: controller.view, attribute: .height, relatedBy: .equal,
                                          toItem: nil, attribute: .notAnAttribute, multiplier: 1.0, constant: 150)
let xConstraint = NSLayoutConstraint(item: controller.view, attribute: .centerX, relatedBy: .equal, toItem: self.view, attribute: .centerX, multiplier: 1, constant: 0)
let yConstraint = NSLayoutConstraint(item: controller.view, attribute: .centerY, relatedBy: .equal, toItem: self.view, attribute: .centerY, multiplier: 1, constant: 0)
NSLayoutConstraint.activate([widthConstraint, heightConstraint, xConstraint, yConstraint])
```
**Sample output**
[](https://i.stack.imgur.com/vfyd0.png)
Upvotes: 0 |
2018/03/21 | 722 | 2,551 | <issue_start>username_0: I need different progress bars for each image upload for my Angular4 application. (Storing with AngularFireStore)
**My component**
```
percentageArray = [];

startUpload(event: FileList) {
    Array.from(event).forEach(file => {
        if (file.type.split('/')[0] != 'image') {
            alert('Dieses Dateiformat wird nicht unterstützt');
        }
        // Storage path
        const path = `uploads/${this.currentUserEmail}/${this.uniqueID}/${file.name}`;
        // Meta data
        const customMetadata = {
            auctionID: this.uniqueID.toString()
        }
        // Main task
        this.task = this.storage.upload(path, file, {customMetadata});
        // Progress monitoring
        this.percentage = this.task.percentageChanges();
        this.percentage.subscribe(p => {
            this.percentageArray.push(p);
        })
        this.snapshot = this.task.snapshotChanges();
        // File download URL
        this.downloadURL = this.task.downloadURL();
        this.imgArray.push(path);
    })
}
```
**My HTML**
```
```
**Result**

For each progress event I get a new progress bar. How can I combine them into only one bar per upload?<issue_comment>username_1: It should not be an array (maybe an array with keys and values, but then the update is more complicated).
e.g.
```
percentageArray = {};
```
```
this.percentage.subscribe(p => {
// Update percentage for this file.
this.percentageArray[file.name] = p;
});
```
Not sure how to best iterate over object keys/values in Angular HTML, but you probably can find the answer to that somewhere on SO.
Upvotes: 1 <issue_comment>username_2: You need to use the index of the current file.
```
startUpload(event: FileList) {
    Array.from(event).forEach((file, index) => { // <-- separate index for each file
        if (file.type.split('/')[0] != 'image') {
            alert('Dieses Dateiformat wird nicht unterstützt');
        }
        // Storage path
        const path = `uploads/${this.currentUserEmail}/${this.uniqueID}/${file.name}`;
        // Meta data
        const customMetadata = {
            auctionID: this.uniqueID.toString()
        }
        // Main task
        this.task = this.storage.upload(path, file, {customMetadata});
        // Progress monitoring
        this.percentage = this.task.percentageChanges();
        this.percentage.subscribe(p => {
            this.percentageArray[index] = p; // <--- just put the new percentage at the needed index
        })
        this.snapshot = this.task.snapshotChanges();
        // File download URL
        this.downloadURL = this.task.downloadURL();
        this.imgArray.push(path);
    })
}
```
Then remove one loop from the HTML:
```
```
Upvotes: 3 [selected_answer] |
2018/03/21 | 593 | 2,094 | <issue_start>username_0: I am trying to add an image viewer canvas with zoom controls etc. [ngx-imageviewer](https://github.com/hallysonh/ngx-imageviewer) seems to provide what I need but I cannot get it to work as I get the following error when I try to do an AOT build with angular-cli
>
> 'client:157 Error during template compile of 'ImageViewerModule':
> Function calls are not supported in decorators but 'ɵmakeDecorator'
> was called in 'NgModule'. 'NgModule' calls 'ɵmakeDecorator'.'
>
>
>
When I try running with JIT I get the following error
>
> 'Error: StaticInjectorError[DomSanitizer]:
>
> StaticInjectorError[DomSanitizer]:
> NullInjectorError: No provider for DomSanitizer!'
>
>
>
Has anyone been able to get this working, or does anyone know of any other libraries that offer this kind of functionality? I only need it to support PNGs.<issue_comment>username_1: I think you're trying to use a sanitized URL for your image src, could it be? If so, remember to import it.
If you don't, try implementing it like this:
```
import { Pipe, PipeTransform } from '@angular/core';
import { DomSanitizer } from '@angular/platform-browser';
@Pipe({
name: 'safeHtml'
})
export class SafeHtmlPipe implements PipeTransform {
constructor(private sanitizer: DomSanitizer) { }
transform(value: any, args?: any): any {
return this.sanitizer.bypassSecurityTrustHtml(value);
// or
// bypassSecurityTrustStyle(value: string): SafeStyle
// bypassSecurityTrustScript(value: string): SafeScript
// bypassSecurityTrustUrl(value: string): SafeUrl
}
}
```
import it like:
```
@NgModule({
declarations: [
AppComponent,
SafeHtmlPipe
],
```
use it like:
```
![]()
```
Hope it helps you
Upvotes: 1 <issue_comment>username_2: I found an alternate npm package that I can get to work called [ngx-image-viewer](https://www.npmjs.com/package/ngx-image-viewer) (slightly different spelling)
Beware of a potential gotcha: you have to pass an array of image URLs rather than just the URL string when adding the src input, e.g.:
```
```
Upvotes: 0 |
2018/03/21 | 940 | 3,166 | <issue_start>username_0: I have Table1; in this table users can charge their accounts:
```
userid action amount
1 Deposit 10000
1 removal 500
2 Deposit 20000
2 removal 13000
```
Now I want to select the remaining charge for every user:
Sum(amount) `with condition WHERE action='Deposit'` - SUM(amount) `with condition WHERE action='removal'`
The result should be the following:
```
userid    remaining charge
1 9500
2 7000
```
Thanks a lot!<issue_comment>username_1: You should sum only on particular filters.
```
SELECT
userId,
SUM(CASE WHEN action = 'deposit' THEN amount END) TotalDeposits,
SUM(CASE WHEN action = 'removal' THEN amount END) TotalRemovals,
SUM(CASE WHEN action = 'deposit' THEN amount END)
- SUM(CASE WHEN action = 'removal' THEN amount END) TotalAvailable
FROM
Table1
GROUP BY
userId
```
Upvotes: -1 [selected_answer]<issue_comment>username_2: ```
select client_id, credits,debits, credits-debits as balance
from (SELECT
client_id,
SUM(case when ACTION_TYPE='Deposit' then action_amount else 0 end) AS credits,
SUM(case when ACTION_TYPE='removal' then action_amount else 0 end) AS debits
FROM categories
GROUP BY client_id) a
where debits-credits<>0;
```
Using SUM() can do what you need.
[Fiddle](http://www.sqlfiddle.com/#!18/2bf41/3)
Upvotes: -1 <issue_comment>username_3: Here is another way to achieve what you want. I will leave a little bit more information in my statement so you can see what actually happens:
```
SELECT ts.*, removal.*,
SUM(ts.amount) as total_deposit,
SUM(removal.amount) AS total_removal,
SUM(ts.amount) - SUM(removal.amount) as result FROM Table1 as ts
INNER JOIN Table1 as removal ON ts.userId=removal.userId
WHERE ts.action = 'Deposit' AND removal.action='removal'
GROUP BY ts.userId, ts.action
-- and this is the cleared-up variant of the statement:
SELECT ts.userId,
SUM(ts.amount) - SUM(removal.amount) as result
FROM Table1 as ts
INNER JOIN Table1 as removal ON ts.userId=removal.userId
WHERE ts.action = 'Deposit' AND removal.action='removal'
GROUP BY ts.userId, ts.action
```
Upvotes: -1 <issue_comment>username_4: While I agree with the comment that you should be storing negatives, here is a solution.
```
DECLARE @data TABLE ([userid] int,[action] NVARCHAR(200),Amount MONEY)
INSERT INTO @data ([userid],[action],Amount) SELECT 1,'Deposit',10000
INSERT INTO @data ([userid],[action],Amount) SELECT 1,'removal',500
INSERT INTO @data ([userid],[action],Amount) SELECT 2,'Deposit',20000
INSERT INTO @data ([userid],[action],Amount) SELECT 2,'removal',13000
select
[userid],
sum (
CASE
WHEN [action]='Deposit' THEN Amount
ELSE -1.0 * Amount
END
) AS [reamine charge]
FROM @DATA
group by [userid]
```
Your requested solution is not the best, but here it is:
```
select
[userid],
(
sum(CASE WHEN [action]='Deposit' THEN Amount ELSE 0 END)
-sum(CASE WHEN [action]='removal' THEN Amount ELSE 0 END)
) AS [reamine charge]
FROM @DATA
group by [userid]
```
Upvotes: 0 |
2018/03/21 | 4,263 | 6,862 | <issue_start>username_0: So, I'm refactoring an old R script to Python that converts a hex string to an array of numbers (the hex string comes from a proprietary database and the numbers end up being coordinates).
I've refactored all of the code successfully, except for converting the raw binary output into decimal numbers.
I'm certain there's a simple way to do this?
The relevant section of the R script:
```
fourByteFloatHexToNumeric <- function(hexCharacterVector) {
rawString <- hexToRaw(hexCharacterVector) #returns as.raw(c(0x00, 0xfc, 0x3f, 0x46, 0x00, 0xcc, 0x3c, 0x46, 0x00, 0x90, 0x3c, 0x46, 0x00, 0x30, 0x3c, 0x46,...)
result <- readBin(rawString, what = "numeric", size = 4, n = nchar(hexCharacterVector) / 8) #returns arr c(12287, 12083, 12068, 12044, 12019, 11968, ....)
}
```
The Python script:
```
import base64
def readFromHexBlob(hex):
    while(len(hex)%8 != 0):
        hex = '0' + hex  # pad the hex with leading 0's if it's stored in the dB wrong
    array = base64.b16decode(hex)
    # how to get to decimal array??
readFromHexBlob('FC3F4600CC3C4600903C4600303C4600CC3B4600003B4600943946008837460058354600E8324600E42F4600B82C4600482A460080294600B8<KEY>85EB014066661640D7A33040A4704D40713D6A401F8583401F859340A470A5403D0AB740D7A3C8401F85DB40EC51F0405C8F0241CDCC0C411F8517413333234152B82E4166663A417B14464148E152411F855F41F6286C41CDCC7841C3F5824114AE8941E17A9041B81E974100009E41C3F5A44185EBAB41CDCCB2410AD7B94148E1C04185EBC741C3F5CE410000D6413D0ADD417B14E4413D0AEB417B14F241C3F5F8410AD7FF41295C03420AD7064233330A421F850D4248E11042713D1442A4701742D7A31A42CDCC1D42000021423D0A244200002742C3F52942C3F52C42CDCC2F421F853242713D3542C3F537425C8F3A42C3F53C4266663F42CDCC4142B81E444233334642EC51484266664A42A4704C42F6284E4248E14F42D7A35142295C534252B854423D0A564266665742CDCC58428FC259428FC25A4252B85B4214AE5C42EC515D4285EB5D42E17A5E423D0A5F42295C5F425C8F5F4252B85F4248E15F4248E15F42D7A35F42A4705F4233335F4248E15E42AE475E4214AE5D427B145D4266665C4266665B42A4705A42E17A5942E17A5842B81E5742CDCC5542E17A5442F6285342E17A5142CDCC4F42F6284E421F854C421F854A421F8548421F8546421F854442EC5142420000404214AE3D42295C3B4248E13842AE47364214AE33427B143142295C2E421F852B42D7A32842CDCC254248E12242CDCC1F4252B81C42D7A319421F851642AE4713420000104252B80C42E17A09427B140642D7A302425C8FFE418FC2F741CDCCF0410AD7E94148E1E2410AD7DB41CDCCD4418FC2CD41D7A3C6419A99BF41E17AB841A470B1416666AA41295CA341EC519C411F859541D7A38E4114AE8741C3F58041CDCC744114AE674152B85A419A994D41A4704141AE473541AE472941B81E1D417B14124133330741C3F5F8403333E34014AECF40B81EBD405C8FAA4000009840295C8740D7A3704033335340B81E3540713D1A40E17A0440A470DD3F85EBB13F713D8A3FAE47613F1F852B3F1F85EB3E713D8A3ECDCC4C3EB81E053E8FC2753D8FC2F53C')
```
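A minimal sketch of the missing conversion step (my addition, not from the original thread): the R call `readBin(rawString, what = "numeric", size = 4, ...)` reads the bytes as little-endian 4-byte IEEE-754 floats, and Python's `struct` module can do the same. The helper name and the short example string below are illustrative only:
```python
import struct

def hex_blob_to_floats(hex_string):
    # Pad with a leading zero if the database dropped it.
    if len(hex_string) % 2:
        hex_string = '0' + hex_string
    raw = bytes.fromhex(hex_string)
    # '<' = little-endian, 'f' = 4-byte float; one 'f' per 4 bytes.
    count = len(raw) // 4
    return list(struct.unpack('<' + 'f' * count, raw))

print(hex_blob_to_floats('00FC3F4600CC3C46'))  # [12287.0, 12083.0], matching the R output
```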
|
2018/03/21 | 1,310 | 4,360 | <issue_start>username_0: I'm trying to use ***@tornado.web.stream_request_body*** for uploading files.
But I have a problem with uploading large files.
For example, when I upload a PDF file larger than 100 MB (<https://yadi.sk/i/rzLQ96pk3Tcef6>) it uploads incorrectly and doesn't open in viewers.
Code example:
```
MAX_STREAMED_SIZE = 1024 * 1024 * 1024

@tornado.web.stream_request_body
class UploadHandler(tornado.web.RequestHandler):
    def prepare(self):
        self.request.connection.set_max_body_size(MAX_STREAMED_SIZE)
        self.f = open(os.path.join('files', '12322.pdf'), "w+b")

    def data_received(self, data):
        self.f.write(data)

    def post(self):
        self.f.close()
        print("upload completed")
```
What could be the reason for the problem?<issue_comment>username_1: This is my solution:
Python code:
```
import tornado.ioloop
import tornado.options
import tornado.web
import tornado.httpserver
from tornado.options import options, define

define("port", default=8888, help="run on the given port", type=int)

class Appliaction(tornado.web.Application):
    def __init__(self):
        handlers = [
            (r"/", HomeHandler),
            (r"/upload", UploadHandler),
        ]
        settings = dict(
            debug = True,
        )
        super(Appliaction, self).__init__(handlers, **settings)

class HomeHandler(tornado.web.RequestHandler):
    def get(self):
        self.render('form.html')

MAX_STREAMED_SIZE = 1024 * 1024 * 1024

@tornado.web.stream_request_body
class UploadHandler(tornado.web.RequestHandler):
    def initialize(self):
        self.bytes_read = 0
        self.data = b''

    def prepare(self):
        self.request.connection.set_max_body_size(MAX_STREAMED_SIZE)

    def data_received(self, chunck):
        self.bytes_read += len(chunck)
        self.data += chunck

    def post(self):
        this_request = self.request
        value = self.data
        with open('file', 'wb') as f:
            f.write(value)

def Main():
    tornado.options.parse_command_line()
    http_server = tornado.httpserver.HTTPServer(Appliaction())
    http_server.listen(options.port)
    tornado.ioloop.IOLoop.instance().start()

if __name__ == "__main__":
    Main()
```
Html code :
```
Upload
Upload
======
```
Upvotes: 1 <issue_comment>username_2: Try this. It works fine for a single file upload, with no large memory usage.
```
#! /usr/bin/env python
# -*- coding: utf-8 -*-
# Official packages
# 3rd-party packages
import tornado.web
# Local packages

# CONST
MB = 1024 * 1024
GB = 1024 * MB
TB = 1024 * GB
MAX_STREAMED_SIZE = 16 * GB

# Class & function definitions
@tornado.web.stream_request_body
class UploadHandler(tornado.web.RequestHandler):
    def initialize(self):
        self.bytes_read = 0
        self.meta = dict()
        self.receiver = self.get_receiver()

    # def prepare(self):
    #     """If no stream_request_body"""
    #     self.request.connection.set_max_body_size(MAX_STREAMED_SIZE)

    def data_received(self, chunk):
        self.receiver(chunk)

    def get_receiver(self):
        index = 0
        SEPARATE = b'\r\n'

        def receiver(chunk):
            nonlocal index
            if index == 0:
                index += 1
                split_chunk = chunk.split(SEPARATE)
                self.meta['boundary'] = SEPARATE + split_chunk[0] + b'--' + SEPARATE
                self.meta['header'] = SEPARATE.join(split_chunk[0:3])
                self.meta['header'] += SEPARATE * 2
                self.meta['filename'] = split_chunk[1].split(b'=')[-1].replace(b'"', b'').decode()
                chunk = chunk[len(self.meta['header']):]  # strip the multipart header from the stream
                import os
                self.fp = open(os.path.join('upload', self.meta['filename']), "wb")
                self.fp.write(chunk)
            else:
                self.fp.write(chunk)
        return receiver

    def post(self, *args, **kwargs):
        # strip the multipart trailer (closing boundary) from the stream
        self.meta['content_length'] = int(self.request.headers.get('Content-Length')) - \
                                      len(self.meta['header']) - \
                                      len(self.meta['boundary'])
        self.fp.seek(self.meta['content_length'], 0)
        self.fp.truncate()
        self.fp.close()
        self.finish('OK')

# Logic
if __name__ == '__main__':
    pass
```
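To exercise either handler above, here is a minimal client sketch (my addition, not from the original answers). It assumes the server is running on port 8888 as defined above, the route is `/upload`, and the third-party `requests` package is installed; the file and field names are illustrative:
```python
import requests

# POST a local file to the tornado app as multipart/form-data.
with open('bigfile.pdf', 'rb') as f:
    response = requests.post('http://localhost:8888/upload', files={'file1': f})
print(response.status_code, response.text)
```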
Upvotes: 2 |
2018/03/21 | 1,233 | 4,172 | <issue_start>username_0: I'm trying to use GoToMeeting's API and making a POST request to create a meeting. At the moment, I'm just trying to hardcode the body of the meeting and send headers, but I'm receiving an invalid JSON error and I'm not sure why. Here's the code for that route:
```
app.post('/new-meeting', (req, res) => {
    const headers = {
        'Content-Type': 'application/json',
        Accept: 'application / json',
        Authorization: 'OAuth oauth_token=' + originalToken
    };
    console.log('-----------------------------------------------------------')
    console.log('Access Token:');
    console.log('OAuth oauth_token=' + originalToken);
    console.log('-----------------------------------------------------------')
    const meetingBody = {
        subject: 'string',
        starttime: '2018-03-20T08:15:30-05:00',
        endtime: '2018-03-20T09:15:30-05:00',
        passwordrequired: true,
        conferencecallinfo: 'string',
        timezonekey: 'string',
        meetingtype: 'immediate'
    };
    return fetch('https://api.getgo.com/G2M/rest/meetings', {
        method: 'POST',
        body: meetingBody,
        headers: headers
    }).then(response => {
        console.log('response:');
        console.log(response);
        response
            .json()
            .then(json => {
                res.send(json);
                console.log(req.headers);
            })
            .catch(err => {
                console.log(err);
            });
    });
});
```
When I hit that router, I get the following error:
```
{
"error": {
"resource": "/rest/meetings",
"message": "invalid json"
}
}
```
Any advice would be appreciated!<issue_comment>username_1: I believe the header is incorrect.
You need 'Accept: application/json' without space.
Upvotes: 0 <issue_comment>username_2: tl;dr
=====
You are passing `fetch` a value for the `body` represented by a JavaScript object. It is converting it to a string by (implicitly) calling its `.toString()` method. This doesn't give you JSON. The API you are calling then complains and tells you that it isn't JSON.
You need to convert your object to JSON using:
```
body: JSON.stringify(meetingBody),
```
---
Test case
=========
This demonstrates the problem and the solution.
Server
------
This is designed to be a very primitive and incomplete mock of GoToMeeting's API. It just echoes back the request body.
```
const express = require("express");
var app = express();
var bodyParser = require('body-parser');
app.use(bodyParser.text({ type: "*/*" }));
app.post("/", (req, res) => {
console.log(req.body);
res.send(req.body)
});
app.listen(7070, () => console.log('Example app listening on port 7070!'))
```
Client
------
This represents your code, but with the Express server stripped out. Only the code relevant for sending the request to GoToMeeting's API is preserved.
```
const url = "http://localhost:7070/";
const fetch = require("node-fetch");
const headers = {
'Content-Type': 'application/json',
Accept: 'application / json',
Authorization: 'OAuth oauth_token=<PASSWORD>'
};
const meetingBody = {
subject: 'string',
starttime: '2018-03-20T08:15:30-05:00',
endtime: '2018-03-20T09:15:30-05:00',
passwordrequired: true,
conferencecallinfo: 'string',
timezonekey: 'string',
meetingtype: 'immediate'
};
fetch(url, {
method: 'POST',
body: meetingBody,
headers: headers
})
.then(res => res.text())
.then(body => console.log(body));
```
Results of running the test case
--------------------------------
The logs of both server and client show:
```
[object Object]
```
This is what you get when you call `meetingBody.toString()`.
If you change the code as described at the top of this answer, you get:
```
{"subject":"string","starttime":"2018-03-20T08:15:30-05:00","endtime":"2018-03-20T09:15:30-05:00","passwordrequired":true,"conferencecallinfo":"string","timezonekey":"string","meetingtype":"immediate"}
```
This is JSON, which is what the API is expecting.
---
Aside
=====
MIME types do not have spaces in them. `Accept: 'application / json',` should be `Accept: 'application/json',`. This *probably* isn't causing you any problems though.
Upvotes: 3 [selected_answer] |
2018/03/21 | 954 | 3,520 | <issue_start>username_0: I have a project in which I need to ask the user to introduce at least 5 valid full names (up to 10 full names, or until the user introduces "fim"). Each full name needs to have at least 2 names with at least 4 characters each, and the full name is only valid if it doesn't exceed 120 characters. I need to create an array for each full name whose elements are the names that are part of the full name. Here is my code so far. I have a lot of options in comments that don't work. "Nome Inválido" is Invalid Name and "Nome Válido" is Valid Name.
```
public static void main(String[] args) {
    Scanner keyboard = new Scanner(System.in);
    System.out.println("Introduza até 10 nomes completos com até 120 caracteres e pelo menos dois nomes com pelo menos 4 caracteres: ");
    String nome;
    int i = 0;
    do {
        //nomes[i] = keyboard.next();
        nome = keyboard.nextLine();
        i++;
        String[] nomeSeparado = nome.split(" ");
        System.out.print(Arrays.toString(nomeSeparado));
        int j = nomeSeparado[i].length();
        /**
        1) for(int k = 0; k < 2; k++) {
               if(!(j == 4)) {
                   System.out.println(" Nome Inválido ");
               }
               else {
                   System.out.println(" Nome Válido ");
               }
           }
        2) while( k < 2 ) {
               if(!(j == 4)) {
                   System.out.println(" Nome Inválido ");
               }
               else {
                   System.out.println(" Nome Válido ");
               }
           }
        3) if(while(!(nomeSeparado[i].length() == 4)<2)) {
               System.out.println(" Nome Inválido ");
           }
        4) for(i = 0; i < 10 ; i++) {
               if( j > 2 && nomeSeparado[i].length() == 4 ) {
                   System.out.println(" Nome Válido ");
               }
               else { System.out.println(" Nome Inválido "); }
           }
        **/
    }
    while(!nome.equalsIgnoreCase("fim") && i<10);
}
```<issue_comment>username_1: Every time the user inputs a name you save the first name as the first element of the `nomeSeparado` and the second name as a second element. What you have to do is inspect the length of both the first and second element and check if they comply with the rules. A logic like this should work:
```
int lengthOfFirstName = nomeSeparado[0].length();
int lengthOfSecondName = nomeSeparado[1].length();
if (lengthOfFirstName >= 4 && lengthOfSecondName >= 4 && lengthOfFirstName + lengthOfSecondName < 120) {
    System.out.println("Valid name");
} else {
    System.out.println("Invalid name");
}
```
Upvotes: 1 <issue_comment>username_2: Why are you using arrays? I think `.length()` saves you from that.
```
import java.util.Scanner;

public static void main(String[] args) {
    String firstName, surname, fullName;
    int fullNameCount;
    Scanner input = new Scanner(System.in);
    System.out.println("Enter your first name");
    firstName = input.nextLine();
    System.out.println("Enter your surname");
    surname = input.nextLine();
    if (firstName.length() <= 4 && surname.length() <= 4) {
        System.out.println("Invalid name! Exit");
    } else {
        fullName = firstName + surname;
        fullNameCount = fullName.length();
        System.out.println(fullNameCheck(fullNameCount));
        char[] fullNameArray = fullName.toCharArray();
    }
}

public static String fullNameCheck(int fullNameCount) {
    if (fullNameCount <= 120) {
        return "Valid Name";
    } else {
        return "Invalid Name";
    }
}
```
Upvotes: 0 |
2018/03/21 | 1,097 | 4,459 | <issue_start>username_0: I'm trying to print the progress of data transfer while using multipeer connectivity.
The `progress` information is available on the receiver side, in the `didStartReceivingResourceWithName` method and on the sender side, in the `sendResource` method.
Here is how I have implemented the receiver side:
```
func session(_ session: MCSession, didStartReceivingResourceWithName resourceName: String, fromPeer peerID: MCPeerID, with progress: Progress) {
    DispatchQueue.main.async {
        print(progress)
    }
}
```
And here is how I implemented the sender side:
```
func sendFileAction() -> Progress {
    var filePath = Bundle.main.url(forResource: "10MO", withExtension: "file")
    if mcSession.connectedPeers.count > 0 {
        do {
            let data = try Data(contentsOf: filePath!)
            fileTransferProgressInSender = mcSession.sendResource(at: filePath!, withName: "filename", toPeer: mcSession.connectedPeers[0]) { (error) -> Void in
                DispatchQueue.main.async {
                    if error != nil {
                        print("Sending error: \(String(describing: error))")
                    } else {
                        print("sendAFile with no error " + "filename")
                    }
                }
            }
        }
        catch let error as NSError {
            let ac = UIAlertController(title: "Send file error", message: error.localizedDescription, preferredStyle: .alert)
            ac.addAction(UIAlertAction(title: "OK", style: .default))
            present(ac, animated: true)
        }
    }
    return (fileTransferProgressInSender)
}
```
The receiver function does display the `progress` only once, at the beginning.
```
: Parent: 0x0 / Fraction completed: 0.0000 / Completed: 0 of 10485760
```
And I can't figure out where I can call the return of `sendFileAction` to display the progress on the sender side.
Any help please?
Thanks.
Edit:
I tried with the following code:
```
func session(_ session: MCSession, didStartReceivingResourceWithName resourceName: String, fromPeer peerID: MCPeerID, with progress: Progress) {
    startTime = Date()
    DispatchQueue.main.async {
        self.endTransfer = false
        self.sendProgressBar.progress = 0.0
        self.updateProgress(progress: progress)
    }
}

func updateProgress(progress: Progress) {
    DispatchQueue.main.async {
        while !self.endTransfer {
            print(progress.fractionCompleted)
            self.sendProgressBar.progress = Float(progress.fractionCompleted)
        }
    }
}
```
While the `print` gives real progress in the console, the progress bar jumps from 0 to 1 (reaches 1 before the `print` does).
What am I doing wrong?
Thanks again. |
2018/03/21 | 428 | 1,260 | <issue_start>username_0: I have three tables:
Table One: Users
Table Two: Roles
Table Three: UserInRoles
* Users
---
```
UserID | FullName
--------------------------------------------------
07DCEE4A-6598-42E1-95C6-0390FF8BB534 | <NAME>
```
* Roles
---
```
RoleID
---------------------------------------
E5C46F8E-EE6A-4052-AABA-08184E5F0158
```
* UserInRoles
---
```
UserID | RoleID
---------------------------------------------------------------------------
07DCEE4A-6598-42E1-95C6-0390FF8BB534 | E5C46F8E-EE6A-4052-AABA-08184E5F0158
```
I need to select all `UserID`s from table `Users` that are not in table `UserInRoles`.
I tried:
```
SELECT DISTINCT Users.UserId, Users.FullName
FROM Users
INNER JOIN UserInRoles
ON Users.UserId <> UserInRoles.UserId
```<issue_comment>username_1: Think `NOT IN` or `NOT EXISTS`:
```
select u.*
from users u
where not exists (select 1 from UserInRules ur where u.UserId = ur.UserId);
```
Upvotes: 2 <issue_comment>username_2: To select all UserIDs that are not in table UserInRoles from table Users, simply use `not in`:
```
select distinct * from users where userid not in
(select userid from UserInroles)
```
Upvotes: 1 [selected_answer] |
2018/03/21 | 657 | 2,333 | <issue_start>username_0: I am using a custom Express server with Next.js. It's running within a container. I am doing an HTTP request with `isomorphic-fetch` to get data for my render. I'd like to use `localhost` when running on the server and `mysite.com` when running on the client. Not sure the best way to accomplish this. I can do it hackily by doing `const isServer = typeof window === 'undefined'` but that seems pretty bad.<issue_comment>username_1: You can use `process.browser` to distinguish between server environment (NodeJS) and client environment (browser).
`process.browser` is `true` on the client and `undefined` on the server.
Upvotes: 6 <issue_comment>username_2: One additional note is that `componentDidMount()` is always called on the browser. I often load the initial data set (seo content in `getInitialProps()`, then load more in depth data in the `componentDidMount()` method.
Upvotes: 3 <issue_comment>username_3: Since I don't like depending on odd third party things for this behavior (even though `process.browser` seems to come from *Webpack*), I think the preferred way to check is for presence of `appContext.ctx.req` like this:
```
async getInitialProps (appContext) {
if (appContext.ctx.req) // server?
{
//server stuff
}
else {
// client stuff
}
}
```
Source: <https://github.com/zeit/next.js/issues/2946>
Upvotes: 4 <issue_comment>username_4: Now (2020 Jan) it should be `typeof window === 'undefined'` since `process.browser` is deprecated
Refer to <https://github.com/zeit/next.js/issues/5354#issuecomment-520305040>
Upvotes: 9 [selected_answer]<issue_comment>username_5: [`getServerSideProps` and `getStaticProps`](https://nextjs.org/blog/next-9-3#next-gen-static-site-generation-ssg-support) are added in Next 9.3(Mar 2020), and these functions are [recommended](https://nextjs.org/docs/api-reference/data-fetching/getInitialProps).
>
> If you're using Next.js 9.3 or newer, we recommend that you use `getStaticProps` or `getServerSideProps` instead of `getInitialProps`.
>
>
>
So no need to detect, just put server side stuff in `getServerSideProps`.
```js
import { useEffect } from 'react'

const MyPage = () => {
  useEffect(() => {
    // client side stuff
  }, [])
  return (
    ...
  )
}

// getServerSideProps must be exported from the page module
export async function getServerSideProps() {
  // server side stuff
  return { props: {} }
}
```
Upvotes: -1 |
2018/03/21 | 475 | 1,420 | <issue_start>username_0: I am trying to multiply each element of an array by an integer (along the first dimension). The tricky thing is that this integer will change for each element.
An example :
```
test <- array(dim = c(3,5,7))
test[1,,] <- 1
test[2,,] <- 10
test[3,,] <- 100
vec <- c(1,2,3)
```
The result I want is an array with the same dimension (3,5,7) and along the first dimension :
```
test[1,,] * vec[1]
test[2,,] * vec[2]
test[3,,] * vec[3]
```
This means
```
Result <- array(dim = c(3,5,7))
Result[1,,] <- 1
Result[2,,] <- 20
Result[3,,] <- 300
```
I think I am quite close with different functions like outer or apply but I think there is an easier way, as I have a lot of data to treat. For now, I found the outer function, and I should select something like the diagonal of the result.
Can someone help?<issue_comment>username_1: How about
```
test*replicate(7, replicate(5, vec))
```
Upvotes: 2 <issue_comment>username_2: What's wrong with using apply like this?
`sapply(1:length(vec), function(i) test[i,,]<<- test[i,,]*vec[i])`
Upvotes: 1 <issue_comment>username_3: `slice.index` might be helpful here
```
Result <- test * vec[slice.index(test, 1)]
```
Upvotes: 3 [selected_answer]<issue_comment>username_4: In this case you can just do
```
Result <- test*vec
```
Note that this will only work if the dimension that is being split and multiplied is the first one.
Upvotes: 0 |
2018/03/21 | 682 | 1,971 | <issue_start>username_0: How can I get something like this to work? I want `all = sum(onecycle, twocycle)`, without having to type it all out.
```
library('dplyr')
library('english')
ex <- data.frame(onecycle = 1:10, twocycle = sample(1:10), recycle = sample(1:10), gvar = rep(1:5, each = 2))
ex %>%
mutate(all = sum(paste0(english(1:2), 'cycle')))
```<issue_comment>username_1: how about this:
```
ex$all=ex %>% select(ends_with("cycle"))%>% rowSums()
```
Upvotes: 1 <issue_comment>username_2: You could use `dplyr::rowwise` or the `base::rowSums()`:
```
ex %>%
rowwise %>%
mutate(cycle_sum=sum(onecycle,twocycle))
```
OR
```
ex %>%
mutate(cycle_sum = rowSums(.[paste0(english(1:2), 'cycle')]))
```
Upvotes: 3 [selected_answer]<issue_comment>username_3: Here is one option with `reduce`
```
library(tidyverse)
ex %>%
select(matches('cycle')) %>%
reduce(`+`) %>%
mutate(ex, all = .)
```
---
Or another option is to `nest` and then use `map/reduce` within `mutate`
```
ex %>%
nest(-gvar) %>%
mutate(all = map(data, ~ .x %>%
reduce(`+`))) %>%
unnest
```
Upvotes: 2 <issue_comment>username_4: Here are some methods I found using `rlang::syms`
```
ex %>%
rowwise %>%
mutate(all = sum(!!!syms(paste0(english(1:2), 'cycle'))))
ex %>%
mutate(all = list(!!!syms(paste0(english(1:2), 'cycle'))) %>% reduce (`+`))
```
Upvotes: 1 <issue_comment>username_5: ```
library('purrr')
ex %>%
mutate(total = pmap_dbl(select(., onecycle, twocycle), sum))
onecycle twocycle recycle gvar total
1 1 7 8 1 8
2 2 9 9 1 11
3 3 4 6 2 7
4 4 2 7 2 6
5 5 3 10 3 8
6 6 8 3 3 14
7 7 1 2 4 8
8 8 10 1 4 18
9 9 6 5 5 15
10 10 5 4 5 15
```
Upvotes: 0 |
2018/03/21 | 928 | 2,822 | <issue_start>username_0: I can't find a clear (to a newbie like me) answer to this online.
If I create a Spark df (using pyspark, if that matters; I don't think it does) like:
```
new_df = spark.sql("select * from old_df")
print(new_df.count())
```
**1) Does new_df exist now due to the count() command?**
**2) If I instead did new_df.show(5) instead of count(), does this change the answer to #1?**
If I then do this
```
new_df = new_df.withColumn('foo', new column formula)
print(new_df.count())
```
**3) Does the initial step to create new_df get re-run before the new column is created?**
**4) Would new_df.cache() change the responses?**
**I am confused about WHEN something actually runs and whether steps get rerun as more and more is done or changed with a DF.**
EDIT:
What I meant for number 4 was if the sequence of commands had been:
```
new_df = spark.sql("select * from old_df")
print(new_df.count())
new_df.cache()
new_df = new_df.withColumn('foo', new column formula)
print(new_df.count())
```
versus the same without `new_df.cache()`: would this keep the second `print(new_df.count())` from triggering a rebuild of `new_df` from `old_df`, assuming `old_df` was not cached?
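A short illustrative sketch of the semantics being asked about (my addition, not from an original answer): transformations such as `spark.sql` and `withColumn` are lazy and only build a plan, while actions such as `count()` and `show()` trigger execution; `cache()` is itself lazy and is populated by the first action that runs after it. Here `lit(1)` stands in for the real column formula, and an existing SparkSession `spark` and table `old_df` are assumed:
```python
from pyspark.sql.functions import lit

new_df = spark.sql("select * from old_df")  # transformation: builds a plan, runs nothing
print(new_df.count())                       # action: the whole plan executes now

new_df.cache()                              # lazy: only marks new_df for caching
print(new_df.count())                       # action: recomputes once more and fills the cache
new_df = new_df.withColumn('foo', lit(1))   # new plan built on top of the cached result
print(new_df.count())                       # action: reuses the cache, only adds the column
```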
|
2018/03/21 | 1,296 | 3,311 | <issue_start>username_0: I have a pandas dataframe like this:
```
+-----+----------+
| No | quantity |
+-----+----------+
| 1 | 100.0 |
| 2 | 102.3 |
| 3 | 301.2 |
| 4 | 100.6 |
| 5 | 120.9 |
| ... | ... |
+-----+----------+
```
How can I calculate, for each value, the probability that it fits into the dataset (in the dataframe above all do except No. 3)? The idea is to use the standardized normal distribution and calculate the probability that a value (or a more extreme one) would occur. In this case, the probability that No. 3 occurs is almost zero because it is far from all other values.
I know how to do this on paper for each value:
1. calculate the z-score (see the formula sketch after this list)
2. find the corresponding value in the standard normal probability table
3. if the value is below the average of the distribution, the probability is 1 - probability
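In formula form (my addition for clarity; this is the standard z-score, and one common way to write the table-lookup step as a tail probability):
```latex
z = \frac{x - \mu}{\sigma}, \qquad p \approx 1 - \Phi(\lvert z \rvert)
```
where \mu and \sigma are the sample mean and standard deviation and \Phi is the standard normal CDF.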
so desired output is something like this:
```
+-----+----------+--------+
| No | quantity | prob |
+-----+----------+--------+
| 1 | 100.0 | 99,85% |
| 2 | 102.3 | 99,81% |
| 3 | 301.2 | 00,00% |
| 4 | 100.6 | 99,90% |
| 5 | 120.9 | 74,30% |
| ... | ... | ... |
+-----+----------+--------+
```
How can I realize that in python?
Thank you :)<issue_comment>username_1: Found my mistake, this is the answer to my question:
```
import pandas as pd
import numpy as np
import scipy.stats

df = pd.DataFrame(columns=['No','quantity'], data=[[1,100.0],[2,102.3],[3,301.3],[4,101.3],[5,101.3],[6,120.3]])
df['z'] = (df.quantity - df.quantity.mean())/df.quantity.std(ddof=0)
mu = np.mean(df.quantity)
sig = df.quantity.std()
df['prob'] = 0.0
for idx, row in df.iterrows():
    if row.quantity < mu:
        df.at[idx,'prob'] = 1 - (scipy.stats.norm(mu,sig).pdf(row.quantity))
    else:
        df.at[idx,'prob'] = scipy.stats.norm(mu, sig).pdf(row.quantity)
```
Output is:
```
No quantity z prob
0 1 100.0 -0.513775 0.995560
1 2 102.3 -0.482472 0.995502
2 3 301.3 2.225906 0.000629
3 4 101.3 -0.496082 0.995527
4 5 101.3 -0.496082 0.995527
5 6 120.3 -0.237493 0.995159
```
Upvotes: -1 <issue_comment>username_2: Some comments on your solution: if you're already using scipy, you can just use [scipy.stats.mstats.zscore](https://docs.scipy.org/doc/scipy-0.14.0/reference/generated/scipy.stats.mstats.zscore.html) rather than writing your own zscore calculation, and there's no need to import numpy to calculate the mean of a pandas series:
```
df = pd.DataFrame(columns=['No','quantity'], data=[[1,100.0],[2,102.3],[3,301.3],[4,101.3],[5,101.3],[6,120.3]])
mu = df.quantity.mean()
sig = df.quantity.std()
df['z'] = scipy.stats.mstats.zscore(df.quantity)
df['prob'] = 0.0
for idx, row in df.iterrows():
    if row.quantity < mu:
        df.at[idx,'prob'] = 1 - (scipy.stats.norm(mu,sig).pdf(row.quantity))
    else:
        df.at[idx,'prob'] = scipy.stats.norm(mu, sig).pdf(row.quantity)
```
You also can avoid the iteration over the dataframe using apply:
```
df= pd.DataFrame(columns=['No','quantity'], data=[[1,100.0],[2,102.3],[3,301.3],[4,101.3],[5,101.3],[6,120.3]])
mu=df.quantity.mean()
sig=df.quantity.std()
df['z']=scipy.stats.mstats.zscore(df.quantity)
df['prob']=df['quantity'].apply(lambda x: scipy.stats.norm(mu,sig).pdf(x) if x > mu else 1 - scipy.stats.norm(mu,sig).pdf(x))
```
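A fully vectorized variant (my addition, not from the original answers) that avoids the per-row lambda; it reuses `df`, `mu`, and `sig` from the snippet above, since `scipy.stats.norm(...).pdf` accepts a whole series and `numpy.where` picks between the two branches:
```python
import numpy as np
import scipy.stats

dist = scipy.stats.norm(mu, sig)
pdf = dist.pdf(df['quantity'])
df['prob'] = np.where(df['quantity'] > mu, pdf, 1 - pdf)
```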
Upvotes: 4 [selected_answer] |
2018/03/21 | 882 | 2,887 | <issue_start>username_0: I have used the script below, but some values like 01523 are written to the CSV as 1523.
```
$file = fopen('demosaved.csv', 'w');

// save the column headers
fputcsv($file, array('Column 1', 'Column 2', 'Column 3', 'Column 4', 'Column 5'));

// Sample data. This can be fetched from mysql too
$data = array(
    array(01523, 'Data 12', 'Data 13', 'Data 14', 'Data 15')
);

// save each row of the data
foreach ($data as $row) {
    fputcsv($file, $row);
}

// Close the file
fclose($file);
```
In the CSV the output is like **1523**, but I want **01523**.
[](https://i.stack.imgur.com/Ijnda.png)
Please note that I don't want to use `'\n'`, `'\t'`, or a quoted `'01523'` as the value.
Also, how can I wrap all headers and values in double quotes, like "column 1","column 2"...?<issue_comment>username_1: Don't open it in Excel. It always drops leading zeros off number fields unless you tell it otherwise. If you open it in a text editor and the zero is there, then PHP is doing the job correctly and you need to go to an Excel forum for help.
The answer is to change the formatting if you insist on using Excel.
Upvotes: 0 <issue_comment>username_2: After you run your script (assuming `01523` in your array is actually the string `"01523"`), the contents of *demosaved.csv* will be:
```
"Column 1","Column 2","Column 3","Column 4","Column 5"
01523,"Data 12","Data 13","Data 14","Data 15"
```
If you open it in a text editor, you will see the leading zero, because *it is there in the file*. If you open it in Excel, you will not see the leading zero, *even though it is there in the file*, because that's how Excel displays numbers with leading zeroes.
Even if you edit your demosaved.csv file in your text editor and put quotes around the number with the leading zero, so it's `"01523","Data 12"...` instead of `01523,"Data 12",...`, Excel will still not display the leading zero. The only way to force Excel to display the leading zero in a number in a CSV file is to use one of the tricks you want to avoid.
If the intended use of your output file is to be opened in Excel, you can create an Excel document instead of a CSV file. Here's a quick example with PhpSpreadsheet:
```
<?php
require 'vendor/autoload.php';
use PhpOffice\PhpSpreadsheet\Spreadsheet;
use PhpOffice\PhpSpreadsheet\Writer\Xlsx;
$spreadsheet = new Spreadsheet();
$sheet = $spreadsheet->getActiveSheet();
$sheet->fromArray($data, null, 'A1');
$writer = new Xlsx($spreadsheet);
$writer->save('example.xlsx');
```
The leading zero will show up in *example.xlsx*. But this file format cannot be used to pass your data to another application that expects a CSV. If that is the intended use of your output file, the way you're already doing it is fine.
However, if you do open the CSV in Excel, edit it (or not), and save the changes, the leading zero will be gone.
Upvotes: 1 |
2018/03/21 | 329 | 1,170 | <issue_start>username_0: I have this code:
```
transform(searchData: Array, searchResultContentType: string) {
if (searchData == undefined) {
return;
}
return searchData.filter((item) => item.ContentType == searchResultContentType);
}
```
My console prints:
```
ERROR TypeError: searchData.filter is not a function
at FilterCountPipe.webpackJsonp.../../../../../src/app/Common/pipes/filterCount.pipe.ts.FilterCountPipe.transform (filterCount.pipe.ts:19)
```
I tried to add an import from rxjs/add/operator/filter, but it didn't solve it.
Any ideas on how to fix it?
Thanks<issue_comment>username_1: Just change `Array` to `array`. The first one is an object and the second is an array. You can find more information about `filter` here [1].
[1] : <https://alligator.io/js/filter-array-method/>
Upvotes: -1 <issue_comment>username_2: Should `searchData` be null (or undefined), you should get an error like
```
TypeError: cannot read property filter of null // or undefined
```
Like @OscarPaz thought, `filter is not a function` is thrown because the received `searchData` is not an array (yet still defined and not null).
Upvotes: 4 [selected_answer] |
2018/03/21 | 327 | 1,258 | <issue_start>username_0: I am making a JFrame application in Java. I'm using the Application designer to insert components into my JFrame. In a JTextArea, I would like to display some text, but that text is returned by a function that I wrote in my class. So I thought I could just call the function in the JTextArea value in the initcomponents() method, which manages the code for my GUI components. But the initcomponents method cannot be modified (highlighted in grey). Is there a way to do this?
```
public String yes() {
    return "voila";
}
```
Is there a way to do something like this?
```
private void initcomponent() {
    jTextArea1.setText("some text" + yes());
}
```
|
2018/03/21 | 670 | 2,135 | <issue_start>username_0: Is there a good way to check if all items in an array are of the same type?
Something that does this:
```
[1, 2, 3, 4] // true
[2, 3, 4, "foo"] // false
```<issue_comment>username_1: You can use Array.every to check if all elements have the same type.
```
arr.every( (val, i, arr) => typeof val === typeof arr[0]);
arr = ["foo", "bar", "baz"] // true
arr = [1, "foo", true] // false
```
Note:
```
arr = [] // true
```
Upvotes: 1 <issue_comment>username_2: You could create a [Set](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Set) from the types of each element in the array and make sure that it has at most one element:
```js
console.log( allSameType( [1,2,3,4] ) );
console.log( allSameType( [2,3,4,"foo"] ) );
function allSameType( arr ) {
return new Set( arr.map( x => typeof x ) ).size <= 1;
}
```
Upvotes: 4 [selected_answer]<issue_comment>username_3: Maybe less complicated solution would be
```
function sameTypes(arr, type) {
    arr.forEach((item, index) => {
        if (typeof item == type) {
            console.log('TRUE');
        } else {
            console.log('FALSE');
        }
    });
}
```
Upvotes: 0 <issue_comment>username_4: Came up with a functional approach using recursion.
```js
var array = [1, 2, "foo"]
function check([head, ...tail]) {
    // head === undefined covers the empty-array base case without
    // misfiring on falsy values such as 0 or ""
    if (head === undefined) {
        return true
    } else {
        var flag = true
        tail.forEach(element => {
            flag = flag && (typeof element === typeof head)
        })
        return flag ? check(tail) : false
    }
}
console.log(check(array))
```
Upvotes: 0 <issue_comment>username_5: Not exactly what OP asked but if you want to check if it's a certain type:
```js
function isArrayOfType(arr, type) {
return arr.filter(i => typeof i === type).length === arr.length;
}
const numericArray = [1, 2, 3, 4, 5];
const mixedArray = [1, 2, 3, "foo"];
console.log(isArrayOfType(numericArray, 'number')); // true
console.log(isArrayOfType(numericArray, 'string')); // false
console.log(isArrayOfType(mixedArray, 'number')); // false
```
Upvotes: 0 |
2018/03/21 | 952 | 3,062 | <issue_start>username_0: I have two dataframes (DF1 and DF2)
```
DF1 <- as.data.frame(c("A, B","C","A","C, D"))
names(DF1) <- c("parties")
```
DF1
```
parties
A, B
C
A
C, D
```
.
```
B <- as.data.frame(c(LETTERS[1:10]))
C <- as.data.frame(1:10)
DF2 <- bind_cols(B,C)
names(DF2) <- c("party","party.number")
```
.
DF2
```
party party.number
A 1
B 2
C 3
D 4
E 5
F 6
G 7
H 8
I 9
J 10
```
The desired result should be an additional column in DF1 which contains the party numbers taken from DF2 for each row in DF1.
Desired result (based on DF1):
```
parties party.numbers
A, B 1, 2
C 3
A 1
C, D 3, 4
```
I strongly suspect that the answer involves something like `str_match`(DF1$parties, DF2$party.number) or a similar regular expression, but I can't figure out how to put two (or more) party numbers into the same row (DF2$party.numbers). |
2018/03/21 | 673 | 1,660 | <issue_start>username_0: With data frames like those below, I want to use a for loop to iterate through each data frame and add some new columns, say `new` and `new1`, to the data frames `d1` and `d2`. Below is what I have tried.
```
d1 <- data.frame(y1 = c(1, 2, 3), y2 = c(4, 5, 6))
d2 <- data.frame(y1 = c(3, 2, 1), y2 = c(6, 5, 4))
my.list <- list(d1, d2)
for(df in my.list) {
df$new <- df$y1 + df$y2
df$new1 <- df$y1 - df$y2
}
```
However when I look at the original data frames `d1` and `d2` they do not show the new col `new`.
```
> colnames(d1)
[1] "y1" "y2"
> colnames(d2)
[1] "y1" "y2"
```
How do I go about getting new columns added to the original data frames `d1` and `d2`?<issue_comment>username_1: here is an option with `lapply`
```
lst <- lapply(my.list, transform, new = y1 + y2, new1 = y1 - y2)
```
It is better to keep the 'data.frame's in the `list` and not to update the objects on the global environment. But, if it is really needed
```
list2env(lst, envir = .GlobalEnv)
d1
# y1 y2 new new1
#1 1 4 5 -3
#2 2 5 7 -3
#3 3 6 9 -3
```
### data
```
my.list <- mget(paste0("d", 1:2))
```
Upvotes: 3 [selected_answer]<issue_comment>username_2: Use lapply or Map/map, rather than for.
```
lapply(my.list, function(df) {df$new <- df$y1 + df$y2; df})
```
Upvotes: 0 <issue_comment>username_3: The map-version looks as follows:
```
library(purrr) # part of the tidyverse
map(my.list, ~ mutate(.x, new = y1 + y2))
```
`~` creates an anonymous function.
Upvotes: 2 <issue_comment>username_4: Do this instead:
```
for(df in 1:length(my.list)) {
  my.list[[df]]$new <- my.list[[df]]$y1 + my.list[[df]]$y2
}
```
Upvotes: 0 |
2018/03/21 | 539 | 1,433 | <issue_start>username_0: I'm trying to add and start a state in Phaser JS but I'm getting a weird error. Here is the code:
```
var game = new Phaser.Game(800, 600, Phaser.AUTO);
var GameState = {
preload: function(){
},
create: function(){
},
update: function(){
}
};
game.state.add('GameState', GameState);
game.state.start(GameState);
```
I'm getting the error:
>
> Uncaught TypeError: Cannot read property 'add' of undefined
> at main.js:18
>
>
>
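If this is Phaser 3 (an assumption — the version isn't stated), `game.state` no longer exists because the State Manager was replaced by Scenes, which would explain `Cannot read property 'add' of undefined`. A minimal Phaser 3 sketch:

```js
var GameState = {
    preload: function () {},
    create: function () {},
    update: function () {}
};

// In Phaser 3 the configuration object carries the scene(s);
// there is no game.state manager anymore.
var game = new Phaser.Game({
    type: Phaser.AUTO,
    width: 800,
    height: 600,
    scene: GameState
});
```

On Phaser 2 the original code does work, but note that `game.state.start(...)` expects the string key, i.e. `game.state.start('GameState')`.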
2018/03/21 | 1,992 | 4,325 | <issue_start>username_0: So, here is the deal: I have the code below and it produces multiple results; how do I put all these results in a single document? I was wondering if it was possible to make all of this a list of links. It currently comes out this way:
```
['http://acervo.estadao.com.br/pagina/#!/20171101-45305-nac-1-pri-a1-not/busca/ministro', 'http://acervo.estadao.com.br/pagina/#!/20171004-45277-spo-1-pri-a1-not/busca/Minist%C3%A9rio', 'http://acervo.estadao.com.br/pagina/#!/20171004-45277-nac-1-pri-a1-not/busca/Minist%C3%A9rio', 'http://acervo.estadao.com.br/pagina/#!/20171109-45313-nac-1-pri-a1-not/busca/ministro', 'http://acervo.estadao.com.br/pagina/#!/20171219-45353-nac-1-pri-a1-not/busca/ministro', 'http://acervo.estadao.com.br/pagina/#!/20171122-45326-spo-1-pri-a1-not/busca/ministro', 'http://acervo.estadao.com.br/pagina/#!/20171122-45326-nac-1-pri-a1-not/busca/ministro', 'http://acervo.estadao.com.br/pagina/#!/20171229-45363-spo-1-pri-a1-not/busca/ministro', 'http://acervo.estadao.com.br/pagina/#!/20171229-45363-nac-1-pri-a1-not/busca/ministro', 'http://acervo.estadao.com.br/pagina/#!/20180105-45370-nac-1-pri-a1-not/busca/minist%C3%A9rio']
['http://acervo.estadao.com.br/pagina/#!/20180202-45398-spo-1-pri-a1-not/busca/ministro', 'http://acervo.estadao.com.br/pagina/#!/20180202-45398-nac-1-pri-a1-not/busca/ministro', 'http://acervo.estadao.com.br/pagina/#!/20180131-45396-spo-1-pri-a1-not/busca/ministro', 'http://acervo.estadao.com.br/pagina/#!/20100702-42626-spo-1-pri-a1-not/busca/Ministro', 'http://acervo.estadao.com.br/pagina/#!/20101202-42779-spo-1-pri-a1-not/busca/Minist%C3%A9rio', 'http://acervo.estadao.com.br/pagina/#!/20101220-42797-spo-1-pri-a1-not/busca/Minist%C3%A9rio', 'http://acervo.estadao.com.br/pagina/#!/20100904-42690-spo-1-pri-a1-not/busca/ministro', 'http://acervo.estadao.com.br/pagina/#!/20101102-42749-spo-1-pri-a1-not/busca/ministro', 'http://acervo.estadao.com.br/pagina/#!/20100514-42577-nac-1-pri-a1-not/busca/ministro', 'http://acervo.estadao.com.br/pagina/#!/20100915-42701-spo-1-pri-a1-not/busca/Minist%C3%A9rio']
```
But I wanted something like a list, like this:
```html
http://acervo.estadao.com.br/pagina/#!/20171101-45305-nac-1-pri-a1-not/busca/ministro
http://acervo.estadao.com.br/pagina/#!/20180202-45398-spo-1-pri-a1-not/busca/ministro
http://acervo.estadao.com.br/pagina/#!/20180131-45396-spo-1-pri-a1-not/busca/ministro
http://acervo.estadao.com.br/pagina/#!/20171101-45305-nac-1-pri-a1-not/busca/ministro
```
A bunch of links, in the order they were retrieved, in a .txt document. I have no idea how to start (I'm a newbie in programming).
```
opts = Options()
opts.add_argument("user-agent=Mozilla/5.0")
driver = webdriver.Chrome(chrome_options=opts)
x = 1
driver.get("http://acervo.estadao.com.br/procura/#!/ministro%3B minist%C3%A9rio|||/Acervo/capa//1/2000|2010|2010///Primeira")
time.sleep(5)
page_number = driver.find_element_by_class_name("page-ultima-qtd").text
for i in range(int(page_number)):
link = ("http://acervo.estadao.com.br/procura/#!/ministro%3B minist%C3%A9rio|||/Acervo/capa//{}/2000|2010|2010///Primeira").format(x)
#driver.get(link)
links = WebDriverWait(driver, 10).until(EC.presence_of_all_elements_located((By.LINK_TEXT, "LEIA ESTA EDIÇÃO")))
references = [link.get_attribute("href") for link in links]
driver.find_element_by_class_name("seta-right").click()
time.sleep(1)
print(references)
x = x + 1
#print(x)
print(i)
```<issue_comment>username_1: ```
import csv
list1 = ['a','b','c']
list2 = ['a','b','c']
#if your output your getting is lists you could put them all into one list first
master = list1 + list2
#concatenated lists
print(master)
#then simply send to file
with open("filenames.csv", 'w') as f:
wr = csv.writer(f, lineterminator='\n')
for row in master:
wr.writerow([row])
```
Upvotes: 2 <issue_comment>username_2: Simplest solution: format your `references` list before printing, ie
```
# print(references)
print("\n".join(references))
```
or print them one by one (might be a bit longer but well):
```
# print(references)
for ref in references:
print(ref)
```
and then use your OS redirections to redirect the output to a file (linux example):
```
$ python yourscript.py > myurls.txt
```
Upvotes: 2 [selected_answer] |
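If you would rather have Python write the file itself, a small sketch building on the accepted answer (`all_references` is a name introduced here):

```
all_references = []
# inside the page loop, right after computing `references`:
#     all_references.extend(references)

with open("myurls.txt", "w") as f:
    for ref in all_references:
        f.write(ref + "\n")
```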
2018/03/21 | 847 | 3,109 | <issue_start>username_0: I'm working on angular 5 Reactive Forms :
I had this error :
*There is no directive with "exportAs" set to "ngModel"*
I saw in other forums that this problem can have many causes:
a misspelling in the HTML template, forgetting to import "FormsModule" or "ReactiveFormsModule", etc.
I checked my code but I didn't find the issue.
Can you help me please?
***Console error :***
```
There is no directive with "exportAs" set to "ngModel" ("
[(ngModel)]="user.FirstName"
formControlName="FirstName"
[ERROR ->]#FirstName="ngModel" />
{{ 'FIRST_NAME' | translate:param}}")
: ng:///AppModule/LoginComponent.html@12:15
```
**app.module.ts:**
```
//angular moudel
import { NgModule } from '@angular/core';
import { FormsModule, ReactiveFormsModule } from '@angular/forms';
....
@NgModule({
declarations: [
.....
],
imports: [
BrowserModule,
FormsModule,
ReactiveFormsModule,
...
AppRoutingMoudel,
],
...
})
```
**LoginComponent.ts**
```
import { Component, OnInit } from '@angular/core';
import { User } from './../../../model/user';
import {FormBuilder,FormGroup,FormControl,Validators,NgForm} from '@angular/forms'
....
export class LoginComponent implements OnInit {
user : User;
userLoginForm: FormGroup;
constructor(private userLoginFormBuilder:FormBuilder) {
this.user = new User ("TestName", "Yagmi",
"<EMAIL>", "esay", "esay");
this.userLoginForm = this.userLoginFormBuilder.group({
FirstName: new FormControl (this.user.FirstName,
[Validators.minLength(4),])
});
}
}
```
**LoginComponent.Html**
```
{{ 'FIRST_NAME' | translate:param }}
min length is 4 characters.
touched.
```
**user.ts**
```
export class User {
constructor(
public FirstName: string,
public LastName: string,
public Email: string,
public Passeword: string,
public ConfirmPasseword: string
) { }
}
```<issue_comment>username_1: In the line
`#FirstName="ngModel"`
The component referenced needs to have defined "exportAs" value.
For example
```
@Directive({
selector: '[tooltip]',
exportAs: 'tooltip'
})
```
`#FirstName="tooltip"`
<https://netbasal.com/angular-2-take-advantage-of-the-exportas-property-81374ce24d26>
Upvotes: 0 <issue_comment>username_2: I found where my error is: I used template-driven and reactive forms together.
This is why I had the error (after I read the comment of Alex).
The solution is just to remove all the template-driven parts from my HTML template:
```
to remove template driven
formControlName="FirstName" //==> to keep reactive forms
// #FirstName="ngModel" ===> to remove template driven
/>
{{ 'FIRST_NAME' | translate:param }}
min length is 4 characters.
touched.
```
Also, in my
**LoginComponent.ts**
```
import {FormBuilder,FormGroup,FormControl,Validators} from '@angular/forms'
```
==> remove the **NgForm** import; it is not needed with reactive forms
Upvotes: 1 |
2018/03/21 | 541 | 1,837 | <issue_start>username_0: I want to be able to feed an initial state to a network via a placeholder, and TensorFlow only allows an array or a tensor to be fed (and I don't know how to create a zero initial state tuple). But the `tf.nn.dynamic_rnn` function requires a tuple of size 3.
In the answer of this post:
[How do I set TensorFlow RNN state when state\_is\_tuple=True?](https://stackoverflow.com/questions/39112622/how-do-i-set-tensorflow-rnn-state-when-state-is-tuple-true/39917340#comment79297687_39917340)
a method is exposed to do this conversion, but the function it uses, `l = tf.unpack(state_placeholder, axis=0)`, doesn't exist anymore. How can I perform the conversion from a tensor of shape (num_layer, 2, batch_size, hidden_layers), fed to a placeholder, to a tuple acceptable by `tf.nn.dynamic_rnn` as an initial_state argument?
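A minimal sketch of the conversion (an assumption on my part — this mirrors the linked answer's approach with `tf.unpack` replaced by its renamed successor `tf.unstack`, and assumes LSTM cells in a TF 1.x graph):

```
import tensorflow as tf

num_layers, batch_size, hidden_size = 3, 32, 128

# Placeholder shaped (num_layers, 2, batch_size, hidden_size);
# the "2" axis holds the LSTM cell state c and hidden state h.
init_state = tf.placeholder(tf.float32,
                            [num_layers, 2, batch_size, hidden_size])

# tf.unstack is the renamed successor of tf.unpack: this yields one
# (2, batch_size, hidden_size) tensor per layer.
per_layer = tf.unstack(init_state, axis=0)

rnn_tuple_state = tuple(
    tf.nn.rnn_cell.LSTMStateTuple(per_layer[i][0], per_layer[i][1])
    for i in range(num_layers)
)
# rnn_tuple_state can now be passed as initial_state to tf.nn.dynamic_rnn.
```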
2018/03/21 | 169 | 587 | <issue_start>username_0: I want to console.log multiply, what am I doing wrong?
```
×
var doms = document.getElementsByTagName('button');
const data = doms.dataset.action;
console.log(data);
```<issue_comment>username_1: You can do it this way:
```
var doms = document.getElementsByTagName('button');
console.log(doms[0].getAttribute('data-action'));
```
Upvotes: -1 <issue_comment>username_2: `doms` is an HTMLCollection of buttons, so it needs an index.
```
var doms = document.getElementsByTagName('button')[0];
const data = doms.dataset.action;
console.log(data);
```
Upvotes: 0 |
2018/03/21 | 1,214 | 3,308 | <issue_start>username_0: I have this table:
```
VAT | Email1 | Email2
000 | <EMAIL> | <EMAIL>
000 | <EMAIL> | -
000 | <EMAIL> | <EMAIL>
000 | - | <EMAIL>
```
I want this result:
```
VAT | Emails
000 | <EMAIL>, <EMAIL>, <EMAIL>, <EMAIL>
```
How can I do this in SQL?
Note that I want to concatenate values from multiple columns **and** multiple rows simultaneously.<issue_comment>username_1: Well, it's not an exact duplicate of the [question](https://stackoverflow.com/questions/194852/how-to-concatenate-text-from-multiple-rows-into-a-single-text-string-in-sql-serv) Lad2025 linked to,
but the answers to that question does show how to convert values of different rows into a comma separated string.
The one thing you have left to do is to get a distinct list of emails per vat from both columns.
Here is one way to do it:
First, Create and populate sample table (**Please** save us this step in your future questions):
```
DECLARE @T AS TABLE
(
VAT char(3),
Email1 char(6),
Email2 char(6)
)
INSERT INTO @T(VAT,Email1, Email2) VALUES
('000', '<EMAIL>', '<EMAIL>'),
('000', '<EMAIL>', NULL),
('000', '<EMAIL>', '<EMAIL>'),
('000', NULL, '<EMAIL>')
```
Then, use a common table expression to combine values from `email1` and `email2` using `union`.
Note that `union` will remove duplicate values so you will get a distinct list of emails for each vat value:
```
;WITH CTE AS
(
SELECT VAT, Email1 As Email
FROM @T
UNION
SELECT VAT, Email2
FROM @T
)
```
Then use `for xml path` to get a comma delimited list from the email column of the cte (that will ignore the `null` values), and `stuff` to remove the first comma:
```
SELECT DISTINCT VAT,
(
SELECT STUFF(
(SELECT ',' + Email
FROM CTE t1
WHERE t0.VAT = t1.VAT
FOR XML PATH(''))
, 1, 1, '')
) As Emails
FROM CTE t0
```
Results:
```
VAT Emails
000 <EMAIL>,<EMAIL>,<EMAIL>,<EMAIL>
```
Upvotes: 2 <issue_comment>username_2: Here's another option, but the above might be faster.
```
DECLARE @TBL TABLE(VAT varchar(10), Email1 varchar(50), Email2 varchar(50))
INSERT INTO @TBL select '000','<EMAIL>','<EMAIL>'
INSERT INTO @TBL select '000','<EMAIL>',''
INSERT INTO @TBL select '000','<EMAIL>','<EMAIL>'
INSERT INTO @TBL select '000','<EMAIL>','<EMAIL>'
INSERT INTO @TBL select '001','<EMAIL>','<EMAIL>'
INSERT INTO @TBL select '001','<EMAIL>','<EMAIL>'
INSERT INTO @TBL select '001','<EMAIL>','<EMAIL>'
INSERT INTO @TBL select '001','<EMAIL>','<EMAIL>'
INSERT INTO @TBL select '001',NULL,'<EMAIL>'
SELECT VAT, '' + REVERSE(STUFF(REVERSE(( select x.Email + ','
FROM (
select VAT, Email1 as Email
from @TBL T2
WHERE T2.VAT = T1.VAT
AND ISNULL(Email1,'') > ''
GROUP BY VAT, EMAIL1
union
select VAT, Email2 as Email
from @TBL T3
WHERE T3.VAT = T1.VAT
AND ISNULL(Email2,'') > ''
GROUP BY VAT, EMAIL2
) x
FOR XML PATH('')
)), 1, 1, '' ) ) + '' as Email
from @TBL T1
GROUP by T1.VAT
```
Results:
```
VAT | Email
000 | <EMAIL>,<EMAIL>,<EMAIL>,<EMAIL>,<EMAIL>
001 | <EMAIL>,<EMAIL>,<EMAIL>,<EMAIL>,<EMAIL>,<EMAIL>
```
Upvotes: 0 |
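If you happen to be on SQL Server 2017 or newer (an assumption — the version isn't stated), `STRING_AGG` shortens both approaches considerably (`YourTable` stands in for the actual table name, which wasn't given):

```
SELECT VAT, STRING_AGG(Email, ',') AS Emails
FROM (
    SELECT VAT, Email1 AS Email FROM YourTable
    UNION                       -- UNION also removes duplicate emails per VAT
    SELECT VAT, Email2 FROM YourTable
) x
WHERE Email IS NOT NULL
GROUP BY VAT;
```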
2018/03/21 | 733 | 2,666 | <issue_start>username_0: I have a **$character property** in my entity Foo.
The property is an entity itself (AppBundle\Entity\Character).
When I serialize Foo, I don't want to have the whole entity Character to be serialized: I need just the nickname of the Character.
I wrote this in AppBundle\Entity\Foo:
```
/**
*
* @Serializer\VirtualProperty()
* @Serializer\SerializedName("character")
*/
public function getCharacterNickname()
{
return $this->character->getNickname();
}
```
The "**virtual property**" annotation works.
But the "**serializedName**" doesn't, because the result is the following:
```
{
"id": 18,
"characterNickname": "Mr.<NAME>",
"foo": "foo",
"bar": true,
"baz": "baz"
}
```
("characterNickname" instead of just "character", as I asked in the annotation).
The properties "id", "foo", "bar" and "baz" have the annotation @**Serializer\Expose**(). The property "character" doesn't (because I want to serialize THAT property via the VirtualProperty)
What am I missing?
Is it caused by the fact that I want to serialize the property with the name of an existing property?
Ty :)<issue_comment>username_1: Found the solution:
<https://github.com/schmittjoh/serializer/issues/334>
It seems that there's an error in the IdenticalPropertyNamingStrategy file of the library.
Upvotes: 3 [selected_answer]<issue_comment>username_2: For people using symfony 3 with those jms versions :
```
//composer.json
"jms/serializer-bundle": "2.4.4",
"jms/serializer": "1.10.0",
"jms/metadata": "1.7.0",
"jms/parser-lib": "1.0.0"
```
I had the same problem, but there is a simple solution.
Let's first start with an example which will not work :
```
/**
* @Serializer\VirtualProperty
* @Serializer\SerializedName("myFirstName")
* @Serializer\Groups({"book:primitives"})
*/
public function firstName()
{
return ($this->getAuthor() !== null) ? $this->getAuthor()->getFirstname() : '';
}
```
The returned JSON will name the property after the function name, even if **@SerializedName** is given.
To make JMS take **@SerializedName** into consideration, due to the JMS method-naming convention, the virtual property function name must start with **get**, like getFirstName, getLastName, etc.
```
/**
* @Serializer\VirtualProperty
* @Serializer\SerializedName("myFirstName")
* @Serializer\Groups({"book:primitives"})
*/
public function getFirstName()
{
return ($this->getAuthor() !== null) ? $this->getAuthor()->getFirstname() : '';
}
```
And then , your json here will contain the propery named **myFirstName**
good luck!
Upvotes: 0 |
2018/03/21 | 530 | 1,872 | <issue_start>username_0: I included the line `using namespace std;` at the top of my code. Now consider the declaration `int a,b,c;`. Is the above code equivalent to `int std::a,std::b,std::c;`? If so, consider the following example:
If I declare (define) a namespace `Hi` after the line `using namespace std;`,
is `Hi` a part of `std`, i.e., am I supposed to use `Hi` as `std::Hi`?
I'm a beginner .<issue_comment>username_1: >
> Is above code an equivalent to `int std::a,std::b,std::c;`
>
>
>
No. A `using namespace` statement means that your code can use members of that namespace without having to qualify them with the namespace's name.
For example, say you have a `#include` statement followed by a `using namespace std;` statement. You can then refer to the `std::string` class as just `string` instead.
Or, say you have a `#include` statement followed by a `using namespace std;` statement. You can then refer to the `std::cin` and `std::cout` objects as just `cin` and `cout`, respectively.
Simply `using` a namespace does not add anything to that namespace. It is a way of bringing content from the specified namespace into the calling namespace.
>
> If I declared(defined) a namespace `Hi` after the line `using namespace std;`. Is `Hi` a part of `std`
>
>
>
No. To do that, `Hi` would have to be declared inside of a `namespace std` block, eg:
```
namespace std {
namespace Hi {
...
}
}
```
But, [that is undefined behavior (except in special cases)](http://en.cppreference.com/w/cpp/language/extending_std), as you are generally not allowed to add things to the `std` namespace.
Upvotes: 4 [selected_answer]<issue_comment>username_2: No, there is no need for `std::Hi`. The variables you declare are not in the `std` namespace. The `std` namespace generally contains common language facilities such as `cout`, `cin`, `string`, etc.
Upvotes: 1 |
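A tiny sketch tying both answers together:

```
#include <iostream>
using namespace std; // brings std names into scope; adds nothing to std

namespace Hi {       // a brand-new namespace, independent of std
    int answer = 42;
}

int main() {
    cout << Hi::answer << '\n'; // qualified as Hi::, not std::Hi::
}
```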
2018/03/21 | 730 | 2,545 | <issue_start>username_0: **Schemas**
```
class Parent:
relationship(ChildA) #One-to-Many
relationship(ChildB, lazy="joined") #One-to-Many
relationship(ChildC, lazy="joined") #One-to-Many
class ChildA:
parent_id
array_of_enums
id
class ChildB:
parent_id
class ChildC:
parent_id
```
**Goal**
Query for `Parent`, `ChildA` pairs in which `ChildA.array_of_enums` contains a subset of enum values.
**Query**
```
session.query(Parent, ChildA.array_of_enums).filter(
Parent.attr == specified_value,
Parent.id == ChildA.parent_id,
ChildA.array_of_enums.contains(enums)
ChildA.attr == specified_value_2
).all()
```
**Question**
SQLAlchemy attaches joinedload options for `ChildB` & `ChildC` and ends up returning a (Parent, ChildA.array_of_enums) row for each child of `ChildB` & `ChildC`. As a result, I'm getting too many `Parent`, `ChildA` pairs (one extra for each of children B & children C).
Is there a way to build the query, such that all children B and children C come in with Parent all together without a separate SQL statement?
Also interestingly, querying for `ChildA.id` (or any other column) as opposed to `ChildA.array_of_enums` does not result in "duplicated" results.<issue_comment>username_1: It's been a while, but I believe this worked for me. The latter 2 conditions in the join can probably be in the filter. I think something else I tried that got me a reasonable solution was separating the array contains check in a subquery and selecting from that subquery.
```
session.query(
Parent,
Child.array_of_enums
).join(
Child, and_(
Child.parent_id == Parent.id,
Child.array_of_enums.contains(enums),
ChildA.attr == specified_value_2
)
).filter(
Parent.attr == specified_value,
).all()
```
Upvotes: 0 <issue_comment>username_2: I finally found a different approach.
I define my Parent/Child relationship like that in my Parent Class:
```
child = relationship('Child', back_populates="parent")
```
I define my Child/Parent relationship like that in my Child Class:
```
parent = relationship("Parent", back_populates="child")
```
I use this query:
```
parents = session.query(Parent).outerjoin(Child, Parent.child).options(contains_eager(Parent.child))
```
The `contains_eager` is used to dig into each parent to find its associated children, like:
```
for parent in parents:
mychilds = parent.child
```
So I have all my Parents, with or without children (as many as they have), and the Parents are not duplicated.
Upvotes: 1 |
2018/03/21 | 1,872 | 5,726 | <issue_start>username_0: In Python I normally use functions like *vstack*, *stack*, etc to easily create a 3D array by stacking 2D arrays one onto another.
Is there any way to do this in C++?
In particular, I have loaded a image into a Mat variable with OpenCV like:
```
cv::Mat im = cv::imread("image.png", 0);
```
I would like to make a 3D array/Mat of N layers by stacking copies of that Mat variable.
**EDIT:** This new 3D matrix has to be "travellable" by adding an integer to any of its components, such that if I am in the position (x1,y1,1) and I add +1 to the last component, I arrive to (x1,y1,2). Similarly for any of the coordinates/components of the 3D matrix.
**SOLVED:** Both answers from @Aram and @Nejc do exactly what expected. I set @Nejc 's answer as the correct one for his shorter code.<issue_comment>username_1: **This answer is in response to the question above of:**
>
> In Python I normally use functions like vstack, stack, etc to easily create a 3D array by stacking 2D arrays one onto another.
>
>
>
This is certainly possible, you can add matrices into a vector which would be your "stack"
For instance you could use a
```
std::vector<cv::Mat>
```
This would give you a vector of mats, each of which would be one slice, and then you could "layer" those by adding more slices to the vector.
If you then want to have multiple stacks you can add that vector into another vector:
```
std::vector<std::vector<cv::Mat>>
```
To add matrix to an array you do:
```
myVector.push_back(matrix);
```
**Edit for question below**
>
> In such case, could I travel from one position (x1, y1, z1) to an immediately upper position doing (x1,y1,z1+1), such that my new position in the matrix would be (x1,y1,z2)?
>
>
>
You'll end up with something that looks a lot like this. If you have a matrix at element 1 in your vector, it doesn't really have any relationship to the element[2] except for the fact that you have added it into that point. If you want to build relationships then you will need to code that in yourself.
[](https://i.stack.imgur.com/qe7RO.png)
Upvotes: 2 <issue_comment>username_2: Based on the question and comments, I think you are looking for something like this:
```
std::vector<cv::Mat> vec_im;
// Inside the for loop:
vec_im.push_back(im);
```
Then, you can access it by:
```
Scalar intensity_1 = vec_im[z1].at<uchar>(y, x);
Scalar intensity_2 = vec_im[z2].at<uchar>(y, x);
```
This assumes that the image is single channel.
Upvotes: 0 <issue_comment>username_3: You can actually create a 3D or ND mat with opencv, you need to use the constructor that takes the [dimensions](https://docs.opencv.org/3.1.0/d3/d63/classcv_1_1Mat.html#a5fafc033e089143062fd31015b5d0f40) as input. Then copy each matrix into (this case) the 3D array
```
#include <opencv2/opencv.hpp>
#include <iostream>

using namespace cv;
using namespace std;

int main() {
    // Dimensions for the constructor... set dims[0..2] to what you want
    int dims[] = {5, 5, 5}; // 5x5x5 3d mat
    Mat m = Mat::zeros(5, 5, CV_8UC1);
    for (size_t i = 0; i < 5; i++) {
        for (size_t k = 0; k < 5; k++) {
            m.at<uchar>(i, k) = i + k;
        }
    }
    // Mat with constructor specifying 3 dimensions with dimension sizes in dims.
    // (Renamed to mat3d: "3DMat" is not a valid C++ identifier.)
    Mat mat3d = Mat(3, dims, CV_8UC1);
    // We fill our 3d mat.
    for (size_t i = 0; i < mat3d.size[0]; i++) {
        for (size_t k = 0; k < mat3d.size[1]; k++) {
            for (size_t j = 0; j < mat3d.size[2]; j++) {
                mat3d.at<uchar>(i, k, j) = m.at<uchar>(k, j);
            }
        }
    }
    // We print it to show the 5x5x5 array.
    for (size_t i = 0; i < mat3d.size[0]; i++) {
        for (size_t k = 0; k < mat3d.size[1]; k++) {
            for (size_t j = 0; j < mat3d.size[2]; j++) {
                std::cout << (int) mat3d.at<uchar>(i, k, j) << " ";
            }
            std::cout << endl;
        }
        std::cout << endl;
    }
    return 0;
}
```
Upvotes: 2 <issue_comment>username_4: The Numpy function `vstack` returns a contiguous array. Any C++ solution that produces vectors or arrays of `cv::Mat` objects does not reflect the behaviour of `vstack` in this regard, becase separate "layers" belonging to individual cv::Mat objects will not be stored in contiguous buffer (unless a careful allocation of underlying buffers is done in advance of course).
I present the solution that copies all arrays into a three-dimensional `cv::Mat` object with a contiguous buffer. As far as the idea goes, this answer is similar to [Aram's answer](https://stackoverflow.com/a/49412544/7519513). But instead of assigning pixel values one by one, I take advantage of OpenCV functions. At the beginning I allocate the matrix which has a size `N X ROWS X COLS`, where `N` is the number of 2D images I want to "stack" and `ROWS x COLS` are dimensions of each of these images.
Then I make `N` steps. On every step, I obtain the pointer to the location of the first element along the "outer" dimension. I pass that pointer to the constructor of temporary Mat object that acts as a kind of wrapper around the memory chunk of size `ROWS x COLS` (but no copies are made) that begins at the address that is pointed-at by pointer. I then use `copyTo` method to copy `i`-th image into that memory chunk. Code for `N = 2`:
```
cv::Mat img0 = cv::imread("image0.png", CV_IMREAD_GRAYSCALE);
cv::Mat img1 = cv::imread("image1.png", CV_IMREAD_GRAYSCALE);
cv::Mat images[2] = {img0, img1}; // you can also use vector or some other container
int dims[3] = { 2, img0.rows, img0.cols }; // dimensions of new image
cv::Mat joined(3, dims, CV_8U); // same element type (CV_8U) as input images
for(int i = 0; i < 2; ++i)
{
uint8_t* ptr = &joined.at<uint8_t>(i, 0, 0); // pointer to first element of slice i
cv::Mat destination(img0.rows, img0.cols, CV_8U, (void*)ptr); // no data copy, see documentation
images[i].copyTo(destination);
}
```
Upvotes: 3 [selected_answer] |
2018/03/21 | 315 | 1,209 | <issue_start>username_0: Why can you still submit a form even though the function returns false?
```js
function submit() {
return false;
}
```
```html
account :
```<issue_comment>username_1: [preventDefault()](https://api.jquery.com/event.preventdefault/) work great to prevent default action like clics or submits..
```
$('form').submit((ev) => {
ev.preventDefault();
})
```
Upvotes: 0 <issue_comment>username_2: I believe your function name conflicts with the native submit action.
I've renamed it below.
```js
function submitter() {
return false;
}
```
```html
account :
```
Also see [Reserved words in JavaScript](http://www.javascripter.net/faq/reserved.htm).
Upvotes: 2 [selected_answer]<issue_comment>username_3: I think you should use a unique identifier on the form and use it to add an event listener in your javascript as below.
Also use event.preventDefault() to stop the default form submission action which works as in the code below. Hope this solves your problem :-)
```js
document.querySelector("#my-form").addEventListener("submit", function(e){
e.preventDefault(); //stop form from submitting
});
```
```html
account :
```
Upvotes: 0 |
2018/03/21 | 875 | 3,238 | <issue_start>username_0: I'm encountering the following error: *"Unexpected error launching Internet Explorer. Protected Mode settings are not the same for all zones. Enable Protected Mode must be set to the same value (enabled or disabled for all zones)."* when opening IE using Selenium WebDriver.
In Java (using selenium-server 3.8.1), I solved this by using:
```
InternetExplorerOptions options = new InternetExplorerOptions();
options.setCapability(InternetExplorerDriver.INTRODUCE_FLAKINESS_BY_IGNORING_SECURITY_DOMAINS, true);
driver = new InternetExplorerDriver(options);
```
How do I do this for Robot Framework (using Java port of SeleniumLibrary: robotframework-seleniumlibrary-3.8.1.0-jar-with-dependencies)?
```
${ie_options}= Create Dictionary InternetExplorerDriver.INTRODUCE_FLAKINESS_BY_IGNORING_SECURITY_DOMAINS=true
Open Browser ${url} ie None None ${ie_options} None
```
I tried the one above but I still encounter the error. Changed it to *ignoreProtectedModeSettings* to no avail. Any ideas?<issue_comment>username_1: I have written **Custom Keyword** which updates the `Windows Registry` to enable `ProtectedMode` for `all Zones`.
Below is **Python** code :
```
from winreg import *
def Enable_Protected_Mode():
"""
# 0 is the Local Machine zone
# 1 is the Intranet zone
# 2 is the Trusted Sites zone
# 3 is the Internet zone
# 4 is the Restricted Sites zone
# CHANGING THE SUBKEY VALUE "2500" TO DWORD 0 ENABLES PROTECTED MODE FOR THAT ZONE.
# IN THE CODE BELOW THAT VALUE IS WITHIN THE "SetValueEx" FUNCTION AT THE END AFTER "REG_DWORD".
"""
try:
keyVal = r'Software\Microsoft\Windows\CurrentVersion\Internet Settings\Zones\1'
key = OpenKey(HKEY_CURRENT_USER, keyVal, 0, KEY_ALL_ACCESS)
SetValueEx(key, "2500", 0, REG_DWORD, 0)
except Exception:
print("Failed to enable protected mode")
```
You can write the same code in Java.[Check here for more help !!!](https://stackoverflow.com/questions/62289/read-write-to-windows-registry-using-java)
Upvotes: 1 <issue_comment>username_2: To do this directly in the Robot Framework:
```
${ie_dc} = Evaluate
... sys.modules['selenium.webdriver'].DesiredCapabilities.INTERNETEXPLORER
... sys, selenium.webdriver
${ieOptions} = Create Dictionary ignoreProtectedModeSettings=${True}
Set To Dictionary ${ie_dc} se:ieOptions ${ieOptions}
Open Browser ${url} ie desired_capabilities=${ie_dc}
```
At some point the ignoreProtectedModeSettings got placed inside the se:ieOptions dictionary within the capabilities dictionary. You can see this if you debug Selenium's Python library, specifically webdriver/remote/webdriver.py and look at the response in `start_session`.
Upvotes: 0 <issue_comment>username_3: I was facing the same issue and tried to use username_1's answer but it did not work. Finally, I was able to find this <https://stackoverflow.com/a/63543398/3297490> and it worked like a charm.
One thing to note however, after running the vbs script I checked in the IE settings and the protected mode settings were still shown the way they were and they did not really come back to the normal levels.
Upvotes: 0 |
2018/03/21 | 997 | 3,855 | <issue_start>username_0: I am trying to save the text contents of a `TextView` when my app closes, so I am trying to save that info in the `onDestroy` method, and then restore it in the `onCreate` method.
I have written the following 2 functions to get rid of the boilerplate of getting and putting a value in the shared prefs:
```
fun MainActivity.putStringInPrefs(prefsFile: String, key: String, value: Any) =
getSharedPreferences(prefsFile, Context.MODE_PRIVATE)
.edit()
.putString(key, value.toString())
.apply()
fun MainActivity.getStringFromPrefs(prefsFile: String, key: String, default: Any = "") : String =
getSharedPreferences(prefsFile, Context.MODE_PRIVATE).getString(key, default.toString())
```
When the app closes this is what gets called:
```
override fun onDestroy() {
super.onDestroy()
log("OnDestroy!")
putStringInPrefs(mainPrefsFile, "lastSelectedItemDescription", textViewItemDetailsTextView.text) }
```
`log` is just a wrapper around `Log.d("", "string")`
And in the `onCreate` this gets called:
```
override fun onCreate(savedInstanceState: Bundle?) {
super.onCreate(savedInstanceState) ; log("OnCreate!")
setContentView(R.layout.activity_main)
// Set up the item details text view
textViewItemDetailsTextView.text = getStringFromPrefs(mainPrefsFile, "lastSelectedItemDescription", detailsText)
```
Problem is that whatever I do, the preferences are not saved and the default value is always returned in `onCreate`. I have tried both apply and commit, with and without clear. No results. What am I doing wrong?
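For what it's worth, `onDestroy()` is not guaranteed to run when the app is killed, so the write may never happen at all. A minimal sketch (reusing the helper from the question) that saves in `onPause()` instead, which Android does guarantee before the process can be killed:

```
override fun onPause() {
    super.onPause()
    // onPause() always runs before the activity can be killed,
    // unlike onDestroy(), so persist the text here.
    putStringInPrefs(mainPrefsFile, "lastSelectedItemDescription",
            textViewItemDetailsTextView.text)
}
```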
2018/03/21 | 1,252 | 3,158 | <issue_start>username_0: I have a matrix (`pred_matrix`, dim = 1e6, 250); the rows are "pixel stacks" of 250 NDVI values of a Landsat scene, from which I did a "fuzzy cmeans" classification with 6 centers (classes), stored in the list `results`. I now want to plot a random subset of each class of the 1e6 rows. This is my quick and dirty code so far:
```
random_index <- floor(runif(10000, 1, 1e6+1))
random_cluster <- results[[6]]$cluster[random_index]
random_pred_matrix <- pred_matrix[random_index, ]
dates_subse_after_pred <- rdn_num[rm_na_pred_df]
random_res <- cbind(random_pred_matrix, random_cluster)
random_res <- t(random_res)
random_res <- cbind(c(dates_subse_after_pred, 1), random_res)
df_1 <- data.frame(random_res[1:250,c(TRUE, random_cluster==1)])
df_2 <- data.frame(random_res[1:250,c(TRUE, random_cluster==2)])
df_3 <- data.frame(random_res[1:250,c(TRUE, random_cluster==3)])
df_4 <- data.frame(random_res[1:250,c(TRUE, random_cluster==4)])
df_5 <- data.frame(random_res[1:250,c(TRUE, random_cluster==5)])
df_6 <- data.frame(random_res[1:250,c(TRUE, random_cluster==6)])
df_1.long <- melt(df_1, id.vars = 1)
df_1.long$X1 <- as.Date(df_1.long$X1)
df_2.long <- melt(df_2, id.vars = 1)
df_2.long$X1 <- as.Date(df_2.long$X1)
df_3.long <- melt(df_3, id.vars = 1)
df_3.long$X1 <- as.Date(df_3.long$X1)
df_4.long <- melt(df_4, id.vars = 1)
df_4.long$X1 <- as.Date(df_4.long$X1)
df_5.long <- melt(df_5, id.vars = 1)
df_5.long$X1 <- as.Date(df_5.long$X1)
df_6.long <- melt(df_6, id.vars = 1)
df_6.long$X1 <- as.Date(df_6.long$X1)
ggplot(df_1.long) +
geom_line( aes(x = X1, y= value, group = variable), color = "lightblue")
ggplot(df_2.long) +
geom_line( aes(x = X1, y= value, group = variable), color = "blue")
ggplot(df_3.long) +
geom_line( aes(x = X1, y= value, group = variable), color = "lightgreen")
ggplot(df_4.long) +
geom_line( aes(x = X1, y= value, group = variable), color = "green")
ggplot(df_5.long) +
geom_line( aes(x = X1, y= value, group = variable), color = "pink")
ggplot(df_6.long) +
geom_line( aes(x = X1, y= value, group = variable), color = "red")
```
After this I have just hit the export button in RStudio six times and inserted it all into a Word document...
Is there a way to do this in a loop? Or even produce a final pdf containing the 6 plots?<issue_comment>username_1: If you are using Rstudio I would recommend writing your code in a Rmarkdown file and then exporting to pdf directly.
Upvotes: -1 <issue_comment>username_2: Separate file
=============
I think what you are after is having the following six times in your code.
```
ggsave("filename.png", # or pdf if you like
plot = last_plot(), # or give ggplot object name as in myPlot,
width = 5, height = 5,
units = "in", # other options c("in", "cm", "mm"),
dpi = 300)
```
For example,
library(ggplot2)
```
p1 <- ggplot(df_1.long) +
geom_line( aes(x = X1, y= value, group = variable),
color = "lightblue")
ggsave("df1.png", plot = p1, dpi = 300)
```
All in one
==========
If you want **all the six files in one pdf**, then first do
```
pdf("file_name.pdf")
# do your ggplots here
p1
p2
p6
dev.off()
```
Upvotes: 0 |
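To also answer the loop part of the question — a sketch, assuming the six melted data frames are collected in a list named `dfs` (e.g. built from `df_1.long` through `df_6.long`):

```
cols <- c("lightblue", "blue", "lightgreen", "green", "pink", "red")

pdf("clusters.pdf")
for (i in seq_along(dfs)) {
  p <- ggplot(dfs[[i]]) +
    geom_line(aes(x = X1, y = value, group = variable), color = cols[i])
  print(p) # inside a loop, ggplot objects must be print()ed explicitly
}
dev.off()
```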
2018/03/21 | 484 | 1,311 | <issue_start>username_0: I am trying to extract the index values from a dataframe (`df1`) that represent a range of times (start - end) and that encompass the times given in another dataframe (`df2`). My required output is `df3`.
```
df1<-data.frame(index=c(1,2,3,4),start=c(5,10,15,20),end=c(10,15,20,25))
df2<-data.frame(time=c(11,17,18,5,5,22))
df3<-data.frame(time=c(11,17,18,5,5,22),index=c(2,3,3,1,1,4))
```
Is there a tidyverse solution to this?
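A sketch that stays in base R (tidyverse had no non-equi joins at the time; newer dplyr versions add `join_by(between(time, start, end))`, but that is an assumption about your version):

```
df2$index <- sapply(df2$time, function(t)
  df1$index[t >= df1$start & t <= df1$end][1]) # [1] picks the first match on boundaries

df2
```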
2018/03/21 | 500 | 1,943 | <issue_start>username_0: For a project I must use Java 6, so I set my eclipse compiler setting to 1.6 (JDK compliance level).
However, I included `java.nio.file.Files` which is a Java 7 library and I am not getting any complaints. I can ensure that my project specific setting is set to 1.6. I even changed my entire workspace to 1.6 and rebuilt, still no complaints. My colleagues are seeing the complaint on java.nio.files.
Is it because I have a JDK 7 which is recognizing `java.nio.file.Files` even when set to the 1.6 spec?<issue_comment>username_1: *JDK compliance level* is the level of the Java syntax, not the runtime libraries. It will just prevent you from using language features that were introduced in later versions, like try-with-resources, which was introduced in JDK 7.
If you want to develop for JDK6, you need to use JDK6.
Upvotes: 1 <issue_comment>username_2: These are two different things:
* the *compliance* level is about the **syntax** that you can use when writing Java code (respectively about the Java version number that gets put into compiled byte code)
* but the **libraries** that are available to you depend on the **JDK** that your project is using!
In other words: if you truly want to restrict your project to Java 6 libraries, you will have to install a Java 6 JDK on your system, and point to that within your project setup ( most likely, your current project setup makes use of a newer-than-Java-6 JDK ).
And the usual disclaimer: Java 6 has had end of life many years ago. You should do whatever you can to upgrade your setup.
Upvotes: 3 [selected_answer]<issue_comment>username_3: Java 6 is able to interpret `java.nio.file.Files` because there is no special Java 7 syntax involved, in contrast to Java 7 and Java 8 language features (lambda expressions etc.). So you are working on standard libraries. Uninstall the Java 7 JDK and install a Java 6 JDK and you will see that `java.nio.file.Files` is not available anymore.
Upvotes: 1 |
2018/03/21 | 823 | 3,133 | <issue_start>username_0: So, I want the user to be able to build a treeview by himself.
The treeview basically contains two kinds of items:
* item (`MenuItem` inheriting `TreeviewMenuItem`)
* submenu (`MenuSubmenu` inheriting `TreeviewMenuItem`, contains a `List<TreeviewMenuItem>`)
Treeview uses an **ItemsSource** which is a `List<TreeviewMenuItem>`.
User can add submenus and items into the submenus.
There is no limit to the level of nodes.
```
public abstract class TreeviewMenuItem
{
public virtual string Text { get; set; }
public virtual string DisplayName { get => Text; }
public virtual MenuSubmenu ParentMenu { get; set; } = null;
}
public class MenuSubmenu : TreeviewMenuItem
{
public override string DisplayName { get => Text + " [" + Items.Count + "]"; }
public List<TreeviewMenuItem> Items { get; set; }
public MenuSubmenu(MenuSubmenu parent = null)
{
ParentMenu = parent;
Items = new List<TreeviewMenuItem>();
}
}
public class MenuItem : TreeviewMenuItem
{
public MenuItem(MenuSubmenu parent = null)
{
ParentMenu = parent;
}
}
```
Here is an example of a menu the user can create:
[](https://i.stack.imgur.com/d9OOk.png)
When the user has finished building the treeview, it can export it to XML.
The problem is: How can I iterate through all the nodes?
As you can see, since my submenus contain a `List<TreeviewMenuItem>` which can also contain submenus (etc.), I can't use a simple loop through the ItemsSource.
I have no idea how to handle the dynamic number of submenus with all the items they contain...<issue_comment>username_1: First question, why do you need to iterate through items? Just take the base of the tree and serialize it, XML serialization will handle all the children.
It is important to add `XmlInclude` attributes for all derived classes and omit loops using `XmlIgnore`
On your example:
```
[XmlInclude(typeof(MenuSubmenu))]
[XmlInclude(typeof(MenuItem))]
public abstract class TreeviewMenuItem
{
public virtual string Text { get; set; }
public virtual string DisplayName { get => Text; }
[XmlIgnore]
public virtual MenuSubmenu ParentMenu { get; set; } = null;
}
public class MenuSubmenu : TreeviewMenuItem
{
public override string DisplayName { get => Text + " [" + Items.Count + "]"; }
[XmlArrayItem(Type = typeof(TreeviewMenuItem)),
XmlArrayItem(Type = typeof(MenuSubmenu))]
public List<TreeviewMenuItem> Items { get; set; }
public MenuSubmenu(MenuSubmenu parent = null)
{
Items = new List<TreeviewMenuItem>();
}
}
public class MenuItem : TreeviewMenuItem
{
public MenuItem(MenuSubmenu parent = null)
{
ParentMenu = parent;
}
}
```
Upvotes: 1 <issue_comment>username_2: To analyze the tree the simplest method could be to write a recursive method.
Something like this:
```
public void AnalyzeTree(List<TreeviewMenuItem> menuItems)
{
foreach (var menuItem in menuItems)
{
switch (menuItem)
{
case MenuSubmenu submenu:
// TODO: submenu action
AnalyzeTree(submenu.Items);
break;
case MenuItem item:
// TODO: item action
break;
}
}
}
```
Upvotes: 3 [selected_answer] |
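A usage sketch for username_1's serialization approach (an assumption of how it would be called; note that `XmlSerializer` also needs a true parameterless constructor, so a plain `MenuSubmenu()` / `MenuItem()` overload would have to be added alongside the optional-parameter constructors above):

```
// using System.IO;
// using System.Xml.Serialization;

// Sketch: serialize the whole tree (the ItemsSource list) in one call.
var serializer = new XmlSerializer(typeof(List<TreeviewMenuItem>));
using (var writer = new StreamWriter("menu.xml"))
{
    serializer.Serialize(writer, items); // items: List<TreeviewMenuItem>
}
```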
2018/03/21 | 829 | 2,995 | <issue_start>username_0: I'm trying to convert a function writing on jquery to javascript vanilla
I know that `.each()` corresponds to `.forEach()` in pure JavaScript, but I don't understand what I'm missing in my code!
here the jquery code :
```
addClickItems: function(classe) {
    $(classe).each(function (index) {
        $(classe + ":eq(" + (index) + ")").click(function () {
            if (classe === ".droite") {
                // ...
            }
        });
    });
},
```
and here the javascript code :
```
clickImages : function (classe) {
//classe = new Object(diaporama);
Object.keys(classe).forEach(index => {
classe[index].addEventListener("click", function () {
});
});
},
```
thanks for help !<issue_comment>username_1: In the original code `classe` appears to be a jQuery selector. You need to use `document.querySelectorAll()` to search for all the matching elements and iterate over that:
```
document.querySelectorAll(classe).forEach(elt => elt.addEventListener("click", function() {
    // ...
}));
```
Upvotes: 1 <issue_comment>username_2: The **[`jQuery.each()`](http://api.jquery.com/jquery.each/)** method enumerates all the individual DOM elements that are contained within the **[jQuery wrapped set of elements](http://api.jquery.com/jquery/)** that you call it on:
```js
$("div").each(function(index, value){
console.log(index, value);
});
```
```html
one
two
three
```
Your attempt at conversion goes to using `Object.keys(class).forEach`, where **[`Object.keys`](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Object/keys)** is not a set of elements, but keys/properties of an `Object`.
```js
var myObj = {
key1: 10,
key2: true,
key3: "foo"
};
Object.keys(myObj).forEach(function(key, index){
console.log(index, key, myObj[key]);
});
```
So, the two uses of `each` are not analogous.
If you do indeed have DOM elements to enumerate, you need to get them into a JavaScript array and then you can call **[`.forEach()`](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Array/forEach)** on that array.
**NOTE 1:** Most modern browsers allow you to call `.forEach` on node lists/HTML Collections directly, but for compatibility with those browsers that do not, you need to convert the node list/HTML Collection into an array to be sure that the code will work. This is shown below.
**NOTE 2:** Be mindful that the callback function you pass to `jQuery.forEach()` is, itself, passed two arguments: `index` and `value`, where for the vanilla JavaScript `Array.forEach()`, the callback is passed `value` and `index` (reversed order).
```js
var elements = document.querySelectorAll("div"); // Get all the elements into a node list
var elArray = Array.prototype.slice.call(elements); // Convert node list to array
// Now, enumerate the array with .forEach()
elArray.forEach(function(value, index){
console.log(index, value);
});
```
```html
one
two
three
```
Upvotes: 2 |
2018/03/21 | 1,349 | 5,683 | <issue_start>username_0: I did use this documentation:
<https://autofaccn.readthedocs.io/en/latest/advanced/interceptors.html>
to implement Interface Interceptors. To handle my async calls I used the IAsyncInterceptor interface described here:
<https://github.com/JSkimming/Castle.Core.AsyncInterceptor>
The registration code I came up with does look like this:
```
builder.Register(c => new CallResultLoggerInterceptor())
       .Named<IAsyncInterceptor>("log-calls");

builder.RegisterType<AppointmentService>()
       .As<IAppointmentService>()
       .EnableInterfaceInterceptors()
       .InstancePerDependency();
```
where the AppointmentService has an InterceptAttribute.
```
[Intercept("log-calls")]
public class AppointmentService : IAppointmentService
...
```
When I call the container's Build() method, it throws a ComponentNotRegisteredException with the message:
The requested service 'log-calls (Castle.DynamicProxy.IInterceptor)' has not been registered. To avoid this exception, either register a component to provide the service, check for service registration using IsRegistered(), or use the ResolveOptional() method to resolve an optional dependency.
which is correct because I do not implement IInterceptor but IAsyncInterceptor. I guess the problem is the concrete implementation of EnableInterfaceInterceptors in autofac using the "wrong" extension method of the ProxyGenerator - but how can I solve this?
Cheers,
Manuel<issue_comment>username_1: You need to register a named `IInterceptor` for Autofac interceptors to work. You're registering an `IAsyncInterceptor`. That won't work.
Note Autofac has no support for this extended async interceptor extension you're using. If you want to get that to work, it'll require writing a custom adapter of some nature to get it to respond to `IInterceptor`.
Upvotes: 1 <issue_comment>username_2: You can see my answer in the issue of Castle.Core.AsyncInterceptor:
<https://github.com/JSkimming/Castle.Core.AsyncInterceptor/issues/42#issuecomment-592074447>
1. create an adapter
```cs
public class AsyncInterceptorAdaper<TAsyncInterceptor> : AsyncDeterminationInterceptor
    where TAsyncInterceptor : IAsyncInterceptor
{
    public AsyncInterceptorAdaper(TAsyncInterceptor asyncInterceptor)
        : base(asyncInterceptor)
    { }
}
```
2. create your async interceptor
```cs
public class CallLoggerAsyncInterceptor : AsyncInterceptorBase
{
....
}
```
3. relate the interceptor to interface
```cs
[Intercept(typeof(AsyncInterceptorAdaper<CallLoggerAsyncInterceptor>))]
public interface ISomeType
```
4. register to IoC container
```cs
//register adapter
builder.RegisterGeneric(typeof(AsyncInterceptorAdaper<>));
//register async interceptor
builder.Register(c => new CallLoggerAsyncInterceptor(Console.Out));
```
I've made a code sample in <https://github.com/wswind/aop-learn/blob/master/AutofacAsyncInterceptor>
Upvotes: 2 <issue_comment>username_3: I've created my own extension method for registering application services.
This extension method simply prepares the input parameters for the Castle Core `ProxyGenerator`.
```cs
using System;
using System.Collections.Generic;
using System.Linq;
using Castle.DynamicProxy;
using Autofac;
namespace pixi.Extensions
{
public static class AutofacExtensions
{
private static readonly ProxyGenerator _proxyGenerator = new ProxyGenerator();
///
/// Use this extension method to register default interceptors `UnitOfWorkInterceptor`
/// and `LoggingInterceptor` on your application service implementations. If you need custom
/// interceptors that are not part of infrastructure but are part of specific business module then pass
/// in those interceptors in params explicitly.
///
///
///
///
///
///
        public static void RegisterApplicationService<TService, TImplementation>(this ContainerBuilder builder, params Type[] interceptors)
            where TImplementation : class
        {
            ValidateInput<TService>(interceptors);
            builder.RegisterType<TImplementation>().AsSelf();
            builder.Register(c =>
            {
                var service = c.Resolve<TImplementation>();
                var resolvedInterceptors = ResolveInterceptors<TImplementation>(interceptors, c);

                return (TService) _proxyGenerator.CreateInterfaceProxyWithTarget(
                    typeof(TService),
                    service,
                    ProxyGenerationOptions.Default,
                    resolvedInterceptors
                );
            }).As<TService>();
        }

        private static void ValidateInput<TService>(Type[] interceptors)
        {
            if (!typeof(TService).IsInterface)
                throw new ArgumentException("Type must be interface");

            // Checks that every passed type implements IAsyncInterceptor.
            if (interceptors.Any(i => !typeof(IAsyncInterceptor).IsAssignableFrom(i)))
                throw new ArgumentException("Only IAsyncInterceptor types are expected");
        }

        private static IAsyncInterceptor[] ResolveInterceptors<TImplementation>(Type[] interceptors,
            IComponentContext c) where TImplementation : class
        {
            var resolvedInterceptors = new List<IAsyncInterceptor>
            {
                c.Resolve<UnitOfWorkInterceptor>(),
                c.Resolve<LoggingInterceptor>()
            }.Concat(interceptors
                .Where(i => i != typeof(UnitOfWorkInterceptor)
                            && i != typeof(LoggingInterceptor))
                .Select(i => (IAsyncInterceptor) c.Resolve(i))).ToArray();

            return resolvedInterceptors;
        }
}
}
```
I am using Castle Core for unit of work and logging, hence the names `UnitOfWorkInterceptor` and `LoggingInterceptor`. Change these two to your desired defaults. Default interceptors must be registered in this way:
```cs
public class SomeModule: Module
{
protected override void Load(ContainerBuilder builder)
{
        builder.RegisterType<UnitOfWorkInterceptor>().AsSelf();
        builder.RegisterType<LoggingInterceptor>().AsSelf();
        // The generic arguments on the next two lines were stripped from the
        // original post; ISomeService/SomeService are placeholders.
        builder.RegisterApplicationService<ISomeService, SomeService>();
        builder.RegisterType<SomeOtherService>().As<ISomeOtherService>();
}
}
```
In the above code snippet I've also demonstrated the usage of the provided extension method. Doing it this way I get rid of tag interfaces and placing extra attributes on interfaces. That way I can keep my ApplicationService interfaces free of framework/3rd-party library dependencies.
I hope this helps.
Upvotes: 0 |
2018/03/21 | 1,165 | 4,908 | <issue_start>username_0: I have a ViewPager with 3 tabs, in each of these tab I can have 2-3 or 4 sub tabs.
When opening the activity, all the tabs (all the fragments) are loaded. Some of the fragments are asking some permissions (can be same permission for different fragment).
My main problem is that all the permission request pop-ups are displayed when opening this activity, even if the first visible fragment doesn't need any permission.
Is there is a way to ask permission only when the fragment is visible? Or the only solution is to track the click on Tab and the ViewPager OnPageChangeListener?<issue_comment>username_1: You need to register a named `IInterceptor` for Autofac interceptors to work. You're registering an `IAsyncInterceptor`. That won't work.
Note Autofac has no support for this extended async interceptor extension you're using. If you want to get that to work, it'll require writing a custom adapter of some nature to get it to respond to `IInterceptor`.
Upvotes: 1 <issue_comment>username_2: You can see my answer in the issue of Castle.Core.AsyncInterceptor:
<https://github.com/JSkimming/Castle.Core.AsyncInterceptor/issues/42#issuecomment-592074447>
1. create an adapter
```cs
public class AsyncInterceptorAdaper : AsyncDeterminationInterceptor
where TAsyncInterceptor : IAsyncInterceptor
{
public AsyncInterceptorAdaper(TAsyncInterceptor asyncInterceptor)
: base(asyncInterceptor)
{ }
}
```
2. create your async interceptor
```cs
public class CallLoggerAsyncInterceptor : AsyncInterceptorBase
{
....
}
```
3. relate the interceptor to interface
```cs
[Intercept(typeof(AsyncInterceptorAdaper))]
public interface ISomeType
```
4. register to IoC container
```cs
//register adapter
builder.RegisterGeneric(typeof(AsyncInterceptorAdaper<>));
//register async interceptor
builder.Register(c => new CallLoggerAsyncInterceptor(Console.Out));
```
I've made a code sample in <https://github.com/wswind/aop-learn/blob/master/AutofacAsyncInterceptor>
Upvotes: 2 <issue_comment>username_3: I've created my own extension method for registering application services.
This extension method simply prepares input parameter for castle core `ProxyGenerator`.
```cs
using System;
using System.Collections.Generic;
using System.Linq;
using Castle.DynamicProxy;
using Autofac;
namespace pixi.Extensions
{
public static class AutofacExtensions
{
private static readonly ProxyGenerator _proxyGenerator = new ProxyGenerator();
        /// <summary>
        /// Use this extension method to register default interceptors `UnitOfWorkInterceptor`
        /// and `LoggingInterceptor` on your application service implementations. If you need custom
        /// interceptors that are not part of infrastructure but are part of a specific business module, then pass
        /// in those interceptors in params explicitly.
        /// </summary>
        /// <typeparam name="TService"></typeparam>
        /// <typeparam name="TImplementation"></typeparam>
        /// <param name="builder"></param>
        /// <param name="interceptors"></param>
        public static void RegisterApplicationService<TService, TImplementation>(this ContainerBuilder builder, params Type[] interceptors)
            where TImplementation : class
        {
            ValidateInput<TService>(interceptors);
            builder.RegisterType<TImplementation>().AsSelf();
            builder.Register(c =>
            {
                var service = c.Resolve<TImplementation>();
                var resolvedInterceptors = ResolveInterceptors<TImplementation>(interceptors, c);

                return (TService) _proxyGenerator.CreateInterfaceProxyWithTarget(
                    typeof(TService),
                    service,
                    ProxyGenerationOptions.Default,
                    resolvedInterceptors
                );
            }).As<TService>();
        }

        private static void ValidateInput<TService>(Type[] interceptors)
        {
            if (!typeof(TService).IsInterface)
                throw new ArgumentException("Type must be interface");

            if (interceptors.Any(i => !typeof(IAsyncInterceptor).IsAssignableFrom(i)))
                throw new ArgumentException("Only IAsyncInterceptor types are expected");
        }

        private static IAsyncInterceptor[] ResolveInterceptors<TImplementation>(Type[] interceptors,
            IComponentContext c) where TImplementation : class
        {
            var resolvedInterceptors = new List<IAsyncInterceptor>
            {
                c.Resolve<UnitOfWorkInterceptor>(),
                c.Resolve<LoggingInterceptor>()
            }.Concat(interceptors
                .Where(i => i != typeof(UnitOfWorkInterceptor)
                            && i != typeof(LoggingInterceptor))
                .Select(i => (IAsyncInterceptor) c.Resolve(i))).ToArray();

            return resolvedInterceptors;
        }
    }
}
```
I am using Castle Core for unit of work and logging, hence the names `UnitOfWorkInterceptor` and `LoggingInterceptor`. Change these two to your desired defaults. Default interceptors must be registered in this way:
```cs
public class SomeModule: Module
{
protected override void Load(ContainerBuilder builder)
{
        builder.RegisterType<UnitOfWorkInterceptor>().AsSelf();
        builder.RegisterType<LoggingInterceptor>().AsSelf();
        // ISomeService/SomeService are placeholder names for your own service interface and implementation
        builder.RegisterApplicationService<ISomeService, SomeService>();
        builder.RegisterType<SomeOtherService>().As<ISomeOtherService>();
}
}
```
In the above code snippet I've also demonstrated the usage of the provided extension method. Doing it this way, I get rid of tag interfaces and of placing extra attributes on interfaces. That way I can keep my application service interfaces free of framework/3rd-party library dependencies.
I hope this helps.
Upvotes: 0 |
2018/03/21 | 397 | 1,308 | <issue_start>username_0: I am trying to build a RegEx that picks urls that end with "/topic". These urls have a different number of folders so whereas one might be www.example.com/pijamas/topic another could be www.example.com/pijamas/strippedpijamas/topic
What regular expression can I use to do that? My attempt is ^www.example.com/[a-zA-Z][1,]/topic$ but this hasn't worked. Even if it worked I'd like to have a shorter RegEx to do this really.
Any help on this would be much appreciated.
Thank you, A.<issue_comment>username_1: Try this:
```
^www\.example\.com\/[\w\/]*topic$
```
Upvotes: 1 <issue_comment>username_2: You need to make a few changes to your regex. Firstly, the dot (`.`) is a special character and needs to be escaped by prefacing it with a backslash.
Secondly, you probably meant `{1,}` instead of `[1,]` – the latter defines a character class. You can substitute `{1,}` with `+`.
Then there's the fact that your second URL has one more subdirectory, so you need to somehow incorporate a `/` into your regex.
Putting all this together:
```
^www\.example\.com/[a-zA-Z]+(/[a-zA-Z]+)*/topic$
```
To shorten it, you can use the `i` option to match regardless of case, cutting down the two `[a-zA-Z]` to `[a-z]`. Try this online [here](https://regex101.com/r/b9uBlv/1).
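As a quick sanity check, here is a minimal Python sketch running the second regex above against both example URLs plus a non-matching one:

```
import re

# case-insensitive, per the note about the `i` option
pattern = re.compile(r'^www\.example\.com/[a-z]+(/[a-z]+)*/topic$', re.IGNORECASE)

urls = [
    'www.example.com/pijamas/topic',                  # should match
    'www.example.com/pijamas/strippedpijamas/topic',  # should match
    'www.example.com/pijamas',                        # should not match
]

for url in urls:
    print(url, '->', bool(pattern.match(url)))
```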
Upvotes: 0 |
2018/03/21 | 1,093 | 3,229 | <issue_start>username_0: I am trying to use the python module `limmbo` (<https://github.com/HannahVMeyer/limmbo>) with R via the `reticulate` R package. I have successfully installed `limmbo` with Anaconda2. I'm now trying to use the function `limmbo$core$vdbootstrap$LiMMBo$runBootstrapCovarianceEstimation`, as in my code below. When I run the code below, I get an error about converting a float64 to an integer64.
```
```{r}
library(reticulate)
import("limmbo") -> limmbo
```
```
I then run the python code:
```
```{python}
import numpy
from numpy.random import RandomState
from numpy.linalg import cholesky as chol
from limmbo.core.vdsimple import vd_reml
from limmbo.io.input import InputData
random = RandomState(15)
N = 100
S = 1000
P = 3
snps = (random.rand(N, S) < 0.2).astype(float)
kinship = numpy.dot(snps, snps.T) / float(10)
y = random.randn(N, P)
pheno = numpy.dot(chol(kinship), y)
pheno_ID = [ 'PID{}'.format(x+1) for x in range(P)]
samples = [ 'SID{}'.format(x+1) for x in range(N)]
datainput = InputData()
datainput.addPhenotypes(phenotypes = pheno,
phenotype_ID = pheno_ID, pheno_samples = samples)
datainput.addRelatedness(relatedness = kinship,
relatedness_samples = samples)
```
```
The problem arises when I try to run the R function `limmbo$core$vdbootstrap$LiMMBo$runBootstrapCovarianceEstimation`:
```
```{r}
(limmbo$core$vdbootstrap$LiMMBo(py$datainput, timing = TRUE, iterations = 100, S = 2) -> foo)
limmbo$core$vdbootstrap$LiMMBo$runBootstrapCovarianceEstimation(foo, cpus = 1, seed = 12345)
```
Error in py_call_impl(callable, dots$args, dots$keywords) :
TypeError: Cannot cast array from dtype('float64') to dtype('int64') according to the rule 'safe'
Detailed traceback:
File "/Users/frederickboehm/anaconda2/lib/python2.7/site-packages/limmbo/core/vdbootstrap.py", line 96, in runBootstrapCovarianceEstimation
minCooccurrence=minCooccurrence)
File "/Users/frederickboehm/anaconda2/lib/python2.7/site-packages/limmbo/core/vdbootstrap.py", line 353, in __generateBootstrapMatrix
rand_state = np.random.RandomState(seed)
File "mtrand.pyx", line 644, in mtrand.RandomState.__init__
File "mtrand.pyx", line 687, in mtrand.RandomState.seed
Calls: ... eval -> eval -> -> py\_call\_impl -> .Call
Execution halted
```<issue_comment>username_1: First of all, import your numpy module via
`np <- import("numpy", convert = FALSE)`.
And then you can re-create your numpy array with explicit type `int64` by using `reticulate::np_array(datainput, dtype = np$int64)`.
You can learn more about how to manipulate and create your arrays in [this tutorial](https://rstudio.github.io/reticulate/articles/arrays.html).
Hope this helps.
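As a side note, the underlying failure can be reproduced in plain Python; a minimal sketch, assuming a NumPy version from that era whose `RandomState` only accepts integer seeds (see `mtrand.RandomState.seed` in the traceback):

```
import numpy as np

np.random.RandomState(12345)        # fine: integer seed

try:
    np.random.RandomState(12345.0)  # float64 seed, as R numerics arrive by default
except TypeError as e:
    print(e)  # Cannot cast array from dtype('float64') to dtype('int64') ...
```

R numbers are doubles by default, which is why the explicit `int64` conversion above fixes it.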
Upvotes: 3 [selected_answer]<issue_comment>username_2: Yuan's tutorial (link in the answer above) contains suggestions that allowed me to answer the question. Here is my revised R code that, as of now, works:
```
np <- import("numpy", convert = FALSE)
(limmbo$core$vdbootstrap$LiMMBo(datainput, timing = TRUE, iterations = np_array(10, dtype = "int64"), S = np_array(2, dtype = "int64")) -> foo)
limmbo$core$vdbootstrap$LiMMBo$runBootstrapCovarianceEstimation(foo, cpus = np$int(1), seed = np_array(1232, dtype = "int64"))
```
Upvotes: 1 |
2018/03/21 | 1,129 | 3,803 <issue_start>username_0: So, I'm learning Python with the book *"How to think like a computer scientist: Learning with python 3"*, and this came up. My problem is in the last `elif` and `else`: how can we compare two string types with `<`?
Both the `bigger_vocab` list and `book_words` contain strings, and I can't understand why it uses this `<`.
Please help me, and if you need more info I'll try to answer, even though I'm not that good when it comes to explaining.
```
def find_unknowns_merge_pattern(vocab, wds):
#Both the vocab and wds must be sorted. Return a new
#list of words from wds that do not occur in vocab.
result = []
xi = 0
yi = 0
while True:
if xi >= len(vocab):
result.extend(wds[yi:])
return result
if yi >= len(wds):
return result
if vocab[xi] == wds[yi]: # Good, word exists in vocab
yi += 1
elif vocab[xi] < wds[yi]: # Move past this vocab word,
xi += 1
else: # Got word that is not in vocab
result.append(wds[yi])
yi += 1
all_words = get_words_in_book("AliceInWonderland.txt")
t0 = time.clock()
all_words.sort()
book_words = remove_adjacent_dups(all_words)
missing_words = find_unknowns_merge_pattern(bigger_vocab, book_words)
t1 = time.clock()
print("There are {0} unknown words.".format(len(missing_words)))
print("That took {0:.4f} seconds.".format(t1-t0))
```<issue_comment>username_1: The question comes down to "What happens when we compare two strings?". Basically, a string `A` is considered to be 'less than' string `B` if `A` comes before `B` alphabetically. So `ape` is 'less than' `badger`, because `a` comes before `b` in the alphabet.
Because both `vocab` and `wds` are sorted, this loop walks through both lists in step: it keeps track of which word in `wds` we are currently checking, and skips words in `vocab` until either the current `vocab` word matches the current `wds` word (the word is known, so we move on to the next `wds` word) or the current `vocab` word is 'greater' than the current `wds` word (meaning the `wds` word was never found in `vocab`, so it is appended to the result).
For example, if the current word in `wds` is 'badger', you can skip forward through `vocab` until the current word in `vocab` is either greater than 'badger' (in which case 'badger' is not in `vocab` and is added to the result list) or *is* 'badger' (in which case it's known, and you move on to the next `wds` word).
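A tiny worked example of that walk, assuming `find_unknowns_merge_pattern` from the question is already defined and both lists are sorted:

```
vocab = ['ape', 'badger', 'cat']
wds = ['ant', 'badger', 'zebra']

print(find_unknowns_merge_pattern(vocab, wds))
# ['ant', 'zebra'] -- the words in wds that never occur in vocab
```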
Upvotes: 2 <issue_comment>username_2: When you compare two strings, you are checking their *alphabetical order\**:
```
print('A' > 'B') # False
print('AAA' > 'AAB') # False
print('BAA' > 'AAA') # True
```
uppercase letters come before lowercase letters (so any lowercase letter compares greater than any uppercase one):
```
print('z' > 'A') # True
```
and digit characters come before letters (so any letter compares greater than any digit):
```
print('c' > '1') # True
print('B' > '0') # True
```
---
\*What we are actually checking isn't exactly the *alphabetical order*, it's instead the [ASCII value](https://www.asciitable.com/) of each character (where the letters are ordered alphabetically).
you can print the ASCII value of each char with `print(ord('A'))`:
```
print(ord('A')) # 65
print(ord('B')) # 66
print(ord('z')) # 122
print(ord('1')) # 49
print(ord('0')) # 48
```
With these premises, we do expect that *@* (`ord('@') == 64`) is lower than *}* (`ord('}') == 125`):
```
print('@' < '}') # True
```
Upvotes: 0 <issue_comment>username_3: You can use the comparison operators `==,!=,>,<,>=,<=` on strings just like you can for integers or floats. Strings are compared roughly in alphabetical order, character by character, using each character's ASCII value. See [here](http://thepythonguru.com/python-strings/) for more detail.
Upvotes: 2 [selected_answer] |
2018/03/21 | 729 | 1,709 | <issue_start>username_0: Assuming I have a matrix / array / list like `a=[1,2,3,4,5]` and I want to nullify all entries except for the max so it would be `a=[0,0,0,0,5]`.
I'm using `b = [val if idx == np.argmax(a) else 0 for idx,val in enumerate(a)]`, but is there a better (and faster) way (especially for more than 1-dim arrays)?<issue_comment>username_1: Rather than masking, you can create an array of zeros and set the right index appropriately.
**1-D (optimised) Solution**
(Setup) Convert `a` to a 1D array: `a = np.array([1,2,3,4,5])`.
1. To replace *just* one instance of the max
```
b = np.zeros_like(a)
i = np.argmax(a)
b[i] = a[i]
```
2. To replace all instances of the max
```
b = np.zeros_like(a)
m = a == a.max()
b[m] = a[m]
```
---
**N-D solution**
```
np.random.seed(0)
a = np.random.randn(5, 5)
```
```
b = np.zeros_like(a)
m = a == a.max(1, keepdims=True)
b[m] = a[m]
```
```
b
array([[0. , 0. , 0. , 2.2408932 , 0. ],
[0. , 0.95008842, 0. , 0. , 0. ],
[0. , 1.45427351, 0. , 0. , 0. ],
[0. , 1.49407907, 0. , 0. , 0. ],
[0. , 0. , 0. , 0. , 2.26975462]])
```
Works for all instances of `max` per row.
Upvotes: 2 <issue_comment>username_2: You can use `numpy` for an in-place solution. Note that the below method will make *all* matches for the max value equal to 0.
```
import numpy as np
a = np.array([1,2,3,4,5])
a[np.where(a != a.max())] = 0
# array([0, 0, 0, 0, 5])
```
For unique maxima, see [@cᴏʟᴅsᴘᴇᴇᴅ's solution](https://stackoverflow.com/a/49412169/9209546).
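For the more-than-1-dim case the question mentions, the same in-place masking works row-wise; a small sketch assuming you want to keep every instance of each row's max:

```
import numpy as np

a = np.array([[1, 5, 3],
              [9, 2, 9]])

a[a != a.max(axis=1, keepdims=True)] = 0
print(a)
# [[0 5 0]
#  [9 0 9]]
```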
Upvotes: 3 [selected_answer] |
2018/03/21 | 897 | 3,403 | <issue_start>username_0: I want to observe changes of a property with RxJS.
```
interface Test {
required: boolean;
toObserve: boolean;
}
class TestClass {
@Input() subject: Subject<Test>;
registerHandlers() {
this.subject.filter(element => element.required).subscribe(next =>
// Observe a property of every element that was registered
Observable.of(next.toObserve).subscribe(val => {
if (val) {
// DO SOMETHING
} else {
// DO SOMETHING ELSE
}
})
);
}
}
```
I got a subject into which newly created objects are pushed. Several components subscribe on these and should react on different property changes.
In the above example, if `toObserve` is set I want the component to do something. This currently works exactly once: depending on the value the element has when it is registered with `subject.next(element)`, the correct path is executed.
However, as soon as I change the value of `element.toObserve`, nothing happens and the subscription seems to have no effect anymore.<issue_comment>username_1: Sorry, but I reckon you didn't completely understand how subscriptions are meant to be set up.
You have your subject
```
@Input() subject: Subject<Test>;
```
and you want to trigger actions whenever the subject changes. Then put this subscription into your ngOnInit()-method:
```
this.subject.subscribe(value => {
// and here goes your evaluation
if(value.toObserve) {
// do something
} else {
// do something else
}
});
```
and you can go even further and do something like this
```
this.subject.subscribe(value => {
// and here goes your evaluation
if(value.toObserve) {
// do something
} else {
// do something else
}
if(value.required) {
// do something
} else {
// do something else
}
});
```
Upvotes: 1 <issue_comment>username_2: Doing a subscribe into a subscribe is not recommended. I would rather create an operators chain to your subject and subscribe to it:
If you want to perform some side effects in your components depending on a specific property in your stream, you can use the do operator:
```
interface Test {
required: boolean;
toObserve: boolean;
}
class TestClass implements OnInit {
@Input() subject: Subject<Test>;
ngOnInit() {
this.registerHandlers().subscribe();
}
registerHandlers() {
return this.subject
.filter(element => element.required)
.do(next => {
if (next.toObserve) {
// DO SOMETHING
} else {
// DO SOMETHING ELSE
}
});
}
}
```
Upvotes: 2 [selected_answer]<issue_comment>username_3: I do it like this. In this example I used an attribute of type number, but you can use another type, like `Test`.
```
import { Observable, Subject, ReplaySubject } from 'rxjs';
@Injectable({
providedIn: 'root'
})
export class CoinsService {
private total: Subject<number> = new ReplaySubject<number>(1); //This is my attribute
private total$ = this.total.asObservable(); //This is my observable
constructor() {
this.total.next(0); //Init my parameter
}
async add(coins: number) {
// I must subscribe for take the value of attribute
this.total$
.pipe(
take(1) // Guaranty that is the one time and release the subscription
)
.subscribe(async accumate => {
this.total.next(accumate + coins);// Set the new value
});
}
getTotal(): Observable<number> {
return this.total$;//Return this observable for client subscription on every change
}
}
```
That's it.
I hope this helps you.
Upvotes: 0 |
2018/03/21 | 238 | 841 <issue_start>username_0: I want to display a section in my HTML file only if a variable (passed from a Django view) equals "abc". I sort of think that JavaScript can be of help here, but I'm not sure how; I'm new to both. How can this be achieved?<issue_comment>username_1: You can do this directly from the template without involving JavaScript on the client.
```
{% if var == 'abc' %}
Rendered!
{% endif %}
```
Upvotes: 3 [selected_answer]<issue_comment>username_2: If you want to be able to toggle this `div`, you could use the variable to set an attribute value, and with an attribute selector, use CSS to display it.
Maybe something like this
```
<div data-showme="{{ showme }}">Hi there</div>
```
Stack snippet
```css
div[data-showme] {
display: none;
}
div[data-showme='true'] {
display: block;
}
```
```html
<div data-showme="true">Hi there</div>
<div data-showme="false">Hi there back (hidden)</div>
```
Upvotes: 1 |
2018/03/21 | 1,148 | 4,414 | <issue_start>username_0: I have the following JSON
```
ds = [{
"name": "groupA",
"subGroups": [{
"subGroup": 1,
"categories": [{
"category1": {
"value": 10
}
},
{
"category2": {}
},
{
"category3": {}
}
]
}]
},
{
"name": "groupB",
"subGroups": [{
"subGroup": 1,
"categories": [{
"category1": {
"value": 500
}
},
{
"category2": {}
},
{
"category3": {}
}
]
}]
}]
```
I can get a dataframe for all the categories by doing:
```
json_normalize(ds, record_path=["subGroups", "categories"], meta=['name', ['subGroups', 'subGroup']], record_prefix='cat.')
```
This will give me:
```
cat.category1 cat.category2 cat.category3 subGroups.subGroup name
0 {'value': 10} NaN NaN 1 groupA
1 NaN {} NaN 1 groupA
2 NaN NaN {} 1 groupA
3 {'value': 500} NaN NaN 1 groupB
4 NaN {} NaN 1 groupB
5 NaN NaN {} 1 groupB
```
But I don't care about category 2 and category 3 at all. I only care about category 1.
So I'd prefer something like:

```
    cat.category1  subGroups.subGroup    name
0   {'value': 10}                   1  groupA
1  {'value': 500}                   1  groupB
```
Any ideas how I get to this?
And even better, I really want the value of value in category1. So something like:
```
cat.category1.value subGroups.subGroup name
0 10 1 groupA
1 500 1 groupB
```
Any ideas?<issue_comment>username_1: Try using YAML for this purpose; it has `yaml.dump` to write output in a human-readable format, and other functions to rewrite the output as JSON.
Check the basic video tutorial here:
<https://www.youtube.com/watch?v=hSuHnuNC8L4>
Upvotes: -1 <issue_comment>username_2: The problem is that `category1` is not considered a record by `json_normalize`. An informal definition of record is a key in a dictionary that maps to a list of dicts. You can't access `category1` (and therefore `value`) through the `record_path` argument because it doesn't map to a list of dicts.
```
import pandas as pd
df = pd.io.json.json_normalize(ds,
record_path=['subGroups', 'categories'],
errors='ignore',
meta=['name',
['subGroups', 'subGroup'],
],
record_prefix='cat.')
df = df.drop(['cat.category2', 'cat.category3'], axis=1)
for i in range(df.shape[0]):
row = df.at[i, 'cat.category1']
if isinstance(row, dict) and 'value' in row:
df.at[i, 'cat.category1'] = row['value']
else:
df.at[i, 'cat.category1'] = np.nan
# EDIT: if you want to remove rows for which cat.category1 column has NAN values
df = df[pd.notnull(df['cat.category1'])]
```
Output of `df` is the desired form of the dataframe.
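If you'd rather avoid the explicit loop, the same extraction can be written with `apply`; a short equivalent sketch, assuming the same `df` as above:

```
df['cat.category1'] = df['cat.category1'].apply(
    lambda d: d['value'] if isinstance(d, dict) and 'value' in d else np.nan
)
df = df[df['cat.category1'].notnull()]
```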
On the other hand, if your JSON structure looked like this (notice the list brackets around the `value` dict):
```
ds = [{
"name": "groupA",
"subGroups": [{
"subGroup": 1,
"categories": [{
"category1": [{
"value": 10
}]
}]
}]
},
{
"name": "groupB",
"subGroups": [{
"subGroup": 1,
"categories": [{
"category1": [{
"value": 500
}]
}]
}]
}]
```
You would be able to use `json_normalize` like this:
```
df = pd.io.json.json_normalize(ds,
record_path=['subGroups', 'categories', 'category1'],
errors='ignore',
meta=['name',
['subGroups', 'subGroup'],
],
record_prefix='cat.')
```
And you would get this:
```
cat.value name subGroups.subGroup
10 groupA 1
500 groupB 1
```
Upvotes: 2 [selected_answer] |
2018/03/21 | 1,988 | 6,999 | <issue_start>username_0: I'm rendering a Collada (\*.dae) file with `ARKit`. As an overlay of my `ARSCNView` I'm adding a `SKScene` that simply shows a message bubble (without text yet).
Currently, I know how to modify the position of the bubble so that it looks like it's always at the feet of my 3D model. I'm doing it like this:
```
func renderer(_ renderer: SCNSceneRenderer, didRenderScene scene: SCNScene, atTime time: TimeInterval) {
if let overlay = sceneView.overlaySKScene as? BubbleMessageScene {
guard let borisNode = sceneView.scene.rootNode.childNode(withName: "boris", recursively: true) else { return }
let boxWorldCoordinates = sceneView.scene.rootNode.convertPosition(borisNode.position, from:sceneView.scene.rootNode.parent)
let screenCoordinates = self.sceneView.projectPoint(boxWorldCoordinates)
let boxY = overlay.size.height - CGFloat(screenCoordinates.y)
overlay.bubbleNode?.position.x = CGFloat(screenCoordinates.x) - (overlay.bubbleNode?.size.width)!/2
overlay.bubbleNode?.position.y = boxY
}
}
```
However my bubble is always at the *feet* of the 3D model because I can only get the SCNNode position of my model, where it is anchored. I would like it to be at the head of my model.
Is there a way I can get the height of my 3D model, and then its transformed screen coordinates, so no matter where I am with my phone it looks like the bubble message is always next to the head?
[](https://i.stack.imgur.com/kVXnz.jpg)<issue_comment>username_1: You can use `borisNode.boundingBox : (float3, float3)` to calculate the size of the node. You get a tuple of two points; calculate the height by subtracting the y of one point from that of the other. Finally, move your overlay's Y position by the number you get.
Upvotes: 0 <issue_comment>username_2: Each SCNNode has a `boundingBox` property which is the:
>
> The minimum and maximum corner points of the object’s bounding box.
>
>
>
So what this means is that:
>
> Scene Kit defines a bounding box in the local coordinate space using two points identifying its corners, which implicitly determine six axis-aligned planes marking its limits. For example, if a geometry’s bounding box has the minimum corner {-1, 0, 2} and the maximum corner {3, 4, 5}, all points in the geometry’s vertex data have an x-coordinate value between -1.0 and 3.0, inclusive.
>
>
>
If you look in SceneKit Editor you will also be able to see the size of your model in meters (I am saying this simply as a point you can refer to in order to check the calculations):
[](https://i.stack.imgur.com/GApl0.png)
In my example I am using a Pokemon model with the size above.
I scaled the model (which you likely did as well) e.g:
```
pokemonModel.scale = SCNVector3(0.01, 0.01, 0.01)
```
So in order to get the `boundingBox` of the `SCNNode` we can do this:
```
/// Returns The Original Width & Height Of An SCNNode
///
/// - Parameter node: SCNNode
func getSizeOfModel(_ node: SCNNode){
//1. Get The Size Of The Node Without Scale
let (minVec, maxVec) = node.boundingBox
let unScaledHeight = maxVec.y - minVec.y
let unScaledWidth = maxVec.x - minVec.x
print("""
UnScaled Height = \(unScaledHeight)
UnScaled Width = \(unScaledWidth)
""")
}
```
Calling it like so:
```
getSizeOfModel(pokemonModel)
```
Now of course since our SCNNode has been scaled this doesn't help much so obviously we need to take this into account, by re-writing the function:
```
/// Returns The Original & Scaled With & Height On An SCNNode
///
/// - Parameters:
/// - node: SCNode
/// - scalar: Float
func getOriginalAndScaledSizeOfNode(_ node: SCNNode, scalar: Float){
//1. Get The Size Of The Node Without Scale
let (minVec, maxVec) = node.boundingBox
let unScaledHeight = maxVec.y - minVec.y
let unScaledWidth = maxVec.x - minVec.x
print("""
UnScaled Height = \(unScaledHeight)
UnScaled Width = \(unScaledWidth)
""")
//2. Get The Size Of The Node With Scale
let max = node.boundingBox.max
let maxScale = SCNVector3(max.x * scalar, max.y * scalar, max.z * scalar)
let min = node.boundingBox.min
let minScale = SCNVector3(min.x * scalar, min.y * scalar, min.z * scalar)
let heightOfNodeScaled = maxScale.y - minScale.y
let widthOfNodeScaled = maxScale.x - minScale.x
print("""
Scaled Height = \(heightOfNodeScaled)
Scaled Width = \(widthOfNodeScaled)
""")
}
```
Which would be called like so:
```
getOriginalAndScaledSizeOfNode(pokemonModel, scalar: 0.01)
```
Having done this you say you want to position a 'bubble' above your model, which could then be done like so:
```
func getSizeOfNodeAndPositionBubble(_ node: SCNNode, scalar: Float){
//1. Get The Size Of The Node Without Scale
let (minVec, maxVec) = node.boundingBox
let unScaledHeight = maxVec.y - minVec.y
let unScaledWidth = maxVec.x - minVec.x
print("""
UnScaled Height = \(unScaledHeight)
UnScaled Width = \(unScaledWidth)
""")
//2. Get The Size Of The Node With Scale
let max = node.boundingBox.max
let maxScale = SCNVector3(max.x * scalar, max.y * scalar, max.z * scalar)
let min = node.boundingBox.min
let minScale = SCNVector3(min.x * scalar, min.y * scalar, min.z * scalar)
let heightOfNodeScaled = maxScale.y - minScale.y
let widthOfNodeScaled = maxScale.x - minScale.x
print("""
Scaled Height = \(heightOfNodeScaled)
Scaled Width = \(widthOfNodeScaled)
""")
//3. Create A Bubble
let pointNodeHolder = SCNNode()
let pointGeometry = SCNSphere(radius: 0.04)
pointGeometry.firstMaterial?.diffuse.contents = UIColor.cyan
pointNodeHolder.geometry = pointGeometry
//4. Place The Bubble At The Model's Origin, At The Model's Origin + Its Height & At The Z Position
pointNodeHolder.position = SCNVector3(node.position.x, node.position.y + heightOfNodeScaled, node.position.z)
self.augmentedRealityView.scene.rootNode.addChildNode(pointNodeHolder)
}
```
This yields the following result (which I also tested on a few other unfortunate Pokemon as well):
[](https://i.stack.imgur.com/5GkhB.png)
You will probably want to add a bit of 'padding' as well to the calculation, so that the node is a bit higher up than the top of the model e.g:
```
pointNodeHolder.position = SCNVector3(node.position.x, node.position.y + heightOfNodeScaled + 0.1, node.position.z)
```
I am not great at Maths, and this uses an SCNNode for the bubble rather than an `SKScene`, but hopefully it will point you in the right direction...
Upvotes: 4 [selected_answer] |
2018/03/21 | 877 | 3,213 | <issue_start>username_0: Is it possible to rotate tomcat access logs based on size? Going through the [Appendix](https://docs.spring.io/spring-boot/docs/current/reference/html/common-application-properties.html), I couldn't find such an option. These are the only access log options I'm seeing:
```
server.tomcat.accesslog.buffered=true # Whether to buffer output such that it is flushed only periodically.
server.tomcat.accesslog.directory=logs # Directory in which log files are created. Can be absolute or relative to the Tomcat base dir.
server.tomcat.accesslog.enabled=false # Enable access log.
server.tomcat.accesslog.file-date-format=.yyyy-MM-dd # Date format to place in the log file name.
server.tomcat.accesslog.pattern=common # Format pattern for access logs.
server.tomcat.accesslog.prefix=access_log # Log file name prefix.
server.tomcat.accesslog.rename-on-rotate=false # Whether to defer inclusion of the date stamp in the file name until rotate time.
server.tomcat.accesslog.request-attributes-enabled=false # Set request attributes for the IP address, Hostname, protocol, and port used for the request.
server.tomcat.accesslog.rotate=true # Whether to enable access log rotation.
server.tomcat.accesslog.suffix=.log # Log file name suffix.
```<issue_comment>username_1: By default, Tomcat doesn't provide access log rotation based on size, though you can configure hourly/monthly etc. configuration using file date format using `server.tomcat.accesslog.file-date-format`. for example hourly
```
server.tomcat.accesslog.file-date-format=.yyyy-MM-dd.HH
```
If you still need to rotate based on size only, you can extend `Access Log Valve`. check [tomcat docs](https://tomcat.apache.org/tomcat-8.0-doc/config/valve.html#Extended_Access_Log_Valve) or refer [this thread](https://stackoverflow.com/questions/6278614/how-to-rotate-the-tomcat-localhost-log)
Upvotes: 0 <issue_comment>username_2: There is no way to do that with the out of the box Tomcat. But there are a couple of options.
a) Disable tomcat's rotation completely by setting `server.tomcat.accesslog.rotate` as false, then do the rotation using another utility like unix's [logrotate](https://linux.die.net/man/8/logrotate) which does support rotating by size. Use the `copytruncate` option to avoid restarting tomcat.
b) Implement a custom [AccessLogValve](https://tomcat.apache.org/tomcat-8.0-doc/api/org/apache/catalina/valves/AccessLogValve.html) and override the `rotate()` method to customize rotation as you want. Then inject this valve using [addContextValves](https://docs.spring.io/autorepo/docs/spring-boot/1.0.0.RC3/api/org/springframework/boot/context/embedded/tomcat/TomcatEmbeddedServletContainerFactory.html#addContextValves(Valve...)) in TomcatEmbeddedServletContainerFactory (you can find an example of customizing TomcatEmbeddedServletContainerFactory [here](https://stackoverflow.com/questions/43653845/spring-boot-1-3-5-tomcat-access-log-rotation)). You'll also have to remove the default implementation from the list returned by [getValves](https://docs.spring.io/spring-boot/docs/2.0.0.RC1/api/org/springframework/boot/web/embedded/tomcat/TomcatReactiveWebServerFactory.html#getEngineValves--).
Upvotes: 1 |
2018/03/21 | 589 | 1,850 | <issue_start>username_0: For example:
```
ex.com/read?id=1
```
should open
```
ex.com/route.php?action=read&id=1
```
but the URL shouldn't change.<issue_comment>username_1: Try this:
```
RewriteEngine On
RewriteCond %{THE_REQUEST} ^[A-Z]{3,}\s(.*)\?id=(.+)\sHTTP.*$
RewriteRule ^(.*)$ /route.php?action=$1&id=%2 [QSD,R=301,L,NE]
RewriteCond %{QUERY_STRING} ^action=(.*)&id=(.*)$
RewriteRule ^ /%1?id=%2 [QSD,L,NE]
```
First I match requests that contain the query string `id=whatever`, then redirect with the new query string.
Then I match the new URI, which contains the new query string `action=whatever&id=whatever`, and redirect it internally to the same original path.
The **QSD** flag is available in Apache version 2.4.0 and later <https://httpd.apache.org/docs/trunk/rewrite/flags.html>; I added it to discard the previous query string and prevent it from being appended.
Clear the browser cache, then test these rules.
**UPDATE:**
As per your comment, you want to serve `/route.php?action=read&id=1` internally while showing `/read?id=1` in the browser. You could do what starkeen answered, but with a specific query string and a general URI it should look like this:
```
RewriteEngine on
RewriteCond %{QUERY_STRING} ^id=(.*)$
RewriteRule ^(.*)$ /route.php?action=$1 [QSA,L]
```
So if a query string exists and starts with `id=whatever`, `/abc?id=123` will be served internally from `/route.php?action=abc&id=123`, and the previous query string will be appended to the new one.
Upvotes: 3 [selected_answer]<issue_comment>username_2: You can use the following `RewriteRule` in your `/.htaccess` :
```
RewriteEngine on
RewriteRule ^read/?$ /route.php?action=read [QSA,L]
```
This will rewrite `/read?id=1` to `/route.php?action=read&id=1`. You do not need to write `&id=1` in the rule's destination, as QSA automatically adds it to the rewrite destination.
Upvotes: 0 |
2018/03/21 | 373 | 1,144 <issue_start>username_0: When the date was 2018-03-21 19:40, I tried the following code:
```
var date = new Date();
console.log(date);
```
Output :
```
2018-03-21T16:40:53.755Z
```
The server is off by 3 hours, as you can see. I fixed it by adding 3 hours, but I think that's not a good way. How can I fix this problem in a better way?<issue_comment>username_1: Your server is most likely in another time zone.
Upvotes: 0 <issue_comment>username_2: I don't think the date is incorrect, if you look closely at the format it is being printed, it has a `Z` at the end, which [means](https://www.ietf.org/rfc/rfc3339.txt):
>
> A suffix which, when applied to a time, denotes a UTC offset of 00:00;
> often spoken "Zulu" from the ICAO phonetic alphabet representation of
> the letter "Z".
>
>
>
I guess you are in a place offset by 3 hours from UTC.
Node.js uses this format to print Date objects by default, but you can print your local time using [toLocaleString()](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Date/toLocaleString):
```
console.log(date.toLocaleString());
```
Upvotes: 4 [selected_answer] |
2018/03/21 | 492 | 1,724 | <issue_start>username_0: I'm trying to get the server time from the firebase with the following code:
```
let timestamp = ServerValue.timestamp()
let today = NSDate(timeIntervalSince1970: timestamp/1000)
```
But this gives me an error saying:
>
> Binary operator '/' cannot be applied to operands of type '[Any Hashable: Any]' and 'Int'
>
>
>
Why doesn't `ServerValue.timestamp()` return a TimeInterval value?
How can I get the server's local time?<issue_comment>username_1: I'm not sure how to get the timestamp from ServerValue, but the error is being thrown because the `convenience init(timeIntervalSince1970 secs: TimeInterval)` you are using to create the date expects the secs as a TimeInterval, which is a Double, and to produce that you are trying to divide a type for which `/` is not permitted (`timestamp/1000`).
Below is an example of how your double value should be to get it working
```
let unixTimestamp = 1480134638.0
let date = Date(timeIntervalSince1970: unixTimestamp)
print(date) -> "2016-11-26 04:30:38 +0000\n"
```
Upvotes: 0 <issue_comment>username_2: You can't use the `ServerValue.TIMESTAMP` to get the Firebase's server time.
It maps to the Firestore timestamp value for the server time when you write it to a document.
If you want the server time, there are two ways.
1. Write the `ServerValue.TIMESTAMP` to a temporary doc in Firestore and read it from there.
2. Use a Google Cloud function to get an instance of `Date()` object as a string in `response.end()` method. Create an HTTP endpoint like this. You can then use an AJAX request to get the result.
`const app = (req, res) => {
// this will return the Firebase server time
res.send(new Date());
};`
Upvotes: 2 [selected_answer] |
2018/03/21 | 887 | 2,228 | <issue_start>username_0: My input file contains a multithreaded output from an API and looks like:
```
i-12a: True^M
i-4e6: False^M
SSH error 1.1.1.1 i-678 i-9we: True^M
i-890: True^M
SSH error 172.16.31.10 i-001 i-007: False^M
SSH error None i-001 i-007: False^M
i-1ae: True^Mi-3644h: True^M
```
The SSH Error will always be present in a new line.
My goal is to get all occurrences of lines having `SSH error`, keeping only the leading `SSH error <IP> i-*` part of each line, and ignoring lines that contain `None`.
e.g. For the above input the output should be like below:
```
SSH error 1.1.1.1 i-678
SSH error 172.16.31.10 i-001
```
I am trying to achieve my task using:
```
SSH_ERR="SSH error"
ssh_err=$(awk -v ERROR="$SSH_ERR" '$0~ERROR' <<< $(awk '$0 !~ /None/' < ))
```
But it is giving output like below:
```
i-123: True^M i-456: False^M SSH error 1.1.1.1 i-678 i-987: True^M i-890: True^M SSH error 172.16.31.10 i-001 i-007: False^M SSH error None i-001 i-007: False^M
```
Also, when posting an answer, can you use a bash variable (like `$SSH_ERR` below) instead of the literal `SSH error`?
Help me out.<issue_comment>username_1: You seem to have control-M (carriage return) characters in your Input_file, so I am adding a solution that removes them too and then prints the expected output. The following `awk` may help:
```
awk '{gsub(/\r/,"")} /SSH error/{print $1,$2,$3}' Input_file
```
In case you want to pass a shell variable's value into the `awk` code and search for it, use the following:
```
SSH_ERR="SSH error"
awk -v ssh_error="$SSH_ERR" '{gsub(/\r/,"")} $0 ~ ssh_error{print $1,$2,$3,$4}' Input_file
```
OR
```
SSH_ERR="SSH error"
awk -v ssh_error="$SSH_ERR" '{gsub(/\r/,"")} $0 ~ ssh_error && $0 !~ /None/{print $1,$2,$3,$4}' Input_file
```
Upvotes: 2 [selected_answer]<issue_comment>username_2: Simple **`awk`** command:
```
awk '/^SSH error/ && !/None/{ print $1,$2,$3,$4 }' file
```
The output:
```
SSH error 1.1.1.1 i-678
SSH error 172.16.31.10 i-001
```
Upvotes: 1 <issue_comment>username_3: You can try this sed
```
sed '/^SSH error/!d;/None/d;s/[^ ]*//5g' infile
```
Upvotes: 1 <issue_comment>username_4: ```
awk '/1\.1\.1\../{print $1,$2,$3,$4}' file
```
output
```
SSH error 1.1.1.1 i-678
SSH error 172.16.31.10 i-001
```
Upvotes: 0 |
2018/03/21 | 3,722 | 10,128 | <issue_start>username_0: This is my first post on this stackoverflow site.
I'm trying to grab the Java versions, Spark versions, pip list and Anaconda list from multiple servers using a bash script.
I have already tried a few attempts, but it doesn't help much.
Please take a look and help me with this.
```
set -x
#!/bin/bash -x
user=`whoami` #checks which user logged in
if [ "$user" != "spark" ]; then
echo "You must be spark to run this script"
exit 1
fi
id=$1
if [ $# = 2 ]; then
ENV=$2
else
ENV=IQA
fi
if [ "$ENV" == "IQA" ]; then
HOST="19.928.282.26 19.928.282.27 19.928.282.20 19.928.282.31"
elif [ "$ENV" == "DEV" ]; then
HOST="19.928.282.226 19.928.282.27 319.928.282.250 19.928.522.232"
elif [ "$ENV" == "STAGE" ]; then
HOST="19.928.282.286 19.928.282.24 319.928.282.255 19.928.522.233"
elif [ "$ENV" == "PERF" ]; then
HOST="19.928.282.225 19.928.282.25 319.928.282.260 19.928.522.236"
elif [ "$ENV" == "PRODNA" ]; then
HOST="19.928.282.26 19.928.285.27 56.928.282.45 19.928.522.90"
elif [ "$ENV" == "PRODEU" ]; then
HOST="19.928.282.276 19.928.282.56 319.928.282.254 19.928.522.245"
else
echo "You must specify a valid environment... IQA, DEV, STAGE, PERF, PRODN or PRODEU"
exit 1
fi
mkdir -p "/tmp/kumar/${HOSTNAME}_Java_Version_Details"
if [[ $id == javaversion* ]]; then
echo "Grabbing the details of $id"
for HOSTS in ${HOST}; do
ssh ${user}@${HOSTS} "java -version 2>&1 >/dev/null | grep \"java version\" | awk \"{print $3}\"" > /tmp/kumar/${HOSTNAME}_Java_Version_Details/${HOSTS}_java_list.txt
done
exit 0
fi
done
exit 1
elif [[ $id == sparkversion* ]]; then
echo "Grabbing the details of $id"
for HOSTS in ${HOST}; do
ssh ${user}@${HOSTS} "spark-submit --version > /tmp/${HOSTNAME}_spark_versionlist.txt 2>&1" > /tmp/kumar/${HOSTNAME}_spark_versionlist.txt
done
exit 0
fi
done
exit 1
elif [[ $id == condalist* ]]; then
echo "Grabbing the details of $id"
for HOSTS in ${HOST}; do
ssh ${user}@${HOSTS} "/opt/anaconda/bin/conda list > /tmp/${HOSTNAME}_conda_list.txt" > /tmp/kumar/${HOSTNAME}_conda_list.txt
done
exit 0
fi
done
exit 1
elif [[ $id == piplist* ]]; then
echo "Grabbing the details of $id"
for HOSTS in ${HOST}; do
ssh ${user}@${HOSTS} "/opt/anaconda/bin/pip list > /tmp/${HOSTNAME}_pip_list.txt 2> /dev/null" > /tmp/kumar/${HOSTNAME}_pip_list.txt
done
exit 0
fi
done
exit 1
```
I'm trying to make the above script to grab the details from all the nodes of the selected environment and stors it in the masternode (ie.node01) of that environment..
But when i execute the above script it doesnt generate the files from node 01-10 instead it just creates only on node01..<issue_comment>username_1: I took a deeper look at your script and decided to tackle the issues.
* I replaced the `if then elif else fi` with `case` statements.
* I reversed the name of the $HOST and $HOSTS variable. For me $HOSTS is the list of hosts, $HOST is 1 single host in a for loop. Makes more sense to my brain :)
* I obviously could not try the ssh statements, I just reworked the structure around them.
* Remove the first line (`set -x`). Not required if you use `#!/bin/bash -x` anyway.
* Doing with `case` now provides validation of the value of $id and $ENV.
* Fixed some confusion between $HOSTNAME and $HOST. I am not convinced you want the ${HOSTNAME}\_Java\_Version\_Details directory but I left it there.
* Your code had syntax errors.
* Use `>` and `>>` carefully. `>` overwrites the file with the new content, so you loose what was already in there.
Obviously this can be reworked to structure the output files like you need them. So finally here is the code:
```
#!/bin/bash
user=`whoami` #checks which user logged in
if [ "$user" != "spark" ]
then
echo "You must be spark to run this script"
#exit 1
fi
id=$1
if [ $# = 2 ]
then
ENV=$2
else
ENV=IQA
fi
case $ENV in
'IQA')
HOSTS="19.928.282.26 19.928.282.27 19.928.282.20 19.928.282.31"
;;
'DEV')
HOSTS="19.928.282.226 19.928.282.27 319.928.282.250 19.928.522.232"
;;
'STAGE')
HOSTS="19.928.282.286 19.928.282.24 319.928.282.255 19.928.522.233"
;;
'PERF')
HOSTS="19.928.282.225 19.928.282.25 319.928.282.260 19.928.522.236"
;;
'PRODNA')
HOSTS="19.928.282.26 19.928.285.27 56.928.282.45 19.928.522.90"
;;
'PRODEU')
HOSTS="19.928.282.276 19.928.282.56 319.928.282.254 19.928.522.245"
;;
*)
echo "You must specify a valid environment... IQA, DEV, STAGE, PERF, PRODN or PRODEU"
exit 1
esac
# DEBUG echo "$id, $ENV"
output_dir="/tmp/kumar"
output_dir_javaversion="$output_dir/${HOSTNAME}_Java_Version_Details"
if [ ! -d $output_dir_javaversion ]
then
mkdir -p "/tmp/kumar/${HOSTNAME}_Java_Version_Details"
fi
case $id in
'javaversion')
echo "Grabbing the details of $id"
for HOST in ${HOSTS}
do
echo "Java version of $HOST": > $output_dir_javaversion/${HOST}_java_list.txt
ssh ${user}@${HOST} "java -version 2>&1 >/dev/null | grep \"java version\" | awk \"{print $3}\"" >> $output_dir_javaversion/${HOST}_java_list.txt
done
;;
'sparkversion')
echo "Grabbing the details of $id"
for HOST in ${HOSTS}
do
echo "Spark version for $HOST:" > $output_dir/${HOST}_spark_versionlist.txt
ssh ${user}@${HOST} "spark-submit --version > /tmp/${HOSTNAME}_spark_versionlist.txt 2>&1" >> $output_dir/${HOST}_spark_versionlist.txt
done
;;
'condalist')
echo "Grabbing the details of $id"
for HOST in ${HOSTS}
do
echo "Anaconda version for $HOST:" > $output_dir/${HOST}_conda_list.txt
ssh ${user}@${HOSTS} "/opt/anaconda/bin/conda list > /tmp/${HOSTNAME}_conda_list.txt" >> $output_dir/${HOST}_conda_list.txt
done
;;
'piplist')
echo "Grabbing the details of $id"
for HOST in ${HOSTS}
do
echo "PIP information for $HOST:" > $output_dir/${HOST}_pip_list.txt
ssh ${user}@${HOSTS} "/opt/anaconda/bin/pip list > /tmp/${HOSTNAME}_pip_list.txt 2> /dev/null" >> $output_dir/${HOST}_pip_list.txt
done
;;
*)
echo "Invalid value for id. Valid values are javaversion, sparkversion, condalist, piplist"
exit 1
;;
esac
```
Upvotes: 2 <issue_comment>username_2: I have corrected the errors and now the code is ready to run.
The code below will:
Step 1: checks the user is spark or not
=======================================
step 2: gets the inputs from the user(example: ./script2.sh javaversion IQA)
============================================================================
step 3: checks the output directory is exist or not(example: /tmp/kumar/javaversion)
====================================================================================
step 4: Execute the command and store the result in the output directory (example: /tmp/kumar/javaversion/45.468.788.67_java_list.txt)
======================================================================================================================================
step 5: zip the output directory (example: /tmp/kumar/javaversion.zip)
======================================================================
step 6: Delete the directory (example: /tmp/kumar/javaversion/)
===============================================================
```
#!/bin/bash
user=`whoami` #checks which user logged in
if [ "$user" != "spark" ]
then
echo "You must be spark to run this script"
#exit 1
fi
id=$1
if [ $# = 2 ]
then
ENV=$2
else
ENV=IQA
fi
case $ENV in
'IQA')
HOSTS="19.928.282.26 19.928.282.27 19.928.282.20 19.928.282.31"
;;
'DEV')
HOSTS="19.928.282.226 19.928.282.27 319.928.282.250 19.928.522.232"
;;
'STAGE')
HOSTS="19.928.282.286 19.928.282.24 319.928.282.255 19.928.522.233"
;;
'PERF')
HOSTS="19.928.282.225 19.928.282.25 319.928.282.260 19.928.522.236"
;;
'PRODNA')
HOSTS="19.928.282.26 19.928.285.27 56.928.282.45 19.928.522.90"
;;
'PRODEU')
HOSTS="19.928.282.276 19.928.282.56 319.928.282.254 19.928.522.245"
;;
*)
echo "You must specify a valid environment... IQA, DEV, STAGE, PERF, PRODN or PRODEU"
exit 1
esac
output_dir="/tmp/${id}"
if [ ! -d ${output_dir} ]
then
mkdir -p ${output_dir}
fi
case $id in
'javaversion')
echo "Grabbing the details of $id"
for HOST in ${HOSTS}
do
echo "Java version of $HOST": > $output_dir/${HOST}_java_list.txt
ssh ${user}@${HOST} "java -version 2>&1 >/dev/null | grep \"java version\" | awk \"{print $3}\"" >> $output_dir/${HOST}_java_list.txt
zip -r ${id}.zip ${output_dir}
done
;;
'sparkversion')
echo "Grabbing the details of $id"
for HOST in ${HOSTS}
do
ssh ${user}@${HOST} "spark-submit --version" > $output_dir/${HOST}_sparkversion.txt 2>&1
zip -r ${id}.zip ${output_dir}
done
;;
'condalist')
echo "Grabbing the details of $id"
for HOST in ${HOSTS}
do
echo "Anaconda version for $HOST:" > $output_dir/${HOST}_conda_list.txt
ssh ${user}@${HOST} "/opt/anaconda/bin/conda list" >> $output_dir/${HOST}_conda_list.txt
zip -r ${id}.zip ${output_dir}
ls -latr $output_dir/${HOST}_conda_list.txt
done
;;
'piplist')
echo "Grabbing the details of $id"
for HOST in ${HOSTS}
do
echo "PIP information for $HOST:" > $output_dir/${HOST}_pip_list.txt
ssh ${user}@${HOST} "/opt/anaconda/bin/pip list 2> /dev/null" >> $output_dir/${HOST}_pip_list.txt
zip -r ${id}.zip ${output_dir}
done
;;
*)
echo "Invalid value for id. Valid values are javaversion, sparkversion, condalist, piplist"
exit 1
;;
esac
if [ ! -z ${output_dir} ]
then
if [ -d ${output_dir} ]
then
rm -rf ${output_dir}
fi
fi
```
Upvotes: 1 [selected_answer] |
2018/03/21 | 436 | 1,575 | <issue_start>username_0: I have defined object in data of vue component like this:
```
export default {
components: {..}
data: () => {
filter: ({
from: 2017,
to: 2018
}
}),
computed: mapState({
fooObjects: (state) => {
console.log(this.filter) // print undefined
}
}),
...
}
```
Can you tell me how to access my filter object in the computed property, and why is filter undefined? I've initialized it with two years at the start, as you can see. Thanks.<issue_comment>username_1: A couple of things. First, your parentheses do not line up, starting at line `4`. Second, in order to access `this.filter`, your data method must return a JSON object. You have it returning the filter.
The following code should give you access to `this.filter`.
```
export default {
components: {..}
data: () => {
return {
filter: {
from: 2017,
to: 2018
}
}
},
computed: mapState({
fooObjects: (state) => {
console.log(this.filter) // print undefined
}
}),
...
}
```
Upvotes: 0 <issue_comment>username_2: Don't use arrow functions for computed properties; they are bound to the parent context, so `this` will not be the Vue instance as you'd expect. Also, you should return an object from your data method. This works below:
```js
export default {
components: {..},
data () {
return {
filter: {
from: 2017,
to: 2018
}
}
},
computed: {
fooObjects: function () {
return console.log(this.filter)
}
}
}
```
Upvotes: 2 [selected_answer] |
2018/03/21 | 1,496 | 4,493 <issue_start>username_0: I'm trying to wrap my head around this. From what I can understand (having no knowledge of VBA), this code, though basic, should at least put the word I specify into Google's search bar. But it doesn't seem to, so am I missing something, or have I done it completely wrong? Any pointers?
I know it won't hit Enter, as that hasn't been added in yet. It will load the Google page, but that is it.
Ideally, after getting it to search, I want to expand on it to update an internal website with the information from a spreadsheet.
Also, if anyone knows any good places to look up what VBA code means etc., please advise.
```
Sub test()
Dim MyHTML_Element As IHTMLElement
Dim MyURL As String
On Error GoTo Err_Clear
MyURL = "google.com"
Set MyBrowser = New InternetExplorer
MyBrowser.Silent = True
MyBrowser.navigate MyURL
MyBrowser.Visible = True
Do
Loop Until MyBrowser.readyState = READYSTATE_COMPLETE
Set HTMLDoc = MyBrowser.document
ie.document.getElementById("lst-ib").Value = "bbc"
Err_Clear:
If Err <> 0 Then
Err.Clear
Resume Next
End If
End Sub
```<issue_comment>username_1: As at present you are only after the search results for a search term, you could use either of the following, where `searchTerm = "BBC"` is concatenated into the URL. `XMLHTTP60` would be whatever version corresponds to your Excel's XML reference library.
```
Option Explicit
Sub Getinfo2()
Dim http As New XMLHTTP60
Dim html As New HTMLDocument
Dim searchTerm As String
searchTerm = "BBC"
With http
.Open "GET", "https://www.google.co.uk/search?safe=strict&source=hp&ei=Ep2yWvPDN8visAfVwIagBg&q=" & searchTerm & "&oq=" & searchTerm & "&gs_l=psy-ab.3..35i39k1j0i131i67k1l3j0i131k1j0i131i67k1l3j0i67k1j0i131i67k1.1897.1897.0.3045.3.2.0.0.0.0.134.134.0j1.2.0....0...1.1.64.psy-ab..1.2.269.6...135.zC-Z7B8DrM4", False
.send
html.body.innerHTML = .responseText
End With
Dim posts As MSHTML.IHTMLElementCollection 'add stuff to a collection etc
Dim post As MSHTML.IHTMLElement
'......
End Sub
```
Or with IE
```
Public Sub ScrapeIE()
Dim appIE As Object
Dim ihtml As Object
Dim searchTerm As String
searchTerm = "BBC"
Set appIE = CreateObject("internetexplorer.application")
With appIE
.Visible = True
.navigate "https://www.google.co.uk/search?safe=strict&source=hp&ei=Ep2yWvPDN8visAfVwIagBg&q=" & searchTerm & "&oq=" & searchTerm & "&gs_l=psy-ab.3..35i39k1j0i131i67k1l3j0i131k1j0i131i67k1l3j0i67k1j0i131i67k1.1897.1897.0.3045.3.2.0.0.0.0.134.134.0j1.2.0....0...1.1.64.psy-ab..1.2.269.6...135.zC-Z7B8DrM4"
While .Busy = True Or .readyState < 4: DoEvents: Wend
Set ihtml = .document
' .Quit
End With
Set appIE = Nothing
End Sub
```
Upvotes: 3 [selected_answer]<issue_comment>username_2: What are you trying to do? Do Google searches and count the number of hits? Try the script below and give me feedback.
```
Sub Gethits()
Dim url As String, lastRow As Long
Dim XMLHTTP As Object, html As Object, objResultDiv As Object, objH3 As Object, link As Object
Dim start_time As Date
Dim end_time As Date
Dim var As String
Dim var1 As Object
lastRow = Range("A" & Rows.Count).End(xlUp).Row
Dim cookie As String
Dim result_cookie As String
start_time = Time
Debug.Print "start_time:" & start_time
For i = 2 To lastRow
url = "https://www.google.com/search?q=" & Cells(i, 1) & "&rnd=" & WorksheetFunction.RandBetween(1, 10000)
Set XMLHTTP = CreateObject("MSXML2.serverXMLHTTP")
XMLHTTP.Open "GET", url, False
XMLHTTP.setRequestHeader "Content-Type", "text/xml"
XMLHTTP.setRequestHeader "User-Agent", "Mozilla/5.0 (Windows NT 6.1; rv:25.0) Gecko/20100101 Firefox/25.0"
XMLHTTP.send
Set html = CreateObject("htmlfile")
html.body.innerHTML = XMLHTTP.ResponseText
Set objResultDiv = html.getelementbyid("rso")
Set var1 = html.getelementbyid("resultStats")
Cells(i, 2).Value = var1.innerText
DoEvents
Next
end_time = Time
Debug.Print "end_time:" & end_time
Debug.Print "done" & "Time taken : " & DateDiff("n", start_time, end_time)
MsgBox "done" & "Time taken : " & DateDiff("n", start_time, end_time)
End Sub
```
Put your search terms in ColumnA and run the script.
[](https://i.stack.imgur.com/xdFij.jpg)
Upvotes: 0 |
2018/03/21 | 418 | 1,395 | <issue_start>username_0: I have a file in custom\_target.config. I have a shell script that is going to read through this config file. The config file is split up by the name of the file and the target directory I am trying to move it to. I was wondering what the best way is to split line in the file and start moving the files to the respective locations.
```
EXAMPLE_FILE /home/user/example
EXAMPLE_FILE_1 /home/user/example
EXAMPLE_FILE_1 /home/user/example/subfolder
```
Attempt so far:
```
while IFS='' read -r line || [[ -n "$line" ]]; do
for element in $line
do
echo $element
done
#echo "Text read from file: $line"
done < "$1"
```<issue_comment>username_1: Let `read` do the word splitting instead of trusting `$line` not to do anything odd.
```
while read -r key value; do
echo "KEY: $key"
echo "VALUE: $value"
done < "$1"
```
Upvotes: 2 [selected_answer]<issue_comment>username_2: you can use Awk:
----------------
copy files
----------
```
awk '{printf "cp %s %s\n", $1, $2}' custom_target.config
```
move files
----------
```
awk '{printf "mv %s %s\n", $1, $2}' custom_target.config
```
This prints one command string per line (e.g. `cp file_name path`):
`$1` - first field separated by space (file name)
`$2` - second field separated by space (path name)
To actually execute the generated commands, pipe the output to a shell:
```
awk '{printf "cp %s %s\n", $1, $2}' custom_target.config | sh
```
Upvotes: 0 |
2018/03/21 | 651 | 2,326 | <issue_start>username_0: I am attempting to create a reactJS state that has an empty array in it on construction. Once i receive a message, that is a JSON object, I would want to create a key in that array and store that received message as the value under that key.
For example, I have:
array x = []
JSON_obj = {"data":"mydata"}
In normal JavaScript I could say:
x["some_key"] = JSON_obj
I am attempting to accomplish this in ReactJS inside the state.
```js
class App extends Component {
constructor(props) {
super(props);
this.state = {
cusipData: []
};
this.socket = io("adresss");
this.socket.on("ringBuffer", function(data) {
addMessage(JSON.parse(data));
});
const addMessage = data => {
this.setState({ cusipData: this.state.cusipData[data.cusip], data });
console.log("this.state");
console.log(this.state);
};
}
}
```
```html
```<issue_comment>username_1: The following method `addMessage` adds the `data.cusip` parameter to the existing `state.cusipData` array.
```js
class App extends Component {
constructor(props)
{
super(props)
this.state = {
cusipData: [],
};
this.socket = io("adresss")
this.socket.on('ringBuffer', function(data){
addMessage(JSON.parse(data));
});
const addMessage = data => {
// adds the new data.cusip to the existing state array
this.setState({
      cusipData: [...this.state.cusipData, data.cusip]
      });
    };
  }
}
```
Upvotes: 0 <issue_comment>username_2: I would be careful with using `setState` when self-referencing `this.state` in the method as state should remain immutable. This should work.
```
class App extends Component {
constructor(props) {
super(props);
this.state = {
cusipData: []
};
this.socket = io("adresss");
this.socket.on("ringBuffer", function(data) {
addMessage(JSON.parse(data));
});
const addMessage = data => {
this.setState((prevState, props) => {
const newCusipData = prevState.cusipData;
newCusipData[data.cusip] = data;
return {
cusipData: newCusipData
};
});
console.log("this.state");
console.log(this.state);
};
}
}
```
Upvotes: 2 [selected_answer] |
2018/03/21 | 897 | 3,364 | <issue_start>username_0: I have a list of Publication objects:
```
public class Publication
{
public List<string> Authors { get; set; }
}
```
I would like to group my publication objects by their authors using LINQ.
My solution uses a dictionary and looks like this:
```
var dict = new Dictionary<string, List<Publication>>();
foreach (var publication in list)
{
foreach(var author in publication.Authors)
{
if (dict.ContainsKey(author))
{
dict[author].Add(publication);
}
else
{
dict.Add(author, new List<Publication> { publication });
}
}
}
```
However, this seems ugly to me. There must be a simple and elegant one-liner for this problem, but I can't get it straight. I also tried to flatten the list with SelectMany on the Authors property and group afterwards, but that leaves only the author strings and the publication objects are gone.<issue_comment>username_1: With LINQ you can do it like this:
```
var groupedByAuthor = list
.SelectMany(publication =>
publication.Authors.Select(author => new { author, publication }))
.GroupBy(arg => arg.author, keyValue => keyValue.publication)
.ToArray();
```
But this will be a little bit slower.
Upvotes: 3 [selected_answer]<issue_comment>username_2: You can use Linq and some anonymous classes:
```
public class Publication
{
public List<string> Authors { get; set; }
public string Title { get; set; }
}
class Program
{
static void Main()
{
IEnumerable<Publication> publications; // YOUR DATA HERE!
var groupsByAuthor = publications.SelectMany(publication => publication.Authors.Select(author => new { Publication = publication, Author = author })).GroupBy(x => x.Author);
foreach (var gba in groupsByAuthor)
{
Console.WriteLine(gba.Key);
foreach (var pub in gba)
{
Console.WriteLine(pub.Publication.Title);
}
}
Console.ReadLine();
}
}
```
Upvotes: 0 <issue_comment>username_3: You can also do it like this :
```
var dict = pubs.SelectMany(p => p.Authors)
               .Distinct()
               .ToDictionary(
                   a => a,
                   a => pubs.Where(p => p.Authors.Contains(a)).ToList());
```
Not tested but you get the idea.
Upvotes: -1 <issue_comment>username_4: If you're more concerned about clean and expressive code, you may want to try creating an extension method instead of brute-forcing an ugly LINQ query. The code should look something like the following.
```
class Program
{
static void Main(string[] args)
{
List<Publication> test = new List<Publication>();
Dictionary<string, List<Publication>> publicationsByAuthor = test.GroupByAuthor();
}
}
public static class Extensions
{
public static Dictionary<string, List<Publication>> GroupByAuthor(this List<Publication> publications)
{
Dictionary<string, List<Publication>> dict = new Dictionary<string, List<Publication>>();
foreach (var publication in publications)
{
foreach (var author in publication.Authors)
{
if (dict.ContainsKey(author))
{
dict[author].Add(publication);
}
else
{
dict.Add(author, new List<Publication>
{
publication
});
}
}
}
return dict;
}
}
```
Upvotes: 0 <issue_comment>username_5: You might think about separating the case of adding a new author from adding a publication. Turning the code into something like that:
```
var dict = new Dictionary<string, List<Publication>>();
foreach (var publication in list)
{
foreach (var author in publication.Authors)
{
if (!dict.ContainsKey(author))
dict.Add(author, new List<Publication>());
dict[author].Add(publication);
}
}
```
It is not LINQ and not a new solution, but maybe it can help with the elegance.
Upvotes: 0 |
2018/03/21 | 965 | 2,702 <issue_start>username_0: I am stuck with a problem in regex where I need to find all the alphanumeric sequences in a document. A document can have more than one such sequence. I am doing it in Python.
So for example if the document is like "some blah blah blah with id X12354, id 1234Z and id 12P555. All are 50 years old."
So the expected output should be:
X12354
1234Z
12P555
*Summary*: **Both letters and digits must be present in the string; their order and the string's length don't matter. Such a string can occur multiple times in a document, and it can be anywhere.**
I've tried several ways to sort out the regex, but it's becoming confusing every time. Thanks in advance.<issue_comment>username_1: This detects whether at least one letter and one digit exist in every small chunk of the string.
```
import re
from string import punctuation
s = "some blah blah blah with id X12354, id 1234Z and id 12P555. All are 50 years old."
ans = [v for v in re.split("[ " + punctuation + "]", s)
if any(c.isdigit() for c in v) and any(c.isalpha() for c in v)]
['X12354,', '1234Z', '12P555']
```
`re.split("[ " + punctuation + "]", s)` splits with all punctuation and space.
Upvotes: 2 <issue_comment>username_2: You could match between word boundaries and use a positive lookahead to assert an uppercase character and a digit:
[`\b(?=[A-Z0-9]*[A-Z])(?=[A-Z0-9]*[0-9])[A-Z0-9]+\b`](https://regex101.com/r/bzDcJi/1)
That would match:
* `\b` Word boundary
* `(?=` Positive lookahead that asserts what is on the right
+ `[A-Z0-9]`\* Match zero or more times an uppercase character
+ `[A-Z]` Match an uppercase character
* `)` Close positive lookahead
* `(?=` Positive lookahead that asserts what is on the right
+ `[A-Z0-9]*` Match zero or more times an uppercase character
+ `[0-9]` Match a digit
* `)` Close positive lookahead
* `[A-Z0-9]+` Match one or more times an uppercase character or a digit
* `\b` Word boundary
So, in Python, that would be:
```
import re
s = "some blah blah blah with id X12354, id 1234Z and id 12P555. All are 50 years old."
re.findall(r'\b(?=[A-Z0-9]*[A-Z])(?=[A-Z0-9]*[0-9])[A-Z0-9]+\b', s)
```
giving:
```
['X12354', '1234Z', '12P555']
```
Upvotes: 3 [selected_answer]<issue_comment>username_3: Use `re.findall` to get all matches. Use two lookaheads, one for verifying that the match contains a number, another one for verifying that it contains a letter.
```
document = "some blah blah blah with id X12354, id 1234Z and id 12P555. All are 50 years old."
matches = re.findall('(?=[a-z0-9]*[a-z])(?=[a-z0-9]*[0-9])[a-z0-9]+', document, re.IGNORECASE)
print(matches)
```
You can try the regex online [here](https://regex101.com/r/TZnQji/1).
Upvotes: 1 |
2018/03/21 | 1,079 | 3,709 | <issue_start>username_0: When running Mocha, i get the error:
>
> Error: Promise rejected with no or falsy reason
>
>
>
on the promise function i'm trying to write tests for.
I have a class like this:
```
class Parser {
constructor(jobName, jobExchange) {
this.jobName = jobName;
this.jobExchange = this.jobExchange;
}
parse(job) {
return new Promise( (resolve, reject ) => {
let jobJSON = JSON.parse(job.content);
return this._isRelevant(jobJSON) ? resolve(jobJSON) : reject();
});
}
_isRelevant(job) {
return job.exchange_name === this.jobExchange &&
job.__CLASS__ === this.jobName;
}
}
module.exports = new Parser(process.env.AMQP_JOB, process.env.AMQP_EXCHANGE);
```
and then a test like:
```
const parser = require('../../job/Parser');
describe('Parser', () => {
describe('parse', () => {
it('resolves', async () => {
let job = {
content: '{"__CLASS__": "abc","exchange_name": "abc"}'
}
const result = await parser.parse(job);
expect(result).to.equal('promise resolved');
});
// it('rejects', async () => {
// });
});
});
```
and I run my tests from the command line, like: `env AMQP_JOB=abc AMQP_EXCHANGE=abc mocha`
So what am I doing wrong?<issue_comment>username_1: It may be helpful to catch your error.
You can either wrap your promise in a try / catch or use the conventional promise `.then()` / `.catch()`
```
try {
const result = await parser.parse(job);
}catch(err) {
console.log('err', err)
}
```
or
```
await parser.parse(job).then(res => {
console.log(result)
}).catch(err => {
console.log('err', err)
})
```
Upvotes: 0 <issue_comment>username_2: On this line:
```
return this._isRelevant(jobJSON) ? resolve(jobJSON) : reject();
```
You are rejecting a promise with no arguments. You should instead reject it with something like `new Error('job not relevant')`.
Normally, when the promise returned to mocha rejects, mocha will display whatever error you provided. In this case, since you didn't provide one, you're seeing that 'unhelpful' message instead.
Like others, though, I really wonder why you're making the parse method asynchronous at all. :\
Upvotes: 1 <issue_comment>username_3: ```
// asyncpromiserejection.js
class asyncpromise{
constructor(s){
this.s=s
}
PTest(){
var somevar = false;
somevar=this.s;
return new Promise(function (resolve, reject) {
if (somevar === true)
resolve();
else
// throw new Error("Promise rejcted")
reject();
});
}
}
module.exports=asyncpromise
// test.js
const assert=require('chai').assert
const asyncpromise=require('./asyncpromiserejection.js')
describe("asyncTests", () => {
it("handles Promise rejection",async ()=>{
var t=new asyncpromise(false)
await t.PTest().then(function () {
console.log("Promise Resolved");
})
})
});
```
[](https://i.stack.imgur.com/UlebS.png)
Whereas, if you use the `catch` in the test, the promise rejection for an empty promise gets handled, like so:
```
const assert=require('chai').assert
const asyncpromise=require('./asyncpromiserejection.js')
describe("asyncTests", () => {
it("handles Promise rejection",async ()=>{
var t=new asyncpromise(false)
await t.PTest().then(function () {
console.log("Promise Resolved");
}).catch((error)=>{
console.log("Promise rejected")
// assert.equal(error.message,"Promise rejected")
})
})
});
```
[](https://i.stack.imgur.com/mscNG.png)
Upvotes: 0 |
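As a minimal sketch of username_2's point — rejecting with an explicit `Error` gives Mocha a reason to display (the message text is illustrative, and the method slots into the `Parser` class from the question):
```js
parse(job) {
  return new Promise((resolve, reject) => {
    const jobJSON = JSON.parse(job.content);
    this._isRelevant(jobJSON)
      ? resolve(jobJSON)
      : reject(new Error('job is not relevant')); // Mocha will now print this message
  });
}
```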
2018/03/21 | 519 | 2,054 | <issue_start>username_0: Take a look at this code:
```
public class ThreadTest {
public static void main(String[] args) {
new Thread(new Runnable() {
@Override
public void run() {
while(true) {
//some code here
}
}
}).start();
System.out.println("End of main");
}
}
```
Normally, when the end of `main` is reached, the program terminates. But in this example, the program prints "End of main" and then keeps running because the thread is still running. Is there a way that the thread can stop *automatically* when the end is reached, without using something like `while(isRunning)`?<issue_comment>username_1: The thread you are creating is independent and does not depend on the main thread's termination. You can use a `Daemon` thread for this. **Daemon threads will be terminated by the JVM when no other non-daemon threads are running; that includes the main thread of execution as well.**
```
public static void main(String[] args) {
Thread t = new Thread(new Runnable() {
@Override
public void run() {
while (true) {
System.out.println("Daemon thread");
}
}
});
t.setDaemon(true);
t.start();
System.out.println("End of main");
}
```
Upvotes: 4 [selected_answer]<issue_comment>username_2: Make it [a daemon thread](https://docs.oracle.com/javase/7/docs/api/java/lang/Thread.html#setDaemon(boolean)).
>
> `public final void setDaemon(boolean on)`
>
> Marks this thread as either a
> daemon thread or a user thread. **The Java Virtual Machine exits when
> the only threads running are all daemon threads**. This method must be
> invoked before the thread is started.
>
>
>
```
Thread t = new Thread(new Runnable() {
@Override
public void run() {
while(true) {
//some code here
}
}
});
t.setDaemon(true);
t.start();
System.out.println("End of main");
```
Upvotes: 2 |
2018/03/21 | 588 | 2,190 | <issue_start>username_0: I am trying to create a program which calculates prime numbers from 1 to 1000. I believe the code is having trouble either creating or joining the pthreads. I keep getting a core dump error. My main concern is in the second for loop where I am joining the pthreads. Is this the correct way to check if a thread has ended?
I am compiling with `gcc --std=c99 -Wall -Werror -pthread primes.c -o primes` in the terminal.
```
while (test_val <= FINAL){
//loop to check if thread is finished
for(int i = 0; i < THREADS; i++){
if(tid[i] == 0){
pthread_create(&tid[i], NULL, testprime, NULL);
}
}
//wait for thread to end
for(int i = 0; i < THREADS; i++){
        pthread_join(tid[i], NULL);
    }
}
```
This is the error:
Segmentation fault (core dumped)
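For reference, the standard POSIX pattern is to create every thread first and then join each one, with `tid[i]` holding only values that `pthread_create` actually wrote — a minimal sketch (the primality work in `testprime` is left as a placeholder):
```c
#include <pthread.h>
#include <stdio.h>

#define THREADS 4

void *testprime(void *arg) {
    /* ... test a candidate value for primality ... */
    return NULL;
}

int main(void) {
    pthread_t tid[THREADS];
    for (int i = 0; i < THREADS; i++) {
        if (pthread_create(&tid[i], NULL, testprime, NULL) != 0) {
            perror("pthread_create");
            return 1;
        }
    }
    for (int i = 0; i < THREADS; i++) {
        pthread_join(tid[i], NULL); /* blocks until thread i has finished */
    }
    return 0;
}
```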
2018/03/21 | 563 | 1,943 | <issue_start>username_0: I am working on parsing a MapInfo TAB Format file in Java. It consists of a set of four files viz .TAB, .ID, .MAP and .DAT. After searching on web I came across a parser provided by GeoTools at this repository:
<https://github.com/geotools/geotools/blob/master/modules/library/main/src/main/java/org/geotools/data/MapInfoFileReader.java>
When I use this API for parsing the MAPInfo TAB format file bundle, the API throws exception:
>
> "Didn't find a minimum of three control points in the .tab file."
>
>
>
I am using the public `MapInfoFileReader(final File tabfile)` constructor. I have tried different versions of the GeoTools API.
Is there a work around for parsing MapInfo TAB Format file in Java? Or if anyone can provide sample code for using GeoTools' MapInfoFileReader<issue_comment>username_1: The API you are trying to use is specifically for the use of raster tab files for the **GeoTiff** format. The title on the github page specifically mentions this:
>
> GEOT-4619: Support MapInfo TAB files in geotiff format reader
>
>
>
There is a C++ library available to read and write MapInfo TAB files called [MITAB](http://mitab.maptools.org/) which you could integrate into your application. Alternatively, you could use a library such as [GDAL](http://www.gdal.org/) to convert your data into MIF/MID (a plain text MapInfo file format) which you can then parse as text.
Upvotes: 2 <issue_comment>username_2: There is no direct way to parse .TAB files in Java code. First you need to convert the .TAB file into KML (or another format) with GDAL[1](http://www.gdal.org/ogr2ogr.html), using the ogr2ogr command-line tool:
`ogr2ogr -f "KML" "filepath/filename.kml" "filepath/filename.TAB"`
After that you can parse the .kml file in Java, e.g. with the DOM parser[2](https://docs.oracle.com/cd/B14099_19/web.1012/b12024/oracle/xml/parser/v2/DOMParser.html) API.
Upvotes: 3 [selected_answer] |
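A minimal sketch of driving that conversion from Java (assumes GDAL is installed and `ogr2ogr` is on the PATH; the file names are placeholders):
```java
import java.io.IOException;

public class TabToKml {
    public static void main(String[] args) throws IOException, InterruptedException {
        // Shell out to ogr2ogr to convert the .TAB bundle into KML
        Process p = new ProcessBuilder(
                "ogr2ogr", "-f", "KML", "filename.kml", "filename.TAB")
                .inheritIO()   // forward ogr2ogr's console output to this process
                .start();
        int exitCode = p.waitFor();
        System.out.println("ogr2ogr exited with code " + exitCode);
    }
}
```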
2018/03/21 | 299 | 1,267 <issue_start>username_0: We are five students working on the same project. This project was created with NetBeans, and we have a repository on GitHub.
I want to use IntelliJ while the four other team members are using the NetBeans IDE. Can I work on this repository using the IntelliJ IDE without breaking the repo? If yes, how can I do it?<issue_comment>username_1: Java source code is stored as plain text. You could have a person on your team using Notepad and it wouldn't make a difference. I will include a few caveats to this statement, however:
1. Don't add any IntelliJ IDEA or Netbeans files to the project. You can add the entire `.idea` folder to your `.gitignore`. I'm not sure if Netbeans has a similar folder.
2. Make sure your team agrees on tabs vs. spaces, indentation level, etc. Both IDEs include tools to format your code, so you want to keep it consistent.
Upvotes: 3 [selected_answer]<issue_comment>username_2: Making the project a Maven/Gradle project would be a great solution. This way, managing the build and dependencies of the project will be independent of the IDE the developers are using.
Upvotes: 1 <issue_comment>username_3: Absolutely! If you have a .gitignore, just ignore the files that the respective IDEs create, e.g. `.idea` for IntelliJ.
Upvotes: 1 |
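A minimal `.gitignore` sketch for a mixed IntelliJ/NetBeans team (these entries cover the commonly ignored IDE and build artifacts; adjust to your project):
```
# IntelliJ IDEA
.idea/
*.iml

# NetBeans
nbproject/private/
nbbuild/
dist/
nbdist/

# Compiled output
build/
*.class
```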
2018/03/21 | 552 | 2,247 | <issue_start>username_0: I would like to add a translation option to our website but other than embedding the google translate widget (which doesn't look great) I can't find how to do this.
I am looking to style it to suit our website similar to what they have done on this site (not ours) <https://www.visitscotland.com/brochures/>
Any help would be much appreciated!<issue_comment>username_1: The typical approach to this is to have the content swap out based on some toggle - as in your example site - or indicator in the url - such as example.uk or example.de.
This would be much more efficient than attempting to translate your content for users with some widget, because it needs to be translated only once rather than every time a user visits the content.
There are also built-in translation features for certain browsers, chrome in particular often offers to translate pages. You can help enhance this a bit by explicitly stating what language your website is in, and then Chrome will offer to translate it to the language of the user's browser for them; there are two main ways to do this:
W3C recommends using the lang and/or xml:lang attributes on the html tag, e.g. `<html lang="en">`.
Google recommends the meta http-equiv tag, e.g. `<meta http-equiv="Content-Language" content="en">`.
Upvotes: 2 [selected_answer]<issue_comment>username_2: You can use Cloud Translation, it's a free and open-source project from Angry Monkey Cloud: <https://github.com/angrymonkeycloud/CloudTranslation>.
You should add a reference to jQuery first, then to the CloudTranslation JavaScript file:
```
```
And add the configuration within the HTML head as follows:
```
{
"Settings": {
"DefaultLanguage": "en",
"TranslatorProvider": "Azure",
"TranslatorProviderKey": "{Your Microsoft Azure Translator Key}",
"UrlLanguageLocation": "Subdirectory"
},
"Languages": [
{
"Code": "en",
"DisplayName": "English"
},
{
"Code": "de",
"DisplayName": "Deutsch"
}
]
}
```
and add your own custom select (dropdown) having the class "CloudTranslationSelect" (you can customize the style of your select the way you want).
You can add your own translations to the configuration, which will reduce the calls to the translator provider.
More information can be found at <https://www.angrymonkeycloud.com/translation>
Upvotes: 0 |
2018/03/21 | 740 | 2,648 | <issue_start>username_0: Is there any way to turn off the option `Use Strict Mode for Redirect URIs` in a Facebook app? It seems that as of March 2018 this property automatically is turned on and is greyed out so cannot be disabled. Facebook seems to disallow authentication unless the exact URL is mentioned in `Valid OAuth Redirect URIs`. This is a problem because the Sitecore Social Connected module seems to pass in a different state parameter in the query string each time you log in. I have tested using the `Redirect URI Validator` in the Facebook app and this confirms that the redirect must be exactly as per `Valid OAuth Redirect URIs`.<issue_comment>username_1: >
> Is there any way to turn off the option `Use Strict Mode for Redirect URIs` in a Facebook app?
>
>
>
NO
==
Due to the security changes made to Facebook, it's no longer possible to turn off this setting.
---
Regarding specifics of Sitecore and the Social Connected module, I found from @CBroe's comments that the `Valid OAuth Redirect URIs` now needs to contain a query string parameter as follows:
```
http://example.com/layouts/Social/Connector/SocialLogin.ashx?type=access
```
previously I just had
```
http://example.com/layouts/Social/Connector/SocialLogin.ashx
```
If you are using HTTPS, you will need to enter the URI with the port number as well i.e.
```
https://example.com:443/layouts/Social/Connector/SocialLogin.ashx?type=access
```
This last point is not related to the recent Facebook app changes.
Upvotes: 5 [selected_answer]<issue_comment>username_2: Same experience, I could not turn it off. What eventually worked for me was the following.
I have a link on my site that starts the login process:
```
https://www.example.com/users/auth/facebook
```
Following this causes my rails app to redirect to
```
https://www.facebook.com/v2.6/dialog/oauth?client_id=1234&redirect_uri=https%3A%2F%2Fwww.example.com%2Fusers%2Fauth%2Ffacebook%2Fcallback&response_type=code&scope=email&state=123456
```
Facebook replies with
```
https://www.example.com/users/auth/facebook/callback?code=abcverylongcodexyz
```
Therefore the URI that needs to be whitelisted is simply "<https://www.example.com/users/auth/facebook/callback>", without the code part.
FWIW, when I moved my site from http to https I needed to update my config/initializers/devise.rb to include
```
config.omniauth :facebook, '1234', '34567', :scope => 'email', :callback_url => 'https://www.example.com/users/auth/facebook/callback'
```
as it was still using the http: protocol in the callback url, and you can't whitelist any URI in that protocol under the current guidelines.
Upvotes: 1 |
2018/03/21 | 959 | 2,866 <issue_start>username_0: So I'm creating an 8-ball program that prompts the user for a question and will then spit out a response.
The project spec states that the answers must be in a txt file, so I created this:
```
file = open("Ball_response.txt","w")
file.write("YES, OF COURSE!")
file.write("WITHOUT A DOUBT, YES")
file.write("YOU CAN COUNT ON IT.")
file.write("FOR SURE!")
file.write("ASK ME LATER")
file.write("I AM NOT SURE")
file.write("I CAN'T TELL YOU RIGHT NOW?")
file.write("I WILL TELL YOU AFTER MY NAP")
file.write("NO WAY!")
file.write("I DON'T THINK SO")
file.write("WITHOUT A DOUBT, NO.")
file.write("THE ANSWER IS CLEARLY NO")
file.close()
```
Then I want to read the list here:
```
import random
# Reading the Ball_response file
def main():
input_file = open('Ball_response.txt', 'r')
line = input_file.readline()
print(line[0])
main()
```
But when I run the program, it only prints out `"Y"`. I want
```
0 - Yes, Of Course
1 - Without a Doubt, yes
2 - You can count on it
```
etc.....
How can I accomplish this? I feel like there is something I'm not understanding<issue_comment>username_1: You should use `print(line)` instead of `print(line[0])`. You are only getting Y because it is the [0]'th element in the string.
Upvotes: 0 <issue_comment>username_2: In order to get each response on a different line you need to change the way you are writing your file.
```
file = open("Ball_response.txt","w")
file.write("YES, OF COURSE!\n")
file.write("WITHOUT A DOUBT, YES\n")
# etc
file.close()
```
Then in your function `main` since you want to print the whole line you need to do `print(line)`.
```
import random
def main():
input_file = open('Ball_response.txt', 'r')
rand_idx = random.randrange(12)
for i,line in enumerate(input_file):
if i == rand_idx:
print(str(i) + " - " + line.strip('\n'))
```
This will print (for example):
>
> 7 - I WILL TELL YOU AFTER MY NAP
>
>
>
Upvotes: 3 [selected_answer]<issue_comment>username_3: You might want to fix your code first.
When writing to the file, add `\n` in each `file.write` call so that every answer goes on a new line, like this:
```
file.write("YES, OF COURSE!\n")
```
**For more efficiency** store all these strings in a list and use `file.writelines(list)`
```
lines = ["YES, OF COURSE!\n","WITHOUT A DOUBT, YES\n","YOU CAN COUNT ON IT.\n"]
file.writelines(lines)
```
For reading a file line by line do this.
```
def main():
input_file = open('Ball_response.txt', 'r')
i = 0
for line in input_file:
print(str(i) + ' ' + line)
i = i+1
```
One can also do this to automatically **enumerate**
```
for i,line in enumerate(input_file):
print(str(i) + ' ' + line)
```
Output will be like
>
> 0 YES, OF COURSE!
>
>
> 1 WITHOUT A DOUBT, YES
>
>
> 2 YOU CAN COUNT ON IT.
>
>
>
Upvotes: 1 |
2018/03/21 | 1,359 | 5,566 | <issue_start>username_0: When using the `AVPlayerViewController`, the user is allowed to select whether the subtitles are on a specific language, `off`, or set to `auto`. Setting the `requiresFullSubtitles` property I can force the display of subtitles, but that is not what I want.
Is there a way to detect what the user has selected for the subtitle setting, whether a language is selected, `off`, or `auto`?
[](https://i.stack.imgur.com/1o77j.png)<issue_comment>username_1: You can grab the currently selected language options & also grab the language info when it's used to set the subtitle or audio track, as described in "[Adding Subtitles and Alternative Audio Tracks.](https://developer.apple.com/documentation/avfoundation/media_assets_playback_and_editing/adding_subtitles_and_alternative_audio_tracks)"
Available subtitles & audio tracks are found in an array of `availableMediaCharacteristics` for the video asset.
They're grouped in an `AVMediaSelectionGroup` by whether they are `AVMediaCharacteristicAudible` or `AVMediaCharacteristicLegible` ...
The [currently selected option](https://developer.apple.com/documentation/avfoundation/avmediaselection/1389197-selectedmediaoption) is found by:
```
func selectedMediaOption(in mediaSelectionGroup: AVMediaSelectionGroup) -> AVMediaSelectionOption?
```
It could return `nil` so "none," or it would return whatever language is selected. So you could set up some custom 'didChange' listener on that property. Doesn't seem to be any sort of publicly available notification for this, so you'd have to make your own.
Whenever you would select/set the subtitle option on the player, you could capture and use that same information to do whatever it is you intend to do with it:
```
if let group = asset.mediaSelectionGroup(forMediaCharacteristic: AVMediaCharacteristicLegible) {
let locale = Locale(identifier: "es-ES")
let options =
AVMediaSelectionGroup.mediaSelectionOptions(from: group.options, with: locale)
if let option = options.first {
/*** DO WHATEVER YOU WANT HERE AFTER CAPTURING THE LANGUAGE SELECTION & RETRIEVING AN AVAILABLE SUBTITLE ***/
playerItem.select(option, in: group)
}
}
```
Upvotes: 2 [selected_answer]<issue_comment>username_2: mc01's answer is correct, but if you want a cut-and-paste solution for Swift 4, here is what is ended up with:
```
var selectedSubtitleLocale: Locale?
fileprivate func detectSubtitleLanguage() {
var locale: Locale?
if let playerItem = player?.currentItem,
let group = playerItem.asset.mediaSelectionGroup(forMediaCharacteristic: AVMediaCharacteristic.legible) {
let selectedOption = playerItem.currentMediaSelection.selectedMediaOption(in: group)
locale = selectedOption?.locale
}
selectedSubtitleLocale = locale
}
```
Upvotes: 2 <issue_comment>username_3: There is a notification [AVPlayerItemMediaSelectionDidChangeNotification](https://developer.apple.com/documentation/avfoundation/avplayeritemmediaselectiondidchangenotification?language=objc) appeared in iOS 13 and tvOS 13
You can subscribe your AVPlayerItem to this notification:
```
if (@available(iOS 13.0, tvOS 13.0, *)) {
[[NSNotificationCenter defaultCenter] addObserver:self
selector:@selector(handleMediaSelectionChange:)
name:AVPlayerItemMediaSelectionDidChangeNotification
object:playerItem];
}
```
And then you can detect what language or subtitle user has selected:
```
- (void)handleMediaSelectionChange:(NSNotification *)notification {
AVPlayerItem *playerItem = (AVPlayerItem *)notification.object;
if([playerItem.asset isKindOfClass:[AVURLAsset class]]){
AVURLAsset *asset = (AVURLAsset *)playerItem.asset;
AVMediaSelectionGroup *audio = [asset mediaSelectionGroupForMediaCharacteristic:AVMediaCharacteristicAudible];
AVMediaSelectionGroup *subtitles = [asset mediaSelectionGroupForMediaCharacteristic:AVMediaCharacteristicLegible];
AVMediaSelectionOption *selectedAudio = [playerItem.currentMediaSelection selectedMediaOptionInMediaSelectionGroup:audio];
AVMediaSelectionOption *selectedSubtitles = [playerItem.currentMediaSelection selectedMediaOptionInMediaSelectionGroup:subtitles];
}
}
```
Upvotes: 2 <issue_comment>username_4: Swift version of @username_3
--------------------------
You can add this observer while configuring session/periodic observers
```
NotificationCenter.default.addObserver(
self,
selector: #selector(didMediaToggle(_:)),
name: AVPlayerItem.mediaSelectionDidChangeNotification,
object: nil
)
@objc func didMediaToggle(_ sender: Notification) {
print("LOGGER: mediaSelectionDidChangeNotification:", sender.description)
let subtitles = player.currentItem?.asset.mediaSelectionGroup(forMediaCharacteristic: .legible)
print("LOGGER: mediaSelectionDidChangeNotification: subtitles:", subtitles)
}
/// this will not be called in case of `CC`, aka only `On` and `Off` case
```
For **CC** you can try this notification, as **CC (Closed Captions)** is completely [different](https://support.apple.com/en-in/guide/iphone/iph3e2e23d1/ios) from **Subtitles**.
```
NotificationCenter.default.addObserver(
self,
selector: #selector(didCaptionsToggle(_:)),
name: UIAccessibility.closedCaptioningStatusDidChangeNotification,
object: nil
)
```
Upvotes: 1 |
2018/03/21 | 1,046 | 3,936 | <issue_start>username_0: With docker, I try to setup a traefik backend using HTTPS port 443, so communication between the traefik container and the app container (apache 2.4) will be encrypted.
I get an `Internal Server Error` if I activate `traefik.protocol=https` and `traefik.port=443` on my Docker container. This issue has been documented here:
<https://github.com/containous/traefik/issues/2770#issuecomment-374926137>
Exactly the same setup works great with `jwilder/nginx-proxy` (a reverse proxy available on Docker Hub), for instance. The certificates on the container (Apache 2.4 running inside) are real signed ones (I installed them on Traefik and on the Apache inside my container). If I request my Apache container directly with https://..., all browsers say the certificate is valid (green). So the certificates in the container are OK.
The question is simple:
Using `InsecureSkipVerify = true` is not safe.
Is there any production-safe way to make a container backend work with the labels `traefik.protocol=https` and `traefik.port=443`, using a certificate issued by a well-known authority (in my case Gandi or Comodo)?
Thanks.<issue_comment>username_1: I guess you may need to add
```
InsecureSkipVerify = true
```
in the main/global section
Please refer to <https://docs.traefik.io/configuration/commons/>, which says:
```
InsecureSkipVerify : If set to true invalid SSL certificates are accepted for backends.
Note: This disables detection of man-in-the-middle attacks so should only be used on secure backend networks.
```
Upvotes: 4 <issue_comment>username_2: I only managed to expose the Kubernetes Dashboard by setting `InsecureSkipVerify = true`. Here is how I added it to the traefik deployment file (last line):
```
spec:
serviceAccountName: traefik-ingress-controller
terminationGracePeriodSeconds: 60
containers:
- image: traefik
name: traefik-ingress-lb
ports:
- name: https
containerPort: 443
args:
- --api
- --kubernetes
- --logLevel=INFO
- --defaultentrypoints=https
- --entrypoints=Name:https Address::443 TLS
- --insecureSkipVerify=true
```
Upvotes: 2 <issue_comment>username_3: The problem for me was `traefik.protocol=https`; this was not necessary to enable https and directly caused the `500`.
Upvotes: 1 <issue_comment>username_4: As mentioned earlier:
>
> That's specifically listed as not a good solution in the question. As of the writing of this comment, Traefik does not support SNI for backend connections, so there's no way to use any kind of certificate without an IP SAN for the backend's IP. –
> <NAME>
> Sep 23 '18 at 23:40
>
>
>
<https://github.com/traefik/traefik/issues/3906> addresses this problem.
Traefik communicates with the backend internally in a node via IP addresses. For those the used certificate is not valid.
There are two options:
1. Communicate via http between Traefik and the backend
2. Use `--insecureSkipVerify=true` to ignore the certificate validation
The first solution is configured at the ingress:
```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: some-ingress
annotations:
traefik.ingress.kubernetes.io/router.entrypoints: websecure
spec:
rules:
- http:
paths:
- path: /
pathType: Prefix
backend:
service:
name: service-name
port:
number: 80
tls:
- secretName: traefik-cert
```
The second solution is to set `--serversTransport.insecureSkipVerify=true` via arg.
Upvotes: 0 <issue_comment>username_5: To enable an Https-Backend-Connection on a certain container, you can use
`- "traefik.http.services.service0.loadbalancer.server.scheme=https"`
as a label on the Docker container.
[Reference on Github](https://github.com/traefik/traefik/issues/5346#issuecomment-530257435)
Upvotes: 0 |
2018/03/21 | 542 | 1,920 | <issue_start>username_0: When trying to make a GraphQL request I get the following [error](https://i.stack.imgur.com/SEVfa.png).
I am running an express server on port 3002. I am also running express-graphql on the express server with the "/graphql" endpoint.
I am using the apollo-client from a react app.
I have tried the following in my express code
`app.use(cors())`
However I am still getting the same error on the client
How do I resolve this CORS issue?<issue_comment>username_1: *In your ExpressJS app on node.js, do the following with your routes:*
```js
app.use(function(req, res, next) {
res.header("Access-Control-Allow-Origin", "*");
res.header("Access-Control-Allow-Headers", "Origin, X-Requested-With, Content-Type, Accept");
next();
});
app.get('/', function(req, res, next) {
// Handle the get for this route
});
app.post('/', function(req, res, next) {
// Handle the post for this route
});
```
Upvotes: 2 <issue_comment>username_2: Make sure `app.use(cors())` comes before you use GraphQL.
Express code
```js
const express = require( `express` );
const graphqlHTTP = require( `express-graphql` );
const cors = require( `cors` );
const app = express();
app.use( cors() );
app.use(
`/graphql`,
graphqlHTTP( {
schema: schema, // point to your schema
rootValue: rootResolver, // point to your resolver
graphiql: true
} )
);
```
Fetch example based on [GraphQL Documentation](https://graphql.org/graphql-js/graphql-clients/)
```js
fetch( url, {
method : `post`,
headers: {
'Content-Type': `application/json`,
'Accept' : `application/json`
},
body: JSON.stringify( {
query: `
{
person {
name
}
}`
} )
} )
.then( response => response.json() )
.then( response => console.log( response ) );
```
Upvotes: 2 |
2018/03/21 | 541 | 1,948 | <issue_start>username_0: I'm learning to build [pdfium project](https://pdfium.googlesource.com/pdfium/). Following [the instructions](https://chromium.googlesource.com/chromium/src/+/master/docs/windows_build_instructions.md#visual-studio) I get 57 dlls in my out\ directory. However I cannot find the pdfium.dll which is supposed to contain functions like `FPDFAvail_GetDocument(), FPDFBitmap_GetBuffer()` etc.
Could anyone of you share your successful experience with building the pdfium library?
2018/03/21 | 462 | 1,644 <issue_start>username_0: PHP 5.6 was working fine on my Mac (Sierra). I tried to upgrade to 7.0, then it stopped working. When I try to open localhost in Safari, it shows a "not connected to server" error.
2018/03/21 | 837 | 3,189 | <issue_start>username_0: I have a JSON field object
```
{path: 'field', children: []}
```
that can be nested like this:
```
$scope.fields = [{path: 'field1', children:
[{path: 'field1.one', children: []},
{path: 'field1.two', children: []}]},
{path: 'field2', children: []}];
```
I am using the following function to iterate among the nested fields to retrieve the field with a specified path:
```
var getField = function(thisPath, theseFields) {
if (arguments.length == 1) {
theseFields = $scope.fields;
}
angular.forEach(theseFields, function(field) {
if (field.path == thisPath) {
console.log('returning: '+JSON.stringify(field));
return field;
}
if (field.children != null) {
return getField(thisPath, field.children);
}
});
};
```
This call
```
console.log('field1.one: '+ JSON.stringify(getField('field1.one')));
```
generates the following logging in the browser console:
```
returning: {"path":"field1.one","children":[]}
field1.one: undefined
```
The target field is found but never returned!
I get the same result with or without the `return` in the method call
```
return getField(thisPath, field.children)
```
What am I missing? See working [plunkr](https://plnkr.co/edit/SjJtlgwRQwSMQKf1qVN1?p=preview).<issue_comment>username_1: ```
var getField = function(thisPath, theseFields) {
var result = null;
if (arguments.length == 1) {
theseFields = $scope.fields;
}
angular.forEach(theseFields, function(field) {
var test;
if (field.path == thisPath) {
console.log('returning: '+JSON.stringify(field));
result = field;
}
if (field.children != null) {
test = getField(thisPath, field.children);
if(test!=null)
result = test;
}
});
return result;
}
```
Upvotes: -1 <issue_comment>username_2: In this solution I throw an exception when the field is found, which immediately breaks the loop.
```js
const fields = [
{
path: 'field1',
children:[
{
path: 'field1.one',
children: [
{
path: 'field1.one.one',
children: [
{
path: 'field1.one.one.one',
children: [ ]
}
]
}
]
},
{
path: 'field1.two',
children: []
}
]
},
{
path: 'field2',
children:[ ]
}
]
getField = ( fields, fieldToFind ) => {
fields.map( ( field ) => {
if ( field.path === fieldToFind ) {
/**
* break map
* avoid continue loop
*/
throw { field: field }
} else {
getField( field.children, fieldToFind );
}
} )
}
try {
getField( fields, 'field1.one.one.one' );
} catch( field ) {
console.log( field );
}
```
Upvotes: -1 |
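The underlying issue in both the question and the answers above is that a `return` inside an `angular.forEach`/`map` callback only returns from the callback, never from `getField` itself. A plain loop lets `return` propagate — a minimal sketch, not taken from the answers:
```js
function getField(thisPath, theseFields) {
  for (const field of theseFields) {
    if (field.path === thisPath) return field;           // found at this level
    if (field.children) {
      const found = getField(thisPath, field.children);  // search the subtree
      if (found) return found;                           // bubble the match up
    }
  }
  return undefined; // not found in this subtree
}
```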
2018/03/21 | 1,442 | 4,427 | <issue_start>username_0: I have four elements. I need element number 3 to float left, while the remaining elements float right.
[](https://i.stack.imgur.com/EykzV.png)
Additional Factors
------------------
* Columns are not equal width.
* Cannot use `position: absolute`, since the height of the children is unknown and it's important for the wrapper's height to stay relative to them.
* Cannot use `display: grid`, since the code needs to support IE10.
* I cannot reorder the elements in HTML. I can, however, include various wrappers around certain elements, as long as they're in order. The reason is because elements 1, 2, 3 and 4 are supposed to show up in order on mobile devices, i.e. responsive.
What I've tried so far
----------------------
* Using Float Right/Left - <https://jsfiddle.net/s4542jz2/2/>
* Using Float Right/None - <https://jsfiddle.net/zfLspshq/2/>
* Using Inline-block - <https://jsfiddle.net/kq8dbn5s/2/>
* Using Flexbox - <https://jsfiddle.net/g3wwdw84/6/>
Update
------
* I've updated the question to say that I cannot reorder the HTML.<issue_comment>username_1: You can use jQuery to get the heights of the four elements and set the height of the wrapper to the height of the higher of the two columns. This solution is based on your flexbox approach. I just changed the height of element "three" a bit to demonstrate the possible different heights of the two columns:
Note: Right now the left column is higher. If you change the height of `.tree` to less than 600px you see that the wrapper gets the height of the right column.
```js
$(document).ready(function() {
var heightthree = $('.three').outerHeight();
var heightone = $('.one').outerHeight();
var heighttwo = $('.two').outerHeight();
var heightfour = $('.four').outerHeight();
var heightright = heightone + heighttwo + heightfour;
if (heightthree > heightright) {
$('.wrapper').css('height', heightthree)
} else {
$('.wrapper').css('height', heightright)
}
})
```
```css
html, body {
margin: 0;
}
.wrapper {
width: 100%;
display: flex;
flex-direction: column;
flex-wrap: wrap;
background: #fa0;
}
.one {
order: 2;
height: 200px;
background: #999;
width: 55%;
}
.two {
order: 3;
height: 200px;
background: #888;
width: 55%;
}
.three {
order: 1;
height: 620px;
background: black;
width: 45%;
color: white;
}
.four {
order: 4;
height: 200px;
background: #777;
width: 55%;
}
```
```html
one
two
three
four
```
And here is a version where the left column is less high than the right one: <https://jsfiddle.net/8m2cjb6r/21/>
Upvotes: 0 <issue_comment>username_2: Is this something you are looking for?
```css
.wrapper {
width: 100%;
color: white;
}
.one {
height: 200px;
background: #999;
width: 55%;
overflow: auto;
}
.two {
height: 200px;
background: #888;
width: 55%;
overflow: auto;
}
.three {
float: left;
height: 600px;
background: black;
width: 45%;
}
.four {
height: 200px;
background: #777;
width: 55%;
overflow: auto;
}
```
```html
three
one
two
four
```
Upvotes: 0 <issue_comment>username_3: I would do something like this :
<https://jsfiddle.net/s4542jz2/13/>
Split in two different div :
```
three
one
two
four
```
Upvotes: 0 <issue_comment>username_4: I know you mentioned the HTML cannot be changed because you still need the #1, #2, #3, #4 display order on mobile. But you can actually reorder them to differ from the original markup (visually not in the DOM though). Therefore, I would suggest to use the #3, #1, #2, #4 order in the markup to make such layout possible, since you also commented #3 is fixed width. Here is the approach with just CSS float + flexbox + media queries.
**[jsFiddle](https://jsfiddle.net/6s2at2wc/)**
```css
div {
border: 1px solid pink;
}
.wrapper {
display: flex;
flex-direction: column;
}
.one { order: 1; }
.two { order: 2; }
.three { order: 3; }
.four { order: 4; }
@media (min-width: 768px) {
.wrapper {
display: block;
}
.wrapper:after {
content: "";
display: table;
clear: both;
}
.three {
float: left;
width: 200px;
}
.one,
.two,
.four {
margin-left: 200px;
}
}
```
```html
three
one
two
four
```
Upvotes: 2 [selected_answer] |
2018/03/21 | 1,204 | 3,722 | <issue_start>username_0: I have created a python program with gui using tkinter. In the file there are several calls to external text files. I wanted to make a standalone executable using pyinstaller but the executable formed gives an error which is something of the sort "Script cant be executed" ( I don't remember the exact wordings).
How can I take that whole folder and convert it into a single .exe file?
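A common approach — a sketch, assuming PyInstaller's standard `--onefile` and `--add-data` options, with `data.txt` and `my_program.py` as placeholder names:
```
# Bundle everything into a single executable; repeat --add-data per external file.
# The separator inside --add-data is ";" on Windows and ":" on Linux/macOS.
pyinstaller --onefile --add-data "data.txt;." my_program.py
```
A one-file build unpacks itself to a temporary directory at runtime, so the script should resolve bundled files via PyInstaller's `sys._MEIPASS` attribute instead of a path relative to the current directory.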
2018/03/21 | 412 | 1,516 | <issue_start>username_0: Given a component, with a form declaration
```
ngOnInit() {
this.form = this.fb.group({
address: [""],
});
}
```
And two input controls on the form, both referencing the same control:
```
<form [formGroup]="form">
  <input formControlName="address" />
  <input formControlName="address" />
</form>
```
How do I keep the input values the same in each of the controls?
Updating each input element does change the model value, but **not the other corresponding input value**. I am sure this is by design.
I am using the control on a *tabbed interface, that requires a duplicate* on each tab. Is there an easy way to keep them updated?
I have a **[working plunker demonstration.](https://plnkr.co/edit/Gu68Yl?p=preview)**<issue_comment>username_1: just add a value field to the form
```
```
check out this [plunker](https://plnkr.co/edit/WskFHZQcoe8DKusLeIM9?p=preview)
Upvotes: 6 [selected_answer]<issue_comment>username_2: If the components have a reference to the same form but live in different parts (different templates, for example), you can subscribe to the value-change event to update the value. This works for me:
```
class CustomComponent implements OnInit {
@Input() formGroup: FormGroup;
ngOnInit() {
this.formGroup.controls['param'].valueChanges.subscribe(x=> {
this.formGroup.controls['param'].setValue(x, {onlySelf: true, emitEvent: false});
});
}
}
```
So, if you modify the value in one component, the subscription will handle the changes in the other component.
The `emitEvent: false` option prevents entering an infinite loop on changes.
Upvotes: 4 |
2018/03/21 | 329 | 904 | <issue_start>username_0: How can I get the value of `list` `code` elements?
```
def test(argu):
print(argu)
code = [5,10,15]
length =len(code)
for i in range(0,length):
test("code[%d]" %i)
```
Expected output:
```
5
10
15
```
Actual output:
```
code[0]
code[1]
code[2]
```<issue_comment>username_1: `%` simply substitutes the parameter into the string, the string is not re-interpreted as an expression.
Just use an ordinary list index, without quoting it:
```
for i in range(0, length):
test(code[i])
```
There's also no need to use `range`, just iterate over the list directly:
```
for elt in code:
test(elt)
```
Upvotes: 3 [selected_answer]<issue_comment>username_2: Changing `test("code[%d]" %i)` to `test(code[i])` solves your problem:
```
def test(argu):
print(argu)
code = [5,10,15]
length = len(code)
for i in range(length):
test(code[i])
```
Upvotes: 1 |
2018/03/21 | 220 | 813 | <issue_start>username_0: I have added a parameter to select either fiscal or calendar year. I have a year over year line graph and since the fiscal year begins in July I would like July to be the first month listed on the axis when fiscal year is selected. Right now the months display January to December.<issue_comment>username_1: You can change the fiscal year by changing the `fiscal year start` to `June`
Check below image
[](https://i.stack.imgur.com/Y9bgE.png)
Upvotes: 1 <issue_comment>username_2: [Fiscal dates in Tableau](https://tarsolutions.co.uk/blog/fiscal-dates-in-tableau/) are best managed using standard date functions. In this case your fiscal date would be a DATEADD of 7 months: DATEADD('month',7,[Order Date])
Upvotes: 0 |
2018/03/21 | 269 | 836 | <issue_start>username_0: I have a map of maps (immutable)
```
private val map = mutable.Map[String, mutable.Map[String, MyObject]]()
```
and I wonder why
```
map.get(id).get(name)
```
returns `MyObject` while
```
map.get(id)
```
returns `Option[mutable.Map[String, MyObject]]`
I want to get `Option[MyObject]` out of the map of maps.
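A minimal sketch of the usual way to get that — `flatMap` chains the two optional lookups, yielding `None` if either the outer id or the inner name is missing:
```scala
val result: Option[MyObject] = map.get(id).flatMap(_.get(name))
```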
2018/03/21 | 373 | 1,207 | <issue_start>username_0: I have a data frame like:
```
cat.c1 cat.c2 cat.c3 name
0 tony NaN NaN groupA
1 Magoo {} NaN groupA
2 Jon NaN {} groupA
```
Queries such as this work:
```
df.query('name=="groupA"')
```
But I want to query on a prefixed column.
I try:
`df.query('cat.c1=="tony"')`
I get:
```
KeyError: 'cat'
```
Any ideas?<issue_comment>username_1: `query` has limitations on what columns you can query with it. A rule of thumb I like to follow is that, if the name isn't a valid python identifier name, then it just won't work.
Your only option is to index directly with a boolean mask.
```
df[df['cat.c1'] == "tony"]
```
Alternatively, you may want to get rid of those pesky prefixes, or just join them altogether.
```
df.columns.str.split('.').str.join('_')
Index(['cat_c1', 'cat_c2', 'cat_c3', 'name'], dtype='object')
```
Assign the column names back, and you can then use `query`:
```
df.query('cat_c1 == "tony"')
```
Upvotes: 2 [selected_answer]<issue_comment>username_2: If you enclose the column names in backticks it will also work.
Upvotes: 0 |
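A sketch of that (backtick-quoted column names in `query` require pandas 0.25 or newer):
```python
df.query('`cat.c1` == "tony"')
```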
2018/03/21 | 1,323 | 4,274 <issue_start>username_0: I have this code in my function.php; how can I display this field value in the admin order email? Thanks!
```
//1.1 Display field in admin
add_action('woocommerce_product_options_inventory_product_data', 'woocommerce_product_custom_fields');
function woocommerce_product_custom_fields(){
global $woocommerce, $post;
echo '';
woocommerce_wp_text_input(
array(
'id' => '_custom_product_text_field',
'placeholder' => 'Name',
'label' => __('customtext', 'woocommerce'),
'desc_tip' => 'true'
)
);
echo '';
}
//1.2 Save field
add_action('woocommerce_process_product_meta', 'woocommerce_product_custom_fields_save');
function woocommerce_product_custom_fields_save($post_id){
$woocommerce_custom_product_text_field = $_POST['_custom_product_text_field'];
if (!empty($woocommerce_custom_product_text_field))
update_post_meta($post_id, '_custom_product_text_field', esc_attr($woocommerce_custom_product_text_field));
}
```<issue_comment>username_1: **Update 2** There are two very similar alternatives (the first one is better):
To easily display this product custom field value in emails (and in orders too), you can use the following code, which will save this data in the order items once the order is placed.
First alternative code *(the better one, using the WC 3+ CRUD methods)*:
```
// 1.3 Save custom field value in order items meta data (only for Woocommerce versions 3+)
add_action( 'woocommerce_checkout_create_order_line_item', 'add_custom_field_to_order_item_meta', 20, 4 );
function add_custom_field_to_order_item_meta( $item, $cart_item_key, $values, $order ) {
$custom_field_value = get_post_meta( $item->get_product_id(), '_custom_product_text_field', true );
if ( ! empty($custom_field_value) ){
$item->update_meta_data( __('Custom label', 'woocommerce'), $custom_field_value );
}
}
```
… or …
The other alternative *(the old one updated)*:
```
// 1.3 Save custom field value in order items meta data
add_action( 'woocommerce_add_order_item_meta', 'add_custom_field_to_order_item_meta', 20, 3 );
function add_custom_field_to_order_item_meta( $item_id, $values, $cart_item_key ) {
$custom_field_value = get_post_meta( $values['data']->get_id(), '_custom_product_text_field', true );
if ( ! empty($custom_field_value) )
wc_add_order_item_meta( $item_id, __('Custom label', 'woocommerce'), $custom_field_value );
}
```
*Code goes in the functions.php file of your active child theme (or theme).* Tested and works.
On orders you will get that (for both):
[](https://i.stack.imgur.com/dwzin.png)
On emails you will get that (for both):
[](https://i.stack.imgur.com/bHwPf.png)
Upvotes: 2 [selected_answer]<issue_comment>username_2: This is my code now
```
//1.1 Display custom field
add_action('woocommerce_product_options_inventory_product_data', 'woocommerce_product_custom_fields');
add_action('woocommerce_process_product_meta', 'woocommerce_product_custom_fields_save');
function woocommerce_product_custom_fields(){
global $woocommerce, $post;
echo '';
woocommerce_wp_text_input(
array(
'id' => '_custom_product_text_field',
'placeholder' => 'text',
'label' => __('Custom', 'woocommerce'),
'desc_tip' => 'true'
)
);
echo '';
}
// 1.2 Save this field in admin
function woocommerce_product_custom_fields_save($post_id){
$woocommerce_custom_product_text_field = $_POST['_custom_product_text_field'];
if (!empty($woocommerce_custom_product_text_field))
update_post_meta($post_id, '_custom_product_text_field', esc_attr($woocommerce_custom_product_text_field));
}
// 1.3 Save custom field value in order items meta data
add_action( 'woocommerce_add_order_item_meta', 'add_custom_field_to_order_item_meta', 20, 3 );
function add_custom_field_to_order_item_meta( $item_id, $values, $cart_item_key ) {
$custom_field_value = get_post_meta( $values['data']->get_id(), '_custom_product_text_field', true );
if ( ! empty($custom_field_value) )
wc_add_order_item_meta( $item_id, __('Custom label', 'woocommerce'), $custom_field_value );
}
```
Upvotes: 0 |
2018/03/21 | 492 | 1,466 <issue_start>username_0: So I have an array of items and I want to grab the `number` value from the last item in the array. I feel like this is pretty simple, but I tried using the end() function and it did not seem to work. Here is the example array:
```
Array
(
[0] => stdClass Object
(
[ID] => 1
[number] => 1
[mode] => 1
[timestamp] => 2018-03-20 15:23:58
[question_text] => Hello
)
[1] => stdClass Object
(
[ID] => 2
[number] => 2
[mode] => 1
[timestamp] => 2018-03-20 15:23:58
[question_text] => Hello 2
)
[2] => stdClass Object
(
[ID] => 3
[number] => 3
[mode] => 1
[timestamp] => 2018-03-20 15:23:58
[question_text] => Hello 3
)
[3] => stdClass Object
(
[ID] => 4
[number] => 4
[mode] => 1
[timestamp] => 2018-03-20 15:23:58
[question_text] => Hello 4
)
)
```
So I'm trying to grab only [number] from the last item in the array<issue_comment>username_1: Let's assume the array is stored in a variable called $array:
```
echo $array[sizeof($array) - 1]->number;
```
Upvotes: -1 <issue_comment>username_2: `end()` is the right function, but you also need to get the property value, as it's an array of objects, not an array of arrays.
`echo end($array)->number;`
or
```
$item = end($array);
echo $item->number;
```
Upvotes: 0 |
2018/03/21 | 169 | 658 | <issue_start>username_0: I have 2 Routes like the following
```
<Route path="/organizations/create" component={Component1} />
<Route path="/organizations/:organizationId" component={Component2} />
```
However, since `:organizationId` matches anything, when I go to /organizations/create it loads both Component1 and Component2. I want it to load the second component only when the url is like "/organizations/5".
What is an ideal fix for this WITHOUT changing the url (i.e. "organizations/details/:organizationId" would not work - Or is this the only solution)<issue_comment>username_1: You need to use the `exact` keyword
```
<Route exact path="/organizations/create" component={Component1} />
<Route exact path="/organizations/:organizationId" component={Component2} />
```
Upvotes: 0 <issue_comment>username_2: Use a `Switch`, it will cause only the first matching route to show:
```
<Switch>
  <Route path="/organizations/create" component={Component1} />
  <Route path="/organizations/:organizationId" component={Component2} />
</Switch>
```
Upvotes: 3 [selected_answer] |
2018/03/21 | 775 | 2,894 | <issue_start>username_0: On a daily basis, I save my web pages files on a public drive. I then have to send a direct link of the html file to the client.
To do so, I tend to manually convert all the backslashes "\" in the folder directory into forward slashes "/", as well as adding "http://" at the beginning.
Example:
>
> From
> `\\Public\Drive\PageLocation\`
>
> To
> `http://Public/Drive/PageLocation/index.html`
>
>
>
I've started using the find and replace option, but I feel like it would be much better if there was some sort of code that converts these paths in an input field.
Here's a quick visual of what I have in mind:
```css
*{font-family:sans-serif;}
p{
font-weight:bold;
}
```
```html
Folder Directory to URL
Result: http://Public/Drive/PageLocation/index.html
```
I've tried figuring out JavaScript RegExp, but I didn't find much luck making it functional. It seems like it can only read double backslashes and ignores singles:
```js
var FolderDirectory = "\\Public\Drive\PageLocation";
var URLConvert = FolderDirectory.replace(/\\/g, "/");
alert(URLConvert);
```
Is there such a thing as a hotkey that converts slashes?
What do you guys use?
Is there an alternative way that I could convert slashes locally?
What would you recommend?
Thank you.<issue_comment>username_1: The problem is already your start string. The backslashes were interpreted as escape characters, so they don't exist in the string. But if you enter the path in an HTML form, the string already contains the backslashes in the correct way. See my short example here.
```js
function convert(){
var src_url = document.getElementById("path").value;
var converted_url = src_url.replace("\\\\", "http://").replace(/\\/g, "/");
alert(converted_url);
}
```
```html
```
Upvotes: 0 <issue_comment>username_2: Your regex is perfect, the problem is the string you are changing. In javascript, a backslash (\) is used to escape a char.
>
> The backslash (\) escape character turns special characters into string characters...
> [Javascript Strings](https://www.w3schools.com/js/js_strings.asp "Javascript Strings")
>
>
>
Here's a working example of what you want to do:
```js
document.getElementById('input').onkeyup = function() {
//when someone types in the input
var v = this.value; //input's value
if (v[0] === '\\') {
//text entered is a url
// add 'http:' replace \ with /
document.getElementById('result').textContent = 'http:' + v.replace(/\\/g, '/');
} else {
//text entered is a path
// remove http or https replace / with \
document.getElementById('result').textContent = v.replace(/https?:/g, '').replace(/\//g, '\\');
}
}
```
```html
```
Upvotes: 2 [selected_answer] |
2018/03/21 | 919 | 2,634 | <issue_start>username_0: So I want to create a web page where there will be images on the left and text on the right. I tried with margin and text-align but those things don't seem to work. Here is my HTML.
```css
.content {
min-height: 690px;
background: url("background.png");
background-size: 100% 100%;
overflow: scroll;
}
p {
margin-left: 210px
}
.dog {
height: 200px;
border-radius: 8px;
padding-left: 10px;
}
.cat {
display: block;
border-radius: 8px;
height: 250px;
width: 331px;
padding-top: 10px;
padding-left: 10px;
}
```
```html
PETS
====


TEXT TEXT TEXT TEXT TEXT TEXT TEXT TEXT TEXT TEXT
```<issue_comment>username_1: It's better to use `float: left` for the image.
Side note: to avoid such problems in the future, learn Bootstrap; it is very useful for web design.
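A rough sketch of the idea (untested; selectors taken from the question's markup):
```
.dog, .cat { float: left; }  /* images hug the left edge */
p { overflow: hidden; }      /* the text forms its own block beside the floats */
```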
Upvotes: 2 <issue_comment>username_2: You'll need to add:
```
img{ display: block; float: left; }
```
This will float the images on the left-hand side and the text in the paragraphs will wrap around them.
Upvotes: 1 <issue_comment>username_3: Please try this code and test it in the snippet. It's also responsive on mobile phones.
```css
@media screen and (min-width:540px) {
.div_left,
.div_right {
max-width: 50%;
width: 100%;
float: left;
}
}
img {
width: 100%;
}
.div_right p {
padding: 5px;
}
```
```html
PETS
====


TEXT TEXT TEXT TEXT TEXT TEXT TEXT TEXT TEXT TEXT
```
Upvotes: 1 <issue_comment>username_4: Just pull the images left and the text right.
Like this example:
```css
.content {
background: white;
}
.containerImg{float:left;width:50%;}
.containerImg img{width:100%}
.containerText{float:right;width:50%;}
```
```html
PETS
====


TEXT TEXT TEXT TEXTTEXT TEXT TEXT TEXT TEXT TEXT TEXT TEXT TEXT TEXT TEXTTEXTTEXTTEXT TEXT TEXT TEXT TEXT TEXT TEXT TEXT TEXT TEXT TEXT TEXT TEXT
```
Upvotes: 3 [selected_answer]<issue_comment>username_5: Use a flex layout for this, like:
```
.content {
display: flex;
min-height: 690px;
}
.dog {
height: 200px;
}

TEXT TEXT TEXT TEXT TEXT TEXT TEXT TEXT TEXT TEXT
```
By default `flex-direction` is `row`; you can change it to `column` if you want the items stacked vertically.
Upvotes: 0 |
2018/03/21 | 1,899 | 6,064 | <issue_start>username_0: I have two input boxes with id `no1` and `no2`. I have to put two integer values on both inputs and show their sum without using any button. My code works, but it shows only the individual values, not the calculated sum.
How can I show the added value in real time while the user is typing?
```js
var c;
var a = document.getElementById("no1");
var b = document.getElementById("no2");
var number = document.getElementById("number");
var m = document.getElementById("message");
a.onfocus = function() {
m.style.display = "block";
}
a.onblur = function() {
m.style.display = "none";
}
a.onkeyup = function() {
var c;
if (isNaN(a.value) == false) {
c = a.value + b.value;
m.innerHTML = c;
} else {
m.innerHTML = "**Must containe number only**";
}
}
b.onfocus = function() {
m.style.display = "block";
}
b.onblur = function() {
m.style.display = "none";
}
b.onkeyup = function() {
if (isNaN(b.value) == false) {
c = a.value + b.value;
m.innerHTML = c;
} else {
m.innerHTML = "**Must containe number only**";
}
}
```
```html
Press Me
```<issue_comment>username_1: Input values are strings, which are concatenated by the `+` sign.
In order to add them together, use [`parseInt()`](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/parseInt) to convert to an integer.
I've also included a way to handle non-numeric values:
```
c = (parseInt(a.value)||0) + (parseInt(b.value)||0);
```
Also, the `<font>` and `<center>` tags are obsolete.
I suggest using CSS instead.
Example below:
```js
var c;
var a = document.getElementById("no1");
var b = document.getElementById("no2");
var number = document.getElementById("number");
var m = document.getElementById("message");
a.onfocus = function() {
m.style.display = "block";
}
a.onblur = function() {
m.style.display = "none";
}
a.onkeyup = function() {
var c;
if (isNaN(a.value) == false) {
c = (parseInt(a.value) || 0) + (parseInt(b.value) || 0);
m.innerHTML = c;
} else {
m.innerHTML = "Must contain number only";
}
}
b.onfocus = function() {
m.style.display = "block";
}
b.onblur = function() {
m.style.display = "none";
}
b.onkeyup = function() {
if (isNaN(b.value) == false) {
c = (parseInt(a.value) || 0) + (parseInt(b.value) || 0);
m.innerHTML = c;
} else {
m.innerHTML = "Must contain number only";
}
}
```
```css
#message {
color: blue;
font-size: 45px;
text-align: center;
}
```
```html
```
Upvotes: 2 <issue_comment>username_2: The problem in your code is that `a.value + b.value` is not doing a sum but concatenating two strings, because `a.value` is a string, not a number. So you have to convert the strings to numbers (using `parseInt` or `parseFloat`).
You can remove redundant code by doing the following:
```js
var c;
var a = document.getElementById("no1");
var b = document.getElementById("no2");
var m = document.getElementById("message");
var x=document.querySelectorAll("input");
for (var i = 0; i < x.length; i++) {
x[i].addEventListener('focus', function() {
m.style.display = "block";
});
x[i].addEventListener('keyup', function() {
var c;
if (isNaN(a.value) == false) {
c = parseInt(a.value || 0) + parseInt(b.value || 0);
m.innerHTML = c;
} else {
m.innerHTML = "**Must containe number only**";
}
});
x[i].addEventListener('blur', function() {
m.style.display = "none";
});
}
```
```html
Press Me
```
Upvotes: 1 <issue_comment>username_3: Form controls like `<input>` store values as strings. Use the [**`.valueAsNumber`**](https://webplatform.github.io/docs/dom/HTMLInputElement/valueAsNumber/) property so you don't need to convert any string values to numbers afterwards with `parseInt()`, `parseFloat()`, `Number()`, etc.
~~inputObj1.value + inputObj2.value~~
inputObj1.**valueAsNumber** + inputObj2.**valueAsNumber**
The following demo has 2 examples. The first example uses **[HTMLFormControlsCollection API](https://developer.mozilla.org/en-US/docs/Web/API/HTMLFormControlsCollection)** and **[`.addEventListener()`](https://developer.mozilla.org/en-US/docs/Web/API/EventTarget/addEventListener)**. The second example uses an **[onevent attribute event handler](https://developer.mozilla.org/en-US/docs/Web/Guide/Events/Event_handlers)**.
Both examples need to use a `<form>` tag, and I replaced the `<font>` and `<center>` tags (they are HTML4, not HTML5) with an `<output>` tag. I also changed the `type` to `number`.
Both examples register the `input` event on the `<form>` tag. Within the `eventListener` there's a condition that will only process user input from `<input>` tags. The onevent attribute handler in the second example only needs a function expression. The `input` event will fire as soon as the user enters text.
Demo
----
```js
var container1 = document.forms.container1;
var con1 = container1.elements;
container1.addEventListener('input', function(e) {
if (e.target.type === 'number') {
con1.message1.value = con1.no1.valueAsNumber + con1.no2.valueAsNumber;
}
}, false);
```
```html
```
Upvotes: 1 <issue_comment>username_4: ```
var c;
var a = document.getElementById("no1");
var b = document.getElementById("no2");
var number = document.getElementById("number");
var m = document.getElementById("message");
a.onfocus = function() {
m.style.display = "block"; }
a.onblur = function() {
m.style.display = "none"; }
a.onkeyup = function() {
var c;
if (isNaN(a.value) == false) {
c = Number(a.value) + Number(b.value);
m.innerHTML = c;
}
else {
m.innerHTML = "**Must containe number only**";
}
}
b.onfocus = function() {
m.style.display = "block"; }
b.onblur = function() {
m.style.display = "none"; }
b.onkeyup = function() {
if (isNaN(b.value) == false) {
c = Number(a.value) + Number(b.value);
m.innerHTML = c;
}
else {
m.innerHTML = "**Must containe number only**";
}
}
```
Upvotes: 1 [selected_answer] |
2018/03/21 | 1,012 | 3,403 | <issue_start>username_0: I'm building a simple rotator that rotates three pieces of advice for a website. I want it to fade out/in by adding/removing classes. In the JavaScript debugger, it works perfectly, because it is given time to step through all of the calls, but in actual execution, it is very choppy. I understand it's doing that because the script only cares about adding the class; it doesn't care about completing the transition in the class. But I need to know how to get the script to fully finish the transition. Here is the JavaScript:
```
function rotateAdvice() {
let nextAdvice;
const currentAdviceCont = document.querySelector(".advice-show");
const currentAdvice = currentAdviceCont.id.slice(-1);
if (currentAdvice >= 2) {
nextAdvice = document.getElementById("advice-0");
} else {
nextAdvice = "advice-"+(parseInt(currentAdvice) + 1);
nextAdvice = document.getElementById(nextAdvice);
}
currentAdviceCont.classList.add("advice");
currentAdviceCont.classList.remove("advice-show");
currentAdviceCont.classList.add("no-display");
nextAdvice.classList.remove("no-display");
nextAdvice.classList.add("advice-show");
}
// called through this:
setInterval(rotateAdvice, 4000);
```
And here are the three css classes:
```
.advice {
opacity: 0;
transition: 1s opacity;
}
.advice-show {
opacity: 1;
transition: 1s opacity;
}
.no-display {
display: none;
visibility: hidden;
}
```<issue_comment>username_1: You can use its `transitionend` event to get a more accurate toggle.
Stack snippet
```js
var box1 = document.querySelector(".box.nr1");
var box2 = document.querySelector(".box.nr2");
box1.addEventListener("transitionend", trans_ended, false);
box2.addEventListener("transitionend", trans_ended, false);
setTimeout(function() { box1.classList.add('trans') }, 10);
function trans_ended (e) {
if (e.type == 'transitionend') {
if (e.target == box1) {
box1.classList.remove('trans');
setTimeout(function() { box2.classList.add('trans') }, 10);
} else if (e.target == box2) {
box2.classList.remove('trans');
setTimeout(function() { box1.classList.add('trans') }, 10);
}
}
}
```
```css
.box {
position: absolute;
left: 1em;
top: 1em;
width: 5em;
height: 5em;
background-color: blue;
opacity: 0;
transition: opacity 2s;
}
.box.nr2 {
background-color: red;
}
.box.trans {
opacity: 1;
}
```
```html
```
---
Another option is to use `animation`, with which one can do more cool stuff.
Here is another answer of mine, with a simple sample:
* [Changing an HTML element's style in JavaScript with its CSS transition temporarily disabled isn't reliably functioning](https://stackoverflow.com/questions/49286413/changing-an-html-elements-style-in-javascript-with-its-css-transition-temporari/49286965#49286965)
Upvotes: 2 <issue_comment>username_2: I just used a simple event listener for transitionend like this:
```
currentAdviceCont.classList.add("advice");
currentAdviceCont.classList.remove("advice-show");
currentAdviceCont.addEventListener("transitionend", () => {
    currentAdviceCont.classList.add("no-display");
    nextAdvice.classList.remove("no-display");
    nextAdvice.classList.remove("advice");
    nextAdvice.classList.add("advice-show");
});
```
Upvotes: 1 [selected_answer] |
2018/03/21 | 761 | 2,485 | <issue_start>username_0: In the filter I am implementing there is a step doing some reduction over the boundary of a square domain:
```
RDom r(0, filter_size, 0, filter_size);
r.where( (r.x == 0 || r.x == filter_size - 1)
|| (r.y == 0 || r.y == filter_size - 1));
```
However, this makes the domain traversal `O(filter_size^2)` while the useful reduction domain is only `O(filter_size)`.
Now my reduction operation is a bit involved, so repeating it for each side of the filter window makes quite a mess. Is there an elegant && efficient way of doing this in Halide?
2018/03/21 | 626 | 1,913 | <issue_start>username_0: We have a table `Things` with instances belonging to table `Projects` via `thing.project_id`.
When querying an instance of `Things`, we are returning a json representation of the parent `Project` record as a value on that instance.
Currently it looks like:
```
SELECT t.id,
(
SELECT row_to_json(a.*)
FROM (
SELECT p.id, p.name
) AS a
) AS project
FROM "Things" t
INNER JOIN "Projects" p ON p.id = t.project_id
WHERE t.id = ?
```
This works fine, but it seems like it could be simplified.
Is there a way to get rid of the need for an intermediate variable (`a` in this example) while retaining clarity?<issue_comment>username_1: How about using the `json_build_object` function:
```
SELECT t.id, json_build_object('id', p.id, 'name', p.name) AS project
FROM "Things" t
INNER JOIN "Projects" p ON p.id = t.project_id
WHERE t.id = ?
```
Upvotes: 1 <issue_comment>username_2: If you want to use `row_to_json()` on chosen columns, a subquery is necessary; however, it would be more natural and more readable (IMO) to use the subquery as a derived table (i.e. in the `FROM` clause):
```
SELECT t.id, row_to_json(p.*)
FROM "Things" t
INNER JOIN (
SELECT id, name
FROM "Projects") p
ON p.id = t.project_id
WHERE t.id = ?
```
Upvotes: 1 <issue_comment>username_3: Not sure what exactly you want, but what about:
```
SELECT t.id, to_jsonb(p) as project
FROM "Things" t
JOIN "Projects" p ON p.id = t.project_id
WHERE t.id = ?
```
To exclude columns from the `project` table, you can use the `-` operator with `jsonb`:
```
SELECT t.id, to_jsonb(p) - 'id' as project
FROM "Things" t
JOIN "Projects" p ON p.id = t.project_id
WHERE t.id = ?
```
You can remove multiple columns that way:
```
SELECT t.id, to_jsonb(p) - array['id', 'some_column'] as project
FROM "Things" t
JOIN "Projects" p ON p.id = t.project_id
WHERE t.id = ?
```
Upvotes: 3 [selected_answer] |
2018/03/21 | 826 | 2,045 | <issue_start>username_0: I've run into an error. I've been trying to append a text file to itself like so:
```
file_obj = open("text.txt", "a+")
number = 6
def appender(obj, num):
count = 0
while count<=num:
read = file_obj.read()
file_obj.seek(0,2)
file_obj.write(read)
count+=1
appender(file_obj, number)
```
However, the `text.txt` file is then filled with strange ASCII symbols. At first, the file contains only a simple "hello", but after the code, it contains this:
```
hellohello䀀 猀· d娀 Ť搀Ŭ娀ͤ攀ɪ昀Ѥ萀 夀ɚ搀ť樀Ŧ搀茀 婙ݤ攀Ѫ昀ࡤ萀 夀њ搀
ɥ攀ժ昀
茀 婙攀ť樀ɦ搀茀 婙萀 ݚ搀࡚攀४攀ƃ娀搀⡓ 癳 祐桴湯䌠慨慲瑣牥䴠灡楰杮
䌠摯捥挠ㅰ㔲‰敧敮慲整牦浯✠䅍偐义升嘯久佄卒䴯䍉䙓⽔䥗䑎坏⽓偃㈱〵吮员‧楷桴朠湥潣敤祰
മഊ椊 and so on.
```
Any help will be appreciated<issue_comment>username_1: I am using this code and everything is working as expected:
```
with open("file.txt") as f:
for line in f:
f.write(line)
```
Upvotes: 0 <issue_comment>username_2: I think I can fix your problem, even though I can't reproduce it. There's a logic error: after you write, you fail to return to the start of the file for reading. In terms of analysis, you failed to do anything to diagnose the problem. At the very least, use a `print` statement to see what you're reading: that highlights the problem quite well. Here's the loop I used:
```
count = 0
while count<=num:
file_obj.seek(0) # Read from the beginning of the file.
read = file_obj.read()
print(count, read) # Trace what we're reading.
file_obj.seek(0, 2)
file_obj.write(read)
count+=1
```
This gives the expected output of 128 (2^(6+1)) repetitions of "hello".
**EXTENSIONS**
I recommend that you learn to use both the `for` loop and the `with open ... as` idiom. These will greatly shorten your program and improve the readability.
Upvotes: 2 [selected_answer]<issue_comment>username_3: You just have the wrong mode - use `'r+'` rather than `'a+'`. See [this link](https://www.tutorialspoint.com/python/python_files_io.htm) for a list of modes and an explanation of reading files.
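For example, something like this (untested sketch) doubles the file once:
```
with open("text.txt", "r+") as f:
    contents = f.read()   # reading moves the position to EOF
    f.write(contents)     # so this write appends a copy of the file
```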
Upvotes: 0 |
2018/03/21 | 1,106 | 2,823 | <issue_start>username_0: I have two pandas DataFrames that contain numeric and non-numeric values. I want to divide one by the other, but keep the non-numeric columns. Here is a MWE:
```
a = pd.DataFrame(
[
['group1', 1., 2.],
['group1', 3., 4.],
['group1', 5., 6.]
],
columns=['Group', 'A', 'B']
)
b = pd.DataFrame(
[
['group1', 7., 8.],
['group1', 9., 10.],
['group1', 11., 12.]
],
columns=['Group', 'A', 'B']
)
```
Trying to do:
```
b.div(a)
```
Results in:
>
> `TypeError: unsupported operand type(s) for /: 'str' and 'str'`
>
>
>
So to get around this, I have done:
```
result = b.drop(["Group"], axis=1).div(a.drop(["Group"], axis=1))
print(result)
# A B
#0 7.0 4.0
#1 3.0 2.5
#2 2.2 2.0
```
Which is correct, but I also wanted to keep the column `"Group"`.
One way to get my desired output would be to do:
```
desired_output = b[["Group"]].join(result)
print(desired_output)
# Group A B
#0 group1 7.0 4.0
#1 group1 3.0 2.5
#2 group1 2.2 2.0
```
But my real DataFrames have many non-numeric columns. Is there a cleaner/faster/more efficient way to tell pandas to divide only the numeric columns?<issue_comment>username_1: You can use `np.divide`, passing a mask to the `where` parameter.
```
np.divide(b, a, where=a.dtypes.ne(object))
```
Assuming the non-numeric columns are the same across DataFrames, use `combine_first`/`fillna` to get them back:
```
np.divide(b, a, where=a.dtypes.ne(object)).combine_first(a)
Group A B
0 group1 7.0 4.0
1 group1 3.0 2.5
2 group1 2.2 2.0
```
Upvotes: 3 [selected_answer]<issue_comment>username_2: Similar to @cᴏʟᴅsᴘᴇᴇᴅ's answer, but you can stay within Pandas with `.select_dtypes()`. This will attempt to do index-aligned division on any non-object dtypes.
```
>>> b.select_dtypes(exclude='object').div(
... a.select_dtypes(exclude='object')).combine_first(a)
...
A B Group
0 7.0 4.0 group1
1 3.0 2.5 group1
2 2.2 2.0 group1
```
To retain column ordering:
```
>>> desired_output = b.select_dtypes(exclude='object')\
... .div(a.select_dtypes(exclude='object'))\
... .combine_first(a)[a.columns]
>>> desired_output
Group A B
0 group1 7.0 4.0
1 group1 3.0 2.5
2 group1 2.2 2.0
```
Upvotes: 2 <issue_comment>username_3: Maybe `set_index()`
```
b.set_index('Group').div(a.set_index('Group'),level=[0]).reset_index()
Out[579]:
Group A B
0 group1 7.0 4.0
1 group1 3.0 2.5
2 group1 2.2 2.0
```
This works when there are more string-type columns:
```
pd.concat([b,a]).groupby(level=0).agg(lambda x : x.iloc[0]/x.iloc[1] if x.dtype=='int64' else x.head(1))
Out[584]:
Group A B
0 group1 7.0 8.0
1 group1 9.0 10.0
2 group1 11.0 12.0
```
Upvotes: 1 |
2018/03/21 | 2,412 | 9,694 | <issue_start>username_0: I am experimenting with GKE cluster upgrades in a 6-node (two node pools) test cluster before I try it on our staging or production cluster. Upgrading when I only had a 12-replica nginx deployment, the nginx ingress controller and cert-manager (as helm chart) installed took 10 minutes per node pool (3 nodes). I was very satisfied. I decided to try again with something that looks more like our setup. I removed the nginx deployment and added 2 node.js deployments and the following helm charts: mongodb-0.4.27, mcrouter-0.1.0 (as a statefulset), redis-ha-2.0.0, and my own www-redirect-0.0.1 chart (a simple nginx which does redirects). The problem seems to be with mcrouter. Once the node starts draining, the status of that node changes to `Ready,SchedulingDisabled` (which seems normal) but the following pods remain:
* mcrouter-memcached-0
* fluentd-gcp-v2.0.9-4f87t
* kube-proxy-gke-test-upgrade-cluster-default-pool-74f8edac-wblf
I do not know why those two kube-system pods remain, but the mcrouter one is mine and it won't go away quickly enough. If I wait long enough (1 hour+) then it eventually works, I am not sure why. The current node pool (of 3 nodes) started upgrading 2 hours 46 minutes ago and 2 nodes are upgraded; the 3rd one is still upgrading but nothing is moving... I presume it will complete in the next 1-2 hours...
I tried to run the drain command with `--ignore-daemonsets --force` but it told me it was already drained.
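(For reference, the command was essentially the following, using the node name from the pod list above:)
```
kubectl drain gke-test-upgrade-cluster-default-pool-74f8edac-wblf --ignore-daemonsets --force
```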
I tried to delete the pods, but they just come back and the upgrade does not move any faster.
Any thoughts?
Update #1
=========
The mcrouter helm chart was installed like this:
`helm install stable/mcrouter --name mcrouter --set controller=statefulset`
The statefulsets it created for mcrouter part is:
```
apiVersion: apps/v1beta1
kind: StatefulSet
metadata:
labels:
app: mcrouter-mcrouter
chart: mcrouter-0.1.0
heritage: Tiller
release: mcrouter
name: mcrouter-mcrouter
spec:
podManagementPolicy: OrderedReady
replicas: 1
revisionHistoryLimit: 10
selector:
matchLabels:
app: mcrouter-mcrouter
chart: mcrouter-0.1.0
heritage: Tiller
release: mcrouter
serviceName: mcrouter-mcrouter
template:
metadata:
labels:
app: mcrouter-mcrouter
chart: mcrouter-0.1.0
heritage: Tiller
release: mcrouter
spec:
affinity:
podAntiAffinity:
requiredDuringSchedulingIgnoredDuringExecution:
- labelSelector:
matchLabels:
app: mcrouter-mcrouter
release: mcrouter
topologyKey: kubernetes.io/hostname
containers:
- args:
- -p 5000
- --config-file=/etc/mcrouter/config.json
command:
- mcrouter
image: jphalip/mcrouter:0.36.0
imagePullPolicy: IfNotPresent
livenessProbe:
failureThreshold: 3
initialDelaySeconds: 30
periodSeconds: 10
successThreshold: 1
tcpSocket:
port: mcrouter-port
timeoutSeconds: 5
name: mcrouter-mcrouter
ports:
- containerPort: 5000
name: mcrouter-port
protocol: TCP
readinessProbe:
failureThreshold: 3
initialDelaySeconds: 5
periodSeconds: 10
successThreshold: 1
tcpSocket:
port: mcrouter-port
timeoutSeconds: 1
resources:
limits:
cpu: 256m
memory: 512Mi
requests:
cpu: 100m
memory: 128Mi
terminationMessagePath: /dev/termination-log
terminationMessagePolicy: File
volumeMounts:
- mountPath: /etc/mcrouter
name: config
dnsPolicy: ClusterFirst
restartPolicy: Always
schedulerName: default-scheduler
securityContext: {}
terminationGracePeriodSeconds: 30
volumes:
- configMap:
defaultMode: 420
name: mcrouter-mcrouter
name: config
updateStrategy:
type: OnDelete
```
and here is the memcached statefulset:
```
apiVersion: apps/v1beta1
kind: StatefulSet
metadata:
labels:
app: mcrouter-memcached
chart: memcached-1.2.1
heritage: Tiller
release: mcrouter
name: mcrouter-memcached
spec:
podManagementPolicy: OrderedReady
replicas: 5
revisionHistoryLimit: 10
selector:
matchLabels:
app: mcrouter-memcached
chart: memcached-1.2.1
heritage: Tiller
release: mcrouter
serviceName: mcrouter-memcached
template:
metadata:
labels:
app: mcrouter-memcached
chart: memcached-1.2.1
heritage: Tiller
release: mcrouter
spec:
affinity:
podAntiAffinity:
requiredDuringSchedulingIgnoredDuringExecution:
- labelSelector:
matchLabels:
app: mcrouter-memcached
release: mcrouter
topologyKey: kubernetes.io/hostname
containers:
- command:
- memcached
- -m 64
- -o
- modern
- -v
image: memcached:1.4.36-alpine
imagePullPolicy: IfNotPresent
livenessProbe:
failureThreshold: 3
initialDelaySeconds: 30
periodSeconds: 10
successThreshold: 1
tcpSocket:
port: memcache
timeoutSeconds: 5
name: mcrouter-memcached
ports:
- containerPort: 11211
name: memcache
protocol: TCP
readinessProbe:
failureThreshold: 3
initialDelaySeconds: 5
periodSeconds: 10
successThreshold: 1
tcpSocket:
port: memcache
timeoutSeconds: 1
resources:
requests:
cpu: 50m
memory: 64Mi
terminationMessagePath: /dev/termination-log
terminationMessagePolicy: File
dnsPolicy: ClusterFirst
restartPolicy: Always
schedulerName: default-scheduler
securityContext: {}
terminationGracePeriodSeconds: 30
updateStrategy:
type: OnDelete
status:
replicas: 0
```<issue_comment>username_1: That is a bit of a complex question and I am definitely not sure it works the way I think it does, but... let's try to understand what is happening.
You have an upgrade process and 6 nodes in the cluster. The system will upgrade the nodes one by one, using drain to remove all workload from each node.
The drain process itself respects your settings: the number of replicas and the **desired state** of the workload have a **higher priority** than the drain of the node itself.
During the drain process, Kubernetes will try to schedule all your workload onto whatever resources still have scheduling available. Scheduling on a node which the system wants to drain is disabled; you can see that in its state - `Ready,SchedulingDisabled`.
So, the Kubernetes scheduler tries to find the right place for your workload on all available nodes. It will wait as long as it needs to place everything you described in the cluster configuration.
Now the most important thing. You set `replicas: 5` for your `mcrouter-memcached`. It cannot run more than one replica per node because of `podAntiAffinity`, and a node that runs it must have enough free resources, which is calculated from the `resources:` block of the `StatefulSet`.
So, I think your cluster just does not have enough resources to run a new replica of `mcrouter-memcached` on the remaining 5 nodes. As an example, on the last node, where a replica of it is still not running, there is not enough memory because of other workloads.
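You can check that hypothesis with something like the following (node name taken from your pod list):
```
# how much CPU/memory is already requested on that node
kubectl describe node gke-test-upgrade-cluster-default-pool-74f8edac-wblf | grep -A5 Allocated
# which nodes the memcached replicas actually landed on
kubectl get pods -o wide | grep mcrouter-memcached
```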
I think if you set `replicas` for `mcrouter-memcached` to 4, it will solve the problem. Or you can try to use slightly more powerful instances for that workload, or add one more node to the cluster; that should also help.
Hope I gave enough explanation of my logic; ask me if something is not clear to you. But first, please try to solve the issue with the provided solution :)
Upvotes: 2 <issue_comment>username_2: The problem was a combination of the minAvailable value from a PodDisruptionBudget (that was part of the memcached helm chart which is a dependency of the mcrouter helm chart) and the replicas value for the memcached replicaset. Both were set to 5 and therefore none of them could be deleted during the drain. I tried changing the minAvailable to 4 but [PDB are immutable at this time](https://github.com/kubernetes/kubernetes/issues/45398). What I did was remove the helm chart and replace it.
```
helm delete --purge myproxy
helm install ./charts/mcrouter-0.1.0-croy.1.tgz --name myproxy --set controller=statefulset --set memcached.replicaCount=5 --set memcached.pdbMinAvailable=4
```
Once that was done, I was able to get the cluster to upgrade normally.
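For illustration, the PDB at the heart of the deadlock now looks roughly like this (the name and labels are my reconstruction from the chart values, so treat it as a sketch) - the key point is that `minAvailable` must be strictly less than the replica count for an eviction to be allowed:
```
apiVersion: policy/v1beta1
kind: PodDisruptionBudget
metadata:
  name: myproxy-memcached
spec:
  minAvailable: 4
  selector:
    matchLabels:
      app: myproxy-memcached
```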
What I should have done (but only thought about it after) was to change the replicas value to 6, this way I would not have needed to delete and replace the whole chart.
Thank you @AntonKostenko for trying to help me find this issue.
This [issue](https://stackoverflow.com/questions/48122841/kubectl-drain-not-evicting-helm-memcached-pods) also helped me.
Thanks to the folks in [Slack@Kubernetes](https://kubernetes.slack.com), especially to Paris, who tried to get my issue more visibility, and the volunteers of the [Kubernetes Office Hours](https://github.com/kubernetes/community/blob/master/events/office-hours.md) (which happened to be [yesterday](https://youtu.be/9S6GB_rVySU?t=36m38s), lucky me!) for also taking a look.
Finally, thank you to psycotica0 from [Kubernetes Canada](https://k8scanada.slack.com) for also giving me some pointers.
Upvotes: 1 [selected_answer] |
2018/03/21 | 2,072 | 5,989 | <issue_start>username_0: I'm trying to set up Protractor for a new project. Our requirements encompass end-to-end tests on the most popular browsers (Chrome, Firefox, IE, Safari).
After 15 seconds of setting up Chrome tests, I've been stuck a whole day trying to get the Firefox driver to work. Here's my current configuration:
```
exports.config = {
allScriptsTimeout: 11000,
specs: [
'./e2e/**/*.e2e-spec.ts'
],
multiCapabilities: [
{
browserName: 'chrome',
chromeOptions: {
args: ['--headless', 'no-sandbox', '--disable-gpu',
'--window-size=1200x900']
}
},
{
browserName: 'firefox'
}
],
directConnect: false,
chromeDriver: './node_modules/protractor/node_modules/webdriver-manager/selenium/chromedriver_2.37',
firefoxPath: './node_modules/protractor/node_modules/webdriver-manager/selenium/geckodriver-v0.20.0',
  seleniumAddress: 'http://localhost:4444/wd/hub',
  baseUrl: 'http://localhost:4200/',
  framework: 'jasmine',
  jasmineNodeOpts: {
    showColors: true,
    defaultTimeoutInterval: 30000,
    print: function () {}
  },
onPrepare() {
require('ts-node').register({
project: 'e2e/tsconfig.e2e.json'
});
jasmine.getEnv().addReporter(
new SpecReporter({spec: {displayStacktrace: true}}));
}
}
```
Problem is, I get the following error:
```
SessionNotCreatedError: Expected browser binary location, but unable to find binary in default location, no 'moz:firefoxOptions.binary' capability provided, and no binary flag set on the command line
```
I have already done a `webdriver-manager update` and all drivers are present in node_modules. They just aren't getting found or used.
Here's the entire output of `ng e2e`:
```
** NG Live Development Server is listening on localhost:49152, open your browser on http://localhost:49152/ **
Date: 2018-03-21T16:46:44.040Z
Hash: 3bda6edf1fc2359b155e
Time: 11716ms
chunk {inline} inline.bundle.js, inline.bundle.js.map (inline) 3.89 kB [entry] [rendered]
chunk {main} main.bundle.js, main.bundle.js.map (main) 8.44 kB [initial] [rendered]
chunk {polyfills} polyfills.bundle.js, polyfills.bundle.js.map (polyfills) 202 kB [initial] [rendered]
chunk {styles} styles.bundle.js, styles.bundle.js.map (styles) 435 kB [initial] [rendered]
chunk {vendor} vendor.bundle.js, vendor.bundle.js.map (vendor) 3.64 MB [initial] [rendered]
(node:95425) DeprecationWarning: os.tmpDir() is deprecated. Use os.tmpdir() instead.
webpack: Compiled successfully.
[17:46:44] I/update - chromedriver: file exists /Users/xyz/Projects/xyz/node_modules/protractor/node_modules/webdriver-manager/selenium/chromedriver_2.37.zip
[17:46:44] I/update - chromedriver: unzipping chromedriver_2.37.zip
[17:46:44] I/update - chromedriver: setting permissions to 0755 for /Users/xyz/Projects/xyz/node_modules/protractor/node_modules/webdriver-manager/selenium/chromedriver_2.37
[17:46:44] I/update - chromedriver: chromedriver_2.37 up to date
[17:46:45] I/launcher - Running 2 instances of WebDriver
[17:46:45] I/testLogger -
------------------------------------
[17:46:45] I/testLogger - [firefox #11] PID: 95431
[firefox #11] Specs: /Users/xyz/Projects/xyz/e2e/app.e2e-spec.ts
[firefox #11]
[firefox #11] (node:95431) DeprecationWarning: os.tmpDir() is deprecated. Use os.tmpdir() instead.
[firefox #11] [17:46:45] I/hosted - Using the selenium server at http://localhost:4444/wd/hub
[firefox #11]
[firefox #11] /Users/xyz/Projects/xyz/node_modules/selenium-webdriver/lib/error.js:505
[firefox #11] throw new ctor(message);
[firefox #11] ^
[firefox #11] SessionNotCreatedError: Expected browser binary location, but unable to find binary in default location, no 'moz:firefoxOptions.binary' capability provided, and no binary flag set on the command line
[firefox #11] Build info: version: '3.11.0', revision: 'e59cfb3', time: '2018-03-11T20:33:15.31Z'
[firefox #11] System info: host: 'MacBook-Pro.local', os.name: 'Mac OS X', os.arch: 'x86_64', os.version: '10.13.1', java.version: '1.8.0_112'
[firefox #11] Driver info: driver.version: unknown
[firefox #11] remote stacktrace:
[firefox #11] at Object.checkLegacyResponse (/Users/xyz/Projects/xyz/node_modules/selenium-webdriver/lib/error.js:505:15)
[firefox #11] From: Task: WebDriver.createSession()
[17:46:45] I/testLogger -
[17:46:45] E/launcher - Runner process exited unexpectedly with error code: 1
[17:46:45] I/launcher - 1 instance(s) of WebDriver still running
.[17:46:48] I/testLogger -
------------------------------------
[17:46:48] I/testLogger - [chrome #01] PID: 95430
[chrome #01] Specs: /Users/xyz/Projects/xyz/e2e/app.e2e-spec.ts
[chrome #01]
[chrome #01] (node:95430) DeprecationWarning: os.tmpDir() is deprecated. Use os.tmpdir() instead.
[chrome #01] [17:46:45] I/hosted - Using the selenium server at http://localhost:4444/wd/hub
[chrome #01] Jasmine started
[chrome #01]
[chrome #01] xyz App
[chrome #01] ✓ should display welcome message
[chrome #01]
[chrome #01] Executed 1 of 1 spec SUCCESS in 2 secs.
[17:46:48] I/testLogger -
[17:46:48] I/launcher - 0 instance(s) of WebDriver still running
[17:46:48] I/launcher - firefox #11 failed with exit code: 1
[17:46:48] I/launcher - chrome #01 passed
[17:46:48] I/launcher - overall: 1 process(es) failed to complete
[17:46:48] E/launcher - Process exited with error code 100
```
Anyone got any tips?<issue_comment>username_1: Turns out I didn't have Firefox (the browser itself) installed. I was wrongly convinced the installed driver was a standalone version of the browser, but it's actually just a wrapper.
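For anyone hitting the same wall: a quick way to confirm whether the browser binary itself exists (macOS default path shown; adjust for your OS):
```
ls "/Applications/Firefox.app/Contents/MacOS/firefox"
```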
Upvotes: 2 [selected_answer]<issue_comment>username_2:
```
multiCapabilities: [{
  'browserName': 'chrome'
}, {
  'browserName': 'firefox'
}],
```
For Chrome and Firefox there is no need to add a driver/exe file; simply add the browser names to `multiCapabilities`.
Upvotes: 0 |
2018/03/21 | 963 | 3,525 | <issue_start>username_0: I have a Python package (Python 3.6, if it makes a difference) that I've designed to run as 'python -m *package* *arguments*' and I'd like to write unit tests for the `__main__.py` module. I specifically want to verify that it sets the exit code correctly. Is it possible to use `runpy.run_module` to execute my `__main__.py` and test the exit code? If so, how do I retrieve the exit code?
To be more clear, my `__main__.py` module is very simple. It just calls a function that has been extensively unit tested. But when I originally wrote `__main__.py`, I forgot to pass the result of that function to exit(), so I would like unit tests where the main function is mocked to make sure the exit code is set correctly. My unit test would look something like:
```
@patch('my_module.__main__.my_main', return_value=2)
def test_rc2(self, _):
"""Test that rc 2 is the exit code."""
sys.argv = ['arg0', 'arg1', 'arg2', …]
runpy.run_module('my_module')
self.assertEqual(mod_rc, 2)
```
My question is, how would I get what I've written here as `mod_rc`?
Thanks.<issue_comment>username_1: <NAME> has said before (I believe it was in [Clean Code Talks: Don't Look for Things](https://www.youtube.com/watch?v=RlfLCWKxHJ0) but I may be wrong) that he doesn't know how to effectively unit test main methods, so his solution is to make them so simple that you can prove logically that they work if you assume the correctness of the (unit-tested) code that they call.
For example, if you have a discrete, tested unit for parsing command line arguments; a library that does the actual work; and a discrete, tested unit for rendering the completed work into output, then a main method that calls all three of those in sequence is assuredly going to work.
With that architecture, you can basically get by with just one big system test that is expected to produce something other than the "default" output and it'll either crash (because you wired it up improperly) or work (because it's wired up properly and all of the individual parts work).
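A sketch of what that looks like (the function names here are illustrative, not from any real library):
```py
def main(argv=None):
    args = parse_args(argv)   # discrete, separately tested unit
    result = do_work(args)    # the library that does the actual work
    render(result)            # discrete, separately tested unit
    return 0                  # nothing left in main that can be wrong
```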
---
At this point, I'm dropping all pretense of knowing what I'm talking about. There is almost assuredly a better way to do this, but frankly you could just write a shell script:
```
python -m package args
test $? -eq [expected exit code]
```
That will exit with error `iff` your program outputs incorrectly, which TravisCI or similar will regard as build failing.
Upvotes: 2 <issue_comment>username_2: With pytest, I was able to do:
`import mypkgname.__main__ as rtmain`
where mypkgname is what you've named your app as a package/module. Then just running pytest as normal worked. I hope this helps some other poor soul.
Upvotes: 0 <issue_comment>username_3: `__main__.py` is still subject to normal `__main__` global behavior — which is to say, you can implement your `__main__.py` like so
```py
def main():
    ...  # Your stuff

if __name__ == "__main__":
main()
```
and then you can test your `__main__` in whatever testing framework you like by using
```py
from your_package.__main__ import main
```
As an aside, if you are using `argparse`, you will probably want:
```py
def main(arg_strings=None):
# …
args = parser.parse_args(arg_strings)
# …
if __name__ == "__main__":
main()
```
and then you can override arg strings from a unit test simply with
```py
from your_package.__main__ import main
def test_main():
assert main(["x", "y", "z"]) == …
```
or similar idiom in you testing framework.
Upvotes: 2 |
2018/03/21 | 750 | 2,300 | <issue_start>username_0: ```
def newactivation(x):
if x>0:
return K.relu(x, alpha=0, max_value=None)
else :
return x * K.sigmoid(0.7* x)
get_custom_objects().update({'newactivation': Activation(newactivation)})
```
I am trying to use this activation function for my model in Keras, but I am having a hard time figuring out what to replace
```
if x>0:
```
The error I got:
>
> File
> "/usr/local/lib/python3.4/dist-packages/tensorflow/python/framework/ops.py",
> line 614, in `__bool__`
> raise TypeError("Using a `tf.Tensor` as a Python `bool` is not allowed. "
>
>
> TypeError: Using a `tf.Tensor` as a Python `bool` is not allowed. Use `if t is not None:` instead of `if t:` to test if a tensor is defined, and use TensorFlow ops such as tf.cond to execute subgraphs conditioned on the value of a tensor.
>
>
>
Can someone make it clear for me?<issue_comment>username_1: `if x > 0` doesn't make sense because `x > 0` is a tensor, and not a boolean value.
To do a conditional statement in Keras use `keras.backend.switch`.
For example your
```
if x > 0:
return t1
else:
return t2
```
Would become
```
keras.backend.switch(x > 0, t1, t2)
```
Upvotes: 4 [selected_answer]<issue_comment>username_2: Try something like:
```
def newactivation(x):
return tf.cond(x>0, x, x * tf.sigmoid(0.7* x))
```
x isn't a python variable, it's a Tensor that will hold a value when the model is run. The value of x is only known when that op is evaluated, so the condition needs to be evaluated by TensorFlow (or Keras).
Upvotes: 2 <issue_comment>username_3: You can evaluate the tensor and then check for the condition
```
from keras.backend.tensorflow_backend import get_session
sess=get_session()
if sess.run(x)>0:
return t1
else:
return t2
```
get\_session is not available for TensorFlow 2.0. Solution for that you can find [here](https://stackoverflow.com/questions/58255821/how-to-use-k-get-session-in-tensorflow-2-0-or-how-to-migrate-it)
Upvotes: 0 <issue_comment>username_4: inspired by the previous answer from ed Mar 21 '18 at 17:28
username_2. This worked for me. [tf.cond](https://www.tensorflow.org/api_docs/python/tf/cond)
```
def custom_activation(x):
return tf.cond(tf.greater(x, 0), lambda: ..., lambda: ....)
```
Upvotes: 0 |
2018/03/21 | 2,595 | 7,535 | <issue_start>username_0: The Haskell aviary combinators lists [(=<<)](http://hackage.haskell.org/package/data-aviary-0.4.0/docs/Data-Aviary-Functional.html#v:-61--60--60-) as:
```
(a -> r -> b) -> (r -> a) -> r -> b
```
Is there an official bird-name for this? Or can it be derived via the pre-existing ones?<issue_comment>username_1: >
> Is there an official bird-name for this?
>
>
>
I can't find it in [Data.Aviary.Birds](http://hackage.haskell.org/package/data-aviary-0.4.0/docs/Data-Aviary-Birds.html), so I suppose there's not. If there was, it probably would've been referenced in the list you linked.
>
> Or can it be derived via the pre-existing ones?
>
>
>
Surely. The easiest might be to start with the [`starling`](http://hackage.haskell.org/package/data-aviary-0.4.0/docs/Data-Aviary-Birds.html#v:starling) whose signature is similar, and just compose it with `flip`, i.e.
```
(=<<) = bluebird starling cardinal
```
Upvotes: 4 [selected_answer]<issue_comment>username_2: Maybe the correct form would be `blackbird warbler bluebird`,
which is like:
```
(...) = (.) . (.) -- blackbird
(.) -- bluebird
join -- warbler
-- and your function will be
f = join ... (.)
```
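A quick sanity check of that (scratch code of mine, with the blackbird defined locally):
```
import Control.Monad (join)

(...) :: (c -> d) -> (a -> b -> c) -> a -> b -> d
(...) = (.) . (.)     -- blackbird

bind' :: (a -> r -> b) -> (r -> a) -> r -> b
bind' = join ... (.)  -- bind' (+) (*2) 3 == 9, same as ((+) =<< (*2)) 3
```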
Upvotes: 2 <issue_comment>username_3: Quoting a comment:
>
> Btw do you have any advice on how to combine combinators to get a specific signature? I feel like I'm missing some trick (my current technique of staring at a list and doing mental gymnastics doesn't scale too well!)
>
>
>
Let the types guide you. You are looking for:
```
-- This name is totally made-up.
mino :: (b -> a -> c) -> (a -> b) -> a -> c
```
While you won't find it in [the list](http://hackage.haskell.org/package/data-aviary-0.4.0/docs/Data-Aviary-Birds.html), there is something quite similar:
```
starling :: (a -> b -> c) -> (a -> b) -> a -> c
```
If only we had a way to somehow twist `starling` into what we want...
```
mino :: (b -> a -> c) -> (a -> b) -> a -> c
mino = f starling
-- f :: ((a -> b -> c) -> (a -> b) -> a -> c) -> (b -> a -> c) -> (a -> b) -> a -> c
```
This mysterious `f` has a rather unwieldy type, so let's abbreviate it for a moment: with `x ~ b -> a -> c`, `y ~ a -> b -> c` and `z ~ (a -> b) -> a -> c`, we have
```
f :: (y -> z) -> x -> z
```
Another look at the list shows this fits the result type of `queer`:
```
queer :: (a -> b) -> (b -> c) -> a -> c
```
Progress!
```
mino :: (b -> a -> c) -> (a -> b) -> a -> c
mino = queer g starling
-- g :: x -> y
-- g :: (b -> a -> c) -> a -> b -> c
```
As for `g`, there is a great candidate near the top of the list:
```
cardinal :: (a -> b -> c) -> b -> a -> c
```
And there it is:
```
mino :: (b -> a -> c) -> (a -> b) -> a -> c
mino = queer cardinal starling
```
`queer`, of course, is `cardinal bluebird` (i.e. reverse function composition), which leads us back to [username_1's `bluebird starling cardinal`](https://stackoverflow.com/a/49413069/2751851).
---
GHC can actually assist you with this kind of derivation:
```
import Data.Aviary.Birds
mino :: (b -> a -> c) -> (a -> b) -> a -> c
mino = _f starling
```
```
GHCi> :l Mino.hs
[1 of 1] Compiling Main ( Mino.hs, interpreted )
Mino.hs:4:8: error:
* Found hole:
_f
:: ((a0 -> b0 -> c0) -> (a0 -> b0) -> a0 -> c0)
-> (b -> a -> c) -> (a -> b) -> a -> c
Where: `b0' is an ambiguous type variable
`a0' is an ambiguous type variable
`c0' is an ambiguous type variable
`b' is a rigid type variable bound by
the type signature for:
mino :: forall b a c. (b -> a -> c) -> (a -> b) -> a -> c
at Mino.hs:3:1-43
`a' is a rigid type variable bound by
the type signature for:
mino :: forall b a c. (b -> a -> c) -> (a -> b) -> a -> c
at Mino.hs:3:1-43
`c' is a rigid type variable bound by
the type signature for:
mino :: forall b a c. (b -> a -> c) -> (a -> b) -> a -> c
at Mino.hs:3:1-43
Or perhaps `_f' is mis-spelled, or not in scope
* In the expression: _f
In the expression: _f starling
In an equation for `mino': mino = _f starling
* Relevant bindings include
mino :: (b -> a -> c) -> (a -> b) -> a -> c (bound at Mino.hs:4:1)
|
4 | mino = _f starling
| ^^
Failed, no modules loaded.
```
If you want a clean output, though, you have to ask gently:
```
{-# LANGUAGE ScopedTypeVariables #-}
{-# LANGUAGE PartialTypeSignatures #-}
import Data.Aviary.Birds
mino :: forall b a c. (b -> a -> c) -> (a -> b) -> a -> c
mino =
let s :: (a -> b -> c) -> _
s = starling
in _f s
```
(A type annotation to `starling` would make defining `s` unnecessary; that style, however, would get ugly very quickly with more complicated expressions.)
```
GHCi> :l Mino.hs
[1 of 1] Compiling Main ( Mino.hs, interpreted )
Mino.hs:10:8: error:
* Found hole:
_f
:: ((a -> b -> c) -> (a -> b) -> a -> c)
-> (b -> a -> c) -> (a -> b) -> a -> c
Where: `b' is a rigid type variable bound by
the type signature for:
mino :: forall b a c. (b -> a -> c) -> (a -> b) -> a -> c
at Mino.hs:6:1-57
`a' is a rigid type variable bound by
the type signature for:
mino :: forall b a c. (b -> a -> c) -> (a -> b) -> a -> c
at Mino.hs:6:1-57
`c' is a rigid type variable bound by
the type signature for:
mino :: forall b a c. (b -> a -> c) -> (a -> b) -> a -> c
at Mino.hs:6:1-57
Or perhaps `_f' is mis-spelled, or not in scope
* In the expression: _f
In the expression: _f s
In the expression:
let
s :: (a -> b -> c) -> _
s = starling
in _f s
* Relevant bindings include
s :: (a -> b -> c) -> (a -> b) -> a -> c (bound at Mino.hs:9:9)
mino :: (b -> a -> c) -> (a -> b) -> a -> c (bound at Mino.hs:7:1)
|
10 | in _f s
| ^^
Failed, no modules loaded.
```
---
The process described above still involves quite a bit of staring at the list, as we are working it out using nothing but the birds in their pointfree majesty. Without such constraints, though, we would likely proceed in a different manner:
```
mino :: (b -> a -> c) -> (a -> b) -> a -> c
mino g f = _
```
The hole has type `a -> c`, so we know it is a function that takes an `a`:
```
mino :: (b -> a -> c) -> (a -> b) -> a -> c
mino g f = \x -> _
-- x :: a
```
The only other thing that takes an `a` here is `g`:
```
mino :: (b -> a -> c) -> (a -> b) -> a -> c
mino g f = \x -> g _ x
```
The type of the hole is now `b`, and the only thing that gives out a `b` is `f`:
```
mino :: (b -> a -> c) -> (a -> b) -> a -> c
mino g f = \x -> g (f x) x
```
This, of course, is the usual definition of the reader `(=<<)`. If we flip `g`, though...
```
mino :: (b -> a -> c) -> (a -> b) -> a -> c
mino g f = \x -> flip g x (f x)
```
... the reader `(<*>)` (i.e. the S combinator) becomes recogniseable:
```
mino :: (b -> a -> c) -> (a -> b) -> a -> c
mino g f = \x -> (<*>) (flip g) f x
```
We can then write it pointfree...
```
mino :: (b -> a -> c) -> (a -> b) -> a -> c
mino = (<*>) . flip
```
... and translate to birdspeak:
```
mino :: (b -> a -> c) -> (a -> b) -> a -> c
mino = bluebird starling cardinal
```
Upvotes: 1 |
2018/03/21 | 262 | 1,148 | <issue_start>username_0: I'm trying to provision a server with a Chef project that relied on an older version of the mysql cookbook (5.6.3). As this particular version is not compatible with anything newer than Ubuntu 14.04, I tried updating to the latest version (8.5.1), which led to the same error. I understand that the mysql cookbook no longer provides recipes and is supposed to be a library-only cookbook, but it's really not clear how I'm supposed to edit the code to do what the older version did. Is there any other cookbook built on top of this one to simply install the mysql client and server like the old version did, or do I have to write a wrapper cookbook like the documentation seems to suggest?<issue_comment>username_1: You can write your own cookbook to install the specific version of MySQL, using the `package` Chef resource.
You can create a template (`my.cnf.erb`) for the `my.cnf` file and copy it over the default using the `template` Chef resource.
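A rough sketch of such a wrapper recipe (package name, version pin and paths are illustrative Ubuntu values, not tested):
```
package 'mysql-server' do
  version '5.7.21-0ubuntu0.16.04.1' # pin whichever version you need
end

template '/etc/mysql/my.cnf' do
  source 'my.cnf.erb'
  owner 'root'
  mode '0644'
  notifies :restart, 'service[mysql]'
end

service 'mysql' do
  action [:enable, :start]
end
```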
Upvotes: -1 <issue_comment>username_2: You have to write a wrapper cookbook now. The community cookbook only provides resources that you then use from that wrapper cookbook.
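With the 8.x cookbook that means calling its custom resources from your own recipe, roughly like this (property values are placeholders; check the cookbook's README for the exact interface):
```
mysql_service 'default' do
  version '5.7'
  initial_root_password 'change-me'
  action [:create, :start]
end
```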
Upvotes: 2 |