date | nb_tokens | text_size | content
---|---|---|---|
2018/03/22 | 329 | 1,188 | <issue_start>username_0: I am trying to execute the following query in a repository using the `@Query` method. I want unique slocation values, and the result should be in `JSON` format (key, value).
**Here is my code**
```
@Repository
public interface AccountRepository extends JpaRepository, QueryDslPredicateExecutor {
// Load location
@Query("select new map (distinct(a.slocation) as slocation) from Account a where a.slocation !=null")
Set findSlocation();
```
**Error**
```
Caused by: org.hibernate.hql.internal.ast.QuerySyntaxException: unexpected token: distinct near line 1, column 17 [select new map (distinct(a.slocation) as slocation) from com.spacestudy.model.Account a where a.slocation !=null]
```
Can anyone tell me how to solve this error?<issue_comment>username_1: You can use this:
```
@Query("select new map (a.slocation) from Account a where a.slocation !=null group by slocation")
```
Upvotes: 3 [selected_answer]<issue_comment>username_2: I found one more solution: I removed `distinct` and used a `Set` instead of a `List`
```
@Query("select new map (a.slocation as slocation) from AccountModel a where a.slocation !=null")
Set findBySlocation();
```
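If you do want to keep `distinct`, placing it before the constructor expression rather than inside it should also parse; an untested sketch (`is not null` is the more idiomatic null check in JPQL):
```
@Query("select distinct new map(a.slocation as slocation) from Account a where a.slocation is not null")
Set<Map<String, Object>> findSlocation();
```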
Upvotes: 1 |
2018/03/22 | 760 | 2,218 | <issue_start>username_0: I'm trying to configure Scala in the IntelliJ IDE.
The Scala & Spark versions on my machine:
```
Welcome to Scala 2.12.5 (Java HotSpot(TM) 64-Bit Server VM, Java 1.8.0_121).
apache-spark/2.2.1
```
SBT file
```
scalaVersion := "2.12.5"
resolvers += "MavenRepository" at "http://central.maven.org/maven2"
libraryDependencies ++= {
val sparkVersion = "2.2.1"
Seq( "org.apache.spark" %% "spark-core" % sparkVersion)
}
```
The error I'm getting:
```
Error:Error while importing SBT project:
...
```
[info] Resolving jline#jline;2.14.5 ...
[error] (*:ssExtractDependencies) sbt.ResolveException: unresolved dependency: org.apache.spark#spark-core_2.12;2.2.1: not found
[error] unresolved dependency: org.apache.spark#spark-core_2.12;1.4.0: not found
[error] (*:update) sbt.ResolveException: unresolved dependency: org.apache.spark#spark-core_2.12;2.2.1: not found
[error] unresolved dependency: org.apache.spark#spark-core_2.12;1.4.0: not found
```
```<issue_comment>username_1: *The version of spark-core that you defined in your sbt project is not available to be downloaded.* You can check the [maven dependency](https://mvnrepository.com/artifact/org.apache.spark/spark-core) for more info on which versions are available.
*As you can see, for spark-core version 2.2.1 the latest available build is compiled against Scala 2.11* [info here](https://mvnrepository.com/artifact/org.apache.spark/spark-core_2.11/2.2.1)
So
**either** you change your sbt build file as
```
scalaVersion := "2.11.8"
resolvers += "MavenRepository" at "http://central.maven.org/maven2"
libraryDependencies ++= {
val sparkVersion = "2.2.1"
Seq( "org.apache.spark" %% "spark-core" % sparkVersion)
}
```
**or** define version of build in dependency as
```
libraryDependencies ++= {
val sparkVersion = "2.2.1"
Seq("org.apache.spark" % "spark-core_2.11" % sparkVersion)
}
```
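For reference, `%%` just appends the Scala binary-version suffix to the artifact name, so with `scalaVersion := "2.11.8"` the two variants above resolve to the same artifact; a small illustration:
```
// with scalaVersion := "2.11.8"
libraryDependencies += "org.apache.spark" %% "spark-core"      % "2.2.1" // resolves spark-core_2.11
libraryDependencies += "org.apache.spark" %  "spark-core_2.11" % "2.2.1" // same artifact, spelled out
```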
I hope the answer is helpful
Upvotes: 4 [selected_answer]<issue_comment>username_2: `Spark 2.2.1` does not support `scalaVersion` 2.12. You have to do it like this:
>
> scalaVersion := "2.11.8"
>
>
> libraryDependencies += "org.apache.spark" %% "spark-core" % sparkVersion
>
>
>
Thanks
Upvotes: 0 |
2018/03/22 | 792 | 2,933 | <issue_start>username_0: I am migrating code from Objective-C to Swift 4.0. I have some float #define constants related to the device height in a specific Objective-C header file. Accessing this #define gives the error `"Use of unresolved identifier"`, while an Objective-C string #define identifier is easily accessible from a Swift class.
Not accessible in Swift4
**#define PHONE\_IPHONE10 PHONE\_UISCREEN\_HEIGHT==812.0f**
Accessible in Swift4
**#define ERROR @"Some error occured. Please try later."**
Help me with your comments or a solution.<issue_comment>username_1: To achieve similar functionality I created a Constants.swift file with this structure:
```
struct Constants {
struct phoneHeights {
static let PHONE_UISCREEN_HEIGHT = 812.0
//some others consts
}
struct iPhoneX {
static let statusBarHeight: CGFloat = 44
//some others consts
}
}
```
Or simply:
```
struct Constants {
static let PHONE_UISCREEN_HEIGHT = 812.0
static let statusBarHeight: CGFloat = 44
}
```
And for type safety in Swift, you can read [here](http://www.dummies.com/programming/macintosh/type-safety-in-swift/).
Upvotes: -1 <issue_comment>username_2: The reason this imports to Swift...
>
> #define ERROR @"Some error occured. Please try later."
>
>
>
...is that it’s semantically equivalent to a constant declaration. That is, it permanently associates that string-literal value with the name `ERROR`. The Swift compiler recognizes that you’re using the C preprocessor to define a constant, and translates it to a Swift constant.
(Even though you could—and probably should—define C global constants without the preprocessor, Swift recognizes that there’s a long tradition of using `#define` instead, and imports it anyway.)
---
The reason this doesn’t import to Swift...
>
> #define PHONE\_IPHONE10 PHONE\_UISCREEN\_HEIGHT==812.0f
>
>
>
...is that this is a preprocessor *macro*. It doesn’t statically map a name to a value. Instead, it tells C that wherever it sees your name `PHONE_IPHONE10`, it should substitute the *expression* `PHONE_UISCREEN_HEIGHT==812.0f`. Presumably `PHONE_UISCREEN_HEIGHT` is itself a macro, so the whole thing expands to a chain of method calls and an equality comparison.
Swift itself doesn’t do preprocessor macros, or anything like such, so it doesn’t import them from C.
A close equivalent would be to redefine this logic using a computed property or function (and the idiomatic way to do that in Swift would be as a static member on a type, not a global symbol). Something like this:
```
extension UIDevice {
class var isMaybeiPhoneX: Bool {
return false // or some logic based on UIScreen.main.size
}
}
```
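Call sites then read as a property check instead of a macro; a minimal usage sketch (the member name `isMaybeiPhoneX` is just the example from above):
```
if UIDevice.isMaybeiPhoneX {
    // apply the iPhone X-specific layout tweak here
}
```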
But be warned, the whole idea of conditionally changing your app’s UI or behavior based on a specific screen height check is fraught with peril. Tried Auto Layout?
Upvotes: 2 [selected_answer] |
2018/03/22 | 1,180 | 4,108 | <issue_start>username_0: I am working on a task where I take the id generated from the database collection and pass it through Postman. I convert it to a database ObjectId; this works correctly if a valid id is passed, otherwise it throws an error.
```
mongo.get().collection("post").find({"_id":new ObjectId(req.headers.postid)}).toArray(function(err, result) {
if (err) throw err;
if(result.length==0){
jsonObj.response="post id entered is invalid ";
res.send(jsonObj)
}
else{
//some operations
}
 });
```
Whenever I pass an invalid input, it throws the following error:
```
Argument passed in must be a single String of 12 bytes or a string of 24 hex characters
=======================================================================================
```
Error: Argument passed in must be a single String of 12 bytes or a string of 24 hex characters
at new ObjectID (/home/vamsi/nodejs-training/myfirstproject/facebook/node_modules/bson/lib/bson/objectid.js:51:11)
at Object.handle (/home/vamsi/nodejs-training/myfirstproject/facebook/routes/updatepost.js:36:47)
at next_layer (/home/vamsi/nodejs-training/myfirstproject/facebook/node_modules/express/lib/router/route.js:103:13)
at Route.dispatch (/home/vamsi/nodejs-training/myfirstproject/facebook/node_modules/express/lib/router/route.js:107:5)
at /home/vamsi/nodejs-training/myfirstproject/facebook/node_modules/express/lib/router/index.js:195:24
at Function.proto.process_params (/home/vamsi/nodejs-training/myfirstproject/facebook/node_modules/express/lib/router/index.js:251:12)
at next (/home/vamsi/nodejs-training/myfirstproject/facebook/node_modules/express/lib/router/index.js:189:19)
at Function.proto.handle (/home/vamsi/nodejs-training/myfirstproject/facebook/node_modules/express/lib/router/index.js:234:5)
at Layer.router (/home/vamsi/nodejs-training/myfirstproject/facebook/node_modules/express/lib/router/index.js:23:12)
at trim_prefix (/home/vamsi/nodejs-training/myfirstproject/facebook/node_modules/express/lib/router/index.js:226:17)
```
```<issue_comment>username_1: **1-**
If you're using mongoose and you're trying to perform a search on the basis of `_id`, you don't actually need to convert the passed `id` to an `ObjectId`. Just use
```
mongo.get().collection("post")find({"_id" : req.headers.postid })
```
This won't require the passed `id` to be constructable into MongoDB's ObjectId. An `ObjectId` in MongoDB follows a special pattern:
It is a 12-byte structure; the first 4 bytes of the ObjectId represent the time in seconds since the UNIX epoch.
The next 3 bytes of the ObjectId represent the machine identifier.
The next 2 bytes of the ObjectId represent the process ID.
And the last 3 bytes of the ObjectId represent a random counter value.
And if the passed id is not constructable according to the pattern mentioned above, it will throw errors like the one you're getting.
**2**
Well I am not very sure about this approach but you can first check if `id` passed is OK or not with something like
```
 var id = null;
 try {
   id = new ObjectId(req.headers.postid);
   mongo.get().collection("post").find({"_id": id}).toArray(function(err, result) {
     // ....
   });
 }
 catch (error) {
   console.error(error);
   // there must be an error with the id
 }
```
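Alternatively, the MongoDB driver's `ObjectId.isValid()` lets you validate the header up front and avoid the try/catch; an untested sketch:
```
if (ObjectId.isValid(req.headers.postid)) {
  mongo.get().collection("post").find({"_id": new ObjectId(req.headers.postid)}).toArray(function(err, result) {
    // ... same handling as in the question
  });
} else {
  res.send("post id entered is invalid");
}
```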
Upvotes: 2 [selected_answer]<issue_comment>username_2: If you are okay with monk, you can use this.
`var Id = require('monk').id(req.headers.postid);`
Upvotes: 0 <issue_comment>username_3: ```
if (!validator.isMongoId(req.headers.postid)) {
  res.send("invalid post id");
}
else {
  mongo.get().collection("post").find({"_id": new ObjectId(req.headers.postid)}).toArray(function(err, result) {
    if (err) throw err;
    if (result.length == 0) {
      jsonObj.response = "post id entered is invalid";
      res.send(jsonObj);
    }
    else {
      // some operations
    }
  });
}
```
Before that, install the validator npm module.
Upvotes: 1 |
2018/03/22 | 278 | 1,056 | <issue_start>username_0: I am attempting to run a script that I have finished writing without having to launch PyCharm every time I run it.
However, I receive the following error when trying to run it from my command-line in `Windows 10`:
```
Traceback (most recent call last):
File "C:\Users\stuff\bot.py", line 5, in
import irc.bot
ImportError: No module named irc.bot
```
My folder structure to get to irc.bot looks like:
```
main folder
--venv
----lib
------site-packages
--------irc
----------bot.py
```
I'm fairly certain the issue is the virtual environment configuration, but I'm not sure how to get around that in a batch file to launch my script.<issue_comment>username_1: I've faced a similar issue.
Upgrade the version of pip, wheel and setuptools in your virtualenv and afterwards try to install irc.
Hope that it works.
Upvotes: 0 <issue_comment>username_2: In order for python to recognize the module, first enter the virtual environment using the script located at:
```
venv\Scripts\activate
```
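A minimal batch file along those lines (the paths are illustrative; adjust them to where your venv and bot.py actually live):
```
@echo off
call venv\Scripts\activate
python C:\Users\stuff\bot.py
```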
Upvotes: 3 [selected_answer] |
2018/03/22 | 577 | 1,652 | <issue_start>username_0: I have a file that lists a filename and the path to that file.
I want to delete the rows whose files do not exist anymore.
file.txt (For now all existing files):
```
file1;~/Documents/test/123
file2;~/Documents/test/456
file3;~/Test
file4;~/Files/678
```
Now if I delete any of the given files (file2 and file4, for example) and run my script, I want it to test whether the file in each row exists and remove the row if it does not.
file.txt(after removing file2, file4):
```
file1;~/Documents/test/123
file3;~/Test
```
What I have got so far (not working at all):
- It does not want to run at all
```
#!/bin/sh
backup=`cat file.txt`
rm -f file.txt
touch file.txt
while read -r line
do
dir=`echo "$line" | awk -F';' '{print $2}'`
file=`echo "$line" | awk -F';' '{print $1}'`
if [ -f "$dir"/"$file" ];then
echo "$line" >> file.txt
fi
done << "$backup"
```<issue_comment>username_1: If I understand, this should do it.
```
touch file.txt file2.txt
for i in `cat file.txt`; do
fp=`echo $i|cut -d ';' -f2`
if [ -e $fp ];then
echo "$i" >> file2.txt
fi
done
mv file2.txt file.txt
```
Upvotes: 0 <issue_comment>username_2: Here's one way:
```
tmp=$(mktemp)
while IFS=';' read -r file rest; do
[ -f "$file" ] && printf '%s;%s\n' "$file" "$rest"
done < file.txt > "$tmp" && mv "$tmp" file.txt
```
or if you don't want a temp file for some reason:
```
tmp=()
while IFS=';' read -r file rest; do
[ -f "$file" ] && tmp+=( "$file;$rest" )
done < file.txt &&
printf '%s\n' "${tmp[@]}" > file.txt
```
Both are untested but should be very close if not exactly correct.
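One caveat that applies to the original script and to both versions above: a literal `~` read from the file is not expanded by `[ -f ... ]`, so paths like `~/Test` will never test as existing. A small, untested bash workaround is to substitute it before the check:
```
path="${path/#\~/$HOME}"   # replace a leading ~ with $HOME before testing with [ -f "$path" ]
```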
Upvotes: 1 |
2018/03/22 | 666 | 2,568 | <issue_start>username_0: I am working on a Swift project. I need the logo to be shown on the left side of the navigation bar, and I would like to set this up globally in the AppDelegate. But self.navigationItem is not recognized in the AppDelegate. Any help would be appreciated, as this is my first project in Swift.
```
UINavigationBar.appearance().barTintColor = Constants.templatecolor
let logoImage = UIImage.init(named: "logoImage")
let logoImageView = UIImageView.init(image: logoImage)
logoImageView.frame = CGRect(x: -40, y: 0, width: 150, height: 25)
logoImageView.contentMode = .scaleAspectFit
let imageItem = UIBarButtonItem.init(customView: logoImageView)
let negativeSpacer = UIBarButtonItem.init(barButtonSystemItem: .fixedSpace, target: nil, action: nil)
negativeSpacer.width = -25
UINavigationItem.setLeftBarButtonItems(negativeSpacer, imageItem)
```<issue_comment>username_1: Please change the x position of logoImageView when setting its frame.
You are setting its value to -40; that might be why you are not able to see the logo image.
Change its x position to 0.
Upvotes: 0 <issue_comment>username_2: You can't set the navigation item in the `AppDelegate`. The navigation item is a property of `UIViewController` or `UINavigationController`, and you set the `UINavigationItem` for each view controller.
If all you want to achieve is setting the navigation item for all screens, declare a base class, call it `BaseViewController`, and set the navigation item in its `viewDidLoad`:
```
class BaseViewController: UIViewController {
override func viewDidLoad() {
super.viewDidLoad()
let logoImage = UIImage.init(named: "close")
let logoImageView = UIImageView.init(image: logoImage)
logoImageView.frame = CGRect(x: -40, y: 0, width: 150, height: 25)
logoImageView.contentMode = .scaleAspectFit
let imageItem = UIBarButtonItem.init(customView: logoImageView)
let negativeSpacer = UIBarButtonItem.init(barButtonSystemItem: .fixedSpace, target: nil, action: nil)
negativeSpacer.width = -25
self.navigationItem.leftBarButtonItems = [negativeSpacer,imageItem]
// Do any additional setup after loading the view.
}
}
```
Now every single viewController in your project can extend from `BaseViewController`
```
class ViewController: BaseViewController {
...
override func viewDidLoad() {
super.viewDidLoad()
}
}
```
Make sure you call `super.viewDidLoad` in `viewDidLoad`
Hope it helps
Upvotes: 3 [selected_answer] |
2018/03/22 | 226 | 737 | <issue_start>username_0: I am trying to use the LIME package for explaining predictive models. I see an error while running the following code:
```
# Create the LIME Explainer
explainer = lime.lime_tabular.LimeTabularExplainer(X_train_undersample, feature_names=feature_names, class_names=['0','1'], kernel_width=3)
```
Error:
>
> TypeError: unhashable type: 'slice'
>
>
>
What could be the problem here?<issue_comment>username_1: The error is because your X\_train\_undersample is DataFrame stype. It will be fine if you change the DataFrame stype into an array stype (eg. a float64 array).
Upvotes: 0 <issue_comment>username_2: If your x\_train is a DataFrame, try passing it like:
```
x_train_undersample.as_matrix()
```
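Note that `as_matrix()` was later deprecated in pandas; `.values` (or `.to_numpy()` in newer versions) gives the same plain NumPy array. A small sketch of the explainer call with an array instead of a DataFrame:
```
explainer = lime.lime_tabular.LimeTabularExplainer(
    X_train_undersample.values,   # float64 ndarray rather than a DataFrame
    feature_names=feature_names,
    class_names=['0', '1'],
    kernel_width=3)
```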
Upvotes: 1 |
2018/03/22 | 629 | 2,362 | <issue_start>username_0: I have a Function App of type TimerTrigger. I have given the timer trigger expression `0 0 * * * *`, as my requirement is to run it at a 1-hour interval. I have referred to the [TimeTrigger Cheat Sheet](https://codehollow.com/2017/02/azure-functions-time-trigger-cron-cheat-sheet/)
But unfortunately it is triggering every 5 minutes. Somehow it's not working. Please help me with this.
**Target --> run the Function App at an interval of 1 hour.**<issue_comment>username_1: If you have changed your code, I suggest you right-click the project > **rebuild** the project and try again. Or you could try an expression like '`0 0 */1 * * *`'.
The result:
[](https://i.stack.imgur.com/LLYIP.png)
>
> But I'm not sure why it gets triggered every 5 minutes when the code is put in the Azure Portal.
>
>
>
The code in portal and Visual Studio is different. You could try to right click project>**publish**>choose Azure function service to publish your function to portal. Then check the schedule in function.json:
[](https://i.stack.imgur.com/zHwqv.png)
In Portal, you could also click '+' to create TimeTrigger in Azure function service directly:
[](https://i.stack.imgur.com/Ws5WF.png)
Upvotes: 2 <issue_comment>username_2: I came to a conclusion about the issue: although you update the timer trigger schedule at the code level, this does not affect the running schedule. Whatever is in the configuration wins (go to the Integrate tab of the Function App and check the timer value). Code changes do not update the config values; you need to change the config values manually.
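For reference, the schedule the portal actually runs on lives in the function's function.json timer binding. A sketch of what an hourly binding could look like (the binding name is illustrative; the exact file depends on your function):
```
{
  "bindings": [
    {
      "name": "myTimer",
      "type": "timerTrigger",
      "direction": "in",
      "schedule": "0 0 * * * *"
    }
  ]
}
```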
Upvotes: 1 [selected_answer]<issue_comment>username_3: You can set the trigger time as a config value, something like this:
```
[FunctionName(nameof(TimerFunction))]
public static async Task Run(
[TimerTrigger("%schedule%")]
TimerInfo timerInfo,
TraceWriter log)
{}
```
and then define **schedule** in the Application Settings of your Function App in Azure portal:
`schedule 0 */5 * * * *` (in this case every five minutes)
I'm using [crontab.guru](https://crontab.guru/) for choosing triggering intervals
Upvotes: 0 |
2018/03/22 | 853 | 3,122 | <issue_start>username_0: On a plain TYPO3 8 installation, I want ckeditor to allow the img tag in the source, without enabling the image plugin. The default configuration of ckeditor in TYPO3 removes the image plugin, which causes img tags to be removed as well.
According to the ckeditor documentation, this can be achieved with the property
```
config.extraAllowedContent = 'img'
```
but it does not work in my custom yaml configuration. img tags are always stripped when I switch from source to wysiwyg mode.
If I enable the image plugin, img tags are kept. But I do not want to give the editors this button as an option.
What am I doing wrong?
Here is my test yaml configuration. At the bottom is the extraAllowedContent option
```
imports:
- { resource: "EXT:rte_ckeditor/Configuration/RTE/Processing.yaml" }
- { resource: "EXT:rte_ckeditor/Configuration/RTE/Editor/Base.yaml" }
- { resource: "EXT:rte_ckeditor/Configuration/RTE/Editor/Plugins.yaml" }
editor:
config:
contentsCss: ["EXT:rte_ckeditor/Resources/Public/Css/contents.css", "EXT:tucmmforumhook/Resources/Public/Styles/Tucmain.css"]
stylesSet:
- { name: "XYZ Text", element: "span", attributes: { class: "highlighted red"} }
- { name: "Button", element: "a", attributes: { class: "button"} }
- { name: "Checklist", element: "ul", attributes: { class: "check-list"} }
format_tags: "p;h2;h3;h4;h5"
toolbarGroups:
- { name: styles, groups: [ styles, format ] }
- { name: basicstyles, groups: [ basicstyles ] }
- { name: paragraph, groups: [ list, indent, blocks, align ] }
- { name: links, groups: [ links ] }
- { name: clipboard, groups: [ clipboard, cleanup, undo ] }
- { name: editing, groups: [ spellchecker ] }
- { name: insert, groups: [ insert ] }
- { name: tools, groups: [ table, specialchar ] }
- { name: document, groups: [ mode ] }
justifyClasses:
- text-left
- text-center
- text-right
- text-justify
extraPlugins:
- justify
removePlugins:
- image
removeButtons:
- Strike
- Anchor
- Outdent
- Indent
- Blockquote
- JustifyBlock
extraAllowedContent:
- img
```<issue_comment>username_1: In order to achieve this you need to switch to the custom mode of ACF (Advanced Content Filter):
<https://sdk.ckeditor.com/samples/acfcustom.html> .
Use:
```
allowedContent:
- img[!src,alt,width,height]
```
to allow images.
It might then be tricky to make the configuration/filtering fit your presets: you have to add the other allowed tags to the `allowedContent` list as well, otherwise the buttons for tables, lists etc. would not be rendered.
**Avoid** using
```
allowedContent: true
```
because it would allow everything!
Upvotes: 0 <issue_comment>username_2: Based on @username_1's answer, I managed to make extraAllowedContent work. Instead of
```
extraAllowedContent:
- img
```
it was enough to use
```
extraAllowedContent:
- img[!src,alt,width,height]
```
This keeps the img tags intact, without needing the image plugin
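If further tags ever need to survive the cleanup, additional ACF rules can be appended to the same list; a purely hypothetical example (the `iframe` rule is only an illustration, not something the original setup needs):
```
extraAllowedContent:
  - img[!src,alt,width,height]
  - iframe[*]
```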
Upvotes: 3 [selected_answer] |
2018/03/22 | 355 | 1,291 | <issue_start>username_0: I update a PHP form and change the value, but it doesn't change; it saves the same previous value.
```
| Course Fees |
|
```
Please see the screenshot of the source code and the input field. I changed the value in the input field, but it remains the same in the source code, and the source-code value is saved to the DB.
[](https://i.stack.imgur.com/AuU64.png)<issue_comment>username_1: In order to achieve this you need to switch to the custom mode of AFC:
<https://sdk.ckeditor.com/samples/acfcustom.html> .
Use:
```
allowedContent:
- img[!src,alt,width,height]
```
to allow images.
It might be tricky than to make configuration/filtering fitting to your presets: you have to add other allowed tags to `allowedContent`list as well, otherwise the buttons for tables, list etc would not be rendered.
**Avoid** using
```
allowedContent: true
```
because it would allow everything!
Upvotes: 0 <issue_comment>username_2: Based on @username_1 answer, I managed to make the extraAllowedContent to work. Instead of
```
extraAllowedContent:
- img
```
it was enough to use
```
extraAllowedContent:
- img[!src,alt,width,height]
```
This keeps the img tags intact, without needing the image plugin
Upvotes: 3 [selected_answer] |
2018/03/22 | 955 | 2,643 | <issue_start>username_0: I have a table called articletag for a blog database that says which article has which tag:
```
Art_Id Tag_id
1 3
2 3
3 3
4 3
1 1
3 1
4 1
2 2
5 5
```
another way to see this data is:
```
1, "blog", "first"
2, "blog", "second"
3, "blog", "first"
4, "blog", "first"
5, "seaside"
```
Tag\_id 3 = 'blog' Tag\_id 1 = 'first' Tag\_id 5 = 'seaside' Tag\_id 2 = 'second'
I am specifically looking for any articles with 2 or more words in common among EVERY article in the database and EVERY word tag (these tags are unique, btw)
Looking at the denormalized example above the answer should be 1,3,4, as articles with 2 or more words in common. Those 3 articles clearly share "blog" and "first."
The output should be
art\_id
1
3
4
I have been trying for hours to get this right. The best I came up with was finding which tag\_id shows up 2 or more times using:
```
Select a.*
from articletag a
join (
select t.tag_id
from articletag t
group by t.tag_id
having count(*) >=2
) b on b.tag_id = a.tag_id
```
But what I really want is which Article\_id's have 2 or more words in common
Can anyone please help?<issue_comment>username_1: We can try doing a self join here:
```
SELECT t1.Art_id, t2.Art_id
FROM articletag t1
INNER JOIN articletag t2
ON t2.Art_id > t1.Art_id AND
t1.Tag_id = t2.Tag_id
GROUP BY
t1.Art_id, t2.Art_id
HAVING
COUNT(DISTINCT t1.Tag_id) >= 2;
```
[](https://i.stack.imgur.com/dcLpH.png)
[Demo
----](http://rextester.com/XTWYRP2961)
Note that I am seeing 1-3, 1-4, and 3-4 as being the articles which have two or more tags in common.
Upvotes: 2 <issue_comment>username_2: Try this:
```
declare @x table (art_id int, tag_id int)
insert into @x values
(1, 3),
(2, 3),
(3, 3),
(4, 3),
(1, 1),
(3, 1),
(4, 1),
(2, 2),
(5, 5)
select distinct art_id from (
select [x1].art_id,
COUNT(*) over (partition by [x1].art_id,[x2].art_id) [cnt]
from @x [x1] join @x [x2]
on [x1].tag_id = [x2].tag_id and [x1].art_id <> [x2].art_id
) a where cnt > 1
```
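Run against the sample data, the outer `distinct` collapses the matching pairs down to exactly the article ids the question asks for:
```
art_id
------
1
3
4
```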
Upvotes: 1 [selected_answer]<issue_comment>username_3: You could also use `cte` to find the `Art_Id`s which have same combination
```
;with cte as
(
select Tag_id
from table
group by Tag_id
having count(*) >= 2
)
select t.Art_Id
from cte c inner join table t
on t.Tag_id = c.Tag_id
group by t.Art_Id
having count(*) = (select count(1) from cte)
```
Upvotes: 0 |
2018/03/22 | 801 | 2,183 | <issue_start>username_0: When trying to navigate to the next activity after an AlertDialog, I am getting
this error:
>
> Activity has Leaked window DecorView@a61b0ed[] that was originally
> addded here
>
>
>
here is the snippet:
```
if (alert1 != null && alert1.isShowing()) {
alert1.dismiss();
}
builder.setCancelable(true);
final AlertDialog alert1 = builder.create();
alert1.show();
onPause();
final Timer t = new Timer();
t.schedule(new TimerTask() {
@Override
public void run() {
alert1.dismiss();
t.cancel();
}
}, 3000);
if (updatedQnty.equals("order full")) {
Intent intent = new Intent();
setResult(Activity.RESULT_OK, intent);
finish();
// callForDestroy(alert1);
}else{
mScannerView.resumeCameraPreview(this);
}
```
|
2018/03/22 | 712 | 2,779 | <issue_start>username_0: I have several maven-projects:
* commons-lib (*simple Java project*)
* user-service (*Spring-Boot-driven project*)
* composite-service (*Spring-Boot-driven project*)
* frontend-service (*Spring-Boot- / Angular2-driven project*)
* all-services-parent (*parent project building everything else*)
While **commons-lib** is unlikely to ever be released separately, all other projects (except for the parent) might be released separately.
If the first four in the list are sub-modules of the fifth, do they **have to** have their parent set to that parent (i.e. all-services-parent) in return?
Since I want to include *commons-lib* in the user- and composite-services I understand that I have to have it built first. However: each of the services above may be released separately - so which building structure is most proper for what I need?
Would it be:
```
-- all-services-parent
|-- (maven sub-module) commons-lib
|-- (maven sub-module) user-service
|-- (maven sub-module) composite-service
|-- (maven sub-module) frontend-service
```
or:
```
-- all-services-parent
|-- user-service-parent
|-- (maven sub-module) commons-lib
|-- (maven sub-module) user-service
|-- composite-service-parent
|-- (maven sub-module) commons-lib
|-- (maven sub-module) composite-service
|-- frontend-service
```
The second building structure would allow me to build all the JARs by calling "mvn clean install" on all-services-parent while still being able to build separate projects properly by calling "mvn clean install" on the corresponding parent, but is it really how it's done?
In my current setup I am trying to use the first building structure, but since e.g. composite-service has "spring-boot-starter-parent" set as its parent, I cannot access the properties or anything from the "all-services-parent"-module.
I read into [Maven parent pom vs modules pom](https://stackoverflow.com/questions/1992213/maven-parent-pom-vs-modules-pom) (a question that looked promising at first), but it did not apply to my case as much as I would like it to.<issue_comment>username_1: Try to import the spring boot parent rather than inheriting from it, like this:
```
<dependencyManagement>
  <dependencies>
    <dependency>
      <groupId>org.springframework.boot</groupId>
      <artifactId>spring-boot-starter-parent</artifactId>
      <version>1.5.9.RELEASE</version>
      <type>pom</type>
      <scope>import</scope>
    </dependency>
  </dependencies>
</dependencyManagement>
```
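With that BOM imported in the aggregator's dependencyManagement, a service module can keep all-services-parent as its own parent and still declare Spring Boot starters without explicit versions. A sketch with illustrative coordinates (the groupId and version are made up):
```
<parent>
    <groupId>com.example</groupId>
    <artifactId>all-services-parent</artifactId>
    <version>1.0.0-SNAPSHOT</version>
</parent>

<dependencies>
    <!-- version is supplied by the imported spring-boot BOM -->
    <dependency>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-starter-web</artifactId>
    </dependency>
</dependencies>
```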
Upvotes: 3 [selected_answer]<issue_comment>username_2: A parent module's properties file can be accessed by its sub-modules, but a sub-module's properties file cannot be accessed by the other sub-modules.
**or**
You need to create a separate properties file for each sub-module.
Upvotes: 0 <issue_comment>username_2: Add your sub-modules like this in your pom.xml file and give the path to each one.
This should work:
```
<modules>
    <module>../TestWeb</module>
    <module>../TestBusiness</module>
    <module>../MsgScheduler</module>
    <module>../Connector</module>
</modules>
```
Upvotes: -1 |
2018/03/22 | 476 | 1,639 | <issue_start>username_0: I have the following text, which I receive from my database
`"----- Some Text ------ Bônus -------- Some Text ------- "`
I am storing it in a String variable, printing the variable to the console, and writing it into the PDF. The issue is that I am not getting the text in the correct format in the console or in the PDF; **Bônus** comes out with the accented character garbled. I referred to this example
<http://itext.2136553.n4.nabble.com/Problem-with-spanish-character-td2163635.html>
and changed the encoding of my compiler in gradle file to ,
```
compileJava.options.encoding = 'UTF-8'
```
But the issue still didn't get resolved.
The example text above is just a sample; there are many other words that get changed, or an empty box is displayed instead.
Do I need to look into this from the iText perspective, or the compiler's?
Changing the compiler encoding didn't help though.<issue_comment>username_1: Check whether you have a Unicode font. That could be causing the problem in your case. Here is the code for setting the font:
```
BaseFont basefont = BaseFont.createFont("font_name.ttf", BaseFont.IDENTITY_H, BaseFont.EMBEDDED);
```
And then use it just like that:
```
Paragraph p = new Paragraph("text in unicode", new Font(basefont, 22));
```
Upvotes: 1 <issue_comment>username_2: After some more searching, I was finally able to get it into UTF-8 form:
```
byte[] ptext = originalString.getBytes(ISO_8859_1);
String value = new String(ptext, UTF_8);
```
Please find the link I referred to,
[Encode String to UTF-8](https://stackoverflow.com/questions/5729806/encode-string-to-utf-8)
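For reference, the same re-decoding written with the explicit StandardCharsets constants (this assumes the bytes were originally UTF-8 but were decoded as ISO-8859-1 somewhere upstream):
```
import java.nio.charset.StandardCharsets;

String value = new String(originalString.getBytes(StandardCharsets.ISO_8859_1), StandardCharsets.UTF_8);
```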
Upvotes: 1 [selected_answer] |
2018/03/22 | 662 | 1,642 | <issue_start>username_0: I want to use a function that requires concatenating two dataframes. Here is the example:
```
import numpy as np
import pandas as pd
data1 = np.array([['','Col1','Col2'],['1',1,2],['2',3,4]])
data1=pd.DataFrame(data=data1[1:,1:],index=data1[1:,0],columns=data1[0,1:])
data2=np.array([['','Col1','Col2'],['1',5,6],['2',7,8]])
data2=pd.DataFrame(data=data2[1:,1:],index=data2[1:,0],columns=data2[0,1:])
X=pd.concat([data1,data2],0)
X_transformed=func(X)
```
Now I want to unconcat X_transformed back into the original data1 and data2. Is there a way to do so?<issue_comment>username_1: You can add the parameter `keys` to [`concat`](http://pandas.pydata.org/pandas-docs/stable/generated/pandas.concat.html) to distinguish each DataFrame, and then select with `loc`:
```
X=pd.concat([data1,data2],0, keys=[0,1])
print (X)
Col1 Col2
0 1 1 2
2 3 4
1 1 5 6
2 7 8
data11 = X.loc[0]
data22 = X.loc[1]
print (data11)
Col1 Col2
1 1 2
2 3 4
print (data22)
Col1 Col2
1 5 6
2 7 8
```
EDIT:
More general solution:
```
data3=data2.iloc[[0]].rename({'1':'10'})
dfs = [data1,data2,data3]
X=pd.concat(dfs, keys=np.arange(len(dfs)))
print (X)
Col1 Col2
0 1 1 2
2 3 4
1 1 5 6
2 7 8
2 10 5 6
print (X.xs(0))
Col1 Col2
1 1 2
2 3 4
print (X.xs(1))
Col1 Col2
1 5 6
2 7 8
print (X.xs(2))
Col1 Col2
10 5 6
```
Upvotes: 3 [selected_answer]<issue_comment>username_2: `X_transformed.iloc[:data1.shape[0]]` is the first dataframe. `X_transformed.iloc[data1.shape[0]:]` is the second one.
Upvotes: 0 |
2018/03/22 | 599 | 2,078 | <issue_start>username_0: I have a grid of 4 buttons and once one of them is clicked it will call a function called `doSearch` which checks which button was clicked and based on that assigns a string to the `last_search` value.
However, when I click any of the four buttons, it always behaves as if the `edm` button was pressed and logs 'i am edm' to the console.
Could anyone explain why that is?
html
```
EDM
House
Pop
Dubstep
```
function code
```
doSearch(event): void {
if (document.getElementById('edm-btn')) {
this.last_search = 'edm';
console.log('i am edm');
} else if (document.getElementById('house-btn')) {
this.last_search = 'house';
console.log('i am house');
} else if (document.getElementById('pop-btn')) {
this.last_search = 'pop';
console.log('i am pop');
} else if (document.getElementById('dubstep-btn')) {
this.last_search = 'dubstep';
console.log('i am dubstep');
}
}
```
---
FIX:
instead of passing the `id` of the button, I decided to pass a string directly into the function call of `doSearch`
html
```
EDM
House
Pop
Dubstep
```
function
```
doSearch(category): void {
console.log(JSON.stringify(category, null, 2));
if (category === 'edm') {
this.last_search = 'edm';
console.log('i am edm');
} else if (category === 'house') {
this.last_search = 'house';
console.log('i am house');
} else if (category === 'pop') {
this.last_search = 'pop';
console.log('i am pop');
} else if (category === 'dubstep') {
this.last_search = 'dubstep';
console.log('i am dubstep');
}
}
```<issue_comment>username_1: It's because no matter which button you click, your 1st condition is always true. You are passing an event, not the actual data, and `document.getElementById('edm-btn')` only checks whether that element exists in the DOM (which it always does), not whether it was the one clicked.
Upvotes: 3 [selected_answer]<issue_comment>username_2: You actually don't need if and else here; this is enough:
```
public doSearch(category: string) {
this.last_search = category;
}
```
Upvotes: 0 |
2018/03/22 | 1,193 | 3,488 | <issue_start>username_0: I want to upgrade my application from angular4 to angular5. What changes do I need to make in `package.json`?
Here are my dependencies:
```
"dependencies": {
"@material/fab": "^0.28.0",
"@progress/kendo-angular-charts": "1.2.1",
"@progress/kendo-angular-dateinputs": "1.0.5",
"@progress/kendo-angular-dropdowns": "1.1.1",
"@progress/kendo-angular-excel-export": "1.0.3",
"@progress/kendo-angular-grid": "1.1.3",
"@progress/kendo-angular-inputs": "1.0.6",
"@progress/kendo-angular-intl": "1.2.1",
"@progress/kendo-angular-l10n": "1.0.2",
"@progress/kendo-data-query": "1.0.5",
"@progress/kendo-drawing": "1.1.1",
"angular-2-dropdown-multiselect": "1.5.4",
"angular-tree-component": "4.1.0",
"angular2-cool-storage": "3.1.0",
"core-js": "2.5.0",
"hammerjs": "2.0.8",
"mydatepicker": "2.0.27",
"ng-sidebar": "6.0.1",
"ng2-device-detector": "^1.0.0",
"ng2-dnd": "4.2.0",
"ng2-dropdown-treeview": "2.0.1",
"ng2-toasty": "4.0.3",
"ng2draggable": "1.3.2",
"ngx-bootstrap": "1.9.3",
"ngx-clipboard": "8.0.4",
"primeng": "4.1.3",
"rxjs": "5.4.3",
"typescript": "2.3.4",
"zone.js": "0.8.16"
}, "devDependencies": {
"@angular/cli": "1.3.0",
"@angular/compiler-cli": "4.3.4",
"@types/jasmine": "2.5.53",
"@types/node": "8.0.22",
"codelyzer": "3.2.0",
"jasmine-core": "~2.8.0",
"jasmine-spec-reporter": "4.2.1",
"karma": "1.7.0",
"karma-chrome-launcher": "2.2.0",
"karma-cli": "1.0.1",
"karma-jasmine": "1.1.0",
"karma-jasmine-html-reporter": "0.2.2",
"karma-coverage-istanbul-reporter": "1.3.0",
"protractor": "5.1.2",
"ts-node": "3.3.0",
"tslint": "5.7.0",
"typescript": "2.5.2"
}
```<issue_comment>username_1: I would advise you, from my experience, to create a new project with the newest @angular/cli and copy your code there. It will be easier than configuring Angular 5 by hand, especially if you have ejected your project, because the webpack configurations are quite different and you will get a lot of weird errors by just updating packages.
Upvotes: 1 <issue_comment>username_2: Just follow these commands.
```
npm uninstall -g angular-cli
npm uninstall --save-dev angular-cli
```
You have to upgrade your Angular CLI to avoid some conflicts.
This command is going to upgrade your global package:
```
npm uninstall -g @angular/cli
npm cache verify
# if npm version is < 5 then use `npm cache clean`
npm install -g @angular/cli@latest
```
and this command will upgrade your local package
```
rm -rf node_modules dist <-- in mac os
# use rmdir /S/Q node_modules dist <--- in Windows Command Prompt;
use rm -r -fo node_modules,dist <--- in Windows PowerShell
npm install --save-dev @angular/cli@latest
npm install
```
I used this to upgrade a project from Angular 4 to Angular 5, and I hit a small bug because I use a library that is deprecated in Angular 4,
so my suggestion is to back up your project and then try to upgrade it!
Hope you enjoy :)
P.S. I copied these commands from somewhere and didn't save the link, so sorry for not giving credit.
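Note that the CLI commands above do not bump the framework itself: the @angular/* entries in package.json (core, common, compiler, compiler-cli, forms, router, and so on) also have to move to the 5.x line, together with compatible rxjs and TypeScript versions. A hedged example of the kind of command this usually boils down to (the package list and versions are illustrative; check the official Angular update guide for the exact set):
```
npm install @angular/{animations,common,compiler,compiler-cli,core,forms,http,platform-browser,platform-browser-dynamic,router}@^5.0.0 rxjs@^5.5.2 typescript@2.4.2 --save
```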
Upvotes: 0 <issue_comment>username_3: You can uninstall and reinstall, specifying the version you want:
```
npm install @angular/cli@5.2.9
```
(use whichever version you need).
Then remove the node_modules folder in the project and run
```
npm install
```
Then the package.json is automatically upgraded to 5.
Upvotes: 0 |
2018/03/22 | 2,917 | 10,138 | <issue_start>username_0: I have a problem when trying to build the app after updating the Play Services version to 12.0.0, but it works fine with version 11.8.0. It says the databinding package is not found in the app.
**Message :**
>
> Error:(20, 38) error: package com.app.test.databinding does not exist
> Error:(31, 13) error: cannot find symbol class MainFragmentBinding
>
>
>
**Gradle Console Log**
```
Exception is:
org.gradle.api.tasks.TaskExecutionException: Execution failed for task ':app:compileDevDebugJavaWithJavac'.
at org.gradle.api.internal.tasks.execution.ExecuteActionsTaskExecuter.executeActions(ExecuteActionsTaskExecuter.java:100)
at org.gradle.api.internal.tasks.execution.ExecuteActionsTaskExecuter.execute(ExecuteActionsTaskExecuter.java:70)
at org.gradle.api.internal.tasks.execution.SkipUpToDateTaskExecuter.execute(SkipUpToDateTaskExecuter.java:63)
at org.gradle.api.internal.tasks.execution.ResolveTaskOutputCachingStateExecuter.execute(ResolveTaskOutputCachingStateExecuter.java:54)
at org.gradle.api.internal.tasks.execution.ValidatingTaskExecuter.execute(ValidatingTaskExecuter.java:58)
at org.gradle.api.internal.tasks.execution.SkipEmptySourceFilesTaskExecuter.execute(SkipEmptySourceFilesTaskExecuter.java:88)
at org.gradle.api.internal.tasks.execution.ResolveTaskArtifactStateTaskExecuter.execute(ResolveTaskArtifactStateTaskExecuter.java:52)
at org.gradle.api.internal.tasks.execution.SkipTaskWithNoActionsExecuter.execute(SkipTaskWithNoActionsExecuter.java:52)
at org.gradle.api.internal.tasks.execution.SkipOnlyIfTaskExecuter.execute(SkipOnlyIfTaskExecuter.java:54)
at org.gradle.api.internal.tasks.execution.ExecuteAtMostOnceTaskExecuter.execute(ExecuteAtMostOnceTaskExecuter.java:43)
at org.gradle.api.internal.tasks.execution.CatchExceptionTaskExecuter.execute(CatchExceptionTaskExecuter.java:34)
at org.gradle.execution.taskgraph.DefaultTaskGraphExecuter$EventFiringTaskWorker$1.run(DefaultTaskGraphExecuter.java:248)
at org.gradle.internal.progress.DefaultBuildOperationExecutor$RunnableBuildOperationWorker.execute(DefaultBuildOperationExecutor.java:336)
at org.gradle.internal.progress.DefaultBuildOperationExecutor$RunnableBuildOperationWorker.execute(DefaultBuildOperationExecutor.java:328)
at org.gradle.internal.progress.DefaultBuildOperationExecutor.execute(DefaultBuildOperationExecutor.java:197)
at org.gradle.internal.progress.DefaultBuildOperationExecutor.run(DefaultBuildOperationExecutor.java:107)
at org.gradle.execution.taskgraph.DefaultTaskGraphExecuter$EventFiringTaskWorker.execute(DefaultTaskGraphExecuter.java:241)
at org.gradle.execution.taskgraph.DefaultTaskGraphExecuter$EventFiringTaskWorker.execute(DefaultTaskGraphExecuter.java:230)
at org.gradle.execution.taskgraph.DefaultTaskPlanExecutor$TaskExecutorWorker.processTask(DefaultTaskPlanExecutor.java:124)
at org.gradle.execution.taskgraph.DefaultTaskPlanExecutor$TaskExecutorWorker.access$200(DefaultTaskPlanExecutor.java:80)
at org.gradle.execution.taskgraph.DefaultTaskPlanExecutor$TaskExecutorWorker$1.execute(DefaultTaskPlanExecutor.java:105)
at org.gradle.execution.taskgraph.DefaultTaskPlanExecutor$TaskExecutorWorker$1.execute(DefaultTaskPlanExecutor.java:99)
at org.gradle.execution.taskgraph.DefaultTaskExecutionPlan.execute(DefaultTaskExecutionPlan.java:625)
at org.gradle.execution.taskgraph.DefaultTaskExecutionPlan.executeWithTask(DefaultTaskExecutionPlan.java:580)
at org.gradle.execution.taskgraph.DefaultTaskPlanExecutor$TaskExecutorWorker.run(DefaultTaskPlanExecutor.java:99)
at org.gradle.execution.taskgraph.DefaultTaskPlanExecutor.process(DefaultTaskPlanExecutor.java:60)
```
**App Gradle**
```
buildscript {
repositories {
maven { url 'https://maven.fabric.io/public' }
}
dependencies {
classpath 'io.fabric.tools:gradle:1.+'
}
}
apply plugin: 'com.android.application'
apply plugin: 'io.fabric'
apply plugin: 'realm-android'
repositories {
maven { url 'https://maven.fabric.io/public' }
}
ext {
VERSION_BATCH_NOTIFICATION = '1.10.2'
VERSION_PARCELER = '1.1.9'
VERSION_RETROFIT = '2.3.0'
VERSION_SUPPORT_LIB = '27.1.0'
VERSION_PLAYSERVICE = '12.0.0'
VERSION_FIREBASE = '12.0.0'
SDK_MINUMUM = 17
SDK_TARGET = 27
}
android {
compileSdkVersion SDK_TARGET
//keystore configurations
defaultConfig {
minSdkVersion SDK_MINUMUM
targetSdkVersion SDK_TARGET
testInstrumentationRunner "android.support.test.runner.AndroidJUnitRunner"
multiDexEnabled true
}
buildTypes {
release {
minifyEnabled false
proguardFiles getDefaultProguardFile('proguard-android.txt'), 'proguard-rules.pro'
}
}
flavorDimensions "environment"
//app flavors
productFlavors {
dev {
dimension "environment"
applicationId "com.app.test"
}
}
}
//data binding enables
dataBinding {
enabled = true
}
//enabled lamda expressions
compileOptions {
sourceCompatibility JavaVersion.VERSION_1_8
targetCompatibility JavaVersion.VERSION_1_8
}
lintOptions {
checkReleaseBuilds false
// Or, if you prefer, you can continue to check for errors in release builds,
// but continue the build even when errors are found:
abortOnError false
}
testOptions {
animationsDisabled = true
}
}
dependencies {
implementation fileTree(include: ['*.jar'], dir: 'libs')
//multidex lib
implementation 'com.android.support:multidex:1.0.3'
//android support libs
implementation "com.android.support:appcompat-v7:${VERSION_SUPPORT_LIB}"
implementation "com.android.support:support-v13:${VERSION_SUPPORT_LIB}"
implementation "com.android.support:design:${VERSION_SUPPORT_LIB}"
implementation "com.android.support:cardview-v7:${VERSION_SUPPORT_LIB}"
implementation "com.android.support:recyclerview-v7:${VERSION_SUPPORT_LIB}"
implementation "com.android.support:support-v4:${VERSION_SUPPORT_LIB}"
//libs for views and layout
implementation 'com.android.support.constraint:constraint-layout:1.0.2'
implementation 'de.hdodenhof:circleimageview:2.2.0'
implementation 'com.romandanylyk:pageindicatorview:1.0.0@aar'
implementation 'com.daimajia.swipelayout:library:1.2.0@aar'
//libs for api call and image loader
implementation "com.squareup.retrofit2:retrofit:${VERSION_RETROFIT}"
implementation "com.squareup.retrofit2:converter-gson:${VERSION_RETROFIT}"
implementation "com.squareup.retrofit2:adapter-rxjava2:${VERSION_RETROFIT}"
implementation 'com.github.bumptech.glide:glide:4.5.0'
//Image picker and cropper
implementation 'com.github.esafirm.android-image-picker:imagepicker:1.12.0'
implementation 'com.github.esafirm.android-image-picker:rximagepicker:1.12.0'
implementation 'com.theartofdev.edmodo:android-image-cropper:2.5.1', {
exclude group: 'com.android.support', module: 'exifinterface'
}
//lib for batch notification
implementation "com.batch.android:batch-sdk:${VERSION_BATCH_NOTIFICATION}"
//parceller libs
implementation "org.parceler:parceler-api:${VERSION_PARCELER}"
annotationProcessor "org.parceler:parceler:${VERSION_PARCELER}"
//gcm
implementation "com.google.android.gms:play-services-gcm:${VERSION_PLAYSERVICE}"
//Firebase libs
implementation "com.google.firebase:firebase-messaging:${VERSION_FIREBASE}"
implementation "com.google.firebase:firebase-core:${VERSION_FIREBASE}"
implementation "com.google.firebase:firebase-auth:${VERSION_FIREBASE}"
implementation "com.google.firebase:firebase-database:${VERSION_FIREBASE}"
implementation 'com.firebase:geofire-android:2.3.0'
//lib for map
implementation "com.google.android.gms:play-services-maps:${VERSION_PLAYSERVICE}"
implementation "com.google.android.gms:play-services-location:${VERSION_PLAYSERVICE}"
//rxjava-2 libs
implementation 'io.reactivex.rxjava2:rxjava:2.1.7'
implementation 'io.reactivex.rxjava2:rxandroid:2.0.1'
//language lib for managing emoji
implementation 'org.apache.commons:commons-text:1.2'
//Testing libs
testImplementation 'junit:junit:4.12'
androidTestImplementation 'com.android.support.test:runner:1.0.1'
androidTestImplementation 'com.android.support.test.espresso:espresso-core:3.0.1'
implementation 'com.android.support.test.espresso:espresso-idling-resource:3.0.1'
//Crashlytics lib
implementation('com.crashlytics.sdk.android:crashlytics:2.8.0@aar') {
transitive = true;
}
}
apply plugin: 'com.google.gms.google-services'
```
**Note**
I'm able to run when I downgrade the playservice and firebase version to 11.8.0. |
2018/03/22 | 1,756 | 6,533 | <issue_start>username_0: I am facing an issue: I was able to download an Excel file with Angular 1, but when I implement the same code in Angular 5 it shows an error that the file is corrupted. My response is an **ArrayBuffer** and I am unable to read the file.
Below is my code:
**Service:**
```
DownloadData(model:requiredParams):Observable{
const headers = new Headers();
const requestOptions = new RequestOptions({ headers: headers });
requestOptions.headers.append('Content-Type', 'application/json');
const body = JSON.stringify(model);
return this.http.post(url, body, requestOptions)
.map((res:any) => res)
.catch((e: any) => Observable.throw(this.errorHandler(e)));
}
```
**Component:**
```
exportToExcel() {
this.loadingOverlayFlag = true;
this.podashboardService.DownloadData(this.data).subscribe(result=>{
console.log(result);
this.downloadFile(result._body,'application/vnd.openxmlformats-officedocument.spreadsheetml.sheet', 'export.xlsx');
})
}
downloadFile(blob: any, type: string, filename: string) {
var binaryData = [];
binaryData.push(blob);
const url = window.URL.createObjectURL(new Blob(binaryData, {type: "application/vnd.openxmlformats-officedocument.spreadsheetml.sheet"})); // <-- work with blob directly
// create hidden dom element (so it works in all browsers)
const a = document.createElement('a');
a.setAttribute('style', 'display:none;');
document.body.appendChild(a);
// create file, attach to hidden element and open hidden element
a.href = url;
a.download = filename;
a.click();
}
```
I am able to download the file, but unable to read its content. The error is:
>
> Microsoft Excel
>
> Excel cannot open the file '███████ DASHBOARD (5).xlsx' because the file format or file extension is not valid.
> Verify that the file has not been corrupted and that the file extension matches the format of the file. OK
>
>
><issue_comment>username_1: I struggled with this one all day. Replace the Angular HttpClient and use XMLHttpRequest as follows:
```
var oReq = new XMLHttpRequest();
oReq.open("POST", url, true);
oReq.setRequestHeader("content-type", "application/json");
oReq.responseType = "arraybuffer";
oReq.onload = function (oEvent) {
var arrayBuffer = oReq.response;
if (arrayBuffer) {
var byteArray = new Uint8Array(arrayBuffer);
console.log(byteArray, byteArray.length);
this.downloadFile(byteArray, 'application/vnd.openxmlformats-officedocument.spreadsheetml.sheet', 'export.xlsx');
}
};
oReq.send(body);
```
Then modify the creation of the Blob in your downloadFile function:
```
const url = window.URL.createObjectURL(new Blob([binaryData]));
```
In your case the service will look something like this:
```
DownloadData(model:requiredParams):Observable{
return new Observable(obs => {
var oReq = new XMLHttpRequest();
oReq.open("POST", url, true);
oReq.setRequestHeader("content-type", "application/json");
oReq.responseType = "arraybuffer";
oReq.onload = function (oEvent) {
var arrayBuffer = oReq.response;
var byteArray = new Uint8Array(arrayBuffer);
obs.next(byteArray);
};
const body = JSON.stringify(model);
oReq.send(body);
});
}
```
Then the component:
```
exportToExcel() {
this.loadingOverlayFlag = true;
this.podashboardService.DownloadData(this.data).subscribe(result=>{
// console.log(result);
this.downloadFile(result,'application/vnd.openxmlformats-
officedocument.spreadsheetml.sheet', 'export.xlsx');
})
}
downloadFile(blob: any, type: string, filename: string) {
var binaryData = [];
binaryData.push(blob);
 const url = window.URL.createObjectURL(new Blob(binaryData, { type: type })); // <-- work with blob directly
// create hidden dom element (so it works in all browsers)
const a = document.createElement('a');
a.setAttribute('style', 'display:none;');
document.body.appendChild(a);
// create file, attach to hidden element and open hidden element
a.href = url;
a.download = filename;
a.click();
}
```
Upvotes: 4 [selected_answer]<issue_comment>username_2: I managed to make it work by using HttpClient (`responseType: 'arraybuffer'` in the httpOptions did the trick).
```
createReportBackend() {
const httpOption: Object = {
observe: 'response',
headers: new HttpHeaders({
'Content-Type': 'application/json'
}),
responseType: 'arraybuffer'
};
this.httpClient.post('http://localhost:8080/api/report', this.data, httpOption)
.pipe(map((res: HttpResponse) => {
return {
filename: 'Drinks.xlsx',
data: new Blob(
[res['body']],
{ type: 'application/vnd.openxmlformats-officedocument.spreadsheetml.sheet'}
),
};
}))
.subscribe(res => {
if (window.navigator.msSaveOrOpenBlob) {
window.navigator.msSaveBlob(res.data, res.filename);
} else {
const link = window.URL.createObjectURL(res.data);
const a = document.createElement('a');
document.body.appendChild(a);
a.setAttribute('style', 'display: none');
a.href = link;
a.download = res.filename;
a.click();
window.URL.revokeObjectURL(link);
a.remove();
}
}, error => {
throw error;
}, () => {
console.log('Completed file download.');
});
}
```
Upvotes: 3 <issue_comment>username_3: The easiest way I found to download the Excel file uses file-saver:
```
//Declaration
headers: HttpHeaders;
options: any;
//Constructor or u can have for specific method
this.headers = new HttpHeaders({ 'Content-Type': 'application/json' });
this.options = {
observe: 'response',
headers: this.headers,
responseType: 'arraybuffer'
};
//Service request:
this.httpClient.post('http://localhost:8080/api/report', this.data, this.options)
.pipe(
catchError(err => this.handleError(err))
).subscribe(response => {
Helper.exportExelFile(response, 'FileName');
});
//component or in helper function in one class
import * as FileSaver from 'file-saver';
function exportExelFile(data, filename) {
const blobValue = new Blob([data['body']], {
type: 'application/vnd.openxmlformats-officedocument.spreadsheetml.sheet'
});
FileSaver.saveAs(blobValue, filename + '.' + FileType.EXCEL);
}
export const Helper = {
exportExelFile
};
```
Upvotes: 0 |
2018/03/22 | 532 | 1,772 | <issue_start>username_0: I am new to Xamarin and trying to create a category for UITextField, as below:
```
using UIKit;
using System.Diagnostics.Contracts;
using CoreGraphics;
using System.ComponentModel;
using Foundation;
namespace WeatherApp.iOS.Extension
{
[Category(typeof(UITextField))]
public static class UITextFieldExtensionMethod
{
[Export ("setBottomBorder")]
static UITextField setBottomBorder(this UITextField self)
{
Contract.Ensures(Contract.Result() != null);
self.BorderStyle = UITextBorderStyle.None;
self.Layer.MasksToBounds = true;
self.Layer.ShadowColor = UIColor.FromRGB(red: (System.nfloat)230.0, green: (System.nfloat)230.0, blue: (System.nfloat)230.0).CGColor;
self.Layer.ShadowOffset = new CGSize(width: 0.0, height: 0.9);
self.Layer.ShadowOpacity = 1.0f;
self.Layer.ShadowRadius = 0.0f;
return self;
}
}
}
```
but getting an error on **[Category(typeof(UITextField))]** as
*cannot convert from 'System.Type' to 'string' (CS1503)*
Please suggest what I am doing wrong while creating the category. I am working with Visual Studio for Mac.<issue_comment>username_1: Since you are `using System.ComponentModel;`, please refer to this API: [CategoryAttribute Class](https://msdn.microsoft.com/en-us/library/system.componentmodel.categoryattribute%28v=vs.110%29.aspx?f=255&MSPPError=-2147217396). This attribute only accepts a string parameter.
Did you want to use [ObjCRuntime.CategoryAttribute Class](https://developer.xamarin.com/api/type/ObjCRuntime.CategoryAttribute/)? With this namespace you can use it successfully.
Upvotes: 0 <issue_comment>username_2: Remove:
```
using System.ComponentModel;
```
and add:
```
using ObjCRuntime;
```
You added the wrong using.
Upvotes: 1 |
2018/03/22 | 1,115 | 3,483 | <issue_start>username_0: With *Rcpp*, I've defined a matrix *M* in *C++*. Using `M.nrow()`, we should be able to retrieve the number of rows. However, when I tried to return the number of rows as an `IntegerVector`, the answer is incorrect:
```
set.seed(1100)
M = matrix(sample(18), nrow = 6)
Rcpp::cppFunction('IntegerVector tccp5(IntegerMatrix M) { int x = M.nrow(); return x;}')
tccp5(M)
# [1] 0 0 0 0 0 0
```
The correct answer should be the number of rows, e.g.
```
# [1] 6
```
Can you please explain what is happening?<issue_comment>username_1: The anomalous output arises because of the type declaration of the function.
```r
library(Rcpp)
M <- matrix(sample(1:18), nrow=6)
cppFunction('int tccp6(IntegerMatrix M) { int x = M.nrow(); return x;}')
tccp6(M)
#> [1] 6
```
Created on 2018-03-22 by the [reprex package](http://reprex.tidyverse.org) (v0.2.0).
Upvotes: 1 <issue_comment>username_2: While @username_1's answer is spot on, I wanted to expand upon *why* declaring as an `int` and not `IntegerVector` leads to the right construction...
In particular, each *return* into *R* is seamlessly converted using `wrap()` to an [`SEXP`](https://cran.r-project.org/doc/manuals/r-release/R-ints.html#SEXPs), or an *S* expression that points to the structure. By supplying `IntegerVector` and returning an `int`, the `wrap()` must instantiate a new `IntegerVector` of length `x` as there is no `SEXP` underlying an `int`. On the other hand, when the defined return type is `int`, the `wrap()` feature of *Rcpp* is able to correctly coerce the `int` into an `IntegerVector`.
---
To emphasize the underlying "seamless" transition that happens, let's add the parameter `verbose = TRUE` to `cppFunction()` to view how the *C++* code is: compiled, linked, and imported into *R*. (Note: I've truncated the output to the compilations.)
If we consider:
```
Rcpp::cppFunction('IntegerVector tccp5(IntegerMatrix M) { int x = M.nrow(); return x;}',
verbose = TRUE)
```
We get:
```
Generated code for function definition:
--------------------------------------------------------
#include <Rcpp.h>
using namespace Rcpp;
// [[Rcpp::export]]
IntegerVector tccp5(IntegerMatrix M) { int x = M.nrow(); return x;}
Generated extern "C" functions
--------------------------------------------------------
#include <Rcpp.h>
// tccp5
IntegerVector tccp5(IntegerMatrix M);
RcppExport SEXP sourceCpp_7_tccp5(SEXP MSEXP) {
BEGIN_RCPP
Rcpp::RObject rcpp_result_gen;
Rcpp::RNGScope rcpp_rngScope_gen;
Rcpp::traits::input_parameter< IntegerMatrix >::type M(MSEXP);
rcpp_result_gen = Rcpp::wrap(tccp5(M));
return rcpp_result_gen;
END_RCPP
}
```
Compared to:
```
Rcpp::cppFunction('int tccp5(IntegerMatrix M) { int x = M.nrow(); return x;}',
verbose = TRUE)
```
which gives:
```
Generated code for function definition:
--------------------------------------------------------
#include <Rcpp.h>
using namespace Rcpp;
// [[Rcpp::export]]
int tccp5(IntegerMatrix M) { int x = M.nrow(); return x;}
Generated extern "C" functions
--------------------------------------------------------
#include <Rcpp.h>
// tccp5
int tccp5(IntegerMatrix M);
RcppExport SEXP sourceCpp_9_tccp5(SEXP MSEXP) {
BEGIN_RCPP
Rcpp::RObject rcpp_result_gen;
Rcpp::RNGScope rcpp_rngScope_gen;
Rcpp::traits::input_parameter< IntegerMatrix >::type M(MSEXP);
rcpp_result_gen = Rcpp::wrap(tccp5(M));
return rcpp_result_gen;
END_RCPP
}
```
Upvotes: 3 [selected_answer] |
2018/03/22 | 1,939 | 5,877 | <issue_start>username_0: I need help with modifying the code below.
I have two tables,
1. table one has two rows and six columns
2. table two has two rows and six columns with each cell colored either red or yellow creating two congruent shapes.
I want to display an alert when the cells in "table one" match the congruent shapes and their colors of "table two".
\*note... You can change the color of the cells by selecting one of the top left buttons (white, yellow or red).
\*\*Is there a way that allows for two solutions that match table two? I've included an image as an example.
[](https://i.stack.imgur.com/5SDRb.png)
```js
jQuery(function() {
var brush = "white_block";
jQuery('input.block').on('click', function() {
brush = jQuery(this).data('brush');
});
function cellCheck() {
var reds = jQuery('#two .red_block').length,
yellows = jQuery('#two .yellow_block').length,
cells_colored = reds + yellows,
cells_total = jQuery('#two td').length;
// // condition 1: all colored
// if (cells_colored == cells_total) {
// setTimeout(function() {alert("All Colored");}, 100);
// }
// // condition 2: equal colors
// if (reds == yellows) {
// setTimeout(function() {alert("Equal colors");}, 100);
// }
// // condition 3: both conditions
// if (cells_colored == cells_total && reds == yellows) {
// setTimeout(function() {alert("Finished!");}, 100);
// }
}
jQuery('td').on('click', function() {
jQuery(this).removeClass('white_block yellow_block red_block').addClass(brush);
cellCheck();
});
});
```
```css
.block {
border: thin solid #000000;
width: 59px;
height: 57px;
}
.white_block {
background-color: #FFFFFF;
}
.red_block {
background-color: #FF0000;
}
.yellow_block {
background-color: #FFFF00;
}
table {
margin: 1em 0 0;
}
```
```html
Buttons:
Table One:
| | | |
| --- | --- | --- |
| | | |
| | | |
Table Two:
| | | |
| --- | --- | --- |
| | | |
| | | |
```<issue_comment>username_1: Please use this [fiddle](https://jsfiddle.net/96oa88eu/46/)
```js
jQuery(function() {
var brush = "white_block";
jQuery('input.block').on('click', function() {
brush = jQuery(this).data('brush');
});
var arr_t_one = [];
var arr_t_two = [];
var s = 0;
function cellCheck() {
var reds = jQuery('#two .red_block').length,
yellows = jQuery('#two .yellow_block').length,
cells_colored = reds + yellows,
cells_total = jQuery('#two td').length;
arr_t_one = [];
arr_t_two = [];
$( "#one td" ).each(function( index ) {
arr_t_one.push($( this ).attr('class') );
});
$( "#two td" ).each(function( index ) {
arr_t_two.push($( this ).attr('class') );
});
var is_same = (arr_t_one.length == arr_t_two.length) && arr_t_one.every(function(element, index) {
return element === arr_t_two[index];
});
var is_not_same = (arr_t_one.length == arr_t_two.length) && arr_t_one.every(function(element, index) {
return element !== arr_t_two[index];
});
if(is_same==true)
setTimeout(function() {alert("Exact Match");}, 100);
if(is_not_same==true)
setTimeout(function() {alert("Not matched Exactly");}, 100);
}
jQuery('td').on('click', function() {
jQuery(this).removeClass('white_block yellow_block red_block').addClass(brush);
if(brush!='white_block'){
cellCheck();
}
});
});
```
```css
.block {
border: thin solid #000000;
width: 59px;
height: 57px;
}
.white_block {
background-color: #FFFFFF;
}
.red_block {
background-color: #FF0000;
}
.yellow_block {
background-color: #FFFF00;
}
table {
margin: 1em 0 0;
}
```
```html
Buttons:
Table One:
| | | |
| --- | --- | --- |
| | | |
| | | |
Table Two:
| | | |
| --- | --- | --- |
| | | |
| | | |
```
I have checked the **class names for an exact match**.
Upvotes: 1 <issue_comment>username_2: ```js
jQuery(function() {
var brush = "white_block";
jQuery('input.block').on('click', function() {
brush = jQuery(this).data('brush');
});
function cellCheck() {
$one = $("#one").html().replace(/\s/g,'');
$two = $("#two").html().replace(/\s/g,'');
$three = $("#three").html().replace(/\s/g,'');
if($one === $two){
alert("match with two");
}
if($one === $three){
alert("match with three");
}
}
jQuery('td').on('click', function() {
jQuery(this).removeClass('white_block yellow_block red_block').addClass(brush);
cellCheck();
getsolution();
});
function getsolution(){
$("#two").clone().each(function() {
var $this = $(this);
var newrows = [];
var firstTr = '';
var i = 0;
$this.find("tr").each(function(){
if(i == 0){
firstTr = "|"+$(this).html()+"
";
i++;
}else{
$("#three").html("|"+$(this).html()+"
");
$("#three").append(firstTr);
}
});
});
}
getsolution();
});
```
```css
.block {
border: thin solid #000000;
width: 59px;
height: 57px;
}
.white_block {
background-color: #FFFFFF;
}
.red_block {
background-color: #FF0000;
}
.yellow_block {
background-color: #FFFF00;
}
table {
margin: 1em 0 0;
}
```
```html
Buttons:
Table One:
| | | |
| --- | --- | --- |
| | | |
| | | |
Table Two:
| | | |
| --- | --- | --- |
| | | |
| | | |
Three
```
The easiest way is to check whether the two tables' HTML is the same: take each table's HTML, strip all whitespace with a regex, and then compare the two resulting strings for equality.
Upvotes: 3 [selected_answer] |
2018/03/22 | 844 | 3,545 | <issue_start>username_0: In my Maven, Spring Boot 2 project I have Maven module called `api1`. I have declared a number of `@RestController`s there.
In order to extend the logic of the `api1` module, I have implemented another Maven module called `api2` and placed `api1` there as Maven dependency.
Right now all of the `@RestController`s from `api1` project are initialized in the `api2` because all of them are present on the `api2` classpath.
How to disable a certain `@RestController` in `api2` project?<issue_comment>username_1: I think the crucial fact here to understand is that Spring works at **runtime** only, while maven matters in **build** time.
So maven sees that api2 depends on api1 so it understands that both modules have to be included in the artifact (in the case of spring boot its a big jar with all modules inside).
Now, when spring starts - it "takes for granted" that all modules are accessible, and depending on spring configurations it just defines beans to be loaded and processed, all rest controllers are among these beans of course.
So I assume, you don't mind having two modules in the artifact (and in classpath).
In this case, you shouldn't touch the maven part at all, but when the spring boot application starts it has to be "instructed" somehow that some rest controllers have to be excluded. The point is that it should be done not in terms of modules ("hey, spring, this controller belongs to module api2, so it has to be excluded"), but in terms of business "jargon". For example, api1 contains all "admin" functionality and api2 contains all "applicative" stuff. So, if you work with Java configurations, for example, you can do the following:
Inside module api1:
```
@Configuration
@ConditionalOnProperty(name = "admin.enabled", havingValue = "true")
public class AdminControllersConfiguration {
@Bean
public AdminControllerFromModuleApi1 adminController() {
return new AdminControllerFromModuleApi1();
}
}
```
In module api2 you just define your rest controllers in a similar way but without "@ConditionalOnProperty" annotation.
The thing with this annotation is that it allows to "switch off" beans or entire configurations like in my example.
So, when you start api2, you just define in "application.properties" or something the following:
```
admin.enabled=false
```
And your controllers won't be "deployed" by spring although physically the files are certainly in the classpath.
Of course, since spring allows different types of configurations, this method might not be applicable to your project, but the idea is still the same.
Upvotes: 2 [selected_answer]<issue_comment>username_2: You may try using Condition interface from Spring which provide support for conditional enable/disable of the beans based on certain condition/expression.
something like below:
```
@RestController
@ConditionalOnExpression("${api1.controller.enabled:false}")
@RequestMapping(value = "/", produces = "application/json;charset=UTF-8")
public class Api1Controller {
@RequestMapping(value = "/greeting")
public ResponseEntity<String> greeting() {
return new ResponseEntity<>("Hello world", HttpStatus.OK);
}
}
```
you have to set the Conditional expression by some way (env. variable / property key). check [this](http://www.baeldung.com/spring-boot-custom-auto-configuration) for some reference. [Condition docs](https://docs.spring.io/spring-framework/docs/current/javadoc-api/org/springframework/context/annotation/Conditional.html) can guide you on more details.
Upvotes: 2 |
2018/03/22 | 1,258 | 4,481 | <issue_start>username_0: >
> AADSTS70005: response\_type 'id\_token' is not enabled for the
> application
>
>
>
I am getting above error even after setting `"oauth2AllowImplicitFlow": true`, in manifest.<issue_comment>username_1: Make sure you don't have two instances of the key `oauth2AllowImplicitFlow` in your manifest - in my case I had added the key but it was present already with the value set to false. Hopefully this solves the issue:)
Upvotes: 2 <issue_comment>username_2: try this:
go to portal.azure.com
select your directory, and go to Azure AD
then select App registration (preview)
select the app you are trying to authenticate (you should already have registered it)
go to the authentication tab
check "ID tokens" in the Advanced Settings section (see the bottom of the attached image)
[](https://i.stack.imgur.com/AQwhe.png)
this have worked for me
Upvotes: 7 <issue_comment>username_3: I got the error:
>
> AADSTS700054: response\_type 'id\_token' is not enabled for the application.
>
>
>
And the resolution was setting:
```js
{
"oauth2AllowIdTokenImplicitFlow" : true
}
```
in Azure Active Directory [App Manifest](https://learn.microsoft.com/en-us/azure/active-directory/develop/reference-app-manifest)
Upvotes: 5 <issue_comment>username_4: I was facing similar issue and when visited the page of ActiveDirectory -> App registrations, it wasnt showing new UI.
Also it doesnt allow me to set the flag in the metadata, Found the workaround for this.
<https://portal.azure.com/#blade/Microsoft_AAD_RegisteredApps/ApplicationMenuBlade/Authentication/quickStartType//sourceType/Microsoft_AAD_IAM/appId/9bab1d75-34b8-475b-abfe-5a62c6f01234/objectId/a4b459c1-7753-400c-8f8f-46fb5451234/isMSAApp//defaultBlade/Overview/servicePrincipalCreated/true>
First login to your instance, modify the above URL to paste object id and application id of your application.
Then it should show the screen @username_2 posted.
Upvotes: 1 <issue_comment>username_5: **Error : OpenIdConnectMessage.Error was not null, indicating an error. Error: 'unsupported\_response\_type'.**
This error occurred because Azure AD did not return any Access tokens or ID tokens.
Azure AD needs those check-boxes enabled in order to return tokens after authentication is done.
**How to Solve :** goto Azure AD => App registration => click tab Authentication =>
enabled Access tokens and ID tokens check-boxes.
Upvotes: 3 <issue_comment>username_6: Make sure you have selected `ID tokens (used for implicit and hybrid flows)`
You can do from `Authentication` blade in your app in Azure AD. See screenshot below

Or go to the `Manifest` blade and make `oauth2AllowIdTokenImplicitFlow` to `true`. See screenshot below
[](https://i.stack.imgur.com/CV03F.png)
Upvotes: 4 <issue_comment>username_7: It is true like a lot of you are saying that you need to enable `ID tokens (used for implicit and hybrid flows)` if you really need the ID Token.
>
> 'AADSTS700054: response\_type 'id\_token' is not enabled for the
> application.
>
>
>
However if you use a `Authorization Code Flow` you don't really need it. Microsoft OpenID Connect authentication (`Microsoft.AspNetCore.Authentication.OpenIdConnect`) uses `id_token` as default `ResponseType` for `OpenIdConnect` and `JwtSecurityTokenHandler`.
[](https://i.stack.imgur.com/CSxBw.png)
Using `AddOpenIdConnect` you can set `ResponseType` to `OpenIdConnectResponseType.Code` or simply `"code"` and then you don't need the `id_token` at all.
Working example with Azure Ad and IdentityServer:
```
services.AddAuthentication()
.AddOpenIdConnect("aad", "Azure AD", options =>
{
options.ClientSecret = "";
options.ResponseType = OpenIdConnectResponseType.Code;
options.ClientId ="";
options.Authority = "https://login.microsoftonline.com//";
options.CallbackPath = "/signin-oidc";
})
.AddIdentityServerJwt();
```
<http://docs.identityserver.io/en/latest/topics/signin_external_providers.html>
Upvotes: 1 <issue_comment>username_8: I stumbled across this post since I was having the exact same issue with my Azure App Service. I fixed it by using the exact redirect URL in the error message and adding that to the list of URLs in the app registration.
Upvotes: 0 |
2018/03/22 | 541 | 1,819 | <issue_start>username_0: I have a json object which contains some HTML.
For example:
```
{
"cat": "1",
"catUrl": "this-is-a-url",
"catSummary": "This is a summary with [a link](\"http://www.a.com\")"
},
```
Notice catSummary has an href in it. But it doesn't get rendered as a link. Instead it just renders as regular text..
[](https://i.stack.imgur.com/CDAPW.png)
How do I make this work as a proper link?
**EDIT**
Just to clarify, I am using the VueJS framework, not jQuery.
A simple solution (that I completely missed..) is using the v-html directive.<issue_comment>username_1: You can use below method after getting "catSummary" value from your json
```
$.parseHTML("This is a summary with <a href=\"http://www.a.com\">a link</a>")
```
Above method load your string as HTML and you can load into any element, See [Reference](https://api.jquery.com/jquery.parsehtml/)
Upvotes: 0 <issue_comment>username_2: In case all you want to do is populate another html element with the content in the catSummary field, use following:
```
$('#element_id').html(object_name.catSummary);
```
This will populate the element's object with the content in catSummary field.
Upvotes: 3 [selected_answer]<issue_comment>username_3: Do you use any library ? For example with jQuery you could simply do like this:
```
var data = {
"cat": "1",
"catUrl": "this-is-a-url",
"catSummary": "This is a summary with [a link](\"http://www.a.com\")"
};
$("body").html(data.catSummary);
```
and it will render as a link.
Upvotes: 0 <issue_comment>username_1: You can get your json property "catSummary" and set into HTML controls
```
$("#" + elementId).innerHTML = "This is a summary with [a link](\"http://www.a.com\")";
```
Upvotes: 0 |
2018/03/22 | 634 | 2,129 | <issue_start>username_0: i try to build a store procedure who insert data to a table,
after it run, the table is empty.
this is the code:
```
CREATE TABLE invoices
(invoiceNo int,
invoiceDate date,
invoiceTotal int,
invoiceType char(1))
alter PROCEDURE Invoices_AGG
@year int
AS
select
(case when MONTH(invoiceDate) >9 then concat('01','/',MONTH(invoiceDate),'/',year(invoiceDate)) else concat('01','/0',MONTH(invoiceDate),'/',year(invoiceDate)) end) as DateID,
SUM(case when invoiceType = 'B' then invoiceTotal else 0 end) as Total_Incomes_TypeB,
SUM(case when invoiceType = 'I' then invoiceTotal else 0 end) as Total_Incomes_TypeI
into FACT_Invoices_AGG
from invoices
where year(invoiceDate)=@year
group by (case when MONTH(invoiceDate) >9 then concat('01','/',MONTH(invoiceDate),'/',year(invoiceDate)) else concat('01','/0',MONTH(invoiceDate),'/',year(invoiceDate)) end);
exec Invoices_AGG 2013
```
thank you |
2018/03/22 | 492 | 1,801 | <issue_start>username_0: I know this question has been asked alot, but I tried several solutions suggested and none has worked as expected.
I have an EditText that opens the keyboard when I launch that activity, how can prevent it from launching the keyboard.
```
//Solutions I have tried thus far
//Solution 1
//XML
android:editable ="false"
//Solution 2
//XML
android:focusable="false"
//Solution 3
//JAVA
editText.setinputType(InputType.TYPE_NULL); // Works but I can no longer use the edit text to add input
//Solution 4 - Similar to Solution 2
editText.setInputType(0); // Works but I can no longer use the edit text to add input
//Solution 5
//Using InputMethodManager
InputMethodManager imm = (InputMethodManager)getSystemService(INPUT_METHOD_SERVICE);
imm.hideSoftInputFromWindow(editText.getWindowToken(),0); //Does not work
```<issue_comment>username_1: Try this Use [**`android:windowSoftInputMode="stateHidden"`**](https://developer.android.com/guide/topics/manifest/activity-element.html#wsoft) inside manifest file to your activity like below code
**`"stateHidden"`**
>
> The soft keyboard is hidden when the user chooses the activity — that is, when the user affirmatively navigates forward to the activity, rather than backs into it because of leaving another activity.
>
>
>
**SAPME CODE**
```
```
Upvotes: 2 <issue_comment>username_2: ```
getWindow().setSoftInputMode(WindowManager.LayoutParams.SOFT_INPUT_STATE_HIDDEN);
```
this worked for me
Upvotes: 1 <issue_comment>username_3: add this line in on create
```
// Hide the keyboard.
getWindow().setSoftInputMode(
WindowManager.LayoutParams.SOFT_INPUT_STATE_ALWAYS_HIDDEN
);
```
Upvotes: 1 <issue_comment>username_4: Please try this on your manifest :
```
```
it helps you.
Upvotes: 1 |
2018/03/22 | 601 | 2,048 | <issue_start>username_0: Hello this is the code I'm using:
```
menuItemButtonContainer: {
marginRight: 1,
marginLeft: 1,
marginTop: 1,
marginBottom: 1,
paddingRight: 1,
paddingLeft: 1,
paddingTop: 1,
paddingBottom: 1,
borderRadius: 10,
overflow: 'hidden',
position: "absolute",
backgroundColor: 'white',
borderWidth: 1,
borderColor: colors.navy
},
```
but for some reason the top right corner is not round. I've been cudgeling my brain for a day.
This is the component code
```
ADD
```
[](https://i.stack.imgur.com/ztZWh.png)<issue_comment>username_1: Styles you written works, Please see the below. May be border style getting overridden by some other styles in your application.Inspect that element to see.
[](https://i.stack.imgur.com/kwkqO.png)
Upvotes: 1 <issue_comment>username_2: Try the border radius on each corner as follows:
```
menuItemButtonContainer: {
marginRight: 1,
marginLeft: 1,
marginTop: 1,
marginBottom: 1,
paddingRight: 1,
paddingLeft: 1,
paddingTop: 1,
paddingBottom: 1,
borderTopLeftRadius: 10, //Top Left Corner
borderTopRightRadius: 10,// Top Right Corner
borderBottomLeftRadius: 10,// Bottom Left Corner
borderBottomRightRadius: 10, // Bottom Right Corner
overflow: 'hidden',
position: "absolute",
backgroundColor: 'white',
borderWidth: 1,
borderColor: colors.navy
},
```
Upvotes: 0 <issue_comment>username_3: As I have reviewed your code and as per that, you try by using below property to give corner radius one bye one for all the corner
```
borderBottomLeftRadius: 10
borderBottomRightRadius: 10
borderTopLeftRadius: 10
borderTopRightRadius: 10
```
Upvotes: 1 <issue_comment>username_4: You may also use the sx prop like this:
```
sx={{"& .MuiOutlinedInput-root": {
borderRadius: '0 10px 0 0'
}}}
```
Upvotes: -1 |
2018/03/22 | 783 | 3,044 | <issue_start>username_0: Git is a DVCS and each local git user holds a complete copy of the repository.
I am trying to set up a repository for our company's project (of small team of less than 10 people). We selected git so that we dont need to set up a server like SVN. We are using git with Visual Studio 2017. It seems that it requires us to set up / push to a remote repository either using VSTS or other remote repository such as GitHub or BitBucket, so that my teammate can clone from it.
I wonder can git work using just local repository copies in a team environment? (With no remote repository such as GitHub needed)
I have set up the repository of the project locally in my machine, how can I have my teammate clone it from me (without needing remote repository such as GitHub)?<issue_comment>username_1: * If your teammates can not access to your laptop, there is no way for them to clone the git repo from your PC.
* Even if your teammates can access to your laptop, they can just clone this repo. And your teammates can not push to your local repo with the same branch name.
In a word, you should work with remote repo for your situation since there are multiple users need to work together.
Upvotes: 0 <issue_comment>username_2: As an addition to [this answer](https://stackoverflow.com/a/5947357/2303202):
* to pull from/push to Windows, you could use just a shared folder, or set up a server. For example [like this](https://github.com/PowerShell/Win32-OpenSSH/wiki/Setting-up-a-Git-server-on-Windows-using-Git-for-Windows-and-Win32_OpenSSH), but there are really a lot of options, from using [the embedded anonymous server](https://stackoverflow.com/a/5198654/2303202) to setting up an IIS with smart http handler (I cannot say anything about this one).
* you can push, especially if you don't have permanent connection, but you should be careful to not mess with receiver's work. I'd recommend to push to remote references.
For example, if you are registered as at your teammates, you push to them like:
```
git push teammate1 "refs/heads/*:refs/remotes/kvitas/*"
git push teammate2 "refs/heads/*:refs/remotes/kvitas/*"
....
```
This is basically same as they fetching from you
Upvotes: 1 <issue_comment>username_3: Each user does get a complete copy of the repository, but the canonical way to share modifications across a team is still to have a shared repository hosted on a central server.
As opposed to SVN, the setup barrier of this shared server is pretty low, though :
* a shared folder, or a server with an ssh access is enough to hold a copy of the repo, which can be used as the shared repository
* the next step is to manage some access rules to your repo (e.g : prevent people from doing `push -f` on master, ... ), and there are several free, self hosted options to add this kind of functionnality on the central servers, two of which are :
+ gitlab : comes with a complete web GUI
+ gitolite : minimalistic server, with users and access rules, which runs behind an existing ssh server
Upvotes: 0 |
2018/03/22 | 707 | 2,688 | <issue_start>username_0: I have an image which I want to place in splash screen in such a way that it is visible in navigation bar also at the top. Normally the image is visible in full activity except the navigation bar. But I want it to start from navigation bar also as in swiggy app splash screen.
Below is the related image:
|
2018/03/22 | 331 | 1,195 | <issue_start>username_0: The Wooslider plugin on my Wordpress site suddenly stopped loading images on all pages it's used on. Can somebody look into this and tell me what the issue is and how to fix it?
The error code I receive in the console is:
```html
Uncaught ReferenceError: jQuery is not defined
at (index):296
```
Thanks in advance!<issue_comment>username_1: Please include jquery library link before you've added woo slider library.
Add this script to header.php in your wp theme
```
```
Upvotes: -1 [selected_answer]<issue_comment>username_2: Remove all lines which link `JQuery` to your page and add below the line, which is fo latest version of 'JQuery`
```
```
in your code before
Something like
```
....
....
```
What this will do is load `JQuery` code before loading the contents of `...` code.
Adding some links related to above problem
1. [Handling code which relies on jQuery before jQuery is loaded](https://stackoverflow.com/questions/8298430/handling-code-which-relies-on-jquery-before-jquery-is-loaded)
2. [Best Practice: Where to include your tags](https://teamtreehouse.com/community/best-practice-where-to-include-your-script-tags)
Upvotes: 0 |
2018/03/22 | 587 | 2,183 | <issue_start>username_0: I just started using Java recently so please bear with me. I'm trying to create an array of the object Product by reading data from a csv file with the format
`Name,Price,Stock`
but every time I'm trying using the `Load()` function it keeps giving me an error in the Product constructor line.
```
private Product[] product = new Product[100];
public Product[] Load() throws FileNotFoundException {
int counter = 0;
boolean end = false;
Scanner scanner = new Scanner(new File("products.csv"));
scanner.useDelimiter(",");
while (!end) {
if (scanner.hasNext()) {
product[counter] = new Product(scanner.next(), scanner.nextFloat(), scanner.nextInt());
counter++;
}
else {
end = true;
}
}
scanner.close();
return product;
}
```
the error message is `java.util.scanner.next(unknown source)`, and the same error for both **scanner.nextFloat()** and **scanner.nextInt()**
```
Exception in thread "main" java.util.InputMismatchException
at java.util.Scanner.throwFor(Unknown Source)
at java.util.Scanner.next(Unknown Source)
at java.util.Scanner.nextInt(Unknown Source)
at java.util.Scanner.nextInt(Unknown Source)
at ScoutShop.CSVReader.Load(CSVReader.java:20)
at ScoutShop.Main.main(Main.java:11)
```
|
2018/03/22 | 508 | 1,743 | <issue_start>username_0: I've learned a little about django pagination from here : <https://docs.djangoproject.com/en/2.0/topics/pagination/#page-objects>
And I want to know how to get the number of items in certain Page object.
I thought a way like this : `Page.end_index() - Page.start_index() + 1`
But I think maybe it is not good or inaccurate way to get result.
Is there good way to get correct result?<issue_comment>username_1: Probably `len(page.object_list)` is the easiest.
Upvotes: 1 <issue_comment>username_2: Something vaguely similar to:
```
Paginator(query, objects_per_page, current_page_number)
```
And then pass the resulting paginator object into the template.
Inside the paginator's init you'd want to do something similar to:
```
def __init__(self, query, objects_per_page, current_page_number):
self.total = query.count()
self.per_page = objects_per_page
self.current_page_number = current_page_number
self.lower_limit = objects_per_page * current_page_number
self.upper_limit = objects_per_page * (current_page_number + 1)
if self.upper_limit > self.total:
self.upper_limit = self.total
self.objects = query[self.lower_limit - 1:self.upper_limit - 1]
```
Then in the template you'd do something like this:
```
Showing {{paginator.lower_limit}}-{{paginator.upper_limit}} of {{paginator.total}}
```
I hope that gives you a general idea of how you can cleanly do this.
Upvotes: 3 [selected_answer]<issue_comment>username_3: In template:
```
{{ page_obj.paginator.count }} # The total number of objects, across all pages
{{ page_obj.paginator.per_page }} # The number of objects every page
{{ page_obj.paginator.num_pages }} # The total number of pages.
```
Upvotes: 2 |
2018/03/22 | 326 | 1,109 | <issue_start>username_0: I've got a List like:
```
results = ['SDV_GAMMA','SDV_BETA','...','...']
```
and then comes a for loop like:
```
for i in range (len(results)):
a = instance.elementSets[results[i]]
```
The strings defined in the result-list are part of a `*.odb` result file and if they didn't exist there comes an error.
I would like that my program doesn't stop cause of an error. It should go on and check if values of the others result values exist.
So i do not have to sort every result before i start my program. If it´s not in the list, there is no problem, and if it exists i get my data.
I hope u know what i mean.<issue_comment>username_1: You can use `try..except block`
**Ex:**
```
for i in results:
    try:
        a = instance.elementSets[i]
    except:
        pass
```
Upvotes: 2 [selected_answer]<issue_comment>username_2: You can simply check the presence of results[i] in instance.elementSets before extracting it.
If instance.elementSets is a dictionary, use the `dict.get` command.
<https://docs.python.org/3/library/stdtypes.html#dict.get>
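For illustration, a small sketch of that presence check, assuming `instance.elementSets` behaves like a dictionary keyed by the names in `results`:

```
found = {}
for name in results:
    elem_set = instance.elementSets.get(name)  # returns None when the set is missing
    if elem_set is not None:
        found[name] = elem_set
```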
Upvotes: 0 |
2018/03/22 | 485 | 1,861 | <issue_start>username_0: I'm new to Robot Framework, I have created this code to create a folder of current date but now I don't know how to use it at the time of test suite run.
So that whenever I run my suite it will store in separate folder i.e. date wise.
This the code for create Folder with system date:
```
*** Keywords****
Create Folder
${Resultdir}= C:/Users/xyz
${date}= Get Current Date result_format=%d-%m-%y
${date}= Convert to String ${date}
create directory ${Resultdir}/${date}
```
Currently I'm using this command to create dir:
```
robot -d Results\Default --log NONE --output NONE --report NONE Tests/GeneralTestSuite/CreateResultFolder.robot
```
I want this code at : pybot -d {**CurrentDirecory**} Test\TestSuite\Abc.robot<issue_comment>username_1: You can't change the report directory once the test has started running. Your only choice is to create the folder outside of the test, and specify the folder location on the command line.
Upvotes: 2 <issue_comment>username_2: You can use a script to generate command line arguments for Robot Framework using the [Reading argument files from standard input](http://robotframework.org/robotframework/latest/RobotFrameworkUserGuide.html#reading-argument-files-from-standard-input) functionality.
To create a folder named as the current time and set that as the [output directory](http://robotframework.org/robotframework/latest/RobotFrameworkUserGuide.html#output-directory), something like this can be done:
```
import datetime
import os
time = datetime.datetime.now().strftime("%Y-%m-%d_%H-%M-%S")
dirpath = str(time)
if not os.path.exists(dirpath):
os.makedirs(dirpath)
print('--outputdir ' + dirpath)
```
You have to execute your tests like:
```
python OutputDirArgumentFile.py | robot --argumentfile STDIN my_test.robot
```
Upvotes: 0 |
2018/03/22 | 915 | 3,061 | <issue_start>username_0: [](https://i.stack.imgur.com/BJ0E3.png)
Hello everyone I need your help. I want that like in image if I search carrot listing should show Carrot first like the element having same name show first and if element contains these text list after that is that possible
```
NSString *predicateString;
NSString * tempString;
if (string.length > 0) {
tempString = [NSString stringWithFormat:@"%@%@",textField.text, string];
} else {
tempString = [textField.text substringToIndex:[textField.text length] - 1];
}
predicateString = [NSString stringWithFormat:@"SELF.title contains [cd] \"%@\" ", tempString];
NSLog(@"Ingredent Array :- %@ ",allIngrediantArr);
NSPredicate *predicate = [NSPredicate predicateWithFormat:predicateString];
if (allIngrediantArr.count>0) {
searchFilterdArr = [NSMutableArray arrayWithArray:[allIngrediantArr filteredArrayUsingPredicate:predicate]];
}
```<issue_comment>username_1: you can use in the alternate
**BEGINSWITH**
```
predicateString = [NSString stringWithFormat:@"SELF.title BEGINSWITH [cd] \"%@\" ", tempString];
```
**CONTAINS || BEGINSWITH**
if you want the both (contains || Beginswith), then use
```
NSPredicate *predicate = [NSPredicate predicateWithFormat:
@"(SELF.title BEGINSWITH [cd] \"%@\") OR (SELF.title CONTAINS [cd] \"%@\")", tempString, tempString];
```
**MATCHES || BEGINSWITH**
```
NSPredicate *predicate = [NSPredicate predicateWithFormat:
@"(SELF.title BEGINSWITH [cd] \"%@\") OR (SELF.title MATCHES [cd] \"%@\")", tempString, tempString];
```
see the `NSPredicate` [string comparison](https://developer.apple.com/library/content/documentation/Cocoa/Conceptual/Predicates/Articles/pSyntax.html)
[](https://i.stack.imgur.com/Dq7VN.png)
Upvotes: 2 <issue_comment>username_2: You can filter the `searchFilterdArr` array by comparing the range of search string.
```
NSString *searchStr = @"carrot";
NSArray *searchFilterdArr = [[NSArray alloc ] initWithObjects:@{@"title":@"baby carrot"},@{@"title":@"baby purple carrot"},@{@"title":@"carrot"}, nil];
NSLog(@"%@",searchFilterdArr);
id mySort = ^(NSDictionary * obj1, NSDictionary * obj2){
return [[obj1 valueForKey:@"title"] rangeOfString:searchStr].location > [[obj2 valueForKey:@"title"] rangeOfString:searchStr].location;
};
NSArray * sortedMyObjects = [searchFilterdArr sortedArrayUsingComparator:mySort];
NSLog(@"%@",sortedMyObjects);
```
searchFilterdArr
>
> {
> title = "baby carrot";
> },
> {
> title = "baby purple carrot";
> },
> {
> title = "carrot";
> }
>
>
>
sortedMyObjects
>
> {
> title = "carrot";
> },
> {
> title = "baby carrot";
> },
> {
> title = "baby purple carrot";
> }
>
>
>
Upvotes: 2 [selected_answer] |
2018/03/22 | 499 | 1,972 | <issue_start>username_0: I'm using a select statement that combines two columns, EmployerID and Name as such:
```
SELECT EmployerID + '-' + Name AS CompanyName FROM Employer
```
to be put into a drop down list from the database.
I have no issue selecting and filling the database, however, I need to use the EmployerID which is a foreign key of the Employee entity to select respective employees that belong to said employer by using
```
string x =ddlEmployer.SelectedValue.ToString();
SqlCommand EmployeeCmd = new SqlCommand("SELECT EmployeeNo + ' - ' + Name As EmployeeName From Employee Where EmployerID = @SelectEmployer", conn);
EmployeeCmd.Parameters.AddWithValue("@SelectEmployer", x);
```
I cannot select it as doing a select statement of the selected value using the CompanyName combined values would not return anything be it making it equals or like as shown above.
So my question is, after selecting these two and displaying them as a single option in a dropdownlist, how would I then use the selected value to search for the EmployerID that is supposed to exist?<issue_comment>username_1: First, change your initial query to include the EmployerID.
```
SELECT EmployerID + '-' + Name AS CompanyName, EmployerID FROM Employer
```
Then bind the correct columns to the DropDownList. You can have two different column, one for the TextField and one for the ValueField.
```
DropDownList1.DataSource = source;
DropDownList1.DataTextField = "CompanyName";
DropDownList1.DataValueField = "EmployerID";
DropDownList1.DataBind();
```
Now you always have the EmployerID as the SelectedValue.
```
EmployeeCmd.Parameters.AddWithValue("@SelectEmployer", DropDownList1.SelectedValue);
```
Upvotes: 2 <issue_comment>username_2: I've managed to fix this with hendryanw's comment which was to simply use
```
string x = ddlEmployer.SelectedValue.ToString().Split('-')[0];
```
which helps to split it and select the index I want, works both ways.
Upvotes: 0 |
2018/03/22 | 505 | 1,505 | <issue_start>username_0: I am adding dynamic forms in slider, so that multiple forms can be visible with swap. But when keyboard opens ion-slides is disappears. please suggest a solution.
[](https://i.stack.imgur.com/Sxw9I.png)
[](https://i.stack.imgur.com/zTfCe.png)
cli packages: (/usr/local/lib/node\_modules)
```
@ionic/cli-utils : 1.19.2
ionic (Ionic CLI) : 3.20.0
```
global packages:
```
cordova (Cordova CLI) : 8.0.0
```
local packages:
```
@ionic/app-scripts : 3.1.8
Cordova Platforms : android 7.0.0 browser 5.0.3
Ionic Framework : ionic-angular 3.9.2
```
System:
```
Android SDK Tools : 26.1.1
Node : v9.4.0
npm : 5.6.0
OS : macOS High Sierra
```<issue_comment>username_1: you can use following CSS it will work.
```
.scroll-content{
margin-bottom: 0px !important;
padding-bottom: 0px !important;
}
```
Upvotes: 1 <issue_comment>username_2: Surprisingly!! output came normal with ion-slides tag surrounded by div tag. If you have any explanation, please share.
[](https://i.stack.imgur.com/HJwM2.png)
Upvotes: 1 <issue_comment>username_3: I have a similar issue with ion-slide and can confirm that wrapping it in tags solves the issue as per yohoprashant's answer.
The issue only happens in Android but not iOS
Upvotes: 0 |
2018/03/22 | 296 | 1,078 | <issue_start>username_0: I'm having an array of `NSManagedObjects`. And I would like to search through the whole array using search bar. But the problem is in the array of `NSManagedObject` there are different types of keys(Different entities). Like if I'm search using title than there will be keys like:
"`event_title`"
"`message_title`"
"`album_title`"
Now how can I search a keyword from all these fields inside an array? |
2018/03/22 | 372 | 1,150 | <issue_start>username_0: Code:
```
$db = mysqli_connect("localhost","root","","photos");
if(isset($_POST['POST'])){
$image = addslashes($_FILES['image']['tmp_name']);
$image_name = addslashes($_FILES['image']['name']);
$image = file_get_contents($image);
$image = base64_encode($image);
$sql = "INSERT INTO imaag ( name ) VALUES ('$image')";
mysqli_query($db,$sql);
}
$res = mysqli_query($db,"SELECT * from imaag");
```
and the div in which i'm getting images is:
```
<?php
while ($row = mysqli_fetch_array($res)) {
    echo '<img src="data:image/jpeg;base64,"'.base64_encode($row['name']).'/>';
}?>
```
On retrieval the image is broken, and i don't know if the image is uploaded or not.What I think is that it isn't uploading. Uploading part has some problem.<issue_comment>username_1: Because on retrieval You dont need `base64_encode` again. Just simply add string. change your `echo` as below:
```
echo '<img src="data:image/jpeg;base64,'.$row['name'].'"/>';
}?>
```
Upvotes: 3 [selected_answer]<issue_comment>username_2: Your **src="** has no closing **"**
Do this:
```
echo '<img src="data:image/jpeg;base64,'.base64_encode($row['name']).'"/>';
```
Upvotes: 0 |
2018/03/22 | 1,294 | 4,500 | <issue_start>username_0: I have a log file that has data lines and some explanation text lines. I would like to read the last 10 data lines from a file. How can I do it in Python? I mean, is there faster way than use
```
for line in reversed(open("filename").readlines()):
```
and then parse the file. I guess it opens the whole file and is slow if the log file is huge. So is there a method to open just the end of the file and read data from it? All I need is the last 10 lines from a file having text `,Kes`. If there are not 10 lines having `,Kes`, it should return all lines having `,Kes` in the same order those appeared in the file.<issue_comment>username_1: You have to cross over the first (N - 10) lines but you can do it in a smart way. The fact that you're consuming time doesn't mean that you have to consume memory as well. In your code you're using `readlines()` which read all the lines and returns a list of them. This is while the `fileobject` itself is an iterator-like object and you can use a container with restricted length and insert all the lines into it which at the end it will only preserve the last N lines. In python you can use a `deque` with its `maxlen` set to 10 for this sake:
```
from collections import deque
with open("filename") as f:
last_ten_lines = deque(f,maxlen=10)
```
Regarding your last point, if you want to filter the lines that have the word `,Kes` the best way is to loop over the reverse of the file object.
```
from itertools import islice
def get_last_n(file_name, n=10):
""" Returns the last N filtered lines. """
def loop_over():
with open(file_name) as f:
for line in reversed(list(f)):  # file objects cannot be reversed directly
if ",Kes" in line:
yield line
return islice(loop_over(), n)
```
Upvotes: 2 <issue_comment>username_2: import os
os.popen('tail -n 10 filepath').read()
Upvotes: -1 <issue_comment>username_3: You can
* read all, store all in a list, reverse all and take first 10 lines that contain `,Kes`
+ your approach - takes **lots** of storage and time
* use Kasramvd's approach wich is francly far more elegant then this one - leveraging iterable and islice
* read each line yourself and check if `,Kes` in it, if so queue it:
---
```
from collections import deque
# create demodata
with open ("filename","w") as f:
for n in range (20):
for p in range(20):
f.write("some line {}-{}\n".format(n,p))
f.write("some line with {} ,Kes \n".format(n))
# read demodata
q = deque(maxlen=10)
with open("filename") as f:
for line in f: # read one line at time, not huge file at once
if ',Kes' in line: # only store line if Kes in it
q.append(line) # append line, size limit will make sure we store 10 at most
# print "remebered" data
print(list(q))
```
Output:
```
['some line with 10 ,Kes \n', 'some line with 11 ,Kes \n', 'some line with 12 ,Kes \n',
'some line with 13 ,Kes \n', 'some line with 14 ,Kes \n', 'some line with 15 ,Kes \n',
'some line with 16 ,Kes \n', 'some line with 17 ,Kes \n', 'some line with 18 ,Kes \n',
'some line with 19 ,Kes \n']
```
You will not have the whole file in RAM at once as well, at most 11 lines (curr line + deque holding 10 lines and it only remembers lines with `,Kes` in it.
Upvotes: 2 [selected_answer]<issue_comment>username_4: Your proposed code is clearly not efficient:
* you read the whole file into memory
* you fully reverse the list of lines
* only then you search the lines containing keyword.
I can imagine 2 possible algorithms:
1. scan the file in forward order and store 10 lines containing the keyword, each new one replacing the older. Code could be more or less:
```
to_keep = [None] * 10
index = 0
for line in file:
if line.find(keyword) != -1:
to_keep[index] = line
index = (index + 1) % 10
```
It should be acceptable if only few line in the file contain the keyword and if reading from the back would also need to load a great part of the file
2. Read the file in chunks from the end and apply above algorithm on each chunk. It will be more efficient if keyword is frequent enough for only few chunks to be required, but will be slightly more complex: it is not possible to seek to lines but only to byte positions in a file, so you could start in the middle of a line or even in the middle of a multibyte character (think about UTF-8), so you should keep the first partial line and add it later to next chunk.
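A minimal sketch of that second idea in Python, reading fixed-size chunks from the end and carrying the partial first line of each chunk over to the next one (the chunk size and the `,Kes` keyword are illustrative assumptions):

```
from collections import deque

def last_matching_lines(path, keyword=",Kes", limit=10, chunk_size=64 * 1024):
    """Scan the file backwards in chunks; return the last `limit` lines
    containing `keyword`, in their original file order."""
    found = deque(maxlen=limit)          # collected newest-first while scanning backwards
    with open(path, "rb") as f:
        f.seek(0, 2)                     # seek to end of file
        pos = f.tell()
        remainder = b""                  # partial first line of the chunk read previously
        while pos > 0 and len(found) < limit:
            read_size = min(chunk_size, pos)
            pos -= read_size
            f.seek(pos)
            chunk = f.read(read_size) + remainder
            lines = chunk.split(b"\n")
            # the first piece may be a partial line; keep it for the next (earlier) chunk
            remainder = lines.pop(0) if pos > 0 else b""
            for line in reversed(lines):
                if keyword.encode("utf-8") in line:
                    found.append(line.decode("utf-8", errors="replace"))
                    if len(found) == limit:
                        break
    return list(reversed(found))         # restore original file order
```

Opening the file in binary mode avoids the seek restrictions of text mode; decoding is deferred until a complete line is available, which also sidesteps the multi-byte character issue mentioned above.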
Upvotes: 1 |
2018/03/22 | 557 | 1,998 | <issue_start>username_0: I want to parse a string to decimal using this method
```
try {
Double.parseDouble(str);
} catch (Exception exception) {
exception.printStackTrace();
}
```
It throws exception if I put any letter in EditText or a number followed by a letter(so far so good),but when I enter a number followed by letter d or f(example 2.1or 2.1f) it doesen't throw exception.The method treats 2.1d string or 2.1f string as 2.1float or 2.1double and return the double as 2.1d or 2.1f and the program crashes.
I also tried to parse the string using :
```
NumberFormat.getInstance().parse(str);
Double.valueOf(str);
Double doubleObject = new Double(str);
double number = doubleObject.doubleValue();
StringUtils.isNumeric(str);
```
and the result was the same.
In the end I did it using `if(str.contains(d)||str.contains(f))throw new Exception(...)`
Is there another method or another way to do this without using if.<issue_comment>username_1: You should use Long to parse a number. Because Double is a floating point number and it does not throw Exception for Floating value and also **d** represent the value in Double so still it will not throw Exception. You should use Long for this.
**Like this**
```
Long.parseLong(str);
```
but still it will accept **l** with the no. so then you can add an if condition before parsing
```
if(!str.contains("l"))
{
Long.parseLong(str);
}
else
{
// it is not a number.
}
```
another solution is that put **inputType** inside the `EditText`
```
android:inputType="numberDecimal"
```
Hope it will work for you.
Upvotes: -1 <issue_comment>username_2: I would do it this way if I were you.
```
Double numberVar = Double.parseDouble(yourEditText.getText().toString());
if(new Double(numberVar) instanceof Double){
return true;
}
```
Very clean and just because a string contains a 0 or 1 or any other number does not mean it does not contain an a or an I.
Upvotes: 0 |
2018/03/22 | 329 | 1,136 | <issue_start>username_0: I want to split the string
`"abcabcab"`
into
`"abc", "a", "b"` and so on.
How do I achieve this? |
2018/03/22 | 378 | 1,391 | <issue_start>username_0: I have a directory of files of which I want the files to be selected in a random order. I am wondering what the best way to do this is – I am thinking some sort of use of 'Arrange by: Date Modified' then using some basic bash(?) touch to change these dates in a random order. How would I code a random-file-toucher? Is there a better way? What would it be? |
2018/03/22 | 2,778 | 9,623 | <issue_start>username_0: I am looking for a script or some cmd command to uninstall any app from my computer. I am using Windows OS. Being an automation analyst I have to constantly run automated tests for different products whenever there is new build. So every time I have to uninstall and install build which takes time.
I did try the following command:
```
wmic product where name="product name" call uninstall
```
Apparently it did work, but it didn't remove the app from control panel and also when I tried to install again it shows installation menu saying uninstall is needed. However the app's data is removed from installation directory.<issue_comment>username_1: I figured it out myself and it works for many projects which have .exe setups.
Following is the format
1. Open terminal with admin rights
2. go to path in where setup that you used to install the product.
3. Once there then type: setupname.exe /uninstall /q
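If you want to drive that from a script instead of typing it each time, a minimal Python wrapper could look like the following; the installer path is a placeholder for wherever your setup executable actually lives:

```
import subprocess
from pathlib import Path

setup = Path(r"C:\Installers\setupname.exe")   # placeholder path, adjust to your build
subprocess.run([str(setup), "/uninstall", "/q"], check=True)
```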
Upvotes: 0 <issue_comment>username_2: No Silver Bullet
================
There is no silver bullet when it comes to automating installation or uninstallation - but there is a **quick trick** that is described in the "**General Uninstall**" section.
There are heaps of different flavors of installer types - and the list keeps growing. Automating them is a bit of a black art as you will be fully aware of. Not rocket science, but tedious and tiresome when things don't work reliably and there is no suitable remedy that consistently works all the time.
I have written about these issues many times before and cross-linked the content very heavily. It is messy, but if you follow the links and web of linked pages below you should be able to find the information you need for many different setup.exe and installer types.
General Uninstall
=================
Before going into the **below ad-hoc list** of different types of installers / uninstallers and how to handle their **command line parameters**, I want to add that you can find a list of most of the products installed on the system in these **registry locations**:
* `HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion\Uninstall` (***64-bit***)
* `HKLM\SOFTWARE\WOW6432Node\Microsoft\Windows\CurrentVersion\Uninstall` (***32-bit***)
* `HKCU\Software\Microsoft\Windows\CurrentVersion\Uninstall` (***per-user***)
***By checking for `UninstallString` or equivalent for a specific entry under these parent keys, you can get a general idea of how to uninstall the product in question by command line***.
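As a rough illustration of that check, here is a small Python sketch using the standard `winreg` module to list display names and uninstall strings from those branches; the exact value names present vary by product, so treat it as a starting point rather than a complete inventory tool:

```
import winreg

# The three "Uninstall" branches mentioned above (64-bit, 32-bit, per-user).
UNINSTALL_BRANCHES = [
    (winreg.HKEY_LOCAL_MACHINE, r"SOFTWARE\Microsoft\Windows\CurrentVersion\Uninstall"),
    (winreg.HKEY_LOCAL_MACHINE, r"SOFTWARE\WOW6432Node\Microsoft\Windows\CurrentVersion\Uninstall"),
    (winreg.HKEY_CURRENT_USER,  r"Software\Microsoft\Windows\CurrentVersion\Uninstall"),
]

def list_uninstall_entries():
    """Yield (display_name, uninstall_string) tuples for registered products."""
    for root, path in UNINSTALL_BRANCHES:
        try:
            parent = winreg.OpenKey(root, path)
        except OSError:
            continue  # branch may not exist (e.g. WOW6432Node on 32-bit Windows)
        with parent:
            for i in range(winreg.QueryInfoKey(parent)[0]):  # number of subkeys
                subkey_name = winreg.EnumKey(parent, i)
                with winreg.OpenKey(parent, subkey_name) as subkey:
                    try:
                        name = winreg.QueryValueEx(subkey, "DisplayName")[0]
                        cmd = winreg.QueryValueEx(subkey, "UninstallString")[0]
                    except OSError:
                        continue  # entry without the values we need
                yield name, cmd

for name, cmd in list_uninstall_entries():
    print(f"{name}: {cmd}")
```

Entries without a `DisplayName` are skipped, which filters out most patch and component records.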
**Try this simple approach first**, but do read the material below for a better understanding of how installers of various kinds operate. Not all deployment tools and deployment operations register properly in these locations.
**Apps** are not found in these locations in the registry, only **MSI installers** (Windows Installer) and some - or most - of legacy **setup.exe installers**.
---
Installer Types & Uninstall (and extract for setup.exe)
=======================================================
1. ***"Unattended.com"***: The easiest and quickest read on **the topic of automating install / uninstall** could be this one: **<http://unattended.sourceforge.net/installers.php>**
* This is aging content, but I remember it as helpful for me back in the day.
* And crucially it is not my own content - so I don't link entirely to myself! :-) (most of the links below are earlier answers of mine). Apologies for that - it is just easier to remember your own content - you know it exists - and it is easier to find.
2. ***MSI***: The most standardized packages to deal with are **MSI packages** (Windows Installer).
* They can be installed / uninstalled in a plethora of reliable ways: **[Uninstalling an MSI file from the command line without using msiexec](https://stackoverflow.com/questions/450027/uninstalling-an-msi-file-from-the-command-line-without-using-msiexec/1055933#1055933)**. The most common approach is to use the `msiexec.exe` command line (section 3 in the linked answer). ***Do read this answer please***. It shows the diverse ways Windows can invoke an install / uninstall for MSI files (`command line`, `automation`, `Win32`, `.NET`, `WMI`, `Powershell`, etc...)
* Though complicated Windows Installer has [**a number of corporate benefits of major significance**](https://serverfault.com/questions/11670/the-corporate-benefits-of-using-msi-files/274609#274609) compared to previous installation technologies. **Standardized command line** and **suppressible GUI for reliable install / uninstall** are two of the most important benefits.
* [The standard msiexec.exe command line](https://msdn.microsoft.com/en-us/library/windows/desktop/aa367988%28v=vs.85%29.aspx)
* [How can I find the product GUID of an installed MSI setup?](https://stackoverflow.com/questions/29937568/how-can-i-find-the-product-guid-of-an-installed-msi-setup/29937569#29937569)
3. ***Setup.exe***: Installers in **setup.exe** format can be just about anything, including wrapped MSI files (Windows Installer). For MSI files wrapped in setup.exe files you can use the previous bullet point's standard mechanisms to uninstall (they are registered by product code GUID). *Below are some links on **how to extract the content of setup.exe files** for various types of setup.exe files* and also links to pages documenting the actual `setup.exe` command line parameters:
* ***General Links***: There is some information on how to deal with different types of setup.exe files here: **[Extract MSI from EXE](https://stackoverflow.com/questions/1547809/extract-msi-from-exe/24987512#24987512)**
* ***Installshield***: is a tool used to create setups of both legacy and modern MSI types. It delivers setup.exe files that are actually in many different formats and some with differing command line switches.
+ [There is a description of Installshield setup.exe peculiarities here](https://stackoverflow.com/questions/48976617/regarding-silent-installation-using-setup-exe-generated-using-installshield-2013/49011371#49011371)
+ [**Here is the official documentation for Installshield setup.exe files**](http://helpnet.flexerasoftware.com/installshield24helplib/installshield24helplib.htm#helplibrary/IHelpSetup_EXECmdLine.htm#ref-command-linetools_1056566962_1021611%3FTocPath%3DReference%7CCommand-Line%2520Tools%7C_____9)
+ [Here is a simple explanation of setup.exe extraction for Installshield setup.exe files](https://stackoverflow.com/a/8694205/129130)
* ***Wise***: was another important tool that is now off market. Many Wise-compiled setup.exe files are still in use. [Here is a brief overview of Wise switches](http://www.itninja.com/blog/view/wise-setup-exe-switches)
* ***Advanced Installer***: is a current tool used by many to make setup.exe / installers. <http://www.advancedinstaller.com/user-guide/exe-setup-file.html>
* ***WiX***: this is an Open Source toolkit used to compile MSI files and setup.exe files / installers. It is quite common to encounter WiX-compiled files.
+ **WiX installers** can be in standard Windows Installer format such as **MSI**, **MSP**, etc... In this case use the options listed in bullet point 2 to deal with them.
+ There is also a way to compile `setup.exe` files with WiX. [I only know of this "unofficial list" of switches to link to](http://windows-installer-xml-wix-toolset.687559.n2.nabble.com/Running-Burn-driven-installer-in-quiet-mode-command-line-parameters-tt5913001.html#a5913628). The basic uninstall format is: `setup.exe /uninstall /passive /norestart`
+ You can also open a command prompt and go `setup.exe /?` to get a list of parameters for WiX (and other) `setup.exe` files.
+ There is also a WiX tool that can be used to **decompile MSI files** and also to **decompress a WiX setup.exe file**. See relevant section here: [How can I compare the content of two (or more) MSI files?](https://stackoverflow.com/questions/48482545/how-can-i-compare-the-content-of-two-or-more-msi-files/48482546#48482546)
4. ***Other Tools***: There are **many tools** that can be used to **create installers / setup.exe files**. Here are some of the bigger ones (just FYI - no command line switches to find here, just product information):
* Non-MSI installer tools: <http://www.installsite.org/pages/en/tt_nonmsi.htm>
* Windows installer tools: <http://www.installsite.org/pages/en/msi/authoring.htm>
* Sys-admin tools for deployment: <http://www.installsite.org/pages/en/msi/admins.htm>
* This may also be of help: [Wix - How to run/install application without UI](https://stackoverflow.com/questions/48148875/wix-how-to-run-install-application-without-ui/48157650#48157650)
5. ***Apps***: And there is the whole new world of apps.
* [How to silent install an UWP appx?](https://stackoverflow.com/questions/49567498/how-to-silent-install-an-uwp-appx/49571537#49571537)
+ [Add-AppxPackage](https://technet.microsoft.com/en-us/library/hh856048.aspx)
+ [Remove-AppxPackage](https://technet.microsoft.com/en-us/library/hh856038.aspx)
* [Install UWP apps with App Installer](https://learn.microsoft.com/en-us/windows/uwp/packaging/appinstaller-root)
* [Install Windows Store App package (\*.appx) for all users](https://superuser.com/questions/647927/install-windows-store-app-package-appx-for-all-users)
* [Install package (appxbundle) via .appinstaller to all users on machine](https://stackoverflow.com/questions/50326370/install-package-appxbundle-via-appinstaller-to-all-users-on-machine)
---
Other Links:
* [How to create windows installer](https://stackoverflow.com/questions/49624070/how-to-create-windows-installer/49632260#49632260)
Upvotes: 1 |
2018/03/22 | 2,815 | 9,694 | <issue_start>username_0: The Android Emulator has been annoying me for days. It runs properly for some days and then gives some kind of error the next day. This time, when I click to start the emulator, it appears in the taskbar and then crashes, giving me the following errors:
```
Emulator: init: Could not find wglGetExtensionsStringARB!
Emulator: getGLES2ExtensionString: Could not find GLES 2.x config!
Emulator: Failed to obtain GLES 2.x extensions string!
Emulator: Could not initialize emulated framebuffer
```
I am using Android studio 3.0.1 and Emulator: 27.1.12
Note: I have tried using software graphics rendering; it reduces the performance/responsiveness of the virtual device. My question is: it was running perfectly yesterday, so what has happened to it today?<issue_comment>username_1: I figured it out myself, and it works for many projects which have .exe setups.
Following is the format
1. Open terminal with admin rights
2. Go to the path where the setup file that you used to install the product is located.
3. Once there, type: setupname.exe /uninstall /q
Upvotes: 0 |
2018/03/22 | 912 | 2,378 | <issue_start>username_0: I am using Chart.js for developing my pie chart. I want to position the legend just like this. Can anyone please help me solve this?
[](https://i.stack.imgur.com/G9vv0.jpg)
Here is my code... I actually want the result just like in the picture, and I get an error when I use CSS zoom:70% in the HTML, which makes the JavaScript not work.
```html
body{ background-color: #1f1d1d; font-family: Roboto, Myriad Pro, Segoe UI;zoom:70%;}
.col-sm-1.content { border: 2px solid #1f1d1d}
*Browse by Channel*
var ctx = document.getElementById('myChartBBC').getContext('2d');
var chart = new Chart(ctx, {
// The type of chart we want to create
type: 'pie',
// The data for our dataset
data: {
labels: ["USSD", "URP", "MyTsel App", "Chatbot"],
datasets: [{
backgroundColor: ['rgb(12, 146, 204)',
'rgb(255, 67, 0)',
'rgb(131, 0, 255)',
'rgb(250, 255, 0)'
],
borderColor: ['rgb(12, 146, 204)',
'rgb(255, 67, 0)',
'rgb(131, 0, 255)',
'rgb(250, 255, 0)'
],
data: [73, 17, 3, 7],
}]
},
// Configuration options go here
options: {
legend:{
position: 'bottom',
labels:{
fontColor: "white"
}
}
}
});
```<issue_comment>username_1: Try this updated code.
```html
Pie Chart
body {
background-color: #1f1d1d;
font-family: Roboto, Myriad Pro, Segoe UI;
width: 800px;
height: 800px;
}
*Browse by Channel*
var config = {
type: 'pie',
data: {
labels: ["USSD", "URP", "MyTsel App", "Chatbot"],
datasets: [{
backgroundColor: ['rgb(12, 146, 204)',
'rgb(255, 67, 0)',
'rgb(131, 0, 255)',
'rgb(250, 255, 0)'
],
borderColor: ['rgb(12, 146, 204)',
'rgb(255, 67, 0)',
'rgb(131, 0, 255)',
'rgb(250, 255, 0)'
],
data: [73, 17, 3, 7],
}]
},
options: {
responsive: true,
legend: {
position: 'bottom',
labels: {
fontColor: "white",
boxWidth: 20,
padding: 20
}
}
}
};
window.onload = function() {
var ctx = document.getElementById('chart-area').getContext('2d');
window.myPie = new Chart(ctx, config);
};
```
Upvotes: 4 [selected_answer]<issue_comment>username_2: For later versions of Chart.js it has to go in the `plugins` section:
```
options: {
...
plugins: {
...
legend: {
position: 'bottom'
}
}
}
```
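For reference, a complete `options` object in that newer (v3+) style could look like this - a sketch adapted from the accepted answer's configuration, not code from the original post (note that v3 renamed `fontColor` to `color`):

```js
const options = {
  responsive: true,
  plugins: {
    legend: {
      position: 'bottom',
      labels: {
        color: 'white',  // was fontColor in v2
        boxWidth: 20,
        padding: 20
      }
    }
  }
};
```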
Upvotes: 0 |
2018/03/22 | 493 | 1,947 | <issue_start>username_0: I am new to ECMA classes.
In the following code, I have built a button class that is working fine. Now I am trying to call the prev_image() method from inside the click event listener. I know 'this' refers to the button instance, but I am not sure how to call a method from the Gallery class. Thanks for any help.
```
class Gallery{
constructor(){
}
draw(){
//build button
prevbtn.draw();
//button listener
document.getElementById('prevbtn').addEventListener('click', function(){
this.prev_image(); <--- this errors out
console.log('pressed'); <--this works
});
}
prev_image(){
console.log('previous image!');
}
}
```<issue_comment>username_1: With a function written that way, you need to bind `this`. Alternatively, change the method to an arrow function:
```
prev_image = () => {
console.log('previous image!');
}
```
And it should work. You no longer need to bind `this`, and it's also a lot cleaner.
Upvotes: 0 <issue_comment>username_2: ```
document.getElementById('prevbtn').addEventListener('click', ()=>{
this.prev_image();
console.log('pressed');
});
```
Use an arrow function here. An arrow function does not have its own `this`; it uses the `this` from the code that contains the arrow function.
Upvotes: 5 [selected_answer]<issue_comment>username_3: Try it by binding the context using `.bind(this)`
```js
class Gallery {
constructor() {}
draw() {
//build button
//prevbtn.draw();
//button listener
document.getElementById('prevbtn').addEventListener('click', function() {
this.prev_image();
console.log('pressed');
}.bind(this));
}
// prevbtn.draw(){
//console.log('prev btn')
//}
prev_image() {
console.log('previous image!');
}
}
var x = new Gallery();
x.draw();
```
```html
Click
```
Upvotes: 2 |
2018/03/22 | 514 | 1,969 | <issue_start>username_0: I'm trying to automatically login to a website using puppeteer, with the following script:
```
const puppeteer = require('puppeteer');
async function logIn(userName, password) {
const browser = await puppeteer.launch();
const page = await browser.newPage();
await page.goto('https://foo.com'); //anonymized host
await page.type('[name="Email"]', userName);
await page.type('[name="Pass"]', password);
page.click('[type=submit]');
await page.waitForNavigation({waitUntil: 'load'});
}
logIn('<EMAIL>')
```
The `await` for `waitForNavigation` eventually times out at 30 seconds. Launching puppeteer with `{headless: false}` on the same script, I can check in Chromium's devtools that `document.readyState` evals to "complete" way before the timeout threshold interval. Am I doing something wrong?<issue_comment>username_1: [According to Puppeteer documentation](https://github.com/puppeteer/puppeteer/blob/v5.5.0/docs/api.md#pageclickselector-options) the proper pattern for clicking on submit and waiting for navigation is like this:
```js
await Promise.all([
page.waitForNavigation({ waitUntil: 'load' }),
page.click('[type=submit]'),
]);
```
Upvotes: 6 [selected_answer]<issue_comment>username_2: Adding to the answer of username_1, I would like to present a more modern approach. Basically, instead of performing the operation and then calling `waitForNavigation`, the idea is to do it the other way around. Something like this:
```
//Get the promise which isn't resolved yet
const navigationPromise = page.waitForNavigation();
//Perform the activity (Click, Goto etc). Note that the link is any valid url
await page.goto(link);
//wait until the navigationPromise resolves
await navigationPromise;
```
This helps to avoid a race condition, as per [this](https://github.com/puppeteer/puppeteer/issues/3338) discussion.
This code makes use of async/await, which is much more readable than promise-based code.
Upvotes: 3 |
2018/03/22 | 729 | 2,824 | <issue_start>username_0: Let's say I have the following structure:
* package
+ module1
- class1
- class2
And the user of the package wants to use class1 and class2.
The way I understand it, I have two options, in my `__init__.py`:
```
from module1 import class1,class2
```
Which will let the user do something like `x = package.class1()`
Or:
```
import module1
```
Which will let the user do something like `x = package.module1.class1()`
Is there a preferred way?<issue_comment>username_1: One disadvantage of the 2nd approach:
* If the package has many modules, the user will have to do something like:
```
from package.module1 import *
from package.module2 import *
...
```
Which can be a little annoying.
**So, here is what I think would work best:**
1. I structure my package to subpackages based on context.
2. Main package will import subpackages.
3. Subpackage will import all symbols from all of its modules.
This way, if my user wants to use the package's `foo` features, he can simply do `from package.foo import *` and get `foo` and only `foo` features.
Upvotes: 0 <issue_comment>username_2: It really depends on the context - the semantics of your package, modules, classes, functions, etc., what they are used for, how they relate to one another, and of course the size of your package (how many subpackages / modules, how many names in each, etc.) - so there's no one-size-fits-all technical answer here.
If your modules are only an implementation detail (i.e. to avoid having one monolithic 10+k source file) and from a semantic POV some object (function, class, whatever) really belongs to the package's top level, then use the package's `__init__` as a facade so the user doesn't have to know where the name is effectively defined.
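For instance, a minimal sketch of that facade approach, assuming the layout from the question (`package/module1.py` defining `class1` and `class2`):

```
# package/__init__.py
from .module1 import class1, class2  # re-export so callers never need to know about module1
```

Client code can then simply do `from package import class1`, and you remain free to move `class1` to a different internal module later without breaking that import.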
If your modules/subpackages make sense as coherent namespaces (like for example `django.db` and `django.forms`) then you obviously want to keep them clearly distinct and let the user explicitly import from them.
Also keep in mind that once a submodule or subpackage name is publicly exposed (its existence is documented and client code imports from it) then it is de facto part of the API and you cannot rename it / move it / move things from it without breaking client code.
NB in this last case you can still turn a module into a package used as a facade (using the package's `__init__`) but the change must (well, should) be transparent to the client code.
As a general rule: start with the simplest implementation - either a plain single module if it's a small lib (only a few public names exposed) that doesn't require namespacing, or a package with a few "namespace" submodules for bigger or less cohesive libs - and if your modules grow too huge, turn them into subpackages, using the package's `__init__` to hide the fact from client code.
Upvotes: 1 |
2018/03/22 | 1,029 | 2,278 | <issue_start>username_0: I am trying to extract the grouped index values from a dataframe (df1) that represent a range of grouped times (start - end) and that encompass the grouped times given in another dataframe (df2). My required output is df3.
```
df1<-data.frame(group = c("A","A","A","A","B","B","B","B","C","C","C","C"),index=c(1,2,3,4,5,6,7,8,9,10,11,12),start=c(5,10,15,20,5,10,15,20,5,10,15,20),end=c(10,15,20,25,10,15,20,25,10,15,20,25))
df2<-data.frame(group = c("A","B","B","C","A","C"),time=c(11,17,24,5,5,22))
df3<-data.frame(time=c(11,17,24,5,5,22),index=c(2,7,8,9,1,12))
```
A previous related question I posted was answered with a neat pipe solution for ungrouped data:
```
library(tidyverse)
df1 %>%
select(from = start, to = end) %>%
pmap(seq) %>%
do.call(cbind, .) %>%
list(.) %>%
mutate(df2, new = .,
ind = map2(time, new, ~ which(.x == .y, arr.ind = TRUE)[,2])) %>%
select(-new)
```
Can this be modified to group by the 'group' column in both df1 and df2 to give the output df3?<issue_comment>username_1: With `group_by`, we can `nest` and then do a join
```
library(tidyverse)
df1 %>%
group_by(group) %>%
nest(-group) %>%
mutate(new = map(data, ~.x %>%
select(from = start, to = end) %>%
pmap(seq) %>%
do.call(cbind, .) %>%
list(.))) %>%
right_join(df2) %>%
mutate(ind = map2_int(time, new, ~ which(.x == .y[[1]], arr.ind = TRUE)[,2]),
ind = map2_dbl(ind, data, ~ .y$index[.x])) %>%
select(time, ind)
# A tibble: 6 x 2
# time ind
#
#1 11.0 2.00
#2 17.0 7.00
#3 24.0 8.00
#4 5.00 9.00
#5 5.00 1.00
#6 22.0 12.0
```
Upvotes: 3 [selected_answer]<issue_comment>username_2: Here is something nice with data.table,
```
df1<-data.table(group = c("A","A","A","A","B","B","B","B","C","C","C","C"),index=c(1,2,3,4,5,6,7,8,9,10,11,12),start=c(5,10,15,20,5,10,15,20,5,10,15,20),end=c(10,15,20,25,10,15,20,25,10,15,20,25))
df2<-data.table(group = c("A","B","B","C","A","C"),time=c(11,17,24,5,5,22))
df1[df2,on=.(group,start<=time,end>=time)][,c("start","index")]
start index
1: 11 2
2: 17 7
3: 24 8
4: 5 9
5: 5 1
6: 22 12
```
You can then rename the `start` column to `time`, and you have your answer, I think.
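For example, a minimal sketch of that last renaming step, continuing from the join above (`setnames` renames the column by reference):

```
res <- df1[df2, on = .(group, start <= time, end >= time)][, c("start", "index")]
setnames(res, "start", "time")
res
```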
Upvotes: 1 |
2018/03/22 | 724 | 1,660 | <issue_start>username_0: I upload encoded String image on image view but I got
>
> Caused by: java.lang.IllegalArgumentException: bad base-64 error.
>
>
>
```
byte[] decodedString = Base64.decode(value, Base64.DEFAULT);
Bitmap bitmap = BitmapFactory.decodeByteArray(decodedString, 0,decodedString.length);
dashboard_img.setImageBitmap(bitmap);
dashboard_img.invalidate();
```
`Base64.URL_SAFE` also not working for base 64 error. |
2018/03/22 | 1,279 | 4,512 | <issue_start>username_0: I'm busy working through an ETL pipeline, but for this particular problem, I need to take a table of data, and turn each column into a set - that is, a unique array.
I'm struggling to wrap my head around how I would accomplish this within the Kiba framework.
Here's the essence of what I'm trying to achieve:
**Source**:
```ruby
[
{ dairy: "Milk", protein: "Steak", carb: "Potatoes" },
{ dairy: "Milk", protein: "Eggs", carb: "Potatoes" },
{ dairy: "Cheese", protein: "Steak", carb: "Potatoes" },
{ dairy: "Cream", protein: "Chicken", carb: "Potatoes" },
{ dairy: "Milk", protein: "Chicken", carb: "Pasta" },
]
```
**Destination**
```ruby
{
dairy: ["Milk", "Cheese", "Cream"],
protein: ["Steak", "Eggs", "Chicken"],
carb: ["Potatoes", "Pasta"],
}
```
Is something like this a) doable in Kiba, and b) even advisable to do in Kiba?
Any help would be greatly appreciated.
Update - partially solved.
==========================
I've found a partial solution. This transformer class will transform a table of rows into a hash of sets, but I'm stuck on how to get that data out using an ETL Destination. I suspect I'm using Kiba in a way in which it's not intended to be used.
```ruby
class ColumnSetTransformer
def initialize
@col_set = Hash.new(Set.new)
end
def process(row)
row.each do |col, col_val|
@col_set[col] = @col_set[col] + [col_val]
end
@col_set
end
end
```<issue_comment>username_1: OK - So, using Kiba within a job context doesn't seem to be the way this tool was intended to be used. I wanted to use Kiba because I've already implemented a lot of related E, T, and L code for this project, and the reuse would be huge.
So, if I've got the code to reuse, but I can't use it within the Kiba framework, I can just call it as if it was normal code. This is all thanks to Thibaut's excellently simple design!
Here's how I solved the problem:
```ruby
source = CSVOrXLSXSource.new("data.xlsx", document_config: { some: :settings })
xformer = ColumnSetTransformer.new
source.each do |row|
xformer.process(row)
end
p xformer.col_set # col_set must be attr_reader on this class.
```
And now I have my data handily transformed :)
Upvotes: 0 <issue_comment>username_2: Your solution will work just fine, and indeed the reason to have such a design in Kiba (mostly "Plain Old Ruby Objects") is to make it easy to call the components yourself, should you need it! (this is very useful for testing!).
That said here are a few extra possibilities.
What you are doing is a form of aggregation, which can be implemented in various ways.
Buffering destination
---------------------
Here the buffer would be a single row, actually. Use a code such as:
```ruby
class MyBufferingDestination
attr_reader :single_output_row
def initialize(config:)
@single_output_row = Hash.new { |hash, key| hash[key] = [] } # one hash keyed by column, so += below accumulates values per column
end
def write(row)
row.each do |col, col_val|
single_output_row[col] += [col_val]
end
end
def close # will be called by Kiba at the end of the run
# here you'd write your output
end
end
```
Using an instance variable to aggregate + post\_process block
-------------------------------------------------------------
```ruby
pre_process do
@output_row = {}
end
transform do |row|
row.each do |col, col_val|
@output_row = # SNIP
end
row
end
post_process do
# convert @output_row to something
# you can invoke a destination manually, or do something else
end
```
Soon possible: using a buffering transform
------------------------------------------
As described [here](https://github.com/thbar/kiba/issues/53), it will soon be possible to create buffering transforms, to better decouple the aggregating mechanism from the destination itself.
It will go like this:
```ruby
class MyAggregatingTransform
def process(row)
@aggregate += xxx
nil # remove the row from the pipeline
end
def close
# not yet possible, but soon
yield @aggregate
end
end
```
This will be the best design, because then you'll be able to reuse existing destinations, without modifying them to support buffering, so they'll become more generic & reusable:
```ruby
transform MyAggregatingTransform
destination MyJSONDestination, file: "some.json"
```
It will even be possible to have multiple rows in the destination, by detecting boundaries in the input dataset, & yielding accordingly.
I will update the SO answer once this is possible.
Upvotes: 3 [selected_answer] |
2018/03/22 | 2,934 | 11,434 | <issue_start>username_0: I followed [this](https://www.tutorialspoint.com/android/android_json_parser.htm) to Read JSON data in android
There I am passing the result data to another activity.
```
public class MainAct1 extends Activity {
private static String urlString;
private static final String My_TAG= "Log Status";
@Override
protected void onCreate(Bundle savedInstanceState) {
super.onCreate(savedInstanceState);
setRequestedOrientation (ActivityInfo.SCREEN_ORIENTATION_PORTRAIT);
setContentView(R.layout.data);
if(Dcon.isInternetAvailable(this))
{
try {
urlString = "https://example.net/api_json";
new ProcessJSON(this).execute();
}
catch (Exception e) {
AlertDialog.Builder builder = new AlertDialog.Builder(MainAct1.this);
builder.setMessage("Note: Your Server ID is Invalid \n Please check the Server Status");
builder.setTitle("Please Check Server Details");
builder.setPositiveButton("OK",null);
builder.setPositiveButton("OK", new DialogInterface.OnClickListener() {
@Override
public void onClick(DialogInterface dialog, int which) {
dialog.dismiss();
finish();
}
});
AlertDialog dialog = builder.create();
dialog.show();
}
}
else
{
AlertDialog alertDialog = new AlertDialog.Builder(MainAct1.this).create();
alertDialog.setTitle("Connection Error !");
alertDialog.setMessage("Internet not available, Check your internet connectivity and try again");
alertDialog.setButton("OK", new DialogInterface.OnClickListener() {
public void onClick(DialogInterface dialog, int which) {
android.os.Process.killProcess(android.os.Process.myPid());
System.exit(1);
finish();
}
});
alertDialog.show();
}
}
private class ProcessJSON extends AsyncTask<Void, Void, Void> {
public Context context;
String FinalJSonResult;
public ProcessJSON(Context context) {
this.context = context;
}
@Override
protected void onPreExecute() {
super.onPreExecute();
}
@Override
protected Void doInBackground(Void... arg0) {
HttpHandler sh = new HttpHandler(urlString);
try {
sh.ExecutePostRequest();
if (sh.getResponseCode() == 200) {
FinalJSonResult = sh.getResponse();
if (FinalJSonResult != null) {
try {
JSONObject JObject = new JSONObject(FinalJSonResult);
JSONObject response = JObject.getJSONObject("response");
if(response.has("status")) {
String status = response.getString("status");
MainAct1.this.finish();
Intent op = new Intent(MainAct1.this, MainRes1.class);
op.putExtra("mydata", status);
op.setFlags(Intent.FLAG_ACTIVITY_REORDER_TO_FRONT);
op.setFlags(Intent.FLAG_ACTIVITY_NEW_TASK);
op.setFlags(Intent.FLAG_ACTIVITY_CLEAR_TASK);
op.setFlags(Intent.FLAG_ACTIVITY_NO_ANIMATION);
startActivity(op);
}
else {
Toast.makeText(context, "No JSON data", Toast.LENGTH\_SHORT).show();
AlertDialog alertDialog = new AlertDialog.Builder(NSdata.this).create();
alertDialog.setTitle("Server Error !");
alertDialog.setMessage("No Data Received");
alertDialog.setButton("OK", new DialogInterface.OnClickListener() {
public void onClick(DialogInterface dialog, int which) {
android.os.Process.killProcess(android.os.Process.myPid());
System.exit(1);
finish();
}
});
alertDialog.show();
}
}
catch (JSONException e) {
AlertDialog.Builder builder = new AlertDialog.Builder(MainAct1.this);
builder.setMessage("Note: Your Server ID is Invalid \n Please check the Server Status");
builder.setTitle("Please Check Server Details");
builder.setPositiveButton("OK",null);
builder.setPositiveButton("OK", new DialogInterface.OnClickListener() {
@Override
public void onClick(DialogInterface dialog, int which) {
dialog.dismiss();
finish();
}
});
AlertDialog dialog = builder.create();
dialog.show();
}
catch (JSONException e) {
}
}
else{
AlertDialog alertDialog = new AlertDialog.Builder(MainAct1.this).create();
alertDialog.setTitle("User Error !");
alertDialog.setMessage("No Data Received");
alertDialog.setButton("OK", new DialogInterface.OnClickListener() {
public void onClick(DialogInterface dialog, int which) {
android.os.Process.killProcess(android.os.Process.myPid());
System.exit(1);
finish();
}
});
alertDialog.show();
}
}
else {
Toast.makeText(context, sh.getErrorMessage(), Toast.LENGTH_SHORT).show();
}
}
catch (Exception e) {
e.printStackTrace();
}
return null;
}
}
@Override
public void onBackPressed() {
moveTaskToBack(false);
Toast.makeText(this, "Please wait for a While.. Don't Go back .!", Toast.LENGTH\_SHORT).show();
}
}
```
This is my JSON data:
`{"response":{"status":"Active"}}`
or
`{"response":{"status":"Active:4 in stock"}}`
So now I am facing issues with:
**1. Non-JSON**: sometimes JSON data is not received due to a server error,
or it will show just some HTML headings...
I need to handle that data. Can anyone suggest how to deal with this kind?
and
**2. Empty JSON data**: sometimes the result may be null, like
`{"response":{"status":""}}`
or
`{"response":{}}`
Can anyone suggest how to handle those kinds? I have already added alerts, but it's not working.
Now I am handling these two kinds of data, but sometimes I am getting empty results.
**Update**
For valid JSON data it shows the result in the next activity/page in a TextView...
Valid JSON data: `{"response":{"status":"Active"}}` or `{"response":{"status":"Please set the data."}}`
I am showing the result on the next page.
But sometimes I will get invalid JSON data like `{"response":{"status":}}`, or just an HTML page with welcome text... or an "Oops, page not found" page...
So I want to handle them: if I get invalid JSON data, I want to show an alert to the user. That's what I am trying, but it's not working.
Please help me with these types.<issue_comment>username_1: Please try this:
```
if(response.has("status")) {
// Key found in Response JsonObject
String status = response.getString("status");
} else {
//Status Key not found in Response JsonObject
}
```
Upvotes: 1 <issue_comment>username_2: I recommend using a library like Gson, and validating the data through try/catch and JsonSyntaxException.
But the original reason for the bug is the shape of the data.
Rewrite the shape of the JSON data on the server side.
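For illustration, here is a minimal sketch of that Gson-based validation; the POJO classes and the helper method are my own illustration, not code from the original answer:

```java
import com.google.gson.Gson;
import com.google.gson.JsonSyntaxException;

public class ResponseValidator {
    // Hypothetical POJOs mirroring {"response":{"status":"..."}}
    static class ApiResponse { Response response; }
    static class Response { String status; }

    /** Returns the status string, or null when the body is not usable JSON. */
    static String extractStatus(String body) {
        try {
            ApiResponse parsed = new Gson().fromJson(body, ApiResponse.class);
            if (parsed == null || parsed.response == null
                    || parsed.response.status == null || parsed.response.status.isEmpty()) {
                return null; // valid JSON, but "status" is missing or empty -> show a "no data" alert
            }
            return parsed.response.status;
        } catch (JsonSyntaxException e) {
            return null; // not JSON at all (e.g. an HTML error page) -> show an "invalid data" alert
        }
    }
}
```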
Upvotes: 1 <issue_comment>username_3: ```
JSONObject JObject = new JSONObject(FinalJSonResult);
JSONObject response = JObject.getJSONObject("response");
// Check Key found or Not
if(response.has("status")) {
String status = response.getString("status");
// Check if Status Empty or Not
if(status.isEmpty()){
}
} else {
//Status Key not found
}
```
* `DoInBackground()` only gets executed on a different thread other than the main UI thread.
* So you need to write AlertDialog in `onPostExecute`.
* In `ProcessJSON` class use `FinalJSonResult` string to handle different situation in onPostExecute().
>
> **ProcessJSON** Class
>
>
>
```
private class ProcessJSON extends AsyncTask<Void, Void, Void> {
public Context context;
String FinalJSonResult;
public ProcessJSON(Context context) {
this.context = context;
}
@Override
protected void onPreExecute() {
super.onPreExecute();
FinalJSonResult = "";
}
@Override
protected Void doInBackground(Void... arg0) {
HttpHandler sh = new HttpHandler(urlString);
try {
sh.ExecutePostRequest();
if (sh.getResponseCode() == 200) {
FinalJSonResult = sh.getResponse();
if (FinalJSonResult != null) {
try {
JSONObject JObject = new JSONObject(FinalJSonResult);
JSONObject response = JObject.getJSONObject("response");
if(response.has("status")) {
String status = response.getString("status");
MainAct1.this.finish();
Intent op = new Intent(MainAct1.this, MainRes1.class);
op.putExtra("mydata", status);
op.setFlags(Intent.FLAG_ACTIVITY_REORDER_TO_FRONT);
op.setFlags(Intent.FLAG_ACTIVITY_NEW_TASK);
op.setFlags(Intent.FLAG_ACTIVITY_CLEAR_TASK);
op.setFlags(Intent.FLAG_ACTIVITY_NO_ANIMATION);
startActivity(op);
}
else {
FinalJSonResult = "No JSON data";
}
}
catch (JSONException e) {
FinalJSonResult = "Your Server ID is Invalid";
}
}
else{
FinalJSonResult = "User Error";
}
}
else {
Toast.makeText(context, sh.getErrorMessage(), Toast.LENGTH_SHORT).show();
}
}
catch (Exception e) {
e.printStackTrace();
}
return null;
}
@Override
protected void onPostExecute(Void aVoid) {
super.onPostExecute(aVoid);
if(FinalJSonResult.equalsIgnoreCase("No JSON data")){
// Your AlertDialog code....
}else if(FinalJSonResult.equalsIgnoreCase("Your Server ID is Invalid")){
// Your AlertDialog code....
}else if(FinalJSonResult.equalsIgnoreCase("User Error")){
// Your AlertDialog code....
}
}
}
```
Upvotes: 3 [selected_answer]<issue_comment>username_4: AsyncTask has 4 methods i.e. onPreExecute(), doInBackground(Params...), onProgressUpdate(Progress...), onPostExecute(Result).
The doInBackground() method is used to execute the task in the background. For any update on the UI we use the onPostExecute() method, but if you want to update the UI from doInBackground() then please **try this**:
```
runOnUiThread(new Runnable() {
public void run() {
Toast.makeText(.this, "Your message", Toast.LENGTH\_SHORT).show();
// or you can use your alert dialog here
}
});
```
For **JSON** you can try below code:
```
if (sh.getResponseCode() == 200) {
if(response.has("status")) {
if (!response.isNull("status")){
String status = response.getString("status");
}else{
//Your alert/toast by using runOnUiThread(new Runnable() {...}
}
}else{
//Your alert/toast by using runOnUiThread(new Runnable() {...}
}
}else{
//Your alert/toast by using runOnUiThread(new Runnable() {...}
}
```
I hope this code will help you
Upvotes: 1 <issue_comment>username_5: Use the isNull method to check whether the JSON key/value is null or not. Add a try/catch block to handle unknown data. You can try the code below to achieve this.
```
try{
JSONObject jsonObject = new JSONObject(RESPONSE_STRING);
JSONObject subJson = jsonObject.getJSONObject("response");
if (subJson.isNull("status")) {
/* Do your work: "status" is null or missing here */
}
} catch (JSONException e) {
Toast.makeText(getApplicationContext(), getString(R.string.something_wrong), Toast.LENGTH_SHORT).show();
e.printStackTrace();
}
```
Same way you check for "response" as well.
Upvotes: 0 <issue_comment>username_6: Create an interface with two functions named 'onSuccessJson(Json json)' and 'onFailure(String error)'. While getting the response, check whether it is JSON or HTML like the above code does, and pass the respective data to the interface functions. You will receive the response wherever the interface is implemented. On 'onFailure(String error)' show the alert dialog; on a success response the call will occur on 'onSuccessJson(Json json)' and you can continue from there. AsyncTask works like a thread, so don't call context-related things in 'doInBackground'.
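A minimal sketch of such a callback contract (the method names follow the answer above; the parameter types are my assumption):

```java
import org.json.JSONObject;

// Decide in doInBackground() which case applies, but invoke the callback from
// onPostExecute() so that dialogs and Toasts run on the main UI thread.
public interface ResponseCallback {
    void onSuccessJson(JSONObject json); // body parsed as valid JSON
    void onFailure(String error);        // HTML page, empty body, or malformed JSON
}
```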
Upvotes: 0 <issue_comment>username_7: You need to add a "return code" field in your JSON data. Then you check whether the HTTP status code equals 200 and the "return code" equals your success code.
Upvotes: 1 |
2018/03/22 | 1,032 | 3,054 | <issue_start>username_0: In my settings, I'm using django-environ to set the key:
```
import environ
env = environ.Env()
SECRET_KEY = env.read_env('SECRET_KEY')
```
At the root of my project, I do have a .env file with the actual secret key:
.env:
```
SECRET_KEY=<KEY>
```
The error comes up during `collectstatic`. Here's the traceback:
```
Traceback (most recent call last):
File "./manage.py", line 10, in
execute\_from\_command\_line(sys.argv)
File "/Users/sju/.virtualenvs/blog-api/lib/python2.7/site-packages/django/core/management/\_\_init\_\_.py", line 364, in execute\_from\_command\_line
utility.execute()
File "/Users/sju/.virtualenvs/blog-api/lib/python2.7/site-packages/django/core/management/\_\_init\_\_.py", line 356, in execute
self.fetch\_command(subcommand).run\_from\_argv(self.argv)
File "/Users/sju/.virtualenvs/blog-api/lib/python2.7/site-packages/django/core/management/\_\_init\_\_.py", line 194, in fetch\_command
settings.INSTALLED\_APPS
File "/Users/sju/.virtualenvs/blog-api/lib/python2.7/site-packages/django/conf/\_\_init\_\_.py", line 56, in \_\_getattr\_\_
self.\_setup(name)
File "/Users/sju/.virtualenvs/blog-api/lib/python2.7/site-packages/django/conf/\_\_init\_\_.py", line 41, in \_setup
self.\_wrapped = Settings(settings\_module)
File "/Users/sju/.virtualenvs/blog-api/lib/python2.7/site-packages/django/conf/\_\_init\_\_.py", line 110, in \_\_init\_\_
mod = importlib.import\_module(self.SETTINGS\_MODULE)
File "/usr/local/Cellar/python/2.7.12\_2/Frameworks/Python.framework/Versions/2.7/lib/python2.7/importlib/\_\_init\_\_.py", line 37, in import\_module
\_\_import\_\_(name)
File "/Users/sju/Dev/django/blog-api/src/blog/settings/base.py", line 27, in
SECRET\_KEY = env('SECRET\_KEY')
File "/Users/sju/.virtualenvs/blog-api/lib/python2.7/site-packages/environ/environ.py", line 130, in \_\_call\_\_
return self.get\_value(var, cast=cast, default=default, parse\_default=parse\_default)
File "/Users/sju/.virtualenvs/blog-api/lib/python2.7/site-packages/environ/environ.py", line 275, in get\_value
raise ImproperlyConfigured(error\_msg)
django.core.exceptions.ImproperlyConfigured: Set the SECRET\_KEY environment variable
```
What am I missing?<issue_comment>username_1: I think you are using it wrong. Based on the [documentation](https://django-environ.readthedocs.io/en/latest/) you first need to load the `.env` file and then use it.
Try this:
```
import environ
env = environ.Env()
environ.Env.read_env()
SECRET_KEY = env('SECRET_KEY')
```
Also make sure the `.env` file is located in the same directory as the settings file. Alternatively, you can explicitly pass the location of the file to `read_env()`.
Upvotes: 5 [selected_answer]<issue_comment>username_2: Try this:
```
import os
import environ
BASE_DIR = Path(__file__).resolve().parent.parent
env = environ.Env()
environ.Env.read_env(os.path.join(BASE_DIR, '.env'))
SECRET_KEY = env('SECRET_KEY')
DEBUG = env('DEBUG')
```
Successful.
Upvotes: 2 |
2018/03/22 | 720 | 2,191 | <issue_start>username_0: I am looking for a function that can find matches in between the columns and output if it finds a matching row outputs `"has matches"` else `"no matches"`
for example
```
df = data.frame(id=c("good","bad","ugly","dirty","clean","frenzy"),di=c(1,2,"good","dirty",4,"ugly"))
> df
id di
1 good 1
2 bad 2
3 ugly good
4 dirty dirty
5 clean 4
6 frenzy ugly
```
I want to check whether the `di` column has matches in the `id` column, such that
```
> df
id di match
1 good 1 no matches
2 bad 2 no matches
3 ugly good has matches
4 dirty dirty has matches
5 clean 4 no matches
6 frenzy ugly has matches
```
This is the kind of approach I am looking for:
```
match_func <- function(x,y){
}
df%>%
do(match_func(.$id,.$di))
```
Thanks in advance!<issue_comment>username_1: Just use `%in%` with `ifelse`
```
df %>%
mutate(match = ifelse(di %in% id, "has matches", "no matches"))
```
Or `case_when`
```
df %>%
mutate(match = case_when(di %in% id ~ "has matches",
TRUE ~ "no matches"))
```
---
This can be directly wrapped in a function. Assuming that we are passing unquoted names, then convert it to quosure with `enquo` and then evaluate within the `mutate` by `!!`
```
f1 <- function(dat, col1, col2) {
col1 = enquo(col1)
col2 = enquo(col2)
dat %>%
mutate(match = case_when(!! (col1) %in% !!(col2) ~ "has matches",
TRUE ~ "no matches"))
}
f1(df, di, id)
# id di match
#1 good 1 no matches
#2 bad 2 no matches
#3 ugly good has matches
#4 dirty dirty has matches
#5 clean 4 no matches
#6 frenzy ugly has matches
```
Upvotes: 3 [selected_answer]<issue_comment>username_2: Using `base R` and without `if/else` statement, you can compute `match` column with:
```
df$match <- c("no matches", "has matches")[(df$di %in% df$id) + 1]
df
# id di match
#1 good 1 no matches
#2 bad 2 no matches
#3 ugly good has matches
#4 dirty dirty has matches
#5 clean 4 no matches
#6 frenzy ugly has matches
```
Upvotes: 2 |
2018/03/22 | 709 | 2,276 | <issue_start>username_0: I want to add numbers on a Google Maps marker. In code I have taken the marker as a custom image view, and I want to place numbers on that custom marker image view, [like in the posted image](https://i.stack.imgur.com/klnHk.png).
I tried it like this:
```
Paint mPaint=new Paint();
mPaint = new Paint(Paint.ANTI_ALIAS_FLAG);
mPaint.setColor(Color.WHITE);
// mPaint.setXfermode(new PorterDuffXfermode(PorterDuff.Mode.SRC_OUT));
//mPaint.setAntiAlias(true);
Bitmap.Config conf = Bitmap.Config.ARGB_8888;
Bitmap bmp = Bitmap.createBitmap(200, 50, conf);
Canvas canvas = new Canvas(bmp);
canvas.drawText("TEXT", 0, 50, mPaint);
marker = googleMap.addMarker(new MarkerOptions()
.position(new LatLng(latitude, longitude))
.title(merchant_name)
.anchor(0.5f, 1)
.icon(BitmapDescriptorFactory.fromBitmap(bmp)));
```
 |
2018/03/22 | 574 | 1,798 | <issue_start>username_0: I am looking for a simpler solution.
There is a **tool A**.
It will process the XML of below format
```
```
There is a **tool B**.
It will process the XML of below format
```
```
I have to process both XML formats in the same tool.
XML-wise, the difference is only a few tag names.
In Java I could parse this and use logic to differentiate them.
But is there any easy way to do this? Is there any parser to do this? Is there any simpler way to do this? |
2018/03/22 | 453 | 1,492 | <issue_start>username_0: Here is my route: `@app.route('/')`. In it I use `request.args.get('page')` for pagination. But the problem I am having is, if I go to my browser and visit `localhost:5000/?page=2` flask returns a 404. What is the reason for this? It works fine on `localhost:5000` but I want to supply a page. How do I do this?
EDIT: Here is my route:
```
from flaskblog import app
from flaskblog.models import Post # Flask-SQLAlchemy
@app.route('/')
def blog_index():
page_num = int(request.args.get('page', 1))
post_data = Post.query.paginate(per_page=10, page=page_num).items
return render_template('index.html', posts=post_data)
```
And for the data I have a single post.<issue_comment>username_1: First, you didn't pass in `page` as a parameter into your route;
Second, your route is not designed to handle subsequent page(s).
In order for the route to handle the default page 1 and subsequent pages, you specify the route as:
```
@app.route('/', defaults={'page': 1})
@app.route('/page/<int:page>/')
def index(page):
# rest of the code
```
Upvotes: 0 <issue_comment>username_2: I found out my error. In Flask-SQLAlchemy pagination, if nothing is found in a page, it `abort(404)`s. To prevent this from happening I did this:
```
#...
Post.query.paginate(per_page=10, page=page_num, error_out=False).items #the error_out=False part
#...
```
And then I handle the problems on my own such as negative page number, no posts found in page etc.
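For example, a minimal sketch of that kind of manual handling (the exact clamping rules here are my own assumption, not from the original answer):

```
@app.route('/')
def blog_index():
    try:
        page_num = int(request.args.get('page', 1))
    except ValueError:
        page_num = 1
    page_num = max(page_num, 1)  # guard against zero or negative page numbers
    pagination = Post.query.paginate(per_page=10, page=page_num, error_out=False)
    return render_template('index.html', posts=pagination.items)
```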
Upvotes: 2 [selected_answer] |
2018/03/22 | 417 | 1,320 | <issue_start>username_0: controller **PagesController**
```
public function index()
{
$client = new Client();
$hospital_id = 37;
$res = $client->request('POST', 'http://vph.com/api/GetService'.$hospital_id,[
'form_params' => [
'body'=>$hospital_id
]
]);
```
I want to hit the API but it is not working.
**ERROR**
```
Client error: `POST http://vpshealth.com/api/GetServiceList37` resulted in a `404 Not Found` response: 404 Not Found 404 Not Found
=============
---
(truncated...)
```
[error image](https://i.stack.imgur.com/uI5JY.png)<issue_comment>username_1: Missing **/** in the URL.
The URL should be **<http://vph.com/api/GetService/>'.$hospital_id**
```
public function index()
{
$client = new Client();
$hospital_id = 37;
$res = $client->request('POST', 'http://vph.com/api/GetService/'.$hospital_id,[
'form_params' => [
'body'=>$hospital_id
]
]);
```
Upvotes: 1 <issue_comment>username_2: ```
$client = new \GuzzleHttp\Client();
$hospital_id = 37;
$response = $client->request("POST", "http://vpshealth.com/api/GetServiceList?hospital_id=".$hospital_id,
[
'json'=>['body'=>$hospital_id]
]);
$data=$response->getbody();
$data=json_decode($data);
dd($data);
```
**Solved**
Upvotes: 0 |
2018/03/22 | 1,097 | 4,527 | <issue_start>username_0: I'm using Spring Boot and have two very similar services which I'd like to configure in my `application.yml`.
The configuration looks roughly like this:
```
serviceA.url=abc.com
serviceA.port=80
serviceB.url=def.com
serviceB.port=8080
```
Is it possible to create one class annotated with `@ConfigurationProperties` and set the prefix at the injection point?
e.g.
```
@Component
@ConfigurationProperties
public class ServiceProperties {
private String url;
private String port;
// Getters & Setters
}
```
and then in the Services itself:
```
public class ServiceA {
@Autowired
@SomeFancyAnnotationToSetPrefix(prefix="serviceA")
private ServiceProperties serviceAProperties;
// ....
}
```
Unfortunately I haven't found something in the documentation about such a feature... Thank you very much for your help!<issue_comment>username_1: The **@ConfigurationProperties** annotation has a field to set the prefix configuration. Here is my example:
```
@Component
@ConfigurationProperties(prefix = "b2gconfig")
public class B2GConfigBean {
private String account;
public String getAccount() {
return account;
}
public void setAccount(String account) {
this.account = account;
}
public String getKey() {
return key;
}
public void setKey(String key) {
this.key = key;
}
private String key;
}
```
And my application.properties file:
[](https://i.stack.imgur.com/HmgZX.png)
Upvotes: -1 <issue_comment>username_2: I achieved almost the same thing that you are trying to do.
First, register each properties bean.
```
@Bean
@ConfigurationProperties(prefix = "serviceA")
public ServiceProperties serviceAProperties() {
return new ServiceProperties ();
}
@Bean
@ConfigurationProperties(prefix = "serviceB")
public ServiceProperties serviceBProperties() {
return new ServiceProperties ();
}
```
And at the service (or wherever the properties will be used), put a @Qualifier and specify which property bean should be autowired.
```
public class ServiceA {
@Autowired
@Qualifier("serviceAProperties")
private ServiceProperties serviceAProperties;
}
```
Upvotes: 6 [selected_answer]<issue_comment>username_3: username_2's example worked perfectly, except for JavaBean Validation.
I had a @NotNull annotation on one of the properties:
```
public class ServiceProperties {
@NotNull
private String url;
private String port;
// Getters & Setters
```
}
As a consequence, the application startup failed with following error message:
```
***************************
APPLICATION FAILED TO START
***************************
Description:
Binding to target ch.sbb.hop.commons.infrastructure.hadoop.spark.SparkJobDeployerConfig@730d2164 failed:
Property: url
Value: null
Reason: may not be null
Action:
Update your application's configuration
```
After removing the annotation, the application started up with correct property binding.
In conclusion, I think there is an issue with JavaBean Validation not getting the correctly initialized instance, maybe because of a missing proxy on the configuration methods.
Upvotes: 0 <issue_comment>username_4: Following this post [Guide to @ConfigurationProperties in Spring Boot](https://www.baeldung.com/configuration-properties-in-spring-boot#bean) you can create a simple class without annotations:
```
public class ServiceProperties {
private String url;
private String port;
// Getters & Setters
}
```
And then create the @Configuration class using @Bean annotation:
```
@Configuration
@PropertySource("classpath:name_properties_file.properties")
public class ConfigProperties {
@Bean
@ConfigurationProperties(prefix = "serviceA")
public ServiceProperties serviceA() {
return new ServiceProperties ();
}
@Bean
@ConfigurationProperties(prefix = "serviceB")
public ServiceProperties serviceB(){
return new ServiceProperties ();
}
}
```
Finally, you can get the properties as follows:
```
@SpringBootApplication
public class Application implements CommandLineRunner {
@Autowired
private ConfigProperties configProperties ;
private void watheverMethod() {
// For ServiceA properties
System.out.println(configProperties.serviceA().getUrl());
// For ServiceB properties
System.out.println(configProperties.serviceB().getPort());
}
}
```
Upvotes: 3 |
2018/03/22 | 3,692 | 12,050 | <issue_start>username_0: Hello, I have the following problem: I need to write a @Query in a JpaRepository that will return a list of transactions. My current query looks like this:
```
@Query("SELECT t FROM Transaction t WHERE t.property IN :property AND (t.createdAt BETWEEN :dateFrom AND :dateTo) GROUP BY t.transactionType")
List getAllByPropertyAndDatesBetweenGroupedByTransactionType(@Param("property")List property, @Param("dateFrom") Date dateFrom, @Param("dateTo") Date dateTo);
```
All parameters passed here are valid, but I get error:
```
"exception": "org.springframework.dao.InvalidDataAccessResourceUsageException",
"message": "could not extract ResultSet; SQL [n/a]; nested exception is org.hibernate.exception.SQLGrammarException: could not extract ResultSet",
java.sql.SQLException: No value specified for parameter 3
at com.mysql.jdbc.SQLError.createSQLException(SQLError.java:965) ~[mysql-connector-java-5.1.45.jar:5.1.45]
at com.mysql.jdbc.SQLError.createSQLException(SQLError.java:898) ~[mysql-connector-java-5.1.45.jar:5.1.45]
at com.mysql.jdbc.SQLError.createSQLException(SQLError.java:887) ~[mysql-connector-java-5.1.45.jar:5.1.45]
at com.mysql.jdbc.SQLError.createSQLException(SQLError.java:861) ~[mysql-connector-java-5.1.45.jar:5.1.45]
at com.mysql.jdbc.PreparedStatement.checkAllParametersSet(PreparedStatement.java:2211) ~[mysql-connector-java-5.1.45.jar:5.1.45]
at com.mysql.jdbc.PreparedStatement.fillSendPacket(PreparedStatement.java:2191) ~[mysql-connector-java-5.1.45.jar:5.1.45]
at com.mysql.jdbc.PreparedStatement.fillSendPacket(PreparedStatement.java:2121) ~[mysql-connector-java-5.1.45.jar:5.1.45]
at com.mysql.jdbc.PreparedStatement.executeQuery(PreparedStatement.java:1941) ~[mysql-connector-java-5.1.45.jar:5.1.45]
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) ~[na:1.8.0_151]
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) ~[na:1.8.0_151]
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) ~[na:1.8.0_151]
at java.lang.reflect.Method.invoke(Method.java:498) ~[na:1.8.0_151]
at org.apache.tomcat.jdbc.pool.StatementFacade$StatementProxy.invoke(StatementFacade.java:114) ~[tomcat-jdbc-8.5.27.jar:na]
at com.sun.proxy.$Proxy151.executeQuery(Unknown Source) ~[na:na]
at org.hibernate.engine.jdbc.internal.ResultSetReturnImpl.extract(ResultSetReturnImpl.java:70) ~[hibernate-core-5.0.12.Final.jar:5.0.12.Final]
at org.hibernate.loader.Loader.getResultSet(Loader.java:2117) ~[hibernate-core-5.0.12.Final.jar:5.0.12.Final]
at org.hibernate.loader.Loader.executeQueryStatement(Loader.java:1900) ~[hibernate-core-5.0.12.Final.jar:5.0.12.Final]
at org.hibernate.loader.Loader.executeQueryStatement(Loader.java:1876) ~[hibernate-core-5.0.12.Final.jar:5.0.12.Final]
at org.hibernate.loader.Loader.doQuery(Loader.java:919) ~[hibernate-core-5.0.12.Final.jar:5.0.12.Final]
at org.hibernate.loader.Loader.doQueryAndInitializeNonLazyCollections(Loader.java:336) ~[hibernate-core-5.0.12.Final.jar:5.0.12.Final]
at org.hibernate.loader.Loader.doList(Loader.java:2617) ~[hibernate-core-5.0.12.Final.jar:5.0.12.Final]
at org.hibernate.loader.Loader.doList(Loader.java:2600) ~[hibernate-core-5.0.12.Final.jar:5.0.12.Final]
at org.hibernate.loader.Loader.listIgnoreQueryCache(Loader.java:2429) ~[hibernate-core-5.0.12.Final.jar:5.0.12.Final]
at org.hibernate.loader.Loader.list(Loader.java:2424) ~[hibernate-core-5.0.12.Final.jar:5.0.12.Final]
at org.hibernate.loader.hql.QueryLoader.list(QueryLoader.java:501) ~[hibernate-core-5.0.12.Final.jar:5.0.12.Final]
at org.hibernate.hql.internal.ast.QueryTranslatorImpl.list(QueryTranslatorImpl.java:371) ~[hibernate-core-5.0.12.Final.jar:5.0.12.Final]
at org.hibernate.engine.query.spi.HQLQueryPlan.performList(HQLQueryPlan.java:216) ~[hibernate-core-5.0.12.Final.jar:5.0.12.Final]
at org.hibernate.internal.SessionImpl.list(SessionImpl.java:1326) ~[hibernate-core-5.0.12.Final.jar:5.0.12.Final]
at org.hibernate.internal.QueryImpl.list(QueryImpl.java:87) ~[hibernate-core-5.0.12.Final.jar:5.0.12.Final]
at org.hibernate.jpa.internal.QueryImpl.list(QueryImpl.java:606) ~[hibernate-entitymanager-5.0.12.Final.jar:5.0.12.Final]
at org.hibernate.jpa.internal.QueryImpl.getResultList(QueryImpl.java:483) ~[hibernate-entitymanager-5.0.12.Final.jar:5.0.12.Final]
at org.springframework.data.jpa.repository.query.JpaQueryExecution$CollectionExecution.doExecute(JpaQueryExecution.java:123) ~[spring-data-jpa-1.11.10.RELEASE.jar:na]
at org.springframework.data.jpa.repository.query.JpaQueryExecution.execute(JpaQueryExecution.java:87) ~[spring-data-jpa-1.11.10.RELEASE.jar:na]
at org.springframework.data.jpa.repository.query.AbstractJpaQuery.doExecute(AbstractJpaQuery.java:116) ~[spring-data-jpa-1.11.10.RELEASE.jar:na]
at org.springframework.data.jpa.repository.query.AbstractJpaQuery.execute(AbstractJpaQuery.java:106) ~[spring-data-jpa-1.11.10.RELEASE.jar:na]
at org.springframework.data.repository.core.support.RepositoryFactorySupport$QueryExecutorMethodInterceptor.doInvoke(RepositoryFactorySupport.java:492) ~[spring-data-commons-1.13.10.RELEASE.jar:na]
at org.springframework.data.repository.core.support.RepositoryFactorySupport$QueryExecutorMethodInterceptor.invoke(RepositoryFactorySupport.java:475) ~[spring-data-commons-1.13.10.RELEASE.jar:na]
at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:179) ~[spring-aop-4.3.14.RELEASE.jar:4.3.14.RELEASE]
at org.springframework.data.projection.DefaultMethodInvokingMethodInterceptor.invoke(DefaultMethodInvokingMethodInterceptor.java:56) ~[spring-data-commons-1.13.10.RELEASE.jar:na]
at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:179) ~[spring-aop-4.3.14.RELEASE.jar:4.3.14.RELEASE]
at org.springframework.transaction.interceptor.TransactionInterceptor$1.proceedWithInvocation(TransactionInterceptor.java:99) ~[spring-tx-4.3.14.RELEASE.jar:4.3.14.RELEASE]
at org.springframework.transaction.interceptor.TransactionAspectSupport.invokeWithinTransaction(TransactionAspectSupport.java:282) ~[spring-tx-4.3.14.RELEASE.jar:4.3.14.RELEASE]
at org.springframework.transaction.interceptor.TransactionInterceptor.invoke(TransactionInterceptor.java:96) ~[spring-tx-4.3.14.RELEASE.jar:4.3.14.RELEASE]
at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:179) ~[spring-aop-4.3.14.RELEASE.jar:4.3.14.RELEASE]
at org.springframework.dao.support.PersistenceExceptionTranslationInterceptor.invoke(PersistenceExceptionTranslationInterceptor.java:136) ~[spring-tx-4.3.14.RELEASE.jar:4.3.14.RELEASE]
at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:179) ~[spring-aop-4.3.14.RELEASE.jar:4.3.14.RELEASE]
at org.springframework.data.jpa.repository.support.CrudMethodMetadataPostProcessor$CrudMethodMetadataPopulatingMethodInterceptor.invoke(CrudMethodMetadataPostProcessor.java:133) ~[spring-data-jpa-1.11.10.RELEASE.jar:na]
at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:179) ~[spring-aop-4.3.14.RELEASE.jar:4.3.14.RELEASE]
at org.springframework.aop.interceptor.ExposeInvocationInterceptor.invoke(ExposeInvocationInterceptor.java:92) ~[spring-aop-4.3.14.RELEASE.jar:4.3.14.RELEASE]
at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:179) ~[spring-aop-4.3.14.RELEASE.jar:4.3.14.RELEASE]
at org.springframework.data.repository.core.support.SurroundingTransactionDetectorMethodInterceptor.invoke(SurroundingTransactionDetectorMethodInterceptor.java:57) ~[spring-data-commons-1.13.10.RELEASE.jar:na]
at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:179) ~[spring-aop-4.3.14.RELEASE.jar:4.3.14.RELEASE]
at org.springframework.aop.framework.JdkDynamicAopProxy.invoke(JdkDynamicAopProxy.java:213) ~[spring-aop-4.3.14.RELEASE.jar:4.3.14.RELEASE]
at com.sun.proxy.$Proxy130.getAllByPropertyAndDatesBetweenGroupedByTransactionType(Unknown Source) ~[na:na]
```
Why do I get this error when I specify everything correctly?
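For illustration only (a hedged sketch, not a confirmed fix for the binding error): the same filter can be expressed as a Spring Data derived query, which avoids hand-written JPQL. Note that a derived query cannot reproduce the GROUP BY, and the repository, entity, and element-type names below are assumptions.
```
public interface TransactionRepository extends JpaRepository<Transaction, Long> {

    // Derived query: WHERE t.property IN (:properties) AND t.createdAt BETWEEN :dateFrom AND :dateTo
    // "String" as the element type is an assumption for this sketch.
    List<Transaction> findByPropertyInAndCreatedAtBetween(
            List<String> properties, Date dateFrom, Date dateTo);
}
```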
2018/03/22 | 1,990 | 6,124 | <issue_start>username_0: I have a table which has ID,Date,Flag indicator (which includes yes or no value)
I want to get the latest date when the flag indicator changed from No to Yes, which determines whether, as of today, that ID is in scope or not.
```
ID Date Flag Indicator
1 2-Jan-15 No
1 4-Jan-16 Yes
1 2-Jan-17 No
1 1-Jan-18 Yes
```
for the above, the result must be 1 Jan 2018, but for the below
```
ID Date Flag Indicator
2 2-Jan-15 No
2 4-Jan-16 Yes
2 2-Jan-17 No
2 1-Jan-18 No
```
the result must be null, because based on the latest date the Flag Indicator is 'No', which means it is not in scope.
```
ID Date Flag Indicator
4 2-Jan-15 No
4 4-Jan-16 Yes
4 2-Jan-17 Yes
4 1-Jan-18 Yes
```
The result must be 4 Jan 2016, because this was the date when the indicator came in scope with Yes.<issue_comment>username_1: Check the maximum No date, then retrieve the minimum Yes date after that one.
```
;WITH MaxNoDateByID AS
(
SELECT
N.ID,
MaxNoDate = MAX(N.Date)
FROM
NumberedDates AS N
WHERE
N.Flag = 'No'
GROUP BY
N.ID
)
SELECT
T.ID,
MinYesDateAfterNo = MIN(T.Date)
FROM
YourTable AS T
INNER JOIN MaxNoDateByID AS M ON T.ID = M.ID
WHERE
T.Date > M.MaxNoDate -- Assuming all dates after MaxNoDate are supposed to be Flag Yes
GROUP BY
T.ID
```
Upvotes: 0 <issue_comment>username_2: This might help you
```
CREATE TABLE [dbo].[flagindicator](
[id] [int] NULL,
[date] [date] NULL,
[flagindicator] [varchar](50) NULL
) ON [PRIMARY]
GO
SET ANSI_PADDING OFF
GO
INSERT [dbo].[flagindicator] ([id], [date], [flagindicator]) VALUES (1, CAST(0x6F390B00 AS Date), N'No')
INSERT [dbo].[flagindicator] ([id], [date], [flagindicator]) VALUES (1, CAST(0xDE3A0B00 AS Date), N'Yes')
INSERT [dbo].[flagindicator] ([id], [date], [flagindicator]) VALUES (1, CAST(0x4A3C0B00 AS Date), N'No')
INSERT [dbo].[flagindicator] ([id], [date], [flagindicator]) VALUES (1, CAST(0xB63D0B00 AS Date), N'Yes')
INSERT [dbo].[flagindicator] ([id], [date], [flagindicator]) VALUES (2, CAST(0xB63D0B00 AS Date), N'Yes')
INSERT [dbo].[flagindicator] ([id], [date], [flagindicator]) VALUES (2, CAST(0x4A3C0B00 AS Date), N'No')
INSERT [dbo].[flagindicator] ([id], [date], [flagindicator]) VALUES (2, CAST(0xDE3A0B00 AS Date), N'Yes')
INSERT [dbo].[flagindicator] ([id], [date], [flagindicator]) VALUES (2, CAST(0x6F390B00 AS Date), N'No')
SELECT id,ISNULL(MAX(CASE WHEN flagindicator='Yes' THEN date END ),'') indicator FROM flagindicator GROUP BY id
```
Upvotes: 0 <issue_comment>username_3: Use *subquery* with correlation
```
select * from table t
where Date = (select max(Date) from table where id = t.id) and
[Flag Indicator] = 'Yes'
```
**EDIT :**
Use `INNER JOIN` as self join containing max\_date for each id with *correlation* approach
```
select
top (1) with ties t.Id,
(case when (select top 1 [Flag Indicator] from table
where id = c.id and date = c.MAXDATE) <> 'No'
then date end) as Date
from table t inner join (
select id, max(date) MAXDATE from table group by id
) c on c.id = t.id
order by row_number() over(partition by t.Id
order by case when [Flag Indicator] = 'Yes'
then 0 else 1 end)
```
Check [Demo](http://sqlfiddle.com/#!18/5b372/1)
Upvotes: 0 <issue_comment>username_4: **EDIT 1**
I have corrected the output date to the first and not the last good.
**EDIT 2**
Added protection against same id/date.
Added optimized version.
I'm assuming your `[Date]` column is a `DATETIME`, in this way you will find your desired output:
```
DECLARE @YourDesiredID INT = 1;
;WITH
t1 as (select distinct ID from YourTable),
m1 as (
select *,
ROW_NUMBER() over (partition by id order by id, t.date, flag desc) n,
LAG (flag, 1, 'No ') OVER (partition by id order by id, t.date, flag desc) PrevFlag
from YourTable t
),
t2 as (select * from m1 where n=1 and flag = 'Yes'),
m2 as (
select id, max(m1.date) [date]
from m1
where flag = 'Yes' and PrevFlag = 'No '
group by id
)
select
--t1.id,
m2.date
from t1
left join t2 on t1.id = t2.id
left join m2 on m2.id = t2.id
where t.ID = @YourDesiredID
```
An optimized version could use `MAX([Date])` instead of `ROW_NUMBER()` but you must be sure you cannot have more records with the same id/date.
These are the optimized subqueries:
```
t1 as (select ID, max(date) maxdate from t group by id),
m1 as (select *, LAG (flag, 1, 'No') OVER (partition by id order by id, t.date, flag desc) PrevFlag from t),
t2 as (select m1.* from m1 join t1 on t1.id = m1.id where maxdate = date and flag = 'Yes'),
```
Upvotes: 0 <issue_comment>username_5: Here is a method that uses window functions:
```
select t.*,
(case when flag = 'Yes' and prev_flag = 'No' and
running_no = max(running_no) over (partition by id)
then 'Yes' else 'No'
end) as new_flag
from (select t.*,
sum(case when flag = 'No' then 1 else 0 end) over (partition by id order by date) as running_no,
lag(flag) over (partition by id order by date) as prev_flag
from t
) t;
```
Window functions are generally going to perform better than solutions using correlated subqueries, self-joins, or `apply`.
If you just want one row for each id, then I think of using correlated subqueries:
```
select top (1) with ties t.*
from t
where t.flag = 'yes' and
t.date > (select max(t2.date) from t t2 where t2.id = t.id and t2.flag = 'no')
order by row_number() over (partition by id order by date);
```
Upvotes: 1 <issue_comment>username_4: There is another way to accomplish this task. I think this is an optimization of @GordonLinoff's answer; it is simpler and more performant.
```
select t.id, min(date) date
from YourTable t
left join (
select id, max(date) maxdate_no
from YourTable
where flag = 'no'
group by id
) tn on tn.ID = t.ID
where t.flag = 'yes' and t.date > tn.maxdate_no
group by t.id
```
Upvotes: 0 |
2018/03/22 | 1,902 | 5,978 | <issue_start>username_0: Using Chrome Dev Tools console, I have xpath $x("//img[contains(@src, 'https://')]")
to get image links from this website: <https://www.etsy.com/market/happiness_bracelet>.
It is returning 164 results in the array, but I only need the currentSrc property of each result in the array. How would I do that through JavaScript?
I tried: var x = $x("//img[contains(@src, 'https://')]")
for (var i = 0; i < 150; i++) {
console.log(x[i]);
}
but it is giving me class, src, and alt values. I only need src values, so that's why I want to get only currentSrc of each result in the array.
Thanks for your help.
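A minimal sketch of one way to collect only the `currentSrc` values in the Chrome DevTools console (this assumes, as in current Chrome, that `$x` returns a plain array):
```
// $x returns an array of the matched elements, so map() can project each
// element onto its currentSrc property.
var srcs = $x("//img[contains(@src, 'https://')]").map(function (img) {
  return img.currentSrc;
});
console.log(srcs);
```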
2018/03/22 | 539 | 2,389 | <issue_start>username_0: Good morning. I am developing an app that uses Firebase to receive notifications. The problem is that if I send a notification while the application is killed, tapping it opens the first page of the application, whereas I would like it to open a specific view. Here is my code:
```
func application (_ application: UIApplication,
didFinishLaunchingWithOptions launchOptions: [UIApplicationLaunchOptionsKey: Any]?) -> Bool {
FirebaseApp.configure ()
// locally save data for settings
let launchedBefore = UserDefaults.standard.bool (forKey: "launchedBefore")
if launchedBefore {
print ("Not first launch.")
} else {
print ("First launch, setting UserDefault.")
UserDefaults.standard.set (true, forKey: "isNotify")
UserDefaults.standard.set (false, forKey: "isZoom")
UserDefaults.standard.set (true, forKey: "isCookie")
if UserDefaults.standard.bool (forKey: "isNotify") {
registerNotification ()
}
UserDefaults.standard.set (true, forKey: "launchedBefore")
}
NotificationCenter.default.addObserver (self, selector: #selector (registerNotification), name: NSNotification.Name (rawValue: "isNotify"), object: nil)
return true
}
func registerNotification () {
if #available (iOS 10, *) {
UNUserNotificationCenter.current().requestAuthorization(options: [.badge, .alert, .sound]) { (granted, error) in }
UIApplication.shared.registerForRemoteNotifications ()
}
// iOS 9 support
else {
UIApplication.shared.registerUserNotificationSettings (UIUserNotificationSettings (types: [.badge, .sound, .alert], categories: nil))
UIApplication.shared.registerForRemoteNotifications ()
}
}
```<issue_comment>username_1: inside the didFinishLaunchingWithOptions method, read the notification like that:
```
let notification = launchOptions?[UIApplicationLaunchOptionsKey.remoteNotification] as? NSDictionary
if notification != nil {
// Read the notification payload and open related page
}
```
Upvotes: 1 <issue_comment>username_1: ```
let rootViewController = self.window.rootViewController
// write a method named like openNotifyPage in rootViewController to push opennotify
rootViewController.openNotifyPage
```
then push the OpenNotify view controller inside this method
Upvotes: 0 |
2018/03/22 | 269 | 1,183 | <issue_start>username_0: I have a script that has to check whether the environment variable FILE was set but also whether the path to it really exists. This is what I googled so far:
```
if env | grep -q ^FILE=
then
echo env variable is already exported
else
echo env variable was not exported
exit 1
fi
```
Which works if the variable is set correctly, but if the path doesn't exist, it runs into an error on the first if statement. How should I modify the statements to check for it? My task is to create the path from the script if it doesn't exist.
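An untested sketch of the combined check (it assumes FILE holds a path and that the directory part of that path is what should be created when missing):
```
#!/bin/bash
if [ -z "${FILE:-}" ]; then
    echo "env variable was not exported"
    exit 1
fi

dir=$(dirname "$FILE")
if [ ! -d "$dir" ]; then
    mkdir -p "$dir"        # create the missing path
fi
echo "env variable is already exported"
```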
2018/03/22 | 776 | 2,856 | <issue_start>username_0: so a bit of context:
I'm creating a program to manage products (candles) and I need to check if the name of the candles perfume exists before updating the data.
So after some research I wrote this :
```
try {
stmt = conn.createStatement();
//Get a perfumeCode from a perfumeName
sql = "SELECT Code_Parfum FROM T_Parfum WHERE Nom_Parfum = \"" + tbxPerfumeName.getText() + "\";";
rs = stmt.executeQuery(sql);
System.out.println(!rs.next()); // = true if doesn't exists and = false if it does exist
if (!rs.next()) {
System.out.println("Ce produit n'exist pas !!"); //prints an error to help me see if this works
}
} catch (SQLException e) {
}
```
In short: It gets the name of the perfume I entered (in a textField) and returns me a ResultSet.
With that resultset I can check if the code of the perfume exists with rs.next()
The problem is: for the print... It works fine !!
When it does not exist, I get **true** (because of "**!**") and when it does exist I get **false**!
But when it comes to the if, it does not work anymore. Everytime I try, the print inside the if gets executed. Everytime. Even if **!rs.next()** is true or false. Here is what I get as an output:
```
Connection to SQLite has been established.
false
Ce produit n'exist pas !!
DataBase Closed
Connection to SQLite has been established.
true
Ce produit n'exist pas !!
DataBase Closed
```
As you can see, I get **false** the first time (I entered a name that existed) and **true** the other time (the name does not exist), but the **if** printed the String in both cases, despite having the same condition as the print before.
Am I missing something? Why don't I get the "same" result (if true in print, then true in if, and vice-versa)?
Any help appreciated, and thanks!<issue_comment>username_1: You have to call `rs.next()` only once, because the next time you call it, it checks for the next row. Also, take a look at the [PreparedStatement](https://docs.oracle.com/javase/7/docs/api/java/sql/PreparedStatement.html) class; you shouldn't concatenate strings directly into the query, because this could lead to [SQL injection](https://en.wikipedia.org/wiki/SQL_injection).
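A minimal sketch of that advice, reusing the names from the question (`conn`, `tbxPerfumeName`) and assuming the same java.sql imports: it binds the value with a PreparedStatement and advances the cursor only once.
```
String sql = "SELECT Code_Parfum FROM T_Parfum WHERE Nom_Parfum = ?";
try (PreparedStatement pstmt = conn.prepareStatement(sql)) {
    pstmt.setString(1, tbxPerfumeName.getText());
    try (ResultSet result = pstmt.executeQuery()) {
        boolean exists = result.next();   // advance once, remember the answer
        if (!exists) {
            System.out.println("Ce produit n'exist pas !!");
        }
    }
} catch (SQLException e) {
    e.printStackTrace();
}
```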
Upvotes: 3 <issue_comment>username_2: Each call to `rs.next()` moves the cursor to the next result. So:
```
rs = stmt.executeQuery(sql);
```
Now the cursor is at position -1 (ie before the first result set).
```
System.out.println(!rs.next()); // = true if doesn't exists and = false if it does exist
```
Now the cursor is at position 0, at the first result; this result exists so the call returns `true`.
```
if (!rs.next()) {
```
Now the cursor is at position 1; because only one result was returned, that position does not exist and the call returns `false`.
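In other words (a small sketch): advance the cursor once, keep the boolean, and test that instead of calling `next()` a second time.
```
rs = stmt.executeQuery(sql);
boolean found = rs.next();      // cursor moves to the first row, if there is one
System.out.println(!found);
if (!found) {
    System.out.println("Ce produit n'exist pas !!");
}
```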
Upvotes: 3 [selected_answer] |
2018/03/22 | 548 | 1,916 | <issue_start>username_0: As far as I know, `*ptr++` should be evaluated to get the value in the original `ptr` address, and `ptr` gets updated to point to the next memory location. So the only thing that should be changing here is the address held by `ptr`.
However when I ran this program
```
#include <stdio.h>
int main()
{
int x = 5;
int *ptr = &x;
int val = *ptr++;
printf("%i\n", val);
printf("%i", *ptr);
return 0;
}
```
I expected to get
```
5
//Some random value
```
But the output shows always
```
5
5
```
So what am I missing here?<issue_comment>username_1: All of your *knowledge* is correct, none of your *expectations* are.
To elaborate, in your case, the **problem** is
```
printf("%i", *ptr);
```
you're trying to access (de-reference) memory which is *invalid* from the point of view of your program.
Remember, `ptr` itself got increased, due to the use of the post-increment operator. Now it points past the address allocated for variable `x`. Then, you're trying to de-reference that address to fetch the value. This causes an invalid memory access, which in turn causes [undefined behavior](https://en.wikipedia.org/wiki/Undefined_behavior).
Once your program causes UB, the output (if any), cannot be validated or justified in any ways.
Upvotes: 2 <issue_comment>username_2: >
> So what I'm missing here?
>
>
>
The fact your program has undefined behavior. You may increment the pointer to point past an object (`x` in your example), but you may not dereference it afterwards. To do so is to venture beyond the promises a well-formed C program makes.
There's no reasoning about what `printf("%i", *ptr);` does, because it may do anything. Whatever guarantees the C standard placed on your program are forfeit the moment you do that access.
Beyond that, you also seem to be a victim of a logical fallacy. 5 is no less random here than 42, 106 or 0xf00ba7.
Upvotes: 3 [selected_answer] |
2018/03/22 | 527 | 1,930 | <issue_start>username_0: Suppose I have a NotesDocument with a property named `"someJSONObject"` with the following value:
```
"{
"someObject":
{
"objId": "someId"
"Object_Name": "objName",
"Alternative_Name" : "altName",
"Alias_Name" : "alias"
}
}"
```
My question is how I can get the `"objId"` property to use it in a Notes View in such a way that I would be able to use it as a search criterion. There are only "Simple Function", "Field", and "Formula" available. How can I write it there, instead of writing a duplicate field just for sorting purposes?
2018/03/22 | 975 | 4,041 | <issue_start>username_0: I used a way to keep the main method running.
```
public static void main(String[] args) throws InterruptedException {
while (true) {
TimeUnit.SECONDS.sleep(1);
}
}
```
But I'm not sure it's the best way.
Can someone give me some advice?<issue_comment>username_1: Here is one way to keep the main Java instance running until you want to exit, using the GUI:
```
public static void main(String[] args){
JFrame jFrame = new JFrame();
jFrame.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
jFrame.setLocationRelativeTo(null);
jFrame.setSize(480, 260);
jFrame.setVisible(true);
//do some task here
}
```
Upvotes: -1 <issue_comment>username_2: You should use an ExecutorService and submit a task that waits for reading your service's output.
```
package demo;
import java.util.concurrent.Executors;
import java.util.concurrent.ExecutorService;
public class TaskDemo {
public static void main(String[] args ) {
System.out.println("Hello");
ExecutorService threadPool = Executors.newFixedThreadPool(1);
Runnable task = () -> {
try {
//Loop to read your services's output
Thread.sleep(1000);
System.out.println("This is from Task");
} catch(InterruptedException e) {
e.printStackTrace();
}
};
threadPool.execute(task);
//Wait for the task to finish and shut down the pool
threadPool.shutdown();
}
}
```
Upvotes: -1 <issue_comment>username_3: You have many choices, Some of them:
* Simple thread
* TimerTask
* ScheduledExecutorService
Simple Thread:
```
public class Task {
public static void main(String[] args) {
final long timeInterval = 1000;
Runnable runnable = new Runnable() {
public void run() {
while (true) {
System.out.println("Running Task ...");
try {
Thread.sleep(timeInterval);
} catch (InterruptedException e) {
e.printStackTrace();
}
}
}
};
Thread thread = new Thread(runnable);
thread.start();
}
}
```
TimerTask:
```
import java.util.Timer;
import java.util.TimerTask;
public class Task {
public static void main(String[] args) {
TimerTask task = new TimerTask() {
@Override
public void run() {
System.out.println("Running Task ...");
}
};
Timer timer = new Timer();
long delay = 0;
long intevalPeriod = 1000;
timer.scheduleAtFixedRate(task, delay,intevalPeriod);
}
}
```
ScheduledExecutorService:
```
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;
public class Task {
public static void main(String[] args) {
Runnable runnable = new Runnable() {
public void run() {
System.out.println("Running Task ...");
}
};
ScheduledExecutorService service = Executors
.newSingleThreadScheduledExecutor();
service.scheduleAtFixedRate(runnable, 0, 1, TimeUnit.SECONDS);
}
}
```
Upvotes: -1 <issue_comment>username_4: The best way is to hold the main() Thread without entering into Terminated/Dead state.
Below are the two methods I often use-
```
public static void main(String[] args) throws InterruptedException {
System.out.println("Stop me if you can");
Thread.currentThread().join(); // keep the main running
}
```
The other way is to create the ReentrantLock and call wait() on it:
```
import java.util.concurrent.locks.Lock;
import java.util.concurrent.locks.ReentrantLock;

public class Test {
private static Lock mainThreadLock = new ReentrantLock();
public static void main(String[] args) throws InterruptedException {
System.out.println("Stop me if you can");
synchronized (mainThreadLock) {
mainThreadLock.wait();
}
}
```
Upvotes: 2 |
2018/03/22 | 477 | 1,837 | <issue_start>username_0: I have created a new Google account along with OAuth 2.0 credentials for Google sign in.
When Google asks the user for permission, it asks for "manage your contacts”.
Why am I getting this "manage your contacts”? I just wanted to do a Google sign in.<issue_comment>username_1: When authenticating a user you are asking them for permission to access their data.
[](https://i.stack.imgur.com/6absx.png)
There is a large number of scopes for accessing Google data; scopes define what access you need and what access is requested of the user. See the [huge list of scopes](https://developers.google.com/identity/protocols/googlescopes).
If you are seeing `manage your contacts`, then you are probably requesting the following scope from the [People API](https://developers.google.com/identity/protocols/googlescopes#peoplev1). If you don't want access to the user's contacts, remove that scope and it will stop requesting it.
>
> <https://www.googleapis.com/auth/contacts>
>
>
>
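For reference, a hedged sketch of what this looks like if the Android Google Sign-In SDK is being used (an assumption; `context` stands for any valid Context): requesting only the basic profile and email, with no contacts scope, avoids the contacts prompt.
```
import com.google.android.gms.auth.api.signin.GoogleSignIn;
import com.google.android.gms.auth.api.signin.GoogleSignInClient;
import com.google.android.gms.auth.api.signin.GoogleSignInOptions;

// No requestScopes(...) call for a contacts scope here, so the consent
// screen should not ask to manage the user's contacts.
GoogleSignInOptions gso = new GoogleSignInOptions.Builder(GoogleSignInOptions.DEFAULT_SIGN_IN)
        .requestEmail()
        .build();
GoogleSignInClient client = GoogleSignIn.getClient(context, gso);
```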
Upvotes: 2 <issue_comment>username_2: You can remove the contacts scopes you pass in to GoogleSignIn:
```
GoogleSignIn _googleSignIn = GoogleSignIn(
// scopes: [
// //'email',
// //'https://www.googleapis.com/auth/contacts.readonly',
// ],
);
```
This will stop asking for Contacts permission.
Scopes:
* email - View your email address
* https://www.googleapis.com/auth/contacts.readonly - See and download your contacts
* https://www.googleapis.com/auth/contacts - See, edit, download, and permanently delete your contacts
Even if you remove the email scope, you will still be able to see the user's email by using GoogleSignInAccount account.email.
For more information on scopes:
<https://developers.google.com/identity/protocols/oauth2/scopes>
Upvotes: 0 |
2018/03/22 | 2,811 | 8,119 | <issue_start>username_0: I'm trying to understand the linking and loading phases in depth.
When a translation unit is compiled / assembled into a single object file, i understand that it creates a symbol table of every variable / function found.
If a variable has only file scope by using the static keyword for example, it will be marked as local in the symbol table.
However, when the linker produces the final executable file, is there a final symbol table there with every single entry encountered for all files?
I was confused because, if we have a variable declared as static (meaning it has file scope within one file), then whenever this variable is referenced in the executable, does the compiler have to consult the final symbol table to see its actual scope, or does it generate special code for it?
Thanks ahead.<issue_comment>username_1: This is not correct:
>
> When a translation unit is compiled / assembled into a single object file, i understand that it creates a symbol table of every variable / function found.
>
>
>
The object file will only have information about global symbols referenced and defined by the compilation unit.
>
> However, when the linker produces the final executable file, is there a final symbol table there with every single entry encountered for all files?
>
>
>
An executable file will only include universal symbols (those that need to be defined in loadable libraries). A loadable library will only include universal symbols, but it may define those symbols as well as reference them.
If you define a static variable XYX that name disappears when you compile.
If you define a global function (that is not exported in a loadable library), that name disappears when you link.
The one bit of oversimplification I have made here is that compilers and linkers support the OPTIONAL inclusion of debugging information that may describe all symbols encountered in processing.
Debugging information about a symbol must include information about the modules that defined the symbol.
The debugging information is usually completely separate within the object and executable files from the information needed to run or link those files. In fact, debugging information can usually be stripped out easily from those file.
Upvotes: 0 <issue_comment>username_2: >
> When a translation unit is compiled / assembled into a single object file, i understand that it creates a symbol table of every variable / function found.
>
>
>
That is *mostly* accurate: local (aka stack, aka automatic storage duration) variables are never put into the symbol table (except when using ancient debugging formats, such as [STABS](https://en.wikipedia.org/wiki/Stabs)).
You don't need to take my word for it: this is trivial to observe:
```
$ cat foo.c
int a_common_global;
int a_global = 42;
static int a_static = 43;
static int static_fn()
{
return 44;
}
int global_fn()
{
int a_local = static_fn();
static int a_function_static = 1;
return a_local + a_static + a_function_static;
}
$ gcc -c foo.c
$ readelf -Ws foo.o
Symbol table '.symtab' contains 14 entries:
Num: Value Size Type Bind Vis Ndx Name
0: 0000000000000000 0 NOTYPE LOCAL DEFAULT UND
1: 0000000000000000 0 FILE LOCAL DEFAULT ABS foo.c
2: 0000000000000000 0 SECTION LOCAL DEFAULT 1
3: 0000000000000000 0 SECTION LOCAL DEFAULT 3
4: 0000000000000000 0 SECTION LOCAL DEFAULT 4
5: 0000000000000004 4 OBJECT LOCAL DEFAULT 3 a_static
6: 0000000000000000 11 FUNC LOCAL DEFAULT 1 static_fn
7: 0000000000000008 4 OBJECT LOCAL DEFAULT 3 a_function_static.1800
8: 0000000000000000 0 SECTION LOCAL DEFAULT 6
9: 0000000000000000 0 SECTION LOCAL DEFAULT 7
10: 0000000000000000 0 SECTION LOCAL DEFAULT 5
11: 0000000000000004 4 OBJECT GLOBAL DEFAULT COM a_common_global
12: 0000000000000000 4 OBJECT GLOBAL DEFAULT 3 a_global
13: 000000000000000b 34 FUNC GLOBAL DEFAULT 1 global_fn
```
There are a few things worth noting here:
1. `a_local` does not appear in the symbol table
2. `a_function_static` got "random" number appended to its name. This is so `a_function_static` in a different function will not collide.
3. `a_static` and `static_fn` have `LOCAL` linkage
Note also that while `a_static` and `static_fn` appear in the symbol table, this is done *only* to assist debugging. The local symbols are not used by subsequent link, and can be safely removed.
After running `strip --strip-unneeded foo.o`:
```
$ readelf -Ws foo.o
Symbol table '.symtab' contains 10 entries:
Num: Value Size Type Bind Vis Ndx Name
0: 0000000000000000 0 NOTYPE LOCAL DEFAULT UND
1: 0000000000000000 0 SECTION LOCAL DEFAULT 1
2: 0000000000000000 0 SECTION LOCAL DEFAULT 3
3: 0000000000000000 0 SECTION LOCAL DEFAULT 4
4: 0000000000000000 0 SECTION LOCAL DEFAULT 5
5: 0000000000000000 0 SECTION LOCAL DEFAULT 6
6: 0000000000000000 0 SECTION LOCAL DEFAULT 7
7: 0000000000000004 4 OBJECT GLOBAL DEFAULT COM a_common_global
8: 0000000000000000 4 OBJECT GLOBAL DEFAULT 3 a_global
9: 000000000000000b 34 FUNC GLOBAL DEFAULT 1 global_fn
```
>
> when the linker produces the final executable file, is there a final symbol table there with every single entry encountered for all files?
>
>
>
Yes. Adding `main.c` like so:
```
$ cat main.c
extern int global_fn();
extern int a_global;
int a_common_global = 23;
int main()
{
return global_fn() + a_common_global + a_global;
}
$ gcc -c main.c foo.c
$ gcc main.o foo.o
$ readelf -Ws a.out
Symbol table '.symtab' contains 69 entries:
Num: Value Size Type Bind Vis Ndx Name
```
... I omit un-interesting entries (there are many).
```
0: 0000000000000000 0 NOTYPE LOCAL DEFAULT UND
34: 0000000000000000 0 FILE LOCAL DEFAULT ABS main.c
35: 0000000000000000 0 FILE LOCAL DEFAULT ABS foo.c
36: 0000000000201030 4 OBJECT LOCAL DEFAULT 23 a_static
37: 000000000000061c 11 FUNC LOCAL DEFAULT 13 static_fn
38: 0000000000201034 4 OBJECT LOCAL DEFAULT 23 a_function_static.1800
50: 0000000000000627 34 FUNC GLOBAL DEFAULT 13 global_fn
63: 00000000000005fa 34 FUNC GLOBAL DEFAULT 13 main
64: 000000000020102c 4 OBJECT GLOBAL DEFAULT 23 a_global
```
>
> I was confused because if we have a variable declared as static meaning only file scope within one file, when this variable is encountered every time in the executable, does the compiler have to reference the final symbol table to see its actual scope, or does it generate special code for it?
>
>
>
At link stage, the compiler is (usually) not invoked at all. And the linker doesn't (doesn't need to) pay any attention to `LOCAL` symbols.
In general, the linker only does two things:
1. Resolve undefined references (such as reference to `global_fn` and `a_global` from `main.o`) to their definitions (here in `foo.o`) and
2. Apply relocations.
Applying relocations for `a_static` and `a_function_static` in `foo.o` doesn't actually need their names; only their offsets within the `.data` section, as this output should make clear:
```
$ objdump -dr foo.o
foo.o: file format elf64-x86-64
Disassembly of section .text:
...
000000000000000b :
b: 55 push %rbp
c: 48 89 e5 mov %rsp,%rbp
f: 48 83 ec 10 sub $0x10,%rsp
13: b8 00 00 00 00 mov $0x0,%eax
18: e8 e3 ff ff ff callq 0
1d: 89 45 fc mov %eax,-0x4(%rbp)
20: 8b 15 00 00 00 00 mov 0x0(%rip),%edx # 26
22: R\_X86\_64\_PC32 .data
26: 8b 45 fc mov -0x4(%rbp),%eax
29: 01 c2 add %eax,%edx
2b: 8b 05 00 00 00 00 mov 0x0(%rip),%eax # 31
2d: R\_X86\_64\_PC32 .data+0x4
31: 01 d0 add %edx,%eax
33: c9 leaveq
34: c3 retq
```
Note how relocations at offset `0x22` and `0x2d` don't say anything about the names (`a_static` and `a_function_static.1800` respectively).
Upvotes: 3 [selected_answer] |
2018/03/22 | 876 | 3,051 | <issue_start>username_0: I have a CMakeLists.txt that defines a function, that needs to refer to its own path because it needs to use a file in its own directory:
```
├── path/to/a:
| ├── CMakeLists.txt
| └── file_i_need.in
└── different/path/here:
└── CMakeLists.txt
```
The `path/to/a/CMakeLists.txt` file has a function that needs to `configure_file()`:
```
function(do_something_interesting ...)
configure_file(
file_i_need.in ## <== how do I get the path to this file
file_out ## <== this one I don't need to path
)
endfunction()
```
I can write `path/to/a/file_i_need.in` on that line, but that seems overly cumbersome. I can use `${CMAKE_CURRENT_LIST_DIR}` outside of the function, but inside of the function when called by `different/path/here/CMakeLists.txt` it'll be `different/path/here` instead.
Is there a way to refer to the path of *this* CMakeLists.txt?<issue_comment>username_1: Outside of any function store value of *CMAKE\_CURRENT\_LIST\_DIR* into a variable, then use that variable in a function, defined in that file.
Definition of a variable depends on visibility relation between a script, which defines a function (**define-script**) and a script, which could use that function(**use-script**).
1. **use-script** is executed *in the scope* of **define-script**.
This is most common case, when *use-script* is included into *define-script* or one of its parents.
The variable can be defined as a simple variable:
```
set(_my_dir ${CMAKE_CURRENT_LIST_DIR})
```
2. **use-script** is executed *out of the scope* of **define-script**. Note, that a function's definition is *global*, so it is visible anywhere.
This case corresponds to the code in the question post, where `CMakeLists.txt` files, corresponded to **use-script** and **define-script**, belongs to *different subtrees*.
The variable can be defined as a *CACHE* variable:
```
set(_my_dir ${CMAKE_CURRENT_LIST_DIR} CACHE INTERNAL "")
```
The function definition is the same in both cases:
```
function(do_something_interesting ...)
configure_file(
${_my_dir}/file_i_need.in ## <== Path to the file in current CMake script
file_out ## <== this one I don't need to path
)
endfunction()
```
In both cases the name of the variable (`_my_dir`) should be somewhat unique. It could include a project's name (for `CMakeLists.txt` scripts) or a script name (for `.cmake` scripts).
Upvotes: 4 [selected_answer]<issue_comment>username_2: Just as an update, since [release CMake 3.17](https://cmake.org/cmake/help/latest/release/3.17.html#variables) you can now use `CMAKE_CURRENT_FUNCTION_LIST_DIR`.
ref: <https://cmake.org/cmake/help/v3.17/variable/CMAKE_CURRENT_FUNCTION_LIST_DIR.html>
So your sample becomes:
```
function(do_something_interesting ...)
configure_file(
${CMAKE_CURRENT_FUNCTION_LIST_DIR}/file_i_need.in ## <== Path to the file in current CMake script
file_out ## <== this one I don't need to path
)
endfunction()
```
Upvotes: 2 |
2018/03/22 | 551 | 1,856 | <issue_start>username_0: How to remove double quote that is between a set of double quotes?
`"Test T"est"` should get output as `"Test Test"`
`"Test T"est", "Test1 "Test1"` should get output as `"Test Test", "Test1 Test1"`<issue_comment>username_1: You can try with `awk`:
```
$ awk -F", *" '{ # Set the field separator
for(i=1;i<=NF;i++){ # Loop through all fields
$i="\""gensub("\"", "", "g", $i)"\"" # Rebuild the field with only surrounding quotes
}
}1' OFS="," file # Print the line
"Test Test","Test1 Test1"
```
Upvotes: 1 <issue_comment>username_2: If this is a corrupted CSV and you can say there are no commas inside the fields, then PowerShell's CSV handling will read them and leave the trailing quote. Remove that, then re-export to a new CSV to get values with double quotes around them.
```
import-csv .\test.csv -Header 'column1', 'column2' |
ForEach-Object {
foreach ($column in $_.psobject.properties.Name)
{
$_.$column = $_.$column.Replace('"', '')
}
$_
} | Export-Csv .\test2.csv -NoTypeInformation
```
If the file has headers in it, remove the `-header 'column1', 'column2'` part.
Upvotes: 1 <issue_comment>username_3: So if this is for a corrupted CSV you could state the problem as remove any double quotes that don't appear at the start or end of a line and that are not near a comma (with optional white space). So this can easily be done with a Powershell regex like so:
```
$t = '"Test T"est", "Test1 "Test1"'
$t -replace '(?
```
Upvotes: 1 <issue_comment>username_4: An alternative with sed:
```
sed 's/\("[^"]\+\)"\([^"]\+"\)/\1\2/g' inputFile
```
input:
```
"Test T"est"
"Test T"est", "Test1 "Test1"
```
output:
```
"Test Test"
"Test Test", "Test1 Test1"
```
Upvotes: 0 |
2018/03/22 | 1,451 | 4,548 | <issue_start>username_0: I am trying to remove comments from an assembly instructions file, then print clean instructions in another file.
For example: This
```
TESTD TD STDIN
JEQ TESTD . Loop until ready
RD STDIN . Get input to see how many times to loop
STA NUMLOOP . Save the user's input into NUMLOOP
STLOOP STX LOOPCNT . Save how many times we've loops so far
```
Becomes this:
```
TESTD TD STDIN
JEQ TESTD
RD STDIN
STA NUMLOOP
STLOOP STX LOOPCNT
```
==========================================================================
I wrote this program to remove anything after the dot that marks a comment, including the dot itself, but it did not work; the output file has the same lines containing the comments as the input file.
```
#include <stdio.h>
#include
int main(int argc, char* argv[])
{
FILE *fp, *fp2; // input and output file pointers
char ch[1000];
fp = fopen(argv[1], "r");
fp2 = fopen(argv[2],"w");
while( fgets(ch, sizeof(ch), fp)) // read line into ch array
{
int i = 0 ;
for( ch[i] ; ch[i]<=ch[1000] ; ch[i++]) // loop to check all characters in the line array
{
if(ch[i] == '.')
{
for( ch[i] ; ch[i] <= ch[1000] ; ch[i++])
{
ch[i] = '\0'; //making all characters after the point NULL
}
}
continue;
}
fputs(ch, fp2);
}
fclose(fp);
fclose(fp2);
return(0);
}
```<issue_comment>username_1: Your loop makes no sense:
```
for( ch[i] ; ch[i] <= ch[1000] ; ch[i++])
```
because `ch[i] <= ch[1000]` compares the value of the characters with each other. Also `ch[1000]` is invalid, because the size of your array is `1000`, so the highest valid index is `999`.
Since your loop will end as soon as `ch[i] <= ch[1000]` is `false`, and since we don't know what the value of `ch[1000]` might be, there is a great chance that your loop won't ever be executed, thus no lines are modified at all. (There is also a chance that your loop will loop forever, if `ch[1000]` evaluates to some value that is always bigger than any character in your array.)
Correct loop: `for(i = 0; i < 1000; i++)`
Even better would be to check for the actual *line length* in your loop.
Also you don't need to set all characters to `0` after the `dot`. What for? Just set `ch[i]` to `0` and you're done:
```
if(ch[i] == '.')
{
ch[i] = '\0';
}
```
On a side note, the `continue;` at the end of your loop is redundant. You probably want to place it inside the `if`-statement after setting the `dot` to `0`.
Upvotes: 3 [selected_answer]<issue_comment>username_2: I consider the design of OP's code a bit unlucky. Due to the internal buffer `char ch[1000];`, there is a constraint which could be easily prevented:
```
#include <stdio.h>
int main(void)
{
FILE *fIn = stdin, *fOut = stdout;
/\* loop until end of input \*/
for (int c = getc(fIn); c >= 0;) {
/\* read and write until '.' is detected \*/
for (; c >= 0 && c != '.'; c = getc(fIn)) {
if (putc(c, fOut) < 0) {
fprintf(stderr, "Write failed!\n"); return -1;
}
}
/\* read only until '\n' is detected \*/
for (; c >= 0 && c != '\n'; c = getc(fIn)) ;
}
/\* done \*/
return 0;
}
```
Instead of buffering a whole line, my approach just interleaves character reading and writing.
There are two read loops: one with output, one without:
1. At character `.` the first is left for second.
2. At character `\n` the second is left for first.
3. Everything ends when end of input is detected. (Hence all the checks for `c >= 0`.)
Input:
```none
TESTD TD STDIN
JEQ TESTD . Loop until ready
RD STDIN . Get input to see how many times to loop
STA NUMLOOP . Save the user's input into NUMLOOP
STLOOP STX LOOPCNT . Save how many times we've loops so far
```
Output:
```none
TESTD TD STDIN
JEQ TESTD
RD STDIN
STA NUMLOOP
STLOOP STX LOOPCNT
```
[Life demo on **ideone**](https://ideone.com/80h4FY)
Upvotes: 2 <issue_comment>username_3: A simple state machine with two states (basically do/don't echo the character):
---
```
#include <stdio.h>
int main(void)
{
int state;
for(state=0; ; ) {
int ch;
ch = getc(stdin);
if (ch == EOF) break;
switch (state) {
case 0: /\* initial \*/
if (ch == '.') { state = 1; continue; }
break;
case 1:
if (ch == '\n') { state = 0; break; }
continue;
}
putc(ch, stdout);
}
return 0;
}
```
Upvotes: 0 |
2018/03/22 | 773 | 2,543 | <issue_start>username_0: Switched from **`google-services:3.1.1`** to **`3.2.0`**; since then I get the following warning:
```
Could not find google-services.json while looking in [src/nullnull/debug, src/debug/nullnull, src/nullnull, src/debug, src/nullnullDebug]
Could not find google-services.json while looking in [src/nullnull/release, src/release/nullnull, src/nullnull, src/release, src/nullnullRelease]
```
I see in my gradle console that the `JSON` file is parsed from the right place, as it is expected if you didn't use any flavors.
```
Parsing json file: C:\testApp\app\google-services.json
```
So building works fine.
But I really like a clean console output, so how can I remove this warning that seems not needed?<issue_comment>username_1: It's a known regression. Check Google Issue [#110321069](https://issuetracker.google.com/110321069) for details. Until a fix is provided, it's safe to just ignore the warning.
Upvotes: 4 <issue_comment>username_2: This is a bug with the Google Services gradle plugin. I've created a public bug report here: <https://issuetracker.google.com/issues/110321069>
If you want to work around the issue and hide the warnings, you can copy your `google-service.json` into `app/src/debug` and `app/src/release`.
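For example, from the project root (a sketch of that copy step; adjust the paths to your module layout):
```
mkdir -p app/src/debug app/src/release
cp app/google-services.json app/src/debug/
cp app/google-services.json app/src/release/
```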
Upvotes: 2 <issue_comment>username_3: This appears to be fixed with **`com.google.gms:google-services:4.0.2`** which was [released on July 13, 2018](https://mvnrepository.com/artifact/com.google.gms/google-services/4.0.2).
Gradle output with google-services:4.0.2:
```
> Configure project :app
> Task :app:processDebugGoogleServices
Parsing json file: /.../build/app/google-services.json
> Task :app:processReleaseGoogleServices
Parsing json file: /.../build/app/google-services.json
```
Gradle output with google-services:4.0.1:
```
> Configure project :app
Could not find google-services.json while looking in [src/nullnull/debug, src/debug/nullnull, src/nullnull, src/debug, src/nullnullDebug]
Could not find google-services.json while looking in [src/nullnull/release, src/release/nullnull, src/nullnull, src/release, src/nullnullRelease]
> Task :app:processDebugGoogleServices
Parsing json file: /.../build/app/google-services.json
> Task :app:processReleaseGoogleServices
Parsing json file: /.../build/app/google-services.json
```
Upvotes: 6 [selected_answer]<issue_comment>username_4: Try this
put `google-services.json` under the following paths:
```
1. [your project]\platforms\android\app\src\debug
2. [your project]\platforms\android\app\src\release
```
Upvotes: 2 |
2018/03/22 | 609 | 1,949 | <issue_start>username_0: I have an `StackedAreaChart` and after I show areas on graph I want to add a custom dot.
Currently my code look like this but I want to draw a dot depend on `XAxis` and `YAxis` data. Does exist any way to pass `x` and `y` coordinates to a `CustomizedDot` class?
```
import React from 'react';
import { AreaChart, Area, XAxis, YAxis, CartesianGrid, Tooltip, Label, Dot } from 'recharts';
const data = [
{ name: 142, underweight: 36.3, normal: 11.3, overweight: 6.8, obese: 56.7 },
{ name: 196, underweight: 70.3, normal: 22.7, overweight: 15.9, obese: 2.2 },
];
class CustomizedDot extends React.Component {
render() {
const { cx, cy } = this.props;
return (
);
}
};
class BMIGraph extends React.Component {
render() {
return (
Index telesne mase za odrasle
-----------------------------
{/\* \*/}
} />
);
}
}
export default BMIGraph;
```
[The result of the current code](https://i.stack.imgur.com/xKiJZ.png)<issue_comment>username_1: You do not need to pass X and Y axis data.
By using
```
const { cx, cy } = this.props;
```
`cx` and `cy` hold the x, y position you require.
Try this in your `CustomizedDot` class
```
class CustomizedDot extends React.Component {
render() {
const { cx, cy } = this.props;
return (
);
}
};
```
Upvotes: 4 <issue_comment>username_2: For customized dots.
`this.props` already contains:
* cx: Number
* cy: Number
* stroke: String
* payload: Object
* value: Number

`cx` and `cy` are the coordinates you are after, and if you are interested in the data as well, `this.props.payload` is probably the one you want to use.
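As a rough illustration (a hypothetical sketch, since the SVG markup of the original snippets was lost above), a custom dot usually just returns an SVG element positioned at `cx`/`cy`:
```js
// Hypothetical example: draw a small red circle at the point recharts passes in.
const CustomizedDot = ({ cx, cy, payload }) => {
  // `payload` holds the data row for this point, e.g. { name: 142, obese: 56.7, ... }
  return <circle cx={cx} cy={cy} r={4} fill="red" />;
};

// and attach it to a series, e.g. <Area dataKey="obese" dot={<CustomizedDot />} />
```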
Upvotes: 2 <issue_comment>username_3: Same solution as above, just using typescript and as a functional component:
```
import React from 'react';
export const CustomScatterDot: React.FC<{ cx: number, cy: number }> = ({ cx, cy }) => {
return (
);
};
```
Upvotes: 0 |
2018/03/22 | 292 | 959 | <issue_start>username_0: I want to filter out empty value on email column.
So I tried this:
```
users = User.objects.filter(email__iexact='')
```
But, record has empty value would not be appeared.
Anyone know why is that?
Model is:
```
class User(models.Model):
email = models.EmailField(blank=True)
```<issue_comment>username_1: If you want to display all the emails that are not empty (filter-out empty) then you can exclude them using
```
users = User.objects.exclude(email__isnull=True)
```
If you want to get all the columns with empty email value then use
```
users = User.objects.filter(email__isnull=True)
```
Upvotes: 4 [selected_answer]<issue_comment>username_2: Declare model:
```
class User(models.Model):
email = models.EmailField(null=True)
```
then:
```
users = User.objects.filter(email__isnull=True)
```
If you want to exclude empty value then:
```
users = User.objects.exclude(email__isnull=True)
```
Upvotes: 1 |
2018/03/22 | 2,066 | 7,993 | <issue_start>username_0: In the CA Automic Automation Engine Java APIs, each AE object\* type has its own subclass of [`UC4Object`](https://docs.automic.com/documentation/webhelp/english/ALL/components/AE/LATEST/API/index.html?com/uc4/api/objects/UC4Object.html). Many of the operations for working with AE objects are specific to the AE object type. I want to write a method that takes a UC4Object as input, and returns an object of the class appropriate to the AE object type — e.g., the method returns an object of type [`JobPlan`](https://docs.automic.com/documentation/webhelp/english/ALL/components/AE/LATEST/API//index.html?com/uc4/api/objects/JobPlan.html) for a workflow.
The AE object type can be determined using [`UC4Object.getType()`](https://docs.automic.com/documentation/webhelp/english/ALL/components/AE/LATEST/API//com/uc4/api/objects/UC4Object.html#getType--). Once one knows the AE object type, one can cast its `UC4Object` object to the class specific to the AE object type. E.g, if one is working with a `UC4Object` object called uc4Object, one might do something like this:
```
if ("JOBP".equals(uc4Object.getType())){
JobPlan workflow = (JobPlan) uc4Object;
}
```
`JOBP` is the AE object type of [workflows](https://docs.automic.com/documentation/webhelp/english/ALL/components/DOCU/LATEST/AWA%20Guides/help.htm#AWA/Objects/obj_classesAndTypes.htm). I want to generalize this so that it works for all AE object types.
```
UC4Object.getType() UC4Object Subclass
JSCH Schedule
JOBP JobPlan
EVNT_TIME TimeEvent
EVNT_FILE FileEvent
EVNT_DB DatabaseEvent
EVNT_CONS ConsoleEvent
SCRI Script
JOBS Job
JOBF FileTransfer
```
\* By *AE object*, I mean [objects in the Automation Engine](https://docs.automic.com/documentation/webhelp/english/ALL/components/DOCU/LATEST/AWA%20Guides/help.htm#AWA/Objects/obj_classesAndTypes.htm). This is a different concept from objects in Java.
**Update 1**
I can get the name of the class to cast to as follows:
```
String uc4ObjectClassName = uc4Object.getClass().getSimpleName();
System.out.println(String.format("Object is an instance of class %s.", uc4ObjectClassName));
```
I am hoping there is a straightforward return an object of this class.<issue_comment>username_1: >
> I want to write a method that takes a UC4Object as input, and returns
> an object of the class appropriate to the AE object type — e.g., the
> method returns an object of type JobPlan for a workflow.
>
>
>
You can create a method that downcasts the object according to the `getType()` value and returns it.
But from the client side of the method, you cannot manipulate this type directly, as the client doesn't know the returned type.
That means that you should apply your processing/work to the cast object before returning it.
As for the way to achieve the mapping: since you want to manipulate the specific subtype, you have no choice but to use a series of conditional statements.
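For example, a minimal sketch of such a method could look like this (illustrative only, reusing the `getType()` checks from the question):
```
public static UC4Object processByType(UC4Object uc4Object) {
    if ("JOBP".equals(uc4Object.getType())) {
        JobPlan workflow = (JobPlan) uc4Object;
        // ... do the JobPlan-specific work here, before returning
        return workflow;
    } else if ("JSCH".equals(uc4Object.getType())) {
        Schedule schedule = (Schedule) uc4Object;
        // ... do the Schedule-specific work here, before returning
        return schedule;
    }
    // ... one branch per AE object type
    return uc4Object;
}
```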
---
To allow the clients to manipulate the specific subtypes, you should probably redesign: cast all objects to their subtypes first and store them in a custom `Workflow` structure that contains fields with the specific types.
That could look like :
```
public class Workflow{
private List<JobPlan> jobPlans;
private List<Schedule> schedules;
...
}
```
Load and store all `UC4Object` in a `Workflow` instance :
```
List<UC4Object> workflowObjects = ...;
Workflow myWorkflow = new WorkflowMapper().create(workflowObjects);
```
In this way the client can so find all of them :
```
List<JobPlan> jobPlans = myWorkflow.getJobPlans();
List<Schedule> schedules = myWorkflow.getSchedules();
```
or individually (by id for example) :
```
int id = 1;
JobPlan jobPlan = myWorkflow.getJobPlan(id);
Schedule schedule = myWorkflow.getSchedule(id);
```
By using this way you have another advantage : you don't need any longer to use a series of conditional statements during the downcasts as the only processing now is adding them in the `Workflow` instance.
You could store a `Map<String, Consumer<UC4Object>>` in the mapper class where the `Consumer` is the setter method of the workflow.
It could give something as :
```
public class WorkflowMapper {
Map<String, Consumer<UC4Object>> map = new HashMap<>();
private Workflow workflow = new Workflow();
{
map.put("JOBP", (uc4Object) -> workflow.setJobPlan((JobPlan) uc4Object));
map.put("EVNT\_TIM", (uc4Object) -> workflow.setTimeEvent((TimeEvent) uc4Object));
// and so for ...
}
public Workflow create(List<UC4Object> uc4Objects) {
for (UC4Object o : uc4Objects) {
final String type = o.getType();
map.getOrDefault(type, (t) -> {
throw new IllegalArgumentException("no matching for getType = " + t);
})
.accept(o);
}
return workflow;
}
}
```
Upvotes: 1 <issue_comment>username_2: I realized that in this particular case, I can avoid the complexity introduced by the requirement that the method be able to return objects of different types.
The subclasses of `UC4Object` all share a `header()` method, and that method returns an object of class [`XHeader`](https://docs.automic.com/documentation/webhelp/english/ALL/components/AE/LATEST/API/index.html?com/uc4/api/objects/XHeader.html). *That is what I need at the moment*, so I can make `XHeader` the method's return type.
```
public static XHeader getObjectHeader(UC4Object uc4Object) {
XHeader header = null;
String objectName = uc4Object.getName();
String uc4ObjectClassName = uc4Object.getClass().getSimpleName();
System.out.println(String.format("Object is an instance of class %s.", uc4ObjectClassName));
switch (uc4ObjectClassName) {
case "JobPlan":
System.out.println(String.format("Object %s is a standard workflow.", objectName));
header = ((JobPlan)uc4Object).header();
break;
case "WorkflowIF":
System.out.println(String.format("Object %s is an IF workflow.", objectName));
header = ((WorkflowIF)uc4Object).header();
break;
case "WorkflowLoop":
System.out.println(String.format("Object %s is a FOREACH workflow.", objectName));
header = ((WorkflowLoop)uc4Object).header();
break;
case "Schedule":
System.out.println(String.format("Object %s is a Schedule.", objectName));
header = ((Schedule)uc4Object).header();
break;
case "Script":
System.out.println(String.format("Object %s is a Script.", objectName));
header = ((Script)uc4Object).header();
break;
case "TimeEvent":
System.out.println(String.format("Object %s is a Time Event.", objectName));
header = ((TimeEvent)uc4Object).header();
break;
case "FileEvent":
System.out.println(String.format("Object %s is a File Event.", objectName));
header = ((FileEvent)uc4Object).header();
break;
case "DatabaseEvent":
System.out.println(String.format("Object %s is a DB Event.", objectName));
header = ((DatabaseEvent)uc4Object).header();
break;
case "ConsoleEvent":
System.out.println(String.format("Object %s is a Console Event.", objectName));
header = ((ConsoleEvent)uc4Object).header();
break;
case "Job":
System.out.println(String.format("Object %s is an OS Job.", objectName));
header = ((Job)uc4Object).header();
break;
case "FileTransfer":
System.out.println(String.format("Object %s is a File Transfer.", objectName));
header = ((FileTransfer)uc4Object).header();
break;
default:
System.out.println(String.format("Unknown object type."));
header = null;
}
return header;
}
```
I’m glad to learn of ways to improve on this approach.
Upvotes: 0 |
2018/03/22 | 1,546 | 6,100 | <issue_start>username_0: I have the following functions : The first prints unsigned integers in a given base (radix).
The second function does exactly the same but with a signed integers.
As you can see, the body of these functions are exactly the same. I've been struggling for hours in order to prevent doubling the same code, and I could not find a solution for this issue.
**Unsigned function:**
```
const char *digit = "0123456789abcdef";
int print_int_helper_unsigned(unsigned int n, int radix, const char *digit) {
int result;
if (n < radix) {
putchar(digit[n]);
return 1;
}
else {
result = print_int_helper_unsigned(n / radix, radix, digit);
putchar(digit[n % radix]);
return 1 + result;
}
}
```
**Signed function:**
```
int print_int_helper( int n, int radix, const char *digit) {
int result;
if (n < radix) {
putchar(digit[n]);
return 1;
}
else {
result = print_int_helper(n / radix, radix, digit);
putchar(digit[n % radix]);
return 1 + result;
}
}
```<issue_comment>username_1: Maybe you want this:
```
#include <stdlib.h>
...
int print_int_helper(int n, int radix, const char *digit) {
 if (n < 0)
 {
 putchar('-');
 }
 return print_int_helper_unsigned(abs(n), radix, digit);
}
```
Upvotes: 2 <issue_comment>username_2: Refactor the recursive part into a helper function:
```
static void fprintf_digits_recursive(FILE *out,
const unsigned long radix,
const char *digits,
unsigned long value)
{
if (value >= radix)
fprintf_digits_recursive(out, radix, digits, value / radix);
fputc(digits[value % radix], out);
}
```
I marked it `static`, because it should only be visible in this compilation unit (file), and not directly callable outside.
The purpose of the helper function is to print out a single digit. If the `value` has more than one digit, the higher digit(s) are printed first. (This is why the `fputc()` is *after* the recursive call.)
Signed and unsigned integer printers use the helper thus:
```
void print_int(const int value, const char *digits, const int radix)
{
if (radix < 1 || !digits) {
fprintf(stderr, "print_int(): Invalid radix and/or digits!\n");
exit(EXIT_FAILURE);
}
if (value < 0) {
fputc('-', stdout);
fprintf_digits_recursive(stdout, radix, digits, (unsigned long)(-value));
} else
fprintf_digits_recursive(stdout, radix, digits, (unsigned long)(value));
}
void print_uint(const unsigned int value, const char *digits, const int radix)
{
if (radix < 1 || !digits) {
fprintf(stderr, "print_int(): Invalid radix and/or digits!\n");
exit(EXIT_FAILURE);
}
fprintf_digits_recursive(stdout, radix, digits, (unsigned long)value);
}
```
I deliberately added the output stream identifier and changed the parameter order, so that it is easier to understand how the visible function (sometimes also called *wrapper functions*, if they're very simple) can differ from the actual internal helper function that does the real work.
I added the radix and digits check to the wrapper functions, because that is the recommended practice. (That is, do not leave parameter checking to the helper functions, but do it in the wrapper functions. That way you can provide also "fast"/unchecked versions of the wrapper functions, if you want.)
This same approach is used to refactor duplicated code into helpers and actual public functions. You find the repeated code, move it to a separate internal helper function, noting that the parameters can be very different than what the public functions use — often, you might have e.g. a dynamically allocated buffer to put the data into.
There is no really hard part, you just need to practice a bit, and learn to think about the multiple users — the originally duplicated functions — when choosing which parameters are passed to the helper function.
In very complicated cases, you might need to split the helper functions into multiple helpers; into a *helper toolbox* of a sort. This is often the case with e.g. dynamic memory management helper functions.
For example, if you wanted to turn this into an interface that produces dynamically allocated strings, you might use a string buffer interface, something like
```
struct strbuffer {
char *data;
size_t size; /* Number of chars allocated for */
size_t used; /* Number of chars used in data */
};
#define STRBUFFER_INIT { NULL, 0, 0 }
static void strbuffer_addchar(struct strbuffer *ref, const char c);
static char *strbuffer_finalize(struct strbuffer *ref);
static char *strbuffer_finalize_reverse(struct strbuffer *ref);
```
so that the signed integer-to-string function might look like
```
char *new_int_to_string(const int value, const size_t radix, const char *digits)
{
struct strbuffer buf = STRBUFFER_INIT;
if (value < 0) {
reverse_radix(&buf, radix, digits, -value);
strbuffer_addchar(&buf, '-');
} else
reverse_radix(&buf, radix, digits, value);
return strbuffer_finalize_reverse(&buf);
}
```
and `reverse_radix()` builds the numeric value in reverse order,
```
void reverse_radix(struct strbuffer *ref,
const unsigned long radix,
const char *digits,
unsigned long value)
{
do {
strbuffer_addchar(ref, digits[value % radix]);
value /= radix;
} while (value > 0);
}
```
The helper function `strbuffer_finalize()` will reallocate the `data` field to the exact desired length, including a string-terminating `\0`, clear the structure, and return the `data`; `strbuffer_finalize_reverse()` does the same, except reversing the contents first. That way we turn the recursive call into a simple loop, too.
You can even implement a floating-point version using the strbuffer helpers; you just convert the integral part separately from the decimal part (so two more helper functions).
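For completeness, here is a minimal sketch of what the two undefined helpers could look like (purely illustrative, not the exact implementation; error handling is kept deliberately simple):
```
#include <stdlib.h>

/* Append one character, growing the buffer geometrically as needed. */
static void strbuffer_addchar(struct strbuffer *ref, const char c)
{
    if (ref->used >= ref->size) {
        size_t newsize = (ref->size > 0) ? 2 * ref->size : 16;
        char *newdata = realloc(ref->data, newsize);
        if (!newdata)
            return; /* out of memory; a real implementation should report this */
        ref->data = newdata;
        ref->size = newsize;
    }
    ref->data[ref->used++] = c;
}

/* Reverse the collected characters in place, terminate, trim,
   and hand ownership of the string to the caller. */
static char *strbuffer_finalize_reverse(struct strbuffer *ref)
{
    for (size_t i = 0, j = ref->used; i + 1 < j; i++, j--) {
        char tmp = ref->data[i];
        ref->data[i] = ref->data[j - 1];
        ref->data[j - 1] = tmp;
    }
    strbuffer_addchar(ref, '\0');
    char *result = realloc(ref->data, ref->used); /* shrink to the exact length */
    if (!result)
        result = ref->data;
    ref->data = NULL;
    ref->size = 0;
    ref->used = 0;
    return result;
}
```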
Upvotes: 1 [selected_answer] |
2018/03/22 | 528 | 1,768 | <issue_start>username_0: I am trying to display a string on multilines with `\n` in the Postman Console. How can I do this?
Example:
```
var mystr='line\n another line\n another line';
console.log(mystr);
```
Expecting:
```
line
another line
another line
```
Getting:
```
lineanother lineanother line
```
Note: it is working as expected in Firefox scratchpad.<issue_comment>username_1: I don’t think that you can achieve this in the Postman console - maybe worth raising an issue on the Postman github project, is there isn’t one already.
I would have suggested doing the same thing as the comments, adding `\n` works in every other console but this one, which is strange.
I think that the only way, at the moment, is just to add multiple `console.log()` statements, to get your vars printing on new lines.
Another alternative is to put what you need into an `array` - This is not ideal but would give you the information in the console, on separate lines.
[](https://i.stack.imgur.com/3C0vS.jpg)
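For example, something along these lines (illustrative only):
```
console.log(["line", "another line", "another line"]);
```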
Upvotes: 1 <issue_comment>username_2: [Postman console log not allowing to write string in next line #1477](https://github.com/postmanlabs/newman/issues/1477)
Asked the same question on postman GitHub, I will update here once got the solution.
Thanks, @username_1 :)
Upvotes: 2 <issue_comment>username_3: type 3 times console.log:
```js
console.log('linea1');
console.log('linea2');
console.log('linea3');
```
Upvotes: 2 <issue_comment>username_4: You can print multiline text like this:
```
console.log("hello", '\n', "world");
```
Which will show up like this in console:
[](https://i.stack.imgur.com/FUvns.jpg)
Upvotes: 2 |
2018/03/22 | 3,684 | 14,056 | <issue_start>username_0: I am trying to capture the signature of the user through signaturepad and canvas and downloading the same into device in a project in phonegap cordova. The image is getting downloaded but it is disappearing the very next minute.
Don't know where the issue is in my code. I have attached my code snippet as well.
Link for the file transfer plugin- <https://cordova.apache.org/docs/en/latest/reference/cordova-plugin-file-transfer/>
```js
//register.js
var imageURI;
var imageURIsign;
var signaturePad;
var canvas;
$(document).ready(function() {
document.addEventListener("deviceready", onDeviceReady, false);
});
function onDeviceReady() {
}
function callcanvas() {
//$('#test').popup('open');
$('#test').attr("style", "display:block");
alert("Inside call canvas...");
canvas = document.getElementById("signature");
var w = window.innerWidth;
var h = window.innerHeight;
// As the canvas doesn't has any size, we'll specify it with JS
// The width of the canvas will be the width of the device
canvas.width = w;
// The height of the canvas will be (almost) the third part of the screen height.
canvas.height = h / 2.5;
signaturePad = new SignaturePad(canvas, {
dotSize: 1
});
document.getElementById("export").addEventListener("click", function(e) {
// Feel free to do whatever you want with the image
// as export to a server or even save it on the device.
//imageURIsign=signaturePad.toDataURL("image/jpeg");
imageURIsign = signaturePad.toDataURL();
document.getElementById("preview").src = imageURIsign;
alert("imageURIsign :-" + imageURIsign);
//downloadFile();
}, false);
document.getElementById("reset").addEventListener("click", function(e) {
// Clears the canvas
signaturePad.clear();
}, false);
}
/*$(".img-download").click(function(){
var data = signaturePad.toDataURL();
$(this).attr("href",data)
$(this).attr("download","imgName.png");
});*/
//yyy add
function downloadFile() {
var fileTransfer = new FileTransfer();
var downloadurl = document.getElementById("preview").src;
var uri = encodeURI(downloadurl);
var fileURL = 'cdvfile://localhost/sdcard/test.jpg';
fileTransfer.download(
uri, fileURL,
function(entry) {
alert("download complete: " + entry.toURL());
},
function(error) {
alert("error :- " + JSON.stringify(error));
},
false, {
headers: {
"Authorization": "Basic <KEY>
}
}
);
}
function DownloadFile(URL, Folder_Name, File_Name) {
//Parameters mismatch check
if (URL == null && Folder_Name == null && File_Name == null) {
return;
} else {
//checking Internet connection availablity
var networkState = navigator.connection.type;
if (networkState == Connection.NONE) {
return;
} else {
download(URL, Folder_Name, File_Name); //If available download function call
}
}
}
function download(URL, Folder_Name, File_Name) {
//step to request a file system
window.requestFileSystem(LocalFileSystem.PERSISTENT, 0, fileSystemSuccess, fileSystemFail);
function fileSystemSuccess(fileSystem) {
var download_link = encodeURI(URL);
ext = download_link.substr(download_link.lastIndexOf('.') + 1); //Get extension of URL
var directoryEntry = fileSystem.root; // to get root path of directory
directoryEntry.getDirectory(Folder_Name, {
create: true,
exclusive: false
}, onDirectorySuccess, onDirectoryFail); // creating folder in sdcard
var rootdir = fileSystem.root;
var fp = rootdir.toURL(); // Returns Fulpath of local directory
console.log("harsh_link:- " + fp);
fp = fp + "/" + Folder_Name + "/" + File_Name + "." + ext; // fullpath and name of the file which we want to give
// download function call
filetransfer(download_link, fp);
}
function onDirectorySuccess(parent) {
// Directory created successfuly
}
function onDirectoryFail(error) {
//Error while creating directory
alert("Unable to create new directory: " + error.code);
}
function fileSystemFail(evt) {
//Unable to access file system
alert(evt.target.error.code);
}
}
function filetransfer(download_link, fp) {
var fileTransfer = new FileTransfer();
// File download function with URL and local path
fileTransfer.download(download_link, fp,
function(entry) {
alert("download complete: " + entry.fullPath);
},
function(error) {
//Download abort errors or download failed errors
alert("download error source " + error.source);
//alert("download error target " + error.target);
//alert("upload error code" + error.code);
}
);
}
//yyy add
```
```html
Register User
Signature
DownloadSignature
![]()
```<issue_comment>username_1: I solved my problem by setting the background colour to opaque white; the signature pad was taking black by default, so changing the background colour fixed the issue:
```
var signaturePad = new SignaturePad(canvas, {
 backgroundColor: "rgb(255,255,255)",
 dotSize: 1
});
```
Upvotes: 2 [selected_answer]<issue_comment>username_2: ```js
//register.js
var imageURI;
var imageURIsign;
var signaturePad;
var canvas;
$(document).ready(function() {
document.addEventListener("deviceready", onDeviceReady, false);
});
function onDeviceReady() {
}
function callcanvas() {
//$('#test').popup('open');
$('#test').attr("style", "display:block");
alert("Inside call canvas...");
canvas = document.getElementById("signature");
var w = window.innerWidth;
var h = window.innerHeight;
// As the canvas doesn't has any size, we'll specify it with JS
// The width of the canvas will be the width of the device
canvas.width = w;
// The height of the canvas will be (almost) the third part of the screen height.
canvas.height = h / 2.5;
signaturePad = new SignaturePad(canvas, {
dotSize: 1
});
document.getElementById("export").addEventListener("click", function(e) {
// Feel free to do whatever you want with the image
// as export to a server or even save it on the device.
//imageURIsign=signaturePad.toDataURL("image/jpeg");
imageURIsign = signaturePad.toDataURL();
document.getElementById("preview").src = imageURIsign;
alert("imageURIsign :-" + imageURIsign);
//downloadFile();
}, false);
document.getElementById("reset").addEventListener("click", function(e) {
// Clears the canvas
signaturePad.clear();
}, false);
}
/*$(".img-download").click(function(){
var data = signaturePad.toDataURL();
$(this).attr("href",data)
$(this).attr("download","imgName.png");
});*/
//yyy add
function downloadFile() {
var fileTransfer = new FileTransfer();
var downloadurl = document.getElementById("preview").src;
var uri = encodeURI(downloadurl);
var fileURL = 'cdvfile://localhost/sdcard/test.jpg';
fileTransfer.download(
uri, fileURL,
function(entry) {
alert("download complete: " + entry.toURL());
},
function(error) {
alert("error :- " + JSON.stringify(error));
},
false, {
headers: {
"Authorization": "Basic <KEY>
}
}
);
}
function DownloadFile(URL, Folder_Name, File_Name) {
//Parameters mismatch check
if (URL == null && Folder_Name == null && File_Name == null) {
return;
} else {
//checking Internet connection availablity
var networkState = navigator.connection.type;
if (networkState == Connection.NONE) {
return;
} else {
download(URL, Folder_Name, File_Name); //If available download function call
}
}
}
function download(URL, Folder_Name, File_Name) {
//step to request a file system
window.requestFileSystem(LocalFileSystem.PERSISTENT, 0, fileSystemSuccess, fileSystemFail);
function fileSystemSuccess(fileSystem) {
var download_link = encodeURI(URL);
ext = download_link.substr(download_link.lastIndexOf('.') + 1); //Get extension of URL
var directoryEntry = fileSystem.root; // to get root path of directory
directoryEntry.getDirectory(Folder_Name, {
create: true,
exclusive: false
}, onDirectorySuccess, onDirectoryFail); // creating folder in sdcard
var rootdir = fileSystem.root;
var fp = rootdir.toURL(); // Returns Fulpath of local directory
console.log("harsh_link:- " + fp);
fp = fp + "/" + Folder_Name + "/" + File_Name + "." + ext; // fullpath and name of the file which we want to give
// download function call
filetransfer(download_link, fp);
}
function onDirectorySuccess(parent) {
// Directory created successfuly
}
function onDirectoryFail(error) {
//Error while creating directory
alert("Unable to create new directory: " + error.code);
}
function fileSystemFail(evt) {
//Unable to access file system
alert(evt.target.error.code);
}
}
function filetransfer(download_link, fp) {
var fileTransfer = new FileTransfer();
// File download function with URL and local path
fileTransfer.download(download_link, fp,
function(entry) {
alert("download complete: " + entry.fullPath);
},
function(error) {
//Download abort errors or download failed errors
alert("download error source " + error.source);
//alert("download error target " + error.target);
//alert("upload error code" + error.code);
}
);
}
//yyy add
```
```html
Register User
Signature
DownloadSignature
![]()
```
Upvotes: 0 |
2018/03/22 | 562 | 2,059 | <issue_start>username_0: I have received a response in the following format
{"ErrorCode":"406","Message":"Employee Name should not be empty and should not be more than 30 characters"} and to validate the same, i have added a response assertion and given the response in response text but when i try to assert it gets failed[](https://i.stack.imgur.com/CM7qk.png)
Kindly suggest me the right way. Thanks<issue_comment>username_1: Since [JMeter 4.0](https://www.blazemeter.com/blog/whats-new-in-jmeter-4) there is [JSON Assertion](http://jmeter.apache.org/usermanual/component_reference.html#JSON_Assertion) which is the right way to deal with [JSON](https://en.wikipedia.org/wiki/JSON) responses using [JSONPath](https://github.com/json-path/JsonPath) language.
String operations like equals and substring as well as regular expression operations like contains and matches can be fragile given JSON nature of the response.
1. To validate the "Error Code":
* Assert JSON Path Exists: `$.ErrorCode`
* Additionally assert value: check
* Expected value: `406`
[](https://i.stack.imgur.com/28oQr.png)
2. To validate the "Message":
* Assert JSON Path Exists: `$.Message`
* Additionally assert value: check
* Expected value: `Employee Name should not be empty and should not be more than 30 characters`
[](https://i.stack.imgur.com/DnxEk.png)
---
If for some reason you got stuck on earlier JMeter version you can play the same trick using [JSON Path Assertion](https://jmeter-plugins.org/wiki/JSONPathAssertion/) plugin
Upvotes: 2 <issue_comment>username_2: It seems that your check includes a newline character at the beginning.
JMeter uses the value you input as is, without trimming, so the solution is:
Remove spaces/newlines before and after your string. You can also split it into several smaller assertions to make sure no extra characters are added.
Upvotes: 0 |
2018/03/22 | 1,014 | 3,828 | <issue_start>username_0: I am upgrading my project from .NET Core 1 to .NET Core 2.
Following the Official Docs - [Link](https://learn.microsoft.com/en-us/aspnet/core/migration/1x-to-2x/) - I edited my csproj to contain the following packages.
```
```
Then updated the `Program.cs` to.
```
public class Program
{
public static void Main(string[] args)
{
BuildWebHost(args).Run();
}
public static IWebHost BuildWebHost(string[] args) => WebHost.CreateDefaultBuilder(args).UseStartup().Build();
}
```
and also changed the Startup Class to this.
```
public IConfiguration Configuration { get; }
public Startup(IConfiguration configuration)
{
Configuration = configuration;
}
// This method gets called by the runtime. Use this method to add services to the container.
public void ConfigureServices(IServiceCollection services)
{
// Load Configuration from appsettings.json
services.Configure(Configuration.GetSection("IdentityServerSettings"));
services.AddOptions();
var identityServerOptions = serviceProvider.GetService>().Value;
services.AddMvc(o =>
{
if (identityServerOptions.EnableSSL)
o.Filters.Add(new RequireHttpsAttribute());
});
services.AddAuthentication("Bearer").AddIdentityServerAuthentication(opt =>
{
opt.Authority = identityServerOptions.Authority;
opt.RequireHttpsMetadata = identityServerOptions.EnableSSL;
opt.ApiName = identityServerOptions.ApiName;
});
}
public void Configure(IApplicationBuilder app, IHostingEnvironment env)
{
#region Identity Server Config
// Setup Identity Server Options for this API -
app.UseAuthentication();
#endregion
if (env.IsDevelopment())
{
app.UseDeveloperExceptionPage();
app.UseBrowserLink();
}
else
{
app.UseExceptionHandler("/Home/Error");
}
app.UseStaticFiles();
// tried this
app.UseMvc();
// and this
//app.UseMvcWithDefaultRoute();
// and this
//app.UseMvc(routes =>
//{
// routes.MapRoute(name: "default", template: "{controller=Home}/{action=Index}/{id?}");
//});
}
```
I have tried all three
1 - `app.UseMvc();`
2 - `app.UseMvcWithDefaultRoute();`
3 -
```
app.UseMvc(routes =>
{
routes.MapRoute(name: "default", template: "{controller=Home}/{action=Index}/{id?}");
});
```
But whenever I run the project I get an Exception: 'Sequence contains more than one matching element'.
[](https://i.stack.imgur.com/cF4ye.png)<issue_comment>username_1: Ok, this had nothing to do with the .NET Core upgrade. It is an issue with how I've set up Fluent Validation.
During the upgrade I also upgraded `FluentValidation.AspNetCore` from 7.2.1 to 7.5.*, which was causing the exception.
After downgrading back to 7.2.1 the solution is working fine.
I will post more on the issue when I find a solution to get it working with `FluentValidation.AspNetCore` v7.5.
Upvotes: 0 <issue_comment>username_2: I followed the docs and did the upgrades from `AspNetCore 1` to `ASPNetCore.All` v2.0.7 too (I know v2.0.8 just came out but Azure doesn't support it just yet so I am holding on the upgrade). I had no problem compiling with `FluentValidation` v7.5.2.
Just wanna share how I did it.
### Create an extension method
```
namespace DL.SO.Web.UI.Extensions
{
public static class MvcBuilderExtensions
{
public static void AddFluentValidationServices(this IMvcBuilder mvcBuilder)
{
mvcBuilder.AddFluentValidation(fv =>
{
fv.RegisterValidatorsFromAssemblyContaining();
});
}
}
}
```
### Call it after `AddMvc()`
```
public void ConfigureServices(IServiceCollection services)
{
services.AddMvc()
.AddFluentValidationServices();
}
```
Upvotes: 1 |
2018/03/22 | 634 | 2,364 | <issue_start>username_0: While I'm working on a Vertical `TextView`, I have one of two solutions:
1. Normal TextView with rotation -90: this makes controlling the position too hard, with unexpected results due to the different sizes of devices
2. Custom Vertical TextView, but I don't have an idea about doing that.
Any suggestions or solutions?
the following code will cause the text view to have a center position, what if I need to make the vertical view to be at the right end of the screen and centralized vertically?
```
```<issue_comment>username_1: Here is a custom class which will help you:
```
import android.content.Context;
import android.graphics.Canvas;
import android.util.AttributeSet;
import android.view.Gravity;
import android.widget.TextView;
/**
* Created by <NAME> on 22/3/18.
*/
public class VTextView extends TextView {
final boolean isTop;
public VTextView(Context context, AttributeSet attrs){
super(context, attrs);
final int mGravity = getGravity();
if(Gravity.isVertical(mGravity) && (mGravity&Gravity.VERTICAL_GRAVITY_MASK) == Gravity.BOTTOM) {
setGravity((mGravity&Gravity.HORIZONTAL_GRAVITY_MASK) | Gravity.TOP);
isTop = false;
}else
isTop = true;
}
@Override
protected void onMeasure(int mWidthMeasureSpec, int mHeightMeasureSpec){
super.onMeasure(mHeightMeasureSpec, mWidthMeasureSpec);
setMeasuredDimension(getMeasuredHeight(), getMeasuredWidth());
}
@Override
protected boolean setFrame(int l, int t, int r, int b){
return super.setFrame(l, t, l+(b-t), t+(r-l));
}
@Override
public void draw(Canvas mCanvas){
if(isTop){
mCanvas.translate(getHeight(), 0);
mCanvas.rotate(90); //Here rotate view to 90degree
}else {
mCanvas.translate(0, getWidth());
mCanvas.rotate(-90);
}
mCanvas.clipRect(0, 0, getWidth(), getHeight(), android.graphics.Region.Op.REPLACE);
super.draw(mCanvas);
}
}
```
Upvotes: 1 <issue_comment>username_2: I did like this
try like this :-
```
```
Take `height` according to `text` and take `width` according to `textSize`
Upvotes: 0 <issue_comment>username_3: You can add a pivot position to your text.
```
android:transformPivotX="200dp"
```
Upvotes: 0 |
2018/03/22 | 742 | 2,562 | <issue_start>username_0: I am working on a quiz module where the quiz has to be submitted automatically if it crosses the time limit, i.e. 30 mins. I have used jQuery for this, but somehow the quiz doesn't submit automatically when the time limit is reached. May I know where I am going wrong? Any insight into this will be very helpful. Following is my code snippet,
```
function get15dayFromNow() {
return new Date(new Date().valueOf() + <?php echo $duration; ?> * 60 * 1000);
}
var $time_spend = $('#time_spend');
$time_spend.countdown(get15dayFromNow(), function(event) {
$(this).val(event.strftime('%M:%S'));
});
var $clock = $('#clock');
$clock.countdown(get15dayFromNow(), function(event) {
$(this).html('Time Left : '+'00:'+event.strftime('%M:%S'));
var counterVal = $("#clock").text();
if((counterVal == '00:00') || (counterVal == '00:00:00'))
{
submitForm();
}
});
function submitForm()
{
document.getElementById("target").submit();
}
<?php echo $duration; ?>:00
//Question and options listings here.
Submit
Submit
```<issue_comment>username_1: Finally its sorted now. The lines
```
if((counterVal == '00:00') || (counterVal == '00:00:00'))
{
submitForm();
}
```
had to be replaced by ,
```
if((counterVal == 'Time Left : 00:00:00') || (counterVal == 'Time Left : 00:00'))
{
submitForm();
}
```
Upvotes: 0 <issue_comment>username_2: First of all, you need to keep your js code in a document-ready function.
There is an event which fires on completion of your countdown; you can use that
event to submit the form. Look at the line containing **finish.countdown**.
Try the following code.
```
$(document).ready(function(){
function get15dayFromNow() {
return new Date(new Date().valueOf() + <?php echo $duration; ?> * 60 * 1000);
}
var $time_spend = $('#time_spend');
$time_spend.countdown(get15dayFromNow(), function(event) {
$(this).val(event.strftime('%M:%S'));
});
var $clock = $('#clock');
$clock.countdown(get15dayFromNow(), function(event) {
$(this).html('Time Left : '+'00:'+event.strftime('%M:%S'));
})
.on('finish.countdown', function() {
submitForm();
});
function submitForm()
{
document.getElementById("target").submit();
}
});
```
Read more on [Document](http://hilios.github.io/jQuery.countdown/documentation.html#controls)
Upvotes: 2 [selected_answer] |
2018/03/22 | 482 | 1,522 | <issue_start>username_0: My date range is
```
var currDay = Jan.1
var birthday = Feb.15
```
I know that to find the difference in number of weeks is
```
currDay.diff(birthday, 'week')
```
However, is there a way to find the full weeks and the remaining days?<issue_comment>username_1: You can make use of [`duration`](http://momentjs.com/docs/#/durations/).
You can get the years, months (excludes years) and days (excludes years and months) using this. Only problem is weeks are calculated using the value of days so you'd still have to get the remainder on days if you're getting the number of weeks.
From [momentjs docs](https://momentjs.com/docs/#/durations/weeks/):
>
> Pay attention that unlike the other getters for duration, weeks are
> counted as a subset of the days, and are not taken off the days count.
>
>
>
Note: If you want the total number of weeks use `asWeeks()` instead of `weeks()`
```js
var currDay = moment("2018-01-01");
var birthday = moment("2018-02-16");
var diff = moment.duration(birthday.diff(currDay));
console.log(diff.months() + " months, " + diff.weeks() + " weeks, " + diff.days()%7 + " days.");
console.log(Math.floor(diff.asWeeks()) + " weeks, " + diff.days()%7 + " days.");
```
Upvotes: 3 <issue_comment>username_2: ```js
var currDay = moment("Jan.1","MMM.DD");
var birthday = moment("Feb.15","MMM.DD");
var diff = moment.duration(birthday.diff(currDay));
console.log(diff.weeks() +" weeks & "+ diff.days()%7 +" days to go for birthday");
```
Upvotes: 0 |
2018/03/22 | 630 | 2,367 | <issue_start>username_0: I'm getting 401 issue when i deployed the bot in local iis and registered the bot channel in azure with ngrok https url.
Steps i followed :
1. Deployed bot in IIS with port 1214.
2. created https using ngrok for the port 1214

3. Registered channel in Azure portal and given end point as ngrok url.

4. i try accessing the WEB chat using the iframe url in the browser & getting 401 error. I verified the MS appid & password in both web.config in my local server and azure portal,both are same.

5. I have tried accessing the iis bot server using the emulator with ngrok ulr and given empty ms appid & password and changed web.config to empty appid,password, it's working fine.

Please help.
**Edited :**
I have debugged the botconnector(github) source code and figured out the issue,but not sure how to fix it.
[Token expire](https://i.stack.imgur.com/tRnzo.png)
Token expires in short time even for first time chat but the time difference in the screen shot shows too long (validto and current). so what may be the issue ? As i said my bot is running in my local server (location india) and i registered bot channel with endpoint as ngrok url running in my local server. (pls refer details above.).<issue_comment>username_1: Try [troubleshooting](https://learn.microsoft.com/en-us/azure/bot-service/bot-service-troubleshoot-authentication-problems#issue-an-http-request-to-the-microsoft-login-service) your bot.
If you don't get a success response reset your app password.
Upvotes: 1 <issue_comment>username_2: I fixed the issue. It's due to wrong time zone settings on my system.
Upvotes: -1 [selected_answer]<issue_comment>username_3: I also faced this error "**Error: Unauthorized. Invalid AppId passed on token**" while trying to access a bot deployed in Azure from the bot emulator. The solution was to go to Bot registration>Access control> I added myself as the owner of the bot. And it started working for me in the emulator. I gave http://localhost:port and didnt provide the MicrosoftAppId and MicrosoftAppPassword in the emulator and I was able to debug the bot locally.
Upvotes: 0 |
2018/03/22 | 1,583 | 5,597 | <issue_start>username_0: I'm trying to code the mismatch kernel in python. I would like to extract all substrings of a string given a numpy boolean array mask, where the extraction pattern is not necessarily continuous (e.g. mask = [False, True, False, True], such that from 'ABCD' I extract 'BD'). After extracting substrings according to this pattern I can then count all the common substrings between my two sequences.
Concerning the extraction step `string[theta]` doesn't work to extract such substring. I now have the following chunk of code which works:
```
def function(s1, s2, k, theta):
l1 = []
l2 = []
# substrings of s1
substrk_itr1 = (s1[i:i+k] for i in range(len(s1) - k + 1))
l1 = [''.join(substr[i] for i, b in enumerate(theta) if b)
for substr in substrk_itr1]
# substrings of s2
substrk_itr2 = (s2[i:i+k] for i in range(len(s2) - k + 1))
l2 = [''.join(substr[i] for i, b in enumerate(theta) if b)
for substr in substrk_itr2]
L = l1 + l2
C = Counter(L)
c1 = Counter(l1)
c2 = Counter(l2)
x = sum([c1[w] * c2[w] for w in C if w])
return x
```
where (s1,s2) are strings I want to extract all substrings from by first considering all substrings of length k, and then reextract a substring according to the boolean pattern theta. You can make tests with the following values, and you should theoretically get 2.
```
k = 5
theta = np.array([False,True, True, True, False])
X = 'AAATCGGGT'
Y = 'AAATTGGGT'
```
The issue is that this code is too slow, (I use it to compute a kernel, so I run it thousands of times). I profiled the code and the bottleneck is due to the join function mostly.
Is there a way to perform the extraction step faster with python code, or in a more pythonic way ? If I write such code in cython could it be faster ? On the doc they are saying :
>
> In many use cases, C strings (a.k.a. character pointers) are slow and cumbersome. For one, they usually require manual memory management in one way or another, which makes it more likely to introduce bugs into your code.
>
>
>
Thank you for your help !<issue_comment>username_1: You can get a very easy 35% speed increase just by converting the strings to numpy arrays (of `dtype=np.int8` which is the same size as a character) and replacing `''.join(...)` with a boolean array index: `substr[theta]`
```
def function(s1,k,theta):
s1 = np.fromstring(s1,np.int8)
substrk_itr1 = (s1[i:i+k] for i in range(len(s1) - k + 1))
l1 = [substr[theta] for substr in substrk_itr1]
l1 = [ x.tostring() for x in l1 ]
# etc for s2
```
There is almost certainly more that you can do, however this is the most obvious quick improvement.
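As an illustration only (a hypothetical completion, not part of the original answer), the whole function could then look roughly like this, using the non-deprecated `frombuffer`/`tobytes` spellings:
```
from collections import Counter
import numpy as np

def function(s1, s2, k, theta):
    # view the strings as byte arrays so boolean-mask indexing works
    a1 = np.frombuffer(s1.encode(), dtype=np.int8)
    a2 = np.frombuffer(s2.encode(), dtype=np.int8)
    l1 = [a1[i:i + k][theta].tobytes() for i in range(len(a1) - k + 1)]
    l2 = [a2[i:i + k][theta].tobytes() for i in range(len(a2) - k + 1)]
    c1, c2 = Counter(l1), Counter(l2)
    # dot product of the two substring-count vectors
    return sum(c1[w] * c2[w] for w in c1 if w in c2)
```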
Upvotes: 1 <issue_comment>username_2: In addition to taking advantage of using types as suggested by @username_1, using the appropriate data structures is important. Python's container objects like `list` are slow since they are not contiguous in memory, and they should be avoided when writing performance-minded cython code.
In this case, we can also save some of those `for` loops as well. Instead of iterating over `theta` twice for both the `s1` and the `s2` sublists, we can handle both strings at the same time. We can also compare the characters one by one, and then break out of the comparison early as soon as we hit the first character/nucleotide that is not matching.
Below is my cython version of your code, which should runs well over an order of magnitude faster than the question's code (about 1 second for 1 million iterations versus 40 seconds or so). I have added some comments that hopefully will be helpful. As for your concern about C string management, at least for simple one-off functions like this, as long as you call an appropriate `free` for each call to `malloc`, you should be fine. Since no `char*` had to be dynamically allocated directly at the C level via such means, there is no need for worrying about memory management here. Also, indexing into `char*` rather than `str`, especially in a tight loop like this, can avoid some slight python overhead.
```
from libc.stdint cimport int8_t
cimport cython
"""
I have included three @cython decorators here to save some checks.
The first one, boundscheck is the only useful one in this scenario, actually.
You can see their effect if you generate cython annotations!
Read more about it here:
http://cython.readthedocs.io/en/latest/src/reference/compilation.html
"""
@cython.boundscheck(False)
@cython.wraparound(False)
@cython.initializedcheck(False)
def fast_count_substr_matches(str s1, str s2, int k, int8_t[:] theta):
cdef int i, j, m#you used k, unfortunately, so stuck with m...
cdef bytes b1 = s1.encode("utf-8")
#alternatively, could just pass in bytes strings instead
#by prefixing your string literals like b'AAATCGGGT'
cdef bytes b2 = s2.encode("utf-8")
cdef char* c1 = b1
cdef char* c2 = b2
#python str objects have minor overhead when accessing them with str[index]
#this is why I bother converting them to char* at the start
cdef int count = 0
cdef bint comp#A C-type int that can be treated as python bool nicely
for i in range(len(s1) - k + 1):
for j in range(len(s2) - k + 1):
comp = True
for m in range(k):
if theta[m] == True and c1[i + m] != c2[j + m]:
comp = False
break
if comp:
count += 1
return count
```
Please let me know if there is anything in this answer that could be cleared up. Hope this helps!
Upvotes: 1 [selected_answer] |
2018/03/22 | 322 | 1,201 | <issue_start>username_0: In Opencart v2.3X admin while editing product, I am uploading images on root folder it is giving "direct does not exist" error.
Thanks in advance.<issue_comment>username_1: If your problem is that, in the back end's image manager, you cannot create a new folder or cannot click an existing folder such as "catalog" or "data",
the reason is that you have uploaded a very big image (around 2 MB or more) to one of these folders. Just find that image and delete it, and everything returns to normal.
Upvotes: 0 <issue_comment>username_2: I had the same problem with one of my folders. Realised that my folder name had the symbol "&" on it. That was creating the issue.
Upvotes: 2 <issue_comment>username_3: In my case this worked:
check DIR_APPLICATION & DIR_SYSTEM & DIR_IMAGE in config.php
this error occurred in this page:
.../admin/controller/common/filemanager.php
find >> error_directory
Upvotes: 0 <issue_comment>username_4: Make sure the admin/config.php and config.php path are relatively specified.
E.g
use this
```
define('DIR_APPLICATION', '/Users/yourpcusername/opencart-dev/catalog/');
```
instead of
```
define('DIR_APPLICATION', '../catalog/');
```
Upvotes: 0 |
2018/03/22 | 432 | 1,370 | <issue_start>username_0: How can I insert a record to another table with `select * from table` and add additional data?
* `table_a` columns: id, name
* `table_b` columns: id, name, email, phone
My query:
```
INSERT INTO `table_b`
SELECT *
FROM `table_a`
WHERE `id` = '1' + additional data
```
I've found the answer
Query :
```
INSERT INTO `table_b`
SELECT a.*, '<EMAIL>','1234'
FROM table_a a
WHERE `id`='2'
```
Thanks<issue_comment>username_1: You may just add constants to the `SELECT` statement, e.g.
```
INSERT INTO table_b (id, name, email, phone)
SELECT id, name, '<EMAIL>', '867-5309'
FROM table_a
WHERE id = 1;
```
Note that I explicitly list all columns both from `table_b` and from `table_a`. While it isn't absolutely required to do this, it makes your code much easier to read, more maintainable, and minimizes the chance for errors later on.
Upvotes: 4 [selected_answer]<issue_comment>username_2: You can add this data in your `SELECT`
```
INSERT INTO table_b
SELECT table_a.id, table_a.name, '<EMAIL>' AS email, '123' AS phone
FROM table_a WHERE id='1'
```
of course, the columns have to match.
If only some of the columns (by name or simply by actual existence) match, you can specify which columns you are inserting (watch the order!) after the "insert into table_b" statement
Upvotes: 1 |
2018/03/22 | 1,256 | 4,965 | <issue_start>username_0: In my application I have implemented FCM push notification. It is working fine When checking notification with FCM console and pushtry.com website, Then I tried with actual Server API the foreground notification works well, But background notification not receiving and sometimes it receives very rarely and that too without banner and sound . Please help to find out the issue.
Here I attached the code what I am trying..
```
import UIKit
import UserNotifications
import Firebase
@UIApplicationMain
class AppDelegate: UIResponder, UIApplicationDelegate {
var window: UIWindow?
func application(_ application: UIApplication,
didFinishLaunchingWithOptions launchOptions: [UIApplicationLaunchOptionsKey: Any]?) -> Bool {
FirebaseApp.configure()
Messaging.messaging().delegate = self
if #available(iOS 10.0, *) {
UNUserNotificationCenter.current().delegate = self
let authOptions: UNAuthorizationOptions = [.alert, .badge, .sound]
UNUserNotificationCenter.current().requestAuthorization(
options: authOptions,
completionHandler: {_, _ in })
} else {
let settings: UIUserNotificationSettings =
UIUserNotificationSettings(types: [.alert, .badge, .sound], categories: nil)
application.registerUserNotificationSettings(settings)
}
application.registerForRemoteNotifications()
return true
}
func application(_ application: UIApplication,
didReceiveRemoteNotification userInfo: [AnyHashable: Any]) {
print(userInfo)
}
func applicationDidBecomeActive(_ application: UIApplication)
{
Messaging.messaging().shouldEstablishDirectChannel = true
application.applicationIconBadgeNumber = 0;
}
func applicationDidEnterBackground(_ application: UIApplication)
{
Messaging.messaging().shouldEstablishDirectChannel = false
print("Disconnected from FCM.")
}
func application(_ application: UIApplication,
didReceiveRemoteNotification userInfo: [AnyHashable: Any],
fetchCompletionHandler completionHandler: @escaping
(UIBackgroundFetchResult) -> Void) {
print(userInfo)
completionHandler(UIBackgroundFetchResult.newData)
}
func application(_ application: UIApplication,
didFailToRegisterForRemoteNotificationsWithError error: Error) {
print("Unable to register for remote notifications: \ .
(error.localizedDescription)")
}
func application(_ application: UIApplication,
didRegisterForRemoteNotificationsWithDeviceToken deviceToken: Data) {
print("APNs token retrieved: \(deviceToken)")
}
}
@available(iOS 10, *)
extension AppDelegate : UNUserNotificationCenterDelegate {
func userNotificationCenter(_ center: UNUserNotificationCenter,
willPresent notification: UNNotification,
withCompletionHandler completionHandler:
@escaping (UNNotificationPresentationOptions) -> Void) {
let userInfo = notification.request.content.userInfo
print(userInfo)
completionHandler([])
}
func userNotificationCenter(_ center: UNUserNotificationCenter,
didReceive response: UNNotificationResponse,
withCompletionHandler completionHandler:
@escaping () -> Void) {
let userInfo = response.notification.request.content.userInfo
print(userInfo)
completionHandler()
}
}
extension AppDelegate : MessagingDelegate {
func messaging(_ messaging: Messaging, didRefreshRegistrationToken
fcmToken: String) {
print("Firebase registration token: \(fcmToken)")
}
func messaging(_ messaging: Messaging, didReceive remoteMessage:
MessagingRemoteMessage) {
print("Received data message: \(remoteMessage.appData)")
}
}
```
Here I share the request format from server for push the notification:
```
{
"to":"cerGBjsmtzE:APA91bFk-
6ZI4ehaWbg0bGSGzAh10NeUh3AcyEFq7dASU7W4YY4WL8vWCA5wav-
LTYc0xTGWHev8Z99",
"priority":"high",
"data":
{
"title":"New Inspection scheduled",
"body":"You have a new Inspection Request scheduled,
"sound":"default"
}
}
```<issue_comment>username_1: To receive the push notification in background mode there must be `"mutable-content"` key with value `"1"` or `true` is present in your payload format.
Please tell you, backend developer, to add that key.
Please check following valid payload format
```
{
"to": "dWB537Nz1GA:APA91bHIjJ5....",
"content_available": true,
"mutable_content": true,
"data":
{
"message": "Offer!",
"mediaUrl": "https://upload.wikimedia.org/wikipedia/commons/thumb/2/2a/FloorGoban.JPG/1024px-FloorGoban.JPG"
},
"notification":
{
"body": "Enter your message",
"sound": "default"
}
}
```
You can refer the following link
<https://firebase.google.com/docs/cloud-messaging/concept-options>
Upvotes: 3 [selected_answer]<issue_comment>username_2: Make sure you have selected the Remote settings in background mode --- Under the capabilities section of XCODE
[](https://i.stack.imgur.com/pZtcm.png)
Upvotes: 3 |
2018/03/22 | 662 | 2,174 | <issue_start>username_0: I'm trying to show only a selected number of rows from the database(say from 20 to 45) I'm trying to use limit but it is not working
```
Select *
from UsersTable
limit 20,45
```
It is wrong since SQL Server doesn't allow this feature.
The answer that I found is
```
SELECT *
FROM
(SELECT
*, ROW_NUMBER() OVER (ORDER BY name) AS row
FROM
sys.databases) a
WHERE
row > 20 and row <= 45
```
Can someone simplify this? I am not able to follow the above query, what is `(ORDER BY name)` in it
Say my database has the columns `Id, UserName, Email` and the values in my `Id` column are like `1, 2, 4, 8, 11, 17` -> not continuous values<issue_comment>username_1: In SQL Server 2012 and above you can use `OFFSET` and `FETCH` as follows:
```
SELECT *
FROM tbl
ORDER BY name
OFFSET 20 ROWS
FETCH NEXT 25 ROWS ONLY
```
In older versions you have to use `ROW_NUMBER()` as follows:
```
SELECT *
FROM (
SELECT *, ROW_NUMBER() OVER (ORDER BY name) as rn
FROM tbl
) x
WHERE rn > 20 and rn <= 45
```
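Applied to the table from the question (a sketch assuming you want rows 21 to 45 when ordered by `Id`; the table and column names are taken from the question):
```
SELECT Id, UserName, Email
FROM UsersTable
ORDER BY Id
OFFSET 20 ROWS
FETCH NEXT 25 ROWS ONLY;
```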
Upvotes: 4 [selected_answer]<issue_comment>username_2: The `ORDER BY name` clause in your call to `ROW_NUMBER` is the ordering logic which will be used for assigning row numbers. Consider the following data, and the corresponding row numbers:
```
name | row
Abby | 1
Bob | 2
... | ...
Jack | 20
John | 21
... | ...
Mike | 45
```
You may visualize the subquery you aliased as `a` as the above intermediate table. Then, you filter `a` on the condition that `row > 20 and row <= 45`. In the above table, this means you would be retaining the records from `John` up to and including `Mike`.
Later versions of SQL Server also support `OFFSET` and `FETCH`, but you should learn how to use `ROW_NUMBER`, perhaps the most basic analytic function, because it will likely come up again in your future work.
Your full query:
```
SELECT *
FROM
(
-- this subquery corresponds to the table given above
SELECT *, ROW_NUMBER() OVER (ORDER BY name) AS row
FROM sys.databases
) a
-- now retain only records with row > 20 and row <= 45
WHERE
row > 20 AND row <= 45;
```
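The same pattern applied to the table from the question (a sketch; it assumes you want rows 21 to 45 when ordered by `Id`):
```
SELECT Id, UserName, Email
FROM (
    SELECT Id, UserName, Email,
           ROW_NUMBER() OVER (ORDER BY Id) AS row
    FROM UsersTable
) a
WHERE row > 20 AND row <= 45;
```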
Upvotes: 1 |
2018/03/22 | 964 | 3,673 | <issue_start>Whenever I try to log in to my application, this **`getSystemService()`** method produces an NPE. Please help, as the solutions already available do not answer my question. I have already called **`super.onCreate()`**.
**Stack Trace**
```
Caused by: java.lang.NullPointerException: Attempt to invoke virtual method 'java.lang.Object android.content.Context.getSystemService(java.lang.String)' on a null object reference
at com.example.aiousecurityapplication.Utill.AppConfig.isNetworkAvailable(AppConfig.java:19)
at com.example.aiousecurityapplication.Utill.JSONSenderReceiver.makeHttpRequest(JSONSenderReceiver.java:40)
at com.example.aiousecurityapplication.Activities.SigninActivity$login.doInBackground(SigninActivity.java:200)
at com.example.aiousecurityapplication.Activities.SigninActivity$login.doInBackground(SigninActivity.java:151)
at android.os.AsyncTask$2.call(AsyncTask.java:316)
at java.util.concurrent.FutureTask.run(FutureTask.java:237)
at android.os.AsyncTask$SerialExecutor$1.run(AsyncTask.java:255)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1133)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:607)
at java.lang.Thread.run(Thread.java:776)
```
**Method Location**
```
public class AppConfig {
// Server url
public static String URL_MAIN = "http://10.14.36.4/aiousecurity/index.php";
public static final String FROM_CNIC = "fromCnic";
public static final String ID = "id";
public static boolean isNetworkAvailable() {
boolean flag = false;
ConnectivityManager connMgr =
(ConnectivityManager) AIOUSecurity.getContext().getSystemService(Context.CONNECTIVITY_SERVICE);
if (connMgr!= null) {
NetworkInfo networkInfo= connMgr.getActiveNetworkInfo();
flag = (networkInfo != null && networkInfo.isConnected());
}
return (flag);
}
}
```
I have also called **`super.onCreate();`**. Please see the code below:
```
public class AIOUSecurity extends Application {
private static Context context;
public static int screenHeight;
public static int screenWidth;
@Override
public void onCreate() {
super.onCreate();
context = getApplicationContext();
DisplayMetrics displayMetrics = getApplicationContext().getResources().getDisplayMetrics();
screenHeight = displayMetrics.heightPixels;
screenWidth = displayMetrics.widthPixels;
}
public static Context getContext() {
return context;
}
}
```<issue_comment>username_1: I guess you forgot to declare the application class in the manifest.
You must add `android:name=".AIOUSecurity"` to the `<application>` tag of your manifest.
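For reference, the relevant part of `AndroidManifest.xml` would look roughly like this (a sketch; the other attributes are placeholders):
```
<application
    android:name=".AIOUSecurity"
    android:label="@string/app_name"
    ... >
    <!-- activities, services, etc. -->
</application>
```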
Upvotes: 2 [selected_answer]<issue_comment>username_2: Your problem is that the context is null (maybe the app isn't initialized or you have lost the reference, I'm not sure about this point).
If I understand what you are trying to do, `AppConfig` seems like a utility class with some static methods.
For your use case, why don't you pass the context as a parameter to the method so you can be sure that you have that context?
```
public class AppConfig {
// Server url
public static String URL_MAIN = "http://10.14.36.4/aiousecurity/index.php";
public static final String FROM_CNIC = "fromCnic";
public static final String ID = "id";
public static boolean isNetworkAvailable(Context context) {
boolean flag = false;
ConnectivityManager connMgr =
(ConnectivityManager) context.getSystemService(Context.CONNECTIVITY_SERVICE);
if (connMgr!= null) {
NetworkInfo networkInfo= connMgr.getActiveNetworkInfo();
flag = (networkInfo != null && networkInfo.isConnected());
}
return (flag);
}
}
```
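A call site then passes whatever context it has at hand, for example from the activity that starts the login task (a sketch; the class and strings are illustrative):
```
// e.g. inside SigninActivity, before kicking off the AsyncTask
if (!AppConfig.isNetworkAvailable(getApplicationContext())) {
    Toast.makeText(this, "No network connection", Toast.LENGTH_SHORT).show();
    return;
}
```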
Upvotes: 0 |
2018/03/22 | 540 | 1,970 | <issue_start>username_0: I would have thought this was straightforward but I think it might give unexpected results. I basically load a few documents into a variable.. then I want to save the whole document back to MongoDB.
Do I have to save each one individually using save
<https://docs.mongodb.com/manual/reference/method/db.collection.save/>
Or can I send them as an array using updatemany <https://docs.mongodb.com/manual/reference/method/db.collection.updateMany/>
it doesn't seem that clear in the docs how to do that.. if you can.
Thanks for your time.<issue_comment>username_1: If you use the native MongoDB driver for Node.js, you can use `insertMany` if you want to create new documents. I think it will work if you have an array of objects (be careful to name the fields correctly), like:
```
var your_variable = [{a: 1, b: 2, c: 3}, {a: 2, b: 3, c: 1, d: 4}];
db.YourCollection.insertMany(your_variable, function(err, result) {
// Your treatment
});
```
If you want to update multiple documents with different values for each, there is no single `updateMany` call for that (or at least, after much research, I didn't find one); you have to issue a separate update per document.
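That said, the driver can send those per-document updates in one batch with `bulkWrite` (a sketch; the collection and field names are illustrative):
```
// one updateOne operation per modified document
var ops = docs.map(function(doc) {
    return {
        updateOne: {
            filter: { _id: doc._id },
            update: { $set: { name: doc.name, price: doc.price } }
        }
    };
});

db.collection('YourCollection').bulkWrite(ops, function(err, result) {
    // your treatment
});
```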
Upvotes: 2 <issue_comment>username_1: To update multiple documents with the same values, you can query like this :
```
db.YourCollection.updateMany({/* empty filter to update all documents */}, {
$set: {
field1: new_value1,
field2: new_value2
// etc...
}
}, function(err, result) {
// your treatment
});
```
If you want to update multiple documents but not all, use the method like this :
```
db.YourCollection.updateMany({fieldToFilter: value}, {
$set: {
field1: new_value1,
field2: new_value2,
// etc...
}
}, function(err) {
// Your treatment
});
```
The `$set` operator changes the value of the fields, but you can use any update operator from: <https://docs.mongodb.com/manual/reference/operator/update/>
You're not forced to use a callback function; you can use a Promise too.
Upvotes: 1 |
2018/03/22 | 831 | 2,202 | <issue_start>username_0: I am a newbie to Scala.
I have a Tuple[Int, String]
```
((1, "alpha"), (2, "beta"), (3, "gamma"), (4, "zeta"), (5, "omega"))
```
For the above list, I want to print all strings where the corresponding length is 4.<issue_comment>username_1: > printing length of string of Tuples in Scala
```
val tuples = List((1, "alpha"), (2, "beta"), (3, "gamma"), (4, "zeta"), (5, "omega"))
println(tuples.map(x => (x._2, x._2.length)))
//List((alpha,5), (beta,4), (gamma,5), (zeta,4), (omega,5))
```
> I want to print all strings where the corresponding length is 4
You can `filter` first and then print, as follows:
```
val tuples = List((1, "alpha"), (2, "beta"), (3, "gamma"), (4, "zeta"), (5, "omega"))
tuples.filter(_._2.length == 4).foreach(x => println(x._2))
```
it should print
```
beta
zeta
```
Upvotes: 1 <issue_comment>username_2: Let's suppose you have a list of tuples, and you need all the values whose string length equals 4.
You can do a filter on the list:
```
val filteredList = list.filter(_._2.length == 4)
```
And then iterate over each element to print them:
```
filteredList.foreach(tuple => println(tuple._2))
```
Upvotes: 0 <issue_comment>username_3: You can convert your `Tuple` to `List` and then map and filter as you need:
```
tuple.productIterator.toList
.map{case (a,b) => b.toString}
.filter(_.length==4)
```
Example:
For the given input:
```
val tuple = ((1, "alpha"), (2, "beta"), (3, "gamma"), (4, "zeta"), (5, "omega"))
tuple: ((Int, String), (Int, String), (Int, String), (Int, String), (Int, String)) = ((1,alpha),(2,beta),(3,gamma),(4,zeta),(5,omega))
```
Output:
```
List[String] = List(beta, zeta)
```
Upvotes: 1 <issue_comment>username_4: Here is a way to achieve this:
```
scala> val x = ((1, "alpha"), (2, "beta"), (3, "gamma"), (4, "zeta"), (5, "omega"))
x: ((Int, String), (Int, String), (Int, String), (Int, String), (Int, String)) = ((1,alpha),(2,beta),(3,gamma),(4,zeta),(5,omega))
scala> val y = x.productIterator.toList.collect{
case ele : (Int, String) if ele._2.length == 4 => ele._2
}
y: List[String] = List(beta, zeta)
```
Upvotes: 0 |
2018/03/22 | 1,016 | 4,104 | <issue_start>username_0: I have following two tables, whose schema looks like given below :-
```
postgres=# \d products1;
Table "public.products1"
Column | Type | Modifiers
--------------------+---------+--------------------------------------------------------
id | integer | not null default nextval('products1_id_seq'::regclass)
name | text | not null
default_picture_id | integer |
Indexes:
"products1_pkey" PRIMARY KEY, btree (id)
"unique_id_default_pic_id" UNIQUE CONSTRAINT, btree (id, default_picture_id)
Foreign-key constraints:
"fk_products_1" FOREIGN KEY (id, default_picture_id) REFERENCES product_pictures1(product_id, id) ON UPDATE RESTRICT ON DELETE RESTRICT
Referenced by:
TABLE "product_pictures1" CONSTRAINT "fk_id_product_id" FOREIGN KEY (id, product_id) REFERENCES products1(default_picture_id, id)
postgres=# \d product_pictures1;
Table "public.product_pictures1"
Column | Type | Modifiers
------------+---------+----------------------------------------------------------------
id | integer | not null default nextval('product_pictures1_id_seq'::regclass)
img_path | text | not null
product_id | integer |
Indexes:
"product_pictures1_pkey" PRIMARY KEY, btree (id)
"unique_id_productid" UNIQUE CONSTRAINT, btree (id, product_id)
Foreign-key constraints:
"fk_id_product_id" FOREIGN KEY (id, product_id) REFERENCES products1(default_picture_id, id)
Referenced by:
TABLE "products1" CONSTRAINT "fk_products_1" FOREIGN KEY (id, default_picture_id) REFERENCES product_pictures1(product_id, id) ON UPDATE RESTRICT ON DELETE RESTRICT
```
The two tables above reference each other.
When I try to drop either table, it gives me the following error:
```
postgres=# drop table products1;
ERROR: cannot drop table products1 because other objects depend on it
DETAIL: constraint fk_id_product_id on table product_pictures1 depends on table products1
HINT: Use DROP ... CASCADE to drop the dependent objects too.
```
But when I drop with the cascade option, the table is deleted, yet it doesn't delete the other table or its foreign key column in that table; it deletes only the foreign key constraint.
```
postgres=# drop table products1 cascade;
NOTICE: drop cascades to constraint fk_id_product_id on table product_pictures1
DROP TABLE
postgres=# \d product_pictures1;
Table "public.product_pictures1"
Column | Type | Modifiers
------------+---------+----------------------------------------------------------------
id | integer | not null default nextval('product_pictures1_id_seq'::regclass)
img_path | text | not null
product_id | integer |
Indexes:
"product_pictures1_pkey" PRIMARY KEY, btree (id)
"unique_id_productid" UNIQUE CONSTRAINT, btree (id, product_id)
```
Is this the expected behaviour? With `on delete cascade`, deleting a row in the parent deletes the matching rows in the child table, but the same thing is not happening with `drop table`?
Am I missing something? Is this behaviour specific to Postgres?
Thanks in advance.<issue_comment>username_1: > (...) to drop a table that is referenced by a view or a foreign-key constraint of another table, CASCADE must be specified. (CASCADE will remove a dependent view entirely, **but in the foreign-key case it will only remove the foreign-key constraint, not the other table entirely**.)
<https://www.postgresql.org/docs/current/static/sql-droptable.html> (emphasis mine)
Upvotes: 1 <issue_comment>username_2: Because this is how `DROP ... CASCADE` was designed.
[Quote from the manual](https://www.postgresql.org/docs/current/static/sql-droptable.html)
> but in the foreign-key case it will only remove the foreign-key constraint, **not the other table** entirely
(Emphasis mine)
This is not specific to Postgres. Oracle and DB2 work the same when dropping tables.
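So if the goal is to get rid of both tables, you can drop them together, or detach one foreign key first (a sketch using the table and constraint names from the question):
```
-- drop both tables in one go
DROP TABLE IF EXISTS products1, product_pictures1 CASCADE;

-- or: remove one of the mutual FKs first, then drop the tables individually
ALTER TABLE product_pictures1 DROP CONSTRAINT fk_id_product_id;
DROP TABLE products1;
DROP TABLE product_pictures1;
```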
Upvotes: 3 [selected_answer] |
2018/03/22 | 1,178 | 4,037 | <issue_start>username_0: I'm transitioning from NGJS to NG and trying to recode my previous application to practice.
I stumbled upon the new `ngOnInit`, where initializations are done in Angular's component.
What I'm trying to achieve is to initialize a value WITHIN the scope to be used as a toggle to hide and unhide HTML elements.
I'm trying to solve this without looping within `ngOnInit() {}` to initialize for each object within the array. *(**See `ng-init` in `ng-repeat` block**)*
Below is a working copy of the scenario I'm trying to achieve:
```js
angular.module("app", [])
.controller("controller", function($scope) {
$scope.init = function() {
$scope.modules = [
{
label: 'Module A',
children: [
'Module A - 1',
'Module A - 2',
'Module A - 3'
]
},
{
label: 'Module B',
children: [
'Module B - 1',
'Module B - 2',
'Module B - 3'
]
},
{
label: 'Module C',
children: [
'Module C - 1',
'Module C - 2',
'Module C - 3'
]
}
];
};
});
```
```css
.child {
padding-left: 24px;
padding-top: 8px;
padding-bottom: 8px;
}
.parent {
padding: 8px;
}
```
```html
{{module.label}}
toggle
{{child}}
```
Here's a Plunker if you prefer:
<https://plnkr.co/edit/JDBBPLkr21wxSe2dlRBv?p=preview><issue_comment>username_1: Implement `OnInit` while declaring the component's class and move your initialization code to `ngOnInit` function.
```
@Component({
...
})
export class componentClass implements OnInit {
...
ngOnInit() {
// initialization code block
}
}
```
Note that Angular (version 2+) provides [lifecycle hooks](https://angular.io/guide/lifecycle-hooks) for a component, from being created to being destroyed.
---
As for the `ng-init` in the `ng-repeat` block: from Angular 2 on, you should use `ngFor` instead, and `ngFor` only allows a limited set of local variables to be defined; see **[the docs](https://angular.io/api/common/NgForOf#local-variables)**.
Upvotes: 2 <issue_comment>username_2: I did not understand your request, could you explain yourself better?
Why don't you try to use the `@Component` decorator, like this?
```
@Component({
selector: 'tag-selector',
templateUrl: './pagina.html',
styleUrls: ['./pagina.css']
})
export class Controller{
your code
}
```
Edit:
If you declare `$scope.modules` outside of `init`, it should work anyway:
```
angular.module("app", [])
.controller("controller", function($scope) {
$scope.init = function() {
};
$scope.modules = [{
label: 'Module A',
children: [
'Module A - 1',
'Module A - 2',
'Module A - 3'
]
}, {
label: 'Module B',
children: [
'Module B - 1',
'Module B - 2',
'Module B - 3'
]
}, {
label: 'Module C',
children: [
'Module C - 1',
'Module C - 2',
'Module C - 3'
]
}];
});
```
I'm sorry, but I'm not sure I fully understood the question ...
Upvotes: 0 <issue_comment>username_3: You could do it like this.
You loop over your array with \*ngFor. The button toggles the corresponding boolean value, which defines if your element is shown or not (with the \*ngIf directive)
```
@Component({
selector: 'my-app',
template: `
toggle
{{module.label}}
----------------
- {{child}}
`,
})
```
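For concreteness, a minimal sketch of such a template, assuming one toggle button per module and a list of its children (element names are illustrative), could look like this:
```
<div *ngFor="let module of modules; let i = index">
  <button (click)="show[i] = !show[i]">toggle</button>
  <h3>{{module.label}}</h3>
  <ul *ngIf="show[i]">
    <li *ngFor="let child of module.children">{{child}}</li>
  </ul>
</div>
```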
Then initialize your variables:
```
export class AppComponent {
modules:any[];
show:boolean[];
constructor() {
this.modules = [
{
label: 'Module A',
children: [
'Module A - 1',
'Module A - 2',
'Module A - 3'
]
},
{
label: 'Module B',
children: [
'Module B - 1',
'Module B - 2',
'Module B - 3'
]
},
{
label: 'Module C',
children: [
'Module C - 1',
'Module C - 2',
'Module C - 3'
]
}
];
this.show = this.modules.map(()=>true);
}
}
```
Upvotes: 3 [selected_answer] |
2018/03/22 | 894 | 3,259 | <issue_start>Just trying to learn about setting up source control with bitbucket and sourcetree for a project I'm working on in a group. I can set up a repository and push the project to it, but I'm confused about what happens when someone else tries to push their update. If someone's trying to push a new update and they first perform a pull, what exactly does that do to their update? Does it take the last push and merge it with their last commit? Also, is merging usually only done with separate branches or do you merge branches with themselves if there is a conflict? Been researching this for hours and I'm just having trouble wrapping my head around it.