date (stringlengths 10-10) | nb_tokens (int64 60-629k) | text_size (int64 234-1.02M) | content (stringlengths 234-1.02M)
---|---|---|---|
2018/03/22 | 1,357 | 5,518 | <issue_start>username_0: I have two PE files that contain sections with the same name, ".data". The section names contain different bytes when viewed in a hex dump, and the sections' contents are 00 bytes. What could this file type be?<issue_comment>username_1: Looks like you are trying to call an instance method (non-static) from a static property.
Try making your method static:
```
public static string getLevel(string levelID)
{
string levelName;
//logic here
return levelName;
}
```
Upvotes: 1 <issue_comment>username_2: Do you mean this:
```
using System;
using System.Collections.Generic;
using System.Configuration;
namespace SSPWAS.Utilities
{
public static class Constants
{
public static string ApproverlevelL1 = getLevel("1");
public static string ApproverlevelL2 = getLevel("2");
public static string ApproverlevelL3 = getLevel("3");
public static string ApproverlevelL4 = getLevel("4");
public static string ApproverlevelL5 = getLevel("5");
private static string getLevel(string levelID)
{
string levelName;
//logic here
return levelName;
}
}
}
```
Upvotes: 0 <issue_comment>username_3: Your "constants" are not constant. Make them `readonly` if you want them to behave like constants while making use of initialising static members with values. You could also make them properties and use just the `get` accessor to call the `getLevel` method.
As others have pointed out you cannot call a non-static member from within a static member, without instantiating an instance of the non-static class.
Your `getLevel` method also needs to be in its own class. As it stands it doesn't belong to a class or namespace. If you want it in its own separate class then just set the method to static.
.NET naming conventions recommend you use Pascal Casing for methods. So rename your `getLevel` method.
```
using System;
using System.Collections.Generic;
using System.Configuration;
namespace SSPWAS.Utilities
{
public static class Constants
{
// Use readonly static members
public static readonly string
ApproverlevelL1 = GetLevel("1"),
ApproverlevelL2 = GetLevel("2"),
ApproverlevelL3 = GetLevel("3"),
ApproverlevelL4 = GetLevel("4"),
ApproverlevelL5 = GetLevel("5");
// Or you could use the latest convenient syntax
public static string ApproverLevelL6 => GetLevel("6");
// Or you could use readonly properties
public static string ApproverLevelL7 { get { return GetLevel("7"); } }
private static string GetLevel(string levelId)
{
//... do logic
return "";
}
}
}
```
Or if you want the method in its own class:
```
public static class Constants
{
// Use readonly static members
public static readonly string
ApproverlevelL1 = Level.Get("1"),
ApproverlevelL2 = Level.Get("2"),
ApproverlevelL3 = Level.Get("3"),
ApproverlevelL4 = Level.Get("4"),
ApproverlevelL5 = Level.Get("5");
// Or you could use the latest convenient syntax
public static string ApproverLevelL6 => Level.Get("6");
// Or you could use readonly properties
public static string ApproverLevelL7 { get { return Level.Get("7"); } }
}
public class Level
{
public static string Get(string levelId)
{
//... do logic
return "";
}
}
```
Upvotes: 0 <issue_comment>username_4: If your issue is that you want the getLevel method to be in a different class, you could add a Function into your static class and override it with the method.
```
using System;
using System.Collections.Generic;
using System.Configuration;
namespace SSPWAS.Utilities
{
public class Constants
{
public static Func<string, string> getLevel = x => string.Empty;
// added get accessor to make these read only
public static string ApproverlevelL1 { get; } = getLevel("1");
public static string ApproverlevelL2 { get; } = getLevel("2");
public static string ApproverlevelL3 { get; } = getLevel("3");
public static string ApproverlevelL4 { get; } = getLevel("4");
public static string ApproverlevelL5 { get; } = getLevel("5");
}
public class WhateverClass
{
public string getLevel(string levelID)
{
string levelName;
//logic here
return levelName;
}
// call this before accessing the fields in your Constants class
public void Init()
{
Constants.getLevel = x => getLevel(x);
}
}
}
```
The only reason I can think of to do this is maybe you have two applications using this static class and they need to get the level differently. Maybe one uses actual constant values and another reads a database, etc.
If you don't require this then the simplest answer is to actually put the method into the class as a static method:
```
namespace SSPWAS.Utilities
{
public class Constants
{
public static string getLevel(string levelID)
{
string levelName;
//logic here
return levelName;
}
// added get accessor to make these read only
public static string ApproverlevelL1 { get; } = getLevel("1");
public static string ApproverlevelL2 { get; } = getLevel("2");
public static string ApproverlevelL3 { get; } = getLevel("3");
public static string ApproverlevelL4 { get; } = getLevel("4");
public static string ApproverlevelL5 { get; } = getLevel("5");
}
}
```
Upvotes: 0 |
2018/03/22 | 416 | 1,416 | <issue_start>username_0: I'm using Filemaker API and PHP and Postman to test it. In Postman (and my PHP project) whenever I try to find a record by an email field it returns an error and doesn't find the record if there is a `@` symbol in the query. For example:
```
{
"query":[
{"Contact_Email": "<EMAIL>"}]
}
```
This will return:
```
{
"errorMessage": "No records match the request",
"errorCode": "401"
}
```
But this request:
```
{
"query":[
{"Contact_Email": "john.smith"}]
}
```
Will return the record I am looking for.
What is the issue here? Do I need to escape the `@` symbol? Is this a FileMaker API issue, or something else?<issue_comment>username_1: The @ sign is reserved as an operator within the FM DB engine, so most likely it's bumping heads with that. I'm not sure how to escape it via the API, but testing one of the following should put you on the right path. Try including the quotes as part of the search string:
```
{
"query":[
{"Contact_Email": "\"<EMAIL>\""}]
}
```
or maybe including an equals sign as part of the query:
```
{
"query":[
{"Contact_Email": "=<EMAIL>"}]
}
```
or maybe try \ escaping the @:
```
{
"query":[
{"Contact_Email": "<EMAIL>"}]
}
```
Upvotes: 3 [selected_answer]<issue_comment>username_2: I just used double equal sign ==<EMAIL> and this worked in Postman
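For reference, the request body from the question with that operator would look like this (the `<EMAIL>` placeholder stands in for the real address):
```
{
"query":[
{"Contact_Email": "==<EMAIL>"}]
}
```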
Upvotes: 1 |
2018/03/22 | 1,055 | 4,043 | <issue_start>username_0: I found an issue in vue-router which triggers me a lot.
Always when I switch between my routes, a new instance of the component is created. Furthermore, the old instances are not deleted and keep running in the background!
I would expect that when I open a route, the old components will be destroyed or stop running.
Is there a workaround to fix that issue?
Here is a fiddle: <https://jsfiddle.net/4xfa2f19/5885/>
```
let foo = {
template: 'Foo',
mounted() {
console.log('Mount Foo with uid: ' + this._uid);
setInterval(() => {console.log('Instance ' + this._uid + ' of Foo is running')}, 500);
}
};
let bar = {
template: 'Bar',
mounted() {
console.log('Mount Bar with uid: ' + this._uid);
setInterval(() => {console.log('Instance ' + this._uid + ' of Bar is running')}, 500);
}
};
const router = new VueRouter({
routes: [
{ path: '/user/foo', component: foo },
{ path: '/user/bar', component: bar }
]
});
const app = new Vue({ router }).$mount('#app');
```<issue_comment>username_1: There are 2 ways to solve this problem:
Properly cleaning up in the `beforeDestroy` hook
==========================================
If you use any outside event listeners, like `setInterval`, `addEventListener`, etc you also need to *deregister* them when your component gets destroyed, example:
```
{
name: '...',
template: '...',
data() {
return {
interval: undefined,
timeout: undefined
};
},
mounted() {
this.interval = setInterval(() => {console.log('Instance ' + this._uid + ' of myself is running')}, 500);
this.timeout = setTimeout(() => {console.log('Instance ' + this._uid + ' of myself is running')}, 500);
document.addEventListener('click', this.onOutsideClick);
},
beforeDestroy() {
// Cleanup interval
clearInterval(this.interval);
// Cleanup any pending timeouts
clearTimeout(this.timeout);
// Cleanup any event listeners outside the root of the element
document.removeEventListener('click', this.onOutsideClick);
},
methods: {
onOutsideClick() {
...
}
}
}
```
Using keep-alive to keep the component alive
============================================
When using keep-alive, Vue caches your component and keeps it alive in the background, which means that only one instance will ever exist. This can potentially consume more memory if you have a large number of routes.
```
<keep-alive>
  <router-view></router-view>
</keep-alive>
```
Upvotes: 4 <issue_comment>username_2: >
> Always when I switch between my routes, a new instance of the component is created.
>
>
>
That's expected. You can keep instances alive and re-use them with the `keep-alive` component, but that's usually not necessary and, if you do, it requires special attention to re-initiate all local state of re-used components where necessary.
Creating a fresh instance is much cleaner and therefore the default behaviour.
>
> Further the old instances are not deleted and are running in background!
>
>
>
That's not expected. Previous instances are destroyed.
>
>
> ```
> setInterval(() => {console.log('Instance ' + this._uid + ' of Foo is running')}, 500);
>
> ```
>
>
Well, since this interval callback contains a reference to the component instance, it can't be garbage collected by the browser, so you are keeping them alive, not Vue.
Without that interval, I would expect the instances to be garbage collected after the router destroys them.
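To illustrate with the question's own code, here is a sketch of `foo` that clears its interval before the instance is destroyed, so nothing outside Vue keeps a reference to it:
```
let foo = {
  template: 'Foo',
  mounted() {
    // keep the handle so it can be cleared later
    this.interval = setInterval(() => {console.log('Instance ' + this._uid + ' of Foo is running')}, 500);
  },
  beforeDestroy() {
    // once cleared, the callback (and its reference to `this`) can be garbage collected
    clearInterval(this.interval);
  }
};
```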
Upvotes: 2 <issue_comment>username_3: The same issue is stated here : <https://github.com/vuejs/vuex/issues/1580>
>
> As a workaround you can use transition mode out-in. Rolandoda
>
>
>
```
<transition mode="out-in">
  <router-view></router-view>
</transition>
```
Upvotes: 0 <issue_comment>username_4: In Vue 3, the syntax has changed. The solution to tell Vue Router to keep all the components alive is:
```
<router-view v-slot="{ Component }">
  <keep-alive>
    <component :is="Component" />
  </keep-alive>
</router-view>
```
If you only want to keep a certain one alive, then you can use this:
```
```
Or make sure one isn't kept alive (but all others are), do this:
```
```
Upvotes: 0 |
2018/03/22 | 309 | 1,030 | <issue_start>username_0: HTML:
```
**msg**
```
JS:
```
$('#url').on('change keyup paste',function(){
$('#msg')style.visibility('visible');});
```
How can I do it, please?<issue_comment>username_1: You are mixing both plain JavaScript and jQuery.
With jQuery, just change the CSS property using the `css()` function:
```
$("#msg").css('visibility', 'visible');
```
or with pure Javascript
```
document.getElementById("msg").style.visibility = "visible";
```
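Wired into the event handler from the question, the jQuery version would look roughly like this (a sketch, assuming `#msg` starts out with `visibility: hidden`):
```
$('#url').on('change keyup paste', function () {
  $('#msg').css('visibility', 'visible');
});
```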
Upvotes: 3 <issue_comment>username_2: with jQuery, you can use the `.css` method like so :
```js
$(document).ready(function () {
$('#msg').css('visibility', 'visible');
});
```
```html
**message**
```
Upvotes: 1 <issue_comment>username_3: ```js
$(document).ready(function(){
$('#msg').css('visibility','visible')
//or: initial => Sets this property to its default value
$('#msg').css('visibility','initial')
//or: inherit => Inherits this property from its parent element.
$('#msg').css('visibility','inherit')
})
```
```html
**msg**
```
Upvotes: 0 |
2018/03/22 | 830 | 2,779 | <issue_start>username_0: My database:
```
+----+-------+-------+
| id | slug | text |
+----+-------+-------+
| 1 | link1 | text1 |
| 2 | link2 | text2 |
| 3 | link3 | text3 |
+----+-------+-------+
```
My array (queried from database):
```
Array
(
[0] => Array
(
[id] => 1
[slug] => link1
[text] => text1
)
[1] => Array
(
[id] => 2
[slug] => link2
[text] => text2
)
[2] => Array
(
[id] => 3
[slug] => link3
[text] => text3
)
)
```
I want to write my view like this but I can't get it to work in a loop. (I sent the array for parsing already).
```
{xxxxx}
* [{text}]({slug})
{/xxxxx}
```<issue_comment>username_1: You can save your array in a variable like:
```
$data['blogs'] = $my_result_array_from_model;
```
Send the data to view:
```
$this->parser->parse('blog_template', $data);
```
Then, in your view,
```
{blogs}
* [{text}]({slug})
{/blogs}
```
For reference: Check out [Official CI Documentation](https://www.codeigniter.com/user_guide/libraries/parser.html#parsing-templates)
Upvotes: 3 [selected_answer]<issue_comment>username_2: **Hope this helps you**
```
$data['blog_entries'] = [
'0' => ['id' => 1,'slug' => 'link1','text' => 'text1'],
'1' => ['id' => 2,'slug' => 'link2','text' => 'text2'],
'2' => ['id' => 3,'slug' => 'link3','text' => 'text3']
];
$this->parser->parse('blog_templates', $data);
```
**In your view :**
```
// this is the array to iterate like this
{blog_entries}
##### {id}
{slug}
{text}
{/blog_entries}
```
In the above code you’ll notice a pair of variables: {blog\_entries} data… {/blog\_entries}. In a case like this, the entire chunk of data between these pairs would be repeated multiple times, corresponding to the number of rows in the “blog\_entries” element of the parameters array.
**Result in view :**
```
##### 1
link1
text1
##### 2
link2
text2
##### 3
link3
text3
```
If your “pair” data is coming from a database result, which is already a multi-dimensional array, you can simply use the database `result_array`() method:
```
$query = $this->db->query("SELECT * FROM blog");
$data = array(
'blog_title' => 'My Blog Title',
'blog_heading' => 'My Blog Heading',
'blog_entries' => $query->result_array()
);
$this->parser->parse('blog_template', $data);
```
For More : <https://www.codeigniter.com/user_guide/libraries/parser.html>
Upvotes: 1 |
2018/03/22 | 498 | 1,549 | <issue_start>username_0: I work on Ubuntu Xenial (16.04) with `python3`, I also installed anaconda.
I installed `python3-gammu` (with apt install python3-gammu and/or pip install python3-gammu) to test sending SMS.
Just run python3 console and
```
>>> import gammu
Traceback (most recent call last):
File "", line 1, in
ImportError: No module named 'gammu'
import sys
print(sys.path)
```
only return anaconda paths !
If I run
```
sudo find -iname gammu
…
./usr/lib/python3/dist-packages/gam
…
```
so if I add this path:
```
>>> sys.path.append('/usr/lib/python3/dist-packages/')
>>> import gammu
```
and it works !
Could you clarify this library path issue?<issue_comment>username_1: When you try to import a package, Python checks `sys.path`, which contains the paths of all package locations. If it finds the package you want to import, it imports it.
[Why use sys.path.append(path) instead of sys.path.insert(1, path)?](https://stackoverflow.com/questions/10095037/why-use-sys-path-appendpath-instead-of-sys-path-insert1-path)
You may get more clarity after reading this.
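For illustration, a minimal sketch (the dist-packages path is the one found in the question) that puts the system package directory on `sys.path` before importing:
```
import sys

# Make the system-wide packages visible to the Anaconda interpreter
sys.path.insert(0, '/usr/lib/python3/dist-packages/')

import gammu
```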
Upvotes: 0 <issue_comment>username_2: just
```
export PYTHONPATH=$PYTHONPATH:/usr/lib/python3/dist-packages/
```
To keep it at next reboot, put this line in your ~/.bashrc :
```
# added by Anaconda3 4.2.0 installer
export PATH="/home/my_user_name/anaconda3/bin:$PATH"
export PYTHONPATH="/usr/lib/python3/dist-packages/:$PYTHONPATH"
```
To activate the new .bashrc, do not forget to run
```
source ~/.bashrc
```
Upvotes: 2 [selected_answer] |
2018/03/22 | 347 | 1,180 | <issue_start>username_0: Can anyone recommend an android java library that creates PDF files and support right to left reading order languages such as Arabic and Hebrew?
Or a way to create them using libraries such as PdfBox-Android, apwlibrary ?
iText is ok but 0.5$ per device is too expensive for me. |
2018/03/22 | 577 | 2,017 | <issue_start>username_0: Consider this NodeJS route with the provided query:
```
app.post("/api/test", async function(req, res) {
const findTags = ["banana", "produce", "yellow", "organic", "fruit"];
Product.find({
tags: {
$size: findTags.length,
$in: findTags
}
})
.sort({ date: "descending" })
.limit(20)
.exec(function(err, docs) {
console.log("Query found these products: ");
console.log(docs);
});
});
```
Inside of my MongoDB, I have the following two entries:
```
{
name: "Banana",
"tags": [
"banana",
" produce",
" yellow",
" organic",
" fruit"
]
},
{
name: "Blueberry",
"tags": [
"organic",
" produce",
" blue",
" blueberries",
" fruit"
]
}
```
I expected the query to return only `Banana` back
**Actual Result:** Both `Banana`, and `Blueberry` were returned back.
So I tried replacing `$in` with `$all`, and all I get back is an empty array. Meaning it hasn't found anything! How is this possible?
**What I'm trying to do:** I want to find all `Products` that have *at least* all of the `findTags`. It's okay if the product has 100 more tags as long as it has all of the tags I'm looking for.
How would I change my query to make that happen?<issue_comment>username_1: You have to write the query like this. You also have to remove the extra spaces from the tags in your db, or else query for the tags with the spaces included.
```
var tagsArr = ["banana", "produce", "yellow", "organic", "fruit"];
var query = {tags: {$all: tagsArr } };
Product.find(query).sort({ date: "descending" })
.limit(20)
.exec(function(err, docs) {
console.log("Query found these products: ");
console.log(docs);
});
```
Upvotes: -1 <issue_comment>username_2: `$all` does not work for you, because `tags` in the documents does not match `tags` from the query.
Some `tags` contain a leading space (e.g. ` produce`) in the documents.
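For example, once the stored tags are trimmed (or the query values include the leading space), an `$all` query based on the code in the question should return only `Banana` (sketch):
```
Product.find({
  tags: { $all: ["banana", "produce", "yellow", "organic", "fruit"] }
})
  .limit(20)
  .exec(function(err, docs) {
    console.log(docs); // expected: only the Banana document
  });
```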
Upvotes: 2 [selected_answer] |
2018/03/22 | 466 | 1,852 | <issue_start>username_0: I am getting the phone contacts and problems are given but when I am using the async task the application stops working.
This is my error:
>
> java.lang.IllegalArgumentException: Can't have a viewTypeCount < 1
>
>
>
I am calling the async task in the onCreate method like so:
`new displayContacts().execute();`
What am I doing wrong?<issue_comment>username_1: I guess getViewTypeCount returns the number of different types of views this adapter can return; in your case it should just return 1.
```js
public int getItemViewType(int position) {
return 0;
}
public int getViewTypeCount() {
return 1;
}
```
Upvotes: 1 [selected_answer]<issue_comment>username_2: The issue you have is related to your `Adapter` not your `AsyncTask`..
If you override [`getViewTypeCount()`](https://developer.android.com/reference/android/widget/Adapter.html#getViewTypeCount()) in your adapter you should make sure you return at least 1. This method tells the list how many view types it should have.. in your case I guess it'll be one type, but instead you're using `getCount()`, which eventually could return 0.
```
public int getViewTypeCount() {
// if you have more than 1 view type than make sure it's > 1
return 1;
}
```
You're also querying `ContactsContract.CommonDataKinds.Phone`, which contains the phone numbers.. so if a user has 3 entries in that table your list will contain the same name 3 times.. you should query `ContactsContract.Contacts` instead.
Refer to this [Documentation](https://developer.android.com/guide/topics/providers/contacts-provider.html) to better understand the contact provider.
You find here [how to retrieve contacts list](https://developer.android.com/training/contacts-provider/retrieve-names.html) according to the official documentation.
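For illustration, a minimal sketch of querying `ContactsContract.Contacts` for display names (it assumes you are inside an Activity and that the `READ_CONTACTS` permission has already been granted):
```
Cursor cursor = getContentResolver().query(
        ContactsContract.Contacts.CONTENT_URI,
        new String[] { ContactsContract.Contacts.DISPLAY_NAME },  // projection
        null, null,                                               // no selection
        ContactsContract.Contacts.DISPLAY_NAME + " ASC");         // sort by name
if (cursor != null) {
    while (cursor.moveToNext()) {
        String name = cursor.getString(0);
        // add the name to your adapter's data set here
    }
    cursor.close();
}
```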
Upvotes: -1 |
2018/03/22 | 620 | 2,486 | <issue_start>username_0: For example, if I have a class such as this:
```
public class TestClass {
public IDictionary>> Variable {get; set;}
}
```
and I decided I need to use this type elsewhere, but don't want to keep using `IDictionary>>` everywhere as it seems a bit smelly.. how "safe" is it for me to refactor the code by adding 2 more classes that look like this:
```
public class SubClass1 : Dictionary {
}
public class SubClass2 : List>{
}
```
and changing my original class to look like this:
```
public class TestClass {
public SubClass1 Variable {get; set;}
}
```
when there are potentially bits of code outside of my control (this DLL is referenced by other projects) that may be trying to use the original types? My assumption is that it will just be able to cast it naturally since it's basically just the same code written in a slightly different way.. but are there any caveats to doing this I should be aware of, or is my assumption that this is safe just plain wrong? |
2018/03/22 | 3,788 | 13,462 | <issue_start>username_0: When I am trying to generate an APK, I am getting the below error. How do I solve it?
Error:Execution failed for task ':app:transformClassesWithMultidexlistForDebug'.
>
> java.io.IOException: Wrong classpath: File "/Users/gowthamichintha/Downloads/20" not found
>
>
>
This is my gradle console
```
Executing tasks: [:app:assembleDebug]
Configuration 'compile' in project ':app' is deprecated. Use 'implementation' instead.
Configuration 'androidTestCompile' in project ':app' is deprecated. Use 'androidTestImplementation' instead.
Configuration 'testCompile' in project ':app' is deprecated. Use 'testImplementation' instead.
registerResGeneratingTask is deprecated, use registerGeneratedFolders(FileCollection)
registerResGeneratingTask is deprecated, use registerGeneratedFolders(FileCollection)
:app:preBuild UP-TO-DATE
:app:preDebugBuild UP-TO-DATE
:app:compileDebugAidl UP-TO-DATE
:app:compileDebugRenderscript UP-TO-DATE
:app:checkDebugManifest UP-TO-DATE
:app:generateDebugBuildConfig UP-TO-DATE
:app:prepareLintJar UP-TO-DATE
:app:generateDebugResValues UP-TO-DATE
:app:generateDebugResources UP-TO-DATE
:app:processDebugGoogleServices
Parsing json file: /Users/gowthamichintha/Downloads/20:03:2018 android/app/google-services.json
:app:mergeDebugResources UP-TO-DATE
:app:createDebugCompatibleScreenManifests UP-TO-DATE
:app:processDebugManifest UP-TO-DATE
:app:splitsDiscoveryTaskDebug UP-TO-DATE
:app:processDebugResources UP-TO-DATE
:app:generateDebugSources UP-TO-DATE
:app:javaPreCompileDebug
:app:compileDebugJavaWithJavac
Note: Some input files use or override a deprecated API.
Note: Recompile with -Xlint:deprecation for details.
Note: Some input files use unchecked or unsafe operations.
Note: Recompile with -Xlint:unchecked for details.
:app:compileDebugNdk NO-SOURCE
:app:compileDebugSources
:app:mergeDebugShaders
:app:compileDebugShaders UP-TO-DATE
:app:generateDebugAssets UP-TO-DATE
:app:mergeDebugAssets
:app:transformClassesWithDexBuilderForDebug
:app:transformClassesWithMultidexlistForDebug FAILED
FAILURE: Build failed with an exception.
* What went wrong:
Execution failed for task ':app:transformClassesWithMultidexlistForDebug'.
> java.io.IOException: Wrong classpath: File "/Users/gowthamichintha/Downloads/20" not found
* Try:
Run with --stacktrace option to get the stack trace. Run with --info or --debug option to get more log output.
* Get more help at https://help.gradle.org
BUILD FAILED in 12s
20 actionable tasks: 7 executed, 13 up-to-date
```
--stacktrace
```
* What went wrong:
Execution failed for task ':app:transformClassesWithMultidexlistForDebug'.
> java.io.IOException: Wrong classpath: File "/Users/gowthamichintha/Downloads/20" not found
* Try:
Run with --info or --debug option to get more log output.
* Exception is:
org.gradle.api.tasks.TaskExecutionException: Execution failed for task ':app:transformClassesWithMultidexlistForDebug'.
at org.gradle.api.internal.tasks.execution.ExecuteActionsTaskExecuter.executeActions(ExecuteActionsTaskExecuter.java:100)
at org.gradle.api.internal.tasks.execution.ExecuteActionsTaskExecuter.execute(ExecuteActionsTaskExecuter.java:70)
at org.gradle.api.internal.tasks.execution.SkipUpToDateTaskExecuter.execute(SkipUpToDateTaskExecuter.java:63)
at org.gradle.api.internal.tasks.execution.ResolveTaskOutputCachingStateExecuter.execute(ResolveTaskOutputCachingStateExecuter.java:54)
at org.gradle.api.internal.tasks.execution.ValidatingTaskExecuter.execute(ValidatingTaskExecuter.java:58)
at org.gradle.api.internal.tasks.execution.SkipEmptySourceFilesTaskExecuter.execute(SkipEmptySourceFilesTaskExecuter.java:88)
at org.gradle.api.internal.tasks.execution.ResolveTaskArtifactStateTaskExecuter.execute(ResolveTaskArtifactStateTaskExecuter.java:52)
at org.gradle.api.internal.tasks.execution.SkipTaskWithNoActionsExecuter.execute(SkipTaskWithNoActionsExecuter.java:52)
at org.gradle.api.internal.tasks.execution.SkipOnlyIfTaskExecuter.execute(SkipOnlyIfTaskExecuter.java:54)
at org.gradle.api.internal.tasks.execution.ExecuteAtMostOnceTaskExecuter.execute(ExecuteAtMostOnceTaskExecuter.java:43)
at org.gradle.api.internal.tasks.execution.CatchExceptionTaskExecuter.execute(CatchExceptionTaskExecuter.java:34)
at org.gradle.execution.taskgraph.DefaultTaskGraphExecuter$EventFiringTaskWorker$1.run(DefaultTaskGraphExecuter.java:248)
at org.gradle.internal.progress.DefaultBuildOperationExecutor$RunnableBuildOperationWorker.execute(DefaultBuildOperationExecutor.java:336)
at org.gradle.internal.progress.DefaultBuildOperationExecutor$RunnableBuildOperationWorker.execute(DefaultBuildOperationExecutor.java:328)
at org.gradle.internal.progress.DefaultBuildOperationExecutor.execute(DefaultBuildOperationExecutor.java:197)
at org.gradle.internal.progress.DefaultBuildOperationExecutor.run(DefaultBuildOperationExecutor.java:107)
at org.gradle.execution.taskgraph.DefaultTaskGraphExecuter$EventFiringTaskWorker.execute(DefaultTaskGraphExecuter.java:241)
at org.gradle.execution.taskgraph.DefaultTaskGraphExecuter$EventFiringTaskWorker.execute(DefaultTaskGraphExecuter.java:230)
at org.gradle.execution.taskgraph.DefaultTaskPlanExecutor$TaskExecutorWorker.processTask(DefaultTaskPlanExecutor.java:124)
at org.gradle.execution.taskgraph.DefaultTaskPlanExecutor$TaskExecutorWorker.access$200(DefaultTaskPlanExecutor.java:80)
at org.gradle.execution.taskgraph.DefaultTaskPlanExecutor$TaskExecutorWorker$1.execute(DefaultTaskPlanExecutor.java:105)
at org.gradle.execution.taskgraph.DefaultTaskPlanExecutor$TaskExecutorWorker$1.execute(DefaultTaskPlanExecutor.java:99)
at org.gradle.execution.taskgraph.DefaultTaskExecutionPlan.execute(DefaultTaskExecutionPlan.java:625)
at org.gradle.execution.taskgraph.DefaultTaskExecutionPlan.executeWithTask(DefaultTaskExecutionPlan.java:580)
at org.gradle.execution.taskgraph.DefaultTaskPlanExecutor$TaskExecutorWorker.run(DefaultTaskPlanExecutor.java:99)
at org.gradle.internal.concurrent.ExecutorPolicy$CatchAndRecordFailures.onExecute(ExecutorPolicy.java:63)
at org.gradle.internal.concurrent.ManagedExecutorImpl$1.run(ManagedExecutorImpl.java:46)
at org.gradle.internal.concurrent.ThreadFactoryImpl$ManagedThreadRunnable.run(ThreadFactoryImpl.java:55)
Caused by: java.lang.RuntimeException: java.io.IOException: Wrong classpath: File "/Users/gowthamichintha/Downloads/20" not found
at com.android.builder.profile.Recorder$Block.handleException(Recorder.java:55)
at com.android.builder.profile.ThreadRecorder.record(ThreadRecorder.java:104)
at com.android.build.gradle.internal.pipeline.TransformTask.transform(TransformTask.java:213)
at org.gradle.internal.reflect.JavaMethod.invoke(JavaMethod.java:73)
at org.gradle.api.internal.project.taskfactory.DefaultTaskClassInfoStore$IncrementalTaskAction.doExecute(DefaultTaskClassInfoStore.java:173)
at org.gradle.api.internal.project.taskfactory.DefaultTaskClassInfoStore$StandardTaskAction.execute(DefaultTaskClassInfoStore.java:134)
at org.gradle.api.internal.project.taskfactory.DefaultTaskClassInfoStore$StandardTaskAction.execute(DefaultTaskClassInfoStore.java:121)
at org.gradle.api.internal.tasks.execution.ExecuteActionsTaskExecuter$1.run(ExecuteActionsTaskExecuter.java:122)
at org.gradle.internal.progress.DefaultBuildOperationExecutor$RunnableBuildOperationWorker.execute(DefaultBuildOperationExecutor.java:336)
at org.gradle.internal.progress.DefaultBuildOperationExecutor$RunnableBuildOperationWorker.execute(DefaultBuildOperationExecutor.java:328)
at org.gradle.internal.progress.DefaultBuildOperationExecutor.execute(DefaultBuildOperationExecutor.java:197)
at org.gradle.internal.progress.DefaultBuildOperationExecutor.run(DefaultBuildOperationExecutor.java:107)
at org.gradle.api.internal.tasks.execution.ExecuteActionsTaskExecuter.executeAction(ExecuteActionsTaskExecuter.java:111)
at org.gradle.api.internal.tasks.execution.ExecuteActionsTaskExecuter.executeActions(ExecuteActionsTaskExecuter.java:92)
... 27 more
Caused by: java.io.IOException: Wrong classpath: File "/Users/gowthamichintha/Downloads/20" not found
at com.android.multidex.Path.(Path.java:60)
at com.android.multidex.MainDexListBuilder.(MainDexListBuilder.java:113)
at com.android.build.gradle.internal.transforms.MainDexListTransform.callDx(MainDexListTransform.java:292)
at com.android.build.gradle.internal.transforms.MainDexListTransform.computeList(MainDexListTransform.java:264)
at com.android.build.gradle.internal.transforms.MainDexListTransform.transform(MainDexListTransform.java:186)
at com.android.build.gradle.internal.pipeline.TransformTask$2.call(TransformTask.java:222)
at com.android.build.gradle.internal.pipeline.TransformTask$2.call(TransformTask.java:218)
at com.android.builder.profile.ThreadRecorder.record(ThreadRecorder.java:102)
... 39 more
Caused by: java.io.FileNotFoundException: File "/Users/gowthamichintha/Downloads/20" not found
at com.android.multidex.Path.getClassPathElement(Path.java:45)
at com.android.multidex.Path.(Path.java:58)
... 46 more
```
--app level gradle
```
apply plugin: 'com.android.application'
android {
compileSdkVersion 26
buildToolsVersion "26.0.2"
defaultConfig {
applicationId "com.abc.bca.bca"
minSdkVersion 15
targetSdkVersion 26
versionCode 1
versionName "1.0"
testInstrumentationRunner "android.support.test.runner.AndroidJUnitRunner"
multiDexEnabled true
}
buildTypes {
release {
minifyEnabled true
proguardFiles getDefaultProguardFile('proguard-android.txt'), 'proguard-rules.pro'
}
}
}
android {
defaultConfig {
// Enabling multidex support.
multiDexEnabled true
}
}
dependencies {
compile fileTree(dir: 'libs', include: ['*.jar'])
androidTestCompile('com.android.support.test.espresso:espresso-core:2.2.2', {
exclude group: 'com.android.support', module: 'support-annotations'
})
compile 'com.google.firebase:firebase-firestore:11.8.0'
compile 'com.google.firebase:firebase-firestore:11.8.0'
compile 'com.android.support:appcompat-v7:26.+'
compile 'com.android.support:design:26.+'
compile 'com.android.support.constraint:constraint-layout:1.0.2'
compile 'com.google.firebase:firebase-auth:11.8.0'
compile 'de.hdodenhof:circleimageview:2.1.0'
compile 'com.google.firebase:firebase-database:11.8.0'
compile 'com.google.firebase:firebase-storage:11.8.0'
compile 'com.google.firebase:firebase-core:11.8.0'
compile 'com.github.bumptech.glide:glide:4.1.1'
compile 'com.facebook.android:facebook-android-sdk:4.6.0'
compile 'com.google.android.gms:play-services-auth:11.8.0'
compile 'me.dm7.barcodescanner:zxing:1.9'
compile 'com.google.android.gms:play-services-maps:11.8.0'
compile 'com.google.android.gms:play-services-location:11.8.0'
compile 'com.google.firebase:firebase-messaging:11.8.0'
compile 'com.android.support:support-v4:26.+'
compile 'com.android.support:cardview-v7:26.1.0'
testCompile 'junit:junit:4.12'
}
apply plugin: 'com.google.gms.google-services'
```
-- project level gradle
```
buildscript {
repositories {
jcenter()
}
dependencies {
classpath 'com.android.tools.build:gradle:3.0.1'
classpath 'com.google.gms:google-services:3.1.0'
classpath 'com.google.firebase:firebase-plugins:1.1.5'
// NOTE: Do not place your application dependencies here; they belong
// in the individual module build.gradle files
}
}
allprojects {
repositories {
mavenCentral()
google()
jcenter()
maven {
url "https://maven.google.com" // Google's Maven repository
}
}
}
task clean(type: Delete) {
delete rootProject.buildDir
}
```
Thanks in advance. |
2018/03/22 | 600 | 2,254 | <issue_start>username_0: Here is my problem:
In my WPF application I have a `MyBaseControl` (derived from `System.Windows.Controls.ContentControl`) and a lot of `MyCustomControls` which derive from `MyBaseControl`. I need to do some storing and cleanup operations for all my `MyCustomControls` before the application is closed.
Here is some code:
```
public abstract class MyBaseControl : ContentControl
{
// Do some smart stuff.
}
App.Exit += new System.Windows.ExitEventHandler(App.App_OnExit);
```
In `App_OnExit()` I do the really last operations that need to be done.
I tried to do my cleanup operations in the destructor of `MyBaseControl` but this is called after `App_OnExit()`. Same problem with `AppDomain.CurrentDomain.ProcessExit`.
The `ContentControl.Closed` and `ContentControl.Unloaded` events don't occur when I exit the application via ALT+F4.
Where can I hook in to do my cleanup operations?<issue_comment>username_1: >
> Where can I hook in to do my cleanup operations?
>
>
>
In a `Closing` event handler for the parent window of the control:
```
public abstract class MyBaseControl : ContentControl
{
public MyBaseControl()
{
Loaded += MyBaseControl_Loaded;
}
private void MyBaseControl_Loaded(object sender, RoutedEventArgs e)
{
Window parentWindow = Window.GetWindow(this);
parentWindow.Closing += ParentWindow_Closing;
Loaded -= MyBaseControl_Loaded;
}
private void ParentWindow_Closing(object sender, System.ComponentModel.CancelEventArgs e)
{
//cleanup...
}
}
```
Upvotes: 3 [selected_answer]<issue_comment>username_2: You can put it to your application class:
```
public delegate void TimeToCleanUpEventHandler();
public event TimeToCleanUpEventHandler TimeToCleanUp;
```
modify Exit event handler:
```
App.Current.Exit += ((o, e) => { TimeToCleanUp?.Invoke(); App.App_OnExit(o, e); });
```
and modify your base control:
```
public abstract class MyBaseControl : ContentControl
{
public MyBaseControl()
{
(App.Current as MyApp).TimeToCleanUp += CleanItUp;
}
public virtual void CleanItUp()
{
(App.Current as MyApp).TimeToCleanUp -= CleanItUp;
//do stuff;
}
}
```
Upvotes: 0 |
2018/03/22 | 484 | 1,876 | <issue_start>username_0: I'm currently working through some old code and I started to wonder, is there actually any difference between:
```
public class XmlExport : IXmlExport
{
private readonly IJobRepository jobRepository = new JobRepository();
}
```
and
```
public class XmlExport : IXmlExport
{
private readonly IJobRepository jobRepository;
public XmlExport()
{
jobRepository = new JobRepository();
}
}
```
Are there any advantages to initializing inside the constructor? Or is it just the same code? |
2018/03/22 | 571 | 2,069 | <issue_start>username_0: I upgraded to Font Awesome 5 (FA 5) from version 4.7. The reason was the layered icons.
In FA 4.7, `fa-stack` classes were used. In FA 5, fa-layers are much more powerful.
The problem is that, as far as I can see, `fa-layers` is only implemented in the pure JS version of Font Awesome (using fontawesome-all.js). If you use the CSS version, the `fa-layers` class does not appear anywhere in the folder structure (in the current version, 5.0.8). Is it possible to use fa-layers with the CSS version of FA 5?
By css version I mean this:
```
```
By JS version, I mean this:
```
```
Since fontawesome-all.js replaces all i tags with svg, CSS manipulation is difficult with this version. So, if the CSS version has all the features that the JS version has, I would like to use the CSS version of FA 5.<issue_comment>username_1: No, Webfonts with CSS does not have all of the features that SVG with JS has. The [How to Use SVG with JS](https://fontawesome.com/how-to-use/svg-with-js) page shows some of the features that are new or exclusive to SVG with JS. [Layers](https://fontawesome.com/how-to-use/svg-with-js#layering), specifically, are new to SVG with JS:
>
> Layers are the new way to place icons and text visually on top of each
> other, replacing our classic icons stacks.
>
>
>
You can still use stacks in Webfonts with CSS to do some interesting things: [codepen example](https://codepen.io/fontawesome/pen/rvvKoY/)
```
```
But stacks are definitely not as powerful as Layers with Power Transforms, which are only available in SVG with JS.
Upvotes: 3 <issue_comment>username_2: I prefer using the Web Fonts version as well, mostly for performance reasons. I also wanted to use the more advanced layered icons with the `fa-layers` class.
I recreated some of the `fa-layers` functionality in CSS and put the stylesheet [up on GitHub](https://github.com/cityssm/fa5-power-transforms-css). It's not perfect or complete. I'm still working on it, but it might help you get some of the missing functionality without switching away from Web Fonts and CSS.
Upvotes: 2 |
2018/03/22 | 243 | 1,073 | <issue_start>username_0: I have a question/opinion that needs experts suggestion.
I have a table called ***config*** that contains some configuration information, as the table name suggests. I need these details to be accessible from all the executors during my job's life cycle. So my first option is ***broadcasting*** them as a List[Case Class]. But then I got the idea of registering the ***config*** table as a temp table using `registerTempTable()` and using it across my job.
Can this temp table approach be used as an alternative to broadcast variables (I have extensive hands-on experience with broadcasting)?<issue_comment>username_1: If you registerTempTable() and then use it for lookups, Spark will mostly use a broadcast join internally anyway, given that the table/config file size is < 10MB.
Upvotes: 0 <issue_comment>username_2: `registerTempTable` just gives you the possibility to run plain SQL queries on your dataframe; there is no performance benefit/caching/materialization involved.
You should go with broadcasting (I would suggest using a `Map` for the configuration parameters).
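A rough sketch of that approach in Scala (names such as `spark`, `someDataset`, `someKey`, and the key/value column positions are assumptions, not from the question):
```
// Collect the small config table once on the driver and broadcast it as a Map
val configMap: Map[String, String] = spark.table("config")
  .collect()
  .map(row => row.getString(0) -> row.getString(1))
  .toMap

val configBroadcast = spark.sparkContext.broadcast(configMap)

// Executors read from the broadcast value instead of re-querying the table
val enriched = someDataset.map { record =>
  (record, configBroadcast.value.getOrElse("someKey", "default"))
}
```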
Upvotes: 1 |
2018/03/22 | 436 | 1,512 | <issue_start>username_0: We are trying to calculate the running total for each year, ordered by person, based on their latest date info. Here is an example of how the data is ordered:
[](https://i.stack.imgur.com/79QNU.png)
Expected result:
[](https://i.stack.imgur.com/cUJAv.png)
So for each downloaded date we want the running total of all persons, ordered by year (currently the only year is 2018).
What do we have so far:
```
sum(Amount)
over(partition by [Year],[Person]
order by [Enddate)
where max(Downloaded)
```
Any idea how to fix this?<issue_comment>username_1: Try using a subquery with a moving downloaded date range.
```
SELECT
T.*,
RunningTotalByDate = (
SELECT
SUM(N.Amount)
FROM
YourTable AS N
WHERE
N.Downloaded <= T.Downloaded)
FROM
YourTable AS T
ORDER BY
T.Downloaded ASC,
T.Person ASC
```
Or with a windowed `SUM()`. Do not include a `PARTITION BY`, because it will reset the sum when the partitioned-by column value changes.
```
SELECT
T.*,
RunningTotalByDate = SUM(T.Amount) OVER (ORDER BY T.Downloaded ASC)
FROM
YourTable AS T
ORDER BY
T.Downloaded ASC,
T.Person ASC
```
Upvotes: 0 <issue_comment>username_2: Just use *window function*
```
select *,
sum(Amount) over (partition by Year, Downloaded) RuningTotal
from table t
```
Upvotes: 1 |
2018/03/22 | 844 | 3,248 | <issue_start>username_0: I have a class representing sensors in a plant. For historical reasons, similar objects (that are therefore represented by the same class) have a different identification:
* Some have a name (ie. "north-west-gummy-bear")
* Some have an areaId, and a sensorId
In order to accommodate this, I use an empty interface:
```
public class sensor
{
ISensorIdentifier id{get;set;}
}
public interface ISensorIdentifier{
}
public class namedSensorID:ISensorIdentifier{
string name{get;set;}
}
public class idSensorID:ISensorIdentifier{
int areaID{get;set;}
int sensorID{get;set;}
}
```
This allows me to use the same class for objects with a different identification system.
It is my understanding that empty interfaces are a code smell, and that I should use custom attributes instead.
However, after reading about custom attributes, I have no idea where to start. Indeed, I could use a custom attribute 'sensorIdentifier' instead of the empty interface, but how should I type the `id` property in the `sensor` class?<issue_comment>username_1: Well you can consider that a sensor has one unique valid identifier information (using c#7 native tuple support):
```
(Name, AreaId, SensorId)
```
Your business logic should enforce that valid id information must be:
```
(Name, null, null)
```
Or
```
(null, AreaId, SensorId)
```
Anything else is not valid. Ok, lets build a base class that enforces this:
```
public abstract class Sensor
{
private readonly string name;
private readonly int? areaId, sensorId;
protected Sensor(string name)
{
this.name = name;
}
protected Sensor(int areaId, int sensorId)
{
this.areaId = areaId;
this.sensorId = sensorId;
}
public (string Name, int? AreaId, int? SensorId) Id
{
get
{
Debug.Assert(
(name != null && !(areaId.HasValue || sensorId.HasValue)) ||
(name == null && (areaId.HasValue && sensorId.HasValue)));
return (name, areaId, sensorId);
}
}
}
```
Your specific sensor implementations are trivial, extending `Sensor`:
```
public class NamedSensor: Sensor
{
public NamedSensor(string name)
:base(name)
{ }
}
public class IdSensor: Sensor
{
public IdSensor(int areaId, int sensorId)
:base(areaId, sensorId)
{ }
}
```
And you can happily work with an `IEnumerable<Sensor>`.
Upvotes: 1 <issue_comment>username_2: I propose that you add a unique identifier in your empty interface which should be implemented in each of your concrete classes:
```
public interface ISensorIdentifier
{
string UniqueSensorId { get; }
}
```
You should simply ensure that these implementations in the different sub classes make this new ID unique. Here is a quick proposal:
```
public class namedSensorID : ISensorIdentifier
{
public string UniqueSensorId { get { return nameof(namedSensorID) + name; } }
string name { get; set; }
}
public class idSensorID : ISensorIdentifier
{
int areaID { get; set; }
int sensorID { get; set; }
public string UniqueSensorId { get { return nameof(idSensorID) + areaID + sensorID; } }
}
```
Upvotes: 0 |
2018/03/22 | 1,709 | 5,665 | <issue_start>username_0: I have a redux thunk action that fetches some data and then dispatches some actions (not shown in the code here, but you'll be able to find it in the demo link below)
```
export const fetchPosts = (id: string) => (dispatch: Dispatch) => {
return fetch('http://example.com').then(
response => {
return response.json().then(json => {
return "Success message";
});
},
err => {
throw err;
}
);
};
```
and then in my component I use `mapDispatchToProps` with `bindActionCreators` to call this function like so:
```
public fetchFunc() {
this.props.fetchPosts("test").then(
res => {
console.log("Res from app", res);
},
err => {
console.log("Err from app", err);
}
);
}
```
Since I am using typescript, I need to define the type of this function in the Props
```
interface IProps {
name?: string;
posts: IPost[];
loading: boolean;
fetchPosts: (id: string) => Promise;
}
```
If I do it like above, TypeScript will complain that I should do it like this:
```
fetchPosts: (id: string) => (dispatch: Dispatch) => Promise;
```
If I do it like this, then TypeScript complains when I use `then` in my component, saying that the function is not a promise.
I created a demo where you can fiddle with the code
Pressing "Load from remote" will sometimes fail just to see if the promise:
<https://codesandbox.io/s/v818xwl670><issue_comment>username_1: Basically, in Typescript, the generic type of the promise will be inferred from the `resolve` only.
For example
```
function asyncFunction() {
return new Promise((resolve, reject) => {
const a = new Foo();
resolve(a);
})
}
```
`asyncFunction`'s return type will be inferred as `Promise<Foo>`.
You just need to remove `Error` as a union type in your type to get the proper type definition:
`fetchPosts: (id: string) => (dispatch: Dispatch) => Promise;`
Upvotes: 2 <issue_comment>username_2: The problem is the call to `bindActionCreators` in `mapDispatchToProps`. At runtime `bindActionCreators` basically transforms this `(id: string) => (dispatch: Dispatch) => Promise;` into this `(id: string) => Promise;`, but the type for `bindActionCreators` does not reflect this transformation. This is probably due to the fact that to accomplish this you would need conditional types which until recently were not available.
If we look at [this](https://github.com/reactjs/redux/blob/cbdca6215e1891e97120ac05c971618182455e54/test/typescript/actionCreators.ts#L33) sample usage from the redux repo, we see that they accomplish the transformation by specifying the types of the functions explicitly:
```
const boundAddTodoViaThunk = bindActionCreators<
ActionCreator,
ActionCreator
>(addTodoViaThunk, dispatch)
```
We could do the same in your code, referencing existing types, but this hurts type safety as there is no check that `fetchPosts` in the two types will be correctly typed:
```
const mapDispatchToProps = (dispatch: Dispatch): Partial =>
bindActionCreators<{ fetchPosts: typeof fetchPosts }, Pick>(
{
fetchPosts
},
dispatch
);
```
Or we could use a type assertion since the above method does not really offer any safety anyway:
```
const mapDispatchToProps2 = (dispatch: Dispatch) =>
bindActionCreators({
fetchPosts: fetchPosts as any as ((id: string) => Promise)
}, dispatch );
```
For a truly type safe way to do this we need to use typescript 2.8 and conditional types with a helper function. We can type `bindActionCreators` the way it should, and automatically infer the correct type for the resulting creators:
```
function mybindActionCreators(map: M, dispatch: Dispatch) {
return bindActionCreators }>(map, dispatch);
}
const mapDispatchToProps = (dispatch: Dispatch) =>
mybindActionCreators(
{
fetchPosts
},
dispatch
);
// Helpers
type IsValidArg<T> = T extends object ? keyof T extends never ? false : true : true;
type RemoveDispatch<T> =
    T extends (a: infer A, b: infer B, c: infer C, d: infer D, e: infer E, f: infer F, g: infer G, h: infer H, i: infer I, j: infer J) => (dispatch: Dispatch) => infer R ? (
        IsValidArg<J> extends true ? (a: A, b: B, c: C, d: D, e: E, f: F, g: G, h: H, i: I, j: J) => R :
        IsValidArg<I> extends true ? (a: A, b: B, c: C, d: D, e: E, f: F, g: G, h: H, i: I) => R :
        IsValidArg<H> extends true ? (a: A, b: B, c: C, d: D, e: E, f: F, g: G, h: H) => R :
        IsValidArg<G> extends true ? (a: A, b: B, c: C, d: D, e: E, f: F, g: G) => R :
        IsValidArg<F> extends true ? (a: A, b: B, c: C, d: D, e: E, f: F) => R :
        IsValidArg<E> extends true ? (a: A, b: B, c: C, d: D, e: E) => R :
        IsValidArg<D> extends true ? (a: A, b: B, c: C, d: D) => R :
        IsValidArg<C> extends true ? (a: A, b: B, c: C) => R :
        IsValidArg<B> extends true ? (a: A, b: B) => R :
        IsValidArg<A> extends true ? (a: A) => R :
        () => R
    ) : T;
```
Upvotes: 3 <issue_comment>username_3: Thank you @username_1 and @username_2.
I've found a mix between the two answers you provided.
**1:**
In the props, instead of redeclaring all argument types and return types, I can say that the function has the typeof the original function as so: `fetchPosts: typeof fetchPosts` (Thanks to @titian-cernicova-dragomir)
**2:**
Now I can use that function, but not as a promise. to make that work as a promise, I can use the solution provided by @thai-duong-tran.
```
const fetchPromise = new Promise(resolve => {
this.props.fetchPosts("adidas");
});
```
You can see the working demo here: <https://codesandbox.io/s/zo15pj633>
Upvotes: 2 [selected_answer]<issue_comment>username_4: Try the following:
```
fetchPosts: (id: string) => void;
```
Upvotes: 0 |
2018/03/22 | 2,736 | 8,910 | <issue_start>username_0: I'm running into this code (adapted with dummy data):
```
public Map queryDatabase() {
final Map map = new TreeMap<>();
map.put("one", 1);
map.put("two", 2);
// ...
return map;
}
public Map.Entry getEntry(int n) {
final Map map = queryDatabase();
for (final Map.Entry entry : map.entrySet()) {
if (entry.getValue().equals(n)) return entry; // dummy check
}
return null;
}
```
The `Entry` is then stored into a newly-created object that is saved into a cache for an undefined period:
```
class DataBundle {
Map.Entry entry;
public void doAction() {
this.entry = Application.getEntry(2);
}
}
```
While `queryDatabase` is called multiple times a minute, the local Maps should be discarded at the next gc cycle. I have reason to believe, though, that `DataBundle` keeping an `Entry` reference prevents the `Map` from being collected at all.
Besides, a `java.util.TreeMap.Entry` holds multiple references to siblings:
```
static final class Entry implements Map.Entry {
K key;
V value;
Entry left;
Entry right;
Entry parent;
// ...
}
```
---
**Q:** Does storing a `Map.Entry` into a member field retain the local `Map` instances into memory?<issue_comment>username_1: The contract for [Map.Entry](https://docs.oracle.com/javase/8/docs/api/java/util/Map.Entry.html) does not make any commitments in that area so you should not make any assumptions either.
>
> ... These Map.Entry objects are valid only for the duration of the iteration; ...
>
>
>
For this reason, if you wish to store `Key-Value` pairs derived from a `Map.Entry` then you should take copies.
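For example, one way to take such a copy (a sketch; `AbstractMap.SimpleImmutableEntry` is a standard JDK class for exactly this):
```
import java.util.AbstractMap;
import java.util.Map;

// Copy the key and value out of the live entry so the backing map
// does not stay reachable through the stored reference.
Map.Entry<String, Integer> copy =
        new AbstractMap.SimpleImmutableEntry<>(entry.getKey(), entry.getValue());
```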
Upvotes: 2 [selected_answer]<issue_comment>username_2: I have written a benchmarking application and the results clearly demonstrate that the JVM is **not able to collect the local `Map` instances** if an `Entry` reference is kept alive.
This applies only to `TreeMap`s though and the reason might be that a `TreeMap.Entry` holds different references to its siblings.
As @username_1 mentions,
>
> you should not make any assumptions [and] if you wish to store Key-Value pairs derived from a Map.Entry then you should take copies
>
>
>
I believe that at this point, if you don't know what you are doing, whatever `Map` you are working with, holding a `Map.Entry` should be considered **evil** and **anti-pattern**.
Always opt for saving copies of `Map.Entry` or directly store the key and value.
---
**Benchmarking technical data:**
JVM
```
java version "1.8.0_152"
Java(TM) SE Runtime Environment (build 1.8.0_152-b16)
Java HotSpot(TM) 64-Bit Server VM (build 25.152-b16, mixed mode)
```
CPU
```
Caption : Intel64 Family 6 Model 158 Stepping 9
DeviceID : CPU0
Manufacturer : GenuineIntel
MaxClockSpeed : 4201
Name : Intel(R) Core(TM) i7-7700K CPU @ 4.20GHz
SocketDesignation : LGA1151
```
RAM
```
Model Name MaxCapacity MemoryDevices
----- ---- ----------- -------------
Physical Memory Array 67108864 4
```
---
**What will the benchmark do?**
* The main program will (pseudo-)query the database `500` times
* A new instance of `DataBundle` will be created at each iteration an will store a random `Map.Entry` by calling `getEntry(int)`
* `queryDatabase` is going to create a local `TreeMap` of `100_000` items at each call and `getEntry` will simply return one `Map.Entry` from the Map.
* All `DataBundle` instances will be stored into an `ArrayList` cache.
* How `DataBundle` stores the `Map.Entry` will be different among the benchmarks, in order to demonstrate the `gc` abilities to perform its duty.
* Every `100` calls to `queryDatabase` the `cache` will be cleared: this is to see the effect of the `gc` in [visualvm](https://visualvm.github.io/)
---
**Benchmark 1: TreeMap and storing Map.Entry - CRASH**
The `DataBundle` class:
```
class DataBundle {
Map.Entry entry = null;
public DataBundle(int i) {
this.entry = Benchmark_1.getEntry(i);
}
}
```
The benchmarking application:
```
public class Benchmark_1 {
static final List CACHE = new ArrayList<>();
static final int MAP_SIZE = 100_000;
public static void main(String[] args) throws InterruptedException {
for (int i = 0; i < 500; i++) {
if (i % 100 == 0) {
System.out.println("Clearing");
CACHE.clear();
}
final DataBundle dataBundle = new DataBundle(new Random().nextInt(MAP_SIZE));
CACHE.add(dataBundle);
Thread.sleep(500); // to observe behavior in visualvm
}
}
public static Map queryDatabase() {
final Map map = new TreeMap<>();
for (int i = 0; i < MAP_SIZE; i++) map.put(String.valueOf(i), i);
return map;
}
public static Map.Entry getEntry(int n) {
final Map map = queryDatabase();
for (final Map.Entry entry : map.entrySet())
if (entry.getValue().equals(n)) return entry;
return null;
}
}
```
The application can't even reach the first `100` iterations (the cache clearing) and it throws a `java.lang.OutOfMemoryError`:
```
Exception in thread "main" java.lang.OutOfMemoryError: Java heap space
at java.lang.Integer.valueOf(Integer.java:832)
at org.payloc.benchmark.Benchmark_1.queryDatabase(Benchmark_1.java:34)
at org.payloc.benchmark.Benchmark_1.getEntry(Benchmark_1.java:38)
at org.payloc.benchmark.DataBundle.<init>(Benchmark_1.java:11)
at org.payloc.benchmark.Benchmark_1.main(Benchmark_1.java:26)
Mar 22, 2018 1:06:41 PM sun.rmi.transport.tcp.TCPTransport$AcceptLoop executeAcceptLoop
WARNING: RMI TCP Accept-0: accept loop for
ServerSocket[addr=0.0.0.0/0.0.0.0,localport=31158] throws
java.lang.OutOfMemoryError: Java heap space
at java.net.NetworkInterface.getAll(Native Method)
at java.net.NetworkInterface.getNetworkInterfaces(NetworkInterface.java:343)
at sun.management.jmxremote.LocalRMIServerSocketFactory$1.accept(LocalRMIServerSocketFactory.java:86)
at sun.rmi.transport.tcp.TCPTransport$AcceptLoop.executeAcceptLoop(TCPTransport.java:400)
at sun.rmi.transport.tcp.TCPTransport$AcceptLoop.run(TCPTransport.java:372)
at java.lang.Thread.run(Thread.java:748)
*** java.lang.instrument ASSERTION FAILED ***: "!errorOutstanding" with message can't create byte arrau at JPLISAgent.c line: 813
```
The `visualvm` graph clearly shows how the memory is retained despite the `gc` performing some activity in the background, ergo: **holding a single `Entry` keeps the entire `Map` in the heap.**
[](https://i.stack.imgur.com/gj8pi.png)
[](https://i.stack.imgur.com/2jhq7.png)
**Benchmark 2: HashMap and storing Map.Entry - PASS**
Very same program, but instead of a `TreeMap` I'm using a `HashMap`.
The `gc` is able to collect the local `Map` instances despite the stored `Map.Entry` being kept in memory *(if you print the `cache` contents after the benchmark you will in fact see actual values)*.
`visualvm` graphs:
[](https://i.stack.imgur.com/x3Ya2.png)
[](https://i.stack.imgur.com/5wLUz.png)
The application does not throw any memory error.
**Benchmark 3: TreeMap and storing only key and value (no Map.Entry) - PASS**
Still using a `TreeMap`, but this time, instead of the `Map.Entry`, I will be storing the key and the value data directly.
```
class DataBundle3 {
String key;
Integer value;
public DataBundle3(int i) {
Map.Entry<String, Integer> e = Benchmark_3.getEntry(i);
this.key = e.getKey();
this.value = e.getValue();
}
}
```
Safe approach, as the application reaches the end correctly and the `gc` periodically cleans the maps.
[](https://i.stack.imgur.com/VgUzY.png)
[](https://i.stack.imgur.com/WR4rH.png)
**Benchmark 4: TreeMap and cache of SoftReference (storing Map.Entry) - PASS**
Not the best solution, but since many caching systems use `java.lang.ref.SoftReference` I'll post a benchmark with it.
So, still using a `TreeMap`, still storing the `Map.Entry` into `DataBundle`, but using a List of `SoftReference`.
So the cache becomes:
```
static final List<SoftReference<DataBundle>> CACHE = new ArrayList<>();
```
and I save the objects via:
```
CACHE.add(new SoftReference<>(dataBundle));
```
The application completes correctly, and the `gc` is free to collect the maps anytime it needs to.
This happens because a `SoftReference` doesn't prevent its `referent` (in our case the `Map.Entry`) from being collected.
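As a consequence (an illustrative sketch building on the benchmark classes above, not part of the original run), any read back from such a cache has to tolerate a referent that has already been collected:

```
// The gc may clear the referent at any time, so null-check before use
DataBundle bundle = CACHE.get(0).get(); // may return null
if (bundle != null) {
    System.out.println(bundle.entry);
}
```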
[](https://i.stack.imgur.com/hvVm1.png)
[](https://i.stack.imgur.com/PCGMC.png)
---
Hope this will be useful for someone.
Upvotes: 2 |
2018/03/22 | 373 | 1,423 | <issue_start>username_0: I am trying to store an array in the Firebase Database, but I'm not sure how to do it properly.
I am wanting to store an array such as:
```
var exampleArray = ["item1", "item2", "item3"]
```
The reason why is I populate a UIPicker with values from an array, and rather than having to push an update to the app each time to add a new value, it would be better if I could just update the database and instantly add the new value to each app.
Is there any way that I could store the values in the database and pull the values from the database and store it into an array as shown above?
Thank you.<issue_comment>username_1: A simple solution would be to read the JSON from a Firebase Database Reference containing the array, and deserialize it in Swift to a native Swift array. See point 2 at <https://firebase.google.com/docs/database/#implementation_path>
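A minimal sketch of that approach (the node name `pickerValues` is a placeholder, not from the question, and it assumes a `DatabaseReference` called `ref` as set up in the answer below):

```
ref.child("pickerValues").observeSingleEvent(of: .value) { snapshot in
    // Decode the stored JSON array into a native Swift array
    if let values = snapshot.value as? [String] {
        self.exampleArray = values
    }
}
```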
Upvotes: 0 <issue_comment>username_2: The Firebase `setValue` method will accept an array just the same as a String or Integer. So you can read and write the values in the same way -
```
var ref: DatabaseReference!
ref = Database.database().reference()
var exampleArray = ["item1", "item2", "item3"]
// Write the array
ref.child("myArray").setValue(exampleArray)
// Read the array
ref.child("myArray").observe(.value) { snapshot in
for child in snapshot.children {
// Add the values to your picker array here
}
}
```
Upvotes: 2 |
2018/03/22 | 594 | 1,704 | <issue_start>username_0: This is my array:
```
data = [{"src": 'a'},
{'src': 'b'},
{'src': 'c'}];
```
But I want to change key like this:
```
data = [{"letter": 'a'},
{'letter': 'b'},
{'letter': 'c'}];
```<issue_comment>username_1: Use `map`
```
var output = data.map( s => ({letter:s.src}) );
```
**Demo**
```js
var data = [{
"src": 'a'
},
{
'src': 'b'
},
{
'src': 'c'
}
];
console.log(data.map(s => ({
letter: s.src
})));
```
But if there are multiple other keys and you only want to change `src` from it then
```
var output = data.map( s => {
if ( s.hasOwnProperty("src") )
{
s.letter = s.src;
delete s.src;
}
return s;
})
```
**Demo**
```js
var data = [{
"src": 'a'
},
{
'src': 'b'
},
{
'src2': 'c'
}
];
var output = data.map(s => {
if (s.hasOwnProperty("src")) {
s.letter = s.src;
delete s.src;
}
return s;
})
console.log(output);
```
Upvotes: 4 <issue_comment>username_2: The easiest way is to use `map` method. Check it out the [documentation](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Map)
```
data.map(function(item) {
return { letter: item.src };
})
```
Upvotes: 3 <issue_comment>username_3: Use array.map
```
data.map(function(d) { return { letter: d.src } })
```
Upvotes: 3 <issue_comment>username_4: ```
var newData = data.map(a => { "letter": a.src })
```
Upvotes: 1 <issue_comment>username_5: With `map` you can achieve what you want. Kindly note that `map` returns a new array and doesn't modify the existing array.
```
data.map((item) => ({ letter: item.src }));
```
Upvotes: 2 |
2018/03/22 | 927 | 2,267 | <issue_start>username_0: I have a data frame in R called info which includes several dates under the column Date, they are ordered in "%Y-%m-%d" I want to only have those values that are less then 6 days apart and remove the "outliers" anyone know how this can be done?
what the data frame looks like
```
'> info
Date ens seps
3 1951-01-08 mem01 2
4 1951-01-12 mem01 4
37 1959-12-08 mem01 4
42 1959-12-30 mem01 3
43 1960-01-01 mem01 2
47 1961-01-03 mem01 2
49 1961-01-18 mem01 2
50 1961-01-20 mem01 2
62 1964-11-29 mem01 4
93 1971-02-12 mem01 2
99 1972-02-15 mem01 2
100 1972-02-18 mem01 3
102 1972-02-21 mem01 2
119 1981-10-16 mem01 3
121 1981-10-19 mem01 2
131 1984-12-24 mem01 2
134 1987-01-02 mem01 2
```
2018/03/22 | 1,392 | 4,799 | <issue_start>username_0: So, I have this pipeline job that builds completely inside a Docker container. The Docker image used is pulled from a local repository before build and has almost all the dependencies required to run my project.
The problem is that I need a way to define volumes to bind mount from the host to the container, so that I can perform some analysis using a tool available on my host system but not in the container.
Is there a way to do this from inside a Jenkinsfile (Pipeline script)?<issue_comment>username_1: Suppose you are under Linux, run the following code
```
docker run -it --rm -v /local_dir:/image_root_dir/mount_dir image_name
```
Here is some detail:
-it: interactive terminal
--rm: remove container after exit the container
-v: volume or say mount your local directory to a volume.
Since the mount will 'cover' the directory in your image, you should always make a new directory under your image's root directory.
Visit [Use bind mounts](https://docs.docker.com/storage/bind-mounts/) to get more information.
ps:
run
```
sudo -s
```
and type the password before you run docker; that saves you a lot of time, since you don't have to type *sudo* in front of *docker* every time you run docker.
ps2:
suppose you have an image with a long name and the image ID is 5ed6274db6ce; you can refer to it by just the first three digits of the ID, or more
```
docker run [options] 5ed
```
if more images share the same first three digits, you can use four or more.
For example, you have following two images
```
REPOSITORY IMAGE ID
My_Image_with_very_long_name 5ed6274db6ce
My_Image_with_very_long_name2 5edc819f315e
```
you can simply run
```
docker run [options] 5ed6
```
to run the image My\_Image\_with\_very\_long\_name.
Upvotes: 0 <issue_comment>username_2: I'm not fully clear on whether this is what you mean, but if it isn't, let me know and I'll try to figure it out.
What I understand of mounting from host to container is mounting the content of the Jenkins Workspace inside the container.
For example in this pipeline:
```
pipeline {
agent { node { label 'xxx' } }
options {
buildDiscarder(logRotator(numToKeepStr: '3', artifactNumToKeepStr: '1'))
}
stages {
stage('add file') {
steps {
sh 'touch myfile.txt'
sh 'ls'
}
}
stage('Deploy') {
agent {
docker {
image 'username_2/aws-cli'
args '-v $WORKSPACE:/project'
reuseNode true
}
}
steps {
sh 'ls'
sh 'aws --version'
}
}
}
post {
always {
cleanWs()
}
}
}
```
In the first stage I just add a file to the workspace. just in Jenkins. Nothing with Docker.
In the second stage I start a docker container which contains the aws CLI (this is not installed on our jenkins slaves). We will start the container and mount the workspace inside the `/project` folder of my container. Now I can execute AWS CLI commands and I have access to the text file. In a later stage (not in the pipeline) you can use the file again in a different container or on the jenkins slave itself.
Output:
```
[Pipeline] {
[Pipeline] stage
[Pipeline] { (add file)
[Pipeline] sh
[test] Running shell script
+ touch myfile.txt
[Pipeline] sh
[test] Running shell script
+ ls
myfile.txt
[Pipeline] }
[Pipeline] // stage
[Pipeline] stage
[Pipeline] { (Deploy)
[Pipeline] getContext
[Pipeline] sh
[test] Running shell script
+ docker inspect -f . username_2/aws-cli
.
[Pipeline] withDockerContainer
FJ Arch Slave 7 does not seem to be running inside a container
$ docker run -t -d -u 201:201 -v $WORKSPACE:/project -w ... username_2/aws-cli cat
$ docker top xx -eo pid,comm
[Pipeline] {
[Pipeline] sh
[test] Running shell script
+ ls
myfile.txt
[Pipeline] sh
[test] Running shell script
+ aws --version
aws-cli/1.14.57 Python/2.7.14 Linux/4.9.78-1-lts botocore/1.9.10
[Pipeline] }
$ docker stop --time=1 3652bf94e933cbc888def1eeaf89e1cf24554408f9e4421fabfd660012a53365
$ docker rm -f 3652bf94e933cbc888def1eeaf89e1cf24554408f9e4421fabfd660012a53365
[Pipeline] // withDockerContainer
[Pipeline] }
[Pipeline] // stage
[Pipeline] stage
[Pipeline] { (Declarative: Post Actions)
[Pipeline] cleanWs
[WS-CLEANUP] Deleting project workspace...[WS-CLEANUP] done
[Pipeline] }
[Pipeline] // stage
[Pipeline] }
[Pipeline] // node
[Pipeline] End of Pipeline
Finished: SUCCESS
```
In your case you can mount your data in the container. Perform the stuff and in a later stage you can do your analysis on your code on your jenkins slave itself (without docker)
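For example (a hedged sketch only; the image name and the host path are placeholders, not taken from the pipeline above), an extra host directory can be bind-mounted next to the workspace through the `args` of the `docker` agent:

```
agent {
    docker {
        image 'my-build-image:latest' // placeholder image
        // mount the workspace plus a host tool directory into the container
        args '-v $WORKSPACE:/project -v /opt/analysis-tool:/opt/analysis-tool'
        reuseNode true
    }
}
```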
Upvotes: 2 [selected_answer] |
2018/03/22 | 1,165 | 3,497 | <issue_start>username_0: I tried to make a demo chat application and I styled a chat bubble but I don't how to achieve the auto size bubble chat. Currently, it has a fixed width and height, and I want the bubble's size to fit the text content.
Here's a scaled down version of what I currently have:
```js
function typo() {
var currentText = document.getElementById("demo").innerHTML;
var x = '<p class=bubble>' + document.getElementById("myText").value + '</p>';
document.getElementById("myText").value = "";
var y = document.getElementById("demo").innerHTML = currentText + x;
}
var input = document.getElementById("myText");
input.addEventListener("keyup", function(event) {
event.preventDefault();
if (event.keyCode === 13) {
document.getElementById("btn-chat").click();
}
});
```
```css
.bubble{
clear: both;
box-sizing: border-box;
position: relative;
width: 200px;
height: 75px;
padding: 4px;
background: #C28584;
border-radius: 10px;
margin : 30px;
z-index: -1;
}
.bubble:after{
content: '';
position: absolute;
border-style: solid;
border-width: 11px 30px 11px 0;
border-color: transparent #C28584;
display: block;
width: 0;
overflow: auto;
left: -30px;
top: 31px;
}
.bottom{
position: absolute;
bottom: 1px;
}
.widebox{
width: 100%;
float: left;
margin-left: -10px;
}
button {
float : right;
margin-left: 250px;
margin-right: -50px;
margin-top : -28px;
}
```
```html
Chat
Send
```
The demo is here, you can see it in action: <https://jsfiddle.net/mzdf0dbw/5/><issue_comment>username_1: In your CSS, change `height: 75px;` to `height:auto;`
Upvotes: 1 <issue_comment>username_2: I have made some changes to your code and achieved it. Please see this [Fiddle](https://jsfiddle.net/yrathod101/65ta4kxo/1/) and let me know if this is the desired output.
```
```
```
.bubble {
clear: both;
box-sizing: border-box;
position: relative;
width: auto;
height: auto;
padding: 4px;
background: #C28584;
border-radius: 10px;
margin: 10px 30px;
display: inline-block;
z-index: -1;
padding-left: 15px;
/\* min-width: 50px; \*/
padding-right: 15px;
}
.bubble:after {
content: '';
position: absolute;
border-style: solid;
border-width: 11px 32px 11px 0;
border-color: transparent #C28584;
display: block;
width: 0;
overflow: auto;
left: -30px;
top: 50%;
transform: translateY(-50%);
}
.bottom {
position: absolute;
bottom: 01px;
}
.widebox {
width: 100%;
float: left;
margin-left: -10px;
}
button {
float: right;
margin-left: 250px;
margin-right: -50px;
margin-top: -28px;
}
Chat
Send
function typo() {
var currentText = document.getElementById("demo").innerHTML;
var x = '<div><p class=bubble>' + document.getElementById("myText").value + '</div></p>';
document.getElementById("myText").value = "";
var y = document.getElementById("demo").innerHTML = currentText + x;
}
var input = document.getElementById("myText");
input.addEventListener("keyup", function(event) {
event.preventDefault();
if (event.keyCode === 13) {
document.getElementById("btn-chat").click();
}
});
```
Upvotes: 1 [selected_answer]<issue_comment>username_3: [This site](https://leaverou.github.io/bubbly/) (bubbly) shows an example of how to achieve the chat bubble using CSS. If you want to experiment with CSS border radius values to get the right bubble "look", try [CSS Borders website](http://cssborders.com).
Upvotes: 0 |
2018/03/22 | 626 | 2,227 | <issue_start>username_0: I have a question to get my customized cell.
I use my customized cell item will get error.
What's wrong with me?
```
let insertIndexPath = IndexPath(row: self.contents.count-1, section: 0)
var cell = self.tableView.cellForRow(at: insertIndexPath)
if cell is MyTextTableViewCell {
cell = MyTextTableViewCell() as MyTextTableViewCell
cell.checkImageView.isHidden = true //error here
cell.warningButton.isHidden = true //error here
}else if cell is MyImageTableViewCell {
cell = MyImageTableViewCell() as MyImageTableViewCell
cell.checkImageView.isHidden = true //error here
cell.warningButton.isHidden = true //error here
}
```
error like this:
>
> Value of type 'UITableViewCell?' has no member 'checkImageView'
>
>
><issue_comment>username_1: You are using custom cell class to ensure whether cell instance belongs to that particular class or not.
Actually you need to conditionally cast the cell to the custom cell class using an `if-let` block.
Try this and see:
```
let insertIndexPath = IndexPath(row: self.contents.count-1, section: 0)
if let cell = self.tableView.cellForRow(at: insertIndexPath) as? MyTextTableViewCell {
cell.checkImageView.isHidden = true
cell.warningButton.isHidden = true
} else if let cell = self.tableView.cellForRow(at: insertIndexPath) as? MyImageTableViewCell {
cell.checkImageView.isHidden = true
cell.warningButton.isHidden = true
} else if let cell = self.tableView.cellForRow(at: insertIndexPath) {
print("cell description -- \(cell.description)")
}
```
Upvotes: 0 <issue_comment>username_2: Do like this :
```
let index = IndexPath(row: self.contents.count-1, section: 0)
if let cell = self.tableView.cellForRow(at: index) as? MyTextTableViewCell {
cell.checkImageView.isHidden = true
cell.warningButton.isHidden = true
} else if let cell = self.tableView.cellForRow(at: insertIndexPath) as? MyImageTableViewCell {
cell.checkImageView.isHidden = true
cell.warningButton.isHidden = true
} else if let cell = self.tableView.cellForRow(at: insertIndexPath) {
print("cell is not from required one -- \(cell.description)")
}
```
Upvotes: -1 |
2018/03/22 | 487 | 1,697 | <issue_start>username_0: This code is from oauth nodesjs.
I want to ask why we are using '{}' around the var google. I also tried using it without '{}' and got the error "OAuth2 is undefined". I can't understand what is happening here.
```
var {google} = require('googleapis');
var OAuth2 = google.auth.OAuth2;
```
2018/03/22 | 1,366 | 3,721 | <issue_start>username_0: Take the below example. To replace one string in one particular column I have done this and it worked fine:
```
df = pd.DataFrame({'key': ['A', 'B', 'C', 'A', 'B', 'C'],
'data1': range(6),
'data2': ['A1', 'B1', 'C1', 'A1', 'B1', 'C1']},
columns = ['key', 'data1', 'data2'])
key data1 data2
0 A 0 A1
1 B 1 B1
2 C 2 C1
3 A 3 A1
4 B 4 B1
5 C 5 C1
df['data2']= df['data2'].str.strip().str.replace("A1","Bad")
key data1 data2
0 A 0 Bad
1 B 1 B1
2 C 2 C1
3 A 3 Bad
4 B 4 B1
5 C 5 C1
```
Q(1) How can we conditionally replace one string? Meaning that, in column `data2`, I would like to replace `A1` but only `if "key==A" and "data1">1`. How can I do that?
Q(2) Can the conditional replacement be applied to multiple replacement (i.e., replacing `A1 and A2` at the same time with "Bad" but only under similar conditions?<issue_comment>username_1: I think need filter column in both sides with replace only for filtered rows:
```
mask = (df['key']=="A") & (df['data1'] > 1)
df.loc[mask, 'data2']= df.loc[mask, 'data2'].str.strip().str.replace("A1","Bad")
print (df)
key data1 data2
0 A 0 A1
1 B 1 B1
2 C 2 C1
3 A 3 Bad
4 B 4 B1
5 C 5 C1
```
If need multiple replace use [`replace`](http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.replace.html) with `dict`:
```
df = pd.DataFrame({'key': ['A', 'A', 'C', 'A', 'B', 'C'],
'data1': range(6),
'data2': ['A1', 'A2', 'C1', 'A1', 'B1', 'C1']},
columns = ['key', 'data1', 'data2'])
mask = (df['key']=="A") & (df['data1'] > 0)
df.loc[mask, 'data2']= df.loc[mask, 'data2'].str.strip().replace({"A1":"Bad", "A2":'Bad1'})
```
Or use regex:
```
df.loc[mask, 'data2']= df.loc[mask, 'data2'].str.strip().str.replace(r'^A.*',"Bad")
print (df)
key data1 data2
0 A 0 A1
1 A 1 Bad1
2 C 2 C1
3 A 3 Bad
4 B 4 B1
5 C 5 C1
```
Upvotes: 2 <issue_comment>username_2: You can use `numpy` and a `regex`-based replacement to cover `A1, A2` and more. if we extend your data to include an example with `A3`:
```
import pandas as pd
import numpy as np
df = pd.DataFrame({'key': ['A', 'B', 'C', 'A', 'B', 'C', 'A'],
'data1': range(7),
'data2': ['A1', 'B1', 'C1', 'A1', 'B1', 'C1', 'A3']},
columns=['key', 'data1', 'data2'])
df['data2'] = np.where((df['key'] == 'A') & (df['data1'] > 1),
df['data2'].str.replace(r'A\d+','Bad'),
df['data2'])
```
This returns:
```
key data1 data2
0 A 0 A1
1 B 1 B1
2 C 2 C1
3 A 3 Bad
4 B 4 B1
5 C 5 C1
6 A 6 Bad
```
Upvotes: 3 [selected_answer]<issue_comment>username_3: If we want to extend the example above in the following way:
```
df = pd.DataFrame({'key': ['A', 'B', 'C', 'A', 'B', 'C'],
'data1': range(6),
'data2': ['A1', 'B1', 'C1', 'A1', 'B1', 'C1']},
columns = ['key', 'data1', 'data2'])
mask = (df['data1'] > 1)
df.loc[mask, 'data2']= df.loc[mask, 'data2'].str.strip().str.replace("A1",df['key'])
key data1 data2
0 A 0 A1
1 B 1 B1
2 C 2 NaN
3 A 3 NaN
4 B 4 NaN
5 C 5 NaN
```
I am very surprised by the result; I thought the content of data2 would be replaced by the content of column "key" (under the condition data1>1). Any idea?
Upvotes: 0 |
2018/03/22 | 1,225 | 3,218 | <issue_start>username_0: I am using Google smtp in my project, and here's a poblem. I am sending html letters with cyrillic text. Body is okay, but subject displaying with broken encoding (`Ð ÑРµСÐСâ СÐÐ ÑР·СÐÐ ÑР»РÑÐ Ñ`). Which encoding must be used to set cyrillic subject for Google SMTP? I tried utf-8 and windows-1251.
```
iconv('windows-1251','utf-8',"Тест розсилки")
mb_convert_encoding('Тест розсилки','Windows-1251')
mb_convert_encoding('Тест розсилки','UTF-8')
```
`mb_detect_encoding` returns 'utf-8'
2018/03/22 | 480 | 1,574 | <issue_start>username_0: I have a page where class `"check-cls agent-cls corporate-cls log-out-cls hidden"` is present. When I run below jQuery command it is returning me the result:
Jquery:
```
$(".check-cls.agent-cls.corporate-cls.log-out-cls");
```
>
> Result : 0: li.check-cls.agent-cls.corporate-cls.log-out-cls.hidden
>
>
>
Why it is taking hidden class also? I want to perform action on element having class `.check-cls.agent-cls.corporate-cls.log-out-cls` but action is getting applied on `li.check-cls.agent-cls.corporate-cls.log-out-cls.hidden` also.<issue_comment>username_1: As you are using Multiple Class Selector, it will target element having both classes the element with class `"check-cls agent-cls corporate-cls log-out-cls hidden"` fulfills the requirement.
Using [`:not()` selector](https://api.jquery.com/not-selector/) you can exclude element with `.hidden` class
```
$(".check-cls.agent-cls.corporate-cls.log-out-cls:not(.hidden)");
```
Upvotes: 2 <issue_comment>username_2: Try **:visible**
```
$(".check-cls.agent-cls.corporate-cls.log-out-cls:visible");
```
Upvotes: 0 <issue_comment>username_3: This is because the jquery selector is matching the classes with all the elements. Now since the element with hidden class also has these classes it is being selected.
To select the element with all the class names defined by you and not having the hidden class do,
```
$(".check-cls.agent-cls.corporate-cls.log-out-cls").not('.hidden');
```
or
```
$(".check-cls.agent-cls.corporate-cls.log-out-cls:not(.hidden)")
```
Upvotes: 0 |
2018/03/22 | 1,037 | 2,526 | <issue_start>username_0: ```
_______________________________________
|item |created |expiry |
_______________________________________
|A |01/01/2000 |01/02/2000 |
|B |01/04/2000 |01/06/2000 |
|C |01/05/2000 |01/11/2000 |
|D |01/02/2000 |01/05/2000 |
|E |01/06/2000 |01/07/2000 |
```
what I wanted is to select all where the values between the created date and expiry date that is between the inputted start\_range and end\_range.
eg.
```
start_range: 01/03/2000
end_range: 01/05/2000
```
the range above will have the values
```
01/03/2000
01/04/2000
01/05/2000
```
the output would be like:
```
_______________________________________
|item |created |expiry |
_______________________________________
|B |01/04/2000 |01/06/2000 |
|C |01/05/2000 |01/11/2000 |
|D |01/02/2000 |01/05/2000 |
```
A and E is not included because the dates between them is not in the dates between the start and end ranges.<issue_comment>username_1: ```
SELECT item,TO_CHAR(created,'DD/MM/YYYY') created,TO_CHAR(expiry,'DD/MM/YYYY') expiry
FROM Table1
WHERE created <= DATE '2000-01-05' AND expiry >= DATE '2000-01-03';
```
Output
```
ITEM CREATED EXPIRY
B 04/01/2000 06/01/2000
C 05/01/2000 11/01/2000
D 02/01/2000 05/01/2000
```
Live Demo
>
> <http://sqlfiddle.com/#!4/bee9f/16>
>
>
>
Upvotes: 3 [selected_answer]<issue_comment>username_2: There is a direct translation of human language to your desired sql:
>
> select all where the values between the created date and expiry date that is between the inputted start\_range and end\_range
>
>
>
```
select item, created, expiry
from table
where created between start_range and end_range
and expiry between start_range and end_range;
```
Upvotes: 1 <issue_comment>username_3: Here is the solution to your problem:
```
SELECT Item, created, expiry
FROM Table1
WHERE
created BETWEEN DATE '2000-01-03' AND DATE '2000-01-05'
OR
expiry BETWEEN DATE '2000-01-03' AND DATE '2000-01-05';
```
Follow the link to the demo:
>
> <http://sqlfiddle.com/#!4/bee9f/2>
>
>
>
**ANOTHER WAY:**
```
SELECT Item, created, expiry
FROM Table1
WHERE created <= DATE '2000-01-05' AND expiry >= DATE '2000-01-03';
```
General Query:
```
SELECT Item, created, expiry
FROM Table1
WHERE created <= DATE end_range AND expiry >= DATE start_range;
```
Follow the link to the demo:
>
> <http://sqlfiddle.com/#!4/bee9f/7>
>
>
>
Upvotes: 1 |
2018/03/22 | 328 | 1,262 | <issue_start>username_0: mongoose-paginate does not render virtual fields of the mongoose model. Any solution for this ?
<https://github.com/edwardhotchkiss/mongoose-paginate><issue_comment>username_1: mongoose-paginate can render virtuals, but only if you exclude lean() operation.
```
const options = {
// lean: true - exclude this string if you have
sort: '-updatedAt',
// etc.
}
```
Upvotes: 0 <issue_comment>username_2: If you use the virtual config at the `schema level` then probably this settings would render the virtuals for you
```js
const options = {
// lean: true - exclude this string if you have
sort: '-updatedAt',
// etc.
}
```
or if you are like me who adds the virtual config at the document level, for example:
```js
// jobModel is my job schema and questions is a virtual collection mapped from questions schema
const doc = await this.jobModel.findOne({ _id: id }).populate({ path: 'questions' });
// This returns me the parent doc and the virtual data
return doc ? doc.toObject({ virtuals: true }) : null;
```
then you might need to keep the `lean: true` for virtual fields to populate like so:
```js
const options = {
lean: true,
sort: '-updatedAt',
// etc.
}
```
Upvotes: 1 |
2018/03/22 | 477 | 1,900 | <issue_start>username_0: I have developed a windows form application using Bunifu .NET UI Framework.
But I have a problem, I want to set max length of the text box.
Therefore please give me some advice, how can I do it?<issue_comment>username_1: Here is working code - add the code on form load or your constructor like **BunifuMetro(yourtextbox);** after InitializeComponent(). You can try and switch between the controls by replacing **Bunifu.Framework.UI.BunifuMetroTextbox** with another textbox; Cheers
```
private void BunifuMetro(Bunifu.Framework.UI.BunifuMetroTextbox metroTextbox)
{
foreach (var ctl in metroTextbox.Controls)
{
if (ctl.GetType() == typeof(TextBox))
{
var txt = (TextBox)ctl;
txt.MaxLength = 5;
// set other properties & events here
}
}
}
```
Upvotes: 0 <issue_comment>username_2: You can likewise use the method below:
```
///
/// Sets the maximum length of text in Bunifu MetroTextBox.
///
/// The Bunifu MetroTextbox control.
/// The maximum length of text to edit.
private void SetMaximumLength(Bunifu.Framework.UI.BunifuMetroTextbox metroTextbox, int maximumLength)
{
foreach (Control ctl in metroTextbox.Controls)
{
if (ctl.GetType() == typeof(TextBox))
{
var txt = (TextBox)ctl;
txt.MaxLength = maximumLength;
// Set other properties & events here...
}
}
}
```
Upvotes: 2 [selected_answer]<issue_comment>username_3: Simple way, Assign the MaxLength property on TextChange event of the textbox (Working 100%)
```
int maxLength=5;
private void textbox1_TextChange(object sender, EventArgs e)
{
textbox1.MaxLength = maxLength + textbox1.PlaceholderText.Length;
}
```
Upvotes: 0 |
2018/03/22 | 464 | 1,785 | <issue_start>username_0: I am pretty much new to token base authentication. Can i read other than username from ClaimsPrincipal principal (identity). Is there any way to read/write(store) other information in bearer token.
```
ClaimsPrincipal principal = Request.GetRequestContext().Principal as ClaimsPrincipal;
var Name = ClaimsPrincipal.Current.Identity.Name;
```<issue_comment>username_1: Additional information is stored is so called claims in the payload part of a JWT.
JWT is described in [RFC 7519](https://www.rfc-editor.org/rfc/rfc7519) and [section 4](https://www.rfc-editor.org/rfc/rfc7519#section-4) of this rfc describes the standard claims as well as the possibility to use private claim names.
The JWT issuer (the authorization server) can also write addional claims to the JWT, e.g.:
```
var identity = new ClaimsIdentity("JWT");
identity.AddClaim(new Claim(ClaimTypes.Name, context.UserName)); // standard claim
identity.AddClaim(new Claim("myClaim", "myClaimValue")); // private claim
```
Please note: only the issuer can add information to the JWT and it can only be done during the creation of the JWT.
As the payload of a JWT is just normal JSON (after base64 decoding), you can read all the claims.
Check <https://jwt.io/> for examples.
Upvotes: 3 [selected_answer]<issue_comment>username_2: You can get any value from your bearer token with key like "user\_name".
```
private string GetUserName()
{
var claims = (ClaimsIdentity)ClaimsPrincipal.Current.Identity;
if (claims == null)
{
return defaultValue;
}
var targetClaim = claims.FirstOrDefault(c => c.Type == "user_name");
if (targetClaim == null)
{
return defaultValue;
}
return targetClaim.Value;
}
```
Upvotes: 2 |
2018/03/22 | 503 | 1,883 | <issue_start>username_0: I want to loop over the parameter vector and find which is the biggest index. If the biggest index exists in multiple places it is going to return the highest index value.
Could someone explain what I'm doing wrong?
```
static int maxIndex(int vector[]) {
int maxIndex = 0;
for (int i = 1; i < vector.length; i++) {
if (vector[i] >= vector[maxIndex]) {
maxIndex = i;
}
}
return maxIndex;
}
```
2018/03/22 | 371 | 1,162 | <issue_start>username_0: I'm getting a string `"hi\nbye\ngoodluck\n\n{{optout}}"` from a backbone model. But i need to trim() the string. But it's not working.
When i'm using the same string with trim() in browser, it's working fine : `"hi\nbye\ngoodluck\n\n{{optout}}".trim()`
Expected output :
```
hi
bye
goodluck
{{optout}}
```<issue_comment>username_1: Your string has multiple consecutive new-lines (`\n`) inside it, and `trim()` only removes whitespace from the two ends of the string; that's why the issue occurs.
You need to use `.replace()` with a `pattern`
Working snippet:-
```js
console.log("hi\nbye\ngoodluck\n\n{{optout}}".replace(/[\r\n]+/g, '\n'));
```
Upvotes: 3 [selected_answer]<issue_comment>username_2: Your issue has nothing to do with `trim`. `trim` does not convert ' \n' into a new line. '\n' is already a new line character. If you want to see the string in the page and in different lines, you must know that HTML ignores new lines, except if the text is inside tags.
You need to convert the newlines to a `<br>` tag before adding the text to the page:
```js
var str = "hi\nbye\ngoodluck\n\n{{optout}}";
var convertedStr = str.replace(/\n/g, '<br>');
console.log(convertedStr);
```
Upvotes: 0 |
2018/03/22 | 527 | 1,867 | <issue_start>username_0: I'm trying to access CSRF Token received from POST response header, which I need to send back with every further request from my app. I'm was able to drill down the response stream until I could access the [[Entries]] which has the "csrfpreventionsalt" token. It gets displayed in console but when trying to access it shows as undefined.
**Tried & Tested:**
I've tried the "get" method to access the header but it didn't work.
var csrf = res.headers.get('csrfpreventionsalt');
I've seen other questions on SO which say that you can't access the header value but If I can access the header in console then definately I should be able to access the token & assign it to a variable.
Solution to this might help others as well who could face the same situation in their apps. Any help is welcome !!
[](https://i.stack.imgur.com/aaXJL.png)
[](https://i.stack.imgur.com/ChoRr.png)
2018/03/22 | 364 | 1,261 | <issue_start>username_0: Simple case but could not figure out the answer so far.
From angular v4 when trying to manually inject `NgControl`:
```
this.injector.get(NgControl);
```
The lint starts complaining:
```
get is deprecated: from v4.0.0 use Type or InjectionToken
```
How to properly inject it using **injector**?<issue_comment>username_1: You have to use it like this now:
```
this.injector.get<NgControl>(NgControl);
```
See the angular documention: <https://angular.io/api/core/Injector>
```
get<T>(token: Type<T> | InjectionToken<T>, notFoundValue?: T, flags?: InjectFlags): T
```
Upvotes: -1 <issue_comment>username_2: (Using Angular 8)
This passes without `TSLint` errors:
```
this.injector.get(NgControl as Type<NgControl>);
```
However there is a more elegant solution via the constructor.
```
constructor(@Self() @Optional() public ngControl: NgControl) {
if(this.ngControl) {
this.ngControl.valueAccessor = this;
}
}
```
But you need to bear in mind this is an inline (more elegant) replacement for `NG_VALUE_ACCESSOR` - you will get a cyclical dependency error using both.
```
providers: [
{
provide: NG_VALUE_ACCESSOR,
useExisting: forwardRef(() => FieldComponent),
multi: true
}
```
Upvotes: 2 |
2018/03/22 | 530 | 2,082 | <issue_start>username_0: I understand there is few other similar questions in stackoverflow but I have been investigating them and none are helping my scenario or at-least I am not getting it so here's my situation:
I have a component(lets call it `MainComponent`) with two inputs inside. The first input is a `checkbox` and the second one is a `picker`(yes the native one). What I am trying to do is when I click one `checkbox` the corresponding `picker` is getting disabled but I can't remove the value of the picker(which i selected when it became enabled) from `redux-form` state. If i select other `checkbox` previous one does get disabled but the value stays.
Note that I am rendering this `MainComponent` multiple times from another component based on some array of objects.
**MainFile.js**
```
categories.map((categoryItem, index)=>
items.map((item, index)=>
```
**Inputs.js**
```
```
The checkbox and the picker is setup correctly with the `inputs: {value, onChange}` etc.
I have tried to change value within this `inputs.js` but then I realized it was wrong and it gave me error that i cant change something from render.
What I am trying to achieve is a way when I uncheck the checkbox I want to also change the value of the picker(**The picker data should be removed from the redux-form state**). How can i do that? Help would be much appreciated.<issue_comment>username_1: Whatever I understood from your question is that you want to update a field in `redux-form`.
`redux-form` has `change` method which can be utilized to update any field value dynamically.
`import { reduxForm, change } from 'redux-form';`
`const mapDispatchToProps = {
updateField: (field, data) => change( "YOUR_FORM_NAME", field, data )
}`
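The bound action could then be dispatched when the checkbox is unchecked to clear the picker value (a sketch only; the field name is illustrative, not from the question):

```
// e.g. inside the checkbox onChange handler
handleCheckboxToggle(checked) {
  if (!checked) {
    // resets the corresponding picker field in the redux-form state
    this.props.updateField('pickerFieldName', null);
  }
}
```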
Would this help?
Upvotes: 3 [selected_answer]<issue_comment>username_2: If you want to remove the picker from the redux state you can use props UnregisterField
Import it into your file
```
import { unregisterField } from 'redux-form';
```
and in your code
```
this.props.unregisterField(yourFormName, yourFieldName);
```
Upvotes: 0 |
2018/03/22 | 314 | 1,112 | <issue_start>username_0: I use CMSIS-DAP link for debugging and CMSIS-DAP firmware load successfully.
and I use IAR Workbench Version 7.80 IDE.
I can build code successfully.
But I get a **"Fatal Error : Failed to connect to CPU Session aborted !"** during **"Download and Debug"** of the code from the IAR IDE.
I could not find any solution behind this issue.
So,Please reply me.
Regards,
Pratik
2018/03/22 | 1,671 | 6,006 | <issue_start>username_0: I have a need to only get one record from a queryset\_set. (as it only returns 1. so I have used .0 or .first() normally.
However, at the moment, when I use .0 I do not get any data, and when I use .first() I get duplicated queries (despite the prefetch).
I have tried to use a subquery, but from the samples I've seen I am unable to adapt it to suit my needs.
The query, which generates duplicates when using first() but returns nothing when using .0:
```
circuits = SiteCircuits.objects.all() \
.exclude(circuit__decommissioned=True) \
.select_related('site') \
.select_related('circuit') \
.prefetch_related(
Prefetch(
'circuit__devicecircuitsubnets_set',
queryset=DeviceCircuitSubnets.objects.all().select_related('subnet')
) \
) \
```
in the template:
```
{% for item in circuits %}
{{ item.circuit.devicecircuitsubnets_set.0.subnet }}{{ item.circuit.devicecircuitsubnets_set.0.mask }}
...
```
models:
```
class Circuit(models.Model):
name = models.CharField(max_length=200, verbose_name="Name")
order_no = models.CharField(max_length=200, verbose_name="Order No")
ref_no = models.CharField(max_length=200, verbose_name="Reference No")
expected_install_date = models.DateField()
install_date = models.DateField(blank=True, null=True)
...
class SiteCircuits(models.Model):
site = models.ForeignKey(Site, on_delete=models.CASCADE)
circuit = models.ForeignKey(Circuit, on_delete=models.CASCADE)
site_count = models.IntegerField(verbose_name="How many sites circuit is used at?", blank=True, null=True)
active_link = models.BooleanField(default=False, verbose_name="Active Link?")
class Device(models.Model):
site = models.ForeignKey(Site, verbose_name="Site device belongs to", on_delete=models.PROTECT)
hostname = models.CharField(max_length=200)
class Subnet(models.Model):
subnet = models.GenericIPAddressField(protocol='IPv4', \
verbose_name="Subnet", blank=True, null=True)
mask = models.CharField(max_length=4, verbose_name="Mask", \
class DeviceCircuitSubnets(models.Model):
device = models.ForeignKey(Device, on_delete=models.CASCADE)
circuit = models.ForeignKey(Circuit, on_delete=models.CASCADE, blank=True, null=True)
subnet = models.ForeignKey(Subnet, on_delete=models.CASCADE)
```
sample output in console:
```
>>> circuits[100].circuit.devicecircuitsubnets_set.first().subnet.subnet
'10.10.10.4'
>>> circuits[100].circuit.devicecircuitsubnets_set.all()[0].subnet.subnet
'10.10.10.4'
>>>
```<issue_comment>username_1: I'm not very familiar with the internals, but I suspect that `first()` generates extra queries for similar reasons that `filter()` does. See [this question](https://stackoverflow.com/questions/12973929/why-does-djangos-prefetch-related-only-work-with-all-and-not-filter/12974801#12974801) for more info.
For the second part of your question about why does `.0` not return any results - your models and code are so complicated that I can't understand them. You might get a better response if you provided a simpler example (including the models) which reproduced the problem.
Upvotes: 0 <issue_comment>username_2: EDIT:
I'm not sure why your query is failing, but [the docs](https://docs.djangoproject.com/en/2.0/ref/models/querysets/#django.db.models.query.QuerySet.prefetch_related) say you need to be careful in the order in which prefetch is called. My next guess would be to try removing all prefetching apart from that which is absolutely necessary. Sorry it's not expert advice, I'm just muddling along myself. Also if you are using Django 1.11 onwards, you could re-write the query as below, as you are only interested in the first subnet/mask object per SiteCircuit.
```
from django.db.models import Subquery, OuterRef
subnet = Subquery(
DeviceCircuitSubnets.objects.filter(circuit_id=OuterRef(
'circuit_id')).values('subnet__subnet')[:1])
mask = Subquery(
DeviceCircuitSubnets.objects.filter(circuit_id=OuterRef(
'circuit_id')).values('subnet__mask')[:1])
circuits = SiteCircuits.objects.filter(
circuit__decommissioned=False
).annotate(
circuit_subnet=subnet,
cicuit_mask=mask
).select_related(
'site'
).select_related(
'circuit'
)
```
You could then access this by:
```
{% for item in circuits %}
{{ item.circuit_subnet }}{{ item.circuit_mask }}
...
```
You shouldn't need to call `prefetch_related` on relationships which aren't many-to-many or one-to-many. For many-to-one and one-to-one relationships you should use `select_related` instead. E.g. you should use `.select_related('circuit__circuit_type')` rather than `.prefetch_related('circuit__circuit_type')` assuming that `circuit_type` is not a many to many field.
== From OP ==
Debug toolbar output:
```
SELECT "config_devicecircuitsubnets"."id", "config_devicecircuitsubnets"."device_id", "config_devicecircuitsubnets"."circuit_id", "config_devicecircuitsubnets"."subnet_id", "config_subnet"."id", "config_subnet"."subnet", "config_subnet"."mask", "config_subnet"."subnet_type_id" FROM "config_devicecircuitsubnets" INNER JOIN "circuits_circuit" ON ("config_devicecircuitsubnets"."circuit_id" = "circuits_circuit"."id") INNER JOIN "config_subnet" ON ("config_devicecircuitsubnets"."subnet_id" = "config_subnet"."id") WHERE (NOT ("circuits_circuit"."decommissioned" = 'True' AND "circuits_circuit"."decommissioned" IS NOT NULL) AND "config_devicecircuitsubnets"."circuit_id" = '339') ORDER BY "config_devicecircuitsubnets"."id" ASC LIMIT 1
Duplicated 526 times.
```
this line is highlighted in debug toolbar:
```
{{ item.circuit.devicecircuitsubnets_set.first.subnet }}{{ item.circuit.devicecircuitsubnets_set.first.mask }}
```
Upvotes: 2 [selected_answer] |
2018/03/22 | 397 | 1,345 | <issue_start>username_0: My checkbox code:
```
Beverage Size
Small
Regular
Large
```
My submit button:
```
Add Beverage
```
Edited:
I want to pass all the checkbox status as a parameter of a function. The drinkSmall, drinkRegular, drinkLarge should be the status of checkbox.<issue_comment>username_1: You should create a object that holds the checkbox values. Then you can pass that object to your submit function. See below for a example using a model called "checkboxes"
**TS:**
```
// A achecboxes object in your controller:
checkboxes: any = {
drinkSmall: true,
drinkRegular: true,
drinkLarge: true
}
```
**HTML**:
```
Beverage Size
Small
Regular
Large
Add Beverage
```
Upvotes: 3 [selected_answer]<issue_comment>username_2: In .ts you need to create an **object**.
```
checkboxes: any = {
drinkSmall: true,
drinkRegular: true,
drinkLarge: true
};
data: any = {
checkboxes:this.checkboxes,
para1:'',
para2:'',
para3:''
};
```
**In `.html` you need to change like**
```
Beverage Size
Small
Regular
Large
Add Beverage
```
Here one object data holds multiple `checkbox` as well as `para1,para2,para3`
**Output of `console.log`**
[](https://i.stack.imgur.com/1B58q.png)
Upvotes: 1 |
2018/03/22 | 955 | 3,408 | <issue_start>username_0: i am Using for each Loop to a Jagged String Array in order to display the Elements in it but it is not working and i have tried this code !
```
public class Program
{
public static void Main(string[] args)
{
string[][] str = new string[5][];
str[0] = new string[5];
str[1] = new string[10];
str[2] = new string[20];
str[3] = new string[50];
str[4] = new string[10];
//Now store data in the jagged array
str[0][0] = "Pune";
str[1][0] = "Kolkata";
str[2][0] = "Bangalore";
str[3][0] = "The pink city named Jaipur";
str[4][0] = "Hyderabad";
//Lastly, display the content of each of the string arrays inside the jagged array
foreach(string[] i in str)
Console.WriteLine(i.ToString());
Console.ReadKey();
}
}
```
I have used a foreach loop but it is Printing
```
System.String[]
System.String[]
System.String[]
System.String[]
System.String[]
```
as output ....
So please tell me what I have to modify to get the display as:
```
Pune
Kolkata
Bangalore
The pink city named Jaipur
Hyderabad
```<issue_comment>username_1: A jagged array is an array of arrays - meaning you must have nested loops:
```
foreach(string[] i in str)
foreach(string s in i)
if(!string.IsNullOrEmpty(s))
Console.WriteLine(s);
```
Another option is using linq's `SelectMany`:
```
foreach(var s in str.SelectMany(s => s).Where(s => !string.IsNullOrEmpty(s)))
{
Console.WriteLine(s);
}
```
Or, if you want to only display the first cell of each nested array:
```
foreach(string[] i in str)
var s = i[0];
if(!string.IsNullOrEmpty(s))
Console.WriteLine(s);
```
Note I've added a check for `String.IsNullOrEmpty` to avoid writing empty lines to the console.
Upvotes: 3 [selected_answer]<issue_comment>username_2: Try updating your code with this to loop through each array entry in your array :
```
foreach(string[] i in str)
{
foreach(string s in i)
{
Console.WriteLine(s);
}
}
```
Upvotes: 1 <issue_comment>username_3: Your `i` itself is a string array and thus the result since `Tostring()` returning the default implementation of it. You need another loop to get to the real data item
```
foreach(string[] i in str)
{
foreach(string j in i)
Console.WriteLine(j);
}
```
Upvotes: 1 <issue_comment>username_4: Jagged array is an array of arrays, so you need a second, nested, loop.
The second loop could be explicit, or hidden inside some method that loops over entries that you pass to it. For example, `string.Join` would combine all strings from a jagged row into a single string for printing:
```
foreach(string[] row in str)
Console.WriteLine(string.Join(" ", row));
```
You can use the same technique to eliminate the outer loop, too:
```
Console.WriteLine(string.Join("\n", str.Select(row => string.Join(" ", row))));
```
Upvotes: 1 <issue_comment>username_5: If you want to print all the strings in all the arrays, and not get an exception:
```
foreach (string[] i in str)
foreach (string s in i)
if (s != null)
Console.WriteLine(s);
```
Upvotes: 0 <issue_comment>username_6: Console.WriteLine(i[0].ToString());
Upvotes: 0 |
2018/03/22 | 1,740 | 5,582 | <issue_start>username_0: I want to share the initial visible area of my page between a slideshow and my navbar like the below diagram which will then **stick at the top** when you scroll down the document. This is where I've run into issues.
I've tried doing something like `height: 90vh` for my slideshow and `height: 10vh` for my navbar but I want the website to be dynamic and able to fit to most resolutions until you hit cellphone level or at least like 200% zoom on Microsoft edge wherein another stylesheet will be used.
I've also tried placing them within the same div, setting `height: 90%` for the slideshow and `height: auto` for the navbar. This worked best in terms of how dynamic it is but the `position: sticky` obviously didn't work because it only traverses the height of the parent div.
The one that works best is setting the slideshow height to `height: 90vh` and allowing the navbar to go accordingly. It kinda sorta works but not nearly good enough for me.
The navbar *has* to initially be at the bottom then stick to the top. If possible I'd rather have a purely CSS solution, but I am open to javascript. Though I'd rather have pure javascript as opposed to jQuery but if it's well explained I'm okay with it.
[](https://i.stack.imgur.com/YAHn6.png)
The actual question is: **How do I make my navbar and my slideshow share the initial visible height dynamically?**
Here is all the relevant code:
```css
#container {
display: flex;
flex-direction: column;
height: 100vh;
}
.slideshow-base {
flex-grow: 1;
width: 100%;
margin: 0px;
}
.slideshow-container {
height: 100%;
width: 100%;
position: relative;
overflow: hidden;
}
.Slides {
position: absolute;
transform: translateX(-100%);
transition: transform 2s;
display: inline-block;
width: 100%;
height: 100%;
margin: 0;
}
.Slides-Images {
width: 100%;
height: 100%;
object-fit: cover;
}
.navbar-base {
font-weight: bold;
z-index: 2;
font-variant: small-caps;
height: 100%;
width: auto;
top: 0px;
background-color: rgba(50, 64, 147, 0.9);
display: block;
border-bottom: 1px solid rgb(226, 208, 0);
}
```
```html



❮
❯
᛫
᛫
᛫
---
* @Html.ActionLink("main page", "MainPage", "Home")
* @Html.ActionLink("about", "About", "About")
+ @Html.ActionLink("academy", "Academy", "About")
+ @Html.ActionLink("the club", "DKClub", "About")
+ @Html.ActionLink("taebo", "TaeBo", "About")
+ @Html.ActionLink("founders and staff", "Staff", "About")
* @Html.ActionLink("contacts", "Contacts")
* @Html.ActionLink("gallery", "Gallery")
* @Html.ActionLink("shop dk", "Index", "Shop")
```<issue_comment>username_1: You can probably wrap those 2 elements in a wrapper (and optionally give it a height) and use flexbox to make them share space.
In the example below, the slideshow will always cover 90% of the wrapper's height and nav will cover 10% of it.
```css
.slideshow,
.nav {
border: 2px solid #000
}
.wrapper {
display: flex;
flex-direction: column;
height: 90vh
}
.nav {
/* An arbitrary value to start with */
flex-grow: 1;
}
.slideshow {
/* Grow this element 9 times more than the other element. */
flex-grow: 9;
}
```
```html
Slideshow content
Navigation content
```
Upvotes: 0 <issue_comment>username_2: Use CSS Grids
```css
body {
margin: 0;
}
#container {
width: 100vw;
background: #ccc;
padding: 10px;
height: 100vh;
display: grid;
grid-template-rows: 90vh 10vh;
grid-gap: 10px;
}
#container>div {
background: #999;
}
```
```html
Sticky
NavBar
```
Upvotes: 2 <issue_comment>username_3: Ok, I would use a flexbox approach for your initial view, then a bit of js to add a class on scroll:
```js
window.onscroll = function() {
var nav = document.getElementById('nav');
var scrollTop = window.pageYOffset || document.body.scrollTop || document.documentElement.scrollTop;
if (scrollTop > 100) { // not sure how much you want it to scroll before it is made sticky
nav.classList.add("fixed");
} else {
nav.classList.remove("fixed");
}
}
```
```css
body {
margin:0;
height:200vh; /* just so there is some scrolling */
}
.container {
height:100vh;
display:flex;
flex-direction:column;
}
.slideshow-base {
 flex-grow:1; /* this will make the slideshow base take the rest of the available height to start with */
}
.fixed {
position:fixed;
top:0;
left:0;
right:0;
}
```
```html



❮
❯
᛫
᛫
᛫
---
* @Html.ActionLink("main page", "MainPage", "Home")
* @Html.ActionLink("about", "About", "About")
+ @Html.ActionLink("academy", "Academy", "About")
+ @Html.ActionLink("the club", "DKClub", "About")
+ @Html.ActionLink("taebo", "TaeBo", "About")
+ @Html.ActionLink("founders and staff", "Staff", "About")
* @Html.ActionLink("contacts", "Contacts")
* @Html.ActionLink("gallery", "Gallery")
* @Html.ActionLink("shop dk", "Index", "Shop")
```
Upvotes: 1 [selected_answer] |
2018/03/22 | 382 | 1,376 | <issue_start>username_0: I've created a mock item for my logger so that I can verify what calls are being made to it; like so:
```
mock_log.Setup(l => l.InfoFormat(It.IsAny<string>(), It.IsAny<object[]>()));
mock_log.Verify(m => m.InfoFormat("1 file(s) found that match criteria."), Times.Exactly(1));
```
I've debugged the code and I KNOW that these logs are definitely being hit in the code, so they should be logged. This is the code that logs that message:
```
_log.InfoFormat("{0} file(s) found that match criteria.", files.Count);
```
and there is only 1 file that gets passed down.
So why does it fail when I verify the one call?
Any suggestions?<issue_comment>username_1: In the mock setup you are telling the logger that the second parameter is an array, but then you are passing in a count at run time. I suspect that's going to be the problem. Try changing to:
`It.IsAny<int>()`
Another thought, the test you are applying in .Verify looks a little odd. I'm not sure what the Times.Exactly(1) is meant to be doing, if the mock is trying to run verify the resulting format string, you probably just want to pass in 1?
Upvotes: 0 <issue_comment>username_2: Found out it's because the string formatting doesn't get applied before the comparison. Change the verify to:
```
mock_log.Verify(m => m.InfoFormat("{0} file(s) found on Ftp server.", 4), Times.Exactly(1));
```
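For reference, a minimal sketch of the same fix applied to the original message (assuming the logger exposes `InfoFormat(string format, params object[] args)`); verify against the format string and the raw argument rather than the composed text:
```
mock_log.Verify(
    m => m.InfoFormat("{0} file(s) found that match criteria.", 1),
    Times.Exactly(1));
```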
Upvotes: 3 [selected_answer] |
2018/03/22 | 475 | 1,420 | <issue_start>username_0: I have a byte array of, let's say, of 100 bytes.
from 0 to 15, these bytes correspond to parameter1,
from 16 to 50, corresponds to parameter2,
from 51 to 80 corresponds to parameter3,
from 81 to 99 corresponds to parameter4
Indexes 1,15,16,50,51,80,81,99 are not fixed. They vary with the parameter
I read the bytes from a device. I have to update, for example, bytes for parameter 3.
How can I accomplish that?
Thank you
P.S. Below is a simple example. I replaced the bytes "23" and "34" with "99"
```
Dim temp As Byte() = New Byte() {12, 23, 12, 23, 34, 56, 67, 89}
Dim tempReplaced As Byte() = New Byte() {12, 23, 12, 99, 99, 56, 67, 89}
```
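One way to do the in-place update is `Array.Copy`, which overwrites a span of the destination array. This is only a sketch; the start index and replacement bytes below are illustrative, not taken from the device protocol:
```
' Hypothetical values: parameter 3 starts at index 3 and is 2 bytes long for this run.
Dim startIndex As Integer = 3
Dim newBytes As Byte() = New Byte() {99, 99}
' Overwrites temp(startIndex) .. temp(startIndex + newBytes.Length - 1) in place.
Array.Copy(newBytes, 0, temp, startIndex, newBytes.Length)
```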
2018/03/22 | 1,007 | 3,978 | <issue_start>username_0: I have created a program which takes a string (e.g. "[2*4]x + [3/2]x"), isolates all the instances where there is text within square brackets and places them within an array 'matches'. It then strips off the square brackets and, by some function (I am using the library Flee), takes each string (e.g. 2*4) and evaluates it before storing it in an array 'answers'. I now need a way to replace the items within square brackets in the original string with the items in the 'answers' array, but I am not sure how to do this.
```
public string result(string solution,int num1, int num2, int num3,int num4)
{
Regex regex = new Regex(@"\[.*?\]");
MatchCollection matches = regex.Matches(solution);
int count = matches.Count;
int [] answers = new int [10];
 for (int i = 0; i < count; i++) // '<' rather than '<=', otherwise the loop runs past the last match
{
string match = matches[i].Value;
match = match.Replace("[", "");
match = match.Replace("]", "");
Console.WriteLine(match);
ExpressionOptions options = new ExpressionOptions();
options.Imports.AddType(typeof(System.Math));
ExpressionOwner owner = new ExpressionOwner();
owner.a = num1;
owner.b = num2;
owner.c = num3;
owner.d = num4;
Expression expressionmethod = new Expression(match, owner, options);
try
{
ExpressionEvaluator evaluator = (ExpressionEvaluator)expressionmethod.Evaluator;
int result = evaluator();
answers[i] = result;
}
catch
{
ExpressionEvaluator evaluator = (ExpressionEvaluator)expressionmethod.Evaluator;
double result = evaluator();
answers[i] = Convert.ToInt32(result);
}
}
}
```<issue_comment>username_1: As well as getting the matches, use `regex.Split` with the same pattern to get the text between the matches.
Then it's just a case of interpolating them with your results to build your result string.
Right after `MatchCollection matches = regex.Matches(solution);` add the line:
```
string[] otherStuff = regex.Split(solution);
Debug.Assert(otherStuff.Length == matches.Count + 1); // Optional obviously. Regex can be weird though, I'd check it just in case.
```
Then you can just do
```
StringBuilder finalResult = new StringBuilder();
finalResult.Append(otherStuff[0]);
for (int i = 0; i < count; i++)
{
finalResult.Append(answers[i]);
finalResult.Append(otherStuff[i+1]);
}
```
and `finalResult` should give you what you need.
Upvotes: -1 [selected_answer]<issue_comment>username_2: You may use `Regex.Replace` with a callback method as the replacement argument where you may do whatever you need with the match value and put it back into the resulting string after modifications. Capture all text between square brackets so as to avoid extra manipulation with the match value.
Here is the code:
```
public string ReplaceCallback(Match m)
{
string match = m.Groups[1].Value;
Console.WriteLine(match);
ExpressionOptions options = new ExpressionOptions();
options.Imports.AddType(typeof(System.Math));
ExpressionOwner owner = new ExpressionOwner();
owner.a = num1;
owner.b = num2;
owner.c = num3;
owner.d = num4;
Expression expressionmethod = new Expression(match, owner, options);
try
{
ExpressionEvaluator evaluator = (ExpressionEvaluator)expressionmethod.Evaluator;
int result = evaluator();
return result.ToString();
}
catch
{
ExpressionEvaluator evaluator = (ExpressionEvaluator)expressionmethod.Evaluator;
double result = evaluator();
return result.ToString();
}
}
public string result(string solution,int num1, int num2, int num3,int num4)
{
 return Regex.Replace(solution, @"\[(.*?)]", ReplaceCallback);
}
```
The `\[(.*?)]` regex matches `[`, then matches and captures any 0+ chars other than a newline as few as possible, and then matches a `]` char. So, the text between `[...]` is inside `match.Groups[1].Value` that is further modified inside the callback method.
Upvotes: 2 |
2018/03/22 | 1,524 | 5,089 | <issue_start>username_0: I can do each step separately but cannot combine them, since I don't know the disk device name.
My configuration:
```
- name: Create Virtual Machine
azure_rm_virtualmachine:
resource_group: "{{ resource_group }}"
name: "{{ item }}"
vm_size: "{{ flavor }}"
managed_disk_type: "{{ disks.disk_type }}"
network_interface_names: "NIC-{{ item }}"
ssh_password_enabled: false
admin_username: "{{ cloud_config.admin_username }}"
image:
offer: "{{ image.offer }}"
publisher: "{{ image.publisher }}"
sku: "{{ image.sku }}"
version: "{{ image.version }}"
tags:
Node: "{{ tags.Node }}"
ssh_public_keys:
- path: "/home/{{ cloud_config.admin_username }}/.ssh/authorized_keys"
key_data: "{{ cloud_config.ssh.publickey }}"
data_disks:
- lun: 0
disk_size_gb: "{{ disks.disk_size }}"
caching: "{{ disks.caching }}"
managed_disk_type: "{{ disks.disk_type }}"
```
Other part to format and mount the disk
```
- name: partition new disk
shell: 'echo -e "n\np\n1\n\n\nw" | fdisk /dev/sdc'
args:
executable: /bin/bash
- name: Makes file system on block device
filesystem:
fstype: xfs
dev: /dev/sdc1
- name: new dir to mount
file: path=/hadoop state=directory
- name: mount the dir
mount:
path: /hadoop
src: /dev/sdc1
fstype: xfs
state: mounted
```
My question: the device name cannot be configured.
It can be /dev/sdc or /dev/sdb. For AWS EC2 I can set volumes[device_name], but I can't find such a field in Azure. How can I fix this?<issue_comment>username_1: ***/dev/sdb*** is used for the temporary disk by default, but sometimes it was used by my data disk.
I found a workaround that checks the device name before formatting.
I know it's not a smart way.
```
- name: check device name which should be parted
shell: parted -l
register: device_name
- name: Show middle device name
debug:
msg: "{{ device_name.stderr.split(':')[1] }}"
register: mid_device
- name: Display real device name
debug:
msg: "{{ mid_device.msg.split()[0] }}"
register: real_device
- name: partition new disk
shell: 'echo -e "n\np\n1\n\n\nw" | fdisk {{ real_device.msg }}'
args:
executable: /bin/bash
- name: Makes file system on block device
filesystem:
fstype: xfs
dev: "{{ real_device.msg }}1"
- name: new dir to mount
file: path=/hadoop state=directory
- name: mount the dir
mount:
path: /hadoop
src: "{{ real_device.msg }}1"
fstype: xfs
state: mounted
```
Upvotes: 1 <issue_comment>username_2: Maybe try the `azure_rm_managed_disk` module and then attach the disk to the VM. Then you have all the properties of the disk.
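A rough sketch of that idea (the parameter names are assumptions and vary between Ansible versions; check the `azure_rm_managed_disk` module documentation):
```yaml
- name: Create a managed data disk and attach it to the VM
  azure_rm_managed_disk:
    resource_group: "{{ resource_group }}"
    name: "data-disk-{{ item }}"
    disk_size_gb: "{{ disks.disk_size }}"
    storage_account_type: "{{ disks.disk_type }}"
    managed_by: "{{ item }}"   # name of the VM the disk should be attached to
```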
Upvotes: 0 <issue_comment>username_1: We can use a **symlink** rather than /dev/sdb to format the data disk; the links are located under **/dev/disk/azure**.
You can run `tree /dev/disk/azure` to see the detailed structure.
Here is my example formatting one data disk; if there are more disks, you can change the symlink to /dev/disk/azure/scsi1/lun1, /dev/disk/azure/scsi1/lun2, /dev/disk/azure/scsi1/lun3...
```
- name: use parted to make label
shell: "parted /dev/disk/azure/scsi1/lun0 mklabel msdos"
args:
executable: /bin/bash
- name: partition new disk
shell: "parted /dev/disk/azure/scsi1/lun0 mkpart primary 1 100%"
args:
executable: /bin/bash
- name: inform the OS of partition table changes (partprobe)
command: partprobe
- name: Makes file system on block device with xfs file system
filesystem:
fstype: xfs
dev: /dev/disk/azure/scsi1/lun0-part1
- name: create data dir for mounting
file: path=/data state=directory
- name: Get UUID of the new filesystem
shell: |
blkid -s UUID -o value $(readlink -f /dev/disk/azure/scsi1/lun0-part1)
register: uuid
- name: show real uuid
debug:
msg: "{{ uuid.stdout }}"
- name: mount the dir
mount:
path: /data
src: "UUID={{ uuid.stdout }}"
fstype: xfs
state: mounted
- name: check disk status
shell: df -h | grep /dev/sd
register: df2_status
- debug: var=df2_status.stdout_lines
```
Upvotes: 1 <issue_comment>username_3: If you need LVM...
```yaml
- name: Mount disks with logical volume management
block:
- name: Add disks to logical volume group
community.general.lvg:
vg: "{{ my_volume_group }}"
pvs: "{{ my_physical_devices }}"
- name: Manage logical volume
community.general.lvol:
vg: "{{ my_volume_group }}"
lv: "{{ my_logical_volume }}"
size: "{{ my_volume_size }}"
- name: Manage mount point
ansible.builtin.file:
path: "{{ my_path }}"
state: directory
mode: 0755
- name: Manage file system
community.general.filesystem:
dev: /dev/{{ my_volume_group }}/{{ my_logical_volume }}
fstype: "{{ my_fstype }}"
- name: Mount volume
ansible.posix.mount:
path: "{{ my_path }}"
state: mounted
src: /dev/{{ my_volume_group }}/{{ my_logical_volume }}
fstype: "{{ my_fstype }}"
opts: defaults,nodev
```
Upvotes: 0 |
2018/03/22 | 1,172 | 3,904 | <issue_start>username_0: This is the string I got:
`%uD83D%uDE0C`
I'm pretty sure it is an emoji, but when I try to decode it with this library: <https://github.com/BriquzStudio/php-emoji>, it does not work.
I also tried that library with php's `urldecode` function but with no luck.
Can somebody tell me what kind of decoding this is and/or how I can decode this?
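It looks like the non-standard `%uXXXX` escaping produced by JavaScript's `escape()`, which `urldecode` does not understand. A sketch of one way to decode it in PHP (assuming well-formed UTF-16 surrogate pairs, as in this sample):
```php
$input = '%uD83D%uDE0C';
$decoded = preg_replace_callback('/(?:%u[0-9A-Fa-f]{4})+/', function ($m) {
    // "%uD83D%uDE0C" -> raw UTF-16BE bytes -> UTF-8
    $utf16 = pack('H*', str_replace('%u', '', $m[0]));
    return mb_convert_encoding($utf16, 'UTF-8', 'UTF-16BE');
}, $input);
echo $decoded; // prints the "relieved face" emoji
```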
2018/03/22 | 1,665 | 6,104 | <issue_start>username_0: So I feel like I have come to the end of my rope here, but I'm hoping someone knows more than I do. I have some `Typescript` files, though that is mostly irrelevant as I am having this problem with all content files.
I am able to generate a `nuget`, or more precisely `dotnet pack`, nuget package that includes my content files in the package by using this in the `.csproj` of my `parent` project:
```
true
contentFiles\Scripts\;content\Scripts
```
I can browse the generated `.nupkg` and see that indeed the file was added to the package in both the `content\Scripts` and `contentFiles\Scripts` locations
The problem is that whenever I consume this package in my 'child' project, that `Typescript` never gets copied into any folder of the `child` project, though I can see it extracted in the `.nuget\packages\parent\...` folders.
At first I thought it was something with my initial settings in the `parent` project, and it may be, but after trying what seems like everything in the book, it still fails to copy the content files to the `child` project. I then tried going down the dark path of using `Init.ps1` in the `tools` folder of my package, and though it was impossible to debug, it also seemed to run sporadically (I completely uninstalled and reinstalled the package and it still failed to run most of the time). This could be the way, but I don't know why I can't get it to output to the `Package Manager Console`... maybe there's still hope with `Init.ps1` but I can't seem to figure it out. Finally, I see some potential with a [nuget `.targets` file](https://learn.microsoft.com/en-us/nuget/reference/msbuild-targets) but I can't seem to grasp how to use it for my purpose either! I would love some feedback as to how to get this done.<issue_comment>username_1: Apparently you need the `any\any` in the path ([learn more](https://learn.microsoft.com/en-us/nuget/reference/msbuild-targets#including-content-in-a-package)) as well as to include `true`, like this:
```
true
contentFiles\any\any\wwwroot\js\;content\any\any\wwwroot\js\
true
```
You'll also need to precompile your `TypeScript` before including the `.js` files in the package
However, this still doesn't create a file there, just some strange reference to it.
In the end, we got it working with a `.targets` file, you can find a working repo here: <https://github.com/NuGet/Home/issues/6743>
Upvotes: 3 <issue_comment>username_2: **From:** [Announcing NuGet 3.1 with Support for Universal Windows Platform](https://blog.nuget.org/20150729/Introducing-nuget-uwp.html#deprecated-features)
Importing content from a Nuget package was deprecated for projects using a project.json file in Nuget v3.1. Since then the project.json file has been dropped in favour of the new .csproj format. Importing content from a Nuget package should still work though if you're using the packages.config file instead.
Also mentioned is the fact that there are other package managers available for delivering content.
It looks to me like the answer in the new world is to create a node module containing utility.js and let npm deliver it to your project.
**Possible Workaround:**
I've looked at .targets to copy files and got this working, but it does run on each build - which may or may not be a problem for you. I can't do what I want with it.

In [PackageId].targets:
```
```
and in the .csproj file (replacing [PackageId] with the name of your package):
```
... any Globals for source control stuff ...
netcoreapp2.0
7.0.0
[PackageId]
... any PackageReference stuff ...
```
There seemed to be a bug whereby when the `[PackageId]` wasn't set explicitly in the .csproj, the build targets didn't work. Although that may well be an issue with my development environment.
Upvotes: 4 [selected_answer]<issue_comment>username_3: [username_1's answer](https://stackoverflow.com/a/49438592/1406818) got me on the right track, but it wasn't sufficient to deploy the content file to the `bin` directory (as he noted). I was able to get the file to be deployed by changing the package reference options in the consuming project's .csproj file, as follows:
```
all
analyzers;build
```
It seems like the default for `PrivateAssets` is `contentfiles;analyzers;build` [(documentation)](https://learn.microsoft.com/en-us/nuget/consume-packages/package-references-in-project-files#controlling-dependency-assets), which is not what we want in this case.
Upvotes: 1 <issue_comment>username_4: Simplified code and explanation from @PurplePiranha
**TL;DR**:
Basic .NET6 simplified sample code on [Github](https://github.com/LeonardGC/NugetGeneratingFilesInDestinationProject)
**Step by Step guide**
[](https://i.stack.imgur.com/QM9Yg.png)
1. **Selection of the files**
First we need to select all the files that needs to get into the nuget package.
Add this to the .csproj:
```
...
```
Multiple content lines are allowed.
2. **Write a target**
Make a target file to copy the files before (or after) the build to the bin directory:
The location and name of this file is important:
\build\.targets
Now, make sure that it will get executed by referencing it in the .csproj by adding a content line:
```
...
true
```
Eg: From the code in github:
```
...
true
```
*NOTE: By copying the files to the bin directory, the files are not part of your version control, but your package is!*
3. **Build and pack**
In Visual Studio, right-click on the package name and select "Pack".
A new nuget package should be created in the bin directory of your library.
4. **Use the nuget package**
Install the nuget package now in your destination package.
Notice that the files are in the solution explorer, but not in a directory on your disk. They have a shortcut symbol.
5. **Build the destination package**
Check the bin directory.
The files should be copied to the location mentioned in the targets.
Upvotes: 0 |
2018/03/22 | 1,489 | 5,324 | <issue_start>username_0: What are the standards for boolean parameters in REST that indicate whether or not certain parts of the response should be included?
Example:
Say, I have a REST service GET /buildings which returns buildings:
```
[
{
name: "The Empire State Building",
 floors: 102
}
]
```
Now there is a use case to include the address, but the address is not always needed (let's say getting the address is quite expensive in the backend, so it is better not to include it by default).
I would like to add a parameter which instructs the backend to include the address in the response, say:
```
GET /buildings?address=true:
[
{
"name": "The Empire State Building",
"flors": 102,
"address": {
"street" : "Fifth Avenue",
"number": 99
}
}
]
```
Now the question is how this address parameter should be named:
"includeAddress=true", "address=true"or what should be the name?<issue_comment>username_1: Apparently you need the `any\any` in the path ([learn more](https://learn.microsoft.com/en-us/nuget/reference/msbuild-targets#including-content-in-a-package)) as well as to include `true`, like this:
2018/03/22 | 510 | 1,910 | <issue_start>username_0: It's not a question, but rather a solution (with a small dirty trick).
I needed to insert an image in CKEditor with "center" alignment by default. I could not find a working example, so I spent a lot of time and came up with the following answer.<issue_comment>username_1: In your editor's `config.js`:
```
CKEDITOR.on( 'dialogDefinition', function( ev ) {
// Take the dialog name and its definition from the event data.
var dialogName = ev.data.name;
var dialogDefinition = ev.data.definition;
if ( dialogName == 'image2' ) {
ev.data.definition.dialog.on('show', function() {
//debugger;
var widget = ev.data.definition.dialog.widget;
// To prevent overwriting saved alignment
if (widget.data['src'].length == 0)
widget.data['align'] = 'center';
});
}
});
```
Enjoy!
Upvotes: 2 <issue_comment>username_2: I've used the above solution for a few years successfully - although now (CKEditor Version 4.14.0) it throws an error and no longer works. After a bunch of troubleshooting and assistance from admittedly old documentation from here:
<https://nightly.ckeditor.com/20-05-20-06-04/standard/samples/old/dialog/dialog.html>
... the following seems to work here:
In editor config.js file:
```
CKEDITOR.on('dialogDefinition', function(ev) {
let dialogName = ev.data.name;
let dialogDefinition = ev.data.definition;
console.log(ev);
if (dialogName == 'image2') {
dialogDefinition.onFocus = function() {
/**
* 'none' is no good for us - if is none - reset to 'center'
* if it's already 'left','center', or 'right' - leave alone.
*/
if (this.getContentElement('info', 'align')
.getValue() === 'none') {
this.getContentElement('info', 'align')
.setValue('center');
}
};
}
});
```
Upvotes: 2 |
2018/03/22 | 463 | 1,652 | <issue_start>username_0: I have to merge 2 cells where the range might vary at every run. I am trying with the code below, but there is some error in it which I am not able to identify. For a fixed range it works fine, but for a variable range it shows an error. line_no is the row number of the cells that need to be merged, and it will vary at every run:
```
Range("D" & line_no & ":" "E" & line_no & ).Select
Range("D" & line_no).Activate
With Selection
.VerticalAlignment = xlCenter
.HorizontalAlignment = xlCenter
.WrapText = False
.Orientation = 0
.AddIndent = False
.ShrinkToFit = False
.ReadingOrder = xlContext
.MergeCells = True
End With
```<issue_comment>username_1: Your problem lies in string concatenation. Comments cover that part.
If this range will be used throughout the program, I'd recommend storing it in a variable:
Define a string that points to the desired range: `Dim rng As String: rng = "D" & line_no & ":E" & line_no`, then use it like this:
```
Range(rng).Select
Range(rng).Activate
```
OR
Or define a Range object and store the range itself in the variable instead of a string:
```
Dim rng As Range
Set rng = Range("D" & line_no & ":E" & line_no)
rng.Select
rng.Activate
'...
```
Upvotes: 1 <issue_comment>username_2: I would try to get rid of the `Select` in general. You could do it like this:
```
With Range("D" & line_no & ":" & "E" & line_no)
.VerticalAlignment = xlCenter
.HorizontalAlignment = xlCenter
.WrapText = False
.Orientation = 0
.AddIndent = False
.ShrinkToFit = False
.ReadingOrder = xlContext
.MergeCells = True
End With
```
Upvotes: 3 [selected_answer] |
2018/03/22 | 1,225 | 4,074 | <issue_start>username_0: I have a repo with a django project and want to create a Docker image from it. I also don't want to store any compiled files in git so I try to automate creation of all the artifacts during the Docker image creation.
If I insert:
`RUN python manage.py compilemessages -l en`
in my Dockerfile I get (note that all dependencies are installed on host machine):
```
Traceback (most recent call last):
File "manage.py", line 13, in
execute\_from\_command\_line(sys.argv)
File "/usr/local/lib/python2.7/site-packages/django/core/management/\_\_init\_\_.py", line 338, in execute\_from\_command\_line
utility.execute()
File "/usr/local/lib/python2.7/site-packages/django/core/management/\_\_init\_\_.py", line 312, in execute
django.setup()
File "/usr/local/lib/python2.7/site-packages/django/\_\_init\_\_.py", line 18, in setup
apps.populate(settings.INSTALLED\_APPS)
File "/usr/local/lib/python2.7/site-packages/django/apps/registry.py", line 115, in populate
app\_config.ready()
File "/src/playpilot/apps.py", line 21, in ready
for ct in ContentType.objects.all():
File "/usr/local/lib/python2.7/site-packages/django/db/models/query.py", line 162, in \_\_iter\_\_
self.\_fetch\_all()
File "/usr/local/lib/python2.7/site-packages/django/db/models/query.py", line 965, in \_fetch\_all
self.\_result\_cache = list(self.iterator())
...
File "/usr/local/lib/python2.7/site-packages/psycopg2/\_\_init\_\_.py", line 164, in connect
conn = \_connect(dsn, connection\_factory=connection\_factory, async=async)
django.db.utils.OperationalError: could not connect to server: No such file or directory
Is the server running locally and accepting
connections on Unix domain socket "/tmp/.s.PGSQL.5432"?
```
I work around this by running docker-compose in a build script (with the entire running environment):
```
docker-compose up -d
docker-compose exec web ./manage.py compilemessages -l en
docker commit proj_web_1 image_name
docker-compose down
```
But that adds to build time and looks like quite an ugly solution.
`manage.py` does not need the connection to a database to perform this particular task.
Is there a way to run `manage.py` so it doesn't call into db backend?
django version: 1.8<issue_comment>username_1: Your problem is that when you run the management command, the `postgres` container has started but `postgres` itself is not ready to accept connections yet. Read <https://docs.docker.com/compose/startup-order/> for more info.
What you basically need to do is to set up an `ENTRYPOINT` that will check that `postgres` is ready to accept connections; here is an example of how I do it in my projects:
```
#!/bin/sh
# NOTE: if there is no bash can cause
# standard_init_linux.go:190: exec user process caused "no such file or directory"
# https://docs.docker.com/compose/startup-order/
set -euo pipefail
WAIT_FOR_POSTGRES=${WAIT_FOR_POSTGRES:-true}
if [[ "$WAIT_FOR_POSTGRES" = true ]]; then
DATABASE_URL=${DATABASE_URL:-postgres://postgres:postgres@postgres:5432/postgres}
# convert to connection string
# https://www.postgresql.org/docs/current/static/libpq-connect.html#LIBPQ-CONNSTRING
POSTGRES_URL=${DATABASE_URL%%\?*}
# https://www.gnu.org/software/bash/manual/bash.html#Shell-Parameter-Expansion
POSTGRES_URL=${POSTGRES_URL/#postgis:/postgres:}
# let postgres and other services (e.g. elasticsearch) to warm up...
# https://www.caktusgroup.com/blog/2017/03/14/production-ready-dockerfile-your-python-django-app/
until psql $POSTGRES_URL -c '\q'; do
>&2 echo "Postgres is not available - sleeping"
sleep 1
done
# >&2 echo "Postgres is up - executing command"
fi
if [[ $# -ge 1 ]]; then
exec "$@"
else
echo "Applying migrations"
python manage.py migrate --noinput -v 0
echo "Generate translations"
python manage.py compilemessages --locale ru -v 0
echo "Starting server"
exec python manage.py runserver 0.0.0.0:8000
fi
```
Upvotes: 1 <issue_comment>username_2: Use `django-admin` instead of `manage.py`:
```
RUN django-admin compilemessages -l en
```
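A minimal Dockerfile sketch of this approach (base image and paths are assumptions, not taken from the question). The usual reason it helps: the database query in the traceback comes from the project's `AppConfig.ready()` hook, and `django-admin` without `DJANGO_SETTINGS_MODULE` does not load the project's apps, so that hook never runs:
```
FROM python:2.7
WORKDIR /src
COPY . .
RUN pip install -r requirements.txt
# No project settings are imported here, so nothing tries to reach the database.
RUN django-admin compilemessages -l en
```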
Upvotes: 2 |
2018/03/22 | 3,889 | 11,904 | <issue_start>username_0: I can't save a keras model when using a lambda layer and shared variables.
Here's minimal code that gives this error:
```
# General imports.
import numpy as np
# Keras for deep learning.
from keras.layers.core import Dense,Lambda
from keras.layers import Input
from keras.models import Model
import keras.backend as K
n_inputs = 20
n_instances = 100
def preprocess(X,minimum,span):
output = (X - minimum)/span
return output
inputs = Input(shape=(n_inputs,),name='input_tensor')
maximum = K.max(inputs)
minimum = K.min(inputs)
span = maximum - minimum
x = Lambda(preprocess,arguments={'minimum':minimum,'span':span})(inputs)
x = Dense(units=100,activation='elu')(x)
outputs = Dense(units=n_inputs,activation='elu')(x)
model = Model(inputs=inputs,outputs=outputs)
model.compile(optimizer='adam', loss='mse')
x = np.array([np.random.randn(20) for i in range(n_instances)])
y = np.array([np.random.randn(20) for i in range(n_instances)])
model.fit(x,y,epochs=10)
model.save('test.h5') # This line doesn't work.
```
And here's the full error I am getting :
```
Traceback (most recent call last):
File "C:\Dropbox\HELMo_Gramme\M2\Stage\Programmation\Filter\sanstitre0.py", line 35, in
model.save('test.h5') # This line doesn't work.
File "C:\ProgramData\Anaconda3\lib\site-packages\keras\engine\topology.py", line 2580, in save
save\_model(self, filepath, overwrite, include\_optimizer)
File "C:\ProgramData\Anaconda3\lib\site-packages\keras\models.py", line 111, in save\_model
'config': model.get\_config()
File "C:\ProgramData\Anaconda3\lib\site-packages\keras\engine\topology.py", line 2421, in get\_config
return copy.deepcopy(config)
File "C:\ProgramData\Anaconda3\lib\copy.py", line 150, in deepcopy
y = copier(x, memo)
File "C:\ProgramData\Anaconda3\lib\copy.py", line 240, in \_deepcopy\_dict
y[deepcopy(key, memo)] = deepcopy(value, memo)
File "C:\ProgramData\Anaconda3\lib\copy.py", line 150, in deepcopy
y = copier(x, memo)
File "C:\ProgramData\Anaconda3\lib\copy.py", line 215, in \_deepcopy\_list
append(deepcopy(a, memo))
File "C:\ProgramData\Anaconda3\lib\copy.py", line 150, in deepcopy
y = copier(x, memo)
File "C:\ProgramData\Anaconda3\lib\copy.py", line 240, in \_deepcopy\_dict
y[deepcopy(key, memo)] = deepcopy(value, memo)
File "C:\ProgramData\Anaconda3\lib\copy.py", line 150, in deepcopy
y = copier(x, memo)
File "C:\ProgramData\Anaconda3\lib\copy.py", line 240, in \_deepcopy\_dict
y[deepcopy(key, memo)] = deepcopy(value, memo)
File "C:\ProgramData\Anaconda3\lib\copy.py", line 150, in deepcopy
y = copier(x, memo)
File "C:\ProgramData\Anaconda3\lib\copy.py", line 240, in \_deepcopy\_dict
y[deepcopy(key, memo)] = deepcopy(value, memo)
File "C:\ProgramData\Anaconda3\lib\copy.py", line 180, in deepcopy
y = \_reconstruct(x, memo, \*rv)
File "C:\ProgramData\Anaconda3\lib\copy.py", line 280, in \_reconstruct
state = deepcopy(state, memo)
File "C:\ProgramData\Anaconda3\lib\copy.py", line 150, in deepcopy
y = copier(x, memo)
File "C:\ProgramData\Anaconda3\lib\copy.py", line 240, in \_deepcopy\_dict
y[deepcopy(key, memo)] = deepcopy(value, memo)
File "C:\ProgramData\Anaconda3\lib\copy.py", line 180, in deepcopy
y = \_reconstruct(x, memo, \*rv)
File "C:\ProgramData\Anaconda3\lib\copy.py", line 280, in \_reconstruct
state = deepcopy(state, memo)
File "C:\ProgramData\Anaconda3\lib\copy.py", line 150, in deepcopy
y = copier(x, memo)
File "C:\ProgramData\Anaconda3\lib\copy.py", line 240, in \_deepcopy\_dict
y[deepcopy(key, memo)] = deepcopy(value, memo)
File "C:\ProgramData\Anaconda3\lib\copy.py", line 180, in deepcopy
y = \_reconstruct(x, memo, \*rv)
File "C:\ProgramData\Anaconda3\lib\copy.py", line 280, in \_reconstruct
state = deepcopy(state, memo)
File "C:\ProgramData\Anaconda3\lib\copy.py", line 150, in deepcopy
y = copier(x, memo)
File "C:\ProgramData\Anaconda3\lib\copy.py", line 240, in \_deepcopy\_dict
y[deepcopy(key, memo)] = deepcopy(value, memo)
File "C:\ProgramData\Anaconda3\lib\copy.py", line 169, in deepcopy
rv = reductor(4)
TypeError: can't pickle \_thread.lock objects
```
The model can be trained and can be used to predict, the issue only appears when saving.
I have seen similar errors (e.g. [this link](https://github.com/tensorflow/tensorflow/issues/11157)) for people using lambda layers, but I think my issue is slightly different (most of the time it was about seq2seq.py, which I don't use). However, it still seems to be linked with deep copy.
If I remove the lambda layer or the external variables it works. I am probably doing something I shouldn't be doing with the variables, but I don't know how to do it properly. I need them to be outside the scope of the preprocess function because I use those same variables in a postprocess function.
I understand that preprocessing inside the model is not the most efficient but I have my reasons to do it and performance is not an issue on this dataset.
Update
------
I forgot to clarify that I would like to be able to reuse `maximum`, `minimum`, and `span` in another Lambda layer, which is why they are defined outside the scope of `preprocess`.
Update 2
--------
maxim's solution did help, but it still didn't work in my actual code. The difference is that I actually create my model inside a function and return it, and somehow this returns the same type of error.
Example code:
```
# General imports.
import numpy as np
# Keras for deep learning.
from keras.layers.core import Dense,Lambda
from keras.layers import Input
from keras.models import Model
import keras.backend as K
n_inputs = 101
n_instances = 100
def create_model(n_inputs):
def preprocess(X):
maximum = K.max(inputs)
minimum = K.min(inputs)
span = maximum - minimum
output = (X - minimum)/span
return output
def postprocess(X):
maximum = K.max(inputs)
minimum = K.min(inputs)
span = maximum - minimum
output = X*span + minimum
return output
inputs = Input(shape=(n_inputs,),name='input_tensor')
x = Lambda(preprocess)(inputs)
x = Dense(units=100,activation='elu')(x)
outputs = Dense(units=n_inputs,activation='elu')(x)
outputs = Lambda(postprocess)(outputs)
model = Model(inputs=inputs,outputs=outputs)
model.compile(optimizer='adam', loss='mse')
return model
x = np.array([np.random.randn(n_inputs) for i in range(n_instances)])
y = np.array([np.random.randn(n_inputs) for i in range(n_instances)])
model = create_model(n_inputs)
model.fit(x,y,epochs=10)
model.save('test.h5') # This line doesn't work.
```
Error :
```
Traceback (most recent call last):
File "C:\Dropbox\HELMo_Gramme\M2\Stage\Programmation\Filter\sanstitre0.py", line 46, in
model.save('test.h5') # This line doesn't work.
File "C:\ProgramData\Anaconda3\lib\site-packages\keras\engine\topology.py", line 2580, in save
save\_model(self, filepath, overwrite, include\_optimizer)
File "C:\ProgramData\Anaconda3\lib\site-packages\keras\models.py", line 111, in save\_model
'config': model.get\_config()
File "C:\ProgramData\Anaconda3\lib\site-packages\keras\engine\topology.py", line 2421, in get\_config
return copy.deepcopy(config)
File "C:\ProgramData\Anaconda3\lib\copy.py", line 150, in deepcopy
y = copier(x, memo)
File "C:\ProgramData\Anaconda3\lib\copy.py", line 240, in \_deepcopy\_dict
y[deepcopy(key, memo)] = deepcopy(value, memo)
File "C:\ProgramData\Anaconda3\lib\copy.py", line 150, in deepcopy
y = copier(x, memo)
File "C:\ProgramData\Anaconda3\lib\copy.py", line 215, in \_deepcopy\_list
append(deepcopy(a, memo))
File "C:\ProgramData\Anaconda3\lib\copy.py", line 150, in deepcopy
y = copier(x, memo)
File "C:\ProgramData\Anaconda3\lib\copy.py", line 240, in \_deepcopy\_dict
y[deepcopy(key, memo)] = deepcopy(value, memo)
File "C:\ProgramData\Anaconda3\lib\copy.py", line 150, in deepcopy
y = copier(x, memo)
File "C:\ProgramData\Anaconda3\lib\copy.py", line 240, in \_deepcopy\_dict
y[deepcopy(key, memo)] = deepcopy(value, memo)
File "C:\ProgramData\Anaconda3\lib\copy.py", line 150, in deepcopy
y = copier(x, memo)
File "C:\ProgramData\Anaconda3\lib\copy.py", line 220, in \_deepcopy\_tuple
y = [deepcopy(a, memo) for a in x]
File "C:\ProgramData\Anaconda3\lib\copy.py", line 220, in
y = [deepcopy(a, memo) for a in x]
File "C:\ProgramData\Anaconda3\lib\copy.py", line 150, in deepcopy
y = copier(x, memo)
File "C:\ProgramData\Anaconda3\lib\copy.py", line 220, in \_deepcopy\_tuple
y = [deepcopy(a, memo) for a in x]
File "C:\ProgramData\Anaconda3\lib\copy.py", line 220, in
y = [deepcopy(a, memo) for a in x]
File "C:\ProgramData\Anaconda3\lib\copy.py", line 180, in deepcopy
y = \_reconstruct(x, memo, \*rv)
File "C:\ProgramData\Anaconda3\lib\copy.py", line 280, in \_reconstruct
state = deepcopy(state, memo)
File "C:\ProgramData\Anaconda3\lib\copy.py", line 150, in deepcopy
y = copier(x, memo)
File "C:\ProgramData\Anaconda3\lib\copy.py", line 240, in \_deepcopy\_dict
y[deepcopy(key, memo)] = deepcopy(value, memo)
File "C:\ProgramData\Anaconda3\lib\copy.py", line 180, in deepcopy
y = \_reconstruct(x, memo, \*rv)
File "C:\ProgramData\Anaconda3\lib\copy.py", line 280, in \_reconstruct
state = deepcopy(state, memo)
File "C:\ProgramData\Anaconda3\lib\copy.py", line 150, in deepcopy
y = copier(x, memo)
File "C:\ProgramData\Anaconda3\lib\copy.py", line 240, in \_deepcopy\_dict
y[deepcopy(key, memo)] = deepcopy(value, memo)
File "C:\ProgramData\Anaconda3\lib\copy.py", line 180, in deepcopy
y = \_reconstruct(x, memo, \*rv)
File "C:\ProgramData\Anaconda3\lib\copy.py", line 280, in \_reconstruct
state = deepcopy(state, memo)
File "C:\ProgramData\Anaconda3\lib\copy.py", line 150, in deepcopy
y = copier(x, memo)
File "C:\ProgramData\Anaconda3\lib\copy.py", line 240, in \_deepcopy\_dict
y[deepcopy(key, memo)] = deepcopy(value, memo)
File "C:\ProgramData\Anaconda3\lib\copy.py", line 169, in deepcopy
rv = reductor(4)
TypeError: can't pickle \_thread.lock objects
```
I **need** to create my model in a function because I am optimizing hyperparameters and so I iterate over different models with different set of parameters (well I could probably do it in a loop but it's not as nice).<issue_comment>username_1: The problem is with lambda arguments: `minimum` and `span`. They are deduced from the input, but when you define the lambda layer like this:
```
x = Lambda(preprocess,arguments={'minimum':minimum,'span':span})(inputs)
```
... they are considered *independent arguments* that need to be serialized (as a context for the lambda). This results in an error, because both of them are TensorFlow tensors, not static values or numpy arrays.
Change your code to this:
```py
# `preprocess` encapsulates all intermediate values in itself.
def preprocess(X):
maximum = K.max(X)
minimum = K.min(X)
span = maximum - minimum
output = (X - minimum) / span
return output
inputs = Input(shape=(n_inputs,), name='input_tensor')
x = Lambda(preprocess)(inputs)
```
Upvotes: 2 <issue_comment>username_2: I had the same problem (Lambda layer + multiple arguments on [email protected] with [email protected] backend) and `model.save_weights(...)` instead of `model.save(...)` works - if you just want to load the trained weights later and you don't need to store the architecture.
Upvotes: 0 <issue_comment>username_3: See my answer on a [similar question here on Stackoverflow](https://stackoverflow.com/questions/47066635/checkpointing-keras-model-typeerror-cant-pickle-thread-lock-objects/55229794#55229794).
For your particular case, change these lines:
```
(...)
x = Lambda(preprocess)(inputs)
(...)
outputs = Lambda(postprocess)(outputs)
(...)
```
with these lines:
```
(...)
x = Lambda(lambda t: preprocess(t))(inputs)
(...)
outputs = Lambda(lambda t: postprocess(t))(outputs)
(...)
```
Upvotes: 0 |
2018/03/22 | 700 | 1,992 | <issue_start>username_0: I'm working in postgres 9.6 and still getting my head around json
I have a column with a JSON object that contains an array of numbers representing the recurrence frequency and the days of the week.
```
{"every":"1","weekdays":["1"]}
{"every":"1","weekdays":["1","3"]}
{"every":"1","weekdays":["1","2","3","4","5"]}
ROW1 -[1] : MON
ROW2 -[1,3] : MON , WED
ROW3 -[1,2,3,4,5] : MON , TUE , WED , THU , FRI
```
I want to expand these into columns such that:
```
|ROW- |MON | TUE| WED|THU|FRI|
------------------------------
|ROW1 |Y |N |N |N |N |
|ROW2 |Y |N |Y |N |N |
|ROW3 |Y |Y |Y |Y |Y |
```
I can get the elements out using `jsonb_array_elements(pattern)` but then what?
I thought I could use the 'contains' expression to build each column:
`pattern @> '{1}'`, `pattern @> '{2}'`, etc., but I couldn't construct an object that would give a hit.<issue_comment>username_1: Example data:
```
create table my_table(id serial primary key, pattern jsonb);
insert into my_table (pattern) values
('{"weekdays": [1]}'),
('{"weekdays": [1, 3]}'),
('{"weekdays": [1, 2, 3, 4, 5]}');
```
You can use the operator @> in this way:
```
select
id,
pattern->'weekdays' @> '[1]' as mon,
pattern->'weekdays' @> '[2]' as tue,
pattern->'weekdays' @> '[3]' as wed,
pattern->'weekdays' @> '[4]' as thu,
pattern->'weekdays' @> '[5]' as fri
from my_table
id | mon | tue | wed | thu | fri
----+-----+-----+-----+-----+-----
1 | t | f | f | f | f
2 | t | f | t | f | f
3 | t | t | t | t | t
(3 rows)
```
Upvotes: 3 [selected_answer]<issue_comment>username_2: It seems I was on the right track with 'contains' but I had confused myself about what was in the array. I should have been looking for a string, not a number:
```
, bookings.pattern->'weekdays' @> '"1"'::jsonb
```
Thanks to Pitto for the prompt to put the outer JSON in the question, which made it obvious.
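For completeness, a sketch of the full day-of-week expansion against the original string-array data (table and column names are illustrative):
```
select
  id,
  pattern->'weekdays' @> '"1"'::jsonb as mon,
  pattern->'weekdays' @> '"2"'::jsonb as tue,
  pattern->'weekdays' @> '"3"'::jsonb as wed,
  pattern->'weekdays' @> '"4"'::jsonb as thu,
  pattern->'weekdays' @> '"5"'::jsonb as fri
from bookings;
```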
Upvotes: 0 |
2018/03/22 | 362 | 1,416 | <issue_start>username_0: I am looking at this example
```
docker run --rm --volumes-from myredis -v $(pwd)/backup:/backup debian cp /data/dump.rdb /backup/
```
from Using Docker book.
Why do we need --rm flag?
Why do we have --volumes-from?<issue_comment>username_1: The `--rm` flag tells Docker Engine to remove the container once it exits. Without this flag, you need to manually remove the container after you stop it.
The `--volumes-from` flag mounts all the defined volumes from the referenced containers, it ensures the two containers mounts same volumes.
Upvotes: 1 <issue_comment>username_2: The idea here is that
* you have a redis container named `myredis` which has some `volumes` for persistent storage (that you'd like to backup).
* you run a temporary `debian` container that will save the backup to `your_current_dir/backup` and get removed.
---
1. `docker run --rm ... debian` runs the container and removes it after it exits
2. `--volumes-from myredis` this way the `debian` container will have access to the database
3. `-v $(pwd)/backup:/backup` this second volume is used to put the backup at your current dir `$(pwd)/backup`. If it wasn't used, the backup would have only been copied to `/backup` (inside the container) and later been removed together with the container. This way the backup persists.
4. `cp /data/dump.rdb /backup/` copies the actual files
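A companion example (assumed, not part of the original answer): the same pattern works in the other direction to restore a dump into the container's volume; you would typically restart the redis container afterwards so it picks the file up.
```
docker run --rm --volumes-from myredis -v $(pwd)/backup:/backup debian cp /backup/dump.rdb /data/
```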
Upvotes: 3 [selected_answer] |
2018/03/22 | 642 | 2,841 | <issue_start>username_0: There is a Kubernetes cluster on IBM Cloud Private with two workers.
I have one deployment which creates two pods. How can I force the deployment to place its pods on two different workers? That way, if I lose one ICP worker, I still have the other one with the needed pod.<issue_comment>username_1: You can create your pods as a Kubernetes DaemonSet. A DaemonSet ensures that all (or some) nodes run a copy of a pod. See the link below for details.
<https://kubernetes.io/docs/concepts/workloads/controllers/daemonset/>
Upvotes: 0 <issue_comment>username_2: In addition to @username_3 answer regarding scheduling policy in affinity mode there are two [different mods](https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#node-affinity-beta-feature) of affinity:
* requiredDuringSchedulingIgnoredDuringExecution
* preferredDuringSchedulingIgnoredDuringExecution.
When using ***requiredDuringSchedulingIgnoredDuringExecution***, the scheduler has to satisfy all rules before a pod can be scheduled.
If, for example, there are not enough nodes to place all pods, the scheduler will wait until enough nodes become available.
If you use ***preferredDuringSchedulingIgnoredDuringExecution***, the scheduler will try to place all replicas based on the highest score a node gets from the combination of the defined rules and their *weight*.
*Weight* is a parameter used along with a rule, and each rule can have a different weight. To calculate a score for a node, the following logic is used:
For every node, we iterate through the rules defined in the configuration (i.e. resource requests, requiredDuringScheduling, affinity expressions, etc.). Whenever a rule is matched, we add its weight value to the score for that node. Once all rules for all nodes are processed, we have a list of all nodes with their final scores. The node(s) with the highest score are the most preferred.
To summarize, a higher weight value increases the importance of a rule and helps the scheduler decide which node to choose.
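A sketch of what the 'soft' variant looks like in a pod spec (labels are illustrative); with this form scheduling still succeeds even when only one node is available:
```yaml
affinity:
  podAntiAffinity:
    preferredDuringSchedulingIgnoredDuringExecution:
    - weight: 100
      podAffinityTerm:
        labelSelector:
          matchExpressions:
          - key: app
            operator: In
            values:
            - my-app
        topologyKey: kubernetes.io/hostname
```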
Upvotes: 0 <issue_comment>username_3: If you want pods to not schedule on the same node, the correct concept that you will want to use is inter-pod anti-affinity. <https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#inter-pod-affinity-and-anti-affinity-beta-feature>
Observe:
```
spec:
replicas: 2
selector:
app: my-app
template:
metadata:
labels:
app: my-app
spec:
affinity:
podAntiAffinity:
requiredDuringSchedulingIgnoredDuringExecution:
- labelSelector:
matchExpressions:
- key: app
operator: In
values:
- my-app
topologyKey: kubernetes.io/hostname
```
Upvotes: 3 [selected_answer] |
2018/03/22 | 803 | 1,905 | <issue_start>username_0: I have data like this:
```
{
"-L8BpxbS70KYrZMQUF0W": {
"createdAt": "2018-03-22T16:33:57+08:00",
"email": "<EMAIL>",
"name": "ss"
},
"-KYrZMQUF0WL8BpxbS70": {
// etc.
}
}
```
Which I want to turn into this:
```
[{
 id: '-L8BpxbS70KYrZMQUF0W',
createdAt: "2018-03-22T16:33:57+08:00",
email: "<EMAIL>",
name: "ss"
}, {
 id: '-KYrZMQUF0WL8BpxbS70',
// etc.
}]
```
I started with this:
```
Object.keys(idBasedObjects).forEach(key => {
console.log(resp[key])
})
```
But I get `undefined`.
What's the best way of creating this array?<issue_comment>username_1: Get the keys and values using [`Object.entries()`](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Object/entries), and [`Array.map()`](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Array/map) them to the required form using [object spread](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Operators/Spread_syntax#Spread_in_object_literals):
```js
const obj = {"-L8BpxbS70KYrZMQUF0W":{"createdAt":"2018-03-22T16:33:57+08:00","email":"<EMAIL>","name":"ss"},"-KYrZMQUF0WL8BpxbS70":{}};
const result = Object.entries(obj).map(([id, props]) => ({
id,
...props
}));
console.log(result);
```
Upvotes: 4 [selected_answer]<issue_comment>username_2: Use `Object.keys`, `Object.assign` and `map`
```
var output = Object.keys(obj).map( s => Object.assign( obj[s], {id : s} ))
```
**Demo**
```js
var obj = {
"-L8BpxbS70KYrZMQUF0W": {
"createdAt": "2018-03-22T16:33:57+08:00",
"email": "<EMAIL>",
"name": "ss"
},
"-KYrZMQUF0WL8BpxbS70": {
"createdAt": "2018-03-22T16:33:57+08:00",
"email": "<EMAIL>",
"name": "ss2"
}
};
var output = Object.keys(obj).map(s => Object.assign(obj[s], {
id: s
}));
console.log(output);
```
Upvotes: 2 |
2018/03/22 | 1,069 | 3,619 | <issue_start>username_0: I want to create element, add it to the slice and change it by its address. I expect that the change of the element from the outside will change also a slice. But after the addition, a new slice is created. I use serialization, so the use of an slice of addresses is not suitable, also the addition of elements occurs in different goroutines, so accessing the last added element is also not suitable.
```
package main
import (
"fmt"
)
func main() {
var a []int
b := 1
fmt.Println("old addr:", &b)
// old addr: 0x10414020
a = append(a, b)
fmt.Println("new addr:", &a[0])
// new addr: 0x10414028
}
```
[play golang example](https://play.golang.org/p/c28m1fu9JH0 "example")<issue_comment>username_1: The quoted program behaves perfectly correctly - addresses denote storage locations; `b int` is one such location and the element inside the slice is another one.
You most probably need to store pointers not `int`s.
Now with regard to serialization - I am not sure what kind do you have in mind, but in general you could proceed like below:
```
type myArray []*int
var a myArray
//...
func (x myArray) Serialize() []byte { .... }
```
Where `Serialize` satisfies the interface used by your serializator.
Upvotes: 0 <issue_comment>username_2: This is not an issue of `append()` creating a new slice header and backing array.
This is an issue of you appending `b` to the slice `a`, a copy of the value of the `b` variable will be appended. You append a *value*, not a *variable*! And by the way, appending (or assigning) makes a copy of the value being appended (or assigned).
Note that the address of `b` and `a[0]` will also be different if you do not call `append()` but instead preallocate the slice and simply assign `b` to `a[0]`:
```
var a = make([]int, 1)
b := 1
fmt.Println("old addr:", &b)
// old addr: 0x10414024
a[0] = b
fmt.Println("new addr:", &a[0])
// new addr: 0x10414020
```
Try it on the [Go Playground](https://play.golang.org/p/AU5FMaav_aL).
The reason for this is because the variable `b` and `a` are distinct variables; or more precisely the variable `b` and `a`'s backing array reserving the memory for its elements (including the memory space for `a[0]`), so their addresses cannot be the same!
You cannot create a variable placed to the same memory location of another variable. To achieve this "effect", you have pointers at your hand. You have to create a pointer variable, which you can set to point to another, existing variable. And by accessing and modifying the *pointed* value, effectively you are accessing and modifying the variable whose address you stored in the pointer.
If you want to store "something" in the `a` slice through which you can access and modify the "outsider" `b` variable, the easiest is to store its address, which will be of type `*int`.
Example:
```
var a []*int
b := 1
fmt.Println("b's addr:", &b)
a = append(a, &b)
fmt.Println("addr in a[0]:", a[0])
// Modify b via a[0]:
*a[0] = *a[0] + 1
fmt.Println("b:", b)
```
Output (try it on the [Go Playground](https://play.golang.org/p/c5CqdavVnAQ)):
```
b's addr: 0x10414020
addr in a[0]: 0x10414020
b: 2
```
Upvotes: 3 [selected_answer]<issue_comment>username_3: Assigning values always lets the runtime allocate new memory for the copied value and that allocated memory will have another address. If you append to a slice of values, you will always copy the variable.
If you have to access the elements from different go routines, you will have to make it thread safe. You have to do that, whether you are using values or references.
Upvotes: 0 |
2018/03/22 | 1,090 | 3,362 | <issue_start>username_0: I want to stop a timer by clicking a button but I can't find the exact way.
I've tried to stop a timer by `clearInterval()` but I'm not sure if it is called properly.
This is my working code.
```html
Bootstrap
var sec = 0;
function pad(val) {
return val > 9 ? val : "0" + val;
};
setInterval( function(){
$("#seconds").html(pad(++sec%60));
$("#minutes").html(pad(parseInt(sec/60,10)));
}, 1000);
function myStopFunction() {
clearInterval(sec);
}
00:00
Stop
```<issue_comment>username_1: take that set interval into a variable then use clear interval
```
var myInterval = setInterval( function(){
$("#seconds").html(pad(++sec%60));
$("#minutes").html(pad(parseInt(sec/60,10)));
}, 1000);
function myStopFunction() {
clearInterval(myInterval);
}
```
Upvotes: 5 [selected_answer]<issue_comment>username_2: Add a global variable, in this case `myTimer`, to hold the timer. In `clearInterval`, use `myTimer` to stop the timer.
```html
Bootstrap
var sec = 0;
function pad ( val ) { return val > 9 ? val : "0" + val; }
var myTimer= setInterval( function(){
$("#seconds").html(pad(++sec%60));
$("#minutes").html(pad(parseInt(sec/60,10)));
}, 1000);
function myStopFunction() {
clearInterval(myTimer);
}
00:00
Stop
```
Upvotes: 2 <issue_comment>username_3: Add this
```
setTimeout(function(){
clearInterval(sec);
}, 1000);
```
and there is no need for `function myStopFunction()`; remove it.
Upvotes: -1 <issue_comment>username_4: ```html
Bootstrap
var sec = 0;
function pad ( val ) { return val > 9 ? val : "0" + val; }
var setIntValue = setInterval( function(){
$("#seconds").html(pad(++sec%60));
$("#minutes").html(pad(parseInt(sec/60,10)));
}, 1000);
function myStopFunction() {
clearInterval(setIntValue);
}
00:00
Stop
```
Upvotes: -1 <issue_comment>username_5: Made corrections and also added "Start again" and "Clear" buttons. Working fine.
```html
Bootstrap
var sec = 0;
function pad ( val ) { return val > 9 ? val : "0" + val; }
var func;
function timerstart(){
func = setInterval( function(){
$("#seconds").html(pad(++sec%60));
$("#minutes").html(pad(parseInt(sec/60,10)));
}, 1000);
}
timerstart();
function myStopFunction() {
clearInterval(func);
}
function myClearFunction(){
myStopFunction();
$("#seconds").html(pad(00));
$("#minutes").html(pad(00));
sec = 0;
}
00:00
Stop
Start Again
Clear
```
Upvotes: 1 <issue_comment>username_6: Proper stopwatch timer code counting up from 00:00:00 for one hour:
```html
Page Title
var upgradeTime = 1;
var seconds = upgradeTime;
function timer() {
var days = Math.floor(seconds/24/60/60);
var hoursLeft = Math.floor((seconds) - (days*86400));
var hours = Math.floor(hoursLeft/3600);
var minutesLeft = Math.floor((hoursLeft) - (hours*3600));
var minutes = Math.floor(minutesLeft/60);
var remainingSeconds = seconds % 60;
function pad(n) {
return (n < 10 ? "0" + n : n);
}
document.getElementById('countdown').innerHTML =pad(hours) + ":" + pad(minutes) + ":" + pad(remainingSeconds);
if (seconds == 3600) {
clearInterval(countdownTimer);
document.getElementById('countdown').innerHTML = "Completed";
} else {
seconds++;
}
}
var countdown = setInterval('timer()', 1000);
```
Upvotes: 1 |
2018/03/22 | 462 | 1,287 | <issue_start>username_0: I have a variable with the value 9:30 am - 12:00 pm, i.e.
```
$val=$shift (o/p 9:30 am - 12:00pm)
```
I want to convert it to 09:30 am - 12:00pm, i.e. I need to add a leading 0 in front of the hour when it has only one digit.<issue_comment>username_1: If you are using the `date` function then you can format the hour with `h` (two digits, with a leading zero).
For example
```
echo date('h:ia'); // output 09:50am
```
See [Referance](http://php.net/manual/en/function.date.php)
If you have values from DataBase as you mentioned in comment then you need to apply PHP's built-in function [`strtotime()`](http://php.net/manual/en/function.strtotime.php) before formatting the time.
For Example
```
echo date( 'h:ia', strtotime('9:30 am') ); // output 09:30am
```
Upvotes: 2 <issue_comment>username_2: Try
```
echo date('h:ia',strtotime("9:30 am"))." - ".date('h:ia',strtotime("12:00pm"));
```
Upvotes: 0 <issue_comment>username_3: Assuming you are saving the time in your database as a string like `9:30 am - 12:00pm`, you can just split it and then format each part.
```
$val= "9:30 am - 12:00pm";
$splittedString = explode('-', $val);
$time1 = date('h:ia',strtotime($splittedString[0]));
$time2 = date('h:ia',strtotime($splittedString[1]));
echo $time1." - ".$time2;
```
Upvotes: 2 [selected_answer] |
2018/03/22 | 775 | 2,444 | <issue_start>username_0: I have a form that saves a date into a mysql database. Before the date is saved I need to convert the format. How can this be done more elegantly than the code I have below? I know it can but not sure how to do it correctly.
```
//Get the date (22/03/2018)
$start=$_POST['start'];
//Remove the /and replace with -
$starttime = str_replace('/', '-', $start);
convert the format to y-m-d
$starttime=date('Y-m-d', strtotime($starttime));
```<issue_comment>username_1: Use `strtotime` on the date string and format it as you want:
```
$datestring = $_POST['start'];
$dateval = strtotime($datestring );
$start = date('Y-m-d', $dateval );
```
Or, alternatively:
```
$datestring = $_POST['start'];
$dateval = new DateTime($datestring);
$start = $dateval->format('Y-m-d');
```
Is it the correct date form? You can read some info here: [php.net/manual/en/function.strtotime.php](http://php.net/manual/en/function.strtotime.php)
One part from a note: "...if the separator is a slash (/), then the American m/d/y is assumed; whereas if the separator is a dash (-) or a dot (.), then the European d-m-y format is assumed..."
So this may cause a problem.
If the day, month and year positions are fixed, you can use username_5's approach as well.
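A quick sketch of that pitfall (my own example, not from the original answer):

```
var_dump(strtotime('22/03/2018'));                // bool(false): with "/" PHP assumes m/d/y, and 22 is not a valid month
var_dump(date('Y-m-d', strtotime('22-03-2018'))); // string(10) "2018-03-22": with "-" PHP assumes d-m-Y
```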
Upvotes: 1 <issue_comment>username_2: Take a look at [Carbon](https://carbon.nesbot.com/docs/)
It's a minimal library that makes working with dates really flexible; you can install it via Composer by typing `composer require nesbot/carbon`.
Which will enable you to do something like
```
Carbon::createFromFormat('d/m/Y', $date)->format('Y-m-d');
// Returns something like 2018-03-22 for an input of 22/03/2018
```
From there you can do pretty much whatever you want with it!
Upvotes: 0 <issue_comment>username_3: Another possible solution (the simplest one):
```
$starttime = preg_replace('/(\d+)\/(\d+)\/(\d+)/', "$3-$2-$1", $_POST['start']);
```
Upvotes: 0 <issue_comment>username_4: The best way
============
is using the class **DateTime** is: <http://php.net/manual/es/datetime.createfromformat.php>
```
$fecha = DateTime::createFromFormat('d/m/Y', $_POST['start']);
echo $fecha->format('Y-m-d');
```
Upvotes: 1 <issue_comment>username_5: you should try `list()` with `preg_split()`
```
$start=$_POST['start'];
//Get the date (22/03/2018)
list($day,$month,$year) = preg_split('/[\/\s:]+/', $start);
echo $date = $year . '-' . $month. '-' . $day ;
```
Upvotes: 0 |
2018/03/22 | 965 | 3,301 | <issue_start>username_0: I have a table "tbdetails" where students' details are kept. Put simply, I want to limit the table to only hold 10 records max. My form for the table "frmDetails" has the following OnCurrent Event but it did not work:
```
Private Sub Form_Current()
Forms(Detail).MaxRecords = 10
End Sub
```
I looked online and found that but I could just so easily insert 11 and 12 records. Any answer is welcome, VBA is not required (if it is possible to go without it ) Can this be done simply from the properties menu or something?
---
**EDIT:**
Now when I saved I get
>
> Runtime error 438: Object Doesn't Support this Property or Method
>
>
>
So there is definitely something wrong here<issue_comment>username_1: If you want to limit the total number of records that can be added in the table with a certain form, you could use the following code:
```
Private Sub Form_Current()
Dim rs As DAO.Recordset
Set rs = Me.RecordsetClone 'Clone because we don't want to move the current record
If Not rs.EOF Then rs.MoveLast 'Initialize recordset
If rs.RecordCount >= 10 Then
Me.AllowAdditions = False
Else
Me.AllowAdditions = True
End If
End Sub
```
Upvotes: 1 <issue_comment>username_2: It takes a little more.
See in-line comments for usage:
```
Public Sub SetFormAllowAdditions( _
ByVal frm As Form, _
Optional ByVal RecordCountMax As Long = 1)
' Limit count of records in (sub)form to that of RecordCountMax.
' 2016-10-26, Cactus Data ApS, CPH
'
' Call in (sub)form:
'
' Private Sub LimitRecords()
' Const RecordsMax As Long = 5
' Call SetFormAllowAdditions(Me.Form, RecordsMax)
' End Sub
'
' Private Sub Form_AfterDelConfirm(Status As Integer)
' Call LimitRecords
' End Sub
'
' Private Sub Form_AfterInsert()
' Call LimitRecords
' End Sub
'
' Private Sub Form_Current()
' Call LimitRecords
' End Sub
'
' Private Sub Form_Open(Cancel As Integer)
' Call LimitRecords
' End Sub
'
' If the record count of a subform is to be limited, also
' the parent form must be adjusted:
'
' Private Sub Form_Current()
' Call SetFormAllowAdditions(Me.Form)
' End Sub
'
Dim AllowAdditions As Boolean
With frm
AllowAdditions = (.RecordsetClone.RecordCount < RecordCountMax)
If AllowAdditions <> .AllowAdditions Then
.AllowAdditions = AllowAdditions
End If
End With
End Sub
```
Upvotes: 1 <issue_comment>username_3: ```
Private Sub Form_Current()
Me.AllowAdditions = (Nz(DCount("[IDFieldName]","[TableName]",""),0)<10)
End Sub
```
Based on comments is seems this is enough:
```
Me.AllowAdditions = (DCount("[IDFieldName]","[TableName]")<10)
```
Upvotes: 3 [selected_answer]<issue_comment>username_4: >
> *"I want to limit the table to only hold 10 records max."*
>
>
>
Apply a check constraint to the table so it will not accept more than 10 rows.
Create a throwaway procedure and run it once ...
```vba
Public Sub limit_rows()
Dim strSql As String
strSql = "ALTER TABLE tblDetails" & vbCrLf & _
"ADD CONSTRAINT 10_rows_max" & vbCrLf & _
"CHECK ((SELECT Count(*) FROM tblDetails) < 11);"
Debug.Print strSql
CurrentProject.Connection.Execute strSql
End Sub
```
Upvotes: 2 |
2018/03/22 | 223 | 699 | <issue_start>username_0: i want to add a password to this line :
```
define('FTP_PASS' <PASSWORD>')
```
but wp-config will not recognize @ and \ as characters, and for that reason my website goes down.
How can I add those characters as literal text there?

thanks!<issue_comment>username_1: Typo? you're missing a comma between the two parameters and the quotes on the screenshot do not look regular.
<http://php.net/manual/en/function.define.php>
```
define('FTP_PASS','<PASSWORD>');
```
Upvotes: 3 [selected_answer]<issue_comment>username_2: ```
define('FTP_PASS', '<PASSWORD>');
```
Escape the characters that need it - inside a single-quoted PHP string only the backslash and the single quote have to be escaped (as `\\` and `\'`), while `@` can be used as-is - and don't forget the comma between the two arguments.
Upvotes: 0 |
2018/03/22 | 362 | 1,271 | <issue_start>username_0: I have a Logger class like this:
```
public class Logger : ILogging
{
private ILogger _logger;
public Logger()
{
_logger = LoggerFactory.Resolve();
}
public void log(string priority, string message)
{
//to do the code here
}
```
}
and this is my config file:
```
```
I want the "Name" rules to map to a specific logging method in NLog;
for example, if the user calls
```
logger.log("Priority1","errorMessage");
```
the function should know to call logger.Error("error").
I searched Google but didn't find a good solution.
**UPDATE**
```
public void log( Priority priority, string message)
{
string currentLogLevel;
_logger = NLog.LogManager.GetLogger(priority.ToString());
LogEventInfo logEvent = new LogEventInfo(currentLogLevel , _logger.Name, message);
_logger.Log(logEvent);
}
```
2018/03/22 | 727 | 1,995 | <issue_start>username_0: I need some help on this, I try to use folium in my code but keep getting this error message :
>
> AttributeError: Map object has no attribute 'create_map'
>
>
>
Here is what I get (and there is no create_map in the list):
```
dir(folium.Map)
['__class__', '__delattr__', '__dict__', '__dir__', '__doc__', '__eq__',
'__format__', '__ge__', '__getattribute__', '__gt__', '__hash__',
'__init__', '__init_subclass__', '__le__', '__lt__', '__module__',
'__ne__', '__new__', '__reduce__', '__reduce_ex__', '__repr__',
'__setattr__', '__sizeof__', '__str__', '__subclasshook__', '__weakref__',
'_get_self_bounds', '_repr_html_', '_repr_png_', '_to_png', 'add_child' ,
'add_children', 'add_tile_layer', 'add_to', 'choropleth', 'fit_bounds',
'get_bounds', 'get_name', 'get_root', 'render', 'save', 'to_dict', 'to_json']
```
---
The code is :
```
import folium
map_osm = folium.Map(location=[45.5236, -122.6750])
map_osm.create_map(path='osm.html')
```
P.S : folium version 0.5.0 // Python 3.6.4<issue_comment>username_1: You can use the `save` method to save maps as HTML.
e.g
```
map_osm.save('osm.html')
```
It is also mentioned in the output of `dir(folium.Map)`
[](https://i.stack.imgur.com/hF5I5.png)
Upvotes: 2 <issue_comment>username_2: This question seems easy for me:
```
import folium
map_osm = folium.Map(location=[45.5236, -122.6750])
map_osm.save('/Users/YourName/Desktop/osm.html')  # this needs to be the full path
```
Then you will find osm.html in the directory you chose. Open it with your web browser and you can see the map.
[](https://i.stack.imgur.com/kqBqh.png)
Upvotes: 2 <issue_comment>username_3: ```
import folium
map_osm = folium.Map(location=[45.5236, -122.6750])
map_osm.save('osm.html')  # you will find osm.html in the same folder as your script
```
Upvotes: 0 |
2018/03/22 | 587 | 1,996 | <issue_start>username_0: I am using flake8 and pylint via [ALE](https://github.com/w0rp/ale) in vim.
I know how to disable individual errors/warnings for each of these linters in their respective config files.
How can I keep the `line-too-long` checks *except* for the shebang line at the start of the file (if present)?
If the first line is not a shebang line, it should still complain about too-long lines.
So if the max line length is 5 (for sake of example), with this file:
```
#!/run/stuff
x=3
print(x)
```
They should complain about the third line but not the first one.
But with this file:
```
x = 1 + 1 + 1
# Print the result
print(x)
```
It should complain about all three lines.<issue_comment>username_1: Make a config file by running `pylint --generate-rcfile`. See here for more: <https://docs.pylint.org/en/1.6.0/run.html>
Under the `[MESSAGES CONTROL]` section, add `line-too-long` to the list for `disable=`.
On the second line of each python file, you re-enable `line-too-long`.
```
#!/usr/bin/env python3
# pylint: enable=line-too-long
x=3
print(x) # make this longer than the enable line
```
I set the line limit to 30, so pylint complains about line 4. I added gratuitous whitespace to the shebang line, but the length is still overlooked by pylint.
Upvotes: 2 <issue_comment>username_2: With thanks to [<NAME>co](https://stackoverflow.com/users/1953283/ian-stapleton-cordasco), I ended up submitting a patch to pycodestyle (which is used by flake8) to ignore the length of shebang lines.
So now to fix my ALE setup, I can replace the pycodestyle script my copy of flake8 is using with the [latest version from GitHub](https://github.com/PyCQA/pycodestyle), and disable line-too-long checks in pylint while leaving them on in flake8/pycodestyle. That way without modifying my scripts in any way I still get linting for line length everywhere else, without getting redundant warnings for long shebangs.
Upvotes: 1 [selected_answer] |
2018/03/22 | 1,739 | 5,649 | <issue_start>username_0: I have a list of athletes(name, score, place) with names and scores already set. I need to get their place number in the competition depending on the score.
At the moment I keep them in a `TreeSet`, so they are already sorted in ascending order. If all the scores are different, I can just do this:
```
int place=1;
for (Athlete athlete:allAthletes){
athlete.setRelativePlace(place++);
}
```
The problem is, if two athletes have the same score, they have to have something like "1-2" or "8-9-10" as assigned places. E.g. the winners both got 8000 points, they both should have a String "1-2" in their place field. The next person - if her score is unique - will get the normal "3", etc.
Is there a way to this in a few simple lines without having to do two loops and adding extra fields in Athlete class?<issue_comment>username_1: This is not a complete answer, but it is not too difficult to generate the ranks you want in a single pass over all athletes. Rather than using `8-9-10` as a rank label I would recommend just using `8` for all three athletes. That is, there are three athletes tied for 8th place.
```
int rank = 0;
int score = -1;
for (Athlete a : allAthletes) {
int currScore = a.getScore();
if (currScore != score) {
++rank;
score = currScore;
}
a.setRelativePlace(rank);
}
```
This ranking system is what is referred to as the "dense rank" in database jargon. Using this scheme, if there were two athletes tied for first place, and three tied for third place, we would have the following ranks:
```
score | dense rank
100 | 1
100 | 1
95 | 2
80 | 3
80 | 3
80 | 3
```
**Edit:** Assuming you are stuck with your requirement, then consider the following script. Rather than relying on your class, which I can't test unless I guess what your code looks like, I just print rankings on a sorted list of scores, in ascending order.
```
List scores = new ArrayList<>();
scores.add(80);
scores.add(80);
scores.add(80);
scores.add(90);
scores.add(100);
scores.add(100);
Collections.sort(scores);
int pos = 0;
int score = -1;
int prevScore = -1;
int a = 0;
for (int currScore : scores) {
System.out.println("DEBUG: current score is: " + currScore);
if (score == -1) {
score = currScore;
prevScore = currScore;
++a;
continue;
}
if (score != currScore || a == scores.size() - 1) {
String rank = "";
prevScore = score;
// this covers the case of the last score, or group of scores
if (a == scores.size() - 1) ++a;
for (int i=0; i < a - pos; ++i) {
if (i > 0) rank += "-";
rank += (pos + i + 1);
}
for (int i=pos; i < a; ++i) {
System.out.println("Score " + prevScore + " has rank " + rank);
}
score = currScore;
pos = a;
}
++a;
}
Score 80 has rank 1-2-3
Score 80 has rank 1-2-3
Score 80 has rank 1-2-3
Score 90 has rank 4
Score 100 has rank 5-6
Score 100 has rank 5-6
```
Follow the link below for a running demo of the above code snippet.
[Demo
----](http://rextester.com/MQYU28860)
Upvotes: 1 <issue_comment>username_2: Regarding "At the moment I keep them in a TreeSet, so they are already sorted in ascending order": **a TreeSet won't keep two Athletes with the same score if you compare Athletes by score when storing them in the TreeSet.**
Assuming relativePlace is of int type, it's better to store the Athletes in a List and sort them with a Comparator. Athletes with the same score will be listed in insertion order.
```
public class Athlete {
private String name;
private int score;
private int relativePlace;
}
```
Sorting will be like..
```
athletes.sort((o1, o2) -> Integer.compare(o1.getScore(), o2.getScore()));
int place = 1;
for (Athlete athlete : athletes) {
    athlete.setRelativePlace(place++);
    System.out.println(athlete.getName() + ":" + athlete.getScore() + ":" + athlete.getRelativePlace());
}
```
Upvotes: 0 <issue_comment>username_3: As one now merely has a `setRelativePlace(int)`, one could store the starting place:
```
int priorScore = -1;
int priorPlace = -1;
int place = 0;
for (Athlete athlete: allAthletes) {
int score = athlete.getScore();
++place;
if (score != priorScore) {
priorScore = score;
priorPlace = place;
}
athlete.setRelativePlace(priorPlace);
}
```
Having a data structure for a range of places one could patch the last found place:
```
static class Place {
int start;
int end; // inclusive
@Override
public String toString() {
if (start == end) {
return Integer.toString(start);
}
return IntStream.rangeClosed(start, end)
.mapToObj(Integer::toString)
.collect(Collectors.joining("-"));
}
}
int priorScore = -1;
Place priorPlace = null;
int place = 0;
for (Athlete athlete: allAthletes) {
int score = athlete.getScore();
++place;
if (score != priorScore) {
priorScore = score;
priorPlace = new Place();
priorPlace.start = place;
}
priorPlace.end = place;
athlete.setRelativePlace(priorPlace);
}
```
If allAthletes were a sorted List and the athlete had two int place fields (a start and an end):
```
int priorScore = -1;
int priorPlace = -1;
for (int place = 0; ; ++place) {
    boolean atEnd = place == allAthletes.size();
    if (atEnd || allAthletes.get(place).getScore() != priorScore) {
        // the previous group (if any) ends at place - 1
        for (int p = Math.max(priorPlace, 0); p < place; ++p) {
            allAthletes.get(p).setTillPlace(place - 1);
        }
    }
    if (atEnd) {
        break;
    }
    Athlete athlete = allAthletes.get(place);
    int score = athlete.getScore();
    if (score != priorScore) {
        priorScore = score;
        priorPlace = place;
    }
    athlete.setRelativePlace(priorPlace);
}
```
Upvotes: 0 |
2018/03/22 | 1,935 | 6,058 | <issue_start>username_0: Google Apps Script to pull data from API. I'd like to parse out the information according to the relevant headers.
```js
function Fraud2() {
var ret = "no value";
var response = UrlFetchApp.fetch("https://fraudshield.24metrics.com/api/v1/reports/fraud.json?tracker_id=905&group[]=sub_id&group[]=partner&date_start=2018-01-18&date_end=2018-01-18&timezone=UTC&user_id=XXX&api_token=XXX",{muteHttpExceptions:true})
var user_id = "XXX";
var api_token = "XXX";
var sheet = SpreadsheetApp.getActiveSheet();
sheet.appendRow([response]);
}
```
The return is pushed into one single cell like so:
```
{"results":[{"tracker_id":905,"conversion":7883,"click":0,"goal":0,"approved":6511,"rejected":1372,"tracker":"Tatoo
Integration","conversion_rate":"N\/A"},{"tracker_id":906,"conversion":1868,"click":0,"goal":0,"approved":1682,"rejected":186,"tracker":"Aise
Integration","conversion_rate":"N\/A"},{"tracker_id":933,"conversion":413,"click":0,"goal":0,"rejected":290,"approved":123,"tracker":"Tatoo
Invalids Integration","conversion_rate":"N\/A"}]}
```
I tried [this](https://stackoverflow.com/questions/48997748/parse-received-information-into-columns-rows) without success.
How can I get the results arranged neatly into columns?
2018/03/22 | 354 | 1,290 | <issue_start>username_0: ```
1|Health & Fitness
2|Cardiovascular
3|General Fitness
4|Pilates
5|Technology
6|Artificial Intelligence
7|Advance Learning Machines
8|Virtual and Augmented Reality.
```
This is the data in my table.
**I want to get data in between Health & Fitness & Technology**<issue_comment>username_1: Try a query like this
```
SELECT * FROM topics
where
id >= ( SELECT id from topics WHERE name = 'Health & Fitness')
AND
id <= ( SELECT id from topics WHERE name = 'Technology')
order by id;
```
**query with auto min / max**
```
SELECT * FROM topics
where
id >= ( SELECT min(id) from topics WHERE name IN('Health & Fitness','Technology'))
AND
id <= ( SELECT max(id) from topics WHERE name IN('Health & Fitness','Technology'))
order by id;
```
Upvotes: 2 [selected_answer]<issue_comment>username_2: If there are no other fields in the table, then this task can not be solved with some simple syntax. The problem is that data in a relation (table) is unsorted and in general, if you run simple select statements like
```
select * from topics
```
you may get records in any order.
However, if you have a field by which you can sort or set a range, then you can try the above solution by username_1.
Upvotes: -1 |
2018/03/22 | 1,050 | 3,127 | <issue_start>username_0: I have a string with `0x80` in it. Its string representation is `serialno�` and its hex representation is `73 65 72 69 61 6C 6E 6F 80`. I want to remove `0x80` from the string without converting the string to a hex string. Is that possible in Java? I tried `lastIndexOf(0x80)`, but it returns -1.
my code is (also you can find on <https://ideone.com/3p8wKT>) :
```
public static void main(String[] args) {
String hexValue = "73657269616C6E6F80";
String binValue = hexStringToBin(hexValue);
System.out.println("binValue : " + binValue);
int index = binValue.lastIndexOf(0x80);
System.out.println("index : " + index);
}
public static String hexStringToBin(String s) {
int len = s.length();
byte[] data = new byte[len / 2];
for (int i = 0; i < len; i += 2) {
data[i / 2] = (byte) ((Character.digit(s.charAt(i), 16) << 4) + Character.digit(s.charAt(i + 1), 16));
}
return new String(data);
}
```<issue_comment>username_1: This is working for me :
```
int hex = 0x6F;
System.out.println("serialno€".lastIndexOf((char)hex));
```
Output :
```
7
```
Upvotes: 0 <issue_comment>username_2: It's because you converted the `�` symbol to hex incorrectly (`0x80`). One symbol in `UTF-8` can take one byte or more, and in your case the `�` symbol is the Unicode replacement character, which has the value `65533`, i.e. `0xFFFD`. So, if you replace your code with
```
int index = variable.lastIndexOf(0xFFFD);
//index will be 8
```
all will work fine.
Code snippet to proof my words:
```
String variable = "serialno�";
for (char c : variable.toCharArray())
System.out.print(((int)c)+ " ");
// 115 101 114 105 97 108 110 111 65533
```
**UPDATE**
You've made a mistake in `hexStringToBin` function. Replace it with
```
public static String hexStringToBin(String s) {
int len = s.length();
char[] data = new char[len / 2];
for (int i = 0; i < len; i += 2) {
data[i / 2] = (char) ((Character.digit(s.charAt(i), 16) << 4) + Character.digit(s.charAt(i + 1), 16));
}
return new String(data);
}
```
and all will work fine.
Upvotes: 1 <issue_comment>username_3: Change your hex string method to map directly to characters.
```
char[] data = new char[len / 2];
for (int i = 0; i < len; i += 2) {
data[i / 2] = (char) ((Character.digit(s.charAt(i), 16) << 4) + Character.digit(s.charAt(i + 1), 16));
}
return new String(data);
```
The exchange between a String and byte[] requires an encoding.
Your hex string seems to be a string representation of bytes/characters. It would appear you had an original String -> converted it to your hex string, but we don't know the encoding.
If you want to say that each pair of characters maps to the corresponding character, eg "80" -> char c = 0x80; Then you can achieve that by using a char[], which doesn't get encoded/decoded when creating a string.
If you use a byte[] (as you have done in your example), then it will get decoded and invalid characters get mapped to 0xFFFD, which is [unicode replacement character](https://www.fileformat.info/info/unicode/char/0fffd/index.htm).
Upvotes: 3 [selected_answer] |
2018/03/22 | 958 | 3,061 | <issue_start>username_0: ```
copy_deliverable_script_tomaster(args.Software_name.value,function(state){
res.end("added")
}
)
function copy_deliverable_script_tomaster(software_name,callback){
client.scp('./Temporary_software_files/folder/', {
host: 'ip',
username: 'centos',
privateKey: String(require("fs").readFileSync('./foreman_keypairs/coe-
central.pem')),
path: '/home/centos/Software_with_files/'+software_name
}, function(err,response) {
if(err){
console.log(err)
}
else{
console.log("after copy in master")
return callback(response);
}
})
}
```
I have used the above code to copy large files to the remote machine.
The copy continues on the remote machine, but the response ("no content") comes back before the copy completes.
`console.log("after copy in master")` will be printed only after the copy is completed.
Unable to get the response.
2018/03/22 | 1,017 | 3,604 | <issue_start>username_0: I wrote this code and it all works fine, but there is a warning telling me to sanitize the SQL parameter.
```
private DataSet ExcelToDataSet(string fileData)
{
DataSet ds = new DataSet();
string connectionString = GetConnectionString(fileData);
using (OleDbConnection conn = new OleDbConnection(connectionString))
{
conn.Open();
OleDbCommand cmd = new OleDbCommand();
cmd.Connection = conn;
// Get all Sheets in Excel File
DataTable dtSheet = conn.GetOleDbSchemaTable(OleDbSchemaGuid.Tables, null);
// Loop through all Sheets to get data
foreach (DataRow dr in dtSheet.Rows)
{
string sheetName = dr["TABLE_NAME"].ToString();
if (!sheetName.EndsWith("$"))
continue;
// Get all rows from the Sheet
cmd.CommandText = "SELECT * FROM [" + sheetName + "]";
DataTable dt = new DataTable();
dt.TableName = sheetName;
OleDbDataAdapter da = new OleDbDataAdapter(cmd);
da.Fill(dt);
ds.Tables.Add(dt);
}
cmd = null;
conn.Close();
}
return (ds);
}
```
I have to sanitize the following line
```
cmd.CommandText = "SELECT * FROM [" + sheetName + "]";
```
2018/03/22 | 1,088 | 3,572 | <issue_start>username_0: I've written a program that creates 4 threads, each of which sorts 20,000 numbers from low to high 50 times. I've run this test several times on .NET Core 2.0 and .NET Framework 4.6.1. In this test .NET Framework always outperforms .NET Core.
**Setup**
* .NET Core in release mode & published
* Windows 10, i7 duo core, 4 threads (hyperthreading)
The following code has been used to benchmark the two frameworks.
```
static void Main()
{
const int amountParallel = 4;
var globalStopwatch = new Stopwatch();
globalStopwatch.Start();
var tasks = new Task[4];
for (int i = 0; i < amountParallel; i++)
{
tasks[i] = Start();
}
Task.WaitAll(tasks);
globalStopwatch.Stop();
Console.WriteLine("Averages: {0}ms", tasks.SelectMany(r => r.Result).Average(x => x));
Console.WriteLine("Time completed: {0}", globalStopwatch.Elapsed.TotalMilliseconds);
}
private static Task Start()
{
return Task.Factory.StartNew(() =>
{
var numbersToSort = new int[20000];
var globalStopwatch = new Stopwatch();
var individualStopwatch = new Stopwatch();
var stopwatchTimes = new double[50];
int temp;
globalStopwatch.Start();
for (int i = 0; 50 > i; i++)
{
Console.WriteLine("Running task: {0}", i);
numbersToSort = Enumerable.Range(0, 20000).Reverse().ToArray();
individualStopwatch.Start();
for (int indexNumberArray = 0; numbersToSort.Length > indexNumberArray; indexNumberArray++)
{
for (int sort = 0; numbersToSort.Length - 1 > sort; sort++)
{
if (numbersToSort[sort] > numbersToSort[sort + 1])
{
temp = numbersToSort[sort + 1];
numbersToSort[sort + 1] = numbersToSort[sort];
numbersToSort[sort] = temp;
}
}
}
individualStopwatch.Stop();
Console.WriteLine("Task {0} completed, took: {1}ms", i, Math.Round(individualStopwatch.Elapsed.TotalMilliseconds));
stopwatchTimes[i] = individualStopwatch.Elapsed.TotalMilliseconds;
individualStopwatch.Reset();
}
globalStopwatch.Stop();
Console.WriteLine("Total time: {0}s", Math.Round(globalStopwatch.Elapsed.TotalSeconds, 2));
Console.WriteLine("Average: {0}ms", Math.Round(stopwatchTimes.Average(time => time)));
return stopwatchTimes;
}, TaskCreationOptions.LongRunning);
}
```
Test results:
**.NET Core**
* Average: 761ms
* Total time: 38s
**.NET Framework**
* Average: 638ms
* Total time: 32s
.NET Core isn't only slower on CPU-related tasks; it's also slower on disk I/O tasks.
Any ideas why .NET Core is a bit slower on this part? Are there changes I can make to improve the performance of .NET Core?<issue_comment>username_1: .NET Framework projects default to 32-bit code. This option is visible in the build settings of a project and selected by default. .NET Core projects default to 64-bit code. If you untick the "Prefer 32-bit" box you will notice that .NET Framework drops in performance.
>
> Another point of note is that the desktop x86 JIT is a separate code
> base from the x64 JIT. For 64-bit, both .NET Framework and .NET Core
> use RyuJIT now; for 32-bit .NET Core still uses RyuJIT, but .NET
> Framework uses the legacy JIT, so you've got both different bitness
> and a different jitter.
>
>
>
The answers were provided in the comments by <NAME> and <NAME>.
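For reference, a sketch of the project-file settings behind that checkbox (my own illustration, assuming a classic .NET Framework csproj; `PlatformTarget` and `Prefer32Bit` are the standard MSBuild property names):

```
<!-- build with the 64-bit RyuJIT by turning off "Prefer 32-bit" -->
<PropertyGroup>
  <PlatformTarget>AnyCPU</PlatformTarget>
  <Prefer32Bit>false</Prefer32Bit>
</PropertyGroup>
```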
Upvotes: 5 [selected_answer]<issue_comment>username_2: This should be fixed in .Net Core 2.0.7 and .Net Framework 4.7.2, via <https://github.com/dotnet/coreclr/pull/15323>
Root cause was a bug in the JIT's Common Subexpression Elimination (aka CSE) optimization. See issue (linked from PR) for gory details.
Upvotes: 3 |
2018/03/22 | 542 | 1,746 | <issue_start>username_0: ```
document.write("<table border=2 width=50%");
for (var i = 0; i < ${SALE_DATA.SALE_ORDER_ITEMS_LIST.length}; i++) {
document.write("<tr>");
document.write("<td>" + ${SALE_DATA.SALE_ORDER_ITEMS_LIST[i].WEIGHT} + "</td>");
document.write("<td>" + i + "</td>");
document.write("</tr>");
}
document.write("</table>");
```
I am trying to pass the i in `${SALE_DATA.SALE_ORDER_ITEMS_LIST[i].WEIGHT}`
Here SALE_DATA is a constant array in Angular 5,
but it is giving the error below:
ERROR ReferenceError: i is not defined.
Please suggest a way to pass the `i` value in the loop.
If I hard-code `${SALE_DATA.SALE_ORDER_ITEMS_LIST[0].WEIGHT}` instead,
the output is like below
thanks in advance!!
[output](https://i.stack.imgur.com/Lk05l.png)<issue_comment>username_1: You may use an internal iterator, like a forEach loop. This frees you from taking care of how the loop works.
Something like this can do the job.
```
${SALE_DATA.SALE_ORDER_ITEMS_LIST}.forEach(function(curVal, i){
document.write("<tr>");
document.write("<td>" + curVal.WEIGHT + "</td>");
document.write("<td>" + i + "</td>");
document.write("</tr>");
})
```
Upvotes: -1 <issue_comment>username_2: You can use a template literal (backticks) to create a string with dynamic variables in it.
```
for (var i = 0; i < `${SALE_DATA.SALE_ORDER_ITEMS_LIST.length}`; i++) {
console.log(i)
}
```
Above code will work fine but it's not recommended to use template literals in this manner.
>
> Template literals are string literals allowing embedded expressions.
> You can use multi-line strings and string interpolation features with
> them. They were called "template strings" in prior editions of the
> ES2015 specification.
>
>
>
Upvotes: 1 |
2018/03/22 | 528 | 1,980 | <issue_start>username_0: I am trying to understand, why do we need `Offer` and `OfferLast` methods in `Deque`, as both these methods add the elements at the end/tail of the `Deque`. What is the significance of it?<issue_comment>username_1: So you can use the same object both as a queue and a deque.
Upvotes: 0 <issue_comment>username_2: The Queue interface was added in Java 5. It defined the [`offer`](https://docs.oracle.com/javase/10/docs/api/java/util/Queue.html#offer(E)) method, which adds an element at the end.
(The `offer` method and the [`add`](https://docs.oracle.com/javase/10/docs/api/java/util/Collection.html#add(E)) method both return booleans. They differ in that `add` is permitted to reject the element and return false only if the element is already present in the collection. The `offer` method can reject the element for other reasons, such as the queue being full.)
With `Queue.offer`, there is little question about the semantics, as elements are generally added to the tail of a queue and removed from the head.
The [Deque](https://docs.oracle.com/javase/10/docs/api/java/util/Deque.html) interface was added in Java 6. A deque allows elements to be added to and removed from *both* the head and tail, so `Deque` defines `offerFirst` and `offerLast` methods. A deque is also a queue, so `Deque` is a sub-interface of `Queue`. Thus it inherits the `offer` method from `Queue`. That's how `Deque` ends up with both `offer` and `offerLast`.
We probably could have gotten by without adding `offerLast`, but that would have left asymmetries in the `Deque` interface. Many operations work on both the head and the tail (add, get, offer, peek, poll, remove) so it makes sense for all of them to have -first and -last variants, even though this adds redundancy. This redundancy occurs with other `Queue` methods as well, such as `add` and `addLast`, `peek` and `peekFirst`, `poll` and `pollFirst`, and `remove` and `removeFirst`.
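A small sketch of that redundancy (my own example, not from the API docs):

```
import java.util.ArrayDeque;
import java.util.Deque;

public class OfferDemo {
    public static void main(String[] args) {
        Deque<String> d = new ArrayDeque<>();
        d.offer("a");      // inherited from Queue: inserts at the tail
        d.offerLast("b");  // Deque method: also inserts at the tail
        d.offerFirst("c"); // the head-side counterpart
        System.out.println(d); // [c, a, b]
    }
}
```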
Upvotes: 5 [selected_answer] |
2018/03/22 | 691 | 2,135 | <issue_start>username_0: I need to find this element :
```
* <
* 1
===> * 2
* 3
```
The `current` class changes, but I always need to select its previous element if it exists.
I tried :
```
$("#pagin li a").closest(".current").prev(".pagination")
$("#pagin li a.current").prev()
$("#pagin li a.pagination.current").prev()
```
Maybe this doesn't work because it's searching for a previous element with `.pagination`, but I can't find a way to select the right element.<issue_comment>username_1: You can use the [`:has()` selector](https://api.jquery.com/has-selector/) to get the `li` having the `a.pagination.current` element, then use [`.prev()`](https://api.jquery.com/prev/) to get its immediately preceding sibling
```
var prevli = $("#pagin li:has(a.pagination.current)").prev();
var prevLiAnchor = prevli.find('a');
```
```js
$("#pagin li:has(a.pagination.current)").prev().css('color', 'red')
```
```css
.current{ color : green}
```
```html
* <
* 1
* 2
* 3
```
Upvotes: 3 [selected_answer]<issue_comment>username_2: Use `parent` and `prev`.
```js
console.log($("#pagin li a.current").parent().prev('li').find('a').text());
```
```html
* <
* 1
===> * 2
* 3
```
Upvotes: 0 <issue_comment>username_3: ```js
$(function(){
$("#pagin li").click(function(){
$("#pagin .pagination").removeClass("current previous");
$(this).find("a").addClass("current");
$(this).prev().find("a").addClass("previous");
});
})
```
```css
.current::after{
content: " - current";
color: blue;
}
.previous::after{
content: " - previous";
color: green;
}
```
```html
#### Click on each item
* <
* 1
* 2
* 3
* 4
* 5
* 6
```
Upvotes: 0 <issue_comment>username_4: ```
var previousLink = $("ul#pagin li a.current").closest('li').prev().find('a.pagination');
if(previousLink){
//do something
}else{
//do something else
}
```
closest is a better way to go because even if you wrap anchor tag(a) with some wrapper for styling, closest will still be able to find the LI tag.
>
> closest begins with the current element and travels up the DOM tree
> until it finds a match for the supplied selector.
>
>
>
Upvotes: 0 |
2018/03/22 | 728 | 2,858 | <issue_start>username_0: I'm using browser push notifications in my project. Everything is working fine except for clicking on a notification: I want it to open a dynamic URL according to the notification data.
my code
```
self.addEventListener('push', function(event) {
console.log('[Service Worker] Push Received.');
console.log(`[Service Worker] Push had this data: "${event.data.text()}"`);
const title = 'Garment IO';
const options = {
body: getAlertData(event.data.text()),
icon: '../images/icon.png',
dir:'rtl',
badge: '../images/badge.png'
};
event.waitUntil(self.registration.showNotification(title, options));
});
self.addEventListener('notificationclick', function(event) {
console.log('[Service Worker] Notification click Received.');
event.notification.close();
event.waitUntil(
clients.openWindow('https://google.com');///want to change this !
);
});
```
So is there any attribute I can send with the event, like a URL or something?<issue_comment>username_1: Instead of passing only text into the push handler, you can pass in a stringified object (with `JSON.stringify()`), which you parse back into an object with `event.data.json()`. Then you can set the url as part of the data property of the notification object.
```
self.addEventListener('push', function(event) {
const myObject = event.data.json();
const options = {
body: myObject.bodyText,
data: myObject.url,
...
};
return self.registration.showNotification(myObject.title, options);
});
```
The data object is later accessible in the "notificationclick" event
```
self.addEventListener("notificationclick", function(event) {
event.waitUntil(clients.openWindow(event.notification.data.url));
});
```
Upvotes: 2 <issue_comment>username_2: There is a mistake in the above answer: `url` does not exist on `data` as written. As he said, you have to pass a JSON string with `JSON.stringify()` and then convert it back with `event.data.json()`; this is my final, working code.
```
self.addEventListener('push', function(event) {
console.log('[Service Worker] Push Received.');
console.log(`[Service Worker] Push had this data: "${event.data.json()}"`);
const myObject = event.data.json();
//const title = 'Nueva Reserva Alpaca Expeditions';
const options = {
body: myObject.bodyText,
data: myObject.url,
icon: '/assets/icon.png',
badge: '/assets/badge.png'
};
const notificationPromise = self.registration.showNotification(myObject.title, options);
event.waitUntil(notificationPromise);
});
self.addEventListener('notificationclick', function(event) {
console.log('[Service Worker] Notification click Received.');
console.info(event.notification)
event.notification.close();
event.waitUntil(
clients.openWindow(event.notification.data)
);
});
```
Upvotes: 1 |
2018/03/22 | 919 | 2,412 | <issue_start>username_0: I want to calculate the element-wise maximum over 15 files: ifile1.txt, ifile2.txt, ....., ifile15.txt. The number of columns and rows in each file is the same, but some of the values are missing. Part of the data looks like this:
```
ifile1.txt ifile2.txt ifile3.txt
3 ? ? ? . 1 2 1 3 . 4 ? ? ? .
1 ? ? ? . 1 ? ? ? . 5 ? ? ? .
4 6 5 2 . 2 5 5 1 . 3 4 3 1 .
5 5 7 1 . 0 0 1 1 . 4 3 4 0 .
. . . . . . . . . . . . . . .
```
I would like to write the maximums of these 15 files without considering the missing values.
```
ofile.txt
4 2 1 3 . (i.e. max of 3 1 4, max of ? 2 ? and so on)
5 ? ? ? .
4 6 5 2 .
5 5 7 4 .
. . . . .
```
I am trying with this, but not getting the result.
```
awk '
{
for( i = 1; i <= FNR; i++){
for( j = 1; j <= NF; j++) printf "%s ", {
max=="" || $i > max {max=$i} END{ print max}' FS="|" : "?"
print ""
}
}
' ifile*
```<issue_comment>username_1: You can try this `awk` program:
```
awk '{
for(i=1;i<=NF;i++)
if($i>a[FNR,i] && $i!="?")
a[FNR,i]=$i
}
END{
for(j=1;j<=FNR;j++)
for(i=1;i<=NF;i++)
printf "%s%s", (a[j,i]?a[j,i]:"?"), (i<NF?OFS:ORS)
}' ifile*
```
The default loop will get values from all files and store the highest value into the array `a`.
The `END` statement is looping through the array to display the values.
This relies on `FNR`, the line number of the current file being processed and `NF` the number of fields of the current line.
Upvotes: 3 [selected_answer]<issue_comment>username_2: Taking some memory but only 1 loop per file
```
awk '{L=FNR==1?"":L ORS;for (i=1;i<=NF;i++){R=FNR":"i; M[R]=M[R]*1<=$i*1?$i:M[R];L=L OFS M[R]}}END{print L}' ifile*
```
explain:
```
awk '{
#Lines output buffer is reset on first line of file and append with new line otherwise
L=FNR==1?"":L ORS
# for every field (number in line)
for (i=1;i<=NF;i++){
# create a index reference (line + field)
R=FNR":"i
# set in an array the max value between old value for this reference and current one. the ( * 1 ) allow to compare ? and number but only used in compare values, not assignation to keep the "?"
M[R]=M[R]*1<=$i*1?$i:M[R]
# Append the value to Output lines
L=L OFS M[R]}}
# after last line of last file, print the Output lines
END{print L}
' ifile*
```
Upvotes: 1 |
2018/03/22 | 826 | 2,613 | <issue_start>username_0: I have made a SQLite database with some users for a small website I'm working on, and I would like to display all of those usernames on my website. I use Java, JavaScript, HTML and SQL.
The overview of my SQLite:
```
Table name: users
Column names: username, password
My guess on the SQL code: SELECT username FROM users
```
Here is the solution!
```
public ArrayList selectUsernames() {
public ArrayList usernameList = new ArrayList<>();
String sql = "SELECT username FROM users";
try {
java.sql.Connection conn = connect();
Statement stmt = conn.createStatement();
ResultSet rs = stmt.executeQuery(sql);
while (rs.next()) {
username = rs.getString("username");
System.out.println(rs.getString("username"));
usernameList.add(username);
}
for(int i = 0; i < usernameList.size(); i++){
System.out.println(usernameList.get(i));
}
conn.close();
} catch (SQLException e) {
System.out.println("Error line 43");
System.out.println(e.getMessage());
}
return usernameList;
}
```
Then in the HTML i added this Scala line
```
@(users: List[String])
```
Now if I type `@users` it will print out all usernames! Now I'm trying to write a Scala for loop.
2018/03/22 | 464 | 1,625 | <issue_start>username_0: Please see the video for the animation issue in UITableView. When I get the data from the API and reload the cell, the first cell has an animation issue when it is expanded.
I used below code for reload cell.
```
self.tblSpDirectoryView.reloadRows(at: [IndexPath(row: 3, section: 0)], with: .none)
```
I have also tried the solutions below:
```
self.tblSpDirectoryView.beginUpdates()
self.tblSpDirectoryView.reloadRows(at: [IndexPath(row: 3, section: 0)], with: .none)
self.tblSpDirectoryView.endUpdates()
```
Before all of the above code, I had used only a plain reload:
```
self.tblSpDirectoryView.reloadData()
```
It was not helpful for me. I have shared a video file for clarity: [please find the video file here.](https://file.fm/u/yttrymzu)<issue_comment>username_1: Try to add:
```
self.tableView.beginUpdates()
self.tableView.endUpdates()
```
Also you can try to calculate all your cells heights.
And pass them to the method `tableView(_:heightForRowAt:)`
[See my answer](https://stackoverflow.com/a/49288428/8069241)
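A minimal sketch of that idea (my own illustration; how the heights are computed depends on your cells):

```
// pre-computed heights per row, filled in once the data / expanded state is known
var rowHeights: [Int: CGFloat] = [:]

func tableView(_ tableView: UITableView, heightForRowAt indexPath: IndexPath) -> CGFloat {
    return rowHeights[indexPath.row] ?? 44 // fall back to a default height
}
```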
Upvotes: 3 [selected_answer]<issue_comment>username_2: Your first code is correct for reloading the row. Since you are using Swift 3, kindly try this code:
`let indexPath = IndexPath(item: row, section: 0)
tableView.reloadRows(at: [indexPath], with: .fade)`
If the animation is not happening correctly and your table view rows have a fixed height, kindly check that the height in the storyboard matches the height returned from the table view's height delegate method; if you are using a xib, check there as well. If the storyboard value differs from the value in code, it will show up like this.
Upvotes: 0 |
2018/03/22 | 995 | 2,926 | <issue_start>username_0: When I tried to import a CSS file it showed an error that loaders are missing, so I installed the css-loader and style-loader packages. If I add those packages to **webpack.config.js** I get the following error. I don't know how to resolve it.
```
ERROR in ./node_modules/css-loader!./src/index.js
Module build failed: Unknown word (1:1)
> 1 | import React from 'react';
| ^
2 | import ReactDOM from 'react-dom';
3 | import {App} from './components/App';
4 |
@ ./src/index.js 3:14-73
@ multi (webpack)-dev-server/client?http://localhost:8080
./src/index.js
```
**Webpack.config.js**
```
module.exports = {
entry: [
'./src/index.js'
],
module: {
rules: [
{
test: /\.(js|jsx)$/,
exclude: /node_modules/,
use: ["babel-loader","style-loader","css-loader"],
}
]
},
resolve: {
extensions: ['*', '.js', '.jsx']
},
output: {
path: __dirname + '/dist',
publicPath: '/',
filename: 'bundle.js'
},
devServer: {
contentBase: './dist'
}
};
```
i have installed following dependecies
**package.json**
```
{
"name": "Reactjs",
"version": "1.0.0",
"description": "",
"main": "index.js",
"scripts": {
"start": "webpack-dev-server --config ./webpack.config.js --mode
development"
},
"keywords": [],
"author": "",
"license": "ISC",
"babel": {
"presets": [
"env",
"react",
"stage-2"
]
},
"devDependencies": {
"babel-core": "^6.26.0",
"babel-loader": "^7.1.4",
"babel-preset-env": "^1.6.1",
"babel-preset-react": "^6.24.1",
"babel-preset-stage-2": "^6.24.1",
"css-loader": "^0.28.11",
"style-loader": "^0.20.3",
"webpack": "^4.2.0",
"webpack-cli": "^2.0.12",
"webpack-dev-server": "^3.1.1"
},
"dependencies": {
"react": "^16.2.0",
"react-dom": "^16.2.0",
"react-router-dom": "^4.2.2"
}
}
```<issue_comment>username_1: Modify your webpack.config.js to this, and import your CSS file in your App component like this: `import './file.css';` (assuming the CSS file is in the same directory as your App component).
```
module: {
rules: [
{
test: /\.(js|jsx)$/,
exclude: /node_modules/,
use: ["babel-loader"],
},
{
test: /\.css$/,
use: [ 'style-loader', 'css-loader' ]
}
]
},
resolve: {
extensions: ['*', '.js', '.jsx']
},
output: {
path: __dirname + '/dist',
publicPath: '/',
filename: 'bundle.js'
},
devServer: {
contentBase: './dist'
}
};
```
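For completeness, a hedged example of where that import would live (file and component names here are only illustrative):
```
// src/components/App.js (illustrative)
import React from 'react';
import './App.css'; // handled by the css rule above, not by babel-loader

export const App = () => <div className="app">Hello</div>;
```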
Upvotes: 3 [selected_answer]<issue_comment>username_2: You need to add a separate rule for css to your webpack.config in order to load css into your project.
```
...
rules: [
  {
    test: /\.(js|jsx)$/,
    exclude: /node_modules/,
    use: ["babel-loader"]
  },
  {
    test: /\.css$/,
    use: [ 'style-loader', 'css-loader' ]
  }
]
...
```
You're using `style-loader` and `css-loader` to transform your `.jsx` files which is going to throw an error.
Upvotes: 2 |
2018/03/22 | 550 | 1,765 | <issue_start>username_0: ```
class Map extends Component
{
constructor(props)
{
super(props)
this.state = {
listHouse:[{"_id":"5aaa4cb5dab4f51645d3e257","location":{"lat":"32.7020541","lng":"-97.2755012","__typename":"locationType"},"__typename":"rentType"},{"_id":"5aa8c86b0815090a6610f7d9","location":{"lat":"32.5872886","lng":"-97.0258724","__typename":"locationType"},"__typename":"rentType"},{"_id":"5aa8c9230815090a6610f7da","location":{"lat":"35.2962839","lng":"-98.2031782","__typename":"locationType"},"__typename":"rentType"}]
}
}
render()
{
return (
{
(this.state.listHouse && this.state.listHouse.length > 0)?
(
this.state.listHouse.map((house,key) =>{
console.log("house details..." +JSON.stringify(house))
return(
)
})
):(null)
}
);
}
```
When I am trying to bind the lat and lng dynamically from a state, I am getting an error like this
index.js:2177 InvalidValueError: setPosition: not a LatLng or LatLngLiteral: in property lat: not a number,
These dynamic values are not assigned to the Marker position in the return statement.<issue_comment>username_1: I had to convert the lat and lng values from strings to floats, then it worked fine for me:
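A hedged sketch of that conversion (the `position` prop follows the usual Marker API; adapt it to your component):
```
const position = {
  lat: parseFloat(house.location.lat),
  lng: parseFloat(house.location.lng)
};
// then render the marker with numeric coordinates, e.g.
// <Marker key={key} position={position} />
```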
Upvotes: 2 [selected_answer]<issue_comment>username_2: Try this ..
Some objects values from Apollo may be unreadable, so copy those values to another object and try.
```
{
(this.state.listHouse && this.state.listHouse.length > 0)?
(
this.state.listHouse.map((house,key) =>{
console.log("house details..." +JSON.stringify(house))
let houseObject = Object.assign({},house);
return(
)
})
):(null)
}
```
Upvotes: 2 |
2018/03/22 | 941 | 2,860 | <issue_start>username_0: I wish to count the number of lines in each paragraph of a text file which looks like this:
text file =
```
black
yellow
pink
hills
mountain
liver
barbecue
spaghetti
```
I want to know whether the last paragraph has fewer or more lines than the others, and then remove it.
The result I want:
```
black
yellow
pink
hills
mountain
liver
```
I tried in this way:
```
c = []
with open(file) as paragraph:
index = 0
for line in paragraph:
if line.strip():
index += 1
c.append(index)
```
but I got stuck, and I suspect this could be too complicated... maybe?<issue_comment>username_1: You could split by `\n\n` and use a list comprehension:
>
> test.txt
>
>
>
```
black
yellow
pink
hills
mountain
liver
barbecue
spaghetti
```
>
> test.py
>
>
>
```
with open('test.txt') as f:
output = f.read()
x = [len(i.split('\n')) for i in output.split('\n\n')]
print(x)
```
Output:
```
[3, 3, 2] # 2 is the one you want to remove
```
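A possible continuation of this approach (my own sketch, not part of the original answer): drop the trailing paragraph when it is shorter than the others and write the rest back out.
```
paragraphs = output.split('\n\n')
if len(paragraphs[-1].split('\n')) < max(len(p.split('\n')) for p in paragraphs):
    paragraphs = paragraphs[:-1]
print('\n\n'.join(paragraphs))
```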
Upvotes: 1 <issue_comment>username_2: You can use something like this:
```
from itertools import groupby
lines = open("test.txt").read().splitlines()
paragraphs = [list(groups) for keys, groups in groupby(lines, lambda x: x != "") if keys]
```
Where you read the file and split on new lines. This will give you:
```
[['black', 'yellow', 'pink'], [''], ['hills', 'mountain', 'liver'], [''], ['barbecue', 'spaghetti']]
```
From there you can use `itertools.groupby` to group them to a list of sublists and do some operations to determine what you want.
Output:
```
[['black', 'yellow', 'pink'], ['hills', 'mountain', 'liver'], ['barbecue', 'spaghetti']]
```
So now each sublist is a paragraph that you can count on it. So for the first paragraph, something like this: `len(sublists[0])` will give you 3. For example:
```
for paragraph in paragraphs:
print(len(paragraph))
```
Output:
```
3
3
2
```
Now you just need to put your logic to finish this. You can use `del sublists[i]` to delete the `i`th sublist.
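For instance, a sketch of that final step for the goal in the question (remove the last paragraph if it has fewer lines than the others):
```
if len(paragraphs[-1]) < max(len(p) for p in paragraphs):
    del paragraphs[-1]
print(paragraphs)  # [['black', 'yellow', 'pink'], ['hills', 'mountain', 'liver']]
```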
Upvotes: 0 <issue_comment>username_3: The file **test\_line.txt**
```
black
yellow
pink
hills
mountain
liver
barbecue
spaghetti
```
1. Start counting the line using `index`.
2. On line 6 check if a new Line came, and append the list with the counted lines of paragraphs and reset the `index` to `0`
3. On line 9 counting the lines
4. On line 11 append for the last paragraph
***Now you have got a list which contains number of lines in each paragraph. Do anything with the list as you please.***
Here's your modified code-
```
file = "test_line.txt"
c = []
with open(file) as paragraph:
index = 0
for line in paragraph:
if line == '\n':
c.append(index)
index = 0
else:
index+=1
c.append(index)
print(c)
```
**OUTPUT**
```
[3, 3, 2]
```
Hope it helps!
Upvotes: 2 |
2018/03/22 | 816 | 2,492 | <issue_start>username_0: What is the best strategy to store an api response that contains a list of data while building a PWA?
1. to cache the response using sw-toolbox
or
2. to save it in indexedDB and return from it when offline. |
2018/03/22 | 596 | 2,409 | <issue_start>username_0: So I have created a macro to add a "todo-shape". Now I'd like to create a macro that goes to the next todo-shape in the presentation. I am quite new to VBA in PowerPoint but have created some code below.
Any ideas how to get it to work?
```
Sub TodoSelectNext()
Dim i As Integer
i = ActiveWindow.Selection.SlideRange.SlideIndex
Do While i < ActivePresentation.Slides.Count
ActivePresentation.Slides(ActiveWindow.Selection.SlideRange(1).SlideIndex + 1).Select
ActivePresentation.Slides(1).Shapes("todo").Select
i = i + 1
Loop
End Sub
```<issue_comment>username_1: Try this instead:
```
Sub TodoSelectNext()
Dim i As Long ' SlideIndex is a Long, not an Integer
Dim oSh As Shape
i = ActiveWindow.Selection.SlideRange.SlideIndex + 1
If i < ActivePresentation.Slides.Count Then
Do While i <= ActivePresentation.Slides.Count
ActiveWindow.View.GotoSlide (i)
On Error Resume Next
Set oSh = ActivePresentation.Slides(i).Shapes("todo")
' Did we find the shape there?
If Not oSh Is Nothing Then
oSh.Select
Exit Sub
Else
i = i + 1
End If
Loop
End If
End Sub
```
Upvotes: 0 <issue_comment>username_2: I managed to create a solution.
```
Sub TodoSelectNext()
Dim i As Integer
Dim shp As Shape
i = ActiveWindow.Selection.SlideRange.SlideIndex
Do While i < ActivePresentation.Slides.Count
ActivePresentation.Slides(ActiveWindow.Selection.SlideRange(1).SlideIndex + 1).Select
For Each shp In ActivePresentation.Slides(ActiveWindow.Selection.SlideRange(1).SlideIndex).Shapes
If shp.Name = "todo" Then
Exit Sub
End If
Next shp
i = i + 1
Loop
End Sub
Sub TodoSelectPrevious()
Dim i As Integer
Dim shp As Shape
i = ActiveWindow.Selection.SlideRange.SlideIndex
Do While i > 1
ActivePresentation.Slides(ActiveWindow.Selection.SlideRange(1).SlideIndex - 1).Select
For Each shp In ActivePresentation.Slides(ActiveWindow.Selection.SlideRange(1).SlideIndex).Shapes
If shp.Name = "todo" Then
Exit Sub
End If
Next shp
i = i - 1
Loop
End Sub
```
Upvotes: 2 [selected_answer] |
2018/03/22 | 1,429 | 3,953 | <issue_start>username_0: I have two array, one is type of String and second one is number. How can I combine these conditionally as key value objects.
**For example:**
```
var fruits = [
"Apple",
"Banana" ,
"Apricot",
"Bilberry"
]
var count = [3,5,0,2]
```
I want to combine `fruits` and `count` array as key value object and **which count is not** **`0`**
**Expected:**
```
var merge = [{"Apple":3},{"Banana" :5},{"Bilberry":2}]
```
**What i have tried is:**
```
var merge = _.zipObject(["Apple","Banana" ,"Apricot","Bilberry"], [3,5,0,2])
```
and result is:
```
{"Apple":3,"Banana":5 ,"Apricot":0,"Bilberry":2}
```<issue_comment>username_1: Try this vanilla js solution as well using `filter`, `Object.values` and `map`
```
var output = count.map((s, i) => ({
[fruits[i]]: s
})).filter(s => Object.values(s)[0]);
```
**Demo**
```js
var fruits = [
"Apple",
"Banana",
"Apricot",
"Bilberry"
];
var count = [3, 5, 0, 2];
var output = count.map((s, i) => ({
[fruits[i]]: s
})).filter(s => Object.values(s)[0]);
console.log(output);
```
Upvotes: 3 [selected_answer]<issue_comment>username_2: Create the object with `_.zipObject()`, and then use [`_.pickBy()`](https://lodash.com/docs/4.17.5#pickBy) to filter keys with 0 values.
**Note:** `_.pickBy()` accepts a callback. The default is identity, which will filter all falsy values (false, 0, null, undefined, etc...). If you want to filter just zeroes, supply another callback, for example `(v) => v !== 0`.
```js
var fruits = ["Apple", "Banana", "Apricot", "Bilberry"];
var count = [3,5,0,2];
var result = _.pickBy(_.zipObject(fruits, count));
console.log(result);
```
With vanilla JS, you can use [`Array.reduce()`](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Array/reduce):
```js
var fruits = ["Apple", "Banana", "Apricot", "Bilberry"];
var count = [3,5,0,2];
var result = fruits.reduce(function(r, f, i) {
if(count[i]) r[f] = count[i];
return r;
}, {});
console.log(result);
```
Upvotes: 1 <issue_comment>username_3: You can use `map` to create a new array of objects & `filter` to remove the undefined
The source of `undefined` is teh callback function is not returning any value when the count is 0
```js
var fruits = [
"Apple",
"Banana",
"Apricot",
"Bilberry"
]
var count = [3, 5, 0, 2]
var newArr = count.map(function(item, index) {
if (item !== 0) {
return {
[fruits[index]]: item
}
}
}).filter(function(item) {
return item !== undefined;
})
console.log(newArr)
```
Upvotes: 0 <issue_comment>username_4: The two utility functions I think you're looking for are:
* `zip`, to go from `[ a, b ] + [ 1, 2 ] -> [ [ a, 1 ], [ b, 2 ] ]`
* `fromPair` to go from `[ a, 1 ] -> { a: 1 }`
Splitting the transformation in these two steps allows you to filter your list of key-value-pairs, which ensures you don't loose track of the link by index:
```
const valueFilter = ([k, v]) => v !== 0;
```
Possible implementations for those functions are:
```
const zip = (xs, ...others) =>
xs.map(
(x, i) => [x].concat(others.map(ys => ys[i]))
);
const fromPair = ([k, v]) => ({ [k]: v });
```
With those utilities, you can do:
```js
// Utils
const zip = (xs, ...others) =>
xs.map(
(x, i) => [x].concat(others.map(ys => ys[i]))
);
const fromPair = ([k, v]) => ({ [k]: v });
// Data
const fruits = [ "Apple", "Banana", "Apricot", "Bilberry" ];
const counts = [3,5,0,2];
// App
const valueNotZero = ([k, v]) => v !== 0;
console.log(
zip(fruits, counts)
.filter(valueNotZero)
.map(fromPair)
)
```
Upvotes: 0 <issue_comment>username_5: With simple `forEach`
```js
var fruits = [
"Apple",
"Banana",
"Apricot",
"Bilberry"
]
var count = [3, 5, 0, 2];
var merge = [];
fruits.forEach((val, i) => {
if (count[i]) { merge.push({ [val]: count[i] }) };
});
console.log(merge);
```
Upvotes: 0 |
2018/03/22 | 1,478 | 4,361 | <issue_start>username_0: I am trying to get some hands-on experience about template engine using groovy. Referring to the [official document](http://docs.groovy-lang.org/docs/next/html/documentation/template-engines.html#_streamingtemplateengine)
Below is the code snippet that I am trying to execute, but I got an error saying "**unable to resolve class groovy.text.StreamingTemplateEngine**".
```
def text = '''\
dear <% out.print firstname %> ${lastname},
We <% if (accepted) out.print ' are pleased' else out.print 'regret' %>
to inform you, '$title' was ${accepted ? 'accepted' : 'declined' }.
'''
def Template = new groovy.text.StreamingTemplateEngine().createTemplate(text)
def binding = [
firstname : "raghu",
lastname : "lokineni",
accepted : "true",
title : "groovy"
]
string response = Template.make(binding)
println "${response}"
```
**I have further added the following import to resolve the error, but with no luck.**
```
import groovy.text.*
```
Can someone explain me what is that I am doing wrong? |
2018/03/22 | 724 | 2,853 | <issue_start>username_0: I need to check a condition before the job triggers the postbuild action
I can see that the post section within the Pipeline supports always, changed, failure, success, unstable, and aborted as the post-build conditions. But I want to check another condition in the post-build action. I tried with when{}, but it's not supported in the post-build action.
```
pipeline {
agent any
stages {
stage('Example') {
steps {
sh 'mvn --version'
}
}
}
post {
}
}
```<issue_comment>username_1: As you said. The `when` option is not available in the post section. To create a condition you can just create a scripted block inside the post section like in this example:
```
pipeline {
agent { node { label 'xxx' } }
environment {
STAGE='PRD'
}
options {
buildDiscarder(logRotator(numToKeepStr: '3', artifactNumToKeepStr: '1'))
}
stages {
stage('test') {
steps {
sh 'echo "hello world"'
}
}
}
post {
always {
script {
if (env.STAGE == 'PRD') {
echo 'PRD ENVIRONMENT..'
} else {
echo 'OTHER ENVIRONMENT'
}
}
}
}
}
```
When the env var STAGE is PRD it will print `PRD ENVIRONMENT` in the post section. If it isn't, it will print `OTHER ENVIRONMENT`.
Run with `STAGE='PRD'`:
```
[Pipeline] sh
[test] Running shell script
+ echo 'hello world'
hello world
[Pipeline] }
[Pipeline] // stage
[Pipeline] stage
[Pipeline] { (Declarative: Post Actions)
[Pipeline] script
[Pipeline] {
[Pipeline] echo
PRD ENVIRONMENT..
```
Run where `STAGE='UAT'` (you can use a parameter instead of a env var of course):
```
[Pipeline] sh
[test] Running shell script
+ echo 'hello world'
hello world
[Pipeline] }
[Pipeline] // stage
[Pipeline] stage
[Pipeline] { (Declarative: Post Actions)
[Pipeline] script
[Pipeline] {
[Pipeline] echo
OTHER ENVIRONMENT
[Pipeline] }
[Pipeline] // script
```
Upvotes: 1 <issue_comment>username_2: You can add a script tag inside post build action and can use if() to check your condition
```
post {
    failure {
        script {
            if (expression) {
                // Post build action which satisfies the condition
            }
        }
    }
}
```
You can also add post build action for a **stage** in the pipeline also. In that case you can specify the condition in the **when**
```
pipeline {
    stages {
        stage('Example') {
            when {
                // condition that you want to check
            }
            steps {
            }
            post { // post build action for the stage
                failure {
                    // post build action that you want to check
                }
            }
        }
    }
}
```
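A filled-in sketch of that skeleton (the `expression` condition shown is only an assumed example):
```
pipeline {
    agent any
    stages {
        stage('Example') {
            when {
                expression { return env.STAGE == 'PRD' }
            }
            steps {
                sh 'echo "runs only when STAGE is PRD"'
            }
            post {
                failure {
                    echo 'stage failed'
                }
            }
        }
    }
}
```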
Upvotes: 3 [selected_answer] |
2018/03/22 | 1,015 | 3,922 | <issue_start>username_0: Hello I am trying to store the name of the player with the high score in the `sharedpreferences`. I want it to be the top 5 players.
Now if I try to store an array in there I get the error: "Wrong 2nd argument type. Found '`java.lang.String[][]' required 'java.util.Set`'"
```
public static void setHighScore(Context context, String name, int score) {
String[] player = new String[] {name, String.valueOf(score)};
String[][] highScores = new String[][] {player};
SharedPreferences.Editor editor = getPreferences(context).edit();
editor.putStringSet(PLAYER, highScores);
editor.apply();
}
```
How can I store a name and a score of multiple players?
Thanks in advance!<issue_comment>username_1: Your error says explicitly that you need to pass a Set object to the function `putStringSet()`. As explained in the [documentation](https://developer.android.com/reference/android/content/SharedPreferences.Editor.html#putStringSet(java.lang.String,%20java.util.Set%3Cjava.lang.String%3E)).
Regarding this, I think using SharedPreferences to store your high scores is a bad idea. You will face different problems.
**First choice :** Use the player name as key, use just `putString` as we put one value
```
public static void setHighScore(Context context, String name, int score) {
SharedPreferences.Editor editor = getPreferences(context).edit();
editor.putString(name, String.valueOf(score)); // one plain string value per player
editor.apply();
}
```
This is a really bad implementation. Because it will be hard to retrieve your score since the key is the player name and will always change.
**Second Choice :** Use only one key and store all the score in a Set
```
public static void setHighScore(Context context, String name, int score) {
SharedPreferences prefs = getPreferences(context);
Set scoreSet = prefs.getStringSet("highScores", new HashSet()); //I use "highScores" as the key, but it could be whatever you want; note getStringSet needs a default value
// You need to create a function that find the lower scores and remove it
removeLower(scoreSet);
scoreSet.add(name + ":" + String.valueOf(score)); //need to define a pattern to separate name from score
SharedPreferences.Editor editor = prefs.edit();
editor.putStringSet("highScores", scoreSet);
editor.apply();
}
```
This is also not a good idea. Because you need to redefine a function to find the lower score and remove it. You also need to define a pattern to store name + score. Then you need to define a function to read scores to separate the name from the score.
**Solution :**
The good solution here is to use a database. Preferences are not designed to store data, only preferences. A database also provides functionality to easily store/retrieve/order/etc. your data. Have a look [here](https://developer.android.com/training/data-storage/room/index.html).
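As a hedged illustration of that suggestion, a minimal Room sketch (the entity, DAO and query below are my own assumptions, not from the linked guide):
```
@Entity
public class HighScore {
    @PrimaryKey(autoGenerate = true) public int id;
    public String name;
    public int score;
}

@Dao
public interface HighScoreDao {
    @Insert
    void insert(HighScore highScore);

    @Query("SELECT * FROM HighScore ORDER BY score DESC LIMIT 5")
    List<HighScore> topFive();
}
```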
Upvotes: 2 [selected_answer]<issue_comment>username_2: Convert your double dimension array to string and save the string in shared preference.
**Here is the code**
```
private String twoDimensionalStringArrayToString(String[][] s) throws UnsupportedEncodingException, IOException {
ByteArrayOutputStream bo = null;
ObjectOutputStream so = null;
Base64OutputStream b64 = null;
try {
bo = new ByteArrayOutputStream();
b64 = new Base64OutputStream(bo, Base64.DEFAULT);
so = new ObjectOutputStream(b64);
so.writeObject(s);
return bo.toString("UTF-8");
} finally {
if (bo != null) { bo.close(); }
if (b64 != null) { b64.close(); }
if (so != null) { so.close(); }
}
}
```
Save the string in Shared preference
```
prefsEditor.putString(PLAYLISTS, sb.toString());
```
Check this post for further details
[how to store 2dimensional array in shared preferences in android or serialize it](https://stackoverflow.com/questions/23261540/how-to-store-2dimensional-array-in-shared-preferences-in-android-or-serialize-it)
Upvotes: 0 |
2018/03/22 | 1,081 | 4,207 | <issue_start>username_0: I am trying to display the shortest time taken to click a shape. However, the `localStorage` value is coming back as `NaN` instead of the required Number value. How can I tweak the code to return a number value?
Thanks
```
document.getElementById("shapes").onclick = function() {
document.getElementById("shapes").style.display = "none";
shapeAppearDelay();
var end = new Date().getTime();
var timeTaken = (end - start) / 1000;
document.getElementById("timeTaken").innerHTML = "Time taken: " + timeTaken + "s";
var timeTaken = (end - start) / 1000;
if (localStorage.shortestTimeTaken) {
if (timeTaken < Number(localStorage.shortestTimeTaken)) {
localStorage.shortestTimeTaken = timeTaken;
}
} else {
localStorage.shortestTimeTaken = timeTaken;
}
document.getElementById("shortestTimeTaken").innerHTML = "Shortest time taken: " + localStorage.shortestTimeTaken + "s";
}
```
2018/03/22 | 423 | 1,614 | <issue_start>username_0: I want to create a gallery backed by a database: connect to the database and use `foreach` to print the info into prepared `HTML` code.
I separated the `PHP` code from the `HTML` code, so **each type of code** is in **different folders** and separate files; for example, I have `index.php` which requires `html/index.html` and runs all the necessary code.
Now I'm facing a real problem - how can I output the prepared `HTML` code in the exact place? It's impossible to use `foreach` and `echo` without writing `PHP` code between `HTML` lines, right?<issue_comment>username_1: There are already plenty of template engines out in the wild, like Blade, Deval, HAH, mTemplate, pHAML, PHP, RainTPL, Scurvy, Simphple, Smarty, StampTE, TinyButStrong, Tonic, Twig, uBook ... - just pick one according to your needs.
Comparison between them can be easily found with a simple search on "php template engine comparison"
Here, for example, is a [wikipedia page](https://en.wikipedia.org/wiki/Comparison_of_web_template_engines) with a fairly complete list by technology.
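The core idea every one of those engines implements can be sketched in a few lines of plain PHP using output buffering (function and file names below are only illustrative):
```
function render($templateFile, array $data) {
    extract($data);         // expose $data keys as variables inside the template
    ob_start();
    include $templateFile;  // the template stays almost pure HTML with tiny echo/foreach bits
    return ob_get_clean();
}

// usage: echo render('html/gallery.html.php', ['images' => $images]);
```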
Upvotes: 2 [selected_answer]<issue_comment>username_2: I would advise you to use some framework, like [Laravel](https://laravel.com/), [Symfony](https://symfony.com/) or [Phalcon](https://phalconphp.com). They all have big community and good documentation with a ton of examples. Besides, they all come with their own templating system and use their preferable [ORM](https://en.wikipedia.org/wiki/Object-relational_mapping) system. This will help you to make your code clean and elegant, or at least cleaner, then if you tried to invent your own wheel.
Upvotes: 0 |
2018/03/22 | 703 | 2,825 | <issue_start>username_0: I am making an app which is always detecting noises, and when it detects a loud noise it starts a function.
While that function is running I need to temporarily pause the loop which detects noises.
How can I do that?
Here's my code:
```
private void readAudioBuffer() {
try {
short[] buffer = new short[bufferSize];
int bufferReadResult;
do {
bufferReadResult = audio.read(buffer, 0, bufferSize);
for (int i = 0; i < bufferReadResult; i++){
if (buffer[i] > lastLevel) {
lastLevel = buffer[i];
}
}
// if sound level is over 20000 start voice recognition
if (lastLevel > 20000){
lastLevel = 0;
// Pause this function:
startVoiceRecognitionActivity();
}
} while (bufferReadResult > 0 && audio.getRecordingState() == AudioRecord.RECORDSTATE_RECORDING);
} catch (Exception e) {
e.printStackTrace();
}
}
```<issue_comment>username_1: ```
private void readAudioBuffer() {
try {
short[] buffer = new short[bufferSize];
int bufferReadResult;
do {
while(audio.getRecordingState() == AudioRecord.RECORDSTATE_RECORDING){
//Wait for certain time. If it is in a recording state then it would enter this loop and wait else it would go ahead to check for sound levels.
}
bufferReadResult = audio.read(buffer, 0, bufferSize);
for (int i = 0; i < bufferReadResult; i++){
if (buffer[i] > lastLevel) {
lastLevel = buffer[i];
}
}
// if sound level is over 20000 start voice recognition
if (lastLevel > 20000){
lastLevel = 0;
startVoiceRecognitionActivity();
}
} while (bufferReadResult > 0);
} catch (Exception e) {
e.printStackTrace();
}
}
```
Upvotes: -1 <issue_comment>username_2: Basically, adding `Thread.sleep(5000);` did the trick:
```
try {
short[] buffer = new short[bufferSize];
int bufferReadResult;
do {
bufferReadResult = audio.read(buffer, 0, bufferSize);
for (int i = 0; i < bufferReadResult; i++){
if (buffer[i] > lastLevel) {
lastLevel = buffer[i];
}
}
// if sound level is over 20000 start voice recognition
if (lastLevel > 20000){
lastLevel = 0;
startVoiceRecognitionActivity();
Thread.sleep(5000);
}
} while (bufferReadResult > 0 && audio.getRecordingState() == AudioRecord.RECORDSTATE_RECORDING);
} catch (Exception e) {
e.printStackTrace();
}
}
```
Upvotes: 1 [selected_answer] |
2018/03/22 | 1,033 | 3,996 | <issue_start>username_0: Sorry, I am a newbie at Java web development.
I got a task to fetch a user profile picture from a 3rd-party company via an HTTP REST call (GET method). Their API can only be accessed using a session id in a header parameter, and the API returns a byte[] array that looks like `’ÑÒBRSb¢ÂáTr²ñ#‚4“â3C` etc.
**How do I handle a REST response with content type image/jpg in RestTemplate?**
I did my best like this:
```
private RestTemplate restTemplate;
public byte[] getProfilePic(){
String canonicalPath = "http://dockertest/bankingapp/customer/profpicFile";
String sessionId= "MTQ4NzE5Mz...etc";
HttpEntity request = new HttpEntity(null, getHeaders(true, "GET", null, canonicalPath, sessionId));
//getHeaders() will return HttpHeaders with those parameter
ResponseEntity response = null;
try {
response = this.restTemplate.exchange(uri, HttpMethod.GET, request, byte[].class);
} catch( HttpServerErrorException hse ){
throw hse;
}
return response;
}
```
This code will return an error
>
> org.springframework.web.client.RestClientException: Could not extract
> response: no suitable HttpMessageConverter found for response type
> [[B] and content type [image/jpg]
>
>
>
Any suggestion or help will be appreciated!
Thank you
**Update**
Using the Stack Overflow suggestions I managed to solve this.
```
private RestTemplate restTemplate;
public byte[] getProfilePic(){
String canonicalPath = "/mobile/customer/profpicFile";
String sessionId= "MTQ4NzE5Mz...etc";
HttpEntity request = new HttpEntity(null, getHeaders(true, "GET", null, canonicalPath, sessionId));
//getHeaders() will return HttpHeaders with those parameter
ResponseEntity response = null;
try {
restTemplate.getMessageConverters().add(new ByteArrayHttpMessageConverter());
response = this.restTemplate.exchange(uri, HttpMethod.GET, request, byte[].class).getBody();
return response;
} catch( HttpServerErrorException hse ){
throw hse;
}
return null;
}
```
A note about HttpMessageConverter: instead of setting a whole list, I can directly add a `ByteArrayHttpMessageConverter()`.<issue_comment>username_1: As said, I guess you must use the right `MessageConverter`.
I would do in this way:
```
private RestTemplate restTemplate;
public byte[] getProfilePic(){
String canonicalPath = "http://dockertest/bankingapp/customer/profpicFile";
String sessionId= "MTQ4NzE5Mz...etc";
List converters = new ArrayList<>(1);
converters.add(new ByteArrayHttpMessageConverter());
restTemplate.setMessageConverters(converters);
HttpEntity request = new HttpEntity(null, getHeaders(true, "GET", null, canonicalPath, sessionId));
//getHeaders() will return HttpHeaders with those parameter
ResponseEntity response = null;
try {
response = this.restTemplate.exchange(uri, HttpMethod.GET, request, byte[].class);
} catch( HttpServerErrorException hse ){
throw hse;
}
return response;
}
```
More information can be found here: <https://docs.spring.io/spring-framework/docs/current/javadoc-api/org/springframework/web/client/RestTemplate.html#setMessageConverters-java.util.List-> and here <https://docs.spring.io/spring-framework/docs/current/javadoc-api/org/springframework/http/converter/HttpMessageConverter.html> and here <https://docs.spring.io/spring-framework/docs/current/javadoc-api/org/springframework/http/converter/ByteArrayHttpMessageConverter.html>
Upvotes: 3 [selected_answer]<issue_comment>username_2: Thank you very much, this problem took up a lot of my time. Now it is resolved, as follows:
```
@Configuration
@Slf4j
public class RestTemplateConfiguration implements ApplicationContextAware {
@Override
public void setApplicationContext(ApplicationContext applicationContext) throws BeansException {
RestTemplate restTemplate = (RestTemplate) applicationContext.getBean("restTemplate");
restTemplate.getMessageConverters().add(new ByteArrayHttpMessageConverter());
restTemplate.setUriTemplateHandler(new GetUriTemplateHandler());
    }
}
```
Upvotes: 0 |
2018/03/22 | 488 | 1,776 | <issue_start>username_0: I have extracted the value for one of the keys from a JSON response. It has two possible values, either:
```
Key=[] or
Key=[{"combination":[{"code":"size","value":"Small"}]},{"combination":
[{"code":"size","value":"Medium"}]}]
```
I need to check whether Key is [] or whether it has some values. Could you please help me with what is wrong in the implementation below:
```
if ("${Key}"=="[]") {
vars.put('size', 'empty')
} else {
vars.put('size', 'notempty')
}
```
My Switch Controller is not navigating to the Else part based on the above implementation. Help is appreciated!<issue_comment>username_1: If you have an already extracted value `Key` and you are using an If Controller with the default `JavaScript`, you can do the following in the `condition` field:
```
"${Key}".length > 0
```
The `"${Key}"` is evaluating to your `JavaScript` object which is an array. You can check the length of the array to see if there are objects in it.
Upvotes: 1 <issue_comment>username_2: 1. Don't ever inline [JMeter Functions or Variables](https://jmeter.apache.org/usermanual/functions.html) into script body as they may resolve into something which will cause compilation failure or unexpected behaviour. Either use "Parameters" section like:
[](https://i.stack.imgur.com/TXefr.png)
or use `vars.get('Key')` statement instead.
2. Don't compare string literals using `==`, go for [.equals()](http://docs.groovy-lang.org/docs/next/html/groovy-jdk/java/lang/Object.html#equals(java.util.List)) method instead
See [Apache Groovy - Why and How You Should Use It](https://www.blazemeter.com/blog/groovy-new-black) article for more information on using Groovy scripting in JMeter tests
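Putting both points together, the check from the question could look roughly like this inside a JSR223 element using Groovy (a sketch only):
```
if (vars.get('Key').equals('[]')) {
    vars.put('size', 'empty')
} else {
    vars.put('size', 'notempty')
}
```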
Upvotes: 3 [selected_answer] |
2018/03/22 | 834 | 2,994 | <issue_start>username_0: I have been sitting on this problem for a while now. I have seen similar problems already being solved but so far applying those solutions just returned other errors to me.
I have a spreadsheet connected to Bloomberg. I want to save the spreadsheet as pdf with different Company names/tickers inserted. But the code runs through too fast.
So what I get back is only several PDF documents saying "#N/A Requesting Data..." about 50 times.
I have already seen people using
```
Application.Wait
```
or doing some random calculations like
```
For x=1 to x= 10^9
```
just to buy time, but it seems the computer is occupied with those commands and doesn't work on my formulas in this time.
In a similar thread i saw
```
Application.ontime + TimeValue("00:00:05"), "Other task"
```
but this only returned errors to me so far.
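(For reference, the documented form of the call takes an absolute time plus the macro name as a plain string, without parentheses - roughly like this sketch:)
```
Application.OnTime Now + TimeValue("00:00:05"), "Print1"
```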
Here is my current code so you can get an idea of what I am trying to do. In the current version it tells me (N times) that all macros are deactivated or that Print1() is not available in this workbook. Also, I can see it running through at the same high speed as without adding the Application.OnTime.
```
Sub All_To_Pdf()
Dim i,N As Integer
N=100
For i = 1 To N
Range("I4") = Sheets("List").Cells(i, 3).Value
Application.OnTime Now + TimeValue("00:00:10"), "Print1()"
Next
End Sub
```
And a simple print function
```
Sub Print1()
ActiveSheet.ExportAsFixedFormat Type:=xlTypePDF, Filename:= _
"FOLDER" & Range("A3").Value & ".pdf", Quality:=xlQualityStandard, _
IncludeDocProperties:=True, IgnorePrintAreas:=False, OpenAfterPublish:= _
False
End Sub
```
Update:
Now I changed my tactic and avoided the topic by copying the values in a different excel and from there saving it to pdf.
Thank you all for your help |
2018/03/22 | 638 | 2,364 | <issue_start>username_0: How can I search for my date in the DataGridView? I have sample code which I think is correct, but why does it not search?
[](https://i.stack.imgur.com/IGqiF.png)
```
private void textBox3_TextChanged(object sender, EventArgs e)
{
if (string.IsNullOrEmpty(textBox3.Text))
{
SqlDataAdapter sda = new SqlDataAdapter("Select * from STUDENT_RESEARCH_PROJECTS WHERE COURSE LIKE '" + comboBox2.Text + "%'", con);
DataTable data = new DataTable();
sda.Fill(data);
dataGridView1.DataSource = data;
}
else
{
SqlDataAdapter sda = new SqlDataAdapter("SELECT ACCESSION_NO,TITLE_PROJECT,CATEGORY,YEAR,COURSE,DATE,Student_Name1,Student_Name2,Student_Name3,Student_Name4,Student_Name5,RELATED_TITLE1,RELATED_TITLE2,RELATED_TITLE3,RELATED_TITLE4,RELATED_TITLE5 FROM STUDENT_RESEARCH_PROJECTS WHERE DATE LIKE'" + textBox3.Text + "%'", con);
DataTable data = new DataTable();
sda.Fill(data);
dataGridView1.DataSource = data;
}
}
```
2018/03/22 | 1,403 | 5,408 | <issue_start>username_0: I've successfully created a simple `iOS` app that changes the background color when I click on the button. But now the problem is that I have no idea how to implement a property observer to print out a message about the new color whenever the color has changed.
`UIColorExtension.swift`
```
import UIKit
extension UIColor {
static var random: UIColor {
// Seed (only once)
srand48(Int(arc4random()))
return UIColor(red: CGFloat(drand48()), green: CGFloat(drand48()), blue: CGFloat(drand48()), alpha: 1.0)
}
}
```
`ViewController.Swift`
```
import UIKit
class ViewController: UIViewController {
override func viewDidLoad() {
super.viewDidLoad()
// apply random color on view did load
applyRandomColor()
}
@IBAction func btnHandler(_ sender: Any) {
// on each button click, apply random color to view
applyRandomColor()
}
func applyRandomColor() {
view.backgroundColor = UIColor.random
}
}
```
Please teach me how to use a property observer to monitor and print the color each time it changes, as I don't have the slightest idea how to do so.<issue_comment>username_1: In Swift there are two types of property observers, i.e.:
>
> willSet
>
>
> didSet
>
>
>
You can find it in the Swift documentation [here](https://developer.apple.com/library/content/documentation/Swift/Conceptual/Swift_Programming_Language/Properties.html?).
if you just need to print color description using property observer then you can create property and set its observers like this:
```
var backgroundColor = UIColor() {
didSet {
print("didSet value : \(backgroundColor)")
}
}
```
and your complete code will look like:
```
import UIKit
class ViewController: UIViewController {
var backgroundColor = UIColor() {
didSet {
print("didSet value : \(backgroundColor)")
}
}
override func viewDidLoad() {
super.viewDidLoad()
self.applyRandomColor()
}
@IBAction func btnHandler(_ sender: Any) {
// on each button click, apply random color to view
applyRandomColor()
}
func applyRandomColor() {
backgroundColor = UIColor.random
self.view.backgroundColor = backgroundColor
}
}
extension UIColor {
static var random: UIColor {
// Seed (only once)
srand48(Int(arc4random()))
return UIColor(red: CGFloat(drand48()), green: CGFloat(drand48()), blue:
CGFloat(drand48()), alpha: 1.0)
}
}
```
Upvotes: 1 <issue_comment>username_2: First of all you don't need any observer here because you know when the color is changing as it is changing by you manually by calling the `applyRandomColor()` function. So you can do here whatever you want to do.
After that if you still want to see how observer works. Then add an observer in class where you want to observe this event.
```
view.addObserver(self, forKeyPath: "backgroundColor", options: NSKeyValueObservingOptions.new, context: nil)
```
And here is the event:
```
override func observeValue(forKeyPath keyPath: String?, of object: Any?, change: [NSKeyValueChangeKey : Any]?, context: UnsafeMutableRawPointer?) {
if keyPath == "backgroundColor" {
/// Background color has changed
/// Your view which backgroundColor it is
guard let view = object as? UIView else {return}
print(view.backgroundColor)
OR
guard let color = change?[.newKey] as? UIColor else {return}
print(color)
}
}
```
Upvotes: 1 <issue_comment>username_3: First add observer like this:
```
view.addObserver(self, forKeyPath: "backgroundColor", options: [.new], context: nil)
```
Then override the function:
```
override func observeValue(forKeyPath keyPath: String?, of object: Any?, change: [NSKeyValueChangeKey : Any]?, context: UnsafeMutableRawPointer?) {
guard let color = change?[.newKey] as? UIColor else {return}
print(color)
}
```
Upvotes: 1 <issue_comment>username_4: If you really need an observer Swift 4 provides a very smart way to do that:
* Declare a `NSKeyValueObservation` property
```
var observation : NSKeyValueObservation?
```
* In `viewDidLoad` add the observer, the closure is executed whenever the background color changes.
```
observation = observe(\.view.backgroundColor, options: [.new]) { _, change in
print(change.newValue!)
}
```
Upvotes: 2 <issue_comment>username_5: At the moment you're just doing this
```
view.backgroundColor = X
```
instead, you probably want to make your own "property"....
```
var mood: UIColor { }
```
properties can have a "didSet" action...
```
var mood: UIColor {
didSet {
view.backgroundColor = mood
print("Hello!")
}
}
```
So now, just use "mood" when you want to change the background color.
```
mood = X
```
But, you can ALSO run any other code - where it prints the Hello.
Imagine your program **has 100s of places where you do this thing of changing the background color**.
If you use a property, you can change what happens "all at once" by just changing the code at print("Hello").
Otherwise, you'd have to change every one of the 100s of places where you do that.
This is a real basic in programming. Enjoy!
Upvotes: 2 |
2018/03/22 | 716 | 3,116 | <issue_start>username_0: I understand that if a number gets closer to zero than `realmin`, then Matlab converts the double to a [denorm](https://en.wikipedia.org/wiki/Denormal_number) . I am noticing this causes significant performance cost. In particular I am using a gradient descent algorithm that when near convergence, the gradients (in backprop for my bespoke neural network) drop below `realmin` such that the algorithm incurs heavy performance cost (due to, I am assuming, type conversion behind the scenes). I have used the following code to validate my gradient matrices so that no numbers falls below `realmin`:
`function mat= validateSmallDoubles(obj, mat, threshold)
mat= mat.*(abs(mat)>threshold);
end`
Is this usual practice and what value should `threshold` take (obviously you want this as close to `realmin` as possible, but not too close otherwise any additional division operations will send some elements of `mat` below `realmin` *after* validation)?. Also, specifically for neural networks, where are the best places to do gradient validation without ruining the network's ability to learn?. I would be grateful to know what solutions people with experience in training neural networks have? I am sure this is a problem for all languages. Tentative `threshold` values have ruined my network's learning.<issue_comment>username_1: I traced the diminishing gradient occurrences to the Adam SGD optimiser - the biased moving average matrix calculations in the Adam optimiser were causing matlab to carry out the denorm operation. I simply thresholded the matrix elements for each layer after these calculations, with `threshold=10*realmin`, to zero without any effect on learning. I have yet to investigate why my moving averages were getting so close to zero as my architecture and weight initialisation priors would normally mitigate this.
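A sketch of what that thresholding looks like in code (the moving-average variable names here are my own assumptions):
```
threshold = 10*realmin;
vdW = vdW .* (abs(vdW) > threshold);   % first-moment (momentum) estimate for this layer
sdW = sdW .* (abs(sdW) > threshold);   % second-moment estimate for this layer
```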
Upvotes: 1 [selected_answer]<issue_comment>username_2: I do not know if it is somehow related to your problem, but I had a similar problem with underflows while doing exponentially weighted average of gradients (say while implementing Momentum or Adam).
In particular, at some point you do something like:
`v := 0.9*v + 0.1*gradient` where `v` is the exponentially weighted average of your gradient `g`. If in a lot of successive iterations a same element of your `g` matrix remains 0, your `v` is quickly becoming very small and you hit dernormals.
So the problem, is why all those zeros ? In my case the culprit where the `ReLu` units which outputed a lot of zeros (if x<0 , relu(x) is zero). Because when `Relu` outputs zero on a given neurons the related weight has no effect it means the corresponding partial derivative will be zero in `g`. So it happened to me that in a lot of successive iterations that particular neuron was not fired.
To avoiding having zero activations (and derivatives), I used "leaky relu" so to have a very small derivative instead.
Another solution, is to use gradient clipping before applying your weighted average to threshold your gradients to a minimum value. Which is quite similar to what you did.
Upvotes: 1 |
2018/03/22 | 568 | 1,909 | <issue_start>username_0: I'm trying to write a `case` statement with two separate conditions:
1. `controller_name` being `'pages'` or `'users'`
2. `controller_name` being `'static'` and `action_name` being `'homepage'`
This is what I've tried:
```
case controller_name
when 'pages', 'users'
stylesheet_link_tag "style"
when 'static' && action_name == 'homepage'
stylesheet_link_tag "homepage"
end
```
The first `when` works as expected, but the second `when` does not. I suspect that `&&` is causing the problem, but I cannot figure out the correct syntax.<issue_comment>username_1: It doesn't work because `case controller_name` compares each `when`-value to `controller_name`, so you have something like:
```
('static' && action_name == 'homepage') === controller_name
```
which becomes:
```
true === controller_name
# or
false === controller_name
```
You could handle the `action_name` using an `if` statement instead:
```
case controller_name
when 'pages', 'users'
stylesheet_link_tag "style"
when 'static'
if action_name == 'homepage'
stylesheet_link_tag "homepage"
else
# ...
end
end
```
or another `case`:
```
case controller_name
when 'pages', 'users'
stylesheet_link_tag "style"
when 'static'
case action_name
when 'homepage'
stylesheet_link_tag "homepage"
when '...'
# ...
end
end
```
or maybe just:
```
case controller_name
when 'pages', 'users'
stylesheet_link_tag "style"
when 'static'
stylesheet_link_tag action_name
end
```
Upvotes: 4 [selected_answer]<issue_comment>username_2: Another way to to tackle this is to use a lambda as the matcher, this works because `#===` can be used to call a lambda.
```rb
static_home_page = ->(name) {name == 'static' && action_name == 'homepage'}
case controller_name
when 'pages', 'users'
stylesheet_link_tag "style"
when static_home_page
stylesheet_link_tag "homepage"
end
```
Upvotes: 0 |