date (stringlengths 10-10) | nb_tokens (int64 60-629k) | text_size (int64 234-1.02M) | content (stringlengths 234-1.02M) |
---|---|---|---|
2018/03/20 | 648 | 2,343 | <issue_start>username_0: The idea is that I show a loading .gif file while I load the data. Once all the data has loaded, wait an `x` amount of time (currently 3000 ms, but this will be lowered), then set the value of `data.show` to true so that the loading .gif is no longer showing and the table is.
So, how do I reference `show` from the JavaScript in the HTML, ng-show?
Any help would be appreciated!
Thanks :)
Edit: This works, according to this answer - [setTimeout v $timeout](https://coderwall.com/p/udpmtq/angularjs-use-timeout-not-settimeout)
HTML
----
```

Stuff
```
JavaScript
----------
```
function onSuccess(response) {
controller.errors = response.data;
setTimeout(function () {
$('#errorTable').DataTable({
"lengthMenu": [[6], [6]],
"dom": '<"toolbar">frtip'
});
$("div.toolbar").html('');
$scope.show = true;
}, 3000);
}
```<issue_comment>username_1: Try adding `$scope.show = { data: false };` in the controller
Upvotes: 2 <issue_comment>username_2: The problem is that you have not defined `$scope.show` at the beginning. You probably don't need `show.data`. Please add the following line at the beginning of the controller:
```
$scope.show = false;
```
And change the HTML to:
```

Stuff
```
And in controller:
You have added `$scope.show.data` outside `setTimeout`; bring it within the `setTimeout` function as per your need and add:
```
function onSuccess(response) {
controller.errors = response.data;
$timeout(function () {
$('#errorTable').DataTable({
"lengthMenu": [[6], [6]],
"dom": '<"toolbar">frtip'
});
$("div.toolbar").html('');
$scope.show = true;
}, 3000);
}
```
[Here](https://embed.plnkr.co/pAI3lvNxRRGNeRi4toAl/) is the plunker that I tried
Upvotes: 4 [selected_answer]<issue_comment>username_3: As there is no `div.toolbar`, `$('div.toolbar')` would be `null/undefined`, which leads to a null-pointer error at that statement, and the next statement is not executed because execution has already stopped.
A simple solution could be to place `$scope.show.data = true;` as the first statement in the `function` called by `setTimeout`.
Upvotes: -1 |
2018/03/20 | 698 | 2,300 | <issue_start>username_0: I have a model `'user'` with two properties, `'username'` & `'plan'`, where `'plan'` is a reference to another model `'plan'` whose properties are `'name'` & `'description'`. Now, when I use the following query in the API to retrieve data -
```
user.find().populate('plan')
```
I am trying to render this data in the front-end using:
```
this.state.users.map(re => (
| {re.username} | {re.plan} |
)
```
Here `re.plan` returns the `_id` of the property. To get the `name` property, I have to use `re.plan.name`, which throws an error because some documents do not have a `plan` property.
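One common way to avoid that error is to guard the access before rendering (a sketch only; the `users` array below mimics the populated query result, and the `"(no plan)"` placeholder is an assumption, not part of the original code):

```javascript
// Sketch: read plan.name defensively; documents without a "plan" get a placeholder.
const users = [
  { _id: "5a8be9ce4f506927a8d2cacb", username: "kjbfhisdugfiasdb",
    plan: { _id: "5a64bbd3f671402b90ce5fc9", name: " test 3" } },
  { _id: "5a82bcb4c72440221442399f", username: "test" }, // no plan property
];

const rows = users.map(re => ({
  username: re.username,
  // Guard: re.plan may be undefined, so fall back to a placeholder string.
  plan: re.plan ? re.plan.name : "(no plan)",
}));

console.log(rows);
```

The same `re.plan ? re.plan.name : "(no plan)"` guard can be used directly inside the JSX cell.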
Sample data from the Query ---
```
{
"_id": "5a8be9ce4f506927a8d2cacb",
"username": "kjbfhisdugfiasdb",
"plan": {
"_id": "5a64bbd3f671402b90ce5fc9",
"name": " test 3"
},
},
{
"_id": "5a82bcb4c72440221442399f",
"username": "test",
}
```
How to resolve this issue?
Thanks in advance |
2018/03/20 | 4,103 | 14,161 | <issue_start>username_0: I'm using the JPA APIs and it works well. I tried to add a new member/column to the class, and when I try to add data into it using a form, it shows a "TRANSACTION ABORTED" error.
```
"javax.persistence.PersistenceException: Exception [EclipseLink-4002] (Eclipse Persistence Services - 2.5.2.v20140319-9ad6abd): org.eclipse.persistence.exceptions.DatabaseException
Internal Exception: java.sql.SQLIntegrityConstraintViolationException: Column 'ITEM_QTY' cannot accept a NULL value.
Error Code: 20000
Call: INSERT INTO ITEM (B_ID, DESCRIPTION, ITEM_NAME, ITEM_PRICE, MANUFACTURER, DTYPE) VALUES (?, ?, ?, ?, ?, ?)
bind => [6 parameters bound]"
```
This error is shown in the GlassFish logs.
These are my generated entities
Item_.java
```
package Entities;
import javax.annotation.Generated;
import javax.persistence.metamodel.SingularAttribute;
import javax.persistence.metamodel.StaticMetamodel;
@Generated(value="EclipseLink-2.5.2.v20140319-rNA", date="2018-03-20T16:09:40")
@StaticMetamodel(Item.class)
public class Item_ {
public static volatile SingularAttribute<Item, Long> item_id;
public static volatile SingularAttribute<Item, Long> b_id;
public static volatile SingularAttribute<Item, Double> item_price;
public static volatile SingularAttribute<Item, String> description;
public static volatile SingularAttribute<Item, String> item_name;
public static volatile SingularAttribute<Item, String> manufacturer;
}
```
and Scarf_.java
```
package Entities;
import javax.annotation.Generated;
import javax.persistence.metamodel.SingularAttribute;
import javax.persistence.metamodel.StaticMetamodel;
@Generated(value="EclipseLink-2.5.2.v20140319-rNA", date="2018-03-20T16:09:40")
@StaticMetamodel(Scarf.class)
public class Scarf_ extends Item_ {
public static volatile SingularAttribute<Scarf, Long> t_id;
public static volatile SingularAttribute<Scarf, Double> final_price;
public static volatile SingularAttribute<Scarf, Double> discount;
}
```
EJBs
Item.java
```
package Entities;
import java.io.Serializable;
import javax.persistence.Column;
import javax.persistence.Entity;
import javax.persistence.GeneratedValue;
import javax.persistence.GenerationType;
import javax.persistence.Id;
import javax.persistence.Inheritance;
import javax.persistence.InheritanceType;
import javax.validation.constraints.NotNull;
import javax.validation.constraints.Size;
/**
* Creates Item Superclass Object
* @author josh
*/
@Entity
@Inheritance(strategy = InheritanceType.JOINED)
public class Item implements Serializable {
// Attributes
public static final String ITEM = "Item.findAllItems";
private static final long serialVersionUID = 1L;
@Id
@GeneratedValue(strategy=GenerationType.IDENTITY)
public Long item_id;
@Column(nullable = true)
public Long b_id;
@Column(nullable = true)
public String manufacturer;
@Column(nullable = true)
public String item_name;
@Size(max = 2000)
@Column(length = 2000)
public String description;
@Column(nullable = true)
public double item_price;
/**
* Empty Constructor
*/
public Item() {
}
/**
* Constructor with Data
* @param b_id
* @param item_name
* @param manufacturer
* @param description
* @param item_price
*/
public Item(long b_id ,String item_name, String manufacturer, String description, double item_price)
{
this.b_id = b_id;
this.item_name = item_name;
this.manufacturer = manufacturer;
this.description = description;
this.item_price = item_price;
}
/**
* Get and Set Methods
* @return
*/
public Long getId() {
return item_id;
}
public void setId(Long i) {
this.item_id = i;
}
public Long getB_id() {
return b_id;
}
public void setB_id(Long i) {
this.b_id = i;
}
public String getItem_name() {
return item_name;
}
public void setItem_name(String i) {
this.item_name = i;
}
public String getManufacturer() {
return manufacturer;
}
public void setManufacturer(String m) {
this.manufacturer = m;
}
public String getDescription() {
return description;
}
public void setDescription(String d) {
this.description = d;
}
public double getItem_price() {
return item_price;
}
public void setItem_price(double i) {
this.item_price = i;
}
/**
* Search Methods
*/
@Override
public int hashCode() {
int hash = 0;
hash += (item_id != null ? item_id.hashCode() : 0);
return hash;
}
@Override
public boolean equals(Object o) {
if (!(o instanceof Item)) {
return false;
}
Item other = (Item) o;
if ((this.item_id == null && other.item_id != null) || (this.item_id != null && !this.item_id.equals(other.item_id))) {
return false;
}
return true;
}
/**
* ToString Override Method
* */
@Override
public String toString() {
return String.format (
"%s%s\n%s%s\n%s%s\n%s%s\n%s%s\n",
"Barcode: ",Long.toString(b_id),
"Item Name: ",this.item_name,
"Manufacturer: ",this.manufacturer,
"Item Price: ",Double.toString(item_price),
"Category: ",this.description
);
}
}
```
Scarf.java
```
package Entities;
import java.io.Serializable;
import javax.persistence.Column;
import javax.persistence.Entity;
import javax.validation.constraints.NotNull;
/**
* Creates Scarf Subclass Object
*
*/
@Entity
public class Scarf extends Item implements Serializable
{
// Attributes
public static final String SCARFS = "Scarf.findAllScarfs";
private static final long serialVersionUID = 1L;
@Column(nullable = true)
public long t_id;
@Column(nullable = true)
public double discount;
@Column(nullable = true)
public double final_price;
/**
* Empty Constructor
*/
public Scarf() {
}
/**
* Constructor with Data
* @param b_id
* @param item_name
* @param manufacturer
* @param description
* @param item_price
* @param t_id
* @param discount
* @param final_price
*/
public Scarf(long b_id, String item_name, String manufacturer, String description, double item_price, long t_id, double discount, double final_price)
{
super(b_id, item_name, manufacturer, description, item_price);
this.t_id = t_id;
this.discount = discount;
this.final_price = final_price;
}
/**
* Get and Set Methods
*
* @return
*/
public Long getT_id() {
return t_id;
}
public void setT_id(Long i) {
this.t_id = i;
}
public double getDiscount() {
return discount;
}
public void setDiscount(double i) {
this.discount = i;
}
public double getFinal_price() {
return final_price;
}
public void setFinal_price(double i) {
this.final_price = i;
}
/**
* ToString Override Method
*/
@Override
public String toString() {
return String.format(
"%s%s\n%s%s\n%s%s\n%s%s\n%s%s\n%s%s\n%s%s\n%s%s\n%s%s\n",
"Item ID: ", Long.toString(this.item_id),
"Barcode: ", Long.toString(this.b_id),
"Item Name: ", this.item_name,
"Manufacturer: ", this.manufacturer,
"Description: ", this.description,
"Item Price: ", Double.toString(item_price),
"Tag ID: ", Long.toString(this.t_id),
"Discount: ", Double.toString(discount),
"Final Price: ", Double.toString(final_price)
);
}
}
```
Create.xhtml
```
[Cancel](/home/)
```
Updated logs after adding the property:
```
Finer: client acquired: 1783508080
Finer: TX binding to tx mgr, status=STATUS_ACTIVE
Finer: acquire unit of work: 2032803813
Finer: TX afterCompletion callback, status=ROLLEDBACK
Finer: release unit of work
Finer: client released
Warning: A system exception occurred during an invocation on EJB ScarfProducer, method: public void Beans.utility.Producer.create(java.lang.Object)
Warning: javax.ejb.EJBException
at com.sun.ejb.containers.EJBContainerTransactionManager.processSystemException(EJBContainerTransactionManager.java:748)
at com.sun.ejb.containers.EJBContainerTransactionManager.completeNewTx(EJBContainerTransactionManager.java:698)
at com.sun.ejb.containers.EJBContainerTransactionManager.postInvokeTx(EJBContainerTransactionManager.java:503)
at com.sun.ejb.containers.BaseContainer.postInvokeTx(BaseContainer.java:4566)
at com.sun.ejb.containers.BaseContainer.postInvoke(BaseContainer.java:2074)
at com.sun.ejb.containers.BaseContainer.postInvoke(BaseContainer.java:2044)
at com.sun.ejb.containers.EJBLocalObjectInvocationHandler.invoke(EJBLocalObjectInvocationHandler.java:220)
at com.sun.ejb.containers.EJBLocalObjectInvocationHandlerDelegate.invoke(EJBLocalObjectInvocationHandlerDelegate.java:88)
at com.sun.proxy.$Proxy233.create(Unknown Source)
at Beans.__EJB31_Generated__ScarfProducer__Intf____Bean__.create(Unknown Source)
at Beans.ScarfController.persistScarf(ScarfController.java:117)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at com.sun.el.parser.AstValue.invoke(AstValue.java:289)
at com.sun.el.MethodExpressionImpl.invoke(MethodExpressionImpl.java:304)
at org.jboss.weld.util.el.ForwardingMethodExpression.invoke(ForwardingMethodExpression.java:40)
at org.jboss.weld.el.WeldMethodExpression.invoke(WeldMethodExpression.java:50)
at com.sun.faces.facelets.el.TagMethodExpression.invoke(TagMethodExpression.java:105)
at javax.faces.component.MethodBindingMethodExpressionAdapter.invoke(MethodBindingMethodExpressionAdapter.java:87)
at com.sun.faces.application.ActionListenerImpl.processAction(ActionListenerImpl.java:102)
at javax.faces.component.UICommand.broadcast(UICommand.java:315)
at javax.faces.component.UIViewRoot.broadcastEvents(UIViewRoot.java:790)
at javax.faces.component.UIViewRoot.processApplication(UIViewRoot.java:1282)
at com.sun.faces.lifecycle.InvokeApplicationPhase.execute(InvokeApplicationPhase.java:81)
at com.sun.faces.lifecycle.Phase.doPhase(Phase.java:101)
at com.sun.faces.lifecycle.LifecycleImpl.execute(LifecycleImpl.java:198)
at javax.faces.webapp.FacesServlet.service(FacesServlet.java:646)
at org.apache.catalina.core.StandardWrapper.service(StandardWrapper.java:1682)
at org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:318)
at org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:160)
at org.apache.catalina.core.StandardPipeline.doInvoke(StandardPipeline.java:734)
at org.apache.catalina.core.StandardPipeline.invoke(StandardPipeline.java:673)
at com.sun.enterprise.web.WebPipeline.invoke(WebPipeline.java:99)
at org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:174)
at org.apache.catalina.connector.CoyoteAdapter.doService(CoyoteAdapter.java:415)
at org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:282)
at com.sun.enterprise.v3.services.impl.ContainerMapper$HttpHandlerCallable.call(ContainerMapper.java:459)
at com.sun.enterprise.v3.services.impl.ContainerMapper.service(ContainerMapper.java:167)
at org.glassfish.grizzly.http.server.HttpHandler.runService(HttpHandler.java:201)
at org.glassfish.grizzly.http.server.HttpHandler.doHandle(HttpHandler.java:175)
at org.glassfish.grizzly.http.server.HttpServerFilter.handleRead(HttpServerFilter.java:235)
at org.glassfish.grizzly.filterchain.ExecutorResolver$9.execute(ExecutorResolver.java:119)
at org.glassfish.grizzly.filterchain.DefaultFilterChain.executeFilter(DefaultFilterChain.java:284)
at org.glassfish.grizzly.filterchain.DefaultFilterChain.executeChainPart(DefaultFilterChain.java:201)
at org.glassfish.grizzly.filterchain.DefaultFilterChain.execute(DefaultFilterChain.java:133)
at org.glassfish.grizzly.filterchain.DefaultFilterChain.process(DefaultFilterChain.java:112)
at org.glassfish.grizzly.ProcessorExecutor.execute(ProcessorExecutor.java:77)
at org.glassfish.grizzly.nio.transport.TCPNIOTransport.fireIOEvent(TCPNIOTransport.java:561)
at org.glassfish.grizzly.strategies.AbstractIOStrategy.fireIOEvent(AbstractIOStrategy.java:112)
at org.glassfish.grizzly.strategies.WorkerThreadIOStrategy.run0(WorkerThreadIOStrategy.java:117)
at org.glassfish.grizzly.strategies.WorkerThreadIOStrategy.access$100(WorkerThreadIOStrategy.java:56)
at org.glassfish.grizzly.strategies.WorkerThreadIOStrategy$WorkerThreadRunnable.run(WorkerThreadIOStrategy.java:137)
at org.glassfish.grizzly.threadpool.AbstractThreadPool$Worker.doWork(AbstractThreadPool.java:565)
at org.glassfish.grizzly.threadpool.AbstractThreadPool$Worker.run(AbstractThreadPool.java:545)
at java.lang.Thread.run(Thread.java:748)
```
[](https://i.stack.imgur.com/B5LKb.png)<issue_comment>username_1: The problem is that you didn't rebuild/redeploy your jar file. I can see that it was last built on `Dec 31 17:46:18 AEST 2015`. In your .war file there is a file inside `HatAndScarf\META-INF\maven\josh\Assignment2` which exposes this information:
```
#Generated by Maven
#Thu Dec 31 17:46:18 AEST 2015
version=1.0
groupId=josh
artifactId=Assignment2
```
I think if you try to build it via maven and then redeploy it the problem should go away.
Upvotes: 3 [selected_answer]<issue_comment>username_2: Alter your database table to drop the NOT NULL constraint on the ITEM_QTY column.
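If you take this route instead of rebuilding, the statement would look something like the following (a sketch assuming the Apache Derby database that GlassFish ships with, with the table and column names taken from the error message; adjust for your schema):

```
-- Hypothetical fix: allow NULLs in the new column (Apache Derby syntax)
ALTER TABLE ITEM ALTER COLUMN ITEM_QTY NULL;
```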
Upvotes: 0 |
2018/03/20 | 494 | 1,797 | <issue_start>username_0: I put an invalid value in `config.properties`:
```
# Browser: CHROME, CHROME_NOHEAD, FF32, FF32_NOHEAD, FF64, FF64_NOHEAD
# EDGE, IE32, IE64
browser=oPerA
```
I set up the switch with a default clause:
```
public class ServiceHook {
public enum Browser {
CHROME, CHROME_NOHEAD, FF32, FF32_NOHEAD, FF64, FF64_NOHEAD, EDGE, IE32, IE64
}
public void setUp() throws Throwable {
String browserConfig = Configuration.readKey("browser"); // Reads the String from the config file
Browser browser = Browser.valueOf(browserConfig.toUpperCase());
switch (browser) {
case CHROME:
// Do something for each case
break;
case CHROME_NOHEAD: break;
case FF32: break;
case FF32_NOHEAD: break;
case FF64: break;
case FF64_NOHEAD: break;
case EDGE: break;
case IE32: break;
case IE64: break;
default:
// Do something for unsupported browser
break;
}
}
}
```
However, I am getting the exception below:
>
> java.lang.IllegalArgumentException: No enum constant
> jcucumberng.steps.defs.ServiceHook.Browser.OPERA
>
>
>
Any advice on why it does not trigger the default clause?<issue_comment>username_1: The code doesn't get that far.
It's the `Browser browser = Browser.valueOf(browserConfig.toUpperCase());` that throws the Exception. You have to put that in a try-catch block to handle invalid values.
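A minimal sketch of that guard (the `parseBrowser` helper and the CHROME fallback are assumptions for illustration; the enum is trimmed to a few constants):

```java
import java.util.Locale;

public class BrowserConfig {
    public enum Browser { CHROME, FF32, EDGE }

    // Hypothetical helper: parse a config value into an enum, falling back to
    // a default instead of letting IllegalArgumentException escape.
    public static Browser parseBrowser(String raw) {
        if (raw == null) {
            return Browser.CHROME; // missing value -> default
        }
        try {
            return Browser.valueOf(raw.trim().toUpperCase(Locale.ROOT));
        } catch (IllegalArgumentException e) {
            return Browser.CHROME; // unsupported value (e.g. "oPerA") -> default
        }
    }

    public static void main(String[] args) {
        System.out.println(parseBrowser("oPerA")); // falls back to CHROME
        System.out.println(parseBrowser("ff32"));  // parses to FF32
    }
}
```

With this shape, the `switch` never sees an invalid value, so its `default` clause is only needed for enum constants you have not handled.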
Upvotes: 2 [selected_answer]<issue_comment>username_2: The problem is in `Browser browser = Browser.valueOf(browserConfig.toUpperCase());` (7th line), so
**debug at the `Browser.valueOf(browserConfig.toUpperCase());` statement** to see what exactly is happening (see what value is in `browser`).
Upvotes: 0 |
2018/03/20 | 266 | 974 | <issue_start>username_0: I encountered the error stated in the title in my Visual Studio Code on Mac. The detailed error message is below:
[](https://i.stack.imgur.com/rO7cy.png)
I have already targeted my project to Android 7.1 but the error keeps displaying.
[](https://i.stack.imgur.com/iGGhN.png) |
2018/03/20 | 807 | 2,399 | <issue_start>username_0: I need a way to get all the locations for a zip code in India.
I already tried the link below:
```
http://maps.googleapis.com/maps/api/geocode/json?address=110001
```
But here I only get the city and state name, and I need to fetch all locations for a zip code, like:
```
{
"Pincode": "110001",
"Country": "INDIA",
"City": "Central Delhi",
"Address": "Baroda House"
},
{
"Pincode": "110001",
"Country": "INDIA",
"City": "Central Delhi",
"Address": "Bengali Market"
},
{
"Pincode": "110001",
"Country": "INDIA",
"City": "Central Delhi",
"Address": "Bhagat Singh Market"
},
{
"Pincode": "110001",
"Country": "INDIA",
"City": "Central Delhi",
"Address": "Connaught Place"
},
{
"Pincode": "110001",
"Country": "INDIA",
"City": "Central Delhi",
"Address": "Constitution House"
},
```
I was searching for a solution on Google but can't find any that meets my requirements. Can anyone please advise how I can get all the locations of a zip code?
Thanks in advance for your help.<issue_comment>username_1: Log in to the site:
>
> <https://data.gov.in/catalog/all-india-pincode-directory?filters%5Bfield_catalog_reference%5D=85840&format=json&offset=0&limit=6&sort%5Bcreated%5D=desc>
>
>
>
Try it; if it doesn't work at first, refresh once and it will.
It worked for me.
Upvotes: 1 <issue_comment>username_2: Alternatively, you can use <https://thezipcodes.com/>. Get the API key from the account section and hit the GET request:
```sh
https://thezipcodes.com/api/v1/search?zipCode=302010&countryCode={2digitCountryCode}&apiKey={apiKey}
```
```
{
"success": true,
"location": [
{
"country": "RU",
"countryCode2": "RU",
"countryCode3": "",
"state": "Орловская Область",
"stateCode2": "56",
"latitude": "53.0747",
"longitude": "36.2468",
"zipCode": "302019",
"city": "Орел 19"
},
{
"country": "India",
"countryCode2": "IN",
"countryCode3": "IND",
"state": "Rajasthan",
"stateCode2": "24",
"latitude": "26.7865",
"longitude": "75.5809",
"zipCode": "302019",
"city": "Shyam Nagar (Jaipur)",
"timeZone": "Indian Standard Time"
}
]
}
```
Upvotes: 0 |
2018/03/20 | 1,049 | 3,543 | <issue_start>username_0: Consider the following code
```
import numpy as np
a = np.array([1,2])
b = np.array([3,4])
c = [a,b]
np.sum(x for x in c) # array([4,6])
np.sum(c) # 10
```
What is the rationale behind treating lists differently from generator expressions?<issue_comment>username_1: A generator expression yields one element at a time.
Since `c` has the arrays `a` and `b` as its elements, `np.sum(x for x in c)` applies the sum to the individual arrays.
`np.sum(c)` sums up all the elements of `c` after flattening them.
Upvotes: 0 <issue_comment>username_2: The reason you see the difference is that in the first case you are iterating over a list of arrays instead of separate elements. I guess the rational explanation is that `numpy` doesn't know in advance what is yielded from the generator and sums each element separately. The latter statement treats your list of arrays as a matrix and sums all its elements by default, because they are all known in advance.
You can achieve the same result with:
```
assert np.sum(c, axis=0).tolist() == [4, 6]
```
Otherwise, you can try:
```
from itertools import chain
assert np.sum(list(chain(*c))) == 10
```
Upvotes: 0 <issue_comment>username_3: In numpy 1.13.3, `numpy.sum()` uses Python's build-in `sum()` to sum up the elements of a generator one by one. For array-likes (e.g., lists and arrays), numpy's vectorized implementation of `sum()` (which by default flattens the array-like and sums up its elements) is used.
My guess about the rational: The idea behind generators is that the elements are taken and *processed one by one.* The idea behind vectorization is that all data is ideally taken at once and *processed in parallel.* So, the ideas are complete opposites of each other.
Now, if a user explicitly provides a generator as an argument to `numpy.sum()` then she explicitly wants the generated elements (the vectors [1, 2] and [3, 4] in your example) to be summed up one by one and, therefore, does not want the vectorization.
If a user provides an array-like as an argument to `numpy.sum()` then it is assumed that she wants the vectorized version of `sum()` to be applied. The question is only what the primitive elements of the array-like that should be summed up are. By default, the array-like is flattened and the values of the flattened array-like (1, 2, 3, and 4 in your example) are summed up. This behavior can be changed using the `axis` parameter.
That is why generators are processed differently from arrays.
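Both behaviours can be checked directly. (In recent NumPy versions, passing a generator to `np.sum` is deprecated, so the generator path is shown here via the builtin `sum` it delegated to.)

```python
import numpy as np

a = np.array([1, 2])
b = np.array([3, 4])
c = [a, b]

# Generator-style path: elements are added one by one, so arrays sum elementwise.
gen_style = sum(x for x in c)      # what np.sum(generator) effectively did
print(gen_style)                   # [4 6]

# Array-like path: the list is flattened and all scalars are summed...
print(np.sum(c))                   # 10
# ...unless an axis is given, which reproduces the generator-style result.
print(np.sum(c, axis=0))           # [4 6]
```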
Upvotes: 0 <issue_comment>username_4: This:
```
>>> np.sum(x for x in c)
array([4, 6])
```
is equivalent to summing over the first dimension:
```
>>> np.sum(c, axis=0)
array([4, 6])
```
On the other hand, this will sum over all dimensions:
```
>>> np.sum(c)
10
```
Relevant part of the documentation:
>
> `np.sum(a, axis=None, ...`
>
>
> Sum of array elements over a given axis.
> `axis=None`, will sum all of the elements of the input array.
>
>
>
Upvotes: 1 <issue_comment>username_5: Quoting from this pull request <https://github.com/numpy/numpy/pull/10670>
>
> I think the original rationale for this change was to allow the following change to not break this code
>
>
>
> ```
> from numpy import *
>
> def my_func(n):
> return sum(i*i for i in range(n))
>
> ```
>
> Reasons to deprecate it:
>
>
> * sum is the only function to do this, even though any and all also shadow builtins
> * It's not documented
> * It's not consistent with np.array(generator).sum(), which already fails.
>
>
>
Upvotes: 1 |
2018/03/20 | 561 | 1,542 | <issue_start>username_0: I use `keychain` to manage `ssh-agent`, and I added it to my `.zshrc`.
I have an ssh key with a passphrase.
In a normal terminal, I can use `ssh` without being asked for the passphrase.
But in tmux, it will ask me to input a passphrase for my ssh key.
These are run in a normal terminal:
```
➜ ~ echo $SSH_AGENT_PID; echo $SSH_AUTH_SOCK
14112
/<KEY>ssh-
lyHqTWPT02HF/agent.14111
```
If running `ssh my.server.domin`, it can connect directly.
I run `tmux` in the same terminal and then
```
➜ ~ echo $SSH_AGENT_PID; echo $SSH_AUTH_SOCK
14112
/<KEY>ssh-ly<KEY>agent.14111
```
If running `ssh my.server.domin`, it will ask me to input the passphrase.<issue_comment>username_1: I found that I have an `alias` for ssh: `alias ssh=ssh-ident`.
If I unalias it, everything works correctly.
Upvotes: 1 [selected_answer]<issue_comment>username_2: It seems to be caused by a stale SSH_AUTH_SOCK environment variable.
See <http://blog.mcpolemic.com/2016/10/01/reconciling-tmux-and-ssh-agent-forwarding.html>
Upvotes: 1 <issue_comment>username_3: Another good approach is to update the environment variables using a `PROMPT_COMMAND` (`bash`) / `precmd` hook (`zsh`).
The relevant snippet is
```bash
if [ -n "${TMUX}" ]; then
eval "$(tmux show-environment -s)"
fi
```
Sources:
* <https://without-brains.net/2020/08/05/tmux-and-ssh-agent-forwarding/>
* <https://www.johntobin.ie/blog/updating_environment_variables_from_tmux/>
Upvotes: 0 |
2018/03/20 | 516 | 1,838 | <issue_start>username_0: My code is as shown below:
```
private String getFormattedDate(String date){
try {
SimpleDateFormat formatter = new SimpleDateFormat("yyyy-MM-dd'T'HH:mm:ss.SSS'Z'");
formatter.setTimeZone(TimeZone.getTimeZone("UTC"));
Date value = formatter.parse(date);
SimpleDateFormat dateFormatter = new SimpleDateFormat("dd-MM HH:mm a");
dateFormatter.setTimeZone(TimeZone.getDefault());
date = dateFormatter.format(value);
} catch (Exception e) {
date = "00-00-0000 00:00";
}
return date;
}
```
What I want to do is convert `2018-03-19T19:24:41.396Z` into the format `19-03 7:24 PM`, but it gives me the output `19-03 19:24 PM`. Am I missing anything in my method that makes it give the wrong output?<issue_comment>username_1: [SimpleDateFormat](https://developer.android.com/reference/java/text/SimpleDateFormat.html)
Capital `H` is hour in day (0-23) and small `h` is hour in am/pm (1-12).
Change this line:
```
SimpleDateFormat dateFormatter = new SimpleDateFormat("dd-MM HH:mm a");
```
to
```
SimpleDateFormat dateFormatter = new SimpleDateFormat("dd-MM hh:mm a");
```
Upvotes: 2 <issue_comment>username_2: `HH` outputs hour in 0-23 format. You must use `hh` instead.
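The difference is easy to see with a fixed instant (a sketch using epoch zero in UTC and `Locale.US` so the AM/PM marker is stable; the class and helper names are made up for the demo):

```java
import java.text.SimpleDateFormat;
import java.util.Date;
import java.util.Locale;
import java.util.TimeZone;

public class HourPatternDemo {
    // Format a Date with the given pattern, pinned to UTC and a US locale
    // so the output is deterministic.
    public static String format(String pattern, Date d) {
        SimpleDateFormat f = new SimpleDateFormat(pattern, Locale.US);
        f.setTimeZone(TimeZone.getTimeZone("UTC"));
        return f.format(d);
    }

    public static void main(String[] args) {
        Date midnight = new Date(0); // 1970-01-01T00:00:00Z
        System.out.println(format("HH:mm a", midnight)); // 00:00 AM  (0-23 clock)
        System.out.println(format("hh:mm a", midnight)); // 12:00 AM  (1-12 clock)
    }
}
```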
Upvotes: 2 [selected_answer]<issue_comment>username_3: Try This
```
SimpleDateFormat formatter = new SimpleDateFormat("yyyy-MM-dd'T'HH:mm:ss.SSS'Z'");
formatter.setTimeZone(TimeZone.getTimeZone("UTC"));
Date value = formatter.parse(date);
SimpleDateFormat dateFormatter = new SimpleDateFormat("dd-MM hh:mm a");
dateFormatter.setTimeZone(TimeZone.getDefault());
date = dateFormatter.format(value);
System.out.println("Date :- "+date);
```
Upvotes: 1 |
2018/03/20 | 304 | 1,159 | <issue_start>username_0: The page refreshes when I use the image upload function in the project, and I lose the old data in the TextBoxes. How can I keep the previous data in the TextBoxes when the page is refreshed?
The technologies I use:
MVC, C# and web languages<issue_comment>username_1: This would be a prime candidate for localStorage: you can save key-value pairs to be persisted in the browser.
<https://developer.mozilla.org/en-US/docs/Web/API/Window/localStorage>
If you want to store data to persist on browser refresh
```
window.localStorage.setItem('key', data)
```
and when retrieving
```
window.localStorage.getItem('key');
```
Upvotes: 2 <issue_comment>username_2: use Window.localStorage
like:
```
// Check browser support
if (typeof(Storage) !== "undefined") {
// Store
localStorage.setItem("lastname", "Smith");
// Retrieve
document.getElementById("result").innerHTML = localStorage.getItem("lastname");
} else {
document.getElementById("result").innerHTML = "Sorry, your browser does not support Web Storage...";
}
```
see [More](https://developer.mozilla.org/en-US/docs/Web/API/Window/localStorage)
Upvotes: 0 |
2018/03/20 | 1,030 | 3,792 | <issue_start>username_0: I have a QPushButton, and when I click the button I call a method that takes two parameters; in this example: `exampleMethod(int i, double d)`.
Now I connect the click event from QPushButton `button` to the exampleMethod like this:
`connect(button, &QPushButton::clicked,this, &Example::exampleMethod);`
But this doesn't work because the parameters of `clicked()` and `exampleMethod(int, double)` are not compatible.
Now I created a extra signal: `exampleSignal(int i, double d)` to connect to my slot:
`connect(this, &Example::exampleSignal, this, &Example::exampleMethod);`
And an additional slot with no parameters, `exampleMethodToCallFromButtonClick()`, to call from the QPushButton `clicked()`, in which I emit the signal:
```
Example::Example(QWidget *parent) : QWidget(parent){
button = new QPushButton("Click", this);
connect(button, &QPushButton::clicked,this, &Example::exampleMethodToCallFromButtonClick);
connect(this, &Example::exampleSignal, this, &Example::exampleMethod);
}
void Example::exampleMethod(int i, double d){
qDebug() << "ExampleMethod: " << i << " / " << d;
}
void Example::exampleMethodToCallFromButtonClick(){
emit exampleSignal(5,3.6);
}
```
This works fine.
1) Now my first question: Is this really the best way to do this **without lambda**?
With lambda it looks even nicer and I don't need two connect statements:
`connect(button, &QPushButton::clicked, [this]{exampleMethod(5, 3.6);});`
2) My second question: **with lambda**, is this the best way to do this or are there any better ways to solve it?
I also considered saving the parameters of `exampleMethod` as member variables, calling the method without parameters, and reading the member variables instead of the parameters, but I think that's not such a good way.
Thanks for your help!<issue_comment>username_1: I wouldn't do either of those things. Receive the signal, gather your parameters and then just call `exampleMethod`. A lambda is more appropriate when the parameters are known at the point where you connect.
```
Example::Example(QWidget *parent) : QWidget(parent){
button = new QPushButton("Click", this);
connect(button, &QPushButton::clicked, this, &Example::onButtonClicked);
}
void Example::exampleMethod(int i, double d){
qDebug() << "ExampleMethod: " << i << " / " << d;
}
void Example::onButtonClicked(){
int i = ...;
double d = ...;
exampleMethod(i, d);
}
```
Upvotes: 3 [selected_answer]<issue_comment>username_2: In addition to the single-method approach [in the other answer](https://stackoverflow.com/a/49384062/1329652), perhaps the values of `i` and `d` are unrelated and it'd make sense to factor them out to their own methods:
```
int Example::iValue() const {
...
}
double Example::dValue() const {
...
}
```
Then, the following are largely interchangeable. Note, however, that the lambda re-evaluates `iValue()` and `dValue()` every time the signal fires, whereas `std::bind` captures the values once, when the connection is made:
```
connect(..., this, [this]{ exampleMethod(iValue(), dValue()); });
connect(..., this, std::bind(&Example::exampleMethod, this, iValue(), dValue()));
```
The choice between `onButtonClicked()` and the use of `iValue()` and `dValue()` is determined mostly by whether the values are useful when factored out separately, and whether for code comprehension it makes sense to specify the call at the `connect` site, or move it out to an individual method.
Finally, if you do use [the single-method approach](https://stackoverflow.com/a/49384062/1329652), and the button is instantiated with `setupUi`, i.e. you've designed `Example` in Designer, you can save the `connect` call by naming the handler method appropriately:
```
Q_SLOT void Example::on_button_clicked();
```
The `button` here is the name of the button object in the .ui file. The connection will be made automatically by `Ui::Example::setupUi(QWidget*)`.
Upvotes: 1 |
2018/03/20 | 642 | 2,315 | <issue_start>username_0: I have a Django ModelForm with an ImageField. When the user edits the model the Image Field displays:
[](https://i.stack.imgur.com/YEvXf.png)
Is there a way to use the URL presented beside "Currently" to display/render the set image instead of the actual URL?
2018/03/20 | 1,194 | 5,613 | <issue_start>username_0: I am trying to get a UIActivityViewController for presenting only the options for Facebook, Twitter and Instagram. So far I have done:
```
let shareText = "Hello, world!"
let image = UIImage(named: "TheImage")
let activityViewController = UIActivityViewController(activityItems: [shareText,image], applicationActivities: nil)
activityViewController.excludedActivityTypes = [UIActivityType.addToReadingList,
UIActivityType.airDrop,
UIActivityType.assignToContact,
UIActivityType.copyToPasteboard,
UIActivityType.mail,
UIActivityType.message,
UIActivityType.openInIBooks,
UIActivityType.print,
UIActivityType.saveToCameraRoll
]
present(activityViewController, animated: true, completion: {})
```
But the UIActivityViewController is still giving me unwanted options.
I also thought of an approach with NSExtensionItem, but I am not sure about it.
Thank you<issue_comment>username_1: There are options available for Facebook and Twitter; you just need to add the unwanted types to excludedActivityTypes, and you are missing some of them.
An Instagram option is still not available in the built-in activity types.
```
let image = UIImage(named: "TheImage")
let activityViewController = UIActivityViewController(activityItems: [shareText,image], applicationActivities: nil)
activityViewController.excludedActivityTypes = [.addToReadingList,
.airDrop,
.assignToContact,
.copyToPasteboard,
.mail,
.message,
.openInIBooks,
.print,
.saveToCameraRoll,
.postToWeibo,
.copyToPasteboard,
.saveToCameraRoll,
.postToFlickr,
.postToVimeo,
.postToTencentWeibo,
.markupAsPDF
]
present(activityViewController, animated: true, completion: {})
```
I hope this will help you.
The following Activity types are available.
```
extension UIActivityType {
@available(iOS 6.0, *)
public static let postToFacebook: UIActivityType
@available(iOS 6.0, *)
public static let postToTwitter: UIActivityType
@available(iOS 6.0, *)
public static let postToWeibo: UIActivityType // SinaWeibo
@available(iOS 6.0, *)
public static let message: UIActivityType
@available(iOS 6.0, *)
public static let mail: UIActivityType
@available(iOS 6.0, *)
public static let print: UIActivityType
@available(iOS 6.0, *)
public static let copyToPasteboard: UIActivityType
@available(iOS 6.0, *)
public static let assignToContact: UIActivityType
@available(iOS 6.0, *)
public static let saveToCameraRoll: UIActivityType
@available(iOS 7.0, *)
public static let addToReadingList: UIActivityType
@available(iOS 7.0, *)
public static let postToFlickr: UIActivityType
@available(iOS 7.0, *)
public static let postToVimeo: UIActivityType
@available(iOS 7.0, *)
public static let postToTencentWeibo: UIActivityType
@available(iOS 7.0, *)
public static let airDrop: UIActivityType
@available(iOS 9.0, *)
public static let openInIBooks: UIActivityType
@available(iOS 11.0, *)
public static let markupAsPDF: UIActivityType
}
```
Upvotes: 3 [selected_answer]<issue_comment>username_2: **Apple** provides support for some of the default `UIActivityTypes` that includes both `Facebook` and `Twitter` as required by you.
You can get a complete list of `UIActivityTypes` here: <https://developer.apple.com/documentation/uikit/uiactivitytype>
By default, all the `UIActivityTypes` appear in the `UIActivityController`. The activities you don't want to appear in the `UIActivityController` can be added to the excluded list, e.g.
```
activityViewController.excludedActivityTypes = [.postToFlickr, .postToVimeo]
```
Also, whenever an `App` that includes a **`Share App Extension`** is installed on the device, it is also added to the `UIActivityController`.
***Example*:**
**WhatsApp**. Whenever you install WhatsApp on your device, it will appear as a `Share App Extension` in the `UIActivityController`.
You cannot remove them but can only hide them from the `UIActivityController's` **More** option.
Let me know if you still face any issues.
Upvotes: 1 <issue_comment>username_3: As of now (March 2019), the Instagram share option will only appear if you set an image, and only that image, in the activityItems. If you add a TEXT or URL object there as well, the Instagram option will not show up.
Twitter, on the other hand, accepts a TEXT and an IMAGE, but if you pass a URL too, it will not show up as an option either.
Upvotes: 1 |
2018/03/20 | 1,077 | 3,306 | <issue_start>username_0: ```
var myObj = [
{ "heading":"Enter Name", "data_type":"text", "fieldid":"name1" },
{ "heading":"Enter Phone", "data_type":"text", "fieldid":"phone1" }
];
var anotherObj=[
{ "heading":"Enter Name", "data_type":"text", "fieldid":"name2" },
{ "heading":"Enter Phone", "data_type":"text", "fieldid":"phone2" }
];
```
I want to join these two: that is:
```
[{ "heading":"Enter Name", "data_type":"text", "fieldid":"name1" },
{ "heading":"Enter Phone", "data_type":"text", "fieldid":"phone1" },
{ "heading":"Enter Name", "data_type":"text", "fieldid":"name2" },
{ "heading":"Enter Phone", "data_type":"text", "fieldid":"phone2" }];
```
where `"fieldid":"name1"==> 1` means table row no 1
`"fieldid":"name2"==> 2` means table row no 2
I also want a delete operation: suppose I delete the 2nd row, then the JSON will become
```
[{ "heading":"Enter Name", "data_type":"text", "fieldid":"name1" },
{ "heading":"Enter Phone", "data_type":"text", "fieldid":"phone1" }];
```<issue_comment>username_1: First of all the variable names are not relevant. `myObj` and `anotherObj` are arrays of objects. So basically, you need to combine two arrays.
You can use:
```
Array.prototype.push.apply(arr1,arr2);
console.log(arr1); // final merged result will be in arr1
```
Hope it helps
To delete, you can use:
```
array.splice(index, 1);
```
Check splice reference for more details.
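Putting the two steps together for the data in the question (each table row occupies two array elements, so row n starts at index (n - 1) * 2), a sketch:

```javascript
var myObj = [
  { heading: "Enter Name",  data_type: "text", fieldid: "name1" },
  { heading: "Enter Phone", data_type: "text", fieldid: "phone1" }
];
var anotherObj = [
  { heading: "Enter Name",  data_type: "text", fieldid: "name2" },
  { heading: "Enter Phone", data_type: "text", fieldid: "phone2" }
];

// merge (same result as Array.prototype.push.apply(myObj, anotherObj))
var merged = myObj.concat(anotherObj);

// delete table row 2: remove its 2 objects starting at index (2 - 1) * 2
merged.splice((2 - 1) * 2, 2);
```

After this, only the two `name1`/`phone1` objects of row 1 remain.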
Upvotes: 0 <issue_comment>username_2: You could use:
```
var result = myObj.concat(anotherObj);
```
To join the objects, and then filter to delete the row you want
```
result = result.filter(x => x.fieldid != 'name2')
```
Upvotes: 1 <issue_comment>username_3: ```
// concat arrays to have all data in one place
var sumArray = myObj.concat(anotherObj);
// delete 1 element starting at index 2
sumArray.splice(2, 1);
```
You should read more about operations with arrays here:
<https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Array>
Upvotes: 0 <issue_comment>username_4: I don't know if this is exactly what you want, but you can join two arrays with [concat](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Array/concat)() and remove elements from them with [splice](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Array/splice)().
```js
var myObj = [{
"heading": "Enter Name",
"data_type": "text",
"fieldid": "name1"
},
{
"heading": "Enter Phone",
"data_type": "text",
"fieldid": "phone1"
}
];
var anotherObj = [{
"heading": "Enter Name",
"data_type": "text",
"fieldid": "name2"
},
{
"heading": "Enter Phone",
"data_type": "text",
"fieldid": "phone2"
}
];
function join(obj1, obj2) {
return obj1.concat(obj2);
}
function removeRow(obj, row) {
obj.splice((row - 1) * 2, 2);
return obj;
}
var joined = join(myObj, anotherObj);
console.log("Join:", joined);
console.log("Remove 2. row:", removeRow(joined, 2));
```
Upvotes: 0 <issue_comment>username_5: try this.
```
var x = myObj.concat(anotherObj);
x = x.filter(function (item) {
    return item.fieldid != 'name2' && item.fieldid != 'phone2';
});
console.log(x);
```
Upvotes: 0 |
2018/03/20 | 1,055 | 3,141 | <issue_start>username_0: I have three files. `test.c` `test.h` and `use.c`. Code in each of them looks like:
test.h:
```
#pragma once
#define unused __attribute__((unused))
typedef int cmd_fun_t(struct tokens *tokens);
typedef struct fun_desc {
cmd_fun_t *fun;
char *cmd;
char *doc;
} fun_desc_t;
int cmd_exit(struct tokens *tokens);
int cmd_help(struct tokens *tokens);
int cmd_pwd(struct tokens *tokens);
int cmd_cd(struct tokens *tokens);
```
test.c:
```
#include
#include
#include
#include
#include
#include
#include
#include
#include
#include
#include
#include
#include
#include
#include
#include
#include
#include
#include
#include "test.h"
#include "tokenizer.h"
fun_desc_t cmd_table[] = {
  {cmd_help, "?", "show this help menu"},
  {cmd_exit, "exit", "exit the command shell"},
  {cmd_pwd, "pwd", "print working directory"},
  {cmd_cd, "cd", "change directory"},
};

int cmd_pwd(unused struct tokens *tokens){
  char cwd[8192];
  if (getcwd(cwd, sizeof(cwd)) != NULL)
    fprintf(stdout, "%s\n", cwd);
  else
    perror("Error Occured");
  return 1;
}

int cmd_cd(unused struct tokens *tokens){
  if(chdir(tokens_get_token(tokens, 1)) == -1){
    fprintf(stdout, "No such file or directory.\n");
    return -1;
  }
  return 1;
}

/* Prints a helpful description for the given command */
int cmd_help(unused struct tokens *tokens) {
  for (unsigned int i = 0; i < sizeof(cmd_table) / sizeof(fun_desc_t); i++)
    printf("%s - %s\n", cmd_table[i].cmd, cmd_table[i].doc);
  return 1;
}

/* Exits this shell */
int cmd_exit(unused struct tokens *tokens) {
  exit(0);
}
```
use.c:
```
#include "test.h"
int main(){
for(int i = 0; i < sizeof(cmd_table); i++){
}
return 0;
}
```
my assumption is this should work well but when I compile the code, it gives the following error:
>
> ‘cmd\_table’ undeclared (first use in this function) for(int i = 0;
> i < sizeof(cmd\_table); i++)
>
>
>
Any suggestions why this happens?<issue_comment>username_1: cmd_table is defined in test.c, and you are using it in use.c. Since the scope of cmd_table is limited to test.c, you cannot use it in use.c.
Upvotes: 1 <issue_comment>username_2: `cmd_table` is defined in `test.c`. If you want it to be visible to the compiler when it is compiling other C files, there needs to be an `extern` declaration visible when it is compiling the other files. The usual way to do this is to put the `extern` declaration in the header.
```
// in test.h
extern fun_desc_t cmd_table[];
```
Unfortunately, that won't tell you how big the array is. For lookup tables, like this, a common way to solve this problem is to put a null sentinel value at the end.
```
fun_desc_t cmd_table[] = {
{cmd_help, "?", "show this help menu"},
{cmd_exit, "exit", "exit the command shell"},
{cmd_pwd, "pwd", "print working directory"},
{cmd_cd, "cd", "change directory"},
{NULL, NULL, NULL}
};
```
And you change the loop in `main` as follows
```
int main()
{
for(int i = 0; cmd_table[i].fun != NULL; i++)
{
// Do whatever
}
return 0;
}
```
Upvotes: 3 [selected_answer] |
2018/03/20 | 399 | 1,374 | <issue_start>username_0: I run multiple websites from a single laravel app.
Is it possible to have different session lifetimes for them?
E.g. one site should have the standard 2 hours and another should be 1 year.
2018/03/20 | 372 | 1,477 | <issue_start>username_0: Do you know if it is possible to encode responses from an application running on WildFly 10 to gzip format if the request contains an Accept-Encoding: gzip header? I would like the change to be done via the WildFly config only, and the response should contain an indication in a header that it has gzip encoding (Content-Encoding: gzip).
I have read that since WildFly 8 such encoding is possible with a gzip filter. I have tried to add a filter to the WildFly config similar to the one described here: <https://rumianom.pl/rumianom/entry/gzip_content_encoding_in_wildfly>, however, with no success. I tried many predicates like `equals[%{i,Accepted-Encoding},\"gzip\"]` (I found a similar example in the Undertow unit tests) or `regex[pattern='(?:gzip)(;.*)?', value=%{o,Accept-Encoding}, full-match=false]`. Sadly, none of my solutions worked.<issue_comment>username_1: This or a variation thereof should in fact work (using WFly10, and I have been doing so for a long time):
```
<!-- reconstructed sketch: the original XML was lost in formatting; predicate elided -->
<host name="default-host" alias="localhost">
    <filter-ref name="gzipFilter" predicate="..."/>
</host>
...
<filters>
    <gzip name="gzipFilter"/>
</filters>
```
You need to define then reference/configure the filters, in your case, at least the gzipFilter.
Upvotes: 2 <issue_comment>username_2: A filter with a pattern predicate worked. If your pattern has special characters like `.`, you need to escape them.
Note that I did not find any documentation on what the "i" and "o" prefixes in `value=` mean. It appears that if "o" is given, the pattern searches the header in the output from the app (the response), and if "i" is given, it matches the input message (the request).
Upvotes: 3 [selected_answer] |
2018/03/20 | 848 | 2,495 | <issue_start>username_0: I have an array like this :
`array("Peter"=>"35", "Ben"=>"37", "Joe"=>"43");`
and would like to get only my values, for example here :
simpleTab=`array("35","37","43");`
or otherwise like this, it should be better for me to get a list :
```
simpleList=["35";"37";"43"]
```
I'm creating a function because I'll need it several times so here it is :
```
$simpleArray=array(); //don't know if this should be array, i'd like to have a list
foreach($valueAssoc as $key => $value) { //array_push($simpleArray,$value); //NO it returns an array with keys 0, 1, etc
//$simpleArray[]=$value; //same ! I don't want any keys
//I want only an array without any index or key "tableau indexé sans clef"
echo $value;
//would like to have some method like this :
$simpleArray.add($value);//to add value in my list -> can't find the php method
```<issue_comment>username_1: You're looking for [array\_values](http://php.net/manual/en/function.array-values.php) which will return just the values from the key/value pairs.
```
$arr = array("Peter"=>"35", "Ben"=>"37", "Joe"=>"43");
$arrVals = array_values($arr);
```
The code behind it is much the same as you'd expect, with a `foreach` looping through and pushing the result to a new array.
Upvotes: 3 <issue_comment>username_2: No need to create a function for it, there is an inbuilt function [`array_values()`](http://php.net/manual/en/function.array-values.php) that does exactly same as required.
From the docs:
>
> Return all the values of an array
>
>
>
Example:
```
$arr = array("Peter"=>"35", "Ben"=>"37", "Joe"=>"43");
print_r(array_values($arr)); // Array ( [0] => 35 [1] => 37 [2] => 43 )
```
Upvotes: 3 <issue_comment>username_3: Try this -
```
$arrs = array("Peter"=>"35", "Ben"=>"37", "Joe"=>"43");
$array = (array_values($arrs));
echo "
```
";
print_r($array);
```
```
Upvotes: 2 <issue_comment>username_4: If you want the values without keys, you should use `array_values()` and `json_encode()` (which converts the array to a string), like
```
$arr = array("Peter"=>"35", "Ben"=>"37", "Joe"=>"43");
print_r(json_encode(array_values($arr)));
```
OutPut:
```
["35","37","43"]
```
Upvotes: 4 <issue_comment>username_5: Try this guys...
```
$Data_array = array("Peter"=>"35", "Ben"=>"37", "Joe"=>"43");
$output_array = array();
foreach($Data_array as $a)
{
array_push($output_array, $a); // $a already is the value; $a->{$key_name} would fail on a string
}
// Output will be :
$output_array : ["35","37","43"]
```
Upvotes: 0 |
2018/03/20 | 1,497 | 5,131 | <issue_start>username_0: I want to build a Docker image including my custom PowerShell modules. Therefore I use Microsoft's `microsoft/powershell:latest` image, from which I want to create my own image that includes my psm1 files.
For simple testing I have the following Dockerfile:
```
FROM microsoft/powershell:latest
RUN mkdir -p /tmp/powershell
COPY C:/temp/somedirectory /tmp/powershell
```
I want to copy the files included in C:\temp\somedirectory to the docker linux container. When building the image I get the following error:
>
> C:\temp\docker\_posh> docker build --rm -f Dockerfile -t docker\_posh:latest .
>
>
> Sending build context to Docker daemon 2.048kB
>
>
> Step 1/3 : FROM microsoft/powershell:latest
> ---> 9654a0b66645
>
>
> Step 2/3 : RUN mkdir -p /tmp/powershell
> ---> Using cache
> ---> 799972c0dde5
>
>
> Step 3/3 : COPY C:/temp/somedirectory /tmp/powershell
> COPY failed: stat /var/lib/docker/tmp/docker-builder566832559/C:/temp/somedirectory: no such file or directory
>
>
>
Of course, I know that Docker is saying it can't find the file/directory. Therefore I also tried `C:/temp/somedirectory/.`, `C:/temp/somedirectory/*`, and `C:\\temp\\somedirectory\\` as alternative source paths in the Dockerfile -> Result: none of them worked.
```
docker version
Client:
Version: 17.12.0-ce
API version: 1.35
Go version: go1.9.2
Git commit: c97c6d6
Built: Wed Dec 27 20:05:22 2017
OS/Arch: windows/amd64
Server:
Engine:
Version: 17.12.0-ce
API version: 1.35 (minimum version 1.12)
Go version: go1.9.2
Git commit: c<PASSWORD>
Built: Wed Dec 27 20:12:29 2017
OS/Arch: linux/amd64
Experimental: true
```
How can I copy a folder including subfolders and files via a Dockerfile?<issue_comment>username_1: You cannot copy files that are outside the build context when building a docker image. The build context is the path you specify to the docker build command. In the case of the instruction
```
C:\temp\docker_posh> docker build --rm -f Dockerfile -t docker_posh:latest .
```
The `.` specifies that the build context is `C:\temp\docker_posh`. Thus `C:/temp/somedirectory` cannot be accessed. You can either move the Dockerfile to temp, or run the same build command
under `C:\temp`. But remember to fix the Dockerfile instructions to make the path relative to the build context.
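For example, a sketch using the paths from the question: keep the Dockerfile in `C:\temp\docker_posh` but pass `C:\temp` as the context, e.g. `docker build --rm -f C:\temp\docker_posh\Dockerfile -t docker_posh:latest C:\temp`, and write the COPY source relative to that context:

```
FROM microsoft/powershell:latest
RUN mkdir -p /tmp/powershell
# relative to the build context (C:\temp), not an absolute host path
COPY somedirectory /tmp/powershell
```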
Upvotes: 5 [selected_answer]<issue_comment>username_2: If you want to copy something from host machine to running docker container then you can use `docker cp` command like:
```
docker cp [OPTIONS] CONTAINER_NAME:CONTAINER_SRC_PATH DEST_PATH
docker cp [OPTIONS] SRC_PATH CONTAINER_NAME:CONTAINER_DEST_PATH
```
Options:
```
--archive -a Archive mode (copy all uid/gid information)
--follow-link -L Always follow symbol link in SRC_PATH
```
If you don't want to use options you can ignore them.
Upvotes: 1 <issue_comment>username_3: I found this "feature" very inconvenient and just wrote my own build script that copies everything I cared about into a new temp folder, then builds the docker image in the proper context. Here's a Powershell script for anyone that finds this approach helpful.
Folder structure (feel free to change) looks like this for me:
```
lib\somefile1.ps1
lib\somefile2.ps1
dockerfolder\Dockerfile
dockerfolder\mybuilderscript.ps1
my-main-file.ps1
```
Here's the script:
```
# Docker Image Tag Name
$tagname = "myimage"
# Generate temp directory
$parent = [System.IO.Path]::GetTempPath()
[string] $name = [System.Guid]::NewGuid()
$dockerBuildDirectory = New-Item -ItemType Directory -Path (Join-Path $parent $name)
# Copy things we care about from the parent repo into our build dir
# Copy source folder that has our Dockerfile (and this script) into a folder called "dockerworking" in the temp build directory
Copy-Item -Recurse "../dockerfolder" $dockerBuildDirectory/dockerworking
# Copy directories above our current folder into the dockerworking directory
Copy-Item -Recurse "../lib" $dockerBuildDirectory/dockerworking
# Copy the main script into the working directory
Copy-Item -Recurse "../my-main-file.ps1" $dockerBuildDirectory/dockerworking
# Let the user know where these files are for any troubleshooting that comes up
Write-Host -ForegroundColor Green "Docker files stage at $dockerBuildDirectory"
# Create the docker image in the "dockerworking" folder where everything resides
docker build --no-cache --tag $tagname $dockerBuildDirectory/dockerworking
```
The Dockerfile has "local" COPY commands (and all the files are pre-staged to the right location) so Docker is happy.
Dockerfile snippet:
```
FROM mcr.microsoft.com/windows/servercore:ltsc2019
SHELL ["powershell", "-Command", "$ErrorActionPreference = 'Stop'; $ProgressPreference = 'SilentlyContinue';"]
RUN New-Item -Type Directory -Path "c:\scripts" | Out-Null
COPY "lib" "c:\scripts"
COPY "my-main-file.ps1" "c:\scripts"
ENTRYPOINT ["powershell", "-NoProfile", "-Command", "c:\\scripts\\my-main-file.ps1"]
```
Upvotes: 1 |
2018/03/20 | 1,083 | 3,907 | <issue_start>username_0: I've just installed the Atom text editor. When I try to install my first package I get the following error:-
```
"Unable to verify the first certificate"
```
This happens when trying to search for the package e.g. Emmet, AtomLinter. I've restarted my machine but I still get the same error message.
My npm and apm configuration settings are as follows :-
```
$ npm config list && apm config list
; cli configs
metrics-registry = "https://registry.npmjs.org/"
scope = ""
user-agent = "npm/5.5.1 node/v8.9.1 win32 x64"
; userconfig C:\Users\edwarm4\.npmrc
https-proxy = "http://proxy2.nttvpn.via.novonet:80/"
proxy = "http://proxy2.nttvpn.via.novonet:80/"
strict-ssl = false
; builtin config undefined
prefix = "D:\\AppData\\edwarm4\\Application Data\\npm"
; node bin location = C:\Program Files\nodejs\node.exe
; cwd = C:\Program Files\Git
; HOME = C:\Users\edwarm4
; "npm config ls -l" to show all defaults.
; cli configs
globalconfig = "C:\\Users\\edwarm4\\.atom\\.apm\\.apmrc"
user-agent = "npm/3.10.10 node/v6.9.5 win32 x64"
userconfig = "C:\\Users\\edwarm4\\.atom\\.apmrc"
; environment configs
node-gyp = "C:/Users/edwarm4/AppData/Local/atom/app-1.25.0/resources/app/apm/bin /../node_modules/.bin/node-gyp"
python = "C:/Users/edwarm4/AppData/Local/atom/app-1.25.0/resources/app/apm/bin/p ython-interceptor.sh"
; userconfig C:\Users\edwarm4\.atom\.apmrc
https-proxy = "http://proxy2.nttvpn.via.novonet:80/"
proxy = "http://proxy2.nttvpn.via.novonet:80/"
strict-ssl = false
; globalconfig C:\Users\edwarm4\.atom\.apm\.apmrc
cache = "C:\\Users\\edwarm4\\.atom\\.apm"
progress = false
; node bin location = C:\Users\edwarm4\AppData\Local\atom\app-1.25.0\resources\a pp\apm\bin\node.exe
; cwd = C:\Program Files\Git
; HOME = C:\Users\edwarm4
; "npm config ls -l" to show all defaults.
```
Please help!<issue_comment>username_1: With `strict-ssl = false` set, perhaps using the command line will work?
e.g.
`apm search package-name`
`apm install package-name`
Upvotes: 1 <issue_comment>username_2: If you're on Windows 10, temporarily turn off Ransomware Protection. RP running will cause this issue.
Upvotes: 0 <issue_comment>username_3: For me, none of these worked. I have AdGuard installed and had to perform the following (Running on Windows 11):
* Exclude Atom app from filtering by entering the full Atom app path in AdGuard > General Settings > Advanced Settings.
  + App Path is generally: "C:\Users\USER_NAME\AppData\Local\atom\atom.exe"
* Also, you need to add "atom.io" to the HTTPS Exclusions list (Settings -> Network -> HTTPS filtering -> Exclusions -> add atom.io -> then click Save).
This made it work for me with no other configuration changes. If you do not have AdGuard or other application of this type installed, then there may be other anti-virus/Anti-malware applications you may need to exclude this app from.
Upvotes: 2 <issue_comment>username_4: Here is a solution in the official Atom repo: [link](https://github.com/atom/apm/issues/103#issuecomment-131932623)
However, for me it was enough to turn off the VPN, and the install worked.
Upvotes: -1 <issue_comment>username_5: In my case I turned off Fiddler and it worked.
Thanks to atorstling on GitHub for mentioning this solution:
<https://github.com/atom/atom/issues/16361#issuecomment-454049768>
Upvotes: 0 |
2018/03/20 | 1,491 | 3,542 | <issue_start>username_0: The data frame looks like this (the dots are supposed to be vertical dots):
```
Class IncreasingNumber
"A" 3 <- row 1
"A" 5 <- row 2
... ...
"A" 20 <-row 31
"B" 1
"B" 2
... ...
"B" 41 <- row 63
"C" 1
......
```
There are 20 different classes. The number of rows within each class equals 31.
What I wan to do is quite simple. For each class, I want to insert 3 new rows between all rows. So, we want to insert 3 new rows between row 1 and row 2, and yet 3 others in between row 2 and row 3, all the way up to 3 three new rows between row 30 and 31. But NO new rows between row 31 and 32, since these belong to a different class.
We want to do this for every class.
The three rows which we insert has the same Class as its surrounding rows, and the IncreasingNumber value for those three rows are just the 3 equally spaced points within the IncreasingNumber values for the surrounding points.
For example, for the first class, and for the first three rows to be inserted between row 1 and row 2, we see that the IncreasingNumbers are 3 and 5.
Therefore, the first value for the first row that we insert should be 3.5. The next value should be 4. The third value should be 4.5.
So we should get something like this for the first two classes:
```
Class IncreasingNumber
"A" 3
"A" 3.5 <- new row
"A" 4 <- new row
"A" 4.5 <- new row
"A" 5
... ...
"A" 20
"B" 1
"B" 1.25 <- new row
"B" 1.5 <- new row
"B" 1.75 <- new row
"B" 2
... ...
"B" 41
"C" 1
```<issue_comment>username_1: A possible solution with the `data.table`-package:
```
library(data.table)
setDT(df)[, .(inc.num.new = unique(c(mapply(function(x, y) seq(from = x, to = y, length.out = 5),
head(inc.num,-1), tail(inc.num,-1)))))
, by = classes]
```
which gives:
>
>
> ```
> classes inc.num.new
> 1: A 3.00
> 2: A 3.50
> 3: A 4.00
> 4: A 4.50
> 5: A 5.00
> 6: A 5.25
> 7: A 5.50
> 8: A 5.75
> 9: A 6.00
> 10: A 7.00
> 11: A 8.00
> 12: A 9.00
> 13: A 10.00
> 14: B 1.00
> 15: B 1.25
> 16: B 1.50
> 17: B 1.75
> 18: B 2.00
> 19: B 4.00
> 20: B 6.00
> 21: B 8.00
> 22: B 10.00
>
> ```
>
>
---
Used data:
```
df <- data.frame(classes = rep(LETTERS[1:2], c(4,3)), inc.num = c(3,5,6,10,1,2,10))
```
Upvotes: 2 <issue_comment>username_2: You may interpolate the points with `approx` and its `n` argument. Using username_1's data:
```
library(data.table)
setDT(df)
df[ , approx(inc.num, n = (.N - 1) * 4 + 1), by = classes]
# classes x y
# 1: A 1.00 3.00
# 2: A 1.25 3.50
# 3: A 1.50 4.00
# 4: A 1.75 4.50
# 5: A 2.00 5.00
# 6: A 2.25 5.25
# 7: A 2.50 5.50
# 8: A 2.75 5.75
# 9: A 3.00 6.00
# 10: A 3.25 7.00
# 11: A 3.50 8.00
# 12: A 3.75 9.00
# 13: A 4.00 10.00
# 14: B 1.00 1.00
# 15: B 1.25 1.25
# 16: B 1.50 1.50
# 17: B 1.75 1.75
# 18: B 2.00 2.00
# 19: B 2.25 4.00
# 20: B 2.50 6.00
# 21: B 2.75 8.00
# 22: B 3.00 10.00
```
Upvotes: 2 |
2018/03/20 | 1,119 | 2,407 | <issue_start>username_0: I am trying to use JavaScript to extract the name
```
Compare with
<select class="pick-chart">
  {% for i in team_list_pop %}
  <option value="...">{{i.first_name}} {{i.last_name}}</option>
  {% endfor %}
</select>
```
In my JS I tried :
```
$(".pick-chart").change(function(e) {
e.preventDefault();
var val = $(this).val();
var name2 = $("option value=" +'"'+val+'"').text();
});
```
and I got the error :`Uncaught Error: Syntax error, unrecognised expression: option value="52"`
How could I extract {{i.first_name}} ??
2018/03/20 | 1,137 | 4,324 | <issue_start>I have two Activities, and I am adding data to Firestore from these two activities individually. But whenever I add the second activity's data to Firestore, it overwrites the first activity's data. I used the code below in the two activities:
```
firebaseFirestore.collection("Users").document(user_id).set(data)
```
How do I stop the overwriting? I want to save both Activities' data under the same `user_id`.<issue_comment>username_1: I suggest you add one more document or collection so that it can store more than just one set of data values for a single user.
You can create a document references for both activities:
```
firebaseFirestore.collection("Users").document(user_id+"/acitivity1").set(data);
//and
firebaseFirestore.collection("Users").document(user_id+"/acitivity2").set(data);
```
Or you can create a sub-collection for it:
```
firebaseFirestore.collection("Users").document(user_id)
.collection("Activities").document("acitivity1").set(data);
//and
firebaseFirestore.collection("Users").document(user_id)
.collection("Activities").document("acitivity2").set(data);
```
More about hierarchical data [there](https://firebase.google.com/docs/firestore/data-model#hierarchical-data).
Upvotes: 4 [selected_answer]<issue_comment>username_2: If you know that the user document allready exists in firestore then you should use
```
firebaseFirestore.collection("Users").document(user_id).update(data)
```
If you don't know if the document exists then you can use
```
firebaseFirestore.collection("Users").document(user_id).set(data, {merge:true})
```
This performs a deep merge of the data
Alternatively you can do it by using subcollections
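To make the hierarchy concrete, the subcollection approach stores each activity's data at its own document path (the helper below is just an illustration of the layout, not part of the Firestore API):

```javascript
// Hypothetical helper showing where each activity's data ends up.
function activityPath(userId, activity) {
  return "Users/" + userId + "/Activities/" + activity;
}

console.log(activityPath("uid123", "activity1")); // Users/uid123/Activities/activity1
console.log(activityPath("uid123", "activity2")); // Users/uid123/Activities/activity2
```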
Upvotes: 2 <issue_comment>username_3: There are two ways in which you can achieve this. First one would be to use a `Map`:
```
Map<String, Object> map = new HashMap<>();
map.put("yourProperty", "yourValue");
firebaseFirestore.collection("Users").document(user_id).update(map);
```
As you can see, I have used the `update()` method instead of the `set()` method.
The second approach would be to use an object of your model class like this:
```
YourModelClass yourModelClass = new YourModelClass();
yourModelClass.setProperty("yourValue");
firebaseFirestore.collection("Users").document(user_id)
.set(yourModelClass, SetOptions.mergeFields("yourProperty"));
```
As you can see, I have used the `set()` method but I have passed as the second argument `SetOptions.mergeFields("yourProperty")`, which means that we do an update only on a specific field.
Upvotes: 3 <issue_comment>username_4: As per the documentation, you could use as a second parameter `{merge:true}`, in my experience the problem usually is in the fact that you are trying to store different data but with the same key.
Even using `{merge: true}` will always update the current key with the value you are passing in.
Merge:true Works only if the key does not exist already. I believe every key in a document must be unique.
To test it try to pass(keeping `{merge: true}` as the second parameter) data with a different key, it will merge to existing.
Upvotes: 2 <issue_comment>username_5: Try this for a direct update
```
db.collection('cities').doc('restaurants').update({
  rating: "3.2"
});
```
Only the field rating is going to change, the rest of the fields will remain the same.
Let's say you have 3 fields and want to change only 2 of them; you can also do this in a different way:
```
// You get the document 'restaurants' and then do the below
db.collection('cities').doc('restaurants').get().then((doc) => {
// Get the rating field string, parse it to a float, remove 0.4 from it and
// set it to a tmp variable
var tmpRating = parseFloat(doc.data()['rating']) - 0.4;
// Get the array of strings field 'menu' and set it to a tmp variable
var tmpMenu = doc.data()['menu'];
// Push 'spaghetti' string to the tmp array of strings( Can also put an
// adapting string variable instead of a specific string like 'spaghetti'
tmpMenu.push('spaghetti');
// Update only the specific fields 'rating' and 'menu' and keep the rest unchanged.
db.collection('cities').doc(doc.id).update({
rating: tmpRating.toString(), // Don't forget to separate them by commas
menu: tmpMenu
});
});
```
Upvotes: 0 |
2018/03/20 | 829 | 3,611 | <issue_start>I am trying to bind a TextBlock to the items of an ObservableCollection. The TextBlock text should be generated from the elements of the collection. The count of elements in the collection is between 0 and 7 (if it helps). MyClass implements INotifyPropertyChanged. It should be a TextBlock directly, not a ListBox. How can I do it? Thanks!
Update: The problem is that I don't know the number of elements in the collection in advance. I know that it would be better to use a ListBox or ListView in this case, but it is important to do it with a TextBlock or Label.
For example:
1. ObservableCollection contains elements 0, 1, 2.
TextBlock should contain the following: "Values: 0, 1, 2"
2. ObservableCollection contains elements 0, 1.
TextBlock should contain the following: "Values: 0, 1"
```
```
```
ObservableCollection values = new ObservableCollection();
public ObservableCollection Values
{
get => values;
set
{
values = value;
OnPropertyChanged();
}
}
```<issue_comment>username_1: Use a Converter that concatenates those strings:
```
public class StringsCollectionConverter : IValueConverter
{
public object Convert(object value, Type targetType, object parameter, CultureInfo culture)
{
if (value == null) return null;
return string.Join("\n", value as ObservableCollection);
}
public object ConvertBack(object value, Type targetType, object parameter, CultureInfo culture)
{
throw new NotImplementedException();
}
}
```
**Xaml**
```xml
<!-- StringsCollectionConverter is assumed to be declared as a resource -->
<TextBlock Text="{Binding Values, Converter={StaticResource StringsCollectionConverter}}" />
```
Upvotes: 2 [selected_answer]<issue_comment>username_2: You have to bind to the collection using a converter.
The problem is getting the value updated when the collection changes (here I mean not setting Values to a new collection, but adding/removing items to/from the collection).
To implement updating on add/remove you have to use `MultiBinding` with one of the bindings bound to ObservableCollection.Count, so if the count changes, the bound property will be updated.
```
public class MultValConverter : IMultiValueConverter
{
public object Convert(object[] values, Type targetType, object parameter, CultureInfo culture)
{
if (values.Length > 1 && values[0] is ICollection myCol)
{
var retVal = string.Empty;
var firstelem = true;
foreach (var item in myCol)
{
retVal += $"{(firstelem?string.Empty:", ")}{item}";
firstelem = false;
}
return retVal;
}
else
return Binding.DoNothing;
}
public object[] ConvertBack(object value, Type[] targetTypes, object parameter, CultureInfo culture)
{
throw new NotImplementedException("It's a one way converter.");
}
}
```
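For completeness, a sketch of the XAML that wires this up (not part of the original answer; it assumes the converter is declared as a resource with the key `MultValConverter`):

```xml
<TextBlock>
    <TextBlock.Text>
        <MultiBinding Converter="{StaticResource MultValConverter}">
            <Binding Path="Values" />
            <Binding Path="Values.Count" />
        </MultiBinding>
    </TextBlock.Text>
</TextBlock>
```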
Upvotes: 0 <issue_comment>username_3: create an additional string property, which will change every time when collection item[s] change:
```
public class Vm
{
public Vm()
{
// new collection assigned via property because property setter adds event handler
Values = new ObservableCollection();
}
ObservableCollection values;
public ObservableCollection Values
{
get => values;
set
{
if (values != null) values.CollectionChanged -= CollectionChangedHandler;
values = value;
if (values != null) values.CollectionChanged += CollectionChangedHandler;
OnPropertyChanged();
}
}
private void CollectionChangedHandler(object sender, NotifyCollectionChangedEventArgs e)
{
OnPropertyChanged("ValuesText");
}
public string ValuesText
{
get { return "Values: " + String.Join(", ", values);}
}
}
```
and then bind to that property:
```xml
<TextBlock Text="{Binding ValuesText}" />
```
Upvotes: 0 |
2018/03/20 | 715 | 2,705 | <issue_start>username_0: (Taking first steps with web development but it appears that I'm currently unable to use even the most fundamental tools, like e.g. a browser debugger.)
When I init a new vue.js 2.x project as follows...
```
sudo npm install --global vue-cli
vue init simple vue1
```
(accepting defaults for all prompts)
... and then proceed to open the generated index.html in Chromium, I more or less see what I would expect in the browser debugger. More specifically, I can enter the following in the browser console...
```
this.app.$data.greeting = "Blah"
```
... and see the page greeting change immediately, as I would expect.
However, when I use a webpack template...
```
vue init webpack-simple vue2
cd vue2
npm install
npm run dev
```
... I'm unable to find the data in the browser console. From the code in App.vue I would expect that there should be a `msg` or `data` property somewhere, but I get lost pretty quickly when I try to look for it. I do find `.msg` in the served build.js but the code is so obfuscated that I fail to determine the full path to it. I'd probably be able to find it if I invested a lot more time but somehow I feel that it cannot be that complicated and that I must be missing something.
So, how would one go about finding this methodically?<issue_comment>username_1: Using the [Vue DevTools](https://github.com/vuejs/vue-devtools) is your best bet. When you are running Vue through webpack, `this` will be the `window` object, not the Vue instance.
(Technically, you can also do `window.Vue = new Vue({ })` or `var foobar = new Vue({}); window.Vue = foobar` and then access Vue properties at `window.Vue.$data.greeting` or whatever but the devtools are designed for this functionality and will be a lot easier for you)
Upvotes: 1 <issue_comment>username_2: The "simple" vue template save the vue instance in a global `app` var that is accessible everywhere :
```
var app = new Vue({
el: '#app',
...
})
```
With the "webpack-simple" template, the `vue-loader` loader wraps the components in a CommonJS module (which are closures and therefore prevent variables to be directly accessible outside the module).
To remedy this issue for debugging purposes, you can use [Vue devtools](https://github.com/vuejs/vue-devtools).
Eventually you can also make your app variable accessible in dev environment.
Define a `DEV` variable in webpack.config.js (remove it for production):
```
plugins: [
new webpack.DefinePlugin({
DEV: true
})
]
```
And expose what you want in your code :
```
if (typeof DEV !== 'undefined' && DEV) {
window.app = app; //this block will be removed by the build step
}
```
Upvotes: 1 [selected_answer] |
2018/03/20 | 1,671 | 6,380 | <issue_start>Very related to this question ([How can I prevent my Google App Engine cron jobs from burning through all my instance hours?](https://stackoverflow.com/questions/8670532/how-can-i-prevent-my-google-app-engine-cron-jobs-from-burning-through-all-my-ins?answertab=votes#tab-top)), yet unfortunately I experience something similar: for my cron jobs in App Engine, multiple instances are created, which causes quite a high number of instance hours to be billed.
I tried different intervals to determine whether this has an effect on the instance hours, but cannot determine the difference currently. To illustrate, I got billed App Engine Frontend Instances: 263.462 hours for the following three cron-jobs (in just 10 days!):
```
cron:
- description: Push a weather "tick" onto pubsub every 5 minutes
  url: /publish/weather-tick
  schedule: every 5 minutes
- description: Push a crypto "tick" onto pubsub every 5 minutes
  url: /publish/crypto-tick
  schedule: every 5 minutes
- description: Push an astronomy "tick" onto pubsub every 6 hours starting at 00:00
  url: /publish/astronomy-tick
  schedule: every 6 hours synchronized
```
When I changed this to one cron-job for each minute:
```
cron:
- description: Push a "tick" onto pubsub every 1 minutes
  url: /publish/minute-tick
  schedule: every 1 minutes
```
I currently still get multiple instances, see:
[](https://i.stack.imgur.com/uAooY.png)
*Strangely enough, the instance went from 2 --> 3 upon changing to just one cron-job every minute.*
I also have difficulties understanding why there are 3 instances 'created', whilst 0 are 'running', and billing estimates are 1.
Contact with Google pointed me towards the high number of instances (which caused the high number of instance hours), and my parameters set to 'automatic scaling', which allows for this. However, changing it to manual limits the amount of free instance hours (from 28 to, I believe, 9).
**My Situation**
I am running cron-jobs to invoke Pub/Sub events. I have Google Cloud functions that listen to Pub/Sub events so that I can execute database updates based on these ticks. This is following the tutorial provided on [Google Firebase](https://firebase.googleblog.com/2017/03/how-to-schedule-cron-jobs-with-cloud.html?utm_campaign=culture_education_functions_en_06-29-17&utm_source=Firebase&utm_medium=yt-desc "here"). I understand the billing of 15 minutes of an instance, even though its lifespan is shorter; however I cannot understand how or why multiple instances are created if only such a small task is executed. I am especially intrigued, as in the related question ([How can I prevent my Google App Engine cron jobs from burning through all my instance hours?](https://stackoverflow.com/questions/8670532/how-can-i-prevent-my-google-app-engine-cron-jobs-from-burning-through-all-my-ins?answertab=votes#tab-top)), that person is experiencing 24 hours, but expectedly just with one instance. Why am I getting 3? I feel I am missing some conceptual understanding of this process, and the tools to adjust accordingly. Any help or pointers would be very welcome!<issue_comment>username_1: Ok, so as is often the case - annoyingly but helpfully - explaining a problem to someone else helps you think about it. I hope this answer might be useful for someone else having the same questions.
I stumbled upon [this Google Groups](https://groups.google.com/forum/#!topic/google-appengine/eJUQ7NlNkso "this Google Groups") discussion, which talks about how instances of 'older' versions are kept running due to the Flexible Environment, or even when scaling is set to manual. Quite recent comments indicate that this is a roll-back issue from Google's side. Apparently (I was unaware of this), you can delete or stop the older versions and thereby stop the running instances. A related Stack Overflow question and answer can be found here: [How to automatically delete old Google App Engine version instances?](https://stackoverflow.com/questions/34499875/how-to-automatically-delete-old-google-app-engine-version-instances). This also discusses how to ensure automatic instance deletion from older versions (and is probably the correct way to go). Doing it manually: go to your App Engine dashboard, click Versions in the left menu, and select and delete the older versions (if not needed anymore, of course). The [Google Groups](https://groups.google.com/forum/#!topic/google-appengine/eJUQ7NlNkso "this Google Groups") discussion named above also shows how.
The result:
[](https://i.stack.imgur.com/Y2u16.png)
Also, note that there is a version selection in the top left. I could select the older and the latest one (instead of 'all'), which showed me the idle running older instances. This is an additional indication of the 'issue' being the old versions, rather than this one. Learning every day!
Upvotes: 1 [selected_answer]<issue_comment>username_2: Another thing to consider is that having multiple cron jobs with identical or overlapping schedules means that your app will receive multiple requests roughly **at the same time** - peaks of activity.
The dynamic scheduler may, depending on the scaling config and your app's response time, decide that it needs to spawn additional instances to handle such traffic peaks. You could try to tweak your scaling config to prevent that, see [max\_num\_instances for Google App Engine Standard Environment](https://stackoverflow.com/questions/48380505/max-num-instances-for-google-app-engine-standard-environment/48387012#48387012)
Another way to avoid this would be to stagger your requests to smooth the traffic pattern a bit:
* use a single cron entry for the overlapping schedules and inside its handler make a separate request for each job you want to run, possibly staggered in time a bit. For example using delayed push queue tasks, see [do google app engine pull queues return tasks in FIFO order?](https://stackoverflow.com/questions/40475174/do-google-app-engine-pull-queues-return-tasks-in-fifo-order/40478036#40478036)
* use explicit cron schedule times to minimize the overlap, say schedule one at minutes 1, 6, 11, etc and another one at minutes 2, 7, 12, etc. You only have a limited number of possible combinations, though.
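For example, the staggering could look like this in `cron.yaml` (a sketch; it relies on the documented `every N minutes from HH:MM to HH:MM` schedule form, and the exact offsets are only illustrative):

```yaml
cron:
- description: Push a weather "tick" onto pubsub every 5 minutes
  url: /publish/weather-tick
  schedule: every 5 minutes from 00:00 to 23:55
- description: Push a crypto "tick" onto pubsub every 5 minutes, offset by 2 minutes
  url: /publish/crypto-tick
  schedule: every 5 minutes from 00:02 to 23:57
```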
Upvotes: 1 |
2018/03/20 | 492 | 1,814 | <issue_start>With the arrival of emoji display in Google search results, I wanted to run my own tests by embellishing a few `meta_title` values to see if it could have a positive effect for our site.
Unfortunately, after saving my `meta_title` change with the special character, I get no error message but the field is now empty in both PrestaShop and the database.
And for good reason: the encoding in the database is utf8 instead of utf8mb4.
So I modified the encoding of the column as well as the table for which I wanted access to the special characters, but they are transformed into "???".
It seems that I would have to change the encoding of the database itself and that's my question:
**Do you know if there is any contraindication to switch the prestashop database from utf8 to utf8mb4?**
I'm afraid there may be a loss of data somewhere in the database.
I am interested in all the information on the subject.
Thank you in advance.<issue_comment>username_1: I've been through a migration; it did not alter any kind of field or any kind of character (it was a WordPress site, with emojis). It's just a charset extension, I guess.
Upvotes: 1 <issue_comment>username_2: If you are using MySQL 5.5 or 5.6 and have `VARCHAR(255)` (or anything >191), see <http://mysql.rjweb.org/doc.php/limits#767_limit_in_innodb_indexes>
You have `???` -- see this for the multiple things that need changing: [Trouble with UTF-8 characters; what I see is not what I stored](https://stackoverflow.com/questions/38363566/trouble-with-utf8-characters-what-i-see-is-not-what-i-stored) (Search for "question mark".)
If you are getting question marks out of the database, then the data is lost -- it was lost when trying to insert, for example, a 4-byte Emoji into the 3-byte `CHARACTER SET utf8`.
Upvotes: 0 |
2018/03/20 | 353 | 1,312 | <issue_start>username_0: Sometimes in asp.net, the Control.ClientID returns the whole Control object instead of the ClientID string. I haven't paid too much attention to it up until now as I could just fish it out client side with ClientID.Id. But now I need to pass the ID of an iframe and IE won't let me pass the iframe object so I have to get the string of the ID this time.
The iframe:
```
```
The Codebehind:
```
YouTubeScript.Text = @"
$(function() {
" + Helpers.CallJavascriptFunction("InitYoutubeVideo", VideoFrame.ClientID) + @"
});
";
```
**Update/Answer**
I need to wrap `VideoFrame.ClientID` in quotation marks so that JavaScript/jQuery recognises it as a string. Otherwise it sees the id and converts it into the HTML object belonging to that id.<issue_comment>username_1: It looks like you are trying to mix JavaScript and inline ASP.NET code somehow. It should look like this:
```
Helpers.CallJavascriptFunction("InitYoutubeVideo", "<%= VideoFrame.ClientID %>");
```
Upvotes: 1 <issue_comment>username_2: You have two options. Use the client ID properly:
```
$(document).ready(function() {
InitYoutubeVideo('<%=VideoFrame.ClientID%>');
});
```
Or make the client id static:
```
$(document).ready(function() {
InitYoutubeVideo('VideoFrame');
});
```
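Either way, the reason the quoting matters can be sketched like this (my own illustration of the emitted script text, not from the original answers):

```javascript
// Build the script text the browser would receive.
function emitCall(fn, clientId, quoted) {
  return quoted
    ? fn + "('" + clientId + "');" // InitYoutubeVideo('VideoFrame'); -> a string
    : fn + "(" + clientId + ");";  // InitYoutubeVideo(VideoFrame);   -> resolves to the DOM element
}

console.log(emitCall("InitYoutubeVideo", "VideoFrame", false));
console.log(emitCall("InitYoutubeVideo", "VideoFrame", true));
```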
Upvotes: 0 |
2018/03/20 | 2,251 | 7,653 | <issue_start>I am working on integration tests of a .NET Core application and want to use some test configuration. My configuration is a POCO class that is configured via `appsettings.json` and then consumed via `IOptions`. In my tests I'd like to use an instance of that class.
Here is the code:
```
var mySettings = GetTestSettings(); // factory method returning POCO class
var configurationBuilder = new ConfigurationBuilder();
// I am looking for something like
configurationBuilder.AddInMemoryObject("Settings", mySettings);
// does not work, result is just string
configurationBuilder.AddInMemoryCollection("Settings",
JsonConvert.SerializeObject(mySettings));
// requires filename
configurationBuilder.AddJsonFile("filename.json");
```
What is the easiest way to feed POCO to configuration?<issue_comment>username_1: `AddInMemoryCollection` takes a collection of `KeyValuePair<string, string>` where the key is the setting key and the value is its value. This call from the question
```
configurationBuilder.AddInMemoryCollection("Settings", JsonConvert.SerializeObject(mySettings));
```
actually passes "Settings" as a key and whole JSON as one setting value, which expectedly does not work.
But the overall approach is correct, you should use `AddInMemoryCollection` extension call. In collection passed to this call, setting keys are full paths within configuration, delimited by a colon. Say if you have following settings POCO:
```
public class SomeSettings
{
public string SomeKey1 { get; set; }
public int SomeKey2 { get; set; }
}
```
and it's loaded from following JSON
```
{
"SomeSettings": {
"SomeKey1": "SomeData",
"SomeKey2": 123
}
}
```
the keys would be `SomeSettings:SomeKey1` and `SomeSettings:SomeKey2`.
You could then add such a configuration with the following `AddInMemoryCollection` call:
```
configurationBuilder.AddInMemoryCollection(new Dictionary<string, string>
{
    { "SomeSettings:SomeKey1", "SomeData" },
    { "SomeSettings:SomeKey2", "123" },
});
```
Now, if you want to add a settings POCO with one call, you could write a simple extension method that enumerates the settings class properties using reflection and returns a collection of key-value pairs for property names and values.
Here is a sample:
```
public static class ObjectExtensions
{
public static IEnumerable<KeyValuePair<string, string>> ToKeyValuePairs(this Object settings, string settingsRoot)
{
foreach (var property in settings.GetType().GetProperties())
{
yield return new KeyValuePair<string, string>($"{settingsRoot}:{property.Name}", property.GetValue(settings).ToString());
}
}
}
public static class ConfigurationBuilderExtensions
{
public static void AddInMemoryObject(this ConfigurationBuilder configurationBuilder, object settings, string settingsRoot)
{
configurationBuilder.AddInMemoryCollection(settings.ToKeyValuePairs(settingsRoot));
}
}
```
In the test:
```
var configurationBuilder = new ConfigurationBuilder();
var mySettings = GetTestSettings();
configurationBuilder.AddInMemoryObject(mySettings, "Settings");
```
Such a simple `ObjectExtensions.ToKeyValuePairs` extension method will work for a plain POCO like `SomeSettings` above. However it will not work if some of the properties are themselves objects, like here:
```
public class InnerSettings
{
public string SomeKey { get; set; }
}
public class OuterSettings
{
public InnerSettings InnerSettings { get; set; }
}
```
However I believe you get the idea and could make the required adjustments if needed.
Upvotes: 4 <issue_comment>username_2: I ended up with the following:
```
configurationBuilder.AddInMemoryCollection(ToDictionary(GetTestSettings()));
private static IEnumerable<KeyValuePair<string, string>> ToDictionary(object o) =>
    AddPropsToDic(JObject.FromObject(o), new Dictionary<string, string>(), "Settings");
private static Dictionary<string, string> AddPropsToDic(JObject jo, Dictionary<string, string> dic, string prefix)
{
jo.Properties()
.Aggregate(dic, (d, jt) =>
{
var value = jt.Value;
var key = $"{prefix}:{jt.Name}";
if (value.HasValues)
return AddPropsToDic((JObject) jt.Value, dic, key);
dic.Add(key, value.ToString());
return dic;
});
return dic;
}
```
Upvotes: 4 [selected_answer]<issue_comment>username_3: If you don't mind an additional dependency, I created a package [Extensions.Configuration.Object](https://www.nuget.org/packages/Extensions.Configuration.Object/) just for this.
```
dotnet add package Extensions.Configuration.Object
```
And then you can do:
```cs
var configuration = new ConfigurationBuilder()
.AddObject(new // This supports anonymous and simple classes.
{
MyProperty = "MyValue",
MySection = new
{
MyOtherProperty = "MyOtherValue"
}
})
.Build();
```
If you don't want an additional dependency, you can just copy source code from [GitHub](https://github.com/lawrence-laz/Extensions.Configuration.Object).
Upvotes: 2 <issue_comment>username_4: There are a few problems with the given answers [@the\_joric](https://stackoverflow.com/a/49385725/5963888) and [@codeFuller](https://stackoverflow.com/a/49384575/5963888) provided, and this is they mostly fall apart the second you put in an array, and also making sure you handle nested objects of course. [@Laurynas\_Lazauskas](https://stackoverflow.com/a/72242705/5963888) answer is actually the most correct and approached it by adding a Provider which is probably the best technical route to take. However if you are interested in a Quick and Dirty solution something like this could work, and its performance to @Laurynas\_Lazauskas answer might as well be equal. The only feature I wish I could see is the ability to add in a root key.
---
```
public static Dictionary<string, string> CreateConfiguration<TObject>(this TObject @object, string rootKey = "")
    where TObject : class
{
    if (@object == null) { throw new ArgumentNullException(nameof(@object)); }
    var d = new Dictionary<string, string>();
    Dictionary<string, string> CreateConfiguration(object @obj, string key, Dictionary<string, string> dict)
    {
        foreach (var p in @obj.GetType().GetProperties(BindingFlags.Public | BindingFlags.Instance))
        {
            var name = $"{key}{(String.IsNullOrEmpty(key) ? "" : ":")}{p.Name}";
            if (p.PropertyType.IsClass && !typeof(System.Collections.IEnumerable).IsAssignableFrom(p.PropertyType))
            {
                // recurse into nested objects; do not return early,
                // otherwise the remaining sibling properties are skipped
                CreateConfiguration(p.GetValue(@obj), name, dict);
            }
            else if (typeof(System.Collections.IEnumerable).IsAssignableFrom(p.PropertyType) && p.PropertyType != typeof(string))
            {
                int i = 0;
                foreach (var element in (System.Collections.IEnumerable)p.GetValue(@obj))
                {
                    dict.TryAdd($"{name}:{i++}", element.ToString());
                }
            }
            else
            {
                dict.TryAdd(name, p.GetValue(@obj).ToString());
            }
        }
        return dict;
    }
    return CreateConfiguration(@object, rootKey, d);
}
```
---
| Method | Job | Runtime | Mean | Error | StdDev | Ratio | RatioSD | Gen 0 | Allocated |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| GetConfiguration | .NET 5.0 | .NET 5.0 | 4.221 us | 0.0749 us | 0.0735 us | 0.85 | 0.02 | 1.1368 | 5 KB |
| GetConfiguration | .NET 6.0 | .NET 6.0 | 3.721 us | 0.0663 us | 0.0620 us | 0.75 | 0.02 | 1.1063 | 5 KB |
| GetConfiguration | .NET Core 3.1 | .NET Core 3.1 | 4.972 us | 0.0960 us | 0.0851 us | 1.00 | 0.00 | 1.1292 | 5 KB |
| | | | | | | | | | |
| GetConfiguration\_AddObjectNuget | .NET 5.0 | .NET 5.0 | 6.800 us | 0.1170 us | 0.0977 us | 0.78 | 0.03 | 0.8621 | 4 KB |
| GetConfiguration\_AddObjectNuget | .NET 6.0 | .NET 6.0 | 6.113 us | 0.1186 us | 0.1542 us | 0.71 | 0.03 | 0.7401 | 3 KB |
| GetConfiguration\_AddObjectNuget | .NET Core 3.1 | .NET Core 3.1 | 8.528 us | 0.1466 us | 0.3308 us | 1.00 | 0.00 | 1.0223 | 4 KB |
Upvotes: 2 |
2018/03/20 | 2,221 | 7,339 | <issue_start>username_0: I have this log -
```
MySite Access reject(60) - Redirect to user page
```
I want to be able to parse it so that `(60)` will be in one group and the rest in another.
What I currently use is
```
([A-Za-z \-\(\)\d]+)
```
which captures all of it. The problem is that when trying to group the first part of it, which is ***`MySite Access reject`***, it recognizes the word reject together with (60) as one word. I don't know how to get them set apart.
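One grouping that splits the sample line that way (a sketch of mine, not from the original post):

```javascript
// Group 1: everything before the parentheses; group 2: the number inside them.
const re = /^(.+?)\((\d+)\)/;
const m = "MySite Access reject(60) - Redirect to user page".match(re);

console.log(m[1]); // MySite Access reject
console.log(m[2]); // 60
```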
Any suggestions will be appreciated.<issue_comment>username_1: `AddInMemoryCollection` takes a collection of `KeyValuePair` where a key is setting key and value is its value. This call from the question
```
configurationBuilder.AddInMemoryCollection("Settings", JsonConvert.SerializeObject(mySettings));
```
actually passes "Settings" as a key and whole JSON as one setting value, which expectedly does not work.
But the overall approach is correct, you should use `AddInMemoryCollection` extension call. In collection passed to this call, setting keys are full paths within configuration, delimited by a colon. Say if you have following settings POCO:
```
public class SomeSettings
{
public string SomeKey1 { get; set; }
public int SomeKey2 { get; set; }
}
```
and it's loaded from following JSON
```
{
"SomeSettings": {
"SomeKey1": "SomeData",
"SomeKey2": 123
}
}
```
the keys would be `SomeSettings:SomeKey1` and `SomeSettings:SomeKey2`.
You could then add such configuration with following `AddInMemoryCollection` call:
```
configurationBuilder.AddInMemoryCollection(new Dictionary
{
{ "SomeSettings:SomeKey1", "SomeData" },
{ "SomeSettings:SomeKey2", 123 },
});
```
Now, if you want to add settings POCO with one call, you could write simple extension method that will enumerate setting class properties using reflection and return collection of key-value pairs for properties names and values.
Here is a sample:
```
public static class ObjectExtensions
{
public static IEnumerable> ToKeyValuePairs(this Object settings, string settingsRoot)
{
foreach (var property in settings.GetType().GetProperties())
{
yield return new KeyValuePair($"{settingsRoot}:{property.Name}", property.GetValue(settings).ToString());
}
}
}
public static class ConfigurationBuilderExtensions
{
public static void AddInMemoryObject(this ConfigurationBuilder configurationBuilder, object settings, string settingsRoot)
{
configurationBuilder.AddInMemoryCollection(settings.ToKeyValuePairs(settingsRoot));
}
}
```
In the test:
```
var configurationBuilder = new ConfigurationBuilder();
var mySettings = GetTestSettings();
configurationBuilder.AddInMemoryObject(mySettings, "Settings");
```
Such simple `ObjectExtensions.ToKeyValuePairs` extension method will work for plain POCO like `SomeSettings` above. However it will not work if some of the properties are also objects like here:
```
public class InnerSettings
{
public string SomeKey { get; set; }
}
public class OuterSettings
{
public InnerSettings InnerSettings { get; set; }
}
```
However I believe you got an idea and could make required adjustments if required.
Upvotes: 4 <issue_comment>username_2: I ended up with the following:
```
configurationBuilder.AddInMemoryCollection(ToDictionary(GetTestSettings()));
private static IEnumerable> ToDictionary(object o) =>
AddPropsToDic(JObject.FromObject(o), new Dictionary(), "Settings");
private static Dictionary AddPropsToDic(JObject jo, Dictionary dic, string prefix)
{
jo.Properties()
.Aggregate(dic, (d, jt) =>
{
var value = jt.Value;
var key = $"{prefix}:{jt.Name}";
if (value.HasValues)
return AddPropsToDic((JObject) jt.Value, dic, key);
dic.Add(key, value.ToString());
return dic;
});
return dic;
}
```
Upvotes: 4 [selected_answer]<issue_comment>username_3: If you don't mind an additional dependency, I created a package [Extensions.Configuration.Object](https://www.nuget.org/packages/Extensions.Configuration.Object/) just for this.
```
dotnet add package Extensions.Configuration.Object
```
And then you can do:
```cs
var configuration = new ConfigurationBuilder()
.AddObject(new // This supports anonymous and simple classes.
{
MyProperty = "MyValue",
MySection = new
{
MyOtherProperty = "MyOtherValue"
}
})
.Build();
```
If you don't want an additional dependency, you can just copy source code from [GitHub](https://github.com/lawrence-laz/Extensions.Configuration.Object).
Upvotes: 2 <issue_comment>username_4: There are a few problems with the given answers [@the\_joric](https://stackoverflow.com/a/49385725/5963888) and [@codeFuller](https://stackoverflow.com/a/49384575/5963888) provided, and this is they mostly fall apart the second you put in an array, and also making sure you handle nested objects of course. [@Laurynas\_Lazauskas](https://stackoverflow.com/a/72242705/5963888) answer is actually the most correct and approached it by adding a Provider which is probably the best technical route to take. However if you are interested in a Quick and Dirty solution something like this could work, and its performance to @Laurynas\_Lazauskas answer might as well be equal. The only feature I wish I could see is the ability to add in a root key.
---
```
public static Dictionary<string, string> CreateConfiguration<TObject>(this TObject @object, string rootKey = "")
where TObject : class
{
if( @object == null) { throw new ArgumentNullException(nameof(@object)); }
    var d = new Dictionary<string, string>();
    Dictionary<string, string> CreateConfiguration(object @obj, string key, Dictionary<string, string> dict)
{
foreach (var p in @obj.GetType().GetProperties(BindingFlags.Public | BindingFlags.Instance))
{
if (p.PropertyType.IsClass && !typeof(System.Collections.IEnumerable).IsAssignableFrom(p.PropertyType))
{
return CreateConfiguration(p.GetValue(@obj), $"{key}{(String.IsNullOrEmpty(key) ? "" : ":")}{p.Name}", dict);
}
else if(typeof(System.Collections.IEnumerable).IsAssignableFrom(p.PropertyType) && p.PropertyType != typeof(string))
{
int i = 0;
foreach( var element in (System.Collections.IEnumerable)p.GetValue(@obj))
{
d.TryAdd($"{key}{(String.IsNullOrEmpty(key) ? "" : ":")}{p.Name}:{i++}", element.ToString());
}
}
d.TryAdd($"{key}{(String.IsNullOrEmpty(key) ? "" : ":")}{p.Name}", p.GetValue(@obj).ToString());
}
return dict;
}
return CreateConfiguration(@object, rootKey, d);
}
```
---
| Method | Job | Runtime | Mean | Error | StdDev | Ratio | RatioSD | Gen 0 | Allocated |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| GetConfiguration | .NET 5.0 | .NET 5.0 | 4.221 us | 0.0749 us | 0.0735 us | 0.85 | 0.02 | 1.1368 | 5 KB |
| GetConfiguration | .NET 6.0 | .NET 6.0 | 3.721 us | 0.0663 us | 0.0620 us | 0.75 | 0.02 | 1.1063 | 5 KB |
| GetConfiguration | .NET Core 3.1 | .NET Core 3.1 | 4.972 us | 0.0960 us | 0.0851 us | 1.00 | 0.00 | 1.1292 | 5 KB |
| | | | | | | | | | |
| GetConfiguration\_AddObjectNuget | .NET 5.0 | .NET 5.0 | 6.800 us | 0.1170 us | 0.0977 us | 0.78 | 0.03 | 0.8621 | 4 KB |
| GetConfiguration\_AddObjectNuget | .NET 6.0 | .NET 6.0 | 6.113 us | 0.1186 us | 0.1542 us | 0.71 | 0.03 | 0.7401 | 3 KB |
| GetConfiguration\_AddObjectNuget | .NET Core 3.1 | .NET Core 3.1 | 8.528 us | 0.1466 us | 0.3308 us | 1.00 | 0.00 | 1.0223 | 4 KB |
Upvotes: 2
2018/03/20 <issue_start>username_0: I have a dataframe named `mydata`. Here's a sample of the relevant column:
```
Accuracy.Percentage
100.00
127.00
60.00
175.00
52.00
```
Now, what I want to accomplish is the following: when the value of a row is < 100, I want to change its value to `100 - mydata$Accuracy.Percentage`, and when the value is > 100, I want to change its value to `mydata$Accuracy.Percentage - 100`. I have tried something like
```
result <- if(mydata$Accuracy.Percentage < 100) {
mutate(result$Accuracy.Percentage = 100 - result$Accuracy.Percentage)
}
else mutate(result$Accuracy.Percentage = result$Accuracy.Percentage - 100)
```
But I can't seem to get it to work, although there probably is a simple way to accomplish this. Thanks in advance, I hope I have been clear enough and formulated my question in an understandable manner!
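The transformation described above is simply the absolute distance from 100, i.e. |x - 100|. A quick illustration of the logic (sketched in Python; the question itself is about R, and the function name below is made up):

```python
# Illustration only: the requested transformation is abs(x - 100).
# Values below 100 become 100 - x, values above 100 become x - 100.
def fold_accuracy(values):
    return [100 - v if v < 100 else v - 100 for v in values]

print(fold_accuracy([100.0, 127.0, 60.0, 175.0, 52.0]))
# [0.0, 27.0, 40.0, 75.0, 48.0]
```

In R itself this can be done without any branching at all, e.g. `mydata$Accuracy.Percentage <- abs(mydata$Accuracy.Percentage - 100)`.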
2018/03/20 <issue_start>username_0: ```
correct = [1 if ((a == 1 and b == 1) or (a == 0 and b == 0)) else 0 for (a, b)]
```
I am a newbie in the Python world, and I find that when I run my file, the program shows
>
> SyntaxError: invalid syntax
>
>
>
in the code above.
But as far as I know, there should be no SyntaxError in this code, so I want to ask why: where is the mistake in this code? I have tried to find where I am wrong, but still have no idea.
I use this version: Anaconda3-4.2.0-Windows-x86\_64.<issue_comment>username_1: After defining a and b, you could do something like this:
```
if (a==1 and b==1) or (a==0 and b==0):
correct=1
else:
correct=0
```
Or in ternary format, something like this:
```
correct=1 if ((a==1 and b==1) or (a==0 and b==0)) else 0
```
That is, if what you want is to make correct=1 when (a=1 and b=1) or (a=0 and b=0), and correct=0 for the rest of the possibilities.
Upvotes: 0 <issue_comment>username_2: This is a gross misuse of a list comprehension.
The logic you may wish to implement *can* be implemented via a ternary statement:
```
a, b = 1, 1
correct = 1 if ((a == 1 and b == 1) or (a == 0 and b == 0)) else 0
#1
```
However, even this is close to unreadable. There is nothing intrinsically wrong with ternary statements; so I would advise splitting your logic:
```
test1 = (a == 1) and (b == 1)
test2 = (a == 0) and (b == 0)
correct = 1 if (test1 or test2) else 0
```
Upvotes: 0 <issue_comment>username_3: I may be wrong but you are missing an `in` statement or say iteration in your list comprehension.
```
my_list = [(1, 1), (1, 0)]
correct = [1 if ((a == 1 and b == 1) or (a == 0 and b == 0)) else 0 for (a, b) in my_list]
print(correct)
out: [1, 0]
```
Upvotes: 2 [selected_answer]<issue_comment>username_4: What you are doing is essentially "not xor", so, your whole logic can be replaced with
`int(not (a ^ b))`:
```
a,b = 0, 0
print(int(not (a ^ b)))
a, b = 1, 0
print(int(not (a ^ b)))
a, b = 0, 1
print(int(not (a ^ b)))
a, b = 1, 1
print(int(not (a ^ b)))
# 1
# 0
# 0
# 1
```
If you don't like the "magical" "not xor" you can explicitly validate the input against `(0, 0)` and `(1, 1)`:
```
print(1 if (a, b) in ((0, 0), (1, 1)) else 0)
```
This replaces the `if` condition only. If you do have a list of tuples then you would still need the `in` as other answers suggest:
```
def validate(a, b):
return int(not a ^ b)
li = [(0, 0), (0, 1), (1, 0), (1, 1)]
print([validate(a, b) for a, b in li])
# [1, 0, 0, 1]
```
or with `map`, but then `validate` will need to be changed to accept a tuple (and a call to `list` is needed in Python 3 if the desired output is a list):
```
def validate(tup):
return int(not tup[0] ^ tup[1])
li = [(0, 0), (0, 1), (1, 0), (1, 1)]
print(list(map(validate, li)))
# [1, 0, 0, 1]
```
Upvotes: 1 <issue_comment>username_5: if you only have to evaluate `a` and `b` and you want a one-liner, you can go with
```
correct = 1 if ((a == 1 and b == 1) or (a == 0 and b == 0)) else 0
```
if `a` and `b` represent a couple of values in a list of tuples, and you want a list back of one and zeroes, you can use list comprehension:
```
correct = [1 if ((a == 1 and b == 1) or (a == 0 and b == 0)) else 0 for a,b in mylist]
```
Upvotes: 0 <issue_comment>username_6: This
```
[whatever_expression for (a, b)]
```
is indeed incorrect; it should (from a purely syntactic point of view) be
```
[whatever_expression for (a, b) in some_iterable]
```
What you're really trying to do is quite unclear, but if you have two variables `a` and `b` and want to check if both are equal to `0` or both are equal to `1`, you don't need a list comprehension at all:
```
correct = 1 if (a == 0 and b == 0) or (a == 1 and b == 1) else 0
```
and since in Python, `Bool` is a subclass of `int` with `False == 0` and `True == 1`, this can be simplified to
```
correct = (a == 0 and b == 0) or (a == 1 and b == 1)
```
Using a list comprehension only makes sense if you actually have a list of pairs and want to test all those pairs, ie:
```
pairs = [(0, 0), (0, 1), (1, 0), (1, 1)]
corrects = [(a == 0 and b == 0) or (a == 1 and b == 1) for (a, b) in pairs]
```
which would yield
```
[True, False, False, True]
```
Upvotes: 0
2018/03/20 <issue_start>username_0: I have been trying to open a python file from my main python file:
```
from banner import *
from hexed import * # this one is the file I am trying to open
```
I am trying to open files by treating them as modules.
This is how I am calling them; my banner file works perfectly, but I can't say the same for the hexed file:
```
def options(self):
while True:
try:
try:
main=raw_input(bcolors.B + "PYDRAGON> " + bcolors.E)
if main == "msf":
elif main == "crypto":
hexed() #THIS IS WHERE I AM CALLING ANOTHER FILE
elif main == "print":
banner() #THIS ONE WORKS FINE
else:
print bcolors.R + "--> check your input <--" + bcolors.E
time.sleep(1)
except KeyboardInterrupt:
print (bcolors.R + bcolors.U + "\033[1m" + "\nCtrl-C Pressed! Use 'exit' to close the tool!" + bcolors.E)
time.sleep(0.9)
sys.exit()
pass
except EOFError:
print (bcolors.R + bcolors.U + "\nUser Requsted An Interrupt ..Exixting.." + bcolors.E)
time.sleep(0.9)
sys.exit()
pass
```
My banner function works perfectly but whenever I try to call my hexed python file it gives me this error
```
Traceback (most recent call last):
File "pydragon.py", line 149, in
obj.options()
File "pydragon.py", line 115, in options
hexed()
NameError: global name 'hexed' is not defined
```
I have checked all spaces and tabs; I don't think there is any IndentationError.
below is the code of my hexed file
```
def witch(self):
main = raw_input(bcolors.R + "Crytography> " + bcolors.E)
if main == 1:
hound = raw_input(R + bcolors.U + 'String to encode>' + bcolors.E)
hound = hound.encode('hex','strict');
print ""+ G +"Encoded: " + hound
elif main == 2:
hound1 = raw_input(R + bcolors.U + 'String to decode>' + bcolors.E)
hound1 = hound.decode('hex','strict');
print ""+ G +"Decoded String: " + hound1
else:
print '\033[31m' + bcolors.BL + "GRRRR, what your'e trying to type ?? " + bcolors.E
```
I hope this much info helps
thanks
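A general note on the NameError above: `from hexed import *` only brings in the names that `hexed.py` actually defines, so calling `hexed()` fails unless the module defines a function with that name (the file shown defines `witch`, not `hexed`). A self-contained stand-in illustrating this (the module is built in memory; names are made up):

```python
import types

# Stand-in for the hexed.py in the question: it defines witch(),
# but nothing named hexed(), which is why calling hexed() raises NameError.
hexed = types.ModuleType("hexed")
exec("def witch():\n    return 'crypto menu'", hexed.__dict__)

print(hasattr(hexed, "witch"))   # the defined function exists and is callable
print(hasattr(hexed, "hexed"))   # no such attribute, so hexed() cannot work
```

Renaming the function in `hexed.py` to `hexed`, or calling `witch()` instead, resolves the error.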
2018/03/20 <issue_start>username_0: Throwing an error: "The ORDER BY clause is invalid in views, inline functions, derived tables, subqueries, and common table expressions, unless TOP or FOR XML is also specified".
When I use TOP 100 PERCENT, it throws a different error, such as one of the columns being used many times.<issue_comment>username_1: What part of the error message do you not understand? The `ORDER BY` should only be in the outermost query.
SQL tables and results sets represent *unordered* sets. This is true of subqueries and CTEs as well. The one exception is that the `ORDER BY` is allowed in the outermost query to order the result set.
```
WITH ord_dupes AS (
SELECT ROW_NUMBER() OVER (PARTITION BY bas2."BudgetTask__c", bas2."BudgetResource__c"
ORDER BY bas2."LastModifiedDate" DESC
) as rk,
bas2.*, bas2dupes.* -- should list out the columns
FROM bas2 INNER JOIN
bas2dupes
ON bas2."BudgetTask__c" = bas2dupes."BudgetTask__c" AND
bas2."BudgetResource__c" = bas2dupes."BudgetResource__c"
)
SELECT od.*
FROM ord_dupes od
ORDER BY ? DESC ;
```
Your `ORDER BY` column is `"BudgetTask__r.BudgetHeader__r.Project__r.ProjectNumber__c"`. If that is the name of a column, then use it. But, if so, I strongly recommend that you fix your data model, so you are not using periods in column names.
Otherwise, just put the correct column for `?`.
Upvotes: 0 <issue_comment>username_2: Query is missing the selection of data from the CTE, so we can't see all of the query right now, the error is pretty clear though. It is invalid to order a CTE since the order it outputs in can be re-ordered by the select query which is using the CTE itself.
consider:
```
WITH foo as (
SELECT id,name FROM bar ORDER BY id ASC
)
SELECT * FROM foo ORDER BY name DESC
```
What is the meaning or purpose of the order within the CTE? If the query engine obeyed it, it would perform excess sort work for no purpose.
You perform ordering as your final step - ordering as an intermediate step *only* makes sense in the scenario of a TOP keyword where you want only a sub-set of data based on an order.
Upvotes: 2 [selected_answer]
2018/03/20 <issue_start>username_0: I would be really grateful for any advice on creating a 'loop' or 'function':
My goal is essentially manual stemming of a text string - amending several related terms into one term.
My code to do it individually works absolutely fine, but it would save me so much time if I could iterate it.
```
# dataframe of the collection of terms to be substituted into one term
Babanov_stem <- c("бабановдун", "бабановду", "бабановтун", "бабанову", "бабановту", "бабановго", "бабановко", "бабановым", "бабановдон", "бабановтон",
"бабанове", "бабановто", "babanova", "babanov", "babanovpresident",
"бабанова")
Babanov_seq <- seq(1:16)
Babanov <- data_frame(Babanov_seq, Babanov_stem)
# single code works fine
tidy_KG17pre$word2 <- str_replace_all(tidy_KG17pre$word2, Babanov$Babanov_stem[15], "бабанов")
```
The individual code works great, but I would really like to iterate - as I have to do this for approximately 25 terms but across 5 candidates (Babanov is candidate 1)
```
# My poor effort at a for loop
for (i in seq(Babanov$Babanov_stem)){
tidy_KG17pre$word2 <- str_replace_all(tidy_KG17pre$word, Babanov_stem[i], "бабанов")
}
# My effort at Functional Programming appears to be a bit weak too
library(purrr)
tidy_KG17pre$word2 <- tidy_KG17pre$word %>%
map(str_replace_all, Babanov$Babanov_stem, "бабанов") %>%
reduce(append)
```
I would be really grateful for any thoughts on how to get any of the above to work :)<issue_comment>username_1: I created a fake dataset to play with.
```
dtf <- data_frame(word = paste(Babanov_stem, "blabla"))
head(dtf)
# # A tibble: 6 x 1
# word
#
# 1 бабановдун blabla
# 2 бабановду blabla
# 3 бабановтун blabla
# 4 бабанову blabla
# 5 бабановту blabla
# 6 бабановго blabla
```
Replacing a single code as you suggested
```
dtf$word <- str_replace_all(dtf$word, Babanov$Babanov_stem[15], "бабанов")
```
Using a loop to replace any word in Babanov\_stem by the word "бабанов"
```
for (w in Babanov$Babanov_stem){
dtf$word <- str_replace_all(dtf$word, w, "бабанов")
}
head(dtf)
# # A tibble: 6 x 1
# word
#
# 1 бабанов blabla
# 2 бабанов blabla
# 3 бабанов blabla
# 4 бабанов blabla
# 5 бабанов blabla
# 6 бабанов blabla
```
Note: You don't need `seq()` in the `for` loop.
The loop above uses modification in place. It may be a case where functional programming is not recommended. See [Loops that should be left as is](http://adv-r.had.co.nz/Functionals.html#functionals-not) in Hadley wickham's book on Advanced R programming.
Upvotes: 1 <issue_comment>username_2: Thank you Paul for your help. I finally figured it out.
The best way to do it is to use 'stringr' to adapt the common function used to extract URLs in text mining. That command takes a string with 'www.' in it and extracts it, replacing it with a space.
I did the same, but instead of 'www.' I used the stem of the candidate's name. It works fine in either the Latin alphabet or Cyrillic:
```
str_replace_all(KG17$message, "бабан[^[:blank:]]+", "babanov")
```
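The same idea expressed with Python's `re`, just for comparison (the thread itself uses R's stringr; the function and sample strings below are made up):

```python
import re

# Comparison sketch: replace any token that starts with the stem with a
# single canonical form, as the stringr call above does. \S+ plays the
# role of [^[:blank:]]+ (one or more non-whitespace characters).
def stem_replace(text, stem="бабан", canonical="babanov"):
    return re.sub(re.escape(stem) + r"\S+", canonical, text)

print(stem_replace("бабановдун blabla бабанову"))
```

Any token beginning with the stem collapses to the canonical form, regardless of its suffix.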
Upvotes: 1 [selected_answer]
2018/03/20 <issue_start>username_0: Hey guys, I am new to MongoDB. I have successfully installed it and set up the environment variables. I run the command mongod and it starts. After that I run the command mongo; it runs, but it only shows this in cmd:
```
C:\Program Files\MongoDB\Server\3.6\bin>mongo
MongoDB shell version v3.6.3
connecting to: mongodb://127.0.0.1:27017
```
It does not connect; it only shows "connecting". Anyone here who can help me out?
Upvotes: 0 <issue_comment>username_2: Remove Your Installation and try this <https://javabeat.net/monogodb-installation-windows/>
Upvotes: 2 <issue_comment>username_3: Have you tried with ending the process from task manager. If not then try for once that may help. I was also getting this kind of issue and solved it when I ended all the running processes related to mongodb and run the command again.
Upvotes: 1
2018/03/20 <issue_start>username_0: My directory contains the following files:
```
FILE1_h0_something_1950_other.gz
FILE2_h0_something_1950_other.gz
FILE3_h0_something_1950_other.gz
```
Here a portion of my bash script:
```
year=1950
for nc_gz in "$input_path/*h3*$year*.gz"; do
gunzip $nc_gz
done
```
I noted that SOMETIMES (strange behavior; I expect always or never) a file named `*h3*1950*` is created inside my dir.
The code fails when the directory contains no .gz files.
Where is the issue?<issue_comment>username_1: >
>
> ```
> for nc_gz in "$input_path/*h3*$year*.gz"; do
>
> ```
>
>
You're iterating over a list of one element. Since `"$input_path/*h3*$year*.gz"` is in double quotes, it's a single string. So if the value of `input_path` is `/some/where` and the value of `year` is `2017` then this runs the loop body once, with `nc_gz` set to `/some/where/*h3*2017*.gz`.
>
>
> ```
> gunzip $nc_gz
>
> ```
>
>
Here `$nc_gz` is unquoted, so this applies the [split+glob operator](https://unix.stackexchange.com/questions/171346/security-implications-of-forgetting-to-quote-a-variable-in-bash-posix-shells/171347#171347) to the result. In the example above, `/some/where/*h3*2017*.gz` is expanded to the list of matching files; except if there are no matching files, then this invokes `gunzip` with the argument `/some/where/*h3*2017*.gz`.
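To see the split+glob behaviour concretely, here is a small self-contained demo (the file names are made up, not the asker's real data):

```shell
# Create a throwaway directory with two files matching the pattern.
dir=$(mktemp -d)
touch "$dir/FILE1_h3_1950_x.gz" "$dir/FILE2_h3_1950_y.gz"
pattern="$dir/*h3*1950*.gz"

# Quoted: the pattern stays one literal word.
set -- "$pattern"
quoted_count=$#

# Unquoted: split+glob expands it to the matching file names.
set -- $pattern
unquoted_count=$#

echo "quoted=$quoted_count unquoted=$unquoted_count"
rm -r "$dir"
```

With no matching files (and without `nullglob`), the unquoted case instead yields the single literal pattern, which is exactly how `gunzip` ends up being called with a name like `/some/where/*h3*2017*.gz`.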
First, quote things properly:
* [Use double quotes around variable substitutions](https://unix.stackexchange.com/questions/131766/why-does-my-shell-script-choke-on-whitespace-or-other-special-characters).
* Don't use double quotes around wildcards.
```
for nc_gz in "$input_path/"*h3*"$year"*.gz; do
    gunzip "$nc_gz"
done
```
This still runs the loop once if the wildcard does not match any files, because of a shell misfeature that non-matching wildcard patterns are left unchanged instead of being expanded to an empty list. In bash (but not in plain sh), you can avoid this by setting the `nullglob` option.
```
#!/bin/bash
shopt -s nullglob
for nc_gz in "$input_path/"*h3*"$year"*.gz; do
gunzip "$nc_gz"
done
```
In plain sh, you can avoid this by skipping non-existent files.
```
for nc_gz in "$input_path/"*h3*"$year"*.gz; do
if [ ! -e "$nc_gz" ]; then continue; fi
gunzip "$nc_gz"
done
```
Note that I assumed that this was simplified code and you do need the loop to do other things beyond calling `gunzip`. If all you need to do is unzip the files, then you can call `gunzip` a single time, since it supports multiple file arguments.
```
gunzip "$input_path/"*h3*"$year"*.gz
```
This works in the sense of unzipping the files, and it does nothing if the wildcard pattern does not match any file, however gunzip will complain about the non-existent file. Here bash's `nullglob` option won't help you since calling `gunzip` with no file name also doesn't work (it expects to uncompress data from its standard input). You need to test for the number of matching files.
```
unzip_matching_files () {
if [ $# -eq 1 ] && [ ! -e "$1" ]; then return; fi
gunzip "$@"
}
unzip_matching_files "$input_path/"*h3*"$year"*.gz
```
Upvotes: 4 <issue_comment>username_2: Maybe something like this?
```
for i in $(find ../path1 path2 path3 -name "*.env" 2>/dev/null); do
echo $i
done
```
Ignore errors with 2>/dev/null; otherwise, if e.g. path1 does not exist, it will print
```
find: ‘../path1’: No such file or directory
```
Upvotes: 0
2018/03/20 <issue_start>username_0: Can't figure out how to pass date to the server with **Alamofire .post method**. I have to form JSON body like this:
```
{
"title": "My Title",
"locations": [
{
"location": "locationID"
}
],
}
```
I'm stuck on **"locations"** property. Probably, it has to be an Array of objects with one location property which is a string type. For this moment my code is:
```
@IBAction func createEvent(_ sender: Any) {
let parameters: Parameters = [
"title": Event.title ?? nil,
"locations": //What have I wright here?
]
Alamofire.request(requestURL,
method: .post,
parameters: parameters,
encoding: JSONEncoding.default,
headers: headers).responseJSON {response in
            switch response.result {
            case .success:
                print(response)
            case .failure(let error):
                print(error)
            }
        }
    }
```
Please help.<issue_comment>username_1: You can try
```
let parameters: Parameters = [
"title": Event.title ?? nil,
"locations": [["location":"idherrr"]]
]
```
Upvotes: 3 [selected_answer]<issue_comment>username_2: If your API request accepts a particular string format, then you need to convert the Date to a String in that format with a DateFormatter. If instead it accepts a Date object, you could pass a Date object, but that may not be possible. So please try the first option: convert the date to the string format predetermined by the server parameter.
Or, if possible, share the request parameter types supported by your server.
Upvotes: 0 <issue_comment>username_3: you can try with
```
let parameters: Parameters = [
    "title": Event.title ?? nil,
    "locations": [dictionary]
]
```
Upvotes: 1
2018/03/20 <issue_start>username_0: I'm looking for the exact value of epsilon to run the `DBSCAN` clustering algorithm.
Here's the `KNN` distance plot.

This chart has two flex points. I need the second flex point. I'm using the following code:
```
# evaluate kNN distance
dist <- dbscan::kNNdist(iris, 4)
# order result
dist <- dist[order(dist)]
# scale
dist <- dist / max(dist)
# derivative
ddist <- diff(dist) / ( 1 / length(dist))
# get first point where derivative is higher than 1
knee <- dist[length(ddist)- length(ddist[ddist > 1])]
```
How can I improve my code to get the second point where the derivative is higher than 1?
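One way to generalize the logic above, picking the k-th point whose discrete derivative exceeds the threshold rather than the first, can be sketched like this (in Python rather than R, with made-up data; the R code above keeps only the first such point):

```python
# Sketch only: k-th point where the discrete derivative of the sorted,
# max-scaled distance curve exceeds a threshold. Mirrors the R code's
# diff(dist) / (1 / length(dist)) scaling.
def kth_knee(dist, threshold=1.0, k=2):
    dist = sorted(dist)
    m = max(dist)
    dist = [d / m for d in dist]
    n = len(dist)
    hits = [i for i in range(n - 1)
            if (dist[i + 1] - dist[i]) * n > threshold]
    return dist[hits[k - 1]] if len(hits) >= k else None

print(kth_knee([1, 2, 3, 4, 10, 11, 12, 30], k=2))
```

With `k=1` this reproduces the first-crossing behaviour; with `k=2` it returns the scaled distance at the second crossing, or `None` if there is no second crossing.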
2018/03/20 <issue_start>username_0: I have a method to get all the items in the database, but I don't know what the problem is; it always says java.lang.NumberFormatException: invalid double: "". Any help would be highly appreciated.
My code:
```
@Override
public void processFinish(String s) {
productList = new JsonConverter().toArrayList(s, Products.class);
BindDictionary dic = new BindDictionary();
dic.addStringField(R.id.tvName, new StringExtractor() {
@Override
public String getStringValue(Products item, int position) {
return item.name;
}
});
dic.addStringField(R.id.tvDesc, new StringExtractor() {
@Override
public String getStringValue(Products item, int position) {
return item.description;
}
}).visibilityIfNull(View.GONE);
dic.addStringField(R.id.tvPrice, new StringExtractor(){
@Override
public String getStringValue(Products item, int position) {
return ""+item.price;
//return String.valueOf(item.price);
}
});
dic.addDynamicImageField(R.id.ivImage, new StringExtractor() {
@Override
public String getStringValue(Products item, int position) {
return item.img_url;
}
}, new DynamicImageLoader() {
@Override
public void loadImage(String url, ImageView img) {
//Set image
ImageLoader.getInstance().displayImage(url, img);
}
});
}
```
**Products.java**
```
public class Products implements Serializable {

    @SerializedName("itemID")
    public int id;
    @SerializedName("productName")
    public String name;
    @SerializedName("descrt")
    public String description;
    @SerializedName("price")
    public double price;
    @SerializedName("my_img")
    public String img_url;
}
```
Thanks in advance. |
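The `Invalid double: ""` message means an empty string reached a numeric parse; with this model, the most likely culprit is an empty `price` value in the JSON being deserialized into the `double` field. The same failure mode, illustrated here in Python rather than Java (`parse_price` is an illustrative helper, not part of the original code):

```python
# Parsing an empty string as a number fails, just like Double.parseDouble("")
try:
    price = float("")
except ValueError as e:
    print("parse failed:", e)

# A defensive parse that falls back to 0.0 for empty values
def parse_price(raw):
    return float(raw) if raw not in ("", None) else 0.0

print(parse_price(""))      # 0.0
print(parse_price("12.5"))  # 12.5
```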
2018/03/20 | 923 | 2,651 | <issue_start>username_0: I have three vectors in Matlab that are possibly of different sizes. I want to compare the values in each vector against all the other values in the other vectors and only keep values that are 'close' in 2 out of 3 of the vectors. And by 'keep', I mean take the average of the close values.
For example:
```
a = [10+8i, 20];
b = [10+9i, 30, 40+3i, 55];
c = [10, 60, 41+3i];
```
If I set a closeness tolerance such that only values that are within, say, a magnitude of 1.5 of each other are kept, then the following values should be marked as close:
* 10 + 8i and 10 + 9i
* 40 + 3i and 41 + 3i
Then the routine should return a vector that contains the average of each of these sets of numbers:
```
finalVec = [10+8.5i,40.5+3i];
```
What is the most efficient way to do this in Matlab? Is there a better way than just straightforward looping over all elements?<issue_comment>username_1: Building on [this solution](https://fr.mathworks.com/matlabcentral/answers/98191-how-can-i-obtain-all-possible-combinations-of-given-vectors-in-matlab):
```
a = [10+8i, 20];
b = [10+9i, 30, 40+3i, 55];
c = [10, 60, 41+3i];
M1 = compare_vectors(a , b);
M2 = compare_vectors(a , c);
M3 = compare_vectors(b , c);
finalVec = [M1, M2 , M3]
function M = compare_vectors(a , b)
% All combinations of vectors elements
[A,B] = meshgrid(a,b);
C = cat(2,A',B');
D = reshape(C,[],2);
% Find differences lower than tolerance
tolerance = 1.5
below_tolerance = abs(D(:,1) - D(:,2)) < tolerance ;
% If none, return empty
if all(below_tolerance== 0)
M = [];
return
end
% Calculate average of returned values
M = mean(D(below_tolerance,:));
end
```
Upvotes: 3 [selected_answer]<issue_comment>username_2: ```
% your data
a = [10+8i, 20];
b = [10+9i, 30, 40+3i, 55];
c = [10, 60, 41+3i];
tol = 1.5;
% call the function with each combination of vectors and concatenate the results
finalVec = cell2mat(cellfun(@closepoints, {a, a, b}, {b, c, c}, {tol, tol, tol}, 'Uni', 0))
function p = closepoints(a, b, tol)
% find the pairs of indexes of close points
% the bsxfun() part calculates the distance between all combinations of elements of the two vectors
[ii,jj] = find(abs(bsxfun(@minus, a, b.')) < tol);
% calculate the mean
p = (a(jj) + b(ii))/2;
end
```
Note that [cellfun()](https://de.mathworks.com/help/matlab/ref/cellfun.html) isn't really faster than calling the function multiple times in a row or using a for loop. But it would be easier to add more vectors than the former and is IMO nicer to look at than the latter.
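As a cross-check of the expected result, here is the same pairwise-tolerance-and-average logic in plain Python (its built-in complex numbers stand in for MATLAB's; the vectors and tolerance are taken from the question):

```python
from itertools import combinations

tol = 1.5
a = [10 + 8j, 20]
b = [10 + 9j, 30, 40 + 3j, 55]
c = [10, 60, 41 + 3j]

final = []
for u, v in combinations([a, b, c], 2):    # each pair of vectors
    for x in u:
        for y in v:
            if abs(x - y) < tol:           # magnitude of the difference
                final.append((x + y) / 2)  # keep the average of close values
print(final)
```

This reproduces the `finalVec` from the question: the averages of 10+8i with 10+9i, and of 40+3i with 41+3i.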
Upvotes: 2 |
2018/03/20 | 210 | 837 | <issue_start>username_0: I have a lot of macros in my excel. I don't know whether they are used in that workbook. I don't know whether it is called as a procedure inside other macros.
Is there any way to find out whether the macro is used inside another macros or used in different sheets?<issue_comment>username_1: Probably the easiest way - `Ctrl` + `F`.
Write the name of the "Macro" and select search in Current Project. Then start counting how many times it will show up.
Another way is to write `debug.print "I am used"` after the `Sub` line of the "Macro". Then count how many times has it popped up on the immediate window.
Upvotes: 2 <issue_comment>username_2: * put "Option Explicit" at the beginning of every module
* comment out your "Macro"
* start debug
And it'll point you to any occurrence of the "Macro" call.
Upvotes: 0 |
2018/03/20 | 272 | 1,006 | <issue_start>username_0: I am automatically generating csv from the SQL server.
Some users have problem that excel automatically converts some numbers to date.
If the value is not converted to date, then it is formated as txt.
When I open the file it is shown ok = without text or date = just as number..
Dont you know how it can be fixed?
[](https://i.stack.imgur.com/B6285.png)
Thanks. |
2018/03/20 | 188 | 697 | <issue_start>username_0: I must sum the even integers between 1 and n (inclusive). For example, for n = 5 the program should return 6 (2+4). How can I do this? |
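Since the question doesn't name a language, here is a short Python sketch showing both a loop and the closed form (the sum 2 + 4 + ... + 2m equals m * (m + 1), with m = n // 2):

```python
def sum_even_loop(n):
    # add up 2, 4, 6, ..., n
    return sum(range(2, n + 1, 2))

def sum_even_formula(n):
    # 2 + 4 + ... + 2m = m * (m + 1), where m = n // 2
    m = n // 2
    return m * (m + 1)

print(sum_even_loop(5), sum_even_formula(5))  # 6 6
```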
2018/03/20 | 1,173 | 3,266 | <issue_start>username_0: I want to merge the rows of the two dataframes hereunder, when *the strings* in Test1 column of **DF2** contain *a substring* of Test1 column of **DF1**.
```
DF1 = pd.DataFrame({'Test1':list('ABC'),
'Test2':[1,2,3]})
print (DF1)
Test1 Test2
0 A 1
1 B 2
2 C 3
DF2 = pd.DataFrame({'Test1':['ee','bA','cCc','D'],
'Test2':[1,2,3,4]})
print (DF2)
Test1 Test2
0 ee 1
1 bA 2
2 cCc 3
3 D 4
```
For that, with "str.contains" I am able to identify which substrings of DF1.Test1 appear in the strings of DF2.Test1.
INPUT:
```
for i in DF1.Test1:
    ok = DF2[DF2.Test1.str.contains(i)]
    print(ok)
```
OUPUT:
[](https://i.stack.imgur.com/X3Gin.png)
Now, I would like to add in the output, the merge of the substrings of Test1 which match with the strings of Test2
OUPUT:
[](https://i.stack.imgur.com/UIRo0.png)
For that, I tried with "pd.merge" and "if", but I am not able to find the right code yet.
Do you have suggestions please?
```
for i in DF1.Test1:
    if DF2.Test1.str.contains(i) == 'True':
        ok = pd.merge(DF1, DF2, on=['Test1'[i]], how='outer')
        print(ok)
```
Thank you for your ideas :)<issue_comment>username_1: I believe you need [`extract`](http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.str.extract.html) values to new column and then `merge`, last remove helper column `Test3`:
```
pat = '|'.join(r"{}".format(x) for x in DF1.Test1)
DF2['Test3'] = DF2.Test1.str.extract('('+ pat + ')', expand=False)
DF = pd.merge(DF1, DF2, left_on= 'Test1', right_on='Test3').drop('Test3', axis=1)
print (DF)
Test1_x Test2_x Test1_y Test2_y
0 A 1 bA 2
1 C 3 cCc 3
```
**Detail**:
```
print (DF2)
Test1 Test2 Test3
0 ee 1 NaN
1 bA 2 A
2 cCc 3 C
3 D 4 NaN
```
Upvotes: 3 [selected_answer]<issue_comment>username_2: I could not respond to username_1's comment because of my reputation, so I changed his answer into a function that merges on non-capitalized text.
```
def str_merge(part_string_df, full_string_df, merge_column):
    merge_column_lower = 'merge_column_lower'
    part_string_df[merge_column_lower] = part_string_df[merge_column].str.lower()
    full_string_df[merge_column_lower] = full_string_df[merge_column].str.lower()
    pat = '|'.join(r"{}".format(x) for x in part_string_df[merge_column_lower])
    full_string_df['Test3'] = full_string_df[merge_column_lower].str.extract('(' + pat + ')', expand=True)
    DF = pd.merge(part_string_df, full_string_df, left_on=merge_column_lower, right_on='Test3').drop([merge_column_lower + '_x', merge_column_lower + '_y', 'Test3'], axis=1)
    return DF
```
Used with example:
```
DF1 = pd.DataFrame({'Test1': list('ABC'),
                    'Test2': [1, 2, 3]})
DF2 = pd.DataFrame({'Test1': ['ee', 'bA', 'cCc', 'D'],
                    'Test2': [1, 2, 3, 4]})
print(str_merge(DF1,DF2, 'Test1'))
Test1_x Test2_x Test1_y Test2_y
0 B 2 bA 2
1 C 3 cCc 3
```
Upvotes: 2 |
2018/03/20 | 2,197 | 5,969 | <issue_start>username_0: i am making block with arrow and border looks like [](https://i.stack.imgur.com/1VX7q.png)
And i have tried this.
```css
* {
box-sizing: border-box;
}
.block-arr {
background: purple;
margin: 20px;
margin-right: 100px;
position: relative;
}
.block-arr .inner {
min-height: 100px;
display: flex;
padding: 20px;
align-items: center;
position: relative;
}
.block-arr .inner:after {
border-top: 50px solid transparent;
border-bottom: 50px solid transparent;
border-left: 50px solid purple;
content: '';
position: absolute;
left: 100%;
top: 0;
}
.block-arr:after {
border-top: 50px solid transparent;
border-bottom: 50px solid transparent;
border-left: 50px solid purple;
content: '';
position: absolute;
left: 100%;
top: 0;
}
```
```html
**Main Heading**
Sub Heading
```
How can I make a block like the image? And can we make the arrow height responsive?<issue_comment>username_1: I would consider a mix of skew transformation, inset box-shadow and some linear-gradient:
```css
* {
box-sizing: border-box;
}
.block-arr {
padding: 50px;
margin: 20px;
margin-right: 100px;
position: relative;
background: linear-gradient(#fff, #fff)2px 0/2px 100% no-repeat, purple;
border-left: 2px solid purple;
z-index: 0;
}
.block-arr:before {
content: "";
position: absolute;
top: 0;
bottom: 50%;
left: 0;
right: 0;
background: purple;
border: 5px solid purple;
border-bottom: none;
border-left: none;
box-shadow: -2px 2px 0px #fff inset;
transform: skew(25deg);
transform-origin: top left;
z-index: -1;
}
.block-arr:after {
content: "";
position: absolute;
top: 50%;
bottom: 0;
left: 0;
right: 0;
background: purple;
border: 5px solid purple;
border-top: none;
border-left: none;
box-shadow: -2px -2px 0px #fff inset;
transform: skew(-25deg);
transform-origin: bottom left;
z-index: -1;
}
```
```html
**Main Heading**
Sub Heading
**Main Heading**
Sub Heading
```
And here is a more compressed version with some CSS variable to easily handle color. You can also do the same to handle others variables:
```css
* {
box-sizing: border-box;
}
.block-arr {
--c1:purple;
--c2:#fff;
padding: 50px;
margin: 20px;
margin-right: 100px;
position: relative;
background: linear-gradient(var(--c2), var(--c2))2px 0/2px 100% no-repeat, var(--c1);
border-left: 2px solid var(--c1);
z-index: 0;
}
.block-arr:before,
.block-arr:after {
left: 0;
right: 0;
content: "";
position: absolute;
background: var(--c1);
border: 5px solid var(--c1);
border-left: none;
z-index: -1;
}
.block-arr:before {
top: 0;
bottom: 50%;
border-bottom: none;
box-shadow: -2px 2px 0px var(--c2) inset;
transform: skew(25deg);
transform-origin: top left;
}
.block-arr:after {
top: 50%;
bottom: 0;
border-top: none;
box-shadow: -2px -2px 0px var(--c2) inset;
transform: skew(-25deg);
transform-origin: bottom left;
}
```
```html
**Main Heading**
Sub Heading
And yes it is reponsive and grow when height grow
```
**BONUS**
Another fancy and more complex way with only linear-gradient:
```css
* {
box-sizing: border-box;
}
.block-arr {
--c1:purple;
--c2:#fff;
padding: 50px;
margin: 20px;
margin-right: 100px;
position: relative;
border:1px solid;
background:
linear-gradient(to top left,transparent calc(50% + 4px),var(--c2) calc(50% + 4px),var(--c2) calc(50% + 6px),transparent 0) 100% 100%/50px 50% no-repeat,
linear-gradient(to bottom left,transparent calc(50% + 4px),var(--c2) calc(50% + 4px),var(--c2) calc(50% + 6px),transparent 0) 100% 0/50px 50% no-repeat,
linear-gradient(var(--c2),var(--c2)) 4px calc(100% - 4px)/calc(100% - 58px) 2px no-repeat,
linear-gradient(var(--c2),var(--c2)) 4px 4px/calc(100% - 58px) 2px no-repeat,
linear-gradient(var(--c2),var(--c2)) 4px 4px/2px calc(100% - 8px) no-repeat,
linear-gradient(to top left ,transparent 50%,var(--c1) 50%) 100% 100%/50px 50% no-repeat,
linear-gradient(to bottom left,transparent 50%,var(--c1) 50%) 100% 0/50px 50% no-repeat,
linear-gradient(var(--c1),var(--c1)) 0 0/calc(100% - 50px) 100% no-repeat;
}
```
```html
```
Upvotes: 3 [selected_answer]<issue_comment>username_2: Using `:after` and `:before` pseudo elements, i have made this design.
Hope it fulfills your requirement.
Thanks
**CSS and HTML:**
```css
* {
box-sizing: border-box;
}
p { margin:0; }
.block-arr {
background: purple;
margin: 20px;
margin-right: 100px;
position: relative;
}
.block-arr .inner {
min-height: 100px;
/*display: flex;*/
padding: 20px;
align-items: center;
position: relative;
}
.block-arr .inner:after {
border-top: 50px solid transparent;
border-bottom: 50px solid transparent;
border-left: 50px solid purple;
content: '';
position: absolute;
left: 100%;
top: 0;
}
.block-arr:after {
border-top: 50px solid transparent;
border-bottom: 50px solid transparent;
border-left: 50px solid purple;
content: '';
position: absolute;
left: 100%;
top: 0;
}
.bordered { position:relative; border:1px solid #fff; border-right:none; display: flex; align-items: center; padding:20px; }
.bordered:before, .bordered:after {
content: "";
position: absolute;
height: 72%;
width: 1px;
background: #fff;
top: 0;
right: 0;
z-index: 4;
}
.bordered:before {
transform: rotate(45deg);
top: auto;
right: -3.3%;
bottom: -11%;
}
.bordered:after {
transform: rotate(-45deg);
top: -12%;
right: -3.3%;
}
```
```html
**Main Heading**
Sub Heading
```
Upvotes: 1 |
2018/03/20 | 372 | 1,204 | <issue_start>username_0: I have a dataframe (spark) which has 2 columns each with list values. I want to create a new column which concatenates the 2 columns (as well as the list values inside the column).
For e.g.
Column 1 has a row value - [A,B]
Column 2 has a row value - [C,D]
**"The output should be in a new column i.e. "**
Column 3(newly created column) with row value - [A,B,C,D]
**Note:-
Column values have values stored in LIST**
Please help me implement this with pyspark.
Thanks<issue_comment>username_1: We can use a UDF as:
```
>>> from pyspark.sql import functions as F
>>> from pyspark.sql.types import *
>>> udf1 = F.udf(lambda x,y : x+y,ArrayType(StringType()))
>>> df = df.withColumn('col3',udf1('col1','col2'))
```
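For intuition, the row-wise operation that UDF performs is plain list concatenation; outside Spark, the same transformation looks like this in Python:

```python
col1 = [["A", "B"], ["X"]]
col2 = [["C", "D"], ["Y"]]

# what the UDF's lambda x, y: x + y does to each pair of cells
col3 = [x + y for x, y in zip(col1, col2)]
print(col3)  # [['A', 'B', 'C', 'D'], ['X', 'Y']]
```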
Upvotes: 3 [selected_answer]<issue_comment>username_2: As a general rule, if you want to join more list columns, I suggest using `chain` from `itertools`:
```
from itertools import chain
# wrap the iterator in list() so Spark can serialize the result
concat_list_columns = F.udf(lambda *list_: list(chain(*list_)), ArrayType(StringType()))
```
Because UDFs are heavy on memory, a better solution would be to use the built-in pyspark function `concat`:
```
from pyspark.sql import functions as F
F.concat(col1, col2, col3)
```
Upvotes: 0 |
2018/03/20 | 431 | 1,600 | <issue_start>username_0: I need to make a C# Form screenshot, but there is a catch.
I need it to be a form at certain resolution AND form itself must not flicker to a user, so somehow in background it need to make an screenshot at certain form dimension.
I tried various things but i cannot make form resized without actually resize a window and than take a screenshot. It's anoying since, to user a window will flicker in mean time -> go to resolution -> make screen shot -> go back to old resolution.
Making a deep-copy of form is a more or less no go since it would needed a lot of rewriting things.
I tried things like: [Attempt 1](https://www.developerfusion.com/code/4630/capture-a-screen-shot/) | [Attempt 2](http://www.xtremedotnettalk.com/graphics-and-multimedia/95011-capture-hidden-window-top-level.html) | [Attempt 3](https://www.daniweb.com/programming/software-development/threads/260393/get-bitmap-of-hidden-window)
But still problem with resizing window and flicker from one size to other.
I have dotnet 2.0 (don't ask why).<issue_comment>username_1: This is a very specific problem, and I'm sure there are no real code solutions for that. This is a design-technical issue, and some possible ways to avoid this are
1. You don't resize the form back,
2. You make screenshots only possible in your specific size,
3. You disable the resizing in general and set your size to the default size.
Upvotes: 1 <issue_comment>username_2: The solution used was to paint separate picture boxes to memory and then merge all these separate images into one big image.
Upvotes: 1 [selected_answer] |
2018/03/20 | 654 | 2,324 | <issue_start>username_0: I am getting some errors trying to add Pod to my Cordova iOS project
The errors I am seeing in Xcode are :
```
diff: /Podfile.lock: No such file or directory
diff: /Manifest.lock: No such file or directory
error: The sandbox is not in sync with the Podfile.lock. Run 'pod install' or update your CocoaPods installation.
```
I have tried the following based on information that I have to seem on other Stackoverflow.
1: Close Xcode and run pod install
2: pod update
3: Remove and reinstall cocoa pods
4: create a new project
My project has the structure `/platform/ios` and I am trying to install Firebase. This what I have done to install the pods
1: In the folder above -- pod init
2: edit the podfile and add the firebase
the podfile content
```
# Uncomment the next line to define a global platform for your project
# platform :ios, '9.0'
target 'RandstadJobs' do
# Uncomment the next line if you're using Swift or would like to use dynamic frameworks
# use_frameworks!
# Pods for RandstadJobs
pod 'Firebase/Core'
end
```
3: run pod install
4: open Xcode clean and build
I see the Podfile.lock and the Manifest.lock file. They are in the folders:
```
/platform/ios/Podfile.lock and /platform/ios/pods/manifest.lock
```
Here is the build phase setting in Xcode
```
diff "${PODS_PODFILE_DIR_PATH}/Podfile.lock" "${PODS_ROOT}/Manifest.lock" > /dev/null
if [ $? != 0 ] ; then
# print error to STDERR
echo "error: The sandbox is not in sync with the Podfile.lock. Run 'pod install' or update your CocoaPods installation." >&2
exit 1
fi
# This output is used by Xcode 'outputs' to avoid re-running this script phase.
echo "SUCCESS" > "${SCRIPT_OUTPUT_FILE_0}"
```
It seems like the path is not getting set.
Can you please give some hints and how to resolve this issue
Thanks<issue_comment>username_1: You can try clean the folder
>
> /Users/yourName/Library/Developer/Xcode/DerivedData
>
>
>
I just got this problem because I changed the project name and folder path.
Upvotes: 0 <issue_comment>username_2: I was able to get past the error by doing the following:
Add 2 user-defined variables in the Xcode build settings:
1: PODS_ROOT = $(SRCROOT)/PODS
2: PODS_PODFILE_DIR_PATH = $(SRCROOT)
Then clean the project and compile
Upvotes: 2 [selected_answer] |
2018/03/20 | 379 | 1,150 | <issue_start>username_0: The pseudocode is as below. how to calculate time complexity for this programme
```
Algorithm MinValue(A, n):
Input: An integer array A of size n //1
Output: The smallest value in A
minValue <- A[0] //1
for k=1 to n-1 do //n
if (minValue > A[k]) then //n-1
minValue <- A[k] //1
return minValue //1
```
So, it's 1+1+n+(n-1)+1+1 = 2n+3; is that correct?
---
This is a simpler program:
Algorithm MaxInt(a, b):
```
Input: Two integers a and b //1
Output: The larger of the two integers
if a > b then //1
return a //1
else
return b. // 1
```
total operations = 4, is it correct?
Could anyone tell me the correct answer? Thanks |
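As a sanity check on the counting, here is an instrumented Python version of MinValue. It confirms the loop performs exactly n-1 comparisons; whether the grand total comes out as 2n+3 depends on which primitive operations (loop bookkeeping, the return, etc.) you decide to count:

```python
def min_value_with_counts(a):
    comparisons = 0
    assignments = 1              # minValue <- A[0]
    min_value = a[0]
    for k in range(1, len(a)):
        comparisons += 1         # minValue > A[k]
        if min_value > a[k]:
            assignments += 1     # minValue <- A[k]
            min_value = a[k]
    return min_value, comparisons, assignments

print(min_value_with_counts([5, 3, 4, 1, 2]))  # (1, 4, 3)
```

For an array of size n the comparison count is always n-1; the assignment count depends on the input order, but the running time stays O(n) either way.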
2018/03/20 | 214 | 784 | <issue_start>username_0: I have a Java application that needs both a Tomcat 7 server and a MySQL DB to run. I have one Docker container with Tomcat 7 in it; should I create one more container for the MySQL DB, or should I install MySQL in the Tomcat 7 container itself? How do the big companies ideally do this? |
2018/03/20 | 496 | 1,796 | <issue_start>username_0: login.php (controller)
```
class login extends CI_Controller {
public function index()
{
$this->load->view('login');
}
public function Click()
{
$action = $this->input->post('register'); // $_POST['start']; also works.
if($action)
{
$this->load->view('register');
}
}
}
```
login.php(views)
```
```
How can I change my view to register.php after clicking the register button? The problem is that I keep going back to the login page after clicking register.<issue_comment>username_1: **You can try this :**
In `Login` controller :
```
public function register()
{
$this->load->view('register');
}
```
**In view:**
```
[Register](<?php echo base_url('login/register');?>)
```
Upvotes: 2 <issue_comment>username_2: You need to enable the mod_rewrite module in Apache.
Please follow the steps below to enable the mod_rewrite module:
1) Find the “httpd.conf” file under the “conf” folder inside the Apache’s installation folder.
2) Find the following line "#LoadModule rewrite_module modules/mod_rewrite.so" in the "httpd.conf" file. You can do this easily by searching for the keyword "mod_rewrite" from the find menu.
3) Remove the "#" at the start of the line; "#" means the line is commented out.
4) Now restart the Apache server.
5) You can now see "mod_rewrite" in the Loaded Modules section when doing "phpinfo()".
Upvotes: 0 <issue_comment>username_2: You can try this :
In Login controller:
```
class Login extends CI_Controller {

    public function index()
    {
        if($this->input->post('login') == 'Login'){
            $this->load->view('login');
        }else{
            $this->register();
        }
    }

    public function register(){
        $this->load->view('register');
    }
}
```
In View :
Upvotes: 0 |
2018/03/20 | 521 | 1,659 | <issue_start>username_0: we have a table named PatientAdmissions listed below [](https://i.stack.imgur.com/OYNqU.jpg)
I'm trying to get the Total Length of stay which is the Total Date difference between the AdmitDate and the DischargeDate in a provided period of time from the user by (AdmitDate as FromDate,DischargeDate as ToDate)
and then apply the following formula
[](https://i.stack.imgur.com/1du45.jpg)
and the Number of Days in the period would be AdmitDate as FromDate and DischargeDate as ToDate
, I couldn't figure out a way to write the query without getting any e error's<issue_comment>username_1: ```
SELECT *,DATEDIFF(hour, AdmitDate, DischargeDate) AS 'Total Length of stay',
DATEDIFF(day, AdmitDate, DischargeDate) AS 'Number of Days in the period',
(DATEDIFF(hour, AdmitDate, DischargeDate)*100)/(133*DATEDIFF(day, AdmitDate, DischargeDate))
AS Formula
from TableName
```
The `DATEDIFF()` function returns the difference between two date values, based on the interval specified.
**Syntax:** `DATEDIFF(Interval, date1, date2)`
* year, yyyy, yy = Year
* quarter, qq, q = Quarter
* month, mm, m = month
* dayofyear = Day of the year
* day, dy, y = Day
* week, ww, wk = Week
* weekday, dw, w = Weekday
* hour, hh = hour
* minute, mi, n = Minute
* second, ss, s = Second
* millisecond, ms = Millisecond
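Outside SQL, the formula is easy to sanity-check. Here is a small Python sketch with illustrative admit/discharge dates (note that SQL's DATEDIFF counts boundary crossings, which this only approximates):

```python
from datetime import datetime

admit = datetime(2018, 3, 1, 8, 0)
discharge = datetime(2018, 3, 4, 20, 0)

hours = int((discharge - admit).total_seconds() // 3600)  # ~ DATEDIFF(hour, ...)
days = (discharge.date() - admit.date()).days             # ~ DATEDIFF(day, ...)

# (total length of stay in hours * 100) / (133 * number of days in the period)
ratio = hours * 100 / (133 * days)
print(hours, days, round(ratio, 2))  # 84 3 21.05
```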
Upvotes: 2 [selected_answer]<issue_comment>username_2: SELECT (DATEDIFF(minute, AdmitDate , DischargeDate ))/133\*(DATEDIFF(day, AdmitDate , DischargeDate )\*100) from PatientAdmissions
Upvotes: 0 |
2018/03/20 | 498 | 1,394 | <issue_start>username_0: input.json:-
```
{
"menu": {
"id": "file",
"value": "File",
"user": {
"address": "USA",
"email": "<EMAIL>"
}
}
}
```
Command:-
```
result=$(cat input.json | jq -r '.menu | keys[]')
```
Result:-
```
id
value
user
```
Loop through result:-
```
for type in "${result[@]}"
do
echo "--$type--"
done
```
Output:-
```
--id
value
user--
```
I want to process the key values in a loop. When I do the above, it results in a single string.
How can I loop over the JSON keys result in a bash script?
```
file='input.json'
cat "$file" | jq -r '.menu | keys[]' |
while IFS= read -r value; do
echo "$value"
done
```
[bash faq #1](http://mywiki.wooledge.org/BashFAQ/001)
---
But you seem to want an array, so the syntax is (you were missing parentheses):
```
file='input.json'
result=( $(cat "$file" | jq -r '.menu | keys[]') )
for type in "${result[@]}"; do
echo "--$type--"
done
```
Output:
-------
```
--id--
--value--
--user--
```
Upvotes: 4 [selected_answer]<issue_comment>username_2: Using `bash` just to print an object's keys from JSON data is redundant.
**`Jq`** is able to handle it by itself. Use the following simple **`jq`** solution:
```
jq -r '.menu | keys_unsorted[] | "--"+ . +"--"' input.json
```
The output:
```
--id--
--value--
--user--
```
Upvotes: 2 |
2018/03/20 | 486 | 1,573 | <issue_start>username_0: ```
var response = transport.responseJSON || transport.responseText.evalJSON(true) || {};
// console.log(response)
// log results
account: []
agreements: []
billing: []
order: {error: 1, message: "Ingenico ePayments Payment failed", goto_section: "payment"}
payment: []
shipping: []
alert(response.account.message);
```
Depending on what message comes back in the response, is there a way I can loop through the AJAX response and display the associated message, rather than hardcoding the value as above (response.account.message)? Sometimes, for example, the error will come from a different section, like:
```
account: {error: 1, message: "An account already exists for this email address."}
```
|
2018/03/20 | 854 | 2,974 | <issue_start>username_0: I am trying to pass an array of floats (in my case an audio wave) to a fragment shader via texture. It works but I get some imperfections as if the value read from the 1px height texture wasn't reliable.
[](https://i.stack.imgur.com/CCU1U.png)
This happens with many combinations of bar widths and amounts.
I get the value from the texture with:
```
precision mediump float;
...
uniform sampler2D uDisp;
...
void main(){
...
float columnWidth = availableWidth / barsCount;
float barIndex = floor((coord.x-paddingH)/columnWidth);
float textureX = min( 1.0, (barIndex+1.0)/barsCount );
float barValue = texture2D(uDisp, vec2(textureX, 0.0)).r;
...
```
If instead of the value from the texture I use something else the issue doesn't seem to be there.
```
barValue = barIndex*0.1;
```
[](https://i.stack.imgur.com/mzyBe.png)
Any idea what could be the issue? Is using a texture for this purpose a bad idea?
I am using [Pixi.JS](http://www.pixijs.com/) as WebGL framework, so I don't have access to low level APIs.
With a gradient texture for the data and many bars the problems becomes pretty evident.
[](https://i.stack.imgur.com/3b8os.png)
**Update:** Looks like the issue relates to the consistency of the value of textureX.
Trying different formulas like `barIndex/(barsCount-1.0)` results in less noise. Wrapping it in a `min` definitely adds more noise.
If you know you are doing 1:1 mapping then also use GL\_NEAREST rather than GL\_LINEAR, as the quantization effect will more likely hide some of the precision side-effects.
Given you probably know the number of columns and bars you can probably pre-compute some of the values on the CPU (e.g. precompute `1/columns` and pass that as a uniform) at fp32 precision. Passing in small values between 0 and 1 is always much better at preserving floating point accuracy, rather than passing in big values and then dividing out.
Upvotes: 0 <issue_comment>username_2: Turned out the issue wasn't in reading the values from the texture, but was in the drawing. Instead of using IFs I switched to step and the problem went away.
```
vec2 topLeft = vec2(
paddingH + (barIndex*columnWidth) + ((columnWidth-barWidthInPixels)*0.5),
top
);
vec2 bottomRight = vec2(
topLeft.x + barWidthInPixels,
bottom
);
vec2 tl = step(topLeft, coord);
vec2 br = 1.0-step(bottomRight, coord);
float blend = tl.x * tl.y * br.x * br.y;
```
I guess comparisons of floats through IFs are not very reliable in shaders.
Upvotes: 1 |
2018/03/20 | 582 | 1,922 | <issue_start>username_0: I'm having trouble changing the colour of an SVG image with jQuery or vanilla JavaScript. I've tried a couple of things but nothing seems to work; the original colour is black and I can change it if I open the SVG image in my IDE and add/change the fill attribute, but not with JS. What am I doing wrong? Other lines of code work fine, so it is not a problem with the files.
So far I've tried these lines of code and put an id on the img tag as well as on the svg element itself.
html:
```

```
JS:
```
$('#myId').css({fill:"#f8b9d4"});
$("myId").attr("fill", "yellow");
document.getElementById("myId").setAttribute("fill", "#f8b9d4");
xml version="1.0" encoding="UTF-8" standalone="no"?
random title
```
UPDATE - created a completley new project but the problem is the same. (The div is for testing and working fine)
HTML:
```
Title

```
jquery:
```
$(document).ready(function () {
$('#myId').css({fill:"blue"});
$("#h").text("heasdadasdsadasdsad");
});
```
SVG file:
```
random title
```<issue_comment>username_1: Your Jquery css works fine. You don't need to have an empty `fill=""` attribute on the `path`.
Note: Scroll down on the code snippet to see result, your viewbox is a bit big :)
```js
$('#myId').css({fill:"blue"});
```
```html
random title
```
Upvotes: 2 <issue_comment>username_2: ```
Try This-
$('#myId').css({fill:"blue"});
random title
```
Upvotes: 1 <issue_comment>username_3: I'd say it's because you're using an `img` tag rather than putting the SVG markup directly in the HTML.
If you cannot put the SVG directly in your HTML, embed it using the `object` tag, e.g. like this:
```
Your browser doesn't support SVG
```
Then, you can access elements inside the `object` like this:
```
document.getElementById("img").contentDocument.getElementById("myId").setAttribute("fill", "#f8b9d4");
```
Upvotes: 3 [selected_answer] |
2018/03/20 | 1,594 | 6,831 | <issue_start>username_0: I am stuck on one issue.
I am using the `BluetoothSocket` class, and I am sending and receiving data with the help of input and output streams.
When the app receives a large amount of data from the input stream and I kill the app forcefully and then restart it, the `InputStream` returns the previous data, which is not needed anymore. How can I discard that old data?
Does anyone have a solution for this issue?
**Following is my source code:**
```
public class MyBluetoothService {
private static final String TAG = "MY_APP_DEBUG_TAG";
private Handler mHandler; // handler that gets info from Bluetooth service
// Defines several constants used when transmitting messages between the
// service and the UI.
private interface MessageConstants {
public static final int MESSAGE_READ = 0;
public static final int MESSAGE_WRITE = 1;
public static final int MESSAGE_TOAST = 2;
// ... (Add other message types here as needed.)
}
private class ConnectedThread extends Thread {
private final BluetoothSocket mmSocket;
private final InputStream mmInStream;
private final OutputStream mmOutStream;
private byte[] mmBuffer; // mmBuffer store for the stream
public ConnectedThread(BluetoothSocket socket) {
mmSocket = socket;
InputStream tmpIn = null;
OutputStream tmpOut = null;
// Get the input and output streams; using temp objects because
// member streams are final.
try {
tmpIn = socket.getInputStream();
} catch (IOException e) {
Log.e(TAG, "Error occurred when creating input stream", e);
}
try {
tmpOut = socket.getOutputStream();
} catch (IOException e) {
Log.e(TAG, "Error occurred when creating output stream", e);
}
mmInStream = tmpIn;
mmOutStream = tmpOut;
}
public void run() {
mmBuffer = new byte[1024];
int numBytes; // bytes returned from read()
// Keep listening to the InputStream until an exception occurs.
while (true) {
try {
// Read from the InputStream.
numBytes = mmInStream.read(mmBuffer);
// Send the obtained bytes to the UI activity.
Message readMsg = mHandler.obtainMessage(
MessageConstants.MESSAGE_READ, numBytes, -1,
mmBuffer);
readMsg.sendToTarget();
} catch (IOException e) {
Log.d(TAG, "Input stream was disconnected", e);
break;
}
}
}
// Call this from the main activity to send data to the remote device.
public void write(byte[] bytes) {
try {
mmOutStream.write(bytes);
// Share the sent message with the UI activity.
Message writtenMsg = mHandler.obtainMessage(
MessageConstants.MESSAGE_WRITE, -1, -1, mmBuffer);
writtenMsg.sendToTarget();
} catch (IOException e) {
Log.e(TAG, "Error occurred when sending data", e);
// Send a failure message back to the activity.
Message writeErrorMsg =
mHandler.obtainMessage(MessageConstants.MESSAGE_TOAST);
Bundle bundle = new Bundle();
bundle.putString("toast",
"Couldn't send data to the other device");
writeErrorMsg.setData(bundle);
mHandler.sendMessage(writeErrorMsg);
}
}
// Call this method from the main activity to shut down the connection.
public void cancel() {
try {
mmSocket.close();
} catch (IOException e) {
Log.e(TAG, "Could not close the connect socket", e);
}
}
}
}
```<issue_comment>username_1: I think you should close the socket to manage the bug.
I recommend you to do this in finalizer like the code below.
```
private class ConnectedThread extends Thread
{
private final BluetoothSocket mmSocket;
private final InputStream mmInStream;
private final OutputStream mmOutStream;
private byte[] mmBuffer; // mmBuffer store for the stream
@override
protected void finalize() throws Throwable
{
try
{
cancel();
}
finally
{
super.finalize();
}
}
...
```
Also, as I mentioned in a comment, it is safer to close all the `stream`s before closing the socket.
So, try this `cancel()` method.
```
// Call this method from the main activity to shut down the connection.
public void cancel()
{
try {
mmInStream.close();
} catch( NullPointerException | IOException e) {}
try {
mmOutStream.close();
} catch( NullPointerException | IOException e) {}
try
{
mmSocket.close();
} catch (IOException e) {
Log.e(TAG, "Could not close the connect socket", e);
}
}
```
And [more information](https://dzone.com/articles/java-what-does-finalize-do-and) about `finalize` method.
**EDIT: the bold parts are more important than the other suggestions.**
Reading comments of EJP I understood why your app stops when you get large data : you maybe have to clear buffers before calling `read()`. And he says that finalizer can happen not to be called by system (I don't know why).
**How about** `break`**ing the loop when the** `read()` **returned** `-1`?
And just now I found a helpful [link](http://www.gregbugaj.com/?p=283) about proper method to read a stream. I hope it helped.
Code cited from the link
>
>
> ```
> private String extract(InputStream inputStream) throws IOException
> {
> ByteArrayOutputStream baos = new ByteArrayOutputStream();
> byte[] buffer = new byte[1024];
> int read = 0;
> while ((read = inputStream.read(buffer, 0, buffer.length)) != -1) {
> baos.write(buffer, 0, read);
> }
> baos.flush();
> return new String(baos.toByteArray(), "UTF-8");
> }
>
> ```
>
>
*Also*
Though the `finalizer` may happen not to be called by the system, closing the `streams` before closing the `sockets` is safer (I read some SO threads before).
Upvotes: -1 <issue_comment>username_2: ```
// Keep listening to the InputStream until an exception occurs
```
The problem is here. You should keep reading from the stream until *end of stream or* an exception occurs. You need to break out of the read loop if `read()` returns -1.
At present you are reading beyond end of stream, and ignoring the condition altogether, so of course the data that was in the buffer on the last successful read is still there.
For your application to keep *seeing* that data, you must also be ignoring the read count and assuming the buffer was filled, which also is invalid.
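A minimal, self-contained sketch of such a corrected loop (plain Java; the `ByteArrayInputStream` here is just a stand-in for the Bluetooth stream, and in the Android service you would keep the Handler/Message plumbing around it):

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.InputStream;
import java.io.UncheckedIOException;

public class ReadLoopDemo {

    // Reads until read() returns -1 (end of stream) and honours the read
    // count, so stale bytes left in the buffer are never re-delivered.
    static byte[] readAll(InputStream in) {
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        byte[] buffer = new byte[1024];
        int numBytes;
        try {
            while ((numBytes = in.read(buffer)) != -1) {
                // only the first numBytes of the buffer are valid this pass
                out.write(buffer, 0, numBytes);
            }
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
        return out.toByteArray();
    }

    public static void main(String[] args) {
        byte[] data = "hello bluetooth".getBytes();
        System.out.println(new String(readAll(new ByteArrayInputStream(data))));
    }
}
```

The key differences from the original `run()` are the `!= -1` loop condition and the use of the returned count instead of assuming the buffer was filled.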
Upvotes: 2 [selected_answer] |
2018/03/20 | 691 | 2,735 | <issue_start>username_0: Hello I am attepting to make a webpage to dowload reports from amazon seller central using the MWS API. The issue im running into is that 3 different calls are needed to download one report!
Request report and get its ID-> check status/get report ID -> download report.
Currently I have this running of 3 separate buttons which need to be clicked in order, how would I go about making the second api call wait for the frist one to return a specific value and so on.
<http://docs.developer.amazonservices.com/en_UK/reports/index.html><issue_comment>username_1: You basically have two options:
1. Have a long running PHP process which does it all in one go. You might have to flush() data to the client so that it doesn't appear to hang.
2. Have different PHP scripts doing the three parts (you might want to combine step 2 and 3) and call them through JavaScript in a "polling loop".
Since MWS reports can sometimes take very long (on occasion >1 hr), you'll probably have to change PHP settings to allow such a long running script. I'd personally go with method 2.
Upvotes: 1 [selected_answer]<issue_comment>username_2: Since you say you are doing this from a webpage, I assume your language of choice is JavaScript. Here are JavaScript examples of making a call, and chaining other calls to it. Depending on the library/framework you are using, you can leverage promises, observables or callbacks.
**Promises** (don't forget to check for errors, I don't check in my example):
```
http.get(reportUrl).then(reportResponse => {
http.get(checkStatusUrl + reportResponse.id).then(statusResponse => {
http.get(finalUrl + statusResponse.id).then(finalResponse => {
console.log(finalResponse);
})
})
})
```
**Observables**:
```
http.get(reportUrl).subscribe(reportResponse => {
http.get(checkStatusUrl + reportResponse.id).subscribe(statusResponse => {
http.get(finalUrl + statusResponse.id).subscribe(finalResponse => {
console.log(finalResponse);
})
})
})
```
**Callbacks**:
```
http.get(reportUrl, reportResponse => {
http.get(checkStatusUrl + reportResponse.id, statusResponse => {
http.get(finalUrl + statusResponse.id, finalResponse => {
console.log(finalResponse);
})
})
})
```
---
If you're using a server-side language, your API should look more or less the same. Make the first request, wait for its response. Use the response to make the second request, and so on.
Upvotes: 1 <issue_comment>username_3: If you make ajax calls,
* you can set `async: false`
* you can set the next ajax call as a callback
* you can call the next ajax call in your calls success function
Upvotes: 0 |
2018/03/20 | 898 | 3,076 | <issue_start>username_0: I have two tables permissions and groups of many to many relationship
```
CREATE TABLE `permissions` (
`Permission_Id` int(11) NOT NULL AUTO_INCREMENT,
`Permission_Name` varchar(50) DEFAULT NULL,
PRIMARY KEY (`Permission_Id`)
)
```
Groups table
```
CREATE TABLE `groups` (
`Group_Id` int(11) NOT NULL AUTO_INCREMENT,
`Group_Desc` varchar(100) DEFAULT NULL,
PRIMARY KEY (`Group_Id`)
)
```
I am confuse how to implement the many to many relationship
which is better to create a composite primary key of Group\_id and Permission\_id in a new table
Or to create a new table & select the columns from the two table using join keyword .<issue_comment>username_1: I know the solution.
I need a to create "junction" table to hold many-to-many relationship in this case.
```
CREATE TABLE Groups_Permissions
(
Group_Id INT,
Permission_Id INT,
)
```
The combination of Group\_Id and Permmission\_Id should be UNIQUE and have FK to groups and permission tables.
Upvotes: 1 [selected_answer]<issue_comment>username_2: From [*my blog*](http://mysql.rjweb.org/doc.php/index_cookbook_mysql#many_to_many_mapping_table):
Do it this way.
```
CREATE TABLE XtoY (
# No surrogate id for this table
x_id MEDIUMINT UNSIGNED NOT NULL, -- For JOINing to one table
y_id MEDIUMINT UNSIGNED NOT NULL, -- For JOINing to the other table
# Include other fields specific to the 'relation'
PRIMARY KEY(x_id, y_id), -- When starting with X
INDEX (y_id, x_id) -- When starting with Y
) ENGINE=InnoDB;
```
Notes:
```
⚈ Lack of an AUTO_INCREMENT id for this table -- The PK given is the 'natural' PK; there is no good reason for a surrogate.
⚈ "MEDIUMINT" -- This is a reminder that all INTs should be made as small as is safe (smaller ⇒ faster). Of course the declaration here must match the definition in the table being linked to.
⚈ "UNSIGNED" -- Nearly all INTs may as well be declared non-negative
⚈ "NOT NULL" -- Well, that's true, isn't it?
⚈ "InnoDB" -- More effecient than MyISAM because of the way the PRIMARY KEY is clustered with the data in InnoDB.
⚈ "INDEX(y_id, x_id)" -- The PRIMARY KEY makes it efficient to go one direction; this index makes the other direction efficient. No need to say UNIQUE; that would be extra effort on INSERTs.
⚈ In the secondary index, saying just INDEX(y_id) would work because it would implicit include x_id. But I would rather make it more obvious that I am hoping for a 'covering' index.
```
To conditionally INSERT new links, use IODKU
Note that if you had an AUTO\_INCREMENT in this table, IODKU would "burn" ids quite rapidly.
**More**
A `FOREIGN KEY` implicitly creates an index on the column(s) involved.
`PRIMARY KEY(a,b)` (1) says that the combo `(a,b)` is `UNIQUE`, and (2) orders the data by `(a,b)`.
`INDEX(a), INDEX(b)` (whether generated by `FOREIGN KEY` or generated manually) is *not* the same as `INDEX(a,b)`.
InnoDB really needs a `PRIMARY KEY`, so you may as well say `PRIMARY KEY (a,b)` *instead of* `UNIQUE(a,b)`.
Upvotes: 1 |
2018/03/20 | 498 | 1,835 | <issue_start>username_0: I have a very large dataset in NetCDF4 format. Normally I would just read the variables I need and pass them as attributes to my custom class so I can create new methods for post processing. Since this dataset is so large, this is not an option as it throws a memory error. So I want to extend the attributes of the NetCDF4 Dataset instead. The following code illustrates what I'm trying to do:
```
import netCDF4
class output(NetCDF4.Dataset):
def __init__(self, path):
super(NetCDF4.Dataset, self).__init__(path)
print(self.variables) # Prints successfully
def my_new_method(self):
print(self.variables) # AttributeError: 'output' object has no attribute 'variables'
```<issue_comment>username_1: Your `super()` syntax is incorrect. Consider
```
class output(netCDF4.Dataset):
def __init__(self, path):
super(output, self).__init__(path)
print(self.variables)
def my_new_method(self):
print(self.variables)
```
The basic syntax of `super()` in Python has been discussed here before (see the linked [post](https://stackoverflow.com/q/222877/1328439))
With this definition I could type
```
my_output=output("myhdf.hdf")
my_output.my_new_method()
```
Both commands output the list of variables.
Upvotes: 2 [selected_answer]<issue_comment>username_2: The NetCDF4 developers helped me further, and this is the actual way of doing it:
```
import netCDF4
class output(netCDF4.Dataset):
def __init__(self, *args, **kwargs):
super(output, self).__init__(*args, **kwargs)
def my_new_method(self):
print(self.variables)
```
As pointed out by <NAME> (see accepted answer) the super syntax was incorrect, but also one needs to pass \*args and \*\*kwargs or else you you will get additional errors.
Upvotes: 0 |
2018/03/20 | 887 | 2,725 | <issue_start>username_0: I am trying to incorporate css styles associated with particular div class names using node.style. I can get the styles to work individually but I am trying to implement multiple styles that span within one another using the div classes mentioned. I am wondering if there is a way of doing this using an easier method or can it be done at all?
```
var node = document.createElement("wall-post");
var image = new Image();
image.src = obj.picture;
node.appendChild(image);
node.style=' display: inline-block;margin: 15px;margin-top: 30px; border-radius: px;overflow-y: hidden;box-shadow: 0 4px 8px 0 rgba(0, 0, 0, 0.2), 0 6px 20px 0 rgba(0, 0, 0, 0.19);float:left;';
document.body.appendChild(node);
```<issue_comment>username_1: You can also use `node.className = "yourclass"` and use css to define the styling of that class
```
.yourclass {
display: inline-block;
margin: 15px;
margin-top: 30px;
border-radius: px;
overflow-y: hidden;
box-shadow: 0 4px 8px 0 rgba(0, 0, 0, 0.2), 0 6px 20px 0 rgba(0, 0, 0, 0.19);
float:left;
}
```
Upvotes: 2 [selected_answer]<issue_comment>username_2: The `style` **property** on an element isn't a string, it's an *object* with properties for the styles. You set them individually.
```
node.style.display = 'inline-block';
node.style.margin = '15px';
node.style.marginTop = '30px';
node.style.borderRadius = /*?? Your question just has 'px' */;
node.style.overflowY = 'hidden';
node.style.boxShadow = '0 4px 8px 0 rgba(0, 0, 0, 0.2), 0 6px 20px 0 rgba(0, 0, 0, 0.19)';
node.style.float = 'left';
```
Note that you use the `camelCase` version (well, or `node.style["margin-top"] = "..."`).
Alternately, if you want to completely replace the style, you can do that by using `setAttribute` to replace the `style` entirely:
```
node.setAttribute('style', ' display: inline-block;margin: 15px;margin-top: 30px; border-radius: px;overflow-y: hidden;box-shadow: 0 4px 8px 0 rgba(0, 0, 0, 0.2), 0 6px 20px 0 rgba(0, 0, 0, 0.19);float:left;');
```
---
All of that aside: In general, supplying style information in JavaScript code isn't a great pattern to follow. Create a descriptive class, then use that class on the element.
Upvotes: 2 <issue_comment>username_3: The `style` property is related to the `style` attribute which accepts, as its value, the **body** of a CSS rule-set. It associates those rules with the particular element.
Selectors, including class selectors, appear outside the rule-set's body and cannot be included in a style attribute. (It doesn't make sense to do so anyway, the style attribute is associated with a specific element, so there is no point is selecting a different one from that context).
Upvotes: 0 |
2018/03/20 | 840 | 3,548 | <issue_start>username_0: [](https://i.stack.imgur.com/gPdA7.jpg)I have two webapi developed in asp.net core 2.0 and is secured with identityserver4 framework. Both the apis are ssl enabled
I am accessing the webapi using angular2 application.
Everything works fine when i access the webapi individually from the angular application. But when i try to access the another webapi from one webapi, i am getting
```
System.Net.Http.HttpRequestException: An error occurred while sending the request. ---> System.Net.Http.WinHttpException: A security error occurred
```
Below is the code i am using in one of the webapi:
```
var request = CurrentContext.Request;
var authHeader = request.Headers["Authorization"];
var authHeaderVal = AuthenticationHeaderValue.Parse(authHeader);
var client = new System.Net.Http.HttpClient
{
BaseAddress = new Uri($"https://localhost:5001/")
};
client.DefaultRequestHeaders.Accept.Clear();
client.DefaultRequestHeaders.Accept.Add(new MediaTypeWithQualityHeaderValue("application/json"));
client.DefaultRequestHeaders.Authorization = authHeaderVal;
var response = await client.GetAsync("api/alerts/1");
if(response.IsSuccessStatusCode)
{
var content = await response.Content.ReadAsStringAsync();
}
else
{
}
```<issue_comment>username_1: It seams like a **CORS** issue.
**CORS** stands for Cross-Origin Resource Sharing.
As the name implies you trying to access resources from a different origin, it could be a different domain, protocol or port.
**CORS** Requests are enforced by your browser on servers that don't have **CORS** enabled, check [MS docs](https://learn.microsoft.com/en-us/aspnet/core/security/cors) on how to enable **CORS** on your api.
The **CORS** rule is not mandatory to be enforced but most browsers enforce it for security issues, hence some developer tools such as PostMan ignore it to make development easier.
Another solution would be to install a plugin to browser that disables the restriction such as [this extension](https://chrome.google.com/webstore/detail/allow-control-allow-origi/nlfbmbojpeacfghkpbjhddihlkkiljbi?hl=en) if you're using Chrome.
Upvotes: 1 <issue_comment>username_2: Most likely you are failing SSL verification since you are running on localhost. You can override verification. Also, HttpClient should be static so you should update the code accordingly.
```
var request = CurrentContext.Request;
var authHeader = request.Headers["Authorization"];
var authHeaderVal = AuthenticationHeaderValue.Parse(authHeader);
using (var httpClientHandler = new HttpClientHandler())
{
httpClientHandler.ServerCertificateCustomValidationCallback = (msg, cert, chain, err) => { return true; };
using (var client = new HttpClient(httpClientHandler)
{
BaseAddress = new Uri($"https://localhost:5001/")
})
{
client.DefaultRequestHeaders.Accept.Clear();
client.DefaultRequestHeaders.Accept.Add(new MediaTypeWithQualityHeaderValue("application/json"));
client.DefaultRequestHeaders.Authorization = authHeaderVal;
var response = await client.GetAsync("api/alerts/1");
if(response.IsSuccessStatusCode)
{
var content = await response.Content.ReadAsStringAsync();
}
else
{
//do something
}
}
}
```
Upvotes: 0 |
2018/03/20 | 1,848 | 7,562 | <issue_start>username_0: **Details about database:** folder\_Meta\_Data has fields id,parentFolderId in which id nothing but folderid acts like primary key and parentFolderId refereces to id.
```
@Entity
@Table(name = "folder_Meta_Data")
@Data
@JsonInclude(JsonInclude.Include.ALWAYS)
@JsonIdentityInfo(generator = ObjectIdGenerators.PropertyGenerator.class, property = "id",scope=FolderMetaData.class)
public class FolderMetaData implements Serializable {
@Id
@SequenceGenerator(name = "seq-gen", initialValue = 1)
@GeneratedValue(strategy = GenerationType.IDENTITY, generator = "seq-gen")
@Column(name = "id", nullable = false)
private Long id;
@Column(name = "projectId", nullable = false)
private Long projectId;
@Column(name = "mgId", nullable = false)
private Long mgId;
@Column(name = "folderRoot", length = 50)
private String folderRoot;
@Column(name = "folderExtention", length = 50)
private String folderExtention;
@Column(name = "folderName", nullable = false, length = 255)
private String folderName;
@Column(name = "createdBy", nullable = false)
private Long createdBy;
@Column(name = "createdTime", nullable = false)
private Date createdTime;
@Column(name = "versionOCC")
private Long versionOCC;
@Column(name = "subProjId", nullable = false)
private Long subProjId;
@Column(name = "modifiedBy")
private Long modifiedBy;
@Column(name = "modifiedTime", length = 50)
private Date modifiedTime;
@Column(name = "parentFolderId")
private Long parentFolderId;
@ManyToOne(cascade = { CascadeType.ALL })
@JoinColumn(name = "parentFolderId", insertable = false, updatable = false)
@JsonBackReference
// @JsonIgnore
// child role
private FolderMetaData folder;
@OneToMany(mappedBy = "folder")
// parent role
@JsonManagedReference
private Set folderList;
// Getter and Setter methods
}
```
**JSON Tree response:**
```
The response i am getting is as below
[
{
"id": 1,
"projectId": 125,
"mgId": 34512,
"folderRoot": null,
"folderExtention": null,
"folderName": "XY1",
"createdBy": 12,
"createdTime": "2018-03-16",
"versionOCC": null,
"subProjId": 561565,
"modifiedBy": null,
"modifiedTime": null,
"parentFolderId": null,
"folderList": []
},
{
"id": 2,
"projectId": 125,
"mgId": 34512,
"folderRoot": null,
"folderExtention": null,
"folderName": "XY2",
"createdBy": 12,
"createdTime": "2018-03-16",
"versionOCC": null,
"subProjId": 561565,
"modifiedBy": null,
"modifiedTime": null,
"parentFolderId": null,
"folderList": [
{
"id": 3,
"projectId": 125,
"mgId": 34512,
"folderRoot": null,
"folderExtention": null,
"folderName": "XY2",
"createdBy": 12,
"createdTime": "2018-03-16",
"versionOCC": null,
"subProjId": 561565,
"modifiedBy": null,
"modifiedTime": null,
"parentFolderId": 2,
"folderList": [
{
"id": 4,
"projectId": 125,
"mgId": 34512,
"folderRoot": null,
"folderExtention": null,
"folderName": "XY2",
"createdBy": 12,
"createdTime": "2018-03-16",
"versionOCC": null,
"subProjId": 561565,
"modifiedBy": null,
"modifiedTime": null,
"parentFolderId": 3,
"folderList": [
{
"id": 5,
"projectId": 125,
"mgId": 34512,
"folderRoot": null,
"folderExtention": null,
"folderName": "XY2",
"createdBy": 12,
"createdTime": "2018-03-16",
"versionOCC": null,
"subProjId": 561565,
"modifiedBy": null,
"modifiedTime": null,
"parentFolderId": 4,
"folderList": []
}
]
}
]
}
]
},
3,
4,
5
]
```
In the service class I am calling the method below to retrieve all records. As it is the Jackson API, all fetched records are populated into the list as a parent-children relation, and the records come back as a JSON tree.
```
List list=folderRepository.findAll();
```
**Problem is:**
after using @JsonBackReference and @JsonManagedReference in FolderMetaData pojo.
The tree is coming back correctly, but at the bottom of this JSON array response I am getting dirty data: the ids of the subtrees. If anyone can help me fix this issue, that would be great.
**like below**
```
,
3,
4,
5
```
2018/03/20 | 1,226 | 4,939 | <issue_start>username_0: I've tried to fetch photos from the library. It works, but I just got 3 photos from 9 photos from the library. Here's my code:
```
let options = PHFetchOptions()
let userAlbums = PHAssetCollection.fetchAssetCollections(with: PHAssetCollectionType.album, subtype: PHAssetCollectionSubtype.any, options: options)
let userPhotos = PHAsset.fetchKeyAssets(in: userAlbums.firstObject!, options: nil)
let imageManager = PHCachingImageManager()
userPhotos?.enumerateObjects({ (object: AnyObject!, count: Int, stop: UnsafeMutablePointer) in
if object is PHAsset {
let obj:PHAsset = object as! PHAsset
let fetchOptions = PHFetchOptions()
fetchOptions.sortDescriptors = [NSSortDescriptor(key: "creationDate", ascending: false)]
fetchOptions.predicate = NSPredicate(format: "mediaType = %d", PHAssetMediaType.image.rawValue)
let options = PHImageRequestOptions()
options.deliveryMode = .fastFormat
options.isSynchronous = true
imageManager.requestImage(for: obj, targetSize: CGSize(width: obj.pixelWidth, height: obj.pixelHeight), contentMode: .aspectFill, options: options, resultHandler: { img, info in
self.images.append(img!)
})
}
})
```
When I tried `images.count`, it said 3. Can anyone help me to find my mistake and get all photos? Big thanks!<issue_comment>username_1: Check first in info.plist that your app is authorized to access photos from the library. Then use the below code to access all photos:
```
PHPhotoLibrary.requestAuthorization { (status) in
switch status {
case .authorized:
print("You Are Authrized To Access")
let fetchOptions = PHFetchOptions()
let allPhotos = PHAsset.fetchAssets(with: .image, options: fetchOptions)
print("Found number of: \(allPhotos.count) images")
case .denied, .restricted:
print("Not allowed")
case .notDetermined:
print("Not determined yet")
}
}
```
Upvotes: 1 <issue_comment>username_2: Try this,
first import photos
```
import Photos
```
then declare Array for store photo before viewDidLoad()
```
var allPhotos : PHFetchResult? = nil
```
Now write code for fetch photo in viewDidLoad()
```
/// Load Photos
PHPhotoLibrary.requestAuthorization { (status) in
switch status {
case .authorized:
print("Good to proceed")
let fetchOptions = PHFetchOptions()
self.allPhotos = PHAsset.fetchAssets(with: .image, options: fetchOptions)
case .denied, .restricted:
print("Not allowed")
case .notDetermined:
print("Not determined yet")
}
}
```
Now write this code for display image from Array
```
/// Display Photo
let asset = allPhotos?.object(at: indexPath.row)
self.imageview.fetchImage(asset: asset!, contentMode: .aspectFit, targetSize: self.imageview.frame.size)
// Or Display image in Collection View cell
func collectionView(_ collectionView: UICollectionView, cellForItemAt indexPath: IndexPath) -> UICollectionViewCell {
let cell = self.collectionView.dequeueReusableCell(withReuseIdentifier: "photoCell", for: indexPath) as! SelectPhotoCell
let asset = allPhotos?.object(at: indexPath.row)
cell.imgPicture.fetchImage(asset: asset!, contentMode: .aspectFit, targetSize: cell.imgPicture.frame.size)
return cell
}
extension UIImageView{
func fetchImage(asset: PHAsset, contentMode: PHImageContentMode, targetSize: CGSize) {
let options = PHImageRequestOptions()
options.version = .original
PHImageManager.default().requestImage(for: asset, targetSize: targetSize, contentMode: contentMode, options: options) { image, _ in
guard let image = image else { return }
switch contentMode {
case .aspectFill:
self.contentMode = .scaleAspectFill
case .aspectFit:
self.contentMode = .scaleAspectFit
}
self.image = image
}
}
}
```
Upvotes: 6 [selected_answer]<issue_comment>username_3: **Swift 5 - Update**
First import Photos
```
import Photos
```
Create a variable to hold all the images
```
var images = [UIImage]()
```
Main function to grab the assets and request image from asset
```
fileprivate func getPhotos() {
let manager = PHImageManager.default()
let requestOptions = PHImageRequestOptions()
requestOptions.isSynchronous = false
requestOptions.deliveryMode = .highQualityFormat
// .highQualityFormat will return better quality photos
let fetchOptions = PHFetchOptions()
fetchOptions.sortDescriptors = [NSSortDescriptor(key: "creationDate", ascending: false)]
let results: PHFetchResult = PHAsset.fetchAssets(with: .image, options: fetchOptions)
if results.count > 0 {
for i in 0..
```
Upvotes: 3 |
2018/03/20 | 1,684 | 6,279 | <issue_start>username_0: I'm having a little trouble finding the index of a hashtable in an array. I create a JSON with this code:
```
$start = {
Clear-Host
$BIB = Read-Host 'Bibliothek'
$BIBName = Read-Host 'Bibliothek Name'
$Standort = Read-Host 'Bibliothek Standort'
$Autor = Read-Host 'Buchautor'
$BuchName = Read-Host 'Buchname'
$jsonfile = "C:\Skripte\bibV2-1000.xml"
if(![System.IO.File]::Exists($jsonfile)){
$Data = @{BIBs = @(
@{$BIB = @{BIBName=$BIBName},
@{Standort = $Standort},
@{Bücher = @(
@{BuchName = $BuchName;
Autor = $Autor})
}}
)}
ConvertTo-Json -Depth 50 -InputObject $Data | Add-Content $jsonfile
.$continue
} else {
$jsonfile = "C:\Skripte\bibV2-1000.json"
$Data = Get-Content $jsonfile | ConvertFrom-Json
$Data.BIBs += New-Object -TypeName psobject -Property @{$BIB =
@{BIBname=$BIBName},
@{Standort=$Standort},
@{Bücher = @(@{
Buchname=$BuchName;
Autor=$Autor})
}
}
ConvertTo-Json -Depth 50 -InputObject $Data | Out-File $jsonfile}
.$continue
}
$continue = {
Write-Host ""
Write-Host "Was wollen Sie machen?"
Write-Host "(1) Eine weitere Bibliothek hinzufügen"
Write-Host "(2) Einer Bibliothek neue Bücher hinzufügen"
Write-Host "(E) Script beenden"
If (($read = Read-Host ) -eq "1") {
&$start} else {
if (($read) -eq "2") {
. C:\Skripte\büc.ps1 } else {
if (($read) -eq "E") {
exit} else {
Write-Host "+++ FALSCHE EINGABE! Bitte wählen Sie (1) oder (2) für die entsprechende Aktion +++"
.$continue
}
}
}
}
&$start
```
The output is as follows:
```
{
"BIBs": [{
"BIB1": [{
"BIBName": "123"
},
{
"Standort": "123"
},
{
"Bücher": [{
"Autor": "123",
"BuchName": "123"
}]
}
]
},
{
"BIB2": [{
"BIBname": "345"
},
{
"Standort": "345"
},
{
"Bücher": [{
"Autor": "345",
"Buchname": "345"
}]
}
]
}
]
}
```
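A note on why `IndexOf` returns -1 here: each element of `BIBs` is an object keyed by the library name, not the string `"BIB1"` itself, so an equality search can never match and the lookup has to be done by key. The shape of that lookup, sketched in Python for brevity (the question itself uses PowerShell):

```python
import json

# Trimmed-down version of the document shown above
doc = json.loads('{"BIBs": [{"BIB1": [{"BIBName": "123"}]}, {"BIB2": [{"BIBname": "345"}]}]}')

# An IndexOf-style equality search fails: no element *equals* the string "BIB1".
# Searching by key succeeds:
index = next(i for i, entry in enumerate(doc["BIBs"]) if "BIB1" in entry)
print(index)  # → 0
```

In PowerShell the analogous idea would be to scan `$Data.BIBs` and test each entry's property names for `BIB1` (e.g. via `PSObject.Properties.Name`); the exact incantation depends on your objects, so treat this as a sketch.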
Now I want to find out the index of "BIB1". I already tried the IndexOf() method, which should return "0", but it gives me "-1" instead, because it can't find the value. How can I get the index of "BIB1"?<issue_comment>username_1: First check in Info.plist that your app is authorized to access photos from the library, then use the code below to access all photos:
```
PHPhotoLibrary.requestAuthorization { (status) in
switch status {
case .authorized:
print("You Are Authrized To Access")
let fetchOptions = PHFetchOptions()
let allPhotos = PHAsset.fetchAssets(with: .image, options: fetchOptions)
print("Found number of: \(allPhotos.count) images")
case .denied, .restricted:
print("Not allowed")
case .notDetermined:
print("Not determined yet")
}
}
```
Upvotes: 1 <issue_comment>username_2: Try this,
first import Photos
```
import Photos
```
then declare an array to store the photos, before viewDidLoad()
```
var allPhotos: PHFetchResult<PHAsset>? = nil
```
Now write the code to fetch photos, in viewDidLoad()
```
/// Load Photos
PHPhotoLibrary.requestAuthorization { (status) in
switch status {
case .authorized:
print("Good to proceed")
let fetchOptions = PHFetchOptions()
self.allPhotos = PHAsset.fetchAssets(with: .image, options: fetchOptions)
case .denied, .restricted:
print("Not allowed")
case .notDetermined:
print("Not determined yet")
}
}
```
Now use this code to display an image from the array
```
/// Display Photo
let asset = allPhotos?.object(at: indexPath.row)
self.imageview.fetchImage(asset: asset!, contentMode: .aspectFit, targetSize: self.imageview.frame.size)
// Or Display image in Collection View cell
func collectionView(_ collectionView: UICollectionView, cellForItemAt indexPath: IndexPath) -> UICollectionViewCell {
let cell = self.collectionView.dequeueReusableCell(withReuseIdentifier: "photoCell", for: indexPath) as! SelectPhotoCell
let asset = allPhotos?.object(at: indexPath.row)
cell.imgPicture.fetchImage(asset: asset!, contentMode: .aspectFit, targetSize: cell.imgPicture.frame.size)
return cell
}
extension UIImageView{
func fetchImage(asset: PHAsset, contentMode: PHImageContentMode, targetSize: CGSize) {
let options = PHImageRequestOptions()
options.version = .original
PHImageManager.default().requestImage(for: asset, targetSize: targetSize, contentMode: contentMode, options: options) { image, _ in
guard let image = image else { return }
switch contentMode {
case .aspectFill:
self.contentMode = .scaleAspectFill
case .aspectFit:
self.contentMode = .scaleAspectFit
}
self.image = image
}
}
}
```
Upvotes: 6 [selected_answer]<issue_comment>username_3: **Swift 5 - Update**
First import Photos
```
import Photos
```
Create a variable to hold all the images
```
var images = [UIImage]()
```
Main function to grab the assets and request image from asset
```
fileprivate func getPhotos() {
let manager = PHImageManager.default()
let requestOptions = PHImageRequestOptions()
requestOptions.isSynchronous = false
requestOptions.deliveryMode = .highQualityFormat
// .highQualityFormat will return better quality photos
let fetchOptions = PHFetchOptions()
fetchOptions.sortDescriptors = [NSSortDescriptor(key: "creationDate", ascending: false)]
let results: PHFetchResult<PHAsset> = PHAsset.fetchAssets(with: .image, options: fetchOptions)
if results.count > 0 {
    for i in 0..<results.count {
        let asset = results.object(at: i)
        // Request a UIImage for each asset and collect it in `images`
        manager.requestImage(for: asset, targetSize: CGSize(width: 700, height: 700),
                             contentMode: .aspectFill, options: requestOptions) { (image, _) in
            if let image = image {
                self.images.append(image)
            }
        }
    }
}
}
```
Upvotes: 3 |
2018/03/20 | 1,117 | 4,526 | <issue_start>username_0: I have a project with multiple framework targets, that also have pods dependencies.
I have:
* No circular dependencies between targets
* Everything, including pods, is in Objective-C, no Swift at all.
* `use_frameworks!` in Podfile, so all pods are frameworks, not libraries.
Here is my structure:
* `TUSystemKit` depends on `TUModels` (which is a framework).
* `TUModels` depends on `Pods_TUModels` (generated by pods).
* `Pods_TUModels` depends on `JSONModel`
* `TUModels` is automatically linked with its own pod framework (which contains `JSONModel`).
* `TUSystemKit` has `TUModels` as target dependency.
* `TUSystemKit` is linked with `TUModels`.
Visually, the dependencies are like this:
`TUSystemKit` ➔ `TUModels` ➔ `Pods_TUModels` ➔ `JSONModel`
When I select `MyModels` as the build target in Xcode, build succeeds. However, when I select `TUSystemKit`, the build fails, saying that module `JSONModel` is not found *while building module* `TUSystemKit` (`TUUser` in screenshot belongs to `TUModels`):
[](https://i.stack.imgur.com/z3gIk.jpg)
What am I doing wrong? Obviously I shouldn't be explicitly linking against all the frameworks in the dependency tree. Why does `TUModels` build perfectly but `TUSystemKit` errs on *a module import inside a linked framework's code*? Do I need to change something with pods?<issue_comment>username_1: After hours of refactoring, I've managed to build my project. I can't tell what exactly was wrong as it took me literally a day to organize all the dependencies and frameworks and it kept failing at a different point, more than a 100 times, but here are some observations to lead to a successful build:
* All the public-facing classes are added as public headers to the target, and not to any other target.
* All the code (.m files) are in Compile Sources section of the target, and not in any other target.
* All the public facing classes' headers are included at umbrella header (a header with the exact same name with the framework)
* The application embeds all the custom frameworks (not the pods).
* All the files inside a framework target only `#import` required files within the same target, or files listed on the umbrella header of any target that the framework depends on.
* Obvious, redundant, but worth noting again: **no classes between frameworks should have circular dependencies** (e.g. ClassA in FrameworkA depends on ClassB in FrameworkB, while some class in FrameworkB depends on some class in FrameworkA). I had some, and created delegates. Do whatever pattern fits your design: IoC/dependency injection, notifications/publisher-subscriber etc. But do it: separate the concerns clearly.
* Try to avoid using the same classes in multiple targets. Instead, have them in one target, and make the other target depend on the containing target, creating a dependency chain.
After refactoring many files and playing with project settings, I managed to build and run everything again. My previous setup had various combinations of the issues mentioned above, messing everything up. After cleaning all the bits and grouping the code into functional, modular frameworks, I could build it.
Upvotes: 2 [selected_answer]<issue_comment>username_2: If you came here due to the **parse error -> module not found**,
on certain occasions you may be forced to add the path manually.
This is...
* go to **your project** at the top
* select your project **target**
* select **build settings**
* search the parameter **Framework Search Paths** under the title **Search Paths**
* add the one where yours is located. Example: (using cocoa pods) **$(SRCROOT)/Pods**
* indicate/set it to be **recursive** (access to the option by double-clicking your previously added path)
The problem should have been resolved by the 3rd-party lib with commands like **install / update / build** or similar, but if it fails and you are stuck, this is an option in order to continue.
In the same vein, if you get an error from pods indicating that
**The sandbox is not in sync with the Podfile** because the builder is unable to find files like **Podfile.lock**, then you may consider going in the same direction, adding some user-defined settings:
* select **build settings**
* press the '+' symbol, "Add User-Defined Setting".
* add this pair:
+ param= **PODS\_PODFILE\_DIR\_PATH** value = **${SRCROOT}/.**
+ param = **PODS\_ROOT** value = **${SRCROOT}/Pods**
Cheers
Upvotes: 0 |
2018/03/20 | 1,630 | 5,154 | <issue_start>username_0: When I use `mvn site`, reports are generated, but I have this exception in terminal:
```
[WARNING] Unable to process class module-info.class in JarAnalyzer File C:\WINDOWS\system32\config\systemprofile\.m2\repository\org\apache\logging\log4j\log4j-api\2.10.0\log4j-api-2.10.0.jar
org.apache.bcel.classfile.ClassFormatException: Invalid byte tag in constant pool: 19
at org.apache.bcel.classfile.Constant.readConstant (Constant.java:161)
at org.apache.bcel.classfile.ConstantPool. (ConstantPool.java:69)
at org.apache.bcel.classfile.ClassParser.readConstantPool (ClassParser.java:235)
at org.apache.bcel.classfile.ClassParser.parse (ClassParser.java:143)
at org.apache.maven.shared.jar.classes.JarClassesAnalysis.analyze (JarClassesAnalysis.java:96)
at org.apache.maven.report.projectinfo.dependencies.Dependencies.getJarDependencyDetails (Dependencies.java:259)
at org.apache.maven.report.projectinfo.dependencies.renderer.DependenciesRenderer.hasSealed (DependenciesRenderer.java:1542)
at org.apache.maven.report.projectinfo.dependencies.renderer.DependenciesRenderer.renderSectionDependencyFileDetails (DependenciesRenderer.java:545)
at org.apache.maven.report.projectinfo.dependencies.renderer.DependenciesRenderer.renderBody (DependenciesRenderer.java:240)
at org.apache.maven.reporting.AbstractMavenReportRenderer.render (AbstractMavenReportRenderer.java:83)
at org.apache.maven.report.projectinfo.DependenciesReport.executeReport (DependenciesReport.java:201)
at org.apache.maven.reporting.AbstractMavenReport.generate (AbstractMavenReport.java:255)
at org.apache.maven.plugins.site.ReportDocumentRenderer.renderDocument (ReportDocumentRenderer.java:219)
at org.apache.maven.doxia.siterenderer.DefaultSiteRenderer.renderModule (DefaultSiteRenderer.java:319)
at org.apache.maven.doxia.siterenderer.DefaultSiteRenderer.render (DefaultSiteRenderer.java:135)
at org.apache.maven.plugins.site.SiteMojo.renderLocale (SiteMojo.java:175)
at org.apache.maven.plugins.site.SiteMojo.execute (SiteMojo.java:138)
at org.apache.maven.plugin.DefaultBuildPluginManager.executeMojo (DefaultBuildPluginManager.java:137)
at org.apache.maven.lifecycle.internal.MojoExecutor.execute (MojoExecutor.java:208)
at org.apache.maven.lifecycle.internal.MojoExecutor.execute (MojoExecutor.java:154)
at org.apache.maven.lifecycle.internal.MojoExecutor.execute (MojoExecutor.java:146)
at org.apache.maven.lifecycle.internal.LifecycleModuleBuilder.buildProject (LifecycleModuleBuilder.java:117)
at org.apache.maven.lifecycle.internal.LifecycleModuleBuilder.buildProject (LifecycleModuleBuilder.java:81)
at org.apache.maven.lifecycle.internal.builder.singlethreaded.SingleThreadedBuilder.build (SingleThreadedBuilder.java:56)
at org.apache.maven.lifecycle.internal.LifecycleStarter.execute (LifecycleStarter.java:128)
at org.apache.maven.DefaultMaven.doExecute (DefaultMaven.java:305)
at org.apache.maven.DefaultMaven.doExecute (DefaultMaven.java:192)
at org.apache.maven.DefaultMaven.execute (DefaultMaven.java:105)
at org.apache.maven.cli.MavenCli.execute (MavenCli.java:956)
at org.apache.maven.cli.MavenCli.doMain (MavenCli.java:290)
at org.apache.maven.cli.MavenCli.main (MavenCli.java:194)
at sun.reflect.NativeMethodAccessorImpl.invoke0 (Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke (NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke (DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke (Method.java:498)
at org.codehaus.plexus.classworlds.launcher.Launcher.launchEnhanced (Launcher.java:289)
at org.codehaus.plexus.classworlds.launcher.Launcher.launch (Launcher.java:229)
at org.codehaus.plexus.classworlds.launcher.Launcher.mainWithExitCode (Launcher.java:415)
at org.codehaus.plexus.classworlds.launcher.Launcher.main (Launcher.java:356)
```
I use Eclipse. This is a simple Maven project without a selected archetype.
What could be the problem?
More info about maven and JDK:
```
Apache Maven 3.5.3 (3383c37e1f9e9b3bc3df5050c29c8aff9f295297; 2018-02-24T22:49:05+03:00)
Maven home: C:\Program Files\Apache Software Foundation\apache-maven-3.5.3\bin\..
Java version: 1.8.0_161, vendor: Oracle Corporation
Java home: C:\Program Files\Java\jre1.8.0_161
Default locale: en_US, platform encoding: Cp1251
OS name: "windows 10", version: "10.0", arch: "amd64", family: "windows"
```<issue_comment>username_1: So far, the only solution I have found is downgrading the log4j2 version from `2.10.0` to `2.9.1`.
Upvotes: 1 <issue_comment>username_2: I suspect this is caused by one of the tools in the tool chain (incorrectly) trying to load classes from the `META-INF/versions/java9` directory in the log4j-api jar.
Can you use a version of BCEL that is “multi-version jar-aware” and doesn’t try to do this?
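For context on this diagnosis: a multi-release jar keeps Java-9-only classes under `META-INF/versions/`, and `module-info.class` is exactly such a file (constant-pool tag 19 is the Java 9 `CONSTANT_Module` entry that pre-Java-9 BCEL cannot parse). A throwaway layout (directory name invented for the demo) makes the shape easy to see:

```shell
# Recreate the relevant part of a multi-release jar's layout
mkdir -p mrjar/META-INF/versions/9
: > mrjar/META-INF/versions/9/module-info.class

# An analyzer that blindly walks every .class entry will also visit this
# Java-9-only file; a real jar can be inspected the same way with `unzip -l`.
find mrjar -type f
# → mrjar/META-INF/versions/9/module-info.class
```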
Upvotes: 3 [selected_answer]<issue_comment>username_3: To solve this, please try adding the following to your pom:
```
<plugin>
  <artifactId>maven-project-info-reports-plugin</artifactId>
  <version>2.9</version>
  <dependencies>
    <dependency>
      <groupId>org.apache.maven.shared</groupId>
      <artifactId>maven-shared-jar</artifactId>
      <version>1.2</version>
      <exclusions>
        <exclusion>
          <groupId>com.google.code.findbugs</groupId>
          <artifactId>bcel-findbugs</artifactId>
        </exclusion>
      </exclusions>
    </dependency>
    <dependency>
      <groupId>org.apache.bcel</groupId>
      <artifactId>bcel</artifactId>
      <version>6.2</version>
    </dependency>
  </dependencies>
</plugin>
```
Solved the issue for my build.
Upvotes: -1 |
2018/03/20 | 995 | 2,687 | <issue_start>username_0: I searched for related questions, but cannot find an appropriate answer for my problem. For example, I have a file of 6 rows and 3 columns.
```
id sample1 sample2 sample3
6 +/+ +/+ +/-
16 -/- +/+ +/+
20 +/- +/+ +/+
21 +/- +/+ +/+
22 +/+ +/+ -/-
25 +/+ +/+ +/+
```
For each column, I need to compare the string of one row with the string in the next one, and report a number with AWK according to the following comparison table (regardless of the order of field1 and field2):
```
field1 field2 value
+/+ +/+ 0
+/+ +/- 0.5
+/- +/- 1
+/+ -/- 2
+/- -/- 2.5
-/- -/- 4
```
Desired output is thus:
```
id sample1 sample2 sample3 result1 result2 result3
6 +/+ +/+ +/- 2 0 0.5
16 -/- +/+ +/+ 2.5 0 0
20 +/- +/+ +/+ 1 0 0
21 +/- +/+ +/+ 0.5 0 2
22 +/+ +/+ -/- 0 0 2
25 +/+ +/+ +/+
```
Any help would be appreciated.<issue_comment>username_1: **`awk`** solution:
```
awk 'NR == FNR{ if (NR > 1) a[$1$2] = $3; next }
FNR == 1{ print $0, "result1\tresult2\tresult3"; next }
id{
print id, s2, s3, s4,
(a[$2 s2] == ""? a[s2 $2] : a[$2 s2]),
(a[$3 s3] == ""? a[s3 $3] : a[$3 s3]),
(a[$4 s4] == ""? a[s4 $4] : a[$4 s4])
}
{ id = $1; s2 = $2; s3 = $3; s4 = $4 }
END{ print $0 }' table.txt OFS='\t' data.txt | column -t
```
The output:
```
id sample1 sample2 sample3 result1 result2 result3
6 +/+ +/+ +/- 2 0 0.5
16 -/- +/+ +/+ 2.5 0 0
20 +/- +/+ +/+ 1 0 0
21 +/- +/+ +/+ 0.5 0 2
22 +/+ +/+ -/- 0 0 2
25 +/+ +/+ +/+
```
Upvotes: 0 <issue_comment>username_2: another similar `awk`
```
$ awk 'NR==FNR {a[$1,$2]=a[$2,$1]=$3; next}
FNR==1 {print $0,"result1","result2","result3"; next}
{print f0, a[f[2],$2], a[f[3],$3], a[f[4],$4];
f0=$0; split($0,f)}
END {print}' score file | column -t
id sample1 sample2 sample3 result1 result2 result3
6 +/+ +/+ +/- 2 0 0.5
16 -/- +/+ +/+ 2.5 0 0
20 +/- +/+ +/+ 1 0 0
21 +/- +/+ +/+ 0.5 0 2
22 +/+ +/+ -/- 0 0 2
25 +/+ +/+ +/+
```
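To see the lookup-table idea from these answers in isolation, here is a runnable, trimmed-down version with a single sample column (the file names `score.txt` and `data.txt` are invented for the demo):

```shell
cat > score.txt <<'EOF'
+/+ +/+ 0
+/+ +/- 0.5
+/- +/- 1
+/+ -/- 2
+/- -/- 2.5
-/- -/- 4
EOF
cat > data.txt <<'EOF'
id sample1
6 +/+
16 -/-
20 +/-
EOF
# First pass stores each score under both key orders; second pass compares
# every row with the row that follows it.
awk 'NR==FNR {a[$1,$2]=a[$2,$1]=$3; next}
     FNR==1  {print $0, "result1"; next}
     f0      {print f0, a[f[2],$2]}
             {f0=$0; split($0,f)}
     END     {print}' score.txt data.txt
# prints:
#   id sample1 result1
#   6 +/+ 2
#   16 -/- 2.5
#   20 +/-
```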
Upvotes: 1 |
2018/03/20 | 657 | 2,293 | <issue_start>username_0: I use InnovaEditor to create an edit block.
I am trying to find a way to set a dynamic height for the edit block,
i.e. the block height should correspond to the block content.
**HTML:**
```
12345
```
What I did:
1) set keyup event in iframe body
2) wrap the content to get its real height
3) set calculated height to the iframe
**Javascript:**
```
var $iframe = $("iframe#idContenteditor_field_1");
var $iframeBody = $iframe.contents().find("body");
$iframeBody.keyup(function(e) {
if ($(this).find('.content').length === 0) {
// add wrap
var bodyContent = $(this).html();
$(this).html('' + bodyContent + '');
}
var $contentBlock = $(this).find('.content');
var bodyHeight = $contentBlock.outerHeight();
$('#idContenteditor_field_1').height(bodyHeight); // set real height
});
```
It works fine.
The issue:
I have 10 edit blocks on the page and they are the same except for the id.
But I have problems when I try to apply this code to all iframes.
```
// return all iframes
var $iframes = $('iframe[id^="idContenteditor_field_"]');
// return only single body of first iframe.
var $iframesBody = $iframes .contents().find("body");
```
So I can't set the keyup event for all iframes.
Could you help me?
Maybe there is an easier way to set a dynamic height?<issue_comment>username_1: I haven't tested this code, but something like below should work.
You just need to iterate through your objects and set the event listener for each one in turn.
```
var $iframes = $('iframe[id^="idContenteditor_field_"]');
$iframes.each(function(index, item) {
var $iframeBody = $(item).contents().find("body");
$iframeBody.keyup(function(e) {
if ($(this).find('.content').length === 0) {
// add wrap
var bodyContent = $(this).html();
$(this).html('' + bodyContent + '');
}
var $contentBlock = $(this).find('.content');
var bodyHeight = $contentBlock.outerHeight();
$(item).height(bodyHeight); // set real height
});
});
```
Upvotes: 0 <issue_comment>username_2: ```
var $iframesBody = $iframes .contents().find("body");
```
^^^ it returns only the body of the first iframe, because the rest of the iframes have not yet fully loaded.
So I just execute this script after all the iframes have loaded,
and it works.
Upvotes: 1 |
2018/03/20 | 425 | 1,539 | <issue_start>username_0: I am trying to attach multiple models with one controller using implicit model binding but I am getting the following error if I try to attach more than one model with methods.
```
index() must be an instance of App\\Http\\Models\\Modelname, string given
```
Here is my code:
```
public function index(Model1 $model1,Model2 $model2,Model3 $model3)
{
print_r($application_endpoint);
}
```
Route:
```
Route::resource("model1.model2.model3","MyController",["except"=>["create","edit"]]);
```
2018/03/20 | 503 | 2,007 | <issue_start>username_0: I am new to configuring and setting up Python from scratch.
I have installed Anaconda and I plan to use Spyder for Python development. I also have an older version of Python installed elsewhere on the same machine.
I needed to get my hands on a package to use in Spyder, which I needed to download and install.
I downloaded and installed pip directly from the website, and then I used this on the command line of the older Python install to obtain the package I required.
However, I don't understand how to go about making this available to Spyder. I believe it works on a folder structure within its own directory, and I am unsure how to change this to get the package I have already downloaded.
I thought I might be able to copy it across, or point it at the directory where the package was downloaded to but I cannot work out how to do this.
I also tried using pip from within Spyder, but it cannot be found.
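One underlying point worth checking: every Python installation keeps its own `site-packages` directory, so a package installed with the older interpreter's pip is invisible to the Anaconda installation that Spyder runs on. A quick way to see which interpreter and package directory a given console (for example Spyder's IPython console) actually uses:

```python
import sys
import site

print(sys.executable)          # the interpreter that is currently running
print(site.getsitepackages())  # where this interpreter looks for packages
```

If the path printed inside Spyder points at Anaconda, installing with that same interpreter (`python -m pip install <package>` run from the Anaconda prompt) is the usual way to make the package visible; treat this as general guidance, since exact paths vary per machine.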
Can you please let me know what I need to check?
2018/03/20 | 609 | 1,636 | <issue_start>username_0: Is there a way to compare two buffers directly?
For example having two identical files `file1` and `file1-copy`, i would like to do:
```
f1 = open(file1)
f2 = open(file1-copy)
if f1 == f2
println("Equal content")
end
```
I know i can make strings of that and compare those:
```
if readstring(f1) == readstring(f2)
println("Equal content")
end
```<issue_comment>username_1: I guess not(in the sense that comparing two buffers(`f1`&`f2`) **directly**), but if you could pre-calculate their hashes, it would be convenient to directly compare them later on:
```
julia> using SHA
shell> cat file1
Is there a way to compare two buffers directly? For example having two identical files file1 and file1-copy, i would like to do:
f1 = open(file1)
f2 = open(file1-copy)
if f1 == f2
println("Equal content")
end
I know i can make strings of that and compare those:
if readstring(f1) == readstring(f2)
println("Equal content")
end
julia> file1 = open("file1") do f
sha256(f)
end |> bytes2hex
"eb179202793cfbfd1a1f19e441e813a8e23012a5bdd81e453daa266fcb74144a"
julia> file1copy = open("file1-copy") do f
sha256(f)
end |> bytes2hex
"eb179202793cfbfd1a1f19e441e813a8e23012a5bdd81e453daa266fcb74144a"
julia> file1 == file1copy
true
```
Upvotes: 1 <issue_comment>username_2: Easiest way is probably just to [`mmap`](https://docs.julialang.org/en/stable/stdlib/io-network/#Base.Mmap.mmap-Tuple%7BAny,Type,Any,Any%7D) them:
```
julia> f1 = open("file")
f2 = open("file-copy");
julia> Mmap.mmap(f1) == Mmap.mmap(f2)
true
```
Upvotes: 4 [selected_answer] |
2018/03/20 | 672 | 2,424 | <issue_start>username_0: Visual Studio Code has a feature where it will change the color of a filename in the file tree when there is a linting error. Unfortunately the error color looks a little harsh for me:
[](https://i.stack.imgur.com/MhXEB.png)
Is there a way I can change the error color here? I know I can disable it completely with `problems.decorations.enabled`, but I would prefer to keep them enabled, just not so hard on the eyes. I've gone through the theme customizations available [here](https://code.visualstudio.com/docs/getstarted/theme-color-reference), but I haven't found anything related to the `problems.decorations`.<issue_comment>username_1: There are two other settings you can look at:
```
"explorer.decorations.colors": false,
"explorer.decorations.badges": false
```
and some colorCustomizations (see [git decorator colors](https://code.visualstudio.com/docs/getstarted/theme-color-reference#_git-colors))
>
> Git Colors
>
>
> `gitDecoration.modifiedResourceForeground`: Color for
> modified git resources. Used file labels and the SCM viewlet.
>
>
> `gitDecoration.deletedResourceForeground`: Color for deleted git
> resources. Used file labels and the SCM viewlet.
>
>
> `gitDecoration.untrackedResourceForeground`: Color for untracked git
> resources. Used file labels and the SCM viewlet.
>
>
> `gitDecoration.ignoredResourceForeground`: Color for ignored git
> resources. Used file labels and the SCM viewlet.
>
>
> `gitDecoration.conflictingResourceForeground`: Color for conflicting git
> resources. Used file labels and the SCM viewlet.
>
>
>
However, there doesn't seem to be a way to change the problem decoration colors. Modified, untracked and ignored all work fine though. You can change the file name color back to uncolored with the
```
"explorer.decorations.colors": false,
```
setting.
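Putting the pieces together: the git color keys above go under `workbench.colorCustomizations`, while the decoration switches are top-level settings. A sketch (the hex values are arbitrary examples, not recommendations):

```json
{
  "explorer.decorations.badges": false,
  "workbench.colorCustomizations": {
    "gitDecoration.modifiedResourceForeground": "#c8a415",
    "gitDecoration.untrackedResourceForeground": "#52a352"
  }
}
```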
Upvotes: 3 [selected_answer]<issue_comment>username_2: Try with:
```
"list.errorForeground": "#00AA00"
```
This will change the color to green ("#00AA00").
In my case, I used it to add color to the High Contrast Theme because, for some reason, it was missing the error color in the file explorer.
Here's what I changed for that specific theme:
```
"workbench.colorCustomizations": {
"[Default High Contrast]": {
"list.errorForeground": "#00AA00"
}
}
```
Upvotes: 0 |
2018/03/20 | 661 | 2,322 | <issue_start>username_0: I'm using CONCAT_WS within a Hibernate query
to create a giant string and search across all fields using `like '%value%'`. It works fine, but for some records some fields are null, e.g. if `actId` is null, the whole CONCAT_WS returns null. I don't know why, since CONCAT_WS is supposed to ignore null values. Maybe it's because Hibernate is trying to call getActNumber on null? Anyway, I'm trying hard to resolve this problem.
```
where CONCAT_WS("_", actItemId.actId.actNumber, DATE_FORMAT(recordDate, '%d.%m.%Y'), actItemId.techniqueId.name, fzkActNumber, ....etc) like '%value%'
```
thanks!
2018/03/20 | 439 | 1,539 | <issue_start>username_0: This code is already working, but I want "plain" & "Letter" to be dynamic, i.e. pointing to specific range cells. I've tried it, but it wasn't successful.
Basically: `sReplace = Replace(sReplace, Range("A1"), Range("B1:B10"))`
```
Private Sub CommandButton1_Click()
Dim sFind As String
Dim sReplace As String
Dim iFileNum As Integer
Dim FilePath As String
Dim newFileName As String
FilePath = "C:\Users\new\Plain.prn"
iFileNum = FreeFile
Open FilePath For Input As iFileNum
Do Until EOF(iFileNum)
Line Input #iFileNum, sFind
sReplace = sReplace & sFind & vbCrLf
Loop
Close iFileNum
sReplace = Replace(sReplace, "plain", "Letter")
'SaveAs txtfile
newFileName = Replace(FilePath, ".", "_edit.")
iFileNum = FreeFile
Open newFileName For Output As iFileNum
Print #iFileNum, sReplace
Close iFileNum
End Sub
```<issue_comment>username_1: This is how to make the values point to a given range in a given worksheet:
```
With Worksheets("NameOfTheWorksheet")
sReplace = Replace(sReplace, .Range("A1"), .Range("A2"))
End With
```
Upvotes: 2 [selected_answer]<issue_comment>username_2: This might do what you are asking:
Replaces the values in Range `B1:B10` where it matches `sReplace`, with the value from Range `A1`
```
Range("B1:B10").Replace What:=sReplace, Replacement:=Range("A1"), LookAt:=xlPart, _
SearchOrder:=xlByRows, MatchCase:=False, SearchFormat:=False, _
ReplaceFormat:=False
```
Upvotes: 0 |
2018/03/20 | 1,912 | 7,353 | <issue_start>username_0: I have recently upgraded my Visual Studio environment, and now I seem to have a problem with my .Net framework, as you can see in following error message:
>
> 9> C:\Program Files (x86)\Microsoft Visual Studio\2017\Professional\MSBuild\15.0\Bin\Microsoft.Common.CurrentVersion.targets(1126,5): error MSB3644: The reference assemblies for framework ".NETFramework,Version=v4.5" were not found. To resolve this, install the SDK or Targeting Pack for this framework version or retarget your application to a version of the framework for which you have the SDK or Targeting Pack installed. Note that assemblies will be resolved from the Global Assembly Cache (GAC) and will be used in place of reference assemblies. Therefore your assembly may not be correctly targeted for the framework you intend.
>
>
>
(the number "9" is just a counter of the nineth project within my Visual Studio solution)
Let me explain you what's so confusing about this situation:
When I load my solution, I get following error message:
>
> The C# project ... is targeting ".NETFramework,Version=4.5", which is not installed on this machine. To proceed, select an option below:
>
>
>
* Change the target to .NET Framework 4.6.1. You can change back to ".NETFramework,Version=4.5" at a later time.
* Download the targeting pack for ".NETFramework,Version=4.5". The project will not change.
* Do not load the project.
I choose for the second option, and I get following website: [.NET SDKs for Visual Studio](https://www.microsoft.com/net/download/visual-studio-sdks)
This gives the following: list of .NET Frameworks:
```
4.7.1 Developer Pack, included in Visual Studio 2017 Runtime Release Details
4.7 Developer Pack, included in Visual Studio 2017 Runtime Release Details
4.6.2 Developer Pack, included in Visual Studio 2017 Runtime Release Details
4.6.1 Developer Pack, included in Visual Studio 2017 Runtime Release Details
4.6 Developer Pack, included in Visual Studio 2017 Runtime Release Details
4.5.2 Developer Pack, included in Visual Studio 2017 Runtime Release Details
3.5 SP1 Developer Pack, included in Visual Studio 2017 Runtime Release Details
```
=> Where is 4.5?
I have already downloaded the 4.5.2 version several times, the Developer Pack as well as the Runtime, but nothing changes.
So, after a while, I just close the dialog box about the missing target. I'm expecting to get a more readable error message, once I get to the compilation of the project.
Next to that, I've also investigated the installation of my PC (PC settings, Add and Remove Programs), this gives following list:
```
Microsoft .NET Core SDK - 2.1.4 (x64)
Microsoft .NET Framework 4.5 SDK
Microsoft .NET Framework 4.5.1 Multi-Targeting Pack (ENU)
Microsoft .NET Framework 4.5.1 SDK
Microsoft .NET Framework 4.5.2 Multi-Targeting Pack (ENU)
Microsoft .NET Framework 4.6 Targeting Pack
Microsoft .NET Framework 4.6.2 SDK
Microsoft .NET Framework 4.6.2 Targeting Pack
Microsoft .NET Framework 4.7 SDK
Microsoft .NET Framework 4.7 Targeting Pack
```
=> Ok, there seems not to be a targeting pack for 4.5 (only the SDK seems to be present), but what to do about it?
The error message speaks about retargeting my application, but that's out of the question: I'm the only person having this problem, and modifying the configuration of a central application is not a solution.
The error message also speaks about the so-called "Global Assembly Cache" which will be used in place of reference assemblies. So, let's have a look at those things:
I seem to have two reference assemblies on my PC:
>
> C:\Program Files\Reference Assemblies\Microsoft\Framework (seems only to contain "v3.0" and "v3.5" directories)
>
> C:\Program Files (x86)\Reference Assemblies\Microsoft\Framework.
>
>
>
The latter seems to contain a ".NETFramework" directory, which, at his turn, contains a "v4.5" subdirectory, containing quite some XML files.
The error message, however, states that those are not used; the "Global Assembly Cache (GAC)" is used instead. If I try finding this, I find references to `gacutil`, `sn -k ...` and other programs which are not found on my PC for the specified .NET framework.
If I look, using Google, for "Microsoft .Net Framework 4.5 Targeting Pack" (exact phrase), I get zero results.
So, in other words, I get a message about a missing piece of software on my PC. I get the opportunity to install it, but the installation does nothing. Then I get a compilation error about this, which talks about a very general technology which, according to Google, the most used search engine on the internet, seems not to exist. It mentions a technology (GAC) which might or might not be used, which is found either in `C:\Program Files` or `C:\Program Files (x86)`, and whose corresponding tool (`gacutil`) seems to exist on my PC only for other versions.
Can anybody give me a push in the right direction here?
Thanks in advance<issue_comment>username_1: Meanwhile I've solved the issue, and it comes down to a general truth in working with computers:
>
> When you have a problem, don't listen to what your computer tells you, but ask yourself what you were doing when the problem arose.
>
>
>
In my case, it means the following:
The problem popped up when I had upgraded my Visual Studio version (using the Visual Studio Installer), so the solution consisted of starting that Visual Studio Installer again and checking some more features to be installed (I've now checked so many of them that I can't figure out which one exactly solved my problem).
The error message, preceded by the dialog box, was just information about the PC internals, which only led to incomprehensible sidetracks.
Upvotes: 5 [selected_answer]<issue_comment>username_2: The boxes that I checked and seemed to work were ASP.NET framework, Office/Sharepoint development, Linux development with C++, and Visual studio extension development, all under workloads. Under the individual packages I checked every single .Net framework SDK and targeting pack.
That worked eventually.
Upvotes: 1 <issue_comment>username_3: I had the same issue in Azure DevOps: my build was failing with the same error message. What fixed mine was installing the .NET Framework 4.8 Developer Pack. You can download it here:
<https://dotnet.microsoft.com/en-us/download/visual-studio-sdks?cid=getdotnetsdk>
Upvotes: 0 <issue_comment>username_4: I was getting the same error on my net6 app. The build was happening on the solution and I had a reactApp included in my solution and it was doing a build on that.
I removed any Build & Deploy on that by:
Right click on my solution, -> Properties -> Configuration Properties ->
* set Configuration to All Configurations
* set Platform to All Platforms
then uncheck reactApp for Build and Deploy.
At the end on my CI/CD pipeline I saw:
```
The project "reactApp" is not selected for building in solution configuration "Debug|Any CPU".
```
Upvotes: 1 <issue_comment>username_5: I downloaded and installed the developer pack at least 6 times. I opened the installer Modify/Individual Components/Checked all 4.8 boxes. Nothing seemed to work until I realized there was one little box ***on the right side*** not checked!
[Visual Studio 2022 Installer Screenshot](https://i.stack.imgur.com/zErUL.png)
Upvotes: 0 |
2018/03/20 | 674 | 2,757 | <issue_start>username_0: I am new to MySQL and I decided to work with the GUI platform (Workbench).
I would like to know if it is possible to force a certain quantity of numbers in a column.
I mean, if i want an integer with 4 numbers and they introduce an integer with 3 numbers, i want to block it, as well as if they introduce 5 numbers.
Thank you and sorry for the English.
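For reference, the usual way to restrict an integer column to exactly four digits is a `CHECK` constraint on the value range (a positive four-digit integer is exactly one between 1000 and 9999). Note that MySQL only enforces `CHECK` constraints from version 8.0.16 onward; older versions parse but silently ignore them, in which case a `BEFORE INSERT` trigger is the usual workaround. The sketch below demonstrates the idea with Python's built-in SQLite driver, and the table/column names are made up for illustration:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# Hypothetical table: 'code' must be a positive four-digit integer.
conn.execute(
    "CREATE TABLE demo (code INTEGER CHECK (code BETWEEN 1000 AND 9999))"
)

conn.execute("INSERT INTO demo VALUES (1234)")  # four digits: accepted

try:
    conn.execute("INSERT INTO demo VALUES (123)")  # three digits: rejected
    rejected = False
except sqlite3.IntegrityError:
    rejected = True

print(rejected)  # → True: the constraint blocked the three-digit value
```

The same `CHECK (col BETWEEN 1000 AND 9999)` clause works verbatim in a MySQL 8.0.16+ `CREATE TABLE`.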
<issue_start>username_0: I'm trying to flash error messages from my controller back to my view. I tried this with:
```
\Route::group(['middleware' => 'web'], function ()
flash('Error message');
return Redirect::back();
});
```
And tried showing it my view with:
```
@include('flash::message')
```
However this just seems not to show the message.
I've been looking over the web for some good 2 to 3 hours now and I am at a loss right now.
If this is a duplication of another question somewhere on stackoverflow, then sorry!<issue_comment>username_1: In controller
```
use Session;
\Session::flash('msg', 'Error' );
```
in blade
```
{!!Session::get('msg')!!}
```
Upvotes: 1 <issue_comment>username_2: To use session flash in Laravel:
web.php
```
Route::get('/',
function () {
Session::flash('error', 'test');
return view('welcome');
});
```
In your `.blade` view file you can access the message using
```
@if (session('error'))
{{ session('error') }}
@endif
```
You could replace `'error'` with any type of message ('success', 'warning', 'yourOwnMessageIdentifier etc) you'd want to flash.
Upvotes: 3 [selected_answer]<issue_comment>username_3: Simply use, in the controller:
```
\Session::flash('msg', 'Changes Saved.');
```
and in the Blade view:
```
@if(Session::has('msg'))
×
**Heads Up!** {!!Session::get('msg')!!}
@endif
```
Upvotes: 1 |
<issue_start>username_0: What I would like to do is for the user to click a button on my website (actually a webapp that runs locally on the user's machine, so each instance is single-user) which initiates a Google image search with various params set, and display the results in a section of my HTML page.
The idea is that they can then select an image and drag and drop it onto a dropzone on my webpage. This part works, but currently the search opens in a new tab, so it's a bit of a pain dragging from that tab to my tab.
Embedding as an iframe does not work, Google does not allow that.
So
a> is there a Google sanctioned API I can use to perform a Google search for images and display within my page.
b> Could I send a URL request from my server (i.e. like curl/wget), then screen-scrape the results and present them on my webpage?<issue_comment>username_1: Google has retired the image search API, it is no longer available (see this SO thread with alternatives).
If you open up the URL you provided it will say this:
```
{"responseData": null, "responseDetails": "This API is no longer available.", "responseStatus": 403}
```
Upvotes: 1 <issue_comment>username_2: Google's Custom Search Engine (CSE) API is limited to 100 free requests per day.
Creating `cx` and modifying it to search for images
===================================================
1. Create custom search engine at <https://cse.google.com/cse/create/new> based on your search criteria.
2. Choose sites to search (leave this blank if you want to search the entire web, otherwise you can enter a site to search in one particular site)
3. Enter a name and a language for your search engine.
4. Click "create." You can now find `cx` in your browser URL.
5. Under "Modify your search engine," click the "Control Panel" button. In the "edit" section you will find an "Image Search" label with an ON/OFF button, change it to **ON**. Click "update" to save your changes.
Conducting a search with the API
================================
The API endpoint url is [`https://www.googleapis.com/customsearch/v1`](https://www.googleapis.com/customsearch/v1)
The following JSON parameters are used for this API:
* `q`: specifies search text
* `num`: specifies number of results. Requires an integer value between 1 and 10 (inclusive)
* `start`: the "offset" for the results, which result the search should start at. Requires an integer value between 1 and 101.
* `imgSize`: the size of the image. I used `"medium"`
* `searchType`: must be set to `"image"`
* `filetype`: specifies the file type for the image. I used `"jpg"`, but you can leave this out if the file extension doesn't matter to you.
* `key`: an API key, obtained from <https://console.developers.google.com/>
* `cx`: the custom search engine ID from the previous section
Simply make a `GET` request against the API endpoint (listed above), passing the above parameters in the query string.
**Note:** If you set a list of referrers in the search engine settings, visiting the URL via your browser will likely not work. You will need to make an AJAX call (or the equivalent from another language) from a server specified in this list. It will work for only the referrers which were specified in the configuration settings.
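To make the request shape concrete, here is a small Python sketch that assembles the query string from the parameters listed above; `YOUR_API_KEY` and `YOUR_CX_ID` are placeholders, and the actual fetch is then an ordinary GET (from your server, or via AJAX from an allowed referrer):

```python
from urllib.parse import urlencode

ENDPOINT = "https://www.googleapis.com/customsearch/v1"

def image_search_url(query, key, cx, num=10, start=1):
    """Build the Custom Search JSON API URL for an image search."""
    params = {
        "q": query,
        "num": num,            # 1..10 results per request
        "start": start,        # 1-based offset into the result set
        "imgSize": "medium",
        "searchType": "image",
        "key": key,
        "cx": cx,
    }
    return ENDPOINT + "?" + urlencode(params)

url = image_search_url("sunset", "YOUR_API_KEY", "YOUR_CX_ID")
print(url)
```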
Reference:
<https://developers.google.com/custom-search/json-api/v1/reference/cse/list>
Answer from: <https://stackoverflow.com/a/34062436/4366303>
Upvotes: 4 [selected_answer] |
<issue_start>username_0: I'm reading rows from Excel into a model, and Excel returns input such as "10%" or "20%" as the **strings** "0.1", "0.2". What is the best way to convert that string into a **string** such as "20%" or "10%", avoiding unnecessary conversions so as not to lose performance? Just to clarify: I need to convert a string to a string, but the program is obviously not letting me. I could convert that string to a double and then convert that double to a string with `ToString("#0.##%")`, but I would like to know if there is a faster way.
This is the code that reads row and formats excel columns into model props types
```
static T ReadRow<T>(ExcelWorksheet worksheet, int rowNum)
{
var o = (T)Activator.CreateInstance(typeof(T));
int colNum = 1;
foreach (var p in typeof(T).GetProperties())
{
if (worksheet.Cells[rowNum, colNum].Value != null)
{
p.SetValue(o, Convert.ChangeType(worksheet.Cells[rowNum, colNum].Value, p.PropertyType));
}
colNum++;
}
return o;
}
```<issue_comment>username_1: The reason you are getting 10% as .01 is that the number is stored as .01, and when reading Excel you get its actual stored value. You see it as 10% because it's formatted as % in Excel.
To read 10% as the string '10%', you will have to format the value when reading.
Upvotes: 0 <issue_comment>username_2: To expand on what prachi has put with a solution to converting the string to your desired result.
```
string val = "0.2"; // for reference
string.Format("{0}%", Convert.ToDouble(val) * 100);
```
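For comparison only (the question is about C#): the parse-once-then-format step is the same in any language. Python, for instance, has a dedicated percent presentation type that does the ×100 for you:

```python
val = "0.2"  # the string as read from Excel
formatted = format(float(val), ".0%")
print(formatted)  # → 20%
```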
Upvotes: 2 [selected_answer] |
<issue_start>username_0: I have code like this:
```
let myString = "Swift Attributed String"
let myAttribute = [ NSAttributedStringKey.foregroundColor: UIColor.blue , NSAttributedStringKey.writingDirection:(NSWritingDirection.rightToLeft.rawValue|NSWritingDirectionFormatType.override.rawValue)] as [NSAttributedStringKey : Any]
let myAttrString = NSAttributedString(string: myString, attributes: myAttribute)
// set attributed text on a UILabel
self.labelUsername.attributedText = myAttrString
```
The app is crashing on above line. If I remove `NSAttributedStringKey.writingDirection` from the attribute dictionary, the crash goes away.
Here's the crash log:
```
2018-03-20 14:38:59.077 Haraj Swift[90519:25834245] -[__NSCFNumber countByEnumeratingWithState:objects:count:]: unrecognized selector sent to instance 0xb000000000000033
2018-03-20 14:38:59.092 Haraj Swift[90519:25834245] *** Terminating app due to uncaught exception 'NSInvalidArgumentException', reason: '-[__NSCFNumber countByEnumeratingWithState:objects:count:]: unrecognized selector sent to instance 0xb000000000000033'
*** First throw call stack:
(
0 CoreFoundation 0x000000010ef6434b __exceptionPreprocess + 171
1 libobjc.A.dylib 0x000000010e9c521e objc_exception_throw + 48
2 CoreFoundation 0x000000010efd3f34 -[NSObject(NSObject) doesNotRecognizeSelector:] + 132
3 CoreFoundation 0x000000010eee9c15 ___forwarding___ + 1013
4 CoreFoundation 0x000000010eee9798 _CF_forwarding_prep_0 + 120
5 UIFoundation 0x00000001176e13b3 -[NSATSGlyphStorage setGlyphRange:characterRange:] + 2709
6 UIFoundation 0x00000001176dc5ec -[NSATSTypesetter _ctTypesetter] + 287
7 UIFoundation 0x00000001176e6ef7 -[NSATSLineFragment layoutForStartingGlyphAtIndex:characterIndex:minPosition:maxPosition:lineFragmentRect:] + 95
8 UIFoundation 0x00000001176dd540 -[NSATSTypesetter _layoutLineFragmentStartingWithGlyphAtIndex:characterIndex:atPoint:renderingContext:] + 3240
9 UIFoundation 0x00000001176eeb84 -[NSSingleLineTypesetter createRenderingContextForCharacterRange:typesetterBehavior:usesScreenFonts:hasStrongRight:syncDirection:mirrorsTextAlignment:maximumWidth:] + 408
10 UIFoundation 0x0000000117724167 __NSStringDrawingEngine + 27441
11 UIFoundation 0x00000001177263d4 -[NSAttributedString(NSExtendedStringDrawing) boundingRectWithSize:options:context:] + 797
12 UIKit 0x000000010fd1f744 -[UILabel _textRectForBounds:limitedToNumberOfLines:includingShadow:] + 1199
13 UIKit 0x000000010fd1f05b -[UILabel textRectForBounds:limitedToNumberOfLines:] + 68
14 UIKit 0x000000010fd24b7f -[UILabel _intrinsicSizeWithinSize:] + 168
15 UIKit 0x000000010fd24c6c -[UILabel intrinsicContentSize] + 92
16 UIKit 0x00000001104679ed -[UIView(UIConstraintBasedLayout) _generateContentSizeConstraints] + 35
17 UIKit 0x00000001104676ea -[UIView(UIConstraintBasedLayout) _updateContentSizeConstraints] + 494
18 UIKit 0x00000001104714d6 -[UIView(AdditionalLayoutSupport) _updateSystemConstraints] + 66
19 UIKit 0x000000011046ff3e -[UIView(AdditionalLayoutSupport) _sendUpdateConstraintsIfNecessaryForSecondPass:] + 161
20 UIKit 0x000000011047039d -[UIView(AdditionalLayoutSupport) _updateConstraintsIfNeededCollectingViews:forSecondPass:] + 860
21 UIKit 0x00000001104702fa -[UIView(AdditionalLayoutSupport) _updateConstraintsIfNeededCollectingViews:forSecondPass:] + 697
22 UIKit 0x00000001104702fa -[UIView(AdditionalLayoutSupport) _updateConstraintsIfNeededCollectingViews:forSecondPass:] + 697
23 Foundation 0x000000010e4e1bd0 -[NSISEngine withBehaviors:performModifications:] + 155
24 UIKit 0x00000001104705f2 -[UIView(AdditionalLayoutSupport) _recursiveUpdateConstraintsIfNeededCollectingViews:forSecondPass:] + 118
25 UIKit 0x00000001104702fa -[UIView(AdditionalLayoutSupport) _updateConstraintsIfNeededCollectingViews:forSecondPass:] + 697
26 UIKit 0x00000001104702fa -[UIView(AdditionalLayoutSupport) _updateConstraintsIfNeededCollectingViews:forSecondPass:] + 697
27 Foundation 0x000000010e4e1bd0 -[NSISEngine withBehaviors:performModifications:] + 155
28 UIKit 0x0000000110470c71 __97-[UIView(AdditionalLayoutSupport) _updateConstraintsIfNeededPostponeVariableChangeNotifications:]_block_invoke + 91
29 UIKit 0x000000011046fa9b -[UIView(AdditionalLayoutSupport) _withUnsatisfiableConstraintsLoggingSuspendedIfEngineDelegateExists:] + 117
30 UIKit 0x0000000110470757 -[UIView(AdditionalLayoutSupport) _updateConstraintsIfNeededPostponeVariableChangeNotifications:] + 181
31 UIKit 0x000000011047184d -[UIView(AdditionalLayoutSupport) _updateConstraintsAtEngineLevelIfNeededPostponeVariableChangeNotifications:] + 356
32 UIKit 0x000000010fb4b922 -[UIView(Hierarchy) _updateConstraintsAsNecessaryAndApplyLayoutFromEngine] + 159
33 UIKit 0x000000010fb5cf50 -[UIView(CALayerDelegate) layoutSublayersOfLayer:] + 1237
34 QuartzCore 0x0000000117054cc4 -[CALayer layoutSublayers] + 146
35 QuartzCore 0x0000000117048788 _ZN2CA5Layer16layout_if_neededEPNS_11TransactionE + 366
36 QuartzCore 0x0000000117048606 _ZN2CA5Layer28layout_and_display_if_neededEPNS_11TransactionE + 24
37 QuartzCore 0x0000000116fd6680 _ZN2CA7Context18commit_transactionEPNS_11TransactionE + 280
38 QuartzCore 0x0000000117003767 _ZN2CA11Transaction6commitEv + 475
39 QuartzCore 0x00000001170040d7 _ZN2CA11Transaction17observer_callbackEP19__CFRunLoopObservermPv + 113
40 CoreFoundation 0x000000010ef08e17 __CFRUNLOOP_IS_CALLING_OUT_TO_AN_OBSERVER_CALLBACK_FUNCTION__ + 23
41 CoreFoundation 0x000000010ef08d87 __CFRunLoopDoObservers + 391
42 CoreFoundation 0x000000010eeedb9e __CFRunLoopRun + 1198
43 CoreFoundation 0x000000010eeed494 CFRunLoopRunSpecific + 420
44 GraphicsServices 0x0000000116f37a6f GSEventRunModal + 161
45 UIKit 0x000000010fa98964 UIApplicationMain + 159
46 <NAME> 0x000000010de9e8a7 main + 55
47 libdyld.dylib 0x0000000112fe168d start + 1
)
libc++abi.dylib: terminating with uncaught exception of type NSException
```<issue_comment>username_1: Try using an `array` of `NSWritingDirection`
```
let myString = "Swift Attributed String"
let myAttrString = NSAttributedString(string: myString, attributes: [.foregroundColor : UIColor.blue, .writingDirection: [NSWritingDirection.rightToLeft.rawValue]])
self.label.attributedText = myAttrString
```
Let me know if you still face any issues.
Upvotes: 3 [selected_answer]<issue_comment>username_2: According to the documentation, you have to use an array of NSNumber values to indicate the writing direction. The constants are:
* 0 → NSWritingDirectionLeftToRight | NSTextWritingDirectionEmbedding
* 1 → NSWritingDirectionRightToLeft | NSTextWritingDirectionEmbedding
* 2 → NSWritingDirectionLeftToRight | NSTextWritingDirectionOverride
* 3 → NSWritingDirectionRightToLeft | NSTextWritingDirectionOverride

So, you can do it this way:
```
let myString = "Swift Attributed String"
let attr:[NSAttributedStringKey: Any] = [.foregroundColor: UIColor.blue, .writingDirection: [NSNumber(integerLiteral: 3)]]
let myAttrString = NSAttributedString(string: myString, attributes:attr)
self.label.attributedText = myAttrString
```
Upvotes: 1 |
<issue_start>username_0: I am using Ajax to work with the openweather API. Everything works fine except for one thing.
I'm trying to make a button toggle between celsius and fahrenheit.
On the first click, the function converts the value to fahrenheit, but for some reason when you click the button again it doesn't convert it back to celsius.
I've found different solutions which work, but I would really like to avoid copy pasting and to figure out where's the error in my function's logic. I would appreciate your help on this.
Js Fiddle: <https://jsfiddle.net/m8w7gyxe/1/>
LE: Turns out that there was more than one error :). Thank you, everybody.
J.M.'s solution works great.
<https://jsfiddle.net/m8w7gyxe/31/>
```
℃ //celsius icon
let degree = '℃'; //celsius icon
let city = $('#searchInput');
const celsiusToFahrenheit = (celsius) => {
return (celsius * 9 / 5 + 32);
}
$('#searchBtn').on('click', function() {
$.ajax({
url: 'https://api.openweathermap.org/data/2.5/weather?q=' + city.val() + '&units=metric&APPID=MYAPPIDHERE',
type: 'POST',
dataType: 'json',
success(response) {
getResponse(response)
},
error(jqXHR, status, errorThrown) {
alert('City not found...');
}
});
});
function getResponse(response) {
$('#temperature').html(response.main.temp.toFixed() + ' ' + degree);
$('.toggleBtn').on('click', function() {
if (degree = '℃') { //if degree is celsius
degree = '℉'; //change degree to fahrenheit
$('#temperature').html(celsiusToFahrenheit(response.main.temp.toFixed()) + ' ' + degree);
}
if (degree = '℉') { // if degree is fahrenheit
degree = '℃'; // change degree to celsius
$('#temperature').html(response.main.temp.toFixed() + ' ' + degree);
console.log(degree); //℃
}
});
}
```<issue_comment>username_1: Several issues:
1. Yes, the handler for the `.on('click')` should be taken outside the response, as [username_2](https://stackoverflow.com/a/49383991/4770813) stated, you only need the values, not the response, to perform the conversions.
2. You say it works from Celsius to Fahrenheit but not the other way, I don't see a function to convert from Fahrenheit to Celsius.
3. `if (degree = '℃')` is not comparing, it's assigning, so always passes. Use `if (degree == '℃')` instead
4. You missed an important `else` in `else if (degree == '℉')`, without it both `if`s were being run.
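On point 2, a quick way to convince yourself the two conversions are true inverses is a round-trip check (sketched in Python here purely for brevity; the arithmetic is identical in JavaScript):

```python
def c_to_f(c):
    return c * 9 / 5 + 32

def f_to_c(f):
    return (f - 32) * 5 / 9

# Converting to Fahrenheit and back must return the original value.
for c in (-40, 0, 25, 100):
    assert f_to_c(c_to_f(c)) == c

print(c_to_f(0), f_to_c(212))  # → 32.0 100.0
```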
Relevant edits here:
```
let fahrenheitToCelsius = (fahrenheit) => {
return ((fahrenheit -32) * 5/9).toFixed(2);
}
$('.toggleBtn').on('click',function() {
if (degree == '℃') { //if degree is celsius
degree = '℉'; //change degree to fahrenheit
$('#temperature').html(celsiusToFahrenheit(parseFloat($('#temperature').text())) + ' ' + degree);
}
else if (degree == '℉') { //if degree is fahrenheit
degree = '℃'; //change degree to celsius
$('#temperature').html(fahrenheitToCelsius(parseFloat($('#temperature').text())) + ' ' + degree);
}
});
```
try it and let me know: <https://jsfiddle.net/m8w7gyxe/26/>
Upvotes: 3 [selected_answer]<issue_comment>username_2: I think the problem is that your handler is being declared inside your "getResponse" function, which means when it is clicked it tries to use
```
response.main.temp.toFixed()
```
Which it can't because there is no response at that time. Try moving the handler outside of the getResponse, and instead of using the degrees from the response get it from what is currently on the page (You only need the number to convert back and forth, not the response)
Upvotes: 1 |
2018/03/20 | 186 | 596 | <issue_start>username_0: I was exploring AWS API Gateway and created several APIs during learning session. Can I delete these API?<issue_comment>username_1: You can use the AWS CLI by passing the API ID delete-rest-api:
```
aws apigateway delete-rest-api --rest-api-id 1234123412
```
See <https://docs.aws.amazon.com/cli/latest/reference/apigateway/delete-rest-api.html>
Upvotes: 2 <issue_comment>username_2: You can use "Delete API" action on Resources tab, Actions menu.
[](https://i.stack.imgur.com/f5wZO.png)
Upvotes: 2 |
<issue_start>username_0: Using a SQL subquery, how do I get the total items and total revenue for each manager, including his team?
Suppose I have this table items\_revenue with columns:
[](https://i.stack.imgur.com/LUHij.png)
All the managers (is\_manager=1) and their respective members are in the above table. Member1 is under Manager1, Member2 is under Manager2, and so on, but real data are in random arrangement.
I want my query to output the following:
[](https://i.stack.imgur.com/EKAX2.png)
This is related to [SQL query to get the subtotal of some rows](https://stackoverflow.com/questions/47525059/sql-query-to-get-the-subtotal-of-some-rows/47525135?noredirect=1#comment85758618_47525135) but I don't want to use the `CASE` expression. Thanks!
You can copy this to easily create the table:
```
DROP TABLE IF EXISTS items_revenue;
CREATE TABLE items_revenue (id int, is_manager int, manager_id int, name varchar(55), no_of_items int, revenue int);
INSERT INTO items_revenue (id, is_manager, manager_id, name, no_of_items, revenue)
VALUES
(1 , 1 , 0 , 'Manager1' , 621 , 833),
(2 , 1 , 0 , 'Manager2' , 458 , 627),
(3 , 1 , 0 , 'Manager3' , 872 , 1027 ),
(8 , 0 , 1 , 'Member1' , 1258 , 1582),
(9 , 0 , 2 , 'Member2' , 5340 , 8827),
(10 , 0 , 3 , 'Member3' , 3259 , 5124);
```<issue_comment>username_1: Use `union all` and aggregation:
```
select manager_id, sum(no_of_items) as no_of_items, sum(revenue) as revenue
from ((select ir.manager_id, ir.no_of_items, ir.revenue
from items_revenue ir
where ir.is_manager = 0
) union all
(select ir.id, ir.no_of_items, ir.revenue
from items_revenue ir
where ir.is_manager = 1
)
) irm
group by manager_id;
```
Note: This only handles direct reports in the table. This is the sample data that you provide. The problem is significantly different if you need all direct reports, so don't modify this question for that situation (ask another). If that is your need, then MySQL is not the best tool (unless you are using version 8), although you can solve it if you know the maximum depth.
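As a quick sanity check, the question's sample data can be loaded into an in-memory SQLite database (standing in here for MySQL; the SQL is portable) and essentially this aggregation run end-to-end, with an `ORDER BY` added for a deterministic result:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE items_revenue (id int, is_manager int, manager_id int,
                            name varchar(55), no_of_items int, revenue int);
INSERT INTO items_revenue VALUES
  (1, 1, 0, 'Manager1',  621,  833), (2, 1, 0, 'Manager2',  458,  627),
  (3, 1, 0, 'Manager3',  872, 1027), (8, 0, 1, 'Member1',  1258, 1582),
  (9, 0, 2, 'Member2',  5340, 8827), (10, 0, 3, 'Member3', 3259, 5124);
""")

rows = conn.execute("""
    SELECT manager_id, SUM(no_of_items), SUM(revenue)
    FROM (SELECT manager_id, no_of_items, revenue
          FROM items_revenue WHERE is_manager = 0
          UNION ALL
          SELECT id, no_of_items, revenue
          FROM items_revenue WHERE is_manager = 1) irm
    GROUP BY manager_id
    ORDER BY manager_id
""").fetchall()

print(rows)  # → [(1, 1879, 2415), (2, 5798, 9454), (3, 4131, 6151)]
```

Each manager's own row plus the rows of their direct reports sum to the expected totals (e.g. Manager1: 621 + 1258 = 1879 items).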
Upvotes: 3 [selected_answer]<issue_comment>username_2: Use two subqueries, one grouped by `id` where `is_manager=1` and another grouped by `manager_id` where `is_manager=0`.
```
SELECT id, name,
(t1.total_items + t2.total_items) total_items,
(t1.total_revenue + t2.total_revenue) total_revenue
FROM (SELECT id, name,
sum(no_of_items) total_items,
sum(revenue) total_revenue
FROM items_revenue
WHERE is_manager = 1
GROUP BY id, name) t1
JOIN (SELECT manager_id,
sum(no_of_items) total_items,
sum(revenue) total_revenue
FROM items_revenue
WHERE is_manager = 0
GROUP BY manager_id) t2
ON t1.id = t2.manager_id;
```
Rextester [here](http://rextester.com/PNNL70529)
Upvotes: 1 <issue_comment>username_3: ```
SELECT
employee.totalit + items_revenue.no_of_items,
employee.totalerev + items_revenue.revenue,
employee.manager_id
FROM (SELECT
sum(no_of_items) AS totalit,
sum(revenue) as totalerev,
manager_id
FROM items_revenue
WHERE manager_id <> 0
GROUP BY manager_id) AS employee, items_revenue
WHERE items_revenue.id = employee.manager_id;
```
Upvotes: 0 |
<issue_start>username_0: When compiling a VSCode extension, TypeScript persistently complained:
>
> error TS2307: Cannot find module 'vscode'.
>
>
>
Installation `npm install vscode` or magic links never helped.
Strace provided:
```
stat("/home/work/mymodule/src/node_modules", 0x7ffe73f2d460) = -1 ENOENT (No such file or directory)
stat("/home/work/mymodule/node_modules", {st_mode=S_IFDIR|0775, st_size=4096, ...}) = 0
stat("/home/work/mymodule/node_modules/vscode", 0x7ffe73f2d200) = -1 ENOENT (No such file or directory)
stat("/home/work/mymodule/node_modules", {st_mode=S_IFDIR|0775, st_size=4096, ...}) = 0
stat("/home/work/mymodule/node_modules/vscode.ts", 0x7ffe73f2d040) = -1 ENOENT (No such file or directory)
stat("/home/work/mymodule/node_modules/vscode.tsx", 0x7ffe73f2d040) = -1 ENOENT (No such file or directory)
stat("/home/work/mymodule/node_modules/vscode.d.ts", 0x7ffe73f2d040) = -1 ENOENT (No such file or directory)
stat("/home/work/mymodule/node_modules/vscode", 0x7ffe73f2d230) = -1 ENOENT (No such file or directory)
stat("/home/work/mymodule/node_modules/@types", 0x7ffe73f2d460) = -1 ENOENT (No such file or directory)
stat("/home/tma/work/qore/node_modules", 0x7ffe73f2d460) = -1 ENOENT (No such file or directory)
stat("/home/tma/work/node_modules", 0x7ffe73f2d460) = -1 ENOENT (No such file or directory)
stat("/home/tma/node_modules", 0x7ffe73f2d460) = -1 ENOENT (No such file or directory)
stat("/home/node_modules", 0x7ffe73f2d460) = -1 ENOENT (No such file or directory)
stat("/node_modules", 0x7ffe73f2d460) = -1 ENOENT (No such file or directory)
stat("/home/work/mymodule/src/node_modules", 0x7ffe73f2d460) = -1 ENOENT (No such file or directory)
stat("/home/work/mymodule/node_modules", {st_mode=S_IFDIR|0775, st_size=4096, ...}) = 0
stat("/home/work/mymodule/node_modules/vscode", 0x7ffe73f2d200) = -1 ENOENT (No such file or directory)
stat("/home/work/mymodule/node_modules", {st_mode=S_IFDIR|0775, st_size=4096, ...}) = 0
stat("/home/work/mymodule/node_modules/vscode.js", 0x7ffe73f2d040) = -1 ENOENT (No such file or directory)
stat("/home/work/mymodule/node_modules/vscode.jsx", 0x7ffe73f2d040) = -1 ENOENT (No such file or directory)
stat("/home/work/mymodule/node_modules/vscode", 0x7ffe73f2d230) = -1 ENOENT (No such file or directory)
```
How to proceed?<issue_comment>username_1: The solution is a link pointing to `vscode.d.ts` from `node_modules` directory.
```
ln -s /home/work/mymodule/node_modules/vscode.d.ts /usr/share/code/resources/app/out/vs/vscode.d.ts
```
Upvotes: 1 <issue_comment>username_2: Update your `Package.json` `"scripts"` section to be:
```
"compile": "tsc -watch -p ./",
```
Upvotes: 0 <issue_comment>username_3: ### The solution
Run `npm install` to fix the issue.
### Why?
Because there is a "post install" script (`node ./node_modules/vscode/bin/install`) which fetches a **vscode.d.ts** matching the engine version declared in your project.
>
> Detected VS Code engine version: ^1.6.0
> Found minimal version that qualifies engine range: 1.6.0
> Fetching vscode.d.ts from: <https://raw.githubusercontent.com/Microsoft/vscode/e52fb0bc87e6f5c8f144e172639891d8d8c9aa55/src/vs/vscode.d.ts>
> vscode.d.ts successfully installed!
>
>
>
[source](https://github.com/Microsoft/vscode/issues/2810#issuecomment-181724735)
Upvotes: 5 <issue_comment>username_4: In my case, the reason was that Node couldn't load a module from a long path on Windows.
Related issue link: <https://github.com/nodejs/node/issues/1990>
So, after I upgraded Node.js from v14.x to v15.x, the issue was gone.
Upvotes: 0 <issue_comment>username_5: If you get this error when trying to create a `webview` for a VS Code extension, check that you are doing it under the `client` folder and not the `server` folder.
Upvotes: 0 <issue_comment>username_6: I suggest that you close and open VS Code
Upvotes: 1 |
<issue_start>username_0: I know (from the answers of [this question](https://stackoverflow.com/questions/10824347/does-sqlite3-compress-data)) that Sqlite by default doesn't enable compression. **Is it possible to enable it, or would this require another tool?** Here is the situation:
I need to add millions of rows in a Sqlite database. The table contains a `description` column (~500 char on average), and on average, each `description` is shared by, say, 40 rows, like this:
```
id name othercolumn description
1 azefds ... This description will be the same for probably 40 rows
2 tsdyug ... This description will be the same for probably 40 rows
...
40 wxcqds ... This description will be the same for probably 40 rows
41 azeyui ... This one is unique
42 uiuotr ... This one will be shared by 60 rows
43 poipud ... This one will be shared by 60 rows
...
101 iuotyp ... This one will be shared by 60 rows
102 blaxwx ... Same description for the next 10 rows
103 sdhfjk ... Same description for the next 10 rows
...
```
Question:
* Would you just insert rows like this, and enable a compression algorithm of the DB? **Pro: you don't have to deal with 2 tables, it's easier when querying.**
or
* Would you use 2 tables?
```
id name othercolumn descriptionid
1 azefds ... 1
2 tsdyug ... 1
...
40 wxcqds ... 1
41 azeyui ... 2
...
id description
1 This description will be the same for probably 40 rows
2 This one is unique
```
**Con: instead of the simple `select id, name, description from mytable` from solution #1, we have to use a more complex way to retrieve the data, involving 2 tables and probably multiple queries.** Or maybe it is possible to do it without a complex query, but with a clever query using `union`, `merge`, or something like that?<issue_comment>username_1: Using multiple tables will not only prevent inconsistency and take less space, but may also be faster, even if multiple/more complex queries are involved (precisely because it involves moving less data around). Which you should use depends on which of those characteristics are most important to you.
A query to retrieve the results when you have 2 tables would look something like this (which is really just a join between the two tables):
```
select table1.id, table1.name, table1.othercolumn, table2.description
from table1, table2
where table1.descriptionid=table2.id
```
Upvotes: 3 [selected_answer]<issue_comment>username_2: Here is some illustration code in Python for ScottHunter's answer:
```
import sqlite3
conn = sqlite3.connect(':memory:')
c = conn.cursor()
c.execute("CREATE TABLE mytable (id integer, name text, descriptionid integer)")
c.execute("CREATE TABLE descriptiontable (id integer, description text)")
c.execute('INSERT INTO mytable VALUES(1, "abcdef", 1)');
c.execute('INSERT INTO mytable VALUES(2, "ghijkl", 1)');
c.execute('INSERT INTO mytable VALUES(3, "iovxcd", 2)');
c.execute('INSERT INTO mytable VALUES(4, "zuirur", 1)');
c.execute('INSERT INTO descriptiontable VALUES(1, "Description1")');
c.execute('INSERT INTO descriptiontable VALUES(2, "Description2")');
c.execute('SELECT mytable.id, mytable.name, descriptiontable.description FROM mytable, descriptiontable WHERE mytable.descriptionid=descriptiontable.id');
print(c.fetchall())
#[(1, 'abcdef', 'Description1'),
# (2, 'ghijkl', 'Description1'),
# (3, 'iovxcd', 'Description2'),
# (4, 'zuirur', 'Description1')]
```
Upvotes: 0 |
2018/03/20 | 849 | 3,275 | <issue_start>username_0: I need to check if an EPS/PDF file contains any vector elements.
First I convert the PDF to EPS and remove all text elements and images from the file like this:
```
pdftocairo -f $page_number -l $page_number -eps $input - | sed '/BT/,/ET/ d' | sed '/^8 dict dup begin$/,/^Q$/ c Q' > $output
```
But how can I then check if any elements are written to the canvas?<issue_comment>username_1: What do you mean, exactly, by 'vector elements'? Anything except an actual bitmap image? Why do you care? Perhaps if you explained what you want to achieve it would be easier to help you.
Note that the approach you are using is by no means guaranteed to work; there can easily be 'elements' in the file which won't be removed by your rather basic approach to finding images.
You could use Ghostscript; run the file to a bitmap and specify -dFILTERTEXT and -dFILTERIMAGES. Then examine the pixels of the bitmap to see if any are non-white. If they are, then there was vector content in the file. You could probably use something like ImageMagick to count the colours and see if there's more than 1.
Or run the file to bitmap twice, once normally, and once with -dFILTERVECTOR. Compare the two bitmaps (MD5 on them would be sufficient). If there are no differences then there was no vector content.
Upvotes: 3 [selected_answer]<issue_comment>username_2: Any PDF that has vector elements will use at least one of the path painting operators. According to chapter 8 of the PDF standard, those are:
```
S, s, f, F, f*, B, B*, b, b*, n
```
Of course, since PDF files can be complex, you'll also need it in a standard form. You can do that using the `qpdf` program's `QDF` format. (`apt install qpdf` if you don't have it).
```
qpdf -qdf schedule.pdf - | egrep -m1 -q '\b[SsfFBbn]\*?$' && echo Yup
```
That'll print "Yup" if the file schedule.pdf has vector graphics in it.
---
Note: I think this will do the job for you, but it is not fool proof. It's possible to get false negatives if your PDF is loading vectors from an external file, embedding raw postscript, or doing some other trickiness. And, of course it can have false positives (for example, a file that draws a completely transparent, 0pt dot in white ink on a white background).
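That operator test can also be factored into a small check that works on any uncompressed content-stream text fed to stdin. A sketch (slightly looser than the `egrep` pattern above, in that it accepts an operator at any token boundary rather than only at end of line):

```shell
# Exit 0 if stdin contains a standalone path-painting operator token.
has_vector_ops() {
  grep -Eq '(^|[[:space:]])[SsfFBbn]\*?([[:space:]]|$)'
}

printf '0 0 100 100 re f\n' | has_vector_ops && echo "vector content"
printf 'BT /F1 12 Tf (Hi) Tj ET\n' | has_vector_ops || echo "no vector content"
```

The same false-positive and false-negative caveats apply to this variant.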
Upvotes: 2 <issue_comment>username_3: Other answers have addressed identifying the drawing operators in a plain text stream. For the other question,
>
> But how can I then check if any elements are written to the canvas?
>
>
>
For this, the elements need to be part of a content stream that is referred to
in the `/Contents` member of the `Page` object.
If you read in all of the pdf objects, there will be a tree connecting all the content streams to the `Root` object declared in the trailer.
* Trailer : /Root is a reference to the Document Catalog object
* Document Catalog : /Pages is an array of Page objects or Pages nodes
* Page : /Contents is an array of references to Content Stream objects that draw the elements of the page
It is possible for there to be stray content stream objects that are not referenced in the Document tree. By traversing the Pages tree you could collect any and all actual content and then feed that result to one of the solutions from the other answers.
Upvotes: 0 |
2018/03/20 | 240 | 708 | <issue_start>username_0: I'm trying to display the 5th line from the end of a file. How can I do this using `sed`?<issue_comment>username_1: Hi, use the below command for your requirement.
```
tail -5 yourfile.txt | sed -n "1p"
```
To print a different line from the end, change the number passed to `tail`: e.g. `tail -10 yourfile.txt | sed -n "1p"` prints the 10th line from the end, `tail -3` the 3rd, and so on.
Or you can just use the combination of tail and head
```
tail -5 yourfile.txt | head -1
```
Upvotes: -1 <issue_comment>username_2: Found it here [emulate tail with sed](https://unix.stackexchange.com/questions/107387/emulate-tail-with-sed) – anonyXmous
So final answer for this can be: sed -e :a -e '$q;N;6,$D;ba' | sed -n '1p' – Eneen
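A quick check of that pipeline on a throwaway file (the file name and contents are just for illustration): the first `sed` keeps a sliding window of the last five lines, and the second prints the first of them, i.e. the 5th line from the end.

```shell
# Build a 10-line sample file; its 5th line from the end is "6".
printf '%s\n' 1 2 3 4 5 6 7 8 9 10 > sample.txt

sed -e :a -e '$q;N;6,$D;ba' sample.txt | sed -n '1p'   # prints 6
```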
Upvotes: 1 [selected_answer] |
2018/03/20 | 464 | 1,788 | <issue_start>username_0: I have a dynamodb table. Whenever I add a new item to the table, a lambda function is triggered, certain processing is done and the new item is updated. It works fine, but I was wondering what if thousands of items were added at once. Will the function execute for all items at once?? Or will the execution be queued?<issue_comment>username_1: From [Managing Concurrency](https://docs.aws.amazon.com/lambda/latest/dg/concurrent-executions.html), official document for AWS Lambda
>
> Functions scale automatically based on incoming request rate
>
>
>
So, it means that if requests are concurrent then to handle these requests multiple instances of the same function will execute. Some of the requests may be queued.
Also from the same documentation,
>
> By default, AWS Lambda limits the total concurrent executions across all functions within a given region to 1000.
>
>
>
Upvotes: 2 [selected_answer]<issue_comment>username_2: Lambda has 100 parallel Lambda executions assuming each execution takes 1 second or more to complete.
So at a time Lambda can manage multiple executions.
Upvotes: 0 <issue_comment>username_3: From [Understanding Scaling Behavior](https://docs.aws.amazon.com/en_us/lambda/latest/dg/scaling.html)
>
> For Lambda functions that process Kinesis or DynamoDB streams the
> number of shards is the unit of concurrency
>
>
>
The number of shards depends on your partition key cardinality. For example, if your partition key cardinality is 2, then you have only two shards, and the concurrency of your Lambda is 2.
Remember, you can't control the number of shards explicitly.
[This is a related question](https://stackoverflow.com/questions/42305262/increase-number-of-shards-in-dynamodb-to-spin-up-more-lambdas-in-parallel)
Upvotes: 0 |
2018/03/20 | 1,037 | 3,956 | <issue_start>username_0: I'm trying to add a gmail user to a group using google's api in python.
Here's my code:
```
from __future__ import print_function
import httplib2
import os
from apiclient import discovery
from oauth2client import client
from oauth2client import tools
from oauth2client.file import Storage
try:
import argparse
flags = argparse.ArgumentParser(parents=[tools.argparser]).parse_args()
except ImportError:
flags = None
# If modifying these scopes, delete your previously saved credentials
# at ~/.credentials/admin-directory_v1-python-quickstart.json
SCOPES = 'https://www.googleapis.com/auth/admin.directory.group'
CLIENT_SECRET_FILE = 'client_secret1.json'
APPLICATION_NAME = 'Directory API Python Quickstart'
GROUP = '<EMAIL>'
def get_credentials():
home_dir = os.path.expanduser('~')
credential_dir = os.path.join(home_dir, '.credentials')
if not os.path.exists(credential_dir):
os.makedirs(credential_dir)
credential_path = os.path.join(credential_dir,
'admin-directory_v1-python-quickstart.json')
store = Storage(credential_path)
credentials = store.get()
if not credentials or credentials.invalid:
flow = client.flow_from_clientsecrets(CLIENT_SECRET_FILE, SCOPES)
flow.user_agent = APPLICATION_NAME
if flags:
credentials = tools.run_flow(flow, store, flags)
else: # Needed only for compatibility with Python 2.6
credentials = tools.run(flow, store)
print('Storing credentials to ' + credential_path)
return credentials
def main():
credentials = get_credentials()
http = credentials.authorize(httplib2.Http())
service = discovery.build('admin', 'directory_v1', http=http)
groupinfo = {'email': '<EMAIL>',
'role': 'MEMBER',
}
service.members().insert(groupKey=GROUP, body=groupinfo).execute()
if __name__ == '__main__':
main()
```
Whenever I run this I get the following error:
```
HttpError:
```
So far I have done the following to troubleshoot this:
1. I tested the quickstart code from google and it worked.
2. I tried out oauth playground and I entered the same json data and it worked
I am not sure what exactly is the issue here. Scope is correct(worked in oauth playground), code seems to be correct(request worked in oauth playground). The only thing I can think of is request type?
I would appreciate any help.
Thanks.
2018/03/20 | 652 | 2,268 | <issue_start>username_0: I'm not sure why I'm having this weird error. I'm using route.put and selected PUT in Postman.
Here's my Postman error:
Image link-><https://ibb.co/dzvAKc>
All of my musics on my mongoDB data: Image link-><https://ibb.co/d9TY5H>
Routes:
```
const User = require('../models/user');
const Music = require('../models/music');
const jwt = require('jsonwebtoken');
const config = require('../config/database.js');
module.exports = (router) => {
```
Update function:
```
router.put('/updateMusic', (req, res) => {
if (!req.body._id) {
res.json({ success: false, message: 'No music id provided.'});
}
else { .. more authentications here }
return router;
};
```
Somehow it can't get past that 1st if.
[UPDATE] :
Here's the img for the headers-> <https://ibb.co/mGr9vH>
2018/03/20 | 1,348 | 4,239 | <issue_start>username_0: I've done this JavaScript calculation. [Live Preview](https://promehedi.com/projects/apps/diploma-cgpa-calculator/)
Everything is okay, but the problem is: once the calculation is done, the next time it shows a NaN value. If you do not reload the page, you cannot calculate again!
I want to repeat the calculation without reloading; how can I do that?
Here Is my Simple Calculation:
```js
// get all data
var first = document.getElementById('first');
var second = document.getElementById('second');
var third = document.getElementById('third');
var four = document.getElementById('four');
var five = document.getElementById('five');
var six = document.getElementById('six');
var seven = document.getElementById('seven');
var eight = document.getElementById('eight');
var outPut = document.getElementById('result');
// Listen for Submit the form
document.getElementById('gpaInput').addEventListener('submit', outPutF);
function outPutF(e){
e.preventDefault();
// Calculation
first = first.value * 5 / 100;
second = second.value * 5 / 100;
third = third.value * 5 / 100;
four = four.value * 15 / 100;
five = five.value * 15 / 100;
six = six.value * 20 / 100;
seven = seven.value * 25 / 100;
eight = eight.value * 10 / 100;
var result = first + second + third + four + five + six + seven + eight;
// Reset the form
this.reset();
// Finally output the Calculation
outPut.innerHTML = 'Your CGPA: '+result;
// setTimeout(window.location.reload(true), 9000);
}
```
```css
input:focus, button:focus, select:focus {outline: none!important;box-shadow: none!important;}
#gpaInput .input-group {margin: 0.5em 0;}
#gpaInput .input-group-text { min-width: 95px;}
#result {display: block; width: 68%; text-align: center; margin-top: 25px; padding: 17px 0;}
.jumbotron {overflow: hidden;}
```
```html
---
Calculate CGPA
##### Complete The form!
```<issue_comment>username_1: You get NaN because you are replacing the reference to the element with its value, so the second time it is no longer the element.
```
var first = document.getElementById('first'); //<-- element
first = first.value * 5 / 100; //<-- first replaced with a number
```
So the next time you call it:
```
first = first.value * 5 / 100; //<-- first.value is undefined here since first is a number
```
So you need to rename your variables inside...
```
var nFirst = first.value * 5 / 100;
var nSecond = second.value * 5 / 100;
var nThird = third.value * 5 / 100;
var nFour = four.value * 15 / 100;
var nFive = five.value * 15 / 100;
var nSix = six.value * 20 / 100;
var nSeven = seven.value * 25 / 100;
var nEight = eight.value * 10 / 100;
var result = nFirst + nSecond + nThird + nFour + nFive + nSix + nSeven + nEight;
```
Upvotes: 3 [selected_answer]<issue_comment>username_2: You should **declare your variables inside your function** (or **get the value of every field inside the function**) so that when the code runs the next time it gets the values correctly:
```js
// Listen for Submit the form
document.getElementById('gpaInput').addEventListener('submit', outPutF);
function outPutF(e){
var first = document.getElementById('first');
var second = document.getElementById('second');
var third = document.getElementById('third');
var four = document.getElementById('four');
var five = document.getElementById('five');
var six = document.getElementById('six');
var seven = document.getElementById('seven');
var eight = document.getElementById('eight');
var outPut = document.getElementById('result');
e.preventDefault();
// Calculation
first = first.value * 5 / 100;
second = second.value * 5 / 100;
third = third.value * 5 / 100;
four = four.value * 15 / 100;
five = five.value * 15 / 100;
six = six.value * 20 / 100;
seven = seven.value * 25 / 100;
eight = eight.value * 10 / 100;
var result = first + second + third + four + five + six + seven + eight;
// Reset the form
this.reset();
// Finally output the Calculation
outPut.innerHTML = 'Your CGPA: '+result;
// setTimeout(window.location.reload(true), 9000);
}
```
```html
---
Calculate CGPA
##### Complete The form!
```
Upvotes: 1 |
2018/03/20 | 1,153 | 3,486 | <issue_start>username_0: I have this code but is not working , it keep giving me the following errors :
```none
[Error] cannot convert 'float' to 'float*' for argument '1' to 'void zeroCrossing(float*, float*, int)'
[Error] cannot convert 'float*' to 'float' for argument '1' to 'bool getSign(float)'
[Error] cannot convert 'float*' to 'float' for argument '1' to 'bool getSign(float)'
[Error] invalid conversion from 'int' to 'float*' [-fpermissive]
```
```cpp
#include
#include
using namespace std;

void zeroCrossing(float *data, float *zerCross, int nx);
bool getSign(float data);

float array[9] = {1,2,3,0,-1,-2,-3,0,1};
float *p = array;
float f1[9];
float *p2 = f1;
int bx = 2;

int main() {
    zeroCrossing(*p, *p2, bx);
    return 0;
}

/* zero crossing function */
/* data = input array */
/* zerCross = output zero crossing array */
void zeroCrossing(float *data[], float *zerCross[], int nx)
{
    int i;
    bool sign1, sign2;
    memset(zerCross, 0, nx*sizeof(float)); // copies the 0 to the first characters of the string
                                           // pointed to, by argument
    for(i=0; i < nx; i++) {
        /* ... loop body elided in the original post ... */
    }
}

bool getSign(float data)
{
    if(data > 0)  /* positif data */
        return (1);
    else          /* negatif data */
        return (0);
}
```
2018/03/20 | 478 | 2,013 | <issue_start>username_0: Hi, I am trying to dynamically build an input based on ajax-returned results. However `.remove()` is not working, and my solution is not pretty as I am repeating myself. Any help or direction please. Thanks in advance.
```js
$.ajax({
    url: '/SettingTypes/GetSettingTypeDataValidation',
    async: false,
    data: { SettingTypeName: SettingName },
    dataType: "json",
    success: function (settingValueType) {
        if (settingValueType == 'Integer') {
            $("#dataTypeInputPlaceHolder").remove();
            $("#dataTypeInputBoolean").remove();
            $("#dataTypeInputStringList").remove();
            $("#dataType").append("")
        }
        if (settingValueType == 'Decimal') {
            $("#dataTypeInputBoolean").remove();
            $("#dataTypeInputPlaceHolder").remove();
            $("#dataTypeInputStringList").remove();
            $("#dataType").append("")
        }
        if (settingValueType == 'Boolean') {
            $("#dataTypeInputPlaceHolder").remove();
            $("#dataTypeInputStringList").remove();
            $("#dataTypeInput").remove();
            $("#dataType").append("")
            $("#dataTypeInputBoolean").val(true);
            //$('#dataTypeInput').remove();
        } else {
            $("#dataTypeInputPlaceHolder").remove();
            $("#dataTypeInput").remove();
            $("#dataTypeInputPlaceHolder").remove();
            $("#dataTypeInputBoolean").remove();
            $("#dataType").append("")
        }
    }
});
```<issue_comment>username_1: ```
$("#dataTypeInputPlaceHolder").empty();
```
Or
```
$("#dataTypeInputPlaceHolder").val('');
```
Upvotes: -1 <issue_comment>username_2: This is not a good method to remove the textbox. If you want to remove it, put the textbox inside a div element, set an id on the div, and then try removing the div.
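A sketch of one way to cut the repetition in the question's callback: compute the single input that should exist from the response, clear the container, and append it. The ids and input types here are illustrative, not taken from the original page.

```javascript
// Map the ajax result to the one input element that should be shown.
function inputHtmlFor(settingValueType) {
  switch (settingValueType) {
    case 'Integer': return '<input id="dataTypeInput" type="number" step="1">';
    case 'Decimal': return '<input id="dataTypeInput" type="number" step="0.01">';
    case 'Boolean': return '<input id="dataTypeInputBoolean" type="checkbox" checked>';
    default:        return '<input id="dataTypeInputStringList" type="text">';
  }
}

// In the success callback, one clear + one append replaces all the .remove() calls:
//   $('#dataType').empty().append(inputHtmlFor(settingValueType));
```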
Upvotes: -1 <issue_comment>username_3: From the [docs...](http://api.jquery.com/trigger/)
To trigger a custom event
```
$( "#some-element").trigger( "custom");
```
Then:
```
$('#user div').on('custom', function(){
$('#user div').remove('label');
});
```
Upvotes: 0 |
2018/03/20 | 415 | 1,351 | <issue_start>username_0: ```
use fields __PACKAGE__->SUPER::_praveen, qw(path);
```
While executing the above line with Perl 5.18.2 I am getting an error: SUPER.pm not found in @INC.
I am able to compile the above line with Perl 5.8.8.
Can you please help me with this?<issue_comment>username_1: I'm not surprised you're getting errors: the `fields` pragma expects a list of field names as parameters, and even if the inherited class provides something suitable through a call like that, it is far from clear whether the superior class is in scope at that point.
If you want more help then you must show a minimal code sample that reproduces the issue.
Upvotes: 0 <issue_comment>username_2: That error shouldn't be produced by that line since the only module loaded by that line is `fields.pm`. Unsurprisingly, I can't replicate your error.
`Foo.pm`:
```
package Foo;
use strict;
use warnings qw( all );
sub _praveen { qw( id ) }
1;
```
`Bar.pm`:
```
package Bar;
use strict;
use warnings qw( all );
use parent 'Foo';
use fields __PACKAGE__->SUPER::_praveen, qw( path );
1;
```
`a.pl`:
```
use strict;
use warnings qw( all );
use feature qw( say );
use FindBin qw( $RealBin );
use lib $RealBin;
use Bar qw( );
say "ok";
```
Output:
```
$ perlbrew use 5.18.2t
$ perl a.pl
ok
```
Please provide a minimal, runnable demonstration of the problem.
Upvotes: 1 |
2018/03/20 | 650 | 2,365 | <issue_start>username_0: I'm trying to test machine learning code from this site: <https://github.com/lyuboraykov/flight-genie>
I'm really new to machine learning and I'm using Windows.
I already installed the requirements to run the code (Python, virtualenv, numpy, sklearn, scipy, etc.) but I got stuck when I try to run the main code; it shows a notification like this:
[](https://i.stack.imgur.com/PRS3a.png)
please help, thanks<issue_comment>username_1: Have you followed the installation steps described on the author's page? I'm afraid you may have missed this...
```sh
# you have to have python 3 installed
pyenv env
source env/bin/activate
pip install -r requirements.txt
python flight_genie/main.py
```
Upvotes: -1 <issue_comment>username_2: Based on the error message I think your problem is with your import. I'm guessing it should be something like:
```
from flight import Flight
```
Instead of:
```
from flight_genie.flight import Flight
```
If that doesn't work try to post your code so we can try to find the problem.
Upvotes: -1 <issue_comment>username_3: If you run a python script as
```
python dir_name/script.py
```
... then python will execute the file named `script.py` inside the folder `dir_name`. What will happen here is python program will go inside `dir_name` folder first and then run the script.
In your case if you type `python flight_genie/main.py`, it will go inside the folder `flight_genie` and will execute the file. Then python can't find a folder (actually the module) named `flight_genie` there because python program is already inside that folder. That's why you get this error.
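The effect is easy to reproduce on a Unix-like shell with a throwaway package (all names here are illustrative):

```shell
mkdir -p demo_pkg
touch demo_pkg/__init__.py
printf 'VALUE = 42\n' > demo_pkg/helper.py
printf 'from demo_pkg.helper import VALUE\nprint(VALUE)\n' > demo_pkg/main.py

# Run as a script: sys.path[0] is demo_pkg/, so the package cannot be found.
python3 demo_pkg/main.py || echo "script run failed: No module named 'demo_pkg'"

# Run as a module from the parent directory: prints 42.
python3 -m demo_pkg.main
```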
So one way of fixing this issue is replacing all `import flight_genie.xxxx` with just `import xxxx`. (Also `from flight_genie.xxxx import yyyy` with `from xxxx import yyyy`)
But that is time-consuming if you have a large project (and sometimes it won't even work), so the best way is to run the project as a whole module.
If you look at [here](https://www.python.org/dev/peps/pep-0338/) you can see how to run python modules as scripts. You just have to type the following command in console.
```
python -m flight_genie.main
```
ps: I assume that you have `python3` installed in windows and configured to run `python3` when you type `python` in command line.
Upvotes: 0 |
2018/03/20 | 571 | 2,138 | <issue_start>username_0: I come from a C# background and am debugging some legacy C++ code. My google skills are failing me, or perhaps my understanding of the results is lacking, but could anyone provide a brief explanation of what Message Maps are used for?
Many Thanks
2018/03/20 | 3,653 | 10,970 | <issue_start>username_0: When I try to use `okhttp` or `javax.ws.rs.client.Client` the following error occur
>
> java.lang.NoSuchMethodError:
> sun.security.ssl.SSLSessionImpl.(Lsun/security/ssl/ProtocolVersion;Lsun/security/ssl/CipherSuite;Ljava/util/Collection;Lsun/security/ssl/SessionId;Ljava/lang/String;I)V
>
>
>
Searching in the sun.security.ssl package, there is no SSLSessionImpl class
I'm using Mac OS 10.13.3 (17D102)
```
java -version
java version "1.8.0_162"
Java(TM) SE Runtime Environment (build 1.8.0_162-b12)
Java HotSpot(TM) 64-Bit Server VM (build 25.162-b12, mixed mode)
```
And running my war on Glassfish 5.0
*build.gradle*
```
buildscript {
ext.kotlin_version = '1.2.30'
repositories {
mavenCentral()
}
dependencies {
classpath "org.jetbrains.kotlin:kotlin-gradle-plugin:$kotlin_version"
classpath "org.jetbrains.kotlin:kotlin-noarg:$kotlin_version"
classpath "org.jetbrains.kotlin:kotlin-allopen:$kotlin_version"
}
}
group 'invoice-administration-api'
version '1.0-SNAPSHOT'
apply plugin: 'idea'
apply plugin: 'war'
apply plugin: 'kotlin'
apply plugin: 'kotlin-jpa'
apply plugin: 'kotlin-allopen'
repositories {
mavenCentral()
}
dependencies {
compile "org.jetbrains.kotlin:kotlin-stdlib-jdk8:$kotlin_version"
compileOnly group: 'javax', name: 'javaee-api', version: '8.0'
compile group: 'org.hibernate', name: 'hibernate-core', version: '5.2.13.Final'
compile group: 'org.glassfish.jersey.media', name: 'jersey-media-json-jackson', version: '2.26'
}
allOpen {
annotation('javax.ejb.Stateless')
}
compileKotlin {
kotlinOptions.jvmTarget = "1.8"
}
compileTestKotlin {
kotlinOptions.jvmTarget = "1.8"
}
```<issue_comment>username_1: EDIT: since this is the "correct answer" and Java 1.8.0-151 will potentially lack security patches, I can't recommend downgrading to such an old Java version.
Please take a look at Payara project or new releases of Glassfish.
--
Try to downgrade the Java version to Java 1.8.0-151. It should work.
There's an issue in glassfish 5 with Java 8 161, there's an [issue](https://github.com/javaee/glassfish/issues/22407) created, but no changes in nightly builds regarding the issue, I checked yesterday (19th march).
Upvotes: 4 [selected_answer]<issue_comment>username_2: The issue appears because Glassfish embeds native `sun.*` classes into `[glassfish5_home]/glassfish/modules/endorsed/grizzly-npn-bootstrap.jar`, so they conflict with the classes included in `[JDK_HOME]/jre/lib/jsse.jar`.
So edit the `grizzly-npn-bootstrap.jar` (make a copy before) file and remove the sun folder.
Upvotes: 4 <issue_comment>username_3: Maybe a little bit more elegant is to use the existing fix done in the npn-grizzly jar:
```
mkdir -p scratch/glassfish5/glassfish/modules/endorsed && cd scratch
wget http://download.oracle.com/glassfish/5.0.1/nightly/glassfish-5.0.1-b02-12_03_2018.zip
wget http://central.maven.org/maven2/org/glassfish/grizzly/grizzly-npn-bootstrap/1.8.1/grizzly-npn-bootstrap-1.8.1.jar
mv grizzly-npn-bootstrap-1.8.1.jar glassfish5/glassfish/modules/endorsed/grizzly-npn-bootstrap.jar
jar uvf glassfish-5.0.1-b02-12_03_2018.zip glassfish5/glassfish/modules/endorsed/grizzly-npn-bootstrap.jar
echo "you have a patched archive for running with Java 8 patchlevel 191"
```
HTH
Upvotes: 1 <issue_comment>username_4: The above error could result in this being displayed in your browser:
```
The connection was reset The connection to the server was reset while the page was loading.
The site could be temporarily unavailable or too busy. Try again in a few moments.
If you are unable to load any pages, check your computer’s network connection.
If your computer or network is protected by a firewall or proxy, make sure that Firefox is permitted to access the Web.
```
SOLUTION:
Download <http://central.maven.org/maven2/org/glassfish/grizzly/grizzly-npn-bootstrap/1.8.1/grizzly-npn-bootstrap-1.8.1.jar>
and replace your Glassfish/Payara `grizzly-npn-bootstrap.jar` with it:
```
mv grizzly-npn-bootstrap-1.8.1.jar glassfish5/glassfish/modules/endorsed/grizzly-npn-bootstrap.jar
```
Upvotes: 3 <issue_comment>username_5: The changes to the security ciphers in Java 1.8.0\_161 broke the SSL function in the underlying Grizzly server of Glassfish 5.0. The last working Java version was 1.8.0\_152.
As of this writing I can confirm that Java 1.8.0\_202 works with Glassfish 5.1 (now maintained by, and downloadable from, Eclipse.org). The included grizzly-npn-bootstrap is version 1.9.0, so there is no need to delete or alter anything.
Upvotes: 2 <issue_comment>username_6: I just came across this frustrating issue and wanted to expand a little on username_2's answer above (<https://stackoverflow.com/a/52479362/4722577>), which worked for me.
As a note, I have Glassfish 5.0 and this Java version:

```
openjdk version "1.8.0_232"
OpenJDK Runtime Environment (build 1.8.0_232-8u232-b09-0ubuntu1~18.04.1-b09)
OpenJDK 64-Bit Server VM (build 25.232-b09, mixed mode)
```
1. As mentioned by username_2, I copied the `grizzly-npn-bootstrap.jar` to another location as a backup.
2. I ran `jar -xvf grizzly-npn-bootstrap.jar` to see the exact classes and folder locations in the jar, which gave me the following output:
```
cduran@cduran-VirtualBox:~/Documents$ jar -xvf grizzly-npn-bootstrap.jar
inflated: META-INF/MANIFEST.MF
created: META-INF/
created: META-INF/maven/
created: META-INF/maven/org.glassfish.grizzly/
created: META-INF/maven/org.glassfish.grizzly/grizzly-npn-bootstrap/
inflated: META-INF/maven/org.glassfish.grizzly/grizzly-npn-bootstrap/pom.properties
inflated: META-INF/maven/org.glassfish.grizzly/grizzly-npn-bootstrap/pom.xml
created: org/
created: org/glassfish/
created: org/glassfish/grizzly/
created: org/glassfish/grizzly/npn/
inflated: org/glassfish/grizzly/npn/AlpnClientNegotiator.class
inflated: org/glassfish/grizzly/npn/AlpnServerNegotiator.class
inflated: org/glassfish/grizzly/npn/ClientSideNegotiator.class
inflated: org/glassfish/grizzly/npn/NegotiationSupport.class
inflated: org/glassfish/grizzly/npn/ServerSideNegotiator.class
created: sun/
created: sun/security/
created: sun/security/ssl/
inflated: sun/security/ssl/Alerts.class
inflated: sun/security/ssl/AlpnExtension$Builder.class
inflated: sun/security/ssl/AlpnExtension.class
inflated: sun/security/ssl/ClientHandshaker$1.class
inflated: sun/security/ssl/ClientHandshaker$2.class
inflated: sun/security/ssl/ClientHandshaker.class
inflated: sun/security/ssl/ExtensionType.class
inflated: sun/security/ssl/GrizzlyNPN.class
inflated: sun/security/ssl/HandshakeMessage$1.class
inflated: sun/security/ssl/HandshakeMessage$CertificateMsg.class
inflated: sun/security/ssl/HandshakeMessage$CertificateRequest.class
inflated: sun/security/ssl/HandshakeMessage$CertificateVerify$1.class
inflated: sun/security/ssl/HandshakeMessage$CertificateVerify.class
inflated: sun/security/ssl/HandshakeMessage$ClientHello.class
inflated: sun/security/ssl/HandshakeMessage$DH_ServerKeyExchange.class
inflated: sun/security/ssl/HandshakeMessage$DistinguishedName.class
inflated: sun/security/ssl/HandshakeMessage$ECDH_ServerKeyExchange.class
inflated: sun/security/ssl/HandshakeMessage$Finished.class
inflated: sun/security/ssl/HandshakeMessage$HelloRequest.class
inflated: sun/security/ssl/HandshakeMessage$NextProtocol$Builder.class
inflated: sun/security/ssl/HandshakeMessage$NextProtocol.class
inflated: sun/security/ssl/HandshakeMessage$RSA_ServerKeyExchange.class
inflated: sun/security/ssl/HandshakeMessage$ServerHello.class
inflated: sun/security/ssl/HandshakeMessage$ServerHelloDone.class
inflated: sun/security/ssl/HandshakeMessage$ServerKeyExchange.class
inflated: sun/security/ssl/HandshakeMessage.class
inflated: sun/security/ssl/Handshaker$1.class
inflated: sun/security/ssl/Handshaker$DelegatedTask.class
inflated: sun/security/ssl/Handshaker.class
inflated: sun/security/ssl/HelloExtensions.class
inflated: sun/security/ssl/NextProtocolNegotiationExtension$Builder.class
inflated: sun/security/ssl/NextProtocolNegotiationExtension.class
inflated: sun/security/ssl/SSLEngineImpl.class
inflated: sun/security/ssl/ServerHandshaker$1.class
inflated: sun/security/ssl/ServerHandshaker$2.class
inflated: sun/security/ssl/ServerHandshaker$3.class
inflated: sun/security/ssl/ServerHandshaker.class
```
3. Observe that the jar file contains 3 folders at the root level: **META-INF**, **org**, and **sun**. Again, per username_2's answer, we want to remove the `sun` folder.
4. I don't know of a command-line way to recreate a jar while excluding a specific folder, so I recreated `grizzly-npn-bootstrap.jar` from just the **META-INF** and **org** folders: `jar -cvf grizzly-npn-bootstrap.jar META-INF/* org/*`
5. Subsequently, running the `jar -xvf grizzly-npn-bootstrap.jar` command gives this output (notice the **sun** folder is no longer listed):
```
cduran@cduran-VirtualBox:~/glassfish-5.0-web-profile/glassfish5/glassfish/modules/endorsed$ jar -xvf grizzly-npn-bootstrap.jar
created: META-INF/
inflated: META-INF/MANIFEST.MF
created: META-INF/maven/
created: META-INF/maven/org.glassfish.grizzly/
created: META-INF/maven/org.glassfish.grizzly/grizzly-npn-bootstrap/
inflated: META-INF/maven/org.glassfish.grizzly/grizzly-npn-bootstrap/pom.properties
inflated: META-INF/maven/org.glassfish.grizzly/grizzly-npn-bootstrap/pom.xml
created: org/glassfish/
created: org/glassfish/grizzly/
created: org/glassfish/grizzly/npn/
inflated: org/glassfish/grizzly/npn/ClientSideNegotiator.class
inflated: org/glassfish/grizzly/npn/ServerSideNegotiator.class
inflated: org/glassfish/grizzly/npn/AlpnClientNegotiator.class
inflated: org/glassfish/grizzly/npn/NegotiationSupport.class
inflated: org/glassfish/grizzly/npn/AlpnServerNegotiator.class
```
6. Restart your glassfish.
After this I no longer get that error message the OP posted above:
>
> java.lang.NoSuchMethodError:
> sun.security.ssl.SSLSessionImpl.(Lsun/security/ssl/ProtocolVersion;Lsun/security/ssl/CipherSuite;Ljava/util/Collection;Lsun/security/ssl/SessionId;Ljava/lang/String;I)V
>
>
>
Upvotes: 2 <issue_comment>username_7: I did a deep dive into this issue just a few days ago and found the root cause by reviewing the JDK source code. I assume you are using OpenJDK.
This is a bug in OpenJDK up to 1.8.0\_242.
The SSLSessionImpl is not instantiated when it is used in SSLSessionEngine: it is used as a static class, but it is only a final class.
The issue has been fixed in 1.8.0\_252, so I recommend upgrading to that version, or switching to any build of Oracle JDK 1.8.0, which does not have this bug.
Hope it helps.
Openjdk:
[SSLSessionEngine.java](https://i.stack.imgur.com/bD1pH.png)
[SSLSessionImpl.java](https://i.stack.imgur.com/CqGE8.png)
Oracle jdk:
[SSLSessionEngine.java](https://i.stack.imgur.com/gX3t3.png)
Upvotes: 0
2018/03/20 | 938 | 2,708 | <issue_start>username_0: I have read the [article](https://github.com/ethereum/EIPs/issues/165) about ERC-165, which enables us to detect which interfaces are implemented in a contract.
However, I found that some ERC-721 tokens are a little different from others, for example [cryptokitties](https://etherscan.io/address/0x06012c8cf97bead5deae237070f9587f8e7a266d) and [cryptohorse](http://www.cryptohorse.ch/sourceC.html).
Are these different ERC-721 tokens both detected as ERC-721 tokens by wallet applications?
And how does an ERC-721-supporting wallet detect the token type?<issue_comment>username_1: In Go you can detect whether a contract is a token by calling the EVM and checking the output, kind of like this:
```
// Getting token decimals
input, err = erc20_token_abi.Pack("decimals")
if err != nil {
	utils.Fatalf("packing input for decimals failed: %v", err)
}
msg = types.NewMessage(fake_src_addr, addr, 0, value, gas_limit, gas_price, input, false)
evm_ctx = core.NewEVMContext(msg, block_header, chain, nil)
ethvirmac = vm.NewEVM(evm_ctx, statedb, bconf, vm_cfg)
gp = new(core.GasPool).AddGas(math.MaxBig256)
ret, gas, failed, err = core.ApplyMessage(ethvirmac, msg, gp)
tok.decimals_found = false
if failed {
	log.Info(fmt.Sprintf("vm err for decimals: %v, failed=%v", err, failed))
}
if err != nil {
	log.Info(fmt.Sprintf("getting 'decimals' caused error in vm: %v", err))
}
if err == nil && !failed {
	int_output := new(uint8)
	err = erc20_token_abi.Unpack(int_output, "decimals", ret)
	if err != nil {
		log.Info(fmt.Sprintf("Contract %v: can't unpack decimals: %v", hex.EncodeToString(addr.Bytes()), err))
	} else {
		tok.decimals = int32(*int_output)
		tok.decimals_found = true
	}
}
```
But if you don't want to get your hands dirty by working with the EVM directly, you can check event signatures:
```
erc20_approval_signature,_ = hex.DecodeString("8c5be1e5ebec7d5bd14f71427d1e84f3dd0314c0f7b2291e5b200ac8c7c3b925")
erc20_transfer_signature,_ = hex.DecodeString("ddf252ad1be2c89b69c2b068fc378daa952ba7f163c4a11628f55a4df523b3ef")
```
These are located at event.Topics[0], so if Topics[0] matches one of the strings above, the probability that the contract is a token is high. If you collect all the events of a contract and derive a summary of which methods it uses, you can decide whether it is an ERC-20 or an ERC-721.
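One detail worth adding (my note, not part of the original answer): ERC-20 and ERC-721 share the same `Transfer` topic hash, because the event signature string is `Transfer(address,address,uint256)` in both. They differ in the number of indexed parameters — ERC-721 also indexes the `tokenId`, so its Transfer logs carry four topics instead of three. A minimal, self-contained Go sketch of that distinction:

```go
package main

import (
	"bytes"
	"encoding/hex"
	"fmt"
)

// Both token standards emit Transfer(address,address,uint256), so topic 0 is
// identical. ERC-20 indexes 2 parameters (from, to) -> 3 topics per log;
// ERC-721 indexes 3 (from, to, tokenId) -> 4 topics per log.
var transferSig, _ = hex.DecodeString(
	"ddf252ad1be2c89b69c2b068fc378daa952ba7f163c4a11628f55a4df523b3ef")

func classifyTransfer(topics [][]byte) string {
	if len(topics) == 0 || !bytes.Equal(topics[0], transferSig) {
		return "not a Transfer event"
	}
	switch len(topics) {
	case 3:
		return "ERC-20 style Transfer"
	case 4:
		return "ERC-721 style Transfer"
	default:
		return "unknown Transfer variant"
	}
}

func main() {
	from, to, tokenID := make([]byte, 32), make([]byte, 32), make([]byte, 32)
	fmt.Println(classifyTransfer([][]byte{transferSig, from, to}))          // ERC-20 style Transfer
	fmt.Println(classifyTransfer([][]byte{transferSig, from, to, tokenID})) // ERC-721 style Transfer
}
```

This heuristic assumes the final standard's event indexing; pre-standard contracts (CryptoKitties among them) may deviate, so where possible combine it with an ERC-165 `supportsInterface(0x80ac58cd)` call, `0x80ac58cd` being the ERC-721 interface ID.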
Upvotes: 1 <issue_comment>username_2: Yes, some ERC-721 contracts do not follow the standard. The reason for most of them is that they were deployed before the ERC-721 standard was finalized. One such example is CryptoKitties.
Use the ERC-721 Validator, which checks whether a smart contract fully follows the ERC-721 standard: <https://erc721validator.org>
Upvotes: 2