Q: Get Azure Webjob History - 403 Token invalid I am trying to retrieve the web job history of an Azure web job via REST using a .NET backend and the OAuth2 credentials flow (as described here
https://learn.microsoft.com/en-us/rest/api/appservice/web-apps/get-triggered-web-job-history-slot)
How do I need to authenticate correctly?
I retrieve the token as follows:
POST https://login.microsoftonline.com/{MySubscription}/oauth2/v2.0/token
client_id={MyApp}
&grant_type=client_credentials
&scope=https://management.azure.com/.default
&client_secret={myclient_secret}
I get a token back, however I get a 403 error message when I try to retrieve the resource:
GET https://management.azure.com/subscriptions/{MySubscription}/resourceGroups/{MyResource}/providers/Microsoft.Web/sites/{MyApp}/slots/{MySlot}/triggeredwebjobs/{MyWebjob}/history?api-version=2021-02-01
Authorization: Bearer {MyToken}
Client '{MyApp}' with object ID '{MyApp}' is not
authorized to perform the action
'Microsoft.Web/sites/slots/triggeredwebjobs/history/read' using the
scope
'/subscriptions/{MySubscription}/resourceGroups/{MyResource}/providers/Microsoft.Web/sites/{MyApp}/slots/{MySlot}/triggeredwebjobs/{MyWebjob}'
or the scope is invalid. If access was granted recently, please update
your credentials.
What am I doing wrong?
I already added the API-Permission
A: The "403 Token invalid" error usually occurs if you missed giving permissions to particular scope (Azure Service Management).
By giving this scope it enables you to access https://management.azure.com
To resolve this error, please follow below steps:
Go to Azure Ad ->your application -> API permissions -> Add permission -> Azure Service Management -> delegated permissions ->User impersonation -> Add
After giving these permissions try to retrieve the resource again, there won't be any error.
A: Since I didn't find a solution that worked with OAuth2 and the Credentials flow, I got it working with Basic Authentication. The username (userName) and password (userPWD) can be taken from the publishing profile of the respective app service.
GET https://{appservicename}.scm.azurewebsites.net/api/triggeredwebjobs/{jobName}/history
Authorization Basic ....
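For reference, a minimal C# sketch of that call (the appServiceName, jobName, userName and userPWD parameters are placeholders taken from the publish profile and the URL above; this only illustrates the Basic auth header, not the exact code used):
using System;
using System.Net.Http;
using System.Net.Http.Headers;
using System.Text;
using System.Threading.Tasks;

// Sketch only: credentials come straight from the App Service publish profile.
public static async Task<string> GetWebJobHistoryAsync(
    string appServiceName, string jobName, string userName, string userPWD)
{
    var credentials = Convert.ToBase64String(Encoding.ASCII.GetBytes($"{userName}:{userPWD}"));
    using var client = new HttpClient();
    client.DefaultRequestHeaders.Authorization = new AuthenticationHeaderValue("Basic", credentials);
    var url = $"https://{appServiceName}.scm.azurewebsites.net/api/triggeredwebjobs/{jobName}/history";
    var response = await client.GetAsync(url);
    response.EnsureSuccessStatusCode();
    return await response.Content.ReadAsStringAsync();
}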
| {
"language": "en",
"url": "https://stackoverflow.com/questions/71785136",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: Meteor loginWithFacebook permissions not being asked I am using Meteor.loginWithFacebook:
Meteor.loginWithFacebook({
// https://developers.facebook.com/docs/reference/fql/permissions/
requestPermissions: ['read_friendlists','user_about_me','user_birthday',
'user_education_history', 'user_friends', 'user_likes', 'user_photos',
'user_religion_politics', 'user_work_history'],
loginStyle: "popup"
}, function (err,res) {
if(err) alert(err)
else console.log(res)
});
But when my actual login box pops up, though the user is logged in and I get access to all their publicly available information, it doesn't actually request any of the specified permissions (and I therefore don't have access to them). Is there something in my code I need to change in order to have the permissions actually be requested?
A: @kittyminky, you are right.
The solution is:
Accounts.ui.config({ requestPermissions: { facebook: [the_permissons_you_want] } });
A: In my case, using Accounts.ui.config yields an error since I only have Accounts defined; Accounts.ui isn't defined for me.
Perhaps that's because I didn't add that package, but there must be a way without using .ui?
| {
"language": "en",
"url": "https://stackoverflow.com/questions/27339190",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: How to save All excel Charts in one pdf then Email it I would greatly appreciate it if you could help me with this. I am creating a dynamic Excel sheet and so far I have managed to create Excel charts using Excel VBA.
However, I am struggling with exporting all of the charts and one additional sheet to one pdf. I have around 15 excel charts and one excel sheet that I need to put in one pdf. And I need the excel sheet to be the first page in the pdf. Then email this pdf (all using vba).
Could you please help me on this! Your help is much appreciated. Thank you in advance!
A: Well, you could Publish the workbook to PDF; just make sure your first page is the first sheet.
Option Explicit
Sub PDF_And_Mail()
Dim FileName As String
'// Call the function with the correct arguments
FileName = Create_PDF(Source:=ActiveWorkbook, _
OverwriteIfFileExist:=True, _
OpenPDFAfterPublish:=False)
If FileName <> "" Then
Mail_PDF FileNamePDF:=FileName
End If
End Sub
'// Create PDF
Function Create_PDF(Source As Object, OverwriteIfFileExist As Boolean, _
OpenPDFAfterPublish As Boolean) As String
Dim FileFormatstr As String
Dim Fname As Variant
'// Test If the Microsoft Add-in is installed
If Dir(Environ("commonprogramfiles") & "\Microsoft Shared\OFFICE" _
& Format(Val(Application.Version), "00") & "\EXP_PDF.DLL") <> "" Then
'// Open the GetSaveAsFilename dialog to enter a file name for the pdf
FileFormatstr = "PDF Files (*.pdf), *.pdf"
Fname = Application.GetSaveAsFilename("", filefilter:=FileFormatstr, _
Title:="Create PDF")
'// If you cancel this dialog Exit the function
If Fname = False Then
Exit Function
End If
'If OverwriteIfFileExist = False we test if the PDF
'already exist in the folder and Exit the function if that is True
If OverwriteIfFileExist = False Then
If Dir(Fname) <> "" Then Exit Function
End If
'Now the file name is correct we Publish to PDF
Source.ExportAsFixedFormat _
Type:=xlTypePDF, _
FileName:=Fname, _
Quality:=xlQualityStandard, _
IncludeDocProperties:=True, _
IgnorePrintAreas:=False, _
OpenAfterPublish:=OpenPDFAfterPublish
'If Publish is Ok the function will return the file name
If Dir(Fname) <> "" Then
Create_PDF = Fname
End If
End If
End Function
'// Email Created PDF
Function Mail_PDF(FileNamePDF As String)
Dim GMsg As Object
Dim gConf As Object
Dim GmBody As String
Dim Flds As Variant
Set GMsg = CreateObject("CDO.Message")
Set gConf = CreateObject("CDO.Configuration")
gConf.Load -1 ' CDO Source Defaults
Set Flds = gConf.Fields
With Flds
.Item("http://schemas.microsoft.com/cdo/configuration/smtpusessl") = True
.Item("http://schemas.microsoft.com/cdo/configuration/smtpauthenticate") = 1
.Item("http://schemas.microsoft.com/cdo/configuration/sendusername") = "[email protected]"
.Item("http://schemas.microsoft.com/cdo/configuration/sendpassword") = "password"
.Item("http://schemas.microsoft.com/cdo/configuration/smtpserver") = "smtp.gmail.com"
.Item("http://schemas.microsoft.com/cdo/configuration/sendusing") = 2
.Item("http://schemas.microsoft.com/cdo/configuration/smtpserverport") = 25
.Update
End With
GmBody = "Hi there" & vbNewLine & vbNewLine
With GMsg
Set .Configuration = gConf
.To = "[email protected]"
.CC = ""
.BCC = ""
.From = "[email protected]"
.Subject = "Important message"
.TextBody = GmBody
.AddAttachment FileNamePDF
.Send
End With
End Function
Most of the code is from Ron de Bruin.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/34848393",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: Folium map not displaying Running on canopy version 1.5.5.3123
With;
Folium Version: 0.1.2, Build: 1
The following code;
import folium
import pandas as pd
LDN_COORDINATES = (51.5074, 0.1278)
from IPython.display import HTML
import shapefile
#create empty map zoomed in on London
LDN_COORDINATES = (51.5074, 0.1278)
map = folium.Map(location=LDN_COORDINATES, zoom_start=12)
display(map)
Returns
<folium.folium.Map at 0x10c01ae10>
But nothing else.
How do I get a map to display within an IPython notebook?
A: _build_map() doesn't exist anymore. The following code worked for me
import folium
from IPython.display import display
LDN_COORDINATES = (51.5074, 0.1278)
myMap = folium.Map(location=LDN_COORDINATES, zoom_start=12)
display(myMap)
A: Considering the above answers, another simple way is to use it with Jupyter Notebook.
For example (in a Jupyter notebook):
import folium
london_location = [51.507351, -0.127758]
m = folium.Map(location=london_location, zoom_start=15)
m
and see the result when evaluating 'm'.
A: Is there a reason you are using an outdated version of Folium?
This ipython notebook clarifies some of the differences between 1.2 and 2, and it explains how to put folium maps in iframes.
http://nbviewer.jupyter.org/github/bibmartin/folium/blob/issue288/examples/Popups.ipynb
And the code would look something like this (found in the notebook above, it adds a marker, but one could just take it out):
m = folium.Map([43,-100], zoom_start=4)
html="""
<h1> This is a big popup</h1><br>
With a few lines of code...
<p>
<code>
from numpy import *<br>
exp(-2*pi)
</code>
</p>
"""
iframe = folium.element.IFrame(html=html, width=500, height=300)
popup = folium.Popup(iframe, max_width=2650)
folium.Marker([30,-100], popup=popup).add_to(m)
m
The docs are up and running, too, http://folium.readthedocs.io/en/latest/
A: I've found this tutorial on Folium in iPython Notebooks quite helpful. The raw Folium instance that you've created isn't enough to get iPython to display the map- you need to do a bit more work to get some HTML that iPython can render.
To display in the iPython notebook, you need to generate the html with the myMap._build_map() method, and then wrap it in an iFrame with styling for iPython.
import folium
from IPython.display import HTML, display
LDN_COORDINATES = (51.5074, 0.1278)
myMap = folium.Map(location=LDN_COORDINATES, zoom_start=12)
myMap._build_map()
mapWidth, mapHeight = (400,500) # width and height of the displayed iFrame, in pixels
srcdoc = myMap.HTML.replace('"', '&quot;')
embed = HTML('<iframe srcdoc="{}" '
'style="width: {}px; height: {}px; display:block; width: 50%; margin: 0 auto; '
'border: none"></iframe>'.format(srcdoc, mapWidth, mapHeight))
embed
Where by returning embed as the output of the iPython cell, iPython will automatically call display.display() on the returned iFrame. In this context, you should only need to call display() if you're rendering something else afterwards or using this in a loop or a function.
Also, note that using map as a variable name might be confused with the .map() method of several classes.
A: You can also save the map as html and then open it with webbrowser.
import folium
import webbrowser
class Map:
def __init__(self, center, zoom_start):
self.center = center
self.zoom_start = zoom_start
def showMap(self):
#Create the map
my_map = folium.Map(location = self.center, zoom_start = self.zoom_start)
#Display the map
my_map.save("map.html")
webbrowser.open("map.html")
#Define coordinates of where we want to center our map
coords = [51.5074, 0.1278]
map = Map(center = coords, zoom_start = 13)
map.showMap()
A: There is no need to use iframes in 2022. To display the map, simply use the
{{ map | safe }} tag in the HTML and the _repr_html_() method in your view. It is also not necessary to save the map to the template.
sample.py
@app.route('/')
def index():
start_coords = (46.9540700, 142.7360300)
folium_map = folium.Map(location=start_coords, zoom_start=14)
return folium_map._repr_html_()
template.html
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="UTF-8">
<title>Title</title>
</head>
<body>
{{ folium_map | safe }}
</body>
</html>
A: I had the same error and nothing worked for me.
Finally I found it:
print(dir(folium.Map))
It shows that the save method does not exist; instead use
| {
"language": "en",
"url": "https://stackoverflow.com/questions/36969991",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "20"
} |
Q: Python-How I pass list or vector in pd.read_sql query New to Python data science.
Here I have a SQL Server extract; I am connecting via 'pyodbc.connect' and reading the data with pd.read_sql(.....SQL query) from SQL Server.
My intention is to use a list or vector (example below) in the SQL query's WHERE condition. How do I do that? It helps us avoid fetching millions of rows into memory.
I would like to know how I pass a number list and a string list (both have different use cases).
1st WHERE condition (string):
raw_data2 = {'age1': ['ten','twenty']}
df2 = pd.DataFrame(raw_data2, columns = ['age1'])
2nd WHERE condition (number):
raw_data2 = {'age_num': [10,20,30]}
df3 = pd.DataFrame(raw_data2, columns = ['age_num'])
Thank you for your help; this will reduce our fetch time to 80%.
A: Consider using pandas' read_sql and pass parameters to avoid type handling. Additionally, save all in a dictionary of dataframes with keys corresponding to the original raw_data keys and avoid flooding the global environment with many separate dataframes:
raw_data = {'age1': ['ten','twenty'],
'age_num': [10, 20, 30]}
df_dict = {}
for k, v in raw_data.items():
# BUILD PREPARED STATEMENT WITH PARAM PLACEHOLDERS
where = '{col} IN ({prm})'.format(col=k, prm=", ".join(['?' for _ in v]))
sql = 'SELECT * FROM mytable WHERE {}'.format(where)
print(sql)
# IMPORT INTO DATAFRAME
df_dict[k] = pd.read_sql(sql, conn, params = v)
# OUTPUT TOP ROWS OF EACH DF ELEM
df_dict['age1'].head()
df_dict['age_num'].head()
For separate dataframe objects:
def build_query(my_dict):
for k, v in my_dict.items():
# BUILD PREPARED STATEMENT WITH PARAM PLACEHOLDERS IN WHERE CLAUSE
where = '{col} IN ({prm})'.format(col=k, prm=", ".join(['?' for _ in v]))
sql = 'SELECT * FROM mytable WHERE {}'.format(where)
return sql
raw_data2 = {'age1': ['ten','twenty']}
# ASSIGNS QUERY
sql = build_query(raw_data2)
# IMPORT TO DATAFRAME PASSING PARAM VALUES
df2 = pd.read_sql(sql, conn, params = raw_data2['age1'])
raw_data3 = {'age_num': [10,20,30]}
# ASSIGNS QUERY
sql = build_query(raw_data3)
# IMPORT TO DATAFRAME PASSING PARAM VALUES
df3 = pd.read_sql(sql, conn, params = raw_data3['age_num'])
| {
"language": "en",
"url": "https://stackoverflow.com/questions/49698446",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
} |
Q: EF Core overwrite mechanic to send commands to database I want to use EF Core for my application, but instead of EF Core sending a SQL command to a database, it should simply send it to another application which will take care of multi-tenancy and choosing the right database (tenant) for this command.
So is there a way to override the point in EF Core where the command is already generated and WOULD now be sent to the given database, and instead of sending it there, just do other things?
A: EF Core Interceptors allow you to do just that:
public class QueryCommandInterceptor : DbCommandInterceptor
{
public override async ValueTask<InterceptionResult<DbDataReader>> ReaderExecutingAsync(
DbCommand command,
CommandEventData eventData,
InterceptionResult<DbDataReader> result,
CancellationToken cancellationToken = default)
{
var resultReader = await SendCommandToAnotherAppAsync(command);
return InterceptionResult<DbDataReader>.SuppressWithResult(resultReader);
}
private Task<DbDataReader> SendCommandToAnotherAppAsync(DbCommand command)
{
// TODO: Actual logic.
Console.WriteLine(command.CommandText);
return Task.FromResult<DbDataReader>(null!);
}
}
Make sure to override other methods like ScalarExecuting, NonQueryExecuting, sync and async variants.
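For completeness, a minimal sketch of wiring the interceptor up (the provider call and connection string are placeholders; AddInterceptors is available from EF Core 3.0 onwards):
protected override void OnConfiguring(DbContextOptionsBuilder optionsBuilder)
{
    optionsBuilder
        .UseSqlServer("<your connection string>")   // placeholder provider/connection
        .AddInterceptors(new QueryCommandInterceptor());
}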
A: Multitenancy is already supported in EF Core. The options are described in the Multi-tenancy page in the docs.
In this case, it appears a different database per tenant is used. Assuming there's a way/service to retrieve the current tenant name, e.g. ITenantService, the DbContext constructor can use it to load a different connection string per tenant. The entire process could be abstracted behind e.g. an ITenantConnectionService, but the docs show how this is done explicitly.
The context constructor will need the tenant name and access to configuration:
public ContactContext(
DbContextOptions<ContactContext> opts,
IConfiguration config,
ITenantService service)
: base(opts)
{
_tenantService = service;
_configuration = config;
}
Each time a new DbContext instance is created, the connection string is loaded from configuration based on the tenant's name:
protected override void OnConfiguring(DbContextOptionsBuilder optionsBuilder)
{
var tenant = _tenantService.Tenant;
var connectionStr = _configuration.GetConnectionString(tenant);
optionsBuilder.UseSqlite(connectionStr);
}
This is enough to use a different database per tenant.
Scope and lifecycle
These aren't a concern when a DbContext is used as a Scoped or Transient service.
By default AddDbContext registers DbContexts as scoped services. The scope in a web application is a single request. Once a request is processed, any DbContext instances created are disposed.
Connection Pooling
Connection pooling isn't a concern either. First of all, a DbContext doesn't represent a database connection. It's a disconnected Unit-of-Work. A connection is only opened when the context needs to load data from the database or persist changes when SaveChanges is called. If SaveChanges isn't called, nothing gets persisted. No connections remain open between loading and saving.
Consistency is maintained using optimistic concurrency. When the data is saved, EF Core checks to see if the rows being stored have changed since the time they were loaded. In the rare case that some other application/request changed the data, EF Core throws a DbUpdateConcurrencyException and the application has to decide what to do with the changed data.
This way, there's no need for a database transaction and thus no need for an open connection. When optimistic concurrency was introduced in ... the late 1990s, application scalability increased by several (as in more than 100, if not 1000) orders of magnitude.
Connection pooling is used to eliminate the cost of opening and closing connections. When DbConnection.Close() is called, the actual network connection isn't terminated. The connection is reset and placed in a pool of connections that can be reused.
That's a driver-level feature, not something controlled by EF Core. Connections are pooled by connection string and in the case of Windows Authentication, per user. This eliminates the chance of a connection meant for one user getting used by another one.
Connection pooling is controlled by ADO.NET and the database provider. Some drivers allow connection pooling while others don't. In those that do support it, it can be turned off using keywords in the connection string or driver settings.
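For example, with SqlClient the relevant keyword is Pooling (other providers may use a different keyword or setting), so a connection string that disables pooling could look like:
Server=myServer;Database=myDb;Integrated Security=true;Pooling=false;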
| {
"language": "en",
"url": "https://stackoverflow.com/questions/73809984",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: When using Object.assign, why only the inner object properties of copied object are changed, when changing the original object? I'm doing a shallow copy and trying to understand why only the inner object's properties are changed, given the fact that a shallow copy copies the reference from, let's say, car into copiedCar. So if I change a property on the car object, shouldn't that property also change on the copiedCar object?
const car = {
name: "Audi",
properties: {
color: "yellow",
wheels: 19
}
}
const copiedCar = Object.assign({}, car);
car.name = "BMW",
car.properties.color = "Green"
car.properties.wheels = 15
console.log(copiedCar)
//Here it will print:
//{
// name:"Audi",
// properties: {
// color:"Green",
// wheels:15
// }
//}
Why are only the properties of the inner object changed, and not also the property of the outer object?
| {
"language": "en",
"url": "https://stackoverflow.com/questions/66570327",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: How to programmatically install a jar file to startup on all operating systems - Java So I have come across a problem with installing my Java application to start up on all operating systems.
Firstly, I know this is possible on Windows by adding a new key to the registry (I think).
I believe the command 'reg add' works on Windows.
I'm not 100% sure about Mac, but I believe you can use a service wrapper?
And for Linux I have no clue.
Basically I want my application to be installed to start up when a user ticks a box, no matter which operating system it is run on.
Just some clarification on these, and if possible an example for each, would really assist me.
Any help with this is greatly appreciated.
Thanks.
A: Add a .desktop file to one of the following directories:
* $XDG_CONFIG_DIRS/autostart/ (/etc/xdg/autostart/ by default) for all users
* $XDG_CONFIG_HOME/autostart/ (~/.config/autostart/ by default) for a particular user
See the FreeDesktop.org Desktop Application Autostart Specification for details.
So for example,
PrintWriter out = new PrintWriter(new FileWriter(
System.getProperty("user.home")+"/.config/autostart/myapp.desktop"));
out.println("[Desktop Entry]");
out.println("Name=myapp");
out.println("Comment=Autostart entry for myapp");
out.println("Exec=" + installPath + "/bin/myapp_wrapper.sh");
out.println("Icon=" + installPath + "/myapp_icon.png");
out.flush();
out.close();
| {
"language": "en",
"url": "https://stackoverflow.com/questions/25699787",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: iOS NSURLSession Get Request JSON always showing outdated data I have the following doubts about the JSON data returned when using a "GET" versus a "POST" request. For the following URL, the JSON data is not always updated based on server changes (e.g. the database). For example, if I delete all the suggestion records from the database when I had 3 previously, it still returns 3 suggestion records in my JSON response body when I call dataTaskWithRequest.
However, if I change to POST, then the JSON response body will always be updated with the actual records from the server. In my server code (using CakePHP), I did not check for POST or GET data. Actually, it was intended to be a GET method, but for some reason only the POST method seems to always fetch the most up-to-date data as opposed to GET.
Below is my code from my iOS client, but I'm not too sure if it's very useful. I was wondering if there is a cache issue for GET requests as opposed to POST requests? However, I tried disabling the cache for the NSURLSessionConfiguration but it had no impact.
config.requestCachePolicy = NSURLRequestReloadIgnoringLocalCacheData;
The code base is below:
NSString *requestString = [NSString stringWithFormat:@"%@/v/%@.json", hostName, apptIDHash];
NSURL *url = [NSURL URLWithString:requestString];
NSMutableURLRequest *req = [[NSMutableURLRequest alloc] initWithURL:url];
NSURLSessionDataTask *dataTask = [self.session dataTaskWithRequest:req completionHandler:^(NSData *data, NSURLResponse *response, NSError *error){
if (!error) {
NSHTTPURLResponse *httpResp = (NSHTTPURLResponse*) response;
if (httpResp.statusCode == 200) {
NSError *jsonError;
NSDictionary *jsonObject = [NSJSONSerialization JSONObjectWithData:data options:NSJSONReadingAllowFragments error:&jsonError];
[self printJSONOutputFromDictionary:jsonObject];
if (!jsonError) {
block(jsonObject, nil);
}
else{
block(nil, jsonError);
}
}
else{
NSError *statusError = [self createServerUnavailableNSError:httpResp];
block(nil, statusError);
}
}
else{
block(nil, error);
}
}];
[dataTask resume];
In the above code fragment, the JSON body is always showing outdated data.
I really want to understand the issue, and would really appreciate if anyone could explain this issue for me.
A: Try adding the following request header (for an NSMutableURLRequest):
[req setValue:@"no-cache" forHTTPHeaderField:@"Cache-Control"];
I encountered the same problem as you and adding the above code solved the problem for me.
Taken from ASIHTTPRequest seems to cache JSON data always
| {
"language": "en",
"url": "https://stackoverflow.com/questions/24552481",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: Numeric comparison during merge in R Dataframe d1:
x y
4 10
6 20
7 30
Dataframe d2:
x z
3 100
6 200
9 300
How do I merge d1 and d2 by "x" where d1$x should be matched against exact match or the next higher number in d2$x. Output should look like:
x y z
4 10 200 # (4 is matched against next higher value that is 6)
6 20 200 # (6 is matched against 6)
7 30 300 # (7 is matched against next higher value that is 9)
If merge() cannot do this, then is there any other way to do this? For loops are painfully slow.
A: This is pretty straightforward using rolling joins with data.table:
require(data.table) ## >= 1.9.2
setkey(setDT(d1), x) ## convert to data.table, set key for the column to join on
setkey(setDT(d2), x) ## same as above
d2[d1, roll=-Inf]
# x z y
# 1: 4 200 10
# 2: 6 200 20
# 3: 7 300 30
A: Input data:
d1 <- data.frame(x=c(4,6,7), y=c(10,20,30))
d2 <- data.frame(x=c(3,6,9), z=c(100,200,300))
You basically wish to extend d1 by a new column. So let's copy it.
d3 <- d1
Next I assume that d2$x is sorted nondecreasingly and that max(d1$x) <= max(d2$x).
d3$z <- sapply(d1$x, function(x) d2$z[which(x <= d2$x)[1]])
Which reads: for each x in d1$x, get the smallest value from d2$x which is not smaller than x.
Under these assumptions, the above may also be written as (& should be a bit faster):
d3$z <- sapply(d1$x, function(x) d2$z[which.max(x <= d2$x)])
In result we get:
d3
## x y z
## 1 4 10 200
## 2 6 20 200
## 3 7 30 300
EDIT1: Inspired by @MatthewLundberg's cut-based solution, here's another one using findInterval:
d3$z <- d2$z[findInterval(d1$x, d2$x+1)+1]
EDIT2: (Benchmark)
Exemplary data:
set.seed(123)
d1 <- data.frame(x=sort(sample(1:10000, 1000)), y=sort(sample(1:10000, 1000)))
d2 <- data.frame(x=sort(c(sample(1:10000, 999), 10000)), z=sort(sample(1:10000, 1000)))
Results:
microbenchmark::microbenchmark(
{d3 <- d1; d3$z <- d2$z[findInterval(d1$x, d2$x+1)+1] },
{d3 <- d1; d3$z <- sapply(d1$x, function(x) d2$z[which(x <= d2$x)[1]]) },
{d3 <- d1; d3$z <- sapply(d1$x, function(x) d2$z[which.max(x <= d2$x)]) },
{d1$x2 <- d2$x[as.numeric(cut(d1$x, c(-Inf, d2$x, Inf)))]; merge(d1, d2, by.x='x2', by.y='x')},
{d1a <- d1; setkey(setDT(d1a), x); d2a <- d2; setkey(setDT(d2a), x); d2a[d1a, roll=-Inf] }
)
## Unit: microseconds
## expr min lq median uq max neval
## findInterval 221.102 1357.558 1394.246 1429.767 17810.55 100
## which 66311.738 70619.518 85170.175 87674.762 220613.09 100
## which.max 69832.069 73225.755 83347.842 89549.326 118266.20 100
## cut 8095.411 8347.841 8498.486 8798.226 25531.58 100
## data.table 1668.998 1774.442 1878.028 1954.583 17974.10 100
A: cut can be used to find the appropriate matches in d2$x for the values in d1$x.
The computation to find the matches with cut is as follows:
as.numeric(cut(d1$x, c(-Inf, d2$x, Inf)))
## [1] 2 2 3
These are the values:
d2$x[as.numeric(cut(d1$x, c(-Inf, d2$x, Inf)))]
[1] 6 6 9
These can be added to d1 and the merge performed:
d1$x2 <- d2$x[as.numeric(cut(d1$x, c(-Inf, d2$x, Inf)))]
merge(d1, d2, by.x='x2', by.y='x')
## x2 x y z
## 1 6 4 10 200
## 2 6 6 20 200
## 3 9 7 30 300
The added column may then be removed, if desired.
A: Try: sapply(d1$x,function(y) d2$z[d2$x > y][which.min(abs(y - d2$x[d2$x > y]))])
| {
"language": "en",
"url": "https://stackoverflow.com/questions/24099498",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3"
} |
Q: Serialize Point, PointF Objects I am new to the serialization concept and I am trying to serialize the PointF class as I need to send it over a socket connection (I am using ObjectOutputStream and ObjectInputStream). My problem is that no matter what values I have in my PointF object while sending, after receiving I get the default values.
For example, if I send PointF(1.0, 4.0) I get (0.0, 0.0).
Following is my code for implementing Serializable.
public class MyPointF extends PointF implements Serializable {
/**
*
*/
private static final long serialVersionUID = -455530706921004893L;
public MyPointF() {
super();
}
public MyPointF(float x, float y) {
super(x, y);
}
}
Can anybody point out the problem?
Also, after searching a little I found that this happens with the android.canvas.Path class as well. Please correct me where I am wrong.
A: Your superclass PointF is not serialisable. That means that the following applies:
To allow subtypes of non-serializable classes to be serialized, the subtype may assume responsibility for saving and restoring the state of the supertype's public, protected, and (if accessible) package fields. The subtype may assume this responsibility only if the class it extends has an accessible no-arg constructor to initialize the class's state. It is an error to declare a class Serializable if this is not the case. The error will be detected at runtime.
During deserialization, the fields of non-serializable classes will be initialized using the public or protected no-arg constructor of the class. A no-arg constructor must be accessible to the subclass that is serializable. The fields of serializable subclasses will be restored from the stream.
See: http://docs.oracle.com/javase/7/docs/api/java/io/Serializable.html
You will need to look at readObject and writeObject:
Classes that require special handling during the serialization and deserialization process must implement special methods with these exact signatures:
private void writeObject(java.io.ObjectOutputStream out)
throws IOException
private void readObject(java.io.ObjectInputStream in)
throws IOException, ClassNotFoundException;
See also here: Java Serialization with non serializable parts for more tips and tricks.
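For illustration, a minimal sketch of what those methods could look like for the MyPointF subclass above (assuming the public x and y fields inherited from PointF are the only state that needs to be preserved):
private void writeObject(java.io.ObjectOutputStream out) throws java.io.IOException {
    out.defaultWriteObject();
    // PointF itself is not Serializable, so its fields must be written explicitly.
    out.writeFloat(x);
    out.writeFloat(y);
}

private void readObject(java.io.ObjectInputStream in)
        throws java.io.IOException, ClassNotFoundException {
    in.defaultReadObject();
    // Restore the superclass state that the default mechanism skipped.
    x = in.readFloat();
    y = in.readFloat();
}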
A: I finally found the solution. Thanks @Greg and the other comment that has now been deleted. The solution is that instead of extending these objects we can make stub objects. As I was calling super from the constructor, the x and y fields were inherited from the base class that is not serializable, so they were not serialized and their values were not sent.
So I modified my class as per your suggestions:
public class MyPointF implements Serializable {
/**
*
*/
private static final long serialVersionUID = -455530706921004893L;
public float x;
public float y;
public MyPointF(float x, float y) {
this.x = x;
this.y = y;
}
}
| {
"language": "en",
"url": "https://stackoverflow.com/questions/21179794",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
} |
Q: Chrome DevTools "isSystemKey" Debugger Attribute While looking into chrome.debugger.sendCommand documentation, I came across an attribute, isSystemKey. I wonder whether this could be used to verify whether a key is typed by a human or program (similar to the isTrusted event attribute), but there doesn't appear to be any documentation on it, in the linked URL or elsewhere. Not a mention of its usage, as far as I can see.
So what does this boolean attribute do?
Some example code to illustrate, given a character c, a tab id tab_id and the debugger currently attached to that same tab:
let options = {
'key': c,
'type': 'keyDown',
'nativeVirtualKeyCode': c.charCodeAt(),
'windowsVirtualKeyCode': c.charCodeAt(),
'macVirtualKeyCode': c.charCodeAt(),
'isSystemKey': true // What does this change, compared to its falsy default?
}
// Pressing key
chrome.debugger.sendCommand({tabId: tab_id}, "Input.dispatchKeyEvent", options, () => {
// Releasing key
options.type = 'keyUp';
chrome.debugger.sendCommand({tabId: tab_id}, "Input.dispatchKeyEvent", options, () => {
// Detaching debugger
chrome.debugger.detach({tabId: tab_id});
});
});
| {
"language": "en",
"url": "https://stackoverflow.com/questions/72861886",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: Is instabot.py safe to use? I am unfamiliar with the instabot.py Python package. I am wondering if there are any security issues with this package, like possibly getting information leaked. I am also wondering how the API works if there are a lot of people using this package. Wouldn't you need your own personal Instagram API token? I am confused by the whole concept, and if anyone could explain even just a little bit it would be much appreciated.
A: Bots are now easily detected by Instagram. Your account could be banned for 3 days, 7 days, 30 days, or permanently if Instagram detects too many attempts.
Usually bots simulate a browser via Selenium and then create a "browse like a human" bot to create likes, follow, unfollow, etc.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/67041693",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: Refreshing SBT project in Intellij Idea switches Java to 7 I had a Play 2.4.2 Scala project built with J7 in IntelliJ IDEA, then I switched the project to Play 2.5.0 with J8. I have changed J7 -> J8 everywhere I could think of, but for some reason, when I refresh the project in the SBT projects window in IntelliJ IDEA (and it also refreshes automatically when I change build.sbt), it sets the Java version back to 7 (both the Project SDK and Project language level options in the Project Structure window are set back).
I've probably missed some option, but I cannot find anything that still points to J7. Any idea?
I've tried to put this in build.sbt, but it did not fix the issue:
scalacOptions ++= Seq("-target:jvm-1.8")
SBT compiles the project fine when it is compiled from the SBT terminal, but I prefer to use the IntelliJ IDEA run option.
A: We're looking into it. Meanwhile, one workaround is editing .idea/sbt.xml and change the jdk option line to <option name="jdk" value="1.8" /> (or whatever you named the SDK in your project structure) and then refreshing your project.
Update: The latest Nightlies of the Scala plugin change how the project JDK is set, which should solve this.
A: IntelliJ has a closed ticket for this issue: https://youtrack.jetbrains.com/issue/SCL-6823
I have created a new ticket:
https://youtrack.jetbrains.com/issue/SCL-10631
| {
"language": "en",
"url": "https://stackoverflow.com/questions/36220873",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4"
} |
Q: Where is the Upgrade Path for C# 6.0? Is it safe yet? I maintain a large number of projects, all currently written in .Net 4.5 / c# 5. I'm interested in upgrading to C# 6.0, but cannot find any documentation on the safety of doing so.
From what I've read, upgrading to VS 2015 / C# 6 / .Net 4.6 means building our code using Roslyn / RyuJit. (Or with msbuild 14, which uses Roslyn under the hood).
Roslyn, however, has a massive amount of open issues currently: https://github.com/dotnet/roslyn/issues/7278 Many of which are guaranteed to impact our codebase.
Similarly, RyuJit appears to be completely unstable, as recently as 6 months ago (http://nickcraver.com/blog/2015/07/27/why-you-should-wait-on-dotnet-46/)
I simply cannot find any documentation, anywhere, on safely upgrading to C# 6.0 / .Net 4.6, and yet find it extremely strange that these things are already released to RTM and VS 2015 with so many bugs out in the open.
Help?
A:
Many of which are guaranteed to impact our codebase.
I wouldn't be so sure. We are building not just Roslyn with itself, but the rest of Visual Studio, the entire .NET Framework, Windows, ASP.NET, and more with Roslyn, and have been doing so for two years now. We did test passes where we literally downloaded thousands of projects from GitHub so we could verify that code that built with the old compiler would build with the new compiler. We take compatibility with the old compilers very, very seriously.
with so many bugs out in the open.
There's a few things to know about that bug count:
* That includes not just the compiler, but also IDE, refactorings, debugger, and a lot of other components.
* That count includes bugs for features that haven't shipped yet. For example, a few days ago we started testing a new compiler feature that we hope to ship in C# 7, and filed 30-40 bugs on various IDE features that need to be updated to know about it.
* We file bugs for things that don't affect you. For example, any time we have a problem with one of our automated tests, we file a bug. Any time somebody realizes "huh, we could clean this up", we file a bug. We even use "issues" to discuss future language proposals as a forum, rather than a bug itself. Right now I have an issue against me which is to write a blog post that needs to be written.
* Some of these aren't actually bad bugs; they include issues like "I wish the compiler gave better error messages."
* Many of them involve running Roslyn on Linux or Mac, which is still actively under development.
If we filter to the actual list of "compiler" bugs and filter out compiler bugs for features that haven't shipped, the count is much much smaller.
And the most important bit:
* There were always bugs, in every release of the compiler. The compiler is a piece of software being written by humans, so it is by definition not perfect. We're just choosing to air our dirty laundry on GitHub instead of hiding it behind our firewall!
This is of course not to say that you won't hit a bug, but we've tried our best to make Roslyn as best a compiler as we can, with the best compatibility we can. If we had to write a document that said "here's all the ways your code isn't compatible", that would mean we failed at that. As usual, always test a bit of stuff before deploying, but that's no different than anything else.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/35729254",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "7"
} |
Q: Android FTP connection problems Hi, I have been trying for two days to get a simple FTP connection to transfer a small XML file. I have tried lots of different examples of code, but all seem to give the same errors.
Main FTP class code:
public class MyFTPClientFunctions {
public FTPClient mFTPClient = null;
public boolean ftpConnect(String host, String username, String password, int port) {
try {
mFTPClient = new FTPClient();
// connecting to the host
mFTPClient.connect(host, port);
// now check the reply code, if positive mean connection success
if (FTPReply.isPositiveCompletion(mFTPClient.getReplyCode())) {
// login using username & password
boolean status = mFTPClient.login(username, password);
mFTPClient.setFileType(FTP.BINARY_FILE_TYPE);
mFTPClient.enterLocalPassiveMode();
return status;
}
//Log.d(TAG, "Error: could not connect to host " + host);
} catch (SocketException e) {
e.printStackTrace();
} catch (IOException e) {
e.printStackTrace();
}
return false;
}
private Context getApplicationContext() {
return null;
}
}
Main Activity code to send
send.setOnClickListener(new View.OnClickListener() {
@Override
public void onClick(View v) {
//set up FTP file transfer here
MyFTPClientFunctions ftpSend = new MyFTPClientFunctions();
ftpSend.ftpConnect("xxxxxx.asuscomm.com","admin","xxxxxxxxxx",21);
}
LogCat messages
06-22 12:09:21.460 17329-17329/com.example.rats.moham_2 E/AndroidRuntime﹕ FATAL EXCEPTION: main
Process: com.example.rats.moham_2, PID: 17329
android.os.NetworkOnMainThreadException
at android.os.StrictMode$AndroidBlockGuardPolicy.onNetwork(StrictMode.java:1147)
at java.net.InetAddress.lookupHostByName(InetAddress.java:418)
at java.net.InetAddress.getAllByNameImpl(InetAddress.java:252)
at java.net.InetAddress.getByName(InetAddress.java:305)
at org.apache.commons.net.SocketClient.connect(SocketClient.java:203)
at com.example.rats.moham_2.MyFTPClientFunctions.ftpConnect(MyFTPClientFunctions.java:25)
at com.example.rats.moham_2.MainActivity$3.onClick(MainActivity.java:155)
at android.view.View.performClick(View.java:4780)
at android.view.View$PerformClick.run(View.java:19866)
at android.os.Handler.handleCallback(Handler.java:739)
at android.os.Handler.dispatchMessage(Handler.java:95)
at android.os.Looper.loop(Looper.java:135)
at android.app.ActivityThread.main(ActivityThread.java:5257)
at java.lang.reflect.Method.invoke(Native Method)
at java.lang.reflect.Method.invoke(Method.java:372)
at com.android.internal.os.ZygoteInit$MethodAndArgsCaller.run(ZygoteInit.java:903)
at com.android.internal.os.ZygoteInit.main(ZygoteInit.java:698)
A: This exception is usually thrown if you are using the network on the main thread.
Please use Async Tasks.
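A minimal sketch of moving the connect call off the main thread (android.os.AsyncTask was the standard approach at the time; the host and credentials are the placeholders from the question):
final MyFTPClientFunctions ftpSend = new MyFTPClientFunctions();
new AsyncTask<Void, Void, Boolean>() {
    @Override
    protected Boolean doInBackground(Void... params) {
        // The network call now runs off the main thread.
        return ftpSend.ftpConnect("xxxxxx.asuscomm.com", "admin", "xxxxxxxxxx", 21);
    }

    @Override
    protected void onPostExecute(Boolean connected) {
        // Back on the UI thread; react to the result here if needed.
    }
}.execute();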
| {
"language": "en",
"url": "https://stackoverflow.com/questions/30978033",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "-2"
} |
Q: MongoDB options in connection string are being interpreted as the database name I am trying to set the maxPoolSize via connection string in MongoDB following this piece of documentation. Here is my connection string:
mongodb://localhost:27017/databaseName?maxPoolSize=200
However, instead of having the database databaseName with the maxPoolSize equals to 200, I'm getting a database called databaseName?maxPoolSize=200. This is, Mongo is getting everything (name + options) as the database name.
Some info:
* Mongo version: 3.2.10
* Connecting using Morphia 1.1.0
I will be happy to provide any further information.
A: if you are doing
MongoClient client = new MongoClient(
"mongodb://localhost:27017/databaseName?maxPoolSize=200");
then don't do that; instead do the following:
MongoClient client = new MongoClient(
new MongoClientURI(
"mongodb://localhost:27017/databaseName?maxPoolSize=200"));
because you need to tell Mongo that you are passing some options along with the connection string.
If you think I misunderstood your question, please post the piece of code where you are trying to get a connection.
A: You can try something like this.
MongoClientURI uri = new MongoClientURI("mongodb://localhost:27017/databaseName?maxPoolSize=200");
MongoClient mongoClient = new MongoClient(uri);
Morphia morphia = new Morphia();
Datastore datastore = morphia.createDatastore(mongoClient, "dbname");
Alternatively
MongoClientOptions.Builder options = new MongoClientOptions.Builder();
//set your connection option here.
options.connectionsPerHost(200); //max pool size
MongoClient mongoClient = new MongoClient(new ServerAddress("localhost", 27017), options.build());
Morphia morphia = new Morphia();
Datastore datastore = morphia.createDatastore(mongoClient, "dbname");
| {
"language": "en",
"url": "https://stackoverflow.com/questions/40664652",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: VS Code crashes when I close windows PowerShell I mostly open VS Code with the "code ." command from PowerShell. But today when I opened VS Code and closed PowerShell, VS Code also crashed:
Error message:
(node:4628) Electron: Loading non-context-aware native module in renderer: '\?\C:\Users\MuhammadQasim\AppData\Local\Programs\Microsoft VS Code\resources\app\node_modules.asar.unpacked\spdlog\build\Release\spdlog.node'. This is deprecated, see https://github.com/electron/electron/issues/18397.
(node:9792) Electron: Loading non-context-aware native module in renderer: '\?\C:\Users\MuhammadQasim\AppData\Local\Programs\Microsoft VS Code\resources\app\node_modules.asar.unpacked\spdlog\build\Release\spdlog.node'. This is deprecated, see https://github.com/electron/electron/issues/18397.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/64737825",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: How do I validate $_GET to allow only specific key Let's say my current URL is
http://domain.com/category/cars/?page=2
How do I validate on this page to allow only $_GET['page']?
If user type on URL something below, will go to error page.
http://domain.com/category/cars/?page=2&bar=foo
http://domain.com/category/cars/?foo=bar&page=2
http://domain.com/category/cars/?foo=bar&bar=foo
Let me know..
A: Well, you could use:
if( count($_GET) > 1 || !isset($_GET['page'])) { /*error*/ }
A: You should care only about the GET/POST variables used in your application. Validate and escape them accordingly, and check if they're set. The rest should be ignored - you don't need to care about them or display errors.
A: $allowedKeys = array('page');
$_GET = array_intersect_key($_GET, array_flip($allowedKeys));
You won't have to worry about your undesired parameters and values, as they will not be accepted via the $_GET method. With the above code, the only allowed $_GET parameters is 'page'
A: I might do this:
<?php
if ($_GET)
{
foreach($_GET as $key=>$value)
{
if($key!='page')
{
$error=true;
}
}
}
?>
| {
"language": "en",
"url": "https://stackoverflow.com/questions/9322100",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
} |
Q: How do I check if a value is changed a certain amount within a second? I'm monitoring the Pitch and Roll values in my application and I want to see if they change by 5 degrees within a second and then run a process. Currently my code looks like this:
CMAttitude *attitude;
CMDeviceMotion *motion = scoringManage.deviceMotion;
attitude = motion.attitude;
basePitch = degrees(attitude.pitch);
baseRoll = degrees(attitude.roll);
if((pitchfloat >= basePitch+5) || (pitchfloat <= basePitch-5)) {
}
if((rollfloat >= baseRoll+5) || (rollfloat <= baseRoll-5)) {
}
That is called by:
yprTime = [NSTimer scheduledTimerWithTimeInterval:(1.0/1.0) target:self selector:@selector(yprscore) userInfo:nil repeats:YES];
This process runs every second along with my timer but when the value is changed it will run that loop many times.
The Problem is that the if statements run like 20 times too many within that second.
A: If you only want it to run once when the value has changed, you need to set up a BOOL. Within your 'if pitch > 5' statement, set up another if that checks that BOOL:
if((pitchfloat >= basePitch+5) || (pitchfloat <= basePitch-5)) {
if (firstTimeBOOLCheckisTrue == NO) {
firstTimeBOOLCheckisTrue = YES;
[self doSomething];
}
}
- (void)doSomething {
    if (imReadytoCheckPitchAgain == YES) {
        firstTimeBOOLCheckisTrue = NO;
    }
}
| {
"language": "en",
"url": "https://stackoverflow.com/questions/17277162",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: Reading in a table with non-standard row names in R? I have a .txt file that looks like this:
xyz ghj asd qwe
a / b: 1 2 3 4
c / d: 5 6 7 8
e / f: 9 10 11 12
...
...
I'm trying to use read.table(header = T) but it seems to be misinterpreting the row names. Is there a way to deal with this in read.table(), or should I just use readLines()?
A: There is no option to just skip a few characters in each row using a read.table option.
Instead, you can call read.table twice, once for all the data after the first row, and the second time for the header.
Where your data are in a file called "test.txt", you would do:
library(magrittr)
tmp <- read.table(file="test.txt", sep="", stringsAsFactors = FALSE, skip=1)[, -c(1:3)] %>%
setNames(read.table(file="test.txt", sep="", stringsAsFactors = FALSE, nrows=1))
> tmp
xyz ghj asd qwe
1 1 2 3 4
2 5 6 7 8
3 9 10 11 12
>
Package magrittr is what gives you the pipe operator %>% that allows you to read the data and the header separately, but put them together in a single line. If you have a sufficiently-new R version you can use the |> operator instead, without the magrittr package.
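For instance, an equivalent two-step version without the pipe (just a sketch of the same idea, with the same "test.txt" placeholder) would be:
tmp <- read.table(file="test.txt", sep="", stringsAsFactors = FALSE, skip=1)[, -c(1:3)]
names(tmp) <- unlist(read.table(file="test.txt", sep="", stringsAsFactors = FALSE, nrows=1))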
| {
"language": "en",
"url": "https://stackoverflow.com/questions/71834615",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
} |
Q: Difference between use of sleep thread and refresh page in php as server event source I have built a chat application using server-sent events. But SSE is not supported across all browsers, so I was planning to use long-polling AJAX to poll the source page for browsers which do not support SSE.
What the server page does is check the DB for new messages and output them using echo statements.
Polling is done by refreshing the page every X seconds using header("refresh: X;");
I saw another approach, which uses:
while(1){
echo statemenst....
echo statemenst....
sleep(5);
}
So my question is: if I send an AJAX request with a very long waiting time, will it receive a response if header("refresh: X;"); is used, or do I need to use the infinite loop? And if the page refreshes, will the requests to that page that are waiting for a response be affected in some way?
function waitForMsg(){
/* This requests the url "msgsrv.php"
When it complete (or errors)*/
$.ajax({
type: "GET",
url: "msgsrv.php",
async: true, /* If set to non-async, browser shows page as "Loading.."*/
cache: false,
timeout:50000, /* Timeout in ms */
success: function(data){ /* called when request to barge.php completes */
addmsg("new", data); /* Add response to a .msg div (with the "new" class)*/
setTimeout(
waitForMsg, /* Request next message */
1000 /* ..after 1 seconds */
);
},
error: function(XMLHttpRequest, textStatus, errorThrown){
addmsg("error", textStatus + " (" + errorThrown + ")");
setTimeout(
waitForMsg, /* Try again after.. */
15000); /* milliseconds (15seconds) */
}
});
};
| {
"language": "en",
"url": "https://stackoverflow.com/questions/33263265",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
} |
Q: Create list of dictionaries when values of original dictionary are lists I have a dictionary where the values for each key are a list. I would like to extract a key value pair for each key in the dictionary with its corresponding element value from the list, and then store individual dictionaries in a new list.
So, if I have a simple dictionary like the following:
my_dict = {'Folder': ['2021-03-12_020000', '2021-03-12_020000', '2021-03-12_020000'],
'Filename': ['2021-03-12_020000-frame79.jpg', '2021-03-12_020000-frame1.jpg', '2021-03-12_020000-frame39.jpg'],
'Labeler': ['Labeler 2', 'Labeler 2', 'Labeler 1']}
My desired output list would be a list of dictionaries, where each dictionary has the keys from my_dict with the corresponding value from the original values lists in my_dict:
final_list = [{'Folder':'2021-03-12_020000', 'Filename': '2021-03-12_020000-frame79.jpg', 'Labeler': 'Labeler 2'},
{'Folder':'2021-03-12_020000', 'Filename': '2021-03-12_020000-frame1.jpg', 'Labeler': 'Labeler 2'},
{'Folder':'2021-03-12_020000', 'Filename': '2021-03-12_020000-frame39.jpg', 'Labeler': 'Labeler 1'}]
A: For dynamic keys and values:
my_dict = {'Folder': ['2021-03-12_020000', '2021-03-12_020000', '2021-03-12_020000'],
'Filename': ['2021-03-12_020000-frame79.jpg', '2021-03-12_020000-frame1.jpg', '2021-03-12_020000-frame39.jpg'],
'Labeler': ['Labeler 2', 'Labeler 2', 'Labeler 1']}
new_dict = {}
for key in my_dict:
for idx, value in enumerate(my_dict[key]):
if idx not in new_dict:
new_dict[idx] = [(key,value)]
else:
new_dict[idx].append((key,value))
new_list = []
for key,value in new_dict.items():
new_list.append(dict(value))
print(new_list)
output
[{'Folder': '2021-03-12_020000', 'Filename': '2021-03-12_020000-frame79.jpg', 'Labeler': 'Labeler 2'}, {'Folder': '2021-03-12_020000', 'Filename': '2021-03-12_020000-frame1.jpg', 'Labeler': 'Labeler 2'}, {'Folder': '2021-03-12_020000', 'Filename': '2021-03-12_020000-frame39.jpg', 'Labeler': 'Labeler 1'}]
A: You can use zip for the task:
my_dict = {
"Folder": ["2021-03-12_020000", "2021-03-12_020000", "2021-03-12_020000"],
"Filename": [
"2021-03-12_020000-frame79.jpg",
"2021-03-12_020000-frame1.jpg",
"2021-03-12_020000-frame39.jpg",
],
"Labeler": ["Labeler 2", "Labeler 2", "Labeler 1"],
}
final_list = [
{"Folder": a, "Filename": b, "Labeler": c}
for a, b, c in zip(
my_dict["Folder"], my_dict["Filename"], my_dict["Labeler"]
)
]
print(final_list)
Prints:
[
{
"Folder": "2021-03-12_020000",
"Filename": "2021-03-12_020000-frame79.jpg",
"Labeler": "Labeler 2",
},
{
"Folder": "2021-03-12_020000",
"Filename": "2021-03-12_020000-frame1.jpg",
"Labeler": "Labeler 2",
},
{
"Folder": "2021-03-12_020000",
"Filename": "2021-03-12_020000-frame39.jpg",
"Labeler": "Labeler 1",
},
]
A: Here is code assuming all items are interrelated.
my_dict = {'Folder': ['2021-03-12_020000', '2021-03-12_020000', '2021-03-12_020000'],
'Filename': ['2021-03-12_020000-frame79.jpg', '2021-03-12_020000-frame1.jpg', '2021-03-12_020000-frame39.jpg'],
'Labeler': ['Labeler 2', 'Labeler 2', 'Labeler 1']}
result_dict = []
for i, (folder, filename, labler) in enumerate(zip(my_dict['Folder'], my_dict['Filename'], my_dict['Labeler'])):
result_dict.append({"Folder":folder,'Filename':filename,"Labeler":labler })
print(result_dict)
output:
[
{
"Folder": "2021-03-12_020000",
"Filename": "2021-03-12_020000-frame79.jpg",
"Labeler": "Labeler 2"
},
{
"Folder": "2021-03-12_020000",
"Filename": "2021-03-12_020000-frame1.jpg",
"Labeler": "Labeler 2"
},
{
"Folder": "2021-03-12_020000",
"Filename": "2021-03- 12_020000-frame39.jpg",
"Labeler": "Labeler 1"
}
]
| {
"language": "en",
"url": "https://stackoverflow.com/questions/74719130",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: MVC View Lookup Caching. How long is it cached? and where is it cached? We are optimising a site and have read about the issue of the initial view lookup taking a long time. Subsequent lookups of the views are then much faster. Mini-profiler shows that a lot of the time is in the initial find view (I know I can use a ~ path to reduce this) and whatever else is done at this stage.
Where is the caching done? How long are view lookups etc cached? Can I see what is cached? Can we do anything to cause it to pre-load so there isn't a delay?
We have many views that are often not visited for hours and I don't want sudden peaks and troughs in performance.
We are using Azure and have a number of web role instances. Can I assume that each web role has its own cache of the view lookup? Can we centralise the caching so that it only occurs once per application?
Also I read MVC4 is faster at finding views? Does anyone have any figures?
A: The default cache is 15min and is stored in the HttpContext.Cache, this is all managed by the System.Web.Mvc.DefaultViewLocationCache class. Since this uses standard ASP.NET caching you could use a custom cache provider that gets its cache from WAZ AppFabric Cache or the new caching preview (there is one on NuGet: http://nuget.org/packages/Glav.CacheAdapter). Using a shared cache will make sure that only 1 instance needs to do the work of resolving the view. Or you could go and build your own cache provider.
Running your application in release mode, clearing unneeded view engines, writing the exact path instead of simply calling View, ... are all ways to speed up the view lookup process. Read more about it here:
*
*http://samsaffron.com/archive/2011/08/16/Oh+view+where+are+thou+finding+views+in+ASPNET+MVC3+
*http://blogs.msdn.com/b/marcinon/archive/2011/08/16/optimizing-mvc-view-lookup-performance.aspx
You can pre-load the view locations by adding a key for each view to the cache. You should format it as follows (where this is the current VirtualPathProviderViewEngine):
string.Format((IFormatProvider) CultureInfo.InvariantCulture, ":ViewCacheEntry:{0}:{1}:{2}:{3}:{4}:", (object) this.GetType().AssemblyQualifiedName, (object) prefix, (object) name, (object) controllerName, (object) areaName);
I don't have any figures if MVC4 is faster, but it looks like the DefaultViewLocationCache code is the same as for MVC3.
A: To increase my cachetime to 24 hours I used the following in the Global.asax
var viewEngine = new RazorViewEngine
{ViewLocationCache = new DefaultViewLocationCache(TimeSpan.FromHours(24))};
//Only allow Razor view to improve for performance
ViewEngines.Engines.Clear();
ViewEngines.Engines.Add(viewEngine);
This article, ASP.NET MVC Performance Issues with Render Partial, was also interesting.
I will look at writing my own ViewLocationCache to take advantage of shared Azure caching.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/11786808",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3"
} |
Q: Embedded Google Map in modal is showing blank until window resizes manually I have a google map that is embedded into a modal, but when clicking to open the modal, the map appears blank.
After googling, I thought I found a solution using jQuery to resize the window again, but it doesn't seem to work.
The current code I'm using:
component.html:
<div class="col s12 m6">
<a href="#" onclick="$('#map-modal').modal('open');">
<div class="card">
<div class="card-image">
<img src="./assets/1.jpg" class="z-depth-4">
<span id="card-title" class="card-title">Map's and Directions</span>
</div>
</div>
</a>
</div>
<div id="map">
<script type="text/javascript">
google.maps.event.trigger(map, 'resize');
</script>
</div>
component.css
#map {
width: 100vw;
height: 100vh;
max-width: 100%;
margin: 0;
padding: 0;
}
component.ts
export class AppComponent implements OnInit {
title = 'app';
modalActions = new EventEmitter<string|MaterializeAction>();
ngOnInit() {
var mapProp = {
center: new google.maps.LatLng(-34.397, 150.644),
zoom: 11,
mapTypeId: google.maps.MapTypeId.ROADMAP
};
var map = new google.maps.Map(document.getElementById("map"), mapProp);
$(document).ready(function() {
google.maps.event.addListener(map, "idle", function(){
google.maps.event.trigger(map, 'resize');
});
});
}
}
index.html
<script src="http://maps.googleapis.com/maps/api/js?key=APIKEY"></script>
Can anyone see what I'm missing?
I'm doing the app in Angular 2, by the way.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/48431346",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: Join table, for which the table name has to be grabbed in the same query This question is not about whether the DB setup is as it should be; I'm not happy with how it is, but it is what it is, and a refactor will not be done by the DBA at this moment.
What I am looking for is a way to join a table whose name I do not know in advance; the name is stored in the table I want to join against.
So:
TABLE transactions
trans_id autherizer
001Bar payment_provider_a
001Foo payment_provider_b
TABLE payment_provider_a
trans_id amount
001Bar 50
TABLE payment_provider_b
trans_id amount
001Foo 50
The table names are fictional, but the setup is identical. There is a transaction table, which stores a transaction_id and a payment_provider string name (with a lot of additional data, which is not relevant for the question).
Would there be any way to get all the data from the transaction table and, in that query, directly join the payment_provider table, for which we only know from the transaction table which table it is?
I have tagged it with PHP as well, since I want to make the call with PDO. Whole PHP snippets are not required, but if you insist ;). A push in the right direction for the query itself would be sufficient. I am aware that I am lacking an example of what I have tried. But to be honest I haven't tried that much, because I can't really think of anything; it's the first time I have needed this kind of query.
A: Not overly clean, but you can try this:
SELECT * FROM transactions t JOIN
(
SELECT 'payment_provider_a' AS name, a.* FROM payment_provider_a a
UNION
SELECT 'payment_provider_b' AS name, b.* FROM payment_provider_b b
) p ON t.payment_provider = p.name AND t.trans_id = p.trans_id
Note that all payment_provider_x tables must have the same number and types of columns. Otherwise you'll need to select only the fields that are actually common (there are ways around this if needed).
| {
"language": "en",
"url": "https://stackoverflow.com/questions/24439223",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: Debugging visual basic dll library from asp.net hosted in IIS I have the following problem.
I have some project written in Visual Basic (not Visual Basic .NET but simple Visual Basic - sic!). I can compile it and generate a dll.
Then inside my web application I add a reference to this dll library. When I run my web application hosted in the default Visual Studio server, everything is fine and I can debug my Visual Basic project. However, when I host my web application in IIS, I can't. The code does not stop at my breakpoint.
My ASP.NET app catches an exception when I try to execute some method from the mentioned library, which is something like:
Unable to cast COM object of type 'xxx' to interface type 'yyy'. This operation failed because the QueryInterface call on the COM component for the interface with IID '{4C2875B5-3265-306B-9C74-1BEC98986B1A}' failed due to the following error: Error loading type library/DLL. (Exception from HRESULT: 0x80029C4A (TYPE_E_CANTLOADLIBRARY)).
Can someone please help me because I've been struggling for 2 days without success.
A: Are you debugging using "Attach to Process"?
First determine the w3wp.exe process:
For IIS6
*
*Start > Run > Cmd
*Go To Windows > System32
*Run cscript iisapp.vbs
*You will get the list of running worker process IDs and the application pool names.
For IIS7
*
*Start > Run > Cmd
*Go To Windows > System32 > Inetsrv
*Run appcmd list wp
Note the Process ID (PID).
Next in Visual Studio:
*
*Debug > Attach to Process...
*Find the w3wp.exe with the right PID and attach
Your breakpoints should work.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/4218333",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: Get a a process window handle by click in C# At the moment, I can get a list of running processes with a main window using System.Diagnostics.Process.GetProcesses() and executing a simple LINQ query.
Then, I can import user32.dll and the SetWindowPos function and I manipulate other processes' window parameters.
Ok, it works. Now I'd like to select a window of a process, let's say calc.exe, by clicking it. In other words, I'd like to obtain a Process object (and then the MainWindowHandle) with a hook that catches the process name when I click on its window.
How can I achieve this?
A: I don't know how this is done in C#, but you have also tagged this question WinAPI so I can help there. In WinAPI, it can be done like so:
#include <stdio.h>
#include <Windows.h>
#include <Psapi.h>
#pragma comment(lib, "Psapi.lib")
int main(void)
{
/* Hacky loop for proof of concept */
while(TRUE) {
Sleep(100);
if(GetAsyncKeyState(VK_F12)) {
break;
}
if(GetAsyncKeyState(VK_LBUTTON)) {
HWND hwndPt;
POINT pt;
if(!GetCursorPos(&pt)) {
wprintf(L"GetCursorPos failed with %d\n", GetLastError());
break;
}
if((hwndPt = WindowFromPoint(pt)) != NULL) {
DWORD dwPID;
HANDLE hProcess;
GetWindowThreadProcessId(hwndPt, &dwPID);
hProcess = OpenProcess(PROCESS_QUERY_LIMITED_INFORMATION, FALSE, dwPID);
if(hProcess == NULL) {
wprintf(L"OpenProcess failed with error: %d\n", GetLastError());
} else {
wchar_t lpFileName[MAX_PATH];
DWORD dwSize = _countof(lpFileName);
QueryFullProcessImageName(hProcess, 0, lpFileName, &dwSize);
wprintf(L"%s\n", lpFileName);
CloseHandle(hProcess);
}
}
}
}
return EXIT_SUCCESS;
}
Example result:
In this case, I am simply polling to get the mouse click. A more proper way would be to use some sort of windows hook.
A: As Mike Kwan said, you'll be better off writing a hook, though both approaches have their own drawbacks - but bjarneds already did good work on this. Have a look at DotNET Object Spy. It's written in C# and will serve your needs and more.
You should also note that using hooks is becoming redundant by the day. Depending on what you want to do, other WinAPIs like GetForegroundWindow might serve better.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/10318640",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4"
} |
Q: Concurrency about Hibernate first level cache Background: We have a distributed system running the same web application on multiple machines. The application on each machine reads/writes data to one database (the database is an Oracle cluster). We have a load balancer in front of those web application servers (please ignore the web server for now).
Problem: We have a feature, e.g. a shopping cart, that saves a shopping item into a customer's cart. The code looks like:
1: Load cart by Hibernate
2: if [Shopping item exists in Cart]
3: update quantity
else
4: add new cartItem
We hit a concurrency problem here. If multiple users log in with the same customer account and add the same shopping item to their cart at the same time, a duplicate cart item will appear.
Hibernate uses the first-level cache (aka session cache), and each session caches the data when the transaction begins. A session does not know that other sessions have added the same shopping item when it does the check in step 2, if those sessions complete between step 1 and step 2.
We do not use the second-level cache for now, since it would not improve performance much. Still, the second-level cache does not seem to be a solution here, since Hibernate will read the first-level cache first.
Anyone solved this kind of problem before?
| {
"language": "en",
"url": "https://stackoverflow.com/questions/36809239",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: how to send image in pushplugin in BigPictureStyle method? I am developing a Cordova application, and I would like to know how to send an image with PushPlugin in BigPictureStyle, as done by Flipkart, Myntra and others.
there is a pull request https://github.com/phonegap-build/PushPlugin/pull/498
for the same, but how do I implement it?
How do I send the payload so as to show the image in the notification bar?
A: Install the modified plugin
*
*Clone locally and pull the changes
*
*Clone PushPlugin locally
git clone https://github.com/phonegap-build/PushPlugin.git
*Add a remote to the repository where the modifications are
git remote add bigpicture-repo https://github.com/ajoyoommen/PushPlugin.git
*Then fetch the changes
git fetch bigpicture-repo
*Merge the changes you just fetched (from the remote) into you master
git merge bigpicture-repo/master
OR
*Directly clone my repository containing the feature
git clone https://github.com/ajoyoommen/PushPlugin
Uninstall the old plugin
cordova plugin remove com.phonegap.plugins.PushPlugin
Install the new plugin from your filesystem
cordova plugin add /home/myname/Documents/PushPlugin
Send messages from your server to your application. If the payload has bigPicture in it, with a URL, that image will be downloaded and displayed in a Big Picture style notification.
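For example, a GCM payload roughly like the following should do it (treat this as a sketch: bigPicture is the key the patched plugin looks for, while the other field names depend on your own server-side setup):
{
  "registration_ids": ["<device registration id>"],
  "data": {
    "title": "New offer",
    "message": "Check out today's deal",
    "bigPicture": "http://example.com/banner.jpg"
  }
}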
| {
"language": "en",
"url": "https://stackoverflow.com/questions/31939809",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: .Net 3.5 Winforms Basic Dialog with Label Autosize for custom text contents I want a form containing:
+----------------------------------------+
| Dialog Title X|
+----------------------------------------+
+----------------------------------------+
|icon | |
|32x32px| One-line label (Heading) |
| +--------------------------------+
| | |
| | Message label with auto-wrap |
| | text according to any given |
| | string. |
+-------+--------------------------------+
| row for dialog buttons... |
+----------------------------------------+
I'll gladly answer any questions; the basic idea is still simple (though I cannot get it to work): Given any message string containing possible newlines the dialog (a Form) should keep its width but grow vertically depending on the message.
Any way how this can be done?
A: I think the component that you will find most useful is TableLayoutPanel. Find it under “Containers” in the Toolbox. Set the TableLayoutPanel’s Dock = Fill.
You can use it to lay out the controls in columns and rows. Once a control is inside the TableLayoutPanel, you can use the ColumnSpan property on such a control to span it across multiple columns; I’d use this for the button row at the bottom, i.e. make a new panel for the button row and put the buttons inside that. For the icon, of course, use RowSpan instead.
Experiment with various values of Anchor, AutoSize and AutoSizeMode for some of the controls, especially the message label that you want to grow automatically. If you set the TableLayoutPanel and the Form to AutoSize = true, then the window will grow automatically with the text contents.
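As a rough sketch (the control names here are placeholders for your own):
tableLayoutPanel1.Dock = DockStyle.Fill;
tableLayoutPanel1.AutoSize = true;
messageLabel.AutoSize = true;
messageLabel.MaximumSize = new Size(400, 0); // wrap at a fixed width, grow in height
this.AutoSize = true;
this.AutoSizeMode = AutoSizeMode.GrowAndShrink;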
A: You could try handling the TextChanged event of the label and measure the size of the string using something like this:
Graphics g = Graphics.FromHwnd(this.Handle);
SizeF s = g.MeasureString(yourLabel.Text, yourLabel.Font, yourLabel.Width);
After this, knowing the sizes of the other controls you can modify the size of the window accordingly. I am assuming that you only want to resize the window vertically.
A: Try a TableLayoutPanel for the layout and set its Dock property to Fill to occupy the entire Form. Then plop your "one-line" and "message" labels into their respective cells and set their Dock properties to Fill to occupy the entire cell.
If you really want to resize the entire Form to fit any message at runtime, you may have to use Graphics.MeasureString to determine the area you need to contain the string and then resize the form to contain that area.
A: Create your own new Form and show it as a dialog box. You can put whatever/however you want on that form.
Here you have a tutorial that will show you how to do the hardest part.
A: You might try to ask for the position of the last character
TextBox box = new TextBox();
box.Text = "...";
var positionOfLastCharacter = box.GetPositionFromCharIndex(box.TextLength);
Then you can calculate the necessary height of the textbox and the form.
Edit: That will give you the top left corner of the last character, you should add 10px or so to make the last line fit.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/3583870",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: Python-docx header / footer error from docx import Document
d = Document('/tmp/doc_with_header.docx')
d.sections[0].headers[0].add_paragraph(text='moar header')
d.save('/tmp/moar_headers.docx')
this is the code
here is the error
AttributeError: 'Section' object has no attribute 'header'
A: Looks like you've got this solution from this page, yeah? Unfortunately, it's not the actual documentation, but only an API suggestion for further implementation. Just promises and nothing more at the moment =\
| {
"language": "en",
"url": "https://stackoverflow.com/questions/43205825",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
} |
Q: Using Math.Round(ReportItems!category.Value,MidpointRounding.AwayFromZero) returns incorrect value I am using
=Math.Round(ReportItems!category.Value,MidpointRounding.AwayFromZero)
in a tablix row, which returns an incorrect value.
5.48 is rounded to 5.
Instead, I would like to see 5.48 as 6.
A: What you are asking for is not rounding away from zero, it is Ceiling for positive numbers and Floor for negative numbers.
A: MidpointRounding.AwayFromZero is a way of defining how the midpoint value is handled. The midpoint is X.5. So, 4.5 is rounded to 5 rather than 4, which is what would happen in the case of MidpointRounding.ToEven. Round is completely correct.
If you want to write a function that rounds all non-integer values to the next highest integer then that operation is Math.Ceiling, not Round.
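So in the tablix expression, something along these lines should turn 5.48 into 6 (assuming the values are positive):
=Math.Ceiling(ReportItems!category.Value)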
| {
"language": "en",
"url": "https://stackoverflow.com/questions/21147702",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: Subscription' is not assignable to type I'm having a problem getting a service to display results on the page. The error is that the subscribe method returns a Subscription type, and I'm stuck trying to get it into a products array. The products are in a json file.
Setup:
I'm trying to learn Angular 2 by going through a tutorial. The tutorial is dated and I'm using the latest version of Angular (ng -v = @angular/cli: 1.4.2). I used ng new and ng generate to set up the app.
product-list.component.ts
export class ProductListComponent implements OnInit {
pageTitle = 'Product List';
imageWidth = 50;
imageMargin = 2;
showImage = false;
listFilter = '';
products: IProductList[];
subscription: Subscription;
errorMessage = '';
constructor(private _productListService: ProductListService) {
}
ngOnInit() {
**// ERROR - 'Subscription' is not assignable to type 'IProductList[]'**
this.products = this._productListService.getProducts() // ******** ERROR ******
.subscribe(
products => this.products = products,
error => this.errorMessage = <any>error);
}
product-list.service.ts
import { Injectable } from '@angular/core';
import { Http, Response } from '@angular/http';
import { Observable } from 'rxjs/Observable';
import { IProductList } from './product-list';
@Injectable()
export class ProductListService {
private _productListUrl = 'api/product-list/product-list.json';
constructor(private _http: Http) { }
getProducts(): Observable<IProductList[]> {
return this._http.get(this._productListUrl)
.map((response: Response) => <IProductList[]>response.json())
.do(data => console.log('All: ' + JSON.stringify(data)))
.catch(this.handleError);
}
private handleError(error: Response) {
console.error(error);
return Observable.throw(error.json().error || 'Server Error');
}
}
product-list.component.html
<tr *ngFor='let product of products | async | productFilter: listFilter' >
<td>
<img *ngIf='showImage' [src]='product.imageUrl' [title]='product.productName' [style.width.px]='imageWidth' [style.margin.px]='imageMargin'>
</td>
<td>{{product.productName}}</td>
<td>{{product.productCode | lowercase }}</td>
<td>{{product.releaseDate}}</td>
<td>{{product.price | currency:'USD':true:'1.2-2' }}</td>
<td><app-ai-star [rating] = 'product.starRating'
(ratingClicked)='onRatingClicked($event)'></app-ai-star></td>
</tr>
A: The problem is that your subscription, not products, should be assigned the result of the getProducts() call.
ngOnInit() {
this.subscription = this._productListService.getProducts() // subscription created here
.subscribe(
products => this.products = products, // value applied to products here
error => this.errorMessage = <any>error);
}
A: The value of this.products is assigned in subscribe(). You can ignore the returned subscription object as in code snippet below.
ngOnInit() {
this._productListService.getProducts()
.subscribe(
products => this.products = products,
error => this.errorMessage = <any>error);
}
| {
"language": "en",
"url": "https://stackoverflow.com/questions/46391400",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4"
} |
Q: how to use distinct keyword to filter object I have a project list where I can get all the details about projects. In the Project class I have a Customer object; I just want to filter that list by customer. How can I do this?
public ICollection<Project> GetProjectBasicDetailsByProjectTypeCustomerID(ProjectType projectType, string custName, string cordinatorName, string projectName)
{
oLog.Debug("Started");
ISession session = DataAccessLayerHelper.OpenReaderSession();
ITransaction transaction = null;
ICollection<Project> projectList = null;
try
{
transaction = session.BeginTransaction();
ICriteria criteria = session.CreateCriteria(typeof(Project),"Project")
.CreateAlias("Project.customer","customer",NHibernate.SqlCommand.JoinType.InnerJoin)
.CreateAlias("Project.Coordinator", "Coordinator", NHibernate.SqlCommand.JoinType.InnerJoin)
.Add(Restrictions.Eq("Project.ProjectType", projectType));
projectList = criteria.List<Project>().ToList();
session.Flush();
transaction.Commit();
}
catch (Exception ex)
{
if (transaction != null && transaction.IsActive)
transaction.Rollback();
oLog.Error(ex);
}
finally
{
if (transaction != null)
transaction.Dispose();
if (session != null && session.IsConnected)
session.Close();
}
oLog.Debug("End");
return projectList;
}
A: If I understand you correctly, you would like to get the distinct customers from your projects?
In this case I think something like this should work (this should give you the distinct project IDs from this criteria - maybe there is a way to get the projects directly, but I can't test this right now):
ICriteria criteria = session.CreateCriteria(typeof(Project),"Project")
.SetProjection(Projections.Distinct(Projections.Property("project.Id")))
.CreateAlias("Project.customer","customer",NHibernate.SqlCommand.JoinType.InnerJoin)
.CreateAlias("Project.Coordinator", "Coordinator", NHibernate.SqlCommand.JoinType.InnerJoin)
.Add(Restrictions.Eq("Project.ProjectType", projectType));
| {
"language": "en",
"url": "https://stackoverflow.com/questions/13010511",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: LNK1104: cannot open file 'wxbase28d.lib' I am trying to compile existing project created using wxWidgets library.
I successfully compiled wxWidgets 2.8.12 library.
Now, I am trying to compile my project.
But I get error:
fatal error LNK1104: cannot open file 'wxbase28d.lib'
Afterwards I added some variables in settings like:
C/C++->Preprocessor Definitions:
WIN32;__WXMSW__;_WINDOWS;_DEBUG;__WXDEBUG__;_CRT_SECURE_NO_WARNINGS;WIN32;_DEBUG;_WINDOWS;%(PreprocessorDefinitions)
VC++ Directories->Include Directories:
D:\instantclient_12_1\sdk\include;$(WXWIN)\lib\vc_lib\mswd;$(WXWIN)\include;$(VCInstallDir)include;$(VCInstallDir)atlmfc\include;$(WindowsSdkDir)include;$(FrameworkSDKDir)\include;
Linker->General->Additional Library Directories:
$(WXWIN)\lib\vc_lib;E:\app\vasyl\product\11.1.0\db_1\OCI\lib\MSVC\vc71;$(WXDIR284)\lib\vc_lib;%(AdditionalLibraryDirectories)
Resources->General->Additional Include directories:
$(WXWIN)\include;c:\wxMSW284\include;$(WXDIR284)\include;%(AdditionalIncludeDirectories)
Now, the situation is like this:
1>MSVCRTD.lib(MSVCR100D.dll) : error LNK2005: _free already defined in LIBCMTD.lib(dbgfree.obj)
1>MSVCRTD.lib(MSVCR100D.dll) : error LNK2005: _malloc already defined in LIBCMTD.lib(dbgmalloc.obj)
1>MSVCRTD.lib(MSVCR100D.dll) : error LNK2005: _realloc already defined in LIBCMTD.lib(dbgrealloc.obj)
1>MSVCRTD.lib(MSVCR100D.dll) : error LNK2005: _memmove already defined in LIBCMTD.lib(memmove.obj)
1>MSVCRTD.lib(MSVCR100D.dll) : error LNK2005: _tolower already defined in LIBCMTD.lib(tolower.obj)
1>MSVCRTD.lib(MSVCR100D.dll) : error LNK2005: _isalpha already defined in LIBCMTD.lib(_ctype.obj)
1>MSVCRTD.lib(MSVCR100D.dll) : error LNK2005: _isdigit already defined in LIBCMTD.lib(_ctype.obj)
1>MSVCRTD.lib(MSVCR100D.dll) : error LNK2005: _isspace already defined in LIBCMTD.lib(_ctype.obj)
1>MSVCRTD.lib(MSVCR100D.dll) : error LNK2005: _strtol already defined in LIBCMTD.lib(strtol.obj)
1>MSVCRTD.lib(MSVCR100D.dll) : error LNK2005: _strtoul already defined in LIBCMTD.lib(strtol.obj)
1>MSVCRTD.lib(MSVCR100D.dll) : error LNK2005: __strtoi64 already defined in LIBCMTD.lib(strtoq.obj)
1>MSVCRTD.lib(MSVCR100D.dll) : error LNK2005: __strtoui64 already defined in LIBCMTD.lib(strtoq.obj)
1>MSVCRTD.lib(MSVCR100D.dll) : error LNK2005: __errno already defined in LIBCMTD.lib(dosmap.obj)
1>MSVCRTD.lib(MSVCR100D.dll) : error LNK2005: __vsprintf_p already defined in LIBCMTD.lib(vsnprnc.obj)
...
...
etc.
Can someone help me spot what am I doing wrong?
A: You're using different CRT settings (static vs DLL) for your project and the library. Make sure to (re)build both of them using the same option, either /MD[d] or /MT[d].
A: There are many possible causes for this linker error. The first address to check is MSDN: https://msdn.microsoft.com/en-us/library/ts7eyw4s.aspx
What is $(WXWIN) and how does it differ from $(WXDIR284)? It seems you include the wxWidgets path twice...
| {
"language": "en",
"url": "https://stackoverflow.com/questions/32073431",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5"
} |
Q: Finding record which doesn't contain some value in array fields I am using sequelize + typescript over node (with postgresql db) and I have the following model:
id: number,
someField: string,
arr1: number[],
arr2: number[]
and I'm trying to find all records in which arr1 and arr2 don't contain a certain value.
As far as I've seen my only option in one query is a mix between Op.not and Op.contains,
so I've tried the following queries:
/// Number 1
where: {
arr1: {
[Op.not] : {[Op.contains]: [someValue]}
},
arr2: {
[Op.not] : {[Op.contains]: [someValue]}
}
},
/// Number 2
where: {
[Op.not]: [
{arr1: {[Op.contains]: [someValue]}},
{arr2: {[Op.contains]: [someValue]}}
]
},
Now, number 1 does compile in TypeScript, but when trying to run it the following error is returned:
{
"errorId": "db.failure",
"message": "Database error occurred",
"innerError":
{
"name": "SequelizeValidationError",
"errors":
[
{
"message": "{} is not a valid array",
"type": "Validation error",
"path": "arr1",
"value": {},
"origin": "FUNCTION",
"instance": null,
"validatorKey": "ARRAY validator",
"validatorName": null,
"validatorArgs": []
}
]
}
}
So I tried number 2, which didn't compile at all with the following TS error:
Type '{ [Op.not]: ({ arr1: { [Op.contains]: [number]; }; } | { arr2: { [Op.contains]: [number]; }; })[]; }' is not assignable to type 'WhereOptions<any>'.
Types of property '[Op.not]' are incompatible.
Type '({ arr1: { [Op.contains]: [number]; }; } | { arr2: { [Op.contains]: [number]; }; })[]' is not assignable to type 'undefined'
So the question is what am I doing wrong, or in other words, how can I make that query without querying all records and filter using code
Thanks!
A: You have to use notIn and not contains; maybe then it will work:
Official Docs: https://sequelize.org/master/manual/model-querying-basics.html
where: {
arr1: {
[Op.notIn]: someValueArray
},
arr2: {
[Op.notIn]: someValueArray
}
},
A: Apparently the second option is the correct one; what was incorrect was Sequelize's type definitions. @ts-ignore fixes the problem.
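For example, something like this (put the comment directly above the line TypeScript flags; move it if the error is reported elsewhere):
where: {
  // @ts-ignore -- the sequelize typings reject this shape, but the generated SQL is valid
  [Op.not]: [
    {arr1: {[Op.contains]: [someValue]}},
    {arr2: {[Op.contains]: [someValue]}}
  ]
},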
| {
"language": "en",
"url": "https://stackoverflow.com/questions/69518087",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: Error when getting row().data() from indexed db I tried to get the row data using
row(index).data()
where index is the row index.
I get the data initially pushed to the table, but the update I do in the "columnDefs" render() function is not reflected.
The value I return in the render() function shows up in the table, but it is not present in row().data().
Please see the screenshot: the value is in the table but it is null in the console, where I have logged row().data(). [1]: https://i.stack.imgur.com/5IP56.png
Please see: I have returned the value 10 for the column in the "columnDefs" render() function and the value appears in the table. [2]: https://i.stack.imgur.com/9YMnq.png
A: I fixed the issue like this.
columnDefs: [
{
"targets": [],
"render": function(data, type, row) {
row[5] = row[2] * $(row[3]).val();
return row[5];
}
}]
what I did previously was like:
columnDefs: [
{
"targets": [],
"render": function(data, type, row) {
value = row[2] * $(row[3]).val();
return value;
}
}]
Here a new value is returned and it will be visible in the table, but the row data won't be updated. If we update the row[] array, we will get the update in row().data().
| {
"language": "en",
"url": "https://stackoverflow.com/questions/63581695",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "-1"
} |
Q: Cryptic error messages with npm and node I'm following facebook's tutorial on getting started with React Native (https://facebook.github.io/react-native/docs/tutorial.html#hello-world), but I can't get the react-native-cli to install. Any help interpreting the error messages? Obviously it says to unlink something, but I don't know what it is linked to that it shouldn't be linked to.
Running as root seems to do something, but zsh still won't recognize the react-native command.
➜ ~ npm install -g react-native-cli
npm ERR! Darwin 14.3.0
npm ERR! argv "node" "/usr/local/bin/npm" "install" "-g" "react-native-cli"
npm ERR! node v0.12.4
npm ERR! npm v2.10.1
npm ERR! path /Users/bbarclay/.node/bin/react-native
npm ERR! code EACCES
npm ERR! errno -13
npm ERR! Error: EACCES, unlink '/Users/bbarclay/.node/bin/react-native'
npm ERR! at Error (native)
npm ERR! { [Error: EACCES, unlink '/Users/bbarclay/.node/bin/react-native']
npm ERR! errno: -13,
npm ERR! code: 'EACCES',
npm ERR! path: '/Users/bbarclay/.node/bin/react-native' }
npm ERR!
npm ERR! Please try running this command again as root/Administrator.
npm ERR! error rolling back Error: EACCES, unlink '/Users/bbarclay/.node /bin/react-native'
npm ERR! error rolling back at Error (native)
npm ERR! error rolling back { [Error: EACCES, unlink '/Users/bbarclay/.node/bin/react-native']
npm ERR! error rolling back errno: -13,
npm ERR! error rolling back code: 'EACCES',
npm ERR! error rolling back path: '/Users/bbarclay/.node/bin/react-native' }
npm ERR! Please include the following file with any support request:
npm ERR! /Users/bbarclay/npm-debug.log
➜ ~ sudo npm install -g react-native-cli
Password:
/Users/bbarclay/.node/bin/react-native -> /Users/bbarclay/.node/lib/node_modules/react-native-cli/index.js
[email protected] /Users/bbarclay/.node/lib/node_modules/react-native-cli
└── [email protected] ([email protected], [email protected], [email protected], [email protected], [email protected])
➜ ~ react-native init AwesomeProject
zsh: command not found: react-native
A: Give your user ownership of the .npm home directory:
sudo chown -R $(whoami) ~/.npm
| {
"language": "en",
"url": "https://stackoverflow.com/questions/30924461",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: What is the equivalent of dispatchTouchEvent for a fragment? I'm trying to get horizontal swipes from a vertical ScrollView before they are consumed by the ScrollView, in order to override them. It can be done in an activity, starting with dispatchTouchEvent.
I'm trying to do the same thing in a ListFragment and a Fragment, but dispatchTouchEvent doesn't seem to exist for fragments. Is there an equivalent method to dispatchTouchEvent in a fragment?
A: I just put the following in my parent activity in the onCreate ..., then used a public interface like Rarw mentioned above.
gesturedetector = new GestureDetector(new MyGestureListener());
myLayout.setOnTouchListener(new OnTouchListener() {
@Override
public boolean onTouch(View v, MotionEvent event) {
if (event.getAction() == MotionEvent.ACTION_DOWN) {
bTouch = true;
return false;
} else {
gesturedetector.onTouchEvent(event);
bTouch = false;
return true;
}
}
});
And added the following to my parent activity and a gesturedetector to catch the desired events.
public boolean dispatchTouchEvent(MotionEvent ev) {
super.dispatchTouchEvent(ev);
return gesturedetector.onTouchEvent(ev);
}
| {
"language": "en",
"url": "https://stackoverflow.com/questions/22864956",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
} |
Q: What does this binary-search function exactly? I struggle to understand how this binary-search function works:
bsearch :: Ord a => [a] -> a -> Bool
bsearch [] _ = False
bsearch xs x =
if x < y then bsearch ys1 x
else if x > y then bsearch ys2 x
else True
where
ys1 = take l xs
(y:ys2) = drop l xs
l = length xs `div` 2
I tried to think it through with an example: bsearch [1,2,3,4] 4
but I don't understand where the function starts. I'd like to believe that first l = length xs `div` 2 gets calculated; l = 2 is the result.
Now I put my variables in: (y:ys2) = drop l xs, where (y:ys2) = 3:[4],
which equals drop 2 [1,2,3,4] = [3,4]. Next, else if 4 > 3 then bsearch ys2 x gets executed, where ys2 = [4] and x = 4. What happens next? How do x = 4 and ys2 = [4] get compared?
EDIT: I think since bsearch [4] 4 is the new bsearch xs x, the new l = length xs `div` 2 = length [4] `div` 2 = 0, which executes drop 0 [4] = [4] = (4:[]). 4 < 4 and 4 > 4 are both False, therefore else True.
Is this the way this function executes for my example?
I would be very happy if someone could help me with this function.
A: Your interpretation of how the bindings expand is correct. The function essentially operates by converting a finite sorted list on demand into a binary search tree. I could rewrite portions of the function just to show that tree structure (note that the where portion is unchanged):
data Tree a = Node (Tree a) a (Tree a) | Empty
deriving Show
tree [] = Empty
tree xs = Node (tree ys1) y (tree ys2)
where
ys1 = take l xs
(y:ys2) = drop l xs
l = length xs `div` 2
The tree form can then be produced:
*Main> tree [1..4]
Node (Node (Node Empty 1 Empty) 2 Empty) 3 (Node Empty 4 Empty)
The recursive upper section is about traversing only the relevant portion of the tree.
bsearchT Empty _ = False
bsearchT (Node ys1 y ys2) x =
if x < y then bsearchT ys1 x
else if x > y then bsearchT ys2 x
else True
bsearch xs x = bsearchT (tree xs) x
The operation itself does suggest that a plain list is not the appropriate data type; we can observe that Data.List.Ordered.member performs a linear search, because lists must be traversed from the head and may be infinite. Arrays or vectors provide random access, so there is indeed a Data.Vector.Algorithms.Search.binarySearch.
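As a sketch of that alternative (assuming the vector and vector-algorithms packages and a sorted input vector; binarySearch returns the lowest index at which the element could be inserted):
import qualified Data.Vector as V
import qualified Data.Vector.Algorithms.Search as S
import Control.Monad.ST (runST)

member :: Ord a => V.Vector a -> a -> Bool
member xs x = runST $ do
  mv <- V.thaw xs              -- the search routine works on a mutable vector
  i  <- S.binarySearch mv x
  return (i < V.length xs && xs V.! i == x)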
| {
"language": "en",
"url": "https://stackoverflow.com/questions/50555747",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: Set factor values for multiple columns to missing without losing the label attribute I have survey data with a number of Likert-scale items that have been imported as factors, where the actual question is attached to those factors as the "label" attribute.
In addition, there is one random variable that has been assigned to each participant and was used for different routings of the questionnaire.
For example if the random number is 1 or 2, the questionnaire would display a certain group of items (e.g G1, G2, G3, etc.). If the random number was 3, those items would be skipped, hence their values should be missing.
In some cases due to technical issues, groups of items were still displayed and answered even if they shouldn't have been displayed. I would like to set them missing programmatically for the whole group of items, depending on the random variable, using across() and case_when.
However, I have some trouble keeping the label attribute when trying to set the values to NA.
Here are some fabricated test data to illustrate the problem:
set.seed(123)
test_df <-
tibble(
random_number = rep(1:3, each = 2),
G1 = factor(sample(c("Never", "Sometimes", "Often"), size = 6, replace = TRUE), levels = c("Never", "Sometimes", "Often")),
G2 = factor(sample(c("Never", "Sometimes", "Often"), size = 6, replace = TRUE), levels = c("Never", "Sometimes", "Often")),
G3 = factor(sample(c("Never", "Sometimes", "Often"), size = 6, replace = TRUE), levels = c("Never", "Sometimes", "Often")),
G4 = factor(sample(c("Never", "Sometimes", "Often"), size = 6, replace = TRUE), levels = c("Never", "Sometimes", "Often")))
attributes(test_df$G1)$label <- "Question 1: Do you use R?"
attributes(test_df$G2)$label <- "Question 2: Do you use Python?"
attributes(test_df$G3)$label <- "Question 3: Do you use SQL?"
attributes(test_df$G4)$label <- "Question 4: Do you use PowerBI?"
#----------------------------------------------------------------------------------
attributes(test_df$G1)
$levels
[1] "Never" "Sometimes" "Often"
$class
[1] "factor"
$label
[1] "Question 1: Do you use R?" #label still here.
All items starting with G should be missing if the random number is 3
# A tibble: 6 x 5
random_number G1 G2 G3 G4
<int> <fct> <fct> <fct> <fct>
1 1 Often Often Often Often
2 1 Often Sometimes Sometimes Sometimes
3 2 Never Often Never Never
4 2 Never Sometimes Often Often
5 3 Never Never Often Never #should be missing
6 3 Never Sometimes Never Never #should be missing
The code below works for setting the correct values missing but due to the conversion to character and back to factor, the label attribute gets dropped.
test_df_no_label <-
test_df %>%
dplyr::mutate(dplyr::across(.cols = dplyr::matches("[G]\\d{1,2}"),
.fns = ~dplyr::case_when(random_number != 3 ~ as.character(.x),
TRUE ~ NA_character_) %>%
factor(levels = c("Never", "Sometimes", "Often"))))
> test_df_no_label
# A tibble: 6 x 5
random_number G1 G2 G3 G4
<int> <fct> <fct> <fct> <fct>
1 1 Often Often Often Often
2 1 Often Sometimes Sometimes Sometimes
3 2 Never Often Never Never
4 2 Never Sometimes Often Often
5 3 NA NA NA NA #are now missing, good
6 3 NA NA NA NA
#-----------------------------------------------------------------
# label attribute is gone
> attributes(test_df_no_label$G1)
$levels
[1] "Never" "Sometimes" "Often"
$class
[1] "factor"
I've also tried .fns = ~ifelse(random_number != 3, .x, NA_character_) but that converted the factor to numeric. Any ideas about how to avoid dropping the label when setting the values to missing? The real df has many more items and many more groups, so I would like to avoid hard-coding it manually.
A: As Hadley notes in Advanced R:
Attributes should generally be thought of as ephemeral. For example, most attributes are lost by most operations.
But one option to keep your labels would be to make use of a helper function which first saves the label attribute and resets is afterwards:
library(dplyr)
to_na <- function(x, rn) {
label <- attr(x, "label")
levels <- levels(x)
x <- as.character(x)
x[rn == 3] <- NA_character_
x <- factor(x, levels = levels)
attr(x, "label") <- label
x
}
test_df <- test_df %>%
dplyr::mutate(dplyr::across(
.cols = dplyr::matches("[G]\\d{1,2}"),
.fns = ~ to_na(.x, random_number)))
test_df
#> # A tibble: 6 × 5
#> random_number G1 G2 G3 G4
#> <int> <fct> <fct> <fct> <fct>
#> 1 1 Often Sometimes Never Never
#> 2 1 Often Sometimes Sometimes Never
#> 3 2 Often Often Often Never
#> 4 2 Sometimes Never Never Never
#> 5 3 <NA> <NA> <NA> <NA>
#> 6 3 <NA> <NA> <NA> <NA>
lapply(test_df, attr, "label")
#> $random_number
#> NULL
#>
#> $G1
#> [1] "Question 1: Do you use R?"
#>
#> $G2
#> [1] "Question 2: Do you use Python?"
#>
#> $G3
#> [1] "Question 3: Do you use SQL?"
#>
#> $G4
#> [1] "Question 4: Do you use PowerBI?"
| {
"language": "en",
"url": "https://stackoverflow.com/questions/72944370",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: Mathematics errors in basic C++ program I am working with a basic C++ program to determine the area and perimeter of a rectangle. My program works fine for whole numbers but falls apart when I use any number with a decimal. I get the impression that I am leaving something out, but since I'm a complete beginner, I have no idea what.
Below is the source:
#include <iostream>
using namespace std;
int main()
{
// Declared variables
int length; // declares variable for length
int width; // declares variable for width
int area; // declares variable for area
int perimeter; // declares variable for perimeter
// Statements
cout << "Enter the length and the width of the rectangle: "; // states what information to enter
cin >> length >> width; // user input of length and width
cout << endl; // closes the input
area = length * width; // calculates area of rectangle
perimeter = 2 * (length + width); //calculates perimeter of rectangle
cout << "The area of the rectangle = " << area << " square units." <<endl; // displays the calculation of the area
cout << "The perimeter of the rectangle = " << perimeter << " units." << endl; // displays the calculation of the perimeter
system ("pause"); // REMOVE BEFORE RELEASE - testing purposes only
return 0;
}
A: Change all your int type variables to double or float. I would personally use double because they have more precision than float types.
A: int datatype stands for integer (i.e. positive and negative whole numbers, including 0)
If you want to represent decimal numbers, you will need to use float.
A: Use the float or double type, like the others already said.
But it ain't as simple as that. You need to understand what floating-point numbers actually are, and why (0.1 + 0.1 + 0.1) != (0.3). This is a complicated subject, so I won't even try to explain it here - just remember that a float is not a decimal, even if the computer is showing it to you in the form of a decimal.
A: Use floats, not ints. An integer (int) is a whole number; floats allow decimal places (as do doubles):
float length; // declares variable for length
float width; // declares variable for width
float area; // declares variable for area
float perimeter; // declares variable for perimete
A: You've defined your variables as integers. Use double instead.
Also, you can look up some formatting for cout to define the number of decimal places you want to show, etc.
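For example, once the variables are doubles, something like this prints two decimal places (needs #include <iomanip>):
cout << fixed << setprecision(2) << "The area of the rectangle = " << area << " square units." << endl;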
| {
"language": "en",
"url": "https://stackoverflow.com/questions/3034686",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4"
} |
Q: "Failed to load module script" error when building and serving with a base property in vitejs I'm having an issue in serving a vite app. I'm building the app using the following command:
vite build --base=/registration-form/
and serving the app with
serve -s dist
But I'm getting the following error in the browser console and all the .js files are HTML files.
Failed to load module script: Expected a JavaScript module script but the server responded with a MIME type of "text/html".
Strict MIME type checking is enforced for module scripts per HTML spec.
More context:
I'm serving it in production with pm2, serve & nginx. This is my startup script:
pm2 start "serve -s -p [PORT] dist" --name registration-form
and this is the part of my nginx.conf that handles this route:
location /ideathon-registration/ {
proxy_pass http://[HOST]:[PORT];
}
How do I solve this?
Vite version: ^2.3.8
Node version: 14.17.1
Local OS: Linux Mint 20.04
Server OS: Ubuntu 18.04
| {
"language": "en",
"url": "https://stackoverflow.com/questions/68190154",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3"
} |
Q: Duotone image in Canvas Alright. I'm new to this, I'm not a coder, just trying something for fun. And I'm confused.
I've found a tutorial about making a duotone image with canvas, and I'm so new to this that I can't figure out what I'm doing wrong. Maybe someone can help.
Here is my code. It displays the original image (demo_small.png), but doesn't show any of the effects on it. I guess that maybe I have to overwrite it after the last "return pixels", but I have no idea what I'm doing, so...
<html>
<head>
<style>
body {
margin: 0px;
padding: 0px;
}
</style>
<script src="https://www.mattkandler.com/assets/application-9cbca3f8879431193adab436bd8e0cf7629ecf3752685f49f997ed4469f42826.js" type="text/javascript"></script>
</head>
<body>
<canvas id="idOfCanvasToDrawImageOn" width="img.width" height="img.height"></canvas>
<script>
//Getting the image pixels
var canvasId = 'idOfCanvasToDrawImageOn';
var imageUrl = 'demo_small.png';
var canvas = document.getElementById(canvasId);
var context = canvas.getContext('2d');
var img = new Image();
// img.crossOrigin = 'Anonymous';
img.onload = function() {
// Perform image scaling if desired size is given
var scale = 1;
context.canvas.width = img.width;
context.canvas.height = img.height;
context.scale(scale, scale);
// Draw image on canvas
context.drawImage(img, 0, 0);
// Perform filtering here
};
img.src = imageUrl;
//Then we'll need to grab the pixels from this newly created canvas image using the following function
Filters.getPixels = function(img) {
var c = this.getCanvas(img.width, img.height);
var ctx = c.getContext('2d');
ctx.drawImage(img, 0, 0);
return ctx.getImageData(0, 0, c.width, c.height);
};
//Converting to grayscale
Filters.grayscale = function(pixels) {
var d = pixels.data;
var max = 0;
var min = 255;
for (var i = 0; i < d.length; i += 4) {
// Fetch maximum and minimum pixel values
if (d[i] > max) {
max = d[i];
}
if (d[i] < min) {
min = d[i];
}
// Grayscale by averaging RGB values
var r = d[i];
var g = d[i + 1];
var b = d[i + 2];
var v = 0.3333 * r + 0.3333 * g + 0.3333 * b;
d[i] = d[i + 1] = d[i + 2] = v;
}
for (var i = 0; i < d.length; i += 4) {
// Normalize each pixel to scale 0-255
var v = (d[i] - min) * 255 / (max - min);
d[i] = d[i + 1] = d[i + 2] = v;
}
return pixels;
};
//Building a color gradient
Filters.gradientMap = function(tone1, tone2) {
var rgb1 = hexToRgb(tone1);
var rgb2 = hexToRgb(tone2);
var gradient = [];
for (var i = 0; i < (256 * 4); i += 4) {
gradient[i] = ((256 - (i / 4)) * rgb1.r + (i / 4) * rgb2.r) / 256;
gradient[i + 1] = ((256 - (i / 4)) * rgb1.g + (i / 4) * rgb2.g) / 256;
gradient[i + 2] = ((256 - (i / 4)) * rgb1.b + (i / 4) * rgb2.b) / 256;
gradient[i + 3] = 255;
}
return gradient;
};
//Applying the gradient
Filters.duotone = function(img, tone1, tone2) {
var pixels = this.getPixels(img);
pixels = Filters.grayscale(pixels);
var gradient = this.gradientMap(tone1, tone2);
var d = pixels.data;
for (var i = 0; i < d.length; i += 4) {
d[i] = gradient[d[i] * 4];
d[i + 1] = gradient[d[i + 1] * 4 + 1];
d[i + 2] = gradient[d[i + 2] * 4 + 2];
}
return pixels;
};
</script>
</body>
</html>
| {
"language": "en",
"url": "https://stackoverflow.com/questions/72423589",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: TypeError: list indices must be integers or slices, not str error appears while parsing xml I tried to write a parser for my device, but I can't understand where I made a mistake, because I got the error:
print(value2['@id'])
TypeError: list indices must be integers or slices, not str
My code you may see below:
import requests
import xmltodict
url = 'http://192.168.1.8:8060/query/apps'
text = requests.get(url).text
#content = """
#<?xml version="1.0" encoding="UTF-8"?>
#<apps>
#<app id="31012" type="menu" #version="2.0.53">Vudu Movie & #TV Store</app>
#</apps>
#"""
text = text.split('\n')
text = text[1:]
text = ''.join(text)
data = xmltodict.parse(text)
data = dict(data)
for key1, value1 in data.items():
for key2, value2 in value1.items():
print(value2['@id'])
print(value2['@type'])
print(value2['#text'])
Could anyone help me with it, please?
A: import requests
import xmltodict
url = 'http://192.168.1.8:8060/query/apps'
text = requests.get(url).text
#content = """
#<?xml version="1.0" encoding="UTF-8"?>
#<apps>
#<app id="31012" type="menu" #version="2.0.53">Vudu Movie & #TV Store</app>
#</apps>
#"""
text = text.split('\n')
text = text[1:]
text = ''.join(text)
data = xmltodict.parse(text)
data = dict(data)
for key1, value1 in data.items():
for key2, value2 in value1.items():
if isinstance(value2, list):
for item in value2:
print(item['@id'])
print(item['@type'])
print(item['#text'])
| {
"language": "en",
"url": "https://stackoverflow.com/questions/71704716",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: Spring installation with Eclipse I have downloaded and installed Spring STS for Eclipse...
But when I've created a Spring project I can't
import org.springframework.context.ApplicationContext;
So I tried to add a library like spring.jar, but I couldn't find this jar (Properties --> Java Build Path --> Libraries).
What's wrong with this setting? Or is my installation incomplete?
A: Spring STS is only an Eclipse plugin which helps the developer manage Spring beans. It is not mandatory for developing a Spring-based application.
So your second approach was the right one to add the spring library. You can download spring here: Spring Downloads and then add it as a library to your project.
But seriously... I would encourage you to do a tutorial first: http://www.springsource.org/tutorials
A: The plugin does not include the Spring libraries.
The plugin only manages the configuration references in your project build in Eclipse.
Libraries are the actual jar files, which are required at compile time as well as at run time.
If you skip the plugin installation, your code will still work, provided your configuration is correct.
But if you skip downloading and installing the Spring libraries, your code will not compile or run.
Please go through a step-by-step tutorial:
Spring Step By Step Tutorial
| {
"language": "en",
"url": "https://stackoverflow.com/questions/9888514",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3"
} |
Q: Why are we checking if temp == null? This code is for implementation of linked list.
node *single_llist::create_node(int value)
{
struct node *temp, *s;
temp = new(struct node);
if (temp == NULL)
{
cout<<"Memory not allocated "<<endl;
return 0;
}
else
{
temp->info = value;
temp->next = NULL;
return temp;
}
}
*
*Here, why are we checking if temp == NULL? I can't think of any case where this can happen.
*Also, to exit from the if, why are we returning 0 when the return type is node*?
A: *
*As the message clearly says, this is in case the request to allocate memory fails. (Exactly how this might happen is irrelevant; it is possible, so the code should handle it.)
*The author is assuming that NULL==0, which is often true, but not necessarily so, and (as we both seem to think) is a bad assumption to make.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/57731829",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "-1"
} |
Q: How to run PenAndPdf project in Android So Pen and Pdf is an open source PDF viewer/annotator app for Android and other platforms. It is written on top of MuPDF and as such is written in C. When I get the code from GitHub and add it to Android Studio, it gives me all kinds of problems. How do I run the project?
Pen And Pdf
| {
"language": "en",
"url": "https://stackoverflow.com/questions/59988213",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: How to query and parse adjacent list hierarchy using cte? How can I query this parent-child hierarchy to produce a result set in which the levels are in their own columns? Sample data:
SET NOCOUNT ON;
USE Tempdb;
IF OBJECT_ID('dbo.Employees', 'U') IS NOT NULL DROP TABLE dbo.Employees;
CREATE TABLE dbo.Employees
(
empid INT NOT NULL PRIMARY KEY,
mgrid INT NULL REFERENCES dbo.Employees,
empname VARCHAR(25) NOT NULL,
salary MONEY NOT NULL,
CHECK (empid <> mgrid),
CHECK (empid > 0)
);
CREATE UNIQUE INDEX idx_unc_mgrid_empid ON dbo.Employees(mgrid, empid);
INSERT INTO dbo.Employees(empid, mgrid, empname, salary) VALUES
(1, NULL, 'David' , $10000.00),
(2, 1, 'Eitan' , $7000.00),
(3, 1, 'Ina' , $7500.00),
(4, 2, 'Seraph' , $5000.00),
(5, 2, 'Jiru' , $5500.00),
(6, 2, 'Steve' , $4500.00),
(7, 3, 'Aaron' , $5000.00),
(8, 5, 'Lilach' , $3500.00),
(9, 7, 'Rita' , $3000.00),
(10, 5, 'Sean' , $3000.00),
(11, 7, 'Gabriel', $3000.00),
(12, 9, 'Emilia' , $2000.00),
(13, 9, 'Michael', $2000.00),
(14, 9, 'Didi' , $1500.00);
select * from dbo.Employees
go
;WITH Tree (empid, mgrid, lv)
AS (
SELECT empid, mgrid, 1
FROM Employees
WHERE mgrid IS NULL
UNION ALL
SELECT E.empid, E.mgrid, lv + 1
FROM Employees AS E
JOIN Tree
ON E.mgrid= Tree.empid
)
SELECT empid, mgrid, lv
FROM Tree
ORDER BY Lv, empid
The resulting table should have a structure like
+-------+-----+--------+--------+--------+--------+--------+
| empid | lvl | level1 | level2 | level3 | level4 | level5 |
+-------+-----+--------+--------+--------+--------+--------+
| 1 | 1 | 1 | NULL | NULL | NULL | NULL |
| 2 | 2 | 1 | 2 | NULL | NULL | NULL |
| 3 | 2 | 1 | 3 | NULL | NULL | NULL |
| 4 | 3 | 1 | 2 | 4 | NULL | NULL |
| 5 | 3 | 1 | 2 | 5 | NULL | NULL |
| 6 | 3 | 1 | 2 | 6 | NULL | NULL |
| 7 | 3 | 1 | 3 | 7 | NULL | NULL |
| 8 | 4 | 1 | 2 | 5 | 8 | NULL |
| 9 | 4 | 1 | 3 | 7 | 9 | NULL |
| 10 | 4 | 1 | 2 | 5 | 10 | NULL |
| 11 | 4 | 1 | 3 | 7 | 11 | NULL |
| 12 | 5 | 1 | 3 | 7 | 9 | 12 |
| 13 | 5 | 1 | 3 | 7 | 9 | 13 |
| 14 | 5 | 1 | 3 | 7 | 9 | 14 |
+-------+-----+--------+--------+--------+--------+--------+
A: Your example data makes the question clearer. You could collect the manager levels as you descend:
; with Tree as
(
SELECT empid
, mgrid
, 1 as lv
, 1 as level1
, null as level2
, null as level3
, null as level4
, null as level5
FROM Employees
WHERE mgrid IS NULL
UNION ALL
SELECT E.empid
, E.mgrid
, T.lv + 1
, T.level1
, case when T.lv = 1 then E.empid else t.level2 end
, case when T.lv = 2 then E.empid else t.level3 end
, case when T.lv = 3 then E.empid else t.level4 end
, case when T.lv = 4 then E.empid else t.level5 end
FROM Employees AS E
JOIN Tree T
ON E.mgrid = T.empid
)
select *
from Tree
Example at SQL Fiddle.
A: ;WITH Tree (empid, level, level1, level2, level3, level4, level5)
AS (
SELECT empid, 1, empid, NULL, NULL, NULL, NULL
FROM Employees
WHERE mgrid IS NULL
UNION ALL
SELECT E.empid, T.level + 1,
CASE WHEN T.level+1 = 1 THEN E.empid ELSE T.level1 END,
CASE WHEN T.level+1 = 2 THEN E.empid ELSE T.level2 END,
CASE WHEN T.level+1 = 3 THEN E.empid ELSE T.level3 END,
CASE WHEN T.level+1 = 4 THEN E.empid ELSE T.level4 END,
CASE WHEN T.level+1 = 5 THEN E.empid ELSE T.level5 END
FROM Employees AS E
JOIN Tree T
ON E.mgrid= T.empid
)
SELECT empid, level, level1, level2, level3, level4, level5
FROM Tree
A: With a pivot?
;WITH Tree (empid, mgrid, lv)
AS (
SELECT empid, mgrid, 1
FROM dbo.Employees
WHERE mgrid IS NULL
UNION ALL
SELECT E.empid, E.mgrid, lv + 1
FROM dbo.Employees AS E
JOIN Tree
ON E.mgrid= Tree.empid
)
select *
from
Tree
pivot
(count(empid) for lv in ([1],[2],[3],[4],[5]))p
| {
"language": "en",
"url": "https://stackoverflow.com/questions/13158857",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
} |
Q: Expression based Autowire in Spring Boot (with Kotlin) Situation
I'm trying to come up with a methodology to conditionally load one bean (based on the existence of 2 property or environment variables) and if they are missing load up another bean.
Vars
So the two property (or env vars) are:
*
*ProtocolHOST
*ProtocolPORT
So for example java -jar xxxx -DProtocolHost=myMachine -DProtocolPort=3333 would use the bean I want, but if both are missing then you'd get another bean.
@Component("Protocol Enabled")
class YesBean : ProtocolService {}
@Component("Protocol Disabled")
class NoBean : ProtocolService {
Later in my controller I have a:
@Autowired
private lateinit var sdi : ProtocolService
So I've looked at a variety of options:
using both @ConditionalOnProperty and @ConditionalOnExpression, and I can't seem to make any headway.
I'm pretty sure I need to go the Expression route so I wrote some test code that seems to be failing:
@PostConstruct
fun customInit() {
val sp = SpelExpressionParser()
val e1 = sp.parseExpression("'\${ProtocolHost}'")
println("${e1.valueType} ${e1.value}")
println(System.getProperty("ProtocolHost")
}
Which returns:
class java.lang.String ${ProtocolHost}
taco
So I'm not sure if my SpEL parsing is working correctly, because it looks like it's just returning the string "${ProtocolHost}" instead of processing the correct value. I'm assuming this is why all the attempts I've made in the expression language are failing - and thus why I'm stuck.
Any assistance would be appreciated!
Thanks
Update
I did get things working by doing the following
in my main:
val protocolPort: String? = System.getProperty("ProtocolPort", System.getenv("ProtocolPort"))
val protocolHost: String? = System.getProperty("ProtocolHost", System.getenv("ProtocolHost"))
System.setProperty("use.protocol", (protocolHost != null && protocolPort != null).toString())
runApplication<SddfBridgeApplication>(*args)
And then on the bean definitions:
@ConditionalOnProperty(prefix = "use", name = arrayOf("protocol"), havingValue = "false", matchIfMissing = false)
@ConditionalOnProperty(prefix = "use", name = arrayOf("protocol"), havingValue = "false", matchIfMissing = false)
However, this feels like a hack, and I'm hoping it could be done directly in SpEL instead of pre-setting vars ahead of time.
A: This sounds like a perfect use case for Java based bean configuration:
@Configuration
class DemoConfiguration {
@Bean
fun createProtocolService(): ProtocolService {
val protocolPort: String? = System.getProperty("ProtocolPort", System.getenv("ProtocolPort"))
val protocolHost: String? = System.getProperty("ProtocolHost", System.getenv("ProtocolHost"))
return if(!protocolHost.isNullOrEmpty() && !protocolPort.isNullOrEmpty()) {
YesBean()
} else {
NoBean()
}
}
}
open class ProtocolService
class YesBean : ProtocolService()
class NoBean : ProtocolService()
You might also want to look into Externalized Configurations to replace System.getProperty() and System.getenv().
This would then look like this:
@Configuration
class DemoConfiguration {
@Bean
fun createProtocolService(@Value("\${protocol.port:0}") protocolPort: Int,
@Value("\${protocol.host:none}") protocolHost: String): ProtocolService {
return if (protocolHost != "none" && protocolPort != 0) {
YesBean()
} else {
NoBean()
}
}
}
| {
"language": "en",
"url": "https://stackoverflow.com/questions/53043564",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
} |
Q: Creating queue for Flask backend that can handle multiple users I am creating a robot that has a Flask and React (running on raspberry pi zero) based interface for users to request it to perform tasks. When a user requests a task I want the backend to put it in a queue, and have the backend constantly looking at the queue and processing it on a one-by-one basis. Each tasks can take anywhere from 15-60 seconds so they are pretty lengthy.
Currently I just immediately do the task in the same Python process that is running the Flask server, and from testing locally it seems like I can go to the React app in two different browsers and request tasks at the same time, and it looks like the Raspberry Pi is trying to run them in parallel (from what I'm seeing in the printed logs).
What is the best way to allow multiple users to go to the front-end and queue up tasks? When multiple users go to the React app I assume they all connect to the same instance of the back-end. So is it enough just to add a deque to the back-end and protect it with a mutex lock (what is the Pythonic way to use mutexes?)? Or is this too simple? Do I need some other process or method to implement the task queue (such as writing/reading to an external file to act as the queue)?
A: In general, the most popular way to run tasks in Python is using Celery. It is a Python framework that runs on a separate process, continuously checking a queue (like Redis or AMQP) for tasks. When it finds one, it executes it, and logs the result to a "result backend" (like a database or Redis again). Then you have the Flask servers just push the tasks to the queue.
In order to notify the users, you could use polling from the React app, which is just requesting an update every 5 seconds until you see from the result backend that the task has completed successfully. As soon as you see that, stop polling and show the user the notification.
You can easily have multiple worker processes run in parallel if the app becomes large enough to need it. In general, you just need to remember to have every process do what it's meant to do: Flask servers should answer web requests, and Celery workers should process tasks. Not the other way around.
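To make the shape of that concrete, here is a minimal, untested sketch of the split described above; the broker URL, task body and route names are placeholders of my own, not something prescribed by Celery or Flask:
from celery import Celery
from flask import Flask, jsonify
app = Flask(__name__)
# Celery talks to a broker (Redis here) and stores results in a result backend
celery = Celery(__name__, broker="redis://localhost:6379/0",
                backend="redis://localhost:6379/0")
@celery.task
def run_robot_task(task_name):
    # the 15-60 second work happens here, inside the Celery worker process
    return "done"
@app.route("/tasks/<task_name>", methods=["POST"])
def enqueue(task_name):
    # Flask only pushes the task onto the queue and returns immediately
    result = run_robot_task.delay(task_name)
    return jsonify({"task_id": result.id}), 202
@app.route("/tasks/status/<task_id>")
def status(task_id):
    # the React app polls this endpoint until the state becomes SUCCESS
    return jsonify({"state": run_robot_task.AsyncResult(task_id).state})
The worker runs as its own process (for example started with celery -A yourmodule worker), so the Flask server never blocks on a robot task.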
| {
"language": "en",
"url": "https://stackoverflow.com/questions/63766979",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: Flutter: How to determine cacheSize of Images before launch? The Flutter DevTools displayed the message: Consider resizing the asset ahead of time, supplying a cacheWidth parameter of 35, a cacheHeight parameter of 35, or using a ResizeImage.
Because my image size depends on the screen size I am searching for a way to set it dynamically.
I attempted that by using a LayoutBuilder but it feels a little bit too complicated to do this with every image.
Does anybody already have experience with that and can give any advice?
Thanks in advance!
| {
"language": "en",
"url": "https://stackoverflow.com/questions/67761359",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "7"
} |
Q: RestApi post request to specific URL I'm working on an integration with some REST API and I need to make calls to their URLs to receive the data.
I'm just wondering if it's possible to use a REST web-service which will be mapped to that certain URL instead of the local one, and later on I will write the client side that will be mapped to these calls.
for example:
@Path("/URL")
public class MessageRestService {
@GET
@Path("/{param}")
public Response printMessage(@PathParam("param") String msg) {
String result = "Restful example : " + msg;
return Response.status(200).entity(result).build();
}
}
I can't make straight API calls from the client side, for example using AngularJS, because I get this error:
Response to preflight request doesn't pass access control check: No 'Access-Control-Allow-Origin' header is present on the requested resource. Origin 'http://localhost:63342' is therefore not allowed access. The response had HTTP status code 400.
I did find code samples for straight API calls to URLs from Java, but it looks messy, especially when you have to create it for a lot of API calls:
import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStreamReader;
import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.MalformedURLException;
import java.net.URL;
public class Connection {
public static void main(String[] args) {
try {
URL url = new URL("INSERT URL HERE");
HttpURLConnection conn = (HttpURLConnection) url.openConnection();
conn.setDoOutput(true);
conn.setRequestMethod("POST");
conn.setRequestProperty("Content-Type", "application/json");
String messageToPost = "POST";
OutputStream os = conn.getOutputStream();
os.write(messageToPost.getBytes()); // write the request body
os.flush();
conn.connect();
BufferedReader br = new BufferedReader(new InputStreamReader(
(conn.getInputStream())));
String output;
System.out.println("Output from Server .... \n");
while ((output = br.readLine()) != null) {
System.out.println(output);
}
conn.disconnect();
} catch (MalformedURLException e) {
e.printStackTrace();
} catch (IOException e) {
e.printStackTrace();
}
}
}
A: You are facing a same origin policy issue.
This is because your client-side (web browser) application is fetched from Server-A, while it tries to interact with data on Server-B.
*
*Server-A is wherever you application is fetched from (before it is displayed to the user on their web browser).
*Server-B is localhost, where your mock service is deployed to
For security reasons, by default, only code originating from Server-B can talk to Server-B (over-simplifying a little bit). This is meant to prevent malicious code from Server-A to hijack a legal application from Server-B and trick it into manipulating data on Server-B, behind the user's back.
To overcome this, if a legal application from Server-A needs to talk to Server-B, Server-B must explicitly allow it. For this you need to implement CORS (Cross Origin Resource Sharing) - try googling this, you will find plenty of resources that explain how to do it. https://www.html5rocks.com/en/tutorials/cors/ is also a great starting point.
However, as your Server-B/localhost service is just a mock service used during development and test, if your application is simple enough, you may get away with the mock service simply adding the following HTTP headers to all its responses:
Access-Control-Allow-Origin:*
Access-Control-Allow-Headers:Keep-Alive,User-Agent,Content-Type,Accept [enhance with whatever you use in your app]
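For illustration, a minimal sketch of how the mock service could add those headers to every response with a plain servlet filter (the class name is made up here, and the filter still has to be registered in web.xml or with @WebFilter):
import java.io.IOException;
import javax.servlet.*;
import javax.servlet.http.HttpServletResponse;
public class CorsFilter implements Filter {
    @Override
    public void doFilter(ServletRequest req, ServletResponse res, FilterChain chain)
            throws IOException, ServletException {
        HttpServletResponse response = (HttpServletResponse) res;
        // dev/test only: allow any origin and the headers the AngularJS client sends
        response.setHeader("Access-Control-Allow-Origin", "*");
        response.setHeader("Access-Control-Allow-Headers",
                "Keep-Alive,User-Agent,Content-Type,Accept");
        chain.doFilter(req, res);
    }
    @Override
    public void init(FilterConfig filterConfig) {}
    @Override
    public void destroy() {}
}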
As an alternative solution (during dev/tests only!) you may try forcing the web browser to disregard the same origin policy (e.g. --disable-web-security for Chrome) - but this is dangerous if you do not pay attention to use separate instances of the web browser for your tests and for your regular web browsing.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/40996013",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: What is the use of a package body without package specification in Oracle? I have faced this question in one interview.
Without the specification, the package body is invalid, but they asked about its advantage. Can you explain what the use of a package body without a package specification is in Oracle?
A: As you've said, package body without its specification is useless:
SQL> create package body pkg_test as
2 procedure p_test;
3 end;
4 /
Warning: Package Body created with compilation errors.
SQL> show err
Errors for PACKAGE BODY PKG_TEST:
LINE/COL ERROR
-------- -----------------------------------------------------------------
0/0 PL/SQL: Compilation unit analysis terminated
1/14 PLS-00201: identifier 'PKG_TEST' must be declared
1/14 PLS-00304: cannot compile body of 'PKG_TEST' without its
specification
SQL> exec pkg_test.p_test;
BEGIN pkg_test.p_test; END;
*
ERROR at line 1:
ORA-06550: line 1, column 7:
PLS-00201: identifier 'PKG_TEST.P_TEST' must be declared
ORA-06550: line 1, column 7:
PL/SQL: Statement ignored
SQL>
Therefore, I see no advantage(s) in having a package body without its specification.
Perhaps it can be used as a code graveyard - you know, there are some procedures and functions you wrote but don't need any more - create a package body and put all that code in there. You won't be able to use them, but you will be able to have a look if necessary. Something like your own code repository stored in the database, instead of scattered all over your hard disk in different directories and different kinds of files (.SQL, .PRC, .PKG, ...).
A: I think the premise of the question is misleading and probably wrong. The interviewer perhaps wanted you to know the difference between variables and procedures that are not defined in the package specification but are defined and called or used internally within the package body. A package body can be used to define local procedures and variables without them having been defined inside the package specification.
create or replace package pkg_spec_test
AS
procedure p1;
v_1 NUMBER := 10;
END;
/
create or replace package body pkg_spec_test
AS
procedure p1 AS
BEGIN
dbms_output.put_line('HELLO. I can be called externally');
END p1;
procedure p2 AS
BEGIN
dbms_output.put_line('I''m private to package body.');
END;
procedure main as
begin
p2; --called within the body;
END main;
END pkg_spec_test;
/
begin
pkg_spec_test.p1;
end;
/
Works fine.
HELLO. I can be called externally
PL/SQL procedure successfully completed.
begin
pkg_spec_test.p2;
end;
/
Causes error
Error starting at line : 30 in command -
begin
pkg_spec_test.p2;
end;
Error report -
ORA-06550: line 2, column 15:
PLS-00302: component 'P2' must be declared
The one below works fine
begin
pkg_spec_test.main;
end;
/
I'm private to package body.
PL/SQL procedure successfully completed.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/53953508",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: How to keep a login session from a cross origin (CORS) Spring security without sending another authentication request I have a Spring Boot app server protected by Spring Security. The user is authenticated with a username and password when they first log in. If I use Spring MVC (same origin), I don't have to re-login every time I call an API. But when I call the API from an Angular app (cross-origin), I have to provide authorization every time I refresh the page.
Is it possible to keep my login session without having to send an auth every time I refresh the page? Do I need some kind of HTTP interceptor service to check the response from the Spring server manually?
The REST API I tried to call
@CrossOrigin(origins = "http://localhost:4200")
@RestController
public class TestControllers {
private final AtomicLong counter = new AtomicLong();
@GetMapping("/greeting")
public MessageModel greeting (@RequestParam(value = "name", defaultValue = "World") String name) {
return new MessageModel(counter.incrementAndGet(),"Hello, " + name + "!");
}
private class MessageModel{
private long id;
private String content;
//Constructor, getter & setter
}
}
Auth controller
@RestController
@RequestMapping("/api/v1")
public class BasicAuthController {
@GetMapping(path = "/basicauth")
public AuthenticationModel basicauth() {
return new AuthenticationModel("You are authenticated");
}
class AuthenticationModel {
private String message;
//Constructor, getter & setter
}
}
The security configuration
@Configuration
@EnableWebSecurity(debug = true)
public class SpringSecurityConfig extends WebSecurityConfigurerAdapter {
@Override
protected void configure(HttpSecurity http) throws Exception {
http.cors().and()
.authorizeRequests()
.requestMatchers(CorsUtils::isPreFlightRequest).permitAll()
.antMatchers("/**").permitAll()
.anyRequest().authenticated()
.and()
.httpBasic();
}
@Override
protected void configure(AuthenticationManagerBuilder auth) throws Exception {
PasswordEncoder encoder = new BCryptPasswordEncoder();
auth.inMemoryAuthentication()
.passwordEncoder(encoder)
.withUser("user")
.password(encoder.encode("asdasd"))
.roles("USER");
}
@Bean
CorsConfigurationSource corsConfigurationSource() {
CorsConfiguration configuration = new CorsConfiguration();
configuration.setAllowedOrigins(Arrays.asList("http://localhost:4200"));
configuration.setAllowedMethods(Arrays.asList("GET","POST"));
UrlBasedCorsConfigurationSource source = new UrlBasedCorsConfigurationSource();
source.registerCorsConfiguration("/**", configuration);
return source;
}
}
The Angular authentication service
authenticationService(username: string, password: string) {
return this.http.get('http://localhost:8080/api/v1/basicauth',
{ headers: { authorization: this.createBasicAuthToken(username, password) } }).pipe(map((res) => {
this.username = username;
this.password = password;
this.registerSuccessfulLogin(username, password);
}));
}
A: You need an interceptor for your Angular client, so make a new injectable like this:
@Injectable()
export class AuthInterceptor implements HttpInterceptor {
constructor(private authenticationService: AuthenticationService) {}
intercept(request: HttpRequest<any>, next: HttpHandler): Observable<HttpEvent<any>> {
const username = this.authenticationService.username; //get your credentials from wherever you saved them after authentification
const password = this.authenticationService.password;
if (username && password) {
request = request.clone({
setHeaders: {
Authorization: this.createBasicAuthToken(username, password),
}
});
}
return next.handle(request);
}
}
and add this to your providers located in app.module.ts:
{provide: HTTP_INTERCEPTORS, useClass: AuthInterceptor, multi: true},
This will add your authentication data to each request so you don't have to login every time.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/62368879",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: Disable layout autoresize in layout.js I've got an application made using primefaces which in turn uses the jquery ui layout plugin to change the size of four divs (north, south, west and center) when you resize the browser window.
I'm not allowed to remove the script myself as it's part of the application so I need a way to disable this resize functionality after the page has loaded.
If there's a way to unset the script after it's loaded that would work fine as I don't need it on this particular page.
Here's what I've got so far without much luck
function InitializeLayout(sElement) {
var options = {
resizeWithWindow: false,
resizable: false,
closable: false,
initPanes: false,
west__resizable: false,
west__closable: false,
defaults : {
autoResize: false
},
west : {
autoResize: false
}
};
oLayout = $(sElement).layout(options);
console.log(oLayout.state);
}
jQuery(document).ready(function() {
InitializeLayout("#layout-body");
});
https://code.google.com/p/primefaces/source/browse/primefaces/trunk/src/main/resources/META-INF/resources/primefaces/layout/layout.js?r=7990
| {
"language": "en",
"url": "https://stackoverflow.com/questions/28564634",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: Public domain to private intranet server, with https I have servers set up on a huge local intranet/network where they host local websites; however, I want HTTPS on the websites hosted from the servers. I need HTTPS since the websites use HTML5 and use phone cameras to take pictures and so forth. And I can't self-sign a certificate since I don't want the users to have to see errors and feel unsafe.
But the servers have to be private, inside the routers' firewalls. I own a public domain, let's call it www.example.se. It would be really nice to go to www.example.se and have it point to my private IP, with the domain name in the browser and HTTPS. Since it's a public domain, HTTPS should be easy to fix?
The servers have two local DNS IP addresses and one local DNS name, which are all unreachable if you are not within the same network. The public IPs of the machines have almost all ports blocked from outside. The only way into the network is being directly connected to it.
I'm having a hard time getting my head around how to fix it, or if it's even doable. I would like some tips on how to fix this, or a suggestion on how to make this work with some other solution. I just need the HTTPS to access user media (navigator.mediaDevices.getUserMedia(constraints)), and it's nice to have HTTPS to give the users a sense of security.
EDIT 2017-11-20: Adding some more information.
From one of the client computers inside the same router and firewalls, I get this information when grepping for DNS servers.
*
*IP4.ADDRESS[1]: XXX.XX.133.231/22
*IP4.GATEWAY: XXX.XX.132.1
*IP4.DNS[1]: XXX.XX.132.3
*IP4.DNS[2]: XXX.XX.172.2
*IP4.DOMAIN[1]: 'EXAMPLE'.local
*IP6.ADDRESS[1]: fe80::xxx:xxx:xxx:xxxx/64
*IP6.GATEWAY: --
A: If you have the DHCP server within the intranet under your control, you can specify a DNS server that everyone has to use. That DNS server can point to local IP addresses. Then, it will look like an ordinary website to visitors within your network.
If you want to connect via HTTPS, you would have to use something like Let's Encrypt to get a certificate or you can self-sign a certificate, although modern browsers will throw up an error if you self-sign.
Edit:
For HTTPS, it shouldn't be too big of a deal to get a CA to sign you a cert. If you don't want any connections to the outside world (air-gapped or something), then you need to find a CA to sign a certificate for you (costs $$$). HTTPS shouldn't be too big of a need for you, because only people within your network could carry out an attack, and it seems like you have it locked down pretty well.
The best option for you would be to self-sign and have all the browsers within the network trust your certificate. That would be free and easy to do with no need to connect to the outside world.
For DNS, adding a DNS entry to point to your internal server will not affect other machines. All it will do is tell the other computers on your network that "www.example.se" exists at 192.168.1.1 (or whatever the internal ip of your server is).
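As an illustration only (host name and address are placeholders), a single record on the internal DNS server is enough for that - for example, with dnsmasq:
# dnsmasq configuration on the internal DNS server (illustrative values)
address=/www.example.se/192.168.1.1
A BIND zone-file A record, or even a hosts-file entry on each client, expresses the same mapping.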
Do you have a DNS server on your network or are you communicating using IP addresses?
| {
"language": "en",
"url": "https://stackoverflow.com/questions/47378343",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: Laravel eloquent order by subquery I have a problem with ordering by columns in a subquery (lastname, firstname).
I already tried this code as suggested by other posts:
->with(['customer' => function ($query) {
$query->orderBy("lastname", "asc")
->orderBy("firstname", "asc");
}])
Here is my full code, but it doesn't work.
return Membership::forCompany($companyId)
->whereIn('state', ['ATTIVA', 'IN ATTESA DI ESITO', 'DA INVIARE'])
->where(function ($query) {
$query->where('end_date', '>=', Carbon::now()->toDateString())
->orWhereNull('end_date');
})
->with('federation')
->with(['customer' => function ($query) {
$query->orderBy("lastname", "asc")
->orderBy("firstname", "asc");
}]);
Here are the relationships:
In the Customer model I have:
public function memberships() {
return $this->hasMany('App\Models\Membership');
}
In the Membership model I have:
public function customer() {
return $this->belongsTo("App\Models\Customer");
}
A: Try orderBy() with join() like:
$memberships = \DB::table("memberships")
->where("company_id", $companyId)
->where(function ($query) {
$query->where('end_date', '>=', Carbon::now()->toDateString())
->orWhereNull('end_date');
})
->join("customers", "memberships.customer_id", "customers.id")
->select("customers.*", "memberships.*")
->orderBy("customers.lastname", "asc")
->get();
dd($memberships);
Let me know if you are still having the issue. Note: the code is not tested, so you may need to verify it yourself.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/56522288",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: golang variable naming convention across packages Recently I started studying go-ethereum, and this was my first time using Go. C++ is my main language and I'm a bit puzzled by the variable names in the go-ethereum project.
core/state/managed_state.go:25:type account struct {
core/state/state_object.go:98:type Account struct {
There are both "account" and "Account" types in state package, which seems weird.
I've checked Naming convention for similar Golang variables, and it still looks terrible.
And what I've found is that they use a lot of "Node" structs in different packages. They definitely have different purposes and structures.
Are these kinds of naming conventional and popular in Go?
If you have any good references for naming conventions in Go (e.g. open source projects or books), could you please name some of them? It would be really appreciated.
A:
There are both "account" and "Account" types in state package, which seems weird.
There is a meaningful difference between these two names in the language specification.
From the Go Language Specification:
An identifier may be exported to permit access to it from another
package. An identifier is exported if both:
*
*the first character of the identifier's name is a Unicode upper case
letter (Unicode class "Lu"); and
*the identifier is declared in the
package block or it is a field name or method name.
All other identifiers are not exported.
So, taking a closer look at the go-ethereum codebase, here are my observations:
*
*type account in managed_state.go is an internal implementation detail, providing a type for the accounts field in the exported ManageState struct
*type Account in state_object.go is an exported identifier
I think the implementation choice make more sense when you look at the generated documentation.
In particular, the Account type is front and center, detailing a data structure that's of interest to consumers of the package.
When you look at the documentation for the ManageState struct, the unexported fields are purposely not documented. This is because they're internal details that don't impact the exported interface and could easily be changed without impacting users of the package.
In regards to naming recommendations, see the Names section in Effective Go.
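As a tiny, self-contained illustration of that rule (this is not go-ethereum code, just an example):
package state
// Account is exported: other packages that import "state" can use state.Account.
type Account struct {
    Balance uint64
}
// account is unexported: it is visible only inside the state package itself.
type account struct {
    nonce uint64
}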
| {
"language": "en",
"url": "https://stackoverflow.com/questions/50939125",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: Libgdx: Dynamically increasing size of Pixmap I am new to Libgdx. I have a requirement where I have multiple balls in a game and I want to increase the ball diameter every time the user touches the ball. I used the method pixmap.drawCircle() to create the ball and every time a user touches the ball, I call a method increaseBallDia() to increase the ball size. Since there is no method to increase the height and width of an existing Pixmap, in increaseBallDia() method, I create a new Pixmap with increased width and height and then draw a ball with incremented diameter.
The problem is that my increaseBallDia() is not working properly and there is no increment to the height and width of the Pixmap. Due to this, as the ball diameter increases, it covers the entire Pixmap area and appears like a rectangle instead of a circle.
Any pointers on how to solve this would be greatly appreciated.
Following is the relevant code snippet:
public class MyActor extends Actor {
int actorBallDia = 90;
Pixmap pixmap = new Pixmap(actorBallDia, actorBallDia, Pixmap.Format.RGBA8888);
float actorX, actorY;
Texture texture;
boolean actorTouched;
public void increaseBallDia() {
actorBallDia = actorBallDia + 5;
int radius = actorBallDia / 2 - 1;
Pixmap pixmap = new Pixmap(actorBallDia, actorBallDia,
Pixmap.Format.RGBA8888);
pixmap.drawCircle(pixmap.getWidth() / 2 , pixmap.getHeight() / 2,
radius);
pixmap.fillCircle(pixmap.getWidth() / 2 , pixmap.getHeight() / 2,
radius);
texture = new Texture(pixmap);
}
public void createYellowBall() {
pixmap.setColor(Color.YELLOW);
int radius = actorBallDia / 2 - 1;
pixmap.drawCircle(pixmap.getWidth() / 2, pixmap.getHeight() / 2,
radius);
pixmap.setColor(Color.YELLOW);
pixmap.fillCircle(pixmap.getWidth() / 2, pixmap.getHeight() / 2,
radius);
texture = new Texture(pixmap);
}
@Override
public void draw(Batch batch, float alpha) {
if (texture != null) {
batch.draw(texture, actorX, actorY);
}
}
public void addTouchListener() {
addListener(new InputListener() {
@Override
public boolean touchDown(InputEvent event, float x, float y,
int pointer, int button) {
MyActor touchedActor = (MyActor) event.getTarget();
touchedActor.actorTouched = true;
return true;
}
@Override
public void touchUp(InputEvent event, float x, float y,
int pointer, int button) {
MyActor touchedActor = (MyActor) event.getTarget();
touchedActor.actorTouched = false;
}
});
}
@Override
public void act(float delta) {
if (actorTouched) {
increaseBallDia();
}
}
Regards,
RG
A: You should create the pixmap with its biggest possible dimensions. When the ball is meant to be small, just scale it down as you wish.
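A rough, untested sketch of that idea, using the question's field names where possible (MAX_DIA and currentDia are my own additions): build the circle texture once at the largest diameter and let the width/height overload of Batch.draw() scale it down.
private static final int MAX_DIA = 200;   // largest diameter the ball can reach
private int currentDia = 90;              // logical diameter, grows on touch
private Texture createBallTexture() {
    Pixmap pixmap = new Pixmap(MAX_DIA, MAX_DIA, Pixmap.Format.RGBA8888);
    pixmap.setColor(Color.YELLOW);
    pixmap.fillCircle(MAX_DIA / 2, MAX_DIA / 2, MAX_DIA / 2 - 1);
    Texture ballTexture = new Texture(pixmap);
    pixmap.dispose();                     // the pixmap is no longer needed after upload
    return ballTexture;
}
@Override
public void draw(Batch batch, float alpha) {
    // draw the full-size texture scaled to the current logical diameter
    batch.draw(texture, actorX, actorY, currentDia, currentDia);
}
public void increaseBallDia() {
    currentDia = Math.min(currentDia + 5, MAX_DIA);  // no new Pixmap or Texture needed
}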
| {
"language": "en",
"url": "https://stackoverflow.com/questions/26880366",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: How to build a values density heatmap in Bokeh for timed window occurrences calculated in Spark? According to https://stackoverflow.com/a/48692943/1759063 it is possible to aggregate the occurrence of values per time unit like that:
+---------+----------+------------------------+------------+------+
|device_id|read_date |ids |counts |top_id|
+---------+----------+------------------------+------------+------+
|device_A |2017-08-05|[4041] |[3] |4041 |
|device_A |2017-08-06|[4041, 4041] |[3, 3] |4041 |
|device_A |2017-08-07|[4041, 4041, 4041] |[3, 3, 4] |4041 |
|device_A |2017-08-08|[4041, 4041, 4041] |[3, 4, 3] |4041 |
|device_A |2017-08-09|[4041, 4041, 4041] |[4, 3, 3] |4041 |
|device_A |2017-08-10|[4041, 4041, 4041, 4045]|[3, 3, 1, 2]|4041 |
|device_A |2017-08-11|[4041, 4041, 4045, 4045]|[3, 1, 2, 3]|4045 |
|device_A |2017-08-12|[4041, 4045, 4045, 4045]|[1, 2, 3, 3]|4045 |
|device_A |2017-08-13|[4045, 4045, 4045] |[3, 3, 3] |4045 |
+---------+----------+------------------------+------------+------+
I'd like to plot that in Zeppelin with X being read_time, Y being the integer ID value, and counts turned into a heatmap. How can I plot that with Bokeh and pandas?
A: This kind of DataFrame is based on a plainer DataFrame where ids and counts are not grouped into arrays. It is more convenient to use the non-grouped DataFrame to build that with Bokeh:
https://discourse.bokeh.org/t/cant-render-heatmap-data-for-apache-zeppelins-pyspark-dataframe/8844/8
Instead of the ids/counts columns grouped into lists, we have a raw table with one line per unique id ('values') and the value of the count ('index'), and each line has its 'window_time':
import pandas as pd
from bokeh.io import show
from bokeh.layouts import gridplot
from bokeh.models import ColumnDataSource, ColorBar, LogColorMapper
from bokeh.plotting import figure
rowIDs = pdf['values']
colIDs = pdf['window_time']
A = pdf.pivot_table('index', 'values', 'window_time', fill_value=0)
source = ColumnDataSource(data={'x':[pd.to_datetime('Jan 24 2022')] #left most
,'y':[0] #bottom most
,'dw':[pdf['window_time'].max()-pdf['window_time'].min()] #TOTAL width of image
#,'dh':[pdf['delayWindowEnd'].max()] #TOTAL height of image
,'dh':[1000] #TOTAL height of image
,'im':[A.to_numpy()] #2D array using to_numpy() method on pivotted df
})
color_mapper = LogColorMapper(palette="Viridis256", low=1, high=20)
plot = figure(toolbar_location=None,x_axis_type='datetime')
plot.image(x='x', y='y', source=source, image='im',dw='dw',dh='dh', color_mapper=color_mapper)
color_bar = ColorBar(color_mapper=color_mapper, label_standoff=12)
plot.add_layout(color_bar, 'right')
#show(plot)
show(gridplot([plot], ncols=1, plot_width=1000, high=pdf['index'].max()))
And the result:
| {
"language": "en",
"url": "https://stackoverflow.com/questions/70890561",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: efficient way to draw pixel art in pygame I'm making a very simple pixel art software in pygame. My logic was creating a grid class, which has a 2D list containing 0's. When I click, the grid approximates the row and column selected, and marks the cell with a number corresponding to the color. For simplicity, let's say '1'.
The code works correctly, but it's slow. If the number of rows and columns is less than or equal to 10, it works perfectly, but if it's more, it's very laggy.
I think the problem is that I'm updating the entire screen every time, and, since the program has to check EVERY cell, it can't handle a bigger list.
import pygame
from grid import Grid
from pincel import Pincel
from debugger import Debugger
from display import Display
from pygame.locals import *
pygame.init()
pygame.mixer_music.load("musica/musica1.wav")
pygame.mixer_music.play(-1)
width = 1300
height = 1300
screen = pygame.display.set_mode((1366, 768), pygame.RESIZABLE)
pygame.display.set_caption("SquareDraw")
#Grid Creator
numberOfRows = 25
numberOfColumns = 25
grid = Grid(numberOfRows, numberOfColumns)
# Medidas
basicX = width / numberOfColumns
basicY = height / numberOfRows
#Tool Creator
pincel = Pincel(2)
#variáveis de controle
running = 1
#Initial values
grid.equipTool(pincel)
#variáveis de controle de desenho
clicking = 0
def drawScreen(screen, grid, rows, columns, basicX, basicY):
for i in range(rows):
for j in range(columns):
if grid[i][j]:
print('yes')
print(i, j)
pygame.draw.rect(screen, (0, 0, 0), (j * basicX, i * basicY, basicX, basicY))
while running:
screen.fill((255, 255, 255))
Display.drawScreen(screen, grid.board, grid.getRows(), grid.getColumns(), basicX, basicY)
pygame.display.flip()
events = pygame.event.get()
for event in events:
if (event.type == pygame.QUIT) or (event.type == pygame.KEYDOWN and event.key == pygame.K_ESCAPE):
running = 0
if event.type == pygame.MOUSEBUTTONDOWN or clicking:
clicking = 1
x, y = pygame.mouse.get_pos()
Debugger.printArray2D(grid.board)
print('')
xInGrid = int(x / basicX)
yInGrid = int(y / basicY)
grid.ferramenta.draw(grid.board, xInGrid, yInGrid)
Debugger.printArray2D(grid.board)
print('')
if event.type == pygame.MOUSEBUTTONUP:
clicking = 0
if event.type == pygame.VIDEORESIZE:
width = event.w
height = event.h
basicX = width / numberOfColumns
basicY = height / numberOfRows
print(width, height)
pygame.quit()
The class grid contains the 2D list. The class "Pincel" marks the cells and The class "Debugger" is just for printing lists or anything related to debugging.
Is there a way to update only the part of the screen that was changed? If so, how can I apply that in my logic?
Thanks in advance :)
A: A few things:
*
*Use the grid array to store the on\off blocks of the screen. It only gets read when the screen is resized and needs a full redraw
*When a new rectangle is turned on, draw the rectangle directly in the event handler and update the grid array. There is no need to redraw the entire screen here.
*In the resize event, reset the screen mode to the new size then redraw the entire screen using the grid array. This is the only time you need to do a full redraw.
Here is the updated code:
import pygame
#from grid import Grid
#from pincel import Pincel
#from debugger import Debugger
#from display import Display
from pygame.locals import *
pygame.init()
#pygame.mixer_music.load("musica/musica1.wav")
#pygame.mixer_music.play(-1)
width = 1000
height = 1000
screen = pygame.display.set_mode((width, height), pygame.RESIZABLE)
pygame.display.set_caption("SquareDraw")
#Grid Creator
numberOfRows = 250
numberOfColumns = 250
#grid = Grid(numberOfRows, numberOfColumns)
grid = [[0 for x in range(numberOfRows)] for y in range(numberOfColumns)] # use array for grid: 0=white, 1=black
# Medidas
basicX = width / numberOfColumns
basicY = height / numberOfRows
#Tool Creator
#pincel = Pincel(2)
#xx
running = 1
#Initial values
#grid.equipTool(pincel)
#xx
clicking = 0
def drawScreen(screen, grid, basicX, basicY): # draw rectangles from grid array
for i in range(numberOfColumns):
for j in range(numberOfRows):
if grid[i][j]:
#print('yes')
#print(i, j)
pygame.draw.rect(screen, (0, 0, 0), (j * basicX, i * basicY, basicX, basicY))
screen.fill((255, 255, 255)) # start screen
while running:
events = pygame.event.get()
for event in events:
if (event.type == pygame.QUIT) or (event.type == pygame.KEYDOWN and event.key == pygame.K_ESCAPE):
running = 0
if event.type == pygame.MOUSEBUTTONDOWN or clicking: # mouse button down
clicking = 1
x, y = pygame.mouse.get_pos()
#Debugger.printArray2D(grid.board)
#print('')
xInGrid = int(x / basicX)
yInGrid = int(y / basicY)
grid[yInGrid][xInGrid] = 1 # save this point = 1, for screen redraw (if resize)
pygame.draw.rect(screen, (0, 0, 0), (xInGrid * basicX, yInGrid * basicY, basicX, basicY)) # draw rectangle
#grid.ferramenta.draw(grid.board, xInGrid, yInGrid)
#Debugger.printArray2D(grid.board)
#print('')
pygame.display.flip() # update screen
if event.type == pygame.MOUSEBUTTONUP:
clicking = 0
if event.type == pygame.VIDEORESIZE: # screen resized, must adjust grid height, width
width = event.w
height = event.h
basicX = width / numberOfColumns
basicY = height / numberOfRows
#print(width, height)
screen = pygame.display.set_mode((width, height), pygame.RESIZABLE) # reset screen with new height, width
screen.fill((255, 255, 255)) # clear screen
drawScreen(screen, grid, basicX, basicY) # redraw rectangles
pygame.display.flip() # update screen
pygame.quit()
| {
"language": "en",
"url": "https://stackoverflow.com/questions/63316612",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: Can I have the Main-Class of a jar which exists in a different jar in the classpath I have a scenario where I need to make a dummy executable jar with a Main-Class from a different jar which is on the classpath (i.e., the main class is not in the dummy jar). Is it possible to achieve this?
This is because the jar which has the main class is not an executable jar, but at the same time I don't want to tamper with it since it is a third-party jar, while as per our standard application invoke scripts we should use an executable jar to invoke the application. Hence I created that dummy jar with a Main-Class entry in it. But it's still giving:
Error: Could not find or load main class
e.g. dummy.jar manifest entries:
Main-Class: com.thirdparty.jar.MainClass
Class-Path:thirdpartyapp.jar
Here the com.thirdparty.jar.MainClass is inside thirdpartyapp.jar.
Thanks in advance :)
| {
"language": "en",
"url": "https://stackoverflow.com/questions/22651402",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: Removing one Duplicate value from a String in Java 8 using lambdas I have to remove a duplicate value from a String.
But first I have to filter the given value that needs to be deleted. And if it's a duplicate, return a new String with the first duplicated value removed. And I want to do this using lambdas.
Example.
Input (filter the value: "A")
String input = "A, A, C";
Expected output
"A, C"
But if I have this.
Input (filter a value different to "A")
String input = "A, A, C";
Expected output
"A, A, C"
I.e. if the given filter-value is a duplicate, like "A", which is encountered in the string multiple times, then its first occurrence has to be removed. Otherwise, the same string should be returned.
The white spaces and commas have to be considered in the output too.
I have tried this code:
public class Main {
public static void main(String[] args) {
String mercado = "A, B, A";
mercado = mercado.replaceAll("\\b(A)\\b(?=.*\\b\\1\\b)", "");
System.out.println( mercado );
}
}
But the output is: , B, A
And I have to remove that white space and that comma in front.
A: If I understood your goal correctly, you need a method that expects two string arguments.
And depending on the number of occurrences of the second string in the first string the method will return either the first string intact (if the target value is unique or not present in the first string), or will generate a new string by removing the first occurrence of the target value (the second string).
This problem can be addressed in the following steps:
*
*Create a list by splitting the given string.
*Count the number of occurrences of the target value.
*If the count is <= 1 (i.e. value is unique, or not present at all) return the same string.
*Otherwise remove the target value from the list.
*Combine list elements into a string.
It might be implemented like this:
public static String removeFirstIfDuplicate(String source, String target) {
List<String> sourceList = new ArrayList<>(Arrays.asList(source.split("[\\p{Punct}\\p{Space}]+")));
long targetCount = sourceList.stream()
.filter(str -> str.equals(target))
.count();
if (targetCount <= 1) {
return source;
}
sourceList.remove(target);
return sourceList.stream().collect(Collectors.joining(", "));
}
main()
public static void main(String[] args) {
System.out.println(removeFirstIfDuplicate("A, A, C", "A"));
System.out.println(removeFirstIfDuplicate("A, A, C", "C"));
}
Output
A, C
A, A, C
| {
"language": "en",
"url": "https://stackoverflow.com/questions/71971708",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
} |
Q: Compiling native2ascii maven plugin When I'm compiling my project, I get this problem:
Failed to execute goal org.codehaus.mojo:native2ascii-maven-plugin:1.0-alpha-1:native2ascii (default) on project ViewController: Execution default of goal org.codehaus.mojo:native2ascii-maven-plugin:1.0-alpha-1:native2ascii failed: Error starting Sun's native2ascii: sun.tools.native2ascii.Main -> [Help 1]
Using NetBeans 7.2 and JDK 1.7.07, but if I use JDK 1.6 it works!
How can I compile this when I'm using JDK 1.7?
Thanks for the help!
pom.xml
<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/maven-v4_0_0.xsd">
<modelVersion>4.0.0</modelVersion>
<groupId>${project.parent.groupId}</groupId>
<artifactId>ViewController</artifactId>
<packaging>war</packaging>
<version>${project.parent.version}</version>
<name>ViewController</name>
<url>${project.parent.url}</url>
<organization>
<name>kMicro</name>
</organization>
<parent>
<groupId>com.km.eFarmer</groupId>
<artifactId>eFarmer</artifactId>
<version>1.0.1-alpha1-SNAPSHOT</version>
</parent>
<dependencies>
<dependency>
<groupId>org.glassfish.extras</groupId>
<artifactId>glassfish-embedded-all</artifactId>
<version>3.0</version>
<scope>provided</scope>
</dependency>
<dependency>
<groupId>${project.groupId}</groupId>
<artifactId>DataModel</artifactId>
<version>${project.version}</version>
<scope>provided</scope>
</dependency>
<dependency>
<groupId>${project.groupId}</groupId>
<artifactId>SOAPClient</artifactId>
<version>${project.version}</version>
<scope>provided</scope>
</dependency>
<dependency>
<groupId>org.jboss.cache</groupId>
<artifactId>jbosscache-core</artifactId>
<version>3.2.7.GA</version>
</dependency>
<dependency>
<groupId>org.jboss</groupId>
<artifactId>jboss-common-core</artifactId>
<version>2.2.14.GA</version>
</dependency>
<dependency>
<groupId>jgroups</groupId>
<artifactId>jgroups</artifactId>
<version>2.6.13.GA</version>
</dependency>
<dependency>
<groupId>org.springframework.security</groupId>
<artifactId>spring-security-core</artifactId>
<version>3.0.2.RELEASE</version>
</dependency>
<dependency>
<groupId>org.springframework.security</groupId>
<artifactId>spring-security-web</artifactId>
<version>3.0.2.RELEASE</version>
</dependency>
<dependency>
<groupId>org.springframework.security</groupId>
<artifactId>spring-security-config</artifactId>
<version>3.0.2.RELEASE</version>
</dependency>
<dependency>
<groupId>postgresql</groupId>
<artifactId>postgresql</artifactId>
<version>8.4-701.jdbc4</version>
<scope>test</scope>
</dependency>
<!--dependency>
<groupId>org.springframework.security</groupId>
<artifactId>spring-security-openid</artifactId>
<version>3.0.2.RELEASE</version>
<scope>compile</scope>
</dependency-->
<dependency>
<groupId>com.sun.faces</groupId>
<artifactId>jsf-api</artifactId>
<version>2.0.3-FCS</version>
<scope>provided</scope>
</dependency>
<dependency>
<groupId>com.sun.faces</groupId>
<artifactId>jsf-impl</artifactId>
<version>2.0.3-FCS</version>
<scope>provided</scope>
</dependency>
<dependency>
<groupId>org.openfaces</groupId>
<artifactId>openfaces</artifactId>
<version>3.0.2-KM</version>
</dependency>
<dependency>
<groupId>org.primefaces</groupId>
<artifactId>primefaces</artifactId>
<version>3.3.1</version>
</dependency>
<dependency>
<groupId>org.primefaces</groupId>
<artifactId>primefaces-mobile</artifactId>
<version>0.9.3</version>
</dependency>
<dependency>
<groupId>org.primefaces.themes</groupId>
<artifactId>redmond</artifactId>
<version>1.0.5</version>
</dependency>
<dependency>
<groupId>commons-collections</groupId>
<artifactId>commons-collections</artifactId>
<version>3.2.1</version>
</dependency>
<dependency>
<groupId>commons-beanutils</groupId>
<artifactId>commons-beanutils</artifactId>
<version>1.7.0</version>
</dependency>
<dependency>
<groupId>commons-digester</groupId>
<artifactId>commons-digester</artifactId>
<version>1.8</version>
</dependency>
<dependency>
<groupId>commons-lang</groupId>
<artifactId>commons-lang</artifactId>
<version>2.4</version>
</dependency>
<dependency>
<groupId>cssparser</groupId>
<artifactId>cssparser</artifactId>
<version>0.9.5</version>
</dependency>
<dependency>
<groupId>milyn</groupId>
<artifactId>sac</artifactId>
<version>1.3</version>
</dependency>
<dependency>
<groupId>jdom</groupId>
<artifactId>jdom</artifactId>
<version>1.0</version>
</dependency>
<dependency>
<groupId>jfree</groupId>
<artifactId>jfreechart</artifactId>
<version>1.0.13</version>
</dependency>
<dependency>
<groupId>jfree</groupId>
<artifactId>jcommon</artifactId>
<version>1.0.16</version>
</dependency>
<dependency>
<groupId>jexcelapi</groupId>
<artifactId>jxl</artifactId>
<version>2.6</version>
</dependency>
<dependency>
<groupId>net.sf.jasperreports</groupId>
<artifactId>jasperreports</artifactId>
<version>4.5.1</version>
</dependency>
<dependency>
<groupId>it.eng.spago</groupId>
<artifactId>sbi-utils</artifactId>
<version>3.3.0</version>
</dependency>
<dependency>
<groupId>org.codehaus.groovy</groupId>
<artifactId>groovy</artifactId>
<version>1.7.5</version>
</dependency>
<dependency>
<groupId>commons-io</groupId>
<artifactId>commons-io</artifactId>
<version>2.0</version>
</dependency>
<dependency>
<groupId>commons-fileupload</groupId>
<artifactId>commons-fileupload</artifactId>
<version>1.2.2</version>
</dependency>
<dependency>
<groupId>javax</groupId>
<artifactId>javaee-api</artifactId>
<version>6.0</version>
<scope>provided</scope>
</dependency>
<dependency>
<groupId>org.codehaus.jettison</groupId>
<artifactId>jettison</artifactId>
<version>1.1</version>
</dependency>
<dependency>
<groupId>freemarker</groupId>
<artifactId>freemarker</artifactId>
<version>2.3.9</version>
</dependency>
</dependencies>
<build>
<plugins>
<!--plugin>
<groupId>org.codehaus.mojo</groupId>
<artifactId>native2ascii-maven-plugin</artifactId>
<version>1.0-alpha-1</version>
<configuration>
<dest>${project.build.directory}/classes</dest>
<src>src/main/resources/locale</src>
</configuration>
<executions>
<execution>
<goals>
<goal>native2ascii</goal>
</goals>
<configuration>
<encoding>UTF8</encoding>
</configuration>
</execution>
</executions>
</plugin-->
<plugin>
<groupId>org.codehaus.mojo</groupId>
<artifactId>jasperreports-maven-plugin</artifactId>
<configuration>
<compiler>net.sf.jasperreports.compilers.JRGroovyCompiler</compiler>
<sourceDirectory>src/main/resources/jasperReportSources</sourceDirectory>
<outputDirectory>src/main/webapp/reports</outputDirectory>
<xmlValidation>true</xmlValidation>
</configuration>
<executions>
<execution>
<goals>
<goal>compile-reports</goal>
</goals>
<phase>compile</phase>
</execution>
</executions>
<dependencies>
<dependency>
<groupId>net.sf.jasperreports</groupId>
<artifactId>jasperreports</artifactId>
<version>4.5.1</version>
</dependency>
<dependency>
<groupId>org.codehaus.groovy</groupId>
<artifactId>groovy-all</artifactId>
<version>1.7.5</version>
</dependency>
</dependencies>
</plugin>
<plugin>
<groupId>org.apache.maven.plugins</groupId>
<artifactId>maven-war-plugin</artifactId>
<version>2.1-alpha-1</version>
<configuration>
<archive>
<manifest>
<addDefaultImplementationEntries>true</addDefaultImplementationEntries>
</manifest>
</archive>
</configuration>
</plugin>
</plugins>
<resources>
<resource>
<directory>src/main/resources</directory>
<excludes>
<exclude>resources/**</exclude>
</excludes>
</resource>
<resource>
<directory>src/main/resources/font</directory>
</resource>
</resources>
</build>
<properties>
<netbeans.hint.deploy.server>gfv3ee6</netbeans.hint.deploy.server>
<jdk.version>1.6</jdk.version>
</properties>
A: Update the version of the native2ascii-maven-plugin to the newest version.
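That amounts to changing the <version> element of the plugin declaration in the pom, roughly like this (the version number is only an example of a newer release - check the Codehaus Mojo site for the current one):
<plugin>
    <groupId>org.codehaus.mojo</groupId>
    <artifactId>native2ascii-maven-plugin</artifactId>
    <!-- example of a release newer than 1.0-alpha-1; use the latest available -->
    <version>1.0-beta-1</version>
    <!-- keep the existing <configuration> and <executions> unchanged -->
</plugin>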
A: adding this works for me:
<plugin>
<groupId>org.codehaus.mojo</groupId>
<artifactId>native2ascii-maven-plugin</artifactId>
...
<!-- added for java 7 compilation -->
<dependencies>
<dependency>
<groupId>com.sun</groupId>
<artifactId>tools</artifactId>
<version>1.5.0</version>
<scope>system</scope>
<systemPath>${java.home}/../lib/tools.jar</systemPath>
</dependency>
</dependencies> ...
| {
"language": "en",
"url": "https://stackoverflow.com/questions/12497624",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
} |
Q: Is it possible to restrict switch to use particular cases only in kotlin/JAVA? Is it possible to restrict a switch to using only particular cases?
Here is my scenario :
class XYZ {
public static final String DEFAULT = "DEFAULT";
public static final String BIG_TEXT = "BIG_TEXT";
public static final String BIG_PICTURE = "BIG_PICTURE";
public static final String CAROUSEL = "CAROUSEL";
public static final String GIF = "GIF";
@Retention(RetentionPolicy.SOURCE)
@StringDef({DEFAULT, BIG_TEXT, BIG_PICTURE, CAROUSEL, GIF})
public @interface NotificationStyle {}
@NotificationStyle
public String style() {
if (CollectionUtils.isNotEmpty(carouselItems)) {
return CAROUSEL;
}
if (CollectionUtils.isNotEmpty(gifItems)) {
return GIF;
} else {
return DEFAULT;
}
}
}
So here I have defined one StringDef interface, restricting style() to return only the @NotificationStyle-specified values, and here is my switch case:
// Some other class
XYZ obj = new XYZ()
switch (obj.style()) {
case XYZ.BIG_PICTURE:
//Something something
break;
case XYZ.BIG_PICTURE:
//Something something
break;
case "Not available to execute":
//Something something
break;
default : //Something something
}
I know obj.style() will only return the restricted values, but I want to somehow restrict the switch case from even allowing this case here:
case "Not available to execute":
//Something something
break;
As this will always be unreachable code.
*Please do not look at the code and syntax; I'm just looking for the concept here.
Thanks.
A: You're doing a switch over a String, right? That's why you can, of course, add cases that won't really happen (like "Not available to execute"). Why don't you just change your possible Strings to an enum and make obj.style return a constant from that enum? This is how you can restrict those Strings.
fun style(): XYZValues {
if (true) {
return XYZValues.BIG_TEXT
}
return XYZValues.DEFAULT
}
enum class XYZValues(desc: String) {
DEFAULT("DEFAULT"),
BIG_TEXT("BIG_TEXT")
//more }
}
fun main(args: Array<String>) {
when (style()) {
XYZValues.BIG_TEXT -> println("1")
XYZValues.DEFAULT -> println("2")
}
}
| {
"language": "en",
"url": "https://stackoverflow.com/questions/47173199",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: Access global event object in Firefox The goal: run some functions on .ajaxStart() but only if fired by a certain event.
The code:
$('#loading_indicator').ajaxStart(function() {
if(event != null){
if(event.type == 'hashchange' || event.type == 'DOMContentLoaded'){
$(this).show();
$('#acontents').hide();
$(this).ajaxComplete(function() {
$(this).hide();
$('#acontents').show();
bindClickOnTable();
initFilterInput();
});
}
}
});
The problem: This does not work in Firefox. In Internet Explorer and Chrome I can happily access the event object without passing it to the .ajaxStart() function. In Firefox, however, the event object is undefined.
The obvious but incorrect solution: pass the event object to the function. This will not work because it will pass the ajaxStart event and my checks will not work anymore.
The question: How do I make the global event object accessible within this function?
A: You can store the event object in a variable that you can then use in the other function.
Here is the demo: http://jsfiddle.net/cVDbp/
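In case the fiddle goes away, the idea is roughly this (a sketch of the approach, not the exact fiddle code):
var lastEvent = null;
// remember the event that actually kicked off the work
$(window).on('hashchange', function (event) {
    lastEvent = event;
    // ... start the AJAX request here ...
});
$('#loading_indicator').ajaxStart(function () {
    // use the stored event instead of the global window.event
    if (lastEvent && (lastEvent.type === 'hashchange' || lastEvent.type === 'DOMContentLoaded')) {
        $(this).show();
    }
});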
| {
"language": "en",
"url": "https://stackoverflow.com/questions/9886787",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5"
} |
Q: Delete lines that are found in another file I have 2 files:
text1.txt and text2.txt
How can I do this: if a row is found in text1.txt that matches a row from text2.txt, delete it (or display only the unique rows)?
This is what I have so far:
$a = file('text1.txt');
$b = file('text2.txt');
$contents = '';
foreach($b as $line2) {
foreach($a as $line1) {
if(!strstr($line1, $line2)) {
$contents .= $line1;
}
}
}
file_put_contents('unique.txt', $contents);
A: That will be:
file_put_contents('unique.txt', array_diff(file('text1.txt'), file('text2.txt')));
Since you're loading your files into RAM entirely, I suppose it's an acceptable solution.
You may also want to define your own function to determine if strings are equal. The logic will then be the same, but array_udiff() should be used.
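A sketch of that variant (the comparison rule here - trimmed, case-insensitive - is just an example of a custom equality check):
<?php
// keep lines of text1.txt that have no "equal" line in text2.txt,
// where "equal" is decided by the callback instead of plain string comparison
$unique = array_udiff(
    file('text1.txt'),
    file('text2.txt'),
    function ($a, $b) {
        return strcasecmp(trim($a), trim($b));
    }
);
file_put_contents('unique.txt', $unique);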
| {
"language": "en",
"url": "https://stackoverflow.com/questions/19806386",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: Error response TwitterKit I have one URL request, but the problem is that the response data is nil. With other URL requests the call succeeds, but this one does not work! Any idea?
let client = TWTRAPIClient()
let statusesShowEndpoint = "https://api.twitter.com/1.1/statuses/home_timeline.json"
//let params = ["id": "20"]
var clientError : NSError?
let request = client.urlRequest(withMethod: "GET", url: statusesShowEndpoint, parameters: nil, error: &clientError)
client.sendTwitterRequest(request) { (response, data, connectionError) -> Void in
if connectionError != nil {
print("Error: \(connectionError)")
}
do {
let json = try JSONSerialization.jsonObject(with: data!, options: [])
print("json: \(json)")
} catch let jsonError as NSError {
print("json error: \(jsonError.localizedDescription)")
}
}
The error that appears in the console:
"Request failed: forbidden (403)" UserInfo={NSLocalizedFailureReason=Twitter API error : Your credentials do not allow access to this resource. (code 220), TWTRNetworkingStatusCode=403, NSErrorFailingURLKey=https://api.twitter.com/1.1/statuses/home_timeline.json,
But it is exclusive to this request; the other API requests do not produce this error.
A: Try to use this code, it worked for me :
var userID: String = Twitter.sharedInstance().sessionStore.session.userID
var client = TWTRAPIClient(userID: userID)
| {
"language": "en",
"url": "https://stackoverflow.com/questions/45979401",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: How to generate chart from result I am currently learning SAS programming and am having difficulty figuring out how to generate a pie chart from the results. Any direction someone with more experience can give me is much appreciated.
proc freq data=sashelp.cars;
where lowcase(type)^="hybrid";
table type*origin / nocum nopercent norow nocol;
proc gchart data=???;
============
UPDATE
============
I figured out thanks to the answer on this page what mistake I was making. I was putting two columns in for the pie chart, but not putting the second column into the option for detail.
proc freq data=sashelp.cars;
where lowcase(type)^="hybrid";
table type*origin / nocum nopercent norow nocol;
proc gchart data=sashelp.cars;
where lowcase(type)^="hybrid";
pie origin / detail=type;
run;
quit;
Result:
A: From the official site
title "Types of Vehicles Produced Worldwide (Details)";
proc gchart data=sashelp.cars;
pie type / detail=drivetrain
detail_percent=best
detail_value=none
detail_slice=best
detail_threshold=2
legend
;
run;
quit;
This graph uses the data set entitled CARS found in the SASHELP
library. The DETAIL= option produces an inner pie overlay showing the
percentage that each DRIVETRAIN contributes toward each type of
vehicle. The DETAIL_PERCENT= option and the DETAIL_SLICE= option
control the positioning of the detail slice labels. The DETAIL_VALUE=
option turns off the display of the number of DRIVETRAINS for each
detail slice. The DETAIL_THRESHOLD= option shows all detail slices
that contribute more than 2% of the entire pie. The LEGEND option
displays a legend for the slice names and their midpoint values,
instead of printing them beside the slices.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/31997282",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: MS SSRS Report Builder - Semantic query compilation failed Semantic query compilation failed: e InvalidParameterValueCardinality The parameter "Region1" requires a single value.
However, the value provided for the parameter is a set. (SemanticQuery ''). (rsSemanticQueryEngineError)
----------------------------
An error has occurred during report processing. (rsProcessingAborted)
I was getting this error in MS SSRS. I have two dropdowns which can select multiple values. It works for a single value. If I select multiple values, I get this error.
A: I was able to fix the issue by setting "equals" to "In a List".
Here are the steps:
In DataSets -> Query Designer -> Filter -> in the "Any of" section, click on "equals", then select "In a List".
| {
"language": "en",
"url": "https://stackoverflow.com/questions/43798592",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: How to only match before the first dot? I have the following regex.
^((?!example).)*$#Subdomain is reserved (example).
I would like to validate <subdomain>.example.org. However, since the domain name contains example, a match is occurring.
The validation should not match when the address is www.example.org
The validation should match when the address is example.example.org
A: Looks like you're missing the escape character from the period
^(example)\..*$
should work
A: It seems that a simple
^example\.
is enough. Or use string methods, depending on your language:
url.indexOf('example.') === 0
If input such as example.org is also possible, you can use
^example\..+\.
to force the appearance of two dots. But this would still fail for example.co.uk. It depends on your input.
A: A simple way might be to break it up into two:
*
*^.+\.example\.org$
*^(www)?\.example\.org$
If 1) matches and 2) does not, it's a subdomain of example.org; otherwise, it's not. (Although www technically is a subdomain, but you understand.)
| {
"language": "en",
"url": "https://stackoverflow.com/questions/8250721",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: convert scientific notation to real number on ggplot python I'm tryin to make a plot with pythons ggplot but the x and y aaxes are showing up in scientific notation
this is my code:
ggplot(dfMerge, aes(x='rent_price',y='sale_price', color='outlier',label='name')) +\
geom_text(hjust=0, vjust=0, size=20) +\
geom_point() +\
scale_y_continuous(labels="%.2f") +\
xlab("Rent Unit Price") +\
ylab("Sale Unit Price") +\
stat_smooth(color='blue')
Here's the plot I get:
How can I change the exponential values on the axes to real numbers?
Thank you.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/46007919",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: NatTable width inside a CTabItem I have a CTabFolder which has many CTabItems. Each CTabItem has a different NatTable as its control. When each NatTable is created, it has column headers but no rows. When I click on a "Populate Data" button, it will populate each table with data.
If I view each tab before I click the "Populate Data" button, I will see the column headers as expected. I can then push the button and see all the data correctly populated in all the tables.
If I DO NOT view tab(s) before I click the "Populate Data" button, I will not see any column headers or data (for the tabs that I did not previously view). This is because the NatTable in that tab has a width of 0.
I do not want to have to click on every tab before clicking the "Populate Data" button. What call is being made internally (that I might manually have to call) to correctly set the NatTable width when the tab is not focused?
Below is my code sample:
public void whenNatTableIsCreated(){
// Make sure the table fills the width of the parent
glazedListsGridLayer.getBodyDataLayer()
.setColumnPercentageSizing(true);
nattable.addConfiguration(new DefaultNatTableStyleConfiguration() {
{
cellPainter = new LineBorderDecorator(new
TextPainter(false, true, 5, true));
}
});
}
public void afterTableHasData() {
// Now that we have data, turn off the percentage sizing and
// allow the table width to exceed the parent width
// This fails because getNatTable().getWidth() is 0
getGlazedListsGridLayer().getBodyDataLayer().setDefaultColumnWidth(
getNatTable().getWidth() / getGlazedListsGridLayer().
getBodyDataLayer().getColumnCount());
getGlazedListsGridLayer().getBodyDataLayer()
.setColumnPercentageSizing(false);
}
A: From the sample I don't understand the usage of percentage sizing. I suppose this is the issue. You enable it initially, but the parent composite doesn't have a size, therefore the width of all columns is set to 0. By enabling it again, the column widths are not automatically set to some width and stay 0.
The width of the table is derived from the columns. In case of the ViewportLayer the width is maxed to the available client area. But as all columns have a width of 0, the width of the whole NatTable is 0. And setting the default column width after percentage sizing was enabled will probably also not lead to the result you expect, because there is a size value for every column stored, which overrides the default width.
In short, your whole idea doesn't seem to work with NatTable.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/40635094",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: Button not clickable despite isClickable() being true I have a ListView populated with items, some of which have Buttons. Sometimes the Button's onClick() registers correctly, but other times the onClick() simply does not work. I am checking whether isClickable() is true, and it is in all situations. What are some possible reasons why a Button can have isClickable() return true, but still not have its onClick() register?
Edit: Here is my getView() of the adapter
public View getView(int position, View convertView, ViewGroup parent) {
final ExpandableListItem object = mData.get(position);
final View view = storedViews.get(position);
if (object instanceof FillerParamListItem) {
return view;
}
//Load the holder
ExpandingViewHolder holder = (ExpandingViewHolder) view.getTag();
//Handle collapsing and expanding the cells
if (((ParameterActivity) activity).listView.isExpanded() && !object.isExpanded()) {
view.setAlpha(.20f);
} else {
view.setAlpha(1f);
}
if (object instanceof SwitchParamListItem) {
....
} else if (object instanceof DomainListItem) {
....
} else if (object instanceof ParameterListItem) {
if (((ParameterActivity) activity).listView.isExpanded() && !object.isExpanded()) {
holder.expandingLayout.setVisibility(View.GONE);
holder.valueTextView.setVisibility(View.VISIBLE);
if (activity.getCoreApplication().getOfflineMode()) {
holder.collapsedLayout.setBackground(activity.getResources().getDrawable(R.drawable.main_list_item_background_offline));
} else {
holder.collapsedLayout.setBackground(activity.getResources().getDrawable(R.drawable.main_list_item_background));
}
holder.collapsedLayout.setElevation(0);
} else {
if (object.isExpanded()) {
System.out.println("expanded parameter item");
holder.expandingLayout.setVisibility(View.VISIBLE);
holder.valueTextView.setVisibility(View.GONE);
holder.valueEditText.setVisibility(View.VISIBLE);
if (activity.getCoreApplication().getOfflineMode()) {
holder.expandingLayout.setBackground(activity.getResources().getDrawable(R.drawable.parameter_activity_expanded_layout_background_offline));
holder.collapsedLayout.setBackground(activity.getResources().getDrawable(R.drawable.parameter_activity_expanded_layout_background_offline));
} else {
holder.expandingLayout.setBackground(activity.getResources().getDrawable(R.drawable.parameter_activity_expanded_layout_background));
holder.collapsedLayout.setBackground(activity.getResources().getDrawable(R.drawable.parameter_activity_expanded_layout_background));
}
holder.collapsedLayout.setElevation(3.0f * activity.getResources().getDisplayMetrics().density);
} else {
holder.expandingLayout.setVisibility(View.GONE);
holder.valueTextView.setVisibility(View.VISIBLE);
holder.valueEditText.setVisibility(View.INVISIBLE);
if (activity.getCoreApplication().getOfflineMode()) {
holder.collapsedLayout.setBackground(activity.getResources().getDrawable(R.drawable.main_list_item_background_offline));
} else {
holder.collapsedLayout.setBackground(activity.getResources().getDrawable(R.drawable.main_list_item_background));
}
holder.collapsedLayout.setElevation(0);
}
}
if (object.getImgResource() != ExpandableListItem.NO_ICON) {
holder.imageView.setImageBitmap(object.getImage());
} else {
holder.imageView.setVisibility(View.INVISIBLE);
}
holder.titleTextView.setText(object.name);
holder.valueTextView.setText(object.getValue());
holder.unitTextView.setText(object.unitSym);
holder.descriptionTextView.setText(object.description);
holder.useTextView.setText(object.hint);
ParameterListItem parameterListItem = (ParameterListItem) object;
//Register listeners for the seeker bar and EditText for this specific parameter
TextListener editTextListener = new TextListener(parameterListItem, activity, view, holder.valueEditText);
holder.seekbar.setOnSeekBarChangeListener(parameterListItem.getListener());
holder.valueEditText.addTextChangedListener(editTextListener);
parameterListItem.registerTextListener(editTextListener);
holder.valueEditText.setOnEditorActionListener(editTextListener);
parameterListItem.registerSeekBar(holder.seekbar);
holder.valueEditText.setTag("PROGRAM");
holder.valueEditText.setText(object.getValue());
//show value and symbols within the expanded cells for the seeker bar track
holder.minValueTextView.setText(Util.getStringForFloat(parameterListItem.getMinValue()) + object.unitSym);
holder.maxValueTextView.setText(Util.getStringForFloat(parameterListItem.getMaxValue()) + object.unitSym);
holder.middleValueTextView.setText(Util.getStringForFloat((parameterListItem.getMaxValue() + parameterListItem.getMinValue()) / 2) + object.unitSym);
view.setOnTouchListener(editTextListener);
}
object.assocView = view;
return view;
}
and here is the XML (I'm only including the button parts for now). The RelativeLayout here is contained within the "expandingLayout in the adapter":
<RelativeLayout
android:layout_width="fill_parent"
android:layout_height="wrap_content"
android:layout_below="@id/parameter_activity_parameter_use_layout"
android:layout_marginTop="20dp"
android:orientation="horizontal"
android:background="@android:color/transparent">
<Button
android:id="@+id/parameter_activity_parameter_cancel_button"
android:layout_height="wrap_content"
android:layout_width="wrap_content"
android:layout_alignParentStart="true"
android:layout_marginLeft="10dp"
android:onClick="cancelClicked"
android:clickable="true"
android:background="?android:attr/selectableItemBackground"
android:text="Cancel"
android:textColor="@color/button_text_color"
android:textSize="21sp"
android:textStyle="bold"/>
<Button
android:id="@+id/parameter_activity_parameter_undo_button"
android:layout_height="wrap_content"
android:layout_width = "wrap_content"
android:layout_centerHorizontal="true"
android:clickable="true"
android:onClick="undoClicked"
android:background="?android:attr/selectableItemBackground"
android:text="Undo Last Change"
android:textColor="@color/button_text_color"
android:textSize="21sp"
android:textStyle="bold"/>
<Button
android:id="@+id/parameter_activity_parameter_apply_button"
android:layout_height="wrap_content"
android:layout_width="wrap_content"
android:layout_alignParentEnd="true"
android:layout_marginRight="10dp"
android:clickable="true"
android:onClick="applyClicked"
android:background="?android:attr/selectableItemBackground"
android:text="Apply"
android:textColor="@color/button_text_color"
android:textSize="21sp"
android:textStyle="bold" />
</RelativeLayout>
| {
"language": "en",
"url": "https://stackoverflow.com/questions/56843927",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: Troubleshooting mod_rewrite in a .htaccess file with LiteSpeed I have a couple of web pages located at these locations:
Home Page / Index : www.codeliger.com/index.php?page=home
Education : www.codeliger.com/index.php?page=home&filter=1
Skills: www.codeliger.com/index.php?page=home&filter=2
Projects: www.codeliger.com/index.php?page=home&filter=3
Work Experience: www.codeliger.com/index.php?page=home&filter=4
Contact : www.codeliger.com/index.php?page=contact
I am trying to rewrite them to prettier urls:
codeliger.com/home
codeliger.com/education
codeliger.com/skills
codeliger.com/projects
codeliger.com/experience
codeliger.com/contact
I have confirmed that my .htaccess file is picked up and mod_rewrite works, but I cannot get the syntax that was specified in multiple online tutorials working.
RewriteEngine on
RewriteRule /home /index.php?page=home
RewriteRule /([a-Z]+) /index.php?page=$1
RewriteRule /education /index.php?page=home&filter=1
RewriteRule /skills /index.php?page=home&filter=2
RewriteRule /projects /index.php?page=home&filter=3
RewriteRule /experience /index.php?page=home&filter=4
How can I fix my syntax to rewrite these pages to prettier urls?
A: The first thing you should probably do is fix your regex. You cannot have a range like [a-Z], you can just do [a-z] and use the [NC] (no case) flag. Also, you want this rule at the very end since it'll match requests for /projects which will make it so the rule further down will never get applied. Then, you want to get rid of all your leading slashes. Lastly, you want a boundary for your regex, otherwise it'll match index.php and cause another error.
So:
RewriteEngine on
RewriteRule ^home /index.php?page=home
RewriteRule ^education /index.php?page=home&filter=1
RewriteRule ^skills /index.php?page=home&filter=2
RewriteRule ^projects /index.php?page=home&filter=3
RewriteRule ^experience /index.php?page=home&filter=4
RewriteRule ^([a-z]+)$ /index.php?page=$1 [NC]
| {
"language": "en",
"url": "https://stackoverflow.com/questions/20646932",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
} |
Q: Apache Wicket replace fragements of the RIA application (web-site :)) without page refresh we are currently in the process of analyzing different JS and web-frameworks.
We would like to build a DHTML application where you can replace / load content into the website at runtime.
For example:
There is only a "main.html" (or at least one that looks like being a single HTML file for externals) and inside that file I would like to load a login form at runtime.
But WITHOUT page refresh.
I would like to load the HTML into the website dynamically.
http://api.jquery.com/load/ seems to be perfect for that.
However we are also considering using Apache Wicket.
Does Wicket provide a similar mechanism? It seems to me that Wicket can define "static" parts of the website, but that it relies heavily on page refreshes to update the page.
Also, as a Wicket "newbie" I wonder why there are relatively few UI components documented on the Wicket website compared to other UI frameworks.
For me it seems like most people use Wicket + jQuery but never Wicket standalone.
As we already have a REST interface available, I wonder what Wicket would offer us at all compared to, for example, Apache Velocity.
Thanks!
Sebastian
A: It is quite common in Wicket to replace only portions of a page using Ajax. See these examples.
Wicket is also easily used in combination with jQuery and other JavaScript frameworks.
A: So-called single page applications (a single page whose components get constantly replaced and/or updated via Ajax) are the way almost every Wicket application I have written so far turned out. Most of the Wicket applications I have seen out there relied on a very small number of pages (or just one).
The real big advantage of Wicket over jQuery in these use cases is the way Wicket offers a non-JavaScript fallback (then relying on page refreshes) with very little additional work (replacing AjaxLinks with AjaxFallbackLinks and adding an if-statement to check which kind of refresh was triggered).
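For illustration, a minimal sketch of such a partial update in a page constructor (Wicket 6/7 style, where the Ajax target is null in the non-JavaScript fallback; LoginPanel is a hypothetical panel of your own):
Panel placeholder = new EmptyPanel("content");
placeholder.setOutputMarkupId(true);
add(placeholder);

add(new AjaxFallbackLink<Void>("showLogin") {
    @Override
    public void onClick(AjaxRequestTarget target) {
        Panel login = new LoginPanel("content"); // hypothetical panel with the login form
        login.setOutputMarkupId(true);
        getPage().get("content").replaceWith(login);
        if (target != null) {
            target.add(login); // Ajax request: repaint only the swapped component
        } // without JavaScript the whole page is re-rendered instead
    }
});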
| {
"language": "en",
"url": "https://stackoverflow.com/questions/12122257",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: Error in lines.etm(x[[i]], tr.choice = tr.choice[j], col = col[j + (i - : Argument 'tr.choice' and possible transitions must match When trying to plot cif data in R :
plot(cif.kweet) this error pops up:
"Error in lines.etm(x[[i]], tr.choice = tr.choice[j], col = col[j + (i - :
Argument 'tr.choice' and possible transitions must match" - what does this mean, and how do I solve it?
Additional information:
cif.kweet <-etmCIF(Surv(entry, exit, cause !=0) ~ group, data, etype=data$cause, failcode=3)
| {
"language": "en",
"url": "https://stackoverflow.com/questions/47252319",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
} |
Q: Cancel IO for pipe I'm using CreatePipe to redirect stdin/out from a process to my process.
http://msdn.microsoft.com/en-us/library/windows/desktop/aa365152(v=vs.85).aspx
This works ok so far. The problem is when I want to terminate the thread that waits for the client process to write something.
I can use CancelIoEx() but this only works in Vista+, and I also want an XP solution. Without CancelIoEx(), ReadFile() in the other thread never returns.
I also cannot use overlapped (OVERLAPPED) ReadFile, because pipes created with CreatePipe do not support it.
Any options?
A: Save a handle to the write end of the stdout pipe when creating the child process. You can then write a character to this to unblock the thread that has called ReadFile (that is reading from the read end of the stdout pipe). In order not to interpret this as data, create an Event (CreateEvent) that is set (SetEvent) in the thread that writes the dummy character, and is checked after ReadFile returns. A bit messy, but seems to work.
/* Init */
stdout_closed_event = CreateEvent(NULL, TRUE, FALSE, NULL);
/* Read thread */
read_result = ReadFile(stdout_read, data, buf_len, &bytes_read, NULL);
if (!read_result)
ret = -1;
else
ret = bytes_read;
if ((bytes_read > 0) && (WAIT_OBJECT_0 == WaitForSingleObject(stdout_closed_event, 0))) {
if (data[bytes_read-1] == eot) {
if (bytes_read > 1) {
/* Discard eot character, but return the rest of the read data that should be valid. */
ret--;
} else {
/* No data. */
ret = -1;
}
}
}
/* Cancel thread */
HMODULE mod = LoadLibrary (L"Kernel32.dll");
BOOL (WINAPI *cancel_io_ex) (HANDLE, LPOVERLAPPED) = NULL;
if (mod != NULL) {
cancel_io_ex = (BOOL (WINAPI *) (HANDLE, LPOVERLAPPED)) GetProcAddress (mod, "CancelIoEx");
}
if (cancel_io_ex != NULL) {
cancel_io_ex(stdout_write_pipe, NULL);
} else {
SetEvent(stdout_closed_event);
WriteFile(stdout_write_pipe, &eot, 1, &written, NULL);
}
| {
"language": "en",
"url": "https://stackoverflow.com/questions/27089489",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
} |
Q: Run GitHub Actions workflow jobs on 2 runners in parallel I'm using GitHub Actions to copy an artifact to the runner/VM; I added my VM as a self-hosted runner and run the workflow directly on that runner. I download the artifact from Artifactory and copy it to the deployment location. Now I need to do the same thing on another runner/VM, as I have an identical deployment VM for the application.
To achieve this, I have copied the same job and changed the 'runs-on' value to a different runner name, which is my second VM. Below is my workflow code snippet.
My question is: instead of 2 jobs, how can we run this as a single job for all VMs related to dev? Let's say I have 4 VMs for the Dev environment, 3 VMs for QA, and 5 VMs for Production.
Can someone help with this, or do we need to continue with the same approach I'm using right now?
I have tried a matrix, but I am looking to see if there is any other solution (a matrix sketch is shown after the snippet below for reference).
name: Deployment_workflow
on:
workflow_dispatch:
inputs:
dev:
type: boolean
required: false
default: false
qa:
type: boolean
required: false
default: false
jobs:
dev_deploy_1:
if: github.event.inputs.dev =='true'
runs-on: dev-vm-1
steps:
- name: Download the artifact from artifactory
run: |
cd /tmp
curl -u ${ARTIFACTORY_USER}:${ARTIFACTORY_ENCRYPT} -T dep-deploy-1.war "$(ARTIFACTORY_URL)/artifactory/general-artifacts/dep-deploy-1.war"
- name: copy the file from /tmp to deployment location
run: |
cp /tmp/dep-deploy-1.war /var/lib/app_deploy_dir
dev_deploy_2:
if: github.event.inputs.dev =='true'
runs-on: dev-vm-2
steps:
- name: Download the artifact from artifactory
run: |
cd /tmp
curl -u ${ARTIFACTORY_USER}:${ARTIFACTORY_ENCRYPT} -T dep-deploy-2.war "$(ARTIFACTORY_URL)/artifactory/general-artifacts/dep-deploy-2.war"
- name: copy the file from /tmp to deployment location
run: |
cp /tmp/dep-deploy-2.war /var/lib/app_deploy_dir
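For reference, the matrix variant mentioned above would collapse the two dev jobs into one roughly like this (a sketch only, reusing the runner labels and artifact names from the snippet; the actual curl/cp commands stay the same):
jobs:
  dev_deploy:
    if: github.event.inputs.dev == 'true'
    strategy:
      matrix:
        include:
          - runner: dev-vm-1
            artifact: dep-deploy-1.war
          - runner: dev-vm-2
            artifact: dep-deploy-2.war
    runs-on: ${{ matrix.runner }}
    steps:
      - name: Download and deploy the artifact
        run: |
          # same curl/cp commands as above, with ${{ matrix.artifact }}
          # substituted for the hard-coded file name
          echo "deploying ${{ matrix.artifact }} on ${{ matrix.runner }}"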
| {
"language": "en",
"url": "https://stackoverflow.com/questions/73684407",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: Read a PEM file with Flutter API encrypt import 'package:encrypt/encrypt.dart';
import 'package:encrypt/encrypt_io.dart';
import 'dart:io';
import 'package:pointycastle/asymmetric/api.dart';
import 'dart:async';
import 'package:flutter/services.dart' show rootBundle;
class Encrypt {
Future<String> loadPrivateKey() async {
return await rootBundle.loadString('assets/private_key.pem');
}
Future<String> loadPublicKey() async {
return await rootBundle.loadString('assets/public_key.pem');
}
encryptString() async {
print(loadPublicKey().toString());
final publicKey =
await parseKeyFromFile<RSAPublicKey>('${loadPublicKey()}');
final privateKey =
await parseKeyFromFile<RSAPrivateKey>('${loadPrivateKey()}');
final plainText = 'James Bond';
final encrypter =
Encrypter(RSA(publicKey: publicKey, privateKey: privateKey));
final encrypted = encrypter.encrypt(plainText);
final decrypted = encrypter.decrypt(encrypted);
print(decrypted);
print(encrypted.base64);
}
}
ERROR:
Performing hot reload...
Syncing files to device AOSP on IA Emulator...
Reloaded 8 of 707 libraries in 1,021ms.
I/flutter ( 7395): Instance of 'Future'
E/flutter ( 7395): [ERROR:flutter/lib/ui/ui_dart_state.cc(157)] Unhandled Exception: FileSystemException: Cannot open file, path = 'Instance of 'Future'' (OS Error: No such file or directory, errno = 2)
I did add assets in yaml file as:
flutter:
assets:
- assets/
A: parseKeyFromFile is a convenience function that reads a file and parses the contents. You don't have a file, you have an asset that you are already doing the work of reading into a string. Having read the file, it just parses it - and that's what you need.
This should work:
final publicPem = await rootBundle.loadString('assets/public_key.pem');
final publicKey = RSAKeyParser().parse(publicPem) as RSAPublicKey;
and similar for the private key.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/61720791",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
} |
Q: Using Vagrant to manage AWS instances For some time I have been managing EC2 (Windows boxes), RDS and S3 on AWS.
I know the manual steps that must be taken to set up, let's say, a normal box (DB, storage and server). I heard about Vagrant, but everywhere I looked it mainly talks about Linux boxes on AWS.
My main question is: is Vagrant a tool that will save me time on deployment (Windows), or should I not use it at all (in a Windows scenario)?
A: Vagrant plays nicely with AWS (via vagrant-aws plugin).
Vagrant seems to play nicely with Windows as well since version 1.6 and the introduction of WinRM support (ssh alternative for Windows).
However, the AWS plugin doesn't support the WinRM communicator yet, so you'll need to pre-bake your Windows AMIs with an SSH service pre-installed if you want Vagrant to provision them.
Update (29/03/2016): Thanks to Rafael Goodman for pointing to the vagrant-aws-winrm plugin as a possible workaround.
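For reference, a Vagrantfile for the vagrant-aws provider typically looks roughly like this (AMI id, key pair, region and credentials are placeholders; the SSH username assumes an image with an SSH service pre-installed, as mentioned above):
Vagrant.configure("2") do |config|
  config.vm.box = "dummy" # the conventional empty box used with vagrant-aws

  config.vm.provider :aws do |aws, override|
    aws.access_key_id     = ENV["AWS_ACCESS_KEY_ID"]
    aws.secret_access_key = ENV["AWS_SECRET_ACCESS_KEY"]
    aws.region            = "us-east-1"
    aws.ami               = "ami-12345678"          # placeholder Windows AMI with SSH baked in
    aws.instance_type     = "m3.medium"
    aws.keypair_name      = "my-keypair"            # placeholder

    override.ssh.username         = "Administrator" # depends on how the AMI was prepared
    override.ssh.private_key_path = "~/.ssh/my-keypair.pem"
  end
end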
| {
"language": "en",
"url": "https://stackoverflow.com/questions/26677255",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: Entity Framework - Challenging setup includes multiple primary keys, and multiple associations to foreign table I'm trying to map a couple of legacy tables using Entity Framework. The classes look like this...
public class Customer
{
[Key, Required]
public string Code { get; set; }
public string Domain { get; set; }
public virtual Address BillToAddress { get; set; }
public virtual ICollection<Address> ShipToAddresses { get; set; }
}
public class Address
{
[Column(Order = 0), Key, Required]
public string Code { get; set; }
[Column(Order = 1), Key, Required]
public string Domain { get; set; }
public string Type { get; set; }
public string CustomerReferenceCode { get; set; }
}
Each Customer has one "BillToAddress" that corresponds with an Address whose CustomerReferenceCode contains the Customer's Code and where the Type field contains the text "Customer"
Each Customer has zero or more "ShipToAddresses" that correspond to Addresses whose CustomerReferenceCode contains the Customer's Code and whose Type field contains the text "Ship-To"
I'm able to reference the BillToAddress by adding
[Key, Required]
[ForeignKey("BillToAddress"), Column(Order = 1)]
public string Code { get; set; }
[ForeignKey("BillToAddress"), Column(Order = 2)]
public string Domain { get; set; }
But I've not been able to figure out how to reference the collection of ShipToAddresses for the customer.
A: See this example (note it is separate classes): Fluent NHibernate automap inheritance with subclass relationship
One easy approach might be:
public class Customer
{
[Key, Required]
public string Code { get; set; }
public string Domain { get; set; }
public virtual ICollection<Address> Addresses{ get; set; }
public virtual Address BillToAddress { get { return Addresses.Where(n => n.Type == Address.BillingAddress).Single(); } }
public virtual ICollection<Address> ShipToAddresses { get { return Addresses.Where(n => n.Type == Address.ShipToAddress).ToList(); } }
}
One additional comment - this is not implicitly enforcing your "one billing address" business logic the way the example you start with does, so with the above approach you need to enforce that elsewhere. Also, creating two classes and using TPH as described in the example I link to is the approach I would likely take anyway - it would meet your described goal above directly. Also, in between the two, this may work, using the discriminator in the getter of each property directly.
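The TPH variant mentioned above would look roughly like this (a sketch only; the discriminator values "Customer" and "Ship-To" come from the question, and the Type column becomes the discriminator, so it is no longer mapped as a property):
public abstract class Address
{
    [Column(Order = 0), Key, Required]
    public string Code { get; set; }
    [Column(Order = 1), Key, Required]
    public string Domain { get; set; }
    public string CustomerReferenceCode { get; set; }
}

public class BillToAddress : Address { }
public class ShipToAddress : Address { }

// In the DbContext (EF6 fluent mapping):
protected override void OnModelCreating(DbModelBuilder modelBuilder)
{
    modelBuilder.Entity<BillToAddress>()
        .Map(m => m.Requires("Type").HasValue("Customer"));
    modelBuilder.Entity<ShipToAddress>()
        .Map(m => m.Requires("Type").HasValue("Ship-To"));
}
Customer can then expose a BillToAddress navigation property and an ICollection<ShipToAddress> collection directly.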
| {
"language": "en",
"url": "https://stackoverflow.com/questions/14401018",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
} |
Q: Python Regex Capturing Multiple Matches in separate observations I am trying to create variables location; contract items; contract code; federal aid using regex on the following text:
PAGE 1
BID OPENING DATE 07/25/18 FROM 0.2 MILES WEST OF ICE HOUSE 07/26/18 CONTRACT NUMBER 03-2F1304 ROAD TO 0.015 MILES WEST OF CONTRACT CODE 'A '
LOCATION 03-ED-50-39.5/48.7 DIVISION HIGHWAY ROAD 44 CONTRACT ITEMS
INSTALL SANDTRAPS AND PULLOUTS FEDERAL AID ACNH-P050-(146)E
PAGE 1
BID OPENING DATE 07/25/18 IN EL DORADO COUNTY AT VARIOUS 07/26/18 CONTRACT NUMBER 03-2H6804 LOCATIONS ALONG ROUTES 49 AND 193 CONTRACT CODE 'C ' LOCATION 03-ED-0999-VAR 13 CONTRACT ITEMS
TREE REMOVAL FEDERAL AID NONE
PAGE 1
BID OPENING DATE 07/25/18 IN LOS ANGELES, INGLEWOOD AND 07/26/18 CONTRACT NUMBER 07-296304 CULVER CITY, FROM I-105 TO PORT CONTRACT CODE 'B '
LOCATION 07-LA-405-R21.5/26.3 ROAD UNDERCROSSING 55 CONTRACT ITEMS
ROADWAY SAFETY IMPROVEMENT FEDERAL AID ACIM-405-3(056)E
This text is from one Word file; I'll be looping my code over multiple doc files. The text above contains three location / contract items / contract code / federal aid groups, but when I use regex to create variables, only the first instance of each is included.
The code I have right now is:
# imports
import os
import pandas as pd
import re
import docx2txt
import textract
import antiword
all_bod = []
all_cn = []
all_location = []
all_fedaid = []
all_contractcode = []
all_contractitems = []
all_file = []
text = ''' PAGE 1
BID OPENING DATE 07/25/18 FROM 0.2 MILES WEST OF ICE HOUSE 07/26/18 CONTRACT NUMBER 03-2F1304 ROAD TO 0.015 MILES WEST OF CONTRACT CODE 'A '
LOCATION 03-ED-50-39.5/48.7 DIVISION HIGHWAY ROAD 44 CONTRACT ITEMS
INSTALL SANDTRAPS AND PULLOUTS FEDERAL AID ACNH-P050-(146)E
PAGE 1
BID OPENING DATE 07/25/18 IN EL DORADO COUNTY AT VARIOUS 07/26/18 CONTRACT NUMBER 03-2H6804 LOCATIONS ALONG ROUTES 49 AND 193 CONTRACT CODE 'C ' LOCATION 03-ED-0999-VAR 13 CONTRACT ITEMS
TREE REMOVAL FEDERAL AID NONE
PAGE 1
BID OPENING DATE 07/25/18 IN LOS ANGELES, INGLEWOOD AND 07/26/18 CONTRACT NUMBER 07-296304 CULVER CITY, FROM I-105 TO PORT CONTRACT CODE 'B '
LOCATION 07-LA-405-R21.5/26.3 ROAD UNDERCROSSING 55 CONTRACT ITEMS
ROADWAY SAFETY IMPROVEMENT FEDERAL AID ACIM-405-3(056)E'''
bod1 = re.search('BID OPENING DATE \s+ (\d+\/\d+\/\d+)', text)
bod2 = re.search('BID OPENING DATE\n\n(\d+\/\d+\/\d+)', text)
if not(bod1 is None):
    bod = bod1.group(1)
elif not(bod2 is None):
    bod = bod2.group(1)
else:
    bod = 'NA'
all_bod.append(bod)
# creating contract number
cn1 = re.search('CONTRACT NUMBER\n+(.*)', text)
cn2 = re.search('CONTRACT NUMBER\s+(.........)', text)
if not(cn1 is None):
    cn = cn1.group(1)
elif not(cn2 is None):
    cn = cn2.group(1)
else:
    cn = 'NA'
all_cn.append(cn)
# location
location1 = re.search('LOCATION \s+\S+', text)
location2 = re.search('LOCATION \n+\S+', text)
if not(location1 is None):
    location = location1.group(0)
elif not(location2 is None):
    location = location2.group(0)
else:
    location = 'NA'
all_location.append(location)
# federal aid
fedaid = re.search('FEDERAL AID\s+\S+', text)
fedaid = fedaid.group(0)
all_fedaid.append(fedaid)
# contract code
contractcode = re.search('CONTRACT CODE\s+\S+', text)
contractcode = contractcode.group(0)
all_contractcode.append(contractcode)
# contract items
contractitems = re.search('\d+ CONTRACT ITEMS', text)
contractitems = contractitems.group(0)
all_contractitems.append(contractitems)
This code parses only the first instance of these variables in the text:
| contract-number | location | contract-items | contract-code | federal-aid |
| --- | --- | --- | --- | --- |
| 03-2F1304 | 03-ED-50-39.5/48.7 | 44 | A | ACNH-P050-(146)E |
But, I am trying to figure out a way to get all possible instances in different observations.
| contract-number | location | contract-items | contract-code | federal-aid |
| --- | --- | --- | --- | --- |
| 03-2F1304 | 03-ED-50-39.5/48.7 | 44 | A | ACNH-P050-(146)E |
| 03-2H6804 | 03-ED-0999-VAR | 13 | C | NONE |
| 07-296304 | 07-LA-405-R21.5/26.3 | 55 | B | ACIM-405-3(056)E |
The all_variables in the code are for looping over multiple word files - we can ignore that if we want :).
Any leads would be super helpful. Thanks so much!
A: import re
data = []
df = pd.DataFrame()
regex_contract_number =r"(?:CONTRACT NUMBER\s+(?P<contract_number>\S+?)\s)"
regex_location = r"(?:LOCATION\s+(?P<location>\S+))"
regex_contract_items = r"(?:(?P<contract_items>\d+)\sCONTRACT ITEMS)"
regex_federal_aid =r"(?:FEDERAL AID\s+(?P<federal_aid>\S+?)\s)"
regex_contract_code =r"(?:CONTRACT CODE\s+\'(?P<contract_code>\S+?)\s)"
regexes = [regex_contract_number,regex_location,regex_contract_items,regex_federal_aid,regex_contract_code]
for regex in regexes:
for match in re.finditer(regex, text):
data.append(match.groupdict())
df = pd.concat([df, pd.DataFrame(data)], axis=1)
data = []
df
| {
"language": "en",
"url": "https://stackoverflow.com/questions/75153254",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: OpenLayers.js which files to use? As I read in the readme file, OpenLayers.js has multiple choices for including files and themes.
What I would like is to use the lightest OpenLayers.js build.
I included openlayers.light.js in my app, and it creates maps but does not show them; check this:
Did I forget to include some other file?
my structure structure is this:
/vendor
/js
openlayers.light.js
/img
/theme
How do I show the map layers?
Also, will openlayers.light.js work on mobile devices (once this problem is fixed :P), or will I need to include openlayers.mobile.js too?
This is the code not working with openlayers.light.js but working with openlayers.js (740kb) :
var _element = "#map";
var map = new OpenLayers.Map (_element, {
controls: [
new OpenLayers.Control.Navigation({
dragPanOptions: {
enableKinetic: true
}
}),
new OpenLayers.Control.Zoom()
],
projection: new OpenLayers.Projection("EPSG:900913"),
displayProjection: new OpenLayers.Projection("EPSG:4326")
});
var lonLat = new OpenLayers.LonLat(_lon, _lat).transform (
new OpenLayers.Projection("EPSG:4326"), // transform from WGS 1984
new OpenLayers.Projection("EPSG:900913") // to Spherical Mercator Projection
// map.getProjectionObject() doesn't work for unknown reason
);
var markers = new OpenLayers.Layer.Markers( "Markers" );
map.addLayer(markers);
var size = new OpenLayers.Size(21,25);
var offset = new OpenLayers.Pixel(-(size.w/2), -size.h);
var icon = new OpenLayers.Icon(_img_map_marker, size, offset);
markers.addMarker(new OpenLayers.Marker(lonLat, icon.clone()));
var mapnik = new OpenLayers.Layer.OSM("Test");
map.addLayer(mapnik);
map.setCenter (lonLat,3);
PS: my OpenLayers map JS init method is fine; it works using the huge openlayers.js (740KB), but not with openlayers.light.js, as I showed above.
html
<div id="map"></div>
css
img{max-width:none;}
#map{
width:300px;
height:300px;
}
A: If you want to use mobile features with OpenLayers, such as panning or zooming by hand, you have to use openlayers.mobile.js.
You can use openlayers.light.js on mobile devices, but not the mobile functions.
I think your structure should be:
myProject
/js
openlayers.light.js
/img
/theme
and I have tried openlayers.light.js in http://jsfiddle.net/aragon/ZecJj/ and there is no problem with it.
My code:
var map = new OpenLayers.Map({
div: "map",
minResolution: "auto",
maxResolution: "auto",
});
var osm = new OpenLayers.Layer.OSM();
var toMercator = OpenLayers.Projection.transforms['EPSG:4326']['EPSG:3857'];
var center = toMercator({x:-0.05,y:51.5});
map.addLayers([osm]);
map.setCenter(new OpenLayers.LonLat(center.x,center.y), 13);
and try to read Deploying (Shipping OpenLayers in your Application).
OpenLayers comes with pre-configured examples out of the box: simply
download a release of OpenLayers, and you get a full set of easy to
use examples. However, these examples are designed to be used for
development. When you’re ready to deploy your application, you want a
highly optimized OpenLayers distribution, to limit bandwidth and
loading time.
you can change src file with this link and can see it still works.
<script type="text/javascript" src="http://openlayers.org/dev/OpenLayers.light.debug.js"></script>
to
<script type="text/javascript" src="https://view.softwareborsen.dk/Softwareborsen/Vis%20Stedet/trunk/lib/OpenLayers/2.12/OpenLayers.light.js?content-type=text%2Fplain"></script>
i hope it helps you...
| {
"language": "en",
"url": "https://stackoverflow.com/questions/14773449",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5"
} |
Q: Need to sort complex objects, like dominoes Here is the situation. For example, I have a structure like this (code is simplified):
class Dominoe
{
ctor Dominoe(left, right)
string LeftSide;
string RightSide;
}
And I have data, somewhat like this:
Dominoe("2", "3"), Dominoe("1", "2"), Dominoe("4", "5"), Dominoe("3", "4")
I know that there won't be any gaps in dominoes, nor repeats.
I need to order this collection, so every RightSide would be connected to appropriate LeftSide. Like so:
Dominoe("1", "2"), Dominoe("2", "3"), Dominoe("3", "4"), Dominoe("4", "5")
Values - not numbers. Just need a clue.
Right now I've done this task in 2 steps. First, I look for the entry point: the domino whose LeftSide is not present in any other domino's RightSide. After that I swap it with the item at index 0. Second, I look for the next domino whose LeftSide is the same as the RightSide of my entry domino, and so on in a loop.
I'm doing this in C#, but this really doesn't matter.
The problem is, I don't think it's the best algorithm. Any ideas would be great. Thanks.
EDITED !
It was my bad to talk about numbers.
Let's change dominoes to travel cards.
So it'll be like:
TravelCard ("Dublin", "New York"), TravelCard ("Moscow", "Dublin"), TravelCard ("New York", "Habana")
A: Unless you have huge amount of your cards your solution will work. Otherwise you can consider 2 dictionaries to make searches constants and keep O(N) complexity:
namespace ConsoleApplication
{
public class Dominoe
{
public Dominoe(int left, int right)
{
LeftSide = left;
RightSide = right;
}
public int LeftSide;
public int RightSide;
}
class Program
{
static void Main(string[] args)
{
var input = new List<Dominoe>()
{
new Dominoe(2, 3),
new Dominoe(1, 2),
new Dominoe(4, 5),
new Dominoe(3, 4)
};
var dicLeft = new Dictionary<int, Dominoe>();
var dicRigth = new Dictionary<int, Dominoe>();
foreach (var item in input)
{
dicLeft.Add(item.LeftSide, item);
dicRigth.Add(item.RightSide, item);
}
Dominoe first = null;
foreach(var item in input)
{
if (!dicRigth.ContainsKey(item.LeftSide))
{
first = item;
break;
}
}
Console.WriteLine(string.Format("{0} - {1}", first.LeftSide, first.RightSide));
for(int i = 0; i < input.Count - 1; i++)
{
first = dicLeft[first.RightSide];
Console.WriteLine(string.Format("{0} - {1}", first.LeftSide, first.RightSide));
}
Console.ReadLine();
}
}
}
| {
"language": "en",
"url": "https://stackoverflow.com/questions/36598168",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
} |
Q: How to cache a small piece of information in Django I have a website that delivers a list of questions to users. This set of questions is the same for all users. I have these questions stored in a text file. The content of this file will not change while the server is running.
I cannot use a static page to display these questions because I have some logic to decide when to show which question.
I would like to cache these questions in memory (instead of reading the file off the hard drive every time a user connects). I am using Django 1.7.
I did read about caching on Django's website; I think the suggested methods (like memcached and a database) are too heavy for my need. What would be a good solution for my situation? A global variable?
Thanks!
A: There are many caching back-ends you can use, as listed in https://docs.djangoproject.com/en/1.7/topics/cache/
I haven't tried the file-system or local-memory caching myself (I always needed memcached), but it looks like they're available, and the rest is a piece of cake!
from django.core import cache

cache_key = 'questions'
questions = cache.cache.get(cache_key)  # to get
if questions:
    # use the questions you fetched from the cache
    pass
else:
    questions = {'question1': 'How are you?'}  # something serializable that holds your questions
    cache.cache.set(cache_key, questions)
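If you go with the local-memory backend, the only configuration needed is a CACHES entry in settings.py, e.g. (the LOCATION string is just an arbitrary name):
# settings.py - per-process in-memory cache, no external service needed
CACHES = {
    'default': {
        'BACKEND': 'django.core.cache.backends.locmem.LocMemCache',
        'LOCATION': 'questions-cache',
    }
}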
| {
"language": "en",
"url": "https://stackoverflow.com/questions/29391293",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: XAMPP 7.0.0 php session causing whole page not to load Before upgrading to XAMPP 7, everything used to work fine.
Then after the update, everything except php sessions is working.
Any page with session_start() does not load in the browser, and nothing is logged to the php.log file about the error.
I've looked everywhere, and I found a question here on Stack Overflow with a similar problem, but that issue involved MySQL.
Now, at the very basic level, this won't load:
<?php
session_start();
?>
I've tried changing the session.save_path in php.ini file without any success. Any help?
| {
"language": "en",
"url": "https://stackoverflow.com/questions/34542243",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: Automatically sync 2 writable git repositories I am working in a small office, where we are using git with a central repository at a windows share, where we can PUSH to and PULL from.
To be able to work from home, travel etc., we want to have the possibility to reach a central repo from "the outside world". Our internet connection is very, very slow, so it is not possible to have just one central repository "inside" or "outside".
My simple attempt was the following:
*
*Make a central repository at the internal WIN network share: git init --bare
*Make one "outside" (e.g. github or an external WIN share or anything else)
*At the internal repo call git remote add EXTERNALREPO <pathToIt>
*and make a batch running every x minutes/hours, saying git fetch --tags EXTERNALREPO, git push --tags EXTERNALREPO
When working "inside", clone/push/pull the internal repo, when on the road, use the external repo for that.
Question: Is this the way to go, is there a better way, or am I completely wrong?
Related:
*
*Two-way git mirror –
I do not think that we need locking as we are not so many people.
*Safe master-master setup with git? (writable git mirror)
*How to keep 2 git repositories in sync automatically – As far as I know, a push lasts as long as it takes to run all the hooks, AND since we are using a Windows share, the CLIENT has to run them, so it wouldn't be a solution.
Update 1: I now came up with a slightly adjusted configuration.
*
*INTERNAL repo on an internal Windows Share: git init --bare
*EXTERNAL repo on an external Windows Share: git init --bare
*Both repos:
git config receive.denyNonFastForwards 1
git config receive.denyDeletes 1
git remote add {INTERNAL|EXTERNAL} file:///...
*Every x seconds/minutes/hours, call git push --all {INTERNAL|EXTERNAL} and git push --tags {INTERNAL|EXTERNAL}, at first at the internal, then at the external repository.
A: *
*"repository at a windows share" is The Bad&Ugly Idea (tm)
*Use a post-* hook (e.g. post-receive) in INTERNAL to push to EXTERNAL as it happens (see the sketch below)
*Pseudo-CVCS in DVCS is ugly (everybody can bring local travel-repo to workplace and sync from it)
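The second point would look something like this as a hooks/post-receive script in the INTERNAL bare repository (a sketch, assuming EXTERNAL is configured as a remote there, as in the question):
#!/bin/sh
# hooks/post-receive in the internal bare repo:
# mirror every accepted push straight to the external repository
git push --all EXTERNAL
git push --tags EXTERNAL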
| {
"language": "en",
"url": "https://stackoverflow.com/questions/37540598",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: Google Cloud Shell - How to resolve CERTIFICATE_VERIFY_FAILED error? I have simple dataflow pipeline and trying to execute from cloud shell,
Code:
from __future__ import print_function
import apache_beam as beam
from apache_beam.options.pipeline_options import PipelineOptions
with beam.Pipeline(options=PipelineOptions()) as p:
    lines = p | 'Read' >> beam.io.ReadFromText('test.csv')
    lines | 'Write' >> beam.io.WriteToText('gs://bucket/output_20193003', file_name_suffix='.csv')
    result = p.run()
    result.wait_until_finish()
Command used to execute:
python -m simple_pipeline --runner DataflowRunner --project myproject --staging_location gs://bucket/staging --temp_location gs://bucket/temp
Observations:
When executed from Cloud shell I encounter below error,
File "/home/user/beam-env/local/lib/python2.7/site-packages/apache_beam/runners/portability/fn_api_runner.py", line 859, in run_stages
pcoll_buffers, safe_coders).process_bundle.metrics
File "/home/user/beam-env/local/lib/python2.7/site-packages/apache_beam/runners/portability/fn_api_runner.py", line 970, in run_stage
self._progress_frequency).process_bundle(data_input, data_output)
File "/home/user/beam-env/local/lib/python2.7/site-packages/apache_beam/runners/portability/fn_api_runner.py", line 1174, in process_bundle
result_future = self._controller.control_handler.push(process_bundle)
File "/home/user/beam-env/local/lib/python2.7/site-packages/apache_beam/runners/portability/fn_api_runner.py", line 1054, in push
response = self.worker.do_instruction(request)
File "/home/user/beam-env/local/lib/python2.7/site-packages/apache_beam/runners/worker/sdk_worker.py", line 208, in do_instruction
request.instruction_id)
File "/home/user/beam-env/local/lib/python2.7/site-packages/apache_beam/runners/worker/sdk_worker.py", line 230, in process_bundle
processor.process_bundle(instruction_id)
File "/home/user/beam-env/local/lib/python2.7/site-packages/apache_beam/runners/worker/bundle_processor.py", line 301, in process_bundle
op.finish()
File "apache_beam/runners/worker/operations.py", line 398, in apache_beam.runners.worker.operations.DoOperation.finish
File "apache_beam/runners/worker/operations.py", line 399, in apache_beam.runners.worker.operations.DoOperation.finish
File "apache_beam/runners/worker/operations.py", line 400, in apache_beam.runners.worker.operations.DoOperation.finish
File "apache_beam/runners/common.py", line 598, in apache_beam.runners.common.DoFnRunner.finish
File "apache_beam/runners/common.py", line 589, in apache_beam.runners.common.DoFnRunner._invoke_bundle_method
File "apache_beam/runners/common.py", line 618, in apache_beam.runners.common.DoFnRunner._reraise_augmented
File "apache_beam/runners/common.py", line 587, in apache_beam.runners.common.DoFnRunner._invoke_bundle_method
File "apache_beam/runners/common.py", line 299, in apache_beam.runners.common.DoFnInvoker.invoke_finish_bundle
File "apache_beam/runners/common.py", line 302, in apache_beam.runners.common.DoFnInvoker.invoke_finish_bundle
File "apache_beam/runners/common.py", line 693, in apache_beam.runners.common._OutputProcessor.finish_bundle_outputs
File "/home/user/beam-env/local/lib/python2.7/site-packages/apache_beam/io/iobase.py", line 1005, in finish_bundle
yield WindowedValue(self.writer.close(), window.MAX_TIMESTAMP,
File "/home/user/beam-env/local/lib/python2.7/site-packages/apache_beam/io/filebasedsink.py", line 388, in close
self.sink.close(self.temp_handle)
File "/home/user/beam-env/local/lib/python2.7/site-packages/apache_beam/io/filebasedsink.py", line 148, in close
file_handle.close()
File "/home/user/beam-env/local/lib/python2.7/site-packages/apache_beam/io/filesystemio.py", line 201, in close
self._uploader.finish()
File "/home/user/beam-env/local/lib/python2.7/site-packages/apache_beam/io/gcp/gcsio.py", line 553, in finish
raise self._upload_thread.last_error # pylint: disable=raising-bad-type
RuntimeError: SSLHandshakeError: [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed (_ssl.c:661) [while running 'Write/Write/WriteImpl/WriteBundles']
Note: I see this error only from Cloud Shell and the same code and commands works fine when executed from local machine - Using SDK or PyCharm IDE.
What is the problem with my cloud shell? How to fix it? Please suggest.
A: Or try this in cloud shell?
virtualenv --python=/usr/bin/python2 run_pipeline
. ./run_pipeline/bin/activate
pip install apache-beam[gcp]
python -m simple_pipeline --runner DataflowRunner --project myproject --staging_location gs://bucket/staging --temp_location gs://bucket/temp
A: I am able to launch dataflow jobs after upgrading to latest version.
pip install apache-beam[gcp]
| {
"language": "en",
"url": "https://stackoverflow.com/questions/55430101",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: PHP duplicating query results? For some reason, when I run this query through PHP it returns 8 rows, when in reality I only have 2 rows in my database; when I run it through phpMyAdmin it returns 2, obviously.
$sql = "SELECT url FROM bookmarks,users WHERE bookmarks.user_id = {$session->user_id}";
$resultado = $db->query($sql);
echo mysql_num_rows($resultado);
I'm breaking my head!
A: Your query produces a Cartesian product because you have not supplied the relationship between the two tables, bookmarks and users:
SELECT url
FROM bookmarks
INNER JOIN users
ON bookmarks.COLNAME = users.COLNAME
WHERE bookmarks.user_id = '$session->user_id'
where COLNAME is the column that defines how the tables are related to each other, i.e. how they should be linked.
To further gain more knowledge about joins, kindly visit the link below:
*
*Visual Representation of SQL Joins
As a side note, the query is vulnerable to SQL injection if the values of the variables come from outside. Please take a look at the article below to learn how to prevent it. By using prepared statements you can also get rid of the single quotes around values.
*
*How to prevent SQL injection in PHP?
A: As JW pointed out, you are producing a Cartesian Product -- you aren't joining bookmarks against users. That's why you're duplicating rows.
With that said, you don't need to join users at all in your above query:
"SELECT url FROM bookmarks WHERE bookmarks.user_id = {$session->user_id}"
Good luck.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/14888823",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: Intermittent "Cannot create ActiveX component" Everything was working fine, and then our code starts throwing:
Cannot create ActiveX component when we try to create a com object.
We reboot the server a couple of times and it goes away
Then after a while it comes back
This is driving us nuts. Any help appreciated.
A: I apologize that the question was vague. However we had no clue what was causing the problem so what facts would have been relevant?
At any rate it turns out we have two web apps running under IIS, both are trying to create ActiveX components. As soon as we turn one of the web apps off the problem goes away.
After we verified this behavior, we tried calling some simple ActiveX components (like excel). Lo and behold exactly the same behavior as long as there are more than two web apps running.
Our solution for now is to host the second web app elsewhere.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/5585358",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "-1"
} |
Q: Alternative to Cookie Session for different applications within same domain I have 2 different ASP.NET applications on the same domain, different app pools on IIS. Application 1 opens Application 2 using window.open(url). At this stage the cookie is shared among both applications.
Application 2 runs the following code after login:
Context.Response.Cookies["ASP.NET_SessionId"].Value = string.Empty;
Context.Response.Cookies["ASP.NET_SessionId"].Expires = DateTime.Now.AddMonths(-20);
The purpose of this code is to prevent Session Fixation vulnerability whereby an attacker could steal the session id pre-login, and use it post-login to authenticate themselves as a legitimate user.
However, this also changes the session id on Application 1, effectively login the user out of the application.
How can I prevent this from happening? and what other approaches could I follow?
A: You could try using JSON Web Tokens (JWT) instead:
https://www.red-gate.com/simple-talk/dotnet/net-development/jwt-authentication-microservices-net/
https://jwt.io/
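For example, issuing a token with the System.IdentityModel.Tokens.Jwt NuGet package looks roughly like this (a sketch; the shared secret, claims and lifetime are placeholders to adapt):
using System;
using System.IdentityModel.Tokens.Jwt;
using System.Security.Claims;
using System.Text;
using Microsoft.IdentityModel.Tokens;

// Application 1 issues a signed token that Application 2 validates with the
// same shared secret, instead of both apps sharing the ASP.NET session cookie.
var key = new SymmetricSecurityKey(Encoding.UTF8.GetBytes("replace-with-a-long-shared-secret"));
var credentials = new SigningCredentials(key, SecurityAlgorithms.HmacSha256);

var token = new JwtSecurityToken(
    issuer: "app1",
    audience: "app2",
    claims: new[] { new Claim(ClaimTypes.Name, "some-user") },
    expires: DateTime.UtcNow.AddMinutes(30),
    signingCredentials: credentials);

string jwt = new JwtSecurityTokenHandler().WriteToken(token);
// Pass jwt to the second application (e.g. in an Authorization header) and
// validate it there with JwtSecurityTokenHandler.ValidateToken.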
| {
"language": "en",
"url": "https://stackoverflow.com/questions/55311185",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: Crash with UITabBarController Sometimes I get a strange crash when I switch from one tab to another:
0 libobjc.A.dylib 0x3857e636 objc_msgSend + 22
1 UIKit 0x3024ca89 -[UIResponder(Internal) _canBecomeFirstResponder] + 21
2 UIKit 0x3024c7a3 -[UIResponder becomeFirstResponder] + 207
3 UIKit 0x3024caff -[UIView(Hierarchy) becomeFirstResponder] + 107
4 UIKit 0x302c946b -[UITextField becomeFirstResponder] + 47
5 UIKit 0x302ca345 -[UIView(Hierarchy) deferredBecomeFirstResponder] + 57
6 UIKit 0x301cd441 __45-[UIView(Hierarchy) _postMovedFromSuperview:]_block_invoke + 165
7 Foundation 0x2e304d33 -[NSISEngine withBehaviors:performModifications:] + 211
8 UIKit 0x301cd291 -[UIView(Hierarchy) _postMovedFromSuperview:] + 297
9 UIKit 0x301da01d -[UIView(Internal) _addSubview:positioned:relativeTo:] + 1413
10 UIKit 0x301d9a93 -[UIView(Hierarchy) addSubview:] + 31
11 UIKit 0x302bb563 -[UITransitionView transition:fromView:toView:removeFromView:] + 979
12 UIKit 0x302fb783 -[UITransitionView transition:fromView:toView:] + 31
13 UIKit 0x302fb759 -[UITransitionView transition:toView:] + 105
14 UIKit 0x302fa87b -[UITabBarController transitionFromViewController:toViewController:transition:shouldSetSelected:] + 1107
15 UIKit 0x302fa41f -[UITabBarController transitionFromViewController:toViewController:] + 39
16 UIKit 0x302fa2f7 -[UITabBarController _setSelectedViewController:] + 259
17 UIKit 0x303c45c9 -[UITabBarController _tabBarItemClicked:] + 273
18 UIKit 0x302036c7 -[UIApplication sendAction:to:from:forEvent:] + 91
19 UIKit 0x30203663 -[UIApplication sendAction:toTarget:fromSender:forEvent:] + 39
20 UIKit 0x303c447f -[UITabBar _sendAction:withEvent:] + 371
21 UIKit 0x302036c7 -[UIApplication sendAction:to:from:forEvent:] + 91
22 UIKit 0x30203663 -[UIApplication sendAction:toTarget:fromSender:forEvent:] + 39
23 UIKit 0x30203633 -[UIControl sendAction:to:forEvent:] + 47
24 UIKit 0x301eed7b -[UIControl _sendActionsForEvents:withEvent:] + 375
25 UIKit 0x303c41a7 -[UITabBar(Static) _buttonUp:] + 119
26 UIKit 0x302036c7 -[UIApplication sendAction:to:from:forEvent:] + 91
27 UIKit 0x30203663 -[UIApplication sendAction:toTarget:fromSender:forEvent:] + 39
28 UIKit 0x30203633 -[UIControl sendAction:to:forEvent:] + 47
29 UIKit 0x301eed7b -[UIControl _sendActionsForEvents:withEvent:] + 375
30 UIKit 0x3020307b -[UIControl touchesEnded:withEvent:] + 595
31 UIKit 0x30202d4d -[UIWindow _sendTouchesForEvent:] + 529
32 UIKit 0x301fdca7 -[UIWindow sendEvent:] + 759
33 UIKit 0x301d2e75 -[UIApplication sendEvent:] + 197
34 UIKit 0x301d1541 _UIApplicationHandleEventQueue + 7121
Maybe this will look familiar to someone who can help me understand the root of the problem.
A: I got the same crash log. In my case, the view controller was released but the UITextField's delegate was not set to nil. So you can set textField.delegate = nil in the view controller's dealloc.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/25874754",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: Asking the user for the radius of the circle and solving for the area and the circumference I am making a class with code similar to this, but I don't know how to make it take input.
How would I ask the user to input the radius of a circle?
class Circle:
    def __init__(self, r):
        self.radius = r

    def area(self):
        return 3.14 * (self.radius ** 2)

    def perimeter(self):
        return 2 * 3.14 * self.radius

obj = Circle(3)
print("Area of circle:", obj.area())
print("Perimeter of circle:", obj.perimeter())
A: You just have to replace the argument with an input() call to take input from the user. So the code will change from
obj = Circle(3)
to
obj = Circle(int(input("Please Enter Radius:")))
The int() before the input function is to convert the input from string to an integer.
To know more about taking input from user, please check out here.
A: class Circle:
def __init__(self, r):
self.radius = r
def area(self):
return 3.14 * (self.radius ** 2)
def perimeter(self):
return 2*3.14*self.radius
obj = Circle(int(input("Enter Radius:")))
#obj = Circle()
print("Area of circle:",obj.area())
print("Perimeter of circle:",obj.perimeter())
| {
"language": "en",
"url": "https://stackoverflow.com/questions/71297957",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "-1"
} |
Q: Where to place a data array in zf2 I want to create the following static data array in my zf2 project so that routing and other methods can be done based on department codes, and then view helpers and form select elements in different modules can look up the department titles for the user interface.
$deptList = array(
'01' => 'Human Resources',
'02' => 'Sales',
'03' => 'Marketing',
'04' => 'Accounting',
// ...
);
In the zf2 directory structure, where should I put this? Does it need its own class?
Also, it might be convenient to record this data in a database table instead of hard-coding it. But I question whether this would affect performance.
A: As there is no registry singleton like in ZF1, creating a service and injecting it where needed is appropriate. You can then place it in the filesystem according to your autoloader configuration. Inside that class you could also do anything you like to build the array, e.g. use a database for it.
Nevertheless you can as well use your config for it if it is static information - e.g. like this:
Module.php
class Module
{
public function getConfig()
{
return include __DIR__ . '/config/module.config.php';
}
}
module.config.php
return [
'deptList' => [
'01' => 'Human Resources',
'02' => 'Sales',
'03' => 'Marketing',
'04' => 'Accounting',
// ...
],
];
MyController.php
class MyController extends \Zend\Mvc\Controller\AbstractActionController
{
public function myAction()
{
$config = $this->getServiceLocator()->get('config');
$deptList = $config['deptList'];
}
}
| {
"language": "en",
"url": "https://stackoverflow.com/questions/30039796",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
} |
Q: Facebook API Send Message: How to send a link I send messages with a Facebook application; the messages are plain text.
I want to send a link in the message.
This is my code:
$message_to_reply = '<a href="http://test.com">hello</a>';
$ch = curl_init($url);
$jsonData = '{
"recipient":{
"id":"' . $sender . '"
},
"message":{
"text":"' . $message_to_reply . '"
}
}';
$jsonDataEncoded = $jsonData;
curl_setopt($ch, CURLOPT_POST, 1);
curl_setopt($ch, CURLOPT_POSTFIELDS, $jsonDataEncoded);
curl_setopt($ch, CURLOPT_HTTPHEADER, array('Content-Type: application/json'));
But it doesn't work.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/56040528",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: qTranslate plugin switching language in the same page I have a serious problem with the qTranslate buttons.
Right now the web structure is: http://www.site.com/news/?lang=en
When I am on the home page and I try to change the language, the button opens the first post (maybe because I'm using permalinks):
<?php if(qtrans_getLanguage()=='it'): ?>
<li><a href="<?php echo qtrans_convertURL(get_permalink(), 'en'); ?>" >eng</a></li>
<li class="liguaattiva">ita</li>
<?php endif; ?>
<?php if(qtrans_getLanguage()=='en'): ?>
<li class="liguaattiva">eng</li>
<li><a href="<?php echo qtrans_convertURL(get_permalink(), 'it'); ?>" >ita</a></li>
<?php endif; ?>
How can I solve this without opening the last post or going back to the home page, but only switching the language on the same page?
A: I'm using qTranslate on my project and I do not do any of that stuff you do in your code above and have no problem switching between languages.
All I do is call the qts_language_menu() function that creates the language menu, nothing else. This will create the necessary links, which enable you to switch between languages but stay on the same page.
A: You don't need to use get_permalink()
you can just pass an empty string as url and the language as 2nd param
and the function will do rest !
just like:
$my_translated_content_url = qtrans_convertURL("", "en");
infact if you see at the function definition:
function qtrans_convertURL($url='', $lang='', $forceadmin = false) {
global $q_config;
// invalid language
if($url=='') $url = esc_url($q_config['url_info']['url']); // <-You don't need the url
if($lang=='') $lang = $q_config['language'];
[... the function continue...]
A: The links are stored in the $qTranslate_slug object. I made a function to make it easy to get the link for the current page in the desired language:
function getUrlInTargetLanguage($targetLang){
global $qtranslate_slug;
return $qtranslate_slug->get_current_url($targetLang);
}
So for example, if you wanted to get the english link, you should write:
getUrlInTargetLanguage("en");
A: This may be late, but the following are handy functions to check the current language or auto-generate a URL in any language for qTranslate:
// check language
function check_lang() {
return qtranxf_getLanguage();
}
// Generate language convert URL
function get_lan_url($lang){
echo qtranxf_convertURL('', $lang);
}
// generate inline translate short code
add_shortcode( 'translate_now', 'get_translate' );
function get_translate( $atts, $content = null ) {
extract( shortcode_atts(
array(
'ar' => '',
'en' => '',
'es' => '',
'fr' => '',
), $atts )
);
if ( check_lang() == 'ar' ) {
echo $atts['ar'];
}
if ( check_lang() == 'en' ) {
echo $atts['en'];
}
if ( check_lang() == 'es' ) {
echo $atts['es'];
}
if ( check_lang() == 'fr' ) {
echo $atts['fr'];
}
}
function translate_now($ar,$en,$es,$fr){
$content = '[translate_now ar="'.$ar.'" en="'.$en.'" es="'.$es.'" fr="'.$fr.'"]';
echo do_shortcode($content);
}
So now you can check the current language using the check_lang() function, for example:
<?php if(check_lang() == 'ar'): echo 'مرحبا'; endif;?>
<?php if(check_lang() == 'en'): echo 'Hello'; endif;?>
<?php if(check_lang() == 'es'): echo 'Hola'; endif;?>
<?php if(check_lang() == 'fr'): echo 'Bonjour'; endif;?>
Also you can use the function translate_now() to translate inline by passing values:
<?php
translate_now(
'مرحبا', // ar
'Hello', //en
'Hola', //es
'Bonjour' //fr
);
?>
Also, to generate a URL converted to any language, use the function get_lan_url(), passing the requested language:
<a href="<?php get_lan_url('ar');?>">العربية</a>
<a href="<?php get_lan_url('en');?>">English</a>
<a href="<?php get_lan_url('es');?>">España</a>
<a href="<?php get_lan_url('fr');?>">Français</a>
| {
"language": "en",
"url": "https://stackoverflow.com/questions/16574953",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
} |
Q: Can anyone explain ![]/[] == true statement in js? console.log(true == []); // -> false
console.log(true == ![]); // -> false
Why are they always false?
A: With ==, a boolean operand is converted to a number, and an object operand is converted to a primitive and then to a number: true becomes 1, while [] becomes "" and then 0, so true == [] compares 1 == 0, which is false.
![] evaluates to false (an empty array is still a truthy object), so true == ![] is true == false, which is also false.
Also, true == !![] is true.
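Spelled out as code, the coercions are:
Number(true);  // 1
Number([]);    // 0  ([] -> "" -> 0)
true == [];    // compares 1 == 0 -> false

![];           // false, because [] is a truthy object
true == ![];   // true == false -> false
!![];          // true
true == !![];  // true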
| {
"language": "en",
"url": "https://stackoverflow.com/questions/57595738",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
} |