Q:
PowerShell: combining paths using a variable
This must be something obvious, but I can't get this to work.
I'm trying to build a variable that should contain the path to an existing file, using an environment variable ($env:programfiles(x86)). However I keep getting errors, and I fail to see why.
This works fine (if the file exists):
PS C:\> $f = "C:\Program Files (x86)" + '\sometextfile.txt'
PS C:\> $f
C:\Program Files (x86)\sometextfile.txt
PS C:\> gci $f
Directory: C:\Program Files (x86)
Mode LastWriteTime Length Name
---- ------------- ------ ----
-a--- 13/12/2010 14:03 0 sometextfile.txt
PS C:\>
However, this does not:
PS C:\> "$env:programfiles(x86)"
C:\Program Files(x86)
PS C:\> $f = "$env:ProgramFiles(x86)" + '\sometextfile.txt'
PS C:\> $f
C:\Program Files(x86)\sometextfile.txt
PS C:\> gci $f
Get-ChildItem : Cannot find path 'C:\Program Files(x86)\sometextfile.txt' because it does not exist.
At line:1 char:4
+ gci <<<< $f
+ CategoryInfo : ObjectNotFound: (C:\Program Files(x86)\sometextfile.txt:String) [Get-ChildItem], ItemNot
FoundException
+ FullyQualifiedErrorId : PathNotFound,Microsoft.PowerShell.Commands.GetChildItemCommand
What's happening, and how do I fix it?
A:
Here is what is going on...
Inside a double-quoted string, PowerShell only treats characters that can legally appear in a variable name as part of the variable reference. Parentheses are not among them, so "$env:programfiles(x86)" expands $env:ProgramFiles (which is C:\Program Files) and then appends the literal text (x86). To reference an environment variable whose name contains special characters such as parentheses, wrap the whole name in braces: ${env:ProgramFiles(x86)}.
If you use the ${env:ProgramFiles(x86)} form of the environment variable, it works perfectly.
This won't work...
PS C:\> cd "$env:programfiles(x86)"
Set-Location : Cannot find path 'C:\Program Files(x86)' because it does not e
At line:1 char:3
+ cd <<<< "$env:programfiles(x86)"
+ CategoryInfo : ObjectNotFound: (C:\(x86):String)
+ FullyQualifiedErrorId : PathNotFound,Microsoft.PowerShell.
or this...
PS C:\> $env:ProgramFiles(x86)
Unexpected token '(' in expression or statement.
At line:1 char:19
+ $env:ProgramFiles( <<<< x86)
+ CategoryInfo : ParserError: ((:String) [], Parent
+ FullyQualifiedErrorId : UnexpectedToken
But this works great...
PS C:\> ${env:ProgramFiles(x86)}
C:\Program Files (x86)
PS C:\> $f = "${env:ProgramFiles(x86)}" + "\sometextfile.txt"
PS C:\> $f
C:\Program Files (x86)\sometextfile.txt
PS C:\> gci $f
Directory: C:\Program Files (x86)
Mode LastWriteTime Length Name
---- ------------- ------ ----
-a--- 12/13/2010 8:58 AM 0 sometextfile.txt
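As a side note, Join-Path will do the concatenation (and insert the separator) for you; a minimal sketch using the same file name as above:
PS C:\> $f = Join-Path ${env:ProgramFiles(x86)} 'sometextfile.txt'
PS C:\> gci $f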
Q:
Is there any need to learn views and template engines in Express when we have already learned Angular in the MEAN stack?
I am learning the MEAN stack. I started by learning Angular (from angular.io), and now I am learning Node.js and Express.
My question is: if Angular handles the front end in the MEAN stack, why does Express have views and template engines on the back end? Are they alternatives to each other, or do they complement each other? Where is the boundary between their roles and responsibilities?
I am looking for help clarifying the roles these two technologies (Express views and Angular) play in the MEAN stack.
A:
In order to answer your question, let me first explain what Angular is and what template engines in Express are.
What is Angular?
Angular is a platform that makes it easy to build applications with the web. Angular combines declarative templates, dependency injection, end to end tooling, and integrated best practices to solve development challenges. Angular empowers developers to build applications that live on the web, mobile, or the desktop.
What is a template engine?
A template engine enables you to use static template files in your application. At runtime, the template engine replaces variables in a template file with actual values and transforms the template into an HTML file sent to the client. This approach makes it easier to design an HTML page.
Some popular template engines that work with Express are Pug, Mustache, and EJS. The Express application generator uses Jade as its default, but it also supports several others.
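For instance, here is a minimal sketch of server-side rendering with Express and EJS (the route, file name and variable are illustrative and assume the ejs package is installed; they are not taken from the question):
const express = require('express');
const app = express();
app.set('view engine', 'ejs');            // templates live in ./views
app.get('/', (req, res) => {
  // views/index.ejs might contain: <h1>Hello <%= name %></h1>
  res.render('index', { name: 'world' }); // variables are replaced at request time
});
app.listen(3000);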
So,
Angular is a framework with a templating component baked in. You use it to create single-page web applications, which means DOM modification happens on the client side and the app exchanges only data with the server. Its templates are plain HTML.
With a template engine, by contrast, the server renders a new page for every request and sends the rendered view to the client, which is great for static sites but not for rich site interactions.
If there is Angular for the front end in the MEAN stack, why are there views and template engines in Express on the back end?
Because generating views with Angular is not always the recommended choice; sometimes it is better to use a template engine to generate the views and send the rendered page to the client. Generating views on the client side has its own pros and cons, and generating them on the server side has its own.
Generating views using template engines (i.e. at server-side):
pros:
Search engines can crawl the site for better SEO.
The initial page load is faster.
Great for static sites.
cons:
Frequent server requests.
An overall slow page rendering.
Full page reloads.
Non-rich site interactions.
Generating views using Angular (i.e. at client-side):
pros:
Rich site interactions
Fast website rendering after the initial load.
Great for web applications.
Robust selection of JavaScript libraries.
cons:
Low SEO if not implemented correctly.
The initial load might require more time.
In most cases, requires an external library.
So, knowing the pros and cons, you can decide which approach is better in your particular case. The MEAN stack simply gives developers both options.
As far as the roles of the two technologies are concerned, Angular is much more than a view generator: it has features like routing, services, two-way data binding, etc., while template engines are only meant to render HTML so that it can be sent to the client.
I hope you will find this answer useful.
References:
what is the template engine?
what is angular?
pros/cons
Q:
How do I dynamically create file download links in code behind in ASP.NET?
I have files on the web server. I want to make them available for download to a client. However, I don't know what the files will be, or even how many there will be until runtime. If I create a hyperlink with the NavigateUrl set to the file location on the server, then the client just tries to locate the file on their local system at that location, so that doesn't work. I was able to get a single file to work using a linkbutton, but that is also not an option because I can't create it dynamically.
Is there a way to do this dynamically in ASP.NET?
A:
The way I eventually solved this was to use a repeater and then put the action in the repeater instead of in the link button.
<asp:Repeater id="repLinks" runat="server" OnItemCommand="repLinks_OnItemCommand">
<ItemTemplate>
<li>
<asp:LinkButton ID="HyperLink1" runat="server" Text='<%# Eval("Name") %>' CommandArgument='<%# Eval("Name") %>'>
</asp:LinkButton>
</li>
</ItemTemplate>
</asp:Repeater>
And then the code behind as follows:
protected void repLinks_OnItemCommand(object sender, RepeaterCommandEventArgs e)
{
    // fileName and _path are fields defined elsewhere in the page; the
    // CommandArgument carries the file name bound in the markup above.
    HttpContext.Current.Response.AddHeader("Content-disposition", "attachment; filename=" + fileName);
    HttpContext.Current.Response.ContentType = "application/octet-stream";
    HttpContext.Current.Response.WriteFile(System.IO.Path.Combine(_path, e.CommandArgument.ToString()));
    HttpContext.Current.Response.Flush();
    HttpContext.Current.Response.End();
}
The thing I was missing is that I could have the same handler for every LinkButton in the repeater, and then just pass the filename as a CommandArgument.
Q:
Counterexample: If $f: \mathbb{Q}(\alpha) \to \mathbb{Q}(\beta)$ is an isomorphism of fields, then $\beta=f(\alpha)$.
State whether the statement below is true or false:
If $f: \mathbb{Q}(\alpha) \to \mathbb{Q}(\beta)$ is an isomorphism of fields, then $\beta=f(\alpha)$.
If true, provide a proof; if false, provide a counter-example.
I know that this statement is false but I can't think of a counter-example.
A:
Take $f$ to be the identity map on $\mathbb{Q}=\mathbb{Q}(1)=\mathbb{Q}(-1)$. With $\alpha = 1$ and $\beta = -1$ this is an isomorphism $\mathbb{Q}(\alpha)\to\mathbb{Q}(\beta)$, yet $f(\alpha)=1\neq-1=\beta$, so it is a counterexample.
Q:
How to programmatically send http request to inner webpage link with java?
I am trying to make a Java application that connects to a server and then accesses a link on that server's page. For example, I have the link "http://goodserver.com" and I am able to connect to this URL with this code:
InetAddress addr = null;
Socket sock = new Socket("goodserver.com", 80); // host name only; a full URL is not a valid host
addr = sock.getInetAddress();
System.out.println("Connected to " + addr);
Now I am also able to read the whole source code of this page. But there are buttons with links. When I go through a browser I can easily click on those buttons and follow the links. For example, there is a button named "Test" whose corresponding link is "http://goodserver.com/targets/Test".
I want to access this link from Java, but the problem is that it can't be reached directly. I don't want to click the link programmatically (I have already read "Programmatically click a webpage button"). I just want to know the mechanism by which a browser can access the link after loading the home page, which I have not managed to reproduce with a plain Java HTTP request.
I have read the page by this code
URL url = new URL("http://goodserver.com");
BufferedReader reader = new BufferedReader
(new InputStreamReader(url.openStream()));
BufferedWriter writer = new BufferedWriter
(new FileWriter("data.html"));
String line;
while ((line = reader.readLine()) != null) {
System.out.println(line);
writer.write(line);
writer.newLine();
}
reader.close();
writer.close();
When I replace the home page link with my target button link "http://goodserver.com/targets/Test", I get the home page source code, not the target page.
I know that a browser also sends HTTP requests to get pages, so it should be possible from Java. Thanks in advance.
A:
If the result of the second request depends on whether you accessed the home page or not, your problem probably has something to do with cookies.
HTTP is a stateless protocol, that means that each request is independent from the others. When you open a page and click a button, you generate a new request to that other URL, but the server has no clue about who you are or what pages you opened before.
Cookies make it possible for the server to "remember" who you are. They work as follows: when you request a page, the server sends you the contents of that page, but it can also send some extra information called a cookie. Your browser stores that information, and every time you make another request to the same server, the browser sends the cookies along with that request. So, even though the server doesn't know at first who is making the request, it can look at the cookie, recognise that it sent that information to you, and conclude that you are the one making the request.
So, this is the part you are probably missing in your problem: storing the cookies that the server sends to you when you load the home page and then sending them again when you request the other page, to "remind" the server that you have already accessed the home page.
Naturally, you could do it by hand by parsing the HTTP headers, but I strongly recommend that you use some library to do this for you. The Apache HTTP Client is probably the best you can find in the Java world. Here's a short example of how you can keep cookies across requests:
public class CookiesExample {
public static void main(String[] args) throws Exception {
//This object will store your cookies:
BasicCookieStore cookieStore = new BasicCookieStore();
//Create a client using our cookie store:
CloseableHttpClient httpclient = HttpClients.custom()
.setDefaultCookieStore(cookieStore)
.build();
try {
//Execute request:
HttpGet httpget = new HttpGet("https://example.com/");
CloseableHttpResponse response = httpclient.execute(httpget);
try {
//Consume the response:
HttpEntity entity = response.getEntity();
EntityUtils.consume(entity);
} finally {
response.close();
}
//Whatever cookies that were sent by the server in that request
//are now stored in our cookie store. Subsequent requests will
//send those cookies to the server.
httpget = new HttpGet("https://example.com/my/awesome/internal/page");
response = httpclient.execute(httpget);
try {
//Consume the response:
HttpEntity entity = response.getEntity();
EntityUtils.consume(entity);
} finally {
response.close();
}
} finally {
httpclient.close();
}
}
}
Another possible solution would be to use an actual browser that takes care of all of that for you. JavaFX has a browser component that can be controlled from Java and there's also Selenium that lets you use a "driver" to control a real browser (Chrome, Firefox, IE, ...).
Q:
How to set the keepalive option for an individual socket in VxWorks
Is there any way to set keepalive for an individual socket descriptor in VxWorks? I read in some documents that the "SOL_TCP" level of the setsockopt function does this on Linux. Is such a facility available in VxWorks too? If so, please provide the related details, such as which include files we need and how to use the option.
A:
From the VxWorks "Library Reference" manual (can be download):
OPTIONS FOR STREAM SOCKETS
The following sections discuss the socket options available for stream (TCP) sockets.
SO_KEEPALIVE -- Detecting a Dead Connection
Specify the SO_KEEPALIVE option to make the transport protocol (TCP) initiate a timer to detect a dead connection:
setsockopt (sock, SOL_SOCKET, SO_KEEPALIVE, &optval, sizeof (optval));
This prevents an application from hanging on an invalid connection. The value at optval for this option is an integer (type int), either 1 (on) or 0 (off).
The integrity of a connection is verified by transmitting zero-length TCP segments triggered by a timer, to force a response from a peer node. If the peer does not respond after repeated transmissions of the KEEPALIVE segments, the connection is dropped, all protocol data structures are reclaimed, and processes sleeping on the connection are awakened with an ETIMEDOUT error.
The ETIMEDOUT timeout can happen in two ways. If the connection is not yet established, the KEEPALIVE timer expires after idling for TCPTV_KEEP_INIT. If the connection is established, the KEEPALIVE timer starts up when there is no traffic for TCPTV_KEEP_IDLE. If no response is received from the peer after sending the KEEPALIVE segment TCPTV_KEEPCNT times with interval TCPTV_KEEPINTVL, TCP assumes that the connection is invalid. The parameters TCPTV_KEEP_INIT, TCPTV_KEEP_IDLE, TCPTV_KEEPCNT, and TCPTV_KEEPINTVL are defined in the file target/h/net/tcp_timer.h.
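For reference, a minimal sketch of what the call looks like for an already-created stream socket (the header names and the enableKeepalive wrapper are my assumptions of typical VxWorks usage, not taken from the manual; check your BSP documentation):
#include <vxWorks.h>
#include <sockLib.h>
#include <sys/socket.h>

/* Turn SO_KEEPALIVE on (optval = 1) or off (optval = 0) for one socket. */
STATUS enableKeepalive (int sock)
    {
    int optval = 1;
    return setsockopt (sock, SOL_SOCKET, SO_KEEPALIVE,
                       (char *) &optval, sizeof (optval));
    }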
Q:
I need a simple way to call the java's swingworker publish() method from another class
The code below is working fine. The only thing I need is to be able to publish() results to the SwingWorker from inside the Bot class.
I have searched all over the web to no avail. This Answer did not work for me.
JButton btnStart = new JButton("Start");
btnStart.addActionListener(new ActionListener() {
public void actionPerformed(ActionEvent e) {
Bot test = new Bot();
SwingWorker<Void, String> worker = new SwingWorker<Void, String>() {
@Override
protected Void doInBackground() throws Exception {
test.run();
return null;
}
@Override
public void process(List<String> chunks) {
for (String s : chunks) {
textAreaMain.setText(s);
}
}
};
worker.execute();
}
});
btnStart.setBounds(305, 179, 95, 25);
panel_1.add(btnStart);
// The Bot class (the method in another class) has the run() method:
class Bot {
void run() {
for (int i = 0; i <= 10; i++) {
Thread.sleep(1000);
publish("Some message");
}
}
}
With example, how do I write my Bot() class to get what I need?
A:
I would give the Bot class's run method a parameter that accepts a callback taking a single String argument, and call back through that.
Really, any callback would do, but Java 8 method references work perfectly here.
e.g.,
import java.util.List;
import java.util.concurrent.TimeUnit;
import java.util.function.Consumer;
import javax.swing.*;
public class FooJPanel extends JPanel {
private JButton btn = new JButton("Button");
private JTextArea textAreaMain = new JTextArea(20, 20);
public FooJPanel() {
textAreaMain.setFocusable(false);
add(new JScrollPane(textAreaMain));
add(btn);
btn.addActionListener(e -> {
final Bot bot = new Bot();
SwingWorker<Void, String> worker = new SwingWorker<Void, String>() {
@Override
protected Void doInBackground() throws Exception {
bot.run(this::publish); // thanks Vince Emigh
return null;
}
@Override
public void process(List<String> chunks) {
for (String s : chunks) {
textAreaMain.append(s + "\n");
}
}
};
worker.execute();
});
}
public static void main(String[] args) {
SwingUtilities.invokeLater(() -> {
JFrame frame = new JFrame("Foo");
frame.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
frame.add(new FooJPanel());
frame.pack();
frame.setLocationRelativeTo(null);
frame.setVisible(true);
});
}
}
class Bot {
public void run(Consumer<String> c) {
for (int i = 0; i < 10; i++) {
c.accept("String #" + i);
try {
TimeUnit.SECONDS.sleep(1);
} catch (InterruptedException e) {
}
}
}
}
But any callback or listener construction would work, such as a PropertyChangeListener. As long as you keep coupling low and cohesion high, you should do well.
Q:
How can I refresh the VS Code Explorer from an extension?
I'm working on a VS Code extension that gets some input from the developer and runs some yarn/npm commands inside a custom terminal it opens, in order to generate a folder and some files inside it.
Everything is running ok, except that we need to manually click on the refresh button in the Explorer in order to see the resulting folder.
I would like to call a method from the extension to do that for us once the execution was completed.
I did some research on the web and found this command:
terminal.sendText(command);
vscode.commands.executeCommand(
"workbench.files.action.refreshFilesExplorer"
);
But unfortunately, it is not working as I expected.
Am I missing something ?
A:
You likely need to wait until after the command has completed before running refresh.
Does the command need to be run in the terminal? Can it be run using node's child_process instead, with you then executing vscode.commands.executeCommand("workbench.files.action.refreshFilesExplorer") after the process exits?
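A minimal sketch of that approach, assuming the generator can be invoked as a plain shell command (the yarn command below is a placeholder, not taken from the question):
import * as vscode from "vscode";
import { exec } from "child_process";

// Run the generator without a terminal, then refresh only after it has exited.
exec("yarn generate my-feature", { cwd: vscode.workspace.rootPath }, (err) => {
  if (err) {
    vscode.window.showErrorMessage(`Generation failed: ${err.message}`);
    return;
  }
  vscode.commands.executeCommand("workbench.files.action.refreshFilesExplorer");
});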
Q:
Dilemma after someone else edits a spam post
Should spam posts be edited? - Everyone knows: No.
Today someone edited a NSFW spam link (10k on Stack Overflow) into
[NSFW spam link]
Before clicking on the image, please note that there is an explicit NSFW link inside.
Seeing the edit, I thought I was under dilemma: I can't simply rollback the edit, but it doesn't comply with the meta post above. So I went on and edited that into
[NSFW spam link, please flag as spam]
and, sure enough, another user told me not to edit spam links. I explained the issue and was forgiven.
The exact thing I'm confused about is
If I roll the edit back, it's like I'm helping the spam link to survive (because it's already hidden), even though it's extending the life span of the link for only a minute or so. Also I would invalidate a few flags if it had already been some time after the edit, further delaying the automatic flag-nuke.
If I "improve" the edit, well the edit has already violated the consensus, my "improvement" only makes the violation worse.
As per the accepted answer to the linked question:
As nhinkle says, most links do not even need to be removed, unless they are linking to porn, viruses, or disturbing content.
This time the NSFW link is porn, so it somehow makes some sense to hide it. I would have rolled it back directly if it were regular spam (advertisements for whatever drugs).
Next time, what should I do if I see another user edits a spam link out? Rollback or "improve"?
A:
Rollback. Flag (so SpamRam catches it). Presumably let the Smokey team know (so Smokey catches it)...
There's no real dilemma here, though I disapprove of the use of edits as messaging.
Q:
asp.net mvc authorization problem
I am trying to add authorization to my controllers and it's not working...
I am not sure where to look in my program, but adding the
[Authorize]
filter in my controller is not working, let alone anything like
[Authorize(Roles = "Manager")]
I have been able to get this working in the default application that is provided when creating a new MVC project (i.e., I am able to make the "about" tab redirect to the login screen if I'm not logged in), so I assume I have mucked things up along the way as I've built my app. Does anyone know where I should be looking to fix this? I have users and they have roles; I'm using the ASP.NET schema that is auto-created; I've examined my web.config file up and down and, although I'm pretty new to this, nothing seems to be out of place. I have no clue why my authorization filters aren't working.
A:
I wrote a custom attribute to solve this problem. You can attribute your controller methods as follows:
[RequiresRole(Role="Admin")]
public ActionResult Index()
{
int i = 5 + 5;
return View();
}
The code for the attribute is as follows....
using System;
using System.Collections.Generic;
using System.Linq;
using System.Web;
using System.Web.Mvc;
using System.Web.Security;
namespace Web.Controllers
{
public class RequiresRoleAttribute : ActionFilterAttribute
{
public string Role { get; set; }
public override void OnActionExecuting(ActionExecutingContext filterContext)
{
if (string.IsNullOrEmpty(Role))
{
throw new InvalidOperationException("No role specified.");
}
string redirectOnSuccess = filterContext.HttpContext.Request.Url.AbsolutePath;
string redirectUrl = string.Format("?returnUrl={0}", redirectOnSuccess);
string loginUrl = FormsAuthentication.LoginUrl + redirectUrl;
if (!filterContext.HttpContext.User.Identity.IsAuthenticated)
{
filterContext.HttpContext.Response.Redirect(loginUrl, true);
}
else
{
bool isAuthorised = filterContext.HttpContext.User.IsInRole(this.Role);
if (!isAuthorised)
{
filterContext.HttpContext.Response.Redirect(loginUrl, true);
}
}
}
}
}
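As a side note, the built-in [Authorize] filter only redirects to the login page if forms authentication is configured, and role checks only work if the role manager is enabled; a minimal sketch of the relevant web.config section (the values are illustrative, not taken from the question):
<system.web>
  <authentication mode="Forms">
    <forms loginUrl="~/Account/LogOn" timeout="2880" />
  </authentication>
  <roleManager enabled="true" />
</system.web>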
Q:
What is the difference between comma (,) and plus (+) in string arrays?
string[] NamesList = new string[3]
{
"George", "Bob", "Kat"
};
Console.WriteLine("Name list: ",String.Join(",", NamesList));
On the last line, If I replace the first comma with +, I get the following results:
Name list: George, Bob, Kat
If I leave it as It is (with comma), there is nothing shown:
Name list:
A:
With that line:
Console.WriteLine("Name list: " + String.Join(",", NamesList));
you concatenate everything.
With that line:
Console.WriteLine("Name list: ", String.Join(",", NamesList));
you get nothing after "Name list: " because the right syntax is:
Console.WriteLine("Name list: {0}", String.Join(",", NamesList));
That overload takes the arguments after the first ',' and substitutes them into the {0}, {1}, etc. placeholders.
Q:
Will releasing an array also release its elements?
I alloc my NSMutableArray, and add objects that were alloced as well. Will calling release on my array also release the elements within, or do I have to release each element manually first?
A:
Objects in an Objective-C collection are released when the collection is deallocated (which is not the same as being released). So in practice, once you add your objects to a collection, the collection takes ownership of them and you don't need to send extra releases for its elements.
Q:
Sort Multidimensional List - Python
I have a 3D list of lists (or NumPy array) and I need to sort it by the first item of each innermost list, smallest first.
These are the last two attempts I made on this program. Sorry, I am quite sure it is an easy/silly question, but as a newbie to the programming 'way of thinking', it is kind of hard for me.
First Try:
lstsArray = [[[54,21,31], [1,2,3], [15,25,35]],
[[12,22,32], [3,2,1], [16,26,36]],
[[34,24,38], [0.1,1,1], [17,27,37]]]
val = np.array(lstsArray)
menor = 120e26
for item in val:
for i in item:
if menor >= i[0] and i[0] >= min(i):
menor = i[0]
print(menor)
lstA = list(val)
a = sorted(lstA, key=itemgetter(menor))
print(a)
Second Try
for i in val:
for j in i:
print(sorted((i), key =itemgetter(j[0])))
Desired Output
[[[0.1,1,1],[1,2,3],[3,2,1]],
[[12,22,32],[15,25,35],[16,26,36]],
[[17,27,37],[34,24,38],[54,21,31]]]
A:
Your list, and array made from it. Note the floats in the array:
In [124]: lstsArray = [[[54,21,31], [1,2,3], [15,25,35]],
...: [[12,22,32], [3,2,1], [16,26,36]],
...: [[34,24,38], [0.1,1,1], [17,27,37]]]
In [125]: val=np.array(lstsArray)
In [126]: val
Out[126]:
array([[[54. , 21. , 31. ],
[ 1. , 2. , 3. ],
[15. , 25. , 35. ]],
[[12. , 22. , 32. ],
[ 3. , 2. , 1. ],
[16. , 26. , 36. ]],
[[34. , 24. , 38. ],
[ 0.1, 1. , 1. ],
[17. , 27. , 37. ]]])
This is a (3,3,3) shaped array. But your sorting ignores the initial (3,3) layout, so let's go ahead and reshape it:
In [133]: val = np.array(lstsArray).reshape(-1,3)
In [134]: val
Out[134]:
array([[54. , 21. , 31. ],
[ 1. , 2. , 3. ],
[15. , 25. , 35. ],
[12. , 22. , 32. ],
[ 3. , 2. , 1. ],
[16. , 26. , 36. ],
[34. , 24. , 38. ],
[ 0.1, 1. , 1. ],
[17. , 27. , 37. ]])
Now we can easily sort on the first column value. argsort gives the sort order:
In [135]: idx = np.argsort(val[:,0])
In [136]: idx
Out[136]: array([7, 1, 4, 3, 2, 5, 8, 6, 0])
In [137]: val[idx]
Out[137]:
array([[ 0.1, 1. , 1. ],
[ 1. , 2. , 3. ],
[ 3. , 2. , 1. ],
[12. , 22. , 32. ],
[15. , 25. , 35. ],
[16. , 26. , 36. ],
[17. , 27. , 37. ],
[34. , 24. , 38. ],
[54. , 21. , 31. ]])
and to get it back to 3d:
In [138]: val[idx].reshape(3,3,3)
Out[138]:
array([[[ 0.1, 1. , 1. ],
[ 1. , 2. , 3. ],
[ 3. , 2. , 1. ]],
[[12. , 22. , 32. ],
[15. , 25. , 35. ],
[16. , 26. , 36. ]],
[[17. , 27. , 37. ],
[34. , 24. , 38. ],
[54. , 21. , 31. ]]])
or in list display:
In [139]: val[idx].reshape(3,3,3).tolist()
Out[139]:
[[[0.1, 1.0, 1.0], [1.0, 2.0, 3.0], [3.0, 2.0, 1.0]],
[[12.0, 22.0, 32.0], [15.0, 25.0, 35.0], [16.0, 26.0, 36.0]],
[[17.0, 27.0, 37.0], [34.0, 24.0, 38.0], [54.0, 21.0, 31.0]]]
But if the list had just one level of nesting:
In [140]: alist = val.tolist()
In [141]: alist
Out[141]:
[[54.0, 21.0, 31.0],
[1.0, 2.0, 3.0],
[15.0, 25.0, 35.0],
[12.0, 22.0, 32.0],
[3.0, 2.0, 1.0],
[16.0, 26.0, 36.0],
[34.0, 24.0, 38.0],
[0.1, 1.0, 1.0],
[17.0, 27.0, 37.0]]
the python sorted works quite nicely:
In [142]: sorted(alist, key=lambda x:x[0]) # or itemgetter
Out[142]:
[[0.1, 1.0, 1.0],
[1.0, 2.0, 3.0],
[3.0, 2.0, 1.0],
[12.0, 22.0, 32.0],
[15.0, 25.0, 35.0],
[16.0, 26.0, 36.0],
[17.0, 27.0, 37.0],
[34.0, 24.0, 38.0],
[54.0, 21.0, 31.0]]
The fact that you have a double nested list, but want the sort to ignore one layer, complicates the list processing. That's where numpy reshape helps a lot.
For now I won't test the relative speeds of these approaches.
Q:
Only run map once, ramda js
const arr = [{
_id: 'z11231',
_typename: 'items',
id: '123',
comment: null,
title: 'hello'
}, {
_id: 'z11231',
_typename: 'items',
id: 'qqq',
comment: 'test',
title: 'abc'
}]
Wanted output:
[['123', null, 'hello'], ['qqq', 'test', 'abc']];
export const convertObjectsWithValues = R.map(R.values);
export const removeMongoIdAndGraphqlTypeName = R.map(R.omit(['_id', '__typename']));
export const getExcelRows = R.pipe(removeMongoIdAndGraphqlTypeName, convertObjectsWithValues);
The problem here is that I'm running two separate maps. It's too slow. Can I combine this in a way where only one map is executed, and still keep it clean in three separate functions?
A:
Use R.map with R.props to state which properties you want, in the order that you want them. This will always maintain the correct order, unlike R.values, which is constrained by the way JS orders keys.
const arr = [{"_id":"z11231","_typename":"items","id":"123","comment":null,"title":"hello"},{"_id":"z11231","_typename":"items","id":"qqq","comment":"test","title":"abc"}]
const getExcelRows = keys => R.map(R.props(keys))
const result = getExcelRows(['id', 'comment', 'title'])(arr)
console.log(result)
<script src="https://cdnjs.cloudflare.com/ajax/libs/ramda/0.26.1/ramda.js"></script>
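To address the "one map, three separate functions" part directly: R.map calls can be fused by composing the per-item functions, so only one traversal happens. A sketch (it uses the _typename key exactly as it appears in the sample data, and the ordering caveat about R.values from above still applies):
const removeMongoIdAndGraphqlTypeName = R.omit(['_id', '_typename']);
const convertObjectWithValues = R.values;
const getExcelRows = R.map(R.pipe(removeMongoIdAndGraphqlTypeName, convertObjectWithValues));

console.log(getExcelRows(arr)); // [['123', null, 'hello'], ['qqq', 'test', 'abc']]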
Q:
In iOS - Production and staging server included in single build
I have an application with two servers, STAGING and PRODUCTION. I used to release two builds, changing the server in code. But now my client has asked for a single build that provides an option in the app settings or phone settings to change the URL.
I researched a lot on Stack Overflow and learned that selecting debug/release mode at build time can do this, but that still comes down to producing two builds.
I want a single build where the user can switch between Staging and Production. Please help me. Is it possible?
A:
You can just add a switch somewhere in your application to turn on staging mode (switch off = production). The state of this switch is saved in NSUserDefaults. Then, depending to this state, you choose the right URL of server in your code.
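A minimal sketch of that idea in Swift (the defaults key and the URLs are placeholders, not from the question):
let useStaging = UserDefaults.standard.bool(forKey: "useStaging")   // state of the switch
let baseURL = useStaging
    ? URL(string: "https://staging.example.com")!
    : URL(string: "https://api.example.com")!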
Q:
NgRx - How are states combined and initialized
When we initialize our Store:
StoreModule.provideStore({r1: Reducer1, r2: Reducer2, ...})
we do pass the reducers to the Store to be stored. But we never actually pass the initial states to the store, except defining it in the reducers functions:
const someReducer = (state = initialState, act: Action) => { ... }
So, is it that when the application bootstraps, all the reducers are called once to acquire the initial state from the reducer definition, and the state is then stored in the NgRx Store?
If so, must every reducer have an initial state value, otherwise the states will always be undefined?
And, if all reducers are called at bootstrap, how does NgRx make sure that each reducer reaches the default case?:
case ...:
...
default:
return initialState
Thanks a lot!
Any help appreciated!!
A:
Every time an action is dispatched to the store, every registered reducer is called. A reducer is just a function that takes the current state plus an action and returns a new state.
So yes, on bootstrap an init-action is dispatched (and therefore every reducer is called). If you use store-devtools for example, you will see that the initial action is called @ngrx/store/init.
Since the current state is undefined at this point, the reducer-function is called with the following: someReducer(undefined, { type: '@ngrx/store/init' })
When a function gets called with a parameter that is undefined and it has a parameter with a default value, it will use that default value instead.
function funcWithoutDefault(myParam) {
console.log(myParam);
}
funcWithoutDefault('hello'); // 'hello'
funcWithoutDefault(undefined); // undefined
function funcWithDefault(myParam = 'world') {
console.log(myParam);
}
funcWithDefault('hello'); // 'hello'
funcWithDefault(undefined); // 'world'
That's probably nothing new to you. But hopefully it helps to answer your question about initial state:
As I said, the current state is undefined before the init-action is called. So when the reducer-function has a parameter with a default value (the initialstate), the store will be initialized with this initial state.
If you don't define a default value for a reducer, that slice of the state stays undefined.
This could be a valid scenario. Consider for example a reducer that handles actions for an admin-section. Unless a user with elevated rights uses your application, you might not want to have anything in that state-slice.
Now to the point where I presume your question came from:
You don't want to return the initialState in the default case! Since every reducer gets called on every action-dispatch, the default case should return state.
The store doesn't know what reducer an action is meant for. It calls every reducer with the current state and the dispatched action and expects to get a (new) state back.
Whether a certain reducer should react to an action is up to you. Every other reducer that should not react to this action should return its unaltered state (via the default case). And of course you can have multiple reducers react to the same action!
In your case, if every reducer returned initialState in the default case, your whole app state would reset to its initial state, except for the slice handled by the reducer that actually reacts to the dispatched action.
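A minimal sketch of such a reducer, with an initial state supplied as a default parameter and a pass-through default case (the names are illustrative):
export interface CounterState { count: number; }
const initialState: CounterState = { count: 0 };

export function counterReducer(state: CounterState = initialState,
                               action: { type: string }): CounterState {
  switch (action.type) {
    case 'INCREMENT':
      return { count: state.count + 1 };
    default:
      // Any action this reducer does not handle (including @ngrx/store/init,
      // once the default parameter has supplied initialState) leaves the slice unchanged.
      return state;
  }
}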
TL;DR for your questions:
Yes, all reducer-functions get called on bootstrap (and on every other action dispatch)...
... and yes, that is when the store gets initialized with the initial state.
No, a reducer does not always have to have an initial state*
If a slice of state is initially undefined, it can still be changed by the reducer reacting to an action by returning a new state.
Every reducer not reacting to an action should return the current state via the default-case.
* of course this only makes sense when the rest of your application is able to handle an undefined state
Q:
char a[512] vs char b[512 + 1]
I see a lot of code using the following notation:
char a[512 + 1];
a[512] = '\0';
Is it not inefficient, memory-utilization wise? Assuming you are using a 32-bit machine, [512 + 1] would actually mean [512 + 4].
It might not be a big deal for server applications, but for embedded system programming it shall matter.
A:
char bla[512];
bla[sizeof bla - 1] = '\0';
is better in my opinion.
Q:
random subset of fixed length such that each group is present at least N times
I want to select 5 rows for each value in column1 of df such that the output has at least 1 row for each unique value in column2.
Also there should not be any duplicates in the output
Edit: There should not be duplicates in (column1, column3) pair:
i.e For each value in column1 all values in column3 should be unique
column1 = rep(c("a","b"), each = 12)
column2 = rep(c(1,2,3), each = 4)
column3 = c("x1","x2","x3","x4","x5","x3","x6","x7","x8","x1","x9","x5","x6","x2","x3","x4","x7","x5","x6","x1","x4","x1","x6","x9")
df = data.frame(column1, column2, column3)
Here is a valid solution
sample_output_1 = data.frame(column1 = rep(c("a","b"), each = 5),
column2 = c(1,1,2,2,3,1,1,2,2,3),
column3 = c("x1","x2","x5","x3","x8","x6","x2","x5","x1","x9"))
A:
Check this
foo = function(a_df){
inds = 1:NROW(a_df)
#Sample 5 indices along the rows of a_df
my_inds = sample(inds, 5)
#If subset of a_df based on my_inds has duplicates
#Or if 2nd column does not have all unique values
while(any(duplicated(a_df[my_inds, c(1, 3)])) &
!identical(sort(unique(a_df[my_inds, 2])), sort(unique(a_df[[2]])))){
#Count the number of duplicates or missing all values
n = sum(duplicated(a_df[my_inds, c(1, 3)]))
n = n + sum(!sort(unique(a_df[my_inds, 2])) %in% sort(unique(a_df[[2]])))
#Remove my_inds from inds
inds = inds[!inds %in% my_inds]
#Remove the n indices that create duplicates from my_nds
my_inds = my_inds[!duplicated(a_df[my_inds, c(1, 3)])]
#Sample n more from inds and add to my_inds
my_inds = sample(c(my_inds, sample(inds, n)))
}
return(a_df[my_inds,])
}
set.seed(42)
do.call(rbind, lapply(split(df, df$column1), function(a) foo(a_df = a)))
# column1 column2 column3
# a.11 a 3 x9
# a.12 a 3 x5
# a.3 a 1 x3
# a.8 a 2 x7
# a.6 a 2 x3
# b.19 b 2 x6
# b.21 b 3 x4
# b.14 b 1 x2
# b.18 b 2 x5
# b.23 b 3 x6
Q:
How to put an image inside a button that created in javascript?
I am developing a simple project using Bootstrap. I have this code that creates buttons inside a DataTable, and it is working fine. I tried to put an image (not a glyphicon) into the second button with the code below, but it's not working. The glyphicon in the first button works.
{ data: "ActionMenu", title: "Test", sClass: "alignCenter","mRender": function (data) {
return '<button type="button" class="btn btn-info btn-md action" id="checkId"><i class="fa fa-dollar fa-fw action"></i></button>'
+ ' <button type="button" class="btn btn-danger btn-md action" id="checkId2" src="Images/TestPicture.png"></button>';
}
A:
You can't give a src to a button element. Try to put an <img> element inside the button:
<button type="button" class="btn btn-danger btn-md action" id="checkId2">
<img src="Images/TestPicture.png"/>
</button>
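Applied to the column definition from the question, the mRender string would then look roughly like this (a sketch; the image path is the one from the question):
{ data: "ActionMenu", title: "Test", sClass: "alignCenter", "mRender": function (data) {
    return '<button type="button" class="btn btn-info btn-md action" id="checkId">'
         +   '<i class="fa fa-dollar fa-fw action"></i></button> '
         + '<button type="button" class="btn btn-danger btn-md action" id="checkId2">'
         +   '<img src="Images/TestPicture.png"/></button>';
}}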
Q:
Can I send/receive payments but prevent forwarding payments in Lightning Network?
Say I am using a very computationally constrained device (eg. a sensor) and it uses Lightning Network to send/receive payments. However, being computationally constrained it doesn't want to forward incoming payments that are actually meant for other nodes. The reasons could be that forwarding a payment will require it to forward the htlc, fulfill it as well as compute the onion for routing to the next node which might be computationally draining. However, sending an error would be just one operation after parsing the onion.
Every time the sensor node parses the onion and sees that the payment is not meant for it, it will send a temporary_node_failure or temporary_channel_failure error message. In situations where the lightning software uses some data analysis tools like autopilot, other nodes might completely eschew this sensor node out from forwarding the payments due to its high failure rate.
Is this something that can be implemented with a slight tweak to the current lightning implementations (like LND or c-lightning)? If the failure rate is too high, is there something like a banscore that is being kept by other nodes that might prevent this sensor node from receiving/sending payments in the future?
A:
Yes, there are basically two ways to avoid becoming a forwarding node:
Do not announce your channels, and keep them private
Reject any incoming HTLC that is not destined for you
The first is supported by the protocol itself and is a proactive measure against forwarding any payment that is not destined for you, while the latter is a reactive measure that allows you to decide on a per-HTLC basis whether you want to forward it or not.
The protocol allows channels to remain private and not be announced in the wider network:
Only the least-significant bit of channel_flags is currently defined: announce_channel. This indicates whether the initiator of the funding flow wishes to advertise this channel publicly to the network, as detailed within BOLT #7.
This means that the channel is not going to be included in the gossip, and nodes won't learn about the channel's existence. In order then to receive payments, which requires the sender to compute a route to you through that unannounced channel, you selectively tell the sender about the channel in the invoice using route hints, i.e., the r field in the invoice.
The second method mentioned above involves instrumenting the node such that it accepts HTLCs, but immediately rejects any HTLCs for which you are not the destination. This has several downsides, among them the fact that you are announcing channels that are fundamentally not operational for forwarding, and that you still have to process all HTLCs since you can't filter them out ahead of time. This corresponds to the scenario that Rene Pickhardt mentioned. The computational overhead consists of:
More messages to process, including the wire encryption/decryption, potentially waking your CPU up if you run on a low-powered device
Decrypting the onion, which is a really expensive operation since it decrypts/encrypts 2600 bytes of data by generating a pseudorandom stream. In addition the onion gets prepared for an eventual next hop.
Need to process the HTLC itself (DB lookups, ...)
Both methods are implemented in some implementations: the mobile version of eclair does not announce its channels by default, lnd is planning to implement a bias against (though not a complete exclusion) channels and nodes that have proven to be unreliable and c-lightning allows you to implement any forwarding policy you'd like as a plugin using the htlc_accepted hook. Furthermore it is trivial to modify lnd and c-lightning to make the announcement of channels configurable.
(Disclaimer: I am one of the spec authors and work on c-lightning)
A:
The TL;DR answer is: Yes it is totally possible, but as far as I know not implemented in any lightning node software at this time.
However I wish to elaborate a little bit. In your question you wrote:
The reasons could be that forwarding a payment will require it to forward the htlc, fulfill it as well as compute the onion for routing to the next node which might be computationally draining
When I accept a payment I have to go through all of these steps! I have to accept an incoming update_add_htlc message which contains an onion. Then I have to decrypt the onion and, if I have the preimage, release the payment_preimage to settle / fulfill the HTLC.
I can save a little bit on computing the next onion, but that is no more complex than decrypting the incoming onion, which I would have to do in any case. So the savings might be tiny here.
you also asked:
If the failure rate is too high, is there something like a banscore that is being kept by other nodes that might prevent this sensor node from receiving/sending payments in the future?
I believe Alex Bosworth from lnd announced that starting with the next major version (I guess that should be lnd 0.8) they want to start keeping track of which routing nodes are good and which are poor, and create an internal score which will be used in route computation. In that way they plan to move away from the cheapest routing fees towards metrics like reliability and uptime.
The BOLTs actually do not specify how pathfinding is supposed to be computed, so any implementation can do what ever they wish on this end.
One last thought. There are people working on hardware wallets for Lightning; one policy for those wallets is the other way around from what you ask: keep the private keys on the node to allow routing of payments, but if a new payment is supposed to be sent, use an air-gapped device that helps sign the necessary messages.
All that being said, it is totally possible for a node to implement only part of the protocol. However, sending and receiving are exactly the messages and parts of the protocol that are needed for routing anyway (a payment comes in by accepting an HTLC and goes out by offering an HTLC). It is actually the ability to send payments that currently makes maintaining a Lightning node expensive, since the node needs to participate in gossip (the most costly operation) in order to do the source-based routing required to initiate a payment.
If you just want to be a routing node you can totally opt out of gossip and just maintain your payment channels and state which comes at extremely low cost.
Q:
Leading backslash in path names in java
So, I am trying to code a regular Java application which reads the current revision from a file which is updated by ANT during compile time. When run on my dev machine (Eclipse 3.5.2 on Ubuntu 11.04) with either the OpenJDK or SunJDK, it throws a FileNotFoundException. Adding or removing a leading backslash seems to have no effect.
Any ideas on how I could solve this? I believe the fault lies in this line here:
in = new FileInputStream("data/build_info.properties");
Code- Updated
Transitioned to Java Properties
String revision = "";
Properties defaultProps = new Properties();
FileInputStream in;
try {
in = new FileInputStream("data/build_info.properties");
defaultProps.load(in);
in.close();
} catch (FileNotFoundException e) {
e.printStackTrace();
} catch (IOException e) {
e.printStackTrace();
}
revision = "Version: " + defaultProps.getProperty("build.major.number") + "." +
defaultProps.getProperty("build.minor.number") + " " +
"Revision: " + defaultProps.getProperty("build.revision.number");
ANT Jar Script
<target name="jar">
<antcall target="clean" />
<antcall target="compile" />
<jar destfile="${dir.dist}/${name.jar}" basedir="." includes="${dir.lib}/*" filesetmanifest="mergewithoutmain">
<manifest>
<attribute name="Main-Class" value="emp.main.EmpowerView" />
</manifest>
<fileset dir="${dir.build}" includes="**/*" excludes="META-INF/*.SF" />
<fileset dir="." includes="${dir.media}/*" />
<fileset dir="." includes="${dir.data}/*" />
</jar>
<chmod file="${dir.dist}/${name.jar}" perm="+x" />
</target>
A:
If you're trying to load a file inside of the Jar, then you need to use java.lang.Class.getResourceAsStream() to load the file. You can't point to a file that's inside the Jar with java.io.File. For an example of how to use it, see this answer:
https://stackoverflow.com/questions/4548791
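Adapted to the properties-loading code from the question, that would look roughly like this (a sketch: it assumes data/build_info.properties sits at the root of the jar, as the ANT target above suggests, and uses the EmpowerView class from the manifest only to resolve the resource; needs java.io.InputStream, java.io.IOException and java.util.Properties imported):
Properties defaultProps = new Properties();
// Load from the classpath (works inside the jar), not from the file system.
InputStream in = EmpowerView.class.getResourceAsStream("/data/build_info.properties");
if (in != null) {
    try {
        defaultProps.load(in);
        in.close();
    } catch (IOException e) {
        e.printStackTrace();
    }
}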
Q:
DAX - Calculating at the Line Level (SUMX) with Multiple Filters
What I'm trying to get DAX to do is:
Look across each row in a table of HR data.
Identify the start date of the employee ("employee a")
Sum up the number of other employees in the table with the following filters applied:
a. Successfully completed their assignment
b. Ended their assignment BEFORE the start date + 31
c. Ended their assignment AFTER the start date - 31 (which is to say within a month of employee a's start date)
d. Started before employee a (to not count employee a or anyone in their cohort in the count)
e. Has the same job title as employee a.
This is essentially asking the question in normal English "for each of my employees, how many other employees with the same job title ended their assignments successfully within a month of that employee starting?" and in DAX is basically just "how do I apply multiple filter criteria to a SUMX or COUNTAX measure/calculated column?"
The measure I've already tried is:
Contractors Available = COUNTAX(
'BAT VwRptMspAssignment',
CALCULATE(
DISTINCTCOUNT('BAT VwRptMspAssignment'[assignmentgk]),
FILTER(
FILTER(
FILTER(
FILTER(
FILTER(ALL('BAT VwRptMspAssignment'),
'BAT VwRptMspAssignment'[End.Date]<EARLIER('BAT VwRptMspAssignment'[Start.Date])+31),
'BAT VwRptMspAssignment'[End.Date]>EARLIER('BAT VwRptMspAssignment'[Start.Date])-31),
'BAT VwRptMspAssignment'[Start.Date]<EARLIER('BAT VwRptMspAssignment'[Start.Date])),
'BAT VwRptMspAssignment'[EoaReason]="Successful Completion"),
'BAT VwRptMspAssignment'[JobPostingTitle.1]=EARLIER('BAT VwRptMspAssignment'[JobPostingTitle.1]))
)
)
And the calculated column I tried was:
Contractors Available.1 = SUMX(
FILTER(
FILTER(
FILTER(
FILTER(
FILTER(
FILTER(ALL('BAT VwRptMspAssignment'),
'BAT VwRptMspAssignment'[customergk]=EARLIER('BAT VwRptMspAssignment'[customergk])),
'BAT VwRptMspAssignment'[JobPostingTitle.1]=EARLIER('BAT VwRptMspAssignment'[JobPostingTitle.1])),
'BAT VwRptMspAssignment'[End.Date]<EARLIER('BAT VwRptMspAssignment'[Start.Date])+31),
'BAT VwRptMspAssignment'[End.Date]>EARLIER('BAT VwRptMspAssignment'[Start.Date])-31),
'BAT VwRptMspAssignment'[Start.Date]<EARLIER('BAT VwRptMspAssignment'[Start.Date])),
'BAT VwRptMspAssignment'[EoaReason]="Successful Completion"),
'BAT VwRptMspAssignment'[FinishFlag])
but neither of these solutions have worked.
Does anyone have any idea why or what else I can try to accomplish this? An example of the data format, exported to Excel:
"Contractors Available.2" is the calculated column.
Note the 521 in the first line. If I apply all of these filters in Excel, it should be zero, since this job title is unique in the dataset. It says 107 "Technical Writer - Expert" rows should have ended within a month of 9/26/2017, but these are the only 3 technical writers in the dataset, and zero of the other two ended their assignments within a month of 9/30/2016:
A:
Try something like this for a calculated column:
Contractors Available.1 =
VAR StartDate = 'BAT VwRptMspAssignment'[Start.Date]
VAR JobTitle = 'BAT VwRptMspAssignment'[JobPostingTitle.1]
RETURN
COUNTROWS (
FILTER (
'BAT VwRptMspAssignment',
'BAT VwRptMspAssignment'[End.Date] < StartDate + 31
&& 'BAT VwRptMspAssignment'[End.Date] > StartDate - 31
&& 'BAT VwRptMspAssignment'[Start.Date] < StartDate
&& 'BAT VwRptMspAssignment'[EoaReason] = "Successful Completion"
&& 'BAT VwRptMspAssignment'[JobPostingTitle.1] = JobTitle
)
)
The EARLIER functions are not necessary because the variables keep the context where they were defined, which is the row context.
EDIT:
I tested my formula with the data you provided and it seems to work. I changed the [End.Date] in the second row, to get a result in the first row.
Q:
How to make derived class of a memory-aligned class lose alignment
Suppose one class is declared as having a specific alignment. And that I cannot modify that base class.
#define ATTRIBUTE_ALIGNED16(a) __declspec(align(16)) a
ATTRIBUTE_ALIGNED16(class) btVector3
{};
class Vector3 : public btVector3
{};
Is it possible to make the derived class Vector3 lose that alignment ?
Under MSVC alignment is quite constraining, as it prevents passing by value. My derived class doesn't particularly need it, and when writing templates it is convenient to have classes that can be passed by value.
A:
Be careful - if btVector3 is coming from the Bullet Physics Library (where bt is the prefix on their math functions) the btVector3 is aligned to 16 byte boundaries because of the SIMD math functions. Furthermore, the btVector3 is defined as a union of 4 floats and a 128 bit type, which requires 16 byte alignment in most environments. See http://bulletphysics.org/Bullet/BulletFull/btVector3_8h_source.html
Trying to use the math library without the alignment requirements will cause some operations to fail and your methods to act in an undefined way. Better to live with the alignment, or find a different library.
Q:
Are the tags [product] and [rating] appropriate for this question?
I think the tags added to this question (How to move Review stars under add to cart button in WooCommerce) are irrelevant, and I'm calling for a discussion to reach a consensus for a rollback, or to learn from the community why it should be the other way.
To start with, the post is off-topic because it's not on point.
Secondly, it purely has to do with a UI alteration to a WordPress/WooCommerce-compatible plugin named Dokan; there is clearly nothing about rating or product, and only marginally about php, since WordPress is a PHP-based content management system/framework.
Edit history available here.
A:
I don't see how either of those tags are useful to describe the question, and how experts on product or rating could use these tags to find questions to answer.
To be honest, I find it hard to see these tags as useful on any question, but that's not what's under discussion.
I performed an edit to remove the tags. Maybe I acted on haste. If as a result of this discussion there is consensus on adding them back, the post can be edited again.
Regarding topicality: Meh. The question may be on topic. Probably too poor, although it could be useful for future visitors. Certainly not a gem to lose sleep over.
That the question does not include an MCVE is not really an appropriate concern, since it's a "how-to" question, not a debugging question.
Q:
Python won't execute any elif statements past the 1st one
I'm in the process of writing a rudimentary Connect 4 game, with the board built into the command line, etc. My problem is that I can't get the code to execute past the 2nd elif statement. I set it up so that if a certain cell in the grid does not have an underscore, it should proceed to place the piece in the next row. However, the following move always only replaces whatever piece is in the cell in row 2. I've tried starting from rows other than the bottom 2 rows, just to try to troubleshoot, but it never gets past the 1st elif statement. Can anyone tell me where I'm going wrong with my elifs?
board = []
for x in range(0, 6):
board.append(["_"] * 7)
def print_board(board):
for i in range(1,7):
print(i, end=" ")
print(7)
for row in board:
print("|".join(row))
print_board(board)
for turn in range(42):
print('Turn', turn+1)
if turn % 2 == 0:
player1 = int(input('Player 1, choose your column: '))
while player1 not in range(1,8):
player1 = int(input('You must enter a column number from 1-7: '))
if board[5][player1-1] == '_':
board[5][player1-1] = 'O'
elif board[5][player1-1] != '_':
board[4][player1-1] = 'O'
elif board[4][player1-1] != '_':
board[3][player1-1] = 'O'
elif board[3][player1-1] != '_':
board[2][player1-1] = 'O'
elif board[2][player1-1] != '_':
board[1][player1-1] = 'O'
elif board[1][player1-1] != '_':
board[0][player1-1] = 'O'
print_board(board)
elif turn % 2 != 0:
player2 = int(input('Player 2, choose your column: '))
while player2 not in range(1,8):
player2 = int(input('You must enter a column number from 1-7: '))
if board[5][player2-1] == '_':
board[5][player2-1] = 'X'
elif board[5][player2-1] != '_':
board[4][player2-1] = 'X'
elif board[4][player2-1] != '_':
board[3][player2-1] = 'X'
elif board[3][player2-1] != '_':
board[2][player2-1] = 'X'
elif board[2][player2-1] != '_':
board[1][player2-1] = 'X'
elif board[1][player2-1] != '_':
board[0][player2-1] = 'X'
print_board(board)
A:
You should test the cells using ==, not !=. Also, you should change the same cell you test on, not the one below it:
board = []
for x in range(0, 6):
board.append(["_"] * 7)
def print_board(board):
for i in range(1,7):
print(i, end=" ")
print(7)
for row in board:
print("|".join(row))
print_board(board)
for turn in range(42):
print('Turn', turn+1)
if turn % 2 == 0:
player1 = int(input('Player 1, choose your column: '))
while player1 not in range(1,8):
player1 = int(input('You must enter a column number from 1-7: '))
if board[5][player1-1] == '_':
board[5][player1-1] = 'O'
elif board[4][player1-1] == '_':
board[4][player1-1] = 'O'
elif board[3][player1-1] == '_':
board[3][player1-1] = 'O'
elif board[2][player1-1] == '_':
board[2][player1-1] = 'O'
elif board[1][player1-1] == '_':
board[1][player1-1] = 'O'
elif board[0][player1-1] == '_':
board[0][player1-1] = 'O'
print_board(board)
elif turn % 2 != 0:
player2 = int(input('Player 2, choose your column: '))
while player2 not in range(1,8):
player2 = int(input('You must enter a column number from 1-7: '))
if board[5][player2-1] == '_':
board[5][player2-1] = 'X'
elif board[4][player2-1] == '_':
board[4][player2-1] = 'X'
elif board[3][player2-1] == '_':
board[3][player2-1] = 'X'
elif board[2][player2-1] == '_':
board[2][player2-1] = 'X'
elif board[1][player2-1] == '_':
board[1][player2-1] = 'X'
elif board[0][player2-1] == '_':
board[0][player2-1] = 'X'
print_board(board)
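As a side note, the repeated elif chains can be collapsed into a small helper that scans from the bottom row upward (a sketch; drop_piece is a name introduced here, not part of the original answer):
def drop_piece(board, column, piece):
    # Place `piece` in the lowest empty cell of `column` (1-based), as in the game above.
    for row in range(len(board) - 1, -1, -1):
        if board[row][column - 1] == '_':
            board[row][column - 1] = piece
            return True
    return False  # the column is full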
Q:
What is causing this function to work for some amount of points, but not more?
Here's what I'm doing. I calculate a bunch of xy points scattered within some range, and then I have some functions that calculate the "boundary" of those points. If you're familiar with this area, it's not a convex hull, it's concave, so there's no unique solution until you tell it how much you want it to "squeeze" into the points (just a parameter I choose).
Here are the functions I'm using (I didn't make them; I don't understand how they work more than superficially):
GetBoundaryMesh[points_, alphaparameter_] := Module[{ashape, bmesh},
Needs["NDSolve`FEM`"];
ashape = alphaShapes2DC[points, alphaparameter];
bmesh = NDSolve`FEM`ToBoundaryMesh@ashape;
Return@MeshPrimitives[MeshRegion@bmesh, 1];
]
circumRadius2D =
Compile[{{v, _Real, 2}},
With[{a = Norm[v[[1]] - v[[2]]], b = Norm[v[[1]] - v[[3]]],
c = Norm[v[[2]] - v[[3]]]}, (a b c)/
Sqrt[(a + b + c) (b + c - a) (c + a - b) (a + b - c)]],
RuntimeOptions -> "Speed", RuntimeAttributes -> {Listable},
Parallelization -> True];
alphaShapes2DC[points_, crit_] :=
Module[{alphacriteria, del = Quiet@DelaunayMesh@points, tris,
tricoords, triradii, getExternalFaces},
alphacriteria[triangle_, radius_, rmax_] :=
Pick[triangle, UnitStep@Subtract[rmax, radius], 1];
getExternalFaces[facets_] := MeshRegion[points, facets];
If[Head[del] === EmptyRegion, del, tris = MeshCells[del, 2];
tricoords = MeshPrimitives[del, 2][[All, 1]];
triradii = circumRadius2D@tricoords;
getExternalFaces@alphacriteria[tris, triradii, crit]]]
Here's an example of it working correctly. There are 441 points here:
pts = {{0.31320000000000003`, 0.33030000000000004`}, {0.3178`,
0.3346`}, {0.3239`, 0.3397`}, {0.3309`,
0.34450000000000003`}, {0.3375`, 0.3471`}, {0.3411`,
0.3453`}, {0.3385`, 0.338`}, {0.3275`, 0.3257`}, {0.3093`,
0.3114`}, {0.28900000000000003`, 0.29960000000000003`}, {0.2733`,
0.2934`}, {0.26580000000000004`, 0.2931`}, {0.2656`,
0.29660000000000003`}, {0.26980000000000004`, 0.3018`}, {0.276`,
0.3073`}, {0.2826`, 0.3123`}, {0.2888`,
0.31670000000000004`}, {0.29450000000000004`, 0.3206`}, {0.2994`,
0.32380000000000003`}, {0.3037`, 0.3266`}, {0.3075`,
0.3289`}, {0.31420000000000003`, 0.3311`}, {0.34850000000000003`,
0.36050000000000004`}, {0.38380000000000003`,
0.3759`}, {0.38880000000000003`, 0.3552`}, {0.3257`,
0.2954`}, {0.24750000000000003`, 0.25520000000000004`}, {0.2281`,
0.2607`}, {0.24070000000000003`, 0.2787`}, {0.2577`,
0.2937`}, {0.27190000000000003`, 0.3045`}, {0.2828`,
0.3121`}, {0.29100000000000004`, 0.31770000000000004`}, {0.2974`,
0.3219`}, {0.3024`, 0.32530000000000003`}, {0.3065`,
0.3281`}, {0.31010000000000004`, 0.3305`}, {0.31320000000000003`,
0.3326`}, {0.316`, 0.33440000000000003`}, {0.3185`,
0.336`}, {0.32080000000000003`, 0.3372`}, {0.3229`,
0.3382`}, {0.31620000000000004`, 0.33280000000000004`}, {0.4194`,
0.4033`}, {0.5552`, 0.4091`}, {0.2089`,
0.1464`}, {0.19310000000000002`, 0.2359`}, {0.2419`,
0.2826`}, {0.26880000000000004`, 0.30260000000000004`}, {0.2841`,
0.313`}, {0.2937`, 0.31920000000000004`}, {0.30010000000000003`,
0.32330000000000003`}, {0.3047`, 0.3262`}, {0.30820000000000003`,
0.3286`}, {0.3111`, 0.3306`}, {0.3136`,
0.33240000000000003`}, {0.3159`, 0.3341`}, {0.31820000000000004`,
0.3357`}, {0.3204`, 0.3373`}, {0.3225`, 0.3387`}, {0.3247`,
0.33990000000000004`}, {0.3267`, 0.3408`}, {0.3285`,
0.34140000000000004`}, {0.31520000000000004`, 0.3302`}, {0.3831`,
0.35100000000000003`}, {0.2886`, 0.277`}, {0.25320000000000004`,
0.2853`}, {0.2747`, 0.306`}, {0.2893`, 0.3164`}, {0.298`,
0.3219`}, {0.3034`, 0.3252`}, {0.307`,
0.3274`}, {0.30970000000000003`, 0.3291`}, {0.3118`,
0.3305`}, {0.31370000000000003`, 0.33190000000000003`}, {0.3155`,
0.33330000000000004`}, {0.3173`, 0.3347`}, {0.3191`,
0.3362`}, {0.3211`, 0.3377`}, {0.32330000000000003`,
0.3392`}, {0.3255`, 0.3407`}, {0.3277`,
0.34190000000000004`}, {0.32980000000000004`, 0.3428`}, {0.3317`,
0.3432`}, {0.30770000000000003`, 0.32370000000000004`}, {0.2872`,
0.29300000000000004`}, {0.2743`, 0.2908`}, {0.2831`,
0.3039`}, {0.29300000000000004`, 0.31370000000000003`}, {0.2997`,
0.3195`}, {0.30410000000000004`, 0.323`}, {0.307`,
0.3252`}, {0.30910000000000004`, 0.3269`}, {0.3108`,
0.32830000000000004`}, {0.3124`, 0.3296`}, {0.314`,
0.331`}, {0.3156`, 0.3325`}, {0.3175`, 0.3342`}, {0.3196`,
0.3361`}, {0.3219`, 0.338`}, {0.3244`, 0.3401`}, {0.3271`,
0.342`}, {0.3299`, 0.3436`}, {0.3325`, 0.3447`}, {0.3347`,
0.3451`}, {0.3054`, 0.3244`}, {0.2782`,
0.2944`}, {0.27540000000000003`, 0.2903`}, {0.2812`,
0.2954`}, {0.2876`, 0.3015`}, {0.2928`,
0.30670000000000003`}, {0.2967`, 0.31070000000000003`}, {0.2997`,
0.3138`}, {0.3022`, 0.3165`}, {0.3045`,
0.3191`}, {0.30670000000000003`, 0.3216`}, {0.309`,
0.3244`}, {0.3116`, 0.3275`}, {0.3146`, 0.331`}, {0.3181`,
0.33490000000000003`}, {0.3219`, 0.33890000000000003`}, {0.3261`,
0.343`}, {0.3305`, 0.3467`}, {0.33480000000000004`,
0.3496`}, {0.3386`, 0.3514`}, {0.3417`,
0.3517`}, {0.30810000000000004`, 0.3274`}, {0.2867`,
0.3089`}, {0.2805`, 0.3014`}, {0.2806`,
0.3`}, {0.28300000000000003`, 0.3018`}, {0.2863`,
0.3054`}, {0.2901`, 0.31020000000000003`}, {0.2943`,
0.316`}, {0.2989`, 0.32270000000000004`}, {0.304`,
0.33030000000000004`}, {0.30960000000000004`, 0.3385`}, {0.3158`,
0.3472`}, {0.3225`, 0.3559`}, {0.3295`, 0.3639`}, {0.3366`,
0.3708`}, {0.34340000000000004`,
0.37570000000000003`}, {0.34950000000000003`, 0.3783`}, {0.3546`,
0.378`}, {0.3582`, 0.3749`}, {0.36010000000000003`,
0.36920000000000003`}, {0.3598`, 0.3613`}, {0.3113`,
0.33030000000000004`}, {0.30820000000000003`, 0.3408`}, {0.3116`,
0.3553`}, {0.3183`, 0.3709`}, {0.32630000000000003`,
0.3855`}, {0.3345`, 0.3976`}, {0.34240000000000004`,
0.4067`}, {0.34940000000000004`, 0.4123`}, {0.3556`,
0.4146`}, {0.3608`, 0.4138`}, {0.36510000000000004`,
0.4103`}, {0.36860000000000004`, 0.4046`}, {0.3713`,
0.397`}, {0.37320000000000003`, 0.3879`}, {0.3743`,
0.37770000000000004`}, {0.3743`, 0.3667`}, {0.37320000000000003`,
0.3553`}, {0.3705`, 0.3438`}, {0.3662`,
0.3326`}, {0.36010000000000003`, 0.32220000000000004`}, {0.3519`,
0.313`}, {0.3149`, 0.33380000000000004`}, {0.3548`,
0.3986`}, {0.391`, 0.44480000000000003`}, {0.4143`,
0.4617`}, {0.4249`, 0.45580000000000004`}, {0.4268`,
0.4384`}, {0.42410000000000003`, 0.4172`}, {0.4192`,
0.39630000000000004`}, {0.4131`, 0.3771`}, {0.40640000000000004`,
0.36010000000000003`}, {0.39930000000000004`,
0.34540000000000004`}, {0.3917`, 0.33280000000000004`}, {0.3835`,
0.3221`}, {0.37460000000000004`, 0.3133`}, {0.3649`,
0.3064`}, {0.3543`, 0.3012`}, {0.343`, 0.2979`}, {0.3311`,
0.2962`}, {0.3189`, 0.2963`}, {0.3068`,
0.2979`}, {0.29560000000000003`, 0.3008`}, {0.3195`,
0.33840000000000003`}, {0.42900000000000005`, 0.4531`}, {0.4974`,
0.47190000000000004`}, {0.511`, 0.431`}, {0.49310000000000004`,
0.3808`}, {0.4631`, 0.3407`}, {0.4305`,
0.31370000000000003`}, {0.3995`, 0.2973`}, {0.3718`,
0.2886`}, {0.3477`, 0.28500000000000003`}, {0.32730000000000004`,
0.2849`}, {0.3105`, 0.2871`}, {0.2969`,
0.2908`}, {0.28650000000000003`, 0.2953`}, {0.27890000000000004`,
0.3005`}, {0.27390000000000003`, 0.3059`}, {0.27140000000000003`,
0.3113`}, {0.271`, 0.31670000000000004`}, {0.2723`,
0.32170000000000004`}, {0.2752`, 0.3264`}, {0.2793`,
0.3304`}, {0.32230000000000003`,
0.33740000000000003`}, {0.46340000000000003`, 0.4056`}, {0.524`,
0.3577`}, {0.4824`, 0.2838`}, {0.3971`,
0.2439`}, {0.32430000000000003`, 0.2388`}, {0.2798`,
0.2497`}, {0.2579`, 0.26430000000000003`}, {0.25`,
0.2777`}, {0.24960000000000002`, 0.2887`}, {0.25320000000000004`,
0.29760000000000003`}, {0.2585`, 0.3048`}, {0.2645`,
0.31070000000000003`}, {0.27080000000000004`, 0.3158`}, {0.277`,
0.3204`}, {0.28300000000000003`, 0.3245`}, {0.2889`,
0.32830000000000004`}, {0.2947`, 0.33180000000000004`}, {0.3002`,
0.33490000000000003`}, {0.3055`, 0.3376`}, {0.3105`,
0.3398`}, {0.3184`, 0.32680000000000003`}, {0.374`,
0.2702`}, {0.3296`, 0.18130000000000002`}, {0.2379`,
0.1565`}, {0.1967`, 0.1922`}, {0.2003`, 0.2328`}, {0.2175`,
0.26130000000000003`}, {0.2351`, 0.2798`}, {0.2499`,
0.2921`}, {0.2617`, 0.3007`}, {0.2712`,
0.30710000000000004`}, {0.279`, 0.3123`}, {0.2856`,
0.3168`}, {0.29150000000000004`, 0.3209`}, {0.2968`,
0.3249`}, {0.3019`, 0.32880000000000004`}, {0.3068`,
0.3326`}, {0.3116`, 0.3362`}, {0.31620000000000004`,
0.33940000000000003`}, {0.32070000000000004`, 0.3421`}, {0.3247`,
0.3441`}, {0.31`, 0.319`}, {0.25880000000000003`,
0.2124`}, {0.189`, 0.1651`}, {0.1733`, 0.183`}, {0.1928`,
0.21710000000000002`}, {0.2175`, 0.2447`}, {0.2379`,
0.2641`}, {0.2533`, 0.27790000000000004`}, {0.265`,
0.2883`}, {0.2741`, 0.29660000000000003`}, {0.2816`,
0.3037`}, {0.2881`, 0.3103`}, {0.29400000000000004`,
0.3166`}, {0.29960000000000003`,
0.32280000000000003`}, {0.30510000000000004`, 0.3289`}, {0.3106`,
0.33480000000000004`}, {0.3161`, 0.34040000000000004`}, {0.3215`,
0.3453`}, {0.3265`, 0.3493`}, {0.3312`, 0.3521`}, {0.3351`,
0.3534`}, {0.303`, 0.3201`}, {0.22460000000000002`,
0.24500000000000002`}, {0.19490000000000002`,
0.22660000000000002`}, {0.2034`, 0.2373`}, {0.22210000000000002`,
0.2544`}, {0.2399`, 0.2707`}, {0.25470000000000004`,
0.2848`}, {0.26680000000000004`,
0.29710000000000003`}, {0.27690000000000003`, 0.308`}, {0.2857`,
0.318`}, {0.29350000000000004`, 0.32730000000000004`}, {0.3009`,
0.336`}, {0.3078`, 0.3441`}, {0.3145`,
0.35150000000000003`}, {0.3209`, 0.3579`}, {0.327`,
0.36310000000000003`}, {0.3326`, 0.3667`}, {0.3376`,
0.36860000000000004`}, {0.3418`, 0.3687`}, {0.34500000000000003`,
0.3668`}, {0.34700000000000003`, 0.3633`}, {0.3024`,
0.326`}, {0.2494`, 0.31270000000000003`}, {0.24020000000000002`,
0.32170000000000004`}, {0.2508`, 0.337`}, {0.266`,
0.3513`}, {0.28040000000000004`, 0.3629`}, {0.2927`,
0.3718`}, {0.3028`, 0.3784`}, {0.3113`, 0.3831`}, {0.3184`,
0.38630000000000003`}, {0.3245`,
0.38820000000000005`}, {0.32980000000000004`,
0.38880000000000003`}, {0.3345`, 0.38830000000000003`}, {0.3386`,
0.38670000000000004`}, {0.3421`, 0.3839`}, {0.3452`,
0.3801`}, {0.3477`, 0.37520000000000003`}, {0.3496`,
0.3695`}, {0.3508`, 0.3629`}, {0.3512`, 0.3558`}, {0.3506`,
0.3482`}, {0.30770000000000003`, 0.33290000000000003`}, {0.3049`,
0.3995`}, {0.3174`, 0.44830000000000003`}, {0.3307`,
0.4682`}, {0.33990000000000004`, 0.46780000000000005`}, {0.3452`,
0.45780000000000004`}, {0.3481`,
0.44470000000000004`}, {0.34950000000000003`, 0.4313`}, {0.3503`,
0.4189`}, {0.3509`, 0.4077`}, {0.3514`, 0.3976`}, {0.3519`,
0.3884`}, {0.35250000000000004`, 0.3799`}, {0.3531`,
0.372`}, {0.3537`, 0.3644`}, {0.3541`,
0.35710000000000003`}, {0.3543`,
0.35000000000000003`}, {0.35400000000000004`, 0.3431`}, {0.3531`,
0.3365`}, {0.35150000000000003`, 0.3301`}, {0.3488`,
0.3242`}, {0.31470000000000004`, 0.3392`}, {0.3754`,
0.4692`}, {0.40750000000000003`, 0.5152`}, {0.41300000000000003`,
0.5028`}, {0.4072`, 0.4727`}, {0.39880000000000004`,
0.443`}, {0.3909`, 0.4181`}, {0.3841`, 0.398`}, {0.3785`,
0.382`}, {0.3739`, 0.369`}, {0.37020000000000003`,
0.3582`}, {0.36710000000000004`, 0.3492`}, {0.3644`,
0.3415`}, {0.3619`, 0.33490000000000003`}, {0.35950000000000004`,
0.3291`}, {0.35710000000000003`, 0.3242`}, {0.3543`,
0.32`}, {0.3512`, 0.3165`}, {0.34740000000000004`,
0.31370000000000003`}, {0.34290000000000004`, 0.3116`}, {0.3376`,
0.3103`}, {0.31970000000000004`, 0.3402`}, {0.4184`,
0.44780000000000003`}, {0.456`, 0.4526`}, {0.45530000000000004`,
0.4208`}, {0.44160000000000005`, 0.3881`}, {0.4258`,
0.3627`}, {0.41100000000000003`, 0.3442`}, {0.3982`,
0.331`}, {0.38720000000000004`, 0.3215`}, {0.37770000000000004`,
0.31470000000000004`}, {0.3695`, 0.3099`}, {0.3623`,
0.3065`}, {0.3558`, 0.3044`}, {0.34990000000000004`,
0.3033`}, {0.3443`, 0.30310000000000004`}, {0.33890000000000003`,
0.3037`}, {0.3336`, 0.305`}, {0.32830000000000004`,
0.307`}, {0.3229`, 0.30960000000000004`}, {0.3173`,
0.31270000000000003`}, {0.31170000000000003`, 0.3161`}, {0.3205`,
0.3335`}, {0.4072`, 0.3511`}, {0.43910000000000005`,
0.3247`}, {0.4358`, 0.29710000000000003`}, {0.4183`,
0.2798`}, {0.3976`, 0.2716`}, {0.3785`,
0.2691`}, {0.36200000000000004`, 0.27`}, {0.3482`,
0.2726`}, {0.33690000000000003`, 0.2762`}, {0.3274`,
0.2803`}, {0.3196`, 0.2848`}, {0.3131`,
0.2897`}, {0.30770000000000003`, 0.2949`}, {0.3032`,
0.3005`}, {0.29960000000000003`, 0.3063`}, {0.2967`,
0.3124`}, {0.29460000000000003`, 0.3185`}, {0.2931`,
0.3245`}, {0.2922`, 0.33`}, {0.29200000000000004`,
0.33490000000000003`}, {0.318`,
0.32480000000000003`}, {0.36210000000000003`,
0.26230000000000003`}, {0.3655`, 0.2159`}, {0.34750000000000003`,
0.19770000000000001`}, {0.3246`, 0.2001`}, {0.3048`,
0.2122`}, {0.2903`, 0.22740000000000002`}, {0.2806`,
0.2426`}, {0.2746`, 0.2565`}, {0.2713`, 0.2691`}, {0.27`,
0.28040000000000004`}, {0.2702`, 0.2908`}, {0.2716`,
0.3004`}, {0.27390000000000003`, 0.3095`}, {0.27690000000000003`,
0.3181`}, {0.2806`, 0.3261`}, {0.2848`,
0.33340000000000003`}, {0.2894`, 0.33990000000000004`}, {0.2942`,
0.3452`}, {0.29910000000000003`, 0.3493`}, {0.3039`,
0.3519`}, {0.3138`, 0.31980000000000003`}, {0.3068`,
0.2265`}, {0.2746`, 0.18100000000000002`}, {0.24350000000000002`,
0.1777`}, {0.226`, 0.197`}, {0.2212`,
0.22260000000000002`}, {0.22440000000000002`,
0.24700000000000003`}, {0.2316`, 0.26780000000000004`}, {0.2404`,
0.2851`}, {0.2495`, 0.29960000000000003`}, {0.2584`,
0.3119`}, {0.267`, 0.3224`}, {0.2752`, 0.3317`}, {0.2831`,
0.3397`}, {0.2906`, 0.3467`}, {0.2977`,
0.35250000000000004`}, {0.3045`,
0.35700000000000004`}, {0.31070000000000003`,
0.36010000000000003`}, {0.3164`, 0.3617`}, {0.3214`,
0.3617`}, {0.3257`, 0.3603`}};
bdrymesh = GetBoundaryMesh[pts, .04];
eplg = {{PointSize[.01], Point[pts]}, bdrymesh};
ListPlot[{}, Epilog -> eplg, PlotRange -> {{0, .7}, {0, .6}}]
This produces this image, which is what I want:
However, if I do the same exact thing, but with 676 points, it doesn't work. Code is here because I've apparently run out of characters in this window (is there a better way to do this?).
It still plots the points, but can't plot the boundary because it wasn't calculated:
There doesn't appear to be anything crazy about those points. The errors it gives are:
MeshPrimitives is not a Graphics primitive or directive.
I'm just not really sure where to begin with diagnosing this because it's working for nearly the exact same input. My best guess is that the computation increases really quickly with input size, and something is basically crapping out when the number is increased too much.
Does anyone know what I could try to fix or diagnose it?
EDIT: Okay, I've gone a little deeper to find the location of the problem, but not what's causing it exactly. I basically manually went through the functions, outside of the functions, step by step, at each step evaluating the results that came from using one set of pts vs the other (I'm calling the first set of (working) points pts1, the second (non-working) set pts2, from the two examples above).
dm1 = DelaunayMesh@pts1
dm2 = DelaunayMesh@pts2
Head@DelaunayMesh@pts1
Head@DelaunayMesh@pts2
t1 = MeshCells[dm1, 2];
t2 = MeshCells[dm2, 2];
tc1 = MeshPrimitives[dm1, 2][[All, 1]];
tc2 = MeshPrimitives[dm2, 2][[All, 1]];
Dimensions@tc1
Dimensions@tc2
tr1 = circumRadius2D@tc1;
tr2 = circumRadius2D@tc2;
Length@tr1
Length@tr2
alphacriteria[triangle_, radius_, rmax_] :=
Pick[triangle, UnitStep@Subtract[rmax, radius], 1];
ac1 = alphacriteria[t1, tr1, .04]
ac2 = alphacriteria[t2, tr2, .04]
Dimensions@ac1
Dimensions@ac2
They are behaving the same up until this point. At this point ac1 and ac2 are both lists of Polygon[{p1,p2,p3}] where those p's are points from pts1 or pts2, respectively, I believe.
Here's where it stops working. When getExternalFaces[] is called (really just MeshRegion), the first one evaluates to a mesh that appears graphically on my screen, while the second one appears as a MeshRegion[] expression. This is where they diverge so must be the source of the problem, but I have no idea why this happens.
getExternalFaces[facets_, points_] := MeshRegion[points, facets];
ef1 = getExternalFaces[ac1, pts1]
ef2 = getExternalFaces[ac2, pts2]
Any ideas as to why this could be happening? Thanks.
A:
Well, I feel stupid, but at least I figured it out.
The parameters passed to MeshRegion are MeshRegion[{points},{meshcells}], where the points are my xy points and the meshcells are Polygons that refer to those points by their indices in the points list.
On a whim I tried taking a MeshRegion of a subset of the meshcells, because the points all seemed like reasonable points and the Polygons themselves looked good, and it worked:
MeshRegion[pts2, ac2[[1 ;; 500]]]
So, I poked around, changing those indices, and eventually found the trouble. Almost all index bounds work, except for these:
ac2[[771 ;; 773]]
which cause it to do the mess it was before. This is because, if we search for duplicates:
Select[Split@Sort@pts2, Length@# > 1 &]
{{{0.3219, 0.3382}, {0.3219, 0.3382}}}
Position[pts2, {0.3219`, 0.3382`}]
{{75}, {124}}
Two points are the same. And:
ac2[[771 ;; 773]]
{Polygon[{124, 312, 100}], Polygon[{387, 312, 124}],
Polygon[{75, 175, 312}]}
They all have index 312, and all either have index 75 or 124, so they all share two points.
To be honest I don't know why this is a problem, but it's an easy fix, just delete duplicates before I pass it to this function.
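For reference, a minimal sketch of that cleanup (assuming the duplicated list is pts2 as above):
pts2clean = DeleteDuplicates[pts2];
bdrymesh = GetBoundaryMesh[pts2clean, .04];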
All fixed!
|
{
"pile_set_name": "StackExchange"
}
|
Q:
Reducing Lion OS Footprint by removing unnecessary files/folders
I am trying to downsize my iMac 640Gb disk to make it fit onto a 120Gb (113Gb usable) SSD. The total size of the files in use is currently approximately 340Gb, of which my home folder accounts for 265Gb (which I intend to not put on the SSD, or at least not all of it). Very rough calculations suggest that my OS and application files are taking up roughly 75Gb, which puts me approximately 2/3rds full - not bad.
But, I would like to reduce this a little further if possible, to ensure that as and when I continue to use the computer (and I have a number of large apps that I do not currently have installed, including music production apps with a lot of data) I am not worried about running out of space on the boot drive.
To this end I already own XSlimmer which I will use periodically to remove PowerPC and language variants from my apps, but I was wondering if there are any areas of the Base OS that I can safely remove to save further space? I am thinking about things like OS localisation, the location of the included screensavers and wallpapers, the voice files that are used for text-to-speech etc.
Is there a significant amount of such data that can be easily removed to save a few extra Gb on the boot disk?
A:
We get a lot of questions about this topic here. You can search for earlier similar questions and answers.
Backup first
Make a full backup of everything, a complete disk image, before you go on a spree to delete resources from your Mac.
Deleting More Unused Human Language Resources
Monolingual, which is free, can delete unwanted language support files in the /System/Library/ and /Library/ folders itself, whereas XSlimmer (which I also use) is only set up to delete language support files in the Applications folder.
iPhoto
You can "thin out" iPhoto by removing its voluminous printing templates, but if you delete them, you won't be able to print anything from iPhoto at all.
Right-click on iPhoto in /Applications/iPhoto/ and select "Show Package Contents". You'll discover several hundreds of megabytes of files in /iPhoto/Contents/Resources/Themes/. You can actually delete these (authentication required) but it will change the behavior of the iPhoto app.
Speech synthesis voices
Removing system support files from the /System/Library/ folder is dangerous. The only files I know of that you can safely delete are the speech synthesis voices in /System/Library/Speech/Voices/. You should leave one voice in there should you ever need that feature.
Fonts
You can save several dozen megabytes by deleting certain Asian fonts if you don't need them. Don't delete system fonts directly in the Finder. Rather, do it through the Apple Font Book application, which will prevent you from deleting the "reserved" system fonts that Mac OS X expects to see when it boots up, but permit you to delete "non-essential" fonts.
Screen savers and desktop pictures
Screen savers are in /System/Library/Screen Savers/.
/Library/Desktop Pictures/ has a couple of hundreds of megabytes of files you don't need.
Dictionaries
Mac OS X has a Japanese dictionary and thesaurus, several hundred megabytes in size, in /Library/Dictionaries/. You can safely delete these if you will never need them.
GarageBand and iDVD files
If you do not use GarageBand or the older iLife program iDVD, you can save many tens of gigabytes by deleting their applications but especially their support files in the /Library/ directory.
With regard to GarageBand, depending on your installation, several gigabytes of data can be removed from two places:
/Library/Application Support/GarageBand/
/Library/Audio/Apple Loops/Apple/Apple Loops for GarageBand/
Printer drivers
Depending on your installation, you may have several gigabytes of printer drivers for printers that you have never actually used. If you are willing to take the trouble, you can delete everything in /Library/Printers/. The next time you turn on one of your printers and try to print to it, Mac OS X Lion will prompt you to download and install the driver needed for that printer alone.
Utilities to help you find files to delete
There are several utilities to list all the files on your hard drive and sort them by file size in various types of charts and graphs. These include: OmniDiskSweeper, which is free; WhatSize, a commercial app; and DaisyDisk. All these are useful not only for looking at system files but also for examining your Documents and user data. You'll find old files that you don't need and can archive or delete, saving further disk space.
Just remember
Just remember that if you do not know what you are doing, you might damage your system and then the only remedy would be to do a complete re-installation of your OS, which would put you right back where you started.
A:
/var/vm/sleepimage can take up the same amount of disk space as the amount of RAM your Mac has depending on the safe sleep mode.
~/Library/Caches/com.apple.Safari/Webpage Previews/ is at most about 1GB. If you don't need Top Sites or the cover flow views, you can tell Safari to not save the thumbnails with defaults write com.apple.Safari DebugSnapshotsUpdatePolicy -int 2.
/private/var/folders/ might contain cache folders for applications that have already been removed or partially downloaded documentation files. You can sort the folders by size with du -sm /private/var/folders/*/*/*/*/ | sort -rn.
The installers for audio plugins often copy VST versions to /Library/Audio/Plug-Ins/VST/ or DPM versions to /Library/Application Support/Digidesign/.
If you've installed Xcode just to use Homebrew or some shell utilities, you might remove it and install the Command Line Tools for Xcode package instead.
The CJK fonts in /Library/Fonts/ take up about 500 MB of disk space. The System library already contains the most common Japanese and Chinese fonts.
~/Library/Autosave Information/ can contain old unsaved documents that haven't been deleted properly.
A:
If you are going to use an SSD as your main disk, you can set the hibernatemode to 0 and remove the sleepimage file, which has the same size as your RAM. This has two advantages: no disk writing on the SSD, and no space lost on the SSD.
Hibernatemode to 0 means that no image of the ram is being made onto the disk when you go into safe sleep.
You do this with terminal commands:
to see the actual hibernatemode:
sudo pmset -g | grep hibernatemode
this returns the actual hibernatemode you are in.
to set the hibernate mode to 0:
sudo pmset -a hibernatemode 0
and to remove the sleep image:
sudo rm /var/vm/sleepimage
That does it; whenever a major update of the OS is made, check that the hibernatemode is still 0 with the above command.
Note 1: in some forums it is suggested to move the sleepimage file to another volume. DO NOT DO THAT: when that volume is not connected it will cause crashes.
Note 2: if you add more RAM while hibernatemode is not 0, this can create a startup problem because the sleepimage file will have the wrong size. To avoid that, remove the sleepimage file before adding RAM.
|
{
"pile_set_name": "StackExchange"
}
|
Q:
Battleship with Python
I'm having some difficulties with my program ... I tried everything but nothing works ...
Here is my code:
import random
XBateau_un_joueur = 0
YBateau_un_joueur = 0
XBateau_deux_joueur = 1
YBateau_deux_joueur = 1
XBateau_un_IA = 0
YBateau_un_IA = 0
XBateau_deux_IA = 1
YBateau_deux_IA = 1
def Grille(): #définition du tableau qui servira de grille
tableau_joueur = [(0,)*taille]*taille
for i in range(taille):
tableau_joueur[i] = list(tableau_joueur[i])
tableau_joueur[XBateau_deux_joueur][YBateau_deux_joueur] = 1
tableau_joueur[XBateau_un_joueur][YBateau_un_joueur] = 1
if XBateau_un_joueur == XBateau_deux_joueur and YBateau_un_joueur == YBateau_deux_joueur :
XBateau_deux_joueur = XBateau_deux_joueur + 1
YBateau_deux_joueur = YBateau_deux_joueur + 1
if XBateau_deux_joueur > taille - 1 :
XBateau_deux_joueur = XBateau_deux_joueur - 2
if YBateau_deux_joueur > taille - 1 :
YBateau_deux_joueur = YBateau_deux_joueur - 2
tableau_IA = [(0,)*taille]*taille
for j in range(taille):
tableau_IA[j] = list(tableau_IA[j])
tableau_IA[XBateau_un_IA][YBateau_deux_IA] = 1
tableau_IA[XBateau_deux_IA][YBateau_deux_IA] = 1
if XBateau_un_IA and YBateau_un_IA == XBateau_deux_IA and YBateau_deux_IA :
XBateau_deux_IA = XBateau_deux_IA + 1
YBateau_deux_IA = YBateau_deux_IA + 1
if XBateau_deux_IA > taille - 1 :
XBateau_deux_IA = XBateau_deux_IA - 2
if YBateau_deux_joueur > taille - 1 :
YBateau_deux_IA = YBateau_deux_IA - 2
print tableau_joueur
print tableau_IA
def tour_IA():
compteur_de_point_IA = 0
for tour_IA in range (0, 3):
print "L'ennemi nous attaque Capitain !"
x = int(random.randint(0, taille - 1))
y = int(random.randint(0, taille - 1))
if ((x == XBateau_un_joueur) and (y == YBateau_un_joueur)) or ((x == XBateau_deux_joueur) and (y == YBateau_deux_joueur)) :
compteur_de_point_IA = compteur_de_point_IA + 8
print "Arg ! Cette raclure de fond de calle nous a coulé en vaisseau... prenez garde !"
else:
if (x == XBateau_un_joueur) or (y == YBateau_un_joueur) or (x == XBateau_deux_joueur) or (y == YBateau_deux_joueur) :
compteur_de_point_IA = compteur_de_point_IA + 1
print "nous sommes en vue de l'ennemi Cap'tain ! Faite attention !"
else:
print "A l'eau ! Il nous a raté !"
print "Voici les points marqué par l'ennemis :", compteur_de_point_IA
# tour du joueur IRL
def tour_joueur():
list_resultat = []
List_tot = []
print " C'est à vous d'attaquer"
for tour_joueur in range (0, 3):
compteur_de_point_joueur = 0
print "En attente des ordres, lattitude du tir mon capitain ?"
print "(colone)"
x = int(input())
print "longitude du tir ?"
print "(ligne)"
y = int(input())
if ((x == XBateau_un_IA) and (y == YBateau_un_IA)) or ((x == XBateau_deux_IA) and (y == YBateau_deux_IA)) :
compteur_de_point_joueur = compteur_de_point_joueur + 8
print "Aarrrr ! Navire ennemi envoyé par le fond Cap'tain!"
print "Vous marqué 8 points supplémentaires !! Bien joué!"
else:
if (x == XBateau_un_IA) or (y == YBateau_un_IA) or (x == XBateau_deux_IA) or (y == YBateau_deux_IA):
compteur_de_point_joueur = compteur_de_point_joueur + 1
print "L'ennemis est en vue ! Pas mal boucanier !"
print "Vous marqué 1 point supplémentaire !!"
else:
print "Mille sabords !!! Raté !!! Recommencez marins d'eau douce !"
print "Voici votre total de point marqué :", compteur_de_point_joueur
print " "
list_resultat.append(compteur_de_point_joueur)
print list_resultat[0]
print
def nombre_de_joueur() :
print "Combien de joueur êtes vous ?"
nombre = int(input())
print "Vent dans les voiles !! Vent dans les voiles !!"
for k in range(0, nombre) :
Grille()
tour_joueur()
print " "
print " "
tour_IA()
taille = int(input("Veuillez saisir la taille du champs de bataille matelot !"))
XBateau_un_joueur = random.randint(0, taille - 1)#bateau n°1 du joueur
YBateau_un_joueur = random.randint(0, taille - 1)
XBateau_deux_joueur = random.randint(0, taille - 1)#bateau n°2 du joueur
YBateau_deux_joueur = random.randint(0, taille - 1)
XBateau_un_IA = random.randint(0, taille - 1)#bateau n°1 de l'IA
YBateau_un_IA = random.randint(0, taille - 1)
XBateau_deux_IA = random.randint(0, taille - 1)#bateau n°2 de l'IA
YBateau_deux_IA = random.randint(0, taille - 1)
nombre_de_joueur()
And this is the shell:
Traceback (most recent call last):
File "C:\Users\Marion\Documents\Marion\Work\ISN\BatailleNavale2.py", line 138, in <module>
nombre_de_joueur()
File "C:\Users\Marion\Documents\Marion\Work\ISN\BatailleNavale2.py", line 116, in nombre_de_joueur
Grille()
File "C:\Users\Marion\Documents\Marion\Work\ISN\BatailleNavale2.py", line 17, in Grille
tableau_joueur[XBateau_deux_joueur][YBateau_deux_joueur] = 1
UnboundLocalError: local variable 'XBateau_deux_joueur' referenced before assignment
So if you have an idea..
PS : Sorry if my english is bad... I'm french!
A:
You are assigning a value to the variable here:
XBateau_deux_joueur = XBateau_deux_joueur + 1
Python sees this assignment within your function and then, as a result, understands this variable within a local, rather than a global, scope. So the variable does not refer to the global variable that you probably think it should refer to. Thus when you reference the local variable here:
tableau_joueur[XBateau_deux_joueur][YBateau_deux_joueur] = 1
Python has never seen this variable before within the local scope of the function. The name is unbound, and Python throws an error. If you want to refer to the global variable instead, try this: before any reference to the variable, declare it as a global variable (within the function):
def Grille(): #définition du tableau qui servira de grille
global XBateau_deux_joueur
. . .
Any time you assign a value to a variable within a function, Python will assume that the variable is local in scope throughout the entire function unless told otherwise.
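A minimal, self-contained illustration of that rule (the names here are made up, not from the question):
counter = 0

def bump_wrong():
    counter = counter + 1   # UnboundLocalError when called: the assignment makes 'counter' local

def bump_right():
    global counter          # tell Python to use the module-level variable
    counter = counter + 1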
|
{
"pile_set_name": "StackExchange"
}
|
Q:
How to use python parse json with nested children
I am trying to collect specific data from this json using python and I can not figure out how to navigate the structure.
I have tried looking at similar questions and can't seem to figure it out.
This is the python code that I have
import json
import requests
response = requests.get("example.com/data.json")
data = json.loads(response.text)
#print (json.dumps(data, indent=4))
with open("data_file.json", "w") as write_file:
json.dump(data, write_file)
for stuff in data['Children']:
print(stuff['id'])
Here is part of the json I am trying to read
{
"Min": "Min",
"Text": "Sensor",
"ImageURL": "",
"Value": "Value",
"Children": [
{
"Min": "",
"Text": "PC",
"ImageURL": "images_icon/computer.png",
"Value": "",
"Children": [
{
"Min": "",
"Text": "MSI Z170A GAMING M7 (MS-7976)",
"ImageURL": "images_icon/mainboard.png",
"Value": "",
"Children": [],
"Max": "",
"id": 2
},
{
"Min": "",
"Text": "Intel Core i7-6700K",
"ImageURL": "images_icon/cpu.png",
"Value": "",
"Children": [
{
"Min": "",
"Text": "Clocks",
"ImageURL": "images_icon/clock.png",
"Value": "",
"Children": [
{
"Min": "100 MHz",
"Text": "Bus Speed",
"ImageURL": "images/transparent.png",
"Value": "100 MHz",
"Children": [],
"Max": "100 MHz",
"id": 5
},
{
"Min": "4408 MHz",
"Text": "CPU Core #1",
"ImageURL": "images/transparent.png",
"Value": "4409 MHz",
"Children": [],
"Max": "4409 MHz",
"id": 6
},
{
"Min": "4408 MHz",
"Text": "CPU Core #2",
"ImageURL": "images/transparent.png",
"Value": "4409 MHz",
"Children": [],
"Max": "4409 MHz",
"id": 7
}
],
"Max": "",
"id": 4
},
{
"Min": "",
"Text": "Temperatures",
"ImageURL": "images_icon/temperature.png",
"Value": "",
"Children": [
{
"Min": "24.0 \u00b0C",
"Text": "CPU Core #1",
"ImageURL": "images/transparent.png",
"Value": "32.0 \u00b0C",
"Children": [],
"Max": "58.0 \u00b0C",
"id": 11
},
{
"Min": "30.0 \u00b0C",
"Text": "CPU Package",
"ImageURL": "images/transparent.png",
"Value": "36.0 \u00b0C",
"Children": [],
"Max": "62.0 \u00b0C",
"id": 15
}
],
"Max": "",
"id": 10
}
],
"Max": "",
"id": 3
}
],
"Max": "",
"id": 1
}
],
"Max": "Max",
"id": 0
}
I am getting back only "1" returned. I need to get the Min, Max, Value from each entry but id was the only thing that I have been able to get so far.
A:
Recursion is pretty neat here... if the Python tricks need explaining, please ask.
def get_stuff(data_dict):
#gets: min,max,value from input and returns in a list alongside children's
# create new object of the relevant data fields
my_data = {k:data_dict[k] for k in ['Min', 'Max', 'Value']}
# recursively get each child's data and add that to a new list
children_data = [d for child in data_dict['Children'] for d in get_stuff(child)]
# add our data to the start of the children's data
return [my_data] + children_data
Which, when run on the data you posted in the question, gives:
[
{
"Min": "Min",
"Max": "Max",
"Value": "Value"
},
{
"Min": "",
"Max": "",
"Value": ""
},
{
"Min": "",
"Max": "",
"Value": ""
},
{
"Min": "",
"Max": "",
"Value": ""
},
{
"Min": "",
"Max": "",
"Value": ""
},
{
"Min": "100 MHz",
"Max": "100 MHz",
"Value": "100 MHz"
},
{
"Min": "4408 MHz",
"Max": "4409 MHz",
"Value": "4409 MHz"
},
{
"Min": "4408 MHz",
"Max": "4409 MHz",
"Value": "4409 MHz"
},
{
"Min": "",
"Max": "",
"Value": ""
},
{
"Min": "24.0 \u00b0C",
"Max": "58.0 \u00b0C",
"Value": "32.0 \u00b0C"
},
{
"Min": "30.0 \u00b0C",
"Max": "62.0 \u00b0C",
"Value": "36.0 \u00b0C"
}
]
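For completeness, a usage sketch tying this back to the question's code (assuming data is the dict produced by json.loads there):
results = get_stuff(data)
print(json.dumps(results, indent=4))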
|
{
"pile_set_name": "StackExchange"
}
|
Q:
R - Sample consecutive series of dates in time series without replacement?
I have a data frame in R containing a series of dates. The earliest date is (ISO format) 2015-03-22 and the latest date is 2016-01-03, but there are two breaks within the data. Here is what it looks like:
library(tidyverse)
library(lubridate)
date_data <- tibble(dates = c(seq(ymd("2015-03-22"),
ymd("2015-07-03"),
by = "days"),
seq(ymd("2015-08-09"),
ymd("2015-10-01"),
by = "days"),
seq(ymd("2015-11-12"),
ymd("2016-01-03"),
by = "days")),
sample_id = 0L)
I.e.:
> date_data
# A tibble: 211 x 2
dates sample_id
<date> <int>
1 2015-03-22 0
2 2015-03-23 0
3 2015-03-24 0
4 2015-03-25 0
5 2015-03-26 0
6 2015-03-27 0
7 2015-03-28 0
8 2015-03-29 0
9 2015-03-30 0
10 2015-03-31 0
# … with 201 more rows
What I want to do is to take ten 10-day long samples of continous dates from within that time series without replacement. For example, a valid sample would be the ten days from 2015-04-01 to 2015-04-10 because that falls completely within the dates column in my date_data data frame. Each sample would then get a unique (non-zero) number in the sample_id column in date_data such as 1:10.
To be clear, my requirements are:
Each sample would be 10 consecutive days.
The sampling has to be without replacement. So if sample_id == 1 is the 2015-04-01 to 2015-04-10 period, those dates can't be part of another 10-day-long sample.
Each 10-day-long sample can't include any date that's not within date_data$dates.
At the end, date_data$sample_id would have unique numbers representing each 10-day-long sample, likely with lots of 0s left over that were not part of any sample (and there would be 200 rows - 10 for each sample - where sample_id != 0).
I am aware of dplyr::sample_n() but it doesn't sample consecutive values, and I don't know how to devise a way to "remember" which dates have already been sampled...
What's a good way to do this? A for loop?!?! Or perhaps something with purrr? Thank you very much for your help.
UPDATE: Thanks to @gfgm's solution, it reminded me that performance is an important consideration. My real dataset is quite a bit larger, and in some cases I would want to take 20+ samples instead of just 10. Ideally the size of the sample can be changed as well, i.e. not necessarily 10-days long.
A:
This is tricky, as you anticipated, because of the requirement of sampling without replacement. I have a working solution below which achieves a random sample and works fast on a problem of the scale given in your toy example. It should also be fine with more observations, but will get really really slow if you need to pick a lot of points relative to the sample size.
The basic premise is to pick n=10 points, generate the 10 vectors from these points forwards, and if the vectors overlap ditch them and pick again. This is simple and works fine given that 10*n << nrow(df). If you wanted to get 15 subvectors out of your 200 observations this would be a good deal slower.
library(tidyverse)
library(lubridate)
date_data <- tibble(dates = c(seq(ymd("2015-03-22"),
ymd("2015-07-03"),
by = "days"),
seq(ymd("2015-08-09"),
ymd("2015-10-01"),
by = "days"),
seq(ymd("2015-11-12"),
ymd("2016-01-03"),
by = "days")),
sample_id = 0L)
# A function that picks n indices, projects them forward 10,
# and if any of the segments overlap resamples
pick_n_vec <- function(df, n = 10, out = 10) {
points <- sample(nrow(df) - (out - 1), n, replace = F)
vecs <- lapply(points, function(i){i:(i+(out - 1))})
while (max(table(unlist(vecs))) > 1) {
points <- sample(nrow(df) - (out - 1), n, replace = F)
vecs <- lapply(points, function(i){i:(i+(out - 1))})
}
vecs
}
# demonstrate
set.seed(42)
indices <- pick_n_vec(date_data)
for (i in 1:10) {
date_data$sample_id[indices[[i]]] <- i
}
date_data[indices[[1]], ]
#> # A tibble: 10 x 2
#> dates sample_id
#> <date> <int>
#> 1 2015-05-31 1
#> 2 2015-06-01 1
#> 3 2015-06-02 1
#> 4 2015-06-03 1
#> 5 2015-06-04 1
#> 6 2015-06-05 1
#> 7 2015-06-06 1
#> 8 2015-06-07 1
#> 9 2015-06-08 1
#> 10 2015-06-09 1
table(date_data$sample_id)
#>
#> 0 1 2 3 4 5 6 7 8 9 10
#> 111 10 10 10 10 10 10 10 10 10 10
Created on 2019-01-16 by the reprex package (v0.2.1)
marginally faster version
pick_n_vec2 <- function(df, n = 10, out = 10) {
points <- sample(nrow(df) - (out - 1), n, replace = F)
  while (min(diff(sort(points))) < out) {
points <- sample(nrow(df) - (out - 1), n, replace = F)
}
lapply(points, function(i){i:(i+(out - 1))})
}
|
{
"pile_set_name": "StackExchange"
}
|
Q:
Removing the brackets from an ArrayList
I'm just getting started with Java and I have a problem. I run a query against MySQL and store the data in an ArrayList, from which I only need the data, without the brackets that appear at the beginning and the end.
Could someone help me by telling me how to remove those brackets?
public class ArregloObservadas {
public ArrayList setArregloOBS_1(){
PreparedStatement pstmt=null;
ResultSet rs=null;
ResultSetMetaData RSMD;
Connection conn=null;
ArrayList tmp3 = null;
String sql2;
conexion opendb = new conexion();
try{ conn = opendb.getConnection();
} catch(Exception e){System.out.println(e);}
try{
sql2 = "**consulta**";
pstmt = conn.prepareStatement(sql2);
rs = pstmt.executeQuery();
RSMD = rs.getMetaData();
tmp3 = new ArrayList();
while (rs.next()){
tmp3.add(rs.getInt("id_comision"));
}
if(rs!=null){
rs.close();
rs=null;
}
if(pstmt!=null){
pstmt.close();
pstmt = null;
}
if(conn!=null){
conn.close();
conn=null;
}
}catch(Exception e){
System.out.println(e);
}finally{
try{
if(rs!=null){
rs.close();
rs=null;
}
if(pstmt!=null){
pstmt.close();
pstmt = null;
}
if(conn!=null){
conn.close();
conn=null;
}
}catch(Exception e){
System.out.println(e);
}
}
return tmp3;
}
public static void main(String[] args) {
ArregloObservadas x = new ArregloObservadas();
System.out.println(x.setArregloOBS_1());
}
A:
The brackets you mention are shown because you are printing the representation of the ArrayList, for example something similar to this:
[Valor1, Valor2, Valor3, Valor4, Valor5]
However, this should not be a problem as far as handling the data contained in the ArrayList goes, but if your goal is to print the values, you can do it like this:
String datosArray = "";
for (String elemento: myArrayList) {
datosArray += elemento + ", ";
}
System.out.println(datosArray);
add this method to clean up the trailing , :
private static String limpia(String datosArray){
datosArray = datosArray.trim();
if (datosArray != null && datosArray.length() > 0 && datosArray.charAt(datosArray.length() - 1) == ',') {
datosArray = datosArray.substring(0, datosArray.length() - 1);
}
return datosArray;
}
and call it like this:
System.out.println(limpia(datosArray));
this way you would get:
Valor1, Valor2, Valor3, Valor4, Valor5
In the case of your code it would be:
public static void main(String[] args) {
ArregloObservadas x = new ArregloObservadas();
//System.out.println(x.setArregloOBS_1());
String datosArray = "";
    for (Object elemento : x.setArregloOBS_1()) {
datosArray += elemento + ", ";
}
System.out.println(limpia(datosArray));
}
|
{
"pile_set_name": "StackExchange"
}
|
Q:
Draw text inside pylab figure window
I want to add some more details about the calculations used to create a plot (histogram).
How can I add some text that I can position anywhere I want in the figure window?
A:
Lots of ways to do this:
text
figtext
annotate
|
{
"pile_set_name": "StackExchange"
}
|
Q:
.ajax() refreshes the page after ENTER is hit
I am using ajax to update the db with a new folder but it refreshes the page after ENTER is hit.
on my form I have onkeypress="if(event.keyCode==13) savefolder();"
here is the javascript code that I have: what it does basically is after you hit enter it calls the function savefolder, savefolder then sends a request through ajax to add the folder to the db. Issue is it refreshes the page... I want it to stay on the same page.
any suggestions? Thank you
<script>
function savefolder() {
var foldername= jQuery('#foldername').val(),
foldercolor= jQuery('#foldercolor').val();
// ajax request to add the folder
jQuery.ajax({
type: 'get',
url: 'addfolder.php',
data: 'foldername=' + foldername + '&foldercolor=' + foldercolor,
beforeSend: function() { alert('beforesend');},
success: function() {alert('success');}
});
return false;
}
</script>
A:
This is working:
<form>
<input type="submit" value="Enter">
<input type="text" value="" placeholder="search">
</form>
function savefolder() {
var foldername= jQuery('#foldername').val(),
foldercolor= jQuery('#foldercolor').val();
jQuery.ajax({
type: 'get',
url: '/echo/html/',
//data: 'ajax=1&delete=' + koo,
beforeSend: function() {
//fe('#r'+koo).slideToggle("slow");
},
success: function() {
$('form').append('<p>Append after success.</p>');
}
});
return false;
}
jQuery(document).ready(function() {
$('form').submit(savefolder);
});
http://jsfiddle.net/TFRA8/
You need to check to see if you're having any errors during processing (Firebug or Chrome Console can help). As it stands, your code is not well-formed, as the $(document).ready() is never closed in the code you included in the question.
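As an alternative sketch (assuming a jQuery version with .on, 1.7+), you can stop the default submit explicitly instead of relying on return false:
jQuery(document).ready(function() {
    jQuery('form').on('submit', function(event) {
        event.preventDefault();   // stop the browser from reloading the page
        savefolder();
    });
});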
|
{
"pile_set_name": "StackExchange"
}
|
Q:
Calling a Class or Method in C# using Mono
I'm having some problems calling a class or method in C# using Mono, can anyone give some help on what I'm doing wrong?
I want this piece of code:
public static void Execute(ScriptHost host)
{
try
{
TesteGUI teste = new TesteGUI(); //?
TesteGUI(); // ?
}
catch(Exception e)
{
}
finally
{
}
}
To call this piece of code:
public TesteGUI(Gtk.Window parentWindow) : base(Gtk.WindowType.Toplevel)
{
base.Modal = false;
base.TransientFor = parentWindow;
base.Decorated = false;
base.WindowPosition = WindowPosition.CenterAlways;
this.MyBuild();
base.KeyReleaseEvent += delegate(object o, KeyReleaseEventArgs args)
{
if (args.Event.Key == Gdk.Key.Escape)
{
this.Destroy();
}
};
}
What is my doing wrong and how can I make it work?
Thank you
A:
You should do it like this:
public static void Execute(ScriptHost host)
{
try
{
TesteGUI teste = new TesteGUI(parentWindow); //where parentWindow is defined somewhere earlier and is of type Gtk.Window
}
catch(Exception e)
{
}
finally
{
}
}
|
{
"pile_set_name": "StackExchange"
}
|
Q:
console shows "unknown operand" from an if statement
I'm trying to script an automatic md5sum check for my embedded system running uClinux.
The script is generated on my computer as well as the tar file I want to check.
The script goes like this :
#!/bin/sh
filename='My_File'
md5='d4deeac6f655ee5d4b9ec150fc6957a5'
if test ! -e $filename.tar
then
echo Update file does not exist
exit 1
fi
if [ -z `md5sum "$filename.tar" | grep $md5` ]
then
echo 'md5sum is not correct'
exit 1
else
echo 'md5sum is correct'
fi
tar -xvf "$filename.tar"
[...]
The md5sum check runs as expected, i.e. the script stops when the checksum is wrong and executes to the end otherwise. But when the checksum is correct, I get this message from the console:
[: My_File.tar: unknown operand
I don't understand why I get this, and I don't think it is right to leave my script like this. Can someone explain what the shell is doing and how to get rid of this message?
Thanks
A:
Quote the output of md5sum so it's not split into multiple words. -z only expects one operand.
if [ -z "`md5sum "$filename.tar" | grep $md5`" ]
While we're here, might as well switch to the nicer $(...) syntax.
if [ -z "$(md5sum "$filename.tar" | grep $md5)" ]
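As a side note, you could also let grep's exit status drive the test directly, which avoids the quoting issue altogether (a sketch; most greps, including busybox's, support -q):
if md5sum "$filename.tar" | grep -q "$md5"
then
    echo 'md5sum is correct'
else
    echo 'md5sum is not correct'
    exit 1
fi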
|
{
"pile_set_name": "StackExchange"
}
|
Q:
ElasticSearch - Sort does not work
I'm trying to make a search and sort the results. However, I'm getting an error and I don't know why.
EDIT - I'll provide my full mappings.
"myindex": {
"mappings": {
"mytype": {
"dynamic_templates": [
{
// Dynamic templates here!
}
],
"properties": {
"fieldid": {
"type": "keyword",
"store": true
},
"fields": {
"properties": {
"myfield": {
"type": "text",
"fields": {
"sort": {
"type": "keyword",
"ignore_above": 256
}
},
"analyzer": "myanalyzer"
}
}
},
"isDirty": {
"type": "boolean"
}
}
}
}
}
}
When I perform a search with sorting, like this:
POST /index/_search
{
"sort": [
{ "myfield.sort" : {"order" : "asc"}}
]
}
I get the following error:
{
"error": {
"root_cause": [
{
"type": "query_shard_exception",
"reason": "No mapping found for [myfield.sort] in order to sort on",
"index_uuid": "VxyKnppiRJCrrnXfaGAEfA",
"index": "index"
}
    ]
  },
  "status": 400
}
I'm following the documentation on elasticsearch.
DOCUMENTATION
I also checked this link:
DOCUMENTATION
Can someone provide me with some help?
A:
Hmm it might be your mapping isn't set properly. I followed along using the following:
PUT /your-index/
{
"settings": {
"number_of_replicas": "1",
"number_of_shards": "3",
"analysis": {
"customanalyzer": {
"ID": {
"type": "custom",
"tokenizer": "keyword",
"filter": ["lowercase"]
}
}
}
},
"mappings": {
"thingy": {
"properties": {
"myfield": {
"type": "text",
"analyzer": "ID",
"fields": {
"sort": {
"type": "keyword",
"ignore_above": 256
}
}
}
}
}
}
}
To double-check whether the index actually has the myfield.sort field, look at
GET /your-index/thingy/_mapping
Then, upload some document (no need to specify the sub-field(s), elasticsearch will do it for you)
POST /your-index/thingy/
{
"myfield": "some-value"
}
Now I can search with the following:
POST /your-index/thingy/_search
{
"sort": [
{ "myfield.sort": { "order": "asc" } }
]
}
So be sure to check:
Naming/typo's (you never know)
Your mapping (does it have the "myfield.sort" field)
Are you searching in the correct index?
Hopefully this helps
|
{
"pile_set_name": "StackExchange"
}
|
Q:
Loop to check existence of file and process it
I am starting the first part of a school assignment and I must prompt the user to enter a filename, check for the existence of the file, and if it exists, open it for processing; otherwise I am to have the user enter another filename.
When I compile and run my program below, I get the error message "No file exists. Please enter another filename." When I type in names of files that don't exist it just runs the first part of my do while loop again. I'm a beginner at C++ but I've done this before and I feel as if it should be running properly. Any help would be appreciated.
#include <iostream>
#include <string>
#include <fstream>
using namespace std;
struct customerData
{
int _customerID;
string _firstName, _lastName;
double _payment1, _payment2, _payment3;
};
void processFile();
int main()
{
processFile();
system ("pause");
return 0;
}
void processFile()
{
string filename;
ifstream recordFile;
do
{
cout << "Please enter a filename\n";
cin >> filename;
recordFile.open(filename);
if (recordFile.good())
// {
// enter code for if file exists here
// }
;
}
while(recordFile.fail());
{
cout << "No file by that name. Please enter another filename\n";
cin >> filename;
recordFile.open(filename);
}
}
A:
To check whether a file was successfully opened you must use the std::fstream::is_open() function, like so:
void processfile ()
{
string filename;
cout << "Please enter filename: ";
if (! (cin >> filename))
return;
ifstream file(filename.c_str());
if (!file.is_open())
{
cerr << "Cannot open file: " << filename << endl;
return;
}
// do something with open file
}
The member functions .good() and .fail() report the overall stream state rather than specifically whether the open succeeded, so is_open() is the most direct check here.
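If you want to keep prompting until a file opens, as your original do-while intended, here is a sketch along the same lines:
void processFile()
{
    string filename;
    ifstream recordFile;

    do
    {
        cout << "Please enter a filename\n";
        cin >> filename;
        recordFile.clear();                  // reset any error flags from a previous attempt
        recordFile.open(filename.c_str());
        if (!recordFile.is_open())
            cout << "No file by that name. Please enter another filename\n";
    } while (!recordFile.is_open());

    // process the open file here
}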
|
{
"pile_set_name": "StackExchange"
}
|
Q:
My jQuery, AJAX call is getting a null response on a valid return of well-formatted JSON data from a PHP function
I have checked the logs of my PHP function and I have correctly formatted JSON data that I am returning from the method. The AJAX is calling it and returning, getting a null value for the response variable. Any Ideas? Here is the AJAX code:
$.ajax({
type: "POST",
url: "index.php/controllerFile/get_standby",
data: 'id=' + $(this).attr('id'),
success: function(response){
console.log('response is: ' + response); //It is null here
$.colorbox({'href':'index.php/config/view/standby' + response.urlData,'width':1000,'title':'Standby People'});
},
dataType:'json'
});
Here is the PHP function:
function get_standby()
{
$id = $this->input->post('id');
$this->load->model('teetime');
$url['urlData'] = ($this->teetime->get_standby_by_id($id));
$printing = json_encode($url);
log_message('error', 'JSON ' . $printing);
return $printing;
}
A:
Try using echo in your PHP instead of return.
echo $printing;
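A slightly fuller sketch of the end of that function (the header line is optional, but it helps jQuery treat the response as JSON since you set dataType:'json'):
$printing = json_encode($url);
log_message('error', 'JSON ' . $printing);
header('Content-Type: application/json');
echo $printing;   // echo, not return, so the browser actually receives the JSON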
|
{
"pile_set_name": "StackExchange"
}
|
Q:
Moments in math, they describe the "shape"?
There's this one tantalizing line from the Wikipedia article:
In mathematics, a moment is a specific quantitative measure, used in both mechanics and statistics, of the shape of a set of points
Can someone expand on that?
Otherwise, moments in math don't seem to arise from anything related to the figures whose moments you are finding, but rather seem to be just "nice things to know" about a figure.
E.g. the first moment:
$$
\iint_R y \rho(x,y) dA
$$
(That's supposed to be over a region R)
This gets you the first moment with respect to $y$, which has no meaning in and of itself, but happens to be:
(Physics)
torque (what it takes to rotate R pushing from the y side)
(Statistics)
the mean (where $\rho$ is a probability function over R)
The second moment gets you the variance (Statistics) and the Moment of Inertia (Physics), etc.
So it looks like this form just "shows up" in various applied disciplines and in math the general form of finding the moments of some figure (a "lamina" as my book calls it) is just:
$$
\iint_R y^n \rho(x,y) dA
$$
$$
n = 1, 2, ...
$$
Related:
Intuitive explanation of moments as they relate to center of mass
First and Second Moment of Mass
Moments and Centers of Mass
Moment (physics)
Moment (mathematics)
What is the use of moments in statistics
A:
Since there have been no further answers for about a week, I'll assume I'm correct and attempt to answer here.
A little story: I ride bikes for cardio. On my way home I pass by a corn field. I rode along the perfectly straight axis of the corn field and looked at the stalks. The stalks varied in height as I rode along depending on my position along the axis. But also my distance from them thanks to perspective.
Looking at the town as I rode by, I could see that the apparent height of the objects in the distance varied with their height and my distance from them. Thus the "moment" of their apparent height as a product of their height and their distance from me.
So that defines the shape of the "distribution" of the apparent height of buildings in my town quite neatly.
It's a bit like a contour map in one dimension. If I take my moment with respect to a single axis, then I should see the "profile" of the surface with respect to that axis. If I do it in two dimensions - with respect to both axes - for a three dimensional shape, I should be able to form a contour map of the surface, tell the distribution of its mass and then by dividing by the total mass, find the coordinates of the center of mass; Where the surface would balance, on its distribution of mass over the surface.
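In symbols, using the integrals from the question, that balance point (the center of mass) is just the first moments divided by the total mass:
$$
m = \iint_R \rho(x,y)\, dA, \qquad
\bar{x} = \frac{\iint_R x\, \rho(x,y)\, dA}{m}, \qquad
\bar{y} = \frac{\iint_R y\, \rho(x,y)\, dA}{m}
$$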
I'm guessing that this business with the "moment of inertia" - as I have it above - "happens to be proportional" to the second moment, and that moments define certain key properties of a system that determine how the system behaves. E.g., if you know all the properties of a physical system you can duplicate it exactly (velocity, spin, angular momentum, etc.), and some of those properties happen to map to moments of the equations that define them.
|
{
"pile_set_name": "StackExchange"
}
|
Q:
How secure are RequestFactory-generated ids?
I have a moderate understanding of the innards of RequestFactory, but now I'm implementing a security policy around it and I don't see much literature on this topic. I'm particularly interested in the security of the client-side proxy id's that RequestFactory is using. For example, an object's id in my database might be 32, but RequestFactory might refer to the id as Qkjnsd89urknasj3 or something.
How hard is it for someone to generate a valid id of one of my objects from thin air? How are these ids generated?
Any info is appreciated, thanks.
A:
They are not secure; they are just base64 encoded. This is done so that no matter what kind of data you use in your key, RequestFactory can turn it into and out of a real key.
So don't allow free and open access to objects based solely on the ID of an instance, but have some kind of logic that checks that the current user is allowed to look at a record. Or make the IDs non-predictable, and the ID space large enough that random guessing will get you nowhere. But even then, the keys will be base64 encoded when RF refers to them.
|
{
"pile_set_name": "StackExchange"
}
|
Q:
Is A∨¬A a tautology when there is a proof (by contradiction)?
$A \lor \neg A$ is stated as a "tautology", but is it really a tautology? It can be proven by counterposition. And therefore it is not a tautology when it can be proven(?)
Update
Here's the proof (by contradiction) I mean:
¬(A∨¬A) (assumption)
A (assumption)
A∨¬A (rule of introduction)
⊥ (contradiction)
¬A
A∨¬A (rule of introduction)
⊥ (contradiction)
¬¬(A∨¬A)
A∨¬A
A:
$A\vee \neg A$ is a tautology in classical (i.e., Aristotelian) logic because you can prove that using the deduction rules of the classical proposition calculus no matter what the truth value of $A$ is, the truth value of $A\vee \neg A$ is always true. That is the meaning of tautology.
In non-classical logical systems, such as intuitionism or constructivism, $A \vee \neg A$ is not a tautology. There the interpretation of $P \vee Q$ is not "either P or Q is true" but rather the more constructive "Either I have a proof of P or I have a proof of Q". A famous example to illustrate this is the following: Theorem: There exist two irrational numbers $a,b$ such that $a^b$ is rational. A classical proof can go like this: if $\sqrt2 ^\sqrt2$ is rational we are done. Else, consider $(\sqrt2^{\sqrt2})^{\sqrt2}=\sqrt2^2=2$, a rational. Classically this finishes the proof but constructively it is not a valid proof since it does not actually show which one of the two candidates works.
A:
Part of the problem here may be that "tautology" has a far more specific meaning in mathematical logic than in ordinary usage. The more specific meaning is "a statement S that always true, solely on the basis of how S is constructed from smaller statements by means of propositional connectives and the meanings (truth tables) of the connectives". So $A\lor\neg A$ is a tautology because it is true solely because of the meanings of $\lor$ and $\neg$. But $1=1$ is not a tautology because its truth depends on the meaning of $=$, which is not a propositional connective. Similarly, if $P$ is a unary predicate, then $P(a)\to(\exists x)\,P(x)$, though logically valid, is not a tautology because its validity depends on the meanings of both $\to$ and $\exists$, the latter of which is not a propositional connective.
In ordinary, non-technical usage, "tautology" means (according to my dictionary) saying the same thing in different words; I've heard it used more generally to mean anything that is obviously true. So all of the examples in my first paragraph would be tautologies in this sense.
A:
Try constructing a truth table and you will see that it is in fact a tautology.
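For instance, for the classical two-valued case:
 A | ¬A | A ∨ ¬A
---+----+--------
 T |  F |   T
 F |  T |   T
The last column is true in every row, which is exactly what "tautology" means here.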
|
{
"pile_set_name": "StackExchange"
}
|
Q:
Prevent users from creating Schedules for jobs
I need to restrict the development users from creating new schedules for SQL Agent jobs. They should be able to create a job and pick one of the available schedules, but should not be able to create a new schedule.
I have a bunch of developers creating SSIS packages that need to be scheduled off business hours. I am now faced with either having to do that myself, or risking the possibility of someone scheduling a major task during business hours and slowing down the DEV server noticeably.
Is this possible?
Any alternate solutions also welcome.
Thanks,
Raj
A:
I just figured out one way.
DENY EXECUTE ON sp_add_schedule to <USER>
Will this lead to any other complications?
Raj
|
{
"pile_set_name": "StackExchange"
}
|
Q:
Representing multi-tile objects in a tilemap
I'm building an isometric tiled game, where the terrain and objects are represented by a 2-dimensional array of lists. Depth of objects on the map is derived from the tile (and a per-tile sorting routine), the objects are updated/drawn by going over the entire map, and it is important to know whether a tile is occupied or not for interaction/collision purposes.
Now, I want to have objects that are larger than a single tile (say, a 2x2 or 2x3 object). Drawing them this way is trivial, however, there is the occupation detection requirement. Putting the same object or dummy objects into the array merely to indicate that the tiles are taken doesn't seem like an elegant solution, but neither does checking surrounding tiles for possible big objects, each time the occupation status of a tile is requested.
Is there a more elegant solution that I've overlooked?
A:
The most reasonable solution in this case would be the former, where you indicate that the same object is in multiple positions in the map array. There is nothing wrong with that. As you may realize, a platform in a 2D tile-based game is "one" object that is sometimes indicated by dozens of tiles.
Checking nearby tiles for a "large" object would be an obfuscated solution because it will require unneeded work, and lots of it if you decide to add larger objects. The collision map, unlike a list of objects and their positions, is required to handle (quickly and efficiently) the detection of the presence of an object in a certain position.
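As a small illustration of the first approach (a Python sketch with made-up names, not tied to any particular engine), you simply store the same object reference in every cell it covers, so occupancy checks stay a plain per-tile lookup:
WIDTH, HEIGHT = 64, 64  # placeholder map size in tiles
grid = [[[] for _ in range(WIDTH)] for _ in range(HEIGHT)]
def place(obj, x, y, w, h):
    # register the same object reference in every tile it covers
    for ty in range(y, y + h):
        for tx in range(x, x + w):
            grid[ty][tx].append(obj)
def is_occupied(x, y):
    return len(grid[y][x]) > 0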
|
{
"pile_set_name": "StackExchange"
}
|
Q:
Efficient functional programming (using mapply) in R for a "naturally" procedural problem
A common use case in R (at least for me) is identifying observations in a data frame that have some characteristic that depends on the values in some subset of other observations.
To make this more concerete, suppose I have a number of workers (indexed by WorkerId) that
have an associated "Iteration":
raw <- data.frame(WorkerId=c(1,1,1,1,2,2,2,2,3,3,3,3),
Iteration = c(1,2,3,4,1,2,3,4,1,2,3,4))
and I want to eventually subset the data frame to exclude the "last" iteration (by creating a "remove" boolean) for each worker. I can write a function to do this:
raw$remove <- mapply(function(wid,iter){
iter==max(raw$Iteration[raw$WorkerId==wid])},
raw$WorkerId, raw$Iteration)
> raw$remove
[1] FALSE FALSE FALSE TRUE FALSE FALSE FALSE TRUE FALSE FALSE FALSE TRUE
but this gets very slow as the data frame gets larger (presumably because I'm needlessly computing the max for every observation).
My question is what's the more efficient (and idiomatic) way of doing this in the functional programming style. Is it first creating a WorkerId-to-max-value dictionary and then using that as a parameter in another function that operates on each observation?
A:
The "most natural way" IMO is the split-lapply-rbind method. You start by split()-ting into a list of groups, then lapply() the processing rule (in this case removing the last row) and then rbind() them back together. It's all doable as a nested set of function calls. The inner two steps are illustrated here and the final one-liner is presented at the bottom:
> lapply( split(raw, raw$WorkerId), function(x) x[-NROW(x),] )
$`1`
WorkerId Iteration
1 1 1
2 1 2
3 1 3
$`2`
WorkerId Iteration
5 2 1
6 2 2
7 2 3
$`3`
WorkerId Iteration
9 3 1
10 3 2
11 3 3
do.call(rbind, lapply( split(raw, raw$WorkerId), function(x) x[-NROW(x),] ) )
Hadley Wickham has developed a wide set of tools, the plyr package, that extend this strategy to a wider variety of tasks.
A:
For the specific problem posed !rev(duplicated(rev(raw$WorkerId))) or better, following Charles' advice, !duplicated(raw$WorkerId, fromLast=TRUE)
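A small sketch of applying that flag to the data above (it reproduces the remove column and then drops the flagged rows):
raw$remove <- !duplicated(raw$WorkerId, fromLast = TRUE)
subset(raw, !remove)  # keeps everything except each worker's last iteration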
|
{
"pile_set_name": "StackExchange"
}
|
Q:
Measuring the user experience of Augmented reality application for elderly
I'm conducting a study on the elderly experience of an AR application. The plan was to use the SUS (System Usability Scale) to measure ease of use. Then, after planning the whole thing, I realized I had made a stupid yet large mistake. My audience is Arabic elderly people, and there is no translation of the SUS into Arabic.
My questions are:
What method might I use instead of SUS that will give more quantifiable results? I found other methods, but they mainly focus on websites rather than mobile applications, or are not suitable for the elderly.
If I were to use SUS, what should I do to validate the Arabic version?
I'm open to any suggestion.
A:
Concerning the second question, Blažica & Lewis detail their method of translating SUS to Slovenian in their paper A Slovene Translation of the System Usability Scale: The SUS-SI. They use a method of back-translation and psychometric evaluation to validate the translation.
There were three stages in the translation process. First, 10 reviewers from the fields of computer and natural sciences individually reviewed a draft translation. Second, the final translation incorporated their comments. The third stage was to perform a back-translation. Three independent translators, without reference to the original, translated the final draft back into English. The translators were native Slovene speakers fluent in English. For all 10 items, all three translators provided back-translations with the same meaning as the original and, in some cases, exactly the same wording. For example, Item 9, “I felt very confident using the system,” was back-translated to “I was very self-reliant when using the system,” “I felt very confident using this system,” and “I felt confident when using the system.”
The translated SUS was then used to test Gmail with 182 subjects, who also provided a likelihood-to-recommend (LTR) score to be used for validation. Results were then analysed as follows:
Reliability: assessed using coefficient alpha
Concurrent validity: correlation between SUS and LTR
Construct validity: two-factor solution of the SUS
Sensitivity
Normative comparison: comparing results to evaluation of Gmail using English language SUS.
For the first question I recommend the book Quantifying the User Experience by Lewis & Sauro, in which they present and compare different post-study and post-task standard questionnaires. They don't address the issue of this question but other post-study questionnaires to consider instead of SUS are UMUX with four items and UMUX-LITE with two items.
If you don't have to use a post-study questionnaire, the SEQ (Single Ease Question) is a recommended post-task questionnaire and quite easy to translate. It just asks how difficult the user thought the task was on a seven-point scale, and it could be asked by the person conducting the user test.
If you can't get your hands on the book, I'd recommend Sauro's website MeasuringU, if you haven't already checked it out.
|
{
"pile_set_name": "StackExchange"
}
|
Q:
What to do with a person who deliberately downvotes correct answers to be in the top of the answers?
Today I saw a behaviour which really surprised me:
I answered the question and then, after a minute, another guy answered it as well. The answer was basically a duplicate of my answer (which was not a surprise, the question was simple). I checked him, because he was quite a reputable guy with 21k reputation.
Then I saw that my answer was downvoted. So was the answer of another guy (not the guy with 21k), who also happened to have a similar answer to mine. I was surprised, because both our answers were correct. But then I decided to check the 21k guy, who was not downvoted, one more time.
To my surprise I saw that he now had +2 downvotes in his statistics. This was extremely lame. He had basically downvoted two correct answers to be on top of the results.
Question:
Is this behaviour acceptable, and what should a person do if he sees or suspects such behaviour? I was lucky because I had two pages open and was able to see the difference in votes, and it happened in less than a minute, so I was pretty sure that he had downvoted correct answers.
A:
The principle behind the site is simple: correct and good answers should get upvotes, while poor and incorrect answers should get downvotes, more or less. Users who downvote or upvote content for tactical reasons only are not really playing the game right. Especially if those tactical votes are "incorrect" in a sense.
Another principle behind the voting system is that votes are anonymous. You don't really know who voted for what. And even if you happen to figure it out, you don't know why this happened. Were those answers really correct? Was there something about them that you missed? Might the voter have missed some of their points? Was it one user who cast all the votes across posts? Was it more than one user?
The thing is, it's pretty human to want to see patterns, or to confirm our suspicions. So you look at whatever information is available to you, and you use it to confirm the ideas you already had. The thing is though, that you're still pretty likely to be wrong. A ton of things could have happened. And even if it was a single user downvoting all competing answers, you'd still have to establish it was for tactical reasons.
So please don't go around and accuse others. No good things will come of that. If you want to know why you received a downvote, at most put a comment below your answer asking "Why the downvote? Is there anything I can address to take away the concerns?". If there is no response, you'll just have to move on. If your answer is correct, it should in general receive its fair share of upvotes. And if there is a response, with a legitimate explanation, it might well help you improve something you did not think of.
And if you really suspect foul play, you could potentially flag for moderator attention, clearly explaining what you think happened. But I'd personally not do so for a one-off deal. I'd have to have a pretty big suspicion of this particular user doing so time and time again, before I'd ask moderators (who might even have to escalate it higher up) to investigate such an issue.
|
{
"pile_set_name": "StackExchange"
}
|
Q:
Cordapp Web Application
Is there any documentation on how to build a web application with Corda as the underlying technology? I need to be able to create a web application (HTML and JS) to visualize the states and transactions of each node.
A:
There are several approaches:
Using SpringBoot to create a webserver. You can find a template here, and a detailed example implementation here.
Using Braid server and Open API generator. You can find an example that uses ReactJS here.
I wrote about Braid server in this article.
Samples are also written in Kotlin. Here's the deprecated repo that was split into Java samples, and Kotlin samples.
|
{
"pile_set_name": "StackExchange"
}
|
Q:
How can I hide or remove a menu item?
I have the following snippet of code that creates a list of navigation items (a menu):
<ul class="nav">
<li><%= link_to "Log in", login_path %></li>
<li><%= link_to "Help", help_path %></li>
<% if logged_in? %>
<li><%= link_to "Home", root_path %></li>
<% end %>
</ul>
When I am not logged in, the menu shows as:
Log in Help
When I do log in, it shows as
Log in Help Home
After logging in, I'd like to:
hide or remove the log in menu item and
rearrange the remainder menu items so that Home is first and Help is next.
A:
You just need to arrange them properly and use the condition properly
<ul class="nav">
<% if logged_in? %>
<li><%= link_to "Home", root_path %></li>
<% else %>
<li><%= link_to "Log in", login_path %></li>
<% end %>
<li><%= link_to "Help", help_path %></li>
</ul>
Explanation:
The first if-else checks for a logged-in user and will render the Home <li> if logged in, or the Log in <li> if not logged in.
The last <li> (Help) will always be displayed whether the user is logged in or not.
|
{
"pile_set_name": "StackExchange"
}
|
Q:
MySQL SELECT two records from other table?
NOTE: This is not the same as this question, as I need to get data from two other records, not two fields from one other record!
MySQL newb. I have two tables, and I want to get data from both of them so I have the following:
wp_bowling_fixtures
fixture_id | fixture_date | home_team_id | away_team_id
-----------+--------------+--------------+-------------
1 | 2017-12-12 | 1 | 2
2 | 2017-12-12 | 3 | 4
3 | 2017-12-12 | 5 | 6
4 | 2017-12-12 | 7 | 8
5 | 2017-12-12 | 9 | 10
wp_bowling_teams
team_id | name | division | archived
--------+--------+----------+---------
1 | Team A | 1 | 0
2 | Team B | 1 | 0
3 | Team C | 2 | 1
4 | Team D | 2 | 0
5 | Team E | 3 | 0
6 | Team F | 3 | 0
7 | Team G | 4 | 0
8 | Team H | 4 | 1
9 | Team I | 4 | 0
10 | Team J | 4 | 0
The result I want a SELECT query to produce:
fixture_id | fixture_date | home_team_id | home_team_name | home_team_archived | home_team_division | away_team_id | away_team_name | away_team_archived | away_team_division
-----------+--------------+--------------+----------------+--------------------+--------------------+--------------+----------------+--------------------+-------------------
1 | 2017-12-12 | 1 | Team A | 0 | 1 | 2 | Team B | 0 | 1
I also want it ordered by fixture_date DESC, home_team_division ASC, home_team_name ASC.
Hope that makes sense.
TIA,
Nick.
A:
SELECT f.fixture_id, f.fixture_date,
       h.team_id AS home_team_id, h.name AS home_team_name, h.archived AS home_team_archived, h.division AS home_team_division,
       a.team_id AS away_team_id, a.name AS away_team_name, a.archived AS away_team_archived, a.division AS away_team_division
FROM wp_bowling_fixtures f, wp_bowling_teams h, wp_bowling_teams a
WHERE f.home_team_id = h.team_id
  AND f.away_team_id = a.team_id
ORDER BY f.fixture_date DESC, h.division ASC, h.name ASC;
Works.
|
{
"pile_set_name": "StackExchange"
}
|
Q:
Produce a tree diagram to display football matchup
I am working on a Math Project involving football matches. I hope I can produce a tree diagram like this. Could anyone help me with the basic code? I feel producing this diagram requires tikz, which I am not good at. Thanks for your help!
By the way, I do not need the text regarding the date and the place of the matches. Thanks again for your attention!
A:
By defining a basic block style and using node alignment, this is one possibility.
Code
\documentclass[border=15pt]{standalone}
\usepackage{tikz}
\usepackage{graphicx}
\usetikzlibrary{positioning,arrows,calc}
\tikzset{block/.style = {rectangle, draw,fill=blue!10,
text width=5cm, minimum height=0.5cm},
}
\begin{document}
\begin{tikzpicture}
\path (0,0)
node[block,text centered](a){Round of 16}
node[block,text centered,right=2cm of a](b){Quarter-finals}
node[block,text centered,right=2cm of b](c){Semi-finals}
node[block,text centered,right=2cm of c](d){Finals};
\node[block,below=0.5cm of a] (r1){\includegraphics[scale=0.05]{example-image-a}\,Brazil\hfill 1(2)};
\node[block,below=0cm of r1] (r2){\includegraphics[scale=0.05]{example-image-b}\,Chile\hfill 1(2)};
\node[block,below=0.5cm of r2] (r3){\includegraphics[scale=0.05]{example-image-a}\,Brazil\hfill 1(2)};
\node[block,below=0cm of r3] (r4){\includegraphics[scale=0.05]{example-image-b}\,Chile\hfill 1(2)};
\node[block,below=0.5cm of r4] (r5){\includegraphics[scale=0.05]{example-image-a}\,Brazil\hfill 1(2)};
\node[block,below=0cm of r5] (r6){\includegraphics[scale=0.05]{example-image-b}\,Chile\hfill 1(2)};
\node[block,below=0.5cm of r6] (r7){\includegraphics[scale=0.05]{example-image-a}\,Brazil\hfill 1(2)};
\node[block,below=0cm of r7] (r8){\includegraphics[scale=0.05]{example-image-b}\,Chile\hfill 1(2)};
\node[block,below=0.5cm of r8] (r9){\includegraphics[scale=0.05]{example-image-a}\,Brazil\hfill 1(2)};
\node[block,below=0cm of r9] (r10){\includegraphics[scale=0.05]{example-image-b}\,Chile\hfill 1(2)};
\node[block,below=0.5cm of r10](r11){\includegraphics[scale=0.05]{example-image-a}\,Brazil\hfill 1(2)};
\node[block,below=0cm of r11] (r12){\includegraphics[scale=0.05]{example-image-b}\,Chile\hfill 1(2)};
\node[block,below=0.5cm of r12](r13){\includegraphics[scale=0.05]{example-image-a}\,Brazil\hfill 1(2)};
\node[block,below=0cm of r13] (r14){\includegraphics[scale=0.05]{example-image-b}\,Chile\hfill 1(2)};
\node[block,below=0.5cm of r14](r15){\includegraphics[scale=0.05]{example-image-a}\,Brazil\hfill 1(2)};
\node[block,below=0cm of r15] (r16){\includegraphics[scale=0.05]{example-image-b}\,Chile\hfill 1(2)};
%------------
\node[block,below=1.5cm of b] (q1){\includegraphics[scale=0.05]{example-image-a}\,Brasil\hfill 1(2)};
\node[block,below=0cm of q1] (q2){\includegraphics[scale=0.05]{example-image-b}\,Colombia\hfill 1(2)};
\node[block,below=2.8cm of q2](q3){\includegraphics[scale=0.05]{example-image-a}\,Brasil\hfill 1(2)};
\node[block,below=0cm of q3] (q4){\includegraphics[scale=0.05]{example-image-b}\,Colombia\hfill 1(2)};
\node[block,below=2.8cm of q4](q5){\includegraphics[scale=0.05]{example-image-a} Brasil\hfill1(2)};
\node[block,below=0cm of q5] (q6){\includegraphics[scale=0.05]{example-image-b}\,Colombia\hfill 1(2)};
\node[block,below=2.8cm of q6](q7){\includegraphics[scale=0.05]{example-image-a}\,Brasil\hfill 1(2)};
\node[block,below=0cm of q7] (q8){\includegraphics[scale=0.05]{example-image-b}\,Colombia\hfill 1(2)};
%-------------semifinal
\node[block,below=3.6cm of c] (s1){\includegraphics[scale=0.05]{example-image-a}\,Brasil\hfill 1(2)};
\node[block,below=0cm of s1] (s2){\includegraphics[scale=0.05]{example-image-b}\, Germany\hfill1(2)};
\node[block,below=7.5cm of s2](s3){\includegraphics[scale=0.05]{example-image-a}\,Brasil\hfill 1(2)};
\node[block,below=0cm of s3] (s4){\includegraphics[scale=0.05]{example-image-b}\,Germany\hfill 1(2)};
%--------- final
\node[block,below=8cm of d] (f1){\includegraphics[scale=0.05]{example-image-a}\,Brasil\hfill 1(2)};
\node[block,below=0cm of f1](f2){\includegraphics[scale=0.05]{example-image-b}\,Colombia\hfill 1(2)};
\node[block,below=3cm of f2,text centered]{Third places};
\node[block,below=4cm of f2](t1){\includegraphics[scale=0.05]{example-image-a}\,Brasil\hfill 1(2)};
\node[block,below=0cm of t1](t2){\includegraphics[scale=0.05]{example-image-b}\,Colombia\hfill 1(2)};
%---- connecting lines
\foreach \i/\j/\k in {1/3/1,5/7/3,9/11/5,13/15/7}{
\draw[thick] (r\i.south east) -- +(1,0) |- (q\k.south west);
\draw[thick] (r\j.south east) -- +(1,0) |- (q\k.south west);
}
\foreach \i/\j/\k in {1/3/1,5/7/3}{
\draw[thick] (q\i.south east) -- +(1,0) |- (s\k.south west);
\draw[thick] (q\j.south east) -- +(1,0) |- (s\k.south west);
}
\foreach \i/\j/\k in {1/3/1}{
\draw[thick] (s\i.south east) -- +(1,0) |- (f\k.south west);
\draw[thick] (s\j.south east) -- +(1,0) |- (f\k.south west);
}
\def\box{1.5cm}
\foreach \i/\j in {1/2,3/4,5/6,7/8,9/10,11/12,13/14,15/16}{
\draw[thick] ([xshift=\box]r\i.north) -- ([xshift=\box]r\j.south);
}
\foreach \i/\j in {1/2,3/4,5/6,7/8}{
\draw[thick] ([xshift=\box]q\i.north) -- ([xshift=\box]q\j.south);
}
\foreach \i/\j in {1/2,3/4}{
\draw[thick] ([xshift=\box]s\i.north) -- ([xshift=\box]s\j.south);
}
\foreach \i/\j in {1/2}{
\draw[thick] ([xshift=\box]f\i.north) -- ([xshift=\box]f\j.south);
}
\draw[thick] ([xshift=\box]t1.north) -- ([xshift=\box]t2.south);
\end{tikzpicture}
\end{document}
A:
This is a forest solution which involves a little less typing than some other possibilities.
The basic idea is to see the diagram as a tree with an invisible root node on the far right. This has two children: the final and the detached third place match. To avoid affecting the main tree, the detached node is drawn after all of the other nodes are typeset and their positions set.
matchnode={<flag-1>}{<team-1>}{<score-1>}{<flag-2>}{<team-2>}{<score-2>} is a style used to define the tabular content for each match. Each <flag> should be the name of the image file containing the relevant <team>'s flag.
round name={<name>} is a style used to define the competition stages marked at the top. The title of the third place match is added directly to the relevant node, since this is a one-off.
The colours are set as lineblue, fillblue and textblue. The overall appearance of all nodes is set using the footnode style which can be modified as desired. The general shape and thickness of the branches is set using forest's edge. If you want straight edges, for example, just say edge={ultra thick}, removing the rounded corners... specification.
\PassOptionsToPackage{dvipsnames,svgnames,x11names,rgb,table}{xcolor}
\documentclass[tikz,border=10pt]{standalone}
\usepackage{array,forest}
\usepackage[T1]{fontenc}
\usepackage[utf8]{inputenc}
\usetikzlibrary{positioning,shadows}
\colorlet{textblue}{blue!75!black}
\colorlet{lineblue}{blue}
\colorlet{fillblue}{blue!10}
\newlength\Tht
\settoheight{\Tht}{T}
\begin{document}
\arrayrulecolor{lineblue}
\begin{forest}
/tikz/footnode/.style={
font=\sffamily,
text=textblue,
draw=lineblue,
inner color=fillblue!75,
outer color=fillblue,
drop shadow,
rounded corners=1pt,
},
matchnode/.style n args=6{
footnode,
inner sep=0pt,
content={\includegraphics[height=\Tht]{#1} \textcolor{textblue}{#2}&\textcolor{black}{#3}\\\hline\includegraphics[height=\Tht]{#4} \textcolor{textblue}{#5}&\textcolor{black}{#6}},
},
round name/.style={
tikz={\node [above=of .center |- pen.north, anchor=mid, footnode, minimum height=4ex] {#1};}
},
match align/.style={
align={l|c},
parent anchor=west,
child anchor=east,
anchor=west,
},
for tree={
edge path={
\noexpand\path [draw, \forestoption{edge}] (!u.parent anchor) -- +(-10pt,0) |- (.child anchor)\forestoption{edge label};
},
l sep+=20pt,
match align,
grow=180,
edge={ultra thick, rounded corners},
footnode,
tier/.wrap pgfmath arg={tier #1}{level()},
},
[, phantom,
before drawing tree={
append={[, match align, matchnode={example-image-a}{Team E}{0}{example-image-b}{Team M}{2}, typeset node, afterthought={\node [above=10pt of .north, anchor=south, footnode, minimum height=4ex] {Third place};}]},
}
[, matchnode={example-image-a}{Team A}{1}{example-image-b}{Team I}{0}, round name=Final
[, matchnode={example-image-a}{Team A}{1}{example-image-b}{Team E}{0}, round name=Semi-Finals
[, matchnode={example-image-a}{Team A}{1}{example-image-b}{Team C}{0}, round name=Quarter-Finals
[, matchnode={example-image-a}{Team A}{1}{example-image-b}{Team B}{0}, name=pen, round name=Round of 16]
[, matchnode={example-image-a}{Team C}{1}{example-image-b}{Team D}{0}]
]
[, matchnode={example-image-a}{Team E}{2}{example-image-b}{Team G}{1}
[, matchnode={example-image-a}{Team E}{1}{example-image-b}{Team F}{0}]
[, matchnode={example-image-a}{Team G}{1}{example-image-b}{Team H}{0}]
]
]
[, matchnode={example-image-a}{Team I}{1}{example-image-b}{Team M}{0}
[, matchnode={example-image-a}{Team I}{1}{example-image-b}{Team K}{0}
[, matchnode={example-image-a}{Team I}{0 (4)}{example-image-b}{Team J}{0 (2)}]
[, matchnode={example-image-a}{Team K}{1}{example-image-b}{Team L}{0}]
]
[, matchnode={example-image-a}{Team M}{1}{example-image-b}{Team O}{0}
[, matchnode={example-image-a}{Team M}{1}{example-image-b}{Team N}{0}]
[, matchnode={example-image-a}{Team O}{1}{example-image-b}{Team P}{0}]
]
]
]
]
\end{forest}
\end{document}
EDIT
Here's another version which uses a freeware font to supply the flag symbols, and a colour series to randomly apply colours to them. Instead of specifying the image file of the relevant flag, the first and fourth arguments of the matchnode style now specify the relevant character from the freeware font.
This solution requires lualatex. Note that xelatex will NOT work. (And, of course, pdflatex, latex etc. won't either as the code requires fontspec.)
\PassOptionsToPackage{dvipsnames,svgnames,x11names,rgb,table}{xcolor}
\documentclass[tikz,border=10pt]{standalone}
\usepackage{array,forest}
\usepackage{fontspec}
\newfontfamily\fflag{Flags.ttf}
\usetikzlibrary{positioning,shadows}
\colorlet{textblue}{blue!75!black}
\colorlet{lineblue}{blue}
\colorlet{fillblue}{blue!10}
\newlength\Tht
\settoheight{\Tht}{T}
% xcolor manual: 34
\definecolorseries{colours}{hsb}{grad}[hsb]{.575,1,1}{.987,-.234,0}
\resetcolorseries[12]{colours}
\begin{document}
\arrayrulecolor{lineblue}
\begin{forest}
/tikz/footnode/.style={
font=\sffamily,
text=textblue,
draw=lineblue,
inner color=fillblue!75,
outer color=fillblue,
drop shadow,
rounded corners=1pt,
},
matchnode/.style n args=6{
footnode,
inner sep=0pt,
content={\raisebox{-.25em}{\fflag\color{colours!!+}#1} \textcolor{textblue}{#2}&\textcolor{black}{#3}\\\hline\raisebox{-.25em}{\fflag\color{colours!!+}#4} \textcolor{textblue}{#5}&\textcolor{black}{#6}},
},
round name/.style={
tikz={\node [above=of .center |- pen.north, anchor=mid, footnode, minimum height=4ex] {#1};}
},
match align/.style={
align={l|c},
parent anchor=west,
child anchor=east,
anchor=west,
},
for tree={
edge path={
\noexpand\path [draw, \forestoption{edge}] (!u.parent anchor) -- +(-10pt,0) |- (.child anchor)\forestoption{edge label};
},
l sep+=20pt,
match align,
grow=180,
edge={ultra thick, rounded corners},
footnode,
tier/.wrap pgfmath arg={tier #1}{level()},
},
[, phantom,
before drawing tree={
append={[, match align, matchnode={E}{Team E}{0}{M}{Team M}{2}, typeset node, afterthought={\node [above=10pt of .north, anchor=south, footnode, minimum height=4ex] {Third place};}]},
}
[, matchnode={A}{Team A}{1}{I}{Team I}{0}, round name=Final
[, matchnode={A}{Team A}{1}{E}{Team E}{0}, round name=Semi-Finals
[, matchnode={A}{Team A}{1}{C}{Team C}{0}, round name=Quarter-Finals
[, matchnode={A}{Team A}{1}{B}{Team B}{0}, name=pen, round name=Round of 16]
[, matchnode={C}{Team C}{1}{D}{Team D}{0}]
]
[, matchnode={E}{Team E}{2}{G}{Team G}{1}
[, matchnode={E}{Team E}{1}{F}{Team F}{0}]
[, matchnode={G}{Team G}{1}{H}{Team H}{0}]
]
]
[, matchnode={I}{Team I}{1}{M}{Team M}{0}
[, matchnode={I}{Team I}{1}{K}{Team K}{0}
[, matchnode={I}{Team I}{0 (4)}{J}{Team J}{0 (2)}]
[, matchnode={K}{Team K}{1}{L}{Team L}{0}]
]
[, matchnode={M}{Team M}{1}{O}{Team O}{0}
[, matchnode={M}{Team M}{1}{N}{Team N}{0}]
[, matchnode={O}{Team O}{1}{P}{Team P}{0}]
]
]
]
]
\end{forest}
\end{document}
The font I used is 'Flags' by Sunwalk and is available here.
|
{
"pile_set_name": "StackExchange"
}
|
Q:
Java based Thread Pool: How to test performance?
I have written a thread pool system in Java that dynamically grows and shrinks based on request frequency (requests per second). Each request is a Runnable object that sleeps for a random time interval to simulate disk I/O. I need to compare my thread pool's performance with Java's ThreadPoolExecutor. Kindly help me with the following points.
Is there any simulation tool/benchmark tool available to test these thread pools?
If not, then how will I test the performance of my thread pool against Java's ThreadPoolExecutor?
Can I use JMeter in this scenario?
Can I embed my thread pool inside Tomcat to compare its performance with Tomcat's ExecutorService?
A:
You can compile your code as a .jar and add it to JMeter's classpath.
Then you can use JSR223 Sampler to call your methods and compare them with ThreadPoolExecutor using JMeter Listeners, for instance Aggregate Report
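For instance, a minimal Groovy sketch of what the JSR223 Sampler script could look like (the pool class and its methods below are hypothetical placeholders for whatever API your jar exposes):
import com.example.MyThreadPool          // hypothetical class packaged in your jar
def pool = new MyThreadPool(4)           // assumed constructor: initial pool size
(1..1000).each {
    // simulate a disk I/O request, as in the question
    pool.submit({ Thread.sleep((long) (Math.random() * 50)) } as Runnable)
}
pool.shutdownAndAwait()                  // assumed shutdown/await method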
While developing your code make sure you:
use "groovy" as a language (you'll need groovy-all.jar in classpath as well)
set "Compilation Key" to something unique for each sampler
You can also consider calling your code from a Java Request Sampler, but it will be harder to debug and make changes, as you'll have to recompile it each time, while with JSR223 and Groovy you can edit the script in place. See Beanshell vs JSR223 vs Java JMeter Scripting: The Performance-Off You've Been Waiting For! for more details.
|
{
"pile_set_name": "StackExchange"
}
|
Q:
Can I clone a Promise?
I have an asynchronous function that returns a promise. The operation should only be performed once. I want all callers of that function to get back the same Promise, but I don't want .catch()es of one caller to affect another caller. Can I clone a promise, or implement this in another way?
A:
but I don't want .catch()es of one caller to affect another caller.
They never[1] do (unless you've chained the callbacks, which you don't).
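For example, a small sketch (the function names are just placeholders):
const p = doOperationOnce();              // the single shared promise
p.then(useResult).catch(handleErrorA);    // caller A
p.then(useResult).catch(handleErrorB);    // caller B; A's catch has no effect here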
I want all callers of that function to get back the same Promise
Just do it. Promises are immutable values[2].
Can I clone a promise?
If you really need[3] a distinct object that will follow the original promise (fulfill when it fulfills or reject when it rejects), you can use the then method without arguments:
var clone = promise.then();
console.assert(clone !== promise);
1: Assuming you use a proper promise library. I think I can remember a case of a library (old jQuery?) where then callback results changed the state of the promise.
2: In their resolving behaviour, at least. Every promise is still just an object of course.
3: You don't. You really should not. I'm just answering the title question, but you should stop doing weird stuff.
|
{
"pile_set_name": "StackExchange"
}
|
Q:
Setting the classpath for JAR files
I have recently created a Java project using Eclipse that requires 2 JAR files (phidget21.jar and the mysql.jar).
Everything works fine when running the programme in Eclipse, and I have noticed that the jar files are saved in a 'lib' folder.
I am soon going to be moving the programme off my computer to be used on other machines, so I decided to create a batch file to compile all of the classes and then run the programme.
However, I am having trouble with the locating of the jar files. In the batch file do I require a command something like: set classpath=.:..;mysql.jar:../phidget21.jar, before the compilation of the Java classes?
I have read that the dots (...) have something to do with directories but not entirely sure how to implement them.
My programme is currently saved in these locations:
Project/src/ .java files (I have also put the .jar files in here as well, as I thought this may make things easier)
Project/lib/ .jar files
Any help would be greatly appreciated!
A:
While setting the classpath, a single dot (.) means the current directory. As your jar files are in the current directory, you just need to go to that directory using the cd command in the DOS prompt, then use
set classpath=.;filename.jar;anotherfilename.jar
Here . represents the current directory and the semicolon separates each classpath entry (note that there should be no spaces around the =).
You can even set the classpath for more than one jar file using the wildcard character *, which can be read as "all".
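For example, a small sketch assuming the layout from the question and a placeholder main class called Main (run from the Project folder):
cd src
set classpath=.;..\lib\*
javac Main.java
java Main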
|
{
"pile_set_name": "StackExchange"
}
|
Q:
How do I use a private Dockerhub image from my Synology NAS
Usually I have a public Dockerfile that I build into a public Docker Hub repository (songkong/songkong); then I can run it on my Synology by just searching for the tag in the registry.
But I am making some changes in my private Docker Hub repo (songkong/songkongdockerdev) before public release. I have built the image okay on Docker Hub, and I guess I use Image/Add from URL in Synology,
but I cannot get the syntax correct for the Hub Page or Repository field. I tried a few things such as
https://hub.docker.com/repository/docker/songkong/songkongdockerdev
songkong/songkongdockerdev
songkongdockerdev/latest
what should it be ?
A:
I had the same problem for a long time.
Instead of https://hub.docker.com/repository/docker/songkong/songkongdockerdev
use https://hub.docker.com/r/songkong/songkongdockerdev
Fill in your docker hub username and password.
|
{
"pile_set_name": "StackExchange"
}
|
Q:
What is the difference between 会社 and かいしゃ?
I'm a beginner in Japanese language. I'm confused as to where I should use kanji or hiragana.
For example, "Company" is written as
会社 in kanji
かいしゃ in hiragana
What is the difference between those two form writings? Which one should I use?
A:
Usually, a common word like kaisha will only ever be written as かいしゃ instead of 会社 in these two cases:
When accommodating young children or non-Japanese speakers who might not be able to read kanji (yet).
For stylistic/typographic purposes. For example, as part of an all-hiragana name of a company on a billboard. Just another way to stand out in an attempt to catch your attention, I suppose.
|
{
"pile_set_name": "StackExchange"
}
|
Q:
WCF Service not available from public IP
I have a WCF Windows service self-hosted over HTTPS with Transport security at https://myserver.mydomain.com .
When I try to open https://myserver.mydomain.com in any browser locally (meaning from the WCF server, for instance), I get an HTTP 502 error. Same issue if I use its public IP. It works if I use its internal IP, but I need to manually accept the SSL certificate because of the wrong server name.
Please note that https://myserver.mydomain.com is accessible from the internet, so from outside my prod environment!
Could you help me to identify where I should focus my effort to solve it?
Is it due to binding configuration? X509 certificate? App.config? Elsewhere?
Thanks!
A:
OK, my issue was due to a NAT rule which had a strange behavior. This rule forwarded all packets from an internal IP to the server's internal IP. We added a new rule to go through the firewall and all is working fine now!
So, only one recommendation if someone else has this issue: be careful with your firewall configuration (Sophos in my case)...
|
{
"pile_set_name": "StackExchange"
}
|
Q:
Setting the selected value on a Django forms.ChoiceField
Here is the field declaration in a form:
max_number = forms.ChoiceField(widget = forms.Select(),
choices = ([('1','1'), ('2','2'),('3','3'), ]), initial='3', required = True,)
I would like to set the initial value to be 3 and this doesn't seem to work. I have played about with the param, quotes/no quotes, etc... but no change.
Could anyone give me a definitive answer if it is possible? And/or the necessary tweak in my code snippet?
I am using Django 1.0
A:
Try setting the initial value when you instantiate the form:
form = MyForm(initial={'max_number': '3'})
A:
This doesn't touch on the immediate question at hand, but this Q/A comes up for searches related to trying to assign the selected value to a ChoiceField.
If you have already called super().__init__ in your Form class, you should update the form.initial dictionary, not the field.initial property. If you study form.initial (e.g. print self.initial after the call to super().__init__), it will contain values for all the fields. Having a value of None in that dict will override the field.initial value.
e.g.
class MyForm(forms.Form):
def __init__(self, *args, **kwargs):
super(MyForm, self).__init__(*args, **kwargs)
# assign a (computed, I assume) default value to the choice field
self.initial['choices_field_name'] = 'default value'
# you should NOT do this:
self.fields['choices_field_name'].initial = 'default value'
A:
You can also do the following. in your form class def:
max_number = forms.ChoiceField(widget = forms.Select(),
choices = ([('1','1'), ('2','2'),('3','3'), ]), initial='3', required = True,)
then when calling the form in your view you can dynamically set both initial choices and choice list.
yourFormInstance = YourFormClass()
yourFormInstance.fields['max_number'].choices = [(1,1),(2,2),(3,3)]
yourFormInstance.fields['max_number'].initial = [1]
Note: the initial value has to be a list and the choices have to be 2-tuples; in my example above I have a list of 2-tuples. Hope this helps.
|
{
"pile_set_name": "StackExchange"
}
|
Q:
SQL Server query with multiple select
I have a query which select all car makes and count each make quantity in response
SELECT
q.Make, Count(q.ID)
FROM
(SELECT
cars.ID, cars.Make, cars.Model,
cars.Year1, cars.Month1, cars.KM, cars.VIN,
cars.Fuel, cars.EngineCap, cars.PowerKW,
cars.GearBox, cars.BodyType, cars.BodyColor,
cars.Doors, cars.FullName, Transport.address,
(DateDiff(second,Getdate(),cars.AuEnd)) as r,
cars.AuEnd, cars.BuyNowPrice, cars.CurrentPrice
FROM
cars
LEFT JOIN
Transport ON Cars.TransportFrom = Transport.ID
WHERE
Active = 'True'
AND AuEnd > GETDATE()
AND year1 >= 1900 AND year1 <= 2015
AND Make in ('AUDi', 'AIXAM', 'ALPINA')
ORDER BY
cars.make ASC, cars.model ASC
OFFSET 50 ROWS FETCH NEXT 50 ROWS ONLY) AS q
GROUP BY
q.make ORDER BY q.make ASC;
Now I need to get, as a third field in the result, each make's total count without the offset.
Right now I get this result:
Make CountInResponse
AIXAM 1
ALPINA 1
AUDI 48
But I need to get
Make CountInResponse Total
AIXAM 1 1
ALPINA 1 1
AUDI 48 100
I think I need something like
SELECT
q.Make, Count(q.ID),
(SELECT Make, Count(ID)
FROM cars
WHERE Active = 'True' AND AuEnd > GETDATE()
AND year1 >= 1900 AND year1 <= 2015
AND Make in ('AUDI', 'AIXAM', 'ALPINA')
GROUP BY Make) as q2
FROM
(SELECT
cars.ID, cars.Make, cars.Model,
cars.Year1, cars.Month1, cars.KM, cars.VIN,
cars.Fuel, cars.EngineCap, cars.PowerKW,
cars.GearBox, cars.BodyType, cars.BodyColor,
cars.Doors, cars.FullName, Transport.address,
(DateDiff(second,Getdate(),cars.AuEnd)) as r,
cars.AuEnd, cars.BuyNowPrice, cars.CurrentPrice
FROM
cars
LEFT JOIN
Transport ON Cars.TransportFrom = Transport.ID
WHERE
Active = 'True'
AND AuEnd > GETDATE()
AND year1 >= 1900 AND year1 <= 2015
AND Make in ('AUDi', 'AIXAM', 'ALPINA')
ORDER BY
cars.make ASC, cars.model ASC
OFFSET 50 ROWS FETCH NEXT 50 ROWS ONLY) AS q
But I get an error
Msg 116, Level 16, State 1, Line 10
Only one expression can be specified in the select list when the subquery is not introduced with EXISTS.
How to write right syntax?
A:
The problem is that you are selecting two columns in q2 (i.e. Make and Count(ID)); you cannot do that in a subquery used in the SELECT list in SQL Server.
Try something like this.
WITH cte AS
(
SELECT
row_number() OVER(order by cars.make ASC,cars.model ASC) AS rn,
cars.id, cars.make
FROM
cars
LEFT JOIN
transport ON cars.transportfrom = transport.id
WHERE
active = 'True'
AND auend > getdate()
AND year1 >= 1900 AND year1 <= 2015
AND make IN ('AUDI', 'AIXAM', 'ALPINA')
)
SELECT
make ,
count(CASE WHEN RN BETWEEN 50 AND 100 THEN 1 END) AS countinresponse,
count(1) AS total
FROM
cte
GROUP BY
make
Or you need to convert the sub-query in the SELECT list to a correlated sub-query:
SELECT q.make,
Count(q.id) countinresponse,
(
SELECT Count(id)
FROM cars C1
WHERE c1.id = q.id
AND active='True'
AND auend > Getdate()
AND year1 >= 1900
AND year1 <= 2015
AND make IN ('AUDi',
'AIXAM',
'ALPINA')
GROUP BY make) AS total
FROM (
SELECT cars.id,
cars.make
FROM cars
LEFT JOIN transport
ON cars.transportfrom=transport.id
WHERE active='True'
AND auend > Getdate()
AND year1 >= 1900
AND year1 <= 2015
AND make IN ('AUDi',
'AIXAM',
'ALPINA')
ORDER BY cars.make ASC,
cars.model ASC offset 50 rows fetch next 50 rows only ) AS q
GROUP BY q.make
ORDER BY q.make ASC;
|
{
"pile_set_name": "StackExchange"
}
|
Q:
CUDA driver API equivalent for cudaSetDevice
What is the CUDA driver's API equivalent for the runtime API function cudaSetDevice?
I was looking into the driver API and cannot find an equivalent function. What I can do is
cuDeviceGet(&cuDevice, device_no);
cuCtxCreate(&cuContext, 0, cuDevice);
which is not equivalent since beside setting the device it also creates a context. The runtime API cudaSetDevice does not create a context per se. In the runtime API the CUDA context is created implicitly with the first CUDA call that requires state on the device.
Background for this question: CUDA-aware MPI (MVAPICH2 1.8/9) initialization requires the CUDA device to be set before MPI_init is called. Using the CUDA runtime API this can be done with
cudaSetDevice(device_no);
MPI_init();
However, I don't want to use the call to the CUDA runtime since the rest of my application is purely using the driver API and I'd like to avoid linking also to the runtime.
What's wrong in creating the context already before MPI is initialized? In principle nothing. Just wondering if there is an equivalent call in the driver API.
A:
You can find information about this in the Programming Guide Appendix about the Driver API, but the short version is this:
cuCtxCreate acts as the first cudaSetDevice call (that is it creates a context on the driver context stack)
The cuCtxPushCurrent() and cuCtxPopCurrent() pair (or cuCtxSetCurrent depending on which API version you are using) acts as any subsequent cudaSetDevice call (that is it pushes or selects a previously created context to be the active context for all subsequent API calls until the context is popped off the driver context stack or deselected)
|
{
"pile_set_name": "StackExchange"
}
|
Q:
The model presented or the presented model?
For example, when referencing the model we presented in the paper,
The presented model is helpful to many other related
researches.
or
The model presented is helpful to many other related
researches.
If the word were proposed, I think both are OK, and proposed model might be slightly better in academic writing? What about presented?
I also see sentences like
The model presented in this section gives a brief introduction on the
use of the COMSOL ECRE Version.
A:
"The presented model" - There's only one model, which is presented
"The model presented" - There are several models, and you're talking about the one that is presented
|
{
"pile_set_name": "StackExchange"
}
|
Q:
Fully expanding header macros to get their numeric values for external use
I have a complicated header file with lots of dependencies on other earlier-defined macros that lay out the memory map of an embedded system. For example:
#define RAM_BASE (0x40000000)
#define RAM_SIZE (0x10000000)
#define SECURE_RAM_SIZE (0x00200000)
#define SECURE_RAM_BASE (RAM_BASE + RAM_SIZE - SECURE_RAM_SIZE)
For many reasons I need the result of SECURE_RAM_BASE outside of the compiled program. So, my thought was to use the C preprocessor to expand these macros and awk the result as needed. However, the macro expansion with cpp -dD <file> was exactly as shown above (less extra white space).
I was expecting something like this:
RAM_BASE 0x40000000
RAM_SIZE 0x10000000
SECURE_RAM_SIZE 0x00200000
SECURE_RAM_BASE 0x4fe00000
But it seems (at least as far as I interpret the man page) that such expansion may only be accessible when the macros are actually used in code. Even still, the following code-use:
printf("Base: 0x%x\n", SECURE_RAM_BASE);
Expands to:
printf("Base: 0x%x\n", ((0x40000000) + (0x10000000) - (0x00200000)));
Is there a way to produce a 'fully-computed' expansion result using the C preprocessor?
A:
Is there a way to produce a 'fully-computed' expansion result using the C preprocessor?
Why, yes, there is. (Incidentally, boost preprocessor solves a similar problem using its concept of slots; but the solution here is more tailored to this use case and hex outputs). Comments to your question already lay the ground rules... C preprocessor macro expansion cannot evaluate expressions. But C preprocessor conditional directives can. Conditional directives don't reduce expressions, but with a bit of work you can get the CPP to coax the results out. Since your goal is to simply get results out, you're not actually limited by macro evaluation.
High level approach
Given those constraints, you want a file devoted to evaluating and printing an expression (let's say it's called print_expression.h). The evaluation should be performed by an include directive; i.e., #include "print_expression.h"; this way, we can use the one CPP tool capable of evaluating expressions (viz, conditional directives) to do so. We can simply have this file evaluate the expression EXPRESSION; you can #define this before the include. Since you're going to reuse this for multiple macros that expand to expressions, we may want to preface the evaluation result with EXPRESSION_LABEL, which you can define as something. Since this is a preprocessor program rather than a "normal" header, it can helpfully clean up after itself and skip inclusion guards so you can immediately reuse it.
Driving the solution
So for now ignore the details, and assume this just works... to generate something akin to the outputs you want on your sample header, you would include the header, and then need to pump this utility as follows:
#define EXPRESSION_LABEL 8RAM_BASE
#define EXPRESSION RAM_BASE
#include "print_expression.h"
#define EXPRESSION_LABEL 8RAM_SIZE
#define EXPRESSION RAM_SIZE
#include "print_expression.h"
...and so on. But you don't need this file (assuming your cpp takes stdin); you mentioned awk, so assuming you also have sed and bash (and maybe a gnu like -P flag to your cpp to strip #line directives and clutter):
(echo '#include "complicated_header.h"' ;
echo RAM_BASE,RAM_SIZE,SECURE_RAM_BASE,SECURE_RAM_SIZE | \
sed -e 's/,/\n/g' -e 's/.*/#define EXPRESSION_LABEL 8&\n#define EXPRESSION &\n#include "print_expression.h"/g' ) | cpp -E -P
Something like this is probably what you want to do, since from your question it sounds like you're going to do more processing on the outputs as well given specific extracted evaluated values. Note that I'm prepending a 8 to the expression labels; this in CPP-ese makes it a "pp-number" that can't possibly evaluate... you can strip it out on the output (maybe with a cut -c 2-).
The reusable calculator (by description)
Making print_expression.h isn't too complicated, but it's going to be a big file, so I'll just outline the concept rather than inline it here (but see below). I'll assume you want your output to be an 8-nibble number in hex format. What you want to do then is to define a macro for each nibble in this hex number, to be pasted together when producing the output; the definition of each nibble macro will be given by evaluating a #if/#elif/#else/#endif chain that specifically checks the value for that nibble. To make this a bit easier and more repetitive (so you can copy/paste/replace it into being), you can have a helper macro evaluate EXPRESSION and shift the nibble in. So to get you started, your file looks something like this:
#define RESULT_NIBBLE(NDX_) (((EXPRESSION)>>(NDX_*4))&0xF)
#if RESULT_NIBBLE(7)==0xF
#define RESULT_NIBBLE_7 F
#elif RESULT_NIBBLE(7)==0xE
#define RESULT_NIBBLE_7 E
#elif RESULT_NIBBLE(7)==0xD
#define RESULT_NIBBLE_7 D
...
#elif RESULT_NIBBLE(7)==0x1
#define RESULT_NIBBLE_7 1
#else
#define RESULT_NIBBLE_7 0
#endif
Following this, create a chain to define RESULT_NIBBLE_6 down to RESULT_NIBBLE_0. Once you finally get to the end, you just need to paste all of this onto 0x with an indirect paste, dump the results by invoking the macros, then clean up to make yourself ready for the next usage:
#define HEXRESULT(A,B,C,D,E,F,G,H) HEXRESULTI(A,B,C,D,E,F,G,H)
#define HEXRESULTI(A,B,C,D,E,F,G,H) 0x ## A ## B ## C ## D ## E ## F ## G ## H
EXPRESSION_LABEL HEXRESULT(RESULT_NIBBLE_7,RESULT_NIBBLE_6,RESULT_NIBBLE_5,RESULT_NIBBLE_4,RESULT_NIBBLE_3,RESULT_NIBBLE_2,RESULT_NIBBLE_1,RESULT_NIBBLE_0)
#undef HEXRESULTI
#undef HEXRESULT
#undef RESULT_NIBBLE_7
#undef RESULT_NIBBLE_6
#undef RESULT_NIBBLE_5
#undef RESULT_NIBBLE_4
#undef RESULT_NIBBLE_3
#undef RESULT_NIBBLE_2
#undef RESULT_NIBBLE_1
#undef RESULT_NIBBLE_0
#undef EXPRESSION
#undef EXPRESSION_LABEL
Brief demo
This demo emulates the solution well enough for a single-file online demonstration. Note that lines 27-317 effectively constitute a viable working print_expression.h in full.
|
{
"pile_set_name": "StackExchange"
}
|
Q:
Can't switch workspaces
I can't switch workspaces using the panel or the shortcut [Ctrl+Alt+Left/Right]. When I click on the workspace panel, there is no response. How do I fix this problem?
A:
If you are using Compiz, go to System/Preferences/"CompizConfig Settings Manager" and navigate to the "Desktop Wall" options.
Make sure you are using Desktop Wall, otherwise you may be using "Desktop Cube" in which case you should open "Rotate Cube".
In both cases, first make sure that the check box that enables the feature is activated, then properly configure your key bindings.
If they are already set and your keyboard doesn't activate the function, try a different combination. Please let me know if this didn't help, so we can look for a different solution.
BTW: If you change key bindings and can't return to your initial screen, where "CompizConfig Settings Manager" is, try using Super + E to activate "Expo". When it opens, you can choose the desktop you wish to go to with the mouse, or move your selection with the arrow keys.
Good Luck!
A screenshot is placed here for you to see the Desktop Wall key binding config section.
A:
Check if your Desktop Size ain't set to 1x1.
Compiz > General > "General settings" > Desktop size
Credits: Madara Uchiha's comment above, which led me to remembering this also can be an issue.
I'm posting that as an answer, because I've stumbled on it three times so far, while I've never yet had the problem with cube turned on or bindings lost.
EDIT: added an image since I have localized Ubuntu, so may be off with translations, so I'm highlighting which icons to follow. This is for Ubuntu 14.04.
|
{
"pile_set_name": "StackExchange"
}
|
Q:
functional programming in kotlin - assigning function
There is an arrayOf function in Kotlin. I want to have the same under a different name. I tried:
val each = ::arrayOf<UUID>
val each = ::arrayOf
val each = ::<UUID>arrayOf
val each = arrayOf
I get only compilation errors. Is it possible in Kotlin? How? Or do I have to repeat the whole signature and invocation?
A:
This doesn't work because arrayOf is an inline function with a reified type parameter. It's not possible to store this reified type parameter as part of a function reference or to pass it when invoking the function through a function reference.
If you want to have an alias for this function, you need to define it differently:
inline fun <reified T> each(vararg x: T) = arrayOf(*x)
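A small usage sketch of the alias defined above:
import java.util.UUID
val ids: Array<UUID> = each(UUID.randomUUID(), UUID.randomUUID())
val numbers = each(1, 2, 3)   // inferred as Array<Int>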
|
{
"pile_set_name": "StackExchange"
}
|
Q:
HAML syntax error "expecting $end"
I am trying to DRY up some code in HAML, but seem to have stumbled into a whitespace error that I can't quite wrap my head around.
I took the header and left navigation code and put it into their own respective files in the layouts folder.
In the regular view files, I placed this at the top:
= render 'layouts/header'
= render 'layouts/left_navigation'
/ center content column
.centerContent.left.phm.rbm
/ start main center section
/ and the rest of the code goes below here
In the header file, I have:
/ main container area
#maincontainer
/ main content
#maincontent.mhauto
And in the left navigation file, I have:
/ left navigation column
#leftNav.leftNav.left
/ a bunch of code goes in here
/ end left navigation column
Now, I would expect this to be equivalent to:
/ main container area
#maincontainer
/ main content
#maincontent.mhauto
/ left navigation column
#leftNav.leftNav.left
/ a bunch of code goes in here
/ end left navigation column
/ center content column
.centerContent.left.phm.rbm
/ start main center section
/ and the rest of the code goes below here
But for whatever reason, it is not working correctly and instead gives me this error: syntax error, unexpected keyword_ensure, expecting $end, while pointing to the last line in the file. What am I doing wrong? This is my first time using HAML, so this is rather perplexing to me.
By the way, it worked perfectly fine before I started DRYing up the code, so this seems to be a whitespace thing to me.
A:
You have a render nested inside another render... I think that's the problem:
= render 'layouts/header'
= render 'layouts/left_navigation'
/ center content column
.centerContent.left.phm.rbm
/ start main center section
/ and the rest of the code goes below here
|
{
"pile_set_name": "StackExchange"
}
|
Q:
Manually change MongoID
Through the PHP problem when inserting stuff into MongoDB explained in the answer by Denver Matt in this question I created duplicate IDs in a dataset in MongoDB. Fixing the PHP code is easy, but to be able to still use my dataset I wonder:
Can I change the MongoId manually without breaking something? Or can I just reset this ID somehow into a new one?
A:
The _id field of a document is immutable, as discussed in this documentation. Attempting to modify its value would result in an error/exception, as in:
> db.foo.drop()
> db.foo.insert({ _id: 1 })
> db.foo.update({ _id: 1 }, { $set: { _id: 3 }})
Mod on _id not allowed
> db.foo.find()
{ "_id" : 1 }
If you do need to alter the identifier of a document, you could fetch it, modify the _id value, and then re-persist the document using insert() or save(). insert() may be safer on the off chance that your new _id value conflicts and you'd rather see a uniqueness error than overwrite the existing document (as save() would do). Afterwards, you'll need to go back and remove the original document.
Since you can't do all of this in a single atomic transaction, I would suggest the following order of operations:
findOne() existing document by its _id
Modify the returned document's _id property
insert() the modified document back into the collection
If the insert() succeeded, remove() the old document by the original _id
|
{
"pile_set_name": "StackExchange"
}
|
Q:
Cannot run shell (which) command in PHP code
So I am working with Flyway and I run specific commands using PHP exec() function:
exec('/path/absolute/flyway info');
These commands work as long as I specify the absolute path, but that may vary depending on the machines that it will be working on. That is why I want to use a variable which determines that absolute path, through the command exec('which flyway').
The thing is that this returns a null value, even though when I write it directly in shell I get the desired result. I also tried using the php interactive shell php -a, where if I run the command echo exec('which flyway') it also returns the desired path, altough when I write it directly in my code, I get the NULL result.
Note that if I want to verify the absolute path of php (which php), I can do that in the shell, php -a or in my code, and it returns the desired result in all three cases. So the which flyway command is the only one that has a null result in my code.
Can anyone please help me in this matter ?
A:
which uses the same PATH variable that is used when you run a command without an absolute path. So either the executable is on that PATH, in which case you could call it directly and would not need which at all, or it is not, in which case which cannot find it either.
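For instance, a small sketch to see which PATH the PHP process actually uses (and what which can therefore find):
<?php
var_dump(getenv('PATH'));                    // the PATH as seen by PHP / the web server
var_dump(shell_exec('which flyway 2>&1'));   // whatever which finds on that PATH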
Additionally, keep in mind that not all hosters allow running arbitrary commands from a PHP process. You should not build your application on such a foundation.
|
{
"pile_set_name": "StackExchange"
}
|
Q:
Don’t, she thought, thinking to throw down her purse and run
They turned and followed her past a deserted playground, one of them bump-bumping a stick along an iron fence, the other whistling: these sounds accumulated around her like that gathering roar of an oncoming engine, and when one of the boys, with a laugh, called, “Hey, whatsa hurry?” her mouth twisted for breath. Don’t, she thought, thinking to throw down her purse and run.
Source: T. Capote: "Master Misery"
http://www.sheilaomalley.com/?p=6786
What is the exact meaning of the last sentence? I guess that it could be paraphrased in this way: initially she wanted to throw down her purse and run, and in the end she decided not to do that. I suppose that this is the sort of creative writing that does not follow the grammatical rules, but is there some grammatical pattern from which this (to me) very unusual sentence is derived?
A:
The sentence makes a lot more sense in context - it would be difficult to discern the meaning of the sentence standing on its own.
As I interpret it:
The woman is dreading what these boys will do. They've been following her, and then one of them calls out to her. Internally, she thinks to herself, "Don't!" Meaning that she doesn't want the boys to do anything to her. She then contemplates throwing down her purse and running.
I can't access the linked site from work, but I'd guess that's what it means.
Edit: To clarify, it seems more likely that the "Don't" part of the sentence relates more to the boy's words than it does to the thought of dropping one's purse and running.
|
{
"pile_set_name": "StackExchange"
}
|
Q:
Firebase cannot understand what targets to deploy
When deploying the following hello-world equivalent code I get the error shown in the end:-
$ ls -lR
.:
total 8
-rw-r--r-- 1 hgarg hgarg 3 Aug 29 14:55 firebase.json
drwxr-xr-x 2 hgarg hgarg 4096 Aug 29 11:56 functions
./functions:
total 4
-rw-r--r-- 1 hgarg hgarg 1678 Aug 29 11:56 index.js
firebase.json looks like this:-
{}
and index.js like this:-
'use strict';
const functions = require('firebase-functions');
exports.search = functions.https.onRequest((req, res) => {
if (req.method === 'PUT') {
res.status(403).send('Forbidden!');
}
var category = 'Category';
console.log('Sending category', category);
res.status(200).send(category);
});
But deploying fails:-
$ firebase deploy
Error: Cannot understand what targets to deploy. Check that you specified valid targets if you used the --only or --except flag. Otherwise, check your firebase.json to ensure that your project is initialized for the desired features.
$ firebase deploy --only functions
Error: Cannot understand what targets to deploy. Check that you specified valid targets if you used the --only or --except flag. Otherwise, check your firebase.json to ensure that your project is initialized for the desired features.
A:
It would be better to pre-populate the firebase.json with the default options. Since I chose to use only hosting, the firebase.json should have been created with the default hosting option.
{
"hosting": {
"public": "public"
}
}
Or you can try running firebase init again.
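Since the question is about deploying functions rather than hosting, the analogous minimal firebase.json for a functions-only project would presumably be (assuming the functions directory name from the question):
{
  "functions": {
    "source": "functions"
  }
}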
A:
Faced a similar issue. In the firebase.json file (in the hosting parameter), we have to give the name of the directory that we want to deploy (in my case, I was already in the directory I wanted to deploy, hence I put "." in the hosting specification). It solved the error for me.
{
"hosting": {
"public": ".",
"ignore": [
"firebase.json",
"**/.*",
"**/node_modules/**"
]
}
}
|
{
"pile_set_name": "StackExchange"
}
|
Q:
Monoids as categories; does this construction have a name?
We can view a monoid $M$ as a category with a single object. However, there is another way to make $M$ into a category. Take the elements of $M$ as objects, and define $\mathrm{Hom}(x,y)$ to be set of all triples $(x,y,a)$ such that $ax=y$. Define composition such that $(y,z,b) \circ (x,y,a) = (x,z,ba)$ and identity arrows by $\mathrm{id}_x = (x,x,1).$
I'd like to learn more about this construction. Does it have a name?
A:
What's going on here is that you've allowed the monoid $M$ to operate on the set $M$, and you've successfully expressed this monoid action as a category.
For a general monoid $M$ acting on a set $X$, you can view the elements of $X$ as objects of a category and use the elements of $M$ as arrows between the objects.
|
{
"pile_set_name": "StackExchange"
}
|
Q:
KUSTO: Threshold line in multiple split query
I want to show a threshold for a specific value in a KUSTO query. It seems simple, but it doesn't work, I think, when your query uses multiple 'by' clauses in a summarize.
This works, the line is shown at 4500:
customEvents
| where name == "Event sharepoint monitoring script"
| where timestamp >= ago(14d)
| extend spRequestDuration = customDimensions.["spRequestDuration"]
| summarize avgRequestDuration=avg(todouble(spRequestDuration)), threshold = 4500 by bin(timestamp, 5m) // use a time grain of 5 minutes
| render timechart
But for the query below, no additional threshold line is shown.
customEvents
| where name == "Event sharepoint monitoring script"
| where timestamp >= ago(14d)
| extend spRequestDuration = customDimensions.["spRequestDuration"]
| extend siteType = customDimensions.["SiteType"]
| summarize avgRequestDuration=avg(todouble(spRequestDuration)), threshold = 4500 by tostring(siteType), bin(timestamp, 5m) // use a time grain of 5 minutes
| render timechart
Should I do it in a different way, or is this not supported?
A:
You need to create the "threshold" as a single additional "siteType" series. One way to do it is to union with another data set that contains just the "threshold" as a site of its own. Here is an example:
let events = customEvents
| where name == "Event sharepoint monitoring script"
| where timestamp >= ago(14d)
| extend spRequestDuration = customDimensions.["spRequestDuration"]
| extend siteType = customDimensions.["SiteType"];
events
| summarize avgRequestDuration=avg(todouble(spRequestDuration)) by tostring(siteType), bin(timestamp, 5m) // use a time grain of 5 minutes
| union (events | summarize by bin(timestamp, 5m), siteType="Threshold" | extend avgRequestDuration = 4500.0)
| render timechart
|
{
"pile_set_name": "StackExchange"
}
|
Q:
Group Javascript object based on Array Object list
I'm trying to find a way to convert this list of objects based on the group array. The tricky part I've found is iterating through the group Array and applying the object to more than one place if there are multiple groups.
I'm also trying to ignore any group that does not belong to anything. I've tried using the reduce function but I cannot get the iteration through the group array.
let cars =
[
{
"group":[],
"name": "All Makes",
"code": ""
},
{
"group":["Group A"],
"name": "BMW",
"code": "X821"
},
{
"group":["Group B"],
"name": "Audi",
"code": "B216"
},
{
"group":["Group B"],
"name": "Ford",
"code": "P385"
},
{
"group":["Group B", "Group C"],
"name": "Mercedes",
"code": "H801"
},
{
"group":["Group C"],
"name": "Honda",
"code": "C213"
}
]
To become this:
let cars = {
"Group A": [
{
name: "BMW",
code: "X821",
}
],
"Group B": [
{
name: "Audi",
code: "B216"
},
{
name: "Ford",
code: "P385"
},
{
name: "Mercedes",
code: "H801"
}
],
"Group C":[
{
name: "Mercedes",
code: "H801"
},
{
name:"Honda",
code: "C213"
}
]
};
I already tried using reduce to accomplish this but the grouping doesn't replicate if it's in more than one group.
let result = cars.reduce(function(x, {group, name}){
return Object.assign(x, {[group]:(x[group] || [] ).concat({group, name})})
}, {});
Any pointers to help with this would be much appreciated.
A:
You can use .reduce() to loop through each car object in cars. For each group array for a given car, you can then use .forEach() to then add that group as a key to the accumulator. If the group has already been set in the accumulator, you can grab the grouped array of objects, otherwise, you can create a new array []. Once you have an array you can then add the object to the array using .concat(). Since we're using .forEach() on the group array, it won't add the object to the accumulated object if it is empty as .forEach() won't iterate over an empty array.
See example below:
const cars = [{ "group":[], "name": "All Makes", "code": "" }, { "group":["Group A"], "name": "BMW", "code": "X821" }, { "group":["Group B"], "name": "Audi", "code": "B216" }, { "group":["Group B"], "name": "Ford", "code": "P385" }, { "group":["Group B", "Group C"], "name": "Mercedes", "code": "H801" }, { "group":["Group C"], "name": "Honda", "code": "C213" } ];
const res = cars.reduce((acc, {group, ...r}) => {
group.forEach(key => {
acc[key] = (acc[key] || []).concat({...r}); // copy r so it is a different reference for each grouped array
});
return acc;
}, {});
console.log(res);
|
{
"pile_set_name": "StackExchange"
}
|
Q:
Figuring out how to output data the right way for a chart to work
I've been trying to figure out a way to get data from my database and then display it in a way that a chart can read.
Controller:
$this->Record->virtualFields['sum'] ='COUNT(*)';
$records=$this->Record->find('list',
array('fields' => array('drug_id', 'sum'), // from the table 'drugs' with 2 fields which drug_id and drug
'group' => 'drug_id'));
debug($records);
$chartData[] = implode(', ',$records);
debug($chartData);
How my 'debugs' display on the page:
/app/Controller/RecordsController.php (line 72)
array(
'Trees' => '2',
'Socks' => '1',
'Things' => '9',
'Mice' => '1',
'Clothes' => '6',
'Shoes' => '4',
'Underwear' => '3',
'Tables' => '6',
'Mouse' => '1'
)
/app/Controller/RecordsController.php (line 74)
array(
(int) 0 => '2, 1, 9, 1, 6, 4, 3, 6, 1'
)
As you can see, the implode, for some reason, removes the substance field. I need it to display like this:
'Underwear', '1'
etc
EDIT:
How would I use hasMany to associate the compound's id with the database? Because the same compound will have many different users, and many different compounds will have the same users.
I'm reading the documentation, but I don't know if I'm missing something; it seems like the documentation randomly cuts out, and just tells you what it does and how to start it.
If I were to run the query right now (i edited the query above for this example) I will get values with
'5' => '2'
But I would need to, I assume, use hasMany to associate what will be in my database
ID | Compound
5 | Tables
So that I can display 'tables' instead of 5
A:
I think you are quite a bit confused. Let's take it step by step.
First, implode concatenates the values of the array, not the keys, so it's logical that you get that output; it's not "for some reason". Read the docs about that function, it's very helpful to know what it does.
Second... well, from what I see, you're trying to mix PHP variables with JS. It's "possible", but you need to really understand what you're doing. Google Charts or Highcharts construct graphs with JS, at least based on the output you want to have. It's weird that you're imploding a variable in the controller and printing it just like that in the view.
So, pass the $records variable to the view as is. Do not try to transform it to an js-adequate-structure in the controller. Why? Because you can use $records in any part of your view (as a php variable), and output the necessary structure just in the js part.
Now, the graph. Let's say you're using Google charts because that's the first example I found. Somewhere in your view, you should have something similar to this
<script type="text/javascript">
//definitions of your chart, I'm not going to do that here
//the data part
var data = new google.visualization.DataTable();
data.addRows([
['Mushrooms', 3],
['Onions', 1],
['Olives', 1],
]);
That's the data you want to "translate" from php-find-variable to js, right? For that you can do something like this
//the data part
var data = new google.visualization.DataTable();
data.addRows([
<?php
foreach($records as $compound => $sum)
echo "['".$compound."', ".$sum."], ";
?>
]);
And you should have the js format you need. Check the syntax, though, I didn't test it so maybe there's a comma or something you need to add to that echo.
I hope I made it clear enough. If you're confused about anything here, I recommend reading about using PHP variables in JS.
|
{
"pile_set_name": "StackExchange"
}
|
Q:
How can I get a Steuer-Identifikationsnummer (tax ID no.) a month before my date of employment, without physically moving a month before?
I am a Dutch citizen moving from the United Kingdom to Germany. I have received a conditional employment offer. My next employer has asked me to submit my Lohnsteuerklasse und Steuer-Identifikationsnummer (salary tax classification and tax identification number) at least one month prior to my start of employment. I understand that I can obtain those from Finanzamt (finance office), but apparently only after I have registered with the Meldebehörde (registration authority), for which I apparently need a Wohnungsgeberbestätigung. Is there any way I can get obtain the Lohnsteuerklasse und Steuer-Identifikationsnummer (at least) a month before my start of employment, without physically moving a month before I am due to start (which might mean being unemployed for a month)?
I suppose I would need to rent a place to get a Wohnungsgeberbestätigung, but signing a lease in order to register more than a month before I move, without actually moving, would be costly (double rent), and I fear it may be considered fraudulent, too. Similarly fraudulent may be to temporarily register at my wife's Zweitwohnung (secondary residence) without ever planning to live there (and the landlord may be unwilling to provide a Wohnungsgeberbestätigung for two for an apartment so small it is only fit for one). Is there another way out of this catch?
A:
Your employer is making an unreasonable request. Either they have no idea how things are working in their country or they don't care.
The very first thing you need is an Arbeitsvertrag. You don't need a German tax registration or Anmeldung to get an Arbeitsvertrag.
Once you have an Arbeitsvertrag you can start to search for an apartment. While in theory it's not required, hardly anyone will want to show you the apartment if you have no solid job (except for overpriced apartments, I speak from experience). People are not interested in short-term rentals; they look for candidates with solid employment, and having a choice they will prefer those with stable employment over those with temporary jobs.
Then, having the rental agreement, you can arrange other formalities, like tax and social security registration, opening a bank account etc.
If you have no title for German tax residency (no citizenship, no residence, no job) I doubt you'll be able to register... even if theoretically possible, it would look so suspicious, you'd most likely (once again, speaking from my experience) have big problems arranging it.
|
{
"pile_set_name": "StackExchange"
}
|
Q:
How do I find the formula for this basic algebra problem?
Based on the constant in blue, for each set of known numbers in green, how do I find red? The numbers in red may actually vary a bit $\pm 1.0$, but they are consistent.
I know there has to be a formula for finding this, but for the life of me I can't figure it out.
It's for solving an alignment issue with scaling an image on my website.
$$\huge\color{green}{4}\color{red}{=25}$$
$$\huge \color{green}{3}\color{red}{=37}$$
$$\huge \color{blue}{2=73}$$
$$\huge \color{green}{1.5}\color{red}{=146}$$
$$\huge \color{green}{1.1}\color{red}{=731}$$
A:
Plotting the points you provided in Desmos (really, this is an amazing tool), it seems like $f(x) = \frac{73}{x-1}$ is a good fit for the points you provided. To see exactly how good the fit is you could plot the residuals in that graph too. It's simple and looks like the sort of thing a scaling function would be.
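As a quick check of the fit: the blue pair fixes $c = 73\,(2-1) = 73$, and then $f(4)=73/3\approx 24.3$, $f(3)=36.5$, $f(1.5)=146$ and $f(1.1)=730$, each within the stated $\pm 1.0$ of the red values.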
|
{
"pile_set_name": "StackExchange"
}
|
Q:
jquery datepicker date +1 is adding 10 days
I have following code
$("#in").datepicker({
minDate: 0,
onSelect: function (dateText, inst) {
dateText = new Date(dateText);
dateText = dateText.getDate()+1;
$('#out').datepicker("option", 'minDate', dateText);
}
});
Example:
http://jsfiddle.net/ANYTA/1/
However, the out datepicker is adding 10 days instead of 1 day. What could be modified to make it work as intended? thank you very much
A:
dateText = dateText.setDate(dateText.getDate()+1);
NOTE
somedate.setDate(days);
1) days is an integer
2) Expected values are 1-31, but other values are allowed:
2.1) 0 will result in the last day of the previous month
2.2) -1 will result in the day before the last day of the previous month
3) When the month has 31 days, 32 will result in the first day of the next month
4) If the month has 30 days, then 32 will result in the second day of the next month
Your code should be
$("#in").datepicker({
minDate: 0,
onSelect: function(dateText, inst) {
var actualDate = new Date(dateText);
var newDate = new Date(actualDate.getFullYear(), actualDate.getMonth(), actualDate.getDate()+1);
$('#out').datepicker('option', 'minDate', newDate );
}
});
$("#out").datepicker();
DEMO
A:
Try:
dateText.setDate(dateText.getDate() + 1);
getDate returns a 1-31 day of the month, so adding one to it doesn't make sense.
setDate sets the day of the month, so if you add one day to the day of the month, you're effectively adding one day.
(Also, setDate is smart enough to handle rollovers, i.e. 31 Jan + 1 == 1 Feb)
|
{
"pile_set_name": "StackExchange"
}
|
Q:
'Promise' is not assignable to type 'Promise' error
I am new to Angular 2, and Angular as well, and I am running into this issue related to promises.
I have this file named module.service.ts
import { Injectable } from '@angular/core';
import { Module } from './module.entity';
@Injectable()
export class ModuleService {
getModules(): Promise<Module[]> {
// TODO: Switch to a real service.
return Promise.resolve([{
uiid: "text",
type: "ahahaha"
}]);
}
}
It imports module.entity, which contains this code:
export class Module {
uuid: string = '00000';
type: string = 'TextComponent';
// Maps ModuleSlots to modules
submodules: {[key: string]: [Module]} = {};
}
But running npm start returns this error to me:
Type 'Promise<{ uiid: string; type: string; }[]>' is not assignable to type 'Promise<Module[]>'.
  Type '{ uiid: string; type: string; }[]' is not assignable to type 'Module[]'.
    Type '{ uiid: string; type: string; }' is not assignable to type 'Module'.
      Property 'uuid' is missing in type '{ uiid: string; type: string; }'.
Can anybody gives me a hint about what's not working ?
A:
The error message already gives you a hint:
Property 'uuid' is missing in type '{ uiid: string; type: string; }'
declaration:
uuid: string = '00000';
^^
value:
uiid: "text",
^^
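For illustration, a literal that matches the Module shape might look like the sketch below; note that the submodules property declared on the class may also need to be supplied, since structural typing treats it as required:
getModules(): Promise<Module[]> {
    return Promise.resolve([{
        uuid: "text",      // renamed from uiid
        type: "ahahaha",
        submodules: {}
    }]);
}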
|
{
"pile_set_name": "StackExchange"
}
|
Q:
Extract list from tuples and transpose in python
I have the dataframe given below. I want to extract the first list from the tuples list and transpose the extracted list into columns.
data = {'Document_No':[0.0,1.0], 'list_of_topics': [
([(0, 0.14572892),
(1, 0.014889247),
(11, 0.44593897)],
[(4, [0]), (5, [4]), (6, [11]), (7, [11]), (8, [11, 4]), (9, [11, 4])],
[(4, [(0, 0.9999998)]),
(7, [(11, 0.9999998)]),
(9, [(4, 0.05520946), (11, 0.93936676)])]),
([(0, 0.2453892),
(11, 0.78657897)],
[(4, [0]), (5, [4]), (6, [11]), (7, [11]), (8, [11, 4]), (9, [11, 4])],
[(4, [(0, 0.9999998)]),
(7, [(11, 0.9999998)]),
(9, [(4, 0.05520946), (11, 0.93936676)])])
]}
df = pd.DataFrame(data)
desired result:
Document_No 0 1 11
0 0.0 0.14572892 0.014889247 0.44593897
1 1.0 0.2453892 0 0.78657897
My solution:
pd.DataFrame([[j[0] for j in i] for i in df['list_of_topics']], index=df['Document_No']).transpose()
Out[245]:
Document_No 0.0 1.0
0 (0, 0.14572892) (0, 0.14572892)
1 (4, [0]) (4, [0])
2 (4, [(0, 0.9999998)]) (4, [(0, 0.9999998)])
Not getting the desired result. Can anyone help me out in finding where I am going wrong?
A:
You can pick the required tuples from the column and build a DataFrame from each record, then merge the frames together:
df1 = pd.DataFrame.from_records(df.list_of_topics[0])
for tup in df.list_of_topics[1:]:
df1 = df1.merge(pd.DataFrame.from_records(tup),on=0,how='outer')
df1.set_index(0,inplace=True)
df1.T.reset_index(drop=True)
Out:
0 1 11
0 0.145729 0.014889 0.445939
1 0.245389 NaN 0.786579
|
{
"pile_set_name": "StackExchange"
}
|
Q:
Version Control for Designers / Alternative to Version Cue
I've just been searching around the net for version control that has some support for graphic/Photoshop files. We're currently using BitBucket which is free and unlimited when it comes to file size but doesn't have any design review tools.
Here's what I've found. Does anyone else have any recommendations?
Pixelapse – Free/Beta, works well. Now has layer comp support.
pixelapse.com
Shipment – Private Beta (waiting for an invite)
blog.shipmentapp.com/articles/all_new_shipment_beta
LayerVault (out of business) – Windows/Mac, has layer comp support
layervault.com/support
PixelNovel – SVN Version control for Photoshop
pixelnovel.com
CAD only?! But looks really good.
sunglass.io/features.html
FileTrek – Very enterprise. No price or demo.
filetrek.com/solution/
Kaleidoscopeapp – GIT with image compare
kaleidoscopeapp.com/
Apps that copy files with an incremented version
alternativeto.net/software/autover/
alternativeto.net/software/filehamster/
A:
Version Cue, in my experience, is garbage. I have two systems for two different teams going right now.
SVN via Cornerstone
I've been running a large volume of creative work through SVN via Cornerstone for Mac for over a year now. It's a very slick and easy to use app that makes VC seem easy. It doesn't provide visual previews of the files like I believe PixelNovel does but our detailed change notes have been more than adequate. Cornerstone has been a very robust solution for the localized team I work in.
Git via SourceTree
I also just began coordinating a remote team via Bitbucket.org using SourceTree. Git has a little steeper learning curve at first but it's working well for us. We're essentially following the same principles as the SVN set-up, ie detailed change logs.
The differences
Git operates under the model that each user downloads the whole repository (history and all) to their machine. To keep this manageable, it's best to have a separate repository for each project. It's nice to have a repository that's easy to archive and retire when the project is over.
SVN, on the other hand, allows the user to checkout the latest version of a directory within a repository. If you want to roll back to a previous version you must connect to the server. This is a good system for a centralized repository that contains all the projects under way. I prefer it for a high volume environment where many simultaneous and often interconnected projects are underway.
|
{
"pile_set_name": "StackExchange"
}
|
Q:
Where to find Java 6 JSSE/JCE Source Code?
Where can I download the JSSE and JCE source code for the latest release of Java? The source build available at https://jdk6.dev.java.net/ does not include the javax.crypto (JCE) packages nor the com.sun.net.ssl.internal (JSSE) packages.
Not being able to debug these classes makes solving SSL issues incredibly difficult.
A:
there: openjdk javax.net in the security group
src/share/classes/javax/net
src/share/classes/com/sun/net/ssl
src/share/classes/sun/security/ssl
src/share/classes/sun/net/www/protocol/https
also on this page:
src/share/classes/javax/crypto
src/share/classes/com/sun/crypto/provider
src/share/classes/sun/security/pkcs11
src/share/classes/sun/security/mscapi
These directories contain the core cryptography framework and three providers (SunJCE, SunPKCS11, SunMSCAPI). SunJCE contains Java implementations of many popular algorithms, and the latter two libraries allow calls made through the standard Java cryptography APIs to be routed into their respective native libraries.
A:
I downloaded the src jar from: http://download.java.net/jdk6/source/
NOTE:
This is a self extracting jar, so just linking to it won't work.
... and jar -xvf <filename> won't work either.
You need to: java -jar <filename>
cheers,
jer
|
{
"pile_set_name": "StackExchange"
}
|
Q:
Assign expression to a variable and execute it
I would like to assign the following expression to a variable:
textFormat = 'soup.find("div", {"class" : "article-entry text"}).text.replace(\'\n\', "")'
I am calling this code in another file using
text = exec(textFormat)
Sadly I get the error message:
File "C:\Users\dadsad\documents\coding\dasdasd\functions.py", line 42, in loadAtom
    text = exec(textFormat)
  File "<string>", line 1
    soup.find("div", {"class" : "article-entry text"}).text.replace('
    ^
SyntaxError: EOL while scanning string literal
Any ideas? Thanks! :)
Edit: Tried the suggestion, getting a None:
A:
I suspect you are suffering from backslashitis. You need one more slash before the n:
textFormat = 'soup.find("div", {"class" : "article-entry text"}).text.replace(\'\\n\', "")'
But, instead of execing code this way, you might want to expose a function instead:
def textFormat(soup):
return soup.find("div", {"class" : "article-entry text"}).text.replace('\n', "")
|
{
"pile_set_name": "StackExchange"
}
|
Q:
start nodejs app with certain settings?
Is there a way to make an app with nodejs that can be started with additional parameters?
A few examples:
node myApp.js -nolog
would start the app with the custom noLog=true parameter, so that things would not be console.logged..
node myApp.js -prod
would start the app in a specific set of production settings.
I am not sure if there is anything equivalent in Node already. If this is a duplicate, it is possibly because I was not aware of the right keyword to search for answers to this specific problem.
Enlighten me!
A:
To read command line arguments you need to parse process.argv, or use a 3rd-party module like minimist:
var argv = require('minimist')(process.argv.slice(2));
// do something ...
var config = argv.config;
if (config === 'dev') {
// set the flag
}
Then start your app via node app.js --config=dev.
In the most general case you need to include more than one option, and manually hardcoding them in code is a bad idea. A recommended way is to write them down in a configuration file, then use require to parse it. You can use both .js and .json to store the configuration, but .js is more convenient because the JSON format is too strict; in particular, it does not even allow comments.
So here's a solution. Organize your configurations as follows:
config
├── dev.js
├── production.js
production.js is defined as a "base class", which stores all required settings and exposes them using module.exports.
module.exports = {
db: {
backend: 'mysql',
user: 'username',
password: 's3cr3t'
// ...
}
};
dev.js inherits all properties from production and overrides the values to fit your local env. It's recommended to ignore this file in your version control system (git, SVN, etc.), so your local configuration will not conflict with others in the project. To deep-copy and merge an object, node.extend may help.
var base = require('./production'),
extend = require('node.extend');
var overrides = {
db: {
user: 'root',
password: ''
}
};
module.exports = extend(overrides, base);
|
{
"pile_set_name": "StackExchange"
}
|
Q:
OLE DB Command DT_NTEXT Output Type and XML input
I am working on an SSIS data flow as shown in the image below. Here are the details of the flow.
Getting some records.
Adding a dummy column which is a DT_NTEXT type
This is an OLE DB command which is executing a stored procedure. The output of the stored procedure is XML but is of type NVARCHAR(MAX). The output is populating the dummy field.
Writing the XML from the dummy column to a table.
When the package is executed, the destination DB only gets populated with a < instead of the full XML. If I change the dummy column to type WSTR, the XML is successfully written to the table in full. I need to write the XML to an NVARCHAR(MAX) field, as the XML could be large and exceed the limits of the WSTR type.
Does anyone have an idea what is going on and how I can write my XML to an NVARCHAR(MAX) field?
A:
After running many experiments and searching over the internet, it looks like this is an issue in SSIS, since OLE DB Command cannot be mapped to DT_NTEXT columns:
SSIS 2008 OLE DB Command DT_NTEXT Output Type
As a workaround, you can use a Script Component to get the XML value using a parameterized SqlCommand, and map the output to an output column (no need to create a column using a Derived Column transformation).
Update 1
While searching, I found that this was an open issue in the Microsoft forums:
SSIS: Cannot return XML datatype to SSIS via a stored procedure output variable
The following feedback was given by Microsoft support team:
OLE DB clients handle XML columns (which are not part of the OLE DB spec, but SQL specific) like they would NTEXT fields. The OLE DB provider for the SQL Task currently does not fully support LOB fields - if you store the result in an Object variable, it will return a COM object that points to the stream of data, but not the actual results (which isn't very useful).
There are a couple of workarounds for this problem.
1) Cast the results to varchar, and use a String variable
2) Use an ADO.NET connection instead of OLE DB
Considering the amount of work involved in changing/fixing the current behavior, and that there are workarounds available, we've decided not to fix this issue in this release.
|
{
"pile_set_name": "StackExchange"
}
|
Q:
Bokeh label and hover tool for barchart, python
I am trying to build a bar chart where I have three legends (so three different bar charts) combined with a hoverTool.
At the moment I have built the bar charts with the legend, however I am having problems with the HoverTool. In the code shown below the hover tool shows all three tooltips for all of the charts. I only want the hover tool to show one of the tooltips, so for example for the bar chart 'popPerc' I only want to see '@population' in the hover tool.
p = figure(x_range=df2.district,plot_height=300,plot_width=500,
y_range= ranges.Range1d(start=0,end=25))
p.xgrid.grid_line_color=None
p.ygrid.grid_line_color=None
p.y_range.start = 0
p.xaxis.major_label_orientation = 0.5
p.yaxis.visible = False
p.toolbar_location=None
p.outline_line_color = None
colors = all_palettes['BuGn'][3]
bar = {}
items = []
color = 0
features = ['popPerc','areaPerc','treesPerc']
for indx,i in enumerate(features):
bar[i] = p.vbar(x='district', top=i, source=df2, muted_alpha=0, muted=False,
width= 0.8, color=colors[color])
items.append((i,[bar[i]]))
color+=1
legend = Legend(items=items,location=(0,100))
p.add_tools(HoverTool(tooltips = [('Trees','@trees'),
('Population','@population'),
('Area [km^2]','@area')]))
p.add_layout(legend,'left')
p.legend.click_policy='hide'
show(p)
Hope that someone can help, thanks in advance! :)
A:
After reading up on the article suggested, I figured it out. By changing the code block for the hover tool to the following, it works.
p.add_tools(HoverTool(renderers=[items[0][1][0]], tooltips = [('Population','@population')]))
p.add_tools(HoverTool(renderers=[items[1][1][0]], tooltips = [('Area [km^2]','@area')]))
p.add_tools(HoverTool(renderers=[items[2][1][0]], tooltips = [('Trees','@trees @treePerc')]))
|
{
"pile_set_name": "StackExchange"
}
|
Q:
How to animate objects of a class from within another class?
I'm using Python and tkinter to create a GUI. I've created two classes, App and drawBall. The App class inherits from Tk.Frame. I'm having trouble creating a drawBall object from within App.
I'd appreciate any other feedback about my code as well, I'm fairly new to OOP.
After creating the class App, which inherits from tk.Frame, I'd like to create another class to draw a ball on the screen (using a canvas). I've created a base GUI, but when trying to call the class drawBall, I receive the following error: 'drawBall' object has no attribute 'canvas'.
class App(tk.Frame):
def __init__(self,master):
super().__init__(master)
#create title and size for the window
self.master.geometry("640x360")
self.canvas = tk.Canvas(self.master,relief = 'raised',borderwidth = 1)
self.canvas.grid(row = 0,column = 0,sticky = 'NW')
#create a startSimulation button, place it in the bottom right corner
self.startButton = tk.Button(self.master,text = 'Start',command = self.startCallback)
self.startButton.grid(row = 2,column = 3)
#create a quit button, place it in the bottom right corner
self.quitButton = tk.Button(self.master,text = "Quit",command = self.master.destroy)
self.quitButton.grid(row = 3, column =3)
#callback for start button click
def startCallback(self):
#### this is where the error occurs #####
self.ball1 = drawBall(self.master,self.canvas)
class drawBall():
def __init__(self,master,canvas):
self.canvas.create_oval(25,75,35,85,fill = 'blue')
def moveBall(self):
deltaX = 1
self.canvas.move(self.seed,deltaX,0)
self.canvas.after(50,self.moveBall)
if __name__ == '__main__':
window = tk.Tk()
simulate = App(window)
window.mainloop()
I'd hope that the call "self.ball1 = drawBall(self.master,self.canvas)" would result in the circle being drawn on the screen.
A:
You need a class Ball, that will take a canvas, and has the ability to move itself.
Then, in the App, you create a collection of balls, and order them to move.
something like this:
import tkinter as tk
class App(tk.Frame):
def __init__(self, master):
super().__init__(master)
self.master.geometry("640x360")
self.canvas = tk.Canvas(self.master, relief='raised', borderwidth=1)
self.canvas.grid(row=0, column=0, sticky='NW')
self.startButton = tk.Button(self.master, text='animate', command=self.launch_animation)
self.startButton.grid(row=2, column=3)
self.stopButton = tk.Button(self.master, text='stop', command=self.stop_animation)
self.stopButton.grid(row=3, column=3)
self.quitButton = tk.Button(self.master, text="Quit",command=self.master.destroy)
self.quitButton.grid(row=4, column=3)
self.balls = [Ball(self.canvas)]
self.anim_is_on = False
def stop_animation(self):
self.anim_is_on = False
def launch_animation(self):
if self.anim_is_on: # prevent launching several overlapping animation cycles
return
self.anim_is_on = True
self.animate()
def animate(self):
if not self.anim_is_on:
return
for ball in self.balls:
ball.moveball()
self.after(100, self.animate)
class Ball():
def __init__(self, canvas):
self.canvas = canvas
self.id = self.canvas.create_oval(25, 75, 35, 85, fill='blue')
def moveball(self):
delta_x = 1
self.canvas.move(self.id, delta_x, 0)
window = tk.Tk()
simulate = App(window)
window.mainloop()
|
{
"pile_set_name": "StackExchange"
}
|
Q:
The binding type(s) 'signalR' are not registered
I had it working a day ago and I'm not sure what's changed, but now when I run func start
I get the following errors for my SignalR negotiate and messages Node.js functions. I tried installing Functions Core Tools v2 and v3, but the issue is still there. I appreciate any help!
host.json
{
"version": "2.0",
"extensionBundle": {
"id": "Microsoft.Azure.Functions.ExtensionBundle",
"version": "[1.*, 2.0.0)"
}
}
local.settings.json
{
"IsEncrypted": false,
"Values": {
"AzureSignalRConnectionString": "<endpoint>",
"FUNCTIONS_WORKER_RUNTIME": "node"
},
"Host": {
"LocalHttpPort": 7070,
"CORS": "http://localhost:4200",
"CORSCredentials": true
}
}
negotiate function
module.exports = async function (context, req, connectionInfo) {
context.res.json(connectionInfo);
};
messages function
module.exports = async function (context, req) {
return {
"target": "newMessage",
"arguments": [ req.body ]
};
};
A:
I just tested the nodejs sample provided in this link and it worked fine.
The steps are
1.Download the project.
2.Rename local.settings.sample.json to local.settings.json
3.In local.settings.json, paste the connection string into the value of the AzureSignalRConnectionString settings.
Based on the error message you provided, please make sure the Service Mode of your SignalR Service is Serverless. And please check the CORS configuration in the local.settings.json file.
If you still encounter the 'The binding type(s) 'signalR' are not registered' issue, you can try to delete the extensionBundle settings in the host.json file and run again. It will install the SignalRService extension automatically.
In that case, you might need to add the AzureWebJobsStorage setting in the local.settings.json file.
|
{
"pile_set_name": "StackExchange"
}
|
Q:
Understanding etale cohomology versus ordinary sheaves
I am a physicist trying to understand etale cohomology from Shafaverich, and I would like to check a misunderstanding, undoubtedly.
When defining etale cohomology, it seems it is sheaf cohomology in the sense of right-derived functors, but with the etale site, as opposed to just concerning open subsets.
For concreteness, we fix an etale sheaf $\mathcal F : U \mapsto \mathcal O_U(U)$ where $U$ is a scheme which comes equipped with an etale morphism $f:U\to X$. We could then take an injective resolution, i.e.
$$0\to \mathcal F \to \mathcal I^0 \to \mathcal I^1 \to \mathcal I^2 \to \cdots$$
We can then take sections, i.e. apply $\Gamma(X,-)$:
$$\Gamma(X,\mathcal I^0) \to \Gamma(X,\mathcal I^1) \to \Gamma(X,\mathcal I^2) \to \cdots$$
Taking the cohomology then yields $H^q(X,\mathcal F)$. However, I do not see how this makes use of the "new" version of a sheaf, namely the etale sheaf.
We are applying the etale sheaves to $X$, which belongs to the site used in ordinary sheaf cohomology, so it seems like ordinary sheaf cohomology and etale cohomology should always agree? I don't see from the definition of etale cohomology, how we end up using anything extra, thanks to enlarging the site to etale maps.
A:
As mentioned in the comments, acyclic sheaves for the Zariski site are not the same as those which are acyclic sheaves for the étale site. (see the edit for the reason.)
Consider $\mathbf{P}^1_{\mathbb{C}}$ in the Zariski site and consider the constant sheaf $\mathbb{Z}/n\mathbb{Z}$. Since $\mathbf{P}^1_{\mathbb{C}}$ is irreducible this sheaf is flasque and hence acyclic for Zariski cohomology.
However the étale case is far more interesting. It is well known for cohomology associated to any site that for $G$ a sheaf valued in groups, $H^1(X,G)$ is the isomorphism classes of $G$-torsors (i.e. principal homogeneous spaces for $G$). This is true even if $G$ is non-abelian. It follows from analysis of cocycle conditions.
Now, if $G$ is finite, in the étale case this is equivalent to isomorphism classes of finite étale covers of $Y\to X$ with automorphism group $G$, essentially by fpqc descent. (This is not true in Zariski!).
Thus $H^1_{ét}(X,\mathbb{Z}/n\mathbb{Z})$ classifies all finite étale covers of $X$ with automorphisms $\mathbb{Z}/n\mathbb{Z}.$
Using this one can compute $H^i_{ét}(\mathbf{P}^1_{\mathbb{C}},\mathbb{Z}/n\mathbb{Z}).$
Consider the étale cover $$\mathbf{A}^1_{\mathbb{C}}\coprod \mathbf{A}^1_{\mathbb{C}} \to \mathbf{P}^1_{\mathbb{C}} $$ and it follows from the degeneration of the Čech to derived functor spectral sequence that there is a Mayer-Vietoris sequence,
$$\ldots\to H^{i}_{ét}(\mathbf{P}^1_{\mathbb{C}},\mathbb{Z}/n\mathbb{Z})\to H^{i}_{ét}(\mathbf{A}^1_{\mathbb{C}},\mathbb{Z}/n\mathbb{Z})\oplus H^{i}_{ét}(\mathbf{A}^1_{\mathbb{C}},\mathbb{Z}/n\mathbb{Z})\to H^{i}_{ét}(\mathbf{G}_{m,\mathbb{C}},\mathbb{Z}/n\mathbb{Z})\to H^{i+1}_{ét}(\mathbf{P}^1_{\mathbb{C}},\mathbb{Z}/n\mathbb{Z}) \to \ldots$$
Here $\mathbf{G}_{m,\mathbb{C}}$ is $\operatorname{Spec} \mathbb{C}[t,t^{-1}].$
By Riemann's existence theorem $\mathbf{A}^1_{\mathbb{C}}$ is simply connected, and so is $\mathbf{P}^1_{\mathbb{C}}$ by Riemann-Hurwitz, so the first cohomology groups vanish. The second étale cohomology groups of the affine pieces vanish by Artin vanishing (for an affine variety over a separably closed field, étale cohomology with torsion coefficients vanishes above the dimension).
We are left with computing $H^{1}_{ét}(\mathbf{G}_{m,\mathbb{C}},\mathbb{Z}/n\mathbb{Z})$. But this is the same as classifying finite étale covers of $\mathbf{G}_{m,\mathbb{C}}$. These correspond to degree $n$ maps $\mathbf{G}_{m,\mathbb{C}}\to \mathbf{G}_{m,\mathbb{C}}$ sending $$z\mapsto z^n.$$ It is an instructive exercise to see that there are exactly $\mu_n(\mathbb{C})\cong \mathbb{Z}/n\mathbb{Z}$ (Note the covers need not be connected!) isomorphism classes. Here $\mu_n(\mathbb{C})$ is the $n$-th roots of unity.
Thus we see that $H^2_{ét}(\mathbf{P}^1_{\mathbb{C}},\mathbb{Z}/n\mathbb{Z})=\mathbb{Z}/n\mathbb{Z}.$ (I learned this computation from de Jong's lectures on étale cohomology. )
As you can check this is the same as the singular cohomology of $\mathbf{P}^1_{\mathbb{C}}$ in the analytic topology. Note that $H^0_{ét}(\mathbf{P}^1_{\mathbb{C}},\mathbb{Z}/n\mathbb{Z})=\mathbb{Z}/n\mathbb{Z}$ since $\mathbf{P}^1_{\mathbb{C}}$ is connected.
If you move to a field in char $p>0$ then it is no longer true that the affine line is simply connected. So this method will no longer work, but it hints at being the correct generalisation.
Edit: (A more geometric viewpoint)
I think the confusion of the OP lies in viewing the definition in a very general fashion.
In general one can look at the category of sheaves on any site. This category is called a topos. One may define the cohomology associated to that topos very abstractly.
However, the geometry lies in what the geometric points of the topos look like.
If you buy into the philosophy of sheaf cohomology measuring obstructions to extensions of local to global sections, then the difference between the Zariski cohomology and étale cohomology is that the geometric points carry distinct information.
More precisely, in the category of abelian Zariski sheaves a sequence of sheaves is exact if and only if it is exact on stalks. On taking global sections, one loses some information about local sections. For a constant sheaf however, there are 'enough' sections to patch up to global information.
In the étale case the stalk local condition is still true. However étale stalks are distinct from Zariski stalks. For a complex variety $X$, the local rings for points of the étale topos are strict Henselization of the local rings for the Zariski topos. In particular, if $\mathcal{F}$ is a coherent Zariski sheaf, and $\mathcal{F}^{ét}$ the associated étale sheafification, then at a geometric point $\bar{x}\to X$ an étale (i.e. geometric) point of $X$ one has (we denote the image of the $\bar{x}\to X$ by $x$)
$$\mathcal{F}^{ét}_{\bar{x}}=\mathcal{F}_x\otimes_{\mathcal{O}_x}\mathcal{O}^{sh}_x.$$
where $\mathcal{O}^{sh}_x$ is the strict Henselization of $\mathcal{O}_x$, the local ring at $x$.
Thus the stalk local condition is explicitly different!
|
{
"pile_set_name": "StackExchange"
}
|
Q:
How to navigate out of my php localhost server project (wamp server)?
I am creating a PHP page and using a localhost server (WAMP server).
When I try to navigate via a link to a site outside my project it gives me a 403 error.
I need to know how to get rid of the localhost prefix from the link:
http://localhost/my%20project/%EF%BB%BFhttp://www.damascusuniversity.edu.sy/
my code is
<?php
$link="http://www.damascusuniversity.edu.sy/";
echo( "<a href='");echo $link; echo("'>");
echo $link;
echo( "</a>");
?>
A:
The problem was in the WAMP installation, I think, because when I uninstalled WAMP 2.2 and installed WAMP 3, and then uninstalled it and went back to WAMP 2.2, it worked fine as expected and the links worked fine.
|
{
"pile_set_name": "StackExchange"
}
|
Q:
bamboo xcode unit test - can't find workspace
I'm trying to implement automatic ios unit tests in Bamboo. But I get the error:
AppIOS.xcworkspace does not exist.
Step back: I tried to run my unit tests via a shell script.
xcodebuild \
-workspace AppIOS.xcworkspace \
-scheme App-IOS \
-sdk iphonesimulator \
-destination 'platform=iOS Simulator,name=iPhone 6,OS=9.3' \
test | xcpretty
That works fine.
So I try to "translate" this to Bamboo.
These are my configs.
Before I start my unit test I print the contents of the directory with ls
On the internet I can't find tutorials which are up to date.
Is there an alternative? My goal is to get a report in Bamboo.
FYI: I blur the real app name. So you don't know if I make spelling mistakes. But I really checked this twice!
A:
It took some time but now I figured it out.
I don't know if it's a general issue in Bamboo, it's the normal process or if I do some wrong operations in my tasks before.
Anyway I have explicit to set the current path with ./ at the working directory. The tests will be found and everything is fine.
Don't forget to add the OCUnit Test Parser to the Final Tasks section in your Tasks. Then the Bamboo will print the results in Logs
|
{
"pile_set_name": "StackExchange"
}
|
Q:
SQL find out the hours between 2 date time column
Is it possible to add a condition checking whether two of my time columns differ by more than 15 hours?
Given colTime1, colTime2 in a table, something like:
SELECT * FROM table WHERE (the difference between colTime1 and colTime2) is over 15 hours
A:
Try this (make sure your date format also includes the time; otherwise it returns the difference in days):
DATEDIFF(hour, date1, date2);
For days:
DATEDIFF(date1, date2);
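Applied to the question (a sketch assuming SQL Server's three-argument DATEDIFF and the placeholder names table, colTime1 and colTime2 from the question):
SELECT *
FROM table
WHERE DATEDIFF(hour, colTime1, colTime2) > 15;
Wrap the DATEDIFF call in ABS() if the two columns can be in either order.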
|
{
"pile_set_name": "StackExchange"
}
|
Q:
Node js API REST service auth translate text mail node mailer
I have developed a REST API using Node.js and Express.
I use Nodemailer to send mail when a user registers.
I have an auth service (POST method) that reads the language code passed by the app and sends an email with translated text and a token to validate the registration. The link with the token is a GET request that sends another mail.
How can I tell the GET method which language code is used so it can send a translated mail?
I have implemented this code for token verification:
// Verification token
router.get('/verification/:language/:token/', function(req, res, next) {
const language = req.params.language;
console.log(language);
// Check for validation errors
var errors = req.validationErrors();
if (errors) return res.status(400).send(errors);
// Find a matching token
Token.findOne({ token: req.params.token }, function (err, token) {
//if (!token) return res.status(400).send({status: 'ko',error: {type: 'not-verified', msg: 'We were unable to find a valid token. Your token my have expired.'}} );
if (!token) return res.redirect('https://localhost/expiredtoken');
// If we found a token, find a matching user
User.findOne({ _id: token._userId }, function (err, user) {
if (!user) return res.status(400).send({ msg: 'We were unable to find a user for this token.' });
//if (user.isVerified) return res.status(400).send({status: 'ko',error:{type: 'already-verified', msg: 'This user has already been verified.'}});
if (user.isVerified) return res.redirect('https://localhost/userverified');
// Verify and save the user
user.isVerified = true;
user.save(function (err) {
if (err) { return res.status(500).send({ msg: err.message }); }
//res.status(200).send({status: 'ok', data: { msg: 'The account has been verified. Please log in.'}});
res.redirect('https://localhost/login');
//
var text_email;
if (language == 'en') {
text_email = 'Hi'
}
if (language == 'it') {
text_email = 'Ciao'
}
var client = nodemailer.createTransport(sgTransport(options));
var email = {
from: '[email protected]',
to: user.email,
subject: 'Registration successfully confirmed',
text: text_email
};
client.sendMail(email, function(err, json){
if (err){
return res.status(500).send({ msg: err.message });
}
else {
res.status(200).send({status: 'ok', data: { msg: 'A verification email has been sent to ' + user.email + '.'}} )
}
});
//
});
});
});
});
The verification works and the mail is sent, but the app crashes with this error:
Error [ERR_HTTP_HEADERS_SENT]: Cannot set headers after they are sent to the client
Any help please??
A:
Use GET http://localhost/route?language=en.
If you are using Express you can access the value of language via req.query.language in your route handler.
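A minimal sketch of that approach, assuming the same Express router as in the question:
// e.g. GET http://localhost/verification?language=en&token=abc123
router.get('/verification', function (req, res, next) {
    var language = req.query.language; // 'en' or 'it'
    var token = req.query.token;
    // ... look up the token and pick the translated email text based on language
});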
|
{
"pile_set_name": "StackExchange"
}
|
Q:
The Button in WinForms has a CLICK event. But who tells the Button object that it has been clicked?
I am learning C#. I was going through the Events and Delegates part of the language. I am working on a WinForms application to educate myself. I tried digging deeper to understand Buttons and how they work. I found the following:
1) There is a line public partial class Form1 : Form in my default Form1.cs file. This is a partial class.
2) I also have a Form1.Designer.cs class file that has a line partial class Form1. Now the files mentioned in 1) and 2) combine to form a full class.
3) The Form1.Designer.cs file has a lot of statements that eventually create the button object. It also has a statement that is of particular interest to me:
this.btn_BaseBuildLocation.Click += new System.EventHandler(this.btn_BaseBuildLocation_Click);
This statement adds a custom function to the delegate Click. This delegate is declared in the Control class (System.Windows.Forms.dll) as follows:
public event EventHandler Click;
4) The EventHandler is a delegate defined in System.EventHandler.cs (mscorlib.dll).
5) The Button class inherits the Control class and thus has access to the Click EventHandler.
6) The Button class has all the logic to handle the flow once it knows that someone has clicked it. I had a look at the Button class used in Mono to understand the inner details. I do this for almost all classes that I want to learn.
7) All this is extremely beautiful. But I was troubled by the fact that I did not know how the Button object knows that it has been clicked.
8) I went through VC++ and how it handles events. I found a lot about message loops, event queues etc.
Questions:
1) Is the VC++ way of handling events the same as .NET's?
2) If so, is there a way to look into those details?
Any help would be appreciated.
Thanks.
A:
A WinForms Button is a managed wrapper around the unmanaged Windows type which is created and managed via a set of Win32 API calls that .NET performs P/Invoke on.
Deep down, the button subscribes to the same Window Event Loop (or Message Pump if you prefer) which drives the Win32 API calls you may have seen in VC++ examples. The unmanaged Windows runtime puts events (like "this button has been clicked") onto the event queue. When the loop executes, the queued event is picked up by the relevant control and is propagated into a "managed" event which is when you are able to observe it.
In essence, the Windows runtime is providing much of the infrastructure and .NET only provides a convenient set of wrappers which make it easy to work with the clunky old Win32 libraries.
You can discover a lot of this for yourself if you use Reflector and dig into Button and Control to just see where the .NET code ends and the unmanaged Win32 calls begin.
|
{
"pile_set_name": "StackExchange"
}
|
Q:
Finding a prime in a ring extension using Nakayama's lemma
This is a follow up to my previous question here
if $A \subset B$ is a finite ring extension and $P$ is a prime ideal of $A$ show there is a prime ideal $Q$ of $B$ with $Q \cap A = P$. (M. Reid, Undergraduate Commutative Algebra, Exercise 4.12(i))
We have seen that $PB \not= B$ and I found an example where $PB \cap A \not = A$ (the example is $P=(2)$, $B=\mathbb Z[\tfrac{1}{2}]$). This makes me think $Q$ must be a subset of $PB$ instead of my original idea: the maximal ideal containing $PB$.
So I would like to ask advice on:
How should we find a prime ideal $Q$ of $B$? and show that $Q \cap A = P$?
A:
Suppose that $P$ is a maximal ideal, and let $Q\subset B$ be a maximal ideal containing $PB$ (which exists since $PB\neq B$). Then $Q\cap A$ is a proper ideal of $A$ (as $1\notin Q$) containing $P$, so $Q\cap A=P$ by maximality of $P$.
If $P$ is not maximal, then localize at $A\setminus P$ and find a finite ring extension $A_P\subseteq B_P$. Now use the previous result for the maximal ideal $PA_P$.
|
{
"pile_set_name": "StackExchange"
}
|
Q:
How to write 1, 2, ... as a subscript to a letter?
I have two theta angles, one is \theta -1, other one is \theta -2. How can I write that properly? I want to give numbers to the thetas, putting the numbers at the bottom right end of the letter.
A:
This is called subscript and is activated (in math mode) with _:
\theta_1
\theta_2
You might (should) be interested in further reading, I recommend the Not So Short Introduction to LaTeX2e (surely available in your language).
MWE (some examples)
\documentclass{article}
\begin{document}\noindent
\verb|\theta_1| gives: \( \theta_1 \) \\
\verb|\theta_2| gives: \( \theta_2 \) \\
\verb|\theta_12| gives: \( \theta_12 \) \\
\verb|\theta_{12}| gives: \( \theta_{12} \) \\
\verb|\theta^1| gives: \( \theta^1 \) \\
\verb|\theta^1_2| gives: \( \theta^1_2 \) \\
\verb|\theta_1^2| gives: \( \theta_1^2 \) \\
\verb|\theta_{x,y}^{\frac{1}{2}}| gives: \( \theta_{x,y}^{\frac{1}{2}} \) \\
\end{document}
Output
|
{
"pile_set_name": "StackExchange"
}
|
Q:
PHP 5.5 Static Function Syntax Issue
Running PHP 5.5.38, making a basic static function to check if a file exists.
I've tried a lot of different variations and can't see where I'm going wrong here.
<?php
class Configuration {
static function getDetails() {
private $fileContents;
if(file_exists(".\configuration\config.conf")) {
$this->fileContents = file_get_contents(".\configuration\config.conf");
}
elseif(file_exists("..\configuration\config.conf")) {
$this->fileContents = file_get_contents("..\configuration\config.conf");
}
else { $this->fileContents = "Config File Not Found"; }
// Clear cache
clearstatcache();
if(!$config = str_replace(" ", "", $fileContents)) {
echo "No configuration file";
die();
return false;
}
foreach(explode("\n", $config) as $value) {
$value = trim($value);
if(substr($value, 0, 2) == '//' || substr($value, 0, 2) == '/*' ||
substr($value, 0, 2) == '#' || $value == "\n" || $value == ""){
continue;
}
list($k, $v) = explode("=", $value);
$configTemp[$k] = $v;
}
return (object)$configTemp;
}
}
?>
Error output I'm getting,
Parse error: syntax error, unexpected 'private' (T_PRIVATE) in line 8
A:
You cannot have private $fileContents; inside a method. Visibility modifiers like private are only valid for properties declared at class scope, not for local variables inside a method. (Note that $this is also unavailable inside a static method, so the later $this->fileContents references will fail as well.)
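A minimal sketch of how the start of the method could look with a plain local variable instead:
static function getDetails() {
    // plain local variable; no visibility keyword, no $this in a static method
    $fileContents = "Config File Not Found";
    if (file_exists(".\configuration\config.conf")) {
        $fileContents = file_get_contents(".\configuration\config.conf");
    }
    elseif (file_exists("..\configuration\config.conf")) {
        $fileContents = file_get_contents("..\configuration\config.conf");
    }
    // Clear cache
    clearstatcache();
    // ... continue with the str_replace/explode logic on $fileContents as before
}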
|
{
"pile_set_name": "StackExchange"
}
|
Q:
MySQL: How to create a query that executes using array variables on array indexes?
I have a query with 3 variables that I run over and over. How can I create a query that will accept arrays of values such that I can run the query once and get all the data in one result? Here is an example:
SELECT * FROM table WHERE column 1 = 3 AND column 2 != 35 AND column 3 > 10
SELECT * FROM table WHERE column 1 = 9 AND column 2 != 12 AND column 3 > 293
SELECT * FROM table WHERE column 1 = 6 AND column 2 != 96 AND column 3 > 39
I need the query to execute such that the first values (index 0) of each array get run together, then the second values (index 1) of each array get run together and so on. In other words, I want the query to run using the values [based on above example] (3,35,10) then (9,12,293) and so on.
The query needs to be stand alone meaning I need to be able to pass the 3 arrays via $_POST to a remote server that will get directly plugged into the query and executed on the remote server.
Using IN and NOT IN will not work because those comparison operators do not go in order of array indexes.
Any thoughts would be greatly appreciated. I have tried searching many places for solutions and cannot find anything. This type of query might be called something, so maybe that is why I have found nothing.
A:
In php, you could build a UNION ALL query as follows:
$vals = array(array(3,35,10), array(9,12,293)); //this can be built from $_POST or wherever
$queryArr = array();
foreach ($vals as $arr)
{
$queryArr[] = "SELECT * FROM table WHERE column 1 = $arr[0] AND column 2 != $arr[1] AND column 3 > $arr[2]";
}
$query = implode(' UNION ALL ', $queryArr); //note: if you want to do separate queries, rather than a UNION of them, just use $queryArr
PS: As outis mentioned in his comments, you should avoid SELECT * and specify the fields for the purpose of clarity.
|
{
"pile_set_name": "StackExchange"
}
|
Q:
Multithreaded exception bubbling
I have an application that handles an event callback, in my case it is the DataReceived event on a SerialPort. In that callback I have business logic that needs to have an exception raised on another thread. That thread is waiting for the event listener to send it a message, or let it know an exception has occurred.
What is the best way to retain the stack trace across threads?
A simple passing the thread over to the worker thread and rethrowing it, causes the stack trace to be lost.
A:
It is depending on your approach for example TPL: throw-->
AggregateException.
BackGroundWorker--> you have to take care about the error in result.
Threads--> you have to marshall the error to the main thread.
Tasks--> throw--> AggregateException.
Async/await--> throw also AggregateException (I'm not sure).
Tasks approach offer a continuation to handle exceptions thrown by the antecedent and good error handling.
Async/await very flexible.
BackGroundWroker is legacy but still sometimes required.
Asynchronous programming with callbacks (in your case is also legacy) but it can be used; I recommend you to use the Tasks.
AggregateException: Represents one or more errors that occur during application execution. You will get a list of exceptions(from other thread) in the root AggregateException
|
{
"pile_set_name": "StackExchange"
}
|
Q:
How to convert .flv video into .h264 format with FFmpeg?
I want to convert from .flv to .h264 format.
Problem: I did a conversion from FLV to H264 format but my converted video (.h264) is running so fast (just like we'd click on a fast forward button).
I used the following command:
ffmpeg -i example.flv -sameq -vcodec libx264 -vpre default -ar 22050 output.h264
A:
The solution is that you need a container format. Raw h.264 does not have any timing information for a video which is why your player is playing it very fast.
Also your command is all messed up. Do you want audio in your output or not? If yes then you need to specify a container format which will have both audio and video.
Either change your output to a mp4
ffmpeg -i input_file -c:a copy -c:v libx264 -profile:v baseline out.mp4
or some other container. If you want a video only stream remove the audio options and add -an.
If you want to use AAC audio instead of whatever the FLV file has:
ffmpeg -i input_file -c:a aac -strict -2 -b:a 128k -c:v libx264 -profile:v baseline out.mp4
If your ffmpeg does not have the -c or -profile options, update to a more recent version.
|
{
"pile_set_name": "StackExchange"
}
|
Q:
retrieving data from python bottle using ajax jquery
I am trying to get an HTML page to interact with a Python script which returns a JSON string, using AJAX/jQuery. I installed Bottle for basic URL routing.
The problem is that my browser shows that my AJAX script is receiving a content-length of 5 (which is the string "tring"; see the server.py function showAll()), but somehow it's not stored in the variable result (see the function loader in script.js), and hence I cannot use the returned data.
Can anyone tell me what is going wrong?
my html file looks like
index.html
<!DOCTYPE>
<html>
<head>
<title>
magpie
</title>
<script type="text/javascript" src="js/script.js"></script>
<script type="text/javascript" src="http://ajax.googleapis.com/ajax/libs/jquery/1.7.1/jquery.min.js"></script>
<script type="text/javascript">loader();</script>
</head>
<body>
<div id="main"></div>
</body>
</html>
If I call load() instead of loader() in this file the content is displayed.
the ajax script:
script.js
function loader()
{
$(document).ready(function(){
$.get('http://[localhost]:1111/', function(result){
alert('success');
$('#main').html(result);
});
});
}
function load()
{
$(document).ready(function () {
$.getJSON('data.json', function (data) {
var output = '';
$.each(data, function(key, val) {
output += '<h3>' + val.device + ' ' + val.status + '</h3>';
})
$('#main').html(output);
});
});
}
the two python scripts are :
server.py
from bottle import get, post, request, run, response
import devices
@get('/')
def showAll():
# return devices.data
return "tring"
@post('/act')
def act():
deviceId = request.forms.get('deviceId')
devices.changeState(deviceId)
return
run(host = "[localhost]", port = 1111)
devices.py
import json
data = json.load(open('data.json', 'r+'))
nos = len(data) # Number of devices
def changeState(id):
state = data[id]['status']
if state == 'on':
data[id]['status'] = 'off'
else:
data[id]['status'] = 'on'
json.dump(data, open('data.json', 'w'))
return 1
and finally my json file(if it matters):
{"B1": {"device": "fan", "status": "on"}, "B3": {"device": "light", "status": "off"}, "B2": {"device": "fan", "status": "on"}}
Loading http://[localhost]:1111/ in the browser shows the returned string "tring". Also, the page loads only in Firefox, while other browsers show an access-control-allow-origin error.
This is a screenshot of my browser :
A:
How is your browser receiving "tring" when you are returning "string" from your python script?
I think you will have to set
accessControlAllowCredentials true
I suppose
A:
I had to enable the accessControlAllowOrigin header (thanks to @user3202272, I got the hint from your answer).
Added a few lines to my server.py file and that was it.
Now it looks like
server.py
from bottle import Bottle, request, response, run
import devices
app = Bottle()
@app.hook('after_request')
def enable_cors():
response.headers['Access-Control-Allow-Origin'] = '*'
@app.get('/')
def show():
return devices.data
# return "tring"
@app.post('/act')
def act():
deviceId = request.forms.get('deviceId')
devices.changeState(deviceId)
return
app.run(host = "0.0.0.0", port = 1111)
So basically the problem was with the CORS. You can read more about it here
https://gist.github.com/richard-flosi/3789163
Bottle Py: Enabling CORS for jQuery AJAX requests
thanks to everyone..!! :D
|
{
"pile_set_name": "StackExchange"
}
|
Q:
Changing Destructor order
I need the destructors of the members of a class to be called before the destructor body of the class itself runs.
I know the destruction order is normally the reverse of the construction order, but I need this in a special case.
// PortA
class PortA
{
public:
PortA() { cout << " PortA\n"; }
~PortA() { cout << " ~PortA\n"; }
};
// PortB
class PortB
{
public:
PortB() { cout << " PortB\n"; }
~PortB() { cout << " ~PortB\n"; }
};
class Card
{
public:
Card() { cout << "card\n"; }
~Card() { cout << "~card\n"; }
PortA mPA;
PortB mPB;
};
That produces :
PortA
PortB
card
~card
~PortB
~PortA
But I need in this case:
card
PortA
PortB
~PortB
~PortA
~card
Close the ports first, before the card itself.
A:
Consider managing them explicitly, for example:
class Card
{
public:
Card()
{
cout << "card\n";
mPA = new PortA;
mPB = new PortB;
}
~Card()
{
delete mPB;
delete mPA;
cout << "~card\n";
}
PortA *mPA = nullptr;
PortB *mPB = nullptr;
};
Now if you write something like this:
{
Card c;
}
you'll get what you want.
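A variation on the same idea, sketched with std::unique_ptr (assuming the PortA/PortB classes from the question and C++14 for make_unique): resetting the members explicitly in the destructor body destroys the ports before the card finishes tearing down.
#include <memory>

class Card
{
public:
    Card()
    {
        cout << "card\n";
        mPA = std::make_unique<PortA>();
        mPB = std::make_unique<PortB>();
    }
    ~Card()
    {
        mPB.reset();   // prints ~PortB
        mPA.reset();   // prints ~PortA
        cout << "~card\n";
    }
private:
    std::unique_ptr<PortA> mPA;
    std::unique_ptr<PortB> mPB;
};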
|
{
"pile_set_name": "StackExchange"
}
|
Q:
Using undefined tag name in HTML?
For example,
chapter {
display:block;
}
<book>
Harry Potter
<chapter>
Chapter 1
</chapter>
<chapter>
Chapter 2
</chapter>
</book>
This HTML snippet has clear syntax; however, I am not sure whether it is supported by most browsers. And are there any drawbacks to this approach, such as SEO issues?
A:
You are talking about custom HTML tags here. You can "create" custom HTML elements which is supported by most modern browsers.
Internet Explorer does not recognise any of these tags unless you first 'create' them with JavaScript:
document.createElement('tagName');
Note: All custom elements have display: inline by default which can be modified by CSS or JavaScript. Custom tags are also not valid in HTML5.
|
{
"pile_set_name": "StackExchange"
}
|