Q:
About the new modifier in C#
I don't quite understand the new modifier in C#: in what way does the following code behave differently depending on the presence of the new modifier?
class A
{
    public void F() { Console.WriteLine("A::F()"); }
}
class B : A
{
    public new void F() { Console.WriteLine("B::F()"); }
    public static void Main()
    {
        A a = new B();
        B b = new B();
        A trueA = new A();
        a.F();
        b.F();
        trueA.F();
        Console.ReadLine();
    }
}
Is there any equivalent or similar thing in C++?
A:
The result is the same whether you write public void F() or public new void F() in class B: the method shadows (hides) A.F either way. Since F is not virtual, the call is resolved by the static type of the reference, so a.F() and trueA.F() print "A::F()" while b.F() prints "B::F()".
The difference is that if you omit the new modifier, you get a compiler warning, because the code may be confusing. The new modifier states that you actually intended to hide the base-class member, and didn't just happen to reuse the same name by mistake.
As for C++: hiding works the same way there. A non-virtual member function with the same name in a derived class hides the base-class version by default; C++ simply has no keyword to make that intent explicit.
|
{
"pile_set_name": "StackExchange"
}
|
Q:
Case insensitive string replacement in JavaScript?
I need to highlight, case insensitively, given keywords in a JavaScript string.
For example:
highlight("foobar Foo bar FOO", "foo") should return "<b>foo</b>bar <b>Foo</b> bar <b>FOO</b>"
I need the code to work for any keyword, and therefore using a hardcoded regular expression like /foo/i is not a sufficient solution.
What is the easiest way to do this?
(This is an instance of a more general problem detailed in the title, but I feel it's best tackled with a concrete, useful example.)
A:
You can use regular expressions if you prepare the search string. In PHP, for example, there is a function preg_quote, which replaces all regex metacharacters in a string with their escaped versions.
Here is such a function for JavaScript (source):
function preg_quote(str, delimiter) {
  // discuss at: https://locutus.io/php/preg_quote/
  // original by: booeyOH
  // improved by: Ates Goral (https://magnetiq.com)
  // improved by: Kevin van Zonneveld (https://kvz.io)
  // improved by: Brett Zamir (https://brett-zamir.me)
  // bugfixed by: Onno Marsman (https://twitter.com/onnomarsman)
  // example 1: preg_quote("$40")
  // returns 1: '\\$40'
  // example 2: preg_quote("*RRRING* Hello?")
  // returns 2: '\\*RRRING\\* Hello\\?'
  // example 3: preg_quote("\\.+*?[^]$(){}=!<>|:")
  // returns 3: '\\\\\\.\\+\\*\\?\\[\\^\\]\\$\\(\\)\\{\\}\\=\\!\\<\\>\\|\\:'
  return (str + '')
    .replace(new RegExp('[.\\\\+*?\\[\\^\\]$(){}=!<>|:\\' + (delimiter || '') + '-]', 'g'), '\\$&')
}
So you could do the following:
function highlight(str, search) {
  return str.replace(new RegExp("(" + preg_quote(search) + ")", 'gi'), "<b>$1</b>");
}
A:
function highlightWords(line, word) {
  var regex = new RegExp('(' + word + ')', 'gi');
  return line.replace(regex, "<b>$1</b>");
}
A:
You can enhance the RegExp object with a function that does special character escaping for you:
RegExp.escape = function(str) {
  var specials = /[.*+?|()\[\]{}\\$^]/g; // .*+?|()[]{}\$^
  return str.replace(specials, "\\$&");
}
Then you would be able to use what the others suggested without any worries:
function highlightWordsNoCase(line, word) {
  var regex = new RegExp("(" + RegExp.escape(word) + ")", "gi");
  return line.replace(regex, "<b>$1</b>");
}
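Putting the escaping idea and the case-insensitive replacement together, here is a small self-contained sketch (the helper names escapeRegExp and highlight are just illustrative, not from any library):

```javascript
// Escape every character that has a special meaning in a regular expression,
// so arbitrary user-supplied keywords can be embedded safely.
function escapeRegExp(str) {
  return str.replace(/[.*+?^${}()|[\]\\]/g, "\\$&");
}

function highlight(str, keyword) {
  // 'g' = replace all occurrences, 'i' = case-insensitive;
  // $1 re-inserts the matched text with its original casing preserved.
  return str.replace(new RegExp("(" + escapeRegExp(keyword) + ")", "gi"), "<b>$1</b>");
}

console.log(highlight("foobar Foo bar FOO", "foo"));
// → "<b>foo</b>bar <b>Foo</b> bar <b>FOO</b>"
```

Because the keyword is escaped first, this also behaves sensibly for keywords containing regex metacharacters, e.g. highlight("1+1=2", "1+1") gives "<b>1+1</b>=2".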
Q:
How to regex after equal sign
I need to grab the text after the equal sign
and... I'm not good with regex.
I managed to build this, but it still grabs the string
including the equal sign.
This is what I've got:
(name).\s.(.*)
Example String
| name = New York City
I need to grab only the "New York City" part.
How do I get rid of the equal sign inside the regex?
Any ideas?
A:
You can use this RegExp: name\s*=\s*([\S\s]+)
name matches the literal text "name"
\s* matches any amount of whitespace (greedily), including none
= matches the = sign
([\S\s]+) captures everything that follows, including newlines
Add the i flag to make it case-insensitive. If you want at most one space to be valid, replace \s* with \s{0,1} (i.e. \s?)
Use . instead of [\S\s] (so the match stops at the end of a line) and add the g and m flags to match on multiple lines
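As a quick sketch, applying this pattern to the example string in JavaScript (variable names are just illustrative):

```javascript
// The answer's regex applied to the example line.
var input = "| name = New York City";
var match = input.match(/name\s*=\s*([\S\s]+)/i);
// Group 1 contains only the text after the equal sign;
// "name", "=", and the surrounding whitespace are not captured.
console.log(match ? match[1] : null);
// → "New York City"
```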
Q:
remove duplicates from values in the rows
I have a df of dimension 58000*900 that contains replicated values in its rows; I want to traverse every row and remove the duplicates. An example will make it clearer.
df
IDs Name col1 col2 col3
123 AB.C 1.3,1.3,1.3,1.3,1.3 0,0,0,0,0 5,5,5,5,5
234 CD-E 2,2,2,2,2 0.3,0.3,0.3,0.3,0.3 1,1,1,1,1
568 GHJ 123456 123456 123456
345 FGH 9,9,9,9,9 54,54,54,54,54 0,0,0,0,0
Apparently every value is replicated 5 times, and in some cases there is a problem: there is no . or , separating the values.
What I want is to drop those lines which do not contain either . or , and, for the rest, remove the duplicated values. So the output will be:
IDs Name col1 col2 col3
123 AB.C 1.3 0 5
234 CD-E 2 0.3 1
345 FGH 9 54 0
dput(df)
structure(list(IDs = c(123L, 234L, 568L, 345L), Name = structure(c(1L,
2L, 4L, 3L), .Label = c("ABC", "CDE", "FGH", "GHJ"), class = "factor"),
col1 = structure(c(2L, 3L, 1L, 4L), .Label = c("123456",
"1.3,1.3,1.3,1.3,1.3", "2,2,2,2,2", "9,9,9,9,9"), class = "factor"),
col2 = structure(1:4, .Label = c("0,0,0,0,0", "0.3,0.3,0.3,0.3,0.3",
"123456", "54,54,54,54,54"), class = "factor"), col3 = structure(c(4L,
2L, 3L, 1L), .Label = c("0,0,0,0,0", "1,1,1,1,1", "123456",
"5,5,5,5,5"), class = "factor")), .Names = c("IDs", "Name",
"col1", "col2", "col3"), class = "data.frame", row.names = c(NA,
-4L))
A:
First, we restructure your data into a long format using gather(); then we filter(), keeping only values that contain a , using grepl(). We then split the string in value into a list using strsplit() and make each element of the list its own row using unnest(). We remove duplicated rows using distinct() and spread() the key and value back into columns.
library(dplyr)
library(tidyr)
df %>%
gather(key, value, -(IDs:Name)) %>%
filter(grepl(",", value)) %>%
mutate(value = strsplit(value, ",")) %>%
unnest(value) %>%
distinct %>%
spread(key, value)
Which gives:
#Source: local data frame [3 x 5]
#
# IDs Name col1 col2 col3
# (int) (fctr) (chr) (chr) (chr)
#1 123 AB.C 1.3 0 5
#2 234 CD-E 2 0.3 1
#3 345 FGH 9 54 0
Another idea would be to use cSplit from splitstackshape:
df %>%
cSplit(., c("col1", "col2", "col3"), direction = "long", sep = ",") %>%
group_by(Name) %>%
filter(!any(is.na(.))) %>%
distinct
Which gives:
#Source: local data table [3 x 5]
#Groups: Name
#
# IDs Name col1 col2 col3
# (int) (fctr) (dbl) (dbl) (int)
#1 123 AB.C 1.3 0.0 5
#2 234 CD-E 2.0 0.3 1
#3 345 FGH 9.0 54.0 0
Q:
No good position of Images in BaseAdapter
I have a PullToRefresh ListView with 5 sections (each section is an item of the list), and each section has a number of photos. I have a BaseAdapter class which fills the 5 sections with the section photos. The problem is that the first time I load the ListView, the photos don't appear in the correct position, but when I scroll, the photos appear in the correct position.
I use the BitmapFun code for loading the photos.
I have:
private ArrayList<Pair<Integer, ArrayList<String>>> section_photos_url;
The first value is the section, and the second is an array of photos of each section.
The getView method :
public View getView(int position, View convertView, ViewGroup container) {
    if (position < 5) {
        ItemViewHolder viewHolder;
        // Check whether the view for this position has already been inflated
        if (view_array[position] == null) {
            // Inflate the view
            LayoutInflater li = (LayoutInflater) context.getSystemService(Context.LAYOUT_INFLATER_SERVICE);
            switch (section_photos_url.get(position).first) {
                case ConstantsTypeSection.SECTION1:
                    view_array[position] = li.inflate(R.layout.timeline_section_today_trending, container, false);
                    break;
                case ConstantsTypeSection.SECTION2:
                    view_array[position] = li.inflate(R.layout.timeline_section_trendy_smiler, container, false);
                    break;
                case ConstantsTypeSection.SECTION3:
                    view_array[position] = li.inflate(R.layout.timeline_section_around_me, container, false);
                    break;
                case ConstantsTypeSection.SECTION4:
                    view_array[position] = li.inflate(R.layout.timeline_section_trending_hashtag, container, false);
                    break;
                case ConstantsTypeSection.SECTION5:
                    view_array[position] = li.inflate(R.layout.timeline_section_last_content, container, false);
                    break;
            }
            viewHolder = new ItemViewHolder();
            viewHolder.sectionName = (TextView) view_array[position].findViewById(R.id.section_name);
            viewHolder.userName = (TextView) view_array[position].findViewById(R.id.user_name);
            viewHolder.hashTag = (TextView) view_array[position].findViewById(R.id.hash_tag_name);
            viewHolder.userImage = (ImageView) view_array[position].findViewById(R.id.user_photo);
            viewHolder.item1 = (RecyclingImageView) view_array[position].findViewById(R.id.item_1);
            viewHolder.item2 = (RecyclingImageView) view_array[position].findViewById(R.id.item_2);
            viewHolder.item3 = (RecyclingImageView) view_array[position].findViewById(R.id.item_3);
            viewHolder.item4 = (RecyclingImageView) view_array[position].findViewById(R.id.item_4);
            viewHolder.item5 = (RecyclingImageView) view_array[position].findViewById(R.id.item_5);
            viewHolder.item6 = (RecyclingImageView) view_array[position].findViewById(R.id.item_6);
            viewHolder.item7 = (RecyclingImageView) view_array[position].findViewById(R.id.item_7);
            viewHolder.photos_layout = (LinearLayout) view_array[position].findViewById(R.id.photo_section);
            view_array[position].setTag(viewHolder);
        } else {
            viewHolder = (ItemViewHolder) view_array[position].getTag();
        }
        LinearLayout.LayoutParams lp = new LinearLayout.LayoutParams(LinearLayout.LayoutParams.MATCH_PARENT, LinearLayout.LayoutParams.WRAP_CONTENT);
        switch (section_photos_url.get(position).first) {
            case ConstantsTypeSection.SECTION1:
                viewHolder.sectionName.setText(context.getResources().getString(R.string.today_trending_section));
                viewHolder.photos_layout.addView(getPhotoView(section_photos_url.get(0).second), lp);
                break;
            case ConstantsTypeSection.SECTION2:
                viewHolder.sectionName.setText(context.getResources().getString(R.string.trendy_smiler_section));
                viewHolder.photos_layout.addView(getPhotoView(section_photos_url.get(1).second), lp);
                break;
            case ConstantsTypeSection.SECTION3:
                viewHolder.sectionName.setText(context.getResources().getString(R.string.around_me_section));
                viewHolder.photos_layout.addView(getPhotoView(section_photos_url.get(2).second), lp);
                break;
            case ConstantsTypeSection.SECTION4:
                viewHolder.sectionName.setText(context.getResources().getString(R.string.trending_hashtag_section));
                viewHolder.photos_layout.addView(getPhotoView(section_photos_url.get(3).second), lp);
                break;
            case ConstantsTypeSection.SECTION5:
                viewHolder.sectionName.setText(context.getResources().getString(R.string.last_content_section));
                viewHolder.photos_layout.addView(getPhotoView(section_photos_url.get(0).second), lp);
                break;
        }
        return view_array[position];
    } else {
        return null;
    }
}
private class ItemViewHolder {
    TextView sectionName;
    TextView userName;
    TextView hashTag;
    ImageView userImage;
    RecyclingImageView item1;
    RecyclingImageView item2;
    RecyclingImageView item3;
    RecyclingImageView item4;
    RecyclingImageView item5;
    RecyclingImageView item6;
    RecyclingImageView item7;
    LinearLayout photos_layout;
}
private View getPhotoView(ArrayList<String> photos) {
    View view = null;
    LayoutInflater li = (LayoutInflater) context.getSystemService(Context.LAYOUT_INFLATER_SERVICE);
    switch (photos.size()) {
        case 1:
            break;
        case 3:
            view = li.inflate(R.layout.timeline_item_three, null, false);
            break;
        case 5:
            if (Math.random() * 2 < 1) {
                view = li.inflate(R.layout.timeline_item_three_two, null, false);
            } else {
                view = li.inflate(R.layout.timeline_item_two_three, null, false);
            }
            break;
        case 7:
        default:
            view = li.inflate(R.layout.timeline_item_seven, null, false);
            break;
    }
    RecyclingImageView image = null;
    for (int i = 0; i < photos.size(); i++) {
        switch (i) {
            case 0:
                image = (RecyclingImageView) view.findViewById(R.id.item_1);
                break;
            case 1:
                image = (RecyclingImageView) view.findViewById(R.id.item_2);
                break;
            case 2:
                image = (RecyclingImageView) view.findViewById(R.id.item_3);
                break;
            case 3:
                image = (RecyclingImageView) view.findViewById(R.id.item_4);
                break;
            case 4:
                image = (RecyclingImageView) view.findViewById(R.id.item_5);
                break;
            case 5:
                image = (RecyclingImageView) view.findViewById(R.id.item_6);
                break;
            case 6:
                image = (RecyclingImageView) view.findViewById(R.id.item_7);
                break;
        }
        CacheLoaderImagesSingleton.getInstance().getImageFetcher().loadImage(photos.get(i), image);
    }
    return view;
}
Any suggestions for showing the images correctly? The two screenshots (part of one section) illustrate it: the first shows the wrong result, because all the photos should be purple flowers, not yellow ones; the second shows the correct result.
A:
First: if you are using view_array to store the view for each position in the list, that is wrong. ListView is designed to reuse views, so you should not keep your own per-position view cache.
Second: to have different layouts in a ListView, you use getItemViewType() and getViewTypeCount().
See example below,
public class Adapter extends BaseAdapter {
    private static final int SECTION1 = 0;
    private static final int SECTION2 = 1;
    private static final int SECTION3 = 2;
    private static final int SECTION4 = 3;
    private static final int SECTION5 = 4;
    ...
    @Override
    public View getView(int position, View convertView, ViewGroup parent) {
        View view = null;
        int type = getItemViewType(position);
        if (convertView == null) {
            switch (type) {
                case SECTION1:
                    view = inflater.inflate(layout_id1, parent, false);
                    break;
                case SECTION2:
                    view = inflater.inflate(layout_id2, parent, false);
                    break;
                case SECTION3:
                    view = inflater.inflate(layout_id3, parent, false);
                    break;
                case SECTION4:
                    view = inflater.inflate(layout_id4, parent, false);
                    break;
                case SECTION5:
                    view = inflater.inflate(layout_id5, parent, false);
                    break;
            }
        } else {
            // Based on the value of getItemViewType, the ListView passes back the
            // corresponding layout that getView inflated before, so there is no need to inflate again
            view = convertView;
        }
        ...
        return view;
    }
    // getItemViewType should return a value from 0 to 4, since the count is 5
    // See the docs for more info
    @Override
    public int getItemViewType(int position) {
        // return the section type, which should be in the range 0 to 4
    }
    @Override
    public int getViewTypeCount() {
        return 5;
    }
}
Q:
What does the "3" in printf(3) denote?
Man pages for printf, and online documentation, often show printf(3) when explaining the functionality of printf. What does the "3" denote?
A:
According to man man (the manual page about the manual itself), section 3 of the manual covers library calls. So printf(3) refers to the C library function printf, as opposed to, say, printf(1), the shell command of the same name.
Q:
Entity Framework 4.1 Linq Contains and StartsWith
I am using Entity Framework Code First. I want to query entities from the database against a List of values. This works fine with Contains, but how can I combine it with StartsWith?
This is my code:
List<string> values = new List<string>();
values.Add("A");
values.Add("B");
context.Customer.Where(c => values.Contains(c.Name)).ToList();
How can I query for all customers whose name starts with A or B?
A:
This should work in memory, but I am not sure if it could be translated into SQL by EF:
context.Customer.Where(c => values.Any(s => c.Name.StartsWith(s))).ToList();
Q:
SQL Database multiple values for same attribute - Best practices?
I have found that some attributes of my Person table need to hold multiple values/choices, which is not good SQL practice, so I created a second table, like this:
Before:
Person table
-ID (ex. 101)
-Name (ex. John)
-Accessories (ex. Scarf, Mask, Headband, etc.) - One person can have a combination of these
After:
Person Table
-ID
-Name
PersonDetails Table
-PersonID (FK to Person table)
-Attribute type
-Attribute value
and an example:
Person:
ID:13; Name: John Snow
PersonDetails:
PersonID: 13; Attribute type: Accessories; Attribute value: Scarf
PersonID: 13; Attribute type: Accessories; Attribute value: Mask
You can see that person with ID 13 has both Scarf and Mask.
Is this good practice? What other, more efficient ways are there to do this?
Also, what are the options if an update comes in and Person 13 no longer has Scarf and Mask but only Glasses? (Delete the two rows separately and insert a new one? That means 3 queries for a single modify request.)
A:
I think this is rather an n:m (many-to-many) relationship. You'd need one table Person holding ID, name and the person's other details; another table Accessory with ID, name and the accessory's details; and a third table PersonAccessory storing pairs of PersonID and AccessoryID (this is called a mapping, or junction, table).
Working example (SQL-Server syntax)
CREATE TABLE Person(ID INT IDENTITY PRIMARY KEY, Name VARCHAR(100));
INSERT INTO Person VALUES('John'),('Jim');
CREATE TABLE Accessory(ID INT IDENTITY PRIMARY KEY, Name VARCHAR(100));
INSERT INTO Accessory VALUES('Scarf'),('Mask');
CREATE TABLE PersonAccessory(
    PersonID INT NOT NULL FOREIGN KEY REFERENCES Person(ID),
    AccessoryID INT NOT NULL FOREIGN KEY REFERENCES Accessory(ID)
);
INSERT INTO PersonAccessory VALUES(1,1),(2,1),(2,2);
SELECT p.Name,
       a.Name
FROM PersonAccessory AS pa
INNER JOIN Person AS p ON pa.PersonID=p.ID
INNER JOIN Accessory AS a ON pa.AccessoryID=a.ID;
GO
--DROP TABLE PersonAccessory;
--DROP TABLE Accessory;
--DROP TABLE Person;
The result
John Scarf
Jim Scarf
Jim Mask
Q:
.NET MVC file does not exist error for URL with extra /text
I have set up a friendly-URL system similar to Stack Overflow's question URLs.
The old URL syntax was: localhost:12345/cars/details/1234
I've already set up the 301 redirect and the URL generation, but I get a "file does not exist" error when the URL is redirected to:
localhost:12345/cars/details/1234/blue-subaru (because of the trailing "blue-subaru")
Of course, what I actually want is: localhost:12345/cars/1234/blue-subaru :)
How can I achieve this? Thank you
A:
This is a routing problem, so you need to change your routing a little, like this:
routes.MapRoute(
    "Default", // Route name
    "{controller}/{action}/{id}/{name}", // URL with parameters
    new { controller = "test", action = "Index", id = UrlParameter.Optional, name = UrlParameter.Optional } // Parameter defaults
);
I think this will help you.
A:
You could configure your route to accept the car's name in the RouteTable in Global.asax.
routes.MapRoute(
    "Cars",
    "Car/{id}/{carName}",
    new { controller = "Car", action = "Details", id = UrlParameter.Optional, carName = UrlParameter.Optional }
);
And in your CarController you could have your Details action method take both parameters (id and carName):
public ActionResult Details(int? id, string carName)
{
    var model = /* create your model */
    return View(model);
}
Your action link should look like this:
@Html.ActionLink("Text", "Details", "Car", new { id = 1, carName="Honda-Civic-2013" })
Q:
Is there way to have ordered output without ORDER BY clause in MS SQL
We have a large application. Recently, we upgraded the database to MS SQL 2016. As a result, we are having issues with output order in some reports. Unfortunately, we cannot alter the source code of the application and will not be able to for a while. The reason for this behavior is that some queries have the form "SELECT * FROM table", with no "ORDER BY" clause.
Is there any creative way to order the output? Like adding a trigger on SELECT (I know there is no such thing...)? Any other way? Any type of index that would act as a default ORDER?
UPDATE: I cannot alter the source code. This is a compiled application and the upgrade cycle is a few months away. If I could, I would just go through the source code and change those queries.
A:
You asked:
Is there a way to have ordered output without an ORDER BY clause in MS SQL
No.
Unfortunately, there is not.
SQL (in all its flavors) is a declarative scheme for manipulating sets of rows of tables. Sets of things like rows have no inherent order.
If SQL, in any flavor, without ORDER BY statements, happens to deliver rows in an order you expect, it's a coincidence. It's pure luck. Formally speaking, without ORDER BY, SQL delivers rows in an unpredictable order. "Unpredictable" is like "random," but worse. Random implies you might get a different order every time you run the query; you can catch that kind of thing in system test. Unpredictable means you get the same order each time until you don't. Now you don't.
This is in the basic nature of SQL. Why? The optimizers in modern database servers are really smart about delivering exactly what you asked for as fast as possible, and they get smarter with every release. SQL Server 2016's optimizer probably noticed some opportunities to use concurrency for your queries, or something like that, and took them.
Your choices here are all unpleasant:
Roll back the database upgrade. That may or may not fix the problem.
Make emergency fixes to your software load and roll them out ahead of schedule.
Live with the wrongly-ordered reports until you can fix your software.
I wrote a long answer here so you can have useful arguments to present to your corporate overlords about your situation. I don't envy you when you explain this.
Q:
How to get full coverage of a CLI nodeJS app with istanbul?
I am using this config: Istanbul/Mocha/Chai/supertest (for HTTP tests)/sinon (for timer tests), but I am having problems testing CLI tools.
My question is simple: how can I test my CLI program and at the same time achieve 100% code coverage with istanbul? No matter what tool you are using, I would like to understand how you are doing it, please!
I found this article which was very helpful at the beginning but
It was written in 2014
The module mock-utf8-stream does not seem standard
It does not clearly explain the code architecture
cheers
A:
This will be done in 2 steps:
Make sure your test suite is set up to correctly spawn the CLI execution
Set up nyc (reason for switching from istanbul to nyc explained below) to go through the script files behind your CLI tool
Setting up your tests to run spawn subprocesses
I had to set up some CLI tests a few months ago on Fulky (the project is paused right now but it's temporary) and wrote my test suite as such:
const expect = require('chai').expect;
const spawnSync = require('child_process').spawnSync;

describe('Executing my CLI tool', function () {
  // If your CLI tool is taking some expected time to start up / tear down, you
  // might want to set this to avoid slowness warnings.
  this.slow(600);

  it('should pass given 2 arguments', () => {
    const result = spawnSync(
      './my-CLI-tool',
      ['argument1', 'argument2'],
      { encoding: 'utf-8' }
    );

    expect(result.status).to.equal(0);
    expect(result.stdout).to.include('Something from the output');
  });
});
You can see an example here but bear in mind that this is a test file run with Mocha, that runs Mocha in a spawned process.
A bit Inception for your need here so it might be confusing, but it's testing a Mocha plugin hence the added brain game. That should apply to your use case though if you forget about that complexity.
Setting up coverage
You will then want to install nyc with npm i nyc --save-dev; it is nowadays the CLI tool for Istanbul, and, as opposed to the previous CLI (istanbul itself), it supports coverage for applications that spawn subprocesses.
The good thing is that it's still the same tool, from the same team, etc., so the switch is really trivial (for example, see this transition).
In your package.json, then add to your scripts:
"scripts": {
"coverage": "nyc mocha"
}
You will then get a report with npm run coverage (you will probably have to set the reporter option in .nycrc) that goes through your CLI scripts as well.
I haven't set up this coverage part with the project mentioned above, but I have just applied these steps locally and it works as expected so I invite you to try it out on your end.
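For reference, a minimal .nycrc sketch might look like this (reporter and all are standard nyc options; adjust to your needs):

```json
{
  "reporter": ["text", "html"],
  "all": true
}
```

With this in place, npm run coverage prints a text summary to the terminal and writes a browsable HTML report, while "all": true also counts source files that were never required by any test.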
Q:
Aligning Text Left, Center, and Right with fabricjs
I've got some text on my canvas, and I'm trying to use buttons to align the text to the left, center, and right in my fabricjs canvas. I saw this example but would prefer to use buttons over a select box. I've tried a bunch of things and am feeling pretty lost.
var $ = function(id) { return document.getElementById(id) };
var canvas = this.__canvas = new fabric.Canvas('c');
canvas.setHeight(300);
canvas.setWidth(300);
document.getElementById('text-align').onclick = function() {
  canvas.getActiveObject().setTextAlign(this.value);
  canvas.renderAll();
};
var text = new fabric.IText('Some demo\nText', {
  left: 10,
  top: 10,
  fontSize: 22,
  hasBorders: true,
  hasControls: true,
  cornerStyle: 'circle',
  lockRotation: true,
  lockUniScaling: true,
  hasRotatingPoint: false,
})
canvas.add(text);
canvas {
  border: 1px solid #dddddd;
  margin-top: 10px;
}
<script src="https://cdnjs.cloudflare.com/ajax/libs/fabric.js/1.7.20/fabric.min.js"></script>
<button id="text-align" value="left">Left</button>
<button id="text-align" value="center">Center</button>
<button id="text-align" value="right">Right</button>
<canvas id="c"></canvas>
A:
The issue is not in the fabric code. Rather, it is because you were using the same id for all the buttons (duplicate ids are invalid HTML). document.getElementById returns only the first matching element, and therefore your click listener was only getting added to the "Left" button.
var $ = function(id) {
  return document.getElementById(id)
};
var canvas = this.__canvas = new fabric.Canvas('c');
canvas.setHeight(300);
canvas.setWidth(300);
document.querySelectorAll('.text-align').forEach(function(btn) {
  btn.onclick = function() {
    canvas.getActiveObject().setTextAlign(this.value);
    canvas.renderAll();
  };
})
var text = new fabric.IText('Some demo\nText', {
  left: 10,
  top: 10,
  fontSize: 22,
  hasBorders: true,
  hasControls: true,
  cornerStyle: 'circle',
  lockRotation: true,
  lockUniScaling: true,
  hasRotatingPoint: false,
})
canvas.add(text);
canvas {
  border: 1px solid #dddddd;
  margin-top: 10px;
}
<script src="https://cdnjs.cloudflare.com/ajax/libs/fabric.js/1.7.20/fabric.min.js"></script>
<button class="text-align" value="left">Left</button>
<button class="text-align" value="center">Center</button>
<button class="text-align" value="right">Right</button>
<canvas id="c"></canvas>
Q:
System.IO.IOException exception when attempting to end thread
I'm learning netcode and multithreading in MonoDevelop, using C# with GTK#. I've never done either before, and now I find myself needing to do both at once.
I've been working from a tutorial chat program that has no error handling, and I've caught an error that happens in the client every single time I disconnect from the server. The code that sits in a thread listening for messages is as follows, surrounded by try/catch statements:
try
{
    while (Connected)
    {
        if (!srReceiver.EndOfStream && Connected)
        {
            string temp = srReceiver.ReadLine();
            // Show the messages in the log TextBox
            Gtk.Application.Invoke(delegate
            {
                UpdateLog(temp);
            });
        }
    }
}
catch (Exception ex)
{
    Console.WriteLine(ex.ToString());
}
After which the function finishes and the thread ends.
The code that ends the connection looks like this, and runs on the main thread:
private void CloseConnection(string Reason)
{
    // Show the reason why the connection is ending
    UpdateLog(Reason);
    // Enable and disable the appropriate controls on the form
    txtIp.Sensitive = true;
    txtUser.Sensitive = true;
    txtMessage.Sensitive = false;
    btnSend.Sensitive = false;
    btnConnect.Label = "Connect";
    // Close the objects
    Connected = false;
    swSender.Close();
    srReceiver.Close();
    tcpServer.Close();
}
And the try/catch statements above catch this error:
System.IO.IOException: Unable to read data from the transport
connection: A blocking operation was interrupted by a call to
WSACancelBlockingCall. ---> System.Net.Sockets.SocketException: A
blocking operation was interrupted by a call to WSACancelBlockingCall
at System.Net.Sockets.Socket.Receive(Byte[] buffer, Int32 offset,
Int32 size, SocketFlags socketFlags)
at System.Net.Sockets.NetworkStream.Read(Byte[] buffer, Int32
offset, Int32 size)
--- End of inner exception stack trace ---
at System.Net.Sockets.NetworkStream.Read(Byte[] buffer, Int32
offset, Int32 size)
at System.IO.StreamReader.ReadBuffer()
at System.IO.StreamReader.get_EndOfStream()
at ChatClientGTK.MainWindow.ReceiveMessages() in
g:\Android\Tutes\ChatClientRemake\ChatClientGTK\MainWindow.cs:line 157
Now, as far as I can tell, when srReceiver.Close() happens in the main thread, srReceiver.ReadLine() is still trying to execute in the listening thread, which is where the problem lies. But even when I comment out srReceiver.Close(), I still get the error.
As far as I can tell, there are no side-effects caused by just catching the error and moving on, but that doesn't really sit right with me. Do I need to fix this error, and if so, does anyone have any ideas?
A:
Instead of using ReadLine, can't you just do a Read and build up the string until a CRLF is detected, then output that to the update log?
ReadLine is a blocking call, meaning it will sit there and will always throw when the connection is closed.
Otherwise you could just ignore the error. I know what you mean when you say it doesn't sit right, but unless someone can enlighten me otherwise, I don't see any resource leak resulting from it, and if it is an expected error then you can handle it appropriately.
Also, I would probably catch the specific exception first:
catch (IOException ex)
{
    Console.WriteLine(ex.ToString());
}
catch (Exception ex)
{
    Console.WriteLine(ex.ToString());
}
Q:
Spacing between primary maxima of $N$-slit diffraction pattern and single-slit envelope
As far as I know, in the double-slit diffraction pattern the spacing between primary maxima is determined by the double-slit interference equation, while the intensities of the primary maxima are governed by the single-slit envelope. In the double-slit interference pattern the fringes are equally spaced, so, by my understanding, the primary maxima of the double-slit diffraction pattern should be equally spaced too. But I don't know whether this holds for the N-slit diffraction pattern.
Is my understanding of the double-slit diffraction pattern correct? If not, please explain what's wrong.
Are the primary maxima in the N-slit diffraction pattern equally spaced?
As the number of slits (N) increases, the intensity of the central maximum (and of the other primary maxima) increases. So, when we talk about the single-slit envelope, is it the pattern we would see if we left only 1 of the N slits open and sent through it light with the intensity of the central maximum of the N-slit diffraction pattern?
My confusion arose from the image below. When the number of slits increases, the central maximum intensity also increases, so shouldn't the single-slit envelopes for double-slit, triple-slit, ... diffraction patterns also vary in their intensity?
A:
As Jon Custer commented, the far-field diffraction pattern is the Fourier transform of the slits. For the simplest case, a single slit, the diffraction pattern is the Fourier transform of a square impulse; that is, a sinc function.
Two slits is the convolution of a single slit with a pair of Dirac $\delta$-functions. So the diffraction pattern will be the product of the FT of the slit with the FT of the $\delta$-functions. Accordingly, the pattern consists of a cosine (the FT of the pair of $\delta$-functions) multiplied by the same sinc function as before. The period of the cosine, and thus the spacing of the zeros, is inversely related to the separation of the slits - move the slits closer together, and the zeros of the diffraction pattern spread out more. They are always periodic though, as they arise from a cosine function.
An $N$-slit arrangement can be described as a convolution of the impulse function with an array of $\delta$-functions. The diffraction pattern will consist of the product of the sinc function with the FT of the $\delta$-functions. If the slits are equally spaced, it is easy to show that the period of this FT will be the same as for the single pair of slits. As $N$ gets larger and larger, the FT of the $\delta$-functions will approach a Dirac comb. You can see this tendency in the figure you provide; as $N$ increases, the "wiggles" between the peaks are suppressed.
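This factorisation is easy to play with numerically. Below is a small sketch (illustrative only - the slit-separation-to-width ratio and $N$ are arbitrary choices, and np.sinc is NumPy's normalized sinc, $\sin(\pi x)/(\pi x)$):

```python
import numpy as np

def n_slit_intensity(beta, N, d_over_a):
    """Far-field intensity for N equally spaced slits.

    beta     : pi * a * sin(theta) / wavelength (single-slit phase parameter)
    N        : number of slits
    d_over_a : slit separation d divided by slit width a
    """
    # Single-slit envelope: |FT of one slit|^2 is a sinc squared
    envelope = np.sinc(beta / np.pi) ** 2        # np.sinc(x) = sin(pi*x)/(pi*x)
    # Grating factor: |FT of N delta-functions|^2 = (sin(N*phi)/sin(phi))^2
    phi = d_over_a * beta                        # pi * d * sin(theta) / wavelength
    with np.errstate(divide="ignore", invalid="ignore"):
        grating = (np.sin(N * phi) / np.sin(phi)) ** 2
    # At the principal maxima (phi = m*pi) the ratio is 0/0; its limit is N^2
    return envelope * np.where(np.isfinite(grating), grating, N ** 2)

beta = np.linspace(-0.9 * np.pi, 0.9 * np.pi, 20001)
for N in (2, 5, 10):
    intensity = n_slit_intensity(beta, N, d_over_a=3.0)
```

The grating factor peaks at equal intervals in $\sin\theta$, so the principal maxima stay equally spaced for any $N$; their heights follow the sinc envelope, including "missing orders" where an envelope zero lands on a grating peak.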
|
{
"pile_set_name": "StackExchange"
}
|
Q:
New Google Plus Comments in Blog - How to View / Receive Notifications?
I have a Blogger blog and I used to have Blogger's own comment system in it. I didn't like some parts of it, so I tried changing to Google+ comments instead.
I have no problems with the comment box, it's implemented well, works fine, etc. But when I had Blogger comments, I could see the newest comments my visitors had posted site-wide and I also received email notifications when someone posted a comment in any post of my blog.
However, now, with Google Plus comments, I don't seem to get any sort of notification (no emails, not even that alert thingy in the top-right corner of Google that only ever shows YouTube comments I don't care about). Also, I know of no way to check the most recent comment on my website.
I kind of need either of these features (most recent / notifications) so I can reply to people when they post comments on my blog. After all, I've got dozens of posts; it's not viable to check every single one of them for new comments every single day.
How can I view the most recent Google plus comments within a website? Or at least receive an email when there is a new Google plus comment posted in my website?
P.S.: I'm not interested in an API for these. There should be an actual user interface somewhere for these things, right?
A:
As it currently stands, this feature has not worked since October 2016.
According to a post by a Google Employee in the official Blogger Forum on 2nd February 2017 -
Hi all,
Thanks for posting.
Just wanted to let you know that the concerned team is aware of this
issue and is working on it. I will keep you all posted as soon as I
get an update from them.
Best,
Theo
Any updates regarding this issue will likely be posted in the above forum thread.
|
{
"pile_set_name": "StackExchange"
}
|
Q:
Required Field Validator in Eclipse
I am developing a mobile application in Android using Eclipse and PhoneGap, and my question is: how will I be able to create a required field validator on my registration page? Or what code should I input for a required field validator? Many thanks guys.
A:
If your registration page is a bunch of editTexts, you can just use
String mString = editText.getText().toString();
and then make sure that the mString has a certain length
mString.length()
or includes certain characteristics (like @ and .com for an email). For that you can just use regex (or Pattern in Android):
http://developer.android.com/reference/java/util/regex/Pattern.html
|
{
"pile_set_name": "StackExchange"
}
|
Q:
How to send an email in background with CakePHP?
That's it: how can I send an email in the background with CakePHP, preferably using the out-of-the-box Email component?
EDIT: by "in background" I mean in another thread, or at least allowing the controller method to finish and send the response to the user. I want to run a function, return "OK" to the user and, after that, send an email.
If it's not possible, how could I do it using the PHP mail function? (only if it's a good approach)
Don't know if it matters, but I'm using SMTP.
Thank you in advance.
EDIT 2: I'm trying to use the CakeEmail::deliver() method, as I read from the documentation that:
Sometimes you need a quick way to fire off an email, and you don’t necessarily want do setup a bunch of configuration ahead of time. CakeEmail::deliver() is intended for that purpose.
I'm trying this:
CakeEmail::deliver('[email protected]', 'Test', 'Test',
array('from' => '[email protected]'), true);
But the mail is not actually being sent. Does anyone have any hint on that method?
A:
So "in the background" means you want to process the mail sending "out of band". This is to give the user a quick response, and to be able to process slow operations like email sending after the user has feedback (or as you say, in a separate thread).
There are a number of ways to achieve this, including:
Workers / message queues
Cron job
The simplest way is probably to use a cron job that fires off every 15 or 30 seconds.
My recommended approach is to look into workers and queues, and use something like 0mq or RabbitMQ to queue the request to send an email, and process that outside the request.
If you want to do a cron job, rather than sending the email within the request the user has initiated, you would create a new model to represent your outbound email requests, and store that data into your database. Let's call this model Message, for example.
CREATE TABLE `messages` (
`id` CHAR(36) NOT NULL PRIMARY KEY,
`to` VARCHAR(255) NOT NULL,
`from` VARCHAR(255) NOT NULL,
`subject` VARCHAR(255) NOT NULL,
`body` TEXT,
`sent` TINYINT(1) NOT NULL DEFAULT 0,
`created` DATETIME
);
Create a Console that grabs the Message model, and does a find for "unsent" messages:
$messages = $this->Message->findAllBySent(0);
Create yourself a send() method to simplify things, and process all unsent messages:
foreach ($messages as $message) {
if ($this->Message->send($message)) {
// Sending was a success, update the database
$this->Message->id = $message['Message']['id'];
$this->Message->saveField('sent', 1, false);
}
}
The implementation of the send() method on the Message model is up to you, but it would just pass the values from the passed in $message and shunt through to CakeEmail (http://api.cakephp.org/class/cake-email)
Once that is done, you can just call the console from the command line (or from your cron).
|
{
"pile_set_name": "StackExchange"
}
|
Q:
How to choose columns with only NAs and a unique value and fill NA's with that value?
I have a data frame some columns of which have only a unique value or NA. I want to choose these columns and fill the NA's in these columns with the unique non-missing variable in the column.
Here is a mock-data:
df = data.frame( A = c(1,NA,1,1,NA), B = c(2,NA,5,2,5), C =c(3,3,NA,NA,NA))
#df
# A B C
#1 1 2 3
#2 NA NA 3
#3 1 5 NA
#4 1 2 NA
#5 NA 5 NA
I want to obtain:
#df
# A B C
#1 1 2 3
#2 1 NA 3
#3 1 5 3
#4 1 2 3
#5 1 5 3
So far, I tried:
df = df %>%
map_if((length(unique(na.omit(.)))== 1), ~ unique(na.omit(.)))
df = df %>%
mutate_if((length(unique(na.omit(.)))== 1), ~ unique(na.omit(.)))
Both gave the following error:
Error in probe(.x, .p) : length(.p) == length(.x) is not TRUE
Can somebody please tell me what is the correct syntax to achieve what I want?
A:
We could check for the condition in mutate_if and, if it is satisfied, use the first non-NA value for the entire column
library(tidyverse)
df %>%
mutate_if(~n_distinct(.[!is.na(.)]) == 1, funs(.[!is.na(.)][1]))
# A B C
#1 1 2 3
#2 1 NA 3
#3 1 5 3
#4 1 2 3
#5 1 5 3
which could also be written as suggested by @RHertel
df %>% mutate_if(~n_distinct(.[na.omit(.)]) == 1, funs(na.omit(.)[1]))
To make it more clear we could create functions and use them accordingly
only_one_unique <- function(x) {
n_distinct(x[!is.na(x)]) == 1
}
first_non_NA_value <- function(x) {
x[!is.na(x)][1]
}
df %>% mutate_if(only_one_unique, first_non_NA_value)
We could keep everything in base R using the same logic
only_one_unique <- function(x) {
length(unique(x[!is.na(x)])) == 1
}
first_non_NA_value <- function(x) {
x[!is.na(x)][1]
}
df[] <- lapply(df, function(x) if (only_one_unique(x))
first_non_NA_value(x) else x)
|
{
"pile_set_name": "StackExchange"
}
|
Q:
How much information, at maximum, can a phisher/scammer obtain?
My iPhone was recently stolen and since then I've been receiving messages and emails saying that 'a sound was played on your iPhone' or 'your iPhone was found. Visit here to confirm your location'. The numbers and email addresses are clearly fake.
I was wondering what he/she can obtain from me, apart from my Apple ID. How much can he/she do?
A:
It depends on two main factors:
The skill of the Phisher/Social Engineer
The Gullibility of the victim (How much he/she is convinced by the Phisher)
It can range from basic information, such as your location, phone number and full name, to your family's details and bank credentials.
Then you have to ask yourself, how much does Google know about you?
The main vulnerability in a human (apart from stupidity) is online presence...
Do you have a Facebook Account? Clearly this Phisher knows your Email account, is this linked to your Facebook, Twitter or LinkedIn account?
Does your password contain a date-of-birth, family/pet name or 13375P34K?
Then we reach your iPhone... Is it password protected? Number or Phrase?
If it's a number, is it 123, 1234, 12345, 15790, 78982, 1278?
If it's a phrase, is it P455W0RD, home, passwOrd, Password, QwErTy?
If the attacker does in fact have your iPhone, and easily guesses your password, then everything you have ever connected to with that device is in danger...
Humans are creatures of habit, and always will be. Once an attacker is in your life, he will not stop!
This Phisher can Dox you right there and then! With the right amount of luck & skill...
Full Name
Address
Phone Number
Family names
Family accounts
Family details
Bank details
Other website accounts
Many, many more...
Then we can think about what happens when they've acquired enough information about your bank: the man sitting at his desk in his boring office will be just as gullible as the next, and if he's given the correct information, he will let anyone into your account.
Remember that Social Engineering is a very powerful weapon, and due to current media control and educational failure, humans are easier to hack than computers!
Happy thoughts! <3
|
{
"pile_set_name": "StackExchange"
}
|
Q:
Yet another immutable string
I know, there are a few implementations of immutable strings out there, but my focus seems to be a little different.
My goal was to have a type that provided value semantics, but didn't incur the cost of dynamic memory allocation when constructed from a string literal which is already guaranteed to exist during the whole program runtime.
After refactoring, I ended up with two classes:
one str_ref class similar to string_view (maybe I can switch to gsl::string_span once I have a c++14 compiler) primarily used as a parameter for functions that don't intend to copy / take ownership of the string.
The actual const_string class, which is derived from it but also stores a dynamically allocated character array if necessary (via a shared pointer)
str_ref
#include <algorithm>
#include <iterator>
#include <string>
#include <ostream>
class str_ref {
public:
//type defs
using value_type = char;
using traits_type = std::char_traits<value_type>;
using size_type = size_t;
using difference_type = std::ptrdiff_t;
using reference = value_type&;
using const_reference = const value_type&;
using pointer = value_type*;
using const_pointer = const value_type*;
using iterator = pointer;
using const_iterator = const_pointer;
using reverse_iterator = std::reverse_iterator<iterator>;
using const_reverse_iterator = std::reverse_iterator<const_iterator>;
public:
/* #### CTORS #### */
constexpr str_ref() = default;
str_ref(const std::string& other) noexcept:
_start(other.data()),
_size(other.size())
{}
constexpr str_ref(const char* other, size_t size) noexcept :
_start(other),
_size(size)
{}
//NOTE: Use only for string literals!!!
template<size_t N>
constexpr str_ref(const char(&other)[N]) noexcept :
_start(other),
_size(N - 1)
{}
template<class T>
str_ref(const T * const& other) = delete;
/* #### Special member functions #### */
str_ref(const str_ref& other) = default;
str_ref& operator=(const str_ref& other) = default;
/* #### container functions #### */
constexpr const_reference first() const { return *begin(); }
constexpr const_reference last() const { return *(cend() - 1); }
constexpr const_iterator cbegin() const noexcept { return _start; }
constexpr const_iterator cend() const noexcept { return _start + _size; }
constexpr const_iterator begin() const noexcept { return cbegin(); }
constexpr const_iterator end() const noexcept { return cend(); }
const_reverse_iterator crbegin() const { return const_reverse_iterator(cend()); }
const_reverse_iterator crend() const { return const_reverse_iterator(cbegin()); }
const_reverse_iterator rbegin() const { return const_reverse_iterator(cend()); }
const_reverse_iterator rend() const { return const_reverse_iterator(cbegin()); }
constexpr size_type size() const noexcept { return _size; }
constexpr bool empty() const noexcept { return size() == 0; }
constexpr const_reference operator[](size_t idx) const {
return _start[idx];
}
/*#### string functions ####*/
std::string to_string() {
return std::string(begin(), end());
}
constexpr str_ref sub_string(size_t offset, size_t count) const {
return str_ref{ this->_start + offset, count };
}
int compare(const str_ref& other) const {
if ((begin() == other.begin()) && (size() == other.size())) {
return 0;
}
return std::lexicographical_compare(cbegin(), cend(), other.cbegin(), other.cend());
}
protected:
const char* _start = nullptr;
size_type _size = 0;
};
/* operator overloads */
bool operator==(const str_ref& l, const str_ref& r) { return l.compare(r) == 0; }
bool operator!=(const str_ref& l, const str_ref& r) { return !(l == r); }
bool operator< (const str_ref& l, const str_ref& r) { return l.compare(r) < 0; }
bool operator> (const str_ref& l, const str_ref& r) { return r<l; }
bool operator<=(const str_ref& l, const str_ref& r) { return !(l>r); }
bool operator>=(const str_ref& l, const str_ref& r) { return !(l<r); }
std::ostream& operator<<(std::ostream& out, const str_ref& string) {
out.write(&*(string.begin()), string.size());
return out;
}
const_string
#include <memory>
#include "StrRef.h"
class const_string;
namespace _impl_helper {
template<class ...ARGS>
const_string concat_impl(const ARGS&...args);
}
class const_string : public str_ref {
public:
/* #### CTORS #### */
const_string() = default;
const_string(const char* other, size_t size) { _copyFrom(other, size ); }
explicit const_string(const std::string& other) { _copyFrom(other.c_str(), other.size()); }
explicit const_string(const str_ref& other) { _copyFrom(other.cbegin(), other.size()); }
// don't accept c-strings
template<class T>
const_string(const T * const& other) = delete;
//NOTE: Use only for string literals (arrays with static storage duration)!!!
template<size_t N>
explicit constexpr const_string(const char(&other)[N]) noexcept :
str_ref(other, N - 1)
{}
/* #### Special member functions #### */
const_string(const const_string& other) = default;
const_string(const_string&& other) = default;
const_string& operator=(const const_string& other) = default;
const_string& operator=(const_string&& other) = default;
/* #### String functions #### */
const_string sub_string(size_t offset, size_t count) const {
const_string retval;
retval._start = this->_start + offset;
retval._size = count;
retval._data = this->_data;
return retval;
}
template<class ...ARGS>
friend const_string _impl_helper::concat_impl(const ARGS&...args);
private:
std::shared_ptr<char> _data = nullptr;
void _copyFrom(const char* other, size_t size) {
_data = std::shared_ptr<char>(new char[size], std::default_delete<char[]>());
std::copy_n(other, size, _data.get());
_size = size;
_start = _data.get();
}
};
namespace _impl_helper {
void addTo(char*& buffer, const str_ref& str) {
std::copy_n(str.begin(), str.size(), buffer);
buffer += str.size();
}
template<class ...ARGS>
const_string concat_impl(const ARGS& ...args) {
//determine total size
size_t newSize = 0;
int ignore[] = { (newSize += args.size(),0)... };
//create const_string object
const_string retval;
retval._data = std::shared_ptr<char>(new char[newSize], std::default_delete<char[]>());
retval._start = retval._data.get();
retval._size = newSize;
//place copy arguments to buffer
char * bufferStart = retval._data.get();
int ignore2[] = { (addTo(bufferStart,args),0)... };
return retval;
}
}
template<class ...ARGS>
const_string concat(ARGS&&...args) {
return _impl_helper::concat_impl(str_ref(args)...);
}
Aside from general advice on how to improve my class (I bet there is a lot), I'd especially like to know if
The creation of a new string can be made cheaper (currently it incurs two dynamic memory allocations - one for the string and one for the shared_ptr control block)
You see a way to implement concat, that doesn't require the forward declaration of the _impl_helper namespace (I'd like to keep it stashed away at the end of the file, not in front of the actual class)
You think that it is a problem that the internal representation of the string is not zero-terminated. In my current project, there are very few 3rd party functions that require zero-terminated strings (and I try to avoid them myself wherever possible - hence this class), but what is your experience?
I also made the deliberate design choice to not provide a virtual destructor, due to the overhead it would incur and the fact that I don't see any use case where I would want to destruct const_string via a pointer to str_ref. Still, this goes against best practices and might be surprising for other people using that code - would you accept such code in your codebase?
I haven't finished documentation and the unit tests yet (I hope the code is readable enough), but here is some sample code to play around with:
#include <iostream>
#include <cstring>
#include "const_string.h"
using namespace std;
namespace {
void print_str_ref(str_ref s) {
std::cout << s << std::endl;
}
}
int main() {
//constexpr const char* tmp{ "Hello World" };
//const const_string ccs(tmp); //<-should produce compiler error
const_string cs0;
const_string cs1("Hello World");
const_string cs2("Hello World"s);
const char* t = "Hello World";
const_string cs3(t, std::strlen(t));
const_string cs4(cs1);
const_string cs5("tmp");
cs5 = cs1;
const_string cs6;
cs6 = cs2;
const_string cs71("Hello 12"s);
const_string cs72("World"s);
const_string cs7(concat(cs71.sub_string(0, 6), cs72));
std::string cppStr(cs7.to_string());
std::cout << cs1 << ":" << (cs1 == cs1) << std::endl;
std::cout << cs2 << ":" << (cs1 == cs2) << std::endl;
std::cout << cs3 << ":" << (cs1 == cs3) << std::endl;
std::cout << cs4 << ":" << (cs1 == cs4) << std::endl;
std::cout << cs5 << ":" << (cs1 == cs5) << std::endl;
std::cout << cs6 << ":" << (cs1 == cs6) << std::endl << std::endl;
std::cout << cppStr << ":" << (cppStr == "Hello World"s) << std::endl << std::endl;
print_str_ref("Hello World");
print_str_ref("Hello World"s);
print_str_ref(cs1);
print_str_ref({ t,std::strlen(t) });
std::cout << std::endl;
const str_ref tstr("ld");
print_str_ref(concat("Hel", "lo"s, const_string(" W"), str_ref("or"), tstr));
}
/*#### static / Constexpr stuff ####*/
constexpr str_ref str("Hello");
constexpr auto it = str.begin() + 4;
static_assert(*it == 'o', "");
template<class T>
struct CheckNothrowDefaults {
static_assert(std::is_nothrow_copy_assignable<T>::value, "");
static_assert(std::is_nothrow_move_assignable<T>::value, "");
static_assert(std::is_nothrow_copy_constructible<T>::value, "");
static_assert(std::is_nothrow_move_constructible<T>::value, "");
};
CheckNothrowDefaults<str_ref> _1{};
CheckNothrowDefaults<const_string> _2{};
Note: So far I can't use c++14 features, as the compiler for my embedded system is based on gcc4.8
A:
The creation of a new string can be made cheaper (currently it incurs
two dynamic memory allocations - one for the string and one for the
shared_ptr control block)
Is there a reason that your control block is a shared_ptr? It doesn't look like you're going to be sharing that data with any other object since it seems like each instance of the class will own its own copy of a string literal. (Remember, shared_ptr and unique_ptr are for describing ownership). I would switch to a unique_ptr since unique_ptr has far less overhead and is more lightweight than a shared_ptr. Also, by switching to a unique_ptr, you make your ownership semantics clear to the code reviewer; when I read your code and saw "shared_ptr", I assumed you'd be sharing data when you weren't. You could then switch to a more concise initialization function for it:
_data = std::make_unique<char[]>(size);
You see a way to implement concat, that doesn't require the forward
declaration of the _impl_helper namespace (I'd like to keep it stashed
away at the end of the file, not in front of the actual class).
Yes, it seems that namespace is pretty unnecessary as you could just implement those functions as private member functions of the const_string class. This also achieves the goal of keeping them at the end of the file as you have your private declarations at the end of the file. ;)
You think that it is a problem, that the internal representation of
the string is not zero-terminated. In my current project, there are
very few 3rd party functions that require zero terminated strings (and
I try to avoid them myself where ever possible - hence this class),
but what is your experience?
It shouldn't be a problem as long as you don't pass your string's internal data representation to an API that works with or expects NULL-terminated strings. If the only thing operating on your data is the class itself, you should be fine. Remember, NULL-terminated strings are an implementation detail that should only be exposed if necessary. It doesn't look like that's necessary here.
Also, a few general remarks:
constexpr const_iterator begin() const noexcept { return cbegin(); }
constexpr const_iterator end() const noexcept { return cend(); }
const_reverse_iterator rbegin() const { return const_reverse_iterator(cend()); }
const_reverse_iterator rend() const { return const_reverse_iterator(cbegin()); }
I don't see a need for these to exist; they are identical to their cbegin and cend variations and they also communicate the wrong thing to the user - that they point to mutable data (which they don't). Since you're creating an immutable string class, keep your interface const-only.
/* #### CTORS #### */
constexpr str_ref() = default;
/* #### Special member functions #### */
str_ref(const str_ref& other) = default;
str_ref& operator=(const str_ref& other) = default;
No need to define these as the compiler essentially does this for you for "free". This just clutters the code.
|
{
"pile_set_name": "StackExchange"
}
|
Q:
Amazon EC2 gives no disk space but my instance has space
So I have an instance that supposedly has a lot of free space.
I have installed Postgres and when I try to import a dump it tells me my disk is full. I did a df -h and saw that in fact xvda1 is full, but what about xvdb?
It has a lot of free space; how can I use that for my Postgres database?
df -h
Filesystem Size Used Avail Use% Mounted on
/dev/xvda1 7.9G 7.5G 1.1M 100% /
udev 1.9G 8.0K 1.9G 1% /dev
tmpfs 376M 184K 375M 1% /run
none 5.0M 0 5.0M 0% /run/lock
none 1.9G 0 1.9G 0% /run/shm
/dev/xvdb 394G 199M 374G 1% /mnt
A:
xvdb is most likely an Amazon EC2 Instance Store volume (also called an ephemeral volume, in contrast to an Amazon EBS volume) and you almost certainly do not want to use it if you value your data, unless you know exactly what you are doing and are prepared to always have point-in-time backups etc.:
Ephemeral storage will be lost on stop/start cycles and can
generally go away, so you definitely don't want to put anything of
lasting value there, i.e. only put temporary data there you can
afford to lose or rebuild easily, like a swap file or strictly
temporary data in use during computations. Of course you might store
huge indexes there for example, but must be prepared to rebuild these
after the storage has been cleared for whatever reason (instance
reboot, hardware failure, ...).
See my answers to the following related questions for more details:
Will moving data from EBS to ephemeral storage improve MySQL query performance?
how to take backup of aws ec2 instance/ephemeral storage?
|
{
"pile_set_name": "StackExchange"
}
|
Q:
DocumentNode.SelectNodes return null
I was wondering if you could explain to me what's wrong in my script. My string "temperature" always returns null.
I'm using the HtmlAgilityPack. What I would like to do is to get the temperature from my friend's website.
my code
private void temperatureBtn_Click(object sender, EventArgs e)
{
string url = "http://perso.numericable.fr/meteo-kintzheim/";
HtmlWeb web = new HtmlWeb();
HtmlAgilityPack.HtmlDocument doc = web.Load(url);
string temperature = doc.DocumentNode.SelectNodes("/html/body/table[1]/tbody/tr[5]/td[3]/b[2]/font")[0].InnerText;
MessageBox.Show(temperature.ToString());
}
I would be really grateful if someone could help me :D
A:
http://perso.numericable.fr/meteo-kintzheim/ is actually a frameset of different frames.
Change the URL to http://perso.numericable.fr/meteo-kintzheim/current.html (the frame you want) and change your XPath from:
string temperature = doc.DocumentNode.SelectNodes("/html/body/table[1]/tbody/tr[5]/td[3]/b[2]/font")[0].InnerText;
To:
var temperature = doc.DocumentNode.SelectNodes("/html/body/table[1]/tr[5]/td[3]/b[2]/font")[0].InnerText;
Omitting tbody as it's not part of the document.
|
{
"pile_set_name": "StackExchange"
}
|
Q:
where is vimfiles directory on mac OS?
I am a new user of Mac OS X. I found there is no vimfiles directory under /usr/share/vim, but the directory vim73 is there. This vim was shipped with the OS and I didn't install it.
vimfiles was used for command files and plugins, which won't be affected by vim updates, but vim73 is only for the current version: if I install vim74, then all files under vim73 won't be used any more.
A:
Just create a .vim directory in your home folder:
$ cd
$ mkdir .vim
This will be used by every version of Vim. This is explained under
:help vimfiles
|
{
"pile_set_name": "StackExchange"
}
|
Q:
How to win against a moving violation ticket that was issued different than what I had allegedly committed?
I was stopped by an officer who claimed I sped over a 30 mph speed limit. When he saw that I have a radar detector, he proceeded to "cut me a break" and gave me a ticket for "failed to signal" (but I did use my signal, and the officer knows it).
He did not use radar (my radar detector would have indicated otherwise) and I was in the lead among other cars. I wasn't speeding, but accelerated faster than the other cars. It seems to me that he realized that he may not have a strong argument in court when he saw my radar. Therefore, I received a citation that was different than the one I was stopped for.
How would a typical case like this play out in court (generalized)?
Would he tell the truth, and if he does, would that mean the case would be dismissed on the grounds that I did not violate the law stated in the citation? (I would probably use a "motion for summary judgment" if this happens.)
RESULTS from my case:
First Trial: Officer did not show up (did not give a reason). The Judge offered a continuance to the officer. Case postponed. (The NYS penal code states, roughly, that if the officer did not give a reason for missing the court date and does not fall under one of the few exceptions for missing the date, the case should be dismissed. However, in my case it wasn't. Perhaps the judge had the final say regarding this matter.)
2nd Trial: Officer did not show up. Prosecutor tried to get me to come back. However, I mentioned the continuance is final and that the officer not showing up means there is no witness against me. I got my hearing in front of the judge and the case was dismissed.
Notes: After going through this ordeal, I realized that the prosecutors were really out to get people to agree to plea bargains - even going as far as lying to me that the officer was present in order to get me to sign a plea.
A:
I took my car to the mechanic to have a squeaky brake looked at. I was told it would cost $30. The mechanic fixed whatever the problem was. When I was checking out, they could not find a $30 brake-work item in their computer so they billed it as Tire Balancing $30. Or some such thing. Meh, accounting.
This is not how the law works.
The prosecution needs to prove every element of the crime you are charged with. They need to prove you did not signal. The way this usually works is the cop takes the stand and testifies, and you can cross examine him. Then you can testify if you want to, and can be cross-examined. There might be other evidence against you also, like a dash cam. Assuming there is no other evidence, and that the officer did not prove every element of failing to signal, you do not need to testify. You can tell the judge that the prosecution failed to make the case and ask to have the charge dismissed. Of course, if the judge thinks they did make their case, then you lose. On the other hand, you could take the stand and testify, and subject yourself to cross examination. Just a word of warning, if it's your word against a cop's word, you will lose.
Your best bet is to get discovery, get the dash cam, and show that you did signal.
Be aware, if you get too saucy, the prosecution can add charges. So they could add the speeding charge, but of course, (see above), they then need to prove it.
|
{
"pile_set_name": "StackExchange"
}
|
Q:
Chromium browser bundle
What is the current state of porting the TBB from Firefox to Chromium? Several years back the Tor Project blog mentioned work by the Chromium developers to implement the needed APIs; since then, nothing. Chromium seems to be far more advanced than Firefox in many respects: it has process separation, full sandboxing, and generally robust security. The Chromium Project is open-source, and therefore arguably safe from government interference and backdoors, and user tracking should be a simple matter to remove. The blog post claimed that a good deal of work had been done to this effect, and that if the APIs were completed, the functionality would in fact be much simpler to implement than in Firefox. Also, with Opera now based on Chromium, it would seem that working on a Chromium bundle would make porting to Opera, SRWare Iron, and many others simple.
Has this work stalled?
If so, why?
Is there any chance it will continue?
Links:
https://blog.torproject.org/blog/google-chrome-incognito-mode-tor-and-fingerprinting
https://blog.torproject.org/blog/improving-private-browsing-modes-do-not-track-vs-real-privacy-design
UPDATE:
I've reviewed the link mrphs provided and done some research. It seems that of the five blocking issues listed, most are obsolete, fixed, or on the way there, and only one major problem remains for which there is no workaround. It would be nice to see the Chrome bug list reviewed for updates.
UPDATE:
According to the conversation around that last major problem, a custom Chrome build would make it feasible. It seems that the Tor Project is waiting until they can make an extension-only version of TorButton for Chrome, rather than a fully custom fork like Firefox. Interesting thought: if something this simple became possible for Chrome, would Firefox be dumped?
In summary, progress is happening, but largely stuck on a few issues.
A:
"... and user tracking should be a simple matter to remove."
That's the tricky part. It's rather impossible than simple.
There are several fingerprinting and privacy-related bugs which seem to be impossible to patch.
For more detailed info see: https://trac.torproject.org/projects/tor/wiki/doc/ImportantGoogleChromeBugs
|
{
"pile_set_name": "StackExchange"
}
|
Q:
How to find instantaneous positions of a central body of specified mass that results in an osculating orbit?
If I understand correctly, osculating orbital elements such as those posted in JPL's Horizons represent the mathematical osculating (tangential or "kissing") Keplerian orbit about a specified location in space, without regard to what mass that body would need to have in order for that orbit to happen.
Instead, I'd like to calculate the position of a body of specified mass such that an orbit about it would match a specified state vector; that is, at a given position $\mathbf{x}$ the orbiting object would have the velocity vector $\mathbf{v}$.
The goal is to formulate an alternative, more numerical, graphic, and slightly creative answer to the question What point does Earth actually orbit?
I'd like to calculate where to put the Sun with 1 solar mass such that its orbit would include a given state vector.
Is this mathematically possible?
Will this produce a different set of orbital elements than the traditional osculating elements?
A:
If I understand you correctly the state vector you would use would be defined in an arbitrary reference frame and you would like to find the position of the central body in that same arbitrary reference frame that would result in an orbit which includes that state vector.
Mathematically there are infinitely many solutions to this problem, any position you choose for this central body will result in a keplerian orbit. Of course ruling out singularities such as zero velocity or zero position (with respect to the central body).
Say you have some state vector with a small velocity. This may be the result of a circular orbit about the central body with a large semi-major axis and low orbital velocity. The same state vector may be the result of a highly elliptical orbit around a central body with the same mass, where the body is much closer than in the other case, but the spacecraft is near or at apogee.
The Wikipedia page on Kepler orbits has a section on this initial value problem. The choice of location for the central body just changes the coordinate transformation you apply to the given state vector (in the reference frame in which you also define the location of the central body). It simply results in a different state vector in the reference frame centered at the central body, which is then used to determine the Kepler orbit corresponding to that state.
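As a numerical sketch of that coordinate shift (a hypothetical helper, not from the answer; the central body is assumed at rest in the chosen frame, with gravitational parameter mu), re-expressing the position relative to the chosen body and applying the vis-viva equation gives the osculating semi-major axis:

```python
import math

def semi_major_axis(r_vec, v_vec, r_body, mu):
    # Position of the orbiting point relative to the chosen central body,
    # which is assumed at rest in this frame.
    rel = [r - b for r, b in zip(r_vec, r_body)]
    r = math.sqrt(sum(c * c for c in rel))
    v2 = sum(c * c for c in v_vec)
    # vis-viva: v^2 = mu*(2/r - 1/a)  =>  a = 1/(2/r - v^2/mu)
    return 1.0 / (2.0 / r - v2 / mu)

# Circular-orbit check: at distance r with speed sqrt(mu/r), a equals r.
mu = 1.0
a = semi_major_axis([1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 0.0], mu)
```

Moving r_body changes rel and hence every osculating element, which is why infinitely many placements are consistent with one state vector.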
|
{
"pile_set_name": "StackExchange"
}
|
Q:
How to do auto-login without leaving your account open to everyone?
I always use only one account on my computer.
I do have a lot of programs in startup and I need them all. The problem is that they take some time to load (for example eclipse).
Is there any way to do auto-login on the account and lock the screen so that no one else can use my account?
A:
Use gnome-screensaver-command -l right after the list of programs you load.
It will lock your screen when you run it.
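Putting this together with the auto-start setup from the question, the relevant tail of ~/.bashrc might look like this sketch (the program names are placeholders for your own startup list):

```shell
# ~/.bashrc (sketch): start the usual programs, then lock the screen
eclipse &                      # placeholder startup program
my-other-program &             # placeholder startup program
gnome-screensaver-command -l   # lock so nobody else can use the session
```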
|
{
"pile_set_name": "StackExchange"
}
|
Q:
Finding min and max value in each column in array
I have a problem with my program. I want to print the min and max of each column, but it doesn't work properly. I think everything should be OK. When the loop comes to the end I reset the min and max values.
public class tablice2 {
public static void main(String[] args){
int t [][] = new int [5][5];
int n [] = new int [5];
int x [] = new int [5];
Random r = new Random();
int min = t[0][0];
int max = t[0][0];
for (int i = 0; i <t.length ;i++){
min = 0;
max = 0;
for(int j = 0; j < t[i].length ;j++){
t[i][j] = r.nextInt(6)-5;
System.out.print(t[i][j] + " ");
if (t[j][i] < min){
min = t[j][i];
}
if (t[j][i] > max){
max = t[j][i];
}
}
n[i]=min;
x[i]=max;
System.out.println(" ");
}
for(int p=0;p<x.length;p++){
System.out.println("Max Column "+p + ": " +x[p] );
}
for(int k=0;k<n.length;k++){
System.out.println("Min Column "+k + ": " +n[k]);
}
}
}
A:
Do not fill the array and search for the min/max in the same pass, as elements not yet assigned still hold the default value (i.e. 0). Also, in the outer loop of the second pass, reset max and min to the first element of the column.
Do it as follows:
import java.util.Random;
public class Main {
public static void main(String[] args) {
int t[][] = new int[5][5];
int n[] = new int[5];
int x[] = new int[5];
Random r = new Random();
int min;
int max;
for (int i = 0; i < t.length; i++) {
for (int j = 0; j < t[i].length; j++) {
t[i][j] = r.nextInt(10) - 5;
System.out.printf("%4d", t[i][j]);
}
System.out.println();
}
for (int i = 0; i < t.length; i++) {
min = t[0][i];
max = t[0][i];
for (int j = 0; j < t[i].length; j++) {
if (t[j][i] < min) {
min = t[j][i];
}
if (t[j][i] > max) {
max = t[j][i];
}
}
n[i] = min;
x[i] = max;
}
for (int p = 0; p < x.length; p++) {
System.out.println("Max Column " + p + ": " + x[p]);
}
for (int k = 0; k < n.length; k++) {
System.out.println("Min Column " + k + ": " + n[k]);
}
}
}
A sample run:
3 -4 2 0 1
-2 -2 4 -1 -2
-3 1 4 -1 0
-4 4 -2 -5 2
-5 -3 -3 -4 -1
Max Column 0: 3
Max Column 1: 4
Max Column 2: 4
Max Column 3: 0
Max Column 4: 2
Min Column 0: -5
Min Column 1: -4
Min Column 2: -3
Min Column 3: -5
Min Column 4: -2
Notes:
I have changed r.nextInt(6)-5 to r.nextInt(10) - 5 in order to produce a mix of negative, 0 and positive numbers so that you can quickly validate the result. You can change it back to r.nextInt(6)-5 as per your requirement.
I have also used printf instead of print to print each number with a space of 4 units. You can change it back to print if you wish so.
Use of Integer.MAX_VALUE and/or Integer.MIN_VALUE is not required at all to solve this problem.
Feel free to comment in case of any doubt.
|
{
"pile_set_name": "StackExchange"
}
|
Q:
How to get Sidekiq workers running on Heroku
I've set up Sidekiq with my Rails project. It's running on Heroku with Unicorn. I've gone through all the configuration steps including setting the proper REDISTOGO_URL (as this question references), I've added the following to my after_fork in unicorn.rb:
after_fork do |server,worker|
if defined?(ActiveRecord::Base)
ActiveRecord::Base.establish_connection
Rails.logger.info('Connected to ActiveRecord')
end
Sidekiq.configure_client do |config|
config.redis = { :size => 1 }
end
end
My Procfile is as follows:
web: bundle exec unicorn -p $PORT -c ./config/unicorn.rb
worker: bundle exec sidekiq
Right now I call my worker to perform_async and it adds the task to the queue. In fact in my Sidekiq web interface it says there are 7 items in the queue and it has all of the data there. Yet there are no workers processing the queue and for the life of me, I can't figure out why. If I run
heroku ps
I get the following output:
=== web: `bundle exec unicorn -p $PORT -c ./config/unicorn.rb`
web.1: up 2012/12/09 08:04:24 (~ 9m ago)
=== worker: `bundle exec sidekiq`
worker.1: up 2012/12/09 08:04:08 (~ 10m ago)
Anybody have any idea what's going on here?
Update
Here's the code for my worker class. Yes, I'm aware that the Oj gem has some issues potentially with sidekiq, but figured I'd give it a shot first. I'm not getting any error messages at this point (the workers don't even run).
require 'addressable/uri'
class DatasiftInteractionsWorker
include Sidekiq::Worker
sidekiq_options queue: "tweets"
def perform( stream_id , interactions )
interactions = Oj.load(interactions)
interactions.each{ |interaction|
if interaction['interaction']['type'] == 'twitter'
url = interaction['links']['normalized_url'] unless interaction['links']['normalized_url'][0].nil?
url = interaction['links']['url'] if interaction['links']['normalized_url'][0].nil?
begin
puts interaction['links'] if url[0].nil?
next if url[0].nil?
host = Addressable::URI.parse(url[0]).host
host = host.gsub(/^www\.(.*)$/,'\1')
date_str = Time.now.strftime('%Y%m%d')
REDIS.pipelined do
# Add domain to Redis domains Set
REDIS.sadd date_str , host
# INCR Redis host
REDIS.incr( host + date_str )
end
rescue
puts "ERROR: Could not store the following links: " + interaction['links'].to_s
end
end
}
end
end
A:
My preference is to create a /config/sidekiq.yml file and then use worker: bundle exec sidekiq -C config/sidekiq.yml in your Procfile.
Here's an example sidekiq.yml file
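In case the linked example disappears, a minimal config/sidekiq.yml along those lines might look like this (queue names match this question; the weights and concurrency are illustrative):

```yaml
# config/sidekiq.yml (illustrative values)
:verbose: false
:concurrency: 5
:queues:
  - [tweets, 1]
  - [default, 1]
```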
A:
Figured out that if you're using a custom queue, you need to make Sidekiq aware of that queue in the Procfile, as follows:
worker: bundle exec sidekiq -q tweets,1 -q default
I'm not quite sure why this is the case since sidekiq is aware of all queues. I'll post this as an issue on the Sidekiq project.
|
{
"pile_set_name": "StackExchange"
}
|
Q:
Run at Element's Height
I need some JavaScript to run when the user scrolls past a certain element. In this case, the element is <span class="accomp"></span>. The script I put together to handle this works when given a pixel value, but this value may not correspond to the correct area on the page depending on the user's screen resolution, browser, etc.
$(window).scroll(function () {
if ($(window).scrollTop() === $(document).height() - $(window).height() - $( "span.accomp" ).height()) {
meSpeak.speak('hello world', { pitch: 10, speed: 100 });
}
});
The pixel value based script would have the pixels instead of $( "span.accomp" ).height()
A:
Use jQuery's offset().
Get the current coordinates of the first element in the set of matched elements, relative to the document.
var spoken = false;
$(window).scroll(function(){
if(spoken) return;
var scrollTop = $(window).scrollTop();
var elementTop = $("span.accomp").offset().top;
if(scrollTop >= elementTop){
meSpeak.speak('hello world', { pitch: 10, speed: 100 });
spoken = true;
}
});
|
{
"pile_set_name": "StackExchange"
}
|
Q:
linux screen detach and logout
I love the screen utility and I use it extensively on my server, so I set up my .bashrc file to resume my screen session on login. The only function I am missing is something that logs out from the SSH session without the need to detach/close the screen session explicitly.
I was thinking about some script that would run screen -dS "mainScreen"; exit, but that is not possible, as this script obviously continues its execution inside the screen session after the detach instruction and does not affect my SSH session; so the only thing I get is that the screen session is terminated.
Is there a way to do the 'detach and exit' action atomically, leaving screen running and terminating my SSH session?
A:
ssh supports a mechanism by which you can enter input directly to it instead of to the shell on the other end of the connection. That mechanism is enabled when you type the escape key, which can be set with -e and defaults to ~. This is useful for various functions like setting up port forwarding in an already connected session, or terminating the connection. You can type ~? to get a complete list of available commands. In particular, to terminate the session, type:
~.
|
{
"pile_set_name": "StackExchange"
}
|
Q:
Python - automatic filling
The task is this: I have variables filled in via a POST request and processed by the server. Then I want to use these variables to change the date and time on Linux (RPi).
_year = request.form['_year']
_month = request.form['_month']
_day = request.form['_day']
_hour = request.form['_hour']
_minute = request.form['_minute']
_second = request.form['_second']
os.system('date -s "%d %s %d %d:%d:%d"') % (_day, _month, _year, _hour, _minute, _second)
Can you please tell me why "%d %s %d %d:%d:%d" doesn't work?
A:
why doesn't "%d %s %d %d:%d:%d" work?
Because your % operator is outside the parentheses of the call: os.system(..) % ...
You need to write f("%sformat%s" % ("a", 1)) instead of f("%sformat%s") % ("a", 1). Even with that fixed, it is not a good idea to pass arbitrary, unvalidated strings to the shell.
Split your task into two parts:
Parse the date as a datetime object:
from datetime import datetime
dt = datetime.strptime(request.form['date'], '%Y-%m-%d %H:%M:%S')
Invoke the required command using the datetime object you obtained:
import subprocess
subprocess.check_call(['date', '-s', str(dt)])
This lets you verify that the date format was correct before attempting to run the command, and there is no risk of executing an arbitrary command (such as one deleting all files).
date -s may require special privileges (root). This command does not change the output of sudo hwclock --show (the hardware clock). Usually ntp is used if you want to synchronize the time with an external source.
|
{
"pile_set_name": "StackExchange"
}
|
Q:
How to efficiently validate selects in Rails 4
I currently have a few select drop downs in my application. But I have no way of validating the input on the backend. You could essentially change the values and submit it and it would work.
Take this for example:
<%= f.select :gender, [['Male','male'],['Female','female']], required: "true" %>
how can I validate that what they submit is either 'male' or 'female' in the model?
A:
Use validates_inclusion_of
In your model, write:
validates_inclusion_of :gender, :in => %w(Male Female male female)
|
{
"pile_set_name": "StackExchange"
}
|
Q:
Scala passing a case class as a function
which Scala magic allows MyClass to be passed as a function:
trait T
case class MyClass(x: String) extends T
def m(f: (String) => T): Unit = println("working")
m(MyClass)
A:
You aren't passing the case class, you're passing its companion object, also called MyClass. MyClass (the companion) is a Function1[String, T] because the compiler automatically creates the method:
def apply(s: String): MyClass
You can check:
scala> MyClass.isInstanceOf[Function1[String, T]]
res53: Boolean = true
|
{
"pile_set_name": "StackExchange"
}
|
Q:
Stealth address in technical level
Lets say user A wants to send bitcoin to user B stealth address.
What is exactly user B's stealth address? Is it his public key? Since User A needs to multiply his private key with user B's public key based on Elliptic Curve Diffie-Hellman so I assume user A is receiving the complete point which is user B's public key.
User A should calculate PrivA * PublicB, which results in a coordinate. As far as I know this point will be used by multiplying it with PublicB and a nonce, and that will be sent to the network. If what I am saying is correct, how can I multiply this coordinate with PublicB, since both of them are points? Do we only get one axis of this point from the Diffie-Hellman calculation?
A:
What is exactly user B's stealth address? Is it his public key?
In the simplest stealth address scheme, yes. The exact encoding depends on the implementation; DarkWallet's is described in their wiki.
how can I multiply this coordinate with PublicB since both of them are points?
Correct, S = PrivA * PublicB = PrivB * PublicA is a point. We want an integral shared secret. ECDH tells you to use the x coordinate of S as the shared secret, but in Bitcoin it's done differently. Instead int(sha256(compress(S))) is used:
serialize S in compressed form as per SEC 1's Elliptic-Curve-Point-to-Octet-String Conversion
take its SHA-256 digest
interpret the result as a 256-bit big-endian integer
This integer, c, is then used by A to calculate a public key PublicB + c*G to which A spends, and by B to calculate the matching private key PrivB + c with which B can redeem (where G is the base point generator).
Note that there's a problem with this "simplest" scheme. The only way that B's wallet software can see if a UTXO is redeemable by B (to keep track of the wallet's balance) is by doing the math above, which requires PrivB. In other words, it makes watching-only wallets impossible (and cold wallets impractical).
One solution to this is to use one PublicB/PrivB pair for ECDH, but a different PublicB2/PrivB2 pair for deriving the spend/redeem keys. Now you can keep PrivB in your watching-only wallet, and PrivB2 in your cold wallet. On the down side, B's stealth address is now twice as long, containing both PublicB and PublicB2.
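The three hashing steps can be sketched with nothing but hashlib (the 33-byte value below is a placeholder for a real SEC1-compressed point, not actual curve arithmetic):

```python
import hashlib

def shared_secret_int(compressed_point: bytes) -> int:
    # sha256 over the compressed encoding, read big-endian as an integer
    digest = hashlib.sha256(compressed_point).digest()
    return int.from_bytes(digest, "big")

# Placeholder for compress(S): a 0x02/0x03 parity prefix + 32 x-coordinate bytes.
fake_compressed_S = b"\x02" + bytes(32)
c = shared_secret_int(fake_compressed_S)
```

With real curve arithmetic, compressed_point would be 0x02 or 0x03 followed by the 32-byte x coordinate of S = PrivA * PublicB.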
|
{
"pile_set_name": "StackExchange"
}
|
Q:
Time complexity while loop with recursion
void call(int n)
{
for (int j=1;j<=n;j++)
{
call(n/2);
}
}
void main()
{
int i;
for (i=1;i<=n;i++)
{
call(i);
}
}
For the time complexity of this loop, is this thought process correct? In the main function, the loop is O(N). In the call function, the loop is O(N) and the recursion halves n, therefore O(log N) with base 2. So the overall time complexity in main is O(N)*[O(N)*O(log N)] = O(N^2 log N)?
A:
You can use a recursion tree to figure out the number of calls; the order of the recursive function equals the number of nodes in the recursion tree. Each call(n) node has n children, one call(n/2) per loop iteration, so depth k of the tree holds n * (n/2) * (n/4) * ... * (n/2^(k-1)) = n^k / 2^(k(k-1)/2) nodes.
To count all nodes, sum these terms for k = 0 ... log2(n); the sum is dominated by its deepest level, which works out to roughly n^((log2(n)+1)/2), i.e. quasi-polynomial, so an O(N log N) estimate for call() is far too low.
The order of the main loop is at most n times the order of call(n), since it runs call(i) for i = 1 ... n and call(n) is the most expensive of those calls.
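As a sanity check on the recursion-tree argument: call(n) makes n recursive calls on n/2, so the invocation count C(n) obeys C(n) = 1 + n·C(n/2) (with integer division, as in the code). A memoized counter (a sketch, Python for convenience) shows how fast this grows:

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def calls(n):
    # Total invocations of call() triggered by call(n), counting call(n)
    # itself; mirrors the C++ code, where n/2 is integer division.
    if n == 0:
        return 1
    return 1 + n * calls(n // 2)

total_for_main = sum(calls(i) for i in range(1, 17))  # main() with n = 16
```

calls(16) alone is already 2705 invocations, which makes clear that the growth is much faster than n log n.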
|
{
"pile_set_name": "StackExchange"
}
|
Q:
AngularJS - How to pass data through nested (custom) directives from child to parent
I am looking to find the best way of sending scope through nested directives.
I have found that you can do $scope.$parent.value, but I understood that's not a best practice and should be avoided.
So my question is, if I have 4 nested directives like below, each with it's own controller where some data is being modified, what's the best way to access a value from directive4 (let's say $scope.valueFromDirective4) in directive1?
<directive1>
<directive2>
<directive3>
<directive4>
</directive4>
</directive3>
</directive2>
</directive1>
A:
For the "presentational" / "dumb" components (directive3 and directive4), I think they should each take in a callback function which they can invoke with new data when they change:
scope: {
// Invoke this with new data
onChange: '&',
// Optional if you want to bind the data yourself and then call `onChange`
data: '='
}
Just pass the callback down from directive2 through directive4. This way directive3 and directive4 are decoupled from your app and reusable.
If they are form-like directives (similar to input etc), another option is to look into having them require ngModel and have them use ngModelController to update the parent and view. (Look up $render and $setViewValue for more info on this). This way you can use them like:
<directive4 ng-model="someObj.someProp" ng-change="someFunc()"></directive4>
When you do it like this, after the model is updated the ng-change function is automatically invoked.
For the "container" / "smart" directives (directive1 and directive2), you could also have directive2 take in the callback which is passed in from directive1. But since directive1 and directive2 can both know about your app, you could write a service which is injected and shared between directive1 and directive2.
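Stripped of Angular specifics, the pattern is just a callback threaded through the layers (a plain-JavaScript sketch; the function and property names are made up for illustration):

```javascript
// Each "directive" is modeled as a function that forwards the
// onChange callback it was given.
function directive4(onChange) {
  // The deepest "dumb" component produces data and reports it upward.
  onChange({ valueFromDirective4: 42 });
}
function directive3(onChange) { directive4(onChange); }
function directive2(onChange) { directive3(onChange); }

// directive1 owns the handler and receives the data without any
// intermediate layer knowing what the data means.
let received = null;
directive2(function (data) { received = data; });
```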
|
{
"pile_set_name": "StackExchange"
}
|
Q:
Read a file with unicode characters
I have an asp.net c# page and am trying to read a file that contains the following character ’ and convert it to '. (From slanted apostrophe to apostrophe).
FileInfo fileinfo = new FileInfo(FileLocation);
string content = File.ReadAllText(fileinfo.FullName);
//strip out bad characters
content = content.Replace("’", "'");
This doesn't work and it changes the slanted apostrophes into ? marks.
A:
I suspect that the problem is not with the replacement, but rather with the reading of the file itself. When I tried this the naive way (using Word and copy-paste) I ended up with the same results as you; however, examining content showed that the .Net framework believed the character was Unicode character 65533, i.e. the "WTF?" character, before the string replacement. You can check this yourself by examining the relevant character in the Visual Studio debugger, where it should show the character code:
content[0]; // 65533 '�'
The reason why the replace isn't working is simple - content doesn't contain the string you gave it:
content.IndexOf("’"); // -1
As for why the file reading isn't working properly - you are probably using the wrong encoding when reading the file. (If no encoding is specified then the .Net framework will try to determine the correct encoding for you, however there is no 100% reliable way to do this and so often it can get it wrong). The exact encoding you need depends on the file itself, however in my case the encoding being used was Extended ASCII, and so to read the file I just needed to specify the correct encoding:
string content = File.ReadAllText(fileinfo.FullName, Encoding.GetEncoding("iso-8859-1"));
(See this question).
You also need to make sure that you specify the correct character in your replacement string - when using "odd" characters in code you may find it more reliable to specify the character by its character code, rather than as a string literal (which may cause problems if the encoding of the source file changes), for example the following worked for me:
content = content.Replace("\u0092", "'");
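The failure mode is easy to reproduce outside .NET (Python here for brevity; the mechanics are the same, and Windows-1252 is used because strict Latin-1 cannot encode ’ at all):

```python
# The curly apostrophe (U+2019) is byte 0x92 in Windows-1252; decoding that
# byte with the wrong codec yields U+FFFD (65533), so Replace("'", ...) on
# the curly quote can never match.
raw = "it\u2019s".encode("cp1252")              # b"it\x92s" on disk
bad = raw.decode("utf-8", errors="replace")     # wrong guess: "it\ufffds"
good = raw.decode("cp1252")                     # correct codec: "it\u2019s"
fixed = good.replace("\u2019", "'")             # now the replace works
```

The same two steps apply in C#: read with the right Encoding first, and only then does the Replace succeed.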
|
{
"pile_set_name": "StackExchange"
}
|
Q:
Plotting results having only mean and standard deviation
I am trying to visualize an appropriate plot for the observations in this table of means and standard deviations of recall scores:
\begin{array} {c|c c|c c|}
& \text{Control} & & \text{Experimental} & \\
& \text{Mean} & \text{SD} &\text{Mean} &\text{SD} \\
\hline
\text{Recall} & 37 & 8 & 21 & 6 \\
\hline
\end{array}
What is the best way to do that? Is a bar chart a good way to do it? How can I illustrate the standard deviation in that case?
A:
Standard deviation on bar graphs can be illustrated by including error bars in them.
The visualization (source) below is an example of such a visualization:
From a discussion in the comments below, having only the error whiskers instead of the error bars setup seems a better way to visualize such data. So, the graph can look somewhat like this:
A:
I'd suggest a dot plot:
Although there is still some room for improvement (perhaps dimming the edges of the big rectangle surrounding the data), almost all of the ink is being used to display information.
|
{
"pile_set_name": "StackExchange"
}
|
Q:
Creating a line break within vCard format
I am creating a vCard and among other things am including a note, like so
[vCardArray addObject:[NSString stringWithFormat:@"NOTE:%@", note]];
Then, as is standard procedure, creating a string from the array using "\n", like so
NSString *string = [vCardArray componentsJoinedByString:@"\n"];
The conflict arising is that my note string has a "\n" in it, which messes up the vCard representation. I have tried using "\r" both as the separator and within my note string, but to no avail.
Is there a way around this? I'd like to add a line of extra space in my note. Thanks in advance!
A:
Newlines are represented as \n in vCard property values (as in: a backslash character followed by an n character). So, you would have to replace all newline characters with \n before assigning the string to the NOTE property.
BEGIN:VCARD
VERSION:3.0
NOTE:Line 1\nLine 2\nLine 3
END:VCARD
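A sketch of that replacement (Python for brevity; per RFC 2426, backslash, comma and semicolon need escaping as well, and backslashes must be handled first):

```python
def escape_vcard(value):
    # vCard 3.0 (RFC 2426) escaping: backslash first, then newline,
    # comma and semicolon. Newlines become the two characters \n.
    return (value.replace("\\", "\\\\")
                 .replace("\n", "\\n")
                 .replace(",", "\\,")
                 .replace(";", "\\;"))

note = "Line 1\nLine 2\n\nLine 4"   # blank line adds the extra space
escaped = escape_vcard(note)
```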
|
{
"pile_set_name": "StackExchange"
}
|
Q:
How to eliminate variables "panel1, panel2, panel3 .. etc." in Delphi?
i have this type
type
TMain = class(TForm)
panel1: Tpanel;
panel2: Tpanel;
panel3: Tpanel;
panel4: Tpanel;
panel5: Tpanel;
panel6: Tpanel;
panel7: Tpanel;
panel8: Tpanel;
......
panel45: Tpanel;
label1: TLabel;
label2: TLabel;
label3: TLabel;
label4: TLabel;
label5: TLabel;
label6: TLabel;
label7: TLabel;
...........
label109: TLabel;
How can I handle all these components without listing each one inside the type declaration?
Thank you ...
UpDate....
Based on the answer I accepted, it works great when I have all these components and run the actions, like a Button1.Click, from the main form...
But i use to make the actions from units... so
When I click a button I call a procedure DoMaths(Sender: TObject);
procedure Tform1.DoMaths(Sender: TObject);
begin
if TButton(Sender).Hint = 'Make the standard Package' then
do_Maths_standard_package;
end;
The do_Maths_standard_package is in unit ComplexMaths.
The procedure do_Maths_standard_package from unit ComplexMaths references some components from Form1... like Form1.Label1 etc...
So when I call RegisterClass(TLabel) and erase the TLabel fields from the type, it gives an error that it can't find Label1...
Please, can someone help me so I don't have to redo the whole program from scratch...
Thank you again...
A:
You can delete the name of a TPanel or TLabel; then it only exists in the Controls list, not in the type declaration of the form. You either need to leave one Label and one panel or
add:
initialization
RegisterClass(TPanel);
RegisterClass(Tlabel);
end.
at the end of the form.
This makes forms with a lot of controls much neater.
A:
Use the TForm.Controls array:
var
i: Integer;
Pnl: TPanel;
begin
for i := 0 to ControlCount - 1 do
if Controls[i] is TPanel then
begin
Pnl := TPanel(Controls[i]);
Pnl.Caption := 'This is panel ' + IntToStr(i);
end;
end;
Delphi automatically creates two lists for each TWinControl:
Controls contains a list of all TControl items the control contains.
Components is a list of all the TComponents on a control.
Note that all Controls are Components, but not all Components are Controls; that's why there are two lists. (A TDataSet, for instance, is in the Components list, but not in the Controls list; a TEdit, on the other hand, will be in both.)
You can use the same technique to iterate through the components on a panel or other container as well - TPanel has both Control and Component arrays, for instance.
If what you actually want is to reduce the number of items inside the type declaration itself, create them at runtime instead - Delphi will automatically add them to the arrays based on the Owner and Parent:
procedure TForm1.FormCreate(Sender: TObject);
var
i: Integer;
Panel: TPanel;
Lbl: TLabel; // 'label' is a reserved word in Delphi, so use another name
begin
for i := 0 to 10 do
begin
Panel := TPanel.Create(Self); // Set who frees it
Panel.Parent := Self; // Set display surface
Panel.Align := alTop;
Panel.Name := Format('Panel%d', [i]); // Not necessary
Panel.Caption := Panel.Name;
// Add a label on each panel, just for fun.
Lbl := TLabel.Create(Panel); // Panel will free label
Lbl.Parent := Panel; // Label will show on panel
Lbl.Top := 10;
Lbl.Left := 10;
Lbl.Name := Format('Label%d', [i]);
Lbl.Caption := Lbl.Name; // Not necessary
end;
end;
Note that creating them yourself is not an "optimization", as it just shifts the loading from the VCL doing it to you doing it yourself. It will reduce the size of the .dfm file, but won't speed up your code or loadtime any, and it means you can't visually lay out the form as well. (It's also much harder to maintain your code, because your controls have meaningless names. It's much easier to know what LastNameEdit or EditLastName is than Edit1 when you read the code 6 months from now.)
|
{
"pile_set_name": "StackExchange"
}
|
Q:
Independent versus dependent event
Problem 1: Let $A$ be the event of getting dealt a flush. Let $B$ be the event of getting dealt a hand in which at least two cards have the same face value (2-10, jack, queen, king, ace). Are $A$ and $B$ independent?
Problem 2: Roll two dice, and let $A$ be the event that the dice match, and let $B$ be the event that the dice sum to $8$. Are $A$ and $B$ independent?
I found that $P(A|B) \not= P(A)$ for both, but I'm not sure that I did my calculations correctly.
$P(A|B) = P(0.00198*1)/P((13/52)*(12/51))$
$P(A|B) = P((6/36)*(1/6))/P(5/36)$
A:
First problem: We have $\Pr(A)\ne 0$ and $\Pr(B)\ne 0$. But clearly $\Pr(A\cap B)=0$.
Thus $\Pr(A\cap B)\ne \Pr(A)\Pr(B)$. It follows that $A$ and $B$ are not independent.
Second problem: We have $\Pr(A)=\frac{1}{6}$ and $\Pr(B)=\frac{5}{36}$.
The event $A\cap B$ occurs precisely if we get double-$4$ on the dice. This has probability $\frac{1}{36}$.
Note that we have $\Pr(A\cap B)\ne\Pr(A)\Pr(B)$, so the events are not independent.
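The second problem is small enough to verify by brute force with exact fractions (a quick sketch, not part of the original answer):

```python
from fractions import Fraction
from itertools import product

outcomes = list(product(range(1, 7), repeat=2))  # 36 equally likely rolls

def prob(event):
    # exact probability of an event over the uniform sample space
    return Fraction(sum(1 for o in outcomes if event(o)), len(outcomes))

p_a = prob(lambda o: o[0] == o[1])                    # dice match
p_b = prob(lambda o: o[0] + o[1] == 8)                # sum to 8
p_ab = prob(lambda o: o[0] == o[1] and sum(o) == 8)   # double-4 only
```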
|
{
"pile_set_name": "StackExchange"
}
|
Q:
Exploring the locations of monsters in a Minecraft map
I am trying to "debug" my large monster spawning room. I am aware that all surrounding caves must be lit, but my trap still seems to wind down after a few minutes. I would ideally like to use a tool to figure out where the monsters are so that I can light up the remaining areas. Does such a tool exist?
A:
If you're okay with using a mod, you can install zombe's modpack. There is a mod called "cheat" in there, and it has a feature that highlights all mobs through walls with a colored marker. Here's what that looks like in-game:
It works in multiplayer as well if you need that support.
If you don't like installing mods, you can always use MCEdit to explore your world, but development has become a bit stagnated, and you'll need to figure out how to compile a dev version to get it to work with MC 1.1.
|
{
"pile_set_name": "StackExchange"
}
|
Q:
Error while reading data from a file into a structure
I am collecting data from a file; from the file I want to take the DNI and the name, and the error appears when I try to store the names in separate slots using Nombre[i].
Code:
#include <stdio.h>
#include <string.h>
#define MAX 100
int main(){
FILE *f1;
char Linea1[200],Linea2[200],Apellido[14],linea[40];
char Nombre[70];
int DNI[9];
int i=0;
f1=fopen("h.txt","r");
fgets(Linea1,200,f1);
fgets(Linea2,200,f1);
while(!feof(f1)){
do{
i=0;
fscanf(f1,"%s %s %[^\n]",DNI[i],Nombre[i],linea);
i++;
}
while(linea==NULL);
}
fclose(f1);
}
I am also including the file, in case it helps you locate the error:
Fecha Examen: 2018/08/09
Numero de pruebas: 6
23321223D Markel Zubieta 4.47 3.06 5.09 5.11 8.18 7.95 6.44 3.79 8.12 5.33 2000/2/3
13080976G Antonio Gonzalez Perez 3.51 2.08 3.01 4.71 1943/12/21
34235676F Jose Luis Martinez Garcia 2.26 1.85 9.05 9.80 4.39 1986/2/29
X345432Y Victor Mayo 2.58 4.09 5.9 2.6 6.3 7.1 4.24 5.08 2000/11/12
20205632S Silva Martinez Fernandez 0.86 2.62 8.01 0.9 9.2 7 4.89 5.79 2.37 7 1970/04/10
A:
The error is this:
fscanf(f1,"%s %s %[^\n]",DNI[i],Nombre[i],linea);
DNI is declared as an integer, but you are reading it as characters. Nombre is an array of characters with room for just one name. What you want is to store several DNIs and names. For that I will use a structure that groups them together, and change the type of DNI to char:
#define MAX_ALUMNOS 30
struct {
char Nombre[70];
char DNI[10];
} curso[MAX_ALUMNOS];
This way I have a course with students, and each student has a DNI of 9 characters + '\0' and a name of 69 characters + '\0'.
Another problem is that you are resetting the index 'i' on every iteration. I will rename that variable to 'alumno' (student), since meaningful names aid code comprehension.
With all that, it looks like this:
#include <stdio.h>
#include <string.h>
#define MAX_ALUMNOS 100
int main() {
FILE *f1;
char Linea1[200], Linea2[200], Apellido[14], linea[40];
struct {
char Nombre[70];
char DNI[10];
} curso[MAX_ALUMNOS];
f1 = fopen("h.txt", "r");
fgets(Linea1, 200, f1);
fgets(Linea2, 200, f1);
int alumno = 0;
while (!feof(f1)) {
fscanf(f1, "%s %s %[^\n]", curso[alumno].DNI, curso[alumno].Nombre, linea);
alumno++;
}
fclose(f1);
for (int i = 0; i < alumno; i++) {
printf("%d %s %s\n", i, curso[i].DNI, curso[i].Nombre);
}
}
Note: there is no error handling.
Demonstration
With the provided file, execution produces:
0 23321223D Markel
1 13080976G Antonio
2 34235676F Jose
3 X345432Y Victor
4 20205632S Silva
Only the first name was read, because that is what scanf does when it finds "%s": it reads up to the first space.
|
{
"pile_set_name": "StackExchange"
}
|
Q:
ld: link with std c static library
I compile my C code with gcc -c -nostdlib -fno-stack-protector <my code> -o <my cobj>, and I want to use the standard library functions like sprintf, strcmp and so on. So how can I link my cobj files with the static C standard library?
My Makefile link script is ld -T [email protected] -o [email protected] $^ -L.. -llib --no-check-sections
PS: I compile with the -nostdlib option because I don't want the CRT part of the standard library, but I still want to use the platform-independent functions like sprintf, strcmp, random, va_list and so on
A:
You can compile with -nostartfiles -static -nostdlib -fno-stack-protector -lc but beware that some parts of libc may have dependencies on pieces from libgcc (__gcc_personality_v0, etc.) so you'll most probably have errors during linking.
You can provide your own dummy (or not so dummy) implementations of such functions. Or you could just use a different libc implementation which does not depend on libgcc (probably newlib or uClibc).
This question may be relevant.
|
{
"pile_set_name": "StackExchange"
}
|
Q:
Write to file via jenkins post-groovy script on slave
I'd like to do something very simple: Create/write to a file located in the remote workspace of a slave via the jenkins groovy post-build script plug-in
def props_file = new File(manager.build.workspace.getRemote() + "/temp/module.properties")
def build_num = manager.build.buildVariables.get("MODULE_BUILD_NUMBER").toInteger()
def build_props = new Properties()
build_props["build.number"] = build_num
props_file.withOutputStream { p ->
build_props.store(p, null)
}
The last line fails, as the file doesn't exist. I'm thinking it has something to do with the output stream pointing to the master executor, rather than the remote workspace, but I'm not sure:
Groovy script failed:
java.io.FileNotFoundException: /views/build_view/temp/module.properties (No such file or directory)
Am I not writing to the file correctly?
A:
While writing onto slave you need to check the channel first and then you can successfully create a file handle and start reading or writing to that file:
if(manager.build.workspace.isRemote())
{
channel = manager.build.workspace.channel;
}
fp = new hudson.FilePath(channel, manager.build.workspace.toString() + "\\test.properties")
if(fp != null)
{
String str = "test";
fp.write(str, null); //writing to file
versionString = fp.readToString(); //reading from file
}
hope this helps!
A:
Search for words The post build plugin runs on the manager and doing it as you say will fail if you are working with slaves! on the plugin page (the link to which you've provided) and see if the workaround there helps.
Q:
Do people grow shorter as they age, how common is it, and what triggers it?
Is it true that people become gradually shorter as they age?
If it is, is it more common in one gender than the other?
And is it known what triggers this? (Considering that the "shortening" probably does not start immediately after the person stops growing?)
A:
Short answer: Yes, we do shrink with age.
The most important reason is that the cartilage in the joints between our bones gets worn out and thinner, as well as disks between the vertebrae of the spine. This results in a compression of the spine and also to a loss in height. Shrinking bones due to osteoporosis can also play a role, as well as muscular atrophy due to ageing.
According to "The Baltimore Longitudinal Study of Aging" (reference 1), adults start shrinking around the age of 30, by 5-8 mm per decade. That is relatively soon after you completely stop growing. The process accelerates with increasing age; the study says it speeds up after 70.
Men and women do not shrink equally: men lose around 3 cm between the ages of 30 and 70 and a total of 5 cm by the age of 80, while women lose about 5 cm between 30 and 70 and a total of 4.5 cm by the age of 80 (these are average numbers from reference 1). Reference 2 also has a nice summary of the original article.
This effect can be attenuated by exercising regularly, as the study listed in reference 3 shows.
References:
Longitudinal Change in Height of Men and Women: Implications for Interpretation of the Body Mass Index: The Baltimore Longitudinal Study of Aging
Yes, You Are Getting Shorter
Role of physical activity training in attenuation of height loss through aging.
Q:
sql query to delete parent table rows which are not used in child table
My question title says exactly what I need help with.
Table defination & relation:
I tried the query below but it does not delete the row "jl" from the measuring_units table. I want to delete the row with id 62 in the "measuring_units" table, because it is not being used in the "food_units_relation" table.
this one i tried:
DELETE t2
FROM food_units_relation t1 JOIN measureing_units t2
ON t1.measuring_unit_id = t2.measuremnt_id
WHERE t1.foodUnit_relation_Id = 17 and t2.measuremnt_id NOT IN(t1.measuring_unit_id) and t2.creater_id=1;
A:
DELETE t2
FROM measureing_units t2
LEFT JOIN food_units_relation t1 ON t1.measuring_unit_id = t2.measuremnt_id
WHERE t1.measuring_unit_id is null and t2.creater_id = 1;
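A quick end-to-end check of the anti-join idea, using SQLite's standard-library driver (table and column names copied from the question; since SQLite's DELETE cannot join, the same filter is written with NOT EXISTS):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE measureing_units (measuremnt_id INTEGER, name TEXT, creater_id INTEGER)")
con.execute("CREATE TABLE food_units_relation (measuring_unit_id INTEGER)")
con.executemany("INSERT INTO measureing_units VALUES (?, ?, ?)",
                [(61, "kg", 1), (62, "jl", 1)])
con.execute("INSERT INTO food_units_relation VALUES (61)")  # 62 stays unreferenced

# Delete parent rows that no child row points at
con.execute("""
    DELETE FROM measureing_units
    WHERE creater_id = 1
      AND NOT EXISTS (SELECT 1 FROM food_units_relation f
                      WHERE f.measuring_unit_id = measureing_units.measuremnt_id)
""")
print([r[0] for r in con.execute("SELECT measuremnt_id FROM measureing_units")])  # [61]
```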
Q:
animation like flipboard iPhone app
I need to implement the animation same as appeared at the starting of flipboard iPhone app. I found couple of links for that:
https://github.com/mpospese/MPFoldTransition
https://github.com/Dillion/iOS-Flip-Transform
https://github.com/Reefaq/FlipView
But these all are for the simple view not with the gallery. Can anyone give me some idea that how to implement the flip animation with gallery?
A:
Have you tried this https://github.com/mtabini/AFKPageFlipper
Provides following data source
-numberOfPagesForPageFlipper: returns the number of pages to be displayed.
-viewForPage:inFlipper: returns the particular view to be displayed in the flipper for a given page number.
Hope it helps
Q:
Why do you think responses to good questions come so quickly and with great quality?
I realize that there are questions that don't get quick responses or good answers. But, Why do you think responses to good questions come so quickly and with great quality?
A:
Because the people who have all of the answers are 'trolling' (in the fishing sense) the site for questions they can answer.
Wow, people don't like the word trolling. What I picture when I see the word trolling is someone sitting on a boat, slowly driving around a lake looking for fish.
So, when I apply that word to the internet, I mean someone continuously looking through the lake (the sites) for new fish (questions) to catch (answer).
I'm from Minnesota, so, maybe thats why I have a different sense of the word trolling. (We fish a lot here)
A:
I think there's some human psychology at work here.
I think it's a combination of some of the following factors...
Boost to self-esteem from being seen to have the right answer (in public)
Boost to self-esteem from being perceived as an "expert" on something
The element of competing against your peers (through reputation points)
The good feeling you get when you've helped someone else
The novelty value of this kind of community-based approach (might wear off in time as more similar sites get set up)
The fear of losing out on "easy" reputation points to another user for a question you know something about (accounts for the quick answers, but is driven by the reasons mentioned above)
The fear of looking stupid in front of your peers, some of whom may know you personally in the real world (hence high-quality answers, and the need to keep those answers high-quality consistently)
The pain involved in setting up a new account if you make yourself look like an idiot on the current one
I've just brainstormed a few answers there. It basically comes down to some combination of pleasure/pain linked in with the approval of others...all powerful human drivers.
I think these sites play on that combination of base human instincts pretty well and it's quite easy to get drawn in. Hence you get many "addicts" hopping on every question they can answer, and usually providing well-regarded answers.
A:
Stackoverflow is like playing Monopoly.
I've found that answering quickly is a huge advantage. After several attempts at answering with really good answers, I found I was being outdone by people with rather simplistic (and often incorrect) answers. Amazingly, sometimes the questioner had awarded the checkmark to some lame answer while I was in my second or third paragraph.
I realized you just want to throw something up there quickly to "buy" the real estate. I'll often answer now with a very short sentence. The down side to this is that you can get early downvotes.
Then you upgrade the real estate by building a nice answer on it. The downvoters then look like dorks.
Q:
How to use Int64 in C#
The question is easy! How do you represent a 64 bit int in C#?
A:
64 bit int is long
A:
System.Int64 is the .net type, in C# it's also called long
A:
A signed 64 bit integer is long, an unsigned is ulong.
The corresponding types in the framwwork are System.Int64 and System.UInt64, respectively.
Example:
long bigNumber = 9223372036854775807;
Q:
How get ASCII characters similar to output of the linux command 'tree'?
Is there a way to print the characters '├── ' and '└──' using a bash or perl script?
I want it to be exactly like the output of 'tree'.
[root@localhost www]# tree -L 1
.
├── cgi-bin
├── error
├── html
└── icons
A:
They look to be a part of the extended ASCII codes. You can see them here http://www.asciitable.com/
Also available at the Unicode table here: http://unicode-table.com/en/sections/box-drawing/
I believe 192, 195 and 196 are the ones you are after.
In Unicode they are U+251C, U+2500 and U+2514.
EDIT
Found a related stack question here, which recommends printing in Unicode.
What you want is to be able to print unicode, and the answer is in perldoc perluniintro.
You can use \x{nnnn} where n is the hex identifier, or you can do \N{...} with the name:
perl -E 'say "\x{2514}"; use charnames; say "\N{BOX DRAWINGS LIGHT UP AND RIGHT}"'
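For comparison, the same light box-drawing characters can be emitted directly from their code points in Python:

```python
# U+251C "├", U+2500 "─", U+2514 "└": the glyphs in tree's output
tee, dash, corner = "\u251c", "\u2500", "\u2514"
print(tee + dash * 2 + " cgi-bin")   # ├── cgi-bin
print(corner + dash * 2 + " icons")  # └── icons
```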
A:
echo -e "\0342\0224\0224\0342\0224\0200\0342\0224\0200 \033[01"
Q:
What is the advantage of Raw Notifications over a normal Request to a server?
I know where Tile Notification and Toast Notification come in handy, when your app is inactive, but what is the advantage of Raw Notifications over the next line?
WebClient( ).DownloadStringAsync( );
A:
Raw notifications allow you to push custom data on-demand to your application while it's running. WebClient.DownloadStringAsync allows you to pull data from a server.
A:
A raw notification is probably best used alongside a pull request in a lot of cases. I would tend to use a notification purely to tell a running client that something has happened: that could be anything from "a new message has arrived server-side" (in which case you might want to send out the message in the notification) to "the whole of the data set on the server has been updated, you'd better come along when you can and get the latest changes".
As mentioned, you can use the raw notification to send out all the required information if it is small enough, but on other occasions you will most likely just be telling the app that it needs to call home for some action. Your choice would have to be based on the expected size of the payload. From MSDN
The maximum size of a notification is 1 KB for the header and 3 KB for the payload.
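Given that limit, the push-vs-pull decision reduces to a size check; a minimal sketch (the 3 KB figure is the MSDN payload limit quoted above, everything else is illustrative):

```python
MAX_RAW_PAYLOAD = 3 * 1024  # 3 KB payload limit for a raw notification

def fits_in_raw_notification(payload: bytes) -> bool:
    # Small data can be pushed inline; larger data should just signal
    # the client to pull the full content from the server.
    return len(payload) <= MAX_RAW_PAYLOAD

print(fits_in_raw_notification(b"new-message:42"))  # True
print(fits_in_raw_notification(b"x" * 10_000))      # False
```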
Q:
Why does SSDT deploy inline FK Constraint as WITH NOCHECK?
Given the following Foreign Key Constraint definition inline of a Create Table definition in a SSDT project:
CREATE TABLE A
(...Columns...),
CONSTRAINT [FK_O] FOREIGN KEY ([OID]) REFERENCES [dbo].[O] ([ID]) ON DELETE CASCADE NOT FOR REPLICATION,
(...)
and the following quote from MSDN
WITH CHECK | WITH NOCHECK
Specifies whether the data in the table is or is not validated against a newly added or re-enabled FOREIGN KEY or CHECK constraint. If not specified, WITH CHECK is assumed for new constraints, and WITH NOCHECK is assumed for re-enabled constraints.
, I wonder why the FIRST deployment result of this NEW table does look like that using a constraint WITH NOCHECK option as follows:
ALTER TABLE [dbo].[A] WITH NOCHECK ADD CONSTRAINT [FK_O] FOREIGN KEY([OID])
REFERENCES [dbo].[O] ([ID])
ON DELETE CASCADE
NOT FOR REPLICATION
GO
ALTER TABLE [dbo].[A] CHECK CONSTRAINT [FK_O]
GO
So the question is, shouldn't the inline Constraint definition that does not allow to define "with Nocheck" paired with the rule from the MSDB page result into a constraint using the WITH CHECK option on this NEW table? Why do we get a WITH NOCHECK Constraint here?
A:
NOT FOR REPLICATION means that the constraint is never checked, even if you check it explicitly.
Probably, SSDT here is making explicit something that is implicit.
Q:
How to transform xml to sql query using XSLT?
I have an xml file and i want to create sql query from it.
like below
OUTPUT NEEDED:
INSERT INTO 'table' values
('2','0','SUGAR_QTY','2','2')
('2','0','OIL_QTY','5','1')
<recConfig>
<default_rec>
<recid>
2
</recid>
<recname>
WHITE_BREAD
</recname>
<description>
</description>
<accesslevel>
0
</accesslevel>
<parameter>
<parameterCode>
SUGAR_QTY
</parameterCode>
<parameterValue>
2
</parameterValue>
<ordinal>
2
</ordinal>
</parameter>
<parameter>
<parameterCode>
OIL_QTY
</parameterCode>
<parameterValue>
5
</parameterValue>
<ordinal>
1
</ordinal>
</parameter>
</default_rec>
</recConfig>
THIS IS MY XSLT FILE
<?xml version="1.0" encoding="UTF-8" ?>
<xsl:transform xmlns:xsl="http://www.w3.org/1999/XSL/Transform" version="1.0">
<xsl:output method="text" encoding="UTF-8" indent="yes" />
<xsl:template match="/">
<xsl:text>"INSERT INTO 'recipe_parametrs'</xsl:text>
<xsl:text> values</xsl:text>
<xsl:for-each select="recConfig/default_rec">
<xsl:text>
</xsl:text>
<xsl:text> ('</xsl:text>
<xsl:value-of select="normalize-space(recid)"/>
<xsl:text>','</xsl:text>
<xsl:value-of select="normalize-space(accesslevel)"/>
<xsl:text>','</xsl:text>
<xsl:for-each select="parameter">
<xsl:value-of select="normalize-space(parameterCode)"/>
<xsl:text>','</xsl:text>
<xsl:value-of select="normalize-space(parameterValue)"/>
<xsl:text>','</xsl:text>
<xsl:value-of select="normalize-space(ordinal)"/>
<xsl:text>','</xsl:text>
</xsl:for-each>
<xsl:text>')</xsl:text>
</xsl:for-each>
<xsl:text>;</xsl:text>
</xsl:template>
</xsl:transform>
The output that I get is this
"INSERT INTO 'recipe_parametrs' values
('2','0','SUGAR_QTY','2','2','OIL_QTY','5','1','');
I just started learning xslt yesterday.
I dont know how separate SUGAR_QTY and OIL_QTY to different queries like shown in OUTPUT NEEDED: .
I understand that nested for loop is not the right way to do it. I know that the inside for loop should be replaced with some other logic. I am not sure what to do .
Can anyone please guide me on how to do it.
Thanks.
A:
How about something like:
XSLT 1.0
<xsl:stylesheet version="1.0"
xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
<xsl:output method="text" encoding="UTF-8" indent="yes" />
<xsl:template match="/recConfig">
<xsl:text>INSERT INTO 'table' values </xsl:text>
<xsl:for-each select="default_rec">
<xsl:variable name="recid" select="recid" />
<xsl:variable name="accesslevel" select="accesslevel" />
<xsl:for-each select="parameter">
<xsl:text>(</xsl:text>
<xsl:for-each select="$recid | $accesslevel | parameterCode | parameterValue | ordinal">
<xsl:text>'</xsl:text>
<xsl:value-of select="normalize-space(.)"/>
<xsl:text>'</xsl:text>
<xsl:if test="position() != last()">
<xsl:text>,</xsl:text>
</xsl:if>
</xsl:for-each>
<xsl:text>) </xsl:text>
</xsl:for-each>
</xsl:for-each>
</xsl:template>
</xsl:stylesheet>
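If XSLT is not a hard requirement, the same INSERT text can be produced as a cross-check with Python's standard ElementTree (the XML below is a trimmed copy of the question's input):

```python
import xml.etree.ElementTree as ET

xml_src = """<recConfig><default_rec>
<recid>2</recid><recname>WHITE_BREAD</recname><accesslevel>0</accesslevel>
<parameter><parameterCode>SUGAR_QTY</parameterCode>
<parameterValue>2</parameterValue><ordinal>2</ordinal></parameter>
<parameter><parameterCode>OIL_QTY</parameterCode>
<parameterValue>5</parameterValue><ordinal>1</ordinal></parameter>
</default_rec></recConfig>"""

root = ET.fromstring(xml_src)
rows = []
for rec in root.findall("default_rec"):
    recid = rec.findtext("recid").strip()
    access = rec.findtext("accesslevel").strip()
    for p in rec.findall("parameter"):
        fields = [recid, access,
                  p.findtext("parameterCode").strip(),
                  p.findtext("parameterValue").strip(),
                  p.findtext("ordinal").strip()]
        rows.append("(" + ",".join("'%s'" % f for f in fields) + ")")

print("INSERT INTO 'table' values")
print("\n".join(rows))
```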
Q:
In Pandas, how to calculate the probability of a set of values in one column given a set of values of another column?
I have a DataFrame in which the rows represent traffic accidents. Two of the columns are Weather and Skidding:
import pandas as pd
df = pd.DataFrame({'Weather': ['rain', 'fine', 'rain', 'fine', 'snow', 'fine', 'snow'],
'Skidding': ['skid', 'skid', 'no skid', 'no skid', 'skid', 'no skid', 'jackknife']})
I'd like to compute how much more likely it is that either skidding or jackknifing occurs when it is raining or snowing compared to when it is not. So far I've come up with a solution using Boolean indexing and four auxiliary data frames:
df_rainsnow = df[[weather in ('rain', 'snow') for weather in df.Weather]]
df_rainsnow_skid = df_rainsnow[[skid in ('skid', 'jackknife') for skid in df_rainsnow.Skidding]]
df_fine = df[df.Weather == 'fine']
df_fine_skid = df_fine[[skid in ('skid', 'jackknife') for skid in df_fine.Skidding]]
relative_probability = len(df_rainsnow_skid)/len(df_fine_skid)
which evaluates to a relative_probability of 3.0 for this example. This seems unnecessarily verbose, however, and I'd like to refactor it.
One solution I tried is
counts = df.groupby('Weather')['Skidding'].value_counts()
relative_probability = (counts['rain']['skid'] + counts['snow']['skid']
+ counts['rain']['jackknife'] + counts['snow']['jackknife']) / (counts['fine']['skid'] + counts['fine']['jackknife'])
However, this leads to a KeyError because jackknife doesn't occur in every weather situation, and anyways it is also verbose to write out all the terms. What is a better way to achieve this?
A:
You can use isin instead of the ... in ... for ... comprehension. Also, there is no need to filter the data frame if you just need the number at the end; just build the conditions, sum and divide:
rain_snow = df.Weather.isin(['rain', 'snow'])
fine = df.Weather.eq('fine')
skid = df.Skidding.isin(['skid', 'jackknife'])
(rain_snow & skid).sum()/(fine & skid).sum()
# 3
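Putting the pieces together with the sample frame from the question (assuming a standard pandas install):

```python
import pandas as pd

df = pd.DataFrame({'Weather': ['rain', 'fine', 'rain', 'fine', 'snow', 'fine', 'snow'],
                   'Skidding': ['skid', 'skid', 'no skid', 'no skid', 'skid', 'no skid', 'jackknife']})

# Boolean masks for each condition of interest
rain_snow = df.Weather.isin(['rain', 'snow'])
fine = df.Weather.eq('fine')
skid = df.Skidding.isin(['skid', 'jackknife'])

# Count rows matching both conditions, then divide
relative_probability = (rain_snow & skid).sum() / (fine & skid).sum()
print(relative_probability)  # 3.0
```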
Q:
How to prevent file duplication/burning of a CD/DVD
I have HTML and SWF files that I want to put on a CD. How can I protect the contents on the CD, or make it difficult for someone to burn or duplicate them?
A:
Fundamentally, you can't. It's impossible.
This is because you are unable to distinguish between an attacker and a legitimate user. You want the legitimate user to be able to access the data, you want the attacker not to be able to, but you can't tell if Alice is a legitimate user or an attacker.
Despite it being impossible, a lot of people have put a lot of effort and resources into trying to do it anyway. The closest they've come is with two approaches:
Have the users access the data only using computers that you control.
Authenticate each user before giving them access to the data, analyze their usage patterns, and revoke their access if you believe them to be an attacker.
Note that neither of these techniques work (because it's impossible) but they can sometimes delay unsophisticated attackers very briefly. (While unsophisticated attackers can't break these techniques, they can download copies of your data from sophisticated attackers.)
Q:
How did this duplicate question manage to be answered after being marked as duplicate?
This question on TSE was marked as duplicate on 2016/02/24 at 15:52:19Z. It was then answered on 2016/02/24 at 15:52:52Z, seconds after being marked as duplicate.
How is this possible? Is there a delay between duplicate marking and closure? If so how long is this delay?
A:
That is possible since you are allowed to submit your answer if you started typing before the question was closed as a duplicate. This allows answers to come in minutes or even hours after closure. If you are connected, the submit button is disabled when the question closes, but that can easily be reversed with some browser tools.
In my opinion, this is bad design, especially since it encourages bad answers to come through on rapidly closed questions.
Q:
Required Checkbox Field Has No Effect
I used hook_form_alter to add a required 'confirmation' checkbox to the node deletion form and the #required status is not honored.
The fields/fieldset are property displayed 'node-delete-confirmation' form.
Here is the hook_form_alter content:
$form['warning'] = array(
'#type' => 'fieldset',
'#title' => t('Warning! - You Must Check The Box Below To Continue'),
'#weight' => -30,
'#collapsible' => FALSE,
'#collapsed' => FALSE,
);
$form['confirm_delete'] = array(
'#title' => t('I fully understand that this action cannot be undone and that statistics linked to this item will be permanently removed.'),
'#type' => 'checkbox',
'#required' => TRUE,
'#default_value' => 0,
);
How can I enforce the 'required' status so that nodes cannot be deleted until the confirmation is acknowledged.
If I change the code, to completely remove the added fieldset everything works as it should.
Works:
$form['warning']['confirm_delete'] = array(
'#title' => t('I fully understand that this action cannot be undone and that statistics linked to this item will be permanently removed.'),
'#type' => 'checkbox',
'#required' => TRUE,
'#default_value' => 0,
);
A:
You can add your form validation handler (hook_form_alter):
$form['#validate'] = array_merge(array('delete_validation_handler'), $form['#validate']); // array_merge, not +: with numeric keys, + would keep only your handler at index 0 and drop the existing one
Your validation handler:
function delete_validation_handler($form, $form_state){
if(!$form_state['values']['confirm_delete'])
form_set_error('confirm_delete', t('Message about to confirm it first'));
}
Q:
Highcharts Custom SVG Marker Symbol is Shaped Different in Legend
the custom SVG marker symbol I have drawn is rendered differently in the legend than in the chart. I have drawn the marker that I need for the chart but in the legend, the symbol has a thin line to the left.
I have attached a picture below and will include the code, I have spent too much time on this and don't have anyone to ask on this topic. If anyone can help me out, it would be greatly appreciated.
function renderChart(data, startRange, endRange) {
// Create custom marker
Highcharts.SVGRenderer.prototype.symbols.lineBar = function (x, y, w, h) {
return ['M', x + w / 2, y + h / 2, 'L', x + w + 10, y + h / 2, 'z'];
};
if (Highcharts.VMLRenderer) {
Highcharts.VMLRenderer.prototype.symbols.lineBar = Highcharts.SVGRenderer.prototype.symbols.lineBar;
}
var chart = Highcharts.chart({
chart: {
renderTo: 'system-load-scheduler',
type: 'line',
},
navigation: {
buttonOptions: {
enabled: false
}
},
title: {
text: ''
},
yAxis: {
min: 0,
title: {
text: 'Tasks'
},
labels: {
style: {
color: 'blue'
}
},
categories: generateCategories(data),
},
xAxis: {
type: 'datetime',
dateTimeLabelFormats: {
day: '%b %d'
},
title: {
text: 'Date'
}
},
tooltip: {
headerFormat: '<b>{series.name}</b><br>',
pointFormat: 'Scheduled {point.x:%b. %e} at {point.x:%l:%M%P}'
},
plotOptions: {
line: {
marker: {
enabled: true
}
},
series: {
cursor: 'pointer',
stickyTracking: false,
marker: {
states: {
hover: {
radiusPlus: 0,
lineWidthPlus: 1,
halo: {
size: 0
}
}
}
},
states: {
hover: {
halo: {
size: 0
}
}
}
}
},
legend: {
enabled: true,
symbolPadding: 20
},
series: generateSeries(data, startRange, endRange)
});
chart.yAxis[0].labelGroup.element.childNodes.forEach(function (label) {
label.style.cursor = 'hand';
label.onclick = function () {
var idx = ctrl.allTaskNames.indexOf(this.textContent);
renderTaskInfo(ctrl.data[idx]);
ctrl.scheduler.taskIdx = idx;
ctrl.backService.saveObject(CTRL_DASHBOARD_SCHEDULER_STR, ctrl.scheduler);
};
});
return chart;
}
A:
You can erase the line with just some CSS code
.highcharts-legend .highcharts-graph {
display:none;
}
Fiddle
Q:
SQL Sum rows in column based on active date
I have Column value "DateThru" that has a date value that changes depending on the database update day. I have a few tables inner join'd and i'm trying to get a sum of a column (Alloc) up to the "DateThru" date. I tried limiting the table using
where ActivityDate <='DateThru'
But i'm getting server errors any time i try in where clause.
The table I am pulling from looks like this
Month ActivityDate Alloc
---------------------------
x 2017-10-01 .0238
x 2017-10-02 .0302
x 2017-10-03 .0156
x 2017-10-04 .0200
x 2017-10-05 .0321
x 2017-10-06 .0123
x 2017-10-07 .0248
Say "DateThru" is 2017-10-05.
I want to sum Alloc from 10-01 to 10-05, giving a result of .122 as MTDAlloc.
Can this be done as a "windows function" or in where clause?
Thanks!
I used sum over partition to get the total
I have The total working and organized correctly. My issue is now i have duplicate entries for each "r". so when I change the ActiveDate <= DateThru to just "=" it fixes my duplicate issue but does not total the MTDAlloc. I do use a "sum column" to create another so i cant use rownumber() rank to remove duplicates.
Month rep goal MTDAlloc
----------------------
x r1 20 .122
x r1 20 .122
x r1 20 .122
x r2 20 .122
x r2 20 .122
x r2 20 .122
The end result will have each unique "r" on one row.
Month rep goal Alloc
----------------------
x r1 20 .122
x r2 20 .122
Really Appreciate the assistance!
A:
Your problem may merely be single quotes. From what you describe, this should work:
where ActivityDate <= DateThru
The single quotes make the value a string. The comparison to a date makes SQL Server want to convert it to a date. And 'DateThru' is not recognized as a valid date.
EDIT:
My guess is that despite its name, DateThru is really a string. You can find the offending values by using:
select DateThru
from . . .
where try_convert(datetime, DateThru) is null;
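With the quoting fixed, the cutoff itself is just a comparison; here is a small check against the question's sample data using SQLite's standard-library driver (ISO-formatted text dates compare correctly even lexicographically; DateThru is hard-coded for the demo):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE activity (ActivityDate TEXT, Alloc REAL)")
con.executemany("INSERT INTO activity VALUES (?, ?)",
                [("2017-10-01", .0238), ("2017-10-02", .0302),
                 ("2017-10-03", .0156), ("2017-10-04", .0200),
                 ("2017-10-05", .0321), ("2017-10-06", .0123),
                 ("2017-10-07", .0248)])

date_thru = "2017-10-05"  # stand-in for the DateThru column value
(mtd_alloc,) = con.execute(
    "SELECT ROUND(SUM(Alloc), 3) FROM activity WHERE ActivityDate <= ?",
    (date_thru,)).fetchone()
print(mtd_alloc)  # 0.122
```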
Q:
Permissions on files uploaded using AWS iOS SDK 2.0
I'm uploading files to an S3 bucket using a method based on the AWS example code here:
http://docs.aws.amazon.com/mobile/sdkforios/developerguide/s3transfermanager.html
The problem is, once those files are uploaded, the default permission to view those files is only given to the owner of the S3 bucket, and I would like them to be publicly available. How can I achieve this using the iOS SDK 2.0?
A:
You can assign AWSS3ObjectCannedACLPublicRead to the ACL property on your AWSS3PutObjectRequest object.
Q:
Silverlight ProjectionPlane Oddity
I have this panel in my app which is 10400 pixels wide.
I have my CenterOfRotationX and CenterOfRotationZ = 0.5.
I have the GlobalOffsets configured so that the rotation of the panel is visible on the screen.
This video shows the RotationY being set from -180 to 180.
http://www.youtube.com/watch?v=zDrETOueb-w
It's really weird at RotationY = 90 (about 13 seconds into the video); it seems to get stretched to hell when I would expect it to disappear from view.
Also, from about 8-9 seconds, the video shows the panel going from RotationY = 0 to RotationY = 20, where it starts to stretch. Over this small rotation it appears to nearly rotate 180 degrees.
Maybe I have some settings wrong but this seems really strange. - The only value changing in this video is the RotationY.
A:
The problem was with the GlobalOffsetX variable on the ProjectionPlane.
This was set to something astronomical so I could see the full rotation on the screen but this had an effect on the rotation.
Setting this to 0 and then moving the Plane to the left using the Canvas.LeftProperty fixed this.
Q:
copying a texture in xna into another texture
I am loading a Texture2D that contains multiple sprite textures. I would like to pull the individual textures out when I load the initial Texture to store into separate Texture2D objects, but can't seem to find a method any where that would let me do this. SpriteBatch.Draw I believe should only be called from within a begin, end block right?
Thanks.
A:
I am loading a Texture2D that contains multiple sprite textures. I would like to pull the individual textures out when I load the initial Texture to store into separate Texture2D objects.
You don't have to do this nor should you. Accessing a single texture is faster than multiple textures. Also, textures are stored in GPU texture memory. It just makes no sense to split it up.
You should instead focus on writing code that can access individual sprites within your sprite sheet. I suggest you have a look at how sprite based games work.
Here is a great tutorial video series that should help you out: tile engine videos
Q:
Actionscript 3, Flash CC: Placing Objects In An Array From The Library Onto The Stage
Hello programming gurus of stackoverflow, I am hoping that at least one of you will be able to help me with my coding problem. This is the first time I'm posting on this site, so if I miss something with the structure of my post, or anything, please let me know (preferably not in a condescending manner) and I will gladly change it.
I actually had a different problem I was going to ask about, but I recently realized that some objects from my library weren't showing up on my stage. Hopefully, if this gets solved I won't have my other problem.
I am creating a learning module app using Flash CC and Actionscript 3, I like to think I am fairly proficient with Flash, but right now all my code is on the timeline because when I started I wasn't aware of the package setup. When I finish with the learning module I'll try and move everything to an AS package, so please bear with me.
This current frame of the module is a drag and drop game where the user drags the correct food, for the animal they chose in the previous frame, to the animal in the middle. The animal is dynamically placed on the stage, as well as an array of six possible food choices, all MovieClips pulled from the library. The array of food elements is actually not what I'm having problem with, they appear on my stage with no problems at all. The problem I'm having is when the user drags the correct food onto the animal, and the win condition is met, the array of balloon elements does not show up on the stage. I find it weird because I'm using near identical code for both the food and balloon array.
Here is my full code:
import flash.display.MovieClip;
import flash.events.MouseEvent;
foodPet();
function foodPet():void {
//all of my pet, food, and balloon library objects have been exported for AS
var theBird:pet_bird = new pet_bird;
var theCat:pet_cat = new pet_cat;
var theChicken:pet_chicken = new pet_chicken;
var theDog:pet_dog = new pet_dog;
var theDuck:pet_duck = new pet_duck;
var theGuinea:pet_guinea = new pet_guinea;
var theHamster:pet_hamster = new pet_hamster;
var birdSeed:food_bird_seed = new food_bird_seed;
var catFood:food_cat_food = new food_cat_food;
var chickenFeed:food_chicken_feed = new food_chicken_feed;
var chocolate:food_chocolate = new food_chocolate;
var dogFood:food_dog_food = new food_dog_food;
var duckFood:food_duck_food = new food_duck_food;
var animalList:Array = [theBird, theCat, theChicken, theDog,
theDuck, theGuinea, theHamster];
var food1Array:Array = [birdSeed, catFood, chickenFeed,
chocolate, dogFood, duckFood, 4];
var xPosFood:Array = new Array();
var yPosFood:Array = new Array();
xPosFood = [32, 71, 146, 363, 431, 512];
yPosFood = [304, 222, 123, 123, 222, 304];
var animalClip:MovieClip;
animalClip = animalList[chosenAnimal];
addChild(animalClip);
animalClip.x = 256;
animalClip.y = 287;
animalClip.name = "selectedAnimal";
for (var i:uint = 0; i < food1Array.length - 1; i++){ //Where the food gets added
var isItRight:Boolean = false;
var foodName:String = ("food" + i);
var foodClip:MovieClip;
foodClip = food1Array[i];
foodClip.x = xPosFood[i];
foodClip.y = yPosFood[i];
foodClip.name = foodName;
addChild(foodClip);
trace(foodClip.parent);
foodDragSetup(foodClip, animalClip, food1Array[food1Array.length - 1], isItRight);
}
}
function foodDragSetup(clip:MovieClip, targ:MovieClip, correctNum:uint, isItRight:Boolean) {
var beingDragged:Boolean = false;
var xPos:Number = clip.x;
var yPos:Number = clip.y;
clip.addEventListener(MouseEvent.MOUSE_DOWN, beginDrag);
function beginDrag(event:MouseEvent):void
{
clip.startDrag();
if (int(clip.name.substr(4)) == correctNum){
isItRight = true;
}
this.beingDragged = true;
setChildIndex(clip, numChildren - 1);
clip.addEventListener(MouseEvent.MOUSE_UP, endDrag);
}
function endDrag(event:MouseEvent):void
{
if (this.beingDragged) {
this.beingDragged = false;
clip.stopDrag();
if ((isItRight) && (clip.hitTestPoint(targ.x, targ.y, true))){
trace(targ.name + " has been hit.");
clip.x = targ.x;
clip.y = targ.y;
win_animal_food();
} else {
isItRight = false;
clip.x = xPos;
clip.y = yPos;
}
}
}
}
function win_animal_food():void {
const BALLOON_ROW:int = 4;
var count:uint = 0;
var altX:uint = 0;
var bBalloon:blue_balloon = new blue_balloon;
var gBalloon:green_balloon = new green_balloon;
var oBalloon:orange_balloon = new orange_balloon;
var pBalloon:purple_balloon = new purple_balloon;
var rBalloon:red_balloon = new red_balloon;
var yBalloon:yellow_balloon = new yellow_balloon;
var balloonList:Array = [bBalloon, gBalloon, oBalloon,
pBalloon, rBalloon, yBalloon, bBalloon, gBalloon,
oBalloon, pBalloon, rBalloon, yBalloon, bBalloon,
gBalloon, oBalloon, pBalloon];
var balloonY:Array = [144, -205, -265, -325];
var balloonX:Array = [0, 140, 284, 428, 68, 212, 356, 500];
for (var ballY:uint = 0; ballY < balloonY.length; ballY++){ //Where balloons
for (var ballX:uint = altX; ballX < altX + BALLOON_ROW; ballX++){ //get added
var balloonName:String = ("balloon" + count);
var balloonClip:MovieClip;
balloonClip = balloonList[count];
balloonClip.x = balloonX[ballX];
balloonClip.y = balloonY[ballY];
balloonClip.name = balloonName;
addChild(balloonClip);
trace(balloonClip.parent);
trace(balloonClip + " has been added!");
balloonClip.addEventListener(MouseEvent.CLICK, balloonPop);
count++;
}
if (altX == 0) {
altX = BALLOON_ROW;
} else {
altX = 0;
}
}
function balloonPop(event:MouseEvent):void {
event.target.play();
event.target.removeEventListener(MouseEvent.CLICK, balloonPop);
}
}
I thought there might have been a problem with my balloon MovieClips, so I subbed them in the food array:
var birdSeed:blue_balloon = new blue_balloon;
var catFood:green_balloon = new green_balloon;
var chickenFeed:orange_balloon = new orange_balloon;
var chocolate:purple_balloon = new purple_balloon;
var dogFood:red_balloon = new red_balloon;
var duckFood:yellow_balloon = new yellow_balloon;
They all showed up on the stage, so there's nothing wrong with the MovieClips.
Added: The first values of balloonXArray and balloonYArray were originally -4 and -145 respectively, but when I started having problems I wanted to make sure the balloons were showing up, so I set the first values to 0 and 144. The balloon height and width are both 144, and their registration point (the small cross) is in the top-left corner.
Added: The reason why there are multiple instances of the same balloon in the balloonList is because I need four rows of four balloons, but only have six different balloons.
I know the balloons are on the stage because the debug display shows their x and y values on the viewable stage. Using trace(foodClip.parent) and trace(balloonClip.parent) shows that the balloons and food all have the same parent, MainTimeline, so I know the balloons aren't getting added to some different space.
I have searched online, but have not come across anyone with a similar problem. Thus, I am asking on this forum if anyone can tell me why my balloons will not show up on the stage.
Please and thank you.
A:
One thing I see straight off in the balloonList is that you have the same object instances listed multiple times. Each instance can only exist on the stage exactly once. If you addChild() an instance that is already on stage, the instance is first removed, then re-added at the top of the display list.
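A minimal illustration of that display-list rule (an ActionScript 3 sketch of my own, not tied to your project):

```actionscript
var b:Sprite = new Sprite();
addChild(b);        // b is now on the display list once
addChild(b);        // no copy is made: b is removed, then re-added on top
trace(numChildren); // b still counts as a single child, not two
```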
You should change:
var bBalloon:blue_balloon = new blue_balloon;
var gBalloon:green_balloon = new green_balloon;
var oBalloon:orange_balloon = new orange_balloon;
var pBalloon:purple_balloon = new purple_balloon;
var rBalloon:red_balloon = new red_balloon;
var yBalloon:yellow_balloon = new yellow_balloon;
var balloonList:Array = [bBalloon, gBalloon, oBalloon,
pBalloon, rBalloon, yBalloon, bBalloon, gBalloon,
oBalloon, pBalloon, rBalloon, yBalloon, bBalloon,
gBalloon, oBalloon, pBalloon];
to:
var balloonList:Array = [
new blue_balloon,
new green_balloon,
new orange_balloon,
new purple_balloon,
new red_balloon,
new yellow_balloon,
new blue_balloon,
new green_balloon,
new orange_balloon,
new purple_balloon,
new red_balloon,
new yellow_balloon,
new blue_balloon,
new green_balloon,
new orange_balloon,
new purple_balloon
];
|
{
"pile_set_name": "StackExchange"
}
|
Q:
Assign multiple variables Haskell
Is there a way to assign the same value to multiple variables in Haskell?
e.g something like this:
h,f = 5
A:
No. And there's no reason to do so. Because variables are immutable in Haskell, once a value is bound to a variable, that binding never changes within its scope. So h, f = 5 does not make sense in Haskell. Just use h wherever 5 is expected; there's no need for f.
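If you really want two names for the same value, the closest idiomatic Haskell is simply to bind one name to the other (a small sketch of my own, not part of the original answer):

```haskell
main :: IO ()
main = do
  let h = 5
      f = h        -- f is just another name for the same immutable value
  print (h, f)     -- prints (5,5)
```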
A:
Prelude> let [x, y] = replicate 2 5
Prelude> x
5
Prelude> y
5
Prelude>
You need replicate to "duplicate" a value; here it duplicates 5 twice, producing the list [5, 5]. The pattern [x, y] destructures that list, so you get x = 5 and y = 5.
I have never done this in real-world Haskell, but it gives you what you want.
EDIT: We could also use the repeat function, thanks to lazy evaluation in Haskell. Thanks to luqui.
Prelude> let x:y:_ = repeat 5
Prelude> x
5
Prelude> y
5
Prelude>
|
{
"pile_set_name": "StackExchange"
}
|
Q:
Adding items to a combo box's internal list programmatically
Despite Matt's generous explanation in my last question, I still didn't understand and decided to start a new project and use an internal list.
- (void)applicationDidFinishLaunching:(NSNotification *)aNotification {
codesList = [[NSString alloc] initWithContentsOfFile: @".../.../codelist.txt"];
namesList = [[NSString alloc] initWithContentsOfFile: @".../.../namelist.txt"];
codesListArray = [[NSMutableArray alloc]initWithArray:[codesList componentsSeparatedByString:@"\n"]];
namesListArray = [[NSMutableArray alloc]initWithArray:[namesList componentsSeparatedByString:@"\n"]];
addTheDash = [[NSString alloc]initWithString:@" - "];
flossNames = [[NSMutableArray alloc]init];
[flossNames removeAllObjects];
for (int n=0; n<=[codesListArray count]; n++){
NSMutableString *nameBuilder = [[NSMutableString alloc]initWithFormat:@"%@", [codesListArray objectAtIndex:n]];
[nameBuilder appendString:addTheDash];
[nameBuilder appendString:[namesListArray objectAtIndex:n]];
[comboBoz addItemWithObjectValue:[NSMutableString stringWithString:nameBuilder]];
[nameBuilder release];
}
}
So this is my latest attempt at this and the list still isn't showing in my combo box. I've tried using the addItemsWithObjectValues outside the for loop along with the suggestions at this question:
Is this the right way to add items to NSCombobox in Cocoa?
But still no luck. If you can't tell, I'm trying to combine two strings from the files with a hyphen in between them and then put that new string into the combo box. There are over 400 codes and matching names in the two files, so manually putting them in would be a huge chore, not to mention, I don't see what would be causing this problem. The compiler shows no warnings or errors, and in the IB, I have it set to use the internal list, but when I run it, the list is not populated unless I do it manually.
Some things I thought might be causing it:
Being in the applicationDidFinishLaunching: method
Having the string and array variables declared as instance variables in the header (along with @property and @synthesize applied to them)
Messing around with using appendString multiple times with NSMutableStrings
Nothing seems to be causing this to me, but maybe someone else will know something I don't.
A:
Did you try running this under the debugger and stepping through the code? If you do, I bet you'll find that codesList, namesList, or comboBoz is nil.
By the way, flossNames isn't doing anything, and the part inside the loop could be done more briefly using -[NSString stringWithFormat:].
|
{
"pile_set_name": "StackExchange"
}
|
Q:
Typescript object casting not working
I have an HTTP API.
I am loading a set of objects as JSON.
I want to cast them to a TypeScript object, but it is not working with the keyword "as", nor with <Type> in front of the object.
r.forEach(entry => {
entry.creationDate = new Date(entry.creationDate.date);
entry.creator = <User>entry.creator;
return entry;
});
The console.log directly after the entry.creator cast outputs a plain "Object".
Can someone give me an advice?
A:
I've struggled with similar problems, and in my opinion this is a defect in TypeScript. The problem shows up when you do a cast as you did, or in sample code like this:
class User {
name: string;
doConsole(): void {
console.log(`Name: ${this.name}`);
}
}
let userObj = { name: 'jose' };
let user = userObj as User;
user.doConsole();
You will notice that doConsole won't be a function on the casted object. This is the JS generated from that code:
var User = (function () {
function User(name) {
this.name = name;
}
User.prototype.doConsole = function () {
console.log("Name: " + this.name);
};
return User;
}());
var userObj = { name: 'jose' };
var user = userObj;
user.doConsole();
As you can see, it doesn't use the prototype function you prepared in the class when doing the cast.
My alternative was to do something like this:
class User {
name: string;
doConsole(): void {
console.log(`Name: ${this.name}`);
}
}
let userObj = { name: 'jose' };
let user = new User();
Object.assign(user, userObj);
user.doConsole();
This ensures that you are using the prototype function, as you can see by the generated JS:
var User = (function () {
function User() {
}
User.prototype.doConsole = function () {
console.log("Name: " + this.name);
};
return User;
}());
var userObj = { name: 'jose' };
var user = new User();
Object.assign(user, userObj);
user.doConsole();
So basically what I'm saying is that I agree with you that it should work the way you tried, but the transpiler doesn't use the prototyped function, so a plain cast won't work.
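One common workaround (my own sketch, with hypothetical names; the `Partial<T>` constructor pattern is a standard TypeScript idiom) is to give the class a constructor that copies the plain object's fields, so every instance carries the prototype methods:

```typescript
class User {
  name = "";

  constructor(data?: Partial<User>) {
    // Copy the plain object's fields onto a real User instance.
    Object.assign(this, data);
  }

  doConsole(): string {
    const msg = `Name: ${this.name}`;
    console.log(msg);
    return msg;
  }
}

const user = new User({ name: "jose" });
user.doConsole(); // logs "Name: jose"; the method lives on the prototype
```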
I hope this helps you.
|
{
"pile_set_name": "StackExchange"
}
|
Q:
Concatenating MAC and salt with ciphertext
I've been having trouble adding a MAC to my password-based AES encryption/decryption program. I am trying to add the MAC'd plaintext and salt (to be used with password) (both byte arrays) to a final array along with the ciphertext, and then decrypt by reading in the ciphertext file and splitting it back up into salt, MAC, and cipher text byte arrays.
The encryption class seems to be running smoothly but the decryption class is not. I debugged the decryption class and found that it fails because it never enters the if statement that checks whether the computed and recovered MACs are the same:
if(Arrays.equals(macBytes, hmac))
I couldn't figure out why until I printed out the byte arrays for the salt, message, and MAC, and found that they don't match when printed from the encryption and decryption classes. All the array sizes match up across the two classes, but the byte values change somewhere.
Both classes worked perfectly without the MAC before, but I didn't add the salt directly to the encrypted data then and instead wrote it to a separate file. Including it with the encrypted data makes this slightly more portable for me, but was it a bad choice to do so? Is it better to write it to a separate file? Or am I just missing something blatantly obvious in my code?
Here is the full code.
Encryption class
public class AESEncryption
{
private final String ALGORITHM = "AES";
private final String MAC_ALGORITHM = "HmacSHA256";
private final String TRANSFORMATION = "AES/CBC/PKCS5Padding";
private final String KEY_DERIVATION_FUNCTION = "PBKDF2WithHmacSHA1";
private final String PLAINTEXT = "/Volumes/CONNOR P/Unencrypted.txt";
private final String ENCRYPTED = "/Volumes/CONNOR P/Encrypted.txt";
private final String PASSWORD = "javapapers";
private final String LOC = Paths.get(".").toAbsolutePath().normalize().toString();
private static final int SALT_SIZE = 64;
private final int KEY_LENGTH = 128;
private final int ITERATIONS = 100000;
public AESEncryption()
{
try
{
encrypt();
}
catch(Exception ex)
{
JOptionPane.showMessageDialog(null, "Error: " + ex.getClass().getName(), "Error", JOptionPane.ERROR_MESSAGE);
}
}
private void encrypt() throws Exception
{
File encrypted = new File(ENCRYPTED);
File plaintext = new File(PLAINTEXT);
int encryptedSize = (int)encrypted.length();
int plaintextSize = (int)plaintext.length();
BufferedInputStream bufferedInputStream = new BufferedInputStream(new FileInputStream(PLAINTEXT));
BufferedOutputStream bufferedOutputStream = new BufferedOutputStream(new FileOutputStream(ENCRYPTED));
//Create salt
byte[] salt = new byte[SALT_SIZE];
SecureRandom secureRandom = new SecureRandom();
secureRandom.nextBytes(salt);
//Create cipher key
SecretKeyFactory factory = SecretKeyFactory.getInstance(KEY_DERIVATION_FUNCTION);
KeySpec keySpec = new PBEKeySpec(PASSWORD.toCharArray(), salt, ITERATIONS, KEY_LENGTH);
SecretKey secret = new SecretKeySpec(factory.generateSecret(keySpec).getEncoded(), ALGORITHM);
//Create cipher
Cipher cipher = Cipher.getInstance(TRANSFORMATION);
cipher.init(Cipher.ENCRYPT_MODE, secret, new IvParameterSpec(new byte[16]));
//Read plaintext file into byte array
byte[] input = new byte[encryptedSize];
Path path = Paths.get(PLAINTEXT);
input = Files.readAllBytes(path);
byte[] crypt = cipher.doFinal(input);
        //Create MAC object and apply it to the byte array crypt[] containing the ciphertext
KeyGenerator keyGenerator = KeyGenerator.getInstance(MAC_ALGORITHM);
SecretKey macKey = keyGenerator.generateKey();
Mac mac = Mac.getInstance(MAC_ALGORITHM);
mac.init(macKey);
byte[] macBytes = mac.doFinal(crypt);
//Add salt, MAC'd plaintext, and encrypted plaintext to final array
byte[] output = new byte[SALT_SIZE + crypt.length + macBytes.length];
System.arraycopy(salt, 0, output, 0, SALT_SIZE);
System.arraycopy(macBytes, 0, output, SALT_SIZE, macBytes.length);
System.arraycopy(crypt, 0, output, SALT_SIZE + macBytes.length, crypt.length);
//Write array with encrypted data to a new file
bufferedOutputStream.write(output);
bufferedInputStream.close();
bufferedOutputStream.flush();
bufferedOutputStream.close();
}
Decryption class
public class AESDecryption
{
private final String ALGORITHM = "AES";
private final String MAC_ALGORITHM = "HmacSHA256";
private final String TRANSFORMATION = "AES/CBC/PKCS5Padding";
private final String KEY_DERIVATION_FUNCTION = "PBKDF2WithHmacSHA1";
private final String PLAINTEXT = "/Volumes/CONNOR P/De-Encrypted.txt";
private final String ENCRYPTED = "/Volumes/CONNOR P/Encrypted.txt";
private final String PASSWORD = "javapapers";
private final String LOC = Paths.get(".").toAbsolutePath().normalize().toString();
private final int SALT_SIZE = 64;
private final int IV_SIZE = 16;
private final int KEY_LENGTH = 128;
private final int ITERATIONS = 100000;
public AESDecryption()
{
try
{
decrypt();
}
catch(Exception ex)
{
JOptionPane.showMessageDialog(null, "Error: " + ex.getClass().getName(), "Error", JOptionPane.ERROR_MESSAGE);
}
}
private void decrypt() throws Exception
{
File encrypted = new File(ENCRYPTED);
File plaintext = new File(PLAINTEXT);
int encryptedSize = (int)encrypted.length();
int plaintextSize = (int)plaintext.length();
BufferedInputStream bufferedInputStream = new BufferedInputStream(new FileInputStream(encrypted));
BufferedOutputStream bufferedOutputStream = new BufferedOutputStream(new FileOutputStream(plaintext));
//Read in the encrypted data
byte[] input = new byte[encryptedSize];
Path path = Paths.get(ENCRYPTED);
input = Files.readAllBytes(path);
int increment = (input.length-SALT_SIZE)/2;
if(input.length >= (SALT_SIZE + increment))
{
//Recover salt, MAC, and encrypted data and store in arrays
byte[] salt = Arrays.copyOfRange(input, 0, SALT_SIZE);
byte[] macBytes = Arrays.copyOfRange(input, SALT_SIZE, SALT_SIZE + increment);
byte[] crypt = Arrays.copyOfRange(input, SALT_SIZE + increment, input.length);
//Regenerate original MAC
KeyGenerator keyGenerator = KeyGenerator.getInstance(MAC_ALGORITHM);
SecretKey macKey = keyGenerator.generateKey();
Mac mac = Mac.getInstance(MAC_ALGORITHM);
mac.init(macKey);
byte[] hmac = mac.doFinal(crypt);
if(Arrays.equals(macBytes, hmac)) //This is where it fails, never enters
{
//Regenerate cipher and decrypt data
SecretKeyFactory factory = SecretKeyFactory.getInstance(KEY_DERIVATION_FUNCTION);
KeySpec keySpec = new PBEKeySpec(PASSWORD.toCharArray(), salt, ITERATIONS, KEY_LENGTH);
SecretKey secret = new SecretKeySpec(factory.generateSecret(keySpec).getEncoded(), ALGORITHM);
Cipher cipher = Cipher.getInstance(TRANSFORMATION);
cipher.init(Cipher.DECRYPT_MODE, secret, new IvParameterSpec(new byte[16]));
//Write decrypted data to new text file
byte[] output = cipher.doFinal(crypt);
bufferedOutputStream.write(output);
bufferedInputStream.close();
bufferedOutputStream.flush();
bufferedOutputStream.close();
}
}
}
Thanks for any help. It is much appreciated.
A:
public class AESEncryption
{
private final String ALGORITHM = "AES";
private final String MAC_ALGORITHM = "HmacSHA256";
private final String PRNG_ALGORITHM = "SHA1PRNG";
private final String TRANSFORMATION = "AES/CBC/PKCS5Padding";
private final String PLAINTEXT = "/Volumes/CONNOR P/Unencrypted.txt";
private final String ENCRYPTED = "/Volumes/CONNOR P/Encrypted.txt";
private final String PASSWORD = "javapapers";
private final String IV_FILE_NAME = "iv.enc";
private final String LOC = Paths.get(".").toAbsolutePath().normalize().toString();
private final int SALT_SIZE = 16;
private final int IV_SIZE = 16;
private final int KEY_LENGTH = 128;
private final int ITERATIONS = 100000;
private final int START = 0;
public AESEncryption()
{
try
{
encrypt();
}
catch(Exception ex)
{
JOptionPane.showMessageDialog(null, "Error: " + ex.getClass().getName(), "Error", JOptionPane.ERROR_MESSAGE);
}
}
private void encrypt() throws Exception
{
File encrypted = new File(ENCRYPTED);
File plaintext = new File(PLAINTEXT);
int plaintextSize = (int)plaintext.length();
BufferedInputStream bufferedInputStream = new BufferedInputStream(new FileInputStream(plaintext));
BufferedOutputStream bufferedOutputStream = new BufferedOutputStream(new FileOutputStream(encrypted));
//Create salt for cipher key
byte[] salt = new byte[SALT_SIZE];
SecureRandom saltSecureRandom = SecureRandom.getInstance(PRNG_ALGORITHM);
saltSecureRandom.nextBytes(salt);
//Create cipher key & use to initialize cipher
byte[] keyBytes = PBEKeyFactory.getKey(PASSWORD, salt, ITERATIONS, KEY_LENGTH);
SecretKeySpec secret = new SecretKeySpec(keyBytes, ALGORITHM);
Cipher cipher = Cipher.getInstance(TRANSFORMATION);
cipher.init(Cipher.ENCRYPT_MODE, secret, new IvParameterSpec(new byte[16]));
//Create byte array of encrypted data
byte[] input = new byte[plaintextSize];
Path path = Paths.get(PLAINTEXT);
input = Files.readAllBytes(path);
byte[] crypt = cipher.doFinal(input);
//Create salt for the MAC key for added security
byte[] macsalt = new byte[SALT_SIZE];
SecureRandom macsaltSecureRandom = SecureRandom.getInstance(PRNG_ALGORITHM);
macsaltSecureRandom.nextBytes(macsalt);
//PBEKeyFactory.getKey(password, salt, iterations, keylength)
//returns a byte array representation of a SecretKey.
//Used a SecretKeyFactory instead of a KeyGenerator to make key.
//SecretKeyFactory gives back the same key given the same specifications
//whereas KeyGenerator gives back a new random key each time.
byte[] macPBE = PBEKeyFactory.getKey(PASSWORD, macsalt, ITERATIONS, KEY_LENGTH);
SecretKeySpec macKey = new SecretKeySpec(macPBE, MAC_ALGORITHM);
Mac mac = Mac.getInstance(MAC_ALGORITHM);
mac.init(macKey);
byte[] macBytes = mac.doFinal(crypt);
byte[] output = new byte[SALT_SIZE + SALT_SIZE + crypt.length + macBytes.length];
System.arraycopy(salt, START, output, START, SALT_SIZE);
System.arraycopy(macsalt, START, output, SALT_SIZE, SALT_SIZE);
System.arraycopy(macBytes, START, output, SALT_SIZE + SALT_SIZE, macBytes.length);
System.arraycopy(crypt, START, output, SALT_SIZE + SALT_SIZE + macBytes.length, crypt.length);
bufferedInputStream.close();
bufferedOutputStream.write(output);
bufferedOutputStream.flush();
bufferedOutputStream.close();
}
}
public class AESDecryption
{
private final String ALGORITHM = "AES";
private final String MAC_ALGORITHM = "HmacSHA256";
private final String TRANSFORMATION = "AES/CBC/PKCS5Padding";
private final String PLAINTEXT = "/Volumes/CONNOR P/De-Encrypted.txt";
private final String ENCRYPTED = "/Volumes/CONNOR P/Encrypted.txt";
private final String PASSWORD = "javapapers";
private final String LOC = Paths.get(".").toAbsolutePath().normalize().toString();
private final int SALT_SIZE = 16;
//MAC key size is 256 bits (32 bytes) since it is created with
//the HmacSHA256 algorithm
private final int MAC_SIZE = 32;
private final int IV_SIZE = 16;
private final int START = 0;
private final int KEY_LENGTH = 128;
private final int ITERATIONS = 100000;
public AESDecryption()
{
try
{
decrypt();
}
catch(Exception ex)
{
JOptionPane.showMessageDialog(null, "Error: " + ex.getClass().getName(), "Error", JOptionPane.ERROR_MESSAGE);
}
}
private void decrypt() throws Exception
{
File encrypted = new File(ENCRYPTED);
File plaintext = new File(PLAINTEXT);
int encryptedSize = (int)encrypted.length();
BufferedInputStream bufferedInputStream = new BufferedInputStream(new FileInputStream(encrypted));
BufferedOutputStream bufferedOutputStream = new BufferedOutputStream(new FileOutputStream(plaintext));
//Read in encrypted data
byte[] input = new byte[encryptedSize];
Path path = Paths.get(ENCRYPTED);
input = Files.readAllBytes(path);
if(input.length >= (SALT_SIZE*2 + MAC_SIZE))
{
byte[] cryptSalt = Arrays.copyOfRange(input, START, SALT_SIZE);
byte[] macSalt = Arrays.copyOfRange(input, SALT_SIZE, SALT_SIZE*2);
byte[] macBytes = Arrays.copyOfRange(input, SALT_SIZE*2, (SALT_SIZE*2 + MAC_SIZE));
byte[] cryptBytes = Arrays.copyOfRange(input, (SALT_SIZE*2 + MAC_SIZE), input.length);
//This generates the same MAC key from encryption.
//Before, the KeyGenerator created a new random key
//meaning the derived and computed MAC keys were never the same
byte[] macKeyBytes = PBEKeyFactory.getKey(PASSWORD, macSalt, ITERATIONS, KEY_LENGTH);
SecretKeySpec macKey = new SecretKeySpec(macKeyBytes, MAC_ALGORITHM);
Mac mac = Mac.getInstance(MAC_ALGORITHM);
mac.init(macKey);
byte[] compMacBytes = mac.doFinal(cryptBytes);
//Check if computed and derived MAC's are the same
if(Arrays.equals(macBytes, compMacBytes))
{
//Creates same key from encryption
byte[] cryptKeyBytes = PBEKeyFactory.getKey(PASSWORD, cryptSalt, ITERATIONS, KEY_LENGTH);
SecretKeySpec cryptKey = new SecretKeySpec(cryptKeyBytes, ALGORITHM);
//Creates cipher and reads decrypted data to array
Cipher cipher = Cipher.getInstance(TRANSFORMATION);
cipher.init(Cipher.DECRYPT_MODE, cryptKey, new IvParameterSpec(new byte[16]));
byte[] output = cipher.doFinal(cryptBytes);
bufferedInputStream.close();
bufferedOutputStream.write(output);
bufferedOutputStream.flush();
bufferedOutputStream.close();
}
}
}
}
//This class has only one method, getKey(), which returns a byte array
//of a SecretKey of the corresponding parameters
public class PBEKeyFactory
{
private static final String KEY_DERIVATION_FUNCTION = "PBKDF2WithHmacSHA1";
public static byte[] getKey(String password, byte[] salt, int iterations, int length) throws Exception
{
KeySpec keySpec = new PBEKeySpec(password.toCharArray(), salt, iterations, length);
SecretKeyFactory factory = SecretKeyFactory.getInstance(KEY_DERIVATION_FUNCTION);
return factory.generateSecret(keySpec).getEncoded();
}
}
|
{
"pile_set_name": "StackExchange"
}
|
Q:
Why is the deflector disc for most starships below and behind the front of the ship?
One would assume that you would want it in front of the ship. As is the case with the "USS. Enterprise NX-01" Also why is there only a front facing deflector, when the ship can travel along several different axis.
The dreadnought USS Federation also has a rear-facing deflector dish(!). In the diagram, the notation on the "main sensor dish F/A" reads (F/A) = (Fore & Aft); on the Star Trek USS Enterprise Original Blueprints it reads "main sensor and navigational deflector". I have always wondered about that; is there an actual explanation for this?
For example, in the TOS episode "Balance of Terror" the Enterprise goes to "full reverse", and in The Wrath of Khan the Enterprise goes straight up and then back down again. What protects the ship during these maneuvers?
The shields are not always on, right? Or am I confusing deflectors and shields?
Funny thing (well, funny for everyone but me): I have the technical manual, and I don't remember seeing this in there.
A:
The ship's deflector dish needs a clear line of sight along which it can project a deflection beam, which is why it needs to be at the front of the ship. Its exact placement is largely irrelevant: the deflector in the Galaxy-class is below the saucer, whereas it's right at the front of the Defiant-class.
Star Trek: The Next Generation Technical Manual
Since the navigational deflector only operates when the ship is at warp (and since the ship can only warp forward, not backward), by necessity it needs to be at the front, rather than the rear.
When the ship is at sub-light speeds, the deflector tends to be turned off and the ship's shields take care of low-speed impacts.
For the record, the USS Federation (which appears briefly as a background image in Star Trek II) also has a deflector at the front. What you've mistaken for a rear-facing deflector is in fact a secondary sensor array.
Star Trek Star Fleet Technical Manual
|
{
"pile_set_name": "StackExchange"
}
|
Q:
JS Error - Unresponsive Script
I am dynamically adding elements to my webpage using the code below. It works, but it freezes the page. Can anyone explain why this would be happening?
<script>
$(function() {
var ruler = $("#ruler").height();
var body = $("body").height();
while (ruler <= body) {
$("#rulerStart").append("<div class='lineLarge'></div><div class='lineSmall'></div>");
};
});
</script>
A:
var ruler = $("#ruler").height();
var body = $("body").height();
These values are static. Once set, always set.
You'll need to reset them inside the loop:
$(function() {
var ruler = $("#ruler").height();
var body = $("body").height();
while (ruler <= body) {
$("#rulerStart").append("<div class='lineLarge'></div><div class='lineSmall'></div>");
ruler = $("#ruler").height();
body = $("body").height();
};
});
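An alternative sketch (my own, not from the answer above): since appending is what changes the body height, you can instead compute once how many large/small pairs are needed, assuming each pair adds a fixed pixel height, and build the HTML in a single append:

```javascript
// Assumes each lineLarge + lineSmall pair adds `pairHeight` pixels of height.
function buildRulerHtml(bodyHeight, rulerHeight, pairHeight) {
  const pairs = Math.max(0, Math.ceil((bodyHeight - rulerHeight) / pairHeight));
  let html = "";
  for (let i = 0; i < pairs; i++) {
    html += "<div class='lineLarge'></div><div class='lineSmall'></div>";
  }
  return html; // then: $("#rulerStart").append(html) -- one DOM write, no per-loop reads
}

console.log(buildRulerHtml(500, 100, 40)); // 10 pairs of divs
```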
|
{
"pile_set_name": "StackExchange"
}
|
Q:
how can i call a function in a onclick method of image button in asp.net
I have to call a function to send some data in onclick method
<div class="yildiz"><asp:ImageButton ID=Imageid runat="server" Height="19px"
ImageUrl="~/images/yildiz.png" onclick= "<%= someFunction('das') %>" Width="20px"
style="position: relative; top: 13px; left:6px; float:left; " /></div>
A:
Use OnClientClick, like so:
<asp:ImageButton
ID=Imageid runat="server"
Height="19px"
ImageUrl="~/images/yildiz.png"
OnClientClick="someFunction('das')"
Width="20px"
style="position: relative; top: 13px; left:6px; float:left; "
/>
|
{
"pile_set_name": "StackExchange"
}
|
Q:
jQuery prettyPhoto : Click event doesn't work
I have been using the jQuery prettyPhoto plugin to show inline HTML content. The content has a link which is supposed to close the prettyPhoto popup and call a custom script. The popup works fine, but somehow the click event of the link is not getting fired. Here is the code.
<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
<html lang='en' xml:lang='en' xmlns='http://www.w3.org/1999/xhtml'>
<head>
<meta content='text/html; charset=utf-8' http-equiv='Content-Type' />
<title>Development</title>
<script src="js/jquery-1.4.4.min.js" type="text/javascript"></script>
<script src="js/jquery.prettyPhoto.js" type="text/javascript"></script>
<link rel="stylesheet" href="css/prettyPhoto.css" type="text/css"/>
<script type="text/javascript" charset="utf-8">
$(document).ready(function(){
$("#closeme").click( function (e){
e.preventDefault();
alert("closing popup");
$.prettyPhoto().close();
alert("do some othe stuff");
});
$("a[rel^='prettyPhoto']").prettyPhoto();
});
</script>
</head>
<body>
<a href="#inline-1" rel="prettyPhoto" >popop</a>
<div id="inline-1" style="display:none;">
<p>This is inline content opened in prettyPhoto.</p>
<p>Lorem ipsum dolor sit amet, consectetur adipisicing elit, sed do eiusmod tempor incididunt ut labore et dolore magna aliqua.</p>
<a href="#" id="closeme">close</a>
</div>
</body>
</html>
If I remove $("a[rel^='prettyPhoto']").prettyPhoto(); then click event on inline html link works fine.
Can anybody please help me to identify the problem and fix it?
Thanks,
Amit Patel
A:
Modify your code as follows:
$(function () {
$("a[rel^='prettyPhoto']").prettyPhoto();
$('#closeme').live('click', function() {
$.prettyPhoto.close();
return false;
});
});
|
{
"pile_set_name": "StackExchange"
}
|
Q:
Why do orc files consume more space than parquet files in Hive?
As far as I understand, ORC files are supposed to be smaller and more compressed than parquet files. However, when I populate my orc table in Apache Hive by selecting rows from my parquet table, the orc table ends up consuming about 7 times more disk space.
Any ideas why this happens? My table schema is as follows. It contains a length-200,000 array of the integers 0, 1 and 2, and each partition has about 10,000 rows.
CREATE TABLE orc_table (
field1 STRING
, field2 INT
, field3 STRING
, field4 STRING
, array_field ARRAY < INT >
) PARTITIONED BY (
partition_name STRING
);
ALTER TABLE orc_table ADD PARTITION (partition_name='<partition-name>');
ALTER TABLE orc_table PARTITION (partition_name='<partition_name>') SET FILEFORMAT ORC;
INSERT INTO TABLE orc_table PARTITION (partition_name='<partition_name>')
SELECT field1, field2, field3, field4, array_field FROM parquet_table
WHERE partition_name='<partition_name>';
A:
Changing these settings solved the problem:
SET hive.exec.compress.intermediate=true;
SET hive.exec.compress.output=true;
SET mapred.output.compression.type=BLOCK;
Apparently, Hive uses MapReduce for converting between the data formats. Therefore, MapReduce output compression also needs to be switched on. But this is only a guess.
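Note also that ORC carries its own codec, configured per table; a sketch (the `orc.compress` table property is the standard Hive ORC setting, but verify it against your Hive version's documentation):

```sql
CREATE TABLE orc_table_compressed (
    field1 STRING
  , array_field ARRAY<INT>
)
STORED AS ORC
TBLPROPERTIES ("orc.compress"="ZLIB");  -- typically NONE, ZLIB, or SNAPPY
```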
|
{
"pile_set_name": "StackExchange"
}
|
Q:
Why is my probit analysis resulting in numbers outside 0 and 1?
I am doing a probit on a binary dependent variable. The binary dependent variable has observed outcomes of 227 successes and 1704 failures. If I run predict after the probit I get probabilities that make sense for each set of observations, but I want to be able to make sense of the beta coefficients that I get from a straight probit analysis. The same thing happens with logit.
A:
I think Dason's reply is appropriate: your title does not fit your question. If you estimate a probit/logit model, $P(Y=1|X,\beta)$, by the plug-in solution $P(Y=1|X,\hat\beta)$, your probability is always between 0 and 1, no matter what the values of the covariates $X$ are. If you find a probability outside $(0,1)$, you are not using a probit/logit model. Now, if you ask about the meaning of $\beta$: this is indeed a real number. For instance, in the logit model, since
$$
\frac{P(Y=1|X,\beta)}{P(Y=0|X,\beta)} = \exp\{ \beta^t X \}
$$
the coefficients of $\beta$ can be explained in terms of log-odds-ratios:
$$
\log\left( \frac{P(Y=1|X,\beta)}{P(Y=0|X,\beta)} \right) = \beta^t X
$$
so, when considering covariate $x_1$ for instance,
$$
\log\left( \frac{P(Y=1|x_1=2,X_{-1},\beta)}{P(Y=0|x_1=2,X_{-1},\beta)} \right)
- \log\left( \frac{P(Y=1|x_1=1,X_{-1},\beta)}{P(Y=0|x_1=1,X_{-1},\beta)} \right)
= \beta_1
$$
which means that the coefficient $\beta_1$ of $x_1$ is the amount the log-odds-ratio changes when the covariate $x_1$ is changed by one unit.
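A quick numerical check of the bounded-probability point (my own sketch, not part of the original answer): whatever real value the linear predictor $\beta^t X$ takes, the logistic link maps it into the open interval (0, 1):

```python
import math

def logit_prob(xb):
    """P(Y=1 | X) under the logistic link, for linear predictor xb = beta' X."""
    return 1.0 / (1.0 + math.exp(-xb))

for xb in (-30.0, -2.0, 0.0, 2.0, 30.0):
    p = logit_prob(xb)
    assert 0.0 < p < 1.0          # never outside (0, 1)
    print("xb = %6.1f  ->  p = %.6f" % (xb, p))
```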
|
{
"pile_set_name": "StackExchange"
}
|
Q:
RabbitMQ Request/Response payload structure
I'm designing a system that will use RabbitMQ for request/response between applications.
I'm used to working with REST APIs and coming from that background I've been thinking how to structure messages when doing request/response.
I need to structure it to handle several scenarios:
Getting / querying data from a remote server
Creating data on a remote server
Handling client side errors
I'm planning to have the payload JSON-formatted. And I was thinking about using some kind of response codes similar to HTTP (maybe even the same codes?) and setting the response code as a property/header on the message.
For getting/querying my idea was to have a query property in the payload object.
But that got me thinking that I might be thinking this too much like REST APIs and there could be some better, more-established way of doing this.
I've been reading the book "RabbitMQ in Action" while setting this up but I see no mention of this there. My google-fu has also failed me and not provided any results.
Anyone with experience willing to share how they structure their messages?
A:
If you are using RabbitMQ in a request/response scenario among applications that are already familiar or implemented to handle REST calls, there is no need for you to deviate from it in RabbitMQ for the message format.
From your question, what I gather is that RabbitMQ acts as an intermediate server between your applications. You mention three scenarios. If you take retrieving data and writing data, RabbitMQ here acts only as a router between the application that requests to retrieve or write data and the application that actually retrieves and writes it. If so, the serving application (the server with the data) may already support a standard message format. Assume it does not already have a standard defined. In that case, think in terms of what that application expects in the request payload. Forget the intermediate RabbitMQ server during this stage; thinking about the RabbitMQ messages might lead you away from best practices.
As for the client-side errors, you cannot directly set the HTTP status codes as headers, as they would interfere with RabbitMQ's own errors on the consumer. I believe in this case you will have to customize: pass a custom header and convert it to an HTTP status code later on.
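To make the custom status idea concrete, here is one possible shape for a JSON response envelope, sketched in Python; the field names ("status", "body", "error") are assumptions for illustration, not an established convention:

```python
import json

def make_response(status, body=None, error=None):
    """Build a JSON response envelope carrying an HTTP-style status code
    inside the payload, so it does not collide with RabbitMQ's own
    error handling on the consumer."""
    envelope = {"status": status}
    if body is not None:
        envelope["body"] = body
    if error is not None:
        envelope["error"] = error
    return json.dumps(envelope)

ok = make_response(200, body={"id": 42, "name": "example"})
bad = make_response(400, error="missing required field 'name'")
print(ok)
print(bad)
```

On the consumer side you would parse the envelope and branch on "status", mirroring how a REST client branches on the HTTP response code.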
|
{
"pile_set_name": "StackExchange"
}
|
Q:
Run a method every time an activity is shown
I have an if statement checking whether there are any rows in a sqlite database. If rows exist in the database it enables some buttons; if not, it disables them.
like so
DBHandler dbHandler = new DBHandler(this, null, null, 1);
SQLiteDatabase db = dbHandler.getWritableDatabase();
String count = "SELECT count(*) FROM clients";
Cursor mcursor = db.rawQuery(count, null);
mcursor.moveToFirst();
int icount = mcursor.getInt(0);
if(icount == 0)
{
btnViewClient.setEnabled(false);
btnViewAppt.setEnabled(false);
btnAddAppt.setEnabled(false);
btnViewBill.setEnabled(false);
btnAddBill.setEnabled(false);
}
else
{
btnViewClient.setEnabled(true);
btnViewAppt.setEnabled(true);
btnAddAppt.setEnabled(true);
btnViewBill.setEnabled(true);
btnAddBill.setEnabled(true);
}
that is in my onCreate method.
So now, if the user adds a row to the sqlite database and hits the back button to return to the main menu, how can I run this code again and enable the buttons? I need to run this code every time the activity is shown, but I don't want to finish the activity, because then you can't use the back button to return to it.
A:
You might want to learn a bit more about the Activity Lifecycle.
In particular, you need the onResume method - it is called every time your app returns to your Activity.
So just move the code above into that method.
|
{
"pile_set_name": "StackExchange"
}
|
Q:
301 redirect to the rewritten url
The CMS I am working with has these rewrite rules:
For the tags:
RewriteRule ^tags/([^/]+)/$ tag.php?t=$1&page=1
RewriteRule ^tags/([^/]+)/page-([0-9]+)(/)?$ tag.php?t=$1&page=$2
for the category pages:
RewriteRule ^browse-(.*)-videos.html$ category.php?cat=$1
RewriteRule ^browse-(.*)-videos-([0-9]+)-(.*).html$ category.php?cat=$1&page=$2&sortby=$3
But it does not redirect to the SEO-friendly URL structure, creating duplicate-content issues. How can I make it also 301-redirect to the rewritten URL when the old URL is accessed?
Thanks in advance
A:
You need to create 4 new rules and place them above your existing.
#redirect old URL to new SEO
RewriteCond %{THE_REQUEST} ^GET\ /+tag\.php\?t=([^&\s]+)&page=1
RewriteRule ^ /tags/%1/? [R=301,L]
#redirect old URL to new SEO
RewriteCond %{THE_REQUEST} ^GET\ /+tag\.php\?t=([^&\s]+)&page=([^&\s]+)
RewriteRule ^ /tags/%1/page-%2/? [R=301,L]
#redirect old URL to new SEO
RewriteCond %{THE_REQUEST} ^GET\ /+category\.php\?cat=([^&\s]+)&page=([^&\s]+)&sortby=([^&\s]+)
RewriteRule ^ /browse-%1-videos-%2-%3.html? [R=301,L]
#redirect old URL to new SEO
RewriteCond %{THE_REQUEST} ^GET\ /+category\.php\?cat=([^&\s]+)
RewriteRule ^ /browse-%1-videos.html? [R=301,L]
RewriteRule ^tags/([^/]+)/$ tag.php?t=$1&page=1 [L]
RewriteRule ^tags/([^/]+)/page-([0-9]+)(/)?$ tag.php?t=$1&page=$2 [L]
RewriteRule ^browse-(.*)-videos.html$ category.php?cat=$1 [L]
RewriteRule ^browse-(.*)-videos-([0-9]+)-(.*).html$ category.php?cat=$1&page=$2&sortby=$3 [L]
|
{
"pile_set_name": "StackExchange"
}
|
Q:
How do I convert tuple of tuples to list in one line (pythonic)?
query = 'select mydata from mytable'
cursor.execute(query)
myoutput = cursor.fetchall()
print myoutput
(('aa',), ('bb',), ('cc',))
Why is it (cursor.fetchall) returning a tuple of tuples instead of a tuple since my query is asking for only one column of data?
What is the best way of converting it to ['aa', 'bb', 'cc'] ?
I can do something like this :
mylist = []
myoutput = list(myoutput)
for each in myoutput:
mylist.append(each[0])
I am sure this isn't the best way of doing it. Please enlighten me!
A:
This works as well:
>>> tu = (('aa',), ('bb',), ('cc',))
>>> import itertools
>>> list(itertools.chain(*tu))
['aa', 'bb', 'cc']
Edit Could you please comment on the cost tradeoff? (for loop and itertools)
Itertools is significantly faster:
>>> t = timeit.Timer(stmt="itertools.chain(*(('aa',), ('bb',), ('cc',)))")
>>> print t.timeit()
0.341422080994
>>> t = timeit.Timer(stmt="[a[0] for a in (('aa',), ('bb',), ('cc',))]")
>>> print t.timeit()
0.575773954391
Edit 2 Could you pl explain itertools.chain(*)
That * unpacks the sequence into positional arguments, in this case a nested tuple of tuples.
Example:
>>> def f(*args):
... print "len args:",len(args)
... for a in args:
... print a
...
>>> tu = (('aa',), ('bb',), ('cc',))
>>> f(tu)
len args: 1
(('aa',), ('bb',), ('cc',))
>>> f(*tu)
len args: 3
('aa',)
('bb',)
('cc',)
Another example:
>>> f('abcde')
len args: 1
abcde
>>> f(*'abcde')
len args: 5
a
b
c
d
e
See the documents on unpacking.
A:
You could do
>>> tup = (('aa',), ('bb',), ('cc',))
>>> lst = [a[0] for a in tup]
>>> lst
['aa', 'bb', 'cc']
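For completeness, itertools also offers chain.from_iterable, which does the same flattening without unpacking the outer tuple into call arguments:

```python
import itertools

tu = (('aa',), ('bb',), ('cc',))

# chain.from_iterable consumes the nested iterables lazily, one at a time
flat = list(itertools.chain.from_iterable(tu))
print(flat)  # -> ['aa', 'bb', 'cc']
```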
|
{
"pile_set_name": "StackExchange"
}
|
Q:
Can the partial fraction method of integration be used with trig functions contained inside the function to be integrated?
Can the partial fractions method be used to integrate a problem like this? $$\int\frac{1}{\cos(x)\left(\sin^2(x)+4\right)}dx$$
Or, do the trig functions contained inside the denominator ruin it?
If yes, would this be the setup: $$\frac{A}{\cos(x)}+\frac{Bx+C}{\sin^2(x)+4}$$
A:
It is better to change a variable first:
$$\int\frac{dx}{\cos(x)\left(\sin^2(x)+4\right)}=\int\frac{\cos x \,dx}{\cos^2(x)\left(\sin^2(x)+4\right)}=\left[\begin{array}{c}t=\sin x \\ dt=\cos x\,dx\end{array}\right]=\int\frac{dt}{(1-t^2)\left(t^2+4\right)}$$
Now you can use partial fraction.
You can also see that the denominators you'll get are not exactly what you'd expect.
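Carrying the partial fractions through by hand gives $\frac{1}{(1-t^2)(t^2+4)}=\frac{1}{10(1-t)}+\frac{1}{10(1+t)}+\frac{1}{5(t^2+4)}$; since that was worked out by hand, here is a quick numerical spot-check in Python:

```python
def original(t):
    """The integrand after the substitution t = sin(x)."""
    return 1.0 / ((1.0 - t**2) * (t**2 + 4.0))

def decomposed(t):
    """Hand-computed partial fractions: 1/(10(1-t)) + 1/(10(1+t)) + 1/(5(t^2+4))."""
    return 0.1 / (1.0 - t) + 0.1 / (1.0 + t) + 0.2 / (t**2 + 4.0)

# Compare the two on a few sample points away from the poles t = +/-1
for t in [-0.9, -0.5, 0.0, 0.3, 0.7]:
    assert abs(original(t) - decomposed(t)) < 1e-12
print("decomposition checks out")
```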
|
{
"pile_set_name": "StackExchange"
}
|
Q:
Change color of the text cell based on its value
I want to change the color of the text of the cells in the UITableView.
When I'm trying to do conditional formatting using if, it doesn't work.
Here is the code I'm trying:
- (NSInteger)tableView:(UITableView *)tableView numberOfRowsInSection:(NSInteger)section
{
return nomearray.count;
}
- (void)tableView: (UITableView*)tableView willDisplayCell: (UITableViewCell*)cell forRowAtIndexPath: (NSIndexPath*)indexPath
{
cell.textLabel.textAlignment = UITextAlignmentRight;
if(indexPath.row % 2 == 0) {
cell.backgroundColor = [UIColor colorWithRed:(240/255.0) green:(240/255.0) blue:(240/255.0) alpha:1];}
else {
cell.backgroundColor = [UIColor whiteColor];}
}
- (UITableViewCell *)tableView:(UITableView *)tableView cellForRowAtIndexPath:(NSIndexPath *)indexPath
{
static NSString *simpleTableIdentifier = @"SimpleTableItem";
UITableViewCell *cell = [tableView dequeueReusableCellWithIdentifier:simpleTableIdentifier];
if (cell == nil) {
cell = [[UITableViewCell alloc] initWithStyle:UITableViewCellStyleDefault reuseIdentifier:simpleTableIdentifier];
}
if (risco == 1) {[cell.textLabel setTextColor: [UIColor colorWithRed:(60/255.0) green:(169/255.0) blue:(133/255.0) alpha:1]];} else {
if (risco == 2) {[cell.textLabel setTextColor: [UIColor colorWithRed:(255/255.0) green:(200/255.0) blue:(0/255.0) alpha:1]];} else {
if (risco == 3) {[cell.textLabel setTextColor: [UIColor colorWithRed:(150/255.0) green:(50/255.0) blue:(50/255.0) alpha:1]];}}}
cell.textLabel.text = [nomearray objectAtIndex:indexPath.row];
return cell;
}
A:
I think the problem is that your if condition is never satisfied because the integer value risco is not being set properly. Make risco a property, make sure it is assigned before the table reloads, and then check again. Did you try that?
|
{
"pile_set_name": "StackExchange"
}
|
Q:
Zero tensor product over a complex algebra?
Let $A$ be an algebra over $\mathbb{C}$. Let $M$ be a left $A$-module, let $N$ be a right $A$-module and consider the tensor product $N \otimes_A M$, which is a complex vector space.
Q1: Can this tensor product be the trivial vector space for non-zero $M$, $N$? If yes, what are examples where this happens?
Q2: Are there criteria on $M$ and $N$ which ensure that the tensor product is non-zero?
Edit: Thanks for the answers below. It seems that Q2 is too broad to answer in general. However, I am mostly interested in the case of an infinite-dimensional Clifford algebra; in other words, $A$ is central simple.
A:
This is intuitively dual to the problem of finding left $A$-modules $M$ and $V$, neither having zero action, such that ${}_A{\rm hom}(M,V)=0$. Then a natural thing to do is to take two inequivalent (fin dim) complex irreps of some finite group $G$ (Schur orthogonality then tells us the hom-space between two inequivalent irreps of $G$ is zero).
You then recover examples for Q1 by taking $V=N^*$. The ease with which such examples are found makes me pessimistic about answering Q2 unless you can narrow down the class of modules or algebras you are interested in.
We can come up with very simple examples based on choosing the easiest possible $G$, namely ${\bf Z}/2{\bf Z}$, which has two inequivalent 1-dim modules given by the two characters of this group.
This translates into the following setup. $A$ can be identified with ${\bf C}\oplus {\bf C}$ (Fourier transform of ${\bf C}G$ for $G$ being the group above) and then there are two characters $\phi_1, \phi_2: A \to {\bf C}$ defined by
$\phi_j((a_1,a_2)) = a_j$ ($j=1,2$). The corresponding 1-dimensional modules $V_1$ and $V_2$ satisfy $V_1\otimes_A V_2 =0$, because
$$ 1 \otimes 1 = 1\cdot x\otimes 1 - 1 \otimes x\cdot 1 \quad\hbox{if $x=(1,0)$} $$
(here $x$ acts as $1$ on $V_1$ and as $0$ on $V_2$, while the right-hand side vanishes by the balancing relation $v\cdot a\otimes w = v\otimes a\cdot w$, so $1\otimes 1 = 0$).
A:
I give an answer in case the modules are finite dimensional (not 100% sure whether the algebra also has to be finite dimensional but I think it is not needed). The base field can be arbitrary and does not need to be $\mathbb{C}$. Let the algebra $A$ be over a field $K$ and $D=Hom_K(-,K)$. Then we have $N \otimes_A M \cong D Hom_A(N,D(M))$.
This makes it very easy to at least calculate the vector space dimension of $N \otimes_A M$ and gives a lot of examples for Q1: just take $M$ and $N$ simple such that $N$ is not isomorphic to $D(M)$, and by Schur's lemma $N \otimes_A M \cong D Hom_A(N,D(M))=0$.
Example: Let $A$ be a finite dimensional algebra with $n$ fixed idempotents $e_1,...,e_n$, simple right modules $S_1,...,S_n$ and simple left modules $G_1,...,G_n$ (corresponding to the idempotents). Then $D(G_i) \cong S_i$ and thus $S_j \otimes_A G_i \neq 0$ if and only if $i=j$.
At least when the algebra is a finite dimensional quiver algebra, deciding when $D Hom_A(N,D(M))=0$ can be reduced to linear algebra, which answers Q2.
|
{
"pile_set_name": "StackExchange"
}
|
Q:
macro which defines new macros with an added prefix
We have a profiling framework which can be enabled and disabled at compile time.
All the various calls to the framework are done through macros, eg:
PROFILE_START(msg)
PROFILE_END(msg)
The macros then resolve to the actual profiler call when profiling is enabled, and to nothing when disabled
#ifdef PROFILING_ENABLED
# define PROFILE_START(msg) currentProfiler().start(msg)
# define PROFILE_END(msg) currentProfiler().end(msg)
#else
# define PROFILE_START(msg)
# define PROFILE_END(msg)
#endif
We have various different components in our framework, and I want to enable profiling in each component.
I'd like to be able to selectively enable profiling in each component.
My idea is to prefix all the profiler macros with the component's name, eg:
FOO_PROFILE_START(msg)
FOO_PROFILE_END(msg)
BAR_PROFILE_START(msg)
BAR_PROFILE_END(msg)
I could manually create
#ifdef ENABLE_FOO_PROFILING
# define FOO_PROFILE_START(msg) PROFILE_START(msg)
# define FOO_PROFILE_END(msg) PROFILE_END(msg)
#else
# define FOO_PROFILE_START(msg)
# define FOO_PROFILE_END(msg)
#endif
#ifdef ENABLE_BAR_PROFILING
# define BAR_PROFILE_START(msg) PROFILE_START(msg)
# define BAR_PROFILE_END(msg) PROFILE_END(msg)
#else
# define BAR_PROFILE_START(msg)
# define BAR_PROFILE_END(msg)
#endif
However, this is both tedious and error-prone.
Any time a new feature is added to the profiling framework I would have to find all my component specific macros and add a new macro to each of them.
What I'm looking for is a way to automatically generate the component-prefixed macros.
#ifdef ENABLE_FOO_PROFILING
ADD_PREFIX_TO_ENABLED_PROFILING_MACROS(FOO)
#else
ADD_PREFIX_TO_DISABLED_PROFILING_MACROS(FOO)
#endif
The net result of the above would be to create all the FOO_PROFILE_XXX macros I would have done manually.
Questions:
Is such a helper macro possible?
Is there a better way of achieving what I'm looking for?
I'm happy to use BOOST_PP if necessary.
Before posting this question I tried figuring this out myself, and the code I came up with follows, which may serve to show the road I was going down
#include <stdio.h>
#define PROFILE_START(msg) printf("start(%s)\n", msg);
#define PROFILE_END(msg) printf("end(%s)\n", msg);
#define ENABLE(prefix) \
#define prefix ## _PROFILE_START PROFILE_START \
#define prefix ## _PROFILE_END PROFILE_END
#define DISABLE(prefix) \
#define prefix ## _PROFILE_START \
#define prefix ## _PROFILE_END
#define ENABLE_FOO
#ifdef ENABLE_FOO
ENABLE(FOO)
#else
DISABLE(FOO)
#endif
#ifdef ENABLE_BAR
ENABLE(BAR)
#else
DISABLE(BAR)
#endif
int main()
{
FOO_PROFILE_START("foo");
FOO_PROFILE_END("foo");
BAR_PROFILE_START("bar");
BAR_PROFILE_END("bar");
return 0;
}
A:
Is such a helper macro possible?
No. With the exception of pragmas, you cannot execute a preprocessing directive in a macro.
You can do something very similar using pattern matching. By taking the varying part out of the macro name and putting it inside the macro invocation itself, you can make a form that allows enabling/disabling for arbitrary names.
This requires a tiny bit of preprocessor metaprogramming (which is a constant overhead; i.e., doesn't vary as you add modules), so bear with me.
Part 1: A C preprocessor solution
Using this set of macros:
#define GLUE(A,B) GLUE_I(A,B)
#define GLUE_I(A,B) A##B
#define SECOND(...) SECOND_I(__VA_ARGS__,,)
#define SECOND_I(_,X,...) X
#define SWITCH(PREFIX_,PATTERN_,DEFAULT_) SECOND(GLUE(PREFIX_,PATTERN_),DEFAULT_)
#define EAT(...)
#define PROFILER_UTILITY(MODULE_) SWITCH(ENABLE_PROFILER_FOR_,MODULE_,DISABLED)
#define PROFILER_IS_DISABLED ,EAT
#define PROFILE_START_FOR(MODULE_, msg) SWITCH(PROFILER_IS_,PROFILER_UTILITY(MODULE_),PROFILE_START)(msg)
#define PROFILE_END_FOR(MODULE_, msg) SWITCH(PROFILER_IS_,PROFILER_UTILITY(MODULE_),PROFILE_END)(msg)
...which you can include in each module, you will gain the ability to do this:
PROFILE_START_FOR(FOO,msg)
PROFILE_END_FOR(FOO,msg)
PROFILE_START_FOR(BAR,msg)
PROFILE_END_FOR(BAR,msg)
PROFILE_START_FOR(BAZ,msg)
PROFILE_END_FOR(BAZ,msg)
All of these macros, by default, expand to nothing; you can change this by defining ENABLE_PROFILER_FOR_xxx for any subset of FOO, BAR, or BAZ to expand to , (or ,ON if that looks better), in which case the corresponding macros will expand (initially, before your own macros come in) to PROFILE_START(msg)/PROFILE_END(msg); and the rest will continue expanding to nothing.
Using the FOO module as an example, you can do this with a "control file": #define ENABLE_PROFILER_FOR_FOO ,ON; the command line: ... -DENABLE_PROFILER_FOR_FOO=,ON; or in a makefile: CFLAGS += -DENABLE_PROFILER_FOR_FOO=,ON.
Part 2a: how it works; the SWITCH macro
#define GLUE(A,B) GLUE_I(A,B)
#define GLUE_I(A,B) A##B
#define SECOND(...) SECOND_I(__VA_ARGS__,,)
#define SECOND_I(_,X,...) X
#define SWITCH(PREFIX_,PATTERN_,DEFAULT_) SECOND(GLUE(PREFIX_,PATTERN_),DEFAULT_)
GLUE here is your typical indirect paste macro (allowing arguments to expand). SECOND is an indirect variadic macro returning the second argument.
SWITCH is the pattern matcher. The first two arguments are pasted together, comprising the pattern. By default, this pattern is discarded; but due to the indirection, if that pattern is an object like macro, and that pattern's expansion contains a comma, it will shift a new second argument in. For example:
#define ORDINAL(N_) GLUE(N_, SWITCH(ORDINAL_SUFFIX_,N_,th))
#define ORDINAL_SUFFIX_1 ,st
#define ORDINAL_SUFFIX_2 ,nd
#define ORDINAL_SUFFIX_3 ,rd
ORDINAL(1) ORDINAL(2) ORDINAL(3) ORDINAL(4) ORDINAL(5) ORDINAL(6)
...will expand to:
1st 2nd 3rd 4th 5th 6th
In this manner, the SWITCH macro behaves analogous to a switch statement; whose "cases" are object-like macros with matching prefixes, and which has a default value.
Note that pattern matching in the preprocessor works with shifting arguments, hence the comma (the main trick being that of discarding unmatched tokens by ignoring an argument, and applying matched tokens by shifting a desired replacement in). Also for the most general case with this SWITCH macro, you need at a minimum to ensure that all PREFIX_/PATTERN_ arguments are pasteable (even if that token isn't seen, it has to be a valid token).
Part 2b: combined switches for safety
A lone switch works like a case statement, allowing you to shove anything in; but when the situation calls for a binary choice (like "enable" or "disable"), it helps to nest one SWITCH in another. That makes the pattern matching a bit less fragile.
In this case, the implementation:
#define PROFILER_UTILITY(MODULE_) SWITCH(ENABLE_PROFILER_FOR_,MODULE_,DISABLED)
#define PROFILER_IS_DISABLED ,EAT
#define PROFILE_START_FOR(MODULE_, msg) SWITCH(PROFILER_IS_,PROFILER_UTILITY(MODULE_),PROFILE_START)(msg)
#define PROFILE_END_FOR(MODULE_, msg) SWITCH(PROFILER_IS_,PROFILER_UTILITY(MODULE_),PROFILE_END)(msg)
...uses PROFILER_UTILITY as the inner switch. By default, this expands to DISABLED. That makes the pattern in SWITCH(PROFILER_IS_,PROFILER_UTILITY(MODULE_),PROFILE_START) by default be PROFILER_IS_DISABLED, which shoves in EAT. In the non-default case of PROFILER_UTILITY, the outer switch kicks in making it expand to PROFILE_START. PROFILE_END_FOR works analogously.
In the disabled case, the EAT macro swallows (msg) and expands to nothing; otherwise, the original macro is called.
Is there a better way of achieving what I'm looking for?
Depends on what you're looking for. This approach shows what's possible with the C preprocessor.
|
{
"pile_set_name": "StackExchange"
}
|
Q:
How do I reach the scenic spot on top of the Ossuaire?
I have tried for a long time to get onto the Ossuaire in The Saboteur, but I can't find any place to climb it. The yellow lights have not been helpful either. How do I reach the scenic spot?
TL;DR: There is none.
A:
There is no scenic spot on top of the Ossuaire. (I was misled by the existence of yellow lights, and I thought that every major building in the game has a scenic spot on top of it, because the others do, and I had not bought the map of targets yet.)
|
{
"pile_set_name": "StackExchange"
}
|
Q:
C++ class "cannot be shared between ARC and non-ARC code…"
Compiling Objective-C++, I get the following diagnostic:
'MyClass' cannot be shared between ARC and non-ARC code; add a non-trivial copy assignment operator to make it ABI-compatible
Similar errors come up for missing copy-constructors and destructors. It's easy enough to add the necessary methods, but often they aren't meaningful, particularly for classes that I want to make non-copyable, and I don't mix ARC and non-ARC code (not intentionally, at least).
Why am I getting this message, and can I get rid of it without writing pointless member functions for all my C++ classes?
A:
It is because ARC discourages Objective-C objects in POD structures and C++ classes, since their inclusion makes managing their memory ambiguous. You can resolve it by prefixing Objective-C members with the __unsafe_unretained qualifier, which tells ARC to keep its hands off the member, but puts you in the awkward position of having to manage the memory of the member object yourself. Also, objects with internal linkage are not covered by the warning, so wrapping your class or struct in an unnamed namespace { } works as well. If you absolutely must use ARC in the file (or files) that happen to have Objective-C objects as class members, you must provide the proper copy/assignment operators (as the warning says); otherwise just turn ARC off and save yourself the trouble.
|
{
"pile_set_name": "StackExchange"
}
|
Q:
Set conditional based on which field in views-view-fields.html.twig
Is there a variable in views-view-fields.html.twig that contains the field machine name, or some other unique indicator? I am attempting something like this:
{% if field.[field_machine_name] == "my_field_machine_name" %}
// Do stuff
{% endif %}
A massive workaround would be to create individual views.view.field--[my_view]--[my_field].html.twig templates for every field in the view, but it really seems like that shouldn't be necessary.
A:
I ended up accomplishing this through the views UI by setting all of my fields, except for the last one, to "hidden" and then rewriting the output of the last field to include the hidden fields, and including the markup that I want.
|
{
"pile_set_name": "StackExchange"
}
|
Q:
Android- Download mp3 to SDCARD via URL naming issue
I currently have a webview with various mp3 links. If the user presses one of these links, an alertDialog pops up and they can choose whether to listen to or download the file. While my download portion works as intended (via an AsyncTask), I currently have it set up so that I hard-code the name the mp3 file is given on the SD card. I would like the name of the track to be the name of the mp3 file. Any ideas on how I could do that? Thanks.
Here is portion of my code:
//so you can click on links in app and not open the actual browser. will stay in app
private class HelloWebViewClient extends WebViewClient{
@Override
public boolean shouldOverrideUrlLoading(final WebView view, final String url){
view.loadUrl(url);
view.getSettings().getAllowFileAccess();
view.getSettings().setJavaScriptEnabled(true);
//load the dropbox files so people can listen to the track
if(url.endsWith(".mp3")){
progressWebView.dismiss();
progressWebView.cancel();
blogDialog.setButton("Listen", new DialogInterface.OnClickListener() {
@Override
public void onClick(DialogInterface dialog, int which) {
Intent intent = new Intent(Intent.ACTION_VIEW);
intent.setDataAndType(Uri.parse(url), "audio/*");
view.getContext().startActivity(intent);
}
});
blogDialog.setButton2("Download", new DialogInterface.OnClickListener() {
@Override
public void onClick(DialogInterface dialog, int which) {
sdrUrl = url.toString();
new DownloadFile().execute();
}
});
blogDialog.show();
}else{
return super.shouldOverrideUrlLoading(view, url);
}
return true;
}
}
//to handle the back button
@Override
public boolean onKeyDown(int keyCode, KeyEvent event){
if((keyCode == KeyEvent.KEYCODE_BACK) && sdrWebView.canGoBack()){
sdrWebView.goBack();
return true;
}
return super.onKeyDown(keyCode, event);
}
public void onPause(){
super.onPause();
}
/*create the pop up menu so you can reload*/
@Override
public boolean onCreateOptionsMenu(Menu menu){
MenuInflater inflater = getMenuInflater();
inflater.inflate(R.menu.menu, menu);
return true;
}
@Override
public boolean onOptionsItemSelected(MenuItem item){
switch (item.getItemId()){
case R.id.refreshsetting: sdrWebView.loadUrl("http://www.stopdroprave.com");
break;
}
return true;
}
private class DownloadFile extends AsyncTask<String, Integer, String>{
@Override
protected String doInBackground(String... url) {
try {
URL url2 = new URL(sdrUrl);
HttpURLConnection c = (HttpURLConnection) url2.openConnection();
c.setRequestMethod("GET");
c.setDoOutput(true);
c.connect();
int lengthOfFile = c.getContentLength();
String PATH = Environment.getExternalStorageDirectory()
+ "/download/";
Log.v("", "PATH: " + PATH);
File file = new File(PATH);
file.mkdirs();
String fileName = "testSDRtrack.mp3";
File outputFile = new File(file, fileName);
FileOutputStream fos = new FileOutputStream(outputFile);
InputStream is = c.getInputStream();
byte[] buffer = new byte[1024];
int len1 = 0;
while ((len1 = is.read(buffer)) != -1) {
publishProgress((int)(len1*100/lengthOfFile));
fos.write(buffer, 0, len1);
}
fos.close();
is.close();
}catch (IOException e) {
e.printStackTrace();
}
return null;
}
@Override
protected void onProgressUpdate(Integer... values) {
super.onProgressUpdate(values);
}
}
}
A:
I essentially split the URL, took the last portion (the track name), and saved the file under that name, as shown here:
Android- split URL string
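The idea, sketched in Python for illustration (in Java the equivalent is String.lastIndexOf('/') plus substring): take everything after the last '/' and strip any query string. The URL below is hypothetical:

```python
def filename_from_url(url):
    """Derive a local file name from a download URL."""
    name = url.rsplit('/', 1)[-1]   # keep everything after the last '/'
    name = name.split('?', 1)[0]    # drop a query string, if any
    return name

print(filename_from_url("http://example.com/music/my_track.mp3"))
# -> my_track.mp3
```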
|
{
"pile_set_name": "StackExchange"
}
|
Q:
Should our FAQ contain a list of what's off topic and if so what should that list contain?
Now that there's a migration path from Stack Overflow to here, should we be more strict about what's off topic?
I know there's this old question about what the FAQ should contain, but the current FAQ doesn't have the what's not allowed list that the Super User one or Web Applications one does.
So, should we have an "off topic" list and what should it contain?
A couple of things spring to mind straight away:
Office environment questions - such as questions about office chairs and desks. These apply to any office worker and aren't unique to developers.
Office politics questions - these are clearly going to be very subjective and will usually fall foul of the 6 guidelines for quality subjective questions anyway.
UPDATE
I need to chase up the ability to edit the "what's not allowed" section. If it's not going to be forthcoming any time soon, I'll go ahead and edit it into the section I can edit.
As all the answers seem to have pretty much universal approval from all those who voted, they're all in.
A:
"What language should I learn next?"
As the answer is always "it depends on what you want to do" these are either too localised or not constructive.
A:
We need something like:
If your question is about …
Networking, servers, or maintaining other people's PCs and contains no source code, ask on Server Fault.
General computer software or hardware troubleshooting, ask on Super User.
Specific programming problem, software algorithms, coding, ask on Stack Overflow.
A:
"What salary/wage should I ask for?" Always too localized.
|
{
"pile_set_name": "StackExchange"
}
|
Q:
open new window with javascript in current page
how open new window with javascript in current page (using a tag!)?
i have script, but this no work, why?
<a href="home.html" onclick="window.open('http://google.com/', '_self')">Go</a>;
A:
Why use JavaScript? Why not: <a href="http://google.com" target="_blank">Go</a>
To open a second new window as well,
<a href="http://google.com" target="_blank" onclick="window.open('http://google.com')">Go</a>
|
{
"pile_set_name": "StackExchange"
}
|
Q:
Using wcstod to convert zero value
I am using wcstod() to convert text data, which I read from an XML file, to a numeric type:
double x;
BSTR name;
MSXML::IXMLDOMNodeListPtr theList;
MSXML::IXMLDOMNodePtr theItem;
//Some XML API here...
theItem = theList->Getitem(0);
theItem->get_text(&name);
x = wcstod(name,NULL);
the problem is that this function returns 0 upon failure, but sometimes I do want to read and convert the valid string L"0".
Is there a workaround for this?
A:
You can use std::stod, which will throw an exception if the conversion fails.
Make sure the BSTR is initialized to NULL, and add error checks for Getitem and get_text:
#include <string>
BSTR name = nullptr;
...
double x = 0;
if(name)
{
try
{
x = std::stod(name);
}
catch(...)
{
//error ...
}
}
|
{
"pile_set_name": "StackExchange"
}
|
Q:
Can I eval expression in object context?
Good day. I need to eval an expression in some object context, but the only solution I found is to create stubs for every object function:
var c = {
a : function () {
return 'a';
},
b : function () {
return 'b';
}
};
function evalInObj(c, js) {
function a() {
return c.a();
}
function b() {
return c.b();
}
return eval(js);
};
console.log(evalInObj(c, 'a() + b()'));
Show me the right way, please. Can I do it with prototype?
var C = function(id) {
this.id = id;
}
C.prototype.a = function () {
return 'a' + this.id;
}
C.prototype.b = function () {
return 'b' + this.id;
}
function evalCtx(js) {
console.log(this); // C {id: 1}
return eval(js);
}
var c1 = new C(1);
evalCtx.call(c1, 'a() + b()'); // error: a is not defined
A:
function runScript(ctx, js){ with(ctx){ return eval(js); }}
closed. thanks all
|
{
"pile_set_name": "StackExchange"
}
|
Q:
Do we want cut-and-paste answers?
I've noticed a trend — on Islamic forums in general, not exclusively this site — of answering questions with nothing more than a copy-paste of an article written by a scholar. Even if they're properly cited (and thus not falling into the issue of plagiarism), I don't feel such answers are appropriate for this site.
To me, such answers are exceedingly low quality. We are trying to build a site of expert knowledge here, but there's not a lot of expert knowledge needed to just point someone to (for example) IslamQA. And that's not even getting into the issue of copy-pasting an article that isn't even a direct answer to the question asked, and thus would be only partially relevant. I doubt such posts, posts that effectively just tell people that we don't have our own answers, will attract the type of knowledgeable people we really need to make this site thrive.
After all, if we're not actually providing better information, or presenting good information in a better form, then why would anyone want to invest their time here?
Now I understand that there's not a lot of support in Islamic schools for laymen performing "original research," and the easiest way to avoid falling into that trap is to reference only opinions of scholars who are educated enough to do such research. But even then there's little to no reason (other than laziness) not to summarize a scholarly opinion in an easier to digest manner, or combine mutually-supporting opinions into one post, and then provide links to the originals if anyone wants more depth.
So the question I present to the community is thus: How should we the site (and especially we the moderators) deal with such answers?
(see also: Are answers that just contain links elsewhere really "good answers"?)
A:
Such posts ought to be downvoted and commented on, IMO. We don't want to become a repository of quotes from elsewhere.
You have two separate issues here:
Answers which just quote the Quran/etc
Yes, the Quran is considered paramount, but it doesn't hurt to provide an interpretation of the quoted lines. Yes, interpretations can be subjective, but in that case the quotes are always a fallback.
Also, I doubt the quote will actually answer the question at all times (Yes, it will answer it, but not exactly--it won't look like an answer to the question). So adding a conclusion that succinctly answers the question is good.
For example, https://islam.stackexchange.com/a/236/227 contains a quote which effectively answers the question, but a "So, the Quran forbids homosexuality" line underneath wouldn't be amiss. Along with some background (Who is Lut?). On the other hand, the first section of https://islam.stackexchange.com/a/1452/227 is much better, it had interpretations and background (though I'm not sure if the rest of that answer is good, though--I'm just focusing on the quoting part).
Answers which just quote another site
These are even worse, for one there is the credibility issue, and for another there's not much effort put into the post. Again, you can add interpretations, etc (or not quote the site at all, instead write it in your own words). Otherwise, just provide a link in a comment--don't answer if you're not going to write much yourself.
|
{
"pile_set_name": "StackExchange"
}
|
Q:
Invertibility of a matrix in portfolio optimization
Let $A$ be an $n\times n$ symmetric matrix with non-negative entries. Let $\mathbf{1}$ be the column vector of dimension $n$ with all entries being $1$.
Consider the $(n+1)\times (n+1)$ matrix
$$ B=
\begin{bmatrix}
A & \mathbf{1} \\
\mathbf{1}^T & 0
\end{bmatrix}
$$
Question: what is the condition for $A$ so that $B$ is invertible?
Remark: This matrix is related to portfolio optimization problems in finance. I note that when $A$ is a constant matrix, the determinant of $B$ is $0$ and thus $B$ is not invertible.
A:
If $\det(A) \neq 0$ so that $A^{-1}$ exists
and the scalar $\alpha = \mathbf{1}^T A^{-1} \mathbf{1} \neq 0$, then we have
$$B^{-1} = \begin{bmatrix}
A^{-1} - \alpha^{-1}A^{-1}\mathbf{1}\mathbf{1}^TA^{-1} & \alpha^{-1}A^{-1}\mathbf{1} \\
\alpha^{-1}\mathbf{1}^TA^{-1} & -\alpha^{-1}
\end{bmatrix}$$
Note that
$$\begin{align}BB^{-1}&= \begin{bmatrix}
AA^{-1} - A\alpha^{-1}A^{-1}\mathbf{1}\mathbf{1}^TA^{-1}+ \mathbf{1}\alpha^{-1}\mathbf{1}^TA^{-1} & A\alpha^{-1}A^{-1}\mathbf{1}-\alpha^{-1}\mathbf{1} \\
\mathbf{1}^TA^{-1} - \mathbf{1}^T\alpha^{-1}A^{-1}\mathbf{1}\mathbf{1}^TA^{-1} + 0\alpha^{-1}A^{-1}\mathbf{1} & \mathbf{1}^T\alpha^{-1}A^{-1}\mathbf{1}-0\alpha^{-1}
\end{bmatrix} \\ \\&= \begin{bmatrix}
I -\alpha^{-1}\mathbf{1}\mathbf{1}^TA^{-1}+ \alpha^{-1}\mathbf{1}\mathbf{1}^TA^{-1} & \alpha^{-1}\mathbf{1}-\alpha^{-1}\mathbf{1} \\
\mathbf{1}^TA^{-1} - \alpha^{-1}\alpha\mathbf{1}^TA^{-1} & \alpha^{-1}\alpha \end{bmatrix}\\ \\ &= \begin{bmatrix}
I & \mathbf{0} \\
\mathbf{0}^T & 1 \end{bmatrix}\end{align} $$
Addendum
In general, for a block matrix
$$B = \begin{bmatrix}
A & C \\
E & D
\end{bmatrix},$$
if $A^{-1}$ exists, then the Schur complement
of $A$ is $D - EA^{-1}C$ and
$$\det(B) = \det(A) \det(D - EA^{-1}C)$$
Thus, $\det(B) \neq 0$ and $B^{-1}$ exists if and only if $\det(D - EA^{-1}C) \neq 0$.
In this case, the Schur complement reduces to a scalar $-\mathbf{1}^TA^{-1} \mathbf{1}$, and the condition $\mathbf{1}^TA^{-1}\mathbf{1} \neq 0$ is necessary and sufficient for $B$ to be invertible.
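The blockwise formula can be checked numerically. Below is a small sketch in Python using exact rational arithmetic; the concrete $A$ is just an illustrative choice, and the code relies on the symmetry of $A$ (so $\mathbf{1}^T A^{-1} = (A^{-1}\mathbf{1})^T$):

```python
from fractions import Fraction as F

def matmul(X, Y):
    """Multiply two matrices given as lists of lists."""
    return [[sum(X[i][k] * Y[k][j] for k in range(len(Y)))
             for j in range(len(Y[0]))] for i in range(len(X))]

# Concrete symmetric A with nonnegative entries (chosen for the demo).
A = [[F(2), F(1)],
     [F(1), F(2)]]

# 2x2 inverse via the adjugate formula.
det = A[0][0] * A[1][1] - A[0][1] * A[1][0]
Ainv = [[ A[1][1] / det, -A[0][1] / det],
        [-A[1][0] / det,  A[0][0] / det]]

ones_col = [[F(1)], [F(1)]]
ones_row = [[F(1), F(1)]]
alpha = matmul(matmul(ones_row, Ainv), ones_col)[0][0]  # 1^T A^{-1} 1

# Assemble B and the claimed B^{-1} blockwise.
B = [[A[0][0], A[0][1], F(1)],
     [A[1][0], A[1][1], F(1)],
     [F(1),    F(1),    F(0)]]

u = matmul(Ainv, ones_col)                 # the column A^{-1} 1
# Top-left block: A^{-1} - alpha^{-1} A^{-1} 1 1^T A^{-1};
# since A is symmetric, (A^{-1} 1 1^T A^{-1})_{ij} = u_i * u_j.
TL = [[Ainv[i][j] - u[i][0] * u[j][0] / alpha for j in range(2)]
      for i in range(2)]
Binv = [[TL[0][0], TL[0][1], u[0][0] / alpha],
        [TL[1][0], TL[1][1], u[1][0] / alpha],
        [u[0][0] / alpha, u[1][0] / alpha, F(-1) / alpha]]

identity = matmul(B, Binv)
print(identity)  # exact 3x3 identity
```

For this $A$, $\alpha = \mathbf{1}^T A^{-1}\mathbf{1} = 2/3 \neq 0$, and the product comes out as the exact identity, consistent with the derivation above.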
|
{
"pile_set_name": "StackExchange"
}
|
Q:
Is there a cheat list/tutorial for Eclipse/NetBeans Dev moving to Visual Studio?
I'm a Java developer and I'm very familiar with Eclipse and NetBeans (I've memorized a bunch of shortcuts!). However, my advisor's new research uses Windows and C#, so I need to learn C# and move to the Visual Studio 2010 IDE. So far I'm hating it. I can't get used to it. Is there a tutorial or cheat sheet to help existing Eclipse/NetBeans devs switch to Visual Studio?
The major things that I miss are the flexibility of Eclipse and some key shortcuts like:
Format-Code (Ctrl + Shift + F)
Quick Fix (Ctrl + 1)
Open Editor -- Navigate quickly (Ctrl + E)
Open File of Project -- RegEx (Ctrl + Shift + T)
A:
This chap has done some key bindings that might help you..
Visual Studio's keybindings for Eclipse
|
{
"pile_set_name": "StackExchange"
}
|
Q:
caching the result from a [n async] factory method iff it doesn't throw
UPDATE: Heavily revised after @usr pointed out I'd incorrectly assumed Lazy<T>'s default thread safety mode was LazyThreadSafetyMode.PublicationOnly...
I want to lazily compute a value via an async Factory Method (i.e. it returns Task<T>) and have it cached upon success. On exception, I want to have that be available to me. I do not however, want to fall prey to the exception caching behavior that Lazy<T> has in its default mode (LazyThreadSafetyMode.ExecutionAndPublication)
Exception caching: When you use factory methods, exceptions are cached. That is, if the factory method throws an exception the first time a thread tries to access the Value property of the Lazy object, the same exception is thrown on every subsequent attempt. This ensures that every call to the Value property produces the same result and avoids subtle errors that might arise if different threads get different results. The Lazy stands in for an actual T that otherwise would have been initialized at some earlier point, usually during startup. A failure at that earlier point is usually fatal. If there is a potential for a recoverable failure, we recommend that you build the retry logic into the initialization routine (in this case, the factory method), just as you would if you weren’t using lazy initialization.
Stephen Toub has an AsyncLazy class and writeup that seems just right:
public class AsyncLazy<T> : Lazy<Task<T>>
{
public AsyncLazy(Func<Task<T>> taskFactory) :
base(() => Task.Factory.StartNew(() => taskFactory()).Unwrap())
{ }
public TaskAwaiter<T> GetAwaiter() { return Value.GetAwaiter(); }
}
however that's effectively the same behavior as a default Lazy<T> - if there's a problem, there will be no retries.
I'm looking for a Task<T> compatible equivalent of Lazy<T>(Func<T>, LazyThreadSafetyMode.PublicationOnly), i.e. it should behave as that is specified:-
Alternative to locking In certain situations, you might want to avoid the overhead of the Lazy object's default locking behavior. In rare situations, there might be a potential for deadlocks. In such cases, you can use the Lazy&lt;T&gt;(LazyThreadSafetyMode) or Lazy&lt;T&gt;(Func&lt;T&gt;, LazyThreadSafetyMode) constructor, and specify LazyThreadSafetyMode.PublicationOnly. This enables the Lazy object to create a copy of the lazily initialized object on each of several threads if the threads call the Value property simultaneously. The Lazy object ensures that all threads use the same instance of the lazily initialized object and discards the instances that are not used. Thus, the cost of reducing the locking overhead is that your program might sometimes create and discard extra copies of an expensive object. In most cases, this is unlikely. The examples for the Lazy&lt;T&gt;(LazyThreadSafetyMode) and Lazy&lt;T&gt;(Func&lt;T&gt;, LazyThreadSafetyMode) constructors demonstrate this behavior.
IMPORTANT
When you specify PublicationOnly, exceptions are never cached, even if you specify a factory method.
Is there any FCL, Nito.AsyncEx or similar construct that might fit in nicely here? Failing this, can anyone see an elegant way to gate the "attempt in progress" bit (I'm OK with each caller making its own attempt in the same way that a Lazy<T>( ..., (LazyThreadSafetyMode.PublicationOnly) does) and yet still have that and the cache management encapsulated neatly?
A:
Disclaimer: This is a wild attempt at refactoring Lazy<T>. It is in no way production grade code.
I took the liberty of looking at Lazy<T> source code and modifying it a bit to work with Func<Task<T>>. I've refactored the Value property to become a FetchValueAsync method since we can't await inside a property. You are free to block the async operation with Task.Result so you can still use the Value property, I didn't want to do that because it may lead to problems. So it's a little bit more cumbersome, but still works. This code is not fully tested:
public class AsyncLazy<T>
{
static class LazyHelpers
{
internal static readonly object PUBLICATION_ONLY_SENTINEL = new object();
}
class Boxed
{
internal Boxed(T value)
{
this.value = value;
}
internal readonly T value;
}
class LazyInternalExceptionHolder
{
internal ExceptionDispatchInfo m_edi;
internal LazyInternalExceptionHolder(Exception ex)
{
m_edi = ExceptionDispatchInfo.Capture(ex);
}
}
static readonly Func<Task<T>> alreadyInvokedSentinel = delegate
{
Contract.Assert(false, "alreadyInvokedSentinel should never be invoked.");
return default(Task<T>);
};
private object boxed;
[NonSerialized]
private Func<Task<T>> valueFactory;
[NonSerialized]
private object threadSafeObj;
public AsyncLazy()
: this(LazyThreadSafetyMode.ExecutionAndPublication)
{
}
public AsyncLazy(Func<Task<T>> valueFactory)
: this(valueFactory, LazyThreadSafetyMode.ExecutionAndPublication)
{
}
public AsyncLazy(bool isThreadSafe) :
this(isThreadSafe ?
LazyThreadSafetyMode.ExecutionAndPublication :
LazyThreadSafetyMode.None)
{
}
public AsyncLazy(LazyThreadSafetyMode mode)
{
threadSafeObj = GetObjectFromMode(mode);
}
public AsyncLazy(Func<Task<T>> valueFactory, bool isThreadSafe)
: this(valueFactory, isThreadSafe ? LazyThreadSafetyMode.ExecutionAndPublication : LazyThreadSafetyMode.None)
{
}
public AsyncLazy(Func<Task<T>> valueFactory, LazyThreadSafetyMode mode)
{
if (valueFactory == null)
throw new ArgumentNullException("valueFactory");
threadSafeObj = GetObjectFromMode(mode);
this.valueFactory = valueFactory;
}
private static object GetObjectFromMode(LazyThreadSafetyMode mode)
{
if (mode == LazyThreadSafetyMode.ExecutionAndPublication)
return new object();
if (mode == LazyThreadSafetyMode.PublicationOnly)
return LazyHelpers.PUBLICATION_ONLY_SENTINEL;
if (mode != LazyThreadSafetyMode.None)
throw new ArgumentOutOfRangeException("mode");
return null; // None mode
}
public override string ToString()
{
return IsValueCreated ? ((Boxed) boxed).value.ToString() : "NoValue";
}
internal LazyThreadSafetyMode Mode
{
get
{
if (threadSafeObj == null) return LazyThreadSafetyMode.None;
if (threadSafeObj == (object)LazyHelpers.PUBLICATION_ONLY_SENTINEL) return LazyThreadSafetyMode.PublicationOnly;
return LazyThreadSafetyMode.ExecutionAndPublication;
}
}
internal bool IsValueFaulted
{
get { return boxed is LazyInternalExceptionHolder; }
}
public bool IsValueCreated
{
get
{
return boxed != null && boxed is Boxed;
}
}
public async Task<T> FetchValueAsync()
{
Boxed boxed = null;
if (this.boxed != null)
{
// Do a quick check up front for the fast path.
boxed = this.boxed as Boxed;
if (boxed != null)
{
return boxed.value;
}
LazyInternalExceptionHolder exc = this.boxed as LazyInternalExceptionHolder;
exc.m_edi.Throw();
}
return await LazyInitValue().ConfigureAwait(false);
}
/// <summary>
/// local helper method to initialize the value
/// </summary>
    /// <returns>The initialized T value</returns>
private async Task<T> LazyInitValue()
{
Boxed boxed = null;
LazyThreadSafetyMode mode = Mode;
if (mode == LazyThreadSafetyMode.None)
{
boxed = await CreateValue().ConfigureAwait(false);
this.boxed = boxed;
}
else if (mode == LazyThreadSafetyMode.PublicationOnly)
{
boxed = await CreateValue().ConfigureAwait(false);
if (boxed == null ||
Interlocked.CompareExchange(ref this.boxed, boxed, null) != null)
{
boxed = (Boxed)this.boxed;
}
else
{
valueFactory = alreadyInvokedSentinel;
}
}
else
{
object threadSafeObject = Volatile.Read(ref threadSafeObj);
bool lockTaken = false;
try
{
if (threadSafeObject != (object)alreadyInvokedSentinel)
Monitor.Enter(threadSafeObject, ref lockTaken);
else
Contract.Assert(this.boxed != null);
if (this.boxed == null)
{
boxed = await CreateValue().ConfigureAwait(false);
this.boxed = boxed;
Volatile.Write(ref threadSafeObj, alreadyInvokedSentinel);
}
else
{
boxed = this.boxed as Boxed;
if (boxed == null) // it is not Boxed, so it is a LazyInternalExceptionHolder
{
LazyInternalExceptionHolder exHolder = this.boxed as LazyInternalExceptionHolder;
Contract.Assert(exHolder != null);
exHolder.m_edi.Throw();
}
}
}
finally
{
if (lockTaken)
Monitor.Exit(threadSafeObject);
}
}
Contract.Assert(boxed != null);
return boxed.value;
}
/// <summary>Creates an instance of T using valueFactory in case its not null or use reflection to create a new T()</summary>
/// <returns>An instance of Boxed.</returns>
private async Task<Boxed> CreateValue()
{
Boxed localBoxed = null;
LazyThreadSafetyMode mode = Mode;
if (valueFactory != null)
{
try
{
// check for recursion
if (mode != LazyThreadSafetyMode.PublicationOnly && valueFactory == alreadyInvokedSentinel)
throw new InvalidOperationException("Recursive call to Value property");
Func<Task<T>> factory = valueFactory;
if (mode != LazyThreadSafetyMode.PublicationOnly) // only detect recursion on None and ExecutionAndPublication modes
{
valueFactory = alreadyInvokedSentinel;
}
else if (factory == alreadyInvokedSentinel)
{
                // Another thread raced with us and beat us to successfully invoke the factory.
return null;
}
localBoxed = new Boxed(await factory().ConfigureAwait(false));
}
catch (Exception ex)
{
if (mode != LazyThreadSafetyMode.PublicationOnly) // don't cache the exception for PublicationOnly mode
boxed = new LazyInternalExceptionHolder(ex);
throw;
}
}
else
{
try
{
localBoxed = new Boxed((T)Activator.CreateInstance(typeof(T)));
}
catch (MissingMethodException)
{
                Exception ex = new MissingMemberException("Missing parameterless constructor");
if (mode != LazyThreadSafetyMode.PublicationOnly) // don't cache the exception for PublicationOnly mode
boxed = new LazyInternalExceptionHolder(ex);
throw ex;
}
}
return localBoxed;
}
}
A:
Does this get anywhere near your requirements?
The behaviour falls somewhere between ExecutionAndPublication and PublicationOnly.
While the initializer is in-flight all calls to Value will be handed the same task (which is cached temporarily but could subsequently succeed or fail); if the initializer succeeds then that completed task is cached permanently; if the initializer fails then the next call to Value will create a completely new initialization task and the process begins again!
public sealed class TooLazy<T>
{
private readonly object _lock = new object();
private readonly Func<Task<T>> _factory;
private Task<T> _cached;
public TooLazy(Func<Task<T>> factory)
{
if (factory == null) throw new ArgumentNullException("factory");
_factory = factory;
}
public Task<T> Value
{
get
{
lock (_lock)
{
if ((_cached == null) ||
(_cached.IsCompleted && (_cached.Status != TaskStatus.RanToCompletion)))
{
_cached = Task.Run(_factory);
}
return _cached;
}
}
}
}
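The cache-on-success, retry-on-failure contract of TooLazy&lt;T&gt; can also be sketched outside C# for comparison. Below is a hypothetical Python analogue (class and helper names are illustrative, and it uses a thread pool's Future in place of Task&lt;T&gt;):

```python
import threading
from concurrent.futures import Future, ThreadPoolExecutor

class RetryingLazy:
    """Caches the factory's result on success; a failed attempt is
    discarded, so the next caller triggers a fresh attempt (similar in
    spirit to PublicationOnly's no-exception-caching behavior)."""

    def __init__(self, factory):
        self._factory = factory
        self._lock = threading.Lock()
        self._future = None
        self._pool = ThreadPoolExecutor(max_workers=1)

    def value(self) -> Future:
        with self._lock:
            f = self._future
            # Reuse an in-flight or successfully completed attempt;
            # throw away a completed-with-error attempt and retry.
            if f is None or (f.done() and f.exception() is not None):
                f = self._pool.submit(self._factory)
                self._future = f
            return f

# Demo: the first attempt fails, the second succeeds and is then cached.
attempts = []
def flaky():
    attempts.append(1)
    if len(attempts) == 1:
        raise RuntimeError("transient failure")
    return 42

lazy = RetryingLazy(flaky)
first = lazy.value()
assert isinstance(first.exception(), RuntimeError)  # the failure is not cached
assert lazy.value().result() == 42                  # the next call retries
assert lazy.value().result() == 42                  # the success IS cached
assert len(attempts) == 2                           # factory ran exactly twice
```

As with TooLazy, the key check is "completed but not successfully" before deciding to reuse or replace the cached task.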
|
{
"pile_set_name": "StackExchange"
}
|
Q:
Sphinx Search multiple config files import
How can you use multiple config files in Sphinx Search (preferably by including one inside another)? I need it for different environments, because the only difference between development, stage and production is the database credentials. Is there an easy way to achieve this?
A:
On Linux, you can use dynamic config files.
That is, the config file can be executed by an arbitrary interpreter, so the config file could be a PHP, Perl or even shell script.
http://sphinxsearch.com/blog/2013/11/05/sphinx-configuration-features-and-tricks/
More: https://www.google.com/search?q=sphinx+search+shebang
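As a rough illustration of the trick from that blog post: searchd runs a config file that starts with a shebang line and reads the script's stdout as the actual configuration, so one file can serve every environment. All names below (APP_ENV, database and column names, paths) are illustrative assumptions, not anything from the question:

```python
#!/usr/bin/env python
# Hypothetical "sphinx.conf" written as an executable script.
import os

creds = {
    "development": ("localhost", "dev_user", "dev_pass"),
    "stage":       ("stage-db",  "stage_user", "stage_pass"),
    "production":  ("prod-db",   "prod_user",  "prod_pass"),
}
# Pick credentials from the environment, defaulting to development.
env = os.environ.get("APP_ENV", "development")
host, user, password = creds.get(env, creds["development"])

print(f"""source main
{{
    type      = mysql
    sql_host  = {host}
    sql_user  = {user}
    sql_pass  = {password}
    sql_db    = app
    sql_query = SELECT id, title, body FROM documents
}}

index main
{{
    source = main
    path   = /var/lib/sphinx/main
}}""")
```

The same file, marked executable, is deployed everywhere; only the APP_ENV environment variable differs per host.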
|
{
"pile_set_name": "StackExchange"
}
|
Q:
Custom Adapter for fragment context cant resolve activity
I have a custom GridView and use a custom adapter for it, but afterwards I converted the list view activity to a fragment, and now I have an incompatible types problem in the custom adapter.
Gridview :
package com.appp.web.a95;
import android.content.Context;
import android.os.Bundle;
import android.support.annotation.Nullable;
import android.support.v4.app.Fragment;
import android.view.LayoutInflater;
import android.view.View;
import android.view.ViewGroup;
import android.widget.GridView;
import java.util.ArrayList;
public class Main_grid extends Fragment {
GridView gv;
Context context;
ArrayList prgmName;
public static String [] prgmNameList={ //some items};
public static int [] prgmImages={//some items};
@Override
public View onCreateView(LayoutInflater inflater, ViewGroup container,
Bundle savedInstanceState) {
super.onCreate(savedInstanceState);
return inflater.inflate(R.layout.main_grid2, container, false);
}
@Override
public void onViewCreated(View view, @Nullable Bundle savedInstanceState) {
gv = (GridView) getView().findViewById(R.id.gridView1);
gv.setAdapter(new CustomAdapter(this, prgmNameList, prgmImages));
}
}
this is Custom adapter
package com.appp.web.a95;
import android.app.Dialog;
import android.content.Context;
import android.view.LayoutInflater;
import android.view.View;
import android.view.View.OnClickListener;
import android.view.ViewGroup;
import android.widget.BaseAdapter;
import android.widget.Button;
import android.widget.ImageView;
import android.widget.TextView;
public class CustomAdapter extends BaseAdapter{
String [] result;
Context context;
int [] imageId;
private static LayoutInflater inflater=null;
public CustomAdapter(Main_grid mainActivity, String[] prgmNameList, int[] prgmImages) {
// TODO Auto-generated constructor stub
result=prgmNameList;
//here is incompatible types error
context=mainActivity;
imageId=prgmImages;
inflater = ( LayoutInflater )context.
getSystemService(Context.LAYOUT_INFLATER_SERVICE);
}
@Override
public int getCount() {
// TODO Auto-generated method stub
return result.length;
}
@Override
public Object getItem(int position) {
// TODO Auto-generated method stub
return position;
}
@Override
public long getItemId(int position) {
// TODO Auto-generated method stub
return position;
}
public class Holder
{
TextView tv;
ImageView img;
}
@Override
public View getView(final int position, View convertView, ViewGroup parent) {
// TODO Auto-generated method stub
Holder holder=new Holder();
View rowView;
if ((position % 2) == 0 ){
rowView = inflater.inflate(R.layout.grid_list4, null);}
else {rowView = inflater.inflate(R.layout.grid_list5, null);}
holder.tv=(TextView) rowView.findViewById(R.id.textView1);
holder.img=(ImageView) rowView.findViewById(R.id.imageView1);
holder.tv.setText(result[position]);
holder.img.setImageResource(imageId[position]);
rowView.setOnClickListener(new OnClickListener() {
@Override
public void onClick(View v) {
// custom dialog
final Dialog dialog = new Dialog(context);
dialog.setContentView(R.layout.customdialog);
dialog.setTitle(result[position]);
// set the custom dialog components - text, image and button
TextView text = (TextView) dialog.findViewById(R.id.text);
ImageView image = (ImageView) dialog.findViewById(R.id.image);
image.setImageResource(imageId[position]);
Button dialogButton = (Button) dialog.findViewById(R.id.dialogButtonOK);
// if button is clicked, close the custom dialog
dialogButton.setOnClickListener(new OnClickListener() {
@Override
public void onClick(View v) {
dialog.dismiss();
}
});
dialog.show();
}
});
return rowView;
}
}
I'm getting the incompatible types error on this line:
context=mainActivity;
What can I do in this case?
A:
Your problem is when you instantiate your CustomAdapter -
new CustomAdapter(this, prgmNameList, prgmImages));
this is referencing the Fragment instance, and that is what you are passing in, which is obviously not an instance of an Activity.
I would suggest you construct and instantiate your Custom Adapter like this:
new CustomAdapter(getActivity(), prgmNameList, prgmImages));
and/or change the signature to be:
public CustomAdapter(Context context, String[] prgmNameList, int[] prgmImages)
|
{
"pile_set_name": "StackExchange"
}
|
Q:
Accessibility of HTML contents loaded by javaFX's WebView
I am currently using a JavaFX WebView to "protect" my webapp's JavaScript code (separate URL) from easy access. I am aware that the WebEngine loads the HTML content from the URL and processes it.
Is the HTML content loaded in-memory or is it cached somewhere first before retrieval?
Thank you very much in advance!
A:
The default WebView implementation in JavaFX 8 does not cache data it retrieves to disk. Though, as there is an open request for this feature, JDK-8014501 JavaFX WebView component to use internal cache, I wouldn't recommend on relying on this always being the case.
Anyway, it's client code, you can't really "protect" it. Somebody on the client machine could always install a proxy or network tracing tool on the client and intercept the traffic (even HTTPS traffic) to view your "protected" JavaScript files in clear text. You can obfuscate the JavaScript code to make it harder for somebody who does this to understand the code. You are really just trying to implement security through obscurity. My advice is to not worry about somebody accessing your JavaScript code - assume that, whatever you do, somebody could deobfuscate it, view it and understand it. If there is anything extremely sensitive about the code that you don't want exposed, then run the code on the server, not the client.
|
{
"pile_set_name": "StackExchange"
}
|
Q:
Why do I have to run my 2D game at 300+ FPS for movement to be passably smooth?
I'm working on a simple game with SFML, and the only way to make the game reasonably smooth is to allow it the highest framerate my machine can handle (that is, giving it no limit). 60 FPS in my game looks like 5 FPS in most games.
I've made sure it wasn't any weird timing issue with sf::Clock by making my delta variable constant, and I've made a simple moving square with the default SFML template to make sure it wasn't anything else in my code.
Limiting the framerate using window.setFramerateLimit(80) or window.setVerticalSyncEnabled(true), or by placing usleep(1000 * 15), a 15-millisecond pause (~67 FPS, in theory), in the game loop, causes terribly glitchy, asymmetric animation.
I think the problem is my Mac. I hesitate to blame anything other than my code, but I just can't explain the problem otherwise. I'm running OS X Mavericks on a 15-inch MacBook Pro with a Retina display (2.4 GHz i7, Nvidia, 8GB RAM).
Is it the Retina display, or some issue with the sleep() system calls? Any thoughts are welcome. Thanks!
A:
Sleep calls are an extremely bad way of controlling framerate. Use them to reduce CPU usage for sure, but don't use them to control framerate.
usleep(1000 * 15), a 15-millisecond pause (~67 FPS, in theory)
No, it's not.
First of all, see the documentation for usleep:
The actual time slept may be longer, due to system latencies and possible limitations in the timer resolution of the hardware.
Secondly, realise that the sleep time is in addition to the time taken to run a frame, which may be longer. So if frame 1 takes 3ms, frame 2 takes 5ms, frame 3 takes 1ms, frame 4 takes 7ms, then when you add the sleep time (and assume that you actually do sleep for 15ms exact - which is unlikely) you're looking at frame times of 18ms, 20ms, 16ms, 22ms.
That's why you should never use sleep to control framerate.
The standard reference page for this topic is Glenn Fiedler's Fix your Timestep so I'm going to finish up by pointing you at that.
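The core idea from that article, in outline: render as often as you like, but advance the simulation in constant-size steps accumulated from the measured frame times. A minimal language-agnostic sketch (here in Python, with a list of frame times standing in for a real clock such as sf::Clock):

```python
def run(frame_times, dt=0.015):
    """Advance a simulation in fixed dt-sized steps regardless of how
    long each rendered frame actually took.  Returns the number of
    simulation updates performed."""
    updates = 0
    accumulator = 0.0
    for frame_time in frame_times:   # each entry: measured seconds per frame
        accumulator += frame_time
        while accumulator >= dt:     # consume whole steps only
            updates += 1             # world.step(dt) would go here
            accumulator -= dt
        # render() here, optionally interpolating by accumulator / dt
    return updates

# Uneven frames (3ms, 5ms, 1ms, 7ms, ...) still yield evenly spaced steps.
print(run([0.003, 0.005, 0.001, 0.007, 0.020, 0.016], dt=0.015))
```

Because the leftover time stays in the accumulator instead of being discarded, the simulation advances at a steady rate even when individual frames are fast or slow, which is exactly what the sleep-based approach fails to guarantee.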
|
{
"pile_set_name": "StackExchange"
}
|
Q:
Understanding Cloud Instance per hour
I am getting into cloud computing, but I am really confused about what "instance" means.
In programming you get an instance whenever you instantiate an object, and for hardware it's a server, but here I can tell it's different, and that's where I get confused.
So basically what I need to understand is:
- What is an instance per hour?
- How does it work?
- If I have an app (web app), how can I measure an instance, to calculate the hourly pricing?
Can someone explain this to me as simply as possible? Too many tutorials on Google and I'm still not getting it.
Thanks,
;)
Update: I found this post: What do 'instances' mean in terms of cloud computing?
And it's useful because someone answered exactly what I need to understand:
For App Engine it's not a VM; it's a process. – Guido van Rossum Aug 21 '12 at 3:38
So how can I calculate by process?
A:
The question is a bit vague. My answer is based on the amazon-web-services tag and the phrase "What is an instance per hour".
In the Amazon cloud, you can rent 'EC2' servers. EC2 stands for elastic compute cloud: Amazon's lingo for a virtualized server in the cloud.
An 'instance' is one running EC2 server: one running VM. "Instance per hour" refers to the pricing (as described on https://aws.amazon.com/ec2/pricing/). You generally pay-per-use on an hourly base: When the "price per instance per hour" is $1 and you run 1 instance for 4 hours, you pay $4.
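That pricing rule is just rate × instances × billed hours. A toy illustration (a hypothetical helper, not an AWS API; note that classic per-hour billing rounded partial hours up, while many instance types now bill per second):

```python
import math

def ec2_cost(price_per_instance_hour, instances, hours):
    """Classic per-hour billing: each instance's running time is
    rounded up to whole hours, then multiplied by the hourly rate."""
    billed_hours = math.ceil(hours)
    return price_per_instance_hour * instances * billed_hours

print(ec2_cost(1.00, 1, 4))            # the $1/hour for 4 hours example: 4.0
print(round(ec2_cost(0.10, 3, 2.5), 2))  # 3 instances, 2.5h billed as 3h
```

So "instance hours" are simply the unit the meter counts; the instance type determines the rate per unit.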
When you want to run a web app, you pick an instance type based on memory, cpu capacity and storage type (although you can add storage separately) that's a good fit for your web app, launch the instance and deploy your web app to it.
|
{
"pile_set_name": "StackExchange"
}
|