Q:
Flutter - How to retrieve a list of IDs from Firestore and display their items in GridView?
I'm developing a book app and need to display each user's favorites list. I can save all favorites in an array in Firestore, but I don't know how to retrieve them into my grid. I used a StreamBuilder to retrieve all books, but now I'm not sure whether I need to store them for display, and how. I tried retrieving all favorite IDs into a list of strings and then loading them into my GridView, but the list is null.
Here is how my Firestore is structured:
class _FavoriteScreenState extends State<FavoriteScreen> {
  UserModel model = UserModel();
  FirebaseUser firebaseUser;
  FirebaseAuth _auth = FirebaseAuth.instance;
  List<String> favorites;

  Future<List<String>> getFavorites() async {
    firebaseUser = await _auth.currentUser();
    DocumentSnapshot querySnapshot = await Firestore.instance.collection("users")
        .document(firebaseUser.uid).get();
    if (querySnapshot.exists && querySnapshot.data.containsKey("favorites") &&
        querySnapshot.data["favorites"] is List) {
      return List<String>.from(querySnapshot.data["favorites"]);
    }
    return [];
  }

  @override
  Widget build(BuildContext context) {
    Widget _buildGridItem(context, index) {
      return Column(
        children: <Widget>[
          Material(
            elevation: 7.0,
            shadowColor: Colors.blueAccent.shade700,
            child: InkWell(
              onTap: () {
                Navigator.of(context).push(MaterialPageRoute(
                    builder: (context) => DetailScreen(document)
                ));
              },
              child: Hero(
                tag: index['title'],
                child: Image.network(
                  index["cover"],
                  fit: BoxFit.fill,
                  height: 132,
                  width: 100,
                ),
              ),
            ),
          ),
          Container(
            width: 100,
            margin: EdgeInsets.only(top: 10, bottom: 5),
            child: Text(index["title"],
              textAlign: TextAlign.center,
              overflow: TextOverflow.ellipsis,
              maxLines: 2,
              style: TextStyle(
                fontSize: 10,
                fontWeight: FontWeight.w900,
              ),
            ),
          ),
        ],
      );
    }
    return Scaffold(
        body: favorites != null ? Stack(
          children: <Widget>[
            GridView.builder(
              padding: EdgeInsets.fromLTRB(16, 16, 16, 16),
              primary: false,
              gridDelegate: SliverGridDelegateWithFixedCrossAxisCount(
                crossAxisCount: 3,
                childAspectRatio: MediaQuery.of(context).size.width /
                    (MediaQuery.of(context).size.height),
                //crossAxisSpacing: 3,
                //mainAxisSpacing: 3
              ),
              itemBuilder: (context, index) {
                return _buildGridItem(context, favorites[index]);
              },
            )
          ],
        )
        : Center(child: CircularProgressIndicator())
    );
  }

  @override
  void initState() async {
    // TODO: implement initState
    super.initState();
    favorites = await getFavorites();
  }
}
A:
I had to change my Firestore structure.
I created a new collection called "books" with the user IDs as document IDs, and a subcollection containing each user's books and their information.
Now I can retrieve favorite books easily.
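For the original approach in the question, note that `initState` must not be declared `async`. A common pattern is to start the future and call `setState` when it completes, so the grid rebuilds once the data arrives (a sketch reusing the question's `getFavorites`; otherwise `favorites` stays null on the first build and is never refreshed):

```dart
@override
void initState() {
  super.initState();
  // Fire the async load, then rebuild the widget once the favorites arrive.
  getFavorites().then((list) {
    setState(() {
      favorites = list;
    });
  });
}
```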
|
{
"pile_set_name": "StackExchange"
}
|
Q:
What unit of measurement is Titanium Mobiles "font-size"?
Trying to find what unit of measurement Titanium uses for defining font size in mobile applications. I want to match it up with Photoshop for mockup purposes.
A:
On iOS, font sizes are in typographical points (1/72 of an inch), so font size 12 should be the same visual size on both devices. (Of course, it will be larger in the Retina simulator, because it's twice as many pixels.)
Note that other iOS sizes are in Apple "points," which don't correspond to typographic points. An Apple "point" is 1px on a pre-Retina device, and 2px on a Retina device.
On Android, you can specify units. The default is pixels (for example, 12 and '12px' both specify 12 pixels). You can also specify sizes in Android's density-independent pixels, points, millimeters or inches. So:
'12dp' == 12 DIP (roughly equivalent to Apple's "points")
'12pt' == 12 points (typographical points)
'12mm' == 12 millimeters
'12in' is a REALLY big font
On a medium-density device like the G1, 12px == 12dp. On a high-density device (most of the newer Android phones with 800x480, 854x480, or 960x540 screens), 12dp renders twice as big as 12px--just like the Apple "point" system.
Why aren't DIP the default unit on Android? That I can't answer. I guess Androids just like pixels.
A:
It's in pixels, but don't forget your Photoshop mockups need to be double the size for the Retina display.
So your mockup would use font-size 24px, and in Titanium you would specify 12px.
Q:
Close a div when clicked outside the div and its button
I have a div (the target) whose visibility is toggled by a button. When the button is clicked, the div shows up. While it is visible, clicking anywhere other than the button or the target div should hide the div.
I have tried event.stopPropagation(); but found no result.
button
<div class="applicationTrigger appbtn">
<i class="so-icon"></i>
</div>
target div
<div class="app-drop"></div>
When I use the event.stopPropagation(); method, the button's click action stops working entirely.
A:
Here is a demo that works as per your requirement. Clicking the button displays the div's content; clicking outside the div hides it.
$(document).mouseup(function (e) {
    var container = $(".wrapper");
    if (!container.is(e.target) && container.has(e.target).length === 0) {
        container.fadeOut();
    }
});
$(document).on('click', '#show_btn', function (e) {
    $(".wrapper").toggle();
});

.wrapper {
    display: none;
    height: 200px;
    width: 100px;
    border: 1px solid black;
}
<script src="https://cdnjs.cloudflare.com/ajax/libs/jquery/3.3.1/jquery.min.js"></script>
<div class="wrapper">
Lorem Ipsum has been the industry's standard dummy text ever since the 1500s, when an unknown printer took a gall
</div>
<button id="show_btn" type="button">Show</button>
Q:
SQL 2005 - Variant Parameter Question
I am working on a function that will be used by no fewer than 10 stored procedures, and that number will probably grow once it is ironed out.
The problem I am running into is that I do not want to develop a function for each data type, which is why the SQL_VARIANT data type looks pretty convenient here. I know I can do the ISNULL check on the value, but I also want to check whether the value being passed is a valid number. The ISNUMERIC function does not work with SQL_VARIANT, and I'm not too sure about the SQL_VARIANT_PROPERTY function.
Code so far:
CREATE FUNCTION dbo.mpt_Format_Number
(
    @value SQL_VARIANT
    , @money BIT
)
RETURNS VARCHAR
AS
BEGIN
    --Check for NULL value
    IF ISNULL(@value) BEGIN
        -- Value IS NULL, return NULL
        RETURN NULL
    END ELSE BEGIN
        -- Value is NOT NULL
        DECLARE @TMP VARCHAR
    END
END
A:
CREATE FUNCTION dbo.mpt_Format_Number
(
    @value SQL_VARIANT
    , @money BIT
)
RETURNS VARCHAR
AS
BEGIN
    --Check for NULL value
    IF @value is null
        -- Value IS NULL, return NULL
        RETURN NULL
    ELSE
    BEGIN
        -- Value is NOT NULL
        if isnumeric(convert(varchar(max), @value)) = 1 RETURN 'Y' -- is valid number
        --DECLARE @TMP VARCHAR
    END
    return 'N' --is not valid number
END
You can always test the property type with this syntax. It should be easy to incorporate into your function.
declare @t SQL_VARIANT
set @t = '3'
select SQL_VARIANT_PROPERTY(@t, 'basetype')
Result:
varchar
Q:
.htaccess subdomain redirect problem
I have a website let's say www.example.com and I have a subdomain something.example.com
Both main domain and sub domain are pointing to directory "public_html"
I have .htaccess on root which redirects any URL without www to www.
For e.g. if user enters example.com then he will be redirected to www.example.com
If user enters example.com/mypage.html then he will be redirected to www.example.com/mypage.html
Now the problem is that this also affects my subdomain, because if someone enters something.example.com/visit.html they are redirected to www.example.com/visit.html.
I don't want this! If the user enters the subdomain, I don't want them redirected to the www domain. This is what I have in my .htaccess file:
RewriteCond %{HTTP_HOST} !^www.example.com$ [NC]
RewriteRule ^(.*)$ http://www.example.com/$1 [R=301,L]
Can you please tell me what I should do to solve the above problem?
Thanks.
A:
Do you have access to your webserver configuration? This is better done by configuring the webserver.
On apache you would configure one virtual domain like this:
<VirtualHost *:80>
ServerName somedomainiwanttoredirect.com
ServerAlias maybe.somemoredomainstoredirect.com
ServerAlias orsubdomains.toredirect.com
RewriteEngine On
RewriteRule ^(.*)$ http://www.target.com/$1 [R=301,L]
</VirtualHost>
and in your real configuration, the one for www.target.com, you add the subdomains that you do not want to be redirected:
ServerAlias subdomain.target.com
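If only .htaccess is available, an alternative sketch (using the question's example.com) is to make the redirect condition match the bare domain exactly, so subdomains never trigger it:

```apache
RewriteEngine On
# Only redirect the bare domain; something.example.com falls through untouched
RewriteCond %{HTTP_HOST} ^example\.com$ [NC]
RewriteRule ^(.*)$ http://www.example.com/$1 [R=301,L]
```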
Q:
How to get lat and long of center google map iOS
I'm building an iOS app using storyboards. I integrated Google Maps into my app and I get the location address on tapping.
I want to place a pin at the map center; on scrolling the map, the pin should move and give the location address, the same as in the Uber or Ola Cabs apps.
Could someone help me implement this? I found one tutorial (http://www.raywenderlich.com/81103/introduction-google-maps-ios-sdk-swift), but it is in Swift and I have to implement it in Objective-C.
Here is my code:
- (void)mapView:(GMSMapView *)mapView didTapAtCoordinate:(CLLocationCoordinate2D)coordinate {
    // coordinate contains your coordinate :)
    NSLog(@"did tap at coordinate: (%f, %f)", coordinate.latitude, coordinate.longitude);
    NSString *address = [NSString stringWithFormat:@"https://maps.googleapis.com/maps/api/geocode/json?latlng=%f,%f&key=AIzaSjhsdfg67tahdsg8981k8_5aJoZps", coordinate.latitude, coordinate.longitude];
    NSURL *url = [NSURL URLWithString:address];
    NSData *data = [NSData dataWithContentsOfURL:url];
    if ((unsigned long)data.length > 3) {
        // NSString *myString = [[NSString alloc] initWithData:data encoding:NSUTF8StringEncoding];
        NSString *file = @"/Data.json";
        [data writeToFile:file atomically:YES];
        NSArray *ys_avatars = [NSJSONSerialization JSONObjectWithData:data options:0 error:nil];
        NSLog(@"array=%@", ys_avatars);
        NSLog(@"status====%@", [ys_avatars valueForKey:@"status"]);
        if ([[ys_avatars valueForKey:@"status"] isEqual: @"OK"]) {
            NSArray *results = [ys_avatars valueForKey:@"results"];
            NSLog(@"formatted_address====%@", [results valueForKey:@"formatted_address"]);
            NSArray *address = [results valueForKey:@"formatted_address"];
            NSLog(@"add=%@", address[0]);
            _myAddress.text = address[0];
        }
    }
}
A:
I can't comment, so to respond to your question: you can get the lat and long of the centre of the map this way:
- (void)mapView:(GMSMapView *)mapView didChangeCameraPosition:(GMSCameraPosition *)position
{
    double latitude = mapView.camera.target.latitude;
    double longitude = mapView.camera.target.longitude;
    CLLocationCoordinate2D center = CLLocationCoordinate2DMake(latitude, longitude);
    marker.position = center;
}
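Since the question also wants to reverse-geocode the address, note that didChangeCameraPosition: fires continuously while the user scrolls. The delegate's idle callback (idleAtCameraPosition:, part of GMSMapViewDelegate) fires once scrolling stops, which is a more usual place for the slow geocoding request; a sketch:

```objc
// Called once the camera stops moving; a good place for the reverse geocode.
- (void)mapView:(GMSMapView *)mapView idleAtCameraPosition:(GMSCameraPosition *)position
{
    CLLocationCoordinate2D center = position.target;
    NSLog(@"settled at: (%f, %f)", center.latitude, center.longitude);
    // ...run the same geocoding code from the question with `center` here...
}
```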
Q:
Warmup Script for SharePoint 2010 in two server Web farm
Can anybody provide a copy of, or a link to, a reliable PowerShell warm-up script for SharePoint 2010?
Ideally I want this to run shortly after the app pools have recycled overnight.
Our configuration is two front-end web servers behind a load balancer.
Thanks
A:
This script from a Microsoft employee will handle the load-balanced environment:
Warm-up Script for Web Front End Servers (WFE) in Load Balanced SharePoint Farms
These are the steps:
OPEN POWERSHELL ISE OR POWERSHELL AS A FARM ADMIN
COPY AND PASTE THE POWERSHELL BELOW (Between the "#--------------------------")
TEST THE SCRIPT TO ENSURE NO ERRORS OCCUR
SAVE THE FILE AS A .PS1 TYPE IN A DESIGNATED LOCATION e.g. c:\powershell\warmupscript.ps1
SCHEDULE A WINDOWS TASK TO RUN AFTER THE LAST APPLICATION POOL IS RECYCLED (4am should be safe)
PowerShell starts here:
#--------------------------------------------------
Add-PsSnapin Microsoft.SharePoint.PowerShell -erroraction silentlycontinue

function get-webpage([string]$url,[System.Net.NetworkCredential]$cred=$null)
{
    $bypassonlocal = $false
    $proxyuri = "http://" + $env:COMPUTERNAME
    $proxy = New-Object system.Net.WebProxy($proxyuri, $bypassonlocal)
    $wc = new-object net.webclient
    $wc.Proxy = $proxy
    if($cred -eq $null)
    {
        $cred = [System.Net.CredentialCache]::DefaultCredentials;
    }
    $wc.credentials = $cred;
    return $wc.DownloadString($url);
}

$cred = [System.Net.CredentialCache]::DefaultCredentials;
# This can be used if required to force using certain credentials
#$cred = new-object System.Net.NetworkCredential("username","password","machinename")

# Get the Web Apps
$apps = get-spwebapplication # -includecentraladministration (Central admin is not included as it is not running on my WFE Server)
foreach ($app in $apps) {
    #Get the Site Collections
    $sites = get-spsite -webapplication $app.url -Limit All
    ### UNCOMMENT THE 2 LINES BELOW IF YOU ONLY WANT TO USE THIS AT SITE COLLECTION LEVEL - Not required if Sites are warmed up.
    #write-host $app.Url;
    #$html=get-webpage -url $app.Url -cred $cred;
    ###COMMENT OUT BETWEEN THE "=======" IF THERE ARE TOO MANY WEBs i.e. Sites and you don't want to warm them up.
    #==================
    foreach ($site in $sites) {
        foreach ($web in $site.AllWebs) {
            #get the webs i.e. Sites
            write-host $web.Url;
            $html=get-webpage -url $web.Url -cred $cred;
        }
    }
    #=================
}
#--------------------------
Q:
views and contextual filters with entity reference field
How do you use a contextual filter on a views block when you want to use a reference field from the content type displaying the view?
Here's the simple scenario:
artist content type has some basic fields
song content type has a reference field referencing which artist
event content type has a reference field referencing which artist is headlining
view has a block with fields
On each artist content page it's working, and I have a block of songs for that artist in the sidebar. I used the view's "provide default" entity and basic validation (but this does not work for the case below).
On each event content page, I want a block of songs for that artist in the sidebar. Essentially, I want the reference field in the event to match up with the artist's songs.
Since we don't have access to the path, I've been playing around with contextual filters on the reference field (and the reverse of it) to no avail.
OK, thanks to @Jimajamma I seem to have figured it out; it's actually quite simple.
In order to get the argument into the song view, you add a contextual filter and select the song's artist reference field. Then under "When the filter value is NOT available", choose "Provide default value" and "PHP Code". In that PHP code you return the value of the event's artist reference field, like so:
$node=menu_get_object();
return $node->field_event_artistref['und'][0]['target_id'];
Then, under "When the filter value IS available or a default is provided", choose "Specify validation criteria" and basic validation.
In a nutshell, this simply takes the argument of the artist the song references and compares it to the default value you provided, which is the current node's artist reference field. Also, as @Jimajamma said, best practice would be to use a switch statement on $node->type so the "songs by %artist" block can be used on several different content types, such as artist nodes.
A:
In the contextual filter, you can use PHP to supply the argument. There, you could put in something like:
$node=menu_get_object();
return $node->field_headliner[0]['nid'];
where field_headliner is the node reference field of your event content type. The exact syntax above is D6, but I am pretty sure it's the same or at least very similar in D7.
Then, as long as this view block is being shown on an event page, the above will return the nid of the headlining band, and if your view is set up to filter on that node nid, you should be set.
You could even get fancy up in there with something like:
$node = menu_get_object();
switch ($node->type) {
  case 'band':
    return $node->nid;
  case 'event':
    return $node->field_headliner[0]['nid'];
  default:
    return 0; // or however you want to return an error
}
and then the same view block display could be used on both types of page. Then just put this block on band and event pages and you should be all set.
Q:
How to limit handling of event to once per X seconds with jQuery / javascript?
For a rapidly-firing keypress event, I want to limit the handling of the event to a maximum of once per X seconds.
I'm already using jQuery for the event handling, so a jQuery-based solution would be preferred, though vanilla javascript is fine too.
This jsfiddle shows keypress firing rapidly without any limiting on the handling
This jsfiddle implements limiting the handling to once per 0.5 seconds in vanilla JS using setTimeout()
My question is
Does jQuery have an inbuilt way of doing this? I don't see anything in the .on() docs
If not, is there a better pattern for doing this in vanilla JS than I've used in my second jsfiddle example?
A:
Does jQuery have an inbuilt way of doing this?
No.
If not, is there a better pattern for doing this in vanilla JS than I've used in my second jsfiddle example?
Instead of using setTimeout and flags, you can keep track of when the handler was called the last time and only call it after a defined interval has passed. This is referred to as throttling and there are various ways to implement this.
In its simplest form you just have to ignore every call that is made within the time of lastCall + interval, so that only one call in every interval occurs.
E.g. here is a function that returns a new function which can only be called once every X milliseconds:
function throttle(func, interval) {
    var lastCall = 0;
    return function() {
        var now = Date.now();
        if (lastCall + interval < now) {
            lastCall = now;
            return func.apply(this, arguments);
        }
    };
}
which you can use as
$("#inputField").on("keypress", throttle(function(event) {
    $("div#output").append("key pressed <br/>");
}, 500));
DEMO
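The throttling behavior is easy to check outside the browser as well; a minimal sketch (re-declaring `throttle` from above so it runs standalone):

```javascript
function throttle(func, interval) {
    var lastCall = 0;
    return function () {
        var now = Date.now();
        if (lastCall + interval < now) {
            lastCall = now;
            return func.apply(this, arguments);
        }
    };
}

var calls = 0;
var throttled = throttle(function () { calls += 1; }, 500);

// Three rapid synchronous calls land within one interval: only the first fires.
throttled();
throttled();
throttled();
console.log(calls); // 1
```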
As Esailija mentions in his comment, maybe it is not throttling that you need but debouncing. This is a similar but slightly different concept. While throttling means that something should occur only once every x milliseconds, debouncing means that something should occur only if it didn't occur in the last x milliseconds.
A typical example is the scroll event. A scroll event handler will be called very often because the event is basically triggered continuously. But maybe you only want to execute the handler when the user stopped scrolling and not while he is scrolling.
A simple way to achieve this is to use a timeout and cancel it if the function is called again (and the timeout didn't run yet):
function debounce(func, interval) {
    var lastCall = -1;
    return function() {
        clearTimeout(lastCall);
        var args = arguments;
        var self = this;
        lastCall = setTimeout(function() {
            func.apply(self, args);
        }, interval);
    };
}
A drawback with this implementation is that you cannot return a value from the event handler (such as return false;). Maybe there are implementations which can preserve this feature (if required).
DEMO
Q:
XmlTextReader.Read() does not read all lines
I have an XML file from which I want to read all lines with XmlTextReader. Here is an example of the XML file:
<?xml version="1.0" encoding="utf-8"?>
<CodeSigns>
<CodeSign Format="1.0.0">
<Header>
<Title>AAA</Title>
<Shortcut>AAA</Shortcut>
<Description>AAA</Description>
<Author>Viva</Author>
<LoadTypes>
<LoadType>Expansion</LoadType>
</LoadTypes>
</Header>
<Sign>
<Declarations />
<Code Language="SQL Server" Kind="SQL Server"><![CDATA[SELECT TOP 100 [Id]
,[TotalSeconds]
,CAST('<![CDATA[' + [Parameters] + ']]]]><![CDATA[>' AS XML) as [Parameters]
,CAST('<![CDATA[' + [Query] + ']]]]><![CDATA[>' AS XML) as [Query]
,CAST('<![CDATA[' + [StackTrace] + ']]]]><![CDATA[>' AS XML) as [StackTrace]
,[CreateDate]
,[MachineName]
,[UserName]
,CAST('<![CDATA[' + [Exception] + ']]]]><![CDATA[>' AS XML) as [Exception]
,CAST('<![CDATA[' + [InnerException] + ']]]]><![CDATA[>' AS XML) as [InnerException]
,[CommandType]
FROM [QMaster].[dbo].[LogSlowQueries]
where CreateDate > CONVERT(DATE,GETDATE())
order by totalseconds desc]]>
</Code>
</Sign>
</CodeSign>
</CodeSigns>
Some elements don't matter to us, and the problem happens when reading the Code element: only three lines are read instead of all of them.
The code C# used for reading this file is given below:
public void Load(string path) {
    filename = Path.GetFileName(path);
    try {
        using (XmlTextReader reader = new XmlTextReader(path)) {
            while (reader.Read()) {
                if (reader.NodeType == XmlNodeType.Element) {
                    switch (reader.Name) {
                        case "Title":
                            title = ReadProperty(reader);
                            break;
                        case "ToolTip":
                            tooltip = ReadProperty(reader);
                            break;
                        case "Description":
                            description = ReadProperty(reader);
                            break;
                        case "Author":
                            author = ReadProperty(reader);
                            break;
                        case "Code":
                            language = reader.GetAttribute("Language");
                            code = ReadProperty(reader);
                            break;
                    }
                }
            }
        }
    }
    catch (XmlException) {
    }
}

private string ReadProperty(XmlReader reader) {
    if (reader.IsEmptyElement)
        return string.Empty;
    reader.Read();
    return reader.Value;
}
We need to obtain all lines from the XmlNodeType.Element with the name Code; however, reader.Value returns ONLY the first three lines.
Please help
A:
I generally find using Linq2Xml easier. With the help of XPath:
var xDoc = XDocument.Load(FILENAME);
var title = (string)xDoc.XPathSelectElement("/CodeSigns/CodeSign/Header/Title");
var author = (string)xDoc.XPathSelectElement("/CodeSigns/CodeSign/Header/Author");
var code = (string)xDoc.XPathSelectElement("/CodeSigns/CodeSign/Sign/Code");
OR
var xDoc = XDocument.Load(FILENAME);
var topElem = xDoc.XPathSelectElement("/CodeSigns/CodeSign");
var title = (string)topElem.XPathSelectElement("Header/Title");
var author = (string)topElem.XPathSelectElement("Header/Author");
var code = (string)topElem.XPathSelectElement("Sign/Code");
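A smaller change to the original reader-based code should also work. The CDATA block in the file is split into several adjacent CDATA sections (because of the escaped ]]> sequences), and reader.Value after a single Read() returns only the first one. XmlReader.ReadElementContentAsString concatenates all the text and CDATA children of an element, so a sketch of a fixed ReadProperty (assuming the reader is positioned on the element, as in the question):

```csharp
private string ReadProperty(XmlReader reader) {
    if (reader.IsEmptyElement)
        return string.Empty;
    // Concatenates every text/CDATA child of the element,
    // instead of returning only the first node as reader.Value does.
    return reader.ReadElementContentAsString();
}
```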
Q:
How to fix extra blank Excel files after converting from PDF?
The problem is that after converting from PDF to Excel, when browsing to save the output file, it creates an additional blank Excel file, and I have no idea why.
If I convert 2 PDFs, it outputs 2 converted Excel files and 2 additional blank Excel documents.
Below is the code:
Option Explicit
Sub PDF_To_Excel()
    Dim setting_sh As Worksheet
    Set setting_sh = ThisWorkbook.Sheets("Setting")
    Dim pdf_path As String
    Dim excel_path As String
    pdf_path = Application.GetOpenFilename(FileFilter:="PDF Files (*.PDF), *.PDF", Title:="Select File To Be Opened")
    excel_path = setting_sh.Range("E12").Value
    Dim objFile As File
    Dim sPath As String
    Dim fso As New FileSystemObject
    Dim fo As Folder
    Dim f As File
    Set objFile = fso.GetFile(pdf_path)
    sPath = Left(objFile.Path, Len(objFile.Path) - Len(objFile.Name))
    Set fo = fso.GetFolder(sPath)
    Dim wa As Object
    Dim doc As Object
    Dim wr As Object
    Set wa = CreateObject("word.application")
    'Dim wa As New Word.Application
    wa.Visible = False
    'Dim doc As Word.Document
    Dim nwb As Workbook
    Dim nsh As Worksheet
    'Dim wr As Word.Range
    For Each f In fo.Files
        Set doc = wa.documents.Open(f.Path, False, Format:="PDF Files")
        Set wr = doc.Paragraphs(1).Range
        wr.WholeStory
        Set nwb = Workbooks.Add
        Set nsh = nwb.Sheets(1)
        wr.Copy
        nsh.Activate 'Pastespecial like this needs to use an active sheet (according to https://docs.microsoft.com/en-us/office/vba/api/excel.worksheet.pastespecial)
        ActiveSheet.PasteSpecial Format:=1, Link:=False, DisplayAsIcon:=False
        Dim oILS As Shape
        Set oILS = nsh.Shapes(nsh.Shapes.Count)
        With oILS
            .PictureFormat.CropLeft = 5
            .PictureFormat.CropTop = 150
            .PictureFormat.CropRight = 320
            .PictureFormat.CropBottom = 250
        End With
        With oILS
            .LockAspectRatio = True
            ' .Height = 260
            ' .Width = 450
        End With
        nsh.Shapes(nsh.Shapes.Count).Top = Sheets(1).Rows(1).Top
        Dim IntialName As String
        Dim sFileSaveName As Variant
        'IntialName = "Name.xlsx"
        sFileSaveName = Application.GetSaveAsFilename("Name.xlsx", "Excel Files (*.xlsx), *.xlsx")
        If sFileSaveName <> False Then
            nwb.SaveAs sFileSaveName
            doc.Close True
            nwb.Close True
        End If
    Next
    wa.Quit
End Sub
Any help would be greatly appreciated.
Thanks!
A:
Your problem comes from the fact that, when you open your PDF file in Word, a temporary file is created. It has the same name but with a "~$" prefix. Your code should work as expected if you modify the loop as follows:
For Each f In fo.Files
    If Not Split(f.Name, ".")(1) = "pdf" Or _
       Left(f.Name, 2) = "~$" Then
    Else
        'your existing code follows here....
        '...
    End If
Next
If you use dots (.) in your PDF file names, a different approach is needed to extract the extension. If you drop only PDF files in that folder, you can simplify the condition to:
If Left(f.Name, 2) = "~$" Then
Q:
basic incremental on button click
I'm trying to write a simple button that, when you click it, adds 1 and updates the label. But it only works on the first click; after that it doesn't update the label anymore.
protected void Page_Load(object sender, EventArgs e)
{ }

protected void ClickerButton_Click(object sender, ImageClickEventArgs e)
{
    Lbl_PairsCooked.Text = CookSneaker(NumOfPairs).ToString();
}

public int CookSneaker(int num)
{
    num += 1;
    return num;
}
The image button I made only works on the first click...
A:
See the snippet below for how to get it working. Most importantly, you need to read up on IsPostBack at the link
https://docs.microsoft.com/en-us/dotnet/api/system.web.ui.page.ispostback?view=netframework-4.8 to understand the difference.
public partial class _Default : Page
{
    int NumOfPairs;

    protected void Page_Load(object sender, EventArgs e)
    {
        if (IsPostBack)
        {
            //this line reads the text from the label, converts it and assigns it to NumOfPairs.
            //The UI is updated on the button click, which takes this new NumOfPairs.
            int.TryParse(Label1.Text, out NumOfPairs);
        }
    }

    protected void Button1_Click(object sender, EventArgs e)
    {
        Label1.Text = CookSneaker(NumOfPairs).ToString();
    }

    public int CookSneaker(int num)
    {
        num += 1;
        return num;
    }
}
Q:
Why would you use a managed service account rather than a virtual account in SQL Server 2012?
In SQL Server 2012, service accounts are created as virtual accounts (VAs), as described here, as opposed to managed service accounts (MSAs).
The important differences I can see for these, based on the descriptions:
MSAs are domain accounts, VAs are local accounts
MSAs use automagic password management handled by AD, VAs have no passwords
in a Kerberos context, MSAs register SPNs automatically, VAs do not
Are there any other differences? If Kerberos is not in use, why would a DBA ever prefer an MSA?
UPDATE: Another user has noted a possible contradiction in the MS docs concerning VAs:
The virtual account is auto-managed, and the virtual account can access the network
in a domain environment.
versus
Virtual accounts cannot be authenticated to a remote location. All virtual accounts
use the permission of machine account. Provision the machine account in the format
<domain_name>\<computer_name>$.
What is the "machine account"? How/when/why does it get "provisioned"? What is the difference between "accessing the network in a domain environment" and "authenticating to a remote location [in a domain environment]"?
A:
Here's the way I see it.
When you use a VA, you impersonate the machine account.
The problem is that it is easy to create a VA or use an existing one (e.g. NT Authority\NETWORKSERVICE). If you grant the machine account access to an instance, an application running as a VA will be able to connect to that instance and perform actions.
With a managed account, you will have to provide the credentials for that account to whatever application wants to use them, allowing you more granularity with permissions.
Q:
Are distance functions $ d(a_1,x), ..., d(a_n,x) $ for an arbitrary metric space linearly independent?
If we have a metric space $ (X,d) $, a finite set of distinct points $ a_1, ..., a_n $, do the distance functions $ d(a_1,x), ..., d(a_n,x) $ have to be linearly independent?
That is, if $ c_1d(a_1,x) + ... + c_nd(a_n,x) = 0$ for all $ x $, then $ c_1 = ... = c_n = 0 $?
This can easily be seen to be true for $ n = 2 $ and also for $ n = 3 $, but what's the answer in general?
If we substitute each of the $ a_i $ for $ x $, we get an interesting system of $ n $ equations where we can treat the $ c_i $ as unknowns; the matrix of coefficients $ d(a_i, a_j) $ is then symmetric with 0s on the diagonal (apparently this is called a "hollow matrix"). All this looks quite interesting and promising, but I don't really know where to take it.
A:
A counterexample is given by the four-point metric space with the distance matrix
$$\begin{pmatrix} 0 & 1 & 2 & 1 \\ 1 & 0 & 1 & 2 \\ 2 & 1 & 0 & 1 \\ 1 & 2 & 1 & 0 \end{pmatrix}$$
Note that the sum of the $1$st and $3$rd rows is the same as the sum of the $2$nd and $4$th.
Geometrically, this metric space is realized by the vertices of the $4$-cycle $C_4$ with the path metric, i.e., the smallest number of edges to travel.
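Since $X$ here consists of only the four points themselves, linear dependence for all $x$ only needs to be checked at $x = a_1, \dots, a_4$. A quick numerical check of the counterexample with the nonzero coefficients $(1, -1, 1, -1)$:

```python
# Distance matrix of the 4-cycle C4 with the path metric (the counterexample above).
D = [[0, 1, 2, 1],
     [1, 0, 1, 2],
     [2, 1, 0, 1],
     [1, 2, 1, 0]]

c = [1, -1, 1, -1]  # nonzero coefficients witnessing linear dependence

# Evaluate c_1*d(a_1, x) + ... + c_4*d(a_4, x) at each point x = a_j.
combo = [sum(c[i] * D[i][j] for i in range(4)) for j in range(4)]
print(combo)  # [0, 0, 0, 0]
```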
Q:
Double CSS animation
I'm trying to give the same element two animations. The first animation should go from 10 to 100 (for example) and repeat twice. When that animation finishes, the other one should start, changing the color indefinitely at 100% width.
What I have is this, but it doesn't come out the way I want:
#AnimacionTitulo {
width: 200px;
background-color: #CCBDFB;
color: FloralWhite;
font-weight: bold;
animation: AnimacionTitulo 1.5s 2.5, cambioRGB 10s infinite;
animation-direction: alternate;
animation-fill-mode: forwards;
animation-timing-function: ease-in-out;
}
@keyframes AnimacionTitulo {
50% { width: 100%; }
}
@keyframes cambioRGB {
60% {background-color: #31e4a6;}
70% {background-color: #6de431;}
80% {background-color: #e4a331;}
90% {background-color: #772bec;}
100% {background-color: DarkSlateBlue;}
}
<html>
<head></head>
<body>
<div id="AnimacionTitulo">
<h1>Buenas tardes</h1>
</div>
</body>
</html>
I feel I'm very, very close, but I can't figure out what I'm missing. The animation itself runs fine, but I want the color change to start when the first animation finishes.
I also don't understand why the color change runs at a different speed...
A:
The first thing I would do is separate one animation from the other, so each animation can be given its own properties independently.
The catch is that a single element cannot receive two animations this way, so I gave the expand/collapse animation to the containing <div> and the color change to the <h1>.
Then, using animation-delay, calculate how long the first animation takes, so the second one is launched afterwards.
#AnimacionTitulo {
width: 200px;
}
h1 {
background-color: #CCBDFB;
color: FloralWhite;
font-weight: bold;
}
.desplegar {
animation-name: AnimacionTitulo;
animation-duration: 2s;
animation-iteration-count: 2.5;
animation-direction: alternate;
animation-fill-mode: forwards;
animation-timing-function: ease-in-out;
}
.cambia_color {
animation-name: cambioRGB;
animation-duration: 10s;
animation-delay: 5s;
animation-iteration-count: infinite;
}
@keyframes AnimacionTitulo {
50% { width: 100%; }
}
@keyframes cambioRGB {
60% {background-color: #31e4a6;}
70% {background-color: #6de431;}
80% {background-color: #e4a331;}
90% {background-color: #772bec;}
100% {background-color: DarkSlateBlue;}
}
<html>
<head></head>
<body>
<div id="AnimacionTitulo" class="desplegar">
<h1 class="cambia_color">Buenas tardes</h1>
</div>
</body>
</html>
With the CSS shorthand:
#AnimacionTitulo {
width: 200px;
background-color: #CCBDFB;
color: FloralWhite;
font-weight: bold;
animation: AnimacionTitulo 1.5s 2.5, cambioRGB 10s infinite 3.75s;
animation-direction: alternate;
animation-fill-mode: forwards;
animation-timing-function: ease-in-out;
}
@keyframes AnimacionTitulo {
50% { width: 100%; }
}
@keyframes cambioRGB {
60% {background-color: #31e4a6;}
70% {background-color: #6de431;}
80% {background-color: #e4a331;}
90% {background-color: #772bec;}
100% {background-color: DarkSlateBlue;}
}
<html>
<head></head>
<body>
<div id="AnimacionTitulo">
<h1>Buenas tardes</h1>
</div>
</body>
</html>
With the shorthand animation notation, the delay can be passed as a parameter (here 1.5s × 2.5 = 3.75s).
|
{
"pile_set_name": "StackExchange"
}
|
Q:
Two font faces in grid.tables
I have created a grid.table object to display a data frame in Power BI; below is my code:
library(reshape)
library(gridExtra)
library(grid)
mydf <- data.frame(id = c(1:5), value = c("A","B","C","D","E"))
mytheme <- ttheme_default(base_size = 10,
core=list(fg_params=list(hjust=0, x=0.01),
bg_params=list(fill=c("white", "lightgrey"))))
grid.table(mydf,cols = NULL, theme = mytheme, rows = NULL)
and this is my output:
I would like to style the output so that only the first column is in bold. Does anyone know how to achieve this?
Thanks
A:
grid.table() is just a wrapper for grid.draw(tableGrob(...))
You can get your desired results with some Grob surgery:
library(grid)
library(gridExtra)
mydf <- data.frame(id = c(1:5), value = c("A","B","C","D","E"))
mytheme <- ttheme_default(base_size = 10,
core = list(fg_params=list(hjust=0, x=0.01),
bg_params=list(fill=c("white", "lightgrey"))))
Make the tableGrob:
tg <- tableGrob(mydf, cols = NULL, theme = mytheme, rows = NULL)
Edit the tableGrob (column 1 is first 5 slots):
for (i in 1:5) {
tg$grobs[[i]] <- editGrob(tg$grobs[[i]], gp=gpar(fontface="bold"))
}
I like to start a new page for examples, but you can remove it since grid.table() doesn't use one either:
grid.newpage()
grid.draw(tg)
|
{
"pile_set_name": "StackExchange"
}
|
Q:
Handling calendar in Selenium?
I have written code to handle a calendar widget for selecting the year, month and day. My requirement: if the given year is less than the current year, the picker should navigate back to previous years, and if it is greater than the current year, it should navigate forward to future years. I have handled future dates; can anyone suggest how to navigate back when the year I pass is less than the current year?
import java.util.Date;
import java.util.List;
import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.WebElement;
import org.openqa.selenium.chrome.ChromeDriver;
public class DateCalender2 {
public static WebDriver driver = null;
public static void main(String[] args) throws InterruptedException {
System.setProperty("webdriver.chrome.driver","D:\\work\\work_files\\eclipse_selenium_files\\chromedriver.exe");
WebDriver driver = new ChromeDriver();
driver.get("http://jqueryui.com/datepicker/");
driver.manage().window().maximize();
driver.switchTo().frame(0);
driver.findElement(By.xpath("//input[@id='datepicker']")).click();
// String currentYear = driver.findElement(By.xpath("//div[@class='ui-datepicker-title']/span[@class='ui-datepicker-year']")).getText();
Thread.sleep(2000);
// System.out.println(currentYear);
chooseDate("2019", "May", "13", driver);
}
public static void chooseDate(String year,String MonthName,String Day, WebDriver driver) throws InterruptedException {
String currentMonth = driver.findElement(By.xpath("//div[@class='ui-datepicker-title']/span[@class='ui-datepicker-month']")).getText();
String currentYear = driver.findElement(By.xpath("//div[@class='ui-datepicker-title']/span[@class='ui-datepicker-year']")).getText();
System.out.println(currentYear);
//Selecting year
// int YearInt = Integer.parseInt(currentYear);
// System.out.println(YearInt);
if(!currentYear.equalsIgnoreCase(year)) {
do {
driver.findElement(By.xpath("//div[@id='ui-datepicker-div']/div/a[2]/span")).click();
}
while(!driver.findElement(By.xpath("//div[@class='ui-datepicker-title']/span[@class='ui-datepicker-year']")).getText().equals(year));
}
//Selecting month
if(!currentMonth.equalsIgnoreCase(MonthName)) {
do {
driver.findElement(By.xpath("//div[@id='ui-datepicker-div']/div/a[2]/span")).click();
Thread.sleep(2000);
}
while(!driver.findElement(By.xpath("//div[@class='ui-datepicker-title']/span[@class='ui-datepicker-month']")).getText().equals(MonthName));
}
//Selecting Date
WebElement dateWidget = driver.findElement(By.id("ui-datepicker-div"));
List<WebElement> rows = dateWidget.findElements(By.tagName("tr"));
List<WebElement> columns = dateWidget.findElements(By.tagName("td"));
for (WebElement cell: columns){
//Select 13th Date
if (cell.getText().equals(Day)){
cell.findElement(By.linkText(Day)).click();
break;
}
}
}
}
A:
You can try something like this, using the parseInt method of the Integer class:
int yearGiven = 1997;
String year = driver.findElement(By.cssSelector(".ui-datepicker-year")).getText();
int intYear = Integer.parseInt(year); // the year currently shown in the widget
if(intYear>yearGiven){
// Click left arrow
}
else if (intYear<yearGiven){
// Click right arrow
}
/* You can also convert Integer to String */
Integer x=1955;
String strYear = String.valueOf(x);
// compare code goes here
Note: the example you have shown does not have any control that switches the year; we can only switch months.
|
{
"pile_set_name": "StackExchange"
}
|
Q:
How can I remove Vista folder tabs?
How can I remove the sorting tabs COMPLETELY in folders in Vista Home Premium SP2 OS?
You can see the details in the below image:
A:
You can remove all but NAME: just right-click on the column bar and clear the checkmarks next to the columns you don't want displayed.
|
{
"pile_set_name": "StackExchange"
}
|
Q:
(+) syntax for outer joins in mysql
Possible Duplicates:
Oracle “(+)” Operator
Oracle (Old?) Joins - A tool/script for conversion?
I have been somewhat spoiled by using Oracle for years. Now I am using mysql and cannot find a non-ansi version/shorthand version of outer joins in MySQL.
In Oracle I could do this:
select a.country acountry,
a.stateProvince aStateProvince,
b.countryName bcountry,
b.name bstateProvince
from User a,
stateprovince b
where a.country = b.countryName (+)
and a.stateProvince = b.name (+)
to get an outer join. Can mysql do something similar?
A:
Simpler than this:
select a.country acountry,
a.stateProvince aStateProvince,
b.countryName bcountry,
b.name bstateProvince
from User a
left join
stateprovince b
on a.country = b.countryName
and a.stateProvince = b.name
No.
|
{
"pile_set_name": "StackExchange"
}
|
Q:
Spring Transactions in different DAOs does not work anyway?
Here is my implementation in summary
1) All DAOs are implemented using HibernateDaoSupport; the @Transactional annotation is only used in the service layer
2) Use MySQL / HibernateTransactionManager
3) Test using a main(String[] args) method (do transactions work when invoked this way?)
Transactions do not get rolled back, and invalid entries can be seen in the database.
What am I doing wrong here?
Detailed information is given below.
1) I have configured transactions using @Transactional annotation in service layer as:
@Transactional(readOnly=false, rollbackFor={DuplicateEmailException.class,DuplicateLoginIdException.class,IdentityException.class},propagation=Propagation.REQUIRES_NEW)
public void createUserProfile(UserProfile profile)
throws DuplicateEmailException, DuplicateLoginIdException,
IdentityException {
//Accessing DAO (implemented using HibernateDAOSupport)
identityDAO.createPrincipal(profile.getSecurityPrincipal());
try {
//Accessing DAO
userProfileDAO.createUserProfile(profile);
} catch (RuntimeException e) {
throw new IdentityException("UseProfile create Error", e);
}
}
2) My Transation Manager configuration / datasource is as follows :
<bean id="dataSource"
class="org.apache.commons.dbcp.BasicDataSource"
destroy-method="close">
<property name="driverClassName" value="com.mysql.jdbc.Driver" />
<property name="url"
value="jdbc:mysql://localhost:3306/ibmdusermgmt" />
<property name="username" value="root" />
<property name="password" value="root" />
<property name="defaultAutoCommit" value="false"/>
</bean>
<bean id="sessionFactory"
class="org.springframework.orm.hibernate3.LocalSessionFactoryBean">
<property name="dataSource" ref="dataSource" />
<property name="mappingLocations"
value="classpath*:hibernate/*.hbm.xml" />
<property name="hibernateProperties">
<props>
<prop key="hibernate.dialect">
org.hibernate.dialect.MySQLDialect
</prop>
<prop key="hibernate.query.substitutions">
true=1 false=0
</prop>
<prop key="hibernate.show_sql">true</prop>
<prop key="hibernate.use_outer_join">false</prop>
</props>
</property>
</bean>
<bean id="transactionManager"
class="org.springframework.orm.hibernate3.HibernateTransactionManager">
<property name="sessionFactory" ref="sessionFactory" />
</bean>
<tx:annotation-driven transaction-manager="transactionManager"/>
3) Test using main() method as follows :
public static void main(String[] args) {
ApplicationContext cnt=new ClassPathXmlApplicationContext("testAppContext.xml");
IUserProfileService upServ=(IUserProfileService)cnt.getBean("ibmdUserProfileService");
UserProfile up=UserManagementFactory.createProfile("testlogin");//
up.setAddress("address");
up.setCompany("company");
up.setTelephone("94963842");
up.getSecurityPrincipal().setEmail("[email protected]");
up.getSecurityPrincipal().setName("Full Name");
up.getSecurityPrincipal().setPassword("password");
up.getSecurityPrincipal().setSecretQuestion(new SecretQuestion("Question", "Answer"));
try {
upServ.createUserProfile(up);
} catch(Exception e){
e.printStackTrace();
    }
}
A:
As far as I know, MySQL's default storage engine MyISAM does not support transactions.
MySQL Server (version 3.23-max and all versions 4.0 and above) supports transactions with the InnoDB and BDB transactional storage engines. InnoDB provides full ACID compliance. ... The other nontransactional storage engines in MySQL Server (such as MyISAM) follow a different paradigm for data integrity called “atomic operations.” In transactional terms, MyISAM tables effectively always operate in autocommit = 1 mode
In order to get transactions working, you'll have to switch the storage engine for those tables to InnoDB. You may also want to use Hibernate's MySQLInnoDBDialect if you generate your tables (hbm2ddl) from your mappings:
<prop key="hibernate.dialect">org.hibernate.dialect.MySQLInnoDBDialect</prop>
As mentioned by @gid, it's not a requirement for transactions though.
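To check whether this is the problem, you can inspect and convert the engine per table. A sketch in SQL — the table name user_profile is an assumption here; use the actual table names from your Hibernate mappings:

```sql
-- Inspect the storage engine of each table in the schema from the datasource URL;
-- the Engine column will show MyISAM or InnoDB
SHOW TABLE STATUS FROM ibmdusermgmt;

-- Convert a table (hypothetical name) to InnoDB so transactions and rollback work
ALTER TABLE user_profile ENGINE = InnoDB;
```

Existing data is preserved by the ALTER, but it rewrites the table, so run it during a maintenance window on large tables.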
|
{
"pile_set_name": "StackExchange"
}
|
Q:
Sub is affecting the wrong Excel workbook
I wrote this VBA code to generate a report from data in an Access table and dump it into Excel with user friendly formatting.
The code works great the first time. But if I run the code again while the first generated Excel sheet is open, one of my subroutines affects the first workbook instead of the newly generated one.
Why? How can I fix this?
I think the issue is where I pass my worksheet and recordset to the subroutine called GetHeaders that prints the columns, but I'm not sure.
Sub testROWReport()
DoCmd.Hourglass True
'local declarations
Dim strSQL As String
Dim rs1 As Recordset
'excel assests
Dim xlapp As excel.Application
Dim wb1 As Workbook
Dim ws1 As Worksheet
Dim tempWS As Worksheet
'report workbook dimensions
Dim intColumnCounter As Integer
Dim lngRowCounter As Long
'initialize SQL container
strSQL = ""
'BEGIN: construct SQL statement {
' -- this is a bunch of code that builds the SQL statement
'END: SQL construction}
'Debug.Print (strSQL) '***DEBUG***
Set rs1 = CurrentDb.OpenRecordset(strSQL)
'BEGIN: excel export {
Set xlapp = CreateObject("Excel.Application")
xlapp.Visible = False
xlapp.ScreenUpdating = False
xlapp.DisplayAlerts = False
'xlapp.Visible = True '***DEBUG***
'xlapp.ScreenUpdating = True '***DEBUG***
'xlapp.DisplayAlerts = True '***DEBUG***
Set wb1 = xlapp.Workbooks.Add
wb1.Activate
Set ws1 = wb1.Sheets(1)
xlapp.Calculation = xlCalculationManual
'xlapp.Calculation = xlCalculationAutomatic '***DEBUG***
'BEGIN: Construct Report
ws1.Cells.Borders.Color = vbWhite
Call GetHeaders(ws1, rs1) 'Pastes and formats headers
ws1.Range("A2").CopyFromRecordset rs1 'Inserts query data
Call FreezePaneFormatting(xlapp, ws1, 1) 'autofit formatting, freezing 1 row,0 columns
ws1.Name = "ROW Extract"
'Special Formating
'Add borders
'Header background to LaSenza Pink
'Fix Comment column width
'Wrap Comment text
'grey out blank columns
'END: Report Construction
'release assets
xlapp.ScreenUpdating = True
xlapp.DisplayAlerts = True
xlapp.Calculation = xlCalculationAutomatic
xlapp.Visible = True
Set wb1 = Nothing
Set ws1 = Nothing
Set xlapp = Nothing
DoCmd.Hourglass False
'END: excel export}
End Sub
Sub GetHeaders(ws As Worksheet, rs As Recordset, Optional startCell As Range)
ws.Activate 'this is to ensure selection can occur w/o error
If startCell Is Nothing Then
Set startCell = ws.Range("A1")
End If
'Paste column headers into columns starting at the startCell
For i = 0 To rs.Fields.Count - 1
startCell.Offset(0, i).Select
Selection.Value = rs.Fields(i).Name
Next
'Format Bold Text
ws.Range(startCell, startCell.Offset(0, rs.Fields.Count)).Font.Bold = True
End Sub
Sub FreezePaneFormatting(xlapp As excel.Application, ws As Worksheet, Optional lngRowFreeze As Long = 0, Optional lngColumnFreeze As Long = 0)
Cells.WrapText = False
Columns.AutoFit
ws.Activate
With xlapp.ActiveWindow
.SplitColumn = lngColumnFreeze
.SplitRow = lngRowFreeze
End With
xlapp.ActiveWindow.FreezePanes = True
End Sub
A:
When Cells and Columns are used alone, they refer to ActiveSheet.Cells and ActiveSheet.Columns.
Try to prefix them with the targeted sheet:
Sub FreezePaneFormatting(xlapp As Excel.Application, ws As Worksheet, Optional lngRowFreeze As Long = 0, Optional lngColumnFreeze As Long = 0)
ws.Cells.WrapText = False
ws.Columns.AutoFit
...
End Sub
|
{
"pile_set_name": "StackExchange"
}
|
Q:
Probability of two random n-digit numbers dividing each other
Let $n$ be a positive integer. Suppose $a$ and $b$ are randomly (and independently) chosen two $n$-digit positive integers which consist of digits 1, 2, 3, ..., 9. (So in particular neither $a$ nor $b$ contains digit 0; I am adding this condition so that division by $b$ will be possible, and that we don't get numbers of the form $0002$ and so on). Here "randomly" means each digit of $a$ and $b$ is equally likely to be one of the 9 digits from $\{1,2,3,..., 9\}$.
My question concerns the divisibility of these integers:
1) What is the probability that $b$ divides $a$ ?
The answer, of course, will depend on $n$. Denote this probability by $p(n)$. I would be happy with rough estimates for $p(n)$ as well :)
2) Is it true that $p(n)\to 0$ as $n\to\infty$?
I think answer to question 2) is yes (just by intuition).
A:
As $a,b$ are $n$-digit numbers, we have $10^{n-1}\le a,b<10^n$. If $b|a$, this implies $\frac ab\in\{1,2,\ldots,9\}$. So at most $9$ of the $9^n$ possible $b$ are divisors of $a$. $p(n)\le \frac 9{9^n}$.
While this estimate is far from sharp, it shows $p(n)\to0$ as $n\to\infty$.
Can we calculate $p(n)$ more precisely? Your forbidding of zeroes makes this somewhat harder, as divisibility of $a$ by small numbers $\in\{1,2,\ldots,9\}$ becomes less easily predictable.
We need to calculate, for $d\in\{1,\ldots,9\}$, the probability that $a$ is both a multiple of $d$ and is $\ge d\cdot 111\ldots1$ (where $111\ldots1$ is the smallest $n$-digit number without zeroes) and $\frac ad$ has no zeroes.
The probability that $a\ge d\cdot 111\ldots 1$ is approximately(!) $\frac{9-d}8$.
The probability that $a$ is divisible by $d$ is approximately(!) $\frac1d$ and is almost(!) independant of whether or not $a$ is $\ge d\cdot 111\ldots1$.
The probability that, if $a$ is indeed divisible by $d$, the number $\frac ad$ has no zeroes is at most $1$; in fact for $d>1$ it looks like it decreases $\to0$ as $n\to\infty$. Therefore we estimate the expected number of zero-free $n$-digit divisors of $a$ as approximately (and likely less than)
$$\sum_{d=1}^9 \frac{9-d}{8}\cdot\frac1d=\frac{4609}{2240}\approx 2.06$$
and hence get $$\tag1p(n)\approx \frac{2.06}{9^n}.$$
For small $n$, this estimate is quite a bit off. For example, it gives $p(1)\approx0.2286$ instead of the correct value $p(1)=\frac{23}{81}=0.28395\ldots$, next $p(2)\approx 0.0254$ instead of $p(2)=\frac{163}{6561}=0.02484\ldots$, and next $p(3)\approx 0.00282$ instead of $p(3)=\frac{483}{177147}=0.00261\ldots$. However, we observe already here (though only empirically!) that the estimate $(1)$ is an upper bound for $n\ge 2$.
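The exact small-$n$ values above are easy to verify by brute force. A minimal Python sketch (not part of the original argument) under the question's uniform zero-free digit model:

```python
from fractions import Fraction
from itertools import product

def p(n):
    """Exact p(n): probability that b divides a when a and b are
    independent uniform n-digit numbers with digits drawn from 1..9."""
    nums = [int("".join(t)) for t in product("123456789", repeat=n)]
    hits = sum(1 for a in nums for b in nums if a % b == 0)
    return Fraction(hits, len(nums) ** 2)

print(p(1))  # 23/81
print(p(2))  # 163/6561
```

The same loop handles $n=3$ ($729^2$ pairs) quickly, so the comparison with estimate $(1)$ can be checked a little further.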
|
{
"pile_set_name": "StackExchange"
}
|
Q:
What is unextended OpenGL?
What does "unextended OpenGL" mean? Does it refer to the non-programmable/fixed-pipeline version without any extensions? Why is it "unextended"?
A:
It just refers to OpenGL without extensions or, depending on context, without any extensions relevant to the discussion.
For example, in the GL_KHR_blend_equation_advanced specification you were reading, "the standard blend modes provided by unextended OpenGL" just means the modes OpenGL offers by default. It uses "unextended" for clarity, to qualify that it is referring to OpenGL's set of blend modes not just without GL_KHR_blend_equation_advanced, but also without any other extensions that might add more advanced blend modes as well.
|
{
"pile_set_name": "StackExchange"
}
|
Q:
HTML canvas only triggered after browser resize?
I've a nice bit of JS that creates falling confetti using the HTML canvas -- but it only starts drawing if you resize the browser:
http://jsfiddle.net/JohnnyWalkerDesign/pmfgexfL/
Before resize:
After resize:
Why is this?
A:
Your canvas size is only set inside the resizeWindow() function.
Try adding resizeWindow(); directly after the function definition:
resizeWindow = function() {
window.w = canvas.width = window.innerWidth;
return window.h = canvas.height = window.innerHeight;
};
resizeWindow();
otherwise it doesn't get called until the page is resized.
http://jsfiddle.net/pmfgexfL/3/
Your window.onload won't get called because it is inside confetti().
EDIT: As fuyushimoya pointed out, also remove the window.w = 0; and window.h = 0; lines since they're not required. I think that it is actually cleaner to set the size in a function on resize (as you do) in case you want to do something more custom, but you need to call it to initialize too.
http://jsfiddle.net/pmfgexfL/5/
|
{
"pile_set_name": "StackExchange"
}
|
Q:
Webpack "Refused to apply style from leaflet.css"
I am using ngX-Rocket angular 8 starter. I want to include this leaflet map library into my project: https://github.com/Asymmetrik/ngx-leaflet
I included the file in my index.html
<link rel="stylesheet" type="text/css" href="./node_modules/leaflet/dist/leaflet.css" />
I am getting the following error in the console:
Refused to apply style from 'http://localhost:4200/node_modules/leaflet/dist/leaflet.css' because its MIME type ('text/html') is not a supported stylesheet MIME type, and strict MIME checking is enabled.
I noticed that there is no webpack.config.js file, so I created one and put:
module.exports = {
"module" : {
loaders: [
{ test: /\.css$/, loaders: [ 'style-loader', 'css-loader' ] }
]
}
};
I also put in my angular.json:
{
"$schema": "./node_modules/@angular-devkit/core/src/workspace/workspace-schema.json",
"version": 1,
"newProjectRoot": "projects",
"projects": {
"myproject": {
"root": "",
"sourceRoot": "src",
"projectType": "application",
"prefix": "app",
"schematics": {
"@schematics/angular:component": {
"styleext": "scss"
}
},
"architect": {
"build": {
"builder": "@angular-devkit/build-angular:browser",
"options": {
"outputPath": "dist",
"index": "src/index.html",
"main": "src/main.ts",
"tsConfig": "tsconfig.app.json",
"polyfills": "src/polyfills.ts",
"assets": [
"src/favicon.ico",
"src/apple-touch-icon.png",
"src/robots.txt",
"src/manifest.json",
"src/assets"
],
"styles": [
"src/main.scss",
"node_modules/leaflet/dist/leaflet.css"
],
"scripts": []
},
"configurations": {
"production": {
"optimization": true,
"outputHashing": "all",
"sourceMap": false,
"extractCss": true,
"namedChunks": false,
"aot": true,
"extractLicenses": true,
"vendorChunk": false,
"buildOptimizer": true,
"budgets": [
{
"type": "initial",
"maximumWarning": "2mb",
"maximumError": "5mb"
}
],
"serviceWorker": true,
"fileReplacements": [
{
"replace": "src/environments/environment.ts",
"with": "src/environments/environment.prod.ts"
}
]
},
"ci": {
"progress": false
}
}
},
"serve": {
"builder": "@angular-devkit/build-angular:dev-server",
"options": {
"hmr": true,
"hmrWarning": false,
"browserTarget": "myproject:build"
},
"configurations": {
"production": {
"hmr": false,
"browserTarget": "myproject:build:production"
},
"ci": {
"progress": false
}
}
},
"extract-i18n": {
"builder": "@angular-devkit/build-angular:extract-i18n",
"options": {
"browserTarget": "myproject:build"
}
},
"test": {
"builder": "@angular-devkit/build-angular:karma",
"options": {
"main": "src/test.ts",
"karmaConfig": "karma.conf.js",
"polyfills": "src/polyfills.ts",
"tsConfig": "tsconfig.spec.json",
"scripts": [],
"styles": [
"src/main.scss",
"node_modules/leaflet/dist/leaflet.css"
],
"assets": [
"src/favicon.ico",
"src/apple-touch-icon.png",
"src/robots.txt",
"src/manifest.json",
"src/assets"
]
},
"configurations": {
"ci": {
"progress": false,
"watch": false
}
}
},
"lint": {
"builder": "@angular-devkit/build-angular:tslint",
"options": {
"tsConfig": [
"tsconfig.app.json",
"tsconfig.spec.json"
],
"exclude": [
"**/node_modules/**"
]
}
}
}
},
"myproject-e2e": {
"root": "e2e",
"projectType": "application",
"architect": {
"e2e": {
"builder": "@angular-devkit/build-angular:protractor",
"options": {
"protractorConfig": "e2e/protractor.conf.js",
"devServerTarget": "myproject:serve"
}
},
"lint": {
"builder": "@angular-devkit/build-angular:tslint",
"options": {
"tsConfig": [
"e2e/tsconfig.e2e.json"
],
"exclude": [
"**/node_modules/**"
]
}
}
}
}
},
"defaultProject": "myproject"
}
Though I am still seeing the error in the web console. What do I have to do to get rid of this error?
A:
Try to add it in the angular.json file in the styles section
"styles": [
"styles.css",
"./node_modules/leaflet/dist/leaflet.css"
],
You can find more info here: Adding CSS and JavaScript to an Angular CLI Project
|
{
"pile_set_name": "StackExchange"
}
|
Q:
Get path of the file to execute from received http request
I am writing a backend app to receive a .jar file from the client and run it. Following is the client code.
let formData = new FormData();
let req = new XMLHttpRequest();
let file = document.getElementById("filePath").files[0];
formData.append("jarFile", file);
req.open("POST", '/deployArtifact');
req.send(formData);
I need to write the backend endpoint (Node-RED) to receive this file and run it. Following is a sketch of the Node.js POST endpoint code.
RED.httpAdmin.post("/deployArtifact", RED.auth.needsPermission('MyNode.read'), function (req, res) {
console.log(req.files);
console.log(req);
cmd.get(
'java -jar ' + xxxx,
function (err, data, stderr) {
if (err) {
res.json(err);
} else {
res.json(data);
}
});
});
First, I need to get the received file object. I tried to access req.files but it came back undefined. I printed the whole req; following is the IncomingMessage:
IncomingMessage {
_readableState:
ReadableState {
objectMode: false,
highWaterMark: 16384,
buffer: BufferList { head: null, tail: null, length: 0 },
length: 0,
pipes: null,
pipesCount: 0,
flowing: null,
ended: false,
endEmitted: false,
reading: false,
sync: true,
needReadable: false,
emittedReadable: false,
readableListening: false,
resumeScheduled: false,
destroyed: false,
defaultEncoding: 'utf8',
awaitDrain: 0,
readingMore: true,
decoder: null,
encoding: null },
readable: true,
domain: null,
_events: {},
_eventsCount: 0,
_maxListeners: undefined,
socket:
Socket {
connecting: false,
_hadError: false,
_handle:
TCP {
reading: true,
owner: [Circular],
onread: [Function: onread],
onconnection: null,
writeQueueSize: 0,
_consumed: true },
_parent: null,
_host: null,
_readableState:
ReadableState {
objectMode: false,
highWaterMark: 16384,
buffer: [Object],
length: 0,
pipes: null,
pipesCount: 0,
flowing: true,
ended: false,
endEmitted: false,
reading: true,
sync: false,
needReadable: true,
emittedReadable: false,
readableListening: false,
resumeScheduled: false,
destroyed: false,
defaultEncoding: 'utf8',
awaitDrain: 0,
readingMore: false,
decoder: null,
encoding: null },
readable: true,
domain: null,
_events:
{ end: [Array],
finish: [Function: onSocketFinish],
_socketEnd: [Function: onSocketEnd],
drain: [Array],
timeout: [Function: socketOnTimeout],
data: [Function: bound socketOnData],
error: [Function: socketOnError],
close: [Array],
resume: [Function: onSocketResume],
pause: [Function: onSocketPause] },
_eventsCount: 10,
_maxListeners: undefined,
_writableState:
WritableState {
objectMode: false,
highWaterMark: 16384,
finalCalled: false,
needDrain: false,
ending: false,
ended: false,
finished: false,
destroyed: false,
decodeStrings: false,
defaultEncoding: 'utf8',
length: 0,
writing: false,
corked: 0,
sync: false,
bufferProcessing: false,
onwrite: [Function: bound onwrite],
writecb: null,
writelen: 0,
bufferedRequest: null,
lastBufferedRequest: null,
pendingcb: 0,
prefinished: false,
errorEmitted: false,
bufferedRequestCount: 0,
corkedRequestsFree: [Object] },
writable: true,
allowHalfOpen: true,
_bytesDispatched: 465,
_sockname: null,
_pendingData: null,
_pendingEncoding: '',
server:
Server {
domain: null,
_events: [Object],
_eventsCount: 5,
_maxListeners: 0,
_connections: 2,
_handle: [Object],
_usingSlaves: false,
_slaves: [],
_unref: false,
allowHalfOpen: true,
pauseOnConnect: false,
httpAllowHalfOpen: false,
timeout: 120000,
keepAliveTimeout: 5000,
_pendingResponseData: 0,
maxHeadersCount: null,
_connectionKey: '4:0.0.0.0:1880',
_webSocketPaths: [Object],
[Symbol(asyncId)]: 772 },
_server:
Server {
domain: null,
_events: [Object],
_eventsCount: 5,
_maxListeners: 0,
_connections: 2,
_handle: [Object],
_usingSlaves: false,
_slaves: [],
_unref: false,
allowHalfOpen: true,
pauseOnConnect: false,
httpAllowHalfOpen: false,
timeout: 120000,
keepAliveTimeout: 5000,
_pendingResponseData: 0,
maxHeadersCount: null,
_connectionKey: '4:0.0.0.0:1880',
_webSocketPaths: [Object],
[Symbol(asyncId)]: 772 },
_idleTimeout: 120000,
_idleNext:
TimersList {
_idleNext: [Circular],
_idlePrev: [Circular],
_timer: [Object],
_unrefed: true,
msecs: 120000,
nextTick: false },
_idlePrev:
TimersList {
_idleNext: [Circular],
_idlePrev: [Circular],
_timer: [Object],
_unrefed: true,
msecs: 120000,
nextTick: false },
_idleStart: 39624,
_destroyed: false,
parser:
HTTPParser {
'0': [Function: parserOnHeaders],
'1': [Function: parserOnHeadersComplete],
'2': [Function: parserOnBody],
'3': [Function: parserOnMessageComplete],
'4': [Function: bound onParserExecute],
_headers: [],
_url: '',
_consumed: true,
socket: [Circular],
incoming: [Circular],
outgoing: null,
maxHeaderPairs: 2000,
onIncoming: [Function: bound parserOnIncoming] },
on: [Function: socketOnWrap],
_paused: false,
read: [Function],
_consuming: true,
_httpMessage:
ServerResponse {
domain: null,
_events: [Object],
_eventsCount: 1,
_maxListeners: undefined,
output: [],
outputEncodings: [],
outputCallbacks: [],
outputSize: 0,
writable: true,
_last: false,
upgrading: false,
chunkedEncoding: false,
shouldKeepAlive: true,
useChunkedEncodingByDefault: true,
sendDate: true,
_removedConnection: false,
_removedContLen: false,
_removedTE: false,
_contentLength: null,
_hasBody: true,
_trailer: '',
finished: false,
_headerSent: false,
socket: [Circular],
connection: [Circular],
_header: null,
_onPendingData: [Function: bound updateOutgoingData],
_sent100: false,
_expect_continue: false,
req: [Circular],
locals: {},
[Symbol(outHeadersKey)]: [Object] },
[Symbol(asyncId)]: 2108,
[Symbol(bytesRead)]: 0,
[Symbol(asyncId)]: 2109,
[Symbol(triggerAsyncId)]: 2108 },
connection:
Socket {
connecting: false,
_hadError: false,
_handle:
TCP {
reading: true,
owner: [Circular],
onread: [Function: onread],
onconnection: null,
writeQueueSize: 0,
_consumed: true },
_parent: null,
_host: null,
_readableState:
ReadableState {
objectMode: false,
highWaterMark: 16384,
buffer: [Object],
length: 0,
pipes: null,
pipesCount: 0,
flowing: true,
ended: false,
endEmitted: false,
reading: true,
sync: false,
needReadable: true,
emittedReadable: false,
readableListening: false,
resumeScheduled: false,
destroyed: false,
defaultEncoding: 'utf8',
awaitDrain: 0,
readingMore: false,
decoder: null,
encoding: null },
readable: true,
domain: null,
_events:
{ end: [Array],
finish: [Function: onSocketFinish],
_socketEnd: [Function: onSocketEnd],
drain: [Array],
timeout: [Function: socketOnTimeout],
data: [Function: bound socketOnData],
error: [Function: socketOnError],
close: [Array],
resume: [Function: onSocketResume],
pause: [Function: onSocketPause] },
_eventsCount: 10,
_maxListeners: undefined,
_writableState:
WritableState {
objectMode: false,
highWaterMark: 16384,
finalCalled: false,
needDrain: false,
ending: false,
ended: false,
finished: false,
destroyed: false,
decodeStrings: false,
defaultEncoding: 'utf8',
length: 0,
writing: false,
corked: 0,
sync: false,
bufferProcessing: false,
onwrite: [Function: bound onwrite],
writecb: null,
writelen: 0,
bufferedRequest: null,
lastBufferedRequest: null,
pendingcb: 0,
prefinished: false,
errorEmitted: false,
bufferedRequestCount: 0,
corkedRequestsFree: [Object] },
writable: true,
allowHalfOpen: true,
_bytesDispatched: 465,
_sockname: null,
_pendingData: null,
_pendingEncoding: '',
server:
Server {
domain: null,
_events: [Object],
_eventsCount: 5,
_maxListeners: 0,
_connections: 2,
_handle: [Object],
_usingSlaves: false,
_slaves: [],
_unref: false,
allowHalfOpen: true,
pauseOnConnect: false,
httpAllowHalfOpen: false,
timeout: 120000,
keepAliveTimeout: 5000,
_pendingResponseData: 0,
maxHeadersCount: null,
_connectionKey: '4:0.0.0.0:1880',
_webSocketPaths: [Object],
[Symbol(asyncId)]: 772 },
_server:
Server {
domain: null,
_events: [Object],
_eventsCount: 5,
_maxListeners: 0,
_connections: 2,
_handle: [Object],
_usingSlaves: false,
_slaves: [],
_unref: false,
allowHalfOpen: true,
pauseOnConnect: false,
httpAllowHalfOpen: false,
timeout: 120000,
keepAliveTimeout: 5000,
_pendingResponseData: 0,
maxHeadersCount: null,
_connectionKey: '4:0.0.0.0:1880',
_webSocketPaths: [Object],
[Symbol(asyncId)]: 772 },
_idleTimeout: 120000,
_idleNext:
TimersList {
_idleNext: [Circular],
_idlePrev: [Circular],
_timer: [Object],
_unrefed: true,
msecs: 120000,
nextTick: false },
_idlePrev:
TimersList {
_idleNext: [Circular],
_idlePrev: [Circular],
_timer: [Object],
_unrefed: true,
msecs: 120000,
nextTick: false },
_idleStart: 39624,
_destroyed: false,
parser:
HTTPParser {
'0': [Function: parserOnHeaders],
'1': [Function: parserOnHeadersComplete],
'2': [Function: parserOnBody],
'3': [Function: parserOnMessageComplete],
'4': [Function: bound onParserExecute],
_headers: [],
_url: '',
_consumed: true,
socket: [Circular],
incoming: [Circular],
outgoing: null,
maxHeaderPairs: 2000,
onIncoming: [Function: bound parserOnIncoming] },
on: [Function: socketOnWrap],
_paused: false,
read: [Function],
_consuming: true,
_httpMessage:
ServerResponse {
domain: null,
_events: [Object],
_eventsCount: 1,
_maxListeners: undefined,
output: [],
outputEncodings: [],
outputCallbacks: [],
outputSize: 0,
writable: true,
_last: false,
upgrading: false,
chunkedEncoding: false,
shouldKeepAlive: true,
useChunkedEncodingByDefault: true,
sendDate: true,
_removedConnection: false,
_removedContLen: false,
_removedTE: false,
_contentLength: null,
_hasBody: true,
_trailer: '',
finished: false,
_headerSent: false,
socket: [Circular],
connection: [Circular],
_header: null,
_onPendingData: [Function: bound updateOutgoingData],
_sent100: false,
_expect_continue: false,
req: [Circular],
locals: {},
[Symbol(outHeadersKey)]: [Object] },
[Symbol(asyncId)]: 2108,
[Symbol(bytesRead)]: 0,
[Symbol(asyncId)]: 2109,
[Symbol(triggerAsyncId)]: 2108 },
httpVersionMajor: 1,
httpVersionMinor: 1,
httpVersion: '1.1',
complete: false,
headers:
{ host: '127.0.0.1:1880',
'user-agent': 'Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:62.0) Gecko/20100101 Firefox/62.0',
accept: '*/*',
'accept-language': 'en-US,en;q=0.5',
'accept-encoding': 'gzip, deflate',
referer: 'http://127.0.0.1:1880/',
'content-type': 'multipart/form-data; boundary=---------------------------18210612061998272948903541154',
'content-length': '39395364',
connection: 'keep-alive' },
rawHeaders:
[ 'Host',
'127.0.0.1:1880',
'User-Agent',
'Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:62.0) Gecko/20100101 Firefox/62.0',
'Accept',
'*/*',
'Accept-Language',
'en-US,en;q=0.5',
'Accept-Encoding',
'gzip, deflate',
'Referer',
'http://127.0.0.1:1880/',
'Content-Type',
'multipart/form-data; boundary=---------------------------18210612061998272948903541154',
'Content-Length',
'39395364',
'Connection',
'keep-alive' ],
trailers: {},
rawTrailers: [],
upgrade: false,
url: '/deployArtifact',
method: 'POST',
statusCode: null,
statusMessage: null,
client:
Socket {
connecting: false,
_hadError: false,
_handle:
TCP {
reading: true,
owner: [Circular],
onread: [Function: onread],
onconnection: null,
writeQueueSize: 0,
_consumed: true },
_parent: null,
_host: null,
_readableState:
ReadableState {
objectMode: false,
highWaterMark: 16384,
buffer: [Object],
length: 0,
pipes: null,
pipesCount: 0,
flowing: true,
ended: false,
endEmitted: false,
reading: true,
sync: false,
needReadable: true,
emittedReadable: false,
readableListening: false,
resumeScheduled: false,
destroyed: false,
defaultEncoding: 'utf8',
awaitDrain: 0,
readingMore: false,
decoder: null,
encoding: null },
readable: true,
domain: null,
_events:
{ end: [Array],
finish: [Function: onSocketFinish],
_socketEnd: [Function: onSocketEnd],
drain: [Array],
timeout: [Function: socketOnTimeout],
data: [Function: bound socketOnData],
error: [Function: socketOnError],
close: [Array],
resume: [Function: onSocketResume],
pause: [Function: onSocketPause] },
_eventsCount: 10,
_maxListeners: undefined,
_writableState:
WritableState {
objectMode: false,
highWaterMark: 16384,
finalCalled: false,
needDrain: false,
ending: false,
ended: false,
finished: false,
destroyed: false,
decodeStrings: false,
defaultEncoding: 'utf8',
length: 0,
writing: false,
corked: 0,
sync: false,
bufferProcessing: false,
onwrite: [Function: bound onwrite],
writecb: null,
writelen: 0,
bufferedRequest: null,
lastBufferedRequest: null,
pendingcb: 0,
prefinished: false,
errorEmitted: false,
bufferedRequestCount: 0,
corkedRequestsFree: [Object] },
writable: true,
allowHalfOpen: true,
_bytesDispatched: 465,
_sockname: null,
_pendingData: null,
_pendingEncoding: '',
server:
Server {
domain: null,
_events: [Object],
_eventsCount: 5,
_maxListeners: 0,
_connections: 2,
_handle: [Object],
_usingSlaves: false,
_slaves: [],
_unref: false,
allowHalfOpen: true,
pauseOnConnect: false,
httpAllowHalfOpen: false,
timeout: 120000,
keepAliveTimeout: 5000,
_pendingResponseData: 0,
maxHeadersCount: null,
_connectionKey: '4:0.0.0.0:1880',
_webSocketPaths: [Object],
[Symbol(asyncId)]: 772 },
_server:
Server {
domain: null,
_events: [Object],
_eventsCount: 5,
_maxListeners: 0,
_connections: 2,
_handle: [Object],
_usingSlaves: false,
_slaves: [],
_unref: false,
allowHalfOpen: true,
pauseOnConnect: false,
httpAllowHalfOpen: false,
timeout: 120000,
keepAliveTimeout: 5000,
_pendingResponseData: 0,
maxHeadersCount: null,
_connectionKey: '4:0.0.0.0:1880',
_webSocketPaths: [Object],
[Symbol(asyncId)]: 772 },
_idleTimeout: 120000,
_idleNext:
TimersList {
_idleNext: [Circular],
_idlePrev: [Circular],
_timer: [Object],
_unrefed: true,
msecs: 120000,
nextTick: false },
_idlePrev:
TimersList {
_idleNext: [Circular],
_idlePrev: [Circular],
_timer: [Object],
_unrefed: true,
msecs: 120000,
nextTick: false },
_idleStart: 39624,
_destroyed: false,
parser:
HTTPParser {
'0': [Function: parserOnHeaders],
'1': [Function: parserOnHeadersComplete],
'2': [Function: parserOnBody],
'3': [Function: parserOnMessageComplete],
'4': [Function: bound onParserExecute],
_headers: [],
_url: '',
_consumed: true,
socket: [Circular],
incoming: [Circular],
outgoing: null,
maxHeaderPairs: 2000,
onIncoming: [Function: bound parserOnIncoming] },
on: [Function: socketOnWrap],
_paused: false,
read: [Function],
_consuming: true,
_httpMessage:
ServerResponse {
domain: null,
_events: [Object],
_eventsCount: 1,
_maxListeners: undefined,
output: [],
outputEncodings: [],
outputCallbacks: [],
outputSize: 0,
writable: true,
_last: false,
upgrading: false,
chunkedEncoding: false,
shouldKeepAlive: true,
useChunkedEncodingByDefault: true,
sendDate: true,
_removedConnection: false,
_removedContLen: false,
_removedTE: false,
_contentLength: null,
_hasBody: true,
_trailer: '',
finished: false,
_headerSent: false,
socket: [Circular],
connection: [Circular],
_header: null,
_onPendingData: [Function: bound updateOutgoingData],
_sent100: false,
_expect_continue: false,
req: [Circular],
locals: {},
[Symbol(outHeadersKey)]: [Object] },
[Symbol(asyncId)]: 2108,
[Symbol(bytesRead)]: 0,
[Symbol(asyncId)]: 2109,
[Symbol(triggerAsyncId)]: 2108 },
_consuming: false,
_dumped: false,
next: [Function: next],
baseUrl: '',
originalUrl: '/deployArtifact',
_parsedUrl:
Url {
protocol: null,
slashes: null,
auth: null,
host: null,
port: null,
hostname: null,
hash: null,
search: null,
query: null,
pathname: '/deployArtifact',
path: '/deployArtifact',
href: '/deployArtifact',
_raw: '/deployArtifact' },
params: {},
query: {},
res:
ServerResponse {
domain: null,
_events: { finish: [Function: bound resOnFinish] },
_eventsCount: 1,
_maxListeners: undefined,
output: [],
outputEncodings: [],
outputCallbacks: [],
outputSize: 0,
writable: true,
_last: false,
upgrading: false,
chunkedEncoding: false,
shouldKeepAlive: true,
useChunkedEncodingByDefault: true,
sendDate: true,
_removedConnection: false,
_removedContLen: false,
_removedTE: false,
_contentLength: null,
_hasBody: true,
_trailer: '',
finished: false,
_headerSent: false,
socket:
Socket {
connecting: false,
_hadError: false,
_handle: [Object],
_parent: null,
_host: null,
_readableState: [Object],
readable: true,
domain: null,
_events: [Object],
_eventsCount: 10,
_maxListeners: undefined,
_writableState: [Object],
writable: true,
allowHalfOpen: true,
_bytesDispatched: 465,
_sockname: null,
_pendingData: null,
_pendingEncoding: '',
server: [Object],
_server: [Object],
_idleTimeout: 120000,
_idleNext: [Object],
_idlePrev: [Object],
_idleStart: 39624,
_destroyed: false,
parser: [Object],
on: [Function: socketOnWrap],
_paused: false,
read: [Function],
_consuming: true,
_httpMessage: [Circular],
[Symbol(asyncId)]: 2108,
[Symbol(bytesRead)]: 0,
[Symbol(asyncId)]: 2109,
[Symbol(triggerAsyncId)]: 2108 },
connection:
Socket {
connecting: false,
_hadError: false,
_handle: [Object],
_parent: null,
_host: null,
_readableState: [Object],
readable: true,
domain: null,
_events: [Object],
_eventsCount: 10,
_maxListeners: undefined,
_writableState: [Object],
writable: true,
allowHalfOpen: true,
_bytesDispatched: 465,
_sockname: null,
_pendingData: null,
_pendingEncoding: '',
server: [Object],
_server: [Object],
_idleTimeout: 120000,
_idleNext: [Object],
_idlePrev: [Object],
_idleStart: 39624,
_destroyed: false,
parser: [Object],
on: [Function: socketOnWrap],
_paused: false,
read: [Function],
_consuming: true,
_httpMessage: [Circular],
[Symbol(asyncId)]: 2108,
[Symbol(bytesRead)]: 0,
[Symbol(asyncId)]: 2109,
[Symbol(triggerAsyncId)]: 2108 },
_header: null,
_onPendingData: [Function: bound updateOutgoingData],
_sent100: false,
_expect_continue: false,
req: [Circular],
locals: {},
[Symbol(outHeadersKey)]: { 'x-powered-by': [Array] } },
body: {},
route:
Route {
path: '/deployArtifact',
stack: [ [Object], [Object] ],
methods: { post: true } } }
How should I get the file object from the request so that I can run it with the 'java -jar' command? According to my understanding, I need the path of the file uploaded to the backend server. How can I get that path?
A:
I don't know much about Node-RED, but your problem is generic in terms of uploading and receiving files.
You may want to look at multipart/form-data when posting (uploading) your file.
A very simple example is available at https://www.w3schools.com/nodejs/nodejs_uploadfiles.asp
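Since the question boils down to extracting an uploaded file from a raw multipart/form-data body, here is a hedged, cross-language sketch of the wire format using Python's standard library. The function name `extract_files` and the boundary value are illustrative assumptions, not part of Node-RED; a real Node.js app would normally use middleware such as multer or formidable instead of parsing by hand.

```python
from email.parser import BytesParser
from email.policy import default

def extract_files(body: bytes, boundary: str) -> dict:
    """Parse a raw multipart/form-data body and return {filename: payload bytes}.

    This only illustrates the multipart structure; it is not how you would
    handle uploads in production.
    """
    # Prepend a Content-Type header so the generic MIME parser can split the parts.
    head = ("Content-Type: multipart/form-data; boundary=%s\r\n\r\n" % boundary).encode()
    msg = BytesParser(policy=default).parsebytes(head + body)
    files = {}
    for part in msg.iter_parts():
        filename = part.get_filename()
        if filename:  # parts without a filename are ordinary form fields
            files[filename] = part.get_payload(decode=True)
    return files
```

Once the payload bytes are written to a temporary file on disk, that file's path is what you would pass to `java -jar`.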
|
{
"pile_set_name": "StackExchange"
}
|
Q:
How can I get a virus just by viewing a site or opening an email?
Everyone knows that some sites can harm your computer just by viewing them, and some emails can send mail to all of your friends or collect information about you just by being read.
How is this possible? Every site is just plain HTML, CSS, and JS, which can't make any permanent changes on the computer (except cookies, but those can't harm you, can they?), so how could I get a virus?
If I click an ad, how do I get a virus? A download link for an autorun program?
How are these things done, and in what programming language?
A:
In general, the way these vectors work is by exploiting flaws in the software used to read/render the HTML, CSS, and JavaScript. In a perfect world with perfectly secure browsers/email programs with perfect sandboxes, then you'd be right that just viewing a page or an email couldn't load a virus on your computer. But we don't live in that perfect world.
One example is the "buffer overrun" vulnerability: The attacker spends a huge amount of time and effort to find that a particular program loads some resource (a CSS cursor, for instance) into a buffer failing to check that the resource is small enough to fit in the buffer. So the program writes bytes beyond the end of the buffer. Buffers are frequently on the stack, and so overwriting them can overwrite things like the return addresses for function calls. If you craft the data just right, you can make a return address jump to instructions in the data of the resource you're loading. At that point, all bets are off, the attacker can run arbitrary machine code embedded in that resource.
Other vectors involve vulnerabilities in the sandbox in which the JavaScript on the page runs.
Q:
What's the difference between a full path and a relative path in Dart?
I'm developing a Flutter app and have defined several models in a 'model' package. Then I declare a class Example in 'model', for example:
model/example.dart
class Example {
@override
String toString() {
return 'class example';
}
}
test_a.dart
import 'package:example/model/example.dart';
Example testA() {
  return Example();
}
test.dart
import 'model/example.dart';
import 'test_a.dart';
test() {
Example example = testA();
if (example is Example) {
print('this class is Example');
} else {
print('$example');
}
}
I get the output class example.
If I change import 'model/example.dart' to import 'package:example/model/example.dart' in test.dart, then I get the output this class is Example.
So I'm confused about the difference between a full (package) path and a relative path in Dart.
A:
package imports
'package:...' imports work from everywhere to import files from lib/*.
relative imports
Relative imports are always relative to the importing file.
If lib/model/test.dart imports 'example.dart', it imports lib/model/example.dart.
If you want to import test/model_tests/fixture.dart from any file within test/*, you can only use relative imports because package imports always assume lib/.
This also applies for all other non-lib/ top-level directories like drive_test/, example/, tool/, ...
lib/main.dart
There is currently a known issue with entry-point files in lib/* like lib/main.dart in Flutter: https://github.com/dart-lang/sdk/issues/33076
Dart has always assumed entry-point files to be in top-level directories other than lib/ (like bin/, web/, tool/, example/, ...).
Flutter broke this assumption.
Therefore you currently must not use relative imports in entry-point files inside lib/.
See also
How to reference another file in Dart?
Q:
Querying User Stories under an Initiative in Rally
I need to see all of the user stories under an initiative, in rally. I realize that the parent hierarchy doesn't work this way. It goes:
Initiative -> Feature -> User Story
Is it possible to query (in the Rally custom list UI) user stories under an initiative?
(PortfolioItem.FormattedID = I12345)
A:
Perhaps try querying for UserStories whereby:
(Feature.Parent.FormattedID = I12345)
Q:
Should we push the package.json, bower.json, gulpfile.js to production server
I am using gulp, bower, stylus for an angularjs application.
I am not using any Continuous Integration technology; I git pull code from the repo manually whenever pushes are made to the master branch on Bitbucket. Considering this scenario:
Is it good practice to include bower.json, package.json and gulpfile.js on the production server and install dependencies manually via npm install or bower install on the server?
Is it safe to include gulpfile.js on the server?
Also, if using any Continuous Integration technology, what would be the best practice?
My .gitignore file is as follows :
node_modules
dist
.tmp
.sass-cache
bower_components
private.xml
nbproject
gruntfile.js
gulpfile.js
package.json
A:
Add the package.json and bower.json files to keep track of the dependencies being used on the production server. However, you should skip uploading the gulp or grunt files, as they are for local use only; they are not needed on the production server.
EDIT :
If you also use grunt/gulp for restarting your Node server, such as running nodemon from grunt/gulp, you may upload the grunt/gulp file. In the end, if you have structured your Node server properly, there is no harm in putting the grunt/gulp file on the server, as these interact with your system before the server starts.
Q:
Where can I find the flagging summary on my profile?
I cannot find the flagging summary on my user profile.
I have seen many questions about flagging system: May I have a list of just my declined flags?, What does empty "Flagging Summary" page mean when "helpful flags" is "1"?, What does the helpful flag mean?, etc.
I know the results of my flags are none of my business and moderators are responsible for the follow-up, but I would like to see the summary as there are other things involved, such as the Deputy and Marshal badges.
A:
On your profile page, below the profile views, there is the helpful flag count. Click on the number to go to your flagging history.
A:
The information has been moved. It's now easiest to just click on your user widget in the navigation bar; wherever you may be on the site, this will open your profile page with the 'Activity' tab already selected (unlike other users' profile pages which open on the 'Profile' tab by default). The number of helpful flags is in the Impact widget on the right side, and it's (still) a link to your flagging summary.
Alternatively, if you want to view your flagging summary throughout the Stack Exchange network, you can install the Global Flagging Summary userscript. It will add a Flags tab to your network profile:
Q:
How can you rip a DVD using VLC media player on Mac OSX?
I want to be able to rip DVDs on my Mac using VLC media player.
A:
Easy peasy! There is this fantastic guide on how to Rip DVDs with VLC.
It is for Windows and seems a tad old, but it should still apply.
Edit: eHow also has a guide which is a bit nicer.
Q:
Add another Table in Where Clause
I am using Oracle DB.
I have a static query like
SELECT * FROM TESTSCHEMA.TABLE_A WHERE LOAN_ID = :LOAN_ID
The remaining WHERE condition comes from a table.
Now we came across a scenario where we need to introduce a new table: if one column in TABLE_A is NULL, I need to take the value from TABLE_B.
We are not looking for a code change or deployment; instead we would like to handle it with a table update.
SELECT
*
FROM
TESTSCHEMA.TABLE_A
WHERE
LOAN_ID = :LOAN_ID
AND
(
/* if COLUMNA is NULL in TABLE_A
then i need to pull value from TABLE_B in the same select statement */
)
A:
My interpretation...
If you want to limit the data returned based on COLUMNA when it has a value, but when it is NULL use a column from TABLE_B to limit by instead, then COALESCE and a subquery should do it.
Or you could replace the whole subquery with a scalar function and call that.
SELECT *
FROM TESTSCHEMA.TABLE_A
WHERE LOAN_ID = :LOAN_ID
AND COALESCE(COLUMNA, (SELECT ColumnName
                       FROM TABLE_B
                       WHERE [Some limits to get 1 record always])) = DesiredValue
Replace ColumnName with the desired column from TABLE_B.
Replace DesiredValue with the value you want to compare COLUMNA (or the TABLE_B column) to.
Add WHERE clause limits to TABLE_B to ensure you always get exactly 1 record back.
Or...
SELECT *
FROM TESTSCHEMA.TABLE_A
WHERE LOAN_ID = :LOAN_ID
AND COALESCE(COLUMNA, GetBValueWhenAValueNull(Parameters?)) = DesiredValue
But as that would require a deployment/code change... I'm guessing not.
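The COALESCE-plus-subquery pattern can be demonstrated end to end with a small in-memory SQLite sketch; the table and column names here are illustrative stand-ins for TABLE_A/TABLE_B, not the asker's actual schema:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE table_a (loan_id INTEGER, column_a TEXT);
    CREATE TABLE table_b (loan_id INTEGER, column_b TEXT);
    INSERT INTO table_a VALUES (1, 'from_a'), (2, NULL);
    INSERT INTO table_b VALUES (2, 'from_b');
""")

# COALESCE uses column_a when present, otherwise the correlated subquery's value.
rows = conn.execute("""
    SELECT a.loan_id,
           COALESCE(a.column_a,
                    (SELECT b.column_b FROM table_b b
                     WHERE b.loan_id = a.loan_id)) AS effective_value
    FROM table_a a
    ORDER BY a.loan_id
""").fetchall()
print(rows)  # [(1, 'from_a'), (2, 'from_b')]
```

The correlated subquery plays the role of the bracketed "limits to get 1 record always" in the Oracle answer: it must return at most one row per outer row.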
Q:
jquery - how to use errorPlacement for a specific element?
I have a checkbox with text following it, like:
[checkbox] I agree
If the checkbox is not checked when submitting, the current way the error msg shows (I use errorElement: "div") is:
[checkbox]
This field is required.
I agree
I would rather like;
[checkbox] I agree
This field is required
Any idea how to get this done?
The html wrap for the concerned checkbox and text elements I have is like this;
[div] [checkbox] I agree [/div] [div][/div]
I tried errorPlacement as follows, hoping to apply it just for that element alone:
...
messages: {
usage_terms: {
required: "Must agree to Terms of Use.",
//errorElement: "div"
errorPlacement: function(error, element) {
error.appendTo( element.parent("div").next("div") );
}
}
}
...
It didn't work. Any ideas?
A:
errorPlacement isn't underneath messages (as an option), it's a global option, so instead it should look like this:
messages: {
usage_terms: "Must agree to Terms of Use."
},
errorPlacement: function(error, element) {
if(element.attr("name") == "usage_terms") {
error.appendTo( element.parent("div").next("div") );
} else {
error.insertAfter(element);
}
}
Here we're just checking the element name when deciding where the error goes; you could do another check instead, for example giving all the elements that behave like this a class and checking element.hasClass("placeAfter").
Q:
array_keys returns Array in php
I have this function:
protected function insert($data){
$data['datecreated'] = date('Y-m-d h:i:s');
echo "array_keys(data) = ".$data['datecreated'];
var_dump($data);
echo array_keys($data);
$sql = "INSERT INTO {$this->table_name} (". array_keys($data).")";
$sql.= " VALUES ('";
$sql.=implode("','", $data);
$sql.=")";
$this->execute($sql);
$this->last_id = mysql_insert_id();
}
When I echo array_keys($data) it prints 'Array', not the keys.
I call it like this: $this->insert(array());. Why is that?
EDIT :
this is the output
array_keys(data) = 2012-05-18 04:44:46array(2) { [0]=> array(0) { } ["datecreated"]=> string(19) "2012-05-18 04:44:46" } Array
Notice: Array to string conversion in /Applications/MAMP/htdocs/Tamara/model/dbTable.php on line 105
INSERT INTO account (Array) VALUES ('Array','2012-05-18 04:44:46)You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near ''2012-05-18 04:44:46)' at line 1
A:
array_keys returns an array with all the keys.
You need to implode that as well:
implode(',', array_keys($data));
Edit:
And you might want to take a look at this part
$sql.=implode("','", $data);
$sql.=")";
You need a starting and a trailing '.
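For comparison, the same "column names from array keys" idea can be sketched in Python; real code should prefer driver placeholders for the values rather than string concatenation. The data dict below is made up for illustration (the account table name comes from the question's error output):

```python
data = {"name": "Tamara", "datecreated": "2012-05-18 04:44:46"}

columns = ", ".join(data.keys())            # like implode(',', array_keys($data))
placeholders = ", ".join("?" * len(data))   # one placeholder per value
sql = "INSERT INTO account (%s) VALUES (%s)" % (columns, placeholders)
print(sql)  # INSERT INTO account (name, datecreated) VALUES (?, ?)
# A DB-API driver would then run: cursor.execute(sql, tuple(data.values()))
```

Using placeholders instead of imploding the raw values also avoids the quoting bugs and SQL-injection risk of the original PHP snippet.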
Q:
Group dataframe values into ranges
I would like to group a table like this:
df:
lib sstart
PV002 256
PV002 256
PV002 390
PV002 834
PV003 626
PV003 834
PV003 1075
PV004 116
PV004 320
PV005 400
Into a table like this:
lib sstart_range
PV002 [256-834]
PV003 [626-1075]
PV004 [116-320]
PV005 [400]
I have tried this function but I am not getting results:
Record_DNAJ<-df%>%
group_by(lib, sstart) %>%
summarize(sstart_range = paste(range(sstart)))
What is my mistake?
Thanks in advance.
A:
To get what you are looking for (pasting together the two elements of the vector that range() returns, so that a single value, not two, is returned to summarize()), you have to use the collapse parameter:
summarize(sstart_range = paste(range(sstart), collapse="-"))
The value is the string used to separate each value of the range.
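For comparison, the same min-max grouping can be sketched with plain Python, as a cross-language illustration of what the summarize step is doing; the data is the question's sample, and the single-value case ("[400]") matches the asker's desired output:

```python
from collections import defaultdict

rows = [("PV002", 256), ("PV002", 256), ("PV002", 390), ("PV002", 834),
        ("PV003", 626), ("PV003", 834), ("PV003", 1075),
        ("PV004", 116), ("PV004", 320), ("PV005", 400)]

# Collect every sstart value per lib.
groups = defaultdict(list)
for lib, sstart in rows:
    groups[lib].append(sstart)

# Collapse each group into a "[min-max]" string, like paste(range(...), collapse="-").
ranges = {lib: "[%d-%d]" % (min(v), max(v)) if min(v) != max(v) else "[%d]" % v[0]
          for lib, v in groups.items()}
print(ranges)
```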
Q:
How to take multiple inputs on a single line in Python 3?
I am trying to get multiple inputs on a single line in Python. These are the kinds of inputs I am trying to take; the first line gives the number of input lines that follow:
3
1 2 3
4 3
8 9 6 3
Is there any way to take that kind of input, without knowing how many inputs are going to be given per line?
A:
You could use:
separator = ' '
parameters = input("parameters:").split(separator)
e.g.
Python 3.6.5 (default, Mar 30 2018, 06:42:10)
[GCC 4.2.1 Compatible Apple LLVM 9.0.0 (clang-900.0.39.2)] on darwin
Type "help", "copyright", "credits" or "license" for more information.
>>> separator = '-'
>>> parameters = input("parameters:").split(separator)
parameters:5-6
>>> parameters
['5', '6']
>>>
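Putting the whole pattern together, a sketch that reads the line count first and then splits each following line, without knowing how many numbers each line contains (`read_rows` is a made-up helper name):

```python
import io

def read_rows(stream):
    """Read a count line, then split each of the next n lines into ints."""
    n = int(stream.readline())
    return [[int(tok) for tok in stream.readline().split()] for _ in range(n)]

# Normally stream would be sys.stdin; a StringIO stands in for the sample input.
sample = io.StringIO("3\n1 2 3\n4 3\n8 9 6 3\n")
print(read_rows(sample))  # [[1, 2, 3], [4, 3], [8, 9, 6, 3]]
```

split() with no argument handles any number of space-separated tokens per line, which is what makes the per-line count irrelevant.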
Q:
Can you spot an enemy jet from 3rd person view while on a jet?
I would like to spot an enemy jet behind me when on a jet
A:
If an enemy jet is right behind yours, you will know it by its bullets.
The best way to spot a jet is using mouse right click (freelook) and controlling the camera. I usually play inside the cockpit and I never managed to put the mouse center right over an enemy when he is behind me, and it is risky to try. You can attempt some maneuvers to get to his side, under, over or behind him; it is better than staying in his aim spamming "Q".
http://en.wikipedia.org/wiki/Air_combat_manoeuvring
I recommend you stay away from maneuvers that put your plane perpendicular to the enemy, because in this game that makes you an easy target most of the time, but the Immelmann turn is a good one if you are using the fighter jet to land some bombs.
Q:
Cannot access oracle dba_directories from stored procedure
I am trying to access the DBA_DIRECTORIES view from a stored procedure using the SYSTEM schema, but I am getting the following error:
ORA-00942: table or view does not exist
I can access it from outside the stored procedure. The stored procedure is also under the SYSTEM schema. How can I access DBA_DIRECTORIES from within a stored procedure?
A:
In order to access a view or a table that doesn't belong to you in a stored procedure, you need to have the necessary rights granted to you directly, not via a role:
So, have a DBA execute
grant all on dba_directories to <your_name>;
Then you should be able to access the view in your stored procedure.
Q:
How to install Android Virtual Device manager and intel system image with sdkmanager
I'm trying to create an Android emulator with only the command line tools.
I've downloaded sdkmanager and successfully installed a platform with
sdkmanager "platforms;android-25". But I can't install a system image, because
sdkmanager --list gives this:
system-images;a...ult;armeabi-v7a | 4 | ARM EABI v7a System Image
system-images;a...-10;default;x86 | 4 | Intel x86 Atom System Image
system-images;a...pis;armeabi-v7a | 5 | Google APIs ARM EABI v7a Syste...
Someone decided that I don't need to see the full names of the packages that I want to install. But at the same time, when trying to install something from this list, sdkmanager seems to think otherwise.
A:
In addition to the comment you received, you could also use the "android" script located in the tools folder; here is the output on my machine (I have truncated it):
[user@pc:~/sdk/tools]
└─ $ ▶ ./android list sdk
******************************************************************
The "android" command is deprecated.
For manual SDK, AVD, and project management, please use Android Studio.
For command-line tools, use tools/bin/sdkmanager and tools/bin/avdmanager
*********************************************************************
"android" SDK commands can be translated to sdkmanager commands on a
best-effort basis.
Continue? (This prompt can be suppressed with the --use-sdk-wrapper
command-line argument
or by setting the USE_SDK_WRAPPER environment variable) [y/N]: y
Running /home/user/sdk/tools/bin/sdkmanager --list --verbose
Info: Parsing /home/user/sdk/build-tools/24.0.3/package.xml
Info: Parsing /home/user/sdk/build-tools/25.0.2/package.xml
Info: Parsing /home/user/sdk/emulator/package.xml
Info: Parsing /home/user/sdk/patcher/v4/package.xml
...
Info: Parsing /home/user/sdk/tools/package.xml
Warning: File /home/user/.android/repositories.cfg could not be loaded.
Installed packages:
--------------------------------------
build-tools;24.0.3
Description: Android SDK Build-Tools 24.0.3
Version: 24.0.3
Installed Location: /home/user/sdk/build-tools/24.0.3
build-tools;25.0.2
Description: Android SDK Build-Tools 25.0.2
Version: 25.0.2
Installed Location: /home/user/sdk/build-tools/25.0.2
...
system-images;android-25;google_apis;x86_64
Description: Google APIs Intel x86 Atom_64 System Image
Version: 4
To make sure we are on the same page:
I have downloaded the file tools_r25.2.3-linux.zip from this site and unzipped it into ~/sdk.
I have these packages installed using tools/sdkmanager:
build-tools;24.0.3
build-tools/25.0.2
emulator | 26.0.0
platform-tools | 25.0.4
platforms;android-24 | 2
platforms;android-25 | 3
tools | 26.0.1
Edit: After reading the whole command output, it turns out that you could also use the --verbose flag:
[user@pc:~/sdk/tools]
└─ $ ▶ sdkmanager --list --verbose
Q:
Why are the Lego boxes so unnecessary big?
I often ask myself when buying Lego why the boxes are so much bigger than the space needed for the contained content. It seems a waste in terms of material and logistics. I store the boxes, and especially the bigger sets need a lot of space in my apartment. I would be happy to have them smaller.
A:
The possible reasons include:
Surface area needed for legal info, practical requirements like barcodes, information, feature descriptions and artwork
Aspect ratio of the artwork means the boxes can't be too long and thin
Leaving space between the bricks so they don't get squished up against each other
Size of the instruction manual or sticker sheet
Psychological tool to increase the perceived value of the set
Larger boxes grab the attention of the consumer better
Larger packages are harder to steal
But the exact reason is likely a combination of some of the above ones and maybe some others, most probably in a top-secret ratio known only by LEGO.
Q:
How to show images from assets or internal storage in a gallery to pick from?
I am currently looking for a way to let the user pick an image from my app's data. The images are currently in the assets folder, but may be moved to internal storage if that's easier.
I don't want all the files to be stored in public storage.
I am using the following code to pick from the user's gallery and it works well. I hope to use similar code for the current situation.
Intent intent = new Intent();
intent.setType("image/*");
intent.setAction(Intent.ACTION_GET_CONTENT);
intent.putExtra(Intent.EXTRA_LOCAL_ONLY, true);
startActivityForResult(Intent.createChooser(intent, "Select Picture"), PICK_IMAGE_REQUEST);
Is there a way to change that code to show the images from the app's assets folder or its internal storage, instead of the user's public gallery files?
A:
I created a "Gallery" myself by using a GridView.
I used the code from that site to create an ImageAdapter, with a few changes:
public class ImageAdapter extends BaseAdapter {
    private final Context mContext;
    private final ArrayList<String> data = new ArrayList<>();

    public ImageAdapter(Context context) {
        mContext = context;
        // I'm using a yamlReader to fill in the data, but it could instead just be hardcoded.
        fillDataWithImageNames();
    }

    // BaseAdapter also requires these three methods
    public int getCount() { return data.size(); }
    public Object getItem(int position) { return data.get(position); }
    public long getItemId(int position) { return position; }

    // create a new ImageView for each item referenced by the Adapter
    public View getView(int position, View convertView, ViewGroup parent) {
        ImageView imageView;
        if (convertView == null) {
            // if it's not recycled, initialize some attributes
            imageView = new ImageView(mContext);
            imageView.setLayoutParams(new GridView.LayoutParams(85, 85));
            imageView.setScaleType(ImageView.ScaleType.CENTER_CROP);
            imageView.setPadding(8, 8, 8, 8);
        } else {
            imageView = (ImageView) convertView;
        }
        // The images are in /app/assets/images/thumbnails/example.jpeg
        imageView.setImageDrawable(loadThumb("images/thumbnails/" + data.get(position) + ".jpeg"));
        return imageView;
    }

    // load one thumbnail from the assets folder as a Drawable
    private Drawable loadThumb(String path) {
        try {
            // get input stream
            InputStream ims = mContext.getAssets().open(path);
            // load image as Drawable and return it
            return Drawable.createFromStream(ims, null);
        } catch (IOException ex) {
            return null;
        }
    }
}
Q:
Is there a mySQL server + JSP server package similar to a WAMP?
Is there a server package that has both JSP and MySQL support?
I like how WAMP is packaged so that PHP and MySQL are in one package; is there one that has JSP instead of PHP? If not, what VPS hosts would you recommend for JSP and MySQL development?
A:
XAMPP (https://www.apachefriends.org/index.html) has servers for PHP, MySQL, and Perl, and, though not advertised on the main page, a Tomcat server as well to run JSP.
Q:
Dynamic string parsing in C#
I'm implementing a "type-independent" method for filling a DataRow with values.
The intended functionality is quite straightforward: the caller passes a collection of string representations of column values and the DataRow that needs to be filled:
private void FillDataRow(List<ColumnTypeStringRep> rowInput, DataRow row)
The ColumnTypeStringRep structure contains the string representation of the value, the column name, and, most importantly, the column data type:
private struct ColumnTypeStringRep
{
public string columnName; public Type type; public string stringRep;
public ColumnTypeStringRep(string columnName, Type type, string stringRep)
{
this.columnName = columnName; this.type = type; this.stringRep = stringRep;
}
}
So what's that "type-independency"? Basically I don't care about the data row schema (which will always be a row of some typed data table); as long as the passed column names match the DataRow's column names and the column data types match those of the DataRow, I'm fine. This function needs to be as flexible as possible (and as simple as possible, just not simpler).
Here it is (almost):
private void FillDataRow(List<ColumnTypeStringRep> rowInput, DataRow row)
{
Debug.Assert(rowInput.Count == row.Table.Columns.Count);
foreach (ColumnTypeStringRep columnInput in rowInput)
{
Debug.Assert(row.Table.Columns.Contains(columnInput.columnName));
Debug.Assert(row.Table.Columns[columnInput.columnName].DataType == columnInput.type);
// --> Now I want something more or less like below (of course the syntax is wrong) :
/*
row[columnInput.columnName] = columnInput.type.Parse(columnInput.stringRep);
*/
// --> definitely not like below (this of course would work) :
/*
switch (columnInput.type.ToString())
{
case "System.String":
row[columnInput.columnName] = columnInput.stringRep;
break;
case "System.Single":
row[columnInput.columnName] = System.Single.Parse(columnInput.stringRep);
break;
case "System.DateTime":
row[columnInput.columnName] = System.DateTime.Parse(columnInput.stringRep);
break;
//...
default:
break;
}
*/
}
}
Now you probably see my problem: I don't want to use the switch statement. It would be perfect, as in the first commented segment, to somehow use the provided column type to invoke the Parse method of that particular type, returning an object of that type constructed from the string representation.
The switch solution works but it's extremely inflexible. What if in the future I'm filling not a DataRow but some other custom type with "columns" that can be of custom types (of course every such type would need to expose a Parse method to build itself from a string representation)?
Hope you got what I mean; it's a "dynamic parsing" kind of functionality. Is there a way to achieve this in .NET?
Example of FillDataRow call could look like this:
List<ColumnTypeStrinRep> rowInput = new List<ColumnTypeStrinRep>();
rowInput.Add(new ColumnTypeStringRep("colName1", Type.GetType("System.Int32"), "0"));
rowInput.Add(new ColumnTypeStringRep("colName2", Type.GetType("System.Double"), "1,11"));
rowInput.Add(new ColumnTypeStringRep("colName3", Type.GetType("System.Decimal"), "2,22"));
rowInput.Add(new ColumnTypeStringRep("colName4", Type.GetType("System.String"), "test"));
rowInput.Add(new ColumnTypeStringRep("colName5", Type.GetType("System.DateTime"), "2010-01-01"));
rowInput.Add(new ColumnTypeStringRep("colName6", Type.GetType("System.Single"), "3,33"));
TestDataSet.TestTableRow newRow = this.testDataSet.TestTable.NewTestTableRow();
FillDataRow(rowInput, newRow);
this.testDataSet.TestTable.AddTestTableRow(newRow);
this.testDataSet.TestTable.AcceptChanges();
Thank You!
A:
The TypeConverter class is the generic .NET way of converting types. The System.ComponentModel namespace includes implementations for the primitive types, and WPF ships with some more (though I am not sure in which namespace). There is also the static Convert class, which offers primitive type conversions as well: it handles some simple conversions itself and falls back to IConvertible.
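To make the commented-out wish in the question concrete: the static Convert class can do exactly this dispatch at runtime via Convert.ChangeType, which works for all the System primitives (anything implementing IConvertible). A minimal sketch, reusing the question's names (class name normalized to ColumnTypeStringRep, matching the call site):

```csharp
private void FillDataRow(List<ColumnTypeStringRep> rowInput, DataRow row)
{
    foreach (ColumnTypeStringRep columnInput in rowInput)
    {
        // Dispatches on the runtime Type, replacing the switch over type names.
        row[columnInput.columnName] =
            Convert.ChangeType(columnInput.stringRep, columnInput.type);
    }
}
```

Note that parsing is culture-sensitive: values like "1,11" only parse as System.Double under a culture that uses ',' as the decimal separator; pass a CultureInfo as the third argument of Convert.ChangeType to control this. For custom types that merely expose a static Parse method (and don't implement IConvertible), you would instead look the method up via reflection, e.g. columnInput.type.GetMethod("Parse", new[] { typeof(string) }) and Invoke it.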
Q:
Change the definition of a "day" to be localized for users
Normally I would not complain about this, but given that two badges are based on "visiting the site" each "day", it is silly for the "day" to be based on the UTC clock for all users.
I can speculate about the reason for this limitation, but I will refrain from that.
I know I am not the only one who has brought this up as an issue, but if badges are based on it, at least do the right thing and use local time.
EDIT
Someone suggested that changing a user timezone creates a problem.
All times stored in the database are UTC. If Jeff and co. just did two passes for whatever logic is used for checking the badges (one for UTC, the way it exists now, and one for local time), I think that would be far preferable.
A:
I presume then that you suggest a configuration option be added, so that each user could specify his particular timezone.
What would happen to the calculation of badge progress if one changed his timezone? Say I am 99 days into achieving the Fanatic badge, and I changed my timezone from UTC-5 to UTC+5, thereby making the 100th day arrive? What if I changed my timezone in the other direction?
Would everyone's reputation cap therefore also be switching to using local days? Could I artificially raise my rep cap if I switched timezones?
I can speculate about the reason for this limitation, but I will refrain from that.
There's no need to speculate; the reason is quite clear. The contrived edge examples above show that there are other considerations that come into play when one allows customizing timezones per user, therefore:
It's simply much easier if everything is on one standardized calendar.
A:
I ran foul of this at the beginning of the year. I thought I'd go for the "Enthusiast" badge (30 consecutive days). So I used the site (SO) for 30 consecutive days in January and didn't get the badge. I figured maybe I'd inadvertently slipped a day so I went for another month or two but no luck.
I never checked my "consecutive days" stat which I didn't know about. In the end I figured that the day transition was happening in the middle of the day based on some random time-zone on the far side of the world and I would need to make sure I used the site at the same time each day in order to qualify for this badge. Too hard!
I see now the day-transition was happening at lunch-time (UTC midnight). Oh well.
A:
So I guess I see both sides of this issue. UTC calculations are simple and make a bright line rule, but at the same time a day-change in the middle of my-day is also odd. The world is full of oddity, but that doesn't mean we can't explain the oddity.
It took a few resets of my consecutive day count before I realized what was happening. It then took 4 different searches of the meta site to confirm my assumption that days are calculated based on UTC.
So, why not:
change the field label from "consecutive days" to "consecutive UTC days?" and/or
add a note to the calendar that folds out when i click on consecutive days? and/or
modify the description of the enthusiast & fanatic badges to note the UTC nature?
It would clear up the confusion that is apparent in the vibrant comments on this post and other closed posts. Also, those who care about and aim for these badges could at least learn how the badges are calculated.
Q:
Establishing Baselines & Benchmarks
When you are hired as a DBA in a new shop, what are the important tools you would use for establishing baselines and implementing benchmarks for 50+ instances? Your advice would be highly appreciated.
A:
For Microsoft SQL Server, the lowest-intrusive tool is Performance Monitor, aka Perfmon. Here's my tutorial on grabbing Perfmon counters for SQL Server and analyzing them:
http://www.brentozar.com/archive/2006/12/dba-101-using-perfmon-for-sql-performance-tuning/
Q:
Inserting in a WStandardItemModel is too slow
I am working on an application built upon Wt.
We have a performance problem, as it must display a lot of data in a WTableView associated with a WStandardItemModel.
For each new item to be added in the table it does:
model->setData( row, column, data )
(which happens a few thousand times).
Is there some way to make it faster? some other way to add data in the table?
it can take 2 seconds to generate the data and several minutes to display it ...
A:
WStandardItemModel is a general-purpose model that is easy to use, but it's not optimal for very large models. Try to specialize a WAbstractTableModel; you only need to reimplement three methods and you can read your data from wherever it resides, or compute it on the fly.
It's not normal for a view to take minutes to display. I've used views on tables with many thousands of entries without performance problems. Was your system swapping because of the memory wasted in an (extremely large) WStandardItemModel?
Q:
What is the meaning of the term "Custom Class"?
Many questions here on SO ask about custom classes. I, on the other hand, have no idea what they're talking about. "Custom class" seems to mean the same thing that I mean when I say "class".
What did I miss, back in the '80s, that keeps me from understanding?
I know that it's possible to purchase a packaged system - for Accounting, ERP, or something like that. You can then customize it, or add "custom code" to make the package do things that are specific to your business.
But that doesn't describe the process used in writing a .NET program. In this case, the entire purpose of the .NET Framework is to allow us to write our own code. There is nothing useful out of the box.
A:
Classes that you write yourself versus classes that come with the framework
A:
The term "custom code" is generally used to refer to code you can write to extend an existing library or framework. I suppose a "custom class" would be a class that you can plug in to a library or framework, perhaps by implementing an interface or inheriting from an abstract base class.
I'd probably call it a "customization class" instead, but it's certainly not the first awkwardly-named computing concept I've heard of here.
A:
Working with Custom Classes in dBASE, Ken Mayer, Senior SQA Engineer, January 30, 2001 at http://www.dbase.com/knowledgebase/int/custom_classes/custclas.htm
What is a Class, and What is a Custom Class?
A Class is a definition of an object -- it stores within its definition all of the properties, events and methods associated with the object (this is, by the way, 'encapsulation').
A Custom Class is a developer-defined class, based on one of the stock classes (classes built into dBASE). A really good example of a Custom Class file ships with dB2K -- it is in the CLASSES directory (in Visual dBASE 7.x this is the CUSTOM folder), and is called DATABUTTONS.CC. We will briefly look at one of the buttons defined in this class file, but most of the code we will look at will be a bit different from what's defined here.
Microsoft uses the term "custom" in its documentation for any extension of its supplied libraries.
If you wanted to extend a ListBox you would create a "custom control". If you wanted to extend a Timer, you would create a "custom component". Extend the DataTable, create a "custom class". They have done this for a long time. The earliest reference I can remember is the Visual Basic 5.0 manuals, which I think was 1996/1997.
There were "Custom App Wizard" projects, "Custom Business Objects in RDS [ADO]", "Custom Click Events", "Custom Properties in SQL Server MDX", "Custom OCX Controls", "Custom Controls with DHTML", and the list goes on and on. I estimate that the MSDN Library of October 2001 has over 300 index entries starting with the word "custom".
Q:
Determining effects of clustering
In clustering, what effect do noisy, redundant, and irrelevant attributes have? Do they end up helping or hurting clustering? I know that clustering cannot handle noisy data well, but I am not sure about the other two.
A:
Noise
The performance of many clustering algorithms, like k-means, partitioning around medoids, etc., degrades as the percentage of noise increases. For example, in k-means clustering, outliers (points that differ greatly from the rest of the data set) pull the cluster centroids away; the algorithm takes a long time to converge and may not result in a good clustering.
Most clustering algorithms prefer to have the noise (outliers) removed from the data set before clustering.
For more details: Effect of noise on the performance of clustering techniques
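As a toy illustration (plain Python, made-up numbers) of the centroid shift described above:

```python
# How a single outlier drags a cluster centroid (1-D for simplicity).
def centroid(points):
    """Arithmetic mean of a list of 1-D points."""
    return sum(points) / len(points)

cluster = [1.0, 2.0, 3.0, 4.0]   # tight cluster, centroid 2.5
noisy = cluster + [100.0]        # one outlier far from the rest

print(centroid(cluster))         # 2.5
print(centroid(noisy))           # 22.0 -- pulled far outside the cluster
```

One far-away point moves the "center" of the cluster to a location where no actual data lives, which is exactly why outlier removal before k-means is recommended.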
Redundant data (not redundant attributes, but redundant data points)
This also affects clustering negatively, though it depends on the clustering algorithm. If an algorithm takes the frequency of data points into consideration (for example, when taking the mean or median of clustered points), the cluster mean or median may shift.
Normally you don't want to cluster data on the basis of the likelihood of occurrence of a data point, so if a data point is redundant, it is suggested that it be removed before clustering.
If you consider redundant attributes (i.e. correlated attributes), they may or may not affect clustering; it depends on the domain of the data set.
Irrelevant attribute
This too affects clustering negatively. Because of irrelevant attributes, clustering may not converge; in fact, irrelevant attributes are sometimes treated as noise. Higher dimensionality also brings the curse of dimensionality, so it is often suggested to perform dimensionality reduction before clustering.
Some details:
Clustering high dimensional data
Effect of irrelevant attribute on fuzzy clustering
Q:
Linux put permissions good
What I would like to do is create a directory that belongs to a group, where each member can create, edit & remove files.
chgrp mygroup mydir
chmod g=rwx mydir
That's what I learned, but now my big problem is that I need to make sure people from that group can only delete their own files.
I am not sure how to set these permissions;
if you have any ideas, please share them!
Thanks for reading.
A:
Did you try setting the sticky bit?
chmod 1775 /directory/with/group/files
When the sticky bit is enabled on a directory, users (other than the owner) can only remove their own files inside it. This is used on directories like /tmp, whose permissions are 1777 = rwxrwxrwt.
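For example (the directory name is just an illustration):

```shell
# Create a group-writable shared directory with the sticky bit set.
mkdir -p /tmp/shared_demo
chmod 1775 /tmp/shared_demo

# The trailing 't' in the mode string confirms the sticky bit is on:
# drwxrwxr-t ... /tmp/shared_demo
ls -ld /tmp/shared_demo
```

After this, members of the directory's group can create files, but can delete only the files they own.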
Q:
How do I set frames or borders for buttons?
There are 6 buttons here, pressed right up against each other. How can I show their boundaries (or draw a thin black border around them)? That would make things clearer for the user, and the application would look nicer.
A:
Google for
android border
Follow the first link: how to put a border around an android textview
Copy-paste the XML drawable background:
<shape xmlns:android="http://schemas.android.com/apk/res/android" android:shape="rectangle" >
<solid android:color="#ffffff" />
<stroke android:width="1dip" android:color="#4fa5d5"/>
</shape>
Assign it as the background of the desired view:
<Button android:text="Some text" android:background="@drawable/back"/>
Q:
How to create a one-to-one relationship between Salesforce custom objects?
I am looking to create a one-to-one relationship between objects in Salesforce. For example,
each product has one structure, one service, one card product, and one account service, which are also custom objects. Please help me with this.
A:
There is no one-to-one relationship in Salesforce. You "emulate" it by having a lookup from object A to B and from object B to A, and you use custom coding to enforce it (usually with triggers on all participating objects that detect a new/updated link and reciprocally update the lookup in the other direction).
On page layouts, you hide the related lists and leave only lookup fields.
Q:
R - filtering Matrix based off True/False vector
I have a data structure that can contain both vectors and matrices. I want to filter it based on a true/false vector. I can't figure out how to filter both of them successfully.
result <- structure(list(aba = c(1, 2, 3, 4), beta = c("a", "b", "c", "d"),
chi = structure(c(0.438148361863568, 0.889733991585672, 0.0910745360888541,
0.0512442977633327, 0.812013201415539, 0.717306115897372, 0.995319503592327,
0.758843480376527, 0.366544214077294, 0.706843026448041, 0.108310810523108,
0.225777650484815, 0.831163870869204, 0.274351604515687, 0.323493955424055,
0.351171918679029), .Dim = c(4L, 4L))), .Names = c("aba", "beta", "chi"))
> result
$aba
[1] 1 2 3 4
$beta
[1] "a" "b" "c" "d"
$chi
[,1] [,2] [,3] [,4]
[1,] 0.43814836 0.8120132 0.3665442 0.8311639
[2,] 0.88973399 0.7173061 0.7068430 0.2743516
[3,] 0.09107454 0.9953195 0.1083108 0.3234940
[4,] 0.05124430 0.7588435 0.2257777 0.3511719
tf <- c(T,F,T,T)
What I would like to do is something like
> lapply(result,function(x) {ifelse(tf,x,NA)})
$aba
[1] 1 NA 3 4
$beta
[1] "a" NA "c" "d"
$chi
[1] 0.43814836 NA 0.09107454 0.05124430
but the $chi matrix structure is lost.
The result I'd expect is
ifelse(matrix(tf,ncol=4,nrow=4),result$chi,NA)
[,1] [,2] [,3] [,4]
[1,] 0.43814836 0.8120132 0.3665442 0.8311639
[2,] NA NA NA NA
[3,] 0.09107454 0.9953195 0.1083108 0.3234940
[4,] 0.05124430 0.7588435 0.2257777 0.3511719
The challenge I'm having trouble solving is how to match the tf vector to the data. It feels like I need to branch on the data type, which I'd like to avoid. Thoughts and answers are appreciated.
A:
I don't see how you can avoid either checking the data type or the "dimensions" of the data. As such, I would propose something like:
lapply(result, function(x) {
if (is.null(dim(x))) x[!tf] <- NA else x[!tf, ] <- NA
x
})
# $aba
# [1] 1 NA 3 4
#
# $beta
# [1] "a" NA "c" "d"
#
# $chi
# [,1] [,2] [,3] [,4]
# [1,] 0.43814836 0.8120132 0.3665442 0.8311639
# [2,] NA NA NA NA
# [3,] 0.09107454 0.9953195 0.1083108 0.3234940
# [4,] 0.05124430 0.7588435 0.2257777 0.3511719
Q:
Questions about the setup of adversarial examples in the paper 'Intriguing Properties of Neural Networks'
In the paper 'Intriguing Properties of Neural Networks', the process of finding adversarial examples is set up as follows (section 4.1):
We denote by $f : \mathbb{R}^m \to \{1 \dots k\}$ a classifier mapping image pixel value vectors to a discrete label set. We also assume that $f$ has an associated continuous loss function denoted by $\text{loss}_f : \mathbb{R}^m \times \{1 \dots k\} \to \mathbb{R}^+$. For a given image $x \in \mathbb{R}^m$ and target label $l \in \{1 \dots k\}$, we aim to solve the following box-constrained optimization problem:
Minimize $\lVert r \rVert_2$ subject to:
$f(x + r) = l$
$x + r \in [0, 1]^m$
The minimizer $r$ might not be unique, but we denote one such $x + r$ for an arbitrarily chosen minimizer by $D(x, l)$. Informally, $x + r$ is the closest image to $x$ classified as $l$ by $f$. Obviously, $D(x, f(x)) = f(x)$, so this task is non-trivial only if $f(x) \neq l$. In general, the exact computation of $D(x, l)$ is a hard problem, so we approximate it by using a box-constrained L-BFGS. Concretely, we find an approximation of $D(x, l)$ by performing line-search to find the minimum $c > 0$ for which the minimizer $r$ of the following problem satisfies $f(x + r) = l$:
Minimize $c|r| + \text{loss}_f(x + r, l)$ subject to $x + r \in [0, 1]^m$
I have two questions about this passage.
Why is $D(x,f(x))=f(x)$? My interpretation of the definition of $D(x,l)$ from the sentence before is that it is equal to $x+r$ where $r$ is the minimum magnitude vector such that $f(x+r)=l$. It appears that $D(x,f(x))=x$. Am I misunderstanding something here?
Why are we looking for the minimum such $c$? My intuition is that the $c|r|$ term of the problem serves to pull $r$ towards the $0$ vector, while the $\text{loss}_f$ term serves to pull $r$ towards the "perfect" input image representing label $l$, which will likely be away from the $0$ vector. If this intuition is true, then increasing $c$ should pull the minimizer $r$ towards the $0$ vector and reduce its magnitude. So I would think we would want to find the maximum $c$ for which the minimizer $r$ of that expression satisfies $f(x + r) = l$, as this would lead to a smaller perturbation that still causes an adversarial example. What is wrong with this logic?
A:
For your first question -- probably just a typo. For your second question, the lagrangian dual of the first formulation would be
$$\max_{\lambda > 0} \min_{r} |r|+ \lambda \text{loss}(x+r,l)$$
If we set $c = 1/\lambda$ and multiply through, then we have
$$\min_{c,r} c|r|+\text{loss}(x+r,l)$$
To gain better intuition on this part, it might help to search for some geometric illustrations of duality (I think Math Stack Exchange has some nice posts about this).
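Spelling out the "multiply through" step: with $c = 1/\lambda$ and $\lambda > 0$, scaling the inner objective by the positive constant $c$ does not change its minimizer in $r$:

```latex
\min_{r}\; |r| + \lambda\,\mathrm{loss}(x+r,\,l)
\;\;\Longleftrightarrow\;\;
\min_{r}\; \tfrac{1}{\lambda}\,|r| + \mathrm{loss}(x+r,\,l)
\;=\;
\min_{r}\; c\,|r| + \mathrm{loss}(x+r,\,l)
```

So the line search over $c$ in the paper plays the role of the outer maximization over $\lambda$ in the dual, with $c$ and $\lambda$ inversely related.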
Q:
ios - can't get the UITableView cell height to be proportional to the amount of text
I am trying to get the UITableView cells to fit the text. So far I have something like this:
- (UITableViewCell *)tableView:(UITableView *)tableView cellForRowAtIndexPath:(NSIndexPath *)indexPath
{
UILabel *label = nil;
UITableViewCell *cell = [[UITableViewCell alloc] initWithStyle:(UITableViewCellStyleDefault)
reuseIdentifier:@"business"];
NSString *comment = [[items_array objectAtIndex:[indexPath row]] objectForKey:(@"comment")];
cell.textLabel.numberOfLines = 0;
cell.textLabel.lineBreakMode = UILineBreakModeWordWrap;
CGSize constraint = CGSizeMake(320 - (10 * 2), 20000.0f);
CGSize size = [comment sizeWithFont:[UIFont systemFontOfSize:14] constrainedToSize:constraint lineBreakMode:UILineBreakModeWordWrap];
label = [[UILabel alloc] initWithFrame:CGRectZero];
[label setText:comment];
[label setFrame:CGRectMake(10, 10, 320 - (10 * 2), MAX(size.height, 44.0f))];
cell.textLabel.text = comment;
}
I also have this function
- (CGFloat)tableView:(UITableView *)tableView heightForRowAtIndexPath:(NSIndexPath *)indexPath;
{
NSString *comment = [[items_array objectAtIndex:[indexPath row]] objectForKey:(@"comment")];
CGSize constraint = CGSizeMake(320 - (10 * 2), 20000.0f);
CGSize size = [comment sizeWithFont:[UIFont systemFontOfSize:14] constrainedToSize:constraint lineBreakMode:UILineBreakModeWordWrap];
CGFloat height = MAX(size.height, 44.0f);
return height + (10 * 2);
}
But it isn't allocating the right amount of height to the cells of the UITableView. I think I am making some mistakes with how/where I assign the label and the text of the cells.
Please help me understand how it should be.
Thanks!
A:
Does this get you closer?
- (UITableViewCell *)tableView:(UITableView *)tableView cellForRowAtIndexPath:(NSIndexPath *)indexPath
{
UITableViewCell *cell = [tableView dequeueReusableCellWithIdentifier:@"business"];
if (cell == nil)
{
cell = [[UITableViewCell alloc] initWithStyle:(UITableViewCellStyleDefault)reuseIdentifier:@"business"];
}
NSString *comment = [[items_array objectAtIndex:[indexPath row]] objectForKey:(@"comment")];
CGSize constraint = CGSizeMake(320 - (10 * 2), 20000.0f);
CGSize size = [comment sizeWithFont:[UIFont systemFontOfSize:14] constrainedToSize:constraint lineBreakMode:UILineBreakModeWordWrap];
UILabel *label = [[UILabel alloc] initWithFrame:CGRectMake(0, 0, 300, MAX(size.height, 44.0f) + 20.0f)];
label.numberOfLines = 0;
label.lineBreakMode = UILineBreakModeWordWrap;
label.text = comment;
[cell.contentView addSubview:label];
    return cell;
}
Q:
Get Selected index and Selected item from DataGridViewComboBoxCell in unbound column
I have a big problem that has me confused. I have a DataGridView (without data binding) containing a DataGridViewComboBoxColumn (unbound column), and I want to get the selected index or selected item from the ComboBoxCell (my items are custom items).
I tried casting and following this website (http://satishjdotnet.blogspot.com/2009/05/getting-selected-value-of-combo-box-in.html), but I only receive the error:
"Value is not valid"
So how can I solve it?
Please help me. Thanks a lot.
Here is my custom Item in combobox:
public class CustomItem {
public string Text { get; set; }
public object Value { get; set; }
public override string ToString() {
return Text;
}
public CustomItem(string text, object value) {
this.Text = text;
this.Value = value;
}
}
and how I add it to DataGridViewComboBoxColumn:
List<CustomItem> teamItem = new List<CustomItem>();
teamItem.Add(new CustomItem(this._homeTeam["Name"].ToString(), Convert.ToInt32(this._homeTeam["Id"])));
teamItem.Add(new CustomItem(this._awayTeam["Name"].ToString(), Convert.ToInt32(this._awayTeam["Id"])));
foreach (CustomItem i in teamItem) {
((DataGridViewComboBoxColumn)this.dataGridViewGoalInformation.Columns["Team"]).Items.Add(i);
}
A:
Given the CustomItem class, with the Value as an int
public class CustomItem
{
public string Text { get; set; }
public int Value { get; set; }
public override string ToString()
{
return Text;
}
public CustomItem(string text, int value)
{
this.Text = text;
this.Value = value;
}
}
To get the value, make sure to hook up the event: EditingControlShowing
dataGridView1.EditingControlShowing += dataGridView1_EditingControlShowing;
Then, to get the value out of the combobox when it changes: 1) get the combobox control, 2) get its selected value:
private void dataGridView1_EditingControlShowing(object sender, DataGridViewEditingControlShowingEventArgs e)
{
if (dataGridView1.CurrentCell.ColumnIndex == 0 && e.Control is ComboBox)
{
ComboBox comboBox = e.Control as ComboBox;
comboBox.SelectedIndexChanged += ComboBox_SelectedIndexChanged;
}
}
private void ComboBox_SelectedIndexChanged(object sender, EventArgs e)
{
DataGridViewComboBoxEditingControl dataGridViewComboBoxEditingControl = sender as DataGridViewComboBoxEditingControl;
object value = dataGridViewComboBoxEditingControl.SelectedValue;
if (value != null)
{
int intValue = (int)dataGridViewComboBoxEditingControl.SelectedValue;
//...
}
}
Q:
What is a "foo" in category theory?
While browsing through several pages of nLab (mainly on n-categories), I encountered the notion "foo" several times. However, there seems to be no article on nLab about this notion. Is this some kind of category-theorist slang? Please explain to me what this term means.
A:
It's slang, which I've mostly seen used in the context of computing rather than category theory; foo is just a placeholder for something else, as is bar. A logician I know likes talking about widgets and wombats $-$ it all serves the same purpose.
For example, you might say "an irreducible foo is a foo with no proper sub-foos".
Q:
Spark resources not fully allocated on Amazon EMR
I'm trying to maximize cluster usage for a simple task.
Cluster is 1+2 x m3.xlarge, running Spark 1.3.1, Hadoop 2.4, Amazon AMI 3.7
The task reads all lines of a text file and parse them as csv.
When I spark-submit a task in yarn-cluster mode, I get one of the following results:
0 executor: job waits infinitely until I manually kill it
1 executor: job under utilize resources with only 1 machine working
OOM when I do not assign enough memory on the driver
What I would have expected:
The Spark driver runs on the cluster master with all available memory, plus 2 executors with 9404 MB each (as defined by the install-spark script).
Sometimes, when I get a "successful" execution with 1 executor, cloning and restarting the step ends up with 0 executors.
I created my cluster using this command:
aws emr --region us-east-1 create-cluster --name "Spark Test"
--ec2-attributes KeyName=mykey
--ami-version 3.7.0
--use-default-roles
--instance-type m3.xlarge
--instance-count 3
--log-uri s3://mybucket/logs/
--bootstrap-actions Path=s3://support.elasticmapreduce/spark/install-spark,Args=["-x"]
--steps Name=Sample,Jar=s3://elasticmapreduce/libs/script-runner/script-runner.jar,Args=[/home/hadoop/spark/bin/spark-submit,--master,yarn,--deploy-mode,cluster,--class,my.sample.spark.Sample,s3://mybucket/test/sample_2.10-1.0.0-SNAPSHOT-shaded.jar,s3://mybucket/data/],ActionOnFailure=CONTINUE
With some step variations including:
--driver-memory 8G --driver-cores 4 --num-executors 2
install-spark script with -x produces the following spark-defaults.conf:
$ cat spark-defaults.conf
spark.eventLog.enabled false
spark.executor.extraJavaOptions -verbose:gc -XX:+PrintGCDetails -XX:+PrintGCDateStamps -XX:+UseConcMarkSweepGC -XX:CMSInitiatingOccupancyFraction=70 -XX:MaxHeapFreeRatio=70
spark.driver.extraJavaOptions -Dspark.driver.log.level=INFO
spark.executor.instances 2
spark.executor.cores 4
spark.executor.memory 9404M
spark.default.parallelism 8
Update 1
I get the same behavior with a generic JavaWordCount example:
/home/hadoop/spark/bin/spark-submit --verbose --master yarn --deploy-mode cluster --driver-memory 8G --class org.apache.spark.examples.JavaWordCount /home/hadoop/spark/lib/spark-examples-1.3.1-hadoop2.4.0.jar s3://mybucket/data/
However, if I remove the '--driver-memory 8G', the task gets assigned 2 executors and finishes correctly.
So, what's the matter with driver-memory that prevents my task from getting executors?
Should the driver be executed on the cluster's master node alongside the Yarn master container, as explained here?
How do I give more memory to my spark job driver? (Where collects and some other useful operations arise)
A:
The solution to maximize cluster usage is to forget about the '-x' parameter when installing Spark on EMR and to adjust the executors' memory and cores by hand.
This post gives a pretty good explanation of how resources allocation is done when running Spark on YARN.
One important thing to remember is that all executors must have the same resources allocated! As of this writing, Spark does not support heterogeneous executors. (Some work is currently being done to support GPUs, but that's another topic.)
So in order to get maximum memory allocated to the driver while maximizing memory to the executors, I should split my nodes like this (this slideshare gives good screenshots at page 25):
Node 0 - Master (Yarn resource manager)
Node 1 - NodeManager(Container(Driver) + Container(Executor))
Node 2 - NodeManager(Container(Executor) + Container(Executor))
NOTE: Another option would be to spark-submit with --master yarn --deploy-mode client from master node 0. Is there any counterexample showing this is a bad idea?
In my example, I can have at most 3 executors of 2 vcores with 4736 MB each, plus a driver with the same specs.
The 4736 MB figure is derived from the value of yarn.nodemanager.resource.memory-mb defined in /home/hadoop/conf/yarn-site.xml. On an m3.xlarge it is set to 11520 MB (see here for the values associated with each instance type).
Then, we get:
(11520 - 1024) / 2 (executors per node) = 5248 => 5120 (rounded down to a 256 MB increment, as defined by yarn.scheduler.minimum-allocation-mb)
7% of 5120 ≈ 358, which is below the 384 MB minimum memory overhead, so 384 is used (the overhead factor becomes 10% in Spark 1.4)
5120 - 384 = 4736
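The same arithmetic as a quick sanity check (values taken from the steps above; the 384 MB floor is Spark 1.3's minimum executor memory overhead):

```python
node_mb = 11520                      # yarn.nodemanager.resource.memory-mb (m3.xlarge)
increment = 256                      # yarn.scheduler.minimum-allocation-mb

per_executor = (node_mb - 1024) // 2             # two containers per node -> 5248
per_executor -= per_executor % increment         # round down to 256 MB step -> 5120
overhead = max(int(0.07 * per_executor), 384)    # 7% with a 384 MB floor -> 384

print(per_executor - overhead)                   # 4736
```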
Other interesting links:
Apache Spark: setting executor instances does not change the executors
Performance issues for spark on YARN
http://www.wdong.org/wordpress/blog/2015/01/08/spark-on-yarn-where-have-all-my-memory-gone/
Q:
Read and write cookies with Ruby on Rails 4 AND Nginx
I want to read and write cookies with a Ruby on Rails 4 application and Nginx. The first time Ruby on Rails sets a cookie and later Nginx should be able to deliver specific content depending on that cookie (without running through Rails). And vice versa.
How can I write and read unencrypted cookies with Ruby on Rails? It's easy to use encrypted cookies and sessions but I can't get it working with unencrypted cookies.
PS: Security is not an issue at all with this specific application. Actually I want the user to be able to read the cookie.
A:
If you use the following code, the cookie won't be stored in the session, so it will be plain text:
cookies[:test] = "test"
You can check it with
curl -I <URL>
It will return headers including
Set-Cookie: test=test; path=/
BTW, I believe that by default even session cookies aren't encrypted; you have to configure your app for them to be encrypted. However, they are Base64-encoded.
You can take a look at this article:
http://www.andylindeman.com/2013/02/18/decoding-rails-session-cookies.html
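On the Nginx side (a hypothetical sketch; the location and cookie name are just examples), a plain-text cookie named test set by Rails is readable through the built-in $cookie_test variable, so Nginx can branch on it without touching Rails:

```nginx
# Serve alternate content when the "test" cookie equals "test".
location /content {
    if ($cookie_test = "test") {
        rewrite ^ /special.html last;
    }
    try_files $uri $uri/ =404;
}
```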
Q:
503 error code testing
I want to test the availability of a certain image on a server, to find out whether the server where the image is located is OK or not.
I am interested in catching the 503 error code as a WebException. I would like to ask what happens if the image is not located on the server (404) and the service is also unavailable. Does it make sense to test for the 404 as well?
A:
I found out that after a 503 error you cannot get the 404. So if both the 404 and 503 conditions apply, you will see only the 503. Therefore, there is no reason to test for 404 after a 503.
Q:
Transpose all columns to rows based on a column without PIVOT
I am working on a report in which all columns need to be transposed to rows based on a column.
CREATE TABLE TempTable(
Company VARCHAR(5),
ProcessDate DATETIME,
OpExp DECIMAL,
Tax DECIMAL,
Total DECIMAL);
INSERT INTO TempTable VALUES
('Comp1', getdate(), 1000, 100, 1100),
('Comp1', dateadd(year, -1, getdate()), 2000, 200, 2200),
('Comp1', dateadd(year, -2, getdate()), 3000, 300, 3300);
SELECT * FROM TempTable;
But for report, I have to transpose this table into
Here the columns 2015, 2016, 2017 are dynamic; they are based on the 'ProcessDate' year.
I tried with PIVOT but it throws
Incorrect syntax near 'PIVOT'. You may need to set the compatibility level of the current database to a higher value to enable this feature. See help for the SET COMPATIBILITY_LEVEL option of ALTER DATABASE
As the database is already in production, it is not possible to alter the
compatibility level.
I tried with UNION ALL, CASE as suggested in
Simple way to transpose columns and rows in Sql?
But the column data types are different, and I cannot use aggregate functions as the result must include all rows.
Is there any way to convert the columns to rows? Or Is it possible to generate this report using SSRS instead of RDLC?
A:
Here is the SSRS-only way, which is probably the best way to do it...
I've extended your sample data a little in this example to include 9 rows, 3 rows for 3 companies.
Create a new blank report and add your datasets as normal.
Then insert a matrix control on the report.
Drag the fields as shown in the diagram below to initially setup the matrix. We will edit this but it gives us a quick start.
NOTE: Step 3 should read "Drag ProcessDate here..."
Now we need to change from the process date to just the year, so right-click the ProcessDate cell from step 3 and click 'expression'. Set the expression to
=YEAR(Fields!ProcessDate.Value)
Under the report design in the group designer pane, right-click on the [ProcessDate] column group and choose 'group properties' and set the Group On property to the same expression =YEAR(Fields!ProcessDate.Value)
Now back in the report design, right-click the data cell (from step 4 above) and do 'Insert Row' -> 'Inside Group - Below'
Repeat this process to add a 3rd row.
In the two new blank cells, click the field list drop down and choose Tax and Total respectively.
If you want to add a caption column. Right-click the cell with [Company] in it, do 'Insert Column' -> 'Inside Group - Right', then right click the new cell and choose 'split cell' this will unmerge the 3 rows. You can then type a caption for each row.
The design should now look like this.
And that's it really..
The final output looks like this.
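For completeness, the data-type objection to the UNION ALL approach mentioned in the question can be worked around by casting every measure to a common type. This is only a sketch, not part of the SSRS answer above; it unpivots the measures to rows, and the dynamic year columns would still be produced by the report's column group:

```sql
SELECT Company, YEAR(ProcessDate) AS Yr, 'OpExp' AS Item, CAST(OpExp AS VARCHAR(20)) AS Amount
FROM TempTable
UNION ALL
SELECT Company, YEAR(ProcessDate), 'Tax', CAST(Tax AS VARCHAR(20))
FROM TempTable
UNION ALL
SELECT Company, YEAR(ProcessDate), 'Total', CAST(Total AS VARCHAR(20))
FROM TempTable
ORDER BY Company, Item, Yr;
```

UNION ALL works on any compatibility level, unlike PIVOT.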
|
{
"pile_set_name": "StackExchange"
}
|
Q:
Getting stuck in this method returning wrong value from table
public int countGuestvisitnew() throws SQLException {
Log.v("", " count when 1111 click = ");
try {
Cursor c = db.rawQuery("select count(*) from guestvisitsnew", null);
if (c.getCount() > 0) {
int f=c.getCount();
Log.v("", " count1212121 whenbus click =/// " + f);
return f;
}
}catch (Exception e) {
e.printStackTrace();
}
return 0;
}
Always getting count = 1, but there is no data in the guestvisitsnew table. How can I get the row count from the table?
Please help me.
Thank you.
A:
select count(*) from guestvisitsnew
will return one row, and that row contains the count(*) value.
c.getCount()
will then return 1.
Change it to:
int f=c.getInt(0);
if (f > 0) {
Log.v("", " count1212121 whenbus click =/// " + f);
return f;
}
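This behavior is easy to see outside Android as well. Here is a minimal sketch using Python's sqlite3 module, purely to illustrate the same SQL semantics (the question itself uses Android's SQLiteDatabase):

```python
import sqlite3

# In-memory database with an empty table named like the one in the question
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE guestvisitsnew (id INTEGER)")

cur = conn.execute("SELECT count(*) FROM guestvisitsnew")
rows = cur.fetchall()
print(len(rows))    # 1 -- the query always yields exactly one row
print(rows[0][0])   # 0 -- the real row count is inside that row
```

So `getCount()` on the cursor reports the number of result rows (always 1 here), while the table's row count is the value in the first column of that row.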
|
{
"pile_set_name": "StackExchange"
}
|
Q:
Pass And Return A String Into VB.NET Web Method
I am trying to send and return a simple string to a VB.NET web method from Javascript using AJAX. Here is the Javascript/jQuery script I am using:
function jQuerySerial() {
//I SET A VARIABLE TO THE STRING I WANT TO PASS INTO MY WEB METHOD
var str = "Hello World";
//AND TRY TO PASS IT INTO MY VB.NET WEB METHOD
$.ajax({
type: "POST",
url: "test_WebService.asmx/testWebService",
data: str,
contentType: "application/json; charset=utf-8",
dataType: "json",
success: function (e) {
alert("It worked: " + e);
},
error: function (e) {
alert("There was an error retrieving records: " + e);
}
});
}//END jQuerySerial
And here is the very simple VB.net Web Method. The Web Method does nothing but get the string and then pass it back to Javascript:
<WebMethod( )> _
Public Function testWebService(str As String) As String
Return str
End Function
When I attempt to run this the error: function fires and returns a message saying:
"There was an error retrieving records: [object Object]"
I have many, many other Web Methods in this same Web Service class that manipulate database records and they all work. But, this is the first one I have ever tried to write using the $.ajax syntax and return something to the calling Javascript, so I am completely clueless on what's wrong here.
Any suggestions on how to make this work would be appreciated. Thanks
A:
It looks like the issue here is that you're passing a simple string to the Web Service when it is expecting a JSON object. See this article on common issues with jQuery and ASP.NET web services (specifically item 2):
http://encosia.com/3-mistakes-to-avoid-when-using-jquery-with-aspnet-ajax/
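A sketch of the fix that article describes: wrap the string in an object whose property name matches the web method's parameter (str), then serialize it with JSON.stringify so the body is valid JSON:

```javascript
// Build the request body as JSON, not as a bare string.
// The property name must match the web method's parameter name ("str").
const payload = JSON.stringify({ str: "Hello World" });
console.log(payload); // {"str":"Hello World"}

// Then in the $.ajax call, use:
//   data: payload,
// instead of:
//   data: str,
```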
|
{
"pile_set_name": "StackExchange"
}
|
Q:
Is it possible to give commands to a running service?
I am running a Minecraft Feed the beast server as a service.
This is my systemd script:
[Unit]
Description=Een Minecraft Feed The Beast server
[Service]
Environment= MY_ENVIRONMENT_VAR
WorkingDirectory=/root/ftb_minecraft
ExecStart=/bin/bash ServerStart.sh
Restart=always
[Install]
WantedBy=multi-user.target
The minecraft server works now.
But I can't input commands. Normally you get a little server terminal where you can input commands.
Now my question is: is it still possible to input commands through some other means? Something like systemctl ftb command <Insert command here>
A:
The Minecraft server is running in the background, so it is disconnected from the foreground terminal where you might input commands.
It is up to the server to provide a way to interact, such as offering a web-based interface, or a CLI that communicates with the server over a socket.
systemd offers the sd-bus as D-Bus IPC client and the related busctl, but they would only be of any use if the server implemented D-Bus.
Summary: Check the docs of your server to see what's possible.
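Independent of what the server itself offers, one generic workaround is to keep the service's stdin attached to a named pipe and write commands into that pipe. This is only a sketch under assumptions (the pipe path is hypothetical, and it presumes your server reads commands from stdin), not something from the answer above:

```ini
# Create the pipe once:  mkfifo /root/ftb_minecraft/console.fifo
[Service]
WorkingDirectory=/root/ftb_minecraft
# tail -f keeps the pipe open so the server never sees EOF on stdin
ExecStart=/bin/sh -c 'tail -f console.fifo | /bin/bash ServerStart.sh'
```

Commands can then be injected from any shell, e.g. echo "say hello" > /root/ftb_minecraft/console.fifo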
|
{
"pile_set_name": "StackExchange"
}
|
Q:
Webkit not rendering custom scrollbars after making the element visible
I have a page that has custom webkit scrollbars. Inline styling on the body has visibility:hidden on it and it's set to visible in the JavaScript, however the scrollbars aren't drawn unless you mess around with the size of the window (or the preview frame in the pen I link at the end).
I've tried this with Chrome 29 on Windows, and Safari 6.0.5 on Mac.
How can I fix this?
Here's an example
If you open this up in a new tab make sure you refresh it so the tab is active while it's rendering, otherwise the scrollbars will appear.
A:
Try this; I think it will help you:
::-webkit-scrollbar { width: 12px; height: 12px; }
::-webkit-scrollbar-button { background-color: #666; }
::-webkit-scrollbar-track { background-color: #999; }
::-webkit-scrollbar-track-piece { background-color: #ffffff; }
::-webkit-scrollbar-thumb { height: 50px; background-color: #4CC417; border-radius: 3px; }
::-webkit-scrollbar-corner { background-color: #999; }
::-webkit-resizer { background-color: #666; }
|
{
"pile_set_name": "StackExchange"
}
|
Q:
docker container starts new container after stopping or removing
I followed 'get started' tutorial on the official docker web-site, and everything went well until I tried to remove containers. Each time when I run
docker rm -f [container_id]
or
docker stop -f [container_id]
the initial container with that ID becomes stopped or removed, BUT a new container appears and runs. I tried to update the containers with the following command:
docker update --restart=no $(docker ps -aq)
however it doesn't help.
Here is the 'docker info' command result:
$ docker info
Containers: 5
Running: 5
Paused: 0
Stopped: 0
Images: 1
Server Version: 17.05.0-ce
Storage Driver: aufs
Root Dir: /var/lib/docker/aufs
Backing Filesystem: extfs
Dirs: 22
Dirperm1 Supported: true
Logging Driver: json-file
Cgroup Driver: cgroupfs
Plugins:
Volume: local
Network: bridge host macvlan null overlay
Swarm: active
NodeID: qpmjeqewpl5hnneuhm99dp7g7
Is Manager: true
ClusterID: x14jwe053zapko55g7edlh9fp
Managers: 1
Nodes: 1
Orchestration:
Task History Retention Limit: 5
Raft:
Snapshot Interval: 10000
Number of Old Snapshots to Retain: 0
Heartbeat Tick: 1
Election Tick: 3
Dispatcher:
Heartbeat Period: 5 seconds
CA Configuration:
Expiry Duration: 3 months
Node Address: 192.168.0.108
Manager Addresses:
192.168.0.108:2377
Runtimes: runc
Default Runtime: runc
Init Binary: docker-init
containerd version: 9048e5e50717ea4497b757314bad98ea3763c145
runc version: 9c2d8d184e5da67c95d601382adf14862e4f2228
init version: 949e6fa
Security Options:
apparmor
seccomp
Profile: default
Kernel Version: 4.10.0-28-generic
Operating System: Ubuntu 16.04.2 LTS
OSType: linux
Architecture: x86_64
CPUs: 4
Total Memory: 5.74GiB
Name: roman-LIFEBOOK-AH531
ID: BHNU:2PGO:UDAA:MFYE:XGLO:R4PS:27QR:GXF6:KZME:W4EN:C5VO:FIJM
Docker Root Dir: /var/lib/docker
Debug Mode (client): false
Debug Mode (server): false
Username: rmcontkit
Registry: https://index.docker.io/v1/
Experimental: false
Insecure Registries:
127.0.0.0/8
Live Restore Enabled: false
and 'docker inspect [container_id]'
$ docker inspect f53643ac5f9b
[
{
"Id": "f53643ac5f9b105c6b0d09de43eecd01f9d8bc3a20c71a535c1a25818db64775",
"Created": "2017-07-30T15:03:23.009739796Z",
"Path": "python",
"Args": [
"app.py"
],
"State": {
"Status": "running",
"Running": true,
"Paused": false,
"Restarting": false,
"OOMKilled": false,
"Dead": false,
"Pid": 10661,
"ExitCode": 0,
"Error": "",
"StartedAt": "2017-07-30T15:03:30.614115776Z",
"FinishedAt": "0001-01-01T00:00:00Z"
},
"Image": "sha256:413941614a833af57c05119d70cae063e0dc164b919e8ec84c6af595e7225b85",
"ResolvConfPath": "/var/lib/docker/containers/f53643ac5f9b105c6b0d09de43eecd01f9d8bc3a20c71a535c1a25818db64775/resolv.conf",
"HostnamePath": "/var/lib/docker/containers/f53643ac5f9b105c6b0d09de43eecd01f9d8bc3a20c71a535c1a25818db64775/hostname",
"HostsPath": "/var/lib/docker/containers/f53643ac5f9b105c6b0d09de43eecd01f9d8bc3a20c71a535c1a25818db64775/hosts",
"LogPath": "/var/lib/docker/containers/f53643ac5f9b105c6b0d09de43eecd01f9d8bc3a20c71a535c1a25818db64775/f53643ac5f9b105c6b0d09de43eecd01f9d8bc3a20c71a535c1a25818db64775-json.log",
"Name": "/getstartedlab_web.2.yr7k0egu4ketu7m0kdxt7v3o7",
"RestartCount": 0,
"Driver": "aufs",
"MountLabel": "",
"ProcessLabel": "",
"AppArmorProfile": "docker-default",
"ExecIDs": null,
"HostConfig": {
"Binds": null,
"ContainerIDFile": "",
"LogConfig": {
"Type": "json-file",
"Config": {}
},
"NetworkMode": "default",
"PortBindings": {},
"RestartPolicy": {
"Name": "",
"MaximumRetryCount": 0
},
"AutoRemove": false,
"VolumeDriver": "",
"VolumesFrom": null,
"CapAdd": null,
"CapDrop": null,
"Dns": null,
"DnsOptions": null,
"DnsSearch": null,
"ExtraHosts": null,
"GroupAdd": null,
"IpcMode": "",
"Cgroup": "",
"Links": null,
"OomScoreAdj": 0,
"PidMode": "",
"Privileged": false,
"PublishAllPorts": false,
"ReadonlyRootfs": false,
"SecurityOpt": null,
"UTSMode": "",
"UsernsMode": "",
"ShmSize": 67108864,
"Runtime": "runc",
"ConsoleSize": [
0,
0
],
"Isolation": "",
"CpuShares": 0,
"Memory": 52428800,
"NanoCpus": 0,
"CgroupParent": "",
"BlkioWeight": 0,
"BlkioWeightDevice": null,
"BlkioDeviceReadBps": null,
"BlkioDeviceWriteBps": null,
"BlkioDeviceReadIOps": null,
"BlkioDeviceWriteIOps": null,
"CpuPeriod": 100000,
"CpuQuota": 10000,
"CpuRealtimePeriod": 0,
"CpuRealtimeRuntime": 0,
"CpusetCpus": "",
"CpusetMems": "",
"Devices": null,
"DeviceCgroupRules": null,
"DiskQuota": 0,
"KernelMemory": 0,
"MemoryReservation": 0,
"MemorySwap": -1,
"MemorySwappiness": -1,
"OomKillDisable": false,
"PidsLimit": 0,
"Ulimits": null,
"CpuCount": 0,
"CpuPercent": 0,
"IOMaximumIOps": 0,
"IOMaximumBandwidth": 0
},
"GraphDriver": {
"Data": null,
"Name": "aufs"
},
"Mounts": [],
"Config": {
"Hostname": "f53643ac5f9b",
"Domainname": "",
"User": "",
"AttachStdin": false,
"AttachStdout": false,
"AttachStderr": false,
"ExposedPorts": {
"80/tcp": {}
},
"Tty": false,
"OpenStdin": false,
"StdinOnce": false,
"Env": [
"PATH=/usr/local/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
"LANG=C.UTF-8",
"GPG_KEY=C01E1CAD5EA2C4F0B8E3571504C367C218ADD4FF",
"PYTHON_VERSION=2.7.13",
"PYTHON_PIP_VERSION=9.0.1",
"NAME=World"
],
"Cmd": [
"python",
"app.py"
],
"ArgsEscaped": true,
"Image": "rmcontkit/get-started:part1@sha256:e7c1e776eb3eca11213449a8621cc84f989cb127350c3207ae8c610d482c0398",
"Volumes": null,
"WorkingDir": "/app",
"Entrypoint": null,
"OnBuild": null,
"Labels": {
"com.docker.stack.namespace": "getstartedlab",
"com.docker.swarm.node.id": "qpmjeqewpl5hnneuhm99dp7g7",
"com.docker.swarm.service.id": "8dj5jmxiqcip81ay1we2yd8uc",
"com.docker.swarm.service.name": "getstartedlab_web",
"com.docker.swarm.task": "",
"com.docker.swarm.task.id": "yr7k0egu4ketu7m0kdxt7v3o7",
"com.docker.swarm.task.name": "getstartedlab_web.2.yr7k0egu4ketu7m0kdxt7v3o7"
}
},
"NetworkSettings": {
"Bridge": "",
"SandboxID": "e6d8f06b6ca25b73c100921213fe47dd4fc93eaf95db09be2dc0191b989aa139",
"HairpinMode": false,
"LinkLocalIPv6Address": "",
"LinkLocalIPv6PrefixLen": 0,
"Ports": {
"80/tcp": null
},
"SandboxKey": "/var/run/docker/netns/e6d8f06b6ca2",
"SecondaryIPAddresses": null,
"SecondaryIPv6Addresses": null,
"EndpointID": "",
"Gateway": "",
"GlobalIPv6Address": "",
"GlobalIPv6PrefixLen": 0,
"IPAddress": "",
"IPPrefixLen": 0,
"IPv6Gateway": "",
"MacAddress": "",
"Networks": {
"getstartedlab_webnet": {
"IPAMConfig": {
"IPv4Address": "10.0.0.7"
},
"Links": null,
"Aliases": [
"f53643ac5f9b"
],
"NetworkID": "k3as780fjdoywrdxdiabned7u",
"EndpointID": "44933d5909d3ff46c34e9a28317d5eaf025497ab61b951dbf054853415634c6e",
"Gateway": "",
"IPAddress": "10.0.0.7",
"IPPrefixLen": 24,
"IPv6Gateway": "",
"GlobalIPv6Address": "",
"GlobalIPv6PrefixLen": 0,
"MacAddress": "02:42:0a:00:00:07"
},
"ingress": {
"IPAMConfig": {
"IPv4Address": "10.255.0.8"
},
"Links": null,
"Aliases": [
"f53643ac5f9b"
],
"NetworkID": "6um0tpqyj5pkl12xzklmt7vdv",
"EndpointID": "20a9ac122ba2cdcda5827b2d54688426d1df0dafa20afa70bf8f65b387d07b77",
"Gateway": "",
"IPAddress": "10.255.0.8",
"IPPrefixLen": 16,
"IPv6Gateway": "",
"GlobalIPv6Address": "",
"GlobalIPv6PrefixLen": 0,
"MacAddress": "02:42:0a:ff:00:08"
}
}
}
}
]
A:
It is probably because you're running a Docker swarm stack, which restarts tasks to maintain the desired number of replicas. Try to take down the stack with:
docker stack rm getstartedlab
As they suggest on the website.
|
{
"pile_set_name": "StackExchange"
}
|
Q:
OneToMany child entities not persisted on save()
I am running into the following behaviour in hibernate which I can't explain and I was hoping that someone can provide an explanation of what is going on.
I have a bidirectional @OneToMany association between Instructor and Course. Since they have CascadeType.PERSIST, I expect that when I create a Course object, associate it with an Instructor (and vice versa), and then call save() on the instructor, both objects will be persisted in the database. However, this results in only the Instructor being persisted, without the courses.
If I call the persist() method instead, for example session.persist(instructor), both the instructor and the courses are persisted.
Here are my entities and my main method.
Instructor:
@Entity
@Table(name = "instructor")
public class Instructor {
@Id
@GeneratedValue(strategy = GenerationType.IDENTITY)
private int id;
@Column(name = "first_name")
private String firstName;
@Column(name = "last_name")
private String lastName;
@Column(name = "email")
private String email;
@OneToOne(cascade = CascadeType.ALL, fetch = FetchType.LAZY)
@JoinColumn(name = "instructor_detail_id")
private InstructorDetail instructorDetail;
@OneToMany(mappedBy = "instructor", cascade = {CascadeType.DETACH,
CascadeType.MERGE, CascadeType.PERSIST, CascadeType.REFRESH}, fetch = FetchType.LAZY)
private List<Course> courses = new ArrayList<>();
public Instructor() {
}
public Instructor(String firstName, String lastName, String email) {
this.firstName = firstName;
this.lastName = lastName;
this.email = email;
}
public int getId() {
return id;
}
public void setId(int id) {
this.id = id;
}
public String getFirstName() {
return firstName;
}
public void setFirstName(String firstName) {
this.firstName = firstName;
}
public String getLastName() {
return lastName;
}
public void setLastName(String lastName) {
this.lastName = lastName;
}
public String getEmail() {
return email;
}
public void setEmail(String email) {
this.email = email;
}
public InstructorDetail getInstructorDetail() {
return instructorDetail;
}
public void setInstructorDetail(InstructorDetail instructorDetail) {
this.instructorDetail = instructorDetail;
}
public List<Course> getCourses() {
return courses;
}
public void setCourses(List<Course> courses) {
this.courses = courses;
}
@Override
public String toString() {
return "Instructor{" +
"id=" + id +
", firstName='" + firstName + '\'' +
", lastName='" + lastName + '\'' +
", email='" + email + '\'' +
", instructorDetail=" + instructorDetail +
", courses=" + courses +
'}';
}
public void addCourse(Course course) {
courses.add(course);
course.setInstructor(this);
}
}
Course
@Entity
@Table(name = "course")
public class Course {
@Id
@GeneratedValue(strategy = GenerationType.IDENTITY)
private int id;
@Column(name = "title")
private String title;
@ManyToOne(cascade = {CascadeType.DETACH, CascadeType.MERGE,
CascadeType.PERSIST, CascadeType.REFRESH}, fetch = FetchType.LAZY)
@JoinColumn(name = "instructor_id")
private Instructor instructor;
public Course(String title) {
this.title = title;
}
public int getId() {
return id;
}
public void setId(int id) {
this.id = id;
}
public String getTitle() {
return title;
}
public void setTitle(String title) {
this.title = title;
}
public Instructor getInstructor() {
return instructor;
}
public void setInstructor(Instructor instructor) {
this.instructor = instructor;
}
}
Main
public class Test {
public static void main(String[] args) {
//create session factory
SessionFactory sessionFactory = new Configuration()
.configure("com/ysoft/config/hibernate.cfg.xml")
.addAnnotatedClass(Instructor.class)
.addAnnotatedClass(InstructorDetail.class)
.addAnnotatedClass(Course.class)
.buildSessionFactory();
//create session
Session session = sessionFactory.getCurrentSession();
try (sessionFactory) {
session.beginTransaction();
Instructor instructor = new Instructor("John", "Doe", "[email protected]");
InstructorDetail instructorDetail = new InstructorDetail("http://www.something.com", "Coding");
Course mathCourse = new Course("Math");
Course englishCourse = new Course("English");
Course sportsCourse = new Course("Sports");
instructor.addCourse(mathCourse);
instructor.addCourse(englishCourse);
instructor.addCourse(sportsCourse);
instructor.setInstructorDetail(instructorDetail);
session.save(instructor);
session.getTransaction().commit();
System.out.println("Persisted" + instructor);
}
}
}
A:
If I changed the @OneToMany cascade array to CascadeType.ALL then it worked - but I don't know why, since the relevant types were already there.
The child side shouldn't have cascade specified - to quote the latest Hibernate docs:
The @OneToMany association is by definition a parent association, even
if it’s a unidirectional or a bidirectional one. Only the parent side
of an association makes sense to cascade its entity state transitions
to children.
Update:
According to this, it seems that calling save() rather than the JPA EntityManager.persist() also needs the Hibernate-specific cascade type SAVE_UPDATE (which isn't compatible with the JPA @OneToMany definition).
|
{
"pile_set_name": "StackExchange"
}
|
Q:
How do I return results while inside a loop in C#?
Essentially, I have a Do..While loop going through some lines from a text file. I want to process a line, return a value (worked or didn't), then move to the next line.
I have a function called ProcessTXT that accepts 2 strings. Source and Destination of new file.
Is there a way to set a ReturnedValue string = to the result and have the backgroundworker check to see if the variable changed? And if so, add this value to the list box?
private void TranslatePOD(string strSource, string strDest)
{
TextWriter tw = new StreamWriter(strDest);
TextReader tr = new StreamReader(strSource);
do
{
//My Code doing stuff
//Need to send a result somehow now, but i have more work to do in this loop
//Then using tw.writeline() to write my results to my new file
} while (tr.ReadLine() != null);
}
EDIT: Current test code using Yield. My output is "TestingGround.Form1+d__0". Did I do something wrong?
namespace TestingGround
{
public partial class Form1 : Form
{
static IEnumerable<string> TestYield(string strSource)
{
TextReader tr = new StreamReader(strSource);
string strCurLine = System.String.Empty;
while ((strCurLine = tr.ReadLine()) != null)
{
yield return strCurLine;
}
}
public Form1()
{
InitializeComponent();
}
private void button1_Click(object sender, EventArgs e)
{
string MySource = System.String.Empty;
MySource = @"C:\PODTest\Export Script\Export\Shipment List.csv";
listBox1.Items.Add(TestYield(MySource));
}
}
A:
Yield is typically used to return results iteratively, or streaming. There are plenty of examples online. There's one on SO for reading in a file.
As for your edit: listBox1.Items.Add(TestYield(MySource)) adds the enumerable object itself to the list box (which is why its compiler-generated type name is displayed) instead of the lines it yields. Iterate the result and add each line, e.g. foreach (var line in TestYield(MySource)) listBox1.Items.Add(line);
|
{
"pile_set_name": "StackExchange"
}
|
Q:
How to specify NSLineBreakMode in boundingRectWithSize?
[text boundingRectWithSize:BOLIVIASize options:NSStringDrawingUsesLineFragmentOrigin attributes:@{NSFontAttributeName:FONT} context:nil];
That is the new replacement for
- (CGSize) sizeWithFont:(UIFont *)font constrainedToSize:(CGSize)size lineBreakMode:(NSLineBreakMode) lineBreakMode
However, how do I specify the lineBreakMode parameter on boundingRectWithSize?
A:
use NSParagraphStyleAttributeName & NSParagraphStyle:
NSMutableParagraphStyle *paragraph = [[NSMutableParagraphStyle alloc] init];
paragraph.lineBreakMode = NSLineBreakByWordWrapping; //e.g.
CGSize size = [label.text boundingRectWithSize: constrainedSize options:NSStringDrawingUsesLineFragmentOrigin attributes: @{ NSFontAttributeName: label.font, NSParagraphStyleAttributeName: paragraph } context: nil].size;
|
{
"pile_set_name": "StackExchange"
}
|
Q:
Why won't hasNext() become false?
From what I can see, if you have an iterator and you use iterator.hasNext() inside a loop, it will loop through all of the elements of the collection, but in the code here, it never terminates:
//ArrayList<FriendlyBullet> list = . . .;
Iterator<FriendlyBullet> fbi = list.iterator();
while (fbi.hasNext()) { // This loops supposedly infinitely
this.game.list.iterator().next().draw(g2d);
}
I know there are not an infinite or a great amount items in the List. FriendlyBullet is just a class I am using. If there is something that I didn't include in my question that is essential for helping me, please tell me! I am not sure if this is a problem with my syntax or what, any help is greatly appreciated.
A:
This method call
this.game.list.iterator()....
creates a new iterator in each loop iteration. The fbi.next() never gets invoked.
You should reuse fbi as follows:
//ArrayList<FriendlyBullet> list = . . .;
Iterator<FriendlyBullet> fbi = list.iterator();
while (fbi.hasNext()) { // This loops supposedly infinitely
fbi.next().draw(g2d);
}
Note that this happens to be equivalent to
for (FriendlyBullet fb : list)
fb.draw(g2d);
or (if you're on Java 8)
list.forEach(fb -> fb.draw(g2d));
|
{
"pile_set_name": "StackExchange"
}
|
Q:
How do tape backups work?
We are backing up to tapes, but more often than not the tape will be spat out and we will get an error like The requested media failed to mount. The operation was aborted. which, from what I can tell, could mean the tape is full (it doesn't look like it, as the tapes are ~700GB).
We sometimes get these errors, but other times it works.
We are running Windows Small Business Server 2003 SP? (the latest SP and updates)
But my question is, when I put in a tape each day (a tape for each day of the week) does it start from the start of the tape and start copying over data or does it continue off from where the last backup filled it up to?
I'm quite new to tapes and how they work, so links to good resources about how it works would be great too.
A:
Tapes are a serial access (as opposed to random access) medium.
Miniature pixies engrave each bit onto a tiny section of the physical media; they can't be everywhere on the tape at once so they have to do the work in order.
The Wikipedia article on tape storage is a good resource for general questions.
Your question needs far more detail (the specific error you're receiving, what backup software you're using, your configuration) before it could be answered.
Only three of the above statements are true.
|
{
"pile_set_name": "StackExchange"
}
|
Q:
CPU frequency limited when AC adapter is on, but no limit with battery
Here I got a strange problem with my ThinkPad X200.
This notebook uses Intel(R) Core(TM)2 Duo CPU P8600, which has a designed frequency of 2.4GHz. When I use battery only, the maximum scaling frequency can be 2.4GHz, HOWEVER, when I insert the AC adapter, the frequency is limited to 1.6GHz.
The problem still exists even after I have disabled cpufreqd and forced the governor to performance.
That's so weird! Would anybody have an idea?
P.S. My kernel version is 4.19.5.
A:
Yeah, no one cares about my problem; that's a tremendous pity.
However, luckily, I have found a solution to deal with it!
What limits the CPU's maximum frequency? BIOS!
The file /sys/devices/system/cpu/cpu*/cpufreq/bios_limit tells the limitation value of BIOS.
On condition that performance governor is activated, when I use battery only, the value of bios_limit is 2400000, the maximum of the hardware. However, when I connect AC adapter, this value will soon lower to 1600000.
By default, Linux's governor follows bios_limit, so the problem occurs. But we can make Linux ignore it, so the maximum frequency doesn't get stuck. Just set ignore_ppc to 1.
echo 1 | sudo tee /sys/module/processor/parameters/ignore_ppc
And modify /etc/default/grub to automatically set ignore_ppc on reboot. Open it with root privilege and append processor.ignore_ppc=1 to GRUB_CMDLINE_LINUX_DEFAULT, just like this:
GRUB_CMDLINE_LINUX_DEFAULT="quiet splash processor.ignore_ppc=1"
then run:
sudo update-grub
and reboot.
References:
Permanently change maximum CPU frequency
Maximum CPU frequency stuck at low value
I cant set the cpu frequency to maximum
|
{
"pile_set_name": "StackExchange"
}
|
Q:
Connection pooling Postgres with multiple schemas and users
Using Postgres with a schema per customer. For isolation and security. A different user per schema to limit access. Looking for a way to efficiently pool connections across the schemas.
I have tried to use an application-level connection pool (Hikari), but I don't see it being able to pool across schemas efficiently. I don't want to hit limits on Postgres connection counts by growing the number of connections per schema/user. And in general it does not seem to be the most effective way to pool connections if they grow as a factor of the number of schemas.
Also tried pgbouncer but not sure how to configure it effectively for this purpose. Tried to use Hikari on the application side per customer, and pgbouncer to map these to fewer postgres connections. In session mode, pgbouncer seems to be just acting as a proxy and the number of connections grows in line with each connection from Hikari. In transaction mode pgbouncer and Hikari seem to get out of sync somehow, and I get protocol error messages from Postgres.
The problem seems quite similar to this question from a few years back. Unfortunately, I do not see a clear answer on how to manage this type of connection pooling effectively.
So, potentially having quite a few schemas, as per customer, the question is how to properly do connection pooling for Postgres when using multiple schemas and users?
A:
A different user per schema to limit access.
Is this actually effective? Doesn't your application server need to know how to connect as each user, in order to do its job? If I can trick the app server into showing me data from the wrong schema, couldn't I just as easily trick it into connecting as the wrong user before doing so?
If so, I think these two "layers" of security aren't really independent from each other, so they aren't really two layers.
|
{
"pile_set_name": "StackExchange"
}
|
Q:
Where should I submit bug reports for the pattern lock?
I believe i found a bug with the pattern lock.
Where should I submit bug reports for the pattern lock?
Note: this might just be a Sense thing:
Create a pattern lock.
Lock phone.
Wake phone draw pattern. Hold finger on screen and not activate any more points (so that it has your valid pattern)
Let screen go dark.
Release finger.
Rewake phone.
Phone should unlock.
A:
The Android issue tracker is located at http://b.android.com. You can report the issue there.
A:
The Android issue tracker can be found here.
|
{
"pile_set_name": "StackExchange"
}
|
Q:
How should I read a format string of variable length in C++ from stdin?
Sorry for such a stupid question, but I couldn't find any obvious answer.
I need to read from stdin first an int n with the size of an array, and then integer values from a string in the format "1 2 3 4 5 6" with n elements.
If I knew the number of parameters at compile time I could use something like a scanf (or the safe alternatives) with a format string like "%d %d %d %d %d %d", but here I will only know that value at run time.
What would be the best way to do this in C++? Performance is important but more than that safety.
A:
How should I read a format string of variable length in C++ from stdin?
You should not attempt to do such thing. Only ever use constant format strings.
I need to read from stdin first an int n with the size of an array, and then integer values
What would be the best way to do this in C++?
Read one value at a time. Repeat using a loop.
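The loop-based approach can be sketched as follows (the function name and the use of a stream parameter are illustrative, not prescribed by the answer; pass std::cin in a real program):

```cpp
#include <cassert>   // assert is handy for quick self-checks
#include <istream>
#include <sstream>
#include <vector>

// Read a count n, then up to n integers, from any input stream.
// Taking the stream as a parameter keeps the function easy to test.
std::vector<int> read_values(std::istream& in) {
    int n = 0;
    if (!(in >> n) || n < 0) return {};  // reject missing or negative counts
    std::vector<int> values;
    values.reserve(static_cast<std::size_t>(n));
    for (int i = 0; i < n; ++i) {
        int x;
        if (!(in >> x)) break;  // stop safely on malformed or short input
        values.push_back(x);
    }
    return values;
}
```

Since operator>> skips whitespace, the values may be separated by spaces or newlines, which matches the "1 2 3 4 5 6" input format; checking each extraction keeps the code safe on bad input.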
|
{
"pile_set_name": "StackExchange"
}
|
Q:
MVC default document under folders
So in IIS you can set the default document for all site folders to be say "index.aspx".
In MVC how do I do this across a) all directories or failing that b) one directory at a time.
I have a page in [Views]/[Search]/[index.aspx]
This url works - www.[mysite]/search/index
but I can't get it to work under - www.[mysite]/search
I have tried adding this into global.asax > RegisterRoutes
routes.MapRoute(
"Search",
"{action}",
new { controller = "Search", action = "Index" }
);
A:
MVC doesn't use a default document, but a default route.
Your route above shows us that the default page when someone visits your website (http://example.com) will be the Index view contained within the search directory.
The default route that gets generated with a new MVC project looks like this
routes.MapRoute( _
"Default", _
"{controller}/{action}/{id}", _
New With {.controller = "Home", .action = "Index", .id = UrlParameter.Optional} _
)
What this means is that your routing structure would look like
http://example.com/ (showing the "index" view within the "home" folder)
http://example.com/about/ (showing the "index" view within the "about" folder)
http://example.com/about/contact (showing the "contact" view within the "about" folder)
|
{
"pile_set_name": "StackExchange"
}
|
Q:
php date is wrong?
Let's say I have this:
$a11 = date("F j, Y, g:i a", $a['date']);
$newTime = date($a['date'], strtotime('+3 hour'));
$b11 = date("F j, Y, g:i a", $newTime);
echo $a11 . " AND " . $b11;
I know $a['date'] is right because I get: March 22, 2011, 10:22 pm. However, the echo produces: March 22, 2011, 10:22 pm AND March 22, 2011, 10:22 pm when clearly the second part is suppose to be three hours ahead.
What am I doing wrong?
A:
Don't you want:
$newTime = strtotime( '+3 hours',$a['date'] );
$b11 = date("F j, Y, g:i a", $newTime );
The arguments in your original code are swapped: date() takes the format string first and the timestamp second. So date($a['date'], strtotime('+3 hour')) uses your timestamp as a format string; since digits are not format characters, they pass through literally, $newTime ends up holding the same timestamp again, and the three-hour offset is discarded.
|
{
"pile_set_name": "StackExchange"
}
|
Q:
Exclude parent object on binary serialization in C#
I have a question about binary serialization in C#
I need to be able to deep clone objects of class B (along with all it's subobjects in the graph of course). I'd like to implement this by using binary serialization. The discussion if that's the best method is irrelevant in the context of this question.
Say I have this class structure:
public class A
{
private B objB;
}
[Serializable]
public class B : ICloneable
{
private C objC1;
private C objC2;
public object Clone()
{
B clone = Helper.Clone<B>(this);
return (B)clone;
}
}
[Serializable]
public class C
{
int a;
int b;
}
The helper class for deep cloning with binary serialization (I got this method code from somewhere on the net, don't really remember where TBH, but it looks alright)
public static class Helper
{
public static T Clone<T>(T OriginalObject)
{
using (Stream objectStream = new MemoryStream())
{
IFormatter formatter = new BinaryFormatter();
formatter.Serialize(objectStream, OriginalObject);
objectStream.Seek(0, SeekOrigin.Begin);
return (T)formatter.Deserialize(objectStream);
}
}
}
So at some point in my app, I hit following code:
B clone = (B)objA.objB.Clone();
The problem is that the debugger moans about class A not being marked as Serializable.
But I don't want to serialize A; I want to serialize B and its subobjects of class C.
I guess it tries to serialize the parent object A too, because it's all interconnected in the object graph. But is there any way I can exclude it from being serialized at this point?
A:
This quick mock code works perfectly fine, so your problem is elsewhere. Perhaps C has a variable of class A?
public class A
{
private B bObj = new B();
public A()
{
B copy = bObj.Clone() as B;
}
}
[Serializable]
public class B : ICloneable
{
private int test = 10;
public object Clone()
{
return Helper.Clone(this);
}
}
EDIT per comment:
please add the following attribute to your event in B:
[field:NonSerialized]
This will prevent serialization of the event's invocation list, which in turn references A.
Q:
app not setup: this app is still in development mode,Switch to a registered test user or ask an app admin for
app not setup: this app is still in development mode, and you don't have access to it. Switch to a registered test user or ask an app admin for permission
A:
STEP 1:
In Settings -> Basic -> Contact Email, provide your (or any) email address.
STEP 2: In the 'App Review' tab, set
Do you want to make this app and all its live features available to the general public? to Yes
Q:
How to access the iPhone's lock screen and home screen image in Xcode?
I want to set my app's background to the same image as the user's iPhone lock screen, and I would also like to know how to access the home screen image.
Thank you!
A:
Short answer: you can't. Unless there's a public API, things outside your app are blocked.
I don't know of an API to access either of those things.
Q:
REGEXP_LIKE to match a specific word in a comma separated string
I have a field in the database that contains comma separated values, and I am trying to use REGEXP_LIKE to match for a specific word within the string.
Basically, I want to match for the word SE but don't know how to approach it.
The values of the database field can be:
"SE, SEM, NE" This should match
"SI, SE" This should match
"SI, NE" This should not match
I tried the following regex but it does not work well.
'(^|,)(SE)(,|$)'
A:
To match SE between commas or at the start/end of the string, allowing zero or more whitespace characters around it, use
(^|,)\s*SE\s*(,|$)
^^^ ^^^
See the regex demo and the Regulex graph.
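As a quick cross-check outside the database, the same pattern behaves as expected in Python's re engine. (Oracle's REGEXP_LIKE also understands \s in recent versions, but regex flavors do differ, so treat this only as a sketch of the logic.)

```python
import re

# Pattern from the answer: SE bounded by commas or string edges,
# with optional whitespace on either side.
pattern = r"(^|,)\s*SE\s*(,|$)"

cases = {
    "SE, SEM, NE": True,   # standalone SE at the start
    "SI, SE": True,        # standalone SE at the end
    "SI, NE": False,       # no standalone SE at all
    "SEM, NE": False,      # SEM must not match as SE
}

for value, expected in cases.items():
    assert bool(re.search(pattern, value)) == expected, value
print("all cases behave as expected")
```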
Q:
LDAP query doesn't work when there is space in the OU
There are about 20 OUs in AD, and 6 of them have a space in the name, for example "Department Heads" and "Operation Managers". The following works for all the OUs without spaces but doesn't work when there is a space in the OU name.
Any ideas? I tried putting the string in double quotes but nothing helped.
$LdapServer = "FLEX01AD.COLGATE.FILA"
$SearchBase = "OU=Department Heads,DC=COLGATE,DC=FILA"
$LDAPResult = Get-ADUser -SearchBase $searchbase -SearchScope 'subtree' -Server $ldapserver -Filter "employeeID -eq 'U99YBTTXR'" -Properties * | Select -Property userAccountControl, whenChanged
A:
Actually, spaces in the OU is not the issue. I found the solution here
https://community.spiceworks.com/topic/2054880-powershell-count-unique-users-in-group-and-nested-group
this code worked for me
$LDAPResultCount = ($LDAPResult.userAccountControl | select -Unique).count
Q:
How to encrypt and decrypt connection string
I've read several posts but I've yet to find something that helps me.
I've got a simple C# WinForms application that connects to a SQL DB. What I want to do is encrypt this string and decrypt it on the fly. I've found this thread which does what I want - Encrypting & Decrypting a String in C#
but where do I then store the key/salt key? Any help would be great!
A:
You have to put your connection string in app.config, and then you should encrypt your config file. Please refer to this article for more details.
Q:
date format dd.mm.yyyy in C
I want to know if there is a way to read a date from the console in the format dd.mm.yyyy in C. I have a structure with information for the date. I tried with another structure just for the date, with day, month and year in it:
typedef struct
{
int day;
int month;
int year;
} Date;
but the dots are a problem. Any idea?
A:
Try:
Date d;
if (scanf("%d.%d.%d", &d.day, &d.month, &d.year) != 3)
error();
Q:
Swift - How to display thumbnail capture of video in table view cell?
From the initial research I've done, I see where the AVPlayer control is provided by Apple to play video through the iPhone. I would like to be able to embed a thumbnail of the videos in a UITableView. The problem I'm running into is that I can't drag and drop the AVPlayer component onto my custom table view cell. It appears that the AVPlayer control needs to be dropped onto the storyboard like a separate view controller.
It is my understanding that the AVPlayer component has the ability to capture a thumbnail of the video, but if I can't drag and drop the AVPlayer component onto my custom table view cell, how can I grab the thumbnail and display it in the cell? Would I use a UIImageView somehow? Thanks in advance.
A:
I resolved this by using MPMoviePlayerController instead of AVPlayer. Using MPMoviePlayerController, I was able to grab the thumbnail image by specifying the number of seconds into the video to grab the frame to be used for the thumbnail. I then assigned the image obtained to a UIImageView. Hope this helps someone.
Q:
How could I get {{a.b}} to work in Angular
I am a newbie with TypeScript and Angular.
1.in file app.component.ts
import { Component, OnInit } from '@angular/core';
@Component({
selector: 'app-root',
templateUrl: './app.component.html',
styleUrls: ['./app.component.css']
})
export class AppComponent implements OnInit {
title = 'Tour of Heroes';
a = {"hello":"world", "nice": "day"};
b = "hello";
c = "nice";
d : string;
ngOnInit(): void{
this.d = this.b; //or this.d = this.c;
}
}
2.in app.component.html
{{a.b}} // actually I want it to print "world" or "day"
How could I make {{a.b}} print "world"?
ANSWERED by "vishnu s pillai" and "Muhammed Albarmawi":
object['propertyName'] can take a variable,
so {{a[b]}} is the answer
A:
In JavaScript you can access any object property by object.propertyName or with bracket notation like this: object['propertyName']
In your app.component.html just write like this {{a[b]}} or {{a.hello}}
read more about Property accessors
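The same distinction can be seen in plain JavaScript, outside any template:

```javascript
const a = { hello: "world", nice: "day" };
const b = "hello";

// Dot notation looks up a property literally named "b" (which does not
// exist here), while bracket notation evaluates the variable b first.
console.log(a.b);     // undefined
console.log(a[b]);    // "world"
console.log(a.hello); // "world"
```

Angular's template expressions follow the same rule, which is why {{a[b]}} prints "world" while {{a.b}} prints nothing.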
Q:
Parsing XML type file
I'm using Java, and I need to get information from an AutomationML file (an XML-type file). I tried to use JAXB to do that, but in the end I can't get the information I need.
In AML I have one InstanceHierarchy with 3 InternalElements with some attributes, and I need those attributes' values, but using JAXB I get the AttributeName and can't get its value.
public static void main(String[] args) throws Exception {
CAEXFile caex = null;
CAEXFile.InstanceHierarchy ih = null;
try {
JAXBContext jc = JAXBContext.newInstance(CAEXFile.class);
//JAXBContext jc = JAXBContext.newInstance(generated.CAEXFile.InstanceHierarchy.class);
Unmarshaller ums = jc.createUnmarshaller();
CAEXFile aml = (CAEXFile)ums.unmarshal(new File("src\\teste2.aml"));
System.out.println("ins = " + aml.getInstanceHierarchy().get(0).getInternalElement().get(0).getAttribute().get(0).getName());
} catch (JAXBException e) {
System.out.println(e.getMessage());
}
}
The xsd file XSD (CAEX) and AML file AML
Can someone help me using JAXB or give me some directions how to solve this?
Thanks in advance.
A:
You can actually avoid JAXB altogether, which can be useful depending on the rest of your code. If you can use Java 8 perhaps Dynamics would be a nice & direct solution.
XmlDynamic example = new XmlDynamic(xmlStringOrReaderOrInputSourceEtc);
String firstInternalName = example.get("CAEXFile|InstanceHierarchy|InternalElement|@Name").asString();
// TestProduct_1
List<String> allInternalNames = example.get("CAEXFile").children()
.filter(hasElementName("InstanceHierarchy")) // import static alexh.weak.XmlDynamic.hasElementName;
.flatMap(Dynamic::children)
.filter(hasElementName("InternalElement"))
.map(internalElement -> internalElement.get("@Name").asString())
.collect(toList());
// [TestProduct_1, TestResource_1, TestProduct_2, TestProduct_3, TestResource_2]
It's a single lightweight extra dependency, i.e. in Maven:
<dependency>
<groupId>com.github.alexheretic</groupId>
<artifactId>dynamics</artifactId>
<version>2.3</version>
</dependency>
Q:
What does 6" mean under speed in the Monster Manual?
In the AD&D 1st edition Monster Manual, every monster's speed is represented in inches. Obviously, a big monster cannot just move 6 inches per turn, so what does the " mean?
A:
In AD&D 1st edition, inches of movement represent several things.
6" = 60 feet per turn exploring a dungeon. This allows for the normal checks for surprise, mapping, detection of secret doors, etc.
6" = 60 yards per round moving through passageways. Basically, if the person or party is in an interior location that they know, they move at this rate.
6" = 60 yards per round moving outdoors, like in a city.
6" = 6 miles per half-day trekking.
This is all found on page 102 of the AD&D Players Handbook.
And to be complete for range (both spells and missile weapons) there is the following.
1" of range = 10 feet indoors
1" of range = 10 yards outdoors
This represents the ability to lob missiles in an arc outside as opposed to a flat trajectory indoors.
Q:
Print the name of day
How can I improve this code? It takes a number in the range of [1,7] and prints its corresponding day.
#include "stdafx.h"
#include <iostream> // std::cout
#include <vector> // std::vector
#include <string>
using std::cin;
using std::cout;
using std::vector;
using std::string;
int main()
{
vector<string> name_day = { "Saturday" ,"Sunday" , "Monday" , "Tuesday" , "Wednesday" , "Thursday" , "Friday" };
int n ;
cout << "This program takes a number in the range of" << "\n" << "[1 , 7] and prints its corresponding day." << "\n" << "............................................................." << "\n";
while (true)
{
bool flag = true;
cout << "Input a number : ";
cin >> n;
if (!cin) //If input is wrong.
{
// reset failbit
cin.clear();
cin.ignore(std::numeric_limits<std::streamsize>::max(), '\n');
}
//If the condition is not satisfied .
else
{
for (int i = 0; i <= 6; i++)
{
if (n-1 == i)
{
cout << name_day[i] << "\n" << "-----------------------" << "\n";;
flag = false;
}
}//end of for
}
if (flag) { cout << "Bad Entery !." << "\n" << "-----------------------" << "\n"; }
}//end of while
system("pause");
}
A:
Here are some things that may help you improve your program.
Isolate platform-specific code
If you must have stdafx.h, consider wrapping it so that the code is portable:
#ifdef WINDOWS
#include "stdafx.h"
#endif
Make sure you have all required #includes
The code uses std::numeric_limits but doesn't #include <limits>. A program may compile without the explicit #include because some other header might happen to include it, but that is not guaranteed by the standard. To compile reliably, the documented #include files should be used.
Don't use system("pause")
There are two reasons not to use system("cls") or system("pause"). The first is that it is not portable to other operating systems, which you may or may not care about now. The second is that it's a security hole, which you absolutely must care about. Specifically, if some program is defined and named PAUSE or pause, your program will execute that program instead of what you intend, and that other program could be anything. First, isolate these into separate functions such as pause() and then modify your code to call those functions instead of system. Then rewrite the contents of those functions to do what you want using C++. For example:
void pause() {
getchar();
}
Use const where practical
The name_day vector is not and should not be modified, so it should be declared const. Better yet, it could be declared static const.
Use better naming
The vector named name_day is well named because it is descriptive of the contents. However, flag is not a good name because it doesn't suggest the meaning of the variable in the context of the program.
Use string concatenation
The main routine includes this very long line:
std::cout << "This program takes a number in the range of" << "\n" << "[1 , 7] and prints its corresponding day." << "\n" << "............................................................." << "\n";
Each of those is a separate call to operator<< but they don't need to be. Another way to write that would be like this:
std::cout << "This program takes a number in the range of\n"
"[1 , 7] and prints its corresponding day.\n"
".............................................................\n";
This reduces the entire menu to a single call to operator<< because consecutive strings in C++ (and in C, for that matter) are automatically concatenated into a single string by the compiler. It is also more readable.
Fix the spelling error
The error string currently says "Bad Entery !." With corrected punctuation and spelling, that becomes "Bad entry!". However, it would be even better if the error message suggested what was actually wrong and gave some suggestion to the user about what to do about it.
Use indexing rather than a loop
The current code has this peculiar bit of code:
for (int i = 0; i <= 6; i++)
{
if (n-1 == i)
{
std::cout << name_day[i] << "\n" << "-----------------------" << "\n";;
flag = false;
}
}//end of for
Instead of a loop to look up the day, why not simply use an index? Here's one way to do it:
while (true) {
std::cout << "Input a number : ";
std::cin >> n;
if (std::cin && n >= 1 && n <= 7) {
std::cout << name_day[n-1] << "\n----------------\n";
} else {
std::cout << "Bad entry: The number must be in the range 1-7 inclusive\n";
std::cin.clear();
std::cin.ignore(std::numeric_limits<std::streamsize>::max(), '\n');
}
}
Note that this will flush the input stream on any error which differs from the behavior of the current code.
Consider the user of the code
There is currently no graceful way to end the program. It would be nicer if, for instance, a value of 0 or maybe -1 were used as a signal to end the program.
Think carefully about using using
There is nothing inherently wrong with these lines:
using std::vector;
using std::string;
However, since each is only used once, I'd probably omit them and simply use the full namespace in the code:
static const std::vector<std::string> name_day{
"Saturday" ,"Sunday" , "Monday" , "Tuesday" , "Wednesday" , "Thursday" , "Friday" };
Consider using constexpr
Because it is a vector of strings, one can't make name_day a constexpr value, but it could be made one simply by making a minor change:
static constexpr const char* name_day[]{
"Saturday" ,"Sunday" , "Monday" , "Tuesday" , "Wednesday" , "Thursday" , "Friday" };
This also means that the #include <vector> and #include <string> are no longer needed.
A:
This loop
for (int i = 0; i <= 6; i++)
{
if (n-1 == i)
{
cout << name_day[i] << "\n" << "-----------------------" << "\n";;
flag = false;
}
}//end of for
is superfluous. All you need is to use n-1 as the index:
if(n > 0 && n < 8)
{
cout << name_day[n-1] << "\n" << "-----------------------" << "\n";
}
else
{
cout << "Bad Entry !." << "\n" << "-----------------------" << "\n";
}
Also, using system("pause"), while convenient, is a bad habit to get into.
A:
So, my comment is going to be a bit broader about practices
On top of what tinstaafl has said, please do not use while(true). Check for user input instead.
I know it's not your focus, but while(true) can lead to pretty bad design such as using breaks and gotos.
Also, always have a return at the end of your main; enforcing the type signature and having no warnings will sometimes save your day ;)
Finally, comments. Just a piece of advice as you go on: write comments about why you do things, not about what exactly you do.
// if the condition wasn't satisfied or // end of loop are not very informative; the code already gives you a lot of information, you don't need to duplicate it.
Hope it helped
Q:
Finding average of data within a certain range
How do I find the average of data set within a certain range? Specifically I am looking to find the average for a data set for all data points that are within one standard deviations of the original average. Here is an example:
Student_ID Test_Scores
1 3
1 20
1 30
1 40
1 50
1 60
1 95
Average = 42.571
Standard Deviation = 29.854
I want to find all data points that are within one standard deviation of this original average, so within the range (42.571-29.854)<=Data<=(42.571+29.854). And from here I want to recalculate a new average.
So my desired data set is:
Student_ID Test_Scores
1 20
1 30
1 40
1 50
1 60
My desired new average is: 40
Here is my SQL code, which didn't yield my desired result:
SELECT
Student_ID,
AVG(Test_Scores)
FROM
Student_Data
WHERE
Test_Scores BETWEEN (AVG(Test_Scores)-STDEV(Test_Scores)) AND (AVG(Test_Scores)+STDEV(Test_Scores))
ORDER BY
Student_ID
Anyone know how I could fix this?
A:
Use either window functions or do the calculation in a subquery:
SELECT sd.Student_ID, sd.Test_Scores
FROM Student_Data sd CROSS JOIN
(SELECT AVG(Test_Scores) as avgts, STDEV(Test_Scores) as stdts
FROM Student_Data
) x
WHERE sd.Test_Scores BETWEEN avgts - stdts AND avgts + stdts
ORDER BY sd.Student_ID;
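As a sanity check on the arithmetic, the question's numbers can be reproduced with Python's statistics module (SQL Server's STDEV is the sample standard deviation, which is also what statistics.stdev computes):

```python
import statistics

scores = [3, 20, 30, 40, 50, 60, 95]

mean = statistics.mean(scores)   # 42.571...
std = statistics.stdev(scores)   # 29.854... (sample standard deviation)

# Keep only the scores within one standard deviation of the mean,
# then average the survivors.
within = [s for s in scores if mean - std <= s <= mean + std]
assert within == [20, 30, 40, 50, 60]
assert statistics.mean(within) == 40
print("filtered scores:", within)
```

This mirrors what the CROSS JOIN subquery does: compute the aggregates once over the whole set, then filter the rows against them.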
Q:
Number of the rhombuses
Consider an equilateral triangle with side length 10, which is divided perfectly into little triangles with unit side length. Find the number of the little rhombuses with side length at least 2.
(Deleted incorrect attempted work)
A:
Consider the rhombuses directly:
^ * * ` ` ` ^ * * ` `
* ` * ` ` ` * ` * `
* * * ` ` ` * * *
` ` ` ` ` ` ` `
` ` ` ` ` ` `
` ` ` ` ` `
^ * * ` `
* ` * `
* * *
` `
`
There are three ways to orient a rhombus in the triangular grid. For a given orientation there are $T_7=28$ rhombuses of side length 2 in the given equilateral triangle, as evidenced by the carets in the diagram above ($T_n=\frac{n(n+1)}2$ is the $n$th triangular number). Similarly we can find $T_5=15$ rhombuses of side 3, $T_3=6$ of side 4 and $T_1=1$ of side 5 with a given orientation. Adding these counts up and multiplying by three gives the answer as
$$3(28+15+6+1)=150$$
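The count can be cross-checked with a short script. The per-orientation count T(side - 2k + 1) for a rhombus of side k is a closed form inferred from the enumeration above, so treat it as a sketch rather than a proven formula:

```python
def T(n):
    """The n-th triangular number n(n+1)/2."""
    return n * (n + 1) // 2

side = 10
# For one fixed orientation, a rhombus of side k fits T(side - 2*k + 1)
# ways; k runs from 2 to 5 since a side-5 rhombus is the largest that fits.
per_orientation = [T(side - 2 * k + 1) for k in range(2, 6)]
print(per_orientation)           # [28, 15, 6, 1]
print(3 * sum(per_orientation))  # 150, one factor of 3 per orientation
```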
Q:
SQL Sum table with fixed rows?
I've got a query that I want to give a fixed table output (so I can graph it in Excel more easily).
The query is as follows:
SELECT
Case
WHEN ScrapsReasonID = 2339 THEN 'Box 5'
WHEN ScrapsReasonID = 2340 THEN 'Box 6'
WHEN ScrapsReasonID = 2342 THEN 'Box 7'
WHEN ScrapsReasonID = 2343 THEN 'Box 8'
WHEN ScrapsReasonID = 2344 THEN 'Box 9'
Else 'Unknown'
END
AS 'BoxNumber',
count(Case When PartNumberID = '378' Then Scraps End) AS '9.5mm',
count(Case When PartNumberID = '379' Then Scraps End) AS '10.0mm',
count(Case When PartNumberID = '380' Then Scraps End) AS '10.5mm'
FROM [ProcessControl].[dbo].[OutputScrap]
WHERE
MachineId = '93'
And ScrapsReasonID In
(
'2339',
'2340',
'2342',
'2343',
'2344'
)
And PDate Between '22-may-2014' and '29-may-2014'
GROUP BY ScrapsReasonID
This works if all 5 ScrapsReasonIDs definitely occur between the set dates, but if there are only 3, for instance, I only get 3 rows in the result. Is there a way of always returning all 5 ScrapsReasonIDs, with zero values for the 9.5, 10.0 and 10.5 columns if they don't exist?
A:
You want to use a left outer join instead of the case statement. In addition, I'm making the following changes:
Changing the count() to sum(). It seems more reasonable to want to sum the values in a field called Scraps, although I might be wrong.
Added Else 0 to the conditional aggregations. I'm assuming you want 0 when there are no matches rather than NULL.
Removed the condition on ScrapsReasonID. The join does the necessary filtering.
Moved the remaining conditions to the on clause, so the where doesn't turn the outer join into an inner join.
Removed the single quotes around the column names. For SQL Server, use square brackets (I am assuming SQL Server because of the three-part table naming convention).
Changed the date format to the ISO standard YYYY-MM-DD format.
Here is the resulting query
SELECT bn.BoxNumber,
SUM(Case When PartNumberID = '378' Then Scraps Else 0 End) AS [9.5mm],
SUM(Case When PartNumberID = '379' Then Scraps Else 0 End) AS [10.0mm],
SUM(Case When PartNumberID = '380' Then Scraps Else 0 End) AS [10.5mm]
FROM (SELECT 2339 as ScrapsReasonID, 'Box 5' as BoxNumber UNION ALL
SELECT 2340, 'Box 6' UNION ALL
SELECT 2342, 'Box 7' UNION ALL
SELECT 2343, 'Box 8' UNION ALL
SELECT 2344, 'Box 9'
) bn LEFT OUTER JOIN
[ProcessControl].[dbo].[OutputScrap] os
ON os.ScrapsReasonID = bn.ScrapsReasonID AND
os.MachineId = '93' AND
os.PDate Between '2014-05-22' and '2014-05-29'
GROUP BY bn.BoxNumber;
Q:
Can I set Grub to not boot a default OS?
I'd like the computer to wait at the Grub screen for me to make a choice, rather than picking one option and running with it. Is there a way to have no default operating system in Grub?
A:
You can set GRUB_TIMEOUT to a very big value or to -1 (infinite). This is set to 10 seconds by default. To change it, edit the /etc/default/grub file using the following command in a terminal:
gksu gedit /etc/default/grub
After you finish editing that file, save it, and don't forget to update GRUB using the following command:
sudo update-grub
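The edited line in /etc/default/grub would look like the excerpt below (support for -1 varies between GRUB versions, so a very large number of seconds is the portable fallback):

```shell
# Excerpt from /etc/default/grub:
# -1 (where supported) makes GRUB wait indefinitely for a manual choice;
# otherwise use a very large timeout value in seconds.
GRUB_TIMEOUT=-1
```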
Q:
PHP debugging on a Docker host using Vscode
How can I get Xdebug (or another debugger) working in VSCode for a Docker PHP container?
I found this tutorial but it is for other ides not VS code. I haven't been able to find any other guides that come close to explaining how to do anything other than debug locally in VS Code.
https://phauer.com/2017/debug-php-docker-container-idea-phpstorm/
A:
Well, since there haven't been any answers for about a month, here is an article I found that looks like the most comprehensive answer on the web.
This link goes over the directions (way too verbose for here):
https://medium.com/@jasonterando/debugging-with-visual-studio-code-xdebug-and-docker-on-windows-b63a10b0dec
Basically:
Install PHP Debug and PHP IntelliSense Extensions in VS code
Add Xdebug config to your docker compose file
Set up the Debug task
Test the debugger
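For the VS Code side of those steps, the PHP Debug extension reads a launch configuration roughly like the following sketch. The port and the container path /var/www/html are assumptions; they must match your Xdebug settings and Docker volume mapping:

```json
{
    "version": "0.2.0",
    "configurations": [
        {
            "name": "Listen for Xdebug (Docker)",
            "type": "php",
            "request": "launch",
            "port": 9000,
            "pathMappings": {
                "/var/www/html": "${workspaceFolder}"
            }
        }
    ]
}
```

The pathMappings entry is what makes breakpoints work across the container boundary: it tells the debugger which host folder corresponds to the path Xdebug reports from inside the container.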
Q:
How does one 'override' string used in a core file?
I'm trying to change the text "Already a member?" located in commons_core.blocks.inc to a different value. I would preferably like to override this value and not directly edit a file in /profiles/drupal_commons/.
Can anyone recommend a good solution?
A:
For overriding a few text strings the easiest way is probably to use the String Overrides module:
Provides a quick and easy way to replace any text on the site.
UPDATE
The line that sets this particular string is:
$block['content'] = t('Already a member? !login', array('!login' => l(t('Login'), 'user')));
Once it's built up, the string will actually look something like this:
'Already a member? <a href="/user">Login</a>'
That's why you're having difficulty targeting it with the string overrides module at the moment. If you're not developing a multilingual site you might just get away with inspecting the <a> tag in the block, and using that HTML in string overrides. I'm pretty sure it'll work.
A:
The string you are referring to is used on line 167 of the commons_core.block.inc file, which is part of _commons_core_header_login_block_view(), and which is included in the following control statement.
if (drupal_is_front_page()) {
// Only provide a link
$block['content'] = t('Already a member? !login', array('!login' => l(t('Login'), 'user')));
}
You can use the String Overrides module as suggested by Clive. The help provided from the module says:
To replace a string, enter the complete string that is passed through the t() function. String Overrides cannot translate user-defined content; it can only replace strings wrapped in the t() function. To find the strings you can actually change, open up a module and look for t() function calls. Places where %, @, or ! are used means that the translation contains dynamic information (such as the node type or title in the above examples); these are not translated while the text around them is.
In your case, you need to use the string "Already a member? !login" because that is the string passed to t(), not the string containing the login link as replaced by t().
You can also change the code in the settings.php file as suggested by md2. In this case, the code that you need to add to the file is the following one.
$conf['locale_custom_strings_en'] = array(
'Already a member? !login' => 'YOUR STRING',
);
Replace 'YOUR STRING' with the string you want to use.
Which of the two methods you should use depends on how many strings you need to replace. If you have a single string to replace, then you could use the settings.php method, especially if there is just a limited number of users who would need to alter that string, and those users are comfortable doing it without the user interface offered by Drupal, through FTP or SSH instead. If you are going to use the settings.php method, you could write your customizations in a different file that is then included from settings.php; in this way:
You can use the same settings for different domains/sub-domains, without even copying that part into different settings.php files.
You can control who has permission to write the settings.php file, which contains the most sensitive information, without being so restrictive about who can change the other settings available in settings.php.
If, vice versa, you need to change more than one string, your site uses more than two languages (a number that could increase in the near future), or you need more than two users to be able to alter the strings without giving them FTP or SSH access, then you should use the String Overrides module.
Both methods replace every occurrence of the string with the one you supply. It is not possible to replace the string only on specific pages, or in a specific block, using these methods; if another module uses the same string, that string will be replaced with the one you set as well.
As the string is not used from a theme function (which could be overridden with your own theme function), the alternative to the methods I reported here would be to implement a block in your custom module, which is used to replace the block already implemented in Drupal Commons, and which uses the string you want.
In Drupal 7 this would be easier, as it allows a module to implement hook_block_view_alter(), or hook_block_view_MODULE_DELTA_alter(), to alter how a block implemented by another module is rendered.
Q:
Spring Integration - channels and threads
I would like to understand how messages are processed in Spring Integration: serially or in parallel. In particular, I have an inbound channel adapter with a poller and an HTTP outbound gateway. I guess splitters, transformers, header enrichers etc. do not spawn their own threads.
I could have missed them, but are these details specified somewhere in the documentation?
Also, can I programmatically get all the channels in the system?
A:
Channel types are described here.
The default channel type is Direct (the endpoint runs on the caller's thread); QueueChannel and ExecutorChannel provide for asynchronous operations.
context.getBeansOfType(MessageChannel.class)
A:
Actually, "threading" depends on the MessageChannel type:
E.g. DirectChannel (<channel id="foo"/>, the default config) doesn't do anything with threads and just shifts the message from send to the subscriber to handle it. If the handler is an AbstractReplyProducingMessageHandler, it sends its result to the outputChannel, and if the latter is a DirectChannel too, the work is done within the same Thread.
Another example is your inbound channel adapter. In the background there is a scheduled task which executes within a Scheduler thread, and if you poll very often the next poll task might be executed within a new Thread.
The last "rule" also applies to QueueChannel: its handling work is done on a Scheduler thread, too.
ExecutorChannel just hands the handling task to the Executor.
All other info you can find in the Reference Manual provided by Gary Russell in his answer.
Q:
Mail PDF attachments shown as 0 KB
Since I updated to macOS Sierra (10.12.2), PDF attachments are shown with sizes of 0 KB and a preview is not embedded. Resetting Mail and even re-installing the whole macOS system (factory default, no recovery) did not help.
The only way to read the attachments is to view the e-mail raw data (⌥+⌘+u) and then store it into an *.eml file. The created eml file can then be opened with Apple Mail via ⌘+o and the attachment is readable.
What am I missing? My inbox is IMAP not POP3 and my provider is posteo.de.
A:
Go to Preferences -> Accounts -> specific account, and change the pop-up entry for Attachments from "all" to "last" (or however the German "zuletzt" entry is labelled in English).
Q:
How to implode JavaScript into PHP's return function?
I have a PHP function:
function output_errors($errors) {
return '<ul><li>' . implode('</li><li>', $errors) . '</li></ul>';
}
But instead of <ul> and <li>s I wish to use jQuery to show errors:
$(function() {
$.pnotify({
title: 'Update Successful!',
text: '<p>Error text here!</p>',
});
});
I tried a lot of combinations but it doesn't work.
A:
What specifically doesn't work? Just implode with a comma:
function output_errors($errors) {
// json_encode produces a correctly quoted JavaScript string, so quotes
// or newlines inside the error text cannot break the generated script
$error = json_encode(implode(', ', $errors));
echo <<< xyz
<script>
$(function() {
$.pnotify({
title: 'Update Successful!',
text: $error,
});
});
</script>
xyz;
}
Q:
Set Item Level Permissions on a SharePoint online library using CSOM PowerShell
I'm using this script below to upload a file or multiple files in a document library in sharepoint online. I would like to set only the 'read' permission so that only one specific user can read (see) the file.
How can I set the 'read' permission for a specific domain user first and then upload the file?
Why? -> I want to make a folder "paychecks 2019" and then upload a paycheck and set the permissions so that the workers can only see their personal paycheck.
I can also create a folder for each worker and set the read permission on folder level but I prefer to give them all access to "paychecks 2019" and give them permissions on file level during uploading.
Add-Type -Path "C:\Program Files\Common Files\Microsoft Shared\Web Server Extensions\15\ISAPI\Microsoft.SharePoint.Client.dll"
Add-Type -Path "C:\Program Files\Common Files\Microsoft Shared\Web Server Extensions\15\ISAPI\Microsoft.SharePoint.Client.Runtime.dll"
$Context = New-Object Microsoft.SharePoint.Client.ClientContext($SiteURL)
$Creds = New-Object Microsoft.SharePoint.Client.SharePointOnlineCredentials($User,$Password)
$Context.Credentials = $Creds
$List = $Context.Web.Lists.GetByTitle($DocLibName)
$Context.Load($List.RootFolder)
$Context.ExecuteQuery()
$TargetFolder = $Context.Web.GetFolderByServerRelativeUrl($List.RootFolder.ServerRelativeUrl + "/" + $FolderName)
Foreach ($File in (dir $Folder -File))
{
    $FileStream = New-Object IO.FileStream($File.FullName,[System.IO.FileMode]::Open)
    $FileCreationInfo = New-Object Microsoft.SharePoint.Client.FileCreationInformation
    $FileCreationInfo.Overwrite = $true
    $FileCreationInfo.ContentStream = $FileStream
    $FileCreationInfo.URL = $File.Name   # use the file name, not the FileInfo object
    $Upload = $TargetFolder.Files.Add($FileCreationInfo)
    $Context.Load($Upload)
    $Context.ExecuteQuery()
    $FileStream.Dispose()                # release the local file handle after upload
}
A:
You cannot grant a permission on a file before uploading it: until the file exists there is nothing to set the permission on, unless you use a container (folder). If you instead upload the file and then restrict it, the file is visible to everyone for a brief window, typically only a fraction of a second, before the permissions take effect.
If you use folders, you can create the folder, set the read-only permission, and then upload the file. You can hide the fact that you are using folders by creating a default view with "Show all items without folders" enabled, so users see a single flat list of files (still secured by the folders they were uploaded to).
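If the brief exposure window is acceptable, the per-file approach can be sketched with CSOM as follows, appended right after the question's `$Context.ExecuteQuery()` inside the upload loop. This is a minimal sketch, not the answerer's method: it assumes the `$Context` and `$Upload` variables from the question's script, and "worker@contoso.com" is a placeholder login for your tenant.

```powershell
# Break inheritance on the uploaded file's list item. Passing $false for
# copyRoleAssignments starts with a clean slate (the current user is kept
# with Full Control automatically); $true clears any unique sub-scopes.
$Item = $Upload.ListItemAllFields
$Item.BreakRoleInheritance($false, $true)

# Resolve the worker and grant only the built-in Reader role definition.
$User = $Context.Web.EnsureUser("worker@contoso.com")   # placeholder login
$Context.Load($User)
$RoleDef = $Context.Web.RoleDefinitions.GetByType([Microsoft.SharePoint.Client.RoleType]::Reader)
$Binding = New-Object Microsoft.SharePoint.Client.RoleDefinitionBindingCollection($Context)
$Binding.Add($RoleDef)
$Item.RoleAssignments.Add($User, $Binding) | Out-Null
$Context.ExecuteQuery()
```

The folder-first approach from the answer avoids the exposure window entirely, which is why it is the safer choice for payroll data.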
Q:
Earliest opportunity to catch a page view
I have a function I want called on every page view, at the earliest possible opportunity. The code gets the user's IP address and increments a counter by 1 in a database.
This code is a basic flood limiter: if more than x requests are made in interval i, it temporarily bans that IP address. This is why the check needs to run as early as possible and on every page.
Calling in MasterPages Page_Init
This seems to work OK, but sometimes the counter increments by more than 1 (I assume because of URL rewrites or redirects).
Calling in Global.asax on Session_Start
It seems to add ~30 to the counter on each page view.
What's the best way to catch a page view at the earliest possible opportunity, preferably without needing to change every single page on the site?
A:
Inside Global.asax, you can hook HttpApplication.BeginRequest, which is:
the first event in the HTTP pipeline chain of execution when ASP.NET
responds to a request
This will fire for every request (images, CSS, JavaScript, and so on). If you want to count pages only, you could write an HttpModule that increments your counter only for the relevant request extensions.
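The simplest version of this filtering can be done directly in Global.asax without a separate module. A minimal sketch, where `IncrementHitCount` is a hypothetical method standing in for the question's existing database increment:

```csharp
// Global.asax.cs — runs once per request, before any page or
// master-page event, so it catches every page view exactly once.
protected void Application_BeginRequest(object sender, EventArgs e)
{
    string ext = System.IO.Path.GetExtension(Request.Path);

    // Count only page requests, not the images/CSS/JS that the
    // same page view also fetches (the source of the ~30 increments).
    if (string.Equals(ext, ".aspx", StringComparison.OrdinalIgnoreCase))
    {
        string ip = Request.UserHostAddress;
        IncrementHitCount(ip); // hypothetical: your existing DB increment
    }
}
```

A dedicated HttpModule is the better choice if the same check must be reused across several applications.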