date | nb_tokens | text_size | content
---|---|---|---
2018/03/22 | 441 | 1,851 | <issue_start>username_0: For my Website, I have a set background-color of the body, for example
```
body {
background-color: #FFFFFF;
}
```
I also have a button defined in the .html file (let's say it has the ID "button1"), and I want the background-color of the body to change (to, for example, #000000) when the button is being hovered over with a mouse, and change back when the mouse isn't on the button anymore. Is there a way to do this?
I am a new web developer and am still looking at/learning JavaScript.<issue_comment>username_1: Turns out there were some things I could do myself on this, and some things I had to turn to Azure Support for.
Thing I could do myself: Enable a bunch of resource providers
-------------------------------------------------------------
It turned out the subscription did not have a single resource provider enabled. To fix, I found the Subscription blade, and clicked the "Resource Providers" menu item (toward the bottom). That opened up a list where I could register lots of stuff. This also enabled corresponding quotas on the subscription.
Thing I couldn't do myself: Increase e.g. VM quotas
---------------------------------------------------
Some of those quotas, however, I wasn't able to figure out how to turn on myself. Crucially, one of them was for provisioning VMs (including "hidden" ones, e.g. the underlying VMs in an AKS cluster or an App Service). Thankfully, Azure Support was really responsive, and from first contact until I was able to provision stuff took less than a business day.
Upvotes: 2 [selected_answer]<issue_comment>username_2: You only need to register to the *Microsoft.Compute* Resource Provider on the Subscriptions Blade.
* Go to Subscriptions
* Select the desired subscription
* Click on Resource Providers
* Register to the option *"Microsoft.Compute"*
Upvotes: 3 |
2018/03/22 | 1,059 | 3,580 | <issue_start>username_0: I am running a **MySQL** database.
I have the following script:
```
DROP TABLE IF EXISTS `org_apiinteg_assets`;
DROP TABLE IF EXISTS `assessmentinstances`;
CREATE TABLE `org_apiinteg_assets` (
`id` varchar(20) NOT NULL default '0',
`instance_id` varchar(20) default NULL,
PRIMARY KEY (`id`)
) ENGINE= MyISAM DEFAULT CHARSET=utf8 PACK_KEYS=1;
CREATE TABLE `assessmentinstances` (
`id` varchar(20) NOT NULL default '0',
`title` varchar(180) default NULL,
PRIMARY KEY (`id`)
) ENGINE= MyISAM DEFAULT CHARSET=utf8 PACK_KEYS=1;
INSERT INTO assessmentinstances(id, title) VALUES ('14026lvplotw6','One radio question survey');
INSERT INTO org_apiinteg_assets(id, instance_id) VALUES ('8kp9wgx43jflrgjfe','14026lvplotw6');
```
Looks like this
```
assessmentinstances
+---------------+---------------------------+
| id | title |
+---------------+---------------------------+
| 14026lvplotw6 | One radio question survey |
+---------------+---------------------------+
org_apiinteg_assets
+-------------------+---------------+
| id | instance_id |
+-------------------+---------------+
| 8kp9wgx43jflrgjfe | 14026lvplotw6 |
+-------------------+---------------+
```
And I then have the following query (I reduced it to the simplest failing query)
```
SELECT ai.id, COUNT(*) AS `count`
FROM assessmentinstances ai, org_apiinteg_assets a
WHERE a.instance_id = ai.id
AND ai.id = '14026lvplotw6'
AND a.id != '8kp9wgx43jflrgjfe';
```
When I run the query I get this
```
null, 0
```
Until now, all is good. Now, here is my issue, when I recreate both tables with `ENGINE=InnoDB` instead of `ENGINE=MyISAM` and run the same query again, I get this:
```
'14026lvplotw6','0'
```
So 2 things are confusing me:
* Why don't I get the same result?
* How can the `COUNT(*)` return 0 in the second case when it actually returns values for the row, and should therefore be 1?
I am lost, I'd appreciate if anybody could explain this behaviour to me.
**EDIT**:
Interestingly, if I add `GROUP BY ai.id` at the end of the query, it works fine in both cases and returns no rows.<issue_comment>username_1: This happens because you are using an aggregate function without `GROUP BY`; in this case the result for a non-aggregated column is unpredictable (typically it shows the first value encountered during the query).
Try adding a `GROUP BY`
```
SELECT ai.id, COUNT(*) AS `count`
FROM assessmentinstances ai, org_apiinteg_assets a
WHERE a.instance_id = ai.id
AND a.id != '8kp9wgx43jflrgjfe'
AND ai.id = '14026lvplotw6'
GROUP BY ai.id;
```
Remember that using aggregation in the presence of columns not mentioned in the `GROUP BY` is deprecated in SQL and is not allowed in most databases and in more recent versions of MySQL (starting from 5.7).
Upvotes: 3 [selected_answer]<issue_comment>username_2: `EXPLAIN SELECT` for MyISAM returns: `Impossible WHERE noticed after reading const tables`. So MyISAM isn't processing any data at all.
For the InnoDB there are two rows of `EXPLAIN` results: one `Using Index` and one `Using where`. So InnoDB data is being scanned, and bits of it slip into the output, as there is no aggregate function specified for the first column and AFAIK it's not specified what should happen in such a situation. If you directly specify some aggregate function, then [if there are no matching rows, it will return NULL](https://dev.mysql.com/doc/refman/5.7/en/group-by-functions.html). So, for example, `SELECT min(ai.id), COUNT(*) ...` would return `NULL, 0`.
Upvotes: 1 |
2018/03/22 | 838 | 2,912 | <issue_start>username_0: I thought this would be pretty straightforward, but I have about 80 databases in the server I am looking at, each database has 5-500 tables.
I am wondering how I can search for a TABLE NAME across everything. I tried a basic
```
SELECT
*
FROM sys.tables
```
but I only get 6 results.<issue_comment>username_1: This is a bit of a hack, but I think it should work:
```
sp_msforeachdb 'select ''?'' from ?.information_schema.tables where table_name=''YourTableName''';
```
It will output the names of the DBs that contain a table with the given name.
Here's a version using `print` that is a little better IMHO:
```
sp_msforeachdb '
if exists(select * from ?.information_schema.tables where table_name=''YourTableName'')
print ''?'' ';
```
The above queries are using [sp\_msforeachdb](https://www.mssqltips.com/sqlservertip/1414/run-same-command-on-all-sql-server-databases-without-cursors/), a stored procedure that runs a given query on all databases present on the current server.
Upvotes: 2 <issue_comment>username_2: Just because I really dislike loops I wanted to post an alternative to answers already posted that are using cursors.
This leverages dynamic sql and the sys.databases table.
```
declare @SQL nvarchar(max) = ''
select @SQL = @SQL + 'select DatabaseName = name from [' + name + '].sys.tables where name = ''YourTableName'' union all '
from sys.databases
set @SQL = stuff(@SQL, len(@SQL) - 9, 11, '') --removes the last UNION ALL
exec sp_executesql @SQL
```
Upvotes: 2 <issue_comment>username_3: Here's a bit of a simpler option using dynamic sql. This will get you the name of all tables in every database in your environment:
```
declare @table table (idx int identity, name varchar(max))
insert @table
select name from master.sys.databases
declare @dbname varchar(max)
declare @iterator int=1
while @iterator<=(select max(idx) from @table) begin
select @dbname=name from @table where idx=@iterator
exec('use ['+@dbname+'] select name from sys.tables')
set @iterator=@iterator+1
end
select * from @table
```
Upvotes: 0 <issue_comment>username_4: This version uses FOR XML PATH('') instead of string concatenation, eliminates the default system databases, handles databases with non-standard names and supports a search pattern.
```
DECLARE @pattern NVARCHAR(128) = '%yourpattern%';
DECLARE @sql NVARCHAR(max) = STUFF((
SELECT 'union all select DatabaseName = name from ' + QUOTENAME(d.name) + '.sys.tables where name like ''' + @pattern + ''' '
FROM sys.databases d
WHERE d.database_id > 4
FOR XML path('')
), 1, 10, '');
EXEC sp_executesql @sql;
```
You might need to write:
```
select DatabaseName = name collate Latin1_General_CI_AS
```
I know I did.
Upvotes: 2 <issue_comment>username_5: Dim sql As String = ("Select \* from " & ComboboxDatabaseName.Text & ".sys.tables")
use this key
Upvotes: -1 |
2018/03/22 | 657 | 2,415 | <issue_start>username_0: I am trying to access a data storage of the azure stack. The following instructions work:
```
BlobClient = StorageAccount.CreateCloudBlobClient();
CloudBlobContainer myContainer = BlobClient.GetContainerReference("mycontainer");
```
But it crashes when creating the blob via `myContainer.CreateIfNotExists()`:
>
> ...StatusMessage:The value for one of the HTTP headers is not in
> the correct format.\r\n ErrorCode:\r\nErrorMessage:The value for one
> of the HTTP headers is not in the correct format.\n
> RequestId:"hiddenId"...
>
>
>
The behavior is exactly the same for Queues and Tables.
I tried the "Microsoft.WindowsAzure.Storage" library in its older version 7.2.1, version 8.7 and the current version 9.1. (8.7 should be fine regarding [this](https://learn.microsoft.com/en-us/azure/azure-stack/user/azure-stack-storage-dev) documentation. [Another documentation](https://learn.microsoft.com/en-us/azure/azure-stack/user/azure-stack-acs-differences) even says that every version between 6.2.0 and 8.7.0 should be compatible. 9.1 is not supported for sure.) In the end it is always the above error.
The code runs fine when targeting public Azure storages instead Azure Stack storages.<issue_comment>username_1: This error could be a result if **BlobRequestOptions** and **OperationContext** have not been set, Could you provide values to both then check again? The documentation can be found here: <https://learn.microsoft.com/en-us/dotnet/api/microsoft.windowsazure.storage.blob.cloudblobcontainer.createifnotexists?redirectedfrom=MSDN&view=azure-dotnet#overloads>
Upvotes: 0 <issue_comment>username_2: So I finally found out the solution with kind support of Microsoft.
It was indeed a versioning problem, as one can find out by reading [this](https://learn.microsoft.com/en-us/azure/azure-stack/user/azure-stack-storage-dev) documentation the right way.
I am using Azure Stack version `1.0.180103.2`. This is the internal version number and means it has been created on the 3rd of January 2018. This refers to version 1801 in the versioning scheme used within the documentation.
The API version 8.7 is only valid from 1802 on. So I have to look at the bottom of that documentation page under "previous". And there one can see that I have to go back to "Microsoft.WindowsAzure.Storage" library 6.2. Using this library it works.
Upvotes: 3 [selected_answer] |
2018/03/22 | 1,183 | 3,895 | <issue_start>username_0: Last time I build successfully but after adding the number of pages I am unable to build, I did search for that error a lot but nothing works for me.For example, **added and removed the platform**, **ionic cordova clean** , **ionic cordova build** etc…
Following Error facing by running-- **ionic cordova build android**
What went wrong:
Execution failed for task ‘**:app:transformDexArchiveWithExternalLibsDexMergerForD
ebug’.**
java.lang.RuntimeException: java.lang.RuntimeException: com.android.builder.de
xing.DexArchiveMergerException: Unable to merge dex
Try:
Run with --stacktrace option to get the stack trace. Run with --info or --debug
option to get more log output.
Get more help at <https://help.gradle.org>
BUILD FAILED in 7s
**IOnic Info:**
cli packages: (C:\Users\ITSERV\AppData\Roaming\npm\node\_modules)
@ionic/cli-utils : 1.19.2
ionic (Ionic CLI) : 3.20.0
global packages:
cordova (Cordova CLI) : 8.0.0
local packages:
@ionic/app-scripts : 3.1.8
Cordova Platforms : android 7.0.0
Ionic Framework : ionic-angular 3.9.2
System:
Node : v6.11.2
npm : 3.10.10
OS : Windows 7
**Thanks in advance!**<issue_comment>username_1: I did struggle with this issue for 2 days after i used google maps plugin inside my app and i suppose it is because of google-play-service conflicts and i resolved it by install the latest grade release by using this command. Hope it helps anyone else facing this issue
```
ionic cordova plugin add cordova-android-play-services-gradle-release
```
Upvotes: 3 <issue_comment>username_2: What helped me was running:
```
cordova clean
```
Upvotes: 5 <issue_comment>username_3: I solve with
```
Cordova Clean Android
Cordova Build Android
```
Upvotes: 2 <issue_comment>username_4: I solved this issue by running simply this command and then build the project. don't go into complicated answers like upgrading Gradle or google packages etc..
```
cordova clean android
ionic cordova build android
```
Upvotes: 4 [selected_answer]<issue_comment>username_5: I have the same issue
>
> (Unable to merge dex Ionic)
>
>
>
when I created the Ionic app. When I run the app the
>
> BUILD FAILED in 11s
> 37 actionable tasks: 3 executed
>
>
>
error occurred. First I used the following solution, but it didn't work for me, though I guess it worked for others.
```
configurations.all {
resolutionStrategy {
force 'com.android.support:support-v4:26.1.0'
}
}
```
Finally this is what helped me to overcome the issue of failing to build the app.
```
cordova clean android
ionic cordova build android
ionic cordova run android (To re-launch the app)
```
Upvotes: 1 <issue_comment>username_6: Above commands didn't fix my problem.
I change the following line in project.properties in platforms folder
```
com.google.android.gms:play-services-analytics:11.0.1
```
to
```
com.google.android.gms:play-services-analytics:+
```
and it's fixed
Upvotes: 0 <issue_comment>username_7: I also faced this issued. I solved it via two steps:
**Step 1 : cordova clean**
**Step 2: cordova build**
Upvotes: 1 <issue_comment>username_8: Copying this answer from [this Github issue comment](https://github.com/phonegap/phonegap-plugin-barcodescanner/issues/606#issuecomment-373283725) from [<NAME>](https://github.com/tgardner), since anything else didn't work for me:
>
> Fix for the Facebook plugin was to force the **Facebook SDK** to version **4.25.0**.
>
>
>
> ```
> cordova plugin add cordova-plugin-platform-replace
>
> ```
>
> Then add to your **config.xml**
>
>
>
> ```
>
>
>
>
> ```
>
> Finally run `cordova prepare android`
>
>
> Works with the latest 1.9.1 version of cordova-facebook-plugin4
>
>
>
Upvotes: 0 <issue_comment>username_9: I encountered a similar problem. Installing the plugin helped me
```
cordova plugin add phonegap-plugin-multidex
```
Upvotes: -1 |
2018/03/22 | 520 | 1,885 | <issue_start>username_0: Below I have a migration for the "test" model which uses it's own primary key, a String instead of an Integer.
```
class CreateTest < ActiveRecord::Migration[5.1]
def change
create_table :test, id: false do |t|
t.string :id, primary_key: true
t.timestamps
end
end
end
```
Now we have the "client" model that `t.references` test.
```
class CreateClients < ActiveRecord::Migration[5.1]
def change
create_table :clients do |t|
t.references :test, null: false
t.timestamps
end
end
end
```
The issue is that `t.references` assumes it's an integer id.
```
# == Schema Information
#
# Table name: clients
#
# id :integer not null, primary key
# test_id :integer not null
# created_at :datetime not null
# updated_at :datetime not null
```
This is obviously wrong as `Test.id` is a string.
Is there some magic I need to do to have the `t.references` "know" that it's a string based on the model or something?
Thanks.<issue_comment>username_1: Could you not use:
```
class CreateClients < ActiveRecord::Migration[5.1]
def change
create_table :clients do |t|
t.string :test_id, null: false
t.timestamps
end
end
end
```
Upvotes: 2 <issue_comment>username_2: Add this to the migration with the `references`:
```
type: :string
```
>
> The reference column type. Defaults to :integer.
>
>
>
[You can read more here.](http://api.rubyonrails.org/classes/ActiveRecord/ConnectionAdapters/SchemaStatements.html#method-i-add_reference)
Upvotes: 4 [selected_answer]<issue_comment>username_3: Just add a `type` to your definition of the reference:
```
t.references :test, type: :string, null: false
```
Upvotes: 2 |
2018/03/22 | 306 | 1,046 | <issue_start>username_0: I would like to add to all elements with class `.double-text` attribute `data-content` with element's content value.
I tried something like that
```
$(function(){
$('.text-double').attr("data-content", $(this).text());
});
```
But, obviously, it doesn't work.
Can someone help me, please? |
2018/03/22 | 598 | 1,993 | <issue_start>username_0: Basically in cells B42 and B43 I have more than 255 characters and my code breaks and gives RUN type error mismatch 13.
when I am running the following line:
```
CopyTranspose wb.Sheets("Apple").Range("B17:B46"), shtDest.Cells(pasteRow, "R")
```
I get a mismatch error in here:
```
Sub CopyTranspose(rngCopy As Range, rngDest As Range)
rngDest.Resize(rngCopy.Columns.Count, rngCopy.Rows.Count).Value = _
Application.Transpose(rngCopy.Value)
End Sub
```<issue_comment>username_1: If you Google you will find that `Application.Transpose` has several odd limitations. But you can write your own `TransposeArray` function and see if that works better.
```
Public Function TransposeArray(myarray As Variant) As Variant
Dim x As Long
Dim y As Long
Dim Xlower As Long, Xupper As Long
Dim Ylower As Long, Yupper As Long
Dim tempArray As Variant
Xlower = LBound(myarray, 2)
Ylower = LBound(myarray, 1)
Xupper = UBound(myarray, 2)
Yupper = UBound(myarray, 1)
ReDim tempArray(Xlower To Xupper, Ylower To Yupper)
For x = Xlower To Xupper
For y = Ylower To Yupper
tempArray(x, y) = myarray(y, x)
Next y
Next x
TransposeArray = tempArray
End Function
```
And just use it like `Application.Transpose`
```
Sub CopyTranspose(rngCopy As Range, rngDest As Range)
rngDest.Resize(rngCopy.Columns.Count, rngCopy.Rows.Count).Value = _
TransposeArray(rngCopy.Value)
End Sub
```
Upvotes: 2 [selected_answer]<issue_comment>username_2: If you simply want to copy and transpose a range, then this will do:
```
wb.Sheets("Apple").Range("B17:B46").Copy
shtDest.Cells(pasteRow, "R").PasteSpecial Transpose:=True
```
Or if you want to copy only values without formats, then
```
shtDest.Cells(pasteRow, "R").PasteSpecial xlPasteValues, Transpose:=True
```
EDIT:
The issue may be with `Application.Transpose(rngCopy.Value)`, as transpose expects an array or range of cells, not values.
Upvotes: 0 |
2018/03/22 | 589 | 2,053 | <issue_start>username_0: I'm trying to write an SQL statement that would indicate what kind of commission is being given. I'm stuck on the last part that shows what percentage is given because SQL won't convert a char value to money.
I'm new to this so my conversion attempt may be entirely bogus.
```
CASE
WHEN VehicleSales.CommissionSP1 ='500' then 'Flat'
WHEN VehicleSales.CommissionSP1 = '0' then 'Not Applicable'
else Convert(int, VehicleSales.CommissionSP1) / Convert (int, VehicleSales.GrossProfit)
END AS Commission_Type
```
|
2018/03/22 | 2,029 | 7,279 | <issue_start>username_0: I recently tried to add a launch(splash) screen to my app, But an error is occurring all the time i try to run the app by the emulator.
this is the error log:
```
E/AndroidRuntime: FATAL EXCEPTION: main
Process: com.aidin.workbook, PID: 29714
java.lang.RuntimeException: Unable to start activity ComponentInfo{com.aidin.workbook/com.aidin.workbook.SplashActivity}: android.content.res.Resources$NotFoundException: Drawable com.aidin.workbook:drawable/background_splash with resource ID #0x7f07005c
at android.app.ActivityThread.performLaunchActivity(ActivityThread.java:2892)
at android.app.ActivityThread.handleLaunchActivity(ActivityThread.java:3027)
at android.app.servertransaction.LaunchActivityItem.execute(LaunchActivityItem.java:78)
at android.app.servertransaction.TransactionExecutor.executeCallbacks(TransactionExecutor.java:101)
at android.app.servertransaction.TransactionExecutor.execute(TransactionExecutor.java:73)
at android.app.ActivityThread$H.handleMessage(ActivityThread.java:1786)
at android.os.Handler.dispatchMessage(Handler.java:106)
at android.os.Looper.loop(Looper.java:164)
at android.app.ActivityThread.main(ActivityThread.java:6656)
at java.lang.reflect.Method.invoke(Native Method)
at com.android.internal.os.RuntimeInit$MethodAndArgsCaller.run(RuntimeInit.java:438)
at com.android.internal.os.ZygoteInit.main(ZygoteInit.java:823)
Caused by: android.content.res.Resources$NotFoundException: Drawable com.aidin.workbook:drawable/background_splash with resource ID #0x7f07005c
Caused by: android.content.res.Resources$NotFoundException: File res/drawable/background_splash.xml from drawable resource ID #0x7f07005c
at android.content.res.ResourcesImpl.loadDrawableForCookie(ResourcesImpl.java:821)
at android.content.res.ResourcesImpl.loadDrawable(ResourcesImpl.java:630)
at android.content.res.Resources.getDrawableForDensity(Resources.java:877)
at android.content.res.Resources.getDrawable(Resources.java:819)
at android.content.Context.getDrawable(Context.java:626)
at android.support.v4.content.ContextCompat.getDrawable(ContextCompat.java:351)
at android.support.v7.widget.AppCompatDrawableManager.getDrawable(AppCompatDrawableManager.java:200)
at android.support.v7.widget.TintTypedArray.getDrawableIfKnown(TintTypedArray.java:87)
at android.support.v7.app.AppCompatDelegateImplBase.(AppCompatDelegateImplBase.java:128)
at android.support.v7.app.AppCompatDelegateImplV9.(AppCompatDelegateImplV9.java:149)
at android.support.v7.app.AppCompatDelegateImplV11.(AppCompatDelegateImplV11.java:29)
at android.support.v7.app.AppCompatDelegateImplV14.(AppCompatDelegateImplV14.java:54)
at android.support.v7.app.AppCompatDelegateImplV23.(AppCompatDelegateImplV23.java:31)
at android.support.v7.app.AppCompatDelegateImplN.(AppCompatDelegateImplN.java:31)
at android.support.v7.app.AppCompatDelegate.create(AppCompatDelegate.java:198)
at android.support.v7.app.AppCompatDelegate.create(AppCompatDelegate.java:183)
at android.support.v7.app.AppCompatActivity.getDelegate(AppCompatActivity.java:519)
at android.support.v7.app.AppCompatActivity.onCreate(AppCompatActivity.java:70)
at com.aidin.workbook.SplashActivity.onCreate(SplashActivity.java:13)
at android.app.Activity.performCreate(Activity.java:7117)
at android.app.Activity.performCreate(Activity.java:7108)
at android.app.Instrumentation.callActivityOnCreate(Instrumentation.java:1262)
at android.app.ActivityThread.performLaunchActivity(ActivityThread.java:2867)
at android.app.ActivityThread.handleLaunchActivity(ActivityThread.java:3027)
at android.app.servertransaction.LaunchActivityItem.execute(LaunchActivityItem.java:78)
at android.app.servertransaction.TransactionExecutor.executeCallbacks(TransactionExecutor.java:101)
at android.app.servertransaction.TransactionExecutor.execute(TransactionExecutor.java:73)
at android.app.ActivityThread$H.handleMessage(ActivityThread.java:1786)
at android.os.Handler.dispatchMessage(Handler.java:106)
at android.os.Looper.loop(Looper.java:164)
at android.app.ActivityThread.main(ActivityThread.java:6656)
at java.lang.reflect.Method.invoke(Native Method)
```
My Android Studio, Gradle and build tools are all up to date.
I tried to clean and rebuild the project many times, but it didn't help, and none of the related answers out there were useful for this issue; I tried all of them.
this is my style.xml:
```
<!-- Customize your theme here. -->
<item name="colorPrimary">@color/colorPrimary</item>
<item name="colorPrimaryDark">@color/colorPrimaryDark</item>
<item name="colorAccent">@color/colorAccent</item>
<item name="android:windowBackground">@drawable/background\_splash</item>
<item name="windowActionBar">false</item>
<item name="windowNoTitle">true</item>
```
and this is my background\_splash.xml in res/drawable directory:
```
xml version="1.0" encoding="utf-8"?
```
and this is the AndroidManifest.xml:
```
xml version="1.0" encoding="utf-8"?
```
Finally here is SplashActivity.java:
```
package com.aidin.workbook;
import android.content.Intent;
import android.support.annotation.Nullable;
import android.support.v7.app.AppCompatActivity;
import android.os.Bundle;
public class SplashActivity extends AppCompatActivity {
@Override
protected void onCreate(@Nullable Bundle savedInstanceState) {
setTheme(R.style.SplashTheme);
super.onCreate(savedInstanceState);
Intent intent = new Intent(this, HomeActivity.class);
startActivity(intent);
finish();
}
}
```
This is the build.gradle:
```
apply plugin: 'com.android.application'
android {
compileSdkVersion 26
defaultConfig {
applicationId "com.aidin.workbook"
minSdkVersion 21
targetSdkVersion 26
buildToolsVersion "28.0.0-rc1"
versionCode 1
versionName "1.0"
        testInstrumentationRunner "android.support.test.runner.AndroidJUnitRunner"
}
buildTypes {
release {
minifyEnabled false
proguardFiles getDefaultProguardFile('proguard-android.txt'),
'proguard-rules.pro'
}
}
}
dependencies {
implementation fileTree(dir: 'libs', include: ['*.jar'])
implementation 'com.android.support:appcompat-v7:26.1.0'
implementation 'com.android.support:design:26.1.0'
implementation 'com.android.support.constraint:constraint-layout:1.0.2'
testImplementation 'junit:junit:4.12'
androidTestImplementation 'com.android.support.test:runner:0.5'
    androidTestImplementation 'com.android.support.test.espresso:espresso-core:2.2.2'
}
```<issue_comment>username_1: In your ***background\_splash.xml***,you have to change **bitmap src file to a drawable file that has to be in png format not in xml format**
like this **drawbles/ic\_launcher.png**
```xml
xml version="1.0" encoding="utf-8"?
```
Upvotes: 1 <issue_comment>username_2: ```
xml version="1.0" encoding="utf-8"?
```
comment the contents inside as above
Upvotes: 0 |
2018/03/22 | 518 | 1,699 | <issue_start>username_0: Super new to CSS and need some help! I want to hover over some text, and have my image show up in s specific spot on my page and not pop-up around the text. I tried changing the location of the photo, but no matter what I try, it stays in the same place, hovering around the text. Your help would be much appreciated!
```css
#image {
display: none;
position: relative;
right: 10%;
}
#text:hover+#image {
display: block;
}
```
```html
Some Text

```<issue_comment>username_1: Change the position of the image to `absolute`.
```css
#image {
display: none;
position: absolute;
right: 10%;
}
#text:hover + #image {
display: block;
}
```
```html
Some Text

```
Upvotes: 3 [selected_answer]<issue_comment>username_2: Although username_1 answered the questions but here is css code with an extra few lines .
This will make the image visible only when the **#text** is hover . Not the whole line .
```css
#image {
display: none;
position: absolute;
right: 10%;
}
#text {
width: 70px;
}
#text:hover+#image {
display: block;
}
```
```html
Some Text

```
Upvotes: 2 <issue_comment>username_3: Only change the position relative to absolute, and then you can move the image
where you want, like this
```css
#image {
display: none;
position: absolute;
right: 10%;
top:100px;
}
#text:hover + #image {
display: block;
}
```
```html
Some Text

```
Upvotes: 0 |
2018/03/22 | 478 | 1,985 | <issue_start>username_0: I understand that the minimum part size for uploading to an S3 bucket is 5MB
Is there any way to have this changed on a per-bucket basis?
The reason I'm asking is that there is a list of raw objects in S3 which we want to combine into a single object in S3.
Using PUT part/copy we are able to "glue" objects into a single one, providing that all objects except the last one are >= 5MB. However, sometimes our raw objects are not big enough, and in this case when we try to complete the multipart upload we get the famous error "Your proposed upload is smaller than the minimum allowed size" from AWS S3.
Any other idea how we could combine S3 objects without downloading them first?<issue_comment>username_1: There is no way to have the minimum part size changed
You may want to either;
1. Stream them together to AWS (which does not seem like an option, otherwise you would already be doing this)
2. Pad the file so it fills the minimum size of 5MB (which may or may not be feasible for you, since this will increase your bill). You will have the option to either use *infrequent access (when you access these files rarely)* or *reduced redundancy (when you can recover lost files)* if you think it can be applied to these specific files in order to reduce the impact.
3. Use an external service that will zip (or "glue" together) your files and then re-upload to S3. I don't know if such a service exists, but I am pretty sure you can implement it yourself using a lambda function (I have even tried something like this in the past; <https://github.com/gammasoft/zipper-lambda>)
Upvotes: 0 <issue_comment>username_2: "However sometimes our raw objects are not big enough... "
You can have a 5MB garbage object sitting on S3 and do concatenation with it where part 1 = 5MB garbage object, part 2 = your file that you want to concatenate. Keep repeating this for each fragment and finally use the range copy to strip out the 5MB garbage
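The trick described above can be sketched roughly as follows. This is a hedged, illustrative sketch assuming the AWS SDK for Java v1 multipart-copy API; the bucket and key names are made up, and error handling is omitted:

```java
import java.util.Arrays;
import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3ClientBuilder;
import com.amazonaws.services.s3.model.*;

public class GluePartsSketch {
    public static void main(String[] args) {
        AmazonS3 s3 = AmazonS3ClientBuilder.defaultClient();

        // Start a multipart upload for the combined object.
        InitiateMultipartUploadResult init = s3.initiateMultipartUpload(
                new InitiateMultipartUploadRequest("my-bucket", "combined"));

        // Part 1: the 5MB "garbage" object (server-side copy, no download).
        CopyPartResult p1 = s3.copyPart(new CopyPartRequest()
                .withUploadId(init.getUploadId()).withPartNumber(1)
                .withSourceBucketName("my-bucket").withSourceKey("garbage-5mb")
                .withDestinationBucketName("my-bucket").withDestinationKey("combined"));

        // Part 2: the small fragment that is below the 5MB minimum on its own.
        CopyPartResult p2 = s3.copyPart(new CopyPartRequest()
                .withUploadId(init.getUploadId()).withPartNumber(2)
                .withSourceBucketName("my-bucket").withSourceKey("small-fragment")
                .withDestinationBucketName("my-bucket").withDestinationKey("combined"));

        // Complete the upload; "combined" now starts with the 5MB prefix.
        s3.completeMultipartUpload(new CompleteMultipartUploadRequest(
                "my-bucket", "combined", init.getUploadId(),
                Arrays.asList(p1.getPartETag(), p2.getPartETag())));

        // Finally, strip the 5MB prefix with a ranged part-copy into a new
        // multipart upload, e.g.:
        //   new CopyPartRequest()...
        //       .withFirstByte(5L * 1024 * 1024)
        //       .withLastByte(totalSize - 1)
    }
}
```

Repeating the part-copy/range-copy pair per fragment keeps everything server-side, so nothing is ever downloaded.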
Upvotes: 2 [selected_answer] |
2018/03/22 | 807 | 2,838 | <issue_start>username_0: I'm writing some code size analysis tool for my C program, using the output `ELF` file.
I'm using `readelf --debug-dump=info` to generate a DWARF-format file.
I've noticed that my compiler is adding, as part of the optimization, new constants that are not in the DWARF file to the `.rodata` section.
So the `.rodata` section size includes their sizes, but I don't have their sizes in DWARF.
Here is an example fro map file:
```
*(.rodata)
.rodata 0x10010000 0xc0 /<.o file0 path>
0x10010000 const1
0x10010040 const2
.rodata 0x100100c0 0xa /<.o file1 path>
fill 0x100100ca 0x6
.rodata 0x100100d0 0x6c /<.o file2 path>
0x100100d0 const3
0x100100e0 const4
0x10010100 const5
0x10010120 const6
fill 0x1001013c 0x4
```
In file1 above, although I didn't declare any const variable, the compiler did: this const is taking space in *.rodata*, yet there is no symbol/name for it.
Here is the code inside some function that generates it:
```
uint8 arr[3][2] = {{146,179},
{133, 166},
{108, 141}} ;
```
So the compiler adds some const values to optimize the loading of the array.
How can I extract these hidden additions from the data sections?
I want to be able to fully characterize my code - How much space is used in each file, etc... |
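Since DWARF stays silent about these anonymous literals, one workaround is to total the `.rodata` contributions per object file straight from the linker map instead — a minimal sketch that assumes a GNU-ld-style map like the excerpt in the question (the column layout varies per toolchain, so the pattern may need adjusting):

```python
import re
from collections import defaultdict


def rodata_sizes(map_text):
    """Sum .rodata sizes per object file from lines like
    '.rodata 0x10010000 0xc0 /path/file0.o'."""
    sizes = defaultdict(int)
    pattern = r"\.rodata\s+0x[0-9a-fA-F]+\s+(0x[0-9a-fA-F]+)\s+(\S+)"
    for size_hex, obj in re.findall(pattern, map_text):
        sizes[obj] += int(size_hex, 16)
    return dict(sizes)


sample = """
 .rodata 0x10010000 0xc0 /obj/file0.o
 .rodata 0x100100c0 0xa /obj/file1.o
 .rodata 0x100100d0 0x6c /obj/file2.o
"""
print(rodata_sizes(sample))  # {'/obj/file0.o': 192, '/obj/file1.o': 10, '/obj/file2.o': 108}
```

The `fill` padding lines in the map would need a second pattern if you want byte-exact totals.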
2018/03/22 | 1,027 | 4,419 | <issue_start>username_0: How can I set the response header for each call in my application made with Spring Boot?
I would like to try to use a filter to intercept all the calls and be able to set the response header.
I followed the guide [Disable browser caching HTML5](https://stackoverflow.com/questions/49401531/disable-browser-caching-html5/49402824#49402824), but it only set the request header, and not always.<issue_comment>username_1: Implement `Filter` and register it with the `@Component` annotation. The `@Order(Ordered.HIGHEST_PRECEDENCE)` is used for advice execution precedence.
```
@Component
@Order(Ordered.HIGHEST_PRECEDENCE)
public class NoCacheWebFilter implements Filter {
private static final Logger logger = LoggerFactory.getLogger(NoCacheWebFilter.class);
@Override
public void init(FilterConfig filterConfig) throws ServletException {
logger.debug("Initiating WebFilter >> ");
}
@Override
public void doFilter(ServletRequest request, ServletResponse response,
FilterChain chain) throws IOException, ServletException {
HttpServletRequest req = (HttpServletRequest) request;
HeaderMapRequestWrapper requestWrapper = new
HeaderMapRequestWrapper(req);
// implement your logic to add headers
//requestWrapper.addHeader("remote_addr", "");
chain.doFilter(requestWrapper, response);
}
@Override
public void destroy() {
logger.debug("Destroying WebFilter >> ");
}
}
```
Upvotes: 0 <issue_comment>username_2: There are three ways to do this:
1. Set the response for a specific controller, in the Controller class:
```java
@Controller
@RequestMapping(value = DEFAULT_ADMIN_URL + "/xxx/")
public class XxxController
....
@ModelAttribute
public void setResponseHeader(HttpServletResponse response) {
response.setHeader("Cache-Control", "no-cache");
....
}
```
or
```java
@RequestMapping(value = "/find/employer/{employerId}", method = RequestMethod.GET)
public List<Employee> getEmployees(@PathVariable("employerId") Long employerId, final HttpServletResponse response) {
response.setHeader("Cache-Control", "no-cache");
return employeeService.findEmployeesForEmployer(employerId);
}
```
2. Or you can put the response header for **each** call in the application (this is for Spring annotation-based, otherwise see [automatically add header to every response](https://stackoverflow.com/questions/16190699/automatically-add-header-to-every-response)):
```java
@Component
public class Filter extends OncePerRequestFilter {
....
@Override
protected void doFilterInternal(HttpServletRequest request, HttpServletResponse response, FilterChain filterChain)
throws ServletException, IOException {
//response.addHeader("Access-Control-Allow-Origin", "*");
//response.setHeader("Cache-Control", "no-cache, no-store, must-revalidate"); // HTTP 1.1.
response.setHeader("Cache-Control", "no-store"); // HTTP 1.1.
response.setHeader("Pragma", "no-cache"); // HTTP 1.0.
response.setHeader("Expires", "0"); // Proxies.
filterChain.doFilter(request, response);
}
}
```
3. The last way I found is using an Interceptor that extends HandlerInterceptorAdapter; for more info see <https://www.concretepage.com/spring/spring-mvc/spring-handlerinterceptor-annotation-example-webmvcconfigureradapter>
* create your Interceptor that extends HandlerInterceptorAdapter:
```java
public class HeaderInterceptor extends HandlerInterceptorAdapter {
@Override
public boolean preHandle(HttpServletRequest httpServletRequest, HttpServletResponse httpServletResponse, Object handler) {
httpServletResponse.setHeader("Cache-Control", "no-store"); // HTTP 1.1.
httpServletResponse.setHeader("Pragma", "no-cache"); // HTTP 1.0.
httpServletResponse.setHeader("Expires", "0"); // Proxies.
return true;
}
}
```
* In your MvcConfig that extends WebMvcConfigurerAdapter you must override the addInterceptors method and add the new Interceptor:
```java
@Override
public void addInterceptors(InterceptorRegistry registry) {
....
registry.addInterceptor(new HeaderInterceptor());
}
```
I hope I was helpful!
Upvotes: 5 [selected_answer] |
2018/03/22 | 1,087 | 4,640 | <issue_start>username_0: I am using maven assembly plugin to zip my web application dist folder.
I use this `descriptorRef` file:
```
<assembly>
  <id>webapp-build</id>
  <formats><format>zip</format></formats>
  <includeBaseDirectory>false</includeBaseDirectory>
  <fileSets>
    <fileSet>
      <directory>dist</directory>
      <outputDirectory>.</outputDirectory>
      <includes><include>**/*</include></includes>
    </fileSet>
  </fileSets>
</assembly>
```
and I use it as a dependency in a parent pom like this:
```
<plugin>
  <artifactId>maven-assembly-plugin</artifactId>
  <version>3.1.0</version>
  <dependencies>
    <dependency>
      <groupId>com.company</groupId><artifactId>build-tools</artifactId><version>1.0.0-SNAPSHOT</version>
    </dependency>
  </dependencies>
  <executions>
    <execution>
      <id>webapp-build</id>
      <phase>package</phase>
      <goals><goal>single</goal></goals>
      <configuration>
        <descriptorRefs><descriptorRef>webapp-build</descriptorRef></descriptorRefs>
        <outputDirectory>${project.assembly.directory}</outputDirectory>
      </configuration>
    </execution>
  </executions>
</plugin>
```
Depending on the child using this parent pom, I would like to tell in which directory, the maven assembly plugin has to use the `webapp-build` assembly descriptor. I tried with attribute but it is not using it. Any ideas ? |
2018/03/22 | 1,798 | 7,402 | <issue_start>username_0: I have written a function in a component to handle errors returned from a REST service and determine what error message should be shown to the user. The method takes an error object (custom data with a specific structure from the REST service) as an argument, drills down to find the relevant content and then, using a switch statement, sends a JSON key that is used by an i18n service. It is below (I know it isn't the greatest):
```
private myErrorHandler(err: any): string {
// Why doesn't typescript support null-conditional?
if (err.error && err.error.errors && err.error.errors[0] && err.error.errors[0].error) {
const errorMsg = err.error.errors[0].error;
const errorValue = err.error.errors[0].value;
const translationArgs: any = {errorValue: null};
let correctMsg;
if (errorValue) {
translationArgs.errorValue = errorValue; // this line gives me the TypeScript compiler error TS2339: Property 'errorValue' does not exist on type {}
}
switch (errorMsg) {
case 'not_unique': {
correctMsg = errorValue ? 'common.validation.not_unique_value' : 'common.validation.not_unique';
break;
}
default: {
correctMsg = 'common.messages.global_error';
break;
}
}
return this.localizationService.translate(correctMsg, translationArgs as any);
}
return this.localizationService.translate('common.messages.global_error');
}
```
My problem is that sometimes I wish to interpolate some of the returned error data into the returned message as an argument, the method above allows me to do this but the way I set this, like so, raises a TypeScript compiler error:
```
if (errorValue) {
translationArgs.errorValue = errorValue;
}
```
How can I prevent this compiler error? I thought that giving the object const translationArgs an errorValue property and assigning it to null would be enough... but I was / am wrong. Any advice would be appreciated.
I am also aware that the method / function is a little ugly, so any comments on that are appreciated too.
(as requested) This is the content of my tsconfig file
```
{
"compilerOptions": {
"alwaysStrict": true,
"emitDecoratorMetadata": true,
"experimentalDecorators": true,
"lib": [
"es6",
"dom"
],
"module": "commonjs",
"moduleResolution": "node",
"noFallthroughCasesInSwitch": true,
"noImplicitAny": true,
"noImplicitReturns": true,
"noUnusedLocals": true,
"noUnusedParameters": true,
"outDir": "dist",
"pretty": true,
"sourceRoot": "frontend",
"rootDir": "frontend",
"sourceMap": true,
"target": "es5",
"types": [
"node",
"mocha",
"chai",
"chai-as-promised",
"aws-sdk",
"q",
"sinon",
"file-saver"
],
"typeRoots": [
"./node_modules/@types"
]
},
"include": [
"frontend/**/*.ts"
],
"exclude": [
".git",
".idea",
"config",
"dist",
"e2e_tests",
"gulp",
"node_modules",
"reports",
"server",
"typings/browser.d.ts"
],
"awesomeTypescriptLoaderOptions": {
"useWebpackText": true
},
"angularCompilerOptions": {
"debug": false
},
"compileOnSave": false,
"buildOnSave": false,
"atom": {
"rewriteTsconfig": false
}
}
``` |
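For the TS2339 in the question above: the message "does not exist on type {}" means the compiler inferred the literal's type rather than honouring `any`, and one fix is declaring an explicit type with an optional property — a hedged sketch with illustrative names:

```typescript
interface TranslationArgs {
  errorValue?: string | null;
}

const translationArgs: TranslationArgs = { errorValue: null };
const errorValue: string | undefined = "not_unique";

if (errorValue) {
  // The property is declared on TranslationArgs, so no TS2339 here
  translationArgs.errorValue = errorValue;
}

console.log(translationArgs.errorValue);  // not_unique
```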
2018/03/22 | 812 | 2,586 | <issue_start>username_0: I have a problem: for school, we are continuing a project another group started last semester.
The first table: testresult4values
[](https://i.stack.imgur.com/hIcBC.png)
The next table: event
[](https://i.stack.imgur.com/4tBce.png)
The third and last table: sec\_user
[](https://i.stack.imgur.com/B3Uku.png)
If I run this query:
```
SELECT sec_user.accountid, MIN(sec_user.firstname) as firstname, MIN(sec_user.lastname) as lastname, convert(date, event.starttime) as startdate, sum(cast(value as int)) as value , sum(cast(value2 as int)) as value2, value3, value4
FROM testresult4values as result
JOIN event
ON result.eventid = event.eventid
JOIN sec_user
ON result.userid = sec_user.accountid
group by value3, value4, sec_user.accountid, starttime
```
I get the following output:
[](https://i.stack.imgur.com/5d6BU.png)
As you can see in line 8,9 and 10 the person with accountid 32 has 3 outputs, when I only want 2. I know why this is, it is because he is involved in 2 events on the same date. This is happening because of connection errors and there is nothing we can do about that. The priority is on other parts of the application. What I want is to combine those 2 rows with the same date so the result is:
8| 32| firstname | lastname | 2017-12-13 | 16 | 22
9| 32| firstname | lastname | 2018-03-09 | 18 | 30
Thanks in advance!<issue_comment>username_1: You have to `GROUP BY` your date with the time removed. Swap `starttime` in your grouping with:
```
GROUP BY convert(date, event.starttime)
```
Right now you are *seeing* the date with the time removed because of your select, but the lines aren't rolling up because time is being considered in the grouping.
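The effect is easy to reproduce outside SQL Server; a runnable SQLite sketch (its `date()` standing in for `convert(date, ...)`) shows how grouping on the raw timestamp keeps same-day rows apart while grouping on the truncated date merges them:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE event (accountid INT, starttime TEXT, value INT)")
conn.executemany("INSERT INTO event VALUES (?, ?, ?)", [
    (32, "2017-12-13 09:00:00", 10),
    (32, "2017-12-13 14:30:00", 6),   # same account, same day, new time
    (32, "2018-03-09 11:00:00", 18),
])

groups_by_time = conn.execute(
    "SELECT COUNT(*) FROM (SELECT 1 FROM event GROUP BY accountid, starttime)"
).fetchone()[0]
groups_by_date = conn.execute(
    "SELECT COUNT(*) FROM (SELECT 1 FROM event GROUP BY accountid, date(starttime))"
).fetchone()[0]

print(groups_by_time, groups_by_date)  # 3 2
```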
Upvotes: 3 [selected_answer]<issue_comment>username_2: `GROUP By` startdate not starttime and you should get your expected results.
```
SELECT sec_user.accountid, MIN(sec_user.firstname) as firstname, MIN(sec_user.lastname) as lastname, convert(date, event.starttime) as startdate, sum(cast(value as int)) as value , sum(cast(value2 as int)) as value2, value3, value4
FROM testresult4values as result
JOIN event
ON result.eventid = event.eventid
JOIN sec_user
ON result.userid = sec_user.accountid
group by value3, value4, sec_user.accountid, convert(date, event.starttime)
```
Upvotes: 1 |
2018/03/22 | 539 | 1,822 | <issue_start>username_0: I have a table (called new\_table). It consists of 4 fields. id (with PK and NOT NULL) and field1, field2, and field3 (which have do not have NOT NULL).
The only data populated in the table are the 1 and 2 in the id field. (I'll include a screenshot below) Now I want to populate the rest of those two rows with some values by using the query below.
```
INSERT INTO new_table (field1, field2, field3)
VALUES
('value1', 'value2', 'value3'),
('value4', 'value5', 'value6');
```
And here is what I always get:
>
> "ERROR: null value in column "id" violates not-null constraint
> DETAIL: Failing row contains (null, value1, value2, value3)."
>
>
>
Is it thinking that I'm trying to populate the third and fourth rows? How do I just populate the rest of rows 1 and 2 with those values?
[table currently looks like this](https://i.stack.imgur.com/WwvM8.png) |
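The root cause above: `INSERT` always creates brand-new rows — here with no `id` supplied, hence the not-null violation — while filling columns of rows that already exist is `UPDATE`'s job. A runnable sketch of the distinction; SQLite is used only to keep it self-contained, and the statements shown work the same way in Postgres:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE new_table (id INTEGER PRIMARY KEY, field1 TEXT, field2 TEXT, field3 TEXT)"
)
conn.execute("INSERT INTO new_table (id) VALUES (1), (2)")  # the two existing rows

# UPDATE fills in the existing rows instead of trying to append new ones
conn.execute("UPDATE new_table SET field1 = 'value1', field2 = 'value2', field3 = 'value3' WHERE id = 1")
conn.execute("UPDATE new_table SET field1 = 'value4', field2 = 'value5', field3 = 'value6' WHERE id = 2")

rows = conn.execute("SELECT id, field1, field3 FROM new_table ORDER BY id").fetchall()
print(rows)  # [(1, 'value1', 'value3'), (2, 'value4', 'value6')]
```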
2018/03/22 | 847 | 3,177 | <issue_start>username_0: I'm trying to read a basic txt file that contains prices in euros. My program is supposed to loop through these prices and then create a new file with the other prices. Now, the problem is that java says it cannot find the first file.
It is in the exact same package like this:
[](https://i.stack.imgur.com/PFvJF.png)
Java already fails at the following code:
```
FileReader fr = new FileReader("prices_usd.txt");
```
Whole code :
```
import java.io.*;
public class DollarToEur {
public static void main(String[] arg) throws IOException, FileNotFoundException {
FileReader fr = new FileReader("prices_usd.txt");
BufferedReader br = new BufferedReader(fr);
FileWriter fw = new FileWriter("prices_eur");
PrintWriter pw = new PrintWriter(fw);
String regel = br.readLine();
while(regel != null) {
String[] values = regel.split(" : ");
String beschrijving = values[0];
String prijsString = values[1];
double prijs = Double.parseDouble(prijsString);
double newPrijs = prijs * 0.913;
pw.println(beschrijving + " : " + newPrijs);
regel = br.readLine();
}
pw.close();
br.close();
}
}
```<issue_comment>username_1: Your file looks to be named "prices\_usd" and your code is looking for "prices\_usd.txt"
Upvotes: 1 <issue_comment>username_2: It is bad practice to put resource files (like `prices_usd.txt`) in a package. Please put it under the `resources/` directory. If you put it directly in the `resources/` directory, you can access the file like this:
```
new FileReader(new File(this.getClass().getClassLoader().getResource("prices_usd.txt").getFile()));
```
But if you really have a good reason to put it in the package, you can access it like this:
```
new FileReader("src/main/java/week5/practicum13/prices_usd.txt");
```
But this will not work when you export your project (for example: as a jar).
**EDIT 0:** Also of course, your file's name needs to be "prices\_usd.txt" and not just "prices\_usd".
**EDIT 1:** The first (recommended) solution does return a string on `.getFile()` which can not directly be passed to the `new File(...)` constructor when the application is built / not run in the IDE. [Spring](https://spring.io/) has a solution to it though: [org.springframework.core.io.ClassPathResource](https://docs.spring.io/spring-framework/docs/current/javadoc-api/org/springframework/core/io/ClassPathResource.html).
Simply use this code with Spring:
```
new FileReader(new ClassPathResource("prices_usd.txt").getFile());
```
Upvotes: 0 <issue_comment>username_3: There are a couple of things you need to do:
1. Put the file directly under the project folder in Eclipse. *When your execute your code in Eclipse, the project folder is considered to be the working directory. So you need to put the file there so that Java can find it.*
2. Rename the file correctly with the `.txt` extn. *From your screen print it looks like the file does not have an extension or may be it's just not visible.*
---
Hope this helps!
Upvotes: 1 [selected_answer] |
2018/03/22 | 642 | 2,247 | <issue_start>username_0: Github Repo: [react-select](https://github.com/JedWatson/react-select)
After searching in the select box:
[](https://i.stack.imgur.com/Yy3IN.png)
After typing text that is not in the dropdown and pressing enter, I want to hide the dropdown box.
My implementation:
```
<Select
  ref={input => this.input = input }
  name="form-field-name"
  searchable
  autoBlur
  clearable={false}
  openOnFocus
  onInputKeyDown={this.onInputKeyDown.bind(this)}
  value={this.state.selectedOption}
  onChange={this.handleChange.bind(this)}
  options={this.props.items}
/>
```
Using `onInputKeyDown` I am detecting the enter keycode. What do I do to remove the dropdown when 'No results found' is shown?
```
onInputKeyDown(e) {
if (e.keyCode === keys.ENTER) {
console.log('on input key down');
// How to detect 'No results found' shown?
// And then, how to close the dropdown?
}
}
```<issue_comment>username_1: Try using the `noResultsText` prop. Set it to `null` whenever you would want to hide it.
Upvotes: 1 <issue_comment>username_2: In V2 you can achieve this by setting `noOptionsMessage` to a function that returns `null`:
```
<Select noOptionsMessage={() => null} />
```
This will prevent the fallback option from displaying completely. Note that setting `noOptionsMessage` to `null` directly will result in an error; the expected prop type here is a function.
Upvotes: 5 <issue_comment>username_3: **First method:**
Turn off the `Menu` component to hide the dropdown list.
```
<Select
  components={{
    Menu: () => null
  }}
/>
```
**Second method:**
Turn off the dropdown conditionally, e.g. when there is no value in the input.
```
// Select.js
import { Menu } from './Menu'
-------
// Menu.js
import { components } from 'react-select'
export const Menu = props => {
if (props.selectProps.inputValue.length === 0) return null
return (
<components.Menu {...props} />
)
}
```
Upvotes: 2 <issue_comment>username_4: If you want to hide the menu when no more options are available you can try to override the `MenuList` component. This worked for me:
```js
const MenuList = ({ children, ...props }: MenuListProps) => {
  return Array.isArray(children) && children?.length > 0 ? (
    <components.MenuList {...props}>{children}</components.MenuList>
  ) : null;
};
```
Upvotes: 0 |
2018/03/22 | 491 | 1,443 | <issue_start>username_0: I have a data frame that has about 25 columns and I want to rename each column to a set name with the column number. For example, column 4 will be called Col4, column 5 will be called Col5, etc.
I could write my R code so that:
```
colnames(df)[1]<-'Col1'
colnames(df)[2]<-'Col2'
... x25
```
However there must be a better way. I have decided to use a loop instead where my code is:
```
for (i in 1:ncol(df))
{
colnames(df)[i]<-'Col'&i
}
```
But I get the error:
'Error in "Col" & i :
operations are possible only for numeric, logical or complex types'
How can I correct the code in the loop?<issue_comment>username_1: You need to use functions `paste` or `paste0` when handling characters.
In your case, this should do the trick.
```
colnames(df) <- paste0("Col", seq(1,25))
```
Upvotes: 2 [selected_answer]<issue_comment>username_2: ```
df <- as.data.frame(matrix(ncol=25,nrow=1)) # create a dataframe with this shape
labels <- paste0("Col",seq(1,25,1))
colnames(df) <- labels
```
And then it will look like this:
```
> df
Col1 Col2 Col3 Col4 Col5 Col6 Col7 Col8 Col9 Col10 Col11 Col12 Col13 Col14
Col15 Col16 Col17 Col18 Col19 Col20
1 NA NA NA NA NA NA NA NA NA NA NA NA NA NA
NA NA NA NA NA NA
Col21 Col22 Col23 Col24 Col25
1 NA NA NA NA NA
```
Upvotes: 0 |
2018/03/22 | 470 | 1,274 | <issue_start>username_0: I'm trying to sum up some vectors within a function. Depending on my input, some of the vectors might be created or not. For example, I have vectors A, B, C and D:
```
A <- c(1,1,1,0)
B <- c(1,0,1,1)
C <- c(0,0,0,1)
```
in this case, D doesn't exist.
I need to write a code that sums up the values if the vector exists and ignores the non-existing vectors.
The output should be A + B + C + D, ignoring the missing vectors:
```
> A + B +C
[1] 2 1 2 2
```
Do you know an easy solution?
Thanks |
2018/03/22 | 825 | 1,985 | <issue_start>username_0: ```
var count = 0;
arrA = ["6", "2", "21", "8", "4", "12"];
arrB = ["8", "2", "12", "2", "5", "11"];
```
I want to compare each element: if `arrA[i] > arrB[i]`, then add 1 to count.
How to implement this in javascript?<issue_comment>username_1: Use a simple for loop:
```
for(let i = 0; i < arrA.length; i++) {
if (arrA[i] > arrB[i]) {
count++;
}
}
```
[Alternatively, `Array.forEach` would also work:](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Array/forEach)
```
arrA.forEach((value, index) => {
if (value > arrB[index]) {
count++;
}
})
```
Upvotes: 0 <issue_comment>username_2: You can use [`Array#every`](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Array/every) :
```js
let count = 0;
let arrA = ["6", "2", "21", "8", "4", "12"];
let arrB = ["8", "2", "12", "2", "5", "11"]; // !< A
let arrC = ["5", "1", "20", "7", "3", "11"]; // < A
let res = arrA.every((e,i) => e > arrB[i]); //False
let res2 = arrA.every((e,i) => e > arrC[i]); //True
if(res) count++; //Doesn't trigger
if(res2) count++; //Triggers
console.log(count);
```
Upvotes: 0 <issue_comment>username_3: You could reduce the array by adding the result of compairing the numerical values.
```js
var arrA = ["6", "2", "21", "8", "4", "12"],
arrB = ["8", "2", "12", "2", "5", "11"],
count = arrA.reduce((c, v, i) => c + (+v > +arrB[i]), 0);
console.log(count);
```
Upvotes: 2 <issue_comment>username_4: You can use `array.prototype.reduce`:
```js
var arrA = ["6", "2", "21", "8", "4", "12"];
var arrB = ["8", "2", "12", "2", "5", "11"];
var count = arrA.reduce((m, o, i) => m + (o > arrB[i] ? 1 : 0), 0);
console.log(count);
```
Or, with `array.prototype.filter`:
```js
var arrA = ["6", "2", "21", "8", "4", "12"];
var arrB = ["8", "2", "12", "2", "5", "11"];
var count = arrA.filter((e, i) => e > arrB[i]).length;
console.log(count);
```
Upvotes: 0 |
2018/03/22 | 802 | 2,922 | <issue_start>username_0: I'm using the [Stripe API](https://stripe.com/docs/api "StripeAPI") in PHP and I don't know how I can stop a customer subscription at a specific date.
I know there is an option to cancel a subscription immediately:
```
$subscription = \Stripe\Subscription::retrieve('sub_49ty4767H20z6a');
$subscription->cancel();
```
Or to cancel the subscription at the end of the current billing period (for the duration of time the customer has already paid for) with:
```
$subscription->cancel(['at_period_end' => true]);
```
With this option, the cancellation is scheduled.
But how can I cancel a subscription at a specific date? In my system, a user can ask for a cancellation but has to wait until a specific date (something like having to stay at least X months as a subscriber).
Do you know how I can do that? Thanks!<issue_comment>username_1: Stripe just added this to their API and I happened to stumble upon it. They have a field called "cancel\_at" that can be set to a date in the future.
<https://stripe.com/docs/api/subscriptions/create?lang=php>
I've tested this using .NET and can confirm it sets the subscription to expire at the value you provide.
Upvotes: 2 <issue_comment>username_2: Ah, yes. Good question. Here's a check/execute example:
```
$today = time();
$customer = \Stripe\Customer::retrieve('Their Customer ID');
$signUpDate = $customer->subscriptions->data[0]->created;
// This will define the date where you wish to cancel them on.
// The example here is 30 days from which they initially subscribed.
$cancelOnDate = strtotime('+30 day', $signUpDate);
// If today is passed or equal to the date you defined above, cancel them.
if ($today >= $cancelOnDate) {
$subscription = \Stripe\Subscription::retrieve('sub_49ty4767H20z6a');
$subscription->cancel();
}
```
Here's an example of setting cancellation upon them subscribing:
```
$today = time();
// Cancel them 30 days from today
$cancelAt = strtotime('+30 day', $today);
$subscription = \Stripe\Subscription::create(array(
"customer" => 'Their Customer ID',
"plan" => 'xxxxxxxxx',
"cancel_at" => $cancelAt
));
```
Upvotes: 2 <issue_comment>username_3: Maybe a little bit late, but I had the same problem and checked your post for help. We want to offer a season pass for the users, so if they cancel the subscription the payment will end at the end of the season, for example every year in June.
After a lot of research I have found a solution in the laravel [docs](https://laravel.com/docs/8.x/billing#cancelling-subscriptions), that might help. For my solution it was not `cancel_at` but `cancelAt`. Not much difference, but helped me a lot :)
```
$subscription = $user->subscription('default')->cancelAt(
Carbon::create(2021,6,30)
);
```
Best,
Timo
Upvotes: 0 |
2018/03/22 | 437 | 1,153 | <issue_start>username_0: I have a list that I want to break into three separate lists created from the first, second, and third positions in the list. It would look something like this:
input:
```
X = ['GCA','GCC','GCT','GCG']
```
output:
```
C1 = ['G', 'G', 'G', 'G']
C2 = ['C', 'C', 'C', 'C']
C3 = ['A', 'C', 'T', 'G']
```
I was thinking about using list comprehension. Doing something like:
```
C1 = [p for p in X if X[0][0]]
```
But I don't know how to slice a string that is inside a list.
Thanks for the help.<issue_comment>username_1: For your attempt using a comprehension, you just need code like
```
C1 = [p[0] for p in X]
C2 = [p[1] for p in X]
C3 = [p[2] for p in X]
```
or use [zip()](https://docs.python.org/3/library/functions.html#zip):
```
C1, C2, C3 = zip(*X)
```
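One small caveat to add here: `zip()` yields tuples, so if actual lists are needed, each result can be converted, e.g.:

```python
X = ['GCA', 'GCC', 'GCT', 'GCG']

# zip(*X) pairs up the characters by position, but yields tuples.
C1, C2, C3 = (list(t) for t in zip(*X))
print(C1, C2, C3)  # ['G', 'G', 'G', 'G'] ['C', 'C', 'C', 'C'] ['A', 'C', 'T', 'G']
```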
Upvotes: 1 <issue_comment>username_2: @Chris_Rands' answer using `zip()` is better, but since you also asked about list comprehension:
`C1=[s[0] for s in X]`
works.
`[[s[n] for s in X] for n in range(3)]` would get you all of them as a list.
Output: `[['G', 'G', 'G', 'G'], ['C', 'C', 'C', 'C'], ['A', 'C', 'T', 'G']]`
Upvotes: 1 [selected_answer] |
2018/03/22 | 615 | 2,230 | <issue_start>username_0: Currently trying to figure out how to make face crops from bounding boxes (from detect-faces response) and use those crops to search an existing collection using the SearchFacesByImage API
This is mentioned in the SearchFacesByImage documentation.
>
> You can also call the DetectFaces operation and use the bounding boxes in the response to make face crops, which then you can pass in to the SearchFacesByImage operation
>
>
>
I am trying to do this in Python or Node.js in a Lambda function. The input image is an s3 object.
All help greatly appreciated.<issue_comment>username_1: I have faced the exact same problem. Refer to [this link](https://docs.aws.amazon.com/rekognition/latest/dg/images-orientation.html) from the AWS documentation. There you will find sample code for both Python and Java. It will return the Top, Left, Width and Height of the bounding box. Remember, the upper-left corner is considered to be (0,0).
Then, if you use Python, you can crop the image with cv2 or PIL.
Here is an example with PIL:
```
from PIL import Image
img = Image.open( 'my_image.png' )
cropped = img.crop( ( Left, Top, Left + Width, Top + Height ) )
cropped.show()
```
In this code, Top, Left, Width and Height are values from the response returned by the code in the link.
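One detail to be careful with (my addition): Rekognition's `BoundingBox` values are ratios of the image dimensions, so they generally need to be scaled to pixels before being handed to `crop()`. A small helper sketch, PIL-free:

```python
def box_to_pixels(box, img_w, img_h):
    """Convert a ratio-based BoundingBox dict (Left/Top/Width/Height in 0..1)
    into a (left, top, right, bottom) pixel tuple for PIL's crop()."""
    left = int(box['Left'] * img_w)
    top = int(box['Top'] * img_h)
    right = int((box['Left'] + box['Width']) * img_w)
    bottom = int((box['Top'] + box['Height']) * img_h)
    return left, top, right, bottom

print(box_to_pixels({'Left': 0.25, 'Top': 0.25, 'Width': 0.25, 'Height': 0.5}, 400, 200))
# (100, 50, 200, 150)
```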
Upvotes: 1 <issue_comment>username_2: I did this script in Java; maybe it helps:
```
java.awt.image.BufferedImage image = ...
com.amazonaws.services.rekognition.model.BoundingBox target ...
int x = (int) Math.abs((image.getWidth() * target.getLeft()));
int y = (int) Math.abs((image.getHeight() * target.getTop()));
int w = (int) Math.abs((image.getWidth() * target.getWidth()));
int h = (int) Math.abs((image.getHeight() * target.getHeight()));
int finalX = x + w;
int finalH = y + h;
if (finalX > image.getWidth())
w = image.getWidth()-x;
if (finalH > image.getHeight())
h = image.getHeight()-y;
System.out.println(finalX);
System.out.println(finalH);
//
//
BufferedImage subImage = image.getSubimage(
x,
y,
w,
h);
//
//
String base64 = ImageUtils.imgToBase64String(subImage, "jpg");
```
Upvotes: 0 |
2018/03/22 | 1,089 | 3,551 | <issue_start>username_0: How can I make Python mmap assignment atomic? Nothing is said about atomicity here: <https://docs.python.org/3.0/library/mmap.html>
```
import mmap, struct

huge_list1 = [888 for _ in range(100000000)]
huge_list2 = [9999 for _ in huge_list1]
b1 = struct.pack("100000000I", *huge_list1)
b2 = struct.pack("100000000I", *huge_list2)
f = open('mmp', 'wb')
f.write(b1)
f.close()
f = open('mmp', 'r+')
m = mmap.mmap(f.fileno(), 0)
m[:]=b2
```
Immediately, I execute the following code in another process
```
import mmap, struct

f = open('mmp', 'r')
m = mmap.mmap(f.fileno(), 0, access=mmap.ACCESS_READ)
mm = m[:]
l = struct.unpack("100000000I", mm)
set(l)
```
Then I am seeing {888, 9999}
Which means mmap is not atomic. Is there any way to make it atomic?<issue_comment>username_1: I don't think it's a `mmap` problem - I'd bet that it happens because `f.close()` guarantees only that Python has sent the data to the underlying OS's buffer, which doesn't mean it has actually been written. Then when you open it again and give the handle to `mmap`, you're still operating on the buffer.
You can try syncing the buffer before you close the file to ensure everything has been written:
```
import os
f = open('mmp', 'wb')
f.write(b1)
f.flush()
os.fsync(f.fileno())
f.close()
```
Or better, just to let Python handle closing cleanly in case of an error:
```
with open('mmp', 'wb') as f:
f.write(b1)
f.flush()
os.fsync(f.fileno())
```
Although even `os.fsync()` is not a 100% guarantee, from the underlying `fsync()` man page:
>
> Calling fsync() does not necessarily ensure that the entry in the directory containing the file has also reached disk. For that an explicit fsync() on a file descriptor for the directory is also needed.
>
>
>
But I'd bet that it wouldn't do what you need in very rare edge cases.
Upvotes: 1 <issue_comment>username_2: In general, you can't. File writes aren't atomic to begin with, whether done via mmap or write. Some filesystems, such as Tahoe-LAFS, do have a file put operation, but even there it's a matter of known completion, not atomic operation (chunks are stored individually). Atomicity of file content updates are frequently done with three methods:
1. Using the [rename](http://pubs.opengroup.org/onlinepubs/9699919799/functions/rename.html) call, where you can be sure a name points to either the old or new file (Python's [Path.replace](https://docs.python.org/3/library/pathlib.html#pathlib.Path.replace) might be more clear). This is the method used in e.g. [maildir](http://www.qmail.org/man/man5/maildir.html).
2. Using [file locks](https://docs.python.org/3/library/os.html#os.lockf). These are in general cooperative, meaning all programs that access the file must use the same locking method consistently. Sometimes this is not possible, for instance across some network filesystems. Due to this inconsistency, other lock methods such as [lock files](https://pypi.python.org/pypi/filelock) are also used - thus the "same method" requirement.
3. Using smaller accesses that are atomic due to underlying architecture, such as disk sectors. This is done e.g. in [SQLite's journal headers](https://www.sqlite.org/atomiccommit.html). Notably the threshold is different with mmap because the memory page itself may be shared, allowing far finer granularity for atomic accesses (perhaps CPU word size or single byte).
The topic is fairly complex. The key to combining any of these synchronization methods with mmap is [mmap.flush](https://docs.python.org/3/library/mmap.html#mmap.mmap.flush).
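As a concrete sketch of the first method (write-then-rename; my illustration, assuming `os.replace` as the atomic swap primitive on a POSIX filesystem):

```python
import os
import tempfile

def atomic_write(path, data):
    """Replace path's contents so that readers see either the old bytes
    or the new bytes in full, never a mix of the two."""
    directory = os.path.dirname(os.path.abspath(path))
    fd, tmp = tempfile.mkstemp(dir=directory)
    try:
        with os.fdopen(fd, 'wb') as f:
            f.write(data)
            f.flush()
            os.fsync(f.fileno())   # push the bytes to disk before the swap
        os.replace(tmp, path)      # atomic rename: the name flips in one step
    except BaseException:
        os.unlink(tmp)
        raise
```

Note that an existing mmap of the old file keeps seeing the old pages after the swap; new opens see only the new contents, which is the either-old-or-new behaviour the question is after.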
Upvotes: 2 |
2018/03/22 | 954 | 3,425 | <issue_start>username_0: I'm trying to get Mattermost working with Docker for Windows. As mentioned [here](https://docs.mattermost.com/install/docker-local-machine.html#mac-os-x-and-windows-10), I executed the following command:
`docker run --name mattermost-preview -d --publish 8065:8065 mattermost/mattermost-preview`
After pulling and extracting the files, docker exits and throws the following error:
`docker.exe: Error response from daemon: Unrecognised volume spec: invalid volume specification: './mattermost-data'.`
Running `Windows Server 2019 PreRelease 17623` and `docker 17.10.0-ee-preview-3`
2018/03/22 | 916 | 3,208 | <issue_start>username_0: I would like to validate whether the numbers in a list are within a given range or not.
If a number is smaller than 33 or bigger than 38, I want those numbers to be returned in their original order.
```
i<-c(33,34,35,36,37,38,80,100)
for (i in 1:length(i)) {
if ( 33 < i & i < 38 ) {
next
}
print(i)
}
```
but it returns everything even though it shouldn't be
2018/03/22 | 851 | 3,176 | <issue_start>username_0: I need a command to check the owner/group permissions of all folders and files on a Linux server.
I have several websites and crons running the server. The idea is to get all files / folder in any website folder or crons having a owner apache. Which will list out the files and folder names whose owner is "apache" throughout the server
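For the question above, `find` with `-user` does this; a hedged sketch (the web-root and cron paths below are placeholders — substitute your own):

```shell
# List every regular file under the given roots owned by the given user.
# Drop -type f to list matching directories as well.
files_owned_by() {
  owner=$1; shift
  find "$@" -type f -user "$owner" 2>/dev/null
}

# Example (paths are assumptions for typical site and cron directories):
files_owned_by apache /var/www /etc/cron.d || true
```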
2018/03/22 | 848 | 2,575 | <issue_start>username_0: I'm trying to recursively find a sum of perfect squares in a dynamically allocated list. For some reason, my function keeps overlooking the first element.
`*A` is the pointer to the first element of the array. n is the number of elements, meaning valid indices range from 0 to n-1. When n is less than or equal to zero, n-1 isn't a valid index, so I return 0 as the contribution to the sum of perfect squares.
```
int sum(int *A, int n)
{
int i, num = 0;
if (n <= 0)
return num;
for (i = 0; i < A[n - 1]; i++) {
if (i*i == A[n - 1]) {
num = A[n - 1];
}
}
return num + sum(A, n - 1);
}
```
Why does the first element always get overlooked? It works for all the other elements in the list.
EDIT: I've tried calling the function again and it seems that only number 1 got overlooked. That was fixed by modifying the for loop condition, so the solution would be:
```
int sum(int *A, int n)
{
int i, num = 0;
if (n <= 0)
return num;
for (i = 0; i <= A[n - 1]; i++) {
if (i*i == A[n - 1]) {
num = A[n - 1];
}
}
return num + sum(A, n - 1);
}
```<issue_comment>username_1: First element in an array is `A[0]`. You're returning `0` rather than the value of `A[0]` when you're calling `sum(A,0)`.
Did you try changing the line to: `if (n <= 0) return A[0];`?
Upvotes: 1 <issue_comment>username_2: For starters as the array pointed to by `A` is not changed the pointer should be declared with the qualifier `const`.
Sizes of objects in C are estimated by using the type `size_t`. So the second parameter should be declared as having the type `size_t`.
Also the sum of perfect squares can be larger than an object of the type `int` can accommodate. So it is better to use the type `long long int` as the return type.
And if I am not mistaken 0 is not a perfect square. Though it is not very important, nevertheless the loop can start with 1 instead of 0.
I can suggest the following solution.
```
#include <stdio.h>

long long int sum( const int *a, size_t n )
{
    int perfect_square = 0;

    if ( n )
    {
        int i = 1;
        while ( i * i < a[n-1] ) i++;
        if ( a[n-1] == i * i ) perfect_square = a[n-1];
    }

    return n == 0 ? perfect_square : perfect_square + sum( a, n - 1 );
}

int main(void)
{
    int a[] = { 1, 2, 3, 4, 5, 6, 7, 8, 9 };
    const size_t N = sizeof( a ) / sizeof( *a );

    printf( "The sum of perfect squares is %lld\n", sum( a, N ) );

    return 0;
}
```
The program output is
```
The sum of perfect squares is 14
```
Upvotes: 3 [selected_answer] |
2018/03/22 | 5,624 | 17,178 | <issue_start>username_0: I have a PowerShell script that creates 12 unique check boxes. Later in the script, it creates an email and enters information. I am trying to get the email to include the values of the boxes that are checked, and not the others. The variable name for each box is $CB1 up through $CB12.
```
Team,
Per ticket $SCTicket, $User is requesting access to GetPaid.
Details from RITM ticket $RITMTicket are below:
Access Type: $AccessType
Roll: $Roll
Responsibilities:
Thank You
```
Doing something such as $CB1,$CB2,$CB3, etc. does not look right, as it leaves a bunch of random commas. Putting each checkbox value on its own line can work, but then there are potentially several blank lines for no reason.
If the user checks off, say, boxes 4, 7, and 10, how can I get just those values written?
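The usual fix for this is to collect only the checked values into a list and join them once — no stray commas and no blank lines. The idea, sketched in Python for illustration (in PowerShell the same pattern would be an array of the checked `.Text` values combined with `-join ", "`):

```python
# Hypothetical (checked, value) pairs standing in for the 12 checkboxes;
# here boxes 4, 7 and 10 are ticked, as in the question's example.
boxes = [
    (False, 'Collections'),
    (True, 'Sales and Service User'),   # box 4
    (True, 'Sales'),                    # box 7
    (True, 'GL Accountant'),            # box 10
]

# Keep only the checked values, in order, then join them once.
line = ', '.join(value for checked, value in boxes if checked)
print(line)  # Sales and Service User, Sales, GL Accountant
```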
Full code is below:
```
if ($startupvariables) { try {Remove-Variable -Name startupvariables -Scope Global -ErrorAction SilentlyContinue } catch { } }
New-Variable -force -name startupVariables -value ( Get-Variable | ForEach-Object { $_.Name } )
Write-Output "Don't close this window!"
Add-Type -AssemblyName PresentationCore,PresentationFramework
$ButtonType = [System.Windows.MessageBoxButton]::OKCancel
$MessageIcon = [System.Windows.MessageBoxImage]::Warning
$MessageTitle = "GetPaid - Access Request"
$MessageBody = "This script sends a GetPaid access request setup to xxxxxxxx and xxxxxx.`n`nTo use it, enter the below information:`n`n`n`tTicket Number`n`n`tUser's Email Address`n`n`tAccess Requested`n`n`nIf this is the script you want to use, click OK.`nIf not, click Cancel."
$Result = [System.Windows.MessageBox]::Show($MessageBody,$MessageTitle,$ButtonType,$MessageIcon)
if ($Result -eq "Cancel")
{
Exit-PSSession
}
else
{
[System.Reflection.Assembly]::LoadWithPartialName('Microsoft.VisualBasic') | Out-Null
$Separator = ".", "@"
$SCTicket = [Microsoft.VisualBasic.Interaction]::InputBox("Enter the SCTask ticket number" , "Ticket Number")
$RITMTicket = [Microsoft.VisualBasic.Interaction]::InputBox("Enter the RITM ticket number" , "Ticket Number")
$UserID = [Microsoft.VisualBasic.Interaction]::InputBox("Enter the user's email address" , "User Email Address")
$User = $UserID.split($Separator)
$Firstname = $User[0].substring(0,1).toupper()+$User[0].substring(1).tolower()
$Lastname = $User[1].substring(0,1).toupper()+$User[1].substring(1).tolower()
$User = $Firstname, $Lastname
function Access_Type{
[void] [System.Reflection.Assembly]::LoadWithPartialName("System.Windows.Forms")
[void] [System.Reflection.Assembly]::LoadWithPartialName("System.Drawing")
$Form = New-Object System.Windows.Forms.Form
$Form.width = 225
$Form.height = 215
$Form.Text = ”Select Type of Access"
$Font = New-Object System.Drawing.Font("Verdana",11)
$Form.Font = $Font
$MyGroupBox = New-Object System.Windows.Forms.GroupBox
$MyGroupBox.Location = '5,5'
$MyGroupBox.size = '190,125'
$RadioButton1 = New-Object System.Windows.Forms.RadioButton
$RadioButton1.Location = '20,20'
$RadioButton1.size = '120,30'
$RadioButton1.Checked = $false
$RadioButton1.Text = "Grant"
$RB1 = "Grant"
$RadioButton2 = New-Object System.Windows.Forms.RadioButton
$RadioButton2.Location = '20,50'
$RadioButton2.size = '120,30'
$RadioButton2.Checked = $false
$RadioButton2.Text = "Change"
$RB2 = "Change"
$RadioButton3 = New-Object System.Windows.Forms.RadioButton
$RadioButton3.Location = '20,80'
$RadioButton3.size = '120,30'
$RadioButton3.Checked = $false
$RadioButton3.Text = "Revoke"
$RB3 = "Revoke"
$OKButton = new-object System.Windows.Forms.Button
$OKButton.Location = '5,135'
$OKButton.Size = '90,35'
$OKButton.Text = 'OK'
$OKButton.DialogResult=[System.Windows.Forms.DialogResult]::OK
$CancelButton = new-object System.Windows.Forms.Button
$CancelButton.Location = '105,135'
$CancelButton.Size = '90,35'
$CancelButton.Text = "Cancel"
$CancelButton.Add_Click({$objForm.Close()})
$CancelButton.DialogResult=[System.Windows.Forms.DialogResult]::Cancel
$form.Controls.AddRange(@($MyGroupBox,$OKButton,$CancelButton))
$MyGroupBox.Controls.AddRange(@($Radiobutton1,$RadioButton2,$RadioButton3))
$form.AcceptButton = $OKButton
$form.CancelButton = $CancelButton
$form.Add_Shown({$form.Activate()})
$dialogResult = $form.ShowDialog()
if ($DialogResult -eq "OK")
{
if ($RadioButton1.Checked){$global:AccessType = $RB1}
elseif ($RadioButton2.Checked){$global:AccessType = $RB2}
elseif ($RadioButton3.Checked){$global:AccessType = $RB3}
}
elseif ($DialogResult -eq "Cancel")
{
break
}
}
Access_Type
function Access_Roll{
[void] [System.Reflection.Assembly]::LoadWithPartialName("System.Windows.Forms")
[void] [System.Reflection.Assembly]::LoadWithPartialName("System.Drawing")
$Form = New-Object System.Windows.Forms.Form
$Form.width = 225
$Form.height = 180
$Form.Text = ”Select Type of Access"
$Font = New-Object System.Drawing.Font("Verdana",11)
$Form.Font = $Font
$MyGroupBox = New-Object System.Windows.Forms.GroupBox
$MyGroupBox.Location = '5,5'
$MyGroupBox.size = '190,90'
$RadioButton1 = New-Object System.Windows.Forms.RadioButton
$RadioButton1.Location = '20,20'
$RadioButton1.size = '130,30'
$RadioButton1.Checked = $false
$RadioButton1.Text = "User"
$RB11 = "User"
$RadioButton2 = New-Object System.Windows.Forms.RadioButton
$RadioButton2.Location = '20,50'
$RadioButton2.size = '130,30'
$RadioButton2.Checked = $false
$RadioButton2.Text = "Administrator"
$RB22 = "Administrator"
$OKButton = new-object System.Windows.Forms.Button
$OKButton.Location = '5,100'
$OKButton.Size = '90,35'
$OKButton.Text = 'OK'
$OKButton.DialogResult=[System.Windows.Forms.DialogResult]::OK
$CancelButton = new-object System.Windows.Forms.Button
$CancelButton.Location = '105,100'
$CancelButton.Size = '90,35'
$CancelButton.Text = "Cancel"
$CancelButton.Add_Click({$objForm.Close()})
$CancelButton.DialogResult=[System.Windows.Forms.DialogResult]::Cancel
$form.Controls.AddRange(@($MyGroupBox,$OKButton,$CancelButton))
$MyGroupBox.Controls.AddRange(@($Radiobutton1,$RadioButton2))
$form.AcceptButton = $OKButton
$form.CancelButton = $CancelButton
$form.Add_Shown({$form.Activate()})
$dialogResult = $form.ShowDialog()
if ($DialogResult -eq "OK")
{
if ($RadioButton1.Checked){$global:Roll = $RB11}
elseif ($RadioButton2.Checked){$global:Roll = $RB22}
}
elseif ($DialogResult -eq "Cancel")
{
break
}
}
Access_Roll
function Access_Responsibilities{
[void] [System.Reflection.Assembly]::LoadWithPartialName("System.Windows.Forms")
[void] [System.Reflection.Assembly]::LoadWithPartialName("System.Drawing")
$Form = New-Object System.Windows.Forms.Form
$Form.width = 265
$Form.height = 510
$Form.Text = ”Select Type of Access"
$Font = New-Object System.Drawing.Font("Verdana",11)
$Form.Font = $Font
$MyGroupBox = New-Object System.Windows.Forms.GroupBox
$MyGroupBox.Location = '5,5'
$MyGroupBox.size = '230,420'
$Checkbox1 = New-Object System.Windows.Forms.Checkbox
$Checkbox1.Location = '20,20'
$Checkbox1.size = '200,25'
$Checkbox1.Checked = $false
$Checkbox1.Text = "Collections"
$CB1 = "Collections"
$Checkbox2 = New-Object System.Windows.Forms.Checkbox
$Checkbox2.Location = '20,45'
$Checkbox2.size = '200,25'
$Checkbox2.Checked = $false
$Checkbox2.Text = "Credit Management"
$CB2 = "Credit Management"
$Checkbox3 = New-Object System.Windows.Forms.Checkbox
$Checkbox3.Location = '20,70'
$Checkbox3.size = '200,25'
$Checkbox3.Checked = $false
$Checkbox3.Text = "Cash Application"
$CB3 = "Cash Application"
$Checkbox4 = New-Object System.Windows.Forms.Checkbox
$Checkbox4.Location = '20,95'
$Checkbox4.size = '200,25'
$Checkbox4.Checked = $false
$Checkbox4.Text = "Sales and Service User"
$CB4 = "Sales and Service User"
$Checkbox5 = New-Object System.Windows.Forms.Checkbox
$Checkbox5.Location = '20,115'
$Checkbox5.size = '200,50'
$Checkbox5.Checked = $false
$Checkbox5.Text = "Controller (specify plant below)"
$CB5 = "Controller (specify plant below)"
$Checkbox6 = New-Object System.Windows.Forms.Checkbox
$Checkbox6.Location = '20,160'
$Checkbox6.size = '200,50'
$Checkbox6.Checked = $false
$Checkbox6.Text = "Plant (specify plant below)"
$CB6 = "Plant (specify plant below)"
$Checkbox7 = New-Object System.Windows.Forms.Checkbox
$Checkbox7.Location = '20,205'
$Checkbox7.size = '200,25'
$Checkbox7.Checked = $false
$Checkbox7.Text = "Sales"
$CB7 = "Sales"
$Checkbox8 = New-Object System.Windows.Forms.Checkbox
$Checkbox8.Location = '20,230'
$Checkbox8.size = '200,25'
$Checkbox8.Checked = $false
$Checkbox8.Text = "Pricing"
$CB8 = "Pricing"
$Checkbox9 = New-Object System.Windows.Forms.Checkbox
$Checkbox9.Location = '20,255'
$Checkbox9.size = '200,25'
$Checkbox9.Checked = $false
$Checkbox9.Text = "Warranty"
$CB9 = "Warranty"
$Checkbox10 = New-Object System.Windows.Forms.Checkbox
$Checkbox10.Location = '20,280'
$Checkbox10.size = '200,25'
$Checkbox10.Checked = $false
$Checkbox10.Text = "GL Accountant"
$CB10 = "GL Accountant"
$Checkbox11 = New-Object System.Windows.Forms.Checkbox
$Checkbox11.Location = '20,300'
$Checkbox11.size = '200,50'
$Checkbox11.Checked = $false
$Checkbox11.Text = "Mexico Billing (specific plant below)"
$CB11 = "Mexico Billing (specific plant below)"
$Checkbox12 = New-Object System.Windows.Forms.Checkbox
$Checkbox12.Location = '20,340'
$Checkbox12.size = '200,75'
$Checkbox12.Checked = $false
$Checkbox12.Text = "Non Problem Owner (To be used for Inquiry Only)"
$CB12 = "Non Problem Owner (To be used for Inquiry Only)"
$OKButton = new-object System.Windows.Forms.Button
$OKButton.Location = '10,430'
$OKButton.Size = '90,35'
$OKButton.Text = 'OK'
$OKButton.DialogResult=[System.Windows.Forms.DialogResult]::OK
$CancelButton = new-object System.Windows.Forms.Button
$CancelButton.Location = '110,430'
$CancelButton.Size = '90,35'
$CancelButton.Text = "Cancel"
$CancelButton.Add_Click({$objForm.Close()})
$CancelButton.DialogResult=[System.Windows.Forms.DialogResult]::Cancel
$form.Controls.AddRange(@($MyGroupBox,$OKButton,$CancelButton))
$MyGroupBox.Controls.AddRange(@($Checkbox1,$Checkbox2,$Checkbox3,$Checkbox4,$Checkbox5,$Checkbox6,$Checkbox7,$Checkbox8,$Checkbox9,$Checkbox10,$Checkbox11,$Checkbox12))
$form.AcceptButton = $OKButton
$form.CancelButton = $CancelButton
$form.Add_Shown({$form.Activate()})
$dialogResult = $form.ShowDialog()
if ($DialogResult -eq "OK")
{
if ($Checkbox1.Checked){$global:AccessResponsibilities = $CB1}
elseif ($Checkbox2.Checked){$global:AccessResponsibilities = $CB2}
elseif ($Checkbox2.Checked){$global:AccessResponsibilities = $CB3}
elseif ($Checkbox2.Checked){$global:AccessResponsibilities = $CB4}
elseif ($Checkbox2.Checked){$global:AccessResponsibilities = $CB5}
elseif ($Checkbox2.Checked){$global:AccessResponsibilities = $CB6}
elseif ($Checkbox2.Checked){$global:AccessResponsibilities = $CB7}
elseif ($Checkbox2.Checked){$global:AccessResponsibilities = $CB8}
elseif ($Checkbox2.Checked){$global:AccessResponsibilities = $CB9}
elseif ($Checkbox2.Checked){$global:AccessResponsibilities = $CB10}
elseif ($Checkbox2.Checked){$global:AccessResponsibilities = $CB11}
elseif ($Checkbox2.Checked){$global:AccessResponsibilities = $CB12}
}
elseif ($DialogResult -eq "Cancel")
{
break
}
}
Access_Responsibilities
function Read-MultiLineInputBoxDialog([string]$Message, [string]$WindowTitle, [string]$DefaultText)
{
Add-Type -AssemblyName System.Drawing
Add-Type -AssemblyName System.Windows.Forms
$label = New-Object System.Windows.Forms.Label
$label.Location = New-Object System.Drawing.Size(10,10)
$label.Size = New-Object System.Drawing.Size(280,20)
$label.AutoSize = $true
$label.Text = $Message
$textBox = New-Object System.Windows.Forms.TextBox
$textBox.Location = New-Object System.Drawing.Size(10,40)
$textBox.Size = New-Object System.Drawing.Size(575,200)
$textBox.AcceptsReturn = $true
$textBox.AcceptsTab = $false
$textBox.Multiline = $true
$textBox.ScrollBars = 'Both'
$textBox.Text = $DefaultText
$okButton = New-Object System.Windows.Forms.Button
$okButton.Location = New-Object System.Drawing.Size(415,250)
$okButton.Size = New-Object System.Drawing.Size(75,25)
$okButton.Text = "OK"
$okButton.Add_Click({ $form.Tag = $textBox.Text; $form.Close() })
$cancelButton = New-Object System.Windows.Forms.Button
$cancelButton.Location = New-Object System.Drawing.Size(510,250)
$cancelButton.Size = New-Object System.Drawing.Size(75,25)
$cancelButton.Text = "Cancel"
$CancelButton.Add_Click({$objForm.Close()})
$cancelButton.Add_Click({ $form.Tag = $null; $form.Close() })
$form = New-Object System.Windows.Forms.Form
$form.Text = $WindowTitle
$form.Size = New-Object System.Drawing.Size(610,320)
$form.FormBorderStyle = 'FixedSingle'
$form.StartPosition = "CenterScreen"
$form.AutoSizeMode = 'GrowAndShrink'
$form.Topmost = $True
$form.AcceptButton = $okButton
$form.CancelButton = $cancelButton
$form.ShowInTaskbar = $true
$form.Controls.Add($label)
$form.Controls.Add($textBox)
$form.Controls.Add($okButton)
$form.Controls.Add($cancelButton)
$form.Add_Shown({$form.Activate()})
$form.ShowDialog() > $null
return $form.Tag
}
$global:Comments = Read-MultiLineInputBoxDialog -Message "Enter any comments from the ticket" -WindowTitle "Comments"
$Username = [System.Environment]::UserName
$subject = "Ticket $SCTicket - $User's GetPaid Access Request"
$body = "
Team,
Per ticket $SCTicket, $User is requesting access to GetPaid.
Details from RITM ticket $RITMTicket are below:
Access Type: $AccessType
Roll: $Roll
Responsibilities:
Comments: $Comments
Thank You,
"
$ButtonType = [System.Windows.MessageBoxButton]::YesNo
$MessageIcon = [System.Windows.MessageBoxImage]::Warning
$MessageTitle = "GetPaid - Access Request"
$MessageBody = "The information you have entered is show below:`n`n`nTicket Number: $Ticket`n`nUser's Email Address: $UserID`n`nAccess Type: $AccessType`n`nRoll: $Roll`n`nResponsibilities: `n`nComments: $Comments`n`n`nIf you would like to send the email, click Yes.`nOtherwise, click No."
$Result = [System.Windows.MessageBox]::Show($MessageBody,$MessageTitle,$ButtonType,$MessageIcon)
if ($Result -eq "No")
{
Exit-PSSession
}
else
{
Send-MailMessage -To "<<EMAIL>>" -bcc "<<EMAIL>>" -from "" -Subject $subject -SmtpServer "mailrelay.xxx.com" -body $body
}
}
Function Clean-Memory {
Get-Variable |
Where-Object { $startupVariables -notcontains $_.Name } |
ForEach-Object {
try { Remove-Variable -Name "$($_.Name)" -Force -Scope "global" -ErrorAction SilentlyContinue -WarningAction SilentlyContinue}
catch { }
}
}
```<issue_comment>username_1: Just from a preliminary look through the code and the question that you are asking, this is what I have come up with.
Basically, if the checkbox is checked, it appends the corresponding $CB(1-12) string to `$global:AccessResponsibilities`; then, in your email, just reference `$global:AccessResponsibilities` and it should contain the strings of all the checkboxes that were checked
```
if ($Checkbox1.Checked){$global:AccessResponsibilities += $CB1 + "`r`n"}
if ($Checkbox2.Checked){$global:AccessResponsibilities += $CB2 + "`r`n"}
if ($Checkbox3.Checked){$global:AccessResponsibilities += $CB3 + "`r`n"}
if ($Checkbox4.Checked){$global:AccessResponsibilities += $CB4 + "`r`n"}
if ($Checkbox5.Checked){$global:AccessResponsibilities += $CB5 + "`r`n"}
if ($Checkbox6.Checked){$global:AccessResponsibilities += $CB6 + "`r`n"}
if ($Checkbox7.Checked){$global:AccessResponsibilities += $CB7 + "`r`n"}
if ($Checkbox8.Checked){$global:AccessResponsibilities += $CB8 + "`r`n"}
if ($Checkbox9.Checked){$global:AccessResponsibilities += $CB9 + "`r`n"}
if ($Checkbox10.Checked){$global:AccessResponsibilities += $CB10 + "`r`n"}
if ($Checkbox11.Checked){$global:AccessResponsibilities += $CB11 + "`r`n"}
if ($Checkbox12.Checked){$global:AccessResponsibilities += $CB12 + "`r`n"}
```
Upvotes: 3 [selected_answer]<issue_comment>username_2: Here's a procedural way to retrieve the matching values.
First, we retrieve all variables with `$Checkbox` in the name, then keep only the ones whose `CheckState` is `Checked`. Then, for each, we substring out the number and append that to the string `CB` to retrieve the value of the corresponding `$CB#` variable.
Then we store that in an array list and join with a comma at the end, for the output you see below.
```
$properties = New-object System.Collections.ArrayList
$CheckedBoxes = Get-Variable checkbox* | Where-object {$_.Value.CheckState -eq 'Checked'}
ForEach ($Checked in $CheckedBoxes){
$CBNumber = $Checked.Name.Split('Checkbox')[-1]
$Properties.Add((Get-variable "CB$CBNumber").Value) | out-null
}
$Properties -join ','
>GL Accountant, Pricing, Credit Management
```
All of this code should be used where you have this code today:
```
if ($Checkbox1.Checked){$global:AccessResponsibilities = $CB1}
#Insert new code here <---------------
elseif ($Checkbox2.Checked){$global:AccessResponsibilities = $CB2}
elseif ($Checkbox2.Checked){$global:AccessResponsibilities = $CB3}
```
Upvotes: 2 <issue_comment>username_3: here's a small loop that should work for username_1's answer.
```
foreach ($i in 1..12)
{
if ((Get-Variable -Name Checkbox$i).Value.Checked)
{
$Global:AccessResponsibilities += ((Get-Variable -Name CB$i).Value + "`r`n")
}
}
```
Upvotes: 2 |
2018/03/22 | 1,294 | 3,816 | <issue_start>username_0: I find it impossible to do a subtraction between an immutable array reference and a mutable one using the [ndarray crate](https://crates.io/crates/ndarray):
```
#[macro_use]
extern crate ndarray;
use ndarray::Array1;
fn main() {
let a: &Array1<f64> = &array![3.0, 2.0, 1.0];
let b: &mut Array1<f64> = &mut array![1.0, 1.0, 1.0];
let c = a - &b.view(); // This compiles
let d = a - b; // This fails to compile
}
```
The error message I get is:
```none
let d = a - b;
| ^ no implementation for `f64 - &mut ndarray::ArrayBase<ndarray::OwnedRepr<f64>, ndarray::Dim<[usize; 1]>>`
```
I don't understand what these two types mean, but is there any special reason why this is not implemented?<issue_comment>username_1: The [`Sub` trait in `ndarray::ArrayBase`](https://docs.rs/ndarray/0.11.2/ndarray/struct.ArrayBase.html#impl-Sub%3C%26%27a%20ArrayBase%3CS2%2C%20E%3E%3E-1) is not implemented for `&mut` arguments (it is for immutable references to other arrays). It is not needed because the right-handed value should not be modified. The second operand is a `&mut Array`, and it ends up being one of the cases where type weakening into an immutable reference does not happen automatically. Still, you can explicitly reborrow the value:
```
let a: &Array1<f64> = &array![3.0, 2.0, 1.0];
let b: &mut Array1<f64> = &mut array![1.0, 1.0, 1.0];
let c = a - &b.view();
let d = a - &*b;
```
This assumes that `a` and `b` were obtained elsewhere. In fact, you can make these variables own the arrays instead:
```
let a: Array1<f64> = array![3.0, 2.0, 1.0];
let mut b: Array1<f64> = array![1.0, 1.0, 1.0];
let c = &a - &b.view();
let d = &a - &b;
```
Upvotes: 2 <issue_comment>username_2: The full error message is:
```none
error[E0277]: the trait bound `f64: std::ops::Sub<&mut ndarray::ArrayBase<ndarray::OwnedRepr<f64>, ndarray::Dim<[usize; 1]>>>` is not satisfied
--> src/main.rs:10:15
|
10 | let d = a - b;
| ^ no implementation for `f64 - &mut ndarray::ArrayBase<ndarray::OwnedRepr<f64>, ndarray::Dim<[usize; 1]>>`
|
= help: the trait `std::ops::Sub<&mut ndarray::ArrayBase<ndarray::OwnedRepr<f64>, ndarray::Dim<[usize; 1]>>>` is not implemented for `f64`
= note: required because of the requirements on the impl of `std::ops::Sub<&mut ndarray::ArrayBase<ndarray::OwnedRepr<f64>, ndarray::Dim<[usize; 1]>>>` for `&ndarray::ArrayBase<ndarray::OwnedRepr<f64>, ndarray::Dim<[usize; 1]>>`
error[E0277]: the trait bound `&mut ndarray::ArrayBase<ndarray::OwnedRepr<f64>, ndarray::Dim<[usize; 1]>>: ndarray::ScalarOperand` is not satisfied
--> src/main.rs:10:15
|
10 | let d = a - b;
| ^ the trait `ndarray::ScalarOperand` is not implemented for `&mut ndarray::ArrayBase<ndarray::OwnedRepr<f64>, ndarray::Dim<[usize; 1]>>`
|
= note: required because of the requirements on the impl of `std::ops::Sub<&mut ndarray::ArrayBase<ndarray::OwnedRepr<f64>, ndarray::Dim<[usize; 1]>>>` for `&ndarray::ArrayBase<ndarray::OwnedRepr<f64>, ndarray::Dim<[usize; 1]>>`
```
The first error message is quite obscure for me but the second is more enlightening:
>
> the trait `ndarray::ScalarOperand` is not implemented for `&mut ndarray::ArrayBase<ndarray::OwnedRepr<f64>, ndarray::Dim<[usize; 1]>>`
>
>
>
The reason of this error is that Rust does not perform coercions when matching traits: If there is an impl for some type `U` and `T` coerces to `U`, that does not constitute an implementation for `T` (copied directly from [the Nomicon](https://doc.rust-lang.org/beta/nomicon/coercions.html))
Without entering into ndarray internals, this is a minimal example reproducing the same problem:
```
trait MyShape {}
fn foo<T: MyShape>(x: T) -> T {
x
}
impl<'a> MyShape for &'a i32 {}
fn main() {
let mut num = 1;
let arg: &mut i32 = &mut num;
foo(arg);
}
```
Result:
```none
error[E0277]: the trait bound `&mut i32: MyShape` is not satisfied
--> src/main.rs:12:5
|
12 | foo(arg);
| ^^^ the trait `MyShape` is not implemented for `&mut i32`
|
= help: the following implementations were found:
<&'a i32 as MyShape>
= note: required by `foo`
```
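Given that constraint, the minimal example compiles once the `&mut i32` is explicitly reborrowed as a `&i32` — a sketch (the `demo` helper is illustrative, not from the original):

```rust
trait MyShape {}
impl<'a> MyShape for &'a i32 {}

fn foo<T: MyShape>(x: T) -> T {
    x
}

// Reborrowing `&mut i32` as `&i32` matches `impl MyShape for &'a i32`.
fn demo() -> i32 {
    let mut num = 1;
    let arg: &mut i32 = &mut num;
    *foo(&*arg) // compiles: `&*arg` is a plain `&i32`
}

fn main() {
    println!("{}", demo());
}
```

The same explicit reborrow is what `a - &*b` does in the `ndarray` case above.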
Upvotes: 1 |
2018/03/22 | 1,201 | 3,610 | <issue_start>username_0: There is a problem with the page permalink in WordPress: when I add a page name, its permalink has to show like ["www.example.com/page-name](http://www.example.com/page-name)", but instead it shows like
[www.example.com/page-name-2](http://www.example.com/page-name) or -3. When I try to change it, it turns back. I think maybe this is a page, but when I access the link it shows that it is a media file permalink of an image. Is there any solution to this sort of problem? |
2018/03/22 | 1,791 | 5,453 | <issue_start>username_0: This is about Eclipse Jee Oxygen (eclipse.buildId = 4.7.3.M20180301-0715).
Until this morning everything was running fine, then an Eclipse update popped up.
I decided to take the update and after restarting Eclipse I tried to debug a plugin project I'm working on. When I clicked `Debug as > Eclipse Application` menu, the other Eclipse instance started to be loaded, but it was interrupted with the following error stack:
```
java.lang.NullPointerException
at org.eclipse.e4.ui.internal.workbench.ModelServiceImpl.(ModelServiceImpl.java:122)
at org.eclipse.e4.ui.internal.workbench.swt.E4Application.createDefaultContext(E4Application.java:511)
at org.eclipse.e4.ui.internal.workbench.swt.E4Application.createE4Workbench(E4Application.java:204)
at org.eclipse.ui.internal.Workbench.lambda$3(Workbench.java:614)
at org.eclipse.core.databinding.observable.Realm.runWithDefault(Realm.java:336)
at org.eclipse.ui.internal.Workbench.createAndRunWorkbench(Workbench.java:594)
at org.eclipse.ui.PlatformUI.createAndRunWorkbench(PlatformUI.java:148)
at org.eclipse.ui.internal.ide.application.IDEApplication.start(IDEApplication.java:151)
at org.eclipse.equinox.internal.app.EclipseAppHandle.run(EclipseAppHandle.java:196)
at org.eclipse.core.runtime.internal.adaptor.EclipseAppLauncher.runApplication(EclipseAppLauncher.java:134)
at org.eclipse.core.runtime.internal.adaptor.EclipseAppLauncher.start(EclipseAppLauncher.java:104)
at org.eclipse.core.runtime.adaptor.EclipseStarter.run(EclipseStarter.java:388)
at org.eclipse.core.runtime.adaptor.EclipseStarter.run(EclipseStarter.java:243)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.eclipse.equinox.launcher.Main.invokeFramework(Main.java:653)
at org.eclipse.equinox.launcher.Main.basicRun(Main.java:590)
at org.eclipse.equinox.launcher.Main.run(Main.java:1499)
at org.eclipse.equinox.launcher.Main.main(Main.java:1472)
```
I tried to revert the update but now nothing works anymore. This happens only when debugging an Eclipse Application; other features of Eclipse *seem* to be ok.
Any ideas? |
2018/03/22 | 1,240 | 3,679 | <issue_start>username_0: I need to set a rectangle border around an `ImageView` on Android.
I do not know exact ratio of the image, so set height and width to `wrap_content`. The `ImageView` is limited by top and bottom views.
How to draw the border exactly around the image? Not full ImageView?
Currently it gives the following result:
[](https://i.stack.imgur.com/uvyoa.png)
My XMLs.
Layout
```
xml version="1.0" encoding="utf-8"?
```
border_course_logo.xml
```
xml version="1.0" encoding="UTF-8"?
```
 |
2018/03/22 | 586 | 2,275 | <issue_start>username_0: I am new to Jenkins pipeline scripting. I am developing a Jenkins pipeline in which the Jenkins code is as follows. The logic looks like this:
```
node{
a=xyz
b=abc
//defined some global variables
stage('verify'){
verify("${a}","${b}")
abc("${a}","${b}")
echo "changed values of a and b are ${a} ${b}"
}}
def verify(String a, String b)
{ //Some logic where the initial value of a and b gets changed at the end of this function}
def verify(String a, String b){
//I need to get the changed value from verify function and manipulate that value in this function}
```
I need to pass the initial `a` and `b`(multiple) values to the `verify` function and pass the changed value on to the other function. I then need to manipulate the changed value, and pass it to the stage in the pipeline where echo will display the changed values. How can I accomplish all this?<issue_comment>username_1: Ok, here's what I meant:
```
def String verify_a(String a) { /* stuff */ }
def String verify_b(String b) { /* stuff */ }
node {
String a = 'xyz'
String b = 'abc'
stage('verify') {
a = verify_a(a)
b = verify_b(b)
echo "changed values of a and b are $a $b"
}
stage('next stage') {
echo "a and b retain their changed values: $a $b"
}
}
```
Upvotes: 1 <issue_comment>username_2: The easiest way I have found to pass variables between stages is to just use Environment Variables. The one - admittedly major - restriction is that they can only be Strings. But I haven't found that to be a huge issue, especially with liberal use of the `toBoolean()` and `toInteger()` functions. If you need to be passing maps or more complex objects between stages, you might need to build something with external scripts or writing things to temporary files (make sure to stash what you need if there's a chance you'll switch agents). But env vars have served me well for almost all cases.
[This article is, as its title implies, the definitive guide on environment variables in Jenkins.](https://e.printstacktrace.blog/jenkins-pipeline-environment-variables-the-definitive-guide/) You'll see a comment there from me that it's really helped me grok the intricacies of Jenkins env vars.
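For illustration only, a minimal scripted-pipeline sketch of that idea (the variable names and values here are hypothetical, not from the question — and remember env vars only hold strings):

```groovy
node {
    stage('verify') {
        // env vars are always strings, but they survive across stages
        env.CHANGED_A = 'xyz-verified'   // hypothetical result of a verify step
        env.RETRIES   = '3'
    }
    stage('next stage') {
        echo "changed value of a is ${env.CHANGED_A}"
        // convert back from string where needed
        int retries = env.RETRIES.toInteger()
        echo "retries as an integer: ${retries}"
    }
}
```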
Upvotes: 0 |
2018/03/22 | 654 | 1,483 | <issue_start>username_0: I have this file:
```
this is line 1 192.168.1.1
this is line 2 192.168.1.2
this is line 3 192.168.1.2
this is line 4 192.168.1.1
this is line 5 192.168.1.2
```
I would like to get, in a bash script, all lines (with tabs) which contain, for example, the pattern "192.168.1.1". This way, I would get:
```
this is line 1 192.168.1.1
this is line 4 192.168.1.1
```
But i don't want this result:
```
this is line 1 192.168.1.1 this is line 4 192.168.1.1
```
I tried it with sed without success:
```
var='192.168.1.1'
body=`sed -n -e '/$var/p' $file`
echo $body
```
Thanks beforehand!<issue_comment>username_1: `awk` to the rescue!
```
$ awk -v var='192.168.1.1' '$NF==var' file
this is line 1 192.168.1.1
this is line 4 192.168.1.1
```
Upvotes: 1 <issue_comment>username_2: Following `sed` may help you on same.
```
var="your_ip"
sed -n "/$var/p" Input_file
```
Or in `awk` following will help on same.
```
var="your_ip"
awk -v var="$var" 'var==$NF' Input_file
```
Upvotes: 0 <issue_comment>username_3: A simple `grep` command could do that:
```
grep 192.168.1.1 filename
```
Or a more "complex" `grep` command could do that and write it to another file:
```
grep 192.168.1.1 filename > new_filename
```
---
Hope this helps!
Upvotes: 0 <issue_comment>username_4: Assuming the data format is well defined like you said:
```
"this is line "
```
you could do simply this:
```
cat file | egrep "192\.168\.1\.1$"
```
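One caveat with the plain `grep`/`egrep` patterns above: the dots in an IP are regex wildcards, and an unquoted `$body` flattens the newlines the question complains about. A hedged sketch (the `hosts.txt` name is made up for the demo) that treats the IP as a fixed string and quotes the expansion:

```shell
#!/bin/sh
var='192.168.1.1'
file='hosts.txt'   # made-up file name for the demo

printf 'this is line 1\t192.168.1.1\nthis is line 2\t192.168.1.2\nthis is line 4\t192.168.1.1\n' > "$file"

# -F matches a fixed string, so '.' cannot match arbitrary characters
body=$(grep -F "$var" "$file")

# quoting "$body" preserves the tabs and line breaks
echo "$body"
```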
Upvotes: 0 |
2018/03/22 | 802 | 2,409 | <issue_start>username_0: I'm having a serious issue here. I have a function that pads a user's input with zeros. For example, if I enter 88 it will normalize it to:
00000088. My function is this:
```
export default length => (value) => {
const noLeadingZeros = value.toString().replace(/(0+)/, '');
if (noLeadingZeros.length === 0) {
return value;
}
return padLeft(noLeadingZeros, length);
};
```
with padleft is:
```
export default (num, len) => (
len > num.toString().length ? '0'.repeat(len - num.toString().length) + num
: num
);
```
My problem is that if i entered something like this:
80112345, it converts it to 08112345. Any ideas?<issue_comment>username_1: this looks wrong :
```
const noLeadingZeros = value.toString().replace(/(0+)/, '');
```
you are deleting every zero in your number... even those inside!
You can use this regex instead of `/(0+)/` in your code :
```
/\b(0+)/
```
explanation : the `\b` ensures the zeros are at the beginning of a word
or this
```
/^(0+)/
```
explanation : the `^` ensures this is the beginning of the string
Upvotes: 2 [selected_answer]<issue_comment>username_2: In your replace, you're replacing all the zeros in the number not just those on the left side, and even if there are zeros on the left side, why remove them if you're just going to add them back. You could use a for loop that pads the string with a zero `n` times (where `n` is the number of digits that the string needs to have length 8), or (thanks to a comment by @AndrewBone), you can use the `.repeat()` function that does this for you:
```js
function padLeft(value, len) {
return '0'.repeat(String(value).length < len ? len - String(value).length : 0) + value;
}
console.log(padLeft("", 8));
console.log(padLeft("88", 8));
console.log(padLeft("00088", 8));
console.log(padLeft("12312388", 8));
console.log(padLeft("00000000", 8));
```
Upvotes: 2 <issue_comment>username_3: Just use a RegEx to assert that the number is a valid number.
```
/0+/
```
Then get the number of digits in the number:
```
('' + num).length;
```
Then put the whole thing together
```
var paddedNum ='';
for (var i=0;i<8-len;i++) {
paddedNum += "0";
}
paddedNum += num;
```
Upvotes: 1 <issue_comment>username_4: Using `slice`:
```js
let str = '00000000' + 88;
let resp = str.slice(-8, str.length)
console.log(resp) // 00000088
```
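Side note, not from the original answers: if the environment supports ES2017, `String.prototype.padStart` does the left-padding in one call:

```js
// padStart pads on the left with the given character up to the target length
const padLeft = (num, len) => String(num).padStart(len, '0');

console.log(padLeft(88, 8));       // "00000088"
console.log(padLeft(80112345, 8)); // "80112345" (already 8 digits, unchanged)
```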
Upvotes: 2 |
2018/03/22 | 787 | 2,533 | <issue_start>username_0: I am running into an issue where, when I am attempting to override a function, it is telling me that there is no method in the superclass. The subclass is in an XCTest. When I subclass in the regular project it works perfectly, but the XCTest does not work for some reason. Below is the code:
**SuperClass**
```
class BackupServerCell: UITableViewCell {
@IBOutlet var serverNameLabel: UILabel!
@IBOutlet var serverDescriptionLabel: UILabel!
func configCell(with server: VeeamBackupServer) {
}
}
```
**Subclass** - Located in the XCTest file
```
class MockBackupServerCell : BackupServerCell {
var configCellGotCalled = false
override func configCell(with server: VeeamBackupServer) {
configCellGotCalled = true
}
}
```
 |
2018/03/22 | 664 | 2,051 | <issue_start>username_0: I've made a simple PHP based exam for my students.
The results are recorded as "q1", "q2", "q3", "q4" and "q5" in the database.
Correct answers are recorded as "yes" and incorrect answers as "no".
There are 5 questions and I want the total score to be 100.
All the functions work as I'd like them to, but the problem is, I have no idea how to sort the results from high to low. Any help is highly appreciated!
```
$result = mysqli_query($db,"SELECT * FROM table");
while($row = mysqli_fetch_array($result)){
$score = 0;
echo ' '.$row["name"] . ' ' . $score;
}
```<issue_comment>username_1: Instead of the `while` and the inner `if`, you could do all with the `select` and `order by` the result column eg:
```
select name, if(q1='yes', 20, 0) +
if(q2='yes', 20, 0) +
if(q3='yes', 20, 0) +
if(q4='yes', 20, 0) +
if(q5='yes', 20, 0) result
from table
order by result desc
```
Upvotes: 4 [selected_answer]<issue_comment>username_2: ```
$result = mysqli_query($db,"SELECT * FROM table");
//I suggest you first store the information into an array:
$scores = [];
while($row = mysqli_fetch_array($result)){
$score = 0;
//Calculate the score
if ($row["q1"] === "yes") {$score=$score+20;}
if ($row["q2"] === "yes") {$score=$score+20;}
if ($row["q3"] === "yes") {$score=$score+20;}
if ($row["q4"] === "yes") {$score=$score+20;}
if ($row["q5"] === "yes") {$score=$score+20;}
//Push it into the array
array_push($scores, array("name" => $row["name"],
"image" => 'src="images/'.$row["name"].'.jpg"',
"score" => $score
));
}
```
Sort the array: ([Sort Multi-dimensional Array by Value](https://stackoverflow.com/questions/2699086/sort-multi-dimensional-array-by-value))
```
usort($scores, function($a, $b) {
return $b['score'] - $a['score'];
});
//Do as you wish with the array
print_r($scores);
```
Upvotes: 0 |
2018/03/22 | 958 | 3,260 | <issue_start>username_0: I essentially have the same situation as a lot of other people.
Through extensive searching on Google I was able to come up with several different methods that people claim work, but I have yet to get any of them to work correctly. I don't yet know enough about jQuery to fully understand how to write this from scratch, so for now I rely on really good examples.
What I've been trying to work with (based on examples I've found and tried) is this:
```
$(document).ready(function() {
('.box').hide();
('#dropdown').change(function() {
('#divarea1')[ ($(this).val() == 'area1') ? 'hide' : 'show' ]()
('#divarea2')[ ($(this).val() == 'area2') ? 'hide' : 'show' ]()
('#divarea3')[ ($(this).val() == 'area3') ? 'hide' : 'show' ]()
});
});
<select id="dropdown">
  <option value="">Choose</option>
  <option value="area1">DIV Area 1</option>
  <option value="area2">DIV Area 2</option>
  <option value="area3">DIV Area 3</option>
</select>
<div id="divarea1" class="box">DIV Area 1</div>
<div id="divarea2" class="box">DIV Area 2</div>
<div id="divarea3" class="box">DIV Area 3</div>
```
Note: I am using brackets rather than the less-than and greater-than signs around html to display correctly in this message.
What I get when I test this:
On first load with nothing selected => No DIV is displayed.
When I select DIV Area 1 => DIV Area 2 and 3 are displayed.
When I select DIV Area 2 => DIV Area 1 and 3 are displayed.
When I select DIV Area 3 => DIV Area 1 and 2 are displayed.
My brain is fried for the day. What can I do to fix this?<issue_comment>username_1: Note that you're missing the `$` reference to jQuery in the JS code. I'm going to assume that's just a typo in the question.
The simplest way to achieve this is to `show()` all the divs on change of the select, then use the chosen value to hide the relevant single `div` based on its `id`. Something like this:
```js
$(document).ready(function() {
$('#dropdown').change(function() {
$('.box').show().filter('#div' + $(this).val()).hide();
});
});
```
```css
.box {
display: none;
}
```
```html
<select id="dropdown">
  <option value="">Choose</option>
  <option value="area1">DIV Area 1</option>
  <option value="area2">DIV Area 2</option>
  <option value="area3">DIV Area 3</option>
</select>
<div id="divarea1" class="box">DIV Area 1</div>
<div id="divarea2" class="box">DIV Area 2</div>
<div id="divarea3" class="box">DIV Area 3</div>
```
Upvotes: 3 [selected_answer]<issue_comment>username_2: You are missing `$` reference.
I tried your code in `JsFiddle` with `$`, it works fine
```
$(document).ready(function() {
$('.box').hide();
});
//just tried to move it out of document. ready. But it makes no difference
$('#dropdown').change(function() {
$('#divarea1')[ ($(this).val() == "area1") ? 'hide' : 'show' ]();
$('#divarea2')[ ($(this).val() == "area2") ? 'hide' : 'show' ]();
$('#divarea3')[ ($(this).val() == "area3") ? 'hide' : 'show' ]();
});
```
Upvotes: 0 <issue_comment>username_3: as others noted, you missed a `$`.
BASED on your comment above: *`On first load with nothing selected => No DIV is displayed. When I select DIV Area 1 => DIV Area 2 and 3 are displayed. When I select DIV Area 2 => DIV Area 1 and 3 are displayed. When I select DIV Area 3 => DIV Area 1 and 2 are displayed`*
**If you want to show the other DIVs and NOT the one selected... you can use this code:**
```
$(function() {
$('.box').hide();
$('#dropdown').change(function() {
var v = $(this).val();
$('.box').hide();
$(".box:not(#div"+v+")").show();
});
});
```
See FIDDLE: <https://jsfiddle.net/dwn2bwhn/19/>
Upvotes: 0 |
2018/03/22 | 634 | 2,685 | <issue_start>username_0: The fictitious situation is the following: I have a mobile application that has been published to the store for about a year now (both for iOS and Android). I’m preparing a new version of the application. Some of the api’s in the back end are obsolete or deprecated.
The problem is that users who do not update the app to the new version will experience problems with the operation of the app, because the back-end APIs have been replaced or removed.
The question is: how do we deal with this situation before it becomes a problem?
Are there any guidelines from apple or google for obsolete functionality between different versions of the application?<issue_comment>username_1: As **<NAME>** said in this post: <https://stackoverflow.com/a/18756151/8354952>.
>
> Almost all changes to the iOS versions are additive and hence an
> application build using lower version still runs on the higher iOS
> version. But we also need to notice that APIs are introduced or
> deprecated and behaviors of existing APIs may occasionally change.
>
>
>
Basically, an old version of the application may still run on the latest system version, but some special APIs or classes may be deprecated, and this can cause weird behavior or crashes. That is why Apple and Google also recommend that users update old versions of applications.
For the APIs from our own server, we can handle the compatibility ourselves: the app uploads its current version identifier, and the server can return different data depending on that identifier.
Upvotes: -1 <issue_comment>username_2: @Dimitris, here you need to provide a force update to the old apps. This can be done using an app configuration file containing JSON with the following keys:
```json
{
"server":{
"app-server1-base-url":"http://",
"status":{
"is-running":true,
"message":"We are busy upgrading XYZ server with technology and features. We will be back soon. We apologize for the inconvenience and appreciate your patience. Thank you for using XYZ!"
},
"force-update":{
"status":false,
"message":"Please download the latest version of XYZ from App Store to continue using the app. Thank You!"
}
}
}
```
Here the 'app-server1-base-url' key will be the base URL for the app. You can put all the service URLs in this file.
Case 1:
Your app will check this file at launch to see whether a force update is available for the app.
Case 2:
API versioning can be done if you want to handle it in the backend.
Note: Please keep configuration file on services like AWS S3 etc.
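To make the flow concrete, here is a minimal hypothetical Python sketch of the client-side check (key names follow the JSON above; fetching the file from S3 is left out):

```python
import json

# Pretend this was just downloaded from the hosted configuration file.
config_text = """
{
  "server": {
    "app-server1-base-url": "http://",
    "status": {"is-running": true, "message": "We are busy upgrading."},
    "force-update": {"status": false, "message": "Please download the latest version."}
  }
}
"""

server = json.loads(config_text)["server"]

# Case 1: backend is down for maintenance -> show the status message.
in_maintenance = not server["status"]["is-running"]

# Force update: old clients must upgrade before continuing.
must_update = server["force-update"]["status"]

print(in_maintenance, must_update)  # False False
```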
Upvotes: 1 |
2018/03/22 | 1,065 | 3,300 | <issue_start>username_0: When building a python gensim word2vec [model](https://rare-technologies.com/word2vec-tutorial/), is there a way to see a doc-to-word matrix?
With input of `sentences = [['first', 'sentence'], ['second', 'sentence']]` I'd see something like\*:
```
first second sentence
doc0 1 0 1
doc1 0 1 1
```
\*I've illustrated 'human readable', but I'm looking for a scipy (or other) matrix, indexed to `model.wv.index2word`.
And, can that be transformed into a word-to-word matrix (to see co-occurences)? Something like:
```
first second sentence
first 1 0 1
second 0 1 1
sentence 1 1 2
```
I've already implemented something like [word-word co-occurrence matrix](https://stackoverflow.com/questions/35562789/word-word-co-occurrence-matrix) using CountVectorizer. It works well. However, I'm already using gensim in my pipeline and speed/code simplicity matter for my use-case.<issue_comment>username_1: Given a corpus that is a list of lists of words, what you want to do is create a Gensim Dictionary, change your corpus to bag-of-words and then create your matrix :
```
from gensim.matutils import corpus2csc
from gensim.corpora import Dictionary
# somehow create your corpus
dct = Dictionary(corpus)
bow_corpus = [dct.doc2bow(line) for line in corpus]
term_doc_mat = corpus2csc(bow_corpus)
```
Your `term_doc_mat` is a SciPy compressed sparse column (CSC) matrix. If you want a term-term matrix, you can always multiply it by its transpose, i.e.:
```
import numpy as np
term_term_mat = np.dot(term_doc_mat, term_doc_mat.T)
```
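For instance, on the two-sentence corpus from the question (rows ordered `first`, `second`, `sentence`), the transpose product reproduces the word-to-word table the question asks for — a small sketch with SciPy directly. Note this works out exactly here because every count is 0 or 1; with repeated words inside a document you would binarize one factor first, as the next answer explores:

```python
import numpy as np
from scipy.sparse import csc_matrix

# Term-document matrix for [['first', 'sentence'], ['second', 'sentence']]:
# rows are terms, columns are documents, matching what corpus2csc produces.
term_doc = csc_matrix(np.array([[1, 0],    # first
                                [0, 1],    # second
                                [1, 1]]))  # sentence

term_term = term_doc.dot(term_doc.T)
print(term_term.toarray())
# [[1 0 1]
#  [0 1 1]
#  [1 1 2]]
```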
Upvotes: 2 <issue_comment>username_2: The doc-word to word-word transform turns out to be more complex (for me at least) than I'd originally supposed. `np.dot()` is a key to its solution, but I need to apply a mask first. I've created a more complex example for testing...
Imagine a doc-word matrix
```
# word1 word2 word3
# doc0 3 4 2
# doc1 6 1 0
# doc3 8 0 4
```
* in docs where word2 occurs, word1 occurs 9 times
* in docs where word2 occurs, word2 occurs 5 times
* in docs where word2 occurs, word3 occurs 2 times
So, when we're done we should end up with something like the below (or its transpose). Reading down the columns, the word-word matrix becomes:
```
# word1 word2 word3
# word1 17 9 11
# word2 5 5 4
# word3 6 2 6
```
A straight `np.dot()` product yields:
```
import numpy as np
doc2word = np.array([[3,4,2],[6,1,0],[8,0,4]])
np.dot(doc2word,doc2word.T)
# array([[29, 22, 32],
# [22, 37, 48],
# [32, 48, 80]])
```
which implies that word1 occurs with itself 29 times.
But if, instead of multiplying doc2word times itself, I first build a mask, I get closer. Then I need to reverse the order of the arguments:
```
import numpy as np
doc2word = np.array([[3,4,2],[6,1,0],[8,0,4]])
# a mask where all values greater than 0 are true
# so when this is multiplied by the orig matrix, True = 1 and False = 0
doc2word_mask = doc2word > 0
np.dot(doc2word.T, doc2word_mask)
# array([[17, 9, 11],
# [ 5, 5, 4],
# [ 6, 2, 6]])
```
I've been thinking about this for too long....
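Putting the pieces together, a small end-to-end sketch in plain NumPy (no gensim) that builds the doc-word matrix from the question's `sentences` and then applies the mask trick; it reproduces both tables from the question:

```python
import numpy as np
from collections import Counter

sentences = [['first', 'sentence'], ['second', 'sentence']]
vocab = sorted({w for s in sentences for w in s})  # ['first', 'second', 'sentence']
index = {w: i for i, w in enumerate(vocab)}

# doc-word count matrix: one row per document, one column per word
doc_word = np.zeros((len(sentences), len(vocab)), dtype=int)
for d, sent in enumerate(sentences):
    for w, c in Counter(sent).items():
        doc_word[d, index[w]] = c
# [[1 0 1]
#  [0 1 1]]

# mask, then dot with the transpose, exactly as above
mask = doc_word > 0
word_word = np.dot(doc_word.T, mask)
# [[1 0 1]
#  [0 1 1]
#  [1 1 2]]
```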
Upvotes: 1 [selected_answer] |
2018/03/22 | 768 | 2,594 | <issue_start>username_0: I am looking at visualising neo4j graph data in Tableau but Tableau requires the coordinates to be included in the data before it can plot the nodes in a graph.
By coordinates I just mean an x & y value so that I can plot nodes and edges on a 2d graph. There aren't any geographic/address requirements
Is there any way to return coordinates in a result set using cypher? Something along the lines of
```
MATCH (j:Job)-[:PARENT_OF]->(w:Workflow)
RETURN j, coords(j), w, coords(w)
```
There is nothing obvious that I can see having searched the web and also stackoverflow. I have searched the [APOC procedures](https://neo4j-contrib.github.io/neo4j-apoc-procedures/index33.html#_overview_of_apoc_procedures_functions) with terms such as "spatial", "coordinates" and "position" but so far I've had no joy.
I'm using neo4j community 3.3.1
EDIT: Looking at [howto graph visualization step by step](https://neo4j.com/developer/guide-data-visualization/#_howto_graph_visualization_step_by_step) this suggests that you need to use external libraries in order to do this rather than getting the information from neo4j. I appreciate the visualisations are fluid and can be moved about but I would have thought that given the neo4j server has crunched the geospatial algorithms already, it would be something that could somehow be output via a function or procedure.<issue_comment>username_1: APOC does have a [couple of procedures](https://neo4j-contrib.github.io/neo4j-apoc-procedures/#_spatial_functions) for getting the coordinates of an address: `apoc.spatial.geocodeOnce` and `apoc.spatial.geocode`.
For example, this query:
```
CALL apoc.spatial.geocodeOnce('21 <NAME> 44000 NANTES FRANCE') YIELD location
RETURN location;
```
produces this result:
```
{
"description": "21, <NAME>, Talensac - <NAME>, Hauts-Pavés - Saint-Félix, Nantes, Loire-Atlantique, Pays de la Loire, France métropolitaine, 44000, France",
"latitude": 47.2221667,
"longitude": -1.5566625
}
```
Modifying the example in your question (assuming that `j` and `w` have `address` properties), this should work:
```
MATCH (j:Job)-[:PARENT_OF]->(w:Workflow)
CALL apoc.spatial.geocodeOnce(j.address) YIELD location AS jLoc
CALL apoc.spatial.geocodeOnce(w.address) YIELD location AS wLoc
RETURN j, jLoc, w, wLoc;
```
Upvotes: 2 <issue_comment>username_2: When I asked this question I didn't understand that what actually happens is a frontend algorithm sorts and orders the nodes dynamically and therefore no coordinates are provided.
Upvotes: 1 [selected_answer] |
2018/03/22 | 542 | 1,789 | <issue_start>username_0: I have a server which requires the ID and Password to be encoded in Base64. Meaning instead of `Username` you have to encode it into Base64 first and pass the encoded String in.
And then in my Java client I have a `javax.mail.Authenticator`. I manually encoded the ID and Password into Base64, saved them to Strings and initialized the Authenticator with those Strings. But then the ID and password are wrong, even though I checked again and again.
What could be wrong in this case ?
Edit : The other question was about body part, here it is about `Authenticator` |
2018/03/22 | 720 | 2,389 | <issue_start>username_0: I have a problem when I try to change the options of my select and its value at the same time. If I use `v-model`, it works properly but if I use `v-bind:value` + `v-on:change`, it will not work.
Here is a js fiddle that will illustrate the problem : <https://jsfiddle.net/2vcer6dz/18/>
The first time you click on the button "change", only the first select value will be 3. If you reclick they all become 3.
**Html**
```
{{item.Text}}
{{item.Text}}
{{item.Text}}
```
**Javascript**
```
Vue.component('select-option', {
template: '#template-select-option',
props: ['value', 'options'],
methods: {
update: function (value) {
this.$emit('input', value);
}
}
});
new Vue({
el: '#app',
data: {
value: 1,
options: [{Value:1, Text:1}, {Value:2, Text:2}]
},
methods: {
change: function () {
this.options = [{Value:1, Text:1}, {Value:2, Text:2}, {Value:3, Text:3}];
this.value = 3;
}
}
});
```
**Expected result**
All selects should have the value "3" when you click on the button "change"<issue_comment>username_1: Changing the options and the value at the same time is confusing Vue. This is probably a minor bug in Vue. If you use `$nextTick` to push the value change off to the next update cycle, they all work.
```
change: function () {
this.options = [{Value:1, Text:1}, {Value:2, Text:2}, {Value:3, Text:3}];
this.$nextTick(() => {
this.value = 3;
});
}
```
Upvotes: 3 [selected_answer]<issue_comment>username_2: It seems that this is a [known bug](https://github.com/vuejs/vue/issues/4810) which was closed because a workaround was found.
The workaround is to declare another property and use it as the `v-model` target. This solution is easier to implement inside a component.
<https://jsfiddle.net/6gbfhuhn/8/>
**Html**
```
{{item.Text}}
```
**Javascript**
```
Vue.component('select-option', {
template: '#template-select-option',
props: ['value', 'options'],
computed: {
innerValue: {
get: function() { return this.value; },
set: function(newValue) { this.$emit('input', newValue); }
}
}
});
```
Note: in the GitHub thread it is suggested to use a computed property instead, but Vue will throw a warning every time you change the value in your dropdown if the computed property doesn't have a setter.
Upvotes: 0 |
2018/03/22 | 448 | 1,688 | <issue_start>username_0: I need to collect different information about WebLogic servers in a production environment. I am currently using the RESTful management services API to collect a lot of information, like health and status.
But this API can't give me information like stuck threads.
My Question is What is the best way to collect threadstuck using python or another langage. |
2018/03/22 | 754 | 2,245 | <issue_start>username_0: I have a python dataframe which has a filename column that looks like this:
```
Filename
/var/www/html/projects/Bundesliga/Match1/STAR_SPORTS_2-20170924-200043-210917-00001.jpg
/var/www/html/projects/Bundesliga/Match1/STAR_SPORTS_2-20170924-200043-210917-00001.jpg
```
From the Filename column I want to replace the directory name with a new destination directory name.
```
dst = "/home/mycomp/Images'
```
I have tried the following:
```
df['Filename'] = df['Filename'].str.replace(os.path.dirname(df['Filename']), dst)
```
But I am getting the following error.
```
Traceback (most recent call last):
File "", line 1, in
File "/usr/lib/python2.7/posixpath.py", line 129, in dirname
i = p.rfind('/') + 1
File "/usr/local/lib/python2.7/dist-packages/pandas/core/generic.py", line 3614, in \_\_getattr\_\_
return object.\_\_getattribute\_\_(self, name)
AttributeError: 'Series' object has no attribute 'rfind'
```<issue_comment>username_1: The problem is in `os.path.dirname(df['Filename'])`: you are passing a `Series` here where it expects a `str`. What you can do is `filenames = df['Filename'].str.split('/').str[-1]` to get the filename without the directory and then `dst + '/' + filenames` to get the new paths. Better to define `dst = '/home/mycomp/Images/'` (with a trailing slash).
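A quick sketch of that suggestion on the question's data (this assumes the column looks exactly as shown in the question):

```python
import pandas as pd

dst = "/home/mycomp/Images"
df = pd.DataFrame({"Filename": [
    "/var/www/html/projects/Bundesliga/Match1/STAR_SPORTS_2-20170924-200043-210917-00001.jpg",
]})

filenames = df["Filename"].str.split("/").str[-1]  # keep just the file name
df["Filename"] = dst + "/" + filenames
print(df["Filename"].iloc[0])
# /home/mycomp/Images/STAR_SPORTS_2-20170924-200043-210917-00001.jpg
```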
Upvotes: 1 <issue_comment>username_2: ```
df['Filename'] = df['Filename'].apply(lambda x: x.replace(os.path.dirname(x), dst))
```
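What that lambda does for a single value can be checked with the standard library alone (path taken from the question):

```python
import os

dst = "/home/mycomp/Images"
p = "/var/www/html/projects/Bundesliga/Match1/STAR_SPORTS_2-20170924-200043-210917-00001.jpg"

# replace the directory part of the path, keep the file name
new_path = p.replace(os.path.dirname(p), dst)
print(new_path)
# /home/mycomp/Images/STAR_SPORTS_2-20170924-200043-210917-00001.jpg
```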
Upvotes: 4 [selected_answer]<issue_comment>username_3: Here is one way using regular expression.
```
import os, re
dst = r'/home/mycomp/Images'
paths = '|'.join([re.escape(s) for s in set(df['Filename'].map(os.path.dirname))])
df['Filename'] = df['Filename'].str.replace(paths, dst)
# Filename
# 0 /home/mycomp/Images/STAR_SPORTS_2-20170924-200...
# 1 /home/mycomp/Images/STAR_SPORTS_2-20170924-200...
```
**Explanation**
* Extract all directories, escape special characters, and combine into a single string separated by `|` [regex or]. This ensures *all* paths in the series are replaced.
* Use `os.path.dirname` to extract the correct path across platforms.
* Use `pd.Series.str.replace` with regex to replace all paths with `dst` input.
Upvotes: 0 |
2018/03/22 | 1,722 | 5,283 | <issue_start>username_0: Is it possible to debug `find_library` from CMake?
What I want is a list of considered paths. My use case is a call like
```
find_library (FOO_LIBRARY
NAMES foo foo.so.0)
```
and there is `/lib64/libfoo.so.0` on my system. However CMake does not find it. I checked that `FIND_LIBRARY_USE_LIB64_PATHS` is set to `TRUE`.<issue_comment>username_1: I know this isn't a complete answer, but I had the same problem and found that it was necessary to add debug logging to find\_library in the CMake source code. I submitted a [pull request](https://gitlab.kitware.com/cmake/cmake/merge_requests/2313) (work in progress) to add this to mainstream CMake.
I eliminated some possible sources of errors by logging an error message with some relevant details if `find_library` fails, like this:
```
set(libssl_names
ssl${_OPENSSL_MSVC_ARCH_SUFFIX}${_OPENSSL_MSVC_RT_MODE}
ssl${_OPENSSL_MSVC_RT_MODE}
ssl
ssleay32${_OPENSSL_MSVC_RT_MODE}
ssleay32
)
find_library(SSL_EAY_DEBUG
NAMES ${libssl_names}
NAMES_PER_DIR
PATHS ${OPENSSL_ROOT_DIR}
PATH_SUFFIXES ${_OPENSSL_PATH_SUFFIXES}
NO_DEFAULT_PATH
)
if(NOT SSL_EAY_DEBUG)
message(FATAL_ERROR "OPENSSL_ROOT_DIR is set to '${OPENSSL_ROOT_DIR}', but did not find any file matching ${OPENSSL_ROOT_DIR}/{${_OPENSSL_PATH_SUFFIXES}}/${CMAKE_FIND_LIBRARY_PREFIXES}{${libssl_names}}${CMAKE_FIND_LIBRARY_SUFFIXES}")
endif()
```
Which outputs something like:
```
CMake Error at CMakeLists.txt:526 (message):
OPENSSL_ROOT_DIR is set to '../../Install/openssl/', but did not find any
file matching
../../Install/openssl//{lib/VC/static;VC/static;lib}/lib{ssl64MT;sslMT;ssl;ssleay32MT;ssleay32}.a
```
Apart from semicolons (instead of commas) in the {braced} portions of the above pattern, and regexes if multiple CMAKE\_FIND\_LIBRARY\_SUFFIXES are configured (e.g. `.lib .a` on Windows), this is the correct form for shell expansion to a list of paths, which you can pass to `ls` to check for their existence:
```
$ ls ../../Install/openssl//{lib/VC/static,VC/static,lib}/lib{ssl64MT,sslMT,ssl,ssleay32MT,ssleay32}.a
ls: ../../Install/openssl//VC/static/libssl.a: No such file or directory
ls: ../../Install/openssl//VC/static/libssl64MT.a: No such file or directory
ls: ../../Install/openssl//VC/static/libsslMT.a: No such file or directory
ls: ../../Install/openssl//VC/static/libssleay32.a: No such file or directory
ls: ../../Install/openssl//VC/static/libssleay32MT.a: No such file or directory
ls: ../../Install/openssl//lib/VC/static/libssl.a: No such file or directory
ls: ../../Install/openssl//lib/VC/static/libssl64MT.a: No such file or directory
ls: ../../Install/openssl//lib/VC/static/libsslMT.a: No such file or directory
ls: ../../Install/openssl//lib/VC/static/libssleay32.a: No such file or directory
ls: ../../Install/openssl//lib/VC/static/libssleay32MT.a: No such file or directory
ls: ../../Install/openssl//lib/libssl64MT.a: No such file or directory
ls: ../../Install/openssl//lib/libsslMT.a: No such file or directory
ls: ../../Install/openssl//lib/libssleay32.a: No such file or directory
ls: ../../Install/openssl//lib/libssleay32MT.a: No such file or directory
../../Install/openssl//lib/libssl.a
```
It's not obvious (at least to me) that:
* relative paths (like `../../Install` above) are actually relative to the original source directory (not the current project build directory, which CMake calls [CMAKE\_BINARY\_DIR](https://cmake.org/cmake/help/v3.0/variable/CMAKE_BINARY_DIR.html). (So you should run `ls` from there, not your build directory).
* [FIND\_LIBRARY\_USE\_LIB64\_PATHS](https://cmake.org/cmake/help/v3.0/prop_gbl/FIND_LIBRARY_USE_LIB64_PATHS.html), which is often ON by default, results in your original paths being replaced by mangled ones (lib -> lib64) (not just supplemented by additional search paths, but completely replaced).
* Other, even less well-known properties such as FIND\_LIBRARY\_USE\_LIB32\_PATHS result in similar mangling (lib -> lib32).
* This mangling can be disabled with `set(CMAKE_FIND_LIBRARY_CUSTOM_LIB_SUFFIX "")`.
Upvotes: 2 <issue_comment>username_2: With CMake 3.17 this [got added](https://blog.kitware.com/cmake-3-17-0-available-for-download/):
>
> The “CMAKE\_FIND\_DEBUG\_MODE” variable was introduced to print extra
> find call information during the cmake run to standard error. Output
> is designed for human consumption and not for parsing.
>
>
>
So you pass either `-DCMAKE_FIND_DEBUG_MODE=ON` or `--debug-find` to your CMake command.
Here is an example output when searching for libFOO:
```
find_library considered the following locations:
/usr/local/lib64/(lib)FOO(\.so|\.a)
/usr/local/lib/(lib)FOO(\.so|\.a)
/usr/local/lib64/(lib)FOO(\.so|\.a)
/usr/local/lib/(lib)FOO(\.so|\.a)
/usr/local/lib64/(lib)FOO(\.so|\.a)
/usr/local/(lib)FOO(\.so|\.a)
/usr/lib64/(lib)FOO(\.so|\.a)
/usr/lib/(lib)FOO(\.so|\.a)
/usr/lib64/(lib)FOO(\.so|\.a)
/usr/lib/(lib)FOO(\.so|\.a)
/usr/lib64/(lib)FOO(\.so|\.a)
/usr/(lib)FOO(\.so|\.a)
/lib64/(lib)FOO(\.so|\.a)
/lib/(lib)FOO(\.so|\.a)
/opt/(lib)FOO(\.so|\.a)
The item was not found.
```
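Each line of that output is a compact pattern: the parenthesised `(lib)` is the library prefix and `(\.so|\.a)` is a suffix alternation. A hypothetical Python sketch expanding one line into concrete candidate file names (whether the bare, unprefixed name is also tried depends on `CMAKE_FIND_LIBRARY_PREFIXES`):

```python
from itertools import product

directory = "/lib64"
prefixes = ["", "lib"]    # treat "(lib)" as an optional prefix
name = "FOO"
suffixes = [".so", ".a"]  # "(\.so|\.a)" -> one of these suffixes

candidates = [f"{directory}/{p}{name}{s}" for p, s in product(prefixes, suffixes)]
print(candidates)
# ['/lib64/FOO.so', '/lib64/FOO.a', '/lib64/libFOO.so', '/lib64/libFOO.a']
```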
Upvotes: 6 [selected_answer] |
2018/03/22 | 868 | 2,982 | <issue_start>username_0: I have a PHP application that needs to remotely connect to a HEROKU app's postgresql database and run database queries (preferably using pg\_query, though PDO is ok too). I am stuck trying to 1.) obtain live db credentials using the Heroku DATABASE\_URL (as they recommend), and 2) generate any kind of db connection even when hard-coded credentials are supplied. I haven't found the proper sequence of steps to make a connection. (App is running on GoDaddy hosting - mentioned for clarity.)
I am trying to connect to the database, run a select query and assign the values from the resultset to PHP variables. I can connect to the database using PGAdmin3 so remote access isn't the issue.
What I have so far (Edited- commented lines are from original post):
```
//$DATABASE_URL='$({heroku CLI string, pasted from Heroku db credentials page})';
//$db = parse_url(getenv($DATABASE_URL));
$db_url = getenv("DATABASE_URL") ?: "postgres://user:pass@host:port/dbname";
$db = pg_connect($db_url);
if($db) {echo "connected";} else {echo "not connected";}
$selectSql = "SELECT id, name FROM companies ORDER BY id";
$result = pg_query($db, $selectSql);
while ($row = pg_fetch_row($result)) {
$id = $row["id"];
$name = $row["name"];
echo "
id: ".$id;
echo "
name: ".$name;
}
```
What is missing to connect to the database and run queries?<issue_comment>username_1: You need to call `pg_connect` with the correct connection string. I don't think it's the same as the URL given via Heroku; [refer to the documentation for that one](https://secure.php.net/manual/en/function.pg-connect.php).
You're also using `getenv` improperly.
```
$db_url = getenv("DATABASE_URL") ?: "some default value here";
$db = pg_connect($db_url);
```
Upvotes: 1 <issue_comment>username_2: I see at least 3 problems:
1) Delete or comment out this:
```
$DATABASE_URL='$({heroku CLI string, pasted from Heroku db credentials page})';
```
2) Change this line to get the environment variable (so it looks like this). You have to set the environment variable before starting PHP. You could also hardcode the value, but not recommended:
```
$db = parse_url(getenv('DATABASE_URL'));
```
3) It doesn't look like you are actually connecting to the database, you just start your query. You have to connect first. See [heroku docs for PDO](https://devcenter.heroku.com/articles/getting-started-with-php#provision-a-database) or [PHP docs for pg\_query](http://php.net/manual/en/function.pg-connect.php)
*Update 2018-03-23*
Here is a standalone program that will make sure it is connecting:
```
#!/usr/bin/env php
<?php
$db_url = getenv("DATABASE_URL") ?: "postgres://user:pass@host:port/dbname";
echo "$db_url\n";
$db = pg_connect($db_url);
if($db) {echo "connected";} else {echo "not connected";}
$selectSql = "SELECT 1";
$result = pg_query($db, $selectSql);
while ($row = pg_fetch_row($result)) {
var_dump($row);
}
?>
```
Upvotes: 3 [selected_answer] |
2018/03/22 | 979 | 2,573 | <issue_start>username_0: How do I desugar
```
λ: [[b|(a,b)<-[(1,"A"),(2,"B")], mod x 2 == 0]|x <- [1..10]]
[[],["A","B"],[],["A","B"],[],["A","B"],[],["A","B"],[],["A","B"]]
```
I have tried
```
do
x <- [1..10]
do
(a,b) <- [(1,"A"),(2,"B")]
guard $ mod x 2 == 0
return b
```
but this seems to automatically `join` the result.
`["A","B","A","B","A","B","A","B","A","B"]`<issue_comment>username_1: Since you return the list comprehension, you need an **extra `return`**:
```
do
x <- [1..10]
return $ do
(a,b) <- [(1,"A"),(2,"B")]
guard $ mod x 2 == 0
return b
```
Otherwise, you would have constructed it like:
```
[b|x <- [1..10], (a,b)<-[(1,"A"),(2,"B")], mod x 2 == 0]
```
Since you used the inner `do` as the trailing element of the outer `do`, that `do` was redundant.
Upvotes: 2 <issue_comment>username_2: Remember that nested `do` constructs "collapse":
```
do
A
do B
C
```
is the same as
```
do
A
B
C
```
(Or more precisely, multi-line `do` constructs desugar to nested `do` constructs.) So you want the sublist to be constructed and add to the outer list, rather than building a single list using the nested `do` construct to fill it.
Using a hybrid, intermediate approach, you build the outer list by `return`ing each inner list separately.
```
do
x <- [1..10]
return [b|(a,b)<-[(1,"A"),(2,"B")], mod x 2 == 0]
```
Then you desugar *that* comprehension:
```
do
x <- [1..10]
return (do
(a, b) <- ...
guard $ mod x 2 == 0
return b)
```
Upvotes: 3 [selected_answer]<issue_comment>username_3: I made minor modifications to your original list comprehension.
```
[ (x,[(b) | (a,b) <- [(1,"A"),(2,"B")], mod x 2 == 0]) | x <- [1..10] ]
```
Which resulted in
```
[(1,[]),(2,["A","B"]),(3,[]),(4,["A","B"]),(5,[]),(6,["A","B"]),(7,[]),(8,["A","B"]),(9,[]),(10,["A","B"])]
```
So with these parameters, it was producing b of (a,b) or a [].
This translated into the following map.
```
map (\x -> if mod x 2 == 0 then [x] else [] ) [1..10]
```
Which produced
```
[[],[2],[],[4],[],[6],[],[8],[],[10]]
```
The following segment of your list comprehension
```
[b|(a,b)<-[(1,"A"),(2,"B")]]
```
Produces simply
```
["A","B"]
```
The following map produces the same.
```
map snd [(1,"A"),(2,"B")]
```
Replacing [x] in the first map with the preceding map
```
map (\x -> if mod x 2 == 0 then map snd [(1,"A"),(2,"B")] else [] ) [1..10]
```
Does exactly what your list comprehension does. Changing the parameters would result in identical results.
Upvotes: 0 |
2018/03/22 | 339 | 1,227 | <issue_start>username_0: I have a `Postgres` database. I want to find the minimum value of a column called `calendarid`, which is of type integer and the format `yyyymmdd`, from a certain table. I am able to do so via the following code.
```
get_history_startdate <- function(src) {
get_required_table(src) %>% # This gives me the table tbl(src, "table_name")
select(calendarid) %>%
as_data_frame %>%
collect() %>%
min() # Result : 20150131
}
```
But this method is really slow as it loads all the data from the database to the memory. Any ideas how can I improve it?<issue_comment>username_1: If you just want the minimum value of the `calendarid` column across the entire table, then use this:
```
SELECT MIN(calendarid) AS min_calendarid
FROM your_table;
```
I don't know exactly what your R code is doing under the hood, but if it's bringing the entire table from Postgres into R, then it is very wasteful. If so, then running the above query directly on Postgres should give you a boost in performance.
Upvotes: 1 <issue_comment>username_2: ```
get_required_table(src) %>%
summarise(min(calendarid, na.rm = TRUE)) %>%
pull
```
will run the appropriate SQL query.
Upvotes: 3 [selected_answer] |
2018/03/22 | 429 | 1,521 | <issue_start>username_0: I want to replace parts of file paths in a configuration file using sed in Cygwin. The file paths are in form of `\\\\some\\constant\\path\\2018-03-20_2030.1\\Release\\base\\some_dll.dll` (yes, double backslashes in the file) and the beginning part containing date should be replaced.
For matching I've written the following regex: `\\\\\\\\some\\\\constant\\\\path\\\\[0-9_\.-]*`, with a character set supposed to match *only* the date, consisting of digits and the "-", "\_" and "." symbols. This results in the following command for replacement: `sed 's/\\\\\\\\some\\\\constant\\\\path\\\\[0-9_\.-]*/bla/g' file.txt`
The problem is that, after replacement, I get `blaRelease\\base\\some_dll.dll` instead of `bla\\Release\\base\\some_dll.dll` as it was successfully replaced using [Regexr](https://regexr.com/3mmht).
Why does sed behave this way and how can I fix it? |
<issue_start>username_0: I am having difficulties finding a solution for opening a stored procedure directly in SQL Server Management Studio (SSMS) for editing in a new query window from my `C#` application (WinForms).
Here is my code:
```
Process openSQL = new Process();
openSQL.StartInfo.FileName = "Ssms.exe";
openSQL.StartInfo.Arguments = "dbo.getResults"; //name of the stored procedure I want to open
openSQL.Start();
```
I am getting error after executing the code :"The following files were specified on the command line: dbo.getResults These files could not be found and will not be loaded."
How am I supposed to "point" to the stored procedure in C# and get its definition displayed, ready for editing, in SQL Server Management Studio?<issue_comment>username_1: This isn't possible I'm afraid. If you run `ssms -?` from the command line you can see all the parameters that you can pass in:
[](https://i.stack.imgur.com/VuUhN.png)
Some options:
1. Let users edit procs themselves. After all, anyone capable of doing this will understand how to use SSMS properly.
2. Make your own UI. You can read the contents of a stored procedure and display them in a text box. The downside is that you lose features such as syntax highlighting (unless you also build that in too)
3. You could download the procedure and store it in a `procedure.sql` file and get SSMS to open that. Don't forget to pass in the server, database and credentials.
Upvotes: 1 <issue_comment>username_2: I found a way to open a stored procedure script directly in SQL Server Management Studio for editing in a new query window from my C# application (WinForms).
1. The script of the procedure is retrieved with `EXEC sp_helptext 'procedure_name'`
2. The result set is loaded into a DataSet
3. The DataSet is written to an empty .sql file
4. The .sql file is opened in SQL Server Management Studio via `System.Diagnostics`
Here are the steps with code snippets:
```
private void saveProcToAFile()
{
StreamWriter log;
if (!File.Exists("procedureToBeLoaded.sql"))
{
log = new StreamWriter("procedureToBeLoaded.sql");
}
else
{
log = new StreamWriter(File.Create("procedureToBeLoaded.sql"));
}
SqlConnection conn = new SqlConnection(conString);
SqlDataAdapter da = new SqlDataAdapter();
SqlCommand cmd = conn.CreateCommand();
cmd.CommandText = string.Format("EXEC sp_helptext '{0}'", "procedure_name"); //Step 1.
da.SelectCommand = cmd;
DataSet ds = new DataSet();
conn.Open();
da.Fill(ds); //Step 2.
conn.Close();
foreach (DataRow dr in ds.Tables[0].Rows)
{
log.WriteLine(dr[0]); //Step 3.
}
log.Close();
}
```
Step 4.
```
private void contextMenuStripOpenInSqlStudio_Click(object sender, EventArgs e)
{
saveProcToAFile();
Process openSQL = new Process();
openSQL.StartInfo.FileName = "Ssms.exe";
openSQL.StartInfo.Arguments = "procedureToBeLoaded.sql";
openSQL.Start();
}
```
Upvotes: 1 [selected_answer]
<issue_start>username_0: I want my React Native app to detect when the Google Maps app is open, to show some travel tip. Is it possible?<issue_comment>username_1: You can't detect whether another app has been launched, for security reasons. Installed apps can't listen to other apps because they are "sandboxed".
You can only interact with another app by opening it from your app via deep linking, or by detecting whether or not it is installed.
Upvotes: 4 [selected_answer]<issue_comment>username_2: It's not a limitation of React Native. No iOS application can access other applications, except to open them or share some resources.
Upvotes: 2
<issue_start>username_0: I'm trying to share values between my `before` and `beforeEach` hooks using aliases. It currently works if my value is a string, but when the value is an object the alias is only defined in the first test; in every test after that, `this.user` is undefined in my `beforeEach` hook. How can I share a value which is an object between tests?
This is my code:
```
before(function() {
const email = `test+${uuidv4()}<EMAIL>`;
cy
.register(email)
.its("body.data.user")
.as("user");
});
beforeEach(function() {
console.log("this.user", this.user); // This is undefined in every test except the first
});
```<issue_comment>username_1: The alias is ***undefined in every test except the first*** because aliases are cleared down **after** each test.
Aliased variables are accessed via `cy.get('@user')` syntax. Some commands are inherently asynchronous, so using a wrapper to access the variable ensures it is resolved before being used.
See documentation [Variables and Aliases](https://docs.cypress.io/guides/core-concepts/variables-and-aliases.html#) and [get](https://docs.cypress.io/api/commands/get.html#Syntax).
---
There does not seem to be a way to explicitly preserve an alias, as there is with [cookies](https://docs.cypress.io/api/cypress-api/cookies)
```
Cypress.Cookies.preserveOnce(names...)
```
but this [recipe for preserving fixtures](https://github.com/cypress-io/cypress-example-recipes/blob/db2f5a9c2e3dcf3b4f2b352f10a2d6ed443946fe/examples/fundamentals__fixtures/cypress/integration/load-fixtures-spec.js#L103) shows a way to preserve global variables by reinstating them in a `beforeEach()`
```js
let city
let country
before(() => {
// load fixtures just once, need to store in
// closure variables because Mocha context is cleared
// before each test
cy.fixture('city').then((c) => {
city = c
})
cy.fixture('country').then((c) => {
country = c
})
})
beforeEach(() => {
// we can put data back into the empty Mocha context before each test
// by the time this callback executes, "before" hook has finished
cy.wrap(city).as('city')
cy.wrap(country).as('country')
})
```
---
If you want to access a global `user` value, you might try something like
```js
let user;
before(function() {
const email = `<EMAIL>+${uuidv4()}<EMAIL>`;
cy
.register(email)
.its("body.data.user")
.then(result => user = result);
});
beforeEach(function() {
console.log("global user", user);
cy.wrap(user).as('user'); // set as alias
});
it('first', () => {
cy.get('@user').then(val => {
console.log('first', val) // user alias is valid
})
})
it('second', () => {
cy.get('@user').then(val => {
console.log('second', val) // user alias is valid
})
})
```
Upvotes: 4 [selected_answer]<issue_comment>username_2: TL;DR: If you want an aliased `user` object available in each of your tests, you must define it in a `beforeEach` hook not a `before` hook.
Cypress performs a lot of cleanup between tests and this includes clearing all aliases. According to the Sharing Contexts section of [Variables and Aliases](https://docs.cypress.io/guides/core-concepts/variables-and-aliases.html#Sharing-Context): "Aliases and properties are automatically cleaned up after each test." The result you are seeing (your alias is cleaned after the first test and subsequently undefined) is thus expected behavior.
I cannot determine what `register` does in the original post, but it seems your intention is to save the overhead of performing API calls repeatedly in a `beforeEach` hook. It is definitely easiest to put everything you want in the `beforeEach` hook and ignore the overhead (also, pure API calls with no UI interaction will not incur much penalty).
If you really need to avoid repetition, this should not be accomplished through regular variables due to potential timing problems with Cypress' custom chainables. This is an [anti-pattern they publish](https://docs.cypress.io/guides/core-concepts/variables-and-aliases.html#Aliases). The best way to do this would be:
* Create a [fixture file](https://docs.cypress.io/guides/core-concepts/writing-and-organizing-tests.html#Fixture-Files) with static user data that you will use to conduct the test. (Remove the uuidv4.)
* For the set of tests that need your user data, call `register` in a `before` hook using the fixture data. This will create the data in the system under test.
* Use a `beforeEach` hook to load the fixture data and alias it for each of your tests. Now, the static data you need is accessible with no API calls and it is guaranteed to be in the system properly thanks to the `before` hook.
* Run your tests using the alias.
* Clean up the data in an `after` hook (since your user no longer has a random email, you need to add this step).
If you need to do the above for the whole test suite, put your `before` and `after` hooks in the [support file](https://docs.cypress.io/guides/core-concepts/writing-and-organizing-tests.html#Support-file) to make them global.
Upvotes: 0 <issue_comment>username_3: Replace
```
console.log("global user", this.user);
```
with
```
cy.log(this.user);
```
and it should work as expected.
The reason for this is the asynchronous nature of Cypress commands. Think of it as a two-step process: Cypress commands do not do their work at the moment their line runs; they just build up a chain of commands. That chain is executed later, when the test actually runs.
This is not the case for other calls like `console.log()`, which execute immediately, while the test is still being prepared.
This is [explained in great detail in the cypress documentation](https://docs.cypress.io/guides/core-concepts/introduction-to-cypress.html#Commands-Are-Asynchronous):
But I found it very hard to get my head around this. You have to get used to it.
One rule of thumb: Almost every command in your test should be a cypress command.
So just use `cy.log` instead of `console.log`
If you must use console.log you can do it like this:
```
cy.visit("/").then(() => console.log(this.user))
```
this way the `console.log` is chained. Or if you do not have a subject to chain off, build your own custom command like this:
```
Cypress.Commands.add("console", (message) => console.log(message))
cy.console(this.user)
```
Another mistake with using `this` in cypress is using arrow functions. If you do, you don't have access to the `this` you are expecting. See [Avoiding the use of this](https://docs.cypress.io/guides/core-concepts/variables-and-aliases.html#Avoiding-the-use-of-this) in the cypress docs.
Upvotes: 1
<issue_start>username_0: If I try to run `rails server` or `rails console` and there are uninstalled dependencies or pending migrations, I will get an error message informing me about this.
Is there any similar Rails command that can be run for doing this check, without booting server or console?<issue_comment>username_1: For gems it's easy enough to just run:
```
bundle check
```
For db this will show you if any migrations are pending with a "down" status:
```
rake db:migrate:status
```
or in rails 5 or higher
```
rails db:migrate:status #rails 5+
```
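If you want to script the migration check (for CI, say), the `down` rows of the status output can be grepped for. A rough sketch — the table below is a hypothetical sample of the command's output format, since the real output depends on your app:

```shell
# Hypothetical sample of what `rails db:migrate:status` prints
status_output='
 Status   Migration ID    Migration Name
--------------------------------------------------
   up     20180101000000  Create widgets
  down    20180301000000  Add color to widgets
'

# any "down" row means there are pending migrations
if printf '%s\n' "$status_output" | grep -q '^[[:space:]]*down'; then
  echo "pending migrations"
fi
```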
Upvotes: 0 <issue_comment>username_2: I don't know how much you'll have to gain from checking, since you can just run the actual commands and it'll tell you the same information and take pretty much the same amount of time:
```
alias rs='bundle && rake db:migrate && rails s'
```
One thing you could do is integrate the [`hookup` gem](https://github.com/tpope/hookup). It basically manages this annoyance for you whenever you change branches, automatically running `bundle` and `rake db:migrate`. It also, conveniently, rolls back migrations that are not on the branch you're changing to, which can be a pain, too. It does add a bit of a performance penalty, though, especially on larger projects.
```
gem install hookup
cd yourproject
hookup install
```
Upvotes: 1
<issue_start>username_0: I have a Bootstrap page that is dynamically created using PHP, and on the page there are 25+ forms that are used to edit records.
The form submits to the server using jQuery Ajax script which works as expected on the first form, but when the second form is edited and submitted it submits form 1 and 2, and when I go to form 3 it will submit forms 1, 2, and 3
Here is the HTML:
```
| audio controls to play wav file |
There is text from the Database
Edit
| [**Hide**](#) |
```
And here is the JavaScript:
```
function update_stt() {
var url = "function_ajax.php"; // the script where you handle the form input.
$('form[id]').on('submit', function(e) {
$.ajax({
type: 'POST',
url: url,
data: $(this).serialize(),
success: function(data) {
console.log('Submission was successful.');
console.log(data);
$(e.target).closest('tr').children('td,th').css('background-color', '#000');
},
error: function(data) {
console.log('An error occurred.');
console.log(data);
},
});
e.preventDefault();
});
}
```
How can I identify only the id of the form that I want submitted, or submit only the form on that row?<issue_comment>username_1: You can give each form an id and use
```
$('#formid').submit();
```
in your case
```
$('#375987').submit();
```
But that id is used by your `tr` element too. You should consider using the id for the form only.
Upvotes: 1 <issue_comment>username_2: You use a `submit` button which will automatically submit the form, but you also add a click event to it using the `onclick` attribute, so it will execute the associated function and submit the form. All that is unnecessarily complicated.
Remove the `onclick` attribute on your button:
```
Edit
```
And change your code to:
```
$('#375987').on('submit', function(e) {
var url = "function_ajax.php"; // the script where you handle the form input.
$.ajax({
type: 'POST',
url: url,
data: $(this).serialize(),
success: function(data) {
console.log('Submission was successful.');
console.log(data);
$(e.target).closest('tr').children('td,th').css('background-color', '#000');
},
error: function(data) {
console.log('An error occurred.');
console.log(data);
},
});
e.preventDefault();
});
```
If you want all your forms to use the same function, then simply replace the selector `$('#375987')` with `$('form')`:
```
$('form').on('submit', function(e) { ... }
```
If you only want to select some forms, not all, then you can give them the same class and then select them by that class, like :
```
$('form.ajaxed').on('submit', function(e) { ... }
```
Upvotes: 2 <issue_comment>username_3: It's because each time you click the button and call `update_stt()`, you attach the `.on('submit')` listener again.
This is why you end up with more and more submit requests.
```
$('form[id]').on('submit', function (e) {
```
This line of code should only be called once not on every click.
ALSO: you said you build these on the backend so you could pass the ID straight to the function:
```
update_stt(375987)
```
Then you can use this:
```
function update_stt(passedNumber) { ...
```
Then you can use the id number in the call
You're mixing jQuery and vanilla JS a lot here; it might make sense to try a larger refactor.
Upvotes: 0
<issue_start>username_0: How to get count of rows in database by id?
`SELECT count(*) FROM members;`
Without performance issues. What are ways to write this query using entityManager?
I am using `php version 5.6` and `symfony 3`
<issue_start>username_0: I'm curious if there's a better way or approach to writing the following code.
```
private buildQueryParams(filterData: BaseFilterModel | RegistrationFilterModel): HttpParams {
let params = new HttpParams();
if (filterData) {
if (filterData.regionId) {
params = params.set('regionId', filterData.regionId.toString());
}
if (filterData.facilityId) {
params = params.set('facilityId', filterData.facilityId.toString());
}
if (filterData.dept1 && filterData.dept1.id) {
params = params.set('dept1', filterData.dept1.id.toString());
}
if (filterData.dept2 && filterData.dept2.id) {
params = params.set('dept2', filterData.dept2.id.toString());
}
if (filterData.role && filterData.role !== 'All') {
params = params.set('role', filterData.role);
}
        if ((<RegistrationFilterModel>filterData).registrationDate) {
            params = params.set('registrationDate', (<RegistrationFilterModel>filterData).registrationDate.toString());
        }
        if ((<RegistrationFilterModel>filterData).activeStatus
            && (<RegistrationFilterModel>filterData).activeStatus !== ActiveStatus.All) {
            params = params.set('activeStatus', (<RegistrationFilterModel>filterData).activeStatus.toString());
        }
}
return params;
}
```
What this is basically doing is creating the parameters for an http request based on an object (filterData) that has these set values. We use this object for multiple endpoints with similar but slightly different parameters, so it goes through each one of these if statements to make sure the regionId, facilityId, etc. are not null.
Is there a better way to do this instead of having a list of if statements that could get larger and larger if we add more parameters.<issue_comment>username_1: You can reduce some lines with:
```
var fields = ['regionId','facilityId','dept1','dept2'];
for (let field of fields) {
if (filterData[field]){
params = params.set(field, filterData[field].toString());
}
}
```
or handle each case with a concrete strategy (the strategy pattern).
Upvotes: 1 <issue_comment>username_2: Lodash is the way to go:
```js
var object = {
'a': [{
'b': {
'c': 100
},
'd': 'MARKS'
}]
};
console.log(_.get(object, ['a', '0', 'b', 'c']));
console.log(_.get(object, ['a', '0', 'd']).toLowerCase()); // Do operation on the object found
console.log(_.get(object, ['a', '0', 'foo'], 'granted')); // Specify default value, if incorrect path
```
Take a look at the documentation:
[Lodash Get](https://lodash.com/docs/4.17.5#get)
Explanation:
1. It takes care of the multiple nested checks and, if nothing is found at the path, returns a default value
2. You can do whatever you want with the returned value, make it lower, or upper, or convert to string
3. There are multiple syntaxes possible with lodash, take a look at the documentation for them
Upvotes: 0 <issue_comment>username_3: You could define a local function to do the actual setting with a test to avoid null or undefined (or empty string) and conversion to string where required. Something like:
```
private buildQueryParams(filterData: BaseFilterModel | RegistrationFilterModel): HttpParams {
let params = new HttpParams();
function setparam(name: string, value: number|string) {
if (value) params = params.set(name, value.toString());
}
if (filterData) {
setparam('regionId', filterData.regionId);
setparam('facilityId', filterData.facilityId);
setparam('dept1', filterData.dept1 && filterData.dept1.id);
setparam('dept2', filterData.dept2 && filterData.dept2.id);
if (filterData.role != 'All') {
setparam('role', filterData.role);
}
    setparam('registrationDate', (<RegistrationFilterModel>filterData).registrationDate);
    if ((<RegistrationFilterModel>filterData).activeStatus !== ActiveStatus.All) {
      setparam('activeStatus', (<RegistrationFilterModel>filterData).activeStatus);
}
}
return params;
}
```
The actual type for `value` should match the expected types, or could even be `any`.
Upvotes: 3 [selected_answer]<issue_comment>username_4: A more object-oriented way is to do something like this.
```
enum ActiveStatus {
ALL
}
class Department {
constructor(private _id: number){}
public toString() {
return this._id.toString();
}
}
class Role {
private _name: "ALL";
public toString() {
return this._name == "ALL" ? "" : this._name;
}
}
class FilterData {
constructor(
private _regionId: number,
private _facilityId: number,
private _dep1: Department,
private _dep2: Department,
private _activeStatus: ActiveStatus
) {
}
public toHttpParams() {
let params = new HttpParams();
Object.keys(this).forEach((key) => {
let value = this[key].toString();
        if (value) { params = params.set(key.slice(1), value); } // HttpParams is immutable; set() returns a new instance
});
return params;
}
}
let filterData = new FilterData(
1,
2,
new Department(5),
new Department(7),
ActiveStatus.ALL
);
console.log(filterData.toHttpParams());
```
Upvotes: 0 <issue_comment>username_5: You could use the optional chaining operator `?.`
```
let x = foo?.bar.baz();
```
and the nullish coalescing operator `??`
```
let x = foo ?? bar();
```
[Doc Link](https://www.typescriptlang.org/docs/handbook/release-notes/typescript-3-7.html)
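Applied to the nested checks from the question, that could look roughly like this (a sketch; `Dept` and `FilterLike` are trimmed-down stand-ins for the real models, and `?.`/`??` require TypeScript 3.7+):

```typescript
interface Dept { id?: number; }
interface FilterLike { dept1?: Dept; }

function dept1Param(filterData?: FilterLike): string {
  // ?. short-circuits to undefined on any null/undefined link;
  // ?? replaces only null/undefined (not 0 or "") with the fallback
  return filterData?.dept1?.id?.toString() ?? "";
}

console.log(dept1Param({ dept1: { id: 5 } })); // "5"
console.log(dept1Param(undefined));            // ""
```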
Upvotes: 0
<issue_start>username_0: I have this AuthorizationHelpers module, and I'm using it with RSpec.
```
module AuthorizationHelpers
def assign_role!(user, role, post)
Role.where(user: user, post: post).delete_all
Role.create!(user: user, role: role, post: post)
end
end
RSpec.configure do |c|
c.include AuthorizationHelpers
end
```
And now I want to use this method `assign_role!` on seeds.
like this:
```
Post.all.each do |post|
[:manager, :editor, :viewer].each do |role|
User.all.where(admin: false).each do |user|
assign_role!(user, role, post)
end
end
end
```
If I try to use this on rails console I get an error:
```
NoMethodError: undefined method `assign_role!' for main:Object
```
Is there some way to use this in seeds? Or do I need to do something else?<issue_comment>username_1: This answer is not necessarily related to your question, but more about the semantics of your code.
First of all: why don't you define `assign_role!` as an instance method on your user model, so you can use it anywhere? That seems a reasonable place for that logic to be.
Second: seeds are supposed to be the minimal data for your application to work properly after a fresh installation. Usually it's a one-off command; to create data in specs, consider using factories instead (a popular library is [FactoryGirl](https://github.com/thoughtbot/factory_bot)).
It feels unnatural for your code to invoke stuff defined in `spec` folder.
Upvotes: 0 <issue_comment>username_2: In the end, the OP abandoned use of the module and did the following in `seeds.rb`:
```
unless Post.exists?(title: title)
post = Post.create!(...)
[:manager, :editor, :viewer].each do |role|
if user = User.find_by_email("#{<EMAIL>")
user.roles.create(post: post, role: role)
end
  end
end
```
Check out the chat (in the original question comments) for the whole story.
Upvotes: 1
<issue_start>username_0: I have a vector of strings that look like this:
```
a - bc/def_g - A/mn/us/ww
opq - rs/ts_uf - BC/wx/yza
Abc - so/dhie7u - XYZ/En/xy/jkq - QWNE
```
I'd like to get the text after the 2nd dash (-) but before the first slash (/), i.e. the result should look like
```
A
BC
XYZ
```
What is the best way to do it? (The vector has more than 500K rows.)
Thanks
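A single vectorised `sub()` call can do this without splitting, which keeps it fast on 500K+ rows (a sketch; it assumes every element has at least two dashes with the wanted token directly after the second one):

```r
x <- c("a - bc/def_g - A/mn/us/ww",
       "opq - rs/ts_uf - BC/wx/yza",
       "Abc - so/dhie7u - XYZ/En/xy/jkq - QWNE")

# skip everything up to the 2nd dash, then capture up to the first slash
sub("^[^-]*-[^-]*- *([^/]*).*$", "\\1", x)
#> [1] "A"   "BC"  "XYZ"
```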
<issue_start>username_0: ```
Flux.just("a", "b")
 .flatMap(s -> s.equals("a") ? Mono.error(new RuntimeException()) : Flux.just(s + "1", s + "2"))
.onErrorResume(throwable -> Mono.empty())
.subscribe(System.out::println);
```
Hello!
Here I made a flux of two elements and then expose by flatMap first one to exception, and second one to another Flux.
With `onErrorResume` I expect the output
```
b1
b2
```
but get nothing. Could anyone explain why this happens, please?
Thanks.<issue_comment>username_1: Given this:
```
Flux.just("a", "b", "c")
.flatMap { s ->
if (s == "b")
Mono.error(RuntimeException())
else
Flux.just(s + "1", s + "2")
}.onErrorResume { throwable -> Mono.just("d") }.log()
.subscribe { println(it) }
```
The output is:
```
12:35:19.673 [main] INFO reactor.Flux.OnErrorResume.1 - onSubscribe(FluxOnErrorResume.ResumeSubscriber)
12:35:19.676 [main] INFO reactor.Flux.OnErrorResume.1 - request(unbounded)
12:35:19.677 [main] INFO reactor.Flux.OnErrorResume.1 - onNext(a1)
a1
12:35:19.677 [main] INFO reactor.Flux.OnErrorResume.1 - onNext(a2)
a2
12:35:19.712 [main] INFO reactor.Flux.OnErrorResume.1 - onNext(d)
d
12:35:19.713 [main] INFO reactor.Flux.OnErrorResume.1 - onComplete()
```
What's going on here? `onErrorResume()` **is being applied to the Publisher returned by the `flatMap()` operator**. Since on "b" the Publisher signals a failure, the `flatMap()` Publisher doesn't execute anymore and `onErrorResume()` operator keeps publishing using its fallback.
The [documentation for onErrorResume()](https://projectreactor.io/docs/core/release/api/reactor/core/publisher/Flux.html#onErrorResume-java.lang.Class-java.util.function.Function-) shows clearly that the original Publisher finishes because of the error and the fallback takes over:
[](https://i.stack.imgur.com/eSCXa.png)
Upvotes: 5 [selected_answer]<issue_comment>username_2: This question already has a solid answer by `username_1` explaining why this happens. To answer, a little off-topic, **how to achieve the expected output**:
The `onErrorResume` call has to be moved into the `flatMap`:
```
Flux.just("a", "b")
.flatMap(s ->
(s.equals("a") ? Mono.error(RuntimeException()) : Flux.just(s + "1", s + "2"))
.onErrorResume(ex -> Mono.empty())
)
.subscribe(System.out::println)
```
This way the output is as expected
```
b1
b2
```
Upvotes: 3
<issue_start>username_0: I just started learning C++, and I'm trying to use nested loops to output an American flag. I'm very close but I can't seem to fix this logical error in my code, and was hoping someone can point it out because I've been searching for hours and it's driving me crazy. The issue lies within the last three rows.
```
#include <iostream>
using namespace std;

int main()
{
    int rows = 7;
    int columns = 24;
    int max = rows * columns;
    int value = 1;
    for (int r = 1; r <= 7; r++) {
        for (int c = 1; c <= 24; c++) {
            if (c <= 7 && r <= 4)
                cout << "* ";
            if (r % 2 != 0 && c > 7 && r > 0)
                cout << "= ";
            if (r % 2 == 0 && c > 7 && r > 0)
                cout << "- ";
        }
        cout << endl;
    }
    return 0;
}
```
This code will output this: <https://i.stack.imgur.com/0GN2P.png>
<issue_start>username_0: I'm trying to get JSON from a server to use it in Python code. For test purposes, I did a `POST` via `curl`:
```
$ curl -u trial:trial -H "Content-Type: application/json"
-X POST -d '{"BP_TSM":"22"}' http://some-host --trace-ascii -
```
My Java code seems to correctly handle creating JSON as a response. Please look at the result of `curl` command:
```
== Info: About to connect() to localhost port 8080 (#0)
== Info: Trying ::1...
== Info: Connected to localhost (::1) port 8080 (#0)
== Info: Server auth using Basic with user 'trial'
=> Send header, 224 bytes (0xe0)
0000: POST /get/auth HTT
0040: P/1.1
0047: Authorization: Basic dHJpYWw6dHJpYWw=
006e: User-Agent: curl/7.29.0
0087: Host: localhost:8080
009d: Accept: */*
00aa: Content-Type: application/json
00ca: Content-Length: 15
00de:
=> Send data, 15 bytes (0xf)
0000: {"BP_TSM":"22"}
== Info: upload completely sent off: 15 out of 15 bytes
<= Recv header, 23 bytes (0x17)
0000: HTTP/1.1 202 Accepted
<= Recv header, 34 bytes (0x22)
0000: Server: Payara Micro #badassfish
<= Recv header, 32 bytes (0x20)
0000: Content-Type: application/json
<= Recv header, 37 bytes (0x25)
0000: Date: Thu, 22 Mar 2018 14:30:43 GMT
<= Recv header, 21 bytes (0x15)
0000: Content-Length: 108
<= Recv header, 29 bytes (0x1d)
0000: X-Frame-Options: SAMEORIGIN
<= Recv header, 2 bytes (0x2)
0000:
<= Recv data, 108 bytes (0x6c)
0000: {"title":"Free Music Archive - Albums","message":"","total":"112
0040: 59","total_pages":2252,"page":1,"limit":"5"}
{"title":"Free Music Archive - Albums","message":"","total":"11259","total_pages
":2252,"page":1,"limit":"5"}== Info: Connection #0 to host localhost left intact
```
Now I would like Python script be able to receive the same message that `curl` did. I wrote the following Python code (note I'm not Python developer):
```
import pickle
import requests
import codecs
import json
from requests.auth import HTTPBasicAuth
from random import randint
req = requests.get('server/get/auth', auth=HTTPBasicAuth('trial', 'trial'))
return pickle.dumps(req)
```
Unfortunately, I get error message `'unicode' object has no attribute 'copy'` when `return pickle.dumps(req)` command is executed. I also tried using `return json.dumps(req)` but this time I get another error:
```
Traceback (most recent call last):
File "/tmp/tmp8DfLJ7/usercode.py", line 16, in the_function
return json.dumps(req)
File "/usr/lib64/python2.7/json/__init__.py", line 244, in dumps
return _default_encoder.encode(obj)
File "/usr/lib64/python2.7/json/encoder.py", line 207, in encode
chunks = self.iterencode(o, _one_shot=True)
File "/usr/lib64/python2.7/json/encoder.py", line 270, in iterencode
return _iterencode(o, 0)
File "/usr/lib64/python2.7/json/encoder.py", line 184, in default
raise TypeError(repr(o) + " is not JSON serializable")
TypeError: is not JSON serializable
```
Do I have some error in my Python code, or is it the fault of my Java server returning incorrect JSON?<issue_comment>username_1: You can use the response's `.json()` method to get the JSON body as a dict
```
req = requests.get('http://yourdomain.com/your/path', auth=HTTPBasicAuth('trial', 'trial'))
mydict = req.json()
```
Upvotes: 0 <issue_comment>username_2: You're trying to dumps a `Response` object.
Try returning `req.json()` or calling `json.loads(req.text)`
Upvotes: 0 <issue_comment>username_3: In order to load the Json string, you'll need to use `json.loads(req.text)`.
You must also ensure that the req string is valid json.
eg
```
'{"FOO":"BAR"}'
```
Upvotes: 0 <issue_comment>username_4: There are a number of errors in your Python code.
* You are using `requests.get` to POST. Instead, use `requests.post`.
* You are not passing the `BP_TSM` json string into your request. Use `data=` in your `requests.post`.
* You are not emulating the `-H` switch to curl. Use `headers=` in your `requests.post`.
* You are using `pickle` for no apparent reason. Don't do that.
* You are using a `return` statement when you are not in a function. Don't do that. If you want to print to stdout, use `print()` or `sys.stdout.write()` instead.
* If you actually want to use the returned variables from the JSON (as opposed to simply printing to stdout), you should invoke `req.json()`.
Here is a version of your code with problems addressed.
```
import requests
import json
import sys
from requests.auth import HTTPBasicAuth
data = '{"BP_TSM": "22"}' # curl -d
headers = {'content-type': 'application/json'} # curl -H
auth = HTTPBasicAuth('trial', 'trial') # curl -u
req = requests.post( # curl -X POST
'http://httpbin.org/post',
auth=auth,
data=data,
headers=headers)
sys.stdout.write(req.text) # Display JSON on stdout
returned_data = req.json()
my_ip = returned_data["origin"] # Query value from JSON
print("My public IP is", my_ip)
```
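As a side check that needs no server at all: the JSON-handling step can be verified with just the standard library, using the response body captured in the curl trace above (hard-coded here, so purely illustrative):

```python
import json

# The JSON body the Java server returned in the curl trace above.
body = ('{"title":"Free Music Archive - Albums","message":"",'
        '"total":"11259","total_pages":2252,"page":1,"limit":"5"}')

data = json.loads(body)     # this is what req.json() does with req.text
print(data["title"])        # Free Music Archive - Albums
print(data["total_pages"])  # 2252 (already an int, not a string)
print(json.dumps(data))     # a dict *is* JSON serializable; a Response is not
```

This also shows why `json.dumps(req)` blows up: `req` is a `Response` object, while `json.dumps(req.json())` round-trips fine.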
Upvotes: 3 [selected_answer] |
2018/03/22 | 930 | 3,233 | <issue_start>username_0: Why do I get
```
Error: StaticInjectorError(AppServerModule)[NgModuleFactoryLoader -> InjectionToken MODULE_MAP]:
StaticInjectorError(Platform: core)[NgModuleFactoryLoader -> InjectionToken MODULE_MAP]:
NullInjectorError: No provider for InjectionToken MODULE_MAP!
```
when trying to deploy with firebase?
I do use
```
extraProviders: [
provideModuleMap(LAZY_MODULE_MAP)
]
```
and in my app-server.module I do import ModuleMapLoaderModule (btw I tried importing ServerModule and AppModule in a different order, I was told it might be the problem, but it didnt work):
```
@NgModule({
imports: [
ServerModule,
AppModule,
ModuleMapLoaderModule,
],
bootstrap: [AppComponent],
})
export class AppServerModule { }
```
The main.bundle.js contains this:
```
Object.defineProperty(exports, "__esModule", { value: true });
var app_server_module_ngfactory_1 = __webpack_require__("./src/app/app.server.module.ngfactory.js");
exports.AppServerModuleNgFactory = app_server_module_ngfactory_1.AppServerModuleNgFactory;
var __lazy_0__ = __webpack_require__("./src/app/features/blog/blog.module.ngfactory.js");
var app_server_module_1 = __webpack_require__("./src/app/app.server.module.ts");
exports.AppServerModule = app_server_module_1.AppServerModule;
exports.LAZY_MODULE_MAP = { "app/features/blog/blog.module#BlogModule": __lazy_0__.BlogModuleNgFactory };
```
`main.bundle.js` does get imported in the firebase script correctly, because if I change some letters in the `require(...)`, I get an error that the file is not known. So what is wrong with the `LAZY_MODULE_MAP`? it looks like a string-route-to-factory map/js-object and it gets exported. so why does it not get resolved by `provideModuleMap` correctly? The `BlogModule` has only a declaration of a Hello-World-Stub component.
Btw, there is a similar question here but with no replies: [Angular5 Universal lazy loading on firebase hosting and seo](https://stackoverflow.com/questions/49038948/angular5-universal-lazy-loading-on-firebase-hosting-and-seo)<issue_comment>username_1: I stumbled on this error when I try to add module-map-ngfactory-loader to enable lazy loading but in the new version of Angular, you don't need to manually add this module.
String-based lazy loading syntax is not supported with Ivy and hence `@nguniversal/module-map-ngfactory-loader` is no longer required.
Uninstalling the module (`npm uninstall <package-name>`) and removing `ModuleMapLoaderModule` worked for me.
[see here for detail](https://github.com/alan-agius4/universal/commit/04c1d495a79782cc5b1081cf1024a9fc653f724a)
Upvotes: 2 <issue_comment>username_2: TLDR;
```
npm uninstall @nguniversal/common
npm uninstall @nguniversal/module-map-ngfactory-loader
```
and remove `ModuleMapLoaderModule` from your `app.server.module.ts`.
TSWM;
In Angular < 9, Universal was relying on `@nguniversal/common` and `@nguniversal/module-map-ngfactory-loader` to handle lazy loaded modules. With Angular 9+, they now provide out of the box the `ngExpressEngine` which handles this. All you need is `@nguniversal/express-engine` as [the docs](https://angular.io/guide/universal#universal-tutorial) mention.
Upvotes: 4 |
2018/03/22 | 670 | 2,479 | <issue_start>username_0: While I wasn't that concerned about it in the beginning, I noticed that my page size is about **9 MB** (+/- 200 images). I want to somehow decrease this by only loading the image when the user hovers over the specific , so that only *that* image is loaded (which should decrease the page size drastically).
The code below is what I'm using right now
```
div.img {
display: none;
position: absolute;
}
a:hover + div.img {
display: block;
}
<a href="/somename">Some Name</a>
<div class="img"><img src="..."></div>
```
I think it's possible with jQuery, but I don't know where to start.
Thanks in advance.<issue_comment>username_1: Well if you have around 200 images in your directory, when a client requests the webpage it is going to have to download the images to have them ready if you are using a single page layout. I would look into lazy loading just as Adam stated. If you can also I would suggest to try to compress the photos if you can to lower the file size if possible. Good luck!
Upvotes: 1 <issue_comment>username_2: you can put the image with no src attribute and put the specific src in the href of div or the image!
then use jquery to get the href of a or data-src of image and then give it to the image the code will be something like this:
```
<a class="image" href="the-src-of-the-image"><img></a>
```
and this is the jquery
```
jQuery(document).ready(function($){
$('.image').on('mouseenter', function(){
var img_src = $(this).attr('href');
$(this).children('img').attr('src',img_src);
});
});
```
Upvotes: 0 <issue_comment>username_3: I fixed my problem by adapting an existing pen-code to adjust my needs (using jQuery). It now works again in IE/Firefox
```html
$(document).ready(function($) {
$('.trigger').mouseover(function() {
// find our span
var elem = $(this).siblings('span');
// get our img url
var src = elem.attr('data-original');
// change span to img using the value from data-original
elem.replaceWith('<img src="' + src + '" style="display:block;position:absolute;"/>');
});
$('.trigger').mouseout(function() {
// find our span
var elem = $(this).siblings('img');
// get our img url
var src = elem.attr('src');
// change span to img using the value from data-original
elem.replaceWith('<span data-original="'+src+'"></span>');
});
});
[Hover over me to fetch an image](/gotoo)
```
Upvotes: 2 [selected_answer] |
2018/03/22 | 539 | 2,014 | <issue_start>username_0: How can I get `modules/bx/motif` only on the following through pipeline?
```
$ find . | grep denied
find: `modules/bx/motif': Permission denied
```
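On the question actually asked here: `find` writes its "Permission denied" messages to **stderr**, so `find . | grep denied` hands `grep` an empty stdout. A sketch that swaps the streams and strips the wrapper text (assuming GNU find's message format):

```shell
# Send stderr into the pipe and discard the normal listing, then cut
# the path out of "find: `...': Permission denied" with sed.
find . 2>&1 >/dev/null |
  sed -n "s/^find: .\(.*\)': Permission denied\$/\1/p"
```

The redirection order matters: `2>&1` first points stderr at the pipe (the current stdout), and only then does `>/dev/null` silence the listing.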
2018/03/22 | 1,665 | 5,229 | <issue_start>username_0: Not too sure where I've gone wrong here, expecting to factorialize 5 (1\*2\*3\*4\*5 = 120) by turning 5 into a string of [1,2,3,4,5] and then using reduce to multiply the string all together. When I run the code, it just gives me [1,2,3,4,5]...
```
var arr = [];
function factorialize(num) {
for (var i = 1; i <= num; i++) {
arr.push(i);
}
return arr;
}
var factors = 0;
factors = arr.reduce(function(previousVal, currentVal) {
return previousVal * currentVal;
}, 0); // Expecting 120, instead result = [1,2,3,4,5]
factorialize(5);
```
Forgive the long route - my first week of Javascript!<issue_comment>username_1: `arr` is empty, you should give it the resulting array of the factorisation first, and you should multiply, not add, and when multiplying, the starting value is 1 not 0:
```js
var arr = [];
function factorialize(num) {
for (var i = 1; i <= num; i++) {
arr.push(i);
}
return arr;
}
arr = factorialize(5); // give it the value
var factors = arr.reduce(function(previousVal, currentVal) {
return previousVal * currentVal; // multiply, don't add
}, 1); // start with 1 when multiplying
console.log(arr);
console.log(factors);
```
If you just want to calculate the factorial:
```js
function factorial(num) {
var res = 1;
for (var i = 2; i <= num; i++) {
res *= i;
}
return res;
}
console.log('factorial(5) = ' + factorial(5));
console.log('factorial(10) = ' + factorial(10));
```
Upvotes: 2 <issue_comment>username_2: You could get first the factors and then multiply in [`Array#reduce`](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Array/reduce) the factors.
I suggest to name the function what it does and move the array declaration inside of the function, because the function returns this array.
For getting the product, you need to multiply the values and use `1` as neutral start value for getting a product out of the numbers.
```js
function getFactors(num) {
var i, arr = [];
for (i = 1; i <= num; i++) {
arr.push(i);
}
return arr;
}
var factors = getFactors(5),
product = factors.reduce(function(previousVal, currentVal) {
return previousVal * currentVal;
}, 1);
console.log(factors);
console.log(product);
```
Upvotes: 1 <issue_comment>username_3: ```
var arr = [];
function factorialize(num) {
for (var i = 1; i <= num; i++) {
arr.push(i);
}
var factors = 0;
factors = arr.reduce(function (previousVal, currentVal) {
return previousVal * currentVal;
});
return factors
}
factorialize(5); // 120
```
Upvotes: 1 <issue_comment>username_4: * Put the global variable `arr` inside of the function `factorialize`.
* Get the returned array and then execute the function `reduce`.
* You need to multiply rather than to add the numbers.
* Start the reduce with initialValue = 1, this is to avoid `0 * n`.
```js
function factorialize(num) {
var arr = [];
for (var i = 1; i <= num; i++) {
arr.push(i);
}
return arr;
}
var arr = factorialize(5);
var factors = arr.reduce(function(previousVal, currentVal) {
return previousVal * currentVal;
}, 1);
console.log(factors)
```
Upvotes: 0 <issue_comment>username_5: You are not calling `factors`. By calling `factorialize(5);` you are just invoking `function factorialize(num)`, which gives you the array of `1...num`.
(Additional info) Also, in `reduce` you are adding with `+` instead of multiplying with `*`, so change that too:
```
factors = arr.reduce(function(previousVal, currentVal) {
return previousVal + currentVal;
}, 0);
^
|_ either initialize it to 1 or remove this.
```
See below code. I just create array and then apply reduce on that array.
```js
function factorialize(num) {
var arr = [];
for (var i = 1; i <= num; i++) {
arr.push(i);
}
return arr.reduce(function(previousVal, currentVal) {
return previousVal * currentVal;
});
}
console.log(factorialize(5));
```
Upvotes: 2 [selected_answer]<issue_comment>username_6: The issue is with the initial value set to 0, and that instead of multiplying the numbers, they are being added.
arr.reduce(callback, initValue)
arr = [1,2,3,4,5]
In the code provided, it accumulates in below format
```
arr.reduce(function(previousVal, currentVal) {
return previousVal + currentVal;
}, 0);
```
First call -> 0 + 1 = 1 (factors = 1)
Second call -> 1 + 2 = 3 (factors = 3)
Third call -> 3 + 3 = 6 (factors = 6)
Fourth call -> 6 + 4 = 10 (factors = 10)
Fifth call -> 10 + 5 = 15 (factors = 15)
To achieve expected result, use below option
```js
var arr = []; // initialize array arr
function factorialize(num) {
//for loop to push 1,2 ,3, 4, 5 to arr array
for (var i = 1; i <= num; i++) {
arr.push(i);
}
// return arr.reduce value by multiplying all values in array, default initial value is first element
return arr.reduce(function(previousVal, currentVal) {
//console log to debug and display loop values
console.log(previousVal, currentVal);
return previousVal * currentVal;
});
}
console.log("output", factorialize(5));
```
code sample - <https://codepen.io/nagasai/pen/YaQKZw?editors=1010>
Upvotes: 0 |
2018/03/22 | 756 | 3,520 | <issue_start>username_0: I'm trying to import an excel file to SQL server via visual Studio. How do I make it read the data starting from the second row and not the first row of the excel file? My code works perfectly as it is, but I want it to start reading the data from the 2nd row. Here is my code:
```
using System;
using System.Collections.Generic;
using System.ComponentModel;
using System.Data;
using System.Drawing;
using System.Linq;
using System.Text;
using System.Threading.Tasks;
using System.Windows.Forms;
using System.IO;
using ExcelDataReader;
namespace ImportDB2
{
public partial class Form1 : Form
{
public Form1()
{
InitializeComponent();
}
private void button1_Click(object sender, EventArgs e)
{
OpenFileDialog ope = new OpenFileDialog();
ope.Filter = "Excel Files|*.xls; *.xlsx; *.xlsm";
if (ope.ShowDialog() == DialogResult.Cancel)
return;
FileStream stream = new FileStream(ope.FileName, FileMode.Open);
IExcelDataReader excelReader = ExcelReaderFactory.CreateOpenXmlReader(stream);
DataSet result = excelReader.AsDataSet();
DataClasses1DataContext conn = new DataClasses1DataContext();
foreach (DataTable table in result.Tables)
{
foreach (DataRow dr in table.Rows)
{
Employee addtable = new Employee()
{
Serial = Convert.ToInt32(dr[0]),
Name = Convert.ToString(dr[1]),
Class = Convert.ToString(dr[2]),
Department = Convert.ToString(dr[3]),
Status = Convert.ToString(dr[4]),
Position = Convert.ToString(dr[5]),
Email = Convert.ToString(dr[6])
};
conn.Employees.InsertOnSubmit(addtable);
}
}
conn.SubmitChanges();
excelReader.Close();
stream.Close();
MessageBox.Show("YEEESSSS FINALLY");
}
}
}
```
What line of code should I put for it to start reading on the second row, and where inside my code should I put it in? Hope someone can help, thank you. ^\_^<issue_comment>username_1: If you just want to skip a row in your `for` loop for each table, do this:
```
foreach (DataTable table in result.Tables)
{
bool skippedRow = false;
foreach (DataRow dr in table.Rows)
{
if (!skippedRow)
{
skippedRow = true;
continue;
}
Employee addtable = new Employee()
{
Serial = Convert.ToInt32(dr[0]),
Name = Convert.ToString(dr[1]),
Class = Convert.ToString(dr[2]),
Department = Convert.ToString(dr[3]),
Status = Convert.ToString(dr[4]),
Position = Convert.ToString(dr[5]),
Email = Convert.ToString(dr[6])
};
conn.Employees.InsertOnSubmit(addtable);
}
}
```
Upvotes: 0 <issue_comment>username_2: ```
bool skip = true;
foreach (DataRow dr in table.Rows)
{
if(skip)
{
skip = false;
continue;
    }
    // ... process dr as before (create Employee, InsertOnSubmit, ...) ...
}
```
Upvotes: 2 [selected_answer] |
2018/03/22 | 671 | 1,863 | <issue_start>username_0: Would create a list of items and align icon at the right.
```
* {{ group.name }} (ID {{group.code}} )

```
In CSS:
```
.list-item {
width: 300px;
display : inline-block;
}
.trash-icon {
float: right;
}
.trash-icon::after {
clear: both;
}
```
The first line is aligned correctly, however, the following lines are indented.
Here's a screenshot:
[](https://i.stack.imgur.com/nOe0i.png)
When I replace float by right align:
[](https://i.stack.imgur.com/nGmog.png)<issue_comment>username_1: Use the flexbox display for that.
```
* {{ group.name }} (ID {{group.code}} )

```
Your styles
```
ul.list li {
display: flex;
justify-content: center;
}
ul.list li div.list-item {
flex: 1 1 auto;
}
ul.list li div.trash-icon {
flex: 0 0 20px;
}
```
Upvotes: 3 [selected_answer]<issue_comment>username_2: **The answer to the original question**
You don't really need `::after` pseudoclass when you want to clear floated items, you can just do the following:
```
.trash-icon {
float: right;
clear: both;
}
```
And here is the example:
```css
.list-item {
width: 300px;
display : inline-block;
}
.trash-icon {
float: right;
clear: both;
}
```
```html
- SPAN 1

- SPAN 2

- SPAN 3

```
Upvotes: 1 <issue_comment>username_3: have you tried to replace .trash-icon float with text-align
Upvotes: 1 |
2018/03/22 | 737 | 2,384 | <issue_start>username_0: I want to say that I am not an expert, it´s my first Wordpress project and I want to dig in. Therefore I tried to word only with Plugins and few coding.
I don´t know but since today my site has a failure and I don´t understand:
```
Fatal error: Uncaught ArgumentCountError: Too few arguments to function
Cherry_Dynamic_Css_Utilities::typography_font_family(), 0 passed in /home/.sites/48/site8294423/web/wp-content/plugins/jet-elements/cherry-framework/modules/cherry-dynamic-css/cherry-dynamic-css.php on line 487 and exactly 1 expected in /home/.sites/48/site8294423/web/wp-content/plugins/jet-elements/cherry-framework/modules/cherry-dynamic-css/inc/class-cherry-dynamic-css-utilities.php:688
Stack trace:
#0 /home/.sites/48/site8294423/web/wp-content/plugins/jet-elements/cherry-framework/modules/cherry-dynamic-css/cherry-dynamic-css.php(487): Cherry_Dynamic_Css_Utilities->typography_font_family()
#1 [internal function]: Cherry_Dynamic_Css->replace_func(Array)
#2 /home/.sites/48/site8294423/web/wp-content/plugins/jet-elements/cherry-framework/modules/cherry-dynamic-css/cherry-dynamic-css.php(327): preg_replace_callback('/@(([a-zA-Z_]+)...', Array, '/* #Typography ...')
#3 /home/.sites/48/site8294423/web/wp-content/plugins/jet-elements/cherry-framework/modu in /home/.sites/48/site8294423/web/wp-content/plugins/jet-elements/cherry-framework/modules/cherry-dynamic-css/inc/class-cherry-dynamic-css-utilities.php on line 688
```
Tried to restore backup (database and rest) but the same after backup
Can you please help a little bit....
Thank you very much<issue_comment>username_1: The problem is with the jet-elements plugin. Go to `wp-content/plugins` and move or delete the `jet-elements` folder.
Upvotes: 0 <issue_comment>username_2: Either via FTP, your server's file manager, or terminal, you should change the directory name of the plugin in /wp-content/plugins/jet-elements to something like /wp-content/plugins/jet-elements-DISABLED
Once done, attempt to navigate to your wordpress admin dashboard. You'll be given the option of deleting the now missing plugin "Jet Elements" (or whatever it's front name is). Delete it from there. Later, reinstall the plugin, do not activate, and delete it. This will force the plugin's native data deletion function to fire, killing all data in your database from the original install.
Best of luck!
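If you have shell access, the rename step looks like this (sketched against a scratch directory so it's safe to run anywhere; on a real server you'd run the `mv` from your site root instead):

```shell
# Simulate a WordPress tree in a temp dir (illustrative only).
site=$(mktemp -d)
mkdir -p "$site/wp-content/plugins/jet-elements"

# Renaming the folder makes WordPress treat the plugin as missing,
# which force-deactivates it without touching the database.
mv "$site/wp-content/plugins/jet-elements" \
   "$site/wp-content/plugins/jet-elements-DISABLED"

ls "$site/wp-content/plugins"   # -> jet-elements-DISABLED
```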
Upvotes: 1 |
2018/03/22 | 302 | 1,127 | <issue_start>username_0: I am using JSS in my React project and I encountered strange issue, which I find hard to solve. Basically I write media query and it is triggered when I shrink my desktop browser. Though while using device toolbar, it does not seem to work. I am trying to hide span when device "width" is smaller than 600px. Any help will be appreciated.
Here is the code:
```
const menuStyles = theme => ({
flex: {
display: 'flex',
alignItems: 'center',
},
wrapper: {
composes: '$flex',
cursor: 'pointer',
},
span: {
fontFamily: theme.fontMontserrat,
marginRight: '30px',
},
'@media screen and (max-width: 600px)': {
span: {
display: 'none',
},
},
});
```<issue_comment>username_1: OK, fixed the problem. It was in a completely different part of the application.
I forgot to add the viewport meta tag to my page's `<head>`:
```
<meta name="viewport" content="width=device-width, initial-scale=1">
```
Upvotes: 2 [selected_answer]<issue_comment>username_2: You can always check the generated CSS, if it is what you expect it to be then you have wrong assumptions about CSS/html. At the end, jss does nothing else than generating regular CSS.
Upvotes: 0 |
2018/03/22 | 2,400 | 5,684 | <issue_start>username_0: In Julia (0.6.2), push! occasionally overwrites all previous elements with the last element pushed. Here is a piece of code where it appeared; in the end, `out` consists of identical entries.
```
using Distributions
l = [(0, 10, 1, [1.1; 2.2], [1/3; 2/3], 4)];
out = Array{Float64}[ ]; # output
entry = zeros(Float64,3);
for i = 1:length(l)
row = l[i];
startingtime = row[1];
finishingtime = row[2];
node = row[3];
sizelist = row[4];
sizedistr = row[5];
density = row[6];
t = startingtime;
deltat = rand(Exponential(1/density));
while t + deltat < finishingtime
t = t + deltat;
filetype = rand(Categorical(sizedistr));
size = sizelist[filetype];
entry[1] = t; entry[2] = node; entry[3] = size;
print(entry,"\n");
push!(out, entry);
deltat = rand(Exponential(density));
end;
end;
out
```
I am a bit worried about reproducibility, because in other similar situations, push! worked fine. So... is this an actual bug? Or is there a simple fix to this? Thanks in advance!
Edit: OK, to clarify: by reproducibility, I mean that in many other situations, push! works fine, and I don't see a particular pattern for it. It might even happen that on another computer, this code runs fine. I don't know. But for some reason, for me, the above code does not work properly: for each run, it produces an `out` consisting of identical entries. Here is one output:
```
[0.175033, 1.0, 2.2]
[0.24153, 1.0, 2.2]
[4.95478, 1.0, 2.2]
[7.46299, 1.0, 2.2]
Array{Float64,N} where N[[7.46299, 1.0, 2.2], [7.46299, 1.0, 2.2], [7.46299, 1.0, 2.2], [7.46299, 1.0, 2.2]]
```
The print output also shows that the entries are actually different when generated, but they get rewritten after `push!`. This phenomenon also happened for some other code not containing randomness, but it's too long (and I don't even know which parts are relevant to the bug, so I can't even reduce it) so I try to avoid including it here.<issue_comment>username_1: That has nothing to do with randomness or `push!`. You just always push the same array, which is essentially a pointer, to `out`, while changing its contents every time:
```
julia> entry = zeros(Float64, 3)
3-element Array{Float64,1}:
0.0
0.0
0.0
julia> out = Array{Float64}[]
0-element Array{Array{Float64,N} where N,1}
julia> for t = 1:3
entry[1], entry[2], entry[3] = rand(), rand(), rand()
println(entry)
push!(out, entry)
println(out)
end
[0.913257, 0.413237, 0.612766]
Array{Float64,N} where N[[0.913257, 0.413237, 0.612766]]
[0.00247971, 0.0204771, 0.891242]
Array{Float64,N} where N[[0.00247971, 0.0204771, 0.891242], [0.00247971, 0.0204771, 0.891242]]
[0.847745, 0.742295, 0.0260808]
Array{Float64,N} where N[[0.847745, 0.742295, 0.0260808], [0.847745, 0.742295, 0.0260808], [0.847745, 0.742295, 0.0260808]]
julia> pointer(out[1])
Ptr{Float64} @0x00007f7e0abfaf20
julia> pointer(out[2])
Ptr{Float64} @0x00007f7e0abfaf20
```
See that? Essentially, every time, you change the same `entry`, and just end up having the same reference three times in `out`.
How to avoid that? Just assign `entry` inside the loop:
```
julia> out = Array{Float64}[]
0-element Array{Array{Float64,N} where N,1}
julia> for t = 1:3
entry = rand(3)
println(entry)
push!(out, entry)
println(out)
end
[0.141818, 0.743078, 0.760137]
Array{Float64,N} where N[[0.141818, 0.743078, 0.760137]]
[0.625746, 0.558617, 0.633356]
Array{Float64,N} where N[[0.141818, 0.743078, 0.760137], [0.625746, 0.558617, 0.633356]]
[0.337548, 0.55715, 0.78439]
Array{Float64,N} where N[[0.141818, 0.743078, 0.760137], [0.625746, 0.558617, 0.633356], [0.337548, 0.55715, 0.78439]]
```
Then you can remove the outer declaration completely. You can't avoid having 3 separate `entry`s, because you actually need that much allocation, if you want them to be different...
---
Oh, and no need for those semicolons. Julia doesn't automatically print anything (unlike Matlab), if that was your motivation.
Upvotes: 1 <issue_comment>username_2: You are appending the *same* `entry` vector to `out` in each iteration of the loop. And at every iteration, you're just mutating `entry`, effectively mutating it for all instances in your final `out` array.
This is much easier to see with a simpler block of code:
```
julia> entry = zeros(3);
julia> out = [];
julia> for i in 1:3
entry[1] = rand()
@show entry
push!(out, entry)
end
entry = [0.580382, 0.0, 0.0]
entry = [0.210324, 0.0, 0.0]
entry = [0.658214, 0.0, 0.0]
julia> out
3-element Array{Any,1}:
[0.658214, 0.0, 0.0]
[0.658214, 0.0, 0.0]
[0.658214, 0.0, 0.0]
```
The final `out` contains what appears to be repeated elements because every element in it is the *same* vector which you have been modifying in the loop:
```
julia> out[1] === out[2] === out[3]
true
```
If you want each entry in `out` to be a separate vector, then you need to either construct a new `entry` at each iteration or `copy()` it before pushing it into `out`. For example:
```
julia> out = [];
julia> for i in 1:3
entry[1] = rand()
@show entry
push!(out, copy(entry))
end
entry = [0.992697, 0.0, 0.0]
entry = [0.0971598, 0.0, 0.0]
entry = [0.918921, 0.0, 0.0]
julia> out
3-element Array{Any,1}:
[0.992697, 0.0, 0.0]
[0.0971598, 0.0, 0.0]
[0.918921, 0.0, 0.0]
```
If you don't want to `copy()` then instead of:
```
entry[1] = t; entry[2] = node; entry[3] = size;
print(entry,"\n");
push!(out, entry);
```
Just do:
```
push!(out,[t,node,size]);
```
Upvotes: 2 |
2018/03/22 | 831 | 2,764 | <issue_start>username_0: I am using matter.js for physics and p5.js for rendering things. I was trying to simply create boxes when you click in the canvas and then when the boxes reaches the end of the canvas it collide with a ground but ground won't detect the collision with the box. i am assuming the problem is that ground is positioned correctly.
this the code
```js
// module aliases
var Engine = Matter.Engine,
World = Matter.World,
Bodies = Matter.Bodies;
var x = 0;
var engine;
let frame = document.getElementById("frames");
var ground;
var boxs = [];
function setup() {
//put setup code here
createCanvas(640, 480);
engine = Engine.create();
// run the engine
Engine.run(engine);
ground = Bodies.rectangle(0, height / 2, width, 10);
World.add(engine.world, ground);
}
function mousePressed() {
boxs.push(new Box(mouseX, mouseY, 20, 20));
}
function draw() {
x = x + 1;
background(0);
for (var i = 0; i < boxs.length; i++) {
boxs[i].show();
}
}
var Box = function (x, y, width, height) {
this.box = Bodies.rectangle(x, y, width, height);
this.h = height;
this.w = width;
World.add(engine.world, this.box);
this.show = function () {
push();
fill(255);
let pos = this.box.position;
let angle = this.box.angle;
translate(pos.x, pos.y);
rotate(angle);
rect(0, 0, this.w, this.h);
rectMode(CENTER);
pop();
};
};
```
```html
Ball Clash
```<issue_comment>username_1: You should get into the habit of [debugging your code](http://happycoding.io/tutorials/processing/debugging). (That tutorial is for Processing, but many of the same ideas apply to P5.js.)
>
> i am assuming the problem is that ground is positioned correctly.
>
>
>
So why don't you just check the position of the ground? I don't see anywhere that you're drawing the ground, why don't you start by doing that?
Or you could add this line into the `draw()` function:
```
console.log(ground.position);
```
Either way, you'll notice that the ground is falling, which makes sense because it's a rectangle, just like the other boxes which are falling.
To fix your problem, you need to figure out how to create a stationary box that doesn't move. I'm sure Googling "matter.js stationary rectangle" will return a ton of results, or you can consult their documentation.
Upvotes: 2 [selected_answer]<issue_comment>username_2: Elaborating on [Kevin's answer](https://stackoverflow.com/a/49433319/6243352), add [`{isStatic: true}`](https://brm.io/matter-js/docs/classes/Body.html#property_isStatic) to the ground to prevent it from falling:
```js
ground = Bodies.rectangle(0, height / 2, width, 10, {isStatic: true});
// ^^^^^^^^^^^^^^^^^^
```
Upvotes: 0 |
2018/03/22 | 637 | 2,448 | <issue_start>username_0: I'm using a reactive content page base but how do I access something from a base viewmodel? Do I have to use the object of the actual base viewmodel?
I've tried something like this, but IntelliSense is barking at me for obvious reasons. How do I access a command I added to my base viewmodel that all my view models extend? The base viewmodel is called "BaseViewModel".
Also, can you have .WhenActivated in your ContentPageBase as well as in your Content Pages or does this mess something up?
```
public class ContentPageBase<TViewModel> : ReactiveContentPage<TViewModel>, IContentPageBase where TViewModel : class
{
protected IBindingTypeConverter bindingDoubleToIntConverter;
private Subject clearMessageQueueSubject;
public ContentPageBase() : base()
{
clearMessageQueueSubject = new Subject();
bindingDoubleToIntConverter = (IBindingTypeConverter)App.Container.Resolve();
this
.WhenActivated(
disposables =>
{
// VS complains when I try to use TViewModel as first param
clearMessageQueueSubject.InvokeCommand(TViewModel, x => x.ClearMessageQueueCommand).DisposeWith(disposables);
});
}
```
2018/03/22 | 835 | 3,034 | <issue_start>username_0: When trying to fetch data from my GET route, if I return just the response, the data gets fetched. But when I use a mongoose instance method, i.e. callSheetSchema.methods.summary = function() {...}, I hit an error.
Here is the code (this code goes to the catch block and always returns an error):
```
router.get('/callsheets', authenticate, (req, res) => {
Callsheet.find(req.query)
.then(callsheets => res.send(callsheets.summary()))
.catch(err => res.json({ err }));
});
//the query is {creator: id}, which is passed from axios
//no errors in passing the query
```
Below is the code for my mongoose model:
```
import mongoose from 'mongoose';
const callSheetSchema = new mongoose.Schema({
creator: { type: String, default: '' },
published: { type: Boolean, default: false },
sent: { type: Boolean, default: false },
completed: { type: Boolean, default: false },
archived: { type: Boolean, default: false },
client: { type: String, default: '' },
project: { type: String, default: '' },
production: { type: Array, default: [] },
wardrobe: { type: Array, default: [] }
});
callSheetSchema.methods.summary = function() {
const summary = {
id: this._id,
creator: this.creator,
published: this.published,
sent: this.sent,
completed: this.completed,
archived: this.archived,
client: this.client,
project: this.project,
production: this.production,
wardrobe: this.wardrobe
};
return summary;
};
export default mongoose.model('callsheet', callSheetSchema);
```
Again, if I just return `res.send(callsheet)`, the data is sent correctly. If I return `res.send(callsheet.summary())`, I hit the error block.
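One detail worth checking here (an aside; the diagnosis is an assumption, not from the original post): `Callsheet.find()` resolves with an *array* of documents, while `summary()` is an instance method defined on each individual document, so `callsheets.summary()` is not a function and the promise rejects. A minimal sketch with plain objects standing in for mongoose documents (`makeDoc` is invented for the demo):

```javascript
// Plain objects stand in for mongoose documents: schema.methods puts
// summary() on each *document*, never on the array that find() resolves
// with, so the fix is to map over the results before sending them.
function makeDoc(fields) {
  return Object.assign({
    summary() { return { id: this._id, creator: this.creator }; }
  }, fields);
}

const callsheets = [makeDoc({ _id: 1, creator: 'a' }),
                    makeDoc({ _id: 2, creator: 'b' })];

// typeof callsheets.summary === 'undefined', so calling it throws.
const payload = callsheets.map(c => c.summary());
console.log(JSON.stringify(payload));
```

Applied to the route, that would read `.then(callsheets => res.send(callsheets.map(c => c.summary())))`.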
Any and all help is much appreciated.
2018/03/22 | 522 | 1,935 | <issue_start>username_0: To import my crm data into Google analytics (GA), I linked the UserID of my users with ClientID in GA.
For this, I used the following code from GA documentation:
```
ga('set', 'userId', '432432');
```
Over time, the format of the User IDs on my website has changed - instead of the numbers, hashes are now used.
Can I now use the same code above, but with the new identifiers of my users, to send User IDs to GA without damaging current analytics?
In short, can I override the current User IDs in GA so that one user is not identified by the GA system as two different people?
2018/03/22 | 294 | 937 | <issue_start>username_0: Is there a way to open an excel file without knowing the full path name?
For example:
`TEST_03222018.csv` is the file name located in `C:\test\folder`
the known part of the string\path is
```
C:\test\folder\TEST_03
```
is there a way to open this csv sheet without the rest of the path (preferably without using `InStr()` or any `If`/`While` loops)?<issue_comment>username_1: ```
Function findFile(strFileStart As String) As String
    findFile = Dir(strFileStart & "*", vbNormal)
End Function
```
Echo, @<NAME>'s comments about having more than one file with the same prefix though.
Upvotes: 3 [selected_answer]<issue_comment>username_2: Use Dir with a wildcard to confirm the existence and if found, open it.
```
Dim fp As String, fn As String
fp = "C:\test\folder\"
fn = "test_03"
fn = Dir(fp & fn & "*.csv")
If CBool(Len(fn)) Then
    Workbooks.Open fp & fn, Delimiter:=","
End If
```
Upvotes: 1 |
2018/03/22 | 680 | 2,580 | <issue_start>username_0: if you have a method like this in java 8, is it correct to wrap the argument in an Optional? What is the best practice when the argument can be null?
Worth mentioning that the argument "c" comes from a source that I cannot change to Optional.
```
public String isSomething(String c) {
if(c != null) {
return ("something".equals(c))? "YES" : "NO";
}
return null;
}
```<issue_comment>username_1: you could write:
```
public String isSomething(String c) {
return Optional.ofNullable(c)
.map(x -> "something".equals(x) ? "YES" : "NO")
.orElse(null);
}
```
But whatever you have in place (with slight modifications) is probably more readable:
```
public String isSomething(String c) {
if (c == null) {
return c;
}
return "something".equals(c) ? "YES" : "NO";
}
```
Upvotes: 0 <issue_comment>username_2: The best practice is not to declare that an “argument can be null”, when in reality, the method doesn’t do anything useful when the argument is `null`.
Your code is the perfect example of an anti-pattern. If the argument is `null`, your method returns `null`, to let the caller’s code fail at some later time, in the worst case, at a point where it is impossible to trace back where the `null` reference appeared the first time.
If your method doesn’t do anything useful with `null`, it shouldn’t pretend to do so and rather implement a *fail-fast* behavior that allows the caller to spot errors as close to the original cause as possible:
```
public String isSomething(String c) {
return c.equals("something")? "YES": "NO";// yes, that ought to fail if c is null
}
```
If the caller is aware of `Optional` values and actively using it, there is no need for special support from your side:
```
Optional o = …;
o = o.map(s -> isSomething(s));
// now o encapsulates "YES", "NO", or is empty
```
`Optional.map` does already take care not to evaluate the function if it is empty, hence, your method doesn’t need to accept an optional.
---
If you have an operation that can truly do something useful if an argument is absent, you should consider the established pattern of method overloading first:
```
System.out.println("foo"); // prints foo and a newline
System.out.println(); // just prints a newline
```
here, it is entirely obvious to the reader that omitting the argument is permitted, without ever having to deal with `null` or `Optional`.
If you have multiple optional arguments to support, have a look at the Builder Pattern.
Upvotes: 2 |
2018/03/22 | 598 | 1,436 | <issue_start>username_0: I have this specific nested list:
```
paths = [['s', 'a', 'b', 't'], ['s', 'c', 'd', 't'], ['s', 'c', 'e']]
```
and I want to take pairs of 2 consecutive elements from each nested list; for example, I want this as output:
```
['s', 'a'] , ['a', 'b'] ,['b', 't'] , ['s', 'c'] ,...
```
and so on.
Any idea how to do this?<issue_comment>username_1: Here is a version with no imported modules using a list comprehension:
```
paths = [['s', 'a', 'b', 't'], ['s', 'c', 'd', 't'], ['s', 'c', 'e']]
res = [[j[i], j[i+1]] for j in paths for i in range(len(j)-1)]
print(res)
```
Output:
```
[['s', 'a'], ['a', 'b'], ['b', 't'], ['s', 'c'], ['c', 'd'], ['d', 't'], ['s', 'c'], ['c', 'e']]
```
Upvotes: 1 <issue_comment>username_2: One way is to use a nested list comprehension with `zip`:
```
paths = [['s', 'a', 'b', 't'], ['s', 'c', 'd', 't']]
res = [[i, j] for x in paths for i, j in zip(x, x[1:])]
```
Result:
```
[['s', 'a'], ['a', 'b'], ['b', 't'], ['s', 'c'], ['c', 'd'], ['d', 't']]
```
Upvotes: 1 <issue_comment>username_3: ```
paths = [['s', 'a', 'b', 't'], ['s', 'c', 'd', 't'], ['s', 'c', 'e']]
res = []
for item in paths:
    for i in range(len(item) - 1):
        x = []
        x.append(item[i])
        x.append(item[i+1])
        res.append(x)
print(res)
```
output:
```
[['s', 'a'], ['a', 'b'], ['b', 't'], ['s', 'c'], ['c', 'd'], ['d', 't'], ['s', 'c'], ['c', 'e']]
```
Upvotes: 1 [selected_answer] |
2018/03/22 | 1,249 | 4,543 | <issue_start>username_0: Suppose I have the following component:
```
import { mapState } from 'vuex';
import externalDependency from '...';
export default {
name: 'Foo',
computed: {
...mapState(['bar'])
},
watch: {
bar () {
externalDependency.doThing(this.bar);
}
}
}
```
When testing, I want to ensure that `externalDependency.doThing()` is called with `bar` (which comes from the vuex state) like so:
```
it('should call externalDependency.doThing with bar', () => {
const wrapper = mount(Foo);
const spy = jest.spyOn(externalDependency, 'doThing');
wrapper.setComputed({bar: 'baz'});
expect(spy).toHaveBeenCalledWith('baz');
});
```
Vue test-utils has a setComputed method which allows me to currently test it, but I keep getting warnings that setComputed will be removed soon, and I don't know how else this can be tested:
<https://github.com/vuejs/vue-test-utils/issues/331><issue_comment>username_1: You will need some sort of mutator on the Vuex instance. Yes, this does introduce another unrelated unit into the test, but by including Vuex in the test at all, that isolation has already been broken.
Modifying the state in an unexpected way is more prone to cause behaviour that differs from the actual usage.
Upvotes: 0 <issue_comment>username_2: You can set the value straight at the source, i.e. Vuex. So you'd have something like this in your **store.js**:
```
const state = {
bar: 'foo',
};
const mutations = {
SET_BAR: (currentState, payload) => {
currentState.bar = payload;
},
};
const actions = {
setBar: ({ commit }, payload) => {
commit('SET_BAR', payload);
},
};
export const mainStore = {
state,
mutations,
actions,
};
export default new Vuex.Store(mainStore);
```
and then in your **component.spec.js** you'd do this:
```
import { mainStore } from '../store';
import Vuex from 'vuex';
//... describe, and other setup functions
it('should call externalDependency.doThing with bar', async () => {
const localState = {
bar: 'foo',
};
const localStore = new Vuex.Store({
...mainStore,
state: localState,
});
const wrapper = mount(Foo, {
store: localStore,
});
const spy = jest.spyOn(externalDependency, 'doThing');
localStore.state.bar = 'baz';
await wrapper.vm.$nextTick();
expect(spy).toHaveBeenCalledWith('baz');
});
```
You can also call the `dispatch('setBar', 'baz')` method on the store and have the mutation occur properly instead of directly setting the state.
**NB** It's important to re-initialize your state for every mount (i.e. either make a clone or re-declare it). Otherwise one test can change the state and the next test will start with that dirty state, even if the wrapper was destroyed.
Upvotes: 0 <issue_comment>username_3: The [Vue Test Utils](https://vue-test-utils.vuejs.org/guides/using-with-vuex.html) documentation points at a different approach where you use a very simple Vuex store:
```
import { shallowMount, createLocalVue } from '@vue/test-utils'
import Vuex from 'vuex'
// use a localVue to prevent vuex state from polluting the global Vue instance
const localVue = createLocalVue();
localVue.use(Vuex);
describe('Foo.vue', () => {
let state;
let store;
beforeEach(() => {
// create a new store for each test to prevent pollution
state = { bar: 'bar' };
store = new Vuex.Store({ state });
})
it('should call externalDependency.doThing with bar', () =>
{
shallowMount(MyComponent, { store, localVue });
const spy = jest.spyOn(externalDependency, 'doThing');
// trigger the watch
state.bar = 'baz';
expect(spy).toHaveBeenCalledWith('baz');
});
})
```
Upvotes: 2 <issue_comment>username_4: From what you're trying to achieve
>
> When testing, I want to ensure that externalDependency.doThing() is called with bar (which comes from the vuex state) like so:
>
>
>
(and this is indeed a pure unit-test approach), you can just force the watcher to run, since it is basically a function.
There's no need to verify that the watcher fires on a computed or data value change; let Vue handle that.
So, to change a watcher in a mounted Vue instance, just call it like
```
wrapper.vm.$options.watch.bar.call(wrapper.vm)
```
Where `bar` is name of your watcher. This way you will be able to test exact functionality that you're aiming to test.
Idea taken from this comment <https://github.com/vuejs/vue-test-utils/issues/331#issuecomment-382037200>, on a vue-test-utils issue, mentioned by you in a question.
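To illustrate why that call works, here is a minimal sketch with a plain object standing in for `wrapper.vm` (no Vue or vue-test-utils needed; `lastSeen` is an invented property for the demo):

```javascript
// A watcher is stored on the component options as a plain function, so
// it can be invoked directly with the component instance bound as `this`.
const vm = {
  bar: 'baz',
  $options: {
    watch: {
      bar() { this.lastSeen = this.bar; },
    },
  },
};

vm.$options.watch.bar.call(vm);
console.log(vm.lastSeen); // baz
```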
Upvotes: 3 |
2018/03/22 | 479 | 2,200 | <issue_start>username_0: My company needs me to create a tool to read and send messages over a controller area network(CANBUS). I've been successful in creating one in an Ubuntu virtual environment.
1. Can I use my Linux program in windows? (Written in python)
2. If I have to recreate it for windows, what would be the best coding language to interact with the bus and its hardware?
3. Are there any libraries that work well with canbus through windows?<issue_comment>username_1: 1. Can I use my Linux program in windows? (Written in python)
=============================================================
Yes, you can. Python is OS agnostic. You will obviously need to install Python for Windows. Check which driver dependencies the Python library you used has, as you may need to install a driver on your PC.
2. If I have to recreate it for windows, what would be the best coding language to interact with the bus and its hardware?
==========================================================================================================================
No need to recreate it
3. Are there any libraries that work well with canbus through windows?
======================================================================
Same as on Linux.
Upvotes: 3 [selected_answer]<issue_comment>username_2: I'm not sure about (3), but here's what I can say for (1) and (2):
### Is it gonna work on Windows?
I don't know.
It depends on the modules you used.
Most of the time it is portable across platforms, but you might have to change some portions of the code.
If you mostly used TkInter and PySerial, it should work on Windows as well, but you need to give it a try.
### What's the best language for doing this on Windows?
If you have a working version of your program for Linux, you should stick to porting it onto Windows, it shouldn't be too hard if what you used was standard.
Thus, I'd say that the best language is the one that already works on another platform.
However, if it happened that Python was definitely not suited for this program on Windows, then the best language would probably be the one you know best, as long as you're sure it provides the features you need.
Upvotes: 0 |
2018/03/22 | 931 | 2,835 | <issue_start>username_0: I currently have this code that gets information from an API and writes the gathered information to a JSON file.
```
var xp;
fs.readFile('./xp.json', 'utf8', (err, data ) => {
if (err) throw err;
xp = JSON.parse(data);
});
console.log(m['id'] + " " + m['channel_id'] + " " + m['username']);
var usr = m['id'];
if (!xp[usr])
xp[usr] = {xpp: 0};
xp[usr].xpp++;
fs.writeFile("./xp.json", JSON.stringify(xp), (err) => {
if (err) console.error(err);
});
```
When I run this I get this error message:
[Error MSG image](https://i.stack.imgur.com/V3PGa.png)
So I guess the problem is with the `if (!xp[usr]) xp[usr] = {xpp: 0};`
My intentions with that line is:
```
if it's no field in the json object called usr
...
make a field looking like this:
string of usr = {
xpp = 0
}
```
I have been stuck on this one for some hours now. Any suggestions?
full function:
```
var infoUrl = "https:widget.json";
function giveXP(xyz) {
console.log("Interval Started");
request(xyz, function(error, response, body){
if(!error && response.statusCode == 200){
var parsedData = JSON.parse(body);
var memberz = parsedData['members'];
memberz.forEach(function(m){
if(m['channel_id']){
var xp;
fs.readFile('./xp.json', 'utf8', (err, data ) => {
if (err) throw err;
xp = JSON.parse(data);
console.log(m['id'] + " " + m['channel_id'] + " " + m['username']);
var usr = m['id'];
if (!xp[usr]) xp[usr] = {xpp: 0};
xp[usr].xpp++;
});
fs.writeFile("./xp.json", JSON.stringify(xp), (err) => {
if (err) console.error(err);
});
}
});
}
});
} setInterval(giveXP, 3000, infoUrl);
```<issue_comment>username_1: You need to access xp[usr] once fs.readFile gives you the result. As it's async, you have to put your code in its callback. Please take a look at the code below:
```
var xp;
fs.readFile('./xp.json', 'utf8', (err, data) => {
    if (err) throw err;
    xp = JSON.parse(data);
    console.log(m['id'] + " " + m['channel_id'] + " " + m['username']);
    var usr = m['id'];
    if (!xp[usr]) xp[usr] = {xpp: 0};
    xp[usr].xpp++;
    fs.writeFile("./xp.json", JSON.stringify(xp), (err) => {
        if (err) console.error(err);
    });
});
```
Upvotes: 1 <issue_comment>username_2: Perhaps `xp` indeed doesn't exist at that time due to the fact that `readFile` is asynchronous.
So you need to move your logic into the callback:
```
fs.readFile('./xp.json', 'utf8', (err, data) => {
    if (err) throw err;
    xp = JSON.parse(data);
    if (!xp[usr]) {
        xp[usr] = {xpp: 0};
    }
    xp[usr].xpp++;
    fs.writeFile("./xp.json", JSON.stringify(xp), (err) => {
        if (err) console.error(err);
    });
});
```
Upvotes: 0 |
2018/03/22 | 487 | 1,794 | <issue_start>username_0: I have a format designed in Excel, kind of an invoice, and I enter the particular cells. Then, through a macro (I assigned one to a box shape), I add those entered cells to another workbook called 'Database'. Only then would I like to print. Possibly even warn with a message if the step is not done. I hope I'm clear.
My concern is that the page being printed might be faked: for instance, I fill out the form, print it, then edit particular cells and only then add it to the database. So the values wouldn't match and I would have successfully created a fake document, right? That's what I want to prevent.<issue_comment>username_1: Use the BeforePrint event. For a brief introduction to events, I recommend: <http://www.cpearson.com/excel/events.aspx>. Your code would have to be inside the workbook module and would look like this:
```
Private Sub Workbook_BeforePrint(Cancel As Boolean)
macro_to_run_before
End Sub
```
Or, alternatively, you could use an If to check if the macro was executed.
Upvotes: 0 <issue_comment>username_2: * Make a "macro" which writes to a specific cell in a specific worksheet. Make sure that the worksheet is with a `.VeryHidden` visibility. Something like this:
---
```
Public Sub AllowPrinting()
Worksheets(1).Range("A1") = "SomePassword"
End Sub
Public Sub BanPrinting()
Worksheets(1).Range("A1") = ""
End Sub
```
* Use `BeforePrint` event in the `ThisWorkbook`. The `Cancel = True` part disables printing.
---
```
Private Sub Workbook_BeforePrint(Cancel As Boolean)
If Worksheets(1).Range("A1") <> "SomePassword" Then Cancel = True
End Sub
```
[BeforePrint MSDN](https://msdn.microsoft.com/en-us/vba/excel-vba/articles/workbook-beforeprint-event-excel)
Upvotes: 1 |
2018/03/22 | 672 | 2,216 | <issue_start>username_0: I'm new to Json and i'm trying to parse the following example coming from one supplier:
```
{
"Nodes": [ "ID1", "ID2", "ID3", "IDxx" ],
"Results": {
"ID1": {
"ID2": {
"value1": "example1",
"value2": "exempleHexa"
},
"ID3": {
"value1": "example2",
"value2": "exempleHexa"
}
},
"ID3": {
"ID1": {
"value1": "example3",
"value2": "exempleHexa"
},
"ID2": {
"value1": "example4",
"value2": "exempleHexa"
},
"ID3": {
"value1": "example5",
"value2": "exempleHexa"
}
}
}
}
```
I tried these lines to access value1 but it always returns null:
```
var json = JsonConvert.DeserializeObject<dynamic>(JsonData);
var data = ((JObject)json.Results).Children().ToArray();
foreach (var item in data)
{
var childss = item.Children().ToArray();
foreach (var item2 in childss)
{
var sub = JsonConvert.DeserializeObject<dynamic>(item2.ToString());
string hmm = sub.value1;
}
}
```
Would anyone have an idea? Thanks.
2018/03/22 | 491 | 1,732 | <issue_start>username_0: Docker command is unable to find my configuration file
I have a **pm2** Docker image for launching nodeJS process in production. The image can be found here: <https://hub.docker.com/r/keymetrics/pm2/~/dockerfile/>
Docker launches the command `pm2-runtime start pm2.json` and fails as it can't find any `/pm2.json`.
However I do have a `pm2.json` in the working directory, so why is it looking at the root `/` ?
I would also like to run the docker images with docker-compose. Is there something to configure in the docker-compose.yml file to look at the working directory?
2018/03/22 | 1,102 | 2,738 | <issue_start>username_0: I would like to add up the upper-part diagonals of a matrix starting from the middle, with an increment in column until (1,n), n being the last column, and save the sum of every diagonal. My code only adds the middle diagonal; how can I loop through the matrix to get the sum of each diagonal?
```
A <- matrix(c(2, 4, 3, 1,
5, 7, 1, 2,
3, 2, 3, 4,
1, 5, 6, 0), # the data elements
nrow = 4, # number of rows
ncol = 4, # number of columns
byrow = TRUE) # fill matrix by rows
sum <- 0
print(A)
for (a in 1){
for (b in 1:ncol){
if (a<-b){
sum = sum + A[a,b]
print (sum)
}
}
}
```
Here is my result
```
> print(A)
[,1] [,2] [,3] [,4]
[1,] 2 4 3 1
[2,] 5 7 1 2
[3,] 3 2 3 4
[4,] 1 5 6 0
for (a in 1){
for (b in 1:ncol){
if (a<-b){
sum = sum + A[a,b]
tail(sum, n=1)
}
}
}
12
```<issue_comment>username_1: You need `diag` to extract all main diagonal elements and `sum` to get the sum of them
```
sum(diag(A))
```
I am not sure what you're asking for, but if you also want to extract the upper triangular matrix, you can use `A[upper.tri(A)]`, which excludes the main diagonal elements; you can also set `diag=TRUE` to include them: `A[upper.tri(A, diag = TRUE)]`
@shegzter based on your comment, you can use `col` and `row` combined with logical comparison `==` to get the numbers you want.
```
> A[row(A)==col(A)] # this gives the same output as `diag(A)`
[1] 2 7 3 0
> A[row(A)+1==col(A)]
[1] 4 1 4
> A[row(A)+2==col(A)]
[1] 3 2
> A[row(A)+3==col(A)]
[1] 1
```
If you want the sum of each of them, so use `sum` over those elements:
```
> sum(A[row(A)==col(A)])
[1] 12
> sum(A[row(A)+1==col(A)])
[1] 9
> sum(A[row(A)+2==col(A)])
[1] 5
> sum(A[row(A)+3==col(A)])
[1] 1
```
If your objective is getting the following sum 12+9+5+1, then you can do it all at once using `upper.tri` and `sum`
```
> sum(A[upper.tri(A, diag = TRUE)])
[1] 27
```
Or without the diagonal elements:
```
> sum(A[upper.tri(A)])
[1] 15
```
Upvotes: 2 [selected_answer]<issue_comment>username_2: The following returns the sum for each diagonal:
```
sapply(split(A, col(A) - row(A)), sum)
# -3 -2 -1 0 1 2 3
# 1 8 13 12 9 5 1
```
Hence, to get only the upper ones, you may use
```
tail(sapply(split(A, col(A) - row(A)), sum), ncol(A))
# 0 1 2 3
# 12 9 5 1
```
The drawback of using `tail` is that we also compute lower diagonal sums. Hence, to save some time when `A` is large you may want to use
```
sapply(split(A[upper.tri(A, diag = TRUE)], (col(A) - row(A))[upper.tri(A, diag = TRUE)]), sum)
# 0 1 2 3
# 12 9 5 1
```
Upvotes: 0 |
2018/03/22 | 2,092 | 8,799 | <issue_start>username_0: In my application, I have objects with several `SpEL` expressions that usually contain signatures of methods with a `boolean` return type to invoke, plus logical operators. Before these objects are cached, I check the consistency of the expression by simply executing the parsed expression. When an exception is thrown, I set an appropriate flag inside the object to indicate that the expression is invalid, to avoid further execution.
I am executing the expression on an `EvaluationContext` that implements all methods that are permitted to be a part of the expression. All these methods return `false`. I have come across a problem that involves a short circuit evaluation.
Given `methodOne` and `methodTwo` are the only permitted methods to invoke, this expression correctly sets the inconsistency flag
```
methodERROROne("arg") AND methodTwo("arg")
```
this one, however, does not: because `methodOne` returns `false`, Spring uses short-circuit evaluation and does not execute the remaining operands. This causes the expression to fail when it is executed on the real `EvaluationContext` and `methodOne` returns `true`
```
methodOne("arg") AND methodERRORTwo("arg")
```
Is there a way to disable short circuit evaluation in Spring Expression Language?<issue_comment>username_1: No; the `OpAnd` operator always short-circuits...
```
@Override
public TypedValue getValueInternal(ExpressionState state) throws EvaluationException {
if (!getBooleanValue(state, getLeftOperand())) {
// no need to evaluate right operand
return BooleanTypedValue.FALSE;
}
return BooleanTypedValue.forValue(getBooleanValue(state, getRightOperand()));
}
```
...there is no equivalent to Java's `&` operator.
**EDIT**
>
> All these methods return false
>
>
>
If they all return `false` can't you use `"!(!m1() and !m2())"` ?
or even
`"!((!m1() or m1()) and (!m2() or m2()))"`
Upvotes: 2 [selected_answer]<issue_comment>username_2: If you really want to disable short-circuit evaluation it is possible, but you have to get your hands dirty. As [Gary pointed out](https://stackoverflow.com/a/49435114/1651107), the logic in OpAnd (OpOr) is fixed. So we have to change/override the implementation of OpAnd/OpOr.getValueInternal(ExpressionState state). This requires some source code copying; the source can be found [here](https://github.com/spring-projects/spring-framework/tree/main/spring-expression/src/main/java/org/springframework/expression). My samples may differ a bit since I implemented and tested this on a different version of the Maven dependency:
```
org.springframework
spring-expression
5.1.5.RELEASE
```
First of all, we have to implement our own version of OpAnd and OpOr. Since OpAnd.getBooleanValue(ExpressionState state, SpelNodeImpl operand) and SpelNodeImpl.getValue(ExpressionState state, Class desiredReturnType) are private, we have to implement our version of getBooleanValue.
```
import org.springframework.expression.EvaluationException;
import org.springframework.expression.TypedValue;
import org.springframework.expression.common.ExpressionUtils;
import org.springframework.expression.spel.ExpressionState;
import org.springframework.expression.spel.SpelEvaluationException;
import org.springframework.expression.spel.SpelMessage;
import org.springframework.expression.spel.ast.OpAnd;
import org.springframework.expression.spel.ast.SpelNodeImpl;
import org.springframework.expression.spel.support.BooleanTypedValue;
import org.springframework.lang.Nullable;
public class DomailOpAnd extends OpAnd {
public static boolean getBooleanValue(ExpressionState state, SpelNodeImpl operand) {
try {
Boolean value = ExpressionUtils.convertTypedValue(state.getEvaluationContext(), operand.getValueInternal(state), Boolean.class);
//Boolean value = operand.getValue(state, Boolean.class);
assertValueNotNull(value);
return value;
}
catch (SpelEvaluationException ex) {
ex.setPosition(operand.getStartPosition());
throw ex;
}
}
private static void assertValueNotNull(@Nullable Boolean value) {
if (value == null) {
throw new SpelEvaluationException(SpelMessage.TYPE_CONVERSION_ERROR, "null", "boolean");
}
}
public DomailOpAnd(int pos, SpelNodeImpl ... operands) {
super(pos, operands);
}
@Override
public TypedValue getValueInternal(ExpressionState state) throws EvaluationException {
//HERE is our non-short-circuiting logic:
boolean left = getBooleanValue(state, getLeftOperand());
boolean right = getBooleanValue(state, getRightOperand());
return BooleanTypedValue.forValue(left && right);
}
}
public class DomailOpOr extends OpOr {
public DomailOpOr(int pos, SpelNodeImpl ... operands) {
super(pos, operands);
}
@Override
public BooleanTypedValue getValueInternal(ExpressionState state) throws EvaluationException {
Boolean left = DomailOpAnd.getBooleanValue(state, getLeftOperand());
Boolean right = DomailOpAnd.getBooleanValue(state, getRightOperand());
return BooleanTypedValue.forValue(left || right);
}
}
```
Now we have to use our versions of OpAnd and OpOr where they are created in InternalSpelExpressionParser. But InternalSpelExpressionParser is not a public class, so we cannot subclass and override it as we did with OpAnd; we have to copy the InternalSpelExpressionParser source code and create our own class. Here I show only the edited parts:
```
/**
* Hand-written SpEL parser. Instances are reusable but are not thread-safe.
*
* @author <NAME>
* @author <NAME>
* @author <NAME>
* @since 3.0
*/
class DomailInternalExpressionParser extends TemplateAwareExpressionParser {
//rest of class ...
@Nullable
private SpelNodeImpl eatLogicalOrExpression() {
SpelNodeImpl expr = eatLogicalAndExpression();
while (peekIdentifierToken("or") || peekToken(TokenKind.SYMBOLIC_OR)) {
Token t = takeToken(); //consume OR
SpelNodeImpl rhExpr = eatLogicalAndExpression();
checkOperands(t, expr, rhExpr);
expr = new DomailOpOr(toPos(t), expr, rhExpr);
}
return expr;
}
@Nullable
private SpelNodeImpl eatLogicalAndExpression() {
SpelNodeImpl expr = eatRelationalExpression();
while (peekIdentifierToken("and") || peekToken(TokenKind.SYMBOLIC_AND)) {
Token t = takeToken(); // consume 'AND'
SpelNodeImpl rhExpr = eatRelationalExpression();
checkOperands(t, expr, rhExpr);
expr = new DomailOpAnd(toPos(t), expr, rhExpr);
}
return expr;
}
//rest of class ...
```
This also requires to copy source of other non public classes to be available:
* org.springframework.expression.spel.standard.Token
* org.springframework.expression.spel.standard.Tokenizer
* org.springframework.expression.spel.standard.TokenKind
Finally we swap implementation of parser in SpelExpression Parser:
```
import org.springframework.expression.ParseException;
import org.springframework.expression.ParserContext;
import org.springframework.expression.spel.SpelParserConfiguration;
import org.springframework.expression.spel.standard.SpelExpression;
import org.springframework.expression.spel.standard.SpelExpressionParser;
public class DomailExpressionParser extends SpelExpressionParser {
private final SpelParserConfiguration configuration;
public DomailExpressionParser(SpelParserConfiguration configuration) {
super(configuration);
this.configuration = configuration;
}
// we cannot use this because configuration is not visible
// public DomailExpressionParser() {
// super();
// }
@Override
protected SpelExpression doParseExpression(String expressionString, ParserContext context) throws ParseException {
return new DomailInternalExpressionParser(this.configuration).doParseExpression(expressionString, context);
}
}
```
and use our version of parser as if it would be SpelExpressionParser:
```
SpelParserConfiguration engineConfig = new SpelParserConfiguration();
ExpressionParser engine = new DomailExpressionParser(engineConfig);
Expression parsedScript = engine.parseExpression(script);
T result = parsedScript.getValue(scriptCtx, resultType);
```
We can test it on simple expression "false && blabla":
* with SpelExpressionParser there is no error
* with DomailExpressionParser I get org.springframework.expression.spel.SpelEvaluationException: EL1008E: Property or field 'blabla' cannot be found on object of type 'sk.dominanz.domic.common.eml.service.EmlScriptRootObject' - maybe not public or not valid?
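As a cross-language aside, the behavioral difference being tested here — eager evaluation of both operands versus short-circuiting — is the same one that separates Python's `and` from its bitwise `&` on booleans:

```python
calls = []

def record(name, value):
    """Log which operands actually get evaluated, then return the value."""
    calls.append(name)
    return value

# `and` short-circuits: the right operand is never evaluated
calls.clear()
_ = record("left", False) and record("right", True)
assert calls == ["left"]

# `&` is eager: both operands are evaluated, like the non-short-circuiting SpEL variant
calls.clear()
_ = record("left", False) & record("right", True)
assert calls == ["left", "right"]

print("eager vs short-circuit demonstrated")
```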
Upvotes: 0 |
2018/03/22 | 1,530 | 4,792 | <issue_start>username_0: Does my `$to` variable have to equal something in this situation, or will my functions fill that in automatically?
```
<?php
if (isset($_POST["MainCB"])) {
$to = "<EMAIL>";
}
if (isset($_POST["ITCB"])) {
$to = "<EMAIL>";
}
if (isset($_POST["CateCB"])) {
$to = "<EMAIL>";
}
if (isset($_POST['submit'])) {
$full_name = $_POST['full_name'];
$MainCB = $_POST['MainCB'];
$ITCB = $_POST['ITCB'];
$CateCB = $_POST['CateCB'];
$subject = "Form submission";
$message = $full_name . " " . $MainCB . " " . $ITCB . " " . $CateCB;
mail($to, $subject, $message);
include 'mail.php';
}
?>
```
---
```
Event Form
LOGS
====
Event Request Form
==================
| | |
| --- | --- |
| Full Name: | |
| Event Title: | |
| Person/Dept in Charge: | |
| Venue: | |
| Have you checked venue availability: | Yes No |
| No. of Adults: | |
| No. of Children: | |
| Maintenance: | |
| IT: | |
| Catering: | |
| Catering Requirements: | |
| Logistical Requirements/Equipment: | |
| IT Requirements: | |
| Transport Booked: | Yes No |
| Email: | |
| EXT: | |
```
---
```
<?php
$recipients = array();
if(isset($_POST["MainCB"])) {
$recipients[] = "<EMAIL>"; // one address email
}
if(isset($_POST["ITCB"])) {
$recipients[] = "<EMAIL>"; // one other address email
}
if(isset($_POST["CateCB"])) {
$recipients[] = "<EMAIL>"; // one more address email
}
if(isset($_POST['submit']) && !empty($recipients) ){ // You need to have at least one email address to send it
$to = implode(',', $recipients); // All your email address
$full_name = $_POST['full_name'];
$MainCB = $_POST['MainCB'];
$ITCB = $_POST['ITCB'];
$CateCB = $_POST['CateCB'];
$subject = "Form submission";
$message = $full_name . " " . $MainCB . " " . $ITCB . " " . $CateCB;
mail($to,$subject,$message);
include 'mail.php';
}
?>
```<issue_comment>username_1: According to what you said in the comment and this question : [PHP send mail to multiple email addresses](https://stackoverflow.com/questions/4506078/php-send-mail-to-multiple-email-addresses)
You can try something like this to have multiple "to" addresses:
```
<?php
$recipients = array();
if(isset($_POST["MainCB"])) {
$recipients[] = "<EMAIL>"; // one address email
}
if(isset($_POST["ITCB"])) {
$recipients[] = "<EMAIL>"; // one other address email
}
if(isset($_POST["CateCB"])) {
$recipients[] = "<EMAIL>"; // one more address email
}
if(isset($_POST['submit']) && !empty($recipients) ){ // You need to have at least one email address to send it
$to = implode(',', $recipients); // All your email address
$full_name = $_POST['full_name'];
$MainCB = $_POST['MainCB'];
$ITCB = $_POST['ITCB'];
$CateCB = $_POST['CateCB'];
$subject = "Form submission";
$message = $full_name . " " . $MainCB . " " . $ITCB . " " . $CateCB;
mail($to,$subject,$message);
include 'mail.php';
}
?>
```
Is it what you are looking for?
Upvotes: 0 <issue_comment>username_2: I recommend reading some documentation on how to handle POST data in PHP. Besides that, what are you trying to accomplish?
```
if(isset($_POST["MainCB"]) || isset($_POST["ITCB"]) || isset($_POST["CateCB"])) {
$to = "<EMAIL>";
}
```
Will do the same thing. Furthermore, `if(isset($_POST['submit'])){` should always evaluate to true when the form is submitted; in fact, this should be your outermost if/else statement. Then you're overwriting the `$to` variable with `''`, so remove that; otherwise the code before it has no effect.
Code is logic: read the code, understand it, and adjust your if/else statements where needed.
```
if(isset($_POST["MainCB"])){
$to[] = '<EMAIL>';
}
if(isset($_POST["ITCB"])){
$to[] = '<EMAIL>';
}
if(isset($_POST["CateCB"])){
$to[] = '<EMAIL>';
}
if(!empty($to)){
$to = array_unique($to); // remove duplicate entry's.
foreach($to as $address){
if(!filter_var($address, FILTER_VALIDATE_EMAIL)){
die("'$address' is not a valid email");
}
}
$to = implode(", ", $to);
} else {
die('No addresses to send the mail to!');
}
```
Now `$to` will have all the wanted addresses in one string to be used in `mail()`. You should also think about using a library such as [PHPMailer](https://github.com/PHPMailer/PHPMailer), but first focus on the basics of programming.
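The overall flow — gather addresses per checkbox, de-duplicate, validate, then join — is language-independent. For illustration only, here is the same pipeline sketched in Python with placeholder addresses (the mapping below is assumed, not taken from the question):

```python
import re

# Deliberately simplified validation, standing in for PHP's FILTER_VALIDATE_EMAIL
EMAIL_RE = re.compile(r"^[^@\s]+@[^@\s]+\.[^@\s]+$")

# Placeholder mapping from checkbox name to recipient
MAPPING = {
    "MainCB": "maintenance@example.com",
    "ITCB": "it@example.com",
    "CateCB": "catering@example.com",
}

def build_recipients(selected_boxes):
    """Collect unique recipients for the checked boxes, validate, and join them."""
    addresses = []
    for box in selected_boxes:
        addr = MAPPING.get(box)
        if addr and addr not in addresses:   # array_unique equivalent
            addresses.append(addr)
    if not addresses:
        raise ValueError("No addresses to send the mail to!")
    for addr in addresses:
        if not EMAIL_RE.match(addr):
            raise ValueError(f"'{addr}' is not a valid email")
    return ", ".join(addresses)              # implode equivalent

print(build_recipients(["MainCB", "CateCB", "MainCB"]))
```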
Upvotes: 1 <issue_comment>username_3: ```
$email = ""; // initialize first to avoid an undefined-variable notice
if(isset($_POST["MainCB"])) {
$email = $email. ", <EMAIL>";
}
if(isset($_POST["ITCB"])) {
$email = $email. ", <EMAIL>";
}
if(isset($_POST["CateCB"])) {
$email = $email. ", <EMAIL>";
}
$mail = $_POST["Email"];
```
Upvotes: 0 |
2018/03/22 | 876 | 3,511 | <issue_start>username_0: I have been diving into JavaScript recently, and after spending several months on it I am still confused about some of the internals.
Specifically, I was trying to wrap my head around the so-called [Standard Built-In Objects](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects).
**What I Know**
1. All functions in JavaScript, both built-in and user-created, are objects (`function objects`)
2. Difference between `general objects` and `function objects` is that `function objects` implement the `[[Call]]` property (and can thus be invoked)
3. All `function objects` implement the `.prototype` property, which represents the prototype of all objects created with the `function object` as a constructor
**Questions**
1. Are all the `Standard Built-In Objects` actually `function objects` (i.e. constructor functions)?
2. Do all (and only) `function objects` implement `.prototype`?
3. Are the right terms `general object` vs `function object`?
Thank you.<issue_comment>username_1: 1. No, some built-in functions throw an exception if you try to use them with `new` (`Symbol` comes to mind).
2. The prototype concept only applies to functions, though you can create an object with anything you want as the immediate parent in its prototype chain with `Object.create()`
3. All functions are objects, but not all objects are functions.
To clarify 2, all objects have a prototype chain with zero or more objects arranged in (effectively) a list. The automatic setting of a new object's prototype chain based on constructor function `.prototype` value only applies to functions, but that's sort-of tautological because a function is the only thing that can be a constructor.
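As a cross-language aside, the [[Call]] idea — "an object is a function exactly when it is callable" — maps directly onto Python's `__call__` protocol, which makes the distinction easy to demonstrate:

```python
class Greeter:
    """A plain object made callable by implementing the call protocol."""

    def __init__(self, name):
        self.name = name

    def __call__(self):
        return f"hello, {self.name}"

g = Greeter("world")
assert callable(g) and callable(len)  # both implement the call protocol
assert not callable({})               # a plain dict does not
print(g())
```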
Upvotes: 1 <issue_comment>username_2: >
> All function objects implement the `.prototype` property, which represents the prototype of all objects created with the function object as a constructor
>
>
>
No. Only constructor functions do that. There are functions that are no constructors (i.e. they have a [[call]] but no [[construct]] internal method), they throw an error when you try to use them with `new` (as do some functions that have a [[construct]] internal method, but throw nonetheless).
And technically, not even constructor functions necessarily need a `.prototype`. You could implement a builtin that can be constructed but does not have a `.prototype` property. It's true however that all native constructor functions do have one.
>
> Are all the Standard Built-In Objects actually function objects?
>
>
>
No. Consider the `Math`, `JSON`, `Atomics` or `Reflect` built-in objects. They're not functions at all.
>
> function objects (i.e. constructor functions)?
>
>
>
No. Consider the `parseInt`, `JSON.stringify` or `Array.prototype.slice` functions (and many more globals, static and prototype methods). They're not constructors at all.
>
> Do all (and only) function objects implement `.prototype`?
>
>
>
No. There is nothing special about the `.prototype` property, except that it is usually used on a `function` when constructing an instance with `new`. But every object can have or can not have a `.prototype` property.
>
> Are the right terms general object vs function object?
>
>
>
No, "general objects" is not an official term. I'd name them *non-callable* objects and [*callable* objects](http://www.ecma-international.org/ecma-262/8.0/#table-6) if I had to (the latter term is used in the spec).
Upvotes: 3 [selected_answer] |
2018/03/22 | 394 | 1,584 | <issue_start>username_0: [Azure Data Factory IF condition Image](https://i.stack.imgur.com/4W8Yf.png)
What I'm trying to do is execute a stored procedure; however, it should only be executed if the two preceding "Pipelines" are successfully executed. *See Image URL above.*
I'm struggling with the correct expression in the "IF condition".
I'm trying to accomplish something like:
**IF** *TriggerCopyAX* is *successful* AND *TriggerCopyNav* is *successful*, continue...<issue_comment>username_1: Execute Pipeline is a type of activity; what you want to do is chain activities, not pipelines. You can configure activity dependency with the dependsOn property. See here: <https://learn.microsoft.com/en-us/azure/data-factory/concepts-pipelines-activities#control-activity>
Example:
```
"dependsOn": [
{
"activity": "YourActivityName",
"dependencyConditions": [ "Succeeded" ]
}
]
```
Hope this helped!
Upvotes: 1 [selected_answer]<issue_comment>username_2: Yes that works.
So basically what I did:
1. Did not use the "If condition".
2. I dragged the two "execute pipelines" directly to the stored procedure.
3. When I looked at the code behind the stored procedure element I saw the "dependsOn" property is automatically used.
```
"dependsOn": [
{
"activity": "TriggerCopy_AX",
"dependencyConditions": [
"Succeeded"
]
},
{
"activity": "TriggerCopy_NAV",
"dependencyConditions": [
"Succeeded"
]
}
],
```
Upvotes: 2 |
2018/03/22 | 1,036 | 3,086 | <issue_start>username_0: How can I get a named list back from `purrr::map` the same way I could do it with `plyr::dlply`? I am providing a reprex here. As can be seen, plyr::ldply returns a named list while `purrr::map` doesn't. I also checked out a similar question from 2 years ago ([How to get list name and slice name with pipe and purrr](https://stackoverflow.com/questions/34834647/how-to-get-list-name-and-slice-name-with-pipe-and-purrr)), but that wasn't that helpful because there `purrr::map` is not being used on a list column inside a dataframe and that's what I want to do.
```r
library(tidyverse)
library(plyr)
# creating a list of plots with purrr
plotlist_purrr <- iris %>%
dplyr::group_by(.data = ., Species) %>%
tidyr::nest(data = .) %>%
dplyr::mutate(
.data = .,
plots = data %>% purrr::map(
.x = .,
.f = ~ ggplot2::ggplot(
data = .,
mapping = aes(x = Sepal.Length, y = Sepal.Width)
) + geom_point() + geom_smooth(method = "lm")
)
)
# see the names of the plots
names(plotlist_purrr$plots)
#> NULL
# creating a list of plots with plyr
plotlist_plyr <- plyr::dlply(
.data = iris,
.variables = .(Species),
.fun = function(data)
ggplot2::ggplot(
data = data,
mapping = aes(x = Sepal.Length, y = Sepal.Width)
) + geom_point() + geom_smooth(method = "lm")
)
# see the names of the plots
names(plotlist_plyr)
#> [1] "setosa" "versicolor" "virginica"
```
Created on 2018-03-22 by the [reprex package](http://reprex.tidyverse.org) (v0.2.0).
I am trying to move away from `plyr` and completely depend on `tidyverse` in my scripts, but there are some things I could do with `plyr` that I am still trying to figure out how to do with `purrr`, and this is one of them.<issue_comment>username_1: You can try
```
library(tidyverse)
my_way <- iris %>%
group_by(Species) %>%
nest() %>%
mutate(plots= data %>%
map(~ggplot(., aes(x= Sepal.Length, y= Sepal.Width)) +
geom_point() +
geom_smooth(method= "lm"))) %>%
mutate(plots= set_names(plots, Species))
my_way
# A tibble: 3 x 3
  Species    data              plots   
1 setosa     <tibble [50 x 4]> <S3: gg>
2 versicolor <tibble [50 x 4]> <S3: gg>
3 virginica  <tibble [50 x 4]> <S3: gg>
names(my\_way$plots)
[1] "setosa" "versicolor" "virginica"
```
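(A cross-language aside.) The fix above amounts to keying each result by its group label instead of its position; for comparison, the analogous move in plain Python is simply a dict keyed by group name:

```python
from collections import defaultdict

# Toy stand-in for the grouped iris data: (species, sepal_length) pairs
rows = [("setosa", 5.1), ("setosa", 4.9), ("versicolor", 7.0), ("virginica", 6.3)]

# group_by + nest: collect the values belonging to each species
groups = defaultdict(list)
for species, sepal_length in rows:
    groups[species].append(sepal_length)

# map + set_names: derive one result per group, keyed by the group label
means = {name: sum(values) / len(values) for name, values in groups.items()}

print(sorted(means))
```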
Upvotes: 2 <issue_comment>username_2: You just need to use `purrr::set_names(Species)` before doing `map`
```r
library(plyr)
library(tidyverse)
# creating a list of plots with purrr
plotlist_purrr <- iris %>%
dplyr::group_by(.data = ., Species) %>%
tidyr::nest(data = .) %>%
dplyr::mutate(
.data = .,
plots = data %>%
purrr::set_names(Species) %>%
purrr::map(
.x = .,
.f = ~ ggplot2::ggplot(
data = .,
mapping = aes(x = Sepal.Length, y = Sepal.Width)
) + geom_point() + geom_smooth(method = "lm")
)
)
# see the names of the plots
names(plotlist_purrr$plots)
#> [1] "setosa" "versicolor" "virginica"
```
Created on 2018-03-22 by the [reprex package](http://reprex.tidyverse.org) (v0.2.0).
Upvotes: 2 [selected_answer] |
2018/03/22 | 1,324 | 4,673 | <issue_start>username_0: I recently went for an interview and got a question on JAVASCRIPT which I could not answer, but i would really like to know the answer, but I do not know how to phrase it in google.
The question is:
```
var a = new A();
a(); // return 1
a(); // return 2
var b = new A();
b(); //return 1;
b(); //return 2;
```
Implement A
How can I solve this using javascript. I got this so far
```
class A {
constructor(){
this.num = 1;
}
function(){
console.log(this.num--);
}
}
```<issue_comment>username_1: When using the [new operator](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Operators/new), a class constructor or a function can return a value other than `this`. According to MDN:
>
> The object returned by the constructor function becomes the result of
> the whole new expression. If the constructor function doesn't
> explicitly return an object, the object created in step 1 is used
> instead. (Normally constructors don't return a value, but they can
> choose to do so if they want to override the normal object creation
> process.)
>
>
>
So a possible answer to this question is to return a function from the constructor:
```js
class A {
constructor(){
this.num = 1;
return this.func.bind(this); // bind the function to `this`
}
func() {
console.log(this.num++);
}
}
var a = new A();
a(); // return 1
a(); // return 2
var b = new A();
b(); //return 1;
b(); //return 2;
```
**Possible uses:**
After answering this question, I've started to think about the legitimate uses of the return value.
1. Singleton
```js
class Singleton {
constructor() {
if(Singleton.instance) {
return Singleton.instance;
}
Singleton.instance = this;
return this;
}
}
console.log(new Singleton() === new Singleton()); // true
```
2. Encapsulated module using the [Revealing Module Pattern](https://addyosmani.com/resources/essentialjsdesignpatterns/book/#revealingmodulepatternjavascript) with a class
```js
class Module {
constructor(value) {
this.value = value;
return {
increment: this.increment.bind(this),
decrement: this.decrement.bind(this),
getValue: this.getValue.bind(this),
};
}
increment() {
this.value++;
}
decrement() {
this.value--;
}
getValue() {
return this.value;
}
}
const module = new Module(5);
module.increment();
module.increment();
module.decrement();
console.log(module.value); // undefined
console.log(module.getValue()); // 6
```
Upvotes: 3 [selected_answer]<issue_comment>username_2: It is unclear whether using a `class` is required; here's a solution using a simple closure and basically ignoring the `new` keyword:
```js
function A() {
var i = 1;
return function () {
console.log(i++);
}
}
var a = new A();
a();
a();
var b = new A();
b();
b();
```
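The closure trick is not JavaScript-specific; the same shape written in Python (comparison only) makes it clear that every call to the factory gets its own independent counter:

```python
def make_counter():
    """Each factory call creates a fresh `i` captured by the inner function."""
    i = 0

    def counter():
        nonlocal i
        i += 1
        return i

    return counter

a = make_counter()
b = make_counter()
assert (a(), a()) == (1, 2)
assert (b(), b()) == (1, 2)  # b has its own state, unaffected by a
print("independent counters")
```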
Upvotes: 2 <issue_comment>username_3: ```
class A {
constructor () {
this.num = 1
return () => {
console.log(this.num++)
}
}
}
```
Upvotes: -1 <issue_comment>username_4: This is an XY problem. There is no requirement to use an ES6 class in the interview question. It is misused here because it cannot provide any reasonable class design (it makes no sense from an OOP point of view and requires unreasonable prototype hacking from a JS point of view).
There is no such thing as a 'default' method; this isn't how OOP works. `new` creates an instance of a class, and in order to receive a function as an instance, the class would have to be a child of `Function`, so [this answer](https://stackoverflow.com/a/49432292/3731501) is technically correct, although impractical.
The answers don't really explain that, but the proper answer to this interview question is:
*A new function should be returned from the constructor function.*
```
function A(num = 0) {
return () => ++num;
}
var a = new A();
a(); // return 1
a(); // return 2
```
*It is workable, but there is no reason to use this function as a constructor. It's expected that class instance is used to store **and** expose `num` state, and there is no good way to use `this` as class instance in this capacity because prototypal inheritance between constructor and returned function weren't established. `new` provides additional overhead without any purpose. The solution is purely functional, and the use of `new` is the mistake, so it should be omitted.*
```
var a = A();
a(); // return 1
a(); // return 2
```
And there is a good chance that the one who asks the question will be frustrated by the answer; it's unlikely that this was a catch for A+ answer.
Upvotes: 2 |
2018/03/22 | 701 | 1,683 | <issue_start>username_0: I'm new to pandas. My df looks like this:
```
A A A B B B
a NaN NaN 2 NaN NaN 5
b NaN 1 NaN 9 NaN NaN
c 3 NaN NaN 7 NaN NaN
```
How can I get
```
A B
a 2 5
b 1 9
c 3 7
```
It looks like merge, join are for more than one dataframe. I have also tried
```
df.groupby(by=[A,B], axis=1)
```
but got
```
ValueError: Grouper and axis must be same length
```<issue_comment>username_1: I believe you need to specify the first level with an aggregate function like `sum`, `mean`, `first`, `last`...:
```
import pandas as pd
df = df.groupby(level=0, axis=1).sum()
print (df)
A B
a 2.0 5.0
b 1.0 9.0
c 3.0 7.0
```
And if you need to filter columns by name, use a subset:
```
df = df[['A','B']].groupby(level=0, axis=1).sum()
```
If working with index values:
```
df1 = df.T
print (df1)
a b c
A NaN NaN 3.0
A NaN 1.0 NaN
A 2.0 NaN NaN
B NaN 9.0 7.0
B NaN NaN NaN
B 5.0 NaN NaN
df = df1.groupby(level=0).sum()
#default parameter axis=0 should be omit above
#df = df1.groupby(level=0, axis=0).sum()
print (df)
a b c
A 2.0 1.0 3.0
B 5.0 9.0 7.0
```
Upvotes: 4 [selected_answer]<issue_comment>username_2: Maybe using `first`
```
df.groupby(df.columns,axis=1).first()
Out[35]:
A B
a 2.0 5.0
b 1.0 9.0
c 3.0 7.0
```
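To make the semantics explicit: collapsing duplicate column labels with `first` keeps the first non-null value per label. The same rule can be written out with nothing but the standard library (an illustration of the behavior, not pandas internals):

```python
columns = ["A", "A", "A", "B", "B", "B"]
data = {
    "a": [None, None, 2, None, None, 5],
    "b": [None, 1, None, 9, None, None],
    "c": [3, None, None, None, 7, None],
}

def collapse_first(row):
    """For each distinct label, keep the first non-null value encountered."""
    out = {}
    for label, value in zip(columns, row):
        if label not in out and value is not None:
            out[label] = value
    return out

result = {index: collapse_first(row) for index, row in data.items()}
print(result)
```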
Upvotes: 2 <issue_comment>username_3: One clean way is to use a list comprehension with `numpy.isfinite`:
```
import pandas as pd, numpy as np
arr = [list(filter(np.isfinite, x)) for x in df.values]
res = pd.DataFrame(arr, columns=['A', 'B'], index=['a', 'b', 'c'], dtype=int)
```
Result:
```
A B
a 2 5
b 1 9
c 3 7
```
Upvotes: 0 |
2018/03/22 | 772 | 2,218 | <issue_start>username_0: As in the title, I'm struggling with what I thought to be a trivial issue: disabling an option in an HTML select list when I click on a particular radio button.
My code does work, but when a user hits the browser back-button, my function doesn't get called anymore, so the previously disabled selector can now be chosen from the list. I'd prefer to avoid reloading the HTML page when I press the back-button using PHP, as advised in [this topic](https://stackoverflow.com/questions/3645609/reload-the-page-on-hitting-back-button).
This is my Javascript code:
```
$(function () {
$("#radioButton1").on("click", function () {
$("#selector3").prop("disabled", false);
});
$("#radioButton2").on("click", function () {
var selector = $("#selector3");
selector.prop("disabled", true);
selector.css("background", "#dddddd");
$("#orderByList").val("1");
});
});
```
2018/03/22 | 1,046 | 3,745 | <issue_start>username_0: I have the following function:-
```
uploadPhoto() {
var nativeElement: HTMLInputElement = this.fileInput.nativeElement;
this.photoService.upload(this.vehicleId, nativeElement.files[0])
.subscribe(x => console.log(x));
}
```
However, on `nativeElement.files[0]` I am getting a TypeScript error, "Object is possibly 'null'". Can anyone help me solve this issue?
I tried to declare `nativeElement` as a nullable value; however, I did not manage to succeed.
Thanks for your help and time.<issue_comment>username_1: If you are sure that there is a file in all cases, you need to make the compiler sure of it too.
```
(nativeElement.files as FileList)[0]
```
Upvotes: 3 <issue_comment>username_2: `files` is defined to be `FileList | null` so it can be `null`.
You should either check for `null` (using an `if`) or use a "Non-null assertion operator" (`!`) if you are sure it is not `null`:
```
if(nativeElement.files != null) {
this.photoService.upload(this.vehicleId, nativeElement.files[0])
.subscribe(x => console.log(x));
}
//OR
this.photoService.upload(this.vehicleId, nativeElement.files![0])
.subscribe(x => console.log(x));
```
**Note:**
The "Non-null assertion operator" will not perform any runtime checks, it just tells the compiler you have special information and you know `nativeElement.files` will not be `null` at runtime.
If `nativeElement.files` is `null` at runtime, it will generate an error. This is not the safe navigation operator of other languages.
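A related cross-language note: just as `!` performs no runtime check, Python's `typing.cast` is also purely static — the failure only surfaces when the value is actually used:

```python
from typing import cast

value: object = None
# cast, like a TypeScript assertion, does nothing at runtime
narrowed = cast(str, value)
assert narrowed is None      # still None after the "cast"
try:
    narrowed.upper()         # the crash only happens on use
except AttributeError as exc:
    print(type(exc).__name__)
```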
Upvotes: 8 [selected_answer]<issue_comment>username_3: TypeScript 3.7 got released in 11/2019. Now "**[Optional Chaining](https://www.typescriptlang.org/docs/handbook/release-notes/typescript-3-7.html#optional-chaining)**" is supported, this is the easiest and most secure way of working with potentially null-able values:
You simply write:
```
nativeElement?.file?.name
```
**Note the Question-Mark**! They check for null/undefined and only return the value, if none of the properties (chained with dots) is null/undefined.
Instead of
```
if(nativeElement!=null && nativeElement.file != null) {
....
}
```
But imagine something more complex like this: `crm.contract?.person?.address?.city?.latlang` — that would otherwise be a lot more verbose to check.
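Python has no built-in `?.`, but the same null-safe traversal can be mimicked with a tiny helper — shown only to illustrate what optional chaining saves you from writing (the classes below are made up to mirror the `crm.contract` example):

```python
from functools import reduce

def chain(obj, *attrs):
    """Walk attributes left to right, short-circuiting to None on the first miss."""
    return reduce(
        lambda current, attr: getattr(current, attr, None) if current is not None else None,
        attrs,
        obj,
    )

class City:
    latlang = (48.2, 16.4)

class Address:
    city = City()

class Person:
    address = Address()

class Contract:
    person = Person()

assert chain(Contract(), "person", "address", "city", "latlang") == (48.2, 16.4)
assert chain(Contract(), "person", "missing", "city") is None  # no exception raised
print("null-safe traversal works")
```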
Upvotes: 5 <issue_comment>username_4: Using the answer from username_3 which referenced optional chaining, I solved your problem by casting `nativeElement` to `HTMLInputElement` and then accessing the `0th` file by using `.item(0)` with the optional chaining operator `?.`
```
uploadPhoto() {
var nativeElement = this.fileInput.nativeElement as HTMLInputElement
this.photoService.upload(this.vehicleId, nativeElement?.files?.item(0))
.subscribe(x => console.log(x));
}
```
Upvotes: 2 <issue_comment>username_5: In addition to all the answers mentioned above, if you don't want strict null checks in your application, you can simply disable the `strictNullChecks` property in your `tsconfig.json` file.
```
{
  ...
  "compilerOptions": {
    "strictNullChecks": false,
    ...
  }
}
```
Upvotes: 4 <issue_comment>username_6: app.component.html
```
Enter your Name :
Enter name :{{data1}}
```
app.component.ts
```
data1:any = 'yash';
```
Upvotes: -1 <issue_comment>username_7: Thank you.
I had the same issue. In an older Ionic version everything was all right; after updating I got TS2531.
Now the function has two non-null assertion operators:
```
async saveToStorage(jsonMessage){
await this.storage.set(jsonMessage["key"],jsonMessage["value"]);
document.getElementsByTagName('iframe').item(0)!.contentWindow!.postMessage('{"type": "storage-save","result": "ok","key": "' + jsonMessage["key"] + '"}', "*");
}
```
Upvotes: 0 |
2018/03/22 | 284 | 1,034 | <issue_start>username_0: I'm running a [shell execute](http://ipython.readthedocs.io/en/stable/interactive/magics.html#magic-sx) command in a notebook (Google Collaboratory), and such command prompts me to give a yes/no answer. How can I respond to this prompt shell message within the notebook?<issue_comment>username_1: **Update**: Colab now supports input prompts, so you should see these immediately, e.g.,
[](https://i.stack.imgur.com/7hCnv.png)
**Old answer:** A typical pattern is to run the command without prompts, if it has such an option. For example, for installation using `apt`, provide the option `-y`.
If no such option exists, you can do something like piping yes to the program like so: `yes | programThatHasConfirmationPrompts`.
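The `yes |` idiom works because the prompting program just reads from stdin; the same effect can be driven from Python's `subprocess` module, sketched here with a stand-in child process (generic, not Colab-specific):

```python
import subprocess
import sys

# A stand-in for any program that asks for confirmation on stdin
child_code = (
    'answer = input("Proceed? [y/n] ");'
    'print("confirmed" if answer.strip().lower() == "y" else "aborted")'
)

# Equivalent of `yes | program`: supply the answer on the child's stdin
result = subprocess.run(
    [sys.executable, "-c", child_code],
    input="y\n",
    capture_output=True,
    text=True,
)
print(result.stdout.strip())
```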
Upvotes: 5 [selected_answer]<issue_comment>username_2: Colab input prompts did not pop out for my command but the pipelining worked, for example:
```
!yes | sh -c "$(curl -fsSL https://someURL...)"
```
Upvotes: 0 |
2018/03/22 | 440 | 1,750 | <issue_start>username_0: I was handed an accounting spreadsheet to recreate because of some broken references. I'm not really versed in Excel and am having trouble finding where certain declarations are coming from within the Excel document. I'm sure this is extremely simple, yet I apparently can't figure it out... In the following VBA modules within an excel document I see the following code:
```
Dim OriginalQi As Double, OriginalGOR_Initial As Double
OriginalQi = Range("Input_Qi")
OriginalGOR_Initial = Range("Input_GOR_Initial")
```
I am a programmer and understand the declarations of the OriginalQi and OriginalGOR\_Initial doubles. What I can't seem to find within Excel is where the "Input\_Qi" and "Input\_GOR\_Initial" strings are being declared and given values. I'm sure there is a menu somewhere in Excel that should display these given names and the values they contain, but I cannot find it. Can someone please point out to me where these references are stored within the Excel document. |