date | nb_tokens | text_size | content
---|---|---|---
2018/03/20 | 735 | 3,232 | <issue_start>username_0: I'm investigating using Azure IoT hub for communication between software clients and a backend. The software clients should act as devices in IoT hub parlance.
Is it possible to provision the software clients as devices with IoT Hub device provisioning service without each client having been given a unique identifier in a TPM module or a unique X.509 certificate?
I don't have a need for authenticating the clients before provisioning, I just want each client to be able to communicate securely with the IoT hub. After provisioning, the client must be uniquely identifiable. At that time, other clients should not be able to impersonate it.
At the moment, I believe that the client software needs to have an intermediate certificate embedded, which it can use to sign a certificate that it creates when it attempts to provision itself.
Are there other ways to let an IoT hub client-device provision itself without having been preprogrammed with a SAS token or other unique identifier?<issue_comment>username_1: I am afraid the answer is NO.
To connect to Azure IoT Hub, the device (software client) needs to select either security keys (related to tokens) or X.509 certificates as its authentication method.
As for the [Device Provisioning Service](https://learn.microsoft.com/en-us/azure/iot-dps/about-iot-dps), you need to select an X.509 or TPM-based identity to provision the device.
>
> Device Provisioning Service is a helper service for IoT Hub that
> enables zero-touch, just-in-time provisioning to the right IoT hub
> without requiring human intervention, allowing customers to provision
> millions of devices in a secure and scalable manner.
>
>
>
Both X.509 certificates and SAS tokens can be stored in a hardware security module (HSM), so you don't have to preprogram your device client with a SAS token or other unique identifier.
For a software client, you may be able to use a software-based TPM, but I am not sure whether your platform supports one. [There](https://learn.microsoft.com/en-us/windows/iot-core/secure-your-device/tpm#tpm-solutions-available-on-windows-iot-core) is a software TPM supported on Windows 10 IoT Core.
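For context on what a device-scoped SAS token actually is: it is derived from the device's symmetric key by signing the resource URI plus an expiry time with HMAC-SHA256. The sketch below follows the token format published in the Azure docs; the hub URI and the key are placeholder values, not real credentials:

```python
import base64
import hashlib
import hmac
import time
import urllib.parse

def generate_sas_token(resource_uri, device_key, ttl_seconds=3600):
    """Build an IoT Hub shared access signature for a device."""
    expiry = int(time.time()) + ttl_seconds
    encoded_uri = urllib.parse.quote_plus(resource_uri)
    # Sign "<url-encoded resource URI>\n<expiry>" with the base64-decoded key.
    to_sign = f"{encoded_uri}\n{expiry}".encode("utf-8")
    key = base64.b64decode(device_key)
    signature = base64.b64encode(
        hmac.new(key, to_sign, hashlib.sha256).digest()
    ).decode("utf-8")
    return (f"SharedAccessSignature sr={encoded_uri}"
            f"&sig={urllib.parse.quote_plus(signature)}&se={expiry}")

# Placeholder hub/device URI and key, for illustration only.
token = generate_sas_token(
    "myhub.azure-devices.net/devices/device-001",
    base64.b64encode(b"not-a-real-key").decode("utf-8"))
print(token[:40])
```

The point being: whatever secret the token is derived from still has to live somewhere on the device, which is exactly what the HSM/TPM discussion above is about.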
Upvotes: 0 <issue_comment>username_2: Use of an HSM is not required to use the Device Provisioning Service (source: I'm the PM for the service). If your clients already all have an intermediate certificate they can use to generate a leaf cert, and if all the intermediate certs for all your devices have a common signer, then you can create an enrollment group in the provisioning service using that common signing cert and allow all your devices to provision via that enrollment group.
The provisioning service doesn't care which entity is the ultimate root of trust for the certificates your device clients present, but you will have to go through a proof of possession step to show you have access to the private portion of the signing cert when using an enrollment group. By proving ownership of a signing cert in the chain (doesn't have to be the root, just one of the intermediates), you're proving that you have permission to generate leaf certificates for the devices that will be registering as a part of that enrollment group.
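As a concrete illustration of the enrollment-group flow described above, here is roughly how a client could mint its own leaf certificate from an embedded intermediate using OpenSSL. The file names and CNs are made up for the example, and for brevity the "intermediate" here is self-signed, standing in for one issued by a real PKI:

```shell
set -e
# Stand-in "intermediate" signing cert (self-signed here for the demo).
openssl req -x509 -newkey rsa:2048 -nodes -keyout intermediate.key \
  -out intermediate.pem -days 30 -subj "/CN=example-intermediate"
# Device key and CSR; the CN becomes the device's registration ID.
openssl req -newkey rsa:2048 -nodes -keyout device.key -out device.csr \
  -subj "/CN=my-device-001"
# Sign the CSR with the intermediate to mint the leaf the client presents.
openssl x509 -req -in device.csr -CA intermediate.pem -CAkey intermediate.key \
  -CAcreateserial -out device.pem -days 7
# Sanity check: the leaf chains back to the intermediate.
openssl verify -CAfile intermediate.pem device.pem
```

In the enrollment-group scenario, `intermediate.pem` (or one of its ancestors) is what you upload to the provisioning service and verify via the proof-of-possession step.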
Upvotes: 3 [selected_answer] |
2018/03/20 | 614 | 2,602 | <issue_start>username_0: I'm having an issue on an older bit of code with an asp:Datagrid. This Datagrid has sorting columns enabled and if the user clicks too quickly, before the page finishes reloading from the sort, an invalid viewstate error is thrown. I tried disabling viewstate on the control and then rebinding it in every page load but to no avail. Anyone know how I can fix this? |
2018/03/20 | 2,086 | 4,193 | <issue_start>username_0: My intention is to get the following validated using regex
```
10.10.*.1
10.10.0.1
10.10.255.1
10.10.10.*
10.10.10.0
10.10.10.255
```
In simple terms, a star can appear in the last two octets and the IP address would still be valid
My code is as follows:
```
function ValidateIPaddress(ipaddress) {
if (/^(25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)\.(25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)\.(25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]|\*?)\.(25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]|\*?)$/.test(ipaddress)) {
return (true)
}
return (false)
}
```
I am unable to set an or-condition in the last two octets. Please help.<issue_comment>username_1: In the last two octets, your pattern is missing an alternative that captures the single-digit case. Try:
```
^(25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)\.(25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)\.(25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]|[0-9]|\*?)\.(25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]|[0-9]|\*?)$
```
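One caveat with the pattern above: the trailing `\*?` makes the star *optional*, so an empty final octet such as `10.10.10.` also matches (`\*` without the `?` is stricter). Running a candidate pattern against a small table of expected results catches this kind of thing quickly; for example, in Node, using the regex from this answer:

```javascript
const re = /^(25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)\.(25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)\.(25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]|[0-9]|\*?)\.(25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]|[0-9]|\*?)$/;

const cases = [
  ["10.10.*.1", true],
  ["10.10.10.*", true],
  ["10.10.255.1", true],
  ["256.1.1.1", false],
  ["*.1.1.1", false],   // star is not allowed in the first two octets
  ["10.10.10.", true],  // slips through because \*? can match nothing
];

for (const [ip, expected] of cases) {
  const got = re.test(ip);
  console.log(`${ip}: ${got}${got === expected ? "" : "  <-- unexpected"}`);
}
```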
Upvotes: 2 [selected_answer]<issue_comment>username_2: ### Method 1 - regex simplified
No need to overcomplicate the regex. Just ensure that the string conforms with the general format (the following regex) and then test that each octet is `< 256` as the code snippet below shows.
```
^\d{1,3}\.\d{1,3}(?:\.(?:\d{1,3}|\*)){2}$
```
* `^` Assert position at the start of the line
* `\d{1,3}` Match a digit 1 to 3 times
* `\.` Match `.` literally
* `\d{1,3}` Match a digit 1 to 3 times
* `(?:\.(?:\d{1,3}|\*)){2}` Match the following twice
+ `\.` Match `.` literally
+ `(?:\d{1,3}|\*)` Match either a digit 1 to 3 times or `*` literally\
* `$` Assert position at the end of the line
```js
var a = [
// valid
"10.10.*.1",
"10.10.0.1",
"10.10.255.1",
"10.10.10.*",
"10.10.10.0",
"10.10.10.255",
// invalid
"256.1.1.1",
"*.1.1.1",
"1.1.1",
"1.1.1.1.1"
]
var r = /^\d{1,3}\.\d{1,3}(?:\.(?:\d{1,3}|\*)){2}$/
a.forEach(function(ip) {
console.log(`${ip}: ${r.test(ip) && ip.split('.').every(function(x) { return Number(x) < 256 || x === '*' })}`)
})
```
---
### Method 2 - no regex
Alternatively, without even using regex:
```js
var a = [
// valid
"10.10.*.1",
"10.10.0.1",
"10.10.255.1",
"10.10.10.*",
"10.10.10.0",
"10.10.10.255",
// invalid
"256.1.1.1",
"*.1.1.1",
"1.1.1",
"1.1.1.1.1"
]
a.forEach(function(ip) {
var octets = ip.split('.'),
valid = false
if(octets.length === 4) {
if(Number(octets[0]) < 256
&& Number(octets[1]) < 256
&& (Number(octets[2]) < 256 || octets[2] === '*')
&& (Number(octets[3]) < 256 || octets[3] === '*')
)
valid = true
}
console.log(`${ip}: ${valid}`)
})
```
---
### Method 3 - single regex
This is the most bloated method, but it seems that's what you're looking for.
```
^(?:25[0-5]|2[0-4]\d|[01]?\d{1,2})\.(?:25[0-5]|2[0-4]\d|[01]?\d{1,2})(?:\.(?:25[0-5]|2[0-4]\d|[01]?\d{1,2}|\*)){2}$
```
```js
var a = [
// valid
"10.10.*.1",
"10.10.0.1",
"10.10.255.1",
"10.10.10.*",
"10.10.10.0",
"10.10.10.255",
// invalid
"256.1.1.1",
"*.1.1.1",
"1.1.1",
"1.1.1.1.1"
]
var r = /^(?:25[0-5]|2[0-4]\d|[01]?\d{1,2})\.(?:25[0-5]|2[0-4]\d|[01]?\d{1,2})(?:\.(?:25[0-5]|2[0-4]\d|[01]?\d{1,2}|\*)){2}$/
a.forEach(function(ip) {
console.log(`${ip}: ${r.test(ip)}`)
})
```
Upvotes: 2 <issue_comment>username_3: I think this should do it.
```js
let rx = /^(25[0-5]|2[0-4][0-9]|1[0-9][0-9]|[1-9]?[0-9])\.(25[0-5]|2[0-4][0-9]|1[0-9][0-9]|[1-9]?[0-9])\.(25[0-5]|2[0-4][0-9]|1[0-9][0-9]|[1-9]?[0-9]|\*)\.(25[0-5]|2[0-4][0-9]|1[0-9][0-9]|[1-9]?[0-9]|\*)$/;
const test = str => rx.test(str)
console.log("these should pass")
console.log(test("10.10.*.1"))
console.log(test("10.10.0.1"))
console.log(test("10.10.255.1"))
console.log(test("10.10.10.*"))
console.log(test("10.10.10.0"))
console.log(test("10.10.10.255"))
console.log("------------------------------")
console.log("these should fail")
console.log(test("10.267.0.1"))
console.log(test("10.10.0.1.0.1"))
console.log(test("10.*.0.1"))
console.log(test("*.42.0.1"))
```
But, as @username_2 points out, this might be better with less regex.
Upvotes: 1 |
2018/03/20 | 3,730 | 14,642 | <issue_start>username_0: I'm submitting a series of `select` statements (queries - thousands of them) to a single database synchronously and getting back one `DataTable` per query (Note: This program is such that it has knowledge of the DB schema it is scanning only at run time, hence the use of `DataTables`). The program runs on a client machine and connects to DBs on a remote machine. It takes a long time to run so many queries. So, assuming that executing them async or in parallel will speed things up, I'm exploring `TPL Dataflow (TDF)`. I want to use the `TDF` library because it seems to handle all of the concerns related to writing multi-threaded code that would otherwise need to be done by hand.
The code shown is based on <http://blog.i3arnon.com/2016/05/23/tpl-dataflow/>. It's minimal and is just to help me understand the basic operations of `TDF`. Please do know I've read many blogs and coded many iterations trying to crack this nut.
Nonetheless, with this current iteration, I have one problem and a question:
**Problem**
The code is inside a `button click` method (Using a UI, a user selects a machine, a sql instance, and a database, and then kicks off the scan). The two lines with the `await` operator return an error at build time: `The 'await' operator can only be used within an async method. Consider marking this method with the 'async' modifier and changing its return type to 'Task'`. I can't change the return type of the button click method. Do I need to somehow isolate the `button click` method from the `async-await` code?
**Question**
Although I've found plenty of write-ups describing the basics of `TDF`, I can't find an example of how to get my hands on the output that each invocation of the `TransformBlock` produces (i.e., a `DataTable`). Although I want to submit the queries `async`, I do need to block until all queries submitted to the `TransformBlock` are completed. How do I get my hands on the series of `DataTable`s produced by the `TransformBlock`, and how do I block until all queries are complete?
*Note: I acknowledge that I have only one block now. At a minimum, I'll be adding a cancellation block and so do need/want to use TPL.*
```
private async Task ToolStripButtonStart_Click(object sender, EventArgs e)
{
UserInput userInput = new UserInput
{
MachineName = "gat-admin",
InstanceName = "",
DbName = "AdventureWorks2014",
};
DataAccessLayer dataAccessLayer = new DataAccessLayer(userInput.MachineName, userInput.InstanceName);
//CreateTableQueryList gets a list of all tables from the DB and returns a list of
// select statements, one per table, e.g., SELECT * from [schemaname].[tablename]
IList<string> tableQueryList = CreateTableQueryList(userInput);
// Define a block that accepts a select statement and returns a DataTable of results
// where each returned record is: schemaname + tablename + columnname + column datatype + field data
// e.g., if the select query returns one record with 5 columns, then a datatable with 5
// records (one per field) will come back
var transformBlock_SubmitTableQuery = new TransformBlock<string, DataTable>(
async tableQuery => await dataAccessLayer._SubmitSelectStatement(tableQuery),
new ExecutionDataflowBlockOptions
{
MaxDegreeOfParallelism = 2,
});
// Add items to the block and start processing
foreach (String tableQuery in tableQueryList)
{
await transformBlock_SubmitTableQuery.SendAsync(tableQuery);
}
// Enable the Cancel button and disable the Start button.
toolStripButtonStart.Enabled = false;
toolStripButtonStop.Enabled = true;
//shut down the block (no more inputs or outputs)
transformBlock_SubmitTableQuery.Complete();
//await the completion of the task that produces the output DataTable
await transformBlock_SubmitTableQuery.Completion;
}
public async Task<DataTable> _SubmitSelectStatement(string queryString)
{
try
{
.
.
await Task.Run(() => sqlDataAdapter.Fill(dt));
// process dt into the output DataTable I need
return outputDt;
}
catch
{
throw;
}
}
```<issue_comment>username_1: As it turns out, to meet my requirements, `TPL Dataflow` is a bit overkill. I was able to meet my requirements using `async/await` and `Task.WhenAll`. I used the Microsoft How-To [How to: Extend the async Walkthrough by Using Task.WhenAll (C#)](https://learn.microsoft.com/en-us/dotnet/csharp/programming-guide/concepts/async/how-to-extend-the-async-walkthrough-by-using-task-whenall) as a model.
**Regarding my "Problem"**
My "problem" is not a problem. An event method signature (in my case, a "Start" button click method that initiates my search) can be modified to be `async`. In the Microsoft How-To `GetURLContentsAsync` solution, see the `startButton_Click` method signature:
```
private async void startButton_Click(object sender, RoutedEventArgs e)
{
.
.
}
```
**Regarding my question**
Using `Task.WhenAll`, I can wait for all my queries to finish and then process all the outputs for use in my UI. In the Microsoft How-To `GetURLContentsAsync` solution, see the `SumPageSizesAsync` method; the array of int named `lengths` holds the outputs of all the tasks.
```
private async Task SumPageSizesAsync()
{
.
.
// Create a query.
IEnumerable<Task<int>> downloadTasksQuery = from url in urlList select ProcessURLAsync(url);
// Use ToArray to execute the query and start the download tasks.
Task<int>[] downloadTasks = downloadTasksQuery.ToArray();
// Await the completion of all the running tasks.
Task whenAllTask = Task.WhenAll(downloadTasks);
int[] lengths = await whenAllTask;
.
.
}
```
Upvotes: 1 <issue_comment>username_2: The cleanest way to retrieve the output of a `TransformBlock` is to perform a nested loop using the methods [`OutputAvailableAsync`](https://learn.microsoft.com/en-us/dotnet/api/system.threading.tasks.dataflow.dataflowblock.outputavailableasync) and [`TryReceive`](https://learn.microsoft.com/en-us/dotnet/api/system.threading.tasks.dataflow.dataflowblock.tryreceive). It is a bit verbose, so you could consider encapsulating this functionality in an extension method `ToListAsync`:
```
public static async Task<List<T>> ToListAsync<T>(this IReceivableSourceBlock<T> source,
CancellationToken cancellationToken = default)
{
ArgumentNullException.ThrowIfNull(source);
List<T> list = new();
while (await source.OutputAvailableAsync(cancellationToken).ConfigureAwait(false))
{
while (source.TryReceive(out T item))
{
list.Add(item);
}
}
Debug.Assert(source.Completion.IsCompleted);
await source.Completion.ConfigureAwait(false); // Propagate possible exception
return list;
}
```
Then you could use the `ToListAsync` method like this:
```
private async Task ToolStripButtonStart_Click(object sender, EventArgs e)
{
TransformBlock<string, DataTable> transformBlock = new(async query => //...
//...
transformBlock.Complete();
foreach (DataTable dataTable in await transformBlock.ToListAsync())
{
// Do something with each dataTable
}
}
```
**Note:** this `ToListAsync` implementation is destructive, meaning that in case of an error the consumed messages are discarded. To make it non-destructive, just remove the `await source.Completion` line. In this case you'll have to remember to `await` the `Completion` of the block after processing the list with the consumed messages, otherwise you won't be aware if the `TransformBlock` failed to process all of its input.
Alternative ways to retrieve the output of a dataflow block do exist, for example [this one](https://stackoverflow.com/questions/22492383/throttling-asynchronous-tasks/22492731#22492731 "Throttling asynchronous tasks") by dcastro uses a `BufferBlock` as a buffer and is slightly more performant, but personally I find the approach above to be safer and more straightforward.
Instead of waiting for the completion of the block before retrieving the output, you could also retrieve it in a streaming manner, as an [`IAsyncEnumerable`](https://learn.microsoft.com/en-us/archive/msdn-magazine/2019/november/csharp-iterating-with-async-enumerables-in-csharp-8 "Iterating with Async Enumerables in C# 8") sequence:
```
public static async IAsyncEnumerable<T> ToAsyncEnumerable<T>(
this IReceivableSourceBlock<T> source,
[EnumeratorCancellation] CancellationToken cancellationToken = default)
{
ArgumentNullException.ThrowIfNull(source);
while (await source.OutputAvailableAsync(cancellationToken).ConfigureAwait(false))
{
while (source.TryReceive(out T item))
{
yield return item;
cancellationToken.ThrowIfCancellationRequested();
}
}
Debug.Assert(source.Completion.IsCompleted);
await source.Completion.ConfigureAwait(false); // Propagate possible exception
}
```
This way you will be able to get your hands to each `DataTable` immediately after it has been cooked, without having to wait for the processing of all queries. To consume an `IAsyncEnumerable` you simply move the `await` before the `foreach`:
```
await foreach (DataTable dataTable in transformBlock.ToAsyncEnumerable())
{
// Do something with each dataTable
}
```
---
**Advanced:** Below is a more sophisticated version of the `ToListAsync` method, which propagates all the errors of the underlying block in the same direct way that methods like `Task.WhenAll` and `Parallel.ForEachAsync` do. The original simple `ToListAsync` method wraps the errors in a nested `AggregateException`, using the `Wait` technique that is shown in [this](https://stackoverflow.com/questions/18314961/i-want-await-to-throw-aggregateexception-not-just-the-first-exception/55664013#55664013 "I want await to throw AggregateException, not just the first Exception") answer.
```
/// <summary>
/// Asynchronously waits for the successful completion of the specified source, and
/// returns all the received messages. In case the source completes with error,
/// the error is propagated and the received messages are discarded.
/// </summary>
public static Task<List<T>> ToListAsync<T>(this IReceivableSourceBlock<T> source,
CancellationToken cancellationToken = default)
{
ArgumentNullException.ThrowIfNull(source);
async Task<List<T>> Implementation()
{
List<T> list = new();
while (await source.OutputAvailableAsync(cancellationToken)
.ConfigureAwait(false))
while (source.TryReceive(out T item))
list.Add(item);
await source.Completion.ConfigureAwait(false);
return list;
}
return Implementation().ContinueWith(t =>
{
if (t.IsCanceled) return t;
Debug.Assert(source.Completion.IsCompleted);
if (source.Completion.IsFaulted)
{
TaskCompletionSource<List<T>> tcs = new();
tcs.SetException(source.Completion.Exception.InnerExceptions);
return tcs.Task;
}
return t;
}, default, TaskContinuationOptions.DenyChildAttach |
TaskContinuationOptions.ExecuteSynchronously, TaskScheduler.Default).Unwrap();
}
```
---
**.NET 6 update:** A new API [`DataflowBlock.ReceiveAllAsync`](https://learn.microsoft.com/en-us/dotnet/api/system.threading.tasks.dataflow.dataflowblock.receiveallasync) was introduced in .NET 6, with this signature:
```
public static IAsyncEnumerable<TOutput> ReceiveAllAsync<TOutput> (
this IReceivableSourceBlock<TOutput> source,
CancellationToken cancellationToken = default);
```
It is similar to the aforementioned `ToAsyncEnumerable` method. The important difference is that the new API **does not** propagate the possible exception of the consumed `source` block after propagating all of its messages. This behavior is not consistent with the analogous API [`ReadAllAsync`](https://learn.microsoft.com/en-us/dotnet/api/system.threading.channels.channelreader-1.readallasync) from the Channels library. I have reported this inconsistency on GitHub, and [the issue](https://github.com/dotnet/runtime/issues/79535 "ReceiveAllAsync, ReadAllAsync and propagation of errors") is currently labeled by Microsoft as a [bug](https://github.com/dotnet/runtime/labels/bug).
Upvotes: 3 [selected_answer]<issue_comment>username_3: Using Dataflow blocks properly results in both cleaner and faster code. Dataflow blocks aren't agents or tasks. They're meant to work in a pipeline of blocks, connected with `LinkTo` calls, not manual coding.
It seems the scenario is to download some data, eg some CSVs, parse them and insert them to a database. Each of those steps can go into its own block:
* a Downloader with a DOP>1, to allow multiple downloads run concurrently without flooding the network.
* a Parser that converts the files into arrays of objects
* an Importer that uses SqlBulkCopy to bulk insert the rows into the database in the fastest way possible, using minimal logging.
```
var downloadDOP=8;
var parseDOP=2;
var tableName="SomeTable";
var linkOptions=new DataflowLinkOptions { PropagateCompletion = true};
var downloadOptions =new ExecutionDataflowBlockOptions {
MaxDegreeOfParallelism = downloadDOP,
};
var parseOptions =new ExecutionDataflowBlockOptions {
MaxDegreeOfParallelism = parseDOP,
};
```
With these options, we can construct a pipeline of blocks
```
//HttpClient is thread-safe and reusable
HttpClient httpClient=new HttpClient(...);
var downloader=new TransformBlock<(Uri,string),FileInfo>(async input=>{
    var (uri,path)=input;
    var file=new FileInfo(path);
    using var stream =await httpClient.GetStreamAsync(uri);
    using var fileStream=file.Create();
    await stream.CopyToAsync(fileStream);
    return file;
},downloadOptions);
//SomeRecord stands in for whatever type maps one CSV row
var parser=new TransformBlock<FileInfo,List<SomeRecord>>(async file=>{
    using var reader = file.OpenText();
    using var csv = new CsvReader(reader, CultureInfo.InvariantCulture);
    var records = csv.GetRecords<SomeRecord>().ToList();
    return records;
},parseOptions);
var importer=new ActionBlock<List<SomeRecord>>(async recs=>{
    using var bcp=new SqlBulkCopy(connectionString, SqlBulkCopyOptions.TableLock);
    bcp.DestinationTableName=tableName;
    //Map columns if needed
    ...
    using var reader=ObjectReader.Create(recs);
    await bcp.WriteToServerAsync(reader);
});
downloader.LinkTo(parser,linkOptions);
parser.LinkTo(importer,linkOptions);
```
Once the pipeline is complete, you can start posting URIs to the head block and await until the tail block completes:
```
IEnumerable<(Uri,string)> filesToDownload = ...
foreach(var pair in filesToDownload)
{
await downloader.SendAsync(pair);
}
downloader.Complete();
await importer.Completion;
```
The code uses [CsvHelper](https://github.com/JoshClose/CsvHelper) to parse the CSV file and [FastMember's ObjectReader](https://github.com/mgravell/fast-member#ever-needed-an-idatareader) to create an IDataReader wrapper over the CSV records.
In each block you can use a [Progress](https://blog.stephencleary.com/2012/02/reporting-progress-from-async-tasks.html) instance to update the UI based on the pipeline's progress
Upvotes: 0 |
2018/03/20 | 1,374 | 4,922 | <issue_start>username_0: Im currently working on a `Rails 4.2.6` and with `RSpec 3.7` version. When I run my test I get the following error:
```
undefined method `build' for #
```
What is triggering this error is the following code.
```
require 'rails_helper'

RSpec.describe User, "name" do
#setup
it "returns the email" do
#build
user = build(:user, email: "<EMAIL>")
# excercise and verify
expect(user.email).to eq "<EMAIL>"
end
end
```
I'm using `build` instead of `create` because I don't want to persist data to the database. I am, however, using `factory_bot_rails`, so I should have access to this method.
Here is my Gemfile:
```
group :development, :test do
gem 'rspec'
gem 'rspec-rails'
gem 'factory_bot_rails'
gem 'byebug'
gem 'pry'
gem 'pry-nav'
gem 'pry-stack_explorer'
end
group :test do
gem "capybara"
gem "selenium-webdriver"
end
```
spec_helper.rb
```
# This file is copied to spec/ when you run 'rails generate rspec:install'
ENV['RAILS_ENV'] ||= 'test'
require File.expand_path('../../config/environment', __FILE__)
require 'spec_helper'
require "rspec/rails"
require "capybara/rspec"
# Prevent database truncation if the environment is production
abort("The Rails environment is running in production mode!") if Rails.env.production?
# Add additional requires below this line. Rails is not loaded until this point!
# Requires supporting ruby files with custom matchers and macros, etc, in
# spec/support/ and its subdirectories. Files matching `spec/**/*_spec.rb` are
# run as spec files by default. This means that files in spec/support that end
# in _spec.rb will both be required and run as specs, causing the specs to be
# run twice. It is recommended that you do not name files matching this glob to
# end with _spec.rb. You can configure this pattern with the --pattern
# option on the command line or in ~/.rspec, .rspec or `.rspec-local`.
#
# The following line is provided for convenience purposes. It has the downside
# of increasing the boot-up time by auto-requiring all files in the support
# directory. Alternatively, in the individual `*_spec.rb` files, manually
# require only the support files necessary.
#
# Dir[Rails.root.join('spec/support/**/*.rb')].each { |f| require f }
# Checks for pending migrations and applies them before tests are run.
# If you are not using ActiveRecord, you can remove this line.
ActiveRecord::Migration.maintain_test_schema!
Capybara.register_driver :selenium_chrome do |app|
Capybara::Selenium::Driver.new(app, browser: :chrome)
end
Capybara.javascript_driver = :selenium_chrome
RSpec.configure do |config|
# Remove this line if you're not using ActiveRecord or ActiveRecord fixtures
config.fixture_path = "#{::Rails.root}/spec/fixtures"
# If you're not using ActiveRecord, or you'd prefer not to run each of your
# examples within a transaction, remove the following line or assign false
# instead of true.
config.use_transactional_fixtures = false
# RSpec Rails can automatically mix in different behaviours to your tests
# based on their file location, for example enabling you to call `get` and
# `post` in specs under `spec/controllers`.
#
# You can disable this behaviour by removing the line below, and instead
# explicitly tag your specs with their type, e.g.:
#
# RSpec.describe UsersController, :type => :controller do
# # ...
# end
#
# The different available types are documented in the features, such as in
# https://relishapp.com/rspec/rspec-rails/docs
config.infer_spec_type_from_file_location!
# Filter lines from Rails gems in backtraces.
config.filter_rails_from_backtrace!
# arbitrary gems may also be filtered via:
# config.filter_gems_from_backtrace("gem name")
# config.before(:suite) do
# DatabaseCleaner.clean_with(:truncation)
# end
#
# config.before(:each) do
# DatabaseCleaner.strategy = :transaction
# end
#
# config.before(:each, js: true) do
# DatabaseCleaner.strategy = :truncation
# end
#
# # This block must be here, do not combine with the other `before(:each)` block.
# # This makes it so Capybara can see the database.
# config.before(:each) do
# DatabaseCleaner.start
# end
#
# config.after(:each) do
# DatabaseCleaner.clean
# end
end
```
How can I fix this issue, or should I use `create` instead?<issue_comment>username_1: The `build` method is part of `FactoryGirl` or `FactoryBot` namespace
Why don't you try
```
FactoryBot.build(:user, :email => '<EMAIL>')
```
Upvotes: 2 <issue_comment>username_2: After adding the gem:
1. Create the file spec/support/factory_bot.rb
2. Add to factory_bot.rb:
```
RSpec.configure do |config|
  config.include FactoryBot::Syntax::Methods
end
```
3. Add to rails_helper.rb:
```
require 'support/factory_bot'
```
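For intuition, `config.include` works like ordinary Ruby module inclusion: it mixes the module's instance methods (here `build`, `create`, etc.) into every example group, which is why the bare `build(:user, ...)` call starts resolving. A stripped-down analogy in plain Ruby, where `FakeFactory` is a made-up stand-in for `FactoryBot::Syntax::Methods` (no gems involved):

```ruby
module FakeFactory
  # Stand-in for FactoryBot.build: return a plain struct-like object.
  def build(_name, attrs = {})
    Struct.new(*attrs.keys).new(*attrs.values)
  end
end

class ExampleGroup
  include FakeFactory # roughly what RSpec's config.include does per group
end

user = ExampleGroup.new.build(:user, email: "user@example.com")
puts user.email # => user@example.com
```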
Upvotes: 5 [selected_answer] |
2018/03/20 | 856 | 3,562 | <issue_start>username_0: [](https://i.stack.imgur.com/2iQK4.gif)
We have this camera array arranged in an arc around a person (red dot). Think The Matrix - each camera fires at the same time and then we create an animated gif from the output. The problem is that it is near impossible to align the cameras exactly and so I am looking for a way in OpenCV to align the images better and make it smoother.
Looking for general steps. I'm unsure of the order I would do it. If I start with image 1 and match 2 to it, then 2 is further from three than it was at the start. And so matching 3 to 2 would be more change... and the error would propagate. I have seen similar alignments done though. Any help much appreciated.<issue_comment>username_1: Here's a thought. How about performing a quick and very simple "calibration" of the imaging system by using a single reference point?
The best thing about this is you can try it out pretty quickly and even if results are too bad for you, they can give you some more insight into the problem. But the bad thing is it may just not be good enough because it's hard to think of anything "less advanced" than this. Here's the description:
* Remove the object from the scene
* Place a small object (let's call it a "dot") at a position that roughly corresponds to the center of mass of the object you are about to record (the center of the area denoted by the red circle).
* Record a single image with each camera
* Use some simple algorithm to find the position of the dot on every image
* Compute distances from dot positions to image centers on every image
* Shift images by (-x, -y), where (x, y) is the above mentioned distance; after that, the dot should be located in the center of every image.
When recording an actual object, use these precomputed distances to shift all images. After you translate the images, they will be roughly aligned. But since you are shooting an object that is three-dimensional and has considerable size, I am not sure whether the alignment will be very convincing ... I wonder what results you'd get, actually.
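The dot-based procedure above can be sketched in a few lines of Python with NumPy. This is a simplification: the "dot" is found by a brightness threshold, and the shift wraps around via `np.roll`, where a real pipeline would use a proper translation warp with padding or cropping:

```python
import numpy as np

def align_to_center(img, dot_threshold=200):
    # Find the "dot": centroid of pixels above a brightness threshold.
    ys, xs = np.nonzero(img > dot_threshold)
    dot_y, dot_x = ys.mean(), xs.mean()
    # Offset from the dot to the image center.
    h, w = img.shape
    dy = int(round((h - 1) / 2 - dot_y))
    dx = int(round((w - 1) / 2 - dot_x))
    # Shift the image so the dot lands on the center pixel.
    return np.roll(np.roll(img, dy, axis=0), dx, axis=1), (dy, dx)

# Demo: a dark 11x11 frame with a bright dot off-center at (2, 3).
frame = np.zeros((11, 11), dtype=np.uint8)
frame[2, 3] = 255
aligned, (dy, dx) = align_to_center(frame)
print(dy, dx, aligned[5, 5])  # 3 2 255 -- the dot is now centered
```

In practice you would compute `(dy, dx)` once per camera during calibration and reapply the same shift to every frame that camera records.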
Upvotes: 2 <issue_comment>username_2: If I understand the application correctly, you should be able to obtain the relative pose of each camera in your array using homographies:
<https://docs.opencv.org/3.4.0/d9/dab/tutorial_homography.html>
From here, the next step would be to correct for alignment issues by estimating the transform between each camera's actual position and their 'ideal' position in the array. These ideal positions could be computed relative to a single camera, or relative to the focus point of the array (which may help simplify calculation). For each image, applying this corrective transform will result in an image that 'looks like' it was taken from the 'ideal' position.
Note that you may need to estimate relative camera pose in 3-4 array 'sections', as it looks like you have a full 180deg array (e.g. estimate homographies for 4-5 cameras at a time). As long as you have some overlap between sections it should work out.
Most of my experience with this sort of thing comes from using MATLAB's stereo camera calibrator app and related functions. Their help page gives a good overview of how to get started estimating camera pose. OpenCV has similar functionality.
<https://www.mathworks.com/help/vision/ug/stereo-camera-calibrator-app.html>
The cited paper by Zhang gives a great description of the mathematics of pose estimation from correspondence, if you're interested.
Upvotes: 2 |
2018/03/20 | 914 | 3,183 | <issue_start>username_0: I was doing some homework and I'm kinda stumped. part of my assignment is that I need to have a If statement that checks if a number that was entered is 16 characters long or not, this is the code I have so far:
```
#the input
CreditCardNum = input("Input a credit card number(no spaces/hyphens): ")
#The if statements
if str(CreditCardNum) != len(16):
    print("This is not a valid number, make sure the number is 16 characters.")
elif str(CreditCardNum) == len(16):
    if str(CreditCardNum[0:]) == 4:
        print("The Card is a Visa")
    elif str(CreditCardNum[0:]) == 5:
        print("The Card is a Master Card")
    elif str(CreditCardNum[0:]) == 6:
        print("The Card is a Discover Card.")
    else:
        print("The brand could not be determined.")
```<issue_comment>username_1: This is the logic I believe you are looking for.
If card length is 16, it checks for the first character to determine which type.
```
CreditCardNum = input("Input a credit card number(no spaces/hyphens): ")
n = len(CreditCardNum)
if n != 16:
    print("This is not a valid number, make sure the number is 16 characters.")
else:
    x = CreditCardNum[0]
    if x == '4':
        print("The Card is a Visa")
    elif x == '5':
        print("The Card is a Master Card")
    elif x == '6':
        print("The Card is a Discover Card.")
    else:
        print("The brand could not be determined.")
```
**Explanation**
* Use `n = len(CreditCardNum)` to store in the variable `n` the number of characters of the input string; likewise, `x = CreditCardNum[0]` stores the first character of the input.
* `len(16)` makes no logical sense. You want to compare `n`, which is an integer, to another integer.
* To extract first letter of a string, simply do `mystr[0]`.
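A quick interactive check of those points (the 16 digits are a made-up example number):

```python
s = "4123456789012345"  # a made-up 16-digit example
print(len(s))       # 16  -- an int, to be compared with == 16, never len(16)
print(s[0])         # '4' -- the first character, a one-character string
print(s[0:] == s)   # True -- s[0:] is the WHOLE string, which is why the
                    # original comparison str(s[0:]) == 4 can never be true
```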
Upvotes: 2 [selected_answer]<issue_comment>username_2: You can maybe try something like this:
```
#the input
CreditCardNum = input("Input a credit card number(no spaces/hyphens): ")
#The if statements
if len(str(CreditCardNum)) != 16:
    print("This is not a valid number, make sure the number is 16 characters.")
elif len(str(CreditCardNum)) == 16:
    if str(CreditCardNum[0]) == '4':
        print("The Card is a Visa")
    elif str(CreditCardNum[0]) == '5':
        print("The Card is a Master Card")
    elif str(CreditCardNum[0]) == '6':
        print("The Card is a Discover Card.")
    else:
        print("The brand could not be determined.")
```
I am not sure what you are trying to do in the conditional statements within the outer `elif`, but I am assuming you are trying to get the first character of your `CreditCardNum`?
Upvotes: 0 <issue_comment>username_3: Python doesn't have a switch statement, so you can use either `if`/`elif` chains or a `dictionary`.
Your case is definitely a good fit for a dictionary.
```
card_dict = {
    '4': "Visa",
    '5': "Master card",
    '6': "Discover card"
}

CreditCardNum = input("Input a credit card number (no spaces/hyphens): ")
n = len(CreditCardNum)
if n != 16:
    print("This is not a valid number, make sure the number is 16 characters.")
elif CreditCardNum[0] in card_dict:
    print("The Card is a {}".format(card_dict[CreditCardNum[0]]))
else:
    print("The brand could not be determined")
```
Upvotes: 1 |
2018/03/20 | 821 | 2,919 | <issue_start>username_0: I'm trying to develop a script that interacts with `salesforcedx` and `bamboo`. I want to write a simple Python script that runs each CLI command and checks its exit code after each call. For example:
```
import os
path = "/var/Atlassian/bamboo/bamboo-home/xml-data/build-dir/SAL-SC-JOB1"
auth = "sfdx force:auth:jwt:grant --clientid clientidexample --jwtkeyfile /root/server.key --username <EMAIL> --setalias Alias --setdefaultdevhubusername; echo $?"
os.chdir(path)
os.system(auth)
```
I get a result like this
```
Successfully authorized <EMAIL> with org ID 234582038957
0<< exit code 0 or could 1 or 100
```
I want to be able to run an IF statement (if possible) to stop the script if any exit code other than 0 pops up. Keep in mind my script will be making several calls using Salesforce CLI commands, which should hopefully all result in 0; however, just in case one of the many calls fails, I need some means of stopping the script. Any advice or help is greatly appreciated!<issue_comment>username_1: ```
import subprocess
path = "/var/Atlassian/bamboo/bamboo-home/xml-data/build-dir/SAL-SC-JOB1"
users = {
    '<EMAIL>': 'Alias',
    '<EMAIL>': 'Other Alias'
}
for username, alias in users.items():  # iteritems() is Python 2 only
    auth = ['sfdx', 'force:auth:jwt:grant',
            '--clientid', 'clientidexample',
            '--jwtkeyfile', '/root/server.key',
            '--username', username,
            '--setalias', alias,
            '--setdefaultdevhubusername']
    status = subprocess.call(auth, cwd=path)
    if status != 0:
        print("Argument list %r failed with exit status %r" % (auth, status))
```
...will automatically stop on any nonzero exit code. If you didn't want to do the comparison yourself, you could use `subprocess.check_call()` and rely on a CalledProcessError being thrown.
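For illustration, here is the `check_call` variant in isolation; the failing command is simulated with a small Python one-liner instead of a real `sfdx` call:

```python
import subprocess
import sys

# Stand-in for a failing sfdx invocation: a child process exiting with status 3.
cmd = [sys.executable, "-c", "raise SystemExit(3)"]
try:
    subprocess.check_call(cmd)
except subprocess.CalledProcessError as err:
    print("stopping: command failed with exit status", err.returncode)  # prints 3
```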
Community Wiki because this is duplicative of many, *many* questions on the subject already.
Upvotes: 2 [selected_answer]<issue_comment>username_2: This is my final code based on advice from here and some other articles.
```
#!/usr/bin/python3
import subprocess
import os
import sys
path = "/var/Atlassian/bamboo/bamboo-home/xml-data/build-dir/SAL-SC-JOB1"
sfdx = 'sfdx'
auth_key= (os.environ['AUTH_KEY']) ### environment variable
def auth():
    username = "<EMAIL>"
    alias = "somealias"
    key = "server.key"
    command = "force:auth:jwt:grant"
    auth = [sfdx, command,
            '--clientid', auth_key,
            '--jwtkeyfile', key,
            '--username', username,
            '--setalias', alias,
            '--setdefaultdevhubusername']
    status = subprocess.call(auth, cwd=path)
    if status != 0:
        raise ValueError("Argument list %r failed with exit status %r" % (auth, status))
    elif status == 0:
        print("auth passed with exit code %r" % (status))

auth()
```
Upvotes: 0 |
2018/03/20 | 1,325 | 5,562 | <issue_start>username_0: Inside my app I have several things that get enabled in the completion handler below but for simplicity purposes I just used one button for this example. I only say this because someone will look at the example and say since there is only 1 button to make things simple use the 1st option which would make sense for 1 button but not several. Also this question can pertain to anything that runs on a different thread then then main thread and not a CaptureSesion.
I have an AVFoundation CaptureSession completion handler that is on a different thread then the main queue. When it runs it updates a button to `.isEnabled = true`.
I can either
•1. update the function on the main queue directly:
```
... completionHandler{(
DispatchQueue.main.async { [weak self] in
self?.recordButton.isEnabled = true
}
)}
```
•2. put the button inside a function and then update that function on the main queue:
```
... completionHandler{(
DispatchQueue.main.async { [weak self] in
self?.enableRecordButton()
}
)}
func enableRecordButton(){
recordButton.isEnabled = true
}
```
•3. update both the function and what's inside the function on the main queue:
```
... completionHandler{(
DispatchQueue.main.async { [weak self] in
self?.enableRecordButton()
}
)}
func enableRecordButton(){
DispatchQueue.main.async { [weak self] in
self?.recordButton.isEnabled = true
}
}
```
**What's the difference between the 3?**<issue_comment>username_1: As a rule you should update the UI components only on main thread. Hence
```
DispatchQueue.main.async { [weak self] in
self?.recordButton.isEnabled = true
}
```
Makes absolute sense: since you would like the button to be enabled, and that is a UI modification, you would want it on the main thread. So all the statements in the closure of `DispatchQueue.main.async` will be executed on the main thread.
Where as in
```
DispatchQueue.main.async { [weak self] in
self?.enableRecordButton()
}
```
The method `enableRecordButton` will be executed on main thread. That means all the statements in `enableRecordButton` method and all the subsequent methods it calls will be executed on main thread.
In your case, if all that you want to achieve is enabling a button, then putting it in a separate function would not make much sense. The only benefit you get is that you would be able to enable the button by calling the `enableRecordButton` function from wherever you want, rather than repeating the same statement again and again.
Finally
```
... completionHandler{(
DispatchQueue.main.async { [weak self] in
self?.enableRecordButton()
}
)}
func enableRecordButton(){
DispatchQueue.main.async { [weak self] in
self?.recordButton.isEnabled = true
}
}
```
This in your case makes no sense. Statement
```
DispatchQueue.main.async { [weak self] in
self?.enableRecordButton()
}
```
will already ensure that all statements in `enableRecordButton` would execute on main thread so adding `DispatchQueue.main.async` in `enableRecordButton` makes no sense.
```
func enableRecordButton(){
DispatchQueue.main.async { [weak self] in
self?.recordButton.isEnabled = true
}
}
```
This might come in handy only if you have multiple points in your code from which you might call `enableRecordButton`, possibly on a non-main thread. In that case you can safely remove
```
DispatchQueue.main.async { [weak self] in
self?.enableRecordButton()
}
```
and simply call `self?.enableRecordButton()` in your completion handler and `DispatchQueue.main.async` in `enableRecordButton` function will ensure that all the statements will be executed on main thread.
Hope it helps
Upvotes: 3 [selected_answer]<issue_comment>username_2: I believe adding `DispatchQueue.main.async { }` again inside the function is useless, because you are already on the main thread, which is what you need to update the user interface. If you have a task that must be scheduled after the button update, you can add another `DispatchQueue.main.async { }` to put it on the serial main queue, where it will execute after the button is updated. But in every case the updates will be done on the main thread only, and they will be serial.
Upvotes: 0 <issue_comment>username_3: Technically, all three would achieve the same result. It's more about code organization, maintainability, and readability.
For #1, you're likely doing business logic in your completion handler that has nothing to do with updating UI. It'd be cleaner to move the UI updating to its own method, say, in the UIViewController whose view owns that button (and the other UI controls you mentioned). Benefits of doing so are that you can enable your UI controls via the same call from elsewhere in your app, and you can write tests against it. You can *also* better test your business logic of this completion handler, if you had a way to redirect the UI update call (such as via a protocol, to a mock).
And #3 is a waste, in that you're wasting a yield, adding a task to main thread, just to ensure the UI code will be on the main thread. You should instead organize your classes and code to always have a good understanding of what thread they're to be called from. If you're writing a public method in an SDK, you can certainly check the thread and assert if caller calls on a thread other than you expect, but that's about the only time I'd bother with that.
So my answer is #2 is the best.
Upvotes: 0 |
2018/03/20 | 1,115 | 4,209 | <issue_start>username_0: **The issue** I have is we currently are using IdentityServer as our SSO authentication for our corporate applications. However, the bulk of our applications are under the same Site ID in IIS 7.5. When navigating to more than 5 of these applications under the same Site ID, you end up getting a 400 error, request header too long. The reason being each application has its own cookie, so the request header is passing around 5+ cookies with token information and the becoming too large.
**My question** is, are you able to prevent the sharing of cookies between applications under the same Site ID in IIS 7.5?<issue_comment>username_1: Few things that you can try. Make the following changes at the server level.
1. Highlight the server name in IIS, select "configuration editor", select "system.web" and "httpRuntime" and change "maxRequestLength" to "1048576".
You can also edit the "applicationHost.config" file in the following way- C:\Windows\System32\inetsrv\Config
```
<system.web>
  <httpRuntime maxRequestLength="1048576" />
</system.web>
```
2. Edit "Request Filtering" settings at server level on IIS and set "maxAllowedContentLength" to "1073741824"
You can also edit the root web.config file in the following manner - C:\Windows\Microsoft.NET\Framework64\*\v4.0.30319\*\Config
\*Folder is based on your application. if its a 32 bit application, navigate to "Framework" folder. If its a .net 2.0 application, navigate to v2.0.50727.
```
<system.webServer>
  <security>
    <requestFiltering>
      <requestLimits maxAllowedContentLength="1073741824" />
    </requestFiltering>
  </security>
</system.webServer>
```
Upvotes: 0 <issue_comment>username_2: First of all - I want to say that I have not tried this myself, so I can't assure that it is a solution, but I'm trying to help.
The problem with the cookies originates from the Microsoft OWIN/Katana and the way they are encrypting them. They become enormous, but this has nothing to do with Identity Server. However [here](https://github.com/IdentityServer/IdentityServer3/issues/1124) and [here](https://github.com/IdentityServer/IdentityServer3/blob/master/source/Core/Configuration/AuthenticationOptions.cs#L135) there are good discussion around this.
The main thing to try first is in the `Startup.cs` of the IdentityServer project, in the `IdentityServerOptions.AuthenticationOptions` there is a property `SignInMessageThreshold` which defaults to 5. Try setting it to something lower, this will keep your header smaller (which may cause round trips to identity server when an app doesn't have its message in the cookies, but this will not force the user to re-login).
Another thing that we achieved in one of our projects is to create a database-backed cookie session handler. In your clients, where you use
```
app.UseCookieAuthentication(new CookieAuthenticationOptions
{
    AuthenticationType = "Cookies",
    CookieName = cookieName,
});
```
There is also a property `SessionStore`. You can have a custom implementation of the `Microsoft.Owin.Security.Cookies.IAuthenticationSessionStore`. In our case this reduced the cookie size to less than (or around) 300.
Upvotes: 0 <issue_comment>username_3: We also have IdentityServer for SSO and internal applications hosted on the same machine on IIS.
And I faced the same problem too.
Here is a solution:
1) You need to solve the Owin/Katana middleware problem to avoid nonce cookies piling up. [Here you can find the code for that fix](https://github.com/IdentityServer/IdentityServer3/issues/1124#issuecomment-226519073)
2) You have to stop sharing cookies.
So if your base address for applications is "*mysite.com*".
And you have a lot of different applications like this:
* *Good App:* mysite.com/**good\_app**/
* *Best App*: mysite.com/**best\_app**/
* *Super App*: mysite.com/**super\_app**/
Use **CookiePath** for each application on an application's side and it will [limit cookies](https://msdn.microsoft.com/en-us/library/ms178194.aspx#Background) (and look [here](https://msdn.microsoft.com/en-us/library/system.web.httpcookie.path.aspx) too).
Use the code like this (for "*Good App*"):
```
var cookieOptions = new CookieAuthenticationOptions
{
    AuthenticationType = "Cookies",
    CookieName = "GoodAppCookies",
    // Cookie Path same as application name on IIS
    CookiePath = "/good_app"
};
```
Hope it'll help.
Upvotes: 1 |
2018/03/20 | 1,305 | 4,722 | <issue_start>username_0: the data on the [file.txt](https://i.stack.imgur.com/5OUmE.png) are placed as shown.
My Code is this:
```
int searchBookname()
{
    FILE *myFile = fopen("file.txt", "r+");
    if (myFile != NULL) // file exists
    {
        char tmp1[512];
        char tmp2[512];
        while (fgets(tmp1, 512, myFile) != EOF)
        {
            puts("Insert the Book Name: ");
            scanf("%s", tmp2);
            if (strstr(tmp1, tmp2) != NULL) {
                printf("the book is found: %s\n\n", tmp1);
            } else {
                puts("\nSorry there was no Match with this name! Maybe Book is not recorded yet :(\n");
            }
        }
    } else { // file doesn't exist
        puts("\nDatabase is not created yet. Please record a new book to create a simple database\n");
        exit(0);
    }
    fclose(myFile); // closing the file
}
```
It keeps skipping the if statement 2 times for some reason, and on the 3rd time it prints the correct result.
This happens for whatever book I try to search.
See [here](https://i.stack.imgur.com/j6IKO.png)
How can I make it find the result without skipping the if statement?
2018/03/20 | 1,394 | 5,228 | <issue_start>username_0: I am writing a selenium test using javascript in Jmeter. When I click on a link on the site, it opens in a new tab by default. The automated browser even switches to this new tab. But, it seems selenium is not switching to the new tab. When I print the inner HTML for the body element (obtained by xpath `//body`) after clicking on the link, I get back the source for the first tab, not the second.
When I try to wait for any element at all on the next page (waiting for `//div[@id="my-div-on-second-page-only"]`, for example), I get a timeout saying the element was never located (long after I can see the page has finished loading).
I did happen upon [this question](https://stackoverflow.com/questions/28715942/how-do-i-switch-to-the-active-tab-in-selenium), but it's for python, and I am also struggling to understand the accepted answer.
Update
======
My code for switching the tabs currently looks like this:
```
// Switch tabs
var tabs = WDS.browser.getWindowHandles();
var tab = WDS.browser.getWindowHandle();
WDS.log.info("Tabs: " + tabs + " current: " + tab); // Output below

// Assume always there are only two tabs
for (t in tabs)
{
    WDS.log.info("Checking tab " + t);
    if (t != tab)
    {
        WDS.log.info("Switching to " + t);
        WDS.browser.switchTo().window(t);
        break;
    }
}
```
The output for the line marked with `// Output below` is:
```
Tabs: [CDwindow-A83928D86BA4F6F46C5D7F4B63B674A5, CDwindow-177CF406C98C28DF4AF5E7EC3228B896] current: CDwindow-A83928D86BA4F6F46C5D7F4B63B674A5
```
I am not even entering the for/in loop. I have tried switching using `tabs[index]`, `tabs.get(0);`, and `tabs.iterator.next();`, but none have worked. I've been scouring the internet all day for some information on the data type returned by `WDS.browser.getWindowHandles();`, but just can't seem to find it.
Update 2
========
I ultimately switched to the Java interpreter. None of the proposed solutions, even the ones in javascript, worked. I suspect there is an issue with the javascript interpreter itself. I'm going to leave the question open in case anyone would like to offer a solution that is tested and works in the Jmeter webdriver extension for future knowledge.
Below is my working Java code. It uses <NAME>'s example, which makes the assumption that the last entry in `tabs` is the new tab. Although I find M<NAME>'s solution seems to be the best in terms of a general solution, Matias' was good enough for my problem.
```
// Switch tabs
ArrayList tabs = new ArrayList(); // Jmeter interpreter uses by default
tabs.addAll(WDS.browser.getWindowHandles());
String tab = WDS.browser.getWindowHandle();
WDS.log.info("Tabs: " + tabs + " current: " + tab + " Size of tabs: " + tabs.size());
WDS.browser.switchTo().window(tabs.get(tabs.size() - 1));
WDS.log.info("Tabs switched successfully");
```<issue_comment>username_1: When you click a link and it opens a new tab, the driver still focuses on the first tab, so you have to move to it:
```
List<String> tabs = new ArrayList<>();
tabs.addAll(driver().getWindowHandles());
driver().switchTo().window(tabs.get(tabs.size() - 1));  // switchTo().window() takes a handle String
```
This gets all the window handles and moves to the last tab opened; then you can find(theElement).
Upvotes: 1 <issue_comment>username_2: I would load up a list with all the window handles before the click action. Then after clicking the new window would not be equal to any of the saved window handle values.
```
List<String> tabs = new ArrayList<>();
tabs.addAll(driver().getWindowHandles());
String newTab = null;
targetElement.click();
for (String currentWindow: driver().getWindowHandles()) {
if (!tabs.contains(currentWindow)) {
newTab = currentWindow;
break;
}
}
```
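The bookkeeping above is plain set arithmetic on the handle strings that `getWindowHandles()` returns, so it can be sketched without Selenium at all; the `CDwindow-...` handles below are made-up values in the style of the question's log output:

```java
import java.util.Arrays;
import java.util.HashSet;
import java.util.Set;

public class NewTabFinder {
    /** Return the handle present in `after` but missing from `before`,
     *  i.e. the tab opened by the click; null if nothing new appeared. */
    public static String findNewHandle(Set<String> before, Set<String> after) {
        for (String handle : after) {
            if (!before.contains(handle)) {
                return handle;
            }
        }
        return null;
    }

    public static void main(String[] args) {
        Set<String> before = new HashSet<>(Arrays.asList("CDwindow-AAA"));
        Set<String> after = new HashSet<>(Arrays.asList("CDwindow-AAA", "CDwindow-BBB"));
        System.out.println(findNewHandle(before, after)); // CDwindow-BBB
    }
}
```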
Upvotes: 2 <issue_comment>username_3: You can switch between tabs calling the below method:
```
public void setMainWindow() {
    try {
        int numberOfWindow = driver.getWindowHandles().size();
        boolean tmpIndex = true;
        String mainWindowHandler = driver.getWindowHandle(); // Store the current window handle
        if (numberOfWindow > 1) {
            for (String winHandle : driver.getWindowHandles()) {
                driver.switchTo().window(winHandle); // Switch to new window opened
                if (tmpIndex) {
                    mainWindowHandler = winHandle;
                }
                tmpIndex = false;
            }
            // Perform the necessary actions on new window
            driver.close(); // Close the new window, if that window no more required
            driver.switchTo().window(mainWindowHandler); // Switch back to original browser (first window)
        }
    } catch (Exception e) {}
}
// Continue with original browser (first window)...
```
Upvotes: 0 <issue_comment>username_4: You can do something like below.
```
var full_tabs = WDS.browser.getWindowHandles();
var x = full_tabs.toString();
var arr = x.split(',');
var newtab = arr[arr.length-1].split(']').toString().replace(',','').replace(' ','');
```
It is not the best solution, but it will work. You can combine it into 2 lines.
Upvotes: 1 |
2018/03/20 | 983 | 3,644 | <issue_start>username_0: Here's my sample code for a programming problem asking to split a string and sort the individual words to avoid duplicates. I know that this code is 100% correct, but I'm not really sure what the purpose of `lst = list()` line of code is?
How does the program know to put the file romeo in the list?
```
fname = input("Enter file name: ")
romeo = open(fname)
lst = list()
for line in romeo:
    line = line.rstrip()
    line = line.split()
    for e in line:
        if e not in lst:
            lst.append(e)
lst.sort()
print(lst)
```<issue_comment>username_1: The purpose of `lst = list()` is to create an instance of **list** called *lst*.
You could also replace it by
```
lst = []
```
it's exactly the same.
The line `lst.append(e)` is what fills it. [Here](https://www.tutorialspoint.com/python/list_append.htm) is more about the **append** method.
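A quick check in the interpreter confirms it: the two spellings build equal, but distinct, empty lists:

```python
a = list()
b = []
print(a == b)  # True  -- same (empty) contents
print(a is b)  # False -- two separate list objects
```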
Upvotes: -1 [selected_answer]<issue_comment>username_2: Maybe you are confused with iteration over the file. Iteration allows us to treat the file as a container which can be iterated just like we do for any other container like list or set or dict.items().
Also `lst = list()` means `lst = []`. This has got nothing to do with file iteration.
Upvotes: 0 <issue_comment>username_3: `list` is a built-in Python class. Type help(list) in your interpreter and you will see:
[](https://i.stack.imgur.com/zqqle.png)
In many programming languages, calling ClassName() creates an object of that class. For example, in C++-like pseudocode:
```
class MyClass{
var declarations
method definitions
}
MyObj=MyClass()
```
MyObj in the code above is an object of the class MyClass. The same applies to your code: **lst is an object of the list class**, which is predefined in Python; we call it a built-in data structure.
So your lst definition above initializes lst to be an empty list.
The help section shows two constructors for the list class, which are used in different ways. The second form,
```
list(iterable)
```
creates a list from an already-existing sequence (any iterable). For example:
```
tuple1=(1,'mars')
new_list=list(tuple1)
print(new_list)
```
would create new list new\_list using the tuple which is a iterable.
Upvotes: 0 <issue_comment>username_4: See below for more insights:
```
# the following line stores your input in fname as a str
fname = input("Enter file name: ")
# the following line opens the file named fname and stores it in romeo
romeo = open(fname)
# next line creates an empty list through the built-in function list()
lst = list()
# now you go through all the lines in the file romeo;
# each line is assigned to the variable line sequentially
for line in romeo:
    # strip trailing whitespace from the end of the line
    line = line.rstrip()
    # split the string on whitespace and store the resulting elements
    # in the list line, which will then contain every word of the line
    line = line.split()
    # now you go through all the elements in the list line;
    # each word is assigned to e sequentially
    for e in line:
        # if the word is not already contained in the list lst
        if e not in lst:
            # append the word at the last position of the list lst
            lst.append(e)
# sort the list alphabetically
lst.sort()
print(lst)
```
Some notes:
* you would probably want to add `romeo.close()` at the end of the script to close the file
* it is important to note that not the whole file will be stored in the lst list: each word will be stored there only once, thanks to `if e not in lst:`
Upvotes: 0 |
2018/03/20 | 597 | 1,909 | <issue_start>username_0: I have a column of items like this
```
{apple}
{orange}>s>
{pine--apple}
{kiwi}
{strawberry}>s>
```
I would like to filter it so that I only get items that are NOT just a word between brackets (but have other stuff before or after the bracket), so in this example I would like to select these two:
```
{orange}>s>
{strawberry}>s>
```
I have tried the following code using `dplyr` and `stringr`, but even though on <https://regexr.com/> the regular expression works as expected, in `R` it does not (it just selected rows in which the `var` column is empty. What am I doing wrong?
```
d_filtered <- d %>%
filter(!str_detect(var, "\\{(.*?)\\}"))
```<issue_comment>username_1: Your pattern is saying "match anything where there are brackets, with or without stuff between them". Then you negate it with `!`, so filtering out anything that has a `{` followed by a `}` anywhere in the string.
Sounds like you want to keep strings if there is something before or after the brackets, so let's match that. A `.` matches any (single) thing, so a pattern for "something before an open bracket" is `".\\{"`. Similarly, a pattern for "something after a closing bracket" is `"\\}."`. We can connect them with `|` for "or". In your `filter`, use
```
filter(str_detect(var, ".\\{|\\}."))
```
Upvotes: 3 [selected_answer]<issue_comment>username_2: This will solve your problem by testing whether every character of an item is a letter (`[a-zA-Z]`), `{`, `}` or `-`:
```
cl = c("{apple}",
       "{orange}>s>",
       "{pine--apple}",
       "{kiwi}",
       "{strawberry}>s>")
find = function(x) {
  x = unlist(strsplit(x, ""))
  poss = c(letters, LETTERS, "{", "}", "-")
  all(x %in% poss)
}
cl = cl[!sapply(cl, find)]
```
Upvotes: 0 <issue_comment>username_3: One can also use `grep` of base R:
```
> d = c("{apple}", "{orange}>s>", "{pine--apple}", "{kiwi}", "{strawberry}>s>")
> d[grep(".\\{|}.", d)]
[1] "{orange}>s>"     "{strawberry}>s>"
```
Upvotes: 0 |
2018/03/20 | 618 | 2,077 | <issue_start>username_0: ```
private void buttonLogin_Click(object sender, EventArgs e)
{
    SqlConnection sqlcon = new SqlConnection(@"Data Source=(LocalDB)\v11.0;AttachDbFilename=C:\Users\sesha\Documents\Visual Studio 2013\Projects\Achievers - Grocery Management System\Login Database\Login DB.mdf;Integrated Security=True;Connect Timeout=30");
    string query = "Select * from [Table] where username = '" + textBoxUsername.Text.Trim() + "' and password = '" + textBoxPassword.Text.Trim() + "'";
    SqlDataAdapter sda = new SqlDataAdapter(query, sqlcon);
    DataTable dt = new DataTable();
    sda.Fill(dt);
    if (dt.Rows.ToString() == "1")
    {
        Dashboard objDashboard = new Dashboard();
        this.Hide();
        objDashboard.Show();
    }
    else
    {
        MessageBox.Show("Check your username and password");
    }
}
```
2018/03/20 | 1,119 | 3,897 | <issue_start>username_0: I have a column with text (Words and numbers) separated by spaces. There are two cases:
Case 1 (3 words separated by 2 spaces): **BALDOR** 3 hp-4
Case 2 (4 words separated by 3 spaces): **US ELECTRICAL** 75 hp-232
I need to extract the bolded word(s) (they aren't bolded in the data I have; it's just to illustrate), so I figured I would reverse the order of the words and then get rid of the first two (3 hp-4 and 75 hp-232), which will always output the bolded words.
I might be going about it the wrong way with reversing the order of the words so If you have another method that you think is better do tell.
This is what I have so far:
```
Sub ExtractMissingInfo2()
Dim TypeCell As Variant
Dim Manufacturer As String
Dim MFG As Variant
Dim MFGrev As Variant
Dim MFGout As Variant
Dim RowCount As Variant
Dim Rng As Range
Dim a As Variant
Dim I As Variant
Dim wbdata As Workbook
Dim wsData As Worksheet
Set wbdata = Workbooks("trial1")
Set wsData = wbdata.Worksheets("Final Data")
wsData.Activate
'Counts how many cells in the chosen column
RowCount = Cells(Rows.Count, 4).End(xlUp).Row
For a = 2 To RowCount
If Not IsEmpty(Cells(a, 4)) Then
TypeCell = wsData.Cells(a, 4).Text 'cells with information
MFG = Split(TypeCell, " ") 'separate them
'Reverse the order of the words
For I = UBound(MFG) To 0 Step -1
MFGrev = MFGrev + "" + MFG(I) + " "
'Use the last iteration which will include all the words in reverse order
If I = 0 Then
MFGout = MFGrev
End If
Next
'This part I am not sure about
Manufacturer = Split(MFGout.Text, " ")(0)
'Insert extracted words into new column
Cells(a, 16) = WorksheetFunction.Transpose(Manufacturer)
Else
MsgBox ("Is empty... row " & a)
End If
Next
End Sub
```
So my first issue is that when looping, it keeps appending every cell's string to the previous ones instead of going through each cell one by one and outputting that cell's words in reverse order.
My second issue is that I am not sure how to delete the first two words after reversing the order.
This is my first question on here, so if I made mistakes in the formatting let me know.
Thank you in advance for any help!
EDIT:
What I am trying to do is extract the manufacturers' names for a list of equipment. The names can have one or two words in them, so that is what I need to extract. I then paste those into another column.
The cases I gave were just examples to show the two cases that arise in that list and to ask how to deal with them.
|
2018/03/20 | 502 | 1,645 | <issue_start>username_0: The problem is somewhat simple. My objective is to compute the days difference between two dates, say A and B.
These are my attempts:
```
df['daydiff'] = df['A']-df['B']
df['daydiff'] = ((df['A']) - (df['B'])).dt.days
df['daydiff'] = (pd.to_datetime(df['A'])-pd.to_datetime(df['B'])).dt.days
```
These worked for me before, but for some reason I keep getting this error this time:
>
> TypeError: <class 'datetime.time'> is not convertible to datetime
>
>
>
When I export the df to excel, then the date works just fine. Any thoughts?<issue_comment>username_1: Use pd.Timestamp to handle the awkward differences in your formatted times.
```
df['A'] = df['A'].apply(pd.Timestamp) # will handle parsing
df['B'] = df['B'].apply(pd.Timestamp) # will handle parsing
df['day_diff'] = (df['A'] - df['B']).dt.days
```
Of course, if you don't want to change the format of the df['A'] and df['B'] within the DataFrame that you are outputting, you can do this in a one-liner.
```
df['day_diff'] = (df['A'].apply(pd.Timestamp) - df['B'].apply(pd.Timestamp)).dt.days
```
This will give you the days between as an integer.
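As a quick self-contained check of the same idea (the sample dates below are made up, not the asker's data):

```python
import pandas as pd

df = pd.DataFrame({
    "A": ["2018-03-20", "2018-04-01"],
    "B": ["2018-03-10", "2018-03-20"],
})
# Same one-liner as above: parse each column, subtract, take whole days
df["day_diff"] = (df["A"].apply(pd.Timestamp) - df["B"].apply(pd.Timestamp)).dt.days
print(df["day_diff"].tolist())  # [10, 12]
```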
Upvotes: 4 [selected_answer]<issue_comment>username_2: When I applied the solution offered by username_1, I got **TypeError: Cannot convert input [00:00:00] of type <class 'datetime.time'>** as well. It's basically saying that the dataframe contains missing timestamp values, which are represented as [00:00:00], and this value is rejected by the `pandas.Timestamp` function.
To address this, simply apply a suitable missing-value strategy to clean your data set, before using
```
df.apply(pd.Timestamp)
```
Upvotes: 2 |
2018/03/20 | 1,297 | 4,715 | <issue_start>username_0: I am having this error every time I try to filter a column of a table. I fetch my data from a Cloud Firestore collection, and the field auteur is well defined in each document.
Here's what my component looks like:
```
import { Component, OnInit, ViewChild} from '@angular/core';
import { Router } from '@angular/router';
import { DataService } from '../../services/data.service';
import { Observable } from 'rxjs/Observable';
import { AngularFirestore } from 'angularfire2/firestore';
import { AuthentificationService } from '../../services/authentification.service';
@Component({
selector: 'app-data-preview',
templateUrl: './data-preview.component.html',
styleUrls: ['./data-preview.component.css']
})
export class DataPreviewComponent implements OnInit {
rows = [];
temp = [];
selected: any[] = [];
columns;
panelOpenState = false;
id: any;
@ViewChild(DataPreviewComponent) table: DataPreviewComponent;
constructor(private router: Router, private dataService: DataService,
private afs: AngularFirestore,public authService: AuthentificationService) {
this.dataService.getData().subscribe((datas) => {
this.temp = [...datas];
this.rows = datas;
console.log(datas);
});
}
ngOnInit() {
}
updateFilter(event) {
const val = event.target.value.toLowerCase();
// filter our data
const temp = this.temp.filter(function(d) {
return d.nom.toLowerCase().indexOf(val) !== -1 || !val;
});
// update the rows
this.rows = temp;
// Whenever the filter changes, always go back to the first page
}
updateFilter1(event) {
const val1 = event.target.value.toLowerCase();
// filter our data
const temp = this.temp.filter(function(d) {
return d.prenom.toLowerCase().indexOf(val1) !== -1 || !val1;
});
// update the rows
this.rows = temp;
// Whenever the filter changes, always go back to the first page
}
updateFilter2(event) {
const val2 = event.target.value.toLowerCase();
// filter our data
console.log(val2);
const temp = this.temp.filter(function(d) {
return d.auteur.toLowerCase().indexOf(val2) !== -1 || !val2;
});
// update the rows
this.rows = temp;
// Whenever the filter changes, always go back to the first page
}
updateFilter3(event) {
const val3 = event.target.value.toLowerCase();
// filter our data
const temp = this.temp.filter(function(d) {
return d.departement.toLowerCase().indexOf(val3) !== -1 || !val3;
});
// update the rows
this.rows = temp;
// Whenever the filter changes, always go back to the first page
}
updateFilter4(event) {
const val4 = event.target.value.toLowerCase();
// filter our data
const temp = this.temp.filter(function(d) {
return d.commune.toLowerCase().indexOf(val4) !== -1 || !val4;
});
// update the rows
this.rows = temp;
// Whenever the filter changes, always go back to the first page
}
updateFilter5(event) {
const val5 = event.target.value.toLowerCase();
// filter our data
const temp = this.temp.filter(function(d) {
return d.typeF.toLowerCase().indexOf(val5) !== -1 || !val5;
});
// update the rows
this.rows = temp;
// Whenever the filter changes, always go back to the first page
}
// onSelect({ selected }) {
// console.log('Select Event', selected, this.selected);
// }
onActivate(event) {
if (event.type === 'click') {
console.log('Event: activate', event);
this.id = event.row.id;
this.router.navigate(['/data', this.id]);
}
}
}
```
and the component associated with it
```html
**Panneau de recherche**
{{value}}
{{value}}
{{value}}
{{value}}
{{value}}
{{value}}
{{value}}
{{value}}
```
I tried to console.log(auteur.id) and it displayed the content in the console.
PS: I have tried to filter other fields of my document and it works perfectly.
Any idea of how to solve this?<issue_comment>username_1: When this happens I usually check if the data is there before, in your case:
```
if(d && d.auteur){
return d.auteur.toLowerCase().indexOf(val2) !== -1 || !val2;
}
```
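The same guard works outside Angular too; here is a small standalone TypeScript sketch (the `Row` type, `filterRows` helper, and sample data are made up for illustration):

```typescript
// Hypothetical row shape: any field used for filtering may be missing.
type Row = { auteur?: string; nom?: string };

// Keep rows whose given field exists and contains the query (case-insensitive).
function filterRows(rows: Row[], field: keyof Row, query: string): Row[] {
  const q = query.toLowerCase();
  return rows.filter(r => {
    const v = r[field];
    return typeof v === "string" && v.toLowerCase().indexOf(q) !== -1;
  });
}

const rows: Row[] = [{ auteur: "Hugo" }, { nom: "Dupont" }, { auteur: "hubert" }];
console.log(filterRows(rows, "auteur", "hu").length); // rows without auteur are skipped, not crashed on
```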
Upvotes: 4 [selected_answer]<issue_comment>username_2: In angular material, there is no event.target.value, but instead event.value. You should try using this.
```
updateFilter2(event) {
const val2 = event.value.toLowerCase(); // filter our data
const temp = this.temp.filter(function(d) {
if(d && d.auteur){
return d.auteur.toLowerCase().indexOf(val2) !== -1 || !val2;
}
});
this.rows = temp;
}
```
Upvotes: -1 |
2018/03/20 | 918 | 3,603 | <issue_start>username_0: I have this code where the length and the width get multiplied but I need them to be 'Float' numbers and not 'integers'
```
// Outlets for square start
@IBOutlet weak var LengthForSquare: UITextField!
@IBOutlet weak var WidthForSquare: UITextField!
@IBOutlet weak var ResultForSquare: UILabel!
// Outlet for Square finish
@IBAction func AnswerForSquarePressed(_ sender: Any) {
ResultForSquare.text = ((LengthForSquare.text! as NSString).integerValue * (WidthForSquare.text! as NSString).integerValue ).description
}
```
I want to convert the 'integerValue' to a Float, but it shows an error every time I try.
It would be great if someone could help me with this.<issue_comment>username_1: A smart *Swifty* solution, not covered in the linked duplicate: an extension of `UITextField`
```
extension UITextField {
var floatValue : Float {
let numberFormatter = NumberFormatter()
numberFormatter.numberStyle = .decimal
let nsNumber = numberFormatter.number(from: text!)
return nsNumber == nil ? 0.0 : nsNumber!.floatValue
}
}
```
---
And use `String(format:)` rather than `.description`:
```
ResultForSquare.text = String(format: "%.2f", LengthForSquare.floatValue * WidthForSquare.floatValue)
```
If the locale matters, also use `NumberFormatter` to convert the `Float` to a `String`.
---
Note: Please conform to the naming convention that variable names start with a lowercase letter.
Upvotes: 2 <issue_comment>username_2: When converting a regular number contained in a string (i.e. one that isn't currency, etc) you don't need a number formatter:
```
let aString = "3.14159"
let defaultValue: Float = 0.0
let converted = Float(aString) ?? defaultValue
```
You can use whatever string you want in place of `aString` and you can replace `defaultValue` with whatever float value you want.
In the above example `converted` would end up being a float with the value `3.14159` in it. If however `aString` was something like "TARDIS" then `converted` would end up being `0.0`. This way you know `converted` will *always* hold a value and you won't have to unwrap anything to get to it. (if you left off `?? defaultValue` you would have to unwrap `converted`)
Upvotes: 0 <issue_comment>username_3: Create extensions for `String` and `Int`, then simply use the computed properties written below, just as you would use `NSString`'s `intValue`:
```
extension String {
var intValue: Int {
if let value = Int(self) {
return value
}
return 0// Set your default int value if failed to convert string into Int
}
var floatValue: Float {
if let value = Float(self) {
return value
}
return 0// Set your default float value if failed to convert string into float
}
var doubleValue: Double {
if let value = Double(self) {
return value
}
return 0// Set your default double value if failed to convert string into Double
}
var boolValue: Bool {
if let value = Bool(self) {
return value
}
return false// Set your default bool value if failed to convert string into Bool
}
}
extension Int {
var stringValue: String {
return "\(self)"
}
var floatValue: Float {
return Float(self)
}
var doubleValue: Double {
return Double(self)
}
}
@IBAction func AnswerForSquarePressed(_ sender: Any) {
    let area = LengthForSquare.text!.floatValue * WidthForSquare.text!.floatValue
    ResultForSquare.text = String(format: "%.2f", area)
}
```
Upvotes: -1 |
2018/03/20 | 1,234 | 4,005 | <issue_start>username_0: I have a database with three tables (loan, person, and loan\_person\_rel) and a corresponding POJO for each table.
Loan
```
private int line;
private Double balance;
private Double expired;
private int state;
private int defaultDays;
private Branch branch;
private String balanceHistory;
private String expiredHistory;
private Long loanCode;
private List<Person> persons;
private String extraInfo;
private LoanTypes loanType;
private String nomPro;
//The class contains the getters and setters :)
```
Person
```
private String name;
private String documentNumber;
private String workEnterprise;
private String workCity;
private String workAddress;
private String workNeighborhood;
private String workPhone;
private String occupation;
private String homePhone;
private String cellPhone;
private String phone3;
private String phone4;
private String homeAddress;
private String homeCity;
private String homeNeighborhood;
private String email;
private String relationship;
private List<Loan> loans;
//The class contains the getters and setters :)
```
Loan\_person\_rel
```
private String personId;
private String loanId;
private int type;
//The class contains the getters and setters :)
```
How can I build a jOOQ select, or some other method, to retrieve the data and fill the Loan class with the `persons` field populated?<issue_comment>username_1: It could be done this way:
```
List<Loan> list = dsl.selectFrom(LOAN).fetch(this::recordToPojo);
private Loan recordToPojo(final LoanRecord record) {
return new Loan(
record.getLine(),
record.getBalance(),
....
);
}
```
(`Loan` is the name of the POJO.)
Upvotes: 1 <issue_comment>username_2: ### jOOQ 3.15 solution using `MULTISET`
Starting [with jOOQ 3.15, the standard SQL `MULTISET` operator was introduced](https://blog.jooq.org/2021/07/06/jooq-3-15s-new-multiset-operator-will-change-how-you-think-about-sql/), which is emulated using SQL/XML or SQL/JSON if needed. For simplicity, I'm assuming your `Loan` and `Person` classes are Java 16 records:
```java
List<Loan> result =
ctx.select(
// Project the loan columns you need
LOAN.LINE,
LOAN.BALANCE,
..
multiset(
select(PERSON.NAME, PERSON.DOCUMENT\_NUMBER, ...)
.from(PERSON)
.join(LOAN\_PERSON\_REL)
.on(PERSON.PERSON\_ID.eq(LOAN\_PERSON\_REL.PERSON\_ID))
.where(LOAN\_PERSON\_REL.LOAN\_ID.eq(LOAN.LOAN\_ID))
).as("persons").convertFrom(r -> r.map(Records.mapping(Person::new)))
)
.from(LOAN)
.fetch(Records.mapping(Loan::new));
```
The mapping into the `Loan` and `Person` constructor references is type safe and reflection free, using the [new jOOQ 3.15 ad-hoc converter feature](https://www.jooq.org/doc/latest/manual/sql-execution/fetching/ad-hoc-converter/).
Unlike JPA based ORMs, jOOQ doesn't offer object *graph* persistence, i.e. your `Person` objects can't contain identity-based references back to `Loan` objects. Instead, this approach projects data in tree form, which may be fine for your use-cases.
### jOOQ 3.14 solution using SQL/XML or SQL/JSON
Starting with jOOQ 3.14, the preferred approach here is to [nest your collections directly in SQL using SQL/XML or SQL/JSON](https://blog.jooq.org/2020/10/09/nesting-collections-with-jooq-3-14s-sql-xml-or-sql-json-support/). You could write a query like this:
```java
List<Loan> result =
ctx.select(
// Project the loan columns you need, or all of them using LOAN.asterisk()
LOAN.LINE,
LOAN.BALANCE,
...
field(select(
jsonArrayAgg(jsonObject(
key("name").value(PERSON.NAME),
key("documentNumber").value(PERSON.DOCUMENT\_NUMBER),
...
))
.from(PERSON)
.join(LOAN\_PERSON\_REL)
.on(PERSON.PERSON\_ID.eq(LOAN\_PERSON\_REL.PERSON\_ID))
.where(LOAN\_PERSON\_REL.LOAN\_ID.eq(LOAN.LOAN\_ID))
)).as("persons")
)
.from(LOAN)
.fetchInto(Loan.class);
```
The same restriction about this fetching trees instead of graphs applies.
Note that `JSON_ARRAYAGG()` aggregates empty sets into `NULL`, not into an empty `[]`. [If that's a problem, use `COALESCE()`](https://stackoverflow.com/a/67932476/521799)
Upvotes: 3 [selected_answer] |
2018/03/20 | 902 | 3,199 | <issue_start>username_0: I'm trying to read two text files and use values from both as parameters to call an API. I'm having an issue with doing this effectively.
Here's the relevant code:
```
def read_addresses(self):
f = open("addresses.txt", "r")
f2 = open("cities.txt", "r")
self.addresses = f.readlines()
self.cities = f2.readlines()
def get_data(self):
for x in self.addresses:
sep = ' Placeholder'
self.k = x.split(sep, 1)[0]
self.k = re.sub('\s+',' ', self.k)
for x in self.cities:
sep = ' Placeholder'
self.j = x.split(sep, 1)[0]
self.j = re.sub('\s+', ' ', self.j)
self.city_state = self.j + ',' + ' TX'
try:
params = (
('address1', self.k
),
('address2', self.city_state),
)
print params
```
**Input (cities.txt)**
```
Unique City 1
Unique City 2
Unique City 3
```
**Input (addresses.txt)**
```
Unique Address 1
Unique Address 2
Unique Address 3
```
**And the output:**
```
(('address1', 'Unique Address 3'), ('address2', 'Unique City 1 , TX'))
(('address1', 'Unique Address 3'), ('address2', 'Unique City 2 , TX'))
(('address1', 'Unique Address 3'), ('address2', 'Unique City 3, TX'))
```
**The output I desire:**
```
(('address1', 'Unique Address 1'), ('address2', 'Unique City 1, TX'))
(('address1', 'Unique Address 2'), ('address2', 'Unique City 2, TX'))
(('address1', 'Unique Address 3'), ('address2', 'Unique City 3, TX'))
```
What is the most effective way to approach this? I'd like to do it in one `for` loop.<issue_comment>username_1: Your assignment to
`self.k`
is the last value from the first for loop. To get your desired output, you'll need to store self.k in, say, a list on every iteration, e.g. k.append(...)
I don't see how you'll get away with just a single for loop when iterating over 2 lists, though.
Upvotes: 0 <issue_comment>username_2: The problem you're running into is that your first for loop is running completely before you reference *self.k*. What this does is iterates through *self.addresses* and sets *self.k* each time, but when you reference *self.k* in the for loop below it will only hold the value of the last thing in the list because it isn't being iterated through at the same time.
If you want to iterate through both lists at the same time you can do it like this
```
f = open("addresses.txt", "r")
f2 = open("cities.txt", "r")
self.addresses = f.readlines()
self.cities = f2.readlines()
for i,j in zip(self.addresses, self.cities):
print i
print j
print (i, j)
```
This way you won't run into the issue of setting self.k to the final value in the list.
On another note you might want to consider condensing this into a single function and avoiding the use of global variables or class attributes where they aren't necessary. This looks like temporary data that doesn't need to be passed between classes or class methods. I would suggest an approach where within a single function you instantiate and reference variables within the scope of that function and then return or print the desired data. Good luck!
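Condensed into a single comprehension, the `zip`-based pairing looks like this (Python 3 syntax, with the file contents inlined as lists instead of `readlines()`):

```python
# Inlined sample data standing in for addresses.txt and cities.txt
addresses = ["Unique Address 1", "Unique Address 2", "Unique Address 3"]
cities = ["Unique City 1", "Unique City 2", "Unique City 3"]

# Pair each address with the city on the same line
params = [
    (("address1", a), ("address2", c + ", TX"))
    for a, c in zip(addresses, cities)
]
for p in params:
    print(p)
```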
Upvotes: 3 [selected_answer] |
2018/03/20 | 966 | 3,367 | <issue_start>username_0: I'm trying to do an advanced class template argument deduction by using the new deduction guides from c++17. Unfortunately, it looks like you can only use simple template declarations after the `->`, but I need a helper struct to determine the resulting type.
My use case is this one: I have a variadic template class that takes an arbitrary amount of different types. For one constructor I want to specify every single one, for another ctor I want to specify only one type and replicate it N times.
To access this `N` in the deduction guide I introduced a new type:
```
template <std::size_t N>
struct Replicate { };
```
The class I have is similar to this one:
```
template <typename... Foos>
struct Foo {
    // [ ... ] member std::tuple<Foos...>
    // ctor 1: give values for all types (easy to deduce)
    Foo(Foos&&... args /* rvalue ref, NOT forwarding */) { };

    // ctor 2: give one value and N, result is N copies of this value.
    // takes Replicate<N> as parameter to aid the deduction.
    template <typename T, std::size_t N>
    Foo(const T& t, Replicate<N>) { };
};
```
The usage would be like this:
```
Foo f1{1, 2, 3.0};         // deduce Foo<int, int, double>;
Foo f2{8, Replicate<4>{}}; // deduce Foo<int, int, int, int>;
```
The deduction guide for the first one is straight forward:
```
template <typename... Ts>
Foo(Ts&&...) -> Foo<Ts...>;
```
It gets problematic with the second (ctor 2) deduction guide. First I need a helper struct to create `Foo` from `T` and `N`.
```
template <typename T, typename Seq>
struct ExpandNTimes;

template <typename T, std::size_t... Is>
struct ExpandNTimes<T, std::index_sequence<Is...>> {
    template <std::size_t> using NthType = T;
    using Type = Foo<NthType<Is>...>;
};
```
Then in the deduction guide I want to utilize the helper to deduce the correct type. I can't directly use `Foo` as there is no kind of "in place parameter pack creation", therefore the helper struct.
```
template <typename T, std::size_t N>
Foo(const T&, Replicate<N>) -> typename ExpandNTimes<T, std::make_index_sequence<N>>::Type;
```
Unfortunately this results in an error similar to this one:
>
>
> ```
> error: trailing return type of 'typename ExpandNTimes<T, std::make_index_sequence<N>>::Type' deduction guide is not a specialization of ‘Foo’
>
> ```
>
>
Is there any way to work around this issue?<issue_comment>username_1: This is impossible with class template argument deduction - both template names must be [the same](http://eel.is/c++draft/temp#deduct.guide-3.sentence-3), and the thing after the `->` must be a [*simple-template-id*](http://eel.is/c++draft/temp#deduct.guide-1). This doesn't leave any room for template shenanigans.
But nothing prevents you from doing the thing that class template argument deduction is intended to replace: factory functions:
```
template <typename T, std::size_t N>
typename ExpandNTimes<T, std::make_index_sequence<N>>::Type
makeFoo(T const&, Replicate<N>);
```
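To make the factory-function idea concrete, here is a self-contained C++14 sketch; `Foo`, `Replicate`, `NthType`, and `makeFoo` below are minimal stand-ins for the question's types, not a drop-in implementation:

```cpp
#include <cassert>
#include <cstddef>
#include <tuple>
#include <type_traits>
#include <utility>

// Hypothetical stand-ins for the question's Foo and Replicate.
template <std::size_t N> struct Replicate {};

template <typename... Foos>
struct Foo {
    std::tuple<Foos...> values;
};

// NthType<I, T> is always T; expanding it over an index pack yields N copies of T.
template <std::size_t, typename T> using NthType = T;

template <typename T, std::size_t... Is>
Foo<NthType<Is, T>...> makeFooImpl(const T& t, std::index_sequence<Is...>) {
    return { { NthType<Is, T>(t)... } };
}

// Factory replacing the impossible deduction guide.
template <std::size_t N, typename T>
auto makeFoo(const T& t, Replicate<N>) {
    return makeFooImpl(t, std::make_index_sequence<N>{});
}
```

With this, `makeFoo(8, Replicate<4>{})` yields a `Foo<int, int, int, int>` holding four copies of 8.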
Upvotes: 4 [selected_answer]<issue_comment>username_2: This is doable if you can change `Replicate`'s definition to embed a pack into a base class:
```
template <class... Ts> struct ReplicateBase {};
template <std::size_t N> struct Replicate : ReplicateBase<std::make_index_sequence<N>> {};
template <class T, std::size_t> using Meow = T;
template <class T, std::size_t... Is>
Foo(const T&, ReplicateBase<std::index_sequence<Is...>>) -> Foo<Meow<T, Is>...>;
```
Then it's a "simple" matter of constraining everything else to not compete with this guide when passed a `Replicate`:
* `Foo(Foos&&... args) { }` and `template <class... Ts> Foo(Ts&&...) -> Foo<Ts...>;` (are you sure you want to deduce references when passed lvalues?) should be constrained to when `Foos`/`Ts` aren't `Replicate`s
* `template <class T, std::size_t N> Foo(const T& t, Replicate<N>);` needs to be constrained to prevent it from being used to deduce an empty pack (e.g., to when `sizeof...(Foos) == N`)
Upvotes: 1 |
2018/03/20 | 1,290 | 3,394 | <issue_start>username_0: I would like to animate a circle into a line with the radius equaling the width, and I am wondering how I can do this with an Arc? Or perhaps there is a better way?
From
[](https://i.stack.imgur.com/Syf3o.png)
To [](https://i.stack.imgur.com/6wfUN.png)
Here's my arc:
```
function drawStar(x,y,size,scale,opacity,ctx){
ctx.save();
ctx.beginPath();
ctx.arc(x,y,size+scale,0,size+scale * Math.PI,false);
ctx.globalAlpha = opacity
ctx.closePath();
setFillStyle('rgb(255,237,219)',ctx);
ctx.fill()
ctx.restore();
}
```
I tried using ctx.scale(n,1), however it does not keep the same radius(width) and it scales the collection of arcs as a whole (zoom in effect).<issue_comment>username_1: You could draw the left and right halves of a circle using `arc`, then do a `fillRect` in between to connect them.
Edit: To elaborate on what I said earlier:
```
function init() {
let canvas = document.getElementById('myCanvas');
canvas.width = 400;
canvas.height = 400;
canvas.style.width = "400px";
canvas.style.height = "400px";
let ctx = canvas.getContext("2d");
function fillArc(ctx, cx, cy, r, startDeg, endDeg) {
ctx.beginPath();
ctx.arc(cx, cy, r, startDeg * Math.PI / 180, endDeg * Math.PI / 180);
ctx.fill();
}
function fillOval(ctx, cx, cy, r, sideLength, skipFirstArc) {
if (!skipFirstArc) {
fillArc(ctx, cx, cy, r, 90, 270);
}
ctx.fillRect(cx, cy - r, sideLength, r * 2);
fillArc(ctx, cx + sideLength, cy, r, 270, 90);
}
let sideLength = 0;
ctx.fillStyle = 'red';
function animateOval() {
if (sideLength === 100) {
ctx.clearRect(0, 0, 400, 400);
}
else {
fillOval(ctx, 30, 30, 25, sideLength, sideLength > 0);
}
++sideLength;
if (sideLength > 100) {
sideLength = 0;
}
}
setInterval(animateOval, 16);
}
```
Here's a Plunker with the above code running: <http://plnkr.co/edit/vNqoUjPKg2lqC7JtYuEb?p=preview>
Upvotes: 1 <issue_comment>username_2: You can use [Bezier Curves](https://developer.mozilla.org/en-US/docs/Web/API/CanvasRenderingContext2D/bezierCurveTo) to 'transform' your arc.
There's some [math involved](https://stackoverflow.com/questions/1734745/how-to-create-circle-with-b%C3%A9zier-curves) in calculating the perfect ends of your stretched circle but I guessed and tweaked my numbers.
```js
var c = document.getElementById("myCanvas");
var ctx = c.getContext("2d");
ctx.beginPath();
ctx.moveTo(40, 20);
ctx.lineTo(100, 20);
ctx.bezierCurveTo(130, 20, 130, 60, 100, 60);
ctx.lineTo(40, 60);
ctx.bezierCurveTo(10, 60, 10, 20, 40, 20);
ctx.stroke();
ctx.fill();
ctx.closePath();
ctx.beginPath();
ctx.moveTo(40, 80);
ctx.bezierCurveTo(68, 80, 68, 120, 40, 120);
ctx.bezierCurveTo(12, 120, 12, 80, 40, 80);
ctx.fill();
ctx.stroke();
```
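That guessed-and-tweaked offset has a closed form: a single cubic Bézier approximating a circular arc of angle theta and radius r places its control points at distance (4/3) * tan(theta/4) * r from the endpoints. A quick Node-runnable check (no canvas needed; the radius 20 matches the caps drawn above):

```javascript
// Control-point offset for approximating an arc of angle theta (radians)
// and radius r with one cubic Bezier segment.
function bezierArcOffset(theta, r) {
  return (4 / 3) * Math.tan(theta / 4) * r;
}

const r = 20; // the cap radius in the snippet above ((60 - 20) / 2)
console.log(bezierArcOffset(Math.PI, r).toFixed(2));     // half circle: "26.67"
console.log(bezierArcOffset(Math.PI / 2, r).toFixed(2)); // quarter circle: "11.05"
```

The hand-tweaked 30 in the snippet above is in the same ballpark as the exact 26.67 for its 20 px half-circle caps.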
Upvotes: 1 <issue_comment>username_3: Use instead a wide line-width value with "round" `lineCap` and `stroke()`:
```js
var ctx = c.getContext("2d");
ctx.lineWidth = 50;
ctx.lineCap = "round";
ctx.moveTo(45 , 25);
ctx.lineTo(45.5, 25); // in IE11 this must be slightly offset
ctx.moveTo( 45, 100);
ctx.lineTo(150, 100);
ctx.stroke();
```
Remember `beginPath()` for animation.
Upvotes: 3 [selected_answer] |
2018/03/20 | 1,645 | 6,101 | <issue_start>username_0: I'm new to MongoDB and I'm looking for a way to do the following:
I have a collection of a number of available "things" to be used.
The user can "save" a "thing" and decrement the number of available things.
But he has a time to use it before it expires.
If it expires, the thing has to go back to the collection, incrementing it again.
It would be ideal if there was a way to monitor "expiring dates" in Mongo. But in my searches I've only found a TTL (time to live) for automatically deleting entire documents.
However, what I need is the "event" of the expiration... Then I was wondering if it would be possible to capture this event with Change Streams. Then I could use the event to increment "things" again.
Is it possible or not? Or would there be a better way of doing what I want?<issue_comment>username_1: I was able to use Change Streams and TTL to emulate a cronjob.
But, basically, anytime I need to schedule an "event" for a document, when I'm creating the document I also create an event document in parallel. This event document will have as its \_id the same id of the first document.
Also, for this event document I will set a TTL.
When the TTL expires I will capture its "delete" change with Change Streams. And then I'll use the documentKey of the change (since it's the same id as the document I want to trigger) to find the target document in the first collection, and do anything I want with the document.
I'm using Node.js with Express and Mongoose to access MongoDB.
Here is the relevant part to be added in the App.js:
```
const { ReplSet } = require('mongodb-topology-manager');
run().catch(error => console.error(error));
async function run() {
console.log(new Date(), 'start');
const bind_ip = 'localhost';
// Starts a 3-node replica set on ports 31000, 31001, 31002, replica set
// name is "rs0".
const replSet = new ReplSet('mongod', [
{ options: { port: 31000, dbpath: `${__dirname}/data/db/31000`, bind_ip } },
{ options: { port: 31001, dbpath: `${__dirname}/data/db/31001`, bind_ip } },
{ options: { port: 31002, dbpath: `${__dirname}/data/db/31002`, bind_ip } }
], { replSet: 'rs0' });
// Initialize the replica set
await replSet.purge();
await replSet.start();
console.log(new Date(), 'Replica set started...');
// Connect to the replica set
const uri = 'mongodb://localhost:31000,localhost:31001,localhost:31002/' + 'test?replicaSet=rs0';
await mongoose.connect(uri);
var db = mongoose.connection;
db.on('error', console.error.bind(console, 'connection error:'));
db.once('open', function () {
console.log("Connected correctly to server");
});
// To work around "MongoError: cannot open $changeStream for non-existent database: test" for this example
await mongoose.connection.createCollection('test');
// *** we will add our scheduler here *** //
var Item = require('./models/item');
var ItemExpiredEvent = require('./models/scheduledWithin');
let deleteOps = {
$match: {
operationType: "delete"
}
};
ItemExpiredEvent.watch([deleteOps]).
on('change', data => {
// *** treat the event here *** //
console.log(new Date(), data.documentKey);
Item.findById(data.documentKey._id, function(err, item) {
console.log(item);
});
});
// The TTL set in ItemExpiredEvent will trigger the change stream handler above
console.log(new Date(), 'Inserting item');
Item.create({foo:"foo", bar: "bar"}, function(err, item) {
ItemExpiredEvent.create({_id : item._id}, function(err, event) {
if (err) console.log("error: " + err);
console.log('event inserted');
});
});
}
```
And here is the code for model/ScheduledWithin:
```
var mongoose = require('mongoose');
var Schema = mongoose.Schema;
var ScheduledWithin = new Schema({
_id: mongoose.Schema.Types.ObjectId,
}, {timestamps: true});
// timestamps: true will automatically create a "createdAt" Date field
ScheduledWithin.index({createdAt: 1}, {expireAfterSeconds: 90});
module.exports = mongoose.model('ScheduledWithin', ScheduledWithin);
```
Upvotes: 3 [selected_answer]<issue_comment>username_2: Thanks for the detailed code.
I have two partial alternatives, just to give some ideas.
1.
Given we at least get the **\_id** back, if you only need a specific key from your deleted document, you can manually specify **\_id** when you create it and you'll at least have this information.
2. (mongodb 4.0)
A bit more involved, this method is to take advantage of the **oplog history** and open a watch stream at the moment of creation (if you can calculate it), via the **startAtOperationTime** option.
You'll need to check how far back your **oplog history** goes, to see if you can use this method:
<https://docs.mongodb.com/manual/reference/method/rs.printReplicationInfo/#rs.printReplicationInfo>
Note: I'm using the mongodb library, not mongoose
```
// https://mongodb.github.io/node-mongodb-native/api-bson-generated/timestamp.html
const { Timestamp } = require('mongodb');
const MAX_TIME_SPENT_SINCE_CREATION = 1000 * 60 * 10; // 10mn, depends on your situation
const cursor = db.collection('items')
.watch([{
$match: {
operationType: 'delete'
}
}]);
cursor.on('change', function(change) {
// create another cursor, back in time
const subCursor = db.collection('items')
.watch([{
$match: {
operationType: 'insert'
}
}], {
      fullDocument : 'updateLookup',
      // clusterTime is a BSON Timestamp; its high bits hold seconds since the epoch,
      // so step back MAX_TIME_SPENT_SINCE_CREATION (converted to seconds)
      startAtOperationTime: new Timestamp(0, change.clusterTime.getHighBits() - MAX_TIME_SPENT_SINCE_CREATION / 1000)
    });
subCursor.on('change', function(creationChange) {
// filter the insert event, until we find the creation event for our document
    // ObjectIds are objects: compare them with equals(), not ===
    if (creationChange.documentKey._id.equals(change.documentKey._id)) {
console.log('item', JSON.stringify(creationChange.fullDocument, false, 2));
subCursor.close();
}
});
});
```
Upvotes: 1
<issue_start>username_0: I have this database
```
Game(ID,Version,Name,Price,Color,IDDISTRIBUTION,#worker)
Distribution(ID,Name,#worker,quality)
Istitute(ID,Name,NINCeo,City)
Sponsor(IDGAME,IDISTITUTE,VERSIONGAME)
Designer(NIN,name,surname,role,budget)
Project(NINDESIGNER,IDGAME,VERSIONGAME,#hours)
```
(the uppercase names indicate the foreign keys)
I have to write these nested queries in SQL:
* Select the Istitute that sponsored the max number of games (Name, #max\_games)
* Select the Istitute that sponsored the min number of games (Name, #min\_games)
* Select the Designer that took part in at most 10 games (NIN, #game)
(In the parentheses are the columns to select; # is the result of the COUNT query)
Thank you so much for your answers, and sorry for my bad English.<issue_comment>username_1: Hey, I know it can be difficult to get feedback about how to ask questions properly; I've been there! Try writing the question more clearly and format it so it's easy to understand.
With that said, check out window functions! They are really cool and allow you to do fun analytics, such as who had the 3rd most number of games? Try this:
```
with counts as (
select
 i.id
 ,i.name
 ,count(distinct s.idgame) as gamecount
from istitute i
inner join sponsor s
 on s.idistitute = i.id
group by i.id
 ,i.name
)
select
 c.id
 ,c.name
 ,RANK() over (ORDER BY c.gamecount DESC) as rank
from counts c
```
Upvotes: 1 <issue_comment>username_2: I think that I've found the solution:
```
a) SELECT i.Name, COUNT(s.IDGAME) AS max_game
FROM Istitute i
JOIN Sponsor s ON s.IDISTITUTE = i.ID
GROUP BY i.ID, i.Name
HAVING COUNT(s.IDGAME) = (SELECT MAX(cnt)
                          FROM (SELECT COUNT(IDGAME) AS cnt
                                FROM Sponsor
                                GROUP BY IDISTITUTE) t)
```
b) the same as a) but with MIN
```
c) SELECT d.NIN, COUNT(p.IDGAME) AS game
FROM Designer d
JOIN Project p ON p.NINDESIGNER = d.NIN
GROUP BY d.NIN
HAVING COUNT(p.IDGAME) <= 10
```
Upvotes: 0 <issue_comment>username_3: a) To select the Istitute that sponsored the max number of games:
```
;with TempCount as (
select IDISTITUTE,
count(IDGAME) As GameCount
from Sponsor S
Group by IDGAME,IDISTITUTE
)
select Top 1
T.IDISTITUTE,
Count(T.GameCount) As MaxGameCount
from TempCount T
Group by T.IDISTITUTE order by Count(T.GameCount) desc
```
b) To select the Istitute that sponsored the min number of games:
```
;with TempCount as (
select IDISTITUTE,
count(IDGAME) As GameCount
from Sponsor S
Group by IDGAME,IDISTITUTE
)
select Top 1
T.IDISTITUTE,
Count(T.GameCount) As MinGameCount
from TempCount T
Group by T.IDISTITUTE order by Count(T.GameCount) asc
```
Upvotes: 1
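The groupwise-max pattern used in the answers can be sanity-checked against an in-memory database. Below is a minimal sketch using Python's built-in sqlite3; the table shapes follow the question's schema, but the sample rows are made up for the demo:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE Istitute (ID INTEGER PRIMARY KEY, Name TEXT);
    CREATE TABLE Sponsor (IDGAME INTEGER, IDISTITUTE INTEGER, VERSIONGAME INTEGER);
    INSERT INTO Istitute VALUES (1, 'Alpha'), (2, 'Beta');
    INSERT INTO Sponsor VALUES (10, 1, 1), (11, 1, 1), (12, 2, 1);
""")

# Istitute(s) that sponsored the max number of games
rows = conn.execute("""
    SELECT i.Name, COUNT(s.IDGAME) AS n_games
    FROM Istitute i
    JOIN Sponsor s ON s.IDISTITUTE = i.ID
    GROUP BY i.ID, i.Name
    HAVING COUNT(s.IDGAME) = (SELECT MAX(cnt)
                              FROM (SELECT COUNT(IDGAME) AS cnt
                                    FROM Sponsor
                                    GROUP BY IDISTITUTE))
""").fetchall()
print(rows)  # [('Alpha', 2)]
```

Swapping MAX for MIN gives part b) of the question.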
<issue_start>username_0: I am trying to add a column from one pandas data-frame to another pandas data-frame.
Here is data frame 1:
```
print (df.head())
ID Name Gender
1 John Male
2 Denver 0
0
3 Jeff Male
```
Note: Both ID and Name are indexes
Here is the data frame 2:
```
print (df2.head())
ID Appointed
1 R
2
3 SL
```
Note: ID is the index here.
I am trying to add the Appointed column from `df2` to `df1` based on the ID. I tried inserting the column and copying the column from df2, but the Appointed column keeps returning all NaN values. So far I have had no luck; any suggestions would be greatly appreciated.<issue_comment>username_1: Reset the index for both dataframes, then create a column named 'Appointed' in df1 and assign the same column of df2 to it.
After resetting the index, both dataframes have an index beginning from 0. When we assign the column, the values automatically align according to the index, which is a property of pandas dataframes.

```
df1 = df1.reset_index()
df2 = df2.reset_index()
df1['Appointed'] = df2['Appointed']
```
Upvotes: 0 <issue_comment>username_2: If I understand your problem correctly, you should get what you need using this:
```
df1.reset_index().merge(df2.reset_index(), left_on='ID', right_on='ID')
ID Name Gender Appointed
0 1 John Male R
1 2 Denver 0 NaN
2 3 Jeff Male SL
```
Or, as an alternative, as pointed out by Wen, you could use join:
```
df1.join(df2)
Gender Appointed
ID Name
1 John Male R
2 Denver 0 NaN
0 NaN NaN NaN
3 Jeff Male SL
```
Upvotes: 1
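For completeness, here is a self-contained version of the merge approach. The sample frames are rebuilt from the question's printout (Denver's blank Appointed value is represented as an empty string — an assumption, since the question doesn't show the raw data):

```python
import pandas as pd

df1 = pd.DataFrame({"ID": [1, 2, 3],
                    "Name": ["John", "Denver", "Jeff"],
                    "Gender": ["Male", "0", "Male"]}).set_index(["ID", "Name"])
df2 = pd.DataFrame({"ID": [1, 2, 3],
                    "Appointed": ["R", "", "SL"]}).set_index("ID")

# reset_index turns the index levels back into columns so ID can be merged on
merged = df1.reset_index().merge(df2.reset_index(), on="ID")
print(merged["Appointed"].tolist())  # ['R', '', 'SL']
```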
<issue_start>username_0: Hi, I created an ALB listener on 443 and a target group instance on port 7070 (not SSL).
I can access instanceip:7070 without a problem, but with <https://elb-dns-name> I am not able to access it. The instance health check also failed with a 302 code.
The ALB listener is HTTPS and the instance protocol is HTTP.
When I browse with <https://dns-name> it redirects to <http://elb-dns-name>.
To modify the health check settings of a target group using the console
1. Open the Amazon EC2 console at <https://console.aws.amazon.com/ec2/>.
2. On the navigation pane, under LOAD BALANCING, choose Target Groups.
Select the target group.
3. On the Health checks tab, choose Edit.
4. On the Edit target group page, modify the setting `Success Codes` to 302 or as needed, and then choose Save.
[](https://i.stack.imgur.com/rwsjA.png)
Upvotes: 7 [selected_answer]<issue_comment>username_2: add this annotation in your ingress controller it will modify the success code and nodes will be in healthy state.
```
alb.ingress.kubernetes.io/success-codes: 200,404,301,302
```
Upvotes: 2 <issue_comment>username_3: I run into the same issue recently, and as suggested by @SudharsanSivasankaran we have edited the health check settings at the target level.
But we have kept the 200 only status code and instead updated the path to directly hit the page the redirection goes to.
For instance if a website hosted under instance:80 needs the user to be logged on and redirect it to the /login page, all we need to do is add the /login path in the health check.
Upvotes: 2 <issue_comment>username_4: I stuck with the same problem in AWS ALB (Health checks failed with these codes: [302])
Configuration:
* Tomcat 9 servers that are listening on port 80 only
* ALB health check path was set to "/my\_app\_name" expecting to serve health check from the application's root index page.
My configured health page is not expected to do any redirects, but to return HTTP/200 if server is healthy and HTTP/500 if unhealthy.
**The proposed solution just to add HTTP/302 as a success code is absolutely WRONG and misleading**.
It means that the page's internal health check logic isn't run, as HTTP/302 redirect code just shows common ability of the server to respond.
The problem was in Tomcat server itself that in the case of request to "/my\_app\_name" was redirecting with HTTP/302 to "/my\_app\_name/" (**pay attention to the slash at the end**).
So setting health check path to "/my\_app\_name/" fixed the problem, health check logic runs well and HTTP/200 is returned.
Upvotes: 2 <issue_comment>username_5: I had a similar case where I'm offloading TLS on the ELB and then sending traffic to port 80 with plain HTTP. I'm always getting the 302 code from the ELB.
You can change the status code for the target group and specify the success code as 302, but I don't think that is a very good idea. Since you may encounter a different status code if you changed some configuration in your Apache or htaccess files which may cause your instance to put out of service. The goal of Health Check is identify faulty servers and remove them from the production environment.
This solution worked great for me: <https://stackoverflow.com/a/48140513/14033386>
Cited below with more explanation:
Enable the mod\_rewrite module. In most Linux distros it's enabled by default when you install Apache. But check for it anyway. Check this: <https://stackoverflow.com/a/5758551/14033386>
```
LoadModule rewrite_module modules/mod_rewrite.so
```
and then add the following to your virtual host.
```
ErrorDocument 200 "ok"
RewriteEngine On
RewriteRule "/AWS-HEALTH-CHECK-URL" - [R=200]
```
AWS-HEALTH-CHECK-URL is the one you specify in the health check settings.

This solution will always return a 200 code for that specific URL as long as your server is active and serving requests.
Upvotes: 0 <issue_comment>username_6: In my case I had a domain `www.domain.com`
but by default, when you access the domain and you are not logged in, you are immediately redirected to `www.domain.com/login`
... and that is what caused the problem.
So you have 2 options:
1. Go to your AWS target group -> health check and change your default path `/` to a new one, which in my case was `/login`. I'm fairly sure that if the login endpoint works, the website works too.
2. Go to your AWS target group -> health check and change your default status code from `200` to `200,302`. It is a less appropriate way but still acceptable, depending on the case.
Upvotes: 1
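The behaviour behind all of these answers — the health checker records the redirect status itself rather than following it — can be reproduced locally. A small sketch with Python's standard library; the `/login` path mirrors the examples above:

```python
import http.server
import threading
import urllib.error
import urllib.request

class Handler(http.server.BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path == "/":
            self.send_response(302)          # anonymous users get redirected...
            self.send_header("Location", "/login")
        else:
            self.send_response(200)          # ...but /login itself answers 200
        self.end_headers()

    def log_message(self, *args):            # keep the demo quiet
        pass

server = http.server.HTTPServer(("127.0.0.1", 0), Handler)
threading.Thread(target=server.serve_forever, daemon=True).start()
port = server.server_address[1]

class NoRedirect(urllib.request.HTTPRedirectHandler):
    def redirect_request(self, *args, **kwargs):
        return None                          # don't follow, like the health checker

opener = urllib.request.build_opener(NoRedirect)
try:
    code_root = opener.open(f"http://127.0.0.1:{port}/").getcode()
except urllib.error.HTTPError as e:
    code_root = e.code                       # the 302 surfaces as an HTTPError
code_login = opener.open(f"http://127.0.0.1:{port}/login").getcode()
print(code_root, code_login)                 # 302 200
server.shutdown()
```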
<issue_start>username_0: I am trying to use a property from a template in Angular. It doesn't work and I can't figure out why. I've set up an example here:
<https://github.com/Fulkerson/angular-property>
It is the example from the [Tour of Heroes](https://angular.io/tutorial/toh-pt6). My [commit](https://github.com/Fulkerson/angular-property/commit/c3880dac1a23717d39daa787606925d12ef0185b) adds the property `propname` which simply returns `name` and uses it in the dashboard (main page).
Basically what I want to do is add multiple properties that process a field in different ways. But why is it that I cannot use the properties from templates, and how can I accomplish what I want in another way?<issue_comment>username_1: The data returned by the heroService.getHeroes method is plain JSON objects (without the getter you defined in the Hero class). If you want to use the getter defined in the class you have to return an Array of real `Hero` instances.
1/ We add a constructor for the Hero class in app/hero.ts for convenience
```
constructor ({id,name} = {id: undefined, name: undefined}) {
this.id = id;
this.name = name;
}
```
2/ Instead of returning plain js data from heroService we map to obtain an array of Hero instance in dashboard.component.ts
```
getHeroes(): void {
this.heroService.getHeroes()
.subscribe(heroes =>
this.heroes = heroes
.slice(1, 5)
.map(hero => new Hero(hero))
);
}
```
And Tadaa !
[Here is a StackBlitz created from your GitHub repo](https://stackblitz.com/github/yqnimklb)
Upvotes: 3 [selected_answer]<issue_comment>username_2: Well, your problem is that, though you're adding a new getter to the `Hero` class, in the dashboard you're listing the heroes coming from the service. The problem is that the service is mocking data, and that data is, in truth, a simple array of objects (defined in `in-memory-data.service.ts`):
```
const heroes = [
{ id: 11, name: '<NAME>' },
{ id: 12, name: 'Narco' },
{ id: 13, name: 'Bombasto' },
{ id: 14, name: 'Celeritas' },
{ id: 15, name: 'Magneta' },
{ id: 16, name: 'RubberMan' },
{ id: 17, name: 'Dynama' },
{ id: 18, name: '<NAME>' },
{ id: 19, name: 'Magma' },
{ id: 20, name: 'Tornado' }
];
```
In the original example this does not matter as the `Hero` class is used there basically as an interface or type annotation. You, however, are giving that class methods, but those methods do not exist in the array.
So, basically, your template is trying to access a `propname` property from the items of this array. That property, as you see, is `undefined`. As a result, you don't see the name in the page. To avoid this, just change the definition of the array:
```
const heroes = [
Object.assign(new Hero(), { id: 11, name: '<NAME>' }),
...
];
```
(you can also give `Hero` a constructor with parameters and use it instead of `Object.assign`). This way you'll get an array of true `Hero` objects.
Upvotes: 2
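The pitfall in this thread is not Angular-specific: deserialized JSON gives you plain data objects, not class instances, so class-level accessors are missing until you construct real instances. A Python sketch of the same idea (the `Hero`/`propname` names mirror the question, not any real API):

```python
import json

class Hero:
    def __init__(self, id=None, name=None):
        self.id = id
        self.name = name

    @property
    def propname(self):          # the accessor lives on the class...
        return self.name

raw = json.loads('[{"id": 11, "name": "Mr. Nice"}]')
assert "propname" not in raw[0]  # ...so plain dicts know nothing about it

heroes = [Hero(**h) for h in raw]
print(heroes[0].propname)        # Mr. Nice
```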
<issue_start>username_0: I was programming myself a pretty nice API to get some JSON data from my game server to my webspace,
but every time I send a request using Angular I get this:
>
> 127.0.0.1 - - [20/Mar/2018 17:07:33] code 400, message Bad request version
> ("▒\x9c▒▒{▒'\x12\x99▒▒▒\xadH\x00\x00\x14▒+▒/▒,▒0▒\x13▒\x14\x00/\x005\x00")
> 127.0.0.1 - - [20/Mar/2018 17:07:33] "▒\x9dtTc▒\x93▒4▒M▒▒▒▒▒\x9c▒▒{▒'\x99▒▒▒▒H▒+▒/▒,▒0▒▒/5"
> HTTPStatus.BAD\_REQUEST -
> 127.0.0.1 - - [20/Mar/2018 17:07:33] code 400, message Bad request syntax
> ('\x16\x03\x01\x00▒\x01\x00\x00\x9d\x03\x03▒k,&▒▒ua\x8c\x82\x17\x05▒QwQ$▒0▒▒\x9f▒B1\x98\x19W▒▒▒▒\x00\x00\x14▒+▒/▒,▒0▒\x13▒\x14\x00/\x005\x00')
> 127.0.0.1 - - [20/Mar/2018 17:07:33] "▒\x9d▒k,&▒▒ua\x8c\x82▒QwQ$▒0▒▒\x9f▒B1\x98W▒▒▒▒▒+▒/▒,▒0▒▒/5"
> HTTPStatus.BAD\_REQUEST -
> 127.0.0.1 - - [20/Mar/2018 17:07:33] code 400, message Bad request syntax
> ('\x16\x03\x01\x00▒\x01\x00\x00▒\x03\x03)▒▒\x1e\xa0▒\t\r\x14g%▒▒\x17▒▒\x80\x8d}▒F▒▒\x08U▒ġ▒▒\x06▒\x00\x00\x1c▒+▒/▒,▒0▒')
> g%▒▒▒▒\x80\x8d}▒F▒U▒ġ▒▒▒▒+▒/▒,▒0▒" HTTPStatus.BAD\_REQUEST -
>
>
>
My api
```html
from flask import Flask, jsonify
from flaskext.mysql import MySQL
from flask_cors import CORS, cross_origin
app = Flask(__name__)
CORS(app)
app.config['CORS_HEADERS'] = 'Content-Type'
cors = CORS(app, resources={r"/punishments": {"origins": "http://localhost:5000" "*"}})
mysql = MySQL()
# MySQL configurations
app.config['MYSQL_DATABASE_USER'] = 'test'
app.config['MYSQL_DATABASE_PASSWORD'] = '<PASSWORD>'
app.config['MYSQL_DATABASE_DB'] = 'test'
app.config['MYSQL_DATABASE_HOST'] = 'localhost'
mysql.init_app(app)
@app.route('/punishments', methods=['GET'])
@cross_origin(origin='localhost:5000',headers=['Content- Type','Authorization'])
def get():
cur = mysql.connect().cursor()
cur.execute('''select * from test.punishments''')
r = [dict((cur.description[i][0], value)
for i, value in enumerate(row)) for row in cur.fetchall()]
return jsonify({'punishments' : r})
if __name__ == '__main__':
app.run()
```
My client function
```html
export class ApiUserService {
private _postsURL = "https://localhost:5000/punishments";
constructor(private http: HttpClient) {
}
getPosts(): Observable {
let headers = new HttpHeaders();
headers = headers.set('Content-Type', 'application/json; charset=utf-8');
return this.http
.get(this.\_postsURL,{
headers: {'Content-Type':'application/json; charset=utf-8'}
})
.map((response: Response) => {
return response.json();
})
.catch(this.handleError);
}
private handleError(error: Response) {
return Observable.throw(error.statusText);
}
}
```<issue_comment>username_1: I had the same error as yours.
My Flask server was installed on a `raspberry-pi` and I was trying to access it using `https://ip:5000`.
The problem was I was using `https` instead of `http`.
When I changed it to `http://ip:5000`, it worked.
Upvotes: 7 <issue_comment>username_2: I also faced same problem
use only http not https :-
```
http://ip:portnumber
```
Upvotes: 4 <issue_comment>username_3: Recently I also faced this, the problem came from the SSL config on my app.
In my **.env** I had set `SSL_DISABLE` to `False`, then I changed it to `True`.
change `SSL_DISABLE=False` to `SSL_DISABLE=True`
So, the point here is: check your URL. Maybe it is something like `https://127.0.0.1:5000`; just change it to `http://127.0.0.1:5000`.
Hope it helps someone who is also facing this issue in the future.
Upvotes: 3 <issue_comment>username_4: In my case, I was trying to debug a Socket.IO server running on Flask. I was trying to access the server using **`wss://`**, which was causing the bad request. Changing it to **`ws://`** resolved the issue.
Upvotes: 0 <issue_comment>username_5: In my case I resolved the problem by reverting the Flask version to Flask==1.1.4 and the Jinja dependency to Jinja2==3.0.3.
Upvotes: 0
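A note on the unreadable log lines in the question: they are a TLS ClientHello arriving at a plain-HTTP socket, which is exactly why switching the URL scheme from `https://` to `http://` fixes the error. The leading `\x16\x03\x01` bytes are the TLS record header, which a quick sketch can decode (the length byte below is an approximation, since it is masked in the log):

```python
# First bytes of the "garbage" request from the Flask log
first_bytes = b"\x16\x03\x01\x00\xdc\x01\x00"

record_type = first_bytes[0]                # 0x16 = handshake record
version = (first_bytes[1], first_bytes[2])  # (3, 1) = TLS record version 3.1
handshake_type = first_bytes[5]             # 0x01 = ClientHello

print(record_type == 0x16, version, handshake_type == 0x01)  # True (3, 1) True
```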
<issue_start>username_0: ```
def Delete_con():
contact_to_delete= input("choose name to delete from contact")
to_Delete=list(contact_to_delete)
with open("phonebook1.txt", "r+") as file:
content = file.read()
for line in content:
if not any(line in line for line in to_Delete):
content.write(line)
```
I get zero error, but the line is not deleted. This function asks the user what name he or she wants to delete from the text file.
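For reference, the intended behaviour — drop the lines belonging to the chosen name and keep the rest — can be written as a short sketch. The file name and the line format (name followed by a number) are assumptions, since the question doesn't show the phonebook contents:

```python
import os
import tempfile

def delete_contact(path, name):
    """Rewrite the phonebook, keeping every line that doesn't start with `name`."""
    with open(path, "r") as f:
        lines = f.readlines()
    with open(path, "w") as f:
        for line in lines:
            if not line.startswith(name):
                f.write(line)

# demo with a throwaway file
fd, path = tempfile.mkstemp()
os.close(fd)
with open(path, "w") as f:
    f.write("alice 123\nbob 456\n")

delete_contact(path, "bob")
with open(path) as f:
    remaining = f.read()
os.remove(path)
print(remaining)  # alice 123
```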
<issue_start>username_0: **Database**
Db2
**Scenario**
I have a column that needs to be true or false. I have found two sources that point to how to achieve this; however, when I bring them together I get an error.
1. [Boolean values](https://www.ibm.com/support/knowledgecenter/en/SSEPGG_11.1.0/com.ibm.db2.luw.sql.ref.doc/doc/r0055394.html)
2. [Casting between data types](https://www.ibm.com/support/knowledgecenter/en/SSEPGG_9.7.0/com.ibm.db2.luw.sql.ref.doc/doc/r0008478.html)
**Current Solution**
```
CREATE TABLE USERS
(
ID INT NOT NULL,
.
.
.
IS_LOCKED SMALLINT NOT NULL WITH DEFAULT 0,
PRIMARY KEY(ID)
);
```
`SELECT U.ID, CAST(U.IS_LOCKED AS BOOLEAN) as IS_LOCKED FROM USERS U`
Error: `A value with data type "SYSIBM.SMALLINT" cannot be CAST to type "SYSIBM.BOOLEAN"`
**Question**
How can I use BOOLEANs in Db2?<issue_comment>username_1: In other databases, you could be explicit:
```
select (case when u.isLocked = 0 then FALSE else TRUE end)
```
But DB2 [explicitly](https://www.ibm.com/support/knowledgecenter/en/SSEPGG_9.7.0/com.ibm.db2.luw.apdv.sqlpl.doc/doc/c0053651.html) only supports boolean in programming code:
> * The Boolean data type cannot be returned in a result set.
Upvotes: -1 <issue_comment>username_2: Db2 V11.1 on Linux/Unix/Windows supports BOOLEAN as a column data type and such columns can be returned in a result set. Here is an example using the command-line-processor (at the bash shell):
```
create table mytable( id integer, mybool boolean with default true )
DB20000I The SQL command completed successfully.
insert into mytable(id, mybool) values (1, false), (2, true), (3, false)
DB20000I The SQL command completed successfully.
select id,mybool from mytable order by 1
ID MYBOOL
----------- ------
1 0
2 1
3 0
3 record(s) selected.
```
However, while plain DDL and the CLP for SQL DML support boolean, consider the impact on the applications of using the Db2 column-datatype boolean. Check how PHP , Python, Java, .net, etc can manipulate this datatype according to whatever languages are used to access your databases.
**Tip**: when asking for help about Db2, it is wise to always mention your Db2-version and the operating-system that runs the Db2-server (i.e. z/os , iSeries, linux/unix/windows) and tag your question accordingly.
Upvotes: 2 <issue_comment>username_3: You give a link to the Db2 11.1 manuals, and a second link to the DB2 9.7 manuals. Hereby lies your answer. BOOLEAN is supported as a column data type in Db2 192.168.127.12, but not in DB2 9.7.
See the enhancements listed for Mod Pack 1 Fix Pack 1 of Db2 11.1 here <https://www.ibm.com/support/knowledgecenter/en/SSEPGG_11.1.0/com.ibm.db2.luw.wn.doc/doc/c0061179.html#c0061179__FP1> where it says
> The BOOLEAN data type can now be used for table columns and for expression results. This enhances compatibility with other relational database management systems
Upvotes: 2
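The portable fallback on engines (or Db2 levels) without a result-set BOOLEAN is the CASE mapping from the first answer. A quick sketch with Python's sqlite3, mirroring the question's IS_LOCKED column — Db2 itself is not involved here:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY,"
             " is_locked SMALLINT NOT NULL DEFAULT 0)")
conn.executemany("INSERT INTO users VALUES (?, ?)", [(1, 0), (2, 1)])

rows = conn.execute("""
    SELECT id,
           CASE WHEN is_locked = 0 THEN 'false' ELSE 'true' END AS is_locked
    FROM users ORDER BY id
""").fetchall()
print(rows)  # [(1, 'false'), (2, 'true')]
```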
<issue_start>username_0: I need to build a nested dictionary based on a configuration file.
I have:
```
Chain1 {
Chain1_value1
Chain1_Chain2 {
Chain1_Chain2_value1
Chain1_Chain2_Chain3 {
Chain1_Chain2_Chain3_value1
Chain1_Chain2_Chain3_value2
Chain1_Chain2_Chain3_value3
}
Chain1_Chain2_Chain4 {
Chain1_Chain2_Chain4_value1
Chain1_Chain2_Chain4_value2
Chain1_Chain2_Chain4_value3
}
}
}
```
"XXX {" => Open a block named XXX
"}" => Close the current block
If no "{" or "}" => It's a value of the block.
and I need this :
```
{'Chain1 {': {'Chain1_Chain2 {': {'Chain1_Chain2_Chain3 {': {'Chain1_Chain2_Chain3_value1': '',
'Chain1_Chain2_Chain3_value2': '',
'Chain1_Chain2_Chain3_value3': '',}}
'Chain1_Chain2_Chain4 {': {'Chain1_Chain2_Chain4_value1': '',
'Chain1_Chain2_Chain4_value2': '',
'Chain1_Chain2_Chain4_value3': ''}},
'Chain1_Chain2_value1': ''},
'Chain1_value1': ''}}
```
But currently I have this :
```
{'Chain1 {': {'Chain1_Chain2 {': {'Chain1_Chain2_Chain3 {': {'Chain1_Chain2_Chain3_value1': '',
'Chain1_Chain2_Chain3_value2': '',
'Chain1_Chain2_Chain3_value3': '',
'Chain1_Chain2_Chain4 {': {'Chain1_Chain2_Chain4_value1': '',
'Chain1_Chain2_Chain4_value2': '',
'Chain1_Chain2_Chain4_value3': ''}},
'Chain1_Chain2_value1': ''},
'Chain1_value1': ''}}
```
My code :
```
import re
import pprint
a = []
a.append("Chain1 {")
a.append(" Chain1_value1")
a.append(" Chain1_Chain2 {")
a.append(" Chain1_Chain2_value1")
a.append(" Chain1_Chain2_Chain3 {")
a.append(" Chain1_Chain2_Chain3_value1")
a.append(" Chain1_Chain2_Chain3_value2")
a.append(" Chain1_Chain2_Chain3_value3")
a.append(" }")
a.append(" Chain1_Chain2_Chain4 {")
a.append(" Chain1_Chain2_Chain4_value1")
a.append(" Chain1_Chain2_Chain4_value2")
a.append(" Chain1_Chain2_Chain4_value3")
a.append(" }")
a.append(" }")
a.append("}")
for l in a:
print l
dict_test = {}
current_dict = dict_test
for line in a:
line = line.strip()
if re.search(r"{$", line):
current_dict[line] = {}
current_dict = current_dict[line]
elif re.search(r'}$', line):
pass
else:
current_dict[line] = ""
pprint.pprint(dict_test)
```
I think it is missing something here: elif re.search(r'}$', line),
but I don't know what.
Could you help me ? :)<issue_comment>username_1: You are not tracking the history of the dict. I have added previous\_dict and reverted the dict to last value when '}' is present. Please find solution below. Hopefully it helps.
```
import re
import pprint
def get_data() :
a = []
a.append("Chain1 {")
a.append(" Chain1_value1")
a.append(" Chain1_Chain2 {")
a.append(" Chain1_Chain2_value1")
a.append(" Chain1_Chain2_Chain3 {")
a.append(" Chain1_Chain2_Chain3_value1")
a.append(" Chain1_Chain2_Chain3_value2")
a.append(" Chain1_Chain2_Chain3_value3")
a.append(" }")
a.append(" Chain1_Chain2_Chain4 {")
a.append(" Chain1_Chain2_Chain4_value1")
a.append(" Chain1_Chain2_Chain4_value2")
a.append(" Chain1_Chain2_Chain4_value3")
a.append(" }")
a.append(" }")
a.append("}")
return a
def get_dict(input_chain):
dict_test = {}
current_dict = dict_test
previous_dict = dict_test #this was missing
for line in input_chain:
line = line.strip()
if re.search(r"{$", line):
previous_dict = current_dict
current_dict[line] = {}
current_dict = current_dict[line]
elif re.search(r'}$', line):
current_dict = previous_dict #revert to last dict at the end of the block
else:
current_dict[line] = ""
return dict_test
def main():
a = get_data()
b = get_dict(a)
pprint.pprint(b)
if __name__ == '__main__':
main()
```
**Output**
```
{'Chain1 {': {'Chain1_Chain2 {': {'Chain1_Chain2_Chain3 {': {'Chain1_Chain2_Chain3_value1': '',
'Chain1_Chain2_Chain3_value2': '',
'Chain1_Chain2_Chain3_value3': ''},
'Chain1_Chain2_Chain4 {': {'Chain1_Chain2_Chain4_value1': '',
'Chain1_Chain2_Chain4_value2': '',
'Chain1_Chain2_Chain4_value3': ''},
'Chain1_Chain2_value1': ''},
'Chain1_value1': ''}}
```
Upvotes: 0 <issue_comment>username_2: You can create a class with a `parse` method:
```
import re
# `s` holds the configuration text shown in the question
new_s = iter([re.sub(r'^\s+', '', i) for i in filter(None,
                                                     s.split('\n'))])
from collections import defaultdict
class DictObject:
def __init__(self, d):
self.d = d
self.current_dict = {}
self.parse()
def parse(self):
while True:
current = next(self.d, None)
if not current:
break
if current.endswith('{'):
second_piece = DictObject(self.d)
self.current_dict[current] = second_piece.current_dict
self.d = second_piece.d
elif current == '}':
break
else:
self.current_dict[current] = ''
print(self.current_dict)
c = DictObject(new_s)
print(c.current_dict)
```
Output:
```
{'Chain1 {': {'Chain1_Chain2 {': {'Chain1_Chain2_Chain3 {': {'Chain1_Chain2_Chain3_value1': '',
'Chain1_Chain2_Chain3_value2': '',
'Chain1_Chain2_Chain3_value3': ''},
'Chain1_Chain2_Chain4 {': {'Chain1_Chain2_Chain4_value1': '',
'Chain1_Chain2_Chain4_value2': '',
'Chain1_Chain2_Chain4_value3': ''},
'Chain1_Chain2_value1': ''},
'Chain1_value1': ''}}
```
Upvotes: 2 [selected_answer]
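One caveat about the first answer: a single `previous_dict` only remembers one level, so closing two blocks in a row can revert to the wrong dict (it happens to produce the right output for this input because nothing is added after the final closes). For arbitrary nesting, a stack of dicts is the robust version; a sketch:

```python
import pprint

def parse_blocks(lines):
    root = {}
    stack = [root]               # stack[-1] is the dict currently being filled
    for raw in lines:
        line = raw.strip()
        if line.endswith("{"):
            child = {}
            stack[-1][line] = child
            stack.append(child)  # descend into the new block
        elif line.endswith("}"):
            stack.pop()          # climb back to the parent block
        else:
            stack[-1][line] = ""
    return root

config = [
    "Chain1 {",
    "  Chain1_value1",
    "  Chain1_Chain2 {",
    "    Chain1_Chain2_value1",
    "  }",
    "}",
]
pprint.pprint(parse_blocks(config))
```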
<issue_start>username_0: I want to create a pyspark dataframe with one column of specified name containing a range of integers (this is to feed into the ALS model's recommendForUserSubset method).
So I've created a list of integers using range, and found [this question](https://stackoverflow.com/questions/35001229/create-dataframe-from-list-of-tuples-using-pyspark) showing how to make a list into a dataframe using SQLContext. But since I'm using the SparkSession API, not the older one, I don't know how to create a SQLContext--and per the [documentation](http://spark.apache.org/docs/2.1.0/api/python/pyspark.sql.html) on SQLContext I probably shouldn't have to: "As of Spark 2.0, this is replaced by SparkSession. However, we are keeping the class here for backward compatibility."
This is the code I'm trying to replace:
```
dataset = sqlContext.createDataFrame(range(i, i+1000), ['session_idx'])
```
I also don't know if I should be mapping the list of integers to a list of tuples or if it'll "just work".<issue_comment>username_1: In this case, the same method (spark.createDataFrame) exists on SparkSession.
However, for the specific use case of getting a range column, there's also a dedicated method for that:
```
dataset = spark.range(i, i + 1000)
dataset = dataset.withColumnRenamed('id', 'user_idx')
```
Upvotes: 2 <issue_comment>username_2: ```
i=0
dataset = sqlContext.createDataFrame(range(i, i+10), ['session_idx'])
```
would **certainly give you errors**
> TypeError: Can not infer schema for type:
The error is because `range(i, i+10)` has to be one of the following, as explained in the API documentation:
> data: an RDD of any kind of SQL data representation (e.g. row, tuple, int, boolean, etc.), or :class:`list`, or :class:`pandas.DataFrame`
**Solution**
Just wrap `range(i, i+10)` in **a list**, as
```
i=0
dataset = sqlContext.createDataFrame([range(i, i+10)], ['session_idx'])
```
which would give you
```
+-----------+---+---+---+---+---+---+---+---+---+
|session_idx|_2 |_3 |_4 |_5 |_6 |_7 |_8 |_9 |_10|
+-----------+---+---+---+---+---+---+---+---+---+
|0 |1 |2 |3 |4 |5 |6 |7 |8 |9 |
+-----------+---+---+---+---+---+---+---+---+---+
```
Or use **a list of lists**, as
```
i=0
dataset = sqlContext.createDataFrame([[range(i, i+10),]], ['session_idx'])
```
which would generate
```
+------------------------------+
|session_idx |
+------------------------------+
|[0, 1, 2, 3, 4, 5, 6, 7, 8, 9]|
+------------------------------+
```
I hope the answer is helpful
Upvotes: 0
<issue_start>username_0: I am not sure how to get my two-hop neighbors correctly. It's almost correct, but in my output I don't want to include the same vertex. Right now, for vertex 0 the output says "vertex 0: 0...".
I want to skip the vertex it is currently looking at.
Please help me: is my code for the two-hop part wrong?
This is my code:
```
#include
#include
#include
#include
#define M 20
#define N 20
int main()
{
int i, j, x, a, b;
int G[20][20] = { { 0 } };
/\*create random adjacency matrix\*/
printf("==================================================\n");
printf("Welcome to my Graph Processing tool!\n\n");
srand(time(NULL));
for (i = 0; i < M; i++) {
for (j = 0; j < N; j++) {
if (i == j) {
G[i][j] = 0;
}
else {
G[i][j] = rand() % 2;
G[j][i] = G[i][j];
}
}
}
/\*check whether the whole row equals to 0\*/
for (j = 0; j < N; j++) {
if (G[j] == 0) {
x = rand() % 20 + 1;
G[x][j] = G[j][x] = 1;
}
/\*print the matrix G\*/
else
{
printf("The adjacency for graph G is\n");
for (i = 0; i < M; i++) {
for (j = 0; j < N; j++) {
printf("%d ", G[i][j]);
}
printf("\n");
}
}
}
/\*all one-hop neighbors\*/
printf("\nList of one-hop neighbors:");
for (i = 0; i < M; i++) {
printf("\nVertex %d: ", i);
for (j = 0; j < N; j++) {
if (G[i][j] == 1) {
printf("%d ", j);
}
}
}
printf("\n===================================\n\n");
/\*two-hop neighbors\*/
for (i = 0; i < M; i++) {
printf("\nVertex %d: ", i);
for (j = 0; j < N; j++) {
if (G[i][j] == 0) {
printf("%d ", j);
}
}
}
}
printf("\n============================================\n");
system("pause");
return 0;
}
```
This is my output:
[One hop](https://i.stack.imgur.com/3L3E6.png)
[Two hop](https://i.stack.imgur.com/af18a.png)<issue_comment>username_1: Couple things to note here.
Be more descriptive with your variable naming; it would have made this a lot easier to read.
M-ROWS, N-COLS, G-Graph
When you loop through each row, you initialize j to 0. This includes the vertex that you are wanting to leave out.
`for (j = 1; j < N; j++)`
Upvotes: 1 [selected_answer]<issue_comment>username_2: The answer provided by @username_1 only works for node 0. To look only at nodes other than the current one, you need to do:
```
for (j = 0; j < N; j++) {
if (i != j && G[i][j] == 0) {
printf("%d ", j);
}
}
```
Furthermore, you are assuming that all nodes without an edge are two-hop neighbors. This is not correct. One way to calculate the actual two-hop neighbors would be ((A + I)^2 > 0) - ((A + I) > 0), where I is the identity matrix.
Also, you can code this via a three-layer loop:
```
int node, neighbor, neighbor2;
for (node = 0; node < N; node++) {
printf("\nVertex %d: ", node);
for (neighbor = 0; neighbor < N; neighbor++) {
if (G[node][neighbor] == 1) {
for (neighbor2 = 0; neighbor2 < N; neighbor2++) {
if (node != neighbor2 && G[neighbor][neighbor2] == 1) {
printf("%d ", neighbor2);
}
}
}
}
}
```
Note that per definition M=N, so I've just used N. Also, this might print some 2-hop neighbors twice. You might want to do some filtering before printing.
Upvotes: 1
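The matrix identity from the last answer — ((A + I)^2 > 0) - ((A + I) > 0) — can be checked with a small language-independent sketch (pure Python; the path graph 0-1-2-3 is made-up test data):

```python
def two_hop(adj):
    n = len(adj)
    # B = A + I (treat each node as reachable from itself)
    b = [[adj[i][j] | (i == j) for j in range(n)] for i in range(n)]
    # C = B^2: C[i][j] > 0  <=>  j is reachable from i in at most 2 hops
    c = [[sum(b[i][k] * b[k][j] for k in range(n)) for j in range(n)]
         for i in range(n)]
    # two-hop = reachable in <=2 hops but NOT in <=1 hop (and not the node itself)
    return [[int(c[i][j] > 0 and b[i][j] == 0) for j in range(n)]
            for i in range(n)]

# path graph 0-1-2-3: the only two-hop pairs are (0,2) and (1,3)
A = [[0, 1, 0, 0],
     [1, 0, 1, 0],
     [0, 1, 0, 1],
     [0, 0, 1, 0]]
print(two_hop(A))
```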
<issue_start>username_0: I'm having a strange issue adding the "ExecuteDelete(index)" function to an onclick attribute. The basic logic here is that when a user clicks delete, it triggers the Remove(index) function, which shows a modal window and adds an onclick attribute to the Confirm Delete button on the modal. That confirm delete button's onclick is supposed to execute after the Confirm Delete button has been clicked in the modal window.
This will work and the alert will be displayed after the user clicks the confirm delete button...
```
function Remove(index){
//set delete food modal message.
$("#DeleteFoodMessage").text("Are you sure you want to delete " + $("#FoodName-" + index).val());
//show modal.
$("#ConfirmDelete").modal();
//add onclick= event to the modal to delete the element at 'index'.
document.getElementById('ExecuteDeleteButton').onclick = ExecuteDelete;
}
function ExecuteDelete(index){
alert("This works");
}
```
However, when trying to pass in the parameter for the index, so that ExecuteDelete() knows what it's about to delete, the alert is called when the Remove() function is called.
```
function Remove(index){
//set delete food modal message.
$("#DeleteFoodMessage").text("Are you sure you want to delete " + $("#FoodName-" + index).val());
//show modal.
$("#ConfirmDelete").modal();
//add onclick= event to the modal to delete the element at 'index'.
document.getElementById('ExecuteDeleteButton').onclick = ExecuteDelete(index);
}
function ExecuteDelete(index){
alert("This does not work");
}
```
Thanks for the help.
-Andrew<issue_comment>username_1: Instead of the `...onclick = ExecuteDelete(index);` you should use jquery bind like
```
$('#ExecuteDeleteButton')
.off('click')
.on('click', function() {
ExecuteDelete(index);
}
);
```
Don't forget the `off()` to unbind from your reused delete button!
Upvotes: 2 [selected_answer]<issue_comment>username_2: **You're executing the function right at the assignment.**
Bind the `click` event using **[`addEventListener`](https://developer.mozilla.org/en-US/docs/Web/API/EventTarget/addEventListener)**:
```
document.getElementById('ExecuteDeleteButton').addEventListener('click',
  function() {
    ExecuteDelete(index);
  }
);
```
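The same trap can be shown without any DOM (`greet` and `handlers` here are illustrative stand-ins for `ExecuteDelete` and the button, not part of the original code):

```javascript
function greet(name) { return 'hi ' + name; }

const handlers = {};
// Calling the function: greet runs NOW, and its return value is stored.
handlers.onclick = greet('bob');
// Wrapping it: a function is stored, and greet runs only when it is invoked.
handlers.onclick2 = () => greet('bob');
```

This is why `onclick = ExecuteDelete(index)` fires immediately, while `onclick = () => ExecuteDelete(index)` waits for the click.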
Upvotes: 0 |
2018/03/20 | 417 | 1,705 | <issue_start>username_0: I know how to set up a java GUI (with swing) and use components, actionlisteners etc. What I am not sure about, is how I should pass informtation input to my gui (in a field, for instance) to classes in my application so that they can process the information.
For instance, I set up a GUI to take a filename in. I want to pass the filename on to a reader class that I have set up, so that it can read the data in that file and make it available to other classes for processing. The simplest way I know to do this, is to store the filename in a field within my gui class, and provide a get() method for that field. Then, in the main() method I use to create the GUI, I can call the get() method and then pass it along to whatever else needs it. Is this a reasonable approach, or is there some other better way? I think there may be more advanced ways to launch the GUI, and if so, I am not sure how to use them and still be able to pass the filename along. |
2018/03/20 | 559 | 1,772 | <issue_start>username_0: I have four tables:
1) categories
```
id | name
------------------
1 | category 1
2 | category 2
3 | category 3
4 | category 4
5 | category 5
6 | category 6
```
2) countries
```
id | name
------------------
1 | country 1
2 | country 2
3 | country 3
4 | country 4
5 | country 5
6 | country 6
```
3) users
```
id | name | country_id
-------------------------------
1 | <NAME> 1 | 1
2 | <NAME> 2 | 2
3 | <NAME> 3 | 3
4 | <NAME> 4 | 4
5 | <NAME> 5 | 4
6 | <NAME> 6 | 5
```
4) users\_categories
```
id | category_id | user_id
------------------------------
1 | 1 |2
2 | 1 |3
3 | 2 |4
```
What I need to know is which categories don't have any user in some country.
My starting point was getting the categories with no users, but I'm stuck on getting the list by country:
```
SELECT * FROM categories WHERE id NOT IN (SELECT id FROM users_categories);
```
Any help ? |
2018/03/20 | 314 | 1,247 | <issue_start>username_0: I get an infinite loop of @@redux-form/INITIALIZE messages when I try to initialise the value of a fabric ui datepicker field
```
function mapStateToProps(state) {
const { bookingform } = state;
return {
initialValues: { date: new Date()},
bookingform
};
}
```
If I replace `new Date()` with `""` then there is no loop, but then there is no initialisation either.
React Newb
Update: `new Date()` generates a different value each time it is called. Is that upsetting redux-form in some way? I have worked around the problem by setting the default directly in the Fabric UI component for the time being.<issue_comment>username_1: `mapStateToProps` is called every time the store updates, so if you return a fresh `new Date()` on each call, your connected component will predictably re-render every time.
Move `new Date()` to a variable outside the function and then use it in `mapStateToProps`.
```
const now = new Date();
function mapStateToProps(state) {
const { bookingform } = state;
return {
initialValues: { date: now },
bookingform
};
}
```
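Why this matters, sketched without React (the `shallowEqual` below is a simplified stand-in for react-redux's check, not its real implementation):

```javascript
// Simplified shallow prop comparison, as connect() performs between renders.
const shallowEqual = (x, y) => Object.keys(x).every((k) => x[k] === y[k]);

// A fresh Date object per call: the shallow check always fails.
const makeProps = () => ({ date: new Date() });
const unstable = !shallowEqual(makeProps(), makeProps()); // re-renders every time

// Hoisting the value makes the prop referentially stable.
const now = new Date();
const makeStableProps = () => ({ date: now });
const stable = shallowEqual(makeStableProps(), makeStableProps()); // no extra re-render
```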
Upvotes: 3 [selected_answer]<issue_comment>username_2: ```
export default reduxForm({
  form: 'example',
  enableReinitialize: false // keep changed initialValues from reinitializing the form
})(Example);
```
Upvotes: 1 |
2018/03/20 | 835 | 2,489 | <issue_start>username_0: I am trying to create a table using kable/kableextra without showing the horizontal lines in the table except for the first row which is the row names.
```
```
{r echo=FALSE}
library(knitr)
library(kableExtra)
options(knitr.kable.NA = '')
dt <- mtcars[1:5, 1:6]
kable(dt, "html") %>%
kable_styling(full_width = F, position = "left") %>%
row_spec(0, align = "c",bold=T ) %>%
column_spec(1, bold = T)
```
```
In the code above there is a line below the first row, which I like since those are row names, but there are lines between every row which I would like to remove.
Ideally I would like to have a slightly thicker line at the top and bottom of this table, similar to the booktabs look in LaTeX.
I have read the documentation but the CSS is beyond me.
Thanks for any suggestions.<issue_comment>username_1: What you need is to set `booktabs = T` argument inside `kable`.
In your example, just change the following line of code:
```
kable(dt, "html")
```
to:
```
kable(dt, "html", booktabs = T)
```
Cheers!
Upvotes: 3 <issue_comment>username_2: You can include a **LaTeX** table in your **html** doc as an image, but you need a *complete* LaTeX distribution (not tinytex) and the R package **magick** (+Ghostscript if you are on Windows).
Replace
```
kable(dt, "html") %>%
```
with
```
kable(dt, "latex", booktabs=T) %>%
```
and add
```
kable_as_image()
```
as the last line (don't forget the pipe symbol). The following code works for me:
```
```{r echo=FALSE}
library(knitr)
library(kableExtra)
options(knitr.kable.NA = '')
dt <- mtcars[1:5, 1:6]
kable(dt, "latex", booktabs=T) %>%
kable_styling(full_width = F, position = "left") %>%
row_spec(0, align = "c",bold=T ) %>%
column_spec(1, bold = T) %>%
kable_as_image()
```
```
Ref: See page 24 here:
<https://cran.r-project.org/web/packages/kableExtra/vignettes/awesome_table_in_pdf.pdf>
Upvotes: 2 <issue_comment>username_3: Can't provide a full answer, but I was able to suppress the horizontal lines by altering the CSS for the table in the `row_spec()` function.
```
dt <- mtcars[1:5, 1:6]
kable(dt, "html") %>%
kable_styling(full_width = FALSE, position = "left") %>%
row_spec(1:4, extra_css = "border-bottom-style: none")
```
Though this doesn't seem to do anything when included in an `Rmarkdown` document, it does work for `Quarto` documents.
[](https://i.stack.imgur.com/xYeS7.png)
Upvotes: 1 |
2018/03/20 | 805 | 2,662 | <issue_start>username_0: I am trying to go through a very basic tutorial about Webpack. I cannot get it to compile a very basic single line javascript application. I have installed and uninstalled it multiple times.
It's just a tutorial to learn how to use Webpack. I used `npm init` to set up the `package.json` and did nothing else to touch that file. I have a single index.html file and a single app.js file that is suppose to bundle into a bundle.js file.
I enter: `webpack app.js bundle.js` into the terminal
I keep getting this error:
```
Jonathans-MBP:webpack-app jonathankuhl$ webpack app.js bundle.js
Hash: 8d502a6e1f30f2ad64ab
Version: webpack 4.1.1
Time: 157ms
Built at: 2018-3-20 12:25:32
1 asset
Entrypoint main = main.js
[0] ./app.js 18 bytes {0} [built]
[1] multi ./app.js bundle.js 40 bytes {0} [built]
WARNING in configuration
The 'mode' option has not been set. Set 'mode' option to 'development' or 'production' to enable defaults for this environment.
ERROR in multi ./app.js bundle.js
Module not found: Error: Can't resolve 'bundle.js' in '/Users/jonathankuhl/Documents/Programming/node js/sandbox/webpack-app'
@ multi ./app.js bundle.js
```
Here's the package.json, there's nothing in it that I didn't do outside of `npm init`:
```
{
"name": "webpack-app",
"version": "1.0.0",
"description": "testing",
"main": "app.js",
"scripts": {
"test": "echo \"Error: no test specified\" && exit 1"
},
"author": "",
"license": "ISC",
"dependencies": {
"webpack": "^4.1.1"
}
}
```
What am I doing wrong? I'm literally doing exactly what the tutorial is telling me to do, step by step. I don't know much about webpack; I thought I should look into it, since I want a job in web development. I don't know what it means by "can't resolve 'bundle.js'". It's as if bundle.js doesn't exist, but it's not supposed to exist yet; webpack creates it.
I've tried alternatives such as doing `npm run build` after adding `"build":"webpack"` to my package.json under `"scripts"`, and had no luck. That was before I deleted the entire directory and started over. I've installed and uninstalled both webpack and webpack-cli a few times as well.
What am I doing wrong?
Here's the tut, if it matters: <https://www.youtube.com/watch?v=lziuNMk_8eQ> It's a little less than a year old, so maybe it's slightly outdated?<issue_comment>username_1: you need `bundle.js` as the output, so try this command instead:
`webpack app.js -o bundle.js`
Upvotes: 7 [selected_answer]<issue_comment>username_2: I also needed to set the mode explicitly
```
webpack --mode=development app.js -o bundle.js
```
to proceed with that tutorial.
Upvotes: 3 |
2018/03/20 | 557 | 1,775 | <issue_start>username_0: I am trying to display json array of the images stored in database to come with the full URL in Laravel. I am using `CONCAT()` function to concatenate the full URL of the image, but I'm getting a false URL with many dashes inside.
This is a problem in the coming output:
```
{
"posts": [{
"imageurl": "http:\/\/localhost:8000\/images\/1509695371.jpg"
}, {
"imageurl": "http:\/\/localhost:8000\/images\/1509695156.jpg"
}, {
"imageurl": "http:\/\/localhost:8000\/images\/1509696465.jpg"
}, {
"imageurl": "http:\/\/localhost:8000\/images\/1509697249.jpg"
}]
}
```
And this is the function in my controller to retrieve the images stored in database from Post table:
```
public function index()
{
$posts = Post::select(array(DB::raw("CONCAT('http://localhost:8000/images/', image) AS imageurl")))->get();
return response()->json(['posts' => $posts]);
}
```
Any help will be much appreciated!<issue_comment>username_1: Instead of doing it at the DB level, you could do the following in your `Post` model:
```
public function getImageUrlAttribute()
{
    return 'http://localhost:8000/images/'.$this->image;
}
```
and access it like `$post->image_url`
Upvotes: 0 <issue_comment>username_2: Thank you so much @Sari, it's now working fine after changing the code to this:
```
public function index()
{
$posts = json_encode(Post::select(array(DB::raw("CONCAT('http://localhost:8000/images/', image) AS imageurl")))->get(), JSON_UNESCAPED_SLASHES);
return $posts;
}
```
Upvotes: 2 [selected_answer]<issue_comment>username_3: ```
DB::raw('CONCAT(\'http://localhost/hrms/images/logo/\', schools.image) as image')
```
And for dynamic use:
```
DB::raw('CONCAT(\''.$url.'\', schools.image) as image')
```
Upvotes: 0 |
2018/03/20 | 2,651 | 7,278 | <issue_start>username_0: I'm have an object as shown below for which I need to find the total including the numbers that are outside as well as inside the curly braces.
```
this.count = {
"count1": "250 (220)",
"count2": "125 (100)",
"count3": "125 (100)",
"count4": "125 (100)"
}
```
>
> Expected result: **Sum** : "625 (520)"
>
>
>
I was able to find out the sum of the first set of strings, i.e. 625, using the logic below:
```
let total=this.getTotal(this.count);
public getTotal(count) {
const count1 = parseInt(count.count1.split(' ')[0]);
  const count2 = parseInt(count.count2.split(' ')[0]);
  const count3 = parseInt(count.count3.split(' ')[0]);
  const count4 = parseInt(count.count4.split(' ')[0]);
const totalA = count1 + count2 + count3 + count4;
console.log(totalA);
}
```
But I was not able to split on the parentheses `()` to calculate the other portion and concatenate it with `totalA`. Do let me know of any functional approach that suits calculating these types of objects best. Thanks
```js
const data = {"count1": "250 (220)","count2": "125 (100)","count3": "125 (100)","count4": "125 (100)"}
const total = Object.values(data).reduce((r, e) => {
const [a, b] = e.split(/\(([^)]+)\)/);
r.a = (r.a || 0) + +a;
r.b = (r.b || 0) + +b;
return r;
}, {})
const result = `Sum: ${total.a} (${total.b})`
console.log(result)
```
You could also use array as accumulator inside `reduce` and inside use `forEach()` loop.
```js
const data = {"count1": "250 (220)","count2": "125 (100)","count3": "125 (100)","count4": "125 (100)"}
const [a, b] = Object.values(data).reduce((r, e) => {
e.split(/\(([^)]+)\)/).forEach((e, i) => r[i] += +e)
return r;
}, [0, 0])
const result = `Sum: ${a} (${b})`
console.log(result)
```
Upvotes: 3 [selected_answer]<issue_comment>username_2: To improve your wisdom (you can use RegEx to solve this):
```js
var count = {
"count1": "250 (220)",
"count2": "125 (100)",
"count3": "125 (100)",
"count4": "125 (100)"
}
function getTotal(items) {
let sum = 0
let sum1 = 0
for (let i in items) {
let item = items[i]
sum += Number.parseInt(item.split(' ')[0])
sum1 += Number.parseInt(item.split(' ')[1].replace(/^\(|\)$/g, ''))
}
console.log('Sum: ' + sum + ' (' + sum1 + ')')
}
getTotal(count)
```
Upvotes: 0 <issue_comment>username_3: Using `Array.prototype.reduce` this would work:
```js
let count = {
"count1": "250 (220)",
"count2": "125 (100)",
"count3": "125 (100)",
"count4": "125 (100)"
}
let totalCount1 = Object.values(count).reduce(function(acc, val) {
return acc + parseInt(val.split(' ')[0])
}, 0)
let totalCount2 = Object.values(count).reduce(function(acc, val) {
return acc + parseInt(val.split('(').pop().split(')')[0])
}, 0)
console.log(`${totalCount1} (${totalCount2})`)
```
Upvotes: 1 <issue_comment>username_4: You can use `Object.values()` and `forEach` function for array.
```js
var count = {
"count1": "250 (220)",
"count2": "125 (100)",
"count3": "125 (100)",
"count4": "125 (100)"
},
lsum = 0,
rsum = 0;
Object.values(count).forEach(v => {
v = v.replace('(', '').replace(')', '').split(' ');
lsum += +v[0];
rsum += +v[1];
});
console.log(`${lsum} (${rsum})`)
```
Upvotes: 0 <issue_comment>username_5: Your references to `this` indicate that there is more that you aren't showing us, and your use of the word `public` is not valid in JavaScript.
But, you can just loop over the keys in the object and use `.split` (a little differently than you were using it) to get the totals of each part of the value.
```js
var count = {
"count1": "250 (220)",
"count2": "125 (100)",
"count3": "125 (100)",
"count4": "125 (100)"
};
function getTotal(count) {
let val1 = 0;
let val2 = 0;
for(var key in count){
val1 += parseInt(count[key].split(" (")[0], 10);
val2 += parseInt(count[key].split(" (")[1], 10);
}
return "Sum 1: " + val1 + " - Sum 2: " + val2;
}
console.log(getTotal(count));
```
Upvotes: 0 <issue_comment>username_6: Since you've tagged "functional-programming", here's a possible FP solution:
```js
// utilities
let map = (f, xs) => [...xs].map(f);
let sum = xs => xs.reduce((a, b) => Number(a) + Number(b));
let zip = (...xss) => xss[0].map((_, i) => xss.map(xs => xs[i]));
// here we go
count = {
"count1": "250 (220) 10",
"count2": "125 (100) 20",
"count3": "125 (100) 30",
"count4": "125 (100) 40"
};
res = map(sum, zip(
...map(
s => s.match(/\d+/g),
Object.values(count)
)));
console.log(res);
```
(I've added a third column to make things a little bit more interesting. The code works with any number of columns).
That being said, a *real* solution to your problem would be to fix that broken data structure in the first place.
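For instance, with the numbers stored as numbers, the whole string-parsing layer disappears (a hypothetical reshaping; the field names `total` and `subset` are mine, not from the question):

```javascript
const count = {
  count1: { total: 250, subset: 220 },
  count2: { total: 125, subset: 100 },
};

// Summing is now a plain reduce with no parsing step.
const sum = Object.values(count).reduce(
  (acc, { total, subset }) =>
    ({ total: acc.total + total, subset: acc.subset + subset }),
  { total: 0, subset: 0 }
);
```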
Upvotes: 1 <issue_comment>username_7: Here's a bit of a verbose `reduce` based solution:
```
const count = {
"count1": "250 (220)",
"count2": "125 (100)",
"count3": "125 (100)",
"count4": "125 (100)"
};
const rowPattern = /(\d+) \((\d+)\)/;
const parseRow = row => row.match(rowPattern).slice(1, 3).map(s => parseInt(s));
const result = Object.values(count).reduce((a, b) => {
const [ outerA, innerA ] = parseRow(a),
[ outerB, innerB ] = parseRow(b);
return `${outerA + outerB} (${innerA + innerB})`;
}, '0 (0)');
console.log(result); // 625 (520)
```
Upvotes: 0 <issue_comment>username_8: This seems to do what you want, and I think is fairly readable:
```js
const data = {"count1": "250 (220)","count2": "125 (100)","count3": "125 (100)","count4": "125 (100)"}
const splitSum = (data) => {
const [a, b] = Object.values(data).reduce(([a0, b0], e) => {
const [a1, b1] = e.split(/[^\d]+/).map(Number);
return [a0 + a1, b0 + b1];
}, [0, 0])
return `Sum: ${a} (${b})`
}
console.log(splitSum(data))
```
Upvotes: 1 <issue_comment>username_9: Breaking the work down into eg a `Count` module makes the task easier
```js
const Count =
{ fromString: s =>
s.match (/\d+/g) .map (Number)
, toString: ([a, b]) =>
`${a} (${b})`
, concat: ([a1, b1], [a2, b2]) =>
[ a1 + a2, b1 + b2 ]
, empty:
[0, 0]
}
const main = data =>
{
const result =
data.map (Count.fromString)
.reduce (Count.concat, Count.empty)
console.log (Count.toString (result))
}
const data =
{ "count1": "250 (220)"
, "count2": "125 (100)"
, "count3": "125 (100)"
, "count4": "125 (100)"
}
main (Object.values (data))
```
The `.map`-`.reduce` combo above results in *two* loops, producing unnecessary intermediate values. Using a generic combinator `mapReduce` we can collapse the loops into one; the changed lines are marked with `// changed`:
```
const mapReduce = (m, r) =>            // changed
  (acc, x) => r (acc, m (x))           // changed

const main = data =>
{
  const result =
    data.reduce (mapReduce (Count.fromString, Count.concat), Count.empty)  // changed

  console.log (Count.toString (result))
}
```
Upvotes: 2 |
2018/03/20 | 4,198 | 16,231 | <issue_start>username_0: I m trying to generate a signed APK for android with react-native. If the debug's build works well, when I try to launch the `./gradlew assembleRelease` to create the signed APK for android, I get this error =>
```
Task :app:processReleaseResources
Failed to execute aapt
com.android.ide.common.process.ProcessException: Failed to execute aapt
```
I know that I can use the `android.enableAapt2=false` flag and it works well for creating the signed APK (I tried it), but I don't want to use this option since it's just a workaround and won't work forever, and all of the threads that I read were suggesting that... :/
Aren't there any other options?
Thanks for your help!
Log:
```
> Task :bugsnag-react-native:compileReleaseJavaWithJavac
Note: path/node_modules/bugsnag-react-native/android/src/main/java/com/bugsnag/BugsnagReactNative.java uses unchecked or unsafe operations.
Note: Recompile with -Xlint:unchecked for details.
path/android/app/build/intermediates/res/merged/release/drawable-hdpi/node_modules_reactnavigation_src_views_assets_backicon.png: error: uncompiled PNG file passed as argument. Must be compiled first into .flat file..
error: failed parsing overlays.
FAILURE: Build failed with an exception.
* What went wrong:
Execution failed for task ':app:processReleaseResources'.
> Failed to execute aapt
* Try:
Run with --info or --debug option to get more log output.
* Exception is:
org.gradle.api.tasks.TaskExecutionException: Execution failed for task ':app:processReleaseResources'.
at org.gradle.api.internal.tasks.execution.ExecuteActionsTaskExecuter.executeActions(ExecuteActionsTaskExecuter.java:100)
at org.gradle.api.internal.tasks.execution.ExecuteActionsTaskExecuter.execute(ExecuteActionsTaskExecuter.java:70)
at org.gradle.api.internal.tasks.execution.SkipUpToDateTaskExecuter.execute(SkipUpToDateTaskExecuter.java:63)
at org.gradle.api.internal.tasks.execution.ResolveTaskOutputCachingStateExecuter.execute(ResolveTaskOutputCachingStateExecuter.java:54)
at org.gradle.api.internal.tasks.execution.ValidatingTaskExecuter.execute(ValidatingTaskExecuter.java:58)
at org.gradle.api.internal.tasks.execution.SkipEmptySourceFilesTaskExecuter.execute(SkipEmptySourceFilesTaskExecuter.java:88)
at org.gradle.api.internal.tasks.execution.ResolveTaskArtifactStateTaskExecuter.execute(ResolveTaskArtifactStateTaskExecuter.java:52)
at org.gradle.api.internal.tasks.execution.SkipTaskWithNoActionsExecuter.execute(SkipTaskWithNoActionsExecuter.java:52)
at org.gradle.api.internal.tasks.execution.SkipOnlyIfTaskExecuter.execute(SkipOnlyIfTaskExecuter.java:54)
at org.gradle.api.internal.tasks.execution.ExecuteAtMostOnceTaskExecuter.execute(ExecuteAtMostOnceTaskExecuter.java:43)
at org.gradle.api.internal.tasks.execution.CatchExceptionTaskExecuter.execute(CatchExceptionTaskExecuter.java:34)
at org.gradle.execution.taskgraph.DefaultTaskGraphExecuter$EventFiringTaskWorker$1.run(DefaultTaskGraphExecuter.java:248)
at org.gradle.internal.progress.DefaultBuildOperationExecutor$RunnableBuildOperationWorker.execute(DefaultBuildOperationExecutor.java:336)
at org.gradle.internal.progress.DefaultBuildOperationExecutor$RunnableBuildOperationWorker.execute(DefaultBuildOperationExecutor.java:328)
at org.gradle.internal.progress.DefaultBuildOperationExecutor.execute(DefaultBuildOperationExecutor.java:197)
at org.gradle.internal.progress.DefaultBuildOperationExecutor.run(DefaultBuildOperationExecutor.java:107)
at org.gradle.execution.taskgraph.DefaultTaskGraphExecuter$EventFiringTaskWorker.execute(DefaultTaskGraphExecuter.java:241)
at org.gradle.execution.taskgraph.DefaultTaskGraphExecuter$EventFiringTaskWorker.execute(DefaultTaskGraphExecuter.java:230)
at org.gradle.execution.taskgraph.DefaultTaskPlanExecutor$TaskExecutorWorker.processTask(DefaultTaskPlanExecutor.java:124)
at org.gradle.execution.taskgraph.DefaultTaskPlanExecutor$TaskExecutorWorker.access$200(DefaultTaskPlanExecutor.java:80)
at org.gradle.execution.taskgraph.DefaultTaskPlanExecutor$TaskExecutorWorker$1.execute(DefaultTaskPlanExecutor.java:105)
at org.gradle.execution.taskgraph.DefaultTaskPlanExecutor$TaskExecutorWorker$1.execute(DefaultTaskPlanExecutor.java:99)
at org.gradle.execution.taskgraph.DefaultTaskExecutionPlan.execute(DefaultTaskExecutionPlan.java:625)
at org.gradle.execution.taskgraph.DefaultTaskExecutionPlan.executeWithTask(DefaultTaskExecutionPlan.java:580)
at org.gradle.execution.taskgraph.DefaultTaskPlanExecutor$TaskExecutorWorker.run(DefaultTaskPlanExecutor.java:99)
at org.gradle.internal.concurrent.ExecutorPolicy$CatchAndRecordFailures.onExecute(ExecutorPolicy.java:63)
at org.gradle.internal.concurrent.ManagedExecutorImpl$1.run(ManagedExecutorImpl.java:46)
at org.gradle.internal.concurrent.ThreadFactoryImpl$ManagedThreadRunnable.run(ThreadFactoryImpl.java:55)
Caused by: org.gradle.tooling.BuildException: Failed to execute aapt
at com.android.build.gradle.tasks.ProcessAndroidResources.invokeAaptForSplit(ProcessAndroidResources.java:573)
at com.android.build.gradle.tasks.ProcessAndroidResources.doFullTaskAction(ProcessAndroidResources.java:285)
at com.android.build.gradle.internal.tasks.IncrementalTask.taskAction(IncrementalTask.java:109)
at org.gradle.internal.reflect.JavaMethod.invoke(JavaMethod.java:73)
at org.gradle.api.internal.project.taskfactory.DefaultTaskClassInfoStore$IncrementalTaskAction.doExecute(DefaultTaskClassInfoStore.java:173)
at org.gradle.api.internal.project.taskfactory.DefaultTaskClassInfoStore$StandardTaskAction.execute(DefaultTaskClassInfoStore.java:134)
at org.gradle.api.internal.project.taskfactory.DefaultTaskClassInfoStore$StandardTaskAction.execute(DefaultTaskClassInfoStore.java:121)
at org.gradle.api.internal.tasks.execution.ExecuteActionsTaskExecuter$1.run(ExecuteActionsTaskExecuter.java:122)
at org.gradle.internal.progress.DefaultBuildOperationExecutor$RunnableBuildOperationWorker.execute(DefaultBuildOperationExecutor.java:336)
at org.gradle.internal.progress.DefaultBuildOperationExecutor$RunnableBuildOperationWorker.execute(DefaultBuildOperationExecutor.java:328)
at org.gradle.internal.progress.DefaultBuildOperationExecutor.execute(DefaultBuildOperationExecutor.java:197)
at org.gradle.internal.progress.DefaultBuildOperationExecutor.run(DefaultBuildOperationExecutor.java:107)
at org.gradle.api.internal.tasks.execution.ExecuteActionsTaskExecuter.executeAction(ExecuteActionsTaskExecuter.java:111)
at org.gradle.api.internal.tasks.execution.ExecuteActionsTaskExecuter.executeActions(ExecuteActionsTaskExecuter.java:92)
... 27 more
Suppressed: java.lang.RuntimeException: Some file processing failed, see logs for details
at com.android.builder.internal.aapt.QueuedResourceProcessor.waitForAll(QueuedResourceProcessor.java:121)
at com.android.builder.internal.aapt.QueuedResourceProcessor.end(QueuedResourceProcessor.java:141)
at com.android.builder.internal.aapt.v2.QueueableAapt2.close(QueueableAapt2.java:117)
at com.android.build.gradle.tasks.ProcessAndroidResources.doFullTaskAction(ProcessAndroidResources.java:328)
at com.android.build.gradle.internal.tasks.IncrementalTask.taskAction(IncrementalTask.java:109)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.gradle.internal.reflect.JavaMethod.invoke(JavaMethod.java:73)
at org.gradle.api.internal.project.taskfactory.DefaultTaskClassInfoStore$IncrementalTaskAction.doExecute(DefaultTaskClassInfoStore.java:173)
at org.gradle.api.internal.project.taskfactory.DefaultTaskClassInfoStore$StandardTaskAction.execute(DefaultTaskClassInfoStore.java:134)
at org.gradle.api.internal.project.taskfactory.DefaultTaskClassInfoStore$StandardTaskAction.execute(DefaultTaskClassInfoStore.java:121)
at org.gradle.api.internal.tasks.execution.ExecuteActionsTaskExecuter$1.run(ExecuteActionsTaskExecuter.java:122)
at org.gradle.internal.progress.DefaultBuildOperationExecutor$RunnableBuildOperationWorker.execute(DefaultBuildOperationExecutor.java:336)
at org.gradle.internal.progress.DefaultBuildOperationExecutor$RunnableBuildOperationWorker.execute(DefaultBuildOperationExecutor.java:328)
at org.gradle.internal.progress.DefaultBuildOperationExecutor.execute(DefaultBuildOperationExecutor.java:197)
at org.gradle.internal.progress.DefaultBuildOperationExecutor.run(DefaultBuildOperationExecutor.java:107)
at org.gradle.api.internal.tasks.execution.ExecuteActionsTaskExecuter.executeAction(ExecuteActionsTaskExecuter.java:111)
at org.gradle.api.internal.tasks.execution.ExecuteActionsTaskExecuter.executeActions(ExecuteActionsTaskExecuter.java:92)
at org.gradle.api.internal.tasks.execution.ExecuteActionsTaskExecuter.execute(ExecuteActionsTaskExecuter.java:70)
at org.gradle.api.internal.tasks.execution.SkipUpToDateTaskExecuter.execute(SkipUpToDateTaskExecuter.java:63)
at org.gradle.api.internal.tasks.execution.ResolveTaskOutputCachingStateExecuter.execute(ResolveTaskOutputCachingStateExecuter.java:54)
at org.gradle.api.internal.tasks.execution.ValidatingTaskExecuter.execute(ValidatingTaskExecuter.java:58)
at org.gradle.api.internal.tasks.execution.SkipEmptySourceFilesTaskExecuter.execute(SkipEmptySourceFilesTaskExecuter.java:88)
at org.gradle.api.internal.tasks.execution.ResolveTaskArtifactStateTaskExecuter.execute(ResolveTaskArtifactStateTaskExecuter.java:52)
at org.gradle.api.internal.tasks.execution.SkipTaskWithNoActionsExecuter.execute(SkipTaskWithNoActionsExecuter.java:52)
at org.gradle.api.internal.tasks.execution.SkipOnlyIfTaskExecuter.execute(SkipOnlyIfTaskExecuter.java:54)
at org.gradle.api.internal.tasks.execution.ExecuteAtMostOnceTaskExecuter.execute(ExecuteAtMostOnceTaskExecuter.java:43)
at org.gradle.api.internal.tasks.execution.CatchExceptionTaskExecuter.execute(CatchExceptionTaskExecuter.java:34)
at org.gradle.execution.taskgraph.DefaultTaskGraphExecuter$EventFiringTaskWorker$1.run(DefaultTaskGraphExecuter.java:248)
at org.gradle.internal.progress.DefaultBuildOperationExecutor$RunnableBuildOperationWorker.execute(DefaultBuildOperationExecutor.java:336)
at org.gradle.internal.progress.DefaultBuildOperationExecutor$RunnableBuildOperationWorker.execute(DefaultBuildOperationExecutor.java:328)
at org.gradle.internal.progress.DefaultBuildOperationExecutor.execute(DefaultBuildOperationExecutor.java:197)
at org.gradle.internal.progress.DefaultBuildOperationExecutor.run(DefaultBuildOperationExecutor.java:107)
at org.gradle.execution.taskgraph.DefaultTaskGraphExecuter$EventFiringTaskWorker.execute(DefaultTaskGraphExecuter.java:241)
at org.gradle.execution.taskgraph.DefaultTaskGraphExecuter$EventFiringTaskWorker.execute(DefaultTaskGraphExecuter.java:230)
at org.gradle.execution.taskgraph.DefaultTaskPlanExecutor$TaskExecutorWorker.processTask(DefaultTaskPlanExecutor.java:124)
at org.gradle.execution.taskgraph.DefaultTaskPlanExecutor$TaskExecutorWorker.access$200(DefaultTaskPlanExecutor.java:80)
at org.gradle.execution.taskgraph.DefaultTaskPlanExecutor$TaskExecutorWorker$1.execute(DefaultTaskPlanExecutor.java:105)
at org.gradle.execution.taskgraph.DefaultTaskPlanExecutor$TaskExecutorWorker$1.execute(DefaultTaskPlanExecutor.java:99)
at org.gradle.execution.taskgraph.DefaultTaskExecutionPlan.execute(DefaultTaskExecutionPlan.java:625)
at org.gradle.execution.taskgraph.DefaultTaskExecutionPlan.executeWithTask(DefaultTaskExecutionPlan.java:580)
at org.gradle.execution.taskgraph.DefaultTaskPlanExecutor$TaskExecutorWorker.run(DefaultTaskPlanExecutor.java:99)
at org.gradle.internal.concurrent.ExecutorPolicy$CatchAndRecordFailures.onExecute(ExecutorPolicy.java:63)
at org.gradle.internal.concurrent.ManagedExecutorImpl$1.run(ManagedExecutorImpl.java:46)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at org.gradle.internal.concurrent.ThreadFactoryImpl$ManagedThreadRunnable.run(ThreadFactoryImpl.java:55)
at java.lang.Thread.run(Thread.java:748)
Caused by: com.android.ide.common.process.ProcessException: Failed to execute aapt
at com.android.builder.core.AndroidBuilder.processResources(AndroidBuilder.java:796)
at com.android.build.gradle.tasks.ProcessAndroidResources.invokeAaptForSplit(ProcessAndroidResources.java:551)
... 40 more
Caused by: java.util.concurrent.ExecutionException: java.util.concurrent.ExecutionException: com.android.tools.aapt2.Aapt2Exception: AAPT2 error: check logs for details
at com.google.common.util.concurrent.AbstractFuture.getDoneValue(AbstractFuture.java:503)
at com.google.common.util.concurrent.AbstractFuture.get(AbstractFuture.java:482)
at com.google.common.util.concurrent.AbstractFuture$TrustedFuture.get(AbstractFuture.java:79)
at com.android.builder.core.AndroidBuilder.processResources(AndroidBuilder.java:794)
... 41 more
Caused by: java.util.concurrent.ExecutionException: com.android.tools.aapt2.Aapt2Exception: AAPT2 error: check logs for details
at com.google.common.util.concurrent.AbstractFuture.getDoneValue(AbstractFuture.java:503)
at com.google.common.util.concurrent.AbstractFuture.get(AbstractFuture.java:462)
at com.google.common.util.concurrent.AbstractFuture$TrustedFuture.get(AbstractFuture.java:79)
at com.android.builder.internal.aapt.v2.QueueableAapt2.lambda$makeValidatedPackage$1(QueueableAapt2.java:179)
Caused by: com.android.tools.aapt2.Aapt2Exception: AAPT2 error: check logs for details
at com.android.builder.png.AaptProcess$NotifierProcessOutput.handleOutput(AaptProcess.java:454)
at com.android.builder.png.AaptProcess$NotifierProcessOutput.err(AaptProcess.java:411)
at com.android.builder.png.AaptProcess$ProcessOutputFacade.err(AaptProcess.java:332)
at com.android.utils.GrabProcessOutput$1.run(GrabProcessOutput.java:104)
```<issue_comment>username_1: Copying from the comment so it's easier to find...
This is caused by react-native adding uncompiled files to the merged resources directory. It got away with it with AAPT1 because the merged files didn't need to be compiled, but AAPT2 ones are compiled while being merged.
React-native owners need to update their code to pass these files as input to the merge resources task instead of dumping it uncompiled into the merged directory. They say they fixed it in the newest version, or at least provided a workaround: [github.com/facebook/react-native/pull/17967](http://github.com/facebook/react-native/pull/17967)
Upvotes: 1 <issue_comment>username_2: For me disabling aapt2 wasn't enough.
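For reference, "disabling aapt2" here refers to a `gradle.properties` flag that existed at the time (the flag has since been removed in newer Android Gradle Plugin versions, so treat it as a historical workaround):

```properties
# gradle.properties - historical AAPT2 opt-out; removed in newer AGP versions
android.enableAapt2=false
```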
What did work was going to `android/app/src/main/res` and looking in all the drawable folders for png files other than the launch\_screen.png.
Once I deleted all these files, the problem was solved.
[](https://i.stack.imgur.com/Vi2fI.png)
Upvotes: 2 |
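A scriptable version of that manual cleanup. This is only a sketch, demonstrated on a throwaway directory so it is safe to run as-is; to use it for real, point it at `android/app/src/main/res` yourself, and only add `-delete` after reviewing the dry-run output:

```shell
# Build a throwaway tree mirroring the res/ layout from the answer above
res="$(mktemp -d)"
mkdir -p "$res/drawable-mdpi"
touch "$res/drawable-mdpi/launch_screen.png" "$res/drawable-mdpi/stray_icon.png"

# Dry run: list every PNG except launch_screen.png (append -delete to remove them)
find "$res" -name '*.png' ! -name 'launch_screen.png'
```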
2018/03/20 | 1,294 | 4,524 | <issue_start>username_0: I am trying to help a co-worker do an inner join on two oracle tables so he can build a particular graph on a report.
I have no Oracle experience, only SQL Server, and have gotten to what seems like the appropriate statement, but it does not work.
```
SELECT concat(concat(month("a.timestamp"),','),day("a.timestamp")) as monthDay
, min("a.data_value") as minTemp
, max("a.data_value") as maxTemp
, "b.forecast" as forecastTemp
, "a.timestamp" as date
FROM table1 a
WHERE "a.category" = 'temperature'
GROUP BY concat(concat(month("timestamp"),','),day("timestamp"))
INNER JOIN (SELECT "forecast"
, "timestamp"
FROM table2
WHERE "category" = 'temperature') b
ON "a.timestamp" = "b.timestamp"
```
It doesn't like my aliases for some reason. It doesn't like not having quotes for some reason.
Also, when I use the fully qualified names it still fails because:
```
ORA-00933 SQL command not properly ended
```<issue_comment>username_1: The correct (simplified) clause order of a SELECT is:
```
SELECT
FROM table1
JOIN table2
WHERE
GROUP BY
```
Upvotes: 0 <issue_comment>username_2: You are doing it wrong. Use a subquery:
```
SELECT c.*, b.`forecast` as forecastTemp
FROM
(SELECT concat(concat(month(a.`timestamp`),','),day(a.`timestamp`)) as monthDay
, min(a.`data_value`) as minTemp
, max(a.`data_value`) as maxTemp
, a.`timestamp` as date
FROM table1 a
WHERE `category`='temperature'
GROUP BY concat(concat(month(`timestamp`),','),day(`timestamp`))) c
INNER JOIN (SELECT `forecast`
, `timestamp`
FROM table2
WHERE `category` = 'temperature') b
ON c.`timestamp` = b.`timestamp`;
```
Upvotes: 0 <issue_comment>username_3: The order of the query should be
```
SELECT
FROM
INNER JOIN
WHERE
GROUP BY
```
as below
```
SELECT concat(concat(month("a.timestamp"),','),day("a.timestamp")) as monthDay
, min("a.data_value") as minTemp
, max("a.data_value") as maxTemp
, "b.forecast" as forecastTemp
, "a.timestamp" as date
FROM table1 a
INNER JOIN (SELECT "forecast"
, "timestamp"
FROM table2
WHERE "category" = 'temperature') b
ON "a.timestamp" = "b.timestamp"
WHERE "category" = 'temperature'
GROUP BY concat(concat(month("timestamp"),','),day("timestamp"))
```
Upvotes: 1 <issue_comment>username_4: In a flood of attempts, here's yet another one.
* table2 can be moved out of the subquery; join it with table1 on category as well
* note that all non-aggregate columns (from the SELECT) have to be contained in the GROUP BY clause. It seems that `a.timestamp` contains more info than just *month and day* - if that's so, it'll probably ruin the whole result set, as data won't be grouped by *monthday* but by the whole date - consider removing it from the SELECT, if necessary
---
```
SELECT TO_CHAR(a.timestamp,'mm.dd') monthday,
MIN(a.data_value) mintemp,
MAX(a.data_value) maxtemp,
b.forecast forecasttemp,
a.timestamp c_date
FROM table1 a
JOIN table2 b ON a.timestamp = b.timestamp
AND a.category = b.category
WHERE a.category = 'temperature'
GROUP BY TO_CHAR(a.timestamp,'mm.dd'),
b.forecast,
a.timestamp;
```
Upvotes: 1 <issue_comment>username_5: The query should look something like this:
```
SELECT to_char(a.timestamp, 'MM-DD') as monthDay,
min(a.data_value) as minTemp,
max(a.data_value) as maxTemp,
b.forecast as forecastTemp
FROM table1 a JOIN
table2 b
ON a.timestamp = b.timestamp and b.category = 'temperature'
WHERE a.category = 'temperature'
GROUP BY to_char(timestamp, 'MM-DD'), b.forecast;
```
I'm not 100% sure this is what you want. Your query has numerous issues and complexities:
* You don't need a subquery in the `FROM` clause.
* You can use `to_char()` instead of the more complex date string processing.
* The `group by` did not contain all the relevant fields.
* Don't use double quotes, unless really, really needed.
Upvotes: 0 <issue_comment>username_6: In addition to the order of the components other answers have mentioned (`where` goes after `join` etc), you also need to remove all of the double-quote characters. In Oracle, these override the standard naming rules, so `"a.category"` is only valid if your table actually has a column named, literally, `"a.category"`, e.g.
```
create table demo ("a.category" varchar2(10));
insert into demo ("a.category") values ('Weird');
select d."a.category" from demo d;
```
It's quite rare to need to do this.
Upvotes: 0 |
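For anyone who wants to experiment with the join-then-aggregate shape from these answers without an Oracle instance, here is a small runnable sketch. It uses SQLite purely for illustration, so `strftime('%m-%d', ...)` stands in for Oracle's `TO_CHAR(ts, 'MM-DD')`, and the table and column names are made up:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE table1 (ts TEXT, category TEXT, data_value REAL);
    CREATE TABLE table2 (ts TEXT, category TEXT, forecast REAL);
    INSERT INTO table1 VALUES ('2018-03-20', 'temperature', 5.0),
                              ('2018-03-20', 'temperature', 12.0);
    INSERT INTO table2 VALUES ('2018-03-20', 'temperature', 9.0);
""")

# Join first, filter, then aggregate per day (clause order: FROM/JOIN, WHERE, GROUP BY)
rows = conn.execute("""
    SELECT strftime('%m-%d', a.ts) AS monthday,
           MIN(a.data_value)       AS mintemp,
           MAX(a.data_value)       AS maxtemp,
           b.forecast              AS forecasttemp
    FROM table1 a
    JOIN table2 b ON a.ts = b.ts AND a.category = b.category
    WHERE a.category = 'temperature'
    GROUP BY strftime('%m-%d', a.ts), b.forecast
""").fetchall()
print(rows)  # [('03-20', 5.0, 12.0, 9.0)]
```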
2018/03/20 | 588 | 2,173 | <issue_start>username_0: I am building a rich text editor.
I have implemented text formatting like bold, italic, etc., and also paragraph formatting like blockquote. Now I would like to add images to the editor and have the text wrap around them.
I have implemented all these using `SpannableString()` and `Spanned()` and `StyleSpan()`.
I can add an image to a line using `ImageSpan()`, but that adds it inline and it's just there in place of a character. What I want is to insert it into a paragraph and have the rest of the text wrap around it. I am able to add an image at the beginning of the text with the following code, but I cannot align it center or right.
```
SpannableString string = new SpannableString("Text with icon and padding");
string.setSpan(new IconMarginSpan(bitmap, 30), 0, string.length(),
Spanned.SPAN_EXCLUSIVE_EXCLUSIVE);
```
How can I do this? Is there an example, or a procedure to follow?<issue_comment>username_1: You can fulfill your requirement as below:
You can set your image in an ImageView and your text with **Html.fromHtml("here put your text")**.
**gradle** file:
```
compile 'com.github.deano2390:FlowTextView:2.0.5'
```
your **XML** layout:
```
```
your **java** file:
```
FlowTextView flowTextView = (FlowTextView) findViewById(R.id.ftv);
Spanned html = Html.fromHtml("here put your text");
flowTextView.setText(html);
```
You may also get help from this link: <https://github.com/deano2390/FlowTextView>
I hope it helps you!
Upvotes: 2 <issue_comment>username_2: As per the details given by you, it seems you might be using the native EditText or your own custom EditText. Please provide these details to get an exact solution to your query.
Apart from that, To build Rich Text Editor with the features mentioned by you (like Bold, Italic, Quote and Images etc), you can use this library RichEditor for Android: <https://github.com/wasabeef/richeditor-android>. It provides many features which may interest you.
Upvotes: 1 <issue_comment>username_3: Try with:
```
EditText text = (EditText)findViewById(R.id.text);
// Note: plain setCompoundDrawables() draws nothing unless the drawable's bounds are set first;
// the WithIntrinsicBounds variant sets them for you.
text.setCompoundDrawablesWithIntrinsicBounds(null, null, getResources().getDrawable(R.drawable.check_box), null);
```
Upvotes: 1 |
2018/03/20 | 418 | 1,482 | <issue_start>I have a function that loads a page and adds a new user.
I want to show an alert() and then redirect/load a different function to change the URL.
I have used
```
```
2018/03/20 | 1,529 | 5,640 | <issue_start>username_0: It appears that Google Finance Currency Converter has stopped working altogether. A week ago I started getting these email notifications from my Magento 1.9.2 store:
Currency update warnings:
WARNING: Cannot retrieve rate from <https://finance.google.com/finance/converter?a=1&from=GBP&to=EUR>.
WARNING: Cannot retrieve rate from <https://finance.google.com/finance/converter?a=1&from=GBP&to=USD>.
Those URLs are indeed no longer valid. Does anyone know if there are new URLs we can use, or do we need to configure a different service?<issue_comment>username_1: It seems to be intermittent (the converter only loads successfully about once every 10 attempts for me). I've personally started configuring other services; I am using bank APIs (currently a Swedish one, so it might not help you). Check with your bank, they usually have APIs.
Good luck!
Upvotes: 1 <issue_comment>username_2: The problem is with the link; Google updated the API link recently, and I only got the existing link to work once in 10 checks. Try changing to this link: <https://www.google.com/finance/converter>
see this <https://www.techbuy.in/google-finance-api-currency-converter-not-working-updated-link-check-currency-converter/>
Upvotes: 0 <issue_comment>username_3: I was facing the same problem since last week, but the new URL solved it and currency conversion is now working fine.
try this:
<https://finance.google.com/bctzjpnsun/converter>
Upvotes: 0 <issue_comment>username_4: Apparently Google doesn't offer this service anymore.
The main alternatives look to be:
* [Fixer.io API](https://github.com/fixerAPI/fixer#readme)
* [Currencylayer API](https://currencylayer.com/documentation)
Both offer 1,000 requests for free per month (you need to create an account on their homepage).
Source: <https://stackoverflow.com/a/8391430/716435>
Upvotes: 1 <issue_comment>username_5: This link is not working anymore.
```
protected $_url = 'https://finance.google.com/finance/converter?a=1&from={{CURRENCY_FROM}}&to={{CURRENCY_TO}}';
```
I researched and found this code.
Find this file:
```
app/code/local/Payserv/GoogleFinance/Model/Google.php
```
Replace the code with this:
```
class Payserv_GoogleFinance_Model_Google extends Mage_Directory_Model_Currency_Import_Abstract {
protected $_url = 'http://free.currencyconverterapi.com/api/v3/convert?q={{CURRENCY_FROM}}_{{CURRENCY_TO}}';
protected $_messages = array();
protected function _convert($currencyFrom, $currencyTo, $retry=0) {
$url = str_replace('{{CURRENCY_FROM}}', $currencyFrom, $this->_url);
$url = str_replace('{{CURRENCY_TO}}', $currencyTo, $url);
try {
$resultKey = $currencyFrom.'_'.$currencyTo;
$response = file_get_contents($url);
$data = Mage::helper('core')->jsonDecode($response);
$results = $data['results'][$resultKey];
$queryCount = $data['query']['count'];
if( !$queryCount && !isset($results)) {
$this->_messages[] = Mage::helper('directory')->__('Cannot retrieve rate from %s.', $url);
return null;
}
return (float)$results['val'];
} catch (Exception $e) {
if ($retry == 0) {
$this->_convert($currencyFrom, $currencyTo, 1);
} else {
$this->_messages[] = Mage::helper('directory')->__('Cannot retrieve rate from %s', $url);
}
}
}
}
```
Upvotes: 2 <issue_comment>username_6: Google's finance URL doesn't seem to work for now, so I have prepared a workaround using MSN Money's (Microsoft's) API. It returns JSON, so you can consume it in any programming language; I have included a sample using PHP:
```
function msn($from, $to, $amount) {
$url = 'https://finance.services.appex.bing.com/Market.svc/ChartDataV5?symbols=245.20.'.strtoupper($from).strtoupper($to).'LITE&chartType=1y';
$request = curl_init();
$timeOut = 0;
curl_setopt($request, CURLOPT_URL, $url);
curl_setopt($request, CURLOPT_RETURNTRANSFER, 1);
curl_setopt($request, CURLOPT_USERAGENT, 'Mozilla/4.0 (compatible; MSIE 8.0; Windows NT 6.1)');
curl_setopt($request, CURLOPT_CONNECTTIMEOUT, $timeOut);
$response = json_decode(curl_exec($request));
curl_close($request);
$series = $response[0]->Series;
$rate = end($series)->P; // array_last() is a Laravel helper; plain PHP uses end()
return $rate * $amount;
}
```
The above function accepts the currency you currently have, the target currency, and an amount. It sends a GET request to the MSN URL and parses the JSON to get today's exchange rate. Finally, it multiplies the rate by your amount to convert it to the target currency.
I hope this solves your need. The code has plenty of room for optimization; I just gave you a simple implementation.
For example, you can save the exchange rate in your database and use that rate for one day; this way you will only call the API once a day.
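That once-a-day idea can be as simple as keying a cache on the calendar date. Here is a minimal in-memory sketch (a real store would be your database, as suggested above, and `fetch` stands in for whatever HTTP call you use):

```python
from datetime import date

_cache = {}

def daily_rate(pair, fetch):
    """Call fetch() at most once per currency pair per calendar day."""
    key = (pair, date.today().isoformat())
    if key not in _cache:
        _cache[key] = fetch()
    return _cache[key]

calls = []
def fetch_gbp_usd():
    calls.append(1)  # stands in for the real HTTP request
    return 1.40

print(daily_rate("GBP_USD", fetch_gbp_usd))  # 1.4
print(daily_rate("GBP_USD", fetch_gbp_usd))  # 1.4 (cached, no second fetch)
print(len(calls))                            # 1
```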
Upvotes: -1 <issue_comment>username_7: Google doesn't provide the currency converter API anymore. There are several alternative APIs offering currency conversion data. Some have been mentioned in posts already (Fixer, Currencylayer...)
Another option is [SWOP currency exchange rate API](https://swop.cx/), a fast, easy to use, reliable and transparent foreign exchange rate API made from developers for developers. Full disclaimer: I'm one of the developers of SWOP :)
* The SWOP API offers current and historical rates for 180+ currencies. They are gathered directly from trusted sources (various Central Banks and other important banks).
* The SWOP API has two endpoints, GraphQL and REST/JSON, for developer convenience.
* There's a free plan allowing 1,000 requests per month.
Upvotes: 1 |
2018/03/20 | 443 | 2,078 | <issue_start>username_0: We have a sharepoint website and as part of functional process across the website where there are lot of documents been uploaded. Currently they are been stored into database which results in very bulky table in terms of size. My initial approach was to utilize sharepoint to store the documents into file library. Does anybody think database is the wiser options and why or any other approach which is performant and better to store confidential files?<issue_comment>username_1: I never recommend saving files in the database. The easiest approach is to store them on the server in a directory and only save the file names in the database. This makes it easy to show them via a URL in a browser as well. Create a table with a column for the OriginalFileName and one for the ActualFileName. When i save a file to the server after its uploaded i usually change the name so you never have complications with duplicate file names. I use a GUID as the actual file name when its saved and save the original file name in the database along with the actual so you can get both back.
Upvotes: 0 <issue_comment>username_2: Using a database for storing documents is not a recommended approach, not only it will have large size but will be hard when it comes to maintenance and performance.
If you have a SharePoint server, why not go with a library or multiple libraries to store documents. You will get the below advantages when using SharePoint.
1.Permission management : you can set up access to documents and choose who access what.
2.Search : if there is a search service running you can search through your libraries.
3.OWA : office web apps can be used to open documents on the browser.
4.Audits : You can enable audit logs to see who does what.
Remember, SharePoint is a CMS and there are other options like MMS etc, but it stores the documents in a database too, its designed well so you dont have to worry much about it. If you go with your custom solution you will have to do a lot of custom development and testing.
Upvotes: 2 [selected_answer] |
2018/03/20 | 533 | 1,618 | <issue_start>username_0: I know the regex works. I tested it with Rubular and should work fine but it's not actually applying to my code.
I have the following present:
```
<% @region.locations.each do |location| %>
<%=other(location.hours\_operation)%>
<% end %>
```
For my helper I have the following:
```
def other(string)
string.split(/(?<=\) )/).join("\n")
end
```
It's showing on the page as:
```
7:30 AM - 5:00 PM (M-F) 7:30 AM - 12 PM (Sat)
```
What I'm looking for is:
```
7:30 AM - 5:00 PM (M-F)
7:30 AM - 12 PM (Sat)
```
It looks like it should apply but it's actually not. I've tried changing the helper to be something like.
```
def other(string)
string.to_s.split(/(?<=\) )/).join("\n")
end
```
I've even tried just putting the regex on location.hours\_operation and nothing.
EDIT:
I've also tried the following:
```
def other(string)
puts string.to_s.split(/(?<=\) )/).join("\n")
end
```
This strips out the line completely.
```
def other(string)
string.to_s.split(/(?<=\) )/).join("\n")
string
end
```
This does nothing. The same if I did return string<issue_comment>username_1: You need `string.split(/(?<=\) )/).join('
').html_safe`. HTML ignores `\n` linefeeds.
Also, `#{location.name} (#{location.abbreviation})` uses Ruby string interpolation, `#{}`, which ERB does not use. Try `<%= location.name %> (<%= location.abbreviation %>)`
Upvotes: 2 [selected_answer]<issue_comment>username_2: I'm leaving username_1's response as correct but since I hit some issues with html\_safe I ended up using instead:
```
string.gsub(/(?<=\) )/, '')
```
Upvotes: 0 |
2018/03/20 | 1,103 | 3,666 | <issue_start>I'm creating a carousel with ng-bootstrap. I want to change the arrow colors or images from white to black, because I have white background images and the arrows are invisible. My problem is that I can't get at the span; I don't know how to target it in SCSS. I'm using Bootstrap 4 and ng-bootstrap for Angular <https://ng-bootstrap.github.io/#/components/carousel/examples>. When I change the image URL in the console it works. I tried targeting the arrows directly but nothing happens.
My code:
```
![]()
```
scss:
```
.imageCarousel {
max-height: 500px;
text-align: center;
color: $color2;
.carousel .slide {
width: 50px;
height: 50px;
background-image: url("https://storage.googleapis.com/staging.pc-shop-260af.appspot.com/ic_keyboard_arrow_right_black_24px.svg");
}
img {
// border: 2px solid blue;
max-width: 100%;
max-height: 400px;
}
}
```<issue_comment>username_1: ngb-carousel uses SVG images as controls; you can override them with your own prev and next button images:
```
.carousel-control-prev-icon{
background-image: url('https://apufpel.com.br/assets/img/seta_prim.png')
}
.carousel-control-next-icon{
background-image: url('https://apufpel.com.br/assets/img/seta_ult.png')
}
```
Upvotes: 4 [selected_answer]<issue_comment>username_2: When inspecting that control on the link provided you can see your two buttons have classes attached to them.
[](https://i.stack.imgur.com/7g7BP.png)
Those arrows have a background-image attribute. Override that attribute on those classes in your own CSS file.
Upvotes: -1 <issue_comment>username_3: Yes, it's solved, but sometimes it's necessary to set the view encapsulation to `None`:
```
@Component({
selector: "app-builder",
templateUrl: "./builder.component.html",
styleUrls: ["./builder.component.css"],
encapsulation: ViewEncapsulation.None
})
```
Upvotes: 3 <issue_comment>username_4: The answer given by Eduardo (above) only works if encapsulation is set to ViewEncapsulation.None which can have unintended side effects on other aspects of your CSS.
Instead the ::ng-deep pseudo-class can be used to target the specific CSS in question. In your scss file use the following:
```
::ng-deep .carousel-control-prev-icon{
background-image: url('your-replacement.svg');
}
::ng-deep .carousel-control-next-icon{
background-image: url('your-replacement.svg');
}
```
Upvotes: 2 <issue_comment>username_5: First, I would suggest upgrading to ng-bootstrap v6+, which correctly sets the ngb components' ViewEncapsulation to None so styles can be applied through global CSS or via unencapsulated component CSS. See (<https://github.com/ng-bootstrap/ng-bootstrap/issues/3479>).
Second, override the SVG by simply setting it again & changing the **fill** property to whatever color you desire.
```
.carousel-control-next-icon {
background-image: url("data:image/svg+xml,%3csvg xmlns='http://www.w3.org/2000/svg' fill='rgb(50, 54, 57)' width='8' height='8' viewBox='0 0 8 8'%3e%3cpath d='M2.75 0l-1.5 1.5L3.75 4l-2.5 2.5L2.75 8l4-4-4-4z'/%3e%3c/svg%3e") !important;
}
```
or alternatively you can invert the colors if you're just looking for more contrast:
```
.carousel-control-next-icon,
.carousel-control-prev-icon {
filter: invert(1);
}
```
Setting **encapsulation: ViewEncapsulation.None** on your host component isn't a best practice. The smarter thing to do is to make these entry changes in your project's styles.scss (or whatever is specified in angular.json as the global styles file).
Upvotes: 2 |
2018/03/20 | 941 | 3,310 | <issue_start>username_0: Using puppeteer, I am trying to retrieve all cookies for a specific web site (i.e. `https://google.com`) from Node.js.
My code is:
```
// Launch browser and open a new page
const browser = await puppeteer.launch({ headless: true, args: ['--disable-dev-shm-usage'] });
const page = await browser.newPage();
await page.goto(url, { waitUntil: 'networkidle2' });
var cookies = await page.cookies();
console.log(cookies);
await browser.close();
```
It only retrieves 2 cookies, named `1P_JAR` and `NID`. However, when I open the Chrome Dev tools, it shows a lot more.
I tried using the Chrome Dev Tools directly instead of puppeteer but I am getting the same results.
Is there another function I should call? Am I doing it correctly?<issue_comment>username_1: Thanks @try-catch-finally. I got it resolved and it was a simple rookie mistake.
I was comparing cookies in my own Google Chrome instance with the Puppeteer instance. However, in my instance, I was logged in to my Google account and Puppeteer (obviously) was not.
Google uses 2 cookies when you are NOT logged in and 12 when you are logged in.
Upvotes: 0 <issue_comment>username_2: The `page.cookies()` call only gets cookies that are available to JavaScript applications inside the browser, and not the ones marked `httpOnly`, which you see in the Chrome DevTools. The solution is to ask for all available cookies through the Devtools protocol and then filter for the site you're interested in.
```
var data = await page._client.send('Network.getAllCookies');
```
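Once you have the full list, filtering it down to a specific site is plain array work. A sketch (the domain matching here is simplified relative to real cookie-scoping rules, and the sample data is made up):

```javascript
// Keep only cookies whose domain covers the given hostname (simplified matching)
function cookiesForSite(cookies, hostname) {
  return cookies.filter((c) => {
    const d = c.domain.startsWith('.') ? c.domain.slice(1) : c.domain;
    return hostname === d || hostname.endsWith('.' + d);
  });
}

const all = [
  { name: 'NID',  domain: '.google.com', httpOnly: true },
  { name: 'sess', domain: 'example.com', httpOnly: false },
];
console.log(cookiesForSite(all, 'www.google.com').map((c) => c.name)); // [ 'NID' ]
```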
Upvotes: 6 [selected_answer]<issue_comment>username_3: If you use Playwright in place of Puppeteer, httponly cookies are readily accessible:
```
const { chromium } = require('playwright')
(async () => {
const browser = await chromium.launch()
const context = await browser.newContext()
const page = await context.newPage()
await page.goto('https://google.com', { waitUntil: 'networkidle' })
let allCookies = await context.cookies()
console.log (allCookies)
})();
```
returns:
```
[
{
sameSite: 'None',
name: '1P_JAR',
value: '2021-01-27-19',
domain: '.google.com',
path: '/',
expires: 1614369040.389115,
httpOnly: false,
secure: true
},
{
sameSite: 'None',
name: 'NID',
value: '208=VXtmbaUL...',
domain: '.google.com',
path: '/',
expires: 1627588239.572781,
httpOnly: true,
secure: false
}
]
```
Upvotes: 0 <issue_comment>username_4: You can utilise [Chrome DevTools Protocol](https://chromedevtools.github.io/devtools-protocol/) -> [getAllCookies](https://chromedevtools.github.io/devtools-protocol/tot/Network/#method-getAllCookies)
To get all browser cookies, regardless of any flags.
```js
const client = await page.target().createCDPSession();
const cookies = (await client.send('Network.getAllCookies')).cookies;
```
This will also play nice with typescript and tslint since something like
```js
const cookies = await page._client.send('Network.getAllCookies');
```
Will raise an error `TS2341: Property '_client' is private and only accessible within class 'Page'.`.
Upvotes: 4 <issue_comment>username_5: Just use `await page.goto('https://google.com', { waitUntil: 'networkidle2' })`, and you can get all the related cookies.
Upvotes: -1 |
2018/03/20 | 2,111 | 6,981 | <issue_start>username_0: I've got this running on my Macbook Pro, so in an attempt to transfer it to my server for prolonged use, I have run into a snag. Server is Ubuntu 16.04 (server) and Python 2.7.12.
I installed the latest version of selenium using pip, and the latest version of ChromeDriver. I can start it fine from the command line (there is a GPU error but it seems not to cause any problems.)
However, when I try to start it from within python, using this code:
```
from selenium import webdriver
from selenium.webdriver.chrome.options import Options
path_to_chromedriver = '/usr/local/bin/chromedriver'
options = Options()
options.add_argument("start-maximized")
options.add_argument("--headless")
options.add_argument("--disable-gpu")
options.add_argument("--disable-extensions")
options.add_argument("--no-sandbox")
service_args = ['--verbose']
service_log_path = '/tmp/local/chromedriver.log'
driver = webdriver.Chrome(executable_path=path_to_chromedriver, chrome_options=options, service_args=service_args, service_log_path=service_log_path)
```
However, when I start things up, here's what I get in the chromedriver.log (after all the initial startup output for COMMAND InitSession):
```
[0.998][INFO]: Launching chrome: /opt/google/chrome/google-chrome --disable-background-networking --disable-client-side-phishing-detection --disable-default-ap\
ps --disable-hang-monitor --disable-popup-blocking --disable-prompt-on-repost --disable-sync --disable-web-resources --enable-automation --enable-logging --ign\
ore-certificate-errors --load-extension=/tmp/.org.chromium.Chromium.tPkUXa/internal --log-level=0 --metrics-recording-only --no-first-run --password-store=basi\
c --remote-debugging-port=12752 --safebrowsing-disable-auto-update --test-type=webdriver --use-mock-keychain --user-data-dir=/tmp/.org.chromium.Chromium.ODIHcL\
data:,
[0.998][DEBUG]: DevTools request: http://localhost:12752/json/version
[1.078][DEBUG]: DevTools request failed
```
And it continues to spit out that error message for about 60 seconds until it ultimately returns this in Python:
```
WebDriverException: Message: unknown error: Chrome failed to start: exited abnormally
(Driver info: chromedriver=2.29.461571 (8a88bbe0775e2a23afda0ceaf2ef7ee74e822cc5),platform=Linux 4.4.0-116-generic x86_64)
```
I cannot for the life of me figure out how to fix this, and I have searched extensively.
I was under the impression that I didn't need to use any virtual screen emulation if I was using headless, but could that be the issue? (I'm running in an init mode w/o a GUI in Ubuntu.)
Also, I don't like that there's no --headless being passed in that command to start chrome-stable... is that an issue?
Thanks.
EDITS:
Chromedriver version (I got what I thought was latest version):
```
$ chromedriver
Starting ChromeDriver 2.29.461571 (8a88bbe0775e2a23afda0ceaf2ef7ee74e822cc5) on port 9515
Only local connections are allowed.
```
Selenium
```
$ pip freeze | grep selenium
selenium==3.11.0
```
Chrome
```
$ google-chrome --version
Google Chrome 65.0.3325.162
```
Here's the log trace that shows what's going on-- it loops on the DevTools failure for 60 seconds then dies.
```
Full log/error trace:
[0.997][INFO]: COMMAND InitSession {
"capabilities": {
"alwaysMatch": {
"browserName": "chrome",
"goog:chromeOptions": {
"args": [ "start-maximized", "--headless", "--disable-gpu", "--disable-extensions", "--no-sandbox" ],
"extensions": [ ]
},
"platformName": "any"
},
"firstMatch": [ {
} ]
},
"desiredCapabilities": {
"browserName": "chrome",
"goog:chromeOptions": {
"args": [ "start-maximized", "--headless", "--disable-gpu", "--disable-extensions", "--no-sandbox" ],
"extensions": [ ]
},
"platform": "ANY",
"version": ""
}
}
[0.997][INFO]: Populating Preferences file: {
"alternate_error_pages": {
"enabled": false
},
"autofill": {
"enabled": false
},
"browser": {
"check_default_browser": false
},
"distribution": {
"import_bookmarks": false,
"import_history": false,
"import_search_engine": false,
"make_chrome_default_for_user": false,
"show_welcome_page": false,
"skip_first_run_ui": true
},
"dns_prefetching": {
"enabled": false
},
"profile": {
"content_settings": {
"pattern_pairs": {
"https://*,*": {
"media-stream": {
"audio": "Default",
"video": "Default"
}
}
}
},
"default_content_setting_values": {
"geolocation": 1
},
"default_content_settings": {
"geolocation": 1,
"mouselock": 1,
"notifications": 1,
"popups": 1,
"ppapi-broker": 1
},
"password_manager_enabled": false
},
"safebrowsing": {
"enabled": false
},
"search": {
"suggest_enabled": false
},
"translate": {
"enabled": false
}
}
[0.997][INFO]: Populating Local State file: {
"background_mode": {
"enabled": false
},
"ssl": {
"rev_checking": {
"enabled": false
}
}
}
[0.998][INFO]: Launching chrome: /opt/google/chrome/google-chrome --disable-background-networking --disable-client-side-phishing-detection --disable-default-apps --disable-hang-monitor --disable-popup-blocking --disable-prompt-on-repost --disable-sync --disable-web-resources --enable-automation --enable-logging --ignore-certificate-errors --load-extension=/tmp/.org.chromium.Chromium.tPkUXa/internal --log-level=0 --metrics-recording-only --no-first-run --password-store=<PASSWORD> --remote-debugging-port=12752 --safebrowsing-disable-auto-update --test-type=webdriver --use-mock-keychain --user-data-dir=/tmp/.org.chromium.Chromium.ODIHcL data:,
[0.998][DEBUG]: DevTools request: http://localhost:12752/json/version
[1.078][DEBUG]: DevTools request failed
...
[61.018][DEBUG]: DevTools request: http://localhost:12752/json/version
[61.018][DEBUG]: DevTools request failed
[61.021][INFO]: RESPONSE InitSession unknown error: Chrome failed to start: exited abnormally
[61.021][DEBUG]: Log type 'driver' lost 0 entries on destruction
[61.021][DEBUG]: Log type 'browser' lost 0 entries on destruction
```<issue_comment>username_1: The answer was: use the latest version of chromedriver!
<https://sites.google.com/a/chromium.org/chromedriver/downloads>
I copy/pasted from what I thought was an up-to-date script, but it was too old a version.
Upvotes: 0 <issue_comment>username_2: Your chromedriver is out of date for your version of chrome. You're using `chromedriver=2.29.461571` but the latest version of chromedriver, which is compatible with Chrome65 is `2.37`
You can find the latest chromedriver for your platform here: <https://sites.google.com/a/chromium.org/chromedriver/downloads>
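As a quick sanity check before blaming the test code, the versions the two binaries report can be compared programmatically. Below is a minimal Python sketch (an illustration, not part of the original answer; the binary names are assumptions, and for the 2.x chromedriver series the driver's major number does not mirror Chrome's, so treat the output as a prompt to check the release notes rather than a hard pass/fail):

```python
import re
import subprocess

def major_version(version_output):
    """Extract the leading major version number from a tool's
    '--version' output, e.g. 'ChromeDriver 2.37.544315' -> 2."""
    match = re.search(r"(\d+)\.", version_output)
    if match is None:
        raise ValueError("no version number found in %r" % version_output)
    return int(match.group(1))

def report_versions(chrome_cmd="google-chrome", driver_cmd="chromedriver"):
    """Print the major version reported by the browser and the driver
    so a stale chromedriver is obvious before Selenium ever starts."""
    for cmd in (chrome_cmd, driver_cmd):
        out = subprocess.check_output([cmd, "--version"], text=True)
        print(cmd, "-> major version", major_version(out))

print(major_version("Google Chrome 65.0.3325.181"))  # 65
```

`report_versions()` needs both binaries on the PATH; the parsing helper alone works anywhere.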
Upvotes: 2 |
2018/03/20 | 815 | 3,380 | <issue_start>username_0: I have a Ratpack app written with the Groovy DSL. (Embedded in Java, so not a script.)
I want to load the server's SSL certificates from a config file supplied in the command line options. (The certs will directly embedded in the config, or possibly in a PEM file referenced somewhere in the config.)
For example:
```
java -jar httpd.jar /etc/app/sslConfig.yml
```
sslConfig.yml:
```
---
ssl:
privateKey: file:///etc/app/privateKey.pem
certChain: file:///etc/app/certChain.pem
```
I seem to have a chicken-and-egg problem using the `serverConfig`'s facilities for reading the config file in order to configure the `SslContext` later in the `serverConfig`. The server config isn't created at the point I want to load the SslContext.
To illustrate, the DSL definition I have is something like this:
```
// SSL Config POJO definition
class SslConfig {
String privateKey
String certChain
SslContext build() { /* ... */ }
}
// ... other declarations here...
Path configPath = Paths.get(args[1]) // get this path from the CLI options
ratpack {
serverConfig {
yaml "/defaultConfig.yaml" // Defaults defined in this resource
yaml configPath // The user-supplied config file
env()
sysProps('genset-server')
require("/ssl", SslConfig) // Map the config to a POJO
ssl sslConfig // HOW DO I GET AN INSTANCE OF that SslConfig POJO HERE?
baseDir BaseDir.find()
}
handlers {
get { // ...
}
}
}
```
Possibly there is a solution to this (loading the SSL context in a later block?)
Or possibly just a better way to go about the whole thing..?<issue_comment>username_1: You could create a separate [`ConfigDataBuilder`](https://ratpack.io/manual/current/api/ratpack/config/ConfigDataBuilder.html) to load up a config object to deserialize your ssl config.
Alternatively, you can bind directly to `server.ssl`. All of the `ServerConfig` properties bind to the `server` space within the config.
Upvotes: 2 <issue_comment>username_2: The solution I am currently using is this, with an addition of a `#builder` method to `SslConfig` which returns a `SslContextBuilder` defined using its other fields.
```
ratpack {
serverConfig {
// Defaults defined in this resource
yaml RatpackEntryPoint.getResource("/defaultConfig.yaml")
// Optionally load the config path passed via the configFile parameter (if not null)
switch (configPath) {
case ~/.*[.]ya?ml/: yaml configPath; break
case ~/.*[.]json/: json configPath; break
case ~/.*[.]properties/: props configPath; break
}
env()
sysProps('genset-server')
require("/ssl", SslConfig) // Map the config to a POJO
baseDir BaseDir.find()
// This is the important change.
// It apparently needs to come last, because it prevents
// later config directives working without errors
ssl build().getAsConfigObject('/ssl',SslConfig).object.builder().build()
}
handlers {
get { // ...
}
}
}
```
Essentially this performs an extra build of the `ServerConfig` in order to redefine the input to the second build, but it works.
Upvotes: 2 [selected_answer] |
2018/03/20 | 525 | 1,409 | <issue_start>username_0: There is a nice way of finding the nonzero min/max of an array excluding zeros described in [here](https://stackoverflow.com/questions/7164397/find-the-min-max-excluding-zeros-in-a-numpy-array-or-a-tuple-in-python):
```
import numpy as np
minval = np.min(a[np.nonzero(a)])
maxval = np.max(a[np.nonzero(a)])
```
However, this won't work as soon as `a` is a 2- or more dimensional array and an axis for the min/max is desired. Any simple solutions for that?<issue_comment>username_1: This is a *non*-vectorised approach. It can be vectorised by setting 0 values to `a.min()` / `a.max()` as separate steps.
```
import numpy as np
a = np.array([[1, 2, 0],
[3, 1, 9],
[0, 3, 4]])
minval = np.min(np.where(a==0, a.max(), a), axis=0)
# array([ 1., 1., 4.])
maxval = np.max(np.where(a==0, a.min(), a), axis=0)
# array([ 3., 3., 9.])
```
Upvotes: 3 [selected_answer]<issue_comment>username_2: Masked arrays are designed exactly for these kind of purposes. You can leverage masking zeros from array (or ANY other kind of mask you desire) and do pretty much most of the stuff you do on regular arrays on your masked array now:
```
import numpy.ma as ma
mx = ma.masked_array(x, mask=x==0)
mx.min(1)
```
Example input:
```
x = np.array([[3., 2., 0., 1., 6.], [8., 4., 5., 0., 6.], [0., 7., 2., 5., 0.]])
```
output:
```
[1.0 4.0 2.0]
```
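Combining this with the axis handling the question asks for: once the zeros are masked, any axis-wise reduction works as usual. A self-contained sketch (reusing the 3×3 example array from the other answer):

```python
import numpy as np
import numpy.ma as ma

a = np.array([[1, 2, 0],
              [3, 1, 9],
              [0, 3, 4]])

# Mask the zeros once, then reduce along whichever axis is needed.
m = ma.masked_array(a, mask=(a == 0))

min_per_col = m.min(axis=0).filled()  # [1 1 4]
max_per_col = m.max(axis=0).filled()  # [3 3 9]
print(min_per_col, max_per_col)
```

If an entire row or column is zero, the corresponding result element stays masked; `.filled(fill_value)` lets you pick a sentinel for that case.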
Upvotes: 0 |
2018/03/20 | 847 | 2,627 | <issue_start>username_0: I have an `std::string` filled with extended ASCII values (e.g. `čáě`). I need to URL encode this string for JavaScript to decode with `DecodeURIComponent`.
I have tried converting it to UTF-16 and then to UTF-8 via the `windows-1252` codepoint, but wasn't able to do so as there is not enough examples for the `MultiByteToWideChar` and `WideCharToMultiByte` functions.
I am compiling with MSVC-14.0 on Windows 10 64-bit.
How can I at least iterate over the individual bytes of the final UTF-8 string for me to URL encode?
Thanks<issue_comment>username_1: You can use `MultiByteToWideChar` to convert the string to UTF-16 and then encode the chars one by one.
Example code:
```
std::string readData = "Extended ASCII characters (ěščřžýáíé)";
int size = MultiByteToWideChar(
1252, //1252 corresponds with windows-1252 codepoint
0,
readData.c_str(),
-1, //the string is null terminated, no need to pass the length
NULL,
0
);
wchar_t* wchar_cstr = new wchar_t[size];
MultiByteToWideChar(
1252,
0,
readData.c_str(),
-1,
wchar_cstr,
size
);
std::stringstream encodeStream;
for(uint32_t i = 0; i + 1 < size; i++){ // size includes the terminating null; skip it
wchar_t wchar = wchar_cstr[i];
uint16_t val = (uint16_t) wchar;
encodeStream << "%" << std::setfill('0') << std::setw(2) << std::hex << val;
}
delete[] wchar_cstr;
std::string encodedString = encodeStream.str(); // the URL encoded string
```
While this does encode the basic ASCII characters ( < 128 ) it is completely decodable by JavaScript, which was the end goal.
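For reference (not part of the original answer): standard URL encoding, which is what `decodeURIComponent` reverses for characters above 127, percent-encodes the UTF-8 bytes of each character rather than its UTF-16 code unit. Python's standard library can produce the expected reference output to compare the C++ result against, using the question's sample characters:

```python
from urllib.parse import quote, unquote

text = "čáě"

# quote() percent-encodes the UTF-8 bytes of the string; this is
# exactly the form that JavaScript's decodeURIComponent() undoes.
encoded = quote(text)          # '%C4%8D%C3%A1%C4%9B'
roundtrip = unquote(encoded)   # back to 'čáě'
print(encoded, roundtrip)
```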
Upvotes: 2 [selected_answer]<issue_comment>username_2: I managed to do it with quite simple code.
Here is an example of converting JSON read from file into URL and sending to an external web site for showing syntax errors in the JSON (tested on MS/Windows):
```
void EncodeJsonFileTextAndSendToExternalWebSiteToShowSyntaxErrors (const std::string &jsonTxt)
{
std::stringstream encodeStream;
for (char c : jsonTxt)
{
if (c>='0' && c<='9' || c>='a' && c<='z' || c>='A' && c<='Z' || strchr("{}();",c))
encodeStream << c;
else
encodeStream << "%" << std::setfill('0') << std::setw(2) << std::hex << (int)c;
}
std::string url = "cmd /c start https://jsonlint.com/?json=" + encodeStream.str();
system(url.c_str());
}
```
which automatically opens a web browser like that: [https://jsonlint.com/?json={%0a%22dataset%20name%22%3a%20%22CIHP%22%0alabel%2017%0a}](https://jsonlint.com/?json=%7B%0a%22dataset%20name%22%3a%20%22CIHP%22%0alabel%2017%0a%7D)
Upvotes: 0 |
2018/03/20 | 1,031 | 3,271 | <issue_start>username_0: I am trying to make table with dynamically added rows. The problem is that each row is a form row, with several inputs. I had the PHP function which generate proper row and i manage to send it through $.post() to script. I checked, the code is loading properly. But when i use .append(), select inputs in my html gone crazy. Effect is on photo:
[Visual effect](https://i.stack.imgur.com/7NgOo.jpg)
The funny thing is that the first row is made by the same function as the second one, but the first is added by PHP and the second by .append(). Before .append() the HTML looks OK on the page, but when I checked the page source, the selected marker in each select had moved before the first option. I have no idea how that is possible or what to do about it. Here is my script function which should append the new row:
```
function addRow(){
var id = $("#id").val();
var adres = $("#for_ajax").val() + "/inzynierka/ajax_scripts.php";
$.post(adres,{'funkcja' : 'getTableRow', 'id' : id},function(output){
$('#ideas_table').append(output);
});
}
```
I note that `output` is how it should be.
This is what it looks like:
```
| | BWWPNBN | BWWPNBN | PLN | Godzin | PLN | BWWPNBN | | Usuń |
```
2018/03/20 | 1,088 | 4,271 | <issue_start>username_0: I hope my title makes sense, cause I don't know how to phrase it any shorter.
I am doing a login and sign up form with parameters and hashing. I have used parameters before but never in a signup form.
So the issue is that when I make a new user, it ONLY inserts the password but not the username. I have tried changing the name of the username field, I have checked that the connection to the database is correct, and I am simply at a loss for what to do now.
My database can be seen here:
[](https://i.stack.imgur.com/Vqe5G.png)
```
$username = $password = $confirm_password = "";
$username_err = $password_err = $confirm_password_err = "";
// Processing form data when form is submitted
if($_SERVER["REQUEST_METHOD"] == "POST"){
// Validate username
if(empty(trim($_POST["username"]))){
$username_err = "Please enter a username.";
} else{
// Prepare a select statement
$sql = "SELECT id FROM user WHERE username = ?";
if($stmt = mysqli_prepare($conn, $sql)){
// Bind variables to the prepared statement as parameters
mysqli_stmt_bind_param($stmt, "s", $param_username);
// Set parameters
$param_username = trim($_POST["username"]);
// Attempt to execute the prepared statement
if(mysqli_stmt_execute($stmt)){
/* store result */
mysqli_stmt_store_result($stmt);
if(mysqli_stmt_num_rows($stmt) == 1){
$username_err = "This username is already taken.";
} else{
$username = trim($_POST["username"]);
}
} else{
echo "Oops! Something went wrong. Please try again later.";
}
}
// Close statement
mysqli_stmt_close($stmt);
}
// Validate password
if(empty(trim($_POST['password']))){
$password_err = "Please enter a password.";
} elseif(strlen(trim($_POST['password'])) < 6){
$password_err = "Password must have atleast 6 characters.";
} else{
$password = trim($_POST['password']);
}
// Validate confirm password
if(empty(trim($_POST["confirm_password"]))){
$confirm_password_err = 'Please confirm password.';
} else{
$confirm_password = trim($_POST['confirm_password']);
if($password != $confirm_password){
$confirm_password_err = 'Password did not match.';
}
}
// Check input errors before inserting in database
if(empty($username_err) && empty($password_err) &&
empty($confirm_password_err)){
// Prepare an insert statement
$sql = "INSERT INTO user (name, password) VALUES (?, ?)";
if($stmt = mysqli_prepare($conn, $sql)){
// Bind variables to the prepared statement as parameters
mysqli_stmt_bind_param($stmt, "ss", $param_username,
$param_password);
// Set parameters
$param_username = $username;
$param_password = password_hash($password, PASSWORD_DEFAULT);
// Creates a password hash
// Attempt to execute the prepared statement
if(mysqli_stmt_execute($stmt)){
// Redirect to login page
// header("location: login.php");
echo "You have been added";
} else{
echo "Something went wrong. Please try again later.";
}
}
// Close statement
mysqli_stmt_close($stmt);
}
// Close connection
mysqli_close($conn);
}
?>
<?php
include "header.php";
?>
Sign Up
-------
Please fill this form to create an account.
" method="post">
Username
<?php echo $username_err; ?>
Password
<?php echo $password_err; ?>
Confirm Password
<?php echo $confirm_password_err; ?>
Already have an account? [Login here](login.php).
<?php
include "footer.php";
?>
```
Hope you can help me, feel free to ask questions if I haven't made myself clear enough :)<issue_comment>username_1: Remove the following line
```
$param_username = $username;
```
Because you overwrite $param_username, which is already set with trim($_POST["username"]).
Have a nice day.
Upvotes: 1 <issue_comment>username_2: It turned out the $username = $password = $confirm_password = ""; line was clearing my textbox, so I removed it, added $username = $_POST["username"];, and then the code worked.
Upvotes: 1 [selected_answer] |
2018/03/20 | 1,654 | 4,210 | <issue_start>username_0: I have two simple data frames containing both the columns "word" and "n" for how often a certain word occurred. Here is an example:
```
df1 <- data.frame(word=c("beautiful","nice","like","good"),n=c(400,378,29,10))
df2 <- data.frame(word=c("beautiful","nice","like","good","wonderful","awesome","sad","happy"),n=c(6000,20,5,150,300,26,17,195))
```
Besides the words of `df1`, `df2` contains much more words so `df1` is only a small subset of `df2`.
I found the words, that are contained in both, `df1` and `df2`. Now I would like to subtract the word countings of `df1` from `df2` if the specific word is contained in `df2` , meaning I would like to do the following:
* Subtract word counting: `df2$n - df1$n`
* only IF `df1$word` is contained in `df2$word`
I hope that my problem is clear.
I already found all the words from df1 that are also contained in df2
```
df1 %>% filter(df1$word %in% df2$word)
```
However, I am struggling with the subtraction command based on the condition that the words in `df1` must also be in `df2`, and only then subtract `df2$n - df1$n`.
Thank you for your help!<issue_comment>username_1: ```
require(dplyr)
df1 %>%
inner_join(df2, by = 'word') %>%
mutate(diff = n.y - n.x) %>%
select(word, diff)
```
Gives
```
word diff
1 beautiful 5600
2 nice -358
3 like -24
4 good 140
```
Upvotes: 2 <issue_comment>username_2: Using `merge`:
```
> df.tmp <- merge(df1, df2, by="word", all=TRUE)
> df.tmp$result <- df.tmp$n.y - df.tmp$n.x
> df.tmp
word n.x n.y result
1 beautiful 400 6000 5600
2 good 10 150 140
3 like 29 5 -24
4 nice 378 20 -358
5 awesome NA 26 NA
6 happy NA 195 NA
7 sad NA 17 NA
8 wonderful NA 300 NA
```
If you only want matched words
```
> df.tmp <- merge(df1, df2, by="word")
> df.tmp$result <- df.tmp$n.y - df.tmp$n.x
> df.tmp
word n.x n.y result
1 beautiful 400 6000 5600
2 good 10 150 140
3 like 29 5 -24
4 nice 378 20 -358
```
Upvotes: 2 <issue_comment>username_3: Here is a quick solution using a for loop and the `%in%` operator.
```
df2$diff <- NA
for (i in 1:nrow(df2)) {
if (df2$word[i] %in% df1$word[i]) {
df2$diff[i] <- df2$n[i] - df1$n[i]
}
}
df2
```
Output:
```
> df2
word n diff
1 beautiful 6000 5600
2 nice 20 -358
3 like 5 -24
4 good 150 140
5 wonderful 300 NA
6 awesome 26 NA
7 sad 17 NA
8 happy 195 NA
```
Upvotes: 2 <issue_comment>username_4: Here's a vectorized base solution where Boolean multiplication is used to replace an if-then construct used in the for-lop from @username_3:
```
df2$n.adjusted <- df2$n - (df2$word %in% df1$word)* # zero if no match
df1$n[ match(df1$word, df2$word) ] # gets order correct
> df2
word n n.adjusted
1 beautiful 6000 5600
2 nice 20 -358
3 like 5 -24
4 good 150 140
5 wonderful 300 300
6 awesome 26 26
7 sad 17 17
8 happy 195 195
```
Here's the example I used to test where the order of the df1 words was not the same as the order in df2 and the lengths were not an even multiple:
```
> df1 <-data.frame(word=c("nice","beautiful","like","good"),n=c(378,400,29,10))
> df2 <- data.frame(word=c("beautiful","nice","like","good","wonderful","awesome","sad"),n=c(6000,20,5,150,300,26,17))
>
> df1
word n
1 nice 378
2 beautiful 400
3 like 29
4 good 10
> df2
word n
1 beautiful 6000
2 nice 20
3 like 5
4 good 150
5 wonderful 300
6 awesome 26
7 sad 17
> df2$n.adjusted <- df2$n - (df2$word %in% df1$word)*df1$n[match(df1$word, df2$word)]
Warning message:
In (df2$word %in% df1$word) * df1$n[match(df1$word, df2$word)] :
longer object length is not a multiple of shorter object length
> df2
word n n.adjusted
1 beautiful 6000 5600
2 nice 20 -358
3 like 5 -24
4 good 150 140
5 wonderful 300 300
6 awesome 26 26
7 sad 17 17
```
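All of the answers above implement the same join-then-subtract step; stripped of the data-frame machinery, the logic is just a conditional lookup. For illustration only, here it is with plain Python dicts and the question's example counts:

```python
df1 = {"beautiful": 400, "nice": 378, "like": 29, "good": 10}
df2 = {"beautiful": 6000, "nice": 20, "like": 5, "good": 150,
       "wonderful": 300, "awesome": 26, "sad": 17, "happy": 195}

# Subtract df1's count only when the word is also present in df2;
# words that appear only in df2 keep their original count.
adjusted = {word: n - df1.get(word, 0) for word, n in df2.items()}
print(adjusted)
```

This mirrors the `all=TRUE` merge variant, except that unmatched words keep their original count instead of becoming NA.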
Upvotes: 2 |
2018/03/20 | 854 | 2,592 | <issue_start>username_0: I have a data file with strings, floats, and integers separated by a single comma and a random number of spaces.
for example:
```
john , smith , 3.87 , 2, 6
```
I would like to scan each value into a struct containing str, str, float, int, int and ignore the commas and spaces. I have figured it out up to the float but can't seem to get the integers. Any help would be appreciated; my code is as follows:
```
typedef struct applicant {
char first[15];
char last[15];
float gpa;
int grev;
int greq;
} num1;
int main(int argc, char *argv[])
{
FILE *in, *out;
in = fopen(argv[1], "r");
out = fopen(argv[2], "w");
num1 class[10];
int i;
fscanf(in, "%[^,],%[^,],%f, %d, %d\n", class[i].first, class[i].last, &class[i].gpa, &class[i].grev, &class[i].greq);
fprintf(out, "%s %s %f %d %d", class[i].first, class[i].last, class[i].gpa, class[i].grev, class[i].greq);
```<issue_comment>username_1: As [sturcotte06](https://stackoverflow.com/users/2584754/sturcotte06) mentioned you should use `strtok()` function alongside with `atoi()` and `atof()` to get the expected result.
```
char text[] = "john , smith , 3.87 , 2, 6";
strcpy(class[i].first, strtok(text, ","));
strcpy(class[i].last, strtok(NULL, ","));
class[i].gpa = atof(strtok(NULL, ","));
class[i].grev = atoi(strtok(NULL, ","));
class[i].greq = atoi(strtok(NULL, ","));
```
Upvotes: 2 <issue_comment>username_2: I suggest the following approach.
1. Read the contents of the file line by line.
2. I am assuming the white spaces are not relevant. If that is indeed the case, replace the comma characters with spaces.
3. Use a simpler format to read the data from the line of text to your struct.
Always check the return value of functions that read data from an input stream to make sure that you use the data only if the read operation was successful.
---
```
// Make it big enough for your needs.
#define LINE_SIZE 200
char line[LINE_SIZE];
if ( fgets(line, LINE_SIZE, in) != NULL )
{
// Replace the commas by white space.
char* cp = line;
for ( ; *cp != '\0'; ++cp )
{
if ( *cp == ',' )
{
*cp = ' ';
}
}
// Now use sscanf to read the data.
// Always provide width with %s to prevent buffer overflow.
int n = sscanf(line, "%14s%14s%f%d%d",
class[i].first,
class[i].last,
&class[i].gpa,
&class[i].grev,
&class[i].greq);
// Use the data only if sscanf is successful.
if ( n == 5 )
{
// Use the data
}
}
```
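As a cross-check of the intended behaviour (comma-separated fields with arbitrary surrounding whitespace), the same parse can be sketched in a few lines of Python and used as a test oracle for the C version. The field names mirror the question's struct; this is an illustration, not a replacement for the C code:

```python
def parse_applicant(line):
    """Split one 'first , last , gpa , grev, greq' record,
    tolerating any amount of whitespace around the commas."""
    first, last, gpa, grev, greq = (field.strip() for field in line.split(","))
    return {"first": first, "last": last,
            "gpa": float(gpa), "grev": int(grev), "greq": int(greq)}

record = parse_applicant("john ,    smith ,      3.87 , 2, 6")
print(record)
```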
Upvotes: 1 |
2018/03/20 | 2,916 | 8,847 | <issue_start>username_0: Suppose I wanted to use auto dense packing in a CSS grid layout. Is there any way of introducing non-rectangular region configurations? For instance an L-shaped region that cover two columns in one row and only one column in the next. I have tried explicitly naming the grid cells however this doesn't work.<issue_comment>username_1: There is no support for non-rectangular grid items. From the [spec](https://www.w3.org/TR/css-grid-1/#placement):
>
> Every grid item is associated with a grid area, a rectangular set of adjacent grid cells that the grid item occupies.
>
>
>
[And](https://www.w3.org/TR/css-grid-1/#grid-template-areas-property):
>
> Note: Non-rectangular or disconnected regions may be permitted in a future version of this module.
>
>
>
(which does not imply that such a feature has been planned, only that there is nothing *stopping* such a feature from being added in the future)
Upvotes: 5 [selected_answer]<issue_comment>username_2: While you can't technically create L-shaped grid items, [you can layer items over each other with z-index](https://developer.mozilla.org/en-US/docs/Web/CSS/CSS_Grid_Layout/Basic_Concepts_of_Grid_Layout#Layering_items_with_z-index).
Expanding on this, if you want gaps between the odd shapes, you can use a combination of `grid-gap` and `outline` like this:
```css
.wrapper {
display: grid;
grid-template-columns: repeat(6, 1fr);
grid-template-rows: repeat(4, 1fr);
grid-gap: 10px;
}
.box1 {
grid-column-start: 1;
grid-row-start: 1;
grid-row-end: 3;
}
.box2 {
grid-column-start: 2;
grid-column-end: 4;
grid-row-start: 1;
z-index: 1;
outline: 10px solid white;
}
.box3 {
grid-column-start: 3;
grid-column-end: 7;
grid-row-start: 1;
grid-row-end: 5;
text-align: right;
}
.box4 {
grid-column-start: 2;
grid-row-start: 2;
}
.box5 {
grid-column-start: 3;
grid-column-end: 5;
grid-row-start: 2;
z-index: 1;
outline: 10px solid white;
}
.box6 {
grid-column-start: 1;
grid-column-end: 3;
grid-row-start: 3;
grid-row-end: 5;
}
.wrapper {
background-color: #fff4e6;
}
.box {
background-color: #ffd8a8;
padding: 1em;
color: #d9480f;
}
```
```html
One
Two
Three
Four
Five
Six
```
[Open on Codepen](https://codepen.io/username_2/pen/ywJqxL)
Upvotes: 4 <issue_comment>username_3: My solution was to create grid containers that overlap each other and then use floats to mask out the area covered up, with a little JavaScript to adjust the position of those floats when scrolling.
```js
const setStyleProperty = (prop, val) => document.documentElement.style.setProperty(prop, val);
const setScrollbarWidth = () => {
const content = $("#center");
const innerWidth = content.innerWidth();
const width = content.width();
const left = parseFloat(content.css("padding-left"));
const right = parseFloat(content.css("padding-right"));
const scrollbarWidth = innerWidth - width - left - right || 0;
setStyleProperty("--scrollbar-width", scrollbarWidth + "px");
};
const onScroll = (event) => {
const id = event.target.id;
const scrollTop = event.target.scrollTop;
let offset = 0;
if (id.match(/^top/)) { offset = $(`#${id}`).height() / 2; }
setStyleProperty(`--${id}-top`, offset + scrollTop + "px");
};
$(document).ready(function () {
const topLeft = $("#top-left");
const topRight = $("#top-right");
const bottomLeft = $("#bottom-left");
const bottomRight = $("#bottom-right");
const topLeftSpan = topLeft.children("span").eq(0);
const topRightSpan = topRight.children("span").eq(0);
const bottomLeftSpan = bottomLeft.children("span").eq(0);
const bottomRightSpan = bottomRight.children("span").eq(0);
const centerSpan = $("#center>span");
for (i = 1; i < 501; i++) {
topLeftSpan.html(topLeftSpan.html() + `p-shaped grid (${i}) `);
topRightSpan.html(topRightSpan.html() + `q-shaped grid (${i}) `);
bottomLeftSpan.html(bottomLeftSpan.html() + `b-shaped grid (${i}) `);
bottomRightSpan.html(bottomRightSpan.html() + `d-shaped grid (${i}) `);
centerSpan.html(centerSpan.html() + `All work and no play makes Jack a dull boy. `);
}
topLeft.on("scroll", onScroll);
topRight.on("scroll", onScroll);
bottomLeft.on("scroll", onScroll);
bottomRight.on("scroll", onScroll);
$(window).on("resize", function () {
setScrollbarWidth();
topLeft.trigger("scroll");
topRight.trigger("scroll");
bottomLeft.trigger("scroll");
bottomRight.trigger("scroll");
}).trigger("resize");
});
```
```css
:root {
/* Dynamically adjust border width & content padding to desired values. */
--border-width: 12px;
--padding: 4px;
/* Default scrollbar width. JS code will adjust as necessary. */
--scrollbar-width: 17px;
/* Variables for control height position on outside-shape floats. JS code will adjust as necessary when scrolling */
--top-left-top: 50%;
--top-left-offset: calc(var(--top-left-top) - var(--padding) - (var(--border-width) * 0.25));
--top-right-top: 50%;
--top-right-offset: calc(var(--top-right-top) - var(--padding) - (var(--border-width) * 0.25));
--bottom-right-top: 0px;
--bottom-left-top: 0px;
/* Formulas to properly scale floats to pad L-shaped content around center grid according to dynamic values. */
--void-padding-left-right: calc(var(--padding) + var(--border-width) * 0.25 + (var(--scrollbar-width) * 0.5));
--void-padding-bottom: calc(var(--padding) * 2 + var(--border-width) * 0.25);
}
* {
color: #FFF;
font-family: 'Segoe UI', Tahoma, Geneva, Verdana, sans-serif;
font-size: 10pt;
}
body {
margin: 0;
}
.grid-container {
display: grid;
grid-template-columns: 1fr 1fr 1fr 1fr;
grid-template-rows: 1fr 1fr 1fr 1fr;
height: 100vh;
width: 100vw;
}
.grid-container>div {
position: relative;
overflow-x: hidden;
overflow-y: scroll;
border-color: slategrey;
border-style: solid;
border-width: var(--border-width);
padding: var(--padding);
}
.grid-container>div>div {
/* Uncomment background-color to view floats controlling padding. */
/* background-color: rgba(0, 0, 0, 0.5); */
position: relative;
}
#top-left {
grid-area: 1 / 1 / 3 / 3;
background-color: red;
border-width: var(--border-width) calc(var(--border-width) * 0.5) calc(var(--border-width) * 0.5) var(--border-width);
direction: rtl;
text-align: left;
}
#top-left>div {
shape-outside: inset(var(--top-left-offset) 0 0 0);
float: right;
top: var(--top-left-offset);
height: 50%;
width: 50%;
margin-bottom: var(--top-left-offset);
padding: 0 0 var(--void-padding-bottom) var(--void-padding-left-right);
}
#top-right {
grid-area: 1 / 3 / 3 / 5;
background-color: purple;
border-width: var(--border-width) var(--border-width) calc(var(--border-width) * 0.5) calc(var(--border-width) * 0.5);
}
#top-right>div {
shape-outside: inset(var(--top-right-offset) 0 0 0);
float: left;
top: var(--top-right-offset);
height: 50%;
width: 50%;
margin-bottom: var(--top-right-offset);
padding: 0 var(--void-padding-left-right) var(--void-padding-bottom) 0;
}
#bottom-left {
grid-area: 3 / 1 / 5 / 3;
background-color: orange;
border-width: calc(var(--border-width) * 0.5) calc(var(--border-width) * 0.5) var(--border-width) var(--border-width);
direction: rtl;
text-align: left;
}
#bottom-left>div {
shape-outside: inset(var(--bottom-left-top) 0 0 0);
float: right;
height: calc(var(--bottom-left-top) + 50%);
width: 50%;
padding: 0 0 var(--void-padding-bottom) var(--void-padding-left-right);
margin-top: calc(var(--padding) * -1);
}
#bottom-right {
grid-area: 3 / 3 / 5 / 5;
background-color: green;
border-width: calc(var(--border-width) * 0.5) var(--border-width) var(--border-width) calc(var(--border-width) * 0.5);
}
#bottom-right>div {
shape-outside: inset(var(--bottom-right-top) 0 0 0);
float: left;
height: calc(var(--bottom-right-top) + 50%);
width: 50%;
padding: 0 var(--void-padding-left-right) var(--void-padding-bottom) 0;
margin-top: calc(var(--padding) * -1);
}
#center {
grid-area: 2 / 2 / 4 / 4;
background-color: blue;
border-width: var(--border-width);
}
```
```html
I have read multiple times that you cannot have L-shaped grids. That depends on your definition of
‘cannot’… Maybe you cannot do it with the inherit grid controls themselves, but it is possible with a
little additional imagination & creativity!
Enjoy!
© 2021 Sassano
```
See my link on codepen:
<https://codepen.io/sassano/full/dyObGar>
Upvotes: 2 |
2018/03/20 | 427 | 1,564 | <issue_start>username_0: Not sure if this is possible but I'm trying to display a div if another div which doesn't share the same parent is hovered.
The html looks something like this:
```
Hover
// some other content here
hovered content
```
I've tried using
```
.test:hover + .hover-content {
display: block;
}
```
But I think this only works if there's no other content in-between? Any suggestions?<issue_comment>username_1: Use javascript to listen to the onmouseover event, or jquery to handle the hover event on one and change the display attribute of the other. Using jquery
```
$(document).ready(function () {
$(".hover-me").hover(function () {
$(".hover-content").show();
}, function() {
$(".hover-content").hide();
});
});
```
If you don't want to use jquery, change your html like so
```
Hover
// some other content here
hovered content
```
notice that I added an id attribute to the hover-content div.
Upvotes: 1 <issue_comment>username_2: So you want to display the .hover-content when you hover the test. You can try the following solution. If it does not work, you gotta use javascript to check for the mouseover event. Hope it helps!
```
.test:hover ~ .hover-content {
display: block;
}
```
Upvotes: 0 <issue_comment>username_3: Try this, I think it will help you:
```
$(document).ready(function () {
$( ".hover-me" ).mouseenter( function () {
$( ".hover-content" ).show();
}).mouseout(function () {
/* anything you want when the mouse leaves the div */
} );
});
```
Upvotes: 1 [selected_answer] |
2018/03/20 | 526 | 1,826 | <issue_start>username_0: I work on website where user creates some forms which would be filled by another users.
I have `dropdown` (image below) where user can choose what kind of input should be generated for another user:
[](https://i.stack.imgur.com/dpXJ7.png)
>
> it is Georgian words and accordingly means **Text**, **Number**,
> **Select (Yes-No)**, **Select**, **Big Text**, **File**, **Photo**
>
>
>
so I want to write an onchange event handler which draws, for this user, the form element that would be filled in by the other user. For example, if the user selects **Select (Yes-No)** [ასარჩევი (კი-არა)],
```
კი
არა
```
[](https://i.stack.imgur.com/8p2Q9.png)
will be drawn. But this select should be frozen like an image. It is just a visualization for the user who creates the form, so they can see how it will be shown to the user who fills it in.
Given all this, how can I create a piece of HTML which is fully disabled but does not look disabled?
How can I achieve this using *javascript-jquery*?<issue_comment>username_1: Add the css style `pointer-events: none` to any inputs you want to disable mouse interaction on.
CSS:
```
.display-only {
pointer-events: none;
}
```
Html:
```
კი
არა
```
Upvotes: 3 [selected_answer]<issue_comment>username_2: `pointer-events` is only supported from IE11. It is a very good solution but if like me you need wider support, you could add a invisible div in front of your controls. Since this new div is not click-through, there is no interactions with the controls.
Although this would be doable through `:after` pseudo-element for your controls container, it would still cause problems in IE10.
Upvotes: 0 |
2018/03/20 | 530 | 1,780 | <issue_start>username_0: I have a simple unordered list where I'm trying to style all parent `-` items so they are uppercase.
```css
ul > li {
text-transform: uppercase;
}
```
```html
<ul>
  <li>UPPERCASE</li>
  <li>UPPERCASE
    <ul>
      <li>Lowercase</li>
      <li>Lowercase</li>
      <li>Lowercase</li>
      <li>Lowercase</li>
    </ul>
  </li>
  <li>UPPERCASE</li>
  <li>UPPERCASE</li>
</ul>
```
The method above was suggested by [this article](https://css-tricks.com/child-and-sibling-selectors/), but it doesn't seem to work. What am I doing wrong?<issue_comment>username_1: That styles every `li` which is a child of a `ul` … which is *all* of them. Additionally, `text-transform` is an inherited property, so the descendants will pick up the `uppercase` value from the parent anyway.
You need to write a selector which uniquely matches the children of the outermost one, and change the default to something different.
For example:
```css
li {
text-transform: none;
}
:not(li) > ul > li {
text-transform: uppercase;
}
```
```html
<ul>
  <li>UPPERCASE</li>
  <li>UPPERCASE
    <ul>
      <li>Lowercase</li>
      <li>Lowercase</li>
      <li>Lowercase</li>
      <li>Lowercase</li>
    </ul>
  </li>
  <li>UPPERCASE</li>
  <li>UPPERCASE</li>
</ul>
```
Upvotes: 3 [selected_answer]<issue_comment>username_2: To achieve the expected result, use the option below.
`ul > li` selects the `li` elements of every `ul`, which includes the lowercase `li` elements.
`li > ul > li` selects the `li` elements of any `ul` nested inside an `li`, and `text-transform: initial` restores their initial style.
```
ul > li {
text-transform: uppercase;
}
li > ul > li{
text-transform: initial;
}
```
Code sample- <https://codepen.io/nagasai/pen/VXpvWG>
Run code snippet to test
```css
ul > li {
text-transform: uppercase;
}
li > ul > li{
text-transform: initial;
}
```
```html
<ul>
  <li>UPPERCASE</li>
  <li>UPPERCASE
    <ul>
      <li>Lowercase</li>
      <li>Lowercase</li>
      <li>Lowercase</li>
      <li>Lowercase</li>
    </ul>
  </li>
  <li>UPPERCASE</li>
  <li>UPPERCASE</li>
</ul>
```
Upvotes: 0 |
2018/03/20 | 1,914 | 7,529 | <issue_start>username_0: I'm not sure how to implement combined "OR" requirements in ASP.NET Core Authorization. In previous versions of ASP.NET this would have been done with roles, but I'm trying to do this with claims, partly to understand it better.
Users have an enum called AccountType that will provide different levels of access to controllers/actions/etc. There are three levels of types, call them User, BiggerUser, and BiggestUser. So BiggestUser has access to everything the account types below them have and so on. I want to implement this via the Authorize tag using Policies.
So first I have a requirement:
```cs
public class TypeRequirement : IAuthorizationRequirement
{
public TypeRequirement(AccountTypes account)
{
Account = account;
}
public AccountTypes Account { get; }
}
```
I create the policy:
```cs
services.AddAuthorization(options =>
{
options.AddPolicy("UserRights", policy =>
policy.AddRequirements(new TypeRequirement(AccountTypes.User));
});
```
The generalized handler:
```cs
public class TypeHandler : AuthorizationHandler<TypeRequirement>
{
protected override Task HandleRequirementAsync(AuthorizationHandlerContext context, TypeRequirement requirement)
{
if (!context.User.HasClaim(c => c.Type == "AccountTypes"))
{
            context.Fail();
            return Task.CompletedTask;
}
string claimValue = context.User.FindFirst(c => c.Type == "AccountTypes").Value;
AccountTypes claimAsType = (AccountTypes)Enum.Parse(typeof(AccountTypes), claimValue);
if (claimAsType == requirement.Account)
{
context.Succeed(requirement);
}
return Task.CompletedTask;
}
}
```
What I would like to do is add multiple requirements to the policy whereby *any* of them could satisfy it. But my current understanding is that if I do something like:
```cs
options.AddPolicy("UserRights", policy => policy.AddRequirements(
new TypeRequirement(AccountTypes.User),
new TypeRequirement(AccountTypes.BiggerUser)
);
```
*Both* requirements would have to be satisfied. My handler would work if there was someway in AddRequirements to specify an OR condition. So am I on the right track or is there a different way to implement this that makes more sense?<issue_comment>username_1: The official documentation has a [dedicated section](https://learn.microsoft.com/en-us/aspnet/core/security/authorization/policies#why-would-i-want-multiple-handlers-for-a-requirement) when you want to implement an *OR* logic. The solution they provide is to register several authorization handlers against one requirement. In this case, all the handlers are run and the requirement is deemed satisfied if at least one of the handlers succeeds.
I don't think that solution applies to your problem, though; I can see two ways of implementing this nicely
---
### Provide multiple `AccountTypes` in `TypeRequirement`
The requirement would then hold all the values that would satisfy the requirement.
```cs
public class TypeRequirement : IAuthorizationRequirement
{
public TypeRequirement(params AccountTypes[] accounts)
{
Accounts = accounts;
}
public AccountTypes[] Accounts { get; }
}
```
The handler then verifies if the current user matches one of the defined account types
```cs
public class TypeHandler : AuthorizationHandler<TypeRequirement>
{
protected override Task HandleRequirementAsync(AuthorizationHandlerContext context, TypeRequirement requirement)
{
if (!context.User.HasClaim(c => c.Type == "AccountTypes"))
{
context.Fail();
return Task.CompletedTask;
}
string claimValue = context.User.FindFirst(c => c.Type == "AccountTypes").Value;
AccountTypes claimAsType = (AccountTypes)Enum.Parse(typeof(AccountTypes),claimValue);
if (requirement.Accounts.Any(x => x == claimAsType))
{
context.Succeed(requirement);
}
return Task.CompletedTask;
}
}
```
This allows you to create several policies that will use the same requirement, except you get to define the valid values of `AccountTypes` for each of them
```cs
options.AddPolicy(
"UserRights",
policy => policy.AddRequirements(new TypeRequirement(AccountTypes.User, AccountTypes.BiggerUser, AccountTypes.BiggestUser)));
options.AddPolicy(
"BiggerUserRights",
policy => policy.AddRequirements(new TypeRequirement(AccountTypes.BiggerUser, AccountTypes.BiggestUser)));
options.AddPolicy(
"BiggestUserRights",
policy => policy.AddRequirements(new TypeRequirement(AccountTypes.BiggestUser)));
```
---
### Use the enum comparison feature
As you said in your question, there's a *hierarchy* in the way you treat the different values of `AccountTypes`:
* `User` has access to some things;
* `BiggerUser` has access to everything `User` has access to, plus some other things;
* `BiggestUser` has access to everything
The idea is then that the requirement would define the *lowest* value of `AccountTypes` necessary to be satisfied, and the handler would then compare it with the user's account type.
Enums can be compared with both the `<=` and `>=` operators, and also using the `CompareTo` method. I couldn't quickly find robust documentation on this, but [this code sample on learn.microsoft.com](https://learn.microsoft.com/en-us/dotnet/api/system.enum?view=netframework-4.7.1#adding-enumeration-methods) shows the usage of the lower-than-or-equal operator.
To take advantage of this feature, the enum values need to match the hierarchy you expect, like:
```cs
public enum AccountTypes
{
User = 1,
BiggerUser = 2,
BiggestUser = 3
}
```
or
```cs
public enum AccountTypes
{
User = 1,
    BiggerUser, // Automatically set to 2 (value of the previous one + 1)
BiggestUser // Automatically set to 3
}
```
The requirement would now hold the *minimum* account type that satisfies it (e.g. a `MinimumAccount` property set from the constructor, in place of the `Accounts` array). The handler and the declaration of the policies would then look like:
```cs
public class TypeHandler : AuthorizationHandler<TypeRequirement>
{
protected override Task HandleRequirementAsync(AuthorizationHandlerContext context, TypeRequirement requirement)
{
if (!context.User.HasClaim(c => c.Type == "AccountTypes"))
{
context.Fail();
return Task.CompletedTask;
}
string claimValue = context.User.FindFirst(c => c.Type == "AccountTypes").Value;
AccountTypes claimAsType = (AccountTypes)Enum.Parse(typeof(AccountTypes),claimValue);
if (claimAsType >= requirement.MinimumAccount)
{
context.Succeed(requirement);
}
return Task.CompletedTask;
}
}
```
```cs
options.AddPolicy(
"UserRights",
policy => policy.AddRequirements(new TypeRequirement(AccountTypes.User)));
options.AddPolicy(
"BiggerUserRights",
policy => policy.AddRequirements(new TypeRequirement(AccountTypes.BiggerUser)));
options.AddPolicy(
"BiggestUserRights",
policy => policy.AddRequirements(new TypeRequirement(AccountTypes.BiggestUser)));
```
Upvotes: 5 [selected_answer]<issue_comment>username_2: Copied from my [original answer](https://stackoverflow.com/a/48698120/8216122) for those looking for short answer (*note: below solution does not address hierarchy issues*).
You can add an **OR** condition in Startup.cs:
Ex. I wanted only "<NAME>", "<NAME>" users to view "Ending Contracts" screen OR anyone only from "MIS" department also to be able to access the same screen. The below worked for me, where I have claim types "department" and "UserName":
```
services.AddAuthorization(options => {
options.AddPolicy("EndingContracts", policy =>
policy.RequireAssertion(context => context.User.HasClaim(c => (c.Type == "department" && c.Value == "MIS" ||
c.Type == "UserName" && "<NAME>, <NAME>".Contains(c.Value)))));
});
```
Upvotes: 3 |
2018/03/20 | 323 | 1,502 | <issue_start>username_0: I created a tag helper and wanted to use that in my Blazor sample project.
However, when I want to use the tag helper, the compiler complains about that:
`CS0103 The name 'StartTagHelperWritingScope' does not exist in the current context FirstBlazorApp`.
What is additionally required to make tag helpers work in Blazor?<issue_comment>username_1: Update: Integrating Razor components into Razor Pages and MVC apps in a hosted Blazor WebAssembly app is supported in ASP.NET Core in .NET 5.0 or later. (Microsoft Docs)
-------------------------------------------------------------------------------------------------------------------------------------------------------------------------
Tag helpers are not supported in Blazor. At least not as of now.
Upvotes: 3 <issue_comment>username_2: On A blazor page you would use a blazor component, this is like a TagHelper but it runs client side and is like an Angular component (or similar in any other client side JS framework) is that respect. TagHelpers run on the Server and blazor components run on the client, both allow you to create new tags. a simple component in the blazor same is the NavMenu which lives currently in the Shared folder.
Upvotes: 2 <issue_comment>username_3: I know this is an old question but I just discovered you can use Partial Components inside of Razor Components. So if you want to use tag helpers, move the elements that need them to a partial and include it in the Razor Component.
Upvotes: 2 |
2018/03/20 | 832 | 3,835 | <issue_start>username_0: Recently I have been rethinking various ways to refactor older projects of mine, making sure to have a proper design first and to apply the architecture in the code.
The business logic of an application is supposed to be agnostic of data persistence implementation, presentation frameworks, technology and even (ideally) programming language.
My question is, how do you deal with a use case/requirement where it seems like it is very tightly coupled to specific technology?
In my example, I want to have current sensor values, from a microcontroller, viewed on my browser. It is possible to implement this by having values stored in the database, read them and send them over to the presentation layer. However there is another way which is seemingly faster, and sounds more reasonable. By using websockets the server relays the values to the browser, and the server itself receives those values from the microcontroller through a stream.
These two approaches require almost entirely different design. The first requires a repository pattern that communicates with the persistence layer, where as the second one does not. The data flow is also different for the use cases. So two different implementations that fulfill the same requirement, require different architecture solely because of choice of technology and framework.
Is it still possible to design the architecture in a way that is not coupled to one of the two implementations?<issue_comment>username_1: It is perfectly possible to describe the functional requirements and business logic in an implementation agnostic way.
Most often this is done using Use Cases and use case scenarios.
*Note that use case scenarios are not specified in UML, but they are a de-facto standard in the industry.*
The idea is that your use cases represent large(ish) blocks of functionality offered by the application. It focuses on the added value for the user (actor) and not on the way it is implemented.
Typical things to avoid in use case analyses at this level are specifying things like:
* the user clicks the "Details" button
* the system gets the details from the database and shows in them in the detail window
But rather use phrases such as:
* the user asks to get the details
* the systems shows the details
This makes the analysis independent of design choices such as using a button, or a menu, or a function key etc..
The architecture on the other hand is usually more bound to a specific implementation (pattern), although it can still be independent of the actual technology used.
Upvotes: 0 <issue_comment>username_2: >
> Is it still possible to design the architecture in a way that is not
> coupled to one of the two implementations?
>
>
>
To some extent, yes - by inverting the dependency between the bit of code that gets the data from the sensor and the one that makes it possible to see it on screen, i.e. by making the former unaware of the latter. A couple of options could be:
* Plug the microcontroller input stream to a pipeline composed with the Chain of responsibility / Handler patterns. One handler can save the metrics to a database and another can send out the data via websockets.
* Adopt an event-driven approach. Emit events corresponding to incoming metrics from the sensor and subscribe to them. Subscribers can do a variety of things such as saving the data to a database or sending it through websockets.
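To illustrate the second option, here is a minimal publish/subscribe sketch (all names are illustrative; plain lists stand in for the database and the websocket channel). The part that produces sensor readings depends only on the bus, so it stays unaware of both persistence and presentation.

```python
from collections import defaultdict

class EventBus:
    """Minimal publish/subscribe hub: the sensor reader depends only on this."""
    def __init__(self):
        self._subscribers = defaultdict(list)

    def subscribe(self, event_type, handler):
        self._subscribers[event_type].append(handler)

    def publish(self, event_type, payload):
        for handler in self._subscribers[event_type]:
            handler(payload)

# Two independent consumers; either can be added or removed without
# touching the code that produces the readings.
database = []            # stand-in for the repository/persistence layer
pushed_to_clients = []   # stand-in for the websocket relay

bus = EventBus()
bus.subscribe("sensor_reading", database.append)
bus.subscribe("sensor_reading", pushed_to_clients.append)

# The "microcontroller stream" side only knows about the bus:
bus.publish("sensor_reading", {"sensor": "temp", "value": 21.5})

print(database)           # [{'sensor': 'temp', 'value': 21.5}]
print(pushed_to_clients)  # [{'sensor': 'temp', 'value': 21.5}]
```

Swapping the list-backed subscribers for a real repository or a websocket broadcaster changes only the wiring, not the producer.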
Of course, the part of the architecture charged with covering the "last mile" to the browser will always be different because scenario #1 is a pull-based approach from the client's part and #2 is a push by the server (web sockets). Chances are though that if you want to display pseudo-real time data, in scenario 1 you'll have to emulate #2 with some sort of polling.
Upvotes: 2 [selected_answer] |
2018/03/20 | 292 | 853 | <issue_start>username_0: I'm trying to get the average of a column, but this throws a syntax error and I can't get my head around it.
```
Sheets("Calculator").Range("C15").Value = "=Average(Sheets("Results").Range("C2:C1000"))"
```<issue_comment>username_1: Try this:
```
Sheets("Calculator").Range("C15").Formula = "=Average(Results!C2:C1000)"
```
Or this:
```
Sheets("Calculator").Range("C15").Value = WorksheetFunction.Average(Sheets("Results").Range("C2:C1000"))
```
Upvotes: 2 <issue_comment>username_2: Try resolving it in vba.
```
Sheets("Calculator").Range("C15").Value = application.Average(Sheets("Results").Range("C2:C1000"))
```
Or as a formula on the worksheet.
```
Sheets("Calculator").Range("C15").formula = "=average(" & Sheets("Results").Range("C2:C1000").address(0, 0, external:=true) & ")"
```
Upvotes: 2 [selected_answer] |
2018/03/20 | 1,394 | 4,953 | <issue_start>username_0: I have an Ajax script on a PHP page which is supposed to run a PHP script when a button is clicked.
```
$('#signup-form').submit(function(e){
e.preventDefault(); // Prevent Default Submission
$.ajax({
url: 'insertfromsession.php',
type: 'POST',
data: $(this).serialize(), // it will serialize the form data
dataType: 'html'
})
.done(function(data){
$('#form-container').fadeOut('slow', function(){
$('#form-container').fadeIn('slow').html(data);
});
document.getElementById('submitBtn').disabled = true;
document.getElementById("submitBtn").value="Thanks!";
})
.fail(function(){
alert('Ajax Submit Failed ...');
});
});
</code></pre>
<p>When I first open the web page the script isn't run, but if i click back and repeat the process the script executes without an issue. Does anyone know what is causing the issue?</p>
<p>Insert.php which runs this code is here: </p>
```
<!DOCTYPE html>
<html>
<head>
<script src="http://code.jquery.com/jquery-latest.js">
body { background: white !important; } /* Adding !important forces the browser to overwrite the default style applied by Bootstrap */
Thank you for submitting your details
-------------------------------------
While you are here, why not apply for one of other fantastic products?
----------------------------------------------------------------------
<?php
session_start();
include ("dbconnect.php");
// prepare and bind
$stmt = $conn->prepare("INSERT INTO wills (forename,surname,postcode,telno,email,ipaddress,capturedate,url,user) VALUES (?, ?, ?, ?, ?, ?, ?, ?, ?)");
$stmt->bind_param("sssssssss", $forename, $surname, $postcode, $telno, $email, $ipaddress, $capturedate, $url, $user);
$forename = $_POST["forename"];
$surname = $_POST["surname"];
$postcode = $_POST["postcode"];
$telno = $_POST["telno"];
$email = $_POST["email"];
$ipaddress = $_SERVER['REMOTE_ADDR'];
$capturedate = date('Y-m-d H:i:s');
$url = "www.test.com";
$user = "testuser";
$_SESSION["forename"] = $forename;
$_SESSION["surname"] = $surname;
$_SESSION["postcode"] = $postcode;
$_SESSION["telno"] = $telno;
$_SESSION["email"] = $email;
$_SESSION["ipaddress"] = $ipaddress;
$_SESSION["capturedate"] = $capturedate;
$_SESSION["url"] = $url;
$_SESSION["user"] = $user;
echo "<br>"."Forename: ".$forename."<br>";
echo "Surname: ".$surname."<br>";
echo "Telno: ".$telno."<br>";
echo "Email: ".$email."<br>";
echo "IP Address: ".$ipaddress."<br>";
echo "Session User: ".$_SESSION["user"]."<br>";
if (!$stmt->execute()) {
    echo $stmt->error;
} else {
}
$stmt->close();
$conn->close();
?>
[Back to Main Page](index.html)
#### Grab a Mobile SIM
With a SIM only contract, you get the all the benefits of an ongoing contract, without the additional high cost of a handset.
Short-term SIM only contracts give you freedom and flexibility. All the networks offer them from as little as 30 days, and you can then keep them rolling month-by-month
#### Debt Plan
Do you have more than £5,000 worth of debt?
Do you need help to reduce the monthly payments?
If so, find out if you qualify to write off up to 80% of your debt here
*
*
new WOW().init();
$( document ).ready(function() {
$('#signup-form').submit(function(e){
e.preventDefault(); // Prevent Default Submission
$.ajax({
url: 'insertfromsession.php',
type: 'POST',
data: $(this).serialize(), // it will serialize the form data
dataType: 'html'
})
.done(function(data){
$('#form-container').fadeOut('slow', function(){
$('#form-container').fadeIn('slow').html(data);
});
document.getElementById('submitBtn').disabled = true;
document.getElementById("submitBtn").value="Thanks!";
})
.fail(function(){
alert('Ajax Submit Failed ...');
});
});
});
$( document ).ready(function() {
console.log( "ready!" );
$('#signup-formdebt').submit(function(e){
e.preventDefault(); // Prevent Default Submission
$.ajax({
url: 'insertfromsessiondebt.php',
type: 'POST',
data: $(this).serialize(), // it will serialize the form data
dataType: 'html'
})
.done(function(data){
$('#form-container').fadeOut('slow', function(){
$('#form-container').fadeIn('slow').html(data);
});
document.getElementById('submitBtndebt').disabled = true;
document.getElementById("submitBtndebt").value="Thanks!";
})
.fail(function(){
alert('Ajax Submit Failed ...');
});
});
});
```<issue_comment>username_1: You should use a `document.ready` function. Here is a link to the jQuery version documentation : <https://learn.jquery.com/using-jquery-core/document-ready/>
Upvotes: -1 <issue_comment>username_2: I think I solved the issue, and it has nothing at all to do with the AJAX. It actually appears to be an issue with the session variables. I moved the session handling in insert.php (the `session_start()` call and the `$_SESSION` assignments) right to the top, before anything else, and this makes it work as intended.
Upvotes: 0 |
2018/03/20 | 840 | 3,468 | <issue_start>username_0: .so lib files missing debug symbols
So I tried getting my bin/debug.apk to work in the Buildozer VM, but there was a problem with loading adb and attaching my Android device via USB, so I exported my bin/debug.apk to my desktop to debug it in Android Studio. However, in its first steps, it throws an error that asks me to replace the .so libraries with identical ones that have debug symbols.
[.so lib files missing debug symbols](https://i.stack.imgur.com/5wqlr.jpg)
Are these stashed anywhere in the Buildozer VM during the buildozer android debug phase?
Is there a way to either get the libs with the symbols or work around not having them on either Android Studio or the Buildozer VM?<issue_comment>username_1: I think these symbols are stripped by python-for-android during the recipe part of the build process. It probably wouldn't be hard to prevent this (or ideally to add an option to disable it), but I don't think there is currently one.
If you post your error with the apk, we may be able to help debug it anyway, I don't usually see issues that need the debug symbols to resolve.
Upvotes: 0 <issue_comment>username_2: Attach native debug symbols
===========================
If your APK includes native libraries (`.so` files) that do not include debug symbols, the IDE shows you a warning in the **Messages** window. You cannot debug the APK’s native code or use breakpoints without attaching debuggable native libraries.
To attach debuggable native libraries, proceed as follows:
1. If you have not already done so, make sure to [download the NDK and tools](https://developer.android.com/ndk/guides/index.html#download-ndk).
2. Under the **cpp** directory in the **Project** window (visible only if you have selected the **Android** view, as shown in below figure), double-click a native library file that doesn't include debug symbols. The editor shows a table of all the ABIs your APK supports.
3. Click **Add** at the top right corner of the editor window.
4. Navigate to the directory that includes the debuggable native libraries you want to attach and click
**OK**.
If the APK and debuggable native libraries were built using a different workstation, you need to also specify paths to local debug symbols by following these steps:
1. Add local paths to missing debug symbols by editing the field under the **Local Paths** column in the **Path Mappings** section of the editor window, as shown in below figure.
In most cases, you need only provide the path to a root folder, and Android Studio automatically inspects subdirectories to map additional sources. The IDE also automatically maps paths to a remote NDK to your local NDK download.
2. Click **Apply Changes** in the **Path Mappings** section of the editor window.

You should now see the native source files in the **Project** window. Open those native files to add breakpoints and [debug your app](https://developer.android.com/studio/debug/index.html) as you normally would. You can also remove the mappings by clicking **Clear** in the **Path Mappings** section of the editor window .
>
> **Known issue:** When attaching debug symbols to an APK, both the APK and debuggable `.so` files must be built using the same workstation or build server.
>
>
>
[*Source*](https://developer.android.com/studio/debug/apk-debugger)
Upvotes: 3 |
2018/03/20 | 948 | 3,688 | <issue_start>username_0: I currently have a dataset that looks like this:
```
Personid | Question | Response
1 | Name | Daniel
1 | Gender | Male
1 | Address | New York, NY
2 | Name | Susan
2 | Gender | Female
2 | Address | Boston, MA
3 | Name | Leonard
3 | Gender | Male
3 | Address | New York, NY
```
I also have another table that looks like this (just the person id):
```
Personid
1
1
1
2
2
2
3
3
3
```
I want to write a query to return something like this:
```
Personid | Name | Gender | Address
1 |Daniel | Male | New York, NY
2 | Susan | Female | Boston, MA
3 |Leonard | Male | New York, NY
```
I think it's a mix of some sort of "transpose" (not sure if it's even available in SQL) and conditional statement on just the gender, but I'm having issues with getting the end result. Could anyone offer any advice?<issue_comment>username_1: I think these symbols are stripped by python-for-android during the recipe part of the build process. It probably wouldn't be hard to prevent this (or ideally to add an option to disable it), but I don't think there is currently one.
If you post your error with the apk, we may be able to help debug it anyway, I don't usually see issues that need the debug symbols to resolve.
Upvotes: 0 <issue_comment>username_2: Attach native debug symbols
===========================
If your APK includes native libraries (`.so` files) that do not include debug symbols, the IDE shows you a warning in the **Messages** window. You cannot debug the APK’s native code or use breakpoints without attaching debuggable native libraries.
To attach debuggable native libraries, proceed as follows:
1. If you have not already done so, make sure to [download the NDK and tools](https://developer.android.com/ndk/guides/index.html#download-ndk).
2. Under the **cpp** directory in the **Project** window (visible only if you have selected the **Android** view, as shown in below figure), double-click a native library file that doesn't include debug symbols. The editor shows a table of all the ABIs your APK supports.
3. Click **Add** at the top right corner of the editor window.
4. Navigate to the directory that includes the debuggable native libraries you want to attach and click
**OK**.
If the APK and debuggable native libraries were built using a different workstation, you need to also specify paths to local debug symbols by following these steps:
1. Add local paths to missing debug symbols by editing the field under the **Local Paths** column in the **Path Mappings** section of the editor window, as shown in below figure.
In most cases, you need only provide the path to a root folder, and Android Studio automatically inspects subdirectories to map additional sources. The IDE also automatically maps paths to a remote NDK to your local NDK download.
2. Click **Apply Changes** in the **Path Mappings** section of the editor window.

You should now see the native source files in the **Project** window. Open those native files to add breakpoints and [debug your app](https://developer.android.com/studio/debug/index.html) as you normally would. You can also remove the mappings by clicking **Clear** in the **Path Mappings** section of the editor window .
>
> **Known issue:** When attaching debug symbols to an APK, both the APK and debuggable `.so` files must be built using the same workstation or build server.
>
>
>
[*Source*](https://developer.android.com/studio/debug/apk-debugger)
Upvotes: 3 |
2018/03/20 | 791 | 2,788 | <issue_start>username_0: I wanted to validate my inputs and change the CSS depending on the user interaction.
Starting with a `required` validation method, I wrap all my input components with a `Field` and pass an array of functions to `validate`. Just `required` for now.
But for all my fields the values stay the same: `touched: false` and `error: "Required"`. If I touch the input or type something into it, those values stay the same.
**Validation**
```
export const required = value => (value ? undefined : 'Required')
```
**NameInput**
```
import React from 'react';
import { Field } from 'redux-form'
import InputItem from 'Components/InputsUtils/InputItem';
import { required } from 'Components/InputsUtils/Validation';
const NameInput = () => (
  <Field
    name="name"
    component={InputItem}
    validate={[required]}
    spec={{ type: 'text', placeholder: 'Name' }}
  />
);
export default NameInput;
```
**InputItem**
```
import React from 'react';
const InputItem = ({ spec, meta: { touched, error } }) => {
const { type, placeholder } = spec;
  return (
    <div>
      <input type={type} placeholder={placeholder} />
      {touched && error && <span>{error}</span>}
    </div>
  );
};
export default InputItem;
```<issue_comment>username_1: The `redux-form` controls its own props within your element as long as you use the spread operator to pass those props into your input.
For example, where you are doing `const InputItem = ({ spec, meta: { touched, error } }) => ...`
Try destructuring the `input` from the component: `const InputItem = ({ input, spec, meta: { touched, error } }) => ...`
And where you have your input element, try doing the following:

```
<input {...input} type={type} placeholder={placeholder} />
```
The redux-form captures any `onBlur` and `onChange` events and uses its own methods to change the `touched` state. You just need to pass those along as shown above.
These are what you need: <https://redux-form.com/7.1.2/docs/api/field.md/#input-props>
Upvotes: 3 [selected_answer]<issue_comment>username_2: There are 2 solutions to solve the *"touched is always false"* issue.
**1) Ensure that [`input.onBlur`](https://redux-form.com/8.1.0/docs/api/field.md/#-code-input-onblur-eventorvalue-function-code-) is called in your component**
For an input:
```
const { input } = this.props
```
For custom form elements without native `onBlur`:
```
const { input: { value, onChange, onBlur } } = this.props
const className = 'checkbox' + (value ? ' checked' : '')
<div
  className={className}
  onClick={() => {
    onChange(!value)
    onBlur()
  }}
/>
```
**2) Declare your form with [`touchOnChange`](https://redux-form.com/8.1.0/docs/api/reduxform.md/#-code-touchonchange-boolean-code-optional-)**
```
const ReduxFormContainer = reduxForm({
form: 'myForm',
touchOnChange: true,
})(MyForm)
```
Upvotes: 4 <issue_comment>username_3: Another point to consider:
I passed `{...props}` to my custom component; however, `touched` still remained false.
That is because although the props contained the `input` object, my component couldn't
deduce `onBlur` from it. When I explicitly passed `onBlur={input.onBlur}` to the underlying element, it worked as expected.
Upvotes: 1 |
2018/03/20 | 1,279 | 5,106 | <issue_start>username_0: I'm building a code interpreter in C++ and while I have the whole token logic working, I ran into an unexpected issue.
The user inputs a string into the console, the program parses said string into different objects type Token, the problem is that the way I do this is the following:
```
void splitLine(string aLine) {
stringstream ss(aLine);
string stringToken, outp;
char delim = ' ';
// Break input string aLine into tokens and store them in rTokenBag
while (getline(ss, stringToken, delim)) {
// assing value of stringToken parsed to t, this labes invalid tokens
Token t (readToken(stringToken));
R_Tokens.push_back(t);
}
}
```
The issue here is that if the parse receives a string, say `Hello World!` it will split this into 2 tokens `Hello` and `World!`
The main goal is for the code to recognize double quotes as the start of a string Token and store it whole (from `"` to `"`) as a single Token.
So if I type in `x = "hello world"` it will store `x` as a token, then next run `=` as a token, and then `hello world` as a token and not split it<issue_comment>username_1: You can use C++14 [quoted](http://en.cppreference.com/w/cpp/io/manip/quoted) manipulator.
```
#include <iostream>
#include <iomanip>
#include <sstream>
#include <string>
void splitLine(std::string aLine) {
std::istringstream iss(aLine);
std::string stringToken;
// Break input string aLine into tokens and store them in rTokenBag
while(iss >> std::quoted(stringToken)) {
std::cout << stringToken << "\n";
}
}
int main() {
splitLine("Heloo world \"single token\" new tokens");
}
```
Upvotes: 2 <issue_comment>username_2: You really don't want to tokenize a programming language by splitting at a delimiter.
A proper tokenizer will switch on the first character to decide what kind of token to read and then keep reading as long as it finds characters that fit that token type and then emit that token when it finds the first non-matching character (which will then be used as the starting point for the next token).
That could look something like this (let's say `it` is an `istreambuf_iterator` or some other iterator that iterates over the input character-by-character):
```
Token Tokenizer::next_token() {
if (isalpha(*it)) {
return read_identifier();
} else if(isdigit(*it)) {
return read_number();
} else if(*it == '"') {
return read_string();
} /* ... */
}
Token Tokenizer::read_string() {
// This should only be called when the current character is a "
assert(*it == '"');
it++;
string contents;
while(*it != '"') {
contents.push_back(*it);
it++;
}
return Token(TokenKind::StringToken, contents);
}
```
What this doesn't handle are escape sequences or the case where we reach the end of file without seeing a second `"`, but it should give you the basic idea.
Something like `std::quoted` might solve your immediate problem with string literals, but it won't help you if you want `x="hello world"` to be tokenized the same way as `x = "hello world"` (which you almost certainly do).
---
PS: You can also read the whole source into memory first and then let your tokens contain indices or pointers into the source rather than strings (so instead of the `contents` variable, you'd just save the start index before the loop and then return `Token(TokenKind::StringToken, start_index, current_index)`). Which one's better depends partly on what you do in the parser. If your parser directly produces results and you don't need to keep the tokens around after processing them, the first one is more memory-efficient because you never need to hold the whole source in memory. If you create an AST, the memory consumption will be about the same either way, but the second version will allow you to have one big string instead of many small ones.
Upvotes: 2 <issue_comment>username_3: So I finally figured it out, and I CAN use getline() to achieve my goals.
This new code runs and parses the way I need it to be:
```
void splitLine(string aLine) {
stringstream ss(aLine);
string stringToken, outp;
char delim = ' ';
while (getline(ss, stringToken, delim)) { // Break line into tokens and store them in rTokenBag
//new code starts here
// if the current parse sub string starts with double quotes
if (stringToken[0] == '"' ) {
string torzen;
// parse me the rest of ss until you find another double quotes
getline(ss, torzen, '"' );
// Give back the space cut from the initial getline(), add the substring parsed by the second getline(), and add back the closing double quote that the second getline() consumed
stringToken += ' ' + torzen + '"';
}
// And we can all continue with our lives
Token t (readToken(stringToken)); // assign the value parsed from stringToken to t; this labels invalid tokens
R_Tokens.push_back(t);
}
}
```
Thanks to everyone who answered and commented, you were all of great help!
Upvotes: 0 |
2018/03/20 | 862 | 2,604 | <issue_start>username_0: In the Groupby documentation, I only see examples of grouping by functions applied to the index of axis 0 or to the labels of the columns. I see no examples discussing how to group by a label derived from applying a function to a column. I would think this would be done using `apply`. Is the example below the best way to do this?
```
df = pd.DataFrame({'name' : np.random.choice(['a','b','c','d','e'], 20),
'num1': np.random.randint(low = 30, high=100, size=20),
'num2': np.random.randint(low = -3, high=9, size=20)})
df.head()
name num1 num2
0 d 34 7
1 b 49 6
2 a 51 -1
3 d 79 8
4 e 72 5
def num1_greater_than_60(number_num1):
if number_num1 >= 60:
return 'greater'
else:
return 'less'
df.groupby(df['num1'].apply(num1_greater_than_60))
```<issue_comment>username_1: You can do without apply here
```
df.groupby(df.num1.gt(60))
df.num1.gt(60)
Out[774]:
0 True
1 True
2 True
3 True
4 False
5 True
6 True
7 True
8 False
9 True
10 False
11 True
12 True
13 True
14 False
15 True
16 False
17 False
18 True
19 False
Name: num1, dtype: bool
```
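For completeness, a small sketch (hypothetical sample data, not the question's random frame) showing that the boolean Series works directly as a grouper for aggregations too:

```python
import pandas as pd

# Hypothetical sample data standing in for the question's random frame
df = pd.DataFrame({"num1": [34, 49, 51, 79, 72],
                   "num2": [7, 6, -1, 8, 5]})

# The boolean Series itself supplies the group keys:
# False -> num1 <= 60, True -> num1 > 60
means = df.groupby(df.num1.gt(60))["num2"].mean()
print(means.to_dict())
```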
Upvotes: 2 <issue_comment>username_2: from DataFrame.groupby() docs:
```
by : mapping, function, str, or iterable
Used to determine the groups for the groupby.
If ``by`` is a function, it's called on each value of the object's
index. If a dict or Series is passed, the Series or dict VALUES
will be used to determine the groups (the Series' values are first
aligned; see ``.align()`` method). If an ndarray is passed, the
values are used as-is determine the groups. A str or list of strs
may be passed to group by the columns in ``self``
```
so we can do it this way:
```
In [35]: df.set_index('num1').groupby(num1_greater_than_60)[['name']].count()
Out[35]:
name
greater 15
less 5
```
Upvotes: 3 <issue_comment>username_3: In general I would do this by creating a derived column and then grouping by that - I find it easier to keep track of, and you can always delete the column or select only the columns you need at the end.
```
df = pd.DataFrame({'name' : np.random.choice(['a','b','c','d','e'], 20),
'num1': np.random.randint(low = 30, high=100, size=20),
'num2': np.random.randint(low = -3, high=9, size=20)})
df['num1_greater_than_60'] = df['num1'].gt(60).replace(
to_replace=[True, False],
value=['greater', 'less'])
df.groupby('num1_greater_than_60').dosomething()
```
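If the `replace` call feels indirect, a hedged alternative sketch (hypothetical sample data) builds the label column in one step with `numpy.where`:

```python
import numpy as np
import pandas as pd

# Hypothetical sample data standing in for the question's random frame
df = pd.DataFrame({"num1": [34, 49, 51, 79, 72],
                   "num2": [7, 6, -1, 8, 5]})

# Label each row, then group on the derived column
df["num1_greater_than_60"] = np.where(df["num1"].gt(60), "greater", "less")
counts = df.groupby("num1_greater_than_60")["num2"].count()
print(counts.to_dict())
```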
Upvotes: 1 |
2018/03/20 | 1,305 | 4,053 | <issue_start>Can anyone please help me, I'm a newbie, I have a bit of code which I'm working on and I'm struggling with the file directory path. I have found other examples and tried them as shown below. The Python code is to email out a file called 'myfile.txt' from the folder `'F:\D\OneDrive\Python\Spyder\test'`.
```
import smtplib
from email.mime.multipart import MIMEMultipart
from email.mime.text import MIMEText
from email.mime.base import MIMEBase
from email import encoders
#sender's address
fromaddr = "<EMAIL>"
#receiptent's email address
toaddr = "<EMAIL>"
msg = MIMEMultipart()
msg['From'] = fromaddr
msg['To'] = toaddr
msg['Subject'] = "Python test"
body = "Did it work Sam?"
msg.attach(MIMEText(body, 'plain'))
filename = "myfile.txt"
attachment = open("F:\D\OneDrive\Python\Spyder\test", "rb")
part = MIMEBase('application', 'octet-stream')
part.set_payload((attachment).read())
encoders.encode_base64(part)
part.add_header('Content-Disposition', "attachment; filename= %s" % filename)
msg.attach(part)
server = smtplib.SMTP('smtp.gmail.com', 587)
server.starttls()
server.login(fromaddr, "<PASSWORD>")
text = msg.as_string()
server.sendmail(fromaddr, toaddr, text)
server.quit()
```
And I get this error -
```
PermissionError: [Errno 13] Permission denied:
b'F:\\D\\OneDrive\\Python\\Spyder\\test'
```
If I change the line to -
```
attachment = open("F:\D\OneDrive\Python\Spyder\test\", "rb")
```
I get -
```
attachment = open("F:\D\OneDrive\Python\Spyder\test\", "rb")
^
SyntaxError: EOL while scanning string literal
```
If I change the line to -
```
attachment = open("F:\\D\\OneDrive\\Python\\Spyder\\test\\", "rb")
```
I get -
```
attachment = open("F:\\D\\OneDrive\\Python\\Spyder\\test\\", "rb")
FileNotFoundError: [Errno 2] No such file or directory:
'F:\\D\\OneDrive\\Python\\Spyder\\test\\'
```<issue_comment>username_1: If you work on Windows you must use the Windows path format. `open` with the `'rb'` parameter reads the file in binary mode if the file exists. You are trying to read the directory!
>
> attachment = open("F:\\D\\OneDrive\\Python\\Spyder\\test\\myfile.txt", "rb")
>
>
>
equal
>
> attachment = open(r'F:\D\OneDrive\Python\Spyder\test\myfile.txt', 'rb')
>
>
>
Upvotes: 1 <issue_comment>username_2: This represents the path correctly, but fails to provide a *file* name, because the trailing `\` means a directory.
```
attachment = open("F:\\D\\OneDrive\\Python\\Spyder\\test\\myfile.txt", "rb")
```
What you likely want is
```
# Note the r and the lack of a trailing slash.
attachment = open(r"F:\D\OneDrive\Python\Spyder\test\myfile.txt", "rb")
```
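A hedged sketch of the same idea with `pathlib` (a throwaway temp directory stands in for the real `F:` folder so the snippet is self-contained): joining segments with `/` sidesteps backslash escaping, and `is_file()` catches the "opened the directory by mistake" case early:

```python
import tempfile
from pathlib import Path

# Hypothetical stand-in for F:\D\OneDrive\Python\Spyder\test (a throwaway temp dir)
folder = Path(tempfile.mkdtemp())
(folder / "myfile.txt").write_bytes(b"attachment data")

attachment_path = folder / "myfile.txt"   # no backslash escaping needed
assert attachment_path.is_file()          # a file, not the directory itself
data = attachment_path.read_bytes()
print(len(data))
```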
Upvotes: 0 <issue_comment>username_3: I have found different code here and this works. Still can't work out why the original code does not work -
**[python program to rename the file with current date in MMDDYYY format and send email with attachment](https://stackoverflow.com/questions/45690763/python-program-to-rename-the-file-with-current-date-in-mmddyyy-format-and-send-e)**
Fixed code -
```
import smtplib
from email.mime.multipart import MIMEMultipart
from email.mime.text import MIMEText
from email.mime.base import MIMEBase
from email import encoders
fromaddr = "<EMAIL>"
toaddr = "<EMAIL>"
msg = MIMEMultipart()
msg['From'] = fromaddr
msg['To'] = toaddr
msg['Subject'] = "Please find the attachment"
body = "HI"
msg.attach(MIMEText(body, 'plain'))
filename = "myfile.txt"
#dt = str(datetime.datetime.now())
attachment = open("F:\\D\\OneDrive\\Python\\Spyder\\myfile.txt", "rb")
part = MIMEBase('application', 'octet-stream')
part.set_payload((attachment).read())
encoders.encode_base64(part)
part.add_header('Content-Disposition', "attachment; filename= %s" % filename)
msg.attach(part)
server = smtplib.SMTP('smtp.gmail.com', 587)
server.starttls()
server.login(fromaddr, "<PASSWORD>")
text = msg.as_string()
server.sendmail(fromaddr, toaddr, text)
server.quit()
```
Upvotes: 0 |
2018/03/20 | 879 | 3,235 | <issue_start>Is there a way to determine what the default branch is on an upstream repo when the local repo had the default branch changed to something different than the upstream repo?<issue_comment>username_1: There is no notion of a "default branch" in Git. There is only `HEAD`, in both the local and the remote repository. You move `HEAD` locally quite often, so there will almost always be a difference between the upstream `HEAD` and the local `HEAD`.
Upvotes: 1 <issue_comment>username_2: ### Solution 1: `git rev-parse` (Recommended)
Just do this:
```
git rev-parse --abbrev-ref refs/remotes/origin/HEAD
```
That should print out `HEAD` branch of the remote, which for a hosted repo should be the default branch, whether that may be `origin/main`, `origin/master`, `origin/develop`, `origin/stable`, whatever. (If the remote is actually another working copy, the remote `HEAD` is the checked-out branch.)
Your local Git repository caches this information, as the ref `refs/remotes/origin/HEAD` (replace `origin` by whatever the remote may be named.) Usually this gets set automatically by `git clone`, but if your local repository has a slightly unusual history (such as if your local repo began its life before the remote repo did), that local ref may be missing. In which case, this will generally recreate it:
```
git remote set-head origin --auto
```
That will contact the remote, check what its actual `HEAD` branch is, and update the local Git repository's `refs/remotes/origin/HEAD` ref.
Since this is locally cached information, it is possible for it to become outdated, if the remote repository changes its default branch. `git fetch` will not detect that change, but `git remote set-head origin --auto` will check what the remote HEAD is and update this local ref to match. So you may need to run that command manually after any such change in a remote repo.
**In summary**:
```
git rev-parse --abbrev-ref refs/remotes/origin/HEAD
```
gives you the answer quickly, but has a small chance of failing or being wrong;
```
git remote set-head origin --auto &&
git rev-parse --abbrev-ref refs/remotes/origin/HEAD
```
is always going to be correct, but is slower.
### Solution 2: `git ls-remote`
```
git ls-remote --symref origin HEAD
```
Example output:
```
ref: refs/heads/main HEAD
bbac5bc097f944b3ad94e723364c5513d27b1910 HEAD
```
You could even do something like this:
```
git ls-remote --symref origin HEAD |
egrep '^ref: refs/heads/.*\tHEAD$' |
cut -d/ -f3- |
cut $'-d\t' -f1
```
That will just print `main` (or `master`, or whatever).
I recommend solution 1 instead, because it is simpler, and I don't think this solution has any real advantages over it.
### Solution 3: GitHub CLI
If the remote happens to be GitHub or GitHub Enterprise, you can also use the [GitHub CLI](https://github.com/cli/cli):
```
gh repo view --json defaultBranchRef --jq .defaultBranchRef.name
```
That will print `main`, or `master`, or whatever the remote default branch name may be.
I don't recommend this solution either, because solutions 1 and 2 will work for any Git host (GitHub, GitLab, BitBucket, Gitea, whatever). Plus it requires you to install and configure another CLI tool.
Upvotes: 0 |
2018/03/20 | 473 | 1,635 | <issue_start>username_0: I have a flow where the following happens:
* `PartyA` creates a `TransactionBuilder`
* `PartyA` sends the `TransactionBuilder` to `PartyB`
* `PartyB` adds a state to the `TransactionBuilder`
However, when `PartyB` tries to add a state to the builder, they get the following exception:
```
[WARN ] 2018-03-20T16:02:35,932Z [Node thread-1] flow.[99246baf-1a1d-44e5-b2f9-f4eb341b97d4].run - Terminated by unexpected exception {}
java.lang.UnsupportedOperationException: null
at java.util.Collections$UnmodifiableCollection.add(Collections.java:1055) ~[?:1.8.0_162]
at net.corda.core.transactions.TransactionBuilder.addInputState(TransactionBuilder.kt:149) ~[corda-core-corda-3.0.jar:?]
```
What's happening here? Why can't `PartyB` add items to the `TransactionBuilder`?<issue_comment>username_1: In Corda, objects are serialised before being sent over the wire. When deserialising the received objects, it is impossible to detect whether any `List`s that are being deserialised were originally mutable or immutable. On balance, we decided that our serialisation engine should make any `List`s it deserialises immutable, rather than mutable.
This is causing the issue you observe above. Under the hood, you are calling `add` on an immutable list.
You can bypass this issue using `TransactionBuilder.copy` to make a copy of the `TransactionBuilder` that has mutable lists of states again.
Upvotes: 3 [selected_answer]<issue_comment>username_2: This issue only occurs in Corda V3, where round-trip serialisation converts mutable objects into immutable ones. This is not the case in Corda V2 or lower.
Upvotes: 1 |
2018/03/20 | 311 | 1,060 | <issue_start>I'm a user of a SQL Server database, and I want to know my access rights / permissions in the database I'm using. What SQL query should I use to do so?
Thanks<issue_comment>username_1: I think the easiest way would be:
```
SELECT * FROM fn_my_permissions(NULL, 'SERVER');
GO
```
I tried it in SQL Server 2008.
Ref: <https://learn.microsoft.com/en-us/sql/relational-databases/system-functions/sys-fn-my-permissions-transact-sql>
Upvotes: 6 [selected_answer]<issue_comment>username_2: If you're looking for what you might be missing as well as what you have, here's a useful way to find it:
```
SELECT all_permissions.permission_name, p.name
FROM (
SELECT DISTINCT permission_name
FROM sys.database_permissions
) all_permissions
LEFT JOIN (
SELECT b.name, a.permission_name
FROM sys.database_permissions a
JOIN sys.database_principals b on a.grantee_principal_id = b.principal_id
WHERE b.name = '{YOUR_NAME_OR_GROUP_NAME_HERE}'
) p ON p.permission_name = all_permissions.permission_name
```
Upvotes: 2 |
2018/03/20 | 445 | 1,269 | <issue_start>username_0: My black box in this example should be all the way to the left and all the way up. I don't want to see any white space left or above of it.
```css
#box {
width: 200px;
height: 200px;
margin-left: 0px;
background-color: black;
}
```
But that isn't the case! Can anyone tell me how I can achieve this?<issue_comment>username_1: In most major browsers, the default margin is `8px` on all sides. It is defined in pixels by the `user-agent-stylesheet` your browser provides.
You need to give `margin:0` to body like this
```css
body,html {
margin: 0;
padding:0;
}
```
```css
#box {
width: 200px;
height: 200px;
margin-left: 0px;
background-color: black;
}
```
```html
<div id="box"></div>
```
**P.S** Also set padding to 0 for both html and body
Upvotes: 3 [selected_answer]<issue_comment>username_2: You need to add `0 margin` to `body and html` as well
```css
body, html{
margin:0;
}
#box{
width: 200px;
height: 200px;
margin-left: 0px;
background-color: black;
}
```
```html
<div id="box"></div>
```
Upvotes: 1 <issue_comment>username_3: You also need to remove the default margin on the body element.
```css
#box {
width: 200px;
height: 200px;
margin-left: 0px;
background-color: black;
}
body {
margin:0;
}
```
Upvotes: 1 |
2018/03/20 | 554 | 1,859 | <issue_start>I have wondered why both C++ and Java use OOP concepts, yet their syntaxes are quite different.
I found that Java uses new ClassName() to get a reference to an object on the heap, so why can't C++ bind the result of new ClassName() to a reference in the same way?
```
#include <iostream>
using namespace std;
class Bike
{
public:
virtual void run()
{
cout << "running";
}
};
class Splender :public Bike
{
public:
void run()
{
cout << "running safely with 60km";
}
};
int main()
{
Bike &obj = new Splender();//error but reason?
obj.run();
}
```
ERROR:
invalid initialization of non-const reference of type 'Bike&' from an rvalue of type 'Splender*'<issue_comment>username_1: ```
#include <iostream>
class Bike {
public:
virtual void run() {
std::cout << "running";
}
virtual ~Bike() {}
};
class Splender: public Bike {
public:
void run() override {
std::cout << "running safely with 60km";
}
};
int main() {
Bike *obj = new Splender(); // new returns a pointer to a dynamically allocated object
obj->run();
delete obj;
}
```
which displays:
`running safely with 60km`
Upvotes: 0 <issue_comment>username_2: Sometimes we think we are doing the right thing.
You have a bit of confusion here. Try:
```
Bike *obj=new Splender();
```
Upvotes: 0 <issue_comment>username_3: Two things. One: Operator new returns a pointer, not a reference, to an object instance. So use `Bike* obj = new Splender();`
Two: Do not get cute and try `Bike& obj = *new Splender();`. Plain `new` throws `std::bad_alloc` in a low-memory situation rather than returning `nullptr` (only the `std::nothrow` form returns `nullptr`, and dereferencing that would crash your program at that point). (And you are forced to use the ugly `delete &obj;` to clean up this object, as user4581301 points out in the comments.)
Oh, and `using namespace std` is a bad practice. It won't kill you to add `std::` in the places you use the standard library.
Upvotes: 2 [selected_answer] |
2018/03/20 | 427 | 1,329 | <issue_start>I would like to deploy my Django website to Docker. The problem is, when I run docker-compose up, the website HTML is loaded, but static content like CSS, JS, the logo, and MP3 files is not. I think this is because of my URL. On localhost my website runs correctly, but the server has a prefix, something like <http://127.0.0.1:8000/Something>. I repaired the URLs so the HTML files work, but my static file settings in settings.py look like this:
```
STATIC_URL = '/static/'
STATICFILES_DIRS = [
os.path.join(BASE_DIR, "static")
]
```
and for example my homepage.html with js and css included (this works on localhost) like this:
```
Title
{% load static %}
```
but when the server runs on <http://127.0.0.1:8000/Something> it shows this error in the command line:
[error screenshot](https://i.stack.imgur.com/pQIlu.png)
As you can see, the path must not be <http://86.110.225.19/static/styles/homepage.css> but
<http://172.16.58.3/SOMETHING/static/styles/homepage.css>
How can I fix this? Thank you so much.<issue_comment>username_1: Change the configuration to:
```
STATIC_URL = '/Something/static/'
```
Upvotes: 2 <issue_comment>username_2: I found out what caused this error. I added this line to **settings.py**:
```
SUB_SITE = '/Something/'
```
and it works! :)
Upvotes: 0 |
2018/03/20 | 653 | 2,624 | <issue_start>I need to run a simple JavaScript function after my React form has finished loading. The JS is loaded from a 3rd party so I have no control over it, and I need to run one of its functions in order to make a captcha section appear in my form. If I hit F5 then the area appears as expected, but navigating to the page via the routes in the app does not trigger the load.
I have worked out that if I call this line of JS
window.__challenger.callPageRequest();
then the captcha area appears. I know this works, as I added a temporary button to the form that is being rendered and called the function on the button click. But I can't have some random button; I need to just fire this JS line every time the form is navigated to.
I've tried using:
```
componentDidMount() {
window.__challenger.callPageRequest();
}
```
But that complains with:
Cannot read property 'callPageRequest' of undefined
And i've tried adding:
window.__challenger.callPageRequest();
to the end of the form in the render method.
Why is it so hard to do such a simple thing? Any help would be very much appreciated.<issue_comment>username_1: Are you sure componentDidMount is running after the route change? A route that is only different from other routes because of different params does not remount. You could try the [componentWillReceiveProps()](https://reactjs.org/docs/react-component.html#componentwillreceiveprops) lifecycle method instead.
Upvotes: 2 <issue_comment>username_2: Short of bringing in Redux and an additional component that dispatches actions on script loads, the best I can think of just now: if you're using a script tag with an external src to load the script, you could try adding a handler onto the script tag and rendering it in your render method (although this is a bit messy). You might need to do some `this` binding in your constructor. Caveat: I haven't tried this.
see this answer: [Script load in react](https://stackoverflow.com/questions/42847126/script-load-in-react)
Upvotes: 1 <issue_comment>username_3: The easiest (and possibly hackiest) is to poll `window` and check for the presence of the function before triggering it:
```
componentDidMount() {
const poll = () => {
if(window.__challenger && window.__challenger.callPageRequest) {
window.__challenger.callPageRequest();
}else {
setTimeout(() => {
poll();
}, 250);
}
};
poll();
}
```
If it exists on the first pass, go ahead and immediately call it. If it doesn't exist yet, wait 250ms and then try again.
Upvotes: 2 [selected_answer] |
2018/03/20 | 652 | 2,201 | <issue_start>username_0: I have a function that I'd like to apply to several elements of the same class. It's a scroll page function and I need it to only execute once. So I put it in a wrapper. It works but I'd like to be able to just add a class to an element and have it act upon that element. I tried iterating through the elements and using addClass to add a unique class with their respective index added to the end but this did not work. What I have now only acts upon the first element with the "split" class.
```
//EXECUTES ONLY ONCE
function once(fn, context) {
var result;
return function() {
if(fn) {
result = fn.apply(context || this, arguments);
fn = null;
}
return result;
};
}
// Usage
//var split1 = once(function() {
// fadeInText(".split1");
//});
const handlers = $(".split").toArray()
.map(s => ({ el: $(s), show: once(() => fadeInText(s)) }));
$(window).scroll(function() {
for(const {el, show} of handlers) {
if( $(this).scrollTop() + $(window).height() > el.offset().top)
show();
}
});
//SPLITTEXT
var f = ".split";
fadeInText(f);
function fadeInText(l) {
var el = document.querySelector(l);
var split = el.dataset.split;
var text = new SplitText(el, { type: split });
var tl = new TimelineMax({ paused: false });
var splitEls = text[split];
var wrapEls = function wrapEls(els) {
return els.map(function (el) {
return '' + el.innerText + '';
});
};
var wrapped = wrapEls(splitEls);
splitEls.forEach(function (el, i) {
el.style.overflow = 'hidden';
el.innerHTML = wrapped[i];
});
var masks = splitEls.map(function (el) {
return el.querySelector('span');
});
tl.staggerFrom(masks, 1.25, { skewY: 4, y: '200%', ease: Expo.easeOut, delay: 0.9 }, 0.1, 'in');
return l;
}
```<issue_comment>username_1: `$('.class').each(function(){ /* 'this' means the element */})`
Inside the function, `this` is scoped to the element you're on in the iteration.
Upvotes: 0 <issue_comment>username_2: If you want a pure JS application, try this:
```
const elements = document.querySelectorAll('.class');
elements.forEach((element, key) => {
// your code
});
```
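To tie this back to the question's `once()` wrapper, here is a sketch in plain JavaScript (no DOM or jQuery; an array stands in for the matched elements) showing one wrapped handler per element, each firing at most once across repeated scroll events:

```javascript
// The once() wrapper from the question, unchanged
function once(fn, context) {
  var result;
  return function () {
    if (fn) {
      result = fn.apply(context || this, arguments);
      fn = null;
    }
    return result;
  };
}

var elements = ["a", "b", "c"]; // stand-ins for the matched .split elements
var calls = 0;
var handlers = elements.map(function (el) {
  return once(function () { calls += 1; return el; });
});

// Simulate five scroll events, each of which invokes every handler
for (var i = 0; i < 5; i++) {
  handlers.forEach(function (h) { h(); });
}
console.log(calls); // each handler ran exactly once
```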
Upvotes: 2 |
2018/03/20 | 369 | 1,282 | <issue_start>username_0: `$funcNum = $_GET['CKEditorFuncNum'] ;` is not returning the number.
I tried creating the upload.php for ckeditor uploadimage plugin according to
<https://stackoverflow.com/a/44553006/8719001> (sorry can't add comments yet)
which includes `echo "window.parent.CKEDITOR.tools.callFunction($funcNum, '$url', '$message');";`
However when I drop images it doesn't work and in the console I get the Response Text which doesn't show any funcNumber:
```
"window.parent.CKEDITOR.tools.callFunction(, 'https://example.com/upload/test.jpg', '');"
```
I think this might be part of filebrowser plugin?
Which I have enabled and also declared `$settings['filebrowserUploadUrl'] =`<issue_comment>username_1: This is a version problem.
I tried for 2 hours and couldn't get the CKEditorFuncNum parameter either.
After I changed my 4.9.1 version to 4.7.3, it worked.
Upvotes: 2 <issue_comment>username_2: Try adding `config.filebrowserUploadMethod = 'form';` to your config.js if the CKEditor additional arguments (CKEditor, langCode, CKEditorFuncNum) come through empty. Otherwise errors occur, such as
`Incorrect server response.` and `[CKEDITOR] Error code: filetools-response-error.`.
Please refer to <https://github.com/ckeditor/ckeditor-dev/issues/1894>
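For reference, a hedged sketch of where that setting lives (the `editorConfig` pattern is CKEditor 4's standard config.js shape; the upload URL shown is a placeholder, not taken from the question):

```
// config.js
CKEDITOR.editorConfig = function (config) {
    // Fall back to the pre-4.9 form-based upload, so upload.php receives
    // CKEditorFuncNum as a GET parameter again
    config.filebrowserUploadMethod = 'form';
    config.filebrowserUploadUrl = '/upload.php'; // placeholder path
};
```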
Upvotes: 3 |
2018/03/20 | 787 | 2,877 | <issue_start>I would like to create an Excel macro that allows me to copy x number of rows of a certain column. For example, I have column K with 10,500 rows. I want to copy 1000 lines each time, and also the 500 lines at the end. Any help with the coding part? I looked on so many sites with no success. I don't need to paste the copied numbers into any other Excel sheet. I just need the macro command to copy 1000 lines every time from the column that I selected.
thank youuuuu very much and much appreciated !
Cheers<issue_comment>username_1: This code finds the number of rows in the currently selected column, checks if the number to be copied is less than the rows left, then selects the range and puts it in copy mode. After the range is copied you would have to go to the sheet, document, or whatever to paste the data.
I've modified the code to then select down 1000 rows or what ever is left in the column. So you should be able to run the code, paste the data, run the code, paste the data. When you hit the end, a message tells you that you are at the end of the column.
```
Application.CutCopyMode = False
numRowstoCopy = 1000
varCurrentRow = ActiveCell.Row
varCurrentColumn = ActiveCell.Column
FinalRow = Cells(Rows.Count, varCurrentColumn).End(xlUp).Row
varRowsToEnd = FinalRow - varCurrentRow
If varRowsToEnd < numRowstoCopy Then
Range(Cells(varCurrentRow, varCurrentColumn), Cells(varCurrentRow + varRowsToEnd, varCurrentColumn)).Select
Selection.Copy
MsgBox "Last Rows to Paste Have been copied"
Else
Range(Cells(varCurrentRow, varCurrentColumn), Cells(varCurrentRow + numRowstoCopy - 1, varCurrentColumn)).Select
Selection.Copy
ActiveCell.Offset(numRowstoCopy, 0).Select
End If
```
Upvotes: 0 <issue_comment>username_2: 1. Select a column and run the sub procedure from the Macros dialog (Alt+F8). The first 1000 cells will be copied to the clipboard.
2. Paste the data into another program.
3. Return to Excel and run the sub procedure again. The next 1000 rows of data will be copied to the clipboard.
4. When the last group of data has been copied to the clipboard, a message box will notify.
```
Option Explicit
Sub progressiveCopy()
Dim m As Long
Static i As Long, k As Long
Application.CutCopyMode = False
m = 1000
If k <> ActiveCell.Column Then
k = ActiveCell.Column
i = 0
End If
With Worksheets("sheet1")
m = Application.Min(m, .Cells(.Rows.Count, k).End(xlUp).Row - i)
.Cells(i + 1, k).Resize(m, 1).Copy
i = i + m
If i >= .Cells(.Rows.Count, k).End(xlUp).Row Then
MsgBox "This column is complete. Select another column next time."
End If
End With
End Sub
```
You may wish to set up a hot-key combination for the sub procedure in the Macros dialog to ease repetitive operations.
Upvotes: 1 |
2018/03/20 | 1,202 | 4,413 | <issue_start>username_0: I've searched and have had trouble finding an answer.
tl;dr I have a wordpress theme and no longer have support from the creator, and I'm trying to get rid of warning text popping up on the top of some pages. I'm not a programmer so I'm not sure what to do but I do understand code enough to patch in a fix from somewhere else. I need help.
Here's what I get:
```
Warning: Illegal string offset 'group_no' in file1.php on line 196
Warning: Illegal string offset 'order_no' in file1.php on line 196
Warning: Illegal string offset 'group_no' in file1.php on line 200
```
Here's the relevant code for file1.php:
```
function get_location( $location, $post_id )
{
// loaded by PHP already?
if( !empty($location) )
{
return $location;
}
// vars
$groups = array();
$group_no = 0;
// get all rules
$rules = get_post_meta($post_id, 'rule', false);
if( is_array($rules) )
{
foreach( $rules as $rule )
{
// if field group was duplicated, it may now be a serialized string!
$rule = maybe_unserialize($rule);
// does this rule have a group?
// + groups were added in 4.0.4
if( !isset($rule['group_no']) )
{
$rule['group_no'] = $group_no;
// separate groups?
if( get_post_meta($post_id, 'allorany', true) == 'any' )
{
$group_no++;
}
}
// add to group
$groups[ $rule['group_no'] ][ $rule['order_no'] ] = $rule; // this is line 196
// sort rules
ksort( $groups[ $rule['group_no'] ] );
}
// sort groups
ksort( $groups ); //this is line 200
}
// return fields
return $groups;
}
```
I understand that this is an array assignment error of some kind but I'm not familiar enough with php to understand what the error is and how to fix it. How would I rebuild this code to make the error go away while preserving function?
2018/03/20 | 1,043 | 3,846 | <issue_start>username_0: I have a table that looks something like this:
```
BuildingID | RouterType | RouterPort | RouterInstaller | Notes
-----------+------------+------------+-----------------+-------
282 | Linksys | 1990 | Super | NULL
307 | Sonic Wall | NULL | Greg | NULL
311 | NULL | NULL | NULL | NULL
```
I would like the Notes column to be the concatenation of the 2nd 3rd and 4th columns only if the column is not null.
```
line 1: Router Type: Linksys Router Port: 1990 Router Installer: Super
line 2: Router Type: Sonic Wall Router Installer: Greg
line 3: NULL
```
Also the word 'Router Type:' should only come in if the value of Router type is not null etc.
I am pretty new to SQL - any help would be greatly appreciated.<issue_comment>username_1: Try this update statement:
```
DECLARE @TEMP_TABLE TABLE (
BUILDING_ID INT,
ROUTER_TYPE VARCHAR(20) NULL,
PORT INT NULL,
ROUTER_INSTALLER VARCHAR(20) NULL,
NOTES VARCHAR(1000) NULL
)
INSERT INTO @TEMP_TABLE VALUES(1,'Linksys Router', 1990, 'Super', NULL)
INSERT INTO @TEMP_TABLE VALUES(2,NULL, NULL, NULL, NULL)
UPDATE @TEMP_TABLE
SET NOTES = COALESCE(' Router type: ' + ROUTER_TYPE,'') + COALESCE(' Port: ' + CAST(PORT AS VARCHAR),'') + COALESCE(' Router installer: ' + ROUTER_INSTALLER,'')
WHERE ROUTER_TYPE IS NOT NULL OR PORT IS NOT NULL OR ROUTER_INSTALLER IS NOT NULL
SELECT * FROM @TEMP_TABLE
```
Upvotes: 0 <issue_comment>username_2: This will do it by combining Coalesce and Concat. The column names are added as labels to the column values.
```
select COALESCE(Notes, COALESCE(CONCAT(COALESCE(CONCAT('RouterType: ',RouterType),''),
COALESCE(CONCAT(' RouterPort: ',RouterPort ),''),
COALESCE(CONCAT(' RouterInstaller: ',RouterInstaller),'')), NULL)) as Notes
from yourTable;
```
Upvotes: 1 <issue_comment>username_3: Try this select query, i think it will help you:
```
SELECT CASE WHEN (
COL2 IS NOT NULL
AND COL3 IS NOT NULL
AND COL4 IS NOT NULL )
THEN
CONCAT(COL2,' ', COL3,' ', COL4) END as ConcatedData,
* from YOUR_TABLE;
```
Upvotes: 0 <issue_comment>username_4: To get the spacing correct, I recommend:
```
select stuff(coalesce(' RouterType: ' + RouterType), '') +
coalesce(' RouterPort: ' + RouterPort ), '') +
coalesce(' RouterInstaller: ', RouterInstaller), ''),
1, 1, ''
) as Notes
from t;
```
In an `update`:
```
update t
set notes = stuff(coalesce(' RouterType: ' + RouterType), '') +
coalesce(' RouterPort: ' + RouterPort ), '') +
coalesce(' RouterInstaller: ', RouterInstaller), ''),
1, 1, ''
);
```
Note: This will not put in a `NULL` value, instead using an empty string. That is easily fixed -- if it is a problem:
```
update t
set notes = nullif(stuff(coalesce(' RouterType: ' + RouterType), '') +
coalesce(' RouterPort: ' + RouterPort ), '') +
coalesce(' RouterInstaller: ', RouterInstaller), ''),
1, 1, ''
), ''
)
```
Upvotes: 0 <issue_comment>username_5: Try this:
```
select case when [Note] = '' then null else [Note] end as Notes from (
select BuildingID,
case when RouterType is null then '' else 'Router Type: ' + RouterType + '; ' end +
case when RouterPort is null then '' else 'Router Port: ' + cast(RouterPort as varchar(10)) + '; ' end +
case when RouterInstaller is null then '' else 'Router Installer: ' + RouterInstaller + '; ' end +
case when Notes is null then '' else 'Notes: ' + Notes + '; ' end [Note]
from MY_TABLE
) a
```
Upvotes: 3 [selected_answer] |
2018/03/20 | 427 | 1,599 | <issue_start>username_0: How can I modify grep so that it prints the full file if its entry matches the grep pattern, instead of printing just the matching line?
I tried using `grep -C2` to print two lines above and two below but this doesn't always work as the number of lines is not fixed.
I am not just searching a single file, I am searching an entire directory where some files may contain the given pattern and I want those files to be completely printed.<issue_comment>username_1: Simple `grep` + `cat` combination:
```
grep 'pattern' file && cat file
```
Upvotes: 2 <issue_comment>username_2: Use `grep`'s `-l` option to list the paths of files with matching contents, then print the contents of these files using `cat`.
```
grep -lR 'regex' 'directory' | xargs -d '\n' cat
```
The command from above cannot handle filenames with newlines in them.
To overcome the *filename with newlines* issue and also allow more sophisticated checks you can use the `find` command.
The following command prints the content of all regular files in `directory`.
```
find 'directory' -type f -exec cat {} +
```
To print only the content of files whose content matches the regexes `regex1` *and* `regex2`, use
```
find 'directory' -type f \
-exec grep -q 'regex1' {} \; -and \
-exec grep -q 'regex2' {} \; \
-exec cat {} +
```
The linebreaks are only for better readability. Without the `\` you can write everything into one line.
Note the `-q` for `grep`. That option suppresses `grep`'s output. `grep`'s exit status will tell `find` whether to list a file or not.
Upvotes: 2 |
2018/03/20 | 774 | 2,641 | <issue_start>username_0: The first obstacle that I faced is that there is no shorthand for `document.getElementById` in Vue so I implemented a [function like this one](https://stackoverflow.com/questions/36970062/vue-js-document-getelementbyid-shorthand). The second obstacle I'm facing is that IMHO the [html2canvas docs](https://html2canvas.hertzen.com/documentation) are very limited when dealing with situations where you don't have a and a .
Here's the summary of the markup in my `.vue`file:
```
// more markup...
```
and this is how I'm trying to implement the capture function:
```
//Vue equivalent for document.getElementById
showCaptureRef() {
console.log(this.$refs.capture);
},
// Screen capture function
downloadVisualReport () {
let vc = this
alert("Descargando reporte visual")
html2canvas(vc.showCaptureRef).then(canvas => {
vc.document.body.appendChild(canvas)
}).catch((error) => {
console.log("Erorr descargando reporte visual")
alert("Error descargando el reporte visual")
});
},
```<issue_comment>username_1: I figured it out, I had 2 mistakes:
First of all I had `id='capture'` instead of `ref="capture"`
Second, this line `vc.document.body.appendChild(canvas)`, needs to be changed to `document.body.appendChild(canvas)`
This is the final code for a function that downloads such canvas as an image:
```
downloadVisualReport () {
let vc = this
let filename = 'Reporte de ' + vc.campaign.name + '.png';
html2canvas(vc.showCaptureRef()).then(canvas => {
vc.saveAs(canvas.toDataURL(), filename);
}).catch((error) => {
alert("Error descargando el reporte visual")
});
},
```
Upvotes: 0 <issue_comment>username_2: For future readers. Try this [vue-html2canvas](https://www.npmjs.com/package/vue-html2canvas) mixin.
```
<template>
  <div>
    <div ref="printMe">
      <h1>Print me!</h1>
    </div>
    <img :src="output">
  </div>
</template>
<script>
export default {
  data() {
    return {
      output: null
    }
  },
  methods: {
    async print() {
      const el = this.$refs.printMe;
      // add option type to get the image version
      // if not provided the promise will return
      // the canvas.
      const options = {
        type: 'dataURL'
      }
      this.output = await this.$html2canvas(el, options);
    }
  }
}
</script>
```
Upvotes: -1 <issue_comment>username_3: From https://www.npmjs.com/package/vue-html2canvas:
```
<template>
  <div>
    <div ref="printMe">
      <h1>Print me!</h1>
    </div>
    <img :src="output">
  </div>
</template>
<script>
export default {
  data() {
    return {
      output: null
    }
  },
  methods: {
    print() {
      const el = this.$refs.printMe;
      // add option type to get the image version
      // if not provided the promise will return
      // the canvas.
      const options = {
        type: 'dataURL'
      };
      (async () => {
        this.output = await this.$html2canvas(el, options);
      })()
    }
  }
}
</script>
```
Upvotes: -1 |
2018/03/20 | 803 | 2,664 | <issue_start>username_0: I have an array, **$row2**.
In **$row2** two arrays are present. The output of **$row2** is:
```
Array
(
[0] => Array
(
[Proposal_id] => 9
[row] => 1
[col1] => 2
[col2] => 2
[col3] =>
[col4] =>
[col5] =>
[Type] => customtbl
[Invoice_term] =>
[Qoute] =>
[Rate_per_hour] =>
[Total] =>
)
[1] => Array
(
[Proposal_id] => 9
[row] => 2
[col1] => 3
[col2] => 4
[col3] =>
[col4] =>
[col5] =>
[Type] => customtbl
[Invoice_term] =>
[Qoute] =>
[Rate_per_hour] =>
[Total] =>
)
)
```
I want to remove null elements from the array, but I can't do this.
I tried the following methods:
```
array_filter($row2);
array_filter($row2, function($var){return !is_null($var);});
array_diff($rows2, array("null", ""));
```
2018/03/20 | 840 | 2,763 | <issue_start>username_0: I have some problems with getting OpenCV-3.1.0 working (directly) with `catkin build`. I am trying to build a workspace of the TIAGo Simulation, but I get errors.
When I build a test file (containing opencv functions) with g++ and flags I have no problems:
`g++ -o test test.cpp $(pkg-config opencv --cflags --libs)`
If I don´t use the flags I get errors:
`g++ -o test test.cpp`
output:
```
test.cpp:(.text+0x3e): undefined reference to `cv::imread(cv::String const&, int)'
```
The same is happening when I'm building the TIAGo Simulation workspace, which uses OpenCV, with `catkin build`.
Trying to build the TIAGo simulation:
```
catkin build
```
output (one of the errors):
```
look_to_point.cpp:(.text+0xa6): undefined reference to `cv::imshow(cv::String const&, cv::_InputArray const&)'
```
So since the errors are quite similar, I think it could be solved by adding the equivalent of `pkg-config opencv --cflags --libs` to `catkin build`. Since I am quite inexperienced with ROS, catkin, and CMake, I don't know how to accomplish this.
If anyone wants to point me in the right direction I would be very grateful.
2018/03/20 | 661 | 2,193 | <issue_start>username_0: I'm trying to select records from a DB2 Iseries system where the date field is greater than the first of this year.
However, the date fields I'm selecting from are actually PACKED fields, not true dates.
I'm trying to convert them to YYYY-MM-DD format and get everything greater than '2018-01-01' but no matter what I try it says it's invalid.
Currently trying this:
```
SELECT *
FROM table1
WHERE val = 145
AND to_date(char(dateShp), 'YYYY-MM-DD') >= '2018-01-01';
```
it says expression not valid using format string specified.
Any ideas?
2018/03/20 | 943 | 3,161 | <issue_start>username_0: I am trying to select an object by className and NOT by CSS or id, and I continually get an error that it cannot locate the CSS selector, even though I'm trying to select by className. It even says it cannot locate the element with the CSS selector either. I have just set up Protractor, so maybe I'm missing something? Protractor version 5.3.0, and I run the test file using 'ng e2e' from the command line.
this is my page object:
```
import { browser, by, element } from 'protractor';
export class AppPage {
navigateTo() {
return browser.get('https://mywebsite.com');
}
getSearchBtn() {
return element(by.className('c-search-form-button'));
}
}
```
and this is my spec file
```
import { AppPage } from './app.po';
import { browser, by, element } from 'protractor';
import {Config} from 'protractor';
describe('playground', () => {
let page: AppPage;
;
beforeEach(() => {
page = new AppPage();
browser.waitForAngularEnabled(false); // must have if non-angular site
page.navigateTo(); // or browser.get(url); //or browser.get('https://website.com');
});
it('click the search button', () => {
page.getSearchBtn().click();
});
});
```
error
```
- Failed: No element found using locator: By(css selector, .c-search-form-button)
```
the object
```
...
```
i have also tried with
```
element(by.className('c-search-form'));
element(by.className('c-search-form-button'));
element(by.css('.c-search-form-button'));
element(by.css('.c-search-form'));
element(by.tagName('push-button'));
```
2018/03/20 | 694 | 2,325 | <issue_start>username_0: I am trying to install the darkflow libraries from the repository: <https://github.com/thtrieu/darkflow>
Unfortunately when entering either
```
python setup.py build_ext --inplace
```
or
```
pip install -e .
```
results in the following error:
```
running build_ext
building 'darkflow.cython_utils.nms' extension
error: Microsoft Visual C++ 14.0 is required. Get it with "Microsoft Visual C++ Build Tools": http://landinghub.visualstudio.com/visual-cpp-build-tools
```
I have installed visual studios 2017 and edited my environment variables to include
```
C:\Program Files (x86)\Microsoft Visual Studio\2017\BuildTools
```
However I continue to get the same error. Any ideas?
2018/03/20 | 720 | 2,480 | <issue_start>username_0: My Ajax call via jQuery is always transmitted (status 200), but only sometimes is it received by the Rails controller. The data is sent properly, but most of the time the controller sends back an empty JSON object as the response. I need the timeout, else the stringify doesn't work. Any hints?
```
setTimeout(function() {
$.ajax({
type: "POST",
url: "homepage/routes",
data: {
routes_my: JSON.stringify(routes)
},
success: function(data) {
console.log(data);
}
});
}, 100);
}
#Controller
def routes
@routes = JSON.parse(params[:routes_my], object_class: OpenStruct)
render :json => @routes
end
```
[console log working example](https://i.stack.imgur.com/aPf3l.png)
[console log bad example](https://i.stack.imgur.com/TzCWf.png)
2018/03/20 | 680 | 2,171 | <issue_start>username_0: So I am building my website. I added my banner and my footer, but my background is getting messed up. It's not stretching out; there is some empty whitespace.
[This is what happens:](https://i.stack.imgur.com/mJfcQ.png)
If I use background-size: 100% , my background disappears. Same thing when I use background-size: cover;.
HTML:
```
```
CSS:
```
.content {
min-height: 690px;
background: url("background.png")
background-size: 100%;
}
.footer {
min-height: 200px;
background: url("footer.png");
background-repeat: repeat-x;
}
```
2018/03/20 | 452 | 1,727 | <issue_start>username_0: I'm building a custom directive where I want to read one of the attributes (formControlName) of the native element, and then conditionally add one or more attributes to the element.
However, when I console.log the native element's attribute, I get:
>
> undefined
>
>
>
Here's what I tried:
```
@Directive({
selector: '[appInputMod]'
})
export class InputModDirective implements OnInit {
constructor(public renderer: Renderer2, public hostElement: ElementRef) { }
@Input()
appInputMod() { }
ngOnInit() {
console.log(this.hostElement.nativeElement.formcontrolname);
const el = this.hostElement.nativeElement;
if (el.formcontrolname === 'firstName')
{
this.renderer.setAttribute(this.hostElement.nativeElement, 'maxlength', '35');
}
}
}
```
How can I read this attribute name from within the directive?<issue_comment>username_1: A hack, but a possibly useful side note:
This works from ngOnInit:
```
this.hostElement.nativeElement.getAttribute('formcontrolname');
```
Upvotes: 0 <issue_comment>username_2: What you're doing doesn't seem very Angular, you normally don't want to start relying on DOM manipulation. The more Angular approach would be to read the attribute on the element of your directive as an `@Input()`, and provide your results as an `@Output()`:
```
@Directive({
selector: '[appInputMod]'
})
export class InputModDirective implements OnInit {
@Input() formcontrolname: string;
@Output() somethingHappened: EventEmitter<any> = new EventEmitter<any>();
ngOnInit() {
if (this.formcontrolname === 'firstName') {
this.somethingHappened.emit({maxlength: 35});
}
}
}
```
And then in your template:
```
```
Upvotes: 4 [selected_answer] |
2018/03/20 | 1,479 | 5,287 | <issue_start>username_0: I'm working with a smaller piece of code to test functionality for a larger (beginner) program, but I don't understand the difference between two strings.
I found and used:
```
#include <stdio.h>
#include <string.h>
int main()
{
char *string, *found;
string = strdup ("1/2/3");
printf("Orig: '%s'\n",string);
while ((found = strsep(&string,"/")) != NULL )
printf ("%s\n",found);
return (0);
}
```
and this print the tokens one at a time.
Then when I try and move to a user entered string:
```
#include <stdio.h>
#include <string.h>
int main()
{
char string[13];
char *found, *cp = string;
fprintf(stderr, "\nEnter string: ");
scanf("%12s",string);
printf("Original string: '%s'\n",string);
while((found = strsep(&cp,"/,-")) != NULL )
printf("%s\n",found);
return(0);
}
```
I get a seg fault. I understand the basics of pointers, arrays and strings, but clearly I'm missing something, and would love for someone to tell me what it is!
Also - if I change `printf("%s\n",found);` to `printf("%i\n",found);` I get some junk integers returned, but always the correct amount, e.g. If I enter `1/2/3` I get three lines of integers, `1111/2222` I get two lines.
Thanks!
-Edit-
There was an adittional problem with `strsep`, detailed [here](https://stackoverflow.com/questions/49394350/string-token-from-strsep-not-printing-seg-fault/49394489?noredirect=1#comment85793882_49394489). Thanks all.<issue_comment>username_1: You should allocate memory for the user input string.
First option is statically
`char string[256];`
and second option is dynamically using `malloc()` function
```
char *string;
string = (char*) malloc(256 * sizeof(char));
if (string == NULL)
{
//error
}
```
Don't forget at the end to release the allocated memory
```
free(string);
```
Upvotes: 1 <issue_comment>username_2: In the first piece of code, `string` is assigned the return value of `strdup`, which allocates space for the string to duplicate and returns a pointer to that allocated space.
In the second piece of code, `string` uninitialized when it is passed to `scanf`, so `scanf` is reading the invalid value in that pointer and attempting to dereference it. This invokes [undefined behavior](https://en.wikipedia.org/wiki/Undefined_behavior) which in this case manifests as a crash.
You need to set aside space for the user's string. A simple way to do this is to create an array of a given size:
```
char string[80];
```
Then tell `scanf` how many characters it can read in:
```
scanf("%79s",string);
```
Upvotes: 3 [selected_answer]<issue_comment>username_3: Differences between the two cases:
1. In the first case `string` points to valid memory that was allocated by `strdup` while in the second case you are trying to read into invalid memory.
2. The first case is well behaved while the second case is cause for undefined behavior.
The second case can be fixed by allocating memory for it using `malloc` or using a fixed size array.
```
char *string,*found;
string = malloc(100); // Make it large enough for your need.
fprintf(stderr, "\nEnter string: ");
scanf("%99s",string);
```
or
```
char string[100], *found;
fprintf(stderr, "\nEnter string: ");
scanf("%99s",string);
```
---
Make sure you deallocate dynamically allocated memory. Otherwise, your program leaks memory.
Upvotes: 2 <issue_comment>username_4: You didn't allocate the space needed !
You have to have a memory space to write to.
You can have it statically "char string[256]", or dynamically with allocation.
In your first example, you use "strdup" that does a malloc, but scanf will not allocate memory for you.
If you want all the user input, you usually wrap the scanf in a while loop in order to retrieve the user input chunk by chunk. Then you have to reallocate each time your buffer is insufficient to add the chunk.
If you just want to retrieve a string from stdin without doing any format-checking, I strongly recommand fgets.
Upvotes: 0 <issue_comment>username_5: The reason is very simple. Your `string` variable is a char pointer and you need to allocate memory to it to store a string.Probably in your first case `strdup ("1/2/3");` returns a string and your `char pointer *string` points to the string return by `strdup` function and that is the reason why you are not getting the segmentation error in the first case. Even in your first case also you might get a segmentation error if enter a very long string.
so allocate enough memory to the `string` pointer like below in your second example and that will fix your problem:-
```
char *string = malloc(50);// here malloc will allocate 50 bytes from heap
```
Upvotes: 0 <issue_comment>username_6: In user enter string case you do not have memory allocated to the pointer string.In the first case, **strdup** is allocating memory for string pointer while in the second case you do not have any memory associated with string pointer leading to segfault. first, allocate memory using **malloc** and then use **scanf**.
Upvotes: 0 <issue_comment>username_7: char string[13]
char cp=string
Here cp is a variable of type char and as 1 byte of memory allocated
It won't be able to store a char array of 13 character which would be 13 bytes, and it's because of this you are getting segmentation fault
Upvotes: 0 |
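The answers above all make the same point: `strsep()` needs a writable buffer of sufficient size that the caller has actually allocated, plus a separate cursor pointer for `strsep()` to advance. A minimal sketch of that pattern (assumes a POSIX/glibc system where `<string.h>` declares `strsep()`; the helper name `split_tokens` is illustrative, not from the thread):

```c
/* Sketch: split a caller-owned, writable buffer in place with strsep().
   Assumes a POSIX/glibc system; split_tokens is an illustrative name. */
#define _GNU_SOURCE            /* expose strsep() in glibc's <string.h> */
#include <stdio.h>
#include <string.h>

/* Splits buf in place on any delimiter character in delims.
   Stores at most max token pointers in out; returns the token count. */
size_t split_tokens(char *buf, const char *delims, char **out, size_t max)
{
    char *cp = buf;            /* strsep() advances cp; buf stays valid */
    char *tok;
    size_t n = 0;

    while (n < max && (tok = strsep(&cp, delims)) != NULL)
        out[n++] = tok;
    return n;
}
```

With the question's input, `char buf[] = "1/2/3";` gives `strsep()` writable array memory (unlike an uninitialized pointer handed to `scanf`), and splitting on `"/,-"` yields the three tokens `1`, `2`, `3`.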
2018/03/20 | 597 | 2,187 | <issue_start>username_0: Below is the XML I am trying to parse. I need to extract the DLRNUMBER tag from the first FL node list. I am trying to use Xdocument. I don't know the syntax to get the tag.
```
XDocument xdoc = XDocument.Load(@"C:\Temp\FR_in.xml");
var query = from e in xdoc.Descendants("FLSS")
select e;
```
Below is the XML:
```
```<issue_comment>username_1: In `Descendants` you have to provide the `XName` of the nodes to search for. In your code example you try to search for a value of an attribute of a node.
You can do this:
```
var result = xdoc.Descendants("ATTR")
.FirstOrDefault(element =>
element.Attribute("Id")?.Value == "DLRNUMBER")?.Attribute("Val")?.Value;
```
This finds the first `ATTR` tag that has an `Id` attribute with the value `DLRNUMBER` and returns the value of its `Val` attribute.
---
If there may be other `DLRNUMBER` values at different levels that you don't want to find, you can consider to find the `COLL` node first:
```
var collNode = xdoc.Descendants("COLL").FirstOrDefault();
var result = collNode.Descendants("ATTR")
.FirstOrDefault(element =>
element.Attribute("Id")?.Value == "DLRNUMBER")?.Attribute("Val")?.Value;
```
or refine the search according to your requirements and the kind of xml you expect as input.
Upvotes: 3 [selected_answer]<issue_comment>username_2: Try following. I prefer to get all items in an array so I can select one or more items as needed. :
```
using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.Xml;
using System.Xml.Linq;
namespace ConsoleApplication31
{
class Program
{
const string FILENAME = @"c:\temp\test.xml";
static void Main(string[] args)
{
XDocument doc = XDocument.Load(FILENAME);
XElement fl = doc.Descendants("FL").FirstOrDefault();
            int[] dlrNumbers = fl.Elements("DOBJ")
                .SelectMany(x => x.Elements("ATTR")
                    .Where(y => (string)y.Attribute("Id") == "DLRNUMBER")
                    .Select(y => (int)y.Attribute("Val")))
                .ToArray();
}
}
}
```
Upvotes: 0 |
2018/03/20 | 866 | 2,910 | <issue_start>username_0: As part of our **ASP.NET Core 2.0** build process I have added a *dotnet test* command, which I run from a Windows batch file.
Here is my command.
```
dotnet test "MyProject.csproj" --no-restore --results-directory "MyProject\TestResults" --verbosity minimal
```
And here is the output when run from the command line.
[](https://i.stack.imgur.com/EyCNT.png)
So it all appears to work correctly, yet no test results / test output is created.<issue_comment>username_1: To output the test results from dotnet test, you can try passing `-xml /some/path/out.xml` or using the `--work` parameter, like this: `dotnet test --work:"mypath/myresult.xml"`. See the threads below for details:
* [dotnet test - Output test results](https://github.com/dotnet/cli/issues/3114)
* [Is there a way to specify TestResult.xml location?](https://github.com/nunit/dotnet-test-nunit/issues/100)
---
Besides, generally you need to specify the argument `-l|--logger` which specifies a logger for test results.
e.g.:
`dotnet test "myproject.csproj" --logger "trx;LogFileName=path\to\tests\folder\results.trx"` or
`dotnet test "myproject.csproj" -l:"trx;LogFileName=path\to\tests\folder\results.trx"`
To make the generated `trx` files available as test results in VSTS/TFS, you can use the "Publish Test Results" task:
[](https://i.stack.imgur.com/UkPWY.png)
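If your build is defined in YAML rather than the visual designer, the same task can be added roughly like this (a sketch — the glob pattern and search folder are assumptions; check the task inputs against your VSTS/TFS version):

```yaml
# Publish any .trx files produced by `dotnet test --logger trx`
- task: PublishTestResults@2
  inputs:
    testResultsFormat: 'VSTest'
    testResultsFiles: '**/*.trx'
    searchFolder: '$(System.DefaultWorkingDirectory)'
```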
Upvotes: 5 [selected_answer]<issue_comment>username_2: To output the test results using the `dotnet test` option `--results-directory`, you also have to set `--logger`.
The `-xml` and `--work` options no longer work, as they are not part of the options provided by the test CLI. I remember having used `-xml` in the past and it worked, but it doesn't anymore.
You can see all the options for CLI .NET Core 2.x [here](https://learn.microsoft.com/en-us/dotnet/core/tools/dotnet-test?tabs=netcore2x)
To publish the tests results into a specific folder, you should use the command below:
```
dotnet test --logger "trx;logfilename=mytests.trx" --results-directory ./somefolder/subfolder
```
Or
```
dotnet test --logger "trx;LogFileName=./somefolder/subfolder/mytests.trx"
```
The **trx** file is an XML file, so you could name it *mytests.xml* instead of *mytests.trx*.
If you use VSTS, you can publish your tests to be shown in your build page using the command above in '.NET Core' task for test and the 'Publish Test Result' task.
The '.NET Core' task explains where it publishes the results, as shown in the screenshot below:
[](https://i.stack.imgur.com/kNyk0.jpg)
Once all done, your build page would look like this:
[](https://i.stack.imgur.com/awr5n.jpg)
Upvotes: 5 |
2018/03/20 | 1,291 | 4,176 | <issue_start>username_0: Is there a way to have a cell reference to two rules in conditional formatting?
As a teacher I have a spreadsheet of the grades of 26 children.
Column A: Target = 100
Column B: Actual scores of test 1
Column C: Actual scores of test 2 (taken a month after test 1)
Column D: Difference between column C and column B (to see if their scores went up or down or stay flat after a month)
Column E: How far the score of test 2 is from the target
Is there a way to use Icon Sets so that if the scores go down from test 1 to test 2 (i.e., shows a down arrow), but the score of test 2 is within range of the target, then it will be a green down arrow?
Vice versa, if the scores go up from test 1 to test 2 (i.e., shows an up arrow), but the score of test 2 is only 20 out of 100, for example (out of range of the target), then it will be a red up arrow?
I can do the two separately by using the difference formula, but I can't seem to get both of them to work at the same time.<issue_comment>username_1: If you want icons, I think you'll need to add a column to calculate the summary. If both C and D are ‘bad’ when the value is negative, and they have the same scale, then this formula in column E should do what you need:
```
=MIN(C2,D2)
```
You can then apply the conditional formatting to column E.
If all you want is fill or font changes, then you can use an `AND()` or `OR()` function to combine conditions in your conditional format. You can read more about how to [use formulas in conditional formatting here](https://support.office.com/en-us/article/use-formulas-with-conditional-formatting-fed60dfa-1d3f-4e13-9ecb-f1951ff89d7f), but assuming your data is in row 2, the following formula should do what you need:
```
=OR(D2>0,C2<0)
```
Hope it helps!
Upvotes: 0 <issue_comment>username_2: If you make a formula like the following:

Then you have a column E with a positive number if they increased their score and a negative value if they decreased it.
Then mark the column and choose:
Conditional formatting -> New rule

And add a custom formatting rule like this.

Upvotes: -1 <issue_comment>username_3: As has been mentioned, with the built-in Icon Sets it is not possible to have a green down arrow or a red up arrow, let alone having *both* red and green up and down arrows at the same time.
However, what you are after *is* possible by using arrow characters, combined with a conditional formatting formula rule to set the colours. An extra column is also required if you wish to display the test score differences as well as the arrows.
The following screenshot shows some sample data, the key formulae used, and the resulting arrows:
[](https://i.stack.imgur.com/psJMw.png)
**Setup:**
Enter the following formulae into the designated cells:
`E2` → `=C2-B2`
`F2` → `=C2-A2`
`D2` → `=CHOOSE(2+SIGN(E2),"È","Æ","Ç")`
and copy-paste/fill-down/ctrl-enter them into the rest of the table.
Set two conditional formatting formula rules for column `D` of the table (e.g., `$D$2:$D$8`):
`=F2<0` → red font colour
`=F2>=0` → green font colour
Finally, set the font and size of column `D` of the table (`$D$2:$D$8`) appropriately for the characters chosen. (In this example I'm using the 14pt bold Wingdings 3 characters.)
**Notes:**
* The order of the characters in the `D2` formula is "down arrow", "side-ways arrow", and then "up arrow".
---
The other table is just to demonstrate some possible arrows and colours that you could use. I have repeated the table below so you can copy-paste the characters directly:
```
Wingdings 3 Wingdings Calibri
11pt 14pt_bold 14pt_bold 11pt_bold
ã Ç ñ ñ ↑ ↑
ã Ç ñ ñ ↑ ↑
â Æ ð ó → ↔
â Æ ð ó → ↔
â Æ ð ó → ↔
ä È ò ò ↓ ↓
ä È ò ò ↓ ↓
```
Upvotes: 0 |