Q:
byte[] to Audioclip - noise in audio
I use this code to open my device's file explorer, select an audio file, and insert it as the AudioSource clip of an object in the scene.
public GameObject evento;
public AudioSource audio;
string path = Application.streamingAssetsPath;
EditorGUIController editorGUIController;

private void Awake() {
    editorGUIController = GameObject.FindObjectOfType<EditorGUIController>();
    audio = evento.GetComponent<AudioSource>();
}

public void InsertAudio() {
#if UNITY_EDITOR
    path = EditorUtility.OpenFilePanel("Overwrite with wav", "", "wav");
    audio.clip = LoadMP3(path);
#endif
}

public static AudioClip LoadMP3(string path) {
    byte[] fileData;
    AudioClip clip = null;
    if (File.Exists(path)) {
        int indice = path.LastIndexOf("/");
        string audioNameA = path.Substring(indice + 1);
        string audioName = audioNameA.Substring(0, audioNameA.Length - 4);
        fileData = File.ReadAllBytes(path);
        if (fileData.Length > 0) { Debug.Log("fileData: " + fileData.ToString()); }
        float[] samples = new float[fileData.Length / 4 + 1];
        Buffer.BlockCopy(fileData, 0, samples, 0, fileData.Length);
        int channels = 1;
        int sampleRate = 48000;
        clip = AudioClip.Create(audioName, samples.Length, channels, sampleRate, false);
        clip.SetData(samples, 0);
    }
    return clip;
}
Unfortunately, when I start this method, the audio starts with a very annoying noise (the buzz of a TV, so to speak):
public void playAudio(){
evento.GetComponent<AudioSource>().Play();
}
I'm pretty sure the problem is in the float-to-AudioSource conversion, but I can't figure out how to fix it. Any ideas?
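For reference, a minimal Python sketch (not the asker's C#) of why raw reinterpretation produces noise: a .wav file carries 16-bit signed integer PCM behind a header, so the bytes must be decoded as int16 and scaled into [-1.0, 1.0), not block-copied into 32-bit floats. The sample values below are illustrative.

```python
import struct

# A WAV file stores audio as 16-bit signed PCM, not as raw 32-bit
# floats, and it begins with a header. Reinterpreting the whole byte[]
# as float32 (what the C# Buffer.BlockCopy does) therefore yields noise.
pcm_bytes = struct.pack('<4h', 0, 16384, -16384, 32767)  # four 16-bit samples

# Wrong: reinterpret the same bytes as float32 (mirrors the BlockCopy)
wrong = struct.unpack('<2f', pcm_bytes)   # two meaningless float values

# Right: decode as int16, then normalise each sample by 32768
samples = [s / 32768.0 for s in struct.unpack('<4h', pcm_bytes)]
print(samples)  # → [0.0, 0.5, -0.5, 0.999969482421875]
```

The same idea in C# means skipping the WAV header, reading the payload as 16-bit integers, and dividing by 32768 before calling SetData.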
A: (score -2)
I've not enough reputation so I can't comment -_-
I have the same exact problem and still don't know the answer, but you may be interested in using UnityWebRequestMultimedia.GetAudioClip if you have a path or URL.
Tags: arrays, audio_source, audioclip, data_conversion, unity3d
Source: stackoverflow_0071093599_arrays_audio_source_audioclip_data_conversion_unity3d.txt
Q:
"Break early" if one of GitHub actions fails
Our CI setup currently looks like this on GitHub:
Usually, the first check finishes much sooner than the second, and it can succeed or fail. Is it possible (and if so, how) to "break early" and terminate the remaining actions as soon as one action fails?
A:
You can do this easily, but only within a single workflow; it won't help if you have multiple workflows.
strategy:
  fail-fast: true
  matrix:
    os: [ubuntu-latest, macos-latest, windows-latest]
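For context, a minimal sketch of a complete workflow file around that snippet (the file name, job name, and check command are illustrative, not from the question):

```yaml
# .github/workflows/ci.yml -- hypothetical example
name: ci
on: [push]
jobs:
  checks:
    strategy:
      fail-fast: true   # when one matrix job fails, cancel the others
      matrix:
        os: [ubuntu-latest, macos-latest, windows-latest]
    runs-on: ${{ matrix.os }}
    steps:
      - uses: actions/checkout@v3
      - run: ./run_checks.sh   # placeholder for the actual check
```

Note that fail-fast only cancels sibling jobs of the same matrix; it does not reach across separate workflows.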
Tags: bitrise, codacy, github, github_actions
Source: stackoverflow_0074658106_bitrise_codacy_github_github_actions.txt
Q:
Access translator in Shopware 6 Plugin
I am developing my first Shopware 6 plugin and was wondering how to access snippets in the Plugin class. I checked the Developer Guide but could not make it work.
I want to use the plugin translations as the labels of customField select options.
myfirstplugin.en-GB.json
{
  "myfirstplugin": {
    "my_custom_field_option_1": "Option 1",
    "my_custom_field_option_2": "Option 2"
  }
}
MyFirstPlugin.php
class MyFirstPlugin extends Plugin
{
    // ....

    private function createCustomFields(Context $context): void
    {
        if ($this->customFieldSetExists($context)) {
            return;
        }
        $customFieldSetRepository = $this->container->get('custom_field_set.repository');
        $customFieldSetRepository->create([
            [
                'id' => '294865e5c81b434d8349db9ea6b4e135',
                'name' => 'my_custom_field_set',
                'customFields' => [
                    [
                        'name' => 'my_custom_field',
                        'type' => CustomFieldTypes::SELECT,
                        'config' => [
                            'label' => ['en-GB' => 'My custom field'],
                            'options' => [
                                [
                                    'value' => '294865e5c81b434d8349db9ea6b4e487',
                                    // Access my_custom_field_option_1 of snippet myfirstplugin.en-GB.json
                                    'label' => 'my_custom_field_option_1',
                                ],
                                [
                                    'value' => '1ce5abe719a04346930c7e43514ed4f1',
                                    // Access my_custom_field_option_2 of snippet myfirstplugin.en-GB.json
                                    'label' => 'my_custom_field_option_2',
                                ],
                            ],
                            'customFieldType' => 'select',
                            'componentName' => 'sw-single-select',
                            'customFieldPosition' => 1,
                        ],
                    ],
                ],
            ],
        ], $context);
    }
}
A:
You can inject an argument of type Translator into your service.
in services.xml
<argument type="service" id="translator"/>
in your service
use Shopware\Core\Framework\Adapter\Translation\Translator;

/**
 * @var Translator
 */
private $translator;

public function __construct($translator)
{
    $this->translator = $translator;
}
Then, further down, you can use it in much the same way as in a Twig template:
$translated = $this->translator->trans('myfirstplugin.product.detail.294865e5c81b434d8349db9ea6b4e487');
A:
When you create the custom field you don't have to reference a translation snippet. Instead you can just provide the translations in the payload directly.
'config' => [
    'label' => ['en-GB' => 'Label english', 'de-DE' => 'Label german'],
    'type' => CustomFieldTypes::SELECT,
    'options' => [
        [
            'value' => '294865e5c81b434d8349db9ea6b4e487',
            'label' => ['en-GB' => 'Option english', 'de-DE' => 'Option german'],
        ],
        [
            'value' => '1ce5abe719a04346930c7e43514ed4f1',
            'label' => ['en-GB' => 'Option english', 'de-DE' => 'Option german'],
        ],
    ],
],
A:
You can also use the snippet repository in the Plugin class and search for the labels you want, like:
$sRepo = $this->container->get('snippet.repository');
$labels = $sRepo->search((new Criteria())
    ->addFilter(
        new MultiFilter(
            MultiFilter::CONNECTION_OR,
            [
                new ContainsFilter('translationKey', 'my_custom_field_option_1'),
                new ContainsFilter('translationKey', 'my_custom_field_option_2')
            ]
        )
    ), $context)->getElements();
Tags: plugins, shopware, shopware6
Source: stackoverflow_0074638321_plugins_shopware_shopware6.txt
Q:
Connect to a remote sqlite3 database with Python
I am able to create a connection to a local sqlite3 database (using Mac OS X 10.5 and Python 2.5.1) with this:
conn = sqlite3.connect('/db/MyDb')
How can I connect to this database if it is located on a server (for example, a server running Ubuntu 8.04 with the IP address 10.7.1.71) and is not stored locally?
e.g. this does not seem to work:
conn = sqlite3.connect('10.7.1.71./db/MyDb')
A:
SQLite is embedded-only. You'll need to mount the remote filesystem before you can access it. And don't try to have more than one machine accessing the SQLite database at a time; SQLite is not built for that. Use something like PostgreSQL instead if you need that.
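As a sketch of what that means in Python: sqlite3.connect() only accepts a filesystem path (or ':memory:'), never a host address, so a remote file must first be made visible as a local path via a mount. The mount point in the comment below is hypothetical.

```python
import os
import sqlite3
import tempfile

# sqlite3.connect() takes a filesystem path, never a host:port address.
# A database on another machine must first appear as a local path,
# e.g. by mounting the server's filesystem over NFS or sshfs:
#   conn = sqlite3.connect('/mnt/server/db/MyDb')   # hypothetical mount point
# Once the file is reachable as a path, usage is identical to local use:
path = os.path.join(tempfile.mkdtemp(), 'MyDb')
conn = sqlite3.connect(path)
conn.execute('CREATE TABLE t (x INTEGER)')
conn.execute('INSERT INTO t VALUES (42)')
row = conn.execute('SELECT x FROM t').fetchone()
conn.close()
print(row)  # → (42,)
```

As the answers note, concurrent access from several machines over such a mount is unreliable unless the remote filesystem implements locking correctly.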
A:
The sqlite FAQ has an answer relevant to your question. It points out that although multi-machine network access is theoretically possible (using a remote filesystem) it likely won't be reliable unless the filesystem properly supports locks.
If you're accessing it from only one machine and process at a time, however, it should work acceptably, as that page notes (and dependent on the remote filesystem you're using).
A:
SQLite is embedded. Either go for another database, or expose the deployed database through an API.
Tags: macos, python, sqlite
Source: stackoverflow_0002318315_macos_python_sqlite.txt
Q:
Android Studio (not installed) when running flutter doctor, while Android Studio is installed on the machine
When I run the flutter doctor command on Mac it shows the output below, even though I have already installed Android Studio and can run iOS builds from it.
[!] Android Studio (not installed)
flutter doctor output:
Doctor summary (to see all details, run flutter doctor -v):
[✓] Flutter (Channel stable, v1.12.13+hotfix.5, on Mac OS X 10.14.5 18F132, locale en-GB)
[✓] Android toolchain - develop for Android devices (Android SDK version 29.0.2)
[✓] Xcode - develop for iOS and macOS (Xcode 10.3)
[!] Android Studio (not installed)
[✓] Connected device (1 available)
A:
In Windows
If Android Studio was installed in the default location, you can use this command:
flutter config --android-studio-dir="C:\Program Files\Android\Android Studio"
After this command, flutter can find Android Studio, but it still can't find the plugins:
flutter doctor
Doctor summary (to see all details, run flutter doctor -v):
[√] Flutter (Channel stable, 1.20.2, on Microsoft Windows [Version 10.0.18363.1016], locale zh-CN)
[√] Android toolchain - develop for Android devices (Android SDK version 30.0.2)
[!] Android Studio
X Flutter plugin not installed; this adds Flutter specific functionality.
X Dart plugin not installed; this adds Dart specific functionality.
[√] VS Code (version 1.48.0)
[!] Connected device
! No devices available
! Doctor found issues in 2 categories.
In Linux (Ubuntu)
Note: for those who are facing the problem in Ubuntu and Android Studio is installed with snap:
flutter config --android-studio-dir="/snap/android-studio/current/android-studio"
Note: for those who are facing the problem in Ubuntu and Android Studio is installed with JetBrains Toolbox:
flutter config --android-studio-dir=/home/myuser/.local/share/JetBrains/Toolbox/apps/AndroidStudio/ch-0/201.7042882
Where 201.7042882 matches the currently installed version of Android Studio. You'll have to check which one you have and update it in the command above.
A:
I had a similar issue which I solved by updating my flutter config as follows:
flutter config --android-sdk="$HOME/Android/Sdk"
flutter config --android-studio-dir="/usr/local/android-studio"
This was on Ubuntu 20.04.1
Assumptions:
You have extracted the downloaded android studio zip to /usr/local/android-studio
If you install Android studio using snap then
flutter config --android-studio-dir="/snap/android-studio/current/android-studio"
A:
I solved this issue on Windows 10 Pro with exactly this line:
$> flutter config --android-studio-dir= "C:\Program Files\Android\Android Studio"
Notice that there is a space between the equals sign and the double quotes; this solved it for me.
Hope this helps new Flutter developers!
A:
If you have installed Android Studio via Snap, run following command
flutter config --android-studio-dir="/snap/android-studio/current/android-studio"
This will fix the issue
A:
flutter config --android-studio-dir C:\Program Files\Android\Android Studio will unfortunately not work, because the unquoted space breaks the path. To solve the issue, use quotes like this: flutter config --android-studio-dir "C:\Program Files\Android\Android Studio"
A:
A solution that worked for me (macOS only): after double-clicking the downloaded IDE image, move the Android Studio program into the Applications folder. I then ran flutter doctor and followed all prompts, and it worked.
A:
I think I had the same issue; below are the steps that helped me:
1. Android Studio is installed and you can run it, so when it boots up, select Configure.
2. In the dropdown list, open "Plugins".
3. Search for "flutter" and install this plugin together with Dart.
4. Restart Android Studio and open a new terminal.
You should now be able to create a Flutter project in Android Studio, and flutter doctor should work.
Another possible solution: specify the path where Android Studio is installed with the following command:
flutter config --android-studio-dir=
A:
This worked for me!
The problem is that if your folder name is "Android Studio", flutter fails to detect it, reading only "Android" and skipping "Studio".
The solution is to rename the folder from "Android Studio" to "AndroidStudio" and then run the config command:
flutter config --android-studio-dir="C:\Program Files\Android\AndroidStudio"
A:
The following worked for me.
Close Android Studio if it is open, then open cmd as an administrator and run the command below:
flutter config --android-studio-dir="C:\Program Files\Android\Android Studio"
A:
This works for me on Windows 8.1:
flutter config --android-studio-dir "C:\Program Files\Android\Android Studio"
2021.09.01
A:
For Ubuntu, run this in the directory where you unzipped Android Studio:
flutter config --android-studio-dir="pwd/android-studio"
(where pwd stands for the absolute path of that directory). This command worked for me.
A:
Go to the Android Studio home screen, select Configure, install the Flutter and Dart plugins, then restart Android Studio; it should work properly.
If you are still not able to solve the issue, run this command and check again:
C:\mycomputer>flutter config --android-studio-dir="C:\ProgramFiles\Android\Android Studio"
I solved this issue with this command.
A:
You can use the command below in a command prompt:
flutter config --android-studio-dir=""
A:
The solution with:
flutter config --android-studio-dir="Your_Android_Studio_Directory"
worked for me.
After that you may still get error messages from flutter doctor, even if you have the plugins already installed:
[!] Android Studio
X Flutter plugin not installed; this adds Flutter specific functionality.
X Dart plugin not installed; this adds Dart specific functionality.
Reinstalling the Flutter and Dart plugins in Android Studio (version 4.1) solved the problem.
A:
Even though this question is for Mac, a lot of people like me will find it as the top result. So, in case you are on Windows and installed Android Studio via the JetBrains Toolbox and the paths in the answers above don't work for you, this worked for me:
flutter config --android-studio-dir="C:\Users\YOUR_USERNAME_HERE\AppData\Local\JetBrains\Toolbox\apps\AndroidStudio\ch-0\202.7351085"
Where 202.7351085 is the folder name for my current version of Android Studio so you may need to change that too.
If you can't find Android Studio in this path, do:
Search Android Studio and open file location
If it's a shortcut, open location again
You should be in /bin
Go up one folder and that's your path
A:
The following worked for me on Windows:
flutter config --android-studio-dir="C:\Program Files\Android\Android Studio"
flutter config --android-sdk="C:\%HOMEPATH%\AppData\Local\Android\Sdk"
A:
Run
flutter config --android-studio-dir=<directory-where-studio-is-installed>
This will solve both issues of android licenses and studio not installed
A:
For Linux (Ubuntu) users:
If you installed Android Studio through snap (the Ubuntu Software application), run
flutter config --android-studio-dir="/snap/android-studio/current/android-studio"
Otherwise, run
flutter config --android-sdk="$HOME/Android/Sdk"
flutter config --android-studio-dir="/usr/local/android-studio"
Install the Dart and Flutter plugins in Android Studio, then restart it.
Switch to the Flutter beta channel by running these commands:
flutter channel beta
flutter upgrade
Run flutter doctor
A:
The solution that worked for me: I had to re-install Android Studio to a new location so that the full path to it contains no spaces. Flutter was cropping my full path at the white space when I used the command flutter config --android-studio-dir C:\Program Files\Android\Android Studio
A:
I had this problem because I downloaded a new Android Studio version without uninstalling the old one first. The installer left the old program in place at C:\Program Files\Android\Android Studio and just created a new installation at C:\Program Files\Android\Android Studio1 (two installations next to each other!).
I tried uninstalling Android Studio using the Windows control panel (so I could do a fresh install), but it said it was already uninstalled.
So I had to run flutter config --android-studio-dir="C:\Program Files\Android\Android Studio1" to point to the new version, i.e. Android Studio1. This solved the problem.
A:
In my case I just renamed AndroidStudio to Android Studio.app inside the applications folder, because Flutter was searching for the separated words "Android Studio". Worked like a charm.
A:
I faced the same problem and solved it by configuring the Android Studio path manually.
For my case the path was /opt/android-studio-4.1/android-studio
flutter config --android-studio-dir=/opt/android-studio-4.1/android-studio
A:
On Windows, just add the path C:\Program Files\Android\Android Studio to the environment variables.
A:
I use WSL on Windows 10, and I think I was configuring Flutter from WSL's command prompt. When I cleaned the Flutter repo and configured everything from PowerShell as administrator, it worked.
A:
On MacOS using Jetbrains Toolbox, I set it up as
flutter config --android-studio-dir "/Users/<myusername>/Library/Application Support/JetBrains/Toolbox/apps/AndroidStudio/ch-0/213.7172.25.2113.9123335/Android Studio.app"
[
"\nflutter config --android-studio-dir=\"file path of android studio\"\n\nfor example :\nin my case, my android studio in \"E Drive \"\n\nflutter config --android-studio-dir=\"E:\\Android\\Android Studio\"\n\n"
] |
[
-2
] |
[
"android",
"android_studio",
"flutter"
] |
stackoverflow_0059647791_android_android_studio_flutter.txt
|
Q:
Character Frequency - Sort Items Alphabetically After They've Been Sorted by Frequency
Link to Codewars challenge
I've been able to create a frequency map for the letter frequency count for a string:
function letterFrequency(text){
text = text.toLowerCase();
let map = {};
let array = text.split('');
array.forEach((value, index) => {
if (!map[value]) {
map[value] = 0;
}
map[value] += 1;
});
return Object.entries(map).filter((el) => el[0] !== ' ')
//sorts array according to count
.sort((a, b) => b[1] - a[1]);
//how to sort alphabetically when letters have same count?
}
console.log(letterFrequency('aaAabb dddDD hhcc'));
But I've not been able to figure out how to sort alphabetically when frequency counts are the same.
I.e.
[['d',5], ['a',4], ['b',2], ['h',2], ['c',2]]
Should instead be:
[['d',5], ['a',4], ['b',2], ['c',2], ['h',2]]
How can I keep frequency count as the main sorting priority, but then also sort alphabetically after sorting by frequency?
Also trying this unfortunately did not have any effect:
function letterFrequency(text){
text = text.toLowerCase();
let map = {};
let array = text.split('');
array.forEach((value, index) => {
if (!map[value]) {
map[value] = 0;
}
map[value] += 1;
});
return Object.entries(map).filter((el) => el[0] !== ' ')
.sort((a, b) => b[1] - a[1])
.sort((a, b) => a[0] - b[0]);
}
A:
You can "or" (||) the count difference with the result of localeCompare so that if the count difference is 0, localeCompare takes precedence:
function letterFrequency(text) {
text = text.toLowerCase();
let map = {};
let array = text.split('');
array.forEach((value, index) => {
if (!map[value]) {
map[value] = 0;
}
map[value] += 1;
});
return Object.entries(map).filter((el) => el[0] !== ' ')
.sort((a, b) => b[1] - a[1] || a[0].localeCompare(b[0]));
}
console.log(letterFrequency('dadbahddbccdh'));
A:
You can do two sorts: sort alphabetically first, then by count. That way the alphabetical order is preserved among equal counts when you do the numerical sort afterwards.
var x = [['d',5], ['a',4], ['b',2], ['h',2], ['c',2]];
x.sort(function(a, b){
if(a[0] < b[0]) { return -1; }
if(a[0] > b[0]) { return 1; }
return 0;
}).sort(function(a, b){
return b[1] - a[1]})
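Note that this two-pass approach relies on `Array.prototype.sort` being stable, which the spec only guarantees since ES2019 (older engines may not preserve order). A quick sketch of the idea, using the sample data from the question:

```javascript
// Sort alphabetically first, then by count descending.
// Because the second sort is stable, entries with equal counts
// keep the alphabetical order established by the first sort.
const entries = [['d', 5], ['a', 4], ['b', 2], ['h', 2], ['c', 2]];

const sorted = entries
  .slice() // copy so the original array is left untouched
  .sort((a, b) => a[0].localeCompare(b[0])) // alphabetical
  .sort((a, b) => b[1] - a[1]);             // count, descending

console.log(sorted);
// [['d',5], ['a',4], ['b',2], ['c',2], ['h',2]]
```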
A:
Simple steps with JavaScript, based on object creation, array sorting, and string building. The same approach can be implemented in any language.
var frequencySort = function(s) {
let obj={},string='';
for(let i=0;i<s.length;i++) {
if(obj[s[i]]){
obj[s[i]] += 1;
} else {
obj[s[i]] = 1
}
}
let array = Object.entries(obj);
for(let i=0;i<array.length;i++){
for(let j=i+1;j<array.length;j++){
if(array[j] && array[j][1] > array[i][1]){
let temp = array[i];
array[i] = array[j];
array[j] = temp;
}
}
}
for(let i=0;i<array.length;i++){
for(let j=0;j<array[i][1];j++){
string += array[i][0]
}
}
return string
};
|
Character Frequency - Sort Items Alphabetically After They've Been Sorted by Frequency
|
Link to Codewars challenge
I've been able to create a frequency map for the letter frequency count for a string:
function letterFrequency(text){
text = text.toLowerCase();
let map = {};
let array = text.split('');
array.forEach((value, index) => {
if (!map[value]) {
map[value] = 0;
}
map[value] += 1;
});
return Object.entries(map).filter((el) => el[0] !== ' ')
//sorts array according to count
.sort((a, b) => b[1] - a[1]);
//how to sort alphabetically when letters have same count?
}
console.log(letterFrequency('aaAabb dddDD hhcc'));
But I've not been able to figure out how to sort alphabetically when frequency counts are the same.
I.e.
[['d',5], ['a',4], ['b',2], ['h',2], ['c',2]]
Should instead be:
[['d',5], ['a',4], ['b',2], ['c',2], ['h',2]]
How can I keep frequency count as the main sorting priority, but then also sort alphabetically after sorting by frequency?
Also trying this unfortunately did not have any effect:
function letterFrequency(text){
text = text.toLowerCase();
let map = {};
let array = text.split('');
array.forEach((value, index) => {
if (!map[value]) {
map[value] = 0;
}
map[value] += 1;
});
return Object.entries(map).filter((el) => el[0] !== ' ')
.sort((a, b) => b[1] - a[1])
.sort((a, b) => a[0] - b[0]);
}
|
[
"You can \"or\" (||) the count difference with the result of localeCompare so that if the count difference is 0, localeCompare takes precedence:\n\n\nfunction letterFrequency(text) {\r\n text = text.toLowerCase();\r\n let map = {};\r\n let array = text.split('');\r\n array.forEach((value, index) => {\r\n if (!map[value]) {\r\n map[value] = 0;\r\n }\r\n map[value] += 1;\r\n });\r\n return Object.entries(map).filter((el) => el[0] !== ' ')\r\n .sort((a, b) => b[1] - a[1] || a[0].localeCompare(b[0]));\r\n}\r\n\r\nconsole.log(letterFrequency('dadbahddbccdh'));\n\n\n\n",
"you can do two sorts. sort by alphabetical, then by number, that way the alphabetical order is preserved when you do your numerical sort after\nvar x = [['d',5], ['a',4], ['b',2], ['h',2], ['c',2]];\nx.sort(function(a, b){\n if(a[0] < b[0]) { return -1; }\n if(a[0] > b[0]) { return 1; }\n return 0;\n}).sort(function(a, b){\n return b[1] - a[1]})\n\n",
"Simple and easy steps with JavaScript based on Object Creation, Array-Sorting, and String. The same solution can be implemented in any language.\nvar frequencySort = function(s) {\n\nlet obj={},string='';\nfor(let i=0;i<s.length;i++) {\n if(obj[s[i]]){\n obj[s[i]] += 1;\n } else {\n obj[s[i]] = 1\n }\n}\nlet array = Object.entries(obj);\nfor(let i=0;i<array.length;i++){\n for(let j=i+1;j<array.length;j++){\n if(array[j] && array[j][1] > array[i][1]){\n let temp = array[i];\n array[i] = array[j];\n array[j] = temp;\n }\n }\n}\nfor(let i=0;i<array.length;i++){\n for(let j=0;j<array[i][1];j++){\n string += array[i][0]\n } \n}\nreturn string\n};\n\n"
] |
[
2,
1,
1
] |
[] |
[] |
[
"frequency",
"javascript",
"sorting"
] |
stackoverflow_0058291904_frequency_javascript_sorting.txt
|
Q:
CSS grid uses unnecessary space
I'm trying to do a timeline design using CSS grid, with elements interleaved on both sides. But at least rows 1, 2 and the last have just blank, unused space.
The columns are declared, but the rows aren't. So I tried using grid-auto-rows: min-content, but it didn't change anything; in fact, whatever value I put doesn't change anything. I tried putting hardcoded px values (which is not an option) for testing, and with those I can keep the integrity of the design without the dead space.
Tested on Firefox and Brave
* {
margin: 0;
font-family: "Ubuntu", sans-serif;
box-sizing: border-box;
}
:root {
font-size: 1px;
}
body {
font-size: 16rem;
}
/*******/
.wrapper {
--border-width: 0.5em;
--gap: 8rem;
display: grid;
grid-template-columns: 1fr auto 1fr;
gap: var(--gap);
padding: 0.5em;
align-items: start;
}
h2 {
grid-column: 1/2;
grid-row: 1/2;
font-size: 2em;
}
section {
position: relative;
text-align: justify;
border-top: var(--border-width) solid var(--accent);
border-bottom: var(--border-width) solid transparent;
padding: 0 0.5em;
max-width: 60ch;
}
section::after {
content: "";
display: block;
height: var(--border-width);
width: calc(var(--border-width) + var(--gap));
background-color: var(--accent);
position: absolute;
top: calc(-1 * var(--border-width));
z-index: -1;
}
section:nth-of-type(odd) {
grid-column: 3/4;
border-right: var(--border-width) solid var(--accent);
border-top-right-radius: 1em;
}
section:nth-of-type(odd)::after {
left: 0;
translate: -100%;
}
section:nth-of-type(even) {
grid-column: 1/2;
justify-self: end;
border-left: var(--border-width) solid var(--accent);
border-top-left-radius: 1em;
}
section:nth-of-type(even)::after {
right: 0;
translate: 100%;
}
.date {
grid-column: 2/3;
display: flex;
flex-flow: column nowrap;
align-items: center;
font-size: 12rem;
padding: 0.6em 0.3em;
line-height: 0.7em;
background-color: var(--accent);
color: white;
}
.date:nth-of-type(odd) {
border-radius: 1em 0 1em 0;
}
.date:nth-of-type(even) {
border-radius: 0 1em 0 1em;
}
.date>* {
flex-basis: 100%;
}
.green {
--accent: hsl(171, 67%, 28%);
grid-row: 1/3;
}
.orange {
--accent: hsl(22, 99%, 50%);
grid-row: 2/4;
}
.orange.date {
grid-row: 2/3;
}
.yellow {
--accent: hsl(46, 100%, 47%);
grid-row: 3/5;
}
.yellow.date {
grid-row: 3/4;
}
.pink {
--accent: hsl(343, 78%, 62%);
grid-row: 4/6;
}
.pink.date {
grid-row: 4/5;
}
.blue {
--accent: hsl(192deg 80% 48%);
grid-row: 5/7;
}
.blue.date {
grid-row: 5/6;
}
<div class="wrapper">
<h2>Lorem Ipsum</h2>
<section class="green">
Lorem ipsum dolor sit amet consectetur adipisicing elit. Minus quidem qui, aliquid asperiores commodi officiis inventore laboriosam dignissimos dolor officia id itaque tempora provident exercitationem accusamus expedita ullam dolorum fuga. Officiis temporibus
porro nesciunt libero, eum aliquid doloremque minima nisi sint minus id mollitia ea quisquam consequuntur laudantium autem. Aperiam.
</section>
<p class="green date"><span>03/2022</span>-<span>04/2023</span></p>
<section class="orange">
Lorem ipsum dolor sit amet consectetur adipisicing elit. Eum, quisquam atque. Dolores beatae, nisi, laborum perspiciatis architecto non dolorem quae, doloribus aliquam quaerat rem? Esse hic illum sint mollitia quibusdam repellendus totam dolorum voluptatem
ipsa, nobis sapiente. Quasi quo porro aperiam cumque nobis debitis praesentium dolorem omnis repellat saepe. Incidunt laudantium at similique nobis perferendis et illo dolor aliquid nisi voluptatum eaque ab accusamus maxime possimus, ut ratione soluta
nam, natus quibusdam illum! Qui modi cum libero odit blanditiis distinctio eveniet illo facilis alias, aut neque perspiciatis et ipsam, hic natus? Explicabo consequuntur voluptatibus a ipsam voluptatem, deleniti at doloribus!
</section>
<p class="orange date"><span>12/2020</span>-<span>02/2022</span></p>
<section class="yellow">
Lorem ipsum dolor sit amet consectetur adipisicing elit. Eius, aliquam blanditiis! Magnam dolorem nostrum molestias modi, ratione id quaerat adipisci dolore impedit quas voluptate recusandae nisi deleniti sed, doloremque ullam ducimus. Voluptatem aut
praesentium magni iusto blanditiis? Doloremque, maxime necessitatibus eaque obcaecati voluptate cumque veritatis exercitationem, dolor ex beatae blanditiis.
</section>
<p class="yellow date"><span>11/2018</span>-<span>10/2020</span></p>
<section class="pink">
Lorem ipsum dolor sit, amet consectetur adipisicing elit. Perferendis blanditiis repellat nulla iste illo quos, culpa sint nihil doloribus quae molestiae eaque perspiciatis reiciendis exercitationem eum minima molestias voluptatum consequatur quisquam
asperiores obcaecati? Quas animi quis itaque molestias praesentium maiores minima. Consequuntur hic explicabo eos expedita quidem, dolorum maiores perferendis, illum quod, placeat magni! Exercitationem architecto iusto deserunt magni possimus.
</section>
<p class="pink date"><span>01/2018</span>-<span>11/2018</span></p>
<section class="blue">
Lorem ipsum dolor sit amet consectetur adipisicing elit. Consequatur minima illum, accusamus recusandae eveniet blanditiis repellendus quaerat ullam inventore eaque? Doloremque delectus quibusdam rem hic! Modi ducimus iusto perspiciatis incidunt quidem
cum, optio, soluta id voluptatum placeat nobis quasi maxime dolorem magni pariatur cumque illum odio dolor. Dolor libero sint ea iste, autem rerum cupiditate enim aliquam? Cumque voluptatum at dolore. Veritatis, assumenda autem. Culpa facilis dolorum
molestias voluptatum, natus, fugit fuga amet veritatis, dicta similique suscipit temporibus porro tempora?
</section>
<p class="blue date"><span>07/2014</span>-<span>07/2017</span></p>
</div>
A:
Hello
If you inspect the layout with your browser's dev tools grid overlay, you can see why you have blank space: the title occupies a grid track of its own.
Try taking the title out of the grid. Alternatively, this could be solved with a nested grid.
A:
Try it like below:
:root {
font-size: 1px;
}
body {
font-size: 16rem;
}
/*******/
.wrapper {
--border-width: 0.5em;
--gap: 8rem;
display: grid;
grid-template-columns: 1fr auto 1fr;
gap: var(--gap);
padding: 0.5em;
align-items: start;
}
h2 {
grid-column: 1/2;
grid-row: 1/2;
font-size: 2em;
}
section {
position: relative;
text-align: justify;
border-top: var(--border-width) solid var(--accent);
border-bottom: var(--border-width) solid transparent;
padding: 0 0.5em;
max-width: 60ch;
}
section::after {
content: "";
display: block;
height: var(--border-width);
width: calc(var(--border-width) + var(--gap));
background-color: var(--accent);
position: absolute;
top: calc(-1 * var(--border-width));
z-index: -1;
}
section:nth-of-type(odd) {
grid-column: 3/4;
border-right: var(--border-width) solid var(--accent);
border-top-right-radius: 1em;
}
section:nth-of-type(odd)::after {
left: 0;
translate: -100%;
}
section:nth-of-type(even) {
grid-column: 1/2;
justify-self: end;
border-left: var(--border-width) solid var(--accent);
border-top-left-radius: 1em;
}
section:nth-of-type(even)::after {
right: 0;
translate: 100%;
}
.date {
grid-column: 2/3;
display: flex;
flex-flow: column nowrap;
align-items: center;
font-size: 12rem;
padding: 0.6em 0.3em;
line-height: 0.7em;
background-color: var(--accent);
color: white;
margin:0px;
}
.date:nth-of-type(odd) {
border-radius: 1em 0 1em 0;
}
.date:nth-of-type(even) {
border-radius: 0 1em 0 1em;
}
.date>* {
flex-basis: 100%;
}
.green {
--accent: hsl(171, 67%, 28%);
grid-row: 1/3;
}
.orange {
--accent: hsl(22, 99%, 50%);
grid-row: 2/4;
}
.orange.date {
grid-row: 2/3;
}
.yellow {
--accent: hsl(46, 100%, 47%);
grid-row: 3/5;
}
.yellow.date {
grid-row: 3/4;
}
.pink {
--accent: hsl(343, 78%, 62%);
grid-row: 4/6;
}
.pink.date {
grid-row: 4/5;
}
.blue {
--accent: hsl(192deg 80% 48%);
grid-row: 5/7;
}
.blue.date {
grid-row: 5/6;
}
<div class="wrapper">
<h2>date wise</h2>
<section class="green">
Lorem ipsum dolor sit amet consectetur adipisicing elit. Minus quidem qui, aliquid asperiores commodi officiis inventore laboriosam dignissimos dolor officia id itaque tempora provident exercitationem accusamus expedita ullam dolorum fuga. Officiis temporibus
porro nesciunt libero, eum aliquid doloremque minima nisi sint minus id mollitia ea quisquam consequuntur laudantium autem. Aperiam.
</section>
<p class="green date"><span>03/2022</span>-<span>04/2023</span></p>
<section class="orange">
Lorem ipsum dolor sit amet consectetur adipisicing elit. Eum, quisquam atque. Dolores beatae, nisi, laborum perspiciatis architecto non dolorem quae, doloribus aliquam quaerat rem? Esse hic illum sint mollitia quibusdam repellendus totam dolorum voluptatem
ipsa, nobis sapiente. Quasi quo porro aperiam cumque nobis debitis praesentium dolorem omnis repellat saepe. Incidunt laudantium at similique nobis perferendis et illo dolor aliquid nisi voluptatum eaque ab accusamus maxime possimus, ut ratione soluta
nam, natus quibusdam illum! Qui modi cum libero odit blanditiis distinctio eveniet illo facilis alias, aut neque perspiciatis et ipsam, hic natus? Explicabo consequuntur voluptatibus a ipsam voluptatem, deleniti at doloribus!
</section>
<p class="orange date"><span>12/2020</span>-<span>02/2022</span></p>
<section class="yellow">
Lorem ipsum dolor sit amet consectetur adipisicing elit. Eius, aliquam blanditiis! Magnam dolorem nostrum molestias modi, ratione id quaerat adipisci dolore impedit quas voluptate recusandae nisi deleniti sed, doloremque ullam ducimus. Voluptatem aut
praesentium magni iusto blanditiis? Doloremque, maxime necessitatibus eaque obcaecati voluptate cumque veritatis exercitationem, dolor ex beatae blanditiis.
</section>
<p class="yellow date"><span>11/2018</span>-<span>10/2020</span></p>
<section class="pink">
Lorem ipsum dolor sit, amet consectetur adipisicing elit. Perferendis blanditiis repellat nulla iste illo quos, culpa sint nihil doloribus quae molestiae eaque perspiciatis reiciendis exercitationem eum minima molestias voluptatum consequatur quisquam
asperiores obcaecati? Quas animi quis itaque molestias praesentium maiores minima. Consequuntur hic explicabo eos expedita quidem, dolorum maiores perferendis, illum quod, placeat magni! Exercitationem architecto iusto deserunt magni possimus.
</section>
<p class="pink date"><span>01/2018</span>-<span>11/2018</span></p>
<section class="blue">
Lorem ipsum dolor sit amet consectetur adipisicing elit. Consequatur minima illum, accusamus recusandae eveniet blanditiis repellendus quaerat ullam inventore eaque? Doloremque delectus quibusdam rem hic! Modi ducimus iusto perspiciatis incidunt quidem
cum, optio, soluta id voluptatum placeat nobis quasi maxime dolorem magni pariatur cumque illum odio dolor. Dolor libero sint ea iste, autem rerum cupiditate enim aliquam? Cumque voluptatum at dolore. Veritatis, assumenda autem. Culpa facilis dolorum
molestias voluptatum, natus, fugit fuga amet veritatis, dicta similique suscipit temporibus porro tempora?
</section>
<p class="blue date"><span>07/2014</span>-<span>07/2017</span></p>
</div>
Please comment if you have any questions.
A:
To remove the unused space in your CSS grid, you can use the grid-template-rows property to explicitly specify the number of rows in your grid and set their heights. This will override the default behavior of grid-auto-rows and remove the unused space.
For example, you can add the following to your CSS:
.wrapper {
grid-template-rows: auto auto auto auto;
}
This will create four rows with heights determined by their content (using the auto value). You can adjust the values as needed to fit your design.
|
CSS grid uses unnecessary space
|
I'm trying to do a timeline design using CSS grid, with elements interleaved on both sides. But at least rows 1, 2 and the last have just blank, unused space.
The columns are declared, but the rows aren't. So I tried using grid-auto-rows: min-content, but it didn't change anything; in fact, whatever value I put doesn't change anything. I tried putting hardcoded px values (which is not an option) for testing, and with those I can keep the integrity of the design without the dead space.
Tested on Firefox and Brave
* {
margin: 0;
font-family: "Ubuntu", sans-serif;
box-sizing: border-box;
}
:root {
font-size: 1px;
}
body {
font-size: 16rem;
}
/*******/
.wrapper {
--border-width: 0.5em;
--gap: 8rem;
display: grid;
grid-template-columns: 1fr auto 1fr;
gap: var(--gap);
padding: 0.5em;
align-items: start;
}
h2 {
grid-column: 1/2;
grid-row: 1/2;
font-size: 2em;
}
section {
position: relative;
text-align: justify;
border-top: var(--border-width) solid var(--accent);
border-bottom: var(--border-width) solid transparent;
padding: 0 0.5em;
max-width: 60ch;
}
section::after {
content: "";
display: block;
height: var(--border-width);
width: calc(var(--border-width) + var(--gap));
background-color: var(--accent);
position: absolute;
top: calc(-1 * var(--border-width));
z-index: -1;
}
section:nth-of-type(odd) {
grid-column: 3/4;
border-right: var(--border-width) solid var(--accent);
border-top-right-radius: 1em;
}
section:nth-of-type(odd)::after {
left: 0;
translate: -100%;
}
section:nth-of-type(even) {
grid-column: 1/2;
justify-self: end;
border-left: var(--border-width) solid var(--accent);
border-top-left-radius: 1em;
}
section:nth-of-type(even)::after {
right: 0;
translate: 100%;
}
.date {
grid-column: 2/3;
display: flex;
flex-flow: column nowrap;
align-items: center;
font-size: 12rem;
padding: 0.6em 0.3em;
line-height: 0.7em;
background-color: var(--accent);
color: white;
}
.date:nth-of-type(odd) {
border-radius: 1em 0 1em 0;
}
.date:nth-of-type(even) {
border-radius: 0 1em 0 1em;
}
.date>* {
flex-basis: 100%;
}
.green {
--accent: hsl(171, 67%, 28%);
grid-row: 1/3;
}
.orange {
--accent: hsl(22, 99%, 50%);
grid-row: 2/4;
}
.orange.date {
grid-row: 2/3;
}
.yellow {
--accent: hsl(46, 100%, 47%);
grid-row: 3/5;
}
.yellow.date {
grid-row: 3/4;
}
.pink {
--accent: hsl(343, 78%, 62%);
grid-row: 4/6;
}
.pink.date {
grid-row: 4/5;
}
.blue {
--accent: hsl(192deg 80% 48%);
grid-row: 5/7;
}
.blue.date {
grid-row: 5/6;
}
<div class="wrapper">
<h2>Lorem Ipsum</h2>
<section class="green">
Lorem ipsum dolor sit amet consectetur adipisicing elit. Minus quidem qui, aliquid asperiores commodi officiis inventore laboriosam dignissimos dolor officia id itaque tempora provident exercitationem accusamus expedita ullam dolorum fuga. Officiis temporibus
porro nesciunt libero, eum aliquid doloremque minima nisi sint minus id mollitia ea quisquam consequuntur laudantium autem. Aperiam.
</section>
<p class="green date"><span>03/2022</span>-<span>04/2023</span></p>
<section class="orange">
Lorem ipsum dolor sit amet consectetur adipisicing elit. Eum, quisquam atque. Dolores beatae, nisi, laborum perspiciatis architecto non dolorem quae, doloribus aliquam quaerat rem? Esse hic illum sint mollitia quibusdam repellendus totam dolorum voluptatem
ipsa, nobis sapiente. Quasi quo porro aperiam cumque nobis debitis praesentium dolorem omnis repellat saepe. Incidunt laudantium at similique nobis perferendis et illo dolor aliquid nisi voluptatum eaque ab accusamus maxime possimus, ut ratione soluta
nam, natus quibusdam illum! Qui modi cum libero odit blanditiis distinctio eveniet illo facilis alias, aut neque perspiciatis et ipsam, hic natus? Explicabo consequuntur voluptatibus a ipsam voluptatem, deleniti at doloribus!
</section>
<p class="orange date"><span>12/2020</span>-<span>02/2022</span></p>
<section class="yellow">
Lorem ipsum dolor sit amet consectetur adipisicing elit. Eius, aliquam blanditiis! Magnam dolorem nostrum molestias modi, ratione id quaerat adipisci dolore impedit quas voluptate recusandae nisi deleniti sed, doloremque ullam ducimus. Voluptatem aut
praesentium magni iusto blanditiis? Doloremque, maxime necessitatibus eaque obcaecati voluptate cumque veritatis exercitationem, dolor ex beatae blanditiis.
</section>
<p class="yellow date"><span>11/2018</span>-<span>10/2020</span></p>
<section class="pink">
Lorem ipsum dolor sit, amet consectetur adipisicing elit. Perferendis blanditiis repellat nulla iste illo quos, culpa sint nihil doloribus quae molestiae eaque perspiciatis reiciendis exercitationem eum minima molestias voluptatum consequatur quisquam
asperiores obcaecati? Quas animi quis itaque molestias praesentium maiores minima. Consequuntur hic explicabo eos expedita quidem, dolorum maiores perferendis, illum quod, placeat magni! Exercitationem architecto iusto deserunt magni possimus.
</section>
<p class="pink date"><span>01/2018</span>-<span>11/2018</span></p>
<section class="blue">
Lorem ipsum dolor sit amet consectetur adipisicing elit. Consequatur minima illum, accusamus recusandae eveniet blanditiis repellendus quaerat ullam inventore eaque? Doloremque delectus quibusdam rem hic! Modi ducimus iusto perspiciatis incidunt quidem
cum, optio, soluta id voluptatum placeat nobis quasi maxime dolorem magni pariatur cumque illum odio dolor. Dolor libero sint ea iste, autem rerum cupiditate enim aliquam? Cumque voluptatum at dolore. Veritatis, assumenda autem. Culpa facilis dolorum
molestias voluptatum, natus, fugit fuga amet veritatis, dicta similique suscipit temporibus porro tempora?
</section>
<p class="blue date"><span>07/2014</span>-<span>07/2017</span></p>
</div>
|
[
"\nHello\nIf you look at the screenshot of your code view with dev tools (we see the grid), it's normal you have blank space!\nTry to take out the title out of the grid. Or that could also be solved with nested grid\n",
"try like below,\n\n\n:root {\n font-size: 1px;\n}\n\nbody {\n font-size: 16rem;\n}\n\n\n/*******/\n\n.wrapper {\n --border-width: 0.5em;\n --gap: 8rem;\n display: grid;\n grid-template-columns: 1fr auto 1fr;\n gap: var(--gap);\n padding: 0.5em;\n align-items: start;\n}\n\nh2 {\n grid-column: 1/2;\n grid-row: 1/2;\n font-size: 2em;\n}\n\nsection {\n position: relative;\n text-align: justify;\n border-top: var(--border-width) solid var(--accent);\n border-bottom: var(--border-width) solid transparent;\n padding: 0 0.5em;\n max-width: 60ch;\n}\n\nsection::after {\n content: \"\";\n display: block;\n height: var(--border-width);\n width: calc(var(--border-width) + var(--gap));\n background-color: var(--accent);\n position: absolute;\n top: calc(-1 * var(--border-width));\n z-index: -1;\n}\n\nsection:nth-of-type(odd) {\n grid-column: 3/4;\n border-right: var(--border-width) solid var(--accent);\n border-top-right-radius: 1em;\n}\n\nsection:nth-of-type(odd)::after {\n left: 0;\n translate: -100%;\n}\n\nsection:nth-of-type(even) {\n grid-column: 1/2;\n justify-self: end;\n border-left: var(--border-width) solid var(--accent);\n border-top-left-radius: 1em;\n}\n\nsection:nth-of-type(even)::after {\n right: 0;\n translate: 100%;\n}\n\n.date {\n grid-column: 2/3;\n display: flex;\n flex-flow: column nowrap;\n align-items: center;\n font-size: 12rem;\n padding: 0.6em 0.3em;\n line-height: 0.7em;\n background-color: var(--accent);\n color: white;\n margin:0px;\n}\n\n.date:nth-of-type(odd) {\n border-radius: 1em 0 1em 0;\n}\n\n.date:nth-of-type(even) {\n border-radius: 0 1em 0 1em;\n}\n\n.date>* {\n flex-basis: 100%;\n}\n\n.green {\n --accent: hsl(171, 67%, 28%);\n grid-row: 1/3;\n}\n\n.orange {\n --accent: hsl(22, 99%, 50%);\n grid-row: 2/4;\n}\n\n.orange.date {\n grid-row: 2/3;\n}\n\n.yellow {\n --accent: hsl(46, 100%, 47%);\n grid-row: 3/5;\n}\n\n.yellow.date {\n grid-row: 3/4;\n}\n\n.pink {\n --accent: hsl(343, 78%, 62%);\n grid-row: 4/6;\n}\n\n.pink.date {\n grid-row: 
4/5;\n}\n\n.blue {\n --accent: hsl(192deg 80% 48%);\n grid-row: 5/7;\n}\n\n.blue.date {\n grid-row: 5/6;\n}\n<div class=\"wrapper\">\n <h2>date wise</h2>\n\n <section class=\"green\">\n Lorem ipsum dolor sit amet consectetur adipisicing elit. Minus quidem qui, aliquid asperiores commodi officiis inventore laboriosam dignissimos dolor officia id itaque tempora provident exercitationem accusamus expedita ullam dolorum fuga. Officiis temporibus\n porro nesciunt libero, eum aliquid doloremque minima nisi sint minus id mollitia ea quisquam consequuntur laudantium autem. Aperiam.\n </section>\n <p class=\"green date\"><span>03/2022</span>-<span>04/2023</span></p>\n\n <section class=\"orange\">\n Lorem ipsum dolor sit amet consectetur adipisicing elit. Eum, quisquam atque. Dolores beatae, nisi, laborum perspiciatis architecto non dolorem quae, doloribus aliquam quaerat rem? Esse hic illum sint mollitia quibusdam repellendus totam dolorum voluptatem\n ipsa, nobis sapiente. Quasi quo porro aperiam cumque nobis debitis praesentium dolorem omnis repellat saepe. Incidunt laudantium at similique nobis perferendis et illo dolor aliquid nisi voluptatum eaque ab accusamus maxime possimus, ut ratione soluta\n nam, natus quibusdam illum! Qui modi cum libero odit blanditiis distinctio eveniet illo facilis alias, aut neque perspiciatis et ipsam, hic natus? Explicabo consequuntur voluptatibus a ipsam voluptatem, deleniti at doloribus!\n </section>\n <p class=\"orange date\"><span>12/2020</span>-<span>02/2022</span></p>\n\n <section class=\"yellow\">\n Lorem ipsum dolor sit amet consectetur adipisicing elit. Eius, aliquam blanditiis! Magnam dolorem nostrum molestias modi, ratione id quaerat adipisci dolore impedit quas voluptate recusandae nisi deleniti sed, doloremque ullam ducimus. Voluptatem aut\n praesentium magni iusto blanditiis? 
Doloremque, maxime necessitatibus eaque obcaecati voluptate cumque veritatis exercitationem, dolor ex beatae blanditiis.\n </section>\n <p class=\"yellow date\"><span>11/2018</span>-<span>10/2020</span></p>\n\n <section class=\"pink\">\n Lorem ipsum dolor sit, amet consectetur adipisicing elit. Perferendis blanditiis repellat nulla iste illo quos, culpa sint nihil doloribus quae molestiae eaque perspiciatis reiciendis exercitationem eum minima molestias voluptatum consequatur quisquam\n asperiores obcaecati? Quas animi quis itaque molestias praesentium maiores minima. Consequuntur hic explicabo eos expedita quidem, dolorum maiores perferendis, illum quod, placeat magni! Exercitationem architecto iusto deserunt magni possimus.\n </section>\n <p class=\"pink date\"><span>01/2018</span>-<span>11/2018</span></p>\n\n <section class=\"blue\">\n Lorem ipsum dolor sit amet consectetur adipisicing elit. Consequatur minima illum, accusamus recusandae eveniet blanditiis repellendus quaerat ullam inventore eaque? Doloremque delectus quibusdam rem hic! Modi ducimus iusto perspiciatis incidunt quidem\n cum, optio, soluta id voluptatum placeat nobis quasi maxime dolorem magni pariatur cumque illum odio dolor. Dolor libero sint ea iste, autem rerum cupiditate enim aliquam? Cumque voluptatum at dolore. Veritatis, assumenda autem. Culpa facilis dolorum\n molestias voluptatum, natus, fugit fuga amet veritatis, dicta similique suscipit temporibus porro tempora?\n </section>\n <p class=\"blue date\"><span>07/2014</span>-<span>07/2017</span></p>\n </div>\n\n\n\nplease comment if any query\n",
"To remove the unused space in your CSS grid, you can use the grid-template-rows property to explicitly specify the number of rows in your grid and set their heights. This will override the default behavior of grid-auto-rows and remove the unused space.\nFor example, you can add the following to your CSS:\n.wrapper {\n grid-template-rows: auto auto auto auto;\n}\n\n\nThis will create four rows with heights determined by their content (using the auto value). You can adjust the values as needed to fit your design.\n"
] |
[
0,
0,
0
] |
[] |
[] |
[
"css",
"css_grid"
] |
stackoverflow_0074664427_css_css_grid.txt
|
Q:
The term "Concurrent render" in React confused me. Does it mean "asynchronous rendering"?
When I saw the term "concurrent render" or "concurrent features" in React 18, it confused me, because I know the browser handles tasks on a single main thread. How does React render concurrently on a single thread?
Does React use the event loop and task queue internally?
A:
Yes, React uses the event loop and task queue internally to handle concurrent rendering. In React, concurrent rendering means that rendering work is interruptible: React can start rendering an update, pause it, and resume or abandon it later, rather than always rendering updates one at a time to completion. This allows React to split rendering work into small units and interleave them with more urgent work on the main thread, improving the responsiveness of the application.
However, it's important to note that concurrent rendering in React is not the same as multithreading, as JavaScript is a single-threaded language and the browser only has one main thread to execute JavaScript code. Instead, React uses techniques such as time slicing, suspense, and the virtual DOM to enable concurrent rendering and improve the performance of the application.
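The "time slicing" idea can be sketched with plain JavaScript on the event loop. This is an illustrative toy, not React's actual internals; the names here (unitsOfWork, workLoop, the 50 ms budget) are made up for the example:

```javascript
// Cooperative "time slicing" on a single thread: do work in small
// units until a time budget runs out, then yield back to the event
// loop (via setTimeout) so urgent tasks like user input can run first.
const unitsOfWork = [1, 2, 3, 4, 5];
const results = [];

function performUnitOfWork(unit) {
  results.push(unit * 2); // stand-in for rendering one component
}

function workLoop(budgetMs) {
  const start = Date.now();
  while (unitsOfWork.length > 0 && Date.now() - start < budgetMs) {
    performUnitOfWork(unitsOfWork.shift());
  }
  if (unitsOfWork.length > 0) {
    // Not done yet: schedule a new macrotask and continue later,
    // instead of blocking the main thread until everything finishes.
    setTimeout(() => workLoop(budgetMs), 0);
  }
}

workLoop(50); // with this tiny workload, all units finish in one slice
```

React 18 exposes this kind of behavior through APIs like startTransition, which marks updates as non-urgent so they can be interrupted by more urgent ones.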
A:
TLDR: Yes.
Let me try to explain it in a different way. Imagine that you are playing with a bunch of toys, and each toy is a part of the React program. With concurrent mode, React can play with many toys at the same time, without getting confused or mixing them up. This makes the program more efficient and helps it to work better. For example, if you were playing with a doll and a puzzle at the same time, concurrent mode would help React keep track of both toys and make sure they are working together properly.
So basically, concurrent mode is like a traffic controller at a busy airport. The traffic controller helps planes take off and land safely, without crashing into each other. In the same way, concurrent mode helps React manage different parts of the program and make sure they are working together without conflicts.
I hope this would make sense.
|
The term "Concurrent render" in React confused me. Does it mean "asynchronous rendering"?
|
When I saw the term "concurrent render" or "concurrent features" in React 18, it confused me, because I know the browser handles tasks on a single main thread. How does React render concurrently on a single thread?
Does React use the event loop and task queue internally?
|
[
"Yes, React uses the event loop and task queue internally to handle concurrent rendering. In React, concurrent rendering means rendering multiple components and updates simultaneously, rather than rendering and updating them one at a time in a sequential manner. This allows React to split rendering work into multiple independent tasks and use multiple CPU cores to perform them in parallel, improving the overall performance of the application.\nHowever, it's important to note that concurrent rendering in React is not the same as multithreading, as JavaScript is a single-threaded language and the browser only has one main thread to execute JavaScript code. Instead, React uses techniques such as time slicing, suspense, and the virtual DOM to enable concurrent rendering and improve the performance of the application.\n",
"TLDR: Yes.\nLet me try to explain it in a different way. Imagine that you are playing with a bunch of toys, and each toy is a part of the React program. With concurrent mode, React can play with many toys at the same time, without getting confused or mixing them up. This makes the program more efficient and helps it to work better. For example, if you were playing with a doll and a puzzle at the same time, concurrent mode would help React keep track of both toys and make sure they are working together properly.\nSo basically, concurrent mode is like a traffic controller at a busy airport. The traffic controller helps planes take off and land safely, without crashing into each other. In the same way, concurrent mode helps React manage different parts of the program and make sure they are working together without conflicts.\nI hope this would make sense.\n"
] |
[
0,
0
] |
[] |
[] |
[
"reactjs"
] |
stackoverflow_0074665845_reactjs.txt
|
Q:
How to refresh RecycleView on Fragment after dismiss() DialogFragment?
Could you tell me the way to refresh the data in a RecyclerView on a Fragment?
On a DialogFragment there are some TextInputEditText fields, and users can change the data.
After users close the DialogFragment with dismiss(), the RecyclerView isn't updated.
I tried to use an Intent to go back to the RecyclerView, but I think that is not the correct way.
My environment is Java, Android Studio.
A:
You can use startActivityForResult(); it is the best way to get your modified data. Here's a link to the documentation and how to integrate it: enter link description here
A:
You must notify all registered observers that the data set has changed to get visual effects, using notifyDataSetChanged(). More details on notifyDataSetChanged()
This link also shows the use of notifyDataSetChanged()
|
How to reflesh RecycleView on Fragment after dismiss() DialogFragment?
|
Could you tell me the way to refresh data on RecycleView on Fragment.
On DialogFragment, there are some TextInputEditText, and users could change the data.
After users closed DialogFragment with dismiss(), RecycleView isn't changed.
Tried to use Intetnt back to RecycleView, but I think it is not correct way.
My environment is JAVA, Android Studio.
|
[
"You can use StartActivityForResult, it is the best way to get you modified data, here's a link to documentation and how to integrated it enter link description here\n",
"You must notify all reorganized observer that the data set is altered to get visual effects using notifyDataSetChanged().More details on notifyDataSetChanged() \nThis link also shows the use of notifyDataSetChanged() \n"
] |
[
0,
0
] |
[] |
[] |
[
"android",
"java"
] |
stackoverflow_0074664257_android_java.txt
|
Q:
Getting position data from UBX protocol
I am working on a project which uses the u-blox UBX protocol to get position information. I'm using serial communication to connect my GPS module and read position information in a Python sketch. I used the Serial and pyubx2 libraries in my sketch as follows:
from serial import Serial
from pyubx2 import UBXReader
stream = Serial('COM8', 38400)
while True:
ubr = UBXReader(stream)
(raw_data, parsed_data) = ubr.read()
print(parsed_data)
Then I receive information from the GPS module as follows. It continuously sends many messages every second, like these:
<UBX(NAV-SOL, iTOW=00:11:43, fTOW=-215069, week=0, gpsFix=0, gpsfixOK=0, diffSoln=0, wknSet=0, towSet=0, ecefX=637813700, ecefY=0, ecefZ=0, pAcc=649523840, ecefVX=0, ecefVY=0, ecefVZ=0, sAcc=2000, pDOP=99.99, reserved1=2, numSV=0, reserved2=215800)>
<UBX(NAV-PVT, iTOW=00:11:43, year=2015, month=10, day=18, hour=0, min=12, second=1, validDate=0, validTime=0, fullyResolved=0, validMag=0, tAcc=4294967295, nano=-215068, fixType=0, gnssFixOk=0, difSoln=0, psmState=0, headVehValid=0, carrSoln=0, confirmedAvai=0, confirmedDate=0, confirmedTime=0, numSV=0, lon=0.0, lat=0.0, height=0, hMSL=-17000, hAcc=4294967295, vAcc=3750027776, velN=0, velE=0, velD=0, gSpeed=0, headMot=0.0, sAcc=20000, headAcc=180.0, pDOP=99.99, invalidLlh=0, lastCorrectionAge=0, reserved0=2312952, headVeh=0.0, magDec=0.0, magAcc=0.0)>
I want to assign that position information (latitude, longitude, altitude, etc.) to variables and hope to do some analysis later. So how can I extract the positional information individually from these kinds of sentences?
A:
Try something like this (press CTRL-C to terminate) ...
from serial import Serial
from pyubx2 import UBXReader
try:
stream = Serial('COM8', 38400)
while True:
ubr = UBXReader(stream)
(raw_data, parsed_data) = ubr.read()
# print(parsed_data)
if parsed_data.identity == "NAV-PVT":
lat, lon, alt = parsed_data.lat, parsed_data.lon, parsed_data.hMSL
print(f"lat = {lat}, lon = {lon}, alt = {alt/1000} m")
except KeyboardInterrupt:
print("Terminated by user")
For further assistance, refer to https://github.com/semuconsulting/pyubx2 (there are several example Python scripts in the /examples folder).
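Building on the answer above, a small helper can accumulate the extracted fields into plain Python structures for later analysis. The function below is a hypothetical sketch: the attribute names `identity`, `lat`, `lon` and `hMSL` follow the NAV-PVT output shown in the question, and `hMSL` is in millimetres.

```python
def collect_fix(parsed_data, fixes):
    """Append a dict of position fields to `fixes` if this is a NAV-PVT message."""
    if getattr(parsed_data, "identity", None) == "NAV-PVT":
        fixes.append({
            "lat": parsed_data.lat,
            "lon": parsed_data.lon,
            "alt_m": parsed_data.hMSL / 1000,  # hMSL is height above MSL in mm
        })
    return fixes
```

Inside the read loop you would call `collect_fix(parsed_data, fixes)` after each `ubr.read()`, and later feed the accumulated list to whatever analysis tool you prefer.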
|
Getting position data from UBX protocol
|
I am working on a project which is use ublox .ubx protocol to getting position information. I'm using serial communication to connect my GPS module and getting position information to python sketch. I used Serial and pyubx2 libraries my sketch as follows,
from serial import Serial
from pyubx2 import UBXReader
stream = Serial('COM8', 38400)
while True:
ubr = UBXReader(stream)
(raw_data, parsed_data) = ubr.read()
print(parsed_data)
Then I have received information from GPS module as follows. It is continuously sending many of information in every second like as follows,
<UBX(NAV-SOL, iTOW=00:11:43, fTOW=-215069, week=0, gpsFix=0, gpsfixOK=0, diffSoln=0, wknSet=0, towSet=0, ecefX=637813700, ecefY=0, ecefZ=0, pAcc=649523840, ecefVX=0, ecefVY=0, ecefVZ=0, sAcc=2000, pDOP=99.99, reserved1=2, numSV=0, reserved2=215800)>
<UBX(NAV-PVT, iTOW=00:11:43, year=2015, month=10, day=18, hour=0, min=12, second=1, validDate=0, validTime=0, fullyResolved=0, validMag=0, tAcc=4294967295, nano=-215068, fixType=0, gnssFixOk=0, difSoln=0, psmState=0, headVehValid=0, carrSoln=0, confirmedAvai=0, confirmedDate=0, confirmedTime=0, numSV=0, lon=0.0, lat=0.0, height=0, hMSL=-17000, hAcc=4294967295, vAcc=3750027776, velN=0, velE=0, velD=0, gSpeed=0, headMot=0.0, sAcc=20000, headAcc=180.0, pDOP=99.99, invalidLlh=0, lastCorrectionAge=0, reserved0=2312952, headVeh=0.0, magDec=0.0, magAcc=0.0)>
I want to assign those position information (latitude, longitude, altitude etc.) into variables and hope to do some analysis part in further. So how can I derive positional information individually from this type of sentences.
|
[
"Try something like this (press CTRL-C to terminate) ...\nfrom serial import Serial\nfrom pyubx2 import UBXReader\n\ntry:\n stream = Serial('COM8', 38400)\n while True:\n ubr = UBXReader(stream)\n (raw_data, parsed_data) = ubr.read()\n # print(parsed_data)\n if parsed_data.identity == \"NAV-PVT\":\n lat, lon, alt = parsed_data.lat, parsed_data.lon, parsed_data.hMSL\n print(f\"lat = {lat}, lon = {lon}, alt = {alt/1000} m\")\nexcept KeyboardInterrupt:\n print(\"Terminated by user\")\n\nFor further assistance, refer to https://github.com/semuconsulting/pyubx2 (there are several example Python scripts in the /examples folder).\n"
] |
[
0
] |
[] |
[] |
[
"gps",
"location",
"python"
] |
stackoverflow_0073864028_gps_location_python.txt
|
Q:
effectively accessing first item in object
On input consider db-dump(from dbeaver), having this format:
{
"select": [
{<row1>},
{<row2>}
],
"select": {}
}
say that I'm debugging a bigger script, and just want to see the first few rows from the first statement. How to do that efficiently in a rather huge file?
Template:
jq 'keys[0] as $k|.[$k]|limit(1;.[])' dump
isn't really great, as it needs to fetch all keys first. The template
jq '.[0]|limit(1;.[])' dump
sadly does not seem to be a valid one, and
jq 'first(.[])|limit(1;.[])' dump
does not seem to have any performance benefit.
What would be the best way to just access the first field in the object without actually testing its name or caring about the rest of the fields?
A:
One way to achieve this would be to use the keys filter to get the first key of the object, then use that key to access the value of the first object in that field. The final jq query would look something like this:
jq 'keys[0] as $k|.[$k][0]' dump
A:
Given that weird object with identical keys, you can use the --stream option to access all items before the JSON processor would eliminate the duplicates, fromstream and truncate_stream to dissect the input, and limit to reduce the output to just a few items:
jq --stream -cn 'limit(5; fromstream(2|truncate_stream(inputs)))' dump.json
{<row1>}
{<row2>}
{<row3>}
{<row4>}
{<row5>}
A:
One strategy would be to use the --stream command-line option. It’s a bit tricky to use, but if you want to use jq or gojq, it’s the way to go for a space-time efficient solution for a large input.
Far easier to use would be my jm script, which is intended precisely to achieve the kind of objective you describe. In particular, please note its --limit option. E.g. you could start with:
jm -s --limit 1
See
https://github.com/pkoppstein/jm
How to read a 100+GB file with jq without running out of memory
|
effectively accessing first item in object
|
On input consider db-dump(from dbeaver), having this format:
{
"select": [
{<row1>},
{<row2>}
],
"select": {}
}
say that I'm debugging bigger script, and just want to see first few rows, from first statement. How to do that effectively in rather huge file?
Template:
jq 'keys[0] as $k|.[$k]|limit(1;.[])' dump
isn't really great, as it need to fetch all keys first. Template
jq '.[0]|limit(1;.[])' dump
sadly does not seem to be valid one, and
jq 'first(.[])|limit(1;.[])' dump
does not seem to have any performance benefit.
What would be the best way to just access first field in object without actually testing it's name or caring for rest of fields?
|
[
"One way to achieve this would be to use the keys filter to get the first key of the object, then use that key to access the value of the first object in that field. The final jq query would look something like this:\njq 'keys[0] as $k|.[$k][0]' dump\n\n",
"Given that weird object with identical keys, you can use the --stream option to access all items before the JSON processor would eliminate the duplicates, fromstream and truncate_stream to dissect the input, and limit to reduce the output to just a few items:\njq --stream -cn 'limit(5; fromstream(2|truncate_stream(inputs)))' dump.json\n\n{<row1>}\n{<row2>}\n{<row3>}\n{<row4>}\n{<row5>}\n\n",
"One strategy would be to use the —stream command-line option. It’s a bit tricky to use, but if you want to use jq or gojq, it’s the way to go for a space-time efficient solution for a large input.\nFar easier to use would be my jm script, which is intended precisely to achieve the kind of objective you describe. In particular, please note its —-limit option. E.g. you could start with:\njm -s —-limit 1\n\nSee\nhttps://github.com/pkoppstein/jm\nHow to read a 100+GB file with jq without running out of memory\n"
] |
[
0,
0,
0
] |
[] |
[] |
[
"jq"
] |
stackoverflow_0074665921_jq.txt
|
Q:
PHP - Problem with sorting my multidimensional array?
I have a multidimensional array that looks like this:
array:3 [
0 => array:2 [
"titre" => "Un package test"
"nbDDL" => "3"
]
1 => array:2 [
"titre" => "retest"
"nbDDL" => "1"
]
2 => array:2 [
"titre" => "ytjrtj"
"nbDDL" => "1"
]
]
I would like to sort it in ASC or DESC order (depending on a variable passed as a function parameter) by the nbDDL value.
I looked at the array_multisort() method but I cannot get it to work.
I work under Symfony 3.
Currently, I have:
if($ordreTri == "ASC")
{
$liste = array_multisort("nbDDL", ASC);
}
Thanks for your help !
A:
There are a few different ways of doing this: you can introduce your own custom sort by using usort() and the spaceship operator <=>, but you can also use array_multisort(); you just have to combine it with array_column().
You can sort the array by first fetching all the nbDDLs. Then use that as the sorting-array in array_multisort(), and sort by ascending order (SORT_ASC). Apply that to $array, and you're done!
// By reference, $array is changed
array_multisort(array_column($array, "nbDDL"), SORT_ASC, $array);
This is done by reference, so you don't need to assign it to a variable. The return-value of array_multisort() is a boolean, which means that if you assign it as
// By reference - $result is bool
$result = array_multisort(array_column($array, "nbDDL"), SORT_ASC, $array);
Then $result is either true or false, but not the sorted array.
Live demo at https://3v4l.org/9JYe5
PHP.net on array_multisort()
A:
You can try like below:
$data = [
['a' => 'a', 'order' => 1],
['b' => 'b', 'order' => 3],
['c' => 'c', 'order' => 1]
];
array_multisort(array_column($data, 'order'), SORT_DESC, SORT_NUMERIC, $data);
print_r($data);
A:
You can do as following way:
function array_sort($array, $on, $order=SORT_ASC){
$new_array = array();
$sortable_array = array();
if (count($array) > 0) {
foreach ($array as $k => $v) {
if (is_array($v)) {
foreach ($v as $k2 => $v2) {
if ($k2 == $on) {
$sortable_array[$k] = $v2;
}
}
} else {
$sortable_array[$k] = $v;
}
}
switch ($order) {
case SORT_ASC:
asort($sortable_array);
break;
case SORT_DESC:
arsort($sortable_array);
break;
}
foreach ($sortable_array as $k => $v) {
$new_array[$k] = $array[$k];
}
}
return $new_array;
}
Use function as like:
$list = array(
array( 'type' => 'suite', 'name'=>'A-Name'),
array( 'type' => 'suite', 'name'=>'C-Name'),
array( 'type' => 'suite', 'name'=>'B-Name')
);
$list = array_sort($list, 'name', SORT_ASC);
A:
Normally, you can use array_multisort to sort the array, but somehow it's weird that in PHP 7.4 it returns a boolean (true or false) instead of the sorted array like in PHP 5. I mean, a sorted array is what we actually need, not the boolean values.
|
PHP - Problem with sorting my multidimensional array?
|
I have a multidimensional array that looks like this:
array:3 [
0 => array:2 [
"titre" => "Un package test"
"nbDDL" => "3"
]
1 => array:2 [
"titre" => "retest"
"nbDDL" => "1"
]
2 => array:2 [
"titre" => "ytjrtj"
"nbDDL" => "1"
]
]
I would like to sort it by ASC or DESC order (depending on a variable passed in function parameter) with the nbDDL.
I looked at the method array_multisort() but I can not put it in place.
I work under Symfony 3.
Currently, I have:
if($ordreTri == "ASC")
{
$liste = array_multisort("nbDDL", ASC);
}
Thanks for your help !
|
[
"There's a few different ways of doing this - you can introduce your own custom sort by using usort() and the spaceship operator <=>, but you can use array_multisort(), you just have to combine it with array_column().\nYou can sort the array by first fetching all the nbDDLs. Then use that as the sorting-array in array_multisort(), and sort by ascending order (SORT_ASC). Apply that to $array, and you're done!\n// By reference, $array is changed\narray_multisort(array_column($array, \"nbDDL\"), SORT_ASC, $array);\n\nThis is done by reference, so you don't need to assign it to a variable. The return-value of array_multisort() is a boolean, which means that if you assign it as \n// By reference - $result is bool\n$result = array_multisort(array_column($array, \"nbDDL\"), SORT_ASC, $array);\n\nThen $result is either true or false, but not the sorted array. \n\nLive demo at https://3v4l.org/9JYe5\nPHP.net on array_multisort()\n\n",
"You can try like below:\n$data = [\n ['a' => 'a', 'order' => 1],\n ['b' => 'b', 'order' => 3],\n ['c' => 'c', 'order' => 1]\n];\n\narray_multisort(array_column($data, 'order'), SORT_DESC, SORT_NUMERIC, $data);\n\nprint_r($data);\n\n",
"You can do as following way:\nfunction array_sort($array, $on, $order=SORT_ASC){\n\n $new_array = array();\n $sortable_array = array();\n\n if (count($array) > 0) {\n foreach ($array as $k => $v) {\n if (is_array($v)) {\n foreach ($v as $k2 => $v2) {\n if ($k2 == $on) {\n $sortable_array[$k] = $v2;\n }\n }\n } else {\n $sortable_array[$k] = $v;\n }\n }\n\n switch ($order) {\n case SORT_ASC:\n asort($sortable_array);\n break;\n case SORT_DESC:\n arsort($sortable_array);\n break;\n }\n\n foreach ($sortable_array as $k => $v) {\n $new_array[$k] = $array[$k];\n }\n }\n\n return $new_array;\n}\n\nUse function as like:\n$list = array(\n array( 'type' => 'suite', 'name'=>'A-Name'),\n array( 'type' => 'suite', 'name'=>'C-Name'),\n array( 'type' => 'suite', 'name'=>'B-Name')\n );\n\n$list = array_sort($list, 'name', SORT_ASC);\n\n",
"normally, you can use array_multisort to sort the array, but somehow it's weird thatin php7.4 it returns boolean (true or false) instead of sorted array like in php 5. I mean, a sorted array is what we actually need, not the boolean values.\n"
] |
[
2,
1,
1,
0
] |
[] |
[] |
[
"arrays",
"multidimensional_array",
"php",
"sorting",
"symfony"
] |
stackoverflow_0055847132_arrays_multidimensional_array_php_sorting_symfony.txt
|
Q:
Java script for Photoshop to "Save as JPEG in origin image folder with custom prefix in the name"
I have a great JavaScript for Photoshop that I found online back in 2015. It works like a charm!
It saves the file currently open in Photoshop, with one click, as a high-quality JPEG in the same folder as the original file:
#target photoshop
var saveFile = new File(activeDocument.path);
// =======================================================
var idsave = charIDToTypeID( "save" );
var desc8 = new ActionDescriptor();
var idAs = charIDToTypeID( "As " );
var desc9 = new ActionDescriptor();
var idEQlt = charIDToTypeID( "EQlt" );
desc9.putInteger( idEQlt, 12 );
var idMttC = charIDToTypeID( "MttC" );
var idMttC = charIDToTypeID( "MttC" );
var idNone = charIDToTypeID( "None" );
desc9.putEnumerated( idMttC, idMttC, idNone );
var idJPEG = charIDToTypeID( "JPEG" );
desc8.putObject( idAs, idJPEG, desc9 );
var idIn = charIDToTypeID( "In " );
desc8.putPath( idIn, new File( saveFile ) );
var idDocI = charIDToTypeID( "DocI" );
desc8.putInteger( idDocI, 35 );
var idCpy = charIDToTypeID( "Cpy " );
desc8.putBoolean( idCpy, false );
var idLwCs = charIDToTypeID( "LwCs" );
desc8.putBoolean( idLwCs, true );
var idsaveStage = stringIDToTypeID( "saveStage" );
var idsaveStageType = stringIDToTypeID( "saveStageType" );
var idsaveSucceeded = stringIDToTypeID( "saveSucceeded" );
desc8.putEnumerated( idsaveStage, idsaveStageType, idsaveSucceeded );
executeAction( idsave, desc8, DialogModes.NO );
But!
Is there a way to modify that script so it can:
Add a prefix, something like "_", at the beginning of the file name (i.e. "_Image.jpg")
Close the original file opened in Photoshop without saving or asking to save it. I've found this line, but I'm unsure where to add it in the existing script:
app.activeDocument.close(SaveOptions.no);
It would make this amazing script even more useful and help me save so much time automating the process of saving over 7500 images I'm editing at the moment.
Thanks!
A:
Stephen_A_Marsh from Adobe Support forums helped me with this code.
Might as well share it here:
#target photoshop
var prefix = "_";
var docName = activeDocument.name.replace(/\.[^\.]+$/, '');
var saveFile = new File(activeDocument.path);
// =======================================================
var idsave = charIDToTypeID( "save" );
var desc8 = new ActionDescriptor();
var idAs = charIDToTypeID( "As " );
var desc9 = new ActionDescriptor();
var idEQlt = charIDToTypeID( "EQlt" );
desc9.putInteger( idEQlt, 12 ); // Quality level 12-1
var idMttC = charIDToTypeID( "MttC" );
var idMttC = charIDToTypeID( "MttC" );
var idNone = charIDToTypeID( "None" );
desc9.putEnumerated( idMttC, idMttC, idNone );
var idJPEG = charIDToTypeID( "JPEG" );
desc8.putObject( idAs, idJPEG, desc9 );
var idIn = charIDToTypeID( "In " );
desc8.putPath( idIn, new File( saveFile + "/" + prefix + docName ) );
var idDocI = charIDToTypeID( "DocI" );
desc8.putInteger( idDocI, 35 );
var idCpy = charIDToTypeID( "Cpy " );
desc8.putBoolean( idCpy, false );
var idLwCs = charIDToTypeID( "LwCs" );
desc8.putBoolean( idLwCs, true );
var idsaveStage = stringIDToTypeID( "saveStage" );
var idsaveStageType = stringIDToTypeID( "saveStageType" );
var idsaveSucceeded = stringIDToTypeID( "saveSucceeded" );
desc8.putEnumerated( idsaveStage, idsaveStageType, idsaveSucceeded );
executeAction(idsave, desc8, DialogModes.NO);
activeDocument.close(SaveOptions.DONOTSAVECHANGES);
|
Java script for Photoshop to "Save as JPEG in origin image folder with custom prefix in the name"
|
I have a great Java script that I've found online back in 2015. It works like a charm!
It saves the opened in Photoshop file with 1 click as high quality JPEG in the same folder as an original file:
#target photoshop
var saveFile = new File(activeDocument.path);
// =======================================================
var idsave = charIDToTypeID( "save" );
var desc8 = new ActionDescriptor();
var idAs = charIDToTypeID( "As " );
var desc9 = new ActionDescriptor();
var idEQlt = charIDToTypeID( "EQlt" );
desc9.putInteger( idEQlt, 12 );
var idMttC = charIDToTypeID( "MttC" );
var idMttC = charIDToTypeID( "MttC" );
var idNone = charIDToTypeID( "None" );
desc9.putEnumerated( idMttC, idMttC, idNone );
var idJPEG = charIDToTypeID( "JPEG" );
desc8.putObject( idAs, idJPEG, desc9 );
var idIn = charIDToTypeID( "In " );
desc8.putPath( idIn, new File( saveFile ) );
var idDocI = charIDToTypeID( "DocI" );
desc8.putInteger( idDocI, 35 );
var idCpy = charIDToTypeID( "Cpy " );
desc8.putBoolean( idCpy, false );
var idLwCs = charIDToTypeID( "LwCs" );
desc8.putBoolean( idLwCs, true );
var idsaveStage = stringIDToTypeID( "saveStage" );
var idsaveStageType = stringIDToTypeID( "saveStageType" );
var idsaveSucceeded = stringIDToTypeID( "saveSucceeded" );
desc8.putEnumerated( idsaveStage, idsaveStageType, idsaveSucceeded );
executeAction( idsave, desc8, DialogModes.NO );
But!
Is there a way to modify that script so it can:
Add the prefix, something like "_" at the beginning of the file (i.e. "_Image.jpg")
Close the original opened file in Photoshop without saving or asking to save it. I've found this line, but I'm unsure where to add it to the existing script:
app.activeDocument.close(SaveOptions.no);
It would make this amazing script even more useful and help me save so much time automating the process of saving over 7500 images I'm editing at the moment.
Thanks!
|
[
"Stephen_A_Marsh from Adobe Support forums helped me with this code.\nMight as well share it here:\n#target photoshop\nvar prefix = \"_\";\nvar docName = activeDocument.name.replace(/\\.[^\\.]+$/, '');\nvar saveFile = new File(activeDocument.path);\n// =======================================================\nvar idsave = charIDToTypeID( \"save\" );\n var desc8 = new ActionDescriptor();\n var idAs = charIDToTypeID( \"As \" );\n var desc9 = new ActionDescriptor();\n var idEQlt = charIDToTypeID( \"EQlt\" );\n desc9.putInteger( idEQlt, 12 ); // Quality level 12-1\n var idMttC = charIDToTypeID( \"MttC\" );\n var idMttC = charIDToTypeID( \"MttC\" );\n var idNone = charIDToTypeID( \"None\" );\n desc9.putEnumerated( idMttC, idMttC, idNone );\n var idJPEG = charIDToTypeID( \"JPEG\" );\n desc8.putObject( idAs, idJPEG, desc9 );\n var idIn = charIDToTypeID( \"In \" );\n desc8.putPath( idIn, new File( saveFile + \"/\" + prefix + docName ) );\n var idDocI = charIDToTypeID( \"DocI\" );\n desc8.putInteger( idDocI, 35 );\n var idCpy = charIDToTypeID( \"Cpy \" );\n desc8.putBoolean( idCpy, false );\n var idLwCs = charIDToTypeID( \"LwCs\" );\n desc8.putBoolean( idLwCs, true );\n var idsaveStage = stringIDToTypeID( \"saveStage\" );\n var idsaveStageType = stringIDToTypeID( \"saveStageType\" );\n var idsaveSucceeded = stringIDToTypeID( \"saveSucceeded\" );\n desc8.putEnumerated( idsaveStage, idsaveStageType, idsaveSucceeded );\nexecuteAction(idsave, desc8, DialogModes.NO);\nactiveDocument.close(SaveOptions.DONOTSAVECHANGES);\n\n"
] |
[
0
] |
[] |
[] |
[
"adobe",
"automation",
"javascript",
"photoshop",
"photoshop_script"
] |
stackoverflow_0074660651_adobe_automation_javascript_photoshop_photoshop_script.txt
|
Q:
How to convert string to number in python?
I have list of numbers as str
li = ['1', '4', '8.6']
If I use int to convert, the result is [1, 4, 8].
If I use float to convert, the result is [1.0, 4.0, 8.6].
I want to convert them to [1, 4, 8.6].
I've tried this:
li = [1, 4, 8.6]
intli = list(map(lambda x: int(x),li))
floatli = list(map(lambda x: float(x),li))
print(intli)
print(floatli)
>> [1, 4, 8]
>> [1.0, 4.0, 8.6]
A:
Convert the items to an integer if isdigit() returns True, else to a float. This can be done with a list comprehension:
li = ['1', '4', '8.6']
lst = [int(x) if x.isdigit() else float(x) for x in li]
print(lst)
To check if it actually worked, you can check the types using another list comprehension:
types = [type(i) for i in lst]
print(types)
A:
One way is to use ast.literal_eval
>>> from ast import literal_eval
>>> spam = ['1', '4', '8.6']
>>> [literal_eval(item) for item in spam]
[1, 4, 8.6]
Word of caution - there are values which return True with str.isdigit() but not convertible to int or float and in case of literal_eval will raise SyntaxError.
>>> '1²'.isdigit()
True
A:
You can use ast.literal_eval to convert an string to a literal:
from ast import literal_eval
li = ['1', '4', '8.6']
numbers = list(map(literal_eval, li))
As @Muhammad Akhlaq Mahar noted in his comment, str.isdigit does not return True for negative integers:
>>> '-3'.isdigit()
False
A:
You're going to need a small utility function:
def to_float_or_int(s):
n = float(s)
return int(n) if n.is_integer() else n
Then,
result = [to_float_or_int(s) for s in li]
A:
You can try map each element using loads from json:
from json import loads
li = ['1', '4', '8.6']
li = [*map(loads,li)]
print(li)
# [1, 4, 8.6]
Or using eval():
print(li:=[*map(eval,['1','4','8.6','-1','-2.3'])])
# [1, 4, 8.6, -1, -2.3]
Notes:
Using json.loads() or ast.literal_eval is safer than eval() when
the string to be evaluated comes from an unknown source
A:
In Python, you can convert a string to a number using the int() or float() functions. For example, if you have a string like "123", you can convert it to the integer number 123 using the int() function like this:
string = "123"
number = int(string)
Or, if you have a string like "3.1415", you can convert it to the float number 3.1415 using the float() function like this:
string = "3.1415"
number = float(string)
These functions will raise a ValueError if the string cannot be converted to a number, so you should make sure to check for that and handle it appropriately in your code. For example:
string = "hello"
try:
number = int(string)
except ValueError:
print("The string cannot be converted to a number.")
I hope that helps! Let me know if you have any other questions.
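Combining the ideas from the answers above into one sketch: try int() first and fall back to float(), which also handles negative integers that str.isdigit() rejects. This assumes every string in the list is a valid number; anything else will still raise ValueError.

```python
def to_number(s):
    """Convert a numeric string to int if possible, otherwise to float."""
    try:
        return int(s)
    except ValueError:
        return float(s)

li = ["1", "4", "8.6", "-3"]
print([to_number(x) for x in li])  # [1, 4, 8.6, -3]
```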
|
How to convert string to number in python?
|
I have list of numbers as str
li = ['1', '4', '8.6']
if I use int to convert the result is [1, 4, 8].
If I use float to convert the result is [1.0, 4.0, 8.6]
I want to convert them to [1, 4, 8.6]
I've tried this:
li = [1, 4, 8.6]
intli = list(map(lambda x: int(x),li))
floatli = list(map(lambda x: float(x),li))
print(intli)
print(floatli)
>> [1, 4, 8]
>> [1.0, 4.0, 8.6]
|
[
"Convert the items to a integer if isdigit() returns True, else to a float. This can be done by a list generator:\nli = ['1', '4', '8.6']\nlst = [int(x) if x.isdigit() else float(x) for x in li]\nprint(lst)\n\nTo check if it actually worked, you can check for the types using another list generator:\ntypes = [type(i) for i in lst]\nprint(types)\n\n",
"One way is to use ast.literal_eval\n>>> from ast import literal_eval\n>>> spam = ['1', '4', '8.6']\n>>> [literal_eval(item) for item in spam]\n[1, 4, 8.6]\n\nWord of caution - there are values which return True with str.isdigit() but not convertible to int or float and in case of literal_eval will raise SyntaxError.\n>>> '1²'.isdigit()\nTrue\n\n",
"You can use ast.literal_eval to convert an string to a literal:\nfrom ast import literal_eval\n\nli = ['1', '4', '8.6']\nnumbers = list(map(literal_eval, li))\n\nAs @Muhammad Akhlaq Mahar noted in his comment, str.isidigit does not return True for negative integers:\n>>> '-3'.isdigit()\nFalse\n\n",
"You're going to need a small utility function:\ndef to_float_or_int(s):\n n = float(s)\n return int(n) if n.is_integer() else n\n\nThen,\nresult = [to_float_or_int(s) for s in li]\n\n",
"You can try map each element using loads from json:\nfrom json import loads\nli = ['1', '4', '8.6']\nli = [*map(loads,li)]\nprint(li)\n\n# [1, 4, 8.6]\n\nOr using eval():\nprint(li:=[*map(eval,['1','4','8.6','-1','-2.3'])])\n\n# [1, 4, 8.6, -1, -2.3]\n\nNotes:\n\nUsing json.loads() or ast.literal_eval is safer than eval() when\nthe string to be evaluated comes from an unknown source\n\n",
"In Python, you can convert a string to a number using the int() or float() functions. For example, if you have a string like \"123\", you can convert it to the integer number 123 using the int() function like this:\nstring = \"123\"\nnumber = int(string)\n\nOr, if you have a string like \"3.1415\", you can convert it to the float number 3.1415 using the float() function like this:\nstring = \"3.1415\"\nnumber = float(string)\n\nThese functions will raise a ValueError if the string cannot be converted to a number, so you should make sure to check for that and handle it appropriately in your code. For example:\nstring = \"hello\"\ntry:\n number = int(string)\nexcept ValueError:\n print(\"The string cannot be converted to a number.\")\n\nI hope that helps! Let me know if you have any other questions.\n"
] |
[
2,
0,
0,
0,
0,
0
] |
[] |
[] |
[
"converters",
"integer",
"numbers",
"python",
"string"
] |
stackoverflow_0074665788_converters_integer_numbers_python_string.txt
|
Q:
Yup validation, do not show error message for form field
I use Yup along with Formik and Material UI for validating a form field.
What I want to achieve here is, I only want to show the error message if the first test fails ie. for duplicate code.
For the second test, if it fails there should not be any error message under the field, but the field should be highlighted as an error field.
But if I remove 'validCode' from the arguments of the second test the field will not be highlighted as an error field.
const YUP_STRING = Yup.string().ensure().trim();
const validationSchema = yup.object().shape({
code: YUP_STRING.test(
'',
'duplicateCode',
code => code !== prohibitedCode
).test('', 'validCode', code => codeValidator.validate(code));
});
I want to achieve this,
but what I have now is,
Is there any way that I can achieve something like the first picture using Yup?
A:
It's too late to answer, but you are leaving the error/test name (the first argument of .test()) empty, which is why it's validating the error but can't find the message:
code: YUP_STRING.test(
    "duplicateCodeErrorName", "duplicateCode",
    code => code !== prohibitedCode
).test("validCodeErrorName", "validCode", code => codeValidator.validate(code))
This how I do regularly,
date: Yup.date().test(
  "noSunday", "You can't select Sunday as an additional holiday",
  function (value) {
    const day = moment(value).day();
    return day !== 0;
  })
Hope that helps
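To address the original requirement (highlight the field for either failure, but show a message only for the duplicate-code failure), one common workaround is to treat the Yup error message as a sentinel when deriving the Material UI field props. A minimal sketch, assuming the 'duplicateCode'/'validCode' messages from the schema above; the display string is a made-up placeholder:

```javascript
// Sketch: derive Material UI TextField props from a Formik/Yup error message.
// 'duplicateCode' and 'validCode' are the sentinel messages from the schema above.
function fieldProps(errorMessage) {
  return {
    // Highlight the field for ANY failed test.
    error: Boolean(errorMessage),
    // Show helper text only when the duplicate-code test failed.
    helperText: errorMessage === 'duplicateCode' ? 'Duplicate code' : '',
  };
}

console.log(fieldProps('duplicateCode')); // { error: true, helperText: 'Duplicate code' }
console.log(fieldProps('validCode'));     // { error: true, helperText: '' }
```

In the JSX this could then be spread onto the field, e.g. `<TextField {...fieldProps(touched.code ? errors.code : undefined)} />`.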
|
Yup validation, do not show error message for form field
|
I use Yup along with Formik and Material UI for validating a form field.
What I want to achieve here is: I only want to show the error message if the first test fails, i.e. for a duplicate code.
For the second test, if it fails there should not be any error message under the field, but the field should be highlighted as an error field.
But if I remove 'validCode' from the arguments of the second test the field will not be highlighted as an error field.
const YUP_STRING = Yup.string().ensure().trim();
const validationSchema = yup.object().shape({
code: YUP_STRING.test(
'',
'duplicateCode',
code => code !== prohibitedCode
).test('', 'validCode', code => codeValidator.validate(code));
});
I want to achieve this,
but what I have now is,
Is there any way that I can achieve something like the first picture using Yup?
|
[
"It's too late to answer but, You are leaving error name field name empty, which is why it's validating the error but can't find the message,\ndate: YUP_STRING.test(\"duplicateCodeErrorName\",\"duplicateCode\",\nfunction (value) {\n code => code !== prohibitedCode\n ).test('validCodeErrorName', 'validCode', code => codeValidator.validate(code));\n})\n\nThis how I do regularly,\ndate: Yup.date().test(\n\"noSunday\", \"You Can't select Sunday as Additional Holiday\",\nfunction (value) {\nconst day = moment(value).day();\nreturn day !== 0 ? true : false;\n })\n\nHope that helps\n"
] |
[
0
] |
[] |
[] |
[
"formik",
"material_ui",
"yup"
] |
stackoverflow_0068917722_formik_material_ui_yup.txt
|
Q:
How to test Obfuscated app which runs via Unity
As is known, Mono is a fork of the .NET Framework, but with nuances: some things may be unimplemented or not working in some versions of Mono, possibly everywhere, I don't know (e.g. Environment.FailFast). What if someone decides to use this Environment.FailFast and adds it in the obfuscation process? To verify that, they would have to run a game server (some standalone build) to test it, or check the Mono sources for the relevant Unity version (which will take some time).
I have an obfuscator that obfuscates a .NET application (a Mono .DLL, e.g. .NET Framework 4.7.2) which runs on the Unity Engine. How can I test that after obfuscation it runs without errors/crashes? Currently I just run a standalone game server (Unturned) that does this for me, and after waiting a few minutes I see the result - spending 1-2 minutes to test each obfuscated .DLL is a very long process for me.
Problems
What if someone would ask: "hey, how do I trust you If I cant see real tests such as Unit Tests that proves me that my program works after obfuscation without any problems".
What if I want to prove to myself that obfuscation working fine and just running without errors and crashes would be great.
Current ideas
Write a custom Unity game and then test my things there (use the native Mono API to load the assembly, or Assembly.Load on the DLL, then create a new GameObject in both situations, etc.).
Install Mono, then open cmd (or another shell) at the path where Mono is located, create a simple console app with the logic for loading the .DLL, compile the app in Visual Studio, and after that run in cmd:
mono "path_to_the_compiled_executable_in_VS.exe"
Issues of current ideas
This is hard to support; it means I have to store plenty of different versions of my custom Unity game somewhere - I guess in the obfuscator project (e.g. Unity 2019.?.?/2019.?.?/2020/2021 etc.), then write scripts in these games to load the assembly and run the game. I will also always have to wait for the game to load (I mean the Unity splash screen with the Unity logo; surely this could be removed in some way, but I don't want to do that).
I'm sure Unity differs from Mono 100%, but I may be wrong (I mean some of Mono's implementations, e.g. assembly reading/loading and other things). So I am not able to create real GameObjects from the .DLL there to invoke methods such as Update/Awake/OnEnable/OnDisable etc.; perhaps there is something that could do that, and that would be cool.
A:
Currently, this is the only answer I got from asking directly in some reversing Discord channels. I haven't tested it yet, but I'm sure this is a great idea; it would be great to hear other ideas.
Answer that I got By kotae in BitCrackers discord
what are you obfuscating? if it's the file on disk then why do you need unity Update/Awake/OnEnable/OnDisable events? when compatibility is your concern, target the lowest supported version. i think this is usually .NET 4.5. mono has near 100% coverage of everything in .NET 4.5, and there are lists on the mono site you can find to confirm that.
besides that, for unit tests here's what i'd do:
keep a list of all .NET features you use (types, methods, etc)
keep a collection of all versions of mono that unity ships with (i'm pretty sure there's only been 2 but could be wrong - check file hashes)
write a test runner that loads each mono.dll, initializes a runtime environment, and then uses reflection (either .NET reflection or just native mono api calls) to run through your list from step#1 and make sure each item exists
you could automate step#1 using reflection as well, actually, or just manual parsing of the IL. mono.cecil or something similar would let you examine the IL, then you just have to pattern match instructions that reference .NET types & methods & whatever else.
about-mono compatibility
My conclusions
I made some conclusions for myself, and one of them is that there is no need for the Update/Awake/OnEnable/OnDisable events. Sadly, I don't have ideas on how to test the application in depth (I mean executing methods and testing everything as conceived by the developer of the application), so what I wanted was just to see how the application runs under Unity (whether it has errors/crashes is enough for me). If I get other answers elsewhere, I will post them here.
|
How to test Obfuscated app which runs via Unity
|
As is known, Mono is a fork of the .NET Framework, but with nuances: some things may be unimplemented or not working in some versions of Mono, possibly everywhere, I don't know (e.g. Environment.FailFast). What if someone decides to use this Environment.FailFast and adds it in the obfuscation process? To verify that, they would have to run a game server (some standalone build) to test it, or check the Mono sources for the relevant Unity version (which will take some time).
I have an obfuscator that obfuscates a .NET application (a Mono .DLL, e.g. .NET Framework 4.7.2) which runs on the Unity Engine. How can I test that after obfuscation it runs without errors/crashes? Currently I just run a standalone game server (Unturned) that does this for me, and after waiting a few minutes I see the result - spending 1-2 minutes to test each obfuscated .DLL is a very long process for me.
Problems
What if someone would ask: "hey, how do I trust you If I cant see real tests such as Unit Tests that proves me that my program works after obfuscation without any problems".
What if I want to prove to myself that obfuscation working fine and just running without errors and crashes would be great.
Current ideas
Write a custom Unity game and then test my things there (use the native Mono API to load the assembly, or Assembly.Load on the DLL, then create a new GameObject in both situations, etc.).
Install Mono, then open cmd (or another shell) at the path where Mono is located, create a simple console app with the logic for loading the .DLL, compile the app in Visual Studio, and after that run in cmd:
mono "path_to_the_compiled_executable_in_VS.exe"
Issues of current ideas
This is hard to support; it means I have to store plenty of different versions of my custom Unity game somewhere - I guess in the obfuscator project (e.g. Unity 2019.?.?/2019.?.?/2020/2021 etc.), then write scripts in these games to load the assembly and run the game. I will also always have to wait for the game to load (I mean the Unity splash screen with the Unity logo; surely this could be removed in some way, but I don't want to do that).
I'm sure Unity differs from Mono 100%, but I may be wrong (I mean some of Mono's implementations, e.g. assembly reading/loading and other things). So I am not able to create real GameObjects from the .DLL there to invoke methods such as Update/Awake/OnEnable/OnDisable etc.; perhaps there is something that could do that, and that would be cool.
|
[
"Currently, this is the single answer I got asking directly in some reversing discord channels, but I didnt test it yet, Im sure this is a great idea, will be great to hear other ideas.\nAnswer that I got By kotae in BitCrackers discord\nwhat are you obfuscating? if it's the file on disk then why do you need unity Update/Awake/OnEnable/OnDisable events? when compatibility is your concern, target the lowest supported version. i think this is usually .NET 4.5. mono has near 100% coverage of everything in .NET 4.5, and there are lists on the mono site you can find to confirm that.\nbesides that, for unit tests here's what i'd do:\n\nkeep a list of all .NET features you use (types, methods, etc)\nkeep a collection of all versions of mono that unity ships with (i'm pretty sure there's only been 2 but could be wrong - check file hashes)\nwrite a test runner that loads each mono.dll, initializes a runtime environment, and then uses reflection (either .NET reflection or just native mono api calls) to run through your list from step#1 and make sure each item exists\nyou could automate step#1 using reflection as well, actually, or just manual parsing of the IL. mono.cecil or something similar would let you examine the IL, then you just have to pattern match instructions that reference .NET types & methods & whatever else.\n\nabout-mono compatibility\nMy conclusions\nI made some conclusions for myself, and one of them is there are no need Update/Awake/OnEnable/OnDisable events, sadly I dont have ideas on how to test the application in depth (I mean execute methods, test everything as it conceived by the developer of this application), so I wanted is just to see how the application runs under Unity (does it has errors/crashes this is enough for me). If I will get other answers outside I will post them here.\n"
] |
[
0
] |
[] |
[] |
[
".net",
"c#",
"mono",
"testing",
"unity3d"
] |
stackoverflow_0074631823_.net_c#_mono_testing_unity3d.txt
|
Q:
codesandbox - Cannot find module PACKAGE or its corresponding type declarations
I built and released an npm package called schemez. On my local system via VS Code, it works great with no errors and plenty of TypeScript support. On CodeSandbox, it's erroring out with Cannot find module 'schemez' or its corresponding type declarations.ts(2307), even though the code is compiling.
Sample - https://codesandbox.io/s/fluent-json-schema-vs-schemez-vs-typebox-645s2j
Why is this occurring?
A:
For me, this issue is popping up. When I open the console, it seems to be a CORS issue with the specific package. It might be a temporary problem, because I cannot find many issues regarding this in 2022, although there is an unanswered one here:
https://github.com/codesandbox/codesandbox-client/issues/6834
|
codesandbox - Cannot find module PACKAGE or its corresponding type declarations
|
I built and released an npm package called schemez. On my local system via VS Code, it works great with no errors and plenty of TypeScript support. On CodeSandbox, it's erroring out with Cannot find module 'schemez' or its corresponding type declarations.ts(2307), even though the code is compiling.
Sample - https://codesandbox.io/s/fluent-json-schema-vs-schemez-vs-typebox-645s2j
Why is this occurring?
|
[
"For me, this issue is popping up. When I open console it seems to be a CORS issue with the specific package. It might be a temporary problem, because I can not find many issues regarding this in 2022, although there is a unanswered one here\nhttps://github.com/codesandbox/codesandbox-client/issues/6834\n"
] |
[
0
] |
[] |
[] |
[
"codesandbox",
"import",
"javascript",
"npm_publish",
"typescript"
] |
stackoverflow_0073116684_codesandbox_import_javascript_npm_publish_typescript.txt
|
Q:
How to pass a Composable to another Composable as its parameter and display/run it in Jetpack Compose
I have this drawer I learned to make on youtube
https://www.youtube.com/watch?v=JLICaBEiJS0&list=PLQkwcJG4YTCSpJ2NLhDTHhi6XBNfk9WiC&index=31
Philipp Lackner
I want to add the drawer to my app, which has multiple screens. Some of them don't need the drawer, so I implemented navigation with screens, and some of the screens also need to have the drawer on top of them, wrapping them.
this is the code of the drawer
val scaffoldState = rememberScaffoldState()
val scope = rememberCoroutineScope()
Scaffold(
drawerGesturesEnabled = scaffoldState.drawerState.isOpen,
scaffoldState = scaffoldState, topBar = {
AppBar(onNavigationIconClick = {
scope.launch {
scaffoldState.drawerState.open()
}
})
}, drawerContent = {
DrawerHeader()
DrawerBody(items = listOf(
MenuItem(
id = "home",
title = "Home",
contentDescription = "Go to home screen",
icon = Icons.Default.Home
),
MenuItem(
id = "settings",
title = "Settings",
contentDescription = "Go to Settings screen",
icon = Icons.Default.Settings
),
MenuItem(
id = "help",
title = "Help",
contentDescription = "Go to help screen",
icon = Icons.Default.Info
),
), onItemClick = {
println("Clicked on ${it.title}")
when (it.id) {
"home" -> {
println("Clicked on ${it.title}")
}
"settings" -> {
println("Clicked on ${it.title}")
}
"help" -> {
println("Clicked on ${it.title}")
}
}
})
}) {
Text(text = "Hello World")
}
The Text = "Hello World" is where I want to pass my parameter for the screen, which I don't know how to do. I want to add a parameter that takes a composable function and runs it inside.
and I followed this navigation video on how to navigate in kotlin
https://www.youtube.com/watch?v=4gUeyNkGE3g&list=PLQkwcJG4YTCSpJ2NLhDTHhi6XBNfk9WiC&index=18
which has 3 big files so if you ask I post them here but I will try to be more specific about what is needed
composable(route = Screen.RegisterScreen.route) {
RegisterScreen(navController = navCotroller)
}
And if I put the code in the drawer it works well, but I want to split the code to be cleaner because I use the drawer in more places.
The code works like the example below:
composable(route = Screen.PreferenceScreen.route) {
val scaffoldState = rememberScaffoldState()
val scope = rememberCoroutineScope()
Scaffold(
drawerGesturesEnabled = scaffoldState.drawerState.isOpen,
scaffoldState = scaffoldState,
topBar = {
AppBar(onNavigationIconClick = {
scope.launch {
scaffoldState.drawerState.open()
}
})
},
drawerContent = {
DrawerHeader()
DrawerBody(items = listOf(
MenuItem(
id = "swipe",
title = "Swipe",
contentDescription = "Go to Swipe screen",
icon = Icons.Default.Home
),
MenuItem(
id = "settings",
title = "Settings",
contentDescription = "Go to Settings screen",
icon = Icons.Default.Settings
),
MenuItem(
id = "profile",
title = "Profile",
contentDescription = "Go to profile screen",
icon = Icons.Default.Info
),
), onItemClick = {
when (it.id) {
"swipe" -> {
navCotroller.navigate(Screen.SwipeScreen.route)
}
"settings" -> {
navCotroller.navigate(Screen.PreferenceScreen.route)
}
"profile" -> {
navCotroller.navigate(Screen.CelebProfileScreen.route)
}
}
})
}) {
-----> PreferenceScreen(navController = navCotroller)
}
}
But this is not clean code! How can I use a function pointer to make this work?
A:
I can't compile your code and I'm not sure if this will solve your issue, but based on this
… I want to add a parameter that takes a composable function and runs
it inside
You can use this as a reference. Consider this composable that takes any composable and runs it inside
@Composable
fun AnyComposable(
anyComposable: @Composable () -> Unit
) {
anyComposable()
}
And another composable that uses the composable above and passes any other composable to it like this
@Composable
fun MyScreen() {
// big red Box composable
val composable1 = @Composable {
Box (
modifier = Modifier
.size(200.dp)
.background(Color.Red)
)
}
// small blue Box composable
val composable2 = @Composable {
Box(
modifier = Modifier
.size(80.dp)
.background(Color.Blue)
)
}
// small green rectangular Box composable
val composable3 = @Composable {
Box(
modifier = Modifier
.size(width = 100.dp, height = 30.dp)
.background(Color.Green)
)
}
var dynamicComposable by remember { mutableStateOf<@Composable () -> Unit> (@Composable { composable1() }) }
Column(
modifier = Modifier.fillMaxSize(),
horizontalAlignment = Alignment.CenterHorizontally
) {
Row(
modifier = Modifier.fillMaxWidth(),
verticalAlignment = Alignment.CenterVertically,
horizontalArrangement = Arrangement.SpaceBetween
) {
Button(onClick = {
dynamicComposable = composable1
}) {
Text("Comp 1")
}
Button(onClick = {
dynamicComposable = composable2
}) {
Text("Comp 2")
}
Button(onClick = {
dynamicComposable = composable3
}) {
Text("Comp 3")
}
}
AnyComposable {
dynamicComposable()
}
}
}
and the output:
A:
Or, in case the receiving composable has, let's say, a Column, and you want to set up the argument composables with Alignment beforehand, you can consider this one.
The same composable receiver that accepts any composables, but where the expected layout is a Column:
@Composable
fun AnyComposable(
anyComposable: @Composable () -> Unit
) {
Column(
modifier = Modifier.fillMaxSize()
) {
anyComposable()
}
}
@Composable
fun MyScreen() {
// big red box at Start of Column
val composable1 : @Composable ColumnScope.() -> Unit = @Composable {
Box (
modifier = Modifier
.align(Alignment.Start)
.size(200.dp)
.background(Color.Red)
)
}
// small blue box at Center of Column
val composable2 : @Composable ColumnScope.() -> Unit = @Composable {
Box(
modifier = Modifier
.align(Alignment.CenterHorizontally)
.size(80.dp)
.background(Color.Blue)
)
}
// small green rectangle at End of Colummn
val composable3 : @Composable ColumnScope.() -> Unit = @Composable {
Box(
modifier = Modifier
.align(Alignment.End)
.size(width = 100.dp, height = 30.dp)
.background(Color.Green)
)
}
var dynamicComposable by remember { mutableStateOf<@Composable ColumnScope.() -> Unit> (@Composable { composable1() }) }
Column(
modifier = Modifier.fillMaxSize(),
horizontalAlignment = Alignment.CenterHorizontally
) {
Row(
modifier = Modifier.fillMaxWidth(),
verticalAlignment = Alignment.CenterVertically,
horizontalArrangement = Arrangement.SpaceBetween
) {
Button(onClick = {
dynamicComposable = composable1
}) {
Text("Comp 1")
}
Button(onClick = {
dynamicComposable = composable2
}) {
Text("Comp 2")
}
Button(onClick = {
dynamicComposable = composable3
}) {
Text("Comp 3")
}
}
AnyComposable {
dynamicComposable()
}
}
}
Output:
|
How to pass a Composable to another Composable as its parameter and display/run it in Jetpack Compose
|
I have this drawer I learned to make on youtube
https://www.youtube.com/watch?v=JLICaBEiJS0&list=PLQkwcJG4YTCSpJ2NLhDTHhi6XBNfk9WiC&index=31
Philipp Lackner
I want to add the drawer to my app, which has multiple screens. Some of them don't need the drawer, so I implemented navigation with screens, and some of the screens also need to have the drawer on top of them, wrapping them.
this is the code of the drawer
val scaffoldState = rememberScaffoldState()
val scope = rememberCoroutineScope()
Scaffold(
drawerGesturesEnabled = scaffoldState.drawerState.isOpen,
scaffoldState = scaffoldState, topBar = {
AppBar(onNavigationIconClick = {
scope.launch {
scaffoldState.drawerState.open()
}
})
}, drawerContent = {
DrawerHeader()
DrawerBody(items = listOf(
MenuItem(
id = "home",
title = "Home",
contentDescription = "Go to home screen",
icon = Icons.Default.Home
),
MenuItem(
id = "settings",
title = "Settings",
contentDescription = "Go to Settings screen",
icon = Icons.Default.Settings
),
MenuItem(
id = "help",
title = "Help",
contentDescription = "Go to help screen",
icon = Icons.Default.Info
),
), onItemClick = {
println("Clicked on ${it.title}")
when (it.id) {
"home" -> {
println("Clicked on ${it.title}")
}
"settings" -> {
println("Clicked on ${it.title}")
}
"help" -> {
println("Clicked on ${it.title}")
}
}
})
}) {
Text(text = "Hello World")
}
The Text = "Hello World" is where I want to pass my parameter for the screen, which I don't know how to do. I want to add a parameter that takes a composable function and runs it inside.
and I followed this navigation video on how to navigate in kotlin
https://www.youtube.com/watch?v=4gUeyNkGE3g&list=PLQkwcJG4YTCSpJ2NLhDTHhi6XBNfk9WiC&index=18
which has 3 big files so if you ask I post them here but I will try to be more specific about what is needed
composable(route = Screen.RegisterScreen.route) {
RegisterScreen(navController = navCotroller)
}
And if I put the code in the drawer it works well, but I want to split the code to be cleaner because I use the drawer in more places.
The code works like the example below:
composable(route = Screen.PreferenceScreen.route) {
val scaffoldState = rememberScaffoldState()
val scope = rememberCoroutineScope()
Scaffold(
drawerGesturesEnabled = scaffoldState.drawerState.isOpen,
scaffoldState = scaffoldState,
topBar = {
AppBar(onNavigationIconClick = {
scope.launch {
scaffoldState.drawerState.open()
}
})
},
drawerContent = {
DrawerHeader()
DrawerBody(items = listOf(
MenuItem(
id = "swipe",
title = "Swipe",
contentDescription = "Go to Swipe screen",
icon = Icons.Default.Home
),
MenuItem(
id = "settings",
title = "Settings",
contentDescription = "Go to Settings screen",
icon = Icons.Default.Settings
),
MenuItem(
id = "profile",
title = "Profile",
contentDescription = "Go to profile screen",
icon = Icons.Default.Info
),
), onItemClick = {
when (it.id) {
"swipe" -> {
navCotroller.navigate(Screen.SwipeScreen.route)
}
"settings" -> {
navCotroller.navigate(Screen.PreferenceScreen.route)
}
"profile" -> {
navCotroller.navigate(Screen.CelebProfileScreen.route)
}
}
})
}) {
-----> PreferenceScreen(navController = navCotroller)
}
}
But this is not clean code! How can I use a function pointer to make this work?
|
[
"I can't compile your code and I'm not sure if this will solve your issue, but based on this\n\n… I want to add a parameter that takes a composable function and runs\nit inside\n\nYou can use this as a reference. Consider this composable that takes any composable and runs it inside\n@Composable\nfun AnyComposable(\n anyComposable: @Composable () -> Unit\n) {\n anyComposable()\n}\n\nAnd another composable that uses the composable above and passes any other composable to it like this\n@Composable\nfun MyScreen() {\n\n // big red Box composable\n val composable1 = @Composable {\n Box (\n modifier = Modifier\n .size(200.dp)\n .background(Color.Red)\n )\n }\n\n // small blue Box composable\n val composable2 = @Composable {\n Box(\n modifier = Modifier\n .size(80.dp)\n .background(Color.Blue)\n )\n }\n\n // small green rectangular Box composable\n val composable3 = @Composable {\n Box(\n modifier = Modifier\n .size(width = 100.dp, height = 30.dp)\n .background(Color.Green)\n )\n }\n\n var dynamicComposable by remember { mutableStateOf<@Composable () -> Unit> (@Composable { composable1() }) }\n\n Column(\n modifier = Modifier.fillMaxSize(),\n horizontalAlignment = Alignment.CenterHorizontally\n ) {\n\n Row(\n modifier = Modifier.fillMaxWidth(),\n verticalAlignment = Alignment.CenterVertically,\n horizontalArrangement = Arrangement.SpaceBetween\n ) {\n Button(onClick = {\n dynamicComposable = composable1\n }) {\n Text(\"Comp 1\")\n }\n\n Button(onClick = {\n dynamicComposable = composable2\n }) {\n Text(\"Comp 2\")\n }\n\n Button(onClick = {\n dynamicComposable = composable3\n }) {\n Text(\"Comp 3\")\n }\n }\n\n AnyComposable {\n dynamicComposable()\n }\n }\n}\n\nand the output:\n\n",
"Or if in case the receiving composable has, lets say a Column, and you want to set-up the argument composables with Alignment beforehand, you can consider this one.\nSame composable reciever that accepts any composables but the expected layout is Column.\n@Composable\nfun AnyComposable(\n anyComposable: @Composable () -> Unit\n) {\n Column(\n modifier = Modifier.fillMaxSize()\n ) {\n anyComposable()\n }\n}\n\n@Composable\nfun MyScreen() {\n\n // big red box at Start of Column\n val composable1 : @Composable ColumnScope.() -> Unit = @Composable {\n Box (\n modifier = Modifier\n .align(Alignment.Start)\n .size(200.dp)\n .background(Color.Red)\n )\n }\n\n // small blue box at Center of Column\n val composable2 : @Composable ColumnScope.() -> Unit = @Composable {\n Box(\n modifier = Modifier\n .align(Alignment.CenterHorizontally)\n .size(80.dp)\n .background(Color.Blue)\n )\n }\n\n // small green rectangle at End of Colummn\n val composable3 : @Composable ColumnScope.() -> Unit = @Composable {\n Box(\n modifier = Modifier\n .align(Alignment.End)\n .size(width = 100.dp, height = 30.dp)\n .background(Color.Green)\n )\n }\n\n var dynamicComposable by remember { mutableStateOf<@Composable ColumnScope.() -> Unit> (@Composable { composable1() }) }\n\n Column(\n modifier = Modifier.fillMaxSize(),\n horizontalAlignment = Alignment.CenterHorizontally\n ) {\n\n Row(\n modifier = Modifier.fillMaxWidth(),\n verticalAlignment = Alignment.CenterVertically,\n horizontalArrangement = Arrangement.SpaceBetween\n ) {\n Button(onClick = {\n dynamicComposable = composable1\n }) {\n Text(\"Comp 1\")\n }\n\n Button(onClick = {\n dynamicComposable = composable2\n }) {\n Text(\"Comp 2\")\n }\n\n Button(onClick = {\n dynamicComposable = composable3\n }) {\n Text(\"Comp 3\")\n }\n }\n\n AnyComposable {\n dynamicComposable()\n }\n }\n}\n\nOutput:\n\n"
] |
[
2,
2
] |
[] |
[] |
[
"android",
"android_jetpack_compose",
"function_parameter",
"kotlin",
"lambda"
] |
stackoverflow_0074665777_android_android_jetpack_compose_function_parameter_kotlin_lambda.txt
|
Q:
Pattern finder in python
Let's say I have a list with a bunch of numbers in it. I'm looking to make a function that will find and return the numbers that are repeated in most of them.
Example code:
—ListOfNumbers = [1234, 9912349, 578]
-print(GetPatern(ListOfNumbers))
1234
A:
Here is an example of a function that could do this:
def get_pattern(numbers):
# First, we will create a dictionary where the keys are the numbers in our list,
# and the values are the number of times those numbers appear in the list
number_count = {}
for number in numbers:
if number not in number_count:
number_count[number] = 1
else:
number_count[number] += 1
# Next, we will find the number that appears the most times in the list
# by looping through the dictionary and finding the key with the highest value
max_count = 0
max_number = 0
for number, count in number_count.items():
if count > max_count:
max_count = count
max_number = number
# Finally, we will return the numbers that appear the most times in the list
# by checking each number in the list and adding it to the result if it matches
# the number with the highest count
result = []
for number in numbers:
if number == max_number:
result.append(number)
return result
Here is an example of how you could use this function:
ListOfNumbers = [1234, 9912349, 578]
print(get_pattern(ListOfNumbers)) # This should print [1234]
This function will return a list of the numbers that appear the most times in the input list. In the example above, each number appears only once, so the first number counted, 1234, wins the tie and is returned as the result.
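As a side note, the tally-then-max logic above can be condensed using collections.Counter from the standard library. A sketch, intended to be equivalent in behaviour to the loop version (the function name is just illustrative):

```python
from collections import Counter

def get_pattern_compact(numbers):
    # Same behaviour as the loop version, condensed with collections.Counter.
    if not numbers:
        return []
    # most_common(1) returns the top (value, count) pair; on ties, Counter
    # keeps the order in which elements were first encountered.
    top, _ = Counter(numbers).most_common(1)[0]
    # Mirror the loop version: return every occurrence of the winning number.
    return [n for n in numbers if n == top]

print(get_pattern_compact([1234, 9912349, 578, 1234]))  # [1234, 1234]
```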
A:
If I understand you correctly, kevinjohnson, then the output of your example should be 1234, because 1234 is repeated twice within the numbers of your list (once as 1234, and again inside 9912349). So, you are looking for subpatterns inside the numbers.
If this is the case, the solution of user7347835 will not work, because it iterates over full numbers instead of iterating over subpatterns. Therefore, one should change the datatype. The following should work; you can define the length of the pattern as a function input (though one could add functionality that returns the longest pattern if its number of occurrences equals a smaller pattern's).
from collections import Counter
def pattern_finder(lst_of_numbers, length_of_pattern):
pattern_lst = []
# loop over numbers and transform them to strings
for number in lst_of_numbers:
num_str = str(number)
# iterate over the string looking for substrings that match the
# length specified in the input and append them to the list
for idx, val in enumerate(num_str):
pat = num_str[idx:idx+length_of_pattern]
if len(pat) == length_of_pattern:
pattern_lst.append(pat)
# extract the substrings with the max occurrence
count_lst = Counter(pattern_lst)
lst_max_pattern = [pattern for pattern, count in count_lst.items() if count==max(count_lst.values())]
return lst_max_pattern
Test:
lst_of_numbers = [1234, 9912349, 9578, 929578]
pattern_finder(lst_of_numbers, length_of_pattern=4)
Output:
['1234', '9578']
|
Pattern finder in python
|
Let's say I have a list with a bunch of numbers in it. I'm looking to make a function that will find and return the numbers that are repeated in most of them.
Example code:
—ListOfNumbers = [1234, 9912349, 578]
-print(GetPatern(ListOfNumbers))
1234
|
[
"Here is an example of a function that could do this:\ndef get_pattern(numbers):\n # First, we will create a dictionary where the keys are the numbers in our list,\n # and the values are the number of times those numbers appear in the list\n number_count = {}\n for number in numbers:\n if number not in number_count:\n number_count[number] = 1\n else:\n number_count[number] += 1\n\n # Next, we will find the number that appears the most times in the list\n # by looping through the dictionary and finding the key with the highest value\n max_count = 0\n max_number = 0\n for number, count in number_count.items():\n if count > max_count:\n max_count = count\n max_number = number\n\n # Finally, we will return the numbers that appear the most times in the list\n # by checking each number in the list and adding it to the result if it matches\n # the number with the highest count\n result = []\n for number in numbers:\n if number == max_number:\n result.append(number)\n\n return result\n\nHere is an example of how you could use this function:\nListOfNumbers = [1234, 9912349, 578]\nprint(get_pattern(ListOfNumbers)) # This should print [1234]\n\nThis function will return a list of numbers that appear the most times in the input list. In the example above, the number 1234 appears twice in the list, so it is returned as the result.\n",
"If I understand you correctly kevinjohnson, than the output of your example should be 1234, because 1234 is repeated twice in the numbers of your list (once in 1234, and the other time inside of the 9912349). So, you are looking for subpatterns inside of the numbers.\nIf this is the case, the solution of user7347835 will not work, because he is iterating over full numbers instead of iterating over subpatterns. Therefore, one should change the datatype. This should work, you can define the length of the pattern as a function input (though one could add functionality that returns the biggest pattern if the number of occurence is equal to a smaller pattern).\nfrom collections import Counter \ndef pattern_finder(lst_of_numbers, length_of_pattern):\n pattern_lst = []\n\n # loop over numbers and transform them to strings\n for number in lst_of_numbers:\n num_str = str(number) \n \n # iterate over the string looking for subtrings that match the \n # length specified in the input and append them to list \n for idx, val in enumerate(num_str): \n pat = num_str[idx:idx+length_of_pattern]\n if len(pat) == length_of_pattern: \n pattern_lst.append(pat)\n\n # extract the subtrings with the max occurence\n count_lst = Counter(pattern_lst)\n lst_max_pattern = [pattern for pattern, count in count_lst.items() if count==max(count_lst.values())]\n return lst_max_pattern\n\nTest:\nlst_of_numbers = [1234, 9912349, 9578, 929578]\npattern_finder(lst_of_numbers, length_of_pattern=4)\n\nOutput:\n['1234', '9578']\n\n"
] |
[
1,
0
] |
[] |
[] |
[
"python"
] |
stackoverflow_0074665726_python.txt
|
Q:
Ucrop yalantis contracts kotlin camera
Hello, I have to develop a program meeting the following requirements
I have to use Kotlin
I have to use Ucrop Yalantis Library
I have to use contracts
I have to crop images from picker and camera
I've been able to design the picker and crop the image, but I have no idea how to use UCrop with the camera using contracts. Any suggestion?
This is my code for the picker; it works. Besides, I'm able to capture the image from the camera, so I have its Uri, but I don't know how to crop the image I get from the camera with UCrop. If you need it, I can add the code for the camera, but the post would get long.
file:///data/user/0/com.example.tfg_red_social_v0/files/croppedImage.jpg
The file exists I've checked with the explorer
private val cropImage = registerForActivityResult(uCropContract){uri->
Glide.with(this).load(uri).circleCrop().into(binding.ivPerfil)
}
Please any help is welcome I'm adding extra code
My manifest
<manifest xmlns:android="http://schemas.android.com/apk/res/android"
xmlns:tools="http://schemas.android.com/tools">
<!-- PERMISOS PARA ACCEDER A INTERNET, CAMARA Y SD -->
<uses-permission android:name="android.permission.INTERNET" />
<!-- <uses-permission android:name="android.permission.CAMERA" /> -->
<uses-permission android:name="android.permission.WRITE_EXTERNAL_STORAGE" />
<uses-permission android:name="android.permission.READ_EXTERNAL_STORAGE" tools:ignore="ScopedStorage"/>
<uses-feature android:name="android.hardware.camera" android:required="true" />
this is my provider
<provider
android:authorities="com.example.tfg_red_social_v0.fileprovider"
android:name="androidx.core.content.FileProvider"
android:exported="false"
android:grantUriPermissions="true">
<meta-data
android:name="android.support.FILE_PROVIDER_PATHS"
android:resource="@xml/file_paths" />
</provider>
and my file_paths.xml
<?xml version="1.0" encoding="utf-8"?>
<paths>
<external-files-path
name="my_debug_images"
path="Pictures"/>
</paths>
And here how I use UCROP
private val uCropContract = object: ActivityResultContract<List<Uri>, Uri>(){
override fun createIntent(context: Context, input: List<Uri>): Intent {
val inputUri = input[0]
val outPut = input[1]
val uCrop = UCrop.of(inputUri,outPut)
.withAspectRatio(16f,9f)
return uCrop.getIntent(context)
}
override fun parseResult(resultCode: Int, intent: Intent?): Uri {
return UCrop.getOutput(intent!!)!!
}
}
private val cropImage = registerForActivityResult(uCropContract){uri->
imagenUri=uri
Glide.with(this).load(uri).into(binding.ivImagen)
}
private val getContent = registerForActivityResult(ActivityResultContracts.GetContent()){uri->
val inputUri = uri
val outputUri = File(requireContext().filesDir, "croppedImage.jpg").toUri()
val listUri = listOf<Uri>(inputUri!!,outputUri)
cropImage.launch(listUri)
}
}
And I start crop with
binding.ivImagen.setOnClickListener{
getContent.launch("image/*")
}
A:
Sorry for bothering you, I fixed it. It was a piece of cake.
the only thing you have to do is replace
private val getContent = registerForActivityResult(ActivityResultContracts.GetContent()){uri->
val inputUri = uri
val outputUri = File(requireContext().filesDir, "croppedImage.jpg").toUri()
val listUri = listOf<Uri>(inputUri!!,outputUri)
cropImage.launch(listUri)
with
private val recuperaImagenCamara = registerForActivityResult(ActivityResultContracts.TakePicture()){
success ->
if (success) {
val inputUri = imagenUri
val outputUri = File(requireContext().filesDir, "croppedImage.jpg").toUri()
val listUri = listOf<Uri>(inputUri!!,outputUri)
cropImage.launch(listUri)
Glide.with(this@PublicacionFragment).load(imagenUri).circleCrop().into(binding.ivImagen)
binding.ivImagen.setImageURI(imagenUri)
} else
Log.e(ContentValues.TAG,"la URI no se guardo es: $imagenUri")
}
I hope it helps to anyone
|
Ucrop yalantis contracts kotlin camera
|
Hello, I have to develop a program meeting the following requirements
I have to use Kotlin
I have to use Ucrop Yalantis Library
I have to use contracts
I have to crop images from picker and camera
I've been able to design the picker and crop the image, but I have no idea how to use UCrop with the camera using contracts. Any suggestion?
This is my code for the picker; it works. Besides, I'm able to capture the image from the camera, so I have its Uri, but I don't know how to crop the image I get from the camera with UCrop. If you need it, I can add the code for the camera, but the post would get long.
file:///data/user/0/com.example.tfg_red_social_v0/files/croppedImage.jpg
The file exists I've checked with the explorer
private val cropImage = registerForActivityResult(uCropContract){uri->
Glide.with(this).load(uri).circleCrop().into(binding.ivPerfil)
}
Please any help is welcome I'm adding extra code
My manifest
<manifest xmlns:android="http://schemas.android.com/apk/res/android"
xmlns:tools="http://schemas.android.com/tools">
<!-- PERMISOS PARA ACCEDER A INTERNET, CAMARA Y SD -->
<uses-permission android:name="android.permission.INTERNET" />
<!-- <uses-permission android:name="android.permission.CAMERA" /> -->
<uses-permission android:name="android.permission.WRITE_EXTERNAL_STORAGE" />
<uses-permission android:name="android.permission.READ_EXTERNAL_STORAGE" tools:ignore="ScopedStorage"/>
<uses-feature android:name="android.hardware.camera" android:required="true" />
this is my provider
<provider
android:authorities="com.example.tfg_red_social_v0.fileprovider"
android:name="androidx.core.content.FileProvider"
android:exported="false"
android:grantUriPermissions="true">
<meta-data
android:name="android.support.FILE_PROVIDER_PATHS"
android:resource="@xml/file_paths" />
</provider>
and my file_paths.xml
<?xml version="1.0" encoding="utf-8"?>
<paths>
<external-files-path
name="my_debug_images"
path="Pictures"/>
</paths>
And here how I use UCROP
private val uCropContract = object: ActivityResultContract<List<Uri>, Uri>(){
override fun createIntent(context: Context, input: List<Uri>): Intent {
val inputUri = input[0]
val outPut = input[1]
val uCrop = UCrop.of(inputUri,outPut)
.withAspectRatio(16f,9f)
return uCrop.getIntent(context)
}
override fun parseResult(resultCode: Int, intent: Intent?): Uri {
return UCrop.getOutput(intent!!)!!
}
}
private val cropImage = registerForActivityResult(uCropContract){uri->
imagenUri=uri
Glide.with(this).load(uri).into(binding.ivImagen)
}
private val getContent = registerForActivityResult(ActivityResultContracts.GetContent()){uri->
val inputUri = uri
val outputUri = File(requireContext().filesDir, "croppedImage.jpg").toUri()
val listUri = listOf<Uri>(inputUri!!,outputUri)
cropImage.launch(listUri)
}
}
And I start crop with
binding.ivImagen.setOnClickListener{
getContent.launch("image/*")
}
|
[
"Sorry for bothering you I fixed it. It was a piece of cake\nthe only thing you have to do is replace\n private val getContent = registerForActivityResult(ActivityResultContracts.GetContent()){uri->\n val inputUri = uri\n val outputUri = File(requireContext().filesDir, \"croppedImage.jpg\").toUri()\n val listUri = listOf<Uri>(inputUri!!,outputUri)\n cropImage.launch(listUri)\n\nwith\nprivate val recuperaImagenCamara = registerForActivityResult(ActivityResultContracts.TakePicture()){\n success ->\n if (success) {\n val inputUri = imagenUri\n val outputUri = File(requireContext().filesDir, \"croppedImage.jpg\").toUri()\n val listUri = listOf<Uri>(inputUri!!,outputUri)\n cropImage.launch(listUri)\n Glide.with(this@PublicacionFragment).load(imagenUri).circleCrop().into(binding.ivImagen)\n binding.ivImagen.setImageURI(imagenUri)\n } else\n Log.e(ContentValues.TAG,\"la URI no se guardo es: $imagenUri\")\n}\n\nI hope it helps to anyone\n"
] |
[
0
] |
[] |
[] |
[
"camera",
"contract",
"kotlin",
"ucrop"
] |
stackoverflow_0074661906_camera_contract_kotlin_ucrop.txt
|
Q:
Java Image BufferedImage losing aspect ratio of upright image
I am storing an Image in the filesystem like:
FileImageOutputStream fos = new FileImageOutputStream(newFile);
int len;
while ((len = zis.read(buffer)) > 0) {
fos.write(buffer, 0, len);
}
fos.close();
And all images are stored correctly in the filesystem. (Also with correct aspect ratio with width and height!)
However, later, I am loading my image like:
File imgFile ...
FileImageInputStream stream = new FileImageInputStream(imgFile);
BufferedImage srcImage = ImageIO.read(stream);
And I want to get the width and height for my images like:
int actualHeight = srcImage.getHeight();
int actualWidth = srcImage.getWidth();
This works totally fine for images in landscape format.
However, for images in portrait (upright) format, the width and height are always swapped, so that the width is the height of the original image. So, e.g., I have an image that is 200x500 (width x height) and the two integers above are actualHeight = 200 and actualWidth = 500.
Am I missing something?
SOLUTION:
Maybe the easiest way to achieve this is just to use Thumbnailator (https://github.com/coobird/thumbnailator):
BufferedImage srcImage = Thumbnails.of(new FileInputStream(imgFile)).scale(1).asBufferedImage();
reads the image in the correct orientation.
A:
It is likely that the image is being read in the incorrect orientation. When reading the image, you can try specifying the orientation of the image to ensure that it is read correctly. You can do this by using the ImageReader.setInput method and specifying the ImageReadParam.setSourceRegion parameter.
Here is an example:
// create a reader for the image file
ImageReader reader = ImageIO.getImageReadersByFormatName("jpg").next();
reader.setInput(new FileImageInputStream(imgFile));
// specify the region of the image to read
ImageReadParam param = reader.getDefaultReadParam();
param.setSourceRegion(new Rectangle(0, 0, actualWidth, actualHeight));
// read the image using the specified region
BufferedImage srcImage = reader.read(0, param);
// get the width and height of the image
int imageWidth = srcImage.getWidth();
int imageHeight = srcImage.getHeight();
// close the reader
reader.dispose();
This should ensure that the image is read in the correct orientation and the width and height values are correct.
|
Java Image BufferedImage losing aspect ratio of upright image
|
I am storing an Image in the filesystem like:
FileImageOutputStream fos = new FileImageOutputStream(newFile);
int len;
while ((len = zis.read(buffer)) > 0) {
fos.write(buffer, 0, len);
}
fos.close();
And all images are stored correctly in the filesystem. (Also with correct aspect ratio with width and height!)
However, later, I am loading my image like:
File imgFile ...
FileImageInputStream stream = new FileImageInputStream(imgFile);
BufferedImage srcImage = ImageIO.read(stream);
And I want to get the width and height for my images like:
int actualHeight = srcImage.getHeight();
int actualWidth = srcImage.getWidth();
This works totally fine for images in landscape format.
However, for images in portrait (upright) format, the width and height are always swapped, so that the width is the height of the original image. So, e.g., I have an image that is 200x500 (width x height) and the two integers above are actualHeight = 200 and actualWidth = 500.
Am I missing something?
SOLUTION:
Maybe the easiest way to achieve this is just to use Thumbnailator (https://github.com/coobird/thumbnailator):
BufferedImage srcImage = Thumbnails.of(new FileInputStream(imgFile)).scale(1).asBufferedImage();
reads the image in the correct orientation.
|
[
"It is likely that the image is being read in the incorrect orientation. When reading the image, you can try specifying the orientation of the image to ensure that it is read correctly. You can do this by using the ImageReader.setInput method and specifying the ImageReadParam.setSourceRegion parameter.\nHere is an example:\n// create a reader for the image file\nImageReader reader = ImageIO.getImageReadersByFormatName(\"jpg\").next();\nreader.setInput(new FileImageInputStream(imgFile));\n\n// specify the region of the image to read\nImageReadParam param = reader.getDefaultReadParam();\nparam.setSourceRegion(new Rectangle(0, 0, actualWidth, actualHeight));\n\n// read the image using the specified region\nBufferedImage srcImage = reader.read(0, param);\n\n// get the width and height of the image\nint imageWidth = srcImage.getWidth();\nint imageHeight = srcImage.getHeight();\n\n// close the reader\nreader.dispose();\n\nThis should ensure that the image is read in the correct orientation and the width and height values are correct.\n"
] |
[
0
] |
[] |
[] |
[
"bufferedimage",
"file",
"image",
"java"
] |
stackoverflow_0074665834_bufferedimage_file_image_java.txt
|
Q:
want to allow only the post owner to delete
If a hacker knows the REST API URL and another person's userId, I want to prevent the post from being deleted by sending that userId as a parameter.
I want to store the JWT token in the database and check it before writing. However, after searching, I found this is said to be inefficient because the token has to be looked up every time. Is there any other way? Short-lived JWTs also exist, but I can't use them either because I have to apply this to an app, not the web. (You could use them, but it's not good UX-wise.)
If there is another better way, please let me know
A:
A JWT encodes all the information which the server needs (in so-called claims, for example, the user id in the sub claim and the expiry time in the exp claim). And it contains a signature which the server can validate, and which cannot be forged unless someone has the private key.
The server validates and decodes the token with code like the following:
const jwt = require("jsonwebtoken");
const signingCertificate = `-----BEGIN CERTIFICATE-----
...
-----END CERTIFICATE-----`;
var claims = jwt.verify(token, signingCertificate);
if (claims.exp && claims.exp <= new Date().getTime() / 1000)
throw "expired";
var user = claims.sub;
All this happens on the server, without any token being looked up in a database. And you can then check whether the post to be deleted really belongs to user. (This check involves a database access, because you must look up the user to whom the post belongs. But deletion of a post anyway is a database access.)
You don't need a user parameter, because the user is always taken from the token. So even if A knows B's userId and postId, it can only send its own token:
POST delete?postId=475
Authorization: Bearer (A's token)
and with that it cannot delete B's post 475.
|
want to allow only the post owner to delete
|
If a hacker knows the REST API URL and another person's userId, I want to prevent the post from being deleted by sending that userId as a parameter.
I want to store the JWT token in the database and check it before writing. However, after searching, I found this is said to be inefficient because the token has to be looked up every time. Is there any other way? Short-lived JWTs also exist, but I can't use them either because I have to apply this to an app, not the web. (You could use them, but it's not good UX-wise.)
If there is another better way, please let me know
|
[
"A JWT encodes all the information which the server needs (in so-called claims, for example, the user id in the sub claim and the expiry time in the exp claim). And it contains a signature which the server can validate, and which cannot be forged unless someone has the private key.\nThe server validates and decodes the token with code like the following:\nconst jwt = require(\"jsonwebtoken\");\nconst signingCertificate = `-----BEGIN CERTIFICATE-----\n...\n-----END CERTIFICATE-----`;\nvar claims = jwt.verify(token, signingCertificate);\nif (claims.exp && claims.exp <= new Date().getTime() / 1000)\n throw \"expired\";\nvar user = claims.sub;\n\nAll this happens on the server, without any token being looked up in a database. And you can then check whether the post to be deleted really belongs to user. (This check involves a database access, because you must look up the user to whom the post belongs. But deletion of a post anyway is a database access.)\nYou don't need a user parameter, because the user is always taken from the token. So even if A knows B's userId and postId, it can only send its own token:\nPOST delete?postId=475\nAuthorization: Bearer (A's token)\n\nand with that it cannot delete B's post 475.\n"
] |
[
0
] |
[] |
[] |
[
"backend",
"node.js",
"spring"
] |
stackoverflow_0074665988_backend_node.js_spring.txt
|
Q:
The entity type 'List' requires a primary key to be defined. But it is defined
I'm new to the Entity Framework and I'm having problems with properly defining relationships.
I have a Student class and a Course class, and it's supposed to be a many-to-many relationship. A student can do one or more courses and a course can be done by one or more students. But when I try to run the command dotnet ef migrations add InitialCreate to create the initial migration, I get the error: The entity type 'List<int>' requires a primary key to be defined.
Here is the Student class
using System;
using System.ComponentModel.DataAnnotations;
using Microsoft.EntityFrameworkCore;
namespace Advisment.Models
{
public class Student
{
public int Id { get; set; }
public string Name { get; set; }
public int MajorId { get; set; }
public int AdvisorId { get; set; }
public List <int> CompletedCourses { get; set; }//List of courses done by student
public ICollection <Course> Courses { get; set; }//Reference the Course class
public Advisor Advisor { get; set; }
public Major Major { get; set; }
}
}
And here is the the Course class:
using System;
using System.ComponentModel.DataAnnotations;
namespace Advisment.Models
{
public class Course
{
[Key]
public int CourseId { get; set; }
public string CourseName { get; set; }
public int MajorId { get; set; }
public ICollection <Student> Students { get; set; }//Store student objects
public Major Major { get; set; }
}
}
From my understanding the ICollection(s) is used to reference the classes, but I'm not sure if I'm using it correctly.
A:
You should add a third class which acts as a bridge between students and courses, because the relationship between Student and Course is many-to-many.
Consider this class :
public class Enrollment
{
public int CourseId { get; set; }
public int StudentId { get; set; }
public Student Student { get; set; }
    public Course Course { get; set; }
public char Grade {get; set; }
}
Change your student class to be like this
public class Student
{
public int Id { get; set; }
public string Name { get; set; }
public int MajorId { get; set; }
public int AdvisorId { get; set; }
public ICollection <Enrollment> Enrollments { get; set; }//Reference the Enrollment class
public Advisor Advisor { get; set; }
public Major Major { get; set; }
}
and Course class to be like this :
public class Course
{
[Key]
public int CourseId { get; set; }
public string CourseName { get; set; }
public int MajorId { get; set; }
public ICollection <Enrollment> Enrollments{ get; set; }//Store student objects
public Major Major { get; set; }
}
In Fluent Api, you need to setup the relationship of the bridge Enrollments table like this :
modelBuilder.Entity<Enrollment>()
.HasKey(c => new { c.CourseId, c.StudentId});
modelBuilder.Entity<Enrollment>()
.HasOne(c => c.Student)
.WithMany(s => s.Enrollments)
.HasForeignKey(h => h.StudentId)
.IsRequired();
modelBuilder.Entity<Enrollment>()
.HasOne(c => c.Course)
        .WithMany(j => j.Enrollments)
.HasForeignKey(h => h.CourseId)
.IsRequired();
To retrieve all the courses that the student take, you can use this method
context.Students.Include(s => s.Enrollments).ThenInclude(e => e.Course).ToList();
A:
Hello. In order to create relationships between tables, you will need to add a foreign key, which is the primary key of the table you are trying to connect.
So you need to add that foreign key to the class you want to connect from, which you did with the Course list when you implemented the many-to-many relationship.
So now all you have to do is add the foreign key to the Student class, for example:
public int CourseId { get; set; }
Also add a primary key ([Key]) to your Student class.
|
The entity type 'List' requires a primary key to be defined. But it is defined
|
I'm new to the Entity Framework and I'm having problems with properly defining relationships.
I have a Student class and a Course class, and it's supposed to be a many-to-many relationship. A student can do one or more courses and a course can be done by one or more students. But when I try to run the command dotnet ef migrations add InitialCreate to create the initial migration, I get the error: The entity type 'List<int>' requires a primary key to be defined.
Here is the Student class
using System;
using System.ComponentModel.DataAnnotations;
using Microsoft.EntityFrameworkCore;
namespace Advisment.Models
{
public class Student
{
public int Id { get; set; }
public string Name { get; set; }
public int MajorId { get; set; }
public int AdvisorId { get; set; }
public List <int> CompletedCourses { get; set; }//List of courses done by student
public ICollection <Course> Courses { get; set; }//Reference the Course class
public Advisor Advisor { get; set; }
public Major Major { get; set; }
}
}
And here is the the Course class:
using System;
using System.ComponentModel.DataAnnotations;
namespace Advisment.Models
{
public class Course
{
[Key]
public int CourseId { get; set; }
public string CourseName { get; set; }
public int MajorId { get; set; }
public ICollection <Student> Students { get; set; }//Store student objects
public Major Major { get; set; }
}
}
From my understanding the ICollection(s) is used to reference the classes, but I'm not sure if I'm using it correctly.
|
[
"You should Add a third class which should be a bridge between student and courses. Because relationship between Student and Courses are many to many.\nConsider this class :\npublic class Enrollment\n{\n public int CourseId { get; set; }\n public int StudentId { get; set; }\n public Student Student { get; set; }\n public Course { get; set; }\n public char Grade {get; set; }\n}\n\nChange your student class to be like this\npublic class Student\n{\n public int Id { get; set; }\n public string Name { get; set; }\n\n public int MajorId { get; set; }\n\n public int AdvisorId { get; set; }\n\n public ICollection <Enrollment> Enrollments { get; set; }//Reference the Enrollment class\n\n public Advisor Advisor { get; set; }\n\n public Major Major { get; set; }\n \n}\n\nand Course class to be like this :\npublic class Course\n{\n [Key]\n public int CourseId { get; set; }\n public string CourseName { get; set; }\n\n public int MajorId { get; set; }\n \n public ICollection <Enrollment> Enrollments{ get; set; }//Store student objects\n\n public Major Major { get; set; }\n}\n\nIn Fluent Api, you need to setup the relationship of the bridge Enrollments table like this :\n modelBuilder.Entity<Enrollment>()\n .HasKey(c => new { c.CourseId, c.StudentId});\n\n modelBuilder.Entity<Enrollment>()\n .HasOne(c => c.Student)\n .WithMany(s => s.Enrollments)\n .HasForeignKey(h => h.StudentId)\n .IsRequired();\n\n modelBuilder.Entity<Enrollment>()\n .HasOne(c => c.Course)\n .WithMany(j => j.Courses)\n .HasForeignKey(h => h.CourseId)\n .IsRequired();\n\n\nTo retrieve all the courses that the student take, you can use this method\ncontext.Students.Include(s => s.Enrollments).ThenInclude(e => e.Course).ToList();\n\n",
"Hello In order to create relationships between tables you will need to add a\nForeign Key which is the primary key in the table you try to connect.\nSo you need to the table/class that you want to the Foreign Key from\nwhich you did in the Course List and implemented the many to many\nrelationships.\nso now all you have to do is to add the foreign key to students class\nfor example :\npublic int CourseId { get; set; }\nAlso add Primary Key [Key] to your student\n"
] |
[
1,
0
] |
[] |
[] |
[
"c#",
"entity_framework"
] |
stackoverflow_0074659046_c#_entity_framework.txt
|
Q:
adb devices hangs and after a while times out on WSL2
Running on Ubuntu 20.04 (WSL2):
➜ ~ adb --version
Android Debug Bridge version 1.0.41
Version 33.0.1-8253317
Installed as /home/eliya/dev/Android/platform-tools/adb
Running
➜ ~ adb devices
adb W 04-28 16:43:11 20145 20145 network.cpp:149] failed to connect to '172.23.160.1:5037': Connection timed out
* cannot start server on remote host
adb: failed to check server version: cannot connect to daemon at tcp:172.23.160.1:5037: failed to connect to '172.23.160.1:5037': Connection timed out
Running
➜ ~ adb start-server
adb W 04-28 16:47:17 21475 21475 network.cpp:149] failed to connect to '172.23.160.1:5037': Connection timed out
* cannot start server on remote host
error: cannot connect to daemon at tcp:172.23.160.1:5037: failed to connect to '172.23.160.1:5037': Connection timed out
But when I run it as root, it seems to work fine:
➜ ~ sudo adb devices
List of devices attached
XXXXXXXXXXXXXX device
How can I make adb to work without sudo permissions?
Update 1
running adb -H localhost ... seems to be ok. What am I missing here?
Update 2
Running in Windows:
PS C:\Users\coeli> adb start-server
and then in WSL2 (hangs for a while and then fails):
➜ ~ adb devices
List of devices attached
* cannot start server on remote host
error: cannot connect to daemon at tcp:172.23.160.1:5037: Connection timed out
A:
I think I may have found a fix for this. In my case, the issue was caused by not installing sdkmanager properly. I used openjdk11 instead of openjdk8, which is required by the sdkmanager to run properly. In any case, following this tutorial https://gist-github-com.translate.goog/georgealan/353a548814fe9b82a3a502926c7a42c6?_x_tr_sl=auto&_x_tr_tl=en&_x_tr_hl=en-US&_x_tr_pto=wapp#passo-4%C2%BA
should allow you to get it up and running without needing to use sudo commands. The article is for React Native, but you don't need to follow the whole thing.
A:
If you want WSL to be able to talk with the adb server running in Windows, you have to start the server in such a way that it will listen on all interfaces, not just the default one. This is done with -a, like so:
.\adb.exe -a nodaemon server start
You can also put a -P in there if you want to use something other than the default port of 5037, like this:
.\adb.exe -a -P 5037 nodaemon server start
If you still have problems, double-check your Windows Firewall. However, since this is all local interface stuff, that shouldn't be a factor in most installations.
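As a concrete setup on the WSL2 side, the Linux adb client can be pointed at that Windows-side server via the ADB_SERVER_SOCKET environment variable. This is a sketch under two assumptions: recent platform-tools support ADB_SERVER_SOCKET, and the default WSL2 network config puts the Windows host IP in /etc/resolv.conf as the nameserver — verify both on your machine:

```shell
# Windows (PowerShell) -- start the server listening on all interfaces:
#   .\adb.exe -a nodaemon server start

# WSL2 -- the Windows host, as seen from WSL2, is the nameserver in /etc/resolv.conf
WIN_HOST=$(awk '/^nameserver/ {print $2; exit}' /etc/resolv.conf)
export ADB_SERVER_SOCKET="tcp:$WIN_HOST:5037"

# Now the Linux adb client talks to the Windows server instead of spawning its own:
#   adb devices
echo "$ADB_SERVER_SOCKET"
```

With this exported (e.g. from your shell rc file), plain adb commands in WSL2 reach the Windows server without sudo.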
|
adb devices hangs and after a while times out on WSL2
|
Running on Ubuntu 20.04 (WSL2):
➜ ~ adb --version
Android Debug Bridge version 1.0.41
Version 33.0.1-8253317
Installed as /home/eliya/dev/Android/platform-tools/adb
Running
➜ ~ adb devices
adb W 04-28 16:43:11 20145 20145 network.cpp:149] failed to connect to '172.23.160.1:5037': Connection timed out
* cannot start server on remote host
adb: failed to check server version: cannot connect to daemon at tcp:172.23.160.1:5037: failed to connect to '172.23.160.1:5037': Connection timed out
Running
➜ ~ adb start-server
adb W 04-28 16:47:17 21475 21475 network.cpp:149] failed to connect to '172.23.160.1:5037': Connection timed out
* cannot start server on remote host
error: cannot connect to daemon at tcp:172.23.160.1:5037: failed to connect to '172.23.160.1:5037': Connection timed out
But when I run it as root, it seems to work fine:
➜ ~ sudo adb devices
List of devices attached
XXXXXXXXXXXXXX device
How can I make adb to work without sudo permissions?
Update 1
running adb -H localhost ... seems to be ok. What am I missing here?
Update 2
Running in Windows:
PS C:\Users\coeli> adb start-server
and then in WSL2 (hangs for a while and then fails):
➜ ~ adb devices
List of devices attached
* cannot start server on remote host
error: cannot connect to daemon at tcp:172.23.160.1:5037: Connection timed out
|
[
"I think I may have found a fix for this. In my case, the issue was caused by not installing sdkmanager properly. I used openjdk11 instead of openjdk8, which is required by the sdkmanager to run properly. In anycase, following this tutorial https://gist-github-com.translate.goog/georgealan/353a548814fe9b82a3a502926c7a42c6?_x_tr_sl=auto&_x_tr_tl=en&_x_tr_hl=en-US&_x_tr_pto=wapp#passo-4%C2%BA\nshould allow you to get it up and running without needing to use sudo commands. Article is for ReactNative but you dont need to follow the whole thing\n",
"If you want WSL to be able to talk with the adb server running in Windows, you have to start the server in such a way that it will listen on all interfaces, not just the default one. This is don't with -a, like so:\n.\\adb.exe -a nodaemon server start\n\nYou can also put a -P in there if you want to use something other than the default port of 5037, like this:\n.\\adb.exe -a -P 5037 nodaemon server start\n\nIf you still have problems, double-check your Windows Firewall. However, since this is all local interface stuff, that shouldn't be a factor in most installations.\n"
] |
[
0,
0
] |
[] |
[] |
[
"adb",
"android",
"windows_subsystem_for_linux",
"wsl_2"
] |
stackoverflow_0072044894_adb_android_windows_subsystem_for_linux_wsl_2.txt
|
Q:
VSCode hot reload for flutter
I've just started playing with Flutter in VSCode. I also installed the Dart Plugin.
Running the demo app I read in the terminal
Is this the only way to hot-reload the app? I mean I should always keep the terminal open and focus on it to type "r" in order to reload my views?
Isn't there a shortcut directly from VSCode?
A:
There's an extension for that. Called Dart Code and another one named Flutter Code
They will detect that your project is a Dart/Flutter project. And allows you to debug it + hot reload using f5.
A:
For 2022, this solution worked.
Steps
Open Settings.
Paste this text > dart.flutterHotReloadOnSave on the settings search box.
And change value from "Manual" to "Always"
A:
If you would like to hot reload your app with a keybinding better than Ctrl+Shift+F5, just change Debug: Restart to Ctrl+S, so whenever you trigger Ctrl+S the app will first save your changes according to workbench.action.files.save and afterwards restart the app (= hot reload; it is the green circle you see in the debug bar).
Keybindings for VS Code:
A:
VS Code: Debug -> Start Debugging, then make a change and try it. That's what you want.
A:
No idea why F5 does not hot reload for me :(
But, you can map the -
Flutter: Hot Reload
command to whatever key combo floats your boat via -
Preferences > Keyboard Shortcuts
as in this screenshot -
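For reference, such a remap can also be written directly in keybindings.json. This is a sketch assuming VS Code 1.77+ (for runCommands) and that flutter.hotReload is the command ID the extension registers — double-check both in your own Keyboard Shortcuts UI:

```json
// keybindings.json -- on Ctrl+S in a Dart file while debugging: save, then hot reload
[
  {
    "key": "ctrl+s",
    "command": "runCommands",
    "when": "editorLangId == dart && inDebugMode",
    "args": {
      "commands": ["workbench.action.files.save", "flutter.hotReload"]
    }
  }
]
```

Outside of a debug session the binding's when-clause does not match, so Ctrl+S keeps its normal save behavior.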
A:
Open the Debug sidebar in VS Code and use it.
Then when you save, it will hot reload and apply the changes you make.
That is what works for me.
A:
use the green reload button to hot reload the app in VS Code
A:
On Mac, select
Run without Debugging or Shift+F5.
You will see a toolbar at the top; select the electric (⚡) icon for Hot Reload.
After making some changes, save; the changes will be reflected automatically.
A:
Yes. Here is the Dart Code plugin for VS Code:
https://marketplace.visualstudio.com/items?itemName=Dart-Code.dart-code
And here is the official Flutter doc for VS Code:
https://flutter.io/get-started/editor/#vscode
A:
In VS Code, once the Dart + Flutter extensions are installed, there are 2 options to hot reload: 1) Use the key combo Command + Shift + F5. 2) Save the file by using the key combo Command + S.
A:
I have both extensions and VSCode is set to Hot Reload once a document is saved. The only time I can reload my app is by stopping and restarting my debugger
A:
I don't know why, but my VS Code hot reloading doesn't work. If you have this problem, you can use Ctrl + F5 to refresh the app.
I hope this works for other developers whose hot reloading doesn't work.
A:
You need to run Flutter app from VS Code's built-in debugger not VS Code's terminal.
A:
Select ▷ then "Run without Debugging" -> You can now use ⌘S on Mac to ⚡ Hot Reload ⚡
A:
===== Aug 2022 UPDATE =====
v3.42 and above: You can enable hot reload on autosave in the latest version by setting Flutter Hot Reload On Save to allIfDirty in your VSCode settings.
===== Old Versions (Early 2022 and below) =====
v3.41 and below: You can enable hot reload on autosave in the latest version by setting Flutter Hot Reload On Save to always in your VSCode settings.
v3.19 only: They disabled hot reload on autosave completely.
v3.18 and below: The extension hot reloads automatically when auto saving.
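For reference, the corresponding settings.json entry would look like this (the key name is taken from the answers above; pick the value matching your extension version, and note that VS Code's settings.json accepts JSONC comments):

```json
{
  // v3.42+: reload all dirty files on save; v3.41 and below used "always"
  "dart.flutterHotReloadOnSave": "allIfDirty"
}
```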
A:
You may encounter problems with Hot Reload due to VS Code issues like this, which is relevant while writing this answer.
The most convincing way I see to do it without downgrading, updating to Insiders builds and/or losing dev speed is to use the green restart/reload button in the VS Code run widget or the Ctrl+Shift+F5 shortcut, but this option may be unacceptable for projects with trickier navigation and a few different views.
A:
if flutter and dart extensions already installed on vscode
Set from manual to All
|
VSCode hot reload for flutter
|
I've just started playing with Flutter in VSCode. I also installed the Dart Plugin.
Running the demo app I read in the terminal
Is this the only way to hot-reload the app? I mean I should always keep the terminal open and focus on it to type "r" in order to reload my views?
Isn't there a shortcut directly from VSCode?
|
[
"There's an extension for that. Called Dart Code and another one named Flutter Code\nThey will detect that your project is a Dart/Flutter project. And allows you to debug it + hot reload using f5.\n",
"For 2022, this solution worked.\nSteps\n\nOpen Settings.\nPaste this text > dart.flutterHotReloadOnSave on the settings search box.\nAnd change value from \"Manual\" to \"Always\"\n\n\n",
"If you like to hot reload your app with a keybinding better than Ctrl+Shift+F5, just change the Debug: Restart to Ctrl+S, so whenever you trigger the the Ctrl+S the app will first save your changes according to the workbench.action.files.save and afterwards restart the app (=hot reload, it is the green circle you see in the debugbar).\nKeybindings for VS Code:\n\n",
"VSCode debug -> start debuging, make a change and try, That's what you want.\n",
"No idea why F5 does not hot reload for me :(\nBut, you can map the -\nFlutter: Hot Reload\n\ncommand to whatever key combo floats your boat via -\nPreferences > Keyboard ShortCuts\n\nas in this screenshot -\n\n",
"open the Debug sidebar from VSCode and use it\n\nthen when you save, it will hot reload and apply the changes you make\nthat is what working with me\n",
"use the green reload button to hot reload the app in VS Code\n\n",
"on Mac select\n\nRun without Debugging or Shift+f5\n\nyou will see this at top select the electric icon for Hot Reload.\n\nAfter doing some changes save it. It will auto reflect changes.\n",
"yes Here is the plugin Dart Code for VS CODE \nhttps://marketplace.visualstudio.com/items?itemName=Dart-Code.dart-code\nhere is official doc for VsCode flutter \nhttps://flutter.io/get-started/editor/#vscode\n",
"In VS once installed Dart + Flutter extension, 2 options to hot reload 1) Use combo keys Command + Shift + F5. 2) Save the file by use combo keys Command + S \n",
"I have both extensions and VSCode is set to Hot Reload once a document is saved. The only time I can reload my app is by stopping and restarting my debugger\n",
"i dont know why but my vscode hot reloading not work if you have this problem you can use ctrl + f5 to refresh app.\nhope to work for developers that hot reloading not work for them.\n",
"You need to run Flutter app from VS Code's built-in debugger not VS Code's terminal.\n",
"\nSelect ▷ then \"Run without Debugging\" -> You can now use ⌘S on Mac to ⚡ Hot Reload ⚡\n",
"===== Aug 2022 UPDATE =====\nv3.42 and above: You can enable hot reload on autosave in the latest version by setting Flutter Hot Reload On Save to allIfDirty in your VSCode settings.\n===== Old Versions (Early 2022 and below) =====\nv3.41 and below: You can enable hot reload on autosave in the latest version by setting Flutter Hot Reload On Save to always in your VSCode settings.\nv3.19 only: They disabled hot reload on autosave completely.\nv3.18 and below: The extension hot reloads automatically when auto saving.\n",
"You may encounter problems with Hot Reload due to VS Code issues like this, which is relevant while writing this answer.\nThe most convincing way I see to do it without downgrading, updating to Insiders builds and/or loosing dev speed is to use the green restart/reload button in the VS Code run widget or Ctrl+Shift+F5 shortcut, but this option may be inacceptable for projects with trickier navigation and few different views.\n",
"\nif flutter and dart extensions already installed on vscode\nSet from manual to All\n"
] |
[
43,
33,
16,
6,
5,
5,
2,
2,
1,
1,
1,
1,
1,
1,
1,
0,
0
] |
[] |
[] |
[
"flutter",
"visual_studio_code"
] |
stackoverflow_0049210769_flutter_visual_studio_code.txt
|
Q:
Inconsistent LLC-loads value with perf stat
I'm trying to use perf stat to fetch hardware counter information for a benchmark on Intel's Xeon processor (based on Skylake). When I provide the -e LLC-loads -d -d -d flag, perf stat prints out LLC-loads twice - one due to -e LLC-loads and the other due to detailed flag turned on. However, the results are inconsistent:
$ perf stat -e LLC-loads,LLC-stores,L1-dcache-loads,L1-dcache-stores -d -d -d <my benchmark executable>
Performance counter stats for '<my benchmark executable>':
5145246847 LLC-loads (30.78%)
8167130238 LLC-stores (30.80%)
198057619358 L1-dcache-loads (30.80%)
83142567530 L1-dcache-stores (30.80%)
197792116698 L1-dcache-loads (30.79%)
27391515211 L1-dcache-load-misses # 13.84% of all L1-dcache hits (30.78%)
5114059688 LLC-loads (30.78%)
3025020254 LLC-load-misses # 58.97% of all LL-cache hits (30.76%)
<not supported> L1-icache-loads
58697135 L1-icache-load-misses (30.75%)
198322967573 dTLB-loads (30.74%)
209105723 dTLB-load-misses # 0.11% of all dTLB cache hits (30.72%)
2639992 iTLB-loads (30.74%)
1368656 iTLB-load-misses # 51.84% of all iTLB cache hits (30.76%)
<not supported> L1-dcache-prefetches
<not supported> L1-dcache-prefetch-misses
25.301480157 seconds time elapsed
585.222699000 seconds user
1.070800000 seconds sys
As can be seen in the output, there are two LLC-loads in the output with different values. What am I getting wrong?
I've tried multiple different benchmarks assuming that it could be benchmark specific but this behavior is observed everywhere.
A:
Note the multiplexing because you specified so many events: they were sampled for (30.78%) of the total time, with the number extrapolated from that. Skylake only has 4 programmable counters per logical core that can be counting different hardware events at once.
Your program isn't 100% uniform with time, and there's sampling / extrapolation noise, so the numbers are close but differ by a couple %. (The multiplexing code didn't combine an event specified twice, instead it just put two instances of it into the rotation.)
If you just counted two instances of the event without many other events, you'd expect exactly equal counts since they'd both be active at the same time on different HW counters. (Unless the first counter would count any events after being programmed, while the kernel was still programming the second. --all-user would avoid that, telling the HW to count only when the logical core was in user-space.) e.g.
$ perf stat -e LLC-loads,LLC-loads cmp /dev/zero /dev/full
^Ccmp: Interrupt
Performance counter stats for 'cmp /dev/zero /dev/full':
31,425 LLC-loads
31,425 LLC-loads
2.748813842 seconds time elapsed
1.113722000 seconds user
1.633880000 seconds sys
(Small number of counts, I guess cmp uses buffers small enough to fit in L3 cache. I used two different files that would read as all-zeros so it couldn't just detect they were identical.)
Related:
Perf tool stat output: multiplex and scaling of "cycles" - instructions:D and cycles:D will tell perf to always count those; there are dedicated non-programmable counters for those events on Intel CPUs, but the multiplexing code doesn't know that. You could do this with other events, but that would take away slots from events where you didn't specify :D.
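The extrapolation step described above can be sketched numerically (a sketch only; the raw counts and times here are hypothetical, chosen to mirror the 30.78% figure in the question):

```python
# How perf scales a multiplexed count (assumption: simple linear scaling,
# which is what the "(30.78%)" annotations in the output imply).
def extrapolate(raw_count, time_enabled, time_running):
    """Scale a count collected only while the event was scheduled on a
    hardware counter (time_running) up to the whole run (time_enabled)."""
    return raw_count * time_enabled / time_running

# Two instances of the same event get scheduled in *different* time slices,
# so their raw counts (hypothetical numbers here) differ slightly, and the
# extrapolated totals can differ by a couple of percent:
print(extrapolate(1_583_000_000, 25.3, 25.3 * 0.3078))
print(extrapolate(1_574_000_000, 25.3, 25.3 * 0.3078))
```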
|
Inconsistent LLC-loads value with perf stat
|
I'm trying to use perf stat to fetch hardware counter information for a benchmark on Intel's Xeon processor (based on Skylake). When I provide the -e LLC-loads -d -d -d flag, perf stat prints out LLC-loads twice - one due to -e LLC-loads and the other due to detailed flag turned on. However, the results are inconsistent:
$ perf stat -e LLC-loads,LLC-stores,L1-dcache-loads,L1-dcache-stores -d -d -d <my benchmark executable>
Performance counter stats for '<my benchmark executable>':
5145246847 LLC-loads (30.78%)
8167130238 LLC-stores (30.80%)
198057619358 L1-dcache-loads (30.80%)
83142567530 L1-dcache-stores (30.80%)
197792116698 L1-dcache-loads (30.79%)
27391515211 L1-dcache-load-misses # 13.84% of all L1-dcache hits (30.78%)
5114059688 LLC-loads (30.78%)
3025020254 LLC-load-misses # 58.97% of all LL-cache hits (30.76%)
<not supported> L1-icache-loads
58697135 L1-icache-load-misses (30.75%)
198322967573 dTLB-loads (30.74%)
209105723 dTLB-load-misses # 0.11% of all dTLB cache hits (30.72%)
2639992 iTLB-loads (30.74%)
1368656 iTLB-load-misses # 51.84% of all iTLB cache hits (30.76%)
<not supported> L1-dcache-prefetches
<not supported> L1-dcache-prefetch-misses
25.301480157 seconds time elapsed
585.222699000 seconds user
1.070800000 seconds sys
As can be seen in the output, there are two LLC-loads in the output with different values. What am I getting wrong?
I've tried multiple different benchmarks assuming that it could be benchmark specific but this behavior is observed everywhere.
|
[
"Note the multiplexing because you specified so many events: they were sampled for (30.78%) of the total time, with the number extrapolated from that. Skylake only has 4 programmable counters per logical core that can be counting different hardware events at once.\nYour program isn't 100% uniform with time, and there's sampling / extrapolation noise, so the numbers are close but differ by a couple %. (The multiplexing code didn't combine an event specified twice, instead it just put two instances of it into the rotation.)\nIf you just counted two instances of the event without many other events, you'd expect exactly equal counts since they'd both be active at the same time on different HW counters. (Unless the first counter would count any events after being programmed, while the kernel was still programming the second. --all-user would avoid that, telling the HW to count only when the logical core was in user-space.) e.g.\n$ perf stat -e LLC-loads,LLC-loads cmp /dev/zero /dev/full\n^Ccmp: Interrupt\n\n Performance counter stats for 'cmp /dev/zero /dev/full':\n\n 31,425 LLC-loads \n 31,425 LLC-loads \n\n 2.748813842 seconds time elapsed\n\n 1.113722000 seconds user\n 1.633880000 seconds sys\n\n(Small number of counts, I guess cmp uses buffers small enough to fit in L3 cache. I used two different files that would read as all-zeros so it couldn't just detect they were identical.)\nRelated:\n\nPerf tool stat output: multiplex and scaling of \"cycles\" - instructions:D and cycles:D will tell perf to always count those; there are dedicated non-programmable counters for those events on Intel CPUs, but the multiplexing code doesn't know that. You could do this with other events, but that would take away slots from events where you didn't specify :D.\n\n"
] |
[
0
] |
[] |
[] |
[
"perf",
"profiling"
] |
stackoverflow_0074658426_perf_profiling.txt
|
Q:
Will anyone help me with this company specific question :
There are two types of liquid: type 1 and type 2. Initially, we have n ml of each type of liquid. There are four kinds of operations:
Serve 25 ml of liquid 1 and 75 ml of liquid 2.
Serve 75 ml of liquid 1 and 25 ml of liquid 2.
Serve 100 ml of liquid 1 and 0 ml of liquid 2, and
Serve 50 ml of liquid 1 and 50 ml of liquid 2.
When we serve some liquid, we give it to someone, and we no longer have it. Each turn, we will choose from the four operations with an equal probability 0.25. If the remaining volume of liquid is not enough to complete the operation, we will serve as much as possible. We stop once we no longer have some quantity of both types of liquid.
Note that we do not have an operation where all 100 ml's of liquid 2 are used first.
Return the probability that liquid 1 will be empty first, plus half the probability that 1 and 2 become empty at the same time. Answers within 10^-5 of the actual answer will be accepted.
Input :
50
Output :
0.62500
Explanation:
If we choose the 2nd and 3rd operations, 1 will become empty first.
For the fourth operation, 1 and 2 will become empty at the same time.
For the first operation, 2 will become empty first.
So, the total probability of 1 becoming empty first plus half the probability that 1 and 2 become empty at the same time is 0.25*(1 + 1 + 0.5 + 0) = 0.625.
This is a company-specific coding question. Could anyone kindly help me solve this question using Python? It will be really helpful.
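This question has no accepted answer below, but the process it describes can be sketched as a memoized recursion (a sketch under my own assumptions; the function name is mine). Since every operation serves multiples of 25 ml, we can work in 25 ml units, and "serve as much as possible" is captured by letting the remaining amounts go to zero or below:

```python
import math
from functools import lru_cache

def liquid1_empty_first(n: int) -> float:
    # Work in 25 ml units; round up so a partial serving still counts.
    m = math.ceil(n / 25)
    # The four operations in those units: (liquid 1, liquid 2) servings.
    ops = [(1, 3), (3, 1), (4, 0), (2, 2)]

    @lru_cache(maxsize=None)
    def f(a: int, b: int) -> float:
        if a <= 0 and b <= 0:
            return 0.5   # both empty at the same time: count half
        if a <= 0:
            return 1.0   # liquid 1 empty first
        if b <= 0:
            return 0.0   # liquid 2 empty first
        return 0.25 * sum(f(a - da, b - db) for da, db in ops)

    return f(m, m)

print(liquid1_empty_first(50))  # 0.625, matching the example
```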
|
Will anyone help me with this company specific question :
|
There are two types of liquid: type 1 and type 2. Initially, we have n ml of each type of liquid. There are four kinds of operations:
Serve 25 ml of liquid 1 and 75 ml of liquid 2.
Serve 75 ml of liquid 1 and 25 ml of liquid 2.
Serve 100 ml of liquid 1 and 0 ml of liquid 2, and
Serve 50 ml of liquid 1 and 50 ml of liquid 2.
When we serve some liquid, we give it to someone, and we no longer have it. Each turn, we will choose from the four operations with an equal probability 0.25. If the remaining volume of liquid is not enough to complete the operation, we will serve as much as possible. We stop once we no longer have some quantity of both types of liquid.
Note that we do not have an operation where all 100 ml's of liquid 2 are used first.
Return the probability that liquid 1 will be empty first, plus half the probability that 1 and 2 become empty at the same time. Answers within 10^-5 of the actual answer will be accepted.
Input :
50
Output :
0.62500
Explanation:
If we choose the 2nd and 3rd operations, 1 will become empty first.
For the fourth operation, 1 and 2 will become empty at the same time.
For the first operation, 2 will become empty first.
So, the total probability of 1 becoming empty first plus half the probability that 1 and 2 become empty at the same time is 0.25*(1 + 1 + 0.5 + 0) = 0.625.
This is a company-specific coding question. Could anyone kindly help me solve this question using Python? It will be really helpful.
|
[] |
[] |
[
"Here is a possible solution in Python:\ndef probability(n: int) -> float:\n # if there is no liquid, return 0\n if n == 0:\n return 0\n \n # if there is only 1 type of liquid, return 1\n if n == 50:\n return 1\n \n # calculate the probability of each operation\n p1 = 0.25 * probability(n - 25) # serve 25 ml of liquid 1\n p2 = 0.25 * probability(n - 25) # serve 75 ml of liquid 1\n p3 = 0.25 * probability(n - 50) # serve 100 ml of liquid 1\n p4 = 0.25 * probability(n - 50) # serve 50 ml of liquid 1\n \n # return the total probability\n return p1 + p2 + p3 + p4\n\n# test the function\nprint(probability(50)) # should return 0.625\n\nThis solution uses a recursive approach to calculate the probability of each operation, until there is no more liquid left. The base cases are when there is no more liquid (return 0) or when there is only 1 type of liquid (return 1). The probability of each operation is calculated by calling the function again with the updated amount of liquid, and then the probabilities are added together to get the total probability.\n"
] |
[
-1
] |
[
"python"
] |
stackoverflow_0074666121_python.txt
|
Q:
SkiaSharp draw reversible shapes
Is there a way to draw reversible shapes with SkiaSharp?
With reversible I mean outColor = srcColor XOR dstColor, so when you draw over same color again, the original color is restored. Like in WinForms ControlPaint.DrawReversibleFrame or (old) Windows FocusRects.
I'm using SkiaSharp 2.88.0
A:
You can use Exclusion blend mode when drawing, for example:
// draw some random image on canvas
var bitmap = SKBitmap.Decode(@"random.jpg");
var info = new SKImageInfo(256, 256);
using var surf = SKSurface.Create(info);
var canvas = surf.Canvas;
canvas.DrawBitmap(bitmap, info.Rect);
// initialize paint with Exclusion blend mode
var paint = new SKPaint {
Color = SKColors.Yellow,
Style = SKPaintStyle.Fill,
BlendMode = SKBlendMode.Exclusion
};
// draw overlapping rectangles using the paint
canvas.DrawRect(10, 10, 50, 50, paint);
canvas.DrawRect(25, 25, 50, 50, paint);
Result image:
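A quick numeric check of why Exclusion behaves like the old XOR-style reversible drawing (a Python sketch of the standard per-channel Exclusion formula; note it is exactly self-inverting only when the source channel is 0 or 255, which holds for colors like SKColors.Yellow = 255, 255, 0):

```python
def exclusion(src: int, dst: int) -> int:
    # Exclusion blend per 8-bit channel: out = src + dst - 2*src*dst/255.
    return round(src + dst - 2 * src * dst / 255)

# For source channels that are exactly 0 or 255, exclusion reduces to
# identity (src=0) or inversion (src=255), so drawing twice restores dst:
for s in (0, 255):
    for d in range(256):
        assert exclusion(s, exclusion(s, d)) == d
print("reversible for 0/255 source channels")
```

For intermediate source channel values the round-trip is only approximate, so pick overlay colors whose channels are pure 0 or 255 if exact restoration matters.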
|
SkiaSharp draw reversible shapes
|
Is there a way to draw reversible shapes with SkiaSharp?
With reversible I mean outColor = srcColor XOR dstColor, so when you draw over same color again, the original color is restored. Like in WinForms ControlPaint.DrawReversibleFrame or (old) Windows FocusRects.
I'm using SkiaSharp 2.88.0
|
[
"You can use Exclusion blend mode when drawing, for example:\n// draw some random image on canvas\nvar bitmap = SKBitmap.Decode(@\"random.jpg\");\nvar info = new SKImageInfo(256, 256);\nusing var surf = SKSurface.Create(info);\nvar canvas = surf.Canvas;\ncanvas.DrawBitmap(bitmap, info.Rect);\n\n// initialize paint with Exclusion blend mode\nvar paint = new SKPaint {\n Color = SKColors.Yellow,\n Style = SKPaintStyle.Fill,\n BlendMode = SKBlendMode.Exclusion\n};\n\n// draw overlapping rectangles using the paint\ncanvas.DrawRect(10, 10, 50, 50, paint);\ncanvas.DrawRect(25, 25, 50, 50, paint);\n\nResult image:\n\n"
] |
[
1
] |
[] |
[] |
[
"skiasharp"
] |
stackoverflow_0072522380_skiasharp.txt
|
Q:
Regex - match everything without whitespace
I am now using these Regex expressions:
([\x20-\x7E]+) - match everything with space
([\x21-\x7E]+) - match everything without space
But I need more performance, and in a benchmark I see that (.*) is 2x faster than ([\x20-\x7E]+). So I replaced it.
But how do I write ([\x21-\x7E]+) in the style of (.*)? Or, in other words, how do I modify (.*) to match everything except whitespace characters?
Thanks!
A:
To match everything except whitespace use:
[^\s]+
or
\S+
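A quick check of both forms (Python here just to illustrate; the pattern is the same in most regex engines):

```python
import re

text = "match  everything\twithout   whitespace"
# \S+ matches maximal runs of non-whitespace characters (the complement of \s);
# [^\s]+ is the same character class written as an explicit negation.
print(re.findall(r"\S+", text))     # ['match', 'everything', 'without', 'whitespace']
print(re.findall(r"[^\s]+", text))  # same result
```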
A:
I was forced to make the following
"^((?!\s).)*$"
Or if you want it for multiple symbols
"^((?![\s]).)*$"
It will be work in Notepad++ too
|
Regex - match everything without whitespace
|
I am now using these Regex expressions:
([\x20-\x7E]+) - match everything with space
([\x21-\x7E]+) - match everything without space
But I need more performance, and in a benchmark I see that (.*) is 2x faster than ([\x20-\x7E]+). So I replaced it.
But how do I write ([\x21-\x7E]+) in the style of (.*)? Or, in other words, how do I modify (.*) to match everything except whitespace characters?
Thanks!
|
[
"To match everything except whitespace use:\n[^\\s]+\n\nor\n\\S+\n\n",
"I was forced to make the following\n\"^((?!\\s).)*$\"\n\nOr if you want it for multiple symbols\n\"^((?![\\s]).)*$\"\n\nIt will be work in Notepad++ too\n"
] |
[
46,
0
] |
[] |
[] |
[
"performance",
"regex"
] |
stackoverflow_0008991178_performance_regex.txt
|
Q:
Renderflex Overflow on the bottom
I want to create a two-column card layout and I'm using a reusable widget I created. But I'm having RenderFlex overflow issues. Below are the errors. I already tried making the cards smaller; that worked, but not the way I want them to look.
════════ Exception caught by rendering library ═════════════════════════════════
A RenderFlex overflowed by 99853 pixels on the right.
The relevant error-causing widget was
Row
lib\…\student\subjects.dart:158
════════════════════════════════════════════════════════════════════════════════
subjects.dart
SingleChildScrollView(
child: Container(
padding: const EdgeInsets.symmetric(horizontal: 32),
child: Column(
children : [
// A Row for the top
Row(children: [
SubjectCard(link: '', source: '', subjectNo: 'SUBJECT 1'),
const SizedBox(width: 5,),
SubjectCard(link: '', source: '', subjectNo: 'SUBJECT 1')
]
),
const SizedBox(height: 5,),
Row(children: [
SubjectCard(link: '', source: '', subjectNo: 'SUBJECT 2'),
const SizedBox(width: 5,),
SubjectCard(link: '', source: '', subjectNo: 'SUBJECT 2')
]
),
],
),
),
)
card
return SizedBox(
height: 140,
width: width * 0.4,
child: Card(
clipBehavior: Clip.antiAlias,
shape: RoundedRectangleBorder(
borderRadius: BorderRadius.circular(12)
),
child: Stack(
alignment: Alignment.topLeft,
children: [
// Ink.image(
// // image: NetworkImage(link),
// image: AssetImage(widget.link),
// height: 200,
// fit: BoxFit.cover,
// //colorFilter: ColorFilters.greyscale,
// child: InkWell(
// onTap: () => Navigator.of(context).pushNamed(
// widget.source,
// arguments: 'Text from homepage',
// ),
// ),
// ),
Padding(
padding:const EdgeInsets.only(top: 0),
child: Container(
color: Colors.white60,
height: 50,
width: double.infinity,
),
),
Padding(
padding: const EdgeInsets.only(left: 10, top: 5, bottom: 0),
child: Text(subjectName,
style: GoogleFonts.smoochSans(
textStyle: const TextStyle(color: Colors.black,fontWeight: FontWeight.bold, fontSize: 30),
)
),
),
Padding(
padding: const EdgeInsets.only(left: 10, top: 28, bottom: 0),
child: Text(profesor,
style: GoogleFonts.smoochSans(
textStyle: const TextStyle(color: Colors.black,fontWeight: FontWeight.bold, fontSize: 18),
)
),
),
Padding(
padding: const EdgeInsets.only(left: 10, top: 43, bottom: 0),
child: Text('$start -',
style: GoogleFonts.smoochSans(
textStyle: const TextStyle(color: Colors.black,fontWeight: FontWeight.bold, fontSize: 18),
)
),
),
Padding(
padding: const EdgeInsets.only(left: 10, top: 56, bottom: 0),
child: Text('$end',
style: GoogleFonts.smoochSans(
textStyle: const TextStyle(color: Colors.black,fontWeight: FontWeight.bold, fontSize: 18),
)
),
),
],
),
),
);
The edge of the RenderFlex that is overflowing has been marked in the rendering with a yellow and black striped pattern. This is usually caused by the contents being too big for the RenderFlex.
Consider applying a flex factor (e.g. using an Expanded widget) to force the children of the RenderFlex to fit within the available space instead of being sized to their natural size.
This is considered an error condition because it indicates that there is content that cannot be seen. If the content is legitimately bigger than the available space, consider clipping it with a ClipRect widget before putting it in the flex, or using a scrollable container rather than a Flex, like a ListView.
The specific RenderFlex in question is: RenderFlex#8390f OVERFLOWING
════════════════════════════════════════════════════════════════════════════════
Edit
here is my new code. thanks to your suggestions
Expanded(
flex: 1,
child: SingleChildScrollView(
child: SizedBox(
// height: height,
width: MediaQuery.of(context).size.width,
child: SizedBox(
child: Column(
children : [
// A Row for the top
Row(
children: const [
SubjectCard(link: "assets/images/subject.jpg", source: '', subjectNo: 'SUBJECT 1'),
SizedBox(width: 10,),
SubjectCard(link: "assets/images/subject.jpg", source: '', subjectNo: 'SUBJECT 1')
]
),
const SizedBox(height: 10,),
Row(
children: const [
SubjectCard(link: "assets/images/subject.jpg", source: '', subjectNo: 'SUBJECT 1'),
SizedBox(width: 10,),
SubjectCard(link: "assets/images/subject.jpg", source: '', subjectNo: 'SUBJECT 1')
]
),
const SizedBox(height: 10,),
Row(children: const [
SubjectCard(link: "assets/images/Student.JPG", source: '', subjectNo: 'SUBJECT 2'),
SizedBox(width: 10,),
SubjectCard(link: "assets/images/subject.jpg", source: '', subjectNo: 'SUBJECT 1')
]
),
const SizedBox(height: 10,),
Row(children: const [
SubjectCard(link: "assets/images/subject.jpg", source: '', subjectNo: 'SUBJECT 1'),
SizedBox(width: 10,),
SubjectCard(link: "assets/images/Student.JPG", source: '', subjectNo: 'SUBJECT 2')
]
),const SizedBox(height: 10,),
Row(children: const [
SubjectCard(link: "assets/images/subject.jpg", source: '', subjectNo: 'SUBJECT 1'),
SizedBox(width: 10,),
SubjectCard(link: "assets/images/Student.JPG", source: '', subjectNo: 'SUBJECT 2')
]
),const SizedBox(height: 10,),
Row(children: const [
SubjectCard(link: "assets/images/subject.jpg", source: '', subjectNo: 'SUBJECT 1'),
SizedBox(width: 10,),
SubjectCard(link: "assets/images/Student.JPG", source: '', subjectNo: 'SUBJECT 2')
]
),
],
),
),
),
),
)
A:
the error says the widget that overflows is here: lib\…\student\subjects.dart:158
you might check what widget is there. I believe there is no widget overflowing 99853 pixels in the attached code.
You might also consider using the GridView class. it's exactly what you are trying to implement here.
https://api.flutter.dev/flutter/widgets/GridView-class.html
A:
You have a Column inside a SingleChildScrollView, meaning the Column will try to expand into an item with infinite height. It'll keep rendering beneath your visible area.
Try mainAxisSize: MainAxisSize.min or wrapping the Column in a Flexible, something that will constrain the height of the Column.
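A minimal sketch of that suggestion applied to the layout from the question (SubjectCard and its arguments come from the question; this is untested):

```dart
SingleChildScrollView(
  child: Column(
    mainAxisSize: MainAxisSize.min, // let the Column shrink-wrap its children
    children: [
      Row(
        children: [
          // Expanded gives each card a bounded share of the Row's width
          // instead of its natural (possibly overflowing) size.
          Expanded(child: SubjectCard(link: '', source: '', subjectNo: 'SUBJECT 1')),
          const SizedBox(width: 5),
          Expanded(child: SubjectCard(link: '', source: '', subjectNo: 'SUBJECT 1')),
        ],
      ),
    ],
  ),
)
```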
|
Renderflex Overflow on the bottom
|
I want to create a two-column card layout and I'm using a reusable widget I created. But I'm having RenderFlex overflow issues. Below are the errors. I already tried making the cards smaller; that worked, but not the way I want them to look.
════════ Exception caught by rendering library ═════════════════════════════════
A RenderFlex overflowed by 99853 pixels on the right.
The relevant error-causing widget was
Row
lib\…\student\subjects.dart:158
════════════════════════════════════════════════════════════════════════════════
subjects.dart
SingleChildScrollView(
child: Container(
padding: const EdgeInsets.symmetric(horizontal: 32),
child: Column(
children : [
// A Row for the top
Row(children: [
SubjectCard(link: '', source: '', subjectNo: 'SUBJECT 1'),
const SizedBox(width: 5,),
SubjectCard(link: '', source: '', subjectNo: 'SUBJECT 1')
]
),
const SizedBox(height: 5,),
Row(children: [
SubjectCard(link: '', source: '', subjectNo: 'SUBJECT 2'),
const SizedBox(width: 5,),
SubjectCard(link: '', source: '', subjectNo: 'SUBJECT 2')
]
),
],
),
),
)
card
return SizedBox(
height: 140,
width: width * 0.4,
child: Card(
clipBehavior: Clip.antiAlias,
shape: RoundedRectangleBorder(
borderRadius: BorderRadius.circular(12)
),
child: Stack(
alignment: Alignment.topLeft,
children: [
// Ink.image(
// // image: NetworkImage(link),
// image: AssetImage(widget.link),
// height: 200,
// fit: BoxFit.cover,
// //colorFilter: ColorFilters.greyscale,
// child: InkWell(
// onTap: () => Navigator.of(context).pushNamed(
// widget.source,
// arguments: 'Text from homepage',
// ),
// ),
// ),
Padding(
padding:const EdgeInsets.only(top: 0),
child: Container(
color: Colors.white60,
height: 50,
width: double.infinity,
),
),
Padding(
padding: const EdgeInsets.only(left: 10, top: 5, bottom: 0),
child: Text(subjectName,
style: GoogleFonts.smoochSans(
textStyle: const TextStyle(color: Colors.black,fontWeight: FontWeight.bold, fontSize: 30),
)
),
),
Padding(
padding: const EdgeInsets.only(left: 10, top: 28, bottom: 0),
child: Text(profesor,
style: GoogleFonts.smoochSans(
textStyle: const TextStyle(color: Colors.black,fontWeight: FontWeight.bold, fontSize: 18),
)
),
),
Padding(
padding: const EdgeInsets.only(left: 10, top: 43, bottom: 0),
child: Text('$start -',
style: GoogleFonts.smoochSans(
textStyle: const TextStyle(color: Colors.black,fontWeight: FontWeight.bold, fontSize: 18),
)
),
),
Padding(
padding: const EdgeInsets.only(left: 10, top: 56, bottom: 0),
child: Text('$end',
style: GoogleFonts.smoochSans(
textStyle: const TextStyle(color: Colors.black,fontWeight: FontWeight.bold, fontSize: 18),
)
),
),
],
),
),
);
The edge of the RenderFlex that is overflowing has been marked in the rendering with a yellow and black striped pattern. This is usually caused by the contents being too big for the RenderFlex.
Consider applying a flex factor (e.g. using an Expanded widget) to force the children of the RenderFlex to fit within the available space instead of being sized to their natural size.
This is considered an error condition because it indicates that there is content that cannot be seen. If the content is legitimately bigger than the available space, consider clipping it with a ClipRect widget before putting it in the flex, or using a scrollable container rather than a Flex, like a ListView.
The specific RenderFlex in question is: RenderFlex#8390f OVERFLOWING
════════════════════════════════════════════════════════════════════════════════
Edit
here is my new code. thanks to your suggestions
Expanded(
flex: 1,
child: SingleChildScrollView(
child: SizedBox(
// height: height,
width: MediaQuery.of(context).size.width,
child: SizedBox(
child: Column(
children : [
// A Row for the top
Row(
children: const [
SubjectCard(link: "assets/images/subject.jpg", source: '', subjectNo: 'SUBJECT 1'),
SizedBox(width: 10,),
SubjectCard(link: "assets/images/subject.jpg", source: '', subjectNo: 'SUBJECT 1')
]
),
const SizedBox(height: 10,),
Row(
children: const [
SubjectCard(link: "assets/images/subject.jpg", source: '', subjectNo: 'SUBJECT 1'),
SizedBox(width: 10,),
SubjectCard(link: "assets/images/subject.jpg", source: '', subjectNo: 'SUBJECT 1')
]
),
const SizedBox(height: 10,),
Row(children: const [
SubjectCard(link: "assets/images/Student.JPG", source: '', subjectNo: 'SUBJECT 2'),
SizedBox(width: 10,),
SubjectCard(link: "assets/images/subject.jpg", source: '', subjectNo: 'SUBJECT 1')
]
),
const SizedBox(height: 10,),
Row(children: const [
SubjectCard(link: "assets/images/subject.jpg", source: '', subjectNo: 'SUBJECT 1'),
SizedBox(width: 10,),
SubjectCard(link: "assets/images/Student.JPG", source: '', subjectNo: 'SUBJECT 2')
]
),const SizedBox(height: 10,),
Row(children: const [
SubjectCard(link: "assets/images/subject.jpg", source: '', subjectNo: 'SUBJECT 1'),
SizedBox(width: 10,),
SubjectCard(link: "assets/images/Student.JPG", source: '', subjectNo: 'SUBJECT 2')
]
),const SizedBox(height: 10,),
Row(children: const [
SubjectCard(link: "assets/images/subject.jpg", source: '', subjectNo: 'SUBJECT 1'),
SizedBox(width: 10,),
SubjectCard(link: "assets/images/Student.JPG", source: '', subjectNo: 'SUBJECT 2')
]
),
],
),
),
),
),
)
|
[
"the error says the widget that overflows is here: lib\\…\\student\\subjects.dart:158\nyou might check what widget is there. I believe there is no widget overflowing 99853 pixels in the attached code.\nYou might also consider using the GridView class. it's exactly what you are trying to implement here.\nhttps://api.flutter.dev/flutter/widgets/GridView-class.html\n",
"You have a Column inside a SingleChildScrollView meaning the Column will try to expand into an item with infinite height. It'll keep rendering beneath your visible.\nTry MainAxisSize.Min or wrapping in a flexible. Something that will constrain the height of the Column.\n"
] |
[
0,
0
] |
[] |
[] |
[
"android",
"flutter"
] |
stackoverflow_0074665004_android_flutter.txt
|
Q:
Filter products available in pos in odoo 15
The pos module extends product.template and adds available_in_pos field. In a select field to choose a product, I would like to filter only products available in pos.
I tried the domain [('product_tmpl_id.available_in_pos', '=', True)] but I get this error
Unknown field "product.template.available_in_pos" in domain of <field name="product_id"> ([('product_tmpl_id.available_in_pos', '=', True)]))
Anyone knows how I achieve this?
A:
There are two models for products: the first one is product.template, which is the template for a product, and the second one is product.product, which inherits the whole data structure of product.template; this type of inheritance is called delegation inheritance.
One of the reasons to use two models for products is product variants: you will find one product template but many product variants for the same product template, and these are stored in product.product.
The pos module extends product.template and adds the available_in_pos field, so this field will also be available in the product.product model because of the delegation inheritance.
So, you can filter only the products available in the POS by using this domain: [('available_in_pos', '=', True)]
Examples:
If you are using the domain with a product_id field which belongs to the
product.template model, you can filter only the products available
in the POS with the following domain [('available_in_pos', '=', True)]:
product_template = self.env['product.template'].search([('available_in_pos', '=', True)])
If you are using the domain with a product_id field that belongs to the
product.product model, you can filter all products (including
product variants) with the domain [('available_in_pos', '=', True)], or you can filter them through their product template using the domain [('product_tmpl_id.available_in_pos', '=', True)]:
product = self.env['product.product'].search([('available_in_pos', '=', True)])
or
product = self.env['product.product'].search([('product_tmpl_id.available_in_pos', '=', True)])
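As a rough illustration of why the field is reachable on both models, here is a plain-Python sketch of delegation inheritance (the classes and data are hypothetical stand-ins, not Odoo's ORM): attribute lookups that miss on the variant fall through to its template, which is why a field added on product.template is usable directly in a domain on product.product.

```python
# Hypothetical sketch of Odoo-style delegation inheritance.
class ProductTemplate:
    def __init__(self, name, available_in_pos):
        self.name = name
        self.available_in_pos = available_in_pos

class ProductVariant:
    def __init__(self, tmpl):
        self.product_tmpl_id = tmpl

    def __getattr__(self, field):
        # Delegate any field the variant lacks to its template,
        # mimicking what _inherits does in the real ORM.
        return getattr(self.product_tmpl_id, field)

variants = [ProductVariant(ProductTemplate("Coffee", True)),
            ProductVariant(ProductTemplate("Service", False))]

# Both spellings of the domain select the same records:
direct = [v for v in variants if v.available_in_pos]
via_tmpl = [v for v in variants if v.product_tmpl_id.available_in_pos]
assert [v.name for v in direct] == ["Coffee"]
assert direct == via_tmpl
```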
|
Filter products available in pos in odoo 15
|
The pos module extends product.template and adds available_in_pos field. In a select field to choose a product, I would like to filter only products available in pos.
I tried the domain [('product_tmpl_id.available_in_pos', '=', True)] but I get this error
Unknown field "product.template.available_in_pos" in domain of <field name="product_id"> ([('product_tmpl_id.available_in_pos', '=', True)]))
Does anyone know how I can achieve this?
|
[
"There are two models for product, first one is product.template which is the template for product and second one is product.product which inherits all data structure of product.template and this type of inheritance called delegation inheritance.\nOne of the reasons to use two models for product is using product variants, You will find one product template but many product variants for same product template which stored in product.product\nThe pos module extends product.template and adds available_in_pos field so this field will be available in product.product model because of the delegation inheritance.\nSo, you filter only products available in pos by using this domain [('available_in_pos', '=', True)]\nExamples:\n\nIf you are using domain with product_id field which belong to\nproduct.template model, you can filter only products available\nin pos with the following domain [('available_in_pos', '=', True)]:\n\nproduct_template = self.env['product.template'].search([('available_in_pos', '=', True)])\n\n\nIf you are using domain with product_id field which belong to\nproduct.product model, you can filter all products (include\nproducts variants) with following domain [('available_in_pos', '=', True)] or you can filter it using their product template uing this domain [('product_tmpl_id.available_in_pos', '=', True)]\n\nproduct = self.env['product.product'].search([('available_in_pos', '=', True)])\n\nor\nproduct = self.env['product.product'].search([('product_tmpl_id.available_in_pos', '=', True)])\n\n"
] |
[
0
] |
[] |
[] |
[
"odoo",
"odoo_15"
] |
stackoverflow_0074610411_odoo_odoo_15.txt
|
Q:
Testing FormData in pure Javascript and is not responding
I am testing FormData in pure JavaScript. The code in jQuery works fine, but the pure JavaScript version does not give any results. I have checked thoroughly; it seems I am overlooking something. Any help, please?
HTML:
<form id="myForm" enctype="multipart/form-data">
<div> <label for="fname"> First Name</label>
<input type="text" id="fname" name="fname" placeholder="Enter your First Name" required> </div>
<div> <label for="lname">Last Name</label>
<input type="text" id="lname" name="lname" placeholder="Enter your Last Name" required> </div>
<div>
<label for="email">Email </label>
<input type="email" id="email" name="email" placeholder="Enter your email" required>
</div>
<button type="button" class="buttons" onclick="submitFormAjax()">Submit</button>
</form>
<div id="response_message"></div>
<script src="stack.js"></script>
JavaScript:
function submitFormAjax() {
var xmlhttp;
//Checking for Old window versions
if(window.XMLHttpRequest){
xmlhttp = new XMLHttpRequest();
} else if(window.ActiveXObject){
xmlhttp = new ActiveXObject("Microsoft.XMLHTTP")
}
xmlhttp.open("POST", "custstack.php", true);
xmlhttp.onreadystatechange = function() {
if (this.readyState === 4 && this.status === 200)
{
document.getElementById("response_message").innerHTML = this.responseText;
}
}
// Retrieving the form data
var myForm = document.getElementById("myForm");
var formData = new FormData(myForm);
// Sending the request to the server
xmlhttp.send(formData);
}
PHP:
<?PHP
$fname = clean( $_POST[ "name" ] ); echo " <br /> Name = " . $fname . "<br />";
$lname = clean( $_POST[ "name" ] ); echo " <br /> Last Name = " . $lname . "<br />";
$tel = clean( $_POST[ "tel" ] ); echo " <br /> Phone = " . $tel . "<br />";
$email = clean( $_POST[ "email" ] ); echo " <br /> Email = " . $email . "<br />";
$password = clean( $_POST[ "password" ] ); echo " <br /> Password = " . $password . "<br />";
function clean( $userinput ) {
$inp = trim( $userinput );
$inp = stripslashes( $userinput );
$inp = htmlspecialchars( $userinput );
return $inp;
}
?>
A:
If you want to show the post data on the same page you don't need ajax:
<?php
print "<FORM id='myForm' action='{$_SERVER['PHP_SELF']}' method='POST' enctype='multipart/form-data'>";
?>
<div> <label for="fname"> First Name</label>
<input type="text" id="fname" name="fname" placeholder="Enter your First Name" required> </div>
<div> <label for="lname">Last Name</label>
<input type="text" id="lname" name="lname" placeholder="Enter your Last Name" required> </div>
<div> <label for="lname">Phone Number</label>
<input type="text" id="tel" name="tel" placeholder="Enter your Phone number" required> </div>
<div><label for="email">Email </label>
<input type="email" id="email" name="email" placeholder="Enter your email" required><div><
<label for="password">Password </label>
<input type="password" id="password" name="password" placeholder="Enter your password" required>
</div>
<button type="submit" class="buttons" >Submit</button>
<?PHP
function getPostValue($trigger){
$value = "";
if(!empty($_POST[$trigger]))
{
$value = clean( $_POST[$trigger]);
}
return $value;
}
function clean( $userinput ) {
$inp = trim( $userinput );
$inp = stripslashes( $userinput );
$inp = htmlspecialchars( $userinput );
return $inp;
}
echo " <br /> Name = " . getPostValue("fname") . "<br />";
echo " <br /> Last Name = " . getPostValue("lname") . "<br />";
echo " <br /> Last Phone = " . getPostValue("tel") . "<br />";
echo " <br /> Email = " . getPostValue("email") . "<br />";
echo " <br /> Password = " . getPostValue("password") . "<br />";
?>
</form>
|
Testing FormData in pure Javascript and is not responding
|
I am testing FormData in pure JavaScript. The code in jQuery works fine, but the pure JavaScript version does not give any results. I have checked thoroughly; it seems I am overlooking something. Any help, please?
HTML:
<form id="myForm" enctype="multipart/form-data">
<div> <label for="fname"> First Name</label>
<input type="text" id="fname" name="fname" placeholder="Enter your First Name" required> </div>
<div> <label for="lname">Last Name</label>
<input type="text" id="lname" name="lname" placeholder="Enter your Last Name" required> </div>
<div>
<label for="email">Email </label>
<input type="email" id="email" name="email" placeholder="Enter your email" required>
</div>
<button type="button" class="buttons" onclick="submitFormAjax()">Submit</button>
</form>
<div id="response_message"></div>
<script src="stack.js"></script>
JavaScript:
function submitFormAjax() {
var xmlhttp;
//Checking for Old window versions
if(window.XMLHttpRequest){
xmlhttp = new XMLHttpRequest();
} else if(window.ActiveXObject){
xmlhttp = new ActiveXObject("Microsoft.XMLHTTP")
}
xmlhttp.open("POST", "custstack.php", true);
xmlhttp.onreadystatechange = function() {
if (this.readyState === 4 && this.status === 200)
{
document.getElementById("response_message").innerHTML = this.responseText;
}
}
// Retrieving the form data
var myForm = document.getElementById("myForm");
var formData = new FormData(myForm);
// Sending the request to the server
xmlhttp.send(formData);
}
PHP:
<?PHP
$fname = clean( $_POST[ "name" ] ); echo " <br /> Name = " . $fname . "<br />";
$lname = clean( $_POST[ "name" ] ); echo " <br /> Last Name = " . $lname . "<br />";
$tel = clean( $_POST[ "tel" ] ); echo " <br /> Phone = " . $tel . "<br />";
$email = clean( $_POST[ "email" ] ); echo " <br /> Email = " . $email . "<br />";
$password = clean( $_POST[ "password" ] ); echo " <br /> Password = " . $password . "<br />";
function clean( $userinput ) {
$inp = trim( $userinput );
$inp = stripslashes( $userinput );
$inp = htmlspecialchars( $userinput );
return $inp;
}
?>
|
[
"If you want to show the post data on the same page you don't need ajax:\n<?php\nprint \"<FORM id='myForm' action='{$_SERVER['PHP_SELF']}' method='POST' enctype='multipart/form-data'>\";\n?>\n<div> <label for=\"fname\"> First Name</label>\n<input type=\"text\" id=\"fname\" name=\"fname\" placeholder=\"Enter your First Name\" required> </div>\n<div> <label for=\"lname\">Last Name</label>\n<input type=\"text\" id=\"lname\" name=\"lname\" placeholder=\"Enter your Last Name\" required> </div>\n<div> <label for=\"lname\">Phone Number</label>\n<input type=\"text\" id=\"tel\" name=\"tel\" placeholder=\"Enter your Phone number\" required> </div>\n<div><label for=\"email\">Email </label>\n<input type=\"email\" id=\"email\" name=\"email\" placeholder=\"Enter your email\" required><div><\n<label for=\"password\">Password </label>\n<input type=\"password\" id=\"password\" name=\"password\" placeholder=\"Enter your password\" required>\n</div>\n<button type=\"submit\" class=\"buttons\" >Submit</button> \n<?PHP\nfunction getPostValue($trigger){ \n$value = \"\";\n if(!empty($_POST[$trigger]))\n {\n $value = clean( $_POST[$trigger]);\n } \nreturn $value;\n}\n\nfunction clean( $userinput ) {\n$inp = trim( $userinput );\n$inp = stripslashes( $userinput );\n$inp = htmlspecialchars( $userinput );\nreturn $inp;\n}\n\necho \" <br /> Name = \" . getPostValue(\"fname\") . \"<br />\";\necho \" <br /> Last Name = \" . getPostValue(\"lname\") . \"<br />\";\necho \" <br /> Last Phone = \" . getPostValue(\"tel\") . \"<br />\";\necho \" <br /> Email = \" . getPostValue(\"email\") . \"<br />\";\necho \" <br /> Password = \" . getPostValue(\"password\") . \"<br />\";\n\n?>\n</form>\n\n"
] |
[
1
] |
[] |
[] |
[
"html",
"javascript",
"php"
] |
stackoverflow_0074664247_html_javascript_php.txt
|
Q:
Select text in TextField on Double Click in Flutter Windows App
Is there an option to select the text written in TextFormField or TextField on double clicking the field in a Windows App made in Flutter?
Because currently it only works if the text is double clicked, whereas normally in windows application clicking anywhere in the text field selects the entire text written.
A:
Put your TextField inside GestureDetector
GestureDetector(
onDoubleTap:() {
if(_controller.text.isNotEmpty) {
_controller.selection = TextSelection(baseOffset: 0, extentOffset:_controller.text.length);
}
},
child: TextField(controller: _controller, ),
)
A:
Wrap the TextField with an InkWell to provide a double tap. Then, on double tap, set the selection on the TextField's controller:
InkWell(
onDoubleTap:(){
setState((){
_textController.selection = TextSelection(baseOffset:0, extentOffset: _textController.text.length);
});
},
child:TextField(
controller: _textController,
)
)
A:
You don't need any extra widgets. It's pretty simple: you can use the onTap property inside TextField:
TextField(
controller: _controller,
onTap: () {
_controller.selection = TextSelection(baseOffset: 0, extentOffset: _controller.text.length);
}
)
|
Select text in TextField on Double Click in Flutter Windows App
|
Is there an option to select the text written in TextFormField or TextField on double clicking the field in a Windows App made in Flutter?
Because currently it only works if the text is double clicked, whereas normally in windows application clicking anywhere in the text field selects the entire text written.
|
[
"Put your TextField inside GestureDetector\nGestureDetector(\n onDoubleTap:() {\n if(_controller.text.isNotEmpty) {\n _controller.selection = TextSelection(baseOffset: 0, extentOffset:_controller.text.length);\n }\n },\n child: TextField(controller: _controller, ),\n)\n\n",
"Wrap the textfield with an inkwell to provide a double tap. Then on double tap set the selection of the textfield\nInkWell(\n onDoubleTap:(){\n setState((){\n _textController.selection = TextSelection(baseOffset:0, extentOffset: _textController.text.length);\n });\n },\n child:TextField(\n controller: _textController,\n )\n)\n\n",
"You Dont need any other extra Widgets. Its pretty simple,You can use onTap property inside of TextField:\nTextField(\n controller: _controller,\n onTap: () {\n _controller.selection = TextSelection(baseOffset: 0, extentOffset: _controller.text.length);\n }\n)\n\n"
] |
[
2,
1,
0
] |
[] |
[] |
[
"flutter",
"flutter_textformfield",
"flutter_windows"
] |
stackoverflow_0072962932_flutter_flutter_textformfield_flutter_windows.txt
|
Q:
How can I make a plot which I have customized in R
I want to plot the following plot
The x-axis ranges from 1 to 9, and the y-axis ranges from -0.5 to +0.5. I have also specified colours within the boxes
A:
First I created some reproducible data with Y factors and X values. You could define the correct and incorrect colors in a new column using case_when. To create bars use geom_col and scale_fill_manual to define the labels for your colors. Here is a reproducible example:
# Data
df <- data.frame(Y = rep(c(0.3, -0.1, -0.3), each = 9),
X = rep(c(1:9), n = 3))
library(dplyr)
library(ggplot2)
df %>%
mutate(color = case_when(Y == 0.3 | Y == -0.3 ~ 'orange',
TRUE ~ 'grey')) %>%
ggplot(aes(x = X, y = factor(Y), fill = color)) +
geom_col(width = 1) +
scale_fill_manual('', values = c('orange' = 'orange', 'grey' = 'grey'),
labels = c('Correct', 'Incorrect')) +
theme_classic() +
labs(y = 'Y', x = '')
Created on 2022-12-03 with reprex v2.0.2
Update
Slightly modify the data:
df <- data.frame(Y = rep(c(0.45, 0.25, 0.05, -0.05, -0.25, -0.45), each = 9),
X = rep(c(1:9), n = 6))
library(dplyr)
library(ggplot2)
df %>%
mutate(color = case_when(Y %in% c(-0.45, 0.45, -0.25, 0.25) ~ 'orange',
TRUE ~ 'grey')) %>%
ggplot(aes(x = X, y = factor(Y), fill = color)) +
geom_col(width = 1) +
scale_fill_manual('', values = c('orange' = 'orange', 'grey' = 'grey'),
labels = c('Correct', 'Incorrect')) +
theme_classic() +
labs(y = 'Y', x = '')
Created on 2022-12-03 with reprex v2.0.2
Update to axis
You can use the following code:
df <- data.frame(Y = c(0.45, 0.25, 0.05, -0.05, -0.25, -0.45),
X = rep(9, n = 6))
library(dplyr)
library(ggplot2)
df %>%
mutate(color = case_when(Y %in% c(-0.45, 0.45, -0.25, 0.25) ~ 'orange',
TRUE ~ 'grey')) %>%
ggplot(aes(x = X, y = factor(Y), fill = color)) +
geom_col(width = 1) +
scale_fill_manual('', values = c('orange' = 'orange', 'grey' = 'grey'),
labels = c('Correct', 'Incorrect')) +
theme_classic() +
labs(y = 'Y', x = '') +
coord_cartesian(expand = FALSE, xlim = c(1, NA)) +
scale_x_continuous(breaks = seq(1, 9, by = 1))
Created on 2022-12-03 with reprex v2.0.2
|
How can I make a plot which I have customized in R
|
I want to plot the following plot
The x-axis ranges from 1 to 9, and the y-axis ranges from -0.5 to +0.5. I have also specified colours within the boxes
|
[
"First I created some reproducible data with Y factors and X values. You could define the correct and incorrect colors in a new column using case_when. To create bars use geom_col and scale_fill_manual to define the labels for your colors. Here is a reproducible example:\n# Data\ndf <- data.frame(Y = rep(c(0.3, -0.1, -0.3), each = 9),\n X = rep(c(1:9), n = 3))\n\nlibrary(dplyr)\nlibrary(ggplot2)\ndf %>%\n mutate(color = case_when(Y == 0.3 | Y == -0.3 ~ 'orange',\n TRUE ~ 'grey')) %>%\n ggplot(aes(x = X, y = factor(Y), fill = color)) +\n geom_col(width = 1) +\n scale_fill_manual('', values = c('orange' = 'orange', 'grey' = 'grey'),\n labels = c('Correct', 'Incorrect')) +\n theme_classic() +\n labs(y = 'Y', x = '')\n\n\nCreated on 2022-12-03 with reprex v2.0.2\n\nUpdate\nSlightly modify the data:\ndf <- data.frame(Y = rep(c(0.45, 0.25, 0.05, -0.05, -0.25, -0.45), each = 9),\n X = rep(c(1:9), n = 6))\n\nlibrary(dplyr)\nlibrary(ggplot2)\ndf %>%\n mutate(color = case_when(Y %in% c(-0.45, 0.45, -0.25, 0.25) ~ 'orange',\n TRUE ~ 'grey')) %>%\n ggplot(aes(x = X, y = factor(Y), fill = color)) +\n geom_col(width = 1) +\n scale_fill_manual('', values = c('orange' = 'orange', 'grey' = 'grey'),\n labels = c('Correct', 'Incorrect')) +\n theme_classic() +\n labs(y = 'Y', x = '')\n\n\nCreated on 2022-12-03 with reprex v2.0.2\n\nUpdate to axis\nYou can use the following code:\ndf <- data.frame(Y = c(0.45, 0.25, 0.05, -0.05, -0.25, -0.45),\n X = rep(9, n = 6))\n\nlibrary(dplyr)\nlibrary(ggplot2)\ndf %>%\n mutate(color = case_when(Y %in% c(-0.45, 0.45, -0.25, 0.25) ~ 'orange',\n TRUE ~ 'grey')) %>%\n ggplot(aes(x = X, y = factor(Y), fill = color)) +\n geom_col(width = 1) +\n scale_fill_manual('', values = c('orange' = 'orange', 'grey' = 'grey'),\n labels = c('Correct', 'Incorrect')) +\n theme_classic() +\n labs(y = 'Y', x = '') +\n coord_cartesian(expand = FALSE, xlim = c(1, NA)) +\n scale_x_continuous(breaks = seq(1, 9, by = 1))\n\n\nCreated on 2022-12-03 with reprex v2.0.2\n"
] |
[
1
] |
[] |
[] |
[
"ggplot2",
"r"
] |
stackoverflow_0074662475_ggplot2_r.txt
|
Q:
Chaining Telethon start methods
I have been using telethon for a long time with two clients, one for a bot (with bot token) and another for my user (using phone).
I always thought two separate clients were necessary (are they?) but I recently saw this in the documentation:
https://docs.telethon.dev/en/stable/modules/client.html#telethon.client.auth.AuthMethods.start
But when I go to test it I got:
UserWarning:
the session already had an authorized user so it did not login to the user account using the provided phone (it may not be using the user you expect)
So I don't understand if the example indicates that I can have a single client to control a bot and a userbot, if one start(...) overrides the other or if the documentation example is wrong directly.
On the other hand, if I use that example code (including the last with part) I get:
RuntimeError: You must use "async with" if the event loop is running (i.e. you are inside an "async def")
And lastly, my IDE was warning me when passing a phone as a string, because it expected typing.Callable[[], str].
A:
The documentation says "initialization can be chained". Initialization is this line:
client = TelegramClient(...)
and you can chain .start() there:
client = await TelegramClient(...).start(...)
but it doesn't mean you can chain multiple calls to start(). Indeed, if you want to control more than one account, you will need separate clients.
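To make the distinction concrete, here is a minimal, library-free sketch of the pattern (FakeClient and its fields are hypothetical stand-ins, not Telethon's real API): start() returns the client itself, which is what makes the one-line chained initialization work, but a second start() on an already-authorized session keeps the existing user.

```python
# Hypothetical sketch: start() authorizes at most once and returns self.
class FakeClient:
    def __init__(self, session):
        self.session = session
        self.logged_in_as = None        # None = no authorized user yet

    def start(self, account):
        if self.logged_in_as is None:   # keep an already-authorized user
            self.logged_in_as = account
        return self                     # returning self enables chaining

bot = FakeClient("bot.session").start("bot")
user = FakeClient("user.session").start("user")

assert bot.logged_in_as == "bot"
assert bot.start("user").logged_in_as == "bot"  # second start() is ignored
assert user.logged_in_as == "user"              # two accounts, two clients
```

This mirrors the UserWarning quoted above: a session that already has an authorized user keeps it, so chaining another start() silently does nothing.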
|
Chaining Telethon start methods
|
I have been using telethon for a long time with two clients, one for a bot (with bot token) and another for my user (using phone).
I always thought two separate clients were necessary (are they?) but I recently saw this in the documentation:
https://docs.telethon.dev/en/stable/modules/client.html#telethon.client.auth.AuthMethods.start
But when I go to test it I got:
UserWarning:
the session already had an authorized user so it did not login to the user account using the provided phone (it may not be using the user you expect)
So I don't understand if the example indicates that I can have a single client to control a bot and a userbot, if one start(...) overrides the other or if the documentation example is wrong directly.
On the other hand, if I use that example code (including the last with part) I get:
RuntimeError: You must use "async with" if the event loop is running (i.e. you are inside an "async def")
And lastly, my IDE was warning me when passing a phone as a string, because it expected typing.Callable[[], str].
|
[
"The documentation says \"initialization can be chained\". Initialization is this line:\nclient = TelegramClient(...)\n\nand you can chain .start() there:\nclient = await TelegramClient(...).start(...)\n\nbut it doesn't mean you can chain multiple calls to start(). Indeed, if you want to control more than one account, you will need separate clients.\n"
] |
[
1
] |
[] |
[] |
[
"python",
"telethon"
] |
stackoverflow_0074665837_python_telethon.txt
|
Q:
How to recursively call a function until the output is met?
I have a piece of code that contains two functions: reverse(), which reverses an input list, and rotate(), which moves the last element to the start of the list.
Now I am given another list, in the function public int minimumOps(List<Integer> a, List<Integer> b) which contains the same elements as the original list but in different order. I am trying to find how many times reverse() and/or rotate() must be called for the new list to be converted back into the original list. For instance, consider S = [1, 2, 3, 4] and T = [2, 1, 4, 3], we can see that
T = rotate(rotate(reverse(S))) gives us the output
But this is not the only way to transform S to T. To illustrate, here are some other sequence of operations:
T = reverse(rotate(rotate(S)))
T = rotate(rotate(rotate(reverse(rotate(S)))))
T = reverse(rotate(rotate(reverse(reverse(S)))))
Our goal in this problem is to find the smallest number of operations to achieve this transformation.
I cannot figure out how to solve this problem. Here is what I have so far:
public int minimumOps(List<Integer> a, List<Integer> b) {
int count = 0;
for (int x = 0; x < a.size(); x++){
if (Objects.equals(a.get(x), b.get(x))){
count += 0;
}
else{
while(!Objects.equals(a.get(x),b.get(x))){
reverse(b);
rotate(b);
count++;
}
}
}
return count;
}
public static List<Integer> rotate(List<Integer> l) {
List<Integer> li = new ArrayList<Integer>();
li.add(l.get(l.size() - 1));
for (int i = 0; i < l.size() - 1; i++) {
li.add(l.get(i));
}
return li;
}
public static List<Integer> reverse(List<Integer> l) {
List<Integer> li = new ArrayList<Integer>();
for (int i = l.size() - 1; i >= 0; i--) {
li.add(l.get(i));
}
return li;
}
Any ideas on how to approach or solve minimumOps(a, b) and find the number of rotate() and/or reverse() calls needed to turn list b into list a would be greatly appreciated
A:
Here's your answer, this will only find the minimum number of transforms:
import java.util.*;
class Main {
public static void main(String[] args) {
var a = new ArrayList<>(List.of(1, 2, 3, 4));
var b = new ArrayList<>(List.of(2, 1, 4, 3));
var output = minimumOps(a, b);
if (output == Integer.MAX_VALUE) System.out.println("Can't be solved");
else System.out.println("Min transforms: "+output);
}
/*
let's draw operations tree, that is, for each list we can either reverse (rev) or rotate (rot)
a
|
/ \
rev rot
/ \ / \
rev rot rev rot
/ \ / \ / \ / \
⋮ ⋮ ⋮ ⋮
we know that reverse(reverse(list)) == list, so we can prune rev if its parent is rev.
a
|
/ \
rev rot
| / \
rot rev rot
/ \ | / \
⋮ ⋮ ⋮
now let's apply this 'blindly'.
*/
public static int minimumOps(List<Integer> a, List<Integer> b) {
if (Objects.equals(a, b)) return 0;
// minimumOpsRec is a helper method that will be called recursively
int revCount = minimumOpsRec(reverse(a), b, 1, OP.REV);
int rotCount = minimumOpsRec(rotate(a), b, 1, OP.ROT);
return Math.min(revCount, rotCount);
}
// a and b are lists that we are transforming,
// count is our counter that will be incremented by each transform
// parentOP is the previous operation from parent, i.e., rev or rot, see enum
public static int minimumOpsRec(List<Integer> a, List<Integer> b, int count, OP parentOP) {
if (Objects.equals(a, b)) return count; // base condition, return if a == b
// however not all lists can be sorted using this algorithm, generally speaking,
// if the output of this method greater than the list size then it's not sortable.
// for example try to solve this using this algorithm by yourself (hint: you cannot): a = [1, 2, 3, 4], b = [4, 2, 1, 3]
if (count > a.size()) return Integer.MAX_VALUE;
count++;
int rev = Integer.MAX_VALUE, rot;
if (parentOP == OP.ROT) rev = minimumOpsRec(reverse(a), b, count, OP.REV);
rot = minimumOpsRec(rotate(a), b, count, OP.ROT);
return Math.min(rev, rot);
}
// don't mutate input
private static List<Integer> rotate(List<Integer> list) {
var newList = new ArrayList<>(list);
Collections.rotate(newList, 1); // try using util methods as much as possible
return newList;
}
// don't mutate input
private static List<Integer> reverse(List<Integer> list) {
var newList = new ArrayList<>(list);
Collections.reverse(newList); // try using util methods as much as possible
return newList;
}
enum OP {
REV,
ROT
}
}
And here if you want to find which transforms:
import java.util.*;
class Test2 {
public static void main(String[] args) {
var a = new ArrayList<>(List.of(1, 2, 3, 4));
var b = new ArrayList<>(List.of(2, 1, 4, 3));
System.out.println(minimumOps(a, b));
}
public static Map.Entry<Integer, List<OP>> minimumOps(List<Integer> a, List<Integer> b) {
if (Objects.equals(a, b)) return new AbstractMap.SimpleEntry<>(0, new ArrayList<>());
var rev = minimumOpsRec(reverse(a), b, 1, OP.REV);
var rot = minimumOpsRec(rotate(a), b, 1, OP.ROT);
return rot.getKey() >= rev.getKey() ? rev : rot;
}
public static Map.Entry<Integer, List<OP>> minimumOpsRec(List<Integer> a, List<Integer> b, int count, OP parent) {
if (Objects.equals(a, b) || count > a.size())
return new AbstractMap.SimpleEntry<>(count, new ArrayList<>(List.of(parent)));
count++;
Map.Entry<Integer, List<OP>> rev = null;
Map.Entry<Integer, List<OP>> rot;
if (parent == OP.ROT) rev = minimumOpsRec(reverse(a), b, count, OP.REV);
rot = minimumOpsRec(rotate(a), b, count, OP.ROT);
if (rev != null && rot.getKey() >= rev.getKey()) {
rev.getValue().add(parent);
return rev;
}
rot.getValue().add(parent);
return rot;
}
// don't mutate input
private static List<Integer> rotate(List<Integer> list) {
var newList = new ArrayList<>(list);
Collections.rotate(newList, 1); // try using util methods as much as possible
return newList;
}
// don't mutate input
private static List<Integer> reverse(List<Integer> list) {
var newList = new ArrayList<>(list);
Collections.reverse(newList); // try using util methods as much as possible
return newList;
}
enum OP {
REV,
ROT
}
}
Simplified version:
class Test {
public static void main(String[] args) {
var a = new ArrayList<>(List.of(1, 2, 3, 4));
var b = new ArrayList<>(List.of(2, 1, 4, 3));
var output = minimumOps(a, b);
if (output == Integer.MAX_VALUE) System.out.println("Can't be solved");
else System.out.println("Min transforms: " + output);
}
public static int minimumOps(List<Integer> a, List<Integer> b) {
return minimumOpsRec(a, b, 0);
}
public static int minimumOpsRec(List<Integer> a, List<Integer> b, int count) {
if (Objects.equals(a, b)) return count;
if (count > a.size()) return Integer.MAX_VALUE;
count++;
return Math.min(minimumOpsRec(reverse(a), b, count), minimumOpsRec(rotate(a), b, count));
}
private static List<Integer> rotate(List<Integer> list) {
var newList = new ArrayList<>(list);
Collections.rotate(newList, 1);
return newList;
}
private static List<Integer> reverse(List<Integer> list) {
var newList = new ArrayList<>(list);
Collections.reverse(newList);
return newList;
}
}
Hope it helps, good luck
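For comparison, the same minimum can also be found without recursion by a breadth-first search over the two operations (sketched here in Python; the helper names are mine, not from the question). BFS dequeues states in order of operation count, so the first time the target is reached its depth is the minimum, and a visited set makes unreachable targets such as [4, 2, 1, 3] terminate with -1.

```python
from collections import deque

def reverse(t):
    return t[::-1]

def rotate(t):
    # Move the last element to the front.
    return (t[-1],) + t[:-1]

def minimum_ops(a, b):
    a, b = tuple(a), tuple(b)
    seen, queue = {a}, deque([(a, 0)])
    while queue:
        cur, depth = queue.popleft()
        if cur == b:
            return depth            # BFS: first hit is the minimum
        for nxt in (reverse(cur), rotate(cur)):
            if nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, depth + 1))
    return -1                       # b is not reachable from a

assert minimum_ops([1, 2, 3, 4], [2, 1, 4, 3]) == 3
assert minimum_ops([1, 2, 3, 4], [4, 2, 1, 3]) == -1
```

The reachable states form only the 2n rotations and reflections of the list, so the search space stays tiny even for long lists.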
|
How to recursively call a function until the output is met?
|
I have a piece of code that contains two functions: reverse(), which reverses an input list, and rotate(), which moves the last element to the start of the list.
Now I am given another list, in the function public int minimumOps(List<Integer> a, List<Integer> b) which contains the same elements as the original list but in different order. I am trying to find how many times reverse() and/or rotate() must be called for the new list to be converted back into the original list. For instance, consider S = [1, 2, 3, 4] and T = [2, 1, 4, 3], we can see that
T = rotate(rotate(reverse(S))) gives us the output
But this is not the only way to transform S to T. To illustrate, here are some other sequence of operations:
T = reverse(rotate(rotate(S)))
T = rotate(rotate(rotate(reverse(rotate(S)))))
T = reverse(rotate(rotate(reverse(reverse(S)))))
Our goal in this problem is to find the smallest number of operations to achieve this transformation.
I cannot figure out how to solve this problem. Here is what I have so far:
public int minimumOps(List<Integer> a, List<Integer> b) {
int count = 0;
for (int x = 0; x < a.size(); x++){
if (Objects.equals(a.get(x), b.get(x))){
count += 0;
}
else{
while(!Objects.equals(a.get(x),b.get(x))){
reverse(b);
rotate(b);
count++;
}
}
}
return count;
}
public static List<Integer> rotate(List<Integer> l) {
List<Integer> li = new ArrayList<Integer>();
li.add(l.get(l.size() - 1));
for (int i = 0; i < l.size() - 1; i++) {
li.add(l.get(i));
}
return li;
}
public static List<Integer> reverse(List<Integer> l) {
List<Integer> li = new ArrayList<Integer>();
for (int i = l.size() - 1; i >= 0; i--) {
li.add(l.get(i));
}
return li;
}
Any ideas on how to approach or solve minimumOps(a, b) and find the number of rotate() and/or reverse() calls needed to turn list b into list a would be greatly appreciated
|
[
"Here's your answer, this will only find the minimum number of transforms:\nimport java.util.*;\n\nclass Main {\n public static void main(String[] args) {\n var a = new ArrayList<>(List.of(1, 2, 3, 4));\n var b = new ArrayList<>(List.of(2, 1, 4, 3));\n var output = minimumOps(a, b);\n if (output == Integer.MAX_VALUE) System.out.println(\"Can't be solved\");\n else System.out.println(\"Min transforms: \"+output); \n }\n\n /*\n let's draw operations tree, that is, for each list we can either reverse (rev) or rotate (rot)\n a\n |\n / \\\n rev rot\n / \\ / \\\n rev rot rev rot\n / \\ / \\ / \\ / \\\n ⋮ ⋮ ⋮ ⋮\n\n we know that reverse(reverse(list)) == list, so we can prune rev if its parent is rev.\n\n a\n |\n / \\\n rev rot\n | / \\\n rot rev rot\n / \\ | / \\\n ⋮ ⋮ ⋮\n\n now let's apply this 'blindly'.\n */\n public static int minimumOps(List<Integer> a, List<Integer> b) {\n if (Objects.equals(a, b)) return 0;\n\n // minimumOpsRec is a helper method that will be called recursively\n int revCount = minimumOpsRec(reverse(a), b, 1, OP.REV);\n int rotCount = minimumOpsRec(rotate(a), b, 1, OP.ROT);\n\n return Math.min(revCount, rotCount);\n }\n\n // a and b are lists that we are transforming,\n // count is our counter that will be incremented by each transform\n // parentOP is the previous operation from parent, i.e., rev or rot, see enum\n public static int minimumOpsRec(List<Integer> a, List<Integer> b, int count, OP parentOP) {\n if (Objects.equals(a, b)) return count; // base condition, return if a == b\n\n // however not all lists can be sorted using this algorithm, generally speaking,\n // if the output of this method greater than the list size then it's not sortable.\n // for example try to solve this using this algorithm by yourself (hint: you cannot): a = [1, 2, 3, 4], b = [4, 2, 1, 3]\n if (count > a.size()) return Integer.MAX_VALUE;\n\n count++;\n\n int rev = Integer.MAX_VALUE, rot;\n\n if (parentOP == OP.ROT) rev = minimumOpsRec(reverse(a), b, count, 
OP.REV);\n\n rot = minimumOpsRec(rotate(a), b, count, OP.ROT);\n\n return Math.min(rev, rot);\n }\n\n // don't mutate input\n private static List<Integer> rotate(List<Integer> list) {\n var newList = new ArrayList<>(list);\n Collections.rotate(newList, 1); // try using util methods as much as possible\n return newList;\n }\n\n // don't mutate input\n private static List<Integer> reverse(List<Integer> list) {\n var newList = new ArrayList<>(list);\n Collections.reverse(newList); // try using util methods as much as possible\n return newList;\n }\n\n enum OP {\n REV,\n ROT\n }\n}\n\nAnd here if you want to find which transforms:\nimport java.util.*;\n\nclass Test2 {\n\n public static void main(String[] args) {\n var a = new ArrayList<>(List.of(1, 2, 3, 4));\n var b = new ArrayList<>(List.of(2, 1, 4, 3));\n System.out.println(minimumOps(a, b));\n }\n\n public static Map.Entry<Integer, List<OP>> minimumOps(List<Integer> a, List<Integer> b) {\n if (Objects.equals(a, b)) return new AbstractMap.SimpleEntry<>(0, new ArrayList<>());\n\n var rev = minimumOpsRec(reverse(a), b, 1, OP.REV);\n var rot = minimumOpsRec(rotate(a), b, 1, OP.ROT);\n\n return rot.getKey() >= rev.getKey() ? 
rev : rot;\n }\n\n public static Map.Entry<Integer, List<OP>> minimumOpsRec(List<Integer> a, List<Integer> b, int count, OP parent) {\n if (Objects.equals(a, b) || count > a.size())\n return new AbstractMap.SimpleEntry<>(count, new ArrayList<>(List.of(parent)));\n\n count++;\n \n Map.Entry<Integer, List<OP>> rev = null;\n Map.Entry<Integer, List<OP>> rot;\n \n if (parent == OP.ROT) rev = minimumOpsRec(reverse(a), b, count, OP.REV);\n \n rot = minimumOpsRec(rotate(a), b, count, OP.ROT);\n\n if (rev != null && rot.getKey() >= rev.getKey()) {\n rev.getValue().add(parent);\n return rev;\n }\n rot.getValue().add(parent);\n return rot;\n }\n\n // don't mutate input\n private static List<Integer> rotate(List<Integer> list) {\n var newList = new ArrayList<>(list);\n Collections.rotate(newList, 1); // try using util methods as much as possible\n return newList;\n }\n\n // don't mutate input\n private static List<Integer> reverse(List<Integer> list) {\n var newList = new ArrayList<>(list);\n Collections.reverse(newList); // try using util methods as much as possible\n return newList;\n }\n\n enum OP {\n REV,\n ROT\n }\n\n}\n\nSimplified version:\nclass Test {\n public static void main(String[] args) {\n var a = new ArrayList<>(List.of(1, 2, 3, 4));\n var b = new ArrayList<>(List.of(2, 1, 4, 3));\n var output = minimumOps(a, b);\n if (output == Integer.MAX_VALUE) System.out.println(\"Can't be solved\");\n else System.out.println(\"Min transforms: \" + output);\n }\n\n public static int minimumOps(List<Integer> a, List<Integer> b) {\n return minimumOpsRec(a, b, 0);\n }\n\n public static int minimumOpsRec(List<Integer> a, List<Integer> b, int count) {\n if (Objects.equals(a, b)) return count;\n if (count > a.size()) return Integer.MAX_VALUE;\n count++;\n return Math.min(minimumOpsRec(reverse(a), b, count), minimumOpsRec(rotate(a), b, count));\n }\n\n private static List<Integer> rotate(List<Integer> list) {\n var newList = new ArrayList<>(list);\n Collections.rotate(newList, 
1);\n return newList;\n }\n\n private static List<Integer> reverse(List<Integer> list) {\n var newList = new ArrayList<>(list);\n Collections.reverse(newList);\n return newList;\n }\n}\n\nHope it helps, good luck\n"
] |
[
2
] |
[
"It looks like your current approach is to iterate through each element in the list a, compare it to the corresponding element in the list b, and then perform the reverse() and rotate() operations until the two elements match.\nHowever, this approach will not work for several reasons. First, the reverse() and rotate() operations do not necessarily have to be performed on a single element in order to transform a into b. Instead, the operations can be performed on the entire list. Second, it is not guaranteed that the reverse() and rotate() operations will eventually produce a matching element, even if the two lists contain the same elements.\nHere is one possible approach to solve the problem:\nFirst, check if the two lists are already equal. If they are, then no operations are needed and the function can return 0.\nIf the two lists are not equal, then perform the reverse() operation on list b. If the resulting list is equal to a, then the function can return 1, since only one reverse() operation was needed.\nIf step 2 did not produce a matching list, then perform the rotate() operation on list b repeatedly until it matches a. 
The number of rotate() operations needed is the minimum number of operations needed to transform a into b.\nHere is what the updated minimumOps() function might look like:\npublic int minimumOps(List<Integer> a, List<Integer> b) {\n// Check if the lists are already equal\nif (a.equals(b)) {\n return 0;\n}\n\n// Perform the reverse operation on b\nList<Integer> reversedB = reverse(b);\nif (a.equals(reversedB)) {\n // If the reverse operation produces a matching list, return 1\n return 1;\n}\n\n// If the reverse operation did not produce a matching list, perform the rotate operation repeatedly\nint count = 0;\nwhile (!a.equals(b)) {\n b = rotate(b);\n count++;\n}\n\n// Return the number of rotate operations needed\nreturn count;\n}\n\nThis approach should produce the correct result in all cases, since it checks for the two lists being already equal, and then checks if the reverse() operation alone is sufficient to transform a into b. If neither of these conditions is met, then it performs the rotate() operation repeatedly until a match is found.\nI hope this helps! Let me know if you have any other questions.\n"
] |
[
-2
] |
[
"arraylist",
"function",
"java"
] |
stackoverflow_0074664167_arraylist_function_java.txt
|
Q:
Is there macro to create HashSet in Rust
In Rust, we can create a Vector with macro vec![].
let numbers = vec![1, 2, 3];
Is there any similar macro that allows us to create a HashSet?
From the doc https://doc.rust-lang.org/std/collections/struct.HashSet.html, I notice that we have HashSet::from:
let viking_names = HashSet::from(["Einar", "Olaf", "Harald"]);
However, that requires us to create an array first, which seems a bit wasteful.
A:
The standard library doesn't have a macro for this. This crate does provide one though.
As for the wastefulness, creating an array of string literals is pretty cheap, and is almost certainly optimized away.
A:
If you fear the overhead of constructing the HashSet at runtime, or want this to be a static, the phf crates might help you.
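For illustration, here is a minimal sketch of such a macro written by hand (the name `hashset!` is just an example; the standard library does not provide it, and the crates mentioned above offer more polished versions):

```rust
// Minimal sketch of a `hashset!` macro (illustrative name, not a std item).
// The full path inside the macro keeps it usable without a `use` in the caller.
macro_rules! hashset {
    ($($item:expr),* $(,)?) => {{
        let mut set = ::std::collections::HashSet::new();
        $(set.insert($item);)*
        set
    }};
}

fn main() {
    let viking_names = hashset!["Einar", "Olaf", "Harald"];
    assert_eq!(viking_names.len(), 3);
    assert!(viking_names.contains("Olaf"));
    println!("{}", viking_names.len()); // prints 3
}
```

Note that this still builds the set by inserting element by element; as the first answer says, the intermediate array in `HashSet::from` is almost certainly optimized away anyway.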
|
Is there macro to create HashSet in Rust
|
In Rust, we can create a Vector with macro vec![].
let numbers = vec![1, 2, 3];
Is there any similar macro that allows us to create a HashSet?
From the doc https://doc.rust-lang.org/std/collections/struct.HashSet.html, I notice that we have HashSet::from:
let viking_names = HashSet::from(["Einar", "Olaf", "Harald"]);
However, that requires us to create an array first, which seems a bit wasteful.
|
[
"The standard library doesn't have a macro for this. This crate does provide one though.\nAs for the wastefulness, creating an array of string literals is pretty cheap, and is almost certainly optimized away.\n",
"If you fear the overhead of constructing the HashSet at runtime, or want this to be a static, the phf crates might help you.\n"
] |
[
3,
1
] |
[] |
[] |
[
"rust"
] |
stackoverflow_0074664018_rust.txt
|
Q:
Can not create Database Diagram - Could not obtain information about Windows NT Group/User
I'm using a Local MS SQL Database, the database was created automatically using entity framework.
The database is created without any errors.
However, when I use Microsoft Management Studio to try and create a database diagram, I get the following error:
Even when I log in as the SA user I still get the same error. Any ideas on why this is happening and how to fix it?
|
Can not create Database Diagram - Could not obtain information about Windows NT Group/User
|
I'm using a Local MS SQL Database, the database was created automatically using entity framework.
The database is created without any errors.
However, when I use Microsoft Management Studio to try and create a database diagram, I get the following error:
Even when I log in as the SA user I still get the same error. Any ideas on why this is happening and how to fix it?
|
[] |
[] |
[
"sql 2019 - Change the owner to sa\nenter image description here\n"
] |
[
-2
] |
[
"entity_framework",
"sql_server",
"sql_server_2017",
"ssms"
] |
stackoverflow_0060189644_entity_framework_sql_server_sql_server_2017_ssms.txt
|
Q:
Is it possible to use Material UI library with React Native?
I've been using react for a while now and I know that Material UI https://material-ui.com/ is the UI library built for react. My question is - Is it possible to use this library (which is built for react) with react-native ?
On some research I found react-native has another UI library called react-native-paper, but I was wondering if Material-UI can fit together with react-native?
A:
Material UI is built for ReactJS (web apps), so it doesn't really work together with the React Native framework. However, here's a list of a few libraries to get you started:
react-native-material-bottom-navigation
This Material UI library allows us to add a super cool material bottom navigation with all its perks in pure javascript. It has no native dependencies, easy to use & customize, plus it feels stunning.
npm install react-native-material-bottom-navigation
react-native-material-dropdown
Looking for drop-down components that look and feel native? This material drop-down library does just that: a Material UI drop-down with consistent behavior on iOS and Android, with support for landscape mode as well.
npm install --save react-native-material-dropdown
react-native-snap-carousel
I have used a number of swiper components in react native. Every one of them works ok, but this one takes things to a whole new level.
npm install --save react-native-snap-carousel
react-native-material-textfield
Comes from the same author of the dropdown package (listed above).
npm install --save react-native-material-textfield
react-native-material-menu
If you're looking for overflow menu support in React Native, this lib will do a great job. You can use this in the toolbar as an overflow menu(more menu).
npm install --save react-native-material-menu
react-native-modal-datetime-picker
Modal DateTimePicker provides support for this feature in iOS & Android using the native implementation.
npm i react-native-modal-datetime-picker @react-native-community/datetimepicker
react-native-snackbar
Great if you're looking for toast/snack bar options that can be shown easily at the end of an e.g. API call.
npm install react-native-snackbar --save
react-native-country-picker-modal
This picker module allows the user to select countries from the list. It has support for search, lazy loading. Dark mode included.
npm i react-native-country-picker-modal
react-native-color
Color components for React Native. JavaScript-only, for iOS and Android.
npm i --save react-native-color
react-native-masonry
Great if you're looking for grid lists that have support for dynamic width & height.
npm install --save react-native-masonry
UPDATE:
React Native lets you build your own Native Components for Android and iOS to suit your app’s unique needs. However, there is also a thriving ecosystem of community-contributed components. To get you started ASAP may I suggest checking out the Native Directory to find what the community has been creating and how you can benefit from it.
A:
material-ui is a react based implementation of the material design system. It is meant for web development and the output that you get is HTML/ CSS (which are the building block of every web page).
react-native is a framework that lets you build native mobile apps using the power and syntax of React. The native programming language of mobile apps depends on the mobile-os you are currently running. The React Native framework gives you the ability to program in React style, and the output will be a compiled version (os-specific) using the native OS language.
Unfortunately, those two don't really work together, and you can't just take any React lib and use it in your react-native code.
A:
Material Design Library for ReactNative
Making your React Native apps look and feel native, React Native Paper is a high-quality, standard-compliant Material Design library that has you covered in all major use-cases.
https://reactnativepaper.com/
A:
Unfortunately, the Material-UI library, built specifically for ReactJS (the web counterpart of React Native), is not applicable to the React Native app development ecosystem. This does not mean that you cannot give a native feel to the app you are developing. A library providing every material theme does not come out of the box for React Native, but there are some packages that provide components with material theming. Likewise, you can also check some amazing React Native themes that give your app a modern look and feel. They simply make an app look amazing and appealing to the user group. It's all about improvising and adapting.
A:
There's another solution if you're willing to abandon react-native and go for the progressive web app route (with something like Capacitor).
Note that this can be trickier than something like react-native or expo. For example, getting something equivalent to react native's safe area views is going to require some manual twiddling (with a plugin like this one). In general, I've found that accessing native functionality is always just a little more complicated.
A:
I was working with react-native-elements and it has good performance on Android and iOS. However, it's complicated to change the style of its components.
A few months ago I tried NativeBase and it's perfect. It has a lot of Material UI-style components and works perfectly on Android and iOS. It's a very complete library.
A:
I have found one. So you can install it with the below npm command:
npm install @react-native-material/core
to use it:
import React from "react";
import { Button } from "@react-native-material/core";
const App = () => (
<Button title="Click Me" onPress={() => alert("")}/>
);
export default App;
You can find many more components there. For more information see https://www.react-native-material.com/docs/getting-started
|
Is it possible to use Material UI library with React Native?
|
I've been using react for a while now and I know that Material UI https://material-ui.com/ is the UI library built for react. My question is - Is it possible to use this library (which is built for react) with react-native ?
On some research I found react-native has another UI library called react-native-paper, but I was wondering if Material-UI can fit together with react-native?
|
[
"Material UI is built for ReactJS (web apps), so it doesn't really work together with the React Native framework. However, here's a list of a few libraries to get you started:\n\n\nreact-native-material-bottom-navigation\n\n\nThis Material UI library allows us to add a super cool material bottom navigation with all its perks in pure javascript. It has no native dependencies, easy to use & customize, plus it feels stunning.\nnpm install react-native-material-bottom-navigation\n\n\n\nreact-native-material-dropdown\n\n\nLooking for drop-down components that look and feel great as native?. This material drop-down library does just that. The Material UI drop-down with consistent behavior on iOS and Android, which also has support for landscape mode as well.\nnpm install --save react-native-material-dropdown\n\n\n\nreact-native-snap-carousel\n\n\nI have used a number of swiper components in react native. Every one of them works ok, but this one takes things to a whole new level.\nnpm install --save react-native-snap-carousel\n\n\n\nreact-native-material-textfield\n\n\nComes from the same author of the dropdown package (listed above).\nnpm install --save react-native-material-textfield\n\n\n\nreact-native-material-menu\n\n\nIf you're looking for overflow menu support in React Native, this lib will do a great job. You can use this in the toolbar as an overflow menu(more menu).\nnpm install --save react-native-material-menu\n\n\n\nreact-native-modal-datetime-picker\n\n\nModal DateTimePicker provides support for this feature in iOs & Android using native implementation.\nnpm i react-native-modal-datetime-picker @react-native-community/datetimepicker\n\n\n\nreact-native-snackbar\n\n\nGreat if you're looking for toast/snack bar options that can be shown easily at the end of an e.g. API call.\nnpm install react-native-snackbar --save\n\n\n\nreact-native-country-picker-modal\n\n\nThis picker module allows the user to select countries from the list. 
It has support for search, lazy loading. Dark mode included.\nnpm i react-native-country-picker-modal\n\n\n\nreact-native-color\n\n\nColor components for React Native. JavaScript-only, for iOS and Android.\nnpm i --save react-native-color\n\n\n\nreact-native-masonry\n\n\nGreat if you're looking for grid lists that have support for dynamic width & height.\nnpm install --save react-native-masonry\n\n\nUPDATE:\nReact Native lets you build your own Native Components for Android and iOS to suit your app’s unique needs. However, there is also a thriving ecosystem of community-contributed components. To get you started ASAP may I suggest checking out the Native Directory to find what the community has been creating and how you can benefit from it.\n",
"material-ui is a react based implementation of the material design system. It is meant for web development and the output that you get is HTML/ CSS (which are the building block of every web page).\nreact-native is a framework that lets you build native mobile apps using the power and syntax of React. The native programming language of mobile apps depends on the mobile-os you are currently running. The React Native framework gives you the ability to program in React style, and the output will be a compiled version (os-specific) using the native OS language.\nUnfortunately, those two don't really work together, and you can't just take any React lib and use it in your react-native code.\n",
"Material Design Library for ReactNative\nMaking your React Native apps look and feel native, React Native Paper is a high-quality, standard-compliant Material Design library that has you covered in all major use-cases.\nhttps://reactnativepaper.com/\n\n",
"Unfortunately, the material-UI library specially built for ReactJS which is the web counterpart of the React Native is not applicable to the React Native app development ecosystem. This does not mean that you cannot provide the native feel to the app you are developing. The library providing every material theme does not come out of the box for React Native. However, there are some packages that provide components with material theming. Likewise, you can also check some amazing React Native themes that give your app a modern look and feel. They simply make an app look amazing and appealing to the user group. It's all about improvising and adapting.\n",
"There's another solution if you're willing to abandon react-native and go for the progressive web app route (with something like Capacitor).\nNote that this can be trickier than something like react-native or expo. For example, getting something equivalent to react native's safe area views is going to require some manual twiddling (with a plugin like this one). In general, I've found that accessing native functionality is always just a little more complicated.\n",
"I was working with react-native-elements and have a good performance to Android and IOS. However, its complicated change the style of components.\nSince a few months i tried with NativeBase and its perfect. It have a lot of components kind MaterialUI and works perfectly in Android and IOS. Its a very completly library.\n",
"I have found one. So you can install it with the below npm command:\nnpm install @react-native-material/core\n\nto use it:\nimport React from \"react\";\nimport { Button } from \"@react-native-material/core\";\n\nconst App = () => (\n <Button title=\"Click Me\" onPress={() => alert(\"\")}/>\n);\n\nexport default App;\n\nSo many components you can find. For more information https://www.react-native-material.com/docs/getting-started\n"
] |
[
49,
36,
16,
4,
2,
1,
1
] |
[] |
[] |
[
"material_ui",
"react_native",
"reactjs"
] |
stackoverflow_0061398426_material_ui_react_native_reactjs.txt
|
Q:
Spark SQL JDBC numberOfPartitions calculation from huge vs small data load
I have a use case where, depending on the argument passed in, I may have to 1) fetch and process millions of records from a database (read the RDBMS using JDBC, decode, convert to XML, convert to CSV, etc., a very time-consuming process), and/or 2) process only a few hundred or even a handful of records. Please note that I do not know the volume of the data in this multi-tenant Spark app until runtime, when I calculate the total # of records I need to process. So I have two questions here:
How do I know how many executors or cores I need to request for this Spark job without knowing the data volume as I kick off the run?
Because I am making JDBC calls on a DB table, I am using numOfPartitions, lowerBound(0), upperBound(total#OfRecords), and partitionColumn(ROW_NUM) to partition the Spark SQL query. Now how do I calculate numOfPartitions? When I am fetching millions I want more partitions, and fewer for a handful. How do I decide this number? What is the logic? Would this numOfPartitions be 10-20 for 100-200? We don't want to affect transactional applications by hogging DB resources. How do people typically decide on this numOfPartitions?
Appreciate your help
thank you
Need help deciding database numOfPartitions
A:
It can be challenging to determine the optimal number of executors and cores for a Spark job without knowing the volume of data that needs to be processed. In general, you will want to use as many executors and cores as possible to maximize the parallelism of the job and reduce the overall processing time.
However, it's important to consider the following factors when determining the number of executors and cores to use:
The size and complexity of the data: If the data is large and complex, you may need more executors and cores to process it effectively.
The available resources: The number of executors and cores you can use will depend on the resources available on the cluster. If the cluster is already heavily utilized, you may need to use fewer executors and cores to avoid overloading the system.
The overall performance of the job: You can use Spark's built-in performance metrics to monitor the performance of the job and adjust the number of executors and cores as needed to optimize the processing time.
One approach you could take is to start with a small number of executors and cores and gradually increase them as needed based on the performance of the job and the available resources. You can also use Spark's dynamic allocation feature to automatically adjust the number of executors and cores based on the workload and available resources. This can help ensure that your Spark job is able to effectively process the data without overloading the system.
Spark's dynamic allocation feature allows the Spark application to automatically request additional executors or release unused executors based on the workload and available resources in the cluster. This can help improve the overall performance and efficiency of the Spark application by ensuring that the right amount of resources are available to process the data.
Dynamic allocation is disabled by default in Spark; you can enable it with the spark.dynamicAllocation.enabled property in the Spark configuration (it normally also requires the external shuffle service, spark.shuffle.service.enabled).
You can also adjust the default behavior of dynamic allocation using the following properties:
spark.dynamicAllocation.minExecutors: The minimum number of executors to use for the application.
spark.dynamicAllocation.maxExecutors: The maximum number of executors to use for the application.
spark.dynamicAllocation.initialExecutors: The initial number of executors to use for the application.
By default, dynamic allocation will adjust the number of executors based on the workload and available resources. You can tune how quickly it scales up or down using the spark.dynamicAllocation.schedulerBacklogTimeout and spark.dynamicAllocation.executorIdleTimeout properties.
Overall, using Spark's dynamic allocation feature can help improve the performance and efficiency of your Spark application by automatically allocating the right amount of resources for the data being processed.
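For reference, the core dynamic-allocation settings can be collected in a spark-defaults.conf fragment like the one below (property names come from the standard Spark configuration; the values are purely illustrative and must be tuned per cluster):

```properties
spark.dynamicAllocation.enabled            true
# Dynamic allocation normally also requires the external shuffle service.
spark.shuffle.service.enabled              true
spark.dynamicAllocation.minExecutors       1
spark.dynamicAllocation.initialExecutors   2
spark.dynamicAllocation.maxExecutors       20
```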
A:
The number of partitions to use when reading data from a database using the JDBC connector can have a significant impact on the performance and efficiency of the Spark job. In general, a larger number of partitions will allow the data to be processed in parallel across multiple nodes in the cluster, which can improve the overall processing time. However, using too many partitions can also cause performance issues, such as overwhelming the database with too many concurrent connections.
When you use the numPartitions parameter in a JDBC query in Spark, it will create one database connection for each partition, which can potentially overwhelm the source database if the number of partitions is too large. To avoid this issue, it's important to carefully consider the number of partitions you use in your query.
One approach you could take is to use a smaller number of partitions, such as 10-20 partitions, and ensure that each partition processes a reasonable amount of data. For example, you could use the partitionColumn and lowerBound and upperBound parameters to specify a range of values for the partition column, and then set the numPartitions parameter to a value that will create partitions of approximately 128 MB in size. This can help ensure that the number of database connections used by the query is manageable and will not overwhelm the source database.
After the query, you can repartition the DataFrame with "repartition" such as:
val repartitionedDF = df.repartition(idealNumPartitions)
To calculate the optimal number of partitions for repartition we need to evaluate the size of the DataFrame, it can be done with:
val sizeInBytes =
usersDFDistinct.queryExecution.optimizedPlan.stats.sizeInBytes
Then we can calculate the optimal number of partitions with:
val sizeInMB: Double = (sizeInBytes).toDouble / 1024.0 / 1024.0
println(f" Size in MB of the result: $sizeInMB")
val idealNumPartitions = Math.max(1, Math.ceil(sizeInMB / 128).toInt)
println(f" Ideal number of partitions: $idealNumPartitions")
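The sizing arithmetic above is language-agnostic; as an illustration, here is the same rule of thumb as a standalone Rust function (the ~128 MB target per partition is the heuristic used in this answer, not a Spark constant):

```rust
// Rule-of-thumb partition count: target ~128 MB per partition,
// with a floor of one partition (mirrors the Scala snippet above).
fn ideal_num_partitions(size_in_bytes: u64) -> u64 {
    let size_in_mb = size_in_bytes as f64 / 1024.0 / 1024.0;
    (size_in_mb / 128.0).ceil().max(1.0) as u64
}

fn main() {
    assert_eq!(ideal_num_partitions(1024 * 1024 * 1024), 8); // 1 GiB -> 8 partitions
    assert_eq!(ideal_num_partitions(256 * 1024 * 1024), 2);
    assert_eq!(ideal_num_partitions(10), 1); // tiny results still get one partition
    println!("{}", ideal_num_partitions(1024 * 1024 * 1024)); // prints 8
}
```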
|
Spark SQL JDBC numberOfPartitions calculation from huge vs small data load
|
I have a use case where, depending on the argument passed in, I may have to 1) fetch and process millions of records from a database (read the RDBMS using JDBC, decode, convert to XML, convert to CSV, etc., a very time-consuming process), and/or 2) process only a few hundred or even a handful of records. Please note that I do not know the volume of the data in this multi-tenant Spark app until runtime, when I calculate the total # of records I need to process. So I have two questions here:
How do I know how many executors or cores I need to request for this Spark job without knowing the data volume as I kick off the run?
Because I am making JDBC calls on a DB table, I am using numOfPartitions, lowerBound(0), upperBound(total#OfRecords), and partitionColumn(ROW_NUM) to partition the Spark SQL query. Now how do I calculate numOfPartitions? When I am fetching millions I want more partitions, and fewer for a handful. How do I decide this number? What is the logic? Would this numOfPartitions be 10-20 for 100-200? We don't want to affect transactional applications by hogging DB resources. How do people typically decide on this numOfPartitions?
Appreciate your help
thank you
Need help deciding database numOfPartitions
|
[
"It can be challenging to determine the optimal number of executors and cores for a Spark job without knowing the volume of data that needs to be processed. In general, you will want to use as many executors and cores as possible to maximize the parallelism of the job and reduce the overall processing time.\nHowever, it's important to consider the following factors when determining the number of executors and cores to use:\nThe size and complexity of the data: If the data is large and complex, you may need more executors and cores to process it effectively.\nThe available resources: The number of executors and cores you can use will depend on the resources available on the cluster. If the cluster is already heavily utilized, you may need to use fewer executors and cores to avoid overloading the system.\nThe overall performance of the job: You can use Spark's built-in performance metrics to monitor the performance of the job and adjust the number of executors and cores as needed to optimize the processing time.\nOne approach you could take is to start with a small number of executors and cores and gradually increase them as needed based on the performance of the job and the available resources. You can also use Spark's dynamic allocation feature to automatically adjust the number of executors and cores based on the workload and available resources. This can help ensure that your Spark job is able to effectively process the data without overloading the system.\nSpark's dynamic allocation feature allows the Spark application to automatically request additional executors or release unused executors based on the workload and available resources in the cluster. 
This can help improve the overall performance and efficiency of the Spark application by ensuring that the right amount of resources are available to process the data.\nDynamic allocation is enabled by default in Spark, but it can be configured using the spark.dynamicAllocation.enabled property in the Spark configuration.\nYou can also adjust the default behavior of dynamic allocation using the following properties:\nspark.dynamicAllocation.minExecutors: The minimum number of executors to use for the application.\nspark.dynamicAllocation.maxExecutors: The maximum number of executors to use for the application.\nspark.dynamicAllocation.initialExecutors: The initial number of executors to use for the application.\nBy default, dynamic allocation will try to maintain a constant number of executors based on the workload and available resources. However, you can also configure it to scale up or down based on the workload using the spark.dynamicAllocation.scalingUpFactor and spark.dynamicAllocation.scalingDownFactor properties.\nOverall, using Spark's dynamic allocation feature can help improve the performance and efficiency of your Spark application by automatically allocating the right amount of resources for the data being processed.\n",
"The number of partitions to use when reading data from a database using the JDBC connector can have a significant impact on the performance and efficiency of the Spark job. In general, a larger number of partitions will allow the data to be processed in parallel across multiple nodes in the cluster, which can improve the overall processing time. However, using too many partitions can also cause performance issues, such as overwhelming the database with too many concurrent connections.\nWhen you use the numPartitions parameter in a JDBC query in Spark, it will create one database connection for each partition, which can potentially overwhelm the source database if the number of partitions is too large. To avoid this issue, it's important to carefully consider the number of partitions you use in your query.\nOne approach you could take is to use a smaller number of partitions, such as 10-20 partitions, and ensure that each partition processes a reasonable amount of data. For example, you could use the partitionColumn and lowerBound and upperBound parameters to specify a range of values for the partition column, and then set the numPartitions parameter to a value that will create partitions of approximately 128 MB in size. 
This can help ensure that the number of database connections used by the query is manageable and will not overwhelm the source database.\nAfter the query, you can repartition the DataFrame with \"repartition\" such as:\nval repartitionedDF = df.repartition(idealNumPartitions)\n\nTo calculate the optimal number of partitions for repartition we need to evaluate the size of the DataFrame, it can be done with:\n val sizeInBytes =\n usersDFDistinct.queryExecution.optimizedPlan.stats.sizeInBytes\n\nThen we can calculate the optimal number of partitions with:\n val sizeInMB: Double = (sizeInBytes).toDouble / 1024.0 / 1024.0\n println(f\" Size in MB of the result: $sizeInMB\")\n\n val idealNumPartitions = Math.max(1, Math.ceil(sizeInMB / 128).toInt)\n println(f\" Ideal number of partitions: $idealNumPartitions\")\n\n"
] |
[
0,
0
] |
[] |
[] |
[
"apache_spark"
] |
stackoverflow_0074661526_apache_spark.txt
|
Q:
Literal '#' character in C preprocessor macro replacement-list?
Is there any way in which we can have literal '#' character in replacement-list of C preprocessor macro?
'#' character in replacement-list of C preprocessor is an operator that performs stringification of argument, for example-
#define foo(words) #words
foo(Hello World!)
will result-
"Hello World!"
Is there any way we can have '#' as a literal character in replacement-list like-
#define foo(p1, p2) p1 # p2
// ^ is there any way to specify that- this isn't # operator?
foo(arg1, arg2) // will result-
arg1 "arg2"
// What I wanted was
// arg1 # arg2
I tried a macro like-
#define foo(p1, p2, p3) p1 p2 p3
// And then
foo(arg1, #, arg2)
// Which resulted in
arg1 # arg2
This was getting the work done but wasn't better than typing arg1 # arg2 manually.
Then I tried defining a foo macro which in turn will call metafoo with '#' as argument-
#define metafoo(p1, p2, p3) p1 p2 p3
#define foo(p1, p2) metafoo(p1, #, p2)
foo(arg1, arg2)
which resulted in the error error: '#' is not followed by a macro parameter, because # was being interpreted as the stringification operator.
A:
What about this?
#define metafoo(p1, p2, p3) p1 p2 p3
#define hash #
#define foo(p1, p2) metafoo(p1, hash, p3)
foo(arg1, arg2)
In foo you probably mean p2 instead of p3.
A:
This is easily solved with:
#define NumberSign #
#define foo(p1, p2) p1 NumberSign p2
foo(arg1, arg2)
which yields:
arg1 # arg2
The reason it works is that # is an operator in function-like macros (macros with parameters) but has no effect in object-like macros (macros without parameters).
|
Literal '#' character in C preprocessor macro replacement-list?
|
Is there any way in which we can have literal '#' character in replacement-list of C preprocessor macro?
'#' character in replacement-list of C preprocessor is an operator that performs stringification of argument, for example-
#define foo(words) #words
foo(Hello World!)
will result-
"Hello World!"
Is there any way we can have '#' as a literal character in replacement-list like-
#define foo(p1, p2) p1 # p2
// ^ is there any way to specify that- this isn't # operator?
foo(arg1, arg2) // will result-
arg1 "arg2"
// What I wanted was
// arg1 # arg2
I tried a macro like-
#define foo(p1, p2, p3) p1 p2 p3
// And then
foo(arg1, #, arg2)
// Which resulted in
arg1 # arg2
This was getting the work done but wasn't better than typing arg1 # arg2 manually.
Then I tried defining a foo macro which in turn will call metafoo with '#' as argument-
#define metafoo(p1, p2, p3) p1 p2 p3
#define foo(p1, p2) metafoo(p1, #, p2)
foo(arg1, arg2)
which resulted in the error error: '#' is not followed by a macro parameter, because # was being interpreted as the stringification operator.
|
[
"What about this?\n#define metafoo(p1, p2, p3) p1 p2 p3\n#define hash #\n#define foo(p1, p2) metafoo(p1, hash, p3)\nfoo(arg1, arg2)\n\nIn foo you probably mean p2 instead of p3.\n",
"This is easily solved with:\n#define NumberSign #\n#define foo(p1, p2) p1 NumberSign p2\n\nfoo(arg1, arg2)\n\nwhich yields:\n\narg1 # arg2\n\nThe reason it works is that # is an operator in function-like macros (macros with parameters) but has no effect in object-like macros (macros without parameters).\n"
] |
[
0,
0
] |
[] |
[] |
[
"c",
"gcc"
] |
stackoverflow_0074664718_c_gcc.txt
|
Q:
RaggedTensor becomes Tensor in loss function
I have a sequence-to-sequence model in which I am attempting to predict the output sequence following a transformation. In doing so, I need to compute the MSE between elements in a ragged tensor:
def cpu_bce(y_value, y_pred):
with tf.device('/CPU:0'):
y_v = y_value.to_tensor()
y_p = y_pred.to_tensor()
return tf.keras.losses.MeanSquaredError()(y_v, y_p)
Yet, when executed, it encounters the error:
AttributeError: 'Tensor' object has no attribute 'to_tensor'
What causes this issue? The GRU seems to return a RaggedTensor when called directly. Yet at runtime, the arguments to the loss functions are normal Tensors.
import tensorflow as tf
import numpy as np
import functools
def generate_example(n):
for i in range(n):
dims = np.random.randint(7, 11)
x = np.random.random((dims, ))
y = 2 * x.cumsum()
yield tf.constant(x), tf.constant(y)
N = 200
ds = tf.data.Dataset.from_generator(
functools.partial(generate_example, N),
output_signature=(
tf.TensorSpec(shape=(None,), dtype=tf.float32),
tf.TensorSpec(shape=(None,), dtype=tf.float32),
),
)
def rag(x, y):
x1 = tf.expand_dims(x, 0)
y1 = tf.expand_dims(y, 0)
x1 = tf.expand_dims(x1, -1)
y1 = tf.expand_dims(y1, -1)
return (
tf.RaggedTensor.from_tensor(x1),
tf.RaggedTensor.from_tensor(y1),
)
def unexp(x, y):
return (
tf.squeeze(x, axis=1),
tf.squeeze(y, axis=1)
)
ds = ds.map(rag).batch(32).map(unexp)
model = tf.keras.Sequential([
tf.keras.Input(
type_spec=tf.RaggedTensorSpec(shape=[None, None, 1],
dtype=tf.float32)),
tf.keras.layers.GRU(1, return_sequences=True),
])
def cpu_bce(y_value, y_pred):
with tf.device('/CPU:0'):
y_v = y_value.to_tensor()
y_p = y_pred.to_tensor()
return tf.keras.losses.MeanSquaredError()(y_v, y_p)
model.compile(loss=cpu_bce, optimizer="adam", metrics=[cpu_bce])
model.fit(ds, epochs=3)
A:
In your loss function, you can re-write it in the following ways to make it work.
def cpu_bce(y_value, y_pred):
with tf.device('/CPU:0'):
if isinstance(y_value, tf.RaggedTensor):
y_value = y_value.to_tensor()
if isinstance(y_pred, tf.RaggedTensor):
y_pred = y_pred.to_tensor()
return tf.keras.losses.MeanSquaredError()(y_value, y_pred)
model.compile(loss=cpu_bce, optimizer="adam", metrics=[cpu_bce])
model.fit(ds, epochs=3) # loss & metrics will vary
Or, you don't need to convert the ragged tensor at all; keep it as it is.
def cpu_bce(y_value, y_pred):
with tf.device('/CPU:0'):
return tf.keras.losses.MeanSquaredError()(y_value, y_pred)
model.compile(loss=cpu_bce, optimizer="adam", metrics=[cpu_bce])
model.fit(ds, epochs=3) # loss & metrics will alike
The reason you got AttributeError is that in metrics=[cpu_bce], the target and prediction tensors get converted to plain tensors internally. You can inspect this by printing your target and prediction in the loss function. You would find that for the loss function they are ragged, but for the metric function they are plain tensors. It may not feel convenient; in that case, feel free to raise a ticket on GitHub.
|
RaggedTensor becomes Tensor in loss function
|
I have a sequence-to-sequence model in which I am attempting to predict the output sequence following a transformation. In doing so, I need to compute the MSE between elements in a ragged tensor:
def cpu_bce(y_value, y_pred):
with tf.device('/CPU:0'):
y_v = y_value.to_tensor()
y_p = y_pred.to_tensor()
return tf.keras.losses.MeanSquaredError()(y_v, y_p)
Yet, when executed, it encounters the error:
AttributeError: 'Tensor' object has no attribute 'to_tensor'
What causes this issue? The GRU seems to return a RaggedTensor when called directly. Yet at runtime, the arguments to the loss functions are normal Tensors.
import tensorflow as tf
import numpy as np
import functools
def generate_example(n):
for i in range(n):
dims = np.random.randint(7, 11)
x = np.random.random((dims, ))
y = 2 * x.cumsum()
yield tf.constant(x), tf.constant(y)
N = 200
ds = tf.data.Dataset.from_generator(
functools.partial(generate_example, N),
output_signature=(
tf.TensorSpec(shape=(None,), dtype=tf.float32),
tf.TensorSpec(shape=(None,), dtype=tf.float32),
),
)
def rag(x, y):
x1 = tf.expand_dims(x, 0)
y1 = tf.expand_dims(y, 0)
x1 = tf.expand_dims(x1, -1)
y1 = tf.expand_dims(y1, -1)
return (
tf.RaggedTensor.from_tensor(x1),
tf.RaggedTensor.from_tensor(y1),
)
def unexp(x, y):
return (
tf.squeeze(x, axis=1),
tf.squeeze(y, axis=1)
)
ds = ds.map(rag).batch(32).map(unexp)
model = tf.keras.Sequential([
tf.keras.Input(
type_spec=tf.RaggedTensorSpec(shape=[None, None, 1],
dtype=tf.float32)),
tf.keras.layers.GRU(1, return_sequences=True),
])
def cpu_bce(y_value, y_pred):
with tf.device('/CPU:0'):
y_v = y_value.to_tensor()
y_p = y_pred.to_tensor()
return tf.keras.losses.MeanSquaredError()(y_v, y_p)
model.compile(loss=cpu_bce, optimizer="adam", metrics=[cpu_bce])
model.fit(ds, epochs=3)
|
[
"In your loss function, you can re-write it in the following ways to make it work.\ndef cpu_bce(y_value, y_pred):\n with tf.device('/CPU:0'):\n if isinstance(y_value, tf.RaggedTensor):\n y_value = y_value.to_tensor()\n \n if isinstance(y_pred, tf.RaggedTensor): \n y_pred = y_pred.to_tensor()\n \n return tf.keras.losses.MeanSquaredError()(y_value, y_pred)\n\nmodel.compile(loss=cpu_bce, optimizer=\"adam\", metrics=[cpu_bce])\nmodel.fit(ds, epochs=3) # loss & metrics will vary\n\nOr, you don't need to convert ragged tensor, keep as it is.\ndef cpu_bce(y_value, y_pred):\n with tf.device('/CPU:0'):\n return tf.keras.losses.MeanSquaredError()(y_value, y_pred)\n\nmodel.compile(loss=cpu_bce, optimizer=\"adam\", metrics=[cpu_bce])\nmodel.fit(ds, epochs=3) # loss & metrics will alike\n\n\nThe reason you got AttributeError is because in metrics=[cpu_bce], the target and prediction tensor get converts to tesnor internally. You can inspect by printing your target and prediction in loss function. You would find that for loss function it's ragged but for metric function it's tensor. It may not feel convenient, in that case feel free to raise ticket in GitHub.\n"
] |
[
0
] |
[] |
[] |
[
"keras",
"loss",
"python",
"tensorflow"
] |
stackoverflow_0074665549_keras_loss_python_tensorflow.txt
|
Q:
Open PDF In PDF Viewer
I have a PDF in the assets folder: assets/my_resume.pdf.
How can I open it in a PDF viewer? I don't want to open it in a widget. I want it to open outside of my app with whatever PDF viewer is available to this device.
I tried: https://pub.dev/packages/open_file but that doesn't work with assets.
Future<void> launchPdf() async {
}
How can I do this? I mostly care about mobile so not even considering web.
A:
I don't know if this is the best method, but you could load the pdf file data, and save it to, for example, the Document directory on the device, and after that, use the open file package to open the pdf with a native device app.
void openAsset() async {
// load pdf into ram
ByteData pdf = await DefaultAssetBundle.of(context).load('assets/your_pdf.pdf');
// get documents directory
Directory documents = await getApplicationDocumentsDirectory();
// write data to a file in the documents directory
await File(documents.path + '/your_pdf.pdf').writeAsBytes(pdf.buffer.asUint8List());
// call open_pdf function
// ...
}
(Didn't test the code)
|
Open PDF In PDF Viewer
|
I have a PDF in the assets folder: assets/my_resume.pdf.
How can I open it in a PDF viewer? I don't want to open it in a widget. I want it to open outside of my app with whatever PDF viewer is available to this device.
I tried: https://pub.dev/packages/open_file but that doesn't work with assets.
Future<void> launchPdf() async {
}
How can I do this? I mostly care about mobile so not even considering web.
|
[
"I don't know if this is the best method, but you could load the pdf file data, and save it to, for example, the Document directory on the device, and after that, use the open file package to open the pdf with a native device app.\nvoid openAsset() async {\n // load pdf into ram\n ByteData pdf = await DefaultAssetBundle.of(context).load('assets/your_pdf.pdf');\n // get documents directory\n Directory documents = await getApplicationDocumentsDirectory();\n // write data to a file in the documents directory\n await File(documents.path + '/your_pdf.pdf').writeAsBytes(pdf.buffer.asUint8List());\n // call open_pdf function\n // ...\n}\n\n(Didn't test the code)\n"
] |
[
0
] |
[] |
[] |
[
"dart",
"flutter",
"pdf"
] |
stackoverflow_0074662361_dart_flutter_pdf.txt
|
Q:
How get UTC of start of day for specific timezone?
I have to convert 2022-11-29 to '2022-11-29T04:00:00.000Z', which is the start of that day in the Santo Domingo timezone expressed in UTC.
But the first problem is that StartFromUtc is already '2022-11-29T02:00:00+02:00', while I expected '2022-11-29T00:00:00+00:00',
so the next calculation is wrong too.
How can I fix this?
const tz = 'America/Santo_Domingo';
const startFromDate = '2022-11-29';
const utcdate = dayjs(startFromDate + 'T00:00:00.000Z');
const tzdate = utcdate.tz(tz);
const utcFromTzdate = utcdate.tz(tz);
console.log(
'StartFrom: ', startFromDate,
'\nStartFromUtc: ', utcdate.format(),
'\nCreated UTC: ', utcdate.toISOString(),
'\nSanto Domingo:', tzdate.format(),
'\nUTC For Santo Domingo:', utcFromTzdate.format(),
);
<script src="https://cdn.jsdelivr.net/npm/dayjs@1/dayjs.min.js"></script>
<script src="https://cdn.jsdelivr.net/npm/dayjs@1/plugin/utc.js"></script>
<script src="https://cdn.jsdelivr.net/npm/dayjs@1/plugin/timezone.js"></script>
<script>
dayjs.extend(window.dayjs_plugin_utc);
dayjs.extend(window.dayjs_plugin_timezone);
</script>
A:
Given a timestamp in YYYY-MM-DD format, dayjs assumes UTC (perhaps to be consistent with ECMA-262), so it can be parsed to zero hours UTC using:
dayjs(startFromDate);
To convert it to some other timezone without shifting the date and time values, add true as the second parameter when calling tz:
let tzdate = dayjs(startFromDate).tz(tz, true)
Then get the equivalent UTC date and time using tz again:
let utc = tzdate.tz('UTC')
E.g.
const tz = 'America/Santo_Domingo';
const startFromDate = '2022-11-29';
let tzdate = dayjs(startFromDate).tz(tz, true);
let utc = tzdate.tz('UTC');
console.log(
'StartFrom : ', startFromDate,
'\nSanto Domingo :', tzdate.format(),
  '\nUTC equivalent:', utc.format(),
);
<script src="https://cdn.jsdelivr.net/npm/dayjs@1/dayjs.min.js"></script>
<script src="https://cdn.jsdelivr.net/npm/dayjs@1/plugin/utc.js"></script>
<script src="https://cdn.jsdelivr.net/npm/dayjs@1/plugin/timezone.js"></script>
<script>
dayjs.extend(window.dayjs_plugin_utc);
dayjs.extend(window.dayjs_plugin_timezone);
</script>
A:
Try this:
const tz = 'America/Santo_Domingo';
const startFromDate = dayjs(new Date('2022-11-29 UTC'));
const tzdate = startFromDate.tz(tz);
const utcFromTzdate = startFromDate.tz('UTC');
console.log(
'StartFrom: ', startFromDate,
'\nStartFromUtc: ', startFromDate.format(),
'\nCreated UTC: ', startFromDate.toISOString(),
'\nSanto Domingo:', tzdate.format(),
'\nUTC from Santo Domingo:', utcFromTzdate.format(),
);
<script src="https://cdn.jsdelivr.net/npm/dayjs@1/dayjs.min.js"></script>
<script src="https://cdn.jsdelivr.net/npm/dayjs@1/plugin/utc.js"></script>
<script src="https://cdn.jsdelivr.net/npm/dayjs@1/plugin/timezone.js"></script>
<script>
dayjs.extend(window.dayjs_plugin_utc);
dayjs.extend(window.dayjs_plugin_timezone);
</script>
|
How get UTC of start of day for specific timezone?
|
I have to convert 2022-11-29 to '2022-11-29T04:00:00.000Z', which is the start of that day in the Santo Domingo timezone expressed in UTC.
But the first problem is that StartFromUtc is already '2022-11-29T02:00:00+02:00', while I expected '2022-11-29T00:00:00+00:00',
so the next calculation is wrong too.
How can I fix this?
const tz = 'America/Santo_Domingo';
const startFromDate = '2022-11-29';
const utcdate = dayjs(startFromDate + 'T00:00:00.000Z');
const tzdate = utcdate.tz(tz);
const utcFromTzdate = utcdate.tz(tz);
console.log(
'StartFrom: ', startFromDate,
'\nStartFromUtc: ', utcdate.format(),
'\nCreated UTC: ', utcdate.toISOString(),
'\nSanto Domingo:', tzdate.format(),
'\nUTC For Santo Domingo:', utcFromTzdate.format(),
);
<script src="https://cdn.jsdelivr.net/npm/dayjs@1/dayjs.min.js"></script>
<script src="https://cdn.jsdelivr.net/npm/dayjs@1/plugin/utc.js"></script>
<script src="https://cdn.jsdelivr.net/npm/dayjs@1/plugin/timezone.js"></script>
<script>
dayjs.extend(window.dayjs_plugin_utc);
dayjs.extend(window.dayjs_plugin_timezone);
</script>
|
[
"Given a timestamp in YYYY-MM-DD format, dayjs assumes UTC (perhaps to be consistent with ECMA-262), so it can be parsed to zero hours UTC using:\ndayjs(startFromDate);\n\nTo convert it to some other timezone without shifting the date and time values, add true as the second parameter when calling tz:\nlet tzdate = dayjs(startFromDate).tz(tz, true)\n\nThen get the equivalent UTC date and time using tz again:\nlet utc = tzdate.tz('UTC')\n\nE.g.\n\n\nconst tz = 'America/Santo_Domingo';\nconst startFromDate = '2022-11-29';\nlet tzdate = dayjs(startFromDate).tz(tz, true);\nlet utc = tzdate.tz('UTC');\n\nconsole.log(\n 'StartFrom : ', startFromDate, \n '\\nSanto Domingo :', tzdate.format(),\n '\\nUTC eqiuvalent:', utc.format(),\n);\n<script src=\"https://cdn.jsdelivr.net/npm/dayjs@1/dayjs.min.js\"></script>\n<script src=\"https://cdn.jsdelivr.net/npm/dayjs@1/plugin/utc.js\"></script>\n<script src=\"https://cdn.jsdelivr.net/npm/dayjs@1/plugin/timezone.js\"></script>\n<script>\n dayjs.extend(window.dayjs_plugin_utc);\n dayjs.extend(window.dayjs_plugin_timezone); \n</script>\n\n\n\n",
"Try this:\n\n\nconst tz = 'America/Santo_Domingo';\nconst startFromDate = dayjs(new Date('2022-11-29 UTC'));\nconst tzdate = startFromDate.tz(tz);\nconst utcFromTzdate = startFromDate.tz('UTC');\n\nconsole.log(\n 'StartFrom: ', startFromDate, \n '\\nStartFromUtc: ', startFromDate.format(), \n '\\nCreated UTC: ', startFromDate.toISOString(), \n '\\nSanto Domingo:', tzdate.format(),\n '\\nUTC from Santo Domingo:', utcFromTzdate.format(),\n);\n<script src=\"https://cdn.jsdelivr.net/npm/dayjs@1/dayjs.min.js\"></script>\n<script src=\"https://cdn.jsdelivr.net/npm/dayjs@1/plugin/utc.js\"></script>\n<script src=\"https://cdn.jsdelivr.net/npm/dayjs@1/plugin/timezone.js\"></script>\n<script>\n dayjs.extend(window.dayjs_plugin_utc);\n dayjs.extend(window.dayjs_plugin_timezone); \n</script>\n\n\n\n"
] |
[
2,
0
] |
[] |
[] |
[
"date",
"dayjs",
"javascript",
"momentjs",
"timezone"
] |
stackoverflow_0074660374_date_dayjs_javascript_momentjs_timezone.txt
|
Q:
What to replace FacebookAdapter&FacebookExtras with on com.google.ads.mediation:facebook:6.12.0.0?
Background
I had this on the project, to handle Facebook ads via Admob:
implementation 'com.google.ads.mediation:facebook:6.11.0.0'
And in code:
final AdRequest.Builder builder = new AdRequest.Builder();
builder.addNetworkExtrasBundle(FacebookAdapter.class, new FacebookExtras().setNativeBanner(true).build());
adLoader.loadAd(builder.build());
It works fine.
The problem
Now when updating to the new version:
implementation 'com.google.ads.mediation:facebook:6.12.0.0'
It shows that both FacebookAdapter and FacebookExtras don't exist anymore.
What I've tried
Checking the docs, even though they say to use this version, the code they tell you to use is the same as before:
https://developers.google.com/admob/android/mediation/meta#step_3_import_the_meta_audience_network_sdk_and_adapter
https://developers.google.com/ad-manager/mobile-ads-sdk/android/mediation/meta#using_meta_audience_network_native_ads_without_a_mediaview
Bundle extras = new FacebookExtras()
.setNativeBanner(true)
.build();
AdManagerAdRequest request = new AdManagerAdRequest.Builder()
.addNetworkExtrasBundle(FacebookAdapter.class, extras)
.build();
The question
What should I use instead? How come it's not documented?
A:
FacebookAdapter is now replaced by AdViewAdapter. This class is used to display Facebook ads in an AdView.
FacebookExtras is now replaced by FacebookMediationAdapter. This class is used to provide additional information to the Facebook Audience Network SDK, such as the placement ID and the user's gender and age.
Here is an example of how to use these classes in your code:
// Create an instance of AdViewAdapter, which replaces FacebookAdapter.
val adapter = AdViewAdapter()
// Create an instance of FacebookMediationAdapter, which replaces FacebookExtras.
val extras = FacebookMediationAdapter.Builder(context)
// Set the placement ID.
.setPlacementId(PLACEMENT_ID)
// Set the user's gender and age.
.setGender(FacebookMediationAdapter.GENDER_MALE)
.setAge(20)
.build()
// Create an instance of AdRequest and add the adapter and extras to it.
val request = AdRequest.Builder()
.addNetworkExtras(extras)
.addNetworkExtrasBundle(adapter, extras)
.build()
// Load the ad using the AdRequest.
adView.loadAd(request)
In this example, we create an instance of AdViewAdapter and an instance of FacebookMediationAdapter. We then add these objects to an AdRequest, and use the AdRequest to load an ad into an AdView.
Note that you will need to replace PLACEMENT_ID with the actual placement ID of your ad, and you may need to modify the code depending on your specific use case and requirements.
|
What to replace FacebookAdapter&FacebookExtras with on com.google.ads.mediation:facebook:6.12.0.0?
|
Background
I had this on the project, to handle Facebook ads via Admob:
implementation 'com.google.ads.mediation:facebook:6.11.0.0'
And in code:
final AdRequest.Builder builder = new AdRequest.Builder();
builder.addNetworkExtrasBundle(FacebookAdapter.class, new FacebookExtras().setNativeBanner(true).build());
adLoader.loadAd(builder.build());
It works fine.
The problem
Now when updating to the new version:
implementation 'com.google.ads.mediation:facebook:6.12.0.0'
It shows that both FacebookAdapter and FacebookExtras don't exist anymore.
What I've tried
Checking the docs, even though they say to use this version, the code they tell you to use is the same as before:
https://developers.google.com/admob/android/mediation/meta#step_3_import_the_meta_audience_network_sdk_and_adapter
https://developers.google.com/ad-manager/mobile-ads-sdk/android/mediation/meta#using_meta_audience_network_native_ads_without_a_mediaview
Bundle extras = new FacebookExtras()
.setNativeBanner(true)
.build();
AdManagerAdRequest request = new AdManagerAdRequest.Builder()
.addNetworkExtrasBundle(FacebookAdapter.class, extras)
.build();
The question
What should I use instead? How come it's not documented?
|
[
"FacebookAdapter is now replaced by AdViewAdapter. This class is used to display Facebook ads in an AdView.\nFacebookExtras is now replaced by FacebookMediationAdapter. This class is used to provide additional information to the Facebook Audience Network SDK, such as the placement ID and the user's gender and age.\nHere is an example of how to use these classes in your code:\n// Create an instance of AdViewAdapter, which replaces FacebookAdapter.\nval adapter = AdViewAdapter()\n\n// Create an instance of FacebookMediationAdapter, which replaces FacebookExtras.\nval extras = FacebookMediationAdapter.Builder(context)\n // Set the placement ID.\n .setPlacementId(PLACEMENT_ID)\n // Set the user's gender and age.\n .setGender(FacebookMediationAdapter.GENDER_MALE)\n .setAge(20)\n .build()\n\n// Create an instance of AdRequest and add the adapter and extras to it.\nval request = AdRequest.Builder()\n .addNetworkExtras(extras)\n .addNetworkExtrasBundle(adapter, extras)\n .build()\n\n// Load the ad using the AdRequest.\nadView.loadAd(request)\n\nIn this example, we create an instance of AdViewAdapter and an instance of FacebookMediationAdapter. We then add these objects to an AdRequest, and use the AdRequest to load an ad into an AdView.\nNote that you will need to replace PLACEMENT_ID with the actual placement ID of your ad, and you may need to modify the code depending on your specific use case and requirements.\n"
] |
[
0
] |
[] |
[] |
[
"admob",
"android",
"facebook_ads"
] |
stackoverflow_0074385930_admob_android_facebook_ads.txt
|
Q:
Is it possible to change the log level of a package in log4j?
Currently I have a library that is logging certain information as ERROR. If I change my log4j settings like this:
log4j.logger.com.company.theirpackage.foo=OFF
that will completely disable the library's logging altogether. However, what I'd really like is to still see the information, but have it logged at a WARN or INFO level. In other words, when that particular code calls log.error(), I want it to be as if they had called log.warn() or log.info() instead.
Is there a way to do this with log4j?
A:
Not directly, but you could write a custom appender that intercepts the calls, checks the levels, and then prints them at whatever level you want. Or, you could do some aspect oriented programming and intercept/change their calls.
But why would you want to change the level they log it at?
A:
I think this is possible, by implementing a Filter, in which the level inside a LoggingEvent is changed when it matches certain conditions.
A:
There are multiple ways to do that. I think the easiest is to use a custom Appender.
<?xml version="1.0" encoding="UTF-8"?>
<Configuration status="warn">
<Appenders>
<Console name="STDOUT" target="SYSTEM_OUT">
<PatternLayout pattern="console-> %-5p %c{1}:%L - %m%n" />
</Console>
<Console name="CUSTOM_ERROR_STDOUT" target="SYSTEM_OUT">
<PatternLayout pattern="dependent-> %-5p{ERROR=WARN} %c{1}:%L - %m%n" />
</Console>
</Appenders>
<Loggers>
<!-- Root logger referring to console appender -->
<Root level="INFO">
<AppenderRef ref="STDOUT"/>
</Root>
<Logger name="com.my.LogTest" level="INFO" additivity="false">
<AppenderRef ref="CUSTOM_ERROR_STDOUT" />
</Logger>
</Loggers>
</Configuration>
Here I convert all the error logs into warnings for that specific class, com.my.LogTest. You can find more info about Appender Layouts here. It has a pattern selector option which allows you to choose a different pattern based on your condition.
|
Is it possible to change the log level of a package in log4j?
|
Currently I have a library that is logging certain information as ERROR. If I change my log4j settings like this:
log4j.logger.com.company.theirpackage.foo=OFF
that will completely disable the library's logging altogether. However, what I'd really like is to still see the information, but have it logged at a WARN or INFO level. In other words, when that particular code calls log.error(), I want it to be as if they had called log.warn() or log.info() instead.
Is there a way to do this with log4j?
|
[
"Not directly, but you could write a custom appender that intercepts the calls, checks the levels, and then prints them at whatever level you want. Or, you could do some aspect oriented programming and intercept/change their calls.\nBut why would you want to change the level they log it at?\n",
"I think this is possible, by implementing a Filter, in which the level inside a LoggingEvent is changed when it matches certain conditions.\n",
"There are multiple ways to do that. I think the easiest way to do that is to use custom Appender.\n<?xml version=\"1.0\" encoding=\"UTF-8\"?>\n<Configuration status=\"warn\">\n <Appenders>\n <Console name=\"STDOUT\" target=\"SYSTEM_OUT\">\n <PatternLayout pattern=\"console-> %-5p %c{1}:%L - %m%n\" />\n </Console>\n <Console name=\"CUSTOM_ERROR_STDOUT\" target=\"SYSTEM_OUT\">\n <PatternLayout pattern=\"dependent-> %-5p{ERROR=WARN} %c{1}:%L - %m%n\" />\n </Console>\n </Appenders>\n <Loggers>\n <!-- Root logger referring to console appender -->\n <Root level=\"INFO\">\n <AppenderRef ref=\"STDOUT\"/>\n </Root>\n\n <Logger name=\"com.my.LogTest\" level=\"INFO\" additivity=\"false\">\n <AppenderRef ref=\"CUSTOM_ERROR_STDOUT\" />\n </Logger>\n </Loggers>\n</Configuration>\n\nHere I convert all the error logs into warning for that specific class com.my.LogTest. You can find more info about Appender Layouts here. It has pattern selector option which allows you to choose different pattern based on your condition.\n"
] |
[
2,
1,
0
] |
[] |
[] |
[
"java",
"log4j",
"logging"
] |
stackoverflow_0002060362_java_log4j_logging.txt
|
Q:
Upload image to folder and save path in mysql
I'm trying to save an image in a folder and save its path in my database, but I can't get it to work.
When I insert only the name and phone everything is OK, but when I add the file input, it saves my name and phone number but not the image path, and it does not upload the image to the folder.
Here is the code:
PED.PHP
<form class="form-horizontal form-label-left input_mask" method="post" id="add" name="add" enctype="multipart/form-data">
<div class="form-group"" >
<label class="control-label col-md-3 col-sm-3 col-xs-12" for="first-name">CLIENT</label>
<div class="col-md-9 col-sm-9 col-xs-12" style="float: left; width:70px;">
<input type="text" style="width:200px; float:left;"name="client" class="form-control" placeholder="" >
</div>
</div>
<div class="form-group" style="border: 1px ; height:45px;" >
<label class="control-label col-md-3 col-sm-3 col-xs-12" for="first-name">PHONE:
</label>
<div class="col-md-9 col-sm-9 col-xs-12" style="float: left; width:70px;">
<input type="text" style="width:200px; float:left;"name="phone" class="form-control" placeholder="" >
</div>
</div>
<div class="form-group" style="border: 1px ; height:45px;" >
<label class="control-label col-md-3 col-sm-3 col-xs-12" for="first-name">PICTURE:
</label>
<div class="col-md-9 col-sm-9 col-xs-12" style="float: left; width:70px;">
<input type="file" name="uploadImage" id="uploadImage">
</div>
</div>
<div class="ln_solid"></div>
<div class="form-group">
<button id="save_data" type="submit" class="btn btn-success">Guardar</button>
</div>
</div>
</form
and here I receive all data:
ADDPED.PHP
<?php
date_default_timezone_set('America/Mexico_City');
session_start();
if (empty($_POST['name'])) {
$errors[] = "Selecciona una tienda";
} else if (empty($_POST['cliente'])){
$errors[] = "Ingrese cliente";
} else if (
!empty($_POST['name']) &&
!empty($_POST['cliente'])
){
include "../config/config.php";
$client = $_POST["cliente"];
$phone = $_POST["telefono"];
("images/fotos/" . $_FILES["uploadImage"]["name"]);
if ($_FILES["file"]["error"] > 0)
{
echo "Error: " . $_FILES["file"]["error"] . "<br>";
}
else
{
move_uploaded_file($_FILES["file"]["tmp_name"],
"images/fotos/" . $_FILES["file"]["name"]);
}
$sql="insert into pedido (client,phone,image) value ('$client','$phone','".$_FILES['uploadImage']['name']."')";
$query_new_insert = mysqli_query($con,$sql);
if ($query_new_insert){
$messages[] = "Tu ticket ha sido ingresado satisfactoriamente.";
} else{
$errors []= "Lo siento algo ha salido mal intenta nuevamente.".mysqli_error($con);
}
} else {
$errors []= "Error desconocido.";
}
if (isset($errors)){
?>
<div class="alert alert-danger" role="alert">
<button type="button" class="close" data-dismiss="alert">×</button>
<strong>Error!</strong>
<?php
foreach ($errors as $error) {
echo $error;
}
?>
</div>
<?php
}
if (isset($messages)){
?>
<div class="alert alert-success" role="alert">
<button type="button" class="close" data-dismiss="alert">×</button>
<strong>¡Bien hecho!</strong>
<?php
foreach ($messages as $message) {
echo $message;
}
?>
</div>
<?php
}
?>
A:
Though the question is a little old, I will answer it with a simple example to help those who come later looking for an answer. In my web app, I have the following setup:
I'm using XAMPP
Folder named uploads inside the web app main folder to store uploaded images
The following is the code for html form driver-form.html to collect driver information with driver picture:
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="UTF-8">
<title>Driver Entry Page</title>
</head>
<body>
<header><h3>Please enter driver information:</h3></header>
<form action="http://localhost/car-rental/insert_driver.php" target="_blank" method="post" enctype="multipart/form-data">
Name:<br>
<input type="text" name="name"><br>
<br>
Age:<br>
<input type="text" name="age"><br>
<br>
Picture:<br>
<input type='file' name='imagefile'><br>
<br>
<input type="submit" value="Add driver">
</form>
</body>
</html>
And this is the php code insert_driver.php to insert the record in the database and move the picture file to the uploads folder:
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="UTF-8">
<title>Insert Car Data</title>
</head>
<body>
<?php
//Database connection parameters
$host = 'localhost';
$dbname = 'carsdatabase';
$username = 'root';
$password = '';
//Connecting to the database
try{
$pdo = new PDO("mysql:host=$host;dbname=$dbname", $username, $password);
} catch (PDOException $pe) {
die("Could not connect to the database $dbname :" . $pe->getMessage());
}
?>
<?php
//Storing values int o variables
$name = $_POST["name"];
$age = $_POST["age"];
// File name
$filename = $_FILES["imagefile"]["name"];
// Location
$target_file = './uploads/'.$filename;
// file extension
$file_extension = pathinfo($target_file, PATHINFO_EXTENSION);
$file_extension = strtolower($file_extension);
// Valid image extension
$valid_extension = array("png","jpeg","jpg");
if(in_array($file_extension, $valid_extension)) {
echo "Filename is $filename <br>";
echo "Target file is $target_file <br>";
if(move_uploaded_file($_FILES['imagefile']['tmp_name'],$target_file)
) {
//Preparing insert array
$task = array('name' => $name,
'age' => $age,
'filename' => $filename,
'fileimage' => $target_file
);
// Execute query
$sql = 'INSERT INTO DRIVER (
NAME,
AGE,
FILENAME,
FILEIMAGE
)
VALUES (
:name,
:age,
:filename,
:fileimage
);';
$q = $pdo->prepare($sql);
$q->execute($task);
}
}
?>
</body>
</html>
And this is the code view-drivers.php that lists the added drivers along with their images from the database:
<?php
$host = 'localhost';
$dbname = 'carsdatabase';
$username = 'root';
$password = '';
try {
$pdo = new PDO("mysql:host=$host;dbname=$dbname", $username, $password);
$sql = 'SELECT *
FROM DRIVER
ORDER BY DRIVER_ID';
$q = $pdo->query($sql);
$q->setFetchMode(PDO::FETCH_ASSOC);
} catch (PDOException $pe) {
die("Could not connect to the database $dbname :" . $pe->getMessage());
}
?>
<!DOCTYPE html>
<html>
<head>
<title>Drivers Information Page</title>
</head>
<body>
<div id="container">
<h1>Tasks List</h1>
<table border="1">
<thead>
<tr>
<th>Driver ID</th>
<th>Name</th>
<th>Age</th>
<th>Picture</th>
</tr>
</thead>
<tbody>
<?php while ($row = $q->fetch()): ?>
<tr>
<td><?php echo htmlspecialchars($row['DRIVER_ID']) ?></td>
<td><?php echo htmlspecialchars($row['NAME']); ?></td>
<td><?php echo htmlspecialchars($row['AGE']); ?></td>
<td><img src="<?=$row['FILEIMAGE']?>" title="<?=$row['FILENAME']?>" width='100' height='100'></td>
</tr>
<?php endwhile; ?>
</tbody>
</table>
</body>
</div>
</html>
Finally, below is a screenshot from phpMyAdmin showing the structure of the DRIVER table:
|
Upload image to folder and save path in mysql
|
I'm trying to save an image in a folder and save the path in my database, but I can't do that.
When I insert only the name and phone everything is ok, but when I add the input type file, it saves the name and phone number but not the image path, and it does not upload the image to the folder.
Here is the code:
PED.PHP
<form class="form-horizontal form-label-left input_mask" method="post" id="add" name="add" enctype="multipart/form-data">
<div class="form-group"" >
<label class="control-label col-md-3 col-sm-3 col-xs-12" for="first-name">CLIENT</label>
<div class="col-md-9 col-sm-9 col-xs-12" style="float: left; width:70px;">
<input type="text" style="width:200px; float:left;"name="client" class="form-control" placeholder="" >
</div>
</div>
<div class="form-group" style="border: 1px ; height:45px;" >
<label class="control-label col-md-3 col-sm-3 col-xs-12" for="first-name">PHONE:
</label>
<div class="col-md-9 col-sm-9 col-xs-12" style="float: left; width:70px;">
<input type="text" style="width:200px; float:left;"name="phone" class="form-control" placeholder="" >
</div>
</div>
<div class="form-group" style="border: 1px ; height:45px;" >
<label class="control-label col-md-3 col-sm-3 col-xs-12" for="first-name">PICTURE:
</label>
<div class="col-md-9 col-sm-9 col-xs-12" style="float: left; width:70px;">
<input type="file" name="uploadImage" id="uploadImage">
</div>
</div>
<div class="ln_solid"></div>
<div class="form-group">
<button id="save_data" type="submit" class="btn btn-success">Guardar</button>
</div>
</div>
</form
and here I receive all data:
ADDPED.PHP
<?php
date_default_timezone_set('America/Mexico_City');
session_start();
if (empty($_POST['name'])) {
$errors[] = "Selecciona una tienda";
} else if (empty($_POST['cliente'])){
$errors[] = "Ingrese cliente";
} else if (
!empty($_POST['name']) &&
!empty($_POST['cliente'])
){
include "../config/config.php";
$client = $_POST["cliente"];
$phone = $_POST["telefono"];
("images/fotos/" . $_FILES["uploadImage"]["name"]);
if ($_FILES["file"]["error"] > 0)
{
echo "Error: " . $_FILES["file"]["error"] . "<br>";
}
else
{
move_uploaded_file($_FILES["file"]["tmp_name"],
"images/fotos/" . $_FILES["file"]["name"]);
}
$sql="insert into pedido (client,phone,image) value ('$client','$phone','".$_FILES['uploadImage']['name']."')";
$query_new_insert = mysqli_query($con,$sql);
if ($query_new_insert){
$messages[] = "Tu ticket ha sido ingresado satisfactoriamente.";
} else{
$errors []= "Lo siento algo ha salido mal intenta nuevamente.".mysqli_error($con);
}
} else {
$errors []= "Error desconocido.";
}
if (isset($errors)){
?>
<div class="alert alert-danger" role="alert">
<button type="button" class="close" data-dismiss="alert">×</button>
<strong>Error!</strong>
<?php
foreach ($errors as $error) {
echo $error;
}
?>
</div>
<?php
}
if (isset($messages)){
?>
<div class="alert alert-success" role="alert">
<button type="button" class="close" data-dismiss="alert">×</button>
<strong>¡Bien hecho!</strong>
<?php
foreach ($messages as $message) {
echo $message;
}
?>
</div>
<?php
}
?>
|
[
"Though the question is little old, I will answer it with some simple example to help those who come later looking for an answer. In my web app, I have the following information:\n\nI'm using XAMPP\nFolder named uploads inside the web app main folder to store uploaded images\n\nThe following is the code for html form driver-form.html to collect driver information with driver picture:\n\n\n<!DOCTYPE html>\n<html lang=\"en\">\n<head>\n <meta charset=\"UTF-8\">\n <title>Driver Entry Page</title>\n</head>\n<body>\n <header><h3>Please enter driver information:</h3></header>\n <form action=\"http://localhost/car-rental/insert_driver.php\" target=\"_blank\" method=\"post\" enctype=\"multipart/form-data\">\n Name:<br>\n <input type=\"text\" name=\"name\"><br>\n <br>\n Age:<br>\n <input type=\"text\" name=\"age\"><br>\n <br>\n Picture:<br>\n <input type='file' name='imagefile'><br>\n <br>\n <input type=\"submit\" value=\"Add driver\">\n </form>\n</body>\n</html>\n\n\n\nAnd this is the php code insert_driver.php to insert the record in the database and move the picture file to the uploads folder:\n\n\n<!DOCTYPE html>\n<html lang=\"en\">\n<head>\n <meta charset=\"UTF-8\">\n <title>Insert Car Data</title>\n</head>\n<body>\n <?php\n //Database connection parameters\n $host = 'localhost';\n $dbname = 'carsdatabase';\n $username = 'root';\n $password = '';\n\n //Connecting to the database\n try{\n $pdo = new PDO(\"mysql:host=$host;dbname=$dbname\", $username, $password);\n } catch (PDOException $pe) {\n die(\"Could not connect to the database $dbname :\" . 
$pe->getMessage());\n }\n ?> \n <?php\n //Storing values int o variables\n $name = $_POST[\"name\"];\n $age = $_POST[\"age\"];\n // File name\n $filename = $_FILES[\"imagefile\"][\"name\"];\n\n // Location\n $target_file = './uploads/'.$filename;\n\n // file extension\n $file_extension = pathinfo($target_file, PATHINFO_EXTENSION);\n $file_extension = strtolower($file_extension);\n\n // Valid image extension\n $valid_extension = array(\"png\",\"jpeg\",\"jpg\");\n\n if(in_array($file_extension, $valid_extension)) {\n echo \"Filename is $filename <br>\";\n echo \"Target file is $target_file <br>\";\n if(move_uploaded_file($_FILES['imagefile']['tmp_name'],$target_file)\n ) {\n \n //Preparing insert array\n $task = array('name' => $name,\n 'age' => $age,\n 'filename' => $filename,\n 'fileimage' => $target_file\n );\n\n // Execute query\n $sql = 'INSERT INTO DRIVER (\n NAME,\n AGE,\n FILENAME,\n FILEIMAGE\n )\n VALUES (\n :name,\n :age,\n :filename,\n :fileimage\n );';\n $q = $pdo->prepare($sql);\n $q->execute($task);\n }\n }\n ?>\n</body>\n</html>\n\n\n\nAnd this is the code view-drivers.php that lists the added drivers along with their images from the database:\n\n\n<?php\n $host = 'localhost';\n $dbname = 'carsdatabase';\n $username = 'root';\n $password = '';\n\n try {\n $pdo = new PDO(\"mysql:host=$host;dbname=$dbname\", $username, $password);\n\n $sql = 'SELECT *\n FROM DRIVER\n ORDER BY DRIVER_ID';\n $q = $pdo->query($sql);\n $q->setFetchMode(PDO::FETCH_ASSOC);\n\n } catch (PDOException $pe) {\n die(\"Could not connect to the database $dbname :\" . 
$pe->getMessage());\n }\n\n?>\n<!DOCTYPE html>\n<html>\n<head>\n <title>Drivers Information Page</title>\n</head>\n<body>\n<div id=\"container\">\n <h1>Tasks List</h1>\n <table border=\"1\">\n <thead>\n <tr>\n <th>Driver ID</th>\n <th>Name</th>\n <th>Age</th>\n <th>Picture</th>\n </tr>\n </thead>\n <tbody>\n <?php while ($row = $q->fetch()): ?>\n <tr>\n <td><?php echo htmlspecialchars($row['DRIVER_ID']) ?></td>\n <td><?php echo htmlspecialchars($row['NAME']); ?></td>\n <td><?php echo htmlspecialchars($row['AGE']); ?></td>\n <td><img src=\"<?=$row['FILEIMAGE']?>\" title=\"<?=$row['FILENAME']?>\" width='100' height='100'></td>\n </tr>\n <?php endwhile; ?>\n </tbody>\n </table>\n</body>\n</div>\n</html>\n\n\n\nFinally, below is a screenshot from phpMyAdmin showing the structure of the DRIVER table:\n\n"
] |
[
0
] |
[
"The maximum default file size is 2MB for file upload. but you can change it in the configurations and also check the data type you used in the DB and the maximum limit of it. It's better to upload the images in a folder and insert the file name to the DB. it will also increase the performance of your system. check this example for upload image to Database.\n"
] |
[
-1
] |
[
"file",
"mysql",
"path",
"php",
"post"
] |
stackoverflow_0060640584_file_mysql_path_php_post.txt
|
Q:
iClusterPlus error for clustering HNSC TCGA data
I'm doing clustering for the HNSC TCGA dataset using the iClusterPlus package.
I have two errors
First
**fit.single=iClusterPlus(dt1=df_m_tong1,dt2=df_c_tong1,dt3=df_e_tong1,
type=c("binomial","gaussian","gaussian"),
lambda=c(0.04,0.61,0.90),K=2,maxiter=10)**
Error in dataType(dt1, type[1], K) :
Error: some columns of binomial data are made of categories not equal to 2, which must be removed.
But I have binomial-typed mutation data.
Second,
**for(k in 1:5){
cv3.fit = tune.iClusterPlus(cpus=5,dt1=df_m_tong1,dt2=df_c_tong1,dt3=df_e_tong1,
type=c("binomial","gaussian","gaussian"),K=k,n.lambda=185,
scale.lambda=c(0.05,1,1),maxiter=20)
save(cv3.fit, file=paste("cv3.fit.k",k,".Rdata",sep=""))
}**
185 points of lambdas are used to tune parameters.
Begin parallel computation
End parallel computation
185 points of lambdas are used to tune parameters.
Begin parallel computation
End parallel computation
185 points of lambdas are used to tune parameters.
Begin parallel computation
End parallel computation
185 points of lambdas are used to tune parameters.
Begin parallel computation
End parallel computation
185 points of lambdas are used to tune parameters.
Begin parallel computation
End parallel computation
Warning messages:
1: In mclapply(1:nrow(ud), FUN = function(x) iClusterPlus(dt1, dt2, :
185 function calls resulted in an error
2: In mclapply(1:nrow(ud), FUN = function(x) iClusterPlus(dt1, dt2, :
185 function calls resulted in an error
3: In mclapply(1:nrow(ud), FUN = function(x) iClusterPlus(dt1, dt2, :
185 function calls resulted in an error
4: In mclapply(1:nrow(ud), FUN = function(x) iClusterPlus(dt1, dt2, :
185 function calls resulted in an error
5: In mclapply(1:nrow(ud), FUN = function(x) iClusterPlus(dt1, dt2, :
185 function calls resulted in an error
After that, I couldn't proceed any further
Thanks in advance
A:
For the first error I think you should use the 'binomial' type for all dt:
type=c("binomial","binomial","binomial")
Alternatively, as the error message itself suggests, remove the columns of dt1 that do not contain exactly two categories.
For the second issue - did you install the parallel package? Maybe that is the issue.
|
iClusterPlus error for clustering HNSC TCGA data
|
I'm doing clustering for the HNSC TCGA dataset using the iClusterPlus package.
I have two errors
First
**fit.single=iClusterPlus(dt1=df_m_tong1,dt2=df_c_tong1,dt3=df_e_tong1,
type=c("binomial","gaussian","gaussian"),
lambda=c(0.04,0.61,0.90),K=2,maxiter=10)**
Error in dataType(dt1, type[1], K) :
Error: some columns of binomial data are made of categories not equal to 2, which must be removed.
But I have binomial-typed mutation data.
Second,
**for(k in 1:5){
cv3.fit = tune.iClusterPlus(cpus=5,dt1=df_m_tong1,dt2=df_c_tong1,dt3=df_e_tong1,
type=c("binomial","gaussian","gaussian"),K=k,n.lambda=185,
scale.lambda=c(0.05,1,1),maxiter=20)
save(cv3.fit, file=paste("cv3.fit.k",k,".Rdata",sep=""))
}**
185 points of lambdas are used to tune parameters.
Begin parallel computation
End parallel computation
185 points of lambdas are used to tune parameters.
Begin parallel computation
End parallel computation
185 points of lambdas are used to tune parameters.
Begin parallel computation
End parallel computation
185 points of lambdas are used to tune parameters.
Begin parallel computation
End parallel computation
185 points of lambdas are used to tune parameters.
Begin parallel computation
End parallel computation
Warning messages:
1: In mclapply(1:nrow(ud), FUN = function(x) iClusterPlus(dt1, dt2, :
185 function calls resulted in an error
2: In mclapply(1:nrow(ud), FUN = function(x) iClusterPlus(dt1, dt2, :
185 function calls resulted in an error
3: In mclapply(1:nrow(ud), FUN = function(x) iClusterPlus(dt1, dt2, :
185 function calls resulted in an error
4: In mclapply(1:nrow(ud), FUN = function(x) iClusterPlus(dt1, dt2, :
185 function calls resulted in an error
5: In mclapply(1:nrow(ud), FUN = function(x) iClusterPlus(dt1, dt2, :
185 function calls resulted in an error
After that, I couldn't proceed any further
Thanks in advance
|
[
"for the first error I think you should use 'binomial' type for all dt:\ntype=c(\"binomial\",\"binomial\",\"binomial\")\nFor the second issue - did you install parallel? Maybe this is an issue.\n"
] |
[
0
] |
[] |
[] |
[
"cluster_analysis",
"head",
"r"
] |
stackoverflow_0073949824_cluster_analysis_head_r.txt
|
Q:
Python Pandas - Assign Values to Rows based on Top x% Values found in a Column
Take this mockup dataframe for example:
CustomerID Number of Purchases
ABC 5
DEF 24
GHI 85
JKL 2
MNO 100
Assume this dataframe is first sorted by Number of Purchases (descending).
How do I add a new column to it called Score, and have values assigned to it as follows:
Out of the top 60% customers (meaning the first 3 rows after sorting), 3 should be assigned to Score.
Out of the next top 20% customers (row 4 after sorting), 2 should be assigned to Score.
Out of the next and last top 20% customers (row 5 after sorting), 1 should be assigned to Score.
How do I do this in a large dataframe?
A:
import pandas as pd
import numpy as np
df = pd.DataFrame({'CustomerID': ['ABC', 'DEF', 'GHI', 'JKL', 'MNO'],
'Number of Purchases': [5, 24, 85, 2, 100]})
df = df.sort_values(by=['Number of Purchases'], ascending=False)
proc = len(df) / 100
aaa = [[0, int(60 * proc), 3], [int(60 * proc), int(80 * proc), 2], [int(80 * proc), len(df), 1]]
df['Score'] = np.nan
df = df.reset_index()
for i in aaa:
df.loc[i[0]:i[1] - 1, 'Score'] = i[2]
print(df)
Output
index CustomerID Number of Purchases Score
0 4 MNO 100 3.0
1 2 GHI 85 3.0
2 1 DEF 24 3.0
3 0 ABC 5 2.0
4 3 JKL 2 1.0
Sorting comes first, with ascending=False so that the rows are in descending order of purchases.
The proc variable holds how many rows make up 1% of the data.
A nested list aaa is created, in which the first element of each entry is the start index and the second is the end index of a range of rows; the third element is the score. A 'Score' column is created with empty values and the dataframe index is reset.
In the loop, rows are selected with loc using slices (from start index to end index). Since loc slicing includes the end label (for example, if the end index is 3, the row with index 3 is also selected), i[1] - 1 is used.
A:
Once the dataframe (df) has been sorted by Number of Purchases,
you can generate the Score column by using the
.rank(pct=True) function, which calculates the percentile rank,
and then applying a lambda function to convert this rank to a score.
Code:
import pandas as pd
# Create dataframe
df = pd.DataFrame({ 'CustomerID': ['ABC', 'DEF', 'GHI', 'JKL', 'MNO'],
'Number of Purchases': [5, 24, 85, 2, 100]})
# Sort and then create 'Score' column
df = df.sort_values(by=['Number of Purchases'], ascending=False).reset_index(drop=True)
df['Score'] = df['Number of Purchases'].rank(pct=True).apply(lambda x: 1 if x<=0.2 else 2 if x<=0.4 else 3)
print(df)
Output:
CustomerID Number of Purchases Score
0 MNO 100 3
1 GHI 85 3
2 DEF 24 3
3 ABC 5 2
4 JKL 2 1
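If more than three tiers are ever needed, the rank-based idea above generalizes with pd.cut, which buckets the percentile ranks by explicit bin edges (a sketch on the same mockup data):

```python
import pandas as pd

df = pd.DataFrame({"CustomerID": ["ABC", "DEF", "GHI", "JKL", "MNO"],
                   "Number of Purchases": [5, 24, 85, 2, 100]})
df = df.sort_values("Number of Purchases", ascending=False).reset_index(drop=True)

# Percentile rank of each row (1.0 = most purchases), then bucket the ranks:
# (0, 0.2] -> 1, (0.2, 0.4] -> 2, (0.4, 1.0] -> 3.
pct = df["Number of Purchases"].rank(pct=True)
df["Score"] = pd.cut(pct, bins=[0, 0.2, 0.4, 1.0], labels=[1, 2, 3]).astype(int)
print(df)
```

Adding a fourth tier is then just one more bin edge and one more label, with no change to the rest of the code.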
|
Python Pandas - Assign Values to Rows based on Top x% Values found in a Column
|
Take this mockup dataframe for example:
CustomerID Number of Purchases
ABC 5
DEF 24
GHI 85
JKL 2
MNO 100
Assume this dataframe is first sorted by Number of Purchases (descending).
How do I add a new column to it called Score, and have values assigned to it as follows:
Out of the top 60% customers (meaning the first 3 rows after sorting), 3 should be assigned to Score.
Out of the next top 20% customers (row 4 after sorting), 2 should be assigned to Score.
Out of the next and last top 20% customers (row 5 after sorting), 1 should be assigned to Score.
How do I do this in a large dataframe?
|
[
"import pandas as pd\nimport numpy as np\n\ndf = pd.DataFrame({'CustomerID': ['ABC', 'DEF', 'GHI', 'JKL', 'MNO'],\n 'Number of Purchases': [5, 24, 85, 2, 100]})\n\ndf = df.sort_values(by=['Number of Purchases'], ascending=False)\n\n\nproc = len(df) / 100\naaa = [[0, int(60 * proc), 3], [int(60 * proc), int(80 * proc), 2], [int(80 * proc), len(df), 1]]\ndf['Score'] = np.nan\ndf = df.reset_index()\n\n\nfor i in aaa:\n df.loc[i[0]:i[1] - 1, 'Score'] = i[2]\n\nprint(df)\n\nOutput\n index CustomerID Number of Purchases Score\n0 4 MNO 100 3.0\n1 2 GHI 85 3.0\n2 1 DEF 24 3.0\n3 0 ABC 5 2.0\n4 3 JKL 2 1.0\n\nFirst comes sorting, where it is indicated: ascending=False, so that the sorting is in reverse order.\nThe proc variable is how many rows in 1%.\nA nested list aaa is created. In which the first element is the start index, the second is the end index of the range of strings. And the third element is evaluation. A 'Score' column is created with empty values and dataframe indexes are reset.\nIn the loop, the rows are accessed by loc through slices (this is from which index the rows are selected). Since loc is accessed inclusively (for example, the end index is 3, then the data will be selected by index 3), so i[1] - 1 is used.\n",
"Once the dataframe (df) has been sorted by Number_of_Purchases, \nyou can generate the Score column by using the:\n\n.rank(pct=True) - function: which calculates the rank (percentage)\nand then apply a lambda function to convert this rank to a score.\n\nCode:\nimport pandas as pd\n\n# Create dataframe\ndf = pd.DataFrame({ 'CustomerID': ['ABC', 'DEF', 'GHI', 'JKL', 'MNO'],\n 'Number of Purchases': [5, 24, 85, 2, 100]})\n\n# Sort and then create 'Score' column\ndf = df.sort_values(by=['Number of Purchases'], ascending=False).reset_index(drop=True)\n\ndf['Score'] = df['Number of Purchases'].rank(pct=True).apply(lambda x: 1 if x<=0.2 else 2 if x<=0.4 else 3)\n\nprint(df)\n\nOutput:\n CustomerID Number of Purchases Score\n0 MNO 100 3\n1 GHI 85 3\n2 DEF 24 3\n3 ABC 5 2\n4 JKL 2 1\n\n"
] |
[
0,
0
] |
[] |
[] |
[
"pandas",
"python"
] |
stackoverflow_0074641326_pandas_python.txt
|
Q:
Extracting several Tables from same spreadsheet and merging them to get a single continuous Data frame
I have a spreadsheet of time-series data.
It has separate tables for each year with a single row gap in between.
I want to have the year from the table header as part of the date column, so that I can plot charts and do simple comparisons of the data (YoY), etc.
import pandas as pd
RigCountWorld_df = pd.read_excel(open('Worldwide Rig Count Nov 2022.xlsx', 'rb'),
sheet_name='Worldwide_Rigcount',index_col=None, header=6)
RigCountWorld_df
The code I have is nowhere near enough. Even just the names of the pandas operations I need to use would be helpful for me.
I need a continuous table with data from all years. It would make sense to have latest data at the very end.
Even transposing the tables separately and adding them as new columns would make sense (with the column headers containing Year-Month names).
A:
Here is a proposition with some of pandas built-in functions.
import pandas as pd
import numpy as np
df = pd.read_excel("Worldwide Rig Count Nov 2022.xlsx",
sheet_name="Worldwide_Rigcount", header=None, usecols="B:K", skiprows=6)
df.dropna(how="all", inplace=True)
df.insert(0, "Year", np.where(df[10].eq("Total World"), df[1], None))
df["Year"].ffill(inplace=True)
df.drop_duplicates(subset= df.columns[2:], inplace=True)
df.columns = ["Year", "Month"] + df.loc[0, 2:].tolist()
df = df.loc[1:, :].reset_index(drop=True)
# Output :
print(df.sample(5).to_string())
Year Month Latin America Europe Africa Middle East Asia Pacific Total Intl. Canada U.S. Total World
613 1975 Mar 310 113 122 173 208 926 192 1651 2769
588 1977 Apr 324 135 165 185 167 976 129 1907 3012
596 1977 Dec 353 142 172 195 182 1044 259 2141 3444
221 2005 Jan 307 57 50 242 204 860 550 1255 2665
566 1979 Aug 440 149 199 144 219 1151 376 2222 3749
# Check: 48 years with 13 rows (12 months + Average) for each year.
print(df.groupby("Year").size().value_counts())
13 48
dtype: int64
A:
You can use the pandas melt() function to reshape the data from wide to long format, so that each row contains a single observation. From there, you can use the pandas concat() function to join the melted dataframes from each sheet together into a single dataframe.
To add the year from the sheet header to the date column, you can use the pandas assign() function to create a new column with the year from the sheet name. Then, use pd.to_datetime() to convert the existing date column into datetime objects and combine it with the year from the new column.
Here's an example of how to do it:
import pandas as pd
# create a list of sheet names
sheets = ['Worldwide_Rigcount_2020', 'Worldwide_Rigcount_2021', 'Worldwide_Rigcount_2022']
# create an empty list to store the dataframes
dfs = []
# loop through each sheet
for sheet in sheets:
# read the sheet into a dataframe
df = pd.read_excel('Worldwide Rig Count Nov 2022.xlsx', sheet_name=sheet, index_col=None, header=6)
# extract the year from the sheet name
year = int(sheet.split('_')[-1])
# melt the dataframe from wide to long format
df_melted = df.melt(id_vars=['Date'])
# add the year from the sheet name to the dataframe
df_melted = df_melted.assign(year=year)
# convert the date column to datetime objects
df_melted['Date'] = pd.to_datetime(df_melted['Date'], format='%b %Y', errors='coerce').dt.strftime('%Y-%m-%d') + '-' + df_melted['year'].astype(str)
# append the dataframe to the list
dfs.append(df_melted)
# concat the list of dataframes into a single dataframe
df_final = pd.concat(dfs)
# print the final dataframe
print(df_final)
A:
Another solution validating the year and month content (it is assumed that the column names are in the first row of RigCountWorld_df):
df = RigCountWorld_df.copy()
first_col = 2 # First column with data
column_names = df.iloc[0, first_col:].to_list()
df["Year"] = df.iloc[:,[1]].where(df.iloc[:,1].astype(str).str.match(r"^20\d\d$"), None).ffill()
df["Month"] = df.iloc[:,[1]].where(df.iloc[:,1].astype(str).isin(("Jan", "Feb", "Mar", "Apr", "May", "Jun", "Jul", "Aug", "Sep", "Oct", "Nov", "Dec")), None)
df = df[df['Month'].notna()]
df = df.iloc[:, first_col:].set_index(["Year", "Month"])
df.columns = column_names
df
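Since the workbook itself isn't available here, the blank-row / year-header splitting idea behind the answers above can be illustrated on a small synthetic frame (hypothetical data and column names, not the real rig-count sheet):

```python
import pandas as pd
import numpy as np

# Synthetic stacked layout: a year header row, data rows, then an all-NaN gap row.
raw = pd.DataFrame({
    0: ["2021", "Jan", "Feb", np.nan, "2022", "Jan", "Feb"],
    1: [np.nan, 10, 12, np.nan, np.nan, 11, 13],
})

raw = raw.dropna(how="all")                  # drop the blank gap rows
is_year = raw[0].str.match(r"^20\d\d$")      # header rows carry only the year
raw["Year"] = raw[0].where(is_year).ffill()  # propagate each year down its block
tidy = raw[~is_year].rename(columns={0: "Month", 1: "Rigs"})
print(tidy[["Year", "Month", "Rigs"]])
```

Once the data is in this long form, a YoY comparison is just a groupby or pivot on Year and Month.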
|
Extracting several Tables from same spreadsheet and merging them to get a single continuous Data frame
|
I have a spreadsheet of time-series data.
It has separate tables for each year with a single row gap in between.
I want to have the year from the table header as part of the date column, so that I can plot charts and do simple comparisons of the data (YoY), etc.
import pandas as pd
RigCountWorld_df = pd.read_excel(open('Worldwide Rig Count Nov 2022.xlsx', 'rb'),
sheet_name='Worldwide_Rigcount',index_col=None, header=6)
RigCountWorld_df
The code I have is nowhere near enough. Even just the names of the pandas operations I need to use would be helpful for me.
I need a continuous table with data from all years. It would make sense to have latest data at the very end.
Even transposing the tables separately and adding them as new columns would make sense (with the column headers containing Year-Month names).
|
[
"Here is a proposition with some of pandas built-in functions.\nimport pandas as pd\n\ndf = pd.read_excel(\"Worldwide Rig Count Nov 2022.xlsx\",\n sheet_name=\"Worldwide_Rigcount\", header=None, usecols=\"B:K\", skiprows=6)\n \ndf.dropna(how=\"all\", inplace=True)\ndf.insert(0, \"Year\", np.where(df[10].eq(\"Total World\"), df[1], None))\ndf[\"Year\"].ffill(inplace=True)\ndf.drop_duplicates(subset= df.columns[2:], inplace=True)\ndf.columns = [\"Year\", \"Month\"] + df.loc[0, 2:].tolist()\ndf = df.loc[1:, :].reset_index(drop=True)\n\n# Output :\nprint(df.sample(5).to_string())\n\n Year Month Latin America Europe Africa Middle East Asia Pacific Total Intl. Canada U.S. Total World\n613 1975 Mar 310 113 122 173 208 926 192 1651 2769\n588 1977 Apr 324 135 165 185 167 976 129 1907 3012\n596 1977 Dec 353 142 172 195 182 1044 259 2141 3444\n221 2005 Jan 307 57 50 242 204 860 550 1255 2665\n566 1979 Aug 440 149 199 144 219 1151 376 2222 3749\n\n# Check :\n48 years with 13 rows (12 months + Average) for each year.\nprint(df.groupby(\"Year\").size().value_counts())\n\n13 48\ndtype: int64\n\n",
"You can use the pandas melt() function to reshape the data from wide to long format, so that each row contains a single observation. From there, you can use the pandas concat() function to join the melted dataframes from each sheet together into a single dataframe.\nTo add the year from the sheet header to the date column, you can use the pandas assign() function to create a new column with the year from the sheet name. Then, use the pandas datetime.strptime() function to convert the existing date column into datetime objects with the year from the new column.\nHere's an example of how to do it:\nimport pandas as pd\n\n# create a list of sheet names\nsheets = ['Worldwide_Rigcount_2020', 'Worldwide_Rigcount_2021', 'Worldwide_Rigcount_2022']\n\n# create an empty list to store the dataframes\ndfs = []\n\n# loop through each sheet\nfor sheet in sheets:\n # read the sheet into a dataframe\n df = pd.read_excel('Worldwide Rig Count Nov 2022.xlsx', sheet_name=sheet, index_col=None, header=6)\n \n # extract the year from the sheet name\n year = int(sheet.split('_')[-1])\n \n # melt the dataframe from wide to long format\n df_melted = df.melt(id_vars=['Date'])\n \n # add the year from the sheet name to the dataframe\n df_melted = df_melted.assign(year=year)\n \n # convert the date column to datetime objects\n df_melted['Date'] = pd.to_datetime(df_melted['Date'], format='%b %Y', errors='coerce').dt.strftime('%Y-%m-%d') + '-' + df_melted['year'].astype(str)\n \n # append the dataframe to the list\n dfs.append(df_melted)\n \n# concat the list of dataframes into a single dataframe\ndf_final = pd.concat(dfs)\n\n# print the final dataframe\nprint(df_final)\n\n",
"Another solution validating year and month content (it is assumed, that the column names are in the first row of RigCountWorld_df):\ndf = RigCountWorld_df.copy()\nfirst_col = 2 # First column with data\ncolumn_names = df.iloc[0, first_col:].to_list()\ndf[\"Year\"] = df.iloc[:,[1]].where(df.iloc[:,1].astype(str).str.match(r\"^20\\d\\d$\"), None).ffill()\ndf[\"Month\"] = df.iloc[:,[1]].where(df.iloc[:,1].astype(str).isin((\"Jan\", \"Feb\", \"Mar\", \"Apr\", \"May\", \"Jun\", \"Jul\", \"Aug\", \"Sep\", \"Oct\", \"Nov\", \"Dec\")), None)\ndf = df[df['Month'].notna()]\ndf = df.iloc[:, first_col:].set_index([\"Year\", \"Month\"])\ndf.columns = column_names\ndf\n\n"
] |
[
1,
0,
0
] |
[] |
[] |
[
"dataframe",
"group_by",
"pandas",
"python_3.x"
] |
stackoverflow_0074665251_dataframe_group_by_pandas_python_3.x.txt
|
Q:
Trigger pipeline with jobs from another pipeline
I need to develop two Azure DevOps pipelines. 'Starter.yml' is the trigger for the second one; 'Processing.yml' has some business logic.
Very important: both of them must have jobs.
I tried doing something like this:
Starter:
trigger: none
pool:
vmImage: 'windows-2019'
stages:
- stage: A
jobs:
- job: Triggering
steps:
- template: Processing.yml
Processing:
jobs:
- job: Processing
steps:
- task: PowerShell@2
inputs:
targetType: 'inline'
script: |
# Write your PowerShell commands here.
Write-Host "Hello World"
I get following result:
I saw a lot of examples but none of them work. Of course I have parallel jobs:
Is it generally possible to trigger pipeline from another pipeline when both of them have jobs?
Thank you for your answers.
A:
Thanks everyone for their answers, I managed to do what I wanted:
In following example:
There are two pipelines ('Starter' and 'Processing')
'Starter' gets input parameters in variables
'Starter' triggers 'Processing' (- template: Processing.yml) and passes input parameters to it (parameters: firstName: $(FirsName))
Processing uses the parameters (${{parameters.firstName}})
Starter
trigger: none
pool:
vmImage: 'windows-2019'
stages:
- stage: A
jobs:
- template: Processing.yml
parameters:
firstName: $(FirsName)
lastName: $(LastName)
- job: Finishing
steps:
- task: PowerShell@2
inputs:
targetType: 'inline'
script: |
# Write your PowerShell commands here.
Write-Host "Finished"
Processing
parameters:
- name: firstName
default: ''
- name: lastName
default: ''
jobs:
- job: Processing
pool:
vmImage: 'windows-2019'
steps:
- task: PowerShell@2
inputs:
targetType: 'inline'
script: |
# Write your PowerShell commands here.
Write-Host "Hello from second pipeline ${{parameters.firstName}}"
To use parallel jobs you have to buy Microsoft-hosted parallel jobs (in the settings of your project):
|
Trigger pipeline with jobs from another pipeline
|
I need to develop two Azure DevOps pipelines. 'Starter.yml' is the trigger for the second one; 'Processing.yml' has some business logic.
Very important: both of them must have jobs.
I tried doing something like this:
Starter:
trigger: none
pool:
vmImage: 'windows-2019'
stages:
- stage: A
jobs:
- job: Triggering
steps:
- template: Processing.yml
Processing:
jobs:
- job: Processing
steps:
- task: PowerShell@2
inputs:
targetType: 'inline'
script: |
# Write your PowerShell commands here.
Write-Host "Hello World"
I get following result:
I saw a lot of examples but none of them work. Of course I have parallel jobs:
Is it generally possible to trigger pipeline from another pipeline when both of them have jobs?
Thank you for your answers.
|
[
"Thanks everyone for their answers, I managed to do what I wanted:\nIn following example:\n\nThere are two pipelines ('Starter' and 'Processing')\n'Starter' gets input parameters in variables\n'Starter' triggers 'Processing' (- template: Processing.yml) and passes input parameters to it (parameters: firstName: $(FirsName))\nProcessing uses the parameters (${{parameters.firstName}})\n\nStarter\n trigger: none\n\n pool:\n vmImage: 'windows-2019'\n\n stages:\n - stage: A\n jobs:\n - template: Processing.yml\n parameters:\n firstName: $(FirsName)\n lastName: $(LastName)\n \n - job: Finishing\n steps:\n - task: PowerShell@2\n inputs:\n targetType: 'inline'\n script: |\n # Write your PowerShell commands here.\n \n Write-Host \"Finished\"\n\nProcessing\n parameters:\n - name: firstName\n default: ''\n - name: lastName\n default: ''\n\n jobs:\n - job: Processing\n pool:\n vmImage: 'windows-2019'\n steps:\n - task: PowerShell@2\n inputs:\n targetType: 'inline'\n script: |\n # Write your PowerShell commands here.\n \n Write-Host \"Hello from second pipeline ${{parameters.firstName}}\"\n\nFor using parallel jobs you have to buy microsoft-hosted parallel jobs (settings of your project):\n\n"
] |
[
0
] |
[] |
[] |
[
"azure_devops",
"azure_pipelines"
] |
stackoverflow_0074656867_azure_devops_azure_pipelines.txt
|
Q:
Creating random number and appending to list
populate each hand with two cards. Take a card from the deck
and put it in the player_hand list. Then, take a card from the deck and
put it in the dealer_hand list. Do that one more time in that order so that the dealer and player have each two cards. Make sure the dealer's first card is face down. I keep receiving this error from my 2 tests.
My code:
while len(dealer_hand) != 2 and len(player_hand) != 2:
player_card = random.choice(deck)
player_hand.append(player_card)
deck.remove(player_card)
if len(player_hand) == 2:
player_hand[0].face_up()
player_hand[1].face_up()
dealer_card = random.choice(deck)
dealer_hand.append(dealer_card)
deck.remove(dealer_card)
if len(dealer_hand) == 2:
dealer_hand[0].face_down()
dealer_hand[1].face_up()
return player_hand and dealer_hand
False != True
Expected :True
Actual :False
def test_deal_cards():
deck = []
for suit in cards.SUITS:
for rank in cards.RANKS:
deck.append(cards.Card(suit, rank))
dealer_hand = []
player_hand = []
blackjack.deal_cards(deck, dealer_hand, player_hand)
assert len(dealer_hand) == 2
assert len(player_hand) == 2
> assert dealer_hand[0].is_face_up() is True
E assert False is True
E + where False = <bound method Card.is_face_up of [10 of Hearts]>()
E + where <bound method Card.is_face_up of [10 of Hearts]> = [10 of Hearts].is_face_up
test_deal_cards.py:19: AssertionError
(test_deal_cards_alternates_between_player_and_dealer)
[7 of Hearts] != [6 of Spades]
Expected :[6 of Spades]
Actual :[7 of Hearts]
def test_deal_cards_alternates_between_player_and_dealer():
card1 = cards.Card(cards.SPADES, cards.SIX)
card2 = cards.Card(cards.HEARTS, cards.SEVEN)
card3 = cards.Card(cards.CLUBS, cards.EIGHT)
card4 = cards.Card(cards.DIAMONDS, cards.NINE)
deck = [card4, card3, card2, card1]
dealer_hand = []
player_hand = []
blackjack.deal_cards(deck, dealer_hand, player_hand)
assert len(dealer_hand) == 2
assert len(player_hand) == 2
> assert player_hand[0] is card1, 'Player 1st card should be Six of Spades'
E AssertionError: Player 1st card should be Six of Spades
E assert [7 of Hearts] is [6 of Spades]
test_deal_cards.py:39: AssertionError
A:
I think the error lies within the first line:
while len(dealer_hand) and len(player_hand) != 2:
If you want to check if the dealer also has 2 cards, you need to fix that line to:
while len(dealer_hand) != 2 and len(player_hand) != 2:
A:
First, the while loop is checking if the length of both the player and dealer hands are not equal to 2, which is not the correct condition. The while loop should continue as long as the length of either the player or dealer hand is less than 2, which can be checked using the 'or' operator.
Second, the code is using the 'card' attribute of the cards in the player and dealer hands, but it is not clear where this attribute is coming from. It is possible that the code is intended to use the 'face_up' and 'face_down' methods of the 'Card' class to set the visibility of the cards, but this is not properly implemented in the code.
Third, the final statement return player_hand and dealer_hand does not return both hands - Python's and operator simply evaluates to the second operand here, which is why the caller never sees a proper result. The code should return the player and dealer hands as a tuple after they have been populated with two cards each.
Here is how the code could be fixed:
while len(dealer_hand) < 2 or len(player_hand) < 2:
player_card = random.choice(deck)
player_hand.append(player_card)
deck.remove(player_card)
if len(player_hand) == 2:
player_hand[0].face_up()
player_hand[1].face_up()
dealer_card = random.choice(deck)
dealer_hand.append(dealer_card)
deck.remove(dealer_card)
if len(dealer_hand) == 2:
dealer_hand[0].face_down()
dealer_hand[1].face_up()
return player_hand, dealer_hand
This code will draw two cards for each of the player and dealer hands, setting the first card of the dealer hand to be face down and the other cards to be face up. It will also return the player and dealer hands after they have been populated with the cards.
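Note also that random.choice can never reliably satisfy test_deal_cards_alternates_between_player_and_dealer, which expects cards to come off the top of the deck in strict player/dealer alternation. A minimal deterministic sketch (the Card class below is a hypothetical stand-in for the course's own, assumed to expose face_up/face_down/is_face_up as in the tests):

```python
class Card:
    """Minimal stand-in for the course's Card class (hypothetical)."""
    def __init__(self, name):
        self.name = name
        self._face_up = False

    def face_up(self):
        self._face_up = True

    def face_down(self):
        self._face_up = False

    def is_face_up(self):
        return self._face_up


def deal_cards(deck, dealer_hand, player_hand):
    # Deal from the top of the deck (the end of the list), alternating
    # player -> dealer, until each hand holds two cards.
    for _ in range(2):
        player_hand.append(deck.pop())
        dealer_hand.append(deck.pop())
    for card in player_hand:
        card.face_up()
    dealer_hand[0].face_down()  # dealer's first card stays hidden
    dealer_hand[1].face_up()
```

Whether the dealer's first or second card should be face down depends on your test suite; flip the last two lines if test_deal_cards expects dealer_hand[0].is_face_up() to be True.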
|
Creating random number and appending to list
|
populate each hand with two cards. Take a card from the deck
and put it in the player_hand list. Then, take a card from the deck and
put it in the dealer_hand list. Do that one more time in that order so that the dealer and player have each two cards. Make sure the dealer's first card is face down. I keep receiving this error from my 2 tests.
My code:
while len(dealer_hand) != 2 and len(player_hand) != 2:
player_card = random.choice(deck)
player_hand.append(player_card)
deck.remove(player_card)
if len(player_hand) == 2:
player_hand[0].face_up()
player_hand[1].face_up()
dealer_card = random.choice(deck)
dealer_hand.append(dealer_card)
deck.remove(dealer_card)
if len(dealer_hand) == 2:
dealer_hand[0].face_down()
dealer_hand[1].face_up()
return player_hand and dealer_hand
False != True
Expected :True
Actual :False
def test_deal_cards():
deck = []
for suit in cards.SUITS:
for rank in cards.RANKS:
deck.append(cards.Card(suit, rank))
dealer_hand = []
player_hand = []
blackjack.deal_cards(deck, dealer_hand, player_hand)
assert len(dealer_hand) == 2
assert len(player_hand) == 2
> assert dealer_hand[0].is_face_up() is True
E assert False is True
E + where False = <bound method Card.is_face_up of [10 of Hearts]>()
E + where <bound method Card.is_face_up of [10 of Hearts]> = [10 of Hearts].is_face_up
test_deal_cards.py:19: AssertionError
(test_deal_cards_alternates_between_player_and_dealer)
[7 of Hearts] != [6 of Spades]
Expected :[6 of Spades]
Actual :[7 of Hearts]
def test_deal_cards_alternates_between_player_and_dealer():
card1 = cards.Card(cards.SPADES, cards.SIX)
card2 = cards.Card(cards.HEARTS, cards.SEVEN)
card3 = cards.Card(cards.CLUBS, cards.EIGHT)
card4 = cards.Card(cards.DIAMONDS, cards.NINE)
deck = [card4, card3, card2, card1]
dealer_hand = []
player_hand = []
blackjack.deal_cards(deck, dealer_hand, player_hand)
assert len(dealer_hand) == 2
assert len(player_hand) == 2
> assert player_hand[0] is card1, 'Player 1st card should be Six of Spades'
E AssertionError: Player 1st card should be Six of Spades
E assert [7 of Hearts] is [6 of Spades]
test_deal_cards.py:39: AssertionError
|
[
"I think the error lies within the first line:\nwhile len(dealer_hand) and len(player_hand) != 2:\n\nIf you want to check if the dealer also has 2 cards, you need to fix that line to:\nwhile len(dealer_hand) != 2 and len(player_hand) != 2:\n\n",
"First, the while loop is checking if the length of both the player and dealer hands are not equal to 2, which is not the correct condition. The while loop should continue as long as the length of either the player or dealer hand is less than 2, which can be checked using the 'or' operator.\nSecond, the code is using the 'card' attribute of the cards in the player and dealer hands, but it is not clear where this attribute is coming from. It is possible that the code is intended to use the 'face_up' and 'face_down' methods of the 'Card' class to set the visibility of the cards, but this is not properly implemented in the code.\nThird, the code is not returning anything at the end, which is causing the test to fail. The code should return the player and dealer hands after they have been populated with two cards each.\nHere is how the code could be fixed:\nwhile len(dealer_hand) < 2 or len(player_hand) < 2:\n player_card = random.choice(deck)\n player_hand.append(player_card)\n deck.remove(player_card)\n if len(player_hand) == 2:\n player_hand[0].face_up()\n player_hand[1].face_up()\n\n dealer_card = random.choice(deck)\n dealer_hand.append(dealer_card)\n deck.remove(dealer_card)\n if len(dealer_hand) == 2:\n dealer_hand[0].face_down()\n dealer_hand[1].face_up()\n\nreturn player_hand, dealer_hand\n\nThis code will draw two cards for each of the player and dealer hands, setting the first card of the dealer hand to be face down and the other cards to be face up. It will also return the player and dealer hands after they have been populated with the cards.\n"
] |
[
0,
0
] |
[] |
[] |
[
"append",
"list"
] |
stackoverflow_0074666185_append_list.txt
|
Q:
Why the code of valid palindrome is not working for a single edge case?
Below is my code from LeetCode. I think I have written the logic properly, but it fails for just one edge case - "race a car". It returns true for that input, but it should return false. Why is that?
Link of question - https://leetcode.com/problems/valid-palindrome
I know I can find answers with other approaches in the solutions section, but I want to know what is missing in my own code.
class Solution {
public:
bool isPalindrome(string s) {
int st =0, e=s.size()-1;
vector<char> ans1;
vector<char> ans2;
while(st<=e){
if((s[st] >= 'a' && s[st] <= 'z') || (s[st] >= 'A' && s[st] <= 'Z') || (s[st] >= '0' && s[st] <= '9')){
ans1.push_back(islower(s[st]));
}
if((s[st] >= 'a' && s[st] <= 'z') || (s[st] >= 'A' && s[st] <= 'Z') || (s[st] >= '0' && s[st] <= '9')){
ans2.push_back(islower(s[e]));
}
st++;
e--;
}
if(ans1 == ans2){return 1;}
return 0;
}
};
A:
There are a few issues with the code.
Firstly, the condition (s[st] >= 'a' && s[st] <= 'z') || (s[st] >= 'A' && s[st] <= 'Z') || (s[st] >= '0' && s[st] <= '9') only checks if the character is a letter or a number. This means that other characters, such as spaces and punctuation marks, will be ignored.
Secondly, the code only pushes characters onto the ans1 and ans2 vectors if they are letters or numbers. This means that any spaces or punctuation marks will be ignored, which could lead to incorrect results. For example, the string "A man, a plan, a canal, Panama!" is a palindrome, but your code will not recognize it as such because it ignores the spaces and punctuation marks.
Thirdly, the code is using the islower function to convert the characters to lowercase before adding them to the ans1 and ans2 vectors. However, islower does not actually modify the character - it simply returns true if the character is lowercase, and false otherwise. To actually convert a character to lowercase, you should use the tolower function instead.
Finally, the code is comparing the ans1 and ans2 vectors to determine if the original string is a palindrome. However, this is not a reliable way to check if a string is a palindrome, because the order of the characters in the vectors may not match the order of the characters in the original string. For example, the string "Able was I ere I saw Elba" is a palindrome, but your code will not recognize it as such because the characters in the ans1 and ans2 vectors will be in the opposite order.
To fix these issues, you can modify your code as follows:
Use a regular expression to match only letters and numbers in the input string. This will allow you to ignore any other characters, such as spaces and punctuation marks.
Use the tolower function to convert the matched characters to lowercase, so that your code can handle palindromes that have a mixture of uppercase and lowercase letters.
Use a single vector to store the matched and converted characters, and compare the characters in this vector to the reversed version of the vector to determine if the original string is a palindrome. This will ensure that the order of the characters is preserved.
Here is an example of how your modified code might look:
class Solution {
public:
bool isPalindrome(string s) {
// Use a regular expression to match only letters and numbers in the input string
regex r("[a-zA-Z0-9]");
// Create a vector to store the matched and converted characters
vector<char> chars;
// Iterate over the characters in the input string
for (char c : s) {
// If the current character is a letter or number, convert it to lowercase and add it to the vector
if (regex_match(string(1, c), r)) {
chars.push_back(tolower(c));
}
}
// Check if the vector is equal to its reversed version
return chars == vector<char>(chars.rbegin(), chars.rend());
}
};
A:
class Solution {
public:
bool isPalindrome(string s) {
// Create a vector to store the converted characters
vector<char> chars;
// Iterate over the characters in the input string
for (char c : s) {
// If the current character is a letter or number, convert it to lowercase and add it to the vector
if (isalnum(c)) {
chars.push_back(tolower(c));
}
}
// Check if the vector is equal to its reversed version
return chars == vector<char>(chars.rbegin(), chars.rend());
}
};
A:
In the second loop you are testing the wrong index, s[st] instead of s[e]! So you ignore s[e] whenever s[st] is not a letter or digit.
Your character test is not portable anyway, by the way: while the standard guarantees that the digits succeed each other in ascending order, this is not necessarily the case for the letters – consider the (in-?)famous EBCDIC encoding!
So for testing on letters you should use the isalpha function instead, and I recommend isdigit for digits, consistently. For both at once there is isalnum, which is what you want in your case.
Though be aware that all of these accept an int and that char is possibly signed. Characters in the extended range (128 - 255) would result in a negative value, so you should cast the input to unsigned char first.
Then, you accept the string by value, so you get a copy anyway. Profit from that and convert the string to lowercase in place, dropping the punctuation marks on the run:
auto pos = s.begin();
for(unsigned char c : s) // explicitly specifying the type performs the necessary cast!
{
if(isalnum(c))
{
// this moves the alnums, already lowered in case, towards
// front, overwriting the unwanted punctuation marks:
*pos++ = tolower(c);
}
}
// cut off the remaining ballast, i.e. the places of the moved characters (if any):
s.erase(pos, s.end());
OK, now s contains a string of all lower-case characters (note, though, that there are languages with more than one representation for the same character like Greek 'Σ' (Sigma), with lowercase 'σ' at the beginning or middle and 'ς' at the end of words), which you cannot cover by this approach.
Now you could copy the string, reverse it (std::reverse) and compare reversed copy and original on equality – or you spare this effort by comparing the corresponding characters directly:
for(auto b = s.begin(), e = s.end() - 1; b < e; ++b, --e)
{
if(*b != *e)
{
return false;
}
}
return true;
Note that this would fail on empty string, you might catch this special case right at the beginning (even before converting to lower):
if(s.empty())
{
return true; // depending on definition, usually empty string does
// count as palindrome
}
Actually you could even do all this in one single run; this results, though, in pretty advanced code, which likely goes beyond the scope of this question; if you are still interested in: see at godbolt. Note there especially how copying the string is avoided entirely by accepting it by const reference.
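For cross-checking the intended behaviour, the same filter-to-alphanumeric, lowercase, then compare-with-reverse logic is quick to prototype in Python (purely illustrative; the C++ fixes are described above):

```python
def is_palindrome(s: str) -> bool:
    # Keep only letters and digits, lowercased, then compare to the reverse.
    cleaned = [c.lower() for c in s if c.isalnum()]
    return cleaned == cleaned[::-1]
```

This makes the failing case easy to check: is_palindrome("race a car") is False, while is_palindrome("A man, a plan, a canal: Panama") is True.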
|
Why the code of valid palindrome is not working for a single edge case?
|
Below is my code from LeetCode. I think I have written the logic properly, but it fails for just one edge case - "race a car". It returns true for that input, but it should return false. Why is that?
Link of question - https://leetcode.com/problems/valid-palindrome
I know I can find answers with other approaches in the solutions section, but I want to know what is missing in my own code.
class Solution {
public:
bool isPalindrome(string s) {
int st =0, e=s.size()-1;
vector<char> ans1;
vector<char> ans2;
while(st<=e){
if((s[st] >= 'a' && s[st] <= 'z') || (s[st] >= 'A' && s[st] <= 'Z') || (s[st] >= '0' && s[st] <= '9')){
ans1.push_back(islower(s[st]));
}
if((s[st] >= 'a' && s[st] <= 'z') || (s[st] >= 'A' && s[st] <= 'Z') || (s[st] >= '0' && s[st] <= '9')){
ans2.push_back(islower(s[e]));
}
st++;
e--;
}
if(ans1 == ans2){return 1;}
return 0;
}
};
|
[
"There are a few issues with the code.\nFirstly, the condition (s[st] >= 'a' && s[st] <= 'z') || (s[st] >= 'A' && s[st] <= 'Z') || (s[st] >= '0' && s[st] <= '9') only checks if the character is a letter or a number. This means that other characters, such as spaces and punctuation marks, will be ignored.\nSecondly, the code only pushes characters onto the ans1 and ans2 vectors if they are letters or numbers. This means that any spaces or punctuation marks will be ignored, which could lead to incorrect results. For example, the string \"A man, a plan, a canal, Panama!\" is a palindrome, but your code will not recognize it as such because it ignores the spaces and punctuation marks.\nThirdly, the code is using the islower function to convert the characters to lowercase before adding them to the ans1 and ans2 vectors. However, islower does not actually modify the character - it simply returns true if the character is lowercase, and false otherwise. To actually convert a character to lowercase, you should use the tolower function instead.\nFinally, the code is comparing the ans1 and ans2 vectors to determine if the original string is a palindrome. However, this is not a reliable way to check if a string is a palindrome, because the order of the characters in the vectors may not match the order of the characters in the original string. For example, the string \"Able was I ere I saw Elba\" is a palindrome, but your code will not recognize it as such because the characters in the ans1 and ans2 vectors will be in the opposite order.\nTo fix these issues, you can modify your code as follows:\nUse a regular expression to match only letters and numbers in the input string. 
This will allow you to ignore any other characters, such as spaces and punctuation marks.\nUse the tolower function to convert the matched characters to lowercase, so that your code can handle palindromes that have a mixture of uppercase and lowercase letters.\nUse a single vector to store the matched and converted characters, and compare the characters in this vector to the reversed version of the vector to determine if the original string is a palindrome. This will ensure that the order of the characters is preserved.\nHere is an example of how your modified code might look:\nclass Solution {\npublic:\n bool isPalindrome(string s) {\n // Use a regular expression to match only letters and numbers in the input string\n regex r(\"[a-zA-Z0-9]\");\n\n // Create a vector to store the matched and converted characters\n vector<char> chars;\n\n // Iterate over the characters in the input string\n for (char c : s) {\n // If the current character is a letter or number, convert it to lowercase and add it to the vector\n if (regex_match(string(1, c), r)) {\n chars.push_back(tolower(c));\n }\n }\n\n // Check if the vector is equal to its reversed version\n return chars == vector<char>(chars.rbegin(), chars.rend());\n }\n\n",
"class Solution {\npublic:\n bool isPalindrome(string s) {\n // Create a vector to store the converted characters\n vector<char> chars;\n\n // Iterate over the characters in the input string\n for (char c : s) {\n // If the current character is a letter or number, convert it to lowercase and add it to the vector\n if (isalnum(c)) {\n chars.push_back(tolower(c));\n }\n }\n\n // Check if the vector is equal to its reversed version\n return chars == vector<char>(chars.rbegin(), chars.rend());\n }\n}\n\n",
"In the second loop you are using the wrong index, s[st] instead of s[e]! So you are ignoring s[e] if s[st] is not a character or digit.\nYour test for characters is not portable anyway, by the way: While C guarantees digits succeeding each other in ascending order this is not necessarily the case for the letters – consider (in-?)famous EBCDIC encoding!\nSo for testing on letters you should use isalpha function instead and I recommend consistently isdigit for digits. For either of there's isalnum, too, which is what you'd want in your case.\nThough be aware that all of these accept an int and char is possibly signed. Characters in extended range (128 - 255) would result in a negative value, so you should cast the input to unsigned char instead.\nThen you accept the string by value, so you get a copy of anyway. Then profit from and convert the string to lower in place, ignoring the punctuation marks on the run:\nauto pos = s.begin();\nfor(unsigned char c : s) // explicitly specifying the type performs the necessary cast!\n{\n if(isalnum(c))\n {\n // this moves the alnums, already lowered in case, towards\n // front, overwriting the unwanted punctuation marks:\n *pos++ = tolower(c);\n }\n}\n// cut off the remaining ballast, i.e. 
the places of the moved characters (if any):\ns.erase(pos, s.end());\n\nOK, now s contains a string of all lower-case characters (note, though, that there are languages with more than one representation for the same character like Greek 'Σ' (Sigma), with lowercase 'σ' at the beginning or middle and 'ς' at the end of words), which you cannot cover by this approach.\nNow you could copy the string, reverse it (std::reverse) and compare reversed copy and original on equality – or you spare this effort by comparing the corresponding characters directly:\nfor(auto b = s.begin(), e = s.end() - 1; b < e; ++b, --e)\n{\n if(*b != *e)\n {\n return false;\n }\n}\nreturn true;\n\nNote that this would fail on empty string, you might catch this special case right at the beginning (even before converting to lower):\nif(s.empty())\n{\n return true; // depending on definition, usually empty string does\n // count as palindrome\n}\n\nActually you could even do all this in one single run; this results, though, in pretty advanced code, which likely goes beyond the scope of this question; if you are still interested in: see at godbolt. Note there especially how copying the string is avoided entirely by accepting it by const reference.\n"
] |
[
0,
0,
0
] |
[] |
[] |
[
"c++",
"string"
] |
stackoverflow_0074665384_c++_string.txt
|
Q:
How to handle abbreviation when reading nltk corpus
I am reading an NLTK corpus using
def read_corpus(package, category):
""" Read files from corpus(package)'s category.
Params:
package (nltk.corpus): corpus
category (string): category name
Return:
list of lists, with words from each of the processed files assigned with start and end tokens
"""
files = package.fileids(category)
return [[START_TOKEN] + [w.lower() for w in list(package.words(f))] + [END_TOKEN] for f in files]
But I find that it processes 'U.S.' into ['U','.','S','.'] and 'I'm' into ['I', "'", 'm'].
How can I keep an abbreviation as a single token, or restore it afterwards?
A:
To keep abbreviations such as "U.S." together as a single token when processing text, you can use the TreebankWordTokenizer from the NLTK library instead of relying on the corpus reader's default word tokenization. Note, however, that following Penn Treebank conventions this tokenizer still splits contractions into word parts (e.g. "I" and "'m"), so if you need "I'm" as one token you will have to post-process the result or use a custom tokenizer.
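If you would rather not depend on a particular tokenizer, a small regular expression can keep dotted abbreviations and apostrophe contractions intact. This is a rough sketch - the pattern below is my own assumption about what counts as an abbreviation, not an NLTK API:

```python
import re

# One token = a dotted abbreviation (U.S., e.g.), OR a word with an
# optional apostrophe part (I'm, don't), OR any single non-space symbol.
TOKEN = re.compile(r"[A-Za-z]\.(?:[A-Za-z]\.)+|\w+(?:'\w+)?|\S")

def tokenize(text):
    return TOKEN.findall(text)
```

For example, tokenize("I'm in the U.S.") yields ["I'm", "in", "the", "U.S."], keeping both the contraction and the abbreviation whole.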
|
How to handle abbreviation when reading nltk corpus
|
I am reading an NLTK corpus using
def read_corpus(package, category):
""" Read files from corpus(package)'s category.
Params:
package (nltk.corpus): corpus
category (string): category name
Return:
list of lists, with words from each of the processed files assigned with start and end tokens
"""
files = package.fileids(category)
return [[START_TOKEN] + [w.lower() for w in list(package.words(f))] + [END_TOKEN] for f in files]
But I find that it processes 'U.S.' into ['U','.','S','.'] and 'I'm' into ['I', "'", 'm'].
How can I keep an abbreviation as a single token, or restore it afterwards?
|
[
"To treat abbreviations such as \"U.S.\" and contractions such as \"I'm\" as a single token when processing text, you can use the TreebankWordTokenizer from the NLTK library. This tokenizer is designed to tokenize text in a way that is similar to how humans would naturally write and speak, so it will treat abbreviations and contractions as single tokens.\n"
] |
[
0
] |
[] |
[] |
[
"nltk",
"python"
] |
stackoverflow_0074666233_nltk_python.txt
|
Q:
php convert multidimensional array to json object
PHP NEWBIE
[PHP 7.4]
I am trying to refactor some old code.
GOAL: The two JSON outputs must match.
Code snippet 1: Old JSON code generated from a PHP array
Code snippet 2: New JSON code generated from a PHP array
Objective: both must match; I am using PHPUnit to compare them. For simplicity, I have removed the testing code.
PROBLEM: An extra set of square brackets, see image below.
I tried using JSON_FORCE_OBJECT, which removes the square brackets, but introduces {... john: { "0": { "byEmail" ... " and ... "marie": { "1" { "byEmail" ...
(Naming wise: I have used John and marie, but these could be promotions, offers, packages, etc. so ignore the naming please)
SNIPPET 1 - php code
$body = [
"data" => [
"john" => [
"byEmail" => [
"status" => true,
"source" => "my-server",
],
"byPhoneCall" => [
"status" => true,
"source" => "my-server",
]
],
"marie" => [
"byEmail" => [
"status" => true,
"source" => "my-server",
],
"byPhoneCall" => [
"status" => true,
"source" => "my-server",
]
]
]
];
This gets converted to this JSON object:
// SNIPPET 1 - running json_encode()
{
"data": {
"john": {
"byEmail": {
"source": "my-server",
"status": true
},
"byPhoneCall": {
"source": "my-server",
"status": true
}
},
"marie": {
"byEmail": {
"source": "my-server",
"status": true
},
"byPhoneCall": {
"source": "my-server",
"status": true
}
}
}
}
SNIPPET 2:
I am creating this data structure dynamically now, because future requirements may add johnByPost, johnByFax, etc.:
//my-file-dynamic-generation.php
public function constructArray($johnByEmail=null, $johnByPhone=null, $marieByEmail=null, $marieByPhone=null) {
$body = [ "data" => ["john" => [], "marie" => []]];
if ($johnByEmail !== null) {
array_push($body["data"]["john"], $this->createKeyValuePair("byEmail", $johnByEmail));
}
if ($johnByPhone !== null) {
array_push($body["data"]["john"], $this->createKeyValuePair("byPhoneCall", $johnByPhone));
}
if ($marieByEmail !== null) {
array_push($body["data"]["marie"], $this->createKeyValuePair("byEmail", $marieByEmail));
}
if ($marieByPhone !== null) {
array_push($body["data"]["marie"], $this->createKeyValuePair("byPhoneCall", $marieByPhone));
}
return $body;
}
// HELPER func
function createKeyValuePair($title=null, $status=null, $source="my-server") {
return [
$title => [
"status" => $status,
"source" => $source,
]
];
}
JSON - OUTPUT
I have tried to use json_encode($data, JSON_FORCE_OBJECT)
That resulted in extra numeric keys, which I don't want (I got .... [0] => 'marie' => ... [1] => 'john')
Appreciate reading this, thanks!
A:
You are creating a new array inside the function createKeyValuePair() (probably to add the key). You could use the function to create the content only, and set the key inside the function constructArray():
$body["data"]["john"]["byEmail"] = $this->createKeyValuePair($johnByEmail);
and the function :
function createKeyValuePair($status = null, $source = "my-server"): array
{
return [
"status" => $status,
"source" => $source,
];
}
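The root cause generalizes beyond PHP: an encoder turns integer-keyed (list-like) structures into JSON arrays and string-keyed ones into JSON objects, so pushing onto a list is what produces the extra [...]. The same distinction, sketched in Python purely for illustration:

```python
import json

# Appending to a list produces a JSON array - the unwanted brackets:
body = {"data": {"john": []}}
body["data"]["john"].append({"byEmail": {"status": True, "source": "my-server"}})
as_array = json.dumps(body)
# -> {"data": {"john": [{"byEmail": {...}}]}}

# Assigning by string key produces the desired nested JSON object:
body = {"data": {"john": {}}}
body["data"]["john"]["byEmail"] = {"status": True, "source": "my-server"}
as_object = json.dumps(body)
# -> {"data": {"john": {"byEmail": {...}}}}
```

The PHP equivalent of the second form is exactly the `$body["data"]["john"]["byEmail"] = ...` assignment shown above.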
|
php convert multidimensional array to json object
|
PHP NEWBIE
[PHP 7.4]
I am trying to refactor some old code.
GOAL: The two JSON outputs must match.
Code snippet 1: Old JSON code generated from a PHP array
Code snippet 2: New JSON code generated from a PHP array
Objective: both must match; I am using PHPUnit to compare them. For simplicity, I have removed the testing code.
PROBLEM: An extra set of square brackets, see image below.
I tried using JSON_FORCE_OBJECT, which removes the square brackets, but introduces {... john: { "0": { "byEmail" ... " and ... "marie": { "1" { "byEmail" ...
(Naming wise: I have used John and marie, but these could be promotions, offers, packages, etc. so ignore the naming please)
SNIPPET 1 - php code
$body = [
"data" => [
"john" => [
"byEmail" => [
"status" => true,
"source" => "my-server",
],
"byPhoneCall" => [
"status" => true,
"source" => "my-server",
]
],
"marie" => [
"byEmail" => [
"status" => true,
"source" => "my-server",
],
"byPhoneCall" => [
"status" => true,
"source" => "my-server",
]
]
]
];
This gets converted to this JSON object:
// SNIPPET 1 - running json_encode()
{
"data": {
"john": {
"byEmail": {
"source": "my-server",
"status": true
},
"byPhoneCall": {
"source": "my-server",
"status": true
}
},
"marie": {
"byEmail": {
"source": "my-server",
"status": true
},
"byPhoneCall": {
"source": "my-server",
"status": true
}
}
}
}
SNIPPET 2:
I am creating this data structure dynamically now, because future requirements may add johnByPost, johnByFax, etc.:
//my-file-dynamic-generation.php
public function constructArray($johnByEmail=null, $johnByPhone=null, $marieByEmail=null, $marieByPhone=null) {
$body = [ "data" => ["john" => [], "marie" => []]];
if ($johnByEmail !== null) {
array_push($body["data"]["john"], $this->createKeyValuePair("byEmail", $johnByEmail));
}
if ($johnByPhone !== null) {
array_push($body["data"]["john"], $this->createKeyValuePair("byPhoneCall", $johnByPhone));
}
if ($marieByEmail !== null) {
array_push($body["data"]["marie"], $this->createKeyValuePair("byEmail", $marieByEmail));
}
if ($marieByPhone !== null) {
array_push($body["data"]["marie"], $this->createKeyValuePair("byPhoneCall", $marieByPhone));
}
return $body;
}
// HELPER func
function createKeyValuePair($title=null, $status=null, $source="my-server") {
return [
$title => [
"status" => $status,
"source" => $source,
]
];
}
JSON - OUTPUT
I have tried to use json_encode($data, JSON_FORCE_OBJECT)
That resulted in extra numeric keys, which I don't want (I got .... [0] => 'marie' => ... [1] => 'john')
Appreciate reading this, thanks!
|
[
"You are creating a new array inside the function createKeyValuePair() (probably to add the key). You could use the function the create the content only, and create the key inside the function constructArray() :\n$body[\"data\"][\"john\"][\"byEmail\"] = $this->createKeyValuePair($johnByEmail);\n\nand the function :\nfunction createKeyValuePair($status = null, $source = \"my-server\"): array \n{\n return [\n \"status\" => $status,\n \"source\" => $source,\n ];\n}\n\n"
] |
[
1
] |
[] |
[] |
[
"arrays",
"json",
"multidimensional_array",
"php",
"string_comparison"
] |
stackoverflow_0074666144_arrays_json_multidimensional_array_php_string_comparison.txt
|
Q:
Thousands of OkHttp related issues reported daily (connection, unknown host, dns, etc)
Setup
I have an app in production used by thousands of users daily.
I'm using retrofit2 version 2.9.0 (latest)
My build.gradle below.
def retrofitVersion = '2.9.0'
api "com.squareup.retrofit2:converter-gson:${retrofitVersion}"
api "com.squareup.retrofit2:converter-scalars:${retrofitVersion}"
api "com.squareup.retrofit2:adapter-rxjava2:${retrofitVersion}"
api "com.squareup.retrofit2:retrofit:${retrofitVersion}"
I integrated Firebase Crashlytics and made it so that the app reports any API-related exceptions caught in try-catch blocks.
e.g.
viewModelScope.launch {
try {
val response = myRepository.getProfile()
if (response.isSuccessful) {
// continue with some business logic
} else {
Log.e(tag, "error", RuntimeException("some error"))
}
} catch (throwable: Throwable){
Log.e(tag, "error thrown", throwable)
crashlytics.recordException(throwable)
}
}
Knowns
Now in Crashlytics, I get THOUSANDS of reports daily saying there were errors.
Before I get to those errors, I want to assure you that users ARE connected to the internet with proper network permissions. I see logs showing that users are opening other content at the time, so these errors seem to be really random.
Errors
UnknownHostException
Non-fatal Exception: java.net.UnknownHostException: Unable to resolve host "my-host-address.com": No address associated with hostname
at java.net.Inet6AddressImpl.lookupHostByName(Inet6AddressImpl.java:156)
at java.net.Inet6AddressImpl.lookupAllHostAddr(Inet6AddressImpl.java:103)
at java.net.InetAddress.getAllByName(InetAddress.java:1152)
at okhttp3.Dns$Companion$DnsSystem.lookup(Dns.java:5)
...
Caused by android.system.GaiException: android_getaddrinfo failed: EAI_NODATA (No address associated with hostname)
at libcore.io.Linux.android_getaddrinfo(Linux.java)
at libcore.io.ForwardingOs.android_getaddrinfo(ForwardingOs.java:74)
at libcore.io.BlockGuardOs.android_getaddrinfo(BlockGuardOs.java:200)
at libcore.io.ForwardingOs.android_getaddrinfo(ForwardingOs.java:74)
at java.net.Inet6AddressImpl.lookupHostByName(Inet6AddressImpl.java:135)
at java.net.Inet6AddressImpl.lookupAllHostAddr(Inet6AddressImpl.java:103)
...
ConnectException
Non-fatal Exception: java.net.ConnectException: Failed to connect to my-host-address.com/123.123.123.123:443
at okhttp3.internal.connection.RealConnection.connectSocket(RealConnection.java:146)
at okhttp3.internal.connection.RealConnection.connect(RealConnection.java:191)
at okhttp3.internal.connection.ExchangeFinder.findConnection(ExchangeFinder.java:257)
at okhttp3.internal.connection.ExchangeFinder.findHealthyConnection(ExchangeFinder.java)
at okhttp3.internal.connection.ExchangeFinder.find(ExchangeFinder.java:47)
...
Caused by java.net.ConnectException: failed to connect to my-host-address.com/123.123.123.123 (port 443) from /:: (port 0) after 10000ms: connect failed: ENETUNREACH (Network is unreachable)
at libcore.io.IoBridge.connect(IoBridge.java:142)
at java.net.PlainSocketImpl.socketConnect(PlainSocketImpl.java:142)
at java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:390)
at java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:230)
at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:212)
at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:436)
at java.net.Socket.connect(Socket.java:621)
...
Caused by android.system.ErrnoException: connect failed: ENETUNREACH (Network is unreachable)
at libcore.io.Linux.connect(Linux.java)
at libcore.io.ForwardingOs.connect(ForwardingOs.java:94)
at libcore.io.BlockGuardOs.connect(BlockGuardOs.java:138)
at libcore.io.ForwardingOs.connect(ForwardingOs.java:94)
at libcore.io.IoBridge.connectErrno(IoBridge.java:173)
at libcore.io.IoBridge.connect(IoBridge.java:134)
...
SocketTimeoutException
Non-fatal Exception: java.net.SocketTimeoutException: timeout
at okhttp3.internal.http2.Http2Stream$StreamTimeout.newTimeoutException(Http2Stream.java:4)
at okhttp3.internal.http2.Http2Stream$StreamTimeout.exitAndThrowIfTimedOut(Http2Stream.java:8)
at okhttp3.internal.http2.Http2Stream.takeHeaders(Http2Stream.java:24)
at okhttp3.internal.http2.Http2ExchangeCodec.readResponseHeaders(Http2ExchangeCodec.java:5)
at okhttp3.internal.connection.Exchange.readResponseHeaders(Exchange.java:2)
at okhttp3.internal.http.CallServerInterceptor.intercept(CallServerInterceptor.java:145)
...
Another SocketTimeoutException
Non-fatal Exception: java.net.SocketTimeoutException: SSL handshake timed out
at com.android.org.conscrypt.NativeCrypto.SSL_do_handshake(NativeCrypto.java)
at com.android.org.conscrypt.NativeSsl.doHandshake(NativeSsl.java:387)
at com.android.org.conscrypt.ConscryptFileDescriptorSocket.startHandshake(ConscryptFileDescriptorSocket.java:234)
at okhttp3.internal.connection.RealConnection.connectTls(RealConnection.java:72)
at okhttp3.internal.connection.RealConnection.establishProtocol(RealConnection.java:52)
at okhttp3.internal.connection.RealConnection.connect(RealConnection.java:196)
at okhttp3.internal.connection.ExchangeFinder.findConnection(ExchangeFinder.java:257)
at okhttp3.internal.connection.ExchangeFinder.findHealthyConnection(ExchangeFinder.java)
at okhttp3.internal.connection.ExchangeFinder.find(ExchangeFinder.java:47)
And lastly, what makes me think it's not an issue with my server is that I get this kind of error when requesting banner ads from Google's servers as well.
I get thousands of reports of the following
{ "Message": "Error while connecting to ad server: Failed to connect to pubads.g.doubleclick.net/216.58.195.130:443", "Cause": "null", "Response Info": { "Adapter Responses": [], "Response ID": "null", "Response Extras": {}, "Mediation Adapter Class Name": "" }, "Domain": "com.google.android.gms.ads", "Code": 0 }
from Google ads SDK's onAdFailedToLoad listener.
Attempt
I tried to find some solutions in the Retrofit2/OkHttp3 GitHub issues and the SO community, and everyone says there may be network permission issues or a problem with the network connection itself. But I know users are connected to the internet and not using some sort of proxy. I worked with the customer service team; they walked through it with users and did not find any network issues.
Any insight would be helpful. Thank you in advance!
A:
It is possible that the issue is related to DNS caching. If the DNS cache is not updated, the app may not be able to resolve the hostname. You can try to clear the DNS cache on the device or use a different DNS server. You can also try to use a different network connection (e.g. mobile data) to see if the issue persists.
A:
About two years ago I faced an issue like this. Fortunately, the previous answer was not my case (I tried after clearing the DNS cache). I talked to my ISP about this problem and they said something about caching requests and responses. They also told me to wait 24 hours for the server/ISP to release the cache and then try again. I waited and it worked. I don't know how this happened or why it worked.
A:
I had these issues too. After a long time, I realized most of these problems are related to SSL certificate validation by Android on different devices. Some devices are not able to validate your certificate because it does not exist in their pre-installed system CA certificates. You need to put the certificate in your app to avoid this issue. You can do this in the following steps:
Export your SSL certificate to a file. (There are different ways to do this, for example using a browser.) Rename it to a simple name, for example mycert.
Put this file in the raw folder at this path: res > raw.
Make an XML file called network_security_config.xml in the xml folder at this path: res > xml.
Paste these lines into it:
<?xml version="1.0" encoding="utf-8"?>
<network-security-config>
<domain-config>
<domain includeSubdomains="true">my-host-address.com</domain>
<trust-anchors>
<certificates src="@raw/mycert"/>
</trust-anchors>
</domain-config>
</network-security-config>
Change my-host-address.com in the domain tag to your desired API address.
Add this line:
android:networkSecurityConfig="@xml/network_security_config"
to the application tag in your manifest file like so:
<application
android:allowBackup="true"
android:icon="@mipmap/app_ic_luncher"
android:label="@string/app_name"
android:theme="@style/AppCompatTheme"
android:networkSecurityConfig="@xml/network_security_config"
...
Make a new build and hopefully, it works for you.
Q:
React: Given an array, render the elements in reverse order efficiently
I currently render a list in the typical React-style. The list is passed as an array prop, and I map over it like so:
{this.props.myList.map(createListItem, this)}
So when a new element is added, it appears like the latest item was added to the end of the list.
I would like it so the latest item appears at the top. i.e. everything appears in reverse-chronological order.
The two options I've come up with so far are:
1) Reverse the list, creating a new array each time something is added, and pass this reversed list as the prop.
2) Use shift.
But they're both unappealing because of performance.
I'm not aware of Javascript supporting mapping in reverse order. I've been trying a for-loop but I haven't been able to get it to work.
What is the idiomatic way to render an array in reverse order in React?
A:
If you choose to reverse the list using reverse(), shift() or splice(), you should make a shallow copy of the array first, and then use that function on the copy. Props in React should not be mutated.
For example:
[...this.props.myList].reverse().map(createListItem, this)
or
this.props.myList.slice(0).map(createListItem, this)
(this should really be a comment, but I don't have the points to do that yet :))
A:
If you need to display a list in the UI in reverse order you can also use
flex-direction: row-reverse;
or
flex-direction: column-reverse;
A:
As others have pointed out, the humble reverse method does the job for the most part. I ran into the same issue and I must say that using array.reverse(), at least in Chrome, showed no noticeable performance hit. In my opinion it's better than using a loop to sort a list in reverse order.
array.reverse()
A:
When using mobx as a store, one can create a computed property for the reversed array that will be reevaluated and memoized each time the original observable array changes.
Store
import { observable, computed } from 'mobx';
class MyStore {
@observable items = [1, 2, 3];
@computed get itemsReversed() {
return this.items.slice().reverse();
}
}
export default MyStore;
Rendering
import React, { Component } from 'react';
import { inject, observer } from 'mobx-react';
@inject('myStore') @observer
class List extends Component {
render() {
const { myStore } = this.props;
const { itemsReversed } = myStore;
return (
<div className="list">
{itemsReversed.map(item => (
<div className="list-item">{item}</div>
))}
</div>
);
}
}
export default List;
According to the official documentation this is a preferred way to reverse an array:
Unlike the built-in implementation of the functions sort and reverse, observableArray.sort and reverse will not change the array in-place, but only will return a sorted / reversed copy. From MobX 5 and higher this will show a warning. It is recommended to use array.slice().sort() instead.
A:
Somehow, while using array.reverse(), the order changed whenever something in state changed. I went with flexDirection: 'column-reverse', which worked fine, and you don't need to mess with the array data either.
A:
Add the new elements at the beginning of the array:
array.splice(0,0,'value to add at beginning');
Or call a for loop with an immediately invoked function:
{(() => {
for(...) {
return (<i>{whatever}</i>)
}
})()}
A:
Keep pushing at the array, and when rendering, you can simply use the
Array.reverse()
see the documentation.
Keep in mind that it will mutate the original array.
A:
Simply create a copy of the array first using slice, then apply the reverse function before mapping.
For example:
var myArr = [1,2,3,4,5]
myArr.slice(0).reverse().map((element, index) => {
console.log(element);
});
Q:
Snakemake combine ambiguous rules
I am trying to combine some rules. rule1 automatically creates a {sample}_unmapped.bam file using information from the library_params.txt file; I cannot specify it as an output because the program writes it itself, but I need to use it in rule2. Is there a way to make rule2 wait for rule1 to finish and then run using rule1's output? The error it gives me now is: {sample}_unmapped.bam file is missing.
rule rule1:
input:
basecalls_dir="/RUN1/Data/Intensities/BaseCalls/",
barcodes_dir=directory("barcodes"),
library_params="library_params.txt",
metrics_file="metrics_output.txt"
output:
log="barcodes.log"
shell:
"""
java -Djava.io.tmpdir=/path/to/tmp -Xmx2g -jar picard.jar IlluminaBasecallsToSam BASECALLS_DIR={input.basecalls_dir} BARCODES_DIR={input.barcodes_dir} LANE=1 READ_STRUCTURE=151T8B9M8B151T RUN_BARCODE=run1 LIBRARY_PARAMS={input.library_params} MOLECULAR_INDEX_TAG=RX ADAPTERs_TO_CHECK=INDEXED READ_GROUP_ID=BO NUM_PROCESSORS=2 IGNORE_UNEXPECTED_BARCODES=true > {output.log}
"""
rule rule2:
input:
log="barcodes.log",
infile="{sample}_unmapped.bam"
params:
ref="ref.fasta"
output:
outfile="{sample}.mapped.bam"
shell:
"""
java -Djava.io.tmpdir=/path/to/tmp -Xmx2g -jar picard.jar SamToFastq I={input.infile} F=/dev/stdout INTERLEAVE=true | bwa mem -p -t 7 {params.ref} /dev/stdin | java -Djava.io.tmpdir=/path/to/tmp -Xmx4g -jar picard.jar MergeBamAlignment UNMAPPED={input.infile} ALIGNED=/dev/stdin O={output.outfile} R={params.ref} SORT_ORDER=coordinate MAX_GAPS=-1 ORIENTATIONS=FR
"""
A:
In rule2 I would move infile="{sample}_unmapped.bam" from the input directive to the params directive. And of course you would change the shell script from I={input.infile} to I={params.infile}.
rule2 will still wait for rule1 to complete because you give barcodes.log as input to rule2.
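A sketch of rule2 adjusted along these lines (the shell command is abbreviated; infile moves to params because it is a side-effect output of rule1, and barcodes.log remains the input dependency that orders the two rules):

```
rule rule2:
    input:
        log="barcodes.log"                 # dependency on rule1
    params:
        ref="ref.fasta",
        infile="{sample}_unmapped.bam"     # produced by rule1 as a side effect
    output:
        outfile="{sample}.mapped.bam"
    shell:
        "... I={params.infile} ... O={output.outfile} R={params.ref} ..."
```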
Q:
Drawing an AVL Tree with tikzpicture in Latex
I hope someone can help me. I am trying to draw an AVL tree with tikzpicture. I must say I am not very familiar with how to modify TikZ. In the third level, the child nodes overlap each other. How can I avoid this, so that they sit next to each other instead of on top of each other? I have attached the code I was using for this drawing. Many thanks in advance.
\documentclass[11pt, a4paper]{book} % add parameters to the document
\usepackage{fullpage}
\usepackage[utf8]{inputenc}
\usepackage{tikz} % Graphen zeichnen
\begin{document}
\begin{tikzpicture}[
edge from parent path=
{(\tikzparentnode.south) .. controls +(0,0) and +(0,0)
.. (\tikzchildnode.north)},
every node/.style={draw,circle},
label distance=-1mm
]
\node [label=330:$0$]{7}
child {node[label=330:$0$] {2}
child {node[label=330:$0$] {1}}
child {node[label=330:$0$] {3}}}
child {node[label=330:$0$] {24}
child {node[label=330:$0$] {15}}
child {node[label=330:$0$] {42}}
};
\end{tikzpicture}
\end{document}
A:
One possible solution could be to change the sibling distance like this:
\documentclass[11pt, a4paper]{book} % add parameters to the document
\usepackage{fullpage}
\usepackage[utf8]{inputenc}
\usepackage{tikz} % Graphen zeichnen
\begin{document}
\begin{tikzpicture}[
edge from parent path=
{(\tikzparentnode.south) .. controls +(0,0) and +(0,0)
.. (\tikzchildnode.north)},
every node/.style={draw,circle},
label distance=-1mm,
level 1/.style={sibling distance=30mm},
level 2/.style={sibling distance=15mm}
]
\node [label=330:$0$]{7}
child {node[label=330:$0$] {2}
child {node[label=330:$0$] {1}}
child {node[label=330:$0$] {3}}}
child {node[label=330:$0$] {24}
child {node[label=330:$0$] {15}}
child {node[label=330:$0$] {42}}
};
\end{tikzpicture}
\end{document}
Q:
Can anyone explain why this code on python is not working?
def n(a):
a = str(a)
if "0" in a:
b = str((a).replace("0", ''))
a = b[::-1]
a = a[::-1]
a = int(a)
return a
else:
a = a[::-1]
a = a[::-1]
a = int(a)
return a
N = int(input())
des = 10**9 + 7
summa = 0
for a in range():
print(n(a))
b = n(a)
summa = summa + b
summa = summa % des
print(summa)
gives such an error : 'invalid literal for int() with base 10: '' '
If I pass the value to the variable a without the for loop, then everything works.
I just need to understand what is wrong with the code. I'm new to programming and can't figure it out right away
A:
The input function waits for user input. If none is given, it returns an empty string, i.e. ''. As a result, you are casting '' to an integer. This is not possible and results in the error you mention.
int('')  # raises ValueError: invalid literal for int() with base 10: ''
You can also see this at the end of the error message: the trailing '' is what was passed to int().
I'm guessing that you might be copy-pasting the code above directly into a terminal, which results in Python not waiting for any actual input.
If you first run only this line
N = int(input())
and then hit enter, it will wait for user input. Then you can copy the rest of the code. The rest of the code also contains some issues; specifically, range should have some input, like range(N)
def n(a):
a = str(a)
if "0" in a: # this also happens when a == '0'
b = str((a).replace("0", ''))
a = b[::-1]
a = a[::-1]
a = int(a) # and if a == '0', this resolves to int('')
....
You can add the following
def n(a):
    if not a: # if a is 0 (or otherwise falsy)
        return 0 # then there is no sense in flipping it around
a = str(a)
....
A:
The error you are seeing is because you are trying to convert an empty string to an integer using the int() function. This error is happening because you are using a range() function with no arguments in the for loop, which will create an empty range and cause the for loop to not execute at all.
To fix this error, you need to pass the correct arguments to the range() function in the for loop.
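Putting the fixes from both answers together — range() given an argument and the a == 0 case short-circuited — a minimal runnable sketch might look like this (the double string reversal in the original is a no-op, so it is dropped here, and N is hard-coded instead of read from input()):

```python
def n(a):
    # a == 0 would become '' after stripping the zeros, so return 0 directly
    if not a:
        return 0
    # strip the zeros and convert what remains back to an integer
    return int(str(a).replace("0", ""))

N = 5  # in the original this comes from N = int(input())
des = 10**9 + 7
summa = 0
for a in range(N):  # range() needs an argument
    summa = (summa + n(a)) % des
print(summa)  # → 10 for N = 5
```

With N = 5 the loop sums n(0)..n(4) = 0 + 1 + 2 + 3 + 4 and prints 10.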
stackoverflow_0074666238_python.txt
Q:
Spring framework:ConstraintViolationException still being threw even though I add a BindingResult just after the @Valid annotated variable
I am using the javax.validation API in the Spring Boot framework (it happened on both version 2.2.6.RELEASE and 2.3.4) to validate the Java beans in my controllers' method parameters. I have added a BindingResult right after the parameter being validated and tried to collect the information that the BindingResult supplies, aiming to customize the data returned to the front-end. However, this does not work: the exception is still thrown, so the statements in the method are not invoked. The example controller method, the console output, and the JSON data returned to the front-end are shown below:
//R is my customized info class carrying the data that I want to send to the front-end
public R save(@Valid @RequestBody BrandEntity brand, BindingResult result){
    if (result.hasErrors()) {
        Map<String,String> map = new HashMap<>();
        result.getFieldErrors().forEach(item->{
            map.put(item.getField(),item.getDefaultMessage());
        });
        return R.error(400, "提交的数据不合法" ).put("data",map);
    }
    brandService.save(brand);
    return R.ok();
}
The console log:
2021-03-05 00:31:46.316 ERROR 29422 --- [io-10000-exec-5] o.a.c.c.C.[.[.[/].[dispatcherServlet] : Servlet.service() for servlet [dispatcherServlet] in context with path [] threw exception [Request processing failed; nested exception is javax.validation.ConstraintViolationException: save.brand.showStatus: 必须是正数, save.brand.name: 不能为空] with root cause
javax.validation.ConstraintViolationException: save.brand.showStatus: 必须是正数, save.brand.name: 不能为空
at org.springframework.validation.beanvalidation.MethodValidationInterceptor.invoke(MethodValidationInterceptor.java:117) ~[spring-context-5.2.5.RELEASE.jar:5.2.5.RELEASE]
at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:186) ~[spring-aop-5.2.5.RELEASE.jar:5.2.5.RELEASE]
at org.springframework.aop.framework.CglibAopProxy$CglibMethodInvocation.proceed(CglibAopProxy.java:749) ~[spring-aop-5.2.5.RELEASE.jar:5.2.5.RELEASE]
...
at org.apache.tomcat.util.net.SocketProcessorBase.run(SocketProcessorBase.java:49) ~[tomcat-embed-core-9.0.33.jar:9.0.33]
at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128) ~[na:na]
at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628) ~[na:na]
at org.apache.tomcat.util.threads.TaskThread$WrappingRunnable.run(TaskThread.java:61) ~[tomcat-embed-core-9.0.33.jar:9.0.33]
at java.base/java.lang.Thread.run(Thread.java:834) ~[na:na]
The JSON data returned to the front-end is like :
{
    "timestamp": "2021-03-04T16:31:46.319+0000",
    "status": 500,
    "error": "Internal Server Error",
    "message": "save.brand.showStatus: 必须是正数, save.brand.name: 不能为空",
    "path": "/product/brand/save"
}
which is handled by the default exception handlers in Spring Boot Framework.
That means the statements in the method are not executed at all.
I know that a class annotated with @ControllerAdvice that deals with ConstraintViolationException may solve this. But I want to know why the BindingResult parameter is not working and why that exception is still thrown from the save() method. (Please ignore the messages in Chinese, and you may correct my English if you would like -- thanks)
---------edit----------
one possible clue:
@RequestParam ---> javax.validation.ConstraintViolationException
validation on @RequestBody ---> MethodArgumentNotValidException
---------solved?-----
I should use @Validated instead of @Valid.
A:
This happens when the controller class also has a @Validated annotation. In that case the method call gets revalidated and the BindingResult is ignored. I guess the solution is to prevent mixing @Valid and @Validated.
stackoverflow_0066479474_hibernate_validator_java_spring_spring_boot_spring_mvc.txt
Q:
How to get uniform line space for a mixed paragraph of texts and images
I am using iText 7.2.1.
I am trying to add some small icons (drawn by code) to my text. I find that when small icons are added to my text, it is hard to keep uniform line spacing.
If all elements of a paragraph are texts, I can just set SetFixedLeading() then no matter how big the font sizes are, my lines have always the same height.
But when I add some small icons inside my paragraph, SetFixedLeading() no longer works.
What I want is like the "Line spacing" option in Microsoft Word: if I give it a fixed value, it treats embedded images and text equally, so I always get fixed line spacing.
The following is my code:
using iText.Kernel.Colors;
using iText.Kernel.Pdf;
using iText.Kernel.Pdf.Canvas;
using iText.Layout;
using iText.Kernel.Pdf.Xobject;
using iText.Layout.Element;
using iText.Kernel.Geom;
using iText.Kernel.Font;
using iText.IO.Font;

namespace iTextTest
{
    public partial class Form1 : Form
    {
        public Form1()
        {
            InitializeComponent();
        }

        private void button1_Click(object sender, EventArgs e)
        {
            var writer = new PdfWriter("test.pdf");
            var pdf_doc = new PdfDocument(writer);
            var doc = new Document(pdf_doc, iText.Kernel.Geom.PageSize.DEFAULT, false);

            // Make a text of various sizes
            var mixed_paragraph = new Paragraph();
            for (int i = 0; i < 100; i++)
            {
                var style = new Style();
                var size = (Math.Sin(i) + 2) * 10;
                style.SetFontSize((float)size);
                mixed_paragraph.Add(new Text("A").AddStyle(style));
            }

            // Make a 20x20 icon
            var bounds = new iText.Kernel.Geom.Rectangle(0, 0, 20, 20);
            var xobj = new PdfFormXObject(bounds);
            var pdf_canvas = new PdfCanvas(xobj, pdf_doc);
            pdf_canvas.SetFillColor(ColorConstants.RED);
            pdf_canvas.Rectangle(0, 0, 20, 20);
            pdf_canvas.Fill();
            var icon = new iText.Layout.Element.Image(xobj);
            mixed_paragraph.Add(icon);

            // Fixed leading
            mixed_paragraph.SetFixedLeading(10);

            doc.Add(mixed_paragraph);

            doc.Close();
            pdf_doc.Close();
            writer.Close();

            MessageBox.Show("OK");
        }
    }
}
This is what it looks like. The second line is right but the third line has more space than fixed leading 10.
I need this because, in my case, I need some small rectangular icons that each contain two lines of integers and other info.
These icons have a bigger height than my text (or else they would be hard to read), but theoretically they can still fit because my text has enough spacing.
Unfortunately, my line spaces become uneven. Fixed leading seems not to affect non-text images, so lines with icons have wider line spaces.
I have been considering a workaround: add empty spaces in text and put icons at these fixed positions. It's still hard. I don't know how to get these positions.
A:
The SetFixedLeading() method sets the fixed leading (the distance between the baselines of two lines of text) for a paragraph. However, this method only affects the line spacing of text elements within the paragraph, and not other elements such as images.
One solution to this problem would be to use the SetMinHeight() method to set the minimum height of each line in the paragraph, which will ensure that all lines, including those with images, have the same minimum height. This method will make the lines taller if necessary to accommodate the height of the images, but the overall line spacing will remain uniform.
Here is an example of how you can use the SetMinHeight() method to achieve uniform line spacing with images in a paragraph:
using iText.Kernel.Colors;
using iText.Kernel.Pdf;
using iText.Kernel.Pdf.Canvas;
using iText.Layout;
using iText.Kernel.Pdf.Xobject;
using iText.Layout.Element;
using iText.Kernel.Geom;
using iText.Kernel.Font;
using iText.IO.Font;

namespace iTextTest
{
    public partial class Form1 : Form
    {
        public Form1()
        {
            InitializeComponent();
        }

        private void button1_Click(object sender, EventArgs e)
        {
            var writer = new PdfWriter("test.pdf");
            var pdf_doc = new PdfDocument(writer);
            var doc = new Document(pdf_doc, iText.Kernel.Geom.PageSize.DEFAULT, false);

            // Make a text of various sizes
            var mixed_paragraph = new Paragraph();
            for (int i = 0; i < 100; i++)
            {
                var style = new Style();
                var size = (Math.Sin(i) + 2) * 10;
                style.SetFontSize((float)size);
                mixed_paragraph.Add(new Text("A").AddStyle(style));
            }

            // Make a 20x20 icon
            var bounds = new iText.Kernel.Geom.Rectangle(0, 0, 20, 20);
            var xobj = new PdfFormXObject(bounds);
            var pdf_canvas = new PdfCanvas(xobj, pdf_doc);
            pdf_canvas.SetFillColor(ColorConstants.RED);
            pdf_canvas.Rectangle(0, 0, 20, 20);
            pdf_canvas.Fill();
            var icon = new iText.Layout.Element.Image(xobj);
            mixed_paragraph.Add(icon);

            // Set the minimum height of each line in the paragraph
            mixed_paragraph.SetMinHeight(20);

            doc.Add(mixed_paragraph);

            doc.Close();
            pdf_doc.Close();
            writer.Close();

            MessageBox.Show("OK");
        }
    }
}
In this example, I set the minimum height of each line in the paragraph to be 20, which is the height of the image. This ensures that the lines with images will have the same minimum height as the other lines in the paragraph, resulting in uniform line spacing. You can adjust the minimum height as necessary to achieve the desired line spacing.
A:
To achieve uniform line spacing for a paragraph containing both text and images, you can use the SetMultipliedLeading() method, which allows you to specify a line height that is a multiple of the font size. This method will apply the same line spacing to both text and images, so that the line spacing remains uniform.
Here is an example of how you can use the SetMultipliedLeading() method to set the line spacing for a paragraph containing text and images:
Paragraph mixedParagraph = new Paragraph();
// Add some text to the paragraph
mixedParagraph.Add(new Text("A"));
// Make a 20x20 icon
Rectangle bounds = new Rectangle(0, 0, 20, 20);
PdfFormXObject xobj = new PdfFormXObject(bounds);
PdfCanvas canvas = new PdfCanvas(xobj, pdfDoc);
canvas.SetFillColor(ColorConstants.RED);
canvas.Rectangle(0, 0, 20, 20);
canvas.Fill();
Image icon = new Image(xobj);
mixedParagraph.Add(icon);
// Set the line spacing to be 1.5 times the font size
mixedParagraph.SetMultipliedLeading(1.5f);
// Add the paragraph to the document
document.Add(mixedParagraph);
This will set the line spacing for the paragraph to be 1.5 times the font size, which will apply to both text and images in the paragraph. This will ensure that the line spacing remains uniform, even when the paragraph contains both text and images.
You can adjust the line spacing by changing the value passed to the SetMultipliedLeading() method. For example, setting the line spacing to 2 times the font size will result in a larger line spacing, while setting it to 1.25 times the font size will result in a smaller line spacing. Experiment with different values to find the line spacing that works best for your needs.
--- Potential Other Fix ---
It's possible that the problem is related to the way that the image is being added to the paragraph. When you add an image to a paragraph, it's important to make sure that the image is properly aligned with the text. Depending on the alignment you choose, the image may appear to have a different line height than the surrounding text.
For example, if you are using the default alignment, which is called "baseline alignment", the image may appear to have a larger line height than the text because the bottom of the image is aligned with the baseline of the text. In this case, you can use the SetVerticalAlignment() method to change the alignment of the image, so that it is aligned with the top or the middle of the text. This will cause the image to have the same line height as the surrounding text.
Here is an example of how you can use the SetVerticalAlignment() method to align an image with the top of the text in a paragraph:
// Add some text to the paragraph
mixedParagraph.Add(new Text("A"));
// Make a 20x20 icon
Rectangle bounds = new Rectangle(0, 0, 20, 20);
PdfFormXObject xobj = new PdfFormXObject(bounds);
PdfCanvas canvas = new PdfCanvas(xobj, pdfDoc);
canvas.SetFillColor(ColorConstants.RED);
canvas.Rectangle(0, 0, 20, 20);
canvas.Fill();
Image icon = new Image(xobj);
// Set the vertical alignment of the image to be aligned with the top of the text
icon.SetVerticalAlignment(VerticalAlignment.TOP);
// Add the image to the paragraph
mixedParagraph.Add(icon);
// Set the line spacing to be 1.5 times the font size
mixedParagraph.SetMultipliedLeading(1.5f);
// Add the paragraph to the document
document.Add(mixedParagraph);
In this example, the image will be aligned with the top of the text, which will cause it to have the same line height as the surrounding text. This should solve the problem of the image having a larger line height than the text. You can adjust the alignment of the image by using a different value for the VerticalAlignment parameter in the SetVerticalAlignment() method. For example, you can use VerticalAlignment.MIDDLE to align the image with the middle of the text, or VerticalAlignment.BOTTOM to align the image with the bottom of the text.
A:
One solution to ensure uniform line spacing with text and images in iText 7.2.1 is to use the LineSpacingHandler interface and implement a custom line spacing handler that treats images and text equally.
To do this, you can create a class that implements the LineSpacingHandler interface, and override the Handle method to specify how line spacing should be calculated. In this method, you can use the PdfTextElement and PdfImageXObject classes to determine whether an element is text or an image, and apply the same line spacing calculation to both.
Here is an example of how you could implement a custom line spacing handler:
public class CustomLineSpacingHandler : LineSpacingHandler {
    public float Handle(ILeafElement currentElement, float lineHeight) {
        if (currentElement is PdfTextElement) {
            // Calculate line spacing for text element
        } else if (currentElement is PdfImageXObject) {
            // Calculate line spacing for image element
        }
        // Return calculated line spacing
    }
}
Once you have implemented your custom line spacing handler, you can use it in your code by setting it as the LineSpacingHandler for your paragraph, like this:
var mixed_paragraph = new Paragraph();
mixed_paragraph.SetLineSpacingHandler(new CustomLineSpacingHandler());
This will apply your custom line spacing calculation to all elements in the paragraph, ensuring uniform line spacing with text and images.
stackoverflow_0074371035_c#_itext_itext7.txt
Q:
Is it possible to set a symbolic breakpoint for when an iOS device turns to landscape?
I'm debugging an issue where the device unexpectedly turns to landscape and back to portrait orientation, and trying to find the root cause of it.
A:
Use either of these in lldb:
b -[UIDevice setOrientation:animated:]
b -[UIWindow _updateToInterfaceOrientation:duration:force:]
That's Apple private API so no guarantees for future iOS versions, but upon landscape/portrait rotation I do hit a breakpoint on:
iOS12.4.1
iOS15.1
iOS16.1
stackoverflow_0074620816_ios_lldb_xcode.txt
|
Q:
jinja2 in python and rendering
I am unable to decipher the error here. Can any one help ?
from jinja2 import Template
prefixes = {
"10.0.0.0/24" : {
"description": "Corporate NAS",
"region": "Europe",
"site": "Telehouse-West"
}
}
template = """
Details for 10.0.0.0/24 prefix:
Description: {{ prefixes['10.0.0.0/24'].description }}
Region: {{ prefixes['10.0.0.0/24'].region }}
Site: {{ prefixes['10.0.0.0/24'].site }}
"""
j2 = Template(template)
print(j2.render(prefixes))
Error:
File "c:\Users\verma\Documents\Python\jinja\jinja1.py", line 19, in <module>
print(j2.render(prefixes))
File "C:\Users\verma\AppData\Roaming\Python\Python310\site-packages\jinja2\environment.py", line 1301, in render
self.environment.handle_exception()
File "C:\Users\verma\AppData\Roaming\Python\Python310\site-packages\jinja2\environment.py", line 936, in handle_exception
raise rewrite_traceback_stack(source=source)
File "<template>", line 3, in top-level template code
File "C:\Users\verma\AppData\Roaming\Python\Python310\site-packages\jinja2\environment.py", line 466, in getitem
return obj[argument]
jinja2.exceptions.UndefinedError: 'prefixes' is undefined
I was expecting the jinja2 rendering to work.
A:
render uses keyword arguments. replace print(j2.render(prefixes)) with print(j2.render(prefixes=prefixes)) and it should work.
A:
If you want to pass prefixes as a positional argument, you should change the prefixes dictionary to be:
prefixes = {
"prefixes": {
"10.0.0.0/24": {
"description": "Corporate NAS",
"region": "Europe",
"site": "Telehouse-West"
}
}
}
A:
The error message indicates that the prefixes variable is not defined when the Jinja2 template is rendered. This is likely because the prefixes variable is defined within the scope of the script, but it is not passed to the render() method as a variable.
To fix this, you can pass the prefixes variable as a keyword argument to the render() method, like this:
print(j2.render(prefixes=prefixes))
This will make the prefixes variables available to the Jinja2 template, and the rendering should work as expected.
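A minimal runnable sketch of the accepted fix (assumes the third-party jinja2 package is installed; the template string is shortened to one field for brevity). It shows the two equivalent ways to expose the dict under the name the template expects: an explicit keyword argument, or unpacking a mapping whose keys become template variable names.

```python
# Sketch: jinja2's render() takes template variables as keyword arguments,
# so a bare positional dict is never exposed under the name "prefixes".
from jinja2 import Template

prefixes = {"10.0.0.0/24": {"description": "Corporate NAS"}}

template = Template("Description: {{ prefixes['10.0.0.0/24'].description }}")

# Pass the dict by name so the template can see it as `prefixes` ...
print(template.render(prefixes=prefixes))
# ... or unpack a mapping; each key becomes a template variable name.
print(template.render(**{"prefixes": prefixes}))
```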
|
jinja2 in python and rendering
|
I am unable to decipher the error here. Can any one help ?
from jinja2 import Template
prefixes = {
"10.0.0.0/24" : {
"description": "Corporate NAS",
"region": "Europe",
"site": "Telehouse-West"
}
}
template = """
Details for 10.0.0.0/24 prefix:
Description: {{ prefixes['10.0.0.0/24'].description }}
Region: {{ prefixes['10.0.0.0/24'].region }}
Site: {{ prefixes['10.0.0.0/24'].site }}
"""
j2 = Template(template)
print(j2.render(prefixes))
Error:
File "c:\Users\verma\Documents\Python\jinja\jinja1.py", line 19, in <module>
print(j2.render(prefixes))
File "C:\Users\verma\AppData\Roaming\Python\Python310\site-packages\jinja2\environment.py", line 1301, in render
self.environment.handle_exception()
File "C:\Users\verma\AppData\Roaming\Python\Python310\site-packages\jinja2\environment.py", line 936, in handle_exception
raise rewrite_traceback_stack(source=source)
File "<template>", line 3, in top-level template code
File "C:\Users\verma\AppData\Roaming\Python\Python310\site-packages\jinja2\environment.py", line 466, in getitem
return obj[argument]
jinja2.exceptions.UndefinedError: 'prefixes' is undefined
I was expecting the jinja2 rendering to work.
|
[
"render uses keyword arguments. replace print(j2.render(prefixes)) with print(j2.render(prefixes=prefixes)) and it should work.\n",
"If you want to pass prefixes as a positional argument, you should change the prefixes dictionary to be:\nprefixes = {\n \"prefixes\": {\n \"10.0.0.0/24\": {\n \"description\": \"Corporate NAS\",\n \"region\": \"Europe\",\n \"site\": \"Telehouse-West\"\n }\n }\n}\n\n",
"The error message indicates that the prefixes variable is not defined when the Jinja2 template is rendered. This is likely because the prefixes variable is defined within the scope of the script, but it is not passed to the render() method as a variable.\nTo fix this, you can pass the prefixes variable as a keyword argument to the render() method, like this:\n\nprint(j2.render(prefixes=prefixes))\n\n\nThis will make the prefixes variables available to the Jinja2 template, and the rendering should work as expected.\n"
] |
[
1,
0,
0
] |
[] |
[] |
[
"jinja2",
"python"
] |
stackoverflow_0074666184_jinja2_python.txt
|
Q:
Phone is rooted but can't pull files from /data/data folder
My phone Samsung Galaxy S5 mini is rooted. I'm trying to pull files from /data/data/myapp.package/ to folder on my PC.
adb pull /data/data/myapp.package E:\myapp\myapp.package
it gives me this error
adb: error: failed to copy '/data/data/myapp.package' to 'E:\myapp\myapp.package': Permission denied
I found many questions like mine but no answer solved my problem. Some suggested to execute this command adb root before pulling files. Some suggested to install adbd insecure app to enable root access. In fact after installing that app, phone disappeared from adb terminal. Both solution didn't work for me.
BTW, I can copy files using cp command from adb shell but I have to copy files to sdcard and then pull from sdcard. I'm looking for solution which allows me to copy files directly from /data/data/myapp.package to my PC
Any solution?
A:
For your adb to be able to access /data/data directly (for adb pull), your adbd should be running as root - which can generally be done by adb root command.
adb root would not work on commercial devices like Samsung Galaxy S5 mini as commercial devices have ro.secure=1, i.e., the adbd can't be restarted as root due to a check of property called ro.secure. adbd insecure app circumvents this and restarts adbd in root mode to enable adb pull, etc. to work.
In short, if adbd insecure app doesn't work for you, it's not possible to do adb pull from /data/data in your existing ROM. It might be possible if you change the ROM / do some boot.img tweaks, but I would probably suggest trying latest version / different versions of adbd insecure app before going for ROM changes.
Read more on rooting here.
A:
This is my example pulling DB file from the root directory
adb -e shell "run-as com.example.project cp /data/data/com.example.project/databases/project.db /sdcard"
The key is run-as
A:
First you need to hit these two command from command line
adb root
adb remount
then
adb pull /data/data/myapp.package E:\myapp\myapp.package
A:
Here's a one-liner that lets you pull a file without installing anything else and without having to copy it to a public location on the device to then pull it to your computer:
adb exec-out su -c cat /data/data/myapp.package/my_file.apk > my_file.apk
What this does:
adb exec-out runs a command and outputs the raw binary output
su -c runs the provided command as root
cat <file> prints out the file contents
> <file> redirects the output from adb (i.e. the raw file contents) to a local file.
|
Phone is rooted but can't pull files from /data/data folder
|
My phone Samsung Galaxy S5 mini is rooted. I'm trying to pull files from /data/data/myapp.package/ to folder on my PC.
adb pull /data/data/myapp.package E:\myapp\myapp.package
it gives me this error
adb: error: failed to copy '/data/data/myapp.package' to 'E:\myapp\myapp.package': Permission denied
I found many questions like mine but no answer solved my problem. Some suggested to execute this command adb root before pulling files. Some suggested to install adbd insecure app to enable root access. In fact after installing that app, phone disappeared from adb terminal. Both solution didn't work for me.
BTW, I can copy files using cp command from adb shell but I have to copy files to sdcard and then pull from sdcard. I'm looking for solution which allows me to copy files directly from /data/data/myapp.package to my PC
Any solution?
|
[
"For your adb to be able to access /data/data directly (for adb pull), your adbd should be running as root - which can generally be done by adb root command.\nadb root would not work on commercial devices like Samsung Galaxy S5 mini as commercial devices have ro.secure=1, i.e., the adbd can't be restarted as root due to a check of property called ro.secure. adbd insecure app circumvents this and restarts adbd in root mode to enable adb pull, etc. to work.\nIn short, if adbd insecure app doesn't work for you, it's not possible to do adb pull from /data/data in your existing ROM. It might be possible if you change the ROM / do some boot.img tweaks, but I would probably suggest trying latest version / different versions of adbd insecure app before going for ROM changes.\nRead more on rooting here.\n",
"This is my example pulling DB file from the root directory\nadb -e shell \"run-as com.example.project cp /data/data/com.example.project/databases/project.db /sdcard\"\n\nThe key is run-as\n",
"First you need to hit these two command from command line\nadb root\nadb remount\n\nthen\nadb pull /data/data/myapp.package E:\\myapp\\myapp.package\n\n",
"Here's a one-liner that lets you pull a file without installing anything else and without having to copy it to a public location on the device to then pull it to your computer:\nadb exec-out su -c cat /data/data/myapp.package/my_file.apk > my_file.apk\n\nWhat this does:\n\nadb exec-out runs a command and outputs the raw binary output\nsu -c runs the provided command as root\ncat <file> prints out the file contents\n> <file> redirects the output from adb (i.e. the raw file contents) to a local file.\n\n"
] |
[
6,
1,
0,
0
] |
[] |
[] |
[
"adb",
"android",
"root"
] |
stackoverflow_0043407586_adb_android_root.txt
|
Q:
How to check if an element has a specific child?
How to check if an element has a specific child?
I added children to my element using:
var child1 = document.createElement('div');
document.body.appendChild(child1);
How can I check whether child1 is already attached to body?
if (document.body.alreadyHas(child1)) …
^^^^^^^^^^
what to do here?
A:
Given references to two elements, you can test if one is the child of the other by using the .parentElement property as follows:
if (child1.parentElement === parent1) {
// do something
}
So in your specific case where you've said the parent is the body, you can do this:
var child1 = document.createElement('div');
var child2 = document.createElement('div');
document.body.appendChild(child1); // note we only append one
if (child1.parentElement === document.body) {
console.log("child1 is a child of the body"); // this will run
}
if (child2.parentElement === document.body) {
console.log("child2 is a child of the body"); // this won't run
}
A:
You can do that using HTML DOM querySelector() Method .
if(document.querySelector("div .example").length>0)
{
//your code
}
A:
<script src="https://cdnjs.cloudflare.com/ajax/libs/jquery/3.0.0/jquery.min.js"></script>
<html>
<div id="myDiv">
<p>tag p</p>
<strong>tag strong</strong>
<h5>title h5</h5>
</div>
<button type="button" onclick="myFunction()">Click here</button>
</html>
<script>
function myFunction(){
if(jQuery("#myDiv").children("h5").length > 0){
console.log("Found!");
}else{
console.log("Not Found!");
}
}
</script>
A:
You can use Node.contains()
you can find more about it here
const child1 = document.createElement('div');
document.body.appendChild(child1);
if(document.contains(child1)){
//your code
}
A quick note about document.querySelector
This answer that was posted earlier will throw an error in case the element doesn't exist, I wouldn't use it:
if(document.querySelector("div .example").length>0)
{
//your code
}
You can open your console in this page and test it:
VM113:1 Uncaught TypeError: Cannot read properties of null (reading 'length')
at :1:39
If you want to use document.querySelector use this instead, it will return null if doesn't find any matches:
if(document.querySelector("div .example")){
// returns null if it div doesn't exist inside DOM
}
|
How to check if an element has a specific child?
|
How to check if an element has a specific child?
I added children to my element using:
var child1 = document.createElement('div');
document.body.appendChild(child1);
How can I check whether child1 is already attached to body?
if (document.body.alreadyHas(child1)) …
^^^^^^^^^^
what to do here?
|
[
"Given references to two elements, you can test if one is the child of the other by using the .parentElement property as follows:\nif (child1.parentElement === parent1) {\n // do something\n}\n\nSo in your specific case where you've said the parent is the body, you can do this:\n\n\n var child1 = document.createElement('div');\r\n var child2 = document.createElement('div');\r\n document.body.appendChild(child1); // note we only append one\r\n\r\n if (child1.parentElement === document.body) {\r\n console.log(\"child1 is a child of the body\"); // this will run\r\n }\r\n if (child2.parentElement === document.body) {\r\n console.log(\"child2 is a child of the body\"); // this won't run\r\n }\n\n\n\n",
"You can do that using HTML DOM querySelector() Method .\nif(document.querySelector(\"div .example\").length>0)\n{\n//your code\n}\n\n",
"\n\n<script src=\"https://cdnjs.cloudflare.com/ajax/libs/jquery/3.0.0/jquery.min.js\"></script>\n\n<html>\n<div id=\"myDiv\">\n <p>tag p</p>\n <strong>tag strong</strong>\n <h5>title h5</h5>\n</div>\n<button type=\"button\" onclick=\"myFunction()\">Click here</button>\n</html>\n\n<script>\n\nfunction myFunction(){\n if(jQuery(\"#myDiv\").children(\"h5\").length > 0){\n console.log(\"Found!\");\n }else{\n console.log(\"Not Found!\");\n }\n}\n</script>\n\n\n\n",
"You can use Node.contains()\nyou can find more about it here\nconst child1 = document.createElement('div');\n\ndocument.body.appendChild(child1);\n\nif(document.contains(child1)){\n //your code\n}\n\nA quick note about document.querySelector\nThis answer that was posted earlier will throw an error in case the element doesn't exist, I wouldn't use it:\nif(document.querySelector(\"div .example\").length>0)\n{\n//your code\n}\n\nYou can open your console in this page and test it:\n\nVM113:1 Uncaught TypeError: Cannot read properties of null (reading 'length')\nat :1:39\n\nIf you want to use document.querySelector use this instead, it will return null if doesn't find any matches:\nif(document.querySelector(\"div .example\")){\n// returns null if it div doesn't exist inside DOM\n}\n\n"
] |
[
5,
3,
0,
0
] |
[] |
[] |
[
"javascript"
] |
stackoverflow_0038990396_javascript.txt
|
Q:
How to "tail -f" with "grep" save outputs to an another file which name is time varying?
STEP1
Like I said in title,
I would like to save output of
tail -f example | grep "DESIRED"
to different file
I have tried
tail -f example | grep "DESIRED" | tee -a different
tail -f example | grep "DESIRED" >> different
all of them not working
and I have searched similar questions and read several experts suggesting buffered
but I cannot use it.....
Is there any other way I can do it?
STEP2
once above is done, I would like to make "different" (filename from above) to time varying. I want to keep change its name in every 30minutes.
For example like
20221203133000
20221203140000
20221203143000
...
I have tried
tail -f example | grep "DESIRED" | tee -a $(date +%Y%m%d%H)$([ $(date +%M) -lt 30 ] && echo 00 || echo 30)00
The problem is since I did not even solve first step, I could not test the second step. But I think this command will only create one file based on the time I run the command,,,, Could I please get some advice?
A:
Below code should do what you want.
Some explanations: as you want bash to execute some "code" (in your case dumping to a different file name) you might need two things running in parallel: the tail + grep, and the code that would decide where to dump.
To connect the two processes I use a name fifo (created with mkfifo) in which what is written by tail + grep (using > tmp_fifo) is read in the while loop (using < tmp_fifo). Then once in a while loop, you are free to output to whatever file name you want.
Note: without line-buffered (like in your question) grep will work, will just wait until it has more data (prob 8k) to dump to the file. So if you do not have lots of data generated in "example" it will not dump it until it is enough.
rm -rf tmp_fifo
mkfifo tmp_fifo
(tail -f input | grep --line-buffered TEXT_TO_CHECK > tmp_fifo &)
while read LINE < tmp_fifo; do
CURRENT_NAME=$(date +%Y%m%d%H)
# or any other code that determines to what file to dump ...
echo $LINE >> ${CURRENT_NAME}
done
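For STEP2, the file name inside the loop still has to round down to a 30-minute boundary (the `date +%Y%m%d%H` above only buckets by hour). A small Python sketch of that rounding logic, written as a pure function so it is easy to verify; the shell loop can compute the same name with `date +%M` and an if/else on the minute, as in the original attempt:

```python
# Sketch: round a timestamp down to the nearest 30 minutes and format it
# as YYYYmmddHHMM00, matching the file names asked for in STEP2.
from datetime import datetime

def bucket_name(now: datetime) -> str:
    """Return the 30-minute-bucketed file name for a timestamp."""
    minute = 0 if now.minute < 30 else 30
    return now.strftime("%Y%m%d%H") + f"{minute:02d}00"

print(bucket_name(datetime(2022, 12, 3, 13, 47)))  # 20221203133000
print(bucket_name(datetime(2022, 12, 3, 14, 5)))   # 20221203140000
```

Inside the `while read LINE` loop you would call the equivalent of `bucket_name(now)` on every iteration, so the output file rolls over automatically when the half-hour boundary is crossed.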
|
How to "tail -f" with "grep" save outputs to an another file which name is time varying?
|
STEP1
Like I said in title,
I would like to save output of
tail -f example | grep "DESIRED"
to different file
I have tried
tail -f example | grep "DESIRED" | tee -a different
tail -f example | grep "DESIRED" >> different
all of them not working
and I have searched similar questions and read several experts suggesting buffered
but I cannot use it.....
Is there any other way I can do it?
STEP2
once above is done, I would like to make "different" (filename from above) to time varying. I want to keep change its name in every 30minutes.
For example like
20221203133000
20221203140000
20221203143000
...
I have tried
tail -f example | grep "DESIRED" | tee -a $(date +%Y%m%d%H)$([ $(date +%M) -lt 30 ] && echo 00 || echo 30)00
The problem is since I did not even solve first step, I could not test the second step. But I think this command will only create one file based on the time I run the command,,,, Could I please get some advice?
|
[
"Below code should do what you want.\nSome explanations: as you want bash to execute some \"code\" (in your case dumping to a different file name) you might need two things running in parallel: the tail + grep, and the code that would decide where to dump.\nTo connect the two processes I use a name fifo (created with mkfifo) in which what is written by tail + grep (using > tmp_fifo) is read in the while loop (using < tmp_fifo). Then once in a while loop, you are free to output to whatever file name you want.\nNote: without line-buffered (like in your question) grep will work, will just wait until it has more data (prob 8k) to dump to the file. So if you do not have lots of data generated in \"example\" it will not dump it until it is enough.\nrm -rf tmp_fifo\nmkfifo tmp_fifo\n\n(tail -f input | grep --line-buffered TEXT_TO_CHECK > tmp_fifo &)\n\nwhile read LINE < tmp_fifo; do\n CURRENT_NAME=$(date +%Y%m%d%H)\n # or any other code that determines to what file to dump ...\n echo $LINE >> ${CURRENT_NAME}\ndone\n\n"
] |
[
1
] |
[] |
[] |
[
"bash",
"linux",
"tail"
] |
stackoverflow_0074666107_bash_linux_tail.txt
|
Q:
Pagination problem with cache in Laravel: seeing the same results on every page
I am using cache, but I see the same results on every page when I use pagination. How can I fix it?
For create, update, edit, save, I use Laravel Model Observer.
My PostController
public function allpost(Request $request)
{
if($request->cache =='flush')
{
Cache::flush();
}
$data =[];
$posts =Cache::get('posts',[]);
if(empty($posts))
{
$posts = Post::paginate(10);
Cache::forever('posts',$posts);
}
$data['posts'] = $posts ;
return view('frontend.allpost',$data);
}
My ModelObserver
<?php
namespace App\Observers;
use Illuminate\Support\Facades\Cache;
class PostObserver
{
public function created()
{
Cache::forget('posts');
}
public function updated()
{
Cache::forget('posts');
}
public function saved()
{
Cache::forget('posts');
}
public function deleted()
{
Cache::forget('posts');
}
}
A:
this is a common issue.
You just cache 10 record and show this 10 record in all pages.
Your Cache function must write like this :
$posts = \Cache::remember('get-all-posts-' . request('page', 1), 3600, function () {
return Post::paginate(10);
});
above code , cache records of each page and your issue will be gone
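A language-agnostic sketch of why the fix works: the cache key must include the page number, otherwise every page serves the first cached result. A plain dict stands in for Laravel's Cache facade here, and `fetch_page` is a hypothetical loader standing in for `Post::paginate(10)`:

```python
# Sketch: per-page cache keys. Caching under one fixed key ("posts")
# freezes page 1 forever; baking the page number into the key gives
# each page its own cached entry.
cache = {}

def fetch_page(page, per_page=10):
    # Stand-in for the database query behind Post::paginate(10).
    start = (page - 1) * per_page
    return list(range(start, start + per_page))

def cached_posts(page):
    key = f"get-all-posts-{page}"   # page number baked into the key
    if key not in cache:
        cache[key] = fetch_page(page)
    return cache[key]

assert cached_posts(1) != cached_posts(2)  # distinct results per page
```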
|
Pagination problem with cache in Laravel: seeing the same results on every page
|
I am using cache, but I see the same results on every page when I use pagination. How can I fix it?
For create, update, edit, save, I use Laravel Model Observer.
My PostController
public function allpost(Request $request)
{
if($request->cache =='flush')
{
Cache::flush();
}
$data =[];
$posts =Cache::get('posts',[]);
if(empty($posts))
{
$posts = Post::paginate(10);
Cache::forever('posts',$posts);
}
$data['posts'] = $posts ;
return view('frontend.allpost',$data);
}
My ModelObserver
<?php
namespace App\Observers;
use Illuminate\Support\Facades\Cache;
class PostObserver
{
public function created()
{
Cache::forget('posts');
}
public function updated()
{
Cache::forget('posts');
}
public function saved()
{
Cache::forget('posts');
}
public function deleted()
{
Cache::forget('posts');
}
}
|
[
"this is a common issue.\nYou just cache 10 record and show this 10 record in all pages.\nYour Cache function must write like this :\n$posts = \\Cache::remember('get-all-posts-' . request('page', 1), 3600, function () {\n return Post::paginate(10);\n });\n\nabove code , cache records of each page and your issue will be gone\n"
] |
[
0
] |
[] |
[] |
[
"caching",
"laravel",
"pagination"
] |
stackoverflow_0064745234_caching_laravel_pagination.txt
|
Q:
Insert record from one table to another table - Oracle
I have a table TABLE1 which has 5 columns (ROLL_NO, NAME, UNITS, CODE, AMOUNT);
CREATE TABLE TABLE1 (ROLL_NO VARCHAR2(3), NAME VARCHAR2(4), UNITS NUMBER, AMOUNT NUMBER, CODE VARCHAR2(3));
------------------------------------------------------------------------------------------
INSERT INTO TABLE1 VALUES ('101', 'JOHN', 1, 6, 'ABC');
INSERT INTO TABLE1 VALUES ('101', 'JOHN', 2, 6, 'ABC');
INSERT INTO TABLE1 VALUES ('102', 'TOMS', 1, 7, 'ABC');
INSERT INTO TABLE1 VALUES ('102', 'TOMS', 6, 7, 'ABC');
INSERT INTO TABLE1 VALUES ('103', 'FINN', 1, 1, 'BCD');
ROLL_NO NAME UNITS AMOUNT CODE
-------------------------------------------------------
101 JOHN 1 6 ABC
101 JOHN 2 6 ABC
-------------------------------------------
102 TOMS 1 7 ABC
102 TOMS 6 7 ABC
103 FINN 1 1 BCD
There is second table TABLE2 where we need to insert data from TABLE1
CREATE TABLE TABLE2 (ROLL_NO VARCHAR2(3), NAME VARCHAR2(4), RESULT VARCHAR2(3));
There are three conditions to insert data into TABLE2
1st case : If CODE is 'ABC' and SUM(UNITS) of particular ROLL_NO is equal to AMOUNT then don't insert data into TABLE2
2nd case : If CODE is 'ABC' and SUM(UNITS) of particular ROLL_NO is not equal to AMOUNT then insert data with RESULT column value as 'YES'
3rd case : If CODE is not 'ABC' then RESULT column will be 'YES'.
Note: NAME, CODE and AMOUNT will be same for particular ROLL_NO though ROLL_NO has multiple UNITS.
Example :
ROLL_NO 102 CODE 'ABC' and two lines with SUM(UNITS) as 7 and its equal to AMOUNT i.e. 7 and (1st case)
ROLL_NO 101 has CODE 'ABC' and two lines with SUM(UNITS) as 3 and its not equal to AMOUNT i.e. 6 (2nd case)
ROLL_NO 103 has CODE 'BCD' which is not equal to 'ABC'(3rd case)
At the end TABLE2 should have
ROLL_NO NAME RESULT
-----------------------------
101 JOHN YES
103 FINN YES
I have tried this oracle query but it is inserting data related to 102 ROLL_NO which I don't need
SELECT T1.ROLL_NO, T1.NAME,
CASE
WHEN T1.CODE <> 'ABC' THEN 'YES'
WHEN T1.CODE = 'ABC' AND T2.TOT_UNITS <> T1.AMOUNT THEN 'YES'
END RESULT
FROM (SELECT DISTINCT ROLL_NO, NAME, AMOUNT, CODE
FROM TABLE1 ) T1
JOIN (SELECT ROLL_NO, SUM(UNITS) AS TOT_UNITS
FROM TABLE1
GROUP BY ROLL_NO) T2 ON T1.ROLL_NO = T2.ROLL_NO
I am not able to figure out how to not insert ROLL_NO 102 record into TABLE2..Can anyone provide better query than this if possible? Thank you
A:
A "better" option is to scan table1 only once.
SQL> insert into table2 (roll_no, name, result)
2 with temp as
3 (select roll_no, name, sum(units) sum_units, amount, code,
4 case when code = 'ABC' and sum(units) = amount then 'NO'
5 when code = 'ABC' and sum(units) <> amount then 'YES'
6 else 'YES'
7 end as result
8 from table1
9 group by roll_no, name, amount, code
10 )
11 select roll_no, name, result
12 from temp
13 where result = 'YES';
2 rows created.
SQL> select * from table2;
ROL NAME RES
--- ---- ---
101 JOHN YES
103 FINN YES
SQL>
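The aggregate/CASE logic above can be cross-checked outside Oracle with the stdlib sqlite3 module (SQLite has no VARCHAR2, but the `WITH` block and grouping behave the same way; the two 'ABC' branches collapse into one `CASE` arm since everything except "code = 'ABC' and the sums match" yields 'YES'):

```python
# Sketch: reproduce the single-scan grouping query on SQLite to confirm
# that only roll_no 101 and 103 qualify for table2.
import sqlite3

con = sqlite3.connect(":memory:")
con.execute(
    "CREATE TABLE table1 (roll_no TEXT, name TEXT, units INT, amount INT, code TEXT)"
)
con.executemany("INSERT INTO table1 VALUES (?,?,?,?,?)", [
    ("101", "JOHN", 1, 6, "ABC"), ("101", "JOHN", 2, 6, "ABC"),
    ("102", "TOMS", 1, 7, "ABC"), ("102", "TOMS", 6, 7, "ABC"),
    ("103", "FINN", 1, 1, "BCD"),
])
rows = con.execute("""
    WITH temp AS (
      SELECT roll_no, name,
             CASE WHEN code = 'ABC' AND SUM(units) = amount THEN 'NO'
                  ELSE 'YES'
             END AS result
      FROM table1
      GROUP BY roll_no, name, amount, code
    )
    SELECT roll_no, name, result FROM temp
    WHERE result = 'YES'
    ORDER BY roll_no
""").fetchall()
print(rows)  # [('101', 'JOHN', 'YES'), ('103', 'FINN', 'YES')]
```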
|
Insert record from one table to another table - Oracle
|
I have a table TABLE1 which has 5 columns (ROLL_NO, NAME, UNITS, CODE, AMOUNT);
CREATE TABLE TABLE1 (ROLL_NO VARCHAR2(3), NAME VARCHAR2(4), UNITS NUMBER, AMOUNT NUMBER, CODE VARCHAR2(3));
------------------------------------------------------------------------------------------
INSERT INTO TABLE1 VALUES ('101', 'JOHN', 1, 6, 'ABC');
INSERT INTO TABLE1 VALUES ('101', 'JOHN', 2, 6, 'ABC');
INSERT INTO TABLE1 VALUES ('102', 'TOMS', 1, 7, 'ABC');
INSERT INTO TABLE1 VALUES ('102', 'TOMS', 6, 7, 'ABC');
INSERT INTO TABLE1 VALUES ('103', 'FINN', 1, 1, 'BCD');
ROLL_NO NAME UNITS AMOUNT CODE
-------------------------------------------------------
101 JOHN 1 6 ABC
101 JOHN 2 6 ABC
-------------------------------------------
102 TOMS 1 7 ABC
102 TOMS 6 7 ABC
103 FINN 1 1 BCD
There is second table TABLE2 where we need to insert data from TABLE1
CREATE TABLE TABLE2 (ROLL_NO VARCHAR2(3), NAME VARCHAR2(4), RESULT VARCHAR2(3));
There are three conditions to insert data into TABLE2
1st case : If CODE is 'ABC' and SUM(UNITS) of particular ROLL_NO is equal to AMOUNT then don't insert data into TABLE2
2nd case : If CODE is 'ABC' and SUM(UNITS) of particular ROLL_NO is not equal to AMOUNT then insert data with RESULT column value as 'YES'
3rd case : If CODE is not 'ABC' then RESULT column will be 'YES'.
Note: NAME, CODE and AMOUNT will be same for particular ROLL_NO though ROLL_NO has multiple UNITS.
Example :
ROLL_NO 102 CODE 'ABC' and two lines with SUM(UNITS) as 7 and its equal to AMOUNT i.e. 7 and (1st case)
ROLL_NO 101 has CODE 'ABC' and two lines with SUM(UNITS) as 3 and its not equal to AMOUNT i.e. 6 (2nd case)
ROLL_NO 103 has CODE 'BCD' which is not equal to 'ABC'(3rd case)
At the end TABLE2 should have
ROLL_NO NAME RESULT
-----------------------------
101 JOHN YES
103 FINN YES
I have tried this oracle query but it is inserting data related to 102 ROLL_NO which I don't need
SELECT T1.ROLL_NO, T1.NAME,
CASE
WHEN T1.CODE <> 'ABC' THEN 'YES'
WHEN T1.CODE = 'ABC' AND T2.TOT_UNITS <> T1.AMOUNT THEN 'YES'
END RESULT
FROM (SELECT DISTINCT ROLL_NO, NAME, AMOUNT, CODE
FROM TABLE1 ) T1
JOIN (SELECT ROLL_NO, SUM(UNITS) AS TOT_UNITS
FROM TABLE1
GROUP BY ROLL_NO) T2 ON T1.ROLL_NO = T2.ROLL_NO
I am not able to figure out how to not insert ROLL_NO 102 record into TABLE2..Can anyone provide better query than this if possible? Thank you
|
[
"A \"better\" option is to scan table1 only once.\nSQL> insert into table2 (roll_no, name, result)\n 2 with temp as\n 3 (select roll_no, name, sum(units) sum_units, amount, code,\n 4 case when code = 'ABC' and sum(units) = amount then 'NO'\n 5 when code = 'ABC' and sum(units) <> amount then 'YES'\n 6 else 'YES'\n 7 end as result\n 8 from table1\n 9 group by roll_no, name, amount, code\n 10 )\n 11 select roll_no, name, result\n 12 from temp\n 13 where result = 'YES';\n\n2 rows created.\n\nSQL> select * from table2;\n\nROL NAME RES\n--- ---- ---\n101 JOHN YES\n103 FINN YES\n\nSQL>\n\n"
] |
[
1
] |
[] |
[] |
[
"oracle"
] |
stackoverflow_0074665755_oracle.txt
|
Q:
Nextjs Image Component - Remote Images
I am very new to the nextjs and have come across the Image component issue. I also checked around and it seems that there are similar questions but none of them has the given scenario.
I am trying to load image from the remote source via Image component. The documentation saying that you should adjust your next.config.js file to allow remote images. Since I am using next 13.0.3 version I am using images.remotePatterns property. Despite this fact I am still getting an error of hostname not being configured.
Can you please suggest what I am doing wrong and how to overcome that problem?
Br,
Aleks.
next.config.js
images: {
remotePatterns: [
{
protocol: 'https',
hostname: 'swiperjs.com',
port: '',
pathname: '/demos/images/**',
}
],
},
Usage:
<Image
src="https://swiperjs.com/demos/images/nature-1.jpg"
className={styles.swiperslideimg}
alt="test" width={400} height={400}/>
Error:
Invalid src prop (https://swiperjs.com/demos/images/nature-1.jpg) on next/image, hostname "swiperjs.com" is not configured under images in your next.config.js
See more info: https://nextjs.org/docs/messages/next-image-unconfigured-host
A:
That configuration should work. Note that you have to restart the development server by rerunning yarn run dev or npm run dev for these configuration changes to take effect.
A:
I faced the same problem and found the solution i was using
Notus NextJS
which had a slightly older version of nextjs so i ran
npm outdated
and updated the package.json and ran
npm run install:clean
|
Nextjs Image Component - Remote Images
|
I am very new to the nextjs and have come across the Image component issue. I also checked around and it seems that there are similar questions but none of them has the given scenario.
I am trying to load image from the remote source via Image component. The documentation saying that you should adjust your next.config.js file to allow remote images. Since I am using next 13.0.3 version I am using images.remotePatterns property. Despite this fact I am still getting an error of hostname not being configured.
Can you please suggest what I am doing wrong and how to overcome that problem?
Br,
Aleks.
next.config.js
images: {
remotePatterns: [
{
protocol: 'https',
hostname: 'swiperjs.com',
port: '',
pathname: '/demos/images/**',
}
],
},
Usage:
<Image
src="https://swiperjs.com/demos/images/nature-1.jpg"
className={styles.swiperslideimg}
alt="test" width={400} height={400}/>
Error:
Invalid src prop (https://swiperjs.com/demos/images/nature-1.jpg) on next/image, hostname "swiperjs.com" is not configured under images in your next.config.js
See more info: https://nextjs.org/docs/messages/next-image-unconfigured-host
|
[
"That configuration should work. Note that you have to restart the development server by rerunning yarn run dev or npm run dev for these configuration changes to take effect.\n",
"I faced the same problem and found the solution i was using\nNotus NextJS\nwhich had a slightly older version of nextjs so i ran\nnpm outdated\n\nand updated the package.json and ran\nnpm run install:clean\n\n"
] |
[
0,
0
] |
[] |
[] |
[
"host",
"next.js"
] |
stackoverflow_0074645480_host_next.js.txt
|
Q:
PHP page not redrawing after redirect to new received from get_file_contents
my code:-
$host = "https://example.org/get.php?";
$newurl = file_get_contents($url);
$newurl = substr($newurl,stripos($newurl,"who"));
$newurl = substr($newurl,0,stripos($newurl,"</"));
// newurl string starts with "who" and ends with "</"
//var_dump($host.$newurl)."<br>"; //shows correct request string
header("Location: " . $host.$newurl);
exit();
new window displays the correct url request in the address bar but isn't redrawn
page source is blank except a single "1" char.
on pressing the resubmit button the page is drawn correctly.
Any help greatly appreciated.
Stve
A:
You can change the file_get_contents($url); to $host because it's not correct source url
A:
Firstly - if other people have a similar problem I would urge them to check that what they think is being offered up to file_get_contents($url) and to header("Location: $host$newurl"); is correct. It was not in my case.
In this case the PHP was interfacing with my Java server, which it did in two ways: the file_get_contents($url); updated the database and returned a new URL, then the header("Location: $host$newurl"); showed the updated page.
Secondly - make sure the server is doing what you expect. In this case the server had closed the socket after the file_get_contents($url); so it could not respond to the header("Location: $host$newurl");
Even old hands fall into the trap of believing their code is correct!
Thanks all for your responses.
Steve
|
PHP page not redrawing after redirect to new URL received from file_get_contents
|
my code:-
$host = "https://example.org/get.php?";
$newurl = file_get_contents($url);
$newurl = substr($newurl,stripos($newurl,"who"));
$newurl = substr($newurl,0,stripos($newurl,"</"));
// newurl string starts with "who" and ends with "</"
//var_dump($host.$newurl)."<br>"; //shows correct request string
header("Location: " . $host.$newurl);
exit();
new window displays the correct url request in the address bar but isn't redrawn
page source is blank except a single "1" char.
on pressing the resubmit button the page is drawn correctly.
Any help greatly appreciated.
Steve
|
[
"You can change the file_get_contents($url); to $host because it's not correct source url\n",
"Firstly - If other people have a similar problem I would urge them to check that what they think is being offered up to file_get_contents($url) and to header(\"Location: $host$newurl\"); is correct. Not is my case.\nIs this case the PHP was interfacing with my java server which it did in two ways the file_get_contents($url); updated the database and returned a new url. Then the header(\"Location: $host$newurl\"); showed the updated page.\nSecondly - make sure the server is doing what you expect. In this case the server had closed the socket after the file_get_contents($url); so could not respond to the header(\"Location: $host$newurl\");\nEven old hands fall into the trap of believing their code is correct!\nThanks all for you reponses.\nSteve\n"
] |
[
0,
0
] |
[] |
[] |
[
"html",
"php"
] |
stackoverflow_0074605492_html_php.txt
|
Q:
NameError: name 'username_entry' is not defined
So I'm trying to make a login GUI using customtkinter.
I want to have a window with 2 buttons first: Login and Exit.
Then when I press Login, it should open another py script with the login label.
If I execute the second script it's all right, but if I try from the first one I get this error:
Exception in Tkinter callback
Traceback (most recent call last):
File "C:\Users\denis\AppData\Local\Programs\Python\Python311\Lib\tkinter\__init__.py", line 1948, in __call__
return self.func(*args)
^^^^^^^^^^^^^^^^
File "D:\test\venv\Lib\site-packages\customtkinter\windows\widgets\ctk_button.py", line 527, in _clicked
self._command()
File "<string>", line 21, in login
NameError: name 'username_entry' is not defined
This is the first code:
`
import tkinter
import customtkinter
customtkinter.set_appearance_mode("System")
customtkinter.set_default_color_theme("dark-blue")
app = customtkinter.CTk() # create CTk window like you do with the Tk window
app.title("Menu")
app.geometry("240x240")
app.config(bg="#242320")
def button_function():
exec(open('D:\test\login.py').read())
def Close():
app.destroy()
font1=('Arial', 15, 'bold')
button = customtkinter.CTkButton(master=app, text="Login", font=font1, command=button_function)
button.place(relx=0.5, rely=0.4, anchor=tkinter.CENTER)
button = customtkinter.CTkButton(master=app, text="Exit", font=font1, command=Close)
button.place(relx=0.5, rely=0.6, anchor=tkinter.CENTER)
app.mainloop()
`
and this is the login code:
`
import customtkinter
from tkinter import *
from tkinter import messagebox
app = customtkinter.CTk()
app.title("Login")
app.geometry("350x200")
app.config(bg="#242320")
font1=('Arial', 15, 'bold')
username="hello"
password="123"
trials=0
def login():
global username
global password
global trials
written_username = username_entry.get()
written_password = password_entry.get()
if(written_username == '' or written_password==''):
messagebox.showwarning(title="Error", message="Enter your username and password.")
elif(written_username==username and written_password==password):
new_window=Toplevel(app)
new_window.geometry("350x200")
new_window.config(bg="#242320")
welcome_label=customtkinter.CTkLabel(new_window, text="Welcome...", font=font1, text_color="#FFFFFF")
welcome_label.place(x=100, y=100)
elif((written_username != username or written_password != password) and trials<3):
messagebox.showerror(title="Error", message="Your username or password are not correct.")
trials=trials + 1
if (trials != 3):
trials_label = customtkinter.CTkLabel(app, text=f"You have {3-trials} trials", font=font1, text_color="#FFFFFF")
trials_label.place(x=100, y=160)
if(trials==3):
login_button.destroy()
locked_label = customtkinter.CTkLabel(app, text="Your account is locked.", font=font1, text_color="#FFFFFF")
locked_label.place(x=100, y=160)
username_label=customtkinter.CTkLabel(app, text="Username: ",font=font1, text_color="#FFFFFF")
username_label.place(x=10, y=25)
password_label=customtkinter.CTkLabel(app, text="Password: ",font=font1, text_color="#FFFFFF")
password_label.place(x=10, y=75)
username_entry=customtkinter.CTkEntry(app,fg_color="#FFFFFF", font=font1, text_color="#000000", border_color="#FFFFFF", width= 200, height= 1)
username_entry.place(x=100, y=25)
password_entry=customtkinter.CTkEntry(app,fg_color="#FFFFFF", font=font1, text_color="#000000", border_color="#FFFFFF", show="*", width= 200, height= 1)
password_entry.place(x=100, y=75)
login_button=customtkinter.CTkButton(app, command=login, text="Login", font=font1, text_color="#FFFFFF", fg_color="#1f538d", hover_color="#14375e", width=50)
login_button.place(x=165, y=120)
app.mainloop()
`
Tried to do a login box and got this error. Idk how to resolve it
A:
The username_entry widget is not defined in the scope where the login function runs. Pass it (and password_entry) to the function as arguments and use them there instead of relying on globals.
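To see why the second script only fails when launched from the first one, here is a minimal, tkinter-free sketch of the exec() scoping problem; the script body below is a simplified stand-in for login.py, not the original code:

```python
# Minimal reproduction of why exec(open(...).read()) inside button_function
# raises NameError: top-level assignments in the exec'ed code land in the
# calling function's *locals*, while a function defined by exec looks names
# up in *globals* at call time -- so login() cannot see username_entry.
SECOND_SCRIPT = """
def login():
    return username_entry        # resolved in globals when login() runs

username_entry = "hello"         # stored in button_function's locals instead
login()
"""

def button_function():
    try:
        exec(SECOND_SCRIPT)
        return "ok"
    except NameError as exc:
        return str(exc)

print(button_function())  # name 'username_entry' is not defined

# One workaround: give exec its own single namespace, so the script's
# top-level names and its functions' globals are the same dict.
def button_function_fixed():
    exec(SECOND_SCRIPT, {})
    return "ok"

print(button_function_fixed())  # ok
```

Running the second script in its own process (or importing it) avoids the problem entirely, which is why it works when executed directly.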
|
NameError: name 'username_entry' is not defined
|
So I'm trying to make a login GUI using customtkinter.
I want to have a window with 2 buttons first: Login and Exit.
Then when I press Login, it should open another py script with the login label.
If I execute the second script it's all right, but if I try from the first one I get this error:
Exception in Tkinter callback
Traceback (most recent call last):
File "C:\Users\denis\AppData\Local\Programs\Python\Python311\Lib\tkinter\__init__.py", line 1948, in __call__
return self.func(*args)
^^^^^^^^^^^^^^^^
File "D:\test\venv\Lib\site-packages\customtkinter\windows\widgets\ctk_button.py", line 527, in _clicked
self._command()
File "<string>", line 21, in login
NameError: name 'username_entry' is not defined
This is the first code:
`
import tkinter
import customtkinter
customtkinter.set_appearance_mode("System")
customtkinter.set_default_color_theme("dark-blue")
app = customtkinter.CTk() # create CTk window like you do with the Tk window
app.title("Menu")
app.geometry("240x240")
app.config(bg="#242320")
def button_function():
exec(open('D:\test\login.py').read())
def Close():
app.destroy()
font1=('Arial', 15, 'bold')
button = customtkinter.CTkButton(master=app, text="Login", font=font1, command=button_function)
button.place(relx=0.5, rely=0.4, anchor=tkinter.CENTER)
button = customtkinter.CTkButton(master=app, text="Exit", font=font1, command=Close)
button.place(relx=0.5, rely=0.6, anchor=tkinter.CENTER)
app.mainloop()
`
and this is the login code:
`
import customtkinter
from tkinter import *
from tkinter import messagebox
app = customtkinter.CTk()
app.title("Login")
app.geometry("350x200")
app.config(bg="#242320")
font1=('Arial', 15, 'bold')
username="hello"
password="123"
trials=0
def login():
global username
global password
global trials
written_username = username_entry.get()
written_password = password_entry.get()
if(written_username == '' or written_password==''):
messagebox.showwarning(title="Error", message="Enter your username and password.")
elif(written_username==username and written_password==password):
new_window=Toplevel(app)
new_window.geometry("350x200")
new_window.config(bg="#242320")
welcome_label=customtkinter.CTkLabel(new_window, text="Welcome...", font=font1, text_color="#FFFFFF")
welcome_label.place(x=100, y=100)
elif((written_username != username or written_password != password) and trials<3):
messagebox.showerror(title="Error", message="Your username or password are not correct.")
trials=trials + 1
if (trials != 3):
trials_label = customtkinter.CTkLabel(app, text=f"You have {3-trials} trials", font=font1, text_color="#FFFFFF")
trials_label.place(x=100, y=160)
if(trials==3):
login_button.destroy()
locked_label = customtkinter.CTkLabel(app, text="Your account is locked.", font=font1, text_color="#FFFFFF")
locked_label.place(x=100, y=160)
username_label=customtkinter.CTkLabel(app, text="Username: ",font=font1, text_color="#FFFFFF")
username_label.place(x=10, y=25)
password_label=customtkinter.CTkLabel(app, text="Password: ",font=font1, text_color="#FFFFFF")
password_label.place(x=10, y=75)
username_entry=customtkinter.CTkEntry(app,fg_color="#FFFFFF", font=font1, text_color="#000000", border_color="#FFFFFF", width= 200, height= 1)
username_entry.place(x=100, y=25)
password_entry=customtkinter.CTkEntry(app,fg_color="#FFFFFF", font=font1, text_color="#000000", border_color="#FFFFFF", show="*", width= 200, height= 1)
password_entry.place(x=100, y=75)
login_button=customtkinter.CTkButton(app, command=login, text="Login", font=font1, text_color="#FFFFFF", fg_color="#1f538d", hover_color="#14375e", width=50)
login_button.place(x=165, y=120)
app.mainloop()
`
Tried to do a login box and got this error. Idk how to resolve it
|
[
"Obviously, the username_entry is not defined in the login function body. please add it to the function arguments and then use it properly.\n"
] |
[
1
] |
[] |
[] |
[
"python"
] |
stackoverflow_0074666259_python.txt
|
Q:
Python get key of a value inside a nested dictionary
Let's say I have a dictionary called my_dict:
my_dict = {'a': {'spam': {'foo': None, 'bar': None, 'baz': None},'eggs': None}, 'b': {'ham': None}}
Then if I input spam, it should return a, and if I input bar it should return spam. If I input b, it should return None. Basically getting the parent of the dictionary.
How would I go about doing this?
A:
A simple recursive function, which returns the current key when needle in v is true; needle in v simply tests whether the key exists in the associated value:
my_dict = {'a': {'spam': {'foo': None, 'bar': None, 'baz': None},'eggs': None}, 'b': {'ham': None}}
def get_parent_key(d: dict, needle: str):
for k, v in d.items():
if isinstance(v, dict):
if needle in v:
return k
if found := get_parent_key(v, needle):
return found
print(get_parent_key(my_dict, 'bar'))
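For illustration, the function can be exercised against all three inputs from the question (restated here so the snippet runs on its own; the := walrus operator needs Python 3.8+):

```python
# Same dict and function as in the answer above, restated to be self-contained.
my_dict = {'a': {'spam': {'foo': None, 'bar': None, 'baz': None}, 'eggs': None},
           'b': {'ham': None}}

def get_parent_key(d: dict, needle: str):
    # Walk the tree; a key's "parent" is the key whose value contains it.
    for k, v in d.items():
        if isinstance(v, dict):
            if needle in v:
                return k
            if found := get_parent_key(v, needle):
                return found

print(get_parent_key(my_dict, 'spam'))  # a
print(get_parent_key(my_dict, 'bar'))   # spam
print(get_parent_key(my_dict, 'b'))     # None (top-level keys have no parent)
```

Note the implicit return None when the loop finishes without a match, which is exactly the behavior the question asks for on top-level keys.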
|
Python get key of a value inside a nested dictionary
|
Let's say I have a dictionary called my_dict:
my_dict = {'a': {'spam': {'foo': None, 'bar': None, 'baz': None},'eggs': None}, 'b': {'ham': None}}
Then if I input spam, it should return a, and if I input bar it should return spam. If I input b, it should return None. Basically getting the parent of the dictionary.
How would I go about doing this?
|
[
"A simple recursive function, which returns the current key if needle in v is true; needle in v simply testing if the key exists in the associated value:\nmy_dict = {'a': {'spam': {'foo': None, 'bar': None, 'baz': None},'eggs': None}, 'b': {'ham': None}}\n\ndef get_parent_key(d: dict, needle: str):\n for k, v in d.items():\n if isinstance(v, dict):\n if needle in v:\n return k\n \n if found := get_parent_key(v, needle):\n return found\n \nprint(get_parent_key(my_dict, 'bar'))\n\n"
] |
[
0
] |
[
"To check if a key exists in a dictionary and get its corresponding value, you can use the in keyword and the .get() method.\nHere's an example:\nmy_dict = {'a': {'spam': {'foo': None, 'bar': None, 'baz': None},'eggs'}, 'b': {'ham'}}\n\n# Check if 'spam' is a key in my_dict and get its value\nif 'spam' in my_dict:\n print(my_dict['spam']) # Output: {'foo': None, 'bar': None, 'baz': None}\nelse:\n print('Key not found')\n\n# Check if 'bar' is a key in my_dict and get its value\nif 'bar' in my_dict:\n print(my_dict['bar']) # Output: Key not found\nelse:\n print('Key not found')\n\n# Use .get() to check if 'b' is a key in my_dict and get its value\nvalue = my_dict.get('b')\nif value is not None:\n print(value) # Output: {'ham'}\nelse:\n print('Key not found')\n\n"
] |
[
-1
] |
[
"dictionary",
"nested",
"python"
] |
stackoverflow_0074666017_dictionary_nested_python.txt
|
Q:
database design (joining 3 tables together)
My goal is to create a web app that show elections results from my country.
The data is the results for every candidates in every city for every election.
An election has many candidates and many cities.
A candidate has many elections and many cities.
A city has many elections and many candidates.
For the 2nd round of the last presidential election:
City
inscrits
votants
exprime
candidate1
score C1
candidate2
score C2
Dijon
129000
100000
80000
Macron
50000
Le Pen
30000
Lyon
1000000
900000
750000
Macron
450000
Le Pen
300000
How can I join those 3 tables together?
Is it possible to create a join table between the three, like this?
create_table "results", force: :cascade do |t|
t.integer "election_id", null: false
t.integer "candidate_id", null: false
t.integer "city_id", null: false
t.datetime "created_at", null: false
t.datetime "updated_at", null: false
t.index ["city_id"], name: "index_results_on_city_id"
t.index ["candidate_id"], name: "index_results_on_candidate_id"
t.index ["election_id"], name: "index_results_on_election_id"
end
But in this case, where can I add the city infos for election? (Column 2, 3, 4 of my data example, i.e: in this city, for this election XXX people voted, XXX didn't vote.)
I came up with this database schema:
my database schema
This will not work because I will not be able to access the result of a candidate in a specific city for a specific election. It looks like there is no connection between cities and candidates.
A:
To actually tie these models together and record the data required, you need a series of tables that record the election results at each level you're interested in:
# rails g model national_result candidate:belongs_to election:belongs_to votes:integer percentage:decimal
class NationalResult < ApplicationRecord
belongs_to :candidate
belongs_to :election
delegate :name, to: :candidate,
prefix: true
end
# rails g model city_result city:belongs_to candidate:belongs_to election:belongs_to votes:integer percentage:decimal
class CityResult < ApplicationRecord
belongs_to :city
belongs_to :candidate
belongs_to :election
delegate :name, to: :candidate,
prefix: true
end
Instead of having C1 and C2 columns you should use one row per candidate to record their result. That will let you use the same table layout even if there are more than two candidates (like in a primary) and avoids the problem of figuring out which column a candidate is in. Use foreign keys and record the primary key instead of filling your table with duplicates of the names of the candidates, which can easily become denormalized.
While you might naively think "But I don't need NationalResult, I can just sum up all the CityResults!" - that process would actually expose any problems in your data set and very likely be quite expensive. Get the data from a reputable source instead.
You can then create the has_many associations on the other side:
class Candidate < ApplicationRecord
  has_many :city_results
  has_many :national_results
end
class Election < ApplicationRecord
  has_many :city_results
  has_many :national_results
end
class City < ApplicationRecord
  has_many :city_results
end
Keeping track of the number of eligible voters per election/city will most likely require another table.
|
database design (joining 3 tables together)
|
My goal is to create a web app that show elections results from my country.
The data is the results for every candidates in every city for every election.
An election has many candidates and many cities.
A candidate has many elections and many cities.
A city has many elections and many candidates.
For the 2nd round of the last presidential election:
City
inscrits
votants
exprime
candidate1
score C1
candidate2
score C2
Dijon
129000
100000
80000
Macron
50000
Le Pen
30000
Lyon
1000000
900000
750000
Macron
450000
Le Pen
300000
How can I join those 3 tables together?
Is it possible to create a join table between the three, like this?
create_table "results", force: :cascade do |t|
t.integer "election_id", null: false
t.integer "candidate_id", null: false
t.integer "city_id", null: false
t.datetime "created_at", null: false
t.datetime "updated_at", null: false
t.index ["city_id"], name: "index_results_on_city_id"
t.index ["candidate_id"], name: "index_results_on_candidate_id"
t.index ["election_id"], name: "index_results_on_election_id"
end
But in this case, where can I add the city infos for election? (Column 2, 3, 4 of my data example, i.e: in this city, for this election XXX people voted, XXX didn't vote.)
I came up with this database schema:
my database schema
This will not work because I will not be able to access the result of a candidate in a specific city for a specific election. It looks like there is no connection between cities and candidates.
|
[
"To actually tie these models together and record the data required you need a series of tables that record the election results at each level your interested in:\n# rails g model national_result candidate:belongs_to election:belongs_to votes:integer percentage:decimal\nclass NationalResult < ApplicationRecord\n belongs_to :candidate\n belongs_to :election\n delegate :name, to: :candidate,\n prefix: true\nend\n\n# rails g model city_result candidate:belongs_to election:belongs_to votes:integer percentage:decimal\nclass CityResult < ApplicationRecord\n belongs_to :city\n belongs_to :candidate\n belongs_to :election\n delegate :name, to: :candidate,\n prefix: true\nend\n\nInstead of having C1 and C2 columns you should use one row per candidate instead to record their result. That will let you use the same table layout even if there are more then two candidates (like in a primary) and avoids the problem of figuring out which column a candidate is in. Use foreign keys and record the primary key instead of filling your table with duplicates of the names of the candidates which can easily become denormalized.\nWhile you might naively think \"But I don't need NationalResult, I can just sum up all the LocalResult's!\" - that process would actually expose any problems in your data set and very likely be quite expensive. Get the data from a repubable source instead.\nYou can then create the has_many assocations on the other side:\nclass Canditate < ApplicationRecord\n has_many :local_results\n has_many :national_results \nend\n\nclass Election < ApplicationRecord\n has_many :local_results\n has_many :national_results \nend\n\nclass City < ApplicationRecord\n has_many :local_results\nend \n\nKeeping track of the number of eligable voters per election/city will most likely require another table.\n"
] |
[
1
] |
[] |
[] |
[
"database",
"database_design",
"join",
"ruby_on_rails",
"sql"
] |
stackoverflow_0074658726_database_database_design_join_ruby_on_rails_sql.txt
|
Q:
Beginner to Unity Hub Error Failed to resolve project template:
Hello, I'm a total beginner to Unity and just installed it. Every time I try to create a new 3D project, an error window pops up: "Failed to resolve project template: [com.unity.template.3d] is not a valid project template." Any suggestions on what to do? Unity 3.3.0
I tried googling the answer but it seems that the Error for everyone else is different than mine.
A:
Try completely uninstalling unity hub and any installs of unity then reinstalling it. Also, check that you have the appropriate C# sdks installed (use Visual Studio installer on Windows, Visual Studio for Mac on Mac).
A:
As Isaac said, try uninstalling Unity and reinstalling. You could also consider installing another version of Unity; even if this one isn't known for build problems, I had the same issue earlier and that version of Unity just didn't work on my computer.
|
Beginner to Unity Hub Error Failed to resolve project template:
|
Hello, I'm a total beginner to Unity and just installed it. Every time I try to create a new 3D project, an error window pops up: "Failed to resolve project template: [com.unity.template.3d] is not a valid project template." Any suggestions on what to do? Unity 3.3.0
I tried googling the answer but it seems that the Error for everyone else is different than mine.
|
[
"Try completely uninstalling unity hub and any installs of unity then reinstalling it. Also, check that you have the appropriate C# sdks installed (use Visual Studio installer on Windows, Visual Studio for Mac on Mac).\n",
"As Isaac said, try uninstall Unity and reinstall, you could also consideer installing an other version of Unity, even if this one isn't known for building problems, I Had same issue earlier the version of Unity just didn't work on my computer.\n"
] |
[
0,
0
] |
[] |
[] |
[
"augmented_reality",
"unity3d"
] |
stackoverflow_0074665422_augmented_reality_unity3d.txt
|
Q:
More than one file was found with OS independent path 'lib/armeabi-v7a/libfbjni.so'
* What went wrong:
Execution failed for task ':app:mergeDebugNativeLibs'.
> A failure occurred while executing com.android.build.gradle.internal.tasks.Workers$ActionFacade
> More than one file was found with OS independent path 'lib/armeabi-v7a/libfbjni.so'. If you are using jniLibs and CMake IMPORTED targets, see https://developer.android.com/studio/preview/features#automatic_packaging_of_prebuilt_dependencies_used_by_cmake
In app/build.gradle file
packagingOptions {
pickFirst 'lib/x86/libc++_shared.so'
pickFirst 'lib/x86_64/libc++_shared.so'
pickFirst 'lib/arm64-v8a/libc++_shared.so'
pickFirst 'lib/armeabi-v7a/libc++_shared.so'
}
still giving the error....
Tried
In app/build.gradle file
packagingOptions {
pickFirst 'lib/x86/libfbjni.so'
pickFirst 'lib/x86_64/libfbjni.so'
pickFirst 'lib/arm64-v8a/libfbjni.so'
pickFirst 'lib/armeabi-v7a/libfbjni.so'
}
but it gives the following error
More than one file was found with OS independent path 'lib/armeabi-v7a/libc++_shared.so'.
A:
==OLD SOLUTION==
The fix for current react-native
We're suggesting all users of React Native to apply this fix to your top level build.gradle file as follows:
// Top-level build file where you can add configuration options common to all sub-projects/modules.
buildscript {
// ...
}
allprojects {
repositories {
+ exclusiveContent {
+ // We get React Native's Android binaries exclusively through npm,
+ // from a local Maven repo inside node_modules/react-native/.
+ // (The use of exclusiveContent prevents looking elsewhere like Maven Central
+ // and potentially getting a wrong version.)
+ filter {
+ includeGroup "com.facebook.react"
+ }
+ forRepository {
+ maven {
+ // NOTE: if you are in a monorepo, you may have "$rootDir/../../../node_modules/react-native/android"
+ url "$rootDir/../node_modules/react-native/android"
+ }
+ }
+ }
// ...
}
}
What this fix will do is apply an exclusiveContent resolution rule that will force the resolution of React Native Android library, to use the one inside node_modules.
Once you update your app to React Native v0.71.0, this fix won't be needed anymore.
==NEW SOLUTION==
We have prepared releases for all the main versions of react-native with a hotfix:
0.70.5: https://github.com/facebook/react-native/releases/tag/v0.70.5
0.69.7: https://github.com/facebook/react-native/releases/tag/v0.69.7
0.68.5: https://github.com/facebook/react-native/releases/tag/v0.68.5
0.67.5: https://github.com/facebook/react-native/releases/tag/v0.67.5
0.66.5: https://github.com/facebook/react-native/releases/tag/v0.66.5
0.65.3: https://github.com/facebook/react-native/releases/tag/v0.65.3
0.64.4: https://github.com/facebook/react-native/releases/tag/v0.64.4
0.63.5: https://github.com/facebook/react-native/releases/tag/v0.63.5
By updating to these patch versions, your Android build should start working again.
To do so, in your package.json change react-native's version to the relevant new patch (ex. if you are on 0.64.3, change to 0.64.4) and run yarn install. No other changes should be necessary, but you might want to clean your android artifacts with a cd android && ./gradlew clean before trying to re-run your Android app.
== Fix for older react-native (< 0.63) ==
The fix above only works on gradle 6.2 and higher. Older react-native used older gradle.
You may determine your gradle version by looking in your /android/gradle/wrapper/gradle-wrapper.properties file.
If you are on an older version of react-native (for example 0.63 or earlier) that uses gradle version 6.1 or below, you must use a different workaround as gradle 6.1 does not support exclusiveContent.
Add this in the allprojects area of your android/build.gradle file.
def REACT_NATIVE_VERSION = new File(['node', '--print',"JSON.parse(require('fs').readFileSync(require.resolve('react-native/package.json'), 'utf-8')).version"].execute(null, rootDir).text.trim())
allprojects {
    configurations.all {
        resolutionStrategy {
            // Remove this override in 0.65+, as a proper fix is included in react-native itself.
            force "com.facebook.react:react-native:" + REACT_NATIVE_VERSION
        }
    }
}
Instead of using exclusiveContent, the hotfix must force the version of React Native. The recommended hotfix shells out to node to read your current version of react-native. If you hard code the version of react-native, when you upgrade your project in the future, your build will fail if you forget to remove this hotfix.
Note that this fix is fragile as the location of your package.json could be different if you are in a monorepo, and node might not be available if you use a node package manager like nvm.
Source: https://github.com/facebook/react-native/issues/35210
A:
Under allprojects in your build.gradle, inside repositories, add the code below.
It worked for an old project that I have been working on.
exclusiveContent {
filter {
includeGroup "com.facebook.react"
}
forRepository {
maven {
// NOTE: if you are in a monorepo, you may have "$rootDir/../../../node_modules/react-native/android"
url "$rootDir/../node_modules/react-native/android"
}
}
}
A:
See the section below if you are on an older react-native.
Fix for older react-native (< 0.63)
The fix above only works on gradle 6.2 and higher. Older react-native used older gradle.
You may determine your gradle version by looking in your /android/gradle/wrapper/gradle-wrapper.properties file.
If you are on an older version of react-native (for example 0.63 or earlier) that uses gradle version 6.1 or below, you must use a different workaround as gradle 6.1 does not support exclusiveContent.
Add this in the allprojects area of your android/build.gradle file.
def REACT_NATIVE_VERSION = new File(['node', '--print',"JSON.parse(require('fs').readFileSync(require.resolve('react-native/package.json'), 'utf-8')).version"].execute(null, rootDir).text.trim())
allprojects {
    configurations.all {
        resolutionStrategy {
            // Remove this override in 0.65+, as a proper fix is included in react-native itself.
            force "com.facebook.react:react-native:" + REACT_NATIVE_VERSION
        }
    }
}
A:
def REACT_NATIVE_VERSION = new File(['node', '--print',"JSON.parse(require('fs').readFileSync(require.resolve('react-native/package.json'), 'utf-8')).version"].execute(null, rootDir).text.trim())
allprojects {
configurations.all {
resolutionStrategy {
// Remove this override in 0.65+, as a proper fix is included in react-native itself.
force "com.facebook.react:react-native:" + REACT_NATIVE_VERSION
}
}
}
This worked for me. Thanks.
A:
Go to yourProject/app/build.gradle, inside android{}.
If you look at the error message closely, you will realise that the library is not among the pickFirst list, i.e. lib/armeabi-v7a/libc++_shared.
But remember, you will have to include all the React Native architecture options in the pickFirst list,
i.e. armeabi-v7a, arm64-v8a, x86, x86_64.
Do this:
android {
packagingOptions {
pickFirst 'lib/armeabi-v7a/libc++_shared.so'
pickFirst 'lib/arm64-v8a/libc++_shared.so'
pickFirst 'lib/x86/libc++_shared.so'
pickFirst 'lib/x86_64/libc++_shared.so'
}
}
Or
You can just do this:
This is also good if you don't know the name of the library
android {
packagingOptions {
pickFirst '**/*.so'
}
}
A:
This error is occurring because the packagingOptions block in your app/build.gradle file is not handling the case where there are multiple files with the same OS-independent path, such as the lib/armeabi-v7a/libfbjni.so and lib/armeabi-v7a/libc++_shared.so files in this case.
To fix this issue, you can add the exclude property to your packagingOptions block, and specify the file paths that you want to exclude from the packaging process. This will ensure that only a single file with each OS-independent path is included in the final APK.
For example, your app/build.gradle file could be updated as follows:
packagingOptions {
pickFirst 'lib/x86/libc++_shared.so'
pickFirst 'lib/x86_64/libc++_shared.so'
pickFirst 'lib/arm64-v8a/libc++_shared.so'
pickFirst 'lib/armeabi-v7a/libc++_shared.so'
exclude 'lib/armeabi-v7a/libfbjni.so'
}
With these changes, the lib/armeabi-v7a/libfbjni.so file will be excluded from the packaging process, and the build should succeed without any errors.
|
More than one file was found with OS independent path 'lib/armeabi-v7a/libfbjni.so'
|
* What went wrong:
Execution failed for task ':app:mergeDebugNativeLibs'.
> A failure occurred while executing com.android.build.gradle.internal.tasks.Workers$ActionFacade
> More than one file was found with OS independent path 'lib/armeabi-v7a/libfbjni.so'. If you are using jniLibs and CMake IMPORTED targets, see https://developer.android.com/studio/preview/features#automatic_packaging_of_prebuilt_dependencies_used_by_cmake
In app/build.gradle file
packagingOptions {
pickFirst 'lib/x86/libc++_shared.so'
pickFirst 'lib/x86_64/libc++_shared.so'
pickFirst 'lib/arm64-v8a/libc++_shared.so'
pickFirst 'lib/armeabi-v7a/libc++_shared.so'
}
still giving the error....
Tried
In app/build.gradle file
packagingOptions {
pickFirst 'lib/x86/libfbjni.so'
pickFirst 'lib/x86_64/libfbjni.so'
pickFirst 'lib/arm64-v8a/libfbjni.so'
pickFirst 'lib/armeabi-v7a/libfbjni.so'
}
but it gives the following error
More than one file was found with OS independent path 'lib/armeabi-v7a/libc++_shared.so'.
|
[
"==OLD SOLUTION==\nThe fix for current react-native\nWe're suggesting all users of React Native to apply this fix to your top level build.gradle file as follows:\n// Top-level build file where you can add configuration options common to all sub-projects/modules.\n\nbuildscript {\n // ...\n}\n\n\nallprojects {\n repositories {\n+ exclusiveContent {\n+ // We get React Native's Android binaries exclusively through npm,\n+ // from a local Maven repo inside node_modules/react-native/.\n+ // (The use of exclusiveContent prevents looking elsewhere like Maven Central\n+ // and potentially getting a wrong version.)\n+ filter {\n+ includeGroup \"com.facebook.react\"\n+ }\n+ forRepository {\n+ maven {\n+ // NOTE: if you are in a monorepo, you may have \"$rootDir/../../../node_modules/react-native/android\"\n+ url \"$rootDir/../node_modules/react-native/android\"\n+ }\n+ }\n+ }\n // ...\n }\n}\n\nWhat this fix will do is apply an exclusiveContent resolution rule that will force the resolution of React Native Android library, to use the one inside node_modules.\nOnce you update your app to React Native v0.71.0, this fix won't be needed anymore.\n==NEW SOLUTION==\nWe have prepared releases for all the main versions of react-native with an hotfix:\n 0.70.5: https://github.com/facebook/react-native/releases/tag/v0.70.5\n️ 0.69.7: https://github.com/facebook/react-native/releases/tag/v0.69.7\n 0.68.5: https://github.com/facebook/react-native/releases/tag/v0.68.5\n️ 0.67.5: https://github.com/facebook/react-native/releases/tag/v0.67.5\n️ 0.66.5: https://github.com/facebook/react-native/releases/tag/v0.66.5\n️ 0.65.3: https://github.com/facebook/react-native/releases/tag/v0.65.3\n️ 0.64.4: https://github.com/facebook/react-native/releases/tag/v0.64.4\n️ 0.63.5: https://github.com/facebook/react-native/releases/tag/v0.63.5\nBy updating to these patch versions, your Android build should start working again.\nTo do so, in your package.json change react-native's version to the relevant 
new patch (ex. if you are on 0.64.3, change to 0.64.4) and run yarn install. No other changes should be necessary, but you might want to clean your android artifacts with a cd android && ./gradlew clean before trying to re-run your Android app.\n== Fix for older react-native (< 0.63) ==\nThe fix above only works on gradle 6.2 and higher. Older react-native used older gradle.\nYou may determine your gradle version by looking in your /android/gradle/wrapper/gradle-wrapper.properties file.\nIf you are on an older version of react-native (for example 0.63 or earlier) that uses gradle version 6.1 or below, you must use a different workaround as gradle 6.1 does not support exclusiveContent.\nAdd this in the allprojects area of your android/buld.gradle file.\ndef REACT_NATIVE_VERSION = new File(['node', '--print',\"JSON.parse(require('fs').readFileSync(require.resolve('react-native/package.json'), 'utf-8')).version\"].execute(null, rootDir).text.trim())\n\nallprojects {\n configurations.all {\n resolutionStrategy {\n // Remove this override in 0.65+, as a proper fix is included in react-native itself.\n force \"com.facebook.react:react-native:\" + REACT_NATIVE_VERSION\n }\n }\n\nInstead of using exclusiveContent, the hotfix must force the version of React Native. The recommended hotfix shells out to node to read your current version of react-native. If you hard code the version of react-native, when you upgrade your project in the future, your build will fail if you forget to remove this hotfix.\nNote that this fix is fragile as the location of your package.json could be different if you are in a monorepo, and node might not be available if you use a node package manager like nvm.\n\nSource: https://github.com/facebook/react-native/issues/35210\n\n",
"Under allProjects-build.gradle-repositories add below code.\nIt worked for an old project that I have been working on.\n exclusiveContent {\n filter {\n includeGroup \"com.facebook.react\"\n }\n forRepository {\n maven {\n // NOTE: if you are in a monorepo, you may have \"$rootDir/../../../node_modules/react-native/android\"\n url \"$rootDir/../node_modules/react-native/android\"\n }\n }\n }\n\n",
"See below article if you are on older react-native\nFix for older react-native (< 0.63)\nThe fix above only works on gradle 6.2 and higher. Older react-native used older gradle.\nYou may determine your gradle version by looking in your /android/gradle/wrapper/gradle-wrapper.properties file.\nIf you are on an older version of react-native (for example 0.63 or earlier) that uses gradle version 6.1 or below, you must use a different workaround as gradle 6.1 does not support exclusiveContent.\nAdd this in the allprojects area of your android/buld.gradle file.\ndef REACT_NATIVE_VERSION = new File(['node', '--print',\"JSON.parse(require('fs').readFileSync(require.resolve('react-native/package.json'), 'utf-8')).version\"].execute(null, rootDir).text.trim())\n\nallprojects {\n configurations.all {\n resolutionStrategy {\n // Remove this override in 0.65+, as a proper fix is included in react-native itself.\n force \"com.facebook.react:react-native:\" + REACT_NATIVE_VERSION\n }\n }\n\n",
"def REACT_NATIVE_VERSION = new File(['node', '--print',\"JSON.parse(require('fs').readFileSync(require.resolve('react-native/package.json'), 'utf-8')).version\"].execute(null, rootDir).text.trim())\n\nallprojects {\n configurations.all {\n resolutionStrategy {\n // Remove this override in 0.65+, as a proper fix is included in react-native itself.\n force \"com.facebook.react:react-native:\" + REACT_NATIVE_VERSION\n }\n }\n}\n\nThis worked for me Thanks.\n",
"Goto to yourProject/app/build.gradle inside android{}\nIf you look at the error message very well, you will realise that the library is not among the pickFirst list. i.e lib/armeabi-v7a/libc++_shared\nBut remember, you will have to include all the react architectures options in the pickFirst list.\ni.e armeabi-v7a, arm64-v8a, x86, x86_64.\nDo this:\nandroid { \n packagingOptions { \n pickFirst 'lib/armeabi-v7a/libc++_shared.so' \n pickFirst 'lib/arm64-v8a/libc++_shared.so' \n pickFirst 'lib/x86/libc++_shared.so' \n pickFirst 'lib/x86_64/libc++_shared.so' \n } \n }\n\nOr\nYou can just do this:\nThis is also good if you don't know the name of the library\nandroid { \n packagingOptions { \n pickFirst '**/*.so' \n } \n }\n\n",
"This error is occurring because the packagingOptions block in your app/build.gradle file is not handling the case where there are multiple files with the same OS-independent path, such as the lib/armeabi-v7a/libfbjni.so and lib/armeabi-v7a/libc++_shared.so files in this case.\nTo fix this issue, you can add the exclude property to your packagingOptions block, and specify the file paths that you want to exclude from the packaging process. This will ensure that only a single file with each OS-independent path is included in the final APK.\nFor example, your app/build.gradle file could be updated as follows:\npackagingOptions {\n pickFirst 'lib/x86/libc++_shared.so'\n pickFirst 'lib/x86_64/libc++_shared.so'\n pickFirst 'lib/arm64-v8a/libc++_shared.so'\n pickFirst 'lib/armeabi-v7a/libc++_shared.so'\n\n exclude 'lib/armeabi-v7a/libfbjni.so'\n}\n\nWith these changes, the lib/armeabi-v7a/libfbjni.so file will be excluded from the packaging process, and the build should succeed without any errors.\n"
] |
[
49,
9,
2,
0,
0,
0
] |
[] |
[] |
[
"android",
"build.gradle",
"react_native"
] |
stackoverflow_0074327301_android_build.gradle_react_native.txt
|
Q:
Pixhawk controller won't pick up every message
I have been trying to inject raw UBX data, which I gather using the UBXReader library, into my Pixhawk. For this, I use a GPS module to extract UBX data and a serial-to-USB converter to stream data into my Pixhawk. Here is what my setup looks like:
Using my other USB port, I gather GPS data and try to stream it into the Pixhawk as seen above. For this task, I use Python.
from serial import Serial
from pyubx2 import UBXReader
stream = Serial('/dev/ttyUSB0', 38400, timeout=3)
stream2 = Serial('/dev/ttyUSB1', 38400, timeout=3)
while 1:
ubr = UBXReader(stream)
(raw_data, parsed_data) = ubr.read()
output = parsed_data.serialize()
stream2.write(output)
From MAVLink, I can see location and altitude data but I fail to stream HDOP and VDOP messages into my Pixhawk. What might be causing this and how should I proceed on fixing it?
A:
You need to ensure your external GPS is outputting the requisite UBX message types - not all UBX messages contain hDOP or vDOP data. For example:
The NAV-PVT message contains lat, lon and pDOP but not vDOP or hDOP.
The NAV-DOP message contains hDOP and vDOP but not lat or lon.
So you may need to enable the NAV-DOP message on your external GPS using uCenter or PyGPSClient.
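As an illustration (not part of the original question), a minimal sketch of forwarding only the message types that carry the needed data, assuming pyubx2-style identity strings such as "NAV-DOP" taken from parsed_data.identity:

```python
# NAV-PVT carries lat/lon/pDOP; NAV-DOP carries hDOP/vDOP.
WANTED = {"NAV-PVT", "NAV-DOP"}

def forward_wanted(messages, write):
    """Forward only the wanted UBX messages to an output stream.

    `messages` yields (identity, raw_bytes) pairs -- e.g. built from
    (parsed_data.identity, raw_data) in a pyubx2 read loop -- and
    `write` is an output callable such as stream2.write.
    Returns the number of messages forwarded.
    """
    forwarded = 0
    for identity, raw in messages:
        if identity in WANTED:
            write(raw)
            forwarded += 1
    return forwarded
```

If NAV-DOP identities never show up in the incoming stream, the message type is simply not enabled on the receiver, which points back to the uCenter/PyGPSClient configuration step.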
|
Pixhawk controller won't pick up every message
|
I have been trying to inject raw UBX data, which I gather using the UBXReader library, into my Pixhawk. For this, I use a GPS module to extract UBX data and a serial-to-USB converter to stream data into my Pixhawk. Here is what my setup looks like:
Using my other USB port, I gather GPS data and try to stream it into the Pixhawk as seen above. For this task, I use Python.
from serial import Serial
from pyubx2 import UBXReader
stream = Serial('/dev/ttyUSB0', 38400, timeout=3)
stream2 = Serial('/dev/ttyUSB1', 38400, timeout=3)
while 1:
ubr = UBXReader(stream)
(raw_data, parsed_data) = ubr.read()
output = parsed_data.serialize()
stream2.write(output)
From MAVLink, I can see location and altitude data but I fail to stream HDOP and VDOP messages into my Pixhawk. What might be causing this and how should I proceed on fixing it?
|
[
"You need to ensure your external GPS is outputting the requisite UBX message types - not all UBX messages contain hDOP or vDOP data. For example:\nThe NAV-PVT message contains lat, lon and pDOP but not vDOP or hDOP.\nThe NAV-DOP message contains hDOP and vDOP but not lat or lon.\nSo you may need to enable the NAV-DOP message on your external GPS using uCenter or PyGPSClient.\n"
] |
[
0
] |
[] |
[] |
[
"gps",
"mavlink",
"px4",
"qgroundcontrol"
] |
stackoverflow_0071790208_gps_mavlink_px4_qgroundcontrol.txt
|
Q:
Reading from a file segmentation fault while running round robin algorithm
I get this error when trying to run the round robin algorithm; the algorithm works perfectly fine when taking input from the user. This is the code:
else if (select==2)
{
FILE * filehandle;
char lyne[100];
char *item;
int reccount = 0;
// open file
filehandle = fopen("Input_Data1.txt","r");
// Read file line by line
while (fgets(lyne,99,filehandle))
{
numprocess =
printf("%s",lyne);
item = strtok(lyne," ");
p[reccount].arrivetime =atoi(item);
item = strtok(NULL," ");
p[reccount].bursttime =atoi(item);
reccount++;
}
//Close file
fclose(filehandle);
}
The error I get is segmentation fault (core dumped).
screen shot of the input file
I tried reading from a file expecting the same result as reading from user input, but I got the error shown in the image.
A:
At least these issues:
Avoid atoi(NULL)
item = strtok(lyne," "); may return NULL.
item = strtok(NULL," "); may return NULL.
I suspect the 2nd should be item = strtok(NULL," \n");
NULL == filehandle?
Open may fail.
reccount may exceed max
No need for a -1
// fgets(lyne,99,filehandle)
fgets(lyne, sizeof lyne, filehandle)
FILE * filehandle = fopen("Input_Data1.txt","r");
if (filehandle == NULL) {
fprintf(stderr, "Open failed\n");
return;
}
char lyne[100];
int reccount = 0;
while (reccount < reccount_n && fgets(lyne, sizeof lyne, filehandle)) {
numprocess = printf("%s", lyne); // Unclear how OP is using numprocess
char *item = strtok(lyne," ");
if (item == NULL) {
fprintf(stderr, "strtok(lyne,\" \") failed\n");
return;
}
p[reccount].arrivetime =atoi(item);
item = strtok(NULL," \n");
if (item == NULL) {
fprintf(stderr, "strtok(NULL,\" \n\") failed\n");
return;
}
p[reccount].bursttime =atoi(item);
reccount++;
}
fclose(filehandle);
|
Reading from a file segmentation fault while running round robin algorithm
|
I get this error when trying to run the round robin algorithm; the algorithm works perfectly fine when taking input from the user. This is the code:
else if (select==2)
{
FILE * filehandle;
char lyne[100];
char *item;
int reccount = 0;
// open file
filehandle = fopen("Input_Data1.txt","r");
// Read file line by line
while (fgets(lyne,99,filehandle))
{
numprocess =
printf("%s",lyne);
item = strtok(lyne," ");
p[reccount].arrivetime =atoi(item);
item = strtok(NULL," ");
p[reccount].bursttime =atoi(item);
reccount++;
}
//Close file
fclose(filehandle);
}
The error I get is segmentation fault (core dumped).
screen shot of the input file
I tried reading from a file expecting the same result as reading from user input, but I got the error shown in the image.
|
[
"At least these issues:\nAvoid atoi(NULL)\nitem = strtok(lyne,\" \"); may return NULL.\nitem = strtok(NULL,\" \"); may return NULL.\nI suspect the 2nd should be item = strtok(NULL,\" \\n\");\nNULL == filehandle?\nOpen may fail.\nreccount may exceed max\nNo need for a -1\n// fgets(lyne,99,filehandle)\nfgets(lyne, sizeof lyne, filehandle)\n\n\n FILE * filehandle = fopen(\"Input_Data1.txt\",\"r\");\n if (filehandle == NULL) {\n fprintf(stderr, \"Open failed\\n\"); \n return;\n }\n char lyne[100];\n int reccount = 0;\n\n while (reccount < reccount_n && fgets(lyne, sizeof lyne, filehandle)) {\n numprocess = printf(\"%s\", lyne); // Unclear how OP is using numprocess\n char *item = strtok(lyne,\" \");\n if (item == NULL) {\n fprintf(stderr, \"strtok(lyne,\\\" \\\") failed\\n\"); \n return;\n }\n p[reccount].arrivetime =atoi(item);\n\n item = strtok(NULL,\" \\n\");\n if (item == NULL) {\n fprintf(stderr, \"strtok(NULL,\\\" \\n\\\") failed\\n\"); \n return;\n }\n p[reccount].bursttime =atoi(item);\n\n reccount++;\n }\n fclose(filehandle);\n\n"
] |
[
0
] |
[] |
[] |
[
"c",
"operating_system",
"round_robin"
] |
stackoverflow_0074665893_c_operating_system_round_robin.txt
|
Q:
flutter web application does not send server requests
I made a build of the web version of the application with the following command:
flutter build web --web-renderer html
But when I ran the output with the python -m http.server 8000 command on my local system, none of the program's requests were sent, and I got the following error in the Firefox console:
Cross-Origin Request Blocked: The Same Origin Policy disallows reading
the remote resource at https://MY_SERVER_URL. (Reason: CORS request
did not succeed). Status code: (null). Uncaught Error:
NoSuchMethodError: j is undefined
Please help me to run the web application on localhost.
A:
First, I suggest you read about CORS; here is the link:
https://developer.mozilla.org/en-US/docs/Web/HTTP/CORS
But the easiest solution for running the Flutter web app locally is this:
Download the CORS plugin and install it in your browser (also available for Firefox).
Extension link in the Chrome browser:
https://chrome.google.com/webstore/detail/allow-cors-access-control/lhobafahddgcelffkeicbaginigeejlf
When you want to run your program, click on it and activate it.
The browser will then not enforce CORS for your web app, which should solve your problem.
One point: your error does not mention which part is blocked. Usually the headers are blocked, so after installing the plugin, open the plugin settings and check
Access-Control-Allow-Headers
Check it to solve your problem; if it is still not solved, please send a more complete error.
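Note that the browser plugin only masks the problem: the real fix is server-side, since the server at MY_SERVER_URL must answer with CORS headers such as Access-Control-Allow-Origin. As the question already serves the build with python -m http.server, here is a dev-only sketch (the handler and helper names are made up for illustration) of a Python server that adds permissive CORS headers to every response:

```python
import http.server

class CORSRequestHandler(http.server.SimpleHTTPRequestHandler):
    """Dev-only handler: adds permissive CORS headers to every response."""
    def end_headers(self):
        self.send_header("Access-Control-Allow-Origin", "*")
        self.send_header("Access-Control-Allow-Headers", "*")
        super().end_headers()

def make_server(port: int) -> http.server.HTTPServer:
    """Serve the current directory with CORS headers; run with serve_forever()."""
    return http.server.HTTPServer(("", port), CORSRequestHandler)
```

For example, from the build/web directory you could run make_server(8000).serve_forever(). Do not use a wildcard Access-Control-Allow-Origin in production.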
|
flutter web application does not send server requests
|
I made a build of the web version of the application with the following command:
flutter build web --web-renderer html
But when I ran the output with the python -m http.server 8000 command on my local system, none of the program's requests were sent, and I got the following error in the Firefox console:
Cross-Origin Request Blocked: The Same Origin Policy disallows reading
the remote resource at https://MY_SERVER_URL. (Reason: CORS request
did not succeed). Status code: (null). Uncaught Error:
NoSuchMethodError: j is undefined
Please help me to run the web application on localhost.
|
[
"First, I suggest you read about CORS, which I gave you the link\nhttps://developer.mozilla.org/en-US/docs/Web/HTTP/CORS\nBut the easiest solution is to run the flutter webapp locally:\nDownload the CORS plugin and install it in your browser (also available for Firefox)\nExtension link in Chrome browser:\nhttps://chrome.google.com/webstore/detail/allow-cors-access-control/lhobafahddgcelffkeicbaginigeejlf\nWhen you want to run your program, click on it and activate it\nThis will not check CORS for your web, and solved your problem\nBut one point in your error is not mentioned which part is blocked, usually the headers are blocked, so after installing the plugin, enter the plugin settings and check it.\nAccess-Control-Allow-Headers\ncheck it to solve your problem, if not solved , please send me a more complete error.\n"
] |
[
0
] |
[] |
[] |
[
"flutter",
"flutter_web",
"web_applications"
] |
stackoverflow_0074570892_flutter_flutter_web_web_applications.txt
|
Q:
Hiding text on small viewports
I have tried to use media queries and display: none in my CSS, but it does not hide the text on a smaller screen. Please help
HTML code:
@media all and (max-width: 600px) {
header h1 {
display: none;
}
}
<header class="word">
<h1> MY <span> WEBSITE </span> </h1>
</header>
A:
You can also try below code:
@media screen and (max-width: 600px) {
header h1 {
display: none;
}
}
<header class="word">
<h1> MY <span> WEBSITE </span> </h1>
</header>
A:
Your code should be working. It's a shot in the dark, but did you add this in the head tag?
<meta name="viewport" content="width=device-width, initial-scale=1">
This gives the browser instructions on how to control the page's dimensions and scaling.
|
Hiding text on small viewports
|
I have tried to use media queries and display: none in my CSS, but it does not hide the text on a smaller screen. Please help
HTML code:
@media all and (max-width: 600px) {
header h1 {
display: none;
}
}
<header class="word">
<h1> MY <span> WEBSITE </span> </h1>
</header>
|
[
"You can also try below code:\n\n\n@media screen and (max-width: 600px) {\n header h1 {\n display: none;\n }\n}\n<header class=\"word\">\n <h1> MY <span> WEBSITE </span> </h1>\n</header> \n\n\n\n",
"Your code should be working. I haven't seen your code so it's a shot in the dark but did you add this in head tag?\n<meta name=\"viewport\" content=\"width=device-width, initial-scale=1\">\n\nThis gives the browser instructions on how to control the page's dimensions and scaling.\n"
] |
[
2,
1
] |
[] |
[] |
[
"css",
"html"
] |
stackoverflow_0074637179_css_html.txt
|
Q:
Access C/C++ lib from a multiplatform kotlin project
For the first time, I'm using Android Studio to build a multiplatform project. I have created an Android app module that uses the multiplatform lib on Android.
I have also used XCode to build an iOS app that uses the multiplatform lib on iOS.
Everything works fine and I'm able to use the expect fun that is implemented by different actual fun for Android and iOS.
I have also created a library in C++ that exposes a C interface.
#ifndef PINGLIB_LIBRARY_H
#define PINGLIB_LIBRARY_H
#ifdef __cplusplus
extern "C" {
#endif
typedef struct {
long long elapsed;
} PingInfo;
typedef void (*PingCallback)(PingInfo pingInfo);
typedef struct
{
PingCallback pingUpdate;
} PingObserver;
void* ping(const char * url, const PingCallback *pingCallback);
void subscribe(void* pingOperation);
void unsubscribe(void* pingOperation);
#ifdef __cplusplus
}
#endif
#endif //PINGLIB_LIBRARY_H
I use CLion to build the C++ code.
I have created a .def file that I use to build the library using cinterop.
package = com.exercise.pinglib
headers = PingLibrary.h
linkerOpts.linux = -L/usr/lib/x86_64-linux-gnu
compilerOpts = -std=c99 -I/Users/username/myproject/ping/ping/header
staticLibraries = libping.a
libraryPaths = /opt/local/lib /Users/username/myproject/ping/cmake-build-debug
libping.a is the library created building the C++ code. It is created in the folder /Users/username/myproject/ping/cmake-build-debug
When I run the command cinterop -def ping.def -o ping, it creates the klib file and a folder containing a manifest.properties file, a natives subfolder containing a cstubs.bc file, and a kotlin subfolder with a .kt file.
ping.klib
-ping-build
manifest.properties
-natives
cstubs.bc
-kotlin
-com
-exercise
-pinglib
pinglib.kt
How can I use the library created by cinterop in my kotlin-multiplatform project?
I have found different ways to import it but I did not find any complete description of how to do it.
Here they say that I can use something like:
implementation files("ping.klib")
I did it for the iOS project, but I still don't know how to access the Kotlin classes on either Android or iOS.
This is my build.gradle
apply plugin: 'com.android.library'
apply plugin: 'kotlin-multiplatform'
android {
compileSdkVersion 28
defaultConfig {
minSdkVersion 15
}
buildTypes {
release {
minifyEnabled true
proguardFiles getDefaultProguardFile('proguard-android-optimize.txt'), 'proguard-rules.pro'
}
}
}
kotlin {
targets {
final def iOSTarget = System.getenv('SDK_NAME')?.startsWith('iphoneos') ? presets.iosArm64 : presets.iosX64
fromPreset(iOSTarget, 'ios') {
binaries {
framework('shared')
}
}
fromPreset(presets.android, 'android')
}
sourceSets {
// for common code
commonMain.dependencies {
api 'org.jetbrains.kotlin:kotlin-stdlib-common'
}
androidMain.dependencies {
api 'org.jetbrains.kotlin:kotlin-stdlib'
}
iosMain.dependencies {
implementation files("ping.klib")
}
}
}
configurations {
compileClasspath
}
task packForXCode(type: Sync) {
final File frameworkDir = new File(buildDir, "xcode-frameworks")
final String mode = project.findProperty("XCODE_CONFIGURATION")?.toUpperCase() ?: 'DEBUG'
final def framework = kotlin.targets.ios.binaries.getFramework("shared", mode)
inputs.property "mode", mode
dependsOn framework.linkTask
from { framework.outputFile.parentFile }
into frameworkDir
doLast {
new File(frameworkDir, 'gradlew').with {
text = "#!/bin/bash\nexport 'JAVA_HOME=${System.getProperty("java.home")}'\ncd '${rootProject.rootDir}'\n./gradlew \$@\n"
setExecutable(true)
}
}
}
tasks.build.dependsOn packForXCode
EDIT
I have changed the question because, initially, I thought that cinterop was not creating the klib library but it was just a mistake: I was looking in the ping-build folder but the file is outside that folder. So I resolved half of the question.
EDIT2
I have added the build.script
A:
I'm glad to see that everything is fine with the KLIB, now about the library use. First of all, I have to mention that this library can be utilized only by Kotlin/Native compiler, meaning it will be available for some targets(see list here). Then, if you're going to include C library use into an MPP project, it is always better to produce bindings via the Gradle script. It can be done inside of a target, see this doc for example. For your iOS target it should be like:
kotlin {
iosX64 { // Replace with a target you need.
compilations.getByName("main") {
val ping by cinterops.creating {
defFile(project.file("ping.def"))
packageName("c.ping")
}
}
}
}
This snippet will add cinterop task to your Gradle, and provide module to include like import c.ping.* inside of the corresponding Kotlin files.
A:
To use the library created by cinterop in your Kotlin multiplatform project, you need to add it as a dependency in the source sets for each platform. In your build.gradle file, you can do this by adding the implementation files("ping.klib") line to the dependencies for the iosMain source set. This will make the library available for use in the iOS project.
For the Android project, you can add the library as a dependency in the androidMain source set by using the implementation files("ping.klib") line in the dependencies section.
Once you have added the library as a dependency, you can access the Kotlin classes in the library by importing the package that contains the classes in your Kotlin code. For example, if the package name is com.exercise.pinglib, you can import it with import com.exercise.pinglib.*. You can then use the classes in the library as you would any other Kotlin classes.
|
Access C/C++ lib from a multiplatform kotlin project
|
For the first time, I'm using Android Studio to build a multiplatform project. I have created an Android app module that uses the multiplatform lib on Android.
I have also used XCode to build an iOS app that uses the multiplatform lib on iOS.
Everything works fine and I'm able to use the expect fun that is implemented by different actual fun for Android and iOS.
I have also created a library in C++ that exposes a C interface.
#ifndef PINGLIB_LIBRARY_H
#define PINGLIB_LIBRARY_H
#ifdef __cplusplus
extern "C" {
#endif
typedef struct {
long long elapsed;
} PingInfo;
typedef void (*PingCallback)(PingInfo pingInfo);
typedef struct
{
PingCallback pingUpdate;
} PingObserver;
void* ping(const char * url, const PingCallback *pingCallback);
void subscribe(void* pingOperation);
void unsubscribe(void* pingOperation);
#ifdef __cplusplus
}
#endif
#endif //PINGLIB_LIBRARY_H
I use CLion to build the C++ code.
I have created a .def file that I use to build the library using cinterop.
package = com.exercise.pinglib
headers = PingLibrary.h
linkerOpts.linux = -L/usr/lib/x86_64-linux-gnu
compilerOpts = -std=c99 -I/Users/username/myproject/ping/ping/header
staticLibraries = libping.a
libraryPaths = /opt/local/lib /Users/username/myproject/ping/cmake-build-debug
libping.a is the library created building the C++ code. It is created in the folder /Users/username/myproject/ping/cmake-build-debug
When I run the command cinterop -def ping.def -o ping, it creates the klib file and a folder containing a manifest.properties file, a natives subfolder containing a cstubs.bc file, and a kotlin subfolder with a .kt file.
ping.klib
-ping-build
manifest.properties
-natives
cstubs.bc
-kotlin
-com
-exercise
-pinglib
pinglib.kt
How can I use the library created by cinterop in my kotlin-multiplatform project?
I have found different ways to import it but I did not find any complete description of how to do it.
Here they say that I can use something like:
implementation files("ping.klib")
I did it for the iOS project, but I still don't know how to access the Kotlin classes on either Android or iOS.
This is my build.gradle
apply plugin: 'com.android.library'
apply plugin: 'kotlin-multiplatform'
android {
compileSdkVersion 28
defaultConfig {
minSdkVersion 15
}
buildTypes {
release {
minifyEnabled true
proguardFiles getDefaultProguardFile('proguard-android-optimize.txt'), 'proguard-rules.pro'
}
}
}
kotlin {
targets {
final def iOSTarget = System.getenv('SDK_NAME')?.startsWith('iphoneos') ? presets.iosArm64 : presets.iosX64
fromPreset(iOSTarget, 'ios') {
binaries {
framework('shared')
}
}
fromPreset(presets.android, 'android')
}
sourceSets {
// for common code
commonMain.dependencies {
api 'org.jetbrains.kotlin:kotlin-stdlib-common'
}
androidMain.dependencies {
api 'org.jetbrains.kotlin:kotlin-stdlib'
}
iosMain.dependencies {
implementation files("ping.klib")
}
}
}
configurations {
compileClasspath
}
task packForXCode(type: Sync) {
final File frameworkDir = new File(buildDir, "xcode-frameworks")
final String mode = project.findProperty("XCODE_CONFIGURATION")?.toUpperCase() ?: 'DEBUG'
final def framework = kotlin.targets.ios.binaries.getFramework("shared", mode)
inputs.property "mode", mode
dependsOn framework.linkTask
from { framework.outputFile.parentFile }
into frameworkDir
doLast {
new File(frameworkDir, 'gradlew').with {
text = "#!/bin/bash\nexport 'JAVA_HOME=${System.getProperty("java.home")}'\ncd '${rootProject.rootDir}'\n./gradlew \$@\n"
setExecutable(true)
}
}
}
tasks.build.dependsOn packForXCode
EDIT
I have changed the question because, initially, I thought that cinterop was not creating the klib library but it was just a mistake: I was looking in the ping-build folder but the file is outside that folder. So I resolved half of the question.
EDIT2
I have added the build.script
|
[
"I'm glad to see that everything is fine with the KLIB, now about the library use. First of all, I have to mention that this library can be utilized only by Kotlin/Native compiler, meaning it will be available for some targets(see list here). Then, if you're going to include C library use into an MPP project, it is always better to produce bindings via the Gradle script. It can be done inside of a target, see this doc for example. For your iOS target it should be like:\nkotlin {\n iosX64 { // Replace with a target you need.\n compilations.getByName(\"main\") {\n val ping by cinterops.creating {\n defFile(project.file(\"ping.def\"))\n packageName(\"c.ping\")\n }\n }\n }\n}\n\nThis snippet will add cinterop task to your Gradle, and provide module to include like import c.ping.* inside of the corresponding Kotlin files. \n",
"To use the library created by cinterop in your Kotlin multiplatform project, you need to add it as a dependency in the source sets for each platform. In your build.gradle file, you can do this by adding the implementation files(\"ping.klib\") line to the dependencies for the iosMain source set. This will make the library available for use in the iOS project.\nFor the Android project, you can add the library as a dependency in the androidMain source set by using the implementation files(\"ping.klib\") line in the dependencies section.\nOnce you have added the library as a dependency, you can access the Kotlin classes in the library by importing the package that contains the classes in your Kotlin code. For example, if the package name is com.exercise.pinglib, you can import it with import com.exercise.pinglib.*. You can then use the classes in the library as you would any other Kotlin classes.\n"
] |
[
2,
0
] |
[] |
[] |
[
"android",
"c++",
"kotlin",
"kotlin_multiplatform",
"kotlin_native"
] |
stackoverflow_0060692146_android_c++_kotlin_kotlin_multiplatform_kotlin_native.txt
|
Q:
Explanation of retry logic in Azure Durable Functions
I'm new to Azure Durable Functions and am trying to understand the retry logic and error handling.
I have a very simple orchestration function that executes 100 activity functions in a fan-out/fan-in pattern. My expectation is that when an activity function fails for whatever reason, it is retried based on the retry options. In my case, I'm expecting to get a count of 100 in the final orchestration line (Write-Information $results.count), but for every activity function that errors out due to my random error throwing, it seems no retry is done, and the result count is always less than 100.
Why is this happening, and why am I seeing the orchestrator output $results.count multiple times?
Orchestration
param($Context)
$range = 1..100
$retryOptions = New-DurableRetryOptions -FirstRetryInterval (New-TimeSpan -Seconds 1) -MaxNumberOfAttempts 10
$tasks =
foreach ($num in $range) {
try{
Invoke-DurableActivity -FunctionName 'RandomWaitActivity' -Input $num -NoWait -RetryOptions $retryOptions
}
catch{
Write-Output $_.Exception.Message
}
}
$results = Wait-ActivityFunction -Task $tasks
Write-Information $results.count
Action
param($number)
Write-Output "Received $number"
$random = Get-Random -Minimum 1 -Maximum 20
if($random -eq 13){
Throw "13 is a unlucky number"
}
Start-Sleep -Milliseconds $random
return $number
A:
Well, if you want to add retries, you can configure a retry policy in your function.json.
In this retry policy you can set maxRetryCount, which represents the number of times a function will retry before stopping.
function.json :-
{
"bindings": [
{
"authLevel": "anonymous",
"type": "httpTrigger",
"direction": "in",
"name": "Request",
"methods": [
"get",
"post"
]
},
{
"type": "http",
"direction": "out",
"name": "Response"
}
],
"retry": {
"strategy": "fixedDelay",
"maxRetryCount": 1,
"delayInterval": "00:00:10"
}
}
Here, since I have set the retry count to 1, the function will execute at most twice.
Refer this MsDOC on retry policy.
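To make the semantics concrete, here is a language-neutral sketch (plain Python, not Azure's actual implementation) of the fixedDelay strategy: maxRetryCount is the number of retries, so the function body runs at most maxRetryCount + 1 times.

```python
import time

def run_with_fixed_delay_retry(activity, max_retry_count=1, delay_seconds=0.0):
    """Run `activity`, retrying on failure with a fixed delay between attempts.

    With max_retry_count=1 the activity executes at most twice:
    the original attempt plus one retry.
    """
    for attempt in range(max_retry_count + 1):
        try:
            return activity()
        except Exception:
            if attempt == max_retry_count:
                raise  # retries exhausted: surface the last error
            time.sleep(delay_seconds)
```

This is why a maxRetryCount of 1 with a 10-second delayInterval gives two executions spaced 10 seconds apart before the failure is finally reported.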
|
Explanation of retry logic in Azure Durable Functions
|
I'm new to Azure Durable Functions and am trying to understand the retry logic and error handling.
I have a very simple orchestration function that executes 100 activity functions in a fan-out/fan-in pattern. My expectation is that when an activity function fails for whatever reason, it is retried based on the retry options. In my case, I'm expecting to get a count of 100 in the final orchestration line (Write-Information $results.count), but for every activity function that errors out due to my random error throwing, it seems no retry is done, and the result count is always less than 100.
Why is this happening, and why am I seeing the orchestrator output $results.count multiple times?
Orchestration
param($Context)
$range = 1..100
$retryOptions = New-DurableRetryOptions -FirstRetryInterval (New-TimeSpan -Seconds 1) -MaxNumberOfAttempts 10
$tasks =
foreach ($num in $range) {
try{
Invoke-DurableActivity -FunctionName 'RandomWaitActivity' -Input $num -NoWait -RetryOptions $retryOptions
}
catch{
Write-Output $_.Exception.Message
}
}
$results = Wait-ActivityFunction -Task $tasks
Write-Information $results.count
Action
param($number)
Write-Output "Received $number"
$random = Get-Random -Minimum 1 -Maximum 20
if($random -eq 13){
Throw "13 is a unlucky number"
}
Start-Sleep -Milliseconds $random
return $number
|
[
"\nWell, If you want to add retry in your, you can configure retry policy in your function.json.\n\nIn this retry policy you can set maxRetryCount which represent the number of times a function will retry before stopping.\n\n\nfunction.json :-\n{\n \"bindings\": [\n {\n \"authLevel\": \"anonymous\",\n \"type\": \"httpTrigger\",\n \"direction\": \"in\",\n \"name\": \"Request\",\n \"methods\": [\n \"get\",\n \"post\"\n ]\n },\n {\n \"type\": \"http\",\n \"direction\": \"out\",\n \"name\": \"Response\"\n } \n ],\n\n\"retry\": {\n \"strategy\": \"fixedDelay\",\n \"maxRetryCount\": 1,\n \"delayInterval\": \"00:00:10\"\n }\n}\n\nHere Since I have set the retry count to 1 the function will try to execute twice.\n\nRefer this MsDOC on retry policy.\n"
] |
[
0
] |
[] |
[] |
[
"azure_durable_functions",
"azure_functions",
"error_handling",
"retry_logic"
] |
stackoverflow_0074605237_azure_durable_functions_azure_functions_error_handling_retry_logic.txt
|
Q:
TypeError: Fullscreen request denied when tested application is automatically going into Full screen
It's my first post on the stackoverflow so I apologize in advance if I am doing something wrong.
I'm new to Cypress E2E testing. I'm trying to write a test for a React web application.
Unfortunately I am stuck on an element: on mobile views (e.g. viewport 360) the application opens in full screen. The application does that when I interact with the first element (login page). After filling the login field, the spec stops with the message "(uncaught exception) TypeError: Fullscreen request denied".
Is there any way to work around this? On higher resolutions, where full screen is not opened, the spec passes without problems.
Screen with the error from Cypress
I searched for a parameter in the Cypress settings to block the full-screen request, but unfortunately I did not find anything.
The same problem occurs on Chrome and Firefox.
The spec code is very simple
I use the class from POM
class Homepage_PO {
Visit_Homepage(){
cy.visit(Cypress.env('integrax_homepage'),{timeout:60000})
}
Login(login, password){
cy.get('input[name="login"]').type(login)
cy.get('input[name="password"]').type(password)
cy.get('#loginBtn').click()
}
}
export default Homepage_PO;
and the spec code
import Homepage_PO from "../../support/pageObjects/integraX/Homepage_PO";
/// <reference types = 'cypress'/>
describe('Log in into IntegraX', () => {
const homepage_PO = new Homepage_PO()
beforeEach(() => {
homepage_PO.Visit_Homepage();
});
it.only('log in as Integra administrator', () => {
homepage_PO.Login(Cypress.env('integra_admin_login'), Cypress.env('integra_admin_password'));
});
it('log in as Car Service', () => {
});
});
A:
You might be able to disable fullscreen by replacing the method on the element or documentElement.
See How TO - Fullscreen.
If the call is at element level
const stub = cy.stub();
cy.get('input[name="password"]')
.then($el => $el[0].requestFullscreen = stub)
.type(password)
.then(() => expect(stub).to.be.called)
or if the call is at document level
const stub = cy.stub();
cy.document().then(doc => {
doc.documentElement.requestFullscreen = stub
})
cy.get('input[name="password"]')
.type(password)
.then(() => expect(stub).to.be.called)
In case this method cannot be stubbed, you can catch the uncaught exception error and just let the test carry on
Cypress.once('uncaught:exception', () => {
return false
})
cy.get('input[name="password"]')
.type(password) // error suppressed by above handler
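The stubbing approach above is just method replacement: swap `requestFullscreen` for a recording function so the real call never fires. A minimal standalone sketch of that mechanism (plain JavaScript, no Cypress; all names are illustrative):

```javascript
// Build a stub that records calls instead of performing them.
function makeStub() {
  const stub = (...args) => { stub.called = true; stub.args = args; };
  stub.called = false;
  return stub;
}

// A fake element whose real method would be disruptive in a test.
const el = {
  requestFullscreen() { throw new Error('would enter fullscreen'); }
};

const stub = makeStub();
el.requestFullscreen = stub;  // same move as $el[0].requestFullscreen = stub
el.requestFullscreen();       // harmless: only records the call
console.log(stub.called);     // true
```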
A:
Thank you for your help.
This solution worked in my case (the call is at document level)
const stub = cy.stub();
cy.document().then(doc => {
doc.documentElement.requestFullscreen = stub
})
cy.get('input[name="password"]')
.type(password)
.then(() => expect(stub).to.be.called)
This solution also worked, but I would need to add it to each field. The same problem occurred when I was clicking the "Login" button. So the above solution is much better.
Cypress.once('uncaught:exception', () => {
return false
})
cy.get('input[name="password"]')
.type(password) // error suppressed by above handler
|
TypeError: Fullscreen request denied when tested application is automatically going into Full screen
|
It's my first post on the stackoverflow so I apologize in advance if I am doing something wrong.
I'm new to Cypress E2E testing. I'm trying to write a test for a React web application.
Unfortunately I am stuck on an element: on mobile views (e.g. viewport 360) the application opens in full screen. The application does that when I interact with the first element (login page). After filling the login field, the spec stops with the message "(uncaught exception) TypeError: Fullscreen request denied".
Is there any way to work around this? On higher resolutions, where full screen is not opened, the spec passes without problems.
Screen with the error from Cypress
I searched for a parameter in the Cypress settings to block the full-screen request, but unfortunately I did not find anything.
The same problem occurs on Chrome and Firefox.
The spec code is very simple
I use the class from POM
class Homepage_PO {
Visit_Homepage(){
cy.visit(Cypress.env('integrax_homepage'),{timeout:60000})
}
Login(login, password){
cy.get('input[name="login"]').type(login)
cy.get('input[name="password"]').type(password)
cy.get('#loginBtn').click()
}
}
export default Homepage_PO;
and the spec code
import Homepage_PO from "../../support/pageObjects/integraX/Homepage_PO";
/// <reference types = 'cypress'/>
describe('Log in into IntegraX', () => {
const homepage_PO = new Homepage_PO()
beforeEach(() => {
homepage_PO.Visit_Homepage();
});
it.only('log in as Integra administrator', () => {
homepage_PO.Login(Cypress.env('integra_admin_login'), Cypress.env('integra_admin_password'));
});
it('log in as Car Service', () => {
});
});
|
[
"You might be able to disable fullscreen by replacing the method on the element or documentElement.\nSee How TO - Fullscreen.\nIf the call is at element level\nconst stub = cy.stub();\ncy.get('input[name=\"password\"]')\n .then($el => $el[0].requestFullscreen = stub)\n .type(password)\n .then(() => expect(stub).to.be.called)\n\nor if the call is at document level\nconst stub = cy.stub();\ncy.document().then(doc => {\n doc.documentElement.requestFullscreen = stub\n})\ncy.get('input[name=\"password\"]')\n .type(password)\n .then(() => expect(stub).to.be.called)\n\n\nIn case this method cannot be stubbed, you can catch the uncaught exception error and just let the test carry on\n\nCypress.once('uncaught:exception', () => \n return false\n})\ncy.get('input[name=\"password\"]')\n .type(password) // error suppressed by above handler\n\n",
"Thank you for your help.\nThis solution worked in my case (the call is at document level)\nconst stub = cy.stub();\ncy.document().then(doc => {\n doc.documentElement.requestFullscreen = stub\n})\ncy.get('input[name=\"password\"]')\n .type(password)\n .then(() => expect(stub).to.be.called)\n\nThis solution also werked but I'll need to add it to each field. The same problem occured when I was clicking \"Login\" btton. So abowe solution is much better.\nCypress.once('uncaught:exception', () => \n return false\n})\ncy.get('input[name=\"password\"]')\n .type(password) // error suppressed by above handler\n\n"
] |
[
3,
0
] |
[] |
[] |
[
"cypress"
] |
stackoverflow_0074661840_cypress.txt
|
Q:
Getting distinct values from a list comprised of lists containing a comma delimited string
Main list:
data = [
["629-2, text1, 12"],
["629-2, text2, 12"],
["407-3, text9, 6"],
["407-3, text4, 6"],
["000-5, text7, 0"],
["000-5, text6, 0"],
]
I want to get a list comprised of unique lists like so:
data_unique = [
["629-2, text1, 12"],
["407-3, text9, 6"],
["000-5, text6, 0"],
]
I've tried using numpy.unique, but I need to pare it down further, as I need the list to be populated by lists containing a single unique version of the numerical designator at the beginning of the string, i.e. 629-2...
I've also tried using chain from itertools like this:
def get_unique(data):
return list(set(chain(*data)))
But that only got me as far as numpy.unique.
Thanks in advance.
A:
Code
from itertools import groupby
def get_unique(data):
def designated_version(item):
return item[0].split(',')[0]
return [list(v)[0]
for _, v in groupby(sorted(data,
key = designated_version),
designated_version)
]
Test
print(get_unique(data))
# Output
[['629-2, text1, 12'], ['407-3, text9, 6'], ['000-5, text7, 0']]
Explanation
Sorts data by designated number (in case not already sorted)
Uses groupby to group by the unique version of the numerical designator of each item in list i.e. lambda item: item[0].split(',')[0]
List comprehension keeps the first item in each grouped list i.e. list(v)[0]
A:
# Convert the list of lists to a set
data_set = set(tuple(x) for x in data)
# Convert the set back to a list
data_unique = [list(x) for x in data_set]
A:
I have used recursion to solve the problem!
def get_unique(lst):
if not lst:
return []
if lst[0] in lst[1:]:
return get_unique(lst[1:])
else:
return [lst[0]] + get_unique(lst[1:])
data = [
["629-2, text1, 12"],
["629-2, text2, 12"],
["407-3, text9, 6"],
["407-3, text4, 6"],
["000-5, text7, 0"],
["000-5, text6, 0"],
]
print(get_unique(data))
Here I am keeping the last occurrence of each element in the list.
|
Getting distinct values from a list comprised of lists containing a comma delimited string
|
Main list:
data = [
["629-2, text1, 12"],
["629-2, text2, 12"],
["407-3, text9, 6"],
["407-3, text4, 6"],
["000-5, text7, 0"],
["000-5, text6, 0"],
]
I want to get a list comprised of unique lists like so:
data_unique = [
["629-2, text1, 12"],
["407-3, text9, 6"],
["000-5, text6, 0"],
]
I've tried using numpy.unique, but I need to pare it down further, as I need the list to be populated by lists containing a single unique version of the numerical designator at the beginning of the string, i.e. 629-2...
I've also tried using chain from itertools like this:
def get_unique(data):
return list(set(chain(*data)))
But that only got me as far as numpy.unique.
Thanks in advance.
|
[
"Code\nfrom itertools import groupby\n\ndef get_unique(data):\n def designated_version(item):\n return item[0].split(',')[0]\n\n return [list(v)[0] \n for _, v in groupby(sorted(data, \n key = designated_version),\n designated_version)\n ]\n\n \n\nTest\nprint(get_unique(data))\n# Output\n[['629-2, text1, 12'], ['407-3, text9, 6'], ['000-5, text7, 0']]\n\nExplanation\n\nSorts data by designated number (in case not already sorted)\nUses groupby to group by the unique version of the numerical designator of each item in list i.e. lambda item: item[0].split(',')[0]\nList comprehension keeps the first item in each grouped list i.e. list(v)[0]\n\n",
"# Convert the list of lists to a set\ndata_set = set(tuple(x) for x in data)\n\n# Convert the set back to a list\ndata_unique = [list(x) for x in data_set]\n\n",
"I have used recursion to solve the problem!\ndef get_unique(lst):\n if not lst:\n return []\n if lst[0] in lst[1:]:\n return get_unique(lst[1:])\n else:\n return [lst[0]] + get_unique(lst[1:])\n\ndata = [\n[\"629-2, text1, 12\"],\n[\"629-2, text2, 12\"],\n[\"407-3, text9, 6\"],\n[\"407-3, text4, 6\"],\n[\"000-5, text7, 0\"],\n[\"000-5, text6, 0\"],\n]\nprint(get_unique(data))\n\nHere I am storing the last occurrence of the element in list.\n"
] |
[
2,
0,
0
] |
[] |
[] |
[
"numpy",
"python",
"python_itertools"
] |
stackoverflow_0074666151_numpy_python_python_itertools.txt
|
Q:
Constructor chaining in Java, when is super called?
If you have a constructor which calls this(), when is super() called? In the first constructor called, or in the last constructor, the one without the keyword this()?
Main calls: new B()
public class A {
public A(){
System.out.print("A ");
}
}
public class B extends A {
public B(){
// is super called here? Option 1
this(1);
System.out.print("B0 ");
}
public B(int i){
// or is super called here? Option 2
System.out.print("B1 ");
}
}
In this example the Output is: "A B1 B0". But it isn't clear to me, if the super() constructor is called at Option 1 or Option 2 (because the Output would be the same, in this case).
A:
You can deduce the answer by following this simple rule:
Only one super constructor will be called, and it will only be called once.
So, the super constructor could not possibly be invoked from B() because it would be invoked again from B(int).
Specifically, in the Java Language Specification (JLS) section 8.8.7. "Constructor Body" we read:
If a constructor body does not begin with an explicit constructor invocation and the constructor being declared is not part of the primordial class Object, then the constructor body implicitly begins with a superclass constructor invocation "super();", an invocation of the constructor of its direct superclass that takes no arguments.
In other words, if the constructor body does begin with an explicit constructor invocation, then there will be no implicit superclass constructor invocation.
(When the JLS speaks of a constructor body beginning with "an explicit constructor invocation" it means an invocation of another constructor of the same class.)
A:
When instantiating a new object, using new B(), it automatically calls the no-arguments constructor. In your case, public B().
In the no-arguments constructor:
Because your constructor explicitly begins with a call to another constructor (this(1)), no implicit call to the superclass constructor is inserted here. If your constructor only contained the printing line (System.out.print("B0 ");), super() would have been called automatically. Since you're calling this(1), your first constructor won't output anything yet, but will go to your second constructor, the one taking an int argument.
In the second constructor (with int argument):
As I specified already, if your constructor doesn't have an explicit call to this() or super(), it automatically calls super() (without any arguments unless manually specified). Because super() must always be the first call in a constructor, it is executed before your print, meaning that A() is called. The first thing to be printed would be A. Moving on, there will be B1 printed.
Back in your first constructor (no args):
this(1) has already been called, so it moves to the next statement, the one that prints something. So B0 will be printed.
Now, as a recap, for a proper answer: super() is automatically called only in public B(int i), because you already specified a call to this() in your first constructor, and super() is only called automatically when you don't specify this() or an explicit call to super().
|
Constructor chaining in Java, when is super called?
|
If you have a constructor which calls this(), when is super() called? In the first constructor called, or in the last constructor, the one without the keyword this()?
Main calls: new B()
public class A {
public A(){
System.out.print("A ");
}
}
public class B extends A {
public B(){
// is super called here? Option 1
this(1);
System.out.print("B0 ");
}
public B(int i){
// or is super called here? Option 2
System.out.print("B1 ");
}
}
In this example the Output is: "A B1 B0". But it isn't clear to me, if the super() constructor is called at Option 1 or Option 2 (because the Output would be the same, in this case).
|
[
"You can deduce the answer by following this simple rule:\n\nOnly one super constructor will be called, and it will only be called once.\n\nSo, the super constructor could not possibly be invoked from B() because it would be invoked again from B(int).\nSpecifically, in the Java Language Specification (JLS) section 8.8.7. \"Constructor Body\" we read:\n\nIf a constructor body does not begin with an explicit constructor invocation and the constructor being declared is not part of the primordial class Object, then the constructor body implicitly begins with a superclass constructor invocation \"super();\", an invocation of the constructor of its direct superclass that takes no arguments.\n\nIn other words, if the constructor body does begin with an explicit constructor invocation, then there will be no implicit superclass constructor invocation.\n(When the JLS speaks of a constructor body beginning with \"an explicit constructor invocation\" it means an invocation of another constructor of the same class.)\n",
"When instantiating a new object, using new B(), it automatically calls the no-arguments constructor. In your case, public B().\nIn the no-arguments constructor:\nBecause your constructor specified either a call to another constructor (using this()) or a manually specified call to the superclass constructor (using super()), there is no other call to another constructor. If your constructor was to only have the printing line (System.out.print(\"B0);), super() would have been automatically called. Now, you're calling this(1), so your first constructor won't output anything yet, but will go to your second constructor, having an int argument.\nIn the second constructor (with int argument):\nAs I specified already, if your constructor doesn't have an explicit call to this() or super(), it automatically calls super() (without any arguments unless manually specified). Because super() must always be the first call in a constructor, it is executed before your print, meaning that A() is called. The first thing to be printed would be A. Moving on, there will be B1 printed.\nBack in your first constructor (no args):\nthis(1) has already been called, so it moves to the next statement, the one that prints something. So B0 will be printed.\nNow, as a recap, for a proper answer: super() is automatically called only in public B(int i), because you already specified a call to this() in your first constructor, and super() is only called automatically when you don't specify this() or an explicit call to super().\n"
] |
[
2,
0
] |
[] |
[] |
[
"constructor",
"constructor_chaining",
"java"
] |
stackoverflow_0074665618_constructor_constructor_chaining_java.txt
|
Q:
inquirer package, present questions based on previous answers
I'm using the NPM 'inquirer' package in order to present the user with various questions. One of them is a 'choices' selection.
Is there a way to present follow-up questions based on the 'choices' selection?
Here's my code:
const { prompt } = require('inquirer');
require('colors');
const questions = [
{
type: 'input',
name: 'appName',
message: 'Enter application name:'
},
{
type: 'input',
name: 'appDescription',
message: 'Enter app description:'
},
{
type: 'input',
name: 'appAuthor',
message: 'Enter app author:'
},
{
type: 'checkbox',
name: 'plugins',
message: 'Select plugins to install:',
choices: [
{
name: 'Cassandra'
}
]
}
];
module.exports.performQuestions = async () => {
console.log('\nWelcome to Chef!\n'.underline.italic.cyan);
const answers = await prompt(questions);
if (!answers.appName) {
console.error('\nPlease provide app name!\n'.red);
process.exit(1);
}
answers.appType = 'service';
return answers;
};
Here I want to present a few more questions if the user selects 'Cassandra'. Is that possible?
Thanks.
A:
You can use "when". In the example below, the second question will pop up only if "Cassandra" is selected:
const QUESTIONS = [
{
name: 'your-name',
type: 'list',
message: 'Your name:',
choices: ['Batman', 'Superman', 'Ultron', 'Cassandra'],
},
{
name: 'hello-cassandra',
type: 'confirm',
message: 'Oh, hello Cassandra!',
when: (answers) => answers['your-name'] === 'Cassandra',
},
];
inquirer.prompt(QUESTIONS)
.then(answers =>
{
console.log(answers);
});
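Under the hood, `when` is just a predicate over the answers gathered so far: inquirer skips any question whose `when` returns false. A rough sketch of that filtering logic (plain JavaScript, without inquirer itself; the question names are illustrative):

```javascript
// Each question may carry a `when` predicate over previously collected answers.
const questions = [
  { name: 'plugins' },  // no `when`: always asked
  { name: 'cassandraHost',
    when: answers => (answers.plugins || []).includes('Cassandra') },
];

// Suppose the user ticked Cassandra in the checkbox question:
const answers = { plugins: ['Cassandra'] };

// Keep only the questions whose predicate passes (or that have none).
const toAsk = questions.filter(q => !q.when || q.when(answers));
console.log(toAsk.map(q => q.name)); // [ 'plugins', 'cassandraHost' ]
```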
|
inquirer package, present questions based on previous answers
|
I'm using the NPM 'inquirer' package in order to present the user with various questions. One of them is a 'choices' selection.
Is there a way to present follow-up questions based on the 'choices' selection?
Here's my code:
const { prompt } = require('inquirer');
require('colors');
const questions = [
{
type: 'input',
name: 'appName',
message: 'Enter application name:'
},
{
type: 'input',
name: 'appDescription',
message: 'Enter app description:'
},
{
type: 'input',
name: 'appAuthor',
message: 'Enter app author:'
},
{
type: 'checkbox',
name: 'plugins',
message: 'Select plugins to install:',
choices: [
{
name: 'Cassandra'
}
]
}
];
module.exports.performQuestions = async () => {
console.log('\nWelcome to Chef!\n'.underline.italic.cyan);
const answers = await prompt(questions);
if (!answers.appName) {
console.error('\nPlease provide app name!\n'.red);
process.exit(1);
}
answers.appType = 'service';
return answers;
};
Here I want to present a few more questions if the user selects 'Cassandra'. Is that possible?
Thanks.
|
[
"You can use \"when\" and like in the example bellow, the second question will popup only if \"Cassandra\" is selected:\nconst QUESTIONS = [\n {\n name: 'your-name',\n type: 'list',\n message: 'Your name:',\n choices: ['Batman', 'Superman', 'Ultron', 'Cassandra'],\n },\n {\n name: 'hello-cassandra',\n type: 'confirm',\n message: 'Oh, hello Cassandra!',\n when: (answers) => answers['your-name'] === 'Cassandra',\n },\n]; \n\n\ninquirer.prompt(QUESTIONS)\n .then(answers =>\n {\n console.log(answers);\n });\n\n"
] |
[
0
] |
[] |
[] |
[
"inquirer",
"inquirerjs",
"javascript"
] |
stackoverflow_0054909019_inquirer_inquirerjs_javascript.txt
|
Q:
Calculating mixed numbers and chars and concatenating them back again in JS/jQuery
I need to manipulate drawing of a SVG, so I have attribute "d" values like this:
d = "M561.5402,268.917 C635.622,268.917 304.476,565.985 379.298,565.985"
What I want is to "purify" all the values (to strip the chars from them), to calculate them (for the sake of simplicity, let's say to add 100 to each value), to deconstruct the string, calculate the values inside and then concatenate it all back together so the final result is something like this:
d = "M661.5402,368.917 C735.622,368.917 404.476,665.985 479.298,665.985"
Keep in mind that:
some values can start with a character
values are delimited by comma
some values within comma delimiter can be delimited by space
values are decimal
This is my try:
let arr1 = d.split(',');
arr1 = arr1.map(element => {
let arr2 = element.split(' ');
if (arr2.length > 1) {
arr2 = arr2.map(el => {
let startsWithChar = el.match(/\D+/);
if (startsWithChar) {
el = el.replace(/\D/g,'');
}
el = parseFloat(el) + 100;
if (startsWithChar) {
el = startsWithChar[0] + el;
}
})
}
else {
let startsWithChar = element.match(/\D+/);
if (startsWithChar) {
element = element.replace(/\D/g,'');
}
element = parseFloat(element) + 100;
if (startsWithChar) {
element = startsWithChar[0] + element;
}
}
});
d = arr1.join(',');
I tried with regex replace(/\D/g,'') but then it strips the decimal dot from the value also, so I think my solution is full of holes.
Maybe another solution would be to somehow modify each of the path values/commands directly; I'm open to that solution too, but I don't know how.
A:
const s = 'M561.5402,268.917 C635.622,268.917 304.476,565.985 379.298,565.985'
console.log(s.replaceAll(/[\d.]+/g, m=>+m+100))
A:
You might use a pattern to match the format in the string with 2 capture groups.
([ ,]?\b[A-Z]?)(\d+\.\d+)\b
The pattern matches:
( Capture group 1
[ ,]?\b[A-Z]? Match an optional space or comma, a word boundary and an optional uppercase char A-Z
) Close group 1
( Capture group 2
\d+\.\d+ Match 1+ digits, a dot and 1+ digits
) Close group 1
\b A word boundary to prevent a partial word match
Regex demo
First capture the optional delimiter followed by an optional uppercase char in group 1, and the decimal number in group 2.
Then add 100 to the decimal value and join back the 2 group values.
const d = "M561.5402,268.917 C635.622,268.917 304.476,565.985 379.298,565.985";
const regex = /([ ,]?\b[A-Z]?)(\d+\.\d+)\b/g;
const res = Array.from(
d.matchAll(regex), m => m[1] + (+m[2] + 100)
).join('');
console.log(res);
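The replacement-function approach above can also be extended to negative coordinates and plain integers, which SVG path data may contain as well. A hedged sketch (the 100 offset mirrors the question's example):

```javascript
// Shift every numeric token in an SVG path string by dx.
// The pattern matches an optional minus sign, integers, and decimals.
const shiftPath = (d, dx) =>
  d.replace(/-?\d+(?:\.\d+)?/g, m => +m + dx);

console.log(shiftPath("M-10,20 C30.5,40 50,60.25 70,80", 100));
// "M90,120 C130.5,140 150,160.25 170,180"
```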
|
Calculating mixed numbers and chars and concatenating them back again in JS/jQuery
|
I need to manipulate drawing of a SVG, so I have attribute "d" values like this:
d = "M561.5402,268.917 C635.622,268.917 304.476,565.985 379.298,565.985"
What I want is to "purify" all the values (to strip the chars from them), to calculate them (for the sake of simplicity, let's say to add 100 to each value), to deconstruct the string, calculate the values inside and then concatenate it all back together so the final result is something like this:
d = "M661.5402,368.917 C735.622,368.917 404.476,665.985 479.298,665.985"
Keep in mind that:
some values can start with a character
values are delimited by comma
some values within comma delimiter can be delimited by space
values are decimal
This is my try:
let arr1 = d.split(',');
arr1 = arr1.map(element => {
let arr2 = element.split(' ');
if (arr2.length > 1) {
arr2 = arr2.map(el => {
let startsWithChar = el.match(/\D+/);
if (startsWithChar) {
el = el.replace(/\D/g,'');
}
el = parseFloat(el) + 100;
if (startsWithChar) {
el = startsWithChar[0] + el;
}
})
}
else {
let startsWithChar = element.match(/\D+/);
if (startsWithChar) {
element = element.replace(/\D/g,'');
}
element = parseFloat(element) + 100;
if (startsWithChar) {
element = startsWithChar[0] + element;
}
}
});
d = arr1.join(',');
I tried with regex replace(/\D/g,'') but then it strips the decimal dot from the value also, so I think my solution is full of holes.
Maybe another solution would be to somehow modify directly each of path values/commands, I'm opened to that solution also, but I don't know how.
|
[
"\n\nconst s = 'M561.5402,268.917 C635.622,268.917 304.476,565.985 379.298,565.985'\n\nconsole.log(s.replaceAll(/[\\d.]+/g, m=>+m+100))\n\n\n\n",
"You might use a pattern to match the format in the string with 2 capture groups.\n([ ,]?\\b[A-Z]?)(\\d+\\.\\d+)\\b\n\nThe pattern matches:\n\n( Capture group 1\n\n[ ,]?\\b[A-Z]? Match an optional space or comma, a word boundary and an optional uppercase char A-Z\n\n\n) Close group 1\n( Capture group 2\n\n\\d+\\.\\d+ Match 1+ digits, a dot and 1+ digits\n\n\n) Close group 1\n\\b A word boundary to prevent a partial word match\n\nRegex demo\nFirst capture the optional delimiter followed by an optional uppercase char in group 1, and the decimal number in group 2.\nThen add 100 to the decimal value and join back the 2 group values.\n\n\nconst d = \"M561.5402,268.917 C635.622,268.917 304.476,565.985 379.298,565.985\";\nconst regex = /([ ,]?\\b[A-Z]?)(\\d+\\.\\d+)\\b/g;\nconst res = Array.from(\n d.matchAll(regex), m => m[1] + (+m[2] + 100)\n).join('');\n\nconsole.log(res);\n\n\n\n"
] |
[
2,
1
] |
[] |
[] |
[
"javascript",
"jquery",
"regex",
"svg"
] |
stackoverflow_0074666142_javascript_jquery_regex_svg.txt
|
Q:
I am struggling to get a game object to be destroyed when colliding with the player
I have written code to heal the player after colliding with the health potion and then destroy the potion game object making it one time use, however, the gameobject does not get destroyed and the player is not healed. Code is as shown below:
private void OnCollisionEnter2D(Collision2D collision)
{
if (collision.gameObject.tag == "Player" )
{
playerHealthScript.heal();
Destroy(gameObject);
}
}
}
(code below is in a seperate script, used for player health)
public void heal()
{
currentHealth += healingAmount;
currentHealth = Mathf.Clamp(currentHealth, 0, 100);
healthBar.fillAmount = currentHealth / 100f;
}
A:
Before looking for anything else, you should Debug.Log("Collision") in your first function; maybe your code is right but the collision simply isn't being detected.
If so, you could check that the tag is spelled exactly "Player" (the comparison is case-sensitive), and that your "Player" has a Rigidbody2D (the 2D is important!).
|
I am struggling to get a game object to be destroyed when colliding with the player
|
I have written code to heal the player after colliding with the health potion and then destroy the potion game object making it one time use, however, the gameobject does not get destroyed and the player is not healed. Code is as shown below:
private void OnCollisionEnter2D(Collision2D collision)
{
if (collision.gameObject.tag == "Player" )
{
playerHealthScript.heal();
Destroy(gameObject);
}
}
}
(code below is in a seperate script, used for player health)
public void heal()
{
currentHealth += healingAmount;
currentHealth = Mathf.Clamp(currentHealth, 0, 100);
healthBar.fillAmount = currentHealth / 100f;
}
|
[
"Before seek for anything, you should Debug.Log(\"Collision\") in your first function, maybe your code is right but the collision isn't detect.\nIf so, yo could consideer check if the syntax of \"Player\" is the same as the tag, or maybe your \"Player\" doesn't have any rigidbody2D ( 2D is important ! ).\n"
] |
[
0
] |
[] |
[] |
[
"c#",
"unity3d"
] |
stackoverflow_0074656465_c#_unity3d.txt
|
Q:
Deleting a specific element from JSON
I'm trying to clean up my json file with jq and remove all the elements that don't belong to the city of Bern. Here is the JSON example:
[
{
"geometry": {
"location": {
"lat": 46.93629499999999,
"lng": 7.503256800000001
}
},
"name": "La Tana Del Lupo",
"price_level": 1,
"rating": 4.4,
"types": [
"restaurant",
"point_of_interest",
"food",
"establishment"
],
"user_ratings_total": 84,
"vicinity": "Dorfstrasse 11, Muri bei Bern"
},
{
"geometry": {
"location": {
"lat": 46.93406969999999,
"lng": 7.503247199999999
}
},
"name": "Rung Reuang",
"rating": 4.8,
"types": [
"restaurant",
"point_of_interest",
"food",
"establishment"
],
"user_ratings_total": 131,
"vicinity": "Worbstrasse 198, Muri bei Bern"
}
]
I'm trying to remove all the entries with the city "Ostermundigen" in it. I've tried different things:
jq 'map(del(.vicinity|contains(“Ostermundigen”)))’ Bern.json
this expects some input from me:
(base) dcha3453463$ jq 'map(del(.vicinity|contains(“Ostermundigen”)))’ clean_json/Bern.json
(base) dcha3453463$ jq < clean_json/Bern.json 'del(.[]|select(.vicinity(has"Ostermundigen”)))’
>
(base) dcha3453463$ jq 'del(.[]|select(.vicinity=="Ostermundigen")' clean_json/Bern.json
jq: error: syntax error, unexpected $end, expecting ';' or ')' (Unix shell quoting issues?) at <top-level>, line 1:
del(.[]|select(.vicinity=="Ostermundigen")
jq: 1 compile error
>```
What am I doing wrong?
A:
You need select to pick out the right items. Without select, del would navigate down to the .vicinity value and delete just that field, not the whole entry.
Also, map(del(…)) would leave null values behind. Instead, delete from outside the array and descend to the items using .[] from within del:
del(.[] | select(.vicinity|contains("Ostermundigen")))
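For comparison, the same filter expressed in plain JavaScript (assuming the array shape from the question; the sample entries here are illustrative):

```javascript
// Each entry has a `vicinity` string, as in the question's JSON.
const data = [
  { name: 'La Tana Del Lupo', vicinity: 'Dorfstrasse 11, Muri bei Bern' },
  { name: 'Some Place',       vicinity: 'Bernstrasse 1, Ostermundigen' },
];

// del(.[] | select(.vicinity | contains("Ostermundigen"))) is equivalent to
// keeping only the entries whose vicinity does NOT contain the substring:
const kept = data.filter(e => !e.vicinity.includes('Ostermundigen'));
console.log(kept.map(e => e.name)); // [ 'La Tana Del Lupo' ]
```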
|
Deleting a specific element from JSON
|
I'm trying to clean up my json file with jq and remove all the elements that don't belong to the city of Bern. Here is the JSON example:
[
{
"geometry": {
"location": {
"lat": 46.93629499999999,
"lng": 7.503256800000001
}
},
"name": "La Tana Del Lupo",
"price_level": 1,
"rating": 4.4,
"types": [
"restaurant",
"point_of_interest",
"food",
"establishment"
],
"user_ratings_total": 84,
"vicinity": "Dorfstrasse 11, Muri bei Bern"
},
{
"geometry": {
"location": {
"lat": 46.93406969999999,
"lng": 7.503247199999999
}
},
"name": "Rung Reuang",
"rating": 4.8,
"types": [
"restaurant",
"point_of_interest",
"food",
"establishment"
],
"user_ratings_total": 131,
"vicinity": "Worbstrasse 198, Muri bei Bern"
}
]
I'm trying to remove all the entries with the city "Ostermundigen" in it. I've tried different things:
jq 'map(del(.vicinity|contains(“Ostermundigen”)))’ Bern.json
this expects some input from me:
(base) dcha3453463$ jq 'map(del(.vicinity|contains(“Ostermundigen”)))’ clean_json/Bern.json
(base) dcha3453463$ jq < clean_json/Bern.json 'del(.[]|select(.vicinity(has"Ostermundigen”)))’
>
(base) dcha3453463$ jq 'del(.[]|select(.vicinity=="Ostermundigen")' clean_json/Bern.json
jq: error: syntax error, unexpected $end, expecting ';' or ')' (Unix shell quoting issues?) at <top-level>, line 1:
del(.[]|select(.vicinity=="Ostermundigen")
jq: 1 compile error
>```
What am I doing wrong?
|
[
"You need select to filter the right items. Without select, the filter would traverse to the criteria and delete just that.\nAlso, map(del(…)) would leave null values behind. Rather delete from outside the array, and descend to the items using .[] from within del:\ndel(.[] | select(.vicinity|contains(\"Ostermundigen\")))\n\n"
] |
[
1
] |
[] |
[] |
[
"jq",
"json"
] |
stackoverflow_0074666194_jq_json.txt
|
Q:
How to disable mouse movement lua script
While I am running a script, I want to add something that disables physical mouse movement so that the script is more accurate with its movements. Is this possible?
Also, is there something I can add so that if I press a certain key at any time while the script is running, it will cancel the script immediately and re-enable physical mouse movement?
This is the script I have
if (event == "G_PRESSED" and arg == 4 and GetMKeyState("kb") == 3) then
MoveMouseTo( 36165, 57821)
Sleep(100)
PressMouseButton(3)
ReleaseMouseButton(3)
Sleep(100)
MoveMouseTo( 36131, 59461)
Sleep(100)
PressKey ("lalt")
Sleep(100)
PressKey ("X")
Sleep(100)
ReleaseKey ("lalt")
ReleaseKey ("X")
PressMouseButton(1)
ReleaseMouseButton(1)
Sleep(250)
MoveMouseTo( 38932, 56060)
Sleep(2000)
PressMouseButton(1)
ReleaseMouseButton(1)
MoveMouseTo( 4166, 1518)
Sleep(200)
PressMouseButton(1)
ReleaseMouseButton(1)
MoveMouseTo( 38932, 56060)
Sleep(100)
PressMouseButton(3)
ReleaseMouseButton(3)
Sleep(100)
MoveMouseTo( 38590, 57882)
Sleep(100)
PressKey ("lalt")
Sleep(100)
PressKey ("X")
Sleep(100)
ReleaseKey ("lalt")
ReleaseKey ("X")
end
Thanks
A:
LGS/GHUB is unable to disable physical mouse movement.
To make the mouse movements more accurate you can reduce the sleep intervals after MoveMouseTo from 100ms to 10ms. This way a human would have only 10ms to move the mouse cursor away before the mouse button click is simulated.
MoveMouseTo( 38932, 56060)
Sleep(100)
MoveMouseTo( 38932, 56060)
Sleep(10)
PressMouseButton(3)
|
How to disable mouse movement lua script
|
While I am running a script I want to add something that disables physical mouse movement, so that the script's movements are more accurate. Is this possible?
Also, is there something I can add so that if I press a certain key at any time while it is running, it will cancel the script immediately and re-enable physical mouse movement?
This is the script I have:
if (event == "G_PRESSED" and arg == 4 and GetMKeyState("kb") == 3) then
MoveMouseTo( 36165, 57821)
Sleep(100)
PressMouseButton(3)
ReleaseMouseButton(3)
Sleep(100)
MoveMouseTo( 36131, 59461)
Sleep(100)
PressKey ("lalt")
Sleep(100)
PressKey ("X")
Sleep(100)
ReleaseKey ("lalt")
ReleaseKey ("X")
PressMouseButton(1)
ReleaseMouseButton(1)
Sleep(250)
MoveMouseTo( 38932, 56060)
Sleep(2000)
PressMouseButton(1)
ReleaseMouseButton(1)
MoveMouseTo( 4166, 1518)
Sleep(200)
PressMouseButton(1)
ReleaseMouseButton(1)
MoveMouseTo( 38932, 56060)
Sleep(100)
PressMouseButton(3)
ReleaseMouseButton(3)
Sleep(100)
MoveMouseTo( 38590, 57882)
Sleep(100)
PressKey ("lalt")
Sleep(100)
PressKey ("X")
Sleep(100)
ReleaseKey ("lalt")
ReleaseKey ("X")
end
Thanks
|
[
"LGS/GHUB is unable to disable physical mouse movement.\nTo make mouse moves more accurate you can reduce sleep intervals after MoveMouseTo from 100ms to 10ms. This way a human would have only 10ms to move the mouse cursor away before mouse button click is simulated.\nMoveMouseTo( 38932, 56060)\nSleep(100)\nMoveMouseTo( 38932, 56060)\nSleep(10)\nPressMouseButton(3)\n\n"
] |
[
0
] |
[] |
[] |
[
"logitech_gaming_software",
"lua"
] |
stackoverflow_0074664021_logitech_gaming_software_lua.txt
|
Q:
Python - How to make a circle made of 32 triangles
I would like to ask how you would make a circle that is made purely of 32 triangles. I'm asking because I'm having trouble writing the code myself, so I thought I'd at least find some help here.
I tried to write it, but Python doesn't make much sense to me, and every time I get somewhere I find out at the end that it was actually useless and that I'm very bad at Python.
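Not a full answer, but one standard way to lay this out: treat the circle as a fan of 32 triangles sharing the centre point, with the rim points computed by basic trigonometry. The sketch below (plain Python, no drawing library) just produces the vertex triples; they can then be drawn with turtle, matplotlib, or anything else:

```python
import math

def circle_as_triangles(cx, cy, radius, n=32):
    """Return n triangles (centre, rim_i, rim_i+1) whose union approximates a circle."""
    triangles = []
    for i in range(n):
        a0 = 2 * math.pi * i / n        # angle of this rim point
        a1 = 2 * math.pi * (i + 1) / n  # angle of the next rim point
        p0 = (cx + radius * math.cos(a0), cy + radius * math.sin(a0))
        p1 = (cx + radius * math.cos(a1), cy + radius * math.sin(a1))
        triangles.append(((cx, cy), p0, p1))
    return triangles

tris = circle_as_triangles(0.0, 0.0, 100.0)
```

Each element of `tris` is a `(centre, rim_i, rim_i+1)` triple; drawing all 32 of them filled yields the circle.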
|
Python - How to make a circle made of 32 triangles
|
I would like to ask how you would make a circle that is made purely of 32 triangles. I'm asking because I'm having trouble writing the code myself, so I thought I'd at least find some help here.
I tried to write it, but Python doesn't make much sense to me, and every time I get somewhere I find out at the end that it was actually useless and that I'm very bad at Python.
|
[] |
[] |
[
"To draw a circle made of triangles using Python, you can use the turtle module. The turtle module allows you to create simple graphics using a turtle that moves around the screen. You can use the turtle module to draw lines and shapes, and then fill them with color.\n"
] |
[
-2
] |
[
"python"
] |
stackoverflow_0074666273_python.txt
|
Q:
framework not found in webview_flutter when making build in IOS
I used the webview_flutter package to show some pages in a WebView in Flutter. On Android it's working fine, but when I try to run on iOS I get an error as in the screenshot.
Can anyone help? I'm really stuck here.
A:
Delete the flutter folder on your PC. Download a new Flutter SDK and then copy it to your Flutter path (in place of the deleted flutter folder).
|
framework not found in webview_flutter when making build in IOS
|
I used the webview_flutter package to show some pages in a WebView in Flutter. On Android it's working fine, but when I try to run on iOS I get an error as in the screenshot.
Can anyone help? I'm really stuck here.
|
[
"Delete flutter folder on your PC. Download a new flutter SDK and then copy it your Flutter Path (instead of deleted flutter folder).\n"
] |
[
0
] |
[] |
[] |
[
"flutter",
"ios",
"webview_flutter"
] |
stackoverflow_0070123985_flutter_ios_webview_flutter.txt
|
Q:
Convert RGB array to HSL
A disclaimer first, I'm not very skilled in Python, you guys have my admiration.
My problem:
I need to generate 10k+ images from templates (128px by 128px) with various hues and luminances.
I load the images and turn them into arrays
image = Image.open(dir + "/" + file).convert('RGBA')
arr=np.array(np.asarray(image).astype('float'))
From what I can understand, handling numpy arrays in this fashion is much faster than looping over every pixel and using colorsys.
Now, I've stumbled upon a couple of functions to convert RGB to HSV.
This helped me generate my images with different hues, but I also need to play with the brightness so that some can be black and others white.
def rgb_to_hsv(rgb):
# Translated from source of colorsys.rgb_to_hsv
hsv=np.empty_like(rgb)
hsv[...,3:]=rgb[...,3:]
r,g,b=rgb[...,0],rgb[...,1],rgb[...,2]
maxc = np.max(rgb[...,:2],axis=-1)
minc = np.min(rgb[...,:2],axis=-1)
hsv[...,2] = maxc
hsv[...,1] = (maxc-minc) / maxc
rc = (maxc-r) / (maxc-minc)
gc = (maxc-g) / (maxc-minc)
bc = (maxc-b) / (maxc-minc)
hsv[...,0] = np.select([r==maxc,g==maxc],[bc-gc,2.0+rc-bc],default=4.0+gc-rc)
hsv[...,0] = (hsv[...,0]/6.0) % 1.0
idx=(minc == maxc)
hsv[...,0][idx]=0.0
hsv[...,1][idx]=0.0
return hsv
def hsv_to_rgb(hsv):
# Translated from source of colorsys.hsv_to_rgb
rgb=np.empty_like(hsv)
rgb[...,3:]=hsv[...,3:]
h,s,v=hsv[...,0],hsv[...,1],hsv[...,2]
i = (h*6.0).astype('uint8')
f = (h*6.0) - i
p = v*(1.0 - s)
q = v*(1.0 - s*f)
t = v*(1.0 - s*(1.0-f))
i = i%6
conditions=[s==0.0,i==1,i==2,i==3,i==4,i==5]
rgb[...,0]=np.select(conditions,[v,q,p,p,t,v],default=v)
rgb[...,1]=np.select(conditions,[v,v,v,q,p,p],default=t)
rgb[...,2]=np.select(conditions,[v,p,t,v,v,q],default=p)
return rgb
How easy is it to modify these functions to convert to and from HSL?
Any trick to convert HSV to HSL?
Any info you can give me is greatly appreciated, thanks!
A:
Yes, numpy, namely the vectorised code, can speed-up color conversions.
Moreover, for massive production of 10k+ bitmaps, you may want to re-use a ready-made professional conversion, or sub-class it if it does not exactly match your preferred luminance model.
OpenCV, a computer-vision library currently available for Python as the cv2 module, can take care of the color-system conversion without any additional coding, via a ready-made conversion one-liner:
out = cv2.cvtColor( anInputFRAME, cv2.COLOR_YUV2BGR ) # a bitmap conversion
A list of some color-systems available in cv2 ( you may notice RGB to be referred to as BRG due to OpenCV convention of a different ordering of an image's Blue-Red-Green color-planes ),
( symmetry applies COLOR_YCR_CB2BGR <-|-> COLOR_BGR2YCR_CB not all pairs shown )
>>> import cv2
>>> for key in dir( cv2 ): # show all ready conversions
... if key[:7] == 'COLOR_Y':
... print key
COLOR_YCR_CB2BGR
COLOR_YCR_CB2RGB
COLOR_YUV2BGR
COLOR_YUV2BGRA_I420
COLOR_YUV2BGRA_IYUV
COLOR_YUV2BGRA_NV12
COLOR_YUV2BGRA_NV21
COLOR_YUV2BGRA_UYNV
COLOR_YUV2BGRA_UYVY
COLOR_YUV2BGRA_Y422
COLOR_YUV2BGRA_YUNV
COLOR_YUV2BGRA_YUY2
COLOR_YUV2BGRA_YUYV
COLOR_YUV2BGRA_YV12
COLOR_YUV2BGRA_YVYU
COLOR_YUV2BGR_I420
COLOR_YUV2BGR_IYUV
COLOR_YUV2BGR_NV12
COLOR_YUV2BGR_NV21
COLOR_YUV2BGR_UYNV
COLOR_YUV2BGR_UYVY
COLOR_YUV2BGR_Y422
COLOR_YUV2BGR_YUNV
COLOR_YUV2BGR_YUY2
COLOR_YUV2BGR_YUYV
COLOR_YUV2BGR_YV12
COLOR_YUV2BGR_YVYU
COLOR_YUV2GRAY_420
COLOR_YUV2GRAY_I420
COLOR_YUV2GRAY_IYUV
COLOR_YUV2GRAY_NV12
COLOR_YUV2GRAY_NV21
COLOR_YUV2GRAY_UYNV
COLOR_YUV2GRAY_UYVY
COLOR_YUV2GRAY_Y422
COLOR_YUV2GRAY_YUNV
COLOR_YUV2GRAY_YUY2
COLOR_YUV2GRAY_YUYV
COLOR_YUV2GRAY_YV12
COLOR_YUV2GRAY_YVYU
COLOR_YUV2RGB
COLOR_YUV2RGBA_I420
COLOR_YUV2RGBA_IYUV
COLOR_YUV2RGBA_NV12
COLOR_YUV2RGBA_NV21
COLOR_YUV2RGBA_UYNV
COLOR_YUV2RGBA_UYVY
COLOR_YUV2RGBA_Y422
COLOR_YUV2RGBA_YUNV
COLOR_YUV2RGBA_YUY2
COLOR_YUV2RGBA_YUYV
COLOR_YUV2RGBA_YV12
COLOR_YUV2RGBA_YVYU
COLOR_YUV2RGB_I420
COLOR_YUV2RGB_IYUV
COLOR_YUV2RGB_NV12
COLOR_YUV2RGB_NV21
COLOR_YUV2RGB_UYNV
COLOR_YUV2RGB_UYVY
COLOR_YUV2RGB_Y422
COLOR_YUV2RGB_YUNV
COLOR_YUV2RGB_YUY2
COLOR_YUV2RGB_YUYV
COLOR_YUV2RGB_YV12
COLOR_YUV2RGB_YVYU
COLOR_YUV420P2BGR
COLOR_YUV420P2BGRA
COLOR_YUV420P2GRAY
COLOR_YUV420P2RGB
COLOR_YUV420P2RGBA
COLOR_YUV420SP2BGR
COLOR_YUV420SP2BGRA
COLOR_YUV420SP2GRAY
COLOR_YUV420SP2RGB
COLOR_YUV420SP2RGBA
I did some prototyping for Luminance conversions ( based on >>> http://en.wikipedia.org/wiki/HSL_and_HSV )
But not tested for release.
def get_YUV_V_Cr_Rec601_BRG_frame( brgFRAME ): # For the Rec. 601 primaries used in gamma-corrected sRGB, fast, VECTORISED MUL/ADD CODE
out = numpy.zeros( brgFRAME.shape[0:2] )
out += 0.615 / 255 * brgFRAME[:,:,1] # // Red # normalise to <0.0 - 1.0> before vectorised MUL/ADD, saves [usec] ... on 480x640 [px] faster goes about 2.2 [msec] instead of 5.4 [msec]
out -= 0.515 / 255 * brgFRAME[:,:,2] # // Green
out -= 0.100 / 255 * brgFRAME[:,:,0] # // Blue # normalise to <0.0 - 1.0> before vectorised MUL/ADD
return out
A:
# -*- coding: utf-8 -*-
# @File : rgb2hls.py
# @Info : @ TSMC
# @Desc :
import colorsys
import numpy as np
import scipy.misc
import tensorflow as tf
from PIL import Image
def rgb2hls(img):
""" note: elements in img is a float number less than 1.0 and greater than 0.
:param img: an numpy ndarray with shape NHWC
:return:
"""
assert len(img.shape) == 3
hue = np.zeros_like(img[:, :, 0])
luminance = np.zeros_like(img[:, :, 0])
saturation = np.zeros_like(img[:, :, 0])
for x in range(height):
for y in range(width):
r, g, b = img[x, y]
h, l, s = colorsys.rgb_to_hls(r, g, b)
hue[x, y] = h
luminance[x, y] = l
saturation[x, y] = s
return hue, luminance, saturation
def np_rgb2hls(img):
r, g, b = img[:, :, 0], img[:, :, 1], img[:, :, 2]
maxc = np.max(img, -1)
minc = np.min(img, -1)
l = (minc + maxc) / 2.0
if np.array_equal(minc, maxc):
return np.zeros_like(l), l, np.zeros_like(l)
smask = np.greater(l, 0.5).astype(np.float32)
s = (1.0 - smask) * ((maxc - minc) / (maxc + minc)) + smask * ((maxc - minc) / (2.001 - maxc - minc))
rc = (maxc - r) / (maxc - minc + 0.001)
gc = (maxc - g) / (maxc - minc + 0.001)
bc = (maxc - b) / (maxc - minc + 0.001)
rmask = np.equal(r, maxc).astype(np.float32)
gmask = np.equal(g, maxc).astype(np.float32)
rgmask = np.logical_or(rmask, gmask).astype(np.float32)
h = rmask * (bc - gc) + gmask * (2.0 + rc - bc) + (1.0 - rgmask) * (4.0 + gc - rc)
h = np.remainder(h / 6.0, 1.0)
return h, l, s
def tf_rgb2hls(img):
""" note: elements in img all in [0,1]
:param img: a tensor with shape NHWC
:return:
"""
assert img.get_shape()[-1] == 3
r, g, b = img[:, :, 0], img[:, :, 1], img[:, :, 2]
maxc = tf.reduce_max(img, -1)
minc = tf.reduce_min(img, -1)
l = (minc + maxc) / 2.0
# if tf.reduce_all(tf.equal(minc, maxc)):
# return tf.zeros_like(l), l, tf.zeros_like(l)
smask = tf.cast(tf.greater(l, 0.5), tf.float32)
s = (1.0 - smask) * ((maxc - minc) / (maxc + minc)) + smask * ((maxc - minc) / (2.001 - maxc - minc))
rc = (maxc - r) / (maxc - minc + 0.001)
gc = (maxc - g) / (maxc - minc + 0.001)
bc = (maxc - b) / (maxc - minc + 0.001)
rmask = tf.equal(r, maxc)
gmask = tf.equal(g, maxc)
rgmask = tf.cast(tf.logical_or(rmask, gmask), tf.float32)
rmask = tf.cast(rmask, tf.float32)
gmask = tf.cast(gmask, tf.float32)
h = rmask * (bc - gc) + gmask * (2.0 + rc - bc) + (1.0 - rgmask) * (4.0 + gc - rc)
h = tf.mod(h / 6.0, 1.0)
h = tf.expand_dims(h, -1)
l = tf.expand_dims(l, -1)
s = tf.expand_dims(s, -1)
x = tf.concat([tf.zeros_like(l), l, tf.zeros_like(l)], -1)
y = tf.concat([h, l, s], -1)
return tf.where(condition=tf.reduce_all(tf.equal(minc, maxc)), x=x, y=y)
if __name__ == '__main__':
"""
HLS: Hue, Luminance, Saturation
H: position in the spectrum
L: color lightness
S: color saturation
"""
avatar = Image.open("hue.jpg")
width, height = avatar.size
print("width: {}, height: {}".format(width, height))
img = np.array(avatar)
img = img / 255.0
print(img.shape)
# # hue, luminance, saturation = rgb2hls(img)
# hue, luminance, saturation = np_rgb2hls(img)
img_tensor = tf.convert_to_tensor(img, tf.float32)
hls = tf_rgb2hls(img_tensor)
h, l, s = hls[:, :, 0], hls[:, :, 1], hls[:, :, 2]
with tf.Session() as sess:
sess.run(tf.global_variables_initializer())
hue, luminance, saturation = sess.run([h, l, s])
scipy.misc.imsave("hls_h_.jpg", hue)
scipy.misc.imsave("hls_l_.jpg", luminance)
scipy.misc.imsave("hls_s_.jpg", saturation)
A:
In case someone is looking for a self-contained solution (I really didn't want to add OpenCV as a dependency), I rewrote the official python colorsys rgb_to_hls() and hls_to_rgb() functions to be usable for numpy:
import numpy as np
def rgb_to_hls(rgb_array: np.ndarray) -> np.ndarray:
"""
Expects an array of shape (X, 3), each row being RGB colours.
Returns an array of same size, each row being HLS colours.
Like `colorsys` python module, all values are between 0 and 1.
NOTE: like `colorsys`, this uses HLS rather than the more usual HSL
"""
assert rgb_array.ndim == 2
assert rgb_array.shape[1] == 3
assert np.max(rgb_array) <= 1
assert np.min(rgb_array) >= 0
r, g, b = rgb_array.T.reshape((3, -1, 1))
maxc = np.max(rgb_array, axis=1).reshape((-1, 1))
minc = np.min(rgb_array, axis=1).reshape((-1, 1))
sumc = (maxc+minc)
rangec = (maxc-minc)
with np.errstate(divide='ignore', invalid='ignore'):
rgb_c = (maxc - rgb_array) / rangec
rc, gc, bc = rgb_c.T.reshape((3, -1, 1))
h = (np.where(minc == maxc, 0, np.where(r == maxc, bc - gc, np.where(g == maxc, 2.0+rc-bc, 4.0+gc-rc)))
/ 6) % 1
l = sumc/2.0
with np.errstate(divide='ignore', invalid='ignore'):
s = np.where(minc == maxc, 0,
np.where(l < 0.5, rangec / sumc, rangec / (2.0-sumc)))
return np.concatenate((h, l, s), axis=1)
def hls_to_rgb(hls_array: np.ndarray) -> np.ndarray:
"""
Expects an array of shape (X, 3), each row being HLS colours.
Returns an array of same size, each row being RGB colours.
Like `colorsys` python module, all values are between 0 and 1.
NOTE: like `colorsys`, this uses HLS rather than the more usual HSL
"""
ONE_THIRD = 1 / 3
TWO_THIRD = 2 / 3
ONE_SIXTH = 1 / 6
def _v(m1, m2, h):
h = h % 1.0
return np.where(h < ONE_SIXTH, m1 + (m2 - m1) * h * 6,
np.where(h < .5, m2,
np.where(h < TWO_THIRD, m1 + (m2 - m1) * (TWO_THIRD - h) * 6,
m1)))
assert hls_array.ndim == 2
assert hls_array.shape[1] == 3
assert np.max(hls_array) <= 1
assert np.min(hls_array) >= 0
h, l, s = hls_array.T.reshape((3, -1, 1))
m2 = np.where(l < 0.5, l * (1 + s), l + s - (l * s))
m1 = 2 * l - m2
r = np.where(s == 0, l, _v(m1, m2, h + ONE_THIRD))
g = np.where(s == 0, l, _v(m1, m2, h))
b = np.where(s == 0, l, _v(m1, m2, h - ONE_THIRD))
return np.concatenate((r, g, b), axis=1)
def _test1():
import colorsys
rgb_array = np.array([[.5, .5, .8], [.3, .7, 1], [0, 0, 0], [1, 1, 1], [.5, .5, .5]])
hls_array = rgb_to_hls(rgb_array)
for rgb, hls in zip(rgb_array, hls_array):
assert np.all(abs(np.array(colorsys.rgb_to_hls(*rgb) - hls) < 0.001))
new_rgb_array = hls_to_rgb(hls_array)
for hls, rgb in zip(hls_array, new_rgb_array):
assert np.all(abs(np.array(colorsys.hls_to_rgb(*hls) - rgb) < 0.001))
assert np.all(abs(rgb_array - new_rgb_array) < 0.001)
print("tests part 1 done")
def _test2():
import colorsys
hls_array = np.array([[0.6456692913385826, 0.14960629921259844, 0.7480314960629921], [.3, .7, 1], [0, 0, 0], [0, 1, 0], [.5, .5, .5]])
rgb_array = hls_to_rgb(hls_array)
for hls, rgb in zip(hls_array, rgb_array):
assert np.all(abs(np.array(colorsys.hls_to_rgb(*hls) - rgb) < 0.001))
new_hls_array = rgb_to_hls(rgb_array)
for rgb, hls in zip(rgb_array, new_hls_array):
assert np.all(abs(np.array(colorsys.rgb_to_hls(*rgb) - hls) < 0.001))
assert np.all(abs(hls_array - new_hls_array) < 0.001)
print("All tests done")
def _test():
_test1()
_test2()
if __name__ == "__main__":
_test()
(see gist)
(off topic: converting the other functions in the same way is actually a great training for someone wanting to get their hands dirty with numpy (or other SIMD / GPU) programming). Let me know if you do so :)
edit: rgb_to_hsv and hsv_to_rgb now also in the gist.
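On the question's side note "Any trick to convert HSV to HSL?": hue is identical in both models, so only the other two channels need remapping. A small numpy sketch of the textbook HSV-to-HSL remap (not taken from any answer above; all values assumed in [0, 1]):

```python
import numpy as np

def hsv_to_hsl(hsv):
    """Convert an array of HSV triples (values in [0, 1]) to HSL triples.

    Hue passes through unchanged; the remap is:
        L   = V * (1 - S_v / 2)
        S_l = (V - L) / min(L, 1 - L),  defined as 0 when L is 0 or 1
    """
    h, s_v, v = hsv[..., 0], hsv[..., 1], hsv[..., 2]
    l = v * (1.0 - s_v / 2.0)
    denom = np.minimum(l, 1.0 - l)
    # Guard the division: where denom == 0 (pure black or white), saturation is 0.
    s_l = np.where(denom == 0, 0.0, (v - l) / np.where(denom == 0, 1.0, denom))
    return np.stack([h, l, s_l], axis=-1)
```

Note the channel order here is H, L, S to match what `colorsys.rgb_to_hls` produces.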
|
Convert RGB array to HSL
|
A disclaimer first, I'm not very skilled in Python, you guys have my admiration.
My problem:
I need to generate 10k+ images from templates (128px by 128px) with various hues and luminances.
I load the images and turn them into arrays
image = Image.open(dir + "/" + file).convert('RGBA')
arr=np.array(np.asarray(image).astype('float'))
From what I can understand, handling numpy arrays in this fashion is much faster than looping over every pixel and using colorsys.
Now, I've stumbled upon a couple of functions to convert RGB to HSV.
This helped me generate my images with different hues, but I also need to play with the brightness so that some can be black and others white.
def rgb_to_hsv(rgb):
# Translated from source of colorsys.rgb_to_hsv
hsv=np.empty_like(rgb)
hsv[...,3:]=rgb[...,3:]
r,g,b=rgb[...,0],rgb[...,1],rgb[...,2]
maxc = np.max(rgb[...,:2],axis=-1)
minc = np.min(rgb[...,:2],axis=-1)
hsv[...,2] = maxc
hsv[...,1] = (maxc-minc) / maxc
rc = (maxc-r) / (maxc-minc)
gc = (maxc-g) / (maxc-minc)
bc = (maxc-b) / (maxc-minc)
hsv[...,0] = np.select([r==maxc,g==maxc],[bc-gc,2.0+rc-bc],default=4.0+gc-rc)
hsv[...,0] = (hsv[...,0]/6.0) % 1.0
idx=(minc == maxc)
hsv[...,0][idx]=0.0
hsv[...,1][idx]=0.0
return hsv
def hsv_to_rgb(hsv):
# Translated from source of colorsys.hsv_to_rgb
rgb=np.empty_like(hsv)
rgb[...,3:]=hsv[...,3:]
h,s,v=hsv[...,0],hsv[...,1],hsv[...,2]
i = (h*6.0).astype('uint8')
f = (h*6.0) - i
p = v*(1.0 - s)
q = v*(1.0 - s*f)
t = v*(1.0 - s*(1.0-f))
i = i%6
conditions=[s==0.0,i==1,i==2,i==3,i==4,i==5]
rgb[...,0]=np.select(conditions,[v,q,p,p,t,v],default=v)
rgb[...,1]=np.select(conditions,[v,v,v,q,p,p],default=t)
rgb[...,2]=np.select(conditions,[v,p,t,v,v,q],default=p)
return rgb
How easy is it to modify these functions to convert to and from HSL?
Any trick to convert HSV to HSL?
Any info you can give me is greatly appreciated, thanks!
|
[
"Yes, numpy, namely the vectorised code, can speed-up color conversions.\nThe more, for massive production of 10k+ bitmaps, you may want to re-use a ready made professional conversion, or sub-class it, if it is not exactly matching your preferred Luminance model.\na Computer Vision library OpenCV, currently available for python as a cv2 module, can take care of the colorsystem conversion without any additional coding just with:\na ready-made conversion one-liner\nout = cv2.cvtColor( anInputFRAME, cv2.COLOR_YUV2BGR ) # a bitmap conversion\n\nA list of some color-systems available in cv2 ( you may notice RGB to be referred to as BRG due to OpenCV convention of a different ordering of an image's Blue-Red-Green color-planes ), \n( symmetry applies COLOR_YCR_CB2BGR <-|-> COLOR_BGR2YCR_CB not all pairs shown )\n>>> import cv2\n>>> for key in dir( cv2 ): # show all ready conversions\n... if key[:7] == 'COLOR_Y':\n... print key\n\nCOLOR_YCR_CB2BGR\nCOLOR_YCR_CB2RGB\nCOLOR_YUV2BGR\nCOLOR_YUV2BGRA_I420\nCOLOR_YUV2BGRA_IYUV\nCOLOR_YUV2BGRA_NV12\nCOLOR_YUV2BGRA_NV21\nCOLOR_YUV2BGRA_UYNV\nCOLOR_YUV2BGRA_UYVY\nCOLOR_YUV2BGRA_Y422\nCOLOR_YUV2BGRA_YUNV\nCOLOR_YUV2BGRA_YUY2\nCOLOR_YUV2BGRA_YUYV\nCOLOR_YUV2BGRA_YV12\nCOLOR_YUV2BGRA_YVYU\nCOLOR_YUV2BGR_I420\nCOLOR_YUV2BGR_IYUV\nCOLOR_YUV2BGR_NV12\nCOLOR_YUV2BGR_NV21\nCOLOR_YUV2BGR_UYNV\nCOLOR_YUV2BGR_UYVY\nCOLOR_YUV2BGR_Y422\nCOLOR_YUV2BGR_YUNV\nCOLOR_YUV2BGR_YUY2\nCOLOR_YUV2BGR_YUYV\nCOLOR_YUV2BGR_YV12\nCOLOR_YUV2BGR_YVYU\nCOLOR_YUV2GRAY_420\nCOLOR_YUV2GRAY_I420\nCOLOR_YUV2GRAY_IYUV\nCOLOR_YUV2GRAY_NV12\nCOLOR_YUV2GRAY_NV21\nCOLOR_YUV2GRAY_UYNV\nCOLOR_YUV2GRAY_UYVY\nCOLOR_YUV2GRAY_Y422\nCOLOR_YUV2GRAY_YUNV\nCOLOR_YUV2GRAY_YUY2\nCOLOR_YUV2GRAY_YUYV\nCOLOR_YUV2GRAY_YV12\nCOLOR_YUV2GRAY_YVYU\nCOLOR_YUV2RGB\nCOLOR_YUV2RGBA_I420\nCOLOR_YUV2RGBA_IYUV\nCOLOR_YUV2RGBA_NV12\nCOLOR_YUV2RGBA_NV21\nCOLOR_YUV2RGBA_UYNV\nCOLOR_YUV2RGBA_UYVY\nCOLOR_YUV2RGBA_Y422\nCOLOR_YUV2RGBA_YUNV\nCOLOR_YUV2RGBA_YUY2\nCOLOR_YUV2RGBA_YUYV\nCOLOR_YUV2RGBA_YV12\nCO
LOR_YUV2RGBA_YVYU\nCOLOR_YUV2RGB_I420\nCOLOR_YUV2RGB_IYUV\nCOLOR_YUV2RGB_NV12\nCOLOR_YUV2RGB_NV21\nCOLOR_YUV2RGB_UYNV\nCOLOR_YUV2RGB_UYVY\nCOLOR_YUV2RGB_Y422\nCOLOR_YUV2RGB_YUNV\nCOLOR_YUV2RGB_YUY2\nCOLOR_YUV2RGB_YUYV\nCOLOR_YUV2RGB_YV12\nCOLOR_YUV2RGB_YVYU\nCOLOR_YUV420P2BGR\nCOLOR_YUV420P2BGRA\nCOLOR_YUV420P2GRAY\nCOLOR_YUV420P2RGB\nCOLOR_YUV420P2RGBA\nCOLOR_YUV420SP2BGR\nCOLOR_YUV420SP2BGRA\nCOLOR_YUV420SP2GRAY\nCOLOR_YUV420SP2RGB\nCOLOR_YUV420SP2RGBA\n\nI did some prototyping for Luminance conversions ( based on >>> http://en.wikipedia.org/wiki/HSL_and_HSV )\nBut not tested for release.\ndef get_YUV_V_Cr_Rec601_BRG_frame( brgFRAME ): # For the Rec. 601 primaries used in gamma-corrected sRGB, fast, VECTORISED MUL/ADD CODE\n out = numpy.zeros( brgFRAME.shape[0:2] )\n out += 0.615 / 255 * brgFRAME[:,:,1] # // Red # normalise to <0.0 - 1.0> before vectorised MUL/ADD, saves [usec] ... on 480x640 [px] faster goes about 2.2 [msec] instead of 5.4 [msec]\n out -= 0.515 / 255 * brgFRAME[:,:,2] # // Green\n out -= 0.100 / 255 * brgFRAME[:,:,0] # // Blue # normalise to <0.0 - 1.0> before vectorised MUL/ADD\n return out\n\n",
"# -*- coding: utf-8 -*-\n# @File : rgb2hls.py\n# @Info : @ TSMC\n# @Desc :\n\n\nimport colorsys\n\nimport numpy as np\nimport scipy.misc\nimport tensorflow as tf\nfrom PIL import Image\n\n\ndef rgb2hls(img):\n \"\"\" note: elements in img is a float number less than 1.0 and greater than 0.\n :param img: an numpy ndarray with shape NHWC\n :return:\n \"\"\"\n assert len(img.shape) == 3\n hue = np.zeros_like(img[:, :, 0])\n luminance = np.zeros_like(img[:, :, 0])\n saturation = np.zeros_like(img[:, :, 0])\n for x in range(height):\n for y in range(width):\n r, g, b = img[x, y]\n h, l, s = colorsys.rgb_to_hls(r, g, b)\n hue[x, y] = h\n luminance[x, y] = l\n saturation[x, y] = s\n return hue, luminance, saturation\n\n\ndef np_rgb2hls(img):\n r, g, b = img[:, :, 0], img[:, :, 1], img[:, :, 2]\n\n maxc = np.max(img, -1)\n minc = np.min(img, -1)\n l = (minc + maxc) / 2.0\n if np.array_equal(minc, maxc):\n return np.zeros_like(l), l, np.zeros_like(l)\n smask = np.greater(l, 0.5).astype(np.float32)\n\n s = (1.0 - smask) * ((maxc - minc) / (maxc + minc)) + smask * ((maxc - minc) / (2.001 - maxc - minc))\n rc = (maxc - r) / (maxc - minc + 0.001)\n gc = (maxc - g) / (maxc - minc + 0.001)\n bc = (maxc - b) / (maxc - minc + 0.001)\n\n rmask = np.equal(r, maxc).astype(np.float32)\n gmask = np.equal(g, maxc).astype(np.float32)\n rgmask = np.logical_or(rmask, gmask).astype(np.float32)\n\n h = rmask * (bc - gc) + gmask * (2.0 + rc - bc) + (1.0 - rgmask) * (4.0 + gc - rc)\n h = np.remainder(h / 6.0, 1.0)\n return h, l, s\n\n\ndef tf_rgb2hls(img):\n \"\"\" note: elements in img all in [0,1]\n :param img: a tensor with shape NHWC\n :return:\n \"\"\"\n assert img.get_shape()[-1] == 3\n r, g, b = img[:, :, 0], img[:, :, 1], img[:, :, 2]\n maxc = tf.reduce_max(img, -1)\n minc = tf.reduce_min(img, -1)\n\n l = (minc + maxc) / 2.0\n\n # if tf.reduce_all(tf.equal(minc, maxc)):\n # return tf.zeros_like(l), l, tf.zeros_like(l)\n smask = tf.cast(tf.greater(l, 0.5), tf.float32)\n\n s = (1.0 - 
smask) * ((maxc - minc) / (maxc + minc)) + smask * ((maxc - minc) / (2.001 - maxc - minc))\n rc = (maxc - r) / (maxc - minc + 0.001)\n gc = (maxc - g) / (maxc - minc + 0.001)\n bc = (maxc - b) / (maxc - minc + 0.001)\n\n rmask = tf.equal(r, maxc)\n gmask = tf.equal(g, maxc)\n rgmask = tf.cast(tf.logical_or(rmask, gmask), tf.float32)\n rmask = tf.cast(rmask, tf.float32)\n gmask = tf.cast(gmask, tf.float32)\n\n h = rmask * (bc - gc) + gmask * (2.0 + rc - bc) + (1.0 - rgmask) * (4.0 + gc - rc)\n h = tf.mod(h / 6.0, 1.0)\n\n h = tf.expand_dims(h, -1)\n l = tf.expand_dims(l, -1)\n s = tf.expand_dims(s, -1)\n\n x = tf.concat([tf.zeros_like(l), l, tf.zeros_like(l)], -1)\n y = tf.concat([h, l, s], -1)\n\n return tf.where(condition=tf.reduce_all(tf.equal(minc, maxc)), x=x, y=y)\n\n\nif __name__ == '__main__':\n \"\"\"\n HLS: Hue, Luminance, Saturation\n H: position in the spectrum\n L: color lightness\n S: color saturation\n \"\"\"\n avatar = Image.open(\"hue.jpg\")\n width, height = avatar.size\n print(\"width: {}, height: {}\".format(width, height))\n img = np.array(avatar)\n img = img / 255.0\n print(img.shape)\n\n # # hue, luminance, saturation = rgb2hls(img)\n # hue, luminance, saturation = np_rgb2hls(img)\n\n img_tensor = tf.convert_to_tensor(img, tf.float32)\n hls = tf_rgb2hls(img_tensor)\n h, l, s = hls[:, :, 0], hls[:, :, 1], hls[:, :, 2]\n\n with tf.Session() as sess:\n sess.run(tf.global_variables_initializer())\n hue, luminance, saturation = sess.run([h, l, s])\n scipy.misc.imsave(\"hls_h_.jpg\", hue)\n scipy.misc.imsave(\"hls_l_.jpg\", luminance)\n scipy.misc.imsave(\"hls_s_.jpg\", saturation)\n\n",
"In case someone is looking for a self-contained solution (I really didn't want to add OpenCV as a dependency), I rewrote the official python colorsys rgb_to_hls() and hls_to_rgb() functions to be usable for numpy:\nimport numpy as np\n\ndef rgb_to_hls(rgb_array: np.ndarray) -> np.ndarray:\n \"\"\"\n Expects an array of shape (X, 3), each row being RGB colours.\n Returns an array of same size, each row being HLS colours.\n Like `colorsys` python module, all values are between 0 and 1.\n\n NOTE: like `colorsys`, this uses HLS rather than the more usual HSL\n \"\"\"\n assert rgb_array.ndim == 2\n assert rgb_array.shape[1] == 3\n assert np.max(rgb_array) <= 1\n assert np.min(rgb_array) >= 0\n\n r, g, b = rgb_array.T.reshape((3, -1, 1))\n maxc = np.max(rgb_array, axis=1).reshape((-1, 1))\n minc = np.min(rgb_array, axis=1).reshape((-1, 1))\n\n sumc = (maxc+minc)\n rangec = (maxc-minc)\n\n with np.errstate(divide='ignore', invalid='ignore'):\n rgb_c = (maxc - rgb_array) / rangec\n rc, gc, bc = rgb_c.T.reshape((3, -1, 1))\n\n h = (np.where(minc == maxc, 0, np.where(r == maxc, bc - gc, np.where(g == maxc, 2.0+rc-bc, 4.0+gc-rc)))\n / 6) % 1\n l = sumc/2.0\n with np.errstate(divide='ignore', invalid='ignore'):\n s = np.where(minc == maxc, 0,\n np.where(l < 0.5, rangec / sumc, rangec / (2.0-sumc)))\n\n return np.concatenate((h, l, s), axis=1)\n\n\ndef hls_to_rgb(hls_array: np.ndarray) -> np.ndarray:\n \"\"\"\n Expects an array of shape (X, 3), each row being HLS colours.\n Returns an array of same size, each row being RGB colours.\n Like `colorsys` python module, all values are between 0 and 1.\n\n NOTE: like `colorsys`, this uses HLS rather than the more usual HSL\n \"\"\"\n ONE_THIRD = 1 / 3\n TWO_THIRD = 2 / 3\n ONE_SIXTH = 1 / 6\n\n def _v(m1, m2, h):\n h = h % 1.0\n return np.where(h < ONE_SIXTH, m1 + (m2 - m1) * h * 6,\n np.where(h < .5, m2,\n np.where(h < TWO_THIRD, m1 + (m2 - m1) * (TWO_THIRD - h) * 6,\n m1)))\n\n\n assert hls_array.ndim == 2\n assert 
hls_array.shape[1] == 3\n assert np.max(hls_array) <= 1\n assert np.min(hls_array) >= 0\n\n h, l, s = hls_array.T.reshape((3, -1, 1))\n m2 = np.where(l < 0.5, l * (1 + s), l + s - (l * s))\n m1 = 2 * l - m2\n\n r = np.where(s == 0, l, _v(m1, m2, h + ONE_THIRD))\n g = np.where(s == 0, l, _v(m1, m2, h))\n b = np.where(s == 0, l, _v(m1, m2, h - ONE_THIRD))\n\n return np.concatenate((r, g, b), axis=1)\n\n\ndef _test1():\n import colorsys\n rgb_array = np.array([[.5, .5, .8], [.3, .7, 1], [0, 0, 0], [1, 1, 1], [.5, .5, .5]])\n hls_array = rgb_to_hls(rgb_array)\n for rgb, hls in zip(rgb_array, hls_array):\n assert np.all(abs(np.array(colorsys.rgb_to_hls(*rgb) - hls) < 0.001))\n new_rgb_array = hls_to_rgb(hls_array)\n for hls, rgb in zip(hls_array, new_rgb_array):\n assert np.all(abs(np.array(colorsys.hls_to_rgb(*hls) - rgb) < 0.001))\n assert np.all(abs(rgb_array - new_rgb_array) < 0.001)\n print(\"tests part 1 done\")\n\ndef _test2():\n import colorsys\n hls_array = np.array([[0.6456692913385826, 0.14960629921259844, 0.7480314960629921], [.3, .7, 1], [0, 0, 0], [0, 1, 0], [.5, .5, .5]])\n rgb_array = hls_to_rgb(hls_array)\n for hls, rgb in zip(hls_array, rgb_array):\n assert np.all(abs(np.array(colorsys.hls_to_rgb(*hls) - rgb) < 0.001))\n new_hls_array = rgb_to_hls(rgb_array)\n for rgb, hls in zip(rgb_array, new_hls_array):\n assert np.all(abs(np.array(colorsys.rgb_to_hls(*rgb) - hls) < 0.001))\n assert np.all(abs(hls_array - new_hls_array) < 0.001)\n print(\"All tests done\")\n\ndef _test():\n _test1()\n _test2()\n\nif __name__ == \"__main__\":\n _test()\n\n(see gist)\n(off topic: converting the other functions in the same way is actually a great training for someone wanting to get their hands dirty with numpy (or other SIMD / GPU) programming). Let me know if you do so :)\n\nedit: rgb_to_hsv and hsv_to_rgb now also in the gist.\n"
] |
[
1,
0,
0
] |
[] |
[] |
[
"hsl",
"numpy",
"python",
"rgb"
] |
stackoverflow_0026292114_hsl_numpy_python_rgb.txt
|
Q:
Nim: expression 'a' is of type 'int' and has to be used (or discarded)
I'm trying to rewrite the cryptomath code from here, and I'm getting this when I try to compile to JS:
Hint: used config file '/home/*******/.choosenim/toolchains/nim-1.6.8/config/nim.cfg' [Conf]
Hint: used config file '/home/*******/.choosenim/toolchains/nim-1.6.8/config/config.nims' [Conf]
...........................................................
/home/*******/nim/cryptomath.nim(6, 9) Error: expression 'a' is of type 'int' and has to be used (or discarded)
This is my code:
import std/random
import std/math
randomize()
proc gcd*(a: int, b: int): int =
while a != 0:
a, b = floorMod(b,a), a # fails here apparently
return (b+a)-a
proc find_mod_inverse*(a: int, m: int): int =
if gcd(a,m) != 1:
return -1
var
u1 = 1
u2 = 0
u3 = a
v1 = 0
v2 = 1
v3 = m
q = -1
while v3 != 0:
q = floorDiv(u3,v3)
v1 = (u1 - q * v1)
v2 = (u2 - q * v2)
v3 = (u3 - q * v3)
u1 = v1
u2 = v2
u3 = v3
return floorMod(u1,m)
I tried adding this, but it did nothing
discard a
before the end of the function
A:
Your code has two issues:
Function arguments in Nim are immutable by default, so if you want to overwrite them locally, you need to shadow them or use var
Nim syntax for multiple variable assignment is different from Python and is done with tuple-like syntax
Fixed code would look like this:
import std/random
import std/math
randomize()
proc gcd*(a: int, b: int): int =
var (a, b) = (a, b)
while a != 0:
(a, b) = (floorMod(b,a), a)
return (b+a)-a
proc find_mod_inverse*(a: int, m: int): int =
if gcd(a,m) != 1:
return -1
var
u1 = 1
u2 = 0
u3 = a
v1 = 0
v2 = 1
v3 = m
q = -1
while v3 != 0:
q = floorDiv(u3,v3)
v1 = (u1 - q * v1)
v2 = (u2 - q * v2)
v3 = (u3 - q * v3)
u1 = v1
u2 = v2
u3 = v3
return floorMod(u1,m)
The compiler error is unclear, I agree.
Also, just a tip - keep in mind that standard Nim integers are limited by the architecture's native integer size, so if you want to operate on big numbers, you need to use a separate library.
|
Nim: expression 'a' is of type 'int' and has to be used (or discarded)
|
I'm trying to rewrite the cryptomath code from here, and I'm getting this when I try to compile to JS:
Hint: used config file '/home/*******/.choosenim/toolchains/nim-1.6.8/config/nim.cfg' [Conf]
Hint: used config file '/home/*******/.choosenim/toolchains/nim-1.6.8/config/config.nims' [Conf]
...........................................................
/home/*******/nim/cryptomath.nim(6, 9) Error: expression 'a' is of type 'int' and has to be used (or discarded)
This is my code:
import std/random
import std/math
randomize()
proc gcd*(a: int, b: int): int =
  while a != 0:
    a, b = floorMod(b,a), a # fails here apparently
  return (b+a)-a
proc find_mod_inverse*(a: int, m: int): int =
  if gcd(a,m) != 1:
    return -1
  var
    u1 = 1
    u2 = 0
    u3 = a
    v1 = 0
    v2 = 1
    v3 = m
    q = -1
  while v3 != 0:
    q = floorDiv(u3,v3)
    v1 = (u1 - q * v1)
    v2 = (u2 - q * v2)
    v3 = (u3 - q * v3)
    u1 = v1
    u2 = v2
    u3 = v3
  return floorMod(u1,m)
I tried adding this, but it did nothing
discard a
before the end of the function
|
[
"Your code has two issues:\n\nFunction arguments in Nim are immutable by default, so if you want to overwrite them locally, you need to shadow them or use var\n\nNim syntax for multiple variable assignment is different from Python and is done with tuple-like syntax\n\n\nFixed code would look like this:\nimport std/random\nimport std/math\nrandomize()\nproc gcd*(a: int, b: int): int = \n var (a, b) = (a, b)\n while a != 0:\n (a, b) = (floorMod(b,a), a)\n return (b+a)-a\n\nproc find_mod_inverse*(a: int, m: int): int =\n if gcd(a,m) != 1:\n return -1\n var \n u1 = 1\n u2 = 0\n u3 = a\n v1 = 0\n v2 = 1\n v3 = m\n q = -1\n\n while v3 != 0:\n q = floorDiv(u3,v3)\n v1 = (u1 - q * v1)\n v2 = (u2 - q * v2)\n v3 = (u3 - q * v3)\n u1 = v1\n u2 = v2\n u3 = v3\n return floorMod(u1,m) \n\nThe compiler error is unclear, I agree.\nAlso, just a tip - keep in mind that standard Nim integers are limited by the architecture's native integer size, so if you want to operate on big numbers, you need to use a separate library.\n"
] |
[
1
] |
[] |
[] |
[
"nim",
"nim_lang"
] |
stackoverflow_0074666275_nim_nim_lang.txt
|
Q:
How to make early stopping in image classification pytorch
I'm new to PyTorch and machine learning. I'm following this tutorial, https://www.learnopencv.com/image-classification-using-transfer-learning-in-pytorch/, with my own custom dataset. I run into the same problem described in the tutorial, but I don't know how to implement early stopping in PyTorch. If there is a better approach than writing the early-stopping logic myself, please tell me.
A:
This is what I did in each epoch
val_loss += loss
val_loss = val_loss / len(trainloader)
if val_loss < min_val_loss:
    # Saving the model
    if min_loss > loss.item():
        min_loss = loss.item()
        best_model = copy.deepcopy(loaded_model.state_dict())
        print('Min loss %0.2f' % min_loss)
    epochs_no_improve = 0
    min_val_loss = val_loss
else:
    epochs_no_improve += 1
    # Check early stopping condition
    if epochs_no_improve == n_epochs_stop:
        print('Early stopping!')
        loaded_model.load_state_dict(best_model)
I don't know how correct it is (I took most parts of this code from a post on another website, but forgot where, so I can't put the reference link; I have just modified it a bit). Hope you find it useful; in case I'm wrong, kindly point out the mistake. Thank you.
A:
Try the code below.
    # Check early stopping condition
    if epochs_no_improve == n_epochs_stop:
        print('Early stopping!')
        early_stop = True
        break
    else:
        continue
    break
if early_stop:
    print("Stopped")
    break
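The fragment above only works inside a pair of nested loops; as a self-contained illustration of that break-out idiom (the loop bounds and the counter update here are stand-ins for real training code):

```python
# Break out of a nested training loop (epochs x batches) with a flag --
# the for/else/continue/break pattern the fragment above relies on.
n_epochs_stop = 2
epochs_no_improve = 0
early_stop = False

for epoch in range(10):
    for batch in range(3):
        epochs_no_improve += 1  # stand-in for "validation did not improve"
        if epochs_no_improve == n_epochs_stop:
            print('Early stopping!')
            early_stop = True
            break            # leaves the inner (batch) loop only
    else:
        continue             # inner loop finished normally -> next epoch
    break                    # inner loop was broken -> leave the outer loop too

if early_stop:
    print("Stopped")
```

The `for ... else: continue` pair is what propagates the inner `break` to the outer loop.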
A:
The idea of early stopping is to avoid overfitting by stopping the training process if there is no sign of improvement upon a monitored quantity, e.g. validation loss stops decreasing after a few iterations. A minimal implementation of early stopping needs 3 components:
best_score variable to store the best value of validation loss
counter variable to keep track of the number of iteration running
patience variable defines the number of epochs we allow training to continue without improvement upon the validation loss. If the counter exceeds this, we stop the training process.
A pseudocode looks like this
# Define best_score, counter, and patience for early stopping:
best_score = None
counter = 0
patience = 10
path = "./checkpoints"  # user-defined path to save model

# Training loop:
for epoch in range(num_epochs):
    # Compute training loss
    loss = model(features, labels, train_mask)

    # Compute validation loss
    val_loss = evaluate(model, features, labels, val_mask)

    if best_score is None:
        best_score = val_loss
    else:
        # Check if val_loss improves or not.
        if val_loss < best_score:
            # val_loss improves, we update the latest best_score,
            # and save the current model
            best_score = val_loss
            torch.save({'state_dict': model.state_dict()}, path)
        else:
            # val_loss does not improve, we increase the counter,
            # stop training if it exceeds the amount of patience
            counter += 1
            if counter >= patience:
                break

# Load best model
print('loading model before testing.')
model_checkpoint = torch.load(path)
model.load_state_dict(model_checkpoint['state_dict'])
acc = evaluate_test(model, features, labels, test_mask)
I've implemented a generic early-stopping class for PyTorch to use with some of my projects. It allows you to select any validation quantity of interest (loss, accuracy, etc.). If you prefer a fancier early stopping, feel free to check it out in the repo early-stopping. There's an example notebook for reference too.
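A minimal, framework-independent sketch of such an early-stopping class (an illustration of the idea, not the actual code from the linked repo):

```python
class EarlyStopping:
    """Minimal early-stopping helper: stop when the monitored value
    has not improved by at least `min_delta` for `patience` checks."""

    def __init__(self, patience=10, min_delta=0.0):
        self.patience = patience
        self.min_delta = min_delta
        self.best = None
        self.counter = 0
        self.should_stop = False

    def step(self, value):
        # Call once per epoch with the monitored quantity (e.g. val loss);
        # returns True when training should stop.
        if self.best is None or value < self.best - self.min_delta:
            self.best = value   # improvement: remember it and reset the counter
            self.counter = 0
        else:
            self.counter += 1   # no improvement this epoch
            if self.counter >= self.patience:
                self.should_stop = True
        return self.should_stop

stopper = EarlyStopping(patience=3)
for epoch, val_loss in enumerate([1.0, 0.8, 0.79, 0.79, 0.80, 0.81, 0.82]):
    if stopper.step(val_loss):
        break  # stops at the 0.81 entry: three epochs without improvement
```

Because it only sees floats, the same class works unchanged with PyTorch, TensorFlow, or plain NumPy training loops.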
A:
One way to implement early stopping in PyTorch is to use a callback function that is called at the end of each epoch. This function can check the validation loss and stop training if the loss has not improved for a certain number of epochs.
Here is an example of how this could be implemented:
Define a function to check if the validation loss has improved:
def check_validation_loss(model, best_loss, current_epoch):
    # Calculate the validation loss
    val_loss = calculate_validation_loss(model)
    # If the validation loss has not improved for 3 epochs, stop training
    if current_epoch - best_loss['epoch'] >= 3:
        print('Stopping training, validation loss has not improved for 3 epochs')
        return True
    # If the validation loss is better than the best loss, update the best loss
    if val_loss < best_loss['loss']:
        best_loss['loss'] = val_loss
        best_loss['epoch'] = current_epoch
    return False
Define a function to calculate the validation loss:
def calculate_validation_loss(model):
    # TODO: Calculate the validation loss
    ...
Define the training loop:
best_loss = {'loss': float('inf'), 'epoch': 0}
for epoch in range(1, num_epochs + 1):
    # Train the model for one epoch
    train_model(model, epoch)
    # Check if we should stop training
    if check_validation_loss(model, best_loss, epoch):
        break
This code uses a dictionary to track the best validation loss and the epoch when it occurred. The check_validation_loss function calculates the validation loss, compares it to the best loss, and returns True if the training should be stopped.
Note that the calculate_validation_loss function is not implemented in this code, so you would need to add your own implementation for this. The train_model function is also not implemented, but this could be replaced with your own training code.
Alternatively, instead of implementing your own early stopping, you could use existing tools: torch.optim.lr_scheduler.ReduceLROnPlateau reduces the learning rate when a monitored metric stops improving (which often delays the need to stop), and libraries such as PyTorch Lightning and Ignite ship ready-made EarlyStopping callbacks (core PyTorch itself does not include one). These can be used in a similar way to the above code, but provide more flexibility and options for controlling the early-stopping behavior.
|
How to make early stopping in image classification pytorch
|
I'm new to PyTorch and machine learning. I'm following this tutorial, https://www.learnopencv.com/image-classification-using-transfer-learning-in-pytorch/, with my own custom dataset. I run into the same problem described in the tutorial, but I don't know how to implement early stopping in PyTorch. If there is a better approach than writing the early-stopping logic myself, please tell me.
|
[
"This is what I did in each epoch\nval_loss += loss\nval_loss = val_loss / len(trainloader)\nif val_loss < min_val_loss:\n #Saving the model\n if min_loss > loss.item():\n min_loss = loss.item()\n best_model = copy.deepcopy(loaded_model.state_dict())\n print('Min loss %0.2f' % min_loss)\n epochs_no_improve = 0\n min_val_loss = val_loss\n\nelse:\n epochs_no_improve += 1\n # Check early stopping condition\n if epochs_no_improve == n_epochs_stop:\n print('Early stopping!' )\n loaded_model.load_state_dict(best_model)\n\nDonno how correct it is (I took most parts of this code from a post on another website, but forgot where, so I can't put the reference link. I have just modified it a bit), hope you find it useful, in case I'm wrong, kindly point out the mistake. Thank you\n",
"Try with below code.\n # Check early stopping condition\n if epochs_no_improve == n_epochs_stop:\n print('Early stopping!' )\n early_stop = True\n break\n else:\n continue\n break\nif early_stop:\n print(\"Stopped\")\n break\n\n",
"The idea of early stopping is to avoid overfitting by stopping the training process if there is no sign of improvement upon a monitored quantity, e.g. validation loss stops decreasing after a few iterations. A minimal implementation of early stopping needs 3 components:\n\nbest_score variable to store the best value of validation loss\ncounter variable to keep track of the number of iteration running\npatience variable defines the number of epochs allows to continue training without improvement upon the validation loss. If the counter exceeds this, we stop the training process.\n\nA pseudocode looks like this\n# Define best_score, counter, and patience for early stopping:\nbest_score = None\ncounter = 0\npatience = 10\npath = ./checkpoints # user_defined path to save model\n\n# Training loop:\nfor epoch in range(num_epochs):\n # Compute training loss\n loss = model(features,labels,train_mask)\n \n # Compute validation loss\n val_loss = evaluate(model, features, labels, val_mask)\n \n if best_score is None:\n best_score = val_loss\n else:\n # Check if val_loss improves or not.\n if val_loss < best_score:\n # val_loss improves, we update the latest best_score, \n # and save the current model\n best_score = val_loss\n torch.save({'state_dict':model.state_dict()}, path)\n else:\n # val_loss does not improve, we increase the counter, \n # stop training if it exceeds the amount of patience\n counter += 1\n if counter >= patience:\n break\n\n# Load best model \nprint('loading model before testing.')\nmodel_checkpoint = torch.load(path)\n\nmodel.load_state_dict(model_checkpoint['state_dict'])\n\nacc = evaluate_test(model, features, labels, test_mask) \n\nI've implemented an generic early stopping class for Pytorch to use with my some of projects. It allows you to select any validation quantity of interest (loss, accuracy, etc.). If you prefer a fancier early stopping then feel free to check it out in the repo early-stopping. 
There's an example notebook for reference too\n",
"One way to implement early stopping in PyTorch is to use a callback function that is called at the end of each epoch. This function can check the validation loss and stop training if the loss has not improved for a certain number of epochs.\nHere is an example of how this could be implemented:\nDefine a function to check if the validation loss has improved\ndef check_validation_loss(model, best_loss, current_epoch):\nCalculate the validation loss\nval_loss = calculate_validation_loss(model)\n# If the validation loss has not improved for 3 epochs, stop training\nif current_epoch - best_loss['epoch'] >= 3:\n print('Stopping training, validation loss has not improved for 3 epochs')\n return True\n\n# If the validation loss is better than the best loss, update the best loss\nif val_loss < best_loss['loss']:\n best_loss['loss'] = val_loss\n best_loss['epoch'] = current_epoch\n\nreturn False\n\n\nDefine a function to calculate the validation loss\ndef calculate_validation_loss(model):\nTODO: Calculate the validation loss\nDefine the training loop\nbest_loss = {'loss': float('inf'), 'epoch': 0}\n\nfor epoch in range(1, num_epochs + 1):\n\nTrain the model for one epoch\ntrain_model(model, epoch)\n# Check if we should stop training\nif check_validation_loss(model, best_loss, epoch):\n break\n\n\nThis code uses a dictionary to track the best validation loss and the epoch when it occurred. The check_validation_loss function calculates the validation loss, compares it to the best loss, and returns True if the training should be stopped.\nNote that the calculate_validation_loss function is not implemented in this code, so you would need to add your own implementation for this. 
The train_model function is also not implemented, but this could be replaced with your own training code.\nAlternatively, instead of implementing your own early stopping, you could use one of the existing early stopping implementations in PyTorch, such as torch.optim.lr_scheduler.ReduceLROnPlateau or torch.utils.callbacks.EarlyStopping. These can be used in a similar way to the above code, but provide more flexibility and options for controlling the early stopping behavior.\n"
] |
[
3,
0,
0,
0
] |
[] |
[] |
[
"early_stopping",
"python",
"pytorch"
] |
stackoverflow_0060200088_early_stopping_python_pytorch.txt
|
Q:
What is the most appropriate way to build forms with validation in React Native?
Currently in React, we have several libraries to implement forms. But, I don't know if in React Native it is appropriate to use libraries or it is better to implement the forms completely.
I have seen options like Formik and React Hook Form
A:
In my opinion it is good practice to use a package when a form has multiple input fields, in both React Native and React-JS.
I don't know about other packages, but in my professional career I have used Formik a lot in React and React Native projects.
Formik is a free, open-source, lightweight library for ReactJS or React Native, and it addresses three key pain points of form creation:
How the form state is manipulated.
How form validation and error messages are handled.
How form submission is handled.
So I would recommend you use Formik.
If you have any queries, feel free to ask.
Happy coding!!
|
What is the most appropriate way to build forms with validation in React Native?
|
Currently in React, we have several libraries to implement forms. But, I don't know if in React Native it is appropriate to use libraries or it is better to implement the forms completely.
I have seen options like Formik and React Hook Form
|
[
"It would really be a good practice according to me to use the package if the form has multiple Input Fields in React-Native and React-JS.\nI don't know about other packages but yes In my Professional Career I have used Formik a lot for React and React-Native Projects.\nAs Formik is a free and open source, lightweight library for ReactJS or React Native also it addresses three key pain points of form creation:\nHow the form state is manipulated.\nHow to form validation and error messages are handled.\nHow to form submission is handled.\nSo I would recommend you to use Formik.\nIf any queries feel free to ask.\nHappy Coding !!\n"
] |
[
0
] |
[] |
[] |
[
"forms",
"input",
"react_native",
"reactjs"
] |
stackoverflow_0074661988_forms_input_react_native_reactjs.txt
|
Q:
How to write cross-framework machine learning code for tensorflow and pytorch?
Machine learning frameworks comprise, amongst other things, the following functions:
augmentations
metrics and losses
These functions are simple conversions of tensors and seem rather framework independent. However, for example tensorflow's categorical crossentropy loss uses some tensorflow specific functions like tf.convert_to_tensor() or tf.cast(). So it cannot be used easily in pytorch. Also tensorflow heavily prefers to work with tensorflow tensors instead of numpy ones to create tensorflow graphs to my knowledge.
Are there any existing efforts or ideas how to write such functions in a way that they can be used in both frameworks? I'm thinking of pure numpy functions which can be somehow converted to either tensorflow or pytorch.
A:
Yes, there are existing efforts to create framework-agnostic machine learning code. One such effort is Keras, whose high-level API can run on top of several backends (TensorFlow, JAX, and PyTorch as of Keras 3), so model, loss, and metric code written against it is portable across frameworks. (PyTorch Lightning, by contrast, is a training framework built specifically on PyTorch and does not make code portable to TensorFlow.)
Another approach is to use standard numerical libraries such as NumPy and SciPy, with which both PyTorch and TensorFlow can exchange data. By using these libraries, developers can write functions that are agnostic to the specific machine learning framework being used. However, this approach may not be as efficient as using the native tensor operations provided by the framework, as it requires additional data conversion steps, and NumPy-based code will not run on the GPU or participate in the framework's autograd graph.
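As a concrete sketch of the NumPy approach (the function name and inputs are illustrative): a metric written against NumPy accepts data from either framework once it is converted to arrays, e.g. via `.numpy()` on a detached CPU torch tensor or on an eager TensorFlow tensor:

```python
import numpy as np

def accuracy(y_true, y_pred):
    """Framework-agnostic accuracy: accepts anything np.asarray understands,
    e.g. lists, NumPy arrays, or (detached, CPU) torch/tf tensors converted
    with .numpy()."""
    y_true = np.asarray(y_true)
    y_pred = np.asarray(y_pred)
    # argmax over the class axis turns logits/probabilities into predictions
    return float(np.mean(y_true == np.argmax(y_pred, axis=-1)))

# Plain NumPy input; torch.Tensor.numpy() / tf.Tensor.numpy() plug in the same way
labels = np.array([0, 1, 2, 1])
logits = np.array([[2.0, 0.1, 0.1],
                   [0.2, 1.5, 0.3],
                   [0.1, 0.2, 0.9],
                   [1.2, 0.4, 0.1]])
print(accuracy(labels, logits))  # 0.75: the last sample is predicted as class 0
```

The trade-off mentioned above applies: every call crosses the tensor/array boundary, so such functions suit evaluation metrics better than losses that must stay inside the autograd graph.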
|
How to write cross-framework machine learning code for tensorflow and pytorch?
|
Machine learning frameworks comprise, amongst other things, the following functions:
augmentations
metrics and losses
These functions are simple conversions of tensors and seem rather framework independent. However, for example tensorflow's categorical crossentropy loss uses some tensorflow specific functions like tf.convert_to_tensor() or tf.cast(). So it cannot be used easily in pytorch. Also tensorflow heavily prefers to work with tensorflow tensors instead of numpy ones to create tensorflow graphs to my knowledge.
Are there any existing efforts or ideas how to write such functions in a way that they can be used in both frameworks? I'm thinking of pure numpy functions which can be somehow converted to either tensorflow or pytorch.
|
[
"Yes, there are existing efforts to create framework-agnostic machine learning functions. One such effort is the PyTorch Lightning framework, which aims to provide a common interface for machine learning frameworks such as PyTorch, TensorFlow, and Keras. This allows developers to write code that is compatible with multiple frameworks, without having to worry about the specific implementation details of each framework.\nAnother approach is to use standard numerical libraries such as NumPy and SciPy, which are compatible with both PyTorch and TensorFlow. By using these libraries, developers can write functions that are agnostic to the specific machine learning framework being used. However, this approach may not be as efficient as using the native tensor operations provided by the framework, as it may require additional data conversion steps.\n"
] |
[
0
] |
[] |
[] |
[
"pytorch",
"tensorflow"
] |
stackoverflow_0071703505_pytorch_tensorflow.txt
|
Q:
How to configure PostgreSQL client_min_messages on Heroku & Rails
I am trying to reduce some logging noise I am getting from PostgreSQL on my Heroku/Rails application. Specifically, I am trying to configure the client_min_messages setting to warning instead of the default notice.
I followed the steps in this post and specified min_messages: warning in my database.yml file but that doesn't seem to have any effect on my Heroku PostgreSQL instance. I'm still seeing NOTICE messages in my logs and when I run SHOW client_min_messages on the database it still returns notice.
Here is a redacted example of the logs I'm seeing in Papertrail:
Nov 23 15:04:51 my-app-name-production app/postgres.123467 [COLOR] [1234-5] sql_error_code = 00000 log_line="5733" application_name="puma: cluster worker 0: 4 [app]" NOTICE: text-search query contains only stop words or doesn't contain lexemes, ignored
I can also confirm that the setting does seem to be in the Rails configuration - Rails.application.config.database_configuration[Rails.env] in a production console does show a hash containing "min_messages"=>"warning"
I also tried manually updating that via the PostgreSQL console - so SET client_min_messages TO WARNING; - but that setting doesn't 'stick'. It seems to be reset on the next session.
How do I configure client_min_messages to be warning on Heroku/Rails?
A:
You should be able to set it at the database level:
ALTER DATABASE your_database
SET client_min_messages TO 'warning';
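If you prefer to apply this from a script or a Rails deploy task instead of a psql prompt, the statement can be built with basic validation first; a small Python sketch (the function name and the identifier check are illustrative assumptions — ALTER DATABASE does not accept bind parameters, so the name must be vetted before interpolation):

```python
# Severities PostgreSQL accepts for client_min_messages
VALID_LEVELS = {"debug5", "debug4", "debug3", "debug2", "debug1",
                "log", "notice", "warning", "error"}

def alter_min_messages_sql(database, level):
    """Build the ALTER DATABASE statement shown above, refusing
    unknown levels and unsafe database names."""
    level = level.lower()
    if level not in VALID_LEVELS:
        raise ValueError(f"invalid client_min_messages level: {level!r}")
    if not database.isidentifier():
        raise ValueError(f"unsafe database name: {database!r}")
    return f"ALTER DATABASE {database} SET client_min_messages TO '{level}';"

print(alter_min_messages_sql("my_app_production", "warning"))
# ALTER DATABASE my_app_production SET client_min_messages TO 'warning';
```

The resulting string can then be executed through whatever client you already use (psql, psycopg2, ActiveRecord's `execute`).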
|
How to configure PostgreSQL client_min_messages on Heroku & Rails
|
I am trying to reduce some logging noise I am getting from PostgreSQL on my Heroku/Rails application. Specifically, I am trying to configure the client_min_messages setting to warning instead of the default notice.
I followed the steps in this post and specified min_messages: warning in my database.yml file but that doesn't seem to have any effect on my Heroku PostgreSQL instance. I'm still seeing NOTICE messages in my logs and when I run SHOW client_min_messages on the database it still returns notice.
Here is a redacted example of the logs I'm seeing in Papertrail:
Nov 23 15:04:51 my-app-name-production app/postgres.123467 [COLOR] [1234-5] sql_error_code = 00000 log_line="5733" application_name="puma: cluster worker 0: 4 [app]" NOTICE: text-search query contains only stop words or doesn't contain lexemes, ignored
I can also confirm that the setting does seem to be in the Rails configuration - Rails.application.config.database_configuration[Rails.env] in a production console does show a hash containing "min_messages"=>"warning"
I also tried manually updating that via the PostgreSQL console - so SET client_min_messages TO WARNING; - but that setting doesn't 'stick'. It seems to be reset on the next session.
How do I configure client_min_messages to be warning on Heroku/Rails?
|
[
"You should be able to set it at the database level:\nALTER DATABASE your_database\n SET client_min_messages TO 'warning';\n\n"
] |
[
0
] |
[
"Set this machine-wide instead of project-wide\npsql -d rails_app_development\nALTER ROLE USER_NAME SET client_min_messages TO WARNING;\n\nChange USER_NAME to be the user account for your machine.\n",
"The client_min_messages setting in PostgreSQL controls the minimum level of messages that are sent to the client (in this case, your Rails app). By default, this setting is set to notice, which means that all messages with a severity of notice or higher will be sent to the client.\nTo change the client_min_messages setting in your Heroku/Rails app, you can use the heroku pg:psql command to connect to your PostgreSQL database and run the SET command to change the setting. For example, you can use the following command to change the client_min_messages setting to warning:\nheroku pg:psql -c \"SET client_min_messages TO warning;\"\n\nThis will change the client_min_messages setting for the current session. However, the setting will be reset to the default value (notice) when the session ends. To make the change permanent, you can use the ALTER DATABASE command to set the client_min_messages setting for the entire database. For example:\nheroku pg:psql -c \"ALTER DATABASE <database_name> SET client_min_messages TO warning;\"\n\nReplace <database_name> with the name of your database. This will change the client_min_messages setting for the entire database, so all subsequent sessions will use the warning setting.\nI hope this helps! Let me know if you have any other questions.\n",
"There is one more straightforward option, if you need to persist this setting, you can add it to your PostgreSQL configuration file (postgresql.conf) directly. So basically, step by step:\n\nSHOW config_file; to get location config location\nclient_min_messages = warning line goes to postgresql.conf\npg_ctl reload to reload the configuration without restarting the server\nSHOW client_min_messages; verifies that option has been set to warning successfully\n\n"
] |
[
-1,
-1,
-1
] |
[
"heroku",
"heroku_postgres",
"postgresql",
"ruby_on_rails"
] |
stackoverflow_0074606207_heroku_heroku_postgres_postgresql_ruby_on_rails.txt
|
Q:
GitHub Action: How to fix mvn: command not found issue with nektos/act
Introduction
I'm currently contributing to a GitHub project.
For this, I'm writing a GitHub workflow inside a GitHub Action that tests the creation of JavaDoc files.
This workflow should be run with act.
The project of the GitHub Action to which I want to add this GitHub workflow: https://github.com/MathieuSoysal/Javadoc-publisher.yml
Problem
The problem: when I execute my GitHub workflow with act, I obtain this error.
[Test Actions/Test with Java 11] Start image=catthehacker/ubuntu:act-latest
[Test Actions/Test with Java 11] docker pull image=catthehacker/ubuntu:act-latest platform= username= forcePull=false
[Test Actions/Test with Java 11] docker create image=catthehacker/ubuntu:act-latest platform= entrypoint=["/usr/bin/tail" "-f" "/dev/null"] cmd=[]
[Test Actions/Test with Java 11] docker run image=catthehacker/ubuntu:act-latest platform= entrypoint=["/usr/bin/tail" "-f" "/dev/null"] cmd=[]
[Test Actions/Test with Java 11] ☁ git clone 'https://github.com/GuillaumeFalourd/assert-command-line-output' # ref=v2
[Test Actions/Test with Java 11] ⭐ Run Pre GuillaumeFalourd/assert-command-line-output@v2
[Test Actions/Test with Java 11] ☁ git clone 'https://github.com/actions/setup-node' # ref=v2
[Test Actions/Test with Java 11] ✅ Success - Pre GuillaumeFalourd/assert-command-line-output@v2
[Test Actions/Test with Java 11] ⭐ Run Main actions/checkout@v3
[Test Actions/Test with Java 11] docker cp src=/workspaces/Javadoc-publisher.yml/. dst=/workspaces/Javadoc-publisher.yml
[Test Actions/Test with Java 11] ✅ Success - Main actions/checkout@v3
[Test Actions/Test with Java 11] ⭐ Run Main ./
[Test Actions/Test with Java 11] ⭐ Run Main actions/checkout@v3
[Test Actions/Test with Java 11] docker cp src=/workspaces/Javadoc-publisher.yml/. dst=/workspaces/Javadoc-publisher.yml
[Test Actions/Test with Java 11] ✅ Success - Main actions/checkout@v3
[Test Actions/Test with Java 11] ☁ git clone 'https://github.com/actions/setup-java' # ref=v3
[Test Actions/Test with Java 11] ⭐ Run Main actions/setup-java@v3
[Test Actions/Test with Java 11] docker cp src=/home/codespace/.cache/act/actions-setup-java@v3/ dst=/var/run/act/actions/actions-setup-java@v3/
[Test Actions/Test with Java 11] docker exec cmd=[node /var/run/act/actions/actions-setup-java@v3/dist/setup/index.js] user= workdir=
[Test Actions/Test with Java 11] ❓ ::group::Installed distributions
| Trying to resolve the latest version from remote
| Resolved latest version as 11.0.17+8
| Trying to download...
| Downloading Java 11.0.17+8 (Adopt-Hotspot) from https://github.com/adoptium/temurin11-binaries/releases/download/jdk-11.0.17%2B8/OpenJDK11U-jdk_x64_linux_hotspot_11.0.17_8.tar.gz ...
[Test Actions/Test with Java 11] ::debug::Downloading https://github.com/adoptium/temurin11-binaries/releases/download/jdk-11.0.17%252B8/OpenJDK11U-jdk_x64_linux_hotspot_11.0.17_8.tar.gz
[Test Actions/Test with Java 11] ::debug::Destination /tmp/8533ad34-250e-4794-b6bd-f014648ce8b2
[Test Actions/Test with Java 11] ::debug::download complete
| Extracting Java archive...
[Test Actions/Test with Java 11] ::debug::Checking tar --version
[Test Actions/Test with Java 11] ::debug::tar (GNU tar) 1.30%0ACopyright (C) 2017 Free Software Foundation, Inc.%0ALicense GPLv3+: GNU GPL version 3 or later <https://gnu.org/licenses/gpl.html>.%0AThis is free software: you are free to change and redistribute it.%0AThere is NO WARRANTY, to the extent permitted by law.%0A%0AWritten by John Gilmore and Jay Fenlason.
| [command]/usr/bin/tar xz --warning=no-unknown-keyword -C /tmp/4f3b749d-13d5-497d-87d7-45e30a9927cc -f /tmp/8533ad34-250e-4794-b6bd-f014648ce8b2
[Test Actions/Test with Java 11] ::debug::Caching tool Java_Adopt_jdk 11.0.17-8 x64
[Test Actions/Test with Java 11] ::debug::source dir: /tmp/4f3b749d-13d5-497d-87d7-45e30a9927cc/jdk-11.0.17+8
[Test Actions/Test with Java 11] ::debug::destination /opt/hostedtoolcache/Java_Adopt_jdk/11.0.17-8/x64
[Test Actions/Test with Java 11] ::debug::finished caching tool
| Java 11.0.17+8 was downloaded
| Setting Java 11.0.17+8 as the default
| Creating toolchains.xml for JDK version 11 from adopt
| Writing to /root/.m2/toolchains.xml
|
| Java configuration:
| Distribution: adopt
| Version: 11.0.17+8
| Path: /opt/hostedtoolcache/Java_Adopt_jdk/11.0.17-8/x64
|
[Test Actions/Test with Java 11] ❓ ::endgroup::
[Test Actions/Test with Java 11] ❓ ##[add-matcher]/run/act/actions/actions-setup-java@v3/.github/java.json
| Creating settings.xml with server-id: github
| Writing to /root/.m2/settings.xml
[Test Actions/Test with Java 11] ✅ Success - Main actions/setup-java@v3
[Test Actions/Test with Java 11] ⚙ ::set-output:: distribution=Adopt-Hotspot
[Test Actions/Test with Java 11] ⚙ ::set-output:: path=/opt/hostedtoolcache/Java_Adopt_jdk/11.0.17-8/x64
[Test Actions/Test with Java 11] ⚙ ::set-output:: version=11.0.17+8
[Test Actions/Test with Java 11] ⭐ Run Main Generate Javadoc
[Test Actions/Test with Java 11] docker exec cmd=[bash --noprofile --norc -e -o pipefail /var/run/act/workflow/1-composite-2.sh] user= workdir=
| /var/run/act/workflow/1-composite-2.sh: line 2: mvn: command not found
[Test Actions/Test with Java 11] ❌ Failure - Main Generate Javadoc
[Test Actions/Test with Java 11] exitcode '127': command not found, please refer to https://github.com/nektos/act/issues/107 for more information
[Test Actions/Test with Java 11] ☁ git clone 'https://github.com/JamesIves/github-pages-deploy-action' # ref=v4.4.0
[Test Actions/Test with Java 11] ❌ Failure - Main ./
[Test Actions/Test with Java 11] exitcode '127': command not found, please refer to https://github.com/nektos/act/issues/107 for more information
[Test Actions/Test with Java 11] ⭐ Run Post ./
[Test Actions/Test with Java 11] ⭐ Run Post actions/setup-java@v3
[Test Actions/Test with Java 11] docker exec cmd=[node /var/run/act/actions/actions-setup-java@v3/dist/cleanup/index.js] user= workdir=
[Test Actions/Test with Java 11] ✅ Success - Post actions/setup-java@v3
[Test Actions/Test with Java 11] ✅ Success - Post ./
[Test Actions/Test with Java 11] Job failed
GitHub Workflow
My GitHub Workflow :
name: Test Actions
on: [pull_request, push]
jobs:
  test:
    runs-on: ubuntu-latest
    name: Test with Java 11
    steps:
      - uses: actions/checkout@v3
      - uses: ./ # Uses an action in the root directory
        with:
          GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
          javadoc-branch: javadoc-test
          java-version: 11
          target-folder: javadoc
      - uses: GuillaumeFalourd/assert-command-line-output@v2
        with:
          command_line: ls -lha
          contains: javadoc
          expected_result: PASSED
GitHub Action
The GitHub Action of the project : https://github.com/MathieuSoysal/Javadoc-publisher.yml/blob/main/action.yml
Command
The executed command to run nektos/act : act
Question
Does anyone know how to fix this mvn: command not found issue with nektos/act?
A:
Explanation
Your problem is caused by the fact that the image you use with nektos/act does not have mvn installed.
How to fix it
You have several possibilities to fix it.
First solution
The first solution: if you want to keep your image.
You can add these lines inside your GitHub workflow to add mvn :
#### This step is only needed for the GHA local runner, act:
# https://github.com/nektos/act
- name: Install curl (for nektos/act local CI testing)
  run: apt-get update && apt-get install build-essential curl pkg-config openssl -y
- name: Download Maven
  run: |
    curl -sL https://www-eu.apache.org/dist/maven/maven-3/3.6.3/binaries/apache-maven-3.6.3-bin.zip -o maven.zip
    apt-get update
    apt-get -y install unzip
    unzip -d /usr/share maven.zip
    rm maven.zip
    ln -s /usr/share/apache-maven-3.6.3/bin/mvn /usr/bin/mvn
    echo "M2_HOME=/usr/share/apache-maven-3.6.3" | tee -a /etc/environment
####
Second solution
The second solution: if you are willing to change the image you use.
Instead of your act command, you can execute this:
act -P ubuntu-latest=quay.io/jamezp/act-maven
|
GitHub Action: How to fix mvn: command not found issue with nektos/act
|
Introduction
I'm currently contributing to a GitHub project.
For this, I'm writing a GitHub workflow inside a GitHub Action that tests the creation of JavaDoc files.
This workflow should be run with act.
The project of the GitHub Action to which I want to add this GitHub workflow: https://github.com/MathieuSoysal/Javadoc-publisher.yml
Problem
The problem: when I execute my GitHub workflow with act, I obtain this error.
[Test Actions/Test with Java 11] Start image=catthehacker/ubuntu:act-latest
[Test Actions/Test with Java 11] docker pull image=catthehacker/ubuntu:act-latest platform= username= forcePull=false
[Test Actions/Test with Java 11] docker create image=catthehacker/ubuntu:act-latest platform= entrypoint=["/usr/bin/tail" "-f" "/dev/null"] cmd=[]
[Test Actions/Test with Java 11] docker run image=catthehacker/ubuntu:act-latest platform= entrypoint=["/usr/bin/tail" "-f" "/dev/null"] cmd=[]
[Test Actions/Test with Java 11] ☁ git clone 'https://github.com/GuillaumeFalourd/assert-command-line-output' # ref=v2
[Test Actions/Test with Java 11] ⭐ Run Pre GuillaumeFalourd/assert-command-line-output@v2
[Test Actions/Test with Java 11] ☁ git clone 'https://github.com/actions/setup-node' # ref=v2
[Test Actions/Test with Java 11] ✅ Success - Pre GuillaumeFalourd/assert-command-line-output@v2
[Test Actions/Test with Java 11] ⭐ Run Main actions/checkout@v3
[Test Actions/Test with Java 11] docker cp src=/workspaces/Javadoc-publisher.yml/. dst=/workspaces/Javadoc-publisher.yml
[Test Actions/Test with Java 11] ✅ Success - Main actions/checkout@v3
[Test Actions/Test with Java 11] ⭐ Run Main ./
[Test Actions/Test with Java 11] ⭐ Run Main actions/checkout@v3
[Test Actions/Test with Java 11] docker cp src=/workspaces/Javadoc-publisher.yml/. dst=/workspaces/Javadoc-publisher.yml
[Test Actions/Test with Java 11] ✅ Success - Main actions/checkout@v3
[Test Actions/Test with Java 11] ☁ git clone 'https://github.com/actions/setup-java' # ref=v3
[Test Actions/Test with Java 11] ⭐ Run Main actions/setup-java@v3
[Test Actions/Test with Java 11] docker cp src=/home/codespace/.cache/act/actions-setup-java@v3/ dst=/var/run/act/actions/actions-setup-java@v3/
[Test Actions/Test with Java 11] docker exec cmd=[node /var/run/act/actions/actions-setup-java@v3/dist/setup/index.js] user= workdir=
[Test Actions/Test with Java 11] ❓ ::group::Installed distributions
| Trying to resolve the latest version from remote
| Resolved latest version as 11.0.17+8
| Trying to download...
| Downloading Java 11.0.17+8 (Adopt-Hotspot) from https://github.com/adoptium/temurin11-binaries/releases/download/jdk-11.0.17%2B8/OpenJDK11U-jdk_x64_linux_hotspot_11.0.17_8.tar.gz ...
[Test Actions/Test with Java 11] ::debug::Downloading https://github.com/adoptium/temurin11-binaries/releases/download/jdk-11.0.17%252B8/OpenJDK11U-jdk_x64_linux_hotspot_11.0.17_8.tar.gz
[Test Actions/Test with Java 11] ::debug::Destination /tmp/8533ad34-250e-4794-b6bd-f014648ce8b2
[Test Actions/Test with Java 11] ::debug::download complete
| Extracting Java archive...
[Test Actions/Test with Java 11] ::debug::Checking tar --version
[Test Actions/Test with Java 11] ::debug::tar (GNU tar) 1.30%0ACopyright (C) 2017 Free Software Foundation, Inc.%0ALicense GPLv3+: GNU GPL version 3 or later <https://gnu.org/licenses/gpl.html>.%0AThis is free software: you are free to change and redistribute it.%0AThere is NO WARRANTY, to the extent permitted by law.%0A%0AWritten by John Gilmore and Jay Fenlason.
| [command]/usr/bin/tar xz --warning=no-unknown-keyword -C /tmp/4f3b749d-13d5-497d-87d7-45e30a9927cc -f /tmp/8533ad34-250e-4794-b6bd-f014648ce8b2
[Test Actions/Test with Java 11] ::debug::Caching tool Java_Adopt_jdk 11.0.17-8 x64
[Test Actions/Test with Java 11] ::debug::source dir: /tmp/4f3b749d-13d5-497d-87d7-45e30a9927cc/jdk-11.0.17+8
[Test Actions/Test with Java 11] ::debug::destination /opt/hostedtoolcache/Java_Adopt_jdk/11.0.17-8/x64
[Test Actions/Test with Java 11] ::debug::finished caching tool
| Java 11.0.17+8 was downloaded
| Setting Java 11.0.17+8 as the default
| Creating toolchains.xml for JDK version 11 from adopt
| Writing to /root/.m2/toolchains.xml
|
| Java configuration:
| Distribution: adopt
| Version: 11.0.17+8
| Path: /opt/hostedtoolcache/Java_Adopt_jdk/11.0.17-8/x64
|
[Test Actions/Test with Java 11] ❓ ::endgroup::
[Test Actions/Test with Java 11] ❓ ##[add-matcher]/run/act/actions/actions-setup-java@v3/.github/java.json
| Creating settings.xml with server-id: github
| Writing to /root/.m2/settings.xml
[Test Actions/Test with Java 11] ✅ Success - Main actions/setup-java@v3
[Test Actions/Test with Java 11] ⚙ ::set-output:: distribution=Adopt-Hotspot
[Test Actions/Test with Java 11] ⚙ ::set-output:: path=/opt/hostedtoolcache/Java_Adopt_jdk/11.0.17-8/x64
[Test Actions/Test with Java 11] ⚙ ::set-output:: version=11.0.17+8
[Test Actions/Test with Java 11] ⭐ Run Main Generate Javadoc
[Test Actions/Test with Java 11] docker exec cmd=[bash --noprofile --norc -e -o pipefail /var/run/act/workflow/1-composite-2.sh] user= workdir=
| /var/run/act/workflow/1-composite-2.sh: line 2: mvn: command not found
[Test Actions/Test with Java 11] ❌ Failure - Main Generate Javadoc
[Test Actions/Test with Java 11] exitcode '127': command not found, please refer to https://github.com/nektos/act/issues/107 for more information
[Test Actions/Test with Java 11] ☁ git clone 'https://github.com/JamesIves/github-pages-deploy-action' # ref=v4.4.0
[Test Actions/Test with Java 11] ❌ Failure - Main ./
[Test Actions/Test with Java 11] exitcode '127': command not found, please refer to https://github.com/nektos/act/issues/107 for more information
[Test Actions/Test with Java 11] ⭐ Run Post ./
[Test Actions/Test with Java 11] ⭐ Run Post actions/setup-java@v3
[Test Actions/Test with Java 11] docker exec cmd=[node /var/run/act/actions/actions-setup-java@v3/dist/cleanup/index.js] user= workdir=
[Test Actions/Test with Java 11] ✅ Success - Post actions/setup-java@v3
[Test Actions/Test with Java 11] ✅ Success - Post ./
[Test Actions/Test with Java 11] Job failed
GitHub Workflow
My GitHub Workflow :
name: Test Actions
on: [pull_request, push]
jobs:
test:
runs-on: ubuntu-latest
name: Test with Java 11
steps:
- uses: actions/checkout@v3
- uses: ./ # Uses an action in the root directory
with:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
javadoc-branch: javadoc-test
java-version: 11
target-folder: javadoc
- uses: GuillaumeFalourd/assert-command-line-output@v2
with:
command_line: ls -lha
contains: javadoc
expected_result: PASSED
GitHub Action
The GitHub Action of the project : https://github.com/MathieuSoysal/Javadoc-publisher.yml/blob/main/action.yml
Command
The executed command to run nektos/act : act
Question
Does someone know how we can fix this problem mvn: command not found issue with nektos/act?
|
[
"Explanation\nYour problem is caused by the fact that the image you use with nektos/act does not has mvn.\nHow to fix it\nYou have several possibilities to fix it.\nFirst solution\nThe first solution: if you want to keep your image.\nYou can add these lines inside your GitHub workflow to add mvn :\n#### This step is only needed for GHA local runner, act:\n# https://github.com/nektos/act\n - name: Install curl (for nektos/act local CI testing)\n run: apt-get update && apt-get install build-essential curl pkg-config openssl -y\n - name: Download Maven\n run: |\n curl -sL https://www-eu.apache.org/dist/maven/maven-3/3.6.3/binaries/apache-maven-3.6.3-bin.zip -o maven.zip\n apt-get update\n apt-get -y install unzip\n unzip -d /usr/share maven.zip\n rm maven.zip\n ln -s /usr/share/apache-maven-3.6.3/bin/mvn /usr/bin/mvn\n echo \"M2_HOME=/usr/share/apache-maven-3.6.3\" | tee -a /etc/environment\n####\n\nSecond solution\nThe second solution: if you agree to change your used image.\nInstead of your act command, you can execute this:\nact -P ubuntu-latest=quay.io/jamezp/act-maven\n\n"
] |
[
1
] |
[] |
[] |
[
"continuous_integration",
"github",
"github_actions",
"java"
] |
stackoverflow_0074666249_continuous_integration_github_github_actions_java.txt
|
Q:
Is it safe to re-interpret a smaller data type to Int64 on x64 systems?
I have a C# program that re-interprets an unmanaged type as ulong on an x64 machine:
public static unsafe ulong ToUInt64<T>(T value) where T : unmanaged
{
if(sizeof(T) > sizeof(ulong))
throw new InvalidOperationException("Can't convert this type to UInt64");
return *(ulong*)&value;
}
Usage:
int a = 123;
ulong b = ToUInt64(a);
Console.WriteLine(b); // returns 123 as expected.
However, I'm not sure if this is 100% safe.
Namely, will the runtime always allocate 8 bytes and zero the memory if a variable has a smaller size? Can I get an AccessViolationException or read some garbage memory into the ulong in some situations?
A:
I think this still depends on the runtime.
First of all, I cannot guarantee this example is reproducible on every machine.
static void Main()
{
var j = Test(0x12345678);
ulong b = ToUInt64((byte)123);
Console.WriteLine(b); // returns 123 as expected.
Console.WriteLine(j);
}
public unsafe static short Test(int value)
{
return (short)(value - 1);
}
public static unsafe ulong ToUInt64<T>(T value) where T : unmanaged
{
return *(ulong*)&value;
}
When I debugged (F5) the program in Release mode, I got the following result: the first printed value was unexpected.
The generated assembly is:
00007FFDD6148D64 mov dword ptr [rsp+24h],7Bh
00007FFDD6148D6C mov rcx,qword ptr [rsp+24h]
The value 123 (0x7B) is written as a dword ptr but read back as a qword ptr, so unless the runtime always clears the parameter stack, you cannot guarantee the upper four bytes of the final value will be zero.
A:
Turns out I neglected two things at first.
The value is copied on the function call, so it is different from "re-interpreting" the variable outright.
If I allocate an array and re-interpret it in scope, it will always fail, since it reads adjacent, unwanted memory:
int* a = stackalloc int[2];
a[0] = a[1] = 123;
Console.WriteLine(*(ulong*)&a[0]); // 528280977531
Even if I wrap it in a function, it may still fail sometimes. The result varies depending on the Debug/Release configuration and the surrounding code, likely due to code optimization and whatever happens to be on the call stack (accidentally reading leftover data?).
The 100% safe approach is to always allocate a new zeroed variable and then copy the data:
public static unsafe ulong ToUInt64<T>(T value) where T : unmanaged
{
    if (sizeof(T) > sizeof(ulong))
throw new InvalidCastException("Can't cast from data type with larger size");
ulong result = 0;
*(T*)&result = value;
return result;
}
A more generic implementation:
public static unsafe TResult UnmanagedCast<T,TResult>(T value) where T : unmanaged where TResult : unmanaged
{
if (sizeof(T) > sizeof(TResult))
throw new InvalidCastException("Can't cast from data type with larger size");
TResult result = default;
*(T*)&result = value;
return result;
}
usage:
UnmanagedCast<int,ulong>(123)
|
Is it safe to re-interpret a smaller data type to Int64 on x64 systems?
|
[
".net",
"c#",
"memory",
"pointers",
"unsafe"
] |
stackoverflow_0074655141_.net_c#_memory_pointers_unsafe.txt
|