content: stringlengths (86 to 88.9k)
title: stringlengths (0 to 150)
question: stringlengths (1 to 35.8k)
answers: list
answers_scores: list
non_answers: list
non_answers_scores: list
tags: list
name: stringlengths (30 to 130)
Q: When I'm creating a new Flutter project, the lib folder is missing in Android Studio When I am creating a new Flutter project in Android Studio, the lib folder is missing. How do I fix it? I ran flutter doctor and everything is fine, but I am still facing this problem. A: As "Devarsh Ranpara" commented, switch the project view from Android to Project. A: Just delete the .idea folder from the project directory and re-run Android Studio. It worked for me. A: Hi, you should choose this option!!
When I'm creating Flutter new project lib folder is missing in Android Studio
When I am creating a new Flutter project in Android Studio, lib folder is missing. How to fix it? I run Flutter doctor, everything is fine but I am facing this problem.
[ "As \"Devarsh Ranpara\" comment \nchange Android directory to Project\n\n", "Just delete .idea folder from project directory an re run android studio. it worked for me.\n", "\nHi\nYou should choose this option!!\n" ]
[ 4, 3, 0 ]
[]
[]
[ "dart", "flutter", "lib", "new_operator" ]
stackoverflow_0071098180_dart_flutter_lib_new_operator.txt
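The ".idea" suggestion above boils down to two manual steps; a rough sketch (macOS/Linux shell shown, on Windows just delete the folder in Explorer), assuming the Flutter project root is the current directory and Android Studio is closed first:

```bash
rm -rf .idea     # remove Android Studio's cached project settings
# then reopen the project in Android Studio so it re-creates .idea and re-indexes the project
```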
Q: How to make a sample that shows the 'Props are overwritten when re-rendering' anti pattern I would like to be convinced that 'Props are overwritten when re-rendering' is an anti pattern. const MyButton = Vue.extend({ props: { obj: {} }, template: "<button @click=\"obj.text = 'Value'+Math.round(Math.random()*100)\">Change text in child ({{obj.text}})</button>" }); const app = new Vue({ el: "#main", data: { obj: { text: "Value2" } }, components: { MyButton }, template: ` <div> <Button @click='obj.text = "Value"+Math.round(Math.random()*100)'>Change text in parent ({{obj.text}})</Button><br> <MyButton :obj='obj'/> Pressing here 'mutate' the prop. It is an anti-pattern (Props are overwritten when re-rendering). But it seems to work just fine, how to change this sample so it shows the anti-pattern problem? (https://eslint.vuejs.org/rules/no-mutating-props.html) </div>` }); Codepen: https://codepen.io/SVSMITH/pen/LYrXRzW Can anyone help? A: In your example you are actually not mutating the prop, because what you send in to the child is a reference to an object. This reference stays the same. See what happens if you change the template code in your child to: "<button @click=\"obj = {}\">Change text in child ({{obj.text}})</button>" Then you will see that the prop is overwritten and the value of the child differs from that in the parent. When the parent updates the value, the updated value in the child will be overwritten. Which can cause serious and hard to find bugs. Therefore use emit in the child to update the value in the parent, which will update the child value through the prop. If you have a lot of these props and emits, you should look into using a store like Pinia.
How to make a sample that shows the 'Props are overwritten when re-rendering' anti pattern
I would like to be convinced that 'Props are overwritten when re-rendering' is an anti pattern. const MyButton = Vue.extend({ props: { obj: {} }, template: "<button @click=\"obj.text = 'Value'+Math.round(Math.random()*100)\">Change text in child ({{obj.text}})</button>" }); const app = new Vue({ el: "#main", data: { obj: { text: "Value2" } }, components: { MyButton }, template: ` <div> <Button @click='obj.text = "Value"+Math.round(Math.random()*100)'>Change text in parent ({{obj.text}})</Button><br> <MyButton :obj='obj'/> Pressing here 'mutate' the prop. It is an anti-pattern (Props are overwritten when re-rendering). But it seems to work just fine, how to change this sample so it shows the anti-pattern problem? (https://eslint.vuejs.org/rules/no-mutating-props.html) </div>` }); Codepen: https://codepen.io/SVSMITH/pen/LYrXRzW Can anyone help?
[ "In your example you are actually not mutating the prop, because what you send in to the child is a reference to an object. This reference stays the same. See what happens if you change the template code in your child to:\n\"<button @click=\\\"obj = {}\\\">Change text in child ({{obj.text}})</button>\"\n\nThen you will see that the prop is overwritten and the value of the child differs from that in the parent. When the parent updates the value, the updated value in the child will be overwritten. Which can cause serious and hard to find bugs. Therefore use emit in the child to update the value in the parent, which will update the child value through the prop.\nIf you have a lot of these props and emits, you should look into using a store like Pinia.\n" ]
[ 0 ]
[]
[]
[ "anti_patterns", "javascript", "mutation", "vue.js", "vue_props" ]
stackoverflow_0074665328_anti_patterns_javascript_mutation_vue.js_vue_props.txt
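The answer's recommendation to emit from the child instead of mutating the prop can be sketched roughly like this (Vue 2 syntax to match the question; the event name update-text is just an illustrative choice):

```js
// Child: never writes to the prop; it only reports the intended change.
const MyButton = Vue.extend({
  props: { obj: { type: Object, required: true } },
  template:
    '<button @click="$emit(\'update-text\', \'Value\' + Math.round(Math.random() * 100))">' +
    'Change text in child ({{ obj.text }})</button>'
});

// Parent: owns the state and applies the change when the event arrives,
// so the prop stays the single source of truth.
const app = new Vue({
  el: '#main',
  data: { obj: { text: 'Value2' } },
  components: { MyButton },
  template: '<div><MyButton :obj="obj" @update-text="obj.text = $event" /></div>'
});
```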
Q: jQuery works in one case but not in another I'm writing a simple calculator using jQuery. I need to write an error message to the input element when the user enters an invalid symbol. The problem is that console.log works but $('#input') doesn't. ` function addChar(ch) { num1 = $('#input').val(); if(isNaN(num1)){ $('#input').val("Invalid input"); console.log("Invalid input"); } op = ch; clearAll(); } I use the same kind of call in another part of the code and there it works. I can't imagine why. case "/": if(num2 == 0) $('#input').val("arithmetic error"); else $('#input').val(num1 / num2); break; A: If the element is selected by id then it's okay to use '#input'. Otherwise simply remove the '#' symbol (if it is a tag selector). <div> <label> </label> </div> Bind the value as innerText: $('label').text("arithmetic error"); Try this.
JQuery don't work in one case and another work
I'm writing a simple calculator, using JQuery. I need to write to the input tag an error message if the user writes an invalid symbol. And the problem is that console.log works and "$('#input')" doesn't work. ` function addChar(ch) { num1 = $('#input').val(); if(isNaN(num1)){ $('#input').val("Invalid input"); console.log("Invalid input"); } op = ch; clearAll(); } I have the same part of the code, where I use it. Another factor is working. I can't imagine why case "/": if(num2 == 0) $('#input').val("arythmetic error"); else $('#input').val(num1 / num2); break;
[ "If the input is an id then its okay to give '#input'. Otherwise simply remove the '#' symbol(if it is a tag).\n<div>\n <label>\n </label>\n</div>\n\nBind the value as innerText\n$('label').text(\"arythmetic error\");\n\nTry this\n" ]
[ 0 ]
[]
[]
[ "frontend", "javascript", "jquery" ]
stackoverflow_0074664384_frontend_javascript_jquery.txt
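To make the answer above concrete: .val() only does something useful on form elements (input, select, textarea); for other elements you set text instead. A small sketch with hypothetical markup:

```html
<input id="input" type="text">
<label id="result"></label>

<script>
  // form element selected by id: .val() works
  $('#input').val('Invalid input');

  // non-form element: use .text() (or .html()) instead of .val()
  $('#result').text('arithmetic error');
</script>
```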
Q: Big query table on top of bigtable taking too long to read in google dataflow job I have a dataflow job that reads from bigquery table( created on top of big table). The data flow job is created using custom template in java. I need to process around 500 million records from bigquery. The issue I am facing is even to read 1 million record big query read is taking 26 min and dataflow job is taking 36 min. The read is too slow in big query. Any suggestions on how to improve the read performance . A: There are a few things you can try to improve the read performance of your BigQuery job: Use query optimization techniques such as using the WHERE clause to filter out irrelevant data and using GROUP BY or ORDER BY to reduce the amount of data that needs to be processed. Use partitioned tables to distribute the data across multiple nodes, which can improve the read performance by allowing the query to run on multiple nodes in parallel. Use columnar data storage formats such as Parquet or ORC, which can improve read performance by only reading the columns that are needed for the query. Use clustering to organize the data based on the columns that are frequently used in the query, which can improve read performance by reducing the amount of data that needs to be scanned. Overall, it is important to optimize your query and data storage to improve the read performance of your BigQuery job.
Big query table on top of bigtable taking too long to read in google dataflow job
I have a dataflow job that reads from bigquery table( created on top of big table). The data flow job is created using custom template in java. I need to process around 500 million records from bigquery. The issue I am facing is even to read 1 million record big query read is taking 26 min and dataflow job is taking 36 min. The read is too slow in big query. Any suggestions on how to improve the read performance .
[ "There are a few things you can try to improve the read performance of your BigQuery job:\n\nUse query optimization techniques such as using the WHERE clause to filter out irrelevant data and using GROUP BY or ORDER BY to reduce the amount of data that needs to be processed.\n\nUse partitioned tables to distribute the data across multiple nodes, which can improve the read performance by allowing the query to run on multiple nodes in parallel.\n\nUse columnar data storage formats such as Parquet or ORC, which can improve read performance by only reading the columns that are needed for the query.\n\nUse clustering to organize the data based on the columns that are frequently used in the query, which can improve read performance by reducing the amount of data that needs to be scanned.\n\n\nOverall, it is important to optimize your query and data storage to improve the read performance of your BigQuery job.\n" ]
[ 0 ]
[]
[]
[ "google_bigquery", "google_bigquery_java", "google_cloud_dataflow", "google_cloud_platform" ]
stackoverflow_0074658971_google_bigquery_google_bigquery_java_google_cloud_dataflow_google_cloud_platform.txt
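A rough SQL sketch of the partitioning, clustering and filtering suggestions in the answer above (table and column names are made up for illustration):

```sql
-- One-time: create a partitioned and clustered copy of the table
CREATE TABLE mydataset.events_opt
PARTITION BY DATE(event_time)
CLUSTER BY customer_id AS
SELECT * FROM mydataset.events;

-- Reads then select only the needed columns and prune partitions
SELECT customer_id, event_time, amount
FROM mydataset.events_opt
WHERE event_time >= TIMESTAMP('2022-01-01')   -- partition pruning
  AND event_time <  TIMESTAMP('2022-02-01')
  AND customer_id = 'C123';                   -- clustering helps this filter
```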
Q: How to create a channel with a button I wanted to ask how to create buttons. To start, I took the exact code from the docs. const { ActionRowBuilder, ButtonBuilder, ButtonStyle, Events } = require("discord.js"); client.on(Events.InteractionCreate, async (interaction) => { if (!interaction.isChatInputCommand()) return; if (interaction.commandName === "button") { const row = new ActionRowBuilder().addComponents(new ButtonBuilder().setCustomId("primary").setLabel("Click me!").setStyle(ButtonStyle.Primary)); await interaction.reply({ content: "I think you should,", components: [row] }); } }); It doesn't seem to work. No slash command shows up, and even if I type "button" it doesn't send anything. It doesn't send a button; that's my main concern. A: It sounds like you don't yet know how to code a Discord bot. Please go to the discord.js guide (discordjs.guide) and learn it. One more thing: what you are trying to do with the above code will never work on its own; there is no logic behind it. Please read the guide and see how a bot is made. I hope you understand.
How to create a channel with a button
I wanted to ask how to create buttons at first, I got the same EXACT code from the docs. const { ActionRowBuilder, ButtonBuilder, ButtonStyle, Events } = require("discord.js"); client.on(Events.InteractionCreate, async (interaction) => { if (!interaction.isChatInputCommand()) return; if (interaction.commandName === "button") { const row = new ActionRowBuilder().addComponents(new ButtonBuilder().setCustomId("primary").setLabel("Click me!").setStyle(ButtonStyle.Primary)); await interaction.reply({ content: "I think you should,", components: [row] }); } }); It doesn't seem to work. There is no slash command there and not even if I write "button", it just doesn't send anything. It doesn't send a button, that's my main concern.
[ "Bro, I understood that you don't know how to code a discord bot\nPlease go to discord.js Guide discordjs.guide and learn it\nand one more thing what you are trying to do (the above code), it never ever will work. There is no logic.\nPlease read the guide and see how a bot is made.\nI hope you understand\n" ]
[ 0 ]
[]
[]
[ "discord", "discord.js", "javascript", "node.js" ]
stackoverflow_0074576626_discord_discord.js_javascript_node.js.txt
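The answer above does not point at anything specific, so the following is only an assumption: the snippet in the question handles an interaction for a /button command, but nothing registers that slash command with Discord, which would explain why no command shows up. A minimal registration sketch for discord.js v14 (token, client ID and guild ID are placeholders):

```js
const { REST, Routes, SlashCommandBuilder } = require('discord.js');

// Define the /button command the interaction handler is waiting for.
const commands = [
  new SlashCommandBuilder()
    .setName('button')
    .setDescription('Replies with a button')
    .toJSON(),
];

const rest = new REST({ version: '10' }).setToken('YOUR_BOT_TOKEN');

(async () => {
  // Guild commands update instantly, which is convenient while testing.
  await rest.put(
    Routes.applicationGuildCommands('YOUR_CLIENT_ID', 'YOUR_GUILD_ID'),
    { body: commands },
  );
  console.log('Registered /button');
})();
```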
Q: Changing an element in an array in C and Getting both old and new element as output Firstly I created an array and accessed an element of it. After that, I tried to change that array element without removing the access part and I got both new and old elements as the output. How can I get only the new element as the output? I am beginner for the C language. #include <stdio.h> #include <stdlib.h> int main() { int myfirstarray[] = {2,50,16,100,39}; printf("%d",myfirstarray[2]); myfirstarray[2]=78; printf("%d",myfirstarray[2]); } this is the code and, 1678 Process returned 0 (0x0) execution time : 4.098 s Press any key to continue. this is what I got. I want to get output as only 78 A: When a program executes, each print statement writes a new string (or chunk of characters) to the output stream according to format string. These accumulate. After the execution of: int myfirstarray[] = {2,50,16,100,39}; printf("%d",myfirstarray[2]); Output (after execution of the second line): 16 Then, the value is changed and a new chunk is produced for the second index: myfirstarray[2]=78; printf("%d",myfirstarray[2]); Output (after execution of the fourth line): 1678 This is analogous to the following: printf("word:\n"); printf("mor"); printf("pho"); printf("graphic\n"); Output: word: morphographic
Changing an element in an array in C and Getting both old and new element as output
Firstly I created an array and accessed an element of it. After that, I tried to change that array element without removing the access part and I got both new and old elements as the output. How can I get only the new element as the output? I am beginner for the C language. #include <stdio.h> #include <stdlib.h> int main() { int myfirstarray[] = {2,50,16,100,39}; printf("%d",myfirstarray[2]); myfirstarray[2]=78; printf("%d",myfirstarray[2]); } this is the code and, 1678 Process returned 0 (0x0) execution time : 4.098 s Press any key to continue. this is what I got. I want to get output as only 78
[ "When a program executes, each print statement writes a new string (or chunk of characters) to the output stream according to format string. These accumulate.\nAfter the execution of:\nint myfirstarray[] = {2,50,16,100,39};\nprintf(\"%d\",myfirstarray[2]);\n\nOutput (after execution of the second line):\n16\n\nThen, the value is changed and a new chunk is produced for the second index:\nmyfirstarray[2]=78;\nprintf(\"%d\",myfirstarray[2]);\n\nOutput (after execution of the fourth line):\n1678\n\nThis is analogous to the following:\nprintf(\"word:\\n\");\nprintf(\"mor\");\nprintf(\"pho\");\nprintf(\"graphic\\n\");\n\nOutput:\nword:\nmorphographic\n\n\n" ]
[ 0 ]
[]
[]
[ "arrays", "c" ]
stackoverflow_0074665651_arrays_c.txt
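Applying the answer above to the question's goal of printing only 78: update the element first and print once (a newline also keeps separate prints from running together):

```c
#include <stdio.h>

int main(void)
{
    int myfirstarray[] = {2, 50, 16, 100, 39};

    myfirstarray[2] = 78;             /* change the element first */
    printf("%d\n", myfirstarray[2]);  /* single print -> output: 78 */

    return 0;
}
```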
Q: Django Slugify Not Showing Turkish Character I wanna make business directory and i made category model here at the below. class FirmaKategori(models.Model): kategori = models.CharField(max_length=250) description = models.CharField(max_length=5000) slug = models.SlugField(max_length=250, null=False, unique=True, allow_unicode=True) def __str__(self): return self.kategori def get_absolute_url(self): return reverse('firma-kategori-ekle') def save(self, *args, **kwargs): self.slug = slugify(self.kategori, allow_unicode=False) super().save(*args, **kwargs) I made this, and if i allow_unicode=True, Forexample i made a category its name is Takım Tezgahları, became takım-tezgahları, but i want takim-tezgahlari Varyation 1: When i delete all allow_unicode=True tags, result is Category name = Ulaşım Sektörü Slug link: ulasm-sektoru Varyation 2: When i make all allow_unicode=True tags, result is Category name = Ulaşım Sektörü Slug link: ulaşım-sektörü I want ulasim-sektoru How can i solve this. A: I solved via replace but i have a still slug url problem... Before solving, result is at the below. And after i add this replace tag def save(self, *args, **kwargs): self.slug = slugify(self.kategori.replace('ı','i').replace('ş','s'), allow_unicode=True) super().save(*args, **kwargs) Problem solved but, when i looked the link on the site, it has still My category path is like this path('kategori/slug:slug/', FirmaKategoriDetay.as_view(), name="firma-kategori-detay"), How can i fix this.
Django Slugify Not Showing Turkish Character
I wanna make business directory and i made category model here at the below. class FirmaKategori(models.Model): kategori = models.CharField(max_length=250) description = models.CharField(max_length=5000) slug = models.SlugField(max_length=250, null=False, unique=True, allow_unicode=True) def __str__(self): return self.kategori def get_absolute_url(self): return reverse('firma-kategori-ekle') def save(self, *args, **kwargs): self.slug = slugify(self.kategori, allow_unicode=False) super().save(*args, **kwargs) I made this, and if i allow_unicode=True, Forexample i made a category its name is Takım Tezgahları, became takım-tezgahları, but i want takim-tezgahlari Varyation 1: When i delete all allow_unicode=True tags, result is Category name = Ulaşım Sektörü Slug link: ulasm-sektoru Varyation 2: When i make all allow_unicode=True tags, result is Category name = Ulaşım Sektörü Slug link: ulaşım-sektörü I want ulasim-sektoru How can i solve this.
[ "I solved via replace but i have a still slug url problem...\nBefore solving, result is at the below.\n\nAnd after i add this replace tag\ndef save(self, *args, **kwargs):\n self.slug = slugify(self.kategori.replace('ı','i').replace('ş','s'), allow_unicode=True)\n super().save(*args, **kwargs) \n\n\nProblem solved but, when i looked the link on the site, it has still\n\nMy category path is like this\npath('kategori/slug:slug/', FirmaKategoriDetay.as_view(), name=\"firma-kategori-detay\"),\nHow can i fix this.\n" ]
[ 0 ]
[]
[]
[ "django" ]
stackoverflow_0074665429_django.txt
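A slightly more complete version of the replace() idea above, using a translation table so every Turkish character is covered in one place (the mapping below is an illustration; extend it as needed):

```python
from django.db import models
from django.utils.text import slugify

# Map Turkish characters to their ASCII look-alikes before slugifying.
TR_MAP = str.maketrans("ıİşŞğĞçÇöÖüÜ", "iIsSgGcCoOuU")

class FirmaKategori(models.Model):
    # ... fields as in the question ...

    def save(self, *args, **kwargs):
        self.slug = slugify(self.kategori.translate(TR_MAP))
        super().save(*args, **kwargs)
```

The remaining URL issue is most likely the pattern itself: Django path converters need the angle-bracket syntax, i.e. path('kategori/<slug:slug>/', FirmaKategoriDetay.as_view(), name='firma-kategori-detay') rather than a literal slug:slug segment.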
Q: Google SheetsAPI: ValueError: Client secrets must be for a web or installed app Very similar to this question: ValueError: Client secrets must be for a web or installed app but with a twist: I'm trying to do this through a Google Cloud Virtual Machine. Recently, the Out-Of-Band (OOB) flow stopped working for me (it seems the reason may lie here: oob-migration. Until then, I was able to easily run the Google Sheets API on the Virtual Machine to both read/write on Google Sheet files Now, I'm trying to follow this Python quickstart for google sheets which is almost identical to the code I already had, under the "Configure the sample" section. My code on Python right now is: scopes = ['https://www.googleapis.com/auth/spreadsheets'] creds = None # The file token.json stores the user's access and refresh tokens, and is # created automatically when the authorization flow completes for the first # time. if os.path.exists('token.json'): creds = Credentials.from_authorized_user_file('token.json', scopes) # If there are no (valid) credentials available, let the user log in. if not creds or not creds.valid: if creds and creds.expired and creds.refresh_token: creds.refresh(Request()) else: flow = InstalledAppFlow.from_client_secrets_file( credentials_path, scopes) creds = flow.run_local_server(port=0) # Save the credentials for the next run with open('token.json', 'w') as token: token.write(creds.to_json()) #Store creds in object my_creds = creds #Create service build('sheets', 'v4', credentials=my_creds) But every time Ì get this error: ValueError: Client secrets must be for a web or installed app. For the record, I did create the credentials under "OAuth 2.0 Client IDs" on Google Cloud, and the application type is "Web application". If that's not the type, I don't know which one it should be. Thank you so much for your help, really appreciated. A: The code you are are using was designed for an installed. Which is exactly what your error message is saying. The QuickStart clearly states Click Application type > Desktop app. While i agree the error message states installed or web, i am not sure that code can be used for a web application. Client secrets must be for a web or installed app. Open the file denoted by credentials_path the file should have the following format. credentials.json { "installed": { "client_id": "[redacted]", "project_id": "daimto-tutorials-101", "auth_uri": "https://accounts.google.com/o/oauth2/auth", "token_uri": "https://oauth2.googleapis.com/token", "auth_provider_x509_cert_url": "https://www.googleapis.com/oauth2/v1/certs", "client_secret": "[redacted]", "redirect_uris": [ "http://localhost" ] } } Points to check. it must say "installed" redirect_uris must not include anything like urn:ietf:wg:oauth:2.0:oob Here is a of video which will show you how to create the proper credentials file for use with that code. This should be done though Google cloud console How to create Google Oauth2 installed application credentials.json Note: I highly doubt that this issue is due to oob, you would have a different error message if it was. update Web app works. I was able to test this with using a web app credentials. the only change i had to make was to denote the port i wanted the code to run on in order to get a static port I needed to add a redirect uri to the developer console project. I made no other changes to the standard quickstart. flow = InstalledAppFlow.from_client_secrets_file( CREDENTIALS_FILE_PATH, SCOPES) creds = flow.run_local_server(port=53911)
Google SheetsAPI: ValueError: Client secrets must be for a web or installed app
Very similar to this question: ValueError: Client secrets must be for a web or installed app but with a twist: I'm trying to do this through a Google Cloud Virtual Machine. Recently, the Out-Of-Band (OOB) flow stopped working for me (it seems the reason may lie here: oob-migration. Until then, I was able to easily run the Google Sheets API on the Virtual Machine to both read/write on Google Sheet files Now, I'm trying to follow this Python quickstart for google sheets which is almost identical to the code I already had, under the "Configure the sample" section. My code on Python right now is: scopes = ['https://www.googleapis.com/auth/spreadsheets'] creds = None # The file token.json stores the user's access and refresh tokens, and is # created automatically when the authorization flow completes for the first # time. if os.path.exists('token.json'): creds = Credentials.from_authorized_user_file('token.json', scopes) # If there are no (valid) credentials available, let the user log in. if not creds or not creds.valid: if creds and creds.expired and creds.refresh_token: creds.refresh(Request()) else: flow = InstalledAppFlow.from_client_secrets_file( credentials_path, scopes) creds = flow.run_local_server(port=0) # Save the credentials for the next run with open('token.json', 'w') as token: token.write(creds.to_json()) #Store creds in object my_creds = creds #Create service build('sheets', 'v4', credentials=my_creds) But every time Ì get this error: ValueError: Client secrets must be for a web or installed app. For the record, I did create the credentials under "OAuth 2.0 Client IDs" on Google Cloud, and the application type is "Web application". If that's not the type, I don't know which one it should be. Thank you so much for your help, really appreciated.
[ "The code you are are using was designed for an installed. Which is exactly what your error message is saying. The QuickStart clearly states Click Application type > Desktop app.\nWhile i agree the error message states installed or web, i am not sure that code can be used for a web application.\n\nClient secrets must be for a web or installed app.\n\nOpen the file denoted by credentials_path the file should have the following format.\ncredentials.json\n{\n \"installed\": {\n \"client_id\": \"[redacted]\",\n \"project_id\": \"daimto-tutorials-101\",\n \"auth_uri\": \"https://accounts.google.com/o/oauth2/auth\",\n \"token_uri\": \"https://oauth2.googleapis.com/token\",\n \"auth_provider_x509_cert_url\": \"https://www.googleapis.com/oauth2/v1/certs\",\n \"client_secret\": \"[redacted]\",\n \"redirect_uris\": [\n \"http://localhost\"\n ]\n }\n}\n\nPoints to check.\n\nit must say \"installed\"\nredirect_uris must not include anything like urn:ietf:wg:oauth:2.0:oob\n\nHere is a of video which will show you how to create the proper credentials file for use with that code. This should be done though Google cloud console\n\nHow to create Google Oauth2 installed application credentials.json\n\nNote: I highly doubt that this issue is due to oob, you would have a different error message if it was.\nupdate Web app works.\nI was able to test this with using a web app credentials. the only change i had to make was to denote the port i wanted the code to run on in order to get a static port I needed to add a redirect uri to the developer console project.\nI made no other changes to the standard quickstart.\nflow = InstalledAppFlow.from_client_secrets_file(\n CREDENTIALS_FILE_PATH, SCOPES)\n creds = flow.run_local_server(port=53911)\n\n" ]
[ 1 ]
[]
[]
[ "google_api", "google_api_python_client", "google_oauth", "google_sheets_api", "python" ]
stackoverflow_0074663524_google_api_google_api_python_client_google_oauth_google_sheets_api_python.txt
Q: MappingException: "No property found on entity to bind constructor parameter to"? I developed my project with spring data mongodb, and used to have this document: @Document(collection="Instrument") public class Instrument { @Id private Integer id; private String name; private String internalCode; private String fosMarketId; private String localCode; //setters...getters... and constructurs.... Now I need to add some property to my document as bellow: .... private Long from; private Long to; private Long OpenHourfrom; private Long OpenHourTo; private Boolean isActive; //setters...getters... and constructurs.... so I have this new constructor: @PersistenceConstructor public Instrument(Integer id, String name, String internalCode, String fosMarketId, String localCode, Long from, Long to, Long openHourfrom, Long openHourTo, Boolean isActive) { super(); this.id = id; this.name = name; this.internalCode = internalCode; this.fosMarketId = fosMarketId; this.localCode = localCode; this.from = from; this.to = to; this.OpenHourfrom = openHourfrom; this.OpenHourTo = openHourTo; this.isActive = isActive; } but when I run one of repo methods this exception had thrown: org.springframework.data.mapping.model.MappingException: No property openHourfrom found on entity class com.tosan.entity.Instrument to bind constructor parameter to! at org.springframework.data.mapping.model.PersistentEntityParameterValueProvider.getParameterValue(PersistentEntityParameterValueProvider.java:74) at .... Note that I use spring-confix.xml with bellow settings: <mongo:mongo-client host="IP" port="Port" > <mongo:client-options write-concern="NORMAL" connections-per-host="1000" threads-allowed-to-block-for-connection-multiplier="600" connect-timeout="10000" max-wait-time="15000" socket-keep-alive="true" socket-timeout="15000" /> </mongo:mongo-client> I wonder how can I set auto update property of hibernate spring to true, so that I could update my document and add new properties. A: Use import org.springframework.data.mongodb.core.mapping.Field; @Field annotaion for each filed Check constuctor of entity class and make sure parameter names are correct A: The MongoRepository defined custom query methods use the constructor parameter names for locate the property to use in search, so this parameters names must be have the same same as the Document entity properties. A: Although not applicable to the subject of this topic, in some cases, adding no args constructor to the document entity will solve this error. A: my problem was, that I set a property inside the entity by ticker.setLastPrice(new LastPriceDbo() {{ setPrice(...) }} when I changed to var lastPrice = new LastPriceDbo(); lastPrice.setPrice(...) ticker.setLastPrice(lastPrice); it worked A: For some people in the future (Dec 2022). As someone mentioned before: ,,The MongoRepository defined custom query methods use the constructor parameter names for locate the property to use in search, so this parameters names must be have the same same as the Document entity properties." Example: @Data //Lombok creates constructor and Getter/Setter methods @Document() //User is Collection class public class User { @Id private String id; private Gender Gender; public User(Gender Gender) { this.Gender = Gender; }} Normally IntelliJ creates the constructur with lowercase parameter like User(Gender gender){ Gender = gender; } Change it to this, because in this case the entity name "Gender" in User class is in uppercase, so the Parameter should also be in uppercase, for some reason. 
User(Gender Gender){ this.Gender = Gender; } public enum Gender { MALE, FEMALE } This problem doesn't show up when I was creating the user, only if my service was using MongoRepository: @AllArgsConstructor @Service public class UserService { private final UserRepository userRepository; public List<User> getAllUser() { return userRepository.findAll(); } } public interface UserRepository extends MongoRepository<User, String> { //Optional<User> findUserByEmail(String email); } This was my case, maybe you have some other problems. Keep going buddys.
MappingException: "No property found on entity to bind constructor parameter to"?
I developed my project with spring data mongodb, and used to have this document: @Document(collection="Instrument") public class Instrument { @Id private Integer id; private String name; private String internalCode; private String fosMarketId; private String localCode; //setters...getters... and constructurs.... Now I need to add some property to my document as bellow: .... private Long from; private Long to; private Long OpenHourfrom; private Long OpenHourTo; private Boolean isActive; //setters...getters... and constructurs.... so I have this new constructor: @PersistenceConstructor public Instrument(Integer id, String name, String internalCode, String fosMarketId, String localCode, Long from, Long to, Long openHourfrom, Long openHourTo, Boolean isActive) { super(); this.id = id; this.name = name; this.internalCode = internalCode; this.fosMarketId = fosMarketId; this.localCode = localCode; this.from = from; this.to = to; this.OpenHourfrom = openHourfrom; this.OpenHourTo = openHourTo; this.isActive = isActive; } but when I run one of repo methods this exception had thrown: org.springframework.data.mapping.model.MappingException: No property openHourfrom found on entity class com.tosan.entity.Instrument to bind constructor parameter to! at org.springframework.data.mapping.model.PersistentEntityParameterValueProvider.getParameterValue(PersistentEntityParameterValueProvider.java:74) at .... Note that I use spring-confix.xml with bellow settings: <mongo:mongo-client host="IP" port="Port" > <mongo:client-options write-concern="NORMAL" connections-per-host="1000" threads-allowed-to-block-for-connection-multiplier="600" connect-timeout="10000" max-wait-time="15000" socket-keep-alive="true" socket-timeout="15000" /> </mongo:mongo-client> I wonder how can I set auto update property of hibernate spring to true, so that I could update my document and add new properties.
[ "\nUse import org.springframework.data.mongodb.core.mapping.Field;\n@Field annotaion for each filed \nCheck constuctor of entity class and make sure parameter names are correct\n\n", "The MongoRepository defined custom query methods use the constructor parameter names for locate the property to use in search, so this parameters names must be have the same same as the Document entity properties.\n", "Although not applicable to the subject of this topic, in some cases, adding no args constructor to the document entity will solve this error.\n", "my problem was, that I set a property inside the entity by\nticker.setLastPrice(new LastPriceDbo() {{ setPrice(...) }}\n\nwhen I changed to\nvar lastPrice = new LastPriceDbo();\nlastPrice.setPrice(...)\nticker.setLastPrice(lastPrice);\n\nit worked\n", "For some people in the future (Dec 2022).\nAs someone mentioned before:\n,,The MongoRepository defined custom query methods use the constructor parameter names for locate the property to use in search, so this parameters names must be have the same same as the Document entity properties.\"\nExample:\n@Data //Lombok creates constructor and Getter/Setter methods\n@Document() //User is Collection class\npublic class User {\n @Id\n private String id;\n private Gender Gender;\n \n public User(Gender Gender) {\n this.Gender = Gender;\n}}\n\nNormally IntelliJ creates the constructur with lowercase parameter like\nUser(Gender gender){\n Gender = gender;\n}\n\nChange it to this, because in this case the entity name \"Gender\" in User class is in uppercase, so the Parameter should also be in uppercase, for some reason.\nUser(Gender Gender){\n this.Gender = Gender;\n}\n\npublic enum Gender {\n MALE, FEMALE\n}\n\nThis problem doesn't show up when I was creating the user, only if my\nservice was using MongoRepository:\n@AllArgsConstructor\n@Service\npublic class UserService {\n private final UserRepository userRepository;\n public List<User> getAllUser() {\n return userRepository.findAll();\n }\n}\n\n\n\n\npublic interface UserRepository extends MongoRepository<User, String> {\n //Optional<User> findUserByEmail(String email);\n}\n\nThis was my case, maybe you have some other problems.\nKeep going buddys.\n" ]
[ 0, 0, 0, 0, 0 ]
[]
[]
[ "hibernate", "java", "mongodb", "spring", "spring_data_mongodb" ]
stackoverflow_0047387143_hibernate_java_mongodb_spring_spring_data_mongodb.txt
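Connecting the answers above to the concrete error: the property is declared as OpenHourfrom but the constructor parameter is openHourfrom, so Spring Data cannot match them. A sketch of the conventional fix, renaming the fields to lowerCamelCase so field names and constructor parameter names line up (only the affected fields shown):

```java
import org.springframework.data.annotation.Id;
import org.springframework.data.annotation.PersistenceConstructor;
import org.springframework.data.mongodb.core.mapping.Document;

@Document(collection = "Instrument")
public class Instrument {

    @Id
    private Integer id;
    private Long openHourFrom;   // was OpenHourfrom
    private Long openHourTo;     // was OpenHourTo
    private Boolean isActive;

    @PersistenceConstructor
    public Instrument(Integer id, Long openHourFrom, Long openHourTo, Boolean isActive) {
        this.id = id;
        this.openHourFrom = openHourFrom;   // parameter name now matches the property name
        this.openHourTo = openHourTo;
        this.isActive = isActive;
    }
}
```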
Q: How to lowercase selected item in a list I have x= ['AA', 'BB', 'CC'] and I want to lower case only 'BB'. A: To transform a string to lowercase, you use string.lower() To answer your question, use x[1] = x[1].lower() A: perhaps try ` x = ['AA', 'BB', 'CC'] Str = x[1] print(Str.lower()) ` #0 = AA, 1 = BB, 2 = CC use 0 1 or 2 with x[NUMBER] I hope this works for you!! EDIT: or you can use x[num] directly instead of putting it in a variable
How to lowercase selected item in a list
I have x= ['AA', 'BB', 'CC'] and I want to lower case only 'BB'.
[ "To transform a string to lowercase, you use\nstring.lower()\n\nTo answer your question, use\nx[1] = x[1].lower()\n\n", "perhaps try\n`\nx = ['AA', 'BB', 'CC']\nStr = x[1]\nprint(Str.lower())\n`\n#0 = AA, 1 = BB, 2 = CC\nuse 0 1 or 2 with x[NUMBER]\nI hope this works for you!! \nEDIT: or you can use x[num] directly instead of putting it in a variable\n" ]
[ 0, 0 ]
[]
[]
[ "lowercase", "python", "string" ]
stackoverflow_0074665441_lowercase_python_string.txt
Q: Why does `git config --global credential.helper` show wincred when it seems that I'm using Git Credential Manager Core? As I understand this article, Git Credential Manager Core is not the same with Git Credential Manager or Windows Credentials. After using this command printf "host=github.com\nprotocol=https\nusername=ooker777\npassword=ghp_yourToken" | git credential-manager-core store I'm able to push. Checking Windows Credentials I only see my GitHub password is stored in there, which will not work because GitHub requires it to be token. So it's clear that I'm not using wincred. Yet git config --global credential.helper still shows that I'm using wincred. Why is that? A: wincred was the legacy credential storage on Windows. It has been replaced by GCM (Git Credential Manager), and after Git 2.38.1, is called manager (no longer "manager-core") If you have upgraded Git for Windows, you can safely change your credential helper to manager. # up to Git 2.38.1 on Windows git config --global credential.helper manager-core # Git 2.39+ git config --global credential.helper manager
Why does `git config --global credential.helper` show wincred when it seems that I'm using Git Credential Manager Core?
As I understand this article, Git Credential Manager Core is not the same with Git Credential Manager or Windows Credentials. After using this command printf "host=github.com\nprotocol=https\nusername=ooker777\npassword=ghp_yourToken" | git credential-manager-core store I'm able to push. Checking Windows Credentials I only see my GitHub password is stored in there, which will not work because GitHub requires it to be token. So it's clear that I'm not using wincred. Yet git config --global credential.helper still shows that I'm using wincred. Why is that?
[ "wincred was the legacy credential storage on Windows.\nIt has been replaced by GCM (Git Credential Manager), and after Git 2.38.1, is called manager (no longer \"manager-core\")\nIf you have upgraded Git for Windows, you can safely change your credential helper to manager.\n# up to Git 2.38.1 on Windows\ngit config --global credential.helper manager-core\n\n# Git 2.39+\ngit config --global credential.helper manager\n\n" ]
[ 1 ]
[]
[]
[ "git", "git_credential_manager" ]
stackoverflow_0074665534_git_git_credential_manager.txt
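To see where the wincred value actually comes from (system, global or repository config), git can print the origin of every credential.helper entry; a quick check before changing anything:

```bash
# list every configured credential helper and the config file each one comes from
git config --show-origin --get-all credential.helper
```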
Q: Angular problem using third-party component from lazy-loaded module I have an Angular project and I want to use a third-party component (FullCalendarComponent), which is declarated in the third-party module - FullCalendarModule) in my own lazy-loaded module. But the problem is, that the third-party module (FullCalendarModule) uses a BrowserModule, which should not be. So, my Angular app can use the third-party component (FullCalendarComponent) only in non-lazy-loaded module, because, otherwise it gives an error: Error: Providers from the `BrowserModule` have already been loaded. If you need access to common directives such as NgIf and NgFor, import the `CommonModule` instead. I have created an issue on their tracker: https://github.com/fullcalendar/fullcalendar-angular/issues/423 But is there a way to bypass this for now, so I can use it in a lazy-loaded module? A: One way to bypass this issue is to import the CommonModule instead of the BrowserModule in the FullCalendarModule. This will allow you to use the FullCalendarComponent in a lazy-loaded module without encountering the error. Here is an example of how to do this: // FullCalendarModule import { NgModule } from '@angular/core'; import { CommonModule } from '@angular/common'; // Import CommonModule instead of BrowserModule import { FullCalendarComponent } from './full-calendar.component'; @NgModule({ imports: [ CommonModule // Import CommonModule instead of BrowserModule ], declarations: [ FullCalendarComponent ], exports: [ FullCalendarComponent ] }) export class FullCalendarModule {} Once you have done this, you should be able to use the FullCalendarComponent in your lazy-loaded module without any issues. Note that importing the CommonModule may cause some differences in the behavior of the FullCalendarComponent compared to when it is imported using the BrowserModule, so you may need to test your implementation carefully to ensure that it works as expected.
Angular problem using third-party component from lazy-loaded module
I have an Angular project and I want to use a third-party component (FullCalendarComponent), which is declarated in the third-party module - FullCalendarModule) in my own lazy-loaded module. But the problem is, that the third-party module (FullCalendarModule) uses a BrowserModule, which should not be. So, my Angular app can use the third-party component (FullCalendarComponent) only in non-lazy-loaded module, because, otherwise it gives an error: Error: Providers from the `BrowserModule` have already been loaded. If you need access to common directives such as NgIf and NgFor, import the `CommonModule` instead. I have created an issue on their tracker: https://github.com/fullcalendar/fullcalendar-angular/issues/423 But is there a way to bypass this for now, so I can use it in a lazy-loaded module?
[ "One way to bypass this issue is to import the CommonModule instead of the BrowserModule in the FullCalendarModule. This will allow you to use the FullCalendarComponent in a lazy-loaded module without encountering the error.\nHere is an example of how to do this:\n// FullCalendarModule\nimport { NgModule } from '@angular/core';\nimport { CommonModule } from '@angular/common'; // Import CommonModule instead of BrowserModule\nimport { FullCalendarComponent } from './full-calendar.component';\n\n@NgModule({\nimports: [\nCommonModule // Import CommonModule instead of BrowserModule\n],\ndeclarations: [\nFullCalendarComponent\n],\nexports: [\nFullCalendarComponent\n]\n})\nexport class FullCalendarModule {}\n\nOnce you have done this, you should be able to use the FullCalendarComponent in your lazy-loaded module without any issues. Note that importing the CommonModule may cause some differences in the behavior of the FullCalendarComponent compared to when it is imported using the BrowserModule, so you may need to test your implementation carefully to ensure that it works as expected.\n" ]
[ 0 ]
[]
[]
[ "angular", "angular_routing", "fullcalendar", "lazy_loading" ]
stackoverflow_0074654800_angular_angular_routing_fullcalendar_lazy_loading.txt
Q: RuntimeWarning: coroutine 'setup' was never awaited setup(self) I am trying to create a discord bot, but I am caught in an unending loop of problems. In every video I've watched, it is recommended that you write the cog loading function as thus: async def load_auto(): for filename in os.listdir('./cogs'): if filename.endswith('.py'): await bot.load_extension(f'cogs.{filename[:-3]}') but every time I use this form of cog loading it gives me this error: C:\Users\galan\AppData\Local\Programs\Python\Python38-32\lib\site-packages\discord\ext\commands\bot.py:618: RuntimeWarning: coroutine 'setup' was never awaited setup(self) RuntimeWarning: Enable tracemalloc to get the object allocation traceback Traceback (most recent call last): File "c:/Users/galan/Desktop/new sambot/main.py", line 118, in <module> asyncio.get_event_loop().run_until_complete(main()) File "C:\Users\galan\AppData\Local\Programs\Python\Python38-32\lib\asyncio\base_events.py", line 616, in run_until_completereturn future.result() File "c:/Users/galan/Desktop/new sambot/main.py", line 115, in main await load_auto() File "c:/Users/galan/Desktop/new sambot/main.py", line 16, in load_auto await bot.load_extension(f'cogs.{filename[:-3]}') TypeError: object NoneType can't be used in 'await' expression I've tried not awaiting the bot.load_extension which resulted in it giving a C:\Users\galan\AppData\Local\Programs\Python\Python38-32\lib\site-packages\discord\ext\commands\bot.py:618: RuntimeWarning: coroutine 'setup' was never awaited setup(self) while this may look better, it still does not load the cogs. and it doesn't follow others' code where it seemed like it was working. Here is a part of my main.py file: from discord.ext import commands import discord import os import asyncio intents = discord.Intents.all() intents.members= True sambot_var = ('sambot', 'sambot!', 'sambot?') bot = commands.Bot(command_prefix='$', intents=intents) async def load_auto(): for filename in os.listdir('./cogs'): if filename.endswith('.py'): bot.load_extension(f'cogs.{filename[:-3]}') async def main(): await load_auto() await bot.start('token') asyncio.get_event_loop().run_until_complete(main()) asyncio.run(main()) and one of my cogs: from discord.ext import commands import sys import discord import random sys.path.append("..") import datetime import pytz import re import asyncio class Personality(commands.Cog): def init(self, client): self.client = client ... async def setup(bot): await bot.add_cog(Personality(bot)) My questions are: Does await bot.load_extension(cogs) actually not need to be awaited? Where did I go wrong? What is the solution? EDIT: The problem was that I had the old discord package ffs. My code worked fine, it just didn't work fine on my device. The problem of await bot.load_extension(cog) was caused by my outdated package. It's always the most simplest answer. Either way, thank you for answering my questions. A: The add_cog is not an async function or coroutine. It's a normal function. This is easily fixable by removing the await statement. Before await bot.add_cog(Personality(bot)) After bot.add_cog(Personality(bot)) Edit. Sorry, I forgot to answer the question, Does await bot.load_extension(cogs) actually not need to be awaited? The answer to that question is that, as of discord.py 2, the load_extension was changed to an async function because they might need it in the future. So, for now, you have to await it. 
A: What are your intention again intents.members= True, although you already use all the intentions that are intents = discord.Intents.all(). Well, this line of code can be entered as follows: #main.py ... bot = commands.Bot(command_prefix='$', intents=discord.Intents.all()) ... #main.py async def load_cogs(): for filename in os.listdir('./cogs'): if filename.endswith('.py'): await bot.load_extension(f'cogs.{filename[:-3]}') async def main(): await load_cogs() await bot.start('token') if __name__ == '__main__': asyncio.run(main()) # in some cog class MyCog(commands.Cog): def __init__(self, bot): self.bot = bot async def setup(bot): await bot.add_cog(MyCog(bot)) IMPORTANTLY! Before inserting someone else's code, check it for indentations and spaces so that you later not go to the forum with stupid questions.
RuntimeWarning: coroutine 'setup' was never awaited setup(self)
I am trying to create a discord bot, but I am caught in an unending loop of problems. In every video I've watched, it is recommended that you write the cog loading function as thus: async def load_auto(): for filename in os.listdir('./cogs'): if filename.endswith('.py'): await bot.load_extension(f'cogs.{filename[:-3]}') but every time I use this form of cog loading it gives me this error: C:\Users\galan\AppData\Local\Programs\Python\Python38-32\lib\site-packages\discord\ext\commands\bot.py:618: RuntimeWarning: coroutine 'setup' was never awaited setup(self) RuntimeWarning: Enable tracemalloc to get the object allocation traceback Traceback (most recent call last): File "c:/Users/galan/Desktop/new sambot/main.py", line 118, in <module> asyncio.get_event_loop().run_until_complete(main()) File "C:\Users\galan\AppData\Local\Programs\Python\Python38-32\lib\asyncio\base_events.py", line 616, in run_until_completereturn future.result() File "c:/Users/galan/Desktop/new sambot/main.py", line 115, in main await load_auto() File "c:/Users/galan/Desktop/new sambot/main.py", line 16, in load_auto await bot.load_extension(f'cogs.{filename[:-3]}') TypeError: object NoneType can't be used in 'await' expression I've tried not awaiting the bot.load_extension which resulted in it giving a C:\Users\galan\AppData\Local\Programs\Python\Python38-32\lib\site-packages\discord\ext\commands\bot.py:618: RuntimeWarning: coroutine 'setup' was never awaited setup(self) while this may look better, it still does not load the cogs. and it doesn't follow others' code where it seemed like it was working. Here is a part of my main.py file: from discord.ext import commands import discord import os import asyncio intents = discord.Intents.all() intents.members= True sambot_var = ('sambot', 'sambot!', 'sambot?') bot = commands.Bot(command_prefix='$', intents=intents) async def load_auto(): for filename in os.listdir('./cogs'): if filename.endswith('.py'): bot.load_extension(f'cogs.{filename[:-3]}') async def main(): await load_auto() await bot.start('token') asyncio.get_event_loop().run_until_complete(main()) asyncio.run(main()) and one of my cogs: from discord.ext import commands import sys import discord import random sys.path.append("..") import datetime import pytz import re import asyncio class Personality(commands.Cog): def init(self, client): self.client = client ... async def setup(bot): await bot.add_cog(Personality(bot)) My questions are: Does await bot.load_extension(cogs) actually not need to be awaited? Where did I go wrong? What is the solution? EDIT: The problem was that I had the old discord package ffs. My code worked fine, it just didn't work fine on my device. The problem of await bot.load_extension(cog) was caused by my outdated package. It's always the most simplest answer. Either way, thank you for answering my questions.
[ "The add_cog is not an async function or coroutine. It's a normal function. This is easily fixable by removing the await statement.\nBefore\nawait bot.add_cog(Personality(bot))\n\nAfter\nbot.add_cog(Personality(bot))\n\nEdit.\nSorry, I forgot to answer the question, Does await bot.load_extension(cogs) actually not need to be awaited?\nThe answer to that question is that, as of discord.py 2, the load_extension was changed to an async function because they might need it in the future. So, for now, you have to await it.\n", "What are your intention again intents.members= True, although you already use all the intentions that are intents = discord.Intents.all(). Well, this line of code can be entered as follows:\n#main.py\n...\nbot = commands.Bot(command_prefix='$', intents=discord.Intents.all())\n...\n\n#main.py\nasync def load_cogs():\n for filename in os.listdir('./cogs'):\n if filename.endswith('.py'):\n await bot.load_extension(f'cogs.{filename[:-3]}')\n\n\nasync def main():\n await load_cogs()\n await bot.start('token')\n\nif __name__ == '__main__':\n asyncio.run(main())\n\n# in some cog\nclass MyCog(commands.Cog):\n def __init__(self, bot):\n self.bot = bot\n\n async def setup(bot):\n await bot.add_cog(MyCog(bot))\n\nIMPORTANTLY! Before inserting someone else's code, check it for indentations and spaces so that you later not go to the forum with stupid questions.\n" ]
[ 0, 0 ]
[]
[]
[ "bots", "discord", "discord.py", "python" ]
stackoverflow_0074664982_bots_discord_discord.py_python.txt
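Since the question's edit says the real culprit was an outdated discord package, the corresponding fix is simply an upgrade; a quick check that the async load_extension/add_cog API (discord.py 2.x) is actually installed:

```bash
python -m pip install -U discord.py
python -c "import discord; print(discord.__version__)"   # should print 2.x
```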
Q: How to give custom response if user input other data types? response = input("Do you want some food? (Y/N): ") if response == "Y": print("Come! There is pizza for you") elif response == "N": print("Alright, let us do Sisha instead.") elif response == range(999): print("The required response is Y/N.") else: print("I do not understand the prompt") Q1: How to give feedback when user input numbers instead of string? A1: I take a look in python documentation but it seems that range cannot be used in if statement? I try to modify the code by stating deret_angka = int or float for n in deret_angka: print("The required response is Y/N.") and third ifs condition to: elif response == deret_angka: print("The required response is Y/N.") But got TypeError: 'type' object is not iterable Q2: How to give Y & N value even if its lower case y/n? A2: I tried to put "Y" or "y" but it doesn't work and just passed to next if condition. A: It's pretty simple, you can give feedback in many ways in python. Let's say the user typed a number so we can try to convert the type of the input and see if it raises an error like that: response = input("Do you want some food? (Y/N): ") try: float(response) int(response) except ValueError: #it a string or a bool print("The required response is Y/N.") If you want to check on a number in a specific range you can do it like that: if response in [range(999)]: pass You can't check just range(999) because it is an object. You need to convert this object to a list to loop over it. To check lower case you can do it in some ways: response = input("Do you want some food? (Y/N): ") if response in ['Y', 'y']: pass elif response in ['N', 'n']: pass #or like that: if response.lower = 'y': # Just convert the input to lower pass if response.upper = 'N': # You can also do this vice versa and convert to upper and check. pass
How to give custom response if user input other data types?
response = input("Do you want some food? (Y/N): ") if response == "Y": print("Come! There is pizza for you") elif response == "N": print("Alright, let us do Sisha instead.") elif response == range(999): print("The required response is Y/N.") else: print("I do not understand the prompt") Q1: How to give feedback when user input numbers instead of string? A1: I take a look in python documentation but it seems that range cannot be used in if statement? I try to modify the code by stating deret_angka = int or float for n in deret_angka: print("The required response is Y/N.") and third ifs condition to: elif response == deret_angka: print("The required response is Y/N.") But got TypeError: 'type' object is not iterable Q2: How to give Y & N value even if its lower case y/n? A2: I tried to put "Y" or "y" but it doesn't work and just passed to next if condition.
[ "It's pretty simple, you can give feedback in many ways in python.\nLet's say the user typed a number so we can try to convert the type of the input and see if it raises an error like that:\nresponse = input(\"Do you want some food? (Y/N): \")\ntry:\n float(response)\n int(response)\nexcept ValueError: #it a string or a bool\n print(\"The required response is Y/N.\")\n\nIf you want to check on a number in a specific range you can do it like that:\nif response in [range(999)]:\n pass\n\nYou can't check just range(999) because it is an object. You need to convert this object to a list to loop over it.\nTo check lower case you can do it in some ways:\nresponse = input(\"Do you want some food? (Y/N): \")\n\nif response in ['Y', 'y']:\n pass\nelif response in ['N', 'n']:\n pass\n\n#or like that:\n\nif response.lower = 'y': # Just convert the input to lower\n pass\nif response.upper = 'N': # You can also do this vice versa and convert to upper and check.\n pass\n\n" ]
[ 0 ]
[]
[]
[ "if_statement", "python_3.x" ]
stackoverflow_0074635779_if_statement_python_3.x.txt
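Pulling the two sub-answers above together (note that the answer's response.lower = 'y' lines are missing the call parentheses and a ==), a small sketch that addresses Q1 and Q2 at once: lower-case the input for the Y/N check, and treat anything that parses as a number as the "numbers instead of a string" case:

```python
def is_number(text):
    try:
        float(text)          # accepts "3", "3.5", "-2", ...
        return True
    except ValueError:
        return False

response = input("Do you want some food? (Y/N): ").strip()

if response.lower() == "y":           # handles both "Y" and "y"
    print("Come! There is pizza for you")
elif response.lower() == "n":
    print("Alright, let us do Sisha instead.")
elif is_number(response):             # user typed a number
    print("The required response is Y/N.")
else:
    print("I do not understand the prompt")
```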
Q: Flutter retrofit: null is not a subtype of String [code and error screenshots omitted] I want to make some fields dynamic, but I get an error like this: 'null is not a subtype of type String'. How can I fix it? Is it possible to send null data together with a File? A: One of your fields is getting a null value but its type is not nullable.
Flutter retrofit null is not subtype of string
Code Photo i want to set some fields dynamic, but i got error like this 'null is not subtype of type 'string''. How can i fix it? Is it possible to send null data together File?
[ "One of your fields is getting a null value but its type is not nullable.\n" ]
[ 0 ]
[]
[]
[ "flutter", "retrofit" ]
stackoverflow_0074664820_flutter_retrofit.txt
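Following the answer above, the usual fix under Dart null safety is to declare the optional fields as nullable; a tiny sketch with a hypothetical field name:

```dart
class UploadRequest {
  UploadRequest({this.description});

  // was: final String description;  -> "type 'Null' is not a subtype of type 'String'"
  final String? description;         // nullable, so the field may be omitted or sent as null
}
```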
Q: How to store a thread join handle inside Rust struct I have a requirement to create a network receiver thread by storing the thread join handle inside a network receiver struct. But I'm getting errors error[E0506]: cannot assign to self.join_handle because it is borrowed and error[E0521]: borrowed data escapes outside of associated function, while I'm creating the thread inside the start method. This is my code use std::thread; use std::sync::atomic::{AtomicBool, Ordering}; pub struct NetworkReceiver { terminate_flag: AtomicBool, join_handle: Option<thread::JoinHandle<()>> } impl NetworkReceiver { pub fn new() -> NetworkReceiver { let net_recv_intf = NetworkReceiver { terminate_flag: AtomicBool::new(false), join_handle: None }; net_recv_intf } pub fn start(&mut self) { self.join_handle = Some(thread::spawn(|| self.run())); } pub fn run(&mut self) { while !self.terminate_flag.load(Ordering::Relaxed) { } } pub fn terminate(&mut self) { self.terminate_flag.store(true, Ordering::Relaxed); } } A: You could separate the join_handle from everything else inside NetworkReceiver like this: use std::thread; use std::sync::{ atomic::{AtomicBool, Ordering}, Arc, }; pub struct NetworkReceiver { terminate_flag: AtomicBool, } pub struct RunningNetworkReceiver { join_handle: thread::JoinHandle<()>, network_receiver: Arc<NetworkReceiver>, } impl NetworkReceiver { pub fn new() -> NetworkReceiver { let net_recv_intf = NetworkReceiver { terminate_flag: AtomicBool::new(false), }; net_recv_intf } pub fn start(self) -> RunningNetworkReceiver { let network_receiver = Arc::new(self); let join_handle = { let network_receiver = network_receiver.clone(); thread::spawn(move || network_receiver.run()) }; RunningNetworkReceiver { join_handle, network_receiver, } } pub fn run(&self) { while !self.terminate_flag.load(Ordering::Relaxed) { } } } impl RunningNetworkReceiver { pub fn terminate(&self) { self.network_receiver.terminate_flag.store(true, Ordering::Relaxed); } } If you need exclusive acces to anything inside of NetworkReceiver after all you'd have to wrap it (or the part that needs it) in a RwLock or Mutex or similar. A: When using variables in a thread, that thread must have ownership. This line (thread::spawn(|| self.run()); tries to move self into the thread but cannot because self needs to outlive the function. I beleive you'll have to wrap your NetworkReceiver within an Arc Mutex. You could change your NetworkReceiver::new() to return a Arc<Mutex> and all the associated functions would change from fn foo(&mut self) to fn foo(self: Arc<Self>)
How to store a thread join handle inside Rust struct
I have a requirement to create a network receiver thread by storing the thread join handle inside a network receiver struct. But I'm getting errors error[E0506]: cannot assign to self.join_handle because it is borrowed and error[E0521]: borrowed data escapes outside of associated function, while I'm creating the thread inside the start method. This is my code use std::thread; use std::sync::atomic::{AtomicBool, Ordering}; pub struct NetworkReceiver { terminate_flag: AtomicBool, join_handle: Option<thread::JoinHandle<()>> } impl NetworkReceiver { pub fn new() -> NetworkReceiver { let net_recv_intf = NetworkReceiver { terminate_flag: AtomicBool::new(false), join_handle: None }; net_recv_intf } pub fn start(&mut self) { self.join_handle = Some(thread::spawn(|| self.run())); } pub fn run(&mut self) { while !self.terminate_flag.load(Ordering::Relaxed) { } } pub fn terminate(&mut self) { self.terminate_flag.store(true, Ordering::Relaxed); } }
[ "You could separate the join_handle from everything else inside NetworkReceiver like this:\nuse std::thread;\nuse std::sync::{\n atomic::{AtomicBool, Ordering},\n Arc,\n};\n\npub struct NetworkReceiver {\n terminate_flag: AtomicBool,\n}\npub struct RunningNetworkReceiver {\n join_handle: thread::JoinHandle<()>,\n network_receiver: Arc<NetworkReceiver>,\n}\n\nimpl NetworkReceiver {\n pub fn new() -> NetworkReceiver {\n let net_recv_intf = NetworkReceiver {\n terminate_flag: AtomicBool::new(false),\n };\n\n net_recv_intf\n }\n\n pub fn start(self) -> RunningNetworkReceiver {\n let network_receiver = Arc::new(self);\n let join_handle = {\n let network_receiver = network_receiver.clone();\n thread::spawn(move || network_receiver.run())\n };\n RunningNetworkReceiver {\n join_handle,\n network_receiver,\n }\n }\n pub fn run(&self) {\n while !self.terminate_flag.load(Ordering::Relaxed) {\n }\n }\n}\nimpl RunningNetworkReceiver {\n pub fn terminate(&self) {\n self.network_receiver.terminate_flag.store(true, Ordering::Relaxed);\n }\n}\n\nIf you need exclusive acces to anything inside of NetworkReceiver after all you'd have to wrap it (or the part that needs it) in a RwLock or Mutex or similar.\n", "When using variables in a thread, that thread must have ownership. This line (thread::spawn(|| self.run()); tries to move self into the thread but cannot because self needs to outlive the function.\nI beleive you'll have to wrap your NetworkReceiver within an Arc Mutex. You could change your NetworkReceiver::new() to return a Arc<Mutex> and all the associated functions would change from fn foo(&mut self) to fn foo(self: Arc<Self>)\n" ]
[ 3, 0 ]
[ "use std::thread;\nuse std::sync::atomic::{AtomicBool, Ordering};\n\npub struct NetworkReceiver {\n terminate_flag: AtomicBool,\n join_handle: Option<thread::JoinHandle<()>>\n}\n\nimpl NetworkReceiver {\n pub fn new() -> NetworkReceiver {\n let net_recv_intf = NetworkReceiver {\n terminate_flag: AtomicBool::new(false),\n join_handle: None\n };\n\n net_recv_intf\n }\n\n pub fn start(&mut self) {\n let join_handle = thread::spawn(|| self.run());\n self.join_handle = Some(join_handle);\n }\n\n fn run(&self) {\n let mut buff: [u8; 2048] = [0; 2048];\n\n while !self.terminate_flag.load(Ordering::Relaxed) {\n // Do something here\n }\n }\n\n pub fn terminate(&mut self) {\n self.terminate_flag.store(true, Ordering::Relaxed);\n }\n}\n\nfn main() {\n let mut net_recv = NetworkReceiver::new();\n net_recv.start();\n net_recv.terminate();\n}\n\nI've made a few changes to the code you provided:\nI've added a main function that creates an instance of NetworkReceiver and calls the start() and terminate() methods on it.\nI've made the run() method a private method because it should only be called by the start() method.\nI've made the run() method take a reference to self instead of a mutable reference, because it doesn't modify the NetworkReceiver instance.\n" ]
[ -1 ]
[ "borrow_checker", "multithreading", "rust" ]
stackoverflow_0074665344_borrow_checker_multithreading_rust.txt
Q: How do I purge messages more than 14 days old without using bulkDelete() with discord.js? I want to make a bot that deletes messages older than an amount specified. I know it will be slow but bulkDelete() has the stipulation of only allowing you to delete messages younger than 14 days. I want to circumvent this by deleting one at a time. How would I do this? I tried doing bulkDelete() but that does not work. How do I do it with only the delete command? A: You can't delete messages older than 14 days. It is not allowed by Discord Discord Developer Portal - Documentation - Channel This endpoint will not delete messages older than 2 weeks, and will fail with a 400 BAD REQUEST if any message provided is older than that or if any duplicate message IDs are provided.
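Worth adding for completeness: the two-week restriction quoted above applies to the bulk-delete endpoint, so deleting messages one at a time (as the question proposes) is still possible, just slow and rate-limited. A rough sketch for discord.js (v13/v14-style manager API; channel and the age threshold are assumed to come from your own command handling):
async function purgeOlderThan(channel, maxAgeDays) {
  const cutoff = Date.now() - maxAgeDays * 24 * 60 * 60 * 1000;
  let before; // undefined on the first page, then the oldest message id seen so far

  for (;;) {
    // Page backwards through the channel history, 100 messages at a time.
    const options = before ? { limit: 100, before } : { limit: 100 };
    const batch = await channel.messages.fetch(options);
    if (batch.size === 0) break;

    for (const message of batch.values()) {
      if (message.createdTimestamp < cutoff) {
        await message.delete(); // single deletes have no 14-day limit
      }
    }
    before = batch.last().id; // fetch returns newest first, so last() is the oldest
  }
}
Note this will be slow on large channels because every delete is its own API call.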
How do I purge messages more than 14 days old without using bulkDelete() with discord.js?
I want to make a bot that deletes messages older than an amount specified. I know it will be slow but bulkDelete() has the stipulation of only allowing you to delete messages younger than 14 days. I want to circumvent this by deleting one at a time. How would I do this? I tried doing bulkDelete() but that does not work. How do I do it with only the delete command?
[ "You can't delete messages older than 14 days.\nIt is not allowed by Discord\nDiscord Developer Portal - Documentation - Channel\nThis endpoint will not delete messages older than 2 weeks, and will fail with a 400 BAD REQUEST if any message provided is older than that or if any duplicate message IDs are provided.\n" ]
[ 0 ]
[]
[]
[ "discord", "discord.js", "javascript", "message", "purge" ]
stackoverflow_0074544013_discord_discord.js_javascript_message_purge.txt
Q: Pretty JSON Formatting in IPython Notebook Is there an existing way to get json.dumps() output to appear as "pretty" formatted JSON inside ipython notebook? A: json.dumps has an indent argument, printing the result should be enough: print(json.dumps(obj, indent=2)) A: This might be slightly different than what OP was asking for, but you can do use IPython.display.JSON to interactively view a JSON/dict object. from IPython.display import JSON JSON({'a': [1, 2, 3, 4,], 'b': {'inner1': 'helloworld', 'inner2': 'foobar'}}) Edit: This works in Hydrogen and JupyterLab, but not in Jupyter Notebook or in IPython terminal. Inside Hydrogen: A: import uuid from IPython.display import display_javascript, display_html, display import json class RenderJSON(object): def __init__(self, json_data): if isinstance(json_data, dict): self.json_str = json.dumps(json_data) else: self.json_str = json_data self.uuid = str(uuid.uuid4()) def _ipython_display_(self): display_html('<div id="{}" style="height: 600px; width:100%;"></div>'.format(self.uuid), raw=True) display_javascript(""" require(["https://rawgit.com/caldwell/renderjson/master/renderjson.js"], function() { document.getElementById('%s').appendChild(renderjson(%s)) }); """ % (self.uuid, self.json_str), raw=True) To ouput your data in collapsible format: RenderJSON(your_json) Copy pasted from here: https://www.reddit.com/r/IPython/comments/34t4m7/lpt_print_json_in_collapsible_format_in_ipython/ Github: https://github.com/caldwell/renderjson A: I am just adding the expanded variable to @Kyle Barron answer: from IPython.display import JSON JSON(json_object, expanded=True) A: I found this page looking for a way to eliminate the literal \ns in the output. We're doing a coding interview using Jupyter and I wanted a way to display the result of a function real perty like. My version of Jupyter (4.1.0) doesn't render them as actual line breaks. The solution I produced is (I sort of hope this is not the best way to do it but...) import json output = json.dumps(obj, indent=2) line_list = output.split("\n") # Sort of line replacing "\n" with a new line # Now that our obj is a list of strings leverage print's automatic newline for line in line_list: print line I hope this helps someone! A: For Jupyter notebook, may be is enough to generate the link to open in a new tab (with the JSON viewer of firefox): from IPython.display import Markdown def jsonviewer(d): f=open('file.json','w') json.dump(d,f) f.close() print('open in firefox new tab:') return Markdown('[file.json](./file.json)') jsonviewer('[{"A":1}]') 'open in firefox new tab: file.json A: Just an extension to @filmor answer(https://stackoverflow.com/a/18873131/7018342). This encodes elements that might not compatible with json.dumps and also gives a handy function that can be used just like you would use print. import json class NpEncoder(json.JSONEncoder): def default(self, obj): if isinstance(obj, np.integer): return int(obj) if isinstance(obj, np.floating): return float(obj) if isinstance(obj, np.ndarray): return obj.tolist() if isinstance(obj, np.bool_): return bool(obj) return super(NpEncoder, self).default(obj) def print_json(json_dict): print(json.dumps(json_dict, indent=2, cls=NpEncoder)) Usage: json_dict = {"Name":{"First Name": "Lorem", "Last Name": "Ipsum"}, "Age":26} print_json(json_dict) >>> { "Name": { "First Name": "Lorem", "Last Name": "Ipsum" }, "Age": 26 } A: For some uses, indent should make it: print(json.dumps(parsed, indent=2)) A Json structure is basically tree structure. 
While trying to find something fancier, I came across this nice paper depicting other forms of nice trees that might be interesting: https://blog.ouseful.info/2021/07/13/exploring-the-hierarchical-structure-of-dataframes-and-csv-data/. It has some interactive trees and even comes with some code including linking to this question and the collapsing tree from Shankar ARUL. Other samples include using plotly Here is the code example from plotly: import plotly.express as px fig = px.treemap( names = ["Eve","Cain", "Seth", "Enos", "Noam", "Abel", "Awan", "Enoch", "Azura"], parents = ["", "Eve", "Eve", "Seth", "Seth", "Eve", "Eve", "Awan", "Eve"] ) fig.update_traces(root_color="lightgrey") fig.update_layout(margin = dict(t=50, l=25, r=25, b=25)) fig.show() And using treelib. On that note, This github also provides nice visualizations. Here is one example using treelib: #%pip install treelib from treelib import Tree country_tree = Tree() # Create a root node country_tree.create_node("Country", "countries") # Group by country for country, regions in wards_df.head(5).groupby(["CTRY17NM", "CTRY17CD"]): # Generate a node for each country country_tree.create_node(country[0], country[1], parent="countries") # Group by region for region, las in regions.groupby(["GOR10NM", "GOR10CD"]): # Generate a node for each region country_tree.create_node(region[0], region[1], parent=country[1]) # Group by local authority for la, wards in las.groupby(['LAD17NM', 'LAD17CD']): # Create a node for each local authority country_tree.create_node(la[0], la[1], parent=region[1]) for ward, _ in wards.groupby(['WD17NM', 'WD17CD']): # Create a leaf node for each ward country_tree.create_node(ward[0], ward[1], parent=la[1]) # Output the hierarchical data country_tree.show() I have, based on this, created a function to convert json to a tree: from treelib import Node, Tree, node def json_2_tree(o , parent_id=None, tree=None, counter_byref=[0], verbose=False, listsNodeSymbol='+'): if tree is None: tree = Tree() root_id = counter_byref[0] if verbose: print(f"tree.create_node({'+'}, {root_id})") tree.create_node('+', root_id) counter_byref[0] += 1 parent_id = root_id if type(o) == dict: for k,v in o.items(): this_id = counter_byref[0] if verbose: print(f"tree.create_node({str(k)}, {this_id}, parent={parent_id})") tree.create_node(str(k), this_id, parent=parent_id) counter_byref[0] += 1 json_2_tree(v , parent_id=this_id, tree=tree, counter_byref=counter_byref, verbose=verbose, listsNodeSymbol=listsNodeSymbol) elif type(o) == list: if listsNodeSymbol is not None: if verbose: print(f"tree.create_node({listsNodeSymbol}, {counter_byref[0]}, parent={parent_id})") tree.create_node(listsNodeSymbol, counter_byref[0], parent=parent_id) parent_id=counter_byref[0] counter_byref[0] += 1 for i in o: json_2_tree(i , parent_id=parent_id, tree=tree, counter_byref=counter_byref, verbose=verbose,listsNodeSymbol=listsNodeSymbol) else: #node if verbose: print(f"tree.create_node({str(o)}, {counter_byref[0]}, parent={parent_id})") tree.create_node(str(o), counter_byref[0], parent=parent_id) counter_byref[0] += 1 return tree Then for example: import json json_2_tree(json.loads('{"2": 3, "4": [5, 6]}'),verbose=False,listsNodeSymbol='+').show() gives: + ├── 2 │ └── 3 └── 4 └── + ├── 5 └── 6 While json_2_tree(json.loads('{"2": 3, "4": [5, 6]}'),listsNodeSymbol=None).show() Gives + ├── 2 │ └── 3 └── 4 ├── 5 └── 6
Pretty JSON Formatting in IPython Notebook
Is there an existing way to get json.dumps() output to appear as "pretty" formatted JSON inside ipython notebook?
[ "json.dumps has an indent argument, printing the result should be enough:\nprint(json.dumps(obj, indent=2))\n\n", "This might be slightly different than what OP was asking for, but you can do use IPython.display.JSON to interactively view a JSON/dict object.\nfrom IPython.display import JSON\nJSON({'a': [1, 2, 3, 4,], 'b': {'inner1': 'helloworld', 'inner2': 'foobar'}})\n\nEdit: This works in Hydrogen and JupyterLab, but not in Jupyter Notebook or in IPython terminal.\nInside Hydrogen:\n\n\n", "import uuid\nfrom IPython.display import display_javascript, display_html, display\nimport json\n\nclass RenderJSON(object):\n def __init__(self, json_data):\n if isinstance(json_data, dict):\n self.json_str = json.dumps(json_data)\n else:\n self.json_str = json_data\n self.uuid = str(uuid.uuid4())\n\n def _ipython_display_(self):\n display_html('<div id=\"{}\" style=\"height: 600px; width:100%;\"></div>'.format(self.uuid), raw=True)\n display_javascript(\"\"\"\n require([\"https://rawgit.com/caldwell/renderjson/master/renderjson.js\"], function() {\n document.getElementById('%s').appendChild(renderjson(%s))\n });\n \"\"\" % (self.uuid, self.json_str), raw=True)\n\nTo ouput your data in collapsible format:\nRenderJSON(your_json)\n\n\nCopy pasted from here: https://www.reddit.com/r/IPython/comments/34t4m7/lpt_print_json_in_collapsible_format_in_ipython/\nGithub: https://github.com/caldwell/renderjson\n", "I am just adding the expanded variable to @Kyle Barron answer:\nfrom IPython.display import JSON\nJSON(json_object, expanded=True)\n\n", "I found this page looking for a way to eliminate the literal \\ns in the output. We're doing a coding interview using Jupyter and I wanted a way to display the result of a function real perty like. My version of Jupyter (4.1.0) doesn't render them as actual line breaks. 
The solution I produced is (I sort of hope this is not the best way to do it but...)\nimport json\n\noutput = json.dumps(obj, indent=2)\n\nline_list = output.split(\"\\n\") # Sort of line replacing \"\\n\" with a new line\n\n# Now that our obj is a list of strings leverage print's automatic newline\nfor line in line_list:\n print line\n\nI hope this helps someone!\n", "For Jupyter notebook, may be is enough to generate the link to open in a new tab (with the JSON viewer of firefox):\nfrom IPython.display import Markdown\ndef jsonviewer(d):\n f=open('file.json','w')\n json.dump(d,f)\n f.close()\n print('open in firefox new tab:')\n return Markdown('[file.json](./file.json)')\n\njsonviewer('[{\"A\":1}]')\n'open in firefox new tab:\n\nfile.json\n", "Just an extension to @filmor answer(https://stackoverflow.com/a/18873131/7018342).\nThis encodes elements that might not compatible with json.dumps and also gives a handy function that can be used just like you would use print.\nimport json\nclass NpEncoder(json.JSONEncoder):\n def default(self, obj):\n if isinstance(obj, np.integer):\n return int(obj)\n if isinstance(obj, np.floating):\n return float(obj)\n if isinstance(obj, np.ndarray):\n return obj.tolist()\n if isinstance(obj, np.bool_):\n return bool(obj)\n return super(NpEncoder, self).default(obj)\n\ndef print_json(json_dict):\n print(json.dumps(json_dict, indent=2, cls=NpEncoder))\n\nUsage:\njson_dict = {\"Name\":{\"First Name\": \"Lorem\", \"Last Name\": \"Ipsum\"}, \"Age\":26}\nprint_json(json_dict)\n>>>\n{\n \"Name\": {\n \"First Name\": \"Lorem\",\n \"Last Name\": \"Ipsum\"\n },\n \"Age\": 26\n}\n\n", "For some uses, indent should make it:\nprint(json.dumps(parsed, indent=2))\n\nA Json structure is basically tree structure.\nWhile trying to find something fancier, I came across this nice paper depicting other forms of nice trees that might be interesting: https://blog.ouseful.info/2021/07/13/exploring-the-hierarchical-structure-of-dataframes-and-csv-data/.\nIt has some interactive trees and even comes with some code including linking to this question and the collapsing tree from Shankar ARUL.\nOther samples include using plotly Here is the code example from plotly:\nimport plotly.express as px\nfig = px.treemap(\n names = [\"Eve\",\"Cain\", \"Seth\", \"Enos\", \"Noam\", \"Abel\", \"Awan\", \"Enoch\", \"Azura\"],\n parents = [\"\", \"Eve\", \"Eve\", \"Seth\", \"Seth\", \"Eve\", \"Eve\", \"Awan\", \"Eve\"]\n)\nfig.update_traces(root_color=\"lightgrey\")\nfig.update_layout(margin = dict(t=50, l=25, r=25, b=25))\nfig.show()\n\n\n\nAnd using treelib. On that note, This github also provides nice visualizations. 
Here is one example using treelib:\n#%pip install treelib\nfrom treelib import Tree\n\ncountry_tree = Tree()\n# Create a root node\ncountry_tree.create_node(\"Country\", \"countries\")\n\n# Group by country\nfor country, regions in wards_df.head(5).groupby([\"CTRY17NM\", \"CTRY17CD\"]):\n # Generate a node for each country\n country_tree.create_node(country[0], country[1], parent=\"countries\")\n # Group by region\n for region, las in regions.groupby([\"GOR10NM\", \"GOR10CD\"]):\n # Generate a node for each region\n country_tree.create_node(region[0], region[1], parent=country[1])\n # Group by local authority\n for la, wards in las.groupby(['LAD17NM', 'LAD17CD']):\n # Create a node for each local authority\n country_tree.create_node(la[0], la[1], parent=region[1])\n for ward, _ in wards.groupby(['WD17NM', 'WD17CD']):\n # Create a leaf node for each ward\n country_tree.create_node(ward[0], ward[1], parent=la[1])\n\n# Output the hierarchical data\ncountry_tree.show()\n\n\nI have, based on this, created a function to convert json to a tree:\nfrom treelib import Node, Tree, node\ndef json_2_tree(o , parent_id=None, tree=None, counter_byref=[0], verbose=False, listsNodeSymbol='+'):\n if tree is None:\n tree = Tree()\n root_id = counter_byref[0]\n if verbose:\n print(f\"tree.create_node({'+'}, {root_id})\")\n tree.create_node('+', root_id)\n counter_byref[0] += 1\n parent_id = root_id\n if type(o) == dict:\n for k,v in o.items():\n this_id = counter_byref[0]\n if verbose:\n print(f\"tree.create_node({str(k)}, {this_id}, parent={parent_id})\")\n tree.create_node(str(k), this_id, parent=parent_id)\n counter_byref[0] += 1\n json_2_tree(v , parent_id=this_id, tree=tree, counter_byref=counter_byref, verbose=verbose, listsNodeSymbol=listsNodeSymbol)\n elif type(o) == list:\n if listsNodeSymbol is not None:\n if verbose:\n print(f\"tree.create_node({listsNodeSymbol}, {counter_byref[0]}, parent={parent_id})\")\n tree.create_node(listsNodeSymbol, counter_byref[0], parent=parent_id)\n parent_id=counter_byref[0]\n counter_byref[0] += 1 \n for i in o:\n json_2_tree(i , parent_id=parent_id, tree=tree, counter_byref=counter_byref, verbose=verbose,listsNodeSymbol=listsNodeSymbol)\n else: #node\n if verbose:\n print(f\"tree.create_node({str(o)}, {counter_byref[0]}, parent={parent_id})\")\n tree.create_node(str(o), counter_byref[0], parent=parent_id)\n counter_byref[0] += 1\n return tree\n\nThen for example:\nimport json\njson_2_tree(json.loads('{\"2\": 3, \"4\": [5, 6]}'),verbose=False,listsNodeSymbol='+').show() \n\ngives:\n+\n├── 2\n│ └── 3\n└── 4\n └── +\n ├── 5\n └── 6\n\nWhile\njson_2_tree(json.loads('{\"2\": 3, \"4\": [5, 6]}'),listsNodeSymbol=None).show() \n\nGives\n+\n├── 2\n│ └── 3\n└── 4\n ├── 5\n └── 6\n\n" ]
[ 101, 74, 39, 7, 3, 0, 0, 0 ]
[]
[]
[ "ipython_notebook", "json", "python" ]
stackoverflow_0018873066_ipython_notebook_json_python.txt
Q: Create several exposure combined density/ distribution plot in R set.seed(42) n=1000 db = data.frame(id=1:n, exp_1 = as.numeric(rnorm(n)), exp_2 = as.numeric(rnorm(n)), exp_3 = as.numeric(rnorm(n)), exp_4=as.numeric(rnorm(n))) label(db$exp_1)="Myx" label(db$exp_2)="ff3" label(db$exp_3)="poison-untitled" label(db$exp_4)="NH3" I want to create a combined density plot - with legends referring to the variable label a plot similar to A: First you could change your data to a longer format using pivot_longer from tidyr. The graph you mentioned looks like ggpubr so you can use ggdensity with scale_fill_discrete to modify the legend like this: set.seed(42) n=1000 db = data.frame(id=1:n, exp_1 = as.numeric(rnorm(n)), exp_2 = as.numeric(rnorm(n)), exp_3 = as.numeric(rnorm(n)), exp_4=as.numeric(rnorm(n))) library(ggpubr) library(tidyr) library(dplyr) db %>% pivot_longer(cols = -id) %>% ggdensity(x = 'value', fill = 'name') + scale_fill_discrete('', labels = c('Myx', 'ff3', 'poison-untitled', 'NH3')) Created on 2022-12-03 with reprex v2.0.2 If you want to change the transparency, you could use the argument alpha in ggdensity. For more info check this documentation.
Create several exposure combined density/ distribution plot in R
set.seed(42) n=1000 db = data.frame(id=1:n, exp_1 = as.numeric(rnorm(n)), exp_2 = as.numeric(rnorm(n)), exp_3 = as.numeric(rnorm(n)), exp_4=as.numeric(rnorm(n))) label(db$exp_1)="Myx" label(db$exp_2)="ff3" label(db$exp_3)="poison-untitled" label(db$exp_4)="NH3" I want to create a combined density plot - with legends referring to the variable label a plot similar to
[ "First you could change your data to a longer format using pivot_longer from tidyr. The graph you mentioned looks like ggpubr so you can use ggdensity with scale_fill_discrete to modify the legend like this:\nset.seed(42)\nn=1000\ndb = data.frame(id=1:n,\n exp_1 = as.numeric(rnorm(n)),\n exp_2 = as.numeric(rnorm(n)),\n exp_3 = as.numeric(rnorm(n)), \n exp_4=as.numeric(rnorm(n)))\n\nlibrary(ggpubr)\nlibrary(tidyr)\nlibrary(dplyr)\ndb %>%\n pivot_longer(cols = -id) %>%\n ggdensity(x = 'value', fill = 'name') +\n scale_fill_discrete('', labels = c('Myx', 'ff3', 'poison-untitled', 'NH3'))\n\n\nCreated on 2022-12-03 with reprex v2.0.2\n\nIf you want to change the transparency, you could use the argument alpha in ggdensity. For more info check this documentation.\n" ]
[ 2 ]
[]
[]
[ "ggplot2", "r" ]
stackoverflow_0074665776_ggplot2_r.txt
Q: triangle type function with only 2 angles given Implement a function that is given 2 positive integers representing 2 angles of a triangle. If the triangle is right, isosceles, or both, the function should return 1, 2 or 3, respectively. Otherwise it should return 0. im actually just started learning C language and cant move on cause i cant find an answer to this problem. would like a reference to see where are my mistakes. dont need to print it, only the function. without using loops only if statements. int TriangleType (unsigned num1, unsigned num2){ int result; if(num1 == 90 || num2 == 90 || num1 + num2 == 90){ result = 1; } else if(num1 == num2 || 180 - (num1+num2) == num1 || 180 - (num1+num2) == num2) { result = 2; } else if((num1 == 90 || num2 == 90 || num1 + num2 == 90) && (num1 == num2 || 180 - (num1+num2) == num1 || 180 - (num1+num2) == num2)){ result = 3; } else if (num1 + num2 >= 180){ result = -1; } else { result = 0; } return result; } thats the start so u can see what is the direction... maybe got mistakes even here at the start =) A: int TriangleType (unsigned num1, unsigned num2) { //Calculate the third angle and keep in num3 unsigned num3 = 180 - (num1 + num2); //We declare a variable "result" and initialize it with 0 //This way if none of our if conditions match, we would simply return 0 by "return result;". unsigned int result = 0; if (num1 == 90 || num2 == 90 || num3 == 90) //if any of our angles are 90-degree result += 1; //Add 1 to result (which is 0 at this point) if ((num1==num2) || (num2 == num3) || (num1 == num3)) //if any two angles are same i.e. isosceles result += 2; //Add 2 to result (which is 1 if previous if condition was true otherwise its 0) ) /* At this point : 1) If it was neither right-angled or isoscles both the if statements will be false and result will remain 0 2) If it was right-angled we did "result += 1" i.e. "result = result + 1" So, In case first "if" condition is true, result is 1 otherwise its 0 3) If it was isoscles we did "result += 2" i.e. "result = result + 2" So, second "if" condition is true, result becomes result + 2 ie if result was 0 it becomes 2 or if it was 1 it becomes 3. */ return result; } A: Correct logic is: int TriangleType(unsigned num1, unsigned num2) { int result; if (num1 == 90 || num2 == 90 || num1 + num2 == 90) { if (num1 == num2 || 180 - (num1 + num2) == num1 || 180 - (num1 + num2) == num2) result = 3; else result = 1; } else if (num1 == num2 || 180 - (num1 + num2) == num1 || 180 - (num1 + num2) == num2) { result = 2; } else if (num1 + num2 >= 180) { result = -1; } else { result = 0; } return result; } In your program there is a mistake in logic in the first if statement. You are asking whether the triangle is right and if so you evaluate is as one. But in reality it can be right and isosceles at the same time, but it's checked just in third if statement to which you won't get since you return one. So you have to check the condition in that first statement. if (num1 == 90 || num2 == 90 || num1 + num2 == 90) { if (num1 == num2 || 180 - (num1 + num2) == num1 || 180 - (num1 + num2) == num2) result = 3; else result = 1; } Then it should work correctly. Good practice for the future is use function for any for loop, condition, etc. so you do not need to copy you code. It is much more efficient to write just one function, more readable for the other programmers and if there is a mistake, you will need to fix it at just one place and not multiple ones. 
I would write it as: int is_isosceles(unsigned num1, unsigned num2) { if (num1 == num2 || 180 - (num1 + num2) == num1 || 180 - (num1 + num2) == num2) return 1; else return 0; } int TriangleType(unsigned num1, unsigned num2) { int result; if (num1 == 90 || num2 == 90 || num1 + num2 == 90) { if (is_isosceles(num1, num2)) result = 3; else result = 1; } else if (is_isosceles(num1, num2)) { result = 2; } else if (num1 + num2 >= 180) { result = -1; } else { result = 0; } return result; } Condition functions can be of type bool, but include library: #include <stdbool.h> A: Let me try and sketch a solution without giving you the complete code... int TriangleType (unsigned num1, unsigned num2){ int right = 0; int isosceles = 0; int result = 0; if( /* conditions for right angle */ ){ right = 1; } if( /* conditions for isosceles */ ) { isosceles = 1; } if ( right == 1 && isosceles == 1 ) { result = 3; } else if ( right == 1) { result = 1; } else if ( isosceles == 1 ) { /* ... */ } /*...*/ return result; } It could be made shorter but since you are just starting out this is probably best.
triangle type function with only 2 angles given
Implement a function that is given 2 positive integers representing 2 angles of a triangle. If the triangle is right, isosceles, or both, the function should return 1, 2 or 3, respectively. Otherwise it should return 0. im actually just started learning C language and cant move on cause i cant find an answer to this problem. would like a reference to see where are my mistakes. dont need to print it, only the function. without using loops only if statements. int TriangleType (unsigned num1, unsigned num2){ int result; if(num1 == 90 || num2 == 90 || num1 + num2 == 90){ result = 1; } else if(num1 == num2 || 180 - (num1+num2) == num1 || 180 - (num1+num2) == num2) { result = 2; } else if((num1 == 90 || num2 == 90 || num1 + num2 == 90) && (num1 == num2 || 180 - (num1+num2) == num1 || 180 - (num1+num2) == num2)){ result = 3; } else if (num1 + num2 >= 180){ result = -1; } else { result = 0; } return result; } thats the start so u can see what is the direction... maybe got mistakes even here at the start =)
[ "int TriangleType (unsigned num1, unsigned num2)\n{\n //Calculate the third angle and keep in num3\n unsigned num3 = 180 - (num1 + num2); \n \n //We declare a variable \"result\" and initialize it with 0\n //This way if none of our if conditions match, we would simply return 0 by \"return result;\". \n unsigned int result = 0; \n\n if (num1 == 90 || num2 == 90 || num3 == 90) //if any of our angles are 90-degree\n result += 1; //Add 1 to result (which is 0 at this point)\n if ((num1==num2) || (num2 == num3) || (num1 == num3)) //if any two angles are same i.e. isosceles\n result += 2; //Add 2 to result (which is 1 if previous if condition was true otherwise its 0) )\n \n /*\n At this point :\n 1) If it was neither right-angled or isoscles \n both the if statements will be false and result will remain 0\n\n 2) If it was right-angled we did \"result += 1\" i.e. \"result = result + 1\"\n So, In case first \"if\" condition is true, result is 1 otherwise its 0\n\n 3) If it was isoscles we did \"result += 2\" i.e. \"result = result + 2\"\n So, second \"if\" condition is true, result becomes result + 2 ie \n if result was 0 it becomes 2 or if it was 1 it becomes 3.\n */\n\n\n return result;\n}\n\n", "Correct logic is:\nint TriangleType(unsigned num1, unsigned num2)\n{\n int result;\n if (num1 == 90 || num2 == 90 || num1 + num2 == 90) {\n if (num1 == num2 || 180 - (num1 + num2) == num1 || 180 - (num1 + num2) == num2)\n result = 3;\n else\n result = 1;\n } else if (num1 == num2 || 180 - (num1 + num2) == num1 || 180 - (num1 + num2) == num2) {\n result = 2;\n } else if (num1 + num2 >= 180) {\n result = -1;\n } else {\n result = 0;\n }\n return result;\n}\n\nIn your program there is a mistake in logic in the first if statement. You are asking whether the triangle is right and if so you evaluate is as one. But in reality it can be right and isosceles at the same time, but it's checked just in third if statement to which you won't get since you return one. So you have to check the condition in that first statement.\nif (num1 == 90 || num2 == 90 || num1 + num2 == 90) {\n if (num1 == num2 || 180 - (num1 + num2) == num1 || 180 - (num1 + num2) == num2)\n result = 3;\n else\n result = 1;\n }\n\nThen it should work correctly.\nGood practice for the future is use function for any for loop, condition, etc. so you do not need to copy you code. 
It is much more efficient to write just one function, more readable for the other programmers and if there is a mistake, you will need to fix it at just one place and not multiple ones.\nI would write it as:\nint is_isosceles(unsigned num1, unsigned num2)\n{\n if (num1 == num2 || 180 - (num1 + num2) == num1 || 180 - (num1 + num2) == num2)\n return 1;\n else\n return 0;\n}\n\nint TriangleType(unsigned num1, unsigned num2)\n{\n int result;\n if (num1 == 90 || num2 == 90 || num1 + num2 == 90) {\n if (is_isosceles(num1, num2))\n result = 3;\n else\n result = 1;\n } else if (is_isosceles(num1, num2)) {\n result = 2;\n } else if (num1 + num2 >= 180) {\n result = -1;\n } else {\n result = 0;\n }\n return result;\n}\n\nCondition functions can be of type bool, but include library:\n#include <stdbool.h>\n\n", "Let me try and sketch a solution without giving you the complete code...\nint TriangleType (unsigned num1, unsigned num2){\n\n int right = 0;\n int isosceles = 0;\n int result = 0;\n\n if( /* conditions for right angle */ ){\n right = 1;\n } \n\n if( /* conditions for isosceles */ ) {\n isosceles = 1;\n } \n\n if ( right == 1 && isosceles == 1 ) {\n result = 3;\n } else if ( right == 1) {\n result = 1;\n } else if ( isosceles == 1 ) { \n /* ... */\n } /*...*/\n return result;\n}\n\nIt could be made shorter but since you are just starting out this is probably best.\n" ]
[ 1, 0, 0 ]
[]
[]
[ "c", "function" ]
stackoverflow_0074665120_c_function.txt
Q: The ML.net prediction has HUGE different compared with Custom Vision I've trained a model(object detection) using Azure Custom Vision, and export the model as ONNX, then import the model to my WPF(.net core) project. I use ML.net to get prediction from my model, And I found the result has HUGE different compared with the prediction I saw on Custom Vision. I've tried different order of extraction (ABGR, ARGB...etc), but the result is very disappointed, can any one give me some advice as there are not so much document online about Using Custom Vision's ONNX model with WPF to do object detection. Here's some snippet: // Model creation and pipeline definition for images needs to run just once, so calling it from the constructor: var pipeline = mlContext.Transforms .ResizeImages( resizing: ImageResizingEstimator.ResizingKind.Fill, outputColumnName: MLObjectDetectionSettings.InputTensorName, imageWidth: MLObjectDetectionSettings.ImageWidth, imageHeight: MLObjectDetectionSettings.ImageHeight, inputColumnName: nameof(MLObjectDetectionInputData.Image)) .Append(mlContext.Transforms.ExtractPixels( colorsToExtract: ImagePixelExtractingEstimator.ColorBits.Rgb, orderOfExtraction: ImagePixelExtractingEstimator.ColorsOrder.ABGR, outputColumnName: MLObjectDetectionSettings.InputTensorName)) .Append(mlContext.Transforms.ApplyOnnxModel(modelFile: modelPath, outputColumnName: MLObjectDetectionSettings.OutputTensorName, inputColumnName: MLObjectDetectionSettings.InputTensorName)); //Create empty DataView. We just need the schema to call fit() var emptyData = new List<MLObjectDetectionInputData>(); var dataView = mlContext.Data.LoadFromEnumerable(emptyData); //Generate a model. var model = pipeline.Fit(dataView); Then I use the model to create context. //Create prediction engine. var predictionEngine = _mlObjectDetectionContext.Model.CreatePredictionEngine<MLObjectDetectionInputData, MLObjectDetectionPrediction>(_mlObjectDetectionModel); //Load tag labels. var labels = File.ReadAllLines(LABELS_OBJECT_DETECTION_FILE_PATH); //Create input data. var imageInput = new MLObjectDetectionInputData { Image = this.originalImage }; //Predict. var prediction = predictionEngine.Predict(imageInput); A: Can you check on the image input (imageInput) is resized with the same size as in the model requirements when you prepare the pipeline for both Resize parameters: imageWidth: MLObjectDetectionSettings.ImageWidth, imageHeight: MLObjectDetectionSettings.ImageHeight. Also for the ExtractPixels parameters especially on the ColorBits and ColorsOrder should follow the model requirements. Hope this help Arif A: Maybe because the aspect ratio is not preserved during the resize. Try with an image with the size of: MLObjectDetectionSettings.ImageWidth * MLObjectDetectionSettings.ImageHeight And you will see much better results. I think Azure does preliminary processing on the image, maybe Padding (also during training?), or Cropping. Maybe during the processing it also uses a moving window(the size that the model expects) and then do some aggregation
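One concrete thing to try along the lines of the answers above (a sketch against the question's own pipeline, not a verified fix): ResizingKind.Fill stretches the image, which changes the aspect ratio, so letting ML.NET pad (letterbox) instead keeps the proportions closer to what Custom Vision sees.
// Same pipeline as in the question, with only the resizing mode changed.
var pipeline = mlContext.Transforms
    .ResizeImages(
        resizing: ImageResizingEstimator.ResizingKind.IsoPad, // pad instead of stretch
        outputColumnName: MLObjectDetectionSettings.InputTensorName,
        imageWidth: MLObjectDetectionSettings.ImageWidth,
        imageHeight: MLObjectDetectionSettings.ImageHeight,
        inputColumnName: nameof(MLObjectDetectionInputData.Image))
    .Append(mlContext.Transforms.ExtractPixels(
        colorsToExtract: ImagePixelExtractingEstimator.ColorBits.Rgb,
        orderOfExtraction: ImagePixelExtractingEstimator.ColorsOrder.ABGR,
        outputColumnName: MLObjectDetectionSettings.InputTensorName))
    .Append(mlContext.Transforms.ApplyOnnxModel(
        modelFile: modelPath,
        outputColumnName: MLObjectDetectionSettings.OutputTensorName,
        inputColumnName: MLObjectDetectionSettings.InputTensorName));
If the boxes still look off after that, it is worth double-checking how the exported model expects its input to be preprocessed (final size, padding versus cropping, and channel order), since the pipeline has to mirror that exactly.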
The ML.net prediction has a HUGE difference compared with Custom Vision
I've trained a model(object detection) using Azure Custom Vision, and export the model as ONNX, then import the model to my WPF(.net core) project. I use ML.net to get prediction from my model, And I found the result has HUGE different compared with the prediction I saw on Custom Vision. I've tried different order of extraction (ABGR, ARGB...etc), but the result is very disappointed, can any one give me some advice as there are not so much document online about Using Custom Vision's ONNX model with WPF to do object detection. Here's some snippet: // Model creation and pipeline definition for images needs to run just once, so calling it from the constructor: var pipeline = mlContext.Transforms .ResizeImages( resizing: ImageResizingEstimator.ResizingKind.Fill, outputColumnName: MLObjectDetectionSettings.InputTensorName, imageWidth: MLObjectDetectionSettings.ImageWidth, imageHeight: MLObjectDetectionSettings.ImageHeight, inputColumnName: nameof(MLObjectDetectionInputData.Image)) .Append(mlContext.Transforms.ExtractPixels( colorsToExtract: ImagePixelExtractingEstimator.ColorBits.Rgb, orderOfExtraction: ImagePixelExtractingEstimator.ColorsOrder.ABGR, outputColumnName: MLObjectDetectionSettings.InputTensorName)) .Append(mlContext.Transforms.ApplyOnnxModel(modelFile: modelPath, outputColumnName: MLObjectDetectionSettings.OutputTensorName, inputColumnName: MLObjectDetectionSettings.InputTensorName)); //Create empty DataView. We just need the schema to call fit() var emptyData = new List<MLObjectDetectionInputData>(); var dataView = mlContext.Data.LoadFromEnumerable(emptyData); //Generate a model. var model = pipeline.Fit(dataView); Then I use the model to create context. //Create prediction engine. var predictionEngine = _mlObjectDetectionContext.Model.CreatePredictionEngine<MLObjectDetectionInputData, MLObjectDetectionPrediction>(_mlObjectDetectionModel); //Load tag labels. var labels = File.ReadAllLines(LABELS_OBJECT_DETECTION_FILE_PATH); //Create input data. var imageInput = new MLObjectDetectionInputData { Image = this.originalImage }; //Predict. var prediction = predictionEngine.Predict(imageInput);
[ "Can you check on the image input (imageInput) is resized with the same size as in the model requirements when you prepare the pipeline for both Resize parameters:\nimageWidth: MLObjectDetectionSettings.ImageWidth,\nimageHeight: MLObjectDetectionSettings.ImageHeight.\nAlso for the ExtractPixels parameters especially on the ColorBits and ColorsOrder should follow the model requirements.\nHope this help\nArif\n", "Maybe because the aspect ratio is not preserved during the resize.\nTry with an image with the size of:\nMLObjectDetectionSettings.ImageWidth * MLObjectDetectionSettings.ImageHeight\nAnd you will see much better results.\nI think Azure does preliminary processing on the image, maybe Padding (also during training?), or Cropping.\nMaybe during the processing it also uses a moving window(the size that the model expects) and then do some aggregation\n" ]
[ 0, 0 ]
[]
[]
[ "ml.net", "onnx", "wpf" ]
stackoverflow_0070094369_ml.net_onnx_wpf.txt
Q: How to auto center coordinates and zoom for multiple markers in react-google-maps/api I want to show dynamic multiple markers on the map. What I mean by dynamic markers is the markers are determined by the user selection (which markers do the user want to show). I have checked this documentation https://react-google-maps-api-docs.netlify.app/ but can't find any example related to the specification I need. I found an algorithm to count the center coordinate of multiple markers here Find center of multiple locations in Google Maps but only for the center coordinate (not with the best zoom level) and it's written with the original google maps api @googlemaps/js-api-loader. How could I do search for the center coordinate and zoom for multiple markers with this @react-google-maps/api library? A: You can use the FitBounds component from the @react-google-maps/api package to center and zoom the map to fit all the markers. Here's an example: import React from 'react'; import { GoogleMap, useLoadScript, FitBounds, Marker } from '@react-google-maps/api'; const containerStyle = { width: '400px', height: '400px' }; const center = { lat: 37.422, lng: -122.084 }; const options = { zoomControl: true }; const markers = [ { lat: 37.422, lng: -122.084 }, { lat: 37.422, lng: -122.085 }, { lat: 37.423, lng: -122.084 } ]; const MapWithMarkers = () => { const { isLoaded, loadError } = useLoadScript({ googleMapsApiKey: YOUR_API_KEY }); if (loadError) return 'Error loading maps'; if (!isLoaded) return 'Loading Maps'; return ( <GoogleMap mapContainerStyle={containerStyle} center={center} zoom={10} options={options} > <FitBounds center={center}> {markers.map((marker, index) => ( <Marker key={index} position={marker} /> ))} </FitBounds> </GoogleMap> ); }; export default MapWithMarkers;
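If a FitBounds-style wrapper isn't available in the version of @react-google-maps/api you have installed, a commonly used alternative is to compute the bounds yourself in the map's onLoad callback and let fitBounds pick both the center and the zoom. A sketch (the marker list is assumed to come from the user's selection, and the Maps script is assumed to be loaded with useLoadScript/LoadScript as in the answer above):
import React, { useCallback } from 'react';
import { GoogleMap, Marker } from '@react-google-maps/api';

const containerStyle = { width: '400px', height: '400px' };

const MapFitToMarkers = ({ markers }) => {
  // markers: [{ lat, lng }, ...] selected by the user
  const onLoad = useCallback(
    (map) => {
      const bounds = new window.google.maps.LatLngBounds();
      markers.forEach((position) => bounds.extend(position));
      map.fitBounds(bounds); // centers the map and chooses a zoom that shows every marker
    },
    [markers]
  );

  return (
    <GoogleMap mapContainerStyle={containerStyle} onLoad={onLoad}>
      {markers.map((position, index) => (
        <Marker key={index} position={position} />
      ))}
    </GoogleMap>
  );
};

export default MapFitToMarkers;
Because fitBounds is called inside onLoad with the current marker list, re-rendering the component with a different selection re-fits the view without any manual center/zoom math.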
How to auto center coordinates and zoom for multiple markers in react-google-maps/api
I want to show dynamic multiple markers on the map. What I mean by dynamic markers is the markers are determined by the user selection (which markers do the user want to show). I have checked this documentation https://react-google-maps-api-docs.netlify.app/ but can't find any example related to the specification I need. I found an algorithm to count the center coordinate of multiple markers here Find center of multiple locations in Google Maps but only for the center coordinate (not with the best zoom level) and it's written with the original google maps api @googlemaps/js-api-loader. How could I do search for the center coordinate and zoom for multiple markers with this @react-google-maps/api library?
[ "You can use the FitBounds component from the @react-google-maps/api package to center and zoom the map to fit all the markers.\nHere's an example:\nimport React from 'react';\nimport { GoogleMap, useLoadScript, FitBounds, Marker } from '@react-google-maps/api';\n\nconst containerStyle = {\n width: '400px',\n height: '400px'\n};\n\nconst center = {\n lat: 37.422,\n lng: -122.084\n};\n\nconst options = {\n zoomControl: true\n};\n\nconst markers = [\n {\n lat: 37.422,\n lng: -122.084\n },\n {\n lat: 37.422,\n lng: -122.085\n },\n {\n lat: 37.423,\n lng: -122.084\n }\n];\n\nconst MapWithMarkers = () => {\n const { isLoaded, loadError } = useLoadScript({\n googleMapsApiKey: YOUR_API_KEY\n });\n\n if (loadError) return 'Error loading maps';\n if (!isLoaded) return 'Loading Maps';\n\n return (\n <GoogleMap\n mapContainerStyle={containerStyle}\n center={center}\n zoom={10}\n options={options}\n >\n <FitBounds center={center}>\n {markers.map((marker, index) => (\n <Marker key={index} position={marker} />\n ))}\n </FitBounds>\n </GoogleMap>\n );\n};\n\nexport default MapWithMarkers;\n\n" ]
[ 0 ]
[]
[]
[ "coordinates", "google_maps_markers", "react_google_maps", "reactjs" ]
stackoverflow_0074665794_coordinates_google_maps_markers_react_google_maps_reactjs.txt
Q: Flutter iPad apps Do Flutter apps that work well on iOS phones automatically work well on iPads? I don't care about the styles & layout (that is taken care of), but more in terms of OS specifics. A: Yes, flutter supports iPadOS and is going to continue that for the future [1]. This means that all OS specifics, such as integration with HealthKit will also be available for iPad apps. And via the automatically generated Xcode project you have more granular control over which operating systems your app will support. But as you are already aware, a layout that works well on iOS might not work well on iPad.
Flutter iPad apps
Do Flutter apps that work well on iOS phones automatically work well on iPads? I don't care about the styles & layout (that is taken care of), but more in terms of OS specifics.
[ "Yes, flutter supports iPadOS and is going to continue that for the future [1].\nThis means that all OS specifics, such as integration with HealthKit will also be available for iPad apps.\nAnd via the automatically generated Xcode project you have more granular control over which operating systems your app will support.\nBut as you are already aware, a layout that works well on iOS might not work well on iPad.\n" ]
[ 0 ]
[]
[]
[ "flutter", "flutter_ios", "flutter_ios_build", "ipad" ]
stackoverflow_0074665771_flutter_flutter_ios_flutter_ios_build_ipad.txt
Q: Why `strlen` isn't included in elf's symbol table? I'm compile 1.c with debug info: gcc -O0 -g ./1.c // 1.c #include <stdio.h> #include <string.h> int main() { char c[222]; scanf("%s", c); int a = strlen("dadw"); return 0; } But I can't find strlen through readelf -a a.out or nm ./a.out. In my understanding, strlen is provided by libc.so and resolved by dynamic linking. So it should appear in the symbol table. Is it caused by some special tricks of gcc? ❯ nm ./a.out 000000000000038c r __abi_tag 0000000000004010 B __bss_start 0000000000004010 b completed.0 w __cxa_finalize@GLIBC_2.2.5 0000000000004000 D __data_start 0000000000004000 W data_start 00000000000010b0 t deregister_tm_clones 0000000000001120 t __do_global_dtors_aux 0000000000003db8 d __do_global_dtors_aux_fini_array_entry 0000000000004008 D __dso_handle 0000000000003dc0 d _DYNAMIC 0000000000004010 D _edata 0000000000004018 B _end 00000000000011cc T _fini 0000000000001160 t frame_dummy 0000000000003db0 d __frame_dummy_init_array_entry 00000000000020e8 r __FRAME_END__ 0000000000003fb0 d _GLOBAL_OFFSET_TABLE_ w __gmon_start__ 0000000000002008 r __GNU_EH_FRAME_HDR 0000000000001000 T _init 0000000000002000 R _IO_stdin_used U __isoc99_scanf@GLIBC_2.7 w _ITM_deregisterTMCloneTable w _ITM_registerTMCloneTable U __libc_start_main@GLIBC_2.34 0000000000001169 T main 00000000000010e0 t register_tm_clones U __stack_chk_fail@GLIBC_2.4 0000000000001080 T _start 0000000000004010 D __TMC_END__ A: Why strlen isn't present in the symbol table Because GCC replaces the call to strlen with a constant 4. x86-64 GCC 12.2 with the options -O0 -g produces the following assembly code. Notice the line mov DWORD PTR [rbp-4], 4, which is the constant 4 for the length of the string. .LC0: .string "%s" main: push rbp mov rbp, rsp sub rsp, 240 lea rax, [rbp-240] mov rsi, rax mov edi, OFFSET FLAT:.LC0 mov eax, 0 call __isoc99_scanf mov DWORD PTR [rbp-4], 4 mov eax, 0 leave ret Try it on godbolt How to make it actually call strlen Without replacing the constant string "dadw" with a value read at runtime, one way to make GCC generate the call to strlen is to disable built-in functions by using the -fno-builtins option (link to documentation). According to the documentation: GCC normally generates special code to handle certain built-in functions more efficiently; [...] the function calls no longer appear as such, [...] With the options -O0 -g -fno-builtin, x86-64 GCC 12.2 produces the following assembly code, and strlen is present in the symbol table. Notice the line call strlen. .LC0: .string "%s" .LC1: .string "dadw" main: push rbp mov rbp, rsp sub rsp, 240 lea rax, [rbp-240] mov rsi, rax mov edi, OFFSET FLAT:.LC0 mov eax, 0 call __isoc99_scanf mov edi, OFFSET FLAT:.LC1 call strlen mov DWORD PTR [rbp-4], eax mov eax, 0 leave ret
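A quick way to confirm this on your own machine (commands are a sketch; adjust file names as needed):
# With the built-in, GCC folds strlen("dadw") into the constant 4,
# so no strlen symbol is referenced at all:
gcc -O0 -g 1.c -o with-builtin
nm with-builtin | grep strlen        # prints nothing

# Disabling built-ins forces a real call into libc, and strlen now
# shows up as an undefined symbol to be resolved by the dynamic linker:
gcc -O0 -g -fno-builtin 1.c -o no-builtin
nm no-builtin | grep strlen          # U strlen@GLIBC_...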
Why isn't `strlen` included in the ELF symbol table?
I'm compile 1.c with debug info: gcc -O0 -g ./1.c // 1.c #include <stdio.h> #include <string.h> int main() { char c[222]; scanf("%s", c); int a = strlen("dadw"); return 0; } But I can't find strlen through readelf -a a.out or nm ./a.out. In my understanding, strlen is provided by libc.so and resolved by dynamic linking. So it should appear in the symbol table. Is it caused by some special tricks of gcc? ❯ nm ./a.out 000000000000038c r __abi_tag 0000000000004010 B __bss_start 0000000000004010 b completed.0 w __cxa_finalize@GLIBC_2.2.5 0000000000004000 D __data_start 0000000000004000 W data_start 00000000000010b0 t deregister_tm_clones 0000000000001120 t __do_global_dtors_aux 0000000000003db8 d __do_global_dtors_aux_fini_array_entry 0000000000004008 D __dso_handle 0000000000003dc0 d _DYNAMIC 0000000000004010 D _edata 0000000000004018 B _end 00000000000011cc T _fini 0000000000001160 t frame_dummy 0000000000003db0 d __frame_dummy_init_array_entry 00000000000020e8 r __FRAME_END__ 0000000000003fb0 d _GLOBAL_OFFSET_TABLE_ w __gmon_start__ 0000000000002008 r __GNU_EH_FRAME_HDR 0000000000001000 T _init 0000000000002000 R _IO_stdin_used U __isoc99_scanf@GLIBC_2.7 w _ITM_deregisterTMCloneTable w _ITM_registerTMCloneTable U __libc_start_main@GLIBC_2.34 0000000000001169 T main 00000000000010e0 t register_tm_clones U __stack_chk_fail@GLIBC_2.4 0000000000001080 T _start 0000000000004010 D __TMC_END__
[ "Why strlen isn't present in the symbol table\nBecause GCC replaces the call to strlen with a constant 4. x86-64 GCC 12.2 with the options -O0 -g produces the following assembly code. Notice the line mov DWORD PTR [rbp-4], 4, which is the constant 4 for the length of the string.\n.LC0:\n .string \"%s\"\nmain:\n push rbp\n mov rbp, rsp\n sub rsp, 240\n lea rax, [rbp-240]\n mov rsi, rax\n mov edi, OFFSET FLAT:.LC0\n mov eax, 0\n call __isoc99_scanf\n mov DWORD PTR [rbp-4], 4\n mov eax, 0\n leave\n ret\n\nTry it on godbolt\nHow to make it actually call strlen\nWithout replacing the constant string \"dadw\" with a value read at runtime, one way to make GCC generate the call to strlen is to disable built-in functions by using the -fno-builtins option (link to documentation). According to the documentation:\n\nGCC normally generates special code to handle certain built-in functions more efficiently; [...] the function calls no longer appear as such, [...]\n\nWith the options -O0 -g -fno-builtin, x86-64 GCC 12.2 produces the following assembly code, and strlen is present in the symbol table.\nNotice the line call strlen.\n.LC0:\n .string \"%s\"\n.LC1:\n .string \"dadw\"\nmain:\n push rbp\n mov rbp, rsp\n sub rsp, 240\n lea rax, [rbp-240]\n mov rsi, rax\n mov edi, OFFSET FLAT:.LC0\n mov eax, 0\n call __isoc99_scanf\n mov edi, OFFSET FLAT:.LC1\n call strlen\n mov DWORD PTR [rbp-4], eax\n mov eax, 0\n leave\n ret\n\n" ]
[ 6 ]
[]
[]
[ "c", "dynamic_linking", "linux" ]
stackoverflow_0074665804_c_dynamic_linking_linux.txt
Q: Express + Typescript: How to GET Param and Body request I route put method to this const createFaceList = (req: Request<{faceListId : string}>, res: Response, next: NextFunction) => { console.log(req.body.name); console.log("faceListID = " + req.params.faceListId); addFacelist(req.params.faceListId, req.body) .then( result => { return res.status(200).json({result}) }) .catch(err => { logging.error(NAMESPACE, err.messagem, err); return res.status(err.statusCode).json({ statusCode: err.statusCode, message: err.message }) }) } my console.log show that undefined: undefined faceListId = undefined how to fix. thank you A: export interface TypedRequestBodyParams<Params, ReqBody> extends Express.Request { body: ReqBody, params: Params } const createFaceList = (req: TypedRequestBodyParams<{faceListId : string}, {name: string}>, res: Response) => { const { faceListId } = req.params const { name } = req.body }
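One caveat worth adding: the typed request interface above only helps the compiler; it does not make the runtime values appear. In practice, req.body being undefined usually means no body parser is registered, and req.params.faceListId being undefined usually means the route path never declares :faceListId. A sketch of the wiring this handler needs (the route path and import path are illustrative, not taken from the question):
import express from 'express';
import { createFaceList } from './controllers/faceList'; // the handler from the question

const app = express();

// Without this, req.body stays undefined for JSON payloads.
app.use(express.json());

// The :faceListId segment is what populates req.params.faceListId.
app.put('/facelists/:faceListId', createFaceList);

// A PUT to /facelists/100 with body {"name": "test"} then logs
// "test" and "faceListID = 100" inside createFaceList.
app.listen(3000);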
Express + Typescript: How to GET Param and Body request
I route put method to this const createFaceList = (req: Request<{faceListId : string}>, res: Response, next: NextFunction) => { console.log(req.body.name); console.log("faceListID = " + req.params.faceListId); addFacelist(req.params.faceListId, req.body) .then( result => { return res.status(200).json({result}) }) .catch(err => { logging.error(NAMESPACE, err.messagem, err); return res.status(err.statusCode).json({ statusCode: err.statusCode, message: err.message }) }) } my console.log show that undefined: undefined faceListId = undefined how to fix. thank you
[ "export interface TypedRequestBodyParams<Params, ReqBody> extends Express.Request {\n body: ReqBody,\n params: Params\n}\n\nconst createFaceList = (req: TypedRequestBodyParams<{faceListId : string}, {name: string}>, res: Response) => {\n const { faceListId } = req.params\n const { name } = req.body\n}\n\n" ]
[ 0 ]
[]
[]
[ "express", "typescript" ]
stackoverflow_0072056995_express_typescript.txt
Q: Trying to retrieve element from object in React results in undefined I'm trying to retrieve the value of "id" from a one-object-long array in React. (The data is coming from Firebase). I have tried every possible combination of code, but it always gives me back undefined (in Firefox). So I have this array called "Original Data", and this is where I want to retrieve the "id" and store it in a variable. Then I tried to convert it into object by slicing to first element of arary (although I'm not sure if that's needed altogether). And then I'm trying to retrieve the id by calling the Object.id value, but it returns undefined. I tried doing it with data.id (thinking maybe it's Firebase data structure), but it also doesn't work. My code: console.log("OriginalData") console.log(OriginalData) const ConvertingToObject = {... OriginalData.slice(0)} const RetrievingID = ConvertingToObject.id console.log("Converting to Object") console.log(ConvertingToObject) console.log("Retrieving ID") console.log(RetrievingID) Console log: **OriginalData ** Array [ {…} ] ​ 0: Object { data: {…}, id: "100" } ​ length: 1 ​ <prototype>: Array [] **Converting to Object** Object { 0: {…} } ​ 0: Object { data: {…}, id: "100" } ​ <prototype>: Object { … } **Retrieving ID ** undefined A: As you can see while you are try to convert array of object into object, Key of array change itself into key of object. For example: let arrayOfObj = [{data: { "name": "naruto" }, id: 123}] let convertToObj = {...arrayOfObj.slice(0)} while consoling the data convertToObj we can see log as: { '0': { data: { name: 'naruto' }, id: 123 } } See the key value '0'. So to access id from that you will need to do something like this: console.log(convertToObj['0'].id) Or, Incase if do not want to change it to object then you can do like this: console.log(arrayOfObj[0].id)
Trying to retrieve element from object in React results in undefined
I'm trying to retrieve the value of "id" from a one-object-long array in React. (The data is coming from Firebase). I have tried every possible combination of code, but it always gives me back undefined (in Firefox). So I have this array called "Original Data", and this is where I want to retrieve the "id" and store it in a variable. Then I tried to convert it into object by slicing to first element of arary (although I'm not sure if that's needed altogether). And then I'm trying to retrieve the id by calling the Object.id value, but it returns undefined. I tried doing it with data.id (thinking maybe it's Firebase data structure), but it also doesn't work. My code: console.log("OriginalData") console.log(OriginalData) const ConvertingToObject = {... OriginalData.slice(0)} const RetrievingID = ConvertingToObject.id console.log("Converting to Object") console.log(ConvertingToObject) console.log("Retrieving ID") console.log(RetrievingID) Console log: **OriginalData ** Array [ {…} ] ​ 0: Object { data: {…}, id: "100" } ​ length: 1 ​ <prototype>: Array [] **Converting to Object** Object { 0: {…} } ​ 0: Object { data: {…}, id: "100" } ​ <prototype>: Object { … } **Retrieving ID ** undefined
[ "As you can see while you are try to convert array of object into object, Key of array change itself into key of object. For example:\nlet arrayOfObj = [{data: {\n \"name\": \"naruto\"\n}, id: 123}]\n\nlet convertToObj = {...arrayOfObj.slice(0)}\n\nwhile consoling the data convertToObj we can see log as:\n{ '0': { data: { name: 'naruto' }, id: 123 } }\n\nSee the key value '0'. So to access id from that you will need to do something like this:\nconsole.log(convertToObj['0'].id)\n\nOr, Incase if do not want to change it to object then you can do like this:\nconsole.log(arrayOfObj[0].id)\n\n" ]
[ 1 ]
[]
[]
[ "arrays", "firebase", "javascript", "reactjs" ]
stackoverflow_0074665539_arrays_firebase_javascript_reactjs.txt
Q: Why use recursive() instead of deferred()? I needed a strategy for arbitrary JSON values and after reading about the gotchas of using composite() for recursive data came up with this json_primitives = st.one_of( st.none(), st.booleans(), st.integers(), st.floats(allow_infinity=False, allow_nan=False), st.text(), ) def json_collections(values): return st.one_of( st.dictionaries(keys=st.text(), values=values), st.lists(values), ) json_values = st.recursive(json_primitives, json_collections) In the tests of hypothesis itself I found something like json_values = st.deferred( lambda: st.none() | st.booleans() | st.integers() | st.floats(allow_infinity=False, allow_nan=False) | st.text() | json_arrays | json_objects ) json_arrays = st.lists(json_values) json_objects = st.dictionaries(st.text(), json_values) Are there any differences in how these strategies behave? I looked at the implementations of both and found the one for st.deferred much easier to follow. And I arguably find the use of deferred easier to read as well (even without the bitwise or syntactic sugar for st.one_of) A: Are there any differences in how these strategies behave? Nope - use whichever is more convenient or clearly expresses your intent. So... why have both then? The answer is partly just historical contingency: back in 2015 Hypothesis had an entirely different backend which could support st.recursive() but not st.deferred(), which was added shortly after David wrote the current backend in 2017. We've kept both because one or the other are more natural in some situations or to some people, and deprecating something as widely used and non-broken as st.recursive() would be far more painful than keeping it. In practice you can also cleanly use st.recursive() in an expression, defining the whole thing inline with extend=lambda s: .... I've never seen it in the wild, but I guess you could use an assignment expression to do the same... @given(st.recursive(st.booleans(), st.lists)) def test_demo_recursive(x): ... @given(xs := st.deferred(lambda: st.booleans() | st.lists(xs))) def test_demo_deferred(x): ... Not sure how this would look in more complex cases and of course it requires Python 3.8+, but the latter is nicer than I expected.
Why use recursive() instead of deferred()?
I needed a strategy for arbitrary JSON values and after reading about the gotchas of using composite() for recursive data came up with this json_primitives = st.one_of( st.none(), st.booleans(), st.integers(), st.floats(allow_infinity=False, allow_nan=False), st.text(), ) def json_collections(values): return st.one_of( st.dictionaries(keys=st.text(), values=values), st.lists(values), ) json_values = st.recursive(json_primitives, json_collections) In the tests of hypothesis itself I found something like json_values = st.deferred( lambda: st.none() | st.booleans() | st.integers() | st.floats(allow_infinity=False, allow_nan=False) | st.text() | json_arrays | json_objects ) json_arrays = st.lists(json_values) json_objects = st.dictionaries(st.text(), json_values) Are there any differences in how these strategies behave? I looked at the implementations of both and found the one for st.deferred much easier to follow. And I arguably find the use of deferred easier to read as well (even without the bitwise or syntactic sugar for st.one_of)
[ "\nAre there any differences in how these strategies behave?\n\nNope - use whichever is more convenient or clearly expresses your intent.\n\nSo... why have both then? The answer is partly just historical contingency: back in 2015 Hypothesis had an entirely different backend which could support st.recursive() but not st.deferred(), which was added shortly after David wrote the current backend in 2017.\nWe've kept both because one or the other are more natural in some situations or to some people, and deprecating something as widely used and non-broken as st.recursive() would be far more painful than keeping it.\nIn practice you can also cleanly use st.recursive() in an expression, defining the whole thing inline with extend=lambda s: .... I've never seen it in the wild, but I guess you could use an assignment expression to do the same...\n@given(st.recursive(st.booleans(), st.lists))\ndef test_demo_recursive(x):\n ...\n\n@given(xs := st.deferred(lambda: st.booleans() | st.lists(xs)))\ndef test_demo_deferred(x):\n ...\n\nNot sure how this would look in more complex cases and of course it requires Python 3.8+, but the latter is nicer than I expected.\n" ]
[ 0 ]
[]
[]
[ "python_hypothesis" ]
stackoverflow_0074639226_python_hypothesis.txt
Q: iconv does not work with Docker FPM Alpine
That's my Dockerfile setup. When I use Laravel Dompdf, it shows the error "iconv(): Wrong charset, conversion from utf-8' to us-ascii//TRANSLIT' is not allowed". I have checked the PHP ini, and iconv is enabled. I also added the iconv installation command in my Dockerfile. It still doesn't work. Any solutions for my Docker setup?

FROM php:7.3.33-fpm-alpine

# Fix: iconv(): Wrong charset, conversion from UTF-8 to UTF-8//IGNORE is not allowed in Command line code on line 1
RUN apk add --no-cache --repository http://dl-cdn.alpinelinux.org/alpine/edge/community/ --allow-untrusted gnu-libiconv
ENV LD_PRELOAD /usr/lib/preloadable_libiconv.so php

# Install php extensions
RUN apk update \
    && apk add --no-cache libzip-dev libmcrypt libmcrypt-dev zlib-dev \
    && docker-php-ext-install exif zip bcmath mysqli pdo pdo_mysql ctype json

# Install GD extensions
RUN apk add --no-cache freetype libpng libjpeg-turbo freetype-dev libpng-dev libjpeg-turbo-dev && \
    docker-php-ext-configure gd \
    --with-gd \
    --with-freetype-dir=/usr/include/ \
    --with-png-dir=/usr/include/ \
    --with-jpeg-dir=/usr/include/ && \
    NPROC=$(grep -c ^processor /proc/cpuinfo 2>/dev/null || 1) && \
    docker-php-ext-install -j${NPROC} gd && \
    apk del --no-cache freetype-dev libpng-dev libjpeg-turbo-dev

# Install composer
RUN curl -sS https://getcomposer.org/installer | php -- --install-dir=/usr/bin --filename=composer

RUN apk --no-cache update \
    && apk --no-cache add make bash g++ zlib-dev libpng-dev \
    && rm -fr /var/cache/apk/*

# Install npm for Laravel Mix
RUN apk add npm
RUN apk add nodejs-lts --update
RUN npm install -g npm

WORKDIR /application

EXPOSE 9000

# Start services
CMD ["php-fpm"]

A: RUN apk add --no-cache --repository http://dl-cdn.alpinelinux.org/alpine/v3.12/community/ --allow-untrusted gnu-libiconv=1.15-r2

That works for me.
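For context, the relevant part of the Dockerfile above would then look roughly like this (repository URL and version pin taken from the answer; the LD_PRELOAD line is kept from the original file):

FROM php:7.3.33-fpm-alpine

# Pinned gnu-libiconv from the Alpine v3.12 community repository (per the answer above);
# the preloaded library replaces musl's limited iconv implementation.
RUN apk add --no-cache --repository http://dl-cdn.alpinelinux.org/alpine/v3.12/community/ --allow-untrusted gnu-libiconv=1.15-r2
ENV LD_PRELOAD /usr/lib/preloadable_libiconv.so php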
iconv does not work with Docker FPM Alpine
That's my dockerfile setup. When I use Laravel Dompdf the error will show "iconv(): Wrong charset, conversion from utf-8' to us-ascii//TRANSLIT' is not allowed" And I have been checked the PHP ini, the iconv has been enabled. In my docker file also added the iconv installation command. It still doesn't work. Any solutions for my docker setting? FROM php:7.3.33-fpm-alpine # Fix: iconv(): Wrong charset, conversion from UTF-8 to UTF-8//IGNORE is not allowed in Command line code on line 1 RUN apk add --no-cache --repository http://dl-cdn.alpinelinux.org/alpine/edge/community/ --allow-untrusted gnu-libiconv ENV LD_PRELOAD /usr/lib/preloadable_libiconv.so php # Install php extensions RUN apk update \ && apk add --no-cache libzip-dev libmcrypt libmcrypt-dev zlib-dev \ && docker-php-ext-install exif zip bcmath mysqli pdo pdo_mysql ctype json # Install GD extensions RUN apk add --no-cache freetype libpng libjpeg-turbo freetype-dev libpng-dev libjpeg-turbo-dev && \ docker-php-ext-configure gd \ --with-gd \ --with-freetype-dir=/usr/include/ \ --with-png-dir=/usr/include/ \ --with-jpeg-dir=/usr/include/ && \ NPROC=$(grep -c ^processor /proc/cpuinfo 2>/dev/null || 1) && \ docker-php-ext-install -j${NPROC} gd && \ apk del --no-cache freetype-dev libpng-dev libjpeg-turbo-dev # Install composer RUN curl -sS https://getcomposer.org/installer | php -- --install-dir=/usr/bin --filename=composer RUN apk --no-cache update \ && apk --no-cache add make bash g++ zlib-dev libpng-dev \ && rm -fr /var/cache/apk/* # Install npm for Laravel Mix RUN apk add npm RUN apk add nodejs-lts --update RUN npm install -g npm WORKDIR /application EXPOSE 9000 # Start services CMD ["php-fpm"]
[ "RUN apk add --no-cache --repository http://dl-cdn.alpinelinux.org/alpine/v3.12/community/ --allow-untrusted gnu-libiconv=1.15-r2\n\nthat's work to me\n" ]
[ 1 ]
[]
[]
[ "docker", "php" ]
stackoverflow_0070677166_docker_php.txt
Q: Cropping an iframe: Get rid of bottom 100px My goal is to embed an iframe (I have no control over it) with variable height and crop off the bottom 100px. The code I have so far is this: <iframe id="booking-content" title="booking-content" src="https://outlook.office365.com/owa/calendar/[email protected]/bookings/" scrolling="no" allowfullscreen="allowfullscreen" style="width: 1024px; height: 100vh; clip-path: inset(0px 0px 100px 0px); overflow: hidden; margin:0 auto; border:none;"> </iframe> Unfortunately, clip-path generates a white border on the bottom, see: The working sample with the code to the picture above is here. Thx a lot! (This question is somewhat related) A: Instead of using clip-path, just use height: calc(100vh - 100px) : iframe { width: 1024px; height: calc(100vh - 300px); overflow: hidden; margin: 0 auto; border: none; } <h1 id="welcome">Cropping an iframe</h1> <p>How can I get rid of the white space on the bottom? (due to clip-path)</p> <iframe id="booking-content" title="booking-content" src="https://en.wikipedia.org/wiki/World_Wide_Web" scrolling="no" allowfullscreen="allowfullscreen"> </iframe> A: if what is before the iframe can be variable, meaning height can change, you'll have to calculate all the height till the iframe, and subtract this height to the iframe height (+100). let h = 0; let el = document.getElementById('welcome'); el.innerHTML = 'Editors'; while (el.nodeName !== 'IFRAME') { h += el.offsetHeight; el = el.nextElementSibling; } h = window.innerHeight - h - 100; document.querySelector('iframe').style.height = h + 'px'; body { margin: 0; padding: 0; height: 100vh; } h1 { font-family: Impact, sans-serif; color: #CE5937; } iframe { width: 1024px; height: calc(100vh - 300px); overflow: hidden; margin: 0 auto; border: none; } <h1 id="welcome">Cropping an iframe</h1> <p>How can I get rid of the white space on the bottom? (due to clip-path)</p> <iframe id="booking-content" title="booking-content" src="https://en.wikipedia.org/wiki/World_Wide_Web" scrolling="no" allowfullscreen="allowfullscreen"> </iframe>
Cropping an iframe: Get rid of bottom 100px
My goal is to embed an iframe (I have no control over it) with variable height and crop off the bottom 100px. The code I have so far is this: <iframe id="booking-content" title="booking-content" src="https://outlook.office365.com/owa/calendar/[email protected]/bookings/" scrolling="no" allowfullscreen="allowfullscreen" style="width: 1024px; height: 100vh; clip-path: inset(0px 0px 100px 0px); overflow: hidden; margin:0 auto; border:none;"> </iframe> Unfortunately, clip-path generates a white border on the bottom, see: The working sample with the code to the picture above is here. Thx a lot! (This question is somewhat related)
[ "Instead of using clip-path, just use height: calc(100vh - 100px) :\n\n\niframe {\n width: 1024px;\n height: calc(100vh - 300px);\n overflow: hidden;\n margin: 0 auto;\n border: none;\n}\n<h1 id=\"welcome\">Cropping an iframe</h1>\n<p>How can I get rid of the white space on the bottom? (due to clip-path)</p>\n\n<iframe id=\"booking-content\" title=\"booking-content\" src=\"https://en.wikipedia.org/wiki/World_Wide_Web\" scrolling=\"no\" allowfullscreen=\"allowfullscreen\">\n</iframe>\n\n\n\n", "if what is before the iframe can be variable, meaning height can change, you'll have to calculate all the height till the iframe, and subtract this height to the iframe height (+100).\n\n\nlet h = 0;\nlet el = document.getElementById('welcome');\nel.innerHTML = 'Editors';\nwhile (el.nodeName !== 'IFRAME') {\n h += el.offsetHeight;\n el = el.nextElementSibling;\n}\nh = window.innerHeight - h - 100;\ndocument.querySelector('iframe').style.height = h + 'px';\nbody {\n margin: 0;\n padding: 0;\n height: 100vh;\n}\n\nh1 {\n font-family: Impact, sans-serif;\n color: #CE5937;\n}\n\niframe {\n width: 1024px;\n height: calc(100vh - 300px);\n overflow: hidden;\n margin: 0 auto;\n border: none;\n}\n<h1 id=\"welcome\">Cropping an iframe</h1>\n<p>How can I get rid of the white space on the bottom? (due to clip-path)</p>\n\n<iframe id=\"booking-content\" title=\"booking-content\" src=\"https://en.wikipedia.org/wiki/World_Wide_Web\" scrolling=\"no\" allowfullscreen=\"allowfullscreen\">\n</iframe>\n\n\n\n" ]
[ 0, 0 ]
[]
[]
[ "crop", "css", "html", "iframe" ]
stackoverflow_0074665232_crop_css_html_iframe.txt
Q: GitHub actions commit attribution Please don't ask why, but I have 2 GitHub accounts. I'll call these two accounts GH1 and GH2. The repository: The repository is private, it is in an organization, and the only members of the organization are GH1 and GH2. It is a repository that I don't want linked to my main, GH1, which is why I want there to be no trace of GH1 on that repository, and why I am only pushing to it as GH2. The repository is being used by me to test workflows, before I deploy them to one of the public repositories of the organization. When I push to the repo locally (as in, git push), my commit is attributed to GH2, but when I check the workflow runs, it says that it was pushed by GH1. How can I make sure that the commit is always attributed to GH2? To clarify: The commit is attributed to GH2, the incorrect attribution to GH1 is in the GitHub Actions workflow runs. (It has been modified with inspect element to redact sensitive information.) A: There are two past solutions on Stack Overflow for fixing your situation. For making future commits use the correct user: https://stackoverflow.com/a/25815116/7058266 For changing past commits to swap a user with another: https://stackoverflow.com/a/4982271/7058266 (Which is part of this question: How to amend several commits in Git to change author) Part 1: Set your git author settings. Eg: git config --global user.name "John Doe" git config --global user.email [email protected] Then reset the author for all commits after the given SHA git rebase -i YOUR_SHA -x "git commit --amend --reset-author -CHEAD" This will pop up your editor to confirm the changes. All you need to do here is save and quit and it will go through each commit and run the command specified in the -x flag. You can also change the author while maintaining the original timestamps with: git rebase -i YOUR_SHA -x "git commit --amend --author 'New Name <[email protected]>' -CHEAD" Part 2: git filter-branch --env-filter 'if [ "$GIT_AUTHOR_EMAIL" = "incorrect@email" ]; then GIT_AUTHOR_EMAIL=correct@email; GIT_AUTHOR_NAME="Correct Name"; GIT_COMMITTER_EMAIL=$GIT_AUTHOR_EMAIL; GIT_COMMITTER_NAME="$GIT_AUTHOR_NAME"; fi' -- --all I would read the details from the links to make sure you understand what you're doing before executing the actions. A: Git : fatal: Could not read from remote repository. Please make sure you have the correct access rights and the repository exists Try pushing with SSH instead. I'd create 2 keys for GH1 and GH2, and then edit my SSH config to something like this: Host github.com-GH2 HostName github.com User GH2 IdentityFile ~/.ssh/github_GH2 and in the repositories where I want to be attributed as GH2, I can simply run this git command: git remote set-url origin [email protected]:user/repo.git I did a push with that SSH url set, and I'm proud to say that the GitHub Actions workflow runs no longer attributes the commit to GH1 (see below screenshot). A: I found a git identity manager. Hopefully I can use this to change between GH1 and GH2, but I haven't tried it yet for GitHub Actions. https://github.com/samrocketman/git-identity-manager
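As a side note to the SSH answer above: the key file it references has to exist first. A minimal sketch (the file name ~/.ssh/github_GH2 is just the one assumed in that SSH config):

# generate a dedicated key for the GH2 identity
ssh-keygen -t ed25519 -C "GH2" -f ~/.ssh/github_GH2
# add ~/.ssh/github_GH2.pub to the GH2 account on GitHub (Settings -> SSH and GPG keys),
# then verify the host alias resolves to the right account:
ssh -T [email protected]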
GitHub actions commit attribution
Please don't ask why, but I have 2 GitHub accounts. I'll call these two accounts GH1 and GH2. The repository: The repository is private, it is in an organization, and the only members of the organization are GH1 and GH2. It is a repository that I don't want linked to my main, GH1, which is why I want there to be no trace of GH1 on that repository, and why I am only pushing to it as GH2. The repository is being used by me to test workflows, before I deploy them to one of the public repositories of the organization. When I push to the repo locally (as in, git push), my commit is attributed to GH2, but when I check the workflow runs, it says that it was pushed by GH1. How can I make sure that the commit is always attributed to GH2? To clarify: The commit is attributed to GH2, the incorrect attribution to GH1 is in the GitHub Actions workflow runs. (It has been modified with inspect element to redact sensitive information.)
[ "There are two past solutions on Stack Overflow for fixing your situation.\n\nFor making future commits use the correct user:\n\n\nhttps://stackoverflow.com/a/25815116/7058266\n\n\nFor changing past commits to swap a user with another:\n\n\nhttps://stackoverflow.com/a/4982271/7058266\n\n(Which is part of this question: How to amend several commits in Git to change author)\nPart 1:\nSet your git author settings. Eg:\ngit config --global user.name \"John Doe\"\ngit config --global user.email [email protected]\n\nThen reset the author for all commits after the given SHA\ngit rebase -i YOUR_SHA -x \"git commit --amend --reset-author -CHEAD\"\n\nThis will pop up your editor to confirm the changes. All you need to do here is save and quit and it will go through each commit and run the command specified in the -x flag.\nYou can also change the author while maintaining the original timestamps with:\ngit rebase -i YOUR_SHA -x \"git commit --amend --author 'New Name <[email protected]>' -CHEAD\"\n\nPart 2:\ngit filter-branch --env-filter 'if [ \"$GIT_AUTHOR_EMAIL\" = \"incorrect@email\" ]; then\n GIT_AUTHOR_EMAIL=correct@email;\n GIT_AUTHOR_NAME=\"Correct Name\";\n GIT_COMMITTER_EMAIL=$GIT_AUTHOR_EMAIL;\n GIT_COMMITTER_NAME=\"$GIT_AUTHOR_NAME\"; fi' -- --all\n\n\nI would read the details from the links to make sure you understand what you're doing before executing the actions.\n", "Git : fatal: Could not read from remote repository. Please make sure you have the correct access rights and the repository exists\nTry pushing with SSH instead. I'd create 2 keys for GH1 and GH2, and then edit my SSH config to something like this:\nHost github.com-GH2\n HostName github.com\n User GH2\n IdentityFile ~/.ssh/github_GH2\n\nand in the repositories where I want to be attributed as GH2, I can simply run this git command:\ngit remote set-url origin [email protected]:user/repo.git\n\nI did a push with that SSH url set, and I'm proud to say that the GitHub Actions workflow runs no longer attributes the commit to GH1 (see below screenshot).\n\n", "I found a git identity manager. Hopefully I can use this to change between GH1 and GH2, but I haven't tried it yet for GitHub Actions.\nhttps://github.com/samrocketman/git-identity-manager\n" ]
[ 0, 0, 0 ]
[]
[]
[ "git", "github", "github_actions" ]
stackoverflow_0070744041_git_github_github_actions.txt
Q: 'NoneType' object is not callable when trying to do a histogram on a dataframe
rfm = df3.groupby('CustomerID').agg({
    'InvoiceNo' : lambda num: len(num),
    'TotalSum' : lambda price: price.sum(),
    'InvoiceDay': lambda x: ref_date- x.max()})

rfm.rename(columns={
    'InvoiceNo' : 'Frequency',
    'TotalSum' : 'Monetary',
    'InvoiceDay': 'Recency'
}, inplace=True)

rfm['Recency'] = rfm['Recency'].dt.days

rfm.hist()
plt.show()

It keeps showing this error, I don't know what I'm doing wrong here:
TypeError: 'NoneType' object is not callable

I was expecting a histogram plot of the 3 different variables. If I don't have rfm.hist(column= 'Recency'), it still shows the same error. What is the issue here?
These are the dtypes:
Frequency      int64
Monetary     float64
Recency        int64

Output exceeds the size limit. Open the full output data in a text editor
TypeError                                 Traceback (most recent call last)
File /Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages/numpy/core/getlimits.py:459, in finfo.__new__(cls, dtype)
    458 try:
--> 459     dtype = numeric.dtype(dtype)
    460 except TypeError:
    461     # In case a float instance was given

TypeError: 'NoneType' object is not callable

During handling of the above exception, another exception occurred:
TypeError                                 Traceback (most recent call last)
/Users/Downloads/Unclassified Learning/Unclassified Learning.ipynb Cell 25 in <cell line: 2>()
      1 rfm.hist()
----> 2 plt.show()

File /Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages/matplotlib/pyplot.py:389, in show(*args, **kwargs)
    345 """
    346 Display all open figures.
    347 (...)
    386 explicitly there.
    387 """
    388 _warn_if_gui_out_of_main_thread()
...
--> 462     dtype = numeric.dtype(type(dtype))
    464 obj = cls._finfo_cache.get(dtype, None)
    465 if obj is not None:

TypeError: 'NoneType' object is not callable

A: It is hard to tell exactly where this happens without the whole error log, but that error is telling you that you are invoking a method on a None value: some of the attributes return None, and you are still trying to access them.
To debug, I recommend checking the pandas DataFrame first, printing out rfm.head().
Secondly, if this is happening while calling hist(), probably some of the underlying data is None, and it might be worth investing some time into cleaning up these None rows, or filling them with something.
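A minimal sketch of the cleanup suggested in the answer above, continuing from the rfm frame built in the question (whether to drop or fill the missing rows depends on what rfm.head() actually shows):

import matplotlib.pyplot as plt

# Inspect the aggregated frame and its dtypes first.
print(rfm.head())
print(rfm.dtypes)

# Drop rows with missing values before plotting...
rfm_clean = rfm.dropna(subset=['Frequency', 'Monetary', 'Recency'])
# ...or keep the rows and fill the gaps instead:
# rfm_clean = rfm.fillna(0)

rfm_clean.hist()
plt.show()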
'NoneType' object is not callable when trying to do a histogram on a dataframe
rfm = df3.groupby('CustomerID').agg({ 'InvoiceNo' : lambda num: len(num), 'TotalSum' : lambda price: price.sum(), 'InvoiceDay': lambda x: ref_date- x.max()}) rfm.rename(columns={ 'InvoiceNo' : 'Frequency', 'TotalSum' : 'Monetary', 'InvoiceDay': 'Recency' }, inplace=True) rfm['Recency'] = rfm['Recency'].dt.days rfm.hist() plt.show() It keeps showing this error, I don't know what I'm doing wrong here: TypeError: 'NoneType' object is not callable I was expecting a histogram plot of the 3 different variables. If I don't have rfm.hist(column= 'Recency'), it still shows the same error. What is the issue here? These are the dtypes: Frequency int64 Monetary float64 Recency int64 Output exceeds the size limit. Open the full output data in a text editor TypeError Traceback (most recent call last) File /Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages/numpy/core/getlimits.py:459, in finfo.new(cls, dtype) 458 try: --> 459 dtype = numeric.dtype(dtype) 460 except TypeError: 461 # In case a float instance was given TypeError: 'NoneType' object is not callable During handling of the above exception, another exception occurred: TypeError Traceback (most recent call last) /Users/Downloads/Unclassified Learning/Unclassified Learning.ipynb Cell 25 in <cell line: 2>() 1 rfm.hist() ----> 2 plt.show() File /Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages/matplotlib/pyplot.py:389, in show(*args, **kwargs) 345 """ 346 Display all open figures. 347 (...) 386 explicitly there. 387 """ 388 _warn_if_gui_out_of_main_thread() ... --> 462 dtype = numeric.dtype(type(dtype)) 464 obj = cls._finfo_cache.get(dtype, None) 465 if obj is not None: TypeError: 'NoneType' object is not callable
[ "Still trying to figure out what moment it happens as we need the whole error log. But that error is trying to tell you that you are invoking a method on a None type. Meaning that some of the attributes return None, and you are still trying to access them.\nTo debug, recommend checking the pandas DataFrame first, printing out rfm.head().\nSecondly, if this is happening while calling hist(), probably some of the underlying data is None, and might be worth investing some time into cleaning up these None rows, or filling them up with something.\n" ]
[ 0 ]
[]
[]
[ "dataframe", "python" ]
stackoverflow_0074665665_dataframe_python.txt
Q: Clicking on the padding of an input field sets the cursor at the beginning of the line, not at the end On a HTML input field that has some padding defined, clicking on the padding area sets the cursor at the beginning of the line and not at the end. Why is that happening? What would be a use case for this behaviour? Any idea on how to prevent this and move always the cursor at the end of the line? <input style="padding:25px;font-size:25px" value="hello" /> https://jsfiddle.net/sf4x3uzL/1 A: Check the snippet, the code is bit long but manages to work in all scenarios. Here added a function to check if the click is from padding area from here Added another function to get the caret position, and combined both function to get get the desired result. var carPos = 0; (function($) { var isPaddingClick = function(element, e) { var style = window.getComputedStyle(element, null); var pTop = parseInt( style.getPropertyValue('padding-top') ); var pRight = parseFloat( style.getPropertyValue('padding-right') ); var pLeft = parseFloat( style.getPropertyValue('padding-left') ); var pBottom = parseFloat( style.getPropertyValue('padding-bottom') ); var width = element.offsetWidth; var height = element.offsetHeight; var x = parseFloat( e.offsetX ); var y = parseFloat( e.offsetY ); return !(( x > pLeft && x < width - pRight) && ( y > pTop && y < height - pBottom)) } $.fn.paddingClick = function(fn) { this.on('click', function(e) { if (isPaddingClick(this, e)) { e.target.setSelectionRange(carPos, carPos); fn() } }) return this } }(jQuery)); $('input').paddingClick() $('input').bind('click keyup', function(event){ carPos = event.target.selectionStart; }) .as-console{ display:none!important} <script src="https://cdnjs.cloudflare.com/ajax/libs/jquery/3.3.1/jquery.min.js"></script> <input style="padding:25px;font-size:25px" value='hello' /> A: You can use setSelectionRange to set the position of the cursor to the end of the input element on click. input.addEventListener('click', (e) => { // Do not do anything if the user already has a selection range, // for instance from a double click. const { selectionStart, selectionEnd } = input; if (selectionStart !== selectionEnd) return; const { length } = input.value; input.focus(); input.setSelectionRange(length, length); }); <input id="input" style="padding:25px;font-size:25px" value="hello" /> A: Try this !! $("input").click(function(){   this.focus(); }); A: A simple solution is to set to 0 the vertical padding and adjust the line-height to get the correct height of the input. This solution is suitable for input only: in case of textarea the huge line-height might be a problem. <input style="padding:0 25px; line-height:3.48; font-size:25px" value="hello" /> A: You can use setSelectionRange to set the caret position at the end using [-1 , -1] position. <input style="padding:25px;font-size:25px;border:1px solid #c0d9bc;outline:2px solid #6fae62" onclick="this.setSelectionRange(-1,-1)" value="hello" /> A: I found this at stack only but seems to be a decent solution, which always sets the value and cursor to end of the text. <input style="padding:25px;font-size:25px" value="hello" onfocus="this.value = this.value;" />. I hope this works for you A: As far as I can tell, clicking on the padding is still as clicking on the <input> itself. It seems like what's happening next depends on some specific implementation in the Browser or the OS. For example, on macOS + Chrome it behaves the way you describe, but it may happen that in Windows it behaves differently. 
My approach would be a CSS only, though. First, wrap your input within a label, this is important, so that if you click on the virtual padding the <input> will get the focus automatically. <label class="custom-input"> <input value='Hello' /> </label> And finally the CSS: .custom-input { display: inline-block; border: 1px solid #ccc; padding: 25px; } .custom-input > input { padding: 0; font-size: 25px; border: 0 none; outline: none; } https://jsfiddle.net/4ysdt7z0/17/ A: I can also reproduce this behavior on both Chrome and Safari, using a MacBook Pro. My workaround for this, not tested cross-device but at least cross-browser, is to set the vertical spacing of the field using line-height, instead padding: So instead of padding: .75rem;, I used: padding: 0 .75rem; line-height: 2.5rem; This is not a complete fix, because clicking the border of the input will still set the cursor to index 0, but at least it works better for me than vertical padding.
Clicking on the padding of an input field sets the cursor at the beginning of the line, not at the end
On a HTML input field that has some padding defined, clicking on the padding area sets the cursor at the beginning of the line and not at the end. Why is that happening? What would be a use case for this behaviour? Any idea on how to prevent this and move always the cursor at the end of the line? <input style="padding:25px;font-size:25px" value="hello" /> https://jsfiddle.net/sf4x3uzL/1
[ "Check the snippet, the code is bit long but manages to work in all scenarios. \nHere added a function to check if the click is from padding area from here\nAdded another function to get the caret position, and combined both function to get get the desired result. \n\n\nvar carPos = 0;\r\n(function($) {\r\n var isPaddingClick = function(element, e) {\r\n var style = window.getComputedStyle(element, null);\r\n var pTop = parseInt( style.getPropertyValue('padding-top') );\r\n var pRight = parseFloat( style.getPropertyValue('padding-right') );\r\n var pLeft = parseFloat( style.getPropertyValue('padding-left') ); \r\n var pBottom = parseFloat( style.getPropertyValue('padding-bottom') );\r\n var width = element.offsetWidth;\r\n var height = element.offsetHeight;\r\n var x = parseFloat( e.offsetX );\r\n var y = parseFloat( e.offsetY ); \r\n return !(( x > pLeft && x < width - pRight) &&\r\n ( y > pTop && y < height - pBottom))\r\n }\r\n $.fn.paddingClick = function(fn) {\r\n this.on('click', function(e) {\r\n if (isPaddingClick(this, e)) {\r\n e.target.setSelectionRange(carPos, carPos);\r\n fn()\r\n }\r\n }) \r\n return this\r\n }\r\n}(jQuery));\r\n\r\n\r\n$('input').paddingClick()\r\n$('input').bind('click keyup', function(event){\r\n carPos = event.target.selectionStart;\r\n})\n.as-console{ display:none!important}\n<script src=\"https://cdnjs.cloudflare.com/ajax/libs/jquery/3.3.1/jquery.min.js\"></script>\r\n<input style=\"padding:25px;font-size:25px\" value='hello' />\n\n\n\n", "You can use setSelectionRange to set the position of the cursor to the end of the input element on click.\n\n\ninput.addEventListener('click', (e) => {\r\n // Do not do anything if the user already has a selection range,\r\n // for instance from a double click.\r\n const { selectionStart, selectionEnd } = input;\r\n if (selectionStart !== selectionEnd) return;\r\n \r\n const { length } = input.value;\r\n input.focus();\r\n input.setSelectionRange(length, length);\r\n});\n<input id=\"input\" style=\"padding:25px;font-size:25px\" value=\"hello\" />\n\n\n\n", "Try this !!\n$(\"input\").click(function(){\n  this.focus();\n});\n\n", "A simple solution is to set to 0 the vertical padding and adjust the line-height to get the correct height of the input.\nThis solution is suitable for input only: in case of textarea the huge line-height might be a problem.\n\n\n<input style=\"padding:0 25px; line-height:3.48; font-size:25px\" value=\"hello\" />\n\n\n\n", "You can use setSelectionRange to set the caret position at the end using [-1 , -1] position.\n\n\n<input style=\"padding:25px;font-size:25px;border:1px solid #c0d9bc;outline:2px solid #6fae62\" onclick=\"this.setSelectionRange(-1,-1)\" value=\"hello\" />\n\n\n\n", "I found this at stack only but seems to be a decent solution, which always sets the value and cursor to end of the text.\n<input style=\"padding:25px;font-size:25px\" value=\"hello\" \n onfocus=\"this.value = this.value;\" />.\n\nI hope this works for you\n", "As far as I can tell, clicking on the padding is still as clicking on the <input> itself. It seems like what's happening next depends on some specific implementation in the Browser or the OS. 
For example, on macOS + Chrome it behaves the way you describe, but it may happen that in Windows it behaves differently.\nMy approach would be a CSS only, though.\nFirst, wrap your input within a label, this is important, so that if you click on the virtual padding the <input> will get the focus automatically.\n<label class=\"custom-input\">\n <input value='Hello' />\n</label>\n\nAnd finally the CSS:\n.custom-input {\n display: inline-block;\n border: 1px solid #ccc;\n padding: 25px;\n}\n\n.custom-input > input {\n padding: 0;\n font-size: 25px;\n border: 0 none;\n outline: none;\n}\n\nhttps://jsfiddle.net/4ysdt7z0/17/\n", "I can also reproduce this behavior on both Chrome and Safari, using a MacBook Pro.\nMy workaround for this, not tested cross-device but at least cross-browser, is to set the vertical spacing of the field using line-height, instead padding:\nSo instead of padding: .75rem;, I used:\npadding: 0 .75rem;\nline-height: 2.5rem;\n\nThis is not a complete fix, because clicking the border of the input will still set the cursor to index 0, but at least it works better for me than vertical padding.\n" ]
[ 4, 3, 0, 0, 0, 0, 0, 0 ]
[]
[]
[ "css", "html", "javascript" ]
stackoverflow_0058732887_css_html_javascript.txt
Q: How to do peek operation in queue in C? #include<stdio.h> #include<string.h> #include<stdlib.h> #define max 100 int enqueue(); int dequeue(); int peek(); void display(); int main() { char name[max][80], data[80]; int front, rear, value; int ch; front = rear = -1; printf("------------------------------\n"); printf("\tMenu"); printf("\n------------------------------"); printf("\n [1] ENQUEUE"); printf("\n [2] DEQUEUE"); printf("\n [3] PEEK"); printf("\n [4] DISPLAY"); printf("\n------------------------------\n"); while(1) { printf("Choice : "); scanf("%d", &ch); switch(ch) { case 1 : // insert printf("\nEnter the Name : "); scanf("%s",data); value = enqueue(name, &rear, data); if(value == -1 ) printf("\n QUEUE is Full \n"); else printf("\n'%s' is inserted in QUEUE.\n\n",data); break; case 2 : // delete value = dequeue(name, &front, &rear, data); if( value == -1 ) printf("\n QUEUE is Empty \n"); else printf("\n Deleted Name from QUEUE is : %s\n", data); printf("\n"); break; case 3: value = peek(name, &front, &rear, data); if(value != -1) { printf("\n The front is: %s", value); } break; case 5 : exit(0); default: printf("Invalid Choice \n"); } } return 0; } int enqueue(char name[max][80], int *rear, char data[80]) { if(*rear == max -1) return(-1); else { *rear = *rear + 1; strcpy(name[*rear], data); return(1); } } int dequeue(char name[max][80], int *front, int *rear, char data[80]) { if(*front == *rear) return(-1); else { (*front)++; strcpy(data, name[*front]); return(1); } } int peek(char name[max][80], int *front, int *rear, char data[80]) { if(*front == -1 || *front > *rear) { printf(" QUEUE IS EMPTY\n"); return -1; } else { return data[*front]; } } My Peek operation has a problem it does not print the first element. For example, the user input one name, and when the user selects the peek operation, the program should print the first element, but my program always prints QUEUE IS EMPTY even though my queue is not empty. The peek operation should print the first element of the queue. A: If you enqueue() then peek() the variable front is still -1 and it incorrectly says the queue is empty. I suggest your treat front to rear as a half open interval with both initialized to 0, so it's empty if *front == *rear. peek() need to populate data and return an integer (not data[*front] which is nonsense). main() after peek() you need to print data not value. Your prototypes are wrong. If you move the functions before main() then you can remove them. int enqueue(char name[max][80], int *rear, const char data[80]) { if(*rear + 1 == max) return -1; strcpy(name[*rear], data); (*rear)++; return 1; } int peek(char name[max][80], int *front, int *rear, char data[80]) { if(*front == *rear) { printf(" QUEUE IS EMPTY\n"); return -1; } strcpy(data, name[*front]); return 1; } // ... 
int main() { char name[max][80], data[80]; int front = 0; int rear = 0; int value; int ch; printf("------------------------------\n"); printf("\tMenu"); printf("\n------------------------------"); printf("\n [1] ENQUEUE"); printf("\n [2] DEQUEUE"); printf("\n [3] PEEK"); printf("\n [4] DISPLAY"); printf("\n------------------------------\n"); while(1) { printf("Choice : "); scanf("%d", &ch); switch(ch) { case 1 : // insert printf("\nEnter the Name : "); scanf("%s",data); value = enqueue(name, &rear, data); if(value == -1 ) printf("\n QUEUE is Full \n"); else printf("\n'%s' is inserted in QUEUE.\n\n",data); break; case 2 : // delete value = dequeue(name, &front, &rear, data); if( value == -1 ) printf("\n QUEUE is Empty \n"); else printf("\n Deleted Name from QUEUE is : %s\n", data); printf("\n"); break; case 3: value = peek(name, &front, &rear, data); if(value != -1) { printf("\n The front is: %s\n", data); } break; case 5 : exit(0); default: printf("Invalid Choice \n"); } } return 0; } example session: ------------------------------ Menu ------------------------------ [1] ENQUEUE [2] DEQUEUE [3] PEEK [4] DISPLAY ------------------------------ Choice : 1 Enter the Name : test 'test' is inserted in QUEUE. Choice : 3 The front is: test
How to do peek operation in queue in C?
#include<stdio.h> #include<string.h> #include<stdlib.h> #define max 100 int enqueue(); int dequeue(); int peek(); void display(); int main() { char name[max][80], data[80]; int front, rear, value; int ch; front = rear = -1; printf("------------------------------\n"); printf("\tMenu"); printf("\n------------------------------"); printf("\n [1] ENQUEUE"); printf("\n [2] DEQUEUE"); printf("\n [3] PEEK"); printf("\n [4] DISPLAY"); printf("\n------------------------------\n"); while(1) { printf("Choice : "); scanf("%d", &ch); switch(ch) { case 1 : // insert printf("\nEnter the Name : "); scanf("%s",data); value = enqueue(name, &rear, data); if(value == -1 ) printf("\n QUEUE is Full \n"); else printf("\n'%s' is inserted in QUEUE.\n\n",data); break; case 2 : // delete value = dequeue(name, &front, &rear, data); if( value == -1 ) printf("\n QUEUE is Empty \n"); else printf("\n Deleted Name from QUEUE is : %s\n", data); printf("\n"); break; case 3: value = peek(name, &front, &rear, data); if(value != -1) { printf("\n The front is: %s", value); } break; case 5 : exit(0); default: printf("Invalid Choice \n"); } } return 0; } int enqueue(char name[max][80], int *rear, char data[80]) { if(*rear == max -1) return(-1); else { *rear = *rear + 1; strcpy(name[*rear], data); return(1); } } int dequeue(char name[max][80], int *front, int *rear, char data[80]) { if(*front == *rear) return(-1); else { (*front)++; strcpy(data, name[*front]); return(1); } } int peek(char name[max][80], int *front, int *rear, char data[80]) { if(*front == -1 || *front > *rear) { printf(" QUEUE IS EMPTY\n"); return -1; } else { return data[*front]; } } My Peek operation has a problem it does not print the first element. For example, the user input one name, and when the user selects the peek operation, the program should print the first element, but my program always prints QUEUE IS EMPTY even though my queue is not empty. The peek operation should print the first element of the queue.
[ "If you enqueue() then peek() the variable front is still -1 and it incorrectly says the queue is empty.\n\nI suggest your treat front to rear as a half open interval with both initialized to 0, so it's empty if *front == *rear.\npeek() need to populate data and return an integer (not data[*front] which is nonsense).\nmain() after peek() you need to print data not value.\nYour prototypes are wrong. If you move the functions before main() then you can remove them.\n\nint enqueue(char name[max][80], int *rear, const char data[80]) {\n if(*rear + 1 == max)\n return -1;\n strcpy(name[*rear], data);\n (*rear)++;\n return 1;\n}\n\nint peek(char name[max][80], int *front, int *rear, char data[80]) {\n if(*front == *rear) {\n printf(\" QUEUE IS EMPTY\\n\");\n return -1;\n }\n strcpy(data, name[*front]);\n return 1;\n}\n\n// ...\n\nint main() {\n char name[max][80], data[80];\n int front = 0;\n int rear = 0;\n int value;\n int ch;\n printf(\"------------------------------\\n\");\n printf(\"\\tMenu\");\n printf(\"\\n------------------------------\");\n printf(\"\\n [1] ENQUEUE\");\n printf(\"\\n [2] DEQUEUE\");\n printf(\"\\n [3] PEEK\");\n printf(\"\\n [4] DISPLAY\");\n printf(\"\\n------------------------------\\n\");\n while(1)\n {\n printf(\"Choice : \");\n scanf(\"%d\", &ch);\n switch(ch) {\n case 1 : // insert\n printf(\"\\nEnter the Name : \");\n scanf(\"%s\",data);\n value = enqueue(name, &rear, data);\n if(value == -1 )\n printf(\"\\n QUEUE is Full \\n\");\n else\n printf(\"\\n'%s' is inserted in QUEUE.\\n\\n\",data);\n break;\n case 2 : // delete\n value = dequeue(name, &front, &rear, data);\n if( value == -1 )\n printf(\"\\n QUEUE is Empty \\n\");\n else\n printf(\"\\n Deleted Name from QUEUE is : %s\\n\", data);\n printf(\"\\n\");\n break;\n case 3:\n value = peek(name, &front, &rear, data);\n if(value != -1)\n {\n printf(\"\\n The front is: %s\\n\", data);\n }\n break;\n case 5 : exit(0);\n default: printf(\"Invalid Choice \\n\");\n }\n }\n return 0;\n}\n\nexample session:\n------------------------------\n Menu\n------------------------------\n [1] ENQUEUE\n [2] DEQUEUE\n [3] PEEK\n [4] DISPLAY\n------------------------------\nChoice : 1\n\nEnter the Name : test\n\n'test' is inserted in QUEUE.\n\nChoice : 3\n\n The front is: test\n\n" ]
[ 1 ]
[]
[]
[ "c", "peek", "queue" ]
stackoverflow_0074665740_c_peek_queue.txt
Q: Heap space issue while merging the document using pdfBox
I am getting a java.lang.OutOfMemoryError when I am trying to merge one 44k-page PDF. I am fetching all the 44k pages from my DB in chunks and trying to merge them with my main document. It processes fine up to about 9.5k pages and then it starts throwing the heap space error.

public void getDocumentAsPdf(String docid) {
    PDDocument pdDocument = new PDDocument();
    try {
        //fetching total count from DB
        Long totalPages = countByDocument(docid);
        Integer batchSize = 400;
        Integer skip=0;
        Long totalBatches = totalPages/batchSize;
        Long remainingPages = totalPages%batchSize;
        for (int i = 1; i <= totalBatches; i++) {
            log.info("Batch : {}", i );
            //fetching pages of given document in ascending order from database
            List<Page> documentPages = fetchPagesByDocument(document,batchSize, skip);
            pdDocument = mergePagesToDocument(pdDocument,documentPages);
            skip+=batchSize;
        }
        if(remainingPages>0)
        {
            //fetching remaining pages of given document in ascending order from database
            List<Page> documentPages = fetchPagesByDocument(document,batchSize,skip);
            pdDocument = mergePagesToDocument(pdDocument,documentPages);
        }
    }
    catch (Exception e)
    {
        throw new InternalErrorException("500","Exception occurred while merging! ");
    }
}

Merge pdf logic

public PDDocument mergePagesToDocument(PDDocument pdDocument,List<Page> documentPages) {
    try {
        PDFMergerUtility pdfMergerUtility = new PDFMergerUtility();
        pdfMergerUtility.mergeDocuments(MemoryUsageSetting.setupMainMemoryOnly());
        for (Page page : documentPages) {
            byte[] decodedPage = java.util.Base64.getDecoder().decode(page.getPageData());
            PDDocument addPage = PDDocument.load(decodedPage);
            pdfMergerUtility.appendDocument(pdDocument, addPage);
            addPage.close();
        }
        return pdDocument;
    }catch (Exception e) {
        throw new InternalErrorException("500",e.getMessage());
    }
}

I think there is some memory leak on my side which is causing the issue. Any suggestion or a better approach will be helpful. Thanks in advance!

A: It isn't exactly a memory leak, but you are trying to store the whole 44k-page PDF in the pdDocument variable. It might be bigger than your heap size. You can increase the heap with the VM option -Xmx (read more here).
Alternatively you can change your approach so that you do not load all 44k pages into memory at once.
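As a rough sketch of the answer's second suggestion (PDFBox 2.x assumed): besides raising the heap (for example running with java -Xmx4g ...), PDFBox can buffer documents in scratch files instead of main memory via MemoryUsageSetting.setupTempFileOnly(). Applied to the merge loop from the question (Page, documentPages and pdfMergerUtility are the question's own types and variables), it could look like this (untested, and the destination document still grows while merging, so a larger heap may be needed anyway):

import java.io.ByteArrayInputStream;
import org.apache.pdfbox.io.MemoryUsageSetting;
import org.apache.pdfbox.pdmodel.PDDocument;

// Destination document backed by temporary files rather than the heap.
PDDocument pdDocument = new PDDocument(MemoryUsageSetting.setupTempFileOnly());

for (Page page : documentPages) {
    byte[] decodedPage = java.util.Base64.getDecoder().decode(page.getPageData());
    // Load each source page with scratch-file buffering as well.
    try (PDDocument addPage = PDDocument.load(
            new ByteArrayInputStream(decodedPage),
            MemoryUsageSetting.setupTempFileOnly())) {
        pdfMergerUtility.appendDocument(pdDocument, addPage);
    }
}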
Heap space issue while merging the document using pdfBox
I am getting java.lang.OutOfMemory error when I am trying to merge one 44k pages pdf. I am fetching all the 44k pages from my DB in chunks and trying to merge with my main document. It is processing fine till 9.5k pages and then it start throwing heap space error. public void getDocumentAsPdf(String docid) { PDDocument pdDocument = new PDDocument(); try { //fetching total count from DB Long totalPages = countByDocument(docid); Integer batchSize = 400; Integer skip=0; Long totalBatches = totalPages/batchSize; Long remainingPages = totalPages%batchSize; for (int i = 1; i <= totalBatches; i++) { log.info("Batch : {}", i ); //fetching pages of given document in ascending order from database List<Page> documentPages = fetchPagesByDocument(document,batchSize, skip); pdDocument = mergePagesToDocument(pdDocument,documentPages); skip+=batchSize; } if(remainingPages>0) { //fetching remaining pages of given document in ascending order from database List<Page> documentPages = fetchPagesByDocument(document,batchSize,skip); pdDocument = mergePagesToDocument(pdDocument,documentPages); } } catch (Exception e) { throw new InternalErrorException("500","Exception occurred while merging! "); } } Merge pdf logic public PDDocument mergePagesToDocument(PDDocument pdDocument,List<Page> documentPages) { try { PDFMergerUtility pdfMergerUtility = new PDFMergerUtility(); pdfMergerUtility.mergeDocuments(MemoryUsageSetting.setupMainMemoryOnly()); for (Page page : documentPages) { byte[] decodedPage = java.util.Base64.getDecoder().decode(page.getPageData()); PDDocument addPage = PDDocument.load(decodedPage); pdfMergerUtility.appendDocument(pdDocument, addPage); addPage.close(); } return pdDocument; }catch (Exception e) { throw new InternalErrorException("500",e.getMessage()); } } I think there is some memory leak from my side which is causing the given issue. Any suggestion or any better approach for the same will be helpful. Thanks in advance!
[ "It isn't exactly a memory leak, but you are trying to store whole 44k pages PDF in pdDocument variable. It might be bigger than your heap size. You can increase it with VM option -Xmx (read more here).\nAlternatively you can change your approach to not load 44k pages into memory at once.\n" ]
[ 0 ]
[]
[]
[ "java", "java_8", "pdfbox", "spring", "spring_boot" ]
stackoverflow_0074665773_java_java_8_pdfbox_spring_spring_boot.txt
Q: Find the smallest positive integer that does not occur in a given sequence I was trying to solve this problem: Write a function: class Solution { public int solution(int[] A); } that, given an array A of N integers, returns the smallest positive integer (greater than 0) that does not occur in A. For example, given A = [1, 3, 6, 4, 1, 2], the function should return 5. Given A = [1, 2, 3], the function should return 4. Given A = [−1, −3], the function should return 1. Assume that: N is an integer within the range [1..100,000]; each element of array A is an integer within the range [−1,000,000..1,000,000]. Complexity: expected worst-case time complexity is O(N); expected worst-case space complexity is O(N) (not counting the storage required for input arguments). I wrote the solution below which gives a low performance, however, I can't see the bug. public static int solution(int[] A) { Set<Integer> set = new TreeSet<>(); for (int a : A) { set.add(a); } int N = set.size(); int[] C = new int[N]; int index = 0; for (int a : set) { C[index++] = a; } for (int i = 0; i < N; i++) { if (C[i] > 0 && C[i] <= N) { C[i] = 0; } } for (int i = 0; i < N; i++) { if (C[i] != 0) { return (i + 1); } } return (N + 1); } The score is provided here, I will keep investigating myself, but please inform me if you can see better. A: If the expected running time should be linear, you can't use a TreeSet, which sorts the input and therefore requires O(NlogN). Therefore you should use a HashSet, which requires O(N) time to add N elements. Besides, you don't need 4 loops. It's sufficient to add all the positive input elements to a HashSet (first loop) and then find the first positive integer not in that Set (second loop). int N = A.length; Set<Integer> set = new HashSet<>(); for (int a : A) { if (a > 0) { set.add(a); } } for (int i = 1; i <= N + 1; i++) { if (!set.contains(i)) { return i; } } A: 100% result solution in Javascript: function solution(A) { // only positive values, sorted A = A.filter(x => x >= 1).sort((a, b) => a - b) let x = 1 for(let i = 0; i < A.length; i++) { // if we find a smaller number no need to continue, cause the array is sorted if(x < A[i]) { return x } x = A[i] + 1 } return x } A: My code in Java, 100% result in Codility import java.util.*; class Solution { public int solution(int[] arr) { Arrays.sort(arr); int smallest = 1; for (int i = 0; i < arr.length; i++) { if (arr[i] == smallest) { smallest++; } } return smallest; } } A: JS: filter to get positive non zero numbers from A array sort above filtered array in ascending order map to iterate loop of above stored result if to check x is less than the current element then return otherwise, add 1 in the current element and assign to x function solution(A) { let x = 1 A.filter(x => x >= 1) .sort((a, b) => a - b) .map((val, i, arr) => { if(x < arr[i]) return x = arr[i] + 1 }) return x } console.log(solution([3, 4, -1, 1])); console.log(solution([1, 2, 0])); A: Here is an efficient python solution: def solution(A): m = max(A) if m < 1: return 1 A = set(A) B = set(range(1, m + 1)) D = B - A if len(D) == 0: return m + 1 else: return min(D) A: No need to store anything. No need for hashsets. (Extra memory), You can do it as you move through the array. However, The array has to be sorted. 
And we know the very most minimum value is 1 import java.util.Arrays; class Solution { public int solution(int[] A) { Arrays.sort(A); int min = 1; /* for efficiency — no need to calculate or access the array object’s length property per iteration */ int cap = A.length; for (int i = 0; i < cap; i++){ if(A[i] == min){ min++; } /* can add else if A[i] > min, break; as suggested by punit */ } /* min = ( min <= 0 ) ? 1:min; which means: if (min <= 0 ){ min =1} else {min = min} you can also do: if min <1 for better efficiency/less jumps */ return min; } } A: Here is my PHP solution, 100% Task Score, 100% correctness, and 100% performance. First we iterate and we store all positive elements, then we check if they exist, function solution($A) { $B = []; foreach($A as $a){ if($a > 0) $B[] = $a; } $i = 1; $last = 0; sort($B); foreach($B as $b){ if($last == $b) $i--; // Check for repeated elements else if($i != $b) return $i; $i++; $last = $b; } return $i; } I think its one of the clears and simples functions here, the logic can be applied in all the other languages. A: For Swift 4 public func solution(_ A : inout [Int]) -> Int { let positive = A.filter { $0 > 0 }.sorted() var x = 1 for val in positive { // if we find a smaller number no need to continue, cause the array is sorted if(x < val) { return x } x = val + 1 } return x } A: I achieved 100% on this by the below solution in Python:- def solution(A): a=frozenset(sorted(A)) m=max(a) if m>0: for i in range(1,m): if i not in a: return i else: return m+1 else: return 1 A: In Kotlin with %100 score Detected time complexity: O(N) or O(N * log(N)) fun solution(A: IntArray): Int { var min = 1 val b = A.sortedArray() for (i in 0 until b.size) { if (b[i] == min) { min++ } } return min } A: This solution is in c# but complete the test with 100% score public int solution(int[] A) { // write your code in C# 6.0 with .NET 4.5 (Mono) var positives = A.Where(x => x > 0).Distinct().OrderBy(x => x).ToArray(); if(positives.Count() == 0) return 1; int prev = 0; for(int i =0; i < positives.Count(); i++){ if(positives[i] != prev + 1){ return prev + 1; } prev = positives[i]; } return positives.Last() + 1; } A: My answer in Ruby def smallest_pos_integer(arr) sorted_array = arr.select {|x| x >= 1}.sort res = 1 for i in (0..sorted_array.length - 1) if res < sorted_array[i] return res end res = sorted_array[i] + 1 end res end A: This answer gives 100% in Python. Worst case complexity O(N). The idea is that we do not care about negative numbers in the sequence, since we want to find the smallest positive integer not in the sequence A. Hence we can set all negative numbers to zero and keep only the unique positive values. Then we check iteratively starting from 1 whether the number is in the set of positive values of sequence A. Worst case scenario, where the sequence is an arithmetic progression with constant difference 1, leads to iterating through all elements and thus O(N) complexity. In the extreme case where all the elements of the sequence are negative (i.e. the maximum is negative) we can immediately return 1 as the minimum positive number. def solution(A): max_A=max(A) B=set([a if a>=0 else 0 for a in A ]) b=1 if max_A<=0: return(1) else: while b in B: b+=1 return(b) A: JavaScript ES6 Solution: function solution(A) { if (!A.includes(1)) return 1; return A.filter(a => a > 0) .sort((a, b) => a - b) .reduce((p, c) => c === p ? 
c + 1 : p, 1); } console.log(solution([1, 3, 6, 4, 1, 2])); console.log(solution([1, 2, 3])); console.log(solution([-1, -3])); console.log(solution([4, 5, 6])); console.log(solution([1, 2, 4])); A: I figured an easy way to do this was to use a BitSet. just add all the positive numbers to the BitSet. when finished, return the index of the first clear bit after bit 0. public static int find(int[] arr) { BitSet b = new BitSet(); for (int i : arr) { if (i > 0) { b.set(i); } } return b.nextClearBit(1); } A: Javascript solution: function solution(A) { A = [...new Set(A.sort( (a,b) => a-b))]; // If the initial integer is greater than 1 or the last integer is less than 1 if((A[0] > 1) || (A[A.length - 1] < 1)) return 1; for (let i in A) { let nextNum = A[+i+1]; if(A[i] === nextNum) continue; if((nextNum - A[i]) !== 1) { if(A[i] < 0 ) { if(A.indexOf(1) !== -1) continue; return 1; } return A[i] + 1; } } } A: 0. Introduction A) Languages allowed The Codility skills assessment demo test allows for solutions written in 18 different languages: C, C++, C#, Go, Java 8, Java 11, JavaScript, Kotlin, Lua, Objective-C, Pascal, PHP, Perl, Python, Ruby, Scala, Swift 4, Visual Basic. B) Some remarks on your question I write the solution below which gives a low performance There is no reason to worry about performance until you have a correct solution. Always make sure the solution is correct before you even think about how fast or slow your algorithm/code is! expected worst-case time complexity is O(N) Well, as the asker of the question, it is your decision what requirements should be met in an answer. But if the goal is to score 100% in the Codility (performance) test, then there is no need to demand O(N). There are plenty of solutions in the answers here which are O(N log N) and not O(N), but still pass all 4 performance tests. This proves that the O(N) requirement on time complexity is unnecessarily harsh (if the sole aim is to score 100% on the Codility test). C) About the solutions presented here All of the solutions presented here are either refactored versions of already published answers, or inspired by such answers. All solutions here score 100% in the Codility skills assessment demo test. 1 I have striven to explicitly reference each original answer/solution, provide a runnable jdoodle link for each solution, use the same 8 tests (chosen by myself) for all the solutions, choose solutions that score 100% (meaning 5 of 5 for correctness and 4 of 4 for performance/speed), make it easy to copy-paste the answers directly into the Codility skills assessment demo test, focus on some of the most used languages. 1. Java: the Codility test for correctness is incorrect (!) I will use one of the existing answers to demonstrate that the Codility test for correctness is flawed for the edge case when the given array is empty. In an empty array, the smallest positive missing integer is clearly 1. Agreed? But the Codility test suite seems to accept just about any answer for the empty array. In the code below, I deliberately return -99 for the empty array, which is obviously incorrect. Yet, Codility gives me a 100% test score for my flawed solution. (!) 
import java.util.Arrays; /** https://app.codility.com/demo/take-sample-test 100% https://stackoverflow.com/a/57067307 https://jdoodle.com/a/3B0D To run the program in a terminal window: javac Solution.java && java Solution && rm Solution.class Terminal command to run the combined formatter/linter: java -jar ../../../checkstyle-8.45.1.jar -c ../../../google_checks.xml *.java */ public class Solution { /** Returns the smallest positive integer missing in intArray. */ public static int solution(int[] intArray) { if (intArray.length == 0) { // No elements at all. return -99; // So the smallest positive missing integer is 1. } Arrays.sort(intArray); // System.out.println(Arrays.toString(intArray)); // Temporarily uncomment? if (intArray[0] >= 2) { // Smallest positive int is 2 or larger. return 1; // Meaning smallest positive MISSING int is 1. } if (intArray[intArray.length - 1] <= 0) { // Biggest int is 0 or smaller. return 1; // Again, smallest positive missing int is 1. } int smallestPositiveMissing = 1; for (int i = 0; i < intArray.length; i++) { if (intArray[i] == smallestPositiveMissing) { smallestPositiveMissing++; } // ^^ Stop incrementing if intArray[i] > smallestPositiveMissing. ^^ } // Because then the smallest positive missing integer has been found: return smallestPositiveMissing; } /** Demo examples. --> Expected output: 1 2 3 4 1 2 3 1 (but vertically). */ public static void main(String[] args) { System.out.println("Hello Codility Demo Test for Java, B"); int[] array1 = {-1, -3}; System.out.println(solution(array1)); int[] array2 = {1, -1}; System.out.println(solution(array2)); int[] array3 = {2, 1, 2, 5}; System.out.println(solution(array3)); int[] array4 = {3, 1, -2, 2}; System.out.println(solution(array4)); int[] array5 = {}; System.out.println(solution(array5)); int[] array6 = {1, -5, -3}; System.out.println(solution(array6)); int[] array7 = {1, 2, 4, 5}; System.out.println(solution(array7)); int[] array8 = {17, 2}; System.out.println(solution(array8)); } } Below is a screen dump of the result from the test. As the solution is clearly wrong, of course it should not score 100%! 2 2. JavaScript Below is a JavaScript solution. This one has not been posted before, but is inspired by one of the previous answers. /** https://app.codility.com/demo/take-sample-test 100% (c) Henke 2022 https://stackoverflow.com/users/9213345 https://jdoodle.com/a/3AZG To run the program in a terminal window: node CodilityDemoJS3.js Terminal command to run the combined formatter/linter: standard CodilityDemoJS3.js https://github.com/standard/standard */ function solution (A) { /// Returns the smallest positive integer missing in the array A. let smallestMissing = 1 // In the following .reduce(), the only interest is in `smallestMissing`. // I arbitrarily return '-9' because I don't care about the return value. A.filter(x => x > 0).sort((a, b) => a - b).reduce((accumulator, item) => { if (smallestMissing < item) return -9 // Found before end of the array. smallestMissing = item + 1 return -9 // Found at the end of the array. }, 1) return smallestMissing } // Demo examples. --> Expected output: 1 2 3 4 1 2 3 1 (but vertically). // Note! 
The following lines need to be left out when running the // Codility Demo Test at https://app.codility.com/demo/take-sample-test : console.log('Hello Codility Demo Test for JavaScript, 3.') console.log(solution([-1, -3])) console.log(solution([1, -1])) console.log(solution([2, 1, 2, 5])) console.log(solution([3, 1, -2, 2])) console.log(solution([])) console.log(solution([1, -5, -3])) console.log(solution([1, 2, 4, 5])) console.log(solution([17, 2])) .as-console-wrapper { max-height: 100% !important; top: 0; } 3. Python Python has come to compete with Java as one of the most used programming languages worldwide. The code below is a slightly rewritten version of this answer. #!/usr/bin/env python3 ''' https://app.codility.com/demo/take-sample-test 100% https://stackoverflow.com/a/58980724 https://jdoodle.com/a/3B0k To run the program in a terminal window: python codility_demo_python_a.py Command in the terminal window to run the linter: py -m pylint codility_demo_python_a.py https://pypi.org/project/pylint/ Dito for autopep8 formatting: autopep8 codility_demo_python_a.py --in-place https://pypi.org/project/autopep8/ ''' def solution(int_array): ''' Returns the smallest positive integer missing in int_array. ''' max_elem = max(int_array, default=0) if max_elem < 1: return 1 int_array = set(int_array) # Reusing int_array although now a set # print(int_array) # <- Temporarily uncomment at line beginning all_ints = set(range(1, max_elem + 1)) diff_set = all_ints - int_array if len(diff_set) == 0: return max_elem + 1 return min(diff_set) # Demo examples. --> Expected output: 1 2 3 4 1 2 3 1 (but vertically). # Note! The following lines need to be commented out when running the # Codility Demo Test at https://app.codility.com/demo/take-sample-test : print('Hello Codility Demo Test for Python3, a.') print(solution([-1, -3])) print(solution([1, -1])) print(solution([2, 1, 2, 5])) print(solution([3, 1, -2, 2])) print(solution([])) print(solution([1, -5, -3])) print(solution([1, 2, 4, 5])) print(solution([17, 2])) 4. C# Here a solution for C#, inspired by a previous answer. using System; using System.Linq; /// https://app.codility.com/demo/take-sample-test 100% /// (c) 2021 Henke, https://stackoverflow.com/users/9213345 /// https://jdoodle.com/a/3B0Z /// To initialize the program in a terminal window, only ONCE: /// dotnet new console -o codilityDemoC#-2 && cd codilityDemoC#-2 /// To run the program in a terminal window: /// dotnet run && rm -rf obj && rm -rf bin /// Terminal command to run 'dotnet-format': /// dotnet-format --include DemoC#_2.cs && rm -rf obj && rm -rf bin public class Solution { /// Returns the smallest positive integer missing in intArray. public int solution(int[] intArray) { var sortedSet = intArray.Where(x => x > 0).Distinct().OrderBy(x => x).ToArray(); // Console.WriteLine("[" + string.Join(",", sortedSet) + "]"); // Uncomment? if (sortedSet.Length == 0) return 1; // The set is empty. int smallestMissing = 1; for (int i = 0; i < sortedSet.Length; i++) { if (smallestMissing < sortedSet[i]) break; // The answer has been found. smallestMissing = sortedSet[i] + 1; } // Coming here means all of `sortedSet` had to be traversed. return smallestMissing; } /// Demo examples. --> Expected output: 1 2 3 4 1 2 3 1 (but vertically). /// NOTE! The code below must be removed before running the Codility test. 
static void Main(string[] args) { Console.WriteLine("Hello Codility Demo Test for C#, 2."); int[] array1 = { -1, -3 }; Console.WriteLine((new Solution()).solution(array1)); int[] array2 = { 1, -1 }; Console.WriteLine((new Solution()).solution(array2)); int[] array3 = { 2, 1, 2, 5 }; Console.WriteLine((new Solution()).solution(array3)); int[] array4 = { 3, 1, -2, 2 }; Console.WriteLine((new Solution()).solution(array4)); int[] array5 = { }; Console.WriteLine((new Solution()).solution(array5)); int[] array6 = { 1, -5, -3 }; Console.WriteLine((new Solution()).solution(array6)); int[] array7 = { 1, 2, 4, 5 }; Console.WriteLine((new Solution()).solution(array7)); int[] array8 = { 17, 2 }; Console.WriteLine((new Solution()).solution(array8)); } } 5. Swift Here is a solution for Swift, taken from this answer. /** https://app.codility.com/demo/take-sample-test 100% https://stackoverflow.com/a/57063839 https://www.jdoodle.com/a/4ny5 */ public func solution(_ A : inout [Int]) -> Int { /// Returns the smallest positive integer missing in the array A. let positiveSortedInts = A.filter { $0 > 0 }.sorted() // print(positiveSortedInts) // <- Temporarily uncomment at line beginning var smallestMissingPositiveInt = 1 for elem in positiveSortedInts{ // if(elem > smallestMissingPositiveInt) then the answer has been found! if(elem > smallestMissingPositiveInt) { return smallestMissingPositiveInt } smallestMissingPositiveInt = elem + 1 } return smallestMissingPositiveInt // This is if the whole array was traversed. } // Demo examples. --> Expected output: 1 2 3 4 1 2 3 1 (but vertically). // Note! The following lines need to be left out when running the // Codility Demo Test at https://app.codility.com/demo/take-sample-test : print("Hello Codility Demo Test for Swift 4, A.") var array1 = [-1, -3] print(solution(&array1)) var array2 = [1, -1] print(solution(&array2)) var array3 = [2, 1, 2, 5] print(solution(&array3)) var array4 = [3, 1, -2, 2] print(solution(&array4)) var array5 = [] as [Int] print(solution(&array5)) var array6 = [1, -5, -3] print(solution(&array6)) var array7 = [1, 2, 4, 5] print(solution(&array7)) var array8 = [17, 2] print(solution(&array8)) 6. PHP Here a solution for PHP, taken from this answer. <?php /** https://app.codility.com/demo/take-sample-test 100% https://stackoverflow.com/a/60535808 https://www.jdoodle.com/a/4nB0 */ function solution($A) { $smallestMissingPositiveInt = 1; sort($A); foreach($A as $elem){ if($elem <=0) continue; if($smallestMissingPositiveInt < $elem) return $smallestMissingPositiveInt; else $smallestMissingPositiveInt = $elem + 1; } return $smallestMissingPositiveInt; } // Demo examples. --> Expected output: 1 2 3 4 1 2 3 1 . // Note! The starting and ending PHP tags are needed when running // the code from the command line in a *.php file, but they and // the following lines need to be left out when running the Codility // Demo Test at https://app.codility.com/demo/take-sample-test : echo "Hello Codility Demo Test for PHP, 1.\n"; echo solution([-1, -3]) . " "; echo solution([1, -1]) . " "; echo solution([2, 1, 2, 5]) . " "; echo solution([3, 1, -2, 2]) . " "; echo solution([]) . " "; echo solution([1, -5, -3]) . " "; echo solution([1, 2, 4, 5]) . " "; echo solution([17, 2]) . 
" "; ?> References The Codility skills assessment demo test The LeetCode playground A correct Java solution A correct JavaScript solution A correct Python 3 solution A correct C# solution A correct Swift 4 solution A correct PHP solution 1 This is true even for the first solution – the Java solution – despite the the fact that this solution is wrong! 2 You can try running the test yourself at https://app.codility.com/demo/take-sample-test. You will have to sign up to do so. Simply copy-paste all of the code from the snippet. The default is Java 8, so you won't need to change the language for the first solution. A: My solution in JavaScript, using the reduce() method function solution(A) { // the smallest positive integer = 1 if (!A.includes(1)) return 1; // greater than 1 return A.reduce((accumulator, current) => { if (current <= 0) return accumulator const min = current + 1 return !A.includes(min) && accumulator > min ? min : accumulator; }, 1000000) } console.log(solution([1, 2, 3])) // 4 console.log(solution([5, 3, 2, 1, -1])) // 4 console.log(solution([-1, -3])) // 1 console.log(solution([2, 3, 4])) // 1 https://codesandbox.io/s/the-smallest-positive-integer-zu4s2 A: 100% solution in Swift, I found it here, it is really beautiful than my algo... No need to turn array as ordered, instead using dictionary [Int: Bool] and just check the positive item in dictionary. public func solution(_ A : inout [Int]) -> Int { var counter = [Int: Bool]() for i in A { counter[i] = true } var i = 1 while true { if counter[i] == nil { return i } else { i += 1 } } } A: JavaScript solution without sort, 100% score and O(N) runtime. It builds a hash set of the positive numbers while finding the max number. function solution(A) { set = new Set() let max = 0 for (let i=0; i<A.length; i++) { if (A[i] > 0) { set.add(A[i]) max = Math.max(max, A[i]) } } for (let i=1; i<max; i++) { if (!set.has(i)) { return i } } return max+1 } A: My solution having 100% result in codility with Swift 4. func solution(_ A : [Int]) -> Int { let positive = A.filter { $0 > 0 }.sorted() var x = 1 for val in positive{ if(x < val) { return x } x = val + 1 } return x } A: My simple and (time) efficient Java solution: import java.util.*; class Solution { public int solution(int[] A) { Set<Integer> set=new TreeSet<>(); for (int x:A) { if (x>0) { set.add(x); } } int y=1; Iterator<Integer> it=set.iterator(); while (it.hasNext()) { int curr=it.next(); if (curr!=y) { return y; } y++; } return y; } } A: Here is a simple and fast code in PHP. Task Score: 100% Correctness: 100% Performance: 100% Detected time complexity: O(N) or O(N * log(N)) function solution($A) { $x = 1; sort($A); foreach($A as $i){ if($i <=0) continue; if($x < $i) return $x; else $x = $i+1; } return $x; } Performance tests A: First let me explain about the algorithm down below. If the array contains no elements then return 1, Then in a loop check if the current element of the array is larger then the previous element by 2 then there is the first smallest missing integer, return it. If the current element is consecutive to the previous element then the current smallest missing integer is the current integer + 1. Array.sort(A); if(A.Length == 0) return 1; int last = (A[0] < 1) ? 0 : A[0]; for (int i = 0; i < A.Length; i++) { if(A[i] > 0){ if (A[i] - last > 1) return last + 1; else last = A[i]; } } return last + 1; A: This my implementation in Swift 4 with 100% Score. It should be a pretty similar code in Java. Let me know what you think. 
public func solution(_ A : inout [Int]) -> Int { let B = A.filter({ element in element > 0 }).sorted() var result = 1 for element in B { if element == result { result = result + 1 } else if element > result { break } } return result } A: This is the solution in C#: using System; // you can also use other imports, for example: using System.Collections.Generic; // you can write to stdout for debugging purposes, e.g. // Console.WriteLine("this is a debug message"); class Solution { public int solution(int[] A) { // write your code in C# 6.0 with .NET 4.5 (Mono) int N = A.Length; HashSet<int> set = new HashSet<int>(); foreach (int a in A) { if (a > 0) { set.Add(a); } } for (int i = 1; i <= N + 1; i++) { if (!set.Contains(i)) { return i; } } return N; } } A: This is for C#, it uses HashSet and Linq queries and has 100% score on Codility public int solution(int[] A) { var val = new HashSet<int>(A).Where(x => x >= 1).OrderBy((y) =>y).ToArray(); var minval = 1; for (int i = 0; i < val.Length; i++) { if (minval < val[i]) { return minval; } minval = val[i] + 1; } return minval; } A: You're doing too much. You've created a TreeSet, which is an ordered set of integers, and then you've tried to turn that back into an array. Instead go through the list, and skip all negative values, then once you find positive values start counting the index. If the index is greater than the number, then the set has skipped a positive value. int index = 1; for(int a: set){ if(a>0){ if(a>index){ return index; } else{ index++; } } } return index; Updated for negative values. A different solution that is O(n) would be to use an array. This is like the hash solution. int N = A.length; int[] hashed = new int[N]; for( int i: A){ if(i>0 && i<=N){ hashed[i-1] = 1; } } for(int i = 0; i<N; i++){ if(hashed[i]==0){ return i+1; } } return N+1; This could be further optimized counting down the upper limit for the second loop. 
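For completeness, a self-contained, runnable version of that array-marking idea looks roughly like this (the class name, the `seen` array name, and the demo inputs are only illustrative):

class MarkingSketch {
    // Marks which of the values 1..N occur in A, then returns the first value
    // that was never marked. Runs in O(N) time with O(N) extra space.
    static int solution(int[] A) {
        int n = A.length;
        boolean[] seen = new boolean[n + 1];   // seen[v] == true means value v occurs in A
        for (int value : A) {
            if (value >= 1 && value <= n) {
                seen[value] = true;
            }
        }
        for (int candidate = 1; candidate <= n; candidate++) {
            if (!seen[candidate]) {
                return candidate;              // first positive value missing from A
            }
        }
        return n + 1;                          // A contains every value in 1..N
    }

    public static void main(String[] args) {
        System.out.println(solution(new int[] {1, 3, 6, 4, 1, 2})); // 5
        System.out.println(solution(new int[] {1, 2, 3}));          // 4
        System.out.println(solution(new int[] {-1, -3}));           // 1
    }
}

An array of N + 1 slots is enough because the answer can never exceed N + 1: an array of N elements can cover at most the values 1..N.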
A: For the space complexity of O(1) and time complexity of O(N) and if the array can be modified then it could be as follows: public int getFirstSmallestPositiveNumber(int[] arr) { // set positions of non-positive or out of range elements as free (use 0 as marker) for (int i = 0; i < arr.length; i++) { if (arr[i] <= 0 || arr[i] > arr.length) { arr[i] = 0; } } //iterate through the whole array again mapping elements [1,n] to positions [0, n-1] for (int i = 0; i < arr.length; i++) { int prev = arr[i]; // while elements are not on their correct positions keep putting them there while (prev > 0 && arr[prev - 1] != prev) { int next = arr[prev - 1]; arr[prev - 1] = prev; prev = next; } } // now, the first unmapped position is the smallest element for (int i = 0; i < arr.length; i++) { if (arr[i] != i + 1) { return i + 1; } } return arr.length + 1; } @Test public void testGetFirstSmallestPositiveNumber() { int[][] arrays = new int[][]{{1,-1,-5,-3,3,4,2,8}, {5, 4, 3, 2, 1}, {0, 3, -2, -1, 1}}; for (int i = 0; i < arrays.length; i++) { System.out.println(getFirstSmallestPositiveNumber(arrays[i])); } } Output: 5 6 2 A: I find another solution to do it with additional storage, /* * if A = [-1,2] the solution works fine * */ public static int solution(int[] A) { int N = A.length; int[] C = new int[N]; /* * Mark A[i] as visited by making A[A[i] - 1] negative * */ for (int i = 0; i < N; i++) { /* * we need the absolute value for the duplicates * */ int j = Math.abs(A[i]) - 1; if (j >= 0 && j < N && A[j] > 0) { C[j] = -A[j]; } } for (int i = 0; i < N; i++) { if (C[i] == 0) { return i + 1; } } return N + 1; } A: //My recursive solution: class Solution { public int solution(int[] A) { return next(1, A); } public int next(int b, int[] A) { for (int a : A){ if (b==a) return next(++b, A); } return b; } } A: <JAVA> Try this code- private int solution(int[] A) {//Our original array int m = Arrays.stream(A).max().getAsInt(); //Storing maximum value if (m < 1) // In case all values in our array are negative { return 1; } if (A.length == 1) { //If it contains only one element if (A[0] == 1) { return 2; } else { return 1; } } int i = 0; int[] l = new int[m]; for (i = 0; i < A.length; i++) { if (A[i] > 0) { if (l[A[i] - 1] != 1) //Changing the value status at the index of our list { l[A[i] - 1] = 1; } } } for (i = 0; i < l.length; i++) //Encountering first 0, i.e, the element with least value { if (l[i] == 0) { return i + 1; } } //In case all values are filled between 1 and m return i+1; } Input: {1,-1,0} , o/p: 2 Input: {1,2,5,4,6}, o/p: 3 Input: {-1,0,-2}, o/p: 1 A: Here's my solution in C++. It got a 100% score (100% correctness, 100% performance) (after multiple tries ;)). It relies on the simple principle of comparing its values to their corresponding index (after a little preprocessing such as sorting). I agree that your solution is doing too much; You don't need four loops. The steps of my solution are basically: Sort and remove any duplicates. There are two possible methods here, the first one utilizing std::sort, std::unique, and erase, while the second one takes advantage of std::set and the fact that a set sorts itself and disallows duplicates Handle edge cases, of which there are quite a few (I missed these initially, causing my score to be quite low at first). The three edge cases are: All ints in the original array were negative All ints in the original array were positive and greater than 1 The original array had only 1 element in it For every element, check if its value != its index+1. 
The first element for which this is true is where the smallest missing positive integer is. I.e. if vec.at(i) != i+1, then vec.at(i-1)+1 is the smallest missing positive integer. If vec.at(i) != i+1 is false for all elements in the array, then there are no "gaps" in the array's sequence, and the smallest positive int is simply vec.back()+1 (the 4th edge case if you will). And the code: int solution(vector<int>& rawVec) { //Sort and remove duplicates: Method 1 std::sort(rawVec.begin(), rawVec.end()); rawVec.erase(std::unique(rawVec.begin(), rawVec.end()), rawVec.end()); //Sort and remove duplicates: Method 2 // std::set<int> s(rawVec.begin(), rawVec.end()); // rawVec.assign(s.begin(), s.end()); //Remove all ints < 1 vector<int> vec; vec.reserve(rawVec.size()); for(const auto& el : rawVec) { if(el>0) vec.push_back(el); } //Edge case: All ints were < 1 or all ints were > 1 if(vec.size()==0 or vec.at(0) != 1) return 1; //Edge case: vector contains only one element if(vec.size()==1) return (vec.at(0)!=1 ? 1 : 2); for(int i=0; i<vec.size(); ++i) { if(vec.at(i) != i+1) return vec.at(i-1)+1; } return vec.back()+1; } A: This code has been writen in Java SE 8 import java.util.*; public class Solution { public int solution(int[] A) { int smallestPositiveInt = 1; if(A.length == 0) { return smallestPositiveInt; } Arrays.sort(A); if(A[0] > 1) { return smallestPositiveInt; } if(A[A.length - 1] <= 0 ) { return smallestPositiveInt; } for(int x = 0; x < A.length; x++) { if(A[x] == smallestPositiveInt) { smallestPositiveInt++; } } return smallestPositiveInt; } } A: public static int solution(int[] A) { Arrays.sort(A); int minNumber = 1; int length = A.length - 1; int max = A[length]; Set < Integer > set = new HashSet < > (); for (int i: A) { if (i > 0) { set.add(i); } } for (int j = 1; j <= max + 1; j++) { if (!set.contains(j)) { minNumber = j; break; } } return minNumber; } A: This solution runs in O(N) complexity and all the corner cases are covered. public int solution(int[] A) { Arrays.sort(A); //find the first positive integer int i = 0, len = A.length; while (i < len && A[i++] < 1) ; --i; //Check if minimum value 1 is present if (A[i] != 1) return 1; //Find the missing element int j = 1; while (i < len - 1) { if (j == A[i + 1]) i++; else if (++j != A[i + 1]) return j; } // If we have reached the end of array, then increment out value if (j == A[len - 1]) j++; return j; } A: Solution in Scala with all test cases running: Class Solution { def smallestNumber(arrayOfNumbers: Array[Int]) = { val filteredSet = arrayOfNumbers.foldLeft(HashSet.empty[Int]){(acc, next) => if(next > 0) acc.+(next) else acc} getSmallestNumber(filteredSet) } @tailrec private def getSmallestNumber(setOfNumbers: HashSet[Int], index: Int = 1): Int = { setOfNumbers match { case emptySet if(emptySet.isEmpty) => index case set => if(!set.contains(index)) index else getSmallestNumber(set, index + 1) } } } A: The code below is is simpler but my motive was to write for JavaScript, ES6 users: function solution(A) { let s = A.sort(); let max = s[s.length-1]; let r = 1; // here if we have an array with [1,2,3] it should print 4 for us so I added max + 2 for(let i=1; i <= (max + 2); i++) { r = A.includes(i) ? 
1 : i ; if(r>1) break; } return r; } A: My python solution 100% correctness def solution(A): if max(A) < 1: return 1 if len(A) == 1 and A[0] != 1: return 1 s = set() for a in A: if a > 0: s.add(a) for i in range(1, len(A)): if i not in s: return i return len(s) + 1 assert solution([1, 3, 6, 4, 1, 2]) == 5 assert solution([1, 2, 3]) == 4 assert solution([-1, -3]) == 1 assert solution([-3,1,2]) == 3 assert solution([1000]) == 1 A: With 100% Accuracy on codility in Java public int solution(int[] A) { // write your code in Java SE 8 Arrays.sort(A); int i=1; for (int i1 = 0; i1 < A.length; i1++) { if (A[i1] > 0 && i < A[i1]) { return i; }else if(A[i1]==i){ ++i; } } return i; } A: My Javascript solution. This got 100%. The solution is to sort the array and compare the adjacent elements of the array. function solution(A) { // write your code in JavaScript (Node.js 8.9.4) A.sort((a, b) => a - b); if (A[0] > 1 || A[A.length - 1] < 0 || A.length === 2) return 1; for (let i = 1; i < A.length - 1; ++i) { if (A[i] > 0 && (A[i + 1] - A[i]) > 1) { return A[i] + 1; } } return A[A.length - 1] + 1; } A: this is my code in Kotlin: private fun soluction(A: IntArray): Int { val N = A.size val counter = IntArray(N + 1) for (i in A) if (i in 1..N) counter[i]++ for (i in 1 until N + 1) if (counter[i] == 0) return i return N + 1 } A: 100% in Swift using recursive function. O(N) or O(N * log(N)) var promise = 1 public func solution(_ A : inout [Int]) -> Int { // write your code in Swift 4.2.1 (Linux) execute(A.sorted(), pos: 0) return promise } func execute(_ n:[Int], pos i:Int) { if n.count == i || n.count == 0 { return } if promise > 0 && n[i] > 0 && n[i]+1 < promise { promise = n[i] + 1 } else if promise == n[i] { promise = n[i] + 1 } execute(n, pos: i+1) } A: In C# you can write simple below code. public int solution(int[] A) { int maxSize = 100000; int[] counter = new int[maxSize]; foreach (var number in A) { if(number >0 && number <= maxSize) { counter[number - 1] = 1; } } for (int i = 0; i < maxSize; i++) { if (counter[i] == 0) return i + 1; } return maxSize + 1; } A: I used Python. First I sorted, then filtered corner cases, then removed elements <1. Now element and index are comparable. def solution(A): A = sorted(set(A)) if A[0]>1 or A[-1]<=0: return 1 x = 0 while x<len(A): if A[x]<=0: A.pop(x) x-=1 x+=1 for i, x in enumerate(A): if not x==i+1: return i+1 return i+2 Codility finds it okay.. A: Go language : Find the smallest positive integer that does not occur in a given sequence GO language: sort above given array in ascending order map to iterate loop of above stored result, if to check x is less than the current element then return, otherwise, add 1 in the current element and assign to x package main import ( "fmt" "sort" ) func Solution(nums []int) int { sort.Ints(nums) x := 1 for _, val := range nums { if(x < val) { return x } if(val >= 0) { x = val + 1 } } return x } func main() { a1 := []int{1, 3, 6, 4, 1, 2} fmt.Println(Solution(a1)) // Should return : 5 a2 := []int{1, 2, 3} fmt.Println(Solution(a2)) // Should return : 4 a3 := []int{-1, -3} fmt.Println(Solution(a3)) // Should return : 1 a4 := []int{4, 5, 6} fmt.Println(Solution(a4)) // Should return : 1 a5 := []int{1, 2, 4} fmt.Println(Solution(a5)) // Should return : 3 } A: Late joining the conversation. 
Based on: https://codereview.stackexchange.com/a/179091/184415 There is indeed an O(n) complexity solution to this problem even if duplicate ints are involved in the input: solution(A) Filter out non-positive values from A For each int in filtered Let a zero-based index be the absolute value of the int - 1 If the filtered range can be accessed by that index and filtered[index] is not negative Make the value in filtered[index] negative For each index in filtered if filtered[index] is positive return the index + 1 (to one-based) If none of the elements in filtered is positive return the length of filtered + 1 (to one-based) So an array A = [1, 2, 3, 5, 6], would have the following transformations: abs(A[0]) = 1, to_0idx = 0, A[0] = 1, make_negative(A[0]), A = [-1, 2, 3, 5, 6] abs(A[1]) = 2, to_0idx = 1, A[1] = 2, make_negative(A[1]), A = [-1, -2, 3, 5, 6] abs(A[2]) = 3, to_0idx = 2, A[2] = 3, make_negative(A[2]), A = [-1, -2, -3, 5, 6] abs(A[3]) = 5, to_0idx = 4, A[4] = 6, make_negative(A[4]), A = [-1, -2, -3, 5, -6] abs(A[4]) = 6, to_0idx = 5, A[5] is inaccessible, A = [-1, -2, -3, 5, -6] A linear search for the first positive value returns an index of 3. Converting back to a one-based index results in solution(A)=3+1=4 Here's an implementation of the suggested algorithm in C# (should be trivial to convert it over to Java lingo - cut me some slack common): public int solution(int[] A) { var positivesOnlySet = A .Where(x => x > 0) .ToArray(); if (!positivesOnlySet.Any()) return 1; var totalCount = positivesOnlySet.Length; for (var i = 0; i < totalCount; i++) //O(n) complexity { var abs = Math.Abs(positivesOnlySet[i]) - 1; if (abs < totalCount && positivesOnlySet[abs] > 0) //notice the greater than zero check positivesOnlySet[abs] = -positivesOnlySet[abs]; } for (var i = 0; i < totalCount; i++) //O(n) complexity { if (positivesOnlySet[i] > 0) return i + 1; } return totalCount + 1; } A: I think that using structures such as: sets or dicts to store unique values is not the better solution, because you end either looking for an element inside a loop which leads to O(N*N) complexity or using another loop to verify the missing value which leaves you with O(N) linear complexity but spending more time than just 1 loop. Neither using a counter array structure is optimal regarding storage space because you end up allocating MaxValue blocks of memory even when your array only has one item. 
So I think the best solution uses just one for-loop, avoiding structures and also implementing conditions to stop iteration when it is not needed anymore: public int solution(int[] A) { // write your code in Java SE 8 int len = A.length; int min=1; Arrays.sort(A); if(A[len-1]>0) for(int i=0; i<len; i++){ if(A[i]>0){ if(A[i]==min) min=min+1; if(A[i]>min) break; } } return min; } This way you will get complexity of O(N) or O(N * log(N)), so in the best case you are under O(N) complexity, when your array is already sorted A: package Consumer; import java.util.Arrays; import java.util.List; import java.util.stream.Collectors; public class codility { public static void main(String a[]) { int[] A = {1,9,8,7,6,4,2,3}; int B[]= {-7,-5,-9}; int C[] ={1,-2,3}; int D[] ={1,2,3}; int E[] = {-1}; int F[] = {0}; int G[] = {-1000000}; System.out.println(getSmall(F)); } public static int getSmall(int[] A) { int j=0; if(A.length < 1 || A.length > 100000) return -1; List<Integer> intList = Arrays.stream(A).boxed().sorted().collect(Collectors.toList()); if(intList.get(0) < -1000000 || intList.get(intList.size()-1) > 1000000) return -1; if(intList.get(intList.size()-1) < 0) return 1; int count=0; for(int i=1; i<=intList.size();i++) { if(!intList.contains(i))return i; count++; } if(count==intList.size()) return ++count; return -1; } } A: public int solution(int[] A) { int res = 0; HashSet<Integer> list = new HashSet<>(); for (int i : A) list.add(i); for (int i = 1; i < 1000000; i++) { if(!list.contains(i)){ res = i; break; } } return res; } A: Python implementation of the solution. Get the set of the array - This ensures we have unique elements only. Then keep checking until the value is not present in the set - Print the next value as output and return it. def solution(A): # write your code in Python 3.6 a = set(A) i = 1 while True: if i in A: i+=1 else: return i return i pass A: Works 100%. tested with all the condition as described. //MissingInteger public int missingIntegerSolution(int[] A) { Arrays.sort(A); long sum = 0; for(int i=0; i<=A[A.length-1]; i++) { sum += i; } Set<Integer> mySet = Arrays.stream(A).boxed().collect(Collectors.toSet()); Integer[] B = mySet.toArray(new Integer[0]); if(sum < 0) return 1; for(int i=0; i<B.length; i++) { sum -= B[i]; } if(sum == 0) return A[A.length-1] + 1; else return Integer.parseInt(""+sum); } int[] j = {1, 3, 6, 4, 1, 2,5}; System.out.println("Missing Integer : "+obj.missingIntegerSolution(j)); Output Missing Integer : 7 int[] j = {1, 3, 6, 4, 1, 2}; System.out.println("Missing Integer : "+obj.missingIntegerSolution(j)); Output Missing Integer : 5 A: For JavaScript i would do it this way: function solution(arr) { let minValue = 1; arr.sort(); if (arr[arr.length - 1] > 0) { for (let i = 0; i < arr.length; i++) { if (arr[i] === minValue) { minValue = minValue + 1; } if (arr[i] > minValue) { break; } } } return minValue; } Tested it with the following sample data: console.log(solution([1, 3, 6, 4, 1, 2])); console.log(solution([1, 2, 3])); console.log(solution([-1, -3])); A: My solution: public int solution(int[] A) { int N = A.length; Set<Integer> set = new HashSet<>(); for (int a : A) { if (a > 0) { set.add(a); } } for (int index = 1; index <= N; index++) { if (!set.contains(index)) { return index; } } return N + 1; } A: In C#, bur need improvement. 
public int solution(int[] A) { int retorno = 0; for (int i = 0; i < A.Length; i++) { int ultimovalor = A[i] + 1; if (!A.Contains(ultimovalor)) { retorno = (ultimovalor); if (retorno <= 0) retorno = 1; } } return retorno; } A: // you can also use imports, for example: // import java.util.*; // you can write to stdout for debugging purposes, e.g. // System.out.println("this is a debug message"); class Solution { public int solution(int[] A) { int size=A.length; int min=A[0]; for(int i=1;i<=size;i++){ boolean found=false; for(int j=0;j<size;j++){ if(A[j]<min){min=A[j];} if(i==A[j]){ found=true; } } if(found==false){ return i; } } if(min<0){return 1;} return size+1; } } A: 100% result solution in Javascript: function solution(A) { let N,i=1; A = [...new Set(A)]; // remove duplicated numbers form the Array A = A.filter(x => x >= 1).sort((a, b) => a - b); // remove negative numbers & sort array while(!N){ // do not stop untel N get a value if(A[i-1] != i){N=i} i++; } return N; } A: Here is the code in python with comments to understand the code - Codility 100% Missing Integer Code- def solution(A): """ solution at https://app.codility.com/demo/results/trainingV4KX2W-3KS/ 100% idea is to take temp array of max length of array items for all positive use item of array as index and mark in tem array as 1 ie. present item traverse again temp array if first found value in tem array is zero that index is the smallest positive integer :param A: :return: """ max_value = max(A) if max_value < 1: # max is less than 1 ie. 1 is the smallest positive integer return 1 if len(A) == 1: # one element with 1 value if A[0] == 1: return 2 # one element other than 1 value return 1 # take array of length max value # this will work as set ie. using original array value as index for this array temp_max_len_array = [0] * max_value for i in range(len(A)): # do only for positive items if A[i] > 0: # check at index for the value in A if temp_max_len_array[A[i] - 1] != 1: # set that as 1 temp_max_len_array[A[i] - 1] = 1 print(temp_max_len_array) for i in range(len(temp_max_len_array)): # first zero ie. this index is the smallest positive integer if temp_max_len_array[i] == 0: return i + 1 # if no value found between 1 to max then last value should be smallest one return i + 2 arr = [2, 3, 6, 4, 1, 2] result = solution(arr) A: This is my solution. First we start with 1, we loop over the array and compare with 2 elements from the array, if it matches one of the element we increment by 1 and start the process all over again. 
private static int findSmallest(int max, int[] A) { if (A == null || A.length == 0) return max; for (int i = 0; i < A.length; i++) { if (i == A.length - 1) { if (max != A[i]) return max; else return max + 1; } else if (!checkIfUnique(max, A[i], A[i + 1])) return findSmallest(max + 1, A); } return max; } private static boolean checkIfUnique(int number, int n1, int n2) { return number != n1 && number != n2; } A: This is my solution written in ruby simple correct and efficient def solution(arr) sorted = arr.uniq.sort last = sorted.last return 1 unless last > 0 i = 1 sorted.each do |num| next unless num > 0 return i unless num == i i += 1 end i end A: Here's my JavaScript solution which scored 100% with O(N) or O(N * log(N)) detected time complexity: function solution(A) { let tmpArr = new Array(1); for (const currNum of A) { if (currNum > arr.length) { tmpArr.length = currNum; } tmpArr[currNum - 1] = true; } return (tmpArr.findIndex((element) => element === undefined) + 1) || (tmpArr.length + 1); } A: Java Solution - Inside de method solution int N = A.length; Set<Integer> set = new HashSet<>(); for (int a : A) { if (a > 0) { set.add(a); } } if(set.size()==0) { return N=1; } for (int i = 1; i <= N + 1; i++) { if (!set.contains(i)) { N= i; break; } } return N; A: Objective-C example: int solution(NSMutableArray *array) { NSArray* sortedArray = [array sortedArrayUsingSelector: @selector(compare:)]; int x = 1; for (NSNumber *number in sortedArray) { if (number.intValue < 0) { continue; } if (x < number.intValue){ return x; } x = number.intValue + 1; } return x; } A: function solution(A = []) { return (A && A .filter(num => num > 0) .sort((a, b) => a - b) .reduce((acc, curr, idx, arr) => !arr.includes(acc + 1) ? acc : curr, 0) ) + 1; } solution(); // 1 solution(null); // 1 solution([]); // 1 solution([0, 0, 0]); // 1 A: // Codility Interview Question Solved Using Javascript const newArray = []; //used in comparison to array with which the solution is required const solution = (number) => { //function with array parameter 'number' const largest = number.reduce((num1, num2) => { return num1 > num2 ? num1 : num2; /*finds the largest number in the array to be used as benchmark for the new array to be created*/ }); const range = 1 + largest; for ( let x = 1; x <= range; x++ //loop to populate new array with positive integers ) { if (x > range) { break; } newArray.push(x); } console.log("This is the number array: [" + number + "]"); //array passed console.log("This is the new array: [" + newArray + "]"); //array formed in relation to array passed const newerArray = newArray.filter((elements) => { //array formed frome unique values of 'newArray' return number.indexOf(elements) == -1; }); console.log( "These are numbers not present in the number array: " + newerArray ); const lowestInteger = newerArray.reduce((first, second) => { //newerArray reduced to its lowest possible element by finding the least element in the array return first < second ? first : second; }); console.log("The lowest positive integer is " + lowestInteger); }; solution([1, 2, 3, 4, 6]); //solution function to find the lowest possible integer invoked A: The code below will run in O(N) time and O(N) space complexity. Check this codility link for complete running report. The program first put all the values inside a HashMap meanwhile finding the max number in the array. The reason for doing this is to have only unique values in provided array and later check them in constant time. 
After this, another loop will run until the max found number and will return the first integer that is not present in the array. static int solution(int[] A) { int max = -1; HashMap<Integer, Boolean> dict = new HashMap<>(); for(int a : A) { if(dict.get(a) == null) { dict.put(a, Boolean.TRUE); } if(max<a) { max = a; } } for(int i = 1; i<max; i++) { if(dict.get(i) == null) { return i; } } return max>0 ? max+1 : 1; } A: This solution is in Javascript but complete the test with 100% score and less codes function solution(A) { let s = A.sort((a, b) => { return a - b }) let x = s.find(o => !s.includes(o+1) && o>0) return ((x<0) || !s.includes(1)) ? 1 : x+1 } A: in C# static int solutionA(int[] A) { Array.Sort(A); int minNumber = 1; if(A.Max() < 0) { return minNumber; } for (int i = 0; i < A.Length; i++) { if (A[i] == minNumber) { minNumber++; } if (A[i] > minNumber) { break; } } return minNumber; } 100% Test Pass https://i.stack.imgur.com/FvPR8.png A: Swift version using functions rather than an iterative approach 'The solution obtained perfect score' - Codility This solution uses functions rather than an iterative approach. So the solution relies heavily on the language's optimizations. A similar approach could be done in Java such as using Java's set operations and other functions. public func solution(_ A : inout [Int]) -> Int { let positives = A.filter{ $0 > 0} let max = positives.count <= 100_000 ? positives.count + 1 : 100_001 return Set(1...max).subtracting(A).min() ?? -1 } Obtained all positive numbers from the source array. Obtained all possible results based on the positive count. Limited the set to 100k as stated in the problem. Added 1 in case the source array was a complete sequence. Returned the minimum positive number after excluding the source array's elements from the set of all possible results. Note: The function declaration was from Codility and inout was unneeded. Returning an integer did not allow for nil so -1 was used. A: Rewrite the accepted answer with Swift. Hashset in Swift is Set. I think if index is required as return value then try to use Dictionary instead. Passed with 100% score. public func solution(_ A: [Int]) -> Int { let n = A.count var aSet = Set<Int>() for a in A { if a > 0 { aSet.insert(a) } } for i in 1...n+1 { if !aSet.contains(i) { return i } } return 1 } A: The below C++ solution obtained a 100% score. The theoretical complexity of the code is. Time Complexity : O(N) amortized due to hash set and Auxiliary Space complexity of O(N) due to use of hash for lookup in O(1) time. #include<iostream> #include<string> #include<vector> #include<climits> #include<cmath> #include<unordered_set> using namespace std; int solution(vector<int>& A) { if(!A.size()) return(1); unordered_set<int> hashSet; int maxItem=INT_MIN; for(const auto& item : A) { hashSet.insert(item); if(maxItem<item) maxItem=item; } if(maxItem<1) return(1); for(int i=1;i<=maxItem;++i) { if(hashSet.find(i)==hashSet.end()) return(i); } return(maxItem+1); } A: You could simply use this, which is a variant of Insertion-Sort, without the need of Set, or sorting the whole array. public static int solution(int[] A) { //we can choose only positive integers to enhance performance A = Arrays.stream(A).filter(i -> i > 0).toArray(); for(int i=1; i<A.length; i++){ int v1 = A[i]; int j = i-1; while (j > -1 && A[j] > v1) { A[j + 1] = A[j]; j = j - 1; } A[j + 1] = v1; if(A[i] - A[i-1] > 1){ return A[i] + 1; } } return 1; } A: Without sorting or extra memory. 
Time Complexity: O(N) public int solution(int[] A) { for (int i = 0; i < A.length; i++) { if (A[i] <= 0 || A[i] >= A.length) continue; int cur = A[i], point; while (cur > 0 && cur <= A.length && A[cur - 1] != cur) { point = A[cur - 1]; A[cur - 1] = cur; cur = point; if (cur < 0 || cur >= A.length) break; } } for (int i = 0; i < A.length; i++) { if (A[i] != i+1) return i+1; } return A.length + 1; } A: Below is my solution int[] A = {1,2,3}; Arrays.sort(A); Set<Integer> positiveSet = new HashSet<>(); for(int a: A) { if(a>0) { positiveSet.add(a); } } for(int a: A) { int res = a+1; if(!positiveSet.contains(res)) { System.out.println("result : "+res); break; } } A: I was thinking about another approach (JS, JavaScript) and got this results that score 66% because of performance issues: const properSet = A.filter((item) => { return item > 0 }); let biggestOutsideOfArray = Math.max(...properSet); if (biggestOutsideOfArray === -Infinity) { biggestOutsideOfArray = 1; } else { biggestOutsideOfArray += 1; } for (let i = 1; i <= biggestOutsideOfArray; i++) { if(properSet.includes(i) === false) { return i; } } } A: How about just playing around with loops. import java.util.Arrays; public class SmallestPositiveNumber { public int solution(int[] A) { Arrays.sort(A); int SPI = 1; if(A.length <= 1) return SPI; for(int i=0; i<A.length-1; i++){ if((A[i+1] - A[i]) > 1) { return A[i] + 1; } } return A[A.length-1]+1; } } A: Here is my solution on Java: public int solution(int[] A) { int smallestPositiveInteger = 1; Arrays.sort(A); if(A[0] <= 0) return smallestPositiveInteger; for( int i = 0; i < A.length; i++){ if(A[i] == smallestPositiveInteger) smallestPositiveInteger++; } return smallestPositiveInteger; } A: Shortest answer in Python. 100% def solution(A): return min([x for x in range(1,len(A) + 2) if x not in set(sorted([x for x in A if x>0 ]))]) A: A solution in kotlin using set. 
Space complexity: O(N) Time complexity: O(N) fun solutionUsingSet(A: IntArray): Int { val set = HashSet<Int>() A.forEach { if (it > 0) { set.add(it) } } // If input array has no positive numbers if (set.isEmpty()) { return 1 } for (i in 1 until A.size + 1) { if (!set.contains(i)) { return i } } // If input array has all numbers from 1 to N occurring once return A.size + 1 } A: PHP, passes 100% function solution ($A) { sort ($A); $smol = 1; foreach ($A as $a) { if ($a > 0) { if ($smol === $a) { $smol++; } else { if (($a + 1) < $smol) { $smol = ($a + 1); } } } } return $smol; } A: A = [-1, -3] A = [1, 4, -4, -2, 4, 7, 9] def solution(A): A.sort() A = list(set(A)) A = list(filter(lambda x: x > 0, A)) if len(A) == 0: return 1 x = min(A) if(max(A) <= 0 or x > 1): return 1 # print(A) for i in range(len(A)): increment = 1 if i+1 < len(A) else 0 payload = A[i+increment] #print(payload, x) if payload - x == 1: x = payload else: return x + 1 print(solution(A)) A: C# version class Solution { public int solution(int[] A) { var res=1; Array.Sort(A); for(int i=0; i< A.Length; i++) { if(res<A[i]) { break; } res= (A[i]>0?A[i]:0) + 1; } return res; } } A: this got 100% for C#: using System; using System.Collections.Generic; using System.Linq; public static int solution(int[] A) { List<int> lst = A.Select(r => r).Where(r => r > 0).OrderBy(r => r).Distinct().ToList(); Console.WriteLine(string.Join(", ", lst)); if (lst.Count == 0) return 1; for (int i = 0; i < lst.Count; i++) { if (lst[i] != i + 1) return i + 1; } return lst.Count + 1; } A: input : A: IntArray // Sort A to find the biggest number in the last position on the list A.sort() // create `indexed for loop` from `1` to the biggest number in the last position on the list for (int i = 1; i < A[A.length]; i++) { // check for current index (positive number) not found in the list if ((i in A).not()) println("Answe : $i") } A: The task asks for the minimal non-existing in the array A integer, which is greater than 0. I think this is the right solution: get the A into a set, to minimize the size by eliminate duplicates and getting the search in the set availability with Set.contains() method. Check the max value in the set. if it is smaller than 0, then return 1 (smallest integer, which is not contained in the set and larger than 0) If the max value is greater than 0, stream through the integers from 1 to max value and check if any of them is missing from the set, then return it. Here is the solution: public static int solution(int[] A) { Set<Integer> mySet = new HashSet<>(); Arrays.stream(A).forEach(mySet::add); int maxVal = Collections.max(mySet); return maxVal <=0 ? 
1 : IntStream.range(1, maxVal).filter(i -> !mySet.contains(i)).findFirst().orElse(1); } A: In PHP, this passed 100% correctness and performance function solution($A){ $min = 1; sort($A); for($i = 0 ; $i < count($A); $i++){ if($A[$i] == $min){ $min++; } } return $min; } A: Swift 5 Answer func getSmallestPositive(array: [Int]) -> Int { let positive = Array(Set(array)) let positiveArray = positive.filter({$0 > 0}).sorted() var initialNumber = 1 for number in 0..<positiveArray.count { let item = positiveArray[number] if item > initialNumber { return initialNumber } initialNumber = item + 1 } return initialNumber } Usage var array = [1, 3, 6, 4, 1, 2] let numbeFinal = getNumber(array: array) print(numbeFinal) A: This my solution in R Solution <- function(A){ if(max(A) < 1){ return(1) } B = 1:max(A) if(length(B[!B %in% A])==0){ return(max(A)+1) } else { return(min(B[!B %in% A])) } } Evaluate the function in sample vectors C = c(1,3,6,4,1,2) D = c(1,2,3) E = c(-1,-3) G = c(-1,-3,9) Solution(C) Solution(D) Solution(E) Solution(G) A: I have done it using Linq. Pure C# . Working fine for below inputs : //int[] abc = { 1,5,3,6,2,1}; // working //int[] abc = { 1,2,3}; -- working //int[] abc = { -1, -2, -3 }; -- working //int[] abc = { 10000, 99999, 12121 }; -- working //find the smallest positive missing no. Array.Sort(abc); int[] a = abc.Distinct().ToArray(); int output = 0; var minvalue = a[0]; var maxValue = a[a.Length - 1]; for (int index = minvalue; index < maxValue; index++) { if (!a.Contains(index)) { output = index; break; } } if (output == 0) { if (maxValue < 1) { output = 1; } else { output = maxValue + 1; } } Console.WriteLine(" Output :" + output); A: Swift public func solution(_ A : inout [Int]) -> Int { // write your code in Swift 4.2.1 (Linux) var smallestValue = 1 if A.count == 0 { return smallestValue } A.sort() if (A[0] > 1) || (A[A.count - 1] <= 0) { return smallestValue } for index in 0..<A.count { if A[index] == smallestValue { smallestValue = smallestValue + 1 } } return smallestValue } A: Ruby 2.2 solution. Total score 77% due to somewhat weak performance. def solution(a) positive_sorted = a.uniq.sort.select {|i| i > 0 } return 1 if (positive_sorted.empty? || positive_sorted.first > 1) positive_sorted.each do |i| res = i + 1 next if positive_sorted.include?(res) return res end end A: This code has been written in C The existing was in C++ and Java function solution(A) { // only positive values, sorted int i,j,small,large,value,temp,skip=0; int arr[N]; for(i=0;i<N;i++) { value = A[i]; for(j=i;j<N;j++) { if(value > A[j]) { temp = value; value = A[j]; A[j] = temp; } } arr[i] = value; } small = arr[0]; large = arr[N-1]; for(i=0;i<N;i++) { if(arr[i] == arr[i+1]) { for(j=i;j<(N);j++) { arr[j] = arr[j+1]; } skip++; } } if(large < 0) { return 1; } for(i=0;i<(N);i++) { if(arr[i] != (small+i)) { return (arr[i-1]+1); } else if(arr[i] == large) { return (large+1); } } } A: This is my 100% correct solution with swift 4 using Set and avoiding the use of sorted(). let setOfIntegers = Set<Int>(A) var max = 0 for integer in stride(from: 1, to: A.count + 1, by: 1) { max = integer > max ? 
integer : max if (!setOfIntegers.contains(integer)) { return integer } } return max + 1 A: for JavaScript let arr = [1, 3, 6, 4, 1, 2]; let arr1 = [1, 2, 3]; let arr2 = [-1, -3]; function solution(A) { let smallestInteger = 1; A.forEach(function (val) { if (A.includes(smallestInteger)) { if (val > 0) smallestInteger++; } }); return smallestInteger; } console.log(solution(arr)); console.log(solution(arr1)); console.log(solution(arr2)); A: The equivalent and simplest code in Python 3.6 def solution(A): ab = set(range(1,abs(max(A))+2)).difference(set(A)) return min(ab) Passed all test cases, Correctness test cases, and Performance test cases A: Algorithm for element element is array which is in range [1, N] and not in its correct place, swap this element with element of its correct place. Which ever index don’t match with its element is the answer. class Solution { public int firstMissingPositive(int[] nums) { int n = nums.length; for(int i = 0; i < n;) { if(nums[i] >= 1 && nums[i] <= n && nums[i] != nums[nums[i]-1]) { swap(nums,nums[i]-1,i); } else { i++; } } for(int i = 0; i < n; i++) { if(nums[i] != i+1) return i+1; } return n+1; } private void swap(int[] a, int i, int j) { int temp = a[i]; a[i] = a[j]; a[j] = temp; }} A: function solution(A) { const mylist = new Set(A); for(i=1;i<= mylist.size+1;i++){ if(!mylist.has(i)) { return i; } } } A: Try this code it works for me import java.util.*; class Solution { public static int solution(int[] A) { // write your code in Java SE 8 int m = Arrays.stream(A).max().getAsInt(); //Storing maximum value if (m < 1) // In case all values in our array are negative { return 1; } if (A.length == 1) { //If it contains only one element if (A[0] == 1) { return 2; } else { return 1; } } int min = A[0]; int max= A[0]; int sm = 1; HashSet<Integer> set = new HashSet<Integer>(); for(int i=0;i<A.length;i++){ set.add(A[i]); if(A[i]<min){ min = A[i]; } if(A[i]>max){ max = A[i]; } } if(min <= 0){ min = 1; } if(max <= 0){ max = 1; } boolean fnd = false; for(int i=min;i<=max;i++){ if(i>0 && !set.contains(i)){ sm = i; fnd = true; break; } else continue; } if(fnd) return sm; else return max +1; } public static void main(String args[]){ Scanner s=new Scanner(System.in); System.out.println("enter number of elements"); int n=s.nextInt(); int arr[]=new int[n]; System.out.println("enter elements"); for(int i=0;i<n;i++){//for reading array arr[i]=s.nextInt(); } int array[] = arr; // Calling getMax() method for getting max value int max = solution(array); System.out.println("Maximum Value is: "+max); } } A: The simplest way using while loop: fun solution(A: IntArray): Int { var value = 1 var find = false while(value < A.size) { val iterator = A.iterator() while (iterator.hasNext()) { if (value == iterator.nextInt()) { find = true value++ } } if (!find) { break } else { find = false } } return value } A: This is might help you, it should work fine! public static int sol(int[] A) { boolean flag =false; for(int i=1; i<=1000000;i++ ) { for(int j=0;j<A.length;j++) { if(A[j]==i) { flag = false; break; }else { flag = true; } } if(flag) { return i; } } return 1; }
Find the smallest positive integer that does not occur in a given sequence
I was trying to solve this problem:

Write a function:

class Solution { public int solution(int[] A); }

that, given an array A of N integers, returns the smallest positive integer (greater than 0) that does not occur in A.
For example, given A = [1, 3, 6, 4, 1, 2], the function should return 5.
Given A = [1, 2, 3], the function should return 4.
Given A = [−1, −3], the function should return 1.

Assume that:

N is an integer within the range [1..100,000];
each element of array A is an integer within the range [−1,000,000..1,000,000].

Complexity:

expected worst-case time complexity is O(N);
expected worst-case space complexity is O(N) (not counting the storage required for input arguments).

I wrote the solution below, which gives low performance; however, I can't see the bug.

public static int solution(int[] A) {
    Set<Integer> set = new TreeSet<>();
    for (int a : A) {
        set.add(a);
    }
    int N = set.size();
    int[] C = new int[N];
    int index = 0;
    for (int a : set) {
        C[index++] = a;
    }
    for (int i = 0; i < N; i++) {
        if (C[i] > 0 && C[i] <= N) {
            C[i] = 0;
        }
    }
    for (int i = 0; i < N; i++) {
        if (C[i] != 0) {
            return (i + 1);
        }
    }
    return (N + 1);
}

The score is provided here; I will keep investigating myself, but please let me know if you see a better approach.
[ "If the expected running time should be linear, you can't use a TreeSet, which sorts the input and therefore requires O(NlogN). Therefore you should use a HashSet, which requires O(N) time to add N elements.\nBesides, you don't need 4 loops. It's sufficient to add all the positive input elements to a HashSet (first loop) and then find the first positive integer not in that Set (second loop).\nint N = A.length;\nSet<Integer> set = new HashSet<>();\nfor (int a : A) {\n if (a > 0) {\n set.add(a);\n }\n}\nfor (int i = 1; i <= N + 1; i++) {\n if (!set.contains(i)) {\n return i;\n }\n}\n\n", "100% result solution in Javascript:\nfunction solution(A) {\n // only positive values, sorted\n A = A.filter(x => x >= 1).sort((a, b) => a - b)\n\n let x = 1\n\n for(let i = 0; i < A.length; i++) {\n // if we find a smaller number no need to continue, cause the array is sorted\n if(x < A[i]) {\n return x\n }\n x = A[i] + 1\n }\n\n return x\n}\n\n\n", "My code in Java, 100% result in Codility\nimport java.util.*;\n\nclass Solution {\n public int solution(int[] arr) {\n\n Arrays.sort(arr);\n\n int smallest = 1;\n\n for (int i = 0; i < arr.length; i++) {\n if (arr[i] == smallest) {\n smallest++;\n }\n }\n\n return smallest;\n }\n}\n\n", "JS:\n\nfilter to get positive non zero numbers from A array\nsort above filtered array in ascending order\nmap to iterate loop of above stored result\n\nif to check x is less than the current element then return\notherwise, add 1 in the current element and assign to x\n\n\n\n\n\nfunction solution(A) {\n\n let x = 1\n \n A.filter(x => x >= 1)\n .sort((a, b) => a - b)\n .map((val, i, arr) => {\n if(x < arr[i]) return\n x = arr[i] + 1\n })\n\n return x\n}\n\nconsole.log(solution([3, 4, -1, 1]));\nconsole.log(solution([1, 2, 0]));\n\n\n\n", "Here is an efficient python solution:\ndef solution(A):\n m = max(A)\n if m < 1:\n return 1\n\n A = set(A)\n B = set(range(1, m + 1))\n D = B - A\n if len(D) == 0:\n return m + 1\n else:\n return min(D)\n\n", "No need to store anything. No need for hashsets. (Extra memory), You can do it as you\nmove through the array. However, The array has to be sorted. And we know the very most minimum value is 1\nimport java.util.Arrays;\nclass Solution {\n public int solution(int[] A) {\n Arrays.sort(A); \n int min = 1; \n /*\n for efficiency — no need to calculate or access the \n array object’s length property per iteration \n */\n int cap = A.length; \n\n \n for (int i = 0; i < cap; i++){\n if(A[i] == min){\n min++;\n }\n /* \n can add else if A[i] > min, break; \n as suggested by punit\n */\n } \n /*\n min = ( min <= 0 ) ? 1:min; \n which means: if (min <= 0 ){\n min =1} else {min = min} \n you can also do: \n if min <1 for better efficiency/less jumps\n */\n return min; \n }\n}\n\n", "Here is my PHP solution, 100% Task Score, 100% correctness, and 100% performance. 
First we iterate and we store all positive elements, then we check if they exist,\nfunction solution($A) {\n\n $B = [];\n foreach($A as $a){ \n if($a > 0) $B[] = $a; \n }\n\n $i = 1;\n $last = 0;\n sort($B);\n\n foreach($B as $b){\n\n if($last == $b) $i--; // Check for repeated elements\n else if($i != $b) return $i;\n\n $i++;\n $last = $b; \n\n }\n\n return $i;\n}\n\nI think its one of the clears and simples functions here, the logic can be applied in all the other languages.\n", "For Swift 4\npublic func solution(_ A : inout [Int]) -> Int {\n let positive = A.filter { $0 > 0 }.sorted()\n var x = 1\n for val in positive {\n // if we find a smaller number no need to continue, cause the array is sorted\n if(x < val) {\n return x\n }\n x = val + 1\n }\n return x\n}\n\n", "I achieved 100% on this by the below solution in Python:-\ndef solution(A):\n a=frozenset(sorted(A))\n m=max(a)\n if m>0:\n for i in range(1,m):\n if i not in a:\n return i\n else:\n return m+1\n else:\n return 1\n\n", "In Kotlin with %100 score\nDetected time complexity: O(N) or O(N * log(N))\nfun solution(A: IntArray): Int {\n var min = 1\n val b = A.sortedArray()\n for (i in 0 until b.size) {\n if (b[i] == min) {\n min++\n }\n }\n return min\n}\n\n", "This solution is in c# but complete the test with 100% score\npublic int solution(int[] A) {\n // write your code in C# 6.0 with .NET 4.5 (Mono)\n var positives = A.Where(x => x > 0).Distinct().OrderBy(x => x).ToArray();\n if(positives.Count() == 0) return 1;\n int prev = 0;\n for(int i =0; i < positives.Count(); i++){\n\n if(positives[i] != prev + 1){\n return prev + 1;\n }\n prev = positives[i];\n }\n return positives.Last() + 1;\n}\n\n", "My answer in Ruby\ndef smallest_pos_integer(arr)\n sorted_array = arr.select {|x| x >= 1}.sort\n res = 1\n\n for i in (0..sorted_array.length - 1)\n if res < sorted_array[i]\n return res\n end\n res = sorted_array[i] + 1\n end\n res\nend\n\n", "This answer gives 100% in Python. Worst case complexity O(N).\nThe idea is that we do not care about negative numbers in the sequence, since we want to find the smallest positive integer not in the sequence A.\nHence we can set all negative numbers to zero and keep only the unique positive values. Then we check iteratively starting from 1 whether the number is in the set of positive values of sequence A.\nWorst case scenario, where the sequence is an arithmetic progression with constant difference 1, leads to iterating through all elements and thus O(N) complexity.\nIn the extreme case where all the elements of the sequence are negative (i.e. the maximum is negative) we can immediately return 1 as the minimum positive number.\ndef solution(A):\n max_A=max(A)\n B=set([a if a>=0 else 0 for a in A ])\n b=1\n if max_A<=0:\n return(1)\n else:\n while b in B:\n b+=1\n return(b)\n\n", "JavaScript ES6 Solution:\n\n\nfunction solution(A) {\n if (!A.includes(1)) return 1;\n return A.filter(a => a > 0)\n .sort((a, b) => a - b)\n .reduce((p, c) => c === p ? 
c + 1 : p, 1);\n}\nconsole.log(solution([1, 3, 6, 4, 1, 2]));\nconsole.log(solution([1, 2, 3]));\nconsole.log(solution([-1, -3]));\nconsole.log(solution([4, 5, 6]));\nconsole.log(solution([1, 2, 4]));\n\n\n\n", "I figured an easy way to do this was to use a BitSet.\n\njust add all the positive numbers to the BitSet.\nwhen finished, return the index of the first clear bit after bit 0.\n\npublic static int find(int[] arr) {\n BitSet b = new BitSet();\n for (int i : arr) {\n if (i > 0) {\n b.set(i);\n }\n }\n return b.nextClearBit(1);\n}\n\n", "Javascript solution:\nfunction solution(A) {\n A = [...new Set(A.sort( (a,b) => a-b))];\n\n // If the initial integer is greater than 1 or the last integer is less than 1\n if((A[0] > 1) || (A[A.length - 1] < 1)) return 1;\n\n for (let i in A) {\n let nextNum = A[+i+1];\n if(A[i] === nextNum) continue;\n if((nextNum - A[i]) !== 1) {\n if(A[i] < 0 ) {\n if(A.indexOf(1) !== -1) continue;\n return 1;\n }\n return A[i] + 1;\n }\n }\n}\n\n", "0. Introduction\nA) Languages allowed\nThe Codility skills assessment demo test allows for solutions\nwritten in 18 different languages: C, C++, C#, Go, Java 8, Java 11, JavaScript, Kotlin, Lua, Objective-C, Pascal, PHP, Perl, Python, Ruby, Scala, Swift 4, Visual Basic.\nB) Some remarks on your question\n\nI write the solution below which gives a low performance\n\nThere is no reason to worry about performance until you have a\ncorrect solution.\nAlways make sure the solution is correct before you even think about\nhow fast or slow your algorithm/code is!\n\nexpected worst-case time complexity is O(N)\n\nWell, as the asker of the question, it is your decision what\nrequirements should be met in an answer.\nBut if the goal is to score 100% in the Codility (performance) test,\nthen there is no need to demand O(N).\nThere are plenty of solutions in the answers here which are O(N log N)\nand not O(N), but still pass all 4 performance tests.\nThis proves that the O(N) requirement on time complexity is\nunnecessarily harsh (if the sole aim is to score 100% on the Codility\ntest).\nC) About the solutions presented here\nAll of the solutions presented here are either refactored versions of\nalready published answers, or inspired by such answers.\nAll solutions here score 100% in the Codility skills assessment\ndemo test.\n1\nI have striven to\n\nexplicitly reference each original answer/solution,\nprovide a runnable jdoodle link for each\nsolution,\nuse the same 8 tests (chosen by myself) for all the solutions,\nchoose solutions that score 100% (meaning 5 of 5 for correctness and\n4 of 4 for performance/speed),\nmake it easy to copy-paste the answers directly into the Codility\nskills assessment demo test,\nfocus on some of the most used languages.\n\n1. Java: the Codility test for correctness is incorrect (!)\nI will use one of the existing answers to demonstrate that the Codility\ntest for correctness is flawed for the edge case when the given array\nis empty.\nIn an empty array, the smallest positive missing integer is clearly 1.\nAgreed?\nBut the Codility test suite seems to accept just about any answer for\nthe empty array.\nIn the code below, I deliberately return -99 for the empty array,\nwhich is obviously incorrect.\nYet, Codility gives me a 100% test score for my flawed solution. 
(!)\nimport java.util.Arrays;\n\n/**\nhttps://app.codility.com/demo/take-sample-test 100%\nhttps://stackoverflow.com/a/57067307\nhttps://jdoodle.com/a/3B0D\nTo run the program in a terminal window:\n javac Solution.java && java Solution && rm Solution.class\nTerminal command to run the combined formatter/linter:\n java -jar ../../../checkstyle-8.45.1.jar -c ../../../google_checks.xml *.java\n*/\npublic class Solution {\n /** Returns the smallest positive integer missing in intArray. */\n public static int solution(int[] intArray) {\n if (intArray.length == 0) { // No elements at all.\n return -99; // So the smallest positive missing integer is 1.\n }\n Arrays.sort(intArray);\n // System.out.println(Arrays.toString(intArray)); // Temporarily uncomment?\n if (intArray[0] >= 2) { // Smallest positive int is 2 or larger.\n return 1; // Meaning smallest positive MISSING int is 1.\n }\n if (intArray[intArray.length - 1] <= 0) { // Biggest int is 0 or smaller.\n return 1; // Again, smallest positive missing int is 1.\n }\n int smallestPositiveMissing = 1;\n for (int i = 0; i < intArray.length; i++) {\n if (intArray[i] == smallestPositiveMissing) {\n smallestPositiveMissing++;\n } // ^^ Stop incrementing if intArray[i] > smallestPositiveMissing. ^^\n } // Because then the smallest positive missing integer has been found:\n return smallestPositiveMissing;\n }\n\n /** Demo examples. --> Expected output: 1 2 3 4 1 2 3 1 (but vertically). */\n public static void main(String[] args) {\n System.out.println(\"Hello Codility Demo Test for Java, B\");\n int[] array1 = {-1, -3};\n System.out.println(solution(array1));\n int[] array2 = {1, -1};\n System.out.println(solution(array2));\n int[] array3 = {2, 1, 2, 5};\n System.out.println(solution(array3));\n int[] array4 = {3, 1, -2, 2};\n System.out.println(solution(array4));\n int[] array5 = {};\n System.out.println(solution(array5));\n int[] array6 = {1, -5, -3};\n System.out.println(solution(array6));\n int[] array7 = {1, 2, 4, 5};\n System.out.println(solution(array7));\n int[] array8 = {17, 2};\n System.out.println(solution(array8));\n }\n}\n\nBelow is a screen dump of the result from the test.\nAs the solution is clearly wrong, of course it should not score 100%!\n2\n\n2. JavaScript\nBelow is a JavaScript solution.\nThis one has not been posted before, but is inspired by\none of the previous answers.\n\n\n/**\nhttps://app.codility.com/demo/take-sample-test 100%\n(c) Henke 2022 https://stackoverflow.com/users/9213345\nhttps://jdoodle.com/a/3AZG\nTo run the program in a terminal window:\n node CodilityDemoJS3.js\nTerminal command to run the combined formatter/linter:\n standard CodilityDemoJS3.js\nhttps://github.com/standard/standard\n*/\nfunction solution (A) {\n/// Returns the smallest positive integer missing in the array A.\n let smallestMissing = 1\n // In the following .reduce(), the only interest is in `smallestMissing`.\n // I arbitrarily return '-9' because I don't care about the return value.\n A.filter(x => x > 0).sort((a, b) => a - b).reduce((accumulator, item) => {\n if (smallestMissing < item) return -9 // Found before end of the array.\n smallestMissing = item + 1\n return -9 // Found at the end of the array.\n }, 1)\n return smallestMissing\n}\n// Demo examples. --> Expected output: 1 2 3 4 1 2 3 1 (but vertically).\n// Note! 
The following lines need to be left out when running the\n// Codility Demo Test at https://app.codility.com/demo/take-sample-test :\nconsole.log('Hello Codility Demo Test for JavaScript, 3.')\nconsole.log(solution([-1, -3]))\nconsole.log(solution([1, -1]))\nconsole.log(solution([2, 1, 2, 5]))\nconsole.log(solution([3, 1, -2, 2]))\nconsole.log(solution([]))\nconsole.log(solution([1, -5, -3]))\nconsole.log(solution([1, 2, 4, 5]))\nconsole.log(solution([17, 2]))\n.as-console-wrapper { max-height: 100% !important; top: 0; }\n\n\n\n3. Python\nPython has come to compete with Java as one of the most used\nprogramming languages worldwide.\nThe code below is a slightly rewritten version of this answer.\n#!/usr/bin/env python3\n'''\nhttps://app.codility.com/demo/take-sample-test 100%\nhttps://stackoverflow.com/a/58980724\nhttps://jdoodle.com/a/3B0k\nTo run the program in a terminal window:\n python codility_demo_python_a.py\nCommand in the terminal window to run the linter:\n py -m pylint codility_demo_python_a.py\nhttps://pypi.org/project/pylint/\nDito for autopep8 formatting:\n autopep8 codility_demo_python_a.py --in-place\nhttps://pypi.org/project/autopep8/\n'''\n\n\ndef solution(int_array):\n '''\n Returns the smallest positive integer missing in int_array.\n '''\n max_elem = max(int_array, default=0)\n if max_elem < 1:\n return 1\n int_array = set(int_array) # Reusing int_array although now a set\n # print(int_array) # <- Temporarily uncomment at line beginning\n all_ints = set(range(1, max_elem + 1))\n diff_set = all_ints - int_array\n if len(diff_set) == 0:\n return max_elem + 1\n return min(diff_set)\n\n\n# Demo examples. --> Expected output: 1 2 3 4 1 2 3 1 (but vertically).\n# Note! The following lines need to be commented out when running the\n# Codility Demo Test at https://app.codility.com/demo/take-sample-test :\nprint('Hello Codility Demo Test for Python3, a.')\nprint(solution([-1, -3]))\nprint(solution([1, -1]))\nprint(solution([2, 1, 2, 5]))\nprint(solution([3, 1, -2, 2]))\nprint(solution([]))\nprint(solution([1, -5, -3]))\nprint(solution([1, 2, 4, 5]))\nprint(solution([17, 2]))\n\n4. C#\nHere a solution for C#, inspired by a previous answer.\nusing System;\nusing System.Linq;\n/// https://app.codility.com/demo/take-sample-test 100%\n/// (c) 2021 Henke, https://stackoverflow.com/users/9213345\n/// https://jdoodle.com/a/3B0Z\n/// To initialize the program in a terminal window, only ONCE:\n/// dotnet new console -o codilityDemoC#-2 && cd codilityDemoC#-2\n/// To run the program in a terminal window:\n/// dotnet run && rm -rf obj && rm -rf bin\n/// Terminal command to run 'dotnet-format':\n/// dotnet-format --include DemoC#_2.cs && rm -rf obj && rm -rf bin\npublic class Solution {\n /// Returns the smallest positive integer missing in intArray.\n public int solution(int[] intArray) {\n var sortedSet =\n intArray.Where(x => x > 0).Distinct().OrderBy(x => x).ToArray();\n // Console.WriteLine(\"[\" + string.Join(\",\", sortedSet) + \"]\"); // Uncomment?\n if (sortedSet.Length == 0) return 1; // The set is empty.\n int smallestMissing = 1;\n for (int i = 0; i < sortedSet.Length; i++) {\n if (smallestMissing < sortedSet[i]) break; // The answer has been found.\n smallestMissing = sortedSet[i] + 1;\n } // Coming here means all of `sortedSet` had to be traversed.\n return smallestMissing;\n }\n\n /// Demo examples. --> Expected output: 1 2 3 4 1 2 3 1 (but vertically).\n /// NOTE! 
The code below must be removed before running the Codility test.\n static void Main(string[] args) {\n Console.WriteLine(\"Hello Codility Demo Test for C#, 2.\");\n int[] array1 = { -1, -3 };\n Console.WriteLine((new Solution()).solution(array1));\n int[] array2 = { 1, -1 };\n Console.WriteLine((new Solution()).solution(array2));\n int[] array3 = { 2, 1, 2, 5 };\n Console.WriteLine((new Solution()).solution(array3));\n int[] array4 = { 3, 1, -2, 2 };\n Console.WriteLine((new Solution()).solution(array4));\n int[] array5 = { };\n Console.WriteLine((new Solution()).solution(array5));\n int[] array6 = { 1, -5, -3 };\n Console.WriteLine((new Solution()).solution(array6));\n int[] array7 = { 1, 2, 4, 5 };\n Console.WriteLine((new Solution()).solution(array7));\n int[] array8 = { 17, 2 };\n Console.WriteLine((new Solution()).solution(array8));\n }\n}\n\n5. Swift\nHere is a solution for Swift, taken from this answer.\n/**\nhttps://app.codility.com/demo/take-sample-test 100%\nhttps://stackoverflow.com/a/57063839\nhttps://www.jdoodle.com/a/4ny5\n*/\npublic func solution(_ A : inout [Int]) -> Int {\n/// Returns the smallest positive integer missing in the array A.\n let positiveSortedInts = A.filter { $0 > 0 }.sorted()\n// print(positiveSortedInts) // <- Temporarily uncomment at line beginning\n var smallestMissingPositiveInt = 1\n for elem in positiveSortedInts{\n // if(elem > smallestMissingPositiveInt) then the answer has been found!\n if(elem > smallestMissingPositiveInt) { return smallestMissingPositiveInt }\n smallestMissingPositiveInt = elem + 1\n }\n return smallestMissingPositiveInt // This is if the whole array was traversed.\n}\n// Demo examples. --> Expected output: 1 2 3 4 1 2 3 1 (but vertically).\n// Note! The following lines need to be left out when running the\n// Codility Demo Test at https://app.codility.com/demo/take-sample-test :\nprint(\"Hello Codility Demo Test for Swift 4, A.\")\nvar array1 = [-1, -3]\nprint(solution(&array1))\nvar array2 = [1, -1]\nprint(solution(&array2))\nvar array3 = [2, 1, 2, 5]\nprint(solution(&array3))\nvar array4 = [3, 1, -2, 2]\nprint(solution(&array4))\nvar array5 = [] as [Int]\nprint(solution(&array5))\nvar array6 = [1, -5, -3]\nprint(solution(&array6))\nvar array7 = [1, 2, 4, 5]\nprint(solution(&array7))\nvar array8 = [17, 2]\nprint(solution(&array8))\n\n6. PHP\nHere a solution for PHP, taken from this answer.\n<?php\n/**\nhttps://app.codility.com/demo/take-sample-test 100%\nhttps://stackoverflow.com/a/60535808\nhttps://www.jdoodle.com/a/4nB0\n*/\nfunction solution($A) {\n $smallestMissingPositiveInt = 1;\n sort($A);\n foreach($A as $elem){\n if($elem <=0) continue;\n if($smallestMissingPositiveInt < $elem) return $smallestMissingPositiveInt;\n else $smallestMissingPositiveInt = $elem + 1; \n }\n return $smallestMissingPositiveInt;\n}\n// Demo examples. --> Expected output: 1 2 3 4 1 2 3 1 .\n// Note! The starting and ending PHP tags are needed when running\n// the code from the command line in a *.php file, but they and\n// the following lines need to be left out when running the Codility\n// Demo Test at https://app.codility.com/demo/take-sample-test :\necho \"Hello Codility Demo Test for PHP, 1.\\n\";\necho solution([-1, -3]) . \" \";\necho solution([1, -1]) . \" \";\necho solution([2, 1, 2, 5]) . \" \";\necho solution([3, 1, -2, 2]) . \" \";\necho solution([]) . \" \";\necho solution([1, -5, -3]) . \" \";\necho solution([1, 2, 4, 5]) . \" \";\necho solution([17, 2]) . 
\" \";\n?>\n\nReferences\n\nThe Codility skills assessment demo test\nThe LeetCode playground\nA correct Java solution\nA correct JavaScript solution\nA correct Python 3 solution\nA correct C# solution\nA correct Swift 4 solution\nA correct PHP solution\n\n\n\n1\nThis is true even for the first solution – the Java solution –\ndespite the the fact that this solution is wrong!\n2 You can try running the test yourself at\nhttps://app.codility.com/demo/take-sample-test.\nYou will have to sign up to do so.\nSimply copy-paste all of the code from the snippet.\nThe default is Java 8, so you won't need to change the language\nfor the first solution.\n\n", "My solution in JavaScript, using the reduce() method\nfunction solution(A) {\n // the smallest positive integer = 1\n if (!A.includes(1)) return 1;\n\n // greater than 1\n return A.reduce((accumulator, current) => {\n if (current <= 0) return accumulator\n const min = current + 1\n return !A.includes(min) && accumulator > min ? min : accumulator;\n }, 1000000)\n}\n\nconsole.log(solution([1, 2, 3])) // 4\nconsole.log(solution([5, 3, 2, 1, -1])) // 4\nconsole.log(solution([-1, -3])) // 1\nconsole.log(solution([2, 3, 4])) // 1\n\nhttps://codesandbox.io/s/the-smallest-positive-integer-zu4s2\n", "100% solution in Swift, I found it here, it is really beautiful than my algo... No need to turn array as ordered, instead using dictionary [Int: Bool] and just check the positive item in dictionary.\npublic func solution(_ A : inout [Int]) -> Int {\n var counter = [Int: Bool]()\n for i in A {\n counter[i] = true\n }\n\n var i = 1\n while true {\n if counter[i] == nil {\n return i\n } else {\n i += 1\n }\n }\n}\n\n", "JavaScript solution without sort, 100% score and O(N) runtime. It builds a hash set of the positive numbers while finding the max number.\nfunction solution(A) {\n set = new Set()\n let max = 0\n for (let i=0; i<A.length; i++) {\n if (A[i] > 0) {\n set.add(A[i])\n max = Math.max(max, A[i])\n }\n }\n\n for (let i=1; i<max; i++) {\n if (!set.has(i)) {\n return i\n }\n }\n return max+1\n}\n\n", "My solution having 100% result in codility with Swift 4.\nfunc solution(_ A : [Int]) -> Int {\n let positive = A.filter { $0 > 0 }.sorted()\n var x = 1\n for val in positive{\n if(x < val) {\n return x\n }\n x = val + 1\n }\n return x\n}\n\n", "My simple and (time) efficient Java solution:\nimport java.util.*;\n\nclass Solution {\n public int solution(int[] A) {\n Set<Integer> set=new TreeSet<>();\n for (int x:A) {\n if (x>0) {\n set.add(x);\n }\n }\n\n int y=1;\n Iterator<Integer> it=set.iterator();\n while (it.hasNext()) {\n int curr=it.next();\n if (curr!=y) {\n return y;\n }\n y++;\n }\n return y;\n }\n}\n\n", "Here is a simple and fast code in PHP.\n\nTask Score: 100%\nCorrectness: 100%\nPerformance: 100%\nDetected time complexity: O(N) or O(N * log(N))\n\nfunction solution($A) {\n \n $x = 1;\n \n sort($A);\n \n foreach($A as $i){\n \n if($i <=0) continue;\n \n if($x < $i) return $x;\n \n else $x = $i+1; \n \n }\n \n return $x;\n}\n\nPerformance tests\n", "First let me explain about the algorithm down below.\nIf the array contains no elements then return 1,\nThen in a loop check if the current element of the array is larger then the previous element by 2 then there is the first smallest missing integer, return it.\nIf the current element is consecutive to the previous element then the current smallest missing integer is the current integer + 1.\n Array.sort(A);\n\n if(A.Length == 0) return 1;\n\n int last = (A[0] < 1) ? 
0 : A[0];\n\n for (int i = 0; i < A.Length; i++)\n {\n if(A[i] > 0){\n if (A[i] - last > 1) return last + 1;\n else last = A[i];\n } \n }\n\n return last + 1;\n\n", "This my implementation in Swift 4 with 100% Score. It should be a pretty similar code in Java. Let me know what you think.\npublic func solution(_ A : inout [Int]) -> Int {\n let B = A.filter({ element in\n element > 0\n }).sorted()\n\n var result = 1\n for element in B {\n if element == result {\n result = result + 1\n } else if element > result {\n break\n }\n }\n\n return result\n}\n\n\n", "This is the solution in C#:\nusing System;\n// you can also use other imports, for example:\nusing System.Collections.Generic;\n\n// you can write to stdout for debugging purposes, e.g.\n// Console.WriteLine(\"this is a debug message\");\n\nclass Solution {\npublic int solution(int[] A) {\n // write your code in C# 6.0 with .NET 4.5 (Mono)\nint N = A.Length;\nHashSet<int> set =new HashSet<int>();\nforeach (int a in A) {\nif (a > 0) {\n set.Add(a);\n }\n}\nfor (int i = 1; i <= N + 1; i++) {\nif (!set.Contains(i)) {\n return i;\n }\n}\nreturn N;\n}\n}\n\n", "This is for C#, it uses HashSet and Linq queries and has 100% score on Codility\n public int solution(int[] A)\n {\n var val = new HashSet<int>(A).Where(x => x >= 1).OrderBy((y) =>y).ToArray();\n var minval = 1;\n for (int i = 0; i < val.Length; i++)\n {\n if (minval < val[i])\n {\n return minval;\n }\n minval = val[i] + 1;\n }\n\n return minval;\n }\n\n", "You're doing too much. You've create a TreeSet which is an order set of integers, then you've tried to turn that back into an array. Instead go through the list, and skip all negative values, then once you find positive values start counting the index. If the index is greater than the number, then the set has skipped a positive value.\nint index = 1;\nfor(int a: set){\n if(a>0){\n if(a>index){\n return index;\n } else{\n index++;\n }\n }\n}\nreturn index;\n\nUpdated for negative values.\nA different solution that is O(n) would be to use an array. 
This is like the hash solution.\nint N = A.length;\nint[] hashed = new int[N];\n\nfor( int i: A){\n if(i>0 && i<=N){\n hashed[i-1] = 1;\n }\n}\n\nfor(int i = 0; i<N; i++){\n if(hash[i]==0){\n return i+1;\n }\n}\nreturn N+1;\n\nThis could be further optimized counting down the upper limit for the second loop.\n", "For the space complexity of O(1) and time complexity of O(N) and if the array can be modified then it could be as follows:\npublic int getFirstSmallestPositiveNumber(int[] arr) {\n // set positions of non-positive or out of range elements as free (use 0 as marker)\n for (int i = 0; i < arr.length; i++) {\n if (arr[i] <= 0 || arr[i] > arr.length) {\n arr[i] = 0;\n }\n }\n\n //iterate through the whole array again mapping elements [1,n] to positions [0, n-1]\n for (int i = 0; i < arr.length; i++) {\n int prev = arr[i];\n // while elements are not on their correct positions keep putting them there\n while (prev > 0 && arr[prev - 1] != prev) {\n int next = arr[prev - 1];\n arr[prev - 1] = prev;\n prev = next;\n }\n }\n\n // now, the first unmapped position is the smallest element\n for (int i = 0; i < arr.length; i++) {\n if (arr[i] != i + 1) {\n return i + 1;\n }\n }\n return arr.length + 1;\n}\n\n@Test\npublic void testGetFirstSmallestPositiveNumber() {\n int[][] arrays = new int[][]{{1,-1,-5,-3,3,4,2,8},\n {5, 4, 3, 2, 1}, \n {0, 3, -2, -1, 1}};\n\n for (int i = 0; i < arrays.length; i++) {\n System.out.println(getFirstSmallestPositiveNumber(arrays[i]));\n }\n} \n\nOutput:\n\n5\n6\n2\n\n", "I find another solution to do it with additional storage, \n/*\n* if A = [-1,2] the solution works fine\n* */\npublic static int solution(int[] A) {\n\n int N = A.length;\n\n int[] C = new int[N];\n\n /*\n * Mark A[i] as visited by making A[A[i] - 1] negative\n * */\n for (int i = 0; i < N; i++) {\n\n /*\n * we need the absolute value for the duplicates\n * */\n int j = Math.abs(A[i]) - 1;\n\n if (j >= 0 && j < N && A[j] > 0) {\n C[j] = -A[j];\n }\n }\n\n for (int i = 0; i < N; i++) {\n\n if (C[i] == 0) {\n return i + 1;\n }\n }\n\n return N + 1;\n}\n\n", "//My recursive solution:\n\nclass Solution {\n public int solution(int[] A) {\n return next(1, A);\n }\n public int next(int b, int[] A) {\n for (int a : A){\n if (b==a)\n return next(++b, A);\n }\n return b;\n }\n}\n\n", "<JAVA> Try this code-\n\nprivate int solution(int[] A) {//Our original array\n\n int m = Arrays.stream(A).max().getAsInt(); //Storing maximum value\n if (m < 1) // In case all values in our array are negative\n {\n return 1;\n }\n if (A.length == 1) {\n\n //If it contains only one element\n if (A[0] == 1) {\n return 2;\n } else {\n return 1;\n }\n }\n int i = 0;\n int[] l = new int[m];\n for (i = 0; i < A.length; i++) {\n if (A[i] > 0) {\n if (l[A[i] - 1] != 1) //Changing the value status at the index of our list\n {\n l[A[i] - 1] = 1;\n }\n }\n }\n for (i = 0; i < l.length; i++) //Encountering first 0, i.e, the element with least value\n {\n if (l[i] == 0) {\n return i + 1;\n }\n }\n //In case all values are filled between 1 and m\n return i+1;\n }\nInput: {1,-1,0} , o/p: 2\nInput: {1,2,5,4,6}, o/p: 3\nInput: {-1,0,-2}, o/p: 1\n\n", "Here's my solution in C++. It got a 100% score (100% correctness, 100% performance) (after multiple tries ;)). It relies on the simple principle of comparing its values to their corresponding index (after a little preprocessing such as sorting). I agree that your solution is doing too much; You don't need four loops.\nThe steps of my solution are basically:\n\nSort and remove any duplicates. 
There are two possible methods here, the first one utilizing std::sort, std::unique, and erase, while the second one takes advantage of std::set and the fact that a set sorts itself and disallows duplicates\nHandle edge cases, of which there are quite a few (I missed these initially, causing my score to be quite low at first). The three edge cases are:\n\n\nAll ints in the original array were negative\nAll ints in the original array were positive and greater than 1\nThe original array had only 1 element in it\n\nFor every element, check if its value != its index+1. The first element for which this is true is where the smallest missing positive integer is. I.e. if vec.at(i) != i+1, then vec.at(i-1)+1 is the smallest missing positive integer.\nIf vec.at(i) != i+1 is false for all elements in the array, then there are no \"gaps\" in the array's sequence, and the smallest positive int is simply vec.back()+1 (the 4th edge case if you will).\n\nAnd the code:\nint solution(vector<int>& rawVec)\n{\n //Sort and remove duplicates: Method 1\n std::sort(rawVec.begin(), rawVec.end());\n rawVec.erase(std::unique(rawVec.begin(), rawVec.end()), rawVec.end());\n\n //Sort and remove duplicates: Method 2\n // std::set<int> s(rawVec.begin(), rawVec.end());\n // rawVec.assign(s.begin(), s.end());\n\n //Remove all ints < 1\n vector<int> vec;\n vec.reserve(rawVec.size());\n for(const auto& el : rawVec)\n {\n if(el>0)\n vec.push_back(el);\n }\n\n //Edge case: All ints were < 1 or all ints were > 1\n if(vec.size()==0 or vec.at(0) != 1)\n return 1;\n\n //Edge case: vector contains only one element\n if(vec.size()==1)\n return (vec.at(0)!=1 ? 1 : 2);\n\n for(int i=0; i<vec.size(); ++i)\n {\n if(vec.at(i) != i+1)\n return vec.at(i-1)+1;\n }\n return vec.back()+1;\n}\n\n", "This code has been writen in Java SE 8\nimport java.util.*;\n\npublic class Solution {\n public int solution(int[] A) { \n\n int smallestPositiveInt = 1; \n\n if(A.length == 0) {\n return smallestPositiveInt;\n }\n\n Arrays.sort(A);\n\n if(A[0] > 1) {\n return smallestPositiveInt;\n }\n\n if(A[A.length - 1] <= 0 ) {\n return smallestPositiveInt;\n }\n\n for(int x = 0; x < A.length; x++) {\n if(A[x] == smallestPositiveInt) { \n smallestPositiveInt++;\n } \n }\n\n return smallestPositiveInt;\n }\n}\n\n", "public static int solution(int[] A) {\n Arrays.sort(A);\n int minNumber = 1;\n int length = A.length - 1;\n int max = A[length];\n Set < Integer > set = new HashSet < > ();\n for (int i: A) {\n if (i > 0) {\n set.add(i);\n }\n }\n for (int j = 1; j <= max + 1; j++) {\n if (!set.contains(j)) {\n minNumber = j;\n break;\n }\n }\n return minNumber;\n}\n\n", "This solution runs in O(N) complexity and all the corner cases are covered.\n public int solution(int[] A) {\n Arrays.sort(A);\n //find the first positive integer\n int i = 0, len = A.length;\n while (i < len && A[i++] < 1) ;\n --i;\n\n //Check if minimum value 1 is present\n if (A[i] != 1)\n return 1;\n\n //Find the missing element\n int j = 1;\n while (i < len - 1) {\n if (j == A[i + 1]) i++;\n else if (++j != A[i + 1])\n return j;\n }\n\n // If we have reached the end of array, then increment out value\n if (j == A[len - 1])\n j++;\n return j;\n }\n\n", "Solution in Scala with all test cases running: \nClass Solution {\n\n def smallestNumber(arrayOfNumbers: Array[Int]) = {\n val filteredSet = arrayOfNumbers.foldLeft(HashSet.empty[Int]){(acc, next) \n => if(next > 0) acc.+(next) else acc}\n getSmallestNumber(filteredSet)\n\n }\n\n @tailrec\n private def getSmallestNumber(setOfNumbers: 
HashSet[Int], index: Int = 1): \n Int = {\n setOfNumbers match {\n case emptySet if(emptySet.isEmpty) => index\n case set => if(!set.contains(index)) index else getSmallestNumber(set, \n index + 1)\n }\n }\n\n}\n\n", "The code below is is simpler but my motive was to write for JavaScript, ES6 users:\nfunction solution(A) {\n \n let s = A.sort();\n let max = s[s.length-1];\n let r = 1;\n \n // here if we have an array with [1,2,3] it should print 4 for us so I added max + 2\n for(let i=1; i <= (max + 2); i++) {\n r = A.includes(i) ? 1 : i ;\n if(r>1) break;\n }\n \n return r;\n}\n\n\n", "My python solution 100% correctness\ndef solution(A):\n\n if max(A) < 1:\n return 1\n\n if len(A) == 1 and A[0] != 1:\n return 1\n\n s = set()\n for a in A:\n if a > 0:\n s.add(a)\n \n \n for i in range(1, len(A)):\n if i not in s:\n return i\n\n return len(s) + 1\n\nassert solution([1, 3, 6, 4, 1, 2]) == 5\nassert solution([1, 2, 3]) == 4\nassert solution([-1, -3]) == 1\nassert solution([-3,1,2]) == 3\nassert solution([1000]) == 1\n\n", "With 100% Accuracy on codility in Java\n public int solution(int[] A) {\n // write your code in Java SE 8\n\n Arrays.sort(A);\n int i=1;\n for (int i1 = 0; i1 < A.length; i1++) {\n if (A[i1] > 0 && i < A[i1]) {\n return i;\n }else if(A[i1]==i){\n ++i;\n }\n\n }\n return i;\n }\n\n\n", "My Javascript solution. This got 100%. The solution is to sort the array and compare the adjacent elements of the array.\nfunction solution(A) {\n// write your code in JavaScript (Node.js 8.9.4)\n A.sort((a, b) => a - b);\n\n if (A[0] > 1 || A[A.length - 1] < 0 || A.length === 2) return 1;\n\n for (let i = 1; i < A.length - 1; ++i) {\n if (A[i] > 0 && (A[i + 1] - A[i]) > 1) {\n return A[i] + 1;\n }\n }\n\n return A[A.length - 1] + 1;\n}\n\n", "this is my code in Kotlin:\nprivate fun soluction(A: IntArray): Int {\n val N = A.size\n val counter = IntArray(N + 1)\n\n for (i in A)\n if (i in 1..N)\n counter[i]++\n \n for (i in 1 until N + 1)\n if (counter[i] == 0)\n return i\n \n return N + 1\n}\n\n", "100% in Swift using recursive function. O(N) or O(N * log(N))\nvar promise = 1\n\npublic func solution(_ A : inout [Int]) -> Int {\n // write your code in Swift 4.2.1 (Linux)\n execute(A.sorted(), pos: 0)\n return promise\n\n}\n\nfunc execute(_ n:[Int], pos i:Int) {\n if n.count == i || n.count == 0 { return }\n if promise > 0 && n[i] > 0 && n[i]+1 < promise {\n promise = n[i] + 1\n } else if promise == n[i] {\n promise = n[i] + 1\n }\n execute(n, pos: i+1)\n}\n\n", "In C# you can write simple below code.\npublic int solution(int[] A) {\n int maxSize = 100000;\n int[] counter = new int[maxSize];\n\n foreach (var number in A)\n {\n if(number >0 && number <= maxSize)\n {\n counter[number - 1] = 1;\n }\n }\n\n for (int i = 0; i < maxSize; i++)\n {\n if (counter[i] == 0)\n return i + 1;\n }\n\n\n return maxSize + 1;\n}\n\n", "I used Python. First I sorted, then filtered corner cases, then removed elements <1. 
Now element and index are comparable.\ndef solution(A):\n A = sorted(set(A))\n if A[0]>1 or A[-1]<=0:\n return 1\n x = 0\n while x<len(A):\n if A[x]<=0:\n A.pop(x)\n x-=1\n x+=1\n for i, x in enumerate(A):\n if not x==i+1:\n return i+1\n return i+2\n\nCodility finds it okay..\n\n", "Go language : Find the smallest positive integer that does not occur in a given sequence\n\nGO language:\n\nsort above given array in ascending order\nmap to iterate loop of above stored result, if to check x is less than the current element then return, otherwise, add 1 in the current element and assign to x\n\n\npackage main\n \nimport (\n \"fmt\"\n \"sort\"\n)\n\nfunc Solution(nums []int) int {\n sort.Ints(nums)\n x := 1\n for _, val := range nums {\n if(x < val) {\n return x\n }\n if(val >= 0) {\n x = val + 1\n }\n }\n return x\n}\n\nfunc main() {\n\n a1 := []int{1, 3, 6, 4, 1, 2}\n fmt.Println(Solution(a1)) // Should return : 5\n\n a2 := []int{1, 2, 3}\n fmt.Println(Solution(a2)) // Should return : 4\n\n a3 := []int{-1, -3}\n fmt.Println(Solution(a3)) // Should return : 1\n\n a4 := []int{4, 5, 6}\n fmt.Println(Solution(a4)) // Should return : 1\n\n a5 := []int{1, 2, 4}\n fmt.Println(Solution(a5)) // Should return : 3\n\n}\n\n", "Late joining the conversation. Based on:\nhttps://codereview.stackexchange.com/a/179091/184415\n\nThere is indeed an O(n) complexity solution to this problem even if duplicate ints are involved in the input:\nsolution(A)\nFilter out non-positive values from A\nFor each int in filtered\n Let a zero-based index be the absolute value of the int - 1\n If the filtered range can be accessed by that index and filtered[index] is not negative\n Make the value in filtered[index] negative\n\nFor each index in filtered\n if filtered[index] is positive\n return the index + 1 (to one-based)\n\nIf none of the elements in filtered is positive\n return the length of filtered + 1 (to one-based)\n\nSo an array A = [1, 2, 3, 5, 6], would have the following transformations:\n\nabs(A[0]) = 1, to_0idx = 0, A[0] = 1, make_negative(A[0]), A = [-1, 2, 3, 5, 6]\nabs(A[1]) = 2, to_0idx = 1, A[1] = 2, make_negative(A[1]), A = [-1, -2, 3, 5, 6]\nabs(A[2]) = 3, to_0idx = 2, A[2] = 3, make_negative(A[2]), A = [-1, -2, -3, 5, 6]\nabs(A[3]) = 5, to_0idx = 4, A[4] = 6, make_negative(A[4]), A = [-1, -2, -3, 5, -6]\nabs(A[4]) = 6, to_0idx = 5, A[5] is inaccessible, A = [-1, -2, -3, 5, -6]\n\nA linear search for the first positive value returns an index of 3. 
Converting back to a one-based index results in solution(A)=3+1=4\n\nHere's an implementation of the suggested algorithm in C# (should be trivial to convert it over to Java lingo - cut me some slack common):\npublic int solution(int[] A)\n{\n var positivesOnlySet = A\n .Where(x => x > 0)\n .ToArray();\n\n if (!positivesOnlySet.Any())\n return 1;\n\n var totalCount = positivesOnlySet.Length;\n for (var i = 0; i < totalCount; i++) //O(n) complexity\n {\n var abs = Math.Abs(positivesOnlySet[i]) - 1;\n if (abs < totalCount && positivesOnlySet[abs] > 0) //notice the greater than zero check \n positivesOnlySet[abs] = -positivesOnlySet[abs];\n }\n\n for (var i = 0; i < totalCount; i++) //O(n) complexity\n {\n if (positivesOnlySet[i] > 0)\n return i + 1;\n }\n\n return totalCount + 1;\n}\n\n", "I think that using structures such as: sets or dicts to store unique values is not the better solution, because you end either looking for an element inside a loop which leads to O(N*N) complexity or using another loop to verify the missing value which leaves you with O(N) linear complexity but spending more time than just 1 loop.\nNeither using a counter array structure is optimal regarding storage space because you end up allocating MaxValue blocks of memory even when your array only has one item.\nSo I think the best solution uses just one for-loop, avoiding structures and also implementing conditions to stop iteration when it is not needed anymore:\npublic int solution(int[] A) {\n // write your code in Java SE 8\n int len = A.length;\n int min=1;\n \n Arrays.sort(A);\n\n if(A[len-1]>0)\n for(int i=0; i<len; i++){\n if(A[i]>0){\n if(A[i]==min) min=min+1;\n if(A[i]>min) break;\n }\n }\n return min;\n}\n\nThis way you will get complexity of O(N) or O(N * log(N)), so in the best case you are under O(N) complexity, when your array is already sorted\n", "package Consumer;\n\n\nimport java.util.Arrays;\nimport java.util.List;\nimport java.util.stream.Collectors;\n\npublic class codility {\npublic static void main(String a[])\n {\n int[] A = {1,9,8,7,6,4,2,3};\n int B[]= {-7,-5,-9};\n int C[] ={1,-2,3};\n int D[] ={1,2,3};\n int E[] = {-1};\n int F[] = {0};\n int G[] = {-1000000};\n System.out.println(getSmall(F));\n }\n public static int getSmall(int[] A)\n {\n int j=0;\n if(A.length < 1 || A.length > 100000) return -1;\n List<Integer> intList = Arrays.stream(A).boxed().sorted().collect(Collectors.toList());\n if(intList.get(0) < -1000000 || intList.get(intList.size()-1) > 1000000) return -1;\n if(intList.get(intList.size()-1) < 0) return 1;\n int count=0; \n for(int i=1; i<=intList.size();i++)\n {\n if(!intList.contains(i))return i;\n count++;\n }\n if(count==intList.size()) return ++count;\n return -1;\n } \n}\n\n", " public int solution(int[] A) {\n\n int res = 0;\n HashSet<Integer> list = new HashSet<>();\n\n for (int i : A) list.add(i);\n for (int i = 1; i < 1000000; i++) {\n if(!list.contains(i)){\n res = i;\n break;\n }\n }\n return res;\n}\n\n", "Python implementation of the solution. Get the set of the array - This ensures we have unique elements only. Then keep checking until the value is not present in the set - Print the next value as output and return it. \ndef solution(A):\n# write your code in Python 3.6\n a = set(A)\n i = 1\n while True:\n if i in A:\n i+=1\n else:\n return i\n return i\n pass\n\n", "Works 100%. 
tested with all the condition as described.\n//MissingInteger\n public int missingIntegerSolution(int[] A) {\n Arrays.sort(A);\n long sum = 0;\n for(int i=0; i<=A[A.length-1]; i++) {\n sum += i;\n }\n\n\n Set<Integer> mySet = Arrays.stream(A).boxed().collect(Collectors.toSet());\n Integer[] B = mySet.toArray(new Integer[0]);\n if(sum < 0)\n return 1;\n\n for(int i=0; i<B.length; i++) {\n sum -= B[i];\n }\n\n if(sum == 0) \n return A[A.length-1] + 1;\n else\n return Integer.parseInt(\"\"+sum);\n }\n\nint[] j = {1, 3, 6, 4, 1, 2,5};\n\nSystem.out.println(\"Missing Integer : \"+obj.missingIntegerSolution(j));\n\nOutput Missing Integer : 7\nint[] j = {1, 3, 6, 4, 1, 2};\nSystem.out.println(\"Missing Integer : \"+obj.missingIntegerSolution(j));\n\nOutput Missing Integer : 5\n", "For JavaScript i would do it this way:\nfunction solution(arr)\n{\n let minValue = 1;\n\n arr.sort();\n\n if (arr[arr.length - 1] > 0)\n {\n for (let i = 0; i < arr.length; i++)\n {\n if (arr[i] === minValue)\n {\n minValue = minValue + 1;\n }\n if (arr[i] > minValue)\n {\n break;\n }\n }\n }\n\n return minValue;\n}\n\nTested it with the following sample data:\nconsole.log(solution([1, 3, 6, 4, 1, 2]));\nconsole.log(solution([1, 2, 3]));\nconsole.log(solution([-1, -3]));\n\n", "My solution:\n public int solution(int[] A) {\n int N = A.length;\n Set<Integer> set = new HashSet<>();\n for (int a : A) {\n if (a > 0) {\n set.add(a);\n }\n }\n for (int index = 1; index <= N; index++) {\n if (!set.contains(index)) {\n return index;\n }\n }\n return N + 1;\n }\n\n", "In C#, bur need improvement.\n public int solution(int[] A) {\n int retorno = 0;\n for (int i = 0; i < A.Length; i++)\n {\n int ultimovalor = A[i] + 1;\n if (!A.Contains(ultimovalor))\n {\n retorno = (ultimovalor);\n if (retorno <= 0) retorno = 1;\n }\n }\n return retorno;\n }\n\n", "// you can also use imports, for example:\n// import java.util.*;\n\n// you can write to stdout for debugging purposes, e.g.\n// System.out.println(\"this is a debug message\");\n\nclass Solution {\n public int solution(int[] A) {\n int size=A.length;\n int min=A[0];\n for(int i=1;i<=size;i++){\n boolean found=false;\n for(int j=0;j<size;j++){\n if(A[j]<min){min=A[j];}\n if(i==A[j]){\n found=true;\n }\n }\n if(found==false){\n return i;\n\n }\n }\n\n if(min<0){return 1;}\n return size+1;\n\n\n\n }\n}\n\n", "100% result solution in Javascript:\nfunction solution(A) {\n let N,i=1;\n A = [...new Set(A)]; // remove duplicated numbers form the Array\n A = A.filter(x => x >= 1).sort((a, b) => a - b); // remove negative numbers & sort array\n while(!N){ // do not stop untel N get a value\n if(A[i-1] != i){N=i} \n i++;\n }\n return N;\n}\n\n", "Here is the code in python with comments to understand the code -\nCodility 100% Missing Integer\nCode-\ndef solution(A):\n\"\"\"\nsolution at https://app.codility.com/demo/results/trainingV4KX2W-3KS/\n100%\nidea is to take temp array of max length of array items\nfor all positive use item of array as index and mark in tem array as 1 ie. present item\ntraverse again temp array if first found value in tem array is zero that index is the smallest positive integer\n:param A:\n:return:\n\"\"\"\nmax_value = max(A)\nif max_value < 1:\n # max is less than 1 ie. 1 is the smallest positive integer\n return 1\nif len(A) == 1:\n # one element with 1 value\n if A[0] == 1:\n return 2\n # one element other than 1 value\n return 1\n# take array of length max value\n# this will work as set ie. 
using original array value as index for this array\ntemp_max_len_array = [0] * max_value\nfor i in range(len(A)):\n # do only for positive items\n if A[i] > 0:\n # check at index for the value in A\n if temp_max_len_array[A[i] - 1] != 1:\n # set that as 1\n temp_max_len_array[A[i] - 1] = 1\nprint(temp_max_len_array)\nfor i in range(len(temp_max_len_array)):\n # first zero ie. this index is the smallest positive integer\n if temp_max_len_array[i] == 0:\n return i + 1\n# if no value found between 1 to max then last value should be smallest one\nreturn i + 2\n\n\n arr = [2, 3, 6, 4, 1, 2] \n result = solution(arr)\n\n", "This is my solution. First we start with 1, we loop over the array and compare with 2 elements from the array, if it matches one of the element we increment by 1 and start the process all over again.\nprivate static int findSmallest(int max, int[] A) {\n\n if (A == null || A.length == 0)\n return max;\n\n for (int i = 0; i < A.length; i++) {\n if (i == A.length - 1) {\n if (max != A[i])\n return max;\n else\n return max + 1;\n } else if (!checkIfUnique(max, A[i], A[i + 1]))\n return findSmallest(max + 1, A);\n }\n return max;\n}\n\n\nprivate static boolean checkIfUnique(int number, int n1, int n2) {\n return number != n1 && number != n2;\n}\n\n", "This is my solution written in ruby simple correct and efficient\n\ndef solution(arr)\n\n sorted = arr.uniq.sort\n last = sorted.last\n return 1 unless last > 0\n\n i = 1\n sorted.each do |num|\n next unless num > 0\n return i unless num == i\n\n i += 1\n end\n i\n\nend\n\n\n", "Here's my JavaScript solution which scored 100% with O(N) or O(N * log(N)) detected time complexity:\nfunction solution(A) {\n let tmpArr = new Array(1);\n\n for (const currNum of A) {\n if (currNum > arr.length) {\n tmpArr.length = currNum;\n }\n tmpArr[currNum - 1] = true;\n }\n\n return (tmpArr.findIndex((element) => element === undefined) + 1) || (tmpArr.length + 1);\n}\n\n", "Java Solution - \nInside de method solution \nint N = A.length;\n\nSet<Integer> set = new HashSet<>();\nfor (int a : A) {\n if (a > 0) {\n set.add(a);\n }\n}\n\nif(set.size()==0) {\n return N=1;\n}\n\nfor (int i = 1; i <= N + 1; i++) {\n if (!set.contains(i)) {\n N= i;\n break;\n }\n}\nreturn N;\n\n", "Objective-C example:\n int solution(NSMutableArray *array) {\n NSArray* sortedArray = [array sortedArrayUsingSelector: @selector(compare:)];\n int x = 1;\n for (NSNumber *number in sortedArray) {\n\n if (number.intValue < 0) {\n continue;\n }\n if (x < number.intValue){\n return x;\n }\n x = number.intValue + 1;\n }\n\n return x;\n}\n\n", "function solution(A = []) {\n return (A && A\n .filter(num => num > 0)\n .sort((a, b) => a - b)\n .reduce((acc, curr, idx, arr) => \n !arr.includes(acc + 1) ? acc : curr, 0)\n ) + 1;\n}\n\nsolution(); // 1\nsolution(null); // 1\nsolution([]); // 1\nsolution([0, 0, 0]); // 1\n", "\n\n// Codility Interview Question Solved Using Javascript\n\nconst newArray = []; //used in comparison to array with which the solution is required\n\nconst solution = (number) => {\n //function with array parameter 'number'\n const largest = number.reduce((num1, num2) => {\n return num1 > num2\n ? 
num1\n : num2; /*finds the largest number in the array to\n be used as benchmark for the new array to\n be created*/\n });\n\n const range = 1 + largest;\n for (\n let x = 1;\n x <= range;\n x++ //loop to populate new array with positive integers\n ) {\n if (x > range) {\n break;\n }\n newArray.push(x);\n }\n console.log(\"This is the number array: [\" + number + \"]\"); //array passed\n console.log(\"This is the new array: [\" + newArray + \"]\"); //array formed in relation to array passed\n const newerArray = newArray.filter((elements) => {\n //array formed frome unique values of 'newArray'\n return number.indexOf(elements) == -1;\n });\n console.log(\n \"These are numbers not present in the number array: \" + newerArray\n );\n\n const lowestInteger = newerArray.reduce((first, second) => {\n //newerArray reduced to its lowest possible element by finding the least element in the array\n return first < second ? first : second;\n });\n console.log(\"The lowest positive integer is \" + lowestInteger);\n};\nsolution([1, 2, 3, 4, 6]); //solution function to find the lowest possible integer invoked\n\n\n\n", "The code below will run in O(N) time and O(N) space complexity. Check this codility link for complete running report.\nThe program first put all the values inside a HashMap meanwhile finding the max number in the array. The reason for doing this is to have only unique values in provided array and later check them in constant time. After this, another loop will run until the max found number and will return the first integer that is not present in the array.\n static int solution(int[] A) {\n int max = -1;\n HashMap<Integer, Boolean> dict = new HashMap<>();\n for(int a : A) {\n if(dict.get(a) == null) {\n dict.put(a, Boolean.TRUE);\n }\n if(max<a) {\n max = a;\n }\n }\n for(int i = 1; i<max; i++) {\n if(dict.get(i) == null) {\n return i;\n }\n }\n return max>0 ? max+1 : 1;\n }\n\n", "This solution is in Javascript but complete the test with 100% score and less codes\nfunction solution(A) {\n let s = A.sort((a, b) => { return a - b })\n let x = s.find(o => !s.includes(o+1) && o>0)\n return ((x<0) || !s.includes(1)) ? 1 : x+1\n}\n\n", "in C#\nstatic int solutionA(int[] A)\n {\n Array.Sort(A);\n\n int minNumber = 1;\n\n if(A.Max() < 0)\n {\n return minNumber;\n }\n\n for (int i = 0; i < A.Length; i++)\n {\n if (A[i] == minNumber)\n {\n minNumber++;\n }\n \n if (A[i] > minNumber)\n {\n break;\n }\n }\n\n return minNumber;\n }\n\n100% Test Pass https://i.stack.imgur.com/FvPR8.png\n", "Swift version using functions rather than an iterative approach\n'The solution obtained perfect score' - Codility\n\nThis solution uses functions rather than an iterative approach. So the solution relies heavily on the language's optimizations. A similar approach could be done in Java such as using Java's set operations and other functions.\npublic func solution(_ A : inout [Int]) -> Int {\n let positives = A.filter{ $0 > 0}\n let max = positives.count <= 100_000 ? positives.count + 1 : 100_001\n return Set(1...max).subtracting(A).min() ?? -1\n}\n\n\nObtained all positive numbers from the source array.\nObtained all possible results based on the positive count. Limited the set to 100k as stated in the problem. Added 1 in case the source array was a complete sequence.\nReturned the minimum positive number after excluding the source array's\nelements from the set of all possible results.\n\nNote: The function declaration was from Codility and inout was unneeded. 
Returning an integer did not allow for nil so -1 was used.\n", "Rewrite the accepted answer with Swift. Hashset in Swift is Set. I think if index is required as return value then try to use Dictionary instead.\nPassed with 100% score.\npublic func solution(_ A: [Int]) -> Int {\n let n = A.count\n var aSet = Set<Int>()\n \n for a in A {\n if a > 0 {\n aSet.insert(a)\n }\n }\n \n for i in 1...n+1 {\n if !aSet.contains(i) {\n return i\n }\n }\n \n return 1\n}\n\n", "The below C++ solution obtained a 100% score. The theoretical complexity of the code is. Time Complexity : O(N) amortized due to hash set and Auxiliary Space complexity of O(N) due to use of hash for lookup in O(1) time.\n#include<iostream>\n#include<string>\n#include<vector>\n#include<climits>\n#include<cmath>\n#include<unordered_set>\n\nusing namespace std;\n\nint solution(vector<int>& A)\n{\n if(!A.size())\n return(1);\n\n unordered_set<int> hashSet;\n int maxItem=INT_MIN;\n for(const auto& item : A)\n {\n hashSet.insert(item);\n if(maxItem<item)\n maxItem=item;\n }\n\n if(maxItem<1)\n return(1);\n\n for(int i=1;i<=maxItem;++i)\n {\n if(hashSet.find(i)==hashSet.end())\n return(i);\n }\n return(maxItem+1);\n}\n\n", "You could simply use this, which is a variant of Insertion-Sort, without the need of Set, or sorting the whole array.\n public static int solution(int[] A) {\n //we can choose only positive integers to enhance performance\n A = Arrays.stream(A).filter(i -> i > 0).toArray();\n for(int i=1; i<A.length; i++){\n int v1 = A[i];\n int j = i-1;\n while (j > -1 && A[j] > v1) {\n A[j + 1] = A[j];\n j = j - 1;\n }\n A[j + 1] = v1;\n if(A[i] - A[i-1] > 1){\n return A[i] + 1;\n }\n }\n return 1;\n }\n\n", "Without sorting or extra memory.\nTime Complexity: O(N)\n public int solution(int[] A) {\n for (int i = 0; i < A.length; i++) {\n if (A[i] <= 0 || A[i] >= A.length) continue;\n int cur = A[i], point;\n while (cur > 0 && cur <= A.length && A[cur - 1] != cur) {\n point = A[cur - 1];\n A[cur - 1] = cur;\n cur = point;\n if (cur < 0 || cur >= A.length) break;\n }\n }\n for (int i = 0; i < A.length; i++) {\n if (A[i] != i+1) return i+1;\n }\n return A.length + 1;\n }\n\n", "Below is my solution\nint[] A = {1,2,3};\nArrays.sort(A);\nSet<Integer> positiveSet = new HashSet<>();\nfor(int a: A) {\n if(a>0) {\n positiveSet.add(a);\n }\n}\nfor(int a: A) {\n int res = a+1;\n if(!positiveSet.contains(res)) {\n System.out.println(\"result : \"+res);\n break;\n }\n}\n\n", "I was thinking about another approach (JS, JavaScript) and got this results that score 66% because of performance issues:\n const properSet = A.filter((item) => { return item > 0 });\n let biggestOutsideOfArray = Math.max(...properSet);\n \n if (biggestOutsideOfArray === -Infinity) {\n biggestOutsideOfArray = 1;\n } else {\n biggestOutsideOfArray += 1;\n }\n \n for (let i = 1; i <= biggestOutsideOfArray; i++) {\n if(properSet.includes(i) === false) {\n return i;\n }\n } \n}\n\n", "How about just playing around with loops.\nimport java.util.Arrays;\npublic class SmallestPositiveNumber {\n\n public int solution(int[] A) {\n Arrays.sort(A);\n int SPI = 1;\n\n if(A.length <= 1) return SPI;\n\n for(int i=0; i<A.length-1; i++){\n\n if((A[i+1] - A[i]) > 1)\n {\n return A[i] + 1;\n }\n }\n return A[A.length-1]+1;\n }\n}\n\n", "Here is my solution on Java:\npublic int solution(int[] A) {\n\n int smallestPositiveInteger = 1;\n \n Arrays.sort(A);\n\n if(A[0] <= 0) return smallestPositiveInteger;\n\n for( int i = 0; i < A.length; i++){\n if(A[i] == smallestPositiveInteger)\n 
smallestPositiveInteger++;\n } \n\n return smallestPositiveInteger;\n}\n\n", "Shortest answer in Python. 100%\ndef solution(A):\n return min([x for x in range(1,len(A) + 2) if x not in set(sorted([x for x in A if x>0 ]))])\n\n", "A solution in kotlin using set.\nSpace complexity: O(N)\nTime complexity: O(N)\nfun solutionUsingSet(A: IntArray): Int {\n val set = HashSet<Int>()\n A.forEach {\n if (it > 0) {\n set.add(it)\n }\n }\n // If input array has no positive numbers\n if (set.isEmpty()) {\n return 1\n }\n for (i in 1 until A.size + 1) {\n if (!set.contains(i)) {\n return i\n }\n }\n // If input array has all numbers from 1 to N occurring once\n return A.size + 1\n}\n\n", "PHP, passes 100%\nfunction solution ($A) {\n\n sort ($A);\n $smol = 1;\n \n foreach ($A as $a) {\n if ($a > 0) {\n if ($smol === $a) {\n $smol++;\n } else {\n if (($a + 1) < $smol) {\n $smol = ($a + 1);\n }\n }\n }\n }\n\n return $smol;\n}\n\n", "A = [-1, -3]\nA = [1, 4, -4, -2, 4, 7, 9]\n\ndef solution(A):\n A.sort()\n A = list(set(A))\n A = list(filter(lambda x: x > 0, A))\n\n if len(A) == 0:\n return 1\n\n x = min(A)\n if(max(A) <= 0 or x > 1):\n return 1\n\n # print(A)\n for i in range(len(A)):\n increment = 1 if i+1 < len(A) else 0\n payload = A[i+increment]\n #print(payload, x)\n if payload - x == 1:\n x = payload\n else:\n return x + 1\n print(solution(A))\n\n\n", "C# version\nclass Solution {\npublic int solution(int[] A) {\n var res=1;\n Array.Sort(A);\n for(int i=0; i< A.Length; i++)\n {\n if(res<A[i])\n {\n break;\n }\n\n res= (A[i]>0?A[i]:0) + 1;\n }\n return res;\n} \n\n}\n", "this got 100% for C#:\nusing System;\nusing System.Collections.Generic;\nusing System.Linq; \npublic static int solution(int[] A)\n {\n List<int> lst = A.Select(r => r).Where(r => r > 0).OrderBy(r => r).Distinct().ToList();\n\n Console.WriteLine(string.Join(\", \", lst));\n \n if (lst.Count == 0)\n return 1;\n\n for (int i = 0; i < lst.Count; i++)\n {\n if (lst[i] != i + 1)\n return i + 1;\n }\n return lst.Count + 1;\n }\n\n", "input : A: IntArray\n// Sort A to find the biggest number in the last position on the list\nA.sort()\n// create `indexed for loop` from `1` to the biggest number in the last position on the list\nfor (int i = 1; i < A[A.length]; i++) {\n // check for current index (positive number) not found in the list\n if ((i in A).not()) println(\"Answe : $i\")\n}\n\n", "The task asks for the minimal non-existing in the array A integer, which is greater than 0.\nI think this is the right solution:\n\nget the A into a set, to minimize the size by eliminate duplicates and getting the search in the set availability with Set.contains() method.\nCheck the max value in the set. if it is smaller than 0, then return 1 (smallest integer, which is not contained in the set and larger than 0)\nIf the max value is greater than 0, stream through the integers from 1 to max value and check if any of them is missing from the set, then return it.\n\nHere is the solution:\npublic static int solution(int[] A) {\n Set<Integer> mySet = new HashSet<>();\n Arrays.stream(A).forEach(mySet::add);\n int maxVal = Collections.max(mySet);\n return maxVal <=0 ? 
1 :\n IntStream.range(1, maxVal).filter(i -> !mySet.contains(i)).findFirst().orElse(1);\n}\n\n", "In PHP, this passed 100% correctness and performance\nfunction solution($A){ \n $min = 1;\n sort($A); \n for($i = 0 ; $i < count($A); $i++){\n if($A[$i] == $min){\n $min++;\n }\n }\n return $min; \n} \n\n", "Swift 5 Answer\nfunc getSmallestPositive(array: [Int]) -> Int {\n\n let positive = Array(Set(array))\n let positiveArray = positive.filter({$0 > 0}).sorted()\n var initialNumber = 1\n for number in 0..<positiveArray.count {\n let item = positiveArray[number]\n if item > initialNumber {\n return initialNumber\n }\n initialNumber = item + 1\n }\n return initialNumber\n}\n\nUsage\n var array = [1, 3, 6, 4, 1, 2]\n let numbeFinal = getNumber(array: array)\n print(numbeFinal)\n\n", "This my solution in R\nSolution <- function(A){\n if(max(A) < 1){\n return(1)\n }\n B = 1:max(A)\n if(length(B[!B %in% A])==0){\n return(max(A)+1)\n }\n else {\n return(min(B[!B %in% A]))\n }\n}\n\nEvaluate the function in sample vectors\nC = c(1,3,6,4,1,2)\nD = c(1,2,3)\nE = c(-1,-3)\nG = c(-1,-3,9)\n\nSolution(C)\nSolution(D)\nSolution(E)\nSolution(G)\n\n", "I have done it using Linq. Pure C# . Working fine for below inputs :\n//int[] abc = { 1,5,3,6,2,1}; // working\n//int[] abc = { 1,2,3}; -- working\n//int[] abc = { -1, -2, -3 }; -- working\n//int[] abc = { 10000, 99999, 12121 }; -- working\n\n //find the smallest positive missing no.\n\n Array.Sort(abc);\n int[] a = abc.Distinct().ToArray();\n int output = 0;\n\n var minvalue = a[0];\n var maxValue = a[a.Length - 1];\n\n for (int index = minvalue; index < maxValue; index++)\n {\n if (!a.Contains(index))\n {\n output = index;\n break;\n }\n } \n\n if (output == 0)\n {\n if (maxValue < 1)\n {\n output = 1;\n }\n else\n {\n output = maxValue + 1;\n }\n }\n\n Console.WriteLine(\" Output :\" + output);\n\n", "Swift\npublic func solution(_ A : inout [Int]) -> Int {\n // write your code in Swift 4.2.1 (Linux)\n\n var smallestValue = 1\n if A.count == 0 {\n return smallestValue\n }\n A.sort()\n if (A[0] > 1) || (A[A.count - 1] <= 0) {\n return smallestValue\n }\n for index in 0..<A.count {\n if A[index] == smallestValue {\n smallestValue = smallestValue + 1\n }\n }\n return smallestValue\n}\n\n", "Ruby 2.2 solution. Total score 77% due to somewhat weak performance.\ndef solution(a)\n positive_sorted = a.uniq.sort.select {|i| i > 0 }\n return 1 if (positive_sorted.empty? || positive_sorted.first > 1)\n\n positive_sorted.each do |i|\n res = i + 1\n next if positive_sorted.include?(res)\n return res\n end\nend\n\n", "\nThis code has been written in C\nThe existing was in C++ and Java\n\nfunction solution(A) {\n // only positive values, sorted\n int i,j,small,large,value,temp,skip=0;\n int arr[N];\n for(i=0;i<N;i++)\n {\n value = A[i];\n for(j=i;j<N;j++)\n {\n if(value > A[j])\n {\n temp = value;\n value = A[j];\n A[j] = temp;\n } \n }\n \n arr[i] = value;\n }\n small = arr[0];\n large = arr[N-1];\n for(i=0;i<N;i++)\n {\n if(arr[i] == arr[i+1])\n { \n for(j=i;j<(N);j++)\n {\n arr[j] = arr[j+1];\n }\n skip++; \n }\n \n }\n if(large < 0)\n {\n return 1;\n }\n for(i=0;i<(N);i++)\n { \n if(arr[i] != (small+i))\n {\n return (arr[i-1]+1);\n }\n else if(arr[i] == large)\n {\n return (large+1);\n }\n }\n}\n\n", "This is my 100% correct solution with swift 4 using Set and avoiding the use of sorted().\n let setOfIntegers = Set<Int>(A)\n var max = 0\n for integer in stride(from: 1, to: A.count + 1, by: 1) {\n max = integer > max ? 
integer : max\n if (!setOfIntegers.contains(integer)) {\n return integer\n }\n }\n \n return max + 1\n\n", "for JavaScript\nlet arr = [1, 3, 6, 4, 1, 2];\nlet arr1 = [1, 2, 3];\nlet arr2 = [-1, -3];\n\nfunction solution(A) {\n let smallestInteger = 1;\n\n A.forEach(function (val) {\n if (A.includes(smallestInteger)) {\n if (val > 0) smallestInteger++;\n }\n });\n\n return smallestInteger;\n}\n\nconsole.log(solution(arr));\nconsole.log(solution(arr1));\nconsole.log(solution(arr2));\n\n", "The equivalent and simplest code in Python 3.6\ndef solution(A):\n ab = set(range(1,abs(max(A))+2)).difference(set(A))\n return min(ab)\n\nPassed all test cases, Correctness test cases, and Performance test cases\n\n", "Algorithm\nfor element element is array which is in range [1, N] and not in its correct place, swap this element with element of its correct place.\nWhich ever index don’t match with its element is the answer.\nclass Solution {\npublic int firstMissingPositive(int[] nums) {\n int n = nums.length;\n for(int i = 0; i < n;) {\n if(nums[i] >= 1 && nums[i] <= n && nums[i] != nums[nums[i]-1]) {\n swap(nums,nums[i]-1,i);\n } else {\n i++;\n }\n }\n for(int i = 0; i < n; i++) {\n if(nums[i] != i+1) return i+1;\n }\n return n+1;\n}\n\nprivate void swap(int[] a, int i, int j) {\n int temp = a[i];\n a[i] = a[j];\n a[j] = temp;\n}}\n\n", "function solution(A) {\n const mylist = new Set(A);\n for(i=1;i<= mylist.size+1;i++){\n if(!mylist.has(i)) {\n return i;\n }\n }\n}\n\n\n", "Try this code it works for me \nimport java.util.*;\n class Solution {\n public static int solution(int[] A) {\n // write your code in Java SE 8\n int m = Arrays.stream(A).max().getAsInt(); //Storing maximum value \n if (m < 1) // In case all values in our array are negative \n { \n return 1; \n } \n if (A.length == 1) { \n\n //If it contains only one element \n if (A[0] == 1) { \n return 2; \n } else { \n return 1; \n } \n } \n int min = A[0];\n int max= A[0];\n int sm = 1;\n\n HashSet<Integer> set = new HashSet<Integer>();\n\n for(int i=0;i<A.length;i++){\n set.add(A[i]);\n\n if(A[i]<min){\n min = A[i];\n }\n if(A[i]>max){\n max = A[i];\n }\n }\n\n if(min <= 0){\n min = 1;\n }\n\n if(max <= 0){\n max = 1;\n }\n\n boolean fnd = false;\n for(int i=min;i<=max;i++){\n if(i>0 && !set.contains(i)){\n sm = i;\n fnd = true;\n break;\n }\n else continue;\n\n }\n if(fnd)\n return sm; \n else return max +1;\n }\n\n public static void main(String args[]){\n\n Scanner s=new Scanner(System.in);\n\n System.out.println(\"enter number of elements\");\n\n int n=s.nextInt();\n\n int arr[]=new int[n];\n\n System.out.println(\"enter elements\");\n\n for(int i=0;i<n;i++){//for reading array\n arr[i]=s.nextInt();\n\n }\n\n int array[] = arr;\n\n // Calling getMax() method for getting max value\n int max = solution(array);\n System.out.println(\"Maximum Value is: \"+max);\n\n }\n }\n\n", "The simplest way using while loop: \nfun solution(A: IntArray): Int {\n var value = 1\n var find = false\n while(value < A.size) {\n val iterator = A.iterator()\n while (iterator.hasNext()) {\n if (value == iterator.nextInt()) {\n find = true\n value++\n }\n }\n if (!find) {\n break\n } else {\n find = false\n }\n }\n return value\n}\n\n", "This is might help you, it should work fine!\npublic static int sol(int[] A)\n{\n boolean flag =false;\n for(int i=1; i<=1000000;i++ ) {\n for(int j=0;j<A.length;j++) {\n if(A[j]==i) {\n flag = false;\n break;\n }else {\n flag = true;\n }\n }\n if(flag) {\n return i;\n }\n }\n return 1;\n}\n\n" ]
[ 139, 83, 43, 22, 20, 14, 10, 10, 9, 7, 6, 6, 6, 6, 5, 5, 5, 4, 4, 4, 3, 3, 3, 3, 3, 3, 3, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0 ]
[ "Linear complexity O(2N) which simplifies to O(N). I used two sequentials loops, no nested loops. Performance 100%, Correctness 100% according to codility: https://app.codility.com/demo/results/trainingR7Y4E4-REP/. It shows 1 min completion time because I forgot to set length of (1000000 + 1) so I re-ran test again\n// you can also use imports, for example:\nimport java.util.Arrays;\n\n// you can write to stdout for debugging purposes, e.g.\n// System.out.println(\"this is a debug message\");\n\nclass Solution {\n public int solution(int[] array) {\n boolean[] nums = new boolean[1000000 + 1];// for indexing purpose.\n\n Arrays.stream(array).forEach(i -> {\n if(i > 0) {\n nums[i] = true;\n }\n });\n\n if(!nums[1])\n return 1;\n\n for(int i = 2; i < nums.length; i++) {\n if(!nums[i])\n return i;\n }\n\n return 1;\n }\n}\n\n" ]
[ -1 ]
[ "algorithm" ]
stackoverflow_0051719848_algorithm.txt
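Editor's note: the record above collects many answers to the same task (find the smallest positive integer missing from an array). For reference only, here is a minimal consolidated sketch of the O(N) set-based approach that several of those answers describe. It is written in Python purely for illustration; the function name smallest_missing_positive and the test values are the editor's own and are not taken from any single answer in the record.

def smallest_missing_positive(values):
    """Return the smallest positive integer not present in `values`.

    Builds a set of the positive elements for O(1) membership checks,
    so the whole scan runs in O(N) time with O(N) extra space.
    """
    positives = {v for v in values if v > 0}  # ignore zero and negatives
    candidate = 1
    # At most len(positives) + 1 candidates ever need to be checked.
    while candidate in positives:
        candidate += 1
    return candidate

# Quick checks mirroring the examples used throughout the answers above.
assert smallest_missing_positive([1, 3, 6, 4, 1, 2]) == 5
assert smallest_missing_positive([1, 2, 3]) == 4
assert smallest_missing_positive([-1, -3]) == 1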
Q: override functions / extend during runtime? I have 3 scripts like this: A.gd: extends Node func something(): print("A something") B.gd: extends "res://A.gd" func something(): print("B something") func something_else(): print("B something_else") C.gd: extends "res://A.gd" func something(): print("C something") Is there a way I can extend B in C during runtime so that I can override something() with C but I don't lose the functionality of something_else() from B? Like the runtime equivalent of: C.gd: extends "res://B.gd" func something(): print("C something") Why am I doing this? I have multiple B_1.gd, B_2.gd, etc., so if I want to extend any one of them with C.gd I have to make multiple files C_1.gd, C_2.gd, etc. for every individual one, which becomes annoying when I'm using the same C.gd for every B_<number>.gd file. So is something like this possible (maybe in C#)? Or is there any workaround? Note: Both C.gd and B_<number>.gd always inherit from A.gd A: You can create a script at runtime. A minimal example is like this: var script = GDScript.new() script.source_code = "func run():print('hello world')" script.reload() If you define static methods there, you can use them right away. You can also instance the script. Or set the script of an existing Object to it. Thus, you can use your existing code as a template. First read the source code from the original script file: var file := File.new() file.open("res://C.gd", File.READ) var source_code := file.get_as_text() file.close() Addendum: As an alternative, you could use a String literal. Then replace the path with the one you want in the source code: source_code = source_code.replace("res://A.gd", "res://B_1.gd") Addendum: As an alternative, you could have a String prepared for String.format. Then create a GDScript with that modified source code: var script := GDScript.new() script.source_code = source_code script.reload() Addendum: reload returns an Error (int), which will be a value other than OK (0) if there is something wrong with the code. Then use that script however you want. For example, instance it: var instance := script.new() Or set it to an existing instance: instance.set_script(script)
override functions / extend during runtime?
I have 3 scripts like this: A.gd: extends Node func something(): print("A something") B.gd: extends "res://A.gd" func something(): print("B something") func something_else(): print("B something_else") C.gd: extends "res://A.gd" func something(): print("C something") Is there a way I can extend B in C during runtime so that I can override something() with C but I don't lose the functionality of something_else() from B? Like the runtime equivalent of: C.gd: extends "res://B.gd" func something(): print("C something") Why am I doing this? I have multiple B_1.gd, B_2.gd, etc., so if I want to extend any one of them with C.gd I have to make multiple files C_1.gd, C_2.gd, etc. for every individual one, which becomes annoying when I'm using the same C.gd for every B_<number>.gd file. So is something like this possible (maybe in C#)? Or is there any workaround? Note: Both C.gd and B_<number>.gd always inherit from A.gd
[ "You can create an script at runtime.\nA minimal example is like this:\nvar script = GDScript.new()\nscript.source_code = \"func run():print('hello world')\"\nscript.reload()\n\nIf you define static methods there, you can use them right away. You can also instance the script. Or set the script of an existing Object to it.\n\nThus, you can use your existing code as template. First read the source code from the original script file:\nvar file := File.new()\nfile.open(\"res://C.gd\", File.READ)\nvar source_code := file.get_as_text()\nfile.close()\n\nAddendum: As alternative, you could use a String literal.\nThen replace the path with the one you want in the source code:\nsource_code = source_code.replace(\"res://A.gd\", \"res://B_1.gd\")\n\nAddendum: As alternative, you could have a String prepared for String.format.\nThen create an GDScript with that modified source code:\nvar script := GDScript.new()\nscript.source_code = source_code\nscript.reload()\n\nAddendum: reload returns an Error (int), which would be non zero (OK) if there is something wrong with the code.\nThen use that script however you want. For example instance it:\nvar instance := script.new()\n\nOr set it to an existing instance:\ninstance.set_script(script)\n\n" ]
[ 1 ]
[]
[]
[ "gdscript", "godot" ]
stackoverflow_0074665669_gdscript_godot.txt
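The accepted answer in the record above builds a GDScript object from a source string at runtime: create a GDScript resource, set its source_code, call reload(), then instance it. As a rough cross-language analogy only, the same "fill in a template, compile it, instantiate it" flow can be sketched in Python; the A/B_1/C names are taken from the question, everything else is invented for illustration and none of it is Godot API.

# Sketch of the same runtime-template idea in Python (analogy only, not Godot).
C_TEMPLATE = """
class C({base}):
    def something(self):
        print("C something")
"""

class A:
    def something(self): print("A something")

class B_1(A):
    def something(self): print("B something")
    def something_else(self): print("B something_else")

def make_c(base_cls):
    # substitute the desired base class into the template, then compile it
    namespace = {base_cls.__name__: base_cls}
    exec(C_TEMPLATE.format(base=base_cls.__name__), namespace)
    return namespace["C"]

C = make_c(B_1)          # pick any B_<n> here without writing a C_<n> file
c = C()
c.something()            # prints "C something" (overridden)
c.something_else()       # prints "B something_else" (inherited from B_1)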
Q: The same instance of an HiveObject cannot be stored in two different boxes in flutter I have created a simple learning app with hive.. here I want to store all deleted expense record into separate file,(box) so that I can get it restored on request.. I don't want to store deleted records in same file, I checked history questions regarding this issue but not getting exact reason. concept is simple to create a another file with same object type...in future I would go with backup and restore .... here is my code class Boxes { static Box<ExpenseModel> getExpenseRecords() { return Hive.box<ExpenseModel>('expenserecords'); } static Box<ExpenseModel> getDeletedRecords() { return Hive.box<ExpenseModel>('deletedrecords'); } } main file void main() async { WidgetsFlutterBinding.ensureInitialized(); var dir_path=await getApplicationDocumentsDirectory(); Hive.init(dir_path.path); Hive.registerAdapter(ExpenseModelAdapter()); Hive.registerAdapter(CategoryModelAdapter()); await Hive.openBox<ExpenseModel>('expenserecords'); await Hive.openBox<ExpenseModel>('deletedrecords'); runApp(MyApp()); } and delete function Expanded( child: ValueListenableBuilder<Box<ExpenseModel>>( valueListenable:Boxes.getExpenseRecords().listenable(), builder: (context, box, _) { //todo study which index is exactly taken var data = box.values .toList(); return ListView.builder( itemCount: data.length, itemBuilder: (context, index) { final exp = data[index]; return ExpenseCard( exp: exp, ondelete: () { final rcyclebox=Boxes.getDeletedRecords(); rcyclebox.add(exp);// exp.delete(); //this delete method belongs to hiveobject }, onedit: () { }, ontoogle: (){ exp.isexpense=!exp.isexpense; box.putAt(index, exp); }, ); }); }, ) expense model classs @HiveType(typeId: 1) class ExpenseModel extends HiveObject { @HiveField(0) String title; @HiveField(1) CategoryModel category; @HiveField(2) int amount; @HiveField(3) bool isexpense; ExpenseModel({required this.title,required this.category,required this.amount,required this.isexpense}); } A: The problem is that you are getting objects from Hive and saving the same object instance (after modification of course, but the object reference is the same). Hive knows this is the same object and complains that I can't store the same thing again. So you will have to create a new object instance. You can create a copywith method that returns a new object.
The same instance of an HiveObject cannot be stored in two different boxes in flutter
I have created a simple learning app with hive.. here I want to store all deleted expense record into separate file,(box) so that I can get it restored on request.. I don't want to store deleted records in same file, I checked history questions regarding this issue but not getting exact reason. concept is simple to create a another file with same object type...in future I would go with backup and restore .... here is my code class Boxes { static Box<ExpenseModel> getExpenseRecords() { return Hive.box<ExpenseModel>('expenserecords'); } static Box<ExpenseModel> getDeletedRecords() { return Hive.box<ExpenseModel>('deletedrecords'); } } main file void main() async { WidgetsFlutterBinding.ensureInitialized(); var dir_path=await getApplicationDocumentsDirectory(); Hive.init(dir_path.path); Hive.registerAdapter(ExpenseModelAdapter()); Hive.registerAdapter(CategoryModelAdapter()); await Hive.openBox<ExpenseModel>('expenserecords'); await Hive.openBox<ExpenseModel>('deletedrecords'); runApp(MyApp()); } and delete function Expanded( child: ValueListenableBuilder<Box<ExpenseModel>>( valueListenable:Boxes.getExpenseRecords().listenable(), builder: (context, box, _) { //todo study which index is exactly taken var data = box.values .toList(); return ListView.builder( itemCount: data.length, itemBuilder: (context, index) { final exp = data[index]; return ExpenseCard( exp: exp, ondelete: () { final rcyclebox=Boxes.getDeletedRecords(); rcyclebox.add(exp);// exp.delete(); //this delete method belongs to hiveobject }, onedit: () { }, ontoogle: (){ exp.isexpense=!exp.isexpense; box.putAt(index, exp); }, ); }); }, ) expense model classs @HiveType(typeId: 1) class ExpenseModel extends HiveObject { @HiveField(0) String title; @HiveField(1) CategoryModel category; @HiveField(2) int amount; @HiveField(3) bool isexpense; ExpenseModel({required this.title,required this.category,required this.amount,required this.isexpense}); }
[ "The problem is that you are getting objects from Hive and saving the same object instance (after modification of course, but the object reference is the same). Hive knows this is the same object and complains that I can't store the same thing again. So you will have to create a new object instance. You can create a copywith method that returns a new object.\n" ]
[ 1 ]
[]
[]
[ "flutter" ]
stackoverflow_0074664709_flutter.txt
Q: I'm having trouble using the Model I wrote I wrote a model like this: List<School> schools = [ School(city: "ISTANBUL", name: "Exam1"), School(city: "ISTANBUL", name: "Name2"), School(city: "ISTANBUL", name: "Name3") ]; List allSchools() { return schools; } class School { String city; String name; School({required this.city, required this.name}); } I want to be able to use city and name values ​​in the list in Flutter. Flutter codes: TextFieldSearch( decoration: InputDecoration( hintText: "Select School", prefixIcon: const Icon( Icons.search, color: Colors.black45, ), border: OutlineInputBorder( borderRadius: BorderRadius.circular(8.0), borderSide: BorderSide.none, ), filled: true, fillColor: const Color.fromARGB(255, 229, 229, 229), ), initialList: allSchools(), label: "", controller: _schoolController, ), How can I use model and list together? I hope you understand my point. Thanks in advance for your help. Error A: The package you used only support one string, there is a workaround for that which is convert your schools list to a newlist like this: List<String> newList = schools.map((e) => '${e.city}-${e.name}').toList(); and use this list to build your TextFieldSearch. For cleaner code you can separate this widget like this: Widget buildSearchField(List<School> schools) { List<String> newList = schools.map((e) => '${e.city}-${e.name}').toList(); return TextFieldSearch( decoration: InputDecoration( hintText: "Select School", prefixIcon: const Icon( Icons.search, color: Colors.black45, ), border: OutlineInputBorder( borderRadius: BorderRadius.circular(8.0), borderSide: BorderSide.none, ), filled: true, fillColor: const Color.fromARGB(255, 229, 229, 229), ), initialList: newList, label: "", controller: _schoolController, ); } and use it like this: buildSearchField(allSchools()),
I'm having trouble using the Model I wrote
I wrote a model like this: List<School> schools = [ School(city: "ISTANBUL", name: "Exam1"), School(city: "ISTANBUL", name: "Name2"), School(city: "ISTANBUL", name: "Name3") ]; List allSchools() { return schools; } class School { String city; String name; School({required this.city, required this.name}); } I want to be able to use city and name values ​​in the list in Flutter. Flutter codes: TextFieldSearch( decoration: InputDecoration( hintText: "Select School", prefixIcon: const Icon( Icons.search, color: Colors.black45, ), border: OutlineInputBorder( borderRadius: BorderRadius.circular(8.0), borderSide: BorderSide.none, ), filled: true, fillColor: const Color.fromARGB(255, 229, 229, 229), ), initialList: allSchools(), label: "", controller: _schoolController, ), How can I use model and list together? I hope you understand my point. Thanks in advance for your help. Error
[ "The package you used only support one string, there is a workaround for that which is convert your schools list to a newlist like this:\nList<String> newList = schools.map((e) => '${e.city}-${e.name}').toList();\n\nand use this list to build your TextFieldSearch.\nFor cleaner code you can separate this widget like this:\nWidget buildSearchField(List<School> schools) {\n List<String> newList = schools.map((e) => '${e.city}-${e.name}').toList();\n\n return TextFieldSearch(\n decoration: InputDecoration(\n hintText: \"Select School\",\n prefixIcon: const Icon(\n Icons.search,\n color: Colors.black45,\n ),\n border: OutlineInputBorder(\n borderRadius: BorderRadius.circular(8.0),\n borderSide: BorderSide.none,\n ),\n filled: true,\n fillColor: const Color.fromARGB(255, 229, 229, 229),\n ),\n initialList: newList,\n label: \"\",\n controller: _schoolController,\n );\n }\n\nand use it like this:\nbuildSearchField(allSchools()),\n\n" ]
[ 1 ]
[]
[]
[ "dart", "flutter" ]
stackoverflow_0074665818_dart_flutter.txt
Q: If a SqlDataReader is not closed or long running - is a deadlock possible? Condition: 2 users are accessing the same table in a database at the same time. One keeps his SqlDataReader connection open or is long-running, and the other is inserting data. Will there be any kind of deadlock? Both users are using Visual Studio. A: Yes, it is possible for a deadlock to occur in this scenario. A deadlock occurs when two or more transactions are waiting for each other to release locks on resources that they both need, resulting in a standstill where none of the transactions can continue. In this case, if the SqlDataReader is not closed or is long-running, it will hold a lock on the table that the other user is trying to insert data into. This can cause the other user's transaction to wait for the SqlDataReader to release the lock, resulting in a deadlock. To avoid this situation, it is recommended to use the SqlDataReader in a using block to ensure that it is properly closed and disposed of when it is no longer needed. This will release the locks on the table and prevent potential deadlocks. Additionally, it is good practice to use transactions and the appropriate isolation levels to control concurrency and prevent deadlocks in your database.
If a SqlDataReader is not closed or long running - is a deadlock possible?
Condition: 2 users are accessing the same table in a database at the same time. One keeps his SqlDataReader connection open or is long-running, and the other is inserting data. Will there be any kind of deadlock? Both users are using Visual Studio.
[ "Yes, it is possible for a deadlock to occur in this scenario.\nA deadlock occurs when two or more transactions are waiting for each other to release locks on resources that they both need, resulting in a standstill where none of the transactions can continue. In this case, if the SqlDataReader is not closed or is long-running, it will hold a lock on the table that the other user is trying to insert data into. This can cause the other user's transaction to wait for the SqlDataReader to release the lock, resulting in a deadlock.\nTo avoid this situation, it is recommended to use the SqlDataReader in a using block to ensure that it is properly closed and disposed of when it is no longer needed. This will release the locks on the table and prevent potential deadlocks. Additionally, it is good practice to use transactions and the appropriate isolation levels to control concurrency and prevent deadlocks in your database.\n" ]
[ 0 ]
[]
[]
[ "deadlock", "sqldatareader" ]
stackoverflow_0074664911_deadlock_sqldatareader.txt
Q: Change Django Default Language I've been developing a web application in English, and now I want to change the default language to German. I tried changing the language code and adding the locale directory with all the translations, but Django still shows everything in English. I also want all my table names to be in German along with the content in the templates. I also tried Locale Middleware and also this repo for a custom middleware but it still doesn't work. Not to mention, Django changes the default language of the admin panel, but my field and table names remain English. MIDDLEWARE = [ 'django.middleware.security.SecurityMiddleware', 'django.contrib.sessions.middleware.SessionMiddleware', 'language.DefaultLanguageMiddleware', # 'django.middleware.locale.LocaleMiddleware', 'django.middleware.common.CommonMiddleware', 'django.middleware.csrf.CsrfViewMiddleware', 'django.contrib.auth.middleware.AuthenticationMiddleware', 'django.contrib.messages.middleware.MessageMiddleware', 'django.middleware.clickjacking.XFrameOptionsMiddleware', ] LANGUAGE_CODE = 'de' TIME_ZONE = 'UTC' USE_I18N = True USE_TZ = True LOCALE_PATH = ( os.path.join(BASE_DIR, 'locale') ) Here is my locale directory: This is how I use translation in my templates: {% load i18n static %} {% translate "Single User" %} This is how I have defined my models: from django.utils.translation import gettext_lazy as _ class Facility(models.Model): name = models.CharField(_('Name'), max_length=100, null=True, blank=True) class Meta: verbose_name_plural = _('Facilities') A: Turns out everything is just right, and the only thing that messed things up was a typo in LOCALE_PATHS. settings.py: LOCALE_PATHS = ( # notice the S which was forgotten os.path.join(BASE_DIR, 'locale') )
Change Django Default Language
I've been developing a web application in English, and now I want to change the default language to German. I tried changing the language code and adding the locale directory with all the translations, but Django still shows everything in English. I also want all my table names to be in German along with the content in the templates. I also tried Locale Middleware and also this repo for a custom middleware but it still doesn't work. Not to mention, Django changes the default language of the admin panel, but my field and table names remain English. MIDDLEWARE = [ 'django.middleware.security.SecurityMiddleware', 'django.contrib.sessions.middleware.SessionMiddleware', 'language.DefaultLanguageMiddleware', # 'django.middleware.locale.LocaleMiddleware', 'django.middleware.common.CommonMiddleware', 'django.middleware.csrf.CsrfViewMiddleware', 'django.contrib.auth.middleware.AuthenticationMiddleware', 'django.contrib.messages.middleware.MessageMiddleware', 'django.middleware.clickjacking.XFrameOptionsMiddleware', ] LANGUAGE_CODE = 'de' TIME_ZONE = 'UTC' USE_I18N = True USE_TZ = True LOCALE_PATH = ( os.path.join(BASE_DIR, 'locale') ) Here is my locale directory: This is how I use translation in my templates: {% load i18n static %} {% translate "Single User" %} This is how I have defined my models: from django.utils.translation import gettext_lazy as _ class Facility(models.Model): name = models.CharField(_('Name'), max_length=100, null=True, blank=True) class Meta: verbose_name_plural = _('Facilities')
[ "Turns out everything is just right, and the only thing that messed things up was a typo in LOCALE_PATHS.\nsettings.py:\nLOCALE_PATHS = ( # notice the S which was forgotten\n os.path.join(BASE_DIR, 'locale')\n)\n\n" ]
[ 0 ]
[]
[]
[ "django", "django_i18n", "python", "translation" ]
stackoverflow_0074590212_django_django_i18n_python_translation.txt
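The fix in the record above is the missing S in LOCALE_PATHS. For context, a minimal working i18n setup usually pairs that setting with the message extraction and compilation steps; the sketch below assumes a standard project layout and is not taken from the original post. Django's documentation also expects LocaleMiddleware to sit after SessionMiddleware and before CommonMiddleware when per-request language detection is wanted.

# settings.py (minimal i18n sketch; BASE_DIR layout assumed)
from pathlib import Path

BASE_DIR = Path(__file__).resolve().parent.parent

LANGUAGE_CODE = "de"                  # fallback/default language
USE_I18N = True
LOCALE_PATHS = [BASE_DIR / "locale"]  # plural LOCALE_PATHS, not LOCALE_PATH

# After marking strings with gettext_lazy / {% translate %}, run from the project root:
#   django-admin makemessages -l de   # writes locale/de/LC_MESSAGES/django.po
#   (translate the msgstr entries)
#   django-admin compilemessages      # builds the .mo files Django actually loads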
Q: How to implement LazyColumn item impression tracking I have a lazycolumn with items, and I want to send an event every time one of the items appears on screen. There are examples of events being sent the first time (like here https://plusmobileapps.com/2022/05/04/lazy-column-view-impressions.html) but that example doesn't send events on subsequent times the same item reappears (when scrolling up, for example). I know it shouldn't be tied to composition, because there can be multiple recompositions while an item remains on screen. What would be the best approach to solve something like this? A: I modified example in article from keys to index and it works fine, you should check out if there is something wrong with keys matching. @Composable private fun MyRow(key: Int, lazyListState: LazyListState, onItemViewed: () -> Unit){ Text( "Row $key", color = Color.White, modifier = Modifier .fillMaxWidth() .background(Color.Red) .padding(20.dp) ) ItemImpression(key = key, lazyListState = lazyListState) { onItemViewed() } } @Composable fun ItemImpression(key: Int, lazyListState: LazyListState, onItemViewed: () -> Unit) { val isItemWithKeyInView by remember { derivedStateOf { lazyListState.layoutInfo .visibleItemsInfo .any { it.index == key } } } if (isItemWithKeyInView) { LaunchedEffect(Unit) { onItemViewed() } } } Then used it as LazyColumn( verticalArrangement = Arrangement.spacedBy(14.dp), state = state ) { items(100) { MyRow(key = it, lazyListState = state) { println(" Item $it is displayed") if(it == 11){ Toast.makeText(context, "item $it is displayed", Toast.LENGTH_SHORT).show() } } } } Result Also instead of sending LazyListState to each ui composable you can move ItemImpression above list as a Composable that only tracks events using state. I put 2, but you can send a list and create for multiple ones either @Composable private fun LazyColumnEventsSample() { val context = LocalContext.current val state = rememberLazyListState() ItemImpression(key = 11, lazyListState = state) { Toast.makeText(context, "item 11 is displayed", Toast.LENGTH_SHORT).show() } ItemImpression(key = 13, lazyListState = state) { Toast.makeText(context, "item 13 is displayed", Toast.LENGTH_SHORT).show() } LazyColumn( verticalArrangement = Arrangement.spacedBy(14.dp), state = state ) { items(100) { Text( "Row $it", color = Color.White, modifier = Modifier .fillMaxWidth() .background(Color.Red) .padding(20.dp) ) } } } A: The LazyListState#layoutInfo property contains all the information about the visible items. You can use it to know if a specific item is visible in the list. Something like: @Composable private fun LazyListState.containsItem(index:Int): Boolean { return remember(this) { derivedStateOf { val visibleItemsInfo = layoutInfo.visibleItemsInfo if (layoutInfo.totalItemsCount == 0) { false } else { visibleItemsInfo.toMutableList().map { it.index }.contains(index) } } }.value } Then just use something like: val state = rememberLazyListState() var isItemVisible = state.containsItem(index = 5) Then you can observe the value using a side effect: LaunchedEffect(isItemVisible){ if (isItemVisible) //do something } Instead, if you need all the visible items you can use this function to retrieve a List with all the visible items and store it in a variable. 
@Composable private fun LazyListState.visibleItems(): List<Int> { return remember(this) { derivedStateOf { val visibleItemsInfo = layoutInfo.visibleItemsInfo if (layoutInfo.totalItemsCount == 0) { emptyList() } else { visibleItemsInfo.toMutableList().map { it.index } } } }.value }
How to implement LazyColumn item impression tracking
I have a lazycolumn with items, and I want to send an event every time one of the items appears on screen. There are examples of events being sent the first time (like here https://plusmobileapps.com/2022/05/04/lazy-column-view-impressions.html) but that example doesn't send events on subsequent times the same item reappears (when scrolling up, for example). I know it shouldn't be tied to composition, because there can be multiple recompositions while an item remains on screen. What would be the best approach to solve something like this?
[ "I modified example in article from keys to index and it works fine, you should check out if there is something wrong with keys matching.\n@Composable\nprivate fun MyRow(key: Int, lazyListState: LazyListState, onItemViewed: () -> Unit){\n Text(\n \"Row $key\",\n color = Color.White,\n modifier = Modifier\n .fillMaxWidth()\n .background(Color.Red)\n .padding(20.dp)\n )\n\n ItemImpression(key = key, lazyListState = lazyListState) {\n onItemViewed()\n }\n}\n\n\n@Composable\nfun ItemImpression(key: Int, lazyListState: LazyListState, onItemViewed: () -> Unit) {\n\n val isItemWithKeyInView by remember {\n derivedStateOf {\n lazyListState.layoutInfo\n .visibleItemsInfo\n .any { it.index == key }\n }\n }\n\n if (isItemWithKeyInView) {\n LaunchedEffect(Unit) {\n onItemViewed()\n }\n }\n}\n\nThen used it as\nLazyColumn(\n verticalArrangement = Arrangement.spacedBy(14.dp),\n state = state\n) {\n items(100) {\n MyRow(key = it, lazyListState = state) {\n println(\" Item $it is displayed\")\n if(it == 11){\n Toast.makeText(context, \"item $it is displayed\", Toast.LENGTH_SHORT).show()\n }\n }\n\n }\n}\n\nResult\n\nAlso instead of sending LazyListState to each ui composable you can move ItemImpression above list as a Composable that only tracks events using state. I put 2, but you can send a list and create for multiple ones either\n@Composable\nprivate fun LazyColumnEventsSample() {\n\n val context = LocalContext.current\n val state = rememberLazyListState()\n\n ItemImpression(key = 11, lazyListState = state) {\n Toast.makeText(context, \"item 11 is displayed\", Toast.LENGTH_SHORT).show()\n }\n\n\n ItemImpression(key = 13, lazyListState = state) {\n Toast.makeText(context, \"item 13 is displayed\", Toast.LENGTH_SHORT).show()\n }\n\n\n LazyColumn(\n verticalArrangement = Arrangement.spacedBy(14.dp),\n state = state\n ) {\n items(100) {\n Text(\n \"Row $it\",\n color = Color.White,\n modifier = Modifier\n .fillMaxWidth()\n .background(Color.Red)\n .padding(20.dp)\n )\n }\n }\n}\n\n", "The LazyListState#layoutInfo property contains all the information about the visible items. You can use it to know if a specific item is visible in the list.\nSomething like:\n@Composable\nprivate fun LazyListState.containsItem(index:Int): Boolean {\n\n return remember(this) {\n derivedStateOf {\n val visibleItemsInfo = layoutInfo.visibleItemsInfo\n if (layoutInfo.totalItemsCount == 0) {\n false\n } else {\n visibleItemsInfo.toMutableList().map { it.index }.contains(index)\n }\n }\n }.value\n}\n\nThen just use something like:\nval state = rememberLazyListState()\nvar isItemVisible = state.containsItem(index = 5)\n\nThen you can observe the value using a side effect:\nLaunchedEffect(isItemVisible){\n if (isItemVisible)\n //do something\n}\n\nInstead, if you need all the visible items you can use this function to retrieve a List with all the visible items and store it in a variable.\n@Composable\nprivate fun LazyListState.visibleItems(): List<Int> {\n\n return remember(this) {\n derivedStateOf {\n val visibleItemsInfo = layoutInfo.visibleItemsInfo\n if (layoutInfo.totalItemsCount == 0) {\n emptyList()\n } else {\n visibleItemsInfo.toMutableList().map { it.index }\n }\n }\n }.value\n}\n\n\n" ]
[ 2, 1 ]
[]
[]
[ "android", "android_jetpack_compose", "android_jetpack_compose_lazy_column", "android_jetpack_compose_list", "kotlin" ]
stackoverflow_0074157909_android_android_jetpack_compose_android_jetpack_compose_lazy_column_android_jetpack_compose_list_kotlin.txt
Q: How to combine two Unicode symbols to create a new symbol? I want to create a new symbol that combines an "Latin small letter o" and the diacritic symbol for "low line." Latin small letter o is U+006F Diacritic low line is U+0332. The desired result would have a line below the o. It is the opposite of "Latin capital letter O with Macron" (http://www.fileformat.info/info/unicode/char/014c/index.htm) except that the line is equal to the width of the 0. How to combine U+006F and U+0332 for use with JSON (e.g. \u006F and \u0332) and for use with HTML hex (e.g. &#x6F; and &#x332;) A: Based on this quesion: What's the unicode glyph used to indicate combining characters? the answer for HTML and JSON basically is the same: concatenate sans space. HTML entities: &#x6F;&#x332; JSON: \u006F\u0332 A: To combine two Unicode symbols to create a new symbol, you can use the Unicode combining characters. Unicode combining characters are special characters that are used to combine two or more characters into a single symbol. For example, if you want to combine the letter "A" and the acute accent symbol "´" to create the character "Á", you can use the Unicode combining character "U+0301" (combining acute accent). The resulting character will be "A´" (U+0041 U+0301), which is the Unicode code point for "Á" (U+00C1). Here is an example of how to use the Unicode combining characters in JavaScript: // Define the base character and the combining character const base = "\u0041"; // A const accent = "\u0301"; // combining acute accent // Combine the base character and the combining character to create a new symbol const combined = base + accent; console.log(combined); // Output: "Á" In this example, we combine the base character "A" and the combining character "combining acute accent" to create the character "Á". The combined variable will contain the Unicode code point for "Á" (U+00C1), which is the result of combining the base character and the combining character. You can use this technique to combine any two Unicode symbols to create a new symbol, as long as the combining character is compatible with the base character. Unicode provides a wide range of combining characters that can be used to create a variety of symbols.
How to combine two Unicode symbols to create a new symbol?
I want to create a new symbol that combines an "Latin small letter o" and the diacritic symbol for "low line." Latin small letter o is U+006F Diacritic low line is U+0332. The desired result would have a line below the o. It is the opposite of "Latin capital letter O with Macron" (http://www.fileformat.info/info/unicode/char/014c/index.htm) except that the line is equal to the width of the 0. How to combine U+006F and U+0332 for use with JSON (e.g. \u006F and \u0332) and for use with HTML hex (e.g. &#x6F; and &#x332;)
[ "Based on this quesion: What's the unicode glyph used to indicate combining characters?\nthe answer for HTML and JSON basically is the same: concatenate sans space.\nHTML entities: &#x6F;&#x332;\nJSON: \\u006F\\u0332\n", "To combine two Unicode symbols to create a new symbol, you can use the Unicode combining characters. Unicode combining characters are special characters that are used to combine two or more characters into a single symbol.\nFor example, if you want to combine the letter \"A\" and the acute accent symbol \"´\" to create the character \"Á\", you can use the Unicode combining character \"U+0301\" (combining acute accent). The resulting character will be \"A´\" (U+0041 U+0301), which is the Unicode code point for \"Á\" (U+00C1).\nHere is an example of how to use the Unicode combining characters in JavaScript:\n\n\n// Define the base character and the combining character\nconst base = \"\\u0041\"; // A\nconst accent = \"\\u0301\"; // combining acute accent\n\n// Combine the base character and the combining character to create a new symbol\nconst combined = base + accent;\n\nconsole.log(combined); // Output: \"Á\"\n\n\n\nIn this example, we combine the base character \"A\" and the combining character \"combining acute accent\" to create the character \"Á\". The combined variable will contain the Unicode code point for \"Á\" (U+00C1), which is the result of combining the base character and the combining character.\nYou can use this technique to combine any two Unicode symbols to create a new symbol, as long as the combining character is compatible with the base character. Unicode provides a wide range of combining characters that can be used to create a variety of symbols.\n" ]
[ 6, 0 ]
[]
[]
[ "unicode" ]
stackoverflow_0047437061_unicode.txt
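Both forms in the answer above are simply the base letter followed by the combining mark. A small Python check (Python chosen purely for illustration) shows the pair and prints the equivalent JSON escape and HTML hex entities:

import json

o_low_line = "\u006F\u0332"   # LATIN SMALL LETTER O + COMBINING LOW LINE
print(o_low_line)             # one visible glyph, font permitting
print(len(o_low_line))        # 2 -- still two code points

print(json.dumps(o_low_line)) # "o\u0332": the ASCII "o" stays literal, the mark is escaped
print("".join(f"&#x{ord(ch):X};" for ch in o_low_line))  # &#x6F;&#x332;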
Q: Scroll horizontally starting from right to left with CSS overflow:scroll So I am wondering if there is a possibility to have a different starting position with the overflow:scroll value; When you start scrolling in a div the default behaviour is to scroll from left to right: |<--Scrollbar-Starting-Left-->_________________________________________________| would it possible that it starts at the right? |_________________________________________________<--Scrollbar-Starting-Right-->| In my example the red and green items are first visible, I'd like the green and blue item to be visible first :) I've tried direction:rtl; but no luck A: You can of course use direction:rtl document.querySelector('input').addEventListener('input', function(){ document.querySelector('.box').scrollLeft = this.value; }) .box{ width: 320px; height: 100px; border:1px solid red; overflow: scroll; direction: rtl; /* <-- the trick */ } .box::before{ content:''; display:block; width:400%; height:1px; } <div class='box'></div> <br> <input placeholder='scrollLeft value'> FiddleDemo This may be useful using direction http://css-tricks.com/almanac/properties/d/direction/ A: I don't Know about any solution with just CSS but you can use Jquery to change the initial position of the scrollbar like this: $(document).ready(function(){ $('#box').scrollLeft($(this).height()) }) Check this Demo Fiddle A: With javascript you can just set scrollLeft property when page gets loaded (using el.scrollLeft = el.scrollWidth - el.clientWidth;). A: I've been working in a project and for me works very well like this: overflow-x: scroll; overflow-y: hidden; white-space: nowrap; A: There is an unexpected behavior if you add closing parenthesis at the end of text for the top answer. (Tested in Chrome 54) <div id="box"> <div id="inner_box"> <div class="item" style="background:red;"></div> <div class="item" style="background:green;"></div> <div class="item" style="background:blue;">te(st)</div> </div> </div> Watch this feedle for result : http://jsfiddle.net/mm37pqxa/ A: You can do this with one line of jQuery: $("#box").scrollLeft(480); A: Use direction rtl overflow: auto; direction:rtl;
Scroll horizontally starting from right to left with CSS overflow:scroll
So I am wondering if there is a possibility to have a different starting position with the overflow:scroll value; When you start scrolling in a div the default behaviour is to scroll from left to right: |<--Scrollbar-Starting-Left-->_________________________________________________| would it possible that it starts at the right? |_________________________________________________<--Scrollbar-Starting-Right-->| In my example the red and green items are first visible, I'd like the green and blue item to be visible first :) I've tried direction:rtl; but no luck
[ "You can of course use direction:rtl\n\n\ndocument.querySelector('input').addEventListener('input', function(){\r\n document.querySelector('.box').scrollLeft = this.value;\r\n})\n.box{\r\n width: 320px;\r\n height: 100px;\r\n border:1px solid red;\r\n overflow: scroll;\r\n direction: rtl; /* <-- the trick */\r\n}\r\n\r\n.box::before{ content:''; display:block; width:400%; height:1px; }\n<div class='box'></div>\r\n<br>\r\n<input placeholder='scrollLeft value'>\n\n\n\nFiddleDemo\nThis may be useful using direction http://css-tricks.com/almanac/properties/d/direction/\n", "I don't Know about any solution with just CSS but you can use Jquery to change the initial position of the scrollbar like this:\n$(document).ready(function(){\n $('#box').scrollLeft($(this).height())\n})\n\nCheck this Demo Fiddle\n", "With javascript you can just set scrollLeft property when page gets loaded (using el.scrollLeft = el.scrollWidth - el.clientWidth;).\n", "I've been working in a project and for me works very well like this:\noverflow-x: scroll; \noverflow-y: hidden; \nwhite-space: nowrap;\n\n", "There is an unexpected behavior if you add closing parenthesis at the end of text for the top answer. (Tested in Chrome 54)\n<div id=\"box\">\n <div id=\"inner_box\">\n <div class=\"item\" style=\"background:red;\"></div>\n <div class=\"item\" style=\"background:green;\"></div>\n <div class=\"item\" style=\"background:blue;\">te(st)</div>\n </div>\n</div>\n\nWatch this feedle for result : http://jsfiddle.net/mm37pqxa/\n", "You can do this with one line of jQuery:\n$(\"#box\").scrollLeft(480);\n", "Use direction rtl\noverflow: auto; \ndirection:rtl; \n\n" ]
[ 41, 7, 2, 2, 1, 0, 0 ]
[]
[]
[ "css", "html", "javascript", "jquery", "scroll" ]
stackoverflow_0021293456_css_html_javascript_jquery_scroll.txt
Q: Mockito Argument Matcher any() - Argument(s) are different! for verify method called with any() I am trying to write a test where I am very simply checking if a method was called on a mock. Should be easy, I've done it many times using verify, however for some reason my test is failing because it seems to be comparing the actual invocation arguments to <any Type>. My code: val mockSearchApi = mock<SearchApi>() whenever(mockSearchApi.searchBrands(any(), any(), any(), any(), any(), any(), any(), any())) .thenReturn(Single.just(getSearchApiResponse())) // Some code that causes the api .searchBrands to get called verify(mockSearchApi, times(1)).searchBrands(any(), any(), any(), any(), any(), any(), any(), any()) Then the test fails with error Argument(s) are different! Wanted: searchApi.searchBrands( <any java.lang.Boolean>, <any java.util.List>, <any java.lang.Integer>, <any java.lang.Integer>, <any java.lang.String>, <any java.lang.String>, <any java.lang.Boolean>, <any java.util.Map> ); -> at ... Actual invocations have different arguments: searchApi.searchBrands( false, [], 1, null, null, null, true, {} ); So the method is called and the argument types are correct, so why does this fail? And I have also tried changing all of those any() calls to anyBoolean(), anyList(), anyString(), etc. and the same happens. I have also ensured that the code is calling the arguments in the right order as I've seen that issue before with mocks as well. A: Most likely you used any() from mockito-kotlin Original ArgumentMatchers behaviour: any() - Matches anything, including nulls and varargs. any​(Class<T> type) - Matches any object of given type, excluding nulls. both functions return null To prevent exception when null is returned, mockito-kotlin changes this behaviour. See Matchers.kt /** Matches any object, excluding nulls. */ inline fun <reified T : Any> any(): T { return ArgumentMatchers.any(T::class.java) ?: createInstance() } /** Matches anything, including nulls. */ inline fun <reified T : Any> anyOrNull(): T { return ArgumentMatchers.any<T>() ?: createInstance() } mockito-kotlin matchers behaviour: any() delegates to ArgumentMatchers.any(T::class.java) and thus does not match nulls anyOrNull() delegates to ArgumentMatchers.any<T>() and thus matches null
Mockito Argument Matcher any() - Argument(s) are different! for verify method called with any()
I am trying to write a test where I am very simply checking if a method was called on a mock. Should be easy, I've done it many times using verify, however for some reason my test is failing because it seems to be comparing the actual invocation arguments to <any Type>. My code: val mockSearchApi = mock<SearchApi>() whenever(mockSearchApi.searchBrands(any(), any(), any(), any(), any(), any(), any(), any())) .thenReturn(Single.just(getSearchApiResponse())) // Some code that causes the api .searchBrands to get called verify(mockSearchApi, times(1)).searchBrands(any(), any(), any(), any(), any(), any(), any(), any()) Then the test fails with error Argument(s) are different! Wanted: searchApi.searchBrands( <any java.lang.Boolean>, <any java.util.List>, <any java.lang.Integer>, <any java.lang.Integer>, <any java.lang.String>, <any java.lang.String>, <any java.lang.Boolean>, <any java.util.Map> ); -> at ... Actual invocations have different arguments: searchApi.searchBrands( false, [], 1, null, null, null, true, {} ); So the method is called and the argument types are correct, so why does this fail? And I have also tried changing all of those any() calls to anyBoolean(), anyList(), anyString(), etc. and the same happens. I have also ensured that the code is calling the arguments in the right order as I've seen that issue before with mocks as well.
[ "Most likely you used any() from mockito-kotlin\nOriginal ArgumentMatchers behaviour:\n\nany() - Matches anything, including nulls and varargs.\nany​(Class<T> type) - Matches any object of given type, excluding nulls.\nboth functions return null\n\nTo prevent exception when null is returned, mockito-kotlin changes this behaviour. See Matchers.kt\n/** Matches any object, excluding nulls. */\ninline fun <reified T : Any> any(): T {\n return ArgumentMatchers.any(T::class.java) ?: createInstance()\n}\n\n/** Matches anything, including nulls. */\ninline fun <reified T : Any> anyOrNull(): T {\n return ArgumentMatchers.any<T>() ?: createInstance()\n}\n\nmockito-kotlin matchers behaviour:\n\nany() delegates to ArgumentMatchers.any(T::class.java) and thus does not match nulls\nanyOrNull() delegates to ArgumentMatchers.any<T>() and thus matches null\n\n" ]
[ 0 ]
[]
[]
[ "argument_matcher", "kotlin", "mockito", "testing", "verify" ]
stackoverflow_0074663281_argument_matcher_kotlin_mockito_testing_verify.txt
Q: Hugo not processing template variable, go syntax in .md got ignored I'm trying to show images from the assets/images directory. But I just realized that Go code with the {{code}} syntax didn't get processed at all. It just prints the code like plain markdown content. My config.yml markup section is just this: markup: defaultMarkdownHandler: goldmark highlight: noClasses: false I should still be using the default goldmark markdown handler, but the Go syntax got ignored. A: You are mixing Go syntax, which usually goes into the layout files, with the markdown syntax, which goes into the content files. You should have a look at shortcodes, which you can embed into Hugo content files. https://gohugo.io/content-management/shortcodes/
Hugo not processing template variable, go syntax in .md got ignored
I'm trying to show images from the assets/images directory. But I just realized that Go code with the {{code}} syntax didn't get processed at all. It just prints the code like plain markdown content. My config.yml markup section is just this: markup: defaultMarkdownHandler: goldmark highlight: noClasses: false I should still be using the default goldmark markdown handler, but the Go syntax got ignored.
[ "You are mixing Go syntax, which usually goes into the layout files, with the markdown syntax, which goes into the content files.\nYou should have a look at shortcodes, that you can embed into Hugo content files.\nhttps://gohugo.io/content-management/shortcodes/\n" ]
[ 0 ]
[]
[]
[ "hugo" ]
stackoverflow_0074632218_hugo.txt
Q: TypeError: unsupported operand type(s) for -=: 'str' and 'float' I've tried to write a program which converts decimal to binary and vice versa but when I try 23, it flags line 17 (answer2 -= x) as a type error. import math x = 4096 y = "" z = 10 q = 1 final_answer = 0 answer1 = str(input("Do you want to convert decimal into binary (1) or binary into decimal (2)?")) if answer1 == "1": answer2 = input("What number do you want to convert to binary? It can't be larger than 4096") p = answer2.isdigit() if p: for i in range(13): if int(answer2) >= x: y = y + "1" answer2 -= x else: y = y + "0" x /= 2 print(y) elif not p: print("That's not a number") I tried to convert the variables of answer2 and x to float and int but the same problem still comes up. A: Your variable is still a string when you apply an operation involving a numeric value. In your case, you still need to convert the variable to a float: answer2 = float(answer2) Furthermore, I do not know if isdigit() catches floats (involving a decimal point). This post might help out if you get stuck there: Using isdigit for floats? A: The error you are encountering is happening because you are trying to subtract a string value from an integer. You need to convert the answer2 variable to an integer before you can subtract x from it. Here is one way you can fix this error: import math x = 4096 y = "" z = 10 q = 1 final_answer = 0 answer1 = str(input("Do you want to convert decimal into binary (1) or binary into decimal (2)?")) if answer1 == "1": answer2 = input("What number do you want to convert to binary? It can't be larger than 4096") p = answer2.isdigit() if p: # Convert the string value of answer2 to an integer answer2 = int(answer2) for i in range(13): if answer2 >= x: y = y + "1" answer2 -= x else: y = y + "0" x /= 2 print(y) elif not p: print("That's not a number")
TypeError: unsupported operand type(s) for -=: 'str' and 'float'
I've tried to write a program which converts decimal to binary and vice versa but when I try 23, it flags line 17 (answer2 -= x) as a type error. import math x = 4096 y = "" z = 10 q = 1 final_answer = 0 answer1 = str(input("Do you want to convert decimal into binary (1) or binary into decimal (2)?")) if answer1 == "1": answer2 = input("What number do you want to convert to binary? It can't be larger than 4096") p = answer2.isdigit() if p: for i in range(13): if int(answer2) >= x: y = y + "1" answer2 -= x else: y = y + "0" x /= 2 print(y) elif not p: print("That's not a number") I tried to convert the variables of answer2 and x to float and int but the same problem still comes up.
[ "Your variable is still a string when you apply an operation involving a numeric value. In your case, you still need to convert the variable to a float:\nanswer2 = float(answer2)\n\nFurthermore, I do not know if isdigit() catches floats (involving a decimal point). This post might help out if you get stuck there: Using isdigit for floats?\n", "The error you are encountering is happening because you are trying to subtract a string value from an integer. You need to convert the answer2 variable to an integer before you can subtract x from it.\nHere is one way you can fix this error:\nimport math\n\nx = 4096\ny = \"\"\nz = 10\nq = 1\nfinal_answer = 0\n\nanswer1 = str(input(\"Do you want to convert decimal into binary (1) or binary into decimal (2)?\"))\nif answer1 == \"1\":\n answer2 = input(\"What number do you want to convert to binary? It can't be larger than 4096\")\n p = answer2.isdigit()\n if p:\n # Convert the string value of answer2 to an integer\n answer2 = int(answer2)\n for i in range(13):\n if answer2 >= x:\n y = y + \"1\"\n answer2 -= x\n else:\n y = y + \"0\"\n\n x /= 2\n\n print(y)\n elif not p:\n print(\"That's not a number\")\n\n" ]
[ 1, 0 ]
[]
[]
[ "python" ]
stackoverflow_0074665643_python.txt
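Both answers above come down to converting the input with int() before doing arithmetic on it. A compact version of the same converter that keeps every operand an integer, plus the built-in formatter as a cross-check, might look like this (standalone sketch, not the original program):

raw = input("Number to convert to binary (0-4096): ")
if not raw.isdigit():
    print("That's not a number")
else:
    n = int(raw)               # convert once, up front
    print(format(n, "b"))      # built-in check, e.g. 23 -> 10111

    bits = ""
    x = 4096
    for _ in range(13):        # fixed 13-bit output, so leading zeros are kept
        if n >= x:
            bits += "1"
            n -= x             # int minus int, no TypeError
        else:
            bits += "0"
        x //= 2                # floor division keeps x an int (x /= 2 would give a float)
    print(bits)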
Q: Cartopy Mercator plot I wanna create a scatter plot of data on a map in Mercator projection. It only prints the empty map without values if I set the Projection in Mercator. It is, however, no problem if I choose the PlateCarree projection .... Works fine: ax1=plt.axes(projection=ccrs.PlateCarree()) ax1.add_feature(cf.BORDERS) ax1.add_feature(cf.COASTLINE) ax1.add_feature(cf.BORDERS) ax1.add_feature(cf.STATES) ax1.set_extent([-5, 10, 41, 52], crs=ccrs.PlateCarree()) ax1.set_title('xxx', fontsize=18); ax1.grid(b=True, alpha=0.5) obs200.plot(x="longitude", y="latitude", kind="scatter", c="mm", ax=ax1, cmap = "jet", figsize=(18,20), title="xxx") # latitude: Breitengrad, longitude: Längengrad prints empty map: ax1=plt.axes(projection=ccrs.Mercator()) ax1.add_feature(cf.BORDERS) ax1.add_feature(cf.COASTLINE) ax1.add_feature(cf.BORDERS) ax1.add_feature(cf.STATES) ax1.set_extent([-5, 10, 41, 52], crs=ccrs.Mercator()) ax1.set_title('xxx', fontsize=18); ax1.grid(b=True, alpha=0.5) obs200.plot(x="longitude", y="latitude", kind="scatter", c="mm", ax=ax1, cmap = "jet", figsize=(18,20), title="xxx") # latitude: Breitengrad, longitude: Längengrad I couldn't find my example in other questions A: A demonstration of geodataframes plotted on Mercator projection. import cartopy.crs as ccrs import geopandas as gpd import matplotlib.pyplot as plt # These are Geodataframes with world/city data # Their CRS is ccrs.PlateCarree() world = gpd.read_file(gpd.datasets.get_path('naturalearth_lowres')) city_pnts = gpd.read_file(gpd.datasets.get_path("naturalearth_cities")) # Geometries in the 2 Geodataframes have CRS transformed # to Mercator projection here mercator = ccrs.Mercator() wld2 = world.to_crs(mercator.proj4_init) cty2 = city_pnts.to_crs(mercator.proj4_init) # Plot them with matplotlib # with projection = ccrs.Mercator() fig, ax2 = plt.subplots(figsize=[8,6], subplot_kw=dict(projection=mercator)) ax2.add_geometries(wld2['geometry'], crs=mercator, facecolor='sandybrown', edgecolor='black') # Use ax2 to plot other data cty2.plot(ax=ax2, zorder=20) # This sets extent of the plot ax2.set_extent([-15, 25, 30, 60], crs=ccrs.PlateCarree())
Cartopy Mercator plot
I wanna create a scatter plot of data on a map in Mercator projection. It only prints the empty map without values if I set the Projection in Mercator. It is, however, no problem if I choose the PlateCarree projection .... Works fine: ax1=plt.axes(projection=ccrs.PlateCarree()) ax1.add_feature(cf.BORDERS) ax1.add_feature(cf.COASTLINE) ax1.add_feature(cf.BORDERS) ax1.add_feature(cf.STATES) ax1.set_extent([-5, 10, 41, 52], crs=ccrs.PlateCarree()) ax1.set_title('xxx', fontsize=18); ax1.grid(b=True, alpha=0.5) obs200.plot(x="longitude", y="latitude", kind="scatter", c="mm", ax=ax1, cmap = "jet", figsize=(18,20), title="xxx") # latitude: Breitengrad, longitude: Längengrad prints empty map: ax1=plt.axes(projection=ccrs.Mercator()) ax1.add_feature(cf.BORDERS) ax1.add_feature(cf.COASTLINE) ax1.add_feature(cf.BORDERS) ax1.add_feature(cf.STATES) ax1.set_extent([-5, 10, 41, 52], crs=ccrs.Mercator()) ax1.set_title('xxx', fontsize=18); ax1.grid(b=True, alpha=0.5) obs200.plot(x="longitude", y="latitude", kind="scatter", c="mm", ax=ax1, cmap = "jet", figsize=(18,20), title="xxx") # latitude: Breitengrad, longitude: Längengrad I couldn't find my example in other questions
[ "A demonstration of geodataframes plotted on Mercator projection.\nimport cartopy.crs as ccrs\nimport geopandas as gpd\nimport matplotlib.pyplot as plt\n\n# These are Geodataframes with world/city data\n# Their CRS is ccrs.PlateCarree()\nworld = gpd.read_file(gpd.datasets.get_path('naturalearth_lowres'))\ncity_pnts = gpd.read_file(gpd.datasets.get_path(\"naturalearth_cities\"))\n\n# Geometries in the 2 Geodataframes have CRS transformed\n# to Mercator projection here\nmercator = ccrs.Mercator()\nwld2 = world.to_crs(mercator.proj4_init)\ncty2 = city_pnts.to_crs(mercator.proj4_init)\n\n# Plot them with matplotlib\n# with projection = ccrs.Mercator()\nfig, ax2 = plt.subplots(figsize=[8,6], subplot_kw=dict(projection=mercator))\nax2.add_geometries(wld2['geometry'], crs=mercator, facecolor='sandybrown', edgecolor='black')\n\n# Use ax2 to plot other data\ncty2.plot(ax=ax2, zorder=20)\n\n# This sets extent of the plot\nax2.set_extent([-15, 25, 30, 60], crs=ccrs.PlateCarree())\n\n\n" ]
[ 0 ]
[]
[]
[ "cartopy", "mercator", "plot" ]
stackoverflow_0074656553_cartopy_mercator_plot.txt
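The answer above reprojects the GeoDataFrames to the Mercator CRS before plotting. An alternative that stays closer to the original pandas workflow is to keep the data in longitude/latitude and declare that CRS with transform= when drawing on the Mercator axes; a plausible reason the Mercator plot in the question came out empty is that DataFrame.plot cannot pass a transform, so the degree values get read as Mercator metres. The sketch below assumes obs200 is a pandas DataFrame with longitude, latitude and mm columns, as in the question.

import cartopy.crs as ccrs
import cartopy.feature as cf
import matplotlib.pyplot as plt

fig = plt.figure(figsize=(10, 10))
ax = plt.axes(projection=ccrs.Mercator())
ax.add_feature(cf.BORDERS)
ax.add_feature(cf.COASTLINE)
ax.add_feature(cf.STATES)
ax.set_extent([-5, 10, 41, 52], crs=ccrs.PlateCarree())   # extent given in lon/lat

# scatter through the GeoAxes and state the data CRS explicitly
sc = ax.scatter(obs200["longitude"], obs200["latitude"], c=obs200["mm"],
                cmap="jet", transform=ccrs.PlateCarree())
fig.colorbar(sc, ax=ax, label="mm")
plt.show()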
Q: problem with ssl while using wsl and identity server 4 I have developed the simple identity server application with entity framework storage for user credentials and the client app asp.net core mvc with OpenId authentication. It worked properly on local machine but when I am trying to debug it on a wsl with ubuntu 20 04 installed I get the following error AuthenticationException: The remote certificate is invalid according to the validation procedure. I simply use wsl as a debug target in Visual studio. Installed .net on a wsl machine, generated the developer certificats and simply ran 2 projects simulteniasly. Sorry but I don't know what code I should provide to debug the problem but here is my client configuration on a Identity server side: new Client { ClientId = "mvc_client", ClientSecrets = { new Secret("mvc_client_secret".ToSha256()) }, AllowedGrantTypes = GrantTypes.Code, RequireConsent = false, AllowedScopes = { "dummy_api", IdentityServerConstants.StandardScopes.OpenId, IdentityServerConstants.StandardScopes.Profile }, RedirectUris = { "https://localhost:5000/signin-oidc" } }, And the open id on a client side: services.AddAuthentication(config => { config.DefaultScheme = "Cookie"; config.DefaultChallengeScheme = "oidc"; }) .AddCookie("Cookie") .AddOpenIdConnect("oidc", config => { config.Authority = "https://localhost:5001/"; //config.Authority = "http://192.168.1.11:5004/"; //config.RequireHttpsMetadata = false; config.ClientId = "mvc_client"; config.ClientSecret = "mvc_client_secret"; config.SaveTokens = true; // persist tokens in the cookie config.ResponseType = "code"; }); I am getting this error while trying to login with client app. If I try just to login with Identity server everythin works A: If you can access the oidc config address (in your case it should be: https:/localhost:5001/.well-known/openid-configuration) in Postman or your browser and you are just testing you can set the BackchannelHttpHandler to always return true on certificate validation. Also set SslProtocols to allow different versions. These should be avoided in production environment for security reasons: . . .AddOpenIdConnect("oidc", config => { config.BackchannelHttpHandler = new HttpClientHandler { SslProtocols = SslProtocols.Tls12 | SslProtocols.Tls11 | SslProtocols.Tls12 | SslProtocols.Tls13, ServerCertificateCustomValidationCallback = (message, cert, chain, errors) => true }; });
problem with ssl while using wsl and identity server 4
I have developed the simple identity server application with entity framework storage for user credentials and the client app asp.net core mvc with OpenId authentication. It worked properly on local machine but when I am trying to debug it on a wsl with ubuntu 20 04 installed I get the following error AuthenticationException: The remote certificate is invalid according to the validation procedure. I simply use wsl as a debug target in Visual studio. Installed .net on a wsl machine, generated the developer certificats and simply ran 2 projects simulteniasly. Sorry but I don't know what code I should provide to debug the problem but here is my client configuration on a Identity server side: new Client { ClientId = "mvc_client", ClientSecrets = { new Secret("mvc_client_secret".ToSha256()) }, AllowedGrantTypes = GrantTypes.Code, RequireConsent = false, AllowedScopes = { "dummy_api", IdentityServerConstants.StandardScopes.OpenId, IdentityServerConstants.StandardScopes.Profile }, RedirectUris = { "https://localhost:5000/signin-oidc" } }, And the open id on a client side: services.AddAuthentication(config => { config.DefaultScheme = "Cookie"; config.DefaultChallengeScheme = "oidc"; }) .AddCookie("Cookie") .AddOpenIdConnect("oidc", config => { config.Authority = "https://localhost:5001/"; //config.Authority = "http://192.168.1.11:5004/"; //config.RequireHttpsMetadata = false; config.ClientId = "mvc_client"; config.ClientSecret = "mvc_client_secret"; config.SaveTokens = true; // persist tokens in the cookie config.ResponseType = "code"; }); I am getting this error while trying to login with client app. If I try just to login with Identity server everythin works
[ "If you can access the oidc config address (in your case it should be: https:/localhost:5001/.well-known/openid-configuration) in Postman or your browser and you are just testing you can set the BackchannelHttpHandler to always return true on certificate validation.\nAlso set SslProtocols to allow different versions. These should be avoided in production environment for security reasons:\n.\n.\n.AddOpenIdConnect(\"oidc\", config =>\n {\n config.BackchannelHttpHandler = new HttpClientHandler\n {\n SslProtocols = SslProtocols.Tls12 | SslProtocols.Tls11 | SslProtocols.Tls12 | SslProtocols.Tls13,\n ServerCertificateCustomValidationCallback = (message, cert, chain, errors) => true\n };\n});\n\n" ]
[ 1 ]
[]
[]
[ "asp.net_core_mvc", "asp.net_mvc", "identityserver4", "windows_subsystem_for_linux" ]
stackoverflow_0074665810_asp.net_core_mvc_asp.net_mvc_identityserver4_windows_subsystem_for_linux.txt
Q: mobile scrolling, page jumps to top My web application is here: https://www.snowpacktracker.com/btac/snowpacktracker On desktop, everything with scrolling is fine. However, on mobile (particularly on an iPad), any attempt to touch scroll down snaps the page back to the top. I have noticed that if I fight past the jumpiness (which is difficult) and get the page to scroll down so the header is not visible, then scrolling works normally, so perhaps something in the header is responsible. For whatever reason, I am unable to reproduce this on a desktop with devtools set to mobile dimensions, only reproducible on a mobile device (but maybe this is just me not using devtools correctly). Here is a screen recording (on iPad) to demonstrate the problem: https://vimeo.com/661613444 Here is some minimal info on my setup: Bokeh web application, using Flask to render the Bokeh content within an html template (base.html). The header uses a Bootstrap container-fluid class, in addition to Bootstrap classes for the navigation buttons. I also have a custom style.css used to over-ride certain classes in the base template. Of relevance in style.css might be: .placeholderbokehapp-snowpack { background-color: white; padding-right: 20px; padding-left: 20px; padding-top: 15px; padding-bottom: 15px; min-height: 300px; } .container-fluid { padding-right: 20px; padding-left: 20px; min-width: 1100px; } Besides imported js libraries for Bokeh, jquery, popper and bootstrap, I have custom js to define the spinning wheel for loading, and a resize sensor to stop the spinning wheel when the page dimension changes. bokeh==2.4.1, Flask==1.1.2, jquery==3.3.1, popper==1.14.3, bootstrap==4.1.3 Happy to provide any additional details as needed. A: Whenever you scroll up browser address bar hides which shifts the webpage position. You can see this video A simple fix is to prevent the address bar from hiding. html { overflow: hidden; width: 100%; } body { height: 100%; position: fixed; /* prevent overscroll bounce*/ overflow-y: scroll; -webkit-overflow-scrolling: touch; /* iOS velocity scrolling */ } The above code is copied from the stackoverflow A: It is possible that the resize event within bokeh-2.4.1.min.js is being triggered whenever ios shows/hides the address bar. You can sort of replicate this by: View the site in Chrome's Device Toolbar Make the viewport small enough that the content overflows Scroll down Vertically resize the viewport You will see the content scroll snaps to top. Unfortunately, if this is proven to be the problem I don’t have a solution. Perhaps there is a way to disable ios's showing/hiding of the address bar. Failing that, you may have to edit bokeh.js to stop it from changing the scroll-position on resize. A: Based on the information provided and the screen recording, it looks like the issue is with the container-fluid class in your Bootstrap navigation. The container-fluid class is designed to take up the full width of the viewport, which means that it will extend outside the visible area of the screen on mobile devices. This can cause problems with scrolling and touch events. One possible solution to this issue would be to use a different class for the navigation container, such as container or container-md, which are both designed to be responsive and adapt to different screen sizes. You can also use media queries in your CSS to adjust the width of the navigation container based on the width of the viewport. 
Here is an example of how you could use media queries to adjust the width of the navigation container on mobile devices: @media (max-width: 575px) { .container-fluid { width: 100%; padding: 0; } } This media query will apply the width: 100% and padding: 0 styles to the container-fluid class when the viewport is less than 575 pixels wide. This should help prevent the navigation container from extending outside the visible area of the screen and causing problems with scrolling and touch events. I hope this helps! Let me know if you have any other questions. A: If the page scroll works on a desktop and the problem that arises in iOS , which beloved can be reasoned with. Software bug found in from iOS 15.5 or a glitch third-party app. -- After updating to iOS 15.5, some iPhones always return to the topmost of any pages or app while scrolling, as if someone is always touching the topmost area of the iPhone. If you’re dealing with a bug and the problems started after the iOS 15.5 update, you might want to update to iOS 15.6 Hardware problem triggered by a fall. Worn out screen protector that could cause ghost touches at the top of the green. The same goes for a too-tight phone case or bumper. The best solution, if you don't fix the problem, is to use the Nav element sticky. <div class="sticky-top">...</div> OR Shrink a Header on Scroll > Header <div id="header">Header</div> Add CSS: .container-fluid { padding: 50px 10px; /* Some padding */ text-align: center; /* Centered text */ font-size: 90px; /* Big font size */ position: fixed; /* Fixed position - sit on top of the page */ top: 0; width: 100%; /* Full width */ transition: 0.2s; /* Add a transition effect (when scrolling - and font size is decreased) */ } Add JavaScript: // When the user scrolls down 50px from the top of the document, resize the header's font size window.onscroll = function() {scrollFunction()}; function scrollFunction() { if (document.body.scrollTop > 50 || document.documentElement.scrollTop > 50) { document.getElementById("header").style.fontSize = "30px"; } else { document.getElementById("header").style.fontSize = "90px"; } } A: The issue you are experiencing with scrolling on your web application is likely due to a conflict between the JavaScript libraries that you are using on your site. In particular, it appears that the Bootstrap library is interfering with the Bokeh library, causing the page to snap back to the top when you try to scroll. One possible solution to this problem would be to modify the JavaScript code that you are using to include the Bokeh library. Instead of using the Bokeh library directly, you could try using the BokehJS library, which is a standalone JavaScript library that can be used to embed Bokeh plots on web pages. To use the BokehJS library, you would need to include the following code in your HTML page: <script src="https://cdn.pydata.org/bokeh/release/bokeh-2.4.1.min.js" type="text/javascript"></script> <script src="https://cdn.pydata.org/bokeh/release/bokeh-widgets-2.4.1.min.js" type="text/javascript"></script> <script src="https://cdn.pydata.org/bokeh/release/bokeh-tables-2.4.1.min.js" type="text/javascript"></script> You would then need to modify your JavaScript code to use the BokehJS library to create your plots, instead of using the Bokeh library directly. This should help to resolve the conflict between the Bokeh and Bootstrap libraries, and it should allow you to scroll on your web application without the page snapping back to the top. 
It is also worth noting that the latest version of Bokeh is 2.4.1, which is the version that you are currently using. If you are still experiencing issues with scrolling on your site, you may want to try upgrading to the latest version of Bokeh to see if that resolves the problem. If you found this helpful consider donating :) BTC:178vgzZkLNV9NPxZiQqabq5crzBSgQWmvs,ETH:0x99753577c4ae89e7043addf7abbbdf7258a74697 A: If a mobile page is scrolling and suddenly jumps to the top, it is likely due to a misconfigured "jump to top" feature. This feature is typically triggered by tapping on the top of the screen, and is designed to quickly scroll the page back to the top. To fix this issue, you can disable the jump to top feature or reconfigure it to only be triggered by a specific gesture, such as a long press or swipe. The specific steps to do this will depend on the platform and framework you are using, but here are some general guidelines: Identify the component or plugin that is responsible for the jump to top feature. This may be a built-in feature of the platform, or it may be a third-party component or plugin. If the jump to top feature is built-in to the platform, check the documentation to see if there is a way to disable or reconfigure it. For example, in iOS, you can disable the jump to top feature by setting the scrollToTop property of a UIScrollView to false. If the jump to top feature is provided by a third-party component or plugin, check the documentation to see if there is a way to disable or reconfigure it. For example, if you are using the React ScrollView component from the react-native library, you can disable the jump to top feature by setting the scrollEnabled prop to false. If you are unable to disable or reconfigure the jump to top feature, you can try to work around it by adding a custom gesture handler to your page. For example, you can add a touch event listener to your page that listens for a specific gesture, such as a long press or swipe, and then scrolls the page to the top manually using JavaScript.
mobile scrolling, page jumps to top
My web application is here: https://www.snowpacktracker.com/btac/snowpacktracker On desktop, everything with scrolling is fine. However, on mobile (particularly on an iPad), any attempt to touch scroll down snaps the page back to the top. I have noticed that if I fight past the jumpiness (which is difficult) and get the page to scroll down so the header is not visible, then scrolling works normally, so perhaps something in the header is responsible. For whatever reason, I am unable to reproduce this on a desktop with devtools set to mobile dimensions, only reproducible on a mobile device (but maybe this is just me not using devtools correctly). Here is a screen recording (on iPad) to demonstrate the problem: https://vimeo.com/661613444 Here is some minimal info on my setup: Bokeh web application, using Flask to render the Bokeh content within an html template (base.html). The header uses a Bootstrap container-fluid class, in addition to Bootstrap classes for the navigation buttons. I also have a custom style.css used to over-ride certain classes in the base template. Of relevance in style.css might be: .placeholderbokehapp-snowpack { background-color: white; padding-right: 20px; padding-left: 20px; padding-top: 15px; padding-bottom: 15px; min-height: 300px; } .container-fluid { padding-right: 20px; padding-left: 20px; min-width: 1100px; } Besides imported js libraries for Bokeh, jquery, popper and bootstrap, I have custom js to define the spinning wheel for loading, and a resize sensor to stop the spinning wheel when the page dimension changes. bokeh==2.4.1, Flask==1.1.2, jquery==3.3.1, popper==1.14.3, bootstrap==4.1.3 Happy to provide any additional details as needed.
[ "Whenever you scroll up browser address bar hides which shifts the webpage position.\nYou can see this video\nA simple fix is to prevent the address bar from hiding.\nhtml {\n overflow: hidden;\n width: 100%;\n}\nbody {\n height: 100%;\n position: fixed;\n /* prevent overscroll bounce*/\n overflow-y: scroll;\n -webkit-overflow-scrolling: touch;\n /* iOS velocity scrolling */\n}\n\nThe above code is copied from the stackoverflow\n", "It is possible that the resize event within bokeh-2.4.1.min.js is being triggered whenever ios shows/hides the address bar.\nYou can sort of replicate this by:\n\nView the site in Chrome's Device Toolbar\nMake the viewport small enough that the content overflows\nScroll down\nVertically resize the viewport\n\nYou will see the content scroll snaps to top.\nUnfortunately, if this is proven to be the problem I don’t have a solution. Perhaps there is a way to disable ios's showing/hiding of the address bar. Failing that, you may have to edit bokeh.js to stop it from changing the scroll-position on resize.\n", "Based on the information provided and the screen recording, it looks like the issue is with the container-fluid class in your Bootstrap navigation. The container-fluid class is designed to take up the full width of the viewport, which means that it will extend outside the visible area of the screen on mobile devices. This can cause problems with scrolling and touch events.\nOne possible solution to this issue would be to use a different class for the navigation container, such as container or container-md, which are both designed to be responsive and adapt to different screen sizes. You can also use media queries in your CSS to adjust the width of the navigation container based on the width of the viewport.\nHere is an example of how you could use media queries to adjust the width of the navigation container on mobile devices:\n@media (max-width: 575px) {\n .container-fluid {\n width: 100%;\n padding: 0;\n }\n}\n\nThis media query will apply the width: 100% and padding: 0 styles to the container-fluid class when the viewport is less than 575 pixels wide. This should help prevent the navigation container from extending outside the visible area of the screen and causing problems with scrolling and touch events.\nI hope this helps! Let me know if you have any other questions.\n", "If the page scroll works on a desktop and the problem that arises in iOS , which beloved can be reasoned with.\n\nSoftware bug found in from iOS 15.5 or a glitch third-party app.\n-- After updating to iOS 15.5, some iPhones always return to the topmost of any pages or app while scrolling, as if someone is always touching the topmost area of the iPhone. 
If you’re dealing with a bug and the problems started after the iOS 15.5 update, you might want to update to iOS 15.6\nHardware problem triggered by a fall.\nWorn out screen protector that could cause ghost touches at the top of the green.\nThe same goes for a too-tight phone case or bumper.\n\nThe best solution, if you don't fix the problem, is to use the Nav element sticky.\n<div class=\"sticky-top\">...</div>\n\nOR\nShrink a Header on Scroll >\nHeader\n<div id=\"header\">Header</div>\n\nAdd CSS:\n.container-fluid {\n padding: 50px 10px; /* Some padding */\n text-align: center; /* Centered text */\n font-size: 90px; /* Big font size */\n position: fixed; /* Fixed position - sit on top of the page */\n top: 0;\n width: 100%; /* Full width */\n transition: 0.2s; /* Add a transition effect (when scrolling - and font size is decreased) */\n}\n\nAdd JavaScript:\n// When the user scrolls down 50px from the top of the document, resize the header's font size\nwindow.onscroll = function() {scrollFunction()};\n\nfunction scrollFunction() {\n if (document.body.scrollTop > 50 || document.documentElement.scrollTop > 50) {\n document.getElementById(\"header\").style.fontSize = \"30px\";\n } else {\n document.getElementById(\"header\").style.fontSize = \"90px\";\n }\n}\n\n", "The issue you are experiencing with scrolling on your web application is likely due to a conflict between the JavaScript libraries that you are using on your site. In particular, it appears that the Bootstrap library is interfering with the Bokeh library, causing the page to snap back to the top when you try to scroll.\nOne possible solution to this problem would be to modify the JavaScript code that you are using to include the Bokeh library. Instead of using the Bokeh library directly, you could try using the BokehJS library, which is a standalone JavaScript library that can be used to embed Bokeh plots on web pages.\nTo use the BokehJS library, you would need to include the following code in your HTML page:\n<script src=\"https://cdn.pydata.org/bokeh/release/bokeh-2.4.1.min.js\" type=\"text/javascript\"></script>\n<script src=\"https://cdn.pydata.org/bokeh/release/bokeh-widgets-2.4.1.min.js\" type=\"text/javascript\"></script>\n<script src=\"https://cdn.pydata.org/bokeh/release/bokeh-tables-2.4.1.min.js\" type=\"text/javascript\"></script>\n\nYou would then need to modify your JavaScript code to use the BokehJS library to create your plots, instead of using the Bokeh library directly. This should help to resolve the conflict between the Bokeh and Bootstrap libraries, and it should allow you to scroll on your web application without the page snapping back to the top.\nIt is also worth noting that the latest version of Bokeh is 2.4.1, which is the version that you are currently using. If you are still experiencing issues with scrolling on your site, you may want to try upgrading to the latest version of Bokeh to see if that resolves the problem.\nIf you found this helpful consider donating :)\nBTC:178vgzZkLNV9NPxZiQqabq5crzBSgQWmvs,ETH:0x99753577c4ae89e7043addf7abbbdf7258a74697\n", "If a mobile page is scrolling and suddenly jumps to the top, it is likely due to a misconfigured \"jump to top\" feature. This feature is typically triggered by tapping on the top of the screen, and is designed to quickly scroll the page back to the top.\nTo fix this issue, you can disable the jump to top feature or reconfigure it to only be triggered by a specific gesture, such as a long press or swipe. 
The specific steps to do this will depend on the platform and framework you are using, but here are some general guidelines:\nIdentify the component or plugin that is responsible for the jump to top feature. This may be a built-in feature of the platform, or it may be a third-party component or plugin.\nIf the jump to top feature is built-in to the platform, check the documentation to see if there is a way to disable or reconfigure it. For example, in iOS, you can disable the jump to top feature by setting the scrollToTop property of a UIScrollView to false.\nIf the jump to top feature is provided by a third-party component or plugin, check the documentation to see if there is a way to disable or reconfigure it. For example, if you are using the React ScrollView component from the react-native library, you can disable the jump to top feature by setting the scrollEnabled prop to false.\nIf you are unable to disable or reconfigure the jump to top feature, you can try to work around it by adding a custom gesture handler to your page. For example, you can add a touch event listener to your page that listens for a specific gesture, such as a long press or swipe, and then scrolls the page to the top manually using JavaScript.\n" ]
[ 2, 2, 2, 0, 0, 0 ]
[]
[]
[ "bokeh", "css", "html", "mobile", "scroll" ]
stackoverflow_0070550681_bokeh_css_html_mobile_scroll.txt
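A possible mitigation, following the resize-on-address-bar theory in the answers above (a sketch only: it assumes the scroll snap is triggered by a resize handler you control rather than by BokehJS's internal one):

let lastWidth = window.innerWidth;
window.addEventListener('resize', function () {
  // On iOS, showing/hiding the address bar fires resize with only the height changed.
  // Ignore those so the handler does not redo layout work and reset the scroll position.
  if (window.innerWidth === lastWidth) {
    return;
  }
  lastWidth = window.innerWidth;
  // A real width change (rotation, split view): run the existing spinner/resize logic here.
});

If the snap turns out to come from BokehJS's own resize handling, a width-only guard like this will not help; giving the plot a fixed pixel height instead of a stretch/scale sizing_mode would be the next thing to try.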
Q: How to save ParallelMapDataset? I have an input dataset (let's name it ds) and a function that passes it to an encoder (a model named embedder). I want to build a dataset of encodings and save it to a file. What I tried to do: Converter function: def generate_embedding(image, label, embedder): return (embedder(image)[0], label) Converting: embedding_ds = ds.map(lambda image, label: generate_embedding(image, label, embedder), num_parallel_calls=tf.data.AUTOTUNE) Saving: embedding_ds.save(path) But I have a problem with embedding_ds: it's not a tf.data.Dataset (which I expected) but a tf.raw_ops.ParallelMapDataset, which doesn't have a save method. Can anybody give any advice? This problem seems to be present on my TensorFlow version (2.9.2) and not present on 2.11. A: Maybe update? In 2.11.0, it works: import tensorflow as tf ds = tf.data.Dataset.range(5) tf.__version__ # 2.11.0 ds = ds.map(lambda e: (e + 3) % 5, num_parallel_calls=3) ds.save('test') # works
How to save ParallelMapDataset?
I have an input dataset (let's name it ds) and a function that passes it to an encoder (a model named embedder). I want to build a dataset of encodings and save it to a file. What I tried to do: Converter function: def generate_embedding(image, label, embedder): return (embedder(image)[0], label) Converting: embedding_ds = ds.map(lambda image, label: generate_embedding(image, label, embedder), num_parallel_calls=tf.data.AUTOTUNE) Saving: embedding_ds.save(path) But I have a problem with embedding_ds: it's not a tf.data.Dataset (which I expected) but a tf.raw_ops.ParallelMapDataset, which doesn't have a save method. Can anybody give any advice? This problem seems to be present on my TensorFlow version (2.9.2) and not present on 2.11.
[ "Maybe update? In 2.11.0, it works:\nimport tensorflow as tf\n\nds = tf.data.Dataset.range(5)\n\ntf.__version__ # 2.11.0\n\nds = ds.map(lambda e : (e + 3) % 5, num_parallel_calls=3)\n\nds.save('test') # works\n\n" ]
[ 1 ]
[]
[]
[ "python_3.x", "tensorflow", "tensorflow_datasets" ]
stackoverflow_0074665803_python_3.x_tensorflow_tensorflow_datasets.txt
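For anyone who needs to stay on TensorFlow 2.9 rather than upgrade, a hedged sketch of a workaround: the standalone tf.data.experimental.save / tf.data.experimental.load functions predate the Dataset.save method and should accept the mapped dataset directly. The toy dataset and dummy encoder below are stand-ins for the question's ds and embedder.

import tensorflow as tf

def embed(x):
    # Dummy stand-in for the real embedder model.
    return tf.cast(x, tf.float32) * 2.0

ds = tf.data.Dataset.range(5).map(lambda x: (x, x % 2))
embedding_ds = ds.map(lambda image, label: (embed(image), label),
                      num_parallel_calls=tf.data.AUTOTUNE)

tf.data.experimental.save(embedding_ds, "/tmp/embeddings")
restored = tf.data.experimental.load("/tmp/embeddings",
                                     element_spec=embedding_ds.element_spec)
print(list(restored.as_numpy_iterator()))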
Q: how to make "onScroll" stop at a point I want to make a menu that starts rotated by 6 degrees and reaches 0 degrees as it gets to the top while the user scrolls; once it is at the top I want it to stick there. Is it possible to do this in Elementor Pro, or with the help of JS/CSS/HTML? window.onscroll = function () { scrollRotate(); }; function scrollRotate() { let image = document.getElementById("reload"); image.style.transform = "rotate(" + window.pageYOffset/10 + "deg)"; } The above code is working fine, but I want to stop the scrollRotate function when the reload element reaches the top. I want my menu to stick at the top once it gets there with scroll, but at the same time I want it to go from 6 degrees rotated to 0 degrees. A: You might want to look into the Animate On Scroll (AOS) JavaScript library. It is really handy and quite easy to understand, instead of creating your own calculations. Link
how to make "onScroll" stop at a point
I want to make a menu that starts rotated by 6 degrees and reaches 0 degrees as it gets to the top while the user scrolls; once it is at the top I want it to stick there. Is it possible to do this in Elementor Pro, or with the help of JS/CSS/HTML? window.onscroll = function () { scrollRotate(); }; function scrollRotate() { let image = document.getElementById("reload"); image.style.transform = "rotate(" + window.pageYOffset/10 + "deg)"; } The above code is working fine, but I want to stop the scrollRotate function when the reload element reaches the top. I want my menu to stick at the top once it gets there with scroll, but at the same time I want it to go from 6 degrees rotated to 0 degrees.
[ "You might wanna look into the Animate on scroll (AOS) javascript library. It is really handy and quite easy to understand. Instead of creating your own calculations.\nLink\n" ]
[ 0 ]
[]
[]
[ "css", "elementor", "html", "javascript", "jquery" ]
stackoverflow_0074665877_css_elementor_html_javascript_jquery.txt
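Putting the pieces together for the question above, a small vanilla-JS/CSS sketch (the #reload id comes from the question; the 300px ramp distance is an assumption to tune to where the menu meets the top):

#reload {
  position: sticky; /* keeps the menu pinned once it reaches the top of the viewport */
  top: 0;
}

const RAMP = 300; // scroll distance (px) over which the rotation eases out
window.onscroll = function () {
  const progress = Math.min(window.pageYOffset / RAMP, 1); // 0 at the top, clamped at 1
  const angle = 6 * (1 - progress);                        // 6deg down to 0deg, then stays 0
  document.getElementById("reload").style.transform = "rotate(" + angle + "deg)";
};

Because progress is clamped at 1, the handler effectively stops at 0 degrees however far the page scrolls, and position: sticky handles the stick-to-top part without extra JavaScript.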
Q: Migrating away from Spring Security OAuth 2 I'm having a Spring Boot Auth Microservice. It uses the Oauth2 spring cloud starter dependency which is deprecated nowadays. buildscript { dependencies { classpath "org.springframework.boot:spring-boot-gradle-plugin:2.1.9.RELEASE" } } dependencies { implementation "org.springframework.boot:spring-boot-starter-actuator" implementation "org.springframework.boot:spring-boot-starter-data-jpa" implementation "org.springframework.boot:spring-boot-starter-web" implementation "org.springframework.cloud:spring-cloud-starter-oauth2:2.1.5.RELEASE" } The Schema was taken from here: https://github.com/spring-projects/spring-security-oauth/blob/master/spring-security-oauth2/src/test/resources/schema.sql It also has a custom user_details table. The JPA class is implementing UserDetails. I've also provided an implementation for UserDetailsService which looks up the user in my custom table. OAuth Configuration is quite forward: AuthorizationServerConfiguration - where oauth is configured: @Configuration @EnableGlobalMethodSecurity(prePostEnabled = true) @EnableAuthorizationServer class AuthorizationServerConfiguration : AuthorizationServerConfigurerAdapter() { @Autowired private lateinit var authenticationManager: AuthenticationManager @Autowired private lateinit var dataSource: DataSource @Autowired @Qualifier("customUserDetailsService") internal lateinit var userDetailsService: UserDetailsService @Autowired private lateinit var passwordEncoder: BCryptPasswordEncoder override fun configure(endpoints: AuthorizationServerEndpointsConfigurer) { endpoints .tokenStore(JdbcTokenStore(dataSource)) .authenticationManager(authenticationManager) .userDetailsService(userDetailsService) } override fun configure(clients: ClientDetailsServiceConfigurer) { // This one is used in conjunction with oauth_client_details. So like there's one app client and a few backend clients. clients.jdbc(dataSource) } override fun configure(oauthServer: AuthorizationServerSecurityConfigurer) { oauthServer.passwordEncoder(passwordEncoder) } } WebSecurityConfiguration - needed for class above: @Configuration class WebSecurityConfiguration : WebSecurityConfigurerAdapter() { @Bean // We need this as a Bean. Otherwise the entire OAuth service won't work. override fun authenticationManagerBean(): AuthenticationManager { return super.authenticationManagerBean() } override fun configure(http: HttpSecurity) { http.sessionManagement() .sessionCreationPolicy(SessionCreationPolicy.STATELESS) } } ResourceServerConfiguration - to configure access for endpoints: @Configuration @EnableResourceServer class ResourceServerConfiguration : ResourceServerConfigurerAdapter() { override fun configure(http: HttpSecurity) { http.sessionManagement().sessionCreationPolicy(SessionCreationPolicy.STATELESS) .and().cors().disable().csrf().disable() .authorizeRequests() .antMatchers("/oauth/token").authenticated() .antMatchers("/oauth/user/**").authenticated() .antMatchers("/oauth/custom_end_points/**").hasAuthority("my-authority") // Deny everything else. .anyRequest().denyAll() } } These few lines give me a lot. User Info endpoint (used by microservices) Client's such as Mobile frontends can authenticate using: POST oauth/token and providing a grant_type=password together with a username and a password. 
Servers can authorize using 'oauth/authorize' Basic Auth support with different authorities is also available as I can fill username + password into the oauth_client_details table: select client_id, access_token_validity, authorities, authorized_grant_types, refresh_token_validity, scope from oauth_client_details; client_id | access_token_validity | authorities | authorized_grant_types | refresh_token_validity | scope -------------------+-----------------------+-------------------------------+-------------------------------------------+------------------------+--------- backend | 864000 | mail,push,app-register | mail,push,client_credentials | 864000 | backend app | 864000 | grant | client_credentials,password,refresh_token | 0 | app This is used by the app if there's no oauth token yet. Other microservices also use this to protect their endpoints - such as in this example: @Configuration @EnableResourceServer class ResourceServerConfig : ResourceServerConfigurerAdapter() { override fun configure(http: HttpSecurity) { http.authorizeRequests() // Coach. .antMatchers("/api/my-api/**").hasRole("my-role") .antMatchers("/registration/**").hasAuthority("my-authority") } } Their set up is quite easy: security.oauth2.client.accessTokenUri=http://localhost:20200/oauth/token security.oauth2.client.userAuthorizationUri=http://localhost:20200/oauth/authorize security.oauth2.resource.userInfoUri=http://localhost:20200/oauth/user/me security.oauth2.client.clientId=coach_client security.oauth2.client.clientSecret=coach_client The first three properties just go to my authorization server. The last two properties are the actual username + password that I've also inserted inside the oauth_client_details table. When my microservice wants to talk to another microservice it uses: val details = ClientCredentialsResourceDetails() details.clientId = "" // Values from the properties file. details.clientSecret = "" // Values from the properties file. details.accessTokenUri = "" // Values from the properties file. val template = OAuth2RestTemplate(details) template.exchange(...) Now my question is - how can I get all of this with the built in Support from Spring Security using Spring Boot? I'd like to migrate away from the deprecated packages and retain all tokens so that users are still logged in afterwards. A: We are also running a spring security authorization server and looked into this. Right now there is no replacement for the authorization server component in spring and there does not seem to be a timeline to implement one. Your best option would be to look into an existing auth component like keycloak or nimbus. alternatively there are hosted service like okta or auth0. Keeping your existing tokens will be a bit of a challange as you would need to import them into your new solution. Our current tokens are opaque while newer auth-solutions tend to use some version of jwt, so depending on your tokens, keeping them may not even be an option. Right now we consider accepting both old and new tokens for a time until the livetime of our old tokens ends, at wich point we would move fully to the new infrastukture. A: So I've ended up developing my own authentication system with a migration API from the old Spring Security OAuth 2 to my system. That way you are not logged out and need to re-login. I'll describe how I did it in case anyone is interested. In my scenario it is 2 'microservices'. One being the deprecated auth and the other leveraging it. 
Legacy Authentication System To either get a token as a user you'd send a request to /oauth/token with your username + password. To refresh a token another request to /oauth/token with your refresh token. Both cases return your access token + refresh token. You can execute this multiple times per devices and you'd always end up with the same tokens. This is important later. Tokens are stored as MD5 hashed. Spring Security OAuth has these tables defined: oauth_access_token (access tokens) oauth_approvals (don't know what for, is always empty in my case) oauth_client_details (contains a basic authorization method when you're not authorized) oauth_client_token (empty in my case) oauth_code (empty in my case) oauth_refresh_token (refresh tokens) user_details (contains the user data) user_details_user_role (association between user + roles) user_role (your roles) I really didn't use the multi roles functionality, but in any case it's trivial to take that into consideration as well. New Authentication System Access token & refresh tokens are uuid4's that I SHA256 into my table. I can query them easily and check for expiration and throw appropriate HTTP status codes. I ended up doing a per device (it's just a UUID generated once in the frontend) system. That way I can distinguish when a user has multiple devices (AFAIK, this isn't possible with the old system). We need these new endpoints Login with email + password to get an authentication Migration call from the old tokens to your new ones Logout call which deletes your authentication Refresh access token call Thoughts I can keep using the user_details table since only my code interacted with it and I expose it via Springs UserDetailsService. I'll create a new authentication table that has a n:1 relationship to user_details where I store a device id, access token, access token expiry & refresh token per user. To migrate from the old to the new system, my frontend will send a one time migration request, where I check for the given access token if it's valid and if it is, I generate new tokens in my system. I'll handle both systems in parallel by distinguishing at the header level Authorization: Bearer ... for the old system & Authorization: Token ... for the new system Code snippets I use Kotlin, so in order to have type safety and not accidentally mix up my old / new token I ended up using a sealed inline classes: sealed interface AccessToken /** The token from the old mechanism. */ @JvmInline value class BearerAccessToken(val hashed: String) : AccessToken /** The token from the new mechanism. 
*/ @JvmInline value class TokenAccessToken(val hashed: String) : AccessToken To get my token from an Authorization header String: private fun getAccessToken(authorization: String?, language: Language) = when { authorization?.startsWith("Bearer ") == true -> BearerAccessToken(hashed = hashTokenOld(authorization.removePrefix("Bearer "))) authorization?.startsWith("Token ") == true -> TokenAccessToken(hashed = hashTokenNew(authorization.removePrefix("Token "))) else -> throw BackendException(Status.UNAUTHORIZED, language.errorUnauthorized()) } internal fun hashTokenOld(token: String) = MessageDigest.getInstance("MD5").digest(token.toByteArray(Charsets.UTF_8)).hex() internal fun hashTokenNew(token: String) = MessageDigest.getInstance("SHA-256").digest(token.toByteArray(Charsets.UTF_8)).hex() Verifying the tokens with type safety gets pretty easy: when (accessToken) { is BearerAccessToken -> validateViaDeprecatedAuthServer(role) is TokenAccessToken -> { // Query your table for the given accessToken = accessToken.hashed // Ensure it's still valid and exists. Otherwise throw appropriate Status Code like Unauthorized. // From your authentication table you can then also get the user id and work with your current user & return it from this method. } } The validateViaDeprecatedAuthServer is using the old authentication sytem via the Spring APIs and returns the user id: fun validateViaDeprecatedAuthServer(): String { val principal = SecurityContextHolder.getContext().authentication as OAuth2Authentication requireElseUnauthorized(principal.authorities.map { it.authority }.contains("YOUR_ROLE_NAME")) return (principal.principal as Map<*, *>)["id"] as? String ?: throw IllegalArgumentException("Cant find id in principal") } Now we can verify if a given access token from a frontend is valid. The endpoint which generates a new token from the old one is also quite simple: fun migrateAuthentication(accessToken: AccessToken) when (origin.accessToken(language)) { is BearerAccessToken -> { val userId = validateViaDeprecatedAuthServer(role) // Now, create that new authentication in your new system and return it. createAuthenticationFor() } is TokenAccessToken -> error("You're already migrated") } Creating authentication in your new system might look like this: fun createAuthenticationFor() { val refreshToken = UUID.randomUUID().toString() val accessToken = UUID.randomUUID().toString() // SHA256 both of them and save them into your table. return refreshToken to accessToken } Then you only need some glue for your new 'login' endpoint where you need to check that the email / password matches a given user in your table, create an authentication & return it. Logout just deletes the given authentication for your user id + device id. Afterthoughts I've been using this system now for the last few days and so far it's working nicely. Users are migrating. No one seems to be logged out which is exactly what I've wanted. One downside is that since the old authentication system didn't distinguish between devices, I have no way of knowing when a user has successfully migrated. He could be using 1 device or 10. I simply don't know. So both systems will need to live side by side for a rather long time and slowly I'll phase out the old system. In which case, I'll force logout you and you need to re-login (and potentially install a new App version if you haven't updated). Note that the new system is limited to my own needs, which is exactly what I want. 
I'd prefer it to be simple and maintainable than the Spring Blackbox authentication system.
Migrating away from Spring Security OAuth 2
I'm having a Spring Boot Auth Microservice. It uses the Oauth2 spring cloud starter dependency which is deprecated nowadays. buildscript { dependencies { classpath "org.springframework.boot:spring-boot-gradle-plugin:2.1.9.RELEASE" } } dependencies { implementation "org.springframework.boot:spring-boot-starter-actuator" implementation "org.springframework.boot:spring-boot-starter-data-jpa" implementation "org.springframework.boot:spring-boot-starter-web" implementation "org.springframework.cloud:spring-cloud-starter-oauth2:2.1.5.RELEASE" } The Schema was taken from here: https://github.com/spring-projects/spring-security-oauth/blob/master/spring-security-oauth2/src/test/resources/schema.sql It also has a custom user_details table. The JPA class is implementing UserDetails. I've also provided an implementation for UserDetailsService which looks up the user in my custom table. OAuth Configuration is quite forward: AuthorizationServerConfiguration - where oauth is configured: @Configuration @EnableGlobalMethodSecurity(prePostEnabled = true) @EnableAuthorizationServer class AuthorizationServerConfiguration : AuthorizationServerConfigurerAdapter() { @Autowired private lateinit var authenticationManager: AuthenticationManager @Autowired private lateinit var dataSource: DataSource @Autowired @Qualifier("customUserDetailsService") internal lateinit var userDetailsService: UserDetailsService @Autowired private lateinit var passwordEncoder: BCryptPasswordEncoder override fun configure(endpoints: AuthorizationServerEndpointsConfigurer) { endpoints .tokenStore(JdbcTokenStore(dataSource)) .authenticationManager(authenticationManager) .userDetailsService(userDetailsService) } override fun configure(clients: ClientDetailsServiceConfigurer) { // This one is used in conjunction with oauth_client_details. So like there's one app client and a few backend clients. clients.jdbc(dataSource) } override fun configure(oauthServer: AuthorizationServerSecurityConfigurer) { oauthServer.passwordEncoder(passwordEncoder) } } WebSecurityConfiguration - needed for class above: @Configuration class WebSecurityConfiguration : WebSecurityConfigurerAdapter() { @Bean // We need this as a Bean. Otherwise the entire OAuth service won't work. override fun authenticationManagerBean(): AuthenticationManager { return super.authenticationManagerBean() } override fun configure(http: HttpSecurity) { http.sessionManagement() .sessionCreationPolicy(SessionCreationPolicy.STATELESS) } } ResourceServerConfiguration - to configure access for endpoints: @Configuration @EnableResourceServer class ResourceServerConfiguration : ResourceServerConfigurerAdapter() { override fun configure(http: HttpSecurity) { http.sessionManagement().sessionCreationPolicy(SessionCreationPolicy.STATELESS) .and().cors().disable().csrf().disable() .authorizeRequests() .antMatchers("/oauth/token").authenticated() .antMatchers("/oauth/user/**").authenticated() .antMatchers("/oauth/custom_end_points/**").hasAuthority("my-authority") // Deny everything else. .anyRequest().denyAll() } } These few lines give me a lot. User Info endpoint (used by microservices) Client's such as Mobile frontends can authenticate using: POST oauth/token and providing a grant_type=password together with a username and a password. 
Servers can authorize using 'oauth/authorize' Basic Auth support with different authorities is also available as I can fill username + password into the oauth_client_details table: select client_id, access_token_validity, authorities, authorized_grant_types, refresh_token_validity, scope from oauth_client_details; client_id | access_token_validity | authorities | authorized_grant_types | refresh_token_validity | scope -------------------+-----------------------+-------------------------------+-------------------------------------------+------------------------+--------- backend | 864000 | mail,push,app-register | mail,push,client_credentials | 864000 | backend app | 864000 | grant | client_credentials,password,refresh_token | 0 | app This is used by the app if there's no oauth token yet. Other microservices also use this to protect their endpoints - such as in this example: @Configuration @EnableResourceServer class ResourceServerConfig : ResourceServerConfigurerAdapter() { override fun configure(http: HttpSecurity) { http.authorizeRequests() // Coach. .antMatchers("/api/my-api/**").hasRole("my-role") .antMatchers("/registration/**").hasAuthority("my-authority") } } Their set up is quite easy: security.oauth2.client.accessTokenUri=http://localhost:20200/oauth/token security.oauth2.client.userAuthorizationUri=http://localhost:20200/oauth/authorize security.oauth2.resource.userInfoUri=http://localhost:20200/oauth/user/me security.oauth2.client.clientId=coach_client security.oauth2.client.clientSecret=coach_client The first three properties just go to my authorization server. The last two properties are the actual username + password that I've also inserted inside the oauth_client_details table. When my microservice wants to talk to another microservice it uses: val details = ClientCredentialsResourceDetails() details.clientId = "" // Values from the properties file. details.clientSecret = "" // Values from the properties file. details.accessTokenUri = "" // Values from the properties file. val template = OAuth2RestTemplate(details) template.exchange(...) Now my question is - how can I get all of this with the built in Support from Spring Security using Spring Boot? I'd like to migrate away from the deprecated packages and retain all tokens so that users are still logged in afterwards.
[ "We are also running a spring security authorization server and looked into this. Right now there is no replacement for the authorization server component in spring and there does not seem to be a timeline to implement one. Your best option would be to look into an existing auth component like keycloak or nimbus. alternatively there are hosted service like okta or auth0.\nKeeping your existing tokens will be a bit of a challange as you would need to import them into your new solution. Our current tokens are opaque while newer auth-solutions tend to use some version of jwt, so depending on your tokens, keeping them may not even be an option.\nRight now we consider accepting both old and new tokens for a time until the livetime of our old tokens ends, at wich point we would move fully to the new infrastukture.\n", "So I've ended up developing my own authentication system with a migration API from the old Spring Security OAuth 2 to my system. That way you are not logged out and need to re-login.\nI'll describe how I did it in case anyone is interested.\nIn my scenario it is 2 'microservices'. One being the deprecated auth and the other leveraging it.\nLegacy Authentication System\n\nTo either get a token as a user you'd send a request to /oauth/token with your username + password.\nTo refresh a token another request to /oauth/token with your refresh token.\nBoth cases return your access token + refresh token. You can execute this multiple times per devices and you'd always end up with the same tokens. This is important later.\nTokens are stored as MD5 hashed.\n\nSpring Security OAuth has these tables defined:\n\noauth_access_token (access tokens)\noauth_approvals (don't know what for, is always empty in my case)\noauth_client_details (contains a basic authorization method when you're not authorized)\noauth_client_token (empty in my case)\noauth_code (empty in my case)\noauth_refresh_token (refresh tokens)\nuser_details (contains the user data)\nuser_details_user_role (association between user + roles)\nuser_role (your roles)\n\nI really didn't use the multi roles functionality, but in any case it's trivial to take that into consideration as well.\nNew Authentication System\n\nAccess token & refresh tokens are uuid4's that I SHA256 into my table.\nI can query them easily and check for expiration and throw appropriate HTTP status codes.\nI ended up doing a per device (it's just a UUID generated once in the frontend) system. That way I can distinguish when a user has multiple devices (AFAIK, this isn't possible with the old system).\nWe need these new endpoints\n\nLogin with email + password to get an authentication\nMigration call from the old tokens to your new ones\nLogout call which deletes your authentication\nRefresh access token call\n\n\n\nThoughts\n\nI can keep using the user_details table since only my code interacted with it and I expose it via Springs UserDetailsService.\nI'll create a new authentication table that has a n:1 relationship to user_details where I store a device id, access token, access token expiry & refresh token per user.\nTo migrate from the old to the new system, my frontend will send a one time migration request, where I check for the given access token if it's valid and if it is, I generate new tokens in my system.\nI'll handle both systems in parallel by distinguishing at the header level Authorization: Bearer ... for the old system & Authorization: Token ... 
for the new system\n\nCode snippets\nI use Kotlin, so in order to have type safety and not accidentally mix up my old / new token I ended up using a sealed inline classes:\nsealed interface AccessToken\n\n/** The token from the old mechanism. */\n@JvmInline value class BearerAccessToken(val hashed: String) : AccessToken\n\n/** The token from the new mechanism. */\n@JvmInline value class TokenAccessToken(val hashed: String) : AccessToken\n\nTo get my token from an Authorization header String:\nprivate fun getAccessToken(authorization: String?, language: Language) = when {\n authorization?.startsWith(\"Bearer \") == true -> BearerAccessToken(hashed = hashTokenOld(authorization.removePrefix(\"Bearer \")))\n authorization?.startsWith(\"Token \") == true -> TokenAccessToken(hashed = hashTokenNew(authorization.removePrefix(\"Token \")))\n else -> throw BackendException(Status.UNAUTHORIZED, language.errorUnauthorized())\n}\n\ninternal fun hashTokenOld(token: String) = MessageDigest.getInstance(\"MD5\").digest(token.toByteArray(Charsets.UTF_8)).hex()\ninternal fun hashTokenNew(token: String) = MessageDigest.getInstance(\"SHA-256\").digest(token.toByteArray(Charsets.UTF_8)).hex()\n\nVerifying the tokens with type safety gets pretty easy:\nwhen (accessToken) {\n is BearerAccessToken -> validateViaDeprecatedAuthServer(role)\n is TokenAccessToken -> {\n // Query your table for the given accessToken = accessToken.hashed\n // Ensure it's still valid and exists. Otherwise throw appropriate Status Code like Unauthorized.\n // From your authentication table you can then also get the user id and work with your current user & return it from this method.\n }\n}\n\nThe validateViaDeprecatedAuthServer is using the old authentication sytem via the Spring APIs and returns the user id:\nfun validateViaDeprecatedAuthServer(): String {\n val principal = SecurityContextHolder.getContext().authentication as OAuth2Authentication\n requireElseUnauthorized(principal.authorities.map { it.authority }.contains(\"YOUR_ROLE_NAME\"))\n return (principal.principal as Map<*, *>)[\"id\"] as? String ?: throw IllegalArgumentException(\"Cant find id in principal\")\n}\n\nNow we can verify if a given access token from a frontend is valid. The endpoint which generates a new token from the old one is also quite simple:\nfun migrateAuthentication(accessToken: AccessToken) when (origin.accessToken(language)) {\n is BearerAccessToken -> {\n val userId = validateViaDeprecatedAuthServer(role)\n // Now, create that new authentication in your new system and return it.\n createAuthenticationFor()\n }\n is TokenAccessToken -> error(\"You're already migrated\")\n}\n\nCreating authentication in your new system might look like this:\nfun createAuthenticationFor() {\n val refreshToken = UUID.randomUUID().toString()\n val accessToken = UUID.randomUUID().toString()\n\n // SHA256 both of them and save them into your table.\n\n return refreshToken to accessToken\n}\n\nThen you only need some glue for your new 'login' endpoint where you need to check that the email / password matches a given user in your table, create an authentication & return it.\nLogout just deletes the given authentication for your user id + device id.\nAfterthoughts\nI've been using this system now for the last few days and so far it's working nicely. Users are migrating. 
No one seems to be logged out which is exactly what I've wanted.\nOne downside is that since the old authentication system didn't distinguish between devices, I have no way of knowing when a user has successfully migrated. He could be using 1 device or 10. I simply don't know. So both systems will need to live side by side for a rather long time and slowly I'll phase out the old system. In which case, I'll force logout you and you need to re-login (and potentially install a new App version if you haven't updated).\nNote that the new system is limited to my own needs, which is exactly what I want. I'd prefer it to be simple and maintainable than the Spring Blackbox authentication system.\n" ]
[ 1, 0 ]
[]
[]
[ "oauth", "oauth_2.0", "spring", "spring_boot" ]
stackoverflow_0066842856_oauth_oauth_2.0_spring_spring_boot.txt
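To complement the answers above: while the authorization-server side has no drop-in replacement inside Spring Security itself, the client side of the setup (the OAuth2RestTemplate used for service-to-service calls) does have a supported successor in spring-boot-starter-oauth2-client. A hedged Kotlin sketch of that part only; the registration id "backend" and the matching spring.security.oauth2.client.registration/provider properties are assumptions that have to mirror the existing client_credentials entry in oauth_client_details:

import org.springframework.context.annotation.Bean
import org.springframework.context.annotation.Configuration
import org.springframework.security.oauth2.client.AuthorizedClientServiceOAuth2AuthorizedClientManager
import org.springframework.security.oauth2.client.OAuth2AuthorizedClientManager
import org.springframework.security.oauth2.client.OAuth2AuthorizedClientProviderBuilder
import org.springframework.security.oauth2.client.OAuth2AuthorizedClientService
import org.springframework.security.oauth2.client.registration.ClientRegistrationRepository
import org.springframework.security.oauth2.client.web.reactive.function.client.ServletOAuth2AuthorizedClientExchangeFilterFunction
import org.springframework.web.reactive.function.client.WebClient

@Configuration
class OAuthClientConfig {

    @Bean
    fun authorizedClientManager(
        registrations: ClientRegistrationRepository,
        clientService: OAuth2AuthorizedClientService
    ): OAuth2AuthorizedClientManager {
        // client_credentials provider replaces ClientCredentialsResourceDetails.
        val provider = OAuth2AuthorizedClientProviderBuilder.builder()
            .clientCredentials()
            .build()
        val manager = AuthorizedClientServiceOAuth2AuthorizedClientManager(registrations, clientService)
        manager.setAuthorizedClientProvider(provider)
        return manager
    }

    @Bean
    fun oauthWebClient(manager: OAuth2AuthorizedClientManager): WebClient {
        // Filter function attaches (and refreshes) the token on every outgoing request.
        val oauth = ServletOAuth2AuthorizedClientExchangeFilterFunction(manager)
        oauth.setDefaultClientRegistrationId("backend") // assumed registration id
        return WebClient.builder().apply(oauth.oauth2Configuration()).build()
    }
}

Calls made through this WebClient fetch and cache a client_credentials token automatically, replacing the manual ClientCredentialsResourceDetails + OAuth2RestTemplate block in the question; the resource-server side (@EnableResourceServer) migrates separately to Spring Security's oauth2ResourceServer support.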
Q: Can't find all matches with regex Hi, I'm trying to find lines starting with "CGK / WIII" but can only find the first line. What's wrong with my text? (it is rendered from a PDF file) Mytext I am coding in Python to extract data from PDF invoices into a dataframe with the invoice2data package, and I hit an error with the text rendered from one PDF file. First I tried the regex \w{3}\s\/[\s\w{4}]* and found that it only matches 1 line. Then I also tried the fixed text "CGK / WIII", which should find 4 matches, but it does NOT. I think there may be font differences in my text, but I'm not sure.
Can't find all matches with regex
Hi, I'm trying to find lines starting with "CGK / WIII" but can only find the first line. What's wrong with my text? (it is rendered from a PDF file) Mytext I am coding in Python to extract data from PDF invoices into a dataframe with the invoice2data package, and I hit an error with the text rendered from one PDF file. First I tried the regex \w{3}\s\/[\s\w{4}]* and found that it only matches 1 line. Then I also tried the fixed text "CGK / WIII", which should find 4 matches, but it does NOT. I think there may be font differences in my text, but I'm not sure.
[ "When I turn on global - Don't return after the first match in your linked example, it shows 4 matches.\nAlso you can not use quantifiers {4} inside a character set (inside []).\nI'd do it like this:\n\\w{3}\\s/\\s\\w{4}\n" ]
[ 1 ]
[]
[]
[ "regex" ]
stackoverflow_0074665728_regex.txt
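A runnable Python illustration of the fix in the answer above (the sample text is made up; with invoice2data the same pattern would go into the template's fields section):

import re

text = """CGK / WIII 12345
some other line
CGK / WIII 67890
CGK / WIII 11111
CGK / WIII 22222"""

# ^ anchors each line start, re.MULTILINE makes ^ match after every newline,
# and findall returns every occurrence instead of stopping at the first.
pattern = re.compile(r"^\w{3}\s/\s\w{4}", re.MULTILINE)
matches = pattern.findall(text)
print(matches)       # ['CGK / WIII', 'CGK / WIII', 'CGK / WIII', 'CGK / WIII']
print(len(matches))  # 4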
Q: flutter Mapping error: type 'String' is not a subtype of type 'Widget' in type cast I use bottomNavigationBar with two items which I define in a map: final List<Map<String, Object>> _pages = [ { 'page': CategoriesScreen(),//My categories widget 'title': 'Categories',//title of current page (with navigation bar) }, { 'page': FavoritesScreen(),//My favorites widget 'title': 'Your Favorite',//title of current page (with navigation bar) }, ]; When I use it like this: return Scaffold( appBar: AppBar( title: Text(_pages[_selectedPageIndex]['title']), ), body: _pages[_selectedPageIndex]['page'],//<----- ERROR HERE bottomNavigationBar: BottomNavigationBar(.... When I try to run the app from Android studio I got the error: type 'String' is not a subtype of type 'Widget' in type cast Any Idea why? Update 1 (full code): import 'package:flutter/material.dart'; import './favorites_screen.dart'; import './categories_screen.dart'; class TabsScreen extends StatefulWidget { @override _TabsScreenState createState() => _TabsScreenState(); } class _TabsScreenState extends State<TabsScreen> { final List<Map<String, Object>> _pages = [ { 'page': CategoriesScreen(), 'title': 'Categories', }, { 'page': FavoritesScreen(), 'title': 'Your Favorite', }, ]; int _selectedPageIndex = 0; void _selectPage(int index) { setState(() { _selectedPageIndex = index; }); } @override Widget build(BuildContext context) { return Scaffold( appBar: AppBar( title: Text(_pages[_selectedPageIndex]['title']), ), body: _pages[_selectedPageIndex]['page'], bottomNavigationBar: BottomNavigationBar( onTap: _selectPage, backgroundColor: Theme.of(context).primaryColor, unselectedItemColor: Colors.white, selectedItemColor: Theme.of(context).accentColor, currentIndex: _selectedPageIndex, // type: BottomNavigationBarType.fixed, items: [ BottomNavigationBarItem( backgroundColor: Theme.of(context).primaryColor, icon: Icon(Icons.category), title: Text('Categories'), ), BottomNavigationBarItem( backgroundColor: Theme.of(context).primaryColor, icon: Icon(Icons.star), title: Text('Favorites'), ), ], ), ); } } Thank you A: You don't need to put it into a Map. Just use the List<Widget> for _pages and add the title the State widget of the TabsScreen. 
import 'package:flutter/material.dart'; import './favorites_screen.dart'; import './categories_screen.dart'; class TabsScreen extends StatefulWidget { @override _TabsScreenState createState() => _TabsScreenState(); } class _TabsScreenState extends State<TabsScreen> { final List<Widget> _pages = [ CategoriesScreen(), FavoritesScreen(), ]; String title = 'Categories'; int _selectedPageIndex = 0; void _selectPage(int index) { setState(() { _selectedPageIndex = index; if (_selectedPageIndex == 0) { title = 'Categories'; } else if (_selectedPageIndex == 1) { title = 'Your Favorite'; } }); } @override Widget build(BuildContext context) { return Scaffold( appBar: AppBar( title: Text(title), ), body: _pages[_selectedPageIndex], bottomNavigationBar: BottomNavigationBar( onTap: _selectPage, backgroundColor: Theme.of(context).primaryColor, unselectedItemColor: Colors.white, selectedItemColor: Theme.of(context).accentColor, currentIndex: _selectedPageIndex, // type: BottomNavigationBarType.fixed, items: [ BottomNavigationBarItem( backgroundColor: Theme.of(context).primaryColor, icon: Icon(Icons.category), title: Text('Categories'), ), BottomNavigationBarItem( backgroundColor: Theme.of(context).primaryColor, icon: Icon(Icons.star), title: Text('Favorites'), ), ], ), ); } } I'm still looking for a more elegant way of achieving but this approach works. A: following your code this is what you should do instead import 'package:flutter/material.dart'; import './categories_screen.dart'; import './favourite_screen.dart'; class TabsScreeen extends StatefulWidget { @override _TabsScreeenState createState() => _TabsScreeenState(); } class _TabsScreeenState extends State<TabsScreeen> { final List<Map<String, Object>> _pages = [ { "page": CategoriesScreen(), "title": "Categories", }, { "page": FavouriteScreen(), "title": "Favourites", } ]; int _selectedPageIndex = 0; void _selectPage(int index) { setState(() { _selectedPageIndex = index; }); } @override Widget build(BuildContext context) { return Scaffold( appBar: AppBar( title: Text(_pages[_selectedPageIndex]['title']), ), body: _pages[_selectedPageIndex]['page'], bottomNavigationBar: BottomNavigationBar( unselectedItemColor: Colors.black38, selectedItemColor: Colors.white, backgroundColor: Theme.of(context).primaryColor, onTap: _selectPage, currentIndex: _selectedPageIndex, items: [ BottomNavigationBarItem( backgroundColor: Colors.orangeAccent, icon: Icon( Icons.category, ), label: 'Categories', ), BottomNavigationBarItem( backgroundColor: Colors.orangeAccent, icon: Icon( Icons.favorite, ), label: 'Favourites', ) ], ), ); } } A: Try this Instead of typing:final List<Map<String, Object>> _pages do: final List<Map<String, dynamic>> _pages So replace Object with dynamic type.
flutter Mapping error: type 'String' is not a subtype of type 'Widget' in type cast
I use bottomNavigationBar with two items which I define in a map: final List<Map<String, Object>> _pages = [ { 'page': CategoriesScreen(),//My categories widget 'title': 'Categories',//title of current page (with navigation bar) }, { 'page': FavoritesScreen(),//My favorites widget 'title': 'Your Favorite',//title of current page (with navigation bar) }, ]; When I use it like this: return Scaffold( appBar: AppBar( title: Text(_pages[_selectedPageIndex]['title']), ), body: _pages[_selectedPageIndex]['page'],//<----- ERROR HERE bottomNavigationBar: BottomNavigationBar(.... When I try to run the app from Android studio I got the error: type 'String' is not a subtype of type 'Widget' in type cast Any Idea why? Update 1 (full code): import 'package:flutter/material.dart'; import './favorites_screen.dart'; import './categories_screen.dart'; class TabsScreen extends StatefulWidget { @override _TabsScreenState createState() => _TabsScreenState(); } class _TabsScreenState extends State<TabsScreen> { final List<Map<String, Object>> _pages = [ { 'page': CategoriesScreen(), 'title': 'Categories', }, { 'page': FavoritesScreen(), 'title': 'Your Favorite', }, ]; int _selectedPageIndex = 0; void _selectPage(int index) { setState(() { _selectedPageIndex = index; }); } @override Widget build(BuildContext context) { return Scaffold( appBar: AppBar( title: Text(_pages[_selectedPageIndex]['title']), ), body: _pages[_selectedPageIndex]['page'], bottomNavigationBar: BottomNavigationBar( onTap: _selectPage, backgroundColor: Theme.of(context).primaryColor, unselectedItemColor: Colors.white, selectedItemColor: Theme.of(context).accentColor, currentIndex: _selectedPageIndex, // type: BottomNavigationBarType.fixed, items: [ BottomNavigationBarItem( backgroundColor: Theme.of(context).primaryColor, icon: Icon(Icons.category), title: Text('Categories'), ), BottomNavigationBarItem( backgroundColor: Theme.of(context).primaryColor, icon: Icon(Icons.star), title: Text('Favorites'), ), ], ), ); } } Thank you
[ "You don't need to put it into a Map. Just use the List<Widget> for _pages and add the title the State widget of the TabsScreen.\nimport 'package:flutter/material.dart'; \nimport './favorites_screen.dart';\nimport './categories_screen.dart';\n\nclass TabsScreen extends StatefulWidget {\n @override\n _TabsScreenState createState() => _TabsScreenState();\n }\n\n class _TabsScreenState extends State<TabsScreen> {\n\n final List<Widget> _pages = [\n CategoriesScreen(),\n FavoritesScreen(),\n ];\n\n String title = 'Categories';\n\n int _selectedPageIndex = 0;\n\n void _selectPage(int index) {\n setState(() {\n _selectedPageIndex = index;\n if (_selectedPageIndex == 0) {\n title = 'Categories';\n } else if (_selectedPageIndex == 1) {\n title = 'Your Favorite';\n }\n });\n }\n\n @override\n Widget build(BuildContext context) {\n return Scaffold(\n appBar: AppBar(\n title: Text(title),\n ),\n body: _pages[_selectedPageIndex],\n bottomNavigationBar: BottomNavigationBar(\n onTap: _selectPage,\n backgroundColor: Theme.of(context).primaryColor,\n unselectedItemColor: Colors.white,\n selectedItemColor: Theme.of(context).accentColor,\n currentIndex: _selectedPageIndex,\n // type: BottomNavigationBarType.fixed,\n items: [\n BottomNavigationBarItem(\n backgroundColor: Theme.of(context).primaryColor,\n icon: Icon(Icons.category),\n title: Text('Categories'),\n ),\n BottomNavigationBarItem(\n backgroundColor: Theme.of(context).primaryColor,\n icon: Icon(Icons.star),\n title: Text('Favorites'),\n ),\n ],\n ),\n );\n }\n}\n\nI'm still looking for a more elegant way of achieving but this approach works.\n", "following your code this is what you should do instead\nimport 'package:flutter/material.dart';\n\nimport './categories_screen.dart';\nimport './favourite_screen.dart';\n\nclass TabsScreeen extends StatefulWidget {\n @override\n _TabsScreeenState createState() => _TabsScreeenState();\n}\n\nclass _TabsScreeenState extends State<TabsScreeen> {\n final List<Map<String, Object>> _pages = [\n {\n \"page\": CategoriesScreen(),\n \"title\": \"Categories\",\n },\n {\n \"page\": FavouriteScreen(),\n \"title\": \"Favourites\",\n }\n ];\n\n int _selectedPageIndex = 0;\n\n void _selectPage(int index) {\n setState(() {\n _selectedPageIndex = index;\n });\n }\n\n @override\n Widget build(BuildContext context) {\n return Scaffold(\n appBar: AppBar(\n title: Text(_pages[_selectedPageIndex]['title']),\n ),\n body: _pages[_selectedPageIndex]['page'],\n bottomNavigationBar: BottomNavigationBar(\n unselectedItemColor: Colors.black38,\n selectedItemColor: Colors.white,\n backgroundColor: Theme.of(context).primaryColor,\n onTap: _selectPage,\n currentIndex: _selectedPageIndex,\n items: [\n BottomNavigationBarItem(\n backgroundColor: Colors.orangeAccent,\n icon: Icon(\n Icons.category,\n ),\n label: 'Categories',\n ),\n BottomNavigationBarItem(\n backgroundColor: Colors.orangeAccent,\n icon: Icon(\n Icons.favorite,\n ),\n label: 'Favourites',\n )\n ],\n ),\n );\n }\n}\n\n", "Try this\nInstead of typing:final List<Map<String, Object>> _pages\ndo: final List<Map<String, dynamic>> _pages\nSo replace Object with dynamic type.\n" ]
[ 2, 0, 0 ]
[ "In your list mapping, change object to dynamic(your value can take any form)\nlike...\nfinal List<Map<String, dynamic>> _pages\n" ]
[ -1 ]
[ "flutter" ]
stackoverflow_0063301583_flutter.txt
Q: Async: how to keep using the same future in a loop with select (no_std environment)? I have two async functions: get_message and get_event. I'd like to perform an action whenever a message arrives or an event comes and do that forever in an infinite loop. The simplified setup looks like this: use futures::{future::select, future::Either, pin_mut}; impl MsgReceiver { async fn get_message(&mut self) -> Message { /* ... */ } } impl EventListener { async fn get_event(&mut self) -> Event { /* ... */ } } async fn eternal_task(receiver: MsgReceiver, listener: EventListener) -> ! { let get_msg_fut = receiver.get_message(); pin_mut!(get_msg_fut); loop { let get_event_fut = listener.get_event(); pin_mut!(get_event_fut); match select(get_event_fut, get_msg_fut).await { Either::Left((ev, r_get_msg_fut)) => { /* react to the event */ // r_get_msg_fut is not done, how to reuse it in the next iteration? } Either::Right((msg, r_get_event_fut)) => { /* react to the message */ // it's fine to drop get_event_fut here // the following line causes a double-mut-borrow error on receiver, // despite receiver isn't borrowed anymore (the old future is completed and dropped) let new_future = receiver.get_message(); } }; } } I have three major questions here: When an event comes first, how to tell rust that I want to reuse the incomplete get_message future on the next loop iteration? When a message comes first, how to construct a new future without a borrow error? When (2) is solved, how to put the new future into the same pinned memory location and use it on the next loop iteration? A: I had success using this, but could not get rid of the Box::pin use futures::{future::select, future::Either, pin_mut}; use std::sync::Mutex; #[derive(Debug)] struct MsgReceiver; #[derive(Debug)] struct EventListener; #[derive(Debug)] struct Message; #[derive(Debug)] struct Event; impl MsgReceiver { async fn get_message(&mut self) -> Message { Message } } impl EventListener { async fn get_event(&mut self) -> Event { Event } } async fn eternal_task(receiver: MsgReceiver, mut listener: EventListener) -> ! { let receiver = Mutex::new(receiver); let mut f = None; loop { let get_msg_fut = match f.take() { None => { let mut l = receiver.lock(); Box::pin(async move { l.get_message().await }) } Some(f) => f, }; let get_event_fut = listener.get_event(); pin_mut!(get_event_fut); match select(get_event_fut, get_msg_fut).await { Either::Left((ev, r_get_msg_fut)) => { /* react to the event */ // store the future for next iteration f = Some(r_get_msg_fut); } Either::Right((msg, r_get_event_fut)) => { /* react to the message */ } }; } } #[tokio::main] async fn main() { eternal_task(MsgReceiver, EventListener).await; }
Async: how to keep using the same future in a loop with select (no_std environment)?
I have two async functions: get_message and get_event. I'd like to perform an action whenever a message arrives or an event comes and do that forever in an infinite loop. The simplified setup looks like this: use futures::{future::select, future::Either, pin_mut}; impl MsgReceiver { async fn get_message(&mut self) -> Message { /* ... */ } } impl EventListener { async fn get_event(&mut self) -> Event { /* ... */ } } async fn eternal_task(receiver: MsgReceiver, listener: EventListener) -> ! { let get_msg_fut = receiver.get_message(); pin_mut!(get_msg_fut); loop { let get_event_fut = listener.get_event(); pin_mut!(get_event_fut); match select(get_event_fut, get_msg_fut).await { Either::Left((ev, r_get_msg_fut)) => { /* react to the event */ // r_get_msg_fut is not done, how to reuse it in the next iteration? } Either::Right((msg, r_get_event_fut)) => { /* react to the message */ // it's fine to drop get_event_fut here // the following line causes a double-mut-borrow error on receiver, // despite receiver isn't borrowed anymore (the old future is completed and dropped) let new_future = receiver.get_message(); } }; } } I have three major questions here: When an event comes first, how to tell rust that I want to reuse the incomplete get_message future on the next loop iteration? When a message comes first, how to construct a new future without a borrow error? When (2) is solved, how to put the new future into the same pinned memory location and use it on the next loop iteration?
[ "I had success using this, but could not get rid of the Box::pin\nuse futures::{future::select, future::Either, pin_mut};\nuse std::sync::Mutex;\n#[derive(Debug)]\nstruct MsgReceiver;\n#[derive(Debug)]\nstruct EventListener;\n#[derive(Debug)]\nstruct Message;\n#[derive(Debug)]\nstruct Event;\nimpl MsgReceiver {\n async fn get_message(&mut self) -> Message {\n Message\n }\n}\n\nimpl EventListener {\n async fn get_event(&mut self) -> Event { \n Event\n }\n\n}\n\nasync fn eternal_task(receiver: MsgReceiver, mut listener: EventListener) -> ! {\n let receiver = Mutex::new(receiver);\n let mut f = None;\n loop {\n let get_msg_fut = match f.take() {\n None => {\n let mut l = receiver.lock();\n Box::pin(async move {\n l.get_message().await\n })\n }\n Some(f) => f,\n\n };\n let get_event_fut = listener.get_event();\n pin_mut!(get_event_fut);\n\n match select(get_event_fut, get_msg_fut).await {\n Either::Left((ev, r_get_msg_fut)) => {\n /* react to the event */\n // store the future for next iteration\n f = Some(r_get_msg_fut);\n }\n Either::Right((msg, r_get_event_fut)) => {\n /* react to the message */\n }\n };\n }\n}\n\n#[tokio::main]\nasync fn main() {\n eternal_task(MsgReceiver, EventListener).await;\n}\n\n" ]
[ 0 ]
[]
[]
[ "async_await", "future", "rust", "rust_futures" ]
stackoverflow_0074662326_async_await_future_rust_rust_futures.txt
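A note on the Rust entry above: since the question targets no_std, the Box::pin used in the answer may not be available. The sketch below is my own, not from the thread — it keeps the message future pinned on the stack and only rebuilds it after it completes, re-borrowing it with `as_mut()` inside an inner event loop. The stub types mirror the ones in the answer and are placeholders.

```rust
use futures::{future::select, future::Either, pin_mut};

struct MsgReceiver;
struct EventListener;
struct Message;
struct Event;

impl MsgReceiver {
    async fn get_message(&mut self) -> Message { Message }
}
impl EventListener {
    async fn get_event(&mut self) -> Event { Event }
}

async fn eternal_task(mut receiver: MsgReceiver, mut listener: EventListener) -> ! {
    loop {
        // A fresh message future is built only after the previous one completed,
        // so `receiver` is never mutably borrowed twice.
        let get_msg_fut = receiver.get_message();
        pin_mut!(get_msg_fut); // pinned on the stack, no allocation

        // Handle events against the *same* pinned message future.
        let _msg = loop {
            let get_event_fut = listener.get_event();
            pin_mut!(get_event_fut);

            // `as_mut()` re-borrows the pinned future, so it survives the select
            // and can be polled again on the next inner iteration.
            match select(get_event_fut, get_msg_fut.as_mut()).await {
                Either::Left((_ev, _)) => { /* react to the event */ }
                Either::Right((msg, _)) => break msg,
            }
        };
        /* react to the message; the outer loop then builds a new one */
    }
}
```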
Q: Is SQL really a programming language after all? I'm presenting a portfolio of the languages I've mastered, but here's a thing I shouldn't be asking really; is SQL a literal programming language or is it not? A lot of people say it is definitely one, others completely disagree. A: SQL is considered to be a Fourth Generation computer language. The first three are basically: Machine code. Assembly code. Common general-purpose languages, such as C, C++, Java, Python, and so on. So, based on a commonly used definition in computer science it is a programming language. And SQL is a prime example of an entire class of languages (and perhaps the most widely used of that class). A related question is whether SQL is Turing-complete -- that is can SQL emulate a Turing Machine. I should emphasize that this is really a theoretical question: no finite machine is really Turing complete. I actually never studied this in depth, but I have read that the original SQL was Turing Incomplete. Only the addition of recursive CTEs makes it complete (well, I guess recursive user-defined functions might also serve this purpose).
Is SQL really a programming language after all?
I'm presenting a portfolio of the languages I've mastered, but here's a thing I shouldn't be asking really; is SQL a literal programming language or is it not? A lot of people say it is definitely one, others completely disagree.
[ "SQL is considered to be a Fourth Generation computer language. The first three are basically:\n\nMachine code.\nAssembly code.\nCommon general-purpose languages, such as C, C++, Java, Python, and so on.\n\nSo, based on a commonly used definition in computer science it is a programming language. And SQL is a prime example of an entire class of languages (and perhaps the most widely used of that class).\nA related question is whether SQL is Turing-complete -- that is can SQL emulate a Turing Machine. I should emphasize that this is really a theoretical question: no finite machine is really Turing complete.\nI actually never studied this in depth, but I have read that the original SQL was Turing Incomplete. Only the addition of recursive CTEs makes it complete (well, I guess recursive user-defined functions might also serve this purpose).\n" ]
[ 2 ]
[ "SQL is a Language! A Query Language!\nMore than a programming Language, It is a Query Language.\nEven for asking a Query of your concern for an Answer/clarity, we need a language. The platform we use for asking(input) to existing data is MySQL.\n" ]
[ -1 ]
[ "mysql", "sql" ]
stackoverflow_0061237229_mysql_sql.txt
Q: How do I secure my h2-console using Lambda DSL? It seems spring recommends using Lambda DSL for Security Configuration. Without using lambdas, I know how to secure my h2-console. public SecurityFilterChain filterChain(HttpSecurity http) throws Exception { http .authorizeRequests() .antMatchers("/h2-console/**").authenticated() .anyRequest().authenticated() .and().formLogin() .and().csrf().ignoringAntMatchers("/h2-console/**") .and().headers().frameOptions().sameOrigin(); return http.build(); Following the tutorial at the beginning, I tried the following code http .authorizeRequests((authz) -> authz .antMatchers("/h2-console/**").authenticated() .anyRequest().authenticated() ) .formLogin() .csrf().ignoringAntMatchers("/h2-console/**") .headers().frameOptions().sameOrigin(); and got this error The method csrf() is undefined for the type FormLoginConfigurer I also tried lots of other combinations, such as http .authorizeRequests(a -> a.anyRequest().permitAll()) .headers().frameOptions().sameOrigin(); or http .authorizeRequests(a -> a.anyRequest().permitAll()) .csrf(c - c.ignoringAntMatchers("/h2-console/**")); or http .authorizeRequests(a -> a.anyRequest().permitAll()) .csrf().ignoringAntMatchers("/h2-console/**") and more and more, none of them works. How do I secure my h2-console using Lambda DSL A: TL;DR: Use the same lambda syntax as for authorizeRequests: http.csrf(csrf -> csrf.ignoringAntMatchers("/h2-console/**")) Details: You are mixing old syntax (Spring Security 5) with the new (Spring Security 6) syntax. Old syntax: http.authorizeRequests().antMatchers("...").permitAll().and().csrf().ignoringAntMantchers("...") is replaced with http.authorizeRequests(a -> a.requestMatchers("...")).csrf(csrf -> ignoringAntMatchers("...))
How do I secure my h2-console using Lambda DSL?
It seems spring recommends using Lambda DSL for Security Configuration. Without using lambdas, I know how to secure my h2-console. public SecurityFilterChain filterChain(HttpSecurity http) throws Exception { http .authorizeRequests() .antMatchers("/h2-console/**").authenticated() .anyRequest().authenticated() .and().formLogin() .and().csrf().ignoringAntMatchers("/h2-console/**") .and().headers().frameOptions().sameOrigin(); return http.build(); Following the tutorial at the beginning, I tried the following code http .authorizeRequests((authz) -> authz .antMatchers("/h2-console/**").authenticated() .anyRequest().authenticated() ) .formLogin() .csrf().ignoringAntMatchers("/h2-console/**") .headers().frameOptions().sameOrigin(); and got this error The method csrf() is undefined for the type FormLoginConfigurer I also tried lots of other combinations, such as http .authorizeRequests(a -> a.anyRequest().permitAll()) .headers().frameOptions().sameOrigin(); or http .authorizeRequests(a -> a.anyRequest().permitAll()) .csrf(c - c.ignoringAntMatchers("/h2-console/**")); or http .authorizeRequests(a -> a.anyRequest().permitAll()) .csrf().ignoringAntMatchers("/h2-console/**") and more and more, none of them works. How do I secure my h2-console using Lambda DSL
[ "TL;DR: Use the same lambda syntax as for authorizeRequests:\nhttp.csrf(csrf -> csrf.ignoringAntMatchers(\"/h2-console/**\"))\n\nDetails:\nYou are mixing old syntax (Spring Security 5) with the new (Spring Security 6) syntax.\nOld syntax: http.authorizeRequests().antMatchers(\"...\").permitAll().and().csrf().ignoringAntMantchers(\"...\") is replaced with http.authorizeRequests(a -> a.requestMatchers(\"...\")).csrf(csrf -> ignoringAntMatchers(\"...))\n" ]
[ 0 ]
[]
[]
[ "java", "spring", "spring_security" ]
stackoverflow_0074587082_java_spring_spring_security.txt
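For the Spring Security entry above, a fuller sketch of the original filter chain rewritten entirely in the lambda DSL may help. It assumes Spring Security 5.2–5.8, where `antMatchers` and `ignoringAntMatchers` (used in the question) still exist; on 6.x those calls become `requestMatchers`/`ignoringRequestMatchers` and `authorizeRequests` becomes `authorizeHttpRequests`.

```java
// Fragment of a @Configuration class; Spring Security 5.2–5.8 assumed.
import static org.springframework.security.config.Customizer.withDefaults;

@Bean
public SecurityFilterChain filterChain(HttpSecurity http) throws Exception {
    http
        .authorizeRequests(auth -> auth
            .antMatchers("/h2-console/**").authenticated()
            .anyRequest().authenticated())
        .formLogin(withDefaults())
        .csrf(csrf -> csrf.ignoringAntMatchers("/h2-console/**"))
        .headers(headers -> headers.frameOptions().sameOrigin());
    return http.build();
}
```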
Q: Nextjs Nested Dynamic Routes (catch all?) I have searched many articles and posts regarding this topic, but they don't seem to make a lot of sense to me. I am new to Nextjs and dynamic routes in general, so I'm hoping someone can explain this clearly to me for my situation. My page structure looks something like below. And what I want to do is combine the paths into one. So I would have something like this: -Page --Category ---[slug].js --Post ---[slug].js I understand how this structure works. I would end up with two different routes, one that would be domain/category/[slug], and one that would be domain/post/[slug]. This is not what I want. I want it to be domain/category/[slug]/post/[slug]. I have also tried nesting the folder structure like below, though I'm not sure if that would work, or if I'm just not doing is correctly. Not to mention many variations and structures. I'm tired of beating my head against the wall. -Page --Category ---index.js ---[slug].js ---Post ----[slug].js I'm fairly certain that the answer lies with using a catch all like [...params], I'm just not sure of the correct way to do, that makes sense to me. An example of the code I'm using can be found here:https://github.com/adrianhajdin/project_graphql_blog/tree/main/pages My structure is exactly the same, just folder and names may be different. Edit: I should also note, I have no issue with changing the current site folder structure to accomplish this. I just don't want to change my CMS structure I currently have set up. Edit 2: Based on feedback from @maxeth my page/[category]/index.js page looks like this minus the HTML: export async function getStaticProps({ params }) { const posts = await getCategoryPost(params.category); return { props: { posts }, revalidate: 60 * 20, }; } export async function getStaticPaths() { const categories = await getCategories(); return { paths: categories.map((el) => ({ params: { category: el } })), fallback: false, }; } Although I'm not sure if getStaticProps/const posts = await getCategoryPost(params.category) is correct. Should it be (params.category)? It should also be noted that getCategoryPost and getCategories queries are being imported if that wasn't obvious already. I'm sure it is though. They can be found here: https://github.com/adrianhajdin/project_graphql_blog/blob/main/services/index.js I'm not sure if the slug in the CMS also needs to be renamed to reflect the [category] directory. Currently, it's just named Slug. Edit 3: The path up through pages/category/[category] works fine. It's when I get to pages/category/[category]/post/[slug] that I get a 404. 
Post/[slug] getstaticprops and getstaticpaths looks like the following: export async function getStaticProps({ params }) { const data = await getPostDetails(`${params.category}/${params.slug}`); return { props: { post: data, }, }; } export async function getStaticPaths() { const posts = await getPosts(); return { paths: posts.map(({ node: { slug } }) => ({ params: { category: slug.category, slug: slug.slug }, })), fallback: true, }; } The Post queries look like this: export const getPosts = async () => { const query = gql` query MyQuery { postsConnection(orderBy: createdAt_DESC) { edges { cursor node { content { html raw text } author { bio name id photo { url } } createdAt slug title excerpt featuredImage { url } categories { name slug } } } } } `; const result = await request(graphqlAPI, query); return result.postsConnection.edges; }; export const getPostDetails = async (slug) => { const query = gql` query GetPostDetails($slug: String!) { post(where: { slug: $slug }) { content { html text raw } title excerpt featuredImage { url } author { name bio photo { url } } createdAt slug content { html raw text } categories { name slug } } } `; const result = await request(graphqlAPI, query, { slug }); return result.post; }; What the heck am I missing? It has to be something really simple. I'm just not smart enough to see it. Edit 4: The 404 was caused by a casing mistake. Where I was typing /post/[slug].js instead of /Post/, since my directory was named Post. A: You can achieve this with the following structure: └── pages └── category ├── index.js // this is the "overview page" of all categories └── [category] └── index.js // this is the "overview page" of a specific category └── post └── [slug].js // this is an actual article page for that specific category Note that you cannot use the dynamic-route name [slug] twice inside the folder category, because we will have to return objects with the path names as keys, e.g. [category] -> {category: ...}, [slug] -> {slug: ...}, so they have to be unique for next to tell for which folder you are returning the slugs inside getStaticPaths. Now if you use getStaticPaths, you need to return the specific slug for the page, as well as the "slugs" of the potential parent page inside params: {...} in order for the pages to be pre-generated. So inside category/[category]/index.js, you would need to do something like this: export async function getStaticPaths(context){ // [...] fetch all categories from CMS const allCategories = await getAllCategoriesFromCMS(); // allCategories includes all of the categories from your CMS: // e.g. ["category-A", "category-B", "category-C", ...] return { paths: allCategories.map(el => ({ params: { category: el } })) }; } Which is then accessible within getStaticProps like this: export async function getStaticProps({params}){ console.log(params.category) // can be any of the dynamic categories you fetched and returned inside getStaticPaths } ... And the same concept applies to category/[category]/post/[slug].js, just that you now have to return/handle 2 "slugs" [category] and [slug] inside getStaticPaths and getStaticProps: export async function getStaticPaths(context){ // [...] fetch all posts and the specific categories they belong to from CMS const allPosts = await getAllPostsFromCMS(); // allPosts includes all of the posts/articles from your CMS, assuming the posts have a relationship with their respective category: /* e.g. 
[ {category: "category-A", postSlug:"some-article-for-category-A", author: "some author", articleContent: "..."}, {category: "category-B", postSlug:"some-article-for-category-B", author: "some author", articleContent: "..."}, ] */ return { paths: [ // once again, we need to call the keys 'category' and 'slug' because we called the dynamic routes '[category]' and '[slug].js' allPosts.map(el => ({ params: { category: el.category, slug: el.postSlug } })) ], }; } and in getStaticProps: export async function getStaticProps({ params }) { console.log(params); // You are guaranteed to have matching categories and slugs in here, for example // {category: category-A, slug:some-article-for-category-A} // or {category: category-B, slug:some-article-for-category-B} // You can use these params to fetch the article entry from the CMS const article = await fetchFromCMS(`${params.category}/${params.slug}`); // [...] } Now, for example, one of the articles will be accessible under localhost:3000/category/category-A/post/some-post-for-category-A
Nextjs Nested Dynamic Routes (catch all?)
I have searched many articles and posts regarding this topic, but they don't seem to make a lot of sense to me. I am new to Nextjs and dynamic routes in general, so I'm hoping someone can explain this clearly to me for my situation. My page structure looks something like below. And what I want to do is combine the paths into one. So I would have something like this: -Page --Category ---[slug].js --Post ---[slug].js I understand how this structure works. I would end up with two different routes, one that would be domain/category/[slug], and one that would be domain/post/[slug]. This is not what I want. I want it to be domain/category/[slug]/post/[slug]. I have also tried nesting the folder structure like below, though I'm not sure if that would work, or if I'm just not doing is correctly. Not to mention many variations and structures. I'm tired of beating my head against the wall. -Page --Category ---index.js ---[slug].js ---Post ----[slug].js I'm fairly certain that the answer lies with using a catch all like [...params], I'm just not sure of the correct way to do, that makes sense to me. An example of the code I'm using can be found here:https://github.com/adrianhajdin/project_graphql_blog/tree/main/pages My structure is exactly the same, just folder and names may be different. Edit: I should also note, I have no issue with changing the current site folder structure to accomplish this. I just don't want to change my CMS structure I currently have set up. Edit 2: Based on feedback from @maxeth my page/[category]/index.js page looks like this minus the HTML: export async function getStaticProps({ params }) { const posts = await getCategoryPost(params.category); return { props: { posts }, revalidate: 60 * 20, }; } export async function getStaticPaths() { const categories = await getCategories(); return { paths: categories.map((el) => ({ params: { category: el } })), fallback: false, }; } Although I'm not sure if getStaticProps/const posts = await getCategoryPost(params.category) is correct. Should it be (params.category)? It should also be noted that getCategoryPost and getCategories queries are being imported if that wasn't obvious already. I'm sure it is though. They can be found here: https://github.com/adrianhajdin/project_graphql_blog/blob/main/services/index.js I'm not sure if the slug in the CMS also needs to be renamed to reflect the [category] directory. Currently, it's just named Slug. Edit 3: The path up through pages/category/[category] works fine. It's when I get to pages/category/[category]/post/[slug] that I get a 404. Post/[slug] getstaticprops and getstaticpaths looks like the following: export async function getStaticProps({ params }) { const data = await getPostDetails(`${params.category}/${params.slug}`); return { props: { post: data, }, }; } export async function getStaticPaths() { const posts = await getPosts(); return { paths: posts.map(({ node: { slug } }) => ({ params: { category: slug.category, slug: slug.slug }, })), fallback: true, }; } The Post queries look like this: export const getPosts = async () => { const query = gql` query MyQuery { postsConnection(orderBy: createdAt_DESC) { edges { cursor node { content { html raw text } author { bio name id photo { url } } createdAt slug title excerpt featuredImage { url } categories { name slug } } } } } `; const result = await request(graphqlAPI, query); return result.postsConnection.edges; }; export const getPostDetails = async (slug) => { const query = gql` query GetPostDetails($slug: String!) 
{ post(where: { slug: $slug }) { content { html text raw } title excerpt featuredImage { url } author { name bio photo { url } } createdAt slug content { html raw text } categories { name slug } } } `; const result = await request(graphqlAPI, query, { slug }); return result.post; }; What the heck am I missing? It has to be something really simple. I'm just not smart enough to see it. Edit 4: The 404 was caused by a casing mistake. Where I was typing /post/[slug].js instead of /Post/, since my directory was named Post.
[ "You can achieve this with the following structure:\n└── pages\n └── category \n ├── index.js // this is the \"overview page\" of all categories\n └── [category] \n └── index.js // this is the \"overview page\" of a specific category\n └── post\n └── [slug].js // this is an actual article page for that specific category\n\nNote that you cannot use the dynamic-route name [slug] twice inside the folder category, because we will have to return objects with the path names as keys, e.g. [category] -> {category: ...}, [slug] -> {slug: ...}, so they have to be unique for next to tell for which folder you are returning the slugs inside getStaticPaths.\nNow if you use getStaticPaths, you need to return the specific slug for the page, as well as the \"slugs\" of the potential parent page inside params: {...} in order for the pages to be pre-generated.\nSo inside category/[category]/index.js, you would need to do something like this:\nexport async function getStaticPaths(context){\n // [...] fetch all categories from CMS\n\n const allCategories = await getAllCategoriesFromCMS(); \n // allCategories includes all of the categories from your CMS:\n // e.g. [\"category-A\", \"category-B\", \"category-C\", ...] \n\nreturn {\n paths: allCategories.map(el => ({ params: { category: el } }))\n};\n}\n\nWhich is then accessible within getStaticProps like this:\nexport async function getStaticProps({params}){\n console.log(params.category) // can be any of the dynamic categories you fetched and returned inside getStaticPaths\n}\n\n...\nAnd the same concept applies to category/[category]/post/[slug].js, just that you now have to return/handle 2 \"slugs\" [category] and [slug] inside getStaticPaths and getStaticProps:\nexport async function getStaticPaths(context){\n // [...] fetch all posts and the specific categories they belong to from CMS\n\n const allPosts = await getAllPostsFromCMS(); \n // allPosts includes all of the posts/articles from your CMS, assuming the posts have a relationship with their respective category:\n /* e.g. [ \n{category: \"category-A\", postSlug:\"some-article-for-category-A\", author: \"some author\", articleContent: \"...\"}, \n{category: \"category-B\", postSlug:\"some-article-for-category-B\", author: \"some author\", articleContent: \"...\"}, \n] \n*/\n\nreturn {\n paths: [ // once again, we need to call the keys 'category' and 'slug' because we called the dynamic routes '[category]' and '[slug].js'\n allPosts.map(el => ({ params: { category: el.category, slug: el.postSlug } }))\n ],\n};\n}\n\nand in getStaticProps:\nexport async function getStaticProps({ params }) {\n console.log(params);\n // You are guaranteed to have matching categories and slugs in here, for example\n // {category: category-A, slug:some-article-for-category-A}\n // or {category: category-B, slug:some-article-for-category-B}\n\n // You can use these params to fetch the article entry from the CMS\n const article = await fetchFromCMS(`${params.category}/${params.slug}`);\n\n // [...]\n}\n\nNow, for example, one of the articles will be accessible under localhost:3000/category/category-A/post/some-post-for-category-A\n" ]
[ 0 ]
[]
[]
[ "dynamic", "next", "next.js", "routes" ]
stackoverflow_0074663384_dynamic_next_next.js_routes.txt
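Since the Next.js question's title also asks about a catch-all, here is a hedged sketch of that variant. `getAllPostsFromCMS` and `fetchFromCMS` are the same assumed helpers the answer used, not real APIs, and would need to be imported.

```js
// pages/category/[...slug].js — optional catch-all alternative (sketch)
export async function getStaticPaths() {
  const allPosts = await getAllPostsFromCMS(); // assumed helper, as in the answer above

  return {
    // each path becomes /category/<category>/post/<postSlug>
    paths: allPosts.map((el) => ({
      params: { slug: [el.category, 'post', el.postSlug] },
    })),
    fallback: false,
  };
}

export async function getStaticProps({ params }) {
  // params.slug is the array of URL segments, e.g. ['category-A', 'post', 'some-article']
  const [category, , postSlug] = params.slug;
  const article = await fetchFromCMS(`${category}/${postSlug}`); // assumed helper
  return { props: { article } };
}
```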
Q: Get the absolute value of a number in Javascript I want to get the absolute value of a number in JavaScript. That is, drop the sign. I know mathematically I can do this by squaring the number then taking the square root, but I also know that this is horribly inefficient. x = -25 x = x * x x = Math.sqrt(x) console.log(x) Is there a way in JavaScript to simply drop the sign of a number that is more efficient than the mathematical approach? A: You mean like getting the absolute value of a number? The Math.abs javascript function is designed exactly for this purpose. var x = -25; x = Math.abs(x); // x would now be 25 console.log(x); Here are some test cases from the documentation: Math.abs('-1'); // 1 Math.abs(-2); // 2 Math.abs(null); // 0 Math.abs("string"); // NaN Math.abs(); // NaN A: Here is a fast way to obtain the absolute value of a number. It's applicable on every language: x = -25; console.log((x ^ (x >> 31)) - (x >> 31)); A: If you want to see how JavaScript implements this feature under the hood you can check out this post. Blog Post Here is the implementation based on the chromium source code. function MathAbs(x) { x = +x; return (x > 0) ? x : 0 - x; } console.log(MathAbs(-25)); A: I think you are looking for Math.abs(x) https://developer.mozilla.org/en/JavaScript/Reference/Global_Objects/Math/abs A: Alternative solution Math.max(x,-x) let abs = x => Math.max(x,-x); console.log( abs(24), abs(-24) ); Also the Rick answer can be shorted to x>0 ? x : -x A: to simply drop the sign of a number var x = -25; +`${x}`.replace('-', '');
Get the absolute value of a number in Javascript
I want to get the absolute value of a number in JavaScript. That is, drop the sign. I know mathematically I can do this by squaring the number then taking the square root, but I also know that this is horribly inefficient. x = -25 x = x * x x = Math.sqrt(x) console.log(x) Is there a way in JavaScript to simply drop the sign of a number that is more efficient than the mathematical approach?
[ "You mean like getting the absolute value of a number? The Math.abs javascript function is designed exactly for this purpose.\n\n\nvar x = -25;\nx = Math.abs(x); // x would now be 25 \nconsole.log(x);\n\n\n\nHere are some test cases from the documentation:\nMath.abs('-1'); // 1\nMath.abs(-2); // 2\nMath.abs(null); // 0\nMath.abs(\"string\"); // NaN\nMath.abs(); // NaN\n\n", "Here is a fast way to obtain the absolute value of a number. It's applicable on every language:\n\n\nx = -25;\nconsole.log((x ^ (x >> 31)) - (x >> 31));\n\n\n\n", "If you want to see how JavaScript implements this feature under the hood you can check out this post.\nBlog Post\nHere is the implementation based on the chromium source code.\n\n\nfunction MathAbs(x) {\n x = +x;\n return (x > 0) ? x : 0 - x;\n}\n\nconsole.log(MathAbs(-25));\n\n\n\n", "I think you are looking for Math.abs(x)\nhttps://developer.mozilla.org/en/JavaScript/Reference/Global_Objects/Math/abs\n", "Alternative solution\nMath.max(x,-x)\n\n\n\nlet abs = x => Math.max(x,-x);\n\nconsole.log( abs(24), abs(-24) );\n\n\n\nAlso the Rick answer can be shorted to x>0 ? x : -x\n", "\nto simply drop the sign of a number\n\nvar x = -25;\n+`${x}`.replace('-', '');\n\n" ]
[ 121, 16, 9, 7, 3, 0 ]
[]
[]
[ "javascript" ]
stackoverflow_0009353929_javascript.txt
Q: Get list of Google Fonts I want to get list of google web fonts in select box to select a font. I am trying following function, but it gives the error. Code: function get_google_fonts() { $url = "https://www.googleapis.com/webfonts/v1/webfonts?sort=alpha"; $result = json_response( $url ); $font_list = array(); foreach ( $result->items as $font ) { $font_list[] .= $font->family; } return $font_list; } function json_response( $url ) { $raw = file_get_contents( $url, 0, null, null ); $decoded = json_decode( $raw ); return $decoded; } Error: Warning: file_get_contents(): Unable to find the wrapper "https" - did you forget to enable it when you configured PHP. If I change the https to http, I get this error: file_get_contents(http://www.googleapis.com/webfonts/v1/webfonts?sort=alpha): failed to open stream: HTTP request failed! HTTP/1.0 403 Forbidden in I guess this is because of PHP settings on my server, which I am unable to change. So, is there any alternative way to get the font list from Google? Thanks. A: To allow https wraper you must have the php_openssl extension and enable allow_url_include You can edit you php.ini to set these values : extension=php_openssl.dll allow_url_include = On If these values doesn't exist add these lines . If you can't edit your php.ini file, then you can set in on the PHP file : ini_set('allow_url_fopen', 'on'); ini_set('allow_url_include', 'on'); You could also try using CURL instead of file_get_contents. CURL is much faster than file_get_contents $url = "https://www.googleapis.com/webfonts/v1/webfonts?sort=alpha"; $ch = curl_init(); curl_setopt($ch, CURLOPT_SSL_VERIFYPEER, FALSE); curl_setopt($ch, CURLOPT_HEADER, false); curl_setopt($ch, CURLOPT_FOLLOWLOCATION, true); curl_setopt($ch, CURLOPT_URL, $url); curl_setopt($ch, CURLOPT_REFERER, $url); curl_setopt($ch, CURLOPT_RETURNTRANSFER, TRUE); $result = curl_exec($ch); curl_close($ch); echo $result; Hope this helps. :) A: I think Webfonts deny access to non-browser agents. However you can use the Webfont API. In fact you need an API key, so once you get it, you'll use URLs like https://www.googleapis.com/webfonts/v1/webfonts?key=YOUR-API-KEY It's all documented in the provided link. A: Google requires SSL so make sure thats enabled on your server. To fix this error, go to your php.ini file, find the line ;sslextension=php_openssl.dll and remove the semicolon. http://php.net/manual/en/function.json-decode.php https://developers.google.com/webfonts/docs/developer_api var_dump(json_decode(file_get_contents('https://www.googleapis.com/webfonts/v1/webfonts'))); A: I tried this code and its works $url = 'https://www.googleapis.com/webfonts/v1/webfonts?key='.$google_api_key; $ch = curl_init(); curl_setopt($ch, CURLOPT_SSL_VERIFYPEER, false); curl_setopt($ch, CURLOPT_RETURNTRANSFER, true); curl_setopt($ch, CURLOPT_URL,$url); $result=curl_exec($ch); curl_close($ch); $fonts = json_decode($result, true); $asd =$fonts['items']; $fonts_array = array(); foreach($asd as $as){ $fonts_array[] = $as['family']; } var_dump($fonts_array);
Get list of Google Fonts
I want to get list of google web fonts in select box to select a font. I am trying following function, but it gives the error. Code: function get_google_fonts() { $url = "https://www.googleapis.com/webfonts/v1/webfonts?sort=alpha"; $result = json_response( $url ); $font_list = array(); foreach ( $result->items as $font ) { $font_list[] .= $font->family; } return $font_list; } function json_response( $url ) { $raw = file_get_contents( $url, 0, null, null ); $decoded = json_decode( $raw ); return $decoded; } Error: Warning: file_get_contents(): Unable to find the wrapper "https" - did you forget to enable it when you configured PHP. If I change the https to http, I get this error: file_get_contents(http://www.googleapis.com/webfonts/v1/webfonts?sort=alpha): failed to open stream: HTTP request failed! HTTP/1.0 403 Forbidden in I guess this is because of PHP settings on my server, which I am unable to change. So, is there any alternative way to get the font list from Google? Thanks.
[ "To allow https wraper you must have the php_openssl extension and enable allow_url_include \nYou can edit you php.ini to set these values :\nextension=php_openssl.dll\n\nallow_url_include = On\n\nIf these values doesn't exist add these lines .\nIf you can't edit your php.ini file, then you can set in on the PHP file :\nini_set('allow_url_fopen', 'on');\nini_set('allow_url_include', 'on');\n\nYou could also try using CURL instead of file_get_contents. CURL is much faster than file_get_contents\n$url = \"https://www.googleapis.com/webfonts/v1/webfonts?sort=alpha\";\n$ch = curl_init();\ncurl_setopt($ch, CURLOPT_SSL_VERIFYPEER, FALSE);\ncurl_setopt($ch, CURLOPT_HEADER, false);\ncurl_setopt($ch, CURLOPT_FOLLOWLOCATION, true);\ncurl_setopt($ch, CURLOPT_URL, $url);\ncurl_setopt($ch, CURLOPT_REFERER, $url);\ncurl_setopt($ch, CURLOPT_RETURNTRANSFER, TRUE);\n$result = curl_exec($ch);\ncurl_close($ch);\n\necho $result;\n\nHope this helps. :)\n", "I think Webfonts deny access to non-browser agents. However you can use the Webfont API. In fact you need an API key, so once you get it, you'll use URLs like\nhttps://www.googleapis.com/webfonts/v1/webfonts?key=YOUR-API-KEY\n\nIt's all documented in the provided link.\n", "Google requires SSL so make sure thats enabled on your server.\nTo fix this error, go to your php.ini file, find the line ;sslextension=php_openssl.dll and remove the semicolon.\nhttp://php.net/manual/en/function.json-decode.php\nhttps://developers.google.com/webfonts/docs/developer_api\n var_dump(json_decode(file_get_contents('https://www.googleapis.com/webfonts/v1/webfonts')));\n\n", "I tried this code and its works\n$url = 'https://www.googleapis.com/webfonts/v1/webfonts?key='.$google_api_key;\n$ch = curl_init();\ncurl_setopt($ch, CURLOPT_SSL_VERIFYPEER, false);\ncurl_setopt($ch, CURLOPT_RETURNTRANSFER, true);\ncurl_setopt($ch, CURLOPT_URL,$url);\n$result=curl_exec($ch);\ncurl_close($ch);\n$fonts = json_decode($result, true);\n$asd =$fonts['items'];\n$fonts_array = array();\nforeach($asd as $as){\n $fonts_array[] = $as['family'];\n}\nvar_dump($fonts_array);\n\n" ]
[ 6, 3, 1, 0 ]
[]
[]
[ "php" ]
stackoverflow_0012615567_php.txt
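The Google Fonts question's stated goal was a select box, so here is a small sketch of that last step. It assumes `$fonts_array` holds the family names, built as in the final answer above.

```php
<?php
// Render the fetched families into a <select>.
?>
<select name="google_font">
    <?php foreach ($fonts_array as $family): ?>
        <option value="<?php echo htmlspecialchars($family); ?>">
            <?php echo htmlspecialchars($family); ?>
        </option>
    <?php endforeach; ?>
</select>
```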
Q: String to JavaScript JSON/Object I have string like array and object want to convert it to pure javascript object or JSON. My string example: "`{keyOne: valueOne, KeyTow: [10, true, null, 10%], keyThree: {hello: hi}}`" I found a solution converting string to JSON here https://stackoverflow.com/a/42907123/4578250 the result with above solution is: {"keyOne": "valueOne", "KeyTow": [10, true, null, "10"%], "keyThree": {"hello": "hi"}} I want the result like: {"keyOne": "valueOne", "KeyTow": [10, true, null, "10%"], "keyThree": {"hello": "hi"}} But I'm not able to convert the string with percentage symbol. any help will be appreciated. A: Just quote all alphanumeric sequences, except for those that are true, false, null, or a number: let s = "`{keyOne: valueOne, KeyTow: [10, true, null, 10%], keyThree: {hello: hi}}`" let r = s.replaceAll('`', '').replaceAll(/([\w%]+)/g, m => m.match(/^\d[\d.]*$/) || ['true','false','null'].includes(m) ? m : `"${m}"`) console.log(JSON.parse(r))
String to JavaScript JSON/Object
I have string like array and object want to convert it to pure javascript object or JSON. My string example: "`{keyOne: valueOne, KeyTow: [10, true, null, 10%], keyThree: {hello: hi}}`" I found a solution converting string to JSON here https://stackoverflow.com/a/42907123/4578250 the result with above solution is: {"keyOne": "valueOne", "KeyTow": [10, true, null, "10"%], "keyThree": {"hello": "hi"}} I want the result like: {"keyOne": "valueOne", "KeyTow": [10, true, null, "10%"], "keyThree": {"hello": "hi"}} But I'm not able to convert the string with percentage symbol. any help will be appreciated.
[ "Just quote all alphanumeric sequences, except for those that are true, false, null, or a number:\n\n\nlet s = \"`{keyOne: valueOne, KeyTow: [10, true, null, 10%], keyThree: {hello: hi}}`\"\n\nlet r = s.replaceAll('`', '').replaceAll(/([\\w%]+)/g, m =>\n m.match(/^\\d[\\d.]*$/) || ['true','false','null'].includes(m) ? m : `\"${m}\"`)\n\nconsole.log(JSON.parse(r))\n\n\n\n" ]
[ 2 ]
[]
[]
[ "javascript", "json" ]
stackoverflow_0074665411_javascript_json.txt
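A possible follow-up to the string-to-JSON answer above: the same idea wrapped in a reusable function, with its limits stated. This sketch assumes the input keeps the simple bare-word/number/percentage shape from the question; values containing spaces, colons, or quotes would need a real parser.

```js
function looseParse(str) {
  const cleaned = str.replaceAll('`', '').replaceAll(/([\w%]+)/g, (m) =>
    m.match(/^\d[\d.]*$/) || ['true', 'false', 'null'].includes(m) ? m : `"${m}"`
  );
  return JSON.parse(cleaned);
}

console.log(looseParse("`{keyOne: valueOne, KeyTow: [10, true, null, 10%]}`"));
// -> { keyOne: 'valueOne', KeyTow: [ 10, true, null, '10%' ] }
```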
Q: How can I send and receive props in this scenario when the component is being used like this {Component} and not <Component /> I'm new to react. I want to receive props from NavItemsLayout but I don't know how const NavItemsLayout = (props)=>{ return( <div className="nav-items"> Hellow World </div> ) } const Navbar = ()=>{ return( <div className="navbar"> <AppLayout NavLayoutComponent={NavItemsLayout} // How to receive props from NavItemsLayout /> </div> ) } A: you can simply use arrow function to do it sth like this : <div className="navbar"> <AppLayout NavLayoutComponent={(props)=> <NavItemsLayout {...props} foo={'bar'} />} /> </div> or use react without jsx like this : <div className="navbar"> <AppLayout NavLayoutComponent={(props)=> React.createElement(NavItemsLayout , {...props,foo: 'bar'}, null)} /> </div>
How can I send and receive props in this scenario when the component is being used like this {Component} and not <Component />
I'm new to react. I want to receive props from NavItemsLayout but I don't know how const NavItemsLayout = (props)=>{ return( <div className="nav-items"> Hellow World </div> ) } const Navbar = ()=>{ return( <div className="navbar"> <AppLayout NavLayoutComponent={NavItemsLayout} // How to receive props from NavItemsLayout /> </div> ) }
[ "you can simply use arrow function to do it\nsth like this :\n<div className=\"navbar\">\n <AppLayout\n NavLayoutComponent={(props)=> <NavItemsLayout {...props} foo={'bar'} />}\n />\n </div>\n\nor use react without jsx\nlike this :\n<div className=\"navbar\">\n <AppLayout\n NavLayoutComponent={(props)=> React.createElement(NavItemsLayout , {...props,foo: 'bar'}, null)}\n />\n </div>\n\n" ]
[ 1 ]
[]
[]
[ "javascript", "react_hooks", "react_props", "reactjs" ]
stackoverflow_0074665890_javascript_react_hooks_react_props_reactjs.txt
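For the React props entry above, it may help to see the assumed other half: how a layout component typically renders the component it was handed and decides which props to pass down. AppLayout's real internals are not shown in the question, so this is only a sketch of one plausible shape.

```jsx
const AppLayout = ({ NavLayoutComponent }) => (
  <header>
    {/* AppLayout decides which props to hand down */}
    <NavLayoutComponent title="Main navigation" />
  </header>
);

const NavItemsLayout = (props) => (
  <div className="nav-items">{props.title ?? 'Hello World'}</div>
);
```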
Q: Bazel: ttl for an artefact I am writing a bazel rule, and one of the steps is acquiring an authentication token that will expire in some time. When I rebuild this target after that time, the step sees that nothing regarding getting that token has changed, so bazel uses a cached token. Is there a way to take the TTL of that token into account? Or at least force that step to be rebuilt every time the build is run? A: The problem here is, that you actively want to write a rule that breaks bazels hermeticity guarantees. I would advise to generate the authentication token outside of bazel and inject it into the build. There are several options to inject your secret: using --action_env=SECRET=$TOKEN as a command-line argument (possibly via a generated .bazelrc). This has the downside of invalidating your entire bazel cache as every rule has to re-execute when the token changes. generate a secret.bzl somehere containing a SECRET="..." line that you can load() where you need it. If you don't want to generate the token outside of bazel, you can write a custom repository_rule() that generates a load()able file: def _get_token_impl(repository_ctx): repository_ctx.file( "BUILD.bazel", "", executable = False, ) repository_ctx.file( "secret.bzl", "SECRET = {}".format("..."), executable = False, ) get_token = repository_rule( implementation = _get_token_impl, local = True, # important ) The local = True here is important: Indicate that this rule fetches everything from the local system and should be reevaluated at every fetch. A: You can use the invalidate_untagged_caches flag in the build command to invalidate all cached results for the target being built. For example: bazel build --invalidate_untagged_caches //my:target
Bazel: ttl for an artefact
I am writing a bazel rule, and one of the steps is acquiring an authentication token that will expire in some time. When I rebuild this target after that time, the step sees that nothing regarding getting that token has changed, so bazel uses a cached token. Is there a way to take the TTL of that token into account? Or at least force that step to be rebuilt every time the build is run?
[ "The problem here is, that you actively want to write a rule that breaks bazels hermeticity guarantees.\nI would advise to generate the authentication token outside of bazel and inject it into the build. There are several options to inject your secret:\n\nusing --action_env=SECRET=$TOKEN as a command-line argument (possibly via a generated .bazelrc). This has the downside of invalidating your entire bazel cache as every rule has to re-execute when the token changes.\ngenerate a secret.bzl somehere containing a SECRET=\"...\" line that you can load() where you need it.\n\nIf you don't want to generate the token outside of bazel, you can write a custom repository_rule() that generates a load()able file:\ndef _get_token_impl(repository_ctx):\n repository_ctx.file(\n \"BUILD.bazel\",\n \"\",\n executable = False,\n )\n repository_ctx.file(\n \"secret.bzl\",\n \"SECRET = {}\".format(\"...\"),\n executable = False,\n )\n\nget_token = repository_rule(\n implementation = _get_token_impl,\n local = True, # important\n)\n\nThe local = True here is important:\n\nIndicate that this rule fetches everything from the local system and should be reevaluated at every fetch.\n\n", "You can use the invalidate_untagged_caches flag in the build command to invalidate all cached results for the target being built. For example:\nbazel build --invalidate_untagged_caches //my:target\n\n" ]
[ 1, 1 ]
[]
[]
[ "bazel" ]
stackoverflow_0074657775_bazel.txt
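To round out the Bazel answer above, a sketch of how the generated `secret.bzl` could be consumed. The names `get_token.bzl` and `auth_token`, the example URL, and the genrule itself are assumptions, not part of the answer.

```python
# WORKSPACE
load("//:get_token.bzl", "get_token")
get_token(name = "auth_token")

# some/package/BUILD.bazel
load("@auth_token//:secret.bzl", "SECRET")

genrule(
    name = "fetch_artifact",
    outs = ["artifact.bin"],
    # For SECRET to load as a string, the generator should quote it,
    # e.g. 'SECRET = "{}"'.format(token) rather than "SECRET = {}".format(token).
    cmd = "curl -sf -H 'Authorization: Bearer {}' https://example.com/artifact > $@".format(SECRET),
    tags = ["requires-network"],  # sandboxed actions need this to reach the network
)
```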
Q: Python - Can I sort a dictionary by one of the values that is in a list? How can I have the following dictionary sorted based on a value that is in a list? Data = {1:["name",2010],2:["name",2005],3:["name",2000]} sortedDataByYear = {3:["name",2000],2:["name",2005],1:["name",2010]} I have tried sorted(lambda), but there is something wrong. A: Dictionaries can't be sorted. However, you can do this: Data = {1:["name",2010],2:["name",2005],3:["name",2000]} sorted(Data.items(), key = lambda x: x[1]) This will return a list instead, but sorted on the first index in ascending order.
Python - Can I sort a dictionary by one of the values that is in a list?
How can I have the following dictionary sorted based on a value that is in a list? Data = {1:["name",2010],2:["name",2005],3:["name",2000]} sortedDataByYear = {3:["name",2000],2:["name",2005],1:["name",2010]} I have tried sorted(lambda), but there is something wrong.
[ "Dictionaries can't be sorted.\nHowever, you can do this:\nData = {1:[\"name\",2010],2:[\"name\",2005],3:[\"name\",2000]}\nsorted(Data.items(), key = lambda x: x[1])\n\nThis will return a list instead, but sorted on the first index in ascending order.\n" ]
[ 0 ]
[]
[]
[ "dictionary", "python" ]
stackoverflow_0074665953_dictionary_python.txt
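One addition to the Python answer above: since Python 3.7 dicts preserve insertion order, so the asker can get a dictionary back — sorted by the year element — rather than a list. A small sketch:

```python
data = {1: ["name", 2010], 2: ["name", 2005], 3: ["name", 2000]}

# item[1][1] is the year inside each value list.
sorted_by_year = dict(sorted(data.items(), key=lambda item: item[1][1]))

print(sorted_by_year)
# {3: ['name', 2000], 2: ['name', 2005], 1: ['name', 2010]}
```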
Q: Discord bot access online spreadsheet Where would I even begin to have a discord bot access an online spreadsheet? I'm in the process of coding a bot that does multiple tasks, but have yet to come across anything in regards to this. A: The flow is going to be interaction with your bot and your bot interacting with some REST API. I assume you are aiming for something like Google Sheets. So you will have to wrap interaction with your bot into interaction with Sheets API. Further help is almost impossible as exact goal was not specified. Here is documentation for Google Sheets: https://developers.google.com/sheets/api Here is documentation for discord.js: https://discord.js.org For interaction with Google Sheets, Google has made their own ?framework? for working with Google Workspace products called Google Apps Script. You can use this to introduce further automation into those products, but in your case I doubt you will need to use it. Link to App Script: https://developers.google.com/apps-script
Discord bot access online spreadsheet
Where would I even begin to have a discord bot access an online spreadsheet? I'm in the process of coding a bot that does multiple tasks, but have yet to come across anything in regards to this.
[ "The flow is going to be interaction with your bot and your bot interacting with some REST API. I assume you are aiming for something like Google Sheets. So you will have to wrap interaction with your bot into interaction with Sheets API. Further help is almost impossible as exact goal was not specified.\nHere is documentation for Google Sheets:\nhttps://developers.google.com/sheets/api\nHere is documentation for discord.js:\nhttps://discord.js.org\nFor interaction with Google Sheets, Google has made their own ?framework? for working with Google Workspace products called Google Apps Script. You can use this to introduce further automation into those products, but in your case I doubt you will need to use it.\nLink to App Script: https://developers.google.com/apps-script\n" ]
[ 0 ]
[]
[]
[ "discord", "discord.js", "javascript", "spreadsheet" ]
stackoverflow_0074665849_discord_discord.js_javascript_spreadsheet.txt
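For the Discord entry above, a minimal sketch of the flow the answer describes: a slash-command handler that reads a range from Google Sheets. It assumes discord.js v14, the googleapis package, an already-registered /lookup command, and placeholder credentials and IDs (service-account.json, SHEET_ID, DISCORD_TOKEN) that you would supply yourself.

```js
const { Client, GatewayIntentBits } = require('discord.js');
const { google } = require('googleapis');

const client = new Client({ intents: [GatewayIntentBits.Guilds] });

client.on('interactionCreate', async (interaction) => {
  if (!interaction.isChatInputCommand() || interaction.commandName !== 'lookup') return;

  const auth = new google.auth.GoogleAuth({
    keyFile: 'service-account.json', // assumed credentials file
    scopes: ['https://www.googleapis.com/auth/spreadsheets.readonly'],
  });
  const sheets = google.sheets({ version: 'v4', auth });
  const res = await sheets.spreadsheets.values.get({
    spreadsheetId: process.env.SHEET_ID,
    range: 'Sheet1!A1:B10',
  });

  const firstRow = (res.data.values && res.data.values[0]) || [];
  await interaction.reply('First row: ' + firstRow.join(', '));
});

client.login(process.env.DISCORD_TOKEN);
```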
Q: "Error: Can't open display:" in Xfig installed in MacBook Pro M2 macOS Monterey I am new to Mac. I have tried to install Xfig using MacPorts in MacBook Pro M2 macOS Monterey. After following all the steps, I tried to run Xfig by typing "xfig" in the terminal. But, it showed "Error: Can't open display:" I guess this error is related to the display. Can someone help me with this issue? Thanks! A: The error message "Error: Can't open display:" typically indicates that the X Window System is not properly set up on your Mac. To fix this issue, you can try the following steps: Make sure that the X11 app is installed on your Mac. You can check this by going to the Applications folder and looking for the X11 app. If it is not installed, you can install it by running the following command in the terminal: xcode-select --install Check if the X11 server is running on your Mac. You can do this by running the following command in the terminal: ps -ax | grep Xquartz If you see a process with the name "Xquartz" in the output, then the X11 server is running. If not, you can start it by running the following command in the terminal: open -a XQuartz After starting the X11 server, try running Xfig again by typing "xfig" in the terminal. It should now launch correctly.
"Error: Can't open display:" in Xfig installed in MacBook Pro M2 macOS Monterey
I am new to Mac. I have tried to install Xfig using MacPorts in MacBook Pro M2 macOS Monterey. After following all the steps, I tried to run Xfig by typing "xfig" in the terminal. But, it showed "Error: Can't open display:" I guess this error is related to the display. Can someone help me with this issue? Thanks!
[ "The error message \"Error: Can't open display:\" typically indicates that the X Window System is not properly set up on your Mac.\nTo fix this issue, you can try the following steps:\n\nMake sure that the X11 app is installed on your Mac. You can check this by going to the Applications folder and looking for the X11 app. If it is not installed, you can install it by running the following command in the terminal:\n\nxcode-select --install\n\n\nCheck if the X11 server is running on your Mac. You can do this by running the following command in the terminal:\n\nps -ax | grep Xquartz\n\nIf you see a process with the name \"Xquartz\" in the output, then the X11 server is running. If not, you can start it by running the following command in the terminal:\nopen -a XQuartz\n\n\nAfter starting the X11 server, try running Xfig again by typing \"xfig\" in the terminal. It should now launch correctly.\n\n" ]
[ 0 ]
[]
[]
[ "display", "macos_monterey", "macports" ]
stackoverflow_0074665822_display_macos_monterey_macports.txt
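One more thing worth checking for the Xfig entry above, offered as a sketch rather than a guaranteed fix: the DISPLAY variable in the terminal session.

```sh
# The exact value varies between macOS/XQuartz versions.
echo $DISPLAY
# usually something like /private/tmp/com.apple.launchd.xxxx/org.xquartz:0

# If it is empty, open a new terminal *after* starting XQuartz (launchd sets it
# for new sessions), or run xfig from the xterm that XQuartz itself opens.
export DISPLAY=:0   # common fallback; only works if XQuartz is listening there
xfig
```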
Q: How to change this SQL query to PL/SQL command line or code? SELECT username, account_status FROM dba_users; How to change this SQL query to PL/SQL command line or code? SELECT username, account_status FROM dba_users; I tried DECLARE user_name VARCHAR2(20) := 'username'; account_status VARCHAR2(20) := 'account_status'; BEGIN FOR user_name IN (SELECT username FROM dba_users) LOOP FOR account_status IN (SELECT account_status FROM dba_users) LOOP dbms_output.put_line(user_name.username || ' - ' || user_record.account_status); END LOOP; END LOOP; END; it works but the output is repeating A: This is pure SQL: SQL> select username, account_status from dba_users where rownum <= 5; USERNAME ACCOUNT_STATUS -------------------- -------------------- SYS OPEN AUDSYS LOCKED SYSTEM OPEN SYSBACKUP LOCKED SYSDG LOCKED To "convert" it into PL/SQL, use one loop (why using two?): SQL> set serveroutput on SQL> begin 2 for cur_r in (select username, account_status from dba_users where rownum <= 5) 3 loop 4 dbms_output.put_line(cur_R.username ||' - '|| cur_r.account_status); 5 end loop; 6 end; 7 / SYS - OPEN AUDSYS - LOCKED SYSTEM - OPEN SYSBACKUP - LOCKED SYSDG - LOCKED PL/SQL procedure successfully completed. Your code, fixed: if you use nested loops (once again, no need for that), you have to correlate one loop query to another - that's what you are missing - see line #4: SQL> begin 2 for cur_user in (select username from dba_users where rownum <= 5) loop 3 for cur_acc in (select account_status from dba_users 4 where username = cur_user.username 5 ) 6 loop 7 dbms_output.put_line(cur_user.username ||' - '|| cur_acc.account_status); 8 end loop; 9 end loop; 10 end; 11 / SYS - OPEN AUDSYS - LOCKED SYSTEM - OPEN SYSBACKUP - LOCKED SYSDG - LOCKED PL/SQL procedure successfully completed. SQL> A: Just use this block: cl scr set SERVEROUTPUT ON BEGIN FOR i IN (SELECT distinct username FROM dba_users order by username) LOOP FOR j IN (SELECT distinct account_status FROM dba_users where username=i.username order by account_status) LOOP dbms_output.put_line(i.username || ' - ' || j.account_status); END LOOP; END LOOP; END;
How to change this SQL query to PL/SQL command line or code? SELECT username, account_status FROM dba_users;
How to change this SQL query to PL/SQL command line or code? SELECT username, account_status FROM dba_users; I tried DECLARE user_name VARCHAR2(20) := 'username'; account_status VARCHAR2(20) := 'account_status'; BEGIN FOR user_name IN (SELECT username FROM dba_users) LOOP FOR account_status IN (SELECT account_status FROM dba_users) LOOP dbms_output.put_line(user_name.username || ' - ' || user_record.account_status); END LOOP; END LOOP; END; it works but the output is repeating
[ "This is pure SQL:\nSQL> select username, account_status from dba_users where rownum <= 5;\n\nUSERNAME ACCOUNT_STATUS\n-------------------- --------------------\nSYS OPEN\nAUDSYS LOCKED\nSYSTEM OPEN\nSYSBACKUP LOCKED\nSYSDG LOCKED\n\nTo \"convert\" it into PL/SQL, use one loop (why using two?):\nSQL> set serveroutput on\nSQL> begin\n 2 for cur_r in (select username, account_status from dba_users where rownum <= 5)\n 3 loop\n 4 dbms_output.put_line(cur_R.username ||' - '|| cur_r.account_status);\n 5 end loop;\n 6 end;\n 7 /\nSYS - OPEN\nAUDSYS - LOCKED\nSYSTEM - OPEN\nSYSBACKUP - LOCKED\nSYSDG - LOCKED\n\nPL/SQL procedure successfully completed.\n\nYour code, fixed: if you use nested loops (once again, no need for that), you have to correlate one loop query to another - that's what you are missing - see line #4:\nSQL> begin\n 2 for cur_user in (select username from dba_users where rownum <= 5) loop\n 3 for cur_acc in (select account_status from dba_users\n 4 where username = cur_user.username\n 5 )\n 6 loop\n 7 dbms_output.put_line(cur_user.username ||' - '|| cur_acc.account_status);\n 8 end loop;\n 9 end loop;\n 10 end;\n 11 /\nSYS - OPEN\nAUDSYS - LOCKED\nSYSTEM - OPEN\nSYSBACKUP - LOCKED\nSYSDG - LOCKED\n\nPL/SQL procedure successfully completed.\n\nSQL>\n\n", "Just use this block:\ncl scr\nset SERVEROUTPUT ON\n\nBEGIN\n FOR i IN (SELECT distinct username FROM dba_users order by username) LOOP\n FOR j IN (SELECT distinct account_status FROM dba_users where username=i.username order by account_status) LOOP\n dbms_output.put_line(i.username || ' - ' || j.account_status);\n END LOOP;\n END LOOP;\nEND;\n\n" ]
[ 2, 0 ]
[]
[]
[ "oracle", "plsql" ]
stackoverflow_0074663677_oracle_plsql.txt
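A side note on the PL/SQL entry above: if the goal is only the combined "username - status" text, a single SQL statement already does it; the PL/SQL loop is optional.

```sql
SELECT username || ' - ' || account_status AS user_status
FROM   dba_users
ORDER  BY username;
```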
Q: How to use the fonts I've picked from google fonts So, let's say I've picked a font from google fonts with multiple size/bold/italic. How do I use the different sizes that I've picked? I can't seem to change the size accordingly. @import url('https://fonts.googleapis.com/css2?family=Fira+Sans:ital,wght@0,100;0,200;0,400;0,500;0,700;1,400&display=swap'); .box1 { font-family: 'Fira Sans', sans-serif; } .box2 { font-family: 'Fira Sans 500;0', sans-serif; } .box3 { font-family: 'Fira Sans 700;0', sans-serif; } <!DOCTYPE html> <html lang="en"> <head> <meta charset="UTF-8" /> <meta http-equiv="X-UA-Compatible" content="IE=edge" /> <meta name="viewport" content="width=<device-width>, initial-scale=1.0" /> <title>Flexbox 3</title> <link rel="stylesheet" href="style.css" /> </head> <body> <div class="main1"> <div class="box1">box 1</div> <div class="box2">box 2</div> <div class="box3">box 3</div> </div> </body> </html> A: Your code should be: @import url('https://fonts.googleapis.com/css2?family=Fira+Sans:ital,wght@0,100;0,200;0,400;0,500;0,700;1,400&display=swap'); .box1 { font-family: 'Fira Sans', sans-serif; } .box2 { font-family: 'Fira Sans', sans-serif; font-weight: 500; } .box3 { font-family: 'Fira Sans', sans-serif; font-weight: 700; }
How to use the fonts I've picked from google fonts
So, let's say I've picked a font from google fonts with multiple size/bold/italic. How do I use the different sizes that I've picked? I can't seem to change the size accordingly. @import url('https://fonts.googleapis.com/css2?family=Fira+Sans:ital,wght@0,100;0,200;0,400;0,500;0,700;1,400&display=swap'); .box1 { font-family: 'Fira Sans', sans-serif; } .box2 { font-family: 'Fira Sans 500;0', sans-serif; } .box3 { font-family: 'Fira Sans 700;0', sans-serif; } <!DOCTYPE html> <html lang="en"> <head> <meta charset="UTF-8" /> <meta http-equiv="X-UA-Compatible" content="IE=edge" /> <meta name="viewport" content="width=<device-width>, initial-scale=1.0" /> <title>Flexbox 3</title> <link rel="stylesheet" href="style.css" /> </head> <body> <div class="main1"> <div class="box1">box 1</div> <div class="box2">box 2</div> <div class="box3">box 3</div> </div> </body> </html>
[ "Your code should be:\n@import url('https://fonts.googleapis.com/css2?family=Fira+Sans:ital,wght@0,100;0,200;0,400;0,500;0,700;1,400&display=swap');\n\n.box1 {\n font-family: 'Fira Sans', sans-serif;\n}\n\n.box2 {\n font-family: 'Fira Sans', sans-serif;\n font-weight: 500;\n}\n\n.box3 {\n font-family: 'Fira Sans', sans-serif;\n font-weight: 700;\n}\n\n" ]
[ 0 ]
[]
[]
[ "css", "frontend" ]
stackoverflow_0074218683_css_frontend.txt
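One addition to the Google Fonts answer above: the import URL in the question also requests the italic 400 face (the "1,400" pair), which is selected with font-style rather than a separate family name. A small sketch:

```css
.box4 {
  font-family: 'Fira Sans', sans-serif;
  font-weight: 400;
  font-style: italic;
}
```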
Q: How to create classes with information from a JSON file My goal is to send multiple emails with information coming from json. What is the best way to loop through the file and create classes for each page? Thanks in advance This is the JSON data: { "object": "list", "results": [ { "object": "page", "id": "2", "created_time": "2022-12-03T09:15:00.000Z", "last_edited_time": "2022-12-03T09:53:00.000Z", "created_by": { "object": "user", "id": "2" }, "last_edited_by": { "object": "user", "id": "2" }, "cover": null, "icon": null, "parent": { "type": "database_id", "database_id": "2" }, "archived": false, "properties": { "email_sender": { "id": "CdJY", "type": "rich_text", "rich_text": [ { "type": "text", "text": { "content": "[email protected]", "link": null }, "annotations": { "bold": false, "italic": false, "strikethrough": false, "underline": false, "code": false, "color": "default" }, "plain_text": "[email protected]", "href": null } ] }, "client": { "id": "JyHA", "type": "rich_text", "rich_text": [ { "type": "text", "text": { "content": "client2", "link": null }, "annotations": { "bold": false, "italic": false, "strikethrough": false, "underline": false, "code": false, "color": "default" }, "plain_text": "client2", "href": null } ] }, "send_time": { "id": "PMEC", "type": "date", "date": { "start": "2022-12-09", "end": null, "time_zone": null } }, "email_receiver": { "id": "ewjg", "type": "rich_text", "rich_text": [ { "type": "text", "text": { "content": "[email protected]", "link": null }, "annotations": { "bold": false, "italic": false, "strikethrough": false, "underline": false, "code": false, "color": "default" }, "plain_text": "[email protected]", "href": null } ] }, "text": { "id": "rGFS", "type": "rich_text", "rich_text": [ { "type": "text", "text": { "content": "A", "link": null }, "annotations": { "bold": false, "italic": false, "strikethrough": false, "underline": false, "code": false, "color": "default" }, "plain_text": "A", "href": null } ] }, "subject": { "id": "title", "type": "title", "title": [ { "type": "text", "text": { "content": "test", "link": null }, "annotations": { "bold": false, "italic": false, "strikethrough": false, "underline": false, "code": false, "color": "default" }, "plain_text": "test", "href": null } ] } }, "url": }, { "object": "page", "id": "1", "created_time": "2022-11-13T20:41:00.000Z", "last_edited_time": "2022-12-03T09:53:00.000Z", "created_by": { "object": "user", "id": "1" }, "last_edited_by": { "object": "user", "id": "9b60ada0-dc62-441f-8c0a-e1668a878d0e" }, "cover": null, "icon": null, "parent": { "type": "database_id", "database_id": "1" }, "archived": false, "properties": { "email_sender": { "id": "CdJY", "type": "rich_text", "rich_text": [ { "type": "text", "text": { "content": "[email protected]", "link": null }, "annotations": { "bold": false, "italic": false, "strikethrough": false, "underline": false, "code": false, "color": "default" }, "plain_text": "[email protected]", "href": null } ] }, "client": { "id": "JyHA", "type": "rich_text", "rich_text": [ { "type": "text", "text": { "content": "client1", "link": null }, "annotations": { "bold": false, "italic": false, "strikethrough": false, "underline": false, "code": false, "color": "default" }, "plain_text": "client1", "href": null } ] }, "send_time": { "id": "PMEC", "type": "date", "date": { "start": "2022-11-14T18:00:00.000+01:00", "end": null, "time_zone": null } }, "email_receiver": { "id": "ewjg", "type": "rich_text", "rich_text": [ { "type": "text", "text": { "content": "[email 
protected]", "link": null }, "annotations": { "bold": false, "italic": false, "strikethrough": false, "underline": false, "code": false, "color": "default" }, "plain_text": "[email protected]", "href": null } ] }, "text": { "id": "rGFS", "type": "rich_text", "rich_text": [ { "type": "text", "text": { "content": "Lorem ipsum dolor sit amet, ", "link": null }, "annotations": { "bold": false, "italic": false, "strikethrough": false, "underline": false, "code": false, "color": "default" }, "plain_text": "Lorem ipsum dolor sit amet, ", "href": null } ] }, "subject": { "id": "title", "type": "title", "title": [ { "type": "text", "text": { "content": "Automatic email", "link": null }, "annotations": { "bold": false, "italic": false, "strikethrough": false, "underline": false, "code": false, "color": "default" }, "plain_text": "Automatic email", "href": null } ] } }, "url": } ], "next_cursor": null, "has_more": false, "type": "page", "page": {} } Thanks to @Tim Roberts i already have a way how to filter the data: for result in data['results']: texttype = result['properties']['email_sender']['type'] email_sender = result['properties']['email_sender'][texttype1][0]['text']['content'] Now i need to find a way to put this information in classes: Client, Email_Sender, Email_Receicer, Subject, Text I havent found a way how. Thanks in advance! A: You may try using pandas to load the JSON into a dataframe. Then, you can create a new class for each page and assign the relevant data to its fields. For example, you could create a class like this: class Email: def __init__(self, email_sender, email_receiver, subject, text): self.email_sender = email_sender self.email_receiver = email_receiver self.subject = subject self.text = text Then, you can iterate through the JSON data and create an instance of the Email class for each page, assigning the relevant data to the fields. For example: for result in data['results']: email_sender = result['properties']['email_sender'][texttype1][0]['text']['content'] email_receiver = result['properties']['email_receiver'][texttype2][0]['text']['content'] subject = result['properties']['subject'][texttype3][0]['text']['content'] text = result['properties']['text'][texttype4][0]['text']['content'] email = Email(email_sender, email_receiver, subject, text) This way, you can create classes for each page and assign the relevant data to each page.
How to create classes with information from a JSON file
My goal is to send multiple emails with information coming from json. What is the best way to loop through the file and create classes for each page? Thanks in advance This is the JSON data: { "object": "list", "results": [ { "object": "page", "id": "2", "created_time": "2022-12-03T09:15:00.000Z", "last_edited_time": "2022-12-03T09:53:00.000Z", "created_by": { "object": "user", "id": "2" }, "last_edited_by": { "object": "user", "id": "2" }, "cover": null, "icon": null, "parent": { "type": "database_id", "database_id": "2" }, "archived": false, "properties": { "email_sender": { "id": "CdJY", "type": "rich_text", "rich_text": [ { "type": "text", "text": { "content": "[email protected]", "link": null }, "annotations": { "bold": false, "italic": false, "strikethrough": false, "underline": false, "code": false, "color": "default" }, "plain_text": "[email protected]", "href": null } ] }, "client": { "id": "JyHA", "type": "rich_text", "rich_text": [ { "type": "text", "text": { "content": "client2", "link": null }, "annotations": { "bold": false, "italic": false, "strikethrough": false, "underline": false, "code": false, "color": "default" }, "plain_text": "client2", "href": null } ] }, "send_time": { "id": "PMEC", "type": "date", "date": { "start": "2022-12-09", "end": null, "time_zone": null } }, "email_receiver": { "id": "ewjg", "type": "rich_text", "rich_text": [ { "type": "text", "text": { "content": "[email protected]", "link": null }, "annotations": { "bold": false, "italic": false, "strikethrough": false, "underline": false, "code": false, "color": "default" }, "plain_text": "[email protected]", "href": null } ] }, "text": { "id": "rGFS", "type": "rich_text", "rich_text": [ { "type": "text", "text": { "content": "A", "link": null }, "annotations": { "bold": false, "italic": false, "strikethrough": false, "underline": false, "code": false, "color": "default" }, "plain_text": "A", "href": null } ] }, "subject": { "id": "title", "type": "title", "title": [ { "type": "text", "text": { "content": "test", "link": null }, "annotations": { "bold": false, "italic": false, "strikethrough": false, "underline": false, "code": false, "color": "default" }, "plain_text": "test", "href": null } ] } }, "url": }, { "object": "page", "id": "1", "created_time": "2022-11-13T20:41:00.000Z", "last_edited_time": "2022-12-03T09:53:00.000Z", "created_by": { "object": "user", "id": "1" }, "last_edited_by": { "object": "user", "id": "9b60ada0-dc62-441f-8c0a-e1668a878d0e" }, "cover": null, "icon": null, "parent": { "type": "database_id", "database_id": "1" }, "archived": false, "properties": { "email_sender": { "id": "CdJY", "type": "rich_text", "rich_text": [ { "type": "text", "text": { "content": "[email protected]", "link": null }, "annotations": { "bold": false, "italic": false, "strikethrough": false, "underline": false, "code": false, "color": "default" }, "plain_text": "[email protected]", "href": null } ] }, "client": { "id": "JyHA", "type": "rich_text", "rich_text": [ { "type": "text", "text": { "content": "client1", "link": null }, "annotations": { "bold": false, "italic": false, "strikethrough": false, "underline": false, "code": false, "color": "default" }, "plain_text": "client1", "href": null } ] }, "send_time": { "id": "PMEC", "type": "date", "date": { "start": "2022-11-14T18:00:00.000+01:00", "end": null, "time_zone": null } }, "email_receiver": { "id": "ewjg", "type": "rich_text", "rich_text": [ { "type": "text", "text": { "content": "[email protected]", "link": null }, "annotations": { "bold": false, 
"italic": false, "strikethrough": false, "underline": false, "code": false, "color": "default" }, "plain_text": "[email protected]", "href": null } ] }, "text": { "id": "rGFS", "type": "rich_text", "rich_text": [ { "type": "text", "text": { "content": "Lorem ipsum dolor sit amet, ", "link": null }, "annotations": { "bold": false, "italic": false, "strikethrough": false, "underline": false, "code": false, "color": "default" }, "plain_text": "Lorem ipsum dolor sit amet, ", "href": null } ] }, "subject": { "id": "title", "type": "title", "title": [ { "type": "text", "text": { "content": "Automatic email", "link": null }, "annotations": { "bold": false, "italic": false, "strikethrough": false, "underline": false, "code": false, "color": "default" }, "plain_text": "Automatic email", "href": null } ] } }, "url": } ], "next_cursor": null, "has_more": false, "type": "page", "page": {} } Thanks to @Tim Roberts i already have a way how to filter the data: for result in data['results']: texttype = result['properties']['email_sender']['type'] email_sender = result['properties']['email_sender'][texttype1][0]['text']['content'] Now i need to find a way to put this information in classes: Client, Email_Sender, Email_Receicer, Subject, Text I havent found a way how. Thanks in advance!
[ "You may try using pandas to load the JSON into a dataframe. Then, you can create a new class for each page and assign the relevant data to its fields. For example, you could create a class like this:\nclass Email:\n def __init__(self, email_sender, email_receiver, subject, text):\n self.email_sender = email_sender\n self.email_receiver = email_receiver\n self.subject = subject\n self.text = text\n\nThen, you can iterate through the JSON data and create an instance of the Email class for each page, assigning the relevant data to the fields.\nFor example:\nfor result in data['results']:\n email_sender = result['properties']['email_sender'][texttype1][0]['text']['content']\n email_receiver = result['properties']['email_receiver'][texttype2][0]['text']['content']\n subject = result['properties']['subject'][texttype3][0]['text']['content']\n text = result['properties']['text'][texttype4][0]['text']['content']\n email = Email(email_sender, email_receiver, subject, text)\n\nThis way, you can create classes for each page and assign the relevant data to each page.\n" ]
[ 0 ]
[]
[]
[ "json", "python", "python_3.x" ]
stackoverflow_0074665661_json_python_python_3.x.txt
Q: Css help for positioning elements on screen I need to position a few elements responsively on the screen for my chat application. I have tried the grid system, but it doesn't seem to be working out. I need a layout like the following: The typing area shows which user is typing and online shows which user is online. Messages show the messages and chat contains the input. Using the grid system I got : [![enter image description here][1]][1] The typing for one should come above the input, and the input should not take the entire space. The chat area is like this: <form id="form" action=""> <div id="typing"> </div> <input id="input" autocomplete="off" /><button>Send</button> </form> Css: #form { background: rgba(0, 0, 0, 0.15); padding: 0.25rem; position: fixed; bottom: 0; left: 0; right: 0; display: flex; height: 3rem; box-sizing: border-box; backdrop-filter: blur(10px); } #input { border: none; padding: 0 1rem; flex-grow: 1; border-radius: 2rem; margin: 0.25rem; } #input:focus { outline: none; } #form>button { background: #333; border: none; padding: 0 1rem; margin: 0.25rem; border-radius: 3px; outline: none; color: #fff; } Any help is greatly appreciated. Thanks! [1]: https://i.stack.imgur.com/uIY86.png A: As for the layout, use a flexbox to divide the left and right (online) panel <div style="display:flex;width:100%;height:100vh"> <div id="left-side"> <div id="messages"></div> <div id="typing"></div> <div id="chat"></div> </div> <div id="online"></div> </div> The left-side div should have display:block, and as for the measurements choose whatever is to your liking. Color coded the end result should look like this
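A hedged CSS sketch to go with the markup in the answer above; the widths and heights are assumptions, the IDs match the markup shown there, and the outer wrapper is given a .layout class in place of the inline style:

.layout    { display: flex; width: 100%; height: 100vh; }
#left-side { display: flex; flex-direction: column; flex: 1 1 auto; }
#messages  { flex: 1 1 auto; overflow-y: auto; }   /* fills the remaining height */
#typing    { flex: 0 0 auto; min-height: 1.5rem; } /* sits directly above the input */
#chat      { flex: 0 0 auto; display: flex; }
#chat input { flex: 1 1 auto; }                    /* input grows, button keeps its size */
#online    { flex: 0 0 16rem; overflow-y: auto; }  /* fixed-width online panel */

Using a column flexbox for the left side removes the need for position: fixed on the form, which is part of why the input takes the whole width in the original code.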
CSS help for positioning elements on screen
I need to position a few elements responsively on the screen for my chat application. I have tried the grid system, but it doesn't seem to be working out. I need a layout like the following: The typing area shows which user is typing and online shows which user is online. Messages show the messages and chat contains the input. Using the grid system I got : [![enter image description here][1]][1] The typing for one should come above the input, and the input should not take the entire space. The chat area is like this: <form id="form" action=""> <div id="typing"> </div> <input id="input" autocomplete="off" /><button>Send</button> </form> Css: #form { background: rgba(0, 0, 0, 0.15); padding: 0.25rem; position: fixed; bottom: 0; left: 0; right: 0; display: flex; height: 3rem; box-sizing: border-box; backdrop-filter: blur(10px); } #input { border: none; padding: 0 1rem; flex-grow: 1; border-radius: 2rem; margin: 0.25rem; } #input:focus { outline: none; } #form>button { background: #333; border: none; padding: 0 1rem; margin: 0.25rem; border-radius: 3px; outline: none; color: #fff; } Any help is greatly appreciated. Thanks! [1]: https://i.stack.imgur.com/uIY86.png
[ "As for the layout, use a flexbox to divide the left and right (online) panel\n<div style=\"display:flex;width:100%;height:100vh\">\n <div id=\"left-side\">\n <div id=\"messages\"></div>\n <div id=\"typing\"></div>\n <div id=\"chat\"></div>\n </div>\n <div id=\"online\"></div>\n</div>\n\nThe left-side div should have display:block, and as for the measurements choose whatever is to your liking.\nColor coded the end result should look like this\n" ]
[ 0 ]
[]
[]
[ "css", "grid", "layout" ]
stackoverflow_0074664547_css_grid_layout.txt
Q: How to export Palette-Based png in GIMP? Because the nmlc ask me to add palette, so I am trying to create a palette-based png with GIMP. But I didn't see the option for palette, only the RGB, RGBA, GRAY, GRAYA. According to libpng, png have 3 types, but i only see two in GIMP(whether with or without Alpha channel). Is there a way to add palette in png file? I was trying to find a way to add palette to the picture, but I cannot find a way to do it. A: If your image is color-indexed in Gimp (Image > Mode > Color indexed) then it will be exported as a color-indexed PNG. But keep in mind that in Gimp, "Color-indexed" is really "Color-indexed GIF-style", so opacity is binary and not progressive. This of course is not a problem if your image has no transparent parts.
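If a scriptable route is acceptable as a cross-check, Pillow can also write a colour-indexed (palette-based) PNG; a minimal sketch, with the file names as assumptions:

from PIL import Image

img = Image.open("input.png")
# "P" is palette mode; ADAPTIVE builds a palette of up to 256 colours
indexed = img.convert("P", palette=Image.ADAPTIVE, colors=256)
indexed.save("output_indexed.png")   # saved with a PLTE chunk, i.e. libpng's indexed type
print(indexed.mode)                  # "P" confirms the image is palette-based

As with GIMP's indexed mode, smooth transparency is not preserved, so images with partial alpha need extra care.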
How to export a palette-based PNG in GIMP?
Because nmlc asks me to add a palette, I am trying to create a palette-based PNG with GIMP. But I don't see an option for a palette, only RGB, RGBA, GRAY, GRAYA. According to libpng, PNG has 3 types, but I only see two in GIMP (whether with or without an alpha channel). Is there a way to add a palette to a PNG file? I was trying to find a way to add a palette to the picture, but I cannot find a way to do it.
[ "If your image is color-indexed in Gimp (Image > Mode > Color indexed) then it will be exported as a color-indexed PNG.\nBut keep in mind that in Gimp, \"Color-indexed\" is really \"Color-indexed GIF-style\", so opacity is binary and not progressive. This of course is not a problem if your image has no transparent parts.\n" ]
[ 1 ]
[]
[]
[ "gimp" ]
stackoverflow_0074664666_gimp.txt
Q: Why do pythons tkinter grid buttons stretch when a label in the grid changes size? I am making a simple application with 1 button and 1 label where pressing the button changes the text on the label. Both label and button and placed using the tkinter's grid system however when I press the button the label's text changes size as expected but the button becomes stretched to the label's length too. Why? This is my code. import tkinter as tk def change(): label1.config(text = "example text that is really big which shows the button stretching") window = tk.Tk() window.config(bg="black") b1=tk.Button(window,text="button1",font=("Segoe UI",40),command=change).grid(row=1,column=1,sticky="news") label1=tk.Label(text = "text",font=("Segoe UI",20),bg="black",fg="white") label1.grid(row=2,column=1,sticky="news") window.mainloop() The expected result was the label changing and the button's size staying the same but instead the button becomes stretched to the labels length. I have tried a lot of things trying to make this work but I can't figure it out. Before pressing the button After pressing the button A: Setting sticky="news" for the button, will expand the button to fill the available space in the four directions North, East, West, South. Try changing it to sticky="w" to make it stick to the West. So, the line of code for creating the button will become: b1=tk.Button(window,text="button1",font=("Segoe UI",40),command=change).grid(row=1,column=1,sticky="w")
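A runnable sketch of the fix from the answer above, with sticky="w" so the button keeps its natural size; it also keeps a real reference to the button, since grid() returns None in the original code:

import tkinter as tk

def change():
    label1.config(text="example text that is really big which shows the button no longer stretching")

window = tk.Tk()
window.config(bg="black")

# sticky="w" anchors the button to the west edge instead of stretching it to the column width
b1 = tk.Button(window, text="button1", font=("Segoe UI", 40), command=change)
b1.grid(row=1, column=1, sticky="w")

label1 = tk.Label(window, text="text", font=("Segoe UI", 20), bg="black", fg="white")
label1.grid(row=2, column=1, sticky="w")

window.mainloop()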
Why do Python's tkinter grid buttons stretch when a label in the grid changes size?
I am making a simple application with 1 button and 1 label where pressing the button changes the text on the label. Both label and button and placed using the tkinter's grid system however when I press the button the label's text changes size as expected but the button becomes stretched to the label's length too. Why? This is my code. import tkinter as tk def change(): label1.config(text = "example text that is really big which shows the button stretching") window = tk.Tk() window.config(bg="black") b1=tk.Button(window,text="button1",font=("Segoe UI",40),command=change).grid(row=1,column=1,sticky="news") label1=tk.Label(text = "text",font=("Segoe UI",20),bg="black",fg="white") label1.grid(row=2,column=1,sticky="news") window.mainloop() The expected result was the label changing and the button's size staying the same but instead the button becomes stretched to the labels length. I have tried a lot of things trying to make this work but I can't figure it out. Before pressing the button After pressing the button
[ "Setting sticky=\"news\" for the button, will expand the button to fill the available space in the four directions North, East, West, South. Try changing it to sticky=\"w\" to make it stick to the West. So, the line of code for creating the button will become:\nb1=tk.Button(window,text=\"button1\",font=(\"Segoe UI\",40),command=change).grid(row=1,column=1,sticky=\"w\")\n\n\n" ]
[ 0 ]
[]
[]
[ "python", "tkinter" ]
stackoverflow_0074665903_python_tkinter.txt
Q: Installing Chrome with Playwright on Google Cloud Run for Python I'm getting a Executable doesn't exist at /root/.cache/ms-playwright/chromium-1019/chromium-1019/chrome-linux/chrome error on Google Cloud Run whenever I try to install Playwright for Python, and I can't find a workaround for it, already tried to download and install a Chrome version but I don't know who to set the path to it. Was anyone able to achieve this? A: As mentioned in the document this error may occur because, Each version of Playwright needs specific versions of browser binaries to operate. By default, Playwright downloads Chromium, WebKit, and Firefox browsers into the OS-specific cache folders. You can overcome this error by following the steps suggested in the document. Download version-specific Playwright Browsers Have Playwright launch browses from the downloaded location (optional) Ask Playwright not to Download browsers by default, improves the package installation speed Can you try using executablePath to point to the installed version For more information, you can refer to these Link1, Link2 and Link3 which may help you.
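Since the answer above describes the general options, here is a hedged Dockerfile sketch for Cloud Run that downloads Chromium at build time so the executable exists inside the container; the base image, entrypoint and file layout are assumptions:

FROM python:3.10-slim

# keep the browsers at a fixed path inside the image
ENV PLAYWRIGHT_BROWSERS_PATH=/ms-playwright

WORKDIR /app
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt playwright

# download Chromium plus its system dependencies during the build,
# not at request time on Cloud Run
RUN playwright install --with-deps chromium

COPY . .
CMD ["python", "main.py"]

Alternatively, the prebuilt mcr.microsoft.com/playwright/python image already ships the browsers and their dependencies.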
Installing Chrome with Playwright on Google Cloud Run for Python
I'm getting an Executable doesn't exist at /root/.cache/ms-playwright/chromium-1019/chromium-1019/chrome-linux/chrome error on Google Cloud Run whenever I try to install Playwright for Python, and I can't find a workaround for it. I already tried to download and install a Chrome version, but I don't know how to set the path to it. Was anyone able to achieve this?
[ "As mentioned in the document this error may occur because, Each version of Playwright needs specific versions of browser binaries to operate. By default, Playwright downloads Chromium, WebKit, and Firefox browsers into the OS-specific cache folders.\nYou can overcome this error by following the steps suggested in the document.\n\nDownload version-specific Playwright Browsers\nHave Playwright launch browses from the downloaded location\n(optional) Ask Playwright not to Download browsers by default, improves the package installation speed\n\nCan you try using executablePath to point to the installed version\nFor more information, you can refer to these Link1, Link2 and Link3 which may help you.\n" ]
[ 0 ]
[]
[]
[ "google_cloud_run", "playwright_python" ]
stackoverflow_0074662363_google_cloud_run_playwright_python.txt
Q: Hugo - multiple sections of website with their own local taxonomies? I'm building a website that has "blog" section and and "guide" section, like this: mywebsite.com/blog/ mywebsite.com/guide/ Both blog and guide contain their own multiple posts. I'd like to add independent tags (taxonomies) to both blog and guide, so that I could list posts by specific tags, for example: mywebsite.com/blog/tags/some_blog_tag mywebsite.com/guide/tags/some_guide_tag What should be my project's folder and file structure - both content folder and layouts folder - to implement this? It seems that hugo is built around the idea that the taxonomy should be global for the entire website. However, there are also "page bundles" in hugo. Can hugo define local taxonomies inside page bundles? I find the docs very confusing on this topic. Also, what should be added to the config.json file to create such section local taxonomies? I tried the following folder structure, but I get "page not found" when I access mywebsite.com/blog/tags/some_blog_tag or mywebsite.com/guide/tags/some_guide_tag - content - blog _index.md blog_content1.md blog_content2.md blog_content3.md - guide _index.md guide_content1.md guide_content2.md guide_content3.md - layouts - blog list.html taxonomy.html - guide list.html taxonomy.html A: One way to achieve that would be to use a multilingual site, with three languages: one language named en (default) one language named blog the last one named guide and adding the parameter defaultContentLanguageInSubdir: false By doing so, you can host all your non blog/guide pages in the en language, and each blog/guide pages will have their own taxonomy as required. See documentation for more information https://gohugo.io/content-management/multilingual/
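A sketch of what the suggested multilingual configuration might look like in config.toml; the directory names are assumptions, and whether Hugo accepts non-language keys such as blog and guide should be verified against the release in use:

defaultContentLanguage = "en"
defaultContentLanguageInSubdir = false

[languages.en]
  weight = 1
  contentDir = "content/en"

[languages.blog]
  weight = 2
  contentDir = "content/blog"

[languages.guide]
  weight = 3
  contentDir = "content/guide"

[taxonomies]
  tag = "tags"

Each pseudo-language then resolves its own tags pages, e.g. /blog/tags/some_blog_tag.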
Hugo - multiple sections of website with their own local taxonomies?
I'm building a website that has "blog" section and and "guide" section, like this: mywebsite.com/blog/ mywebsite.com/guide/ Both blog and guide contain their own multiple posts. I'd like to add independent tags (taxonomies) to both blog and guide, so that I could list posts by specific tags, for example: mywebsite.com/blog/tags/some_blog_tag mywebsite.com/guide/tags/some_guide_tag What should be my project's folder and file structure - both content folder and layouts folder - to implement this? It seems that hugo is built around the idea that the taxonomy should be global for the entire website. However, there are also "page bundles" in hugo. Can hugo define local taxonomies inside page bundles? I find the docs very confusing on this topic. Also, what should be added to the config.json file to create such section local taxonomies? I tried the following folder structure, but I get "page not found" when I access mywebsite.com/blog/tags/some_blog_tag or mywebsite.com/guide/tags/some_guide_tag - content - blog _index.md blog_content1.md blog_content2.md blog_content3.md - guide _index.md guide_content1.md guide_content2.md guide_content3.md - layouts - blog list.html taxonomy.html - guide list.html taxonomy.html
[ "One way to achieve that would be to use a multilingual site, with three languages:\n\none language named en (default)\none language named blog\nthe last one named guide\n\nand adding the parameter defaultContentLanguageInSubdir: false\nBy doing so, you can host all your non blog/guide pages in the en language, and each blog/guide pages will have their own taxonomy as required.\nSee documentation for more information\nhttps://gohugo.io/content-management/multilingual/\n" ]
[ 0 ]
[]
[]
[ "hugo", "hugo_content_organization" ]
stackoverflow_0074640888_hugo_hugo_content_organization.txt
Q: How to catch props from a single component that was rendered multiple times with different prop values In app.jsx I have rendered Inputfield component multiple times and passed a random value through Math.random(). I am looking to grab those random numbers that were passed through as a prop and add them in the Inputfield component. How can I approach this problem? I currently have no clue how to approach and solve this problem. Here are my codes- app.jsx import { useState, useRef } from "react"; import styles from "./css/app.module.css"; import Inputbox from "./Inputbox"; function App() { const [components, setComponents] = useState([]); const boxCount = useRef(null); const handleKeyDown = (event) => { if (event.key === "Enter") { addComponent(); } }; function addComponent() { var result = boxCount.current.value; for (var i = 0; i < result; i++) { setComponents((arr) => [...arr, <Inputbox num={Math.random()} />]); } // setComponents(Inputbox); // console.log(result); } function checkAll() {} return ( <div className={styles.container}> <input type="text" name="" id="" ref={boxCount} onKeyDown={handleKeyDown} placeholder="Enter number of box" /> <button onClick={addComponent}>Add TextBox</button> <button onClick={checkAll}>Check all</button> {/* <div>{components}</div> */} <div className={styles.componentList}> {components.map((jsxComponent) => ( <div key={Math.random()}>{jsxComponent}</div> ))} </div> </div> ); } export default App; Inputbox,jsx import styles from "./css/inputbox.module.css"; import { useRef, useState } from "react"; const Inputbox = (props) => { const inputVal = useRef(null); console.log("Individual nums" + " " + props.num); console.log("Total"); const [checked, setChecked] = useState(true); const handleChk = () => { setChecked(!checked); if (checked == true) { handleChange(); } }; const handleChange = () => { var res = parseInt(inputVal.current.value); return res; }; return ( <div className={styles.container}> <input type="checkbox" onClick={handleChk} name="" id="" /> <input type="text" ref={inputVal} placeholder="Enter numbers" /> </div> ); }; export default Inputbox; A: function addComponent() { const result = boxCount.current.value; const newArr = [] for (let i = 0; i < result; i++) { newArr.push(Math.random()) } setComponents((arr) => [...arr, ...newArr]); } <div className={styles.componentList}> {components.map((value) => ( <Inputbox num={value} key={value} /> ))} </div>
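To complement the answer, a sketch of how the child might actually consume the number it receives; the names follow the code above, and the way the prop is combined with the typed value is only an illustration:

const Inputbox = ({ num }) => {
  const [value, setValue] = React.useState("");

  // the random number arrives as props.num; here it is simply added to the typed value
  const total = (parseInt(value, 10) || 0) + num;

  return (
    <div>
      <input
        type="text"
        value={value}
        onChange={(e) => setValue(e.target.value)}
        placeholder="Enter numbers"
      />
      <span>prop: {num}, total: {total}</span>
    </div>
  );
};

If the parent also needs the per-box totals (for the "Check all" button), pass a callback prop such as onTotalChange and lift the values into App's state.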
How to catch props from a single component that was rendered multiple times with different prop values
In app.jsx I have rendered Inputfield component multiple times and passed a random value through Math.random(). I am looking to grab those random numbers that were passed through as a prop and add them in the Inputfield component. How can I approach this problem? I currently have no clue how to approach and solve this problem. Here are my codes- app.jsx import { useState, useRef } from "react"; import styles from "./css/app.module.css"; import Inputbox from "./Inputbox"; function App() { const [components, setComponents] = useState([]); const boxCount = useRef(null); const handleKeyDown = (event) => { if (event.key === "Enter") { addComponent(); } }; function addComponent() { var result = boxCount.current.value; for (var i = 0; i < result; i++) { setComponents((arr) => [...arr, <Inputbox num={Math.random()} />]); } // setComponents(Inputbox); // console.log(result); } function checkAll() {} return ( <div className={styles.container}> <input type="text" name="" id="" ref={boxCount} onKeyDown={handleKeyDown} placeholder="Enter number of box" /> <button onClick={addComponent}>Add TextBox</button> <button onClick={checkAll}>Check all</button> {/* <div>{components}</div> */} <div className={styles.componentList}> {components.map((jsxComponent) => ( <div key={Math.random()}>{jsxComponent}</div> ))} </div> </div> ); } export default App; Inputbox,jsx import styles from "./css/inputbox.module.css"; import { useRef, useState } from "react"; const Inputbox = (props) => { const inputVal = useRef(null); console.log("Individual nums" + " " + props.num); console.log("Total"); const [checked, setChecked] = useState(true); const handleChk = () => { setChecked(!checked); if (checked == true) { handleChange(); } }; const handleChange = () => { var res = parseInt(inputVal.current.value); return res; }; return ( <div className={styles.container}> <input type="checkbox" onClick={handleChk} name="" id="" /> <input type="text" ref={inputVal} placeholder="Enter numbers" /> </div> ); }; export default Inputbox;
[ "function addComponent() {\n const result = boxCount.current.value;\n const newArr = []\n for (let i = 0; i < result; i++) {\n newArr.push(Math.random())\n }\n setComponents((arr) => [...arr, ...newArr]);\n}\n\n<div className={styles.componentList}>\n {components.map((value) => (\n <Inputbox num={value} key={value} />\n ))}\n</div>\n\n" ]
[ 0 ]
[]
[]
[ "reactjs" ]
stackoverflow_0074665614_reactjs.txt
Q: MongoDB Atlas works on localhost but not on production (NextJs on Vercel) I connected my NextJs app with MongoDB Atlas. It is working fine on localhost but it doesn't work on Production hosted on Vercel. At first, I thought it's about network access and added 0.0.0.0/0 on the IP Access List but no difference. The error 500 is only being shown on the production. I have called the nextjs's api from Formik's form submit. On console, I also see this Error at onSubmit (test-2b8b0b9f0ee833a8.js:1:14850) The codes for connect and disconnect import mongoose from 'mongoose'; const connection = {}; async function connect() { if (connection.isConnected) { console.log('already connected'); return; } if (mongoose.connections.length > 0) { connection.isConnected = mongoose.connections[0].readyState; if (connection.isConnected === 1) { console.log('use previous connection'); return; } await mongoose.disconnect(); } const db = mongoose.connect(process.env.MONGODB_URI); console.log('new connection'); connection.isConnected = db.connections[0].readyState; } async function disconnect() { if (connection.isConnected) { if (process.env.NODE_ENV === 'production') { await mongoose.disconnect(); connection.isConnected = false; } else { console.log('not disconnected'); } } } function convertDocToObj(doc) { doc._id = doc._id.toString(); doc.createdAt = doc.createdAt.toString(); doc.updatedAt = doc.updatedAt.toString(); return doc; } const db = { connect, disconnect, convertDocToObj }; export default db; Person Model import mongoose from 'mongoose'; const PersonSchema = new mongoose.Schema( { name: { type: String, required: true }, count: { type: Number, required: false }, }, { timestamps: true, } ); const Person = mongoose.models.Person || mongoose.model('Person', PersonSchema); export default Person; pages/api/my-endpoint.js import db from '../../../mongoose/db'; import Person from '../../../mongoose/models/Person'; const handler = async (req, res) => { // the codes await db.connect(); let person = await Person.findOne({ name: 'Frank', }); person.count += 1; await type.save(); await db.disconnect(); // the codes } A: The way that I'm doing you can also try or modify -- /lib/dbConnect.js import mongoose from 'mongoose' const DB_URL = process.env.DB_URL if (!DB_URL) { throw new Error( 'Please define the DB_URL environment variable inside .env.local' ) } /** * Global is used here to maintain a cached connection across hot reloads * in development. This prevents connections growing exponentially * during API Route usage. 
*/ let cached = global.mongoose if (!cached) { cached = global.mongoose = { conn: null, promise: null } } async function dbConnect() { if (cached.conn) { console.log("--- db connected ---"); return cached.conn } if (!cached.promise) { const opts = { bufferCommands: false, } cached.promise = mongoose.connect(DB_URL, opts).then((mongoose) => { console.log("--- db connected ---") return mongoose }) } cached.conn = await cached.promise console.log("--- db connected ---") return cached.conn } export default dbConnect; /models/videos.js import mongoose from "mongoose"; let videoSchema = mongoose.Schema({ videoID: { type: String, }, authorID: { type: String, required: true, }, authorName: { type: String, required: true, }, videoURL: { type: String, required: true, } }); const Videos =mongoose.models?.videos || mongoose.model("videos", videoSchema); export default Videos; /pages/api/getVideos.js import dbConnect from "../../lib/dbConnect"; import Videos from "../../models/videos"; export default async function handler(req,res){ try{ await dbConnect(); await Videos.find().lean().then(response=>{ return res.status(200).json(response); }); }catch(e){ res.status(400).json(e); } } //api calling method export const fetcher = (url,method,payload) => fetch(url,{ method:method, headers:{ Accept:'application/json', 'Content-Type':"application/json" }, body:JSON.stringify(payload) }).then(res=>{ if(!res.ok){ const error = new Error("Error while fetching the data"); error.status = res.ok error.message = 'error' return error; }else{ return res.json(); } })
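Because production only reports a bare 500, a useful first step is to surface the underlying error in the Vercel function logs and to confirm that MONGODB_URI is actually set in the Vercel project's environment variables; a hedged sketch of the API route with explicit error handling (handler body abbreviated):

// pages/api/my-endpoint.js
const handler = async (req, res) => {
  try {
    await db.connect();
    const person = await Person.findOne({ name: 'Frank' });
    person.count += 1;
    await person.save();            // note: the original snippet calls type.save() here
    await db.disconnect();
    res.status(200).json({ ok: true });
  } catch (err) {
    console.error(err);             // visible in the Vercel function logs
    res.status(500).json({ message: err.message });
  }
};

export default handler;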
MongoDB Atlas works on localhost but not on production (NextJs on Vercel)
I connected my NextJs app with MongoDB Atlas. It is working fine on localhost but it doesn't work on Production hosted on Vercel. At first, I thought it's about network access and added 0.0.0.0/0 on the IP Access List but no difference. The error 500 is only being shown on the production. I have called the nextjs's api from Formik's form submit. On console, I also see this Error at onSubmit (test-2b8b0b9f0ee833a8.js:1:14850) The codes for connect and disconnect import mongoose from 'mongoose'; const connection = {}; async function connect() { if (connection.isConnected) { console.log('already connected'); return; } if (mongoose.connections.length > 0) { connection.isConnected = mongoose.connections[0].readyState; if (connection.isConnected === 1) { console.log('use previous connection'); return; } await mongoose.disconnect(); } const db = mongoose.connect(process.env.MONGODB_URI); console.log('new connection'); connection.isConnected = db.connections[0].readyState; } async function disconnect() { if (connection.isConnected) { if (process.env.NODE_ENV === 'production') { await mongoose.disconnect(); connection.isConnected = false; } else { console.log('not disconnected'); } } } function convertDocToObj(doc) { doc._id = doc._id.toString(); doc.createdAt = doc.createdAt.toString(); doc.updatedAt = doc.updatedAt.toString(); return doc; } const db = { connect, disconnect, convertDocToObj }; export default db; Person Model import mongoose from 'mongoose'; const PersonSchema = new mongoose.Schema( { name: { type: String, required: true }, count: { type: Number, required: false }, }, { timestamps: true, } ); const Person = mongoose.models.Person || mongoose.model('Person', PersonSchema); export default Person; pages/api/my-endpoint.js import db from '../../../mongoose/db'; import Person from '../../../mongoose/models/Person'; const handler = async (req, res) => { // the codes await db.connect(); let person = await Person.findOne({ name: 'Frank', }); person.count += 1; await type.save(); await db.disconnect(); // the codes }
[ "The way that I'm doing you can also try or modify --\n/lib/dbConnect.js\nimport mongoose from 'mongoose'\n\nconst DB_URL = process.env.DB_URL\n\nif (!DB_URL) {\n throw new Error(\n 'Please define the DB_URL environment variable inside .env.local'\n )\n}\n\n/**\n * Global is used here to maintain a cached connection across hot reloads\n * in development. This prevents connections growing exponentially\n * during API Route usage.\n */\nlet cached = global.mongoose\n\nif (!cached) {\n cached = global.mongoose = { conn: null, promise: null }\n}\n\nasync function dbConnect() {\n if (cached.conn) {\n console.log(\"--- db connected ---\");\n return cached.conn\n }\n\n if (!cached.promise) {\n const opts = {\n bufferCommands: false,\n }\n\n cached.promise = mongoose.connect(DB_URL, opts).then((mongoose) => {\n console.log(\"--- db connected ---\")\n return mongoose\n })\n }\n cached.conn = await cached.promise\n console.log(\"--- db connected ---\")\n return cached.conn\n}\n\nexport default dbConnect;\n\n/models/videos.js\nimport mongoose from \"mongoose\";\n\nlet videoSchema = mongoose.Schema({\n videoID: {\n type: String,\n },\n authorID: {\n type: String,\n required: true,\n },\n authorName: {\n type: String,\n required: true,\n },\n videoURL: {\n type: String,\n required: true,\n }\n});\nconst Videos =mongoose.models?.videos || mongoose.model(\"videos\", videoSchema);\nexport default Videos;\n\n/pages/api/getVideos.js\nimport dbConnect from \"../../lib/dbConnect\";\nimport Videos from \"../../models/videos\";\n\nexport default async function handler(req,res){\n try{\n await dbConnect();\n await Videos.find().lean().then(response=>{\n return res.status(200).json(response);\n });\n }catch(e){\n res.status(400).json(e);\n }\n}\n\n//api calling method\n export const fetcher = (url,method,payload) => \n fetch(url,{\n method:method,\n headers:{\n Accept:'application/json',\n 'Content-Type':\"application/json\"\n },\n body:JSON.stringify(payload)\n}).then(res=>{\n if(!res.ok){\n const error = new Error(\"Error while fetching the data\");\n error.status = res.ok\n error.message = 'error'\n return error;\n }else{\n return res.json();\n }\n})\n\n" ]
[ 0 ]
[]
[]
[ "mongodb", "mongoose", "next.js", "node.js", "vercel" ]
stackoverflow_0074665568_mongodb_mongoose_next.js_node.js_vercel.txt
Q: why spring framework Classutils field primitiveWrapperTypeMap use IdentityHashMap not hashMap? why spring framework Classutils(org.springframework.util.ClassUtils) field primitiveWrapperTypeMap use IdentityHashMap not HashMap. the Class Object is singleton, equals to other class is always false , if use hashMap the key also not cover , why use IdentityHashMap not HashMap A: I suppose IdentityHashMap used for better performance. I'll try to briefly explain it. In a typical Map implementations, equals() is used to compare keys (while you access/put/search keys) as the Map interface obligates to do so. Also, HashMap uses the hashCode() method for hashing. But the IdentityHashMap class violates these rules. According to documentation, this is a simple linear-probe hash table with peculiarities: it uses reference equality (==) for key search operations, it uses the System.identityHashCode() method during search operations. Al these 'tricks' allow IdentityHashMap demonstrate a better performance. If you look at the code, primitiveWrapperTypeMap is used in a static block, and it is filled with values such as <Integer.class, int.class>, etc. So, this field is used only inside of a class, and it is immutable from outside of a class. That's why it is safe to use IdentityHashMap. Hope it helps.
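A small sketch of the behavioural difference described above; with Class keys (as in primitiveWrapperTypeMap) both maps behave identically because every Class object is a singleton, so the choice is about the cheaper reference-based lookup rather than correctness:

import java.util.HashMap;
import java.util.IdentityHashMap;
import java.util.Map;

public class IdentityMapDemo {
    public static void main(String[] args) {
        Map<String, String> hash = new HashMap<>();
        Map<String, String> identity = new IdentityHashMap<>();

        String a = new String("key");   // two distinct objects that are equals()
        String b = new String("key");

        hash.put(a, "first");
        hash.put(b, "second");          // replaces the entry: HashMap compares with equals()

        identity.put(a, "first");
        identity.put(b, "second");      // kept as a second entry: IdentityHashMap compares with ==

        System.out.println(hash.size());      // 1
        System.out.println(identity.size());  // 2
    }
}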
Why does Spring Framework's ClassUtils field primitiveWrapperTypeMap use IdentityHashMap instead of HashMap?
Why does the Spring Framework's ClassUtils (org.springframework.util.ClassUtils) field primitiveWrapperTypeMap use an IdentityHashMap instead of a HashMap? Each Class object is a singleton, so equals() against a different class is always false, and with a HashMap the keys would not be overwritten either. So why use IdentityHashMap rather than HashMap?
[ "I suppose IdentityHashMap used for better performance. I'll try to briefly explain it.\nIn a typical Map implementations, equals() is used to compare keys (while you access/put/search keys) as the Map interface obligates to do so. Also, HashMap uses the hashCode() method for hashing.\nBut the IdentityHashMap class violates these rules. According to documentation, this is a simple linear-probe hash table with peculiarities:\n\nit uses reference equality (==) for key search operations,\nit uses the System.identityHashCode() method during search operations.\n\nAl these 'tricks' allow IdentityHashMap demonstrate a better performance.\nIf you look at the code, primitiveWrapperTypeMap is used in a static block, and it is filled with values such as <Integer.class, int.class>, etc. So, this field is used only inside of a class, and it is immutable from outside of a class. That's why it is safe to use IdentityHashMap.\nHope it helps.\n" ]
[ 0 ]
[]
[]
[ "hashmap", "java", "spring" ]
stackoverflow_0074663932_hashmap_java_spring.txt
Q: How can I specify which Python toolchain to use in Bazel? How can I configure Bazel to pick one toolchain over the other? I am okay with defining which toolchain to use via command-line argument or specifying which should be used in a specific target. There are currently two toolchains being defined in my WORKSPACE file. I have two Python toolchains. One of them builds Python from source and includes it in the executable .zip output, and the other one does not. When building, the toolchain that gets used is always the first toolchain which is registered. In this case, python3_tooolchain is used even though the build target imports requirement from hermetic_python3_toolchain. # WORKSPACE load("@bazel_tools//tools/build_defs/repo:http.bzl", "http_archive") load("@rules_python//python:pip.bzl", "pip_install") http_archive( name = "rules_python", url = "https://github.com/bazelbuild/rules_python/releases/download/0.5.0/rules_python-0.5.0.tar.gz", sha256 = "cd6730ed53a002c56ce4e2f396ba3b3be262fd7cb68339f0377a45e8227fe332", ) # Non-hermetic toolchain register_toolchains("//src:python3_toolchain") pip_install( quiet = False, name = "python_dependencies", requirements = "//:requirements.txt", python_interpreter = "/usr/bin/python3" ) load("@python_dependencies//:requirements.bzl", "requirement") # Hermetic toolchain _py_configure = """ if [[ "$OSTYPE" == "darwin"* ]]; then ./configure --prefix=$(pwd)/bazel_install --with-openssl=$(brew --prefix openssl) else ./configure --prefix=$(pwd)/bazel_install fi """ http_archive( name = "hermetic_interpreter", urls = ["https://www.python.org/ftp/python/3.11.0/Python-3.11.0.tar.xz"], sha256 = "a57dc82d77358617ba65b9841cee1e3b441f386c3789ddc0676eca077f2951c3", strip_prefix = "Python-3.11.0", patch_cmds = [ "mkdir $(pwd)/bazel_install", _py_configure, "make", "make install", "ln -s bazel_install/bin/python3 python_bin", ], build_file_content = """ exports_files(["python_bin"]) filegroup( name = "files", srcs = glob(["bazel_install/**"], exclude = ["**/* *"]), visibility = ["//visibility:public"], ) """, ) pip_install( name = "hermetic_python3_dependencies", requirements = "//:requirements.txt", python_interpreter_target = "@hermetic_interpreter//:python_bin", ) load("@hermetic_python3_dependencies//:requirements.bzl", "requirement") load("@rules_python//python:defs.bzl", "py_binary") load("@rules_python//python:defs.bzl", "py_library") register_toolchains("//src:hermetic_python3_toolchain") # src/BUILD load("@bazel_tools//tools/python:toolchain.bzl", "py_runtime_pair") # Non-hermetic toolchain py_runtime( name = "python3_runtime", interpreter_path = "/usr/bin/python3", python_version = "PY3", visibility = ["//visibility:public"], ) py_runtime_pair( name = "python3_runtime_pair", py2_runtime = None, py3_runtime = ":python3_runtime", ) toolchain( name = "python3_toolchain", toolchain = ":python3_runtime_pair", toolchain_type = "@bazel_tools//tools/python:toolchain_type", ) # Hermetic toolchain py_runtime( name = "hermetic_python3_runtime", files = ["@hermetic_interpreter//:files"], interpreter = "@hermetic_interpreter//:python_bin", python_version = "PY3", visibility = ["//visibility:public"], ) py_runtime_pair( name = "hermetic_python3_runtime_pair", py2_runtime = None, py3_runtime = ":hermetic_python3_runtime", ) toolchain( name = "hermetic_python3_toolchain", toolchain = ":hermetic_python3_runtime_pair", toolchain_type = "@bazel_tools//tools/python:toolchain_type", ) package(default_visibility = ["//visibility:public"]) # /src/some_tool/BUILD 
load("@hermetic_python3_dependencies//:requirements.bzl", "requirement") # Can load this rule from either `hermetic_python3_dependencies` or `python3_dependencies`, but does not seem to make a difference py_binary( name = "some-tool", main = "some_tool.py", srcs = ["some_tool_file.py"], python_version = "PY3", srcs_version = "PY3", deps = [ requirement("requests"), "//src/common/some-library:library", ] ) package(default_visibility = ["//visibility:public"]) A: Consider upgrading rules_python, as that ruleset includes a hermetic python toolchain since https://github.com/bazelbuild/rules_python/releases/tag/0.7.0. If that is not an option: Currently you are registering two toolchains in your WORKSPACE.bazel file and bazel will use its toolchain resolution to pick one of them. You can debug that resolution with the --toolchain_resolution_debug=regex flag to see what is going on. If you want to force the entire build to use one of the toolchains, remove registering the toolchains from the WORKSPACE.bazel file and create a .bazelrc: build:hermetic_python --extra_toolchains=//src:hermetic_python3_toolchain build:system_python --extra_toolchains=//src:python3_toolchain Now you can switch between these toolchains by using bazel build --config=hermetic_python or bazel build --config=system_python. Beware however, that this does not influence which of the python toolchains was used to run the pip_parse(). You need to take extra care from which you load the requirement() function. Simply by load()ing the function you force the evaluation of the pip_parse() and therefor the fetching/compilation of the corresponding python interpreter.
How can I specify which Python toolchain to use in Bazel?
How can I configure Bazel to pick one toolchain over the other? I am okay with defining which toolchain to use via command-line argument or specifying which should be used in a specific target. There are currently two toolchains being defined in my WORKSPACE file. I have two Python toolchains. One of them builds Python from source and includes it in the executable .zip output, and the other one does not. When building, the toolchain that gets used is always the first toolchain which is registered. In this case, python3_tooolchain is used even though the build target imports requirement from hermetic_python3_toolchain. # WORKSPACE load("@bazel_tools//tools/build_defs/repo:http.bzl", "http_archive") load("@rules_python//python:pip.bzl", "pip_install") http_archive( name = "rules_python", url = "https://github.com/bazelbuild/rules_python/releases/download/0.5.0/rules_python-0.5.0.tar.gz", sha256 = "cd6730ed53a002c56ce4e2f396ba3b3be262fd7cb68339f0377a45e8227fe332", ) # Non-hermetic toolchain register_toolchains("//src:python3_toolchain") pip_install( quiet = False, name = "python_dependencies", requirements = "//:requirements.txt", python_interpreter = "/usr/bin/python3" ) load("@python_dependencies//:requirements.bzl", "requirement") # Hermetic toolchain _py_configure = """ if [[ "$OSTYPE" == "darwin"* ]]; then ./configure --prefix=$(pwd)/bazel_install --with-openssl=$(brew --prefix openssl) else ./configure --prefix=$(pwd)/bazel_install fi """ http_archive( name = "hermetic_interpreter", urls = ["https://www.python.org/ftp/python/3.11.0/Python-3.11.0.tar.xz"], sha256 = "a57dc82d77358617ba65b9841cee1e3b441f386c3789ddc0676eca077f2951c3", strip_prefix = "Python-3.11.0", patch_cmds = [ "mkdir $(pwd)/bazel_install", _py_configure, "make", "make install", "ln -s bazel_install/bin/python3 python_bin", ], build_file_content = """ exports_files(["python_bin"]) filegroup( name = "files", srcs = glob(["bazel_install/**"], exclude = ["**/* *"]), visibility = ["//visibility:public"], ) """, ) pip_install( name = "hermetic_python3_dependencies", requirements = "//:requirements.txt", python_interpreter_target = "@hermetic_interpreter//:python_bin", ) load("@hermetic_python3_dependencies//:requirements.bzl", "requirement") load("@rules_python//python:defs.bzl", "py_binary") load("@rules_python//python:defs.bzl", "py_library") register_toolchains("//src:hermetic_python3_toolchain") # src/BUILD load("@bazel_tools//tools/python:toolchain.bzl", "py_runtime_pair") # Non-hermetic toolchain py_runtime( name = "python3_runtime", interpreter_path = "/usr/bin/python3", python_version = "PY3", visibility = ["//visibility:public"], ) py_runtime_pair( name = "python3_runtime_pair", py2_runtime = None, py3_runtime = ":python3_runtime", ) toolchain( name = "python3_toolchain", toolchain = ":python3_runtime_pair", toolchain_type = "@bazel_tools//tools/python:toolchain_type", ) # Hermetic toolchain py_runtime( name = "hermetic_python3_runtime", files = ["@hermetic_interpreter//:files"], interpreter = "@hermetic_interpreter//:python_bin", python_version = "PY3", visibility = ["//visibility:public"], ) py_runtime_pair( name = "hermetic_python3_runtime_pair", py2_runtime = None, py3_runtime = ":hermetic_python3_runtime", ) toolchain( name = "hermetic_python3_toolchain", toolchain = ":hermetic_python3_runtime_pair", toolchain_type = "@bazel_tools//tools/python:toolchain_type", ) package(default_visibility = ["//visibility:public"]) # /src/some_tool/BUILD load("@hermetic_python3_dependencies//:requirements.bzl", "requirement") # 
Can load this rule from either `hermetic_python3_dependencies` or `python3_dependencies`, but does not seem to make a difference py_binary( name = "some-tool", main = "some_tool.py", srcs = ["some_tool_file.py"], python_version = "PY3", srcs_version = "PY3", deps = [ requirement("requests"), "//src/common/some-library:library", ] ) package(default_visibility = ["//visibility:public"])
[ "Consider upgrading rules_python, as that ruleset includes a hermetic python toolchain since https://github.com/bazelbuild/rules_python/releases/tag/0.7.0.\nIf that is not an option:\nCurrently you are registering two toolchains in your WORKSPACE.bazel file and bazel will use its toolchain resolution to pick one of them. You can debug that resolution with the --toolchain_resolution_debug=regex flag to see what is going on.\nIf you want to force the entire build to use one of the toolchains, remove registering the toolchains from the WORKSPACE.bazel file and create a .bazelrc:\nbuild:hermetic_python --extra_toolchains=//src:hermetic_python3_toolchain\nbuild:system_python --extra_toolchains=//src:python3_toolchain\n\nNow you can switch between these toolchains by using bazel build --config=hermetic_python or bazel build --config=system_python.\nBeware however, that this does not influence which of the python toolchains was used to run the pip_parse(). You need to take extra care from which you load the requirement() function. Simply by load()ing the function you force the evaluation of the pip_parse() and therefor the fetching/compilation of the corresponding python interpreter.\n" ]
[ 0 ]
[]
[]
[ "bazel", "build", "python" ]
stackoverflow_0074512774_bazel_build_python.txt
Q: RegEx Match Digits only between 2 strings across mutiple lines I have the text file below: data:<SupplierParty data:xmlns="xxx"> data: <cbc:CustomerAssignedAccountID schemeID="vendor-id"> data: 20750 data: </cbc:CustomerAssignedAccountID> data: <cbc:AdditionalAccountID schemeID="cashflow:v1">151</cbc:AdditionalAccountID> data:<SupplierParty data:xmlns="xxx"> data: <cbc:CustomerAssignedAccountID schemeID="vendor-id"> data: 20751 data: </cbc:CustomerAssignedAccountID> data: <cbc:AdditionalAccountID schemeID="cashflow:v1">151</cbc:AdditionalAccountID> data:<SupplierParty data:xmlns="xxx"> data: <cbc:CustomerAssignedAccountID schemeID="vendor-id"> data: 20752 data: </cbc:CustomerAssignedAccountID> data: <cbc:AdditionalAccountID schemeID="cashflow:v1">151</cbc:AdditionalAccountID> And I only want to extract the values: 20750 20751 20752 From the file. The closest I got to was: (?<=vendor-id"\>)(.*?)(?=\<\/cbc:CustomerAssignedAccountID) But this extracts: data: 20751 data: I want digits only. How do I do this? A: I dont know the language you are using but you can try the below regex (data:\s*<cbc:.*?>\s*)data:\s*(\d+)\s*(?=data:\s*</cbc:.*?>) Below are the matches data: <cbc:CustomerAssignedAccountID schemeID="vendor-id"> data: 20750 data: <cbc:CustomerAssignedAccountID schemeID="vendor-id"> data: 20751 data: <cbc:CustomerAssignedAccountID schemeID="vendor-id"> data: 20752 now the brackets () i have added to create group (\d+) this group will give you the number which you need now i dont know which language you are using but you can easily extract that number by using group A: I'd do it like this: vendor-id">[^<]*?(\d+) The matches will be in matching group 1. Important is the ? after the [^<]* so that it matches non-greedy. https://regex101.com/r/e3eR6y/1
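The question does not say which regex engine is in use; assuming a Python-style engine, a sketch that captures only the digits and lets the match cross the line break (the input file name is an assumption):

import re

with open("input.txt", encoding="utf-8") as fh:
    text = fh.read()

# \s* covers the newline and the indentation before the "data:" prefix
ids = re.findall(r'schemeID="vendor-id">\s*data:\s*(\d+)', text)
print(ids)   # ['20750', '20751', '20752']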
RegEx Match Digits only between 2 strings across multiple lines
I have the text file below: data:<SupplierParty data:xmlns="xxx"> data: <cbc:CustomerAssignedAccountID schemeID="vendor-id"> data: 20750 data: </cbc:CustomerAssignedAccountID> data: <cbc:AdditionalAccountID schemeID="cashflow:v1">151</cbc:AdditionalAccountID> data:<SupplierParty data:xmlns="xxx"> data: <cbc:CustomerAssignedAccountID schemeID="vendor-id"> data: 20751 data: </cbc:CustomerAssignedAccountID> data: <cbc:AdditionalAccountID schemeID="cashflow:v1">151</cbc:AdditionalAccountID> data:<SupplierParty data:xmlns="xxx"> data: <cbc:CustomerAssignedAccountID schemeID="vendor-id"> data: 20752 data: </cbc:CustomerAssignedAccountID> data: <cbc:AdditionalAccountID schemeID="cashflow:v1">151</cbc:AdditionalAccountID> And I only want to extract the values: 20750 20751 20752 From the file. The closest I got to was: (?<=vendor-id"\>)(.*?)(?=\<\/cbc:CustomerAssignedAccountID) But this extracts: data: 20751 data: I want digits only. How do I do this?
[ "I dont know the language you are using but you can try the below regex\n(data:\\s*<cbc:.*?>\\s*)data:\\s*(\\d+)\\s*(?=data:\\s*</cbc:.*?>)\n\nBelow are the matches\ndata: <cbc:CustomerAssignedAccountID schemeID=\"vendor-id\">\ndata: 20750\n\ndata: <cbc:CustomerAssignedAccountID schemeID=\"vendor-id\">\ndata: 20751\n\ndata: <cbc:CustomerAssignedAccountID schemeID=\"vendor-id\">\ndata: 20752\n\nnow the brackets () i have added to create group\n(\\d+) this group will give you the number which you need\n\nnow i dont know which language you are using but you can easily extract that number by using group\n", "I'd do it like this:\nvendor-id\">[^<]*?(\\d+)\nThe matches will be in matching group 1.\nImportant is the ? after the [^<]* so that it matches non-greedy.\nhttps://regex101.com/r/e3eR6y/1\n" ]
[ 0, 0 ]
[]
[]
[ "regex" ]
stackoverflow_0074663013_regex.txt
Q: How to import products with variations in Shopware 6 I'm trying to import products from an XML with variations. The import for the products works so far but it doesn't create the variations. Here is my code (simplified): /** * @return int * @throws \Exception */ public function execute() { // avoid reaching memory limit ini_set('memory_limit', '-1'); // set tax id $this->setTaxId(); if (empty($this->taxId)) { return 1; } // read products from import xml file $importProducts = $this->loadProducts(); $csvBatch = array_chunk($importProducts, self::BATCH); $productNumbers = []; foreach ($csvBatch as $products) { $productNumbers[] = $this->processImportProducts($products, false); } $this->deleteProducts(array_merge(...$productNumbers)); return 0; } /** * @param $productsData * @param $progressBar * @return array */ private function processImportProducts($productsData, $progressBar) { $products = []; $productNumbers = []; foreach ($productsData as $product) { $products[$product['SKU']['@cdata']] = $this->importProducts($product, $progressBar); $productNumbers[] = $product['SKU']['@cdata']; } // upsert product try { $this->cleanProductProperties($products, $this->context); $this->productRepository->upsert(array_values($products), $this->context); } catch (WriteException $exception) { $this->logger->info(' '); $this->logger->info('<error>Products could not be imported. Message: '. $exception->getMessage() .'</error>'); } unset($products); return $productNumbers; } /** * @param $product * @param $progressBar * @return array */ private function importProducts($product, $progressBar) { ... $productData = [ 'id' => $productId, 'productNumber' => $productNumber, 'price' => [ [ 'currencyId' => Defaults::CURRENCY, 'net' => !empty($product['net']) ? $product['net'] : 0, 'gross' => !empty($product['net']) ? $product['net'] : 0, 'linked' => true ] ], 'stock' => 99999, 'unit' => [ 'id' => '3fff95a8077b4f5ba3d1d2a41cb53fab' ], 'unitId' => '3fff95a8077b4f5ba3d1d2a41cb53fab', 'taxId' => $this->taxId, 'name' => $productNames, 'description' => $productDescriptions ]; if(isset($product['Variations'])) { $variationIds = $product['Variations']['@cdata'] ?? ''; $productData['variation'] = [$this->getProductVariationIds($variationIds)]; } return $productData; } /** * Get product variation ids * * @param string $productVariations * @return string */ private function getProductVariationIds($productVariations) { $productVariationIds = explode(',', $productVariations); // get product variationIds in form of a string list $ids = $this->productRepository->search( (new Criteria())->addFilter(new EqualsAnyFilter('productNumber', $productVariationIds)), $this->context )->getIds(); return implode(',', $ids); } It loads correctly the ids but nothing happen. Also no error. Anyone an idea how to import variations as well? A: The variation field is not meant to be persisted or to create variants of a product. It has the Runtime flag, meaning it's not an actual database column but processed during runtime. You have to create/update variants just like you create the parent product. Additionally you have to set the parentId and the options. The latter being associations to property_group_option, which you'll have to create first. So in addition to your existing payload when creating parent products, you'll have to add this data to the variants: $productData = [ // ... 'parentId' => '...' 'options' => [ ['id' => '...'], ['id' => '...'], ['id' => '...'], // ... 
], ]; Finally you'll have to create the product_configurator_setting records. That's one record for each option used across all variants. Also the productId for the records has to be the one of the parent product. $repository = $this->container->get('product_configurator_setting.repository'); $configuratorSettings = []; foreach ($options as $option) { $configuratorSetting = [ 'optionId' => $option['id'], 'productId' => $parentId, ]; $criteria = new Criteria(); $criteria->addFilter(new EqualsFilter('productId', $parentId)); $criteria->addFilter(new EqualsFilter('optionId', $option['id'])); $id = $repository->searchIds($criteria, $context)->firstId(); // if the configurator setting already exists, update or skip if ($id) { $configuratorSetting['id'] = $id; } $configuratorSettings[] = $configuratorSetting; } $repository->upsert(configuratorSettings, $context); A: Just as an addition to make things easier. When creating a product with variants you can just update the configuratorSettings of the parent/father/main-product (whatever you call it). Then Shopware6 will go and create the variant products automatically. Also the uuids of the children are created automatically. So if need to keep track of these you have to query them after the creation process. But for a fast creation this might be much faster, if you have a lot of variants the only "variation" are the options. So no special images or texts.
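A hedged sketch of a single upsert payload that combines both answers: the parent carries configuratorSettings and the variants are given explicitly as children; every UUID below is a placeholder, and the association names should be checked against the Shopware 6 version in use:

// option ids from property_group_option, assumed to exist already
$redId  = '...';
$blueId = '...';

$this->productRepository->upsert([[
    'id'            => $parentId,
    'productNumber' => $product['SKU']['@cdata'],
    'name'          => $productNames,
    'taxId'         => $this->taxId,
    'stock'         => 99999,
    'price'         => [[
        'currencyId' => Defaults::CURRENCY,
        'net'        => $product['net'] ?? 0,
        'gross'      => $product['net'] ?? 0,
        'linked'     => true,
    ]],
    // one record per option used across all variants, attached to the parent
    'configuratorSettings' => [
        ['optionId' => $redId],
        ['optionId' => $blueId],
    ],
    // explicit variants; each one only needs its own number, stock and options
    'children' => [
        ['productNumber' => $product['SKU']['@cdata'] . '.red',  'stock' => 99999, 'options' => [['id' => $redId]]],
        ['productNumber' => $product['SKU']['@cdata'] . '.blue', 'stock' => 99999, 'options' => [['id' => $blueId]]],
    ],
]], $this->context);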
How to import products with variations in Shopware 6
I'm trying to import products from an XML with variations. The import for the products works so far but it doesn't create the variations. Here is my code (simplified): /** * @return int * @throws \Exception */ public function execute() { // avoid reaching memory limit ini_set('memory_limit', '-1'); // set tax id $this->setTaxId(); if (empty($this->taxId)) { return 1; } // read products from import xml file $importProducts = $this->loadProducts(); $csvBatch = array_chunk($importProducts, self::BATCH); $productNumbers = []; foreach ($csvBatch as $products) { $productNumbers[] = $this->processImportProducts($products, false); } $this->deleteProducts(array_merge(...$productNumbers)); return 0; } /** * @param $productsData * @param $progressBar * @return array */ private function processImportProducts($productsData, $progressBar) { $products = []; $productNumbers = []; foreach ($productsData as $product) { $products[$product['SKU']['@cdata']] = $this->importProducts($product, $progressBar); $productNumbers[] = $product['SKU']['@cdata']; } // upsert product try { $this->cleanProductProperties($products, $this->context); $this->productRepository->upsert(array_values($products), $this->context); } catch (WriteException $exception) { $this->logger->info(' '); $this->logger->info('<error>Products could not be imported. Message: '. $exception->getMessage() .'</error>'); } unset($products); return $productNumbers; } /** * @param $product * @param $progressBar * @return array */ private function importProducts($product, $progressBar) { ... $productData = [ 'id' => $productId, 'productNumber' => $productNumber, 'price' => [ [ 'currencyId' => Defaults::CURRENCY, 'net' => !empty($product['net']) ? $product['net'] : 0, 'gross' => !empty($product['net']) ? $product['net'] : 0, 'linked' => true ] ], 'stock' => 99999, 'unit' => [ 'id' => '3fff95a8077b4f5ba3d1d2a41cb53fab' ], 'unitId' => '3fff95a8077b4f5ba3d1d2a41cb53fab', 'taxId' => $this->taxId, 'name' => $productNames, 'description' => $productDescriptions ]; if(isset($product['Variations'])) { $variationIds = $product['Variations']['@cdata'] ?? ''; $productData['variation'] = [$this->getProductVariationIds($variationIds)]; } return $productData; } /** * Get product variation ids * * @param string $productVariations * @return string */ private function getProductVariationIds($productVariations) { $productVariationIds = explode(',', $productVariations); // get product variationIds in form of a string list $ids = $this->productRepository->search( (new Criteria())->addFilter(new EqualsAnyFilter('productNumber', $productVariationIds)), $this->context )->getIds(); return implode(',', $ids); } It loads correctly the ids but nothing happen. Also no error. Anyone an idea how to import variations as well?
[ "The variation field is not meant to be persisted or to create variants of a product. It has the Runtime flag, meaning it's not an actual database column but processed during runtime.\nYou have to create/update variants just like you create the parent product. Additionally you have to set the parentId and the options. The latter being associations to property_group_option, which you'll have to create first.\nSo in addition to your existing payload when creating parent products, you'll have to add this data to the variants:\n$productData = [\n // ...\n 'parentId' => '...'\n 'options' => [\n ['id' => '...'],\n ['id' => '...'],\n ['id' => '...'],\n // ...\n ],\n];\n\nFinally you'll have to create the product_configurator_setting records. That's one record for each option used across all variants. Also the productId for the records has to be the one of the parent product.\n$repository = $this->container->get('product_configurator_setting.repository');\n\n$configuratorSettings = [];\nforeach ($options as $option) {\n $configuratorSetting = [\n 'optionId' => $option['id'],\n 'productId' => $parentId,\n ];\n\n $criteria = new Criteria();\n $criteria->addFilter(new EqualsFilter('productId', $parentId));\n $criteria->addFilter(new EqualsFilter('optionId', $option['id']));\n\n $id = $repository->searchIds($criteria, $context)->firstId();\n\n // if the configurator setting already exists, update or skip\n if ($id) {\n $configuratorSetting['id'] = $id;\n }\n\n $configuratorSettings[] = $configuratorSetting;\n}\n\n$repository->upsert(configuratorSettings, $context);\n\n", "Just as an addition to make things easier. When creating a product with variants you can just update the configuratorSettings of the parent/father/main-product (whatever you call it).\nThen Shopware6 will go and create the variant products automatically. Also the uuids of the children are created automatically. So if need to keep track of these you have to query them after the creation process.\nBut for a fast creation this might be much faster, if you have a lot of variants the only \"variation\" are the options. So no special images or texts.\n" ]
[ 1, 0 ]
[]
[]
[ "shopware", "shopware6" ]
stackoverflow_0074644171_shopware_shopware6.txt
Q: Violation of PRIMARY KEY constraint ''. Cannot insert duplicate key in object '' I'm trying to create Match between two User's. However i'm getting this error Microsoft.EntityFrameworkCore.DbUpdateException: An error occurred while saving the entity changes. See the inner exception for details. ---> Microsoft.Data.SqlClient.SqlException (0x80131904): Violation of PRIMARY KEY "PK_Users" constraint. Cannot insert duplicate key into object 'dbo.Users'. Duplicate key value: (0c7a0cdc-a7ba-4cf8-ade2-0cef44761597). I have this entities public class Match { public Guid Id { get; set; } public User User { get; set; } public User LikedUser { get; set; } public bool Matched { get; set; } } public class User { public string Username { get; set; } public string Name { get; set; } public string Surname { get; set; } public Guid CityId { get; set; } public City City { get; set; } public Guid GenderId { get; set; } public Gender Gender { get; set; } public ICollection<Match> Matches { get; set; } = new HashSet<Match>(); public ICollection<Match> LikedBy { get; set; } = new HashSet<Match>(); } That's unfinished method i just want to check if it would create Match or not public async Task<Guid> Handle(CreateMatchCommand request, CancellationToken cancellationToken) { var liker = await _unitOfWork.UserRepository.FindByIdAsync(request.Dto.Liker); var liked = await _unitOfWork.UserRepository.FindByIdAsync(request.Dto.Liked); var match = new Match { LikedUser = liked, User = liker }; var matchId = await _unitOfWork.MatchRepository.CreateAsync(match); await _unitOfWork.SaveChangesAsync(cancellationToken); return matchId; } MatchRepository public async Task<Guid> CreateAsync(Match match) { var matchEntity = _mapper.Map<MatchEntity>(match); await _context.Matches.AddAsync(matchEntity); return matchEntity.Id; } UserRepository public async Task<User> FindByIdAsync(Guid guid) { var user = await _context.Users .FirstOrDefaultAsync(u => u.Id == guid); if (user == null) throw new NotFoundException(typeof(User), guid); _context.Entry(user).State = EntityState.Detached; return _mapper.Map<User>(user); } Why i'm getting this error? A: Yes, this is what will actually happen to you, the reason is that both the user and the second user will be considered new objects. What you will do is the following: add the UserId and LikedUserId in the Match, and save the new ones as follows: var match = new Match { LikedUserId = liked.Id, UserId = liker.Id }; A: In userRepository, try removing this line. public async Task<User> FindByIdAsync(Guid guid) { var user = await _context.Users .FirstOrDefaultAsync(u => u.Id == guid); if (user == null) throw new NotFoundException(typeof(User), guid); //_context.Entry(user).State = EntityState.Detached; <-- Remove this return _mapper.Map<User>(user); } So instead of tracking the Liked and Like users as new object, we will keep tracking them as an existing objects that are fetch from database, so EF Core will just update the foreign key in the Match entity automatically. 
Edit: since the first solution didn't work, you can also try attaching the entities explicitly like this:
public async Task<Guid> Handle(CreateMatchCommand request, CancellationToken cancellationToken)
{
    var liker = await _unitOfWork.UserRepository.FindByIdAsync(request.Dto.Liker);
    var liked = await _unitOfWork.UserRepository.FindByIdAsync(request.Dto.Liked);

    _unitOfWork.Entry(liker).State = EntityState.Unchanged;
    _unitOfWork.Entry(liked).State = EntityState.Unchanged;

    var match = new Match { LikedUser = liked, User = liker };

    var matchId = await _unitOfWork.MatchRepository.CreateAsync(match);
    await _unitOfWork.SaveChangesAsync(cancellationToken);

    return matchId;
}

So we tell EF Core that the Liker and Liked entities already exist in the database and must not be inserted again; it only has to set the Match foreign keys to point to those existing rows.
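For the first approach to compile, the Match entity needs explicit foreign key properties; a rough sketch of what that could look like (the property names are assumptions, and with two relationships to the same Users table you may still need explicit relationship/delete-behavior configuration):
public class Match
{
    public Guid Id { get; set; }

    public Guid UserId { get; set; }        // FK to the liking user (assumed name)
    public User User { get; set; }

    public Guid LikedUserId { get; set; }   // FK to the liked user (assumed name)
    public User LikedUser { get; set; }

    public bool Matched { get; set; }
}

// the handler then only needs the ids from the request, no user entities are loaded or tracked
var match = new Match
{
    UserId = request.Dto.Liker,
    LikedUserId = request.Dto.Liked
};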
Violation of PRIMARY KEY constraint ''. Cannot insert duplicate key in object ''
I'm trying to create Match between two User's. However i'm getting this error Microsoft.EntityFrameworkCore.DbUpdateException: An error occurred while saving the entity changes. See the inner exception for details. ---> Microsoft.Data.SqlClient.SqlException (0x80131904): Violation of PRIMARY KEY "PK_Users" constraint. Cannot insert duplicate key into object 'dbo.Users'. Duplicate key value: (0c7a0cdc-a7ba-4cf8-ade2-0cef44761597). I have this entities public class Match { public Guid Id { get; set; } public User User { get; set; } public User LikedUser { get; set; } public bool Matched { get; set; } } public class User { public string Username { get; set; } public string Name { get; set; } public string Surname { get; set; } public Guid CityId { get; set; } public City City { get; set; } public Guid GenderId { get; set; } public Gender Gender { get; set; } public ICollection<Match> Matches { get; set; } = new HashSet<Match>(); public ICollection<Match> LikedBy { get; set; } = new HashSet<Match>(); } That's unfinished method i just want to check if it would create Match or not public async Task<Guid> Handle(CreateMatchCommand request, CancellationToken cancellationToken) { var liker = await _unitOfWork.UserRepository.FindByIdAsync(request.Dto.Liker); var liked = await _unitOfWork.UserRepository.FindByIdAsync(request.Dto.Liked); var match = new Match { LikedUser = liked, User = liker }; var matchId = await _unitOfWork.MatchRepository.CreateAsync(match); await _unitOfWork.SaveChangesAsync(cancellationToken); return matchId; } MatchRepository public async Task<Guid> CreateAsync(Match match) { var matchEntity = _mapper.Map<MatchEntity>(match); await _context.Matches.AddAsync(matchEntity); return matchEntity.Id; } UserRepository public async Task<User> FindByIdAsync(Guid guid) { var user = await _context.Users .FirstOrDefaultAsync(u => u.Id == guid); if (user == null) throw new NotFoundException(typeof(User), guid); _context.Entry(user).State = EntityState.Detached; return _mapper.Map<User>(user); } Why i'm getting this error?
[ "Yes, this is what will actually happen to you, the reason is that both the user and the second user will be considered new objects. What you will do is the following: add the UserId and LikedUserId in the Match, and save the new ones as follows:\nvar match = new Match { LikedUserId = liked.Id, UserId = liker.Id };\n", "In userRepository, try removing this line.\npublic async Task<User> FindByIdAsync(Guid guid)\n {\n var user = await _context.Users\n .FirstOrDefaultAsync(u => u.Id == guid);\n if (user == null) throw new NotFoundException(typeof(User), guid);\n\n //_context.Entry(user).State = EntityState.Detached; <-- Remove this\n return _mapper.Map<User>(user);\n }\n\nSo instead of tracking the Liked and Like users as new object, we will keep tracking them as an existing objects that are fetch from database, so EF Core will just update the foreign key in the Match entity automatically.\nEdit : Since first solution didn't work, you can also try like this to automatically attached it:\npublic async Task<Guid> Handle(CreateMatchCommand request, CancellationToken cancellationToken)\n{\n var liker = await _unitOfWork.UserRepository.FindByIdAsync(request.Dto.Liker);\n var liked = await _unitOfWork.UserRepository.FindByIdAsync(request.Dto.Liked);\n\n_unitOfWork.Entry(liker).State = EntityState.Unchanged;\n_unitOfWork.Entry(liked).State = EntityState.Unchanged;\n\n var match = new Match { LikedUser = liked, User = liker };\n\n var matchId = await \n \n \n _unitOfWork.MatchRepository.CreateAsync(match);\n await _unitOfWork.SaveChangesAsync(cancellationToken);\n\n return matchId;\n}\n\nSo we will tell EF Core that entity Liker and Liked are existing entities from database, and no need for it to be reinserted. Just update the Match foreign keys to point to the Liker and Liked entities\n" ]
[ 2, 0 ]
[]
[]
[ "c#", "entity_framework", "entity_framework_core" ]
stackoverflow_0074661920_c#_entity_framework_entity_framework_core.txt
Q: Is it possible to integrate Control-M with MWAA Airflow to trigger and monitor jobs? Pretty new to these orchestration tools. Is it possible to integrate Control-M and MWAA Airflow to trigger/track jobs? Couldn't find much information regarding the same.

A: Yes, full support for Airflow was added in Control-M v21 (released in October).

A: Yes, it is possible to integrate Control-M with Apache Airflow for triggering and tracking jobs. Control-M is a job scheduling and workload automation tool, while Apache Airflow is an open-source platform to programmatically author, schedule, and monitor workflows.
To integrate Control-M with Apache Airflow, you can use the Control-M Automation API, which allows you to automate and orchestrate your Control-M workflows directly from Apache Airflow. This integration can be useful for automating and managing complex job schedules and dependencies, and for tracking the progress of your Control-M jobs within Airflow.
Here are some general steps to integrate Control-M with Airflow:

1. Install and set up Control-M and Apache Airflow on your system.
2. Obtain the necessary credentials for accessing the Control-M Automation API.
3. Install the Control-M Automation API Python package in your Airflow environment.
4. Create a Control-M connection in Airflow using the credentials from step 2.
5. Use the Control-M operators provided by the Automation API package to create and manage your Control-M workflows within Airflow.

For detailed instructions and examples, you can refer to the Control-M Automation API documentation and the Airflow documentation.
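As a rough illustration of step 5, an Airflow task could also call the Automation API directly over REST; the endpoint paths, payload fields and credentials below are placeholders and assumptions that need to be checked against the Automation API documentation for your Control-M version:
# Hypothetical sketch only: endpoints, payload fields and credential handling are assumptions.
from datetime import datetime

import requests
from airflow.decorators import dag, task

CTM_ENDPOINT = "https://ctm-host:8443/automation-api"  # placeholder host

@dag(schedule_interval=None, start_date=datetime(2022, 1, 1), catchup=False)
def trigger_controlm_job():

    @task
    def order_job():
        # 1. log in to obtain a session token (assumed endpoint and fields)
        login = requests.post(
            f"{CTM_ENDPOINT}/session/login",
            json={"username": "ctm_user", "password": "ctm_password"},
        )
        login.raise_for_status()
        token = login.json()["token"]

        # 2. order (trigger) a job from a folder (assumed endpoint and fields)
        run = requests.post(
            f"{CTM_ENDPOINT}/run/order",
            headers={"Authorization": f"Bearer {token}"},
            json={"ctm": "ctm-server", "folder": "MyFolder", "jobs": "MyJob"},
        )
        run.raise_for_status()
        return run.json()

    order_job()

trigger_controlm_job()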
Is it possible to integrate Control-M with MWAA Airflow to trigger and monitor jobs?
Pretty new to these orchestration tools. Is it possible to integrate Control-M and mwaa airflow to trigger/ track jobs? Couldn't find much information regarding the same.
[ "Yes, full support for Airflow was added in Control-M v21 (released in October).\n", "Yes, it is possible to integrate Control-M with Apache Airflow for triggering and tracking jobs. Control-M is a job scheduling and workload automation tool, while Apache Airflow is an open-source platform to programmatically author, schedule, and monitor workflows.\nTo integrate Control-M with Apache Airflow, you can use the Control-M Automation API, which allows you to automate and orchestrate your Control-M workflows directly from Apache Airflow. This integration can be useful for automating and managing complex job schedules and dependencies, and for tracking the progress of your Control-M jobs within Airflow.\nHere are some general steps to integrate Control-M with Airflow:\n\nInstall and set up Control-M and Apache Airflow on your system.\nObtain the necessary credentials for accessing the Control-M Automation API.\nInstall the Control-M Automation API Python package in your Airflow environment.\nCreate a Control-M connection in Airflow using the credentials from step 2.\nUse the Control-M operators provided by the Automation API package to create and manage your Control-M workflows within Airflow.\n\nFor detailed instructions and examples, you can refer to the Control-M Automation API documentation and the Airflow documentation.\n" ]
[ 0, 0 ]
[]
[]
[ "airflow", "control_m", "mwaa", "orchestration" ]
stackoverflow_0074654212_airflow_control_m_mwaa_orchestration.txt
Q: How to convert every parameter of a function to any specific type? I have created a clamp function to bound value in a given range. (Almost every one of you know what a clamp function does) So this the function I created (using TS) function clamp(value: number, min: number, max: number) { return Math.min(Math.max(value, min), max) } but, there are some use cases where I want to get rid of converting all three params to Number type to pass it inside the function. I know I would have done something like this function clamp(value: number, min: number, max: number) { return Math.min(Math.max(Number(value), Number(min)), Number(max)) } converting every single param to Number type. I want to know if there is/are any other way/ways where I can just convert every param to Number type at once?? A: You can use Array.map to convert all values to Number at once. In plain old JS: function allNumbers(...values) { return values.map(Number).filter(v => !isNaN(v)); } console.log(`${allNumbers(1,`26`, 42)}`); console.log(`${allNumbers(...[...`123`])}`); console.log(`${allNumbers(`20`, `+`, `22`, `=`, 42)}`); console.log(`${allNumbers(...(`20+22=42`.split(/[+=]/)))}`); Typescript (see also...) function clamp(...values: Array<number|string>) { const numbers: (number|Nan)[] = values.map(Number); if (numbers.filter(v => !isNaN(v)).length === 3) { const [value, min, max] = numbers; return Math.min(Math.max(value, min), max); } return `insufficient argument(s)`; } console.log(clamp(42)); console.log(clamp(1,3,`42`)); A: Yes, instead of using the 'Number' constructor to convert each parameter individually, you can use the '+' operator to convert all of the parameters to the 'Number' type in one step. Here is an example of how you could use the '+' operator to convert all of the parameters to the 'Number' type in your 'clamp' function: function clamp(value: number, min: number, max: number) { // Use the '+' operator to convert all of the parameters to the 'Number' type return Math.min(Math.max(+value, +min), +max) } The '+' operator is a shorthand way of converting a value to a number. When used with a numeric value, it simply returns the original value. However, when used with a non-numeric value (such as a string), it converts the value to a number. In the example above, the '+' operator is used to convert the 'value', 'min', and 'max' parameters to the 'Number' type before passing them to the 'Math.min' and 'Math.max' functions. You can use this technique to avoid having to convert each parameter to the 'Number' type individually, making your code more concise and easier to read.
How to convert every parameter of a function to any specific type?
I have created a clamp function to bound value in a given range. (Almost every one of you know what a clamp function does) So this the function I created (using TS) function clamp(value: number, min: number, max: number) { return Math.min(Math.max(value, min), max) } but, there are some use cases where I want to get rid of converting all three params to Number type to pass it inside the function. I know I would have done something like this function clamp(value: number, min: number, max: number) { return Math.min(Math.max(Number(value), Number(min)), Number(max)) } converting every single param to Number type. I want to know if there is/are any other way/ways where I can just convert every param to Number type at once??
[ "You can use Array.map to convert all values to Number at once. In plain old JS:\n\n\nfunction allNumbers(...values) {\n return values.map(Number).filter(v => !isNaN(v));\n}\n\nconsole.log(`${allNumbers(1,`26`, 42)}`);\nconsole.log(`${allNumbers(...[...`123`])}`);\nconsole.log(`${allNumbers(`20`, `+`, `22`, `=`, 42)}`);\nconsole.log(`${allNumbers(...(`20+22=42`.split(/[+=]/)))}`);\n\n\n\nTypescript (see also...)\n\n\nfunction clamp(...values: Array<number|string>) {\n const numbers: (number|Nan)[] = values.map(Number);\n if (numbers.filter(v => !isNaN(v)).length === 3) {\n const [value, min, max] = numbers;\n return Math.min(Math.max(value, min), max);\n }\n return `insufficient argument(s)`;\n}\n\nconsole.log(clamp(42));\nconsole.log(clamp(1,3,`42`));\n\n\n\n", "Yes, instead of using the 'Number' constructor to convert each parameter individually, you can use the '+' operator to convert all of the parameters to the 'Number' type in one step.\nHere is an example of how you could use the '+' operator to convert all of the parameters to the 'Number' type in your 'clamp' function:\nfunction clamp(value: number, min: number, max: number) {\n // Use the '+' operator to convert all of the parameters to the 'Number' type\n return Math.min(Math.max(+value, +min), +max)\n}\n\nThe '+' operator is a shorthand way of converting a value to a number. When used with a numeric value, it simply returns the original value. However, when used with a non-numeric value (such as a string), it converts the value to a number.\nIn the example above, the '+' operator is used to convert the 'value', 'min', and 'max' parameters to the 'Number' type before passing them to the 'Math.min' and 'Math.max' functions.\nYou can use this technique to avoid having to convert each parameter to the 'Number' type individually, making your code more concise and easier to read.\n" ]
[ 2, 0 ]
[]
[]
[ "javascript", "typescript" ]
stackoverflow_0074665906_javascript_typescript.txt
Q: How take each two items from IEnumerable as a pair? I have IEnumerable<string> which looks like {"First", "1", "Second", "2", ... }. I need to iterate through the list and create IEnumerable<Tuple<string, string>> where Tuples will look like: "First", "1" "Second", "2" So I need to create pairs from a list I have to get pairs as mentioned above. A: A lazy extension method to achieve this is: public static IEnumerable<Tuple<T, T>> Tupelize<T>(this IEnumerable<T> source) { using (var enumerator = source.GetEnumerator()) while (enumerator.MoveNext()) { var item1 = enumerator.Current; if (!enumerator.MoveNext()) throw new ArgumentException(); var item2 = enumerator.Current; yield return new Tuple<T, T>(item1, item2); } } Note that if the number of elements happens to not be even this will throw. Another way would be to use this extensions method to split the source collection into chunks of 2: public static IEnumerable<IEnumerable<T>> Chunk<T>(this IEnumerable<T> list, int batchSize) { var batch = new List<T>(batchSize); foreach (var item in list) { batch.Add(item); if (batch.Count == batchSize) { yield return batch; batch = new List<T>(batchSize); } } if (batch.Count > 0) yield return batch; } Then you can do: var tuples = items.Chunk(2) .Select(x => new Tuple<string, string>(x.First(), x.Skip(1).First())) .ToArray(); Finally, to use only existing extension methods: var tuples = items.Where((x, i) => i % 2 == 0) .Zip(items.Where((x, i) => i % 2 == 1), (a, b) => new Tuple<string, string>(a, b)) .ToArray(); A: morelinq contains a Batch extension method which can do what you want: var str = new string[] { "First", "1", "Second", "2", "Third", "3" }; var tuples = str.Batch(2, r => new Tuple<string, string>(r.FirstOrDefault(), r.LastOrDefault())); A: You can make this work using the LINQ .Zip() extension method: IEnumerable<string> source = new List<string> { "First", "1", "Second", "2" }; var tupleList = source.Zip(source.Skip(1), (a, b) => new Tuple<string, string>(a, b)) .Where((x, i) => i % 2 == 0) .ToList(); Basically the approach is zipping up the source Enumerable with itself, skipping the first element so the second enumeration is one off - that will give you the pairs ("First, "1"), ("1", "Second"), ("Second", "2"). Then we are filtering the odd tuples since we don't want those and end up with the right tuple pairs ("First, "1"), ("Second", "2") and so on. Edit: I actually agree with the sentiment of the comments - this is what I would consider "clever" code - looks smart, but has obvious (and not so obvious) downsides: Performance: the Enumerable has to be traversed twice - for the same reason it cannot be used on Enumerables that consume their source, i.e. data from network streams. Maintenance: It's not obvious what the code does - if someone else is tasked to maintain the code there might be trouble ahead, especially given point 1. Having said that, I'd probably use a good old foreach loop myself given the choice, or with a list as source collection a for loop so I can use the index directly. A: You could do something like: var pairs = source.Select((value, index) => new {Index = index, Value = value}) .GroupBy(x => x.Index / 2) .Select(g => new Tuple<string, string>(g.ElementAt(0).Value, g.ElementAt(1).Value)); This will get you an IEnumerable<Tuple<string, string>>. It works by grouping the elements by their odd/even positions and then expanding each group into a Tuple. 
The benefit of this approach over the Zip approach suggested by BrokenGlass is that it only enumerates the original enumerable once. It is however hard for someone to understand at first glance, so I would either do it another way (ie. not using linq), or document its intention next to where it is used. A: IEnumerable<T> items = ...; using (var enumerator = items.GetEnumerator()) { while (enumerator.MoveNext()) { T first = enumerator.Current; bool hasSecond = enumerator.MoveNext(); Trace.Assert(hasSecond, "Collection must have even number of elements."); T second = enumerator.Current; var tuple = new Tuple<T, T>(first, second); //Now you have the tuple } } A: If you are using .NET 4.0, then you can use tuple object (see http://mutelight.org/articles/finally-tuples-in-c-sharp.html). Together with LINQ it should give you what you need. If not, then you probably need to define your own tuples to do that or encode those strings like for example "First:1", "Second:2" and then decode it (also with LINQ). A: Starting from NET 6.0, you can use Enumerable.Chunk(IEnumerable, Int32) var tuples = new[] {"First", "1", "Second", "2", "Incomplete" } .Chunk(2) .Where(chunk => chunk.Length == 2) .Select(chunk => (chunk[0], chunk[1]));
How take each two items from IEnumerable as a pair?
I have IEnumerable<string> which looks like {"First", "1", "Second", "2", ... }. I need to iterate through the list and create IEnumerable<Tuple<string, string>> where Tuples will look like: "First", "1" "Second", "2" So I need to create pairs from a list I have to get pairs as mentioned above.
[ "A lazy extension method to achieve this is:\npublic static IEnumerable<Tuple<T, T>> Tupelize<T>(this IEnumerable<T> source)\n{\n using (var enumerator = source.GetEnumerator())\n while (enumerator.MoveNext())\n {\n var item1 = enumerator.Current;\n\n if (!enumerator.MoveNext())\n throw new ArgumentException();\n\n var item2 = enumerator.Current;\n\n yield return new Tuple<T, T>(item1, item2);\n }\n}\n\nNote that if the number of elements happens to not be even this will throw. Another way would be to use this extensions method to split the source collection into chunks of 2:\npublic static IEnumerable<IEnumerable<T>> Chunk<T>(this IEnumerable<T> list, int batchSize)\n{\n\n var batch = new List<T>(batchSize);\n\n foreach (var item in list)\n {\n batch.Add(item);\n if (batch.Count == batchSize)\n {\n yield return batch;\n batch = new List<T>(batchSize);\n }\n }\n\n if (batch.Count > 0)\n yield return batch;\n}\n\nThen you can do:\nvar tuples = items.Chunk(2)\n .Select(x => new Tuple<string, string>(x.First(), x.Skip(1).First()))\n .ToArray();\n\nFinally, to use only existing extension methods:\nvar tuples = items.Where((x, i) => i % 2 == 0)\n .Zip(items.Where((x, i) => i % 2 == 1), \n (a, b) => new Tuple<string, string>(a, b))\n .ToArray();\n\n", "morelinq contains a Batch extension method which can do what you want:\nvar str = new string[] { \"First\", \"1\", \"Second\", \"2\", \"Third\", \"3\" };\nvar tuples = str.Batch(2, r => new Tuple<string, string>(r.FirstOrDefault(), r.LastOrDefault()));\n\n", "You can make this work using the LINQ .Zip() extension method:\nIEnumerable<string> source = new List<string> { \"First\", \"1\", \"Second\", \"2\" };\nvar tupleList = source.Zip(source.Skip(1), \n (a, b) => new Tuple<string, string>(a, b))\n .Where((x, i) => i % 2 == 0)\n .ToList();\n\nBasically the approach is zipping up the source Enumerable with itself, skipping the first element so the second enumeration is one off - that will give you the pairs (\"First, \"1\"), (\"1\", \"Second\"), (\"Second\", \"2\").\nThen we are filtering the odd tuples since we don't want those and end up with the right tuple pairs (\"First, \"1\"), (\"Second\", \"2\") and so on.\nEdit:\nI actually agree with the sentiment of the comments - this is what I would consider \"clever\" code - looks smart, but has obvious (and not so obvious) downsides:\n\nPerformance: the Enumerable has to\nbe traversed twice - for the same\nreason it cannot be used on\nEnumerables that consume their\nsource, i.e. data from network\nstreams.\nMaintenance: It's not obvious what\nthe code does - if someone else is\ntasked to maintain the code there\nmight be trouble ahead, especially\ngiven point 1.\n\nHaving said that, I'd probably use a good old foreach loop myself given the choice, or with a list as source collection a for loop so I can use the index directly.\n", "You could do something like: \nvar pairs = source.Select((value, index) => new {Index = index, Value = value})\n .GroupBy(x => x.Index / 2)\n .Select(g => new Tuple<string, string>(g.ElementAt(0).Value, \n g.ElementAt(1).Value));\n\nThis will get you an IEnumerable<Tuple<string, string>>. It works by grouping the elements by their odd/even positions and then expanding each group into a Tuple. The benefit of this approach over the Zip approach suggested by BrokenGlass is that it only enumerates the original enumerable once.\nIt is however hard for someone to understand at first glance, so I would either do it another way (ie. 
not using linq), or document its intention next to where it is used.\n", "IEnumerable<T> items = ...;\nusing (var enumerator = items.GetEnumerator())\n{\n while (enumerator.MoveNext())\n {\n T first = enumerator.Current;\n bool hasSecond = enumerator.MoveNext();\n Trace.Assert(hasSecond, \"Collection must have even number of elements.\");\n T second = enumerator.Current;\n\n var tuple = new Tuple<T, T>(first, second);\n //Now you have the tuple\n }\n}\n\n", "If you are using .NET 4.0, then you can use tuple object (see http://mutelight.org/articles/finally-tuples-in-c-sharp.html). Together with LINQ it should give you what you need. If not, then you probably need to define your own tuples to do that or encode those strings like for example \"First:1\", \"Second:2\" and then decode it (also with LINQ).\n", "Starting from NET 6.0, you can use\nEnumerable.Chunk(IEnumerable, Int32)\nvar tuples = new[] {\"First\", \"1\", \"Second\", \"2\", \"Incomplete\" }\n .Chunk(2)\n .Where(chunk => chunk.Length == 2)\n .Select(chunk => (chunk[0], chunk[1]));\n\n" ]
[ 14, 5, 4, 4, 1, 0, 0 ]
[]
[]
[ "c#", "ienumerable", "list", "tuples" ]
stackoverflow_0005239635_c#_ienumerable_list_tuples.txt
Q: Cannot see Flutter Web activity history in Firebase Analytics I have added Firebase Analytics to a game I have developed as a Flutter web project and can see real-time data bars coming through for users in the last 30 minutes, but cannot see any historic user or event data. I added Firebase to my project yesterday and had a number of users access it, but cannot see any record of those events today. I have gone through the set-up process again to check that everything is set up correctly and I believe that it is. Accessing the web site this morning, I saw a real-time record of me accessing the app as below: However, the historic user activity over time graph shows nothing: I have created a Singleton to be able to access the analytics instances as follows: // Singleton class to hold the Analytics for for Firebase import 'package:firebase_analytics/firebase_analytics.dart'; import 'package:firebase_core/firebase_core.dart'; import 'package:septuple/firebase_options.dart'; class Analytics { Analytics._(); // Generate an instance of the firebase analytics class final FirebaseAnalytics analytics = FirebaseAnalytics.instance; // Create an instance of this class as a singleton static final Analytics _instance = Analytics._(); // Initialise firebase for the app static Future<void> init() async { await Firebase.initializeApp( options: DefaultFirebaseOptions.currentPlatform, ); getData().setAnalyticsCollectionEnabled(true); } //Get the FirebaseAnalystics object to do event logging static FirebaseAnalytics getData() { return _instance.analytics; } } I have initialised this in my main() method as follows: void main() async { // // Initialise the Hive database await Hive.initFlutter(); await Hive.openBox('septupleBox'); // // Initialise firebase in the Analytics singleton to enable event logging // await Analytics.init(); // // Run the app runApp(Game()); } I am logging events in my state manager class, for example when the user guesses the word, as follows: void setWin(int attempts) { // Log end of game and that the user guessed correctly Analytics.getData().logLevelEnd( levelName: 'septuple', success: attempts, ); gameState = GameState.win; _box.put(_gameStateName, gameState.toString()); _game.gameTimer.startCountdown(); notifyListeners(); } "logLevelEnd is a default event type so I am assuming that this should just appear in Firebase Analytics. Do I need to configure anything so that it actively captures and retains this? I am using the following dependencies for the project: dependencies: flutter: sdk: flutter hive: ^2.2.3 hive_flutter: ^1.1.0 responsive_sizer: ^3.1.1 #flutter_launcher_icons: ^0.10.0 toggle_switch: ^2.0.1 # The following adds the Cupertino Icons font to your application. # Use with the CupertinoIcons class for iOS style icons. cupertino_icons: ^1.0.2 firebase_core: ^2.3.0 firebase_analytics: ^10.0.6 And finally, I have a firebase_options.dart file that is configured for web. 
For completeness, here is the output from flutter doctor -v: flutter doctor -v [√] Flutter (Channel stable, 3.3.9, on Microsoft Windows [Version 10.0.22621.819], locale en-GB) • Flutter version 3.3.9 on channel stable at D:\flutter • Upstream repository https://github.com/flutter/flutter.git • Framework revision b8f7f1f986 (10 days ago), 2022-11-23 06:43:51 +0900 • Engine revision 8f2221fbef • Dart version 2.18.5 • DevTools version 2.15.0 [√] Android toolchain - develop for Android devices (Android SDK version 32.0.0-rc1) • Android SDK at C:\Users\LaptopBob\AppData\Local\Android\Sdk • Platform android-33, build-tools 32.0.0-rc1 • ANDROID_HOME = C:\Users\LaptopBob\AppData\Local\Android\Sdk • ANDROID_SDK_ROOT = D:\Users\LaptopBob\Android_SDK\avd • Java binary at: C:\Program Files\Android\Android Studio\jre\bin\java • Java version OpenJDK Runtime Environment (build 11.0.13+0-b1751.21-8125866) • All Android licenses accepted. [√] Chrome - develop for the web • Chrome at C:\Program Files (x86)\Google\Chrome\Application\chrome.exe [√] Visual Studio - develop for Windows (Visual Studio Community 2019 16.11.20) • Visual Studio at C:\Program Files (x86)\Microsoft Visual Studio\2019\Community • Visual Studio Community 2019 version 16.11.32929.386 • Windows 10 SDK version 10.0.19041.0 [√] Android Studio (version 2021.3) • Android Studio at C:\Program Files\Android\Android Studio • Flutter plugin can be installed from: https://plugins.jetbrains.com/plugin/9212-flutter • Dart plugin can be installed from: https://plugins.jetbrains.com/plugin/6351-dart • Java version OpenJDK Runtime Environment (build 11.0.13+0-b1751.21-8125866) [√] VS Code (version 1.73.1) • VS Code at C:\Users\LaptopBob\AppData\Local\Programs\Microsoft VS Code • Flutter extension version 3.54.0 [√] Connected device (3 available) • Windows (desktop) • windows • windows-x64 • Microsoft Windows [Version 10.0.22621.819] • Chrome (web) • chrome • web-javascript • Google Chrome 108.0.5359.71 • Edge (web) • edge • web-javascript • Microsoft Edge 107.0.1418.56 [√] HTTP Host Availability • All required HTTP hosts are available • No issues found! As per this post, I have waited 24 hours for the data to come through (it is now 26 hours since I started getting bars on the "Users in last 30 minutes" panel). Does anyone have any ideas why I am not seeing historic user or event data in Firebase Analytics? A: The solution to this problem is "wait". The data turned up some time between 30 and 42 hours after the first event was posted to Firebase. Events from the following day also showed up (only 18 hours delay), so maybe there is an initial, longer delay when you first use the service. The only reference to updates I could find in the Firebase Analytics documentation says that the data is updated 'periodically' so I'm not sure how often the data will be updated.
Cannot see Flutter Web activity history in Firebase Analytics
I have added Firebase Analytics to a game I have developed as a Flutter web project and can see real-time data bars coming through for users in the last 30 minutes, but cannot see any historic user or event data. I added Firebase to my project yesterday and had a number of users access it, but cannot see any record of those events today. I have gone through the set-up process again to check that everything is set up correctly and I believe that it is. Accessing the web site this morning, I saw a real-time record of me accessing the app as below: However, the historic user activity over time graph shows nothing: I have created a Singleton to be able to access the analytics instances as follows: // Singleton class to hold the Analytics for for Firebase import 'package:firebase_analytics/firebase_analytics.dart'; import 'package:firebase_core/firebase_core.dart'; import 'package:septuple/firebase_options.dart'; class Analytics { Analytics._(); // Generate an instance of the firebase analytics class final FirebaseAnalytics analytics = FirebaseAnalytics.instance; // Create an instance of this class as a singleton static final Analytics _instance = Analytics._(); // Initialise firebase for the app static Future<void> init() async { await Firebase.initializeApp( options: DefaultFirebaseOptions.currentPlatform, ); getData().setAnalyticsCollectionEnabled(true); } //Get the FirebaseAnalystics object to do event logging static FirebaseAnalytics getData() { return _instance.analytics; } } I have initialised this in my main() method as follows: void main() async { // // Initialise the Hive database await Hive.initFlutter(); await Hive.openBox('septupleBox'); // // Initialise firebase in the Analytics singleton to enable event logging // await Analytics.init(); // // Run the app runApp(Game()); } I am logging events in my state manager class, for example when the user guesses the word, as follows: void setWin(int attempts) { // Log end of game and that the user guessed correctly Analytics.getData().logLevelEnd( levelName: 'septuple', success: attempts, ); gameState = GameState.win; _box.put(_gameStateName, gameState.toString()); _game.gameTimer.startCountdown(); notifyListeners(); } "logLevelEnd is a default event type so I am assuming that this should just appear in Firebase Analytics. Do I need to configure anything so that it actively captures and retains this? I am using the following dependencies for the project: dependencies: flutter: sdk: flutter hive: ^2.2.3 hive_flutter: ^1.1.0 responsive_sizer: ^3.1.1 #flutter_launcher_icons: ^0.10.0 toggle_switch: ^2.0.1 # The following adds the Cupertino Icons font to your application. # Use with the CupertinoIcons class for iOS style icons. cupertino_icons: ^1.0.2 firebase_core: ^2.3.0 firebase_analytics: ^10.0.6 And finally, I have a firebase_options.dart file that is configured for web. 
For completeness, here is the output from flutter doctor -v: flutter doctor -v [√] Flutter (Channel stable, 3.3.9, on Microsoft Windows [Version 10.0.22621.819], locale en-GB) • Flutter version 3.3.9 on channel stable at D:\flutter • Upstream repository https://github.com/flutter/flutter.git • Framework revision b8f7f1f986 (10 days ago), 2022-11-23 06:43:51 +0900 • Engine revision 8f2221fbef • Dart version 2.18.5 • DevTools version 2.15.0 [√] Android toolchain - develop for Android devices (Android SDK version 32.0.0-rc1) • Android SDK at C:\Users\LaptopBob\AppData\Local\Android\Sdk • Platform android-33, build-tools 32.0.0-rc1 • ANDROID_HOME = C:\Users\LaptopBob\AppData\Local\Android\Sdk • ANDROID_SDK_ROOT = D:\Users\LaptopBob\Android_SDK\avd • Java binary at: C:\Program Files\Android\Android Studio\jre\bin\java • Java version OpenJDK Runtime Environment (build 11.0.13+0-b1751.21-8125866) • All Android licenses accepted. [√] Chrome - develop for the web • Chrome at C:\Program Files (x86)\Google\Chrome\Application\chrome.exe [√] Visual Studio - develop for Windows (Visual Studio Community 2019 16.11.20) • Visual Studio at C:\Program Files (x86)\Microsoft Visual Studio\2019\Community • Visual Studio Community 2019 version 16.11.32929.386 • Windows 10 SDK version 10.0.19041.0 [√] Android Studio (version 2021.3) • Android Studio at C:\Program Files\Android\Android Studio • Flutter plugin can be installed from: https://plugins.jetbrains.com/plugin/9212-flutter • Dart plugin can be installed from: https://plugins.jetbrains.com/plugin/6351-dart • Java version OpenJDK Runtime Environment (build 11.0.13+0-b1751.21-8125866) [√] VS Code (version 1.73.1) • VS Code at C:\Users\LaptopBob\AppData\Local\Programs\Microsoft VS Code • Flutter extension version 3.54.0 [√] Connected device (3 available) • Windows (desktop) • windows • windows-x64 • Microsoft Windows [Version 10.0.22621.819] • Chrome (web) • chrome • web-javascript • Google Chrome 108.0.5359.71 • Edge (web) • edge • web-javascript • Microsoft Edge 107.0.1418.56 [√] HTTP Host Availability • All required HTTP hosts are available • No issues found! As per this post, I have waited 24 hours for the data to come through (it is now 26 hours since I started getting bars on the "Users in last 30 minutes" panel). Does anyone have any ideas why I am not seeing historic user or event data in Firebase Analytics?
[ "The solution to this problem is \"wait\". The data turned up some time between 30 and 42 hours after the first event was posted to Firebase. Events from the following day also showed up (only 18 hours delay), so maybe there is an initial, longer delay when you first use the service. The only reference to updates I could find in the Firebase Analytics documentation says that the data is updated 'periodically' so I'm not sure how often the data will be updated.\n" ]
[ 0 ]
[]
[]
[ "firebase", "firebase_analytics", "flutter", "flutter_web" ]
stackoverflow_0074656063_firebase_firebase_analytics_flutter_flutter_web.txt
Q: LEFT JOIN on column where value IS NOT NULL, otherwise join on another column Joining 2 tables (orders, addresses) Orders contains columns delivery_address_id (contains NULL values) and invoice_address_id (does NOT contain NULL values) Addresses contains id column (does NOT contain NULL values) Primarily, the LEFT JOIN must be performed on orders.delivery_address_id. However, in the case when its value is NULL in the row, perform LEFT JOIN on orders.invoice_address_id. How do I deal with this? I tried the operator OR but the result was not correct. I was also thinking about a CASE WHEN statement. My expectations are to get the LEFT JOIN working. A: You can use the operator OR in the ON clause: SELECT .... FROM Orders o LEFT JOIN Addresses a ON a.id = o.delivery_address_id OR (o.delivery_address_id IS NULL AND a.id = o.invoice_address_id); Or, use COALESCE(): SELECT .... FROM Orders o LEFT JOIN Addresses a ON a.id = COALESCE(o.delivery_address_id, o.invoice_address_id); A: So, you want to join on delivery_address_id, but sometimes it's NULL, so you need invoice_address_id to be a fallback. These situations are where the COALESCE function really shines. COALESCE(delivery_address_id, invoice_address_id) will resolve to the delivery address ID if it isn't NULL, otherwise it will resolve to the invoice address ID instead. Thus we can achieve the join you want: SELECT orders.some_field, addresses.address FROM orders LEFT JOIN addresses ON COALESCE(orders.delivery_address_id, orders.invoice_address_id) = addresses.id
LEFT JOIN on column where value IS NOT NULL, otherwise join on another column
Joining 2 tables (orders, addresses) Orders contains columns delivery_address_id (contains NULL values) and invoice_address_id (does NOT contain NULL values) Addresses contains id column (does NOT contain NULL values) Primarily, the LEFT JOIN must be performed on orders.delivery_address_id. However, in the case when its value is NULL in the row, perform LEFT JOIN on orders.invoice_address_id. How do I deal with this? I tried the operator OR but the result was not correct. I was also thinking about a CASE WHEN statement. My expectations are to get the LEFT JOIN working.
[ "You can use the operator OR in the ON clause:\nSELECT ....\nFROM Orders o LEFT JOIN Addresses a\nON a.id = o.delivery_address_id \nOR (o.delivery_address_id IS NULL AND a.id = o.invoice_address_id);\n\nOr, use COALESCE():\nSELECT ....\nFROM Orders o LEFT JOIN Addresses a\nON a.id = COALESCE(o.delivery_address_id, o.invoice_address_id);\n\n", "So, you want to join on delivery_address_id, but sometimes it's NULL, so you need invoice_address_id to be a fallback.\nThese situations are where the COALESCE function really shines. COALESCE(delivery_address_id, invoice_address_id) will resolve to the delivery address ID if it isn't NULL, otherwise it will resolve to the invoice address ID instead.\nThus we can achieve the join you want:\n SELECT\n orders.some_field,\n addresses.address\n FROM\n orders\n LEFT JOIN\n addresses\n ON\n COALESCE(orders.delivery_address_id, orders.invoice_address_id) = addresses.id \n\n" ]
[ 2, 1 ]
[]
[]
[ "left_join", "postgresql", "sql" ]
stackoverflow_0074665909_left_join_postgresql_sql.txt
Q: couldn't install gopls in Fedora 31 I've below go version $ go version go version go1.14.3 linux/amd64 $ which /usr/local/go/bin/go GOPATH is set as $ echo $GOPATH /home/raj/go PATH variable is set as - $ echo $PATH /home/raj/.cargo/bin:/home/raj/go/bin:/home/raj/.cabal/bin:/home/raj/.ghcup/bin:/home/raj/.cargo/bin:/home/raj/.cabal/bin:/home/raj/.ghcup/bin:/home/raj/.cabal/bin:/home/raj/.ghcup/bin:/home/raj/.cargo/bin:/home/raj/.cabal/bin:/home/raj/.ghcup/bin:/home/raj/.cargo/bin:/home/raj/.cabal/bin:/home/raj/.ghcup/bin:/home/raj/.sdkman/candidates/maven/current/bin:/home/raj/.sdkman/candidates/java/current/bin:/home/raj/.sdkman/candidates/gradle/current/bin:/home/raj/.cargo/bin:/home/raj/.cabal/bin:/home/raj/.ghcup/bin:/home/raj/.local/bin:/home/raj/bin:/home/raj/.cargo/bin:/usr/local/bin:/usr/bin:/bin:/usr/local/sbin:/usr/sbin:/var/lib/snapd/snap/bin:/usr/local/go/bin:/home/raj/bin:/home/raj/Deps/cmake/3.16.5/cmake/bin/:/usr/local/go/bin:/home/raj/bin:/home/raj/Deps/cmake/3.16.5/cmake/bin/:/usr/local/go/bin:/home/raj/bin:/home/raj/Deps/cmake/3.16.5/cmake/bin/ As you can see /home/raj/go/bin is in PATH Now, I'm trying to install gopls and I'm getting below error - $ GO111MODULE=on go get -v golang.org/x/tools/gopls@latest go: golang.org/x/tools/gopls latest => v0.4.1 runtime/internal/atomic runtime/internal/atomic /usr/local/go/src/runtime/internal/atomic/atomic_amd64x.go:13:6: Load redeclared in this block previous declaration at /usr/local/go/src/runtime/internal/atomic/atomic_amd64.go:16:24 /usr/local/go/src/runtime/internal/atomic/atomic_amd64x.go:19:6: Loadp redeclared in this block previous declaration at /usr/local/go/src/runtime/internal/atomic/atomic_amd64.go:22:32 /usr/local/go/src/runtime/internal/atomic/atomic_amd64x.go:25:6: Load64 redeclared in this block previous declaration at /usr/local/go/src/runtime/internal/atomic/atomic_amd64.go:28:26 /usr/local/go/src/runtime/internal/atomic/atomic_amd64x.go:31:6: LoadAcq redeclared in this block previous declaration at /usr/local/go/src/runtime/internal/atomic/atomic_amd64.go:34:27 /usr/local/go/src/runtime/internal/atomic/atomic_amd64x.go:36:6: Xadd redeclared in this block previous declaration at /usr/local/go/src/runtime/internal/atomic/atomic_amd64.go:39:37 /usr/local/go/src/runtime/internal/atomic/atomic_amd64x.go:39:6: Xadd64 redeclared in this block previous declaration at /usr/local/go/src/runtime/internal/atomic/atomic_amd64.go:42:39 /usr/local/go/src/runtime/internal/atomic/atomic_amd64x.go:42:6: Xadduintptr redeclared in this block previous declaration at /usr/local/go/src/runtime/internal/atomic/atomic_amd64.go:45:47 /usr/local/go/src/runtime/internal/atomic/atomic_amd64x.go:45:6: Xchg redeclared in this block previous declaration at /usr/local/go/src/runtime/internal/atomic/atomic_amd64.go:48:36 /usr/local/go/src/runtime/internal/atomic/atomic_amd64x.go:48:6: Xchg64 redeclared in this block previous declaration at /usr/local/go/src/runtime/internal/atomic/atomic_amd64.go:51:38 /usr/local/go/src/runtime/internal/atomic/atomic_amd64x.go:51:6: Xchguintptr redeclared in this block previous declaration at /usr/local/go/src/runtime/internal/atomic/atomic_amd64.go:54:45 /usr/local/go/src/runtime/internal/atomic/atomic_amd64x.go:51:6: too many errors Why I'm getting this error and how can I fix this? 
Note: My OS details are -
$ lsb_release -a
LSB Version:    :core-4.1-amd64:core-4.1-noarch
Distributor ID: Fedora
Description:    Fedora release 31 (Thirty One)
Release:        31
Codename:       ThirtyOne

A: As @Jimb commented, I removed the Go installation with sudo rm -rf /usr/local/go and reinstalled Go, and it is working fine.

A: I just had this same problem, and in my case the cause was that I had installed the package gcc-go, which for some reason didn't work with the VS Code extensions. When I removed it and installed the golang package instead, everything worked instantly. Maybe this can help someone.
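For reference, the package swap from the second answer looks roughly like this on Fedora (the tarball version below is simply the one from the question):
# replace the gcc-go toolchain with the reference Go toolchain
sudo dnf remove gcc-go
sudo dnf install golang

# or, for a manual tarball install, wipe the old tree first and re-extract
sudo rm -rf /usr/local/go
sudo tar -C /usr/local -xzf go1.14.3.linux-amd64.tar.gz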
couldn't install gopls in Fedora 31
I've below go version $ go version go version go1.14.3 linux/amd64 $ which /usr/local/go/bin/go GOPATH is set as $ echo $GOPATH /home/raj/go PATH variable is set as - $ echo $PATH /home/raj/.cargo/bin:/home/raj/go/bin:/home/raj/.cabal/bin:/home/raj/.ghcup/bin:/home/raj/.cargo/bin:/home/raj/.cabal/bin:/home/raj/.ghcup/bin:/home/raj/.cabal/bin:/home/raj/.ghcup/bin:/home/raj/.cargo/bin:/home/raj/.cabal/bin:/home/raj/.ghcup/bin:/home/raj/.cargo/bin:/home/raj/.cabal/bin:/home/raj/.ghcup/bin:/home/raj/.sdkman/candidates/maven/current/bin:/home/raj/.sdkman/candidates/java/current/bin:/home/raj/.sdkman/candidates/gradle/current/bin:/home/raj/.cargo/bin:/home/raj/.cabal/bin:/home/raj/.ghcup/bin:/home/raj/.local/bin:/home/raj/bin:/home/raj/.cargo/bin:/usr/local/bin:/usr/bin:/bin:/usr/local/sbin:/usr/sbin:/var/lib/snapd/snap/bin:/usr/local/go/bin:/home/raj/bin:/home/raj/Deps/cmake/3.16.5/cmake/bin/:/usr/local/go/bin:/home/raj/bin:/home/raj/Deps/cmake/3.16.5/cmake/bin/:/usr/local/go/bin:/home/raj/bin:/home/raj/Deps/cmake/3.16.5/cmake/bin/ As you can see /home/raj/go/bin is in PATH Now, I'm trying to install gopls and I'm getting below error - $ GO111MODULE=on go get -v golang.org/x/tools/gopls@latest go: golang.org/x/tools/gopls latest => v0.4.1 runtime/internal/atomic runtime/internal/atomic /usr/local/go/src/runtime/internal/atomic/atomic_amd64x.go:13:6: Load redeclared in this block previous declaration at /usr/local/go/src/runtime/internal/atomic/atomic_amd64.go:16:24 /usr/local/go/src/runtime/internal/atomic/atomic_amd64x.go:19:6: Loadp redeclared in this block previous declaration at /usr/local/go/src/runtime/internal/atomic/atomic_amd64.go:22:32 /usr/local/go/src/runtime/internal/atomic/atomic_amd64x.go:25:6: Load64 redeclared in this block previous declaration at /usr/local/go/src/runtime/internal/atomic/atomic_amd64.go:28:26 /usr/local/go/src/runtime/internal/atomic/atomic_amd64x.go:31:6: LoadAcq redeclared in this block previous declaration at /usr/local/go/src/runtime/internal/atomic/atomic_amd64.go:34:27 /usr/local/go/src/runtime/internal/atomic/atomic_amd64x.go:36:6: Xadd redeclared in this block previous declaration at /usr/local/go/src/runtime/internal/atomic/atomic_amd64.go:39:37 /usr/local/go/src/runtime/internal/atomic/atomic_amd64x.go:39:6: Xadd64 redeclared in this block previous declaration at /usr/local/go/src/runtime/internal/atomic/atomic_amd64.go:42:39 /usr/local/go/src/runtime/internal/atomic/atomic_amd64x.go:42:6: Xadduintptr redeclared in this block previous declaration at /usr/local/go/src/runtime/internal/atomic/atomic_amd64.go:45:47 /usr/local/go/src/runtime/internal/atomic/atomic_amd64x.go:45:6: Xchg redeclared in this block previous declaration at /usr/local/go/src/runtime/internal/atomic/atomic_amd64.go:48:36 /usr/local/go/src/runtime/internal/atomic/atomic_amd64x.go:48:6: Xchg64 redeclared in this block previous declaration at /usr/local/go/src/runtime/internal/atomic/atomic_amd64.go:51:38 /usr/local/go/src/runtime/internal/atomic/atomic_amd64x.go:51:6: Xchguintptr redeclared in this block previous declaration at /usr/local/go/src/runtime/internal/atomic/atomic_amd64.go:54:45 /usr/local/go/src/runtime/internal/atomic/atomic_amd64x.go:51:6: too many errors Why I'm getting this error and how can I fix this? Note: My OS details are - $ lsb_release -a LSB Version: :core-4.1-amd64:core-4.1-noarch Distributor ID: Fedora Description: Fedora release 31 (Thirty One) Release: 31 Codename: ThirtyOne
[ "As @Jimb commented, I removed the go installation sudo rm -rf /usr/local/go and reinstalled go and it is working fine.\n", "I just had this same problem and in my case the problem was that i had installed the package gcc-go which for some reason didn't work with the Vscode extensions, but when i removed it and instead downloaded the package golang everything worked instantly. Maybe this can help someone.\n" ]
[ 2, 0 ]
[]
[]
[ "go" ]
stackoverflow_0062031629_go.txt
Q: Why JWT is called authentication I'm learning the purpose of JWT tokens in ASP.NET Core, but I don't understand one thing. Why does every blog call JWT authentication, if we pass the token to an already authenticated (logged-in) user? I mean, why is JWT not authorization but authentication? I can't understand which point I'm skipping in this topic.

A: In a typical implementation, the JWT token that you create for a signed-in user will be sent with every subsequent request. The purpose is that you want to make sure that the party that sends the request is actually who they are claiming to be - ie. you want to authenticate requests.
Strictly speaking, without this, you could not do authorization, or well, you could, but it wouldn't make much sense if you just believed who the caller was instead of checking (authenticating).
Edit:
The JWT token will always have a "sub" field, a subject, usually a user identifier, and an expiry, even if there are no further claims. The token is also signed(*), and that signature can and must be verified. So if somebody can present a token with their identifier, signed by your server, with a timestamp not very long ago, then you can be sure that they recently authenticated, eg. provided their password. In all subsequent requests you don't need to get their password, you only check the token. You know they are who they say they are, because they have a valid token, your server signed their email address, so they are logged in.
An attacker cannot forge such a token, because while they can replace or forge their user identifier (the sub field), the signature on the token will not be valid. Only your server can make a signature that can then be verified (by your server in this case).
So in other words, there is the initial user authentication, when they provide their username and password, but HTTP is basically stateless, so the next HTTP query has nothing to do with the first one. You explicitly have to make this connection, ie. create a user session; you have to securely "remember" that when a new request comes in from a user, it's the same user that already provided their password. So when they do provide their password, you make a JWT for them with their email address and a timestamp, sign it, and next time they only send that, and you know it's them, because there's your signature on the token.
The reason you don't want them to send their password all the time is that sending passwords around in every request would present a somewhat higher risk, but much more importantly, the client app (the browser in a webapp) would then have to remember the user's password, which is not good; it's by far more secure to erase that from the browser's memory. So you create them a JWT instead.
Also note that in such a use case, a JWT is unnecessary, and maybe that's where your confusion comes from. You could just send the user a plain old random session token, which you also store on the server, and store all state in the server-side session. This is all fine, and actually more secure than a JWT. JWTs should be used for example when the authentication token needs to be sent to different origins (domains), or when secure claims should be shared with another party. Using JWTs for primary authentication with a simple web app with one origin is unnecessary (very fancy today, but not correct).
(*Note that for such a use case, the signature is usually a message authentication code, a MAC, but I did not want to confuse you further with this word, as message authentication is another authentication.
You can just think of this as a signature for now, though signature in cryptography is something different, and even more confusingly, JWTs can use actual signatures too.) A: JWT tokens are used for authentication and authorization. When a user logs in to an application using their credentials, the application generates a JWT token and sends it back to the user. This token contains information about the user, such as their user id and other claims, and is signed by the application. The user can then use this token to authenticate themselves to the application for subsequent requests. The application can verify the token to ensure that it is valid and has not been tampered with. In addition to authentication, JWT tokens can also be used for authorization. The claims in the token can be used to determine what actions the user is allowed to perform within the application. For example, the token may contain a claim indicating that the user has admin privileges, which the application can use to grant the user access to admin-only functionality. So, while JWT tokens are primarily used for authentication, they can also be used for authorization.
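To make the flow concrete, a minimal ASP.NET Core sketch of issuing such a token after the password has been verified (key, issuer, audience, lifetime and claim values are placeholders):
using System;
using System.IdentityModel.Tokens.Jwt;
using System.Security.Claims;
using System.Text;
using Microsoft.IdentityModel.Tokens;

string CreateToken(string userId, string email)
{
    // placeholder secret; in practice load it from configuration/secret storage
    var key = new SymmetricSecurityKey(
        Encoding.UTF8.GetBytes("replace-with-a-long-random-secret-key"));
    var creds = new SigningCredentials(key, SecurityAlgorithms.HmacSha256);

    var token = new JwtSecurityToken(
        issuer: "my-app",                 // placeholder
        audience: "my-app",               // placeholder
        claims: new[]
        {
            new Claim(JwtRegisteredClaimNames.Sub, userId),
            new Claim(JwtRegisteredClaimNames.Email, email),
        },
        expires: DateTime.UtcNow.AddMinutes(30),
        signingCredentials: creds);

    return new JwtSecurityTokenHandler().WriteToken(token);
}

On every later request the JwtBearer middleware (AddAuthentication().AddJwtBearer(...) with matching TokenValidationParameters) verifies the signature and expiry, which is the per-request authentication described above; any role or permission claims inside the token can then be used for authorization.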
Why JWT is called authentication
I'm learning purpose of JWT tokens in ASP.NET Core, but I don't understand one thing. Why does every blog calling JWT authentication? If we pass a token to an authenticated (logged-in) user. I mean why JWT is not authorization but authentication? Can't understand which point I'm skipping in this topic.
[ "In a typical implementation, the JWT token that you create for a signed in user will be sent with every subsequent request. The purpose is that you want to make sure that the party that sends the request is actually who they are claiming to be - ie. you want to authenticate requests.\nStrictly speaking, without this, you could not do authorization, or well, you could, but it wouldn't make much sense if you just beleived who the caller was instead of checking (authenticating).\nEdit:\nThe jwt token will always have a \"sub\" field, a subject, usually a user identifier, and an expiry, even if there are no further claims. The token is also signed(*) that can and must be verified. So if somebody can present a token with their identifier, signed by your server, with a timetstamp not very long ago, then you can be sure that they recently authenticated, eg. provided their password. In all subsequent requests you don't need to get their password, you only check the token. You know they are who they say to be, because they have a valid token, your server signed their email address, so they are logged in.\nAn attacker cannot forge such a token, because while they can replace or forge their user identifier (the sub field), the signature on the token will not be valid. Only your server can make a signature that can then be verified (by your server in this case).\nSo in other words, there is the initial user authentication, when they provide their username and password, but HTTP is basically stateless, the next HTTP query has nothing to do with the first one. You explicitly have to make this connection, ie. create a user session, you have to securely \"remember\" that when a new request comes in from a user, it's the same user that already provided their password. So when they do provide their password, you make a JWT for them with their email address and a timestamp, sign it, and next time they only send that, and you know it's them, because there's you signatuare on the token.\nThe reason you don't want them to send their password all the time is that sending passwords around in every request would present a somewhat higher risk, but much more importantly, then the client app (the browser in a webapp) would have to remember a user's password, which is not good, it's by far more secure to erase that from the browser's memory. So you create them a JWT instead.\nAlso note that in such a usecase, a JWT is unnecessary, maybe that's where your confusion comes from. You could just send the user a plain old random session token, which you also store on the server, and store all state in the server-side session. This is all fine, and actually more secure than a JWT. JWTs should be used for example when the authentication token needs to be sent to different origins (domains), or when secure claims should be shared with another party. Using JWTs for primary authentication with a simple web app with one origin is unnecessary (very fancy today, but not correct).\n(*Note that for such a usecase, the signature is usually a message authentication code, a MAC, but I did not want to confuse you further with this word, as message authentication is another authentication. You can just think of this as a signature for now, though signature in cryptography is something different, and even more confusingly, JWTs can use actual signatures too.)\n", "JWT tokens are used for authentication and authorization. 
When a user logs in to an application using their credentials, the application generates a JWT token and sends it back to the user. This token contains information about the user, such as their user id and other claims, and is signed by the application.\nThe user can then use this token to authenticate themselves to the application for subsequent requests. The application can verify the token to ensure that it is valid and has not been tampered with.\nIn addition to authentication, JWT tokens can also be used for authorization. The claims in the token can be used to determine what actions the user is allowed to perform within the application. For example, the token may contain a claim indicating that the user has admin privileges, which the application can use to grant the user access to admin-only functionality.\nSo, while JWT tokens are primarily used for authentication, they can also be used for authorization.\n" ]
[ 0, 0 ]
[]
[]
[ "asp.net_core", "authentication", "authorization", "jwt" ]
stackoverflow_0074665774_asp.net_core_authentication_authorization_jwt.txt
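The first answer above explains the core idea in prose: the server signs the token once at login, and on every later request it only has to check that signature and the expiry instead of re-checking the password. Below is a minimal, hand-rolled sketch of that mechanism in Python (not ASP.NET Core); the secret key, claim names and HS256 scheme are illustrative assumptions, not the actual JwtBearer middleware, which performs the equivalent of the verify step for you.

    import base64, hashlib, hmac, json, time

    SECRET = b"server-side signing key (illustrative only)"

    def b64url(data: bytes) -> bytes:
        # URL-safe base64 without padding, as used by JWTs
        return base64.urlsafe_b64encode(data).rstrip(b"=")

    def sign(payload: dict) -> str:
        # Issue a token: header.payload.signature, signed with the server's secret
        header = b64url(json.dumps({"alg": "HS256", "typ": "JWT"}).encode())
        body = b64url(json.dumps(payload).encode())
        signing_input = header + b"." + body
        sig = b64url(hmac.new(SECRET, signing_input, hashlib.sha256).digest())
        return (signing_input + b"." + sig).decode()

    def verify(token: str) -> dict:
        # Check the signature and expiry; this is all the server needs on later requests
        header_b64, body_b64, sig_b64 = token.split(".")
        signing_input = (header_b64 + "." + body_b64).encode()
        expected = b64url(hmac.new(SECRET, signing_input, hashlib.sha256).digest()).decode()
        if not hmac.compare_digest(expected, sig_b64):
            raise ValueError("invalid signature: token was forged or tampered with")
        payload = json.loads(base64.urlsafe_b64decode(body_b64 + "=" * (-len(body_b64) % 4)))
        if payload["exp"] < time.time():
            raise ValueError("token expired: the user must authenticate again")
        return payload

    # Issued once, right after the user proves who they are (authentication):
    token = sign({"sub": "user-42", "exp": time.time() + 3600})

    # Checked on every later request, instead of asking for the password again:
    print(verify(token))

    # An attacker who swaps the subject cannot produce a valid signature:
    h, b, s = token.split(".")
    forged = b64url(json.dumps({"sub": "admin", "exp": time.time() + 3600}).encode()).decode()
    try:
        verify(h + "." + forged + "." + s)
    except ValueError as err:
        print(err)  # invalid signature: token was forged or tampered with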
Q: How can I create a dynamic product page using HTML, CSS, and Javascript I currently only know javascript. But the thing is I looked up how to do it and some people talk about something called localStorage. I have tried this and for some reason when I jump to a new page those variables aren't kept. Maybe I am doing something wrong? I jump to a new page via and all I want do do is select a certain image. take that image to a new page and add it to that page. I tried using the localStorage variables and even turning it into JSON.stringify and doing JSON.parse when trying to call the localstorage to another script. It didn't seem to work for me. Is there another solution? This is some of my code. There are two scripts. document.querySelectorAll(".card").forEach(item => { item.addEventListener("click", onProductClick); }) var div; var productImg; var ratingElement; var reviewCount; var price; function onProductClick(){ // This took a week to find out (this.id) // console.log(this.id); div = document.getElementById(this.id); productImg = div.getElementsByTagName('img')[0]; ratingElement = div.getElementsByTagName('a')[2]; reviewCount = div.getElementsByTagName('a')[3] price = div.getElementsByTagName('a')[4]; console.log(div.getElementsByTagName('a')[4]); var productData = [div, productImg,ratingElement,reviewCount,price]; window.localStorage.setItem("price", JSON.stringify(price)); } function TranslateProduct(){ console.log("Hello"); } This is script 2 var productPageImage = document.getElementById("product-image"); var myData = localStorage['productdata-local']; var value =JSON.parse(window.localStorage.getItem('price')); console.log(value); // function setProductPage(img){ // if(productImg != null){ // return; // } // console.log(window.price); // } To explain my thought process on this code in the first script I have multiple images that have event listeners for a click. I wanted to Click any given image and grab all the data about it and the product. Then I wanted to move that to another script (script 2) and add it to a dynamic second page. yet I print my variables and they work on the first script and somehow don't on the second. This is my code. in the meantime I will look into cookies Thank you! A: Have you tried Cookies You can always use cookies, but you may run into their limitations. These days, cookies are not the best choice, even though they have the ability to preserve data even longer than the current window session. or you can make a GET request to the other page by attaching your serialized object to the URL as follows: http://www.app.com/second.xyz?MyObject=SerializedData That other page can then easily parse its URL and deserialize data using JavaScript. you can check this answer for more details Pass javascript object from one page to other
How can I create a dynamic product page using HTML, CSS, and Javascript
I currently only know javascript. But the thing is I looked up how to do it and some people talk about something called localStorage. I have tried this and for some reason when I jump to a new page those variables aren't kept. Maybe I am doing something wrong? I jump to a new page via and all I want do do is select a certain image. take that image to a new page and add it to that page. I tried using the localStorage variables and even turning it into JSON.stringify and doing JSON.parse when trying to call the localstorage to another script. It didn't seem to work for me. Is there another solution? This is some of my code. There are two scripts. document.querySelectorAll(".card").forEach(item => { item.addEventListener("click", onProductClick); }) var div; var productImg; var ratingElement; var reviewCount; var price; function onProductClick(){ // This took a week to find out (this.id) // console.log(this.id); div = document.getElementById(this.id); productImg = div.getElementsByTagName('img')[0]; ratingElement = div.getElementsByTagName('a')[2]; reviewCount = div.getElementsByTagName('a')[3] price = div.getElementsByTagName('a')[4]; console.log(div.getElementsByTagName('a')[4]); var productData = [div, productImg,ratingElement,reviewCount,price]; window.localStorage.setItem("price", JSON.stringify(price)); } function TranslateProduct(){ console.log("Hello"); } This is script 2 var productPageImage = document.getElementById("product-image"); var myData = localStorage['productdata-local']; var value =JSON.parse(window.localStorage.getItem('price')); console.log(value); // function setProductPage(img){ // if(productImg != null){ // return; // } // console.log(window.price); // } To explain my thought process on this code in the first script I have multiple images that have event listeners for a click. I wanted to Click any given image and grab all the data about it and the product. Then I wanted to move that to another script (script 2) and add it to a dynamic second page. yet I print my variables and they work on the first script and somehow don't on the second. This is my code. in the meantime I will look into cookies Thank you!
[ "Have you tried Cookies\nYou can always use cookies, but you may run into their limitations. These days, cookies are not the best choice, even though they have the ability to preserve data even longer than the current window session.\nor you can make a GET request to the other page by attaching your serialized object to the URL as follows:\nhttp://www.app.com/second.xyz?MyObject=SerializedData\nThat other page can then easily parse its URL and deserialize data using JavaScript.\nyou can check this answer for more details Pass javascript object from one page to other\n" ]
[ 0 ]
[]
[]
[ "backend", "css", "frontend", "html", "javascript" ]
stackoverflow_0074665951_backend_css_frontend_html_javascript.txt
Q: How to show videos in gridview ! (Flutter) I am getting the video files from filepicker package now i want to show the videos on gridview when clicked on them video should be played (similar to gallery) A: Found answer my self first I generated the thumbnail of the video by thumbnail package(https://pub.dev/packages/video_thumbnail) from then saved created a model of thumbnail path and video path saved the path of both and accessed them :)
How to show videos in gridview ! (Flutter)
I am getting the video files from the file picker package. Now I want to show the videos in a GridView, and when one of them is clicked the video should be played (similar to a gallery).
[ "Found answer my self first I generated the thumbnail of the video by thumbnail package(https://pub.dev/packages/video_thumbnail) from then saved created a model of thumbnail path and video path saved the path of both and accessed them :)\n" ]
[ 0 ]
[]
[]
[ "dart", "flutter", "flutter_video_player", "gridview" ]
stackoverflow_0074641871_dart_flutter_flutter_video_player_gridview.txt
Q: Want a navbar stop I did a navbar that shows on the click event listener and I want everything in the background to have a mask on also I want this navbar to prevent the page from scrolling down. Only when I exit the navbar. I couldn't figure out how to do it. #leftmenue { background-color: rgb(255, 255, 255); border-radius: 2px; height: 100%; width: 28%; padding: 0; position: absolute; top: 0; z-index: 1000; opacity: 100%; overflow-y: scroll; overflow-x: hidden; transition: 1.5s; display: block; } <nav> <span>Electronics & Perephiral Devices</span> <li><a href="">Gaaming Setup</a><i class="fa-sharp fa-solid fa-headset"></i> </li> <li><a href="">Gaaming Setup</a><i class="fa-duotone fa-arrow-turn-down"></i></li> <li><a href="">Gaaming Setup</a><i class="fa-solid fa-arrow-up-right"></i></li> </nav> <nav> <span>Perephiral Devices</span> <li><a href="">Gaaming Setup</a></li> <li><a href="">Gaaming Setup</a></li> <li><a href="">Gaaming Setup</a></li> <li><a href="">Gaaming Setup</a></li> <li><a href="">Gaaming Setup</a></li> </nav> <nav> <li><a href="">Gaaming Setup</a></li> <li><a href="">Gaaming Setup</a></li> <li><a href="">Gaaming Setup</a></li> <li><a href="">Gaaming Setup</a></li> <li><a href="">Gaaming Setup</a></li> <li><a href="">Gaaming Setup</a></li> <li><a href="">Gaaming Setup</a></li> <li><a href="">Gaaming Setup</a></li> <li><a href="">Gaaming Setup</a></li> <li><a href="">Gaaming Setup</a></li> <li><a href="">Gaaming Setup</a></li> </nav> </div> A: you can reach this through document.querySelector('html').style.overflow = 'hidden';
Want a navbar stop
I did a navbar that shows on the click event listener and I want everything in the background to have a mask on also I want this navbar to prevent the page from scrolling down. Only when I exit the navbar. I couldn't figure out how to do it. #leftmenue { background-color: rgb(255, 255, 255); border-radius: 2px; height: 100%; width: 28%; padding: 0; position: absolute; top: 0; z-index: 1000; opacity: 100%; overflow-y: scroll; overflow-x: hidden; transition: 1.5s; display: block; } <nav> <span>Electronics & Perephiral Devices</span> <li><a href="">Gaaming Setup</a><i class="fa-sharp fa-solid fa-headset"></i> </li> <li><a href="">Gaaming Setup</a><i class="fa-duotone fa-arrow-turn-down"></i></li> <li><a href="">Gaaming Setup</a><i class="fa-solid fa-arrow-up-right"></i></li> </nav> <nav> <span>Perephiral Devices</span> <li><a href="">Gaaming Setup</a></li> <li><a href="">Gaaming Setup</a></li> <li><a href="">Gaaming Setup</a></li> <li><a href="">Gaaming Setup</a></li> <li><a href="">Gaaming Setup</a></li> </nav> <nav> <li><a href="">Gaaming Setup</a></li> <li><a href="">Gaaming Setup</a></li> <li><a href="">Gaaming Setup</a></li> <li><a href="">Gaaming Setup</a></li> <li><a href="">Gaaming Setup</a></li> <li><a href="">Gaaming Setup</a></li> <li><a href="">Gaaming Setup</a></li> <li><a href="">Gaaming Setup</a></li> <li><a href="">Gaaming Setup</a></li> <li><a href="">Gaaming Setup</a></li> <li><a href="">Gaaming Setup</a></li> </nav> </div>
[ "you can reach this through\ndocument.querySelector('html').style.overflow = 'hidden'; \n\n" ]
[ 0 ]
[]
[]
[ "css", "frontend", "html", "javascript", "visual_web_developer" ]
stackoverflow_0074665968_css_frontend_html_javascript_visual_web_developer.txt
Q: C loop to get strings and store them in 2D array I created a multi-dim array to store (name/fname/code) of multiple students in (row 0) it stores the (name/fname/code) of the first student then it passes to the next row (row 1) --> (name/fname/code) of the second student ... etc. Every time I run the code and write a string when it reaches 9 character it will add 8 char of the next string and if the next string reaches 9 char it will add 8 char of the next string to it. But if I write the name with just 8 char or less it runs like I want to. If you didn't understand run the code and you will get it. #include<stdio.h> #include<stdlib.h> #include <string.h> int main() { int i, nmbStud; int rows, columns; printf("Enter the number of students you want to add : "); scanf("%d", &nmbStud); char *arr[nmbStud][256]; for (rows = 0; rows < nmbStud; rows++) { columns = 0; while (columns < 3) { printf("--Enter the %d name : ", rows + 1); scanf("%s", &arr[rows][columns]); columns++; printf("--Enter the %d family name : ", rows + 1); scanf("%s", &arr[rows][columns]); columns++; printf("--Enter the code of the student %d : ", rows + 1); scanf("%s", &arr[rows][columns]); columns++; } } for (rows = 0; rows < nmbStud; rows++) { for (columns = 0; columns < 3; columns++) { printf("-->%s\n", &arr[rows][columns]); } } } Compile result A: The rule is that you should correctly declare what you need and use it accordingly. Here char *arr[nmbStud][256]; declares a Variable Length Array of pointers to characters. But later you use it with: scanf("%s",&arr[rows][columns]); which stores the characters inside the pointer value. As you have a 64 bits system, a pointer uses 8 bytes, hence the problem when you try to use strings using more than 7 characters - do not forget the terminating null in C strings! What you want should be closer to: char arr[nmbStud][3][256]; // 2D VLA, size nmStud * 3, of strings of 255 characters and later: scanf("%s",arr[rows][columns]); And you should be aware that this code uses VLA which is an optional feature since C11 and that will not be accepted for example by Microsoft compilers... But as the 3 strings contain different elements you'd better use structs here: struct Student { char name[256]; char fname[256]; char code[256]; };
C loop to get strings and store them in 2D array
I created a multi-dim array to store (name/fname/code) of multiple students in (row 0) it stores the (name/fname/code) of the first student then it passes to the next row (row 1) --> (name/fname/code) of the second student ... etc. Every time I run the code and write a string when it reaches 9 character it will add 8 char of the next string and if the next string reaches 9 char it will add 8 char of the next string to it. But if I write the name with just 8 char or less it runs like I want to. If you didn't understand run the code and you will get it. #include<stdio.h> #include<stdlib.h> #include <string.h> int main() { int i, nmbStud; int rows, columns; printf("Enter the number of students you want to add : "); scanf("%d", &nmbStud); char *arr[nmbStud][256]; for (rows = 0; rows < nmbStud; rows++) { columns = 0; while (columns < 3) { printf("--Enter the %d name : ", rows + 1); scanf("%s", &arr[rows][columns]); columns++; printf("--Enter the %d family name : ", rows + 1); scanf("%s", &arr[rows][columns]); columns++; printf("--Enter the code of the student %d : ", rows + 1); scanf("%s", &arr[rows][columns]); columns++; } } for (rows = 0; rows < nmbStud; rows++) { for (columns = 0; columns < 3; columns++) { printf("-->%s\n", &arr[rows][columns]); } } } Compile result
[ "The rule is that you should correctly declare what you need and use it accordingly. Here char *arr[nmbStud][256]; declares a Variable Length Array of pointers to characters. But later you use it with:\n scanf(\"%s\",&arr[rows][columns]);\n\nwhich stores the characters inside the pointer value. As you have a 64 bits system, a pointer uses 8 bytes, hence the problem when you try to use strings using more than 7 characters - do not forget the terminating null in C strings!\nWhat you want should be closer to:\nchar arr[nmbStud][3][256]; // 2D VLA, size nmStud * 3, of strings of 255 characters\n\nand later:\n scanf(\"%s\",arr[rows][columns]);\n\nAnd you should be aware that this code uses VLA which is an optional feature since C11 and that will not be accepted for example by Microsoft compilers...\n\nBut as the 3 strings contain different elements you'd better use structs here:\nstruct Student {\n char name[256];\n char fname[256];\n char code[256];\n};\n\n" ]
[ 0 ]
[]
[]
[ "algorithm", "arrays", "c", "c_strings", "multidimensional_array" ]
stackoverflow_0074665894_algorithm_arrays_c_c_strings_multidimensional_array.txt
Q: Error no such file or directory in VScode I'm trying to include header files from the atmel avr folder to work with arduino. Despite trying to include the directory of the files, it still prompts me with "No such file or directory" when compyling. The files are located inside "C:\repositories\arduino_testing\include\avr" What am I doing wrong? main.c #include <stdio.h> #include "avr\io.h" int main(){ printf("This is a C code"); return 0; } tasks.json { "tasks": [ { "type": "cppbuild", "label": "C/C++: g++.exe build active file", "command": "C:\\MinGW\\bin\\g++.exe", "args": [ "-I C:\\repositories\\arduino_testing\\include", "-fdiagnostics-color=always", "-g", "${file}", "-o", "${fileDirname}\\${fileBasenameNoExtension}.exe" ], "options": { "cwd": "${fileDirname}" }, "problemMatcher": [ "$gcc" ], "group": { "kind": "build", "isDefault": true }, "detail": "Task generated by Debugger." }, { "type": "cppbuild", "label": "C/C++: cpp.exe build active file", "command": "C:\\MinGW\\bin\\cpp.exe", "args": [ "-fdiagnostics-color=always", "-g", "${file}", "-o", "${fileDirname}\\${fileBasenameNoExtension}.exe" ], "options": { "cwd": "${fileDirname}" }, "problemMatcher": [ "$gcc" ], "group": "build", "detail": "Task generated by Debugger." } ], "version": "2.0.0" } c_cpp_properties { "configurations": [ { "name": "Win32", "includePath": [ //"${workspaceFolder}/**", //"C:\\repositories\\arduino_testing\\avr", "C:\\repositories\\arduino_testing\\include" ], "defines": [ "_DEBUG", "UNICODE", "_UNICODE" ], "windowsSdkVersion": "10.0.18362.0", "compilerPath": "C:\\MinGW\\bin\\gcc.exe", "cStandard": "c17", "cppStandard": "c++17", "intelliSenseMode": "windows-gcc-x64" } ], "version": 4 } A: In your c_cpp_properties.json file, you need to add the path to the avr folder in the includePath array. "includePath": [ //"${workspaceFolder}/**", "C:\\repositories\\arduino_testing\\include\\avr" ], A: #include "avr\io.h" #include always uses forward slashes / as path separator, no matter what host system you are using. That system header is part of the GNU tools for AVR and provided by AVR-LibC, hence: #include <avr/io.h> Also notice that this is a sytem header, thus enclose in < and >, not in "'s. There is no need for -isystem <path> (or -I <path> for that matter) as far as the GNU tools are concerned. You might need to provide the headers to IDEs that need their contents to display locations of macro definitions, though. If you can only compile with -isystem <path> (or -I <path> for that matter) your toolchain installation is broken, and you should get a working toolchain. Same if it doesn't work with -isystem (or -I): toolchain is broken. So show the include paths, add -v to the compiler options and inspect its output: > avr-gcc -v ... [snip many lines] #include "..." search starts here: #include <...> search starts here: $install/bin/../lib/gcc/avr/8.5.0/include $install/bin/../lib/gcc/avr/8.5.0/include-fixed $install/bin/../lib/gcc/avr/8.5.0/../../../../avr/include End of search list. ... where the last line belongs to the AVR-LibC system includes. Resolving all these ..'s, the path is $install/avr/include, and it should contain a folder avr which contains io.h etc.
Error no such file or directory in VScode
I'm trying to include header files from the atmel avr folder to work with arduino. Despite trying to include the directory of the files, it still prompts me with "No such file or directory" when compyling. The files are located inside "C:\repositories\arduino_testing\include\avr" What am I doing wrong? main.c #include <stdio.h> #include "avr\io.h" int main(){ printf("This is a C code"); return 0; } tasks.json { "tasks": [ { "type": "cppbuild", "label": "C/C++: g++.exe build active file", "command": "C:\\MinGW\\bin\\g++.exe", "args": [ "-I C:\\repositories\\arduino_testing\\include", "-fdiagnostics-color=always", "-g", "${file}", "-o", "${fileDirname}\\${fileBasenameNoExtension}.exe" ], "options": { "cwd": "${fileDirname}" }, "problemMatcher": [ "$gcc" ], "group": { "kind": "build", "isDefault": true }, "detail": "Task generated by Debugger." }, { "type": "cppbuild", "label": "C/C++: cpp.exe build active file", "command": "C:\\MinGW\\bin\\cpp.exe", "args": [ "-fdiagnostics-color=always", "-g", "${file}", "-o", "${fileDirname}\\${fileBasenameNoExtension}.exe" ], "options": { "cwd": "${fileDirname}" }, "problemMatcher": [ "$gcc" ], "group": "build", "detail": "Task generated by Debugger." } ], "version": "2.0.0" } c_cpp_properties { "configurations": [ { "name": "Win32", "includePath": [ //"${workspaceFolder}/**", //"C:\\repositories\\arduino_testing\\avr", "C:\\repositories\\arduino_testing\\include" ], "defines": [ "_DEBUG", "UNICODE", "_UNICODE" ], "windowsSdkVersion": "10.0.18362.0", "compilerPath": "C:\\MinGW\\bin\\gcc.exe", "cStandard": "c17", "cppStandard": "c++17", "intelliSenseMode": "windows-gcc-x64" } ], "version": 4 }
[ "In your c_cpp_properties.json file, you need to add the path to the avr folder in the includePath array.\n\"includePath\": [\n //\"${workspaceFolder}/**\",\n \"C:\\\\repositories\\\\arduino_testing\\\\include\\\\avr\"\n],\n\n", "\n#include \"avr\\io.h\"\n\n#include always uses forward slashes / as path separator, no matter what host system you are using.\nThat system header is part of the GNU tools for AVR and provided by AVR-LibC, hence:\n#include <avr/io.h>\nAlso notice that this is a sytem header, thus enclose in < and >, not in \"'s.\nThere is no need for -isystem <path> (or -I <path> for that matter) as far as the GNU tools are concerned. You might need to provide the headers to IDEs that need their contents to display locations of macro definitions, though.\nIf you can only compile with -isystem <path> (or -I <path> for that matter) your toolchain installation is broken, and you should get a working toolchain. Same if it doesn't work with -isystem (or -I): toolchain is broken.\nSo show the include paths, add -v to the compiler options and inspect its output:\n> avr-gcc -v ...\n[snip many lines]\n#include \"...\" search starts here:\n#include <...> search starts here:\n $install/bin/../lib/gcc/avr/8.5.0/include\n $install/bin/../lib/gcc/avr/8.5.0/include-fixed\n $install/bin/../lib/gcc/avr/8.5.0/../../../../avr/include\nEnd of search list.\n...\n\nwhere the last line belongs to the AVR-LibC system includes. Resolving all these ..'s, the path is $install/avr/include, and it should contain a folder avr which contains io.h etc.\n" ]
[ 0, 0 ]
[]
[]
[ "c", "visual_studio_code" ]
stackoverflow_0074663037_c_visual_studio_code.txt
Q: add a comma after every "%" symbol using regex in a dataframe if i have a value like this : C:100% B:90% A:80% i want to add comma after every % so the output is like this : C:100%,B:90%,A:80% i've tried somthing like : data['Final'] = data['Final'].str.replace(r'(%)\n\b', r'\1,', regex=True) A: You can use the re.sub method from the re module in Python to achieve this. import re # Your original string string = "C:100% B:90% A:80%" # Use regex to replace all occurrences of '%' with ',%' string = re.sub("%", ",%", string) # The resulting string will be: "C:100%, B:90%, A:80%" If you want to apply this to a column in a DataFrame, you can use the apply method to apply the regex substitution to each value in the column. For example: import pandas as pd import re # Create a DataFrame with a column of strings df = pd.DataFrame({"values": ["C:100% B:90% A:80%", "D:70% E:60% F:50%"]}) # Use the apply method to apply the regex substitution to each value in the column df["values"] = df["values"].apply(lambda x: re.sub("% ", "%,", x)) This will result in a DataFrame with the following values in the values column: 0 C:100%,B:90%,A:80% 1 D:70%,E:60%,F:50% A: You can use this : df['final']= df['final'].str.replace(r'%\s*\b', r'%,', regex=True) Output : print(df) final 0 C:100%,B:90%,A:80% A: There is no newline in your example data, so you could write the pattern matching just a space, or 1 or more whitespace chars \s+ data = pd.DataFrame({"Final": ["C:100% B:90% A:80%"]}) data['Final'] = data['Final'].str.replace(r'(%) \b', r'\1,', regex=True) print(data) Output Final 0 C:100%,B:90%,A:80%
add a comma after every "%" symbol using regex in a dataframe
If I have a value like this: C:100% B:90% A:80% I want to add a comma after every % so the output is like this: C:100%,B:90%,A:80% I've tried something like: data['Final'] = data['Final'].str.replace(r'(%)\n\b', r'\1,', regex=True)
[ "You can use the re.sub method from the re module in Python to achieve this.\nimport re\n\n# Your original string\nstring = \"C:100% B:90% A:80%\"\n\n# Use regex to replace all occurrences of '%' with ',%'\nstring = re.sub(\"%\", \",%\", string)\n\n# The resulting string will be: \"C:100%, B:90%, A:80%\"\n\nIf you want to apply this to a column in a DataFrame, you can use the apply method to apply the regex substitution to each value in the column. For example:\nimport pandas as pd\nimport re\n\n# Create a DataFrame with a column of strings\ndf = pd.DataFrame({\"values\": [\"C:100% B:90% A:80%\", \"D:70% E:60% F:50%\"]})\n\n# Use the apply method to apply the regex substitution to each value in the column\ndf[\"values\"] = df[\"values\"].apply(lambda x: re.sub(\"% \", \"%,\", x))\n\nThis will result in a DataFrame with the following values in the values column:\n0 C:100%,B:90%,A:80%\n1 D:70%,E:60%,F:50%\n\n", "You can use this :\ndf['final']= df['final'].str.replace(r'%\\s*\\b', r'%,', regex=True)\n\nOutput :\nprint(df)\n\n final\n0 C:100%,B:90%,A:80%\n\n", "There is no newline in your example data, so you could write the pattern matching just a space, or 1 or more whitespace chars \\s+\ndata = pd.DataFrame({\"Final\": [\"C:100% B:90% A:80%\"]})\ndata['Final'] = data['Final'].str.replace(r'(%) \\b', r'\\1,', regex=True)\nprint(data)\n\nOutput\n Final\n0 C:100%,B:90%,A:80%\n\n" ]
[ 1, 0, 0 ]
[]
[]
[ "dataframe", "python", "regex", "string" ]
stackoverflow_0074665703_dataframe_python_regex_string.txt
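The question's own attempt used \n in the pattern, which suggests the real data may separate the values with newlines rather than single spaces. A small sketch (column name and sample values assumed) showing that a whitespace-based pattern, in the spirit of the second and third answers, covers both cases while leaving the final % untouched:

    import pandas as pd

    # One space-separated value and one newline-separated value (assumed sample data)
    data = pd.DataFrame({"Final": ["C:100% B:90% A:80%", "C:100%\nB:90%\nA:80%"]})

    # \s+ matches spaces and newlines alike; the trailing % has no following whitespace, so it is left alone
    data["Final"] = data["Final"].str.replace(r"%\s+", "%,", regex=True)

    print(data["Final"].tolist())
    # ['C:100%,B:90%,A:80%', 'C:100%,B:90%,A:80%']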
Q: Google My Business API - Get Reviews PERMISSION_DENIED Error I'm trying to work with Google My Business API. I was able to successfully setup the OAuth 2.0 Playground to work and some simple C# code using the Google.Apis.MyBusinessAccountManagement.v1 library. Now that I have those 2 things working I'm trying to move on to my goal which is get a list of reviews for my business. For the C# library, the MyBusinessAccountManagementService object doesn't have any methods for reviews. So I researched a bunch and found an API call to a different endpoint https://mybusiness.googleapis.com/v4/accounts/{accountId}/locations/{locationId}/reviews and decided to try with the Oauth2.0 Playground using both the https://www.googleapis.com/auth/business.manage and https://www.googleapis.com/auth/plus.business.manage scopes; but for some reason on this endpoint I get the PERMISSION_DENIED error (see below). I already went over the process to fill out the form to get my project associated with the Business and the API is set as Enabled (see image) {   "error": {     "status": "PERMISSION_DENIED",     "message": "Google My Business API has not been used in project {projectId} before or it is disabled. Enable it by visiting https://console.developers.google.com/apis/api/mybusiness.googleapis.com/overview?project={projectId} then retry. If you enabled this API recently, wait a few minutes for the action to propagate to our systems and retry.",     "code": 403,     "details": [       {         "@type": "type.googleapis.com/google.rpc.Help",         "links": [           {             "url": "https://console.developers.google.com/apis/api/mybusiness.googleapis.com/overview?project={projectId}",             "description": "Google developers console API activation"           }         ]       },       {         "reason": "SERVICE_DISABLED",         "@type": "type.googleapis.com/google.rpc.ErrorInfo",         "domain": "googleapis.com",         "metadata": {           "consumer": "projects/{projectId}",           "service": "mybusiness.googleapis.com"         }       }     ]   } } A: Oauth2 play ground is only used for testing, its not meant for antyhing more then that. Google My Business API has not been used in project {projectId} before or it is disabled. Enable it by visiting https://console.developers.google.com/apis/api/mybusiness.googleapis.com/overview?project={projectId} then retry. If you enabled this API recently, wait a few minutes for the action to propagate to our systems and retry. Means exactly the api has not ben enabled in that project yet. You need to go under library and enable it. There are about 7 different apis listed in google cloud console related to My Business API . May I suggest you enable them all until you find the one you are missing. As for the .net client library remember alot of the My Business API's were deprecated they may not work anymore, have you checked the documentation.
Google My Business API - Get Reviews PERMISSION_DENIED Error
I'm trying to work with Google My Business API. I was able to successfully setup the OAuth 2.0 Playground to work and some simple C# code using the Google.Apis.MyBusinessAccountManagement.v1 library. Now that I have those 2 things working I'm trying to move on to my goal which is get a list of reviews for my business. For the C# library, the MyBusinessAccountManagementService object doesn't have any methods for reviews. So I researched a bunch and found an API call to a different endpoint https://mybusiness.googleapis.com/v4/accounts/{accountId}/locations/{locationId}/reviews and decided to try with the Oauth2.0 Playground using both the https://www.googleapis.com/auth/business.manage and https://www.googleapis.com/auth/plus.business.manage scopes; but for some reason on this endpoint I get the PERMISSION_DENIED error (see below). I already went over the process to fill out the form to get my project associated with the Business and the API is set as Enabled (see image) {   "error": {     "status": "PERMISSION_DENIED",     "message": "Google My Business API has not been used in project {projectId} before or it is disabled. Enable it by visiting https://console.developers.google.com/apis/api/mybusiness.googleapis.com/overview?project={projectId} then retry. If you enabled this API recently, wait a few minutes for the action to propagate to our systems and retry.",     "code": 403,     "details": [       {         "@type": "type.googleapis.com/google.rpc.Help",         "links": [           {             "url": "https://console.developers.google.com/apis/api/mybusiness.googleapis.com/overview?project={projectId}",             "description": "Google developers console API activation"           }         ]       },       {         "reason": "SERVICE_DISABLED",         "@type": "type.googleapis.com/google.rpc.ErrorInfo",         "domain": "googleapis.com",         "metadata": {           "consumer": "projects/{projectId}",           "service": "mybusiness.googleapis.com"         }       }     ]   } }
[ "Oauth2 play ground is only used for testing, its not meant for antyhing more then that.\n\nGoogle My Business API has not been used in project {projectId} before or it is disabled. Enable it by visiting https://console.developers.google.com/apis/api/mybusiness.googleapis.com/overview?project={projectId} then retry. If you enabled this API recently, wait a few minutes for the action to propagate to our systems and retry.\n\nMeans exactly the api has not ben enabled in that project yet. You need to go under library and enable it.\nThere are about 7 different apis listed in google cloud console related to My Business API . May I suggest you enable them all until you find the one you are missing.\nAs for the .net client library remember alot of the My Business API's were deprecated they may not work anymore, have you checked the documentation.\n" ]
[ 0 ]
[]
[]
[ "c#", "google_api", "google_my_business_api", "oauth2_playground", "oauth_2.0" ]
stackoverflow_0074661545_c#_google_api_google_my_business_api_oauth2_playground_oauth_2.0.txt
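For completeness, once the right API is enabled, the v4 reviews endpoint quoted in the question can be called with any HTTP client. A rough Python sketch, assuming you already hold an OAuth 2.0 access token with the business.manage scope; the account and location IDs are placeholders, and the response field names follow the legacy v4 shape, so adjust them if your payload differs:

    import requests

    ACCESS_TOKEN = "ya29.your-oauth2-access-token"  # placeholder: obtain via your OAuth flow
    ACCOUNT_ID = "1234567890"                       # placeholder
    LOCATION_ID = "9876543210"                      # placeholder

    url = (
        "https://mybusiness.googleapis.com/v4/"
        f"accounts/{ACCOUNT_ID}/locations/{LOCATION_ID}/reviews"
    )

    resp = requests.get(url, headers={"Authorization": f"Bearer {ACCESS_TOKEN}"})
    resp.raise_for_status()  # a 403 here usually means the API is still disabled or the scope is missing

    for review in resp.json().get("reviews", []):
        print(review.get("starRating"), review.get("comment", ""))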
Q: React apps are not deploying on heroku, how to deploy without using eco dynos React Apps are no longer working on Heroku, something related to free eoc dynos changed from 28th of november 2022, this is the exact message Is there a way to deploy react app on Heroku without using paid dynos ? A: Heroku closed the free tier https://blog.heroku.com/next-chapter on November 28, 2022 You can no longer host your apps for free
React apps are not deploying on heroku, how to deploy without using eco dynos
React apps are no longer working on Heroku; something related to the free eco dynos changed on 28 November 2022, and this is the exact message. Is there a way to deploy a React app on Heroku without using paid dynos?
[ "Heroku closed the free tier https://blog.heroku.com/next-chapter on November 28, 2022\nYou can no longer host your apps for free\n" ]
[ 0 ]
[]
[]
[ "heroku", "javascript", "reactjs" ]
stackoverflow_0074665308_heroku_javascript_reactjs.txt
Q: Python Dataframe fillna with value on left column I have an excel spreadsheet where there are merged cells. I would like to build a dictionary of Product_ID - Category - Country. But for that I need to get, I believe, Python to be able to read an excel file with horizontally merged cells. import pandas as pd excel_sheet = pd.read_excel(r'C:\Users\myusername\Documents\Product_Sales_Database.xlsx, 'Product IDs') However the returned dataframe is this: My question is, how can I fill the nan values in the dataframe, with the values on the left column? Thank you! A: This should do: df.iloc[0] = df.iloc[0].ffill() A: I understand, that the question is more than 2 yo, and the best answer is coorect, but if you want to fill NaNs of the whole Frame, you can use: df.T.ffill().T
Python Dataframe fillna with value on left column
I have an Excel spreadsheet where there are merged cells. I would like to build a dictionary of Product_ID - Category - Country. But for that I need to get, I believe, Python to be able to read an Excel file with horizontally merged cells. import pandas as pd excel_sheet = pd.read_excel(r'C:\Users\myusername\Documents\Product_Sales_Database.xlsx', 'Product IDs') However the returned dataframe is this: My question is, how can I fill the NaN values in the dataframe with the values from the column to the left? Thank you!
[ "This should do:\ndf.iloc[0] = df.iloc[0].ffill() \n\n", "I understand, that the question is more than 2 yo, and the best answer is coorect, but if you want to fill NaNs of the whole Frame, you can use:\ndf.T.ffill().T\n\n" ]
[ 1, 0 ]
[]
[]
[ "dataframe", "fillna", "pandas", "python" ]
stackoverflow_0063303810_dataframe_fillna_pandas_python.txt
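A runnable illustration of the two answers above, using a made-up frame that mimics what read_excel returns for horizontally merged cells (only the first cell of a merged range keeps its value; the rest come back as NaN):

    import numpy as np
    import pandas as pd

    # Assumed stand-in for the merged header rows of the spreadsheet
    df = pd.DataFrame([
        ["Electronics", np.nan, np.nan, "Food", np.nan],
        ["Snacks", np.nan, "Drinks", np.nan, np.nan],
    ])

    # First answer: forward-fill a single row, left to right
    df.iloc[0] = df.iloc[0].ffill()
    print(df.iloc[0].tolist())
    # ['Electronics', 'Electronics', 'Electronics', 'Food', 'Food']

    # Second answer: forward-fill every row at once by transposing, filling down, and transposing back
    print(df.T.ffill().T.values.tolist())
    # [['Electronics', 'Electronics', 'Electronics', 'Food', 'Food'],
    #  ['Snacks', 'Snacks', 'Drinks', 'Drinks', 'Drinks']]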
Q: Git push hangs when pushing to Github? Git push hangs everytime I try to push to github. I am using Cygwin and Windows 7. Git functions fine locally tracking branches, providing status, setting global user.name and user.email and allowing commits. I'm still new and learning. I enter git push , git push origin master or git push -u origin master and I get nothing but a blank line requiring me to ctl-c to get the prompt back. ssh-keygen -t rsa -C "[email protected]" asks me for a file name and hangs git push heroku master hangs $ git status returns On branch master nothing to commit, working directory clean $ git pull returns Already up to date $ git remote -v returns: heroku [email protected]:myherokusite.git (fetch) heroku [email protected]:myherokusite.git (push) origin https://github.com/gitusername/appname.git (fetch) origin https://github.com/gitusername/appname.git (push) or the correct ssh remote settings are returned when trying this with ssh Updated: Using the SSH url [email protected]:gitusername/gitrepo.git also hangs git remote set-url origin https://github.com/gitusername/appname.git is correct Updated: I can see the git processes running in Windows Task Manager while it hangs. I've tried: Using different internet connection locations switching between https and ssh and it hangs Uninstalled git. Reinstalled from: https://code.google.com/p/msysgit/downloads/list Uninstalled git. Installed Cygwin's git Uninstalled git. Installed Github for Windows GUI app and it I WAS able to push. But this app has limited functionality, forces me out of my Cygwin window into another app which then forces me into a Windows command prompt for complete functionality which I thought I had escaped by using Cygwin. Spent many, many hours trying to resolve this, it worked faultlessly before, thanks. UPDATE 4/2014: I rebuilt my entire machine Win 7, Cygwin etc and all is now working fine A: Restart your ssh agent! killall ssh-agent; eval `ssh-agent` A: git config --global core.askpass "git-gui--askpass" This worked for me. It may take 3-5 secs for the prompt to appear just enter your login credentials and you are good to go. A: Try creating a script like ~/sshv.sh that will show you what ssh is up to: #!/bin/bash ssh -vvv "$@" Allow execution of the ~/sshv.sh file for the owner of the file: chmod u+x ~/sshv.sh Then invoke your git push with: GIT_SSH=~/sshv.sh git push ... In my case, this helped me figure out that I was using ssh shared connections that needed to be closed, so I killed those ssh processes and it started working. A: Try GIT_CURL_VERBOSE=1 git push ...Your problem may occur due to proxy settings, for instance if git is trying to reach github.com via a proxy server and the proxy is not responding. With GIT_CURL_VERBOSE=1 it will show the target IP address and some information. You can compare this IP address with the output of the command: host www.github.com. If these IPs are different then you can set https_proxy="" and try again. A: Using "Command Prompt" (cmd) instead of git bash for initial push resolved the hang up for me. Since then I use git bash without any issues. A: I had the same problem with absolutely same symptoms… I was about to rebuild my whole system in my despair)). I also tried git config --global core.askpass "git-gui--askpass", but it didn't work. The solution was to restart the ssh-agent: launchctl stop /System/Library/LaunchAgents/org.openbsd.ssh-agent launchctl start /System/Library/LaunchAgents/org.openbsd.ssh-agent If needed, prefix these with sudo. 
A: In my case, the issue was the https seems to be no longer supported and I had to switch all my origins from the old https://github.com/username/myrepo to [email protected]:username/myrepo.git. I did this with git remote set-url origin [email protected]:username/myrepo.git A: I had the same issue. Stop worrying and searching endless complicated solutions, just remove git and reinstall it. sudo apt-get purge git sudo apt-get autoremove sudo apt-get install git Thats it. It should work now A: Had the same problem. Was a little bit confused but the thing was I had make a git init --bare on root, that means that you won't be able to push because of you don't have any rights. Instead make a new User or in my case I used Pi User and made git init --bare there, which then later on it worked. git config --global http.postBuffer 524288000 Maximum size in bytes of the buffer used by smart HTTP transports when POSTing data to the remote system. For requests larger than this buffer size, HTTP/1.1 and Transfer-Encoding: chunked is used to avoid creating a massive pack file locally. Default is 1 MiB, which is sufficient for most requests. A: Its worth checking if you are using the cygwin git or an external git (ie github). If whereis git returns just /cygdrive/c/Program Files (x86)/Git/cmd/git.exe or similar its best to install the cygwin git package, this solved the problem for me. A: For anyone experiencing this since 2021/08/13 and finding this question, it may be related to the recent auth policy changes on GitHub. They are no longer accepting username/password for authentication. The solution is to set up ssh access or create a personal access token. A: I thought my Git windows screen was struck but actually a sign in prompt comes behind it.Check for it and enter your credentials and that's it. A: I had this same issue today, all I did to resolve it was remove origin git remote remove origin and re-add it git remote add origin https://github.com/username/project.git then I was able to push successfully. A: I just wanted to say that I'm having this issue on my AWS EC2 instances. I was trying to push from my EC2 instance itself, when I have it configured to only allow traffic in from the load balancer. I changed the rule to allow HTTP in from everywhere, but it still didn't fix the problem. Then I realized it's because my security groups are configured to not allow outbound traffic from my EC2 instances over HTTPS. I didn't have allow HTTPS inbound traffic to make it work, even though it's probably a good policy for you to have HTTPS available inbound. A: This occurred for me when my computer's disk space was full. Delete some files & empty the trash to fix. A: In my case a new public key on cPanel (my remote) was not yet authorized. My client was a new machine running Ubuntu 2020-04 git push origin ...worked, but prompted for the cPanel password. I assume the git-gui process hung waiting for a password that I couldn't enter. After authorizing my new key git-gui worked. It did prompt for the key store password. A: Use Git CMD, not Cygwin Bash Terminal. By using the Git CMD, my system was able to authenticate my information on GitHub. After that, using bash worked fine. I understand that there was some sort of authentication the program was trying to do, but couldn't do from the bash terminal for some reason. Although this answer is not comprehensive, it does get the job done. A: I'm wondering if it's the same thing I had... Go into Putty Click on "Default Settings" in the Saved Sessions. 
Click Load Go to Connection -> SSH -> Bugs Set "Chokes on PuTTY's SSH-2 'winadj' requests" to On (instead of Auto) Go Back to Session in the treeview (top of the list) Click on "Default Settings" in the Saved Sessions box. Click Save. This (almost verbatim) comes from : https://tortoisegit.org/issue/1880 A: In my case the issue was there was some process that had locked my keychain access... Force quit all other apps to make sure keychain access is not locked on your Mac A: I'm new to this. I managed to solve my issue regarding the hanging git push command. I recently installed git scm. In one of the installation options, I had selected to use git credential manager core. I assumed that it was installed automatically. But it looks like there was an error in that installation. I reinstalled git credential manager core from the website, and it works perfectly now. A: If you use windows credential manager, use CMD instead of git Bash. Then you can add an authentication method to proceed. This worked for me. A: I also had an issue where git hangs on the "Writing objects" part on Windows 7 (using msysgit, the default windows client from git) and this is the first hit I got in google, so I will also post my answer here. git config --global core.askpass "git-gui--askpass" did not work unfotunately, but after some researching I found the tip on Git push halts on "Writing Objects: 100%" to use git config –global sendpack.sideband false which worked perfectly. I can finally push from the commandline again! A: I had two repositories, pushing to one of which worked fine. So, I compared their .git/config. The non-working one had at the end: [http] sslVerify = false The working one had instead: [credential] helper = store Changing .git/config solved the problem. A: I spent hours trying to fix this and none of the recommendations worked. Out of frustration I moved the whole project to a backup folder, recloned a fresh and then copied over my files from backup folder. It worked!!. I suspect my issue was I committed node_module which was not excluded in .gitignore initially and removing from cache did not help/work. When I started from fresh the file size was a fraction compared to the earlier one. A: This probably works for other Windows setups (I faced the issue on Windows 7 Pro 32 bits BTW and trying to push to Bitbucket, not Github). I tried reinstalling Git and fiddling with the installer configuration. Got it working with the OpenSSH setting left out and choosing Not to use one when choosing a credentials manager, which is probably what the SSH agent explained in other answers is called on GNU/Linux, so the hanging was probably due to waiting for an assumingly unavailable Windows credentials manager to respond. A: So when you type & enter git pish-u origin, GUI asking for your credentials should popup. In my case, after typing git pish-u origin, nothing happens until took a look at my task manager and found something running which I was certain was the GUI that should popup to ask your credentials. I decided to end its task. I was assuming that it will show an error on my gitbash but instead, the damn GUI finally showed up and I was able to finally progress. A: I was faced with the same problem. I use Github desktop for the normal action and it can push or pull, but it doesn't support force push, when I try to do some rebase work and it always failed to push force. I tried add core.askpass, config the proxy but all not work. 
Finally I decided to watch the log of Github desktop and I found it use below command to push: git -c credential.helper= -c protocol.version=2 push origin I tried this one with force flag and it work, it finally ask me for the user name and password. I am not sure which configuration make it work, but I think this may help. EDIT: I tried to install manager-core from here and I am able to push. It seems like manager-core is not installed properly. A: Sometimes I have this hanging issue when pushing a new branch from Android Studio and it does not give an error message. Usually, if I do a simple fetch from main it will work afterwards. A: In my case git was trying to use Ipv6 instead of Ipv4 to authenticate github and my terminal was stuck here set_sock_tos: set socket 3 IPV6_TCLASS 0x48. To solve this I added AddressFamily option to ~/.ssh/config Host github.com Hostname github.com AddressFamily inet IdentityFile ~/.ssh/id_rsa test command: ssh -vT [email protected] A: I experienced this in andorid studio. Actually, on git push, the progress bar kept running and nothing happened; I used to push code using token; Here is what i did: in android studio opened terminal. entered git push when email and password were asked on gui, i closed the dialogue box. after that i was prompted for user name on terminal, so i did it. after that i was asked password, where instead of password, i entered the access token. git push Logon failed, use ctrl+c to cancel basic credential prompt. Username for 'https://github.com': abcxyz Password for 'https://[email protected]': ____ and voila. A: If you are in VSCode it may also be the case that the GitHub VSCode extension wants to open GitHub to sign in. If you click to fast you that popup and then all successive git push will get stuck even when you execute them from the VSCode terminal. Solution: Close VSCode, open the project root again in a fresh window. When prompted follow the link and sign in. That's it. A: Turning off Virgin Media WebSafe fixed this issue for me. Assume other ISPs have similar toggle-able safety features. A: For me none of above worked. I tried this one and it worked: git push --set-upstream origin main A: Sometimes not stuck. Maybe it will be pushing. You can check progress of your push by using this command. I think this will useful for someone. git push --progress See more about how to see git push progress: How can I know how much percentage of git push is complete?
Git push hangs when pushing to Github?
Git push hangs everytime I try to push to github. I am using Cygwin and Windows 7. Git functions fine locally tracking branches, providing status, setting global user.name and user.email and allowing commits. I'm still new and learning. I enter git push , git push origin master or git push -u origin master and I get nothing but a blank line requiring me to ctl-c to get the prompt back. ssh-keygen -t rsa -C "[email protected]" asks me for a file name and hangs git push heroku master hangs $ git status returns On branch master nothing to commit, working directory clean $ git pull returns Already up to date $ git remote -v returns: heroku [email protected]:myherokusite.git (fetch) heroku [email protected]:myherokusite.git (push) origin https://github.com/gitusername/appname.git (fetch) origin https://github.com/gitusername/appname.git (push) or the correct ssh remote settings are returned when trying this with ssh Updated: Using the SSH url [email protected]:gitusername/gitrepo.git also hangs git remote set-url origin https://github.com/gitusername/appname.git is correct Updated: I can see the git processes running in Windows Task Manager while it hangs. I've tried: Using different internet connection locations switching between https and ssh and it hangs Uninstalled git. Reinstalled from: https://code.google.com/p/msysgit/downloads/list Uninstalled git. Installed Cygwin's git Uninstalled git. Installed Github for Windows GUI app and it I WAS able to push. But this app has limited functionality, forces me out of my Cygwin window into another app which then forces me into a Windows command prompt for complete functionality which I thought I had escaped by using Cygwin. Spent many, many hours trying to resolve this, it worked faultlessly before, thanks. UPDATE 4/2014: I rebuilt my entire machine Win 7, Cygwin etc and all is now working fine
[ "Restart your ssh agent!\nkillall ssh-agent; eval `ssh-agent`\n\n", "git config --global core.askpass \"git-gui--askpass\"\n\nThis worked for me. It may take 3-5 secs for the prompt to appear just enter your login credentials and you are good to go.\n", "Try creating a script like ~/sshv.sh that will show you what ssh is up to:\n#!/bin/bash\nssh -vvv \"$@\"\n\nAllow execution of the ~/sshv.sh file for the owner of the file:\nchmod u+x ~/sshv.sh\n\nThen invoke your git push with:\nGIT_SSH=~/sshv.sh git push ...\n\nIn my case, this helped me figure out that I was using ssh shared connections that needed to be closed, so I killed those ssh processes and it started working.\n", "Try GIT_CURL_VERBOSE=1 git push\n...Your problem may occur due to proxy settings, for instance if git is trying to reach github.com via a proxy server and the proxy is not responding.\nWith GIT_CURL_VERBOSE=1 it will show the target IP address and some information. You can compare this IP address with the output of the command: host www.github.com. If these IPs are different then you can set https_proxy=\"\" and try again.\n", "Using \"Command Prompt\" (cmd) instead of git bash for initial push resolved the hang up for me. Since then I use git bash without any issues.\n", "I had the same problem with absolutely same symptoms… I was about to rebuild my whole system in my despair)). I also tried git config --global core.askpass \"git-gui--askpass\", but it didn't work.\nThe solution was to restart the ssh-agent:\nlaunchctl stop /System/Library/LaunchAgents/org.openbsd.ssh-agent \n\nlaunchctl start /System/Library/LaunchAgents/org.openbsd.ssh-agent\n\nIf needed, prefix these with sudo.\n", "In my case, the issue was the https seems to be no longer supported and I had to switch all my origins from the old https://github.com/username/myrepo to [email protected]:username/myrepo.git.\nI did this with\ngit remote set-url origin [email protected]:username/myrepo.git\n\n", "I had the same issue. \nStop worrying and searching endless complicated solutions, \njust remove git and reinstall it.\nsudo apt-get purge git\nsudo apt-get autoremove\nsudo apt-get install git\n\nThats it. It should work now\n", "\nHad the same problem. Was a little bit confused but the thing was I had make a git init --bare on root, that means that you won't be able to push because of you don't have any rights. Instead make a new User or in my case I used Pi User and made git init --bare there, which then later on it worked. \ngit config --global http.postBuffer 524288000\n\n\nMaximum size in bytes of the buffer used by smart HTTP transports when POSTing data to the remote system.\n For requests larger than this buffer size, HTTP/1.1 and Transfer-Encoding: chunked is used to avoid creating a massive pack file locally. Default is 1 MiB, which is sufficient for most requests.\n\n", "Its worth checking if you are using the cygwin git or an external git (ie github).\nIf whereis git returns just /cygdrive/c/Program Files (x86)/Git/cmd/git.exe or similar its best to install the cygwin git package, this solved the problem for me.\n", "For anyone experiencing this since 2021/08/13 and finding this question, it may be related to the recent auth policy changes on GitHub. 
They are no longer accepting username/password for authentication.\nThe solution is to set up ssh access or create a personal access token.\n", "I thought my Git windows screen was struck but actually a sign in prompt comes behind it.Check for it and enter your credentials and that's it.\n", "I had this same issue today, all I did to resolve it was remove origin git remote remove origin and re-add it git remote add origin https://github.com/username/project.git then I was able to push successfully.\n", "I just wanted to say that I'm having this issue on my AWS EC2 instances. I was trying to push from my EC2 instance itself, when I have it configured to only allow traffic in from the load balancer. I changed the rule to allow HTTP in from everywhere, but it still didn't fix the problem. Then I realized it's because my security groups are configured to not allow outbound traffic from my EC2 instances over HTTPS. I didn't have allow HTTPS inbound traffic to make it work, even though it's probably a good policy for you to have HTTPS available inbound. \n", "This occurred for me when my computer's disk space was full. Delete some files & empty the trash to fix.\n", "In my case a new public key on cPanel (my remote) was not yet authorized.\nMy client was a new machine running Ubuntu 2020-04\ngit push origin\n\n...worked, but prompted for the cPanel password.\nI assume the git-gui process hung waiting for a password that I couldn't enter.\nAfter authorizing my new key git-gui worked. It did prompt for the key store password.\n", "Use Git CMD, not Cygwin Bash Terminal.\nBy using the Git CMD, my system was able to authenticate my information on GitHub. After that, using bash worked fine. I understand that there was some sort of authentication the program was trying to do, but couldn't do from the bash terminal for some reason. Although this answer is not comprehensive, it does get the job done.\n", "I'm wondering if it's the same thing I had...\n\nGo into Putty\nClick on \"Default Settings\" in the Saved Sessions. Click Load\nGo to Connection -> SSH -> Bugs\nSet \"Chokes on PuTTY's SSH-2 'winadj' requests\" to On (instead of Auto)\nGo Back to Session in the treeview (top of the list)\nClick on \"Default Settings\" in the Saved Sessions box. Click Save.\n\nThis (almost verbatim) comes from :\nhttps://tortoisegit.org/issue/1880\n", "In my case the issue was there was some process that had locked my keychain access... \nForce quit all other apps to make sure keychain access is not locked on your Mac\n", "I'm new to this. I managed to solve my issue regarding the hanging git push command.\nI recently installed git scm. In one of the installation options, I had selected to use git credential manager core. I assumed that it was installed automatically. But it looks like there was an error in that installation. I reinstalled git credential manager core from the website, and it works perfectly now.\n", "If you use windows credential manager, use CMD instead of git Bash. Then you can add an authentication method to proceed. 
This worked for me.\n", "I also had an issue where git hangs on the \"Writing objects\" part on Windows 7 (using msysgit, the default windows client from git) and this is the first hit I got in google, so I will also post my answer here.\ngit config --global core.askpass \"git-gui--askpass\" did not work unfotunately, but after some researching I found the tip on Git push halts on \"Writing Objects: 100%\" to use git config –global sendpack.sideband false which worked perfectly.\nI can finally push from the commandline again!\n", "I had two repositories, pushing to one of which worked fine. So, I compared their .git/config. The non-working one had at the end:\n[http]\n sslVerify = false\n\nThe working one had instead:\n[credential]\n helper = store\n\nChanging .git/config solved the problem.\n", "I spent hours trying to fix this and none of the recommendations worked. Out of frustration I moved the whole project to a backup folder, recloned a fresh and then copied over my files from backup folder. It worked!!. I suspect my issue was I committed node_module which was not excluded in .gitignore initially and removing from cache did not help/work. When I started from fresh the file size was a fraction compared to the earlier one.\n", "This probably works for other Windows setups (I faced the issue on Windows 7 Pro 32 bits BTW and trying to push to Bitbucket, not Github).\nI tried reinstalling Git and fiddling with the installer configuration.\nGot it working with the OpenSSH setting left out and choosing Not to use one when choosing a credentials manager, which is probably what the SSH agent explained in other answers is called on GNU/Linux, so the hanging was probably due to waiting for an assumingly unavailable Windows credentials manager to respond.\n", "So when you type & enter git pish-u origin, GUI asking for your credentials should popup. In my case, after typing git pish-u origin, nothing happens until took a look at my task manager and found something running which I was certain was the GUI that should popup to ask your credentials. I decided to end its task. I was assuming that it will show an error on my gitbash but instead, the damn GUI finally showed up and I was able to finally progress.\n", "I was faced with the same problem.\nI use Github desktop for the normal action and it can push or pull, but it doesn't support force push, when I try to do some rebase work and it always failed to push force.\nI tried add core.askpass, config the proxy but all not work.\nFinally I decided to watch the log of Github desktop and I found it use below command to push:\ngit -c credential.helper= -c protocol.version=2 push origin\n\n\nI tried this one with force flag and it work, it finally ask me for the user name and password.\nI am not sure which configuration make it work, but I think this may help.\nEDIT:\nI tried to install manager-core from here and I am able to push.\nIt seems like manager-core is not installed properly.\n", "Sometimes I have this hanging issue when pushing a new branch from Android Studio and it does not give an error message. 
Usually, if I do a simple fetch from main it will work afterwards.\n", "In my case git was trying to use IPv6 instead of IPv4 to authenticate with GitHub and my terminal was stuck here:\nset_sock_tos: set socket 3 IPV6_TCLASS 0x48.\nTo solve this I added the AddressFamily option to ~/.ssh/config\nHost github.com\n Hostname github.com\n AddressFamily inet \n IdentityFile ~/.ssh/id_rsa \n\ntest command:\nssh -vT [email protected]\n", "I experienced this in Android Studio.\nActually, on git push, the progress bar kept running and nothing happened;\nI used to push code using a token.\nHere is what I did:\n\nIn Android Studio, I opened the terminal.\nEntered\n\ngit push\n\n\nWhen the email and password were asked for in the GUI, I closed the dialog box.\nAfter that I was prompted for my username in the terminal, so I entered it.\nAfter that I was asked for the password, where instead of the password I entered the access token.\n\ngit push\nLogon failed, use ctrl+c to cancel basic credential prompt.\nUsername for 'https://github.com': abcxyz\nPassword for 'https://[email protected]': ____\n\nand voila.\n", "If you are in VSCode it may also be the case that the GitHub VSCode extension wants to open GitHub to sign in.\nIf you dismiss that popup too quickly, then all successive git push commands will get stuck even when you execute them from the VSCode terminal.\nSolution: Close VSCode, open the project root again in a fresh window. When prompted, follow the link and sign in. That's it.\n", "Turning off Virgin Media WebSafe fixed this issue for me. I assume other ISPs have similar toggleable safety features.\n", "For me none of the above worked.\nI tried this one and it worked:\n\ngit push --set-upstream origin main\n\n", "Sometimes it is not actually stuck; it may still be pushing. You can check the progress of your push by using this command. I think this will be useful for someone.\ngit push --progress\n\nSee more about how to see git push progress: How can I know how much percentage of git push is complete?\n" ]
[ 89, 81, 44, 37, 21, 20, 14, 10, 8, 6, 5, 4, 3, 2, 2, 2, 2, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[]
[]
[ "cygwin", "freeze", "git", "git_push", "github" ]
stackoverflow_0016906161_cygwin_freeze_git_git_push_github.txt
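For the hanging-push reports collected above, a hedged way to narrow down where a push actually stalls (these are standard Git and OpenSSH debugging switches rather than something taken from any single answer; the remote and branch names are illustrative):

GIT_TRACE=1 GIT_CURL_VERBOSE=1 git push origin main   # verbose trace for HTTPS remotes
ssh -vT [email protected]                              # connection test for SSH remotes

If the trace stops right where a credential prompt would appear, the askpass / credential-manager answers above are the likely fix; if it stalls during "Writing objects", the sendpack.sideband and disk-space answers are the better starting point.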
Q: for, foreach and other loops in github-actions There is an "if" available in github-actions. Is there any way to use loops? (for, foreach, while, ...) something like - foreach val in ['val1', 'val2'] A: In one word - no. But there is a matrix mechanism that will probably help you. Share more details about the problem you want to solve using loops. A: Maybe not what you are looking for, but you can use a run step in the action - name: Print things run: | for x in ${{ y }}; do echo "$x" done A: Looping is not currently possible, especially if you consider looping by using the same runner. Matrix can be used to run a variable number of jobs, but that adds considerable complexity and a huge increase in resource usage, and therefore slower results overall.
for, foreach and other loops in github-actions
There is a "if" available in github-actions. Is there any way to do use loops? (for, foreach, while , ...) something like - foreach val in ['val1', 'val2']
[ "In one word - no. But there is a matrix mechanism that probably will help you. Share more details about the problem you want to solve using loops.\n", "Maybe not what you are looking for, but you can use a run in the action\n - name: Print things\n run: |\n for x in ${{ y }}; do\n echo \"$x\"\n done\n\n", "Looping is not currently possible, especially if you consider looping by using the same runner.\nMatrix can be used to run a variable number of jobs, but that adds a considerable complexity and huge increase in resource usage, so directly slower results.\n" ]
[ 6, 1, 0 ]
[]
[]
[ "github_actions" ]
stackoverflow_0060995173_github_actions.txt
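A minimal sketch of the matrix mechanism mentioned in the answers above, which is the closest built-in equivalent of a foreach over ['val1', 'val2'] (the workflow, job, and value names here are illustrative, not taken from the question):

name: loop-demo
on: [push]
jobs:
  print:
    runs-on: ubuntu-latest
    strategy:
      matrix:
        val: [val1, val2]
    steps:
      - run: echo "${{ matrix.val }}"   # runs once per matrix value, in separate jobs

Each matrix entry spawns its own job, so this behaves like a parallel fan-out rather than a sequential loop; for a sequential loop inside one job, the shell for-loop shown in the second answer remains the usual workaround.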
Q: Tensorflow Lite Micro package in Keil custom MCU platform I have tried to use the tflite default package in the Keil MDK environment, but I am facing some compilation problems. project code My Device: Armv8-M Mainline based device My TFlite runtime environment: runtime environment Hundreds of errors exist. error type1 error: use of undeclared identifier 'EAFNOSUPPORT'/.../... error type2 error: use of undeclared identifier 'errno' What might be going wrong? I tried to compile the package in an empty project with the same environment settings, and it compiled without such errors. A: I solved the problem by disabling the redefined symbol in the ROM.lib file.
Tensorflow Lite Micro package in Keil custom MCU platform
I have tried to use the tflite default package in the Keil MDK environment, but I am facing some compilation problems. project code My Device: Armv8-M Mainline based device My TFlite runtime environment: runtime environment Hundreds of errors exist. error type1 error: use of undeclared identifier 'EAFNOSUPPORT'/.../... error type2 error: use of undeclared identifier 'errno' What might be going wrong? I tried to compile the package in an empty project with the same environment settings, and it compiled without such errors.
[ "I solved the problem by disabling the redefined symbol in ROM.lib file.\n" ]
[ 0 ]
[]
[]
[ "c", "compiler_errors", "keil", "microcontroller", "tensorflow_lite" ]
stackoverflow_0074588399_c_compiler_errors_keil_microcontroller_tensorflow_lite.txt