Dataset columns:
  content: string (lengths 86 to 88.9k)
  title: string (lengths 0 to 150)
  question: string (lengths 1 to 35.8k)
  answers: list
  answers_scores: list
  non_answers: list
  non_answers_scores: list
  tags: list
  name: string (lengths 30 to 130)
unusual result from joining two Data Frames
I have two tables: the first, named A: id x 1 123 2 456 3 789 and the second, named B: id y 1 4 3 5 3 6 I need to join tables A and B with this result: id x y 1 123 4 2 456 3 789 5 3 6 Of course, instead of just the x and y columns I have a lot of columns, and the tables have a lot of rows, so the solution cannot be to use a "left join" and remove one value. I've no idea how to get that result in Python. Could you help me?
[ "Use Pandas merge\nimport pandas as pd\n\n# load the data from the two tables into pandas dataframes\ndf1 = pd.read_csv('A.csv')\ndf2 = pd.read_csv('B.csv')\n\n# merge the df using 'id' column\nmerged_df = pd.merge(df1, df2, on='id')\n\nprint(merged_df)\n\n" ]
[ 1 ]
[]
[]
[ "pandas", "python" ]
stackoverflow_0074662569_pandas_python.txt
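A follow-up note on the merge answer above: pd.merge(df1, df2, on='id') defaults to an inner join, so it drops id 2 entirely and pairs id 3 with both of its rows in B - it does not reproduce the table the question asks for. One way to get that exact alignment is to number repeated ids by occurrence and merge on that counter as well; the sketch below uses small hypothetical frames in place of A.csv and B.csv, and the 'occ' helper column is an assumption, not part of the original post.

import pandas as pd

# Hypothetical frames standing in for tables A and B from the question
df1 = pd.DataFrame({"id": [1, 2, 3], "x": [123, 456, 789]})
df2 = pd.DataFrame({"id": [1, 3, 3], "y": [4, 5, 6]})

# Number repeated ids by occurrence so the n-th row of an id in A
# lines up with the n-th row of the same id in B
df1["occ"] = df1.groupby("id").cumcount()
df2["occ"] = df2.groupby("id").cumcount()

result = (df1.merge(df2, on=["id", "occ"], how="outer")
             .sort_values(["id", "occ"])
             .drop(columns="occ"))
print(result)

With how="outer", rows that have no partner on the other side are kept with missing values, which matches the expected output where id 2 has no y and the second id 3 has no x.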
AWK is inserting in the beginning instead of appending, only with specific input
I wrote a simple awk one liner to append a string before and after each each line of a given file: awk '{print "--> " $0 " <--"}' filename Now I have a test file containing these lines: test test2 test3 The result is as expected: --> test <-- --> test2 <-- --> test3 <-- However, these are my real lines I need to process: "\x1c\x1d\x1e\x1f\x20\x21\x22\x23\x24\x25\x26\x27\x28\x29\x2a\x2b\x2c\x2d\x2e\x2f\x30\x31\x32\x33\x34\x35\x36\x37\x38\x39\x3a\x3b" "\x3c\x3d\x3e\x3f\x40\x41\x42\x43\x44\x45\x46\x47\x48\x49\x4a\x4b\x4c\x4d\x4f\x50\x51\x52\x53\x54\x55\x56\x57\x58\x59\x5a\x5b\x5c" "\x5d\x5e\x5f\x60\x61\x62\x63\x64\x65\x66\x67\x68\x69\x6a\x6b\x6c\x6d\x6e\x6f\x70\x71\x72\x73\x74\x75\x76\x77\x78\x79\x7a\x7b\x7c" Processing those with the very same one liner will not work, it will instead only insert the rightmost arrow in front of the line. <--"\x1c\x1d\x1e\x1f\x20\x21\x22\x23\x24\x25\x26\x27\x28\x29\x2a\x2b\x2c\x2d\x2e\x2f\x30\x31\x32\x33\x34\x35\x36\x37\x38\x39\x3a\x3b" <--"\x3c\x3d\x3e\x3f\x40\x41\x42\x43\x44\x45\x46\x47\x48\x49\x4a\x4b\x4c\x4d\x4f\x50\x51\x52\x53\x54\x55\x56\x57\x58\x59\x5a\x5b\x5c" <--"\x5d\x5e\x5f\x60\x61\x62\x63\x64\x65\x66\x67\x68\x69\x6a\x6b\x6c\x6d\x6e\x6f\x70\x71\x72\x73\x74\x75\x76\x77\x78\x79\x7a\x7b\x7c" Why is this happening and how can I work around it?
[ "Looks like your input file contains dos/windows line endings (\\r\\n); try ...\nhead -2 filename | od -c\n\n... and you should see each line ending with \\r \\n.\nRun dos2unix filename to permanently remove the \\r characters from the file.\nAlternatively you can have awk strip the \\r, eg:\nawk '{sub(/\\r/,\"\"); print \"--> \" $0 \" <--\"}' filename\n\n" ]
[ 0 ]
[]
[]
[ "awk" ]
stackoverflow_0074662562_awk.txt
how can I get the output of prototype correctly
I am learning JS prototype, and I got the following code, all is clear except the output of console.log(myObj.name). How come it gives the name of the object itself, which is myObj? Doesn't it need to find the prototype from its upper-level object? const myObj = function () {} myObj.prototype.name = 'prototype.name' myObj1 = new myObj() console.log(myObj.name) // => myObj console.log(myObj1.name) // => 'prototype.name' console.log(myObj.prototype.name) // => prototype.name I was expecting the same output as myObj1 and their prototype.
[ ".prototype is not the same as the prototype.\nWhen you use the new operator, the prototype of the object being created is set to the .prototype attribute of the constructor function. So, when myObj1 is created, its prototype is myObj.prototype and contains a .name attribute. Now, myObj1 has .name in its prototype chain, but myObj doesn't; it has the attribute .prototype.name. See this MDN article for more.\nLook at this for an example:\n\n\nconst a = function() {}\na.prototype.greeting = \"Hello\";\n\nconst b = new a();\n\nconsole.log(a.greeting);\nconsole.log(b.greeting);\nconsole.log(a.prototype.greeting);\n\n\n\nHowever, the .name attribute is special in your case. As Felix Kling pointed out in the comments of the question, myObj is a function, and every function has a special .name attribute that contains the name of the function (ie: 'myObj'). So, you are being confused because myObj.name is a completely separate thing from myObj.prototype.name and myObj1.name, which are the same.\n", "\nDoesn't it need to find the prototype from its upper-level object?\n\nAssuming you know about prototypal inheritance and that in JS, object derives from another object (unlike class based inheritance), I think what you call upper-level object is the prototype.\nWhen a function is used as a constructor to create an object, in your case myObj1, that object derives from the object referenced by the prototype property of that function. Here myObj1 derives from myObj.prototype. So, when you read a property myObj1.X and that property does not exist, runtime will go up the derivation tree and look for myObj.prototype.X.\nHowever, myObj itself is a function (and also an object). All functions derive from built-in Function.prototype. For example,\n\n\nFunction.prototype.myprop = 'somevalue'; //Note: not a good practice to modify Function.prototype, this is just for demo\n\nlet myObj = function() {} // myObj derives from Function.prototype\n\nconsole.log(myObj.myprop); // => somevalue\n\n\n\nIn your case however, myObj.name really actually refers to Function.prototype.name, which is JS built-in property and is readonly.\n" ]
[ 0, 0 ]
[]
[]
[ "javascript", "properties", "prototype" ]
stackoverflow_0074662270_javascript_properties_prototype.txt
Break apart an array in _Post
I have a form where arrays are being passed in $_POST. I understand $_POST variables are in an array themselves, so this is an array within an array? How do I reference them in PHP - the simplest way? Here is the POST data; the arrays are "name" and "address": Array ( [email] => [email protected] [name] => Array ( [first] => Joe [last] => Smith ) [address] => Array ( [addr_line1] => 123 Main street [addr_line2] => Street Address Line 2 [city] => New Port Richey [state] => Florida [postal] => 33699 ) [position_wanted] => Array ( [0] => Foster [1] => Adoption ) [jf_app_id] => 020054 ) Thanks in advance for any insight. I have no idea where to start.
[ "\nso this is an array within an array?\n\nYes, it simply is :-)\n\nHow do I reference them in PHP - the simlest way?\n\nIndeed, display like this:\necho $_POST['name']['first'];\n\ne.g.\necho 'His name is '.$_POST['name']['first']\n .', he lives in '.$_POST['address']['city'];\n\nReference:\n// Will show „Joe”\necho $_POST['name']['first'];\necho \"\\n\";\n\n$name = &$_POST['name']['first'];\n$name = 'Bill';\n\n// Will show „Bill”\necho $_POST['name']['first'];\n\n" ]
[ 0 ]
[]
[]
[ "php", "post" ]
stackoverflow_0074661927_php_post.txt
Getting stuck with this nested/aggregate view in SQL
Create a view that will show the person details, site name, and "quant" measurements with accompanying count, and average readings for each person, site name, and "quant" sorted by the average reading, person, site name, and "quant". CREATE OR REPLACE VIEW site_measurements AS SELECT s.quant, s.reading , s.person_id , site.site_name , AVG(s.reading) ,COUNT(s.person_id) ,COUNT(site.site_name) ,COUNT(s.quant) FROM visited v JOIN survey s ON (v.visited_id=s.taken_id) JOIN site ON (site.site_name=v.site_name) GROUP BY s.person_id, site.site_name, s.quant,s.reading ORDER BY s.reading, s.person_id, site.site_name, s.quant; This is not working since I can not have more than one COUNT. I am stuck on where to go from here.
[ "You can't use the COUNT function multiple times in the same query because it only counts the number of rows in a table or query. Instead, you can use the COUNT function to count the number of rows for each person, site, and quant, and then use the AVG function to calculate the average reading for each group.\nHere is an example query that produces the result you're looking for:\nCREATE OR REPLACE VIEW site_measurements\nAS\nSELECT\ns.quant,\ns.reading,\ns.person_id,\nsite.site_name,\nAVG(s.reading) AS avg_reading,\nCOUNT(s.person_id) AS person_count,\nCOUNT(site.site_name) AS site_count,\nCOUNT(s.quant) AS quant_count\nFROM visited v\nJOIN survey s ON v.visited_id = s.taken_id\nJOIN site ON site.site_name = v.site_name\nGROUP BY s.person_id, site.site_name, s.quant, s.reading\nORDER BY avg_reading, s.person_id, site.site_name, s.quant;\n\nThis query uses the GROUP BY clause to group the rows by person, site, and quant. Then it uses the COUNT and AVG functions to calculate the number of rows in each group, as well as the average reading for each group. The ORDER BY clause sorts the rows by the average reading, person, site, and quant.\n" ]
[ 0 ]
[]
[]
[ "aggregate", "mysql", "view" ]
stackoverflow_0074662587_aggregate_mysql_view.txt
How to write a library for multiple devices with similar versions of an API
I am trying to develop a library of shared code for my company. We are developing on a technology by SICK called AppSpace, which is designed for machine vision. AppSpace is a stand alone eco-system, beneath which there comes a variety of SICK programmable devices (e.g. programmable cameras, LiDAR sensors), and an IDE with which these can be programmed. Programs are written in Lua, using HTML/CSS for the front end. AppSpace provides a Lua API for these devices. In my company, a few of us write applications and it is therefore important that we create a library of shared code to avoid redundancy / rewritten code. However, each firmware version of each device has a corresponding API version. That is to say, that on a given device the API can change between firmware versions, and also that API versions differ across devices. Two devices will have two different sets of API functions available to them. Functions they share in common may also have slightly different implementations. I am at a loss as to how such a situation can be properly managed. I suppose the most "manual" route would be for each device to have its own partial copy of the library, and to manually update each device's library to have the same behavior each time a change is made, ensuring that each device conforms to its API. This seems like bad practice as it is very error prone - the libraries would inevitably become out of sync. Another option might be to have a master library, and to scrape the API documentation for each device. Then build a library manager which parses the Lua code from the library and identifies missing functions for each device. This seems completely impractical and also error prone, probably. What would be the best way to develop and maintain a library of shared code which can be run on multiple devices, if it is even possible?
[ "I would like to answer this and review some of the topics discussed.\nFirst and foremost; functions that are shared in common between devices will be implemented differently by means of the compiled code on the respective device (i.e. PC, 2d camera, 3d camera, LIDAR, etc) while the functionality maintains the same between them all. This way the code can be readily ported from one device to another. That is the principle of the SICK AppEngine that is running on all SICK AppSpace devices as well as 3rd party hardware with AppEngine installed.\nThe APIs embedded into the devices are called a CROWN (Common Reusable Objects Wired by Name) component and can be tested against nil to determine if they are exposed APIs. Here's an example of an CROWN called 'IMAGE'. If it exists than you could run this code when it does.\nif IMAGE then\n --do code\nend\n\nSICK also has a AppPool that you can upload your source code to and it will test all the required CROWNs and return a list of all SICK devices that can run properly.\n" ]
[ 0 ]
[]
[]
[ "lua", "shared_libraries" ]
stackoverflow_0073103522_lua_shared_libraries.txt
Using Arduino Ultrasonic Sensor with Pyfirmata
I'm trying to use pyfirmata to use an Arduino Ultrasonic sensor. I used Arduino Uno board and HC-SR04 Ultrasonic sensor. Here is the code I'm using. The code ran smoothly, it's just that it seems the echo pin failed to get an impulse from the trigger ultrasonic sound, so it keeps on getting False (LOW reading) and thus giving me false distance reading. Does anyone have a solution for this problem? import pyfirmata import time board = pyfirmata.Arduino('COM16') start = 0 end = 0 echo = board.get_pin('d:11:i') trig = board.get_pin('d:12:o') LED = board.get_pin('d:13:o') it = pyfirmata.util.Iterator(board) it.start() trig.write(0) time.sleep(2) while True: time.sleep(0.5) trig.write(1) time.sleep(0.00001) trig.write(0) print(echo.read()) while echo.read() == False: start = time.time() while echo.read() == True: end = time.time() TimeElapsed = end - start distance = (TimeElapsed * 34300) / 2 print("Measured Distance = {} cm".format(distance) ) I've tried changing the time.sleep() to several value and it still doesn't work. It works just fine when I'm using Arduino code dirrectly from Arduino IDE.
[ "I haven't done the exact math but given a range of 50cm you're at about 3ms travel time. That would mean you need to turn off the pulse and poll the pin state within that time.\nThat's not going to happen. The echo probably arrives befor you have turned off the emitter through PyFirmata. You should do the delay measurement on the Arduino.\n", "I solve this false data problem by counting. I observe that false data comes after 2 or 3 sec. So if it takes More than 2 or 3 sec I clear count and restarts it from 0;\nSudo code:\n\ncnt = 0;\n\nif sensorvalue <= 20 && sensorvalue <= 30:\n cnt++;\nif cnt>=5:\n detected = true;\n cnt =0;\n\nif cnt<5 && lastDecttime>2 (2 sec):\n cnt = 0; // Here we handle the false value and clear the data\n\n", "I'm currently trying to work this exact problem out. I can get the sensor to work using the Arduino IDE directly, but not with python and pyfirmata. I am getting some output, but its mostly non-sensical.\nHere's an example output I'm getting, while keeping the sensor at the same distance from my object:\n817.1010613441467\n536.828875541687\n0.0\n546.0820078849792\n0.0\n0.0\n1060.0213408470154\n\nRegarding your code, the only thing I can see that you could do differently is to use the board.pass_time function instead of time.sleep(). Let me know if you get anywhere!\nimport pyfirmata as pyf\nimport time\n\ndef ultra_test():\n\nboard = pyf.Arduino(\"COM10\")\nit = pyf.util.Iterator(board)\nit.start()\ntrigpin = board.get_pin(\"d:7:o\")\nechopin = board.get_pin(\"d:8:i\")\nwhile True:\n trigpin.write(0)\n board.pass_time(0.5)\n trigpin.write(1)\n board.pass_time(0.00001)\n trigpin.write(0)\n limit_start = time.time()\n \n while echopin.read() != 1:\n if time.time() - limit_start > 1:\n break\n pass\n \n start = time.time()\n while echopin.read() != 0:\n pass\n stop = time.time()\n time_elapsed = stop - start\n print((time_elapsed) * 34300 / 2)\n board.pass_time(1)\n\n" ]
[ 0, 0, 0 ]
[]
[]
[ "arduino", "arduino_ultra_sonic", "arduino_uno", "pyfirmata", "python" ]
stackoverflow_0074443453_arduino_arduino_ultra_sonic_arduino_uno_pyfirmata_python.txt
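To put rough numbers behind the first answer's point, here is a small back-of-the-envelope calculation (plain Python, no board attached; the latency remarks are order-of-magnitude assumptions, not measurements). The echo pulse width equals the round-trip time of sound, and at desk-scale distances that is only a few milliseconds - comparable to or shorter than a single serial/Firmata round trip, so polling echo.read() from the PC usually misses the pulse entirely.

SPEED_OF_SOUND_CM_PER_S = 34_300  # same constant the question's formula uses

for distance_cm in (10, 50, 100, 200):
    round_trip_s = 2 * distance_cm / SPEED_OF_SOUND_CM_PER_S
    print(f"{distance_cm:>4} cm -> echo pulse lasts ~{round_trip_s * 1000:.2f} ms")

# 10 cm -> ~0.58 ms, 50 cm -> ~2.92 ms, 100 cm -> ~5.83 ms, 200 cm -> ~11.66 ms

Since each echo.read() reflects a value that had to travel over the serial link (pyfirmata's Iterator thread is what keeps those values updated), microsecond-to-millisecond pulse timing really belongs in a sketch running on the Arduino itself, with only the computed distance sent back to Python.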
R: creating combinations of elements within a group and adding up numbers associated with combinations in a new data frame
I have the following dataset: Letter ID Number A A1 1 A A2 2 A A3 3 B B1 1 B B2 2 B B3 3 B B4 4 My aim is first to create all possible combinations of IDs within the same "Letter" group. For example, for the letter A, it would be only three combinations: A1-A2,A2-A3,and A1-A3. The same IDs ordered differently don't count as a new combination, so for example A1-A2 is the same as A2-A1. Then, within those combinations, I want to add up the numbers from the "Number" column associated with those IDs. So for the combination A1-A2, which are associated with 1 and 2 in the "Number" column, this would result in the number 1+2=3. Finally, I want to place the ID combinations, added numbers and original Letter in a new data frame. Something like this: Letter Combination Add.Number A A1-A2 3 A A2-A3 5 A A1-A3 4 B B1-B2 3 B B2-B3 5 B B3-B4 7 B B1-B3 4 B B2-B4 6 B B1-B4 5 How can I do this in R, ideally using the package dplyr?
[ "library(dplyr)\n \n\nletter <- c(\"A\",\"A\",\"A\",\"B\",\"B\",\"B\",\"B\")\n\ndf <-\n data.frame(letter) %>% \n group_by(letter) %>% \n mutate(\n number = row_number(),\n id = paste0(letter,number)\n ) \n\ndf %>% \n full_join(df,by = \"letter\") %>% \n filter(number.x < number.y) %>% \n mutate(\n combination = paste0(id.x,\"-\",id.y),\n add_number = number.x + number.y) %>% \n select(letter,combination,add_number)\n\n# A tibble: 9 x 3\n# Groups: letter [2]\n letter combination add_number\n <chr> <chr> <int>\n1 A A1-A2 3\n2 A A1-A3 4\n3 A A2-A3 5\n4 B B1-B2 3\n5 B B1-B3 4\n6 B B1-B4 5\n7 B B2-B3 5\n8 B B2-B4 6\n9 B B3-B4 7\n\n", "In base R, using combn:\ndf <- data.frame(\n Letter = c(\"A\",\"A\",\"A\",\"B\",\"B\",\"B\",\"B\"),\n Id = c(\"A1\",\"A2\",\"A3\",\"B1\",\"B2\",\"B3\",\"B4\"),\n Number = c(1,2,3,1,2,3,4))\n\n# combinations\nl<-lapply(split(df$Id, df$Letter) ,function(x) \n setNames(data.frame(t(combn(x,2))), c(\"L1\",\"L2\")))\nn<-lapply(split(df$Number, df$Letter) ,function(x) \n setNames(data.frame(t(combn(x,2))), c(\"N1\",\"N2\")))\n\n# rbind all\nresult <- do.call(rbind, mapply(cbind, Letter=names(l), l, n, SIMPLIFY = F))\nresult$combination <- paste(result$L1, result$L2, sep=\"-\")\nresult$sum = result$N1 + result$N2\nresult\n#> Letter L1 L2 N1 N2 combination sum\n#> A.1 A A1 A2 1 2 A1-A2 3\n#> A.2 A A1 A3 1 3 A1-A3 4\n#> A.3 A A2 A3 2 3 A2-A3 5\n#> B.1 B B1 B2 1 2 B1-B2 3\n#> B.2 B B1 B3 1 3 B1-B3 4\n#> B.3 B B1 B4 1 4 B1-B4 5\n#> B.4 B B2 B3 2 3 B2-B3 5\n#> B.5 B B2 B4 2 4 B2-B4 6\n#> B.6 B B3 B4 3 4 B3-B4 7\n\n" ]
[ 2, 1 ]
[]
[]
[ "combinations", "dataframe", "r" ]
stackoverflow_0074662293_combinations_dataframe_r.txt
FastApi returning response take long time and block everything
I have a problem with my FastAPI API: one big request returns 700k rows. The query takes 50 seconds to process, but returning the response takes another 2 minutes and completely blocks the server, which can't handle any other requests during those 2 minutes. I don't know how to handle this... Here is my code: @app.get("/request") async def request_db(data): dict_of_result = await run_in_threadpool(get_data_from_pgsql, data) # After 50 sec the code above is done, and other incoming requests still work # But this return below blocks the server for 2 min! return dict_of_result I can't add a limit or pagination system; that request is for a specific purpose. Thank you for your help.
[ "You should not make a 700k row database request from FastAPI or any other web server.\nI would update this application logic / query to offload the processing to the database or to an external worker and only make a query for the result.\nAsyncIO prevents the application from blocking while waiting for IO, not processing what must be a huge amount of IO. This is especially worse in Python where you are single process bound by the GIL (Global Interpreter Lock).\n", "This is a bit late. But here's some info for other readers. There are 2 problems here.\n\nRunning the query returning a giant result. Seems like this is not the problem here\nReturning the result.\n\nThe problem is serializing a giant dataframe/dict all at once and in memory. This is what streaming is for and ideally should start at the db level where you can stream out the data as you are processing it.\n\[email protected](\"/request\")\nasync def request_db(data):\n dict_of_result = await run_in_threadpool(get_data_from_pgsql, data)\n # After 50 sec the code above is done with even others requests coming working\n def chunk_emitter():\n # How to split() will depend on the data since this is a dict\n for chunk in split(dict_of_result, CHUNK_SIZE):\n yield chunk\n\n headers = {'Content-Disposition': 'attachment'}\n return StreamingResponse(iterfile(), headers=headers, media_type='application/json')\n\nMore examples here: How to download a large file using FastAPI?.\n" ]
[ 1, 0 ]
[]
[]
[ "fastapi", "python" ]
stackoverflow_0072576972_fastapi_python.txt
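The second answer sketches the right idea but leaves a couple of gaps: split() is undefined, and the generator it defines (chunk_emitter) is what should be handed to StreamingResponse rather than iterfile(). Below is a somewhat fuller sketch; get_rows_from_pgsql is a stub standing in for the question's query helper and CHUNK_SIZE is an assumed value, neither comes from the original post. The more robust fix is still to stream or paginate at the database level so 700k rows are never all serialized at once.

import json

from fastapi import FastAPI
from fastapi.concurrency import run_in_threadpool
from fastapi.responses import StreamingResponse

app = FastAPI()
CHUNK_SIZE = 10_000  # rows serialized per chunk (assumed value)

def get_rows_from_pgsql(data):
    # Stub standing in for the question's get_data_from_pgsql helper
    return [{"query": data, "row": i} for i in range(700_000)]

@app.get("/request")
async def request_db(data: str):
    rows = await run_in_threadpool(get_rows_from_pgsql, data)

    def chunk_emitter():
        # Emit one valid JSON array, a slice at a time, so the whole result
        # is never serialized in a single json.dumps call.
        yield "["
        for start in range(0, len(rows), CHUNK_SIZE):
            chunk = json.dumps(rows[start:start + CHUNK_SIZE])[1:-1]  # strip [ ]
            yield ("," if start else "") + chunk
        yield "]"

    headers = {"Content-Disposition": "attachment"}
    return StreamingResponse(chunk_emitter(), headers=headers,
                             media_type="application/json")

This keeps the event loop free between chunks (Starlette iterates a plain generator in a threadpool), but if the endpoint truly must return 700k rows, a server-side cursor or a background export job remains the sturdier design.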
When using mutate for several columns in dplyr, how can I reference to another data frame by row name?
I need a push to the right direction... or a hint where to look further. I have a large dataframe containing results of analyses for samples - a key column and then several columns for analytical targets. I managed to filter certain conditions, summarise some statistics and store them in a second dataframe. (dft2_stat_norm) Now I want to column-wise divide all target values of the original (unfiltered) tibble (e. g. below "dft_Resp_norm") by the corresponding (row) value stored in the second tibble. I have all targets stored in a vector "analytes". The column and row are linked by the target descriptions, e. g. "G444". can you suggest a solution? can you suggest a source (comprehensive for lateral entrants ;)) to read further on this? Thanks! > dft_Resp_norm # A tibble: 39 × 7 Datum Lfd_Nr Probe cond G444 G448 S453 \<chr\> \<dbl\> \<chr\> \<chr\> \<dbl\> \<dbl\> \<dbl\> 1 09.01.2020 16 NK NK 0.00586 0.0591 0.0594 2 04.02.2020 37 NK NK 0.00661 0.0609 0.0944 3 12.02.2020 57 NK NK 0.00611 0.0674 0.116 4 13.03.2020 67 NK-2 NK 0.00122 0.0678 0.115 5 13.03.2020 68 NK-007 NK 0.0138 0.115 0.359 > dft2_stat_norm # A tibble: 3 × 2 An_Names median_Resp_norm <chr> <dbl> 1 G444 0.00678 2 G448 0.0696 3 S453 0.126 I tried it directly ... dft_MOM_full <- df_Resp_full %>% select(1:7) %>% mutate(across(any_of(analytes); -?- ) but I can link the column name to the corresponding row in the second tibble, and I tried it by grouping ... dft_MOM_full <- df_Resp_full %>% select(1:7) %>% pivot_longer(any_of(analytes), names_to = "target", values_to = "MOM") %>% mutate(MOM = MOM / -?- ) ... but I didn't manage to find out how to link the group name to the second tibble
[ "Using a left_join you could do:\nlibrary(dplyr)\nlibrary(tidyr)\n\nanalytes <- dft2_stat_norm$An_Names\n\ndft_MOM_full <- dft_Resp_norm %>%\n pivot_longer(any_of(analytes),\n names_to = \"target\",\n values_to = \"MOM\") |> \n left_join(dft2_stat_norm, by = c(\"target\" = \"An_Names\")) |> \n mutate(MOM = MOM / median_Resp_norm)\n\ndft_MOM_full\n#> # A tibble: 15 × 7\n#> Datum Lfd_Nr Probe cond target MOM median_Resp_norm\n#> <chr> <int> <chr> <chr> <chr> <dbl> <dbl>\n#> 1 09.01.2020 16 NK NK G444 0.864 0.00678\n#> 2 09.01.2020 16 NK NK G448 0.849 0.0696 \n#> 3 09.01.2020 16 NK NK S453 0.471 0.126 \n#> 4 04.02.2020 37 NK NK G444 0.975 0.00678\n#> 5 04.02.2020 37 NK NK G448 0.875 0.0696 \n#> 6 04.02.2020 37 NK NK S453 0.749 0.126 \n#> 7 12.02.2020 57 NK NK G444 0.901 0.00678\n#> 8 12.02.2020 57 NK NK G448 0.968 0.0696 \n#> 9 12.02.2020 57 NK NK S453 0.921 0.126 \n#> 10 13.03.2020 67 NK-2 NK G444 0.180 0.00678\n#> 11 13.03.2020 67 NK-2 NK G448 0.974 0.0696 \n#> 12 13.03.2020 67 NK-2 NK S453 0.913 0.126 \n#> 13 13.03.2020 68 NK-007 NK G444 2.04 0.00678\n#> 14 13.03.2020 68 NK-007 NK G448 1.65 0.0696 \n#> 15 13.03.2020 68 NK-007 NK S453 2.85 0.126\n\n" ]
[ 0 ]
[]
[]
[ "dplyr", "mutate", "r", "reference" ]
stackoverflow_0074661849_dplyr_mutate_r_reference.txt
How to use Python Fitz detect Hyphen when using search_for?
I'm new to the Fitz library and am working on a project where I need to find a string in a PDF page. I'm running into a case where the text on the page that I'm searching on is hyphenated. I am aware of the TEXT_DEHYPHENATE flag that I can use in the search for function, but that doesn't work for me (as shown in the image here https://postimg.cc/zHZPdd6v ). I'm getting no cases when I search for the hyphenated string. Python Script LOC = "./test.pdf" doc = fitz.open(LOC) page = doc[1] print(page.get_text()) found = page.search_for("lowcost", flags=TEXT_DEHYPHENATE) print("DONE") print(len(found)) found = page.search_for("low-cost", flags=TEXT_DEHYPHENATE) print("DONE") print(len(found)) found = page.search_for("low cost", flags=TEXT_DEHYPHENATE) print("DONE") print(len(found)) for rect in found: print(rect) Output Abstract The objective of “XXXXXXXXXXXXXXXXXX” was design and assemble a low- cost and efficient tool. DONE 0 DONE 0 DONE 0 Can someone please point me to how I might be able to detect the hyphen in my file? Thank you!
[ "Your first approach should work, look here:\n# insert some hyphenated text\npage.insert_textbox((100,100,300,300),\"The objective of 'xxx' was design and assemble a low-\\ncost and efficient tool.\")\n157.94699853658676\n\n# now search for it again\npage.search_for(\"lowcost\") # 2 rectangles!\n[Rect(159.3009796142578, 116.24800109863281, 175.8009796142578, 131.36199951171875),\n Rect(100.0, 132.49501037597656, 120.17399597167969, 147.6090087890625)]\n\n# each containing a text portion with hyphen removed\nfor rect in page.search_for(\"lowcost\"):\n print(page.get_textbox(rect))\n\n \nlow\ncost\n\nWithout the original file there is no way to tell the reason for your failure.\nAre you sure there really is text - and not e.g. an image or other hickups?\nEdited: As per the comment of user @KJ below: PyMuPDF's C base library MuPDF regards all of the unicodes '-', 0xAD, 0x2010, 0x2011 as hyphens in this context. They all should work the same. Just reconfirmed it in an example.\n" ]
[ 0 ]
[]
[]
[ "pymupdf", "python", "python_pdfkit", "python_pdfreader" ]
stackoverflow_0074647583_pymupdf_python_python_pdfkit_python_pdfreader.txt
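For readers who want to reproduce the answer's demonstration from scratch, here is a minimal self-contained sketch; the page content is made up on the spot and is not the asker's PDF. Note the fully qualified fitz.TEXT_DEHYPHENATE - a bare TEXT_DEHYPHENATE only resolves if it was imported explicitly.

import fitz  # PyMuPDF

doc = fitz.open()                      # new, empty PDF
page = doc.new_page()
page.insert_textbox(fitz.Rect(72, 72, 300, 300),
                    "assemble a low-\ncost and efficient tool")

hits = page.search_for("lowcost", flags=fitz.TEXT_DEHYPHENATE)
print(len(hits), "rectangle(s) found")  # the answer's analogous demo reports two
for rect in hits:
    print(page.get_textbox(rect))       # the fragments, with the hyphen removed

If this toy case finds the text but the real PDF does not, the page may carry the words as an image rather than text, as the answer suggests; inspecting the page.get_text() output is the quickest way to check.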
issues when installing hive
when I'm installing hive i get this error Exception in thread "main" java.lang.ClassCastException: class jdk.internal.loader.ClassLoaders$AppClassLoader cannot be cast to class java.net.URLClassLoader (jdk.internal.loader.ClassLoaders$AppClassLoader and java.net.URLClassLoader are in module java.base of loader 'bootstrap') at org.apache.hadoop.hive.ql.session.SessionState.<init>(SessionState.java:413) at org.apache.hadoop.hive.ql.session.SessionState.<init>(SessionState.java:389) at org.apache.hadoop.hive.cli.CliSessionState.<init>(CliSessionState.java:60) at org.apache.hadoop.hive.cli.CliDriver.run(CliDriver.java:705) at org.apache.hadoop.hive.cli.CliDriver.main(CliDriver.java:683) at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.base/java.lang.reflect.Method.invoke(Method.java:566) at org.apache.hadoop.util.RunJar.run(RunJar.java:323) at org.apache.hadoop.util.RunJar.main(RunJar.java:236)
[ "This error is occurring because you're trying to cast a 'jdk.internal.loader.ClassLoaders$AppClassLoader' object to a 'java.net.URLClassLoader' object, but these two classes are not compatible.\nTo fix this error, you will need to update your version of Hive and make sure that it is compatible with the version of Java you're using. Hive is built on top of the Hadoop framework, so you will also need to make sure that your version of Hadoop is compatible with the version of Java you're using.\nIt's also possible that the error is occurring because you're trying to use a different version of Java than the one that Hive was built with. In this case, you can try setting the JAVA_HOME environment variable to point to the correct version of Java, and then restart the Hive installation process.\nIf you continue to have issues, you may want to consult the Hive documentation or seek help from the Hive community.\n" ]
[ 0 ]
[]
[]
[ "hive" ]
stackoverflow_0074662657_hive.txt
Oracle Common Table Expressions (WITH) within BEGIN and END
I want to write a single string of multiple SQL statements, to be sent through ODBC, which has constants stated as 'variables' to improve legibility/maintenance. I believe I need to use the DECLARE block. I do not want to write a procedure or a function, it's an ad-hoc query. The following test CTE works fine in isolation (without a begin/end block): WITH X AS ( SELECT 'HELLO' from DUAL ) SELECT * FROM X; But when I run this declare badgerId CHAR(32); /*todo, use me later, a few times within multiple CTEs*/ begin WITH X AS ( SELECT 'HELLO' from DUAL ) SELECT * FROM X; end; I get the error: Error starting at line 2 in command: declare badgerId CHAR(32); /*todo, use me later*/ begin WITH X AS ( SELECT 'HELLO' from DUAL ) SELECT * FROM X; end; Error report: ORA-06550: line 4, column 1: PLS-00428: an INTO clause is expected in this SELECT statement 06550. 00000 - "line %s, column %s:\n%s" *Cause: Usually a PL/SQL compilation error. *Action: What is this gibberish? In SQL Server I can just type. DECLARE @badgerId CHAR(32); at any time in the SQL statement, which is awesome. What's the Oracle equivalent? Cheers!
[ "You're getting this error because you are selecting the data but not assigning it to the variable declared (badgerId). The below should work,\ndeclare\nbadgerId CHAR(32); /*todo, use me later, a few times within multiple CTEs*/\nbegin\nWITH X AS ( SELECT 'HELLO' from DUAL ) SELECT * into badgerId FROM X;\nend;\n\n", "Lots of confusion here. The problem described eight years ago is that cte definitions apparently can not be done inside of a BEGIN-END block. That does appear to be an actual restriction/limitation. I've searched for a statement somewhere to that effect and haven't found one, but it's what I observe. I've reformatted the code and put it in this script twice, once inside a BE block and once not. If you run just the top 7 lines it works fine; if you try to run the whole thing or just the bottom section it does not. I'm running Oracle PL/SQL version 19 enterprise.\n\n" ]
[ 1, 0 ]
[]
[]
[ "common_table_expression", "oracle" ]
stackoverflow_0026912971_common_table_expression_oracle.txt
Q: How to get url .pdf + text from ... class + onclick ... Can someone give me a tip how to find the way? I need to get link of pdf file + the text("Instructions (DE)") from this tag: <td class="col-download-data" onclick="openPdf('https://www.roco.cc/static/version1662032330/frontend/Casisoft/Roco/en_GB/doc/AN/1/DE/62200-BA_7937.pdf');">Instructions (DE)</td> No, I am getting this output: openPdf('https://www.roco.cc/static/version1662032330/frontend/Casisoft/Roco/en_GB/doc/ET/1/DE/69255_11395.pdf'); Here is my code: import requests from bs4 import BeautifulSoup import pandas as pd import xlsxwriter productlinks = [] for x in range(1, 2): r = requests.get( f'https://www.roco.cc/ren/products/locomotives/steam-locomotives.html?p={x}&verfuegbarkeit_status=41%2C42%2C43%2C45%2C44') soup = BeautifulSoup(r.content, 'lxml') productlist = soup.find_all('li', class_='item product product-item') for item in productlist: for link in item.find_all('a', class_='product-item-link', href=True): productlinks.append(link['href']) for url in productlinks: r = requests.get(url, allow_redirects=False) content = BeautifulSoup(r.text, 'lxml') for tag in content.find_all('a'): on_click = tag.get('onclick') if on_click: print(on_click) A: for url in productlinks: r = requests.get(url, allow_redirects=False) content = BeautifulSoup(r.text, 'lxml') for tag in content.find_all('a'): on_click = tag.get('onclick') if on_click: pdf = re.findall(r"'([^']*)'", on_click) print(pdf)
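As a hedged, offline sketch of what the answer above does: parse the <td> markup quoted in the question, pull the URL out of the onclick value with a regular expression, and keep the visible text as well. The HTML string and PDF URL below are shortened placeholders, and note that import re (missing from the answer snippet) is needed.

import re
from bs4 import BeautifulSoup

html = ('<td class="col-download-data" '
        'onclick="openPdf(\'https://example.com/docs/62200-BA.pdf\');">'
        'Instructions (DE)</td>')

soup = BeautifulSoup(html, "html.parser")
for td in soup.find_all("td", class_="col-download-data"):
    on_click = td.get("onclick", "")
    match = re.search(r"openPdf\('([^']+)'\)", on_click)
    if match:
        # prints the PDF link together with the label text
        print(match.group(1), td.get_text(strip=True))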
How to get url .pdf + text from ... class + onclick ...
Can someone give me a tip how to find the way? I need to get link of pdf file + the text("Instructions (DE)") from this tag: <td class="col-download-data" onclick="openPdf('https://www.roco.cc/static/version1662032330/frontend/Casisoft/Roco/en_GB/doc/AN/1/DE/62200-BA_7937.pdf');">Instructions (DE)</td> No, I am getting this output: openPdf('https://www.roco.cc/static/version1662032330/frontend/Casisoft/Roco/en_GB/doc/ET/1/DE/69255_11395.pdf'); Here is my code: import requests from bs4 import BeautifulSoup import pandas as pd import xlsxwriter productlinks = [] for x in range(1, 2): r = requests.get( f'https://www.roco.cc/ren/products/locomotives/steam-locomotives.html?p={x}&verfuegbarkeit_status=41%2C42%2C43%2C45%2C44') soup = BeautifulSoup(r.content, 'lxml') productlist = soup.find_all('li', class_='item product product-item') for item in productlist: for link in item.find_all('a', class_='product-item-link', href=True): productlinks.append(link['href']) for url in productlinks: r = requests.get(url, allow_redirects=False) content = BeautifulSoup(r.text, 'lxml') for tag in content.find_all('a'): on_click = tag.get('onclick') if on_click: print(on_click)
[ "for url in productlinks:\n r = requests.get(url, allow_redirects=False)\n content = BeautifulSoup(r.text, 'lxml')\n for tag in content.find_all('a'):\n on_click = tag.get('onclick')\n if on_click:\n pdf = re.findall(r\"'([^']*)'\", on_click)\n print(pdf)\n\n" ]
[ 0 ]
[]
[]
[ "onclick", "output", "pdf", "python", "web_scraping" ]
stackoverflow_0074661995_onclick_output_pdf_python_web_scraping.txt
Q: how to check if an arr is "wavy"? my code should check if an ARR is "'wavy" which means the first element is less then the second, the second is greater then the third and same as this till the end of the arr.. but it doesn't work.. here is the code: IDEAL MODEL small STACK 100h DATASEG ARR db 3 dup (?) REZ db 1 CODESEG start: mov ax,@data mov ds,ax mov cx,3 ;cx=99 xor ax,ax ; ax=0 xor si,si ;si=0 mov [ARR],0 mov [ARR+1], 1 mov [ARR+2], 0 lea bx,[ARR] ; bx=offset arr L1: cmp cx,1 je finish mov di,cx ;di=cx (as index) neg di ;di=-di lea si,[bx+di] mov ax,[3+si] cmp ax,[4+si] ; compre the odd vs even index illegal use of register jg wrong ; exit if the odd index > even index dec cx ; cx=cx-1 cmp cx,0 ; check if cx=0 je finish ; if cx=0 finish mov di,cx ;di=cx (as index) neg di ;di=-di lea si,[bx+di] mov ax,[3+si] cmp ax,[4+si] ; compre the even vs odd index illegal use of register jl wrong ; exit if the even <odd index loop L1 ; cx=cx-1 if cx!=0 -> jump to L1 wrong: mov [REZ],0 finish: exit: mov ax,4c00h int 21h END start there is even an example but it doesn't work.. do u know where is the mistake? in the end we should end with res=1 if the arr is "wavy" or if it is not wavy arr=0 A: You have a mismatch between the size of the array elements (byte), and the size of the operations that you perform on these elements (word). The definition ARR db 3 dup (?) does not match the code mov ax,[3+si] cmp ax,[4+si]. You need to write mov AL, [3+si] cmp AL, [4+si] instead. When the loop ends, you should not fall-through into wrong, but rather jmp to exit. A single address register is enough: sub cx, 1 jbe exit ; In case of 0 or 1 array elements lea si, [ARR] cld ; Clear direction flag so LODSB will increment SI L1: lodsb cmp al, [si] jg wrong dec cx jz exit lodsb cmp al, [si] jl wrong loop L1 jmp exit wrong: ... exit: ... It is fascinating to explore alternative solutions Using separate loops for the rising and falling edges of the sawtooth. Not only are the new loops very short, they can iterate over just one conditional branch thanks to sticking a sentinel to the end of the array. The sentinel is chosen such that the loop has got to end. The position where the loop exit occurs then decides about success or failure: mov si, offset ARR ; Address of the array mov cx, ... ; Number of array elements call TestWavy ; -> AL=[0,1] mov [REZ], al ... ; IN (cx,si) OUT (al) MOD (ah,bx,cx,dx,si,di) TestWavy: cld ; Clear DF so LODSW will SI++ mov bx, cx shr bx, 1 jz .fine ; In case of 0 or 1 array elements lea di, [si+bx] add di, bx mov dx, [di] ; Preserve existing bytes mov word ptr [di], 0102h ; Sentinel for the rising edge push si ; (1) .rise: lodsw cmp al, ah jng .rise mov [di], dx ; Restore the bytes cmp si, di pop si ; (1) jbe .wrong inc si ; Address of the first falling edge dec cx shr cx, 1 jz .fine ; In case of 2 array elements mov di, si add di, cx add di, cx mov dx, [di] ; Preserve existing bytes mov word ptr [di], 0201h ; Sentinel for the falling edge .fall: lodsw cmp al, ah jnl .fall mov [di], dx ; Restore the bytes cmp si, di jbe .wrong .fine: mov al, 1 ret .wrong: mov al, 0 ret Using the byte-sized registers to the fullest. This minimizes the number of memory accesses as well as the number of iterations on the loop. Every iteration is guaranteed to be able to perform 8 comparisons, thanks to replicating the last element 7 times. The idea is, that this way, no early exits have to be included. And don't forget that you need to keep 7 unoccupied bytes after the array. 
mov si, offset ARR ; Address of the array mov cx, ... ; Number of array elements call TestWavy ; -> AL=[0,1] mov [REZ], al ... ; IN (cx,si) OUT (al) MOD (ah,bx,cx,dx,si,di,bp) TestWavy: sub cx, 1 jbe .fine ; In case of 0 or 1 array elements cld ; Clear DF so LODSW will SI++ mov bp, cx ; and STOSB will DI++ lea di, [si+bp] ; Address of the last element mov al, [di] mov cx, 8 rep stosb .more: lodsw ; Load 8 elements xchg dx, ax ; `mov dx, ax` lodsw xchg cx, ax ; `mov cx, ax` lodsw xchg bx, ax ; `mov bx, ax` lodsw cmp dl, dh ; Check 4 rising edges jg .wrong cmp cl, ch jg .wrong cmp bl, bh jg .wrong cmp al, ah jg .wrong cmp dh, cl ; Check 4 falling edges jl .wrong cmp ch, bl jl .wrong cmp bh, al jl .wrong cmp ah, [si] jl .wrong sub bp, 8 ja .more .fine: mov al, 1 ret .wrong: mov al, 0 ret
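A small Python reference model of the same "wavy" rule can help generate test arrays for the assembly above. This is only a sketch of the expected behaviour, mirroring the jg/jl checks (a rising step fails when a[i] > a[i+1], a falling step fails when a[i] < a[i+1]); it is not a translation of any particular variant.

def is_wavy(arr):
    for i in range(len(arr) - 1):
        if i % 2 == 0:                      # expected rising edge
            if arr[i] > arr[i + 1]:
                return 0
        else:                               # expected falling edge
            if arr[i] < arr[i + 1]:
                return 0
    return 1

print(is_wavy([0, 1, 0]))  # 1, the example initialised in the question
print(is_wavy([3, 1, 2]))  # 0, first step is not rising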
how to check if an arr is "wavy"?
my code should check if an ARR is "'wavy" which means the first element is less then the second, the second is greater then the third and same as this till the end of the arr.. but it doesn't work.. here is the code: IDEAL MODEL small STACK 100h DATASEG ARR db 3 dup (?) REZ db 1 CODESEG start: mov ax,@data mov ds,ax mov cx,3 ;cx=99 xor ax,ax ; ax=0 xor si,si ;si=0 mov [ARR],0 mov [ARR+1], 1 mov [ARR+2], 0 lea bx,[ARR] ; bx=offset arr L1: cmp cx,1 je finish mov di,cx ;di=cx (as index) neg di ;di=-di lea si,[bx+di] mov ax,[3+si] cmp ax,[4+si] ; compre the odd vs even index illegal use of register jg wrong ; exit if the odd index > even index dec cx ; cx=cx-1 cmp cx,0 ; check if cx=0 je finish ; if cx=0 finish mov di,cx ;di=cx (as index) neg di ;di=-di lea si,[bx+di] mov ax,[3+si] cmp ax,[4+si] ; compre the even vs odd index illegal use of register jl wrong ; exit if the even <odd index loop L1 ; cx=cx-1 if cx!=0 -> jump to L1 wrong: mov [REZ],0 finish: exit: mov ax,4c00h int 21h END start there is even an example but it doesn't work.. do u know where is the mistake? in the end we should end with res=1 if the arr is "wavy" or if it is not wavy arr=0
[ "You have a mismatch between the size of the array elements (byte), and the size of the operations that you perform on these elements (word).\nThe definition ARR db 3 dup (?) does not match the code mov ax,[3+si] cmp ax,[4+si]. You need to write mov AL, [3+si] cmp AL, [4+si] instead.\nWhen the loop ends, you should not fall-through into wrong, but rather jmp to exit.\nA single address register is enough:\n sub cx, 1\n jbe exit ; In case of 0 or 1 array elements\n lea si, [ARR]\n cld ; Clear direction flag so LODSB will increment SI\nL1:\n lodsb\n cmp al, [si]\n jg wrong\n dec cx\n jz exit\n lodsb\n cmp al, [si]\n jl wrong\n loop L1\n jmp exit\nwrong:\n ...\nexit:\n ...\n\n\nIt is fascinating to explore alternative solutions\n\nUsing separate loops for the rising and falling edges of the sawtooth. Not only are the new loops very short, they can iterate over just one conditional branch thanks to sticking a sentinel to the end of the array. The sentinel is chosen such that the loop has got to end. The position where the loop exit occurs then decides about success or failure:\n mov si, offset ARR ; Address of the array\n mov cx, ... ; Number of array elements\n call TestWavy ; -> AL=[0,1]\n mov [REZ], al\n\n ...\n\n; IN (cx,si) OUT (al) MOD (ah,bx,cx,dx,si,di)\nTestWavy:\n cld ; Clear DF so LODSW will SI++\n mov bx, cx\n shr bx, 1\n jz .fine ; In case of 0 or 1 array elements\n lea di, [si+bx]\n add di, bx\n mov dx, [di] ; Preserve existing bytes\n mov word ptr [di], 0102h ; Sentinel for the rising edge\n push si ; (1)\n.rise:\n lodsw\n cmp al, ah\n jng .rise\n mov [di], dx ; Restore the bytes\n cmp si, di\n pop si ; (1)\n jbe .wrong\n\n inc si ; Address of the first falling edge\n dec cx\n shr cx, 1\n jz .fine ; In case of 2 array elements\n mov di, si\n add di, cx\n add di, cx\n mov dx, [di] ; Preserve existing bytes\n mov word ptr [di], 0201h ; Sentinel for the falling edge\n.fall:\n lodsw\n cmp al, ah\n jnl .fall\n mov [di], dx ; Restore the bytes\n cmp si, di\n jbe .wrong\n.fine:\n mov al, 1\n ret\n.wrong:\n mov al, 0\n ret\n\n\nUsing the byte-sized registers to the fullest. This minimizes the number of memory accesses as well as the number of iterations on the loop. Every iteration is guaranteed to be able to perform 8 comparisons, thanks to replicating the last element 7 times. The idea is, that this way, no early exits have to be included. And don't forget that you need to keep 7 unoccupied bytes after the array.\n mov si, offset ARR ; Address of the array\n mov cx, ... ; Number of array elements\n call TestWavy ; -> AL=[0,1]\n mov [REZ], al\n\n ...\n\n; IN (cx,si) OUT (al) MOD (ah,bx,cx,dx,si,di,bp)\nTestWavy:\n sub cx, 1\n jbe .fine ; In case of 0 or 1 array elements\n cld ; Clear DF so LODSW will SI++\n mov bp, cx ; and STOSB will DI++\n lea di, [si+bp] ; Address of the last element\n mov al, [di]\n mov cx, 8\n rep stosb\n.more:\n lodsw ; Load 8 elements\n xchg dx, ax ; `mov dx, ax`\n lodsw\n xchg cx, ax ; `mov cx, ax`\n lodsw\n xchg bx, ax ; `mov bx, ax`\n lodsw\n cmp dl, dh ; Check 4 rising edges\n jg .wrong\n cmp cl, ch\n jg .wrong\n cmp bl, bh\n jg .wrong\n cmp al, ah\n jg .wrong\n cmp dh, cl ; Check 4 falling edges\n jl .wrong\n cmp ch, bl\n jl .wrong\n cmp bh, al\n jl .wrong\n cmp ah, [si]\n jl .wrong\n sub bp, 8\n ja .more\n.fine:\n mov al, 1\n ret\n.wrong:\n mov al, 0\n ret\n\n\n\n" ]
[ 2 ]
[]
[]
[ "arrays", "assembly", "tasm", "x86_16" ]
stackoverflow_0074660887_arrays_assembly_tasm_x86_16.txt
Q: GTM not propagating nonce to Custom HTML tags In order to implement Content-Security-Policy, I need to pass nonce to GTM to allow tags. Using nonce-aware version of GTM snippet works great for all tag types except Custom HTML. Is there a way to pass nonce to Custom HTML and allow custom scripts, without using unsafe-inline? A: In order to add the nonce attribute to the Custom HTML scripts, it must be first defined as a GTM variable: Add id="gtmScript" to the nonce-aware version of GTM snippet - this will be used to target the element and capture nonce. <script id="gtmScript" nonce="{GENERATED_NONCE}"> // GTM function </script> In GTM, create a new variable that will capture the nonce. Use DOM Element type, and select the ID of the GTM snippet. Now that the nonce variable is available in the GTM, add it to the Custom HTML script. <script nonce="{{nonce}}"> console.log("CSP-allowed script with nonce:", "{{nonce}}"); </script> If the tag is not firing, check the Support document.write. This can be a key step in Single Page Applications. The GTM Custom HTML script is now nonce-allowed and fires as expected. Of course, any assets used by this script will now need to be allowed in the CSP header. Script within a script Many tracking scripts are creating and firing additional script within themselves. These will also be blocked as inline-scripts. Find out where and how they are created, and add nonce to them as well. Usually, the code looks similar to this: var script = document.createElement("script"); script.type = "text/javascript"; script.async = true; script.src = "https://tracking.js"; var s = document.getElementsByTagName("script")[0]; s.parentNode.insertBefore(script, s); Edit this part of the code and insert the nonce variable, in the same manner along with other attributes. script.nonce = "{{nonce}}"; Again, pay attention and whitelist any necessary assets that are now being blocked from this newly allowed script. That's it - Custom HTML script is now fully CSP-allowed. Source and disclaimer: I'm the author of expanded dev.to guide A: I have found the same problem as others with the original solution proposed by Matija Mrkaic. His article was very useful but I was finding the nonce data attribute to be returning a blank value. To fix this I added a data-nonce attribute to the GTM script with a variable in Tag Manager to extract the nonce value (similar to Matija Mrkaic. This new variable was called data-nonce). I then added a custom HTML tag in GTM that removes the data-nonce attribute once it is loaded. GTM code: <script id="gtmScript" nonce='{{csp_nonce}}' data-nonce='{{csp_nonce}}'>(function(w,d,s,l,i){w[l]=w[l]||[];w[l].push({'gtm.start': new Date().getTime(),event:'gtm.js'});var f=d.getElementsByTagName(s)[0], j=d.createElement(s),dl=l!='dataLayer'?'&l='+l:'';j.async=true;j.src= 'https://www.googletagmanager.com/gtm.js?id='+i+dl;var n=d.querySelector('[nonce]'); n&&j.setAttribute('nonce',n.nonce||n.getAttribute('nonce'));f.parentNode.insertBefore(j,f); })(window,document,'script','dataLayer','{{GOOGLE_TAG_MANAGER_ID}}');</script> GTM Custom HTML tag to remove the nonce value once it was loaded: <script nonce="{{data-nonce}}"> console.log("Inline script to remove data-nonce."); document.getElementById("gtmScript").removeAttribute("data-nonce"); </script> This solution is far from perfect but I have so far been unable to find a method to pass 'secrets' to GTM. 
Exposing the nonce value in the DOM is not recommended, the theory behind this 'temporary' solution is to only expose it for a short period of time until loaded into the Google Tag Manager variable. Comments/suggestions welcome, many thanks. A: For anyone having the issue with Chrome hiding the nonce attribute (noted by Keyhan and Dan in https://stackoverflow.com/a/65100705/3370010), I found that Google Tag Manager has a setting to get a variable from a global "JavaScript Variable". You just need to set that global variable first. If you add Google Tag Manager dynamically, it can be set before your Google Tag Manager script. window.nonceForCustomScripts = nonce; If you are just inserting the code into the Google Tag Manger script, it would look something like (function(w,d,s,l,i){w[l]=w[l]||[];w[l].push({'gtm.start': new Date().getTime(),event:'gtm.js'});var f=d.getElementsByTagName(s)[0], j=d.createElement(s),dl=l!='dataLayer'?'&l='+l:'';j.async=true;j.src= 'https://www.googletagmanager.com/gtm.js?id='+i+dl;var n=d.querySelector('[nonce]'); n&&j.setAttribute('nonce',n.nonce||n.getAttribute('nonce')); // Added code w.nonceForCustomScripts = n.nonce||n.getAttribute('nonce'); // End added code f.parentNode.insertBefore(j,f); })(window,document,'script','dataLayer','your-gtm-id'); This is more secure than adding a data-nonce attribute because it prevents CSS-based attacks such as the one listed in https://developer.mozilla.org/en-US/docs/Web/API/HTMLElement/nonce
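For completeness, a framework-agnostic Python sketch of the server side of this pattern: generate a fresh nonce per request, emit it in the CSP header, and substitute it into the inline GTM snippet before it reaches the browser. render_page(), the header plumbing and the snippet body are placeholders/assumptions, not part of the answers above.

import secrets
from string import Template

GTM_SNIPPET = Template(
    '<script id="gtmScript" nonce="$nonce">'
    "/* nonce-aware GTM bootstrap goes here */"
    "</script>"
)

def render_page():
    nonce = secrets.token_urlsafe(16)
    headers = {"Content-Security-Policy": f"script-src 'self' 'nonce-{nonce}'"}
    body = GTM_SNIPPET.substitute(nonce=nonce)
    return headers, body

headers, body = render_page()
print(headers["Content-Security-Policy"])
print(body)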
GTM not propagating nonce to Custom HTML tags
In order to implement Content-Security-Policy, I need to pass nonce to GTM to allow tags. Using nonce-aware version of GTM snippet works great for all tag types except Custom HTML. Is there a way to pass nonce to Custom HTML and allow custom scripts, without using unsafe-inline?
[ "In order to add the nonce attribute to the Custom HTML scripts, it must be first defined as a GTM variable:\n\nAdd id=\"gtmScript\" to the nonce-aware version of GTM snippet - this will be used to target the element and capture nonce.\n\n<script id=\"gtmScript\" nonce=\"{GENERATED_NONCE}\">\n // GTM function\n</script>\n\n\nIn GTM, create a new variable that will capture the nonce.\nUse DOM Element type, and select the ID of the GTM snippet.\n\n\n\nNow that the nonce variable is available in the GTM, add it to the Custom HTML script.\n<script nonce=\"{{nonce}}\">\n console.log(\"CSP-allowed script with nonce:\", \"{{nonce}}\");\n</script>\n\nIf the tag is not firing, check the Support document.write. This can be a key step in Single Page Applications.\nThe GTM Custom HTML script is now nonce-allowed and fires as expected.\nOf course, any assets used by this script will now need to be allowed in the CSP header.\n\n\nScript within a script\nMany tracking scripts are creating and firing additional script within themselves.\nThese will also be blocked as inline-scripts.\nFind out where and how they are created, and add nonce to them as well.\nUsually, the code looks similar to this:\nvar script = document.createElement(\"script\");\nscript.type = \"text/javascript\";\nscript.async = true;\nscript.src = \"https://tracking.js\";\nvar s = document.getElementsByTagName(\"script\")[0];\ns.parentNode.insertBefore(script, s);\n\nEdit this part of the code and insert the nonce variable, in the same manner along with other attributes.\nscript.nonce = \"{{nonce}}\";\n\nAgain, pay attention and whitelist any necessary assets that are now being blocked from this newly allowed script.\nThat's it - Custom HTML script is now fully CSP-allowed.\n\nSource and disclaimer: I'm the author of expanded dev.to guide\n", "I have found the same problem as others with the original solution proposed by Matija Mrkaic. His article was very useful but I was finding the nonce data attribute to be returning a blank value.\nTo fix this I added a data-nonce attribute to the GTM script with a variable in Tag Manager to extract the nonce value (similar to Matija Mrkaic. This new variable was called data-nonce). I then added a custom HTML tag in GTM that removes the data-nonce attribute once it is loaded.\nGTM code:\n<script id=\"gtmScript\" nonce='{{csp_nonce}}' data-nonce='{{csp_nonce}}'>(function(w,d,s,l,i){w[l]=w[l]||[];w[l].push({'gtm.start':\nnew Date().getTime(),event:'gtm.js'});var f=d.getElementsByTagName(s)[0],\nj=d.createElement(s),dl=l!='dataLayer'?'&l='+l:'';j.async=true;j.src=\n'https://www.googletagmanager.com/gtm.js?id='+i+dl;var n=d.querySelector('[nonce]');\nn&&j.setAttribute('nonce',n.nonce||n.getAttribute('nonce'));f.parentNode.insertBefore(j,f);\n})(window,document,'script','dataLayer','{{GOOGLE_TAG_MANAGER_ID}}');</script>\n\nGTM Custom HTML tag to remove the nonce value once it was loaded:\n<script nonce=\"{{data-nonce}}\">\n console.log(\"Inline script to remove data-nonce.\");\n document.getElementById(\"gtmScript\").removeAttribute(\"data-nonce\");\n</script>\n\nThis solution is far from perfect but I have so far been unable to find a method to pass 'secrets' to GTM. 
Exposing the nonce value in the DOM is not recommended, the theory behind this 'temporary' solution is to only expose it for a short period of time until loaded into the Google Tag Manager variable.\nComments/suggestions welcome, many thanks.\n", "For anyone having the issue with Chrome hiding the nonce attribute (noted by Keyhan and Dan in https://stackoverflow.com/a/65100705/3370010), I found that Google Tag Manager has a setting to get a variable from a global \"JavaScript Variable\".\n\nYou just need to set that global variable first. If you add Google Tag Manager dynamically, it can be set before your Google Tag Manager script.\nwindow.nonceForCustomScripts = nonce;\n\nIf you are just inserting the code into the Google Tag Manger script, it would look something like\n(function(w,d,s,l,i){w[l]=w[l]||[];w[l].push({'gtm.start':\n new Date().getTime(),event:'gtm.js'});var f=d.getElementsByTagName(s)[0],\n j=d.createElement(s),dl=l!='dataLayer'?'&l='+l:'';j.async=true;j.src=\n 'https://www.googletagmanager.com/gtm.js?id='+i+dl;var n=d.querySelector('[nonce]');\n n&&j.setAttribute('nonce',n.nonce||n.getAttribute('nonce'));\n// Added code\nw.nonceForCustomScripts = n.nonce||n.getAttribute('nonce');\n// End added code\n f.parentNode.insertBefore(j,f);\n })(window,document,'script','dataLayer','your-gtm-id');\n\nThis is more secure than adding a data-nonce attribute because it prevents CSS-based attacks such as the one listed in https://developer.mozilla.org/en-US/docs/Web/API/HTMLElement/nonce\n" ]
[ 11, 0, 0 ]
[]
[]
[ "content_security_policy", "google_tag_manager" ]
stackoverflow_0065100704_content_security_policy_google_tag_manager.txt
Q: Grouping Python dictionaries in hierarchical form with multiple keys? Here is my list of dicts: [{'subtopic': 'IAM', 'topic': 'AWS', 'attachments': ['{"workflow.name": "aws_iam_policies_info","workflow.parameters": {"region": "us-east"}}'], 'text': 'Sure! I can help with AWS IAM policies info'}, {'subtopic': 'ECS', 'topic': 'AWS', 'attachments': ['{"workflow.name": "aws_ecs_restart_service","workflow.parameters": {"region": "us-east"}}'], 'text': 'Sure! I can help with restarting AWS ECS Service'}, {'subtopic': 'EC2', 'topic': 'AWS', 'attachments': ['{"workflow.name": "aws_ec2_create_instance","workflow.parameters": {"region": "us-east"}}'], 'text': 'Sure, I can help creating an EC2 machine'}, {'subtopic': 'EC2', 'topic': 'AWS', 'attachments': ['{"workflow.name": "aws_ec2_security_group_info","workflow.parameters": {"region": "us-east"}}'], 'text': 'Sure, I can help with various information about AWS security groups'}, {'subtopic': 'S3', 'topic': 'AWS', 'attachments': ['{"workflow.name": "aws_s3_file_copy","workflow.parameters": {"region": "us-west"}}'], 'text': 'Sure, I can help you with the process of copying on S3'}, {'subtopic': 'GitHub', 'topic': 'AWS', 'attachments': ['{"workflow.name": "view_pull_request","workflow.parameters": {"region": "us-west"}}'], 'text': 'Sure, I can help with GitHub pull requests'}, {'subtopic': 'Subtopic Title', 'topic': 'Topic Title', 'attachments': [], 'text': 'This is another fact'}, {'subtopic': 'Subtopic Title', 'topic': 'Topic Title', 'attachments': [], 'text': 'This is a fact'}] I would like to group by topic and subtopic to get a final result: { "AWS": { "GitHub": { 'attachments': ['{"workflow.name": "view_pull_request","workflow.parameters": {"region": "us-west"}}'], 'text': ['Sure, I can help with GitHub pull requests'] }, "S3": { 'attachments': ['{"workflow.name": "aws_s3_file_copy","workflow.parameters": {"region": "us-west"}}'], 'text': ['Sure, I can help you with the process of copying on S3'] }, "EC2": { 'attachments': ['{"workflow.name": "aws_ec2_create_instance","workflow.parameters": {"region": "us-east"}}', '{"workflow.name": "aws_ec2_security_group_info","workflow.parameters": {"region": "us-east"}}'], 'text': ['Sure, I can help creating an EC2 machine', 'Sure, I can help with various information about AWS security groups'] }, "ECS": { 'attachments': ['{"workflow.name": "aws_ecs_restart_service","workflow.parameters": {"region": "us-east"}}'], 'text': ['Sure! I can help with restarting AWS ECS Service'] }, "IAM": { 'attachments': ['{"workflow.name": "aws_iam_policies_info","workflow.parameters": {"region": "us-east"}}'], 'text': ['Sure! I can help with AWS IAM policies info'] } }, "Topic Title": { "Subtopic Title": { 'attachments': [], 'text': ['This is another fact'] } } } I am using: groups = ['topic', 'subtopic', "text", "attachments"] groups.reverse() def hierachical_data(data, groups): g = groups[-1] g_list = [] for key, items in itertools.groupby(data, operator.itemgetter(g)): g_list.append({key:list(items)}) groups = groups[0:-1] if(len(groups) != 0): for e in g_list: for k, v in e.items(): e[k] = hierachical_data(v, groups) return g_list print(hierachical_data(filtered_top_facts_dicts, groups)) But getting an error for hashing lists. Please advise how to transform my json to the desired format. A: To group the list of dictionaries by topic and subtopic, you can create an empty dictionary and then loop through the list of dictionaries to add each item to the appropriate nested level in the dictionary. 
result = {} for item in data: topic = item['topic'] subtopic = item['subtopic'] if topic not in result: result[topic] = {} if subtopic not in result[topic]: result[topic][subtopic] = {} result[topic][subtopic]['attachments'] = [] result[topic][subtopic]['text'] = [] result[topic][subtopic]['attachments'].extend(item['attachments']) result[topic][subtopic]['text'].append(item['text']) # Reverse the order of the sub-dictionaries within each topic for topic, subtopics in result.items(): result[topic] = dict(reversed(list(subtopics.items()))) After this loop has completed, the result dictionary will be in the format you described, with topic and subtopic as the keys and the attachments and text as the values within each sub-dictionary. Output: {'AWS': {'GitHub': {'attachments': ['{"workflow.name": "view_pull_request","workflow.parameters": {"region": "us-west"}}'], 'text': ['Sure, I can help with GitHub pull requests']}, 'S3': {'attachments': ['{"workflow.name": "aws_s3_file_copy","workflow.parameters": {"region": "us-west"}}'], 'text': ['Sure, I can help you with the process of copying on S3']}, 'EC2': {'attachments': ['{"workflow.name": "aws_ec2_create_instance","workflow.parameters": {"region": "us-east"}}', '{"workflow.name": "aws_ec2_security_group_info","workflow.parameters": {"region": "us-east"}}'], 'text': ['Sure, I can help creating an EC2 machine', 'Sure, I can help with various information about AWS security groups']}, 'ECS': {'attachments': ['{"workflow.name": "aws_ecs_restart_service","workflow.parameters": {"region": "us-east"}}'], 'text': ['Sure! I can help with restarting AWS ECS Service']}, 'IAM': {'attachments': ['{"workflow.name": "aws_iam_policies_info","workflow.parameters": {"region": "us-east"}}'], 'text': ['Sure! I can help with AWS IAM policies info']}}, 'Topic Title': {'Subtopic Title': {'attachments': [], 'text': ['This is another fact', 'This is a fact']}}} A: I think the cleanest solution is to use dictlib with reduce in one line: from functools import reduce import dictlib reduce( lambda x, y: dictlib.union_setadd(x, y), [ { x["topic"]: { x["subtopic"]: { list(x.keys())[2]: list(x.values())[2], list(x.keys())[3]: [list(x.values())[3]], } } } for x in d ], ) where d is your initial list and dictlib.union_setadd() merges dictionaries by doing setadd logic like with str and int. Note that when put in reduce, merge is sequential and cumulative for all your list entries. Hope this helps.
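As an additional sketch (not from the thread): the same grouping can be written with collections.defaultdict, which removes the explicit "key not in dict" checks of the first answer. The two-row sample below stands in for the question's full list.

from collections import defaultdict

data = [
    {"topic": "AWS", "subtopic": "EC2", "attachments": ["a1"], "text": "t1"},
    {"topic": "AWS", "subtopic": "EC2", "attachments": ["a2"], "text": "t2"},
]

grouped = defaultdict(lambda: defaultdict(lambda: {"attachments": [], "text": []}))
for item in data:
    bucket = grouped[item["topic"]][item["subtopic"]]
    bucket["attachments"].extend(item["attachments"])
    bucket["text"].append(item["text"])

# convert back to plain nested dicts if needed
result = {topic: dict(subs) for topic, subs in grouped.items()}
print(result)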
Grouping Python dictionaries in hierarchical form with multiple keys?
Here is my list of dicts: [{'subtopic': 'IAM', 'topic': 'AWS', 'attachments': ['{"workflow.name": "aws_iam_policies_info","workflow.parameters": {"region": "us-east"}}'], 'text': 'Sure! I can help with AWS IAM policies info'}, {'subtopic': 'ECS', 'topic': 'AWS', 'attachments': ['{"workflow.name": "aws_ecs_restart_service","workflow.parameters": {"region": "us-east"}}'], 'text': 'Sure! I can help with restarting AWS ECS Service'}, {'subtopic': 'EC2', 'topic': 'AWS', 'attachments': ['{"workflow.name": "aws_ec2_create_instance","workflow.parameters": {"region": "us-east"}}'], 'text': 'Sure, I can help creating an EC2 machine'}, {'subtopic': 'EC2', 'topic': 'AWS', 'attachments': ['{"workflow.name": "aws_ec2_security_group_info","workflow.parameters": {"region": "us-east"}}'], 'text': 'Sure, I can help with various information about AWS security groups'}, {'subtopic': 'S3', 'topic': 'AWS', 'attachments': ['{"workflow.name": "aws_s3_file_copy","workflow.parameters": {"region": "us-west"}}'], 'text': 'Sure, I can help you with the process of copying on S3'}, {'subtopic': 'GitHub', 'topic': 'AWS', 'attachments': ['{"workflow.name": "view_pull_request","workflow.parameters": {"region": "us-west"}}'], 'text': 'Sure, I can help with GitHub pull requests'}, {'subtopic': 'Subtopic Title', 'topic': 'Topic Title', 'attachments': [], 'text': 'This is another fact'}, {'subtopic': 'Subtopic Title', 'topic': 'Topic Title', 'attachments': [], 'text': 'This is a fact'}] I would like to group by topic and subtopic to get a final result: { "AWS": { "GitHub": { 'attachments': ['{"workflow.name": "view_pull_request","workflow.parameters": {"region": "us-west"}}'], 'text': ['Sure, I can help with GitHub pull requests'] }, "S3": { 'attachments': ['{"workflow.name": "aws_s3_file_copy","workflow.parameters": {"region": "us-west"}}'], 'text': ['Sure, I can help you with the process of copying on S3'] }, "EC2": { 'attachments': ['{"workflow.name": "aws_ec2_create_instance","workflow.parameters": {"region": "us-east"}}', '{"workflow.name": "aws_ec2_security_group_info","workflow.parameters": {"region": "us-east"}}'], 'text': ['Sure, I can help creating an EC2 machine', 'Sure, I can help with various information about AWS security groups'] }, "ECS": { 'attachments': ['{"workflow.name": "aws_ecs_restart_service","workflow.parameters": {"region": "us-east"}}'], 'text': ['Sure! I can help with restarting AWS ECS Service'] }, "IAM": { 'attachments': ['{"workflow.name": "aws_iam_policies_info","workflow.parameters": {"region": "us-east"}}'], 'text': ['Sure! I can help with AWS IAM policies info'] } }, "Topic Title": { "Subtopic Title": { 'attachments': [], 'text': ['This is another fact'] } } } I am using: groups = ['topic', 'subtopic', "text", "attachments"] groups.reverse() def hierachical_data(data, groups): g = groups[-1] g_list = [] for key, items in itertools.groupby(data, operator.itemgetter(g)): g_list.append({key:list(items)}) groups = groups[0:-1] if(len(groups) != 0): for e in g_list: for k, v in e.items(): e[k] = hierachical_data(v, groups) return g_list print(hierachical_data(filtered_top_facts_dicts, groups)) But getting an error for hashing lists. Please advise how to transform my json to the desired format.
[ "To group the list of dictionaries by topic and subtopic, you can create an empty dictionary and then loop through the list of dictionaries to add each item to the appropriate nested level in the dictionary.\nresult = {}\n\nfor item in data:\n topic = item['topic']\n subtopic = item['subtopic']\n\n if topic not in result:\n result[topic] = {}\n\n if subtopic not in result[topic]:\n result[topic][subtopic] = {}\n result[topic][subtopic]['attachments'] = []\n result[topic][subtopic]['text'] = []\n\n result[topic][subtopic]['attachments'].extend(item['attachments'])\n result[topic][subtopic]['text'].append(item['text'])\n\n# Reverse the order of the sub-dictionaries within each topic\nfor topic, subtopics in result.items():\n result[topic] = dict(reversed(list(subtopics.items())))\n\nAfter this loop has completed, the result dictionary will be in the format you described, with topic and subtopic as the keys and the attachments and text as the values within each sub-dictionary.\nOutput:\n{'AWS': {'GitHub': {'attachments': ['{\"workflow.name\": \"view_pull_request\",\"workflow.parameters\": {\"region\": \"us-west\"}}'],\n 'text': ['Sure, I can help with GitHub pull requests']},\n 'S3': {'attachments': ['{\"workflow.name\": \"aws_s3_file_copy\",\"workflow.parameters\": {\"region\": \"us-west\"}}'],\n 'text': ['Sure, I can help you with the process of copying on S3']},\n 'EC2': {'attachments': ['{\"workflow.name\": \"aws_ec2_create_instance\",\"workflow.parameters\": {\"region\": \"us-east\"}}',\n '{\"workflow.name\": \"aws_ec2_security_group_info\",\"workflow.parameters\": {\"region\": \"us-east\"}}'],\n 'text': ['Sure, I can help creating an EC2 machine',\n 'Sure, I can help with various information about AWS security groups']},\n 'ECS': {'attachments': ['{\"workflow.name\": \"aws_ecs_restart_service\",\"workflow.parameters\": {\"region\": \"us-east\"}}'],\n 'text': ['Sure! I can help with restarting AWS ECS Service']},\n 'IAM': {'attachments': ['{\"workflow.name\": \"aws_iam_policies_info\",\"workflow.parameters\": {\"region\": \"us-east\"}}'],\n 'text': ['Sure! I can help with AWS IAM policies info']}},\n 'Topic Title': {'Subtopic Title': {'attachments': [],\n 'text': ['This is another fact', 'This is a fact']}}}\n\n", "I think the cleanest solution is to use dictlib with reduce in one line:\nfrom functools import reduce\nimport dictlib\n\nreduce(\n lambda x, y: dictlib.union_setadd(x, y),\n [\n {\n x[\"topic\"]: {\n x[\"subtopic\"]: {\n list(x.keys())[2]: list(x.values())[2],\n list(x.keys())[3]: [list(x.values())[3]],\n }\n }\n }\n for x in d\n ],\n)\n\nwhere d is your initial list and dictlib.union_setadd() merges dictionaries by doing setadd logic like with str and int. Note that when put in reduce, merge is sequential and cumulative for all your list entries.\nHope this helps.\n" ]
[ 1, 1 ]
[]
[]
[ "dictionary", "itertools_groupby", "python", "python_3.x", "python_itertools" ]
stackoverflow_0074662274_dictionary_itertools_groupby_python_python_3.x_python_itertools.txt
Q: Tensorflow doesn't seem to see my gpu I've tried tensorflow on both cuda 7.5 and 8.0, w/o cudnn (my GPU is old, cudnn doesn't support it). When I execute device_lib.list_local_devices(), there is no gpu in the output. Theano sees my gpu, and works fine with it, and examples in /usr/share/cuda/samples work fine as well. I installed tensorflow through pip install. Is my gpu too old for tf to support it? gtx 460 A: I came across this same issue in jupyter notebooks. This could be an easy fix. $ pip uninstall tensorflow $ pip install tensorflow-gpu You can check if it worked with: tf.test.gpu_device_name() Update 2020 It seems like tensorflow 2.0+ comes with gpu capabilities therefore pip install tensorflow should be enough A: Summary: check if tensorflow sees your GPU (optional) check if your videocard can work with tensorflow (optional) find versions of CUDA Toolkit and cuDNN SDK, compatible with your tf version install CUDA Toolkit install cuDNN SDK pip uninstall tensorflow; pip install tensorflow-gpu check if tensorflow sees your GPU * source - https://www.tensorflow.org/install/gpu Detailed instruction: check if tensorflow sees your GPU (optional) from tensorflow.python.client import device_lib def get_available_devices(): local_device_protos = device_lib.list_local_devices() return [x.name for x in local_device_protos] print(get_available_devices()) # my output was => ['/device:CPU:0'] # good output must be => ['/device:CPU:0', '/device:GPU:0'] check if your card can work with tensorflow (optional) my PC: GeForce GTX 1060 notebook (driver version - 419.35), windows 10, jupyter notebook tensorflow needs Compute Capability 3.5 or higher. (https://www.tensorflow.org/install/gpu#hardware_requirements) https://developer.nvidia.com/cuda-gpus select "CUDA-Enabled GeForce Products" result - "GeForce GTX 1060 Compute Capability = 6.1" my card can work with tf! 
find versions of CUDA Toolkit and cuDNN SDK, that you need a) find your tf version import sys print (sys.version) # 3.6.4 |Anaconda custom (64-bit)| (default, Jan 16 2018, 10:22:32) [MSC v.1900 64 bit (AMD64)] import tensorflow as tf print(tf.__version__) # my output was => 1.13.1 b) find right versions of CUDA Toolkit and cuDNN SDK for your tf version https://www.tensorflow.org/install/source#linux * it is written for linux, but worked in my case see, that tensorflow_gpu-1.13.1 needs: CUDA Toolkit v10.0, cuDNN SDK v7.4 install CUDA Toolkit a) install CUDA Toolkit 10.0 https://developer.nvidia.com/cuda-toolkit-archive select: CUDA Toolkit 10.0 and download base installer (2 GB) installation settings: select only CUDA (my installation path was: D:\Programs\x64\Nvidia\Cuda_v_10_0\Development) b) add environment variables: system variables / path must have: D:\Programs\x64\Nvidia\Cuda_v_10_0\Development\bin D:\Programs\x64\Nvidia\Cuda_v_10_0\Development\libnvvp D:\Programs\x64\Nvidia\Cuda_v_10_0\Development\extras\CUPTI\libx64 D:\Programs\x64\Nvidia\Cuda_v_10_0\Development\include install cuDNN SDK a) download cuDNN SDK v7.4 https://developer.nvidia.com/rdp/cudnn-archive (needs registration, but it is simple) select "Download cuDNN v7.4.2 (Dec 14, 2018), for CUDA 10.0" b) add path to 'bin' folder into "environment variables / system variables / path": D:\Programs\x64\Nvidia\cudnn_for_cuda_10_0\bin pip uninstall tensorflow pip install tensorflow-gpu check if tensorflow sees your GPU - restart your PC - print(get_available_devices()) - # now this code should return => ['/device:CPU:0', '/device:GPU:0'] A: If you are using conda, you might have installed the cpu version of the tensorflow. Check package list (conda list) of the environment to see if this is the case . If so, remove the package by using conda remove tensorflow and install keras-gpu instead (conda install -c anaconda keras-gpu. This will install everything you need to run your machine learning codes in GPU. Cheers! P.S. You should check first if you have installed the drivers correctly using nvidia-smi. By default, this is not in your PATH so you might as well need to add the folder to your path. The .exe file can be found at C:\Program Files\NVIDIA Corporation\NVSMI A: When I look up your GPU, I see that it only supports CUDA Compute Capability 2.1. (Can be checked through https://developer.nvidia.com/cuda-gpus) Unfortunately, TensorFlow needs a GPU with minimum CUDA Compute Capability 3.0. https://www.tensorflow.org/get_started/os_setup#optional_install_cuda_gpus_on_linux You might see some logs from TensorFlow checking your GPU, but ultimately the library will avoid using an unsupported GPU. A: The following worked for me, hp laptop. I have a Cuda Compute capability (version) 3.0 compatible Nvidia card. Windows 7. pip3.6.exe uninstall tensorflow-gpu pip3.6.exe uninstall tensorflow-gpu pip3.6.exe install tensorflow-gpu A: So as of 2022-04, the tensorflow package contains both CPU and GPU builds. 
To install a GPU build, search to see what's available: λ conda search tensorflow Loading channels: done # Name Version Build Channel tensorflow 0.12.1 py35_1 conda-forge tensorflow 0.12.1 py35_2 conda-forge tensorflow 1.0.0 py35_0 conda-forge … tensorflow 2.5.0 mkl_py39h1fa1df6_0 pkgs/main tensorflow 2.6.0 eigen_py37h37bbdb1_0 pkgs/main tensorflow 2.6.0 eigen_py38h63d3545_0 pkgs/main tensorflow 2.6.0 eigen_py39h855417c_0 pkgs/main tensorflow 2.6.0 gpu_py37h3e8f0e3_0 pkgs/main tensorflow 2.6.0 gpu_py38hc0e8100_0 pkgs/main tensorflow 2.6.0 gpu_py39he88c5ba_0 pkgs/main tensorflow 2.6.0 mkl_py37h9623b36_0 pkgs/main tensorflow 2.6.0 mkl_py38hdc16138_0 pkgs/main tensorflow 2.6.0 mkl_py39h31650da_0 pkgs/main You can see that there are builds of TF 2.6.0 that support Python 3.7, 3.8 and 3.9, and that are built for MKL (Intel CPU), Eigen, or GPU. To narrow it down, you can use wildcards in the search. This will find any Tensorflow 2.x version that is built for GPU, for instance: λ conda search tensorflow=2*=gpu* Loading channels: done # Name Version Build Channel tensorflow 2.0.0 gpu_py36hfdd5754_0 pkgs/main tensorflow 2.0.0 gpu_py37h57d29ca_0 pkgs/main tensorflow 2.1.0 gpu_py36h3346743_0 pkgs/main tensorflow 2.1.0 gpu_py37h7db9008_0 pkgs/main tensorflow 2.5.0 gpu_py37h23de114_0 pkgs/main tensorflow 2.5.0 gpu_py38h8e8c102_0 pkgs/main tensorflow 2.5.0 gpu_py39h7dc34a2_0 pkgs/main tensorflow 2.6.0 gpu_py37h3e8f0e3_0 pkgs/main tensorflow 2.6.0 gpu_py38hc0e8100_0 pkgs/main tensorflow 2.6.0 gpu_py39he88c5ba_0 pkgs/main To install a specific version in an otherwise empty environment, you can use a command like: λ conda activate tf (tf) λ conda install tensorflow=2.6.0=gpu_py39he88c5ba_0 … The following NEW packages will be INSTALLED: _tflow_select pkgs/main/win-64::_tflow_select-2.1.0-gpu … cudatoolkit pkgs/main/win-64::cudatoolkit-11.3.1-h59b6b97_2 cudnn pkgs/main/win-64::cudnn-8.2.1-cuda11.3_0 … tensorflow pkgs/main/win-64::tensorflow-2.6.0-gpu_py39he88c5ba_0 tensorflow-base pkgs/main/win-64::tensorflow-base-2.6.0-gpu_py39hb3da07e_0 … As you can see, if you install a GPU build, it will automatically also install compatible cudatoolkit and cudnn packages. You don't need to manually check versions for compatibility, or manually download several gigabytes from Nvidia's website, or register as a developer, as it says in other answers or on the official website. After installation, confirm that it worked and it sees the GPU by running: λ python Python 3.9.12 (main, Apr 4 2022, 05:22:27) [MSC v.1916 64 bit (AMD64)] :: Anaconda, Inc. on win32 Type "help", "copyright", "credits" or "license" for more information. >>> import tensorflow as tf >>> tf.__version__ '2.6.0' >>> tf.config.list_physical_devices() [PhysicalDevice(name='/physical_device:CPU:0', device_type='CPU'), PhysicalDevice(name='/physical_device:GPU:0', device_type='GPU')] Getting conda to install a GPU build and other packages you want to use is another story, however, because there are a lot of package incompatibilities for me. I think the best you can do is specify the installation criteria using wildcards and cross your fingers. 
This tries to install any TF 2.x version that's built for GPU and that has dependencies compatible with Spyder and matplotlib's dependencies, for instance: λ conda install tensorflow=2*=gpu* spyder matplotlib For me, this ended up installing a two year old GPU version of tensorflow: matplotlib pkgs/main/win-64::matplotlib-3.5.1-py37haa95532_1 spyder pkgs/main/win-64::spyder-5.1.5-py37haa95532_1 tensorflow pkgs/main/win-64::tensorflow-2.1.0-gpu_py37h7db9008_0 I had previously been using the tensorflow-gpu package, but that doesn't work anymore. conda typically grinds forever trying to find compatible packages to install, and even when it's installed, it doesn't actually install a gpu build of tensorflow or the CUDA dependencies: λ conda list … cookiecutter 1.7.2 pyhd3eb1b0_0 cryptography 3.4.8 py38h71e12ea_0 cycler 0.11.0 pyhd3eb1b0_0 dataclasses 0.8 pyh6d0b6a4_7 … tensorflow 2.3.0 mkl_py38h8557ec7_0 tensorflow-base 2.3.0 eigen_py38h75a453f_0 tensorflow-estimator 2.6.0 pyh7b7c402_0 tensorflow-gpu 2.3.0 he13fc11_0 A: I have had an issue where I needed the latest TensorFlow (2.8.0 at the time of writing) with GPU support running in a conda environment. The problem was that it was not available via conda. What I did was conda install cudatoolkit==11.2 pip install tensorflow-gpu==2.8.0 Although I've cheched that the cuda toolkit version was compatible with the tensorflow version, it was still returning an error, where libcudart.so.11.0 was not found. As a result, GPUs were not visible. The remedy was to set environmental variable LD_LIBRARY_PATH to point to your anaconda3/envs/<your_tensorflow_environment>/lib with this command export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/home/<user>/anaconda3/envs/<your_tensorflow_environment>/lib Unless you make it permanent, you will need to create this variable every time you start a terminal prior to a session (jupyter notebook). It can be conveniently automated by following this procedure from conda's official website. A: In my case, I had a working tensorflow-gpu version 1.14 but suddenly it stopped working. I fixed the problem using: pip uninstall tensorflow-gpu==1.14 pip install tensorflow-gpu==1.14 A: I experienced the same problem on my Windows OS. I followed tensorflow's instructions on installing CUDA, cudnn, etc., and tried the suggestions in the answers above - with no success. What solved my issue was to update my GPU drivers. You can update them via: Pressing windows-button + r Entering devmgmt.msc Right-Clicking on "Display adapters" and clicking on the "Properties" option Going to the "Driver" tab and selecting "Updating Driver". Finally, click on "Search automatically for updated driver software" Restart your machine and run the following check again: from tensorflow.python.client import device_lib local_device_protos = device_lib.list_local_devices() [x.name for x in local_device_protos] Sample output: 2022-01-17 13:41:10.557751: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1640] Found device 0 with properties: name: GeForce 940MX major: 5 minor: 0 memoryClockRate(GHz): 1.189 pciBusID: 0000:01:00.0 2022-01-17 13:41:10.558125: I tensorflow/stream_executor/platform/default/dlopen_checker_stub.cc:25] GPU libraries are statically linked, skip dlopen check. 
2022-01-17 13:41:10.562095: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1763] Adding visible gpu devices: 0 2022-01-17 13:45:11.392814: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1181] Device interconnect StreamExecutor with strength 1 edge matrix: 2022-01-17 13:45:11.393617: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1187] 0 2022-01-17 13:45:11.393739: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1200] 0: N 2022-01-17 13:45:11.401271: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1326] Created TensorFlow device (/device:GPU:0 with 1391 MB memory) -> physical GPU (device: 0, name: GeForce 940MX, pci bus id: 0000:01:00.0, compute capability: 5.0) >>> [x.name for x in local_device_protos] ['/device:CPU:0', '/device:GPU:0'] A: I had a problem because I didn't specify the version of Tensorflow so my version was 2.11. After many hours I found that my problem is described in install guide: Caution: TensorFlow 2.10 was the last TensorFlow release that supported GPU on native-Windows. Starting with TensorFlow 2.11, you will need to install TensorFlow in WSL2, or install tensorflow-cpu and, optionally, try the TensorFlow-DirectML-Plugin Before that, I read most of the answers to this and similar questions. I followed @AndrewPt answer. I already had installed CUDA but updated the version just in case, installed cudNN, and restarted the computer. The easiest solution for me was to downgrade to 2.10 (you can try different options mentioned in the install guide). I first uninstalled all of these packages (probably it's not necessary, but I didn't want to see how pip messed up versions at 2 am): pip uninstall keras pip uninstall tensorflow-io-gcs-filesystem pip uninstall tensorflow-estimator pip uninstall tensorflow pip uninstall Keras-Preprocessing pip uninstall tensorflow-intel because I wanted only packages required for the old version, and I didn't do it for all required packages for 2.11 version. After that I installed tensorflow 2.10: pip install tensorflow<2.11 and it worked. I used this code to check if GPU is visible: import tensorflow as tf print(tf.config.list_physical_devices('GPU'))
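One small diagnostic that complements the answers above, written as a hedged sketch: list the devices TensorFlow sees and, on recent TF 2.x wheels, print the CUDA/cuDNN versions the wheel was built against so they can be compared with what is installed locally. tf.sysconfig.get_build_info() is assumed to be available; older releases fall through the except branch.

import tensorflow as tf

print("TF version:", tf.__version__)
print("GPUs seen:", tf.config.list_physical_devices("GPU"))

try:
    build = tf.sysconfig.get_build_info()
    print("wheel built for CUDA:", build.get("cuda_version"))
    print("wheel built for cuDNN:", build.get("cudnn_version"))
except AttributeError:
    # older TensorFlow releases do not expose build info this way
    pass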
Tensorflow doesn't seem to see my gpu
I've tried tensorflow on both cuda 7.5 and 8.0, w/o cudnn (my GPU is old, cudnn doesn't support it). When I execute device_lib.list_local_devices(), there is no gpu in the output. Theano sees my gpu, and works fine with it, and examples in /usr/share/cuda/samples work fine as well. I installed tensorflow through pip install. Is my gpu too old for tf to support it? gtx 460
[ "I came across this same issue in jupyter notebooks. This could be an easy fix.\n$ pip uninstall tensorflow\n$ pip install tensorflow-gpu\n\nYou can check if it worked with:\ntf.test.gpu_device_name()\n\nUpdate 2020\nIt seems like tensorflow 2.0+ comes with gpu capabilities therefore\npip install tensorflow should be enough\n", "Summary:\n\ncheck if tensorflow sees your GPU (optional)\ncheck if your videocard can work with tensorflow (optional)\nfind versions of CUDA Toolkit and cuDNN SDK, compatible with your tf version\ninstall CUDA Toolkit\ninstall cuDNN SDK\npip uninstall tensorflow; pip install tensorflow-gpu \ncheck if tensorflow sees your GPU\n\n* source - https://www.tensorflow.org/install/gpu\nDetailed instruction:\n\ncheck if tensorflow sees your GPU (optional)\nfrom tensorflow.python.client import device_lib\ndef get_available_devices():\n local_device_protos = device_lib.list_local_devices()\n return [x.name for x in local_device_protos]\nprint(get_available_devices()) \n# my output was => ['/device:CPU:0']\n# good output must be => ['/device:CPU:0', '/device:GPU:0']\n\ncheck if your card can work with tensorflow (optional)\n\nmy PC: GeForce GTX 1060 notebook (driver version - 419.35), windows 10, jupyter notebook\ntensorflow needs Compute Capability 3.5 or higher. (https://www.tensorflow.org/install/gpu#hardware_requirements)\nhttps://developer.nvidia.com/cuda-gpus\nselect \"CUDA-Enabled GeForce Products\"\nresult - \"GeForce GTX 1060 Compute Capability = 6.1\"\nmy card can work with tf!\n\nfind versions of CUDA Toolkit and cuDNN SDK, that you need\na) find your tf version\nimport sys\nprint (sys.version)\n# 3.6.4 |Anaconda custom (64-bit)| (default, Jan 16 2018, 10:22:32) [MSC v.1900 64 bit (AMD64)]\nimport tensorflow as tf\nprint(tf.__version__)\n# my output was => 1.13.1\n\nb) find right versions of CUDA Toolkit and cuDNN SDK for your tf version\nhttps://www.tensorflow.org/install/source#linux\n* it is written for linux, but worked in my case\nsee, that tensorflow_gpu-1.13.1 needs: CUDA Toolkit v10.0, cuDNN SDK v7.4\n\ninstall CUDA Toolkit\na) install CUDA Toolkit 10.0\nhttps://developer.nvidia.com/cuda-toolkit-archive\nselect: CUDA Toolkit 10.0 and download base installer (2 GB)\ninstallation settings: select only CUDA\n (my installation path was: D:\\Programs\\x64\\Nvidia\\Cuda_v_10_0\\Development)\n\nb) add environment variables:\nsystem variables / path must have:\n D:\\Programs\\x64\\Nvidia\\Cuda_v_10_0\\Development\\bin\n D:\\Programs\\x64\\Nvidia\\Cuda_v_10_0\\Development\\libnvvp\n D:\\Programs\\x64\\Nvidia\\Cuda_v_10_0\\Development\\extras\\CUPTI\\libx64\n D:\\Programs\\x64\\Nvidia\\Cuda_v_10_0\\Development\\include\n\ninstall cuDNN SDK\na) download cuDNN SDK v7.4\nhttps://developer.nvidia.com/rdp/cudnn-archive (needs registration, but it is simple)\nselect \"Download cuDNN v7.4.2 (Dec 14, 2018), for CUDA 10.0\"\n\nb) add path to 'bin' folder into \"environment variables / system variables / path\":\nD:\\Programs\\x64\\Nvidia\\cudnn_for_cuda_10_0\\bin\n\npip uninstall tensorflow\npip install tensorflow-gpu \ncheck if tensorflow sees your GPU\n- restart your PC\n- print(get_available_devices()) \n- # now this code should return => ['/device:CPU:0', '/device:GPU:0']\n\n\n", "If you are using conda, you might have installed the cpu version of the tensorflow. Check package list (conda list) of the environment to see if this is the case . If so, remove the package by using conda remove tensorflow and install keras-gpu instead (conda install -c anaconda keras-gpu. 
This will install everything you need to run your machine learning codes in GPU. Cheers!\nP.S. You should check first if you have installed the drivers correctly using nvidia-smi. By default, this is not in your PATH so you might as well need to add the folder to your path. The .exe file can be found at C:\\Program Files\\NVIDIA Corporation\\NVSMI\n", "When I look up your GPU, I see that it only supports CUDA Compute Capability 2.1. (Can be checked through https://developer.nvidia.com/cuda-gpus) Unfortunately, TensorFlow needs a GPU with minimum CUDA Compute Capability 3.0.\nhttps://www.tensorflow.org/get_started/os_setup#optional_install_cuda_gpus_on_linux\nYou might see some logs from TensorFlow checking your GPU, but ultimately the library will avoid using an unsupported GPU. \n", "The following worked for me, hp laptop. I have a Cuda Compute capability\n(version) 3.0 compatible Nvidia card. Windows 7.\npip3.6.exe uninstall tensorflow-gpu\npip3.6.exe uninstall tensorflow-gpu\npip3.6.exe install tensorflow-gpu\n\n", "So as of 2022-04, the tensorflow package contains both CPU and GPU builds. To install a GPU build, search to see what's available:\nλ conda search tensorflow\nLoading channels: done\n# Name Version Build Channel\ntensorflow 0.12.1 py35_1 conda-forge\ntensorflow 0.12.1 py35_2 conda-forge\ntensorflow 1.0.0 py35_0 conda-forge\n…\ntensorflow 2.5.0 mkl_py39h1fa1df6_0 pkgs/main\ntensorflow 2.6.0 eigen_py37h37bbdb1_0 pkgs/main\ntensorflow 2.6.0 eigen_py38h63d3545_0 pkgs/main\ntensorflow 2.6.0 eigen_py39h855417c_0 pkgs/main\ntensorflow 2.6.0 gpu_py37h3e8f0e3_0 pkgs/main\ntensorflow 2.6.0 gpu_py38hc0e8100_0 pkgs/main\ntensorflow 2.6.0 gpu_py39he88c5ba_0 pkgs/main\ntensorflow 2.6.0 mkl_py37h9623b36_0 pkgs/main\ntensorflow 2.6.0 mkl_py38hdc16138_0 pkgs/main\ntensorflow 2.6.0 mkl_py39h31650da_0 pkgs/main\n\nYou can see that there are builds of TF 2.6.0 that support Python 3.7, 3.8 and 3.9, and that are built for MKL (Intel CPU), Eigen, or GPU.\nTo narrow it down, you can use wildcards in the search. This will find any Tensorflow 2.x version that is built for GPU, for instance:\nλ conda search tensorflow=2*=gpu*\nLoading channels: done\n# Name Version Build Channel\ntensorflow 2.0.0 gpu_py36hfdd5754_0 pkgs/main\ntensorflow 2.0.0 gpu_py37h57d29ca_0 pkgs/main\ntensorflow 2.1.0 gpu_py36h3346743_0 pkgs/main\ntensorflow 2.1.0 gpu_py37h7db9008_0 pkgs/main\ntensorflow 2.5.0 gpu_py37h23de114_0 pkgs/main\ntensorflow 2.5.0 gpu_py38h8e8c102_0 pkgs/main\ntensorflow 2.5.0 gpu_py39h7dc34a2_0 pkgs/main\ntensorflow 2.6.0 gpu_py37h3e8f0e3_0 pkgs/main\ntensorflow 2.6.0 gpu_py38hc0e8100_0 pkgs/main\ntensorflow 2.6.0 gpu_py39he88c5ba_0 pkgs/main\n\nTo install a specific version in an otherwise empty environment, you can use a command like:\nλ conda activate tf\n\n(tf) λ conda install tensorflow=2.6.0=gpu_py39he88c5ba_0\n\n…\n\nThe following NEW packages will be INSTALLED:\n\n _tflow_select pkgs/main/win-64::_tflow_select-2.1.0-gpu\n …\n cudatoolkit pkgs/main/win-64::cudatoolkit-11.3.1-h59b6b97_2\n cudnn pkgs/main/win-64::cudnn-8.2.1-cuda11.3_0\n …\n tensorflow pkgs/main/win-64::tensorflow-2.6.0-gpu_py39he88c5ba_0\n tensorflow-base pkgs/main/win-64::tensorflow-base-2.6.0-gpu_py39hb3da07e_0\n …\n\nAs you can see, if you install a GPU build, it will automatically also install compatible cudatoolkit and cudnn packages. 
You don't need to manually check versions for compatibility, or manually download several gigabytes from Nvidia's website, or register as a developer, as it says in other answers or on the official website.\nAfter installation, confirm that it worked and it sees the GPU by running:\nλ python\nPython 3.9.12 (main, Apr 4 2022, 05:22:27) [MSC v.1916 64 bit (AMD64)] :: Anaconda, Inc. on win32\nType \"help\", \"copyright\", \"credits\" or \"license\" for more information.\n>>> import tensorflow as tf\n>>> tf.__version__\n'2.6.0'\n>>> tf.config.list_physical_devices()\n[PhysicalDevice(name='/physical_device:CPU:0', device_type='CPU'), PhysicalDevice(name='/physical_device:GPU:0', device_type='GPU')]\n\nGetting conda to install a GPU build and other packages you want to use is another story, however, because there are a lot of package incompatibilities for me. I think the best you can do is specify the installation criteria using wildcards and cross your fingers.\nThis tries to install any TF 2.x version that's built for GPU and that has dependencies compatible with Spyder and matplotlib's dependencies, for instance:\nλ conda install tensorflow=2*=gpu* spyder matplotlib\n\nFor me, this ended up installing a two year old GPU version of tensorflow:\n matplotlib pkgs/main/win-64::matplotlib-3.5.1-py37haa95532_1\n spyder pkgs/main/win-64::spyder-5.1.5-py37haa95532_1\n tensorflow pkgs/main/win-64::tensorflow-2.1.0-gpu_py37h7db9008_0\n\nI had previously been using the tensorflow-gpu package, but that doesn't work anymore. conda typically grinds forever trying to find compatible packages to install, and even when it's installed, it doesn't actually install a gpu build of tensorflow or the CUDA dependencies:\nλ conda list\n…\ncookiecutter 1.7.2 pyhd3eb1b0_0\ncryptography 3.4.8 py38h71e12ea_0\ncycler 0.11.0 pyhd3eb1b0_0\ndataclasses 0.8 pyh6d0b6a4_7\n…\ntensorflow 2.3.0 mkl_py38h8557ec7_0\ntensorflow-base 2.3.0 eigen_py38h75a453f_0\ntensorflow-estimator 2.6.0 pyh7b7c402_0\ntensorflow-gpu 2.3.0 he13fc11_0\n\n", "I have had an issue where I needed the latest TensorFlow (2.8.0 at the time of writing) with GPU support running in a conda environment. The problem was that it was not available via conda. What I did was\nconda install cudatoolkit==11.2\npip install tensorflow-gpu==2.8.0\n\nAlthough I've cheched that the cuda toolkit version was compatible with the tensorflow version, it was still returning an error, where libcudart.so.11.0 was not found. As a result, GPUs were not visible. The remedy was to set environmental variable LD_LIBRARY_PATH to point to your anaconda3/envs/<your_tensorflow_environment>/lib with this command\nexport LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/home/<user>/anaconda3/envs/<your_tensorflow_environment>/lib\n\nUnless you make it permanent, you will need to create this variable every time you start a terminal prior to a session (jupyter notebook). It can be conveniently automated by following this procedure from conda's official website.\n", "In my case, I had a working tensorflow-gpu version 1.14 but suddenly it stopped working. I fixed the problem using:\n pip uninstall tensorflow-gpu==1.14\n pip install tensorflow-gpu==1.14\n\n", "I experienced the same problem on my Windows OS. I followed tensorflow's instructions on installing CUDA, cudnn, etc., and tried the suggestions in the answers above - with no success.\nWhat solved my issue was to update my GPU drivers. 
You can update them via:\n\nPressing windows-button + r\nEntering devmgmt.msc\nRight-Clicking on \"Display adapters\" and clicking on the \"Properties\" option\nGoing to the \"Driver\" tab and selecting \"Updating Driver\".\nFinally, click on \"Search automatically for updated driver software\"\nRestart your machine and run the following check again:\n\nfrom tensorflow.python.client import device_lib\nlocal_device_protos = device_lib.list_local_devices()\n[x.name for x in local_device_protos]\n\nSample output:\n2022-01-17 13:41:10.557751: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1640] Found device 0 with properties:\nname: GeForce 940MX major: 5 minor: 0 memoryClockRate(GHz): 1.189\npciBusID: 0000:01:00.0\n2022-01-17 13:41:10.558125: I tensorflow/stream_executor/platform/default/dlopen_checker_stub.cc:25] GPU libraries are statically linked, skip dlopen check.\n2022-01-17 13:41:10.562095: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1763] Adding visible gpu devices: 0\n2022-01-17 13:45:11.392814: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1181] Device interconnect StreamExecutor with strength 1 edge matrix:\n2022-01-17 13:45:11.393617: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1187] 0\n2022-01-17 13:45:11.393739: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1200] 0: N\n2022-01-17 13:45:11.401271: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1326] Created TensorFlow device (/device:GPU:0 with 1391 MB memory) -> physical GPU (device: 0, name: GeForce 940MX, pci bus id: 0000:01:00.0, compute capability: 5.0)\n>>> [x.name for x in local_device_protos]\n['/device:CPU:0', '/device:GPU:0']\n\n", "I had a problem because I didn't specify the version of Tensorflow so my version was 2.11. After many hours I found that my problem is described in install guide:\n\nCaution: TensorFlow 2.10 was the last TensorFlow release that supported GPU on native-Windows. Starting with TensorFlow 2.11, you will need to install TensorFlow in WSL2, or install tensorflow-cpu and, optionally, try the TensorFlow-DirectML-Plugin\n\nBefore that, I read most of the answers to this and similar questions. I followed @AndrewPt answer. I already had installed CUDA but updated the version just in case, installed cudNN, and restarted the computer.\nThe easiest solution for me was to downgrade to 2.10 (you can try different options mentioned in the install guide). I first uninstalled all of these packages (probably it's not necessary, but I didn't want to see how pip messed up versions at 2 am):\npip uninstall keras\npip uninstall tensorflow-io-gcs-filesystem\npip uninstall tensorflow-estimator\npip uninstall tensorflow\npip uninstall Keras-Preprocessing\npip uninstall tensorflow-intel\n\nbecause I wanted only packages required for the old version, and I didn't do it for all required packages for 2.11 version. After that I installed tensorflow 2.10:\npip install tensorflow<2.11\n\nand it worked.\nI used this code to check if GPU is visible:\nimport tensorflow as tf \nprint(tf.config.list_physical_devices('GPU'))\n\n" ]
[ 38, 30, 26, 16, 7, 3, 1, 0, 0, 0 ]
[]
[]
[ "tensorflow" ]
stackoverflow_0041402409_tensorflow.txt
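Several answers in the record above converge on the same sanity check: confirm that the installed TensorFlow build can actually see a GPU. A minimal sketch of that check, assuming a TensorFlow 2.x install in the active environment (output will differ per machine and driver version):

```python
# Minimal sketch: verify that the installed TensorFlow build sees a GPU.
# Assumes TensorFlow 2.x is installed in the current environment.
import tensorflow as tf

print("TensorFlow version:", tf.__version__)

gpus = tf.config.list_physical_devices("GPU")
if gpus:
    print("Visible GPUs:", gpus)
else:
    # Fall back to the older device listing used in some of the answers above.
    from tensorflow.python.client import device_lib
    print([d.name for d in device_lib.list_local_devices()])
```

If the list is empty even though nvidia-smi sees the card, the mismatch is usually between the TensorFlow build and the installed CUDA/cuDNN versions, which is what most of the answers above are working around.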
Q: How to replace .append with .concat in pandas dataframe? Here is my code dataframe = pd.DataFrame(columns = my_columns) for stock in stocks['Ticker'][:1]: api_url = f'https://sandbox.iexapis.com/stable/stock/{symbol}/quote/?token={IEX_CLOUD_API_TOKEN}' data = requests.get(api_url).json() dataframe = dataframe.append( pd.Series([stock, data['latestPrice'], marketCap/1000000000000], index = my_columns), ignore_index = True ) dataframe Returns this BUT! Ticker Stock Price Market Cap A 153.57 2.37218 Also returns : FutureWarning: The frame.append method is deprecated and will be removed from pandas in a future version. Use pandas.concat instead. dataframe = dataframe.append( I understand I want to make dataframe a list but how do I parse through the Series? A: dataframe = pd.DataFrame(columns = my_columns) for stock in stocks['Ticker'][:1]: api_url = f'https://sandbox.iexapis.com/stable/stock/{symbol}/quote/?token={IEX_CLOUD_API_TOKEN}' data = requests.get(api_url).json() new_row = pd.DataFrame( [ [stock, data["latestPrice"], marketCap / 1000000000000] ], columns=my_columns ) dataframe = pd.concat([dataframe, new_row], ignore_index = True) dataframe
How to replace .append with .concat in pandas dataframe?
Here is my code dataframe = pd.DataFrame(columns = my_columns) for stock in stocks['Ticker'][:1]: api_url = f'https://sandbox.iexapis.com/stable/stock/{symbol}/quote/?token={IEX_CLOUD_API_TOKEN}' data = requests.get(api_url).json() dataframe = dataframe.append( pd.Series([stock, data['latestPrice'], marketCap/1000000000000], index = my_columns), ignore_index = True ) dataframe Returns this BUT! Ticker Stock Price Market Cap A 153.57 2.37218 Also returns : FutureWarning: The frame.append method is deprecated and will be removed from pandas in a future version. Use pandas.concat instead. dataframe = dataframe.append( I understand I want to make dataframe a list but how do I parse through the Series?
[ "dataframe = pd.DataFrame(columns = my_columns)\nfor stock in stocks['Ticker'][:1]:\n api_url = f'https://sandbox.iexapis.com/stable/stock/{symbol}/quote/?token={IEX_CLOUD_API_TOKEN}'\n data = requests.get(api_url).json()\n new_row = pd.DataFrame(\n [\n [stock, data[\"latestPrice\"], marketCap / 1000000000000]\n ],\n columns=my_columns\n )\n dataframe = pd.concat([dataframe, new_row], ignore_index = True)\n\ndataframe\n\n" ]
[ 0 ]
[]
[]
[ "dataframe", "pandas", "python" ]
stackoverflow_0074662439_dataframe_pandas_python.txt
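The answer above concatenates one single-row DataFrame per loop iteration. An alternative sketch, using made-up ticker values since the original pulls each quote from an API, collects the rows in a plain list and calls pd.concat once at the end, which avoids repeatedly copying the growing DataFrame:

```python
# Illustrative sketch with made-up data (the original code fetches each quote from an API).
# Rows are collected in a plain Python list and concatenated once, outside the loop.
import pandas as pd

my_columns = ["Ticker", "Stock Price", "Market Cap"]
fake_quotes = [("A", 153.57, 2.37218), ("B", 98.10, 1.105)]  # hypothetical values

rows = []
for ticker, price, market_cap in fake_quotes:
    rows.append(pd.DataFrame([[ticker, price, market_cap]], columns=my_columns))

dataframe = pd.concat(rows, ignore_index=True)
print(dataframe)
```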
Q: Boost::Asio Serial Port async_read_some not storing data in buffer I'm developing Serial Port program using Boost::Asio. I call the SerialPort::read_async method every time I want to read data from serial port. While I am testing I realized that the data received on serial port is not getting saved in the read_buffer however the read handler receives proper number of received bytes in boost::asio::placeholders::bytes_transferred field/parameter. The read handler also contains boost::system::errc::success in the boost::asio::placeholders::error field/parameter. The read_buffer holds exactly the same value that was set before the async_read_some call was made. this->read_buffer.fill(static_cast<std::byte>('\0')); //Clear Buffer this->read_buffer.fill(static_cast<std::byte>('0')); //For Testing Code bool SerialPort::read_async(std::uint32_t read_timeout) { try { this->read_buffer.fill(static_cast<std::byte>('\0')); //Clear Buffer //this->read_buffer.fill(static_cast<std::byte>('0')); //For Testing if (read_timeout not_eq SerialPort::ignore_timeout) this->read_timeout = read_timeout;//If read_timeout is not set to ignore_timeout, update the read_timeout else use old read_timeout this->port.async_read_some(boost::asio::buffer(this->read_buffer.data(), this->read_buffer.size()), boost::bind(&SerialPort::read_handler, this, boost::asio::placeholders::error, boost::asio::placeholders::bytes_transferred)); return true; } catch (const std::exception& ex) { PLOG_ERROR << ex.what(); return false; } } Update Remaining Code bool SerialPort::open_port(const std::string& port_name, std::uint32_t baud_rate, std::uint8_t data_bits, std::uint8_t stop_bits, parity_t parity, flow_control_t flow_control, std::uint32_t read_timeout, std::uint32_t read_inter_byte_timeout, std::uint32_t write_timeout) { try { this->port_name = port_name; if (not this->open_port()) return false; if (not this->set_baud_rate(baud_rate).has_value()) return false; if (not this->set_data_bits(data_bits).has_value()) return false; if (not this->set_stop_bits(stop_bits).has_value()) return false; if (not this->set_parity(parity).has_value()) return false; if (not this->set_flow_control(flow_control).has_value()) return false; this->read_timeout = read_timeout; if (read_inter_byte_timeout <= 0) this->read_inter_byte_timeout = 1; #ifdef _WIN64 BOOL return_value; DCB dcb = { 0 }; COMMTIMEOUTS timeouts = { 0 }; if (this->line_mode) //Set COM port to return data either at \n or \r { /* * If the function succeeds, the return value is nonzero. * If the function fails, the return value is zero. To get extended error information, call GetLastError. */ return_value = GetCommState(this->native_port, &dcb); if (return_value) { if(this->new_line_character == '\r') dcb.EofChar = '\r'; //Specify end of data character as carriage-return (\r) else // --> Default dcb.EofChar = '\n'; //Specify end of data character as new-line (\n) } else { PLOG_ERROR << "Error GetCommState : " << GetLastErrorAsString(); return false; } /* * If the function succeeds, the return value is nonzero. * If the function fails, the return value is zero. To get extended error information, call GetLastError. */ return_value = SetCommState(this->native_port, &dcb); if (not return_value) { PLOG_ERROR << "Error SetCommState : " << GetLastErrorAsString(); return false; } } else //Set COM port to return data on timeout { /* * If the function succeeds, the return value is nonzero. * If the function fails, the return value is zero. To get extended error information, call GetLastError. 
*/ return_value = GetCommTimeouts(this->native_port, &timeouts); if (return_value) { timeouts.ReadIntervalTimeout = this->read_inter_byte_timeout; // Timeout in miliseconds //timeouts.ReadTotalTimeoutConstant = 0; //MAXDWORD; // in milliseconds - not needed //timeouts.ReadTotalTimeoutMultiplier = 0; // in milliseconds - not needed //timeouts.WriteTotalTimeoutConstant = 50; // in milliseconds - not needed //timeouts.WriteTotalTimeoutMultiplier = write_timeout; // in milliseconds - not needed } else { PLOG_ERROR << "Error GetCommTimeouts : " << GetLastErrorAsString(); return false; } /* * If the function succeeds, the return value is nonzero. * If the function fails, the return value is zero. To get extended error information, call GetLastError. */ return_value = SetCommTimeouts(this->native_port, &timeouts); if (not return_value) { PLOG_ERROR << "Error SetCommTimeouts : " << GetLastErrorAsString(); return false; } } #else //For Linux termios #endif // _WIN64 return true; } catch (const std::exception& ex) { PLOG_ERROR << ex.what(); return false; } } void SerialPort::read_handler(const boost::system::error_code& error, std::size_t bytes_transferred) { this->read_async(); // I realized I was calling read_async before reading data bool receive_complete{ false }; try { if (error not_eq boost::system::errc::success) //Error in serial port read { PLOG_ERROR << error.to_string(); this->async_signal.emit(this->port_number, SerialPortEvents::read_error, error.to_string()); return; } if (this->line_mode) { std::string temporary_recieve_data; std::transform(this->read_buffer.begin(), this->read_buffer.begin() + bytes_transferred, //Data is added to temporary buffer std::back_inserter(temporary_recieve_data), [](std::byte character) { return static_cast<char>(character); } ); boost::algorithm::trim(temporary_recieve_data); // Trim handles space character, tab, carriage return, newline, vertical tab and form feed //Data is further processed based on the Process logic receive_complete = true; } else // Bulk-Data. Just append data to end of received_data string buffer. // Wait for timeout to trigger recevive_complete { //Test Function std::transform(this->read_buffer.begin(), this->read_buffer.begin() + bytes_transferred, std::back_inserter(this->received_data), [](std::byte character) { return static_cast<char>(character); } ); this->async_signal.emit(this->port_number, SerialPortEvents::read_data, this->received_data); //Data has been recieved send to server via MQTT } } catch (const std::exception& ex) { PLOG_ERROR << ex.what(); this->async_signal.emit(this->port_number, SerialPortEvents::read_error, ex.what()); } } A: Can you show the/a complete, self-contained minimal example. The code shown has no obvious issue (except some smells like the argument read_timeout soft-shadowing the member variable of the same name - and effectively being unused). 
Here is a minimal self-contained example just from the code shown: Live On Coliru #include <boost/asio.hpp> #include <boost/asio/serial_port.hpp> #include <boost/bind/bind.hpp> #include <iomanip> #include <iostream> namespace asio = boost::asio; static inline std::ostream PLOG_ERROR(std::cerr.rdbuf()); struct SerialPort { static constexpr uint32_t ignore_timeout = -1; SerialPort(asio::any_io_executor ex, std::string dev) : port(ex, dev) {} bool read_async(uint32_t timeout_override) { try { read_buffer.fill({}); // Clear Buffer if (timeout_override not_eq SerialPort::ignore_timeout) { read_timeout = timeout_override; } using namespace asio::placeholders; port.async_read_some( asio::buffer(read_buffer), bind(&SerialPort::read_handler, this, error, bytes_transferred)); return true; } catch (std::exception const& ex) { PLOG_ERROR << ex.what() << std::endl; return false; } } private: void read_handler(boost::system::error_code ec, size_t bytes_transferred) { std::cerr << "received " << bytes_transferred << " bytes (" << ec.message() << ")" << std::endl; auto fmt = std::cerr.flags(); for (auto b : read_buffer) { if (!bytes_transferred--) break; std::cerr << " " << std::hex << std::showbase << std::setfill('0') << std::setw(4) << static_cast<unsigned>(b); } std::cerr.flags(fmt); std::cerr << std::endl; if (!ec) read_async(ignore_timeout); } uint32_t read_timeout = 10; std::array<std::byte, 256> read_buffer{}; asio::serial_port port; }; int main(int argc, char** argv) { asio::io_context ioc; SerialPort sp(make_strand(ioc), argc > 1 ? argv[1] : "/dev/ttyS0"); sp.read_async(SerialPort::ignore_timeout); ioc.run(); // ioc.run_for(std::chrono::seconds(1)); } And testing using socat as described here: Virtual Serial Port for Linux socat -d -d pty,raw,echo=0 pty,raw,echo=0 Local demo: A: I figured out the problem. In my SerialPort::read_handler method I was calling this->read_async() before reading/copying the data from buffer. this->read_async() is resetting the buffer. The thing I don't understand is why this is haappening at random? Is this a scheduling issue (i.e OS is causing context switching)?
Boost::Asio Serial Port async_read_some not storing data in buffer
I'm developing Serial Port program using Boost::Asio. I call the SerialPort::read_async method every time I want to read data from serial port. While I am testing I realized that the data received on serial port is not getting saved in the read_buffer however the read handler receives proper number of received bytes in boost::asio::placeholders::bytes_transferred field/parameter. The read handler also contains boost::system::errc::success in the boost::asio::placeholders::error field/parameter. The read_buffer holds exactly the same value that was set before the async_read_some call was made. this->read_buffer.fill(static_cast<std::byte>('\0')); //Clear Buffer this->read_buffer.fill(static_cast<std::byte>('0')); //For Testing Code bool SerialPort::read_async(std::uint32_t read_timeout) { try { this->read_buffer.fill(static_cast<std::byte>('\0')); //Clear Buffer //this->read_buffer.fill(static_cast<std::byte>('0')); //For Testing if (read_timeout not_eq SerialPort::ignore_timeout) this->read_timeout = read_timeout;//If read_timeout is not set to ignore_timeout, update the read_timeout else use old read_timeout this->port.async_read_some(boost::asio::buffer(this->read_buffer.data(), this->read_buffer.size()), boost::bind(&SerialPort::read_handler, this, boost::asio::placeholders::error, boost::asio::placeholders::bytes_transferred)); return true; } catch (const std::exception& ex) { PLOG_ERROR << ex.what(); return false; } } Update Remaining Code bool SerialPort::open_port(const std::string& port_name, std::uint32_t baud_rate, std::uint8_t data_bits, std::uint8_t stop_bits, parity_t parity, flow_control_t flow_control, std::uint32_t read_timeout, std::uint32_t read_inter_byte_timeout, std::uint32_t write_timeout) { try { this->port_name = port_name; if (not this->open_port()) return false; if (not this->set_baud_rate(baud_rate).has_value()) return false; if (not this->set_data_bits(data_bits).has_value()) return false; if (not this->set_stop_bits(stop_bits).has_value()) return false; if (not this->set_parity(parity).has_value()) return false; if (not this->set_flow_control(flow_control).has_value()) return false; this->read_timeout = read_timeout; if (read_inter_byte_timeout <= 0) this->read_inter_byte_timeout = 1; #ifdef _WIN64 BOOL return_value; DCB dcb = { 0 }; COMMTIMEOUTS timeouts = { 0 }; if (this->line_mode) //Set COM port to return data either at \n or \r { /* * If the function succeeds, the return value is nonzero. * If the function fails, the return value is zero. To get extended error information, call GetLastError. */ return_value = GetCommState(this->native_port, &dcb); if (return_value) { if(this->new_line_character == '\r') dcb.EofChar = '\r'; //Specify end of data character as carriage-return (\r) else // --> Default dcb.EofChar = '\n'; //Specify end of data character as new-line (\n) } else { PLOG_ERROR << "Error GetCommState : " << GetLastErrorAsString(); return false; } /* * If the function succeeds, the return value is nonzero. * If the function fails, the return value is zero. To get extended error information, call GetLastError. */ return_value = SetCommState(this->native_port, &dcb); if (not return_value) { PLOG_ERROR << "Error SetCommState : " << GetLastErrorAsString(); return false; } } else //Set COM port to return data on timeout { /* * If the function succeeds, the return value is nonzero. * If the function fails, the return value is zero. To get extended error information, call GetLastError. 
*/ return_value = GetCommTimeouts(this->native_port, &timeouts); if (return_value) { timeouts.ReadIntervalTimeout = this->read_inter_byte_timeout; // Timeout in miliseconds //timeouts.ReadTotalTimeoutConstant = 0; //MAXDWORD; // in milliseconds - not needed //timeouts.ReadTotalTimeoutMultiplier = 0; // in milliseconds - not needed //timeouts.WriteTotalTimeoutConstant = 50; // in milliseconds - not needed //timeouts.WriteTotalTimeoutMultiplier = write_timeout; // in milliseconds - not needed } else { PLOG_ERROR << "Error GetCommTimeouts : " << GetLastErrorAsString(); return false; } /* * If the function succeeds, the return value is nonzero. * If the function fails, the return value is zero. To get extended error information, call GetLastError. */ return_value = SetCommTimeouts(this->native_port, &timeouts); if (not return_value) { PLOG_ERROR << "Error SetCommTimeouts : " << GetLastErrorAsString(); return false; } } #else //For Linux termios #endif // _WIN64 return true; } catch (const std::exception& ex) { PLOG_ERROR << ex.what(); return false; } } void SerialPort::read_handler(const boost::system::error_code& error, std::size_t bytes_transferred) { this->read_async(); // I realized I was calling read_async before reading data bool receive_complete{ false }; try { if (error not_eq boost::system::errc::success) //Error in serial port read { PLOG_ERROR << error.to_string(); this->async_signal.emit(this->port_number, SerialPortEvents::read_error, error.to_string()); return; } if (this->line_mode) { std::string temporary_recieve_data; std::transform(this->read_buffer.begin(), this->read_buffer.begin() + bytes_transferred, //Data is added to temporary buffer std::back_inserter(temporary_recieve_data), [](std::byte character) { return static_cast<char>(character); } ); boost::algorithm::trim(temporary_recieve_data); // Trim handles space character, tab, carriage return, newline, vertical tab and form feed //Data is further processed based on the Process logic receive_complete = true; } else // Bulk-Data. Just append data to end of received_data string buffer. // Wait for timeout to trigger recevive_complete { //Test Function std::transform(this->read_buffer.begin(), this->read_buffer.begin() + bytes_transferred, std::back_inserter(this->received_data), [](std::byte character) { return static_cast<char>(character); } ); this->async_signal.emit(this->port_number, SerialPortEvents::read_data, this->received_data); //Data has been recieved send to server via MQTT } } catch (const std::exception& ex) { PLOG_ERROR << ex.what(); this->async_signal.emit(this->port_number, SerialPortEvents::read_error, ex.what()); } }
[ "Can you show the/a complete, self-contained minimal example. The code shown has no obvious issue (except some smells like the argument read_timeout soft-shadowing the member variable of the same name - and effectively being unused).\nHere is a minimal self-contained example just from the code shown:\nLive On Coliru\n#include <boost/asio.hpp>\n#include <boost/asio/serial_port.hpp>\n#include <boost/bind/bind.hpp>\n#include <iomanip>\n#include <iostream>\nnamespace asio = boost::asio;\n\nstatic inline std::ostream PLOG_ERROR(std::cerr.rdbuf());\n\nstruct SerialPort {\n static constexpr uint32_t ignore_timeout = -1;\n\n SerialPort(asio::any_io_executor ex, std::string dev) : port(ex, dev) {}\n\n bool read_async(uint32_t timeout_override) {\n try {\n read_buffer.fill({}); // Clear Buffer\n\n if (timeout_override not_eq SerialPort::ignore_timeout) {\n read_timeout = timeout_override;\n }\n using namespace asio::placeholders;\n\n port.async_read_some(\n asio::buffer(read_buffer),\n bind(&SerialPort::read_handler, this, error, bytes_transferred));\n\n return true;\n } catch (std::exception const& ex) {\n PLOG_ERROR << ex.what() << std::endl;\n return false;\n }\n }\n\n private:\n void read_handler(boost::system::error_code ec, size_t bytes_transferred) {\n std::cerr << \"received \" << bytes_transferred << \" bytes (\" << ec.message() << \")\"\n << std::endl;\n\n auto fmt = std::cerr.flags();\n for (auto b : read_buffer) {\n if (!bytes_transferred--)\n break;\n std::cerr << \" \" << std::hex << std::showbase << std::setfill('0')\n << std::setw(4) << static_cast<unsigned>(b);\n }\n std::cerr.flags(fmt);\n std::cerr << std::endl;\n\n if (!ec)\n read_async(ignore_timeout);\n }\n\n uint32_t read_timeout = 10;\n std::array<std::byte, 256> read_buffer{};\n asio::serial_port port;\n};\n\nint main(int argc, char** argv) {\n asio::io_context ioc;\n\n SerialPort sp(make_strand(ioc), argc > 1 ? argv[1] : \"/dev/ttyS0\");\n sp.read_async(SerialPort::ignore_timeout);\n\n ioc.run();\n // ioc.run_for(std::chrono::seconds(1));\n}\n\nAnd testing using socat as described here: Virtual Serial Port for Linux\nsocat -d -d pty,raw,echo=0 pty,raw,echo=0\n\nLocal demo:\n\n", "I figured out the problem. \nIn my SerialPort::read_handler method I was calling this->read_async() before reading/copying the data from buffer.\nthis->read_async() is resetting the buffer.\nThe thing I don't understand is why this is haappening at random? Is this a scheduling issue (i.e OS is causing context switching)?\n" ]
[ 1, 0 ]
[]
[]
[ "asynchronous", "boost_asio", "c++", "serial_port" ]
stackoverflow_0074659635_asynchronous_boost_asio_c++_serial_port.txt
Q: Network connection setup in constructor: good or bad? I'm working on a class that handles interaction with a remote process that may or may not be available; indeed in most cases it won't be. If it's not, an object of that class has no purpose in life and needs to go away. Is it less ugly to: Handle connection setup in the constructor, throwing an exception if the process isn't there. Handle connection setup in a separate connect() method, returning an error code if the process isn't there. In option 1), the calling code will of course have to wrap its instantiation of that class and everything else that deals with it in a try() block. In option 2, it can simply check the return value from connect(), and return (destroying the object) if it failed, but it's less RAII-compliant, Relatedly, if I go with option 1), is it better to throw one of the std::exception classes, derive my own exception class therefrom, roll my own underived exception class, or just throw a string? I'd like to include some indication of the failure, which seems to rule out the first of these. Edited to clarify: The remote process is on the same machine, so it's pretty unlikely that the ::connect() call will block. A: I consider it bad to do a blocking connect() in a constructor, because the blocking nature is not something one typically expects from constructing an object. So, users of your class may be confused by this functionality. As for exceptions, I think it is generally best (but also the most work) to derive a new class from std::exception. This allows the catcher to perform an action for that specific type of exception with a catch (const myexception &e) {...} statement, and also do one thing for all exceptions with a catch (const std::exception &e) {...}. See related question: How much work should be done in a constructor? A: Regarding throwing exceptions, its perfectly fine to create your own classes. As a hypothetical user I'd prefer if they derived from std::exception, or perhaps std::runtime_error (which allows you to pass an error string to the ctor). Users who want to can catch your derived type, but the common idiom of: try { operation_that_might_throw (); } catch (std::exception& e) { cerr << "Caught exception: " << e.what() << endl; } will work for your new exception types as well as anything thrown by the C++ runtime. This is basically the Rule of Least Surprise. A: If your connection object is effectively non-functional if the connection fails then it doesn't make sense to have the object exist if all its other methods will always do nothing or throw exceptions. For this reason I would perform the connect in a constructor and fail by throwing an exception (derived from std::exception) if this method fails. However, you are right that clients of the class may need to be aware that the constructor might block or fail. For this reason I might choose to make the constructor private and use a static factory method (named constructor idiom) so that clients have to make an explicit MakeConnection call. It is still the client's responsibility to determine if not having a connection is fatal to it, or whether it can handle an offline mode. In the former case it can own a connection by value and let any connection failure propogate to its clients; in the latter it can own the object via a pointer, preferably 'smart'. In the latter case it might choose to attempt construction of the owned connection in its constructor or it might defer it until needed. E.g. 
(warning: code all completely untested) class Connection { Connection(); // Actually make the connection, may throw // ... public: static Connection MakeConnection() { return Connection(); } // ... }; Here's a class that requires a working connection. class MustHaveConnection { public: // You can't create a MustHaveConnection if `MakeConnection` fails MustHaveConnection() : _connection(Connection::MakeConnection()) { } private: Connection _connection; }; Here's a class that can work without one. class OptionalConnection { public: // You can create a OptionalConnectionif `MakeConnection` fails // 'offline' mode can be determined by whether _connection is NULL OptionalConnection() { try { _connection.reset(new Connection(Connection::MakeConnection())); } catch (const std::exception&) { // Failure *is* an option, it would be better to capture a more // specific exception if possible. } } OptionalConnection(const OptionalConnection&); OptionalConnection& operator=(const OptionalConnection&); private: std::auto_ptr<Connection> _connection; } And finally one that creates one on demand, and propogates exceptions to the caller. class OnDemandConnection { public: OnDemandConnection() { } OnDemandConnection(const OnDemandConnection&); OnDemandConnection& operator=(const OnDemandConnection&); // Propgates exceptions to caller void UseConnection() { if (_connection.get() == NULL) _connection.reset(new Connection(Connection::MakeConnection())); // do something with _connection } private: std::auto_ptr<Connection> _connection; } A: Don't connect from the constructor, a constructor that blocks is unexpected and bad API design. Write a connect method and mark your class noncopyable. If you rely on instances being connected already, make the constructor private and write a static factory method to get pre-connected instances. A: If the connection would take a long time, it is more reasonable to put the code in another method. Still, you can (and you should) use exceptions to inform the caller whether your connect() method has been successful or not, instead of returning error codes. It is also more advisable to create a new exception class derived from std::exception instead of throwing plain data or even throwing other STL exceptions. You may also derive your exception class from a more specific description of your error (for example, deriving from std::runtime_error), but this approach is less common. A: I think Option 1 is a better approach but you need to think how would you expect the consumer of the class to use this? Just the fact that they have wired it up is good enough to go ahead and connect (Option 1) or the fact they should have the option to call Connect() when they are good and ready (Option 2)? RAII also supports the DRY principle (don't repeat yourself). However with Option 1 you need to ensure you Exception handling is spot on and you don't get into race conditions. As you know, if there is an exception thrown in the constructor the destructor wont be called to clean up. Also be vary of any static functions you might have as you will need locks around those as well - leading you down a spiral path. If you haven't seen this post yet its a good read. A: I would go with the second one, since I believe that the constructor should not do any other thing than initialize the private members. Besides that, it's easier to deal with failures (such as not connecting). 
Depending on what you're exactly going to do, you could keep the object alive and call the connect method when you need it, minimizing the need of creating another object. As for the exceptions, you should create your own. This will allow the caller to take specific rollback actions when needed. A: Under the RAII mind of thought, isn't this by definition good? Acquisation is Initialization. A: Another thing that was unclear in my original post is that the client code doesn't have any interaction with this object once it's connected. The client runs in its own thread, and once the object is instantiated and connected, the client calls one method on it that runs for the duration of the parent process. Once that process ends (for whatever reason), the object disconnects and the client thread exits. If the remote process wasn't available, the thread exits immediately. So having a non-connected object lying around isn't really an issue. I found another reason not to do the connection in the constructor: it means that I either have to handle teardown in the destructor, or have a separate disconnect() call with no separate connect() call, which smells funny. The teardown is non-trivial and might block or throw, so doing it in the destructor is probably less than ideal. A: I believe there is a pattern we can use here that addresses some of the points made in other answers. The question is pretty old but this was my first google result. If the class is useless without a connection, then instantiating it conceptually appears to be half true. The object is not really ready to be used. The user needs to separately call a connect() method. This just feels like bureaucracy. Conversely, it is also true that a blocking operation is unconventional, and as other answers point out, may cause confusion. Not to mention annoyances in unit testing and threading. I believe the pattern for this that addresses our problems is: We can separate our functionality into more classes. The ready-to-go connection, the class that uses the connection and a factory. The constructor needs the connection because it can't work without it. Use a factory that sets up the connection to save the caller some work. Our factory can be instantiated as empty (which makes sense). Then we can retrieve our class using it. For example an FTPServer (not in C++ sorry) class FTPServerFactory: def get_with_environ_variables() -> FTPServer: # create your connection here e.g. with FTP login details class FTPServer: def __init__(ftp_host: FTP_Host): #logged in an ready to go There are two distinct benefits of this Testing - we can easily mock a logged-in ftp_host to return whatever we want. This is way less confusing than having to reach into the class's constructor or the connect() method. We won't need to Defining different ways of connecting using methods e.g. with env variables or user input
Network connection setup in constructor: good or bad?
I'm working on a class that handles interaction with a remote process that may or may not be available; indeed in most cases it won't be. If it's not, an object of that class has no purpose in life and needs to go away. Is it less ugly to: Handle connection setup in the constructor, throwing an exception if the process isn't there. Handle connection setup in a separate connect() method, returning an error code if the process isn't there. In option 1), the calling code will of course have to wrap its instantiation of that class and everything else that deals with it in a try() block. In option 2, it can simply check the return value from connect(), and return (destroying the object) if it failed, but it's less RAII-compliant, Relatedly, if I go with option 1), is it better to throw one of the std::exception classes, derive my own exception class therefrom, roll my own underived exception class, or just throw a string? I'd like to include some indication of the failure, which seems to rule out the first of these. Edited to clarify: The remote process is on the same machine, so it's pretty unlikely that the ::connect() call will block.
[ "I consider it bad to do a blocking connect() in a constructor, because the blocking nature is not something one typically expects from constructing an object. So, users of your class may be confused by this functionality.\nAs for exceptions, I think it is generally best (but also the most work) to derive a new class from std::exception. This allows the catcher to perform an action for that specific type of exception with a catch (const myexception &e) {...} statement, and also do one thing for all exceptions with a catch (const std::exception &e) {...}.\nSee related question: How much work should be done in a constructor?\n", "Regarding throwing exceptions, its perfectly fine to create your own classes. As a hypothetical user I'd prefer if they derived from std::exception, or perhaps std::runtime_error (which allows you to pass an error string to the ctor).\nUsers who want to can catch your derived type, but the common idiom of:\n try {\n operation_that_might_throw ();\n } catch (std::exception& e) {\n cerr << \"Caught exception: \" << e.what() << endl;\n }\n\nwill work for your new exception types as well as anything thrown by the C++ runtime. This is basically the Rule of Least Surprise.\n", "If your connection object is effectively non-functional if the connection fails then it doesn't make sense to have the object exist if all its other methods will always do nothing or throw exceptions. For this reason I would perform the connect in a constructor and fail by throwing an exception (derived from std::exception) if this method fails.\nHowever, you are right that clients of the class may need to be aware that the constructor might block or fail. For this reason I might choose to make the constructor private and use a static factory method (named constructor idiom) so that clients have to make an explicit MakeConnection call.\nIt is still the client's responsibility to determine if not having a connection is fatal to it, or whether it can handle an offline mode. In the former case it can own a connection by value and let any connection failure propogate to its clients; in the latter it can own the object via a pointer, preferably 'smart'. In the latter case it might choose to attempt construction of the owned connection in its constructor or it might defer it until needed.\nE.g. 
(warning: code all completely untested)\nclass Connection\n{\n Connection(); // Actually make the connection, may throw\n // ...\n\npublic:\n static Connection MakeConnection() { return Connection(); }\n\n // ...\n};\n\nHere's a class that requires a working connection.\nclass MustHaveConnection\n{\npublic:\n // You can't create a MustHaveConnection if `MakeConnection` fails\n MustHaveConnection()\n : _connection(Connection::MakeConnection())\n {\n }\n\nprivate:\n Connection _connection;\n};\n\nHere's a class that can work without one.\nclass OptionalConnection\n{\npublic:\n // You can create a OptionalConnectionif `MakeConnection` fails\n // 'offline' mode can be determined by whether _connection is NULL\n OptionalConnection()\n {\n try\n {\n _connection.reset(new Connection(Connection::MakeConnection()));\n }\n catch (const std::exception&)\n {\n // Failure *is* an option, it would be better to capture a more\n // specific exception if possible.\n }\n }\n\n OptionalConnection(const OptionalConnection&);\n OptionalConnection& operator=(const OptionalConnection&);\n\nprivate:\n std::auto_ptr<Connection> _connection;\n}\n\nAnd finally one that creates one on demand, and propogates exceptions to the caller.\nclass OnDemandConnection\n{\npublic:\n OnDemandConnection()\n {\n }\n\n OnDemandConnection(const OnDemandConnection&);\n OnDemandConnection& operator=(const OnDemandConnection&);\n\n // Propgates exceptions to caller\n void UseConnection()\n {\n if (_connection.get() == NULL)\n _connection.reset(new Connection(Connection::MakeConnection()));\n\n // do something with _connection\n }\n\nprivate:\n std::auto_ptr<Connection> _connection;\n}\n\n", "Don't connect from the constructor, a constructor that blocks is unexpected and bad API design.\nWrite a connect method and mark your class noncopyable. If you rely on instances being connected already, make the constructor private and write a static factory method to get pre-connected instances.\n", "If the connection would take a long time, it is more reasonable to put the code in another method. Still, you can (and you should) use exceptions to inform the caller whether your connect() method has been successful or not, instead of returning error codes.\nIt is also more advisable to create a new exception class derived from std::exception instead of throwing plain data or even throwing other STL exceptions. You may also derive your exception class from a more specific description of your error (for example, deriving from std::runtime_error), but this approach is less common.\n", "I think Option 1 is a better approach but you need to think how would you expect the consumer of the class to use this? Just the fact that they have wired it up is good enough to go ahead and connect (Option 1) or the fact they should have the option to call Connect() when they are good and ready (Option 2)? \nRAII also supports the DRY principle (don't repeat yourself). However with Option 1 you need to ensure you Exception handling is spot on and you don't get into race conditions. As you know, if there is an exception thrown in the constructor the destructor wont be called to clean up. Also be vary of any static functions you might have as you will need locks around those as well - leading you down a spiral path.\nIf you haven't seen this post yet its a good read.\n", "I would go with the second one, since I believe that the constructor should not do any other thing than initialize the private members. 
Besides that, it's easier to deal with failures (such as not connecting). Depending on what you're exactly going to do, you could keep the object alive and call the connect method when you need it, minimizing the need of creating another object.\nAs for the exceptions, you should create your own. This will allow the caller to take specific rollback actions when needed.\n", "Under the RAII mind of thought, isn't this by definition good? Acquisation is Initialization.\n", "Another thing that was unclear in my original post is that the client code doesn't have any interaction with this object once it's connected. The client runs in its own thread, and once the object is instantiated and connected, the client calls one method on it that runs for the duration of the parent process. Once that process ends (for whatever reason), the object disconnects and the client thread exits. If the remote process wasn't available, the thread exits immediately. So having a non-connected object lying around isn't really an issue.\nI found another reason not to do the connection in the constructor: it means that I either have to handle teardown in the destructor, or have a separate disconnect() call with no separate connect() call, which smells funny. The teardown is non-trivial and might block or throw, so doing it in the destructor is probably less than ideal.\n", "I believe there is a pattern we can use here that addresses some of the points made in other answers. The question is pretty old but this was my first google result.\n\nIf the class is useless without a connection, then instantiating it conceptually appears to be half true. The object is not really ready to be used.\nThe user needs to separately call a connect() method. This just feels like bureaucracy.\nConversely, it is also true that a blocking operation is unconventional, and as other answers point out, may cause confusion. Not to mention annoyances in unit testing and threading.\nI believe the pattern for this that addresses our problems is:\n\nWe can separate our functionality into more classes. The ready-to-go connection, the class that uses the connection and a factory.\nThe constructor needs the connection because it can't work without it.\nUse a factory that sets up the connection to save the caller some work.\n\nOur factory can be instantiated as empty (which makes sense). Then we can retrieve our class using it.\nFor example an FTPServer (not in C++ sorry)\nclass FTPServerFactory:\n\ndef get_with_environ_variables() -> FTPServer:\n # create your connection here e.g. with FTP login details\n\n\nclass FTPServer:\n\ndef __init__(ftp_host: FTP_Host): #logged in an ready to go\n\n\nThere are two distinct benefits of this\n\nTesting - we can easily mock a logged-in ftp_host to return whatever we want. This is way less confusing than having to reach into the class's constructor or the connect() method. We won't need to\nDefining different ways of connecting using methods e.g. with env variables or user input\n\n" ]
[ 6, 4, 2, 1, 1, 1, 0, 0, 0, 0 ]
[]
[]
[ "c++", "network_programming" ]
stackoverflow_0002143482_c++_network_programming.txt
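The factory / named-constructor approach favoured in the answers above is language-agnostic, even though the thread is about C++. A rough Python rendering of the idea, purely for illustration — the class names and the socket-based connection are assumptions, not taken from the thread:

```python
# Python sketch of the "named constructor" idea: the possibly blocking, possibly
# failing connection step is an explicit factory call, and the object only ever
# exists in a connected state. All names here are illustrative.
import socket

class Connection:
    def __init__(self, sock):
        # By convention, callers go through make_connection() instead.
        self._sock = sock

    @classmethod
    def make_connection(cls, host, port, timeout=5.0):
        # May raise OSError; the caller decides whether that is fatal
        # or whether to fall back to an "offline" mode.
        return cls(socket.create_connection((host, port), timeout=timeout))

    def close(self):
        self._sock.close()

# Usage (would raise if nothing is listening on the port):
# conn = Connection.make_connection("localhost", 12345)
```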
Q: How to transform a Sink into a Sink? I have a method that can emit its output into a given Sink<Node>. I wanted to pipe that into stdout which is a Sink<List<int>>. Supposing I have a function convert that converts Node to List<int>, how can I transform stdout into a Sink<Node>, so that it will print my Tree to the console? A: I have made this example showing how you can do it with a StreamController: import 'dart:async'; import 'dart:convert'; import 'dart:io'; class Message { String text; Message(this.text); } void main() { final controller = StreamController<Message>(); stdout.addStream(controller.stream .map((var msg) => msg.text) .transform(const Utf8Encoder())); var messageSink = controller.sink; messageSink.add(Message('Hello World')); } The StreamController in this example takes Message objects and converts them into List<int> by first using map to convert the Message to String object and then use a transformer to convert the String into a List of UTF8 bytes. A: I've filed: https://github.com/dart-lang/sdk/issues/50607 Here is how I solved this: class _MappedSink<From, To> implements Sink<From> { final To Function(From) _transform; final Sink<To> _sink; const _MappedSink(this._sink, this._transform); @override void add(From data) => _sink.add(_transform(data)); @override void close() => _sink.close(); } extension SinkMap<To> on Sink<To> { Sink<From> map<From>(To Function(From) transform) => _MappedSink(this, transform); }
How to transform a Sink<List<int>> into a Sink<Node>?
I have a method that can emit its output into a given Sink<Node>. I wanted to pipe that into stdout which is a Sink<List<int>>. Supposing I have a function convert that converts Node to List<int>, how can I transform stdout into a Sink<Node>, so that it will print my Tree to the console?
[ "I have made this example showing how you can do it with a StreamController:\nimport 'dart:async';\nimport 'dart:convert';\nimport 'dart:io';\n\nclass Message {\n String text;\n Message(this.text);\n}\n\nvoid main() {\n final controller = StreamController<Message>();\n stdout.addStream(controller.stream\n .map((var msg) => msg.text)\n .transform(const Utf8Encoder()));\n\n var messageSink = controller.sink;\n messageSink.add(Message('Hello World'));\n}\n\nThe StreamController in this example takes Message objects and converts them into List<int> by first using map to convert the Message to String object and then use a transformer to convert the String into a List of UTF8 bytes.\n", "I've filed: https://github.com/dart-lang/sdk/issues/50607\nHere is how I solved this:\nclass _MappedSink<From, To> implements Sink<From> {\n final To Function(From) _transform;\n final Sink<To> _sink;\n const _MappedSink(this._sink, this._transform);\n\n @override\n void add(From data) => _sink.add(_transform(data));\n @override\n void close() => _sink.close();\n}\n\nextension SinkMap<To> on Sink<To> {\n Sink<From> map<From>(To Function(From) transform) =>\n _MappedSink(this, transform);\n}\n\n" ]
[ 2, 0 ]
[]
[]
[ "dart" ]
stackoverflow_0059110527_dart.txt
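Both answers above amount to the same adapter: wrap the target sink in an object that converts each element before forwarding it. The pattern sketched in Python, for illustration only since the question itself is about Dart:

```python
# Adapter sketch mirroring the _MappedSink answer: convert, then forward.
class MappedSink:
    def __init__(self, inner_sink, transform):
        self._inner = inner_sink
        self._transform = transform

    def add(self, data):
        self._inner.add(self._transform(data))

    def close(self):
        self._inner.close()

# Hypothetical usage, with convert(node) -> list[int] supplied by the caller:
# node_sink = MappedSink(byte_list_sink, convert)
# node_sink.add(some_node)
```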
Q: Sympy intersection of FiniteSets with strings Define two sympy FiniteSet sets a and b with each of them containing only one string element: a = FiniteSet('red') b = FiniteSet('yellow') If I ask for the Intersection of those sets: Intersection(a,b) I was expecting to get as result an empty set {}, but I just get Intersection({red}, {yellow}). Why is that? It works well for Union: Union(a,b) = {'red', 'yellow'}. Even if I define those sets: a = FiniteSet('red', 'yellow') b = FiniteSet('red') I get the expected result: Intersection(a,b) = {'red'}. I was planing to use those set manipulations to reduce/simplify rather long symbolic representations of combinations of different sets. But with this behavior it will not work. It also works well with the built-in python sets: a = {'red'} b = {'yellow'} a.intersection(b) leads to set(). Is this a bug in sympy? A: The intersection cannot unambiguously give a result for objects which are variables. Your strings became Symbols with color names and a might equal b or it might not. If your elements were 'a+1' and 'a+2' the intersection would be an empty set because those two cannot be the same for finite values. If you intend that distinct items are distinct, then map them to integers, allow simplification to take place, and then map them back to sybols: reps = {s:i for i, s in expr.atoms(Symbol)} expr = expr.xreplace(reps).xreplace({i:s for s,i in reps.items()})
Sympy intersection of FiniteSets with strings
Define two sympy FiniteSet sets a and b with each of them containing only one string element: a = FiniteSet('red') b = FiniteSet('yellow') If I ask for the Intersection of those sets: Intersection(a,b) I was expecting to get as result an empty set {}, but I just get Intersection({red}, {yellow}). Why is that? It works well for Union: Union(a,b) = {'red', 'yellow'}. Even if I define those sets: a = FiniteSet('red', 'yellow') b = FiniteSet('red') I get the expected result: Intersection(a,b) = {'red'}. I was planing to use those set manipulations to reduce/simplify rather long symbolic representations of combinations of different sets. But with this behavior it will not work. It also works well with the built-in python sets: a = {'red'} b = {'yellow'} a.intersection(b) leads to set(). Is this a bug in sympy?
[ "The intersection cannot unambiguously give a result for objects which are variables. Your strings became Symbols with color names and a might equal b or it might not. If your elements were 'a+1' and 'a+2' the intersection would be an empty set because those two cannot be the same for finite values.\nIf you intend that distinct items are distinct, then map them to integers, allow simplification to take place, and then map them back to sybols:\nreps = {s:i for i, s in expr.atoms(Symbol)}\nexpr = expr.xreplace(reps).xreplace({i:s for s,i in reps.items()})\n\n" ]
[ 1 ]
[]
[]
[ "python", "set", "set_theory", "sympy" ]
stackoverflow_0074662655_python_set_set_theory_sympy.txt
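The round-trip snippet at the end of the answer above appears to be missing an enumerate — iterating expr.atoms(Symbol) yields the Symbols themselves, not (index, symbol) pairs. A runnable sketch of the same idea:

```python
# Sketch of the symbol <-> integer round-trip, with enumerate() added so the dict
# comprehension receives (index, symbol) pairs as intended.
from sympy import FiniteSet, Integer, Intersection, Symbol

a = FiniteSet('red')          # the string is sympified to Symbol('red')
b = FiniteSet('yellow')
expr = Intersection(a, b)     # stays unevaluated: Intersection({red}, {yellow})

reps = {s: Integer(i) for i, s in enumerate(expr.atoms(Symbol))}
simplified = expr.xreplace(reps)   # distinct integers allow the intersection to evaluate
restored = simplified.xreplace({i: s for s, i in reps.items()})
print(simplified, restored)        # expected: EmptySet EmptySet
```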
Q: Can you Run Xcode in Linux? Can you run Xcode in Linux? Mac OS X was based on BSD Unix, so is it possible? From what I have heard, there is a MonoDevelop plugin that has an iPhone simulator. A: The low-level toolchain for Xcode (the gcc compiler family, the gdb debugger, etc.) is all open source and common to Unix and Linux platforms. But the IDE--the editor, project management, indexing, navigation, build system, graphical debugger, visual data modeling, SCM system, refactoring, project snapshots, etc.--is a Mac OS X Cocoa application, and is not portable. A: Nobody suggested Vagrant yet, so here it is, Vagrant box for OSX vagrant init AndrewDryga/vagrant-box-osx --box-version 0.2.1 vagrant up # editor's notes: # - this requires virtualbox # - version 0.3.1 (2016) is down now, so version 0.2.1 (2015) # - there are notes for building an image one's self at the site and you have a MACOS virtual machine. But according to Apple's EULA, you still need to run it on MacOS hardware :D But anywhere, here's one to all of you geeks who wiped MacOS and installed Ubuntu :D Unfortunately, you can't run the editors from inside using SSH X-forwarding option. A: I really wanted to comment, not answer. But just to be precise, OSX is not based on BSD, it is an evolution of NeXTStep. The NeXTStep OS utilizes the Mach kernel developed by CMU. It was originally designed as a MicroKernel, but due to performance constraints, they eventually decided they needed to include the Unix portion of the API into the kernel itself and so a BSD-compatible "server" (originally intended to process requests for BSD-compatible kernel messages) was moved into the kernel, making it a Monolithic kernel. It may be BSD compatible in the programming API, but it is NOT BSD. The rest of the OS involved ObjectiveC (under arrangements between Stepstone and Richard Stallman of GNU/GCC) with a GUI based on a technology called "Display Postscript" ... sort of like an X Server, but with postscript commands. OS X changed Display Postscript to Display PDF, and increased the general hardware requirements 1000 fold (NeXT could run in 8-16MB, now you need GB). Due to the close marriage of GCC and Objective C and NeXT, your best bet at running XCode natively under Linux would be to do a port (if you can get ahold of the source - good luck) utilizing the GNUStep libraries. Originally designed for NextStep and then OpenStep compatibility, I've heard they are now more-or-less Cocoa compatible, but I've not played with any of it in almost 2 decades. Of course that only gets you as far as ObjC, not Swift, and I don't know if Apple is going to OpenSource it. A: You can run Xcode on Linux NATIVELY using Darling: Darling is a translation layer that lets you run macOS software on Linux Once installed you can install Xcode via command-line developer tool following this link. A: If you run VMware Player or Workstation (or maybe VirtualBox, I'm not sure if it supports Mac OS X, but may), and then Mac OS X Server (Client can't legally be virtualized). Of course, in this case you are running XCode on OS X, but your host machine could be linux. A: If you cannot shell out thousands of dollars for a decent Mac then there is an option to run OSX and XCode in the cloud: http://www.macincloud.com/ A: I think you need MonoTouch (not free!) for that plugin. And no, there is no way to run Xcode on Linux. Sorry for all the bad news. :) A: Nope, you've heard of MonoTouch which is a .NET/mono environment for iPhone development. 
But you still need a Mac and the official iPhone SDK. And the emulator is the official apple one, this acts as a separate IDE and allows you to not have to code in Objective C, rather you code in c# It's an interesting project to say the least.... EDIT: apparently, you can distribute on the app store now, early on that was a no go.... A: The easiest option to do that is running a VM with a OSX copy. A: If you really want to use Xcode on linux you could get Virtual Box and install Hackintosh on a VM. Edit: Virtual Box Guest Additions is not supported with MacOS Movaje. You will want to use VMware https://www.vmware.com/ https://hackintosh.com/ A: It was weird that no one suggested KVM. It is gonna provide you almost native performance and it is built-in Linux. Go and check it out. you will feel like u are using mac only and then install Xcode there u may even choose to directly boot into the OSX GUI instead of Linux one on startup A: If you want XCode on another OS, I suggest cloud computing. That way your app is being developed on a Mac and can be submitted to the App Store. A: Use quiling framework For more info check at https://github.com/qilingframework/qiling I think it is the best A: Maybe you can use Virtual Machine and Qiling framework. A: If you are planning to use a Mac VM on Linux, check out Docker-OSX. It provides a simple approach to use pre-built Mac VMs with Docker. To know more about the legality of running Apple software on non-Apple hardware, read this article: Is Hackintosh, OSX-KVM, or Docker-OSX legal? A: It is not possible to run Xcode on Linux. Xcode is a development environment created by Apple for building apps for macOS, iOS, watchOS, and tvOS. It is not compatible with Linux, and there is no version of Xcode available for Linux. The MonoDevelop plugin you mentioned is a third-party tool that allows developers to build cross-platform applications using the .NET framework. It does not include an iPhone simulator, but it does provide some integration with Apple's development tools. However, it is still not possible to use Xcode on Linux with this plugin. If you want to develop iOS or macOS apps on Linux, you will need to use a different set of tools. There are some open-source options available, such as the PhoneGap framework, which allows you to build mobile apps using HTML, CSS, and JavaScript. Alternatively, you could use a virtual machine to run macOS on Linux, and then install and use Xcode within the virtual machine.
Can you Run Xcode in Linux?
Can you run Xcode in Linux? Mac OS X was based on BSD Unix, so is it possible? From what I have heard, there is a MonoDevelop plugin that has an iPhone simulator.
[ "The low-level toolchain for Xcode (the gcc compiler family, the gdb debugger, etc.) is all open source and common to Unix and Linux platforms. But the IDE--the editor, project management, indexing, navigation, build system, graphical debugger, visual data modeling, SCM system, refactoring, project snapshots, etc.--is a Mac OS X Cocoa application, and is not portable.\n", "Nobody suggested Vagrant yet, so here it is, Vagrant box for OSX\nvagrant init AndrewDryga/vagrant-box-osx --box-version 0.2.1\nvagrant up\n# editor's notes:\n# - this requires virtualbox\n# - version 0.3.1 (2016) is down now, so version 0.2.1 (2015)\n# - there are notes for building an image one's self at the site\n\nand you have a MACOS virtual machine. But according to Apple's EULA, you still need to run it on MacOS hardware :D But anywhere, here's one to all of you geeks who wiped MacOS and installed Ubuntu :D\nUnfortunately, you can't run the editors from inside using SSH X-forwarding option.\n", "I really wanted to comment, not answer. But just to be precise, OSX is not based on BSD, it is an evolution of NeXTStep. The NeXTStep OS utilizes the Mach kernel developed by CMU. It was originally designed as a MicroKernel, but due to performance constraints, they eventually decided they needed to include the Unix portion of the API into the kernel itself and so a BSD-compatible \"server\" (originally intended to process requests for BSD-compatible kernel messages) was moved into the kernel, making it a Monolithic kernel. It may be BSD compatible in the programming API, but it is NOT BSD.\nThe rest of the OS involved ObjectiveC (under arrangements between Stepstone and Richard Stallman of GNU/GCC) with a GUI based on a technology called \"Display Postscript\" ... sort of like an X Server, but with postscript commands. OS X changed Display Postscript to Display PDF, and increased the general hardware requirements 1000 fold (NeXT could run in 8-16MB, now you need GB).\nDue to the close marriage of GCC and Objective C and NeXT, your best bet at running XCode natively under Linux would be to do a port (if you can get ahold of the source - good luck) utilizing the GNUStep libraries. Originally designed for NextStep and then OpenStep compatibility, I've heard they are now more-or-less Cocoa compatible, but I've not played with any of it in almost 2 decades. Of course that only gets you as far as ObjC, not Swift, and I don't know if Apple is going to OpenSource it.\n", "You can run Xcode on Linux NATIVELY using Darling:\n\nDarling is a translation layer that lets you run macOS software on Linux\n\nOnce installed you can install Xcode via command-line developer tool following this link.\n", "If you run VMware Player or Workstation (or maybe VirtualBox, I'm not sure if it supports Mac OS X, but may), and then Mac OS X Server (Client can't legally be virtualized). Of course, in this case you are running XCode on OS X, but your host machine could be linux.\n", "If you cannot shell out thousands of dollars for a decent Mac then there is an option to run OSX and XCode in the cloud:\nhttp://www.macincloud.com/\n", "I think you need MonoTouch (not free!) for that plugin.\nAnd no, there is no way to run Xcode on Linux.\nSorry for all the bad news. :)\n", "Nope, you've heard of MonoTouch which is a .NET/mono environment for iPhone development. But you still need a Mac and the official iPhone SDK. 
And the emulator is the official apple one, this acts as a separate IDE and allows you to not have to code in Objective C, rather you code in c#\nIt's an interesting project to say the least....\nEDIT: apparently, you can distribute on the app store now, early on that was a no go....\n", "The easiest option to do that is running a VM with a OSX copy.\n", "If you really want to use Xcode on linux you could get Virtual Box and install Hackintosh on a VM.\nEdit: Virtual Box Guest Additions is not supported with MacOS Movaje. You will want to use VMware\nhttps://www.vmware.com/\nhttps://hackintosh.com/\n", "It was weird that no one suggested KVM.\nIt is gonna provide you almost native performance and it is built-in Linux.\nGo and check it out.\nyou will feel like u are using mac only and then install Xcode there\nu may even choose to directly boot into the OSX GUI instead of Linux one on startup\n", "If you want XCode on another OS, I suggest cloud computing. That way your app is being developed on a Mac and can be submitted to the App Store. \n", "Use quiling framework\nFor more info check at https://github.com/qilingframework/qiling\nI think it is the best\n", "Maybe you can use Virtual Machine and Qiling framework.\n", "If you are planning to use a Mac VM on Linux, check out Docker-OSX. It provides a simple approach to use pre-built Mac VMs with Docker.\nTo know more about the legality of running Apple software on non-Apple hardware, read this article: Is Hackintosh, OSX-KVM, or Docker-OSX legal?\n", "It is not possible to run Xcode on Linux. Xcode is a development environment created by Apple for building apps for macOS, iOS, watchOS, and tvOS. It is not compatible with Linux, and there is no version of Xcode available for Linux.\nThe MonoDevelop plugin you mentioned is a third-party tool that allows developers to build cross-platform applications using the .NET framework. It does not include an iPhone simulator, but it does provide some integration with Apple's development tools. However, it is still not possible to use Xcode on Linux with this plugin.\nIf you want to develop iOS or macOS apps on Linux, you will need to use a different set of tools. There are some open-source options available, such as the PhoneGap framework, which allows you to build mobile apps using HTML, CSS, and JavaScript. Alternatively, you could use a virtual machine to run macOS on Linux, and then install and use Xcode within the virtual machine.\n" ]
[ 475, 48, 31, 19, 10, 8, 4, 3, 1, 1, 1, 0, 0, 0, 0, 0 ]
[ "OSX is based on BSD, not Linux. You cannot run Xcode on a Linux machine.\n" ]
[ -3 ]
[ "linux", "monodevelop", "xcode" ]
stackoverflow_0002406151_linux_monodevelop_xcode.txt
Q: I am having an error in the console for intents I put this in: By the way, the ping command is in another folder NOT linked here..it wouldn’t make any difference const Discord = require('discord.js'); const client = new Discord.Client(); const client = new Client({ intents: 32767 }); const prefix = '-'; const fs = require('fs'); client.commands = new Discord.Collection(); const commandFiles = fs.readdirSync('./commands/').filter(file => file.endsWith('.js')); for(const file of commandFiles) { const command = require(`./commands/${file}`); client.commands.set(command.name, command); } client.once('ready', () => { console.log('Client is ready.'); }); client.on('message', message =>{ if(!message.content.startsWith(prefix) || message.author.bot) return; const args = message.content.slice(prefix.length).split(/ +/); const command = args.shift().toLowerCase(); if (command === 'ping') { client.commands.get('ping').execute(message, args); } }); const mySecret = process.env['TOKEN'] keepAlive(); I tried many different ways and looked at multiple Stack Overflow articles, but nothing worked!
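A minimal sketch of how the setup above could look on discord.js v14 (an assumption; the numeric-intents error in the console is typical of that version). It keeps a single client declaration, replaces the raw 32767 bitfield with named intents, and listens to the v14 messageCreate event; keepAlive() and the TOKEN secret come from the question and are assumed to be defined elsewhere:

const { Client, Collection, GatewayIntentBits } = require('discord.js');
const fs = require('fs');

// Single client declaration, with named intents instead of the raw 32767 bitfield.
const client = new Client({
  intents: [
    GatewayIntentBits.Guilds,
    GatewayIntentBits.GuildMessages,
    GatewayIntentBits.MessageContent, // privileged: must also be enabled in the Discord developer portal
  ],
});

const prefix = '-';
client.commands = new Collection();
for (const file of fs.readdirSync('./commands/').filter(f => f.endsWith('.js'))) {
  const command = require(`./commands/${file}`);
  client.commands.set(command.name, command);
}

client.once('ready', () => console.log('Client is ready.'));

// In discord.js v14 the event is 'messageCreate', not 'message'.
client.on('messageCreate', (message) => {
  if (!message.content.startsWith(prefix) || message.author.bot) return;
  const args = message.content.slice(prefix.length).split(/ +/);
  const commandName = args.shift().toLowerCase();
  if (commandName === 'ping') client.commands.get('ping').execute(message, args);
});

keepAlive();                      // assumed to be defined elsewhere, as in the question
client.login(process.env.TOKEN);  // the question reads the token from process.env['TOKEN']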
I am having an error in the console for intents
I put this in: By the way, the ping command is in another folder NOT linked here..it wouldn’t make any difference const Discord = require('discord.js'); const client = new Discord.Client(); const client = new Client({ intents: 32767 }); const prefix = '-'; const fs = require('fs'); client.commands = new Discord.Collection(); const commandFiles = fs.readdirSync('./commands/').filter(file => file.endsWith('.js')); for(const file of commandFiles) { const command = require(`./commands/${file}`); client.commands.set(command.name, command); } client.once('ready', () => { console.log('Client is ready.'); }); client.on('message', message =>{ if(!message.content.startsWith(prefix) || message.author.bot) return; const args = message.content.slice(prefix.length).split(/ +/); const command = args.shift().toLowerCase(); if (command === 'ping') { client.commands.get('ping').execute(message, args); } }); const mySecret = process.env['TOKEN'] keepAlive(); I tried many different ways and looked at multiple stackoverflow articles, but nothing worked!.
[]
[]
[ "I think you can use the following code to resolve your issue.\nconst { Client, GatewayIntentBits } = require('discord.js');\n\nconst client = new Client({\n intents: [\n GatewayIntentBits.Guilds,\n GatewayIntentBits.GuildMessages,\n GatewayIntentBits.MessageContent,\n GatewayIntentBits.GuildMembers,\n ],\n});\n\n" ]
[ -2 ]
[ "discord", "discord.js", "node.js" ]
stackoverflow_0074662625_discord_discord.js_node.js.txt
Q: Right Click Movement Not working in Unreal Engine 4.24 version c++ I tried implementing an RTS in unreal engine c++ and currently I can select and deselect units but they won't move though I already have a function for that. Could someone take a look what I am doing wrong? Here's my code: void ACoba_PlayerController::SetupInputComponent() { Super::SetupInputComponent(); InputComponent->BindAction("RightMouseClick", IE_Pressed, this, &ACoba_PlayerController::MoveReleased); } void ACoba_PlayerController::MoveReleased() { if (SelectedActors.Num() > 0) { for (int32 i = 0; i < SelectedActors.Num(); i++) { FHitResult Hit; GetHitResultUnderCursor(ECC_Visibility, false, Hit); FVector MoveLocation = Hit.Location + FVector(i / 2 * 100, i % 2 * 100, 0); UAIBlueprintHelperLibrary::SimpleMoveToLocation(SelectedActors[i]->GetController(), MoveLocation); } } } Note: I've already setup the input for RightMouseClick at the input property. Could someone help me please. Thank you. A: Make sure you have set your PlayerController to receive input: ACoba_PlayerController::ACoba_PlayerController() { AutoReceiveInput = EAutoReceiveInput::Player0; /* ... */ } I just tested the right button and it worked. I'm using Unreal 4.24.2 A: I had the exact same issue, and after spending a day into it, i found out the problem is not in the code itself (mine is identical than the OP), the problem was that you need to add in your map a NavMeshBoundVolume in UE editor, this will make your map floor surface "walkable" by your actors, without this the SimpleMoveToLocation function won't do anything. I found it very annoying that there is no warning or error displayed showing that you are missing a NavMeshBoundVolume, that would save a lot of time.
Right Click Movement Not working in Unreal Engine 4.24 version c++
I tried implementing an RTS in unreal engine c++ and currently I can select and deselect units but they won't move though I already have a function for that. Could someone take a look what I am doing wrong? Here's my code: void ACoba_PlayerController::SetupInputComponent() { Super::SetupInputComponent(); InputComponent->BindAction("RightMouseClick", IE_Pressed, this, &ACoba_PlayerController::MoveReleased); } void ACoba_PlayerController::MoveReleased() { if (SelectedActors.Num() > 0) { for (int32 i = 0; i < SelectedActors.Num(); i++) { FHitResult Hit; GetHitResultUnderCursor(ECC_Visibility, false, Hit); FVector MoveLocation = Hit.Location + FVector(i / 2 * 100, i % 2 * 100, 0); UAIBlueprintHelperLibrary::SimpleMoveToLocation(SelectedActors[i]->GetController(), MoveLocation); } } } Note: I've already setup the input for RightMouseClick at the input property. Could someone help me please. Thank you.
[ "Make sure you have set your PlayerController to receive input:\nACoba_PlayerController::ACoba_PlayerController()\n{\n AutoReceiveInput = EAutoReceiveInput::Player0;\n\n /* ... */\n}\n\nI just tested the right button and it worked.\nI'm using Unreal 4.24.2\n", "I had the exact same issue, and after spending a day into it, i found out the problem is not in the code itself (mine is identical than the OP), the problem was that you need to add in your map a NavMeshBoundVolume in UE editor, this will make your map floor surface \"walkable\" by your actors, without this the SimpleMoveToLocation function won't do anything.\nI found it very annoying that there is no warning or error displayed showing that you are missing a NavMeshBoundVolume, that would save a lot of time.\n" ]
[ 0, 0 ]
[]
[]
[ "c++", "unreal_engine4" ]
stackoverflow_0060161118_c++_unreal_engine4.txt
Q: Why am I getting question marks in my terminal as a response when I call an API (Node.js) This is the response I am getting from the API �x�\�¶�� '↓Xus!y ���X[m�e5����Ea�r�↔�p�♣cZ�$�y�:_��►V��M¶♥��b`▼7�G=�ը♠q▬`)M�v�a�Q4�c˄nME☺�*8Ù�A�h►�↔6{�E◄G6]`҄�#v-A�掰�9#�S(S�B[� ��5��a▼�Z崪�5�`��e�V�→�t0�L�QFe�7 "��♥‼5�.�T�l#��e ���b)↨�1§,↨§W��&�▲→�}SAG�;�Q�♣(/3�Y▲�:gC �→↑��7�Eݞ^<�↨F‼����h�މ�e���1|▼�iD)y▼I��g�>@���→IB�♠;�����Sj→O☻��k$♠H8qrX��ʘc�r��Yd�w Gh�Y� �5]j¶�`� ĵ8!�YV ►r\�J7U!#Y8� j �Gp�>H�@�↨���T�Uk�ڛ�↑♣�mE�§�tb��ֈ�¶♠�,�▬e�X�c�s��� ]@+∟�xT[�dAU���♠�ƒtۥ�(Zl�6d�§��Ps§g�ɜ�`U�U�ͅ↕ڣQx%�►��Q���+��`,�{o(є� Zp��Km��fsb,Ԭ�fq4�*+֔ޣ5PRn8:{fY�����;c�F�`%��1◄:��n���C��ͧ��~�V���ϑ�¶↑z��∟̭V]����{∟►�c�↑↕-D�y �L���8΍,F'0��f.☻X!� kwa∟ Y��>▼"��z�Qc�uP$�♦r�h:��►Ҙʸ���@5 ∟��C/m�>��→ r��c▬W� ��jK36% ↨D��n��☺��vģ▼/��C�Qw��Nz ♣���Ďxs0vn��L�:��◄��zȽ�����_p�♣����K�l6�N�AMZݎ`>���˩���Q� ����y���X�a{����◄`l��qom�▬♥�c�▼�� p�zQ�↑�i▼�☺��о}¶ �� w���ݎ��5܂Lf?�♣�� C���‼UϘп¶z`��▲Rw�0|��ºDc▲Q�6Y]��_v!�7���@x�♦�ĉa���▼X��r �`�§7�P#vy��8��!�}xT۹ ↑↨Q�▬�h���~��☺►��M�u0V�EAZ�k� ↕)�"��ޙ]��R��m"�j)i� ѕ�◄_��$YY�[S������x�}3_���↨��������4∟�$�o�hL��4k]�S‼��Z���☺F�H☻h�j���W�▬�ʜ♥�ּj��� h� 6U�@��u@up→��#f C+A3[;�♥x]U�∟y��c��x}�O�պ��Җ§r����Q܀��p�↨��8�w=D�o��#x�CDs=��a �1w↑s�i8����5�§→‼!{ ��Y{,♦(䫾K��s4�+�TPa{��↨�j��٩�☻��\���7Y�?������eFF ���t���ªg�∟���G/�}y��`R;�� K5��2īpE ¶Ϩ�B�Ƃ�ؽE�▬�^0�ȉ�H,M=G^�d�'j��▬}N♦ϡ�2PS昚��b��j2k,�a�→6♣�t�↓��▬5↨�Xs�v��¶Ds�47��ƭ�a-�-6����↕&��&��▼�{�ss���XAG�a����*↕���aF��yM��p`-�♠�>�+u�S��6dݠ���ž:��c����u^���qv�♥�C�=6�,dOE³ V�▲�:��ޫ�J∟(w �K▲�} ‼5��!�#�����@8��Vv����ؕ$QD�0�ͭ�K+@�☺R:��3�>�l"n�c�$Q����∟�-�,7�a���M�>↕���Kq§�YXf���uR��{8� A!Q��,♠F;?��‼�&w�6 ��� ն�����P�_�o�E�������?�r♣♦er.�m4�]�ǹgc�t�↨��↕��U#�ܢ >��b#(9▼♥�ש��y�)►���☻l�n‼:fk�؆�"� This is how I am calling the API const getArtwork = async (req , res) => { await axios.get('https://api.rawg.io/api/games?token&key=3004a2').then(res => { console.log(res.data) }) }; The key in the code is wrong because I cannot post the real key here on stackoverflow. Is there a reason why I am getting these strange replies from the API and same also happened with the IGDB API (A different API for the same purpose). Am i doing something wrong? A: There are a few possible reasons why you might be seeing this in your terminal when you call an API. First, it's possible that the API is returning data in a different encoding than your terminal is expecting. For example, if the API is returning data in UTF-8 format but your terminal is expecting data in ASCII format, you might see question marks instead of the actual characters in the response. To fix this issue, you can try specifying the encoding in the Content-Type header of your request, or you can try changing the encoding of your terminal to match the encoding of the API response. Another possible reason is that the API is returning an error or invalid data in response to your request. This could be caused by a number of things, such as an incorrect API key, an invalid query parameter, or an issue with the API itself. To troubleshoot this issue, you can try checking the API documentation to make sure you're calling the API correctly, and you can also try using a tool like Postman to inspect the raw response from the API to see if there are any error messages or other clues as to what might be going wrong. Finally, it's also possible that you are simply not handling the response from the API correctly in your code. 
For example, if you are trying to log the entire response object to the console, you might see question marks instead of the actual data if the response contains binary data (such as an image or audio file) that cannot be represented as text. To fix this issue, you can try logging only the specific data that you want to see, or you can try using a tool like Postman to inspect the response and make sure it contains the data you expect. In summary, there are several possible reasons why you might be seeing question marks when calling an API, and the best way to troubleshoot the issue will depend on the specific details of your code and the API you are calling.
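One concrete thing to check, following the encoding point in the answer above (a guess, not a confirmed diagnosis): output like this often means the response body arrived compressed (gzip/deflate/brotli) and was never decoded. A small sketch, keeping the RAWG endpoint from the question and assuming the key is stored in an environment variable:

const axios = require('axios');

// Hypothetical env var; the real key should stay out of the source code.
const url = `https://api.rawg.io/api/games?key=${process.env.RAWG_KEY}`;

async function getArtwork() {
  const res = await axios.get(url, {
    // Only advertise encodings this client is known to decode; some axios
    // versions request brotli (br) but do not decompress it.
    headers: { 'Accept-Encoding': 'gzip, deflate' },
  });
  // Inspect what actually came back before dumping the body.
  console.log(res.headers['content-type'], res.headers['content-encoding']);
  console.log(res.data);
}

getArtwork().catch((err) => console.error(err.message));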
Why am I getting question marks in my terminal as a response when I call an API (Node.js)
This is the response I am getting from the API �x�\�¶�� '↓Xus!y ���X[m�e5����Ea�r�↔�p�♣cZ�$�y�:_��►V��M¶♥��b`▼7�G=�ը♠q▬`)M�v�a�Q4�c˄nME☺�*8Ù�A�h►�↔6{�E◄G6]`҄�#v-A�掰�9#�S(S�B[� ��5��a▼�Z崪�5�`��e�V�→�t0�L�QFe�7 "��♥‼5�.�T�l#��e ���b)↨�1§,↨§W��&�▲→�}SAG�;�Q�♣(/3�Y▲�:gC �→↑��7�Eݞ^<�↨F‼����h�މ�e���1|▼�iD)y▼I��g�>@���→IB�♠;�����Sj→O☻��k$♠H8qrX��ʘc�r��Yd�w Gh�Y� �5]j¶�`� ĵ8!�YV ►r\�J7U!#Y8� j �Gp�>H�@�↨���T�Uk�ڛ�↑♣�mE�§�tb��ֈ�¶♠�,�▬e�X�c�s��� ]@+∟�xT[�dAU���♠�ƒtۥ�(Zl�6d�§��Ps§g�ɜ�`U�U�ͅ↕ڣQx%�►��Q���+��`,�{o(є� Zp��Km��fsb,Ԭ�fq4�*+֔ޣ5PRn8:{fY�����;c�F�`%��1◄:��n���C��ͧ��~�V���ϑ�¶↑z��∟̭V]����{∟►�c�↑↕-D�y �L���8΍,F'0��f.☻X!� kwa∟ Y��>▼"��z�Qc�uP$�♦r�h:��►Ҙʸ���@5 ∟��C/m�>��→ r��c▬W� ��jK36% ↨D��n��☺��vģ▼/��C�Qw��Nz ♣���Ďxs0vn��L�:��◄��zȽ�����_p�♣����K�l6�N�AMZݎ`>���˩���Q� ����y���X�a{����◄`l��qom�▬♥�c�▼�� p�zQ�↑�i▼�☺��о}¶ �� w���ݎ��5܂Lf?�♣�� C���‼UϘп¶z`��▲Rw�0|��ºDc▲Q�6Y]��_v!�7���@x�♦�ĉa���▼X��r �`�§7�P#vy��8��!�}xT۹ ↑↨Q�▬�h���~��☺►��M�u0V�EAZ�k� ↕)�"��ޙ]��R��m"�j)i� ѕ�◄_��$YY�[S������x�}3_���↨��������4∟�$�o�hL��4k]�S‼��Z���☺F�H☻h�j���W�▬�ʜ♥�ּj��� h� 6U�@��u@up→��#f C+A3[;�♥x]U�∟y��c��x}�O�պ��Җ§r����Q܀��p�↨��8�w=D�o��#x�CDs=��a �1w↑s�i8����5�§→‼!{ ��Y{,♦(䫾K��s4�+�TPa{��↨�j��٩�☻��\���7Y�?������eFF ���t���ªg�∟���G/�}y��`R;�� K5��2īpE ¶Ϩ�B�Ƃ�ؽE�▬�^0�ȉ�H,M=G^�d�'j��▬}N♦ϡ�2PS昚��b��j2k,�a�→6♣�t�↓��▬5↨�Xs�v��¶Ds�47��ƭ�a-�-6����↕&��&��▼�{�ss���XAG�a����*↕���aF��yM��p`-�♠�>�+u�S��6dݠ���ž:��c����u^���qv�♥�C�=6�,dOE³ V�▲�:��ޫ�J∟(w �K▲�} ‼5��!�#�����@8��Vv����ؕ$QD�0�ͭ�K+@�☺R:��3�>�l"n�c�$Q����∟�-�,7�a���M�>↕���Kq§�YXf���uR��{8� A!Q��,♠F;?��‼�&w�6 ��� ն�����P�_�o�E�������?�r♣♦er.�m4�]�ǹgc�t�↨��↕��U#�ܢ >��b#(9▼♥�ש��y�)►���☻l�n‼:fk�؆�"� This is how I am calling the API const getArtwork = async (req , res) => { await axios.get('https://api.rawg.io/api/games?token&key=3004a2').then(res => { console.log(res.data) }) }; The key in the code is wrong because I cannot post the real key here on stackoverflow. Is there a reason why I am getting these strange replies from the API and same also happened with the IGDB API (A different API for the same purpose). Am i doing something wrong?
[ "There are a few possible reasons why you might be seeing this in your terminal when you call an API.\nFirst, it's possible that the API is returning data in a different encoding than your terminal is expecting. For example, if the API is returning data in UTF-8 format but your terminal is expecting data in ASCII format, you might see question marks instead of the actual characters in the response. To fix this issue, you can try specifying the encoding in the Content-Type header of your request, or you can try changing the encoding of your terminal to match the encoding of the API response.\nAnother possible reason is that the API is returning an error or invalid data in response to your request. This could be caused by a number of things, such as an incorrect API key, an invalid query parameter, or an issue with the API itself. To troubleshoot this issue, you can try checking the API documentation to make sure you're calling the API correctly, and you can also try using a tool like Postman to inspect the raw response from the API to see if there are any error messages or other clues as to what might be going wrong.\nFinally, it's also possible that you are simply not handling the response from the API correctly in your code. For example, if you are trying to log the entire response object to the console, you might see question marks instead of the actual data if the response contains binary data (such as an image or audio file) that cannot be represented as text. To fix this issue, you can try logging only the specific data that you want to see, or you can try using a tool like Postman to inspect the response and make sure it contains the data you expect.\nIn summary, there are several possible reasons why you might be seeing question marks when calling an API, and the best way to troubleshoot the issue will depend on the specific details of your code and the API you are calling.\n" ]
[ 0 ]
[]
[]
[ "node.js" ]
stackoverflow_0074662431_node.js.txt
Q: Automate Twitter Bot I have written a Twitter bot using Python that posts a tweet about the weather info for a specific city. I test it by running python file.py and then I check on my Twitter account that it works. But how can I execute it periodically? Where can I upload my source code? Are there any free servers that will run my file.py? A: Assuming you're running GNU/Linux and your machine is online most of the time, you can configure your own crontab to run your script periodically. Check: https://www.freebsd.org/doc/handbook/configtuning-cron.html If that is not the case, check out https://wiki.python.org/moin/FreeHosts; for your purpose, the first one from the list should do the job. (https://www.pythonanywhere.com/) A: You can host your code in a GitHub repository, then run your .py file through a GitHub Action that runs on a schedule you set up with a .yml file in the .github/workflows folder.
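Both answers can be made concrete with short sketches; the schedule, paths, and secret name below are placeholders rather than values from the question. A crontab entry (added via crontab -e) that runs the script daily at 09:00:

0 9 * * * /usr/bin/python3 /home/youruser/file.py >> /home/youruser/bot.log 2>&1

And a minimal GitHub Actions workflow, saved for example as .github/workflows/tweet.yml:

name: weather-tweet
on:
  schedule:
    - cron: "0 9 * * *"      # daily at 09:00 UTC
  workflow_dispatch:          # also allow manual runs
jobs:
  tweet:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      - uses: actions/setup-python@v4
        with:
          python-version: "3.x"
      - run: pip install -r requirements.txt   # assumes the repo has a requirements.txt
      - run: python file.py
        env:
          TWITTER_TOKEN: ${{ secrets.TWITTER_TOKEN }}   # hypothetical secret name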
Automate Twitter Bot
I have written a Twitter bot using Python that posts a tweet about the weather info for a specific city. I test it by running python file.py and then I check on my Twitter account that it works. But how can I execute it periodically? Where can I upload my source code? Are there any free servers that will run my file.py?
[ "Assuming you're running gnu/linux and your machine is online most of the time, you can configure your own crontab to run your script periodically. \ncheck: https://www.freebsd.org/doc/handbook/configtuning-cron.html\nIf that is not the case,\nCheck out https://wiki.python.org/moin/FreeHosts for your purpose first from the list should do the job. (https://www.pythonanywhere.com/)\n", "You can host your code file to a github repository, then run your .py file through a Github action which run by a schedule you set up by a .yml file at .github/workflows folder.\n" ]
[ 0, 0 ]
[]
[]
[ "automation", "bots", "twitter" ]
stackoverflow_0028485312_automation_bots_twitter.txt
Q: How to get the content of PDF form text fields using pdfbox? I'm using this to get the text of a PDF file using org.apache.pdfbox File f = new File(fileName); if (!f.isFile()) { System.out.println("File " + fileName + " does not exist."); return null; } try { parser = new PDFParser(new FileInputStream(f)); } catch (Exception e) { System.out.println("Unable to open PDF Parser."); return null; } try { parser.parse(); cosDoc = parser.getDocument(); pdfStripper = new PDFTextStripper(); pdDoc = new PDDocument(cosDoc); parsedText = pdfStripper.getText(pdDoc); } catch (Exception e) { e.printStackTrace(); } It works great for the PDFs I've used it on so far. Now I have a PDF form that has editable text fields in it. My code does not return the text inside the fields. I would like to get that text. Is there a way to get it using PDFBox? A: This is how you get key/value for AcroForms: (This particular program prints it to the console.) package pdf_form_filler; import org.apache.pdfbox.pdmodel.PDDocument; import org.apache.pdfbox.pdmodel.PDDocumentCatalog; import org.apache.pdfbox.pdmodel.interactive.form.*; import java.io.File; import java.util.*; public class pdf_form_filler { public static void listFields(PDDocument doc) throws Exception { PDDocumentCatalog catalog = doc.getDocumentCatalog(); PDAcroForm form = catalog.getAcroForm(); List<PDFieldTreeNode> fields = form.getFields(); for(PDFieldTreeNode field: fields) { Object value = field.getValue(); String name = field.getFullyQualifiedName(); System.out.print(name); System.out.print(" = "); System.out.print(value); System.out.println(); } } public static void main(String[] args) throws Exception { File file = new File("test.pdf"); PDDocument doc = PDDocument.load(file); listFields(doc); } } A: PDFieldTreeNode doesn't seem to be supported anymore. Try PDField A: For those trying to use this same method nowadays. public static void listFields(PDDocument doc) throws Exception { PDDocumentCatalog catalog = doc.getDocumentCatalog(); PDAcroForm form = catalog.getAcroForm(); List<PDField> fields = form.getFields(); for(PDField field: fields) { Object value = field.getValueAsString(); String name = field.getFullyQualifiedName(); System.out.print(name); System.out.print(" = "); System.out.print(value); System.out.println(); } }
How to get the content of PDF form text fields using pdfbox?
I'm using this to get the text of a PDF file using org.apache.pdfbox File f = new File(fileName); if (!f.isFile()) { System.out.println("File " + fileName + " does not exist."); return null; } try { parser = new PDFParser(new FileInputStream(f)); } catch (Exception e) { System.out.println("Unable to open PDF Parser."); return null; } try { parser.parse(); cosDoc = parser.getDocument(); pdfStripper = new PDFTextStripper(); pdDoc = new PDDocument(cosDoc); parsedText = pdfStripper.getText(pdDoc); } catch (Exception e) { e.printStackTrace(); } It works great for the PDFs I've used it on so far. Now I have a PDF form that has editable text fields in it. My code does not return the text inside the fields. I would like to get that text. Is there a way to get it using PDFBox?
[ "This is how you get key/value for AcroForms: (This particular program prints it to the console.)\npackage pdf_form_filler;\n\nimport org.apache.pdfbox.pdmodel.PDDocument;\nimport org.apache.pdfbox.pdmodel.PDDocumentCatalog;\nimport org.apache.pdfbox.pdmodel.interactive.form.*;\nimport java.io.File;\nimport java.util.*;\n\npublic class pdf_form_filler {\n\n public static void listFields(PDDocument doc) throws Exception {\n PDDocumentCatalog catalog = doc.getDocumentCatalog();\n PDAcroForm form = catalog.getAcroForm();\n List<PDFieldTreeNode> fields = form.getFields();\n\n for(PDFieldTreeNode field: fields) {\n Object value = field.getValue();\n String name = field.getFullyQualifiedName();\n System.out.print(name);\n System.out.print(\" = \");\n System.out.print(value);\n System.out.println();\n }\n }\n\n public static void main(String[] args) throws Exception {\n File file = new File(\"test.pdf\");\n PDDocument doc = PDDocument.load(file);\n listFields(doc);\n }\n\n}\n\n", "PDFieldTreeNode doesn't seem to be supported anymore. Try PDField\n", "For those trying to use this same method nowadays.\npublic static void listFields(PDDocument doc) throws Exception {\n PDDocumentCatalog catalog = doc.getDocumentCatalog();\n PDAcroForm form = catalog.getAcroForm();\n List<PDField> fields = form.getFields();\n\n for(PDField field: fields) {\n Object value = field.getValueAsString();\n String name = field.getFullyQualifiedName();\n System.out.print(name);\n System.out.print(\" = \");\n System.out.print(value);\n System.out.println();\n }\n}\n\n" ]
[ 8, 1, 0 ]
[]
[]
[ "java", "pdf", "pdfbox" ]
stackoverflow_0027282537_java_pdf_pdfbox.txt
Q: Django and adding a static image Good evening, I've just completed this tutorial: https://docs.djangoproject.com/en/4.1/intro/tutorial01/ and I need to add a new directory to display a dataset (unrelated to the polls app) I've set up my new directory as I did the first steps in the tutorial. My steps: ...\> py manage.py startapp newendpoint newendpoint/ __init__.py admin.py apps.py migrations/ __init__.py models.py tests.py urls.py views.py path('newendpoint/', include('newendpoint.urls')) **Once this is setup I've tried these tutorials: ** https://youtu.be/u1FR1nZ6Ng4 I've tried this tutorial and had no luck https://adiramadhan17.medium.com/django-load-image-from-static-directory-27f002b1bdf1 I've also tried this one My server goes down or nothing displays. I could really use some help getting this figured out, before I tried the static image I was trying to add a csv via SQLite3 with no luck either. A: Step 1: Install pillow $ pip install pillow Step 2: Add the model for the image in your apps models.py class Imagemodel(models.Model): # ..... pic = models.ImageField(upload_to='images/', null=True) # U can change to `FileField` for files Step 3: Make migrations and migrate: $ py manage.py makemigrations && migrate Step 4: open settings.py and add the following code. This code tells Django where to store the images. import os # at the top # Other settings .. MEDIA_URL = '/media/' MEDIA_ROOT = os.path.join(BASE_DIR , 'media') Step 5: In your project directory level, create the media folder: $ mkdir media Step 6: Open the project level urls.py and add the code below to add our media folder to the static files. # other imports from . import settings from django.contrib.staticfiles.urls import static from django.contrib.staticfiles.urls import staticfiles_urlpatterns # URL patterns urlpatterns +=staticfiles_urlpatterns() urlpatterns +=static(settings.MEDIA_URL, document_root=settings.MEDIA_ROOT) Step 7: In your app directory level (newendpoint), add a forms.py file and add the code below: from django import forms from .models import * class PicForm(forms.ModelForm): class Meta: model = Imagemodel fields = ['pic'] Step 8: In your app (newendpoint), create a folder called templates and add a file called pic.html inside. In pic.html, add the code below: <!DOCTYPE html> <html lang="en"> <head> <meta charset="UTF-8"> <title>image</title> </head> <body> <form method = "post" enctype="multipart/form-data"> {% csrf_token %} {{ form.as_p }} <button type="submit">Upload</button> </form> </body> </html> Step 9: In your app's views.py add the code below: from django.http import HttpResponse from django.shortcuts import render, redirect from .forms import * # Create your views here. def pic_view(request): if request.method == 'POST': form = PicForm(request.POST, request.FILES) if form.is_valid(): form.save() return redirect('success') else: form = PicForm() return render(request, 'pic.html', {'form': form}) def success(request): return HttpResponse('successfully uploaded') Step 10: In your app's urls.py add the code below: # .. other imports from django.urls import path from .views import * urlpatterns = [ path('image_upload', pic_view, name='image_upload'), path('success', success, name='success'), ] Step 11: Run the server: $ python3 manage.py runserver Upload the image through: http://127.0.0.1:8000/image_upload
Django and adding a static image
Good evening, I've just completed this tutorial: https://docs.djangoproject.com/en/4.1/intro/tutorial01/ and I need to add a new directory to display a dataset (unrelated to the polls app) I've set up my new directory as I did the first steps in the tutorial. My steps: ...\> py manage.py startapp newendpoint newendpoint/ __init__.py admin.py apps.py migrations/ __init__.py models.py tests.py urls.py views.py path('newendpoint/', include('newendpoint.urls')) **Once this is setup I've tried these tutorials: ** https://youtu.be/u1FR1nZ6Ng4 I've tried this tutorial and had no luck https://adiramadhan17.medium.com/django-load-image-from-static-directory-27f002b1bdf1 I've also tried this one My server goes down or nothing displays. I could really use some help getting this figured out, before I tried the static image I was trying to add a csv via SQLite3 with no luck either.
[ "Step 1:\n\nInstall pillow\n\n$ pip install pillow\n\nStep 2:\nAdd the model for the image in your apps models.py\n\nclass Imagemodel(models.Model):\n # .....\n pic = models.ImageField(upload_to='images/', null=True) # U can change to `FileField` for files\n\nStep 3:\nMake migrations and migrate:\n$ py manage.py makemigrations && migrate\n\nStep 4:\nopen settings.py and add the following code. This code tells Django where to store the images.\nimport os # at the top\n# Other settings ..\nMEDIA_URL = '/media/'\nMEDIA_ROOT = os.path.join(BASE_DIR , 'media')\n\n\nStep 5:\nIn your project directory level, create the media folder:\n$ mkdir media \n\nStep 6:\nOpen the project level urls.py and add the code below to add our media folder to the static files.\n# other imports\nfrom . import settings\nfrom django.contrib.staticfiles.urls import static\nfrom django.contrib.staticfiles.urls import staticfiles_urlpatterns\n\n# URL patterns\n\nurlpatterns +=staticfiles_urlpatterns()\nurlpatterns +=static(settings.MEDIA_URL, document_root=settings.MEDIA_ROOT)\n\nStep 7:\nIn your app directory level (newendpoint), add a forms.py file and add the code below:\nfrom django import forms\nfrom .models import *\n \n \nclass PicForm(forms.ModelForm):\n \n class Meta:\n model = Imagemodel\n fields = ['pic']\n\nStep 8:\nIn your app (newendpoint), create a folder called templates and add a file called pic.html inside. In pic.html, add the code below:\n<!DOCTYPE html>\n<html lang=\"en\">\n<head>\n <meta charset=\"UTF-8\">\n <title>image</title>\n</head>\n<body>\n <form method = \"post\" enctype=\"multipart/form-data\">\n {% csrf_token %}\n {{ form.as_p }}\n <button type=\"submit\">Upload</button>\n </form>\n</body>\n</html>\n\nStep 9:\nIn your app's views.py add the code below:\nfrom django.http import HttpResponse\nfrom django.shortcuts import render, redirect\nfrom .forms import *\n \n# Create your views here.\n \n \ndef pic_view(request):\n \n if request.method == 'POST':\n form = PicForm(request.POST, request.FILES)\n \n if form.is_valid():\n form.save()\n return redirect('success')\n else:\n form = PicForm()\n return render(request, 'pic.html', {'form': form})\n \n \ndef success(request):\n return HttpResponse('successfully uploaded')\n\nStep 10:\nIn your app's urls.py add the code below:\n# .. other imports\nfrom django.urls import path\nfrom .views import *\n\n\nurlpatterns = [\n path('image_upload', pic_view, name='image_upload'),\n path('success', success, name='success'),\n]\n\nStep 11:\nRun the server:\n$ python3 manage.py runserver \n\nUpload the image through:\nhttp://127.0.0.1:8000/image_upload\n" ]
[ 1 ]
[]
[]
[ "django", "python" ]
stackoverflow_0074662462_django_python.txt
Q: Outlook addin - js or api generate email file I have an outlook addin that I've built using Yeoman. The addin communicates with a server API on my server to combine data from an email with additional data from a database that a user has saved against an email address. This is all working great. Next I want to store a copy of the email server side, as a file on disk, .msg preferred but I'll take a .eml if thats the only option. I have 2 options but don't know if either are possible. Either the addin generates the .msg file and posts it to the server API OR the server side API generates the .msg file directly. I have got the server side using the Outlook v2 API and able to pull back the email information when the client passes it the token, id etc. If it could just generate/download a .msg file server side this would be ideal. As a side note, many of the Microsoft API pages point out the deprecation of the Outlook API in favor of the Graph API, however there are inconsistent links between the pages and it get confusing. I have discovered the token from getCallbackTokenAsync only works with the Outlook API and not Graph, but I cant find out a way to generate a graph compatible token. All the example code from MS uses Office.context.mailbox.restUrl which still gives the Outlook API url and not Graph! So I guess I'm trying to find out if it's even possible to get/generate a .msg or .eml file either client side using outlook.js or server side using one of the api's. Thank you. I can get message data both client and server side but cannot get a physical email file. A: The Office JavaScript API (OfficeJS) doesn't provide anything for saving messages as msg files (or getting streams). The best what you could do is to use Graph API where you could get the EML file, see Get MIME content of a message for more information. The server-side code may use the OAuth 2.0 On-Behalf-Of flow (OBO) to request a new access token with permissions to Microsoft Graph. Read more about that in the Authorize to Microsoft Graph with SSO article. The on-behalf-of (OBO) flow describes the scenario of a web API using an identity other than its own to call another web API. Referred to as delegation in OAuth, the intent is to pass a user's identity and permissions through the request chain. For the middle-tier service to make authenticated requests to the downstream service, it needs to secure an access token from the Microsoft identity platform. It only uses delegated scopes and not application roles. Roles remain attached to the principal (the user) and never to the application operating on the user's behalf. This occurs to prevent the user gaining permission to resources they shouldn't have access to. See Microsoft identity platform and OAuth 2.0 On-Behalf-Of flow for more information. A: Eugene's answer is good. If ultimately you need to get that message to your backend service, using Graph as Eugune described would be the recommended approach. If for whatever reason you are still looking for a capability to access it on client using Office.js, it is not a part of the product. We track Outlook add-in feature requests on our Tech Community Page. Please submit your request there and choose the appropriate label(s). Feature requests on Tech Community are considered, when we go through our planning process. Note there is already a couple of similar ideas there, if you search for "eml" keyword, that you may want to upvote.
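To make the Graph route above slightly more concrete: once the server has exchanged the add-in's token for a Graph token via the OBO flow, the MIME (EML) content of a message is available from the /messages/{id}/$value endpoint. The sketch below is only an outline; the node-fetch dependency, variable names and error handling are placeholder choices, and the EWS item id coming from the add-in still has to be converted to a REST/Graph id first (Office.context.mailbox.convertToRestId is one way to do that on the client).

const fs = require('fs');
const fetch = require('node-fetch'); // any HTTP client works; this is just a placeholder choice

// graphToken: obtained server-side via the OBO flow described above
// restId: the Graph/REST id of the message (converted from the EWS id by the add-in)
async function saveMessageAsEml(graphToken, restId) {
  const res = await fetch(
    `https://graph.microsoft.com/v1.0/me/messages/${restId}/$value`,
    { headers: { Authorization: `Bearer ${graphToken}` } }
  );
  if (!res.ok) throw new Error(`Graph returned ${res.status}`);
  const mime = await res.text();            // raw MIME content of the message
  fs.writeFileSync(`${restId}.eml`, mime);  // saved as .eml; Graph does not produce .msg files
}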
Outlook addin - js or api generate email file
I have an outlook addin that I've built using Yeoman. The addin communicates with a server API on my server to combine data from an email with additional data from a database that a user has saved against an email address. This is all working great. Next I want to store a copy of the email server side, as a file on disk, .msg preferred but I'll take a .eml if thats the only option. I have 2 options but don't know if either are possible. Either the addin generates the .msg file and posts it to the server API OR the server side API generates the .msg file directly. I have got the server side using the Outlook v2 API and able to pull back the email information when the client passes it the token, id etc. If it could just generate/download a .msg file server side this would be ideal. As a side note, many of the Microsoft API pages point out the deprecation of the Outlook API in favor of the Graph API, however there are inconsistent links between the pages and it get confusing. I have discovered the token from getCallbackTokenAsync only works with the Outlook API and not Graph, but I cant find out a way to generate a graph compatible token. All the example code from MS uses Office.context.mailbox.restUrl which still gives the Outlook API url and not Graph! So I guess I'm trying to find out if it's even possible to get/generate a .msg or .eml file either client side using outlook.js or server side using one of the api's. Thank you. I can get message data both client and server side but cannot get a physical email file.
[ "The Office JavaScript API (OfficeJS) doesn't provide anything for saving messages as msg files (or getting streams). The best what you could do is to use Graph API where you could get the EML file, see Get MIME content of a message for more information.\nThe server-side code may use the OAuth 2.0 On-Behalf-Of flow (OBO) to request a new access token with permissions to Microsoft Graph. Read more about that in the Authorize to Microsoft Graph with SSO article.\nThe on-behalf-of (OBO) flow describes the scenario of a web API using an identity other than its own to call another web API. Referred to as delegation in OAuth, the intent is to pass a user's identity and permissions through the request chain.\nFor the middle-tier service to make authenticated requests to the downstream service, it needs to secure an access token from the Microsoft identity platform. It only uses delegated scopes and not application roles. Roles remain attached to the principal (the user) and never to the application operating on the user's behalf. This occurs to prevent the user gaining permission to resources they shouldn't have access to. See Microsoft identity platform and OAuth 2.0 On-Behalf-Of flow for more information.\n", "Eugene's answer is good. If ultimately you need to get that message to your backend service, using Graph as Eugune described would be the recommended approach. If for whatever reason you are still looking for a capability to access it on client using Office.js, it is not a part of the product. We track Outlook add-in feature requests on our Tech Community Page. Please submit your request there and choose the appropriate label(s). Feature requests on Tech Community are considered, when we go through our planning process. Note there is already a couple of similar ideas there, if you search for \"eml\" keyword, that you may want to upvote.\n" ]
[ 1, 0 ]
[]
[]
[ "office_addins", "office_js", "outlook", "outlook_addin", "outlook_web_addins" ]
stackoverflow_0074644855_office_addins_office_js_outlook_outlook_addin_outlook_web_addins.txt
Q: Cannot remove or access a directory even as root I am using WSL Ubuntu 20.04. I am logged in as root. When I list the directory, I already see a permission problem. I am not able to rm or cd into the directory even as root. I also cannot access the directory using the user that created the folder, although listing it does not raise a permission problem. I am sure I need to get rid of this directory. What can I do in this regard? The directory was actually mounted using sshfs with default_permissions: sshfs -o default_permissions Thanks in advance.
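Since the directory is an sshfs mount, a few shell commands that are often useful in this situation (the mount point path below is a placeholder, not a value from the question):

# Confirm it really is a fuse.sshfs mount and note the exact mount point
mount | grep sshfs

# Unmount it (pass the local mount point, not the word "sshfs")
fusermount -u /path/to/mountpoint
# or, if that fails because the mount is busy or the connection is gone:
sudo umount -l /path/to/mountpoint

# When remounting, mapping ownership to the local user tends to avoid the
# permission mismatch seen with -o default_permissions:
sshfs -o idmap=user,uid=$(id -u),gid=$(id -g) user@host:/remote/dir /path/to/mountpoint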
Cannot remove or access a directory even as root
I am using WSL Ubuntu 20.04. I am logged in as root. When I list the directory, I already see a permission problem. I am not able to rm or cd into the directory even as root. I also cannot access the directory using the user that created the folder, although listing it does not raise a permission problem. I am sure I need to get rid of this directory. What can I do in this regard? The directory was actually mounted using sshfs with default_permissions: sshfs -o default_permissions Thanks in advance.
[]
[]
[ "I need to unmount it with fusermount -u sshfs. Then, I can have full access to it.\n" ]
[ -1 ]
[ "linux", "permission_denied", "sshfs", "user_permissions", "windows_subsystem_for_linux" ]
stackoverflow_0074662708_linux_permission_denied_sshfs_user_permissions_windows_subsystem_for_linux.txt
Q: Add Language in Xcode 14.1 Simulator problem I updated my Xcode to 14.1. but when I want to add a language in Setting, it stays on this page and I can't do any thing. All I can do is Erase All content and Setting to return to normal state. It works on my MacBook Air M1, but no on my MacBook Pro 2019 Intel i7 A: It works on M1/M2 but it doesn't on Intel. You can use the command line tool simctl as a workaround. To set the preferred language to e.g. "German" use the following command: xcrun simctl spawn <UUID> defaults write "Apple Global Domain" AppleLanguages -array de To change the region use: xcrun simctl spawn <UUID> defaults write "Apple Global Domain" AppleLocale -string 'de_DE' Replace <UUID> with the UUID of the device you want to change. To get the currently booted device(s) use: xcrun simctl list 'devices' 'booted' After you have changed language and/or region you need to shutdown and reboot the device like this. xcrun simctl shutdown <UUID> followed by xcrun simctl boot <UUID> If you only want to change the currently booted device(s) you can just use 'booted' instead of the UUID. Like so: xcrun simctl shutdown 'booted' So, to change the preferred language of the currently booted device(s) to English, just use: xcrun simctl spawn 'booted' defaults write "Apple Global Domain" AppleLanguages -array en To change language and/or region the desired device(s) must be booted. A: Also on Mac with intel you can download 15,5 iOS simulator Its changing language without problem
Add Language in Xcode 14.1 Simulator problem
I updated my Xcode to 14.1, but when I want to add a language in Settings, it stays on this page and I can't do anything. All I can do is Erase All Content and Settings to return to a normal state. It works on my MacBook Air M1, but not on my MacBook Pro 2019 (Intel i7).
[ "It works on M1/M2 but it doesn't on Intel. You can use the command line tool simctl as a workaround.\nTo set the preferred language to e.g. \"German\" use the following command:\nxcrun simctl spawn <UUID> defaults write \"Apple Global Domain\" AppleLanguages -array de\nTo change the region use:\nxcrun simctl spawn <UUID> defaults write \"Apple Global Domain\" AppleLocale -string 'de_DE'\nReplace <UUID> with the UUID of the device you want to change. To get the currently booted device(s) use:\nxcrun simctl list 'devices' 'booted'\nAfter you have changed language and/or region you need to shutdown and reboot the device like this.\nxcrun simctl shutdown <UUID> followed by xcrun simctl boot <UUID>\nIf you only want to change the currently booted device(s) you can just use 'booted' instead of the UUID. Like so:\nxcrun simctl shutdown 'booted'\nSo, to change the preferred language of the currently booted device(s) to English, just use:\nxcrun simctl spawn 'booted' defaults write \"Apple Global Domain\" AppleLanguages -array en\nTo change language and/or region the desired device(s) must be booted.\n", "Also on Mac with intel you can download 15,5 iOS simulator\nIts changing language without problem\n" ]
[ 1, 0 ]
[]
[]
[ "ios", "xcode", "xcode14" ]
stackoverflow_0074299410_ios_xcode_xcode14.txt
Q: useSelector does not update the value after dispatch I do not understand, the value is dispatch but useSelector does not update, it's only showing an empty value. In the above image, the cart contains the value. see this above image, console Array does not contain any values. Code: const dispatch = useDispatch(); const selectorCart = useSelector((state) => state.cart); const selectorLogin = useSelector((state) => state.login); function handleAddItemInCart(product) { let isProductAllReadyExit = true; for(let item of selectorCart) { if (product.id === item.id && product.title === item.title) { isProductAllReadyExit = false; break; } } if (isProductAllReadyExit) { dispatch(addItemInCart(product)); console.log("2. Selector Cart Value : ", selectorCart); handleAddCartItemSave(); } cartslice import { createSlice } from "@reduxjs/toolkit"; const cartSlice = createSlice({ name: "cart", initialState: [], reducers : { addItemInCart : (state, action) => { state.push(action.payload); }, removeItemInCart : (state, action) => { return state.product.filter(product => product.id !== action.payload && product.title !== action.payload.title); }, }, }); export const {addItemInCart, removeItemInCart} = cartSlice.actions; export default cartSlice.reducer; What is this mistake in the above code? A: It looks like the issue is that your console.log is being called before the dispatch(addItemInCart(product)) call has had a chance to update the state. This means that selectorCart will always have its initial value, an empty array, when console.log is called. One way to fix this would be to move the console.log call to after the dispatch call, like this: function handleAddItemInCart(product) { let isProductAllReadyExit = true; for(let item of selectorCart) { if (product.id === item.id && product.title === item.title) { isProductAllReadyExit = false; break; } } if (isProductAllReadyExit) { dispatch(addItemInCart(product)); // Move the console.log call here, after the dispatch call console.log("2. Selector Cart Value : ", selectorCart); handleAddCartItemSave(); } Alternatively, you could use the useEffect hook to log the updated value of selectorCart after it has been updated by the dispatch call, like this: const dispatch = useDispatch(); const selectorCart = useSelector((state) => state.cart); const selectorLogin = useSelector((state) => state.login); // Use the useEffect hook to log the updated value of selectorCart useEffect(() => { console.log("2. Selector Cart Value : ", selectorCart); }, [selectorCart]); // Only re-run the effect when selectorCart changes function handleAddItemInCart(product) { let isProductAllReadyExit = true; for(let item of selectorCart) { if (product.id === item.id && product.title === item.title) { isProductAllReadyExit = false; break; } } if (isProductAllReadyExit) { dispatch(addItemInCart(product)); handleAddCartItemSave(); }
useSelector does not update the value after dispatch
I do not understand, the value is dispatch but useSelector does not update, it's only showing an empty value. In the above image, the cart contains the value. see this above image, console Array does not contain any values. Code: const dispatch = useDispatch(); const selectorCart = useSelector((state) => state.cart); const selectorLogin = useSelector((state) => state.login); function handleAddItemInCart(product) { let isProductAllReadyExit = true; for(let item of selectorCart) { if (product.id === item.id && product.title === item.title) { isProductAllReadyExit = false; break; } } if (isProductAllReadyExit) { dispatch(addItemInCart(product)); console.log("2. Selector Cart Value : ", selectorCart); handleAddCartItemSave(); } cartslice import { createSlice } from "@reduxjs/toolkit"; const cartSlice = createSlice({ name: "cart", initialState: [], reducers : { addItemInCart : (state, action) => { state.push(action.payload); }, removeItemInCart : (state, action) => { return state.product.filter(product => product.id !== action.payload && product.title !== action.payload.title); }, }, }); export const {addItemInCart, removeItemInCart} = cartSlice.actions; export default cartSlice.reducer; What is this mistake in the above code?
[ "It looks like the issue is that your console.log is being called before the dispatch(addItemInCart(product)) call has had a chance to update the state.\nThis means that selectorCart will always have its initial value, an empty array, when console.log is called.\nOne way to fix this would be to move the console.log call to after the dispatch call, like this:\nfunction handleAddItemInCart(product) {\n let isProductAllReadyExit = true;\n for(let item of selectorCart) {\n if (product.id === item.id && product.title === item.title) {\n isProductAllReadyExit = false;\n break;\n }\n }\n\n if (isProductAllReadyExit) {\n dispatch(addItemInCart(product));\n // Move the console.log call here, after the dispatch call\n console.log(\"2. Selector Cart Value : \", selectorCart);\n handleAddCartItemSave();\n }\n\nAlternatively, you could use the useEffect hook to log the updated value of selectorCart after it has been updated by the dispatch call, like this:\nconst dispatch = useDispatch();\nconst selectorCart = useSelector((state) => state.cart);\nconst selectorLogin = useSelector((state) => state.login);\n\n// Use the useEffect hook to log the updated value of selectorCart\nuseEffect(() => {\n console.log(\"2. Selector Cart Value : \", selectorCart);\n}, [selectorCart]); // Only re-run the effect when selectorCart changes\n\nfunction handleAddItemInCart(product) {\n let isProductAllReadyExit = true;\n for(let item of selectorCart) {\n if (product.id === item.id && product.title === item.title) {\n isProductAllReadyExit = false;\n break;\n }\n }\n\n if (isProductAllReadyExit) {\n dispatch(addItemInCart(product));\n handleAddCartItemSave();\n }\n\n" ]
[ 0 ]
[]
[]
[ "react_redux", "reactjs", "redux_toolkit", "useselector" ]
stackoverflow_0073655167_react_redux_reactjs_redux_toolkit_useselector.txt
Q: HBase: major compaction config does not take effect I have set the config: hbase.offpeak.end.hour: 22, hbase.offpeak.start.hour: 18, hbase.hregion.majorcompaction: 86400000, but HBase still does major compaction at random times, like 9:00, 13:55 and so on. Can you tell me how to configure HBase to do major compaction during off-peak time?
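For reference, the settings described in the question would normally be written in hbase-site.xml as below (values copied from the question; this only shows the intended form of the configuration). Note also that the period set by hbase.hregion.majorcompaction is spread out by hbase.hregion.majorcompaction.jitter (0.5 by default), which is one reason major compactions can appear to start at seemingly random times.

<property>
  <name>hbase.offpeak.start.hour</name>
  <value>18</value>
</property>
<property>
  <name>hbase.offpeak.end.hour</name>
  <value>22</value>
</property>
<property>
  <name>hbase.hregion.majorcompaction</name>
  <value>86400000</value> <!-- 24 hours, in milliseconds -->
</property>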
HBase: major compaction config does not take effect
I have set the config: hbase.offpeak.end.hour: 22, hbase.offpeak.start.hour: 18, hbase.hregion.majorcompaction: 86400000, but HBase still does major compaction at random times, like 9:00, 13:55 and so on. Can you tell me how to configure HBase to do major compaction during off-peak time?
[ "you can check this https://issues.apache.org/jira/browse/HBASE-8329,\nI think the hbase.offpeak.start.hour will not change the major compaction time.\n" ]
[ 0 ]
[]
[]
[ "data_compaction", "hbase" ]
stackoverflow_0070121227_data_compaction_hbase.txt
Q: Using SCSS `:export` causes webpack error: "Error: Module parse failed: Unexpected token" I want to use variables defined in SCSS in TypeScript modules in a Angular app. An quite elegant way seems to use SCSS :export, see this question or this one or this one. But when I use :export in one of my SCSS files, I get an webpack error: ./src/styles/_export.scss:16:0 - Error: Module parse failed: Unexpected token (16:0) File was processed with these loaders: * ./node_modules/.pnpm/[email protected]/node_modules/resolve-url-loader/index.js * ./node_modules/.pnpm/[email protected][email protected][email protected]/node_modules/sass-loader/dist/cjs.js You may need an additional loader to handle the result of these loaders. | | > :export { | dialogWidth: 600px; | sidenavWidthExpanded: 280px; Error: src/app/modules/help/services/help.service.ts:12:29 - error TS2307: Cannot find module '@styles/_export.scss' or its corresponding type declarations. Any idea what is missing? Is there another loader required as the error message suggests? Is anything outdated? In fact, I'm not quite sure which tool would process the :export as it does not seem to be a SASS feature, but rather ICSS as pointed out by PaulCo. Looking at my pnpm-lock.yaml it seems that 3 different versions of the css-loader are used (and all depend on some version of icss-utils), no idea which one of those is in fact used to build: /css-loader/[email protected]: resolution: {integrity: sha512-M5lSukoWi1If8dhQAUCvj4H8vUt3vOnwbQBH9DdTm/s4Ym2B/3dPMtYZeJmq7Q3S3Pa+I94DcZ7pc9bP14cWIQ==} engines: {node: '>= 8.9.0'} peerDependencies: webpack: ^4.0.0 || ^5.0.0 dependencies: camelcase: 5.3.1 cssesc: 3.0.0 icss-utils: 4.1.1 loader-utils: 1.4.0 normalize-path: 3.0.0 postcss: 7.0.39 postcss-modules-extract-imports: 2.0.0 postcss-modules-local-by-default: 3.0.3 postcss-modules-scope: 2.2.0 postcss-modules-values: 3.0.0 postcss-value-parser: 4.2.0 schema-utils: 2.7.1 semver: 6.3.0 webpack: 4.46.0 /css-loader/[email protected]: resolution: {integrity: sha512-Q7mOvpBNBG7YrVGMxRxcBJZFL75o+cH2abNASdibkj/fffYD8qWbInZrD0S9ccI6vZclF3DsHE7njGlLtaHbhg==} engines: {node: '>= 10.13.0'} peerDependencies: webpack: ^4.27.0 || ^5.0.0 dependencies: icss-utils: [email protected] loader-utils: 2.0.3 postcss: 8.4.18 postcss-modules-extract-imports: [email protected] postcss-modules-local-by-default: [email protected] postcss-modules-scope: [email protected] postcss-modules-values: [email protected] postcss-value-parser: 4.2.0 schema-utils: 3.1.1 semver: 7.3.8 webpack: 5.74.0 /css-loader/[email protected]: resolution: {integrity: sha512-yB5CNFa14MbPJcomwNh3wLThtkZgcNyI2bNMRt8iE5Z8Vwl7f8vQXFAzn2HDOJvtDq2NTZBUGMSUNNyrv3/+cw==} engines: {node: '>= 12.13.0'} peerDependencies: webpack: ^5.0.0 dependencies: icss-utils: [email protected] postcss: 8.4.18 postcss-modules-extract-imports: [email protected] postcss-modules-local-by-default: [email protected] postcss-modules-scope: [email protected] postcss-modules-values: [email protected] postcss-value-parser: 4.2.0 semver: 7.3.8 webpack: [email protected] A: I was using this exact approach in an angular 13 project without any problems, but it broke for me with the same error you are experiencing after updating the project to angular 14. Used versions: Angular 13: sass-loader 12.4.0 Angular 14: @angular-devkit/build-angular/node_modules/sass-loader 13.0.2 If you want to stick to this solution maybe this could give you a hint why it breaks. 
After hours of trying to fix the initial problem I now settled on a workaround explained here: https://medium.com/@mariusschroeder/export-scss-variables-435b6e784302. This is what I suggest as an alternative solution. It works just as well as the other approach, even if it feels a little more hacky to be honest.
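One common variant of that workaround, shown here as an illustration and not necessarily exactly what the linked post does, is to expose the values as CSS custom properties and read them from TypeScript at runtime; the property and function names below are made up for the example.

/* in a global stylesheet, e.g. styles.scss */
:root {
  --dialog-width: 600px;
  --sidenav-width-expanded: 280px;
}

// anywhere in TypeScript, after the styles have been loaded in the browser
export function cssVar(name: string): string {
  return getComputedStyle(document.documentElement).getPropertyValue(name).trim();
}

const dialogWidth = cssVar('--dialog-width'); // "600px"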
Using SCSS `:export` causes webpack error: "Error: Module parse failed: Unexpected token"
I want to use variables defined in SCSS in TypeScript modules in a Angular app. An quite elegant way seems to use SCSS :export, see this question or this one or this one. But when I use :export in one of my SCSS files, I get an webpack error: ./src/styles/_export.scss:16:0 - Error: Module parse failed: Unexpected token (16:0) File was processed with these loaders: * ./node_modules/.pnpm/[email protected]/node_modules/resolve-url-loader/index.js * ./node_modules/.pnpm/[email protected][email protected][email protected]/node_modules/sass-loader/dist/cjs.js You may need an additional loader to handle the result of these loaders. | | > :export { | dialogWidth: 600px; | sidenavWidthExpanded: 280px; Error: src/app/modules/help/services/help.service.ts:12:29 - error TS2307: Cannot find module '@styles/_export.scss' or its corresponding type declarations. Any idea what is missing? Is there another loader required as the error message suggests? Is anything outdated? In fact, I'm not quite sure which tool would process the :export as it does not seem to be a SASS feature, but rather ICSS as pointed out by PaulCo. Looking at my pnpm-lock.yaml it seems that 3 different versions of the css-loader are used (and all depend on some version of icss-utils), no idea which one of those is in fact used to build: /css-loader/[email protected]: resolution: {integrity: sha512-M5lSukoWi1If8dhQAUCvj4H8vUt3vOnwbQBH9DdTm/s4Ym2B/3dPMtYZeJmq7Q3S3Pa+I94DcZ7pc9bP14cWIQ==} engines: {node: '>= 8.9.0'} peerDependencies: webpack: ^4.0.0 || ^5.0.0 dependencies: camelcase: 5.3.1 cssesc: 3.0.0 icss-utils: 4.1.1 loader-utils: 1.4.0 normalize-path: 3.0.0 postcss: 7.0.39 postcss-modules-extract-imports: 2.0.0 postcss-modules-local-by-default: 3.0.3 postcss-modules-scope: 2.2.0 postcss-modules-values: 3.0.0 postcss-value-parser: 4.2.0 schema-utils: 2.7.1 semver: 6.3.0 webpack: 4.46.0 /css-loader/[email protected]: resolution: {integrity: sha512-Q7mOvpBNBG7YrVGMxRxcBJZFL75o+cH2abNASdibkj/fffYD8qWbInZrD0S9ccI6vZclF3DsHE7njGlLtaHbhg==} engines: {node: '>= 10.13.0'} peerDependencies: webpack: ^4.27.0 || ^5.0.0 dependencies: icss-utils: [email protected] loader-utils: 2.0.3 postcss: 8.4.18 postcss-modules-extract-imports: [email protected] postcss-modules-local-by-default: [email protected] postcss-modules-scope: [email protected] postcss-modules-values: [email protected] postcss-value-parser: 4.2.0 schema-utils: 3.1.1 semver: 7.3.8 webpack: 5.74.0 /css-loader/[email protected]: resolution: {integrity: sha512-yB5CNFa14MbPJcomwNh3wLThtkZgcNyI2bNMRt8iE5Z8Vwl7f8vQXFAzn2HDOJvtDq2NTZBUGMSUNNyrv3/+cw==} engines: {node: '>= 12.13.0'} peerDependencies: webpack: ^5.0.0 dependencies: icss-utils: [email protected] postcss: 8.4.18 postcss-modules-extract-imports: [email protected] postcss-modules-local-by-default: [email protected] postcss-modules-scope: [email protected] postcss-modules-values: [email protected] postcss-value-parser: 4.2.0 semver: 7.3.8 webpack: [email protected]
[ "I was using this exact approach in an angular 13 project without any problems, but it broke for me with the same error you are experiencing after updating the project to angular 14.\nUsed versions:\nAngular 13: sass-loader 12.4.0\nAngular 14: @angular-devkit/build-angular/node_modules/sass-loader 13.0.2\nIf you want to stick to this solution maybe this could give you a hint why it breaks.\nAfter hours of trying to fix the initial problem I now settled on a workaround explained here: https://medium.com/@mariusschroeder/export-scss-variables-435b6e784302. This is what I suggest as an alternative solution.\nIt works just as well as the other approach, even if it feels a little more hacky to be honest.\n" ]
[ 0 ]
[]
[]
[ "angular", "sass", "typescript", "webpack" ]
stackoverflow_0074401518_angular_sass_typescript_webpack.txt
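A side note on the workaround linked in the answer above: instead of :export, the same values can be exposed as CSS custom properties and read from TypeScript at runtime via getComputedStyle, which needs no extra webpack loader. This is only a rough sketch under the assumption that the variables are declared on :root; the property names (--dialog-width, --sidenav-width-expanded) and the helper readCssVar are illustrative, not taken from the project.

// styles/_export.scss (sketch): declare the values as custom properties
// :root {
//   --dialog-width: 600px;
//   --sidenav-width-expanded: 280px;
// }

// help.service.ts (sketch): read the values back at runtime
function readCssVar(name: string): string {
  // getComputedStyle resolves custom properties declared on :root
  return getComputedStyle(document.documentElement).getPropertyValue(name).trim();
}

const dialogWidth = parseInt(readCssVar('--dialog-width'), 10); // 600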
Q: How to slow down animation loop I am using this babylonjs playground as an example to animate a boat in a scene I am creating for a class project. But the animation is going far too fast for what I want to use it for. Could you please show me how to slow this animation down? As well as explain how the animation part works. Thank you! Babylonjs playground: https://playground.babylonjs.com/#1YD970#14 Sorry, I'm very new to Babylon.js; I don't really understand some of the classes and variables used, so I don't know exactly what to change to have a slower animation loop. A: In order to slow down the animation in this particular Babylonjs playground you will need to change two variables. The first variable is the speed which is located in the createScene function. The speed variable is currently set to 10, however you can increase or decrease this value to speed up or slow down the animation. The second variable is the time step which is located in the scene.registerBeforeRender function. The time step variable is currently set to 1/60, however you can increase or decrease this value to speed up or slow down the animation. It is important to note that if you decrease the speed variable too much, the animation may not appear to be moving at all. I hope this helps!
How to slow down animation loop
I am using this babylonjs playground as an example to animate a boat in a scene I am creating for a class project. But the animation is going far too fast for what I want to use it for. Could you please show me how to slow this animation down? As well as explain how the animation part works. Thank you! Babylonjs playground: https://playground.babylonjs.com/#1YD970#14 Sorry, I'm very new to Babylon.js; I don't really understand some of the classes and variables used, so I don't know exactly what to change to have a slower animation loop.
[ "In order to slow down the animation in this particular Babylonjs playground you will need to change two variables.\nThe first variable is the speed which is located in the createScene function. The speed variable is currently set to 10, however you can increase or decrease this value to speed up or slow down the animation.\nThe second variable is the time step which is located in the scene.registerBeforeRender function. The time step variable is currently set to 1/60, however you can increase or decrease this value to speed up or slow down the animation.\nIt is important to note that if you decrease the speed variable too much, the animation may not appear to be moving at all.\nI hope this helps!\n" ]
[ 0 ]
[]
[]
[ "babylonjs", "javascript" ]
stackoverflow_0074662738_babylonjs_javascript.txt
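Because the answer above refers to variables inside one specific playground, here is a more general sketch of how animation speed is usually controlled in Babylon.js: scene.beginAnimation takes a speedRatio argument, and values below 1 slow the animation down. The mesh name "boat" and the frame range 0–100 are assumptions for illustration, not values read from the playground.

// assumes a scene with an animated mesh; name and frame range are made up
const boat = scene.getMeshByName("boat");
// beginAnimation(target, from, to, loop, speedRatio)
// speedRatio 1 = original speed, 0.25 = four times slower
const anim = scene.beginAnimation(boat, 0, 100, true, 0.25);
// the returned Animatable can also be adjusted later, e.g. anim.speedRatio = 0.1;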
Q: Discard branch in forked repo and make a copy of same branch from upstream into fork I have a forked repo with branch develop. When I make changes I usually pull from the upstream develop in to my forked branch and continue. Now I made a series of commits in my forked repo that I don't want to merge. So How can reset my branch to be what the upstream branch is - like a sync or How can I discard my branch and make a new copy of the upstream develop branch in my forked repo? thank you! A: (Since branches have virtually no cost in git you might want to create a backup branch before doing the reset (e.g. git branch develop.old.backup develop). You can always just delete it later if you want to) The answer to 1) is a hard reset: git checkout develop; git reset --hard origin/develop (assuming the upstream repository is named origin). NB, the hard reset is a command that discards data and commits, which is what you ask for in this particular case, but it should be handled with respect1. Alternative 2) would be git checkout main # or any other branch/commitish git branch -D develop git checkout -b develop origin/develop 1 I remember reading a humorous piece titled something "How to remove files as root in just 15 step" or something like that with exaggerated double/tripple checking of file masks and conditions before running an rm command as root. I was unable to find it now when searching for it.
Discard branch in forked repo and make a copy of same branch from upstream into fork
I have a forked repo with branch develop. When I make changes I usually pull from the upstream develop in to my forked branch and continue. Now I made a series of commits in my forked repo that I don't want to merge. So How can reset my branch to be what the upstream branch is - like a sync or How can I discard my branch and make a new copy of the upstream develop branch in my forked repo? thank you!
[ "(Since branches have virtually no cost in git you might want to create a backup branch before doing the reset (e.g. git branch develop.old.backup develop). You can always just delete it later if you want to)\nThe answer to 1) is a hard reset: git checkout develop; git reset --hard origin/develop (assuming the upstream repository is named origin). NB, the hard reset is a command that discards data and commits, which is what you ask for in this particular case, but it should be handled with respect1.\nAlternative 2) would be\ngit checkout main # or any other branch/commitish\ngit branch -D develop\ngit checkout -b develop origin/develop\n\n\n1\nI remember reading a humorous piece titled something \"How to remove files as root in just 15 step\" or something like that with exaggerated double/tripple checking of file masks and conditions before running an rm command as root. I was unable to find it now when searching for it.\n" ]
[ 1 ]
[]
[]
[ "git", "gitlab" ]
stackoverflow_0074661085_git_gitlab.txt
Q: Follow up to Bizarre canvas position locked by setting initial value I posted a question about bizarre thing trying to change style.top of canvas. People kindly advised tidy-up of code, but this still leaves me one unanswered question: its still the case that if I initialise the top or left positions of the canvas, it cant be moved. But it can if I never give it an initial value! In this example: you click on a canvas and should be able to move it- if style.top is given an initial value, it will stop canvas ever moving vertically similarly style.left value will stop it ever moving horizontally e.g. In the code below // comment out the top initial value, hence it will move vertically, but why?? <!DOCTYPE html> <html> <body> <script> var n=0, canv, ct var body = document.getElementsByTagName("body")[0]; var i,offx=0,offy=0, n=0 canv = document.createElement('canvas'); canv.id = "C"; canv.style.width = "100px"; canv.style.height = "100px"; //canv.style.top = "50px"; commenting out line will let it move canv.style.left = "50px"; canv.style.zIndex = 0; canv.style.position = "absolute"; canv.style.border = "2px solid"; body.appendChild(canv); ct = canv.getContext("2d"); canv.addEventListener('mousedown', function (e) { if (this.style.border == "2px solid"){ this.style.border = "5px solid" offy=e.y-this.style.top offx=e.x-this.style.left } else{this.style.border = "2px solid"} }); canv.addEventListener('mousemove', function (e) { if (this.style.border != "2px solid"){ this.style.top=e.y-offy+"px" this.style.left=e.x-offx+"px" } }); </script> </body> </html> A: This is happening because this.style.top is a string, ending in px, so you can't subtract it like that: offy=e.y-this.style.top I made a few changes so that it is not trying to subtract strings in this fiddle.
Follow up to Bizarre canvas position locked by setting initial value
I posted a question about bizarre thing trying to change style.top of canvas. People kindly advised tidy-up of code, but this still leaves me one unanswered question: its still the case that if I initialise the top or left positions of the canvas, it cant be moved. But it can if I never give it an initial value! In this example: you click on a canvas and should be able to move it- if style.top is given an initial value, it will stop canvas ever moving vertically similarly style.left value will stop it ever moving horizontally e.g. In the code below // comment out the top initial value, hence it will move vertically, but why?? <!DOCTYPE html> <html> <body> <script> var n=0, canv, ct var body = document.getElementsByTagName("body")[0]; var i,offx=0,offy=0, n=0 canv = document.createElement('canvas'); canv.id = "C"; canv.style.width = "100px"; canv.style.height = "100px"; //canv.style.top = "50px"; commenting out line will let it move canv.style.left = "50px"; canv.style.zIndex = 0; canv.style.position = "absolute"; canv.style.border = "2px solid"; body.appendChild(canv); ct = canv.getContext("2d"); canv.addEventListener('mousedown', function (e) { if (this.style.border == "2px solid"){ this.style.border = "5px solid" offy=e.y-this.style.top offx=e.x-this.style.left } else{this.style.border = "2px solid"} }); canv.addEventListener('mousemove', function (e) { if (this.style.border != "2px solid"){ this.style.top=e.y-offy+"px" this.style.left=e.x-offx+"px" } }); </script> </body> </html>
[ "This is happening because this.style.top is a string, ending in px, so you can't subtract it like that:\noffy=e.y-this.style.top\n\nI made a few changes so that it is not trying to subtract strings in this fiddle.\n" ]
[ 1 ]
[]
[]
[ "canvas", "target" ]
stackoverflow_0074661685_canvas_target.txt
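To make the fix described in the answer above concrete, here is a minimal sketch of the mousedown handler with the offsets parsed as numbers; parseFloat ignores the trailing "px", so an initial value like "50px" no longer breaks the subtraction. e.x and e.y are kept from the original code, although e.clientX and e.clientY are the standardized names.

canv.addEventListener('mousedown', function (e) {
  if (this.style.border == "2px solid") {
    this.style.border = "5px solid";
    // parseFloat("50px") -> 50; an empty style parses to NaN, hence the fallback to 0
    offy = e.y - (parseFloat(this.style.top) || 0);
    offx = e.x - (parseFloat(this.style.left) || 0);
  } else {
    this.style.border = "2px solid";
  }
});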
Q: Ensuring memorization doesn't happen between between train and test sets in a Machine Learning model Recently, contractors developed an NER solution for us which extracts relevant drugs out of pharmaceutical policies (drugs that the policy was describing coverage criteria for). Part of their process was to go through the training set, and replace drugs ("Tylenol", etc) that also appeared in the test set, in order to ensure that the model was learning about the context of the drug, rather than memorizing the drug name (ex. showing up in sentences like "Tylenol is covered under the following circumstances..."). My question is, if we have new test data added, and we want to reevaluate the model, would it make sense to substitute words in the test set to make sure that they don't appear in the previous training set, or should we re-substitute the words in the training set, retrain the model, and reevaluate on the new test data? Thanks A: It is generally not a good idea to replace words in the test set in order to avoid memorization by the model. This is because the purpose of the test set is to evaluate the model's performance on unseen data, and replacing words in the test set effectively makes the data less "unseen" for the model. This can lead to inflated performance scores and a false sense of the model's generalizability to new data. Instead of replacing words in the test set, it is better to retrain the model on the new training data, which includes the updated words, and then evaluate the model on the new test set. This will give a more accurate picture of the model's performance on new data and will help to avoid overfitting to the training set. It is also important to note that the goal of training a model should not be to avoid memorization of specific words, but rather to learn the underlying patterns and relationships in the data that allow it to make accurate predictions. This can be achieved through techniques such as regularization and using appropriate training and evaluation metrics.
Ensuring memorization doesn't happen between train and test sets in a Machine Learning model
Recently, contractors developed an NER solution for us which extracts relevant drugs out of pharmaceutical policies (drugs that the policy was describing coverage criteria for). Part of their process was to go through the training set, and replace drugs ("Tylenol", etc) that also appeared in the test set, in order to ensure that the model was learning about the context of the drug, rather than memorizing the drug name (ex. showing up in sentences like "Tylenol is covered under the following circumstances..."). My question is, if we have new test data added, and we want to reevaluate the model, would it make sense to substitute words in the test set to make sure that they don't appear in the previous training set, or should we re-substitute the words in the training set, retrain the model, and reevaluate on the new test data? Thanks
[ "It is generally not a good idea to replace words in the test set in order to avoid memorization by the model. This is because the purpose of the test set is to evaluate the model's performance on unseen data, and replacing words in the test set effectively makes the data less \"unseen\" for the model. This can lead to inflated performance scores and a false sense of the model's generalizability to new data.\nInstead of replacing words in the test set, it is better to retrain the model on the new training data, which includes the updated words, and then evaluate the model on the new test set. This will give a more accurate picture of the model's performance on new data and will help to avoid overfitting to the training set.\nIt is also important to note that the goal of training a model should not be to avoid memorization of specific words, but rather to learn the underlying patterns and relationships in the data that allow it to make accurate predictions. This can be achieved through techniques such as regularization and using appropriate training and evaluation metrics.\n" ]
[ 0 ]
[]
[]
[ "machine_learning", "named_entity_recognition", "training_data" ]
stackoverflow_0074662729_machine_learning_named_entity_recognition_training_data.txt
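One concrete alternative to hand-editing drug names, sketched here as an assumption rather than something from the original project: split the data by drug so that no drug name appears in both train and test. scikit-learn's GroupShuffleSplit does this directly; the column names below are made up for the example.

import pandas as pd
from sklearn.model_selection import GroupShuffleSplit

# hypothetical frame: one row per policy sentence, with the drug it mentions
df = pd.DataFrame({
    "text": ["Tylenol is covered when ...", "Advil requires prior authorization ...", "Tylenol 500mg is covered ..."],
    "drug": ["Tylenol", "Advil", "Tylenol"],
})

# grouping by drug keeps every mention of a given drug on one side of the split
splitter = GroupShuffleSplit(n_splits=1, test_size=0.2, random_state=0)
train_idx, test_idx = next(splitter.split(df, groups=df["drug"]))
train, test = df.iloc[train_idx], df.iloc[test_idx]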
Q: Why does using C's assert function skip evaluation of a conditional that comes before it? I was using C's assert.h assert function in a method similar to this: int x = 3; if (x == 3) printf("x is 3 ✅"); assert(x != 3); When running it, I found out that the if statement is skipped entirely, and the program terminates when it reaches the assert statement. Needless to say, this caused a couple of extremely nasty bugs in my program before I found the culprit. What could be the cause of this? Why is the conditional being skipped entirely? If it isn't being skipped, then why is the code inside not being executed? I set up an online example here. A: The if statement is not skipped. When assert is executed, it is not considered a "clean" program termination. Therefore, I/O buffers are not flushed. Try adding a fflush() call to explicitly flush stdout's buffer: int x = 3; if (x == 3) { printf("x is 3 ✅"); fflush(stdout); } assert(x != 3); Here are the relevant paragraphs from the C17 standard (italic emphasis mine): 7.2.1.1 The assert macro puts diagnostic tests into programs; it expands to a void expression. When it is executed, if expression (which shall have a scalar type) is false (that is, compares equal to 0), the assert macro writes information about the particular call that failed (including the text of the argument, the name of the source file, the source line number, and the name of the enclosing function — the latter are respectively the values of the preprocessing macros __FILE__ and __LINE__ and of the identifier __func__) on the standard error stream in an implementation-defined format. It then calls the abort function. 7.22.4.1 The abort function causes abnormal program termination to occur, unless the signal SIGABRT is being caught and the signal handler does not return. Whether open streams with unwritten buffered data are flushed, open streams are closed, or temporary files are removed is implementation-defined.
Why does using C's assert function skip evaluation of a conditional that comes before it?
I was using C's assert.h assert function in a method similar to this: int x = 3; if (x == 3) printf("x is 3 ✅"); assert(x != 3); When running it, I found out that the if statement is skipped entirely, and the program terminates when it reaches the assert statement. Needless to say, this caused a couple of extremely nasty bugs in my program before I found the culprit. What could be the cause of this? Why is the conditional being skipped entirely? If it isn't being skipped, then why is the code inside not being executed? I set up an online example here.
[ "The if statement is not skipped.\nWhen assert is executed, it is not considered a \"clean\" program termination. Therefore, I/O buffers are not flushed. Try adding a fflush() call to explicitly flush stdout's buffer:\nint x = 3;\n\nif (x == 3)\n{\n printf(\"x is 3 ✅\");\n fflush(stdout);\n}\n\nassert(x != 3);\n\nHere are the relevant paragraphs from the C17 standard (italic emphasis mine):\n\n7.2.1.1\nThe assert macro puts diagnostic tests into programs; it expands to a void expression. When it is executed, if expression (which shall have a scalar type) is false (that is, compares equal to 0), the assert macro writes information about the particular call that failed (including the text of the argument, the name of the source file, the source line number, and the name of the enclosing function — the latter are respectively the values of the preprocessing macros __FILE__ and __LINE__ and of the identifier __func__) on the standard error stream in an implementation-defined format. It then calls the abort function.\n\n\n7.22.4.1\nThe abort function causes abnormal program termination to occur, unless the signal SIGABRT\nis being caught and the signal handler does not return. Whether open streams with unwritten\nbuffered data are flushed, open streams are closed, or temporary files are removed is implementation-defined.\n\n" ]
[ 6 ]
[]
[]
[ "c" ]
stackoverflow_0074662688_c.txt
Q: Dart Map keys, get unique key Simple Map here: Map map = Map<int, String>{}; I can populate it: map = {1: 'c', 2: 'dart', 3: 'flutter'}; Here I need to specify a KEY. I would like to know on how to get an auto key. I cannot use map.lenght because whenever I will delete e.g. the second item (2) the third will remain 3 and map.lenght will overwrite that key. As the @eamirho3ein answer I tried this: //& Maps Map map = <int, Ingredient>{}; Map<int, T> addToMap<T>(Map<int, T> map, T newItem) { var list = map.entries.map((e) => e.value).toList(); list.add(newItem); var newIndex = 1; return Map.fromIterable(list, key: (item) => newIndex++, value: (item) => item); } Map<int, Ingredient> result = addToMap<Ingredient>( map, //Error here Ingredient( name: "Pizza", kcal: 100, carbohydrates: 50, proteins: 35, lipids: 23, fibers: 12, date: DateTime.now(), bottomTabIndex: 0, leftTabIndex: 0)); But I receive this error on map(indicated): The argument type 'Map<dynamic, dynamic>' can't be assigned to the parameter type 'Map<int, Ingredient>'. This is my simple class: class Ingredient { String? name; int? kcal; int? carbohydrates; int? proteins; int? lipids; int? fibers; int? leftTabIndex; int? bottomTabIndex; DateTime? date; Ingredient( {this.name, this.kcal, this.carbohydrates, this.proteins, this.lipids, this.fibers, this.leftTabIndex, this.bottomTabIndex, this.date}); } A: You can use this function to remove an item in a map and auto generate new key: Map<int, T> removeFromMap<T>(Map<int, T> map, int index) { var list = map.entries.map((e) => e.value).toList(); list.removeAt(index); var newIndex = 1; return Map.fromIterable(list, key: (item) => newIndex++, value: (item) => item); } you can use it like this: var result = removeFromMap<String>({1: 'c', 2: 'dart', 3: 'flutter'}, 1); print("result = $result"); //result = {1: c, 2: flutter} If you want add new Item: Map<int, T> addToMap<T>(Map<int, T> map, T newItem) { var list = map.entries.map((e) => e.value).toList(); list.add(newItem); var newIndex = 1; return Map.fromIterable(list, key: (item) => newIndex++, value: (item) => item); } and call it like this: var result = addToMap<String>({1: 'c', 2: 'dart', 3: 'flutter'}, 'B'); print("result = $result"); //result = {1: c, 2: dart, 3: flutter, 4: B} A: You can use an incrementing index key that will be always different than others on every call, and wrapping it inside an autoKey() method like this: int index = 0; autoKey() { return ++index; } Map<int, String> map = {}; map = {autoKey(): 'c', autoKey(): 'dart', autoKey(): 'flutter'}; print(map); {1: c, 2: dart, 3: flutter} A: Maybe I should use a function: int getNewKey(Map map) { if (map.isEmpty) { return 0; // or 1, this is the first item I insert } else { return map.keys.last + 1; } } And use it whenever I will add something to that map: {getNewKey(map) : 'c', getNewKey(map) : 'dart', getNewKey(map) : 'flutter'} Please let me know if this is faulty :-| By this way I will never overwrite a key that's only incremental. Please note: I should not use directly {(map.keys.last + 1) : 'c', (map.keys.last + 1) : 'dart', (map.keys.last + 1) : 'flutter'} Because if the Map is empty will produce an error. A: If you have a Map that uses contiguous, integer-based keys, you should ask yourself if you should be using a List instead, and then you don't need to manage keys yourself. 
If you must use a Map because some API requires it, you can still start with a List and then use List.asMap to easily convert it: var map = [ 'c', 'dart', 'flutter', ].asMap(); print(map); // Prints: {0: c, 1: dart, 2: flutter} Note that List.asMap returns an unmodifable view, so if you want to allow mutation, you would need to create a copy: var map = Map.of([ 'c', 'dart', 'flutter', ].asMap()); If must have your Map keys be integers starting from 1 and not 0, then you could insert a dummy element (and remove it later if desired): var map = Map.of([ '', 'c', 'dart', 'flutter', ].asMap()) ..remove(0); print(map); // Prints: {1: c, 2: dart, 3: flutter}
Dart Map keys, get unique key
Simple Map here: Map map = Map<int, String>{}; I can populate it: map = {1: 'c', 2: 'dart', 3: 'flutter'}; Here I need to specify a KEY. I would like to know on how to get an auto key. I cannot use map.lenght because whenever I will delete e.g. the second item (2) the third will remain 3 and map.lenght will overwrite that key. As the @eamirho3ein answer I tried this: //& Maps Map map = <int, Ingredient>{}; Map<int, T> addToMap<T>(Map<int, T> map, T newItem) { var list = map.entries.map((e) => e.value).toList(); list.add(newItem); var newIndex = 1; return Map.fromIterable(list, key: (item) => newIndex++, value: (item) => item); } Map<int, Ingredient> result = addToMap<Ingredient>( map, //Error here Ingredient( name: "Pizza", kcal: 100, carbohydrates: 50, proteins: 35, lipids: 23, fibers: 12, date: DateTime.now(), bottomTabIndex: 0, leftTabIndex: 0)); But I receive this error on map(indicated): The argument type 'Map<dynamic, dynamic>' can't be assigned to the parameter type 'Map<int, Ingredient>'. This is my simple class: class Ingredient { String? name; int? kcal; int? carbohydrates; int? proteins; int? lipids; int? fibers; int? leftTabIndex; int? bottomTabIndex; DateTime? date; Ingredient( {this.name, this.kcal, this.carbohydrates, this.proteins, this.lipids, this.fibers, this.leftTabIndex, this.bottomTabIndex, this.date}); }
[ "You can use this function to remove an item in a map and auto generate new key:\nMap<int, T> removeFromMap<T>(Map<int, T> map, int index) {\n var list = map.entries.map((e) => e.value).toList();\n list.removeAt(index);\n var newIndex = 1;\n\n return Map.fromIterable(list,\n key: (item) => newIndex++, value: (item) => item);\n } \n\nyou can use it like this:\nvar result = removeFromMap<String>({1: 'c', 2: 'dart', 3: 'flutter'}, 1);\nprint(\"result = $result\"); //result = {1: c, 2: flutter}\n\nIf you want add new Item:\nMap<int, T> addToMap<T>(Map<int, T> map, T newItem) {\n var list = map.entries.map((e) => e.value).toList();\n list.add(newItem);\n var newIndex = 1;\n\n return Map.fromIterable(list,\n key: (item) => newIndex++, value: (item) => item);\n }\n\nand call it like this:\nvar result = addToMap<String>({1: 'c', 2: 'dart', 3: 'flutter'}, 'B');\nprint(\"result = $result\"); //result = {1: c, 2: dart, 3: flutter, 4: B}\n\n", "You can use an incrementing index key that will be always different than others on every call, and wrapping it inside an autoKey() method like this:\nint index = 0;\nautoKey() {\n return ++index;\n}\n\nMap<int, String> map = {};\nmap = {autoKey(): 'c', autoKey(): 'dart', autoKey(): 'flutter'};\n\nprint(map); {1: c, 2: dart, 3: flutter}\n\n", "Maybe I should use a function:\nint getNewKey(Map map) {\n if (map.isEmpty) {\n return 0; // or 1, this is the first item I insert\n } else {\n return map.keys.last + 1;\n }\n}\n\nAnd use it whenever I will add something to that map:\n{getNewKey(map) : 'c', getNewKey(map) : 'dart', getNewKey(map) : 'flutter'}\nPlease let me know if this is faulty :-|\nBy this way I will never overwrite a key that's only incremental.\nPlease note: I should not use directly\n{(map.keys.last + 1) : 'c', (map.keys.last + 1) : 'dart', (map.keys.last + 1) : 'flutter'}\n\nBecause if the Map is empty will produce an error.\n", "If you have a Map that uses contiguous, integer-based keys, you should ask yourself if you should be using a List instead, and then you don't need to manage keys yourself.\nIf you must use a Map because some API requires it, you can still start with a List and then use List.asMap to easily convert it:\nvar map = [\n 'c',\n 'dart',\n 'flutter',\n ].asMap();\n\nprint(map); // Prints: {0: c, 1: dart, 2: flutter}\n\nNote that List.asMap returns an unmodifable view, so if you want to allow mutation, you would need to create a copy:\nvar map = Map.of([\n 'c',\n 'dart',\n 'flutter',\n ].asMap());\n\nIf must have your Map keys be integers starting from 1 and not 0, then you could insert a dummy element (and remove it later if desired):\nvar map = Map.of([\n '',\n 'c',\n 'dart',\n 'flutter',\n ].asMap())\n ..remove(0);\n\nprint(map); // Prints: {1: c, 2: dart, 3: flutter}\n\n" ]
[ 2, 0, 0, 0 ]
[ "another way is that you can use a class as key that never equals itself using hashCode, like this:\nclass nonEqualKey {\n \n @override\n int get hashCode => 0;\n \n @override\n operator ==(covariant nonEqualKey other) {\n return other.hashCode != hashCode;\n }\n @override\n toString() {\n return \"unique\";\n }\n}\n\nMap map = {};\nmap = {nonEqualKey(): 'c', nonEqualKey(): 'dart', nonEqualKey(): 'flutter'};\nprint(map); // {unique: c, unique: dart, unique: flutter}\n\nHere I overridden the hashCode so it will be always 0, then I overridden the == operator so the objects can be equal if they have a different hashCode which is impossible since always 0!=0 is false.\nEven if you use the same class constructor, they will never be the same, and so, it lets you use it as much as you want without needing to handle it for every operation you will do on the Map\n" ]
[ -1 ]
[ "dart", "dictionary", "flutter" ]
stackoverflow_0074657464_dart_dictionary_flutter.txt
Q: Consider defining a bean of type 'org.springframework.cloud.openfeign.FeignContext' in your configuration I am trying to run the application but this error keeps prompting. Description: Parameter 0 of constructor in com.clientui.clientui.controller.ClientController required a bean of type 'org.springframework.cloud.openfeign.FeignContext' that could not be found. Action: Consider defining a bean of type 'org.springframework.cloud.openfeign.FeignContext' in your configuration. Here is the code: Main package com.clientui.clientui; import org.springframework.boot.SpringApplication; import org.springframework.boot.autoconfigure.EnableAutoConfiguration; import org.springframework.boot.autoconfigure.SpringBootApplication; import org.springframework.cloud.openfeign.EnableFeignClients; import org.springframework.cloud.openfeign.FeignClient; import org.springframework.context.annotation.ComponentScan; import org.springframework.context.annotation.Configuration; @SpringBootApplication @EnableFeignClients("com.clientui") public class ClientuiApplication { public static void main(String[] args) { SpringApplication.run(ClientuiApplication.class, args); } } Controller package com.clientui.clientui.controller; import com.clientui.clientui.beans.ProductBean; import com.clientui.clientui.proxies.MicroserviceProduitsProxy; import org.springframework.beans.factory.annotation.Autowired; import org.springframework.stereotype.Controller; import org.springframework.ui.Model; import org.springframework.web.bind.annotation.RequestMapping; import java.util.List; @Controller public class ClientController { private final MicroserviceProduitsProxy produitsProxy; public ClientController(MicroserviceProduitsProxy produitsProxy){ this.produitsProxy = produitsProxy; } @RequestMapping("/") public String accueil(Model model){ List<ProductBean> produits = produitsProxy.listeDesProduits(); model.addAttribute("produits", produits); return "Accueil"; } } A: I had the same problem when updating the spring-boot version to 3.0.0, I think it's some compatibility bug with spring cloud and spring boot's autoconfigure. I solved it by adding the annotation @ImportAutoConfiguration({FeignAutoConfiguration.class}) in the application, in your case: import org.springframework.cloud.openfeign.FeignAutoConfiguration; @SpringBootApplication @EnableFeignClients("com.clientui") @ImportAutoConfiguration({FeignAutoConfiguration.class}) public class ClientuiApplication { public static void main(String[] args) { SpringApplication.run(ClientuiApplication.class, args); } } A: I use Spring Boot 3.0.0 and faced the same issue and resolved it by using 2022.0.0-RC2 version of spring-cloud-dependencies. (https://docs.spring.io/spring-cloud/docs/2022.0.0-RC2/reference/html/). It should work with Spring Boot 3.0.0. If you are using Maven add this to your dependencyManagement section in pom.xml: <dependency> <groupId>org.springframework.cloud</groupId> <artifactId>spring-cloud-dependencies</artifactId> <version>2022.0.0-RC2</version> <type>pom</type> <scope>import</scope> </dependency> Note: For the time I am writing this answer 2022.0.0-RC2 version is not available in central repository but you can find it in Spring Lib M repository so you should also add it to your repositories section in pom.xml: <repository> <id>lib-m</id> <name>Spring Lib M</name> <url>https://repo.spring.io/libs-milestone/</url> </repository>
Consider defining a bean of type 'org.springframework.cloud.openfeign.FeignContext' in your configuration
I am trying to run the application but this error keeps prompting. Description: Parameter 0 of constructor in com.clientui.clientui.controller.ClientController required a bean of type 'org.springframework.cloud.openfeign.FeignContext' that could not be found. Action: Consider defining a bean of type 'org.springframework.cloud.openfeign.FeignContext' in your configuration. Here is the code: Main package com.clientui.clientui; import org.springframework.boot.SpringApplication; import org.springframework.boot.autoconfigure.EnableAutoConfiguration; import org.springframework.boot.autoconfigure.SpringBootApplication; import org.springframework.cloud.openfeign.EnableFeignClients; import org.springframework.cloud.openfeign.FeignClient; import org.springframework.context.annotation.ComponentScan; import org.springframework.context.annotation.Configuration; @SpringBootApplication @EnableFeignClients("com.clientui") public class ClientuiApplication { public static void main(String[] args) { SpringApplication.run(ClientuiApplication.class, args); } } Controller package com.clientui.clientui.controller; import com.clientui.clientui.beans.ProductBean; import com.clientui.clientui.proxies.MicroserviceProduitsProxy; import org.springframework.beans.factory.annotation.Autowired; import org.springframework.stereotype.Controller; import org.springframework.ui.Model; import org.springframework.web.bind.annotation.RequestMapping; import java.util.List; @Controller public class ClientController { private final MicroserviceProduitsProxy produitsProxy; public ClientController(MicroserviceProduitsProxy produitsProxy){ this.produitsProxy = produitsProxy; } @RequestMapping("/") public String accueil(Model model){ List<ProductBean> produits = produitsProxy.listeDesProduits(); model.addAttribute("produits", produits); return "Accueil"; } }
[ "I had the same problem when updating the spring-boot version to 3.0.0, I think it's some compatibility bug with spring cloud and spring boot's autoconfigure.\nI solved it by adding the annotation @ImportAutoConfiguration({FeignAutoConfiguration.class}) in the application, in your case:\nimport org.springframework.cloud.openfeign.FeignAutoConfiguration;\n\n@SpringBootApplication\n@EnableFeignClients(\"com.clientui\")\n@ImportAutoConfiguration({FeignAutoConfiguration.class})\npublic class ClientuiApplication {\n\n public static void main(String[] args) {\n SpringApplication.run(ClientuiApplication.class, args);\n }\n\n}\n\n", "I use Spring Boot 3.0.0 and faced the same issue and resolved it by using 2022.0.0-RC2 version of spring-cloud-dependencies. (https://docs.spring.io/spring-cloud/docs/2022.0.0-RC2/reference/html/). It should work with Spring Boot 3.0.0.\nIf you are using Maven add this to your dependencyManagement section in pom.xml:\n<dependency>\n <groupId>org.springframework.cloud</groupId>\n <artifactId>spring-cloud-dependencies</artifactId>\n <version>2022.0.0-RC2</version>\n <type>pom</type>\n <scope>import</scope>\n</dependency>\n\nNote: For the time I am writing this answer 2022.0.0-RC2 version is not available in central repository but you can find it in Spring Lib M repository so you should also add it to your repositories section in pom.xml:\n<repository>\n <id>lib-m</id>\n <name>Spring Lib M</name>\n <url>https://repo.spring.io/libs-milestone/</url>\n</repository>\n\n" ]
[ 1, 0 ]
[]
[]
[ "microservices", "spring_boot", "spring_cloud_feign" ]
stackoverflow_0074593433_microservices_spring_boot_spring_cloud_feign.txt
Q: "Carousel" animation for text in Flutter I want to make a "text carousel", just like the music title in this gif, how do I do this with flutter? A: To get it done, you need to use marquee, a package from pub.dev Use the link below https://pub.dev/packages/marquee You will see the ReadMe, to install.
"Carousel" animation for text in Flutter
I want to make a "text carousel", just like the music title in this gif, how do I do this with flutter?
[ "To get it done, you need to use marquee, a package from pub.dev\nUse the link below\nhttps://pub.dev/packages/marquee\nYou will see the ReadMe, to install.\n" ]
[ 0 ]
[]
[]
[ "flutter" ]
stackoverflow_0074662527_flutter.txt
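To make the pointer to the marquee package slightly more concrete, here is a minimal sketch; the text, blankSpace and velocity values are arbitrary, and the widget needs a bounded height (for example a SizedBox), as the package README explains.

import 'package:flutter/material.dart';
import 'package:marquee/marquee.dart';

// a scrolling song title, similar to the gif in the question
Widget songTitle() {
  return SizedBox(
    height: 24, // Marquee must be given a bounded height
    child: Marquee(
      text: 'Some long music title that does not fit on one line',
      blankSpace: 40, // gap before the text repeats
      velocity: 30,   // scroll speed in pixels per second
    ),
  );
}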
Q: Google App Engine login fails with error 500 I have an appengine (Java) app with the ability to sign in via Google. For this I use UserServiceFactory.getUserService().createLoginURL(...). This has been working fine so far, and still works well locally (using mvn appengine:run) but on production it consistently throws a generic 500 while on /_ah/conflogin?state=~AJKiYcHHHXI45-... (the 5th URL of the login process, while already being logged in with Google) and I can find nothing relevant in the Logs Explorer at https://console.cloud.google.com/logs/... I've since even updated to the latest https://mvnrepository.com/artifact/com.google.appengine/appengine-api-1.0-sdk/2.0.10 but the problem remains. Where should I look at to solve this issue? A: To find the root cause of this issue, you can find the logs for this error will be helpful. In this documentation there is a sample query that you could try to check for logs error with status 500. resource.type="gae_app" AND log_id("appengine.googleapis.com/request_log") AND httpRequest.status>=500 Alternatively you could also try running ‘gcloud app logs read’ as mentioned in this document to see if you get any logs. The issue tracker mentioned by Rez in comment is marked as Fixed and after checking your issue I think it also relates to the same issue tracker. As it closed I suggest to get your issue resolved I recommend to raise new issue tracker by referencing the fixed one or else you may raise support ticket with google A: First, I would check the AppEngine logs to see if you can find any clues as to what is causing the 500 error. You can do this by going to the Logs Explorer in the GCP console (https://console.cloud.google.com/logs/). If the error is not being logged in the AppEngine logs, then you can try debugging the code to see where the error is occurring. You can do this by setting breakpoints in your code and examining the state of the variables when the error occurs. This will help you narrow down the source of the error and allow you to fix it. If the error is still not clear, then you can try enabling verbose logging for the UserServiceFactory class. This will allow you to get more detailed information about what is happening in the background and can help you pinpoint the source of the error. A: One possible cause of this issue is that the UserServiceFactory.getUserService().createLoginURL() method is not being called from within a request-handling thread. This method is designed to be called from within a request-handling thread, and it may throw an exception if it is called from outside of a request-handling thread. To solve this issue, you can try wrapping the call to the createLoginURL() method in a RequestFactory. This will ensure that the method is called from within a request-handling thread, and it should prevent the exception from being thrown. Additionally, you may want to check the logs for your App Engine app to see if there are any other error messages that can provide more information about what is causing the 500 error. You can access the logs for your app by going to the Logs Explorer in the Google Cloud Console, and looking for logs that are associated with your app and the "/_ah/conflogin" URL.
Google App Engine login fails with error 500
I have an appengine (Java) app with the ability to sign in via Google. For this I use UserServiceFactory.getUserService().createLoginURL(...). This has been working fine so far, and still works well locally (using mvn appengine:run) but on production it consistently throws a generic 500 while on /_ah/conflogin?state=~AJKiYcHHHXI45-... (the 5th URL of the login process, while already being logged in with Google) and I can find nothing relevant in the Logs Explorer at https://console.cloud.google.com/logs/... I've since even updated to the latest https://mvnrepository.com/artifact/com.google.appengine/appengine-api-1.0-sdk/2.0.10 but the problem remains. Where should I look at to solve this issue?
[ "To find the root cause of this issue, you can find the logs for this error will be helpful.\nIn this documentation there is a sample query that you could try to check for logs error with status 500.\nresource.type=\"gae_app\" AND\nlog_id(\"appengine.googleapis.com/request_log\") AND\nhttpRequest.status>=500\n\nAlternatively you could also try running ‘gcloud app logs read’ as mentioned in this document to see if you get any logs.\nThe issue tracker mentioned by Rez in comment is marked as Fixed and after checking your issue I think it also relates to the same issue tracker. As it closed I suggest to get your issue resolved I recommend to raise new issue tracker by referencing the fixed one or else you may raise support ticket with google\n", "First, I would check the AppEngine logs to see if you can find any clues as to what is causing the 500 error. You can do this by going to the Logs Explorer in the GCP console (https://console.cloud.google.com/logs/).\nIf the error is not being logged in the AppEngine logs, then you can try debugging the code to see where the error is occurring. You can do this by setting breakpoints in your code and examining the state of the variables when the error occurs. This will help you narrow down the source of the error and allow you to fix it.\nIf the error is still not clear, then you can try enabling verbose logging for the UserServiceFactory class. This will allow you to get more detailed information about what is happening in the background and can help you pinpoint the source of the error.\n", "One possible cause of this issue is that the UserServiceFactory.getUserService().createLoginURL() method is not being called from within a request-handling thread. This method is designed to be called from within a request-handling thread, and it may throw an exception if it is called from outside of a request-handling thread.\nTo solve this issue, you can try wrapping the call to the createLoginURL() method in a RequestFactory. This will ensure that the method is called from within a request-handling thread, and it should prevent the exception from being thrown.\nAdditionally, you may want to check the logs for your App Engine app to see if there are any other error messages that can provide more information about what is causing the 500 error. You can access the logs for your app by going to the Logs Explorer in the Google Cloud Console, and looking for logs that are associated with your app and the \"/_ah/conflogin\" URL.\n" ]
[ 0, 0, 0 ]
[]
[]
[ "google_app_engine", "google_cloud_platform", "http_status_code_500" ]
stackoverflow_0074411999_google_app_engine_google_cloud_platform_http_status_code_500.txt
Q: How to set type for () => dispatch(action) in typescript? I am new to typescript and converting my jsx to tsx files. How can I define the type for a redux dispatch for my onPress prop? I tried using "Function" and the "AppDispatch". Thanks in advance import { AppDispatch } from "../redux/store" interface ItemProps { item: Custom_date, onPress: AppDispatch, //What should I put here? backgroundColor: any, textColor: any } const Item = ({ item, onPress, backgroundColor, textColor }: ItemProps) => ( <TouchableOpacity onPress={onPress} style={[styles.item, backgroundColor]}> <Text style={[styles.dateTitle, textColor]}>{item.title}</Text> </TouchableOpacity> ); export const DateHeader = () => { const { date } = useSelector((store: RootState) => store.nutrition) const dispatch = useDispatch() const renderItem = ({ item }) => { const backgroundColor = item.id === date.id ? "#033F40" : "#BDF0CC"; const color = item.id === date.id ? '#BDF0CC' : '#033F40'; return ( <Item item={item} onPress={() => dispatch(setDate(item))} backgroundColor={{ backgroundColor }} textColor={{ color }} /> ); } return (<> <FlatList {props} /> </>) } A: The only things you need to know to type a function are the argument types and the return type. In order to find the return type of dispatch() you can check the internals of useDispatch(), but as a general rule (with quite a few exceptions) click handlers will not return anything. Any side effects the function produces, such as by updating a variable or calling an API or updating a database, won't be included in the type signature. You're also not passing in any parameters in onPress as you can tell from the empty brackets where you define the arrow function, which simplifies things a lot. As a result your type for onPress will most likely be as follows, if the dispatch function you're calling does not return a value: interface ItemProps { item: Custom_date, onPress: () => void, backgroundColor: any, textColor: any } A: You should use the type for the dispatch function that you are using in your app, which is most likely the AppDispatch from the Redux store: interface ItemProps { item: Custom_date, onPress: AppDispatch, backgroundColor: any, textColor: any }
How to set type for () => dispatch(action) in typescript?
I am new to typescript and converting my jsx to tsx files. How can I define the type for a redux dispatch for my onPress prop? I tried using "Function" and the "AppDispatch". Thanks in advance import { AppDispatch } from "../redux/store" interface ItemProps { item: Custom_date, onPress: AppDispatch, //What should I put here? backgroundColor: any, textColor: any } const Item = ({ item, onPress, backgroundColor, textColor }: ItemProps) => ( <TouchableOpacity onPress={onPress} style={[styles.item, backgroundColor]}> <Text style={[styles.dateTitle, textColor]}>{item.title}</Text> </TouchableOpacity> ); export const DateHeader = () => { const { date } = useSelector((store: RootState) => store.nutrition) const dispatch = useDispatch() const renderItem = ({ item }) => { const backgroundColor = item.id === date.id ? "#033F40" : "#BDF0CC"; const color = item.id === date.id ? '#BDF0CC' : '#033F40'; return ( <Item item={item} onPress={() => dispatch(setDate(item))} backgroundColor={{ backgroundColor }} textColor={{ color }} /> ); } return (<> <FlatList {props} /> </>) }
[ "The only things you need to know to type a function are the argument types and the return type. In order to find the return type of dispatch() you can check the internals of useDispatch(), but as a general rule (with quite a few exceptions) click handlers will not return anything. Any side effects the function produces, such as by updating a variable or calling an API or updating a database, won't be included in the type signature.\nYou're also not passing in any parameters in onPress as you can tell from the empty brackets where you define the arrow function, which simplifies things a lot.\nAs a result your type for onPress will most likely be as follows, if the dispatch function you're calling does not return a value:\ninterface ItemProps {\n item: Custom_date, \n onPress: () => void,\n backgroundColor: any,\n textColor: any\n}\n\n", "You should use the type for the dispatch function that you are using in your app, which is most likely the AppDispatch from the Redux store:\ninterface ItemProps {\n item: Custom_date, \n onPress: AppDispatch, \n backgroundColor: any,\n textColor: any\n}\n\n" ]
[ 1, 0 ]
[]
[]
[ "react_native", "redux_toolkit", "typescript" ]
stackoverflow_0074662347_react_native_redux_toolkit_typescript.txt
Q: image loads but does not update I'm using this code, the image loads but is not updating. The code receives the image url through an API, this API every time it makes a request it generates a new image url <script src="https://ajax.googleapis.com/ajax/libs/jquery/1.11.1/jquery.min.js"></script> <script type="text/javascript"> function atualizarqr() { document.getElementById("img").src = "<?= $this->admin_m->qr_whatsapp(); ?>"; } window.onload = function () { setInterval(atualizarqr(), 5000); }; </script> <div onload="atualizarqr"> <img src="" id="img"/> </div> A: To make the image update, you can try the following changes to your code: Add a setInterval call to the atualizarqr function, which will update the image every 5 seconds. Use the $ syntax to access the jQuery library and use the attr method to set the src attribute of the img element. Here is an updated version of your code with these changes: <script src="https://ajax.googleapis.com/ajax/libs/jquery/1.11.1/jquery.min.js"></script> <script type="text/javascript"> function atualizarqr() { // use jQuery to set the src attribute of the img element $("#img").attr("src", "<?= $this->admin_m->qr_whatsapp(); ?>"); } window.onload = function () { // update the image every 5 seconds setInterval(atualizarqr, 5000); }; </script> <!-- remove the onload attribute from the div element --> <div> <img src="" id="img"/> </div> In this updated code, the atualizarqr function is called every 5 seconds using the setInterval function. This function uses jQuery to set the src attribute of the img element to the URL of the image provided by the API. This will cause the image to update every 5 seconds. Note that the onload attribute is removed from the div element, as this attribute is used to specify a function to be called when the page loads, and it is not needed in this case.
image loads but does not update
I'm using this code, the image loads but is not updating. The code receives the image url through an API, this API every time it makes a request it generates a new image url <script src="https://ajax.googleapis.com/ajax/libs/jquery/1.11.1/jquery.min.js"></script> <script type="text/javascript"> function atualizarqr() { document.getElementById("img").src = "<?= $this->admin_m->qr_whatsapp(); ?>"; } window.onload = function () { setInterval(atualizarqr(), 5000); }; </script> <div onload="atualizarqr"> <img src="" id="img"/> </div>
[ "To make the image update, you can try the following changes to your code:\nAdd a setInterval call to the atualizarqr function, which will update the image every 5 seconds.\nUse the $ syntax to access the jQuery library and use the attr method to set the src attribute of the img element.\nHere is an updated version of your code with these changes:\n\n\n<script src=\"https://ajax.googleapis.com/ajax/libs/jquery/1.11.1/jquery.min.js\"></script>\n\n<script type=\"text/javascript\">\nfunction atualizarqr() {\n // use jQuery to set the src attribute of the img element\n $(\"#img\").attr(\"src\", \"<?= $this->admin_m->qr_whatsapp(); ?>\");\n}\n\nwindow.onload = function () {\n // update the image every 5 seconds\n setInterval(atualizarqr, 5000);\n};\n</script>\n\n<!-- remove the onload attribute from the div element -->\n<div>\n <img src=\"\" id=\"img\"/>\n</div>\n\n\n\nIn this updated code, the atualizarqr function is called every 5 seconds using the setInterval function. This function uses jQuery to set the src attribute of the img element to the URL of the image provided by the API. This will cause the image to update every 5 seconds.\nNote that the onload attribute is removed from the div element, as this attribute is used to specify a function to be called when the page loads, and it is not needed in this case.\n" ]
[ 0 ]
[]
[]
[ "ajax", "javascript", "php" ]
stackoverflow_0074662755_ajax_javascript_php.txt
Q: localStorage storing JSON as [object Object] When I try to use localStorage to store a JSON it becomes [object Object]; here is the code: const treino = [ { A: "Costas e Biceps" }, { B: "Membros inferiores e Ombro" }, { C: "Peito e Triceps" } ]; const treinoJSON = JSON.stringify(treino); localStorage.setItem("papa", treino); const treinoJSONSaved = localStorage.getItem("papa"); const treinoJSONSavedParsed = JSON.parse(treinoJSONSaved); console.log(treinoJSONSavedParsed); I've tried to "console.log" every step and the variable becomes a JSON with no problem; if I parse it outside of localStorage it works just fine, but when I store it, it doesn't work. A: You need to JSON.stringify the item and then JSON.parse the item when you read. The value must be a string: https://developer.mozilla.org/en-US/docs/Web/API/Storage/setItem You need to use treinoJSON as the value in setItem.
localStorage storing JSON as [object Object]
When I try to use localStorage to store a JSON it becomes [object Object]; here is the code: const treino = [ { A: "Costas e Biceps" }, { B: "Membros inferiores e Ombro" }, { C: "Peito e Triceps" } ]; const treinoJSON = JSON.stringify(treino); localStorage.setItem("papa", treino); const treinoJSONSaved = localStorage.getItem("papa"); const treinoJSONSavedParsed = JSON.parse(treinoJSONSaved); console.log(treinoJSONSavedParsed); I've tried to "console.log" every step and the variable becomes a JSON with no problem; if I parse it outside of localStorage it works just fine, but when I store it, it doesn't work.
[ "You need to JSON.stringify the item and then JSON.parse the item when you read. The value must be a string: https://developer.mozilla.org/en-US/docs/Web/API/Storage/setItem\nYou need to use treinoJSON as the value in setItem.\n" ]
[ 1 ]
[]
[]
[ "javascript", "json", "local_storage" ]
stackoverflow_0074662767_javascript_json_local_storage.txt
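Spelling out the fix from the answer above: the stringified value (treinoJSON) is what has to go into setItem, otherwise the array is coerced to a string like "[object Object],[object Object],[object Object]". A minimal corrected sketch:

const treino = [{ A: "Costas e Biceps" }, { B: "Membros inferiores e Ombro" }, { C: "Peito e Triceps" }];

// store the JSON string, not the array itself
localStorage.setItem("papa", JSON.stringify(treino));

// read it back and parse it into an array again
const treinoSalvo = JSON.parse(localStorage.getItem("papa"));
console.log(treinoSalvo); // [{ A: "Costas e Biceps" }, ...]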
Q: Pine Script strategy to trigger once per day? Need help coding Pine Script v5 strategy to trigger once per day? Would like to use "bar_index == 0", but do not know how to establish when the bar index re-starts at the beginning of the trading day. Please help! A: Solution: https://www.tradingview.com/pine-script-reference/v5/#fun_timeframe{dot}change The below tests if it's a new day if timeframe.change("D") ... A: I'm sure there is a better way, but this worked for me: var traded_today = false //use whatever hours:days for first candle of time series chart //using 15min here, but can be done using an input() first_candle = time(timeframe.period, "9:30-9:15:23456" if (first_candle) traded_today := false condition = your_condition and not traded_today if (condition) traded_today := true
Pine Script strategy to trigger once per day?
Need help coding Pine Script v5 strategy to trigger once per day? Would like to use "bar_index == 0", but do not know how to establish when the bar index re-starts at the beginning of the trading day. Please help!
[ "Solution: https://www.tradingview.com/pine-script-reference/v5/#fun_timeframe{dot}change\nThe below tests if it's a new day\nif timeframe.change(\"D\")\n ...\n\n", "I'm sure there is a better way, but this worked for me:\nvar traded_today = false\n\n//use whatever hours:days for first candle of time series chart\n//using 15min here, but can be done using an input()\nfirst_candle = time(timeframe.period, \"9:30-9:15:23456\"\n\nif (first_candle)\n traded_today := false\n \ncondition = your_condition and not traded_today\n\nif (condition)\n traded_today := true \n\n" ]
[ 0, 0 ]
[]
[]
[ "pine_script", "pinescript_v5", "tradingview_api" ]
stackoverflow_0073163747_pine_script_pinescript_v5_tradingview_api.txt
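Combining the two answers above into a single sketch: timeframe.change("D") clears a persistent flag on the first bar of each new day, and the flag blocks any further entries until the next day. The moving-average cross used as the entry condition is a placeholder assumption, not part of the original question.

//@version=5
strategy("Trigger once per day", overlay=true)

var bool tradedToday = false

// first bar of a new trading day: reset the flag
if timeframe.change("D")
    tradedToday := false

// placeholder signal – replace with your own condition
longSignal = ta.crossover(ta.sma(close, 9), ta.sma(close, 21))

if longSignal and not tradedToday
    strategy.entry("long", strategy.long)
    tradedToday := true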
Q: How to get distinct fruits where indicator =only B and make sure that fruit listed is not coded on any "A" Indicator column in proc sql How to get distinct fruits where indicator =only B and make sure that fruit listed is not coded on any "A" Indicator column in proc sql. I tried this but obviously its not working. EDITED: example; fruits Indicator apple A Strawberry B apple B Strawberry B Orange A Orange B Mango B Banana A Peach B Cherry A Output that I want: fruits Indicator Mango B Peach B strawberry B Note: even though Apple and Orange has A and B, we do not want those on the output since both have indicator i.e A. We want fruits that is not coded on any A indicator column. proc sql; create table unique as select distinct fruits, indicator from example where indicator='b' and fruits in(select distinct fruits from example where indicator='b'); quit; but this gets: fruits Indicator apple B orange B mango B peach B strawberry B I need to add another step ..where if fruit = both A and B indicator then do not get that value? A: It is not clear what you criteria is. If the goal is to select all of the observations that only have 'B' in INDICATOR then use an aggregate function in the having clause. select * from have group by fruits having min( indicator='B' ) = 1 ; Try these examples: select age , count(*) as count , max( sex='M') as any_male , min( sex='M') as all_male , max( sex='F') as any_female , min( sex='F') as all_female from sashelp.class group by age ; select * from sashelp.class group by age having min( sex='M' ) = 1 ;
How to get distinct fruits where indicator =only B and make sure that fruit listed is not coded on any "A" Indicator column in proc sql
How to get distinct fruits where indicator =only B and make sure that fruit listed is not coded on any "A" Indicator column in proc sql. I tried this but obviously its not working. EDITED: example; fruits Indicator apple A Strawberry B apple B Strawberry B Orange A Orange B Mango B Banana A Peach B Cherry A Output that I want: fruits Indicator Mango B Peach B strawberry B Note: even though Apple and Orange has A and B, we do not want those on the output since both have indicator i.e A. We want fruits that is not coded on any A indicator column. proc sql; create table unique as select distinct fruits, indicator from example where indicator='b' and fruits in(select distinct fruits from example where indicator='b'); quit; but this gets: fruits Indicator apple B orange B mango B peach B strawberry B I need to add another step ..where if fruit = both A and B indicator then do not get that value?
[ "It is not clear what you criteria is.\nIf the goal is to select all of the observations that only have 'B' in INDICATOR then use an aggregate function in the having clause.\nselect *\n from have\n group by fruits\n having min( indicator='B' ) = 1\n;\n\nTry these examples:\nselect age\n , count(*) as count\n , max( sex='M') as any_male\n , min( sex='M') as all_male\n , max( sex='F') as any_female\n , min( sex='F') as all_female\n from sashelp.class\n group by age \n;\n\nselect * from sashelp.class\ngroup by age\nhaving min( sex='M' ) = 1\n;\n\n\n" ]
[ 0 ]
[]
[]
[ "proc_sql", "sas", "sql_server" ]
stackoverflow_0074662650_proc_sql_sas_sql_server.txt
Q: Docker compose invalid type when setting up a postgres database I want to create a postgresql database locally, but I don't understand the problem. In my .env I have POSTGRESQL_ADDON_DB='test' so it's a string, so why do I have this error? my docker-compose.yaml: version: "3.9" services: app: build: . user: 'node' restart: always container_name: ${COMPOSE_PROJECT_NAME}-app working_dir: /usr/app/ command: npm run start:dev ports: - 8080:8080 volumes: - .:/usr/app - /usr/app/node_modules env_file: - fileName.env depends_on: - postgres postgres: image: postgres:11-alpine container_name: ${COMPOSE_PROJECT_NAME}-postgres environment: - POSTGRES_DB: ${POSTGRESQL_ADDON_DB} - POSTGRES_USER: ${POSTGRESQL_ADDON_USER} - POSTGRES_PASSWORD: ${POSTGRESQL_ADDON_PASSWORD} ports: - 5432:5432 volumes: - postgres:/var/lib/postgresql/data pgadmin: image: dpage/pgadmin4:latest container_name: ${COMPOSE_PROJECT_NAME}-pgadmin restart: always environment: - PGADMIN_DEFAULT_EMAIL=${PGADMIN_DEFAULT_EMAIL} - PGADMIN_DEFAULT_PASSWORD=${PGADMIN_DEFAULT_PASSWORD} - PGADMIN_LISTEN_PORT=80 - GUNICORN_ACCESS_LOGFILE='/dev/null' - PGADMIN_CONFIG_UPGRADE_CHECK_ENABLED='false' ports: - 80:80 - 443:443 volumes: - pgadmin-data:/var/lib/pgadmin logging: driver: none depends_on: - postgres volumes: pg-data: pgadmin-data: issue : ERROR: The Compose file './docker-compose.yaml' is invalid because: services.postgres.environment contains {"POSTGRES_DB": "test"}, which is an invalid type, it should be a string I already tried to put a static value: environment: - POSTGRES_DB: 'test' but nothing changes. A: It looks like the problem is that you are defining the POSTGRES_DB environment variable as an array, but it should be a string. In your docker-compose.yaml file, you need to change this line: environment: - POSTGRES_DB: ${POSTGRESQL_ADDON_DB} to this: environment: POSTGRES_DB: ${POSTGRESQL_ADDON_DB} Notice that I removed the - before POSTGRES_DB, which is what is causing the error. When you define environment variables in a Docker Compose file, each variable should be on its own line without a - at the beginning.
Docker compose invalid type when setting up a postgres database
I want to create a postgresql database locally, but I don't understand the problem. In my .env I have POSTGRESQL_ADDON_DB='test' so it's a string, so why do I have this error? my docker-compose.yaml: version: "3.9" services: app: build: . user: 'node' restart: always container_name: ${COMPOSE_PROJECT_NAME}-app working_dir: /usr/app/ command: npm run start:dev ports: - 8080:8080 volumes: - .:/usr/app - /usr/app/node_modules env_file: - fileName.env depends_on: - postgres postgres: image: postgres:11-alpine container_name: ${COMPOSE_PROJECT_NAME}-postgres environment: - POSTGRES_DB: ${POSTGRESQL_ADDON_DB} - POSTGRES_USER: ${POSTGRESQL_ADDON_USER} - POSTGRES_PASSWORD: ${POSTGRESQL_ADDON_PASSWORD} ports: - 5432:5432 volumes: - postgres:/var/lib/postgresql/data pgadmin: image: dpage/pgadmin4:latest container_name: ${COMPOSE_PROJECT_NAME}-pgadmin restart: always environment: - PGADMIN_DEFAULT_EMAIL=${PGADMIN_DEFAULT_EMAIL} - PGADMIN_DEFAULT_PASSWORD=${PGADMIN_DEFAULT_PASSWORD} - PGADMIN_LISTEN_PORT=80 - GUNICORN_ACCESS_LOGFILE='/dev/null' - PGADMIN_CONFIG_UPGRADE_CHECK_ENABLED='false' ports: - 80:80 - 443:443 volumes: - pgadmin-data:/var/lib/pgadmin logging: driver: none depends_on: - postgres volumes: pg-data: pgadmin-data: issue : ERROR: The Compose file './docker-compose.yaml' is invalid because: services.postgres.environment contains {"POSTGRES_DB": "test"}, which is an invalid type, it should be a string I already tried to put a static value: environment: - POSTGRES_DB: 'test' but nothing changes.
[ "It looks like the problem is that you are defining the POSTGRES_DB environment variable as an array, but it should be a string. In your docker-compose.yaml file, you need to change this line:\nenvironment:\n - POSTGRES_DB: ${POSTGRESQL_ADDON_DB}\n\nto this:\nenvironment:\n POSTGRES_DB: ${POSTGRESQL_ADDON_DB}\n\nNotice that I removed the - before POSTGRES_DB, which is what is causing the error. When you define environment variables in a Docker Compose file, each variable should be on its own line without a - at the beginning.\n" ]
[ 1 ]
[]
[]
[ "docker", "docker_compose", "postgresql" ]
stackoverflow_0074662747_docker_docker_compose_postgresql.txt
Q: How to iterate through nested dynamic JSON file in Flutter So I have two JSON files with different fields. { "password": { "length": 5, "reset": true}, } "dataSettings": { "enabled": true, "algorithm": { "djikstra": true }, "country": { "states": {"USA": true, "Romania": false}} } I want to be able to use the same code to be able to print out all the nested fields and its values in the JSON. I tried using a [JSON to Dart converter package](https://javiercbk.github.io/json_to_dart/. However, it seems like using this would make it so I would have to hardcode all the values, since I would retrieve it by doing item.dataSettings.country.states.USA which is a hardcoded method of doing it. Instead I want a way to loop through all the nested values and print it out without having to write it out myself. A: You can use ´dart:convert´ to convert the JSON string into an object of type Map<String, dynamic>: import 'dart:convert'; final jsonData = "{'someKey' : 'someValue'}"; final parsedJson = jsonDecode(jsonData); You can then iterate over this dict like any other: parsedJson.forEach((key, value) { // ... Do something with the key and value )); For your use case of listing all keys and values, a recursive implementation might be most easy to implement: void printMapContent(Map<String, dynamic> map) { parsedJson.forEach((key, value) { print("Key: $key"); if (value is String) { print("Value: $value"); } else if (value is Map<String, dynamic>) { // Recursive call printMapContent(value); } )); } But be aware that this type of recursive JSON parsing is generally not recommendable, because it is very unstructured and prone to errors. You should know what your data structure coming from the backend looks like and parse this data into well-structured objects. There you can also perform input validation and verify the data is reasonable. You can read up on the topic of "JSON parsing in dart", e.g. in this blog article.
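A self-contained sketch of the recursive walk described above — note that inside the helper the recursion should iterate the map argument that was passed in, and a real JSON tree can also contain lists; the sample keys come from the question:
import 'dart:convert';

void printJson(dynamic node, [String prefix = '']) {
  if (node is Map<String, dynamic>) {
    // recurse into nested objects, building a dotted key path
    node.forEach((key, value) => printJson(value, '$prefix$key.'));
  } else if (node is List) {
    for (final item in node) {
      printJson(item, prefix);
    }
  } else {
    // leaf value (String, num, bool or null)
    print('${prefix.isEmpty ? prefix : prefix.substring(0, prefix.length - 1)}: $node');
  }
}

void main() {
  const raw = '{"password": {"length": 5, "reset": true}}';
  printJson(jsonDecode(raw) as Map<String, dynamic>);
  // prints: password.length: 5
  //         password.reset: true
}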
How to iterate through nested dynamic JSON file in Flutter
So I have two JSON files with different fields. { "password": { "length": 5, "reset": true}, } "dataSettings": { "enabled": true, "algorithm": { "djikstra": true }, "country": { "states": {"USA": true, "Romania": false}} } I want to be able to use the same code to be able to print out all the nested fields and its values in the JSON. I tried using a [JSON to Dart converter package](https://javiercbk.github.io/json_to_dart/. However, it seems like using this would make it so I would have to hardcode all the values, since I would retrieve it by doing item.dataSettings.country.states.USA which is a hardcoded method of doing it. Instead I want a way to loop through all the nested values and print it out without having to write it out myself.
[ "You can use ´dart:convert´ to convert the JSON string into an object of type Map<String, dynamic>:\nimport 'dart:convert';\n\nfinal jsonData = \"{'someKey' : 'someValue'}\";\nfinal parsedJson = jsonDecode(jsonData);\n\nYou can then iterate over this dict like any other:\nparsedJson.forEach((key, value) {\n\n // ... Do something with the key and value\n\n));\n\nFor your use case of listing all keys and values, a recursive implementation might be most easy to implement:\nvoid printMapContent(Map<String, dynamic> map) {\n parsedJson.forEach((key, value) {\n print(\"Key: $key\"); \n if (value is String) {\n print(\"Value: $value\");\n } else if (value is Map<String, dynamic>) {\n // Recursive call\n printMapContent(value);\n }\n )); \n}\n\nBut be aware that this type of recursive JSON parsing is generally not recommendable, because it is very unstructured and prone to errors. You should know what your data structure coming from the backend looks like and parse this data into well-structured objects.\nThere you can also perform input validation and verify the data is reasonable.\nYou can read up on the topic of \"JSON parsing in dart\", e.g. in this blog article.\n" ]
[ 1 ]
[]
[]
[ "dart", "flutter", "fromjson", "json", "server" ]
stackoverflow_0074648619_dart_flutter_fromjson_json_server.txt
Q: How to use rtweet on rstudio.cloud? I want to authenticate with Twitter (Rtweet package) via rstudio.cloud. The problem is that the authentication opens a new page each time where I am supposed to authorise via Twitter. When I am redirected back from there, I end up in nirvana. library (rtweet) > search_users("#ICForumCH", n = 10) Requesting token on behalf of user... Waiting for authentication in browser... Press Esc/Ctrl + C to abort -> Twitter authentication Page -> Hmmm… can't reach this page 127.0.0.1 refused to connect. I found this solution from community.rstudio.com but cannot seem to make it work. Oh, and please don't tell me I need a desktop version. I will never get the necessary permissions at my workplace. A: If You have The twitter account logged in your default browser this may work for you. Run this code first, library(rtweet) auth_setup_default() # Using default authentication available. # Reading auth from 'C:\Users\XXX\AppData\Roaming/R/config/R/rtweet/default.rds' You can take default.rds file and upload it to your RStudio.cloud folder. Just use auth_as("foldername/default.rds") before post a tweet.
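Putting the steps above together on the cloud project — the file path is an assumption, adjust it to wherever you uploaded the token file:
library(rtweet)
# load the token file generated locally with auth_setup_default() and uploaded here
auth_as("default.rds")   # e.g. "auth/default.rds" if you put it in a subfolder
# then the original call from the question should work
search_users("#ICForumCH", n = 10)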
How to use rtweet on rstudio.cloud?
I want to authenticate with Twitter (Rtweet package) via rstudio.cloud. The problem is that the authentication opens a new page each time where I am supposed to authorise via Twitter. When I am redirected back from there, I end up in nirvana. library (rtweet) > search_users("#ICForumCH", n = 10) Requesting token on behalf of user... Waiting for authentication in browser... Press Esc/Ctrl + C to abort -> Twitter authentication Page -> Hmmm… can't reach this page 127.0.0.1 refused to connect. I found this solution from community.rstudio.com but cannot seem to make it work. Oh, and please don't tell me I need a desktop version. I will never get the necessary permissions at my workplace.
[ "If You have The twitter account logged in your default browser this may work for you.\nRun this code first,\nlibrary(rtweet)\nauth_setup_default()\n# Using default authentication available.\n# Reading auth from 'C:\\Users\\XXX\\AppData\\Roaming/R/config/R/rtweet/default.rds'\n\nYou can take default.rds file and upload it to your RStudio.cloud folder.\nJust use\nauth_as(\"foldername/default.rds\")\n\nbefore post a tweet.\n" ]
[ 0 ]
[]
[]
[ "r", "rstudio", "rtweet" ]
stackoverflow_0071764112_r_rstudio_rtweet.txt
Q: How do I handle running things like Google Cloud API Gateway locally? I have a project that uses API gateway to handle security. When it does this it forwards the header to x-forwarded-authorization and a bunch of other stuff. Is there a way to recreate this so a dev can run all of these locally? I see tickets like this (Serverless API Gateway on GCP) suggest ESPv2 (https://github.com/GoogleCloudPlatform/esp-v2) Or I know I can throw something together with a reverse proxy like NGINX but what is the correct way to handle this? A: It is not possible to run the API gateway locally, and it appears that this feature is not available at this time. This is perhaps because Google API Gateway is built on envoy and it's tightly integrated with other live services. If you need a local setup that is close to API Gateway's functionality, use ESPv2 on your local machine. ESPv2 integrates with Google Service Infrastructure to enable API management features at scale, including authentication, telemetry reports, metrics, and security. You can check this stackoverflow thread on how to deploy ESPv2 locally.
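For the reverse-proxy route mentioned above, a minimal NGINX sketch that mimics the header forwarding — the backend port and the exact header semantics are assumptions, not a faithful copy of what API Gateway/ESPv2 does:
server {
    listen 8080;
    location / {
        # pass the caller's token along the way the gateway would
        proxy_set_header X-Forwarded-Authorization $http_authorization;
        proxy_pass http://localhost:3000;   # hypothetical local backend
    }
}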
How do I handle running things like Google Cloud API Gateway locally?
I have a project that uses API gateway to handle security. When it does this it forwards the header to x-forwarded-authorization and a bunch of other stuff. Is there a way to recreate this so a dev can run all of these locally? I see tickets like this (Serverless API Gateway on GCP) suggest ESPv2 (https://github.com/GoogleCloudPlatform/esp-v2) Or I know I can throw something together with a reverse proxy like NGINX but what is the correct way to handle this?
[ "It is not possible to run the API gateway locally, and it appears that this feature is not available at this time. This is perhaps because Google API Gateway is built on envoy and it's tightly integrated with other live services.\nIf you need a local setup that is close to API Gateway's functionality, use ESPv2 on your local machine. ESPv2 integrates with Google Service Infrastructure to enable API management features at scale, including authentication, telemetry reports, metrics, and security. You can check this stackoverflow thread on how to deploy ESPv2 locally.\n" ]
[ 0 ]
[]
[]
[ "docker", "google_api_gateway", "google_cloud_platform" ]
stackoverflow_0074658275_docker_google_api_gateway_google_cloud_platform.txt
Q: Infinite loop when comparing input with list I want to compare the input with the list, but the else loop continues infinitely even when i give it a input that matches the list. def check_input(): while True: guessing_range = input("Please enter a guessing range.") if guessing_range.isdigit: guessing_range = int(guessing_range) if guessing_range in lst: break else: print("Guessing range must be 10, 100, or 1000!") continue return guessing_range check_input() A: It looks like your check_input function is trying to prompt the user to enter a guessing range, and then check whether the entered value is in a list of acceptable guessing ranges. However, there are a few issues with the code. First, when you check whether guessing_range is in the list, you are using the in keyword to check whether guessing_range is in the list. However, guessing_range is a string at this point, so the in keyword will always return False, even if the string is an acceptable guessing range. This is because the in keyword checks for identity, not equality, so it will only return True if the string is the same object as one of the elements in the list, which is unlikely to be the case. To fix this, you can convert the string to an integer before checking whether it is in the list. This way, you will be checking whether the integer value of guessing_range is in the list, rather than the string itself. Another issue is that you are using the isdigit method to check whether guessing_range is a digit. However, the isdigit method returns a bool value indicating whether the string is a digit, so you need to call it like a function. In other words, you need to add parentheses after isdigit to actually call the method and get the return value. Here is how you could modify your check_input function to fix these issues: def check_input(): while True: guessing_range = input("Please enter a guessing range.") if guessing_range.isdigit(): # Call the isdigit method to get the return value guessing_range = int(guessing_range) # Convert the string to an integer if guessing_range in lst: # Check whether the integer value is in the list break else: print("Guessing range must be 10, 100, or 1000!") continue return guessing_range With these changes, your check_input function should be able to check whether the user's input is an acceptable guessing range and break out of the while loop if it is. A: There is an error in the line if guessing_range.isdigit:. isdigit() is a method, so it needs to be called using parenthesis. The correct syntax would be if guessing_range.isdigit():. Additionally, the continue statement at the end of the loop is unnecessary as the loop will continue iterating automatically. There are a few other issues with the code: The lst variable is not defined in the code, so the if guessing_range in lst: line will throw a NameError. The list of valid guessing ranges needs to be defined before it can be used in the check_input() function. The guessing_range variable is assigned an integer value only if the user input is a digit. If the user enters a non-digit value, the guessing_range variable will still have the original string value, which will not be in the list of valid ranges. This will cause the if guessing_range in lst: line to always return False. 
To fix these issues, the code can be modified as follows: lst = [10, 100, 1000] def check_input(): while True: guessing_range = input("Please enter a guessing range.") if guessing_range.isdigit(): guessing_range = int(guessing_range) else: print("Guessing range must be a number!") continue if guessing_range in lst: break else: print("Guessing range must be 10, 100, or 1000!") return guessing_range check_input()
Infinite loop when comparing input with list
I want to compare the input with the list, but the else loop continues infinitely even when i give it a input that matches the list. def check_input(): while True: guessing_range = input("Please enter a guessing range.") if guessing_range.isdigit: guessing_range = int(guessing_range) if guessing_range in lst: break else: print("Guessing range must be 10, 100, or 1000!") continue return guessing_range check_input()
[ "It looks like your check_input function is trying to prompt the user to enter a guessing range, and then check whether the entered value is in a list of acceptable guessing ranges. However, there are a few issues with the code.\nFirst, when you check whether guessing_range is in the list, you are using the in keyword to check whether guessing_range is in the list. However, guessing_range is a string at this point, so the in keyword will always return False, even if the string is an acceptable guessing range. This is because the in keyword checks for identity, not equality, so it will only return True if the string is the same object as one of the elements in the list, which is unlikely to be the case.\nTo fix this, you can convert the string to an integer before checking whether it is in the list. This way, you will be checking whether the integer value of guessing_range is in the list, rather than the string itself.\nAnother issue is that you are using the isdigit method to check whether guessing_range is a digit. However, the isdigit method returns a bool value indicating whether the string is a digit, so you need to call it like a function. In other words, you need to add parentheses after isdigit to actually call the method and get the return value.\nHere is how you could modify your check_input function to fix these issues:\ndef check_input():\n while True:\n guessing_range = input(\"Please enter a guessing range.\")\n if guessing_range.isdigit(): # Call the isdigit method to get the return value\n guessing_range = int(guessing_range) # Convert the string to an integer\n if guessing_range in lst: # Check whether the integer value is in the list\n break\n else:\n print(\"Guessing range must be 10, 100, or 1000!\")\n continue\n return guessing_range\n\nWith these changes, your check_input function should be able to check whether the user's input is an acceptable guessing range and break out of the while loop if it is.\n", "There is an error in the line if guessing_range.isdigit:. isdigit() is a method, so it needs to be called using parenthesis. The correct syntax would be if guessing_range.isdigit():.\nAdditionally, the continue statement at the end of the loop is unnecessary as the loop will continue iterating automatically.\nThere are a few other issues with the code:\n\nThe lst variable is not defined in the code, so the if guessing_range in lst: line will throw a NameError. The list of valid guessing ranges needs to be defined before it can be used in the check_input() function.\nThe guessing_range variable is assigned an integer value only if the user input is a digit. If the user enters a non-digit value, the guessing_range variable will still have the original string value, which will not be in the list of valid ranges. This will cause the if guessing_range in lst: line to always return False.\n\nTo fix these issues, the code can be modified as follows:\nlst = [10, 100, 1000]\n\ndef check_input():\n while True:\n guessing_range = input(\"Please enter a guessing range.\")\n if guessing_range.isdigit():\n guessing_range = int(guessing_range)\n else:\n print(\"Guessing range must be a number!\")\n continue\n if guessing_range in lst:\n break\n else:\n print(\"Guessing range must be 10, 100, or 1000!\")\n return guessing_range\n\ncheck_input()\n\n" ]
[ 0, 0 ]
[]
[]
[ "input", "iteration", "loops" ]
stackoverflow_0074662684_input_iteration_loops.txt
Q: How would I use React Hook UseEffect in order to create a function with searchMovies and searchTitles? This is code for the interface export interface ActorAttributes { TYPE?: string, NAME?: string, } export interface MovieAttributes { OBJECTID: number, SID: string, NAME: string, DIRECTOR: string, DESCRIP: string, } This is the code for my App.tsx import { searchMovies, searchActors, MovieAttributes, ActorAttributes } from "@utils/atts" const Home: React.FC = () => { const [search, setSearch] = React.useState(false) const [movieSearch, setMovieSearch] = React.useState<MovieAttributes[]>([]); const [actorSearch, setActorSearch] = React.useState<ActorAttributes>([]); const demo = async () => { setSearch(true) const demoMovieSearch = await searchMovies("Dumbo") setMovieSearch(demoMovieSearch) console.log("Movie example", demoMovieSearch) const demoActorSearch = await searchActors("j", demoDistrictSearch[1].SID) setActorSearch(demoActorSearch) console.log("Actor Example", demoActorSearch) setSearching(false) } This is what I've tried so far with useEffect. My goal is to implement a search bar function by using useEffect. I apologize in advance if there are errors within my code as I am fairly new to react. If anyone has any tips, ideas, suggestions, etc. please feel free to leave a comment. useEffect(() => { demo() }, []) A: Here is one way you can use the useEffect hook to create a search function: import { useEffect, useState } from 'react'; import { searchMovies, searchActors, MovieAttributes, ActorAttributes } from "@utils/atts" const Home: React.FC = () => { const [search, setSearch] = useState(false); const [movieSearch, setMovieSearch] = useState<MovieAttributes[]>([]); const [actorSearch, setActorSearch] = useState<ActorAttributes>([]); const [searchTerm, setSearchTerm] = useState(''); useEffect(() => { // Perform the search when the searchTerm state changes if (searchTerm.length > 0) { setSearch(true); searchMovies(searchTerm).then(results => { setMovieSearch(results); }); searchActors(searchTerm, movieSearch[1].SID).then(results => { setActorSearch(results); }); setSearch(false); } }, [searchTerm, movieSearch]); // Handle changes to the search term input field const handleSearchTermChange = (event: React.ChangeEvent<HTMLInputElement>) => { setSearchTerm(event.target.value); } return ( <div> {/* The search input field */} <input type="text" value={searchTerm} onChange={handleSearchTermChange} /> {/* Display the search results */} {search && <p>Searching...</p>} {movieSearch.length > 0 && <p>Found {movieSearch.length} movies</p>} {actorSearch.length > 0 && <p>Found {actorSearch.length} actors</p>} </div> ); } The useEffect hook is called every time the searchTerm state changes. Inside the hook, we perform the search using the searchMovies and searchActors functions. We update the movieSearch and actorSearch state variables with the results of the search. We also have an input field that allows the user to enter the search term. The handleSearchTermChange function is called whenever the user enters a new search term, and it updates the searchTerm state variable with the new value. Finally, we display the search results and a "Searching..." message while the search is in progress.
How would I use React Hook UseEffect in order to create a function with searchMovies and searchTitles?
This is code for the interface export interface ActorAttributes { TYPE?: string, NAME?: string, } export interface MovieAttributes { OBJECTID: number, SID: string, NAME: string, DIRECTOR: string, DESCRIP: string, } This is the code for my App.tsx import { searchMovies, searchActors, MovieAttributes, ActorAttributes } from "@utils/atts" const Home: React.FC = () => { const [search, setSearch] = React.useState(false) const [movieSearch, setMovieSearch] = React.useState<MovieAttributes[]>([]); const [actorSearch, setActorSearch] = React.useState<ActorAttributes>([]); const demo = async () => { setSearch(true) const demoMovieSearch = await searchMovies("Dumbo") setMovieSearch(demoMovieSearch) console.log("Movie example", demoMovieSearch) const demoActorSearch = await searchActors("j", demoDistrictSearch[1].SID) setActorSearch(demoActorSearch) console.log("Actor Example", demoActorSearch) setSearching(false) } This is what I've tried so far with useEffect. My goal is to implement a search bar function by using useEffect. I apologize in advance if there are errors within my code as I am fairly new to react. If anyone has any tips, ideas, suggestions, etc. please feel free to leave a comment. useEffect(() => { demo() }, [])
[ "Here is one way you can use the useEffect hook to create a search function:\nimport { useEffect, useState } from 'react';\nimport { searchMovies, searchActors, MovieAttributes, ActorAttributes } from \"@utils/atts\"\n\nconst Home: React.FC = () => {\n const [search, setSearch] = useState(false);\n const [movieSearch, setMovieSearch] = useState<MovieAttributes[]>([]);\n const [actorSearch, setActorSearch] = useState<ActorAttributes>([]);\n const [searchTerm, setSearchTerm] = useState('');\n\n useEffect(() => {\n // Perform the search when the searchTerm state changes\n if (searchTerm.length > 0) {\n setSearch(true);\n\n searchMovies(searchTerm).then(results => {\n setMovieSearch(results);\n });\n\n searchActors(searchTerm, movieSearch[1].SID).then(results => {\n setActorSearch(results);\n });\n\n setSearch(false);\n }\n }, [searchTerm, movieSearch]);\n\n // Handle changes to the search term input field\n const handleSearchTermChange = (event: React.ChangeEvent<HTMLInputElement>) => {\n setSearchTerm(event.target.value);\n }\n\n return (\n <div>\n {/* The search input field */}\n <input type=\"text\" value={searchTerm} onChange={handleSearchTermChange} />\n\n {/* Display the search results */}\n {search && <p>Searching...</p>}\n {movieSearch.length > 0 && <p>Found {movieSearch.length} movies</p>}\n {actorSearch.length > 0 && <p>Found {actorSearch.length} actors</p>}\n </div>\n );\n}\n\nThe useEffect hook is called every time the searchTerm state changes. Inside the hook, we perform the search using the searchMovies and searchActors functions. We update the movieSearch and actorSearch state variables with the results of the search.\nWe also have an input field that allows the user to enter the search term. The handleSearchTermChange function is called whenever the user enters a new search term, and it updates the searchTerm state variable with the new value.\nFinally, we display the search results and a \"Searching...\" message while the search is in progress.\n" ]
[ 0 ]
[]
[]
[ "react_hooks", "reactjs", "typescript" ]
stackoverflow_0074662184_react_hooks_reactjs_typescript.txt
Q: jQuery How to add a class to a div if another div has specific class if any of the webpage elements has is-open class, add open class to another div. doesn't work is-open is added to div's each time a modal or tab is opened on the page. <script> if($(".is-open").length){ $(".blur-screen").addClass("open"); } else { $(".blur-screen").removeClass("open"); } </script> A: The code in your script runs immediately (before is-open is added to a div, because that only happens if a modal or tab is open, which probably doesn't happen immediately when page is loaded). what you need to do is to call a function that will check it every time a modal/tab is opened function checkIsOpen() { if($(".is-open").length){ $(".blur-screen").addClass("open"); } else { $(".blur-screen").removeClass("open"); } } when modal/tab opens: checkIsOpen(); A: Try this, if($("div").hasClass('is-open')){ $(".blur-screen").addClass("open"); } else { $(".blur-screen").removeClass("open"); }
jQuery How to add a class to a div if another div has specific class
if any of the webpage elements has is-open class, add open class to another div. doesn't work is-open is added to div's each time a modal or tab is opened on the page. <script> if($(".is-open").length){ $(".blur-screen").addClass("open"); } else { $(".blur-screen").removeClass("open"); } </script>
[ "The code in your script runs immediately (before is-open is added to a div, because that only happens if a modal or tab is open, which probably doesn't happen immediately when page is loaded).\nwhat you need to do is to call a function that will check it every time a modal/tab is opened\nfunction checkIsOpen() {\n if($(\".is-open\").length){\n $(\".blur-screen\").addClass(\"open\");\n } else {\n $(\".blur-screen\").removeClass(\"open\"); \n } \n}\n\nwhen modal/tab opens:\ncheckIsOpen();\n\n", "Try this,\nif($(\"div\").hasClass('is-open')){\n $(\".blur-screen\").addClass(\"open\");\n} else {\n $(\".blur-screen\").removeClass(\"open\"); \n} \n\n" ]
[ 0, 0 ]
[]
[]
[ "javascript", "jquery" ]
stackoverflow_0074656897_javascript_jquery.txt
Q: Visual Studio 2022: Git Diff for all changed files Is it possible in Visual Studio 2022 to browse through all changed files? In the Git Changes window, a double click on a single file opens the diff, but I would like to review all the changed files before committing. (In JetBrains Rider, it's possible to browse through all changed files in the Diff window) A: I don't believe single-click diff is currently possible before committing (it is after). But we can up-vote this suggestion for it.
Visual Studio 2022: Git Diff for all changed files
Is it possible in Visual Studio 2022 to browse through all changed files? In the Git Changes window, a double click on a single file opens the diff, but I would like to review all the changed files before committing. (In JetBrains Rider, it's possible to browse through all changed files in the Diff window)
[ "I don't believe single-click diff is currently possible before committing (it is after). But we can up-vote this suggestion for it.\n" ]
[ 0 ]
[]
[]
[ "visual_studio", "visual_studio_2022" ]
stackoverflow_0070650984_visual_studio_visual_studio_2022.txt
Q: Express And Ejs Unexpected identifier ** I came across this error and tried to solve it using the git... But i cant really find out the real solution to to problem.. Is it the EJS template or form the app. Creating a To Do list app from web development app... I also tried installing ejs-lint to see the problem...But that also gave it own error** SyntaxError: Unexpected identifier in /home/abuhavictor/Documents/programking/EJS/todolist-v1/views/list.ejs while compiling ejs If the above error is not helpful, you may want to try EJS-Lint: https://github.com/RyanZim/EJS-Lint Or, if you meant to create an async function, pass `async: true` as an option.    at new Function (<anonymous>)    at Template.compile (/home/abuhavictor/Documents/programking/EJS/todolist-v1/node_modules/ejs/lib/ejs.js:673:12)    at Object.compile (/home/abuhavictor/Documents/programking/EJS/todolist-v1/node_modules/ejs/lib/ejs.js:398:16)    at handleCache (/home/abuhavictor/Documents/programking/EJS/todolist-v1/node_modules/ejs/lib/ejs.js:235:18)    at tryHandleCache (/home/abuhavictor/Documents/programking/EJS/todolist-v1/node_modules/ejs/lib/ejs.js:274:16)    at View.exports.renderFile [as engine] (/home/abuhavictor/Documents/programking/EJS/todolist-v1/node_modules/ejs/lib/ejs.js:491:10)    at View.render (/home/abuhavictor/Documents/programking/EJS/todolist-v1/node_modules/express/lib/view.js:135:8)    at tryRender (/home/abuhavictor/Documents/programking/EJS/todolist-v1/node_modules/express/lib/application.js:657:10)    at Function.render (/home/abuhavictor/Documents/programking/EJS/todolist-v1/node_modules/express/lib/application.js:609:3)    at ServerResponse.render (/home/abuhavictor/Documents/programking/EJS/todolist-v1/node_modules/express/lib/response.js:1039:7) list.ejs <!DOCTYPE html> <html lang="en"> <head> <meta charset="UTF-8" /> <meta http-equiv="X-UA-Compatible" content="IE=edge" /> <meta name="viewport" content="width=device-width, initial-scale=1.0" /> <title>TO DO LIST</title> </head> <body> <% if (kindOfDay === "Saturday" || kindOfDay === "Sunday"){ %> <h1 style="color: red"><%= kindOfDay %> ToDo list</h1> <% }else { %> <h1 style="color: blue"><%= kindOfDay %> ToDo list</h1> <% } %> </body> </html> app.js const express = require('express'); const bodyparser = require('body-parser'); const port = 5000 const app = express(); app.set('view engine', 'ejs'); app.get("/", function (req, res) { var today = new Date(); var currentDay = today.getDay(); var day = ""; switch (currentDay) { case 0: day = 'Sunday'; break; case 1: day = 'Monday'; break; case 2: day = 'Tuesday'; break; case 3: day = 'Wednesday'; break; case 4: day = 'Thursday'; break; case 5: day = 'Friday'; break; case 6: day = 'Saturday'; break; default: console.log("Error: Current day is equals to " + currentDay); break; } res.render('list', { kindOfDay: day }); }); app.listen(port, function () { console.log('server started on port ' + port); }); A: The error message you are seeing indicates that there is a syntax error in your list.ejs file. In particular, it looks like there is an unexpected identifier (i.e. a character that is not valid in JavaScript) on the line that contains only three backtick characters (```). To fix this error, you can either remove this line entirely, or replace the backtick characters with a string or code that is valid in JavaScript. 
For example, you could replace the line with the following code: <% if (kindOfDay === "Saturday" || kindOfDay === "Sunday"){ %> <h1 style="color: red"><%= kindOfDay %> ToDo list</h1> <% }else { %> <h1 style="color: blue"><%= kindOfDay %> ToDo list</h1> <% } %> This code uses an if statement to check if the value of the kindOfDay variable is "Saturday" or "Sunday". If it is, the h1 element is given a red color, and if not, it is given a blue color. This code should fix the syntax error and allow your app to run without any problems.
Express And Ejs Unexpected identifier
** I came across this error and tried to solve it using the git... But i cant really find out the real solution to to problem.. Is it the EJS template or form the app. Creating a To Do list app from web development app... I also tried installing ejs-lint to see the problem...But that also gave it own error** SyntaxError: Unexpected identifier in /home/abuhavictor/Documents/programking/EJS/todolist-v1/views/list.ejs while compiling ejs If the above error is not helpful, you may want to try EJS-Lint: https://github.com/RyanZim/EJS-Lint Or, if you meant to create an async function, pass `async: true` as an option.    at new Function (<anonymous>)    at Template.compile (/home/abuhavictor/Documents/programking/EJS/todolist-v1/node_modules/ejs/lib/ejs.js:673:12)    at Object.compile (/home/abuhavictor/Documents/programking/EJS/todolist-v1/node_modules/ejs/lib/ejs.js:398:16)    at handleCache (/home/abuhavictor/Documents/programking/EJS/todolist-v1/node_modules/ejs/lib/ejs.js:235:18)    at tryHandleCache (/home/abuhavictor/Documents/programking/EJS/todolist-v1/node_modules/ejs/lib/ejs.js:274:16)    at View.exports.renderFile [as engine] (/home/abuhavictor/Documents/programking/EJS/todolist-v1/node_modules/ejs/lib/ejs.js:491:10)    at View.render (/home/abuhavictor/Documents/programking/EJS/todolist-v1/node_modules/express/lib/view.js:135:8)    at tryRender (/home/abuhavictor/Documents/programking/EJS/todolist-v1/node_modules/express/lib/application.js:657:10)    at Function.render (/home/abuhavictor/Documents/programking/EJS/todolist-v1/node_modules/express/lib/application.js:609:3)    at ServerResponse.render (/home/abuhavictor/Documents/programking/EJS/todolist-v1/node_modules/express/lib/response.js:1039:7) list.ejs <!DOCTYPE html> <html lang="en"> <head> <meta charset="UTF-8" /> <meta http-equiv="X-UA-Compatible" content="IE=edge" /> <meta name="viewport" content="width=device-width, initial-scale=1.0" /> <title>TO DO LIST</title> </head> <body> <% if (kindOfDay === "Saturday" || kindOfDay === "Sunday"){ %> <h1 style="color: red"><%= kindOfDay %> ToDo list</h1> <% }else { %> <h1 style="color: blue"><%= kindOfDay %> ToDo list</h1> <% } %> </body> </html> app.js const express = require('express'); const bodyparser = require('body-parser'); const port = 5000 const app = express(); app.set('view engine', 'ejs'); app.get("/", function (req, res) { var today = new Date(); var currentDay = today.getDay(); var day = ""; switch (currentDay) { case 0: day = 'Sunday'; break; case 1: day = 'Monday'; break; case 2: day = 'Tuesday'; break; case 3: day = 'Wednesday'; break; case 4: day = 'Thursday'; break; case 5: day = 'Friday'; break; case 6: day = 'Saturday'; break; default: console.log("Error: Current day is equals to " + currentDay); break; } res.render('list', { kindOfDay: day }); }); app.listen(port, function () { console.log('server started on port ' + port); });
[ "The error message you are seeing indicates that there is a syntax error in your list.ejs file. In particular, it looks like there is an unexpected identifier (i.e. a character that is not valid in JavaScript) on the line that contains only three backtick characters (```).\nTo fix this error, you can either remove this line entirely, or replace the backtick characters with a string or code that is valid in JavaScript. For example, you could replace the line with the following code:\n <% if (kindOfDay === \"Saturday\" || kindOfDay === \"Sunday\"){ %>\n <h1 style=\"color: red\"><%= kindOfDay %> ToDo list</h1>\n <% }else { %>\n <h1 style=\"color: blue\"><%= kindOfDay %> ToDo list</h1>\n <% } %>\n\nThis code uses an if statement to check if the value of the kindOfDay variable is \"Saturday\" or \"Sunday\". If it is, the h1 element is given a red color, and if not, it is given a blue color. This code should fix the syntax error and allow your app to run without any problems.\n" ]
[ 0 ]
[]
[]
[ "ejs", "express", "html", "node.js" ]
stackoverflow_0074662776_ejs_express_html_node.js.txt
Q: Unable to link css to html I wanted to add some css to my html, but whenever I tried to use external css, the style is never applied. When I use inline or internal css everything works as intended, and I can't find what went wrong. Here I am using external css for my first div and inline css for my second div. The css applied to the two divs are basically the same. The css for the first div doesn't work but it does on the second so the problem isn't with the code, it with linking to the css file, but I should be doing the link tag right. The two files are definitely in the same folder. .q { position: absolute; left: 50%; top: 50%; transform: translate(-50%, -50%); } <!DOCTYPE html> <html lang="en"> <head> <title>Search</title> <link rel="stylesheet" href="index.css" /> </head> <body> <form action="https://google.com/search"> <div class="q"> <input type="text" name="q"> </div> <div class="button" style="position:absolute; left:50%; top: 50%; transform: translate(-50%, 90%);"> <input type="submit" value="Google Search"> </div> </form> </body> </html> A: Problem has been resolved. When I was trying to debug though the Network tab of my browser's developer tool, VScode suddenly prompted me saying I have two existing versions of the same file index.css and asking if I wanted to overwrite them. After overwriting the problem has been resolved. However, I am not certain if debugging though the Network tab of my browser's developer tool was what triggered VScode to identify the mixups in the directory, and if there are other methods to cause VScode to notice the problem or fix it personally.
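When a linked stylesheet silently fails like this, one quick sanity check is to list what the browser actually loaded; an illustrative devtools-console snippet (the fix here turned out to be the duplicate file noted above):
// run in the browser devtools console: lists the stylesheet URLs the page loaded
[...document.styleSheets].map(sheet => sheet.href)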
Unable to link css to html
I wanted to add some css to my html, but whenever I tried to use external css, the style is never applied. When I use inline or internal css everything works as intended, and I can't find what went wrong. Here I am using external css for my first div and inline css for my second div. The css applied to the two divs are basically the same. The css for the first div doesn't work but it does on the second so the problem isn't with the code, it with linking to the css file, but I should be doing the link tag right. The two files are definitely in the same folder. .q { position: absolute; left: 50%; top: 50%; transform: translate(-50%, -50%); } <!DOCTYPE html> <html lang="en"> <head> <title>Search</title> <link rel="stylesheet" href="index.css" /> </head> <body> <form action="https://google.com/search"> <div class="q"> <input type="text" name="q"> </div> <div class="button" style="position:absolute; left:50%; top: 50%; transform: translate(-50%, 90%);"> <input type="submit" value="Google Search"> </div> </form> </body> </html>
[ "Problem has been resolved. When I was trying to debug though the Network tab of my browser's developer tool, VScode suddenly prompted me saying I have two existing versions of the same file index.css and asking if I wanted to overwrite them. After overwriting the problem has been resolved. However, I am not certain if debugging though the Network tab of my browser's developer tool was what triggered VScode to identify the mixups in the directory, and if there are other methods to cause VScode to notice the problem or fix it personally.\n" ]
[ 0 ]
[ "Look at the browser console. If it's returning 404 or something.\nI tried your code on my PC and it works fine.\n\n" ]
[ -2 ]
[ "css", "html" ]
stackoverflow_0074623843_css_html.txt
Q: How to sum records in partition for only part of records I am struggling with window functions in SQL Server. I have a table that is tracking how many records were transferred. I wrote a query to sum how many rows are transferred for each parameter. However, at some point records had to be resend (they were dropped from the final location and resend). So, if I continue with my old query, I get duplicated values. This is an example table: parameter rows min_id max_id create_date status A1 48 350 521 06.11.2022 sent A1 48 350 521 06.11.2022 error A1 78 1 350 05.11.2022 sent A1 13 299 350 04.11.2022 sent A1 50 100 299 03.11.2022 sent A1 15 1 100 01.11.2022 sent B2 87 800 1202 07.11.2022 sent B2 187 1 800 06.11.2022 sent B2 12 570 800 04.11.2022 sent B2 120 320 570 03.11.2022 sent B2 55 1 320 01.11.2022 sent You can understand when the table was resend when min_id is 1 again. The result I want to achieve is: parameter sum min_id max_id max_date A1 126 1 521 06.11.2022 B2 274 1 1202 07.11.2022 What I was able to do so far (but is causing duplicate results): SELECT * FROM (SELECT parameter , sum(rows) over (partition by parameter) as sum , min_id , max_id , MAX(create_date) over (partition by parameter) as max_date FROM my_table) as s WHERE create_date = max_date and status = 'sent' I think that maybe one more window function (nested window function?) needs to be added that will make a certain range of partitions starting with min_id=1 having the latest create_date. However, I failed to do so. Could anyone advise on how to approach this? A: With a small adjustment you could fetch the results as below: SELECT parameter, sum(rows) as sum, min(min_id) as min_id, max(max_id) as max_id, max(create_date) as max_date FROM (SELECT parameter , rows , min_id , max_id , create_date , status , MAX(case when min_id = 1 then create_date end) over (partition by parameter) as sent_start FROM my_table) as s WHERE create_date >= sent_start and status = 'sent' GROUP BY parameter It's worth considering the variations in data. Could records be resent with a min_id greater than 1? Can records be sent and resent within the same day? If any of these are a possibility you may want to test using an EXISTS condition: OPTION 2 ;WITH SentRows as ( SELECT * FROM my_table WHERE status='sent' ) SELECT parameter, sum(rows) as sum, min(min_id) as min_id, max(max_id) as max_id, max(create_date) as max_date FROM SentRows as s WHERE NOT EXISTS (SELECT 1 FROM SentRows t WHERE t.parameter = s.parameter AND t.create_date > s.create_date AND t.min_id <= s.min_id AND t.max_id >= s.max_id) GROUP BY parameter For partially overlapping records you may want to involve window functions, but here it isn't required. A: Instead of using partitions you can try using the GROUP BY clause since the output which you are expecting might have aggregations in all columns. You can try the below query: SELECT parameter, SUM(rows) AS rows, MIN(min_id) AS min_id, MAX(max_id) AS max_id, MAX(create_date) AS max_date FROM my_table WHERE status = 'sent' GROUP BY parameter This will give you the expected output. But still if you'd like to go with your old query and the duplicates are the only issue, you can try using DISTINCT keyword after SELECT to give you the unique records.
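A worked variant of the "latest batch starts where min_id = 1" idea from the answers above, checked against the sample rows (it yields 126 for A1 and 274 for B2); table and column names are the ones used in the question:
-- assumes the tracking table is called my_table, as in the question
WITH latest_batch AS (
    SELECT parameter,
           MAX(CASE WHEN min_id = 1 THEN create_date END) AS batch_start
    FROM my_table
    WHERE status = 'sent'
    GROUP BY parameter
)
SELECT t.parameter,
       SUM(t.rows)        AS total_rows,
       MIN(t.min_id)      AS min_id,
       MAX(t.max_id)      AS max_id,
       MAX(t.create_date) AS max_date
FROM my_table AS t
JOIN latest_batch AS b
  ON b.parameter = t.parameter
WHERE t.status = 'sent'
  AND t.create_date >= b.batch_start
GROUP BY t.parameter;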
How to sum records in partition for only part of records
I am struggling with window functions in SQL Server. I have a table that is tracking how many records were transferred. I wrote a query to sum how many rows are transferred for each parameter. However, at some point records had to be resent (they were dropped from the final location and resent). So, if I continue with my old query, I get duplicated values.
This is an example table:
parameter rows min_id max_id create_date status
A1 48 350 521 06.11.2022 sent
A1 48 350 521 06.11.2022 error
A1 78 1 350 05.11.2022 sent
A1 13 299 350 04.11.2022 sent
A1 50 100 299 03.11.2022 sent
A1 15 1 100 01.11.2022 sent
B2 87 800 1202 07.11.2022 sent
B2 187 1 800 06.11.2022 sent
B2 12 570 800 04.11.2022 sent
B2 120 320 570 03.11.2022 sent
B2 55 1 320 01.11.2022 sent
You can tell when the table was resent because min_id is 1 again. The result I want to achieve is:
parameter sum min_id max_id max_date
A1 126 1 521 06.11.2022
B2 274 1 1202 07.11.2022
What I was able to do so far (but it is causing duplicate results):
SELECT *
FROM (SELECT parameter
       , sum(rows) over (partition by parameter) as sum
       , min_id
       , max_id
       , MAX(create_date) over (partition by parameter) as max_date
      FROM my_table) as s
WHERE create_date = max_date and status = 'sent'
I think that maybe one more window function (nested window function?) needs to be added that will make a certain range of partitions starting with min_id=1 having the latest create_date. However, I failed to do so.
Could anyone advise on how to approach this?
[ "With a small adjustment you could fetch the results as below:\nSELECT parameter, sum(rows) as sum, min(min_id) as min_id, max(max_id) as max_id,\n max(create_date) as max_date\nFROM\n (SELECT \n parameter\n , rows\n , min_id\n , max_id\n , create_date\n , status\n , MAX(case when min_id = 1 then create_date end) over (partition by parameter) as sent_start\n FROM my_table) as s\nWHERE create_date >= sent_start and status = 'sent'\nGROUP BY parameter\n\nIt's worth considering the variations in data. Could records be resent with a min_id greater than 1? Can records be sent and resent within the same day?\nIf any of these are a possibility you may want to test using an EXISTS condition:\nOPTION 2\n;WITH SentRows as\n(\nSELECT *\nFROM my_table\nWHERE status='sent'\n)\n\nSELECT parameter, sum(rows) as sum, min(min_id) as min_id, max(max_id) as max_id,\n max(create_date) as max_date \nFROM SentRows as s\nWHERE NOT EXISTS\n (SELECT 1 FROM SentRows t WHERE t.parameter = s.parameter AND t.create_date > s.create_date \n AND t.min_id <= s.min_id AND t.max_id >= s.max_id)\nGROUP BY parameter\n\nFor partially overlapping records you may want to involve window functions, but here it isn't required.\n", "Instead of using partitions you can try using the GROUP BY clause since the output which you are expecting might have aggregations in all columns. You can try the below query:\nSELECT\n parameter, \n SUM(rows) AS rows, \n MIN(min_id) AS min_id,\n MAX(max_id) AS max_id,\n MAX(create_date) AS max_date \nFROM my_table\nWHERE status = 'sent'\nGROUP BY parameter\n\nThis will give you the expected output. But still if you'd like to go with your old query and the duplicates are the only issue, you can try using DISTINCT keyword after SELECT to give you the unique records.\n" ]
[ 1, 0 ]
[]
[]
[ "sql", "sql_server", "window_functions" ]
stackoverflow_0074656353_sql_sql_server_window_functions.txt
Q: "sudo systemctl enable docker" not available: Automatically run Docker at boot on WSL2 (using a "sysvinit" / "init" command or a workaround) I am using Ubuntu on WSL2 (not on Docker Desktop). According to How to fix docker ‘Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?’ on Ubuntu, I can automatically start the docker daemon at boot using sudo systemctl enable docker instead of just starting it again at every boot with sudo systemctl start docker with both commands avoiding "Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?". When using any of the two, I get Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install. Executing: /lib/systemd/systemd-sysv-install enable docker and a test run shows, that docker is not yet running: docker run hello-world docker: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?. See 'docker run --help'. Some steps before, I also got a different message at this point: System has not been booted with systemd as init system (PID 1). Can't operate.Failed to connect to bus: Host is down" which brought me to Fixing "System has not been booted with systemd as init system" Error: Reason: Your Linux system is not using systemd How to know which init system you are using? You may use this command to know the process name associated with PID 1 (the first process that runs on your system): ps -p 1 -o comm= It should show systemd or sysv (or something like that) in the output. ps -p 1 -o comm= gave me init. According to this and this table Systemd command Sysvinit command systemctl start service_name service service_name start systemctl stop service_name service service_name stop systemctl restart service_name service service_name restart systemctl status service_name service service_name status systemctl enable service_name chkconfig service_name on systemctl disable service_name chkconfig service_name off I can choose service docker start to run docker, which works. But I cannot find something like "systemd"'s sudo systemctl enable docker for "sysvinit". I would expect it to be like: sudo service docker enable But that "enable" is not available for "sysvinit" / "init". While sudo service docker start works like sudo systemctl start docker, there is no such command that uses "enable". At the moment, I need to run sudo service docker start whenever I start WSL2. The question: What is the command that reaches sudo systemctl enable docker using sudo service docker ..., or if that does not exist, what is a workaround here to automatically start docker when opening Ubuntu on WSL2? A: Important note: Most users should read my updated answer first. This answer is a bit outdated, but I'm leaving it here in case it's beneficial to anyone running on an older WSL release. Short answer to "what is a workaround here to automatically start docker when opening Ubuntu on WSL2? Option 1: On Windows 11, add the necessary commands to the [boot] section in /etc/wsl.conf: [boot] command="service docker start" Note that under the latest Preview releases, there appears to be an issue that causes anything started via this boot.command to terminate when no services that were started via an actual command-line are still running. In other words, if you need Docker (or any other service) to continue to run after you exit your WSL2 session, you'll probably need to use Option 2 (or uninstall the Preview). 
Option 2: On Windows 10, run the necessary commands in your user startup scripts (e.g. .profile). Do it with a check to see if the service is running first, like: wsl.exe -u root -e sh -c "service docker status || service docker start" This is a better alternative than my previous answer (option 3, below) since it doesn't require modification to sudoers. This takes advantage of the fact that the wsl.exe command can be run from inside WSL, using the -u root option to run the commands as root without a password. Note: If for some reason this command fails, your default WSL distribution may be different than you expect. Check the output of wsl.exe -l -v. You can change the default distro using wsl.exe --setdefault <distro_name> or adjust the commandline above to specify the distro with -d <distro_name>. Option 3: (old answer, here for posterity): visudo or add rules to /etc/sudoers.d to allow your user to run the commands without a password: username ALL = (root) NOPASSWD: /usr/sbin/service docker * Then edit your .profile to add: sudo service docker status || sudo service docker start More Details: As you've discovered, WSL does not include any systemd support, nor really any direct support for starting a service on boot. For starters, the WSL subsystem doesn't launch at Windows boot, but only when the user launches a login session anyway. So without any real "system start", the init.d or systemd startup doesn't make as much sense. Further, users may have multiple WSL instances/distributions running, and if you are doing that (as I am), then you really don't want all services from all instances running on every boot (although, updated answer, Windows 11 does now give us this option). For Docker, though, are you running Docker Desktop with WSL2 integration, or just installed directly into a WSL2 instance? For Docker Desktop, I ran across this in another question yesterday on how to start Docker Desktop daemon at Windows boot. You can also have the WSL2 instance start via Windows Task Manager when the user logs in, and run the script via something like wsl -u root service docker start in the Task Manager. Note that the same doesn't seem to work at Windows boot, however, (only login) because Windows seems to terminate any WSL instance that isn't tied to an active user after a few seconds (even if a service is running in the background). You can work around this with the PowerShell Invoke-WmiMethod, something like ... powershell.exe Invoke-WmiMethod -Class Win32_Process -Name Create -ArgumentList 'wsl', although I haven't tested this all that thoroughly. A: This worked for WSL ubuntu. Before service --status-all [ - ] ssh service ssh start service --status-all [ + ] ssh A: This answer requires the latest version of Windows and WSL at the time of this posting, and it now works under both Windows 10 and 11. Run wsl --version and confirm that you are on WSL 1.0.0 (not to be confused with WSL1) or later. If you are on an older release of Windows or WSL, then wsl --version will likely just show the Help text. See this answer for information on how to upgrade. If you cannot upgrade at this time, then please see my original answer for a workaround for Windows 10. what is a workaround here to automatically start docker when opening Ubuntu on WSL2? Option 1: Enable Systemd support in WSL2 The latest release of WSL2 includes support for Systemd. You can read how to enable it in this Community Wiki answer or my original Ask Ubuntu answer. 
However, my personal recommendation is to consider whether you really need Systemd. It will add additional overhead and potentially other complications, and it isn't strictly necessary for Ubuntu to run (well) on WSL, as we've been doing for quite a few years without it. Option 2 may be a better (and faster) option for many services. If you do have Systemd enabled, then the commands in the original question should work for you: sudo systemctl enable docker sudo systemctl start docker Docker Engine should automatically start for you the next time you restart your WSL2 distribution. However, please see the bottom of this answer for an important note on keeping the services running. Option 2: Add the necessary commands to the [boot] section in /etc/wsl.conf: [boot] command= service docker start To run multiple commands, separate them with a semicolon as in: [boot] command= service docker start; service cron start Important Note: If you run a service (e.g. cron or docker) using either of these methods, please note that the WSL distribution will still auto-terminate when the last process that was started interactively completes. You can see more discussion (and a workaround using keychain) for this in my answer to the Ask Ubuntu question Is it possible to run a WSL app in the background?.
"sudo systemctl enable docker" not available: Automatically run Docker at boot on WSL2 (using a "sysvinit" / "init" command or a workaround)
I am using Ubuntu on WSL2 (not on Docker Desktop). According to How to fix docker ‘Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?’ on Ubuntu, I can automatically start the docker daemon at boot using sudo systemctl enable docker instead of just starting it again at every boot with sudo systemctl start docker with both commands avoiding "Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?". When using any of the two, I get Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install. Executing: /lib/systemd/systemd-sysv-install enable docker and a test run shows, that docker is not yet running: docker run hello-world docker: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?. See 'docker run --help'. Some steps before, I also got a different message at this point: System has not been booted with systemd as init system (PID 1). Can't operate.Failed to connect to bus: Host is down" which brought me to Fixing "System has not been booted with systemd as init system" Error: Reason: Your Linux system is not using systemd How to know which init system you are using? You may use this command to know the process name associated with PID 1 (the first process that runs on your system): ps -p 1 -o comm= It should show systemd or sysv (or something like that) in the output. ps -p 1 -o comm= gave me init. According to this and this table Systemd command Sysvinit command systemctl start service_name service service_name start systemctl stop service_name service service_name stop systemctl restart service_name service service_name restart systemctl status service_name service service_name status systemctl enable service_name chkconfig service_name on systemctl disable service_name chkconfig service_name off I can choose service docker start to run docker, which works. But I cannot find something like "systemd"'s sudo systemctl enable docker for "sysvinit". I would expect it to be like: sudo service docker enable But that "enable" is not available for "sysvinit" / "init". While sudo service docker start works like sudo systemctl start docker, there is no such command that uses "enable". At the moment, I need to run sudo service docker start whenever I start WSL2. The question: What is the command that reaches sudo systemctl enable docker using sudo service docker ..., or if that does not exist, what is a workaround here to automatically start docker when opening Ubuntu on WSL2?
[ "\nImportant note: Most users should read my updated answer first. This answer is a bit outdated, but I'm leaving it here in case it's beneficial to anyone running on an older WSL release.\n\nShort answer to \"what is a workaround here to automatically start docker when opening Ubuntu on WSL2?\n\nOption 1: On Windows 11, add the necessary commands to the [boot] section in /etc/wsl.conf:\n[boot]\ncommand=\"service docker start\"\n\nNote that under the latest Preview releases, there appears to be an issue that causes anything started via this boot.command to terminate when no services that were started via an actual command-line are still running. In other words, if you need Docker (or any other service) to continue to run after you exit your WSL2 session, you'll probably need to use Option 2 (or uninstall the Preview).\n\nOption 2: On Windows 10, run the necessary commands in your user startup scripts (e.g. .profile). Do it with a check to see if the service is running first, like:\nwsl.exe -u root -e sh -c \"service docker status || service docker start\"\n\nThis is a better alternative than my previous answer (option 3, below) since it doesn't require modification to sudoers. This takes advantage of the fact that the wsl.exe command can be run from inside WSL, using the -u root option to run the commands as root without a password.\nNote: If for some reason this command fails, your default WSL distribution may be different than you expect. Check the output of wsl.exe -l -v. You can change the default distro using wsl.exe --setdefault <distro_name> or adjust the commandline above to specify the distro with -d <distro_name>.\n\nOption 3: (old answer, here for posterity): visudo or add rules to /etc/sudoers.d to allow your user to run the commands without a password:\nusername ALL = (root) NOPASSWD: /usr/sbin/service docker *\n\nThen edit your .profile to add:\nsudo service docker status || sudo service docker start\n\n\n\nMore Details:\nAs you've discovered, WSL does not include any systemd support, nor really any direct support for starting a service on boot.\nFor starters, the WSL subsystem doesn't launch at Windows boot, but only when the user launches a login session anyway. So without any real \"system start\", the init.d or systemd startup doesn't make as much sense.\nFurther, users may have multiple WSL instances/distributions running, and if you are doing that (as I am), then you really don't want all services from all instances running on every boot (although, updated answer, Windows 11 does now give us this option).\nFor Docker, though, are you running Docker Desktop with WSL2 integration, or just installed directly into a WSL2 instance? For Docker Desktop, I ran across this in another question yesterday on how to start Docker Desktop daemon at Windows boot.\nYou can also have the WSL2 instance start via Windows Task Manager when the user logs in, and run the script via something like wsl -u root service docker start in the Task Manager.\nNote that the same doesn't seem to work at Windows boot, however, (only login) because Windows seems to terminate any WSL instance that isn't tied to an active user after a few seconds (even if a service is running in the background). 
You can work around this with the PowerShell Invoke-WmiMethod, something like ...\npowershell.exe Invoke-WmiMethod -Class Win32_Process -Name Create -ArgumentList 'wsl', although I haven't tested this all that thoroughly.\n", "This worked for WSL ubuntu.\nBefore\nservice --status-all\n[ - ] ssh\nservice ssh start\nservice --status-all\n[ + ] ssh\n", "\nThis answer requires the latest version of Windows and WSL at the time of this posting, and it now works under both Windows 10 and 11. Run wsl --version and confirm that you are on WSL 1.0.0 (not to be confused with WSL1) or later.\nIf you are on an older release of Windows or WSL, then wsl --version will likely just show the Help text. See this answer for information on how to upgrade.\nIf you cannot upgrade at this time, then please see my original answer for a workaround for Windows 10.\n\n\nwhat is a workaround here to automatically start docker when opening Ubuntu on WSL2?\n\n\nOption 1: Enable Systemd support in WSL2\nThe latest release of WSL2 includes support for Systemd. You can read how to enable it in this Community Wiki answer or my original Ask Ubuntu answer.\nHowever, my personal recommendation is to consider whether you really need Systemd. It will add additional overhead and potentially other complications, and it isn't strictly necessary for Ubuntu to run (well) on WSL, as we've been doing for quite a few years without it. Option 2 may be a better (and faster) option for many services.\nIf you do have Systemd enabled, then the commands in the original question should work for you:\nsudo systemctl enable docker\nsudo systemctl start docker\n\nDocker Engine should automatically start for you the next time you restart your WSL2 distribution. However, please see the bottom of this answer for an important note on keeping the services running.\n\n\n\n\nOption 2: Add the necessary commands to the [boot] section in /etc/wsl.conf:\n[boot]\ncommand= service docker start\n\nTo run multiple commands, separate them with a semicolon as in:\n[boot]\ncommand= service docker start; service cron start\n\n\n\nImportant Note: If you run a service (e.g. cron or docker) using either of these methods, please note that the WSL distribution will still auto-terminate when the last process that was started interactively completes. You can see more discussion (and a workaround using keychain) for this in my answer to the Ask Ubuntu question Is it possible to run a WSL app in the background?.\n" ]
[ 51, 0, 0 ]
[]
[]
[ "docker", "service", "systemctl", "systemd", "wsl_2" ]
stackoverflow_0065813979_docker_service_systemctl_systemd_wsl_2.txt
Q: Wining prizes based off probabilities I was curious on how wheel spinning based of chances algorithm would work so I wrote this code I created an object with each prize and its probability const chances = { "Apple" : 22.45, "Peaches" : 32.8, "Grapes" : 20, "Bananas" : 6.58, "Strawberry" : 18.17 } then I generate a random number and check if its within the prize's winning ranges const random = Math.floor((Math.random() * 10000) + 1); var rangeStart= 0; for (var key in chances){ var rangeEnd= rangeStart+ chances[key]; if (rangeStart*100 < random && random <= rangeEnd*100){ console.log(rangeStart*100+" < "+random+" <= "+rangeEnd*100); console.log("You won a "+key) break; } rangeStart+= chances[key]; } you can check the code here am I in the right path? A: It looks like you are on the right track. Your code is almost correct. However, there are a few issues that you need to fix. First, you are generating a random number between 1 and 10000, but the probabilities in your chances object are percentages. This means that the total sum of all the probabilities is 100, not 10000. You need to generate a random number between 1 and 100 instead. Second, in the loop you are multiplying rangeStart and rangeEnd by 100 before checking if the random number is within the range. This will make the comparison always false, because rangeStart and rangeEnd are percentages, not 100 times the actual percentage value. You need to remove the multiplication by 100 from the comparison. Here is the updated code with these changes applied: const chances = { "Apple" : 22.45, "Peaches" : 32.8, "Grapes" : 20, "Bananas" : 6.58, "Strawberry" : 18.17 }; const random = Math.floor((Math.random() * 100) + 1); let rangeStart = 0; for (const key in chances) { const rangeEnd = rangeStart + chances[key]; if (rangeStart < random && random <= rangeEnd) { console.log(rangeStart + " < " + random + " <= " + rangeEnd); console.log("You won a " + key); break; } rangeStart += chances[key]; } I hope this helps. A: As already mentioned, instead of 10000 stick to percentages 100. To prove that your code works fine (if you're unsure about the used algorithm), in order to test it sample the results of many iterations. If the collected results are close to the expected original values - that's a good sign you're on the right track: Here's an example with 1,000,000 iterations. The precision seems pretty decent: const chances = { "Apple" : 22.45, "Peaches" : 32.8, "Grapes" : 20, "Bananas" : 6.58, "Strawberry" : 18.17 }; const results = {}; const generate = () => { const random = Math.floor((Math.random() * 100) + 1); var rangeStart = 0; for (var key in chances){ var rangeEnd = rangeStart+ chances[key]; if (rangeStart < random && random <= rangeEnd){ results[key] ??= 0; results[key] += 1; break; } rangeStart+= chances[key]; } }; // noprotect const samples = 1000000; for (let i=0; i<samples; i++) generate(); Object.entries(results).forEach(([k, v]) => { console.log(`${k} ${(v / samples * 100).toFixed(2)}% (Expected: ${chances[k]}%)`); });
Winning prizes based off probabilities
I was curious how a wheel-spinning algorithm based on chances would work, so I wrote this code. I created an object with each prize and its probability: const chances = { "Apple" : 22.45, "Peaches" : 32.8, "Grapes" : 20, "Bananas" : 6.58, "Strawberry" : 18.17 } Then I generate a random number and check whether it's within the prize's winning range: const random = Math.floor((Math.random() * 10000) + 1); var rangeStart= 0; for (var key in chances){ var rangeEnd= rangeStart+ chances[key]; if (rangeStart*100 < random && random <= rangeEnd*100){ console.log(rangeStart*100+" < "+random+" <= "+rangeEnd*100); console.log("You won a "+key) break; } rangeStart+= chances[key]; } You can check the code here. Am I on the right path?
[ "It looks like you are on the right track. Your code is almost correct. However, there are a few issues that you need to fix.\nFirst, you are generating a random number between 1 and 10000, but the probabilities in your chances object are percentages. This means that the total sum of all the probabilities is 100, not 10000. You need to generate a random number between 1 and 100 instead.\nSecond, in the loop you are multiplying rangeStart and rangeEnd by 100 before checking if the random number is within the range. This will make the comparison always false, because rangeStart and rangeEnd are percentages, not 100 times the actual percentage value. You need to remove the multiplication by 100 from the comparison.\nHere is the updated code with these changes applied:\nconst chances = {\n \"Apple\" : 22.45,\n \"Peaches\" : 32.8,\n \"Grapes\" : 20,\n \"Bananas\" : 6.58,\n \"Strawberry\" : 18.17\n};\n\nconst random = Math.floor((Math.random() * 100) + 1);\nlet rangeStart = 0;\n\nfor (const key in chances) {\n const rangeEnd = rangeStart + chances[key];\n if (rangeStart < random && random <= rangeEnd) {\n console.log(rangeStart + \" < \" + random + \" <= \" + rangeEnd);\n console.log(\"You won a \" + key);\n break;\n }\n rangeStart += chances[key];\n}\n\nI hope this helps.\n", "As already mentioned, instead of 10000 stick to percentages 100.\nTo prove that your code works fine (if you're unsure about the used algorithm), in order to test it sample the results of many iterations. If the collected results are close to the expected original values - that's a good sign you're on the right track:\nHere's an example with 1,000,000 iterations. The precision seems pretty decent:\n\n\nconst chances = {\n \"Apple\" : 22.45,\n \"Peaches\" : 32.8,\n \"Grapes\" : 20,\n \"Bananas\" : 6.58,\n \"Strawberry\" : 18.17\n};\n\nconst results = {};\n\nconst generate = () => {\n \n const random = Math.floor((Math.random() * 100) + 1);\n var rangeStart = 0;\n\n for (var key in chances){\n var rangeEnd = rangeStart+ chances[key];\n if (rangeStart < random && random <= rangeEnd){\n results[key] ??= 0;\n results[key] += 1;\n break;\n }\n rangeStart+= chances[key];\n }\n};\n\n// noprotect\nconst samples = 1000000;\nfor (let i=0; i<samples; i++) generate();\n\nObject.entries(results).forEach(([k, v]) => {\n console.log(`${k} ${(v / samples * 100).toFixed(2)}% (Expected: ${chances[k]}%)`); \n});\n\n\n\n" ]
[ 1, 1 ]
[]
[]
[ "javascript", "node.js", "probability_distribution" ]
stackoverflow_0074662454_javascript_node.js_probability_distribution.txt
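One caveat on the snippets above: drawing a whole number from 1 to 100 quietly rounds fractional weights such as 22.45, so the realised odds drift slightly from the configured ones. A minimal float-based sketch that keeps the fractional precision (the function name and the fallback are illustrative, not taken from the answers above):

const chances = {
  Apple: 22.45,
  Peaches: 32.8,
  Grapes: 20,
  Bananas: 6.58,
  Strawberry: 18.17,
};

// Weighted pick over the raw (possibly fractional) weights.
function spinWheel(weights) {
  const total = Object.values(weights).reduce((sum, w) => sum + w, 0);
  let r = Math.random() * total;          // uniform float in [0, total)
  for (const [prize, weight] of Object.entries(weights)) {
    if (r < weight) return prize;         // landed inside this prize's slice
    r -= weight;                          // skip past this slice
  }
  return Object.keys(weights).pop();      // floating-point edge case fallback
}

console.log(spinWheel(chances)); // logs one prize name per call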
Q: How can I get city name from a latitude and longitude point? Is there a way to get a city name from a latitude and longitude point using the google maps api for javascript? If so could I please see an example? A: This is called Reverse Geocoding Documentation from Google: http://code.google.com/apis/maps/documentation/geocoding/#ReverseGeocoding. Sample Call to Google's geocode Web Service: http://maps.googleapis.com/maps/api/geocode/json?latlng=40.714224,-73.961452&sensor=true&key=YOUR_KEY A: Here is a complete sample: <!DOCTYPE html> <html> <head> <title>Geolocation API with Google Maps API</title> <meta charset="UTF-8" /> </head> <body> <script> function displayLocation(latitude,longitude){ var request = new XMLHttpRequest(); var method = 'GET'; var url = 'http://maps.googleapis.com/maps/api/geocode/json?latlng='+latitude+','+longitude+'&sensor=true'; var async = true; request.open(method, url, async); request.onreadystatechange = function(){ if(request.readyState == 4 && request.status == 200){ var data = JSON.parse(request.responseText); var address = data.results[0]; document.write(address.formatted_address); } }; request.send(); }; var successCallback = function(position){ var x = position.coords.latitude; var y = position.coords.longitude; displayLocation(x,y); }; var errorCallback = function(error){ var errorMessage = 'Unknown error'; switch(error.code) { case 1: errorMessage = 'Permission denied'; break; case 2: errorMessage = 'Position unavailable'; break; case 3: errorMessage = 'Timeout'; break; } document.write(errorMessage); }; var options = { enableHighAccuracy: true, timeout: 1000, maximumAge: 0 }; navigator.geolocation.getCurrentPosition(successCallback,errorCallback,options); </script> </body> </html> A: In node.js we can use node-geocoder npm module to get address from lat, lng., geo.js var NodeGeocoder = require('node-geocoder'); var options = { provider: 'google', httpAdapter: 'https', // Default apiKey: ' ', // for Mapquest, OpenCage, Google Premier formatter: 'json' // 'gpx', 'string', ... 
}; var geocoder = NodeGeocoder(options); geocoder.reverse({lat:28.5967439, lon:77.3285038}, function(err, res) { console.log(res); }); output: node geo.js [ { formattedAddress: 'C-85B, C Block, Sector 8, Noida, Uttar Pradesh 201301, India', latitude: 28.5967439, longitude: 77.3285038, extra: { googlePlaceId: 'ChIJkTdx9vzkDDkRx6LVvtz1Rhk', confidence: 1, premise: 'C-85B', subpremise: null, neighborhood: 'C Block', establishment: null }, administrativeLevels: { level2long: 'Gautam Buddh Nagar', level2short: 'Gautam Buddh Nagar', level1long: 'Uttar Pradesh', level1short: 'UP' }, city: 'Noida', country: 'India', countryCode: 'IN', zipcode: '201301', provider: 'google' } ] A: Here is the latest sample of Google's geocode Web Service https://maps.googleapis.com/maps/api/geocode/json?latlng=40.714224,-73.961452&key=YOUR_API_KEY Simply change the YOUR_API_KEY to the API key you get from Google Geocoding API P/S: Geocoding API is under Places NOT Maps ;) A: Following Code Works Fine to Get City Name (Using Google Map Geo API) : HTML <p><button onclick="getLocation()">Get My Location</button></p> <p id="demo"></p> <script src="http://maps.google.com/maps/api/js?key=YOUR_API_KEY"></script> SCRIPT var x=document.getElementById("demo"); function getLocation(){ if (navigator.geolocation){ navigator.geolocation.getCurrentPosition(showPosition,showError); } else{ x.innerHTML="Geolocation is not supported by this browser."; } } function showPosition(position){ lat=position.coords.latitude; lon=position.coords.longitude; displayLocation(lat,lon); } function showError(error){ switch(error.code){ case error.PERMISSION_DENIED: x.innerHTML="User denied the request for Geolocation." break; case error.POSITION_UNAVAILABLE: x.innerHTML="Location information is unavailable." break; case error.TIMEOUT: x.innerHTML="The request to get user location timed out." break; case error.UNKNOWN_ERROR: x.innerHTML="An unknown error occurred." break; } } function displayLocation(latitude,longitude){ var geocoder; geocoder = new google.maps.Geocoder(); var latlng = new google.maps.LatLng(latitude, longitude); geocoder.geocode( {'latLng': latlng}, function(results, status) { if (status == google.maps.GeocoderStatus.OK) { if (results[0]) { var add= results[0].formatted_address ; var value=add.split(","); count=value.length; country=value[count-1]; state=value[count-2]; city=value[count-3]; x.innerHTML = "city name is: " + city; } else { x.innerHTML = "address not found"; } } else { x.innerHTML = "Geocoder failed due to: " + status; } } ); } A: BigDataCloud also has a nice API for this, also for nodejs users. they have API for client - free. But also for backend, using API_KEY (free according to quota). Their GitHub page. the code looks like: const client = require('@bigdatacloudapi/client')(API_KEY); async foo() { ... const location: string = await client.getReverseGeocode({ latitude:'32.101786566878445', longitude: '34.858965073072056' }); } A: In case if you don't want to use google geocoding API than you can refer to few other Free APIs for the development purpose. for example i used [mapquest] API in order to get the location name. you can fetch location name easily by implementing this following function const fetchLocationName = async (lat,lng) => { await fetch( 'https://www.mapquestapi.com/geocoding/v1/reverse?key=API-Key&location='+lat+'%2C'+lng+'&outFormat=json&thumbMaps=false', ) .then((response) => response.json()) .then((responseJson) => { console.log( 'ADDRESS GEOCODE is BACK!! 
=> ' + JSON.stringify(responseJson), ); }); }; A: Here's a modern solution using a promise: function getAddress (latitude, longitude) { return new Promise(function (resolve, reject) { var request = new XMLHttpRequest(); var method = 'GET'; var url = 'http://maps.googleapis.com/maps/api/geocode/json?latlng=' + latitude + ',' + longitude + '&sensor=true'; var async = true; request.open(method, url, async); request.onreadystatechange = function () { if (request.readyState == 4) { if (request.status == 200) { var data = JSON.parse(request.responseText); var address = data.results[0]; resolve(address); } else { reject(request.status); } } }; request.send(); }); }; And call it like this: getAddress(lat, lon).then(console.log).catch(console.error); The promise returns the address object in 'then' or the error status code in 'catch' A: Following Code Works Fine For Me to Get City, state, country, zipcode (Using Google Map Geo API) : var url = "https://maps.googleapis.com/maps/api/geocode/json?latlng="+lat+","+long+"&key=KEY_HERE&sensor=false"; $.get(url, function(data) { var results = data.results; if (data.status === 'OK') { //console.log(JSON.stringify(results)); if (results[0]) { var city = ""; var state = ""; var country = ""; var zipcode = ""; var address_components = results[0].address_components; for (var i = 0; i < address_components.length; i++) { if (address_components[i].types[0] === "administrative_area_level_1" && address_components[i].types[1] === "political") { state = address_components[i].long_name; } if (address_components[i].types[0] === "locality" && address_components[i].types[1] === "political" ) { city = address_components[i].long_name; } if (address_components[i].types[0] === "postal_code" && zipcode == "") { zipcode = address_components[i].long_name; } if (address_components[i].types[0] === "country") { country = address_components[i].long_name; } } var address = { "city": city, "state": state, "country": country, "zipcode": zipcode, }; console.log(address); } else { window.alert('No results found'); } } else { window.alert('Geocoder failed due to: ' + status); } }); A: Same as @Sanchit Gupta. in this part if (results[0]) { var add= results[0].formatted_address ; var value=add.split(","); count=value.length; country=value[count-1]; state=value[count-2]; city=value[count-3]; x.innerHTML = "city name is: " + city; } just console the results array if (results[0]) { console.log(results[0]); // choose from console whatever you need. var city = results[0].address_components[3].short_name; x.innerHTML = "city name is: " + city; } A: There are many tools available google maps API as like all had written use this data "https://simplemaps.com/data/world-cities" download free version and convert excel to JSON with some online converter like "http://beautifytools.com/excel-to-json-converter.php" use IP address which is not good because using IP address of someone may not good users think that you can hack them. other free and paid tools are available also A: public function retornaCidade ( $lat, $lng ) { $key = "SUA CHAVE"; $url = 'https://maps.googleapis.com/maps/api/geocode/json?latlng=' . $lat . ',' . $lng . '&key=' . 
$key; $geoFull = json_decode ( file_get_contents ( $url ), true ); if ( $geoFull[ 'results' ] ) { //console.log(JSON.stringify(results)); if ( $geoFull[ 'results' ][ 0 ] ) { $cidade = ""; $estado = ""; $pais = ""; $cep = ""; $address_components = $geoFull[ 'results' ][ 0 ][ 'address_components' ]; for ( $i = 0; $i < count ( $address_components ); $i++ ) { if ( ($address_components[ $i ][ 'types' ][ 0 ] == "administrative_area_level_1") && ($address_components[ $i ][ 'types' ][ 1 ] == "political" )) { $estado = str_replace('State of ', '',$address_components[ $i ][ 'long_name' ]);]; } if ( ($address_components[ $i ][ 'types' ][ 0 ] == "administrative_area_level_2") && ($address_components[ $i ][ 'types' ][ 1 ] == "political" )) { $cidade = $address_components[ $i ][ 'long_name' ]; } if ( $address_components[ $i ][ 'types' ][ 0 ] == "postal_code" && $cep == "" ) { $cep = $address_components[ $i ][ 'long_name' ]; } if ($address_components[ $i ][ 'types' ][ 0 ] == "country" ) { $pais = $address_components[ $i ][ 'long_name' ]; } } $endereco = [ "cidade" => $cidade, "estado" => $estado, "pais" => $pais, "cep" => $cep, ]; return $endereco; } else { return false; } } else { return false; } } A: You can use this library in your API based on Node to do reverse geocoding: https://github.com/rapomon/geojson-places
How can I get city name from a latitude and longitude point?
Is there a way to get a city name from a latitude and longitude point using the Google Maps API for JavaScript? If so, could I please see an example?
[ "This is called Reverse Geocoding \n\nDocumentation from Google: \nhttp://code.google.com/apis/maps/documentation/geocoding/#ReverseGeocoding. \nSample Call to Google's geocode Web Service: \nhttp://maps.googleapis.com/maps/api/geocode/json?latlng=40.714224,-73.961452&sensor=true&key=YOUR_KEY\n\n", "Here is a complete sample:\n<!DOCTYPE html>\n<html>\n <head>\n <title>Geolocation API with Google Maps API</title>\n <meta charset=\"UTF-8\" />\n </head>\n <body>\n <script>\n function displayLocation(latitude,longitude){\n var request = new XMLHttpRequest();\n\n var method = 'GET';\n var url = 'http://maps.googleapis.com/maps/api/geocode/json?latlng='+latitude+','+longitude+'&sensor=true';\n var async = true;\n\n request.open(method, url, async);\n request.onreadystatechange = function(){\n if(request.readyState == 4 && request.status == 200){\n var data = JSON.parse(request.responseText);\n var address = data.results[0];\n document.write(address.formatted_address);\n }\n };\n request.send();\n };\n\n var successCallback = function(position){\n var x = position.coords.latitude;\n var y = position.coords.longitude;\n displayLocation(x,y);\n };\n\n var errorCallback = function(error){\n var errorMessage = 'Unknown error';\n switch(error.code) {\n case 1:\n errorMessage = 'Permission denied';\n break;\n case 2:\n errorMessage = 'Position unavailable';\n break;\n case 3:\n errorMessage = 'Timeout';\n break;\n }\n document.write(errorMessage);\n };\n\n var options = {\n enableHighAccuracy: true,\n timeout: 1000,\n maximumAge: 0\n };\n\n navigator.geolocation.getCurrentPosition(successCallback,errorCallback,options);\n </script>\n </body>\n</html>\n\n", "In node.js we can use node-geocoder npm module to get address from lat, lng.,\ngeo.js\nvar NodeGeocoder = require('node-geocoder');\n\nvar options = {\n provider: 'google',\n httpAdapter: 'https', // Default\n apiKey: ' ', // for Mapquest, OpenCage, Google Premier\n formatter: 'json' // 'gpx', 'string', ...\n};\n\nvar geocoder = NodeGeocoder(options);\n\ngeocoder.reverse({lat:28.5967439, lon:77.3285038}, function(err, res) {\n console.log(res);\n});\n\noutput:\n\nnode geo.js\n\n[ { formattedAddress: 'C-85B, C Block, Sector 8, Noida, Uttar Pradesh 201301, India',\n latitude: 28.5967439,\n longitude: 77.3285038,\n extra: \n { googlePlaceId: 'ChIJkTdx9vzkDDkRx6LVvtz1Rhk',\n confidence: 1,\n premise: 'C-85B',\n subpremise: null,\n neighborhood: 'C Block',\n establishment: null },\n administrativeLevels: \n { level2long: 'Gautam Buddh Nagar',\n level2short: 'Gautam Buddh Nagar',\n level1long: 'Uttar Pradesh',\n level1short: 'UP' },\n city: 'Noida',\n country: 'India',\n countryCode: 'IN',\n zipcode: '201301',\n provider: 'google' } ]\n\n", "Here is the latest sample of Google's geocode Web Service \nhttps://maps.googleapis.com/maps/api/geocode/json?latlng=40.714224,-73.961452&key=YOUR_API_KEY\nSimply change the YOUR_API_KEY to the API key you get from Google Geocoding API\nP/S: Geocoding API is under Places NOT Maps ;)\n", "Following Code Works Fine to Get City Name (Using Google Map Geo API) : \nHTML\n<p><button onclick=\"getLocation()\">Get My Location</button></p>\n<p id=\"demo\"></p>\n<script src=\"http://maps.google.com/maps/api/js?key=YOUR_API_KEY\"></script>\n\nSCRIPT\nvar x=document.getElementById(\"demo\");\nfunction getLocation(){\n if (navigator.geolocation){\n navigator.geolocation.getCurrentPosition(showPosition,showError);\n }\n else{\n x.innerHTML=\"Geolocation is not supported by this browser.\";\n }\n}\n\nfunction 
showPosition(position){\n lat=position.coords.latitude;\n lon=position.coords.longitude;\n displayLocation(lat,lon);\n}\n\nfunction showError(error){\n switch(error.code){\n case error.PERMISSION_DENIED:\n x.innerHTML=\"User denied the request for Geolocation.\"\n break;\n case error.POSITION_UNAVAILABLE:\n x.innerHTML=\"Location information is unavailable.\"\n break;\n case error.TIMEOUT:\n x.innerHTML=\"The request to get user location timed out.\"\n break;\n case error.UNKNOWN_ERROR:\n x.innerHTML=\"An unknown error occurred.\"\n break;\n }\n}\n\nfunction displayLocation(latitude,longitude){\n var geocoder;\n geocoder = new google.maps.Geocoder();\n var latlng = new google.maps.LatLng(latitude, longitude);\n\n geocoder.geocode(\n {'latLng': latlng}, \n function(results, status) {\n if (status == google.maps.GeocoderStatus.OK) {\n if (results[0]) {\n var add= results[0].formatted_address ;\n var value=add.split(\",\");\n\n count=value.length;\n country=value[count-1];\n state=value[count-2];\n city=value[count-3];\n x.innerHTML = \"city name is: \" + city;\n }\n else {\n x.innerHTML = \"address not found\";\n }\n }\n else {\n x.innerHTML = \"Geocoder failed due to: \" + status;\n }\n }\n );\n}\n\n", "BigDataCloud also has a nice API for this, also for nodejs users.\nthey have API for client - free. But also for backend, using API_KEY (free according to quota).\nTheir GitHub page.\nthe code looks like:\nconst client = require('@bigdatacloudapi/client')(API_KEY);\n\nasync foo() {\n ...\n const location: string = await client.getReverseGeocode({\n latitude:'32.101786566878445', \n longitude: '34.858965073072056'\n });\n}\n\n", "In case if you don't want to use google geocoding API than you can refer to few other Free APIs for the development purpose.\nfor example i used [mapquest] API in order to get the location name.\nyou can fetch location name easily by implementing this following function\n\n\n const fetchLocationName = async (lat,lng) => {\n await fetch(\n 'https://www.mapquestapi.com/geocoding/v1/reverse?key=API-Key&location='+lat+'%2C'+lng+'&outFormat=json&thumbMaps=false',\n )\n .then((response) => response.json())\n .then((responseJson) => {\n console.log(\n 'ADDRESS GEOCODE is BACK!! 
=> ' + JSON.stringify(responseJson),\n );\n });\n };\n\n\n\n", "Here's a modern solution using a promise:\nfunction getAddress (latitude, longitude) {\n return new Promise(function (resolve, reject) {\n var request = new XMLHttpRequest();\n\n var method = 'GET';\n var url = 'http://maps.googleapis.com/maps/api/geocode/json?latlng=' + latitude + ',' + longitude + '&sensor=true';\n var async = true;\n\n request.open(method, url, async);\n request.onreadystatechange = function () {\n if (request.readyState == 4) {\n if (request.status == 200) {\n var data = JSON.parse(request.responseText);\n var address = data.results[0];\n resolve(address);\n }\n else {\n reject(request.status);\n }\n }\n };\n request.send();\n });\n};\n\nAnd call it like this:\ngetAddress(lat, lon).then(console.log).catch(console.error);\n\nThe promise returns the address object in 'then' or the error status code in 'catch'\n", "Following Code Works Fine For Me to Get\nCity,\nstate,\ncountry,\nzipcode\n(Using Google Map Geo API) :\n var url = \"https://maps.googleapis.com/maps/api/geocode/json?latlng=\"+lat+\",\"+long+\"&key=KEY_HERE&sensor=false\";\n $.get(url, function(data) {\n var results = data.results;\n if (data.status === 'OK') \n {\n //console.log(JSON.stringify(results));\n if (results[0]) \n {\n var city = \"\";\n var state = \"\";\n var country = \"\";\n var zipcode = \"\";\n \n var address_components = results[0].address_components;\n \n for (var i = 0; i < address_components.length; i++) \n {\n if (address_components[i].types[0] === \"administrative_area_level_1\" && address_components[i].types[1] === \"political\") {\n state = address_components[i].long_name; \n }\n if (address_components[i].types[0] === \"locality\" && address_components[i].types[1] === \"political\" ) { \n city = address_components[i].long_name; \n }\n \n if (address_components[i].types[0] === \"postal_code\" && zipcode == \"\") {\n zipcode = address_components[i].long_name;\n\n }\n \n if (address_components[i].types[0] === \"country\") {\n country = address_components[i].long_name;\n\n }\n }\n var address = {\n \"city\": city,\n \"state\": state,\n \"country\": country,\n \"zipcode\": zipcode,\n };\n console.log(address);\n } \n else \n {\n window.alert('No results found');\n }\n } \n else \n {\n window.alert('Geocoder failed due to: ' + status);\n \n }\n });\n\n", "Same as @Sanchit Gupta.\nin this part \nif (results[0]) {\n var add= results[0].formatted_address ;\n var value=add.split(\",\");\n count=value.length;\n country=value[count-1];\n state=value[count-2];\n city=value[count-3];\n x.innerHTML = \"city name is: \" + city;\n}\n\njust console the results array\nif (results[0]) {\n console.log(results[0]);\n // choose from console whatever you need.\n var city = results[0].address_components[3].short_name;\n x.innerHTML = \"city name is: \" + city;\n}\n\n", "There are many tools available\n\ngoogle maps API as like all had written\nuse this data\n\"https://simplemaps.com/data/world-cities\"\ndownload free version and convert excel to JSON with some online converter like \"http://beautifytools.com/excel-to-json-converter.php\"\nuse IP address \nwhich is not good because using IP address of someone may not good \nusers think that you can hack them.\n\nother free and paid tools are available also\n", "public function retornaCidade ( $lat, $lng )\n {\n $key = \"SUA CHAVE\";\n $url = 'https://maps.googleapis.com/maps/api/geocode/json?latlng=' . $lat . ',' . $lng . '&key=' . 
$key;\n $geoFull = json_decode ( file_get_contents ( $url ), true );\n\n if ( $geoFull[ 'results' ] )\n {\n //console.log(JSON.stringify(results));\n if ( $geoFull[ 'results' ][ 0 ] )\n {\n $cidade = \"\";\n $estado = \"\";\n $pais = \"\";\n $cep = \"\";\n\n $address_components = $geoFull[ 'results' ][ 0 ][ 'address_components' ];\n\n for ( $i = 0; $i < count ( $address_components ); $i++ )\n {\n if ( ($address_components[ $i ][ 'types' ][ 0 ] == \"administrative_area_level_1\") && ($address_components[ $i ][ 'types' ][ 1 ] == \"political\" ))\n {\n $estado = str_replace('State of ', '',$address_components[ $i ][ 'long_name' ]);];\n }\n if ( ($address_components[ $i ][ 'types' ][ 0 ] == \"administrative_area_level_2\") && ($address_components[ $i ][ 'types' ][ 1 ] == \"political\" ))\n {\n $cidade = $address_components[ $i ][ 'long_name' ];\n }\n\n if ( $address_components[ $i ][ 'types' ][ 0 ] == \"postal_code\" && $cep == \"\" )\n {\n $cep = $address_components[ $i ][ 'long_name' ];\n }\n\n if ($address_components[ $i ][ 'types' ][ 0 ] == \"country\" )\n {\n $pais = $address_components[ $i ][ 'long_name' ];\n }\n }\n $endereco = [\n \"cidade\" => $cidade,\n \"estado\" => $estado,\n \"pais\" => $pais,\n \"cep\" => $cep,\n ];\n \n return $endereco;\n }\n else\n {\n return false;\n }\n }\n else\n {\n return false;\n }\n }\n\n", "You can use this library in your API based on Node to do reverse geocoding:\nhttps://github.com/rapomon/geojson-places\n" ]
[ 135, 29, 10, 7, 6, 5, 5, 4, 2, 1, 1, 0, 0 ]
[ "you can do it with pure php and google geocode api\n/*\n *\n * @param latlong (String) is Latitude and Longitude with , as separator for example \"21.3724002,39.8016229\"\n **/\nfunction getCityNameByLatitudeLongitude($latlong)\n{\n $APIKEY = \"AIzaXXXXXXXXXXXXXXXXXXXXXXXXXXX\"; // Replace this with your google maps api key \n $googleMapsUrl = \"https://maps.googleapis.com/maps/api/geocode/json?latlng=\" . $latlong . \"&language=ar&key=\" . $APIKEY;\n $response = file_get_contents($googleMapsUrl);\n $response = json_decode($response, true);\n $results = $response[\"results\"];\n $addressComponents = $results[0][\"address_components\"];\n $cityName = \"\";\n foreach ($addressComponents as $component) {\n // echo $component;\n $types = $component[\"types\"];\n if (in_array(\"locality\", $types) && in_array(\"political\", $types)) {\n $cityName = $component[\"long_name\"];\n }\n }\n if ($cityName == \"\") {\n echo \"Failed to get CityName\";\n } else {\n echo $cityName;\n }\n}\n\n" ]
[ -3 ]
[ "geocoding", "google_maps", "javascript", "latitude_longitude", "node.js" ]
stackoverflow_0006548504_geocoding_google_maps_javascript_latitude_longitude_node.js.txt
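For reference, the same lookup with the fetch API, filtering address_components by type instead of splitting the formatted address (splitting on commas breaks as soon as an address has a different number of parts). A sketch only - the function name and error handling are illustrative, and YOUR_API_KEY is a placeholder:

// Reverse geocode a point and return just the city (locality) long_name.
async function getCityName(lat, lng, apiKey) {
  const url = `https://maps.googleapis.com/maps/api/geocode/json?latlng=${lat},${lng}&key=${apiKey}`;
  const response = await fetch(url);
  const data = await response.json();

  if (data.status !== 'OK' || !data.results.length) {
    throw new Error('Geocoder failed: ' + data.status);
  }

  // Find the address component tagged as a locality.
  const components = data.results[0].address_components;
  const locality = components.find((c) => c.types.includes('locality'));
  return locality ? locality.long_name : null;
}

// Usage:
// getCityName(40.714224, -73.961452, 'YOUR_API_KEY')
//   .then((city) => console.log(city))
//   .catch(console.error);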
Q: How to further extend state of an extended Vuex Class Module I am in the process of lessening the bulk of a Vuex store module that has grown too big and overly complex. Our state management system is built with Vuex using JS classes where we have a base module that is extended to multiple other more specific modules. I am currently refactoring one of those more ‘specific’ modules - this module handles the state of a couple views containing a handful of components. I am attempting to extend this module the same as all of our other modules and use spread syntax to bring in the existing state properties… export default class ChildModule extends BaseModule { constructor() { super() this.state = { ...this.state } this.getters = { ...this.getters } } // ** the same syntax for actions and mutations // logging rootState in an action inside GrandchildModule correct values from ChildModule } the only difference in the working module vs the broken module is the class that’s being extended… export default class GrandchildModule extends ChildModule {... for some reason that I haven’t figured out yet the GrandchildModule is not inheriting the state from the ChildModule properties that require fetching data from the api - properties that do not require api access are behaving correctly - I have been unsuccessful using getters in the GrandchildModule which I expected to be reactive to fetched data? I’m hoping someone has some insight on how I can make this work or tell me what i’m doing wrong update: found this on reactivity that i'm currently investigating - not 100% that this is my issue but it could be? A: It sounds like you are trying to use inheritance to extend the state of your Vuex module. However, inheritance is not the appropriate way to do this in JavaScript. Instead of using inheritance, you should use composition. In your GrandchildModule, you can define a new state object that contains the properties from the ChildModule's state object that you want to include. You can then add any additional properties that are specific to the GrandchildModule. Here is an example of how you might do this: export default class GrandchildModule extends BaseModule { constructor() { super() // Create a new state object that includes the properties from // the ChildModule's state that you want to include this.state = { ...this.state, childModuleProp1: this.childModule.state.childModuleProp1, childModuleProp2: this.childModule.state.childModuleProp2, } // Add any additional properties specific to the GrandchildModule this.state.grandchildModuleProp1 = ... this.state.grandchildModuleProp2 = ... // Define your getters, mutations, and actions as usual this.getters = {...} this.mutations = {...} this.actions = {...} } } Using composition in this way allows you to reuse the state, getters, mutations, and actions from the ChildModule in the GrandchildModule without using inheritance. This makes it easier to manage and maintain your Vuex store modules.
How to further extend state of an extended Vuex Class Module
I am in the process of lessening the bulk of a Vuex store module that has grown too big and overly complex. Our state management system is built with Vuex using JS classes, where we have a base module that is extended into multiple other, more specific modules. I am currently refactoring one of those more ‘specific’ modules - this module handles the state of a couple of views containing a handful of components. I am attempting to extend this module the same way as all of our other modules and use spread syntax to bring in the existing state properties… export default class ChildModule extends BaseModule { constructor() { super() this.state = { ...this.state } this.getters = { ...this.getters } } // ** the same syntax for actions and mutations // logging rootState in an action inside GrandchildModule shows correct values from ChildModule } The only difference between the working module and the broken module is the class that’s being extended… export default class GrandchildModule extends ChildModule {... For some reason that I haven’t figured out yet, the GrandchildModule is not inheriting the state from the ChildModule for properties that require fetching data from the API - properties that do not require API access behave correctly. I have also been unsuccessful using getters in the GrandchildModule, which I expected to be reactive to the fetched data. I’m hoping someone has some insight on how I can make this work, or can tell me what I’m doing wrong. Update: I found this on reactivity and I'm currently investigating it - not 100% sure that this is my issue, but it could be.
[ "It sounds like you are trying to use inheritance to extend the state of your Vuex module. However, inheritance is not the appropriate way to do this in JavaScript. Instead of using inheritance, you should use composition.\nIn your GrandchildModule, you can define a new state object that contains the properties from the ChildModule's state object that you want to include. You can then add any additional properties that are specific to the GrandchildModule. Here is an example of how you might do this:\nexport default class GrandchildModule extends BaseModule {\n constructor() {\n super()\n\n // Create a new state object that includes the properties from\n // the ChildModule's state that you want to include\n this.state = {\n ...this.state,\n childModuleProp1: this.childModule.state.childModuleProp1,\n childModuleProp2: this.childModule.state.childModuleProp2,\n }\n\n // Add any additional properties specific to the GrandchildModule\n this.state.grandchildModuleProp1 = ...\n this.state.grandchildModuleProp2 = ...\n\n // Define your getters, mutations, and actions as usual\n this.getters = {...}\n this.mutations = {...}\n this.actions = {...}\n }\n}\n\nUsing composition in this way allows you to reuse the state, getters, mutations, and actions from the ChildModule in the GrandchildModule without using inheritance. This makes it easier to manage and maintain your Vuex store modules.\n" ]
[ 0 ]
[]
[]
[ "extends", "javascript", "vuex" ]
stackoverflow_0074630168_extends_javascript_vuex.txt
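To make the composition suggestion above concrete: instead of class inheritance, each Vuex module can be built as a plain options object by a factory function, and the more specific module spreads in only the pieces it needs. Everything below (module names, the api stub) is illustrative, not taken from the codebase in the question:

// Stand-in for whatever API client the real modules use.
const api = { fetchItems: async () => [] };

function baseModule() {
  return {
    state: { loading: false },
    getters: { isLoading: (state) => state.loading },
    mutations: { SET_LOADING(state, value) { state.loading = value; } },
    actions: {},
  };
}

function childModule() {
  const base = baseModule();
  return {
    namespaced: true,
    state: { ...base.state, items: [] },
    getters: { ...base.getters, itemCount: (state) => state.items.length },
    mutations: {
      ...base.mutations,
      SET_ITEMS(state, items) { state.items = items; },
    },
    actions: {
      ...base.actions,
      async fetchItems({ commit }) {
        commit('SET_LOADING', true);
        commit('SET_ITEMS', await api.fetchItems());
        commit('SET_LOADING', false);
      },
    },
  };
}

// Registered like any plain module: new Vuex.Store({ modules: { child: childModule() } });
// Because each factory returns a fresh plain object, Vuex can make the whole state reactive when the store is created.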
Q: Convert Text to Number - Excel Javascript I've got a range that is originally formatted as "General" or @ and I've tried to convert it back to numbers. Technically, it's working, the cell format shows as "numbers" in Excel. But, the only way I can get the numbers to behave correctly (show the sum in the bottom) is by using the "Convert to Number" function in Excel. How can I access this functionality programmaticaly in Javasciprt/Excel API? Here is what I'm using currently: var ws = context.workbook.worksheets.getActiveWorksheet(); var Used_Rng_And_Props = await Ranges.Get_Used_Rng_And_Props(context, ws, false) //Set Revcd_Wt as Number var recvdwt_col_index = await Ranges.Get_Header_Col_Index(context, Used_Rng_And_Props, "Recvd_wt") console.log('recvdwt_col_index:' + recvdwt_col_index) var rng = await Ranges.Get_Entire_Col_Rng(ws, recvdwt_col_index) rng.numberFormat = "0.00" rng.select() A: Interstingly, I found this --> https://learn.microsoft.com/en-us/javascript/api/excel/excel.range?view=excel-js-preview#excel-excel-range-convertdatatypetotext-member(1) But I need the opposite. After trying Excel online, I noticed the same functionality didn't exist (no "Convert to Number") and I am guessing this isn't possible via the Excel API as it has to be consistent between versions. But, perhaps I'm wrong, so I'll leave it to better minds. As it stands, I basically wrote a function to get the rng.values, switch the rng.numberFormat back to 0.00 then re-wrote the values. This resolved the issue, but will add latency as I added extra context.sync. //Set Revcd_Wt back to Number var recvdwt_col_index = await Ranges.Get_Header_Col_Index(context, Used_Rng_And_Props, "Recvd_wt") var rng = ws.getRangeByIndexes(1, recvdwt_col_index, Used_Rng_And_Props.rowCount - 1, 1) rng.load('values') await context.sync() var rng_vals = rng.values rng.numberFormat = '0.00' rng.values = rng_vals
Convert Text to Number - Excel JavaScript
I've got a range that is originally formatted as "General" or @ and I've tried to convert it back to numbers. Technically, it's working: the cell format shows as "numbers" in Excel. But the only way I can get the numbers to behave correctly (show the sum at the bottom) is by using the "Convert to Number" function in Excel. How can I access this functionality programmatically via the Excel JavaScript API? Here is what I'm using currently: var ws = context.workbook.worksheets.getActiveWorksheet(); var Used_Rng_And_Props = await Ranges.Get_Used_Rng_And_Props(context, ws, false) //Set Revcd_Wt as Number var recvdwt_col_index = await Ranges.Get_Header_Col_Index(context, Used_Rng_And_Props, "Recvd_wt") console.log('recvdwt_col_index:' + recvdwt_col_index) var rng = await Ranges.Get_Entire_Col_Rng(ws, recvdwt_col_index) rng.numberFormat = "0.00" rng.select()
[ "Interstingly, I found this --> https://learn.microsoft.com/en-us/javascript/api/excel/excel.range?view=excel-js-preview#excel-excel-range-convertdatatypetotext-member(1)\nBut I need the opposite. After trying Excel online, I noticed the same functionality didn't exist (no \"Convert to Number\") and I am guessing this isn't possible via the Excel API as it has to be consistent between versions. But, perhaps I'm wrong, so I'll leave it to better minds.\nAs it stands, I basically wrote a function to get the rng.values, switch the rng.numberFormat back to 0.00 then re-wrote the values. This resolved the issue, but will add latency as I added extra context.sync.\n//Set Revcd_Wt back to Number\nvar recvdwt_col_index = await Ranges.Get_Header_Col_Index(context, Used_Rng_And_Props, \"Recvd_wt\")\nvar rng = ws.getRangeByIndexes(1, recvdwt_col_index, Used_Rng_And_Props.rowCount - 1, 1)\nrng.load('values')\nawait context.sync()\nvar rng_vals = rng.values\nrng.numberFormat = '0.00'\nrng.values = rng_vals\n\n" ]
[ 0 ]
[]
[]
[ "excel", "excel_web_addins", "javascript", "office_addins", "office_js" ]
stackoverflow_0074661789_excel_excel_web_addins_javascript_office_addins_office_js.txt
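Building on the workaround above: if the cached values come back as strings rather than numbers, coercing them before the write makes the conversion explicit instead of relying on Excel to reinterpret the text. A sketch, assuming it runs inside an async function in an Excel add-in and that "B2:B100" stands in for the real column range:

await Excel.run(async (context) => {
  const sheet = context.workbook.worksheets.getActiveWorksheet();
  const rng = sheet.getRange("B2:B100");   // placeholder range address
  rng.load("values");
  await context.sync();

  // Turn numeric-looking strings such as "12.34" into real numbers;
  // leave genuine numbers and non-numeric text untouched.
  const coerced = rng.values.map((row) =>
    row.map((cell) => {
      if (typeof cell !== "string" || cell.trim() === "") return cell;
      const n = Number(cell.trim());
      return Number.isNaN(n) ? cell : n;
    })
  );

  rng.numberFormat = "0.00";               // same broadcast assignment as above
  rng.values = coerced;
  await context.sync();
});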
Q: How to: rotate a selected/set image (Flutter) I've managed to rotate images to landscape/portrait after selecting them from Image picker (gallery/camera) .. This works fine, and will continue set new images to my desired orientation .. However, I'm trying to use the same method to rotate an already selected/set image and it doesn't work .. Here is the logic I'm using: import 'package:image/image.dart' as img; void _rotateImage(File file) async { print('>>> rotating image'); try { List<int> imageBytes = await file.readAsBytes(); final originalImage = img.decodeImage(imageBytes); print('>>> original width: ${originalImage.width}'); img.Image fixedImage; fixedImage = img.copyRotate(originalImage, 90); print('>>> fixed width: ${fixedImage.width}'); final fixedFile = await file.writeAsBytes(img.encodeJpg(fixedImage)); setState(() { print('>>> setting state'); _image = fixedFile; }); } catch (e) { print(e); } } I can even see that the image is getting rotated before setting state, but it still doesn't update on screen (this is showing two attempts, not multiple in one) I/flutter (18314): >>> rotating image I/flutter (18314): >>> original width: 450 I/flutter (18314): >>> fixed width: 360 I/flutter (18314): >>> setting state I/flutter (18314): >>> rotating image I/flutter (18314): >>> original width: 360 I/flutter (18314): >>> fixed width: 450 I/flutter (18314): >>> setting state Does anyone has any idea why this method works when picking a new image from the camera/gallery but won't when using a file that's already in the state? [EDIT] I thought it may be something to do with the same file path being used. So I added this code below and although it makes the image refresh, for a fraction of a second, it still doesn't show the rotated image [/EDIT] import 'package:image/image.dart' as img; void _rotateImage(File file) async { try { Random random = new Random(); int randomNumber = random.nextInt(1000000); final newFile = await file.copy( '/data/user/0/!PRIVATE!/cache/rotatedImage$randomNumber.jpg'); List<int> imageBytes = await newFile.readAsBytes(); final originalImage = img.decodeImage(imageBytes); img.Image fixedImage; fixedImage = img.copyRotate(originalImage, 90); final fixedFile = await newFile.writeAsBytes(img.encodeJpg(fixedImage), mode: FileMode.append, flush: true); setState(() { _image = fixedFile; }); } catch (e) { print(e); } } Below is some code to show what's happening when selecting an image and choosing to rotate import 'package:image/image.dart' as img; void _pickImage() async { Navigator.pop(context); try { final pickedFile = await _imagePicker.getImage(source: ImageSource.gallery); File file = File(pickedFile.path); if (pickedFile != null && _rotateToLandscape) { await _setImageToLandscape(file); } else if (pickedFile != null) { await _setImageToPortrait(file); } } catch (e) { print(e); } } Future<void> _setImageToLandscape(File file) async { print('>>> setting image to landscape'); try { setState(() { _loading = true; }); var decodedImage = await decodeImageFromList(file.readAsBytesSync()); int width = decodedImage.width; int height = decodedImage.height; if (width > height) { print('>>> returing original image'); _setSelectedImage(file); } else if (width < height) { print('>>> rotating image'); List<int> imageBytes = await file.readAsBytes(); final originalImage = img.decodeImage(imageBytes); img.Image fixedImage; fixedImage = img.copyRotate(originalImage, -90); final fixedFile = await file.writeAsBytes(img.encodeJpg(fixedImage)); _setSelectedImage(fixedFile); } } catch 
(e) { print(e); } finally { setState(() { _loading = false; }); } } void _setSelectedImage(File file) { switch (_selectedImage) { case 1: setState(() { _image = file; widget.setImage(image: file); }); break; case 2: setState(() { _image2 = file; widget.setImage(image2: file); }); break; case 3: setState(() { _image3 = file; widget.setImage(image3: file); }); break; } } A: You've set the FileMode when writing to FileMode.append so it will add the new image in the same file after the old image (since you copied the old file) which means that when decoding the new image only the first part will get decoded (the original image) So to fix it you should just be able to remove the mode from the write A: Future<File?> _rotateImage( String url, String fileExt, AttachmentModel item) async { try { File file; if (item.file == null) { file = await urlToFile(url, fileExt, true); } else { file = await urlToFile(url, fileExt, false); List<int> temp = await item.file!.readAsBytes(); await file.writeAsBytes(temp); } List<int> imageBytes = await file.readAsBytes(); final originalImage = img.decodeImage(imageBytes); print('previous width: ${originalImage?.width}'); img.Image newImage; newImage = img.copyRotate(originalImage!, 90); print('width: ${newImage.width}'); final fixedFile = await file.writeAsBytes(img.encodeJpg(newImage)); int index = attachments!.indexOf(item); attachments![index] = AttachmentModel( attachmentUrl: fixedFile.path, fileExt: fileExt, file: fixedFile, ); setState(() {}); return fixedFile; } catch (e) { print(e); return null; } } Future<File> urlToFile(String imageUrl, String fileExt, bool isNet) async { Directory tempDir = await getTemporaryDirectory(); String tempPath = tempDir.path; File file = File(tempPath + DateTime.now().toString() + '.$fileExt'); if (!isNet) return file; Uri uri = Uri.parse(imageUrl); http.Response response = await http.get(uri); await file.writeAsBytes(response.bodyBytes); return file; } Don't use original file, create new one over the original file, I put this code for an example. I know it's to late but maybe It can help someone A: I had the same problem with my photo capture application which uses both cameras (Front and Back). The problem was that the photo captured by the front camera was flipped horizontally. What I did was detect if the front camera is in use and flip the image horizontally, if not; return the image as it is. Future<File?> takePicture() async { final CameraController? cameraController = controller; if (cameraController == null || !cameraController.value.isInitialized) { print('Error >>>>: select a camera first.'); return null; } if (cameraController.value.isTakingPicture) { // A capture is already pending, do nothing. return null; } try { final XFile file = await cameraController.takePicture(); print(file); List<int> imageBytes = await file.readAsBytes(); File file2 = File(file.path); if(_selectedCamera == 1){ // When using the front camera; flip the image img.Image? originalImage = img.decodeImage(imageBytes); img.Image fixedImage = img.flipHorizontal(originalImage!); File fixedFile = await file2.writeAsBytes( img.encodeJpg(fixedImage), flush: true, ); // When using the back camara, don't flip the image return fixedFile; } return file2; } on CameraException catch (e) { print(e); return null; } }
How to: rotate a selected/set image (Flutter)
I've managed to rotate images to landscape/portrait after selecting them from Image picker (gallery/camera) .. This works fine, and will continue set new images to my desired orientation .. However, I'm trying to use the same method to rotate an already selected/set image and it doesn't work .. Here is the logic I'm using: import 'package:image/image.dart' as img; void _rotateImage(File file) async { print('>>> rotating image'); try { List<int> imageBytes = await file.readAsBytes(); final originalImage = img.decodeImage(imageBytes); print('>>> original width: ${originalImage.width}'); img.Image fixedImage; fixedImage = img.copyRotate(originalImage, 90); print('>>> fixed width: ${fixedImage.width}'); final fixedFile = await file.writeAsBytes(img.encodeJpg(fixedImage)); setState(() { print('>>> setting state'); _image = fixedFile; }); } catch (e) { print(e); } } I can even see that the image is getting rotated before setting state, but it still doesn't update on screen (this is showing two attempts, not multiple in one) I/flutter (18314): >>> rotating image I/flutter (18314): >>> original width: 450 I/flutter (18314): >>> fixed width: 360 I/flutter (18314): >>> setting state I/flutter (18314): >>> rotating image I/flutter (18314): >>> original width: 360 I/flutter (18314): >>> fixed width: 450 I/flutter (18314): >>> setting state Does anyone has any idea why this method works when picking a new image from the camera/gallery but won't when using a file that's already in the state? [EDIT] I thought it may be something to do with the same file path being used. So I added this code below and although it makes the image refresh, for a fraction of a second, it still doesn't show the rotated image [/EDIT] import 'package:image/image.dart' as img; void _rotateImage(File file) async { try { Random random = new Random(); int randomNumber = random.nextInt(1000000); final newFile = await file.copy( '/data/user/0/!PRIVATE!/cache/rotatedImage$randomNumber.jpg'); List<int> imageBytes = await newFile.readAsBytes(); final originalImage = img.decodeImage(imageBytes); img.Image fixedImage; fixedImage = img.copyRotate(originalImage, 90); final fixedFile = await newFile.writeAsBytes(img.encodeJpg(fixedImage), mode: FileMode.append, flush: true); setState(() { _image = fixedFile; }); } catch (e) { print(e); } } Below is some code to show what's happening when selecting an image and choosing to rotate import 'package:image/image.dart' as img; void _pickImage() async { Navigator.pop(context); try { final pickedFile = await _imagePicker.getImage(source: ImageSource.gallery); File file = File(pickedFile.path); if (pickedFile != null && _rotateToLandscape) { await _setImageToLandscape(file); } else if (pickedFile != null) { await _setImageToPortrait(file); } } catch (e) { print(e); } } Future<void> _setImageToLandscape(File file) async { print('>>> setting image to landscape'); try { setState(() { _loading = true; }); var decodedImage = await decodeImageFromList(file.readAsBytesSync()); int width = decodedImage.width; int height = decodedImage.height; if (width > height) { print('>>> returing original image'); _setSelectedImage(file); } else if (width < height) { print('>>> rotating image'); List<int> imageBytes = await file.readAsBytes(); final originalImage = img.decodeImage(imageBytes); img.Image fixedImage; fixedImage = img.copyRotate(originalImage, -90); final fixedFile = await file.writeAsBytes(img.encodeJpg(fixedImage)); _setSelectedImage(fixedFile); } } catch (e) { print(e); } finally { setState(() { _loading 
= false; }); } } void _setSelectedImage(File file) { switch (_selectedImage) { case 1: setState(() { _image = file; widget.setImage(image: file); }); break; case 2: setState(() { _image2 = file; widget.setImage(image2: file); }); break; case 3: setState(() { _image3 = file; widget.setImage(image3: file); }); break; } }
[ "You've set the FileMode when writing to FileMode.append so it will add the new image in the same file after the old image (since you copied the old file) which means that when decoding the new image only the first part will get decoded (the original image)\nSo to fix it you should just be able to remove the mode from the write\n", "Future<File?> _rotateImage(\n String url, String fileExt, AttachmentModel item) async {\n try {\n File file;\n if (item.file == null) {\n file = await urlToFile(url, fileExt, true);\n } else {\n file = await urlToFile(url, fileExt, false);\n List<int> temp = await item.file!.readAsBytes();\n await file.writeAsBytes(temp);\n }\n\n List<int> imageBytes = await file.readAsBytes();\n final originalImage = img.decodeImage(imageBytes);\n print('previous width: ${originalImage?.width}');\n img.Image newImage;\n newImage = img.copyRotate(originalImage!, 90);\n print('width: ${newImage.width}');\n final fixedFile = await file.writeAsBytes(img.encodeJpg(newImage));\n int index = attachments!.indexOf(item);\n attachments![index] = AttachmentModel(\n attachmentUrl: fixedFile.path,\n fileExt: fileExt,\n file: fixedFile,\n );\n setState(() {});\n return fixedFile;\n } catch (e) {\n print(e);\n return null;\n }\n }\n\n Future<File> urlToFile(String imageUrl, String fileExt, bool isNet) async {\n Directory tempDir = await getTemporaryDirectory();\n String tempPath = tempDir.path;\n File file = File(tempPath + DateTime.now().toString() + '.$fileExt');\n if (!isNet) return file;\n Uri uri = Uri.parse(imageUrl);\n http.Response response = await http.get(uri);\n await file.writeAsBytes(response.bodyBytes);\n return file;\n }\n\nDon't use original file, create new one over the original file, I put this code for an example. I know it's to late but maybe It can help someone\n", "I had the same problem with my photo capture application which uses both cameras (Front and Back).\nThe problem was that the photo captured by the front camera was flipped horizontally.\nWhat I did was detect if the front camera is in use and flip the image horizontally, if not; return the image as it is.\nFuture<File?> takePicture() async {\n final CameraController? cameraController = controller;\n if (cameraController == null || !cameraController.value.isInitialized) {\n print('Error >>>>: select a camera first.');\n return null;\n }\n\n if (cameraController.value.isTakingPicture) {\n // A capture is already pending, do nothing.\n return null;\n }\n\n try {\n final XFile file = await cameraController.takePicture();\n print(file);\n\n List<int> imageBytes = await file.readAsBytes();\n File file2 = File(file.path);\n\n if(_selectedCamera == 1){ // When using the front camera; flip the image\n img.Image? originalImage = img.decodeImage(imageBytes);\n img.Image fixedImage = img.flipHorizontal(originalImage!);\n File fixedFile = await file2.writeAsBytes(\n img.encodeJpg(fixedImage),\n flush: true,\n );\n // When using the back camara, don't flip the image\n return fixedFile;\n }\n return file2;\n\n\n } on CameraException catch (e) {\n print(e);\n return null;\n }\n}\n\n" ]
[ 3, 0, 0 ]
[]
[]
[ "dart", "flutter" ]
stackoverflow_0064498774_dart_flutter.txt
Q: Laravel Forge Letsencrypt Fails This is a long shot, but wondering if anyone else has run into a similar issue. I'm trying to set up a new site on Laravel Forge using DigitalOcean as my provider. I've got the server instance set up and the app is installed, but when I attempt to navigate to the site I get a SSL_ERROR_UNRECOGNIZED_NAME_ALERT error. DNS is being provided by CloudFlare (proxied) and the A record resolves correctly. I went into my Forge dashboard to request a new SSL Cert from LetsEncrypt, but that failed with the error output below. 2022-12-03 00:00:35 URL:https://forge-certificates.laravel.com/le/1617562/1825058/ecdsa?env=production [4557] -> "letsencrypt_script1670025635" [1] Cloning into 'letsencrypt1670025635'... Note: switching to '91cccc0c234e4decf0a19595fa19a6f306788032'. You are in 'detached HEAD' state. You can look around, make experimental changes and commit them, and you can discard any commits you make in this state without impacting any branches by switching back to a branch. If you want to create a new branch to retain commits you create, you may do so (now or later) by using -c with the switch command. Example: git switch -c <new-branch-name> Or undo this operation with: git switch - Turn off this advice by setting config variable advice.detachedHead to false HEAD is now at 91cccc0 ensure newline before new section in openssl.cnf + ERROR: An error occurred while sending post-request to https://acme-v02.api.letsencrypt.org/acme/new-order (Status 400) Details: HTTP/2 400 server: nginx date: Sat, 03 Dec 2022 00:00:39 GMT content-type: application/problem+json content-length: 173 cache-control: public, max-age=0, no-cache link: <https://acme-v02.api.letsencrypt.org/directory>;rel="index" replay-nonce: 5CA2cs4FJk70Onq0iakIYvXisgUnrGMELGxh0lXsKjFUAWU { "type": "urn:ietf:params:acme:error:accountDoesNotExist", "detail": "Account \"https://acme-v02.api.letsencrypt.org/acme/acct/853465757\" not found", "status": 400 } I've looked around a bit and I've seen similar issues, but none of the solutions being suggested seem relevant (or they didn't solve the problem after attempting them.) Any and all advice is appreciated. Thanks in advance! A: It looks like the issue is with your SSL certificate. You may need to generate a new certificate from LetsEncrypt and configure it in the Forge dashboard. If the issue persists, it may be worth checking your DNS settings to ensure that the A record is pointing to the correct server IP address. You may also need to check your Forge configuration to ensure that the domain name is configured correctly.
Laravel Forge Letsencrypt Fails
This is a long shot, but wondering if anyone else has run into a similar issue. I'm trying to set up a new site on Laravel Forge using DigitalOcean as my provider. I've got the server instance set up and the app is installed, but when I attempt to navigate to the site I get a SSL_ERROR_UNRECOGNIZED_NAME_ALERT error. DNS is being provided by CloudFlare (proxied) and the A record resolves correctly. I went into my Forge dashboard to request a new SSL Cert from LetsEncrypt, but that failed with the error output below. 2022-12-03 00:00:35 URL:https://forge-certificates.laravel.com/le/1617562/1825058/ecdsa?env=production [4557] -> "letsencrypt_script1670025635" [1] Cloning into 'letsencrypt1670025635'... Note: switching to '91cccc0c234e4decf0a19595fa19a6f306788032'. You are in 'detached HEAD' state. You can look around, make experimental changes and commit them, and you can discard any commits you make in this state without impacting any branches by switching back to a branch. If you want to create a new branch to retain commits you create, you may do so (now or later) by using -c with the switch command. Example: git switch -c <new-branch-name> Or undo this operation with: git switch - Turn off this advice by setting config variable advice.detachedHead to false HEAD is now at 91cccc0 ensure newline before new section in openssl.cnf + ERROR: An error occurred while sending post-request to https://acme-v02.api.letsencrypt.org/acme/new-order (Status 400) Details: HTTP/2 400 server: nginx date: Sat, 03 Dec 2022 00:00:39 GMT content-type: application/problem+json content-length: 173 cache-control: public, max-age=0, no-cache link: <https://acme-v02.api.letsencrypt.org/directory>;rel="index" replay-nonce: 5CA2cs4FJk70Onq0iakIYvXisgUnrGMELGxh0lXsKjFUAWU { "type": "urn:ietf:params:acme:error:accountDoesNotExist", "detail": "Account \"https://acme-v02.api.letsencrypt.org/acme/acct/853465757\" not found", "status": 400 } I've looked around a bit and I've seen similar issues, but none of the solutions being suggested seem relevant (or they didn't solve the problem after attempting them.) Any and all advice is appreciated. Thanks in advance!
[ "It looks like the issue is with your SSL certificate. You may need to generate a new certificate from LetsEncrypt and configure it in the Forge dashboard. If the issue persists, it may be worth checking your DNS settings to ensure that the A record is pointing to the correct server IP address. You may also need to check your Forge configuration to ensure that the domain name is configured correctly.\n" ]
[ 1 ]
[]
[]
[ "laravel", "laravel_forge", "lets_encrypt" ]
stackoverflow_0074662824_laravel_laravel_forge_lets_encrypt.txt
Q: How to open URLs using Telegraf JS Markup inline keyboard in the Telegram in-app browser? Code Snippet: ctx.reply( `Hi! ${ctx.from.first_name} \n \n Shall we start? `, Markup.inlineKeyboard( [ Markup.button.url( "Covid-19 IN", "https://www.covid19india.org/" ), Markup.button.url("WHO ", "https://covid19.who.int/"), ], { columns: 2 } ) ); This is throwing a 400 Bad request URL exception. Any help on how to open these URLs in the Telegram in-app browser? A: Markup.urlButton("NAME", "https://url.com/"),
How to open URLs using Telegraf JS Markup inline keyboard in the Telegram in-app browser?
Code Snippet: ctx.reply( `Hi! ${ctx.from.first_name} \n \n Shall we start? `, Markup.inlineKeyboard( [ Markup.button.url( "Covid-19 IN", "https://www.covid19india.org/" ), Markup.button.url("WHO ", "https://covid19.who.int/"), ], { columns: 2 } ) ); This is throwing a 400 Bad request URL exception. Any help on how to open these URLs in the Telegram in-app browser?
[ " Markup.urlButton(\"NAME\", \"https://url.com/\"),\n\n" ]
[ 0 ]
[ "Just need to remove \"https://\" from the URL & it will work fine.\nhttps://github.com/telegraf/telegraf/discussions/1344\n" ]
[ -1 ]
[ "javascript", "node.js", "telegraf.js", "telegram", "telegram_bot" ]
stackoverflow_0067675449_javascript_node.js_telegraf.js_telegram_telegram_bot.txt
Q: Visual Studio Code C# Debugging Problem (The terminal process failed to launch: Path to shell executable "dotnet" is not a file of a symlink.) I created a workspace using dotnet new console, wrote some code. But when I try to start debugging it using the option Run/Start debugging in visual studio code, it fails with the message: Executing task: dotnet build /home/MY USERNAME/Desktop/Codes/C#/Console/Console.csproj /property:GenerateFullPaths=true /consoleloggerparameters:NoSummary The terminal process failed to launch: Path to shell executable "dotnet" is not a file of a symlink. Terminal will be reused by tasks, press any key to close it. Using the dotnet run command in terminal works fine without any problems. But using the start debugging option fails for some reason. I really don't want to have to type this command every time I want to start the program. Here is the result of dotnet --info command: .NET Core SDK (reflects global.json if exists):\ Version: 3.1.302\ Commit: 41faccf259 Runtime Environment:\ OS Name: ubuntu\ OS Version: 20.04\ OS Platform: Linux\ RID: linux-x64\ Base Path: /usr/share/dotnet/sdk/3.1.302/ Host (useful for support):\ Version: 3.1.6\ Commit: 3acd9b0cd1 .NET Core SDKs installed:\ 3.1.302 [/usr/share/dotnet/sdk] .NET Core runtimes installed:\ Microsoft.AspNetCore.App 3.1.6 [/usr/share/dotnet/shared/Microsoft.AspNetCore.App]\ Microsoft.NETCore.App 3.1.6 [/usr/share/dotnet/shared/Microsoft.NETCore.App] To install additional .NET Core runtimes or SDKs:\ https://aka.ms/dotnet-download I've done some translating with the result, it may not match entirely the original output A: Seems like this post isn't going to be answered. I've found a way to solve it. In "tasks.json" file i replaced the command "dotnet" with "/usr/bin/dotnet" and it's working fine now. But i think that the actual problem has something to do with the path variable and my solution is just a temporary one. A: try deleting the .vscode folder from the dotnet root project. Then restart the vscode project window this .vscode folder will regenerate automatically while you are asked to add configuration. And now your c# debugging should be working fine. It worked for me on Linux. A: This just randomly started happening on Mac for me. The fix was to add: export dotnet=/usr/local/share/dotnet/dotnet to my ~/.zshrc file. Then restart vscode. A: Here the issue was the dotnet package had just been installed and added to the path on Mac (using .net 5.0). I had to exit vs-code, close the terminal I had launched it from using code . and then open a new terminal tab (where dotnet itself was resolvable) and then relaunch vs-code from that new terminal. TLDR launch new terminal after dotnet install then use code . to launch new vs code instance from there (assumes you used the Ctrl/Cmd+shift+p "Add to shell" option in vscode to launch from a terminal) A: Kindly check this Grepper response, works well on Ubuntu 20.04.4 LTS also. A: In [LINUX] the $PATH environment variable may have another path to the "dotnet" command. So, you can use "echo $PATH" command to check it. If it's true, then you can check the bash file "sudo nano /etc/bash.bashrc" and remove the export with "dotnet" note. A: Uninstalling dotnet and vscode did not work for me, nor did removing ~/.vscode Eventually I resolved the issue by removing this directory ~/.config/Code. That directory contains various settings so you may wish to back it up / you may wish to retain your settings.json file. 
There is probably a specific value somewhere in that directory that causes this particular issue but I didn't want to sift through it to find the culprit - probably easier to just start again. A: In my case issue resolved by installing dotnet 5 sdk from the link below and Restarting the MAC. https://dotnet.microsoft.com/download/dotnet/thank-you/sdk-5.0.300-macos-x64-installer?journey=vs-code A: The solution may be that you installed .NET using the default instructions using export DOTNET_ROOT etc, and putting that in ~/bashrc The best way is, to install .NET SDK using sudo apt install sudo apt-get update; \ sudo apt-get install -y apt-transport-https && \ sudo apt-get update && \ sudo apt-get install -y dotnet-sdk-5.0 Full instructions on https://learn.microsoft.com/en-us/dotnet/core/install/linux-ubuntu. Now, .net install will indeed involve a 'symlink' about which the debugger complained. It should work. A: I deleted the "dotnet" folder from my personal folder and it worked. Apparently, when I downloaded the SDK, I made a mistake. A: Make the following changes in the settings.json file (typically in ~/.config/Code/User dir). { "terminal.integrated.profiles.linux": { "bash" : { "path" : "/bin/bash", "icon" : "terminal-bash" } } } PS: Using Ubuntu 20.04. Neither removing ~/.config/Code nor removing ~/.vscode worked for me. A: Old post but may help somebody: I have had this issue when VScode is re-launched after a restart of my mac. Closing vscode and re-opening it gives access back to the shell. A: To anyone else having this issue, you are likely experiencing a bug with VSCode, which will hopefully be resolved soon: https://github.com/microsoft/vscode/pull/158666 You likely have a ~/dotnet folder, and if your vscode process starts in the home directory, the folder will take priority over the executable in PATH. Check the other answers for temporary workarounds until the fix is merged. EDIT: Just because there is so much conflicting information here, the fixes that should work are: Changing the "command" field in tasks.json from "dotnet" to the full path (e.g. "/usr/bin/dotnet") Or Starting vscode via a terminal using code . (as long as you run this in a directory which doesn't contain a dotnet sub-directory) A: In .zshrc I had to change export dotnet=/usr/local/share/dotnet/dotnet to export dotnet=/usr/local/share/dotnet I have a dotnet executable within my dotnet folder too, but it only works if I do not include it in the path.
Visual Studio Code C# Debugging Problem (The terminal process failed to launch: Path to shell executable "dotnet" is not a file of a symlink.)
I created a workspace using dotnet new console, wrote some code. But when I try to start debugging it using the option Run/Start debugging in visual studio code, it fails with the message: Executing task: dotnet build /home/MY USERNAME/Desktop/Codes/C#/Console/Console.csproj /property:GenerateFullPaths=true /consoleloggerparameters:NoSummary The terminal process failed to launch: Path to shell executable "dotnet" is not a file of a symlink. Terminal will be reused by tasks, press any key to close it. Using the dotnet run command in terminal works fine without any problems. But using the start debugging option fails for some reason. I really don't want to have to type this command every time I want to start the program. Here is the result of dotnet --info command: .NET Core SDK (reflects global.json if exists):\ Version: 3.1.302\ Commit: 41faccf259 Runtime Environment:\ OS Name: ubuntu\ OS Version: 20.04\ OS Platform: Linux\ RID: linux-x64\ Base Path: /usr/share/dotnet/sdk/3.1.302/ Host (useful for support):\ Version: 3.1.6\ Commit: 3acd9b0cd1 .NET Core SDKs installed:\ 3.1.302 [/usr/share/dotnet/sdk] .NET Core runtimes installed:\ Microsoft.AspNetCore.App 3.1.6 [/usr/share/dotnet/shared/Microsoft.AspNetCore.App]\ Microsoft.NETCore.App 3.1.6 [/usr/share/dotnet/shared/Microsoft.NETCore.App] To install additional .NET Core runtimes or SDKs:\ https://aka.ms/dotnet-download I've done some translating with the result, it may not match entirely the original output
[ "Seems like this post isn't going to be answered. I've found a way to solve it. In \"tasks.json\" file i replaced the command \"dotnet\" with \"/usr/bin/dotnet\" and it's working fine now. But i think that the actual problem has something to do with the path variable and my solution is just a temporary one.\n", "try deleting the .vscode folder from the dotnet root project. Then restart the vscode project window this .vscode folder will regenerate automatically while you are asked to add configuration. And now your c# debugging should be working fine. It worked for me on Linux.\n", "This just randomly started happening on Mac for me. The fix was to add:\nexport dotnet=/usr/local/share/dotnet/dotnet\nto my ~/.zshrc file. Then restart vscode.\n", "Here the issue was the dotnet package had just been installed and added to the path on Mac (using .net 5.0). I had to exit vs-code, close the terminal I had launched it from using code . and then open a new terminal tab (where dotnet itself was resolvable) and then relaunch vs-code from that new terminal.\nTLDR launch new terminal after dotnet install then use code . to launch new vs code instance from there (assumes you used the Ctrl/Cmd+shift+p \"Add to shell\" option in vscode to launch from a terminal)\n", "Kindly check this Grepper response, works well on Ubuntu 20.04.4 LTS also.\n", "In [LINUX] the $PATH environment variable may have another path to the \"dotnet\" command. So, you can use \"echo $PATH\" command to check it. If it's true, then you can check the bash file \"sudo nano /etc/bash.bashrc\" and remove the export with \"dotnet\" note.\n", "Uninstalling dotnet and vscode did not work for me, nor did removing ~/.vscode\nEventually I resolved the issue by removing this directory ~/.config/Code. That directory contains various settings so you may wish to back it up / you may wish to retain your settings.json file.\nThere is probably a specific value somewhere in that directory that causes this particular issue but I didn't want to sift through it to find the culprit - probably easier to just start again.\n", "In my case issue resolved by installing dotnet 5 sdk from the link below and Restarting the MAC.\nhttps://dotnet.microsoft.com/download/dotnet/thank-you/sdk-5.0.300-macos-x64-installer?journey=vs-code\n", "The solution may be that you installed .NET using the default instructions using export DOTNET_ROOT etc, and putting that in ~/bashrc\nThe best way is, to install .NET SDK using\nsudo apt install \nsudo apt-get update; \\\nsudo apt-get install -y apt-transport-https && \\\nsudo apt-get update && \\\nsudo apt-get install -y dotnet-sdk-5.0\n\nFull instructions on https://learn.microsoft.com/en-us/dotnet/core/install/linux-ubuntu.\nNow, .net install will indeed involve a 'symlink' about which the debugger complained. It should work.\n", "I deleted the \"dotnet\" folder from my personal folder and it worked. Apparently, when I downloaded the SDK, I made a mistake.\n", "Make the following changes in the settings.json file (typically in ~/.config/Code/User dir).\n{\n \"terminal.integrated.profiles.linux\": {\n \"bash\" : { \n \"path\" : \"/bin/bash\", \n \"icon\" : \"terminal-bash\"\n }\n }\n}\n\nPS: Using Ubuntu 20.04. 
Neither removing ~/.config/Code nor removing ~/.vscode worked for me.\n", "Old post but may help somebody:\nI have had this issue when VScode is re-launched after a restart of my mac.\nClosing vscode and re-opening it gives access back to the shell.\n", "To anyone else having this issue, you are likely experiencing a bug with VSCode, which will hopefully be resolved soon:\nhttps://github.com/microsoft/vscode/pull/158666\nYou likely have a ~/dotnet folder, and if your vscode process starts in the home directory, the folder will take priority over the executable in PATH.\nCheck the other answers for temporary workarounds until the fix is merged.\nEDIT:\nJust because there is so much conflicting information here, the fixes that should work are:\n\nChanging the \"command\" field in tasks.json from \"dotnet\" to the full path (e.g. \"/usr/bin/dotnet\")\n\nOr\n\nStarting vscode via a terminal using code . (as long as you run this in a directory which doesn't contain a dotnet sub-directory)\n\n", "In .zshrc I had to change\nexport dotnet=/usr/local/share/dotnet/dotnet\n\nto\nexport dotnet=/usr/local/share/dotnet\n\nI have a dotnet executable within my dotnet folder too, but it only works if I do not include it in the path.\n" ]
[ 34, 9, 8, 2, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[]
[]
[ ".net", "c#", "visual_studio_code" ]
stackoverflow_0063088161_.net_c#_visual_studio_code.txt
Q: Is there any way to save file history within the code file, like a GIF contains multiple images, or a presentation sheet with animations? I want to create a tutorial code on recursion for education purposes. It looks like this for now: We have a function fact that calculates the factorial of parameter n recursively. def fact(n: int) -> int: if n <= 1: return 1 else: return n * fact(n - 1) We have the following files: recursion_01.md: print(fact(3)) # What is fact(4) ? # Let's take a look at the function definition... recursion_02.md: print(fact(3)) # What is fact(4) ? def fact(n): # We evaluate the variable "n" as 3... if n <= 1: return 1 else: return n * fact(n - 1) recursion_03.md: print(fact(3)) # What is fact(4) ? def fact(3): if 3 <= 1: # Condition is false... # SKIPPED! else: # Entering else: return 3 * fact(3 - 1) # We evaluate the expression... recursion_04.md: print(fact(3)) # What is fact(4) ? def fact(3): if 3 <= 1: # Condition is false... # SKIPPED! else: # Entering else: return 3 * fact(2) # What is fact(2) ? # Lat's take a look at the function definition... recursion_05.md: print(fact(3)) # What is fact(4) ? def fact(3): if 3 <= 1: # Condition is false... # SKIPPED! else: # Entering else: return 3 * fact(2) # What is fact(2) ? def fact(n): # We evaluate the variable "n" as 2... if n <= 1: return 1 else: return n * fact(n - 1) You can imagine the rest. So now I would just open one file after another, and show the changes. But it is tedious (and not elegant) to go through these files one-by-one. Therefore I am searching for a method / code format / file-extension / anything to save multiple code files in a simple code file (like a GIF file contains multiple images) and to be able to easily visually navigate between the file versions (like navigating between animations on a presentation sheet). Are you aware of any possibilities to achieve that? Thanks in advance! I have tried searching for such a possibility but I didn't find anything. Of course I can take screenshots and create a GIF etc. but that's not my purpose. A: It sounds like you're looking for a way to save multiple versions of your code in a single file and then easily switch between them. One way to do this would be to use a version control system like Git. With Git, you can save multiple versions of your code in a repository, and then use Git commands to switch between the different versions.
Is there any way to save file history within the code file, like a GIF contains multiple images, or a presentation sheet with animations?
I want to create a tutorial code on recursion for education purposes. It looks like this for now: We have a function fact that calculates the factorial of parameter n recursively. def fact(n: int) -> int: if n <= 1: return 1 else: return n * fact(n - 1) We have the following files: recursion_01.md: print(fact(3)) # What is fact(4) ? # Let's take a look at the function definition... recursion_02.md: print(fact(3)) # What is fact(4) ? def fact(n): # We evaluate the variable "n" as 3... if n <= 1: return 1 else: return n * fact(n - 1) recursion_03.md: print(fact(3)) # What is fact(4) ? def fact(3): if 3 <= 1: # Condition is false... # SKIPPED! else: # Entering else: return 3 * fact(3 - 1) # We evaluate the expression... recursion_04.md: print(fact(3)) # What is fact(4) ? def fact(3): if 3 <= 1: # Condition is false... # SKIPPED! else: # Entering else: return 3 * fact(2) # What is fact(2) ? # Lat's take a look at the function definition... recursion_05.md: print(fact(3)) # What is fact(4) ? def fact(3): if 3 <= 1: # Condition is false... # SKIPPED! else: # Entering else: return 3 * fact(2) # What is fact(2) ? def fact(n): # We evaluate the variable "n" as 2... if n <= 1: return 1 else: return n * fact(n - 1) You can imagine the rest. So now I would just open one file after another, and show the changes. But it is tedious (and not elegant) to go through these files one-by-one. Therefore I am searching for a method / code format / file-extension / anything to save multiple code files in a simple code file (like a GIF file contains multiple images) and to be able to easily visually navigate between the file versions (like navigating between animations on a presentation sheet). Are you aware of any possibilities to achieve that? Thanks in advance! I have tried searching for such a possibility but I didn't find anything. Of course I can take screenshots and create a GIF etc. but that's not my purpose.
[ "It sounds like you're looking for a way to save multiple versions of your code in a single file and then easily switch between them. One way to do this would be to use a version control system like Git. With Git, you can save multiple versions of your code in a repository, and then use Git commands to switch between the different versions.\n" ]
[ 0 ]
[]
[]
[ "file_extension", "format", "gif", "presentation", "version" ]
stackoverflow_0074658968_file_extension_format_gif_presentation_version.txt
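As a concrete illustration of the Git suggestion in the answer above, here is a rough Python sketch (not from the original post) that keeps the tutorial in one tracked file and records every stage as a tagged commit, so you flip between stages with git instead of juggling recursion_01.md through recursion_NN.md. The file name recursion.py and the step-NN tag scheme are assumptions made for the example, and it presumes git is installed and has user.name/user.email configured.

import subprocess

TUTORIAL_FILE = "recursion.py"  # assumed name; use whatever single file you teach from

def git(*args: str) -> subprocess.CompletedProcess:
    """Run a git command in the current directory and raise if it fails."""
    return subprocess.run(["git", *args], check=True, capture_output=True, text=True)

def commit_step(step_no: int, source: str) -> None:
    """Overwrite the tutorial file with this step's code and tag the commit."""
    with open(TUTORIAL_FILE, "w") as fh:
        fh.write(source)
    git("add", TUTORIAL_FILE)
    git("commit", "-m", f"tutorial step {step_no:02d}")
    git("tag", f"step-{step_no:02d}")

def read_step(step_no: int) -> str:
    """Return the file exactly as it looked at the given step, without a checkout."""
    return git("show", f"step-{step_no:02d}:{TUTORIAL_FILE}").stdout

if __name__ == "__main__":
    git("init")
    commit_step(1, "print(fact(3))\n# Let's take a look at the function definition...\n")
    commit_step(2, "print(fact(3))\n\ndef fact(n):\n    # We evaluate the variable 'n' as 3...\n    ...\n")
    print(read_step(1))

Stepping through the "animation" in class is then just git show step-01:recursion.py, or git checkout step-01 if you want your editor to follow along.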
Q: Django rest API : Python command-line .py execution: Stuck at a certain place I'm trying to deploy the following Django-rest api on gcp ubuntu 22.0.4 using python 3.9. https://github.com/OkunaOrg/okuna-api The entire setup is supposed to be done and get setup using a single command : python3.9 okuna-cli.py up-full The execution seems stuck at "Waiting for server to come up..." and doesn't proceed ahead. The setup should complete by stating "Okuna is live at "domain". Another important aspect of the setup is the 5 docker containers are running and working fine when i run the py file. I'm even able to access the database after creating a superuser. The code is as follows : import random import time import click import subprocess import colorlog import logging import os.path from shutil import copyfile import json import atexit import os, errno import requests from halo import Halo handler = colorlog.StreamHandler() handler.setFormatter(colorlog.ColoredFormatter( '%(log_color)s%(name)s -> %(message)s')) logger = colorlog.getLogger('') logger.addHandler(handler) logger.setLevel(level=logging.DEBUG) current_dir = os.path.dirname(__file__) OKUNA_CLI_CONFIG_FILE = os.path.join(current_dir, '.okuna-cli.json') OKUNA_CLI_CONFIG_FILE_TEMPLATE = os.path.join(current_dir, 'templates/.okuna-cli.json') LOCAL_API_ENV_FILE = os.path.join(current_dir, '.env') LOCAL_API_ENV_FILE_TEMPLATE = os.path.join(current_dir, 'templates/.env') DOCKER_COMPOSE_ENV_FILE = os.path.join(current_dir, '.docker-compose.env') DOCKER_COMPOSE_ENV_FILE_TEMPLATE = os.path.join(current_dir, 'templates/.docker-compose.env') REQUIREMENTS_TXT_FILE = os.path.join(current_dir, 'requirements.txt') DOCKER_API_IMAGE_REQUIREMENTS_TXT_FILE = os.path.join(current_dir, '.docker', 'api', 'requirements.txt') DOCKER_WORKER_IMAGE_REQUIREMENTS_TXT_FILE = os.path.join(current_dir, '.docker', 'worker', 'requirements.txt') DOCKER_SCHEDULER_IMAGE_REQUIREMENTS_TXT_FILE = os.path.join(current_dir, '.docker', 'scheduler', 'requirements.txt') DOCKER_API_TEST_IMAGE_REQUIREMENTS_TXT_FILE = os.path.join(current_dir, '.docker', 'api-test', 'requirements.txt') CONTEXT_SETTINGS = dict( default_map={} ) random_generator = random.SystemRandom() def _remove_file_silently(filename): try: os.remove(filename) except OSError as e: # this would be "except OSError, e:" before Python 2.6 if e.errno != errno.ENOENT: # errno.ENOENT = no such file or directory raise # re-raise exception if a different error occurred def _get_random_string(length=12, allowed_chars='abcdefghijklmnopqrstuvwxyz' 'ABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789'): """ Return a securely generated random string. The default length of 12 with the a-z, A-Z, 0-9 character set returns a 71-bit value. log_2((26+26+10)^12) =~ 71 bits """ return ''.join(random.choice(allowed_chars) for i in range(length)) def _get_django_secret_key(): """ Return a 50 character random string usable as a SECRET_KEY setting value. 
""" chars = 'abcdefghijklmnopqrstuvwxyz0123456789!@#$%^&*(-_=+)' return _get_random_string(50, chars) def _get_mysql_password(): return _get_random_string(64) def _get_redis_password(): return _get_random_string(128) def _copy_requirements_txt_to_docker_images_dir(): copyfile(REQUIREMENTS_TXT_FILE, DOCKER_API_IMAGE_REQUIREMENTS_TXT_FILE) copyfile(REQUIREMENTS_TXT_FILE, DOCKER_WORKER_IMAGE_REQUIREMENTS_TXT_FILE) copyfile(REQUIREMENTS_TXT_FILE, DOCKER_SCHEDULER_IMAGE_REQUIREMENTS_TXT_FILE) def _check_okuna_api_is_running(address, port): # Create a TCP socket try: response = requests.get('http://%s:%s/health/' % (address, port)) response_status = response.status_code return response_status == 200 except requests.ConnectionError as e: return False def _wait_until_api_is_running(address, port, message='Waiting for server to come up...', sleep=None): spinner = Halo(text=message, spinner='dots') spinner.start() if sleep: time.sleep(sleep) is_running = _check_okuna_api_is_running(address=address, port=port) while not is_running: is_running = _check_okuna_api_is_running(address=address, port=port) spinner.stop() def _clean(): """ Cleans everything that the okuna-cli has created. Docker volumes, config files, everything. :return: """ logger.info(' Cleaning up database') subprocess.run(["docker", "volume", "rm", "okuna-api_mariadb"]) subprocess.run(["docker", "volume", "rm", "okuna-api_redisdb"]) logger.info(' Cleaning up config files') _remove_file_silently(LOCAL_API_ENV_FILE) _remove_file_silently(DOCKER_COMPOSE_ENV_FILE) _remove_file_silently(OKUNA_CLI_CONFIG_FILE) logger.info('✅ Clean up done!') def _print_okuna_logo(): print(r""" ____ _ / __ \| | | | | | | ___ _ _ __ __ _ | | | | |/ | | | | '_ \ / _` | | |__| | <| |_| | | | | (_| | \____/|_|\_\\__,_|_| |_|\__,_| """) def _file_exists(filename): return os.path.exists(filename) and os.path.isfile(filename) def _replace_in_file(filename, texts): with open(filename, 'r') as file: filedata = file.read() # Replace the target string for key in texts: value = texts[key] filedata = filedata.replace(key, value) # Write the file out again with open(filename, 'w') as file: file.write(filedata) def _ensure_has_local_api_environment_file(okuna_cli_config): if _file_exists(LOCAL_API_ENV_FILE): return logger.info('Local API .env file does not exist. Creating %s' % LOCAL_API_ENV_FILE) if not _file_exists(LOCAL_API_ENV_FILE_TEMPLATE): raise Exception('Local API .env file template did not exist') copyfile(LOCAL_API_ENV_FILE_TEMPLATE, LOCAL_API_ENV_FILE) _replace_in_file(LOCAL_API_ENV_FILE, { "{{DJANGO_SECRET_KEY}}": okuna_cli_config['djangoSecretKey'], "{{SQL_PASSWORD}}": okuna_cli_config['sqlPassword'], "{{REDIS_PASSWORD}}": okuna_cli_config['redisPassword'], }) def _ensure_has_docker_compose_api_environment_file(okuna_cli_config): if _file_exists(DOCKER_COMPOSE_ENV_FILE): return logger.info('Docker compose env file does not exist. 
Creating %s' % DOCKER_COMPOSE_ENV_FILE) if not _file_exists(DOCKER_COMPOSE_ENV_FILE_TEMPLATE): raise Exception('Docker compose env file template did not exist') copyfile(DOCKER_COMPOSE_ENV_FILE_TEMPLATE, DOCKER_COMPOSE_ENV_FILE) _replace_in_file(DOCKER_COMPOSE_ENV_FILE, { "{{DJANGO_SECRET_KEY}}": okuna_cli_config['djangoSecretKey'], "{{SQL_PASSWORD}}": okuna_cli_config['sqlPassword'], "{{REDIS_PASSWORD}}": okuna_cli_config['redisPassword'], }) def _ensure_has_okuna_config_file(): if _file_exists(OKUNA_CLI_CONFIG_FILE): return django_secret_key = _get_django_secret_key() mysql_password = _get_mysql_password() redis_password = _get_redis_password() logger.info('Generated DJANGO_SECRET_KEY=%s' % django_secret_key) logger.info('Generated SQL_PASSWORD=%s' % mysql_password) logger.info('Generated REDIS_PASSWORD=%s' % redis_password) logger.info('Config file does not exist. Creating %s' % OKUNA_CLI_CONFIG_FILE) if not _file_exists(OKUNA_CLI_CONFIG_FILE_TEMPLATE): raise Exception('Config file template did not exists') copyfile(OKUNA_CLI_CONFIG_FILE_TEMPLATE, OKUNA_CLI_CONFIG_FILE) _replace_in_file(OKUNA_CLI_CONFIG_FILE, { "{{DJANGO_SECRET_KEY}}": django_secret_key, "{{SQL_PASSWORD}}": mysql_password, "{{REDIS_PASSWORD}}": redis_password, }) def _bootstrap(is_local_api): logger.info(' Bootstrapping Okuna with some data') if is_local_api: subprocess.run(["./utils/scripts/bootstrap_development_data.sh"]) else: subprocess.run(["docker-compose", "-f", "docker-compose-full.yml", "exec", "webserver", "/bootstrap_development_data.sh"]) def _ensure_has_required_cli_config_files(): _ensure_has_okuna_config_file() with open(OKUNA_CLI_CONFIG_FILE, 'r+') as okuna_cli_config_file: okuna_cli_config = json.load(okuna_cli_config_file) _ensure_has_docker_compose_api_environment_file(okuna_cli_config=okuna_cli_config) _ensure_has_local_api_environment_file(okuna_cli_config=okuna_cli_config) def _ensure_was_bootstrapped(is_local_api): with open(OKUNA_CLI_CONFIG_FILE, 'r+') as okuna_cli_config_file: okuna_cli_config = json.load(okuna_cli_config_file) if okuna_cli_config['bootstrapped']: return logger.info('Okuna was not bootstrapped.') _bootstrap(is_local_api=is_local_api) okuna_cli_config['bootstrapped'] = True okuna_cli_config_file.seek(0) json.dump(okuna_cli_config, okuna_cli_config_file, indent=4) okuna_cli_config_file.truncate() logger.info('Okuna was bootstrapped.') @click.group() def cli(): pass def _down_test(): """Bring Okuna down""" logger.error('⬇️ Bringing the Okuna test services down...') subprocess.run(["docker-compose", "-f", "docker-compose-test-services-only.yml", "down"]) def _down_full(): """Bring Okuna down""" logger.error('⬇️ Bringing the whole of Okuna down...') subprocess.run(["docker-compose", "-f", "docker-compose-full.yml", "down"]) def _down_services_only(): """Bring Okuna down""" logger.error('⬇️ Bringing the Okuna services down...') subprocess.run(["docker-compose", "-f", "docker-compose-services-only.yml", "down"]) @click.command() def down_services_only(): _down_services_only() @click.command() def down_full(): _down_full() @click.command() def up_full(): """Bring the whole of Okuna up""" _print_okuna_logo() _ensure_has_required_cli_config_files() _copy_requirements_txt_to_docker_images_dir() logger.info('⬆️ Bringing the whole of Okuna up...') atexit.register(_down_full) subprocess.run(["docker-compose", "-f", "docker-compose-full.yml", "up", "-d", "-V"]) okuna_api_address = 'domain' okuna_api_port = 80 _wait_until_api_is_running(address=okuna_api_address, port=okuna_api_port) 
_ensure_was_bootstrapped(is_local_api=False) logger.info(' Okuna is live at http://%s:%s.' % (okuna_api_address, okuna_api_port)) subprocess.run(["docker-compose", "-f", "docker-compose-full.yml", "logs", "--follow", "--tail=0", "webserver"]) input() @click.command() def up_services_only(): """Bring only the Okuna services up. API is up to you.""" _print_okuna_logo() _ensure_has_required_cli_config_files() _copy_requirements_txt_to_docker_images_dir() logger.info('⬆️ Bringing only the Okuna services up...') atexit.register(_down_services_only) subprocess.run(["docker-compose", "-f", "docker-compose-services-only.yml", "up", "-d", "-V"]) _ensure_was_bootstrapped(is_local_api=True) logger.info(' Okuna services are up') subprocess.run(["docker-compose", "-f", "docker-compose-services-only.yml", "logs", "--follow"]) input() @click.command() def down_test(): _down_test() @click.command() def up_test(): """Bring the Okuna test services up""" _print_okuna_logo() _ensure_has_required_cli_config_files() logger.info('⬆️ Bringing the Okuna test services up...') atexit.register(_down_test) subprocess.run(["docker-compose", "-f", "docker-compose-test-services-only.yml", "up", "-d", "-V"]) logger.info(' Okuna tests services are live') subprocess.run( ["docker-compose", "-f", "docker-compose-test-services-only.yml", "logs", "--follow", "--tail=0"]) input() @click.command() def build_full(): """Rebuild Okuna services""" _ensure_has_required_cli_config_files() logger.info('‍♀️ Rebuilding Okuna full services...') _copy_requirements_txt_to_docker_images_dir() subprocess.run(["docker-compose", "-f", "docker-compose-full.yml", "build"]) @click.command() def build_services_only(): """Rebuild Okuna services""" _ensure_has_required_cli_config_files() logger.info('‍♀️ Rebuilding only Okuna services...') _copy_requirements_txt_to_docker_images_dir() subprocess.run(["docker-compose", "-f", "docker-compose-services-only.yml", "build"]) @click.command() def status(): """Get Okuna status""" logger.info('️‍♂️ Retrieving services status...') subprocess.run(["docker-compose", "ps"]) @click.command() def clean(): """Bootstrap Okuna""" _clean() cli.add_command(up_full) cli.add_command(down_full) cli.add_command(up_test) cli.add_command(down_test) cli.add_command(up_services_only) cli.add_command(down_services_only) cli.add_command(build_full) cli.add_command(build_services_only) cli.add_command(clean) cli.add_command(status) if __name__ == '__main__': cli() I checked that the def status() isn't working as well which is supposed to check the running docker containers as defined in docker-compose.env and show results. I can se following error when I try: python3.9 okuna-cli.py status Can't find a suitable configuration file in this directory or any parent.Are you in the right directory?Supported filenames: docker-compose.yml, docker-compose.yaml, compose.yml, compose.yaml When i do docker-compose -f docker-compose-full.yml up I have the following warning displayed : Aborted connection 3 to db: 'unconnected' user: 'unauthenticated' host: '172.16.16.2' (This connection closed normally without authentication) Aborted connection 4 to db: 'unconnected' user: 'unauthenticated' host: '172.16.16.3' (This connection closed normally without authentication) EDIT : The above Warning disappears after downgrading Mariadb version to 10.2 I'm getting the 2 additional warnings as well. 
This is despite running inside a virtual environment and I've done everything using only pip3 without sudo: 1.The directory '/root/.cache/pip' or its parent directory is not owned or is not writable by the current user. The cache has been disabled. Check the permissions and owner of that directory. If executing pip with sudo, you should use sudo's -H flag 2.Running pip as the 'root' user can result in broken permissions and conflicting behaviour with the system package manager. It is recommended to use a virtual environment instead: https://pip.pypa.io/warnings/venv My docker-compose: version: '3' services: webserver: container_name: okuna-api build: dockerfile: Dockerfile context: ./.docker/api privileged: true extra_hosts: - db.okuna:172.16.16.4 - redis.okuna:172.16.16.5 volumes: - ./:/opt/okuna-api-core - ./.docker-cache/pip:/root/.cache/pip ports: - 80:80 working_dir: /opt/okuna-api-core networks: okuna: ipv4_address: 172.16.16.1 depends_on: - db - redis env_file: - .docker-compose.env worker: container_name: okuna-worker build: dockerfile: Dockerfile context: ./.docker/worker privileged: true extra_hosts: - db.okuna:172.16.16.4 - redis.okuna:172.16.16.5 volumes: - ./:/opt/okuna-api-core - ./.docker-cache/pip:/root/.cache/pip working_dir: /opt/okuna-api-core networks: okuna: ipv4_address: 172.16.16.2 depends_on: - webserver env_file: - .docker-compose.env scheduler: container_name: okuna-scheduler build: dockerfile: Dockerfile context: ./.docker/scheduler privileged: true extra_hosts: - db.okuna:172.16.16.4 - redis.okuna:172.16.16.5 volumes: - ./:/opt/okuna-api-core - ./.docker-cache/pip:/root/.cache/pip working_dir: /opt/okuna-api-core networks: okuna: ipv4_address: 172.16.16.3 depends_on: - webserver env_file: - .docker-compose.env db: image: mariadb:latest hostname: db.okuna volumes: - mariadb:/var/lib/mysql ports: - 3306 privileged: false networks: okuna: ipv4_address: 172.16.16.4 command: --character-set-server=utf8 --collation-server=utf8_unicode_ci env_file: - .docker-compose.env redis: image: bitnami/redis:latest privileged: false ports: - 6379 networks: okuna: ipv4_address: 172.16.16.5 env_file: - .docker-compose.env volumes: - redisdb:/bitnami/redis/data volumes: mariadb: redisdb: networks: okuna: ipam: driver: default config: - subnet: "172.16.16.0/16" my docker-compose.env : # Variable specifying execution environment # Required always. # Possible values: production,development,acceptance, test ENVIRONMENT=development # ============= START NON-ENV SPECIFIC VARIABLES ============= # # [NAME] ALLOWED_HOSTS # [DESCRIPTION] Django variable specifying allowed hosts. # [REQUIRED][PRODUCTION] # [MORE] https://docs.djangoproject.com/en/2.1/ref/settings/#allowed-hosts #ALLOWED_HOSTS=www.openbook.social # [NAME] SECRET_KEY # [DESCRIPTION] Django variable to provide cryptographic signing. If using okuna-cli, obtained from .okuna-cli.json # [REQUIRED][ALWAYS] # [MORE] https://docs.djangoproject.com/en/2.1/ref/settings/#secret-key SECRET_KEY=949m="long passwrod generated here" # [NAME] JWT_ALGORITHM # [DESCRIPTION] Django variable to provide cryptographic signing. # [REQUIRED][ALWAYS] # [MORE] https://docs.djangoproject.com/en/2.1/ref/settings/#secret-key JWT_ALGORITHM=HS256 # [NAME] MEDIA_ROOT # [DESCRIPTION] Absolute filesystem path to the directory that will hold user-uploaded files. 
# [MORE] https://docs.djangoproject.com/en/2.1/ref/settings/#media-root # [OPTIONAL=./media] # MEDIA_ROOT= # [NAME] MEDIA_URL # [DESCRIPTION] URL that handles the media served from MEDIA_ROOT, used for managing stored files. It must end in a slash if set # [MORE] https://docs.djangoproject.com/en/2.1/ref/settings/#media-url # [OPTIONAL=/media/] # MEDIA_URL= # [GROUP] SQL Database Configuration # [DESCRIPTION] The SQL database configuration # [REQUIRED][ALWAYS] RDS_DB_NAME=okuna RDS_USERNAME=root RDS_HOSTNAME=db.okuna RDS_PORT=3306 RDS_HOSTNAME_READER=db.okuna RDS_HOSTNAME_WRITER=db.okuna #[NAME] RDS_PASSWORD # [DESCRIPTION] The password for the SQL Database. If using okuna-cli, obtained from .okuna-cli.json RDS_PASSWORD=long passwrod generated here # [GROUP] Redis Database configuration Configuration # [DESCRIPTION] The redis database configuration # [REQUIRED][ALWAYS] REDIS_HOST=redis.okuna REDIS_PORT=6379 #[NAME] REDIS_PASSSWORD # [DESCRIPTION] The password for the REDIS Database. REDIS_PASSWORD=long password generated here # [GROUP] Top posts criteria # [DESCRIPTION] The criteria under which posts will be added to the Explore/Top posts section of the app # [OPTIONAL=2] # MIN_UNIQUE_TOP_POST_REACTIONS_COUNT= # MIN_UNIQUE_TOP_POST_COMMENTS_COUNT= # [NAME] NEW_USER_SUGGESTED_COMMUNITIES # [DESCRIPTION] The ids of the communities to be suggested to a new user # [OPTIONAL=1] # NEW_USER_SUGGESTED_COMMUNITIES=1,1310,216 # [GROUP] Allowed media sizes # [DESCRIPTION] The criteria under which posts will be added to the Explore/Top posts section of the app # [OPTIONAL] # POST_MEDIA_MAX_SIZE=30485760 # PROFILE_AVATAR_MAX_SIZE=10485760 # PROFILE_COVER_MAX_SIZE=10485760 # COMMUNITY_AVATAR_MAX_SIZE=10485760 # COMMUNITY_COVER_MAX_SIZE=10485760 # [NAME] MODERATORS_COMMUNITY_NAME # [DESCRIPTION] The community which when joined, will become global moderators # [OPTIONAL=mods] # MODERATORS_COMMUNITY_NAME= # ============= END NON-ENV SPECIFIC VARIABLES ============= # # ============= START DOCKER COMPOSE SPECIFIC VARIABLES ============= # # [GROUP] Mysql Docker Image env vars # [DESCRIPTION] This must match the RDS_PASSWORD AND RDS_DATABASE env vars on top # [REQUIRED][ALWAYS] MYSQL_ROOT_PASSWORD=long password generated here MYSQL_DATABASE=okuna # [NAME] WAIT_HOSTS # [DESCRIPTION] The hosts that the Kosmos API should wait for # [REQUIRED] WAIT_HOSTS:db.okuna:3306 # ============= END DOCKER COMPOSE SPECIFIC VARIABLES ============= # This is despite the configuration files are intact and in the right place. Help appreciated. A: It looks like the issue is that you are using Python 3.9, which is not yet supported by the Okuna API. The requirements.txt file in the root directory of the repository specifies that the API supports Python 3.6, 3.7, and 3.8. You can try installing one of these versions of Python and running the okuna-cli.py script again. A: It looks like the up-full command is supposed to launch a local development server for the Okuna API. The server is launched using Docker, and the okuna-cli.py script appears to be waiting for the server to start up before proceeding. There are a few potential reasons why the server may not be starting up properly: The Docker containers for the Okuna API may not be starting up properly. You can check the status of the Docker containers by running the docker ps command. If the containers are not running, try starting them manually using the docker-compose up command. The Okuna API server may not be binding to the correct IP address or port. 
By default, the server is supposed to bind to 0.0.0.0:8000, but this can be configured using the OKUNA_API_BIND_IP and OKUNA_API_BIND_PORT environment variables. You can check the server logs to see if it is binding to the correct address and port. There may be another process running on the same IP address and port that is preventing the Okuna API server from binding. You can use the netstat -tulpn command to see which processes are listening on which ports on your system. If another process is listening on the same IP address and port as the Okuna API server, you will need to either stop that process or reconfigure the Okuna API server to bind to a different IP address and port. Alternatively if those items don't fix the issue: It may be due to the requests library not being able to make a connection to the server. This can happen for a number of reasons, but one possible cause is that the server is not running or is not accessible from the machine where you are running the okuna-cli.py script. One way to troubleshoot this issue is to try running the okuna-cli.py script with the --verbose flag, which will enable more detailed logging. This can help you see what is happening behind the scenes when the script is stuck at "Waiting for server to come up...". Another thing you can try is to manually check if the server is running and accessible by making a request to the server's URL using the curl command. For example: curl http://127.0.0.1:8000 If the server is running and accessible, you should see a response from the server. If you don't see a response, it could mean that the server is not running or is not accessible from your machine. If the server is running and accessible, but you are still encountering issues when running the okuna-cli.py script, you may need to further troubleshoot the issue by looking at the logs for the server and the script itself. This will likely require some additional debugging, but it can help you identify the root cause of the issue. Overall, the best approach for solving this issue will depend on your specific setup and the details of the problem
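Following the suggestion in the answer above to probe the server directly, a minimal sketch of a bounded health check (my own, not part of okuna-cli.py): it reuses the same requests library and the /health/ route and port 80 that the script already polls, but gives up after a deadline instead of spinning on "Waiting for server to come up...". The address 127.0.0.1 is an assumption for local testing; note that the script as quoted polls the literal host 'domain', and if that name does not resolve to the machine publishing port 80, the wait loop can never succeed, which would match the symptom described.

import time
import requests

def wait_for_api(address: str, port: int, timeout: float = 180.0, interval: float = 2.0) -> bool:
    """Poll http://<address>:<port>/health/ until it returns 200 or the deadline passes."""
    url = f"http://{address}:{port}/health/"
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        try:
            if requests.get(url, timeout=5).status_code == 200:
                return True
        except requests.RequestException:
            pass  # webserver container not accepting connections yet
        time.sleep(interval)
    return False

if __name__ == "__main__":
    if wait_for_api("127.0.0.1", 80):
        print("Okuna API is healthy")
    else:
        print("Gave up waiting - inspect: docker-compose -f docker-compose-full.yml logs webserver")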
Django rest API : Python command-line .py execution: Stuck at a certain place
I'm trying to deploy the following Django-rest api on gcp ubuntu 22.0.4 using python 3.9. https://github.com/OkunaOrg/okuna-api The entire setup is supposed to be done and get setup using a single command : python3.9 okuna-cli.py up-full The execution seems stuck at "Waiting for server to come up..." and doesn't proceed ahead. The setup should complete by stating "Okuna is live at "domain". Another important aspect of the setup is the 5 docker containers are running and working fine when i run the py file. I'm even able to access the database after creating a superuser. The code is as follows : import random import time import click import subprocess import colorlog import logging import os.path from shutil import copyfile import json import atexit import os, errno import requests from halo import Halo handler = colorlog.StreamHandler() handler.setFormatter(colorlog.ColoredFormatter( '%(log_color)s%(name)s -> %(message)s')) logger = colorlog.getLogger('') logger.addHandler(handler) logger.setLevel(level=logging.DEBUG) current_dir = os.path.dirname(__file__) OKUNA_CLI_CONFIG_FILE = os.path.join(current_dir, '.okuna-cli.json') OKUNA_CLI_CONFIG_FILE_TEMPLATE = os.path.join(current_dir, 'templates/.okuna-cli.json') LOCAL_API_ENV_FILE = os.path.join(current_dir, '.env') LOCAL_API_ENV_FILE_TEMPLATE = os.path.join(current_dir, 'templates/.env') DOCKER_COMPOSE_ENV_FILE = os.path.join(current_dir, '.docker-compose.env') DOCKER_COMPOSE_ENV_FILE_TEMPLATE = os.path.join(current_dir, 'templates/.docker-compose.env') REQUIREMENTS_TXT_FILE = os.path.join(current_dir, 'requirements.txt') DOCKER_API_IMAGE_REQUIREMENTS_TXT_FILE = os.path.join(current_dir, '.docker', 'api', 'requirements.txt') DOCKER_WORKER_IMAGE_REQUIREMENTS_TXT_FILE = os.path.join(current_dir, '.docker', 'worker', 'requirements.txt') DOCKER_SCHEDULER_IMAGE_REQUIREMENTS_TXT_FILE = os.path.join(current_dir, '.docker', 'scheduler', 'requirements.txt') DOCKER_API_TEST_IMAGE_REQUIREMENTS_TXT_FILE = os.path.join(current_dir, '.docker', 'api-test', 'requirements.txt') CONTEXT_SETTINGS = dict( default_map={} ) random_generator = random.SystemRandom() def _remove_file_silently(filename): try: os.remove(filename) except OSError as e: # this would be "except OSError, e:" before Python 2.6 if e.errno != errno.ENOENT: # errno.ENOENT = no such file or directory raise # re-raise exception if a different error occurred def _get_random_string(length=12, allowed_chars='abcdefghijklmnopqrstuvwxyz' 'ABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789'): """ Return a securely generated random string. The default length of 12 with the a-z, A-Z, 0-9 character set returns a 71-bit value. log_2((26+26+10)^12) =~ 71 bits """ return ''.join(random.choice(allowed_chars) for i in range(length)) def _get_django_secret_key(): """ Return a 50 character random string usable as a SECRET_KEY setting value. 
""" chars = 'abcdefghijklmnopqrstuvwxyz0123456789!@#$%^&*(-_=+)' return _get_random_string(50, chars) def _get_mysql_password(): return _get_random_string(64) def _get_redis_password(): return _get_random_string(128) def _copy_requirements_txt_to_docker_images_dir(): copyfile(REQUIREMENTS_TXT_FILE, DOCKER_API_IMAGE_REQUIREMENTS_TXT_FILE) copyfile(REQUIREMENTS_TXT_FILE, DOCKER_WORKER_IMAGE_REQUIREMENTS_TXT_FILE) copyfile(REQUIREMENTS_TXT_FILE, DOCKER_SCHEDULER_IMAGE_REQUIREMENTS_TXT_FILE) def _check_okuna_api_is_running(address, port): # Create a TCP socket try: response = requests.get('http://%s:%s/health/' % (address, port)) response_status = response.status_code return response_status == 200 except requests.ConnectionError as e: return False def _wait_until_api_is_running(address, port, message='Waiting for server to come up...', sleep=None): spinner = Halo(text=message, spinner='dots') spinner.start() if sleep: time.sleep(sleep) is_running = _check_okuna_api_is_running(address=address, port=port) while not is_running: is_running = _check_okuna_api_is_running(address=address, port=port) spinner.stop() def _clean(): """ Cleans everything that the okuna-cli has created. Docker volumes, config files, everything. :return: """ logger.info(' Cleaning up database') subprocess.run(["docker", "volume", "rm", "okuna-api_mariadb"]) subprocess.run(["docker", "volume", "rm", "okuna-api_redisdb"]) logger.info(' Cleaning up config files') _remove_file_silently(LOCAL_API_ENV_FILE) _remove_file_silently(DOCKER_COMPOSE_ENV_FILE) _remove_file_silently(OKUNA_CLI_CONFIG_FILE) logger.info('✅ Clean up done!') def _print_okuna_logo(): print(r""" ____ _ / __ \| | | | | | | ___ _ _ __ __ _ | | | | |/ | | | | '_ \ / _` | | |__| | <| |_| | | | | (_| | \____/|_|\_\\__,_|_| |_|\__,_| """) def _file_exists(filename): return os.path.exists(filename) and os.path.isfile(filename) def _replace_in_file(filename, texts): with open(filename, 'r') as file: filedata = file.read() # Replace the target string for key in texts: value = texts[key] filedata = filedata.replace(key, value) # Write the file out again with open(filename, 'w') as file: file.write(filedata) def _ensure_has_local_api_environment_file(okuna_cli_config): if _file_exists(LOCAL_API_ENV_FILE): return logger.info('Local API .env file does not exist. Creating %s' % LOCAL_API_ENV_FILE) if not _file_exists(LOCAL_API_ENV_FILE_TEMPLATE): raise Exception('Local API .env file template did not exist') copyfile(LOCAL_API_ENV_FILE_TEMPLATE, LOCAL_API_ENV_FILE) _replace_in_file(LOCAL_API_ENV_FILE, { "{{DJANGO_SECRET_KEY}}": okuna_cli_config['djangoSecretKey'], "{{SQL_PASSWORD}}": okuna_cli_config['sqlPassword'], "{{REDIS_PASSWORD}}": okuna_cli_config['redisPassword'], }) def _ensure_has_docker_compose_api_environment_file(okuna_cli_config): if _file_exists(DOCKER_COMPOSE_ENV_FILE): return logger.info('Docker compose env file does not exist. 
Creating %s' % DOCKER_COMPOSE_ENV_FILE) if not _file_exists(DOCKER_COMPOSE_ENV_FILE_TEMPLATE): raise Exception('Docker compose env file template did not exist') copyfile(DOCKER_COMPOSE_ENV_FILE_TEMPLATE, DOCKER_COMPOSE_ENV_FILE) _replace_in_file(DOCKER_COMPOSE_ENV_FILE, { "{{DJANGO_SECRET_KEY}}": okuna_cli_config['djangoSecretKey'], "{{SQL_PASSWORD}}": okuna_cli_config['sqlPassword'], "{{REDIS_PASSWORD}}": okuna_cli_config['redisPassword'], }) def _ensure_has_okuna_config_file(): if _file_exists(OKUNA_CLI_CONFIG_FILE): return django_secret_key = _get_django_secret_key() mysql_password = _get_mysql_password() redis_password = _get_redis_password() logger.info('Generated DJANGO_SECRET_KEY=%s' % django_secret_key) logger.info('Generated SQL_PASSWORD=%s' % mysql_password) logger.info('Generated REDIS_PASSWORD=%s' % redis_password) logger.info('Config file does not exist. Creating %s' % OKUNA_CLI_CONFIG_FILE) if not _file_exists(OKUNA_CLI_CONFIG_FILE_TEMPLATE): raise Exception('Config file template did not exists') copyfile(OKUNA_CLI_CONFIG_FILE_TEMPLATE, OKUNA_CLI_CONFIG_FILE) _replace_in_file(OKUNA_CLI_CONFIG_FILE, { "{{DJANGO_SECRET_KEY}}": django_secret_key, "{{SQL_PASSWORD}}": mysql_password, "{{REDIS_PASSWORD}}": redis_password, }) def _bootstrap(is_local_api): logger.info(' Bootstrapping Okuna with some data') if is_local_api: subprocess.run(["./utils/scripts/bootstrap_development_data.sh"]) else: subprocess.run(["docker-compose", "-f", "docker-compose-full.yml", "exec", "webserver", "/bootstrap_development_data.sh"]) def _ensure_has_required_cli_config_files(): _ensure_has_okuna_config_file() with open(OKUNA_CLI_CONFIG_FILE, 'r+') as okuna_cli_config_file: okuna_cli_config = json.load(okuna_cli_config_file) _ensure_has_docker_compose_api_environment_file(okuna_cli_config=okuna_cli_config) _ensure_has_local_api_environment_file(okuna_cli_config=okuna_cli_config) def _ensure_was_bootstrapped(is_local_api): with open(OKUNA_CLI_CONFIG_FILE, 'r+') as okuna_cli_config_file: okuna_cli_config = json.load(okuna_cli_config_file) if okuna_cli_config['bootstrapped']: return logger.info('Okuna was not bootstrapped.') _bootstrap(is_local_api=is_local_api) okuna_cli_config['bootstrapped'] = True okuna_cli_config_file.seek(0) json.dump(okuna_cli_config, okuna_cli_config_file, indent=4) okuna_cli_config_file.truncate() logger.info('Okuna was bootstrapped.') @click.group() def cli(): pass def _down_test(): """Bring Okuna down""" logger.error('⬇️ Bringing the Okuna test services down...') subprocess.run(["docker-compose", "-f", "docker-compose-test-services-only.yml", "down"]) def _down_full(): """Bring Okuna down""" logger.error('⬇️ Bringing the whole of Okuna down...') subprocess.run(["docker-compose", "-f", "docker-compose-full.yml", "down"]) def _down_services_only(): """Bring Okuna down""" logger.error('⬇️ Bringing the Okuna services down...') subprocess.run(["docker-compose", "-f", "docker-compose-services-only.yml", "down"]) @click.command() def down_services_only(): _down_services_only() @click.command() def down_full(): _down_full() @click.command() def up_full(): """Bring the whole of Okuna up""" _print_okuna_logo() _ensure_has_required_cli_config_files() _copy_requirements_txt_to_docker_images_dir() logger.info('⬆️ Bringing the whole of Okuna up...') atexit.register(_down_full) subprocess.run(["docker-compose", "-f", "docker-compose-full.yml", "up", "-d", "-V"]) okuna_api_address = 'domain' okuna_api_port = 80 _wait_until_api_is_running(address=okuna_api_address, port=okuna_api_port) 
_ensure_was_bootstrapped(is_local_api=False) logger.info(' Okuna is live at http://%s:%s.' % (okuna_api_address, okuna_api_port)) subprocess.run(["docker-compose", "-f", "docker-compose-full.yml", "logs", "--follow", "--tail=0", "webserver"]) input() @click.command() def up_services_only(): """Bring only the Okuna services up. API is up to you.""" _print_okuna_logo() _ensure_has_required_cli_config_files() _copy_requirements_txt_to_docker_images_dir() logger.info('⬆️ Bringing only the Okuna services up...') atexit.register(_down_services_only) subprocess.run(["docker-compose", "-f", "docker-compose-services-only.yml", "up", "-d", "-V"]) _ensure_was_bootstrapped(is_local_api=True) logger.info(' Okuna services are up') subprocess.run(["docker-compose", "-f", "docker-compose-services-only.yml", "logs", "--follow"]) input() @click.command() def down_test(): _down_test() @click.command() def up_test(): """Bring the Okuna test services up""" _print_okuna_logo() _ensure_has_required_cli_config_files() logger.info('⬆️ Bringing the Okuna test services up...') atexit.register(_down_test) subprocess.run(["docker-compose", "-f", "docker-compose-test-services-only.yml", "up", "-d", "-V"]) logger.info(' Okuna tests services are live') subprocess.run( ["docker-compose", "-f", "docker-compose-test-services-only.yml", "logs", "--follow", "--tail=0"]) input() @click.command() def build_full(): """Rebuild Okuna services""" _ensure_has_required_cli_config_files() logger.info('‍♀️ Rebuilding Okuna full services...') _copy_requirements_txt_to_docker_images_dir() subprocess.run(["docker-compose", "-f", "docker-compose-full.yml", "build"]) @click.command() def build_services_only(): """Rebuild Okuna services""" _ensure_has_required_cli_config_files() logger.info('‍♀️ Rebuilding only Okuna services...') _copy_requirements_txt_to_docker_images_dir() subprocess.run(["docker-compose", "-f", "docker-compose-services-only.yml", "build"]) @click.command() def status(): """Get Okuna status""" logger.info('️‍♂️ Retrieving services status...') subprocess.run(["docker-compose", "ps"]) @click.command() def clean(): """Bootstrap Okuna""" _clean() cli.add_command(up_full) cli.add_command(down_full) cli.add_command(up_test) cli.add_command(down_test) cli.add_command(up_services_only) cli.add_command(down_services_only) cli.add_command(build_full) cli.add_command(build_services_only) cli.add_command(clean) cli.add_command(status) if __name__ == '__main__': cli() I checked that the def status() isn't working as well which is supposed to check the running docker containers as defined in docker-compose.env and show results. I can se following error when I try: python3.9 okuna-cli.py status Can't find a suitable configuration file in this directory or any parent.Are you in the right directory?Supported filenames: docker-compose.yml, docker-compose.yaml, compose.yml, compose.yaml When i do docker-compose -f docker-compose-full.yml up I have the following warning displayed : Aborted connection 3 to db: 'unconnected' user: 'unauthenticated' host: '172.16.16.2' (This connection closed normally without authentication) Aborted connection 4 to db: 'unconnected' user: 'unauthenticated' host: '172.16.16.3' (This connection closed normally without authentication) EDIT : The above Warning disappears after downgrading Mariadb version to 10.2 I'm getting the 2 additional warnings as well. 
This is despite running inside a virtual environment and I've done everything using only pip3 without sudo: 1.The directory '/root/.cache/pip' or its parent directory is not owned or is not writable by the current user. The cache has been disabled. Check the permissions and owner of that directory. If executing pip with sudo, you should use sudo's -H flag 2.Running pip as the 'root' user can result in broken permissions and conflicting behaviour with the system package manager. It is recommended to use a virtual environment instead: https://pip.pypa.io/warnings/venv My docker-compose: version: '3' services: webserver: container_name: okuna-api build: dockerfile: Dockerfile context: ./.docker/api privileged: true extra_hosts: - db.okuna:172.16.16.4 - redis.okuna:172.16.16.5 volumes: - ./:/opt/okuna-api-core - ./.docker-cache/pip:/root/.cache/pip ports: - 80:80 working_dir: /opt/okuna-api-core networks: okuna: ipv4_address: 172.16.16.1 depends_on: - db - redis env_file: - .docker-compose.env worker: container_name: okuna-worker build: dockerfile: Dockerfile context: ./.docker/worker privileged: true extra_hosts: - db.okuna:172.16.16.4 - redis.okuna:172.16.16.5 volumes: - ./:/opt/okuna-api-core - ./.docker-cache/pip:/root/.cache/pip working_dir: /opt/okuna-api-core networks: okuna: ipv4_address: 172.16.16.2 depends_on: - webserver env_file: - .docker-compose.env scheduler: container_name: okuna-scheduler build: dockerfile: Dockerfile context: ./.docker/scheduler privileged: true extra_hosts: - db.okuna:172.16.16.4 - redis.okuna:172.16.16.5 volumes: - ./:/opt/okuna-api-core - ./.docker-cache/pip:/root/.cache/pip working_dir: /opt/okuna-api-core networks: okuna: ipv4_address: 172.16.16.3 depends_on: - webserver env_file: - .docker-compose.env db: image: mariadb:latest hostname: db.okuna volumes: - mariadb:/var/lib/mysql ports: - 3306 privileged: false networks: okuna: ipv4_address: 172.16.16.4 command: --character-set-server=utf8 --collation-server=utf8_unicode_ci env_file: - .docker-compose.env redis: image: bitnami/redis:latest privileged: false ports: - 6379 networks: okuna: ipv4_address: 172.16.16.5 env_file: - .docker-compose.env volumes: - redisdb:/bitnami/redis/data volumes: mariadb: redisdb: networks: okuna: ipam: driver: default config: - subnet: "172.16.16.0/16" my docker-compose.env : # Variable specifying execution environment # Required always. # Possible values: production,development,acceptance, test ENVIRONMENT=development # ============= START NON-ENV SPECIFIC VARIABLES ============= # # [NAME] ALLOWED_HOSTS # [DESCRIPTION] Django variable specifying allowed hosts. # [REQUIRED][PRODUCTION] # [MORE] https://docs.djangoproject.com/en/2.1/ref/settings/#allowed-hosts #ALLOWED_HOSTS=www.openbook.social # [NAME] SECRET_KEY # [DESCRIPTION] Django variable to provide cryptographic signing. If using okuna-cli, obtained from .okuna-cli.json # [REQUIRED][ALWAYS] # [MORE] https://docs.djangoproject.com/en/2.1/ref/settings/#secret-key SECRET_KEY=949m="long passwrod generated here" # [NAME] JWT_ALGORITHM # [DESCRIPTION] Django variable to provide cryptographic signing. # [REQUIRED][ALWAYS] # [MORE] https://docs.djangoproject.com/en/2.1/ref/settings/#secret-key JWT_ALGORITHM=HS256 # [NAME] MEDIA_ROOT # [DESCRIPTION] Absolute filesystem path to the directory that will hold user-uploaded files. 
# [MORE] https://docs.djangoproject.com/en/2.1/ref/settings/#media-root # [OPTIONAL=./media] # MEDIA_ROOT= # [NAME] MEDIA_URL # [DESCRIPTION] URL that handles the media served from MEDIA_ROOT, used for managing stored files. It must end in a slash if set # [MORE] https://docs.djangoproject.com/en/2.1/ref/settings/#media-url # [OPTIONAL=/media/] # MEDIA_URL= # [GROUP] SQL Database Configuration # [DESCRIPTION] The SQL database configuration # [REQUIRED][ALWAYS] RDS_DB_NAME=okuna RDS_USERNAME=root RDS_HOSTNAME=db.okuna RDS_PORT=3306 RDS_HOSTNAME_READER=db.okuna RDS_HOSTNAME_WRITER=db.okuna #[NAME] RDS_PASSWORD # [DESCRIPTION] The password for the SQL Database. If using okuna-cli, obtained from .okuna-cli.json RDS_PASSWORD=long passwrod generated here # [GROUP] Redis Database configuration Configuration # [DESCRIPTION] The redis database configuration # [REQUIRED][ALWAYS] REDIS_HOST=redis.okuna REDIS_PORT=6379 #[NAME] REDIS_PASSSWORD # [DESCRIPTION] The password for the REDIS Database. REDIS_PASSWORD=long password generated here # [GROUP] Top posts criteria # [DESCRIPTION] The criteria under which posts will be added to the Explore/Top posts section of the app # [OPTIONAL=2] # MIN_UNIQUE_TOP_POST_REACTIONS_COUNT= # MIN_UNIQUE_TOP_POST_COMMENTS_COUNT= # [NAME] NEW_USER_SUGGESTED_COMMUNITIES # [DESCRIPTION] The ids of the communities to be suggested to a new user # [OPTIONAL=1] # NEW_USER_SUGGESTED_COMMUNITIES=1,1310,216 # [GROUP] Allowed media sizes # [DESCRIPTION] The criteria under which posts will be added to the Explore/Top posts section of the app # [OPTIONAL] # POST_MEDIA_MAX_SIZE=30485760 # PROFILE_AVATAR_MAX_SIZE=10485760 # PROFILE_COVER_MAX_SIZE=10485760 # COMMUNITY_AVATAR_MAX_SIZE=10485760 # COMMUNITY_COVER_MAX_SIZE=10485760 # [NAME] MODERATORS_COMMUNITY_NAME # [DESCRIPTION] The community which when joined, will become global moderators # [OPTIONAL=mods] # MODERATORS_COMMUNITY_NAME= # ============= END NON-ENV SPECIFIC VARIABLES ============= # # ============= START DOCKER COMPOSE SPECIFIC VARIABLES ============= # # [GROUP] Mysql Docker Image env vars # [DESCRIPTION] This must match the RDS_PASSWORD AND RDS_DATABASE env vars on top # [REQUIRED][ALWAYS] MYSQL_ROOT_PASSWORD=long password generated here MYSQL_DATABASE=okuna # [NAME] WAIT_HOSTS # [DESCRIPTION] The hosts that the Kosmos API should wait for # [REQUIRED] WAIT_HOSTS:db.okuna:3306 # ============= END DOCKER COMPOSE SPECIFIC VARIABLES ============= # This is despite the configuration files are intact and in the right place. Help appreciated.
[ "It looks like the issue is that you are using Python 3.9, which is not yet supported by the Okuna API. The requirements.txt file in the root directory of the repository specifies that the API supports Python 3.6, 3.7, and 3.8. You can try installing one of these versions of Python and running the okuna-cli.py script again.\n", "It looks like the up-full command is supposed to launch a local development server for the Okuna API. The server is launched using Docker, and the okuna-cli.py script appears to be waiting for the server to start up before proceeding.\nThere are a few potential reasons why the server may not be starting up properly:\n\nThe Docker containers for the Okuna API may not be starting up\nproperly. You can check the status of the Docker containers by\nrunning the docker ps command. If the containers are not running, try\nstarting them manually using the docker-compose up command.\nThe Okuna API server may not be binding to the correct IP address or\nport. By default, the server is supposed to bind to 0.0.0.0:8000, but\nthis can be configured using the OKUNA_API_BIND_IP and\nOKUNA_API_BIND_PORT environment variables. You can check the server\nlogs to see if it is binding to the correct address and port.\nThere may be another process running on the same IP address and port\nthat is preventing the Okuna API server from binding. You can use the\nnetstat -tulpn command to see which processes are listening on which\nports on your system. If another process is listening on the same IP\naddress and port as the Okuna API server, you will need to either\nstop that process or reconfigure the Okuna API server to bind to a\ndifferent IP address and port.\n\nAlternatively if those items don't fix the issue:\nIt may be due to the requests library not being able to make a connection to the server. This can happen for a number of reasons, but one possible cause is that the server is not running or is not accessible from the machine where you are running the okuna-cli.py script.\nOne way to troubleshoot this issue is to try running the okuna-cli.py script with the --verbose flag, which will enable more detailed logging. This can help you see what is happening behind the scenes when the script is stuck at \"Waiting for server to come up...\".\nAnother thing you can try is to manually check if the server is running and accessible by making a request to the server's URL using the curl command. For example:\ncurl http://127.0.0.1:8000\n\nIf the server is running and accessible, you should see a response from the server. If you don't see a response, it could mean that the server is not running or is not accessible from your machine.\nIf the server is running and accessible, but you are still encountering issues when running the okuna-cli.py script, you may need to further troubleshoot the issue by looking at the logs for the server and the script itself. This will likely require some additional debugging, but it can help you identify the root cause of the issue.\nOverall, the best approach for solving this issue will depend on your specific setup and the details of the problem\n" ]
[ 0, 0 ]
[]
[]
[ "django_4.1", "docker", "docker_compose", "python_3.x", "ubuntu" ]
stackoverflow_0074556804_django_4.1_docker_docker_compose_python_3.x_ubuntu.txt
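For reference on the Q&A above: the kind of readiness probe a script like okuna-cli.py performs before declaring the API "live" can be as small as the sketch below. The function name, host and port defaults are illustrative assumptions rather than the project's actual helpers; the point is that if the webserver container never binds the expected port, a loop like this waits forever, which matches the "Waiting for server to come up..." stall described in the question.

import socket
import time

def wait_until_api_is_running(address, port, timeout_s=300):
    """Poll a TCP port until something accepts a connection, or give up."""
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        try:
            # If the container is up and the web server is bound, this succeeds.
            with socket.create_connection((address, port), timeout=2):
                return True
        except OSError:
            time.sleep(2)  # container may still be building or booting
    return False

# e.g. wait_until_api_is_running("127.0.0.1", 80) while `docker-compose ps` shows the webserver running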
Q: Call compound.finance api with parameters I'm trying to simply call the compound.finance api "https://api.compound.finance/api/v2/account" with the parameter max_health. the doc says "If provided, should be given as { "value": "...string formatted number..." }". (https://compound.finance/docs/api#account-service) So I tried 4 methods here below: response = requests.get( 'https://api.compound.finance/api/v2/account', params={ "max_health": "1.0" # method 1 "max_health": {"value":"1.0"} # method 2 "max_health": json.dumps({"value":"1.0"}) # method 3 } ) but it does not work, and I get HTTPError: 500 Server Error: Internal Server Error for url:... Any idea I should format it please? A: They did not update the API docs. You should send a POST request and provide params as a request body. import json import requests url = "https://api.compound.finance/api/v2/account" data = { "max_health": {"value": "1.0"} } response = requests.post(url, data=json.dumps(data)) # <Response [200]> response = response.json() # {'accounts': ...} Edit notes The problem was that the API expects raw JSON so I used json.dumps. A: As Artyom already explained his beautiful answer, indeed their API documentation unfortunately outdated. In addition to his answer I'd like to add that requests library supports json argument that accepts raw JSON arguments starting with requests version 2.4.2. Therefore data=json.dumps(params) is not necessary anymore. See my code below. api_base = "https://api.compound.finance/api/v2/account" params = {'max_health': {'value':'0.95'}, 'min_borrow_value_in_eth': { 'value': '0.002' }, 'page_number':19, } response = requests.post(api_base, json=params).json()
Call compound.finance api with parameters
I'm trying to simply call the compound.finance API "https://api.compound.finance/api/v2/account" with the parameter max_health. The doc says "If provided, should be given as { "value": "...string formatted number..." }" (https://compound.finance/docs/api#account-service). So I tried the methods below: response = requests.get( 'https://api.compound.finance/api/v2/account', params={ "max_health": "1.0" # method 1 "max_health": {"value":"1.0"} # method 2 "max_health": json.dumps({"value":"1.0"}) # method 3 } ) but it does not work, and I get HTTPError: 500 Server Error: Internal Server Error for url:... Any idea how I should format it, please?
[ "They did not update the API docs. You should send a POST request and provide params as a request body.\nimport json\nimport requests\n\nurl = \"https://api.compound.finance/api/v2/account\"\ndata = {\n \"max_health\": {\"value\": \"1.0\"}\n}\n\nresponse = requests.post(url, data=json.dumps(data)) # <Response [200]>\nresponse = response.json() # {'accounts': ...}\n\nEdit notes\nThe problem was that the API expects raw JSON so I used json.dumps.\n", "As Artyom already explained his beautiful answer, indeed their API documentation unfortunately outdated. In addition to his answer I'd like to add that requests library supports json argument that accepts raw JSON arguments starting with requests version 2.4.2. Therefore data=json.dumps(params) is not necessary anymore.\nSee my code below.\napi_base = \"https://api.compound.finance/api/v2/account\"\nparams = {'max_health': {'value':'0.95'},\n 'min_borrow_value_in_eth': { 'value': '0.002' },\n 'page_number':19,\n }\nresponse = requests.post(api_base, json=params).json()\n\n" ]
[ 2, 0 ]
[]
[]
[ "python", "python_requests" ]
stackoverflow_0072715891_python_python_requests.txt
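Building on the answers above, a slightly fuller sketch with basic error handling and page walking is shown below. The "accounts" key comes from the answer's own sample output; the stopping condition (an empty page) and the page_number behaviour are assumptions to verify against the pagination fields the API actually returns.

import requests

API_URL = "https://api.compound.finance/api/v2/account"

def fetch_accounts(max_health="1.0"):
    """Collect accounts across pages; stops when a page comes back empty (assumption)."""
    accounts = []
    page_number = 1
    while True:
        body = {
            "max_health": {"value": max_health},
            "page_number": page_number,
        }
        resp = requests.post(API_URL, json=body, timeout=30)
        resp.raise_for_status()  # surface 4xx/5xx instead of silently parsing an error body
        page = resp.json().get("accounts", [])
        if not page:
            break
        accounts.extend(page)
        page_number += 1
    return accounts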
Q: How to correctly drag the view around the circle? I want the green dot to follow the touch point in a circular path, but it doesn't seem to be doing it right. It seems like there is an unwanted offset somewhere but I can't find it on my own for quite some time. Here is my code: @Preview @Composable fun Test() { val touchPoint = remember { mutableStateOf(Offset.Zero) } Scaffold { Column() { Box(Modifier.height(100.dp).fillMaxWidth().background(Color.Blue)) Layout( modifier = Modifier.aspectRatio(1f).fillMaxSize(), content = { Box( Modifier .size(48.dp) .clip(CircleShape) .background(Color.Green) .pointerInput(Unit) { detectDragGestures( onDrag = { change, dragAmount -> change.consumeAllChanges() touchPoint.value += dragAmount } ) } ) } ) { measurables, constraints -> val dot = measurables.first().measure(constraints.copy(minHeight = 0, minWidth = 0)) val width = constraints.maxWidth val height = constraints.maxHeight val centerX = width / 2 val centerY = height / 2 val lengthFromCenter = width / 2 - dot.width / 2 val touchX = touchPoint.value.x val touchY = touchPoint.value.y layout(width, height) { // I planned to achieve the desired behaviour with the following steps: // 1. Convert cartesian coordinates to polar ones val r = sqrt(touchX.pow(2) + touchY.pow(2)) val angle = atan2(touchY.toDouble(), touchX.toDouble()) // 2. Use fixed polar radius val rFixed = lengthFromCenter // 3. Convert it back to cartesian coordinates val x = rFixed * cos(angle) val y = rFixed * sin(angle) // 4. Layout on screen dot.place( x = (x + centerX - dot.width / 2).roundToInt(), y = (y + centerY - dot.height / 2).roundToInt() ) } } Box(Modifier.fillMaxSize().background(Color.Blue)) } } } I'm definitely missing something but don't know what exactly. What am I doing wrong? A: touchPoint.value += dragAmount Is in pixel values, and you're updating the position of the dot with pixel values, where it requires dp values. 
If you update that with private fun Float.pxToDp(context: Context): Dp = // Float or Int, depends on the value you have, or Double (this / context.resources.displayMetrics.density).dp The amount with which it will be moved, will be smaller and reflect the dragging made by the user A: You can easily achieve this by using some math: @Composable fun CircularView( content: @Composable () -> Unit ) { var middle by remember { mutableStateOf(Offset.Zero) } var size by remember { mutableStateOf(0.dp) } var dragAngle by remember { mutableStateOf(0f) } Canvas(modifier = Modifier.size(size)) { drawCircle( color = Color.Red, center = middle, style = Stroke(1.dp.toPx()) ) } Layout( content = content, modifier = Modifier.pointerInput(true) { detectDragGestures( onDrag = { change, _ -> change.consumeAllChanges() val positionOfDrag = change.position val previousPosition = change.previousPosition dragAngle += atan2( positionOfDrag.x - middle.x, positionOfDrag.y - middle.y ) - atan2( previousPosition.x - middle.x, previousPosition.y - middle.y ) } ) } ) { measurables, constraints -> val placeables = measurables.map { it.measure(constraints) } val layoutWidth = constraints.maxWidth val layoutHeight = constraints.maxHeight layout(layoutWidth, layoutHeight) { val childCount = placeables.size if (childCount == 0) return@layout val middleX = layoutWidth / 2f val middleY = layoutHeight / 2f middle = Offset(middleX, middleY) val angleBetween = 2 * PI / childCount val radius = min( layoutWidth - (placeables.maxByOrNull { it.width }?.width ?: 0), layoutHeight - (placeables.maxByOrNull { it.height }?.height ?: 0) ) / 2 size = (radius * 2).toDp() placeables.forEachIndexed { index, placeable -> val angle = index * angleBetween - PI / 2 - dragAngle val x = middleX + (radius) * cos(angle) - placeable.width / 2f val y = middleY + (radius) * sin(angle) - placeable.height / 2f placeable.placeRelative(x = x.toInt(), y = y.toInt()) } } } } On the calling side: CircularView { repeat(10) { Box( modifier = Modifier .background( Color( red = random.nextInt(255), green = random.nextInt(255), blue = random.nextInt(255) ), shape = CircleShape ) .size(50.dp), contentAlignment = Alignment.Center ) { Text(text = it.toString(), fontSize = 12.sp, color = Color.White) } } }
How to correctly drag the view around the circle?
I want the green dot to follow the touch point in a circular path, but it doesn't seem to be doing it right. It seems like there is an unwanted offset somewhere but I can't find it on my own for quite some time. Here is my code: @Preview @Composable fun Test() { val touchPoint = remember { mutableStateOf(Offset.Zero) } Scaffold { Column() { Box(Modifier.height(100.dp).fillMaxWidth().background(Color.Blue)) Layout( modifier = Modifier.aspectRatio(1f).fillMaxSize(), content = { Box( Modifier .size(48.dp) .clip(CircleShape) .background(Color.Green) .pointerInput(Unit) { detectDragGestures( onDrag = { change, dragAmount -> change.consumeAllChanges() touchPoint.value += dragAmount } ) } ) } ) { measurables, constraints -> val dot = measurables.first().measure(constraints.copy(minHeight = 0, minWidth = 0)) val width = constraints.maxWidth val height = constraints.maxHeight val centerX = width / 2 val centerY = height / 2 val lengthFromCenter = width / 2 - dot.width / 2 val touchX = touchPoint.value.x val touchY = touchPoint.value.y layout(width, height) { // I planned to achieve the desired behaviour with the following steps: // 1. Convert cartesian coordinates to polar ones val r = sqrt(touchX.pow(2) + touchY.pow(2)) val angle = atan2(touchY.toDouble(), touchX.toDouble()) // 2. Use fixed polar radius val rFixed = lengthFromCenter // 3. Convert it back to cartesian coordinates val x = rFixed * cos(angle) val y = rFixed * sin(angle) // 4. Layout on screen dot.place( x = (x + centerX - dot.width / 2).roundToInt(), y = (y + centerY - dot.height / 2).roundToInt() ) } } Box(Modifier.fillMaxSize().background(Color.Blue)) } } } I'm definitely missing something but don't know what exactly. What am I doing wrong?
[ "touchPoint.value += dragAmount\n\nIs in pixel values, and you're updating the position of the dot with pixel values, where it requires dp values. If you update that with\nprivate fun Float.pxToDp(context: Context): Dp = // Float or Int, depends on the value you have, or Double\n (this / context.resources.displayMetrics.density).dp\n\nThe amount with which it will be moved, will be smaller and reflect the dragging made by the user\n", "You can easily achieve this by using some math:\n@Composable\nfun CircularView(\n content: @Composable () -> Unit\n) {\n var middle by remember {\n mutableStateOf(Offset.Zero)\n }\n\n var size by remember {\n mutableStateOf(0.dp)\n }\n\n var dragAngle by remember {\n mutableStateOf(0f)\n }\n\n Canvas(modifier = Modifier.size(size)) {\n drawCircle(\n color = Color.Red,\n center = middle,\n style = Stroke(1.dp.toPx())\n )\n }\n Layout(\n content = content,\n modifier = Modifier.pointerInput(true) {\n detectDragGestures(\n onDrag = { change, _ ->\n change.consumeAllChanges()\n val positionOfDrag = change.position\n val previousPosition = change.previousPosition\n\n dragAngle += atan2(\n positionOfDrag.x - middle.x,\n positionOfDrag.y - middle.y\n ) - atan2(\n previousPosition.x - middle.x,\n previousPosition.y - middle.y\n )\n }\n )\n }\n ) { measurables, constraints ->\n\n val placeables = measurables.map { it.measure(constraints) }\n val layoutWidth = constraints.maxWidth\n val layoutHeight = constraints.maxHeight\n\n layout(layoutWidth, layoutHeight) {\n val childCount = placeables.size\n if (childCount == 0) return@layout\n\n val middleX = layoutWidth / 2f\n val middleY = layoutHeight / 2f\n\n\n middle = Offset(middleX, middleY)\n\n val angleBetween = 2 * PI / childCount\n val radius =\n min(\n layoutWidth - (placeables.maxByOrNull { it.width }?.width ?: 0),\n layoutHeight - (placeables.maxByOrNull { it.height }?.height ?: 0)\n ) / 2\n\n size = (radius * 2).toDp()\n\n placeables.forEachIndexed { index, placeable ->\n val angle = index * angleBetween - PI / 2 - dragAngle\n val x = middleX + (radius) * cos(angle) - placeable.width / 2f\n val y = middleY + (radius) * sin(angle) - placeable.height / 2f\n placeable.placeRelative(x = x.toInt(), y = y.toInt())\n }\n }\n }\n}\n\nOn the calling side:\nCircularView {\nrepeat(10) {\n Box(\n modifier = Modifier\n .background(\n Color(\n red = random.nextInt(255),\n green = random.nextInt(255),\n blue = random.nextInt(255)\n ), shape = CircleShape\n )\n .size(50.dp),\n contentAlignment = Alignment.Center\n\n ) {\n Text(text = it.toString(), fontSize = 12.sp, color = Color.White)\n }\n}\n\n}\n" ]
[ 1, 0 ]
[]
[]
[ "android_jetpack_compose", "drag" ]
stackoverflow_0073098811_android_jetpack_compose_drag.txt
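A more compact variant of the second answer's idea, reduced to the single-dot case from the question: measure the pointer against the layout centre instead of accumulating raw drag deltas from Offset.Zero. This is an untested sketch (standard Compose foundation/ui and kotlin.math imports assumed), not a drop-in replacement for the custom Layout above.

@Composable
fun DotOnCircle(modifier: Modifier = Modifier) {
    // Angle of the dot around the centre, driven by where the finger currently is.
    var angle by remember { mutableStateOf(0.0) }

    BoxWithConstraints(
        modifier = modifier
            .aspectRatio(1f)
            .pointerInput(Unit) {
                detectDragGestures { change, _ ->
                    change.consumeAllChanges()
                    // Measure the pointer against the layout centre, not against (0, 0).
                    val cx = size.width / 2f
                    val cy = size.height / 2f
                    angle = atan2(
                        (change.position.y - cy).toDouble(),
                        (change.position.x - cx).toDouble()
                    )
                }
            }
    ) {
        val dotSize = 48.dp
        val radiusPx = with(LocalDensity.current) { (maxWidth - dotSize).toPx() / 2f }
        Box(
            Modifier
                .align(Alignment.Center)
                .offset {
                    IntOffset(
                        (radiusPx * cos(angle)).roundToInt(),
                        (radiusPx * sin(angle)).roundToInt()
                    )
                }
                .size(dotSize)
                .clip(CircleShape)
                .background(Color.Green)
        )
    }
}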
Q: How can I get Grafana to read a custom metric? Currently my Grafana Dashboard reads system info from the Grafana agent that runs on my machine. I have a script that executes hourly to do some action. If the script executes successfully then it can output that success to an XML file or create a file called "success.txt". If the script fails then it could create a file "fail.txt". How can I get Grafana to check for the presence of a file or a file's content to get it to report back to the dashboard the status, basically a binary result, of a custom metric "Hourly script job" such as success or fail? I've searched the web and found any-json-to-metrics exporter but not sure that'll work. I'd like to avoid hosting a web server that exposes endpoints. I'd like for the Grafana agent to pick up the custom metrics. A: Grafana does not have a built-in way to read files and use their contents as metrics. You would need to write a custom plugin, or write your own exporter for these metrics.
How can I get Grafana to read a custom metric?
Currently my Grafana Dashboard reads system info from the Grafana agent that runs on my machine. I have a script that executes hourly to do some action. If the script executes successfully then it can output that success to an XML file or create a file called "success.txt". If the script fails then it could create a file "fail.txt". How can I get Grafana to check for the presence of a file or a file's content to get it to report back to the dashboard the status, basically a binary result, of a custom metric "Hourly script job" such as success or fail? I've searched the web and found any-json-to-metrics exporter but not sure that'll work. I'd like to avoid hosting a web server that exposes endpoints. I'd like for the Grafana agent to pick up the custom metrics.
[ "Grafana does not have a built-in way to read files and use their contents as metrics. You would need to write a custom plugin, or write your own exporter for these metrics.\n" ]
[ 0 ]
[]
[]
[ "dashboard", "grafana", "json", "metrics" ]
stackoverflow_0074662842_dashboard_grafana_json_metrics.txt
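One concrete way to build such an exporter without hosting a web server yourself, assuming your Grafana Agent runs the node_exporter integration with the textfile collector pointed at a directory (that setup is an assumption, and the path and metric names below are made up for illustration): have the hourly script write a small .prom file and let the agent scrape it.

import time
from pathlib import Path

TEXTFILE_DIR = Path("/var/lib/grafana-agent/textfile")  # must match the collector's configured directory

def report(success):
    """Write the job result as a Prometheus gauge the textfile collector can pick up."""
    body = (
        "# HELP hourly_script_success 1 if the last run succeeded, 0 otherwise\n"
        "# TYPE hourly_script_success gauge\n"
        f"hourly_script_success {1 if success else 0}\n"
        f"hourly_script_last_run_timestamp_seconds {int(time.time())}\n"
    )
    tmp = TEXTFILE_DIR / "hourly_script.prom.tmp"
    tmp.write_text(body)
    tmp.rename(TEXTFILE_DIR / "hourly_script.prom")  # rename is atomic, so the agent never reads a half-written file

The dashboard can then query or alert on hourly_script_success directly instead of inspecting files.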
Q: How to Create Azure Resource Graph Explorer Scheduled Reports and Email Alerts I have a Kusto query taken from this example that looks like this: Resources | where type =~ 'microsoft.compute/virtualmachines' | extend vmPowerState = tostring(properties.extended.instanceView.powerState.code) | summarize count() by vmPowerState I would like to create an weekly alert that send the result through an e-mail in a CSV file. The Logic App is organized in 5 steps: One: Two: With URL: https://management.azure.com/providers/Microsoft.ResourceGraph/resources Body: { "query": "Resources | where type =~ 'microsoft.compute/virtualmachines' | extend vmPowerState = tostring(properties.extended.instanceView.powerState.code) | summarize count() by vmPowerState" } Three: Where I parse the Body and I give an extract of the JSON Schema: { "count": 3, "data": [ { "count_": 3, "vmPowerState": "PowerState/stopped" }, { "count_": 29, "vmPowerState": "PowerState/deallocated" }, { "count_": 118, "vmPowerState": "PowerState/running" } ], "skip_token": null, "total_records": 3 } Here I have a few doubt because I found a guide that says that I should use array formula instead. I'm not very sure about that because I cannot see the details in the example. Anyway this is what I do: Four: Five: Where I create the attachment from the CSV The e-mail in the end arrives but the attachment is not a CSV, it's a JSON file: What the hack am I doing wrong? A: if you want to use "Create CSV table" with Columns set to "Automatic", do pass the "body" of "parse Json". you don't need to use the array variable but whatever you use need to return an array like this: The body of the json parser on your example has many other json nodes enveloping that. You should have the option "data" as there is an array there called "data" if you want to cut it short, try "data" you can change to "custom". that would allow you to remove redundant data or format data (like the "PowerState" in "PowerState/stopped"): you can also add the .csv to the file name: The above worked for me but it can be enhanced
How to Create Azure Resource Graph Explorer Scheduled Reports and Email Alerts
I have a Kusto query taken from this example that looks like this: Resources | where type =~ 'microsoft.compute/virtualmachines' | extend vmPowerState = tostring(properties.extended.instanceView.powerState.code) | summarize count() by vmPowerState I would like to create an weekly alert that send the result through an e-mail in a CSV file. The Logic App is organized in 5 steps: One: Two: With URL: https://management.azure.com/providers/Microsoft.ResourceGraph/resources Body: { "query": "Resources | where type =~ 'microsoft.compute/virtualmachines' | extend vmPowerState = tostring(properties.extended.instanceView.powerState.code) | summarize count() by vmPowerState" } Three: Where I parse the Body and I give an extract of the JSON Schema: { "count": 3, "data": [ { "count_": 3, "vmPowerState": "PowerState/stopped" }, { "count_": 29, "vmPowerState": "PowerState/deallocated" }, { "count_": 118, "vmPowerState": "PowerState/running" } ], "skip_token": null, "total_records": 3 } Here I have a few doubt because I found a guide that says that I should use array formula instead. I'm not very sure about that because I cannot see the details in the example. Anyway this is what I do: Four: Five: Where I create the attachment from the CSV The e-mail in the end arrives but the attachment is not a CSV, it's a JSON file: What the hack am I doing wrong?
[ "if you want to use \"Create CSV table\" with Columns set to \"Automatic\", do pass the \"body\" of \"parse Json\".\n\nyou don't need to use the array variable but whatever you use need to return an array like this:\n\nThe body of the json parser on your example has many other json nodes enveloping that. You should have the option \"data\" as there is an array there called \"data\"\nif you want to cut it short, try \"data\"\n\nyou can change to \"custom\". that would allow you to remove redundant data or format data (like the \"PowerState\" in \"PowerState/stopped\"):\n\nyou can also add the .csv to the file name:\n\nThe above worked for me but it can be enhanced\n\n" ]
[ 0 ]
[]
[]
[ "azure", "azure_logic_apps", "kql" ]
stackoverflow_0074657747_azure_azure_logic_apps_kql.txt
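On the "custom columns" option mentioned above: if automatic columns pull in fields you don't want, switching Create CSV table to Custom and entering expressions of roughly this shape keeps only the two columns of interest. The field names come from the Parse JSON sample in the question; the item() syntax is standard Logic Apps expression language, but verify it in your designer before relying on it.

Header          Value (expression)
vmPowerState    item()?['vmPowerState']
count           item()?['count_']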
Q: Data available through the Consumer API I was trying to see if the API has the ability to update something on the account, mainly the E-Statements value to see if they have E-Statements enabled. I was looking through the claims and couldn't find a claim in particular that would give me this info. I then went on to check the Consumer API and could not find an endpoint that can possibly give me this info. Is that data unavailable through the Consumer API? I would like to read and update that field. Is there a list of Core fields that Banno makes available to us? Or would looking through the API Reference be enough to see all of the values Banno makes available to us? A: "Statements" fall under the category of Documents in the Consumer API. (Admittedly this is not obvious, so we've taken a note to add some clarity around "statements" being a subset of "documents"). You'll want to read the sections on determining "Eligibility" and also how to "Determine whether a user is enrolled". The combination of both is necessary, i.e. there isn't a single 'value' or 'attribute' that answers the question.
Data available through the Consumer API
I was trying to see if the API has the ability to update something on the account, mainly the E-Statements value, so I can tell whether a user has E-Statements enabled. I looked through the claims and couldn't find one in particular that would give me this info. I then went on to check the Consumer API and could not find an endpoint that exposes it either. Is that data unavailable through the Consumer API? I would like to read and update that field. Is there a list of Core fields that Banno makes available to us, or would looking through the API Reference be enough to see all of the values Banno exposes?
[ "\"Statements\" fall under the category of Documents in the Consumer API.\n\n(Admittedly this is not obvious, so we've taken a note to add some clarity around \"statements\" being a subset of \"documents\").\n\nYou'll want to read the sections on determining \"Eligibility\" and also how to \"Determine whether a user is enrolled\".\n\nThe combination of both is necessary, i.e. there isn't a single 'value' or 'attribute' that answers the question.\n\n" ]
[ 0 ]
[]
[]
[ "banno_digital_toolkit" ]
stackoverflow_0074661955_banno_digital_toolkit.txt
Q: Cant paste using VBA from Excel master workbook to other workbooks Im trying to write a VBA macro to copy/paste a range of cells from one workbook into all the other workbooks in the folder. I have the code to open, close and save the files and to copy the range from the master. But i dont know how to get the range pasted into the other workbooks. I have tried the code from this question by Jim Simson and from here but without luck. Below is my own code i wrote before coming on to Stackoverflow. Sub COPYMASTERTODATA() Dim myfolder As String Dim myfile As String 'DEFINES FOLDER PATH myfolder = "C:\Users\xxx\xxx\Desktop\DATA" 'DEFINES FILETYPE myfile = Dir(myfolder & "\*xlsx") Do While myfile <> "" 'OPENS ALL FILES IN FOLDER Workbooks.Open Filename:=myfolder & "\" & myfile 'COPIES RANGE OF CELLS IN MASTER Workbooks("MASTER.XLSM").Worksheets("Sheet1").Range("B2:E30").Copy 'PASTES RANGE TO OTHER WORKBOOK Workbooks("myfile").Worksheets("Sheet1").Range("A2").PasteSpecial Paste:=xlPasteValue 'CLOSES ALL FILES Workbooks(myfile).Close Savechanges = True myfile = Dir Loop End Sub I have tried to use different paste methods to get the range pasted into either all the workbooks or just one workbook. The macro opens, saves and closes the workbooks but no pasting happens. I am looking for help with what to put for the paste command. A: This will work: Option Explicit Sub Copy_Data_To_All_SubFiles() Dim FSO As Object Dim FileDir As String Dim oFile As Object Dim ofolder As Object Dim TargetWB As Workbook Set FSO = CreateObject("scripting.filesystemobject") FileDir = "C:\Users\cameron\Documents\temp" Set ofolder = FSO.getfolder(FileDir) For Each oFile In ofolder.Files If FSO.getextensionname(oFile) = "xlsx" Then Set TargetWB = Workbooks.Open(oFile.Name) ' >>> Use this to copy Sheet1.Range("B2:C5").Copy TargetWB.Worksheets(1).Range("B2:C5").PasteSpecial xlPasteValues ' *** OR use this to copy 'TargetWB.Worksheets(1).Range("B2:C5").value = Sheet1.Range("B2:C5").value TargetWB.Save TargetWB.Close End If Next oFile End Sub Just update what ranges you want copied, what XL type you want to target, the target folder... whatever else. Let me know if it works for you.
Can't paste using VBA from Excel master workbook to other workbooks
Im trying to write a VBA macro to copy/paste a range of cells from one workbook into all the other workbooks in the folder. I have the code to open, close and save the files and to copy the range from the master. But i dont know how to get the range pasted into the other workbooks. I have tried the code from this question by Jim Simson and from here but without luck. Below is my own code i wrote before coming on to Stackoverflow. Sub COPYMASTERTODATA() Dim myfolder As String Dim myfile As String 'DEFINES FOLDER PATH myfolder = "C:\Users\xxx\xxx\Desktop\DATA" 'DEFINES FILETYPE myfile = Dir(myfolder & "\*xlsx") Do While myfile <> "" 'OPENS ALL FILES IN FOLDER Workbooks.Open Filename:=myfolder & "\" & myfile 'COPIES RANGE OF CELLS IN MASTER Workbooks("MASTER.XLSM").Worksheets("Sheet1").Range("B2:E30").Copy 'PASTES RANGE TO OTHER WORKBOOK Workbooks("myfile").Worksheets("Sheet1").Range("A2").PasteSpecial Paste:=xlPasteValue 'CLOSES ALL FILES Workbooks(myfile).Close Savechanges = True myfile = Dir Loop End Sub I have tried to use different paste methods to get the range pasted into either all the workbooks or just one workbook. The macro opens, saves and closes the workbooks but no pasting happens. I am looking for help with what to put for the paste command.
[ "This will work:\nOption Explicit\n\nSub Copy_Data_To_All_SubFiles()\n \n Dim FSO As Object\n Dim FileDir As String\n Dim oFile As Object\n Dim ofolder As Object\n \n Dim TargetWB As Workbook\n \n Set FSO = CreateObject(\"scripting.filesystemobject\")\n FileDir = \"C:\\Users\\cameron\\Documents\\temp\"\n Set ofolder = FSO.getfolder(FileDir)\n \n For Each oFile In ofolder.Files\n If FSO.getextensionname(oFile) = \"xlsx\" Then\n Set TargetWB = Workbooks.Open(oFile.Name)\n \n ' >>> Use this to copy\n Sheet1.Range(\"B2:C5\").Copy\n TargetWB.Worksheets(1).Range(\"B2:C5\").PasteSpecial xlPasteValues\n \n ' *** OR use this to copy\n 'TargetWB.Worksheets(1).Range(\"B2:C5\").value = Sheet1.Range(\"B2:C5\").value\n \n TargetWB.Save\n TargetWB.Close\n \n End If\n Next oFile\n \nEnd Sub\n\n\n\n\nJust update what ranges you want copied, what XL type you want to target, the target folder... whatever else.\nLet me know if it works for you.\n" ]
[ 0 ]
[]
[]
[ "excel", "vba" ]
stackoverflow_0074626378_excel_vba.txt
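For completeness, here is the question's original loop with its three blockers fixed: Workbooks("myfile") quoted the variable name as a literal string, the paste constant is xlPasteValues (plural), and SaveChanges needs := rather than = (with = it silently evaluates to False and the file closes unsaved). The sketch below assumes the macro lives in MASTER.XLSM, so ThisWorkbook is the master; untested, adjust paths and sheet names as needed.

Sub CopyMasterToData()
    Dim myFolder As String
    Dim myFile As String
    Dim wb As Workbook

    myFolder = "C:\Users\xxx\xxx\Desktop\DATA"
    myFile = Dir(myFolder & "\*.xlsx")

    Application.ScreenUpdating = False
    Do While myFile <> ""
        Set wb = Workbooks.Open(Filename:=myFolder & "\" & myFile)
        'Copy from the master (the workbook holding this macro) and paste values only
        ThisWorkbook.Worksheets("Sheet1").Range("B2:E30").Copy
        wb.Worksheets("Sheet1").Range("A2").PasteSpecial Paste:=xlPasteValues
        Application.CutCopyMode = False
        wb.Close SaveChanges:=True
        myFile = Dir
    Loop
    Application.ScreenUpdating = True
End Sub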
Q: Create new column which orders two previous columns I'm looking to create a new column which is based on the ordering of two other columns, preferably using the Tidyverse functions, but any suggestions are appreciated. I have a table of around 1300 entries and several columns but a sample of my data looks something like: Number of people TotalOrder TotalQuantile 12 1 1 19 2 1 21 3 2 45 5 2 53 5 3 55 6 3 60 7 4 75 8 4 But I want a fourth column which ranks TotalOrder within TotalQuantile, and to look something like: Number of people TotalOrder TotalQuantile NewOrder 12 1 1 1 19 2 1 2 21 3 2 1 45 5 2 2 53 5 3 1 55 6 3 2 60 7 4 1 75 8 4 2 I've tried a few things like filtering, arranging, etc but it's not worked out. Thanks for the help. A: library(dplyr) df <- structure(list( Number.of.people = c(12L, 19L, 21L, 45L, 53L, 55L, 60L, 75L), TotalOrder = c(1L, 2L, 3L, 5L, 5L, 6L, 7L, 8L), TotalQuantile = c(1L, 1L, 2L, 2L, 3L, 3L, 4L, 4L)), row.names = c(NA,-8L), class = c("tbl_df", "tbl", "data.frame")) df %>% group_by(TotalQuantile) %>% mutate(NewOrder = row_number()) # A tibble: 8 x 4 # Groups: TotalQuantile [4] Number.of.people TotalOrder TotalQuantile NewOrder <int> <int> <int> <int> 1 12 1 1 1 2 19 2 1 2 3 21 3 2 1 4 45 5 2 2 5 53 5 3 1 6 55 6 3 2 7 60 7 4 1 8 75 8 4 2
Create new column which orders two previous columns
I'm looking to create a new column which is based on the ordering of two other columns, preferably using the Tidyverse functions, but any suggestions are appreciated. I have a table of around 1300 entries and several columns but a sample of my data looks something like: Number of people TotalOrder TotalQuantile 12 1 1 19 2 1 21 3 2 45 5 2 53 5 3 55 6 3 60 7 4 75 8 4 But I want a fourth column which ranks TotalOrder within TotalQuantile, and to look something like: Number of people TotalOrder TotalQuantile NewOrder 12 1 1 1 19 2 1 2 21 3 2 1 45 5 2 2 53 5 3 1 55 6 3 2 60 7 4 1 75 8 4 2 I've tried a few things like filtering, arranging, etc but it's not worked out. Thanks for the help.
[ "library(dplyr)\n \n\ndf <-\n structure(list(\n Number.of.people = c(12L, 19L, 21L, 45L, 53L, 55L, 60L, 75L),\n TotalOrder = c(1L, 2L, 3L, 5L, 5L, 6L, 7L, 8L),\n TotalQuantile = c(1L, 1L, 2L, 2L, 3L, 3L, 4L, 4L)),\n row.names = c(NA,-8L), class = c(\"tbl_df\", \"tbl\", \"data.frame\"))\n\ndf %>% \n group_by(TotalQuantile) %>% \n mutate(NewOrder = row_number())\n\n# A tibble: 8 x 4\n# Groups: TotalQuantile [4]\n Number.of.people TotalOrder TotalQuantile NewOrder\n <int> <int> <int> <int>\n1 12 1 1 1\n2 19 2 1 2\n3 21 3 2 1\n4 45 5 2 2\n5 53 5 3 1\n6 55 6 3 2\n7 60 7 4 1\n8 75 8 4 2\n\n" ]
[ 0 ]
[]
[]
[ "arrange_act_assert", "columnsorting", "quantile", "tidyverse" ]
stackoverflow_0074660676_arrange_act_assert_columnsorting_quantile_tidyverse.txt
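One small addition to the approach above: row_number() numbers rows in whatever order they currently sit, so if the data ever arrives unsorted it is safer to arrange within each group first (or use a ranking function if ties in TotalOrder should share a rank). A minimal variant:

library(dplyr)

df %>%
  group_by(TotalQuantile) %>%
  arrange(TotalOrder, .by_group = TRUE) %>%
  mutate(NewOrder = row_number()) %>%
  ungroup()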
Q: Iterating through a nested yaml document in Python I have a nested Yaml like this that I want to iterate thru and create a list of objects from . --- InternalRuleService: - HomeRules: - RuleName: Sample1 IgnoreList: InputParameters: - resourceId: some-res-id - ruleAge: 1 - ruleAgeUnits: days - RuleName: Sample2 IgnoreList: - Account: '12' Region: NorthAmericas - Account: '10' Region: AsiaPacific - Account: '10' Region: Europe InputParameters: - InterfaceIds: xxxx1,xxxxx2 - RuleName: Sample3 IgnoreList: - Account: '14' Region: NorthAmericas - Account: '18' Region: MiddleEast InputParameters: - localContact: JohnDoe contactNumber: 123123 - CustomRules: - RuleName: CustomOne documentType: packet IgnoreList: - Account: '14' Region: NorthAmericas - Account: '18' Region: MiddleEast ThirdPartyRules: - RuleName: alta-prism licenseType: multi licenseAge: 5 licenseAgeUnit: year IgnoreList: - Account: '45' Region: NorthAmericas - Account: '44' Region: MiddleEast This is my code import yaml import json with open('rules.yml', 'r') as file: rules = yaml.safe_load(file) for rows in rules: print(rows) This gives only InternalRuleService and ThirdPartyRules in the output. I want to iterate through all the HomeRules and tried this for rows in rules: print(rows['HomeRules']) which gave me the error below TypeError: string indices must be integers This is what I am trying with the indices for rows in rules: print(rows[0]) This results in the I and T being printed on the screen. How do you access each item in this yaml and build a python object from it? The object I wanted from this Yaml file is one with properties as below RuleName, IgnoreList<LIST>,InputParameters<LIST>,RuleType,SubRuleType Here RuleType will be InternalRuleService and ThirdPartyRules, while SubRuleType will be HomeRules and CustomRules for only those cases where RuleType will be InternalRuleService. Ignore A: Because of the minus in front of HomeRules and CustomRules, InternalRuleService becomes a list, not a dict. Therefore you need the int indicies. This can be quickly determined with pprint: >>> import pprint >>> pprint.pprint(rules, depth=3) {'InternalRuleService': [{'HomeRules': [...]}, {'CustomRules': [...]}], 'ThirdPartyRules': [{'IgnoreList': [...], 'RuleName': 'alta-prism', 'licenseAge': 5, 'licenseAgeUnit': 'year', 'licenseType': 'multi'}]} To iterate HomeRules from the current yaml you have can do: for rows in rules['InternalRuleService'][0]['HomeRules']: print(rows) which prints {'RuleName': 'Sample1', 'IgnoreList': None, 'InputParameters': [{'resourceId': 'some-res-id'}, {'ruleAge': 1}, {'ruleAgeUnits': 'days'}]} {'RuleName': 'Sample2', 'IgnoreList': [{'Account': '12', 'Region': 'NorthAmericas'}, {'Account': '10', 'Region': 'AsiaPacific'}, {'Account': '10', 'Region': 'Europe'}], 'InputParameters': [{'InterfaceIds': 'xxxx1,xxxxx2'}]} {'RuleName': 'Sample3', 'IgnoreList': [{'Account': '14', 'Region': 'NorthAmericas'}, {'Account': '18', 'Region': 'MiddleEast'}], 'InputParameters': [{'localContact': 'JohnDoe', 'contactNumber': 123123}]} If you remove the - before HomeRules and CustomRules you can remove the list-item index [0] and just write: for rows in rules['InternalRuleService']['HomeRules']: print(rows)
Iterating through a nested yaml document in Python
I have a nested Yaml like this that I want to iterate thru and create a list of objects from . --- InternalRuleService: - HomeRules: - RuleName: Sample1 IgnoreList: InputParameters: - resourceId: some-res-id - ruleAge: 1 - ruleAgeUnits: days - RuleName: Sample2 IgnoreList: - Account: '12' Region: NorthAmericas - Account: '10' Region: AsiaPacific - Account: '10' Region: Europe InputParameters: - InterfaceIds: xxxx1,xxxxx2 - RuleName: Sample3 IgnoreList: - Account: '14' Region: NorthAmericas - Account: '18' Region: MiddleEast InputParameters: - localContact: JohnDoe contactNumber: 123123 - CustomRules: - RuleName: CustomOne documentType: packet IgnoreList: - Account: '14' Region: NorthAmericas - Account: '18' Region: MiddleEast ThirdPartyRules: - RuleName: alta-prism licenseType: multi licenseAge: 5 licenseAgeUnit: year IgnoreList: - Account: '45' Region: NorthAmericas - Account: '44' Region: MiddleEast This is my code import yaml import json with open('rules.yml', 'r') as file: rules = yaml.safe_load(file) for rows in rules: print(rows) This gives only InternalRuleService and ThirdPartyRules in the output. I want to iterate through all the HomeRules and tried this for rows in rules: print(rows['HomeRules']) which gave me the error below TypeError: string indices must be integers This is what I am trying with the indices for rows in rules: print(rows[0]) This results in the I and T being printed on the screen. How do you access each item in this yaml and build a python object from it? The object I wanted from this Yaml file is one with properties as below RuleName, IgnoreList<LIST>,InputParameters<LIST>,RuleType,SubRuleType Here RuleType will be InternalRuleService and ThirdPartyRules, while SubRuleType will be HomeRules and CustomRules for only those cases where RuleType will be InternalRuleService. Ignore
[ "Because of the minus in front of HomeRules and CustomRules, InternalRuleService becomes a list, not a dict. Therefore you need the int indicies.\nThis can be quickly determined with pprint:\n>>> import pprint\n>>> pprint.pprint(rules, depth=3)\n{'InternalRuleService': [{'HomeRules': [...]}, {'CustomRules': [...]}],\n 'ThirdPartyRules': [{'IgnoreList': [...],\n 'RuleName': 'alta-prism',\n 'licenseAge': 5,\n 'licenseAgeUnit': 'year',\n 'licenseType': 'multi'}]}\n\n\nTo iterate HomeRules from the current yaml you have can do:\nfor rows in rules['InternalRuleService'][0]['HomeRules']:\n print(rows)\n\nwhich prints\n{'RuleName': 'Sample1', 'IgnoreList': None, 'InputParameters': [{'resourceId': 'some-res-id'}, {'ruleAge': 1}, {'ruleAgeUnits': 'days'}]}\n{'RuleName': 'Sample2', 'IgnoreList': [{'Account': '12', 'Region': 'NorthAmericas'}, {'Account': '10', 'Region': 'AsiaPacific'}, {'Account': '10', 'Region': 'Europe'}], 'InputParameters': [{'InterfaceIds': 'xxxx1,xxxxx2'}]}\n{'RuleName': 'Sample3', 'IgnoreList': [{'Account': '14', 'Region': 'NorthAmericas'}, {'Account': '18', 'Region': 'MiddleEast'}], 'InputParameters': [{'localContact': 'JohnDoe', 'contactNumber': 123123}]}\n\nIf you remove the - before HomeRules and CustomRules you can remove the list-item index [0] and just write:\nfor rows in rules['InternalRuleService']['HomeRules']:\n print(rows)\n\n" ]
[ 1 ]
[]
[]
[ "python_3.x", "pyyaml", "yaml" ]
stackoverflow_0074662357_python_3.x_pyyaml_yaml.txt
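To finish what the question actually asks for (one flat object per rule carrying RuleType and SubRuleType), the dict returned by yaml.safe_load can be flattened as below. The sketch follows the structure of the YAML shown above, normalising Sample1's null IgnoreList to an empty list; extra keys on ThirdPartyRules entries are simply ignored.

def flatten_rules(rules):
    """Turn the nested dict from yaml.safe_load into a flat list of rule objects."""
    flat = []
    for rule_type, groups in rules.items():
        if rule_type == "InternalRuleService":
            # groups is a list like [{'HomeRules': [...]}, {'CustomRules': [...]}]
            for group in groups:
                for sub_rule_type, rule_list in group.items():
                    for rule in rule_list:
                        flat.append({
                            "RuleName": rule.get("RuleName"),
                            "IgnoreList": rule.get("IgnoreList") or [],
                            "InputParameters": rule.get("InputParameters") or [],
                            "RuleType": rule_type,
                            "SubRuleType": sub_rule_type,
                        })
        else:
            # e.g. ThirdPartyRules is already a flat list of rules
            for rule in groups:
                flat.append({
                    "RuleName": rule.get("RuleName"),
                    "IgnoreList": rule.get("IgnoreList") or [],
                    "InputParameters": rule.get("InputParameters") or [],
                    "RuleType": rule_type,
                    "SubRuleType": None,
                })
    return flat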
Q: How to remove everything inside a github repository without removing the repository? During a classroom project a github repository with a new project was created by my instructor. The problem is before this repository I had my own project created with my a lot of work done, in the eleventh hour I found that it was difficult to transfer things from one repo to another, copy pasting created a lot of problems. Now I want to remove everything inside my remote repository (created using github classroom), without removing the repository itself, is it possible? I wasn't able to find similar cases on the internet, if I have missed out a ditto copy on stackoverflow sorry for brinigng it up again. How can I achieve my goal? A: You can't. GitHub support staff can, but you literally can't, for various GitHub-specific reasons. You can delete all but one branch name, and you can make a truly empty commit (using the empty tree and no parent or child commits) and set the last branch name to select that commit. However, all the commits you had in that repository will remain there, and can be found by their raw hash IDs, as long as whoever is looking can find the hash IDs somehow (guesswork, history, existing PRs and other issues, and so on). Furthermore, the issues/PRs/etc database that GitHub maintain—which is not part of Git, but is part of your on-GitHub storage—has no option for cleaning it out.
How to remove everything inside a github repository without removing the repository?
During a classroom project, a GitHub repository with a new project was created by my instructor. The problem is that before this repository I had my own project with a lot of work already done, and at the eleventh hour I found it was difficult to transfer things from one repo to another; copy-pasting created a lot of problems. Now I want to remove everything inside my remote repository (created using GitHub Classroom) without removing the repository itself. Is that possible? I wasn't able to find similar cases on the internet; if I have missed a duplicate on Stack Overflow, sorry for bringing it up again. How can I achieve my goal?
[ "You can't. GitHub support staff can, but you literally can't, for various GitHub-specific reasons.\nYou can delete all but one branch name, and you can make a truly empty commit (using the empty tree and no parent or child commits) and set the last branch name to select that commit. However, all the commits you had in that repository will remain there, and can be found by their raw hash IDs, as long as whoever is looking can find the hash IDs somehow (guesswork, history, existing PRs and other issues, and so on).\nFurthermore, the issues/PRs/etc database that GitHub maintain—which is not part of Git, but is part of your on-GitHub storage—has no option for cleaning it out.\n" ]
[ 1 ]
[]
[]
[ "git", "github", "github_classroom" ]
stackoverflow_0074662734_git_github_github_classroom.txt
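In practice, the "delete all but one branch and point it at an empty commit" route the answer describes looks roughly like this. It assumes you have force-push rights and that the default branch is called main; as the answer notes, the old commits remain reachable by hash on GitHub afterwards.

# start an empty, parentless branch and wipe the tracked files
git checkout --orphan fresh-start
git rm -rf .
git commit --allow-empty -m "Reset repository"

# overwrite the default branch on GitHub with the empty history
git push --force origin fresh-start:main

# then delete the remaining remote branches, e.g.
git push origin --delete old-branch-name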
Q: How to make item template in LazyColumn reusable How can the template I use for my LazyColumn items be modfied to enable reuse? What can I use instead of it to ensure my Clothes sealed interface can be picked up and reused? The code that is a comment is what I was using previously. @Composable fun ReusableTitleSubtitle() { val text1 = when (it) { is Clothes.FixedSizeClothing -> stringResource(id = it.itemName) is Clothes.MultiSizeClothing -> stringResource(id = it.itemName) } val text2 = when (it) { is Clothes.FixedSizeClothing -> stringResource( id = R.string.size_placeholder, it.itemPlaceholder ) is Clothes.MultiSizeClothing -> stringResource( id = R.string.sizes_placeholder_and_placeholder, it.itemPlaceholders[0], it.itemPlaceholders[1] ) } Column() { Text(text = text1) Text(text = text2) } } sealed interface Clothes { val itemName: Int data class FixedSizeClothing(override val itemName: Int, val itemPlaceholder: Int): Clothes data class MultiSizeClothing(override val itemName: Int, val itemPlaceholders: List<Int>): Clothes } @Composable fun MyScreenContent( modifier: Modifier = Modifier, contentPadding: PaddingValues = PaddingValues() ) { Box(modifier = modifier.fillMaxSize()) { val clothesItems = remember { arrayOf( Clothes.FixedSizeClothing(itemName = R.string.chine, itemPlaceholder = 3), Clothes.MultiSizeClothing(itemName = R.string.paisley, itemPlaceholders = listOf(1, 2)), Clothes.FixedSizeClothing(itemName = R.string.stripy, itemPlaceholder = 7), Clothes.FixedSizeClothing(itemName = R.string.tartan, itemPlaceholder = 5), Clothes.FixedSizeClothing(itemName = R.string.tattersall, itemPlaceholder = 8) ) } MyLazyColumn( lazyItems = clothesItems, modifier = Modifier.padding(contentPadding) ) { ReusableTitleSubtitle() // val text1 = when (it) { // is Clothes.FixedSizeClothing -> stringResource(id = it.itemName) // is Clothes.MultiSizeClothing -> stringResource(id = it.itemName) // } // val text2 = when (it) { // is Clothes.FixedSizeClothing -> // stringResource( // id = R.string.size_placeholder, // it.itemPlaceholder // ) // is Clothes.MultiSizeClothing -> // stringResource( // id = R.string.sizes_placeholder_and_placeholder, // it.itemPlaceholders[0], it.itemPlaceholders[1] // ) // } // Column() { // Text(text = text1) // Text(text = text2) // } } } } A: You can pass as parameter Clothes: @Composable fun ReusableTitleSubtitle(clothes:Clothes) { val text1 = when (clothes) { is Clothes.FixedSizeClothing -> //... is Clothes.MultiSizeClothing -> //... } //... } @Composable fun MyLazyColumn( lazyItems : Array<Clothes>, modifier : Modifier = Modifier ) { LazyColumn( modifier = modifier ) { items(lazyItems) { ReusableTitleSubtitle(it) } } } and then: MyLazyColumn( lazyItems = clothesItems, modifier = Modifier.padding(contentPadding) )
How to make item template in LazyColumn reusable
How can the template I use for my LazyColumn items be modfied to enable reuse? What can I use instead of it to ensure my Clothes sealed interface can be picked up and reused? The code that is a comment is what I was using previously. @Composable fun ReusableTitleSubtitle() { val text1 = when (it) { is Clothes.FixedSizeClothing -> stringResource(id = it.itemName) is Clothes.MultiSizeClothing -> stringResource(id = it.itemName) } val text2 = when (it) { is Clothes.FixedSizeClothing -> stringResource( id = R.string.size_placeholder, it.itemPlaceholder ) is Clothes.MultiSizeClothing -> stringResource( id = R.string.sizes_placeholder_and_placeholder, it.itemPlaceholders[0], it.itemPlaceholders[1] ) } Column() { Text(text = text1) Text(text = text2) } } sealed interface Clothes { val itemName: Int data class FixedSizeClothing(override val itemName: Int, val itemPlaceholder: Int): Clothes data class MultiSizeClothing(override val itemName: Int, val itemPlaceholders: List<Int>): Clothes } @Composable fun MyScreenContent( modifier: Modifier = Modifier, contentPadding: PaddingValues = PaddingValues() ) { Box(modifier = modifier.fillMaxSize()) { val clothesItems = remember { arrayOf( Clothes.FixedSizeClothing(itemName = R.string.chine, itemPlaceholder = 3), Clothes.MultiSizeClothing(itemName = R.string.paisley, itemPlaceholders = listOf(1, 2)), Clothes.FixedSizeClothing(itemName = R.string.stripy, itemPlaceholder = 7), Clothes.FixedSizeClothing(itemName = R.string.tartan, itemPlaceholder = 5), Clothes.FixedSizeClothing(itemName = R.string.tattersall, itemPlaceholder = 8) ) } MyLazyColumn( lazyItems = clothesItems, modifier = Modifier.padding(contentPadding) ) { ReusableTitleSubtitle() // val text1 = when (it) { // is Clothes.FixedSizeClothing -> stringResource(id = it.itemName) // is Clothes.MultiSizeClothing -> stringResource(id = it.itemName) // } // val text2 = when (it) { // is Clothes.FixedSizeClothing -> // stringResource( // id = R.string.size_placeholder, // it.itemPlaceholder // ) // is Clothes.MultiSizeClothing -> // stringResource( // id = R.string.sizes_placeholder_and_placeholder, // it.itemPlaceholders[0], it.itemPlaceholders[1] // ) // } // Column() { // Text(text = text1) // Text(text = text2) // } } } }
[ "You can pass as parameter Clothes:\n@Composable\nfun ReusableTitleSubtitle(clothes:Clothes) {\n\n val text1 = when (clothes) {\n is Clothes.FixedSizeClothing -> //...\n is Clothes.MultiSizeClothing -> //...\n }\n\n //...\n}\n\n@Composable\nfun MyLazyColumn(\n lazyItems : Array<Clothes>,\n modifier : Modifier = Modifier\n) {\n LazyColumn(\n modifier = modifier\n ) {\n items(lazyItems) {\n ReusableTitleSubtitle(it)\n }\n }\n}\n\nand then:\n MyLazyColumn(\n lazyItems = clothesItems,\n modifier = Modifier.padding(contentPadding)\n ) \n\n" ]
[ 1 ]
[]
[]
[ "android", "android_jetpack_compose", "kotlin" ]
stackoverflow_0074662539_android_android_jetpack_compose_kotlin.txt
Q: old() does not work on dependent dropdown in Laravel blade I have two dropdown list in my Laravel page. I am trying to get old value of the dependent dropdown list (categorylist.blade) and make it selected once I fill and post the data. It returns back to the same page if the validation has not successfully completed. I am able to get all values except this one. I have tried Session::put() as well as Session::flash() but did not work. The category list is retrieved once you chose section as it requests the category list from the controller through ajax. How can I get the old value in the category dropdown list after the page refreshed. Here is my section selection dropdown: <select class="form-control sectionchoose" name="section_id" id="section_id"> <option value="">Choose Section</option> @foreach($sections as $section) <option value="{{$section['id']}}" @if(old('section_id') == $section['id']) selected @endif>{{$section['name']}} </option> @endforeach </select> My categorylist dropdown: <label class="col-sm-6">Chose Category</label> <div class="categorylist"> @include('admin.deal-management.categorylist') </div> And this is my categorylist view file: <select class="form-control " name="category_id" id="category_id"> <option value="">Choose Category</option> @foreach($categories as $category) <option value="{{$category['id']}}" @if(old('category_id') == $category['id']) selected @endif> {{$category['category_name']}} </option> @endforeach </select> and this is my main controller: public function addEditDeals(DealAddEditRequest $request, $id=null){ //*** post starts here ***/ if($request->isMethod('post')){ $message = 'Updated successfully'; $data=$request->all(); $deal->fill($request->validated()); $deal->save(); return redirect()->back()->with($message); } This is my categorylist controller: public function findCategories(Request $request){ if($request->json()){ $data = $request->all(); $categories = Category::where(['section_id' => $data['id'], 'status'=>1])->get()->toArray(); return view('admin.deal-management.categorylist',compact('categories')); } } And finally, this is the jQuery part: $(document).ready(function (){ let sectionid = $('.sectionchoose').val(); $.ajax({ headers: { 'X-CSRF-TOKEN' : $('meta[name="csrf-token"]').attr('content') }, type: 'POST', datatype: 'json', url : '/admin/selectsection', data: {id:sectionid}, success: function(response){ $('.categorylist').html(response) }, error:function(){ } }) }) A: Finally, was able to find the solution after 12 hours. 
Whoever has the same issue can use the approach below: Step 1: Send the Session value through the with() command: return redirect()->back()->withErrors($validator) ->withInput()->with('cat_id',$data['category_id']); } Step 2: Retreive the data in your main blade and attain hidden input: <input class="cat_id" id="asd" type="hidden" value="{{Session::get('cat_id')}}"/> Step 3: Get the retreived session value in Jquery: $(document).ready(function (){ let sectionid = $('.sectionchoose').val(); let cat_id; if($('#asd').val()){ cat_id = $('#asd').val(); } else { cat_id = 0; } $.ajax({ headers: { 'X-CSRF-TOKEN' : $('meta[name="csrf-token"]').attr('content') }, type: 'POST', datatype: 'json', url : '/admin/selectsection', data: {id:sectionid, cat_id:cat_id}, success: function(response){ $('.categorylist').html(response) }, error:function(){ } }) }) Step 4: Send the session value to your dependent blade again (as cat_id here) public function findCategories(Request $request){ if($request->json()){ $data = $request->all(); $cat_id = $data['cat_id'] ?? ''; $categories = Category::where(['section_id' => $data['id'], 'status'=>1])->get()->toArray(); return view('admin.deal-management.categorylist',compact('categories','cat_id')); } } Done! There is not any other way to get old value of dependent dropdown list value so far. If somebody knows better way, please help to improve this answer.
old() does not work on dependent dropdown in Laravel blade
I have two dropdown list in my Laravel page. I am trying to get old value of the dependent dropdown list (categorylist.blade) and make it selected once I fill and post the data. It returns back to the same page if the validation has not successfully completed. I am able to get all values except this one. I have tried Session::put() as well as Session::flash() but did not work. The category list is retrieved once you chose section as it requests the category list from the controller through ajax. How can I get the old value in the category dropdown list after the page refreshed. Here is my section selection dropdown: <select class="form-control sectionchoose" name="section_id" id="section_id"> <option value="">Choose Section</option> @foreach($sections as $section) <option value="{{$section['id']}}" @if(old('section_id') == $section['id']) selected @endif>{{$section['name']}} </option> @endforeach </select> My categorylist dropdown: <label class="col-sm-6">Chose Category</label> <div class="categorylist"> @include('admin.deal-management.categorylist') </div> And this is my categorylist view file: <select class="form-control " name="category_id" id="category_id"> <option value="">Choose Category</option> @foreach($categories as $category) <option value="{{$category['id']}}" @if(old('category_id') == $category['id']) selected @endif> {{$category['category_name']}} </option> @endforeach </select> and this is my main controller: public function addEditDeals(DealAddEditRequest $request, $id=null){ //*** post starts here ***/ if($request->isMethod('post')){ $message = 'Updated successfully'; $data=$request->all(); $deal->fill($request->validated()); $deal->save(); return redirect()->back()->with($message); } This is my categorylist controller: public function findCategories(Request $request){ if($request->json()){ $data = $request->all(); $categories = Category::where(['section_id' => $data['id'], 'status'=>1])->get()->toArray(); return view('admin.deal-management.categorylist',compact('categories')); } } And finally, this is the jQuery part: $(document).ready(function (){ let sectionid = $('.sectionchoose').val(); $.ajax({ headers: { 'X-CSRF-TOKEN' : $('meta[name="csrf-token"]').attr('content') }, type: 'POST', datatype: 'json', url : '/admin/selectsection', data: {id:sectionid}, success: function(response){ $('.categorylist').html(response) }, error:function(){ } }) })
[ "Finally, was able to find the solution after 12 hours. Whoever has the same issue can use the approach below:\nStep 1: Send the Session value through the with() command:\n\n return redirect()->back()->withErrors($validator)\n ->withInput()->with('cat_id',$data['category_id']);\n\n }\n\nStep 2: Retreive the data in your main blade and attain hidden input:\n<input class=\"cat_id\" id=\"asd\" type=\"hidden\" value=\"{{Session::get('cat_id')}}\"/>\n\nStep 3: Get the retreived session value in Jquery:\n$(document).ready(function (){\n let sectionid = $('.sectionchoose').val();\n let cat_id;\n if($('#asd').val()){\n cat_id = $('#asd').val();\n } else {\n cat_id = 0;\n }\n $.ajax({\n\n headers: {\n 'X-CSRF-TOKEN' : $('meta[name=\"csrf-token\"]').attr('content')\n },\n type: 'POST',\n datatype: 'json',\n url : '/admin/selectsection',\n data: {id:sectionid, cat_id:cat_id},\n\n success: function(response){\n $('.categorylist').html(response)\n }, error:function(){\n\n }\n })\n})\n\nStep 4: Send the session value to your dependent blade again (as cat_id here)\n public function findCategories(Request $request){\n if($request->json()){\n $data = $request->all();\n $cat_id = $data['cat_id'] ?? '';\n $categories = Category::where(['section_id' => $data['id'], 'status'=>1])->get()->toArray();\n return view('admin.deal-management.categorylist',compact('categories','cat_id'));\n\n }\n }\n\nDone! There is not any other way to get old value of dependent dropdown list value so far. If somebody knows better way, please help to improve this answer.\n" ]
[ 0 ]
[]
[]
[ "laravel" ]
stackoverflow_0074654816_laravel.txt
Q: I got a "Whitelabel Error Page" when using Eureka server I created a spring cloud project using SPRING INITIALIZR. My project structure is as below: enter image description here The DemoApplication: package com.example.demo; import org.springframework.boot.SpringApplication; import org.springframework.boot.autoconfigure.SpringBootApplication; import org.springframework.cloud.netflix.eureka.server.EnableEurekaServer; @SpringBootApplication @EnableEurekaServer public class DemoApplication { public static void main(String[] args) { SpringApplication.run(DemoApplication.class, args); } } application.properties: server.port=8888 eureka.client.register-with-eureka=false eureka.client.fetch-registry=false eureka.instance.hostname=localhost eureka.client.service-url.defaultZone=localhost:8888/eureka spring.application.name=appName pom.xml: <?xml version="1.0" encoding="UTF-8"?> <project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd"> <modelVersion>4.0.0</modelVersion> <groupId>com.example</groupId> <artifactId>demo</artifactId> <version>0.0.1-SNAPSHOT</version> <packaging>jar</packaging> <name>demo</name> <description>Demo project for Spring Boot</description> <parent> <groupId>org.springframework.boot</groupId> <artifactId>spring-boot-starter-parent</artifactId> <version>2.1.1.RELEASE</version> <relativePath/> <!-- lookup parent from repository --> </parent> <properties> <project.build.sourceEncoding>UTF-8</project.build.sourceEncoding> <project.reporting.outputEncoding>UTF-8</project.reporting.outputEncoding> <java.version>1.8</java.version> <spring-cloud.version>Greenwich.M3</spring-cloud.version> </properties> <dependencies> <dependency> <groupId>org.springframework.boot</groupId> <artifactId>spring-boot-starter-web</artifactId> </dependency> <dependency> <groupId>org.springframework.cloud</groupId> <artifactId>spring-cloud-starter-netflix-eureka-server</artifactId> </dependency> <dependency> <groupId>org.springframework.boot</groupId> <artifactId>spring-boot-starter-test</artifactId> <scope>test</scope> </dependency> </dependencies> <dependencyManagement> <dependencies> <dependency> <groupId>org.springframework.cloud</groupId> <artifactId>spring-cloud-dependencies</artifactId> <version>${spring-cloud.version}</version> <type>pom</type> <scope>import</scope> </dependency> </dependencies> </dependencyManagement> <build> <plugins> <plugin> <groupId>org.springframework.boot</groupId> <artifactId>spring-boot-maven-plugin</artifactId> </plugin> </plugins> </build> <repositories> <repository> <id>spring-milestones</id> <name>Spring Milestones</name> <url>https://repo.spring.io/milestone</url> <snapshots> <enabled>false</enabled> </snapshots> </repository> </repositories> </project> But when I visit localhost:8888, the following error occurs: Whitelabel Error Page This application has no explicit mapping for /error, so you are seeing this as a fallback. Sun Dec 02 11:23:36 CST 2018 There was an unexpected error (type=Not Found, status=404). No message available I don't konw why this happens.How can I solve this? 
A: at-least the latest version of SpringBoot and Cloud requires these configs for Eureka UI to come up: #in application.yml spring: freemarker: template-loader-path: classpath:/templates/ prefer-file-system-access: false or #in application.properties spring.freemarker.template-loader-path= classpath:/templates/ spring.freemarker.prefer-file-system-access= false See here: https://cloud.spring.io/spring-cloud-static/spring-cloud-netflix/2.1.0.RELEASE/multi/multi_spring-cloud-eureka-server.html#netflix-eureka-server-starter A: For me, I have generated the project using start.spring.io and I have chosen Eureka Server, I did not know that I need to add @EnableEurekaServer myself on top of the application. It worked by adding that. A: I had the same problem when using Greenwich.M3 spring cloud version. For me it worked when changing the spring cloud version to Finchley.SR1 <spring-cloud.version>Finchley.SR1</spring-cloud.version> A: this is working for me: application.properties file: server.port=8888 eureka.client.register-with-eureka=false eureka.client.fetch-registry=false eureka.instance.hostname=localhost eureka.client.service-url.defaultZone=http://localhost:8888/eureka spring.application.name=appName pom.xml: <?xml version="1.0" encoding="UTF-8"?> <project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd"> <modelVersion>4.0.0</modelVersion> <groupId>com.example</groupId> <artifactId>demo</artifactId> <version>0.0.1-SNAPSHOT</version> <packaging>jar</packaging> <name>demo</name> <description>Demo project for Spring Boot</description> <parent> <groupId>org.springframework.boot</groupId> <artifactId>spring-boot-starter-parent</artifactId> <version>2.1.1.RELEASE</version> <relativePath/> <!-- lookup parent from repository --> </parent> <properties> <project.build.sourceEncoding>UTF-8</project.build.sourceEncoding> <project.reporting.outputEncoding>UTF-8</project.reporting.outputEncoding> <java.version>1.8</java.version> <spring-cloud.version>Greenwich.M3</spring-cloud.version> </properties> <dependencies> <dependency> <groupId>org.springframework.boot</groupId> <artifactId>spring-boot-starter-web</artifactId> </dependency> <dependency> <groupId>org.springframework.cloud</groupId> <artifactId>spring-cloud-starter-netflix-eureka-server</artifactId> </dependency> <dependency> <groupId>org.springframework.boot</groupId> <artifactId>spring-boot-starter-test</artifactId> <scope>test</scope> </dependency> </dependencies> <dependencyManagement> <dependencies> <dependency> <groupId>org.springframework.cloud</groupId> <artifactId>spring-cloud-dependencies</artifactId> <version>${spring-cloud.version}</version> <type>pom</type> <scope>import</scope> </dependency> </dependencies> </dependencyManagement> <build> <plugins> <plugin> <groupId>org.springframework.boot</groupId> <artifactId>spring-boot-maven-plugin</artifactId> </plugin> </plugins> </build> <repositories> <repository> <id>spring-milestones</id> <name>Spring Milestones</name> <url>https://repo.spring.io/milestone</url> <snapshots> <enabled>false</enabled> </snapshots> </repository> </repositories> </project> visit : localhost:8888 A: I use spring boot 2.2.1, spring cloud version Hoxton.RELEASE ( <spring-cloud.version>Hoxton.RELEASE</spring-cloud.version>),my settings is: spring.application.name=testserver server.port=8888 eureka.client.register-with-eureka=false eureka.client.fetch-registry=false 
eureka.instance.hostname=localhost eureka.client.serviceUrl.defaultZone=http://${eureka.instance.hostname}:${server.port}/eureka/ And for me the URL is http://localhost:8888/ (not http://localhost:8888/eureka). For the server, the URL depends on the server.port and server.servlet.context-path properties (so, if I set server.servlet.context-path=/eureka, the URL will be http://localhost:8888/eureka) A: I am using Spring Boot version 3.0.0 and Java 17. These versions do not work together stably. I tried all the suggestions above but they did not work in my case. I finally found a solution for my own case. The Eureka page started appearing when I changed the JDK from "Eclipse Temurin JDK 17" to "OpenJDK 19". Edit:  After that, on the client side, I could not register my service with the Eureka service. When I change the Spring version to "2.7.5", it works.
I got a "Whitelabel Error Page" when using Eureka server
I created a spring cloud project using SPRING INITIALIZR. My project structure is as below: enter image description here The DemoApplication: package com.example.demo; import org.springframework.boot.SpringApplication; import org.springframework.boot.autoconfigure.SpringBootApplication; import org.springframework.cloud.netflix.eureka.server.EnableEurekaServer; @SpringBootApplication @EnableEurekaServer public class DemoApplication { public static void main(String[] args) { SpringApplication.run(DemoApplication.class, args); } } application.properties: server.port=8888 eureka.client.register-with-eureka=false eureka.client.fetch-registry=false eureka.instance.hostname=localhost eureka.client.service-url.defaultZone=localhost:8888/eureka spring.application.name=appName pom.xml: <?xml version="1.0" encoding="UTF-8"?> <project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd"> <modelVersion>4.0.0</modelVersion> <groupId>com.example</groupId> <artifactId>demo</artifactId> <version>0.0.1-SNAPSHOT</version> <packaging>jar</packaging> <name>demo</name> <description>Demo project for Spring Boot</description> <parent> <groupId>org.springframework.boot</groupId> <artifactId>spring-boot-starter-parent</artifactId> <version>2.1.1.RELEASE</version> <relativePath/> <!-- lookup parent from repository --> </parent> <properties> <project.build.sourceEncoding>UTF-8</project.build.sourceEncoding> <project.reporting.outputEncoding>UTF-8</project.reporting.outputEncoding> <java.version>1.8</java.version> <spring-cloud.version>Greenwich.M3</spring-cloud.version> </properties> <dependencies> <dependency> <groupId>org.springframework.boot</groupId> <artifactId>spring-boot-starter-web</artifactId> </dependency> <dependency> <groupId>org.springframework.cloud</groupId> <artifactId>spring-cloud-starter-netflix-eureka-server</artifactId> </dependency> <dependency> <groupId>org.springframework.boot</groupId> <artifactId>spring-boot-starter-test</artifactId> <scope>test</scope> </dependency> </dependencies> <dependencyManagement> <dependencies> <dependency> <groupId>org.springframework.cloud</groupId> <artifactId>spring-cloud-dependencies</artifactId> <version>${spring-cloud.version}</version> <type>pom</type> <scope>import</scope> </dependency> </dependencies> </dependencyManagement> <build> <plugins> <plugin> <groupId>org.springframework.boot</groupId> <artifactId>spring-boot-maven-plugin</artifactId> </plugin> </plugins> </build> <repositories> <repository> <id>spring-milestones</id> <name>Spring Milestones</name> <url>https://repo.spring.io/milestone</url> <snapshots> <enabled>false</enabled> </snapshots> </repository> </repositories> </project> But when I visit localhost:8888, the following error occurs: Whitelabel Error Page This application has no explicit mapping for /error, so you are seeing this as a fallback. Sun Dec 02 11:23:36 CST 2018 There was an unexpected error (type=Not Found, status=404). No message available I don't konw why this happens.How can I solve this?
[ "at-least the latest version of SpringBoot and Cloud requires these configs for Eureka UI to come up:\n#in application.yml\nspring:\n freemarker:\n template-loader-path: classpath:/templates/\n prefer-file-system-access: false\n\nor\n#in application.properties\nspring.freemarker.template-loader-path= classpath:/templates/\nspring.freemarker.prefer-file-system-access= false\n\nSee here:\nhttps://cloud.spring.io/spring-cloud-static/spring-cloud-netflix/2.1.0.RELEASE/multi/multi_spring-cloud-eureka-server.html#netflix-eureka-server-starter\n", "For me, I have generated the project using start.spring.io and I have chosen Eureka Server, I did not know that I need to add @EnableEurekaServer myself on top of the application. It worked by adding that.\n", "I had the same problem when using Greenwich.M3 spring cloud version. For me it worked when changing the spring cloud version to Finchley.SR1\n<spring-cloud.version>Finchley.SR1</spring-cloud.version>\n\n", "this is working for me: \napplication.properties file: \nserver.port=8888\neureka.client.register-with-eureka=false\neureka.client.fetch-registry=false\neureka.instance.hostname=localhost\neureka.client.service-url.defaultZone=http://localhost:8888/eureka\nspring.application.name=appName\n\npom.xml: \n<?xml version=\"1.0\" encoding=\"UTF-8\"?>\n<project xmlns=\"http://maven.apache.org/POM/4.0.0\" xmlns:xsi=\"http://www.w3.org/2001/XMLSchema-instance\"\n xsi:schemaLocation=\"http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd\">\n <modelVersion>4.0.0</modelVersion>\n\n <groupId>com.example</groupId>\n <artifactId>demo</artifactId>\n <version>0.0.1-SNAPSHOT</version>\n <packaging>jar</packaging>\n\n <name>demo</name>\n <description>Demo project for Spring Boot</description>\n\n <parent>\n <groupId>org.springframework.boot</groupId>\n <artifactId>spring-boot-starter-parent</artifactId>\n <version>2.1.1.RELEASE</version>\n <relativePath/> <!-- lookup parent from repository -->\n </parent>\n\n <properties>\n <project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>\n <project.reporting.outputEncoding>UTF-8</project.reporting.outputEncoding>\n <java.version>1.8</java.version>\n <spring-cloud.version>Greenwich.M3</spring-cloud.version>\n </properties>\n\n <dependencies>\n <dependency>\n <groupId>org.springframework.boot</groupId>\n <artifactId>spring-boot-starter-web</artifactId>\n </dependency>\n <dependency>\n <groupId>org.springframework.cloud</groupId>\n <artifactId>spring-cloud-starter-netflix-eureka-server</artifactId>\n </dependency>\n\n <dependency>\n <groupId>org.springframework.boot</groupId>\n <artifactId>spring-boot-starter-test</artifactId>\n <scope>test</scope>\n </dependency>\n </dependencies>\n\n <dependencyManagement>\n <dependencies>\n <dependency>\n <groupId>org.springframework.cloud</groupId>\n <artifactId>spring-cloud-dependencies</artifactId>\n <version>${spring-cloud.version}</version>\n <type>pom</type>\n <scope>import</scope>\n </dependency>\n </dependencies>\n </dependencyManagement>\n\n <build>\n <plugins>\n <plugin>\n <groupId>org.springframework.boot</groupId>\n <artifactId>spring-boot-maven-plugin</artifactId>\n </plugin>\n </plugins>\n </build>\n\n <repositories>\n <repository>\n <id>spring-milestones</id>\n <name>Spring Milestones</name>\n <url>https://repo.spring.io/milestone</url>\n <snapshots>\n <enabled>false</enabled>\n </snapshots>\n </repository>\n </repositories>\n\n\n</project>\n\nvisit : localhost:8888\n", "I use spring boot 2.2.1, spring cloud version Hoxton.RELEASE 
(\n<spring-cloud.version>Hoxton.RELEASE</spring-cloud.version>),my settings is:\nspring.application.name=testserver \nserver.port=8888\n\neureka.client.register-with-eureka=false\neureka.client.fetch-registry=false\neureka.instance.hostname=localhost\n\neureka.client.serviceUrl.defaultZone=http://${eureka.instance.hostname}:${server.port}/eureka/\nAnd for me url is http://localhost:8888/ (not http://localhost:8888/eureka).\nFor server, url depends on server.port and server.servlet.context-path properties (so, if i set server.servlet.context-path=/eureka, url will be http://localhost:8888/eureka)\n", "I am using springboot 3.0.0 version and java 17. These versions are not working together stable. I tried to all the suggestions above but it did not work in my case. I finally found solution for my own case.\nIt started appearing, when I changed the jdk version from \"eclipse temurin jdk 17 version\" to \"open jdk 19\".\nEdit: \nAfter the solution, on the client side, I cannot register my service to Eureka service. When I change the spring version as \"2.7.5\", it works.\n" ]
[ 8, 4, 1, 0, 0, 0 ]
[]
[]
[ "spring_boot", "spring_cloud" ]
stackoverflow_0053577161_spring_boot_spring_cloud.txt
Q: How to change the "shape" of pairplot in Seaborn? I plotted this pairplot correlating only one features with all the others, how can i visualize it in a better way? I need to visualize 4 columns. In the official documentation of pairplot i can't find the option. This is the df: This is the part of the code: sns.pairplot(data=dftrain, y_vars=['medv'], x_vars=dftrain.columns[:-1]) This is the plot: A: The shape of a pairplot can't be changed. But, you can create a similar relplot if you convert the dataframe to long form. Here is some simple example code, starting from dummy data: import matplotlib.pyplot as plt import seaborn as sns import pandas as pd import numpy as np df = pd.DataFrame(np.random.rand(300, 14), columns=[*"abcdefghijklmn"]) df_long = df.melt(id_vars=df.columns[-1], value_vars=df.columns[:-1]) g = sns.relplot(df_long, x=df.columns[-1], y='value', col='variable', col_wrap=4, height=2) A: You can use seaborn.FacetGrid and set a value of the parameter col_wrap. col_wrap (int): “Wrap” the column variable at this width, so that the column facets span multiple rows. Incompatible with a row facet. Try this : cols= dftrain.columns[:-1].tolist() g = sns.FacetGrid(pd.DataFrame(cols), col=0, col_wrap=3, sharex=False) for ax, varx in zip(g.axes, cols): sns.scatterplot(data=dftrain, x=varx, y="medv", ax=ax) g.tight_layout() # Output :
How to change the "shape" of pairplot in Seaborn?
I plotted this pairplot correlating only one features with all the others, how can i visualize it in a better way? I need to visualize 4 columns. In the official documentation of pairplot i can't find the option. This is the df: This is the part of the code: sns.pairplot(data=dftrain, y_vars=['medv'], x_vars=dftrain.columns[:-1]) This is the plot:
[ "The shape of a pairplot can't be changed. But, you can create a similar relplot if you convert the dataframe to long form.\nHere is some simple example code, starting from dummy data:\nimport matplotlib.pyplot as plt\nimport seaborn as sns\nimport pandas as pd\nimport numpy as np\n\ndf = pd.DataFrame(np.random.rand(300, 14), columns=[*\"abcdefghijklmn\"])\ndf_long = df.melt(id_vars=df.columns[-1], value_vars=df.columns[:-1])\n\ng = sns.relplot(df_long, x=df.columns[-1], y='value', col='variable', col_wrap=4, height=2)\n\n\n", "You can use seaborn.FacetGrid and set a value of the parameter col_wrap.\n\ncol_wrap (int): “Wrap” the column variable at this width, so that the\ncolumn facets span multiple rows. Incompatible with a row facet.\n\nTry this :\ncols= dftrain.columns[:-1].tolist()\n\ng = sns.FacetGrid(pd.DataFrame(cols), col=0, col_wrap=3, sharex=False)\n\nfor ax, varx in zip(g.axes, cols):\n sns.scatterplot(data=dftrain, x=varx, y=\"medv\", ax=ax)\n \ng.tight_layout()\n\n# Output :\n\n" ]
[ 2, 2 ]
[]
[]
[ "pairplot", "pandas", "python", "seaborn", "shapes" ]
stackoverflow_0074662654_pairplot_pandas_python_seaborn_shapes.txt
Q: VSCode can't find include path I've got a simple CMake educational project sturctured like this: The root CMakeLists.txt is like that: cmake_minimum_required(VERSION 3.24.2) project(SIMPLE_ENGINE CXX) add_subdirectory(engine) add_subdirectory(game) game: cmake_minimum_required(VERSION 3.24.2) project(GAME CXX) add_executable( game src/main.cpp ) target_link_libraries( game engine ) set_property(TARGET game PROPERTY CXX_STANDARD 20) engine: cmake_minimum_required(VERSION 3.24.2) project(ENGINE CXX) add_library( engine include/base/window.h src/base/window.cpp include/base/engine.h src/base/engine.cpp ) target_include_directories( engine PUBLIC ${CMAKE_CURRENT_SOURCE_DIR}/include ) target_link_libraries( engine glfw GLEW GL ) set_property(TARGET engine PROPERTY CXX_STANDARD 20) The problem is that VSCode can't find include files despite the fact the project compiles and runs successfully. As far as I understand it should get all the information from cmake files. Any advice in that regard? A: Ok, it seems that problem was because "configurationProvider" in configuration file was set to "ms-vscode.makefile-tools". I changed it to "configurationProvider": "ms-vscode.cmake-tools" and now it seems to work. A: You should try putting all the files in one directory. You might also have some extension enabled that is messing up VSCode if it isn't detecting the files but still compiling them.
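For reference, a minimal sketch of where the accepted fix lives: the "configurationProvider" entry sits in .vscode/c_cpp_properties.json. The name, compilerPath and cppStandard values below are placeholders — only the configurationProvider line is the change that matters, and the include paths are then supplied by CMake Tools:
{
  "configurations": [
    {
      "name": "Linux",
      "configurationProvider": "ms-vscode.cmake-tools",
      "compilerPath": "/usr/bin/g++",
      "cppStandard": "c++20"
    }
  ],
  "version": 4
}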
VSCode can't find include path
I've got a simple CMake educational project sturctured like this: The root CMakeLists.txt is like that: cmake_minimum_required(VERSION 3.24.2) project(SIMPLE_ENGINE CXX) add_subdirectory(engine) add_subdirectory(game) game: cmake_minimum_required(VERSION 3.24.2) project(GAME CXX) add_executable( game src/main.cpp ) target_link_libraries( game engine ) set_property(TARGET game PROPERTY CXX_STANDARD 20) engine: cmake_minimum_required(VERSION 3.24.2) project(ENGINE CXX) add_library( engine include/base/window.h src/base/window.cpp include/base/engine.h src/base/engine.cpp ) target_include_directories( engine PUBLIC ${CMAKE_CURRENT_SOURCE_DIR}/include ) target_link_libraries( engine glfw GLEW GL ) set_property(TARGET engine PROPERTY CXX_STANDARD 20) The problem is that VSCode can't find include files despite the fact the project compiles and runs successfully. As far as I understand it should get all the information from cmake files. Any advice in that regard?
[ "Ok, it seems that problem was because\n\"configurationProvider\" in configuration file was set to \"ms-vscode.makefile-tools\". I changed it to\n\"configurationProvider\": \"ms-vscode.cmake-tools\" and now it seems to work.\n", "You should try putting all the files in one directory. You might also have some extension enabled that is messing up VSCode if it isn't detecting the files but still compiling them.\n" ]
[ 1, 0 ]
[]
[]
[ "c++", "cmake", "ide", "visual_studio_code" ]
stackoverflow_0074660782_c++_cmake_ide_visual_studio_code.txt
Q: Is there a way to loop through an entire Python script with an Input function? I have a very basic Blackjack simulator where I input whether I want to Hit or Stay. When I choose it, it then tells me the result. I want to run this over multiple times. Is there a function where after I get the result of the hand, it will restart from the top of the script? I am using Jupyter notebook and am currently just restarting and running all cells and then input my choice A: You can use a basic game play pattern such as the following to do what you want. # Function to request input and verify input type is valid def getInput(prompt, respType= None): while True: resp = input(prompt) if respType == str or respType == None: break else: try: resp = respType(resp) break except ValueError: print('Invalid input, please try again') return resp # function to initiate game play and control game termination def playgame(): if getInput('Do you want to play? (y/n)').lower() == 'y': while True: game_input = getInput('Enter an Integer', int) # . . . Place your game logic here . . . # if getInput("Play Again? (y/n)").lower() == 'n': break print('Good bye')
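As a usage illustration only (the hit/stay prompt and hand logic below are assumed, not taken from the original simulator), the "Place your game logic here" line in playgame() could be filled in roughly like this:
# inside the while True loop of playgame()
choice = getInput('Hit or Stay? (h/s)').lower()
if choice == 'h':
    print('You chose to hit')    # deal another card and re-evaluate the hand here
else:
    print('You chose to stay')   # resolve the dealer's hand and report the result here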
Is there a way to loop through an entire Python script with an Input function?
I have a very basic Blackjack simulator where I input whether I want to Hit or Stay. When I choose it, it then tells me the result. I want to run this over multiple times. Is there a function where after I get the result of the hand, it will restart from the top of the script? I am using Jupyter notebook and am currently just restarting and running all cells and then input my choice
[ "You can use a basic game play pattern such as the following to do what you want.\n# Function to request input and verify input type is valid\ndef getInput(prompt, respType= None):\n while True:\n resp = input(prompt)\n if respType == str or respType == None:\n break\n else:\n try:\n resp = respType(resp)\n break\n except ValueError:\n print('Invalid input, please try again')\n return resp \n\n# function to initiate game play and control game termination\ndef playgame():\n if getInput('Do you want to play? (y/n)').lower() == 'y':\n while True:\n game_input = getInput('Enter an Integer', int)\n # . . . Place your game logic here . . . #\n if getInput(\"Play Again? (y/n)\").lower() == 'n':\n break\n print('Good bye') \n\n" ]
[ 0 ]
[]
[]
[ "python" ]
stackoverflow_0074662285_python.txt
Q: How to make a function access to only one file? What I want I have two files: file.h and file.c. I want to define a function in file.h that is going to have its definition in file.c but I don't want the function to be included when I include file.h in other file. Example: file.h #ifndef HEADER_INCLUDED #define HEADER_INCLUDED // This function should be declared here and defined in "file.c" // but only used there (not in any other source file). int private_func(); // This function can be used every where (if included of course) // but it needs to be able to access the "private_func()". int public_func(); #endif // HEADER_INCLUDED file.c #include "file.h" int private_func() { int a = 2; int b = 3; return a + b; } int public_func() { // this functons uses the "private_func()" result to give its result int c = private_func(); int d = private_func(); return c + d; } #endif // HEADER_INCLUDED other.c This file should not import private_func() when including its header #include "file.h" int main() { // can call the "public_func()" int result1 = public_func(); // but cannot call "private_func()" int result2 = private_func(); return 0; } In short I don't want private_func() to be imported by a file other than "file.c". (if possible) A: You could use the pre-processor. In header: #ifndef HEADER_INCLUDED #define HEADER_INCLUDED #ifdef _USE_PRIVATE_ // This function should be declared here and defined in "file.c" // but only used there (not in any other source file). int private_func(); #endif // This function can be used every where (if included of course) // but it needs to be able to access the "private_func()". int public_func(); #endif // HEADER_INCLUDED In the implementation file: #define _USE_PRIVATE_ #include "file.h" int private_func() { int a = 2; int b = 3; return a + b; } int public_func() { // this functons uses the "private_func()" result to give its result int c = private_func(); int d = private_func(); return c + d; } #endif // HEADER_INCLUDED other.c would not change. Including the header without the special define would omit the private definition.
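A complementary sketch, separate from the preprocessor approach above: if the helper does not need to appear in file.h at all, C's static storage-class specifier gives it internal linkage, so it is visible only inside file.c and other.c cannot call it:
/* file.c */
#include "file.h"

/* internal linkage: not visible to any other translation unit */
static int private_func(void)
{
    return 2 + 3;
}

int public_func(void)
{
    return private_func() + private_func();
}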
How to make a function access to only one file?
What I want I have two files: file.h and file.c. I want to define a function in file.h that is going to have its definition in file.c but I don't want the function to be included when I include file.h in other file. Example: file.h #ifndef HEADER_INCLUDED #define HEADER_INCLUDED // This function should be declared here and defined in "file.c" // but only used there (not in any other source file). int private_func(); // This function can be used every where (if included of course) // but it needs to be able to access the "private_func()". int public_func(); #endif // HEADER_INCLUDED file.c #include "file.h" int private_func() { int a = 2; int b = 3; return a + b; } int public_func() { // this functons uses the "private_func()" result to give its result int c = private_func(); int d = private_func(); return c + d; } #endif // HEADER_INCLUDED other.c This file should not import private_func() when including its header #include "file.h" int main() { // can call the "public_func()" int result1 = public_func(); // but cannot call "private_func()" int result2 = private_func(); return 0; } In short I don't want private_func() to be imported by a file other than "file.c". (if possible)
[ "You could use the pre-processor. In header:\n#ifndef HEADER_INCLUDED\n#define HEADER_INCLUDED\n\n#ifdef _USE_PRIVATE_\n// This function should be declared here and defined in \"file.c\"\n// but only used there (not in any other source file).\nint private_func();\n#endif\n\n// This function can be used every where (if included of course)\n// but it needs to be able to access the \"private_func()\".\nint public_func();\n\n#endif // HEADER_INCLUDED\n\nIn the implementation file:\n#define _USE_PRIVATE_\n#include \"file.h\"\n\nint private_func()\n{\n int a = 2;\n int b = 3;\n\n return a + b;\n}\n\nint public_func()\n{\n // this functons uses the \"private_func()\" result to give its result\n int c = private_func();\n int d = private_func();\n\n return c + d; \n}\n\n#endif // HEADER_INCLUDED\n\nother.c would not change. Including the header without the special define would omit the private definition.\n" ]
[ 0 ]
[]
[]
[ "c", "include", "private", "scope" ]
stackoverflow_0074662694_c_include_private_scope.txt
Q: Count number of elements per nested field in Elastic Search I'm new with Elastic Search. I have documents in Elastic Search that contain nested fields like this: Document 1: "Volume": [ { "partition": "s1", "type": "west" } { "partition": "s2", "type": "south" } ] Document 2: "Volume": [ { "partition": "a2", "type": "north" } ] Document 3: "Volume": [ { "partition": "f3", "type": "north" } { "partition": "a1", "type": "south" } ] and so on. I need to count the number of "type" fields, so the expected result would be: "west": 1 "south": 2 "north":2 I used nested aggregation, like this: "size":0, "aggs": { "nested_properties": { "nested": { "path": "Volume" }, "aggs": { "count": { "cardinality": { "field": "Volume.type" } } } } } But the result is: "aggregations": { "nested_properies": { "doc_count": 123456, "count": { "value": 9 } } } How can I count the number of entries for each "type" subfield? A: You can use Term Aggregation. Like this: { "size": 0, "aggs": { "groups": { "nested": { "path": "Volume" }, "aggs": { "NAME": { "terms": { "field": "Volume.type.keyword", "size": 10 } } } } } } Response: "aggregations": { "groups": { "doc_count": 5, "NAME": { "doc_count_error_upper_bound": 0, "sum_other_doc_count": 0, "buckets": [ { "key": "north", "doc_count": 2 }, { "key": "south", "doc_count": 2 }, { "key": "west", "doc_count": 1 } ] } } }
Count number of elements per nested field in Elastic Search
I'm new with Elastic Search. I have documents in Elastic Search that contain nested fields like this: Document 1: "Volume": [ { "partition": "s1", "type": "west" } { "partition": "s2", "type": "south" } ] Document 2: "Volume": [ { "partition": "a2", "type": "north" } ] Document 3: "Volume": [ { "partition": "f3", "type": "north" } { "partition": "a1", "type": "south" } ] and so on. I need to count the number of "type" fields, so the expected result would be: "west": 1 "south": 2 "north":2 I used nested aggregation, like this: "size":0, "aggs": { "nested_properties": { "nested": { "path": "Volume" }, "aggs": { "count": { "cardinality": { "field": "Volume.type" } } } } } But the result is: "aggregations": { "nested_properies": { "doc_count": 123456, "count": { "value": 9 } } } How can I count the number of entries for each "type" subfield?
[ "You can use Term Aggregation.\nLike this:\n{\n \"size\": 0,\n \"aggs\": {\n \"groups\": {\n \"nested\": {\n \"path\": \"Volume\"\n },\n \"aggs\": {\n \"NAME\": {\n \"terms\": {\n \"field\": \"Volume.type.keyword\",\n \"size\": 10\n }\n }\n }\n }\n }\n}\n\nResponse:\n \"aggregations\": {\n \"groups\": {\n \"doc_count\": 5,\n \"NAME\": {\n \"doc_count_error_upper_bound\": 0,\n \"sum_other_doc_count\": 0,\n \"buckets\": [\n {\n \"key\": \"north\",\n \"doc_count\": 2\n },\n {\n \"key\": \"south\",\n \"doc_count\": 2\n },\n {\n \"key\": \"west\",\n \"doc_count\": 1\n }\n ]\n }\n }\n }\n\n" ]
[ 1 ]
[]
[]
[ "elasticsearch", "elasticsearch_aggregation", "nested" ]
stackoverflow_0074662055_elasticsearch_elasticsearch_aggregation_nested.txt
Q: How can I add Local Storage to an Angular App I've been beating my head againsed a wall trying to get this figured out so I figured id just ask. How do you add local storage to this? I've tried following several guides/templates but its just not making any sense to me. I get how to do it in JS and this type of implementation where its just storing in session makes sense as well but local storage is just fighting me. How would I modify the below to get it to store access delete and edit from local storage? export class TodoService { todos: Todo[] = [] constructor() { } getAllTodos() { return this.todos } addTodo(todo: Todo) { this.todos.push(todo) } updateTodo(index: number, updatedTodo: Todo) { this.todos[index] = updatedTodo } deleteTodo(index: number) { this.todos.splice(index, 1) } A: To add data to local storage in the Angular framework, you can use the localStorage property of the Window object. Here's an example: import { Injectable } from '@angular/core'; import { Window } from '@angular/platform-browser'; @Injectable({ providedIn: 'root' }) export class MyService { constructor(private window: Window) {} addItem(key: string, value: string) { this.window.localStorage.setItem(key, value); } removeItem(key: string) { this.window.localStorage.removeItem(key); } } In this example, we inject the Window service into our component, which allows us to access the localStorage property. We can then use the setItem() method to add data to local storage. You can also check to see if this will work: // set a value in local storage localStorage.setItem('myKey', 'myValue'); In this example, we use the setItem() method to set a value for the key myKey to myValue. To set arrays in local storage you can do: // Create an array const myArray = ['apple', 'banana', 'orange']; // Convert the array to a string const arrayString = JSON.stringify(myArray); // Set the string to local storage localStorage.setItem('myArray', arrayString); Then to refetch the array you do: // Get the string from local storage const arrayString = localStorage.getItem('myArray'); // Convert the string to an array const myArray = JSON.parse(arrayString); Remember that local storage can only store strings, so you need to convert your array to a string before saving it, and then convert it back to an array when you retrieve it. I hope this helps! Let me know if you have any other questions. -chatgpt
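A rough sketch of the same idea applied directly to the TodoService from the question (the 'todos' storage key is arbitrary and error handling is omitted): the service reads localStorage once on construction and re-saves after every mutation.
const STORAGE_KEY = 'todos';

export class TodoService {
  // load whatever was saved previously, or start with an empty list
  todos: Todo[] = JSON.parse(localStorage.getItem(STORAGE_KEY) ?? '[]');

  private persist() {
    localStorage.setItem(STORAGE_KEY, JSON.stringify(this.todos));
  }

  getAllTodos() { return this.todos; }

  addTodo(todo: Todo) { this.todos.push(todo); this.persist(); }

  updateTodo(index: number, updatedTodo: Todo) { this.todos[index] = updatedTodo; this.persist(); }

  deleteTodo(index: number) { this.todos.splice(index, 1); this.persist(); }
}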
How can I add Local Storage to an Angular App
I've been beating my head againsed a wall trying to get this figured out so I figured id just ask. How do you add local storage to this? I've tried following several guides/templates but its just not making any sense to me. I get how to do it in JS and this type of implementation where its just storing in session makes sense as well but local storage is just fighting me. How would I modify the below to get it to store access delete and edit from local storage? export class TodoService { todos: Todo[] = [] constructor() { } getAllTodos() { return this.todos } addTodo(todo: Todo) { this.todos.push(todo) } updateTodo(index: number, updatedTodo: Todo) { this.todos[index] = updatedTodo } deleteTodo(index: number) { this.todos.splice(index, 1) }
[ "To add data to local storage in the Angular framework, you can use the localStorage property of the Window object. Here's an example:\nimport { Injectable } from '@angular/core';\nimport { Window } from '@angular/platform-browser';\n\n@Injectable({\n providedIn: 'root'\n})\nexport class MyService {\n constructor(private window: Window) {}\n\n addItem(key: string, value: string) {\n this.window.localStorage.setItem(key, value);\n }\n\n removeItem(key: string) {\n this.window.localStorage.removeItem(key);\n }\n}\n\n\nIn this example, we inject the Window service into our component, which allows us to access the localStorage property. We can then use the setItem() method to add data to local storage.\nYou can also check to see if this will work:\n// set a value in local storage\nlocalStorage.setItem('myKey', 'myValue');\n\nIn this example, we use the setItem() method to set a value for the key myKey to myValue.\nTo set arrays in local storage you can do:\n// Create an array\nconst myArray = ['apple', 'banana', 'orange'];\n\n// Convert the array to a string\nconst arrayString = JSON.stringify(myArray);\n\n// Set the string to local storage\nlocalStorage.setItem('myArray', arrayString);\n\nThen to refetch the array you do:\n// Get the string from local storage\nconst arrayString = localStorage.getItem('myArray');\n\n// Convert the string to an array\nconst myArray = JSON.parse(arrayString);\n\n\nRemember that local storage can only store strings, so you need to convert your array to a string before saving it, and then convert it back to an array when you retrieve it.\nI hope this helps! Let me know if you have any other questions.\n-chatgpt\n" ]
[ 0 ]
[]
[]
[ "angular", "crud", "local_storage", "typescript" ]
stackoverflow_0074662840_angular_crud_local_storage_typescript.txt
Q: How to stream bytes from std::ostream to std::vector? I'm looking for a way to stream data similar to std::ostringstream but for a vector of bytes instead of std::string. Zeroes are allowed as bytes. What is the most elegant way to do this in STL? A: std::vector<uint8_t> vec = { 1, 2, 3, 4 }; std::copy(vec.cbegin(), vec.cend(), std::ostream_iterator<uint8_t>(std::cout, " ")); Be careful : the stream will probably interpret the values as characters, and so will use the character overloading of the stream operator. If you want to print the integer values, templatize std::ostream_iterator with an other kind of int. A: Not elegant at all (ugly is too kind), but works for me at the moment. I have some testing code that needs to check if the correct (binary) output is written to a stream. I routed the output to a temp file and read that fle back into a vector: std::ofstream outfile( "temp.hex", std::ios_base::binary | std::ios_base::out ); prog.write_binary( outfile ); outfile.close(); std::ifstream result_file("temp.hex", std::ios_base::binary | std::ios_base::in ); std::istream_iterator<uint8_t> file_end; std::istream_iterator<uint8_t> file_begin(result_file); std::vector<uint8_t> actual; std::copy( file_begin, file_end, std::back_inserter(actual) ); std::remove( "temp.hex" ); Now the result is in the "actual" vector and is ready to be compared with my "expected" vector.
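For completeness, a sketch of a custom std::streambuf approach (not taken from the answers above; C++11, no error handling, class and member names are made up): a streambuf that appends every written byte, zeros included, to a std::vector<char>, so any code taking a std::ostream& can target the vector.
#include <ostream>
#include <streambuf>
#include <vector>

class vector_buf : public std::streambuf {
public:
    std::vector<char> data;
protected:
    // single-character writes land here when no buffer is installed
    int_type overflow(int_type ch) override {
        if (ch != traits_type::eof())
            data.push_back(static_cast<char>(ch));
        return ch;
    }
    // bulk writes (e.g. ostream::write) land here
    std::streamsize xsputn(const char* s, std::streamsize n) override {
        data.insert(data.end(), s, s + n);
        return n;
    }
};

// usage sketch:
//   vector_buf buf;
//   std::ostream out(&buf);
//   out.write("\x00\x01\x02", 3);   // buf.data now holds the three bytes, including the zero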
How to stream bytes from std::ostream to std::vector?
I'm looking for a way to stream data similar to std::ostringstream but for a vector of bytes instead of std::string. Zeroes are allowed as bytes. What is the most elegant way to do this in STL?
[ " std::vector<uint8_t> vec = { 1, 2, 3, 4 };\n std::copy(vec.cbegin(), vec.cend(), std::ostream_iterator<uint8_t>(std::cout, \" \"));\n\nBe careful : the stream will probably interpret the values as characters, and so will use the character overloading of the stream operator. If you want to print the integer values, templatize std::ostream_iterator with an other kind of int.\n", "Not elegant at all (ugly is too kind), but works for me at the moment.\nI have some testing code that needs to check if the correct (binary) output is written to a stream. I routed the output to a temp file and read that fle back into a vector:\nstd::ofstream outfile( \"temp.hex\", std::ios_base::binary | std::ios_base::out );\nprog.write_binary( outfile );\noutfile.close();\n\nstd::ifstream result_file(\"temp.hex\", std::ios_base::binary | std::ios_base::in );\nstd::istream_iterator<uint8_t> file_end;\nstd::istream_iterator<uint8_t> file_begin(result_file);\n\nstd::vector<uint8_t> actual;\n\nstd::copy( file_begin, file_end, std::back_inserter(actual) );\n\nstd::remove( \"temp.hex\" );\n\nNow the result is in the \"actual\" vector and is ready to be compared with my \"expected\" vector.\n" ]
[ 0, 0 ]
[]
[]
[ "c++", "stl" ]
stackoverflow_0056374450_c++_stl.txt
Q: Skaffold: AMD64 dev machine (Mac), remote cluster ARM64 is picking AMD64 images Following Building multi-architecture docker images with Skaffold, I've been able to successfully continue building my multi-architecture (AMD64 and ARM64) images. However, it looks like the kubernetes cluster ends up pulling the AMD64 image, as I'm seeing: standard_init_linux.go:211: exec user process caused "exec format error" in the logs. I've looked at https://skaffold.dev/docs/references/yaml/ but that didn't appear to shed any light on how I can ensure it uses the correct architecture. Thanks in advance. A: Skaffold v2.0.0 and beyond now has explicit support for cross-platform and multi-platform builds. See the relevant docs here: https://skaffold.dev/docs/workflows/handling-platforms/
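As a sketch only — the YAML field name below is recalled from the Skaffold v2 schema rather than quoted from the answer, so verify it against the handling-platforms page linked above — the idea is to declare the target platforms in skaffold.yaml (or pass a --platform flag on the CLI) so the ARM64 image variant is built and resolved for the remote cluster:
# fragment of skaffold.yaml (apiVersion/kind omitted; use the schema version your Skaffold v2 expects)
build:
  platforms: ["linux/amd64", "linux/arm64"]   # assumed field name, per the v2 docs
  artifacts:
    - image: my-image                         # placeholder image name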
Skaffold: AMD64 dev machine (Mac), remote cluster ARM64 is picking AMD64 images
Following Building multi-architecture docker images with Skaffold, I've been able to successfully continue building my multi-architecture (AMD64 and ARM64) images. However, it looks like the kubernetes cluster ends up pulling the AMD64 image, as I'm seeing: standard_init_linux.go:211: exec user process caused "exec format error" in the logs. I've looked at https://skaffold.dev/docs/references/yaml/ but that didn't appear to shed any light on how I can ensure it uses the correct architecture. Thanks in advance.
[ "Skaffold v2.0.0 and beyond now has explicit support for cross-platform and multi-platform builds. See the relevant docs here:\nhttps://skaffold.dev/docs/workflows/handling-platforms/\n" ]
[ 0 ]
[]
[]
[ "arm64", "docker", "kubernetes", "pi", "skaffold" ]
stackoverflow_0061724748_arm64_docker_kubernetes_pi_skaffold.txt
Q: How to use useHistory in MUI Datagrid rendercell I am trying to pass data to another page when a user clicks on a button in MUI Datagrid. I am using useHistory from react-router-dom, but the issue I am facing is how to implement useHistory. How would I get useHistory to work in my function handleEdit since the error shows up as "handleEdit which is neither a React function component or a custom React Hook function". Table Component import React from "react"; import { useHistory } from "react-router-dom"; import { DataGrid} from "@mui/x-data-grid"; import EditIcon from "@mui/icons-material/Edit"; const columns = [ { field: "id", headerName: "Id", }, { field: "actions", headerName: "Actions", width: 110, renderCell: (params) => ( <EditIcon onClick={() => handleEdit(params)} /> ) } ] function handleEdit(params) { const history = useHistory(); history.push({ pathname: "/home/Search", state: params }) } export default function Table({ rows }) { return ( <DataGrid rows={rows} columns={columns} /> ) } A: To use useHistory in your handleEdit function, you will need to move the useHistory hook to the top level of your component and pass the history object as an argument to handleEdit. Alternatively, you could use an inline arrow function to pass the history object as an argument to handleEdit. In either case, you will need to ensure that the useHistory hook is called within the same component or a child component of the component that renders the EditIcon component.
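A minimal sketch of the first suggestion (imports as in the question; react-router-dom v5 assumed, since that is where useHistory lives): call the hook at the top level of the component and build the columns inside it so the cell renderer can close over history.
export default function Table({ rows }) {
  const history = useHistory();   // hook called at component top level

  const columns = [
    { field: "id", headerName: "Id" },
    {
      field: "actions",
      headerName: "Actions",
      width: 110,
      renderCell: (params) => (
        <EditIcon
          onClick={() => history.push({ pathname: "/home/Search", state: params })}
        />
      ),
    },
  ];

  return <DataGrid rows={rows} columns={columns} />;
}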
How to use useHistory in MUI Datagrid rendercell
I am trying to pass data to another page when a user clicks on a button in MUI Datagrid. I am using useHistory from react-router-dom, but the issue I am facing is how to implement useHistory. How would I get useHistory to work in my function handleEdit since the error shows up as "handleEdit which is neither a React function component or a custom React Hook function". Table Component import React from "react"; import { useHistory } from "react-router-dom"; import { DataGrid} from "@mui/x-data-grid"; import EditIcon from "@mui/icons-material/Edit"; const columns = [ { field: "id", headerName: "Id", }, { field: "actions", headerName: "Actions", width: 110, renderCell: (params) => ( <EditIcon onClick={() => handleEdit(params)} /> ) } ] function handleEdit(params) { const history = useHistory(); history.push({ pathname: "/home/Search", state: params }) } export default function Table({ rows }) { return ( <DataGrid rows={rows} columns={columns} /> ) }
[ "To use useHistory in your handleEdit function, you will need to move the useHistory hook to the top level of your component and pass the history object as an argument to handleEdit.\nAlternatively, you could use an inline arrow function to pass the history object as an argument to handleEdit.\nIn either case, you will need to ensure that the useHistory hook is called within the same component or a child component of the component that renders the EditIcon component.\n" ]
[ 0 ]
[]
[]
[ "material_ui", "mui_datatable", "react_hooks", "reactjs" ]
stackoverflow_0074661990_material_ui_mui_datatable_react_hooks_reactjs.txt
Q: How to update in Vue3 component content based on reponse data? Recently I've been working on filters in my service for booking hotel rooms in .NET + Vue3. Backend method for filtering works fine, but I don't have clue how to force component to update its content using fetched data. Im reciving data in format like this: enter image description here Here are my script and component files: Filters component: <template> <div class="container"> <div class="d-flex align-items-center"> <label for="first_day" class="p-2">First day: </label> <input type="date" name="first_day" v-model="filtersOptions.FirstDay" /> <label for="last_day" class="p-2">Last day: </label> <input type="date" name="last_day" v-model="filtersOptions.LastDay"/> <button type="submit" class="m-2 p-2" v-on:click="fetchFilteredRooms()">Search</button> </div> </div> </template> <script lang="ts"> import { useFilters } from '@/composables/useFilters'; export default { setup(props: any, context: any) { const { filtersOptions, fetchFilteredRooms } = useFilters(); return { filtersOptions, fetchFilteredRooms, } } } </script> Filters script: import { ref } from 'vue'; import Consts from "@/consts"; import { useRooms } from './useRooms'; class FiltersOptions { FirstDay: any; LastDay: any; }; const { Rooms } = useRooms(); export const useFilters = () => { const filtersOptions = ref<any>(new FiltersOptions()); async function fetchFilteredRooms() { const filterRoomsAPI = Consts.API.concat(`rooms/search`) const headers = { 'Content-type': 'application/json; charset=UTF-8', 'Access-Control-Allow-Methods': 'POST', 'Access-Control-Allow-Origin': `${filterRoomsAPI}` } fetch(filterRoomsAPI, { method: 'POST', mode: 'cors', credentials: 'same-origin', body: JSON.stringify(filtersOptions._value), headers }) .then(response => response.json()) .then((data) => (Rooms.value = data)) .catch(error => console.error(error)); } return { Rooms, filtersOptions, fetchFilteredRooms, } } Rooms component: import { ref } from 'vue'; import Consts from "@/consts"; import { useRooms } from './useRooms'; class FiltersOptions { FirstDay: any; LastDay: any; }; const { Rooms } = useRooms(); export const useFilters = () => { const filtersOptions = ref<any>(new FiltersOptions()); async function fetchFilteredRooms() { const filterRoomsAPI = Consts.API.concat(`rooms/search`) const headers = { 'Content-type': 'application/json; charset=UTF-8', 'Access-Control-Allow-Methods': 'POST', 'Access-Control-Allow-Origin': `${filterRoomsAPI}` } fetch(filterRoomsAPI, { method: 'POST', mode: 'cors', credentials: 'same-origin', body: JSON.stringify(filtersOptions._value), headers }) .then(response => response.json()) .then((data) => (Rooms.value = data)) .catch(error => console.error(error)); } return { Rooms, filtersOptions, fetchFilteredRooms, } } Rooms script: import { ref } from 'vue'; import Consts from "@/consts" const headers = { 'Content-type': 'application/json; charset=UTF-8', 'Access-Control-Allow-Methods': 'GET', 'Access-Control-Allow-Origin': `${Consts.RoomsAPI}` } export function useRooms() { const Rooms = ref([]); async function fetchRooms() { fetch(Consts.RoomsAPI, { headers }) .then(response => response.json()) .then((data) => (Rooms.value = data)) .catch(error => console.log(error)); } return { Rooms, fetchRooms, }; } Any idea how to deal with it? A: In Vue3, you can use the $set method to update a reactive property's value. You can use it in the then callback of your fetch request to update the Rooms property in your useRooms composable. 
Here's an example of how you can update your fetchFilteredRooms method to do that: import { ref } from 'vue'; export const useRooms = () => { const Rooms = ref<any>([]); async function fetchFilteredRooms() { // your existing code here .then((data) => { // use $set to update the Rooms property Rooms.$set(data); }) .catch(error => console.error(error)); } return { Rooms, fetchFilteredRooms, } } After updating the value of the Rooms property, the component that uses it should update automatically to reflect the new data. To make sure that the component updates correctly when the Rooms property changes, you can use the setup function in your component to create a reactive property that tracks the value of the Rooms property. Here's an example of how you can do that in your Rooms component: import { ref } from 'vue'; import { useRooms } from './useRooms'; export default { setup() { const { Rooms, fetchFilteredRooms } = useRooms(); // create a reactive property that tracks the value of the Rooms property const rooms = ref(Rooms.value); return { rooms, fetchFilteredRooms, } } } Now, in your component's template, you can use the rooms property to render the data. Here's an example of how you can do that: <template> <div class="container"> <div v-for="room in rooms" :key="room.id"> {{ room.name }} </div> </div> </template> <script> import { ref } from 'vue'; import { useRooms } from './useRooms'; export default { setup() { const { Rooms, fetchFilteredRooms } = useRooms(); const rooms = ref(Rooms.value); return { rooms, fetchFilteredRooms, } } } </script> With this change, your component should update automatically when the Rooms property changes.
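For comparison, a minimal sketch using plain .value assignment — in Vue 3 a ref has no $set method (that helper belongs to Vue 2's Vue.set), and copying Rooms.value into a new ref in setup() would not track later updates, so the shared ref can be returned to the template directly. The /rooms/search endpoint and header are simplified from the question:
// useRooms.js (sketch)
import { ref } from 'vue';

const Rooms = ref([]);   // shared, module-level state

export function useRooms() {
  async function fetchFilteredRooms(filters) {
    const response = await fetch('/rooms/search', {
      method: 'POST',
      headers: { 'Content-type': 'application/json; charset=UTF-8' },
      body: JSON.stringify(filters),
    });
    Rooms.value = await response.json();   // plain assignment triggers reactivity
  }
  return { Rooms, fetchFilteredRooms };
}

// Rooms component (sketch)
export default {
  setup() {
    const { Rooms, fetchFilteredRooms } = useRooms();
    return { rooms: Rooms, fetchFilteredRooms };   // expose the shared ref itself
  },
};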
How to update in Vue3 component content based on reponse data?
Recently I've been working on filters in my service for booking hotel rooms in .NET + Vue3. Backend method for filtering works fine, but I don't have clue how to force component to update its content using fetched data. Im reciving data in format like this: enter image description here Here are my script and component files: Filters component: <template> <div class="container"> <div class="d-flex align-items-center"> <label for="first_day" class="p-2">First day: </label> <input type="date" name="first_day" v-model="filtersOptions.FirstDay" /> <label for="last_day" class="p-2">Last day: </label> <input type="date" name="last_day" v-model="filtersOptions.LastDay"/> <button type="submit" class="m-2 p-2" v-on:click="fetchFilteredRooms()">Search</button> </div> </div> </template> <script lang="ts"> import { useFilters } from '@/composables/useFilters'; export default { setup(props: any, context: any) { const { filtersOptions, fetchFilteredRooms } = useFilters(); return { filtersOptions, fetchFilteredRooms, } } } </script> Filters script: import { ref } from 'vue'; import Consts from "@/consts"; import { useRooms } from './useRooms'; class FiltersOptions { FirstDay: any; LastDay: any; }; const { Rooms } = useRooms(); export const useFilters = () => { const filtersOptions = ref<any>(new FiltersOptions()); async function fetchFilteredRooms() { const filterRoomsAPI = Consts.API.concat(`rooms/search`) const headers = { 'Content-type': 'application/json; charset=UTF-8', 'Access-Control-Allow-Methods': 'POST', 'Access-Control-Allow-Origin': `${filterRoomsAPI}` } fetch(filterRoomsAPI, { method: 'POST', mode: 'cors', credentials: 'same-origin', body: JSON.stringify(filtersOptions._value), headers }) .then(response => response.json()) .then((data) => (Rooms.value = data)) .catch(error => console.error(error)); } return { Rooms, filtersOptions, fetchFilteredRooms, } } Rooms component: import { ref } from 'vue'; import Consts from "@/consts"; import { useRooms } from './useRooms'; class FiltersOptions { FirstDay: any; LastDay: any; }; const { Rooms } = useRooms(); export const useFilters = () => { const filtersOptions = ref<any>(new FiltersOptions()); async function fetchFilteredRooms() { const filterRoomsAPI = Consts.API.concat(`rooms/search`) const headers = { 'Content-type': 'application/json; charset=UTF-8', 'Access-Control-Allow-Methods': 'POST', 'Access-Control-Allow-Origin': `${filterRoomsAPI}` } fetch(filterRoomsAPI, { method: 'POST', mode: 'cors', credentials: 'same-origin', body: JSON.stringify(filtersOptions._value), headers }) .then(response => response.json()) .then((data) => (Rooms.value = data)) .catch(error => console.error(error)); } return { Rooms, filtersOptions, fetchFilteredRooms, } } Rooms script: import { ref } from 'vue'; import Consts from "@/consts" const headers = { 'Content-type': 'application/json; charset=UTF-8', 'Access-Control-Allow-Methods': 'GET', 'Access-Control-Allow-Origin': `${Consts.RoomsAPI}` } export function useRooms() { const Rooms = ref([]); async function fetchRooms() { fetch(Consts.RoomsAPI, { headers }) .then(response => response.json()) .then((data) => (Rooms.value = data)) .catch(error => console.log(error)); } return { Rooms, fetchRooms, }; } Any idea how to deal with it?
[ "In Vue3, you can use the $set method to update a reactive property's value. You can use it in the then callback of your fetch request to update the Rooms property in your useRooms composable. Here's an example of how you can update your fetchFilteredRooms method to do that:\nimport { ref } from 'vue';\n\nexport const useRooms = () => {\n const Rooms = ref<any>([]);\n\n async function fetchFilteredRooms() {\n // your existing code here\n .then((data) => {\n // use $set to update the Rooms property\n Rooms.$set(data);\n })\n .catch(error => console.error(error));\n }\n\n return {\n Rooms,\n fetchFilteredRooms,\n }\n}\n\nAfter updating the value of the Rooms property, the component that uses it should update automatically to reflect the new data.\nTo make sure that the component updates correctly when the Rooms property changes, you can use the setup function in your component to create a reactive property that tracks the value of the Rooms property. Here's an example of how you can do that in your Rooms component:\nimport { ref } from 'vue';\nimport { useRooms } from './useRooms';\n\nexport default {\n setup() {\n const { Rooms, fetchFilteredRooms } = useRooms();\n // create a reactive property that tracks the value of the Rooms property\n const rooms = ref(Rooms.value);\n\n return {\n rooms,\n fetchFilteredRooms,\n }\n }\n}\n\nNow, in your component's template, you can use the rooms property to render the data. Here's an example of how you can do that:\n<template>\n <div class=\"container\">\n <div v-for=\"room in rooms\" :key=\"room.id\">\n {{ room.name }}\n </div>\n </div>\n</template>\n\n<script>\nimport { ref } from 'vue';\nimport { useRooms } from './useRooms';\n\nexport default {\n setup() {\n const { Rooms, fetchFilteredRooms } = useRooms();\n const rooms = ref(Rooms.value);\n\n return {\n rooms,\n fetchFilteredRooms,\n }\n }\n}\n</script>\n\nWith this change, your component should update automatically when the Rooms property changes.\n" ]
[ 0 ]
[]
[]
[ "asp.net", "vue.js", "vue_component", "vuejs3" ]
stackoverflow_0074590151_asp.net_vue.js_vue_component_vuejs3.txt
Q: How do I create rotating animation for background image within container without rotating entire container? I am very new to HTML and CSS and have been stuck on this problem for a while. Ideally I'm looking for a solution that is CSS only, but I can try a JavaScript solution if I need to. My code is probably very badly written so please forgive me. I am trying to create a rotating banner animation effect for my personal website. I have a container that has a background image of a colour wheel, and I have some divs within this container that hold my logo and a subtitle. The colour wheel image is just a large circle. I am looking to rotate this image without it rotating the whole container. I have tried everything from this post but this just rotates the image once where as I would like to rotate the image continuously: How to rotate the background image in the container? I have also tried this, which has the animation aspect but also rotates the whole container: https://www.sitepoint.com/community/t/rotate-background-image-constantly/251925/3 Here is my code: HTML: <section id="banner"> <div class= "banner-container"> <div class="row"> <div class="col text-center"> <img src="images/BenMillerType.png" class="logo"/> </div> </div> <div class="row justify-content-center align-items-center"> <div class="col-md-10"> <p class="promo-title text-center"></p> <p class="promo-subtitle text-center"> Graphic Design | 3D Design | UI/UX Design </p> </div> </div> </div> <div id="work"></div> </section> CSS: #banner { margin-bottom: 100px; background-color: #FFFDC4 ; border-bottom: 1px solid black; } .banner-container{ overflow: hidden; width: 100%; height: 95%; margin: 0px; background-image: url(/images/ColourWheel.png); background-position: center; margin:0 !important; } .promo-subtitle { font-size: 20px; background-color: rgb(42, 156, 157) !important; color: #fff; border-radius: 30px; border: 1px solid black; padding: 10px 20px; position: absolute; bottom: 100px; left: 50%; transform: translate(-50%, -50%); -ms-transform: translate(-50%, -50%); } .logo { margin-top: 250px; object-fit:contain; width: 500px; height: auto; z-index: 2 !; } A: You need to separate the logo part from the color wheel part so you can access only the part you want to animate... Once you have done this. You can add an animation to the container you want to rotate.. #test { width: 100%; height: 300px; background: url('https://i.stack.imgur.com/xtolD.png'); background-size:cover; } @keyframes rotating { from{ transform: rotate(0deg); } to{ transform: rotate(360deg); } } .rotating { animation: rotating 2s linear infinite; } <div id='test' class='rotating'></div>
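One way to apply the answer's "separate the layers" idea without adding markup is to paint the wheel on an absolutely positioned ::before pseudo-element of the existing banner container and rotate only that layer — a sketch that reuses the rotating keyframes defined above; the 20s duration and contain sizing are arbitrary choices:
.banner-container {
  position: relative;
  overflow: hidden;
}
.banner-container::before {
  content: "";
  position: absolute;
  inset: 0;                                   /* cover the container */
  background: url(/images/ColourWheel.png) center / contain no-repeat;
  animation: rotating 20s linear infinite;    /* keyframes from the answer above */
  z-index: 0;
}
.banner-container > * {
  position: relative;
  z-index: 1;   /* keep the logo and subtitle above the wheel */
}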
How do I create rotating animation for background image within container without rotating entire container?
I am very new to HTML and CSS and have been stuck on this problem for a while. Ideally I'm looking for a solution that is CSS only, but I can try a JavaScript solution if I need to. My code is probably very badly written so please forgive me. I am trying to create a rotating banner animation effect for my personal website. I have a container that has a background image of a colour wheel, and I have some divs within this container that hold my logo and a subtitle. The colour wheel image is just a large circle. I am looking to rotate this image without it rotating the whole container. I have tried everything from this post but this just rotates the image once where as I would like to rotate the image continuously: How to rotate the background image in the container? I have also tried this, which has the animation aspect but also rotates the whole container: https://www.sitepoint.com/community/t/rotate-background-image-constantly/251925/3 Here is my code: HTML: <section id="banner"> <div class= "banner-container"> <div class="row"> <div class="col text-center"> <img src="images/BenMillerType.png" class="logo"/> </div> </div> <div class="row justify-content-center align-items-center"> <div class="col-md-10"> <p class="promo-title text-center"></p> <p class="promo-subtitle text-center"> Graphic Design | 3D Design | UI/UX Design </p> </div> </div> </div> <div id="work"></div> </section> CSS: #banner { margin-bottom: 100px; background-color: #FFFDC4 ; border-bottom: 1px solid black; } .banner-container{ overflow: hidden; width: 100%; height: 95%; margin: 0px; background-image: url(/images/ColourWheel.png); background-position: center; margin:0 !important; } .promo-subtitle { font-size: 20px; background-color: rgb(42, 156, 157) !important; color: #fff; border-radius: 30px; border: 1px solid black; padding: 10px 20px; position: absolute; bottom: 100px; left: 50%; transform: translate(-50%, -50%); -ms-transform: translate(-50%, -50%); } .logo { margin-top: 250px; object-fit:contain; width: 500px; height: auto; z-index: 2 !; }
[ "You need to separate the logo part from the color wheel part so you can access only the part you want to animate...\nOnce you have done this. You can add an animation to the container you want to rotate..\n\n\n #test {\n width: 100%;\n height: 300px;\n background: url('https://i.stack.imgur.com/xtolD.png');\n background-size:cover;\n }\n\n @keyframes rotating {\n from{\n transform: rotate(0deg);\n }\n to{\n transform: rotate(360deg);\n }\n }\n\n .rotating {\n animation: rotating 2s linear infinite;\n }\n<div id='test' class='rotating'></div>\n\n\n\n" ]
[ 0 ]
[]
[]
[ "css", "html", "javascript" ]
stackoverflow_0074662614_css_html_javascript.txt
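A CSS-only sketch of the separation the answer above describes, applied to the asker's markup: the wheel is moved onto a ::before pseudo-element of .banner-container and only that layer is animated, so the logo and subtitle stacked on top never rotate. The image path is the one from the question; the 20s duration is an arbitrary choice.

    .banner-container {
      position: relative;
      overflow: hidden;
    }

    /* the colour wheel lives on its own layer behind the content */
    .banner-container::before {
      content: "";
      position: absolute;
      top: 0;
      left: 0;
      width: 100%;
      height: 100%;
      background: url(/images/ColourWheel.png) center / contain no-repeat;
      animation: spin 20s linear infinite;
    }

    /* keep the real content above the rotating layer */
    .banner-container > .row {
      position: relative;
      z-index: 1;
    }

    @keyframes spin {
      from { transform: rotate(0deg); }
      to   { transform: rotate(360deg); }
    }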
Q: What rules are there for qualifiers of effective type? So I was re-reading C17 6.5/6 - 6.5/7 regarding effective type and strict aliasing, but couldn't figure out how to treat qualifiers. Some things confuse me: I always assumed that qualifiers aren't really relevant for effective type since the rules speak of lvalue access, meaning lvalue conversion that discards qualifiers. But what if the object is a pointer? Qualifiers to the pointed-at data aren't affected by lvalue conversion. Q1: What if the effective type is a pointer to qualified-type? Can I lvalue access it as a non-qualified pointer to the same type? Where in the standard is this stated? The exceptions to the strict aliasing rule mention qualifiers in these cases: — a qualified version of a type compatible with the effective type of the object, — a type that is the signed or unsigned type corresponding to the effective type of the object, — a type that is the signed or unsigned type corresponding to a qualified version of the effective type of the object, None of these address qualifiers of the effective type itself, only by the lvalue used for access. Which should be quite irrelevant, because of lvalue conversion... right? Q2: Does lvalue conversion happen before or after the above quoted rules of effective type/strict aliasing are applied? Q3: Does the effective type come with qualifiers or not? Where in the standard is this stated? A: "Qualified type" being a defined term, the definition is potentially relevant: Any type so far mentioned is an unqualified type. Each unqualified type has several qualified versions of its type, corresponding to the combinations of one, two, or all three of the const, volatile, and restrict qualifiers. The qualified or unqualified versions of a type are distinct types that belong to the same type category and have the same representation and alignment requirements. A derived type is not qualified by the qualifiers (if any) of the type from which it is derived. (C17 6.2.5/26) I note that the _Atomic keyword is different from the other three categorized as type qualifiers, and I presume that this is related to the fact that atomic types are not required to have the same representation or alignment requirements as their corresponding non-atomic types. I also note that the specification is explicit that qualified and unqualified versions of a type are different types. With that background, Q1: What if the effective type is a pointer to qualified-type? Can I lvalue access it as a non-qualified pointer to the same type? Where in the standard is this stated? I take you to mean this: const uint32_t *x = &some_uint32; uint32_t * y = *(uint32_t **) &x; The effective type of x is const uint32_t * (an unqualified pointer to const-qualified uint32_t), and it is being accessed via an lvalue of type uint32_t * (an unqualified pointer to unqualified uint32_t). This combination is not among the exceptions allowed by the language spec. In particular, uint32_t * is not a qualified version of a const uint32_t *. The resulting behavior is therefore undefined, as specified in C17 6.5, paragraphs 6 and 7. Although the standard does not discuss this particular application of the SAR, I take it to be justified indirectly. The issue in cases such as this is not so much about accessing the pointer value itself as about producing a pointer whose type discards qualifiers of the pointed-to type. 
Note also that the SAR does allow this variation: const uint32_t *x = &some_uint32; const uint32_t * const y = *(const uint32_t * const *) &x; , as const uint32_t * const is a qualified version of const uint32_t *. Q2: Does lvalue conversion happen before or after the above quoted rules of effective type/strict aliasing are applied? I don't see how lvalue conversion could be construed to apply before strict aliasing. The strict aliasing rule is expressed in terms of the lvalues used for accessing objects, and the result of lvalue conversion is not an lvalue. Additionally, as @EricPostpischil observed, the SAR applies to all accesses, which include writes. There is no lvalue conversion in the first place for an lvalue that is being written. Q3: Does the effective type come with qualifiers or not? Where in the standard is this stated? Qualified and unqualified versions of a type are different types. I see no justification for interpreting the paragraph 6.5/6's "the declared type of the object" or "the type of the lvalue" as if the type were supposed to be considered stripped of its qualifiers, much less as if all qualifiers in the type(s) from which it is derived were stripped. The words "the type" mean what they say. A: Q3: Does the effective type come with qualifiers or not? Where in the standard is this stated? The effective type includes qualifiers (or lack thereof) because the rules about effective type say that a type is used, and types include qualifiers, and the rules about effective type do not say the qualifiers are disregarded. C 2018 6.5 6 says the effective type of an object for access to its stored value is one of: “the declared type of the object” (if any), “the type of the lvalue” previously used to store into it (if that is not a character type), “the effective type of the object from which the value is copied” (if it was copied by a byte-copy method and the source has an effective type), or “the type of the lvalue used for the access.” The third of these is recursive, so it leads to one of the others. The others all say the effective type is some type, and they do not say the effective type is the unqualified version of that type. It simply is that type; the qualifiers are not removed. Q2: Does lvalue conversion happen before or after the above quoted rules of effective type/strict aliasing are applied? Lvalue conversion is immaterial. The aliasing rules in C 2018 6.5 7 make no mention of lvalue conversion, and it might not occur at all, since the rules apply to both reading and modifying values. (The rules in 6.5 7 are for when a stored value is “accessed,” and “access” in the C standard means reading or modifying, per 3.1.) When an object is modified, a new value is written into it; there is no lvalue conversion. When an object is read, the aliasing rules apply to that access, and lvalue conversion happens afterward, as a separate thing. Q1: What if the effective type is a pointer to qualified-type? Can I lvalue access it as a non-qualified pointer to the same type? Where in the standard is this stated? The phrasing of these sentences do not make sense in this context. I will consider two meanings for them. First, I take the first sentence as it stands and the second question as “Can I lvalue access it as a pointer to the unqualified version of the effective type?” Although I suspect my second interpretation below is the one that was intended, this one involves less change to the text. 
The answer is the C standard does not define the behavior because it does not conform to the rule in 6.5 7. Given const char *p;, p is a pointer to a qualified type. Then, after, char **q = (char **) &p;, *q is a pointer to an unqualified type. Using *q to read or to modify p would not conform to the rule in 6.5 7. When we consider accessing p with *q, then as we see above, the effective type of the object is const char *, the type of the lvalue is char *, and none of the cases in 6.5 7 say a const char * may be accessed as a char *. Second, I take the sentences as “What if the effective type is a qualified type? Can I lvalue access it as an unqualified version of the same type?” Again, the answer is the C standard does not define the behavior because it does not conform to the rule in 6.5 7. Given const int p = 3;, p has a qualified type. Then, after int *q = (int *) &p;, *q has the unqualified version of the same type. When we consider accessing p with *q, the effective type of the object is const int, and the type of the lvalue is int, and none of the cases in 6.5 7 say a const int may be accessed as an int. None of these address qualifiers of the effective type itself, only by the lvalue used for access. Which should be quite irrelevant, because of lvalue conversion... right? No, the qualifiers of the effective type are relevant. lvalue conversion, if it occurs, does not make them irrelevant. 6.5 7 states requirements for the lvalue type with relation to the effective type, and the qualifiers of each are parts of their types and partake in the rule in 6.5 7.
What rules are there for qualifiers of effective type?
So I was re-reading C17 6.5/6 - 6.5/7 regarding effective type and strict aliasing, but couldn't figure out how to treat qualifiers. Some things confuse me: I always assumed that qualifiers aren't really relevant for effective type since the rules speak of lvalue access, meaning lvalue conversion that discards qualifiers. But what if the object is a pointer? Qualifiers to the pointed-at data aren't affected by lvalue conversion. Q1: What if the effective type is a pointer to qualified-type? Can I lvalue access it as a non-qualified pointer to the same type? Where in the standard is this stated? The exceptions to the strict aliasing rule mention qualifiers in these cases: — a qualified version of a type compatible with the effective type of the object, — a type that is the signed or unsigned type corresponding to the effective type of the object, — a type that is the signed or unsigned type corresponding to a qualified version of the effective type of the object, None of these address qualifiers of the effective type itself, only by the lvalue used for access. Which should be quite irrelevant, because of lvalue conversion... right? Q2: Does lvalue conversion happen before or after the above quoted rules of effective type/strict aliasing are applied? Q3: Does the effective type come with qualifiers or not? Where in the standard is this stated?
[ "\"Qualified type\" being a defined term, the definition is potentially relevant:\n\nAny type so far mentioned is an unqualified type. Each unqualified type has several qualified versions of its type, corresponding to the combinations of one, two, or all three of the const, volatile, and restrict qualifiers. The qualified or unqualified versions of a type are distinct types that belong to the same type category and have the same representation and alignment requirements. A derived type is not qualified by the qualifiers (if any) of the type from which it is derived.\n\n(C17 6.2.5/26)\nI note that the _Atomic keyword is different from the other three categorized as type qualifiers, and I presume that this is related to the fact that atomic types are not required to have the same representation or alignment requirements as their corresponding non-atomic types.\nI also note that the specification is explicit that qualified and unqualified versions of a type are different types.\nWith that background,\n\nQ1: What if the effective type is a pointer to qualified-type? Can I lvalue access it as a non-qualified pointer to the same type? Where in the standard is this stated?\n\nI take you to mean this:\nconst uint32_t *x = &some_uint32;\nuint32_t * y = *(uint32_t **) &x;\n\nThe effective type of x is const uint32_t * (an unqualified pointer to const-qualified uint32_t), and it is being accessed via an lvalue of type uint32_t * (an unqualified pointer to unqualified uint32_t). This combination is not among the exceptions allowed by the language spec. In particular, uint32_t * is not a qualified version of a const uint32_t *. The resulting behavior is therefore undefined, as specified in C17 6.5, paragraphs 6 and 7.\nAlthough the standard does not discuss this particular application of the SAR, I take it to be justified indirectly. The issue in cases such as this is not so much about accessing the pointer value itself as about producing a pointer whose type discards qualifiers of the pointed-to type.\nNote also that the SAR does allow this variation:\nconst uint32_t *x = &some_uint32;\nconst uint32_t * const y = *(const uint32_t * const *) &x;\n\n, as const uint32_t * const is a qualified version of const uint32_t *.\n\nQ2: Does lvalue conversion happen before or after the above quoted rules of effective type/strict aliasing are applied?\n\nI don't see how lvalue conversion could be construed to apply before strict aliasing. The strict aliasing rule is expressed in terms of the lvalues used for accessing objects, and the result of lvalue conversion is not an lvalue.\nAdditionally, as @EricPostpischil observed, the SAR applies to all accesses, which include writes. There is no lvalue conversion in the first place for an lvalue that is being written.\n\nQ3: Does the effective type come with qualifiers or not? Where in the standard is this stated?\n\nQualified and unqualified versions of a type are different types. I see no justification for interpreting the paragraph 6.5/6's \"the declared type of the object\" or \"the type of the lvalue\" as if the type were supposed to be considered stripped of its qualifiers, much less as if all qualifiers in the type(s) from which it is derived were stripped. The words \"the type\" mean what they say.\n", "\nQ3: Does the effective type come with qualifiers or not? 
Where in the standard is this stated?\n\nThe effective type includes qualifiers (or lack thereof) because the rules about effective type say that a type is used, and types include qualifiers, and the rules about effective type do not say the qualifiers are disregarded.\nC 2018 6.5 6 says the effective type of an object for access to its stored value is one of:\n\n“the declared type of the object” (if any),\n“the type of the lvalue” previously used to store into it (if that is not a character type),\n“the effective type of the object from which the value is copied” (if it was copied by a byte-copy method and the source has an effective type), or\n“the type of the lvalue used for the access.”\n\nThe third of these is recursive, so it leads to one of the others. The others all say the effective type is some type, and they do not say the effective type is the unqualified version of that type. It simply is that type; the qualifiers are not removed.\n\nQ2: Does lvalue conversion happen before or after the above quoted rules of effective type/strict aliasing are applied?\n\nLvalue conversion is immaterial. The aliasing rules in C 2018 6.5 7 make no mention of lvalue conversion, and it might not occur at all, since the rules apply to both reading and modifying values. (The rules in 6.5 7 are for when a stored value is “accessed,” and “access” in the C standard means reading or modifying, per 3.1.) When an object is modified, a new value is written into it; there is no lvalue conversion. When an object is read, the aliasing rules apply to that access, and lvalue conversion happens afterward, as a separate thing.\n\nQ1: What if the effective type is a pointer to qualified-type? Can I lvalue access it as a non-qualified pointer to the same type? Where in the standard is this stated?\n\nThe phrasing of these sentences do not make sense in this context. I will consider two meanings for them.\nFirst, I take the first sentence as it stands and the second question as “Can I lvalue access it as a pointer to the unqualified version of the effective type?” Although I suspect my second interpretation below is the one that was intended, this one involves less change to the text. The answer is the C standard does not define the behavior because it does not conform to the rule in 6.5 7.\nGiven const char *p;, p is a pointer to a qualified type. Then, after, char **q = (char **) &p;, *q is a pointer to an unqualified type. Using *q to read or to modify p would not conform to the rule in 6.5 7. When we consider accessing p with *q, then as we see above, the effective type of the object is const char *, the type of the lvalue is char *, and none of the cases in 6.5 7 say a const char * may be accessed as a char *.\nSecond, I take the sentences as “What if the effective type is a qualified type? Can I lvalue access it as an unqualified version of the same type?” Again, the answer is the C standard does not define the behavior because it does not conform to the rule in 6.5 7.\nGiven const int p = 3;, p has a qualified type. Then, after int *q = (int *) &p;, *q has the unqualified version of the same type. When we consider accessing p with *q, the effective type of the object is const int, and the type of the lvalue is int, and none of the cases in 6.5 7 say a const int may be accessed as an int.\n\nNone of these address qualifiers of the effective type itself, only by the lvalue used for access. Which should be quite irrelevant, because of lvalue conversion... 
right?\n\nNo, the qualifiers of the effective type are relevant. lvalue conversion, if it occurs, does not make them irrelevant. 6.5 7 states requirements for the lvalue type with relation to the effective type, and the qualifiers of each are parts of their types and partake in the rule in 6.5 7.\n" ]
[ 2, 0 ]
[]
[]
[ "c", "c17", "language_lawyer", "lvalue", "strict_aliasing" ]
stackoverflow_0065356861_c_c17_language_lawyer_lvalue_strict_aliasing.txt
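The two pointer examples from the first answer, assembled into one compilable unit for reference; the disallowed access is left commented out because it compiles fine but evaluating it is what the answers identify as undefined.

    #include <stdint.h>

    static uint32_t some_uint32 = 42;

    int main(void)
    {
        const uint32_t *x = &some_uint32;

        /* allowed by 6.5/7: const uint32_t * const is a qualified version of
           the effective type of x, which is const uint32_t * */
        const uint32_t *const y = *(const uint32_t *const *)&x;

        /* not among the 6.5/7 cases: uint32_t * is not a qualified version of
           const uint32_t *, so reading x through such an lvalue is undefined:
           uint32_t *z = *(uint32_t **)&x;
        */

        return y == x ? 0 : 1;
    }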
Q: Python3 find position/index of a name/element in a list with more than one of the same name I am having a problem that I just don't know how to solve and nothing I'm finding is helping. My problem is that I have a list of names (strings), in this list I will have the same name show up more than once. lst = ['hello.com', 'hello.com', 'hello.com', 'world.com', 'test1.com'] index = web_lst.index(domain)+1 print(index) The issue with this code is that index() will always find and use the first 'hello.com' instead of any of the other "hello.com's", so index will always be 1. If I were asking for any of the other names then it'd work I think. I am trying to get the integer representation of the 'hello.com' names (1, 2, 3, etc.), and I don't know how to do that or what else to use besides python lists. This, I don't think is going to work and I'm asking for any other ideas on what to do or use instead of using a list. (if what I'm trying to do is not possible with lists) My main goal is basically a login manager using sqlite3 and I want to have the ability to have multiple logins with some having the same domain name (but with different data and notes, etc.), because we like to have multiple logins/accounts for 1 website. I have a TUI (beaupy) for selecting the domain/option you want to get the login for but if you have more than 1 of the same domain name it doesn't know which one to pick. I have managed to use integers as IDs in the sqlite3 database to help but the main issue is the picking of an element from a list to get a number, to then plug into the read() function. So the list options will correlate to the "IDs" in the database. List index 0+1 would be option/row 1 in the database (and so on). def clear(): os.system('clear||cls') def add(encrypted_data): ID = 1 database = sqlite3.connect('vault.gter') c = database.cursor() #Check to see if IDs exist and if yes then get how many/length of list and add 1 and use that instead. c.execute("SELECT id FROM logins") all_ids = c.fetchall() out = list(itertools.chain(*all_ids)) list_length = len(out) if not all_ids: pass else: for x in out: if x == list_length: ID = x+1 else: pass c.execute(f"INSERT INTO logins VALUES ('{ID}', '{encrypted_data}')") database.commit() database.close() def domains(dKey): database = sqlite3.connect('vault.gter') c = database.cursor() c.execute("SELECT data FROM logins") websites = c.fetchall() enc_output = list(itertools.chain(*websites)) web_lst = [] note_lst = [] for x in enc_output: result = gcm.stringD(x, dKey) #decrypt encrypted json string. obj_result = json.loads(result) #turns back into json object website = obj_result['Domain'] notes = obj_result['Notes'] web_lst.append(website) note_lst.append(notes) for w,n in zip(web_lst, note_lst): with open('.lst', 'a') as fa: fa.writelines(f"{w} ({n})\n") fa.close() with open(".lst", "r+") as fr: data = fr.read() fnlst = data.strip().split('\n') fr.truncate(0) fr.close() os.remove(".lst") print(f'(Press "ctrl+c" to exit)\n-----------------------------------------------------------\n\nWebsite domain/name to get login for?\n') domain = beaupy.select(fnlst, cursor_style="#ffa533") clear() if domain == None: clear() return else: domain = domain.split(' ', 1)[0] #get first word in a string. print(domain) #debug index = web_lst.index(domain)+1 input(index) #debug pwd = read(index) return pwd # Come up with new way to show available options to chose from and then get number from that to use here for "db_row". 
def read(db_row): database = sqlite3.connect('vault.gter') c = database.cursor() c.execute("SELECT id FROM logins") all_ids = c.fetchall() lst_output = list(itertools.chain(*all_ids)) if not all_ids: input("No IDS") #debug database.commit() database.close() return else: for x in lst_output: if x == db_row: c.execute(f"SELECT data FROM logins WHERE id LIKE '{db_row}'") #to prevent my main issue of it not knowing what I want when two domain names are the same. stoof = c.fetchone() database.commit() database.close() return stoof[0] else: #(debug) - input(f"error, x is not the same as db_row. x = {x} & db_row = {db_row}") pass If anyone has a better way of doing this whole login manager thing, I'll be very very appreciative. From handling the database and sqlite3 commands, better IDs? to perhaps completely a different (and free) way of storage. And finding a better way to handle my main problem here (with or without having to use lists). Anything is helpful. <3 If anyone has questions then feel free to ask away and I'll respond when I can with the best of my knowledge. A: You can get both the index and the element using a for-loop. for i in range(len(lst)): element = lst[i] if element == domain: print(i) This should give you all indexes of domain. Edited Code: d = {} c = 0 for i in range(len(lst)): element = lst[i] if element == domain: c += 1 d[c] = i for number, index in d.items(): # Do something here. Remember to use number and index instead of c and i! c is the occurence number, and i is the index. A: Here is a one-liner: [{item:[i for i, x in enumerate(lst) if x == item]} for item in set(lst)]
Python3 find position/index of a name/element in a list with more than one of the same name
I am having a problem that I just don't know how to solve and nothing I'm finding is helping. My problem is that I have a list of names (strings), in this list I will have the same name show up more than once. lst = ['hello.com', 'hello.com', 'hello.com', 'world.com', 'test1.com'] index = web_lst.index(domain)+1 print(index) The issue with this code is that index() will always find and use the first 'hello.com' instead of any of the other "hello.com's", so index will always be 1. If I were asking for any of the other names then it'd work I think. I am trying to get the integer representation of the 'hello.com' names (1, 2, 3, etc.), and I don't know how to do that or what else to use besides python lists. This, I don't think is going to work and I'm asking for any other ideas on what to do or use instead of using a list. (if what I'm trying to do is not possible with lists) My main goal is basically a login manager using sqlite3 and I want to have the ability to have multiple logins with some having the same domain name (but with different data and notes, etc.), because we like to have multiple logins/accounts for 1 website. I have a TUI (beaupy) for selecting the domain/option you want to get the login for but if you have more than 1 of the same domain name it doesn't know which one to pick. I have managed to use integers as IDs in the sqlite3 database to help but the main issue is the picking of an element from a list to get a number, to then plug into the read() function. So the list options will correlate to the "IDs" in the database. List index 0+1 would be option/row 1 in the database (and so on). def clear(): os.system('clear||cls') def add(encrypted_data): ID = 1 database = sqlite3.connect('vault.gter') c = database.cursor() #Check to see if IDs exist and if yes then get how many/length of list and add 1 and use that instead. c.execute("SELECT id FROM logins") all_ids = c.fetchall() out = list(itertools.chain(*all_ids)) list_length = len(out) if not all_ids: pass else: for x in out: if x == list_length: ID = x+1 else: pass c.execute(f"INSERT INTO logins VALUES ('{ID}', '{encrypted_data}')") database.commit() database.close() def domains(dKey): database = sqlite3.connect('vault.gter') c = database.cursor() c.execute("SELECT data FROM logins") websites = c.fetchall() enc_output = list(itertools.chain(*websites)) web_lst = [] note_lst = [] for x in enc_output: result = gcm.stringD(x, dKey) #decrypt encrypted json string. obj_result = json.loads(result) #turns back into json object website = obj_result['Domain'] notes = obj_result['Notes'] web_lst.append(website) note_lst.append(notes) for w,n in zip(web_lst, note_lst): with open('.lst', 'a') as fa: fa.writelines(f"{w} ({n})\n") fa.close() with open(".lst", "r+") as fr: data = fr.read() fnlst = data.strip().split('\n') fr.truncate(0) fr.close() os.remove(".lst") print(f'(Press "ctrl+c" to exit)\n-----------------------------------------------------------\n\nWebsite domain/name to get login for?\n') domain = beaupy.select(fnlst, cursor_style="#ffa533") clear() if domain == None: clear() return else: domain = domain.split(' ', 1)[0] #get first word in a string. print(domain) #debug index = web_lst.index(domain)+1 input(index) #debug pwd = read(index) return pwd # Come up with new way to show available options to chose from and then get number from that to use here for "db_row". 
def read(db_row): database = sqlite3.connect('vault.gter') c = database.cursor() c.execute("SELECT id FROM logins") all_ids = c.fetchall() lst_output = list(itertools.chain(*all_ids)) if not all_ids: input("No IDS") #debug database.commit() database.close() return else: for x in lst_output: if x == db_row: c.execute(f"SELECT data FROM logins WHERE id LIKE '{db_row}'") #to prevent my main issue of it not knowing what I want when two domain names are the same. stoof = c.fetchone() database.commit() database.close() return stoof[0] else: #(debug) - input(f"error, x is not the same as db_row. x = {x} & db_row = {db_row}") pass If anyone has a better way of doing this whole login manager thing, I'll be very very appreciative. From handling the database and sqlite3 commands, better IDs? to perhaps completely a different (and free) way of storage. And finding a better way to handle my main problem here (with or without having to use lists). Anything is helpful. <3 If anyone has questions then feel free to ask away and I'll respond when I can with the best of my knowledge.
[ "You can get both the index and the element using a for-loop.\nfor i in range(len(lst)):\n element = lst[i]\n if element == domain:\n print(i)\n\nThis should give you all indexes of domain.\nEdited Code:\nd = {}\nc = 0\nfor i in range(len(lst)):\n element = lst[i]\n if element == domain:\n c += 1\n d[c] = i\n\nfor number, index in d.items():\n # Do something here. Remember to use number and index instead of c and i!\n pass\n\nc is the occurrence number, and i is the index.\n", "Here is a one-liner:\n[{item:[i for i, x in enumerate(lst) if x == item]} for item in set(lst)]\n\n" ]
[ 0, 0 ]
[]
[]
[ "python", "python_3.x", "sqlite3_python" ]
stackoverflow_0074662809_python_python_3.x_sqlite3_python.txt
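Tying the two answers back to the login-manager use case: instead of looking the chosen domain up again with index(), the selection list can be built with enumerate() so every entry, including duplicate domains, already carries the 1-based row id that read() expects. A sketch under the question's own conventions (beaupy menu, ids assigned 1, 2, 3, ... in insertion order); it would replace the temporary-file block inside domains().

    # after web_lst and note_lst have been filled
    labels = []
    label_to_row = {}
    for row_id, (site, note) in enumerate(zip(web_lst, note_lst), start=1):
        label = f"{row_id}. {site} ({note})"    # duplicates stay distinguishable
        labels.append(label)
        label_to_row[label] = row_id            # matches the 1-based id convention used in add()

    choice = beaupy.select(labels, cursor_style="#ffa533")
    if choice is None:
        return None
    return read(label_to_row[choice])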
Q: Can you build a VSTO Excel solution in the latest Visual Studio on ARM? I see that there is now a native ARM version of Visual Studio which is great as i use Parallels Desktop on my mac and the previous version of Visual Studio is painfully slow. I see some workloads are available and some are not: https://developercommunity.visualstudio.com/search?space=8&q=%5BARM64%5D&stateGroup=active&ftype=idea&sort=relevance One thing that is unclear to me is if it supports VSTO solutions. I know that the latest .Net framework versions don't support VSTO: https://github.com/dotnet/core/issues/5156 but I wasn't sure if there are any impediments of running the native ARM based version of Visual studio to run a VSTO solution on .Net 4.8 framework version. A: No, it is not possible. Microsoft Office available for ARM-based processors doesn't support ARM-based COM add-ins (not emulated). The Office apps utilize a new technology from Microsoft called ARM64EC, which stands for ARM64 Emulation Compatible. This technology allows developers to mix and match code that's built natively for ARM64 alongside code that runs in emulation. As a result, apps with dependencies that don't natively support ARM64 can run partly as native apps and partly in emulation. Office has x64 code and legacy add-ins that aren't built for Windows 11 on ARM. With ARM64EC, Microsoft can rebuild large portions of the app to run natively on ARM devices, while the older components run in emulation. A: It is currently not possible to build a Visual Studio Tools for Office (VSTO) solution using the ARM version of Visual Studio. VSTO solutions require the use of the .NET Framework, which is not currently supported on ARM-based devices. Microsoft has announced plans to bring .NET support to ARM-based devices in the future, but it is not currently available. A: It is not currently possible to use the native ARM version of Visual Studio to develop and run VSTO solutions. This is because the VSTO runtime, which is required for VSTO solutions, is not currently available for ARM-based systems. As you mentioned, the latest versions of the .NET Framework do not support VSTO, and this applies to both the x86 and ARM versions of the .NET Framework. This means that even if the ARM version of Visual Studio did support VSTO development, you would not be able to run VSTO solutions on it. At this time, the only way to develop and run VSTO solutions is to use the x86 version of Visual Studio on an x86-based system. This is because the VSTO runtime is only available for x86-based systems, and it is not currently supported on ARM-based systems. If you are using Parallels Desktop on your Mac, you may want to consider using the x86 version of Visual Studio in a virtual machine to develop and run your VSTO solutions. This will provide the best performance and compatibility for VSTO development.
Can you build a VSTO Excel solution in the latest Visual Studio on ARM?
I see that there is now a native ARM version of Visual Studio which is great as i use Parallels Desktop on my mac and the previous version of Visual Studio is painfully slow. I see some workloads are available and some are not: https://developercommunity.visualstudio.com/search?space=8&q=%5BARM64%5D&stateGroup=active&ftype=idea&sort=relevance One thing that is unclear to me is if it supports VSTO solutions. I know that the latest .Net framework versions don't support VSTO: https://github.com/dotnet/core/issues/5156 but I wasn't sure if there are any impediments of running the native ARM based version of Visual studio to run a VSTO solution on .Net 4.8 framework version.
[ "No, it is not possible. Microsoft Office available for ARM-based processors doesn't support ARM-based COM add-ins (not emulated).\nThe Office apps utilize a new technology from Microsoft called ARM64EC, which stands for ARM64 Emulation Compatible. This technology allows developers to mix and match code that's built natively for ARM64 alongside code that runs in emulation. As a result, apps with dependencies that don't natively support ARM64 can run partly as native apps and partly in emulation.\nOffice has x64 code and legacy add-ins that aren't built for Windows 11 on ARM. With ARM64EC, Microsoft can rebuild large portions of the app to run natively on ARM devices, while the older components run in emulation.\n", "It is currently not possible to build a Visual Studio Tools for Office (VSTO) solution using the ARM version of Visual Studio. VSTO solutions require the use of the .NET Framework, which is not currently supported on ARM-based devices. Microsoft has announced plans to bring .NET support to ARM-based devices in the future, but it is not currently available.\n", "It is not currently possible to use the native ARM version of Visual Studio to develop and run VSTO solutions. This is because the VSTO runtime, which is required for VSTO solutions, is not currently available for ARM-based systems.\nAs you mentioned, the latest versions of the .NET Framework do not support VSTO, and this applies to both the x86 and ARM versions of the .NET Framework. This means that even if the ARM version of Visual Studio did support VSTO development, you would not be able to run VSTO solutions on it.\nAt this time, the only way to develop and run VSTO solutions is to use the x86 version of Visual Studio on an x86-based system. This is because the VSTO runtime is only available for x86-based systems, and it is not currently supported on ARM-based systems.\nIf you are using Parallels Desktop on your Mac, you may want to consider using the x86 version of Visual Studio in a virtual machine to develop and run your VSTO solutions. This will provide the best performance and compatibility for VSTO development.\n" ]
[ 2, 0, 0 ]
[]
[]
[ "arm", "c#", "office_addins", "visual_studio", "vsto" ]
stackoverflow_0073445396_arm_c#_office_addins_visual_studio_vsto.txt
Q: Create a binary flag to see if a column contains certain terms between two tables table 1: terms apple banana candy table 2: search_terms apple cake good apple cake recipe nothing special banana pudding bananapudding candybar The expected result table: search_terms | flag apple cake | yes good apple recipe | yes nothing special | no banana pudding | yes bananapudding | no candybar | no I'm trying not to use cross join as there are many terms. My working code: with targeted_terms as (select distinct terms from t1) select distinct search_terms, REGEXP_CONTAINS(search_terms, CONCAT(r'(?i)(\b', t1.terms, r'\b)')) as flag from t2 cross join t1 A: If you want to avoid a cross join, how about concatenating your search terms into a single (potentially very long...) regex? It can use | to search for multiple different possibilities in one go. Something like this might work: SELECT search_terms, REGEXP_CONTAINS( search_terms, (SELECT STRING_AGG(terms, '|') FROM t1) ) AS flag, FROM t2
Create a binary flag to see if a column contains certain terms between two tables
table 1: terms apple banana candy table 2: search_terms apple cake good apple cake recipe nothing special banana pudding bananapudding candybar The expected result table: search_terms | flag apple cake | yes good apple recipe | yes nothing special | no banana pudding | yes bananapudding | no candybar | no I'm trying not to use cross join as there are many terms. My working code: with targeted_terms as (select distinct terms from t1) select distinct search_terms, REGEXP_CONTAINS(search_terms, CONCAT(r'(?i)(\b', t1.terms, r'\b)')) as flag from t2 cross join t1
[ "If you want to avoid a cross join, how about concatenating your search terms into a single (potentially very long...) regex? It can use | to search for multiple different possibilities in one go.\nSomething like this might work (note the parentheses around the subquery, which BigQuery requires for a scalar subquery used as an argument):\n SELECT\n search_terms,\n REGEXP_CONTAINS(\n search_terms,\n (SELECT STRING_AGG(terms, '|') FROM t1)\n ) AS flag,\n FROM\n t2 \n\n" ]
[ 0 ]
[]
[]
[ "google_bigquery", "sql" ]
stackoverflow_0074662575_google_bigquery_sql.txt
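A variant of the answer's query that also reproduces the yes/no column and the whole-word matching from the asker's original CONCAT(r'(?i)(\b', ...) pattern; it assumes the terms contain no regex metacharacters, which would otherwise need escaping before aggregation.

    SELECT
      search_terms,
      IF(
        REGEXP_CONTAINS(
          LOWER(search_terms),
          (SELECT CONCAT(r'\b(', STRING_AGG(LOWER(terms), '|'), r')\b') FROM t1)
        ),
        'yes', 'no'
      ) AS flag
    FROM t2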
Q: Could not get unknown property 'mavenUser' for Credentials All I am getting following error in build.gradle in Android Studio : Could not get unknown property 'mavenUser' for Credentials [username: null] of type org.gradle.api.internal.artifacts.repositories.DefaultPasswordCredentials_Decorated Below my Gradle file: buildscript { repositories { mavenCentral() } dependencies { classpath 'com.android.tools.build:gradle:0.7.+' repositories { mavenCentral() maven { // ~/.gradle/gradle.properties should be configured! credentials { username mavenUser password mavenPassword } url 'http://dev.softwerk.se:8080/nexus/content/repositories/softwerk-repo' } } android { compileSdkVersion 19 buildToolsVersion "19.0.1" defaultConfig { minSdkVersion 8 targetSdkVersion 19 versionCode 2014020601 versionName "2.0.0" } compileOptions { sourceCompatibility JavaVersion.VERSION_1_7 targetCompatibility JavaVersion.VERSION_1_7 } } dependencies { compile 'com.android.support:support-v4:19.0.+' compile 'com.android.support:appcompat-v7:19.0.+' compile 'com.google.android.gms:play-services:4.0.30' // Sync framework compile 'se.softwerk.commons.android:android-framework:1.1.10@aar' compile 'com.googlecode.plist:dd-plist:1.0' compile 'com.google.code.gson:gson:1.7.1' compile 'commons-io:commons-io:2.1' } } }` A: Create a file called gradle.properties in your project directory. Add the lines: mavenUser=yourusername mavenPassword=yourpassword A: For me it works I have add this official doc generated username and password in settings.gradle file under mavenCentral() write this username and password with proper use of = ' ' username='paypal_sgerritz' password='AKCp8jQ8tAahqpT5JjZ4FRP2mW7GMoFZ674kGqHmupTesKeAY2G8NcmPKLuTxTGkKjDLRzDUQ' Add this username and password in gradle.properties file with proper use of = ' ' username='paypal_sgerritz' password='AKCp8jQ8tAahqpT5JjZ4FRP2mW7GMoFZ674kGqHmupTesKeAY2G8NcmPKLuTxTGkKjDLRzDUQ' A: I just had to edit my ~/.gradle/gradle.properties file and add the credentials in it. Open the properties file using -> vim ~/.gradle/gradle.properties Add your credentials something like below and save it. user=abc_user password=abc_password Rerun your build.
Could not get unknown property 'mavenUser' for Credentials
All I am getting following error in build.gradle in Android Studio : Could not get unknown property 'mavenUser' for Credentials [username: null] of type org.gradle.api.internal.artifacts.repositories.DefaultPasswordCredentials_Decorated Below my Gradle file: buildscript { repositories { mavenCentral() } dependencies { classpath 'com.android.tools.build:gradle:0.7.+' repositories { mavenCentral() maven { // ~/.gradle/gradle.properties should be configured! credentials { username mavenUser password mavenPassword } url 'http://dev.softwerk.se:8080/nexus/content/repositories/softwerk-repo' } } android { compileSdkVersion 19 buildToolsVersion "19.0.1" defaultConfig { minSdkVersion 8 targetSdkVersion 19 versionCode 2014020601 versionName "2.0.0" } compileOptions { sourceCompatibility JavaVersion.VERSION_1_7 targetCompatibility JavaVersion.VERSION_1_7 } } dependencies { compile 'com.android.support:support-v4:19.0.+' compile 'com.android.support:appcompat-v7:19.0.+' compile 'com.google.android.gms:play-services:4.0.30' // Sync framework compile 'se.softwerk.commons.android:android-framework:1.1.10@aar' compile 'com.googlecode.plist:dd-plist:1.0' compile 'com.google.code.gson:gson:1.7.1' compile 'commons-io:commons-io:2.1' } } }`
[ "Create a file called gradle.properties in your project directory. Add the lines: \nmavenUser=yourusername\nmavenPassword=yourpassword\n\n", "\nFor me it worked like this: I added the username and password generated from the official doc to the settings.gradle file, under mavenCentral(), written with the proper use of = ' '\n\nusername='paypal_sgerritz'\npassword='AKCp8jQ8tAahqpT5JjZ4FRP2mW7GMoFZ674kGqHmupTesKeAY2G8NcmPKLuTxTGkKjDLRzDUQ'\n\n\nThen add the same username and password to the gradle.properties file, again with the proper use of = ' '\nusername='paypal_sgerritz'\npassword='AKCp8jQ8tAahqpT5JjZ4FRP2mW7GMoFZ674kGqHmupTesKeAY2G8NcmPKLuTxTGkKjDLRzDUQ'\n\n\n\n", "I just had to edit my ~/.gradle/gradle.properties file and add the credentials to it.\n\nOpen the properties file using -> vim ~/.gradle/gradle.properties\nAdd your credentials, something like below, and save it.\n\n\nuser=abc_user\npassword=abc_password\n\nRerun your build.\n" ]
[ 19, 1, 0 ]
[]
[]
[ "android", "android_gradle_plugin" ]
stackoverflow_0041201310_android_android_gradle_plugin.txt
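Putting the first and third answers together, one defensive variant: keep the credentials in ~/.gradle/gradle.properties as shown above, and read them in build.gradle with findProperty (available since Gradle 2.13) so a machine without those entries still configures instead of failing with the "unknown property" error; the repository URL is the one from the question.

    // ~/.gradle/gradle.properties
    //   mavenUser=yourusername
    //   mavenPassword=yourpassword

    // build.gradle
    repositories {
        mavenCentral()
        maven {
            url 'http://dev.softwerk.se:8080/nexus/content/repositories/softwerk-repo'
            credentials {
                username = project.findProperty('mavenUser') ?: ''
                password = project.findProperty('mavenPassword') ?: ''
            }
        }
    }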