Dataset schema:
- content: string, 85 to 101k characters
- title: string, 0 to 150 characters
- question: string, 15 to 48k characters
- answers: sequence
- answers_scores: sequence
- non_answers: sequence
- non_answers_scores: sequence
- tags: sequence
- name: string, 35 to 137 characters
Q: I want to receive information from the user on Discord, but I don't know what to do (discord.py)

I want to receive information from the user on Discord, but I don't know how. I want to make a class that holds the input data: if the user writes !make [name] [data], the bot should create an instance A(name, data) of class A. The following is the code I made. What should I do?

P.S. command_prefix is not working properly. What should I do about this?

```python
import discord, asyncio
import char  # class file
from discord.ext import commands

intents = discord.Intents.all()
client = discord.Client(intents=intents)
bot = commands.Bot(command_prefix='!', intents=intents)

@client.event
async def on_ready():
    await client.change_presence(status=discord.Status.online, activity=discord.Game("Game"))

@client.event
async def on_message(message):
    if message.content == "test":
        await message.channel.send("{} | {}, Hello".format(message.author, message.author.mention))
        await message.author.send("{} | {}, User, Hello".format(message.author, message.author.mention))
    if message.content == "!help":
        await message.channel.send("hello, I'm bot 0.0.1 Alpha")

async def new_class(ctx, user: discord.User, context1, context2):
    global char_num
    globals()['char_{}'.format(char_num)] = char(name=context1, Sffter=context2, username=ctx.message.author.name)
    char_num += 1
    await ctx.message.channel.send("done", context1, "!")

client.run('-')
```

A: I advise against using on_message to implement commands. What I do advise is the commands extension:

```python
import discord
from discord.ext import commands

@bot.command(name="name here if you want a different one than the function name",
             description="describe it here",
             hidden=False)  # set hidden=True to hide it in the help
async def mycommand(ctx, argument1, argument2):
    '''A longer description of the command

    Usage example:
    !mycommand hi 1
    '''
    await ctx.send(f"Got {argument1} and {argument2}")
```

If you use the two approaches together, then after this line

```python
await message.channel.send("hello, I'm bot 0.0.1 Alpha")
```

add this:

```python
    else:
        await bot.process_commands(message)
```

If you want to make your own help command, you should first remove the default one by changing the line

```python
bot = commands.Bot(command_prefix='!', intents=intents)
```

to:

```python
bot = commands.Bot(command_prefix='!', intents=intents, help_command=None)
```

Overall, the code should look like this. Note that everything now runs on the single bot instance; the separate client was the reason command_prefix appeared not to work, because the commands were registered on bot while only client was being run:

```python
import discord, asyncio
import char  # class file
from discord.ext import commands

intents = discord.Intents.all()
bot = commands.Bot(command_prefix='!', intents=intents, help_command=None)

@bot.event
async def on_ready():
    await bot.change_presence(status=discord.Status.online, activity=discord.Game("Game"))

@bot.command()
async def make(ctx, name, data):
    # do whatever you want with the name and data parameters
    pass

@bot.command()
async def help(ctx):
    await ctx.send("hello, I'm bot 0.0.1 Alpha")

@bot.command()
async def test(ctx):
    await ctx.send("{} | {}, Hello".format(ctx.author, ctx.author.mention))

bot.run('-')
```

And if you want to know what ctx is: ctx is the context, the default first parameter of every command, and it has attributes and methods such as send, author, and more.
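The registration logic the question asks about can be sketched independently of Discord; a bot command handler would then just forward the parsed message to it. This is a hypothetical sketch — the `Char` class and `registry` dict below stand in for the question's `char` module and its globals-based bookkeeping:

```python
# Hypothetical stand-in for the question's `char` module.
class Char:
    def __init__(self, name, data, username):
        self.name = name
        self.data = data
        self.username = username

registry = {}  # created instances, keyed by name

def handle_message(content, author, prefix="!"):
    """Parse '!make <name> <data>' and register a Char instance."""
    if not content.startswith(prefix + "make "):
        return None  # not our command
    parts = content.split()
    if len(parts) != 3:
        return "usage: !make [name] [data]"
    _, name, data = parts
    registry[name] = Char(name, data, username=author)
    return "done {}!".format(name)

print(handle_message("!make hero 42", "alice"))  # done hero!
```

A `@bot.command()` handler would call `handle_message(message.content, str(ctx.author))` and `ctx.send` whatever comes back.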
stackoverflow_0074670633_discord_python.txt
Q: How do I get the client IP address of a websocket connection in Django Channels?

I need to get the client IP address of a websocket connection for some extra functionality I would like to implement. I have an existing deployed Django server running an Nginx-Gunicorn-Uvicorn Worker-Redis configuration. As one might expect, during development, while running a local server, everything works as expected. However, when deployed, I receive the error 'NoneType' object is not subscriptable when attempting to access the client IP address of the websocket via self.scope["client"][0]. Here are the configurations and code:

NGINX config:

```nginx
upstream uvicorn {
    server unix:/run/gunicorn.sock;
}

server {
    listen 80;
    server_name <ip address> <hostname>;

    location = /favicon.ico { access_log off; log_not_found off; }

    location / {
        include proxy_params;
        proxy_set_header Connection "";
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_pass http://uvicorn;
        proxy_headers_hash_max_size 512;
        proxy_headers_hash_bucket_size 128;
    }

    location /ws/ {
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "Upgrade";
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_redirect off;
        proxy_pass http://uvicorn;
    }

    location /static/ {
        root /var/www/serverfiles/;
        autoindex off;
    }

    location /media {
        alias /mnt/apps;
    }
}
```

Gunicorn config (note: ExecStart has been formatted for readability; it is one line in the actual config):

```ini
[Unit]
Description=gunicorn daemon
Requires=gunicorn.socket
After=network.target

[Service]
User=django
Group=www-data
WorkingDirectory=/srv/server
Environment=DJANGO_SECRET_KEY=
Environment=GITEA_SECRET_KEY=
Environment=MSSQL_DATABASE_PASSWORD=
ExecStart=/bin/bash -c "
    source venv/bin/activate;
    exec /srv/server/venv/bin/gunicorn
        --workers 3
        --bind unix:/run/gunicorn.sock
        --timeout 300
        --error-logfile /var/log/gunicorn/error.log
        --access-logfile /var/log/gunicorn/access.log
        --log-level debug
        --capture-output
        -k uvicorn.workers.UvicornWorker
        src.server.asgi:application
"

[Install]
WantedBy=multi-user.target
```

Code throwing the error:

```python
@database_sync_to_async
def _set_online_if_model(self, set_online: bool) -> None:
    model: MyModel
    for model in MyModel.objects.all():
        if self.scope["client"][0] == model.ip:
            model.online = set_online
            model.save()
```

This server had been running phenomenally in its current configuration before my need to access connected client IP addresses. It handles other websocket connections just fine without any issues.

I've already looked into configuring my own custom UvicornWorker according to the docs. I'm not at all an expert in this, so I might have misunderstood what I was supposed to do: https://www.uvicorn.org/deployment/#running-behind-nginx

```python
from uvicorn.workers import UvicornWorker

class ServerUvicornWorker(UvicornWorker):
    def __init__(self, *args, **kwargs) -> None:
        self.CONFIG_KWARGS.update({"proxy_headers": True, "forwarded_allow_ips": "*"})
        super().__init__(*args, **kwargs)
```

I also looked at https://github.com/django/channels/issues/546, which mentioned a --proxy-headers option for Daphne; however, I am not running Daphne. https://github.com/django/channels/issues/385 mentioned that HTTP headers are passed to the connect method of a consumer; however, that post is quite old and no longer relevant as far as I can tell. I do not get any additional **kwargs to my connect method.

A: The client IP has nothing to do with Channels. self.scope["client"][0] is undefined because when you receive data from the front end at the backend, there is no data with the name client, so try to send it from the frontend. You can send a manual, static value at first to verify, and then find techniques to read the IP address and send it.
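For completeness: in Channels the peer address normally lives in scope["client"], and behind a reverse proxy the original client address is usually carried in the X-Forwarded-For header, which appears in scope["headers"] as a list of (name, value) byte pairs per the ASGI spec. A hedged sketch of a fallback helper — the exact header present depends on the Nginx config above, and scope["client"] can be None over a unix socket:

```python
def get_client_ip(scope):
    """Best-effort client IP: prefer X-Forwarded-For, fall back to scope['client']."""
    headers = dict(scope.get("headers") or [])  # ASGI headers are (name, value) byte pairs
    forwarded = headers.get(b"x-forwarded-for")
    if forwarded:
        # The first address in the comma-separated list is the original client.
        return forwarded.decode("latin1").split(",")[0].strip()
    client = scope.get("client")  # (host, port) pair, or None behind a unix socket
    return client[0] if client else None

scope = {"headers": [(b"x-forwarded-for", b"203.0.113.7, 10.0.0.1")], "client": None}
print(get_client_ip(scope))  # 203.0.113.7
```

Inside a consumer this would be called as `get_client_ip(self.scope)` in place of the bare `self.scope["client"][0]` subscript that raised the error.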
stackoverflow_0074605177_django_django_channels_nginx_python.txt
Q: Same optimization code, different results on different computers

I am running nested optimization code:

```python
sp.optimize.minimize(fun=A, x0=D, method="SLSQP", bounds=(E),
                     constraints=({'type': 'eq', 'fun': constrains}),
                     options={'disp': True, 'maxiter': 100, 'ftol': 1e-05})

sp.optimize.minimize(fun=B, x0=C, method="Nelder-Mead", options={'disp': True})
```

The first minimization is part of the function B, so it effectively runs inside the second minimization. The whole optimization is based on data; there is no random number involved.

I run exactly the same code on two different computers and get totally different results. I have installed different versions of Anaconda, but scipy, numpy, and all the other packages used have the same versions. I don't really think the OS would matter, but one machine is Windows 10 (64-bit) and the other is Windows 8.1 (64-bit).

I am trying to figure out what might be causing this. Even though I did not state all the options, if two computers are running the same code, shouldn't the results be the same? Or are there any options for sp.optimize whose default values differ from computer to computer?

P.S. I was looking at the option "eps". Is it possible that the default values of "eps" are different on these computers?

A: You should never expect numerical methods to perform identically on different devices, or even across different runs of the same code on the same device. Due to the finite precision of the machine, you can never calculate the "real" result, only numerical approximations, and during a long optimization task these differences can add up.

Furthermore, some optimization methods use some kind of randomness internally to avoid getting stuck in local minima: they add a small, almost vanishing noise to the previously calculated solution to allow the algorithm to converge faster toward the global minimum rather than getting stuck in a local minimum or a saddle point.

Can you try to plot the landscape of the function you want to minimize? This can help you analyze the problem: if both of the results (one per machine) are local minima, then this behaviour can be explained by the description above. If this is not the case, you should check the version of scipy installed on both machines. Maybe you are implicitly using float values on one device and double values on the other, too?

You see, there are a lot of possible explanations for this (at first glance) strange numerical behaviour; you would have to give us more details to solve it.

A: I found that different versions of SciPy do or do not allow minimum and maximum bounds to be the same. For example, in SciPy version 1.5.4, a parameter with equal min and max bounds sends that term's Jacobian to nan, which brings the minimization to a premature stop.
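The finite-precision point from the first answer is easy to demonstrate: floating-point addition is not associative, so even regrouping or reordering a sum can change the last bits of the result, and an iterative optimizer can amplify such tiny differences over many steps. A minimal illustration:

```python
a, b, c = 0.1, 0.2, 0.3

left = (a + b) + c
right = a + (b + c)
print(left == right)   # False: the two groupings differ in the last bit
print(left - right)    # a difference on the order of 1e-16

# Summing the same numbers in a different order also changes the result:
print(sum([1e16, 1.0, -1e16]))   # 0.0 -- the 1.0 is absorbed by 1e16
print(sum([1e16, -1e16, 1.0]))   # 1.0 -- same numbers, different order
```

Differences of this size per operation are exactly what compiler flags, CPU instruction sets (e.g. SSE vs. AVX/FMA), and library builds can change between machines, which is why two "identical" runs can diverge.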
stackoverflow_0046043768_minimization_optimization_python_scipy.txt
Q: Execute a Python function from Java code and get the result

I am working with a Python library, but everything else is in Java. I want to be able to access and use the Python library from Java, so I started researching and using Jython. I need to use the numpy and neurokit libraries.

I wrote this simple code in Java:

```java
PythonInterpreter interpreter = new PythonInterpreter();
interpreter.set("values", 10);
interpreter.execfile("D:\\PyCharmWorkspace\\IoTproject\\Test.py");
PyObject b = interpreter.get("result");
```

and the code in Python:

```python
import sys
sys.path.append("D:\\PyCharmWorkspace\\venv\\lib\\site-packages")
import numpy as np

result = values + 20
```

The problem is that when it tries to load the numpy module, I get this error:

```
Exception in thread "main" Traceback (most recent call last):
  File "D:\PyCharmWorkspace\IoTproject\TestECGfeature.py", line 4, in <module>
    import numpy as np
  File "D:\PyCharmWorkspace\venv\lib\site-packages\numpy\__init__.py", line 142, in <module>
    from . import core
  File "D:\PyCharmWorkspace\venv\lib\site-packages\numpy\core\__init__.py", line 24, in <module>
    from . import multiarray
  File "D:\PyCharmWorkspace\venv\lib\site-packages\numpy\core\multiarray.py", line 14, in <module>
    from . import overrides
  File "D:\PyCharmWorkspace\venv\lib\site-packages\numpy\core\overrides.py", line 166
SyntaxError: unqualified exec is not allowed in function 'decorator' because it contains free variables
```

I also tried this:

```java
interpreter.exec("import sys");
interpreter.exec("sys.path.append('D:\\PyCharmWorkspace\\venv\\lib\\site-packages')");
interpreter.exec("import numpy as np");
```

and I get:

```
Exception in thread "main" Traceback (most recent call last):
  File "<string>", line 1, in <module>
ImportError: No module named numpy
```

To install Jython, I added the jar file to the project build path. I also found jep and jpy, which can make Java communicate with Python, but I didn't find out how to install or use them.

What I need is to call a Python function, passing parameters and getting the result back. How can I do that, or how can I solve the problem using Jython?

A: The following code can be used to execute a Python script:

```java
private File runPythonCode(String pythonScript) throws IOException, InterruptedException {
    ProcessBuilder pb = new ProcessBuilder("python", pythonScript);

    Process process = pb.start();
    int errCode = process.waitFor();

    if (errCode == 1) {
        System.out.println("Error");
    } else {
        String filePath = output(process.getInputStream());
        logger.info("Generated report file path :: " + filePath);
        if (filePath != null) {
            File docxFile = new File(filePath.trim());
            // Creates a new file only if it does not exist; file.exists() returns
            // false if we do not explicitly create the file even if it exists.
            docxFile.createNewFile();
            String updatedFileName = docxFile.getParent() + File.separator
                    + jobAccountJson.getProviderName() + "_" + docxFile.getName();
            File renamedFile = new File(updatedFileName);
            if (docxFile.renameTo(renamedFile)) {
                logger.info("Renamed file to " + renamedFile.getPath());
                return renamedFile;
            } else {
                logger.error("Could not rename file to " + updatedFileName);
            }
            return docxFile;
        }
    }
    return null;
}

private static String output(InputStream inputStream) throws IOException {
    StringBuilder sb = new StringBuilder();
    BufferedReader br = null;
    try {
        br = new BufferedReader(new InputStreamReader(inputStream));
        String line = null;
        while ((line = br.readLine()) != null) {
            sb.append(line + System.getProperty("line.separator"));
        }
    } finally {
        if (br != null) {
            br.close();
        }
    }
    return sb.toString();
}
```

A:

```java
ProcessBuilder pb = new ProcessBuilder("python", "NameOfScript.py");
Process p = pb.start();
p.getInputStream().transferTo(System.out);
```
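Whichever launcher is used on the Java side, the Python script needs a clear contract for passing parameters in and getting the result out. One common pattern (a hedged sketch — `compute` below is a hypothetical placeholder for the real library call, not part of either answer's code) is to take arguments from the command line and print a single JSON object to stdout, which the Java process then reads and parses:

```python
import json

def compute(name, data):
    """Hypothetical stand-in for the real Python-library call."""
    return {"name": name, "length": len(data)}

def run(argv):
    """Entry point: argv mirrors sys.argv[1:]; returns the JSON line printed for Java."""
    if len(argv) != 2:
        line = json.dumps({"error": "usage: script.py <name> <data>"})
    else:
        line = json.dumps(compute(argv[0], argv[1]))
    print(line)  # Java reads this single line from the process's stdout
    return line

run(["example", "some-data"])  # prints {"name": "example", "length": 9}
```

On the Java side, the string captured by `output(process.getInputStream())` would then be fed to a JSON parser instead of being treated as a file path.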
stackoverflow_0060171954_java_jython_numpy_python.txt
Q: 'str' object is not callable while importing a dataset in a Jupyter notebook. What to do?

I tried to import the dataset in a Jupyter notebook, but it raises the error 'str' object is not callable, even though the path to the file is absolutely okay. Or is there a problem with Anaconda? Help me out! Here is my code after importing the libraries:

```python
df = pd.read_csv('Nutrients.csv')
```

Even though everything is okay, it still shows 'str' object is not callable, and I need to load the dataset.

A: In pandas.read_csv, the string passed in as a parameter is the name of the file. If the file does not exist, then Python just considers the value as a string, which in your case is the same. Try checking the location of the Jupyter notebook you are running the code in and the location of the file you want to access. According to your code, they should be in the same directory.
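For what it's worth, this exact message usually appears when some name in the notebook has been rebound to a string in an earlier cell (for example `pd.read_csv = 'Nutrients.csv'` or `str = '...'`), after which "calling" that name fails; restarting the kernel clears such bindings. A minimal reproduction of the error (the shadowing assignment here is deliberate, standing in for a hypothetical earlier cell):

```python
read_csv = "Nutrients.csv"  # a variable accidentally shadowing a function name

try:
    read_csv("Nutrients.csv")  # now "calls" the string, not a function
except TypeError as exc:
    print(exc)  # 'str' object is not callable
```

Running `del read_csv` (or restarting the kernel) restores access to the original function.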
stackoverflow_0074669070_dataset_pandas_python_read.csv_string.txt
Q: Generate random numbers list with limit on each element and on total Assume I have a list of values, for example: limits = [10, 6, 3, 5, 1] For every item in limits, I need to generate a random number less than or equal to the item. However, the catch is that the sum of elements in the new random list must be equal to a specified total. For example if total = 10, then one possible random list is: random_list = [2, 1, 3, 4, 0] where you see random_list has same length as limits, every element in random_list is less than or equal to the corresponding element in limits, and sum(random_list) = total. How to generate such a list? I am open (and prefer) to use numpy, scipy, or pandas. A: To generate such a list, you can use numpy's random.multinomial function. This function allows you to generate a list of random numbers that sum to a specified total, where each number is chosen from a different bin with a specified size. For example, to generate a list of 5 random numbers that sum to 10, where the first number can be any integer from 0 to 10, the second number can be any integer from 0 to 6, and so on, you can use the following code: import numpy as np limits = [10, 6, 3, 5, 1] total = 10 random_list = np.random.multinomial(total, [1/x for x in limits]) This will generate a list of 5 random numbers that sum to 10 and are less than or equal to the corresponding element in the limits list. Alternatively, you could use numpy's random.randint function to generate random numbers that are less than or equal to the corresponding element in the limits list, and then use a loop to add up the numbers until the sum equals the specified total. 
This approach would look something like this: import numpy as np limits = [10, 6, 3, 5, 1] total = 10 random_list = [] # Generate a random number for each element in limits for limit in limits: random_list.append(np.random.randint(limit)) # Keep adding random numbers until the sum equals the total while sum(random_list) != total: random_list[np.random.randint(len(random_list))] += 1 Both of these approaches should work to generate a list of random numbers that sum to a specified total and are less than or equal to the corresponding element in the limits list. EDIT FOR @gerges To generate a list of random numbers that sum to a specified total and are less than or equal to the corresponding element in the limits list, you can use a combination of the numpy functions random.multinomial and random.randint. Here is an example of how you could do this: import numpy as np limits = [10, 6, 3, 5, 1] total = 10 # Generate a list of random numbers that sum to the total using the multinomial function random_list = np.random.multinomial(total, [1/x for x in limits]) # Use the randint function to ensure that each number is less than or equal to the corresponding limit for i, limit in enumerate(limits): random_list[i] = np.random.randint(random_list[i], limit+1) # Check that the sum of the numbers in the list equals the specified total and that each number is less than or equal to the corresponding limit assert sum(random_list) == total for i, number in enumerate(random_list): assert number <= limits[I] This approach generates a list of random numbers using the multinomial function, and then uses the randint function to ensure that each number is less than or equal to the corresponding limit. This guarantees that the resulting list of numbers will sum to the specified total and will be less than or equal to the corresponding element in the limits list. A: Found what I was looking for: The hypergeometric distribution which is similar to the binomial, but without replacement. 
The distribution available in numpy: import numpy as np gen = np.random.Generator(np.random.PCG64(seed)) random_list = gen.multivariate_hypergeometric(limits, total) # array([4, 4, 1, 1, 0]) Also to make sure I didn't misunderstand the distribution did a sanity check with 10 million samples and check that the maximum is always within the limits res = gen.multivariate_hypergeometric(limits, total, size=10000000) res.max(axis=0) # array([10, 6, 3, 5, 1]) which is same as limits.
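The multivariate hypergeometric draw used in the accepted answer can also be reproduced with only the standard library, which makes the "urn" intuition concrete: put limits[i] balls of color i in an urn and draw total balls without replacement. A minimal sketch (the function name is my own, not from the answers above):

```python
import random
from collections import Counter

def constrained_random_list(limits, total, rng=random):
    # Urn model: limits[i] copies of index i; draw `total` items without replacement.
    urn = [i for i, limit in enumerate(limits) for _ in range(limit)]
    if total > len(urn):
        raise ValueError("total exceeds sum(limits)")
    counts = Counter(rng.sample(urn, total))
    # counts.get(i, 0) can never exceed limits[i], and the counts sum to `total`.
    return [counts.get(i, 0) for i in range(len(limits))]

limits = [10, 6, 3, 5, 1]
result = constrained_random_list(limits, 10)
print(result)  # e.g. [4, 3, 1, 2, 0] -- varies per run
```

Each run satisfies both constraints by construction, which is the same guarantee `gen.multivariate_hypergeometric(limits, total)` gives.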
Generate random numbers list with limit on each element and on total
Assume I have a list of values, for example: limits = [10, 6, 3, 5, 1] For every item in limits, I need to generate a random number less than or equal to the item. However, the catch is that the sum of elements in the new random list must be equal to a specified total. For example if total = 10, then one possible random list is: random_list = [2, 1, 3, 4, 0] where you see random_list has same length as limits, every element in random_list is less than or equal to the corresponding element in limits, and sum(random_list) = total. How to generate such a list? I am open (and prefer) to use numpy, scipy, or pandas.
[ "To generate such a list, you can use numpy's random.multinomial function. This function allows you to generate a list of random numbers that sum to a specified total, where each number is chosen from a different bin with a specified size.\nFor example, to generate a list of 5 random numbers that sum to 10, where the first number can be any integer from 0 to 10, the second number can be any integer from 0 to 6, and so on, you can use the following code:\nimport numpy as np\n\nlimits = [10, 6, 3, 5, 1]\ntotal = 10\n\nrandom_list = np.random.multinomial(total, [1/x for x in limits])\n\n\nThis will generate a list of 5 random numbers that sum to 10 and are less than or equal to the corresponding element in the limits list.\nAlternatively, you could use numpy's random.randint function to generate random numbers that are less than or equal to the corresponding element in the limits list, and then use a loop to add up the numbers until the sum equals the specified total. This approach would look something like this:\nimport numpy as np\n\nlimits = [10, 6, 3, 5, 1]\ntotal = 10\n\nrandom_list = []\n\n# Generate a random number for each element in limits\nfor limit in limits:\n random_list.append(np.random.randint(limit))\n\n# Keep adding random numbers until the sum equals the total\nwhile sum(random_list) != total:\n random_list[np.random.randint(len(random_list))] += 1\n\n\nBoth of these approaches should work to generate a list of random numbers that sum to a specified total and are less than or equal to the corresponding element in the limits list.\nEDIT FOR @gerges\nTo generate a list of random numbers that sum to a specified total and are less than or equal to the corresponding element in the limits list, you can use a combination of the numpy functions random.multinomial and random.randint.\nHere is an example of how you could do this:\nimport numpy as np\n\nlimits = [10, 6, 3, 5, 1]\ntotal = 10\n\n# Generate a list of random numbers that sum to the total using 
the multinomial function\nrandom_list = np.random.multinomial(total, [1/x for x in limits])\n\n# Use the randint function to ensure that each number is less than or equal to the corresponding limit\nfor i, limit in enumerate(limits):\n random_list[i] = np.random.randint(random_list[i], limit+1)\n\n# Check that the sum of the numbers in the list equals the specified total and that each number is less than or equal to the corresponding limit\nassert sum(random_list) == total\nfor i, number in enumerate(random_list):\n assert number <= limits[I]\n\n\nThis approach generates a list of random numbers using the multinomial function, and then uses the randint function to ensure that each number is less than or equal to the corresponding limit. This guarantees that the resulting list of numbers will sum to the specified total and will be less than or equal to the corresponding element in the limits list.\n", "Found what I was looking for: The hypergeometric distribution which is similar to the binomial, but without replacement.\nThe distribution available in numpy:\nimport numpy as np\n\ngen = np.random.Generator(np.random.PCG64(seed))\nrandom_list = gen.multivariate_hypergeometric(limits, total)\n\n# array([4, 4, 1, 1, 0])\n\nAlso to make sure I didn't misunderstand the distribution did a sanity check with 10 million samples and check that the maximum is always within the limits\nres = gen.multivariate_hypergeometric(limits, total, size=10000000) \n\nres.max(axis=0)\n\n# array([10, 6, 3, 5, 1])\n\nwhich is same as limits.\n" ]
[ 1, 1 ]
[]
[]
[ "numpy", "pandas", "python", "scipy" ]
stackoverflow_0074670818_numpy_pandas_python_scipy.txt
Q: How can I load a saved JSON tree with treelib? I have made a Python script wherein I process a big HTML file with BeautifulSoup while I build a tree from it using treelib: http://xiaming.me/treelib/. I have found that this library comes with methods to save the tree file on my system and also to parse it to JSON. But after I do this, how can I load it? It is not efficient to build the same entire tree for each run. I think I can make a function to parse the JSON tree previously written to a file but I just want to be sure if there exists another easy way or not. Thanks in advance A: The simple Answer With this treelib, you can't. As they say in their documentation (http://xiaming.me/treelib/pyapi.html#node-objects): tree.save2file(filename[, nid[, level[, idhidden[, filter[, key[, reverse]]]]]]]) Save the tree into file for offline analysis. It does not contain any JSON parser, so it cannot read the files. What can you do? You have no other option than building the tree each time for every run. Implement a JSON reader that parses the file and creates the tree for you. https://docs.python.org/2/library/json.html A: I have built a small parser for my case. Maybe it works in your case. The node identifiers are named after the tag plus the depth of the node in the tree (tag+depth). 
import json from types import prepare_class from treelib import Node, Tree, node import os file_path = os.path.abspath(os.path.dirname(__file__)) with open(file_path + '\\tree.json') as f: tree_json = json.load(f) tree = Tree() def load_tree(json_tree, depth=0, parent=None): k, value = list(json_tree.items())[0] if parent is None: tree.create_node(tag=str(k), identifier=str(k)+str(depth)) parent = tree.get_node(str(k)+str(depth)) for counter,value in enumerate(json_tree[k]['children']): if isinstance(json_tree[k]['children'][counter], str): tree.create_node(tag=value, identifier=value+str(depth), parent=parent) else: tree.create_node(tag=list(value)[0], identifier=list(value)[0]+str(depth), parent=parent) load_tree(json_tree[k]['children'][counter], depth+1, tree.get_node(list(value)[0]+str(depth)) ) load_tree(tree_json) A: I have created a function to convert json to a tree: from treelib import Node, Tree, node def create_node(tree, s, counter_byref, verbose, parent_id=None): node_id = counter_byref[0] if verbose: print(f"tree.create_node({s}, {node_id}, parent={parent_id})") tree.create_node(s, node_id, parent=parent_id) counter_byref[0] += 1 return node_id def to_compact_string(o): if type(o) == dict: if len(o)>1: raise Exception() k,v =next(iter(o.items())) return f'{k}:{to_compact_string(v)}' elif type(o) == list: if len(o)>1: raise Exception() return f'[{to_compact_string(next(iter(o)))}]' else: return str(o) def to_compact(tree, o, counter_byref, verbose, parent_id): try: s = to_compact_string(o) if verbose: print(f"# to_compact({o}) ==> [{s}]") create_node(tree, s, counter_byref, verbose, parent_id=parent_id) return True except: return False def json_2_tree(o , parent_id=None, tree=None, counter_byref=[0], verbose=False, compact_single_dict=False, listsNodeSymbol='+'): if tree is None: tree = Tree() parent_id = create_node(tree, '+', counter_byref, verbose) if compact_single_dict and to_compact(tree, o, counter_byref, verbose, parent_id): # no need to do 
more, inserted as a single node pass elif type(o) == dict: for k,v in o.items(): if compact_single_dict and to_compact(tree, {k:v}, counter_byref, verbose, parent_id): # no need to do more, inserted as a single node continue key_nd_id = create_node(tree, str(k), counter_byref, verbose, parent_id=parent_id) if verbose: print(f"# json_2_tree({v})") json_2_tree(v , parent_id=key_nd_id, tree=tree, counter_byref=counter_byref, verbose=verbose, listsNodeSymbol=listsNodeSymbol, compact_single_dict=compact_single_dict) elif type(o) == list: if listsNodeSymbol is not None: parent_id = create_node(tree, listsNodeSymbol, counter_byref, verbose, parent_id=parent_id) for i in o: if compact_single_dict and to_compact(tree, i, counter_byref, verbose, parent_id): # no need to do more, inserted as a single node continue if verbose: print(f"# json_2_tree({i})") json_2_tree(i , parent_id=parent_id, tree=tree, counter_byref=counter_byref, verbose=verbose,listsNodeSymbol=listsNodeSymbol, compact_single_dict=compact_single_dict) else: #node create_node(tree, str(o), counter_byref, verbose, parent_id=parent_id) return tree Then for example: import json j = json.loads('{"2": 3, "4": [5, 6], "7": {"8": 9}}') json_2_tree(j ,verbose=False,listsNodeSymbol='+' ).show() gives: + ├── 2 │ └── 3 ├── 4 │ └── + │ ├── 5 │ └── 6 └── 7 └── 8 └── 9 While json_2_tree(j ,listsNodeSymbol=None, verbose=False ).show() + ├── 2 │ └── 3 ├── 4 │ ├── 5 │ └── 6 └── 7 └── 8 └── 9 And json_2_tree(j ,compact_single_dict=True,listsNodeSymbol=None).show() + ├── 2:3 ├── 4 │ ├── 5 │ └── 6 └── 7:8:9 As you see, there are different trees one can make depending on how explicit vs. compact he wants to be.
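The nested {tag: {'children': [...]}} shape that the second answer's parser assumes can also be walked with the standard library alone, without treelib. A sketch that flattens such a document into (parent, child) edges — the edge-list representation is my own choice, not part of treelib's API:

```python
import json

def tree_edges(node, parent=None, edges=None):
    """Walk a nested {tag: {"children": [...]}} dict and collect (parent, child) edges.
    Leaves may appear as bare strings inside "children"."""
    if edges is None:
        edges = []
    if isinstance(node, str):
        edges.append((parent, node))
        return edges
    tag, payload = next(iter(node.items()))
    edges.append((parent, tag))
    for child in payload.get("children", []):
        tree_edges(child, parent=tag, edges=edges)
    return edges

doc = '{"root": {"children": ["a", {"b": {"children": ["c", "d"]}}]}}'
edges = tree_edges(json.loads(doc))
print(edges)
# [(None, 'root'), ('root', 'a'), ('root', 'b'), ('b', 'c'), ('b', 'd')]
```

From an edge list like this, rebuilding the tree in treelib (or any other structure) is a matter of calling create_node once per edge, which is essentially what the parser above does inline.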
How can I load a saved JSON tree with treelib?
I have made a Python script wherein I process a big html with BeautifulSoup while I build a tree from it using treelib: http://xiaming.me/treelib/. I have found that this library comes with methods to save the tree file on my system and also parsing it to JSON. But after I do this, how can I load it? It is not efficient to build the same entire tree for each run. I think I can make a function to parse the JSON tree previously written to a file but I just want to be sure if there exists another easy way or not. Thanks in advance
[ "The simple Answer\nWith this treelib, you can't.\nAs they say in their documentation (http://xiaming.me/treelib/pyapi.html#node-objects):\ntree.save2file(filename[, nid[, level[, idhidden[, filter[, key[, reverse]]]]]]])\n Save the tree into file for offline analysis.\n\nIt does not contain any JSON-Parser, so it can not read the files. \nWhat can you do?\nYou have no other option as building the tree each time for every run. \nImplement a JSON-Reader that parses the file and creates the tree for you.\nhttps://docs.python.org/2/library/json.html\n", "I have built a small parser for my case. Maybe it works in your case.\nThe note identifiers are named after the tag plus the depth of the node in the tree (tag+depth).\nimport json\nfrom types import prepare_class\nfrom treelib import Node, Tree, node\nimport os\n\nfile_path = os.path.abspath(os.path.dirname(__file__))\n\nwith open(file_path + '\\\\tree.json') as f:\n tree_json = json.load(f)\n\ntree = Tree()\n\ndef load_tree(json_tree, depth=0, parent=None):\n k, value = list(json_tree.items())[0]\n \n if parent is None:\n tree.create_node(tag=str(k), identifier=str(k)+str(depth))\n parent = tree.get_node(str(k)+str(depth))\n\n for counter,value in enumerate(json_tree[k]['children']): \n if isinstance(json_tree[k]['children'][counter], str):\n tree.create_node(tag=value, identifier=value+str(depth), parent=parent)\n else:\n tree.create_node(tag=list(value)[0], identifier=list(value)[0]+str(depth), parent=parent)\n load_tree(json_tree[k]['children'][counter], depth+1, tree.get_node(list(value)[0]+str(depth)) )\n\nload_tree(tree_json)\n\n", "I have created a function to convert json to a tree:\nfrom treelib import Node, Tree, node\n\ndef create_node(tree, s, counter_byref, verbose, parent_id=None):\n node_id = counter_byref[0]\n if verbose:\n print(f\"tree.create_node({s}, {node_id}, parent={parent_id})\")\n tree.create_node(s, node_id, parent=parent_id)\n counter_byref[0] += 1\n return node_id\n\ndef 
to_compact_string(o):\n if type(o) == dict:\n if len(o)>1:\n raise Exception()\n k,v =next(iter(o.items()))\n return f'{k}:{to_compact_string(v)}'\n elif type(o) == list:\n if len(o)>1:\n raise Exception()\n return f'[{to_compact_string(next(iter(o)))}]'\n else:\n return str(o)\n\ndef to_compact(tree, o, counter_byref, verbose, parent_id):\n try:\n s = to_compact_string(o)\n if verbose:\n print(f\"# to_compact({o}) ==> [{s}]\")\n create_node(tree, s, counter_byref, verbose, parent_id=parent_id)\n return True\n except:\n return False\n\ndef json_2_tree(o , parent_id=None, tree=None, counter_byref=[0], verbose=False, compact_single_dict=False, listsNodeSymbol='+'):\n if tree is None:\n tree = Tree()\n parent_id = create_node(tree, '+', counter_byref, verbose)\n if compact_single_dict and to_compact(tree, o, counter_byref, verbose, parent_id):\n # no need to do more, inserted as a single node\n pass\n elif type(o) == dict:\n for k,v in o.items():\n if compact_single_dict and to_compact(tree, {k:v}, counter_byref, verbose, parent_id):\n # no need to do more, inserted as a single node\n continue\n key_nd_id = create_node(tree, str(k), counter_byref, verbose, parent_id=parent_id)\n if verbose:\n print(f\"# json_2_tree({v})\")\n json_2_tree(v , parent_id=key_nd_id, tree=tree, counter_byref=counter_byref, verbose=verbose, listsNodeSymbol=listsNodeSymbol, compact_single_dict=compact_single_dict)\n elif type(o) == list:\n if listsNodeSymbol is not None:\n parent_id = create_node(tree, listsNodeSymbol, counter_byref, verbose, parent_id=parent_id)\n for i in o:\n if compact_single_dict and to_compact(tree, i, counter_byref, verbose, parent_id):\n # no need to do more, inserted as a single node\n continue\n if verbose:\n print(f\"# json_2_tree({i})\")\n json_2_tree(i , parent_id=parent_id, tree=tree, counter_byref=counter_byref, verbose=verbose,listsNodeSymbol=listsNodeSymbol, compact_single_dict=compact_single_dict)\n else: #node\n create_node(tree, str(o), counter_byref, 
verbose, parent_id=parent_id)\n return tree\n\nThen for example:\nimport json\nj = json.loads('{\"2\": 3, \"4\": [5, 6], \"7\": {\"8\": 9}}')\njson_2_tree(j ,verbose=False,listsNodeSymbol='+' ).show() \n\ngives:\n+\n├── 2\n│ └── 3\n├── 4\n│ └── +\n│ ├── 5\n│ └── 6\n└── 7\n └── 8\n └── 9\n\nWhile\njson_2_tree(j ,listsNodeSymbol=None, verbose=False ).show() \n\n+\n├── 2\n│ └── 3\n├── 4\n│ ├── 5\n│ └── 6\n└── 7\n └── 8\n └── 9\n\nAnd\njson_2_tree(j ,compact_single_dict=True,listsNodeSymbol=None).show() \n\n+\n├── 2:3\n├── 4\n│ ├── 5\n│ └── 6\n└── 7:8:9\n\nAs you see, there are different trees one can make depending on how explicit vs. compact he wants to be.\n" ]
[ 1, 1, 0 ]
[]
[]
[ "json", "python", "tree" ]
stackoverflow_0035031748_json_python_tree.txt
Q: Extract a value from a JSON string stored in a pandas data frame column I have a pandas dataframe with a column named json2 which contains a json string coming from an API call: "{'obj': [{'timestp': '2022-12-03', 'followers': 281475, 'avg_likes_per_post': 7557, 'avg_comments_per_post': 182, 'avg_views_per_post': 57148, 'engagement_rate': 2.6848}, {'timestp': '2022-12-02', 'followers': 281475, 'avg_likes_per_post': 7557, 'avg_comments_per_post': 182, 'avg_views_per_post': 57148, 'engagement_rate': 2.6848}]}" I want to make a function that iterates over the column and extracts the number of followers if the timestp matches with a given date def get_followers(x): if x['obj']['timestp']=='2022-12-03': return x['obj']['followers'] df['date'] = df['json2'].apply(get_followers) I should get 281475 as the value in the column date but I got an error: "list indices must be integers or slices, not str" What am I doing wrong? Thank you in advance A: The key named obj occurs in a list of dictionaries. Before you access another key, you must also specify the index of the list element. import ast df['json2']=df['json2'].apply(ast.literal_eval) #if the dictionary's type is string, convert to dictionary. def get_followers(x): if x['obj'][0]['timestp']=='2022-12-03': return x['obj'][0]['followers'] df['date'] = df['json2'].apply(get_followers) You can also use this; it does the same job as the function you are using: df['date'] = df['json2'].apply(lambda x: x['obj'][0]['followers'] if x['obj'][0]['timestp']=='2022-12-03' else None) for a list of dicts: def get_followers(x): for i in x['obj']: if i['timestp'] == '2022-12-03': return i['followers'] break df['date'] = df['json2'].apply(get_followers)
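Note that because the API payload uses single quotes it is not valid JSON, so json.loads would fail; that is why the answer reaches for ast.literal_eval. A pandas-free sketch of the same lookup, using the loop-based variant so any number of entries works (the sample payload here is my own abbreviation of the one in the question):

```python
import ast

raw = ("{'obj': [{'timestp': '2022-12-03', 'followers': 281475}, "
       "{'timestp': '2022-12-02', 'followers': 281475}]}")

def get_followers(cell, date):
    # The cell may still be a string: parse it first, then scan the list of dicts.
    record = ast.literal_eval(cell) if isinstance(cell, str) else cell
    for entry in record['obj']:
        if entry['timestp'] == date:
            return entry['followers']
    return None  # no entry for that date

print(get_followers(raw, '2022-12-03'))  # 281475
print(get_followers(raw, '2022-11-30'))  # None
```

With pandas this is the same function applied per cell: df['date'] = df['json2'].apply(get_followers, date='2022-12-03') would be one way to pass the target date through.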
Extract a value from a JSON string stored in a pandas data frame column
I have a pandas dataframe with a column named json2 which contains a json string coming from an API call: "{'obj': [{'timestp': '2022-12-03', 'followers': 281475, 'avg_likes_per_post': 7557, 'avg_comments_per_post': 182, 'avg_views_per_post': 57148, 'engagement_rate': 2.6848}, {'timestp': '2022-12-02', 'followers': 281475, 'avg_likes_per_post': 7557, 'avg_comments_per_post': 182, 'avg_views_per_post': 57148, 'engagement_rate': 2.6848}]}" I want to make a function that iterates over the column and extracts the number of followers if the timestp matches with a given date def get_followers(x): if x['obj']['timestp']=='2022-12-03': return x['obj']['followers'] df['date'] = df['json2'].apply(get_followers) I should get 281475 as value in the column date but I got an error: "list indices must be integers or slices, not str" What I'm doing wrong? Thank you in advance
[ "The key named obj occurs in list of dictionaries. Before you define another key, you must also specify the index of the list element.\nimport ast\ndf['json2']=df['json2'].apply(ast.literal_eval) #if dictionary's type is string, convert to dictionary.\n\ndef get_followers(x):\n if x['obj'][0]['timestp']=='2022-12-03':\n return x['obj'][0]['followers']\n\ndf['date'] = df['json2'].apply(get_followers)\n\nAlso you can use this too. This does the same job as the function you are using:\ndf['date'] = df['json2'].apply(lambda x: x['obj'][0]['followers'] if x['obj'][0]['timestp']=='2022-12-03' else None)\n\nfor list of dicts:\ndef get_followers(x):\n for i in x['obj']:\n if i['timestp'] == '2022-12-03':\n return i['followers']\n break\n \ndf['date'] = df['json2'].apply(get_followers)\n\n" ]
[ 1 ]
[]
[]
[ "dictionary", "json", "pandas", "python" ]
stackoverflow_0074670977_dictionary_json_pandas_python.txt
Q: function that returns the length of the longest run of repetition in a given list I'm trying to write a function that returns the length of the longest run of repetition in a given list Here is my code: ` def longest_repetition(a): longest = 0 j = 0 run2 = 0 while j <= len(a)-1: for i in a: run = a.count(a[j] == i) if run == 1: run2 += 1 if run2 > longest: longest = run2 j += 1 run2 = 0 return longest print(longest_repetition([4,1,2,4,7,9,4])) print(longest_repetition([5,3,5,6,9,4,4,4,4])) 3 0 ` The first test function works fine, but the second test function is not counting at all and I'm not sure why. Any insight is much appreciated Edit: Just noticed that the question I was given and the expected results are not consistent. So what I'm basically trying to do is find the most repeated element in a list and the output would be the number of times it is repeated. That said, the output for the second test function should be 4 because the element '4' is repeated four times (elements are not required to be in one run as implied in my original question) A: First of all, let's check if you were consistent with your question (function that returns the length of the longest run of repetition): e.g.: a = [4,1,2,4,7,9,4] b = [5,3,5,6,9,4,4,4,4] (assuming you are only checking a single position, e.g. c = [1,2,3,1,2,3] could have one repetition of the sequence 1,2,3 - I am assuming that is not your goal) So: for a, there are no repetitions of the same value, therefore the length equals 0 for b, you have one quadruple repetition of 4, therefore the length equals 4 First, your max_amount_of_repetitions=0 and current_repetitions_run=0. So, what you need to do to detect a repetition is simply check if the values of the (n-1)'th and n'th elements are the same. If so, you increment current_repetitions_run; else, you reset current_repetitions_run=0. 
The last step is to check whether your current run is the longest of all: max_amount_of_repetitions= max(max_amount_of_repetitions, current_repetitions_run) To be sure both n-1 and n stay within your list range, I'd simply start the iteration from the second element. That way, n-1 is the first element. for n in range(1,len(a)): if a[n-1] == a[n]: print("I am sure, you can figure out the rest") A: You can use a hash map to calculate the frequency of each element and then take the max of the frequencies. Using a functional approach: from collections import Counter def longest_repitition(array): return max(Counter(array).values()) Another way, without using Counter: def longest_repitition(array): freq = {} for val in array: if val not in freq: freq[val] = 0 freq[val] += 1 values = freq.values() return max(values)
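Since the question conflates two different quantities — the longest consecutive run and the highest overall frequency — a short sketch of both makes the difference on the two test lists visible (the function names are my own):

```python
from collections import Counter
from itertools import groupby

def longest_consecutive_run(a):
    # Length of the longest stretch of equal *adjacent* elements.
    return max((sum(1 for _ in group) for _, group in groupby(a)), default=0)

def highest_frequency(a):
    # Count of the most common element anywhere in the list.
    return max(Counter(a).values(), default=0)

print(longest_consecutive_run([4, 1, 2, 4, 7, 9, 4]))        # 1
print(highest_frequency([4, 1, 2, 4, 7, 9, 4]))              # 3
print(longest_consecutive_run([5, 3, 5, 6, 9, 4, 4, 4, 4]))  # 4
print(highest_frequency([5, 3, 5, 6, 9, 4, 4, 4, 4]))        # 4
```

The asker's edit says the expected outputs are 3 and 4, which matches highest_frequency, not the "run" wording of the title.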
function that returns the length of the longest run of repetition in a given list
im trying to write a function that returns the length of the longest run of repetition in a given list Here is my code: ` def longest_repetition(a): longest = 0 j = 0 run2 = 0 while j <= len(a)-1: for i in a: run = a.count(a[j] == i) if run == 1: run2 += 1 if run2 > longest: longest = run2 j += 1 run2 = 0 return longest print(longest_repetition([4,1,2,4,7,9,4])) print(longest_repetition([5,3,5,6,9,4,4,4,4])) 3 0 ` The first test function works fine, but the second test function is not counting at all and I'm not sure why. Any insight is much appreciated Edit: Just noticed that the question I was given and the expected results are not consistent. So what I'm basically trying to do is find the most repeated element in a list and the output would be the number of times it is repeated. That said, the output for the second test function should be 4 because the element '4' is repeated four times (elements are not required to be in one run as implied in my original question)
[ "First of all, let's check if you were consistent with your question (function that returns the length of the longest run of repetition):\ne.g.:\na = [4,1,2,4,7,9,4]\nb = [5,3,5,6,9,4,4,4,4]\n(assuming, you are only checking single position, e.g. c = [1,2,3,1,2,3] could have one repetition of sequence 1,2,3 - i am assuming that is not your goal)\nSo:\nfor a, there is no repetitions of same value, therefore length equals 0\nfor b, you have one, quadruple repetition of 4, therefore length equals 4\nFirst, your max_amount_of_repetitions=0 and current_repetitions_run=0' So, what you need to do to detect repetition is simply check if value of n-1'th and n'th element is same. If so, you increment current_repetitions_run', else, you reset current_repetitions_run=0.\nLast step is check if your current run is longest of all:\nmax_amount_of_repetitions= max(max_amount_of_repetitions, current_repetitions_run)\nto surely get both n-1 and n within your list range, I'd simply start iteration from second element. That way, n-1 is first element.\nfor n in range(1,len(a)):\n if a[n-1] == a[n]:\n print(\"I am sure, you can figure out the rest\")\n\n", "you can use hash to calculate the frequency of the element and then get the max of frequencies.\nusing functional approach\nfrom collections import Counter\ndef longest_repitition(array):\n return max(Counter(array).values())\n\nother way, without using Counter\ndef longest_repitition(array):\n freq = {}\n for val in array:\n if val not in freq:\n freq[val] = 0\n freq[val] += 1\n values = freq.values()\n return max(values)\n\n" ]
[ 0, 0 ]
[]
[]
[ "list", "python" ]
stackoverflow_0074670644_list_python.txt
Q: Python array, get item on position with variable vin = txid['vin'][0]['txid'] How I get something like: vout = 3 vin = txid['vin'][vout]['txid'] I assume it won't work like this.... A: You can even use it as user input. No problem at all. txid = {'vin': [{'txid' : 10}, {'txid' : 20}, {'txid' : 30}, {'txid' : 40}]} vin = txid['vin'][0]['txid'] print(vin) vout = 3 vin = txid['vin'][vout]['txid'] print(vin) Output: 10 40
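One thing worth guarding when the index comes from elsewhere (user input, another API field) is range: an out-of-bounds vout raises IndexError. A small sketch with an explicit bounds check (the helper name is my own):

```python
txid = {'vin': [{'txid': 10}, {'txid': 20}, {'txid': 30}, {'txid': 40}]}

def vin_at(tx, vout):
    vin_list = tx['vin']
    if not 0 <= vout < len(vin_list):
        raise IndexError(f"vout {vout} out of range (0..{len(vin_list) - 1})")
    return vin_list[vout]['txid']

print(vin_at(txid, 3))  # 40
```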
Python array, get item on position with variable
vin = txid['vin'][0]['txid'] How I get something like: vout = 3 vin = txid['vin'][vout]['txid'] I assume it won't work like this....
[ "You can even use it as user input. No problem at all.\ntxid = {'vin': [{'txid' : 10}, {'txid' : 20}, {'txid' : 30}, {'txid' : 40}]}\n\nvin = txid['vin'][0]['txid']\nprint(vin)\n\nvout = 3\nvin = txid['vin'][vout]['txid']\nprint(vin)\n\nOutput:\n10\n40\n\n" ]
[ 0 ]
[]
[]
[ "arrays", "python" ]
stackoverflow_0074671001_arrays_python.txt
Q: (flask + socket.IO) Result of emit callback is the response of my REST endpoint Just to give a context here, I'm a node.JS developer, but I'm on a project where I need to work with Python using the Flask framework. The problem is, when a client makes a request to an endpoint of my REST flask app, I need to emit an event using socket.IO, and get some data from the socket server, then this data is the response of the endpoint. But I haven't figured out how to send this, because flask needs a "return" statement saying what is the response, and my callback is in another context. Sample of what I'm trying to do: (There are some comments explaining) import socketio import eventlet from flask import Flask, request sio = socketio.Server() app = Flask(__name__) @app.route('/test/<param>') def get(param): def ack(data): print (data) #Should be the response sio.emit('event', param, callback=ack) # Socket server calls my ack function #Without a return statement, the endpoint returns 500 if __name__ == '__main__': app = socketio.Middleware(sio, app) eventlet.wsgi.server(eventlet.listen(('', 8000)), app) Maybe, the right question here is: Is this possible?
Something like this: from threading import Event from flask import jsonify @app.route('/test/<param>') def get(param): ev = Event() result = None def ack(data): nonlocal result nonlocal ev result = {'data': data} ev.set() # unblock HTTP route sio.emit('event', param, room=some_client_sid, callback=ack) ev.wait() # blocks until ev.set() is called return jsonify(result) A: I had a similar problem using FastAPI + socketIO (async version) and I was stuck at the exact same point. No eventlet, so I could not try out the monkey patching option. After a lot of head banging it turns out that, for some reason, adding asyncio.sleep(.1) just before ev.wait() made everything work smoothly. Without that, the emitted event actually never reaches the other side (the socketio client, in my scenario).
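The blocking pattern in the first answer is independent of Flask or Socket.IO; it can be exercised with plain threads, which is a quick way to convince yourself the Event plumbing works. In this sketch the simulated emit is my own stand-in for sio.emit, delivering the ack from another thread the way a real Socket.IO acknowledgement would arrive:

```python
import threading

def fake_emit(event, data, callback):
    # Stand-in for sio.emit: invoke the callback from another thread after a delay.
    def deliver():
        callback(f"ack for {event}: {data}")
    threading.Timer(0.05, deliver).start()

def blocking_request(param, timeout=2.0):
    ev = threading.Event()
    result = {}

    def ack(data):
        result['data'] = data
        ev.set()  # unblock the waiting request handler

    fake_emit('event', param, callback=ack)
    if not ev.wait(timeout):  # a timeout keeps a lost ack from hanging the worker forever
        return {'error': 'ack timed out'}
    return result

print(blocking_request('hello'))  # {'data': 'ack for event: hello'}
```

The timeout argument to ev.wait is worth carrying over into the real Flask handler: without it, a client whose ack never arrives ties up a server worker indefinitely.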
(flask + socket.IO) Result of emit callback is the response of my REST endpoint
Just to give a context here, I'm a node.JS developer, but I'm on a project that I need to work with Python using Flask framework. The problem is, when a client request to an endpoint of my rest flask app, I need to emit an event using socket.IO, and get some data from the socket server, then this data is the response of the endpoint. But I didn't figured out how to send this, because flask needs a "return" statement saying what is the response, and my callback is in another context. Sample of what I'm trying to do: (There's some comments explaining) import socketio import eventlet from flask import Flask, request sio = socketio.Server() app = Flask(__name__) @app.route('/test/<param>') def get(param): def ack(data): print (data) #Should be the response sio.emit('event', param, callback=ack) # Socket server call my ack function #Without a return statement, the endpoint return 500 if __name__ == '__main__': app = socketio.Middleware(sio, app) eventlet.wsgi.server(eventlet.listen(('', 8000)), app) Maybe, the right question here is: Is this possible?
[ "I'm going to give you one way to implement what you want specifically, but I believe you have an important design flaw in this, as I explain in a comment above. In the way you have this coded, your socketio.Server() object will broadcast to all your clients, so will not be able to get a callback. If you want to emit to one client (hopefully not the same one that sent the HTTP request), then you need to add a room=client_sid argument to the emit. Or, if you are contacting a Socket.IO server, then you need to use a Socket.IO client here, not a server.\nIn any case, to block your HTTP route until the callback function is invoked, you can use an Event object. Something like this:\nfrom threading import Event\nfrom flask import jsonify\n\[email protected]('/test/<param>')\ndef get(param):\n ev = threading.Event()\n result = None\n\n def ack(data):\n nonlocal result\n nonlocal ev\n\n result = {'data': data}\n ev.set() # unblock HTTP route\n\n sio.emit('event', param, room=some_client_sid, callback=ack)\n ev.wait() # blocks until ev.set() is called\n return jsonify(result)\n\n", "I had a similar problem using FastAPI + socketIO (async version) and I was stuck at the exact same point. No eventlet so could not try out the monkey patching option.\nAfter a lot of head bangings it turns out that, for some reason, adding asyncio.sleep(.1) just before ev.wait() made everything work smoothly. Without that, emitted event actually never reach the other side (socketio client, in my scenario)\n" ]
[ 2, 0 ]
[]
[]
[ "flask", "flask_socketio", "python", "socket.io" ]
stackoverflow_0043301977_flask_flask_socketio_python_socket.io.txt
Q: Reversed double linked list by python Why can't I print this doubly linked list reversed in Python? It always prints 6 or None. Please can anyone help me quickly so I can pass this task? /////////////////////////////////////////////////////////////////////////// class Node: def __init__(self, data=None, next=None, prev=None): self.data = data self.next = next self.previous = prev sample methods==> def set_data(self, newData): self.data = newData def get_data(self): return self.data def set_next(self, newNext): self.next = newNext def get_next(self): return self.next def hasNext(self): return self.next is not None def set_previous(self, newprev): self.previous = newprev def get_previous(self): return self.previous def hasPrevious(self): return self.previous is not None class double===> class DoubleLinkedList: def __init__(self): self.head = None self.tail = None def addAtStart(self, item): newNode = Node(item) if self.head is None: self.head = self.tail = newNode else: newNode.set_next(self.head) newNode.set_previous(None) self.head.set_previous(newNode) self.head = newNode def size(self): current = self.head count = 0 while current is not None: count += 1 current = current.get_next() return count here is the wrong method ==> try to fix it without more changes def printReverse(self): current = self.head while current: temp = current.next current.next = current.previous current.previous = temp current = current.previous temp = self.head self.head = self.tail self.tail = temp print("Nodes of doubly linked list reversed: ") while current is not None: print(current.data), current = current.get_next() call methods==> new = DoubleLinkedList() new.addAtStart(1) new.addAtStart(2) new.addAtStart(3) new.printReverse()
The error in your code is that the final loop has a condition that is guaranteed to be false: current is always None when it reaches that loop, so nothing gets printed there. This is easily fixed by initialising current just before the loop with:

    current = self.head

That fixes your issue, but it is not nice to have a function that both reverses the list and prints it. It is better practice to separate these two tasks. The method that reverses the list could be named reverse. Then add another method that allows iteration of the values in the list. This is done by defining __iter__. The caller can then easily print the list with that iterator.

Here is how that looks:

    def reverse(self):
        current = self.head
        while current:
            current.previous, current.next = current.next, current.previous
            current = current.previous
        self.head, self.tail = self.tail, self.head

    def __iter__(self):
        node = self.head
        while node:
            yield node.data
            node = node.next

    def __repr__(self):
        return "->".join(map(repr, self))

The main program can then be:

    lst = DoubleLinkedList()
    lst.addAtStart(1)
    lst.addAtStart(2)
    lst.addAtStart(3)
    print(lst)
    lst.reverse()
    print(lst)
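For reference, the fixed pieces combine into one self-contained script. This is a condensed sketch: the setter/getter helpers from the question are dropped for brevity, and list() over the iterator is used instead of printing.

    class Node:
        def __init__(self, data):
            self.data = data
            self.next = None
            self.previous = None

    class DoubleLinkedList:
        def __init__(self):
            self.head = None
            self.tail = None

        def addAtStart(self, item):
            # Insert a new node before the current head
            node = Node(item)
            if self.head is None:
                self.head = self.tail = node
            else:
                node.next = self.head
                self.head.previous = node
                self.head = node

        def reverse(self):
            # Swap next/previous on every node, then swap head and tail
            current = self.head
            while current:
                current.previous, current.next = current.next, current.previous
                current = current.previous
            self.head, self.tail = self.tail, self.head

        def __iter__(self):
            node = self.head
            while node:
                yield node.data
                node = node.next

    lst = DoubleLinkedList()
    for x in (1, 2, 3):
        lst.addAtStart(x)
    print(list(lst))   # [3, 2, 1]
    lst.reverse()
    print(list(lst))   # [1, 2, 3]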
stackoverflow_0074670265_linked_list_python.txt
Q: Stuck on Python "KeyError: " in BFS code of a water jug scenario

Intended function of the code: take a user input for the volume of 3 jars (1-9) and output the volumes, with one of the jars containing the target volume. Jars can be emptied/filled, or poured from one jar to another until one is empty or full. With the code I have, I'm stuck on a key exception error. The target volume is 4 for this case.

Code:

    class Graph:
        class GraphNode:
            def __init__(self, jar1 = 0, jar2 = 0, jar3 = 0, color = "white", pi = None):
                self.jar1 = jar1
                self.jar2 = jar2
                self.jar3 = jar3
                self.color = color
                self.pi = pi

            def __repr__(self):
                return str(self)

        def __init__(self, jl1 = 0, jl2 = 0, jl3 = 0, target = 0):
            self.jl1 = jl1
            self.jl2 = jl2
            self.jl3 = jl3
            self.target = target
            self.V = {}
            for x in range(jl1 + 1):
                for y in range(jl2 + 1):
                    for z in range(jl3 + 1):
                        node = Graph.GraphNode(x, y, z, "white", None)
                        self.V[node] = None

        def isFound(self, a: GraphNode) -> bool:
            if self.target in [a.jar1, a.jar2, a.jar3]:
                return True
            return False
            pass

        def isAdjacent(self, a: GraphNode, b: GraphNode) -> bool:
            if self.V[a]==b:
                return True
            return False
            pass

        def BFS(self) -> []:
            start = Graph.GraphNode(0, 0, 0, "white")
            queue = []
            queue.append(start)
            while len(queue) > 0:
                u = queue.pop(0)
                for v in self.V:
                    if self.isAdjacent(u, v):
                        if v.color == "white":
                            v.color == "gray"
                            v.pi = u
                            if self.isFound(v):
                                output = []
                                while v.pi is not None:
                                    output.insert(0, v)
                                    v = v.pi
                                return output
                            else:
                                queue.append(v)
                u.color = "black"
            return []

    #######################################################
    j1 = input("Size of first jar: ")
    j2 = input("Size of second jar: ")
    j3 = input("Size of third jar: ")
    t = input("Size of target: ")
    jar1 = int(j1)
    jar2 = int(j2)
    jar3 = int(j3)
    target = int(t)

    graph1 = Graph(jar1, jar2, jar3, target)
    output = graph1.BFS()
    print(output)

Error:

    line 37, in isAdjacent
        if self.V[a]==b:
    KeyError: <exception str() failed>

A: Strange but when I first ran this in the IPython interpreter I got a different
exception:

    ... :35, in Graph.isAdjacent(self, a, b)
         34     def isAdjacent(self, a: GraphNode, b: GraphNode) -> bool:
    ---> 35         if self.V[a]==b:
         36             return True
         37         return False

    <class 'str'>: (<class 'RecursionError'>, RecursionError('maximum recursion depth exceeded while getting the str of an object'))

When I run it as a script or in the normal interpreter I do get the same one you had:

    ... line 35, in isAdjacent
        if self.V[a]==b:
    KeyError: <exception str() failed>

I'm not sure what this means so I ran the debugger and got this:

      File "/Users/.../stackoverflow/bfs1.py", line 1, in <module>
        class Graph:
      File "/Users/.../stackoverflow/bfs1.py", line 47, in BFS
        if self.isAdjacent(u,v):
      File "/Users/.../stackoverflow/bfs1.py", line 35, in isAdjacent
        if self.V[a]==b:
    KeyError: <unprintable KeyError object>
    Uncaught exception. Entering post mortem debugging
    Running 'cont' or 'step' will restart the program
    > /Users/.../stackoverflow/bfs1.py(35)isAdjacent()
    -> if self.V[a]==b:
    (Pdb) type(a)
    <class '__main__.Graph.GraphNode'>
    (Pdb) str(a)
    *** RecursionError: maximum recursion depth exceeded while calling a Python object

So it does seem like a maximum recursion error. (The error message you originally got is not very helpful.) But the words <unprintable KeyError object> are a clue. It looks like it was not able to display the KeyError exception...

The culprit is this line in your class definition:

    def __repr__(self):
        return str(self)

What were you trying to do here? The __repr__ function is called when the class is asked to produce a string representation of itself. But yours calls the string function on the instance of the class, so it will call itself! So I think you actually generated a second exception while the debugger was trying to display the first!
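The self-call can be reproduced in isolation. This is a minimal sketch, independent of the question's code:

    class Bad:
        def __repr__(self):
            # No __str__ is defined, so str(self) falls back to __repr__,
            # which calls str(self) again: unbounded recursion.
            return str(self)

    try:
        repr(Bad())
    except RecursionError as e:
        print("RecursionError:", e)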
I replaced these lines with

    def __repr__(self):
        return f"GraphNode({self.jar1}, {self.jar2}, {self.jar3}, {self.color}, {self.pi})"

and I don't get the exception now:

    Size of first jar: 1
    Size of second jar: 3
    Size of third jar: 6
    Size of target: 4
    Traceback (most recent call last):
      File "/Users/.../stackoverflow/bfs1.py", line 77, in <module>
        output = graph1.BFS()
      File "/Users/.../stackoverflow/bfs1.py", line 45, in BFS
        if self.isAdjacent(u,v):
      File "/Users/.../stackoverflow/bfs1.py", line 33, in isAdjacent
        if self.V[a]==b:
    KeyError: GraphNode(0, 0, 0, white, None)

This exception is easier to interpret. Now it's over to you to figure out why this GraphNode was not found in the keys of self.V!
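A likely reason for that remaining KeyError, sketched below: GraphNode defines neither __eq__ nor __hash__, so dictionary lookups use object identity, and a freshly constructed node can never match the equal-valued nodes stored as keys in self.V. The HashableNode name here is hypothetical, just to illustrate the usual fix:

    class GraphNode:
        def __init__(self, jar1=0, jar2=0, jar3=0):
            self.jar1, self.jar2, self.jar3 = jar1, jar2, jar3

    a = GraphNode(0, 0, 0)
    b = GraphNode(0, 0, 0)
    print(a == b)      # False: default equality is identity
    d = {a: None}
    print(b in d)      # False: so d[b] raises KeyError

    class HashableNode(GraphNode):
        def __eq__(self, other):
            return (self.jar1, self.jar2, self.jar3) == (other.jar1, other.jar2, other.jar3)

        def __hash__(self):
            return hash((self.jar1, self.jar2, self.jar3))

    x = HashableNode(0, 0, 0)
    y = HashableNode(0, 0, 0)
    print(y in {x: None})  # True: equal-by-value keys now match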
stackoverflow_0074664111_breadth_first_search_graph_traversal_python.txt
Q: Extraction multiple data points from a long sentence/paragraph

I was looking for an approach or any useful libraries to extract multiple data points that correspond to different years from a single paragraph. For example:

    The total volume of the sales in the year 2019 is 400 whereas in the year 2020 is 600. That's about 50% increase in size

In the above example, I need to extract:

    1. sales year 2019 --> 400
    2. sales year 2020 --> 600

Assumptions: you can assume the entity is already known [sales in the above example]. Can anyone please suggest an approach, pre-existing libraries, etc.? Thanks in advance.

A: One approach you could take is to use regular expressions to search for patterns in the text that match the information you're looking for. For example, in the sentence "The total volume of the sales in the year 2019 is 400 whereas in the year 2020 is 600.", you could use the following regular expression to match the sales data for each year: \d{4} is \d+. This regular expression will match any four-digit number followed by " is " and then one or more digits.

Once you have matched the relevant data points, you can use a library like Python's re module to extract the information you need. For example, in Python you could do something like this:

    import re

    text = "The total volume of the sales in the year 2019 is 400 whereas in the year 2020 is 600."

    # Use the regular expression to find all matches in the text
    matches = re.findall(r"\d{4} is \d+", text)

    # Loop through the matches and extract the year and sales data
    for match in matches:
        year, sales = match.split(" is ")
        print(f"Year: {year}, Sales: {sales}")

This code would output the following:

    Year: 2019, Sales: 400
    Year: 2020, Sales: 600

Another option is to use a natural language processing (NLP) library like spaCy or NLTK to extract the information you need. These libraries can help you identify and extract specific entities, such as dates and numbers, from a piece of text.
For example, using spaCy you could do something like this:

    import spacy

    # Load the English model
    nlp = spacy.load("en_core_web_sm")

    # Parse the text
    text = "The total volume of the sales in the year 2019 is 400 whereas in the year 2020 is 600."
    doc = nlp(text)

    # Loop through the entities in the document
    for ent in doc.ents:
        # If the entity is a date or a number, print its label and text
        if ent.label_ in ("DATE", "CARDINAL"):
            print(f"{ent.label_}: {ent.text}")

This prints each date and number entity spaCy finds in the text; pairing each year with its sales value is then a small extra step, so this does not directly reproduce the regex example's output.

Overall, there are many approaches you can take to extract multiple data points from a single paragraph. The approach you choose will depend on the specific requirements of your task and the data you are working with.
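As a sketch of how the regex approach can return year/value pairs directly, capture groups avoid the separate split step (the pattern and variable names here are illustrative, not from the original answer):

    import re

    text = ("The total volume of the sales in the year 2019 is 400 "
            "whereas in the year 2020 is 600.")

    # Two capture groups pull out the year and the value in one pass
    pairs = re.findall(r"year (\d{4}) is (\d+)", text)
    print(pairs)   # [('2019', '400'), ('2020', '600')]

    # Convert to a year -> sales mapping
    sales = {int(year): int(value) for year, value in pairs}
    print(sales)   # {2019: 400, 2020: 600}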
stackoverflow_0074671055_nlp_python.txt
Q: What is the difference between __str__ and __repr__?

What is the difference between __str__ and __repr__ in Python?

A: Alex summarized well but, surprisingly, was too succinct.

First, let me reiterate the main points in Alex’s post:

- The default implementation is useless (it’s hard to think of one which wouldn’t be, but yeah)
- __repr__ goal is to be unambiguous
- __str__ goal is to be readable
- Container’s __str__ uses contained objects’ __repr__

Default implementation is useless

This is mostly a surprise because Python’s defaults tend to be fairly useful. However, in this case, having a default for __repr__ which would act like:

    return "%s(%r)" % (self.__class__, self.__dict__)

would have been too dangerous (for example, too easy to get into infinite recursion if objects reference each other). So Python cops out. Note that there is one default which is true: if __repr__ is defined, and __str__ is not, the object will behave as though __str__=__repr__.

This means, in simple terms: almost every object you implement should have a functional __repr__ that’s usable for understanding the object. Implementing __str__ is optional: do that if you need a “pretty print” functionality (for example, used by a report generator).

The goal of __repr__ is to be unambiguous

Let me come right out and say it — I do not believe in debuggers. I don’t really know how to use any debugger, and have never used one seriously. Furthermore, I believe that the big fault in debuggers is their basic nature — most failures I debug happened a long long time ago, in a galaxy far far away. This means that I do believe, with religious fervor, in logging. Logging is the lifeblood of any decent fire-and-forget server system.
Python makes it easy to log: with maybe some project specific wrappers, all you need is a

    log(INFO, "I am in the weird function and a is", a, "and b is", b, "but I got a null C — using default", default_c)

But you have to do the last step — make sure every object you implement has a useful repr, so code like that can just work. This is why the “eval” thing comes up: if you have enough information so eval(repr(c))==c, that means you know everything there is to know about c. If that’s easy enough, at least in a fuzzy way, do it. If not, make sure you have enough information about c anyway. I usually use an eval-like format: "MyClass(this=%r,that=%r)" % (self.this,self.that). It does not mean that you can actually construct MyClass, or that those are the right constructor arguments — but it is a useful form to express “this is everything you need to know about this instance”.

Note: I used %r above, not %s. You always want to use repr() [or %r formatting character, equivalently] inside __repr__ implementation, or you’re defeating the goal of repr. You want to be able to differentiate MyClass(3) and MyClass("3").

The goal of __str__ is to be readable

Specifically, it is not intended to be unambiguous — notice that str(3)==str("3"). Likewise, if you implement an IP abstraction, having the str of it look like 192.168.1.1 is just fine. When implementing a date/time abstraction, the str can be "2010/4/12 15:35:22", etc. The goal is to represent it in a way that a user, not a programmer, would want to read it. Chop off useless digits, pretend to be some other class — as long as it supports readability, it is an improvement.

Container’s __str__ uses contained objects’ __repr__

This seems surprising, doesn’t it? It is a little, but how readable would it be if it used their __str__?

    [moshe is, 3, hello world, this is a list, oh I don't know, containing just 4 elements]

Not very.
Specifically, the strings in a container would find it way too easy to disturb its string representation. In the face of ambiguity, remember, Python resists the temptation to guess. If you want the above behavior when you’re printing a list, just

    print("[" + ", ".join(l) + "]")

(you can probably also figure out what to do about dictionaries).

Summary

Implement __repr__ for any class you implement. This should be second nature. Implement __str__ if you think it would be useful to have a string version which errs on the side of readability.

A: My rule of thumb: __repr__ is for developers, __str__ is for customers.

A: Unless you specifically act to ensure otherwise, most classes don't have helpful results for either:

    >>> class Sic(object): pass
    ...
    >>> print(str(Sic()))
    <__main__.Sic object at 0x8b7d0>
    >>> print(repr(Sic()))
    <__main__.Sic object at 0x8b7d0>
    >>>

As you see — no difference, and no info beyond the class and object's id. If you only override one of the two...:

    >>> class Sic(object):
    ...   def __repr__(self): return 'foo'
    ...
    >>> print(str(Sic()))
    foo
    >>> print(repr(Sic()))
    foo
    >>> class Sic(object):
    ...   def __str__(self): return 'foo'
    ...
    >>> print(str(Sic()))
    foo
    >>> print(repr(Sic()))
    <__main__.Sic object at 0x2617f0>
    >>>

as you see, if you override __repr__, that's ALSO used for __str__, but not vice versa. Other crucial tidbits to know: __str__ on a built-in container uses the __repr__, NOT the __str__, for the items it contains. And, despite the words on the subject found in typical docs, hardly anybody bothers making the __repr__ of objects be a string that eval may use to build an equal object (it's just too hard, AND not knowing how the relevant module was actually imported makes it actually flat out impossible). So, my advice: focus on making __str__ reasonably human-readable, and __repr__ as unambiguous as you possibly can, even if that interferes with the fuzzy unattainable goal of making __repr__'s returned value acceptable as input to __eval__!
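Both rules of thumb above can be seen in one small sketch (the Point class is illustrative, not from either answer):

    class Point:
        """__repr__ for developers (unambiguous), __str__ for users (readable)."""
        def __init__(self, x, y):
            self.x, self.y = x, y

        def __repr__(self):
            # !r uses repr of the fields, so Point(1, 2) and Point('1', 2) differ
            return f"Point({self.x!r}, {self.y!r})"

        def __str__(self):
            return f"({self.x}, {self.y})"

    p = Point(1, 2)
    print(str(p))    # (1, 2)
    print(repr(p))   # Point(1, 2)
    print([p, p])    # [Point(1, 2), Point(1, 2)]  (containers use __repr__)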
A: __repr__: representation of python object, usually eval will convert it back to that object

__str__: is whatever you think is that object in text form

e.g.

    >>> s="""w'o"w"""
    >>> repr(s)
    '\'w\\\'o"w\''
    >>> str(s)
    'w\'o"w'
    >>> eval(str(s))==s
    Traceback (most recent call last):
      File "<stdin>", line 1, in <module>
      File "<string>", line 1
        w'o"w
            ^
    SyntaxError: EOL while scanning single-quoted string
    >>> eval(repr(s))==s
    True

A: In short, the goal of __repr__ is to be unambiguous and __str__ is to be readable. Here is a good example:

    >>> import datetime
    >>> today = datetime.datetime.now()
    >>> str(today)
    '2012-03-14 09:21:58.130922'
    >>> repr(today)
    'datetime.datetime(2012, 3, 14, 9, 21, 58, 130922)'

Read this documentation for repr:

    repr(object)
    Return a string containing a printable representation of an object. This is the same value yielded by conversions (reverse quotes). It is sometimes useful to be able to access this operation as an ordinary function. For many types, this function makes an attempt to return a string that would yield an object with the same value when passed to eval(), otherwise the representation is a string enclosed in angle brackets that contains the name of the type of the object together with additional information often including the name and address of the object. A class can control what this function returns for its instances by defining a __repr__() method.

Here is the documentation for str:

    str(object='')
    Return a string containing a nicely printable representation of an object. For strings, this returns the string itself. The difference with repr(object) is that str(object) does not always attempt to return a string that is acceptable to eval(); its goal is to return a printable string. If no argument is given, returns the empty string, ''.

A: What is the difference between __str__ and __repr__ in Python?
__str__ (read as "dunder (double-underscore) string") and __repr__ (read as "dunder-repper" (for "representation")) are both special methods that return strings based on the state of the object.

__repr__ provides backup behavior if __str__ is missing.

So one should first write a __repr__ that allows you to reinstantiate an equivalent object from the string it returns, e.g. using eval or by typing it in character-for-character in a Python shell. At any time later, one can write a __str__ for a user-readable string representation of the instance, when one believes it to be necessary.

__str__

If you print an object, or pass it to format, str.format, or str, then if a __str__ method is defined, that method will be called; otherwise, __repr__ will be used.

__repr__

The __repr__ method is called by the builtin function repr and is what is echoed on your python shell when it evaluates an expression that returns an object. Since it provides a backup for __str__, if you can only write one, start with __repr__.

Here's the builtin help on repr:

    repr(...)
        repr(object) -> string

        Return the canonical string representation of the object.
        For most object types, eval(repr(object)) == object.

That is, for most objects, if you type in what is printed by repr, you should be able to create an equivalent object. But this is not the default implementation.
Default Implementation of __repr__

The default object __repr__ is (C Python source) something like:

    def __repr__(self):
        return '<{0}.{1} object at {2}>'.format(
            type(self).__module__, type(self).__qualname__, hex(id(self)))

That means by default you'll print the module the object is from, the class name, and the hexadecimal representation of its location in memory - for example:

    <__main__.Foo object at 0x7f80665abdd0>

This information isn't very useful, but there's no way to derive how one might accurately create a canonical representation of any given instance, and it's better than nothing, at least telling us how we might uniquely identify it in memory.

How can __repr__ be useful?

Let's look at how useful it can be, using the Python shell and datetime objects. First we need to import the datetime module:

    import datetime

If we call datetime.now in the shell, we'll see everything we need to recreate an equivalent datetime object. This is created by the datetime __repr__:

    >>> datetime.datetime.now()
    datetime.datetime(2015, 1, 24, 20, 5, 36, 491180)

If we print a datetime object, we see a nice human readable (in fact, ISO) format. This is implemented by datetime's __str__:

    >>> print(datetime.datetime.now())
    2015-01-24 20:05:44.977951

It is a simple matter to recreate the object we lost because we didn't assign it to a variable by copying and pasting from the __repr__ output, and then printing it, and we get it in the same human readable output as the other object:

    >>> the_past = datetime.datetime(2015, 1, 24, 20, 5, 36, 491180)
    >>> print(the_past)
    2015-01-24 20:05:36.491180

How do I implement them?

As you're developing, you'll want to be able to reproduce objects in the same state, if possible. This, for example, is how the datetime object defines __repr__ (Python source).
It is fairly complex, because of all of the attributes needed to reproduce such an object:

    def __repr__(self):
        """Convert to formal string, for repr()."""
        L = [self._year, self._month, self._day,  # These are never zero
             self._hour, self._minute, self._second, self._microsecond]
        if L[-1] == 0:
            del L[-1]
        if L[-1] == 0:
            del L[-1]
        s = "%s.%s(%s)" % (self.__class__.__module__,
                           self.__class__.__qualname__,
                           ", ".join(map(str, L)))
        if self._tzinfo is not None:
            assert s[-1:] == ")"
            s = s[:-1] + ", tzinfo=%r" % self._tzinfo + ")"
        if self._fold:
            assert s[-1:] == ")"
            s = s[:-1] + ", fold=1)"
        return s

If you want your object to have a more human readable representation, you can implement __str__ next. Here's how the datetime object (Python source) implements __str__, which it easily does because it already has a function to display it in ISO format:

    def __str__(self):
        "Convert to string, for str()."
        return self.isoformat(sep=' ')

Set __repr__ = __str__?

This is a critique of another answer here that suggests setting __repr__ = __str__.

Setting __repr__ = __str__ is silly - __repr__ is a fallback for __str__, and a __repr__, written for developers' usage in debugging, should be written before you write a __str__. You need a __str__ only when you need a textual representation of the object.

Conclusion

Define __repr__ for objects you write so you and other developers have a reproducible example when using it as you develop. Define __str__ when you need a human readable string representation of it.

A: On page 358 of the book Python Scripting for Computational Science by Hans Petter Langtangen, it clearly states that

- The __repr__ aims at a complete string representation of the object;
- The __str__ is to return a nice string for printing.

So, I prefer to understand them as

- repr = reproduce
- str = string (representation)

from the user's point of view, although this is a misunderstanding I made when learning python.
A small but good example is also given on the same page as follows:

Example

    In [38]: str('s')
    Out[38]: 's'

    In [39]: repr('s')
    Out[39]: "'s'"

    In [40]: eval(str('s'))
    Traceback (most recent call last):
      File "<ipython-input-40-abd46c0c43e7>", line 1, in <module>
        eval(str('s'))
      File "<string>", line 1, in <module>
    NameError: name 's' is not defined

    In [41]: eval(repr('s'))
    Out[41]: 's'

A: Apart from all the answers given, I would like to add a few points:

1) __repr__() is invoked when you simply write an object's name on the interactive python console and press enter.

2) __str__() is invoked when you use the object with a print statement.

3) In case __str__ is missing, print and any function using str() invokes __repr__() of the object.

4) __str__() of containers, when invoked, will execute the __repr__() method of its contained elements.

5) str() called within __str__() could potentially recurse without a base case, and error on maximum recursion depth.

6) __repr__() can call repr() which will attempt to avoid infinite recursion automatically, replacing an already represented object with ....

A: (2020 entry)

Q: What's the difference between __str__() and __repr__()?

TL;DR:

LONG

This question has been around a long time, and there are a variety of answers of which most are correct (not to mention from several Python community legends[!]). However when it comes down to the nitty-gritty, this question is analogous to asking the difference between the str() and repr() built-in functions. I'm going to describe the differences in my own words (which means I may be "borrowing" liberally from Core Python Programming so pls forgive me).

Both str() and repr() have the same basic job: their goal is to return a string representation of a Python object. What kind of string representation is what differentiates them.

- str() & __str__() return a printable string representation of an object...
something human-readable/for human consumption

repr() & __repr__() return a string representation of an object that is a valid Python expression, an object you can pass to eval() or type into the Python shell without getting an error.

For example, let's assign a string to x and an int to y, and simply show human-readable string versions of each:

>>> x, y = 'foo', 123
>>> str(x), str(y)
('foo', '123')

Can we take what is inside the quotes in both cases and enter them verbatim into the Python interpreter? Let's give it a try:

>>> 123
123
>>> foo
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
NameError: name 'foo' is not defined

Clearly you can for an int but not necessarily for a str. Similarly, while I can pass '123' to eval(), that doesn't work for 'foo':

>>> eval('123')
123
>>> eval('foo')
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "<string>", line 1, in <module>
NameError: name 'foo' is not defined

So this tells you the Python shell just eval()s what you give it. Got it? Now, let's repr() both expressions and see what we get. More specifically, take its output and dump it into the interpreter (there's a point to this which we'll address afterwards):

>>> repr(x), repr(y)
("'foo'", '123')
>>> 123
123
>>> 'foo'
'foo'

Wow, they both work? That's because 'foo', while a printable string representation of that string, isn't evaluatable, but "'foo'" is. 123 is a valid Python int, produced by either str() or repr(). What happens when we call eval() with these?

>>> eval('123')
123
>>> eval("'foo'")
'foo'

It works because 123 and 'foo' are valid Python objects. Another key takeaway is that while sometimes both return the same thing (the same string representation), that's not always the case. (And yes, yes, I can go create a variable foo where the eval() works, but that's not the point.)
More factoids about both pairs

Sometimes, str() and repr() are called implicitly, meaning they're called on behalf of users: when users execute print (Py1/Py2) or call print() (Py3+), even if they don't call str() explicitly, such a call is made on their behalf before the object is displayed.

In the Python shell (interactive interpreter), if you enter a variable at the >>> prompt and press RETURN, the interpreter displays the results of repr() implicitly called on that object.

To connect str() and repr() to __str__() and __repr__(), realize that calls to the built-in functions, i.e., str(x) or repr(y), result in calling their object's corresponding special methods: x.__str__() or y.__repr__()

By implementing __str__() and __repr__() for your Python classes, you overload the built-in functions (str() and repr()), allowing instances of your classes to be passed to str() and repr(). When such calls are made, they turn around and call the class's __str__() and __repr__() (per #3).

A: To put it simply:

__str__ is used to show a string representation of your object, to be read easily by others.

__repr__ is used to show the formal string representation of the object.

Let's say I want to create a Fraction class where the string representation of a fraction is '(1/2)' and the object (Fraction class) is to be represented as 'Fraction (1,2)'

So we can create a simple Fraction class:

class Fraction:
    def __init__(self, num, den):
        self.__num = num
        self.__den = den

    def __str__(self):
        return '(' + str(self.__num) + '/' + str(self.__den) + ')'

    def __repr__(self):
        return 'Fraction (' + str(self.__num) + ',' + str(self.__den) + ')'


f = Fraction(1,2)
print('I want to represent the Fraction STRING as ' + str(f))  # (1/2)
print('I want to represent the Fraction OBJECT as ', repr(f))  # Fraction (1,2)

A: From an (An Unofficial) Python Reference Wiki (archive copy) by effbot:

__str__ "computes the "informal" string representation of an object.
This differs from __repr__ in that it does not have to be a valid Python expression: a more convenient or concise representation may be used instead."

A: In all honesty, eval(repr(obj)) is never used. If you find yourself using it, you should stop, because eval is dangerous, and strings are a very inefficient way to serialize your objects (use pickle instead).

Therefore, I would recommend setting __repr__ = __str__. The reason is that str(list) calls repr on the elements (I consider this to be one of the biggest design flaws of Python that was not addressed by Python 3). An actual repr will probably not be very helpful as the output of print([your, objects]).

To qualify this, in my experience, the most useful use case of the repr function is to put a string inside another string (using string formatting). This way, you don't have to worry about escaping quotes or anything. But note that there is no eval happening here.

A: str - Creates a new string object from the given object.

repr - Returns the canonical string representation of the object.

The differences:

str():
makes the object readable
generates output for the end user

repr():
needs code that reproduces the object
generates output for the developer

A: From the book Fluent Python:

A basic requirement for a Python object is to provide usable string representations of itself, one used for debugging and logging, another for presentation to end users. That is why the special methods __repr__ and __str__ exist in the data model.

A: One aspect that is missing in the other answers. It's true that in general the pattern is:

Goal of __str__: human-readable
Goal of __repr__: unambiguous, possibly machine-readable via eval

Unfortunately, this differentiation is flawed, because the Python REPL and also IPython use __repr__ for printing objects in a REPL console (see related questions for Python and IPython).
Thus, projects which are targeted at interactive console work (e.g., NumPy or Pandas) have started to ignore the above rules and provide a human-readable __repr__ implementation instead.

A: You can get some insight from this code:

class Foo():
    def __repr__(self):
        return("repr")
    def __str__(self):
        return("str")

foo = Foo()
foo        # repr (echoed in the interactive shell)
print(foo) # str

A: __str__ can be invoked on an object by calling str(obj) and should return a human-readable string.

__repr__ can be invoked on an object by calling repr(obj) and should return a representation of the internal object (its fields/attributes).

This example may help:

class C1: pass

class C2:
    def __str__(self):
        return str(f"{self.__class__.__name__} class str ")

class C3:
    def __repr__(self):
        return str(f"{self.__class__.__name__} class repr")

class C4:
    def __str__(self):
        return str(f"{self.__class__.__name__} class str ")
    def __repr__(self):
        return str(f"{self.__class__.__name__} class repr")


ci1 = C1()
ci2 = C2()
ci3 = C3()
ci4 = C4()

print(ci1)       # <__main__.C1 object at 0x0000024C44A80C18>
print(str(ci1))  # <__main__.C1 object at 0x0000024C44A80C18>
print(repr(ci1)) # <__main__.C1 object at 0x0000024C44A80C18>
print(ci2)       # C2 class str
print(str(ci2))  # C2 class str
print(repr(ci2)) # <__main__.C2 object at 0x0000024C44AE12E8>
print(ci3)       # C3 class repr
print(str(ci3))  # C3 class repr
print(repr(ci3)) # C3 class repr
print(ci4)       # C4 class str
print(str(ci4))  # C4 class str
print(repr(ci4)) # C4 class repr

A: >>> print(decimal.Decimal(23) / decimal.Decimal("1.05"))
21.90476190476190476190476190
>>> decimal.Decimal(23) / decimal.Decimal("1.05")
Decimal('21.90476190476190476190476190')

When print() is called on the result of decimal.Decimal(23) / decimal.Decimal("1.05"), the raw number is printed; this output is in string form, which can be achieved with __str__(). If we simply enter the expression we get a decimal.Decimal output — this output is in representational form, which can be achieved with __repr__(). All Python objects have two output forms.
String form is designed to be human-readable. The representational form is designed to produce output that, if fed to a Python interpreter, would (when possible) reproduce the represented object.

A: Excellent answers already cover the difference between __str__ and __repr__, which for me boils down to the former being readable even by an end user, and the latter being as useful as possible to developers. Given that, I find that the default implementation of __repr__ often fails to achieve this goal because it omits information useful to developers.

For this reason, if I have a simple enough __str__, I generally just try to get the best of both worlds with something like:

def __repr__(self):
    return '{0} ({1})'.format(object.__repr__(self), str(self))

A: One important thing to keep in mind is that a container's __str__ uses the contained objects' __repr__.

>>> from datetime import datetime
>>> from decimal import Decimal
>>> print (Decimal('52'), datetime.now())
(Decimal('52'), datetime.datetime(2015, 11, 16, 10, 51, 26, 185000))
>>> str((Decimal('52'), datetime.now()))
"(Decimal('52'), datetime.datetime(2015, 11, 16, 10, 52, 22, 176000))"

Python favors unambiguity over readability: the __str__ call of a tuple calls the contained objects' __repr__, the "formal" representation of an object. Although the formal representation is harder to read than the informal one, it is unambiguous and more robust against bugs.
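The container behavior described above also applies to user-defined classes. Here is a minimal sketch of my own (the Word class is illustrative, not from the answer) showing that a list's string form goes through each element's __repr__, not its __str__:

```python
class Word:
    def __init__(self, text):
        self.text = text

    def __str__(self):
        return self.text  # readable form, used by print(w)

    def __repr__(self):
        return f"Word({self.text!r})"  # unambiguous form, used inside containers

w = Word("hello")
print(w)       # hello
print([w, w])  # [Word('hello'), Word('hello')] -- the list used __repr__
```

If Word had no __repr__, the list would instead show the unhelpful default `<__main__.Word object at 0x...>` for each element, even though __str__ is defined.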
A: In a nutshell:

class Demo:
    def __repr__(self):
        return 'repr'
    def __str__(self):
        return 'str'

demo = Demo()
print(demo)     # __str__ is used, prints 'str'
s = str(demo)   # __str__ is used, returns 'str'
r = repr(demo)  # __repr__ is used, returns 'repr'

import logging
logging.basicConfig(level=logging.INFO)
logger = logging.getLogger(__name__)
logger.info(demo)  # __str__ is used when the log message is formatted

from pprint import pprint, pformat
pprint(demo)            # __repr__ is used, outputs 'repr'
result = pformat(demo)  # __repr__ is used, returns the string 'repr'

A: Understand __str__ and __repr__ intuitively, and distinguish them permanently:

__str__ returns the disguised string body of a given object, readable to the eyes.
__repr__ returns the real flesh of a given object (the thing itself), unambiguous to identify.

See it in an example:

In [30]: str(datetime.datetime.now())
Out[30]: '2017-12-07 15:41:14.002752'

Disguised in string form.

As to __repr__:

In [32]: datetime.datetime.now()
Out[32]: datetime.datetime(2017, 12, 7, 15, 43, 27, 297769)

Presence in a real body, which allows it to be manipulated directly. We can do arithmetic operations on __repr__ results conveniently:

In [33]: datetime.datetime.now()
Out[33]: datetime.datetime(2017, 12, 7, 15, 47, 9, 741521)

In [34]: datetime.datetime(2017, 12, 7, 15, 47, 9, 741521) - datetime.datetime(2017, 12, 7, 15, 43, 27, 297769)
Out[34]: datetime.timedelta(0, 222, 443752)

If we apply the operation to the __str__ output:

In [35]: '2017-12-07 15:43:14.002752' - '2017-12-07 15:41:14.002752'
TypeError: unsupported operand type(s) for -: 'str' and 'str'

it returns nothing but an error.

Another example:

In [36]: str('string_body')
Out[36]: 'string_body'  # in string form

In [37]: repr('real_body')
Out[37]: "'real_body'"  # its real body hides inside

Hope this helps you build concrete grounds to explore more answers.

A: __str__ must return a string object, whereas __repr__ can return any Python expression.
If the __str__ implementation is missing, then the __repr__ function is used as a fallback. There is no fallback if the __repr__ implementation is missing. If the __repr__ function returns a string representation of the object, we can skip implementing the __str__ function.

Source: https://www.journaldev.com/22460/python-str-repr-functions

A: __repr__ is used everywhere, except by the print and str methods (when a __str__ is defined!)

A: Every object inherits __repr__ from the base class from which all objects are created.

class Person:
    pass

p = Person()

If you call repr(p) you will get this as default:

<__main__.Person object at 0x7fb2604f03a0>

But if you call str(p) you will get the same output. That is because when __str__ does not exist, Python calls __repr__.

Let's implement our own __repr__:

class Person:
    def __init__(self, name, age):
        self.name = name
        self.age = age

    def __repr__(self):
        print("__repr__ called")
        return f"Person(name='{self.name}', age={self.age})"

p = Person("ali", 20)

print(p) and str(p) will return

__repr__ called
Person(name='ali', age=20)

Let's add __str__():

class Person:
    def __init__(self, name, age):
        self.name = name
        self.age = age

    def __repr__(self):
        print('__repr__ called')
        return f"Person(name='{self.name}', age={self.age})"

    def __str__(self):
        print('__str__ called')
        return self.name

p = Person("ali", 20)

If we call print(p) or str(p), it will call __str__(), so it will return

__str__ called
ali

repr(p) will return

__repr__ called
"Person(name='ali', age=20)"

Let's omit __repr__ and just implement __str__:

class Person:
    def __init__(self, name, age):
        self.name = name
        self.age = age

    def __str__(self):
        print('__str__ called')
        return self.name

p = Person('ali', 20)

print(p) will look for __str__ and will return:

__str__ called
ali

NOTE: if we had both __repr__ and __str__ defined, f'name is {p}' would call __str__.

A: Programmers with prior experience in languages with a toString method tend to implement __str__ and not __repr__.
If you only implement one of these special methods in Python, choose __repr__.

From the book Fluent Python, by Luciano Ramalho.

A: Basically, __str__ or str() is used for creating output that is human-readable and meant for end users. On the other hand, repr() or __repr__ mainly returns the canonical string representation of objects, which serves the purpose of debugging and development and helps programmers.

A: repr() is used when we debug or log. It is meant for developers, to understand code. On the other hand, str() is for non-developers (like QA) or users.

class Customer:
    def __init__(self, name):
        self.name = name

    def __repr__(self):
        return "Customer('{}')".format(self.name)

    def __str__(self):
        return f"customer name is {self.name}"

cus_1 = Customer("Thusi")
print(repr(cus_1))  # same as print(cus_1.__repr__())
print(str(cus_1))   # same as print(cus_1.__str__())

A: As far as I see it:

__str__ is used for converting an object to a string, which makes the object more human-readable (for customers like me). However, __repr__ must be used for representing the class's object as a string, which seems more unambiguous. So __repr__ is most likely used by developers for development and debugging.
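Pulling the recurring advice from these answers together, here is a minimal sketch of my own (the Point class is illustrative, not from any particular answer): an unambiguous, eval()-able __repr__ for developers, alongside a friendlier __str__ for end users:

```python
class Point:
    """A 2-D point with a reproducible __repr__ and a readable __str__."""

    def __init__(self, x, y):
        self.x, self.y = x, y

    def __repr__(self):
        # Unambiguous: eval(repr(p)) rebuilds an equivalent Point.
        # Note the !r conversions, so string coordinates keep their quotes.
        return f"Point({self.x!r}, {self.y!r})"

    def __str__(self):
        # Human-readable, for end users
        return f"({self.x}, {self.y})"

p = Point(1, 2)
print(str(p))      # (1, 2)
print(repr(p))     # Point(1, 2)
q = eval(repr(p))  # round trip: recreates an equivalent object
print(q.x, q.y)    # 1 2
```

The !r in __repr__ follows the advice in the top answer above: use repr() of the attributes inside __repr__, so that Point(3, 4) and Point("3", "4") remain distinguishable.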
What is the difference between __str__ and __repr__?
What is the difference between __str__ and __repr__ in Python?
A: Alex summarized well but, surprisingly, was too succinct.

First, let me reiterate the main points in Alex’s post:

The default implementation is useless (it’s hard to think of one which wouldn’t be, but yeah)
__repr__ goal is to be unambiguous
__str__ goal is to be readable
Container’s __str__ uses contained objects’ __repr__

Default implementation is useless

This is mostly a surprise because Python’s defaults tend to be fairly useful. However, in this case, having a default for __repr__ which would act like:

return "%s(%r)" % (self.__class__, self.__dict__)

would have been too dangerous (for example, too easy to get into infinite recursion if objects reference each other). So Python cops out. Note that there is one default which is true: if __repr__ is defined, and __str__ is not, the object will behave as though __str__=__repr__.

This means, in simple terms: almost every object you implement should have a functional __repr__ that’s usable for understanding the object. Implementing __str__ is optional: do that if you need a “pretty print” functionality (for example, used by a report generator).

The goal of __repr__ is to be unambiguous

Let me come right out and say it — I do not believe in debuggers. I don’t really know how to use any debugger, and have never used one seriously. Furthermore, I believe that the big fault in debuggers is their basic nature — most failures I debug happened a long long time ago, in a galaxy far far away. This means that I do believe, with religious fervor, in logging. Logging is the lifeblood of any decent fire-and-forget server system. Python makes it easy to log: with maybe some project specific wrappers, all you need is a

log(INFO, "I am in the weird function and a is", a, "and b is", b, "but I got a null C — using default", default_c)

But you have to do the last step — make sure every object you implement has a useful repr, so code like that can just work. This is why the “eval” thing comes up: if you have enough information so eval(repr(c))==c, that means you know everything there is to know about c. If that’s easy enough, at least in a fuzzy way, do it. If not, make sure you have enough information about c anyway. I usually use an eval-like format: "MyClass(this=%r,that=%r)" % (self.this,self.that). It does not mean that you can actually construct MyClass, or that those are the right constructor arguments — but it is a useful form to express “this is everything you need to know about this instance”.

Note: I used %r above, not %s. You always want to use repr() [or the %r formatting character, equivalently] inside a __repr__ implementation, or you’re defeating the goal of repr. You want to be able to differentiate MyClass(3) and MyClass("3").

The goal of __str__ is to be readable

Specifically, it is not intended to be unambiguous — notice that str(3)==str("3"). Likewise, if you implement an IP abstraction, having the str of it look like 192.168.1.1 is just fine. When implementing a date/time abstraction, the str can be "2010/4/12 15:35:22", etc. The goal is to represent it in a way that a user, not a programmer, would want to read it. Chop off useless digits, pretend to be some other class — as long as it supports readability, it is an improvement.

Container’s __str__ uses contained objects’ __repr__

This seems surprising, doesn’t it? It is a little, but how readable would it be if it used their __str__?

[moshe is, 3, hello
world, this is a list, oh I don't know, containing just 4 elements]

Not very. Specifically, the strings in a container would find it way too easy to disturb its string representation. In the face of ambiguity, remember, Python resists the temptation to guess. If you want the above behavior when you’re printing a list, just

print("[" + ", ".join(l) + "]")

(you can probably also figure out what to do about dictionaries.)

Summary

Implement __repr__ for any class you implement. This should be second nature. Implement __str__ if you think it would be useful to have a string version which errs on the side of readability.

A: My rule of thumb: __repr__ is for developers, __str__ is for customers.

A: Unless you specifically act to ensure otherwise, most classes don't have helpful results for either:

>>> class Sic(object): pass
...
>>> print(str(Sic()))
<__main__.Sic object at 0x8b7d0>
>>> print(repr(Sic()))
<__main__.Sic object at 0x8b7d0>
>>>

As you see -- no difference, and no info beyond the class and the object's id. If you only override one of the two...:

>>> class Sic(object):
...     def __repr__(self): return 'foo'
...
>>> print(str(Sic()))
foo
>>> print(repr(Sic()))
foo
>>> class Sic(object):
...     def __str__(self): return 'foo'
...
>>> print(str(Sic()))
foo
>>> print(repr(Sic()))
<__main__.Sic object at 0x2617f0>
>>>

as you see, if you override __repr__, that's ALSO used for __str__, but not vice versa.

Other crucial tidbits to know: __str__ on a built-in container uses the __repr__, NOT the __str__, for the items it contains. And, despite the words on the subject found in typical docs, hardly anybody bothers making the __repr__ of objects be a string that eval may use to build an equal object (it's just too hard, AND not knowing how the relevant module was actually imported makes it actually flat out impossible).

So, my advice: focus on making __str__ reasonably human-readable, and __repr__ as unambiguous as you possibly can, even if that interferes with the fuzzy unattainable goal of making __repr__'s returned value acceptable as input to eval()!

A: __repr__: representation of a Python object that eval will usually convert back to that object
__str__: whatever you think the object is, in text form

e.g.

>>> s="""w'o"w"""
>>> repr(s)
'\'w\\\'o"w\''
>>> str(s)
'w\'o"w'
>>> eval(str(s))==s
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "<string>", line 1
    w'o"w
        ^
SyntaxError: EOL while scanning single-quoted string
>>> eval(repr(s))==s
True

A: In short, the goal of __repr__ is to be unambiguous and __str__ is to be readable.

Here is a good example:

>>> import datetime
>>> today = datetime.datetime.now()
>>> str(today)
'2012-03-14 09:21:58.130922'
>>> repr(today)
'datetime.datetime(2012, 3, 14, 9, 21, 58, 130922)'

Read this documentation for repr:

repr(object)
Return a string containing a printable representation of an object. This is the same value yielded by conversions (reverse quotes). It is sometimes useful to be able to access this operation as an ordinary function. For many types, this function makes an attempt to return a string that would yield an object with the same value when passed to eval(), otherwise the representation is a string enclosed in angle brackets that contains the name of the type of the object together with additional information often including the name and address of the object. A class can control what this function returns for its instances by defining a __repr__() method.

Here is the documentation for str:

str(object='')
Return a string containing a nicely printable representation of an object. For strings, this returns the string itself. The difference with repr(object) is that str(object) does not always attempt to return a string that is acceptable to eval(); its goal is to return a printable string. If no argument is given, returns the empty string, ''.

A: What is the difference between __str__ and __repr__ in Python?

__str__ (read as "dunder (double-underscore) string") and __repr__ (read as "dunder-repper" (for "representation")) are both special methods that return strings based on the state of the object.

__repr__ provides backup behavior if __str__ is missing.

So one should first write a __repr__ that allows you to reinstantiate an equivalent object from the string it returns, e.g. using eval or by typing it in character-for-character in a Python shell. At any time later, one can write a __str__ for a user-readable string representation of the instance, when one believes it to be necessary.

__str__

If you print an object, or pass it to format, str.format, or str, then if a __str__ method is defined, that method will be called; otherwise, __repr__ will be used.

__repr__

The __repr__ method is called by the builtin function repr and is what is echoed in your Python shell when it evaluates an expression that returns an object. Since it provides a backup for __str__, if you can only write one, start with __repr__.

Here's the builtin help on repr:

repr(...)
    repr(object) -> string

    Return the canonical string representation of the object.
    For most object types, eval(repr(object)) == object.

That is, for most objects, if you type in what is printed by repr, you should be able to create an equivalent object. But this is not the default implementation.

Default Implementation of __repr__

The default object __repr__ is (C Python source) something like:

def __repr__(self):
    return '<{0}.{1} object at {2}>'.format(
        type(self).__module__, type(self).__qualname__, hex(id(self)))

That means by default you'll print the module the object is from, the class name, and the hexadecimal representation of its location in memory - for example:

<__main__.Foo object at 0x7f80665abdd0>

This information isn't very useful, but there's no way to derive how one might accurately create a canonical representation of any given instance, and it's better than nothing, at least telling us how we might uniquely identify it in memory.

How can __repr__ be useful?

Let's look at how useful it can be, using the Python shell and datetime objects. First we need to import the datetime module:

import datetime

If we call datetime.now in the shell, we'll see everything we need to recreate an equivalent datetime object. This is created by the datetime __repr__:

>>> datetime.datetime.now()
datetime.datetime(2015, 1, 24, 20, 5, 36, 491180)

If we print a datetime object, we see a nice human readable (in fact, ISO) format. This is implemented by datetime's __str__:

>>> print(datetime.datetime.now())
2015-01-24 20:05:44.977951

It is a simple matter to recreate the object we lost because we didn't assign it to a variable, by copying and pasting from the __repr__ output and then printing it, and we get it in the same human readable output as the other object:

>>> the_past = datetime.datetime(2015, 1, 24, 20, 5, 36, 491180)
>>> print(the_past)
2015-01-24 20:05:36.491180

How do I implement them?

As you're developing, you'll want to be able to reproduce objects in the same state, if possible. This, for example, is how the datetime object defines __repr__ (Python source).
The representational form is designed to produce output that if fed to a Python interpreter would (when possible) reproduce the represented object.\n", "Excellent answers already cover the difference between __str__ and __repr__, which for me boils down to the former being readable even by an end user, and the latter being as useful as possible to developers. Given that, I find that the default implementation of __repr__ often fails to achieve this goal because it omits information useful to developers.\nFor this reason, if I have a simple enough __str__, I generally just try to get the best of both worlds with something like:\ndef __repr__(self):\n return '{0} ({1})'.format(object.__repr__(self), str(self))\n\n", "\nOne important thing to keep in mind is that container's __str__ uses contained objects' __repr__.\n\n>>> from datetime import datetime\n>>> from decimal import Decimal\n>>> print (Decimal('52'), datetime.now())\n(Decimal('52'), datetime.datetime(2015, 11, 16, 10, 51, 26, 185000))\n>>> str((Decimal('52'), datetime.now()))\n\"(Decimal('52'), datetime.datetime(2015, 11, 16, 10, 52, 22, 176000))\"\n\nPython favors unambiguity over readability, the __str__ call of a tuple calls the contained objects' __repr__, the \"formal\" representation of an object. 
Although the formal representation is harder to read than an informal one, it is unambiguous and more robust against bugs.\n", "In a nutshell:\nclass Demo:\n def __repr__(self):\n return 'repr'\n def __str__(self):\n return 'str'\n\ndemo = Demo()\nprint(demo) # uses __str__, outputs 'str' to stdout\n\ns = str(demo) # __str__ is used, returns 'str'\nr = repr(demo) # __repr__ is used, returns 'repr'\n\nimport logging\nlogging.basicConfig(level=logging.INFO)\nlogger = logging.getLogger(__name__)\nlogger.info(demo) # uses __str__, logs 'str'\n\nfrom pprint import pprint, pformat\npprint(demo) # uses __repr__, outputs 'repr' to stdout\nresult = pformat(demo) # uses __repr__, result is a string whose value is 'repr'\n\n", "Understand __str__ and __repr__ intuitively and you will distinguish them permanently.\n__str__ returns the string-disguised body of a given object, for readability by human eyes.\n__repr__ returns the real body of a given object (the object itself), for unambiguous identification.\nSee it in an example:\nIn [30]: str(datetime.datetime.now())\nOut[30]: '2017-12-07 15:41:14.002752'\nDisguised in string form.\n\nAs to __repr__:\nIn [32]: datetime.datetime.now()\nOut[32]: datetime.datetime(2017, 12, 7, 15, 43, 27, 297769)\nThe real form, which allows the object to be manipulated directly.\n\nWe can do arithmetic operations on __repr__ results conveniently:\nIn [33]: datetime.datetime.now()\nOut[33]: datetime.datetime(2017, 12, 7, 15, 47, 9, 741521)\nIn [34]: datetime.datetime(2017, 12, 7, 15, 47, 9, 741521) - datetime.datetime(2\n ...: 017, 12, 7, 15, 43, 27, 297769)\nOut[34]: datetime.timedelta(0, 222, 443752)\n\nIf we apply the same operation to the __str__ results:\nIn [35]: '2017-12-07 15:43:14.002752' - '2017-12-07 15:41:14.002752'\nTypeError: unsupported operand type(s) for -: 'str' and 'str'\n\nwe get nothing but an error.\nAnother example:\nIn [36]: str('string_body')\nOut[36]: 'string_body' # in string form\n\nIn [37]: repr('real_body')\nOut[37]: \"'real_body'\" # its real body hides inside\n\nHope this helps you build concrete
grounds to explore more answers.\n", "\n__str__ must return a string object, whereas __repr__ should return a string that looks like a valid Python expression.\nIf the __str__ implementation is missing then the __repr__ function is used as a fallback. There is no fallback if the __repr__ function implementation is missing.\nIf the __repr__ function returns a string representation of the object, we can skip the implementation of the __str__ function.\n\nSource: https://www.journaldev.com/22460/python-str-repr-functions\n", "__repr__ is used everywhere, except by the print and str methods (when a __str__ is defined!)\n", "Every object inherits __repr__ from the base class from which all objects are created.\nclass Person:\n pass\n\np=Person()\n\nif you call repr(p) you will get this as default:\n <__main__.Person object at 0x7fb2604f03a0>\n\nBut if you call str(p) you will get the same output. It is because when __str__ does not exist, Python calls __repr__.\nLet's implement our own __str__:\nclass Person:\n def __init__(self,name,age):\n self.name=name\n self.age=age\n def __repr__(self):\n print(\"__repr__ called\")\n return f\"Person(name='{self.name}',age={self.age})\"\n\np=Person(\"ali\",20)\n\nprint(p) and str(p) will return\n __repr__ called\n Person(name='ali',age=20)\n\nLet's add __str__():\nclass Person:\n def __init__(self, name, age):\n self.name = name\n self.age = age\n \n def __repr__(self):\n print('__repr__ called')\n return f\"Person(name='{self.name}', age={self.age})\"\n \n def __str__(self):\n print('__str__ called')\n return self.name\n\np=Person(\"ali\",20)\n\nif we call print(p) and str(p), it will call __str__() so it will return\n__str__ called\nali\n\nrepr(p) will return\n__repr__ called\n\"Person(name='ali', age=20)\"\nLet's omit __repr__ and just implement __str__.\nclass Person:\n def __init__(self, name, age):\n self.name = name\n self.age = age\n\n def __str__(self):\n print('__str__ called')\n return self.name\n\np=Person('ali',20)\n\nprint(p) will look for the __str__ and will return:\n__str__
called\nali\n\nNOTE: if we had both __repr__ and __str__ defined, f'name is {p}' would call __str__\n", "\nProgrammers with prior experience in languages with a toString method tend to implement __str__ and not __repr__.\nIf you only implement one of these special methods in Python, choose __repr__.\n\nFrom the book Fluent Python, by Luciano Ramalho.\n", "Basically, __str__ or str() is used for creating output that is human-readable and is meant for end-users.\nOn the other hand, repr() or __repr__ mainly returns the canonical string representation of an object, which serves the purpose of debugging and development and helps programmers.\n", "repr() is used when we debug or log. It is meant for developers to understand the code.\nOn the other hand, str() is for non-developers (like QA) or users.\nclass Customer:\n def __init__(self,name):\n self.name = name\n def __repr__(self):\n return \"Customer('{}')\".format(self.name)\n def __str__(self):\n return f\"customer name is {self.name}\"\n\ncus_1 = Customer(\"Thusi\")\nprint(repr(cus_1)) #print(cus_1.__repr__()) \nprint(str(cus_1)) #print(cus_1.__str__())\n\n", "As far as I see it:\n__str__ is used for converting an object to a string, which makes the object more human-readable (for customers like me). However, __repr__ must be used for representing the class's object as a string, which seems more unambiguous. So __repr__ is most likely used by developers for development and debugging.\n" ]
[ 3334, 749, 498, 207, 201, 164, 49, 48, 37, 16, 15, 14, 12, 9, 9, 9, 8, 6, 6, 5, 5, 4, 4, 3, 3, 1, 1, 1, 0 ]
[]
[]
[ "magic_methods", "python", "repr" ]
stackoverflow_0001436703_magic_methods_python_repr.txt
Q: Convert Variable Name to String? I would like to convert a python variable name into the string equivalent as shown. Any ideas how? var = {} print ??? # Would like to see 'var' something_else = 3 print ??? # Would print 'something_else' A: TL;DR: Not possible. See 'conclusion' at the end. There is a usage scenario where you might need this. I'm not implying there are no better ways of achieving the same functionality. This would be useful in order to 'dump' an arbitrary list of dictionaries in case of error, in debug modes and other similar situations. What would be needed is the reverse of the eval() function: get_indentifier_name_missing_function() which would take an identifier name ('variable','dictionary',etc) as an argument, and return a string containing the identifier's name. Consider the following current state of affairs: random_function(argument_data) If one is passing an identifier name ('function','variable','dictionary',etc) argument_data to a random_function() (another identifier name), one actually passes an identifier (e.g.: <argument_data object at 0xb1ce10>) to another identifier (e.g.: <function random_function at 0xafff78>): <function random_function at 0xafff78>(<argument_data object at 0xb1ce10>) From my understanding, only the memory address is passed to the function: <function at 0xafff78>(<object at 0xb1ce10>) Therefore, one would need to pass a string as an argument to random_function() in order for that function to have the argument's identifier name: random_function('argument_data') Inside the random_function() def random_function(first_argument): , one would use the already supplied string 'argument_data' to: serve as an 'identifier name' (to display, log, string split/concat, whatever) feed the eval() function in order to get a reference to the actual identifier, and therefore, a reference to the real data: print("Currently working on", first_argument) some_internal_var = eval(first_argument) print("here comes the data: " +
str(some_internal_var)) Unfortunately, this doesn't work in all cases. It only works if the random_function() can resolve the 'argument_data' string to an actual identifier, i.e., if the argument_data identifier name is available in the random_function()'s namespace. This isn't always the case: # main1.py import some_module1 argument_data = 'my data' some_module1.random_function('argument_data') # some_module1.py def random_function(first_argument): print("Currently working on", first_argument) some_internal_var = eval(first_argument) print("here comes the data: " + str(some_internal_var)) ###### Expected results would be: Currently working on: argument_data here comes the data: my data Because the argument_data identifier name is not available in the random_function()'s namespace, this would yield instead: Currently working on argument_data Traceback (most recent call last): File "~/main1.py", line 6, in <module> some_module1.random_function('argument_data') File "~/some_module1.py", line 4, in random_function some_internal_var = eval(first_argument) File "<string>", line 1, in <module> NameError: name 'argument_data' is not defined Now, consider the hypothetical usage of a get_indentifier_name_missing_function() which would behave as described above. Here's a dummy Python 3.0 example: # main2.py import some_module2 some_dictionary_1 = { 'definition_1':'text_1', 'definition_2':'text_2', 'etc':'etc.' } some_other_dictionary_2 = { 'key_3':'value_3', 'key_4':'value_4', 'etc':'etc.' } # # more such stuff # some_other_dictionary_n = { 'random_n':'random_n', 'etc':'etc.'
} for each_one_of_my_dictionaries in ( some_dictionary_1, some_other_dictionary_2, ..., some_other_dictionary_n ): some_module2.some_function(each_one_of_my_dictionaries) # some_module2.py def some_function(a_dictionary_object): for _key, _value in a_dictionary_object.items(): print( get_indentifier_name_missing_function(a_dictionary_object) + " " + str(_key) + " = " + str(_value) ) ###### Expected results would be: some_dictionary_1 definition_1 = text_1 some_dictionary_1 definition_2 = text_2 some_dictionary_1 etc = etc. some_other_dictionary_2 key_3 = value_3 some_other_dictionary_2 key_4 = value_4 some_other_dictionary_2 etc = etc. ...... ...... ...... some_other_dictionary_n random_n = random_n some_other_dictionary_n etc = etc. Unfortunately, get_indentifier_name_missing_function() would not see the 'original' identifier names (some_dictionary_1, some_other_dictionary_2, some_other_dictionary_n). It would only see the a_dictionary_object identifier name. Therefore the real result would rather be: a_dictionary_object definition_1 = text_1 a_dictionary_object definition_2 = text_2 a_dictionary_object etc = etc. a_dictionary_object key_3 = value_3 a_dictionary_object key_4 = value_4 a_dictionary_object etc = etc. ...... ...... ...... a_dictionary_object random_n = random_n a_dictionary_object etc = etc. So, the reverse of the eval() function won't be that useful in this case.
Currently, one would need to do this: # main2.py same as above, except: for each_one_of_my_dictionaries_names in ( 'some_dictionary_1', 'some_other_dictionary_2', '...', 'some_other_dictionary_n' ): some_module2.some_function( { each_one_of_my_dictionaries_names : eval(each_one_of_my_dictionaries_names) } ) # some_module2.py def some_function(a_dictionary_name_object_container): for _dictionary_name, _dictionary_object in a_dictionary_name_object_container.items(): for _key, _value in _dictionary_object.items(): print( str(_dictionary_name) + " " + str(_key) + " = " + str(_value) ) ###### In conclusion: Python passes only memory addresses as arguments to functions. Strings representing the name of an identifier, can only be referenced back to the actual identifier by the eval() function if the name identifier is available in the current namespace. A hypothetical reverse of the eval() function, would not be useful in cases where the identifier name is not 'seen' directly by the calling code. E.g. inside any called function. Currently one needs to pass to a function: the string representing the identifier name the actual identifier (memory address) This can be achieved by passing both the 'string' and eval('string') to the called function at the same time. I think this is the most 'general' way of solving this egg-chicken problem across arbitrary functions, modules, namespaces, without using corner-case solutions. The only downside is the use of the eval() function which may easily lead to unsecured code. Care must be taken to not feed the eval() function with just about anything, especially unfiltered external-input data. A: Totally possible with the python-varname package (python3): from varname import nameof s = 'Hey!' 
print (nameof(s)) Output: s Install: pip3 install varname Or get the package here: https://github.com/pwwang/python-varname A: I searched for this question because I wanted a Python program to print assignment statements for some of the variables in the program. For example, it might print "foo = 3, bar = 21, baz = 432". The print function would need the variable names in string form. I could have provided my code with the strings "foo","bar", and "baz", but that felt like repeating myself. After reading the previous answers, I developed the solution below. The globals() function behaves like a dict with variable names (in the form of strings) as keys. I wanted to retrieve from globals() the key corresponding to the value of each variable. The method globals().items() returns a list of tuples; in each tuple the first item is the variable name (as a string) and the second is the variable value. My variablename() function searches through that list to find the variable name(s) that corresponds to the value of the variable whose name I need in string form. The function itertools.ifilter() does the search by testing each tuple in the globals().items() list with the function lambda x: var is globals()[x[0]]. In that function x is the tuple being tested; x[0] is the variable name (as a string) and x[1] is the value. The lambda function tests whether the value of the tested variable is the same as the value of the variable passed to variablename(). In fact, by using the is operator, the lambda function tests whether the name of the tested variable is bound to the exact same object as the variable passed to variablename(). If so, the tuple passes the test and is returned by ifilter(). The itertools.ifilter() function actually returns an iterator which doesn't return any results until it is called properly. To get it called properly, I put it inside a list comprehension [tpl[0] for tpl ... globals().items())]. 
The list comprehension saves only the variable name tpl[0], ignoring the variable value. The list that is created contains one or more names (as strings) that are bound to the value of the variable passed to variablename(). In the uses of variablename() shown below, the desired string is returned as an element in a list. In many cases, it will be the only item in the list. If another variable name is assigned the same value, however, the list will be longer. >>> def variablename(var): ... import itertools ... return [tpl[0] for tpl in ... itertools.ifilter(lambda x: var is x[1], globals().items())] ... >>> var = {} >>> variablename(var) ['var'] >>> something_else = 3 >>> variablename(something_else) ['something_else'] >>> yet_another = 3 >>> variablename(something_else) ['yet_another', 'something_else'] A: As long as it's a variable and not a class, this works for me: def print_var_name(variable): for name in globals(): if eval(name) == variable: print name foo = 123 print_var_name(foo) >>>foo this happens for class members: class xyz: def __init__(self): pass member = xyz() print_var_name(member) >>>member and this for classes (as an example): abc = xyz print_var_name(abc) >>>abc >>>xyz So for classes it gives you the name AND the properties A: This is not possible. In Python, there really isn't any such thing as a "variable". What Python really has are "names" which can have objects bound to them. It makes no difference to the object what names, if any, it might be bound to. It might be bound to dozens of different names, or none. Consider this example: foo = 1 bar = 1 baz = 1 Now, suppose you have the integer object with value 1, and you want to work backwards and find its name. What would you print? Three different names have that object bound to them, and all are equally valid. In Python, a name is a way to access an object, so there is no way to work with names directly.
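To make the ambiguity described above concrete, here is a minimal sketch (the names sentinel and names_of are my own inventions, not anything from the answers):

```python
# Minimal sketch: several names bound to one object -- none of them is "the" name.
sentinel = object()
foo = sentinel
bar = sentinel

def names_of(value):
    """Return every global name currently bound to this exact object."""
    return sorted(k for k, v in globals().items() if v is value)

print(names_of(foo))  # 'bar', 'foo' and 'sentinel' all come back, equally valid
```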
There might be some clever way to hack the Python bytecodes or something to get the value of the name, but that is at best a parlor trick. If you know you want print foo to print "foo", you might as well just execute print "foo" in the first place. EDIT: I have changed the wording slightly to make this more clear. Also, here is an even better example: foo = 1 bar = foo baz = foo In practice, Python reuses the same object for integers with common values like 0 or 1, so the first example should bind the same object to all three names. But this example is crystal clear: the same object is bound to foo, bar, and baz. A: Technically the information is available to you, but as others have asked, how would you make use of it in a sensible way? >>> x = 52 >>> globals() {'__builtins__': <module '__builtin__' (built-in)>, '__name__': '__main__', 'x': 52, '__doc__': None, '__package__': None} This shows that the variable name is present as a string in the globals() dictionary. >>> globals().keys()[2] 'x' In this case it happens to be the third key, but there's no reliable way to know where a given variable name will end up. >>> for k in globals().keys(): ... if not k.startswith("_"): ... print k ... x >>> You could filter out system variables like this, but you're still going to get all of your own items. Just running that code above created another variable "k" that changed the position of "x" in the dict. But maybe this is a useful start for you. If you tell us what you want this capability for, more helpful information could possibly be given. A: By using the ** unpacking operator: >>> def tostr(**kwargs): return kwargs >>> var = {} >>> something_else = 3 >>> tostr(var = var,something_else=something_else) {'var': {}, 'something_else': 3} A: You somehow have to refer to the variable you want to print the name of. So it would look like: print varname(something_else) There is no such function, but if there were it would be kind of pointless.
You have to type out something_else, so you might as well just type quotes to the left and right of it to print the name as a string: print "something_else" A: What are you trying to achieve? There is absolutely no reason to ever do what you describe, and there is likely a much better solution to the problem you're trying to solve. The most obvious alternative to what you request is a dictionary. For example: >>> my_data = {'var': 'something'} >>> my_data['something_else'] = 'something' >>> print my_data.keys() ['var', 'something_else'] >>> print my_data['var'] something Mostly as a.. challenge, I implemented your desired output. Do not use this code, please! #!/usr/bin/env python2.6 class NewLocals: """Please don't ever use this code..""" def __init__(self, initial_locals): self.prev_locals = list(initial_locals.keys()) def show_new(self, new_locals): output = ", ".join(list(set(new_locals) - set(self.prev_locals))) self.prev_locals = list(new_locals.keys()) return output # Set up eww = None eww = NewLocals(locals()) # "Working" requested code var = {} print eww.show_new(locals()) # Outputs: var something_else = 3 print eww.show_new(locals()) # Outputs: something_else # Further testing another_variable = 4 and_a_final_one = 5 print eww.show_new(locals()) # Outputs: another_variable, and_a_final_one A: Does Django not do this when generating field names? http://docs.djangoproject.com/en/dev//topics/db/models/#verbose-field-names Seems reasonable to me. A: I think this is a cool solution and I suppose the best you can get. But do you see any way to handle the ambiguous results your function may return? As the question "is" operator behaves unexpectedly with integers shows, low integers and strings of the same value get cached by Python, so that your variablename function might provide ambiguous results with a high probability.
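A short sketch of the caching pitfall just described (this relies on CPython's interning of small integers, an implementation detail):

```python
# In CPython, small integers are interned, so two "independent" assignments
# can end up bound to the very same object.
a = 1
b = 1
print(a is b)  # True in CPython -- identity cannot tell the two names apart

# An identity-based name search therefore returns both names.
matches = sorted(k for k, v in dict(globals()).items()
                 if v is a and not k.startswith('_'))
print(matches)  # includes both 'a' and 'b'
```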
In my case, I would like to create a decorator that adds a new variable to a class by the variable name I pass it: def inject(klass, dependency): klass.__dict__["__"+variablename(dependency)]=dependency But if your method returns ambiguous results, how can I know the name of the variable I added? any_var = "myvarcontent" myvar = "myvarcontent" @inject(myvar) class myclasss(): def myclass_method(self): print self.__myvar # I cannot be sure that this variable will be set... Maybe if I will also check the local list I could at least remove the "dependency" variable from the list, but this will not be a reliable result. A: Here is a succinct variation that lets you specify any dictionary. The issue with using dictionaries to find anything is that multiple variables can have the same value. So this code returns a list of possible variables. def varname( var, dir=locals()): return [ key for key, val in dir.items() if id( val) == id( var)] A: I don't know if it's right or not, but it worked for me def varname(variable): for name in list(globals().keys()): expression = f'id({name})' if id(variable) == eval(expression): return name A: It is possible to a limited extent. The answer is similar to the solution by @tamtam . The given example makes the following assumptions: You are searching for a variable by its value The variable has a distinct value The value is in the global namespace Example: testVar = "unique value" varNameAsString = [k for k,v in globals().items() if v == "unique value"] # # the variable "varNameAsString" will contain all the variable names that match # the value "unique value" # for this example, it will be a list of a single entry "testVar" # print(varNameAsString) Output : ['testVar'] You can extend this example for any other variable/data type
There are a number of functions, like patch.object, that take the name of a method or property to be patched or accessed. Consider this: patch.object(obj, "method_name", new_reg) This can potentially start "false succeeding" when you change the name of a method. IE: you can ship a bug, you thought you were testing.... simply because of a bad method name refactor. Now consider: varname. This could be an efficient, built-in function. But for now it can work by iterating an object or the caller's frame: Now your call can be: patch.member(obj, obj.method_name, new_reg) And the patch function can call: varname(var, obj=obj) This would: assert that the var is bound to the obj and return the name of the member. Or if the obj is not specified, use the callers stack frame to derive it, etc. Could be made an efficient built in at some point, but here's a definition that works. I deliberately didn't support builtins, easy to add tho: Feel free to stick this in a package called varname.py, and use it in your patch.object calls: patch.object(obj, varname(obj, obj.method_name), new_reg) Note: this was written for python 3. import inspect def _varname_dict(var, dct): key_name = None for key, val in dct.items(): if val is var: if key_name is not None: raise NotImplementedError("Duplicate names not supported %s, %s" % (key_name, key)) key_name = key return key_name def _varname_obj(var, obj): key_name = None for key in dir(obj): val = getattr(obj, key) equal = val is var if equal: if key_name is not None: raise NotImplementedError("Duplicate names not supported %s, %s" % (key_name, key)) key_name = key return key_name def varname(var, obj=None): if obj is None: if hasattr(var, "__self__"): return var.__name__ caller_frame = inspect.currentframe().f_back try: ret = _varname_dict(var, caller_frame.f_locals) except NameError: ret = _varname_dict(var, caller_frame.f_globals) else: ret = _varname_obj(var, obj) if ret is None: raise NameError("Name not found. 
(Note: builtins not supported)") return ret A: This will work for simple data types (str, int, float, list etc.) >>> def my_print(var_str): print var_str+':', globals()[var_str] >>> a = 5 >>> b = ['hello', ',world!'] >>> my_print('a') a: 5 >>> my_print('b') b: ['hello', ',world!'] A: It's not very Pythonesque but I was curious and found this solution. You need to duplicate the globals dictionary since its size will change as soon as you define a new variable. def var_to_name(var): # noinspection PyTypeChecker dict_vars = dict(globals().items()) var_string = None for name in dict_vars.keys(): if dict_vars[name] is var: var_string = name break return var_string if __name__ == "__main__": test = 3 print(f"test = {test}") print(f"variable name: {var_to_name(test)}") which returns: test = 3 variable name: test A: To get the variable name of var as a string: var = 1000 var_name = [k for k,v in locals().items() if v == var][0] print(var_name) # ---> outputs 'var' A: Thanks @restrepo, this was exactly what I needed to create a standard save_df_to_file() function. For this, I made some small changes to your tostr() function. Hope this will help someone else: def variabletostr(**df): variablename = list(df.keys())[0] return variablename variabletostr(df=0) A: The original question is pretty old, but I found an almost-solution with Python 3. (I say almost because I think you can get close to a solution but I do not believe there is a solution concrete enough to satisfy the exact request).
First, you might want to consider the following: objects are a core concept in Python, and they may be assigned to a variable, but the variable itself is a bound name (think pointer or reference), not the object itself; var is just a variable name bound to an object, and that object could have more than one reference (in your example it does not seem to); in this case, var appears to be in the global namespace, so you can use the conveniently named globals() builtin; different name references to the same object will all share the same id, which can be checked by running the id builtin like so: id(var) This function grabs the global variables and filters out the ones matching the content of your variable. def get_bound_names(target_variable): '''Returns a list of bound object names.''' return [k for k, v in globals().items() if v is target_variable] The real challenge here is that you are not guaranteed to get back the variable name by itself. It will be a list, but that list will contain the variable name you are looking for. If your target variable (bound to an object) is really the only bound name, you could access it this way: bound_names = get_bound_names(target_variable) var_string = bound_names[0] A: Possible for Python >= 3.8 (with the f'{var=}' string) Not sure if this could be used in production code, but in Python 3.8 (and up) you can use the f-string debugging specifier.
Pick up [0] element of the list if you need variable name only. my_salary_variable = 5000 param_list = f'{my_salary_variable=}'.split('=') print(param_list[0]) Output: my_salary_variable or, in one line my_salary_variable = 5000 print(f'{my_salary_variable=}'.split('=')[0]) Output: my_salary_variable Works with functions too: def my_super_calc_foo(number): return number**3 print(f'{my_super_calc_foo(5) = }') print(f'{my_super_calc_foo(5)=}'.split('=')) Output: my_super_calc_foo(5) = 125 ['my_super_calc_foo(5)', '125'] Process finished with exit code 0
Convert Variable Name to String?
I would like to convert a python variable name into the string equivalent as shown. Any ideas how? var = {} print ??? # Would like to see 'var' something_else = 3 print ??? # Would print 'something_else'
[ "TL;DR: Not possible. See 'conclusion' at the end.\n\nThere is an usage scenario where you might need this. I'm not implying there are not better ways or achieving the same functionality.\nThis would be useful in order to 'dump' an arbitrary list of dictionaries in case of error, in debug modes and other similar situations.\nWhat would be needed, is the reverse of the eval() function:\nget_indentifier_name_missing_function()\n\nwhich would take an identifier name ('variable','dictionary',etc) as an argument, and return a\nstring containing the identifier’s name.\n\nConsider the following current state of affairs:\nrandom_function(argument_data)\n\nIf one is passing an identifier name ('function','variable','dictionary',etc) argument_data to a random_function() (another identifier name), one actually passes an identifier (e.g.: <argument_data object at 0xb1ce10>) to another identifier (e.g.: <function random_function at 0xafff78>):\n<function random_function at 0xafff78>(<argument_data object at 0xb1ce10>)\n\nFrom my understanding, only the memory address is passed to the function:\n<function at 0xafff78>(<object at 0xb1ce10>)\n\nTherefore, one would need to pass a string as an argument to random_function() in order for that function to have the argument's identifier name:\nrandom_function('argument_data')\n\nInside the random_function()\ndef random_function(first_argument):\n\n, one would use the already supplied string 'argument_data' to:\n\nserve as an 'identifier name' (to display, log, string split/concat, whatever)\n\nfeed the eval() function in order to get a reference to the actual identifier, and therefore, a reference to the real data:\nprint(\"Currently working on\", first_argument)\nsome_internal_var = eval(first_argument)\nprint(\"here comes the data: \" + str(some_internal_var))\n\n\n\nUnfortunately, this doesn't work in all cases. It only works if the random_function() can resolve the 'argument_data' string to an actual identifier. I.e. 
If argument_data identifier name is available in the random_function()'s namespace.\nThis isn't always the case:\n# main1.py\nimport some_module1\n\nargument_data = 'my data'\n\nsome_module1.random_function('argument_data')\n\n\n# some_module1.py\ndef random_function(first_argument):\n print(\"Currently working on\", first_argument)\n some_internal_var = eval(first_argument)\n print(\"here comes the data: \" + str(some_internal_var))\n######\n\nExpected results would be:\nCurrently working on: argument_data\nhere comes the data: my data\n\nBecause argument_data identifier name is not available in the random_function()'s namespace, this would yield instead:\nCurrently working on argument_data\nTraceback (most recent call last):\n File \"~/main1.py\", line 6, in <module>\n some_module1.random_function('argument_data')\n File \"~/some_module1.py\", line 4, in random_function\n some_internal_var = eval(first_argument)\n File \"<string>\", line 1, in <module>\nNameError: name 'argument_data' is not defined\n\n\nNow, consider the hypotetical usage of a get_indentifier_name_missing_function() which would behave as described above.\nHere's a dummy Python 3.0 code: .\n# main2.py\nimport some_module2\nsome_dictionary_1 = { 'definition_1':'text_1',\n 'definition_2':'text_2',\n 'etc':'etc.' }\nsome_other_dictionary_2 = { 'key_3':'value_3',\n 'key_4':'value_4', \n 'etc':'etc.' }\n#\n# more such stuff\n#\nsome_other_dictionary_n = { 'random_n':'random_n',\n 'etc':'etc.' 
}\n\nfor each_one_of_my_dictionaries in ( some_dictionary_1,\n some_other_dictionary_2,\n ...,\n some_other_dictionary_n ):\n some_module2.some_function(each_one_of_my_dictionaries)\n\n\n# some_module2.py\ndef some_function(a_dictionary_object):\n for _key, _value in a_dictionary_object.items():\n print( get_indentifier_name_missing_function(a_dictionary_object) +\n \" \" +\n str(_key) +\n \" = \" +\n str(_value) )\n######\n\nExpected results would be:\nsome_dictionary_1 definition_1 = text_1\nsome_dictionary_1 definition_2 = text_2\nsome_dictionary_1 etc = etc.\nsome_other_dictionary_2 key_3 = value_3\nsome_other_dictionary_2 key_4 = value_4\nsome_other_dictionary_2 etc = etc.\n......\n......\n......\nsome_other_dictionary_n random_n = random_n\nsome_other_dictionary_n etc = etc.\n\nUnfortunately, get_indentifier_name_missing_function() would not see the 'original' identifier names (some_dictionary_,some_other_dictionary_2,some_other_dictionary_n). It would only see the a_dictionary_object identifier name.\nTherefore the real result would rather be:\na_dictionary_object definition_1 = text_1\na_dictionary_object definition_2 = text_2\na_dictionary_object etc = etc.\na_dictionary_object key_3 = value_3\na_dictionary_object key_4 = value_4\na_dictionary_object etc = etc.\n......\n......\n......\na_dictionary_object random_n = random_n\na_dictionary_object etc = etc.\n\nSo, the reverse of the eval() function won't be that useful in this case.\n\nCurrently, one would need to do this:\n# main2.py same as above, except:\n\n for each_one_of_my_dictionaries_names in ( 'some_dictionary_1',\n 'some_other_dictionary_2',\n '...',\n 'some_other_dictionary_n' ):\n some_module2.some_function( { each_one_of_my_dictionaries_names :\n eval(each_one_of_my_dictionaries_names) } )\n \n \n # some_module2.py\n def some_function(a_dictionary_name_object_container):\n for _dictionary_name, _dictionary_object in a_dictionary_name_object_container.items():\n for _key, _value in 
_dictionary_object.items():\n print( str(_dictionary_name) +\n \" \" +\n str(_key) +\n \" = \" +\n str(_value) )\n ######\n\n\nIn conclusion:\n\nPython passes only memory addresses as arguments to functions.\nStrings representing the name of an identifier, can only be referenced back to the actual identifier by the eval() function if the name identifier is available in the current namespace.\nA hypothetical reverse of the eval() function, would not be useful in cases where the identifier name is not 'seen' directly by the calling code. E.g. inside any called function.\nCurrently one needs to pass to a function:\n\nthe string representing the identifier name\nthe actual identifier (memory address)\n\n\n\nThis can be achieved by passing both the 'string' and eval('string') to the called function at the same time. I think this is the most 'general' way of solving this egg-chicken problem across arbitrary functions, modules, namespaces, without using corner-case solutions. The only downside is the use of the eval() function which may easily lead to unsecured code. Care must be taken to not feed the eval() function with just about anything, especially unfiltered external-input data.\n", "Totally possible with the python-varname package (python3):\nfrom varname import nameof\n\ns = 'Hey!'\n\nprint (nameof(s))\n\nOutput:\ns\n\nInstall:\npip3 install varname\n\nOr get the package here:\nhttps://github.com/pwwang/python-varname\n", "I searched for this question because I wanted a Python program to print assignment statements for some of the variables in the program. For example, it might print \"foo = 3, bar = 21, baz = 432\". The print function would need the variable names in string form. I could have provided my code with the strings \"foo\",\"bar\", and \"baz\", but that felt like repeating myself. After reading the previous answers, I developed the solution below.\nThe globals() function behaves like a dict with variable names (in the form of strings) as keys. 
I wanted to retrieve from globals() the key corresponding to the value of each variable. The method globals().items() returns a list of tuples; in each tuple the first item is the variable name (as a string) and the second is the variable value. My variablename() function searches through that list to find the variable name(s) that corresponds to the value of the variable whose name I need in string form.\nThe function itertools.ifilter() does the search by testing each tuple in the globals().items() list with the function lambda x: var is globals()[x[0]]. In that function x is the tuple being tested; x[0] is the variable name (as a string) and x[1] is the value. The lambda function tests whether the value of the tested variable is the same as the value of the variable passed to variablename(). In fact, by using the is operator, the lambda function tests whether the name of the tested variable is bound to the exact same object as the variable passed to variablename(). If so, the tuple passes the test and is returned by ifilter().\nThe itertools.ifilter() function actually returns an iterator which doesn't return any results until it is called properly. To get it called properly, I put it inside a list comprehension [tpl[0] for tpl ... globals().items())]. The list comprehension saves only the variable name tpl[0], ignoring the variable value. The list that is created contains one or more names (as strings) that are bound to the value of the variable passed to variablename().\nIn the uses of variablename() shown below, the desired string is returned as an element in a list. In many cases, it will be the only item in the list. If another variable name is assigned the same value, however, the list will be longer.\n>>> def variablename(var):\n... import itertools\n... return [tpl[0] for tpl in \n... itertools.ifilter(lambda x: var is x[1], globals().items())]\n... 
\n>>> var = {}\n>>> variablename(var)\n['var']\n>>> something_else = 3\n>>> variablename(something_else)\n['something_else']\n>>> yet_another = 3\n>>> variablename(something_else)\n['yet_another', 'something_else']\n\n", "as long as it's a variable and not a second class, this here works for me:\ndef print_var_name(variable):\n for name in globals():\n if eval(name) == variable:\n print name\nfoo = 123\nprint_var_name(foo)\n>>>foo\n\nthis happens for class members:\nclass xyz:\n def __init__(self):\n pass\nmember = xyz()\nprint_var_name(member)\n>>>member\n\nans this for classes (as example):\nabc = xyz\nprint_var_name(abc)\n>>>abc\n>>>xyz\n\nSo for classes it gives you the name AND the properteries\n", "This is not possible.\nIn Python, there really isn't any such thing as a \"variable\". What Python really has are \"names\" which can have objects bound to them. It makes no difference to the object what names, if any, it might be bound to. It might be bound to dozens of different names, or none.\nConsider this example:\nfoo = 1\nbar = 1\nbaz = 1\n\nNow, suppose you have the integer object with value 1, and you want to work backwards and find its name. What would you print? Three different names have that object bound to them, and all are equally valid.\nIn Python, a name is a way to access an object, so there is no way to work with names directly. There might be some clever way to hack the Python bytecodes or something to get the value of the name, but that is at best a parlor trick.\nIf you know you want print foo to print \"foo\", you might as well just execute print \"foo\" in the first place.\nEDIT: I have changed the wording slightly to make this more clear. Also, here is an even better example:\nfoo = 1\nbar = foo\nbaz = foo\n\nIn practice, Python reuses the same object for integers with common values like 0 or 1, so the first example should bind the same object to all three names. 
But this example is crystal clear: the same object is bound to foo, bar, and baz.\n", "Technically the information is available to you, but as others have asked, how would you make use of it in a sensible way?\n>>> x = 52\n>>> globals()\n{'__builtins__': <module '__builtin__' (built-in)>, '__name__': '__main__', \n'x': 52, '__doc__': None, '__package__': None}\n\nThis shows that the variable name is present as a string in the globals() dictionary.\n>>> globals().keys()[2]\n'x'\n\nIn this case it happens to be the third key, but there's no reliable way to know where a given variable name will end up\n>>> for k in globals().keys():\n... if not k.startswith(\"_\"):\n... print k\n...\nx\n>>>\n\nYou could filter out system variables like this, but you're still going to get all of your own items. Just running that code above created another variable \"k\" that changed the position of \"x\" in the dict.\nBut maybe this is a useful start for you. If you tell us what you want this capability for, more helpful information could possibly be given.\n", "By using the the unpacking operator:\n>>> def tostr(**kwargs):\n return kwargs\n\n>>> var = {}\n>>> something_else = 3\n>>> tostr(var = var,something_else=something_else)\n{'var' = {},'something_else'=3}\n\n", "You somehow have to refer to the variable you want to print the name of. So it would look like:\nprint varname(something_else)\n\nThere is no such function, but if there were it would be kind of pointless. You have to type out something_else, so you can as well just type quotes to the left and right of it to print the name as a string:\nprint \"something_else\"\n\n", "What are you trying to achieve? There is absolutely no reason to ever do what you describe, and there is likely a much better solution to the problem you're trying to solve..\nThe most obvious alternative to what you request is a dictionary. 
For example:\n>>> my_data = {'var': 'something'}\n>>> my_data['something_else'] = 'something'\n>>> print my_data.keys()\n['var', 'something_else']\n>>> print my_data['var']\nsomething\n\nMostly as a.. challenge, I implemented your desired output. Do not use this code, please!\n#!/usr/bin/env python2.6\nclass NewLocals:\n \"\"\"Please don't ever use this code..\"\"\"\n def __init__(self, initial_locals):\n self.prev_locals = list(initial_locals.keys())\n\n def show_new(self, new_locals):\n output = \", \".join(list(set(new_locals) - set(self.prev_locals)))\n self.prev_locals = list(new_locals.keys())\n return output\n# Set up\neww = None\neww = NewLocals(locals())\n\n# \"Working\" requested code\n\nvar = {}\n\nprint eww.show_new(locals()) # Outputs: var\n\nsomething_else = 3\nprint eww.show_new(locals()) # Outputs: something_else\n\n# Further testing\n\nanother_variable = 4\nand_a_final_one = 5\n\nprint eww.show_new(locals()) # Outputs: another_variable, and_a_final_one\n\n", "Does Django not do this when generating field names?\nhttp://docs.djangoproject.com/en/dev//topics/db/models/#verbose-field-names\nSeems reasonable to me.\n", "I think this is a cool solution and I suppose the best you can get. But do you see any way to handle the ambigious results, your function may return?\nAs \"is\" operator behaves unexpectedly with integers shows, low integers and strings of the same value get cached by python so that your variablename-function might priovide ambigous results with a high probability. 
\nIn my case, I would like to create a decorator, that adds a new variable to a class by the varialbename i pass it:\ndef inject(klass, dependency):\nklass.__dict__[\"__\"+variablename(dependency)]=dependency\n\nBut if your method returns ambigous results, how can I know the name of the variable I added?\nvar any_var=\"myvarcontent\"\nvar myvar=\"myvarcontent\"\n@inject(myvar)\nclass myclasss():\n def myclass_method(self):\n print self.__myvar #I can not be sure, that this variable will be set...\n\nMaybe if I will also check the local list I could at least remove the \"dependency\"-Variable from the list, but this will not be a reliable result.\n", "Here is a succinct variation that lets you specify any directory.\nThe issue with using directories to find anything is that multiple variables can have the same value. So this code returns a list of possible variables.\ndef varname( var, dir=locals()):\n return [ key for key, val in dir.items() if id( val) == id( var)]\n\n", "I don't know it's right or not, but it worked for me\ndef varname(variable):\n for name in list(globals().keys()):\n expression = f'id({name})'\n if id(variable) == eval(expression):\n return name\n\n", "it is possible to a limited extent. 
the answer is similar to the solution by @tamtam .\nThe given example assumes the following assumptions -\n\nYou are searching for a variable by its value\nThe variable has a distinct value\nThe value is in the global namespace\n\nExample:\ntestVar = \"unique value\"\nvarNameAsString = [k for k,v in globals().items() if v == \"unique value\"]\n#\n# the variable \"varNameAsString\" will contain all the variable name that matches\n# the value \"unique value\"\n# for this example, it will be a list of a single entry \"testVar\"\n#\nprint(varNameAsString)\n\nOutput : ['testVar']\nYou can extend this example for any other variable/data type\n", "I'd like to point out a use case for this that is not an anti-pattern, and there is no better way to do it.\nThis seems to be a missing feature in python.\nThere are a number of functions, like patch.object, that take the name of a method or property to be patched or accessed.\nConsider this:\npatch.object(obj, \"method_name\", new_reg)\nThis can potentially start \"false succeeding\" when you change the name of a method. IE: you can ship a bug, you thought you were testing.... simply because of a bad method name refactor.\nNow consider: varname. This could be an efficient, built-in function. But for now it can work by iterating an object or the caller's frame:\nNow your call can be:\npatch.member(obj, obj.method_name, new_reg)\nAnd the patch function can call:\nvarname(var, obj=obj)\nThis would: assert that the var is bound to the obj and return the name of the member. Or if the obj is not specified, use the callers stack frame to derive it, etc.\nCould be made an efficient built in at some point, but here's a definition that works. 
I deliberately didn't support builtins, easy to add tho:\nFeel free to stick this in a package called varname.py, and use it in your patch.object calls:\npatch.object(obj, varname(obj, obj.method_name), new_reg)\nNote: this was written for python 3.\nimport inspect\n\ndef _varname_dict(var, dct):\n key_name = None\n for key, val in dct.items():\n if val is var:\n if key_name is not None:\n raise NotImplementedError(\"Duplicate names not supported %s, %s\" % (key_name, key))\n key_name = key\n return key_name\n\ndef _varname_obj(var, obj):\n key_name = None\n for key in dir(obj):\n val = getattr(obj, key)\n equal = val is var\n if equal:\n if key_name is not None:\n raise NotImplementedError(\"Duplicate names not supported %s, %s\" % (key_name, key))\n key_name = key\n return key_name\n\ndef varname(var, obj=None):\n if obj is None:\n if hasattr(var, \"__self__\"):\n return var.__name__\n caller_frame = inspect.currentframe().f_back\n try:\n ret = _varname_dict(var, caller_frame.f_locals)\n except NameError:\n ret = _varname_dict(var, caller_frame.f_globals)\n else:\n ret = _varname_obj(var, obj)\n if ret is None:\n raise NameError(\"Name not found. (Note: builtins not supported)\")\n return ret\n\n", "This will work for simnple data types (str, int, float, list etc.)\n\n>>> def my_print(var_str) : \n print var_str+':', globals()[var_str]\n>>> a = 5\n>>> b = ['hello', ',world!']\n>>> my_print('a')\na: 5\n>>> my_print('b')\nb: ['hello', ',world!']\n\n", "It's not very Pythonesque but I was curious and found this solution. 
You need to duplicate the globals dictionary since its size will change as soon as you define a new variable.\ndef var_to_name(var):\n # noinspection PyTypeChecker\n dict_vars = dict(globals().items())\n\n var_string = None\n\n for name in dict_vars.keys():\n if dict_vars[name] is var:\n var_string = name\n break\n\n return var_string\n\n\nif __name__ == \"__main__\":\n test = 3\n print(f\"test = {test}\")\n print(f\"variable name: {var_to_name(test)}\")\n\nwhich returns:\ntest = 3\nvariable name: test\n\n", "To get the variable name of var as a string:\nvar = 1000\nvar_name = [k for k,v in locals().items() if v == var][0] \nprint(var_name) # ---> outputs 'var'\n\n", "Thanks @restrepo, this was exactly what I needed to create a standard save_df_to_file() function. For this, I made some small changes to your tostr() function. Hope this will help someone else:\ndef variabletostr(**df):\n variablename = list(df.keys())[0]\n return variablename\n \n variabletostr(df=0)\n\n", "The original question is pretty old, but I found an almost solution with Python 3. 
(I say almost because I think you can get close to a solution but I do not believe there is a solution concrete enough to satisfy the exact request).\nFirst, you might want to consider the following:\n\nobjects are a core concept in Python, and they may be assigned a variable, but the variable itself is a bound name (think pointer or reference) not the object itself\nvar is just a variable name bound to an object and that object could have more than one reference (in your example it does not seem to)\nin this case, var appears to be in the global namespace so you can use the global builtin conveniently named global\ndifferent name references to the same object will all share the same id which can be checked by running the id builtin id like so: id(var)\n\nThis function grabs the global variables and filters out the ones matching the content of your variable.\ndef get_bound_names(target_variable):\n '''Returns a list of bound object names.'''\n return [k for k, v in globals().items() if v is target_variable]\n\nThe real challenge here is that you are not guaranteed to get back the variable name by itself. It will be a list, but that list will contain the variable name you are looking for. If your target variable (bound to an object) is really the only bound name, you could access it this way:\nbound_names = get_variable_names(target_variable)\nvar_string = bound_names[0]\n\n", "Possible for Python >= 3.8 (with f'{var=}' string )\nNot sure if this could be used in production code, but in Python 3.8(and up) you can use f' string debugging specifier. 
Add = at the end of an expression, and it will print both the expression and its value:\nmy_salary_variable = 5000\nprint(f'{my_salary_variable = }')\n\nOutput:\nmy_salary_variable = 5000\n\nTo uncover this magic here is another example:\nparam_list = f'{my_salary_variable=}'.split('=')\nprint(param_list)\n\nOutput:\n['my_salary_variable', '5000']\n\nExplanation: when you put '=' after your var in f'string, it returns a string with variable name, '=' and its value. Split it with .split('=') and get a List of 2 strings, [0] - your_variable_name, and [1] - actual object of variable.\nPick up [0] element of the list if you need variable name only.\nmy_salary_variable = 5000\nparam_list = f'{my_salary_variable=}'.split('=')\nprint(param_list[0])\nOutput:\nmy_salary_variable\n\nor, in one line\n\nmy_salary_variable = 5000\nprint(f'{my_salary_variable=}'.split('=')[0])\nOutput:\nmy_salary_variable\n\nWorks with functions too:\ndef my_super_calc_foo(number):\n return number**3\n\nprint(f'{my_super_calc_foo(5) = }')\nprint(f'{my_super_calc_foo(5)=}'.split('='))\n\nOutput:\nmy_super_calc_foo(5) = 125\n['my_super_calc_foo(5)', '125']\n\nProcess finished with exit code 0\n\n" ]
[ 62, 41, 14, 12, 7, 6, 3, 2, 2, 2, 2, 2, 2, 2, 1, 0, 0, 0, 0, 0, 0 ]
[ "This module works for converting variables names to a string:\nhttps://pypi.org/project/varname/\nUse it like this:\nfrom varname import nameof\n\nvariable=0\n\nname=nameof(variable)\n\nprint(name)\n\n//output: variable\n\nInstall it by:\npip install varname\n\n", "print \"var\"\nprint \"something_else\"\n\nOr did you mean something_else?\n" ]
[ -1, -3 ]
[ "python", "string", "variables" ]
stackoverflow_0001534504_python_string_variables.txt
Q: AWS S3 Boto3 Python - An error occurred (AccessDenied) when calling the DeleteObject operation: Access Denied
I am having a problem implementing delete_object.

import boto3
from botocore.exceptions import ClientError

# client is a boto3 S3 client created elsewhere, e.g. client = boto3.client('s3')

def delete_image_from_s3(img=None):
    if img:
        try:
            response = client.delete_object(
                Bucket='my-bucket',
                Key='uploads/img.jpg',
            )
            print(response)
        except ClientError as ce:
            print("error", ce)

Whenever I send a request to delete a certain file, I keep receiving this error, caught by the exception handler:

An error occurred (AccessDenied) when calling the DeleteObject operation: Access Denied

I know it has something to do with my policies; I already set the required policies to allow it.

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "AllowPublicRead",
            "Effect": "Allow",
            "Principal": {
                "AWS": "*"
            },
            "Action": [
                "s3:GetObject",
                "s3:GetObjectAcl",
                "s3:PutObject",
                "s3:PutObjectAcl",
                "s3:DeleteObject"
            ],
            "Resource": "arn:aws:s3:::my-bucket/*"
        }
    ]
}

I have full access set for my IAM user for S3. Or does it have something to do with the bucket being public? Or am I just missing something? Any suggestions will do, thanks for responding.

A: The policy you have shown appears to be a Bucket Policy that is assigned to a specific bucket. This policy grants anyone in the world permission to use your S3 bucket, so it is not recommended from a security viewpoint. You should remove this bucket policy.
You have mentioned that the provided code is running on "localhost" -- I will presume this means you are running it on your own computer.
In order to make API calls to AWS from that code, you will need to provide AWS credentials. Typically, these credentials are stored in the ~/.aws/credentials file by running the AWS CLI aws configure command. You would need to provide an Access Key and Secret Key associated with an IAM User (via the Security Credentials tab in the IAM management console).
You would then need to assign permissions to the IAM User so that they can use the S3 bucket.
Note that the permissions are given to the IAM User, not the bucket:

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "AllowPublicRead",
            "Effect": "Allow",
            "Action": [
                "s3:GetObject",
                "s3:GetObjectAcl",
                "s3:PutObject",
                "s3:PutObjectAcl",
                "s3:DeleteObject"
            ],
            "Resource": "arn:aws:s3:::my-bucket/*"
        }
    ]
}

The difference here is that there is no Principal element, because the policy is directly attached to the IAM User.
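As a sketch of the difference the answer describes, the identity-based policy document can be built and serialized in Python before being attached to the user (for example via IAM's put_user_policy API, which expects the document as a JSON string). The bucket name, Sid, and helper name below are placeholders of my own, not part of the answer:

```python
import json

def make_s3_object_policy(bucket, actions):
    # Identity-based policy: note there is NO "Principal" element,
    # because the policy is attached directly to the IAM user
    # rather than to the bucket.
    return {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Sid": "AllowObjectAccess",
                "Effect": "Allow",
                "Action": list(actions),
                "Resource": f"arn:aws:s3:::{bucket}/*",
            }
        ],
    }

policy = make_s3_object_policy(
    "my-bucket", ["s3:GetObject", "s3:PutObject", "s3:DeleteObject"]
)
assert "Principal" not in policy["Statement"][0]
print(json.dumps(policy, indent=4))
```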
AWS S3 Boto3 Python - An error occurred (AccessDenied) when calling the DeleteObject operation: Access Denied
I am having a problem implementing delete_object. def delete_image_from_s3(img=None): if img: try: response = client.delete_object( Bucket='my-bucket', Key='uploads/img.jpg', ) print(response) except ClientError as ce: print("error", ce) whenever i send a request to delete a certain file, I keep receiving error caught by exception: An error occurred (AccessDenied) when calling the DeleteObject operation: Access Denied I know it has something to do with my policies, i already set required policies to allow it. { "Version": "2012-10-17", "Statement": [ { "Sid": "AllowPublicRead", "Effect": "Allow", "Principal": { "AWS": "*" }, "Action": [ "s3:GetObject", "s3:GetObjectAcl", "s3:PutObject", "s3:PutObjectAcl", "s3:DeleteObject" ], "Resource": "arn:aws:s3:::my-bucket/*" } ] } I have full acces set for my IAM user for s3.. Or does it do something with bucket being public? or I am just missing something. Any suggestions will do, thanks for responding.
[ "The policy you have shown appears to be a Bucket Policy that is assigned to a specific bucket. This policy is granting anyone in the world permission to use your S3 bucket, so it is not recommended from a security viewpoint. You should remove this bucket policy.\nYou have mentioned that the provided code is running on \"localhost\" -- I will presume this means you are running it on your own computer.\nIn order to make API calls to AWS from that code, you will need to provide AWS credentials. Typically, these credentials are stored in the ~/.aws/credentials file by running the AWS CLI aws configure command. You would need to provide an Access Key and Secret Key associated with an IAM User (via the Security Credentials tab in the IAM management console).\nYou would then need to assign permissions to the IAM User so that they can use the S3 bucket. Note that the permissions are given to the IAM User, not the bucket:\n{\n \"Version\": \"2012-10-17\",\n \"Statement\": [\n {\n \"Sid\": \"AllowPublicRead\",\n \"Effect\": \"Allow\",\n \"Action\": [\n \"s3:GetObject\",\n \"s3:GetObjectAcl\",\n \"s3:PutObject\",\n \"s3:PutObjectAcl\",\n \"s3:DeleteObject\"\n ],\n \"Resource\": \"arn:aws:s3:::my-bucket/*\"\n }\n ]\n}\n\nThe difference here is that there is no Principal because it is directly attached to the IAM User.\n" ]
[ 0 ]
[]
[]
[ "amazon_s3", "amazon_web_services", "boto3", "python" ]
stackoverflow_0074669404_amazon_s3_amazon_web_services_boto3_python.txt
Q: KeyError: '...' keeps coming back
I keep getting the error and can't find where the problem lies. I'm trying to make it so I can choose whether I want the attack, the creature, or both printed, and what type the creature is: 'easy', 'medium' or 'hard'; I want to store that into a variable.

import random

creature = {'easy': ['chicken', 'slime', 'rat'],
            'medium': ['wolf', 'cow', 'fox'],
            'hard': ['baby dragon', 'demon', 'lesser demi god']
            }
attack = {'easy': ['pecks you', 'spits juice at', 'scratches'],
          'medium': ['bites', 'charges at', 'bites'],
          'hard': ['spits sparks of fire at', 'rends', 'smashes']
          }
creature_easy = ['chicken', 'slime', 'rat']
cre = random.choice(creature_easy)
linked = dict(zip(creature[cre], attack[cre]))
cre_type = linked[0]
cre = random.choice(dict(creature))
print(linked[cre])

KeyError: 'rat'

Thanks in advance

A: You might want something like:

import random

chosen_level = 'easy'
game_data = dict(zip(creature[chosen_level], attack[chosen_level]))

cre = random.choice(list(game_data))
att = game_data[cre]
print(cre, att)

Output: rat scratches
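The accepted idea extends to any difficulty level: index the two dicts by the level key first, then draw a creature from the zipped pairing. A small sketch (the encounter helper name is my own):

```python
import random

creature = {'easy': ['chicken', 'slime', 'rat'],
            'medium': ['wolf', 'cow', 'fox'],
            'hard': ['baby dragon', 'demon', 'lesser demi god']}
attack = {'easy': ['pecks you', 'spits juice at', 'scratches'],
          'medium': ['bites', 'charges at', 'bites'],
          'hard': ['spits sparks of fire at', 'rends', 'smashes']}

def encounter(level):
    """Return a (creature, attack) pair drawn from one difficulty level."""
    pairs = dict(zip(creature[level], attack[level]))  # creature -> its attack
    name = random.choice(list(pairs))                  # pick a creature NAME, not a level key
    return name, pairs[name]

name, act = encounter('hard')
print(f'The {name} {act} you!')
```

The original KeyError came from using a creature name ('rat') where a level key ('easy') was expected; keeping the two kinds of keys in separate variables avoids that.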
KeyError: '...' keeps coming back
I keep getting the error and can't find where the problem lies. I'm trying so I can choose wether I want the attack the creature or both printed and what type the creature is: 'easy', 'medium' or 'hard', I want to store that into a variable. creature = {'easy': ['chicken', 'slime', 'rat'], 'medium': ['wolf', 'cow', 'fox'], 'hard': ['baby dragon', 'demon', 'lesser demi god'] } attack = { 'easy': ['pecks you', 'spits juice at', 'scratches'], 'medium': ['bites', 'charges at', 'bites'], 'hard': ['spits sparks of fire at', 'rends', 'smashes'] } creature_easy = ['chicken', 'slime', 'rat'] cre = random.choice(creature_easy) linked = dict(zip(creature[cre], attack[cre])) cre_type = linked[0] cre = random.choice(dict(creature)) print(linked[cre]) KeyError: 'rat' Thanks in advance
[ "You might want something like:\nchosen_level = 'easy'\ngame_data = dict(zip(creature[chosen_level], attack[chosen_level]))\n\nimport random\ncre = random.choice(list(game_data))\natt = game_data[cre]\n\nprint(cre, att) \n\nOutput: rat scratches\n" ]
[ 2 ]
[]
[]
[ "dictionary", "keyerror", "python" ]
stackoverflow_0074671020_dictionary_keyerror_python.txt
Q: How do I fill a dictionary with indices in a for loop?
I have a transposed DataFrame tr:

         7128        8719        14051       14636
JDUTC_0  2451957.36  2452149.36  2457243.98  2452531.89
JDUTC_1  2451957.37  2452149.36  2457243.99  2452531.90
JDUTC_2  2451957.37  2452149.36  2457244.00  2452531.91
JDUTC_3  NaN         2452149.36  NaN         NaN
JDUTC_4  NaN         2452149.36  NaN         NaN
JDUTC_5  NaN         2452149.36  NaN         NaN
JDUTC_6  1.23        2452149.37  NaN         NaN
JDUTC_7  NaN         NaN         NaN         NaN
JDUTC_8  NaN         NaN         NaN         NaN
JDUTC_9  NaN         NaN         NaN         NaN

And I create dict 'a' with this block of code:

a = {}
b = []
for _, contents in tr.items():
    b.clear()
    for ind, val in enumerate(contents):
        if np.isnan(val):
            b.append(ind)
            continue
        else:
            pass
    print(_)
    print(b)
    a[_] = b
print(a)

Which gives me this output:

7128
[3, 4, 5, 7, 8, 9]
{7128: [3, 4, 5, 7, 8, 9]}
8719
[7, 8, 9]
{7128: [7, 8, 9], 8719: [7, 8, 9]}
14051
[3, 4, 5, 6, 7, 8, 9]
{7128: [3, 4, 5, 6, 7, 8, 9], 8719: [3, 4, 5, 6, 7, 8, 9], 14051: [3, 4, 5, 6, 7, 8, 9]}
14636
[3, 4, 5, 6, 7, 8, 9]
{7128: [3, 4, 5, 6, 7, 8, 9], 8719: [3, 4, 5, 6, 7, 8, 9], 14051: [3, 4, 5, 6, 7, 8, 9], 14636: [3, 4, 5, 6, 7, 8, 9]}

What I expect dict 'a' to look like is this:

{7128: [3, 4, 5, 7, 8, 9],
 8719: [7, 8, 9],
 14051: [3, 4, 5, 6, 7, 8, 9],
 14636: [3, 4, 5, 6, 7, 8, 9]}

What am I doing wrong? Why is a[_] = b overwriting all the previous keys when print(_) is verifying that _ is always the next column label?

A: The problem is you are assigning the same list to all keys.

a = {}
b = []  # <--- you create ONE array/list 'b'
for _, contents in tr.items():
    b.clear()
    for ind, val in enumerate(contents):
        if np.isnan(val):
            b.append(ind)
            continue
        else:
            pass
    print(_)
    print(b)
    a[_] = b  # <--- assign the SAME array to all keys
print(a)

Check my comments on the code above. b.clear() just clears the same array; it does not create a new array. To run the code as you intended, create a new array/list inside the loop.
a = {} for _, contents in tr.items(): b = [] # <--- new array/list is created for ind, val in enumerate(contents): if np.isnan(val): b.append(ind) continue else: pass print(_) print(b) a[_] = b # <--- Now you assign the new array 'b' to a[_] print(a) A: With the correct name convention, I would change your code after: import numpy as np import pandas as pd import sys if sys.version_info[0] < 3: from StringIO import StringIO else: from io import StringIO s = StringIO("""idx 7128 8719 14051 14636 JDUTC_0 2451957.36 2452149.36 2457243.98 2452531.89 JDUTC_1 2451957.37 2452149.36 2457243.99 2452531.90 JDUTC_2 2451957.37 2452149.36 2457244.00 2452531.91 JDUTC_3 NaN 2452149.36 NaN NaN JDUTC_4 NaN 2452149.36 NaN NaN JDUTC_5 NaN 2452149.36 NaN NaN JDUTC_6 1.23 2452149.37 NaN NaN JDUTC_7 NaN NaN NaN NaN JDUTC_8 NaN NaN NaN NaN JDUTC_9 NaN NaN NaN NaN""") tr = pd.read_csv(s, sep="\t", index_col=0) (people should give minimal working code - but often forget to give e.g. the code to build the data frame etc. and the imports) to: a = {} b = [] for name, values in tr.items(): b.clear() # this is problematic as you know for ind, val in enumerate(values): if np.isnan(val): b.append(ind) continue else: pass a[name] = b continue and pass are not necessary - they just say "go on" with the loop. In Python, you are not forced to give the else branch: for name, values in tr.items(): b.clear() # This is still problematic at this state. for ind, val in enumerate(values): if np.isnan(val): b.append(ind) a[name] = b Such collection of data using for-loops are better done with list-comprehensions: a = {} for name, values in tr.items(): b = [ind for ind, val in enumerate(values) if np.isnan(val)] a[name] = b # now the result is already correct! 
And finally, you can even build a comprehension for the dictionary itself, making this entire code a one-liner - but a readable one, once you are familiar with list comprehensions: a = {name: [i for i, x in enumerate(vals) if np.isnan(x)] for name, vals in tr.items()} You can see the result: a # which returns: {'7128': [3, 4, 5, 7, 8, 9], '8719': [7, 8, 9], '14051': [3, 4, 5, 6, 7, 8, 9], '14636': [3, 4, 5, 6, 7, 8, 9]} List comprehensions point in the direction of Functional Programming (FP), which avoids mutation (such as the b.append() or b.clear() calls). As you have seen, your case is a demonstration of how easily a bug is generated when using mutation, which is part of why FP, while it looks brain-unfriendly at first sight, is actually the more brain-friendly way to program. A list comprehension is the Pythonic form of "map", and an "if" inside a list comprehension is the Pythonic equivalent of "filter".
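The aliasing described above can be reproduced without pandas at all; a minimal sketch (the keys and values are made up for illustration):

```python
# One shared list: every key ends up pointing at the same object.
shared = []
aliased = {}
for key in ("a", "b"):
    shared.clear()
    shared.extend([1, 2] if key == "a" else [3])
    aliased[key] = shared          # same list object stored each time

# Fresh list per iteration: each key keeps its own values.
fresh = {}
for key in ("a", "b"):
    b = [1, 2] if key == "a" else [3]
    fresh[key] = b

print(aliased)  # {'a': [3], 'b': [3]} - both keys see the last contents
print(fresh)    # {'a': [1, 2], 'b': [3]}
```

Because `aliased["a"]` and `aliased["b"]` are the same object, the last `clear()`/`extend()` shows through every key, which is exactly the symptom in the question.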
How do I fill a dictionary with indices in a for loop?
I have a transposed Dataframe tr: 7128 8719 14051 14636 JDUTC_0 2451957.36 2452149.36 2457243.98 2452531.89 JDUTC_1 2451957.37 2452149.36 2457243.99 2452531.90 JDUTC_2 2451957.37 2452149.36 2457244.00 2452531.91 JDUTC_3 NaN 2452149.36 NaN NaN JDUTC_4 NaN 2452149.36 NaN NaN JDUTC_5 NaN 2452149.36 NaN NaN JDUTC_6 1.23 2452149.37 NaN NaN JDUTC_7 NaN NaN NaN NaN JDUTC_8 NaN NaN NaN NaN JDUTC_9 NaN NaN NaN NaN And I create dict 'a' with this block of code: a = {} b=[] for _, contents in tr.items(): b.clear() for ind, val in enumerate(contents): if np.isnan(val): b.append(ind) continue else: pass print(_) print(b) a[_] = b print(a) Which gives me this output: 7128 [3, 4, 5, 7, 8, 9] {7128: [3, 4, 5, 7, 8, 9]} 8719 [7, 8, 9] {7128: [7, 8, 9], 8719: [7, 8, 9]} 14051 [3, 4, 5, 6, 7, 8, 9] {7128: [3, 4, 5, 6, 7, 8, 9], 8719: [3, 4, 5, 6, 7, 8, 9], 14051: [3, 4, 5, 6, 7, 8, 9]} 14636 [3, 4, 5, 6, 7, 8, 9] {7128: [3, 4, 5, 6, 7, 8, 9], 8719: [3, 4, 5, 6, 7, 8, 9], 14051: [3, 4, 5, 6, 7, 8, 9], 14636: [3, 4, 5, 6, 7, 8, 9]} What I expect dict 'a' to look like is this: {7128: [3, 4, 5, 7, 8, 9] 8719: [7, 8, 9] 14051: [3, 4, 5, 6, 7, 8, 9] 14636: [3, 4, 5, 6, 7, 8, 9]} What I am doing wrong? Why is a[_] = b overwriting all the previous keys when print(_) is verifying that _ is always the next column label?
[ "The problem is you are assigning same list to all keys.\na = {}\nb=[] # < --- You create one Array/list 'b'\nfor _, contents in tr.items():\n b.clear()\n for ind, val in enumerate(contents):\n if np.isnan(val):\n b.append(ind)\n continue\n else:\n pass\n print(_)\n print(b)\n a[_] = b # <-- assign same array to all keys.\n print(a)\n\nCheck my comment on the code above.\nb.clear()\n\nThis line just clears the same array, it does not create a new array.\nTo run the code as you intended, create a new array/list in side the loop.\na = {}\nfor _, contents in tr.items():\n b = [] # <--- new array/list is created\n for ind, val in enumerate(contents):\n if np.isnan(val):\n b.append(ind)\n continue\n else:\n pass\n print(_)\n print(b)\n a[_] = b # <--- Now you assign the new array 'b' to a[_]\n print(a)\n\n", "With the correct name convention, I would change your code\nafter:\nimport numpy as np\nimport pandas as pd\n\nimport sys\nif sys.version_info[0] < 3:\n from StringIO import StringIO\nelse:\n from io import StringIO\n\ns = StringIO(\"\"\"idx 7128 8719 14051 14636\nJDUTC_0 2451957.36 2452149.36 2457243.98 2452531.89\nJDUTC_1 2451957.37 2452149.36 2457243.99 2452531.90\nJDUTC_2 2451957.37 2452149.36 2457244.00 2452531.91\nJDUTC_3 NaN 2452149.36 NaN NaN\nJDUTC_4 NaN 2452149.36 NaN NaN\nJDUTC_5 NaN 2452149.36 NaN NaN\nJDUTC_6 1.23 2452149.37 NaN NaN\nJDUTC_7 NaN NaN NaN NaN\nJDUTC_8 NaN NaN NaN NaN\nJDUTC_9 NaN NaN NaN NaN\"\"\")\n\ntr = pd.read_csv(s, sep=\"\\t\", index_col=0)\n\n(people should give minimal working code - but often forget to give e.g. the code to build the data frame etc. 
and the imports)\nto:\n\n\na = {}\nb = []\nfor name, values in tr.items():\n b.clear() # this is problematic as you know\n for ind, val in enumerate(values):\n if np.isnan(val):\n b.append(ind)\n continue\n else:\n pass\n a[name] = b\n\ncontinue and pass are not necessary - they just say \"go on\" with the loop.\nIn Python, you are not forced to give the else branch:\nfor name, values in tr.items():\n b.clear() # This is still problematic at this state.\n for ind, val in enumerate(values):\n if np.isnan(val):\n b.append(ind)\n a[name] = b\n\nSuch collection of data using for-loops are better done with list-comprehensions:\na = {}\nfor name, values in tr.items():\n b = [ind for ind, val in enumerate(values) if np.isnan(val)]\n a[name] = b\n# now the result is already correct!\n\nAnd finally, you can even build list-comprehensions for dictionaries -\nmaking this entire code a one-liner - but a readable one - when one is familiar with list comprehensions:\na = {name: [i for i, x in enumerate(vals) if np.isnan(x)] for name, vals in tr.items()}\n\nYou can see the result:\na\n# which returns:\n{'7128': [3, 4, 5, 7, 8, 9],\n '8719': [7, 8, 9],\n '14051': [3, 4, 5, 6, 7, 8, 9],\n '14636': [3, 4, 5, 6, 7, 8, 9]}\n\nList-comprehensions are going into the direction of Functional Programming (FP).\nWhich exactly deals with the problem of not to apply mutation (like the b.append() or b.clear() methods - because - as you have seen: your case is a demonstration of how easily a bug is generated when using mutation. - and would contribute to the discussion - why FP - while it at the first sight looks brain-unfriendly - is\nactually the more brain-friendly way to program.\nList comprehensions are the Pythonic form of \"map\" - and if you use a \"if\" inside list comprehensions - this is the Pythonic equivalent to \"filter\" which FP people know like a second brain for breathing.\n" ]
[ 1, 1 ]
[]
[]
[ "dictionary", "for_loop", "python" ]
stackoverflow_0074669044_dictionary_for_loop_python.txt
Q: Add toolbar button icon matplotlib I want to add an icon to a custom button in a matplotlib figure toolbar. How can I do that? So far, I have the following code: import matplotlib matplotlib.rcParams["toolbar"] = "toolmanager" import matplotlib.pyplot as plt from matplotlib.backend_tools import ToolToggleBase class NewTool(ToolToggleBase): ...[tool code] fig = plt.figure() ax = fig.add_subplot(111) ax.plot([1, 2, 3], label="legend") ax.legend() fig.canvas.manager.toolmanager.add_tool("newtool", NewTool) fig.canvas.manager.toolbar.add_tool(toolmanager.get_tool("newtool"), "toolgroup") fig.show() For now, the only thing that it does is adding a new button (which does what I want) but the icon is only the tool's name, i.e.: "newtool". How can I change this for a custom icon like a png image? A: The tool can have an attribute image, which denotes the path to a png image. import matplotlib matplotlib.rcParams["toolbar"] = "toolmanager" import matplotlib.pyplot as plt from matplotlib.backend_tools import ToolBase class NewTool(ToolBase): image = r"C:\path\to\hiker.png" fig = plt.figure() ax = fig.add_subplot(111) ax.plot([1, 2, 3], label="legend") ax.legend() tm = fig.canvas.manager.toolmanager tm.add_tool("newtool", NewTool) fig.canvas.manager.toolbar.add_tool(tm.get_tool("newtool"), "toolgroup") plt.show() A: I tried this solution, there is also a similar solution on matplotlib docs, but I >cannot reproduce it. I get the following error: tm.add_tool("newtool", NewTool) >AttributeError: 'NoneType' object has no attribute 'add_tool' Seems weird that >plt.figure() does not contain canvas. Any ideas? – bad_locality Jun 25, 2020 at 12:12 I do not know if it helps, but I had the same issue at first, and then I realized that I had not activated the interactive matplotlib option (%matplotlib with Python - spyder). Therefore no toolbar is associated, since it only creates a static figure.
Add toolbar button icon matplotlib
I want to add an icon to a custom button in a matplotlib figure toolbar. How can I do that? So far, I have the following code: import matplotlib matplotlib.rcParams["toolbar"] = "toolmanager" import matplotlib.pyplot as plt from matplotlib.backend_tools import ToolToggleBase class NewTool(ToolToggleBase): ...[tool code] fig = plt.figure() ax = fig.add_subplot(111) ax.plot([1, 2, 3], label="legend") ax.legend() fig.canvas.manager.toolmanager.add_tool("newtool", NewTool) fig.canvas.manager.toolbar.add_tool(toolmanager.get_tool("newtool"), "toolgroup") fig.show() For now, the only thing that it does is adding a new button (which do what I want) but the icon is only the tool's name i.e.: "newtool". How can I change this for a custom icon like a png image?
[ "The tool can have an attribute image, which denotes the path to a png image.\nimport matplotlib\nmatplotlib.rcParams[\"toolbar\"] = \"toolmanager\"\nimport matplotlib.pyplot as plt\nfrom matplotlib.backend_tools import ToolBase\n\nclass NewTool(ToolBase):\n image = r\"C:\\path\\to\\hiker.png\"\n\nfig = plt.figure()\nax = fig.add_subplot(111)\nax.plot([1, 2, 3], label=\"legend\")\nax.legend()\ntm = fig.canvas.manager.toolmanager\ntm.add_tool(\"newtool\", NewTool)\nfig.canvas.manager.toolbar.add_tool(tm.get_tool(\"newtool\"), \"toolgroup\")\nplt.show()\n\n\n", "\nI tried this solution, there is also a similar solution on matplotlib docs, but I >cannot reproduce it. I get the following error: tm.add_tool(\"newtool\", NewTool) >AttributeError: 'NoneType' object has no attribute 'add_tool' Seems weird that >plt.figure() does not contain canvas. Any ideas? –\nbad_locality\nJun 25, 2020 at 12:12\n\nI do not know if it's help, but I had the same issues at first and then i realized that I had not activated the interactive matplotlib option (%matplotlib with Python - spyder). Therefore no toolbar are associated since it only create a static figure.\n" ]
[ 7, 0 ]
[]
[]
[ "matplotlib", "python" ]
stackoverflow_0052971285_matplotlib_python.txt
Q: How to apply contour to z matrix which has the same dimension as x- and y matrix My dataset contains x, y coordinates and an energy surface z. I need to turn those into this image. The problem is that z needs to have the dimension of (x, y) but all three of them are 1D numpy arrays with length = 10201 (also some values in z are inf). I tried to turn z into a meshgrid with Z, Z = np.meshgrid(z,z) and then try ax.contour(x, y, Z) but the result is this. How should I do it? I tried to turn z into a meshgrid, tried to remove all the rows which contain inf value in z, tried to turn all three of them into meshgrids A: To create a contour plot from your 1D arrays of x, y, and z coordinates, you can use NumPy's meshgrid function to create 2D grids from your 1D arrays, and then use the contour function from Matplotlib to create the contour plot. First, you need to create 2D grids from the unique values in your 1D arrays of x and y coordinates using NumPy's meshgrid function. You can do this by calling np.meshgrid(x, y), where x and y are the unique x and y coordinates, respectively. This will return two 2D grids, one for the x coordinates and one for the y coordinates. Next, you can use the contour function from Matplotlib to create the contour plot. You can do this by calling ax.contour(X, Y, Z), where ax is the axes object that you want to draw the contour plot on, X and Y are the 2D grids of x and y coordinates that you created using meshgrid, and Z is a 2D array of z values with the same shape as those grids (so your flat 1D z array has to be reshaped onto the grid first). This will create a contour plot with x and y coordinates on the x- and y-axes, respectively, and z values as the contour levels. One thing to keep in mind is that if you have any inf values in your z array, they can break the contour levels. In this case, you will need to mask or replace the inf values in your z array before creating the contour plot. 
You can do this by using NumPy's isinf function to find the inf values in your z array and replace them with NaN, which the contour function simply leaves blank. Here is an example of how you can use these steps to create a contour plot from your 1D arrays of x, y, and z coordinates: import numpy as np import matplotlib.pyplot as plt # 1D arrays of x, y, and z coordinates x = ... y = ... z = ... # Create 2D grids from the unique x and y coordinates xu, yu = np.unique(x), np.unique(y) X, Y = np.meshgrid(xu, yu) # Replace inf values with NaN and reshape z onto the grid # (assuming z is ordered row by row over the grid) Z = np.where(np.isinf(z), np.nan, z).reshape(len(yu), len(xu)) # Create figure and axes object fig, ax = plt.subplots() # Create contour plot ax.contour(X, Y, Z) # Add x and y labels ax.set_xlabel('x') ax.set_ylabel('y') # Show the plot plt.show() I hope this helps!
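For the reshaping step itself, here is a self-contained sketch (the tiny 3x3 grid and the row-major ordering of z are assumptions for illustration) of turning flattened x/y/z columns back into a 2D Z that contour can accept:

```python
import numpy as np

# Flattened coordinate columns, as in the question (tiny 3x3 example).
x = np.array([0, 1, 2, 0, 1, 2, 0, 1, 2], dtype=float)
y = np.array([0, 0, 0, 1, 1, 1, 2, 2, 2], dtype=float)
z = np.array([1, 2, np.inf, 4, 5, 6, 7, 8, 9], dtype=float)

# Unique axis values define the grid shape.
xu, yu = np.unique(x), np.unique(y)
X, Y = np.meshgrid(xu, yu)

# Replace inf with NaN (left blank when contouring) and reshape
# row by row, assuming y varies slowest in the flattened data.
Z = np.where(np.isinf(z), np.nan, z).reshape(len(yu), len(xu))

print(Z.shape)  # same shape as X and Y, so ax.contour(X, Y, Z) is valid
```

If the flattened data is not on a regular grid at all, matplotlib's triangulation-based `ax.tricontour(x, y, z)` accepts the 1D arrays directly (after filtering out non-finite z values).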
How to apply contour to z matrix which has the same dimension as x- and y matrix
My dataset contains x, y coordinate and energy surface z. I need to turn those into this image. The problem is that z need to have the dimension of (x, y) but all three of them are 1D numpy array with length = 10201 (also some values in z are inf). I tried to turn z into a meshgrid with Z, Z = np.meshgrid(z,z) and then try ax.contour(x, y, Z) but the result is this. How should it do it? I tried to turn z into a meshgrid, tried to remove all the rows which contain inf value in z, tried to turn all three of them into meshgrids
[ "To create a contour plot from your 1D arrays of x, y, and z coordinates, you can use NumPy's meshgrid function to create 2D grids from your 1D arrays, and then use the contour function from Matplotlib to create the contour plot.\nFirst, you need to create 2D grids from your 1D arrays of x and y coordinates using NumPy's meshgrid function. You can do this by calling np.meshgrid(x, y), where x and y are your 1D arrays of x and y coordinates, respectively. This will return two 2D grids, one for the x coordinates and one for the y coordinates.\nNext, you can use the contour function from Matplotlib to create the contour plot. You can do this by calling ax.contour(x, y, z), where ax is the axes object that you want to draw the contour plot on, x and y are the 2D grids of x and y coordinates that you created using meshgrid, and z is your 1D array of z coordinates. This will create a contour plot with x and y coordinates on the x- and y-axes, respectively, and z values as the contour levels.\nOne thing to keep in mind is that if you have any inf values in your z array, they will cause the contour function to throw an error. In this case, you will need to remove the inf values from your z array before creating the contour plot. 
You can do this by using NumPy's isinf function to find the indices of the inf values in your z array, and then use these indices to select only the non-inf values from your z array.\nHere is an example of how you can use these steps to create a contour plot from your 1D arrays of x, y, and z coordinates:\nimport numpy as np\nimport matplotlib.pyplot as plt\n\n# 1D arrays of x, y, and z coordinates\nx = ...\ny = ...\nz = ...\n\n# Create 2D grids of x and y coordinates\nX, Y = np.meshgrid(x, y)\n\n# Remove inf values from z array\nz_noninf = z[~np.isinf(z)]\n\n# Create figure and axes object\nfig, ax = plt.subplots()\n\n# Create contour plot\nax.contour(X, Y, z_noninf)\n\n# Add x and y labels\nax.set_xlabel('x')\nax.set_ylabel('y')\n\n# Show the plot\nplt.show()\n\nI hope this helps!\n" ]
[ 0 ]
[]
[]
[ "contour", "matplotlib", "python" ]
stackoverflow_0074671161_contour_matplotlib_python.txt
Q: Creating folder if it does not exist, move files there I am a python newbie and have read countless answers here and in other sources, on how to create folders if they do not exist and move files there. However, still I cannot bring it to work. So what I want to do is the following: Keep my downloads folder clean. I want to run the script, it is supposed to move all files to matching extension name folders. If the folder already exists, it does not have to create it. Problems: I want to be able to run the script as often as I want while keeping the newly created folders there. However, then the whole os.listdir part does not work because folders have no file extensions. I tried to solve this by leaving out folders but it does not work as well. I would appreciate any help! from os import scandir import os import shutil basepath = r"C:\Users\me\Downloads\test" for entry in scandir(basepath):     if entry.is_dir():         continue     files = os.listdir(r"C:\Users\me\Downloads\test")     ext = [f.rsplit(".")[1] for f in files]     ext_final = set(ext) try:     [os.makedirs(e) for e in ext_final] except:     print("Folder already exists!") for file in files:     for e in ext_final:         if file.rsplit(".")[1]==e:             shutil.move(file,e) A: os.makedirs has a switch that suppresses the error when the folder already exists. use it like this: os.makedirs(folder_name, exist_ok=True) so just replace that try...except part of code which is this: try: [os.makedirs(e) for e in ext_final] except: print("Folder already exists!") with this: for e in ext_final: os.makedirs(e, exist_ok=True) A: I tried my own approach ... It is kind of ugly but it gets the job done. 
import os import shutil def sort_folder(fpath: str) -> None: dirs = [] files = [] filetypes = [] for item in os.listdir(fpath): if os.path.isfile(f"{fpath}\{item}"): files.append(item) else: dirs.append(item) for file in files: filetype = os.path.splitext(file)[1] filetypes.append(filetype) for filetype in set(filetypes): if not os.path.isdir(f"{fpath}\{filetype}"): os.mkdir(f"{fpath}\{filetype}") for (file, filetype) in zip(files, filetypes): shutil.move(f"{fpath}\{file}", f"{fpath}\{filetype}") if __name__ == '__main__': # running the script sort_folder(fpath) A: There are a few issues with your code that are preventing it from working as expected. Here are a few suggestions to help you fix the problems: You are using the scandir function to iterate over the files and directories in the basepath directory, but you are not using the entries returned by this function. Instead, you are using the os.listdir function to get a list of all the files in the directory. This means that your code is not processing the entries returned by the scandir function, and it is not checking if the entries are files or directories. You are creating a list of file extensions using the files list, but this list contains both files and directories, so the resulting list of extensions will not be accurate. Instead, you should use the entry.name property of the entries returned by the scandir function to get the names of the files and directories, and then you can split the names to get the file extensions. You are using the try and except statements to catch any errors that might occur when creating the directories for the file extensions. However, the try and except statements are not being used correctly. The try statement should be used to run the code that might throw an error, and the except statement should be used to handle the error if it occurs. In your code, the try and except statements are not being used to handle any errors, so they are not doing anything. 
Here is a modified version of your code that fixes these problems and should work as expected: import os import shutil basepath = r"C:\Users\me\Downloads\test" for entry in os.scandir(basepath): if entry.is_file() and "." in entry.name: file_ext = entry.name.rsplit(".", 1)[1] target = os.path.join(basepath, file_ext) os.makedirs(target, exist_ok=True) shutil.move(entry.path, target) This code uses the os.scandir function to iterate over the entries in the basepath directory, and it only processes the entries that are files (not directories) and actually contain an extension. For each file, it gets the file extension from the file name, and it creates a directory named after the file extension inside basepath if it does not already exist (os.makedirs with exist_ok=True). Then, it moves the file into that directory using the shutil.move function.
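The same idea fits naturally into pathlib; a hedged sketch (the function name is made up, extension-less files are skipped, and the demo runs against a throwaway directory rather than a real Downloads folder):

```python
from pathlib import Path
import shutil
import tempfile

def sort_by_extension(base: Path) -> None:
    """Move every file in `base` into a subfolder named after its extension."""
    for p in list(base.iterdir()):            # snapshot: we mutate the dir below
        if p.is_file() and p.suffix:          # skip folders and extension-less files
            target = base / p.suffix.lstrip(".")
            target.mkdir(exist_ok=True)       # safe to re-run: folder may exist
            shutil.move(str(p), str(target / p.name))

# Example usage against a temporary directory:
base = Path(tempfile.mkdtemp())
(base / "a.txt").write_text("hi")
(base / "b.png").write_text("img")
sort_by_extension(base)
print(sorted(p.relative_to(base).as_posix() for p in base.rglob("*") if p.is_file()))
# ['png/b.png', 'txt/a.txt']
```

Re-running `sort_by_extension` on an already-sorted directory is a no-op, which addresses the "run the script as often as I want" requirement.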
Creating folder if it does not exist, move files there
I am python newbie and have read countless answers here and in other sources, on how to create folders if they do not exist and move files there. However, still I cannot bring it to work. So what I want to do is the following: Keep my downloads folder clean. I want to run the script, it is supposed to move all files to matching extension name folders. If the folder already exists, it does not have to create it. Problems: I want to be able to run the script as often as I want while keeping the newly created folders there. However, then the whole os.listdir part does not work because folders have no file extensions. I tried to solve this by leaving out folders but it does not work as well. I would appreciate any help! from os import scandir import os import shutil basepath = r"C:\Users\me\Downloads\test" for entry in scandir(basepath):     if entry.is_dir():         continue     files = os.listdir(r"C:\Users\me\Downloads\test")     ext = [f.rsplit(".")[1] for f in files]     ext_final = set(ext) try:     [os.makedirs(e) for e in ext_final] except:     print("Folder already exists!") for file in files:     for e in ext_final:         if file.rsplit(".")[1]==e:             shutil.move(file,e)
[ "os.makedirs has a switch to create a folder if it does not exist.\nuse it like this:\nos.makedirs(foldern_name, exist_ok=True)\n\nso just replace that try...except part of code which is this:\n\ntry:\n\n [os.makedirs(e) for e in ext_final]\n\nexcept:\n\n print(\"Folder already exists!\")\n\nwith this:\nfor e in ext_final:\n os.makedirs(e, exist_os=True)\n\n", "I tried my own approach ... It is kind of ugly but it gets the job done.\n\nimport os\nimport shutil\n\n\n\ndef sort_folder(fpath: str) -> None:\n\n dirs = []\n files = []\n filetypes = []\n\n for item in os.listdir(fpath):\n if os.path.isfile(f\"{fpath}\\{item}\"):\n files.append(item)\n else:\n dirs.append(item)\n\n for file in files:\n filetype = os.path.splitext(file)[1]\n filetypes.append(filetype)\n\n for filetype in set(filetypes):\n if not os.path.isdir(f\"{fpath}\\{filetype}\"):\n os.mkdir(f\"{fpath}\\{filetype}\")\n\n for (file, filetype) in zip(files, filetypes):\n shutil.move(f\"{fpath}\\{file}\", f\"{fpath}\\{filetype}\")\n\n\n\nif __name__ == '__main__':\n# running the script\n\n sort_folder(fpath)\n\n", "There are a few issues with your code that are preventing it from working as expected. Here are a few suggestions to help you fix the problems:\n\nYou are using the scandir function to iterate over the files and directories in the basepath directory, but you are not using the entries returned by this function. Instead, you are using the os.listdir function to get a list of all the files in the directory. This means that your code is not processing the entries returned by the scandir function, and it is not checking if the entries are files or directories.\n\nYou are creating a list of file extensions using the files list, but this list contains both files and directories, so the resulting list of extensions will not be accurate. 
Instead, you should use the entry.name property of the entries returned by the scandir function to get the names of the files and directories, and then you can split the names to get the file extensions.\n\nYou are using the try and except statements to catch any errors that might occur when creating the directories for the file extensions. However, the try and except statements are not being used correctly. The try statement should be used to run the code that might throw an error, and the except statement should be used to handle the error if it occurs. In your code, the try and except statements are not being used to handle any errors, so they are not doing anything.\nHere is a modified version of your code that fixes these problems and should work as expected:\nimport os\nimport shutil\nbasepath = r\"C:\\Users\\me\\Downloads\\test\"\nfor entry in scandir(basepath):\nif entry.is_file():\nfile_ext = entry.name.rsplit(\".\")[1]\nif not os.path.exists(file_ext):\nos.makedirs(file_ext)\nshutil.move(entry.path, file_ext)\n\n\nThis code uses the scandir function to iterate over the entries in the basepath directory, and it only processes the entries that are files (not directories). For each file, it gets the file extension from the file name, and it creates a directory with the same name as the file extension if it does not already exist. Then, it moves the file to the directory using the shutil.move function.\n" ]
[ 0, 0, 0 ]
[]
[]
[ "operating_system", "python", "shutil" ]
stackoverflow_0074670080_operating_system_python_shutil.txt
Q: Regex match a string of 18 characters (4 digits + 14 letters uppercase) Please help me find a regex matching this combination. Here are a few examples of the strings I want; I hope it helps you: 1st example "HBYVHDV86DBYF44CGB" 2nd example "NGCDV15DVDB81JHDBR" 3rd example "MOX48DVPLYBJHD63JH" As you can see, there is something special: the four digits are divided into two pairs inside the string. 1st example "_ 86 _ 44 _" 2nd example "_ 15 _ 81 _" 3rd example "_ 48 _ 63 _" Here is an example of the problem: pgfbS63RKSFK63TNEABHHHHH bhuhu56 PGSCS63RKSFK63TNEA igi65TGHkj pgfbS63RKSFK63TNEAB PGSCS6R8KSFK63TNEA PGSCS63RKSFKT15NEA I tried this regex: [a-zA-Z]+[0-9]+[a-zA-Z]+[0-9]+[a-zA-Z]+ Here is the result: pgfbS63RKSFK63TNEABHHHHH PGSCS63RKSFK63TNEA pgfbS63RKSFK63TNEAB PGSCS6R8KSFK PGSCS63RKSFKT15NEA What I was expecting: PGSCS63RKSFK63TNEA PGSCS63RKSFKT15NEA A: You can use look-ahead to test one of the conditions, like the total length of the input. The other conditions can be expressed close to what you proposed, but with double digits (\d\d) and end-of-input anchors (^ and $): ^(?=\w{18}$)[A-Za-z]+\d\d[A-Za-z]+\d\d[A-Za-z]+$
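The proposed pattern can be checked quickly with Python's re module; a small sketch over the candidate strings from the question:

```python
import re

# 18 word characters total: letters with exactly two embedded digit pairs.
pattern = re.compile(r"^(?=\w{18}$)[A-Za-z]+\d\d[A-Za-z]+\d\d[A-Za-z]+$")

candidates = [
    "pgfbS63RKSFK63TNEABHHHHH",  # too long
    "bhuhu56",                   # too short
    "PGSCS63RKSFK63TNEA",        # should match
    "igi65TGHkj",                # too short
    "pgfbS63RKSFK63TNEAB",       # 19 characters, lookahead rejects it
    "PGSCS6R8KSFK63TNEA",        # digits not in two consecutive pairs
    "PGSCS63RKSFKT15NEA",        # should match
]

matches = [s for s in candidates if pattern.match(s)]
print(matches)  # ['PGSCS63RKSFK63TNEA', 'PGSCS63RKSFKT15NEA']
```

The lookahead `(?=\w{18}$)` enforces the exact length of 18, while the body enforces the letters-digits-letters-digits-letters shape with paired digits.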
Regex match a string of 18 characters (4 digits + 14 letters uppercase)
please help me find a regex match this combination here it is a few examples of strings i want , I hope it helps you 1st example "HBYVHDV86DBYF44CGB" 2nd example "NGCDV15DVDB81JHDBR" 3rd example "MOX48DVPLYBJHD63JH" As you can see, there is something special , the four numbers are divided into two parts on the string . 1st example "_ 86 _ 44 _" 2nd example "_ 15 _ 81 _" 3rd example "_ 48 _ 63 _" here it is an example of a problem pgfbS63RKSFK63TNEABHHHHH bhuhu56 PGSCS63RKSFK63TNEA igi65TGHkj pgfbS63RKSFK63TNEAB PGSCS6R8KSFK63TNEA PGSCS63RKSFKT15NEA i did try this regex [a-zA-Z]+[0-9]+[a-zA-Z]+[0-9]+[a-zA-Z]+ here it is the result pgfbS63RKSFK63TNEABHHHHH PGSCS63RKSFK63TNEA pgfbS63RKSFK63TNEAB PGSCS6R8KSFK PGSCS63RKSFKT15NEA what i was expecting PGSCS63RKSFK63TNEA PGSCS63RKSFKT15NEA
[ "You can use look-ahead to test one of the conditions, like the total length of the input. The other conditions can be expressed close to what you proposed, but with double digits (\\d\\d) and end-of-input anchors (^ and $)\n^(?=\\w{18}$)[A-Za-z]+\\d\\d[A-Za-z]+\\d\\d[A-Za-z]+$\n\n" ]
[ 0 ]
[]
[]
[ "digits", "letter", "python", "regex", "uppercase" ]
stackoverflow_0074668785_digits_letter_python_regex_uppercase.txt
Q: when I run the following function I received this error IndexError: index 0 is out of bounds for axis 0 with size 0 index problem while running the prediction function def predict_death(anaemia,high_blood_pressure,serum_creatinine,serum_sodium,smoking): anaemia_index = np.where(X.columns==anaemia)[0][0] x = np.zeros(len(X.columns)) x[0] =high_blood_pressure x[1] = serum_creatinine x[2] = serum_sodium x[3] = smoking if anaemia_index >= 0: x[anaemia_index] = 1 return mode.predict([x])[0] death = predict_death(1,1, 1.9, 137,1) A: It looks like you are trying to use the np.where function to find the index of a column in the X DataFrame by its name. However, the np.where function does not work like this - it returns the indices of elements in an array that match a specified condition. In your call, anaemia is the integer 1, which matches no column name, so np.where returns an empty index array and [0][0] raises the IndexError. To get the index of a column in a DataFrame by its name, you can use the DataFrame.columns.get_loc method instead. For example: anaemia_index = X.columns.get_loc(anaemia) This will return the index of the anaemia column in the X DataFrame. You can then use this index to access the corresponding column in the x array. You can also simplify your code by using the DataFrame.loc method to select the columns you want in the x array, rather than creating the x array manually and setting the values of its elements one by one. For example: x = X.loc[:, ["high_blood_pressure", "serum_creatinine", "serum_sodium", "smoking", anaemia]].values This will create the x array by selecting the columns you want from the X DataFrame, and then converting the DataFrame to a NumPy array using the DataFrame.values attribute. You can then pass this array directly to the mode.predict method to get the predicted death outcome. 
Here is an example of how your predict_death function could be implemented using these changes: def predict_death(anaemia, high_blood_pressure, serum_creatinine, serum_sodium, smoking): x = X.loc[:, ["high_blood_pressure", "serum_creatinine", "serum_sodium", "smoking", anaemia]].values return mode.predict([x])[0] death = predict_death("anaemia_yes", 1, 1.9, 137, 1) Note that in this example, the anaemia argument to the predict_death function should be the name of the column in the X DataFrame that corresponds to the presence or absence of anaemia (e.g. "anaemia_yes" or "anaemia_no").
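The empty-match behaviour behind the IndexError can be seen in isolation; a small numpy sketch (the column names are made up for illustration):

```python
import numpy as np

columns = np.array(["anaemia", "high_blood_pressure", "smoking"])

# A matching name: np.where returns a non-empty index array.
hit = np.where(columns == "smoking")[0]
print(hit)  # first (and only) matching position

# Comparing against a value that is not a column name (e.g. the int 1,
# as in the question) yields an empty index array, so [0] on it raises
# the "index 0 is out of bounds for axis 0 with size 0" IndexError.
miss = np.where(columns == 1)[0]
print(miss.size)
try:
    miss[0]
except IndexError:
    print("IndexError, exactly as in the question")
```

Checking `miss.size` (or using a lookup that fails loudly by name, like `get_loc`) before indexing avoids the opaque error.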
when I run the following function I received this error IndexError: index 0 is out of bounds for axis 0 with size 0
index problem while running predication funcation def predict_death(anaemia,high_blood_pressure,serum_creatinine,serum_sodium,smoking): anaemia_index = np.where(X.columns==anaemia)[0][0] x = np.zeros(len(X.columns)) x[0] =high_blood_pressure x[1] = serum_creatinine x[2] = serum_sodium x[3] = smoking if anaemia_index >= 0: x[anaemia_index] = 1 return mode.predict([x])[0] death = predict_death(1,1, 1.9, 137,1)
[ "t looks like you are trying to use the np.where function to find the index of a column in the X DataFrame by its name. However, the np.where function does not work like this - it returns the indices of elements in an array that match a specified condition.\nTo get the index of a column in a DataFrame by its name, you can use the DataFrame.columns.get_loc method instead. For example:\nanaemia_index = X.columns.get_loc(anaemia)\n\nThis will return the index of the anaemia column in the X DataFrame. You can then use this index to access the corresponding column in the x array.\nYou can also simplify your code by using the DataFrame.loc method to select the columns you want in the x array, rather than creating the x array manually and setting the values of its elements one by one. For example:\nx = X.loc[:, [\"high_blood_pressure\", \"serum_creatinine\", \"serum_sodium\", \"smoking\", anaemia]].values\n\nThis will create the x array by selecting the columns you want from the X DataFrame, and then converting the DataFrame to a NumPy array using the DataFrame.values attribute. You can then pass this array directly to the mode.predict method to get the predicted death outcome.\nHere is an example of how your predict_death function could be implemented using these changes:\ndef predict_death(anaemia, high_blood_pressure, serum_creatinine, serum_sodium, smoking):\n x = X.loc[:, [\"high_blood_pressure\", \"serum_creatinine\", \"serum_sodium\", \"smoking\", anaemia]].values\n return mode.predict([x])[0]\n\ndeath = predict_death(\"anaemia_yes\", 1, 1.9, 137, 1)\n\nNote that in this example, the anaemia argument to the predict_death function should be the name of the column in the X DataFrame that corresponds to the presence or absence of anaemia (e.g. \"anaemia_yes\" or \"anaemia_no\").\n" ]
[ 0 ]
[]
[]
[ "deep_learning", "for_loop", "function", "machine_learning", "python" ]
stackoverflow_0074670166_deep_learning_for_loop_function_machine_learning_python.txt
Q: Flask-SQLAlchemy Legacy vs New Query Interface I am trying to update some queries in a web application because as stated in Flask-SQLAlchemy You may see uses of Model.query or session.query to build queries. That query interface is considered legacy in SQLAlchemy. Prefer using the session.execute(select(...)) instead. I have a query: subnets = db.session.query(Subnet).order_by(Subnet.id).all() Which is translated into: SELECT subnet.id AS subnet_id, subnet.name AS subnet_name, subnet.network AS subnet_network, subnet.access AS subnet_access, subnet.date_created AS subnet_date_created FROM subnet ORDER BY subnet.id And I take the subnets variable and loop it over in my view in two different locations. And it works. However, when I try to update my query and use the new SQLAlchemy interface: subnets = db.session.execute(db.select(Subnet).order_by(Subnet.id)).scalars() I can only loop once and there is nothing left to loop over in the second loop? How can I achieve the same result with the new query interface? A: As noted in the comments to the question, your second example is not directly comparable to your first example because your second example is missing the .all() at the end. Side note: session.scalars(select(Subnet).order_by(Subnet.id)).all() is a convenient shorthand for session.execute(select(Subnet).order_by(Subnet.id)).scalars().all() and is the recommended approach for SQLAlchemy 1.4+.
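The "can only loop once" symptom described above is ordinary iterator exhaustion, not something specific to SQLAlchemy: .scalars() without .all() hands back a one-shot stream. A plain-Python sketch of the same effect, with invented row values:

```python
def scalars_like(rows):
    # Stand-in for Result.scalars() without .all(): a generator that
    # yields each row once and is then exhausted.
    for row in rows:
        yield row

result = scalars_like(["subnet-a", "subnet-b"])
first_pass = [r for r in result]   # consumes the stream
second_pass = [r for r in result]  # already spent -> empty list

# Materialising the rows (what .all() does) makes them reusable:
subnets = list(scalars_like(["subnet-a", "subnet-b"]))
loop1 = [s for s in subnets]
loop2 = [s for s in subnets]
```

This is why the legacy `.query(...).all()` call could be looped twice in the view while the bare `.scalars()` result could not.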
Flask-SQLAlchemy Legacy vs New Query Interface
I am trying to update some queries in a web application because as stated in Flask-SQLAlchemy You may see uses of Model.query or session.query to build queries. That query interface is considered legacy in SQLAlchemy. Prefer using the session.execute(select(...)) instead. I have a query: subnets = db.session.query(Subnet).order_by(Subnet.id).all() Which is translated into: SELECT subnet.id AS subnet_id, subnet.name AS subnet_name, subnet.network AS subnet_network, subnet.access AS subnet_access, subnet.date_created AS subnet_date_created FROM subnet ORDER BY subnet.id And I take the subnets variable and loop it over in my view in two different locations. And it works. However, when I try to update my query and use the new SQLAlchemy interface: subnets = db.session.execute(db.select(Subnet).order_by(Subnet.id)).scalars() I can only loop once and there is nothing left to loop over in the second loop? How can I achieve the same result with the new query interface?
[ "As noted in the comments to the question, your second example is not directly comparable to your first example because your second example is missing the .all() at the end.\nSide note:\nsession.scalars(select(Subnet).order_by(Subnet.id)).all()\n\nis a convenient shorthand for\nsession.execute(select(Subnet).order_by(Subnet.id)).scalars().all()\n\nand is the recommended approach for SQLAlchemy 1.4+.\n" ]
[ 1 ]
[]
[]
[ "python", "sqlalchemy" ]
stackoverflow_0074668995_python_sqlalchemy.txt
Q: How to access the first two elements for each product_category I have already sorted the dataframe (dfDogNew) based on product_category and quantity_sold. Now I want to access the first two most sold products in each product category, how can I achieve this? I have written a for loop to access them, but the system tells me there is a key error, can anyone help me with this? Thank you! Also, if it only has one product in the dfDogNew, then it will only return one row, I assume there should be no difference if I set the slicing as [:2], pandas will automatically pass to the next category if there is only 1 product in the previous category? I will attach my for loop code below: for i in product_category: for g in dfDogNew['product_category']: if i == g: finaldf = dfDogNew[dfDogNew[g]].iloc[:2] else: pass sorted dataframe based on product_category and quantity_sold list of each product category for loop attempt to access the first two elements out of each category error I have listed all my attempts in the previous description. A: It looks like you're trying to filter your DataFrame to get the first two rows for each unique value of the product_category column. One way to do this is to use the DataFrame.groupby method, which allows you to group your data by a particular column and then apply a function to each group. In your case, you can group your data by the product_category column and then apply the DataFrame.head method to each group to get the first two rows of each group. Here is an example of how you could do this: # Group the data by the 'product_category' column groups = dfDogNew.groupby("product_category") # Apply the head method to each group to get the first two rows finaldf = groups.head(2) The finaldf DataFrame that is returned will contain the first two rows for each unique value of the product_category column in your original DataFrame. 
Note that this approach does not require a for loop, and is generally more efficient than looping over the rows of your DataFrame. If you only want to include rows from your original DataFrame that have a product_category value that is present in the product_category array, you can filter your DataFrame using the DataFrame.isin method before grouping it. For example: # Filter the DataFrame to include only rows with a 'product_category' # value that is present in the 'product_category' array dfDogNew = dfDogNew[dfDogNew["product_category"].isin(product_category)] # Group the data by the 'product_category' column groups = dfDogNew.groupby("product_category") # Apply the head method to each group to get the first two rows finaldf = groups.head(2) This will include only rows in the dfDogNew DataFrame that have a product_category value that is present in the product_category array, and then group and filter the data as described above. Note that when using the DataFrame.head method, if there are fewer than two rows in a group, the resulting DataFrame will only include the available rows. So, if a product_category value only has one row in your original DataFrame, the resulting finaldf DataFrame will only include that one row for that category. You do not need to worry about handling this case separately.
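The groupby/head(2) idea can also be expressed without pandas, which makes the mechanics easier to see. A plain-Python sketch over rows assumed to be pre-sorted by category and then by quantity sold descending, mirroring the already-sorted dfDogNew (the row data is invented for illustration):

```python
from itertools import groupby
from operator import itemgetter

# (category, product, quantity_sold) rows, assumed already sorted by
# category and then by quantity descending -- like the sorted dfDogNew.
rows = [
    ("chew_toys", "rope bone", 50),
    ("chew_toys", "rubber ring", 30),
    ("chew_toys", "squeaky bar", 10),
    ("leashes", "short leash", 12),  # category with a single product
]

def top_n_per_category(sorted_rows, n=2):
    top = []
    for _, group in groupby(sorted_rows, key=itemgetter(0)):
        for i, row in enumerate(group):
            if i < n:          # keep at most n rows per category
                top.append(row)
    return top

finaldf = top_n_per_category(rows)
```

As with DataFrame.head(2), a category with fewer than two rows simply contributes the rows it has.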
How to access the first two elements for each product_category
I have already sorted the dataframe (dfDogNew) based on product_category and quantity_sold. Now I want to access the first two most sold products in each product category, how can I achieve this? I have written a for loop to access them, but the system tells me there is a key error, can anyone help me with this? Thank you! Also, if it only has one product in the dfDogNew, then it will only return one row, I assume there should be no difference if I set the slicing as [:2], pandas will automatically pass to the next category if there is only 1 product in the previous category? I will attach my for loop code below: for i in product_category: for g in dfDogNew['product_category']: if i == g: finaldf = dfDogNew[dfDogNew[g]].iloc[:2] else: pass sorted dataframe based on product_category and quantity_sold list of each product category for loop attempt to access the first two elements out of each category error I have listed all my attempts in the previous description.
[ "It looks like you're trying to filter your DataFrame to get the first two rows for each unique value of the product_category column. One way to do this is to use the DataFrame.groupby method, which allows you to group your data by a particular column and then apply a function to each group.\nIn your case, you can group your data by the product_category column and then apply the DataFrame.head method to each group to get the first two rows of each group. Here is an example of how you could do this:\n# Group the data by the 'product_category' column\ngroups = dfDogNew.groupby(\"product_category\")\n\n# Apply the head method to each group to get the first two rows\nfinaldf = groups.head(2)\n\nThe finaldf DataFrame that is returned will contain the first two rows for each unique value of the product_category column in your original DataFrame. Note that this approach does not require a for loop, and is generally more efficient than looping over the rows of your DataFrame.\nIf you only want to include rows from your original DataFrame that have a product_category value that is present in the product_category array, you can filter your DataFrame using the DataFrame.isin method before grouping it. For example:\n# Filter the DataFrame to include only rows with a 'product_category'\n# value that is present in the 'product_category' array\ndfDogNew = dfDogNew[dfDogNew[\"product_category\"].isin(product_category)]\n\n# Group the data by the 'product_category' column\ngroups = dfDogNew.groupby(\"product_category\")\n\n# Apply the head method to each group to get the first two rows\nfinaldf = groups.head(2)\n\nThis will include only rows in the dfDogNew DataFrame that have a product_category value that is present in the product_category array, and then group and filter the data as described above.\nNote that when using the DataFrame.head method, if there are fewer than two rows in a group, the resulting DataFrame will only include the available rows. 
So, if a product_category value only has one row in your original DataFrame, the resulting finaldf DataFrame will only include that one row for that category. You do not need to worry about handling this case separately.\n" ]
[ 1 ]
[]
[]
[ "numpy_slicing", "pandas", "python" ]
stackoverflow_0074671202_numpy_slicing_pandas_python.txt
Q: Displaying text onto pygame window from file using readline() I'm currently trying to read text from a .txt file then display it onto a pygame window. The problem I'm facing is that it just displays nothing at all when I try to read through the whole txt file using a readline() loop. When I run the code below, it prints all the lines to the terminal but doesn't display it onto the pygame window. import pygame from pygame.locals import * pygame.init() screen=pygame.display.set_mode([800,600]) red = (255,0,0) yellow = (255, 191, 0) file = open('file.txt') count = 0 while True: count += 1 # Get next line from file line = file.readline() # if line is empty # end of file is reached if not line: break print("Line{}: {}".format(count, line.strip())) file.close() centerx, centery = screen.get_rect().centerx, screen.get_rect().centery deltaY = centery + 50 # adjust so it goes below screen start running =True while running: for event in pygame.event.get(): if event.type == QUIT: running = False screen.fill(0) deltaY-= 1 i=0 msg_list=[] pos_list=[] pygame.time.delay(10) font = pygame.font.SysFont('impact', 30) for line in line.split('\n'): msg=font.render(line, True, yellow) msg_list.append(msg) pos= msg.get_rect(center=(centerx, centery+deltaY+30*i)) pos_list.append(pos) i=i+1 for j in range(i): screen.blit(msg_list[j], pos_list[j]) pygame.display.update() pygame.quit() What I've tried to do is use a different way to read all the text from the file but this just ends up printing the very last line in the .txt file as opposed to displaying the whole thing. I believe using the readline() method would be much more efficient as that reads the file line by line and then displays it onto the pygame window. 
It also might be helpful to know I am trying to display the text line by line on the window such as: hello bonjour hola welcome file = open('file.txt') for line in file: movie_credits = line file.close() centerx, centery = screen.get_rect().centerx, screen.get_rect().centery deltaY = centery + 50 # adjust so it goes below screen start running =True while running: for event in pygame.event.get(): if event.type == QUIT: running = False screen.fill(0) deltaY-= 1 i=0 msg_list=[] pos_list=[] pygame.time.delay(10) font = pygame.font.SysFont('impact', 30) for line in movie_credits.split('\n'): msg=font.render(line, True, yellow) msg_list.append(msg) pos= msg.get_rect(center=(centerx, centery+deltaY+30*i)) pos_list.append(pos) i=i+1 for j in range(i): screen.blit(msg_list[j], pos_list[j]) pygame.display.update() pygame.quit() A: You have to add the lines read from the file to a list (see How to read a file line-by-line into a list?): list_of_lines = [] with open('file.txt') as file: list_of_lines = [line.rstrip() for line in file] After that you can render the text lines from the list: font = pygame.font.SysFont('impact', 30) msg_list = [] for line in list_of_lines: msg = font.render(line, True, yellow) msg_list.append(msg) Complete example: import pygame from pygame.locals import * pygame.init() screen=pygame.display.set_mode([800,600]) clock = pygame.time.Clock() red = (255,0,0) yellow = (255, 191, 0) list_of_lines = [] with open('file.txt') as file: list_of_lines = [line.rstrip() for line in file] file.close() font = pygame.font.SysFont('impact', 30) msg_list = [] for line in list_of_lines: msg = font.render(line, True, yellow) msg_list.append(msg) centerx, centery = screen.get_rect().centerx, screen.get_rect().centery deltaY = centery + 50 # adjust so it goes below screen start running =True while running: clock.tick(100) for event in pygame.event.get(): if event.type == QUIT: running = False screen.fill(0) deltaY -= 1 for i, msg in enumerate(msg_list): pos = 
msg.get_rect(center=(centerx, centery + deltaY + 30 * i)) screen.blit(msg, pos) pygame.display.update() pygame.quit()
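The core fix in the answer above — read the file once into a list of stripped lines before the game loop starts — can be tested without pygame. A minimal sketch using a throwaway temporary file (the file contents are the sample credits from the question):

```python
import os
import tempfile

def read_lines(path):
    # Read the whole file once, dropping trailing newlines,
    # the same pattern as the answer's list comprehension.
    with open(path, encoding="utf-8") as f:
        return [line.rstrip("\n") for line in f]

# Demo with a throwaway credits file
tmp = tempfile.NamedTemporaryFile("w", delete=False, suffix=".txt",
                                  encoding="utf-8")
tmp.write("hello\nbonjour\nhola\nwelcome\n")
tmp.close()
lines = read_lines(tmp.name)
os.unlink(tmp.name)
```

Once `lines` holds every line, the render loop can iterate over the list on every frame instead of over a single leftover `line` variable.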
Displaying text onto pygame window from file using readline()
I'm currently trying to read text from a .txt file then display it onto a pygame window. The problem I'm facing is that it just displays nothing at all when I try to read through the whole txt file using a readline() loop. When I run the code below, it prints all the lines to the terminal but doesn't display it onto the pygame window. import pygame from pygame.locals import * pygame.init() screen=pygame.display.set_mode([800,600]) red = (255,0,0) yellow = (255, 191, 0) file = open('file.txt') count = 0 while True: count += 1 # Get next line from file line = file.readline() # if line is empty # end of file is reached if not line: break print("Line{}: {}".format(count, line.strip())) file.close() centerx, centery = screen.get_rect().centerx, screen.get_rect().centery deltaY = centery + 50 # adjust so it goes below screen start running =True while running: for event in pygame.event.get(): if event.type == QUIT: running = False screen.fill(0) deltaY-= 1 i=0 msg_list=[] pos_list=[] pygame.time.delay(10) font = pygame.font.SysFont('impact', 30) for line in line.split('\n'): msg=font.render(line, True, yellow) msg_list.append(msg) pos= msg.get_rect(center=(centerx, centery+deltaY+30*i)) pos_list.append(pos) i=i+1 for j in range(i): screen.blit(msg_list[j], pos_list[j]) pygame.display.update() pygame.quit() What I've tried to do is use a different way to read all the text from the file but this just ends up printing the very last line in the .txt file as opposed to displaying the whole thing. I believe using the readline() method would be much more efficient as that reads the file line by line and then displays it onto the pygame window. 
It also might be helpful to know I am trying to display the text line by line on the window such as: hello bonjour hola welcome file = open('file.txt') for line in file: movie_credits = line file.close() centerx, centery = screen.get_rect().centerx, screen.get_rect().centery deltaY = centery + 50 # adjust so it goes below screen start running =True while running: for event in pygame.event.get(): if event.type == QUIT: running = False screen.fill(0) deltaY-= 1 i=0 msg_list=[] pos_list=[] pygame.time.delay(10) font = pygame.font.SysFont('impact', 30) for line in movie_credits.split('\n'): msg=font.render(line, True, yellow) msg_list.append(msg) pos= msg.get_rect(center=(centerx, centery+deltaY+30*i)) pos_list.append(pos) i=i+1 for j in range(i): screen.blit(msg_list[j], pos_list[j]) pygame.display.update() pygame.quit()
[ "You have to add the lines read from the file to a list (see How to read a file line-by-line into a list?):\nlist_of_lines = []\nwith open('file.txt') as file:\n list_of_lines = [line.rstrip() for line in file]\n\nAfter that you can render the text lines from the list:\nfont = pygame.font.SysFont('impact', 30)\nmsg_list = []\nfor line in list_of_lines:\n msg = font.render(line, True, yellow)\n msg_list.append(msg)\n\n\nComplete example:\n\nimport pygame\nfrom pygame.locals import *\n\npygame.init()\nscreen=pygame.display.set_mode([800,600])\nclock = pygame.time.Clock()\n\nred = (255,0,0)\nyellow = (255, 191, 0)\n\nlist_of_lines = []\nwith open('file.txt') as file:\n list_of_lines = [line.rstrip() for line in file]\nfile.close()\n\nfont = pygame.font.SysFont('impact', 30)\nmsg_list = []\nfor line in list_of_lines:\n msg = font.render(line, True, yellow)\n msg_list.append(msg)\n\ncenterx, centery = screen.get_rect().centerx, screen.get_rect().centery\ndeltaY = centery + 50 # adjust so it goes below screen start\n\nrunning =True\nwhile running:\n clock.tick(100)\n for event in pygame.event.get():\n if event.type == QUIT:\n running = False\n\n screen.fill(0)\n \n deltaY -= 1\n for i, msg in enumerate(msg_list):\n pos = msg.get_rect(center=(centerx, centery + deltaY + 30 * i))\n screen.blit(msg, pos)\n \n pygame.display.update()\n\npygame.quit()\n\n" ]
[ 0 ]
[]
[]
[ "pygame", "python" ]
stackoverflow_0074671176_pygame_python.txt
Q: Find the Gaussian probability density of x for a normal distribution So, I'm supposed to write a function normpdf(x , avg, std) that returns the Gaussian probability density function of x for a normal distribution with mean avg and standard deviation std, with avg = 0 and std = 1. This is what I got so far, but when I click run, I get this message: Input In [95] return pdf ^ SyntaxError: invalid syntax I'm confused on what I did wrong on that part. import numpy as np import math def normpdf(x, avg=0, std=1) : # normal distribution eq exponent = math.exp(-0.5 * ((x - avg) / std) ** 2) pdf = (1 / (std * math.sqrt(2 * math.pi)) * exponent) return pdf # set x values x = np.linspace(1, 50) normpdf(x, avg, std) I added the parenthesis here and math.sqrt: pdf = (1 / (std * math.sqrt(2 * math.pi)) * exponent) ... but then I got this message: TypeError Traceback (most recent call last) Input In [114], in <cell line: 11>() 9 pdf = (1/(std*math.sqrt(2*math.pi))*exponent) 10 return pdf ---> 11 normpdf(x, avg, std) Input In [114], in normpdf(x, avg, std) 6 def normpdf(x, avg=0, std=1) : 7 #normal distribution eq ----> 8 exponent = math.exp(-0.5*((x-avg)/std)**2) 9 pdf = (1/(std*math.sqrt(2*math.pi))*exponent) 10 return pdf TypeError: only size-1 arrays can be converted to Python scalars A: You don't need the math module. 
Use just numpy functions: import numpy as np def normpdf(x, avg=0, std=1): exp = np.exp(-0.5 * ((x - avg) / std) ** 2) pdf = (1 / (std * np.sqrt(2 * np.pi)) * exp) return pdf x = np.linspace(1, 50) print(normpdf(x)) The code above will result in: [2.41970725e-001 5.39909665e-002 4.43184841e-003 1.33830226e-004 1.48671951e-006 6.07588285e-009 9.13472041e-012 5.05227108e-015 1.02797736e-018 7.69459863e-023 2.11881925e-027 2.14638374e-032 7.99882776e-038 1.09660656e-043 5.53070955e-050 1.02616307e-056 7.00418213e-064 1.75874954e-071 1.62463604e-079 5.52094836e-088 6.90202942e-097 3.17428155e-106 5.37056037e-116 3.34271444e-126 7.65392974e-137 6.44725997e-148 1.99788926e-159 2.27757748e-171 9.55169454e-184 1.47364613e-196 8.36395161e-210 1.74636626e-223 1.34141967e-237 3.79052640e-252 3.94039628e-267 1.50690472e-282 2.12000655e-298 1.09722105e-314 0.00000000e+000 0.00000000e+000 0.00000000e+000 0.00000000e+000 0.00000000e+000 0.00000000e+000 0.00000000e+000 0.00000000e+000 0.00000000e+000 0.00000000e+000 0.00000000e+000 0.00000000e+000] A: Your implementation of the normal probability density function is almost correct. The issue you're encountering is that you're trying to evaluate the function for an array of values, but the math operations you're using only work with scalar values (single numbers). To fix this, you can either evaluate the function for each value in the array separately, or you can use numpy functions that can operate on arrays. 
Here's an example of how you could do this using numpy: import numpy as np def normpdf(x, avg=0, std=1): # Compute the exponent exponent = np.exp(-0.5 * ((x - avg) / std) ** 2) # Compute the normalization constant const = 1 / (std * np.sqrt(2 * np.pi)) # Compute the normal probability density function pdf = const * exponent return pdf # Set x values x = np.linspace(1, 50) # Compute the probability density function for the given values of x pdf = normpdf(x, avg, std) # Print the result print(pdf) Note that in this example I used numpy functions for the exponent and square root, which allows the function to operate on arrays of values instead of just scalars. I also used a numpy array for the x values, which means that the function will return an array of probabilities, one for each value of x. A: You are trying to calculate the normal probability density function of an array of values, but the math.exp() function expects a scalar value as input. To solve this issue, you can use the np.exp() function from the NumPy library, which can handle arrays as input and will apply the exponent calculation element-wise. import numpy as np def normpdf(x, avg=0, std=1): # normal distribution eq exponent = np.exp(-0.5 * ((x - avg) / std) ** 2) pdf = (1 / (std * np.sqrt(2 * np.pi)) * exponent) return pdf # set x values x = np.linspace(1, 50) normpdf(x) I replaced the math.exp() and math.sqrt() functions with their NumPy equivalents, np.exp() and np.sqrt(). These functions can handle arrays as input and will return an array of results.
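Summing up the answers: math.exp only accepts scalars, so either switch to the numpy functions or apply the scalar function one element at a time. A standard-library-only sketch of the per-element route:

```python
import math

def normpdf(x, avg=0.0, std=1.0):
    # Gaussian pdf for a single scalar x; math.exp/math.sqrt
    # only accept scalars, hence the per-element loop below.
    exponent = math.exp(-0.5 * ((x - avg) / std) ** 2)
    return exponent / (std * math.sqrt(2 * math.pi))

xs = [0.0, 1.0, 2.0]
pdf = [normpdf(x) for x in xs]  # apply the scalar function element-wise
```

The peak value normpdf(0) is 1/sqrt(2*pi), roughly 0.3989, matching the first entry of the numpy output shown above.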
Find the Gaussian probability density of x for a normal distribution
So, I'm supposed to write a function normpdf(x , avg, std) that returns the Gaussian probability density function of x for a normal distribution with mean avg and standard deviation std, with avg = 0 and std = 1. This is what I got so far, but when I click run, I get this message: Input In [95] return pdf ^ SyntaxError: invalid syntax I'm confused on what I did wrong on that part. import numpy as np import math def normpdf(x, avg=0, std=1) : # normal distribution eq exponent = math.exp(-0.5 * ((x - avg) / std) ** 2) pdf = (1 / (std * math.sqrt(2 * math.pi)) * exponent) return pdf # set x values x = np.linspace(1, 50) normpdf(x, avg, std) I added the parenthesis here and math.sqrt: pdf = (1 / (std * math.sqrt(2 * math.pi)) * exponent) ... but then I got this message: TypeError Traceback (most recent call last) Input In [114], in <cell line: 11>() 9 pdf = (1/(std*math.sqrt(2*math.pi))*exponent) 10 return pdf ---> 11 normpdf(x, avg, std) Input In [114], in normpdf(x, avg, std) 6 def normpdf(x, avg=0, std=1) : 7 #normal distribution eq ----> 8 exponent = math.exp(-0.5*((x-avg)/std)**2) 9 pdf = (1/(std*math.sqrt(2*math.pi))*exponent) 10 return pdf TypeError: only size-1 arrays can be converted to Python scalars
[ "You don't need the math module. Use just numpy functions:\nimport numpy as np\n\n\ndef normpdf(x, avg=0, std=1):\n exp = np.exp(-0.5 * ((x - avg) / std) ** 2)\n pdf = (1 / (std * np.sqrt(2 * np.pi)) * exp)\n return pdf\n\n\nx = np.linspace(1, 50)\n\nprint(normpdf(x))\n\nThe code above will result in:\n[2.41970725e-001 5.39909665e-002 4.43184841e-003 1.33830226e-004\n 1.48671951e-006 6.07588285e-009 9.13472041e-012 5.05227108e-015\n 1.02797736e-018 7.69459863e-023 2.11881925e-027 2.14638374e-032\n 7.99882776e-038 1.09660656e-043 5.53070955e-050 1.02616307e-056\n 7.00418213e-064 1.75874954e-071 1.62463604e-079 5.52094836e-088\n 6.90202942e-097 3.17428155e-106 5.37056037e-116 3.34271444e-126\n 7.65392974e-137 6.44725997e-148 1.99788926e-159 2.27757748e-171\n 9.55169454e-184 1.47364613e-196 8.36395161e-210 1.74636626e-223\n 1.34141967e-237 3.79052640e-252 3.94039628e-267 1.50690472e-282\n 2.12000655e-298 1.09722105e-314 0.00000000e+000 0.00000000e+000\n 0.00000000e+000 0.00000000e+000 0.00000000e+000 0.00000000e+000\n 0.00000000e+000 0.00000000e+000 0.00000000e+000 0.00000000e+000\n 0.00000000e+000 0.00000000e+000]\n\n", "Your implementation of the normal probability density function is almost correct. The issue you're encountering is that you're trying to evaluate the function for an array of values, but the math operations you're using only work with scalar values (single numbers).\nTo fix this, you can either evaluate the function for each value in the array separately, or you can use numpy functions that can operate on arrays. 
Here's an example of how you could do this using numpy:\nimport numpy as np\n\ndef normpdf(x, avg=0, std=1):\n # Compute the exponent\n exponent = np.exp(-0.5 * ((x - avg) / std) ** 2)\n\n # Compute the normalization constant\n const = 1 / (std * np.sqrt(2 * np.pi))\n\n # Compute the normal probability density function\n pdf = const * exponent\n\n return pdf\n\n# Set x values\nx = np.linspace(1, 50)\n\n# Compute the probability density function for the given values of x\npdf = normpdf(x, avg, std)\n\n# Print the result\nprint(pdf)\n\nNote that in this example I used numpy functions for the exponent and square root, which allows the function to operate on arrays of values instead of just scalars. I also used a numpy array for the x values, which means that the function will return an array of probabilities, one for each value of x.\n", "You are trying to calculate the normal probability density function of an array of values, but the math.exp() function expects a scalar value as input. To solve this issue, you can use the np.exp() function from the NumPy library, which can handle arrays as input and will apply the exponent calculation element-wise.\nimport numpy as np\n\ndef normpdf(x, avg=0, std=1):\n # normal distribution eq\n exponent = np.exp(-0.5 * ((x - avg) / std) ** 2)\n pdf = (1 / (std * np.sqrt(2 * np.pi)) * exponent)\n return pdf\n\n# set x values\nx = np.linspace(1, 50)\n\nnormpdf(x)\n\nI replaced the math.exp() and math.sqrt() functions with their NumPy equivalents, np.exp() and np.sqrt(). These functions can handle arrays as input and will return an array of results.\n" ]
[ 2, 1, 0 ]
[]
[]
[ "gaussian", "normal_distribution", "python" ]
stackoverflow_0074670914_gaussian_normal_distribution_python.txt
Q: Rewrite a pay computation with time-and-a-half for overtime and create a function called computepay which takes two parameters (hours and rate) Here is my code (btw I am new to Stackoverflow and coding in general so forgive me if I made some mistakes in formatting this question): hours = int(input('Enter hours:')) rate = int(input('Enter rate:')) pay =('Your pay this month' + str((hours + hours/2) * rate)) def computepay(hours,rate): pay =('Your pay this month' + str((hours + hours/2) * rate)) return pay print(pay)
def computepay(hours, rate): if hours > 40: reg = rate * hours otp = (hours - 40.0) * (rate * 0.5) pay = reg + otp else: pay = hours * rate return pay Then The input Output Part sh = input("enter Hours:") sr = input(" Enter rate:") fh = float(sh) fr = float(sr) xp = computepay(fh,fr) print("Pay:",xp) A: def computepay(hours, rate) : return hours * rate def invalid_input() : print("Input Numeric Value") while True : try : regular_rate = float(input("Hourly rate in dollars: ")) break except : invalid_input() continue while True : try : regular_hours = float(input("Regular Hours Worked: ")) break except : invalid_input() continue while True : try : overtime_hours = float(input("Overtime hours worked :")) break except : invalid_input() continue overtime_rate = regular_rate * 1.5 regular_pay = computepay(regular_hours, regular_rate) overtime_pay = computepay(overtime_hours, overtime_rate) total_pay = regular_pay + overtime_pay print("PAY : ", total_pay) A: hour = float(input('Enter hours: ')) rate = float(input('Enter rates: ')) def compute_pay(hours, rates): if hours <= 40: print(hours * rates) elif hours > 40: print(((hours * rate) - 40 * rate) * 1.5 + 40 * rate) compute_pay(hour, rate ) A: hour = float(input('Enter hours: ')) rate = float(input('Enter rates: ')) def compute_pay(hours, rates): if hours <= 40: pay = hours * rates return pay elif hours > 40: pay = ((hours * rate) - 40 * rate) * 1.5 + 40 * rate return pay pay = compute_pay(hour, rate) print(pay)
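The answers above all encode the same time-and-a-half rule; a condensed, self-contained version with the 40-hour threshold made explicit:

```python
def computepay(hours, rate):
    # Pay is straight time up to 40 hours, then time-and-a-half
    # for every hour beyond that.
    if hours <= 40:
        return hours * rate
    return 40 * rate + (hours - 40) * rate * 1.5

regular = computepay(40, 10.0)   # no overtime
overtime = computepay(45, 10.0)  # 5 overtime hours at 15.0/h
```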
Rewrite a pay computation with time-and-a-half for overtime and create a function called computepay which takes two parameters (hours and rate)
Here is my code (btw I am new to Stackoverflow and coding in general so forgive me if I made some mistakes in formatting this question): hours = int(input('Enter hours:')) rate = int(input('Enter rate:')) pay =('Your pay this month' + str((hours + hours/2) * rate)) def computepay(hours,rate): pay =('Your pay this month' + str((hours + hours/2) * rate)) return pay print(pay)
[ "To take into account overtime versus regular rates of pay it seems like you will need to know how much time the employee worked in each category. With that in mind, here is a simplified example to illustrate some of the key concepts.\nExample:\ndef computepay(hours, rate):\n return hours * rate\n\nregular_rate = float(input(\"Hourly rate in dollars: \"))\nregular_hours = float(input(\"Regular hours worked: \"))\novertime_hours = float(input(\"Overtime hours worked: \"))\n\nregular_pay = computepay(regular_hours, regular_rate)\novertime_pay = computepay(overtime_hours, regular_rate * 1.5)\ntotal_pay = regular_pay + overtime_pay\n\nprint(f\"This pay period you earned: ${total_pay:.2f}\")\n\nOutput:\nHourly rate in dollars: 15.00\nRegular hours worked: 40\nOvertime hours worked: 10\nThis pay period you earned: $825.00\n\n", "My teacher told me to solve it in one function, computerpay . so try this .\ndef computepay(hours, rate):\nif hours > 40:\n reg = rate * hours\n otp = (hours - 40.0) * (rate * 0.5)\n pay = reg + otp\nelse:\n pay = hours * rate \nreturn pay\n\nThen The input Output Part\nsh = input(\"enter Hours:\")\nsr = input(\" Enter rate:\")\nfh = float(sh)\nfr = float(sr)\nxp = computepay(fh,fr)\nprint(\"Pay:\",xp)\n\n", "def computepay(hours, rate) :\n return hours * rate\ndef invalid_input() :\n print(\"Input Numeric Value\")\nwhile True :\n try :\n regular_rate = float(input(\"Hourly rate in dollars: \"))\n break\n except :\n invalid_input()\n continue\nwhile True :\n try :\n regular_hours = float(input(\"Regular Hours Worked: \"))\n break\n except :\n invalid_input()\n continue\nwhile True :\n try :\n overtime_hours = float(input(\"Overtime hours worked :\"))\n break\n except :\n invalid_input()\n continue\novertime_rate = regular_rate * 1.5\n\nregular_pay = computepay(regular_hours, regular_rate)\novertime_pay = computepay(overtime_hours, overtime_rate)\ntotal_pay = regular_pay + overtime_pay\n\nprint(\"PAY : \", total_pay)\n\n", "hour = 
float(input('Enter hours: '))\nrate = float(input('Enter rates: '))\ndef compute_pay(hours, rates):\nif hours <= 40:\n print(hours * rates)\nelif hours > 40:\n print(((hours * rate) - 40 * rate) * 1.5 + 40 * rate)\n\ncompute_pay(hour, rate )\n", "hour = float(input('Enter hours: '))\nrate = float(input('Enter rates: '))\ndef compute_pay(hours, rates):\nif hours <= 40:\n pay = hours * rates\n return pay\nelif hours > 40:\n pay = ((hours * rate) - 40 * rate) * 1.5 + 40 * rate\n return pay\n\npay = compute_pay(hour, rate)\nprint(pay)\n" ]
[ 0, 0, 0, 0, 0 ]
[]
[]
[ "function", "parameters", "python" ]
stackoverflow_0067036969_function_parameters_python.txt
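The answers in this entry compute regular and overtime pay; a single-function sketch of the same idea, assuming the usual time-and-a-half rule for hours beyond 40 (the course's exact rule may differ):

```python
def computepay(hours, rate):
    """Gross pay with time-and-a-half for hours beyond 40."""
    if hours <= 40:
        return hours * rate
    overtime = hours - 40
    return 40 * rate + overtime * rate * 1.5
```

With rate 15.00 and 50 total hours (40 regular + 10 overtime) this gives 825.00, matching the first answer's example.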
Q: How to create a FILE HANDLER project in Python that can do the below given tasks: Read content from a txt file character by character. Get number of characters, words, spaces and lines in a file. Find no. of lines in the file. Find the line no. which contains a specific word. PS: Please provide a basic code so that a beginner can understand A: SpecificWord = input() # get the specific word with open('file.txt', 'r', encoding='utf-8') as file: # open the file for index, line in enumerate(file): # index is the current line index, line is the current line print(len(line)) # prints the length of the line - including spaces etc. print(f'Current Line: {index+1}') # prints the current line number if SpecificWord in line: print(f'Specific word is in line {index+1}')
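The answer above handles the per-line search; a fuller sketch covering all four tasks at once (character-by-character reading plus totals of characters, words, spaces and lines) — the function name here is illustrative, not from the question:

```python
def analyze_lines(lines, word):
    """Return (chars, words, spaces, line count, 1-based line numbers containing `word`)."""
    chars = words = spaces = nlines = 0
    hits = []
    for lineno, line in enumerate(lines, start=1):
        nlines += 1
        words += len(line.split())
        spaces += line.count(' ')
        for ch in line:  # read character by character
            chars += 1
        if word in line:
            hits.append(lineno)
    return chars, words, spaces, nlines, hits

# A file object iterates line by line, so this works directly on open files:
# with open('file.txt', encoding='utf-8') as fh:
#     print(analyze_lines(fh, 'hello'))
```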
How to create a FILE HANDLER project in Python that can do the below given tasks:
Read content from a txt file character by character. Get number of characters, words, spaces and lines in a file. Find no. of lines in the file. Find the line no. which contains a specific word. PS: Please provide a basic code so that a beginner can understand
[ "SpecificWord = input() # get the specific word\nwith open('file.txt', 'r', encoding='utf-8') as file: # open the file\n    for index, line in enumerate(file): # index is the current line index, line is the current line\n        print(len(line)) # prints the length of the line - including spaces etc.\n        print(f'Current Line: {index+1}') # prints the current line number\n        if SpecificWord in line:\n            print(f'Specific word is in line {index+1}')\n\n" ]
[ 0 ]
[]
[]
[ "filehandler", "python" ]
stackoverflow_0074669712_filehandler_python.txt
Q: Python Error Help : IndexError: list index out of range def read_file(): '''reads input file returns a list of information''' #your code here A: The error IndexError: list index out of range is raised when you try to access an index in a list that doesn't exist. For example, if you try to access the 10th element of a list that only has 3 elements, you will get this error. In your code, you are trying to access the elements in temp using the indices [0], [1], [2], and [3], but it looks like temp might not have enough elements for that. To avoid this error, you should first check that temp has enough elements before trying to access them. You can do this using an if statement and the len() function, like this: if len(temp) >= 4: students_dict[temp[0]] = [temp[1], temp[2], temp[3]] Alternatively, you can pad the slice to safely access elements in temp (note that lists, unlike dictionaries, have no get() method), like this: students_dict[temp[0]] = (temp[1:4] + [None] * 3)[:3] This will set the missing elements to None rather than raising an error. I hope this helps! Let me know if you have any other questions.
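A minimal sketch of the guarded pattern from the answer — the record layout (comma-separated id followed by three fields) is an assumption, since the body of read_file isn't shown in the question:

```python
def parse_students(lines):
    """Build {id: [f1, f2, f3]}, padding short rows with None instead of raising IndexError."""
    students = {}
    for line in lines:
        temp = line.strip().split(',')
        if not temp or not temp[0]:
            continue  # skip blank lines
        # take up to three fields after the id, fill the rest with None
        students[temp[0]] = (temp[1:4] + [None] * 3)[:3]
    return students
```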
Python Error Help : IndexError: list index out of range
def read_file(): '''reads input file returns a list of information''' #your code here
[ "The error IndexError: list index out of range is raised when you try to access an index in a list that doesn't exist. For example, if you try to access the 10th element of a list that only has 3 elements, you will get this error.\nIn your code, you are trying to access the elements in temp using the indices [0], [1], [2], and [3], but it looks like temp might not have enough elements for that.\nTo avoid this error, you should first check that temp has enough elements before trying to access them. You can do this using an if statement and the len() function, like this:\nif len(temp) >= 4:\n    students_dict[temp[0]] = [temp[1], temp[2], temp[3]]\n\nAlternatively, you can pad the slice to safely access elements in temp (note that lists, unlike dictionaries, have no get() method), like this:\nstudents_dict[temp[0]] = (temp[1:4] + [None] * 3)[:3]\n\nThis will set the missing elements to None rather than raising an error.\nI hope this helps! Let me know if you have any other questions.\n" ]
[ 1 ]
[]
[]
[ "index_error", "python" ]
stackoverflow_0074671322_index_error_python.txt
Q: Python : How to contrast contrast data I don't know the term for this, but the closest thing to it is the process of increasing the contrast of an image. Basically I have a list of values going from 0 to 100. I would like values over 50 to come closer to 100 and values under 50 to come closer to 0. For example: [0, 23, 50, 58, 100] would become something like this (roughly): [0, 10, 50, 69, 100]. Is there a mathematical formula for this? A: The following code might give you some ideas. If it doesn't meet your needs then you need to clearly explain just what those needs are. def contrast(nums): contrasted = [] for num in nums: if num < 50: contrasted.append(num//2) elif num > 50: contrasted.append((num + 100)//2) else: contrasted.append(num) return contrasted data = [0, 23, 50,58,100] print(contrast(data)) Output: [0, 11, 50, 79, 100]
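The integer-halving approach above is one option; the effect described (pushing values away from the midpoint) also has a simple closed form, a contrast stretch new = mid + k·(x − mid) clamped to the [0, 100] range — the gain k is a free parameter, 1.5 here purely for illustration:

```python
def stretch(nums, k=1.5, mid=50, lo=0, hi=100):
    """Push each value away from `mid` by gain k, clamped to [lo, hi]."""
    return [max(lo, min(hi, mid + k * (x - mid))) for x in nums]
```

Larger k gives stronger contrast; k = 1 leaves the data unchanged, and values at the midpoint stay fixed for any k.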
Python : How to contrast contrast data
I don't know the term for this, but the closest thing to it is the process of increasing the contrast of an image. Basically I have a list of values going from 0 to 100. I would like values over 50 to come closer to 100 and values under 50 to come closer to 0. For example: [0, 23, 50, 58, 100] would become something like this (roughly): [0, 10, 50, 69, 100]. Is there a mathematical formula for this?
[ "The following code might give you some ideas. If it doesn't meet your needs then you need to clearly explain just what those needs are.\ndef contrast(nums):\n contrasted = []\n for num in nums:\n if num < 50:\n contrasted.append(num//2)\n elif num > 50:\n contrasted.append((num + 100)//2)\n else:\n contrasted.append(num)\n return contrasted\n\ndata = [0, 23, 50,58,100]\nprint(contrast(data))\n\nOutput: [0, 11, 50, 79, 100]\n" ]
[ 1 ]
[]
[]
[ "math", "python" ]
stackoverflow_0074666019_math_python.txt
Q: How to compare two columns in DataFrame and change value of third column based on that comparison? I have following table in Pandas: index | project | category | period | update | amount 0 | 100130 | labour | 202201 | 202203 | 1000 1 | 100130 | labour | 202202 | 202203 | 1000 2 | 100130 | labour | 202203 | 202203 | 1000 3 | 100130 | labour | 202204 | 202203 | 1000 4 | 100130 | labour | 202205 | 202203 | 1000 And my final goal is to get table grouped by project and category with summary of amount column but only from month of update until now. So for example above I will get summary from 202203 until 202205 which is 3000 for project 100130 and category labour. As a first step I tried following condition: for index, row in table.iterrows(): if row["period"] < row["update"] row["amount"] = 0 But: this iteration is not working is there some simple and not so time consuming way how to do it? As my table has over 60.000 rows, so iteration not so good idea probably. A: table.loc[table["period"] < table["update"], "amount"] = 0 (note that a plain Python conditional expression such as 0 if ... else None cannot be applied elementwise to a Series; use .loc with a boolean mask instead) A: I did some more research and this code seems to solve my problem: def check_update(row): if row["period"] < row["update"]: return 0 else: return row["amount"] table["amount2"] = table.apply(check_update, axis=1)
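The row-wise apply works; a fully vectorized sketch of the end-to-end goal (zero out pre-update rows, then group and sum), using the question's column names and sample data:

```python
import pandas as pd

table = pd.DataFrame({
    'project': [100130] * 5,
    'category': ['labour'] * 5,
    'period': [202201, 202202, 202203, 202204, 202205],
    'update': [202203] * 5,
    'amount': [1000] * 5,
})

# zero out rows from before the update month in one vectorized step
table.loc[table['period'] < table['update'], 'amount'] = 0

# then aggregate per project and category
summary = table.groupby(['project', 'category'], as_index=False)['amount'].sum()
```

On the sample data this leaves 202203–202205 at 1000 each, so the grouped sum is 3000, as expected in the question. On 60,000 rows this avoids the per-row Python overhead of iterrows/apply.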
How to compare two columns in DataFrame and change value of third column based on that comparison?
I have following table in Pandas: index | project | category | period | update | amount 0 | 100130 | labour | 202201 | 202203 | 1000 1 | 100130 | labour | 202202 | 202203 | 1000 2 | 100130 | labour | 202203 | 202203 | 1000 3 | 100130 | labour | 202204 | 202203 | 1000 4 | 100130 | labour | 202205 | 202203 | 1000 And my final goal is to get table grouped by project and category with summary of amount column but only from month of update until now. So for example above I will get summary from 202203 until 202205 which is 3000 for project 100130 and category labour. As a first step I tried following condition: for index, row in table.iterrows(): if row["period"] < row["update"] row["amount"] = 0 But: this iteration is not working is there some simple and not so time consuming way how to do it? As my table has over 60.000 rows, so iteration not so good idea probably.
[ "table.loc[table[\"period\"] < table[\"update\"], \"amount\"] = 0\n\n", "I did some more research and this code seems to solve my problem:\ndef check_update(row):\n    if row[\"period\"] < row[\"update\"]:\n        return 0\n    else:\n        return row[\"amount\"]\n\ntable[\"amount2\"] = table.apply(check_update, axis=1)\n\n" ]
[ 0, 0 ]
[]
[]
[ "pandas", "python" ]
stackoverflow_0074670499_pandas_python.txt
Q: How to perform calculations on a subset of a column in a pandas dataframe? With a dataset such as this: famid birth age ht 0 1 1 one 2.8 1 1 1 two 3.4 2 1 2 one 2.9 3 1 2 two 3.8 4 1 3 one 2.2 5 1 3 two 2.9 ...where we've got values for a variable ht for different categories of, for example, age , I would like to adjust a subset of the data in df['ht'] where df['age'] == 'one' only. And I would like to do it without creating a new column. I've tried: df[df['age']=='one']['ht'] = df[df['age']=='one']['ht']*10**6 But to my mild surprise the numbers don't change. Maybe because the A value is trying to be set on a copy of a slice from a DataFrame warning is triggered in the same run. I've also tried with df.mask() and df.where(). But to no avail. I'm clearly failing at something very basic here, but I'd really like to know how to do this properly. There are similarly sounding questions such as Performing calculations on subset of data frame subset in Python, but the suggested solutions here are pointing towards df.groupby(), and I don't think this necessarily is the right approach here. Thank you for any suggestions! Here's a fully reproducible dataset: import pandas as pd df = pd.DataFrame({ 'famid': [1, 1, 1, 2, 2, 2, 3, 3, 3], 'birth': [1, 2, 3, 1, 2, 3, 1, 2, 3], 'ht_one': [2.8, 2.9, 2.2, 2, 1.8, 1.9, 2.2, 2.3, 2.1], 'ht_two': [3.4, 3.8, 2.9, 3.2, 2.8, 2.4, 3.3, 3.4, 2.9] }) df = pd.wide_to_long(df, stubnames='ht', i=['famid', 'birth'], j='age', sep='_', suffix=r'\w+') df.reset_index(inplace = True) A: To adjust a subset of a column in a pandas dataframe, you can use the loc method. The loc method allows you to access a subset of the dataframe by specifying the values in the rows and columns that you want. In your case, you want to adjust the values in the ht column where the age column is equal to one. 
You can do this with the following code: df.loc[df['age'] == 'one', 'ht'] = df[df['age'] == 'one']['ht'] * 10**6 The first argument to the loc method is a condition that specifies the rows that you want to select. In this case, the condition is df['age'] == 'one', which selects all rows where the value in the age column is one. The second argument specifies the column or columns that you want to adjust. In this case, we want to adjust the ht column, so the second argument is 'ht'. Finally, the right-hand side of the assignment operator sets the new values for the selected rows and columns. In this case, the right-hand side is df[df['age'] == 'one']['ht'] * 10**6, which multiplies the values in the ht column for rows where the age column is one by 10^6. After running this code, the values in the ht column where the age column is one will be adjusted as desired. Here's an example of how you could use this code in your case: import pandas as pd df = pd.DataFrame({ 'famid': [1, 1, 1, 2, 2, 2, 3, 3, 3], 'birth': [1, 2, 3, 1, 2, 3, 1, 2, 3], 'ht_one': [2.8, 2.9, 2.2, 2, 1.8, 1.9, 2.2, 2.3, 2.1], 'ht_two': [3.4, 3.8, 2.9, 3.2, 2.8, 2.4, 3.3, 3.4, 2.9] }) df = pd.wide_to_long(df, stubnames='ht', i=['famid', 'birth'], j='age', sep='_', suffix=r'\w+') df.reset_index(inplace = True) # Adjust the values in the ht column where the age column is 'one' df.loc[df['age'] == 'one', 'ht'] = df[df['age'] == 'one']['ht'] * 10**6 # Print the updated dataframe print(df) After running this code, the dataframe will have the adjusted values in the ht column where the age column is one. 
A: Let's try this: df.loc[df['age'] == 'one', 'ht'] *= 10**6 Output: famid birth age ht 0 1 1 one 2800000.0 1 1 1 two 3.4 2 1 2 one 2900000.0 3 1 2 two 3.8 4 1 3 one 2200000.0 5 1 3 two 2.9 6 2 1 one 2000000.0 7 2 1 two 3.2 8 2 2 one 1800000.0 9 2 2 two 2.8 10 2 3 one 1900000.0 11 2 3 two 2.4 12 3 1 one 2200000.0 13 3 1 two 3.3 14 3 2 one 2300000.0 15 3 2 two 3.4 16 3 3 one 2100000.0 17 3 3 two 2.9 A: To perform calculations on a subset of a column in a pandas dataframe, you can use the .loc method to select the subset of the dataframe and then apply the calculation to that subset. For example, to multiply the ht values for rows where age is equal to one by 10^6, you could use the following code: df.loc[df['age']=='one', 'ht'] = df.loc[df['age']=='one', 'ht'] * 10**6 This code selects the subset of rows where age is one using df.loc[df['age']=='one'], and then applies the multiplication operation to the ht column in that subset using df.loc[df['age']=='one', 'ht'] * 10**6. You can also use the .loc method to perform calculations on multiple columns in the subset. For example, if you wanted to multiply the ht values by 10^6 and then add 1000 to the birth values for rows where age is one, you could use the following code: df.loc[df['age']=='one', 'ht'] *= 10**6 df.loc[df['age']=='one', 'birth'] += 1000 A: To adjust values in a subset of a DataFrame, you can use the loc method and select the rows you want to adjust using a boolean mask. The syntax for modifying values using loc is as follows: df.loc[mask, column] = new_value where mask is a boolean array that specifies which rows to adjust, column is the name of the column to adjust, and new_value is the value to assign to the selected rows. In your case, you want to adjust the ht column for rows where the age column is equal to "one", so your mask would be df['age'] == 'one'.
To modify the values in the ht column, you would use the following code: df.loc[df['age'] == 'one', 'ht'] = df[df['age'] == 'one']['ht'] * 10**6 This code will select the rows where age is equal to "one", and then set the values in the ht column to their current value times 10^6. Alternatively, you can use the isin method to create your boolean mask, which can make the code more readable. The isin method returns a boolean array where each element is True if the corresponding value in the column is in the given list of values, and False otherwise. So, to create a mask for rows where the age column is equal to "one", you can use the following code: mask = df['age'].isin(['one']) Then you can use this mask with the loc method to modify the values in the ht column: df.loc[mask, 'ht'] = df[mask]['ht'] * 10**6 I hope this helps! Let me know if you have any other questions. A: Here is a way: df.assign(ht = df['ht'].mask(df['age'].isin(['one']),df['ht'].mul(10**6))) by using isin(), more values from the age column can be added. Output: famid birth age ht 0 1 1 one 2800000.0 1 1 1 two 3.4 2 1 2 one 2900000.0 3 1 2 two 3.8 4 1 3 one 2200000.0 5 1 3 two 2.9 6 2 1 one 2000000.0 7 2 1 two 3.2 8 2 2 one 1800000.0 9 2 2 two 2.8 10 2 3 one 1900000.0 11 2 3 two 2.4 12 3 1 one 2200000.0 13 3 1 two 3.3 14 3 2 one 2300000.0 15 3 2 two 3.4 16 3 3 one 2100000.0 17 3 3 two 2.9
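As to why the question's original attempt changed nothing: df[mask]['ht'] = ... is chained indexing, which assigns into a temporary copy (hence the SettingWithCopy warning), while a single .loc call writes through to the original frame. A small sketch contrasting the two:

```python
import warnings
import pandas as pd

df = pd.DataFrame({'age': ['one', 'two', 'one'], 'ht': [2.8, 3.4, 2.9]})

# Chained indexing: df[mask] builds a temporary copy, so the write is lost.
# (pandas warns about exactly this situation; the warning is silenced here
# only to keep the demo quiet.)
with warnings.catch_warnings():
    warnings.simplefilter('ignore')
    df[df['age'] == 'one']['ht'] = 0
unchanged = df['ht'].tolist()

# A single .loc call addresses the original frame, so the write sticks.
df.loc[df['age'] == 'one', 'ht'] *= 10**6
changed = df['ht'].tolist()
```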
How to perform calculations on a subset of a column in a pandas dataframe?
With a dataset such as this: famid birth age ht 0 1 1 one 2.8 1 1 1 two 3.4 2 1 2 one 2.9 3 1 2 two 3.8 4 1 3 one 2.2 5 1 3 two 2.9 ...where we've got values for a variable ht for different categories of, for example, age , I would like to adjust a subset of the data in df['ht'] where df['age'] == 'one' only. And I would like to do it without creating a new column. I've tried: df[df['age']=='one']['ht'] = df[df['age']=='one']['ht']*10**6 But to my mild surprise the numbers don't change. Maybe because the A value is trying to be set on a copy of a slice from a DataFrame warning is triggered in the same run. I've also tried with df.mask() and df.where(). But to no avail. I'm clearly failing at something very basic here, but I'd really like to know how to do this properly. There are similarly sounding questions such as Performing calculations on subset of data frame subset in Python, but the suggested solutions here are pointing towards df.groupby(), and I don't think this necessarily is the right approach here. Thank you for any suggestions! Here's a fully reproducible dataset: import pandas as pd df = pd.DataFrame({ 'famid': [1, 1, 1, 2, 2, 2, 3, 3, 3], 'birth': [1, 2, 3, 1, 2, 3, 1, 2, 3], 'ht_one': [2.8, 2.9, 2.2, 2, 1.8, 1.9, 2.2, 2.3, 2.1], 'ht_two': [3.4, 3.8, 2.9, 3.2, 2.8, 2.4, 3.3, 3.4, 2.9] }) df = pd.wide_to_long(df, stubnames='ht', i=['famid', 'birth'], j='age', sep='_', suffix=r'\w+') df.reset_index(inplace = True)
[ "To adjust a subset of a column in a pandas dataframe, you can use the loc method. The loc method allows you to access a subset of the dataframe by specifying the values in the rows and columns that you want. In your case, you want to adjust the values in the ht column where the age column is equal to one. You can do this with the following code:\ndf.loc[df['age'] == 'one', 'ht'] = df[df['age'] == 'one']['ht'] * 10**6\n\nThe first argument to the loc method is a condition that specifies the rows that you want to select. In this case, the condition is df['age'] == 'one', which selects all rows where the value in the age column is one. The second argument specifies the column or columns that you want to adjust. In this case, we want to adjust the ht column, so the second argument is 'ht'. Finally, the right-hand side of the assignment operator sets the new values for the selected rows and columns. In this case, the right-hand side is df[df['age'] == 'one']['ht'] * 10**6, which multiplies the values in the ht column for rows where the age column is one by 10^6.\nAfter running this code, the values in the ht column where the age column is one will be adjusted as desired. 
Here's an example of how you could use this code in your case:\nimport pandas as pd\n\ndf = pd.DataFrame({\n 'famid': [1, 1, 1, 2, 2, 2, 3, 3, 3],\n 'birth': [1, 2, 3, 1, 2, 3, 1, 2, 3],\n 'ht_one': [2.8, 2.9, 2.2, 2, 1.8, 1.9, 2.2, 2.3, 2.1],\n 'ht_two': [3.4, 3.8, 2.9, 3.2, 2.8, 2.4, 3.3, 3.4, 2.9]\n})\ndf = pd.wide_to_long(df, stubnames='ht', i=['famid', 'birth'], j='age',\n sep='_', suffix=r'\\w+')\ndf.reset_index(inplace = True)\n\n# Adjust the values in the ht column where the age column is 'one'\ndf.loc[df['age'] == 'one', 'ht'] = df[df['age'] == 'one']['ht'] * 10**6\n\n# Print the updated dataframe\nprint(df)\n\nAfter running this code, the dataframe will have the adjusted values in the ht column where the age column is one.\n", "Let's try this:\ndf.loc[df['age'] == 'one', 'ht'] *= 10**6\n\nOutput:\n famid birth age ht\n0 1 1 one 2800000.0\n1 1 1 two 3.4\n2 1 2 one 2900000.0\n3 1 2 two 3.8\n4 1 3 one 2200000.0\n5 1 3 two 2.9\n6 2 1 one 2000000.0\n7 2 1 two 3.2\n8 2 2 one 1800000.0\n9 2 2 two 2.8\n10 2 3 one 1900000.0\n11 2 3 two 2.4\n12 3 1 one 2200000.0\n13 3 1 two 3.3\n14 3 2 one 2300000.0\n15 3 2 two 3.4\n16 3 3 one 2100000.0\n17 3 3 two 2.9\n\n", "To perform calculations on a subset of a column in a pandas dataframe, you can use the .loc method to select the subset of the dataframe and then apply the calculation to that subset. For example, to multiply the ht values for rows where age is equal to one by 10^6, you could use the following code:\ndf.loc[df['age']=='one', 'ht'] = df.loc[df['age']=='one', 'ht'] * 10**6\n\nThis code selects the subset of rows where age is one using df.loc[df['age']=='one'], and then applies the multiplication operation to the ht column in that subset using df.loc[df['age']=='one', 'ht'] * 10**6.\nYou can also use the .loc method to perform calculations on multiple columns in the subset. 
For example, if you wanted to multiply the ht values by 10^6 and then add 1000 to the birth values for rows where age is one, you could use the following code:\ndf.loc[df['age']=='one', ['ht', 'birth']] = df.loc[df['age']=='one', ['ht', 'birth']]\n\n", "To adjust values in a subset of a DataFrame, you can use the loc method and select the rows you want to adjust using a boolean mask. The syntax for modifying values using loc is as follows:\ndf.loc[mask, column] = new_value\n\nwhere mask is a boolean array that specifies which rows to adjust, column is the name of the column to adjust, and new_value is the value to assign to the selected rows.\nIn your case, you want to adjust the ht column for rows where the age column is equal to \"one\", so your mask would be df['age'] == 'one'. To modify the values in the ht column, you would use the following code:\ndf.loc[df['age'] == 'one', 'ht'] = df[df['age'] == 'one']['ht'] * 10**6\n\nThis code will select the rows where age is equal to \"one\", and then set the values in the ht column to their current value times 10^6.\nAlternatively, you can use the isin method to create your boolean mask, which can make the code more readable. The isin method returns a boolean array where each element is True if the corresponding value in the column is in the given list of values, and False otherwise. So, to create a mask for rows where the age column is equal to \"one\", you can use the following code:\nmask = df['age'].isin(['one'])\n\nThen you can use this mask with the loc method to modify the values in the ht column:\ndf.loc[mask, 'ht'] = df[mask]['ht'] * 10**6\n\nI hope this helps! 
Let me know if you have any other questions.\n", "Here is a way:\ndf.assign(ht = df['ht'].mask(df['age'].isin(['one']),df['ht'].mul(10**6)))\n\nby using isin(), more values from the age column can be added.\nOutput:\n famid birth age ht\n0 1 1 one 2800000.0\n1 1 1 two 3.4\n2 1 2 one 2900000.0\n3 1 2 two 3.8\n4 1 3 one 2200000.0\n5 1 3 two 2.9\n6 2 1 one 2000000.0\n7 2 1 two 3.2\n8 2 2 one 1800000.0\n9 2 2 two 2.8\n10 2 3 one 1900000.0\n11 2 3 two 2.4\n12 3 1 one 2200000.0\n13 3 1 two 3.3\n14 3 2 one 2300000.0\n15 3 2 two 3.4\n16 3 3 one 2100000.0\n17 3 3 two 2.9\n\n" ]
[ 2, 2, 1, 1, 1 ]
[]
[]
[ "dataframe", "pandas", "python" ]
stackoverflow_0074670948_dataframe_pandas_python.txt
Q: how to check if a number is even or not in python I am attempting to make a collatz conjecture program, but I can't figure out how to check for even numbers this is my current code for it elif (original_collatz % 2) == 0: new_collatz = collatz/2 anyone have an idea how to check i tried it with modulo but could figure out how it works, and my program just ignores this line, whole program: ` #collatz program import time collatz = 7 original_collatz = collatz new_collatz = collatz while True: if original_collatz % 2 == 1: new_collatz = (collatz * 3) + 1 elif (original_collatz % 2) == 0: new_collatz = collatz/2 collatz = new_collatz print(collatz) if collatz == 1: print('woo hoo') original_collatz += 1 time.sleep(1) A: The problem is not that "your program ignore the lines", it's that you test the parity of original_collatz which doesn't change. You need to check the parity of collatz You need to use integer division (//) when dividing by two or collatz will become a float. You don't really need to use new_collatz as an intermediate, you could just overwrite collatz directly. Here is a fixed sample: # collatz program import time collatz = 7 original_collatz = collatz new_collatz = collatz while True: if collatz % 2 == 1: new_collatz = (collatz * 3) + 1 else: new_collatz = collatz // 2 collatz = new_collatz print(collatz) if collatz == 1: print('woo hoo') original_collatz += 1 collatz = original_collatz time.sleep(1) A: Working example: import time collatz_init = 7 collatz = collatz_init while True: if collatz % 2 == 1: collatz = (collatz * 3) + 1 else: collatz //= 2 print(collatz) if collatz == 1: print('woo hoo') collatz_init += 1 collatz = collatz_init time.sleep(1)
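The parity test itself is just n % 2 == 0; a compact sketch of one corrected Collatz step (integer division so the values stay ints, as both answers point out), plus a helper counting steps to 1:

```python
def collatz_step(n):
    """One Collatz step: halve even n, 3n + 1 for odd n."""
    return n // 2 if n % 2 == 0 else 3 * n + 1

def collatz_length(n):
    """Number of Collatz steps until n reaches 1."""
    steps = 0
    while n != 1:
        n = collatz_step(n)
        steps += 1
    return steps
```

For example, starting from 7 the sequence takes 16 steps to reach 1.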
how to check if a number is even or not in python
I am attempting to make a collatz conjecture program, but I can't figure out how to check for even numbers. This is my current code for it elif (original_collatz % 2) == 0: new_collatz = collatz/2 Anyone have an idea how to check? I tried it with modulo but couldn't figure out how it works, and my program just ignores this line. Whole program: ` #collatz program import time collatz = 7 original_collatz = collatz new_collatz = collatz while True: if original_collatz % 2 == 1: new_collatz = (collatz * 3) + 1 elif (original_collatz % 2) == 0: new_collatz = collatz/2 collatz = new_collatz print(collatz) if collatz == 1: print('woo hoo') original_collatz += 1 time.sleep(1)
[ "The problem is not that \"your program ignore the lines\", it's that you test the parity of original_collatz which doesn't change.\n\nYou need to check the parity of collatz\nYou need to use integer division (//) when dividing by two or collatz will become a float.\nYou don't really need to use new_collatz as an intermediate, you could just overwrite collatz directly.\n\nHere is a fixed sample:\n# collatz program\nimport time\n\ncollatz = 7\noriginal_collatz = collatz\nnew_collatz = collatz\n\nwhile True:\n if collatz % 2 == 1:\n new_collatz = (collatz * 3) + 1\n\n else:\n new_collatz = collatz // 2\n\n collatz = new_collatz\n\n print(collatz)\n\n if collatz == 1:\n print('woo hoo')\n original_collatz += 1\n collatz = original_collatz\n\n time.sleep(1)\n\n", "Working example:\nimport time\n\ncollatz_init = 7\ncollatz = collatz_init\n\nwhile True:\n if collatz % 2 == 1:\n collatz = (collatz * 3) + 1\n else:\n collatz //= 2\n\n print(collatz)\n\n if collatz == 1:\n print('woo hoo')\n collatz_init += 1\n collatz = collatz_init\n\n time.sleep(1)\n\n" ]
[ 1, 1 ]
[]
[]
[ "collatz", "integer", "modulo", "python" ]
stackoverflow_0074671339_collatz_integer_modulo_python.txt
Q: How can I catch all errors from a create.sql just like SQLite gives? I have a create.sql (and a populate.sql) that create a SQLite3 database (and populate it with some dummy data). I then save the resulting database and make some further analysis. I use Python to automate this process for several pairs of sql files. db_mem = sqlite3.connect(":memory:") cur = db_mem.cursor() try: with open(row['path']+'/' + 'create.sql') as fp: cur.executescript(fp.read()) except: creates.append('C-error') continue else: creates.append('Ok') try: with open(row['path']+'/' + 'populate.sql') as fp: cur.executescript(fp.read()) except sqlite3.Error as x: populates.append('P-error') else: populates.append('Ok') db_disk = sqlite3.connect(f'./SQL_Final/{index}_out.db') db_mem.backup(db_disk) However, i can only catch 1 error at creation, instead of several errors that are outputed when I do .read create.sql from sqlite. My question is how can I catch all errors? For reproducibility issues, here is a dummy create.sql (that generates errors): DROP TABLE IF EXISTS Comp; CREATE TABLE Comp ( idComp INTEGER PRIMARY KEY, nomeComp TEXT, dataInicio TEXT, dataFim TEXT ); DROP TABLE IF EXISTS Game; DROP TABLE IF EXISTS Stade; DROP TABLE IF EXISTS Club; DROP TABLE IF EXISTS Squad; CREATE TABLE Game ( idGame INTEGER PRIMARY KEY, golosFora INTEGER, golosCasa INTEGER, data DATE, jornada INTEGER, duração TIME, nomeComp TEXT REFERENCES Comp, nomeStade TEXT REFERENCES Stade, nomeSquadFora TEXT REFERENCES Squad, nomeSquadCasa TEXT REFERENCES Squad, CONSTRAINT CHECK_Game_golosFora CHECK (Game_golosFora >= 0), CONSTRAINT CHECK_Game_golosCasa CHECK (Game_golosCasa >= 0), CONSTRAINT CHECK_Game_jornada CHECK (jornada > 0) ); CREATE TABLE Stade ( nomeStade TEXT PRIMARY KEY, local TEXT NOT NULL, idGame INTEGER REFERENCES Game ); CREATE TABLE Club ( nomeClub TEXT PRIMARY KEY, país TEXT NOT NULL ); CREATE TABLE Squad ( nomeSquad TEXT PRIMARY Key, nomeClub TEXT REFERENCES Club ); If I read this file from SQLIte 
(with .read create.sql) I get the following errors: Error: near line 2: in prepare, table "Comp" has more than one primary key (1) Error: near line 13: in prepare, no such column: Game_golosFora (1) However, if I automate from Python, I get only a single error: Error: table "Comp" has more than one primary key Is there any way I can fix this? A: The executescript() will run the whole sql file but will raise an exception when an error occurs. Since you want to continue the DB schema creation regardless of errors, you could switch from using executescript() to looping over the sql statements list by calling the execute() on each of them. This example will demonstrate that the processing continues when an error occurs. We will try to create a table that already exists to simulate an SQLite error. create.sql: CREATE TABLE Stade ( nomeStade TEXT PRIMARY KEY, local TEXT NOT NULL, idGame INTEGER REFERENCES Game ); CREATE TABLE Stade ( nomeStade TEXT PRIMARY KEY, local TEXT NOT NULL, idGame INTEGER REFERENCES Game ); CREATE TABLE Stade2 ( nomeStade TEXT PRIMARY KEY, local TEXT NOT NULL, idGame INTEGER REFERENCES Game ); CREATE TABLE Stade2 ( nomeStade TEXT PRIMARY KEY, local TEXT NOT NULL, idGame INTEGER REFERENCES Game ); CREATE TABLE Stade3 ( nomeStade TEXT PRIMARY KEY, local TEXT NOT NULL, idGame INTEGER REFERENCES Game ); python: import traceback import sqlite3 db_mem = sqlite3.connect(":memory:") cur = db_mem.cursor() all_errors = [] try: with open('create.sql', 'r') as fp: text = fp.read().split(';') for sql in text: try: cur.execute(sql) except: # print("exception at sql: {}, details: {}".format(sql, traceback.format_exc())) all_errors.append(traceback.format_exc()) except: print("exception, details: {}".format(traceback.format_exc())) else: db_disk = sqlite3.connect('db.sqlite') db_mem.backup(db_disk) for i, error in enumerate(all_errors): print("Error #{}: {}".format(i, error)) running it, we can see 2 error messages, and inspecting the db.sqlite we can see the 3rd table
as well, i.e. the processing reached the last sql.
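A self-contained variant of the answer's approach (the script held in a string and an in-memory database, so no create.sql file is needed) — note the empty trailing chunk produced by split(';') is skipped before execute():

```python
import sqlite3

script = """
CREATE TABLE Stade (nomeStade TEXT PRIMARY KEY, local TEXT NOT NULL);
CREATE TABLE Stade (nomeStade TEXT PRIMARY KEY, local TEXT NOT NULL);
CREATE TABLE Stade2 (nomeStade TEXT PRIMARY KEY, local TEXT NOT NULL);
CREATE TABLE Stade2 (nomeStade TEXT PRIMARY KEY, local TEXT NOT NULL);
CREATE TABLE Stade3 (nomeStade TEXT PRIMARY KEY, local TEXT NOT NULL);
"""

con = sqlite3.connect(":memory:")
cur = con.cursor()
errors = []
for stmt in script.split(';'):
    stmt = stmt.strip()
    if not stmt:
        continue  # skip the empty chunk after the final ';'
    try:
        cur.execute(stmt)
    except sqlite3.Error as exc:
        errors.append(str(exc))  # collect the error and keep going

# the two duplicate CREATEs fail, but Stade3 is still created
tables = {row[0] for row in cur.execute(
    "SELECT name FROM sqlite_master WHERE type='table'")}
```

Splitting on ';' is fine for simple DDL like this, but it breaks if a statement embeds a semicolon inside a string literal; sqlite3.complete_statement can help decide where real statement boundaries are in that case.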
How can I catch all errors from a create.sql just like SQLite gives?
I have a create.sql (and a populate.sql) that create a SQLite3 database (and populate it with some dummy data). I then save the resulting database and make some further analysis. I use Python to automate this process for several pairs of sql files. db_mem = sqlite3.connect(":memory:") cur = db_mem.cursor() try: with open(row['path']+'/' + 'create.sql') as fp: cur.executescript(fp.read()) except: creates.append('C-error') continue else: creates.append('Ok') try: with open(row['path']+'/' + 'populate.sql') as fp: cur.executescript(fp.read()) except sqlite3.Error as x: populates.append('P-error') else: populates.append('Ok') db_disk = sqlite3.connect(f'./SQL_Final/{index}_out.db') db_mem.backup(db_disk) However, i can only catch 1 error at creation, instead of several errors that are outputed when I do .read create.sql from sqlite. My question is how can I catch all errors? For reproducibility issues, here is a dummy create.sql (that generates errors): DROP TABLE IF EXISTS Comp; CREATE TABLE Comp ( idComp INTEGER PRIMARY KEY, nomeComp TEXT, dataInicio TEXT, dataFim TEXT ); DROP TABLE IF EXISTS Game; DROP TABLE IF EXISTS Stade; DROP TABLE IF EXISTS Club; DROP TABLE IF EXISTS Squad; CREATE TABLE Game ( idGame INTEGER PRIMARY KEY, golosFora INTEGER, golosCasa INTEGER, data DATE, jornada INTEGER, duração TIME, nomeComp TEXT REFERENCES Comp, nomeStade TEXT REFERENCES Stade, nomeSquadFora TEXT REFERENCES Squad, nomeSquadCasa TEXT REFERENCES Squad, CONSTRAINT CHECK_Game_golosFora CHECK (Game_golosFora >= 0), CONSTRAINT CHECK_Game_golosCasa CHECK (Game_golosCasa >= 0), CONSTRAINT CHECK_Game_jornada CHECK (jornada > 0) ); CREATE TABLE Stade ( nomeStade TEXT PRIMARY KEY, local TEXT NOT NULL, idGame INTEGER REFERENCES Game ); CREATE TABLE Club ( nomeClub TEXT PRIMARY KEY, país TEXT NOT NULL ); CREATE TABLE Squad ( nomeSquad TEXT PRIMARY Key, nomeClub TEXT REFERENCES Club ); If I read this file from SQLIte (with .read create.sql) I get the following errors: Error: near line 2: in 
prepare, table "Comp" has more than one primary key (1) Error: near line 13: in prepare, no such column: Game_golosFora (1) However, if I automate from Python, I get only a single error: Error: table "Comp" has more than one primary key Is there any way I can fix this?
[ "The executescript() will run the whole sql file but will raise an exception when an error occurs.\nSince you want to continue the DB schema creation regardless of errors, you could switch from using executescript() to looping over the sql statements list by calling the execute() on each of them.\nThis example will demonstrate that the processing continues when an error occurs. We will try to create a table that already exists to simulate an sqlite error.\ncreate.sql:\nCREATE TABLE Stade (\n nomeStade TEXT PRIMARY KEY,\n local TEXT NOT NULL,\n idGame INTEGER REFERENCES Game\n);\n\nCREATE TABLE Stade (\n nomeStade TEXT PRIMARY KEY,\n local TEXT NOT NULL,\n idGame INTEGER REFERENCES Game\n);\n\nCREATE TABLE Stade2 (\n nomeStade TEXT PRIMARY KEY,\n local TEXT NOT NULL,\n idGame INTEGER REFERENCES Game\n);\n\nCREATE TABLE Stade2 (\n nomeStade TEXT PRIMARY KEY,\n local TEXT NOT NULL,\n idGame INTEGER REFERENCES Game\n);\n\nCREATE TABLE Stade3 (\n nomeStade TEXT PRIMARY KEY,\n local TEXT NOT NULL,\n idGame INTEGER REFERENCES Game\n);\n\npython:\nimport traceback\nimport sqlite3\ndb_mem = sqlite3.connect(\":memory:\")\ncur = db_mem.cursor()\nall_errors = []\ntry:\n with open('create.sql', 'r') as fp:\n text = fp.read().split(';')\n for sql in text:\n try:\n cur.execute(sql)\n except:\n # print(\"exception at sql: {}, details: {}\".format(sql, traceback.format_exc()))\n all_errors.append(traceback.format_exc())\nexcept:\n print(\"exception, details: {}\".format(traceback.format_exc()))\n \nelse:\n db_disk = sqlite3.connect('db.sqlite')\n db_mem.backup(db_disk)\n\nfor i, error in enumerate(all_errors):\n print(\"Error #{}: {}\".format(i, error))\n\nrunning it, we can see 2 error messages, and inspecting the db.sqlite we can see the 3rd table as well, i.e. the processing reached the last sql.\n" ]
[ 1 ]
[]
[]
[ "python", "sqlite" ]
stackoverflow_0074671110_python_sqlite.txt
Q: Facing Import Error when importing esda and libpysal libraries Even though I have installed both the libraries several times using different orders in different virtual environments, I'm still facing an issue where I'm not able to import and use certain geospatial libraries like esda and libpysal. The following error shows up: ImportError Traceback (most recent call last) C:\Users\SLAADM~1\AppData\Local\Temp/ipykernel_35328/2667884714.py in <module> 3 import numpy as np 4 import matplotlib.pyplot as plt ----> 5 import esda 6 import libpysal as lps 7 import pysal c:\users\sla admin\appdata\local\programs\python\python39\lib\site-packages\esda\__init__.py in <module> 5 6 """ ----> 7 from . import adbscan 8 from .gamma import Gamma 9 from .geary import Geary c:\users\sla admin\appdata\local\programs\python\python39\lib\site-packages\esda\adbscan.py in <module> 8 import pandas 9 import numpy as np ---> 10 from libpysal.cg.alpha_shapes import alpha_shape_auto 11 from scipy.spatial import cKDTree 12 from collections import Counter c:\users\sla admin\appdata\local\programs\python\python39\lib\site-packages\libpysal\__init__.py in <module> 25 Tools for creating and manipulating weights 26 """ ---> 27 from . import cg 28 from . import io 29 from . 
import weights c:\users\sla admin\appdata\local\programs\python\python39\lib\site-packages\libpysal\cg\__init__.py in <module> 9 from .sphere import * 10 from .voronoi import * ---> 11 from .alpha_shapes import * c:\users\sla admin\appdata\local\programs\python\python39\lib\site-packages\libpysal\cg\alpha_shapes.py in <module> 22 23 try: ---> 24 import pygeos 25 26 HAS_PYGEOS = True c:\users\sla admin\appdata\local\programs\python\python39\lib\site-packages\pygeos\__init__.py in <module> ----> 1 from .lib import GEOSException # NOQA 2 from .lib import Geometry # NOQA 3 from .lib import geos_version, geos_version_string # NOQA 4 from .lib import geos_capi_version, geos_capi_version_string # NOQA 5 from .decorators import UnsupportedGEOSOperation # NOQA ImportError: DLL load failed while importing lib: The specified procedure could not be found. Would really appreciate any help in making this work. Please throw any suggestions you might have at me. A: install pygeos i.e conda install pygeos it worked for me A: I found same issue when running example code from a couple of years ago. The pysal API has changed. Import libpysal first then import the esda libraries eg import libpysal from esda.moran import Moran from esda.smaup import Smaup see https://pysal.org/esda/generated/esda.Moran.html
Facing Import Error when importing esda and libpysal libraries
Even though I have installed both the libraries several times using different orders in different virtual environments, I'm still facing an issue where I'm not able to import and use certain geospatial libraries like esda and libpysal. The following error shows up: ImportError Traceback (most recent call last) C:\Users\SLAADM~1\AppData\Local\Temp/ipykernel_35328/2667884714.py in <module> 3 import numpy as np 4 import matplotlib.pyplot as plt ----> 5 import esda 6 import libpysal as lps 7 import pysal c:\users\sla admin\appdata\local\programs\python\python39\lib\site-packages\esda\__init__.py in <module> 5 6 """ ----> 7 from . import adbscan 8 from .gamma import Gamma 9 from .geary import Geary c:\users\sla admin\appdata\local\programs\python\python39\lib\site-packages\esda\adbscan.py in <module> 8 import pandas 9 import numpy as np ---> 10 from libpysal.cg.alpha_shapes import alpha_shape_auto 11 from scipy.spatial import cKDTree 12 from collections import Counter c:\users\sla admin\appdata\local\programs\python\python39\lib\site-packages\libpysal\__init__.py in <module> 25 Tools for creating and manipulating weights 26 """ ---> 27 from . import cg 28 from . import io 29 from . 
import weights c:\users\sla admin\appdata\local\programs\python\python39\lib\site-packages\libpysal\cg\__init__.py in <module> 9 from .sphere import * 10 from .voronoi import * ---> 11 from .alpha_shapes import * c:\users\sla admin\appdata\local\programs\python\python39\lib\site-packages\libpysal\cg\alpha_shapes.py in <module> 22 23 try: ---> 24 import pygeos 25 26 HAS_PYGEOS = True c:\users\sla admin\appdata\local\programs\python\python39\lib\site-packages\pygeos\__init__.py in <module> ----> 1 from .lib import GEOSException # NOQA 2 from .lib import Geometry # NOQA 3 from .lib import geos_version, geos_version_string # NOQA 4 from .lib import geos_capi_version, geos_capi_version_string # NOQA 5 from .decorators import UnsupportedGEOSOperation # NOQA ImportError: DLL load failed while importing lib: The specified procedure could not be found. Would really appreciate any help in making this work. Please throw any suggestions you might have at me.
[ "install pygeos i.e conda install pygeos\nit worked for me\n", "I found same issue when running example code from a couple of years ago. The pysal API has changed.\nImport libpysal first then import the esda libraries eg\nimport libpysal\nfrom esda.moran import Moran\nfrom esda.smaup import Smaup\n\nsee\nhttps://pysal.org/esda/generated/esda.Moran.html\n" ]
[ 0, 0 ]
[]
[]
[ "geospatial", "gis", "import", "jupyter_notebook", "python" ]
stackoverflow_0068841646_geospatial_gis_import_jupyter_notebook_python.txt
Q: Browser opens twice when running script I'm working on a web driver and I'm trying to implement classes into my code. I had this working but as soon as I turned it into a class my browser started opening twice when I ran the program. It will open a browser, then open the second browser, run the commands and then leave the first browser open. Can anyone tell me why this is happening? This is just an example of how it is formatted. import selenium import time from selenium import webdriver from selenium.webdriver.common.by import By from selenium.webdriver.support.ui import WebDriverWait class About(object): def __init__(self, driver): self.driver = driver def pause(self): time.sleep(1) def run(self): self.driver.get('https://google.com') self.pause() def tearDown(self): self.driver.quit() go = About(webdriver.Chrome()) if __name__ == '__main__': go.run() go.tearDown() A: It looks like you are creating a new instance of the webdriver.Chrome class when you initialize the About class in this line: go = About(webdriver.Chrome()) This is causing a new Chrome browser to be opened when you create the About object. Instead, you should create the webdriver.Chrome object outside of the About class and pass it to the About class when you create the object. Here is an example of how you can do that: import selenium import time from selenium import webdriver from selenium.webdriver.common.by import By from selenium.webdriver.support.ui import WebDriverWait class About(object): def __init__(self, driver): self.driver = driver def pause(self): time.sleep(1) def run(self): self.driver.get('https://google.com') self.pause() def tearDown(self): self.driver.quit() # Create the webdriver.Chrome object outside of the About class driver = webdriver.Chrome() go = About(driver) if __name__ == '__main__': go.run() go.tearDown() This way, only one instance of the webdriver.Chrome class will be created and only one browser will be opened. 
When you create the About object, you pass the webdriver.Chrome object to it so that it can be used in the methods of the About class.
Browser opens twice when running script
I'm working on a web driver and I'm trying to implement classes into my code. I had this working but as soon as I turned it into a class my browser started opening twice when I ran the program. It will open a browser, then open the second browser, run the commands and then leave the first browser open. Can anyone tell me why this is happening? This is just an example of how it is formatted. import selenium import time from selenium import webdriver from selenium.webdriver.common.by import By from selenium.webdriver.support.ui import WebDriverWait class About(object): def __init__(self, driver): self.driver = driver def pause(self): time.sleep(1) def run(self): self.driver.get('https://google.com') self.pause() def tearDown(self): self.driver.quit() go = About(webdriver.Chrome()) if __name__ == '__main__': go.run() go.tearDown()
[ "It looks like you are creating a new instance of the webdriver.Chrome class when you initialize the About class in this line:\ngo = About(webdriver.Chrome())\n\nThis is causing a new Chrome browser to be opened when you create the About object. Instead, you should create the webdriver.Chrome object outside of the About class and pass it to the About class when you create the object. Here is an example of how you can do that:\nimport selenium\nimport time\nfrom selenium import webdriver\nfrom selenium.webdriver.common.by import By\nfrom selenium.webdriver.support.ui import WebDriverWait\n\nclass About(object):\n def __init__(self, driver):\n self.driver = driver\n\n def pause(self):\n time.sleep(1)\n\n def run(self):\n self.driver.get('https://google.com')\n self.pause()\n\n def tearDown(self):\n self.driver.quit()\n\n# Create the webdriver.Chrome object outside of the About class\ndriver = webdriver.Chrome()\ngo = About(driver)\n\nif __name__ == '__main__':\n go.run()\n go.tearDown()\n\nThis way, only one instance of the webdriver.Chrome class will be created and only one browser will be opened. When you create the About object, you pass the webdriver.Chrome object to it so that it can be used in the methods of the About class.\n" ]
[ 0 ]
[]
[]
[ "google_chrome", "python", "selenium", "selenium_webdriver", "webdriver" ]
stackoverflow_0041432728_google_chrome_python_selenium_selenium_webdriver_webdriver.txt
Q: How to effectively loop through each pixel for saving time with numpy? As you know, looping through each pixel and accessing its values with opencv takes too long. As a beginner I'm trying to learn opencv myself; when I tried this approach it took around 7-10 seconds to loop through the image and perform the operations. The code is as below: original_image = cv2.imread(img_f) image = np.array(original_image) for y in range(image.shape[0]): for x in range(image.shape[1]): # remove grey background if 150 <= image[y, x, 0] <= 180 and \ 150 <= image[y, x, 1] <= 180 and \ 150 <= image[y, x, 2] <= 180: image[y, x, 0] = 0 image[y, x, 1] = 0 image[y, x, 2] = 0 # remove green dashes if image[y, x, 0] == 0 and \ image[y, x, 1] == 169 and \ image[y, x, 2] == 0: image[y, x, 0] = 0 image[y, x, 1] = 0 image[y, x, 2] = 0 In the above code I'm just trying to remove grey and green pixel colors. I found a similar question asked here but I'm not able to understand how to use numpy in my use case, as I'm a beginner in Python and numpy. Any help or suggestion for solving this will be appreciated, thanks A: You can take advantage of NumPy's vectorized operations to eliminate all loops which should be much faster. # Remove grey background is_grey = ((150 <= image) & (image <= 180)).all(axis=2, keepdims=True) image = np.where(is_grey, 0, image) # Remove green dashes is_green_dash = (image[..., 0] == 0) & (image[..., 1] == 169) & (image[..., 2] == 0) is_green_dash = is_green_dash[..., np.newaxis] # append a new dim at the end image = np.where(is_green_dash, 0, image) Both invocations of np.where rely on NumPy's broadcasting. A: One way to improve the performance of your code would be to use the cv2.inRange() function to find pixels with the desired colors, and then use the cv2.bitwise_and() function to remove those pixels from the image. This can be done more efficiently than looping through each pixel individually, which can be slow and computationally intensive. 
Here is an example of how this could be implemented: import cv2 import numpy as np # Read the image original_image = cv2.imread('image.jpg') # Define the colors to be removed as ranges of BGR values grey_min = np.array([150, 150, 150], np.uint8) grey_max = np.array([180, 180, 180], np.uint8) green_min = np.array([0, 0, 0], np.uint8) green_max = np.array([0, 169, 0], np.uint8) # Use inRange() to find pixels with the desired colors grey_mask = cv2.inRange(original_image, grey_min, grey_max) green_mask = cv2.inRange(original_image, green_min, green_max) # Use bitwise_and() to remove the pixels with A: You can apply numpy filtering to image. In your scenario it would be: mask_gray = ( (150 <= image[:, :, 0]) & (image[:, :, 0] <= 180) & (150 <= image[:, :, 1]) & (image[:, :, 1] <= 180) & (150 <= image[:, :, 2]) & (image[:, :, 2] <= 180) ) image[mask_gray] = 0 mask_green = ( (image[:, :, 0] == 0) & (image[:, :, 1] == 169) & (image[:, :, 2] == 0) ) image[mask_green] = 0 mask_gray and mask_green here are boolean masks A: If you insist on writing your own loops... You could just use numba. It's a JIT compiler for Python code. from numba import njit @njit def your_function(input): ... return output input = cv.imread(...) # yes this will shadow the builtin of the same name output = your_function(input) The first time you call your_function, it'll take a second to compile. All further calls are blazingly fast.
How to effectively loop through each pixel for saving time with numpy?
As you know, looping through each pixel and accessing its values with opencv takes too long. As a beginner I'm trying to learn opencv myself; when I tried this approach it took around 7-10 seconds to loop through the image and perform the operations. The code is as below: original_image = cv2.imread(img_f) image = np.array(original_image) for y in range(image.shape[0]): for x in range(image.shape[1]): # remove grey background if 150 <= image[y, x, 0] <= 180 and \ 150 <= image[y, x, 1] <= 180 and \ 150 <= image[y, x, 2] <= 180: image[y, x, 0] = 0 image[y, x, 1] = 0 image[y, x, 2] = 0 # remove green dashes if image[y, x, 0] == 0 and \ image[y, x, 1] == 169 and \ image[y, x, 2] == 0: image[y, x, 0] = 0 image[y, x, 1] = 0 image[y, x, 2] = 0 In the above code I'm just trying to remove grey and green pixel colors. I found a similar question asked here but I'm not able to understand how to use numpy in my use case, as I'm a beginner in Python and numpy. Any help or suggestion for solving this will be appreciated, thanks
[ "You can take advantage of NumPy's vectorized operations to eliminate all loops which should be much faster.\n# Remove grey background\nis_grey = ((150 <= image) & (image <= 180)).all(axis=2, keepdims=True)\nimage = np.where(is_grey, 0, image)\n\n# Remove green dashes\nis_green_dash = (image[..., 0] == 0) & (image[..., 1] == 169) & (image[..., 2] == 0)\nis_green_dash = is_green_dash[..., np.newaxis] # append a new dim at the end\nimage = np.where(is_green_dash, 0, image)\n\nBoth invocations of np.where rely on NumPy's broadcasting.\n", "One way to improve the performance of your code would be to use the cv2.inRange() function to find pixels with the desired colors, and then use the cv2.bitwise_and() function to remove those pixels from the image. This can be done more efficiently than looping through each pixel individually, which can be slow and computationally intensive. Here is an example of how this could be implemented:\nimport cv2\nimport numpy as np\n\n# Read the image\noriginal_image = cv2.imread('image.jpg')\n\n# Define the colors to be removed as ranges of BGR values\ngrey_min = np.array([150, 150, 150], np.uint8)\ngrey_max = np.array([180, 180, 180], np.uint8)\ngreen_min = np.array([0, 0, 0], np.uint8)\ngreen_max = np.array([0, 169, 0], np.uint8)\n\n# Use inRange() to find pixels with the desired colors\ngrey_mask = cv2.inRange(original_image, grey_min, grey_max)\ngreen_mask = cv2.inRange(original_image, green_min, green_max)\n\n# Use bitwise_and() to remove the pixels with\n\n", "You can apply numpy filtering to image. 
In your scenario it would be:\nmask_gray = (\n (150 <= image[:, :, 0]) & (image[:, :, 0] <= 180) & \n (150 <= image[:, :, 1]) & (image[:, :, 1] <= 180) & \n (150 <= image[:, :, 2]) & (image[:, :, 2] <= 180)\n)\n\nimage[mask_gray] = 0\n\nmask_green = (\n (image[:, :, 0] == 0) &\n (image[:, :, 1] == 169) &\n (image[:, :, 2] == 0)\n)\n\nimage[mask_green] = 0\n\nmask_gray and mask_green here are boolean masks\n", "If you insist on writing your own loops...\nYou could just use numba. It's a JIT compiler for Python code.\nfrom numba import njit\n\n@njit\ndef your_function(input):\n ...\n return output\n\ninput = cv.imread(...) # yes this will shadow the builtin of the same name\noutput = your_function(input)\n\nThe first time you call your_function, it'll take a second to compile. All further calls are blazingly fast.\n" ]
[ 3, 1, 0, 0 ]
[]
[]
[ "cv2", "image_processing", "numpy", "python" ]
stackoverflow_0074664212_cv2_image_processing_numpy_python.txt
Q: how to check if a line contains text and then stop after it reaches a blank line? [Python] I am writing an algorithm that takes strings from a text file and appends them to an array. If the strings are on consecutive lines, e.g. ABCD EFG HIJK LMNOP then they would be appended into the same array until it reaches a blank line, at which point it stops and starts appending the next run of consecutive lines to a different array. In my head a for loop is the way to go as I have the number of lines, but I am just not sure how to check whether a line has ANY TYPE of text or not (Not looking for any substrings, just checking if the line itself contains anything). A: Simply check if the current line equals the newline character. with open('filename.txt', 'r') as f: for line in f: if line == '\n': print('Empty line') else: print('line contain string') A: If you're stripping the linebreak off each line, you can just check afterward to see if the result is empty. Given the following file text.txt: ABCD EFG HIJK LMNOP QRSTUV WXYZ here's an example of reading it into a list of lists of strings: >>> arr = [[]] >>> with open("text.txt") as f: ... for line in f: ... line = line.strip() ... if line: ... arr[-1].append(line) ... else: ... arr.append([]) ... >>> arr [['ABCD', 'EFG', 'HIJK', 'LMNOP'], ['QRSTUV', 'WXYZ']]
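For what it's worth, the same grouping can also be written with itertools.groupby from the standard library; a small sketch over an in-memory list of already-stripped lines (reading the file into such a list is assumed):

```python
from itertools import groupby

lines = ["ABCD", "EFG", "HIJK", "LMNOP", "", "QRSTUV", "WXYZ"]

# key=bool puts consecutive non-empty lines in one group; the empty-line
# groups (key False) act as separators and are filtered out.
groups = [list(g) for nonblank, g in groupby(lines, key=bool) if nonblank]
print(groups)  # [['ABCD', 'EFG', 'HIJK', 'LMNOP'], ['QRSTUV', 'WXYZ']]
```

Each truthy group is one run of consecutive non-blank lines, which matches the "stop at a blank line, start a new array" behaviour described above.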
how to check if a line contains text and then stop after it reaches a blank line? [Python]
I am writing an algorithm that takes strings from a text file and appends them to an array. If the strings are on consecutive lines, e.g. ABCD EFG HIJK LMNOP then they would be appended into the same array until it reaches a blank line, at which point it stops and starts appending the next run of consecutive lines to a different array. In my head a for loop is the way to go as I have the number of lines, but I am just not sure how to check whether a line has ANY TYPE of text or not (Not looking for any substrings, just checking if the line itself contains anything).
[ "Simply check if the current line equals the newline character.\nwith open('filename.txt', 'r') as f:\n for line in f:\n if line == '\\n':\n print('Empty line')\n else:\n print('line contain string')\n\n", "If you're stripping the linebreak off each line, you can just check afterward to see if the result is empty. Given the following file text.txt:\nABCD\nEFG\nHIJK\nLMNOP\n\nQRSTUV\nWXYZ\n\nhere's an example of reading it into a list of lists of strings:\n>>> arr = [[]]\n>>> with open(\"text.txt\") as f:\n... for line in f:\n... line = line.strip()\n... if line:\n... arr[-1].append(line)\n... else:\n... arr.append([])\n...\n>>> arr\n[['ABCD', 'EFG', 'HIJK', 'LMNOP'], ['QRSTUV', 'WXYZ']]\n\n" ]
[ 0, 0 ]
[]
[]
[ "file", "python", "txt" ]
stackoverflow_0074671373_file_python_txt.txt
Q: Create multicategory chart by python I have the data like the following in excel: company month-year #people got interviewed # people employed link to the data: (https://docs.google.com/spreadsheets/d/1DwZt9fpnzR9yUNBMjmqA1hg11d-2dXNs/edit?usp=share_link&ouid=113997824301423906122&rtpof=true&sd=true) when I try to create multicategory chart(company as first category and the year-month as second category) by plotly library by python it mixes up the order of second category for y,z company. Putting the code and the screenshot of the chart below. Code : import pandas as pd from helper_functions import get_df import plotly.graph_objects as go from datetime import datetime def multicat_chart(infile=None, sheet_name=None, chart_type = None, chart_title = None): #chart type must be given df=pd.read_excel(infile,sheet_name) df = df.fillna(method='ffill') cat = df.columns[0] sub_cat = df.columns[1] cols = df.columns[2:] fig = go.Figure() cats = [] sub_cats = [] for c in df[cat].unique(): new_df = df.loc[df[cat] == c] scats = new_df[sub_cat] scats = scats.apply(lambda date: datetime.strptime(date, "%b-%Y")) scats = list(scats) scats.sort() scats = [datetime.strftime(element, '%b-%y') for element in scats] scats = [str(element) for element in scats] for sc in scats: cats.append(str(c)) sub_cats.append(str(sc)) print(c) for i in scats: print(i) fig.add_trace( go.Bar(x = [cats,sub_cats],y = df[cols[0]], name="# people got interviewed" )) fig.add_trace( go.Bar(x = [cats,sub_cats],y = df[cols[1]], name="# people employed" )) fig.update_layout(width = 1000, height = 1000) return fig fig = multicat_chart(infile = 'data_for_test.xlsx', sheet_name = 'data', chart_type = 'bar') fig.show() Chart: I gave the data to the Bar() function in ordered way but it mixes somehow, I would like to have in ascending order, what I did I convert string to datetime object and then sorted all subcategory data with the sort() function of list, and converted back to string. 
By running the script you can see that it prints in the right order, which means the data is given to the function already ordered, but it still gets mixed up. Can anyone help me understand why it behaves this way? A: Once you made the dates month-year, they were object type (character strings), not dates (as in date-type). When you sorted, you sorted by calendar month. First, use strptime to make it a date, sort it, then use strftime. import pandas as pd import plotly.graph_objects as go from datetime import datetime as dt def multicat_chart(infile=None, sheet_name=None, chart_type = None, chart_title = None): #chart type must be given df = pd.read_excel(infile, sheet_name) df = df.fillna(method='ffill') # fill in companies df['month'] = [dt.strptime(x, '%b-%Y') for x in df['month']] # date for ordering df.sort_values(by = ['month', 'company'], inplace = True) # appearance order df['month2'] = [dt.strftime(x, '%b-%Y') for x in df['month']] # visual appearance fig = go.Figure() # plot it fig.add_trace( go.Bar(x = [df.iloc[:, 0], df.iloc[:, 4]], y = df.iloc[:, 2], name="# people got interviewed" )) fig.add_trace( go.Bar(x = [df.iloc[:, 0], df.iloc[:, 4]], y = df.iloc[:, 3], name="# people employed" )) fig.update_layout(width = 1000, height = 1000) return fig You had a lot of extra work going on in your function; I cut a lot of that out because you didn't need it. If you have any questions, let me know!
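The heart of the fix, parse with strptime, sort, then format back with strftime, can be checked on its own without plotly or pandas (the month labels below are invented for the demo):

```python
from datetime import datetime

labels = ["Mar-2021", "Jan-2022", "Nov-2020", "Jul-2021"]

# Sorting the raw strings would give alphabetical month order (Jan, Jul,
# Mar, Nov); parsing to datetime first gives true calendar order.
ordered = sorted(labels, key=lambda s: datetime.strptime(s, "%b-%Y"))
print(ordered)  # ['Nov-2020', 'Mar-2021', 'Jul-2021', 'Jan-2022']
```

Only after sorting on the parsed dates should the values be turned back into "%b-%Y" strings for display.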
Create multicategory chart by python
I have the data like the following in excel: company month-year #people got interviewed # people employed link to the data: (https://docs.google.com/spreadsheets/d/1DwZt9fpnzR9yUNBMjmqA1hg11d-2dXNs/edit?usp=share_link&ouid=113997824301423906122&rtpof=true&sd=true) when I try to create multicategory chart(company as first category and the year-month as second category) by plotly library by python it mixes up the order of second category for y,z company. Putting the code and the screenshot of the chart below. Code : import pandas as pd from helper_functions import get_df import plotly.graph_objects as go from datetime import datetime def multicat_chart(infile=None, sheet_name=None, chart_type = None, chart_title = None): #chart type must be given df=pd.read_excel(infile,sheet_name) df = df.fillna(method='ffill') cat = df.columns[0] sub_cat = df.columns[1] cols = df.columns[2:] fig = go.Figure() cats = [] sub_cats = [] for c in df[cat].unique(): new_df = df.loc[df[cat] == c] scats = new_df[sub_cat] scats = scats.apply(lambda date: datetime.strptime(date, "%b-%Y")) scats = list(scats) scats.sort() scats = [datetime.strftime(element, '%b-%y') for element in scats] scats = [str(element) for element in scats] for sc in scats: cats.append(str(c)) sub_cats.append(str(sc)) print(c) for i in scats: print(i) fig.add_trace( go.Bar(x = [cats,sub_cats],y = df[cols[0]], name="# people got interviewed" )) fig.add_trace( go.Bar(x = [cats,sub_cats],y = df[cols[1]], name="# people employed" )) fig.update_layout(width = 1000, height = 1000) return fig fig = multicat_chart(infile = 'data_for_test.xlsx', sheet_name = 'data', chart_type = 'bar') fig.show() Chart: I gave the data to the Bar() function in ordered way but it mixes somehow, I would like to have in ascending order, what I did I convert string to datetime object and then sorted all subcategory data with the sort() function of list, and converted back to string. 
By running the script you can see that it prints in the right order, which means the data is given to the function already ordered, but it still gets mixed up. Can anyone help me understand why it behaves this way?
[ "Once you made the dates month-year, they were object type--character strings, not dates—as in date-type. When you sorted, you sorted by calendar month.\nFirst, use strptime to make it a date, sort it, then use strftime.\nimport pandas as pd\nimport plotly.graph_objects as go\nfrom datetime import datetime as dt\n\ndef multicat_chart(infile=None, sheet_name=None, chart_type = None, chart_title = None):\n \n #chart type must be given\n df = pd.read_excel(infile, sheet_name)\n df = df.fillna(method='ffill') # fill in companies\n\n df['month'] = [dt.strptime(x, '%b-%Y') for x in df['month']] # date for ordering\n df.sort_values(by = ['month', 'company'], inplace = True) # appearance order\n df['month2'] = [dt.strftime(x, '%b-%Y') for x in df['month']] # visual appearance\n fig = go.Figure() # plot it\n fig.add_trace( go.Bar(x = [df.iloc[:, 0], df.iloc[:, 4]], \n y = df.iloc[:, 2], name=\"# people got interviewed\" ))\n fig.add_trace( go.Bar(x = [df.iloc[:, 0], df.iloc[:, 4]], \n y = df.iloc[:, 3], name=\"# people employed\" ))\n fig.update_layout(width = 1000, height = 1000)\n return fig\n\n\n\nYou had a lot of extra work going on in your function; I cut a lot of that out because you didn't need it.\nIf you have any questions, let me know!\n" ]
[ 0 ]
[]
[]
[ "linux", "plotly", "python" ]
stackoverflow_0074625714_linux_plotly_python.txt
Q: Scrape tweets by Python and BeautifulSoup I want to scrape the tweets of a specific account on Twitter via BeautifulSoup, but it is not working for me. This is my code: import facebook as fb from bs4 import BeautifulSoup as bs import requests myUrl = requests.get('https://twitter.com/search?q=(from%3AAlMosahf)&src=typed_query&f=live') source = myUrl.content soup = bs(source, 'html.parser') twi = soup.find_all('div', {'data-testid':'tweetText'}) myTW = twi[1].text print(myTW) The result is "list index out of range", because "twi" is empty A: It looks like you're trying to scrape Twitter using Beautiful Soup, but the code you've provided won't work for several reasons. First, the Twitter website uses JavaScript to dynamically generate its content, which means that the raw HTML you get from a requests.get() call won't include the tweets you're looking for. Instead, you'll need to use a tool that can execute the JavaScript on the page and return the fully-rendered HTML. Second, even if you were able to get the fully-rendered HTML, the code you've provided won't work because the data-testid attribute you're using to find the tweets doesn't exist on the page. You'll need to use a different approach to locate the tweets in the HTML. To scrape Twitter using Beautiful Soup, you'll need to use a different approach. One option is to use the Twitter API to retrieve the tweets you're interested in, and then use Beautiful Soup to parse the returned data. 
Here's an example of how you could do that: import tweepy from bs4 import BeautifulSoup as bs # Authenticate with the Twitter API auth = tweepy.OAuthHandler(consumer_key, consumer_secret) auth.set_access_token(access_token, access_token_secret) api = tweepy.API(auth) # Get the tweets from the user with the username "AlMosahf" tweets = api.user_timeline(screen_name="AlMosahf") # Parse the tweets using Beautiful Soup for tweet in tweets: soup = bs(tweet.text, 'html.parser') # Do something with the parsed tweet A: The error you are seeing is because you are trying to access the second element of the twi list, but the list is empty. This means that the find_all() method did not find any elements that match the search criteria you specified. There are several reasons why this might happen, in general. One possible reason is that the page structure has changed, so the elements you are trying to find are no longer present on the page. Another possible reason (the reason in this scenario) is that the page uses JavaScript to dynamically generate its content, so the content you see in the browser may not be present in the initial HTML source that is downloaded by the requests library. To fix this error, you can try the following steps: Use the developer tools in your web browser to inspect the page and verify that the elements you are trying to find are actually present on the page. If the elements are present, try using a different method to extract the content. For example, you could use a different parsing library, or you could try using a web scraping framework like Scrapy or Selenium. If the elements are not present, you may need to use a different approach to extract the content. For example, you could try using the Twitter API to access the tweets directly, rather than trying to scrape them from the page. You can use the tweepy library to access the Twitter API and extract the tweets from a specific account. 
This can be a more reliable and efficient way to access the tweets, compared to scraping the page using BeautifulSoup. Here is an example of how you could use tweepy to extract the tweets from a specific account: import tweepy # Set up your API keys and access tokens consumer_key = 'your-consumer-key' consumer_secret = 'your-consumer-secret' access_token = 'your-access-token' access_token_secret = 'your-access-token-secret' # Authenticate with the Twitter API auth = tweepy.OAuthHandler(consumer_key, consumer_secret) auth.set_access_token(access_token, access_token_secret) api = tweepy.API(auth) # Extract the tweets from the specified account account = 'AlMosahf' tweets = api.user_timeline(screen_name=account) # Print the tweets for tweet in tweets: print(tweet.text) This code uses the tweepy library to authenticate with the Twitter API and extract the tweets from the specified account. The tweets are then printed to the console. You can modify this code to suit your needs. For example, you could use the limit parameter to specify the number of tweets you want to extract, or you could use the since_id and max_id parameters to specify a date range for the tweets. For more information, you can refer to the tweepy documentation.
Scrape tweets by Python and BeautifulSoup
I want to scrape the tweets of a specific account on Twitter via SB but it is not working for me this is my code : import facebook as fb from bs4 import BeautifulSoup as bs import requests myUrl = requests.get('https://twitter.com/search?q=(from%3AAlMosahf)&src=typed_query&f=live') source = myUrl.content soup = bs(source, 'html.parser') twi = soup.find_all('div', {'data-testid':'tweetText'}) myTW = twi[1].text print(myTW) The result is "list index out of range" .. because "twi" is empty
[ "It looks like you're trying to scrape Twitter using Beautiful Soup, but the code you've provided won't work for several reasons.\nFirst, the Twitter website uses JavaScript to dynamically generate its content, which means that the raw HTML you get from a requests.get() call won't include the tweets you're looking for. Instead, you'll need to use a tool that can execute the JavaScript on the page and return the fully-rendered HTML.\nSecond, even if you were able to get the fully-rendered HTML, the code you've provided won't work because the data-testid attribute you're using to find the tweets doesn't exist on the page. You'll need to use a different approach to locate the tweets in the HTML.\nTo scrape Twitter using Beautiful Soup, you'll need to use a different approach. One option is to use the Twitter API to retrieve the tweets you're interested in, and then use Beautiful Soup to parse the returned data. Here's an example of how you could do that:\nimport tweepy\nfrom bs4 import BeautifulSoup as bs\n\n# Authenticate with the Twitter API\nauth = tweepy.OAuthHandler(consumer_key, consumer_secret)\nauth.set_access_token(access_token, access_token_secret)\napi = tweepy.API(auth)\n\n# Get the tweets from the user with the username \"AlMosahf\"\ntweets = api.user_timeline(screen_name=\"AlMosahf\")\n\n# Parse the tweets using Beautiful Soup\nfor tweet in tweets:\n soup = bs(tweet.text, 'html.parser')\n # Do something with the parsed tweet\n\n", "The error you are seeing is because you are trying to access the second element of the twi list, but the list is empty. This means that the find_all() method did not find any elements that match the search criteria you specified.\nThere are several reasons why this might happen, in general. One possible reason is that the page structure has changed, so the elements you are trying to find are no longer present on the page. 
Another possible reason (the reason in this scenario) is that the page uses JavaScript to dynamically generate its content, so the content you see in the browser may not be present in the initial HTML source that is downloaded by the requests library.\nTo fix this error, you can try the following steps:\n\nUse the developer tools in your web browser to inspect the page and verify that the elements you are trying to find are actually present on the page.\nIf the elements are present, try using a different method to extract the content. For example, you could use a different parsing library, or you could try using a web scraping framework like Scrapy or Selenium.\nIf the elements are not present, you may need to use a different approach to extract the content. For example, you could try using the Twitter API to access the tweets directly, rather than trying to scrape them from the page.\n\nYou can use the tweepy library to access the Twitter API and extract the tweets from a specific account. This can be a more reliable and efficient way to access the tweets, compared to scraping the page using BeautifulSoup.\nHere is an example of how you could use tweepy to extract the tweets from a specific account:\nimport tweepy\n\n# Set up your API keys and access tokens\nconsumer_key = 'your-consumer-key'\nconsumer_secret = 'your-consumer-secret'\naccess_token = 'your-access-token'\naccess_token_secret = 'your-access-token-secret'\n\n# Authenticate with the Twitter API\nauth = tweepy.OAuthHandler(consumer_key, consumer_secret)\nauth.set_access_token(access_token, access_token_secret)\napi = tweepy.API(auth)\n\n# Extract the tweets from the specified account\naccount = 'AlMosahf'\ntweets = api.user_timeline(screen_name=account)\n\n# Print the tweets\nfor tweet in tweets:\n print(tweet.text)\n\nThis code uses the tweepy library to authenticate with the Twitter API and extract the tweets from the specified account. 
The tweets are then printed to the console.\nYou can modify this code to suit your needs. For example, you could use the limit parameter to specify the number of tweets you want to extract, or you could use the since_id and max_id parameters to specify a date range for the tweets. For more information, you can refer to the tweepy documentation.\n" ]
[ 2, 0 ]
[]
[]
[ "beautifulsoup", "python" ]
stackoverflow_0074669065_beautifulsoup_python.txt
Q: Dicts not being popped from list? The context doesn't matter too much, but I came across the problem that while trying to pop dict objects from a list, it wouldn't delete all of them. I'm doing this to filter for certain values in the dict objects, and I was left with things that should have been removed. Just to see what would happen, I tried deleting every item in the list called accepted_auctions (shown below), but it did not work.

for auction in accepted_auctions:
    accepted_auctions.pop(accepted_auctions.index(auction))

print(len(accepted_auctions))

When I tested this code, print(len(accepted_auctions)) printed 44 into the console. What am I doing wrong?

A: It looks like you're using a for loop to iterate over the list and calling pop on the list at the same time. This is generally not a good idea because the for loop uses an iterator to go through the items in the list, and modifying the list while you're iterating over it can cause the iterator to become confused and not behave as expected. One way to fix this is to create a new list that contains only the items that you want to keep, and then replace the original list with the new one. Here's an example:

# Create an empty list to store the items that we want to keep
filtered_auctions = []

# Iterate over the items in the list
for auction in accepted_auctions:
    # Check if the item meets the criteria for being kept
    if some_condition(auction):
        # If it does, append it to the filtered list
        filtered_auctions.append(auction)

# Replace the original list with the filtered list
accepted_auctions = filtered_auctions

Another way to fix this is to use a while loop instead of a for loop. Here's an example:

# Keep looping until the list is empty
while accepted_auctions:
    # Pop the first item from the list
    auction = accepted_auctions.pop(0)

    # Check if the item meets the criteria for being kept
    if some_condition(auction):
        # If it does, append it to the filtered list
        filtered_auctions.append(auction)

# Replace the original list with the filtered list
accepted_auctions = filtered_auctions

I hope this helps! Let me know if you have any other questions.

A: Modifying a list as you iterate over it will invalidate the iterator (because the indices of all the items are changing as you remove items), which in turn causes it to skip items. Don't do that. The easiest way to create a filtered list is via a list comprehension that creates a new list, e.g.:

accepted_auctions = [a for a in accepted_auctions if something(a)]

Here's a simple example using a list comprehension to filter a list of ints to only the odd numbers:

>>> nums = list(range(10))
>>> nums
[0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
>>> nums = [n for n in nums if n % 2]
>>> nums
[1, 3, 5, 7, 9]
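The skipping described in both answers can be reproduced in a few lines; plain integers stand in here for the auction dicts. Each pop shifts the remaining items left while the loop's internal index keeps advancing, so every other element is skipped and roughly half the list survives, which is the same effect that left 44 items in accepted_auctions:

```python
nums = [1, 2, 3, 4, 5, 6]

# Pop each item while iterating over it, the same pattern as in the question.
for n in nums:
    nums.pop(nums.index(n))

# Half the elements were never visited, so they were never removed.
print(nums)  # [2, 4, 6]
```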
Dicts not being popped from list?
The context doesn't matter too much, but I came across the problem that while trying to pop dict objects from a list, it wouldn't delete all of them. I'm doing this to filter for certain values in the dict objects, and I was left with things that should have been removed. Just to see what would happen, I tried deleting every item in the list called accepted_auctions (shown below), but it did not work. for auction in accepted_auctions: accepted_auctions.pop(accepted_auctions.index(auction)) print(len(accepted_auctions)) When I tested this code, print(len(accepted_auctions)) printed 44 into the console. What am I doing wrong?
[ "It looks like you're using a for loop to iterate over the list and calling pop on the list at the same time. This is generally not a good idea because the for loop uses an iterator to go through the items in the list, and modifying the list while you're iterating over it can cause the iterator to become confused and not behave as expected.\nOne way to fix this is to create a new list that contains only the items that you want to keep, and then replace the original list with the new one. Here's an example:\n# Create an empty list to store the items that we want to keep\nfiltered_auctions = []\n\n# Iterate over the items in the list\nfor auction in accepted_auctions:\n # Check if the item meets the criteria for being kept\n if some_condition(auction):\n # If it does, append it to the filtered list\n filtered_auctions.append(auction)\n\n# Replace the original list with the filtered list\naccepted_auctions = filtered_auctions\n\nAnother way to fix this is to use a while loop instead of a for loop. Here's an example:\n# Keep looping until the list is empty\nwhile accepted_auctions:\n # Pop the first item from the list\n auction = accepted_auctions.pop(0)\n\n # Check if the item meets the criteria for being kept\n if some_condition(auction):\n # If it does, append it to the filtered list\n filtered_auctions.append(auction)\n\n# Replace the original list with the filtered list\naccepted_auctions = filtered_auctions\n\nI hope this helps! Let me know if you have any other questions.\n", "Modifying a list as you iterate over it will invalidate the iterator (because the indices of all the items are changing as you remove items), which in turn causes it to skip items. 
Don't do that.\nThe easiest way to create a filtered list is via a list comprehension that creates a new list, e.g.:\naccepted_auctions = [a for a in accepted_auctions if something(a)]\n\nHere's a simple example using a list comprehension to filter a list of ints to only the odd numbers:\n>>> nums = list(range(10))\n>>> nums\n[0, 1, 2, 3, 4, 5, 6, 7, 8, 9]\n>>> nums = [n for n in nums if n % 2]\n>>> nums\n[1, 3, 5, 7, 9]\n\n" ]
[ 3, 2 ]
[]
[]
[ "python", "python_3.x" ]
stackoverflow_0074671420_python_python_3.x.txt
Q: Count of distinct values in pandas column which has list of values I have a dataframe column which contains lists of values. I am interested in getting the count of each distinct value inside the list across the column using python.

A: To visualize it:

import seaborn as sns
sns.countplot(x='ColumnName',data=df)

to see counts normally:

df['ColumnName'].value_counts()
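One caveat on the answer above: value_counts() counts whole cells, so when every cell holds a list the lists have to be flattened first; with a recent pandas (0.25 or later) that would be df['ColumnName'].explode().value_counts(). The flattening step itself can be sketched with the standard library, using an invented column of lists:

```python
from collections import Counter
from itertools import chain

# A stand-in for df['ColumnName'] where each cell is a list of values.
column = [["a", "b"], ["b", "c"], ["a"]]

# Flatten the lists, then count each distinct value across the column.
counts = Counter(chain.from_iterable(column))
print(counts)  # Counter({'a': 2, 'b': 2, 'c': 1})
```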
Count of distinct values in pandas column which has list of values
I have a dataframe column which contains lists of values. I am interested in getting the count of each distinct value inside the list across the column using python.
[ "To visualize it:\nimport seaborn as sns\nsns.countplot(x='ColumnName',data=df)\n\ntoo see counts normally:\ndf['ColumnName'].value_counts()\n\n" ]
[ 1 ]
[]
[]
[ "data_science", "dataframe", "numpy", "pandas", "python" ]
stackoverflow_0074671443_data_science_dataframe_numpy_pandas_python.txt
Q: why my react hook(usestate) not rendering? i'm using django restframework for my server side, i have fetch my datas on ReactJS, set it using "setPosts", consoled my response and i am getting my require response but when i try to render it in my return() block. i am not getting the data. rather i am having a blank page. i am using a windows 11 and python 3.11.0

import React,{Component,useState,useEffect,useRef} from "react";
import user_img from '../1.jpeg';
import music_img from '../2.jpg';
import music from '../Victony.mp3';
import { Button } from "@mui/material";
import { FontAwesomeIcon } from '@fortawesome/react-fontawesome';
import { faPlay,faPause,faForward,faBackward } from '@fortawesome/free-solid-svg-icons';
import WaveSurfer from 'wavesurfer.js';
import AudioPlayer from 'react-modern-audio-player';
import RegionsPlugin from "wavesurfer.js/dist/plugin/wavesurfer.regions.min";
import TimelinePlugin from "wavesurfer.js/dist/plugin/wavesurfer.timeline.min";
import CursorPlugin from "wavesurfer.js/dist/plugin/wavesurfer.cursor.min";
// import MyCustomPlugin from 'my-custom-plugin-path';

// const WaveFormOptions=ref=>({
//   barWidth: 3,
//   cursorWidth: 1,
//   container: ref,
//   // backend: 'WebAudio',
//   height: 80,
//   progressColor: '#2D5BFF',
//   responsive: true,
//   waveColor: '#EFEFEF',
//   cursorColor: 'transparent',
// });

const Home=()=>{
  const audio=document.querySelector('#audio')
  const music_container=document.querySelector('#music_container')
  const [image, setImage]=useState('')
  const [posts, setPosts]=useState([])
  const [icon,setIcon]=useState(faPlay)
  const[playing,setPlaying]=useState(false)
  // const progress=useRef()
  const[progress,SetCurrentProgress]=useState(0)

  function getAllPosts(){
    fetch(`http://127.0.0.1:8000/`)
      .then(response=>response.json())
      .then(data=>{
        console.log(data)
        setPosts(data)
      })
  }
  // console.log(posts.artist_name)
  // LOAD ALL POSTS
  // function soundrimage(){
  //   // setImage(music_img)
  // }

  useEffect(()=>{
    getAllPosts()
    // console.log(this)
  },[])

  // LOAD AND CONTROL MUSIC
  function playSong(){
    music_container.classList.add('play')
    setIcon(faPause)
    setPlaying(true)
    audio.play()
  }

  function pauseSong(){
    music_container.classList.remove('play')
    setIcon(faPlay)
    setPlaying(true)
    audio.pause()
  }

  function playPause(){
    let isplaying=music_container.classList.contains('play')
    if(isplaying){
      pauseSong()
    }else{
      playSong()
    }
  }

  const UpdateProgress=(e)=>{
    // console.log(e.target.duration)
    const {duration,currentTime}=e.target
    const ProgressPercent=(currentTime/duration)*100
    SetCurrentProgress(`${ProgressPercent}`)
  }
  // console.log(progress.current)

  const setProgress=(e)=>{
    const width=e.target.clientWidth
    // console.log(e)
    const clickX=e.nativeEvent.offsetX
    // console.log(clickX)
    const duration=audio.duration
    // console.log(duration)
    audio.currentTime=(clickX/width)*duration
  }

  // MUSIC INFO
  // const music_image= document.querySelector("#music_image")

  return(
    <body>
      <Navbar/>
      <main className="landing">
        {posts.map(post=>{
          <div className="music_container" onClick={playPause} id="music_container">
            <img id="music_image" src={'http://127.0.0.1:8000'+post.image} />
            <button >
              <FontAwesomeIcon icon={icon} id="playBtn" />
            </button>
            <audio src='' id="audio" onTimeUpdate={e=>UpdateProgress(e)}/>
            <p className="caption">{post.title}</p>
            <p>{post.title}</p>
          </div>
          console.log(post.image)
        }
        )}
      </main>
      <Play image={image} playing={playing} playpause={playPause} playicon={icon} progress={progress} setProgress={e=>setProgress(e)} />
    </body>
  )
}
export default Home

A: The issue here is with your map function. Try the following syntax:

{ posts.map((post) => (
  <div className="music_container" onClick={playPause} id="music_container">
    <img id="music_image" src={'http://127.0.0.1:8000'+post.image} />
    <button>
      <FontAwesomeIcon icon={icon} id="playBtn" />
    </button>
    <audio src='' id="audio" onTimeUpdate={e=>UpdateProgress(e)}/>
    <p className="caption">{post.title}</p>
    <p>{post.title}</p>
  </div>
)) }
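The blank page comes from how the arrow function in posts.map is written: a body wrapped in curly braces is a statement block, and without an explicit return it returns undefined for every post, so React has nothing to render. A small plain-JavaScript sketch of the difference, with template strings standing in for JSX (the data here is made up):

```javascript
const posts = [{ title: "a" }, { title: "b" }];

// Curly braces create a statement body; nothing is returned, so every
// element of the mapped array is undefined:
const broken = posts.map(post => { `<div>${post.title}</div>`; });

// Parentheses (or an explicit return) hand the value back to map:
const fixed = posts.map(post => `<div>${post.title}</div>`);

console.log(broken); // [ undefined, undefined ]
console.log(fixed);  // [ '<div>a</div>', '<div>b</div>' ]
```

This is exactly why the answer's version, posts.map((post) => ( ... )), renders: the parentheses make the JSX an expression that is returned implicitly.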
why my react hook(usestate) not rendering?
i'm using django restframework for my server side, i have fetch my datas on ReactJS, set it using "setPosts", consoled my response and i am getting my require response but when i try to render it in my return() block. i am not getting the data. rather i am having a blank page. i am using a windows 11 and python 3.11.0 import React,{Component,useState,useEffect,useRef} from "react"; import user_img from '../1.jpeg'; import music_img from '../2.jpg'; import music from '../Victony.mp3'; import { Button } from "@mui/material"; import { FontAwesomeIcon } from '@fortawesome/react-fontawesome'; import { faPlay,faPause,faForward,faBackward } from '@fortawesome/free-solid-svg-icons'; import WaveSurfer from 'wavesurfer.js'; import AudioPlayer from 'react-modern-audio-player'; import RegionsPlugin from "wavesurfer.js/dist/plugin/wavesurfer.regions.min"; import TimelinePlugin from "wavesurfer.js/dist/plugin/wavesurfer.timeline.min"; import CursorPlugin from "wavesurfer.js/dist/plugin/wavesurfer.cursor.min"; // import MyCustomPlugin from 'my-custom-plugin-path'; // const WaveFormOptions=ref=>({ // barWidth: 3, // cursorWidth: 1, // container: ref, // // backend: 'WebAudio', // height: 80, // progressColor: '#2D5BFF', // responsive: true, // waveColor: '#EFEFEF', // cursorColor: 'transparent', // }); const Home=()=>{ const audio=document.querySelector('#audio') const music_container=document.querySelector('#music_container') const [image, setImage]=useState('') const [posts, setPosts]=useState([]) const [icon,setIcon]=useState(faPlay) const[playing,setPlaying]=useState(false) // const progress=useRef() const[progress,SetCurrentProgress]=useState(0) function getAllPosts(){ fetch(`http://127.0.0.1:8000/`) .then(response=>response.json()) .then(data=>{ console.log(data) setPosts(data) }) } // console.log(posts.artist_name) // LOAD ALL POSTS // function soundrimage(){ // // setImage(music_img) // } useEffect(()=>{ getAllPosts() // console.log(this) },[]) // LOAD AND CONTROL MUSIC 
function playSong(){ music_container.classList.add('play') setIcon(faPause) setPlaying(true) audio.play() } function pauseSong(){ music_container.classList.remove('play') setIcon(faPlay) setPlaying(true) audio.pause() } function playPause(){ let isplaying=music_container.classList.contains('play') if(isplaying){ pauseSong() }else{ playSong() } } const UpdateProgress=(e)=>{ // console.log(e.target.duration) const {duration,currentTime}=e.target const ProgressPercent=(currentTime/duration)*100 SetCurrentProgress(`${ProgressPercent}`) } // console.log(progress.current) const setProgress=(e)=>{ const width=e.target.clientWidth // console.log(e) const clickX=e.nativeEvent.offsetX // console.log(clickX) const duration=audio.duration // console.log(duration) audio.currentTime=(clickX/width)*duration } // MUSIC INFO // const music_image= document.querySelector("#music_image") return( <body> <Navbar/> <main className="landing"> {posts.map(post=>{ <div className="music_container" onClick={playPause} id="music_container"> <img id="music_image" src={'http://127.0.0.1:8000'+post.image} /> <button > <FontAwesomeIcon icon={icon} id="playBtn" /> </button> <audio src='' id="audio" onTimeUpdate={e=>UpdateProgress(e)}/> <p className="caption">{post.title}</p> <p>{post.title}</p> </div> console.log(post.image) } )} </main> <Play image={image} playing={playing} playpause={playPause} playicon={icon} progress={progress} setProgress={e=>setProgress(e)} /> </body> ) } export default Home
[ "The issue here is with your map function. Try the following syntax:\n{ posts.map((post) => (\n <div className=\"music_container\" onClick={playPause} id=\"music_container\">\n <img id=\"music_image\" src={'http://127.0.0.1:8000'+post.image} />\n <button>\n <FontAwesomeIcon icon={icon} id=\"playBtn\" />\n </button>\n <audio src='' id=\"audio\" onTimeUpdate={e=>UpdateProgress(e)}/>\n <p className=\"caption\">{post.title}</p>\n <p>{post.title}</p>\n </div>\n)) }\n\n" ]
[ 0 ]
[]
[]
[ "django_rest_framework", "python", "reactjs" ]
stackoverflow_0074671368_django_rest_framework_python_reactjs.txt
Q: counting occurrences between 1 and 10 of an array I have an initial_array with numbers between 10 and 1, in descending order.

initial_array = np.array ([10,10,7,4,2])

I want an output_array which counts the number of occurrences between 1 and 10.

output_array = [ 0 1 0 1 0 0 1 0 0 2]

A: Here is one possible solution using the np.bincount function:

import numpy as np

# create initial array with numbers between 10 and 1, in descending order
initial_array = np.array([10, 10, 7, 4, 2])

# np.bincount counts occurrences of every value from 0 up to the maximum,
# so slice off index 0 to keep only the counts for the values 1 through 10
output_array = np.bincount(initial_array, minlength=11)[1:]

# print output array
print(output_array)

This produces the following output:

[0 1 0 1 0 0 1 0 0 2]

Note that the minlength parameter of the np.bincount function sets the minimum size of the array that np.bincount returns, which should be the maximum value in the initial array plus one (11 in this case, since the maximum value is 10). Without the [1:] slice the result would also include the count for the value 0 at index 0, giving the 11-element array [0 0 1 0 1 0 0 1 0 0 2] rather than the 10-element output requested.
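If NumPy is not a requirement, the same 10-element result can be produced with the standard library by looking up each value from 1 to 10 in a collections.Counter:

```python
from collections import Counter

initial = [10, 10, 7, 4, 2]
counts = Counter(initial)

# Counter returns 0 for missing keys, so absent values count as zero.
output = [counts[v] for v in range(1, 11)]
print(output)  # [0, 1, 0, 1, 0, 0, 1, 0, 0, 2]
```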
counting occurrences between 1 and 10 of an array
I have an initial_array with numbers between 10 and 1, in descending order. initial_array = np.array ([10,10,7,4,2]) I want an output_array which counts the number of occurrences between 1 and 10. output_array = [ 0 1 0 1 0 0 1 0 0 2]
[ "Here is one possible solution using the np.bincount function:\nimport numpy as np\n\n# create initial array with numbers between 10 and 1, in descending order\ninitial_array = np.array([10, 10, 7, 4, 2])\n\n# create output array that counts the number of occurrences between 1 and 10\noutput_array = np.bincount(initial_array, minlength=11)\n\n# print output array\nprint(output_array)\n\nThis produces the following output:\n[0 1 0 1 0 0 1 0 0 2]\n\nNote that the minlength parameter of the np.bincount function determines the size of the output array, which should be equal to the maximum value in the initial array plus one (11 in this case, since the maximum value is 10). If you don't specify this parameter, the output array will only have as many elements as the maximum value in the initial array.\nFor example, if you remove the minlength parameter from the code above, the output will be:\n[0 1 0 1 0 0 1 0 0 2]\n\n" ]
[ 2 ]
[]
[]
[ "arrays", "python" ]
stackoverflow_0074671493_arrays_python.txt
Q: How do i put numpy.float64 data into dataframe? Im having 5 differents dataframes with values like this:

tanggal komoditas harga
1 Beras Sembako 12000
2 Beras Sembako 12000
... Beras Sembako ...
31 Beras Sembako 11000

(the only difference between each dataframes is on the 'komoditas' columns values is having different names) Im using this loop to get the mean values for each of 5 dataframes that is used

for z in dfs:
    for x in tanggal:
        mean = z.loc[z['tanggal'] == x, 'harga'].mean()
        rata = [mean]
        print(rata)

dfs contains 5 different sets of dataframes that im trying to get the mean value from. tanggal is set of range from (1, 31) after trying to run it. im getting result as numpy.float64 data like this:

[13916.666666666666][13916.666666666666][13895.833333333334] ... [13901.041666666666]

Im trying to convert these values into a dataframe using this

df_rata = pd.DataFrame(rata, columns =['Harga Rata'])

but when i did it only one value showed up like this:

Harga Rata
0 13901.041667

when i tried to define rata length using len(rata) it only showed result as only 1 value is stored inside the variable. Is there something that i did wrong? I'm very new to this and still learning, an explanation would be much appreciated. Thanks!

A: Mark Ransom pointed out the issue in a comment. The fix is to append mean to rata in the loop:

rata = []
for z in dfs:
    for x in tanggal:
        mean = z.loc[z['tanggal'] == x, 'harga'].mean()
        rata.append(mean)

At this point, rata will be a list such that len(rata) == len(tanggal)*len(dfs) and the mean of harga across tanggal in the first, second, etc data frame is in rata[0 : len(tanggal)], rata[len(tanggal) : 2*len(tanggal)], etc respectively. A side comment, if your data is only a single list (like rata), consider using pandas.Series instead.
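The heart of the bug is worth isolating: rata = [mean] rebinds the name to a brand-new one-element list on every pass, discarding the earlier means, while append grows a single list. A minimal sketch with made-up numbers standing in for the computed means:

```python
values = [13916.6, 13895.8, 13901.0]  # stand-ins for the computed means

rata_wrong = []
for m in values:
    rata_wrong = [m]       # rebinds the name; previous values are lost

rata_right = []
for m in values:
    rata_right.append(m)   # grows the same list

print(len(rata_wrong), len(rata_right))  # 1 3
```

This is why len(rata) was 1 in the question and why the DataFrame built from it held only the last mean.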
How do i put numpy.float64 data into dataframe?
Im having 5 differents dataframes with values like this: tanggal komoditas harga 1 Beras Sembako 12000 2 Beras Sembako 12000 ... Beras Sembako ... 31 Beras Sembako 11000 (the only difference between each dataframes is on the 'komoditas' columns values is having different names) Im using this loop to get the mean values for each of 5 dataframes that is used for z in dfs: for x in tanggal: mean = z.loc[z['tanggal'] == x, 'harga'].mean() rata = [mean] print(rata) dfs contains 5 different sets of dataframes that im trying to get the mean value from. tanggal is set of range from (1, 31) after trying to run it. im getting result as numpy.float64 data like this: [13916.666666666666][13916.666666666666][13895.833333333334] ... [13901.041666666666] Im trying to convert these values into a dataframe using this df_rata = pd.DataFrame(rata, columns =['Harga Rata']) but when i did it only one value showed up like this: Harga Rata 0 13901.041667 when i tried to define rata length using len(rata) it only showed result as only 1 value is stored inside the variable. Is there something that i did wrong? I'm very new to this and still learning, an explanation would be much appreciated. Thanks!
[ "Mark Ransom pointed out the issue in a comment. The fix is to append mean to rata in the loop:\nrata = []\nfor z in dfs:\n for x in tanggal:\n mean = z.loc[z['tanggal'] == x, 'harga'].mean()\n rata.append(mean)\n\nAt this point, rata will be a list such that len(rata) == len(tanggal)*len(dfs) and the mean of harga across tanggal in the first, second, etc data frame is in rata[0 : len(tanggal)], rata[len(tanggal) : 2*len(tanggal)], etc respectively.\nA side comment, if your data is only a single list (like rata), consider using pandas.Series instead.\n" ]
[ 0 ]
[]
[]
[ "arrays", "dataframe", "numpy", "pandas", "python" ]
stackoverflow_0074667526_arrays_dataframe_numpy_pandas_python.txt
Q: Python Azure Can't Create Blob Container: This request is not authorized to perform this operation I'm trying to create a blob container within a Azure storage account with Azure's python API.

def create_storage_container(storageAccountName: str, containerName: str):
    print(f"Creating storage container '{containerName}' in storage account '{storageAccountName}'")
    credentials = DefaultAzureCredential()
    url=F"https://{storageAccountName}.blob.core.windows.net"
    blobClient = BlobServiceClient(account_url=url, credential=credentials)
    containerClient = blobClient.get_container_client(containerName)
    containerClient.create_container()

On create_container() I get the error:

Exception has occurred: HttpResponseError
This request is not authorized to perform this operation.
RequestId:8a3f8af1-101e-0075-3351-074949000000
Time:2022-12-03T20:00:25.5236364Z
ErrorCode:AuthorizationFailure
Content: AuthorizationFailureThis request is not authorized to perform this operation. RequestId:8a3f8af1-101e-0075-3351-074949000000 Time:2022-12-03T20:00:25.5236364Z

The storage account was created like so:

# Creates a storage account if it does not already exist.
# Returns the name of the storage account.
def create_storage_account(resourceGroupName: str, location: str, subscriptionId: str, storageAccountName: str):
    credentials = AzureCliCredential()
    storageClient = StorageManagementClient(credentials, subscriptionId, "2018-02-01")
    # Why does this have creation powers for storage accounts instead of the ResourceManagementClient?
    params = {
        "sku": {
            "name": "Standard_LRS",
            "tier": "Standard"
        },
        "kind": "StorageV2",
        "location": location,
        "supportsHttpsTrafficOnly": True
    }
    result = storageClient.storage_accounts.begin_create(resourceGroupName, storageAccountName, params)  # type:ignore
    storageAccount = result.result(120)
    print(f"Done creating storage account with name: {storageAccount.name}")

The storage accounts that are generated like this seem to have completely open network access, so I wouldn't think that would be an issue. How can I fix this error or create a storage container in another way programatically? Thanks

A: Check the RBAC roles your user is assigned to for the storage account. The default ones don’t always enable you to view data and sounds like it’s causing your problems.
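A concrete follow-up to the answer above: creating a container is a data-plane operation, so the identity behind DefaultAzureCredential needs a data-plane RBAC role such as "Storage Blob Data Contributor" on the account, even if it can already manage the account itself. Assuming the Azure CLI is available, the assignment would look roughly like this (every ID below is a placeholder, and role assignments can take a few minutes to propagate):

```shell
# Grant data-plane blob access scoped to one storage account.
az role assignment create \
  --assignee "<user-or-app-object-id>" \
  --role "Storage Blob Data Contributor" \
  --scope "/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.Storage/storageAccounts/<storage-account-name>"
```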
Python Azure Can't Create Blob Container: This request is not authorized to perform this operation
I'm trying to create a blob container within a Azure storage account with Azure's python API. def create_storage_container(storageAccountName: str, containerName: str): print(f"Creating storage container '{containerName}' in storage account '{storageAccountName}'") credentials = DefaultAzureCredential() url=F"https://{storageAccountName}.blob.core.windows.net" blobClient = BlobServiceClient(account_url=url, credential=credentials) containerClient = blobClient.get_container_client(containerName) containerClient.create_container() On create_container() I get the error: Exception has occurred: HttpResponseError This request is not authorized to perform this operation. RequestId:8a3f8af1-101e-0075-3351-074949000000 Time:2022-12-03T20:00:25.5236364Z ErrorCode:AuthorizationFailure Content: AuthorizationFailureThis request is not authorized to perform this operation. RequestId:8a3f8af1-101e-0075-3351-074949000000 Time:2022-12-03T20:00:25.5236364Z The storage account was created like so: # Creates a storage account if it does not already exist. # Returns the name of the storage account. def create_storage_account(resourceGroupName: str, location: str, subscriptionId: str, storageAccountName: str): credentials = AzureCliCredential() storageClient = StorageManagementClient(credentials, subscriptionId, "2018-02-01") # Why does this have creation powers for storage accounts instead of the ResourceManagementClient? params = { "sku": { "name": "Standard_LRS", "tier": "Standard" }, "kind": "StorageV2", "location": location, "supportsHttpsTrafficOnly": True } result = storageClient.storage_accounts.begin_create(resourceGroupName, storageAccountName, params) # type:ignore storageAccount = result.result(120) print(f"Done creating storage account with name: {storageAccount.name}") The storage accounts that are generated like this seem to have completely open network access, so I wouldn't think that would be an issue. 
How can I fix this error or create a storage container in another way programatically? Thanks
[ "Check the RBAC roles your user is assigned to for the storage account. The default ones don’t always enable you to view data and sounds like it’s causing your problems.\n" ]
[ 0 ]
[]
[]
[ "azure", "azure_python_sdk", "python" ]
stackoverflow_0074670530_azure_azure_python_sdk_python.txt
Q: Python code to do breadth-first discovery of a non-binary tree My problem: I have a known root node that I'm starting with and a specific other target node that I'm trying to find the shortest path to. I'm trying to write Python code to implement the Iterative Deepening Breadth-First Search algo, up to some max depth (say, 5 vertices). However, there are two features that (I believe) make this problem unlike virtually all the other SO questions and/or online tutorials I've been able to find so far:

I do not yet know the structure of the tree at all: all I know is that both the root and target nodes exist, as do many other unknown nodes. The root and target nodes could be separated by one vertice, by 5, by 10, etc. Also, the tree is not binary: any node can have none, one, or many sibling nodes.
When I successfully find a path from the root node to the target node, I need to return the shortest path between them. (Most of the solutions I've seen involve returning the entire traversal order required to locate a node, which I don't need.)

How would I go about implementing this? My immediate thought was to try some form of recursion, but that seems much better-suited to Depth-First Search.

TLDR: In the example tree below (apologies for ugly design), I want to traverse it from Root to Target in alphabetical order. (This should result in the algorithm skipping the letters K and L, since it will have found the Target node immediately after J.) I want the function to return: [Root, B, E, H, Target]

A: You're basically looking for the Dijkstra algorithm. Dijkstra's algorithm adapts Breadth First Search to let you find the shortest path to your target.
In order to retrieve the shortest path from the origin to a node, all that needs to be stored is the parent for each node discovered Let's say this is your tree node: class TreeNode: def __init__(self, value, children=None, parent=None): self.value = value self.parent = parent self.children = [] if children is None else children This function returns the path from the tree root node to the target node: from queue import Queue def path_root2target(root_node, target_value): def build_path(target_node): path = [target_node] while path[-1].parent is not None: path.append(path[-1].parent) return path[::-1] q = Queue() q.put(root_node) while not q.empty(): node = q.get() if node.value == target_value: return build_path(node) for child in node.children: child.parent = node q.put(child) raise ValueError('Target node not found') Example: >>> D = TreeNode('D') >>> A = TreeNode('A', [D]) >>> B = TreeNode('B') >>> C = TreeNode('C') >>> R = TreeNode('R', [A, B, C]) >>> path_root2target(R, 'E') ValueError: Target node not found >>> [node.value for node in path_root2target(R, 'D')] ['R', 'A', 'D'] If you want to return the node values (instead of the nodes themselves, then just modify the build_path function accordingly. A: As crazy as this sounds, I also asked ChatGPT to help me with this problem and, after I requested that it tweak its output in a few ways, here's what it came up with (comments included!), with just a couple small edits by me to replicate the tree from my diagram. (I verified it works.) 
# Import the necessary modules import queue # Define a TreeNode class to represent each node in the tree class TreeNode: def __init__(self, value, children=[]): self.value = value self.children = children # Define a function to perform the search def iterative_deepening_bfs(tree, target): # Set the initial depth to 0 depth = 0 # Create an infinite loop while True: # Create a queue to store the nodes at the current depth q = queue.Queue() # Add the root node to the queue q.put(tree) # Create a set to track which nodes have been visited visited = set() # Create a dictionary to store the paths to each visited node paths = {tree: [tree]} # Create a variable to track whether the target has been found found = False # Create a loop to process the nodes at the current depth while not q.empty(): # Get the next node from the queue node = q.get() # If the node has not been visited yet, process it if node not in visited: # Check if the node is the target if node == target: # Set the found variable to a tuple containing the depth and path to the target, and break out of the loop found = (depth, paths[node]) break # Add the node to the visited set visited.add(node) # Add the node's children to the queue for child in node.children: q.put(child) paths[child] = paths[node] + [child] # If the target was found, return the depth and path to the target if found: return found # Increment the depth and continue the loop depth += 1 root = TreeNode("Root") nodeA = TreeNode("A") nodeB = TreeNode("B") nodeC = TreeNode("C") nodeD = TreeNode("D") nodeE = TreeNode("E") nodeF = TreeNode("F") nodeG = TreeNode("G") nodeH = TreeNode("H") nodeI = TreeNode("I") nodeJ = TreeNode("J") nodeK = TreeNode("K") nodeL = TreeNode("L") target = TreeNode("Target") root.children = [nodeA, nodeB, nodeC] nodeA.children = [nodeD] nodeB.children = [nodeE, nodeF] nodeC.children = [nodeG] nodeE.children = [nodeH] nodeF.children = [nodeI] nodeG.children = [nodeJ] nodeH.children = [target] nodeI.children = [nodeK] 
nodeJ.children = [nodeL] # Assign the root node to the tree variable tree = root # Call the iterative_deepening_bfs function to search for the target node result = iterative_deepening_bfs(tree, target) # Print the depth and path to the target node print(f"The target was found at depth {result[0]} with path [{', '.join([str(node.value) for node in result[1]])}]")
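Both answers above boil down to the same idea — plain BFS that tracks the path (or parent) for each discovered node. A condensed sketch of that idea using `collections.deque`, with a hypothetical `Node` class and the tree from the question rebuilt by hand (node names are assumptions, not the asker's actual data):

```python
from collections import deque

class Node:
    def __init__(self, value, children=None):
        self.value = value
        self.children = children or []

def shortest_path(root, target_value, max_depth=5):
    """BFS from root; returns the first (hence shortest) path of values
    reaching target_value, or None if it lies deeper than max_depth."""
    queue = deque([(root, [root.value])])
    while queue:
        node, path = queue.popleft()
        if node.value == target_value:
            return path
        if len(path) <= max_depth:          # limit how deep we expand
            for child in node.children:
                queue.append((child, path + [child.value]))
    return None

# Rebuild the relevant branch of the example tree: Root -> B -> E -> H -> Target
target = Node("Target")
h = Node("H", [target])
e = Node("E", [h])
b = Node("B", [e, Node("F")])
root = Node("Root", [Node("A"), b, Node("C")])

print(shortest_path(root, "Target"))   # ['Root', 'B', 'E', 'H', 'Target']
```

Because BFS visits nodes level by level, the first time the target is dequeued its path is already the shortest, so no separate iterative-deepening outer loop is needed for an unweighted tree.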
Python code to do breadth-first discovery of a non-binary tree
My problem: I have a known root node that I'm starting with and a specific other target node that I'm trying to find the shortest path to. I'm trying to write Python code to implement the Iterative Deepening Breadth-First Search algo, up to some max depth (say, 5 vertices). However, there are two features that (I believe) make this problem unlike virtually all the other SO questions and/or online tutorials I've been able to find so far: I do not yet know the structure of the tree at all: all I know is that both the root and target nodes exist, as do many other unknown nodes. The root and target nodes could be separated by one vertice, by 5, by 10, etc. Also, the tree is not binary: any node can have none, one, or many sibling nodes. When I successfully find a path from the root node to the target node, I need to return the shortest path between them. (Most of the solutions I've seen involve returning the entire traversal order required to locate a node, which I don't need.) How would I go about implementing this? My immediate thought was to try some form of recursion, but that seems much better-suited to Depth-First Search. TLDR: In the example tree below (apologies for ugly design), I want to traverse it from Root to Target in alphabetical order. (This should result in the algorithm skipping the letters K and L, since it will have found the Target node immediately after J.) I want the function to return: [Root, B, E, H, Target]
[ "You're basically looking for the Dijkstra algorithm. Dijkstra's algorithm adapts Breadth First Search to let you find the shortest path to your target. In order to retrieve the shortest path from the origin to a node, all that needs to be stored is the parent for each node discovered\nLet's say this is your tree node:\nclass TreeNode:\n def __init__(self, value, children=None, parent=None):\n self.value = value\n self.parent = parent\n self.children = [] if children is None else children\n\nThis function returns the path from the tree root node to the target node:\nfrom queue import Queue\n\ndef path_root2target(root_node, target_value):\n def build_path(target_node):\n path = [target_node]\n while path[-1].parent is not None:\n path.append(path[-1].parent)\n return path[::-1]\n q = Queue()\n q.put(root_node)\n while not q.empty():\n node = q.get()\n if node.value == target_value:\n return build_path(node)\n for child in node.children:\n child.parent = node\n q.put(child)\n raise ValueError('Target node not found')\n\nExample:\n>>> D = TreeNode('D')\n>>> A = TreeNode('A', [D])\n>>> B = TreeNode('B')\n>>> C = TreeNode('C')\n>>> R = TreeNode('R', [A, B, C])\n>>> path_root2target(R, 'E')\nValueError: Target node not found\n>>> [node.value for node in path_root2target(R, 'D')]\n['R', 'A', 'D']\n\nIf you want to return the node values (instead of the nodes themselves, then just modify the build_path function accordingly.\n", "As crazy as this sounds, I also asked ChatGPT to help me with this problem and, after I requested that it tweak its output in a few ways, here's what it came up with (comments included!), with just a couple small edits by me to replicate the tree from my diagram. 
(I verified it works.)\n# Import the necessary modules\nimport queue\n\n# Define a TreeNode class to represent each node in the tree\nclass TreeNode:\n def __init__(self, value, children=[]):\n self.value = value\n self.children = children\n\n\n# Define a function to perform the search\ndef iterative_deepening_bfs(tree, target):\n # Set the initial depth to 0\n depth = 0\n\n # Create an infinite loop\n while True:\n # Create a queue to store the nodes at the current depth\n q = queue.Queue()\n\n # Add the root node to the queue\n q.put(tree)\n\n # Create a set to track which nodes have been visited\n visited = set()\n\n # Create a dictionary to store the paths to each visited node\n paths = {tree: [tree]}\n\n # Create a variable to track whether the target has been found\n found = False\n\n # Create a loop to process the nodes at the current depth\n while not q.empty():\n # Get the next node from the queue\n node = q.get()\n\n # If the node has not been visited yet, process it\n if node not in visited:\n # Check if the node is the target\n if node == target:\n # Set the found variable to a tuple containing the depth and path to the target, and break out of the loop\n found = (depth, paths[node])\n break\n\n # Add the node to the visited set\n visited.add(node)\n\n # Add the node's children to the queue\n for child in node.children:\n q.put(child)\n paths[child] = paths[node] + [child]\n\n # If the target was found, return the depth and path to the target\n if found:\n return found\n\n # Increment the depth and continue the loop\n depth += 1\n\n\nroot = TreeNode(\"Root\")\nnodeA = TreeNode(\"A\")\nnodeB = TreeNode(\"B\")\nnodeC = TreeNode(\"C\")\nnodeD = TreeNode(\"D\")\nnodeE = TreeNode(\"E\")\nnodeF = TreeNode(\"F\")\nnodeG = TreeNode(\"G\")\nnodeH = TreeNode(\"H\")\nnodeI = TreeNode(\"I\")\nnodeJ = TreeNode(\"J\")\nnodeK = TreeNode(\"K\")\nnodeL = TreeNode(\"L\")\ntarget = TreeNode(\"Target\")\n\nroot.children = [nodeA, nodeB, nodeC]\nnodeA.children = 
[nodeD]\nnodeB.children = [nodeE, nodeF]\nnodeC.children = [nodeG]\nnodeE.children = [nodeH]\nnodeF.children = [nodeI]\nnodeG.children = [nodeJ]\nnodeH.children = [target]\nnodeI.children = [nodeK]\nnodeJ.children = [nodeL]\n\n# Assign the root node to the tree variable\ntree = root\n\n\n\n# Call the iterative_deepening_bfs function to search for the target node\nresult = iterative_deepening_bfs(tree, target)\n\n# Print the depth and path to the target node\nprint(f\"The target was found at depth {result[0]} with path [{', '.join([str(node.value) for node in result[1]])}]\")\n\n" ]
[ 1, 0 ]
[]
[]
[ "breadth_first_search", "graph_theory", "python", "tree" ]
stackoverflow_0074669889_breadth_first_search_graph_theory_python_tree.txt
Q: How to avoid Segmentation fault in pycocotools during decoding of RLE Here is a sample of decoding corrupted RLE: from pycocotools import mask # pycocotools version is 2.0.2 mask.decode({'size': [1024, 1024], 'counts': "OeSOk0[l0VOaSOn0kh0cNmYO'"}) As result it fails with Segmentation fault (core dumped) It looks like this: Python 3.6.15 (default) [GCC 9.4.0] on linux Type "help", "copyright", "credits" or "license" for more information. >>> >>> from pycocotools import mask >>> mask.decode({'size': [1024, 1024], 'counts': "OeSOk0[l0VOaSOn0kh0cNmYO'"}) Segmentation fault (core dumped) Questions: Is the way to validate RLE(Run-length encoding) before putting it in into mask.decode? (I think it's not possible, but still) Is the way to handle signal.SIGSEGV and continue executing of code? A: This issue is solved by updating pycocotools to version 2.0.5
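For the second question — handling SIGSEGV — a Python `signal` handler cannot catch a crash inside a C extension, but isolating the risky call in a child process does work: the segfault kills only the child, and the parent inspects the exit code. A sketch of that pattern (POSIX `fork` start method assumed; the pycocotools call is replaced by a simulated crash so the example is self-contained):

```python
import multiprocessing as mp
import os
import queue
import signal

def run_guarded(func, *args, timeout=30):
    """Run func(*args) in a forked child process; a SIGSEGV there kills
    only the child, so the caller gets (False, None) instead of dying."""
    ctx = mp.get_context("fork")                  # POSIX only
    out = ctx.Queue()
    proc = ctx.Process(target=lambda: out.put(func(*args)))
    proc.start()
    proc.join(timeout)
    if proc.is_alive():                           # hung: kill and give up
        proc.terminate()
        proc.join()
        return False, None
    if proc.exitcode == 0:
        try:
            return True, out.get(timeout=1)
        except queue.Empty:
            return False, None
    return False, None                            # crashed (e.g. -SIGSEGV)

def crash():                                      # stand-in for a corrupted decode
    os.kill(os.getpid(), signal.SIGSEGV)

print(run_guarded(crash))                         # (False, None)
print(run_guarded(lambda: 2 + 2))                 # (True, 4)
```

In the real use case one would pass `mask.decode` and the RLE dict to `run_guarded` instead of `crash`; upgrading pycocotools, as the answer notes, removes the root cause, while this wrapper is a general defence against any crashing C call.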
How to avoid Segmentation fault in pycocotools during decoding of RLE
Here is a sample of decoding corrupted RLE: from pycocotools import mask # pycocotools version is 2.0.2 mask.decode({'size': [1024, 1024], 'counts': "OeSOk0[l0VOaSOn0kh0cNmYO'"}) As result it fails with Segmentation fault (core dumped) It looks like this: Python 3.6.15 (default) [GCC 9.4.0] on linux Type "help", "copyright", "credits" or "license" for more information. >>> >>> from pycocotools import mask >>> mask.decode({'size': [1024, 1024], 'counts': "OeSOk0[l0VOaSOn0kh0cNmYO'"}) Segmentation fault (core dumped) Questions: Is the way to validate RLE(Run-length encoding) before putting it in into mask.decode? (I think it's not possible, but still) Is the way to handle signal.SIGSEGV and continue executing of code?
[ "This issue is solved by updating pycocotools to version 2.0.5\n" ]
[ 0 ]
[]
[]
[ "pycocotools", "python", "rle" ]
stackoverflow_0073491138_pycocotools_python_rle.txt
Q: Why is my else clause not working in Django? I'm trying to create a search error for my ecommerce website. When a user inputs a search that is not in the database, it should return the search error page. Though it seems my else clause isn't working. I tried putting the else clause in the search.html page, but it keeps giving me errors and it seems when I try to fix the errors, nothing really happens, it stays the same. I expect the search_error.html page to appear when the user inputs a product name that is not in the database. Though I keep getting for example, when I type "hello," the page appears with "Search results for hello." But it should result the search_error.html page. I also tried currently a else clause in my views.py, but it shows the same thing. I think my else clause isn't working and I don't know why. My views.py: def search(request): if 'searched' in request.GET: searched = request.GET['searched'] products = Product.objects.filter(title__icontains=searched) return render(request, 'epharmacyweb/search.html', {'searched': searched, 'products': products}) else: return render(request, 'epharmacyweb/search_error.html') def search_error(request): return render(request, 'epharmacyweb/search_error.html') My urls.py under URLPatterns: path('search/', views.search, name='search'), path('search_error/', views.search_error, name='search_error'), My search.html page: {% if searched %} <div class="pb-3 h3">Search Results for {{ searched }}</div> <div class="row row-cols-1 row-cols-sm-2 row-cols-md-5 g-3"> {% for product in products %} <div class="col"> <div class="card shadow-sm"> <img class="img-fluid" alt="Responsive image" src="{{ product.image.url }}"> <div class="card-body"> <p class="card-text"> <a class="text-dark text-decoration-none" href="{{ product.get_absolute_url }}">{{ product.title }}</a> </p> <div class="d-flex justify-content-between align-items-center"> <small class="text-muted"></small> </div> </div> </div> </div> {% endfor %} </div> 
<br></br> {% else %} <h1>You haven't searched anything yet...</h1> {% endif %} A: I think you want to check if products = Product.objects.filter(title__icontains=searched) is returning results instead of checking if "searched" is in the GET arguments. To check if the database returned results you can you exists() https://docs.djangoproject.com/en/4.1/ref/models/querysets/#django.db.models.query.QuerySet.exists A: Your if is returning true because the term 'searched' is in fact in the request.GET dictionary. That doesn't mean that there is a product in your database with the value request.GET['searched'], which might be "hello" when you type in "hello". request.GET is a dictionary with a key of 'searched' and a value of "hello". You can also use get(), which will either get the value of request['searched'] or return None, so you do not have to check it with an if at all. Now to check if the database has the value of the search term, you can check the queryset: def search(request): # Return the value or None searched = request.GET.get('searched') products = Product.objects.filter(title__icontains=searched) # Check if there are any products with the search term: if products: return render(request, 'epharmacyweb/search.html', {'searched': searched, 'products': products}) else: return render(request, 'epharmacyweb/search_error.html')
Why is my else clause not working in Django?
I'm trying to create a search error for my ecommerce website. When a user inputs a search that is not in the database, it should return the search error page. Though it seems my else clause isn't working. I tried putting the else clause in the search.html page, but it keeps giving me errors and it seems when I try to fix the errors, nothing really happens, it stays the same. I expect the search_error.html page to appear when the user inputs a product name that is not in the database. Though I keep getting for example, when I type "hello," the page appears with "Search results for hello." But it should result the search_error.html page. I also tried currently a else clause in my views.py, but it shows the same thing. I think my else clause isn't working and I don't know why. My views.py: def search(request): if 'searched' in request.GET: searched = request.GET['searched'] products = Product.objects.filter(title__icontains=searched) return render(request, 'epharmacyweb/search.html', {'searched': searched, 'products': products}) else: return render(request, 'epharmacyweb/search_error.html') def search_error(request): return render(request, 'epharmacyweb/search_error.html') My urls.py under URLPatterns: path('search/', views.search, name='search'), path('search_error/', views.search_error, name='search_error'), My search.html page: {% if searched %} <div class="pb-3 h3">Search Results for {{ searched }}</div> <div class="row row-cols-1 row-cols-sm-2 row-cols-md-5 g-3"> {% for product in products %} <div class="col"> <div class="card shadow-sm"> <img class="img-fluid" alt="Responsive image" src="{{ product.image.url }}"> <div class="card-body"> <p class="card-text"> <a class="text-dark text-decoration-none" href="{{ product.get_absolute_url }}">{{ product.title }}</a> </p> <div class="d-flex justify-content-between align-items-center"> <small class="text-muted"></small> </div> </div> </div> </div> {% endfor %} </div> <br></br> {% else %} <h1>You haven't searched 
anything yet...</h1> {% endif %}
[ "I think you want to check if products = Product.objects.filter(title__icontains=searched) is returning results instead of checking if \"searched\" is in the GET arguments.\nTo check if the database returned results you can you exists()\nhttps://docs.djangoproject.com/en/4.1/ref/models/querysets/#django.db.models.query.QuerySet.exists\n", "Your if is returning true because the term 'searched' is in fact in the request.GET dictionary. That doesn't mean that there is a product in your database with the value request.GET['searched'], which might be \"hello\" when you type in \"hello\". request.GET is a dictionary with a key of 'searched' and a value of \"hello\". You can also use get(), which will either get the value of request['searched'] or return None, so you do not have to check it with an if at all.\nNow to check if the database has the value of the search term, you can check the queryset:\ndef search(request):\n \n # Return the value or None\n searched = request.GET.get('searched')\n\n products = Product.objects.filter(title__icontains=searched)\n\n # Check if there are any products with the search term:\n if products:\n return render(request, 'epharmacyweb/search.html', {'searched': searched, 'products': products})\n else:\n return render(request, 'epharmacyweb/search_error.html')\n\n" ]
[ 0, 0 ]
[]
[]
[ "django", "python" ]
stackoverflow_0074671375_django_python.txt
Q: Apache Beam Python DoFn process method and keyword arguments I am using Apache Beam SDK 2.43.0 with Python 3.8 and I am seeing some behaviour in the example below that I do not understand. If I run the snippet as given, I get the error: ... File "apache_beam\runners\common.py", line 983, in apache_beam.runners.common.PerWindowInvoker._invoke_process_per_window TypeError: process() got multiple values for argument 'some_side_input' [while running 'use side input'] If I use beam.ParDo(UseSideInput(), beam.pvalue.AsSingleton(pcollect_bar)) instead of beam.ParDo(UseSideInput(), some_side_input=beam.pvalue.AsSingleton(pcollect_bar)), i.e. use a positional argument for the some_side_input parameter instead of a keyword argument, the pipeline runs as expected. Similarly, if I use the UseSideInputNoTimestamp DoFn, which does not have the timestamp parameter in the process method, the pipeline runs and allows some_side_input to be supplied as a keyword argument. Another option is to specify the process method as process(self, element, timestamp=beam.DoFn.TimestampParam, some_side_input=None) and the snippet runs as is allowing a keyword argument. But, I am then providing a default value just to allow for the keyword argument. Just wondering, is this error expected and if so what is the reason for it? 
import apache_beam as beam import time class UseSideInput(beam.DoFn): def process(self, element, some_side_input, timestamp=beam.DoFn.TimestampParam): yield f"{element}~{some_side_input}~{timestamp.to_rfc3339()}" class UseSideInputNoTimestamp(beam.DoFn): def process(self, element, some_side_input): yield f"{element}~{some_side_input}" with beam.Pipeline() as p: pcollect_bar = p | "create bar" >> beam.Create(["bar"]) ( p | "create foo" >> beam.Create(["foo"]) | "add timestamp" >> beam.Map(lambda element: beam.window.TimestampedValue(element, int(time.time()))) # | "use side input" >> beam.ParDo(UseSideInput(), beam.pvalue.AsSingleton(pcollect_bar)) <-- works # | "use side input no ts" >> beam.ParDo(UseSideInputNoTimestamp(), # some_side_input=beam.pvalue.AsSingleton(pcollect_bar)) <-- works | "use side input" >> beam.ParDo(UseSideInput(), some_side_input=beam.pvalue.AsSingleton(pcollect_bar)) | "print" >> beam.Map(print) ) A: From what I understood, the behaviour is if you have only the side input parameter, you can pass it with a keyword argument or positional argument, but if you have multiple parameters, you have to use positional arguments : Only side input argument : def test_side_input(self): import apache_beam as beam import time class UseSideInput(beam.DoFn): def process(self, element, some_side_input, *args, **kwargs): yield f"{element}~{some_side_input}" with beam.Pipeline() as p: pcollect_bar = p | "create bar" >> beam.Create(["bar"]) ( p | "create foo" >> beam.Create(["foo"]) | "add timestamp" >> beam.Map( lambda element: beam.window.TimestampedValue(element, int(time.time()))) | "use side input one arg positional" >> beam.ParDo(UseSideInput(), beam.pvalue.AsSingleton(pcollect_bar)) # | "use side input one arg param" >> beam.ParDo(UseSideInput(), # some_side_input=beam.pvalue.AsSingleton(pcollect_bar)) | "print" >> beam.Map(print) ) Multiple arguments : def test_side_input(self): import apache_beam as beam import time class UseSideInput(beam.DoFn): def 
process(self, element, some_side_input, other, timestamp=beam.DoFn.TimestampParam, *args, **kwargs): yield f"{element}~{some_side_input}~{timestamp.to_rfc3339()}" with beam.Pipeline() as p: pcollect_bar = p | "create bar" >> beam.Create(["bar"]) ( p | "create foo" >> beam.Create(["foo"]) | "add timestamp" >> beam.Map( lambda element: beam.window.TimestampedValue(element, int(time.time()))) | "use side input" >> beam.ParDo(UseSideInput(), beam.pvalue.AsSingleton(pcollect_bar), 0) | "print" >> beam.Map(print) ) In the second example, I have 2 arguments : some_side_input and other According to the documentation : 4.5.3. Accessing additional parameters in your DoFn, the timestamp allows to access to the timestamp of element and needs to have a default value with beam.DoFn.TimestampParam : import apache_beam as beam class ProcessRecord(beam.DoFn): def process(self, element, timestamp=beam.DoFn.TimestampParam): # access timestamp of element. pass This timestamp argument is not considered like other usual arguments. You also have an alternative for side inputs and use methods as functions in Map or FlatMap (built in Beam DoFn) and in this case the keyword arguments works everytime : def test_other(self): import apache_beam as beam import time def to_element(element, some_side_input, other): return f"{element}~{some_side_input}~{other}" with beam.Pipeline() as p: pcollect_bar = p | "create bar" >> beam.Create(["bar"]) ( p | "create foo" >> beam.Create(["foo"]) | "add timestamp" >> beam.Map( lambda element: beam.window.TimestampedValue(element, int(time.time()))) | "use side input" >> beam.Map(to_element, some_side_input=beam.pvalue.AsSingleton(pcollect_bar), other=0) | "print" >> beam.Map(print) )
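The "multiple values" error is ordinary Python calling semantics rather than anything Beam-specific: the runner fills the declared extra parameters of `process` positionally, so a side input supplied as a keyword collides with a slot that is already filled. A minimal stand-in with a plain function (this mirrors the mechanism only; Beam's actual invoker internals are more involved):

```python
def process(element, some_side_input, timestamp="TimestampParam"):
    return element, some_side_input, timestamp

# Positional call — how a positional side input effectively arrives:
print(process("foo", "bar"))        # ('foo', 'bar', 'TimestampParam')

# If the caller also fills the second slot positionally (as the runner
# does when managing DoFn params), a keyword side input duplicates it:
try:
    process("foo", "runner-value", some_side_input="bar")
except TypeError as exc:
    print(exc)   # ... got multiple values for argument 'some_side_input'
```

This is also why reordering the signature to `process(self, element, timestamp=..., some_side_input=None)` works: the keyword no longer targets a slot the runner fills positionally.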
Apache Beam Python DoFn process method and keyword arguments
I am using Apache Beam SDK 2.43.0 with Python 3.8 and I am seeing some behaviour in the example below that I do not understand. If I run the snippet as given, I get the error: ... File "apache_beam\runners\common.py", line 983, in apache_beam.runners.common.PerWindowInvoker._invoke_process_per_window TypeError: process() got multiple values for argument 'some_side_input' [while running 'use side input'] If I use beam.ParDo(UseSideInput(), beam.pvalue.AsSingleton(pcollect_bar)) instead of beam.ParDo(UseSideInput(), some_side_input=beam.pvalue.AsSingleton(pcollect_bar)), i.e. use a positional argument for the some_side_input parameter instead of a keyword argument, the pipeline runs as expected. Similarly, if I use the UseSideInputNoTimestamp DoFn, which does not have the timestamp parameter in the process method, the pipeline runs and allows some_side_input to be supplied as a keyword argument. Another option is to specify the process method as process(self, element, timestamp=beam.DoFn.TimestampParam, some_side_input=None) and the snippet runs as is allowing a keyword argument. But, I am then providing a default value just to allow for the keyword argument. Just wondering, is this error expected and if so what is the reason for it? 
import apache_beam as beam import time class UseSideInput(beam.DoFn): def process(self, element, some_side_input, timestamp=beam.DoFn.TimestampParam): yield f"{element}~{some_side_input}~{timestamp.to_rfc3339()}" class UseSideInputNoTimestamp(beam.DoFn): def process(self, element, some_side_input): yield f"{element}~{some_side_input}" with beam.Pipeline() as p: pcollect_bar = p | "create bar" >> beam.Create(["bar"]) ( p | "create foo" >> beam.Create(["foo"]) | "add timestamp" >> beam.Map(lambda element: beam.window.TimestampedValue(element, int(time.time()))) # | "use side input" >> beam.ParDo(UseSideInput(), beam.pvalue.AsSingleton(pcollect_bar)) <-- works # | "use side input no ts" >> beam.ParDo(UseSideInputNoTimestamp(), # some_side_input=beam.pvalue.AsSingleton(pcollect_bar)) <-- works | "use side input" >> beam.ParDo(UseSideInput(), some_side_input=beam.pvalue.AsSingleton(pcollect_bar)) | "print" >> beam.Map(print) )
[ "From what I understood, the behaviour is if you have only the side input parameter, you can pass it with a keyword argument or positional argument, but if you have multiple parameters, you have to use positional arguments :\n\nOnly side input argument :\n\ndef test_side_input(self):\n import apache_beam as beam\n import time\n \n\n class UseSideInput(beam.DoFn):\n def process(self, element, some_side_input, *args, **kwargs):\n yield f\"{element}~{some_side_input}\"\n\n with beam.Pipeline() as p:\n pcollect_bar = p | \"create bar\" >> beam.Create([\"bar\"])\n\n (\n p\n | \"create foo\" >> beam.Create([\"foo\"])\n | \"add timestamp\" >> beam.Map(\n lambda element: beam.window.TimestampedValue(element, int(time.time())))\n | \"use side input one arg positional\" >> beam.ParDo(UseSideInput(), beam.pvalue.AsSingleton(pcollect_bar))\n # | \"use side input one arg param\" >> beam.ParDo(UseSideInput(),\n # some_side_input=beam.pvalue.AsSingleton(pcollect_bar))\n | \"print\" >> beam.Map(print)\n )\n\n\nMultiple arguments :\n\ndef test_side_input(self):\n import apache_beam as beam\n import time\n\n class UseSideInput(beam.DoFn):\n def process(self, element, some_side_input, other, timestamp=beam.DoFn.TimestampParam, *args, **kwargs):\n yield f\"{element}~{some_side_input}~{timestamp.to_rfc3339()}\"\n\n with beam.Pipeline() as p:\n pcollect_bar = p | \"create bar\" >> beam.Create([\"bar\"])\n\n (\n p\n | \"create foo\" >> beam.Create([\"foo\"])\n | \"add timestamp\" >> beam.Map(\n lambda element: beam.window.TimestampedValue(element, int(time.time())))\n | \"use side input\" >> beam.ParDo(UseSideInput(),\n beam.pvalue.AsSingleton(pcollect_bar),\n 0)\n | \"print\" >> beam.Map(print)\n )\n\nIn the second example, I have 2 arguments : some_side_input and other\nAccording to the documentation : 4.5.3. 
Accessing additional parameters in your DoFn, the timestamp allows to access to the timestamp of element and needs to have a default value with beam.DoFn.TimestampParam :\nimport apache_beam as beam\n\nclass ProcessRecord(beam.DoFn):\n\n def process(self, element, timestamp=beam.DoFn.TimestampParam):\n # access timestamp of element.\n pass\n\nThis timestamp argument is not considered like other usual arguments.\nYou also have an alternative for side inputs and use methods as functions in Map or FlatMap (built in Beam DoFn) and in this case the keyword arguments works everytime :\ndef test_other(self):\n import apache_beam as beam\n import time\n\n def to_element(element, some_side_input, other):\n return f\"{element}~{some_side_input}~{other}\"\n\n with beam.Pipeline() as p:\n pcollect_bar = p | \"create bar\" >> beam.Create([\"bar\"])\n\n (\n p\n | \"create foo\" >> beam.Create([\"foo\"])\n | \"add timestamp\" >> beam.Map(\n lambda element: beam.window.TimestampedValue(element, int(time.time())))\n | \"use side input\" >> beam.Map(to_element,\n some_side_input=beam.pvalue.AsSingleton(pcollect_bar),\n other=0)\n | \"print\" >> beam.Map(print)\n )\n\n" ]
[ 1 ]
[]
[]
[ "apache_beam", "parameter_passing", "python" ]
stackoverflow_0074670833_apache_beam_parameter_passing_python.txt
Q: doubling object in Python how do I double the object side by side? print(" *\n * *\n * *\n * *\n*** ***\n * *\n * *\n *****" *2) this code puts the objects one below another, how do i do that it prints it besides? A: You could try something like that: obj = " *\n * *\n * *\n * *\n*** ***\n * *\n * *\n *****" lines = obj.split('\n') space = len(max(lines, key=len)) + 3 for line in lines: print(line + " "* (space - len(line)) + line) A: Use a multiline string for your image, iterate through the lines and print each one doubled: # using dots to indicate spaces clearly s = '''\ ....*..... ...*.*.... ..*...*... .*.....*.. ***...***. ..*...*... ..*...*... ..*****... '''.replace('.',' ') # double each line for line in s.splitlines(): print(line * 2) Output: * * * * * * * * * * * * * * *** *** *** *** * * * * * * * * ***** ***** A: This works: arrow = " *\n * *\n * *\n * *\n*** ***\n * *\n * *\n *****" arrow_lines = arrow.split("\n") double_arrow_lines = [line +" "*(15-len(line)) +line for line in arrow_lines] double_arrow = "\n".join(double_arrow_lines) print(double_arrow)
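All three answers share one recipe — split the drawing into lines, pad each line to a fixed width, then print each line next to a copy of itself. A compact sketch of that recipe using `str.ljust` (the `gap` and `copies` parameters are illustrative generalisations, not part of the original question):

```python
art = " *\n * *\n * *\n * *\n*** ***\n * *\n * *\n *****"

def side_by_side(text, gap=3, copies=2):
    """Repeat an ASCII drawing horizontally instead of vertically."""
    lines = text.split("\n")
    width = max(len(line) for line in lines) + gap
    return "\n".join((line.ljust(width) * copies).rstrip()
                     for line in lines)

print(side_by_side(art))
```

Multiplying the whole string by 2, as in the question, repeats it *after* the final newline, which is why the copies stack vertically; padding and concatenating per line is what places them beside each other.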
doubling object in Python
how do I double the object side by side? print(" *\n * *\n * *\n * *\n*** ***\n * *\n * *\n *****" *2) this code puts the objects one below another, how do i do that it prints it besides?
[ "You could try something like that:\nobj = \" *\\n * *\\n * *\\n * *\\n*** ***\\n * *\\n * *\\n *****\"\n\nlines = obj.split('\\n')\nspace = len(max(lines, key=len)) + 3\n\nfor line in lines:\n print(line + \" \"* (space - len(line)) + line)\n\n", "Use a multiline string for your image, iterate through the lines and print each one doubled:\n# using dots to indicate spaces clearly\ns = '''\\\n....*.....\n...*.*....\n..*...*...\n.*.....*..\n***...***.\n..*...*...\n..*...*...\n..*****...\n'''.replace('.',' ')\n\n# double each line\nfor line in s.splitlines():\n print(line * 2)\n\nOutput:\n * * \n * * * * \n * * * * \n * * * * \n*** *** *** *** \n * * * * \n * * * * \n ***** ***** \n\n", "This works:\narrow = \" *\\n * *\\n * *\\n * *\\n*** ***\\n * *\\n * *\\n *****\"\n \narrow_lines = arrow.split(\"\\n\")\ndouble_arrow_lines = [line +\" \"*(15-len(line)) +line for line in arrow_lines]\ndouble_arrow = \"\\n\".join(double_arrow_lines)\n \nprint(double_arrow)\n\n" ]
[ 2, 1, 0 ]
[ "To print the object side by side, you can use the join() method. The join() method takes a list of strings and concatenates them together, using the string on which the method is called as a separator.\nFor example, you can use the following code to print the object side by side:\nobj = \" *\\n * \\n * \\n * \\n ***\\n * *\\n * *\\n *****\"\n\nprint(\"\\n\".join([obj, obj]))\n\nThis will print the object twice, with a newline between each instance.\nAlternatively, you could use string multiplication to repeat the string, and then use the print() function to print the result on a single line, like this:\nobj = \" *\\n * \\n * \\n * \\n ***\\n * *\\n * *\\n *****\"\n\nprint(obj * 2, sep=\"\")\n\nIn this case, we use the sep parameter of the print() function to specify that no separator should be used between the items being printed. This will print the objects side by side on a single line.\n" ]
[ -4 ]
[ "python" ]
stackoverflow_0074671391_python.txt
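The answers above all share the same idea: split the art into lines, pad each line to a fixed width, and print each line twice. A generalized sketch of that approach (the function name, the copies/gap parameters, and the sample arrow are illustrative choices, not taken from the answers):

```python
# Repeat a multi-line ASCII-art string horizontally instead of vertically.
def side_by_side(art, copies=2, gap=3):
    lines = art.split("\n")
    width = max(len(line) for line in lines)  # pad to the widest line
    return "\n".join(
        (" " * gap).join(line.ljust(width) for _ in range(copies))
        for line in lines
    )

arrow = "   *\n  * *\n *   *\n*     *\n*** ***\n  * *\n  * *\n  ***"
print(side_by_side(arrow))
```

Because every line is padded to the same width before being repeated, the copies stay aligned no matter how ragged the original lines are.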
Q: Python can't access global variable modified in a loop in another loop because of threading I'm currently working on a small project with the OpenWeatherMap API and used threading for the first time. I don't know if this is the best "solution" to my code, but the rest of my project is working fine and I'm having only this one last issue. It seems like it's coming from the way the threading is handled, that the 2nd thread is calling a variable that didn't get "processed" in the 1st thread so far. import requests import time import json from threading import Thread def weather(): global desc while True: # [code for the API is working fine] with open('weather.json') as f: data = json.load(f) desc = data["weather"][0]["description"] temp = data["main"]["temp"] print(desc, temp) time.sleep(120) # this would print something like "extra sunny 293" (temp in °K) every 2 mins def other(): while True: print(desc) time.sleep(10) # this causes the error "NameError: name 'desc' is not defined" thread1 = Thread(target=weather) thread2 = Thread(target=other) thread1.start() thread2.start() thread1.join() thread2.join() My current "fix" is to simply add a small delay to the 2nd loop with: def other(): while True: time.sleep(1) print(desc) time.sleep(9) # no error anymore But this solution feels rather janky and I can imagine that it will get only worse with more complicated code. Is there a more optimal way to fix this? A: It sounds like the variable desc is not defined in the other function. You can fix this by making desc a global variable that can be accessed by both functions. 
You can do this by adding the global keyword before the variable name in both functions, like this:

def weather():
    global desc
    # [code for the API is working fine]
    with open('weather.json') as f:
        data = json.load(f)
    desc = data["weather"][0]["description"]
    temp = data["main"]["temp"]
    print(desc, temp)
    time.sleep(120) # this would print something like "extra sunny 293" (temp in °K) every 2 mins

def other():
    global desc
    while True:
        print(desc)

Alternatively, you can pass the value of desc as an argument to the other function when you create the thread2 object, like this:

thread2 = Thread(target=other, args=(desc,))

Then, other receives desc as a regular function parameter (note that it gets the value desc had when the thread was created, not later updates), like this:

def other(desc):
    while True:
        print(desc)
        time.sleep(10) # this should no longer cause the "NameError: name 'desc' is not defined" error
Python can't access global variable modified in a loop in another loop because of threading
I'm currently working on a small project with the OpenWeatherMap API and used threading for the first time. I don't know if this is the best "solution" to my code, but the rest of my project is working fine and I'm having only this one last issue. It seems like it's coming from the way the threading is handled, that the 2nd thread is calling a variable that didn't get "processed" in the 1st thread so far. import requests import time import json from threading import Thread def weather(): global desc while True: # [code for the API is working fine] with open('weather.json') as f: data = json.load(f) desc = data["weather"][0]["description"] temp = data["main"]["temp"] print(desc, temp) time.sleep(120) # this would print something like "extra sunny 293" (temp in °K) every 2 mins def other(): while True: print(desc) time.sleep(10) # this causes the error "NameError: name 'desc' is not defined" thread1 = Thread(target=weather) thread2 = Thread(target=other) thread1.start() thread2.start() thread1.join() thread2.join() My current "fix" is to simply add a small delay to the 2nd loop with: def other(): while True: time.sleep(1) print(desc) time.sleep(9) # no error anymore But this solution feels rather janky and I can imagine that it will get only worse with more complicated code. Is there a more optimal way to fix this?
[ "It sounds like the variable desc is not defined in the other function. You can fix this by making desc a global variable that can be accessed by both functions. You can do this by adding the global keyword before the variable name in both functions, like this:\ndef weather():\n global desc\n # [code for the API is working fine]\n with open('weather.json') as f:\n data = json.load(f)\n desc = data[\"weather\"][0][\"description\"]\n temp = data[\"main\"][\"temp\"]\n print(desc, temp)\n time.sleep(120) # this would print something like \"extra sunny 293\" (temp in °K) every 2 mins \n\ndef other():\n global desc\n while True:\n print(desc)\n\n\nAlternatively, you can pass the value of desc as an argument to the other function when you create the thread2 object, like this:\nthread2 = Thread(target=other, args=(desc,))\n\nThen, you can access the desc argument in the other function using the index of the args tuple, like this:\ndef other(desc):\n while True:\n print(desc)\n time.sleep(10) # this should no longer cause the \"NameError: name 'desc' is not defined\" error\n\n\n" ]
[ 0 ]
[]
[]
[ "multithreading", "python", "python_multithreading" ]
stackoverflow_0074671542_multithreading_python_python_multithreading.txt
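A less fragile alternative to the fixed time.sleep(1) workaround in this question is to have the second thread block until the first value of desc actually exists. A minimal sketch using threading.Event (the fake update list and helper names are illustrative; in the real program weather() would keep its API polling loop):

```python
import threading
import time

desc = None
first_value_ready = threading.Event()  # set once `desc` holds its first value

def weather(updates):
    global desc
    for value in updates:        # stands in for the periodic API poll
        desc = value
        first_value_ready.set()  # unblock readers waiting for the first value
        time.sleep(0.01)

def other(results):
    first_value_ready.wait()     # block instead of sleeping a guessed amount
    results.append(desc)

results = []
t1 = threading.Thread(target=weather, args=(["extra sunny", "cloudy"],))
t2 = threading.Thread(target=other, args=(results,))
t2.start()   # starting the reader first is now safe
t1.start()
t1.join()
t2.join()
print(results)
```

For anything beyond a single shared string, a queue.Queue between the threads avoids globals entirely.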
Q: Django AWS S3 media files Working with Django, I am trying to use AWS S3 storage only for uploading and reading files which is working well at MEDIA_URL but the problem when using AWS S3 is that somehow I am losing reference to STATIC_URL where CSS and javascript files are I only want MEDIA_URL pointing to S3 and keep my STATIC_URL away from AWS S3... Is that possible? # Static asset configuration BASE_DIR = os.path.dirname(os.path.abspath(__file__)) STATIC_ROOT = 'staticfiles' STATIC_URL = '/static/' STATICFILES_DIRS = ( os.path.join(BASE_DIR, 'static'), ) if DEBUG: MEDIA_ROOT = os.environ['MEDIA_ROOT'] MEDIA_URL = os.environ['MEDIA_URL'] else: DEFAULT_FILE_STORAGE = 'storages.backends.s3boto.S3BotoStorage' AWS_ACCESS_KEY_ID = os.environ.get('AWS_ACCESS_KEY_ID') AWS_SECRET_ACCESS_KEY = os.environ.get('AWS_SECRET_ACCESS_KEY') AWS_STORAGE_BUCKET_NAME = os.environ.get('AWS_STORAGE_BUCKET_NAME') MEDIA_URL = 'http://%s.s3.amazonaws.com/' % AWS_STORAGE_BUCKET_NAME A: Change STATIC_URL to "STATIC_ROOT = os.path.join(BASE_DIR,'static')". Could use decouple to hide those variables too.
Django AWS S3 media files
Working with Django, I am trying to use AWS S3 storage only for uploading and reading files which is working well at MEDIA_URL but the problem when using AWS S3 is that somehow I am losing reference to STATIC_URL where CSS and javascript files are I only want MEDIA_URL pointing to S3 and keep my STATIC_URL away from AWS S3... Is that possible? # Static asset configuration BASE_DIR = os.path.dirname(os.path.abspath(__file__)) STATIC_ROOT = 'staticfiles' STATIC_URL = '/static/' STATICFILES_DIRS = ( os.path.join(BASE_DIR, 'static'), ) if DEBUG: MEDIA_ROOT = os.environ['MEDIA_ROOT'] MEDIA_URL = os.environ['MEDIA_URL'] else: DEFAULT_FILE_STORAGE = 'storages.backends.s3boto.S3BotoStorage' AWS_ACCESS_KEY_ID = os.environ.get('AWS_ACCESS_KEY_ID') AWS_SECRET_ACCESS_KEY = os.environ.get('AWS_SECRET_ACCESS_KEY') AWS_STORAGE_BUCKET_NAME = os.environ.get('AWS_STORAGE_BUCKET_NAME') MEDIA_URL = 'http://%s.s3.amazonaws.com/' % AWS_STORAGE_BUCKET_NAME
[ "Change STATIC_URL to \"STATIC_ROOT = os.path.join(BASE_DIR,'static')\". Could use decouple to hide those variables too.\n" ]
[ 0 ]
[]
[]
[ "amazon_s3", "amazon_web_services", "django", "python" ]
stackoverflow_0024734733_amazon_s3_amazon_web_services_django_python.txt
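Reading the answer together with the question's settings, the key point is that only the default file storage needs to point at S3; the static settings can keep their local values. A hedged settings.py sketch (the django-storages backend path and the environment-variable names are assumptions carried over from the question):

```python
# Media uploads go to S3 via django-storages...
DEFAULT_FILE_STORAGE = 'storages.backends.s3boto.S3BotoStorage'
AWS_STORAGE_BUCKET_NAME = os.environ.get('AWS_STORAGE_BUCKET_NAME')
MEDIA_URL = 'https://%s.s3.amazonaws.com/' % AWS_STORAGE_BUCKET_NAME

# ...while static files (CSS/JS) stay local and are collected on disk.
STATIC_URL = '/static/'
STATIC_ROOT = os.path.join(BASE_DIR, 'static')
```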
Q: Handle Google authentication in FastAPI I am trying to implement a Google authentication on a FastAPI application. I have a local register and login system with JWT that works perfectly, but the 'get_current_user' method depends on the oauth scheme for the local authentication: async def get_current_user(token: str = Depends(oauth2_scheme)): credentials_exception = HTTPException( status_code=status.HTTP_401_UNAUTHORIZED, detail="Could not validate credentials", headers={"WWW-Authenticate": "Bearer"}, ) try: payload = jwt.decode(token, settings.JWT_SECRET_KEY, algorithms=[ALGORITHM]) email: EmailStr = payload.get("sub") if email is None: raise credentials_exception token_data = TokenData(email=email) except JWTError: raise credentials_exception user = await User.find_one(User.email == EmailStr(token_data.email)) if user is None: raise credentials_exception return user oauth2_scheme using fastapi.security: oauth2_scheme = OAuth2PasswordBearer(tokenUrl="/jwt/login") Now, the problem is that I don't know how to handle when a user is authenticated via Google, because I've defined a different Oauth client for Google: google_oauth = OAuth(starlette_config) google_oauth.register( name='google', server_metadata_url='https://accounts.google.com/.well-known/openid-configuration', client_kwargs={'scope': 'openid email profile'} ) And my protected routes depend on the 'get_current_user' method, which is linked to the local oauth2_scheme. How should I go about allowing users who have logged in via Google to access my protected endpoints? 
A: from fastapi import Request, HTTPException
from fastapi.security import HTTPBearer, HTTPAuthorizationCredentials


class JWTBearer(HTTPBearer):
    def __init__(self, auto_error: bool = True):
        super(JWTBearer, self).__init__(auto_error=auto_error)

    async def __call__(self, request: Request):
        credentials: HTTPAuthorizationCredentials = await super(JWTBearer, self).__call__(request)
        if credentials:
            if not credentials.scheme == "Bearer":
                raise HTTPException(status_code=403, detail="Invalid authentication scheme.")
            #if not self.verify_jwt(credentials.credentials):
            #    raise HTTPException(status_code=403, detail="Invalid token or expired token.")
            return credentials.credentials
        else:
            raise HTTPException(status_code=403, detail="Invalid authorization code.")

Now change your scheme to oauth2_scheme = JWTBearer(). This will give a new type of authorization that accepts only a JWT token instead of the normal username and password. Process flow on the Swagger docs: 1) go to any of the endpoints that give you a token, 2) copy the token and click on the padlock icon, 3) paste the token and you are logged in.
Handle Google authentication in FastAPI
I am trying to implement a Google authentication on a FastAPI application. I have a local register and login system with JWT that works perfectly, but the 'get_current_user' method depends on the oauth scheme for the local authentication: async def get_current_user(token: str = Depends(oauth2_scheme)): credentials_exception = HTTPException( status_code=status.HTTP_401_UNAUTHORIZED, detail="Could not validate credentials", headers={"WWW-Authenticate": "Bearer"}, ) try: payload = jwt.decode(token, settings.JWT_SECRET_KEY, algorithms=[ALGORITHM]) email: EmailStr = payload.get("sub") if email is None: raise credentials_exception token_data = TokenData(email=email) except JWTError: raise credentials_exception user = await User.find_one(User.email == EmailStr(token_data.email)) if user is None: raise credentials_exception return user oauth2_scheme using fastapi.security: oauth2_scheme = OAuth2PasswordBearer(tokenUrl="/jwt/login") Now, the problem is that I don't know how to handle when a user is authenticated via Google, because I've defined a different Oauth client for Google: google_oauth = OAuth(starlette_config) google_oauth.register( name='google', server_metadata_url='https://accounts.google.com/.well-known/openid-configuration', client_kwargs={'scope': 'openid email profile'} ) And my protected routes depend on the 'get_current_user' method, which is linked to the local oauth2_scheme. How should I go about allowing users who have logged in via Google to access my protected endpoints?
[ "from fastapi import Request, HTTPException\nfrom fastapi.security import HTTPBearer, HTTPAuthorizationCredentials\n\n\n\n\nclass JWTBearer(HTTPBearer):\n def __init__(self, auto_error: bool = True):\n super(JWTBearer, self).__init__(auto_error=auto_error)\n\n async def __call__(self, request: Request):\n credentials: HTTPAuthorizationCredentials = await super(JWTBearer, self).__call__(request)\n if credentials:\n if not credentials.scheme == \"Bearer\":\n raise HTTPException(status_code=403, detail=\"Invalid authentication scheme.\")\n #if not self.verify_jwt(credentials.credentials):\n # raise HTTPException(status_code=403, detail=\"Invalid token or expired token.\")\n return credentials.credentials\n else:\n raise HTTPException(status_code=403, detail=\"Invalid authorization code.\")```\n\nnow change your schema to \n\noauth2_schema = JWTBearer().\n\nThis will give a new type of authorization that only accept jwt token instead of the normal password and username.\n\nProcess flow on swagger docs:\n\n1) go to any of the endpoints that give you token\n\n2) copy the token and click on the padlock icon, \n\n3) paste the token then you are login.\n\n" ]
[ 0 ]
[]
[]
[ "authentication", "fastapi", "oauth_2.0", "python" ]
stackoverflow_0073945475_authentication_fastapi_oauth_2.0_python.txt
Q: Grade computing program I am learning python. The question is "Write a grade program using a function called computegrade that takes a score as its parameter and returns a grade as a string." # Score Grade #>= 0.9 A #>= 0.8 B #>= 0.7 C #>= 0.6 D # < 0.6 F How do I get the grades when I run this program? As I am not assigning the grades to any variable. Hence, unable to get the output. def computegrade(): if score >=0.9: print('Grade A') elif score >=0.8 and score<0.9: print('Grade B') elif score >=0.7 and score<0.8: print('Grade C') elif score >=0.6 and score<0.7: print('Grade D') else: print('Grade F') score = input('Enter the score: ') try: score = float(score) except: print('Enter numbers only') No Error messages, but I unable to see the grades when entering a value A: You're not seeing the grades because you're not telling python to run computegrade. If you do try: score = float(score) computegrade() It'll be done with. Some observations about the computegrade method. I advise you to make it accept score as an argument def computegrade(score): # grade calculations... Although it works without this - as long as there is a score variable in the same scope, Python takes it - it feels counterintuitive to call a function that requires as score, not passing a score to it. Also, currently your program accepts grades bigger than 1.0 and smaller than 0.0, which is something you may want to raise an AssertionError in the future. I don't know if that is in the scope of your learning program, but having an def computegrade(): if score > 1.0 or score < 0.0: raise AssertionError('Scores must be within the 1.0 and 0.0 range!') Is a good practice. 
A: def compute_grade(marks):
    try:
        if float(marks) > 1.0 or float(marks) < 0.0:
            print("Invalid entries")
        else:
            if float(marks) >= 0.9:
                print("Grade A")
            elif float(marks) >= 0.8:
                print("Grade B")
            elif float(marks) >= 0.7:
                print("Grade C")
            elif float(marks) >= 0.6:
                print("Grade D")
            elif float(marks) < 0.6:
                print("Grade F")
    except:
        print("Please enter numeric value")

compute_grade(input("Please enter your marks\n"))

A: sco = float(input('Enter your score: '))

def compute_grade(score):
    if score > 1.0:
        s = 'Out of Range!'
        return s
    elif score >= 0.9:
        s = "A"
        return s
    elif score >= 0.8:
        s = 'B'
        return s
    elif score >= 0.7:
        s = 'C'
        return s
    elif score >= 0.6:
        s = 'D'
        return s
    elif score >= 0.5:
        s = 'E'
        return s
    else:
        s = 'Bad score'
        return s

sc = compute_grade(sco)
print(sc)
Grade computing program
I am learning python. The question is "Write a grade program using a function called computegrade that takes a score as its parameter and returns a grade as a string." # Score Grade #>= 0.9 A #>= 0.8 B #>= 0.7 C #>= 0.6 D # < 0.6 F How do I get the grades when I run this program? As I am not assigning the grades to any variable. Hence, unable to get the output. def computegrade(): if score >=0.9: print('Grade A') elif score >=0.8 and score<0.9: print('Grade B') elif score >=0.7 and score<0.8: print('Grade C') elif score >=0.6 and score<0.7: print('Grade D') else: print('Grade F') score = input('Enter the score: ') try: score = float(score) except: print('Enter numbers only') No Error messages, but I unable to see the grades when entering a value
[ "You're not seeing the grades because you're not telling python to run computegrade. If you do\ntry:\n score = float(score)\n computegrade()\n\nIt'll be done with.\nSome observations about the computegrade method. I advise you to make it accept score as an argument\ndef computegrade(score):\n # grade calculations...\n\nAlthough it works without this - as long as there is a score variable in the same scope, Python takes it - it feels counterintuitive to call a function that requires as score, not passing a score to it.\nAlso, currently your program accepts grades bigger than 1.0 and smaller than 0.0, which is something you may want to raise an AssertionError in the future. I don't know if that is in the scope of your learning program, but having an\ndef computegrade():\n if score > 1.0 or score < 0.0:\n raise AssertionError('Scores must be within the 1.0 and 0.0 range!')\n\nIs a good practice.\n", "def compute_grade(marks):\n try:\n if float(marks)>1.0 or float(marks)<0.0:\n print(\"Invalid enteries\")\n else:\n if float(marks) >= 0.9:\n print(\"Grade A\")\n elif float(marks) >= 0.8:\n print(\"Grade B\")\n elif float(marks) >= 0.7:\n print(\"Grade C\")\n elif float(marks) >= 0.6:\n print(\"Grade D\")\n elif float(marks) < 0.6:\n print(\"Grade F\")\n except:\n print(\"Please enter numeric value\")\ncompute_grade(input(\"Please enter your marks\\n\"))\n\n", "sco = float(input('Enter your score: '))\ndef compute_grade(score):\nif score > 1.0:\n s = 'Out of Range!'\n return s\nelif score >= 0.9:\n s = \"A\"\n return s\nelif score >= 0.8:\n s = 'B'\n return s\nelif score >= 0.7:\n s = 'C'\n return s\nelif score >= 0.6:\n s = 'D'\n return s\nelif score >= 0.5:\n s = 'E'\n return s\nelse:\n s = 'Bad score'\n return s\n\nsc = compute_grade(sco)\nprint(sc)\n" ]
[ 0, 0, 0 ]
[ "You aren’t calling the function; you have told Python what the function is, but not called it. \nWhat you need to do is \nscore = float(score) \ngrade = computegrade()\nprint(‘Score :’, score,’ Grade :’, grade)\n\nIt is better practice to define your function so that it takes a parameter ;\ndef computegrade( score):\n\nInstead of your current ‘def’ line, and then when you call the function:\ngrade = computegrade( score) \n\nIt is far better practice to write functions with parameters rather than rely on external variables. \n", "You forgot to call the function.\nThe following is only a definition of the wanted function.\ndef computegrade():\n if score >=0.9:\n print('Grade A')\n elif score >=0.8 and score<0.9:\n print('Grade B')\n elif score >=0.7 and score<0.8:\n print('Grade C')\n elif score >=0.6 and score<0.7:\n print('Grade D')\n else:\n print('Grade F')\n\nYou need to call the function for it to be \"activated\".\nYou do so by writing:\ncomputegrade()\n\nSo i would guess that the resulting code should look like this:\nscore = input('Enter the score: ')\ntry:\n computegrade()\nexcept:\n print('Enter numbers only')\n\n(no need to convert to float, the command input() does it for you...)\n" ]
[ -1, -1 ]
[ "python" ]
stackoverflow_0057454202_python.txt
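Combining the advice in these answers (take score as a parameter, return the grade as a string, and reject out-of-range input), one possible sketch of computegrade follows; using bisect for the threshold lookup is my choice here, not something from the answers:

```python
import bisect

def computegrade(score):
    """Return the letter grade for a score between 0.0 and 1.0."""
    if not 0.0 <= score <= 1.0:
        raise ValueError('score must be between 0.0 and 1.0')
    thresholds = [0.6, 0.7, 0.8, 0.9]   # boundaries from the grade table
    grades = ['F', 'D', 'C', 'B', 'A']  # one more grade than boundaries
    return grades[bisect.bisect_right(thresholds, score)]

print(computegrade(0.85))  # B
```

Returning the string (rather than printing inside the function) is what lets the caller write grade = computegrade(score) and display it however it likes.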
Q: pulp constraint: exactly N in a category, up to X of a second choice, same category I have a problem I'm trying to solve for where I want N players from one team, and up to X players from a second team, but I don't particularly care which team fills those constraints. For example, if N=5 and X=2, I could have 5 from one team and up to 2 from a second, different, team. How would I write such a constraint? example dataframe: team pos name ceil salary 0 NYY OF Aaron Judge 21.6631 6500 1 HOU OF Yordan Alvarez 21.6404 6100 2 ATL OF Ronald Acuna Jr. 21.5363 5400 3 HOU OF Kyle Tucker 20.0992 4700 4 TOR 1B Vladimir Guerrero Jr. 20.0722 6000 5 LAD SS Trea Turner 20.0256 5700 6 LAD OF Mookie Betts 19.5231 6300 7 SEA OF Julio Rodriguez 19.3694 5200 8 MIN OF Byron Buxton 19.3412 5600 9 LAD 1B Freddie Freeman 19.3393 5600 10 TOR OF George Springer 19.1429 5100 11 NYM OF Starling Marte 19.0791 5200 12 ATL 1B Matt Olson 19.009 4800 13 ATL 3B Austin Riley 18.9091 5200 14 SF OF Austin Slater 18.9052 3700 15 NYM 1B Pete Alonso 18.8921 5700 16 TEX OF Adolis Garcia 18.7115 4200 17 TEX SS Corey Seager 18.6957 5100 18 TOR OF Teoscar Hernandez 18.6834 5200 19 CWS 1B Jose Abreu 18.497 4600 20 ATL SS Dansby Swanson 18.4679 4900 21 TEX 2B/SS Marcus Semien 18.4389 4100 22 NYY 1B Anthony Rizzo 18.4383 5300 23 NYY 2B Gleyber Torres 18.39 4500 24 CHC C Willson Contreras 18.3452 5800 existing code snippet: #problem definition prob = LpProblem(name="DFS", sense=LpMaximize) prob += lpSum(player_vars[i] * slate['ceil'].iloc[i] for i in player_ids), "FPTS" #salary and total player constraints prob += lpSum(player_vars[i] * slate['salary'].iloc[i] for i in player_ids) <= 50000, "Salary" prob += lpSum(player_vars[i] for i in player_ids) == 10, "Total Players" #position constraints prob += lpSum(player_vars[i] for i in player_ids if slate['pos'].iloc[i] == 'P') == 2, "Pitcher" prob += lpSum([player_vars[i] for i in player_ids if slate['name'].iloc[i] in 
slate['name'][slate['pos'].str.contains('C')].to_list()]) == 1, "Catcher" prob += lpSum([player_vars[i] for i in player_ids if slate['name'].iloc[i] in slate['name'][slate['pos'].str.contains('1B')].to_list()]) == 1, "1B" prob += lpSum([player_vars[i] for i in player_ids if slate['name'].iloc[i] in slate['name'][slate['pos'].str.contains('2B')].to_list()]) == 1, "2B" prob += lpSum([player_vars[i] for i in player_ids if slate['name'].iloc[i] in slate['name'][slate['pos'].str.contains('3B')].to_list()]) == 1, "3B" prob += lpSum([player_vars[i] for i in player_ids if slate['name'].iloc[i] in slate['name'][slate['pos'].str.contains('SS')].to_list()]) == 1, "SS" prob += lpSum([player_vars[i] for i in player_ids if slate['name'].iloc[i] in slate['name'][slate['pos'].str.contains('OF')].to_list()]) == 3, "OF" #no opposing pitcher constraint for pid in player_ids: if slate['pos'].iloc[pid] == 'P': prob += lpSum([player_vars[i] for i in player_ids if slate['team'].iloc[i] == slate['opp'].iloc[pid]] + [9 * player_vars[pid]]) <= 9, "P{pid}".format(pid=pid) #three team max constraint unique_teams = slate['team'].unique() player_in_team = slate['team'].str.get_dummies() team_vars = LpVariable.dicts('team', unique_teams, cat = 'Binary') for team in unique_teams: prob += lpSum([player_in_team[team][i] * player_vars[i] for i in player_ids if slate['pos'].iloc[i] != 'P']) >= team_vars[team], "Team{team}Min".format(team=team) prob += lpSum(team_vars[team] for team in unique_teams) == 3, "3 Teams" A: Here's how I would attack this... In pseudocode.... make a set of teams that you can use to index a couple of new variables. Make subsets of your players grouped by team, or use pandas data frame filters to limit summations of players to the team of interest. Make 2 new variables, that are binary "indicator" variables, one call it use5from[team] and one called use[team] to indicate that the team has been used at all. Make appropriate constraints to link those to the selection variables. 
Something like:

for team in teams:
    prob += 5 * use5from[team] <= pulp.lpSum(x[i] for i in players[team])

And for the other, a constraint to indicate any use...

for team in teams:
    prob += use[team] <= pulp.lpSum(x[i] for i in players[team])

And then make constraints that those two variable sums come to at least 1 and at most 3, respectively.
pulp constraint: exactly N in a category, up to X of a second choice, same category
I have a problem I'm trying to solve for where I want N players from one team, and up to X players from a second team, but I don't particularly care which team fills those constraints. For example, if N=5 and X=2, I could have 5 from one team and up to 2 from a second, different, team. How would I write such a constraint? example dataframe: team pos name ceil salary 0 NYY OF Aaron Judge 21.6631 6500 1 HOU OF Yordan Alvarez 21.6404 6100 2 ATL OF Ronald Acuna Jr. 21.5363 5400 3 HOU OF Kyle Tucker 20.0992 4700 4 TOR 1B Vladimir Guerrero Jr. 20.0722 6000 5 LAD SS Trea Turner 20.0256 5700 6 LAD OF Mookie Betts 19.5231 6300 7 SEA OF Julio Rodriguez 19.3694 5200 8 MIN OF Byron Buxton 19.3412 5600 9 LAD 1B Freddie Freeman 19.3393 5600 10 TOR OF George Springer 19.1429 5100 11 NYM OF Starling Marte 19.0791 5200 12 ATL 1B Matt Olson 19.009 4800 13 ATL 3B Austin Riley 18.9091 5200 14 SF OF Austin Slater 18.9052 3700 15 NYM 1B Pete Alonso 18.8921 5700 16 TEX OF Adolis Garcia 18.7115 4200 17 TEX SS Corey Seager 18.6957 5100 18 TOR OF Teoscar Hernandez 18.6834 5200 19 CWS 1B Jose Abreu 18.497 4600 20 ATL SS Dansby Swanson 18.4679 4900 21 TEX 2B/SS Marcus Semien 18.4389 4100 22 NYY 1B Anthony Rizzo 18.4383 5300 23 NYY 2B Gleyber Torres 18.39 4500 24 CHC C Willson Contreras 18.3452 5800 existing code snippet: #problem definition prob = LpProblem(name="DFS", sense=LpMaximize) prob += lpSum(player_vars[i] * slate['ceil'].iloc[i] for i in player_ids), "FPTS" #salary and total player constraints prob += lpSum(player_vars[i] * slate['salary'].iloc[i] for i in player_ids) <= 50000, "Salary" prob += lpSum(player_vars[i] for i in player_ids) == 10, "Total Players" #position constraints prob += lpSum(player_vars[i] for i in player_ids if slate['pos'].iloc[i] == 'P') == 2, "Pitcher" prob += lpSum([player_vars[i] for i in player_ids if slate['name'].iloc[i] in slate['name'][slate['pos'].str.contains('C')].to_list()]) == 1, "Catcher" prob += lpSum([player_vars[i] for i in player_ids if 
slate['name'].iloc[i] in slate['name'][slate['pos'].str.contains('1B')].to_list()]) == 1, "1B" prob += lpSum([player_vars[i] for i in player_ids if slate['name'].iloc[i] in slate['name'][slate['pos'].str.contains('2B')].to_list()]) == 1, "2B" prob += lpSum([player_vars[i] for i in player_ids if slate['name'].iloc[i] in slate['name'][slate['pos'].str.contains('3B')].to_list()]) == 1, "3B" prob += lpSum([player_vars[i] for i in player_ids if slate['name'].iloc[i] in slate['name'][slate['pos'].str.contains('SS')].to_list()]) == 1, "SS" prob += lpSum([player_vars[i] for i in player_ids if slate['name'].iloc[i] in slate['name'][slate['pos'].str.contains('OF')].to_list()]) == 3, "OF" #no opposing pitcher constraint for pid in player_ids: if slate['pos'].iloc[pid] == 'P': prob += lpSum([player_vars[i] for i in player_ids if slate['team'].iloc[i] == slate['opp'].iloc[pid]] + [9 * player_vars[pid]]) <= 9, "P{pid}".format(pid=pid) #three team max constraint unique_teams = slate['team'].unique() player_in_team = slate['team'].str.get_dummies() team_vars = LpVariable.dicts('team', unique_teams, cat = 'Binary') for team in unique_teams: prob += lpSum([player_in_team[team][i] * player_vars[i] for i in player_ids if slate['pos'].iloc[i] != 'P']) >= team_vars[team], "Team{team}Min".format(team=team) prob += lpSum(team_vars[team] for team in unique_teams) == 3, "3 Teams"
[ "Here's how I would attack this... In pseudocode....\n\nmake a set of teams that you can use to index a couple of new variables.\n\nMake subsets of your players grouped by team, or use pandas data frame filters to limit summations of players to the team of interest.\n\nMake 2 new variables, that are binary \"indicator\" variables, one call it use5from[team] and one called use[team] to indicate that the team has been used at all.\n\nMake appropriate constraints to link those to the selection variables. Something like:\n\n\n\nfor team in teams:\n 5 * use5from[team] <= pulp.lpSum(x[i] for i in team[i])\n\n\nAnd for the other, a constraint to indicate any use...\n\n\nfor team in teams:\n use[team] <= pulp.lpSum(x[i] for i in team[I]) \n\n\nAnd then make constraints that those two variables sum to over 1 and 3 respectively.\n\n" ]
[ 1 ]
[]
[]
[ "constraints", "pulp", "python" ]
stackoverflow_0074670229_constraints_pulp_python.txt
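Independent of how the indicator variables are wired up in pulp, it helps to pin down exactly which selections the "N from one team, up to X from a second" rule accepts. Here is a plain-Python checker for that rule (the function name and the n/x defaults are mine; this is for validating lineups, not part of the MIP itself):

```python
from collections import Counter

def satisfies_stack_rule(teams_of_selected, n=5, x=2):
    """True if one team supplies exactly n players and at most one other
    team supplies up to x players, with no third team appearing."""
    counts = Counter(teams_of_selected)
    if not counts or len(counts) > 2:
        return False
    sizes = sorted(counts.values(), reverse=True)
    if sizes[0] != n:
        return False
    return len(sizes) == 1 or sizes[1] <= x

print(satisfies_stack_rule(["NYY"] * 5 + ["HOU"] * 2))  # True
```

Running a check like this over the solver's output is a cheap way to confirm that the lpSum constraints encode the rule you intended.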
Q: Error in converting torch tensor to numpy.ndarray var1 = tensor([[[[0., 1., 1., ..., 1., 0., 0.], [0., 0., 1., ..., 0., 0., 0.], [0., 0., 0., ..., 0., 0., 1.], ..., [0., 0., 0., ..., 1., 1., 1.], [0., 0., 0., ..., 1., 1., 1.], [0., 0., 0., ..., 1., 1., 1.]]]]) print(var1.size()) print(type(var1)) print(var1.dtype) Output: torch.Size([1, 1, 480, 640]) <class 'torch.Tensor'> torch.float32 When I tried to convert torch tensor into numpy.ndarray, all values became zero. nump_var1 = var1.argmax(dim=1).squeeze(0).cpu().numpy() print(nump_var1) print(nump_var1.shape) print(type(nump_var1)) print(nump_var1.dtype) Output: [[0 0 0 ... 0 0 0] [0 0 0 ... 0 0 0] [0 0 0 ... 0 0 0] ... [0 0 0 ... 0 0 0] [0 0 0 ... 0 0 0] [0 0 0 ... 0 0 0]] (480, 640) <class 'numpy.ndarray'> int64 Can anyone point out the mistake I have made? Thanks for the help. A: As hpaulj hinted at in a comment, var1.argmax(dim=1) will result in a zero tensor because you have var1.size(1) == 1. A: The error seems to occur when using argmax() on the tensor. The argmax() function returns the index of the maximum value of a tensor along a given dimension. In your code, you are using argmax() with the dim argument set to 1, which will return the indices of the maximum value along the second dimension of the tensor. However, since your tensor has only one element in the second dimension, the argmax() function will always return 0 as the index of the maximum value, which means that all the values in your tensor will be set to 0 when you convert it to a NumPy array. To avoid this issue, you can try using the numpy() function instead of the argmax() function to convert your tensor to a NumPy array. This will simply convert the tensor to a NumPy array without changing the values. Here is how you can do it: nump_var1 = var1.squeeze(0).cpu().numpy() You can also remove the squeeze() function, as it is not necessary in this case. 
The squeeze() function is used to remove dimensions of size 1 from a tensor. Note that in your case it does have an effect: without any squeeze() call the converted array keeps the shape (1, 1, 480, 640), so to get the (480, 640) array shown in your output, keep a squeeze() call (calling squeeze() with no arguments drops both singleton dimensions). Here is the updated code:

var1 = tensor([[[[0., 1., 1., ..., 1., 0., 0.],
                 [0., 0., 1., ..., 0., 0., 0.],
                 [0., 0., 0., ..., 0., 0., 1.],
                 ...,
                 [0., 0., 0., ..., 1., 1., 1.],
                 [0., 0., 0., ..., 1., 1., 1.],
                 [0., 0., 0., ..., 1., 1., 1.]]]])

print(var1.size())
print(type(var1))
print(var1.dtype)

# Use the numpy() function to convert the tensor to a NumPy array
nump_var1 = var1.squeeze().cpu().numpy()

print(nump_var1)
print(nump_var1.shape)
print(type(nump_var1))
print(nump_var1.dtype)

This should fix the issue and give you the expected output. Let me know if this helps.
Error in converting torch tensor to numpy.ndarray
var1 = tensor([[[[0., 1., 1., ..., 1., 0., 0.], [0., 0., 1., ..., 0., 0., 0.], [0., 0., 0., ..., 0., 0., 1.], ..., [0., 0., 0., ..., 1., 1., 1.], [0., 0., 0., ..., 1., 1., 1.], [0., 0., 0., ..., 1., 1., 1.]]]]) print(var1.size()) print(type(var1)) print(var1.dtype) Output: torch.Size([1, 1, 480, 640]) <class 'torch.Tensor'> torch.float32 When I tried to convert torch tensor into numpy.ndarray, all values became zero. nump_var1 = var1.argmax(dim=1).squeeze(0).cpu().numpy() print(nump_var1) print(nump_var1.shape) print(type(nump_var1)) print(nump_var1.dtype) Output: [[0 0 0 ... 0 0 0] [0 0 0 ... 0 0 0] [0 0 0 ... 0 0 0] ... [0 0 0 ... 0 0 0] [0 0 0 ... 0 0 0] [0 0 0 ... 0 0 0]] (480, 640) <class 'numpy.ndarray'> int64 Can anyone point out the mistake I have made? Thanks for the help.
[ "As hpaulj hinted at in a comment, var1.argmax(dim=1) will result in a zero tensor because you have var1.size(1) == 1.\n", "The error seems to occur when using argmax() on the tensor. The argmax() function returns the index of the maximum value of a tensor along a given dimension. In your code, you are using argmax() with the dim argument set to 1, which will return the indices of the maximum value along the second dimension of the tensor.\nHowever, since your tensor has only one element in the second dimension, the argmax() function will always return 0 as the index of the maximum value, which means that all the values in your tensor will be set to 0 when you convert it to a NumPy array.\nTo avoid this issue, you can try using the numpy() function instead of the argmax() function to convert your tensor to a NumPy array. This will simply convert the tensor to a NumPy array without changing the values.\nHere is how you can do it:\nnump_var1 = var1.squeeze(0).cpu().numpy()\n\nYou can also remove the squeeze() function, as it is not necessary in this case. The squeeze() function is used to remove dimensions of size 1 from a tensor, but since your tensor has only one element in the first and second dimensions, the squeeze() function will not have any effect.\nHere is the updated code:\nvar1 = tensor([[[[0., 1., 1., ..., 1., 0., 0.],\n [0., 0., 1., ..., 0., 0., 0.],\n [0., 0., 0., ..., 0., 0., 1.],\n ...,\n [0., 0., 0., ..., 1., 1., 1.],\n [0., 0., 0., ..., 1., 1., 1.],\n [0., 0., 0., ..., 1., 1., 1.]]]])\n\nprint(var1.size())\nprint(type(var1))\nprint(var1.dtype)\n\n# Use the numpy() function to convert the tensor to a NumPy array\nnump_var1 = var1.cpu().numpy()\n\nprint(nump_var1)\nprint(nump_var1.shape)\nprint(type(nump_var1))\nprint(nump_var1.dtype)\n\nThis should fix the issue and give you the expected output. Let me know if this helps.\n" ]
[ 0, 0 ]
[]
[]
[ "machine_learning", "numpy", "python", "pytorch", "tensor" ]
stackoverflow_0074669588_machine_learning_numpy_python_pytorch_tensor.txt
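Both answers in the record above rest on one fact: argmax over an axis of length 1 can only ever return index 0. A small NumPy sketch makes this concrete — NumPy is used here purely for illustration, on the assumption that its `argmax`/`squeeze` semantics mirror `torch.Tensor.argmax`/`squeeze` (they do for this case); the (1, 1, 2, 3) shape is a tiny stand-in for the (1, 1, 480, 640) tensor in the question.

```python
import numpy as np

# Shape (1, 1, 2, 3): the second axis has length 1, mirroring the
# (1, 1, 480, 640) tensor in the question.
var1 = np.array([[[[0.0, 1.0, 1.0],
                   [0.5, 0.0, 0.9]]]])

# argmax along axis 1 picks the index of the maximum over a length-1
# axis, so every entry is necessarily 0 -- the values themselves are
# discarded, which is why the converted array came out all zeros.
zeros = var1.argmax(axis=1)
print(zeros.shape)      # (1, 2, 3)

# Dropping the singleton axes instead keeps the original values.
values = var1.squeeze()
print(values.shape)     # (2, 3)
```

The same reasoning applies unchanged to the tensor in the question: `var1.squeeze().cpu().numpy()` preserves the values, while `argmax(dim=1)` on a size-1 channel axis zeroes everything.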
Q: How do I run python cgi script on apache2 server on Ubuntu 16.04? I am a newbie so I saw some tutorials. I have a python script as first.py #!/usr/bin/python3 print "Content-type: text/html\n" print "Hello, world!" I have multiple versions of python on my computer. I couldn't figure out my cgi enabled directory so I pasted this code at three places /usr/lib/cgi-bin/first.py /usr/lib/cups/cgi-bin/first.py /var/www/html/first.py Now when I run this code in terminal it works fine but when I type curl http://localhost/first.py it spit out just simple text and does not execute. I have given all the permissions to first.py I have enabled and started the server by commands a2enmod cgi systemctl restart apache2 Please tell how do I execute and what is going on here? Thanks in advance. A: To python cgi script on apache2 server on Ubuntu(Tested on Ubuntu 20.04) from scratch, Follow these steps(Expects python is installed and works perfectly). Install apache2 sever. sudo apt install apache2 Enable CGI module. sudo a2enmod cgi Here we set /var/www/cgi-bin/ as cgi-bin directory. If you want a different directory, change files appropriately. Open Apache server configuration file [/etc/apache2/apache2.conf] by, sudo gedit /etc/apache2/apache2.conf And following lines to the end of the file. 
######### Adding capaility to run CGI-scripts ################# ServerName localhost ScriptAlias /cgi-bin/ /var/www/cgi-bin/ Options +ExecCGI AddHandler cgi-script .cgi .pl .py Open file /etc/apache2/conf-available/serve-cgi-bin.conf by, sudo gedit /etc/apache2/conf-available/serve-cgi-bin.conf Change lines : ScriptAlias /cgi-bin/ /usr/lib/cgi-bin/ <Directory "/usr/lib/cgi-bin"> AllowOverride None Options +ExecCGI -MultiViews +SymLinksIfOwnerMatch Require all granted </Directory> To : ScriptAlias /cgi-bin/ /var/www/cgi-bin/ <Directory "/var/www/cgi-bin/"> AllowOverride None Options +ExecCGI </Directory> Now restart apache2 server by, sudo service apache2 restart Now create python script say, first.py inside directory /var/www/cgi-bin/ by, sudo gedit /var/www/cgi-bin/first.py and add following sample code : #!/usr/bin/env python import cgitb cgitb.enable() print("Content-Type: text/html;charset=utf-8") print ("Content-type:text/html\r\n") print("<H1> Hello, From python server :) </H1>") And give executable permission to first.py by, sudo chmod +x /var/www/cgi-bin/first.py Now curl will work. :~$ curl http://localhost/cgi-bin/first.py <H1> Hello, From python server :) </H1> Or open browser and browse http://localhost/cgi-bin/first.py. This should work and will display webpage showing "Hello, From python server :)". Hope it helps... Happy coding :) A: These lines need to be in your apache2 site configuration file (for example /etc/apache2/sites-available/default-ssl.conf if your website uses HTTPS): <Directory /var/www/yoursite/yourdir/> Options +ExecCGI PassEnv LANG AddHandler cgi-script .py </Directory> Add them for the directory you want and restart apache2. You might also need to add AddHandler python-program .py to your httpd.conf if not present already. A more detailed discussion concerning python scripts and apache2 is found here: Executing a Python script in Apache2. 
A: Probably the problem you're running in Python 3 and your code is written in Python 2 Change the title to python2: #!/usr/bin/python print "Content-type: text/html\n" print "Hello, world!" A: For just proof of concept, use this python script ; No input nor output needed. #!/usr/bin/env python3 import cgitb cgitb.enable() outputfile = open("javascriptPython.out", "a") outputfile.write("Hello from Python\n\n")
How do I run python cgi script on apache2 server on Ubuntu 16.04?
I am a newbie so I saw some tutorials. I have a python script as first.py #!/usr/bin/python3 print "Content-type: text/html\n" print "Hello, world!" I have multiple versions of python on my computer. I couldn't figure out my cgi enabled directory so I pasted this code at three places /usr/lib/cgi-bin/first.py /usr/lib/cups/cgi-bin/first.py /var/www/html/first.py Now when I run this code in terminal it works fine but when I type curl http://localhost/first.py it spit out just simple text and does not execute. I have given all the permissions to first.py I have enabled and started the server by commands a2enmod cgi systemctl restart apache2 Please tell how do I execute and what is going on here? Thanks in advance.
[ "To python cgi script on apache2 server on Ubuntu(Tested on Ubuntu 20.04) from scratch, Follow these steps(Expects python is installed and works perfectly).\n\nInstall apache2 sever.\nsudo apt install apache2\n\n\nEnable CGI module.\nsudo a2enmod cgi\n\n\n\n\nHere we set /var/www/cgi-bin/ as cgi-bin directory. If you want a different directory, change files appropriately.\n\n\nOpen Apache server configuration file [/etc/apache2/apache2.conf] by,\nsudo gedit /etc/apache2/apache2.conf\n\nAnd following lines to the end of the file.\n######### Adding capaility to run CGI-scripts #################\nServerName localhost\nScriptAlias /cgi-bin/ /var/www/cgi-bin/\nOptions +ExecCGI\nAddHandler cgi-script .cgi .pl .py\n\n\nOpen file /etc/apache2/conf-available/serve-cgi-bin.conf by,\nsudo gedit /etc/apache2/conf-available/serve-cgi-bin.conf\n\nChange lines :\nScriptAlias /cgi-bin/ /usr/lib/cgi-bin/\n<Directory \"/usr/lib/cgi-bin\">\n AllowOverride None\n Options +ExecCGI -MultiViews +SymLinksIfOwnerMatch\n Require all granted\n</Directory> \n\nTo :\nScriptAlias /cgi-bin/ /var/www/cgi-bin/\n<Directory \"/var/www/cgi-bin/\">\n AllowOverride None\n Options +ExecCGI\n</Directory>\n\n\nNow restart apache2 server by,\nsudo service apache2 restart\n\n\nNow create python script say, first.py inside directory\n/var/www/cgi-bin/ by,\nsudo gedit /var/www/cgi-bin/first.py\n\nand add following sample code :\n#!/usr/bin/env python\nimport cgitb\n\ncgitb.enable()\n\nprint(\"Content-Type: text/html;charset=utf-8\")\nprint (\"Content-type:text/html\\r\\n\")\nprint(\"<H1> Hello, From python server :) </H1>\")\n\nAnd give executable permission to first.py by,\nsudo chmod +x /var/www/cgi-bin/first.py\n\n\n\nNow curl will work.\n:~$ curl http://localhost/cgi-bin/first.py\n<H1> Hello, From python server :) </H1>\n\nOr open browser and browse http://localhost/cgi-bin/first.py. This should work and will display webpage showing \"Hello, From python server :)\".\nHope it helps... 
Happy coding :)\n", "These lines need to be in your apache2 site configuration file\n(for example /etc/apache2/sites-available/default-ssl.conf\nif your website uses HTTPS):\n<Directory /var/www/yoursite/yourdir/>\n Options +ExecCGI\n PassEnv LANG\n AddHandler cgi-script .py\n</Directory>\n\nAdd them for the directory you want and restart apache2. You might also need to add\nAddHandler python-program .py\n\nto your httpd.conf if not present already.\nA more detailed discussion concerning python scripts and apache2 is found here: Executing a Python script in Apache2.\n", "Probably the problem you're running in Python 3 and your code is written in Python 2\nChange the title to python2:\n#!/usr/bin/python\nprint \"Content-type: text/html\\n\"\nprint \"Hello, world!\"\n\n", "For just proof of concept, use this python script ; No input nor output needed.\n#!/usr/bin/env python3\nimport cgitb\ncgitb.enable()\noutputfile = open(\"javascriptPython.out\", \"a\")\noutputfile.write(\"Hello from Python\\n\\n\")\n\n" ]
[ 7, 4, 0, 0 ]
[]
[]
[ "apache2", "cgi", "python", "server_side_scripting", "ubuntu" ]
stackoverflow_0044871139_apache2_cgi_python_server_side_scripting_ubuntu.txt
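Independent of the Apache configuration covered above, the script in the question fails on its own: it uses Python 2 `print` statements under a `python3` shebang. A quick way to sanity-check a CGI script locally, before involving Apache at all, is to run it as a subprocess and inspect the response it writes to stdout. The sketch below is an illustration only — the temporary-file plumbing and the exact body text are incidental assumptions, not part of the original posts.

```python
import os
import subprocess
import sys
import tempfile
import textwrap

# A Python 3 version of first.py: print is a function, and the blank
# line after the Content-Type header separates it from the body.
script = textwrap.dedent("""\
    #!/usr/bin/env python3
    print("Content-Type: text/html")
    print()
    print("<h1>Hello, world!</h1>")
""")

with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
    f.write(script)
    path = f.name

# Run it the way the web server would: execute it and capture stdout.
out = subprocess.run([sys.executable, path],
                     capture_output=True, text=True, check=True).stdout
os.remove(path)

# A valid CGI response is header(s), a blank line, then the body.
header, _, body = out.partition("\n\n")
print(header)
```

If this local check passes but `curl` still returns the source text, the problem is the Apache side (CGI handler or ScriptAlias), not the script — which is the distinction the accepted answer's configuration steps address.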
Q: Pycharm how to switch back to English? I have been using Pycharm in English but today when I opened it, its interface got partially translated to Chinese, totally unexpected and unwanted. How can I switch back to English without reinstalling Pycharm? Thanks! A: go for settings and make it as default settings,consider the below image for the referance,count the row and select ok A: New version of pycharm let you unselect the Chinese interface and back to English interface.
Pycharm how to switch back to English?
I have been using Pycharm in English but today when I opened it, its interface got partially translated to Chinese, totally unexpected and unwanted. How can I switch back to English without reinstalling Pycharm? Thanks!
[ "go for settings and make it as default settings,consider the below image for the referance,count the row and select ok\n", "New version of pycharm let you unselect the Chinese interface and back to English interface.\n" ]
[ 2, 0 ]
[]
[]
[ "internationalization", "locale", "pycharm", "python", "user_interface" ]
stackoverflow_0047129007_internationalization_locale_pycharm_python_user_interface.txt
Q: SPARQL filter by date for multiple predicates parameters with same subject I would like to select the car brands (filter contains prefix "dbr:") where schema:Motor and filter schema:dateManufactured > year 2000. This is the source data (at the bottom you will find my query). As per RedCrusaderJr answer, we got now the brands but I don't know how to specify the query to filter the date of manufacture and the schema:Motor. cars = '''@prefix ex: <https://example.org/resource/> . @prefix schema: <https://schema.org/> . @prefix owl: <http://www.w3.org/2002/07/owl#> . @prefix dbr: <http://dbpedia.org/resource/> . @prefix dbo: <http://dbpedia.org/ontology/> . @prefix xsd: <http://www.w3.org/2001/XMLSchema#> . @prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> . ex:Mustang ex:Deportivo dbr:Ford ; schema:Motor ex:Gasolina ; ex:potencia "450"; ex:km "120000"; schema:dateManufactured "2020-05-29"^^xsd:date ; schema:wasInCompetition dbr:LeMans; rdfs:label "Ford Mustang GT"@en . ex:GT ex:Deportivo dbr:Ford ; schema:Motor ex:Gasolina ; ex:potencia "550"; ex:km "25000"; schema:dateManufactured "1968-04-29"^^xsd:date ; schema:wasInCompetition dbr:LeMans; rdfs:label "Ford GT"@en . ex:Fiesta ex:Utilitario dbr:Ford ; schema:Motor ex:Diesel ; ex:potencia "100"; ex:km "45000"; schema:dateManufactured "2020-02-10"^^xsd:date ; rdfs:label "Ford Fiesta"@en . ex:206 ex:Utilitario dbr:Peugeot ; schema:Motor ex:Diesel ; ex:potencia "68"; ex:km "173100"; schema:dateManufactured "2004-01-01"^^xsd:date ; rdfs:label "Peugeot 206"@en . ex:California ex:Deportivo dbr:Ferrari; schema:Motor ex:Gasolina; ex:potencia "460"; ex:km "500000"; schema:dateManufactured "2010-05-29"^^xsd:date ; schema:wasInCompetition dbr:LeMans; rdfs:label "Ferrari California"@en . ex:Enzo ex:Deportivo dbr:Ferrari; schema:Motor ex:Gasolina; ex:potencia ""; ex:km "200000"; schema:dateManufactured "2002-05-29"^^xsd:date ; schema:wasInCompetition dbr:LeMans; rdfs:label "Ferrari Enzo"@en . 
''' g_q1 = RDFGraph() g_q1.parse (data=cars, format="turtle") my initial query =''' SELECT ?h { ?h schema:Motor ?t; :dateManufactured ?date. FILTER (?date > "2000-12-31"^^xsd:date) } ''' RedCrusaderJr answer = SELECT * { { SELECT DISTINCT ?o { ?s ?p ?o FILTER CONTAINS(str(?o), "http://dbpedia.org/resource/") } } #here I understand it would go the schema:motor and date manufactured part. } Thanks! A: For the follow up on the original question, if I understand you correctly, you want all URIs with a specific prefix, which you'd get with this query: SELECT DISTINCT ?o { ?s ?p ?o FILTER CONTAINS(str(?o), "http://dbpedia.org/resource/") } For your data set you'd get these URIs: dbpedia:Ford dbpedia:LeMans dbpedia:Peugeot dbpedia:Delorean_Motor_Company dbpedia:Pontiac dbpedia:Ferrari And if you want to use those URIs further on, you can do it like this: SELECT * { { SELECT DISTINCT ?o { ?s ?p ?o FILTER CONTAINS(str(?o), "http://dbpedia.org/resource/") } } #rest of the query where ?o can be used } EDIT: If I run this SELECT ?car ?brand ?motor ?dateManufactured { ?car ex:Deportivo ?brand; schema:Motor ?motor; schema:dateManufactured ?dateManufactured. FILTER (?dateManufactured > "2000-12-31"^^xsd:date) } I get these triples car brand motor dateManufactured ex:Mustang dbpedia:Ford ex:Gasolina "2020-05-29" ex:California dbpedia:Ferrari ex:Gasolina "2010-05-29" ex:Enzo dbpedia:Ferrari ex:Gasolina "2002-05-29" Can you edit your answer again, so I can see exactly what result you want to get? If you want to merge brands with car/motor/dateManufactured, you don't have to get all brands first, but simply use the ex:Deportivo that connect them -> ?car ex:Deportivo ?brand.
SPARQL filter by date for multiple predicates parameters with same subject
I would like to select the car brands (filter contains prefix "dbr:") where schema:Motor and filter schema:dateManufactured > year 2000. This is the source data (at the bottom you will find my query). As per RedCrusaderJr answer, we got now the brands but I don't know how to specify the query to filter the date of manufacture and the schema:Motor. cars = '''@prefix ex: <https://example.org/resource/> . @prefix schema: <https://schema.org/> . @prefix owl: <http://www.w3.org/2002/07/owl#> . @prefix dbr: <http://dbpedia.org/resource/> . @prefix dbo: <http://dbpedia.org/ontology/> . @prefix xsd: <http://www.w3.org/2001/XMLSchema#> . @prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> . ex:Mustang ex:Deportivo dbr:Ford ; schema:Motor ex:Gasolina ; ex:potencia "450"; ex:km "120000"; schema:dateManufactured "2020-05-29"^^xsd:date ; schema:wasInCompetition dbr:LeMans; rdfs:label "Ford Mustang GT"@en . ex:GT ex:Deportivo dbr:Ford ; schema:Motor ex:Gasolina ; ex:potencia "550"; ex:km "25000"; schema:dateManufactured "1968-04-29"^^xsd:date ; schema:wasInCompetition dbr:LeMans; rdfs:label "Ford GT"@en . ex:Fiesta ex:Utilitario dbr:Ford ; schema:Motor ex:Diesel ; ex:potencia "100"; ex:km "45000"; schema:dateManufactured "2020-02-10"^^xsd:date ; rdfs:label "Ford Fiesta"@en . ex:206 ex:Utilitario dbr:Peugeot ; schema:Motor ex:Diesel ; ex:potencia "68"; ex:km "173100"; schema:dateManufactured "2004-01-01"^^xsd:date ; rdfs:label "Peugeot 206"@en . ex:California ex:Deportivo dbr:Ferrari; schema:Motor ex:Gasolina; ex:potencia "460"; ex:km "500000"; schema:dateManufactured "2010-05-29"^^xsd:date ; schema:wasInCompetition dbr:LeMans; rdfs:label "Ferrari California"@en . ex:Enzo ex:Deportivo dbr:Ferrari; schema:Motor ex:Gasolina; ex:potencia ""; ex:km "200000"; schema:dateManufactured "2002-05-29"^^xsd:date ; schema:wasInCompetition dbr:LeMans; rdfs:label "Ferrari Enzo"@en . 
''' g_q1 = RDFGraph() g_q1.parse (data=cars, format="turtle") my initial query =''' SELECT ?h { ?h schema:Motor ?t; :dateManufactured ?date. FILTER (?date > "2000-12-31"^^xsd:date) } ''' RedCrusaderJr answer = SELECT * { { SELECT DISTINCT ?o { ?s ?p ?o FILTER CONTAINS(str(?o), "http://dbpedia.org/resource/") } } #here I understand it would go the schema:motor and date manufactured part. } Thanks!
[ "For the follow up on the original question, if I understand you correctly, you want all URIs with a specific prefix, which you'd get with this query:\nSELECT DISTINCT ?o\n{ \n ?s ?p ?o\n FILTER CONTAINS(str(?o), \"http://dbpedia.org/resource/\")\n}\n\nFor your data set you'd get these URIs:\ndbpedia:Ford\ndbpedia:LeMans\ndbpedia:Peugeot\ndbpedia:Delorean_Motor_Company\ndbpedia:Pontiac\ndbpedia:Ferrari\n\nAnd if you want to use those URIs further on, you can do it like this:\nSELECT * \n{\n {\n SELECT DISTINCT ?o\n { \n ?s ?p ?o\n FILTER CONTAINS(str(?o), \"http://dbpedia.org/resource/\")\n }\n }\n\n #rest of the query where ?o can be used\n}\n\n\n\nEDIT:\n\n\nIf I run this\nSELECT ?car ?brand ?motor ?dateManufactured\n{\n ?car ex:Deportivo ?brand;\n schema:Motor ?motor;\n schema:dateManufactured ?dateManufactured.\n FILTER (?dateManufactured > \"2000-12-31\"^^xsd:date)\n}\n\nI get these triples\ncar brand motor dateManufactured\nex:Mustang dbpedia:Ford ex:Gasolina \"2020-05-29\"\nex:California dbpedia:Ferrari ex:Gasolina \"2010-05-29\"\nex:Enzo dbpedia:Ferrari ex:Gasolina \"2002-05-29\"\n\nCan you edit your answer again, so I can see exactly what result you want to get? If you want to merge brands with car/motor/dateManufactured, you don't have to get all brands first, but simply use the ex:Deportivo that connect them -> ?car ex:Deportivo ?brand.\n" ]
[ 0 ]
[]
[]
[ "python", "rdf", "schema", "sparql" ]
stackoverflow_0074666967_python_rdf_schema_sparql.txt
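For readers who want to check the FILTER logic in the answer above without a SPARQL engine, the same selection — cars whose brand is a `dbr:` resource and whose `schema:dateManufactured` falls after 2000 — can be mimicked over plain tuples. The rows below are copied from the Turtle data in the question; collapsing each car to a single tuple is a deliberate simplification of the triple structure, not how an RDF store represents it.

```python
from datetime import date

DBR = "http://dbpedia.org/resource/"

# (car, brand, motor, dateManufactured), copied from the Turtle source.
cars = [
    ("ex:Mustang",    DBR + "Ford",    "ex:Gasolina", date(2020, 5, 29)),
    ("ex:GT",         DBR + "Ford",    "ex:Gasolina", date(1968, 4, 29)),
    ("ex:Fiesta",     DBR + "Ford",    "ex:Diesel",   date(2020, 2, 10)),
    ("ex:206",        DBR + "Peugeot", "ex:Diesel",   date(2004, 1, 1)),
    ("ex:California", DBR + "Ferrari", "ex:Gasolina", date(2010, 5, 29)),
    ("ex:Enzo",       DBR + "Ferrari", "ex:Gasolina", date(2002, 5, 29)),
]

# FILTER (?dateManufactured > "2000-12-31"^^xsd:date)
# and    FILTER CONTAINS(str(?brand), "http://dbpedia.org/resource/")
cutoff = date(2000, 12, 31)
selected = [(car, brand) for car, brand, motor, made in cars
            if made > cutoff and DBR in brand]

for car, brand in selected:
    print(car, brand)
```

Only `ex:GT` (1968) drops out, matching the answer's result table plus the two non-Deportivo cars — which shows why joining on `?car ex:Deportivo ?brand` in the query itself, as the answer suggests, is the cleaner way to restrict the rows.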
Q: Inserting alpha value to a 4-dimentsional RGB numpy array Following this tutorial, I am trying to build a color cube in matplotlib. spatialAxes = [self.step, self.step, self.step] r, g, b= np.indices((self.step+1, self.step+1, self.step+1)) / 16.0 rc = self.midpoints(r) gc = self.midpoints(g) bc = self.midpoints(b) cube = np.ones(spatialAxes, dtype=np.bool) # combine the color components colors = np.zeros(cube.shape + (3,)) colors[..., 0] = rc colors[..., 1] = gc colors[..., 2] = bc self.axes.voxels(r, g, b, cube, facecolors = colors, linewidth = 0) midpoints is from the link above and is defined as def midpoints(self, x): sl = () for i in range(x.ndim): x = (x[sl + np.index_exp[:-1]] + x[sl + np.index_exp[1:]]) / 2.0 sl += np.index_exp[:] return x This does produce a color cube, but it is completely opaque. So I tried to add opacity by: colors = np.dstack( (colors, np.ones(spatialAxes) * .5) ) # .5 opacity Which did not work. Given that colors is a 4-dimensional array with a shape of (X, X, X, 3), where X is the value of self.step. How do I append the alpha channel to this array? A: You can make colors an array with a forth component like this: # combine the color components colors = np.zeros(sphere.shape + (4, )) colors[..., 0] = rc colors[..., 1] = gc colors[..., 2] = bc colors[..., 3] = 0.2 # opacity (alpha) BTW, I (half unconsciously) used the matplotlib example that you linked to. The challenge with your code is that it's not complete, so I cannot run it myself (hint ;)).
Inserting alpha value to a 4-dimensional RGB numpy array
Following this tutorial, I am trying to build a color cube in matplotlib. spatialAxes = [self.step, self.step, self.step] r, g, b= np.indices((self.step+1, self.step+1, self.step+1)) / 16.0 rc = self.midpoints(r) gc = self.midpoints(g) bc = self.midpoints(b) cube = np.ones(spatialAxes, dtype=np.bool) # combine the color components colors = np.zeros(cube.shape + (3,)) colors[..., 0] = rc colors[..., 1] = gc colors[..., 2] = bc self.axes.voxels(r, g, b, cube, facecolors = colors, linewidth = 0) midpoints is from the link above and is defined as def midpoints(self, x): sl = () for i in range(x.ndim): x = (x[sl + np.index_exp[:-1]] + x[sl + np.index_exp[1:]]) / 2.0 sl += np.index_exp[:] return x This does produce a color cube, but it is completely opaque. So I tried to add opacity by: colors = np.dstack( (colors, np.ones(spatialAxes) * .5) ) # .5 opacity Which did not work. Given that colors is a 4-dimensional array with a shape of (X, X, X, 3), where X is the value of self.step. How do I append the alpha channel to this array?
[ "You can make colors an array with a forth component like this:\n# combine the color components\ncolors = np.zeros(sphere.shape + (4, ))\ncolors[..., 0] = rc\ncolors[..., 1] = gc\ncolors[..., 2] = bc\ncolors[..., 3] = 0.2 # opacity (alpha)\n\nBTW, I (half unconsciously) used the matplotlib example that you linked to. The challenge with your code is that it's not complete, so I cannot run it myself (hint ;)).\n" ]
[ 1 ]
[]
[]
[ "matplotlib", "numpy", "python" ]
stackoverflow_0074671536_matplotlib_numpy_python.txt
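The accepted fix above generalizes: allocate the colors array with a trailing size-4 axis from the start instead of stacking an alpha plane on afterwards. A minimal sketch, using `step = 3` and constant channel values as stand-ins for the question's `self.step` and `rc`/`gc`/`bc` midpoint arrays:

```python
import numpy as np

step = 3                       # stand-in for self.step
shape = (step, step, step)

# Allocate RGBA directly: trailing axis of length 4, not 3.
colors = np.zeros(shape + (4,))
colors[..., 0] = 0.8           # red channel   (rc in the question)
colors[..., 1] = 0.2           # green channel (gc)
colors[..., 2] = 0.1           # blue channel  (bc)
colors[..., 3] = 0.5           # alpha: 50% opacity for every voxel

print(colors.shape)            # (3, 3, 3, 4)

# Why the np.dstack attempt failed: dstack concatenates along axis 2,
# not along a new trailing channel axis of a 4-D array. To append an
# alpha channel to an existing (X, X, X, 3) array, concatenate on the
# last axis instead:
rgb = np.zeros(shape + (3,))
alpha = np.full(shape + (1,), 0.5)
rgba = np.concatenate([rgb, alpha], axis=-1)
print(rgba.shape)              # (3, 3, 3, 4)
```

Either form produces the `(X, X, X, 4)` array that `Axes3D.voxels` accepts as `facecolors` with per-voxel opacity.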
Q: I need help regarding a python error: _tkinter.TclError: bitmap "class.ico" not defined I wanted to create an image classification app in python as a university project, but I get the error you can see on the pic below: Error message: Error_image I have 2 python files a teach.py, which teaches the pictures to the algorithm and saves it and a main.py which loads the saved model and a gui to uplad and predict the images, but when I debug I get the error don't know why neither do I know how to fix it... You can see main.py below: main.py: import keras import numpy as np import gradio as gr import pathlib import os import PIL import tkinter as tk import warnings import h5py import zipfile import tensorflow as tf import joblib import globals import matplotlib.pyplot as plt from keras.applications import vgg16 from keras.applications.vgg16 import decode_predictions from keras.utils import img_to_array from tkinter import * from keras.preprocessing.image import ImageDataGenerator from keras.applications.vgg16 import preprocess_input from keras.models import Sequential from keras.layers import Dense,Flatten,Dropout from sklearn import model_selection, datasets from sklearn.tree import DecisionTreeClassifier from pathlib import Path from PIL import ImageTk, Image from tkinter import filedialog from tensorflow import keras from keras import layers from keras.models import Sequential warnings.filterwarnings('ignore') data_dir='birbs/birbs' class_names = os.listdir(data_dir) loaded_model = tf.keras.models.load_model("saved_model.h5") def load_img(): global img, image_data for img_display in frame.winfo_children(): img_display.destroy() image_data = filedialog.askopenfilename(initialdir="/", title="Choose an image", filetypes=(("all files", "*.*"), ("png files", "*.png"))) basewidth = 150 # Processing image for dysplaying img = Image.open(image_data) wpercent = (basewidth / float(img.size[0])) hsize = int((float(img.size[1]) * float(wpercent))) img = img.resize((basewidth, 
hsize), Image.ANTIALIAS) img = ImageTk.PhotoImage(img) file_name = image_data.split('/') panel = tk.Label(frame, text= str(file_name[len(file_name)-1]).upper()).pack() panel_image = tk.Label(frame, image=img).pack() def classify(): original = Image.open(image_data) original = original.resize((224, 224), Image.ANTIALIAS) numpy_image = img_to_array(original) image_batch = np.expand_dims(numpy_image, axis=0) processed_image = vgg16.preprocess_input(image_batch.copy()) predictions = vgg_model.predict(processed_image) label = decode_predictions(predictions) table = tk.Label(frame, text="Top image class predictions and confidences").pack() for i in range(0, len(label[0])): result = tk.Label(frame, text= str(label[0][i][1]).upper() + ': ' + str(round(float(label[0][i][2])*100, 3)) + '%').pack() root = tk.Tk() root.title('Portable Image Classifier') root.iconbitmap('class.ico') root.resizable(False, False) tit = tk.Label(root, text="Portable Image Classifier", padx=25, pady=6, font=("", 12)).pack() canvas = tk.Canvas(root, height=500, width=500, bg='grey') canvas.pack() frame = tk.Frame(root, bg='white') frame.place(relwidth=0.8, relheight=0.8, relx=0.1, rely=0.1) chose_image = tk.Button(root, text='Choose Image', padx=35, pady=10, fg="white", bg="grey", command=load_img) chose_image.pack(side=tk.LEFT) class_image = tk.Button(root, text='Classify Image', padx=35, pady=10, fg="white", bg="grey", command=classify) class_image.pack(side=tk.RIGHT) vgg_model = vgg16.VGG16(weights='imagenet') root.mainloop() A: It looks like you are trying to use a bitmap image with the Tkinter library in Python, but the image you are trying to use has not been defined. To fix this error, you need to make sure that the bitmap image has been defined before you try to use it. This can be done by using the BitmapImage class in Tkinter to create a new instance of the image. 
For example: from tkinter import BitmapImage image = BitmapImage("class.ico") Once you have defined the image, you can use it in your Tkinter application. For example, you can set it as the icon for a window using the iconbitmap method: root = tk.Tk() root.iconbitmap(image) I hope this helps! Let me know if you have any other questions.
I need help regarding a python error: _tkinter.TclError: bitmap "class.ico" not defined
I wanted to create an image classification app in python as a university project, but I get the error you can see on the pic below: Error message: Error_image I have 2 python files a teach.py, which teaches the pictures to the algorithm and saves it and a main.py which loads the saved model and a gui to uplad and predict the images, but when I debug I get the error don't know why neither do I know how to fix it... You can see main.py below: main.py: import keras import numpy as np import gradio as gr import pathlib import os import PIL import tkinter as tk import warnings import h5py import zipfile import tensorflow as tf import joblib import globals import matplotlib.pyplot as plt from keras.applications import vgg16 from keras.applications.vgg16 import decode_predictions from keras.utils import img_to_array from tkinter import * from keras.preprocessing.image import ImageDataGenerator from keras.applications.vgg16 import preprocess_input from keras.models import Sequential from keras.layers import Dense,Flatten,Dropout from sklearn import model_selection, datasets from sklearn.tree import DecisionTreeClassifier from pathlib import Path from PIL import ImageTk, Image from tkinter import filedialog from tensorflow import keras from keras import layers from keras.models import Sequential warnings.filterwarnings('ignore') data_dir='birbs/birbs' class_names = os.listdir(data_dir) loaded_model = tf.keras.models.load_model("saved_model.h5") def load_img(): global img, image_data for img_display in frame.winfo_children(): img_display.destroy() image_data = filedialog.askopenfilename(initialdir="/", title="Choose an image", filetypes=(("all files", "*.*"), ("png files", "*.png"))) basewidth = 150 # Processing image for dysplaying img = Image.open(image_data) wpercent = (basewidth / float(img.size[0])) hsize = int((float(img.size[1]) * float(wpercent))) img = img.resize((basewidth, hsize), Image.ANTIALIAS) img = ImageTk.PhotoImage(img) file_name = image_data.split('/') 
panel = tk.Label(frame, text= str(file_name[len(file_name)-1]).upper()).pack() panel_image = tk.Label(frame, image=img).pack() def classify(): original = Image.open(image_data) original = original.resize((224, 224), Image.ANTIALIAS) numpy_image = img_to_array(original) image_batch = np.expand_dims(numpy_image, axis=0) processed_image = vgg16.preprocess_input(image_batch.copy()) predictions = vgg_model.predict(processed_image) label = decode_predictions(predictions) table = tk.Label(frame, text="Top image class predictions and confidences").pack() for i in range(0, len(label[0])): result = tk.Label(frame, text= str(label[0][i][1]).upper() + ': ' + str(round(float(label[0][i][2])*100, 3)) + '%').pack() root = tk.Tk() root.title('Portable Image Classifier') root.iconbitmap('class.ico') root.resizable(False, False) tit = tk.Label(root, text="Portable Image Classifier", padx=25, pady=6, font=("", 12)).pack() canvas = tk.Canvas(root, height=500, width=500, bg='grey') canvas.pack() frame = tk.Frame(root, bg='white') frame.place(relwidth=0.8, relheight=0.8, relx=0.1, rely=0.1) chose_image = tk.Button(root, text='Choose Image', padx=35, pady=10, fg="white", bg="grey", command=load_img) chose_image.pack(side=tk.LEFT) class_image = tk.Button(root, text='Classify Image', padx=35, pady=10, fg="white", bg="grey", command=classify) class_image.pack(side=tk.RIGHT) vgg_model = vgg16.VGG16(weights='imagenet') root.mainloop()
[ "It looks like you are trying to use a bitmap image with the Tkinter library in Python, but the image you are trying to use has not been defined. To fix this error, you need to make sure that the bitmap image has been defined before you try to use it. This can be done by using the BitmapImage class in Tkinter to create a new instance of the image. For example:\nfrom tkinter import BitmapImage\n\nimage = BitmapImage(\"class.ico\")\n\nOnce you have defined the image, you can use it in your Tkinter application. For example, you can set it as the icon for a window using the iconbitmap method:\nroot = tk.Tk()\nroot.iconbitmap(image)\n\nI hope this helps! Let me know if you have any other questions.\n" ]
[ 0 ]
[]
[]
[ "keras", "machine_learning", "python", "tensorflow", "tkinter" ]
stackoverflow_0074670274_keras_machine_learning_python_tensorflow_tkinter.txt
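Whichever icon-loading approach is used, the `TclError` in the question fires when the path handed to `iconbitmap` does not resolve to an existing `.ico` file — typically because the script is launched from a different working directory than the one containing `class.ico`. A small guard sketches the check; `set_window_icon` is a hypothetical helper name, and the function is kept free of GUI calls of its own so it can be reasoned about without a display.

```python
import os

def set_window_icon(root, icon_path):
    """Call root.iconbitmap(icon_path) only when the file exists.

    A missing file is exactly what raises
    _tkinter.TclError: bitmap "class.ico" not defined.
    Resolving to an absolute path first makes the failure mode
    (wrong working directory) easy to see in a debugger.
    """
    icon_path = os.path.abspath(icon_path)
    if not os.path.isfile(icon_path):
        return False               # skip the icon instead of crashing
    root.iconbitmap(icon_path)
    return True
```

In `main.py` this would stand in for the bare `root.iconbitmap('class.ico')` call, with `root` being the `tk.Tk()` instance from the question.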
Q: How do I decompose() a reoccurring row in a table that I find located in an html page? The row is a duplicate of the header row. The row occurs over and over again randomly, and I do not want it in the data set (naturally). I think the HTML page has it there to remind the viewer what column attributes they are looking at as they scroll down. Below is a sample of one of the row elements I want delete: <tr class ="thead" data-row="25> Here is another one: <tr class="thead" data-row="77"> They occur randomly, but if there's any way we could make a loop that can iterate and find the first cell in the row and determine that it is in fact the row we want to delete? Because they are identical each time. The first cell is always "Player", identifying the attribute. Below is an example of what that looks like as an HTML element. <th aria-label="Player" data-stat="player" scope="col" class=" poptip sort_default_asc center">Player</th> Maybe I can create a loop that iterates through each row and determines if that first cell says "Player". If it does, then delete that whole row. Is that possible? Here is my code: from bs4 import BeautifulSoup import pandas as pd import requests import string years = list(range(2023, 2024)) alphabet = list(string.ascii_lowercase) url_namegather = 'https://www.basketball-reference.com/players/a' lastname_a = 'a' url = url_namegather.format(lastname_a) data = requests.get(url) with open("player_names/lastname_a.html".format(lastname_a), "w+", encoding="utf-8") as f: f.write(data.text) with open("player_names/lastname_a.html", encoding="utf-8") as f: page = f.read() soup = BeautifulSoup(page, "html.parser") A: It is possible to create a loop that iterates through each row in an HTML table and delete a row if the first cell in the row matches a certain value. This can be done using a combination of the HTML DOM (Document Object Model) and JavaScript. 
First, you will need to use the getElementsByTagName method to retrieve all of the tr elements (table rows) in the table. Then, you can use a for loop to iterate over each of these rows. Inside the loop, you can use the getElementsByTagName method again to retrieve all of the th elements (table headers) in the current row. If the first th element has a data-stat attribute with a value of "player", then you can use the deleteRow method to delete the current row from the table. Here is an example of how this might look in code: // Get all of the table rows in the table var rows = document.getElementsByTagName("tr"); // Loop through each row for (var i = 0; i < rows.length; i++) { // Get all of the table headers in the current row var headers = rows[i].getElementsByTagName("th"); // If the first header has a data-stat attribute with a value of "player" if (headers[0].getAttribute("data-stat") == "player") { // Delete the current row rows[i].deleteRow(); } } Keep in mind that this is just an example and may need to be modified to fit the specific structure of your HTML table.
How do I decompose() a reoccurring row in a table that I find located in an html page?
The row is a duplicate of the header row. The row occurs over and over again randomly, and I do not want it in the data set (naturally). I think the HTML page has it there to remind the viewer what column attributes they are looking at as they scroll down. Below is a sample of one of the row elements I want delete: <tr class ="thead" data-row="25> Here is another one: <tr class="thead" data-row="77"> They occur randomly, but if there's any way we could make a loop that can iterate and find the first cell in the row and determine that it is in fact the row we want to delete? Because they are identical each time. The first cell is always "Player", identifying the attribute. Below is an example of what that looks like as an HTML element. <th aria-label="Player" data-stat="player" scope="col" class=" poptip sort_default_asc center">Player</th> Maybe I can create a loop that iterates through each row and determines if that first cell says "Player". If it does, then delete that whole row. Is that possible? Here is my code: from bs4 import BeautifulSoup import pandas as pd import requests import string years = list(range(2023, 2024)) alphabet = list(string.ascii_lowercase) url_namegather = 'https://www.basketball-reference.com/players/a' lastname_a = 'a' url = url_namegather.format(lastname_a) data = requests.get(url) with open("player_names/lastname_a.html".format(lastname_a), "w+", encoding="utf-8") as f: f.write(data.text) with open("player_names/lastname_a.html", encoding="utf-8") as f: page = f.read() soup = BeautifulSoup(page, "html.parser")
[ "It is possible to create a loop that iterates through each row in an HTML table and delete a row if the first cell in the row matches a certain value. This can be done using a combination of the HTML DOM (Document Object Model) and JavaScript.\nFirst, you will need to use the getElementsByTagName method to retrieve all of the tr elements (table rows) in the table. Then, you can use a for loop to iterate over each of these rows.\nInside the loop, you can use the getElementsByTagName method again to retrieve all of the th elements (table headers) in the current row. If the first th element has a data-stat attribute with a value of \"player\", then you can use the remove method to delete the current row from the table (note that deleteRow belongs to the table element, not to a row).\nHere is an example of how this might look in code:\n// Get all of the table rows in the table\nvar rows = document.getElementsByTagName(\"tr\");\n\n// Loop through each row backwards, so removals do not shift the remaining indexes\nfor (var i = rows.length - 1; i >= 0; i--) {\n  // Get all of the table headers in the current row\n  var headers = rows[i].getElementsByTagName(\"th\");\n  \n  // If the first header has a data-stat attribute with a value of \"player\"\n  if (headers.length > 0 && headers[0].getAttribute(\"data-stat\") == \"player\") {\n    // Remove the current row\n    rows[i].remove();\n  }\n}\n\nKeep in mind that this is just an example and may need to be modified to fit the specific structure of your HTML table.\n" ]
[ 0 ]
[]
[]
[ "python" ]
stackoverflow_0074671640_python.txt
Q: ValueError: non-broadcastable output operand with shape (1,) doesn't match the broadcast shape (1,15) after running this code I keep getting the same error: note:(the data is in excel file (Heights : 16 column) and (Wights:16 column) I tried to change the epochs_num and it keeps giving the same problem... import pandas as pd import matplotlib.pyplot as plt import numpy as np # Load the dataset data = pd.read_csv('heights_weights.csv') # Plot the data distribution plt.scatter(data['Height'], data['Weight'], color='b') plt.xlabel('Height') plt.ylabel('Weight') plt.title('Height vs. Weight') plt.show() # Define the linear regression model def linearRegression_model(X, weights): y_pred = np.dot(X, weights) return y_pred # Define the update weights function def linearRegression_update_weights(X, y, weights, learning_rate): y_pred = linearRegression_model(X, weights) weights_delta = np.dot(X.T, y_pred - y) m = len(y) weights -= (learning_rate/m) * weights_delta return weights # Define the train function def linearRegression_train(X, y, learning_rate, num_epochs): # Initialize weights and bias weights = np.zeros(X.shape[1]) for epoch in range(num_epochs): weights = linearRegression_update_weights(X, y, weights, learning_rate) if (epoch % 100 == 0): print('epoch: %s, weights: %s' % (epoch, weights)) return weights # Define the predict function def linearRegression_predict(X, weights): y_pred = linearRegression_model(X, weights) return y_pred # Define the mean squared error function def mean_squared_error(y_true, y_pred): mse = np.mean(np.power(y_true-y_pred, 2)) return mse # Prepare the data X = data['Height'].values.reshape(-1, 1) y = data['Weight'].values.reshape(-1, 1) # Train the model lr = 0.01 n_epochs = 1000 weights = linearRegression_train(X, y, lr, n_epochs) # Predict y_pred = linearRegression_predict(X, weights) # Evaluate the model mse = mean_squared_error(y, y_pred) print('Mean Squared Error: %s' % mse) # Plot the regression line plt.scatter(data['Height'], 
data['Weight'], color='b') plt.plot(X, y_pred, color='k') plt.xlabel('Height') plt.ylabel('Weight') plt.title('Height vs. Weight') plt.show() # Plot the predicted and actual values plt.scatter(data['Height'], y, color='b', label='Actual') plt.scatter(data['Height'], y_pred, color='r', label='Predicted') plt.xlabel('Height') plt.ylabel('Weight') plt.title('Actual vs. Predicted') plt.legend() plt.show() i try the same code to run step by step in google colab and i also change the epochs to 62 and run it many times but still the same : ValueError Traceback (most recent call last) <ipython-input-23-98703406a0a3> in <module> 2 learning_rate = 0.01 3 num_epochs = 62 ----> 4 weights = linearRegression_train(X, y, learning_rate, num_epochs) 1 frames <ipython-input-12-8f66dacdd5fc> in linearRegression_update_weights(X, y, weights, learning_rate) 4 weights_delta = np.dot(X.T, y_pred - y) 5 m = len(y) ----> 6 weights -= (learning_rate/m) * weights_delta 7 return weights ValueError: non-broadcastable output operand with shape (1,) doesn't match the broadcast shape (1,15) A: I can reproduce the error message with In [5]: x=np.array([1]) In [6]: x+=np.ones((1,5),int) --------------------------------------------------------------------------- ValueError Traceback (most recent call last) Input In [6], in <cell line: 1>() ----> 1 x+=np.ones((1,5),int) ValueError: non-broadcastable output operand with shape (1,) doesn't match the broadcast shape (1,5) A: In linearRegression_update_weights, weights.shape == (1,) but weights_delta.shape == (1, 15) so the in-place subtraction fails. The shape of weights_delta is wrong because y_pred.shape == (15,) but y.shape == (15, 1) so (y_pred - y).shape == (15, 15) because of broadcasting. This results in the wrong shape of weights_delta after multiplied by X.T. The fix is to ensure y is a 1-D array to match the shape of y_pred, preventing broadcasting: y = data['Weight'].values.reshape(-1)
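The shape mismatch described in the traceback can be reproduced and fixed in a few lines. This sketch uses synthetic data in place of the poster's heights_weights.csv:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(15, 1))                      # 15 samples, 1 feature
y_2d = 2.5 * X + rng.normal(scale=0.1, size=(15, 1))

weights = np.zeros(X.shape[1])                    # shape (1,)
y_pred = X @ weights                              # shape (15,)

# (15,) minus (15, 1) broadcasts to (15, 15) -- the root cause of the error
print((y_pred - y_2d).shape)                      # (15, 15)

# Flattening y to 1-D keeps every intermediate shape consistent
y = y_2d.reshape(-1)                              # shape (15,)
for _ in range(500):
    y_pred = X @ weights                          # (15,)
    weights -= (0.1 / len(y)) * (X.T @ (y_pred - y))  # (1,) -= (1,)

print(weights.shape)                              # (1,)
```

With `y` one-dimensional, `y_pred - y` is `(15,)`, `X.T @ (y_pred - y)` is `(1,)`, and the in-place `-=` on `weights` no longer trips over broadcasting; the learned weight also converges near the true slope 2.5.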
ValueError: non-broadcastable output operand with shape (1,) doesn't match the broadcast shape (1,15)
after running this code I keep getting the same error: note:(the data is in excel file (Heights : 16 column) and (Wights:16 column) I tried to change the epochs_num and it keeps giving the same problem... import pandas as pd import matplotlib.pyplot as plt import numpy as np # Load the dataset data = pd.read_csv('heights_weights.csv') # Plot the data distribution plt.scatter(data['Height'], data['Weight'], color='b') plt.xlabel('Height') plt.ylabel('Weight') plt.title('Height vs. Weight') plt.show() # Define the linear regression model def linearRegression_model(X, weights): y_pred = np.dot(X, weights) return y_pred # Define the update weights function def linearRegression_update_weights(X, y, weights, learning_rate): y_pred = linearRegression_model(X, weights) weights_delta = np.dot(X.T, y_pred - y) m = len(y) weights -= (learning_rate/m) * weights_delta return weights # Define the train function def linearRegression_train(X, y, learning_rate, num_epochs): # Initialize weights and bias weights = np.zeros(X.shape[1]) for epoch in range(num_epochs): weights = linearRegression_update_weights(X, y, weights, learning_rate) if (epoch % 100 == 0): print('epoch: %s, weights: %s' % (epoch, weights)) return weights # Define the predict function def linearRegression_predict(X, weights): y_pred = linearRegression_model(X, weights) return y_pred # Define the mean squared error function def mean_squared_error(y_true, y_pred): mse = np.mean(np.power(y_true-y_pred, 2)) return mse # Prepare the data X = data['Height'].values.reshape(-1, 1) y = data['Weight'].values.reshape(-1, 1) # Train the model lr = 0.01 n_epochs = 1000 weights = linearRegression_train(X, y, lr, n_epochs) # Predict y_pred = linearRegression_predict(X, weights) # Evaluate the model mse = mean_squared_error(y, y_pred) print('Mean Squared Error: %s' % mse) # Plot the regression line plt.scatter(data['Height'], data['Weight'], color='b') plt.plot(X, y_pred, color='k') plt.xlabel('Height') plt.ylabel('Weight') 
plt.title('Height vs. Weight') plt.show() # Plot the predicted and actual values plt.scatter(data['Height'], y, color='b', label='Actual') plt.scatter(data['Height'], y_pred, color='r', label='Predicted') plt.xlabel('Height') plt.ylabel('Weight') plt.title('Actual vs. Predicted') plt.legend() plt.show() i try the same code to run step by step in google colab and i also change the epochs to 62 and run it many times but still the same : ValueError Traceback (most recent call last) <ipython-input-23-98703406a0a3> in <module> 2 learning_rate = 0.01 3 num_epochs = 62 ----> 4 weights = linearRegression_train(X, y, learning_rate, num_epochs) 1 frames <ipython-input-12-8f66dacdd5fc> in linearRegression_update_weights(X, y, weights, learning_rate) 4 weights_delta = np.dot(X.T, y_pred - y) 5 m = len(y) ----> 6 weights -= (learning_rate/m) * weights_delta 7 return weights ValueError: non-broadcastable output operand with shape (1,) doesn't match the broadcast shape (1,15)
[ "I can reproduce the error message with\nIn [5]: x=np.array([1])\n\nIn [6]: x+=np.ones((1,5),int)\n---------------------------------------------------------------------------\nValueError Traceback (most recent call last)\nInput In [6], in <cell line: 1>()\n----> 1 x+=np.ones((1,5),int)\n\nValueError: non-broadcastable output operand with shape (1,) doesn't match the broadcast shape (1,5)\n\n", "In linearRegression_update_weights, weights.shape == (1,) but weights_delta.shape == (1, 15) so the in-place subtraction fails. The shape of weights_delta is wrong because y_pred.shape == (15,) but y.shape == (15, 1) so (y_pred - y).shape == (15, 15) because of broadcasting. This results in the wrong shape of weights_delta after multiplied by X.T. The fix is to ensure y is a 1-D array to match the shape of y_pred, preventing broadcasting:\ny = data['Weight'].values.reshape(-1)\n\n" ]
[ 0, 0 ]
[]
[]
[ "linear_regression", "numpy", "pandas", "python" ]
stackoverflow_0074658165_linear_regression_numpy_pandas_python.txt
Q: I'm wondering why I'm getting a TypeError: argument of type 'int' is not iterable I'm trying to make a list of names based off the last number in the values list. The new list will be ordered based on highest number to lowest number but is a list of the names. folks = {'Leia': [28, 'F', 'W', False, True, 'Unemployed',1], 'Junipero': [15, 'M', 'E', False, False, 'Teacher', 0.21158336054026594], 'Sunita': [110, 'D', 'E', True, False, 'Business', 0.9834949767416051], 'Issur': [17, 'F', 'O', True, False, 'Service', 0.7599396397686616], 'Luitgard': [0, 'D', 'U', True, True, 'Unemployed', 0.8874638219100845], 'Rudy': [112, 'M', 'W', True, True, 'Tradesperson', 0.6035917636433216], 'Ioudith': [20, 'D', 'W', True, True, 'Medical', 0.24957574519928294], 'Helmi': [109, 'D', 'M', False, False, 'Service', 0.20239906854483214], 'Katerina': [108, 'M', 'W', False, True, 'Student', 0.3046268530221382], 'Durai': [106, 'M', 'U', True, False, 'Business', 0.32332997497778493], 'Euphemios': [83, 'M', 'L', True, True, 'Banker', 0.17369577419188664], 'Lorinda': [8, 'F', 'E', False, True, 'Retail', 0.6667783756618852], 'Lasse': [30, 'D', 'U', True, True, 'Business', 0.6716420300452077], 'Adnan': [117, 'D', 'U', True, False, 'Banker', 0.7043759366238305], 'Pavica': [112, 'F', 'L', False, False, 'Business', 0.5875152728319836], 'Adrastos': [118, 'F', 'L', False, True, 'Service', 0.0660146284846359], 'Kobus': [49, 'D', 'S', False, False, 'Service', 0.4738056051140088], 'Daniel': [115, 'D', 'L', False, True, 'Service', 0.5182765931408372], 'Samantha': [97, 'D', 'W', True, True, 'Medical', 0.07082409148069169], 'Sacagawea': [28, 'F', 'U', True, True, 'Medical', 0.29790328657890996], 'Ixchel': [26, 'F', 'S', False, False, 'Business', 0.22593704520870372], 'Nobutoshi': [31, 'M', 'W', False, True, 'Business', 0.37923896100469956], 'Gorou': [55, 'M', 'B', True, True, 'Banker', 0.8684653864827863], 'Keiko': [34, 'M', 'L', False, True, 'Student', 0.02499269016601946], 'Seong-Su': [1, 'M', 'M', 
False, True, 'Retail', 0.3214997836868769], 'Aya': [41, 'M', 'B', True, True, 'Teacher', 0.3378161065313626], 'Okan': [11, 'D', 'W', True, True, 'Banker', 0.35535128959244744], 'Mai': [31, 'F', 'M', False, False, 'Service', 0.7072299366468716], 'Chaza-el': [84, 'D', 'E', True, True, 'Teacher', 0.263795143996962], 'Estera': [79, 'M', 'U', True, False, 'Tradesperson', 0.09970175216521693], 'Dante': [82, 'M', 'L', True, False, 'Unemployed', 0.2126494288577333], 'Leofric': [68, 'F', 'B', True, False, 'Unemployed', 0.19591887643941486], 'Anabelle': [63, 'M', 'B', False, False, 'Teacher', 0.3558324357405023], 'Harsha': [119, 'D', 'O', False, True, 'Retail', 0.3359989642837887], 'Dionisia': [92, 'F', 'B', True, False, 'Doctor', 0.42704604164789706], 'Rajesh': [55, 'F', 'M', True, False, 'Doctor', 0.485752225148387], 'Scilla': [60, 'F', 'M', False, False, 'Student', 0.7294089528796434], 'Arsenio': [10, 'D', 'L', False, True, 'Teacher', 0.0819890866210915]} def generate_prioritized_list(unordered_people): nums=[] for i in folks: nums.append(folks[i][6]) nums.sort(reverse=True) for i in nums: names=[] for name in folks: if i in folks[name][6]: names.append(folks[i]) for i in names: print(i) print(generate_prioritized_list(folks)) I'm trying to get a list of the names ordered highest to lowest by the last value in the list each persons attributes. A: I would use the key argument to the sorted function sorted(folks, key=lambda x: folks[x][-1])[::-1] sorted pulls the keys out of the dictionary key=lambda x: folks[x][-1] determines how to sort those keys [::-1] reverses the list. I get: ['Leia', 'Sunita', 'Luitgard', 'Gorou', 'Issur', # ... 
'Estera', 'Arsenio', 'Samantha', 'Adrastos', 'Keiko'] A: Here is one way to do it: # Get the values of the last item in each sublist last_values = [folks[name][-1] for name in folks] # Sort the values in descending order sorted_values = sorted(last_values, reverse=True) # Create a list of names, sorted by the last value in their sublist sorted_names = [name for value in sorted_values for name in folks if folks[name][-1] == value] # Print the resulting list print(sorted_names) This code uses two nested list comprehensions to create the sorted list of names. The first list comprehension gets the values of the last item in each sublist, and the second list comprehension uses those values to create the sorted list of names.
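The TypeError in the title comes from `if i in folks[name][6]:` — the last element of each list is a float, and the `in` operator needs an iterable on its right-hand side. The sorted-with-key approach from the first answer avoids the membership test entirely; a runnable miniature with a trimmed-down copy of the dictionary:

```python
# Trimmed-down copy of the dictionary from the question
folks = {
    'Leia':   [28, 'F', 'W', False, True, 'Unemployed', 1],
    'Sunita': [110, 'D', 'E', True, False, 'Business', 0.98],
    'Keiko':  [34, 'M', 'L', False, True, 'Student', 0.02],
}

# Sort the names by the last element of each value list, highest first.
# reverse=True is clearer than reversing with [::-1] afterwards.
prioritized = sorted(folks, key=lambda name: folks[name][-1], reverse=True)
print(prioritized)  # ['Leia', 'Sunita', 'Keiko']
```

To compare a key against the float directly, `if i == folks[name][-1]:` would also work, but a single `sorted` call replaces the whole nested loop.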
I'm wondering why I'm getting a TypeError: argument of type 'int' is not iterable
I'm trying to make a list of names based off the last number in the values list. The new list will be ordered based on highest number to lowest number but is a list of the names. folks = {'Leia': [28, 'F', 'W', False, True, 'Unemployed',1], 'Junipero': [15, 'M', 'E', False, False, 'Teacher', 0.21158336054026594], 'Sunita': [110, 'D', 'E', True, False, 'Business', 0.9834949767416051], 'Issur': [17, 'F', 'O', True, False, 'Service', 0.7599396397686616], 'Luitgard': [0, 'D', 'U', True, True, 'Unemployed', 0.8874638219100845], 'Rudy': [112, 'M', 'W', True, True, 'Tradesperson', 0.6035917636433216], 'Ioudith': [20, 'D', 'W', True, True, 'Medical', 0.24957574519928294], 'Helmi': [109, 'D', 'M', False, False, 'Service', 0.20239906854483214], 'Katerina': [108, 'M', 'W', False, True, 'Student', 0.3046268530221382], 'Durai': [106, 'M', 'U', True, False, 'Business', 0.32332997497778493], 'Euphemios': [83, 'M', 'L', True, True, 'Banker', 0.17369577419188664], 'Lorinda': [8, 'F', 'E', False, True, 'Retail', 0.6667783756618852], 'Lasse': [30, 'D', 'U', True, True, 'Business', 0.6716420300452077], 'Adnan': [117, 'D', 'U', True, False, 'Banker', 0.7043759366238305], 'Pavica': [112, 'F', 'L', False, False, 'Business', 0.5875152728319836], 'Adrastos': [118, 'F', 'L', False, True, 'Service', 0.0660146284846359], 'Kobus': [49, 'D', 'S', False, False, 'Service', 0.4738056051140088], 'Daniel': [115, 'D', 'L', False, True, 'Service', 0.5182765931408372], 'Samantha': [97, 'D', 'W', True, True, 'Medical', 0.07082409148069169], 'Sacagawea': [28, 'F', 'U', True, True, 'Medical', 0.29790328657890996], 'Ixchel': [26, 'F', 'S', False, False, 'Business', 0.22593704520870372], 'Nobutoshi': [31, 'M', 'W', False, True, 'Business', 0.37923896100469956], 'Gorou': [55, 'M', 'B', True, True, 'Banker', 0.8684653864827863], 'Keiko': [34, 'M', 'L', False, True, 'Student', 0.02499269016601946], 'Seong-Su': [1, 'M', 'M', False, True, 'Retail', 0.3214997836868769], 'Aya': [41, 'M', 'B', True, True, 
'Teacher', 0.3378161065313626], 'Okan': [11, 'D', 'W', True, True, 'Banker', 0.35535128959244744], 'Mai': [31, 'F', 'M', False, False, 'Service', 0.7072299366468716], 'Chaza-el': [84, 'D', 'E', True, True, 'Teacher', 0.263795143996962], 'Estera': [79, 'M', 'U', True, False, 'Tradesperson', 0.09970175216521693], 'Dante': [82, 'M', 'L', True, False, 'Unemployed', 0.2126494288577333], 'Leofric': [68, 'F', 'B', True, False, 'Unemployed', 0.19591887643941486], 'Anabelle': [63, 'M', 'B', False, False, 'Teacher', 0.3558324357405023], 'Harsha': [119, 'D', 'O', False, True, 'Retail', 0.3359989642837887], 'Dionisia': [92, 'F', 'B', True, False, 'Doctor', 0.42704604164789706], 'Rajesh': [55, 'F', 'M', True, False, 'Doctor', 0.485752225148387], 'Scilla': [60, 'F', 'M', False, False, 'Student', 0.7294089528796434], 'Arsenio': [10, 'D', 'L', False, True, 'Teacher', 0.0819890866210915]} def generate_prioritized_list(unordered_people): nums=[] for i in folks: nums.append(folks[i][6]) nums.sort(reverse=True) for i in nums: names=[] for name in folks: if i in folks[name][6]: names.append(folks[i]) for i in names: print(i) print(generate_prioritized_list(folks)) I'm trying to get a list of the names ordered highest to lowest by the last value in the list each persons attributes.
[ "I would use the key argument to the sorted function\nsorted(folks, key=lambda x: folks[x][-1])[::-1]\n\n\nsorted pulls the keys out of the dictionary\nkey=lambda x: folks[x][-1] determines how to sort those keys\n[::-1] reverses the list.\n\nI get:\n['Leia',\n 'Sunita',\n 'Luitgard',\n 'Gorou',\n 'Issur',\n # ... \n 'Estera',\n 'Arsenio',\n 'Samantha',\n 'Adrastos',\n 'Keiko']\n\n", "Here is one way to do it:\n# Get the values of the last item in each sublist\nlast_values = [folks[name][-1] for name in folks]\n\n# Sort the values in descending order\nsorted_values = sorted(last_values, reverse=True)\n\n# Create a list of names, sorted by the last value in their sublist\nsorted_names = [name for value in sorted_values\n for name in folks\n if folks[name][-1] == value]\n\n# Print the resulting list\nprint(sorted_names)\n\nThis code uses two nested list comprehensions to create the sorted list of names. The first list comprehension gets the values of the last item in each sublist, and the second list comprehension uses those values to create the sorted list of names.\n" ]
[ 1, 0 ]
[]
[]
[ "dictionary", "list", "python", "typeerror" ]
stackoverflow_0074671633_dictionary_list_python_typeerror.txt
Q: What is wrong with my code? I am a newbie and I don't know why it isn't working. (Python) Sorry, I've only been using python for about an hour, I'm using PyCharm if that has to do with the problem, I don't think it does though. Here is my code: userAge = input("Hi, how old are you?\n") longRussiaString = ( "When will you guys stop telling me about how you had to walk uphill both ways for 10 miles to" "get to school amidst the icy tundra of Russia? We get it!") def reply_detect(): if userAge != 0 - 5: pass else: print("Wait a minute... Stop lying! I know your brain is too small for this!") if userAge != 6 - 18: pass else: print("You're too young to be interested in this. Unless your dad forced you to just like me :(") if userAge != 19 - 24: pass else: print("""Good luck dealing with those "taxes" things or whatever. Wait, you haven't heard of those?""") if userAge != 25 - 40: pass else: print("You post-millennial scumbags... No, just kidding, you guys are the ones carrying our society.") if userAge != 41 - 55: pass else: print(longRussiaString) def age_reply(): print(f"So, you're {userAge} years old, huh?") reply_detect() age_reply() I tried to inverse the if loops making a second function to neaten things up a bit, and lots of other things, what happens is that it shows the "So, you're {userAge} years old part, but it ends there and doesn't show me the rest, which is the function "reply_detect". Thanks! A: It seems you're looking to see if a number occurs in a range. This can be accomplished very directly in Python. Just remember that the end of the range is exclusive: 0 to 5 is covered by range(0, 6). >>> n = 16 >>> n in range(0, 17) True >>> n in range(0, 6) False >>> n not in range(0, 6) True >>> A: you are using a test of 0 - 5 but to python this means zero minus five which is negative five, just as you would do in a calculator. Instead you mean if userAge >= 0 and userAge <= 5 and so on for the remaining tests.
What is wrong with my code? I am a newbie and I don't know why it isn't working. (Python)
Sorry, I've only been using python for about an hour, I'm using PyCharm if that has to do with the problem, I don't think it does though. Here is my code: userAge = input("Hi, how old are you?\n") longRussiaString = ( "When will you guys stop telling me about how you had to walk uphill both ways for 10 miles to" "get to school amidst the icy tundra of Russia? We get it!") def reply_detect(): if userAge != 0 - 5: pass else: print("Wait a minute... Stop lying! I know your brain is too small for this!") if userAge != 6 - 18: pass else: print("You're too young to be interested in this. Unless your dad forced you to just like me :(") if userAge != 19 - 24: pass else: print("""Good luck dealing with those "taxes" things or whatever. Wait, you haven't heard of those?""") if userAge != 25 - 40: pass else: print("You post-millennial scumbags... No, just kidding, you guys are the ones carrying our society.") if userAge != 41 - 55: pass else: print(longRussiaString) def age_reply(): print(f"So, you're {userAge} years old, huh?") reply_detect() age_reply() I tried to inverse the if loops making a second function to neaten things up a bit, and lots of other things, what happens is that it shows the "So, you're {userAge} years old part, but it ends there and doesn't show me the rest, which is the function "reply_detect". Thanks!
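Putting the two answers together: `userAge != 0 - 5` compares against the arithmetic result -5, and `input()` returns a string besides, so no branch ever fires. A hedged rework of the age ladder (messages shortened from the original) might look like:

```python
def reply_for_age(user_age: int) -> str:
    # range(0, 6) covers 0-5 inclusive: the end of a range is exclusive
    if user_age in range(0, 6):
        return "Wait a minute... Stop lying!"
    elif user_age in range(6, 19):       # 6-18
        return "You're too young to be interested in this."
    elif user_age in range(19, 25):      # 19-24
        return "Good luck dealing with those taxes."
    elif user_age in range(25, 41):      # 25-40
        return "You guys are the ones carrying our society."
    elif user_age in range(41, 56):      # 41-55
        return "We get it!"
    return "No reply for that age."

# input() returns a string, so convert once with int() before comparing
print(reply_for_age(int("16")))  # You're too young to be interested in this.
```

Explicit comparisons like `6 <= user_age <= 18` would read equally well; the point is that a range of ages needs a range test, not `!=` against a subtraction.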
[ "It seems you're looking to see if a number occurs in a range. This can be accomplished very directly in Python. Just remember that the end of the range is exclusive: 0 to 5 is covered by range(0, 6).\n>>> n = 16\n>>> n in range(0, 17)\nTrue\n>>> n in range(0, 6)\nFalse\n>>> n not in range(0, 6)\nTrue\n>>>\n\n", "you are using a test of 0 - 5 but to python this means zero minus five which is negative five, just as you would do in a calculator. Instead you mean if userAge >= 0 and userAge <= 5 and so on for the remaining tests.\n" ]
[ 1, 0 ]
[]
[]
[ "python" ]
stackoverflow_0074671685_python.txt
Q: My Python Recursive Function is returning None I made a function that is being called recursively, and the condition for it to keep being called is a user input. The recursion is working but the final value of the variable is being returned as None. I am a beginner at Python and i am trying to learn Functions and Recursion before going to Classes, OOP, Wrappers, etc. Here is my code: Main Py: import funcoes_moeda def switch(valor): case = int(input('Escolha uma opcao... (0 para encerrar) : ')) if case == 1: valor = funcoes_moeda.aumentar(valor) print('Valor aumentado: {}'.format(valor)) switch(valor) elif case == 2: pass elif case == 3: pass elif case == 4: pass else: return valor valor = float(input('Insira o valor: ')) print("Escolha a funcao a ser aplicada no valor inserido: \n" \ "1 - Aumentar Valor \n" \ "2 - Diminuir Valor \n" \ "3 - Dobrar Valor \n" \ "4 - Dividir Valor \n" \ "0 - Encerrar o Prorama" ) valor = switch(valor) print('Funcao foi aplicada. O valor final ficou: {}'.format(valor)) Imported Functions: def aumentar(valor): quantia_aumentada = float(input('Insira a quantidade que voce deseja acrescentar: ')) valor += quantia_aumentada return valor def diminuir(): pass def dobro(): pass def metade(): pass When i tried executing this, what i got was: Insira o valor: 100.00 Escolha a funcao a ser aplicada no valor inserido: 1 - Aumentar Valor 2 - Diminuir Valor 3 - Dobrar Valor 4 - Dividir Valor 0 - Encerrar o Prorama Escolha uma opcao... (0 para encerrar) : 1 Insira a quantidade que voce deseja acrescentar: 100.00 Valor aumentado: 200.0 Escolha uma opcao... (0 para encerrar) : 1 Insira a quantidade que voce deseja acrescentar: 100.00 Valor aumentado: 300.0 Escolha uma opcao... (0 para encerrar) : 0 Funcao foi aplicada. O valor final ficou: None For a test case, you can use: Chose 100.00, option 1 (2 times is enough), increment 100.00 each call. Expected output: Current value = 300.00 (Because 100 + 100 + 100) But i got None at the last print... 
Please. What am i doing wrong??? :( Thank you for all the help. PS: I tried going through the following answers, but i was not able to solve this problem because the explanation was for the problems in the question, and i found it was a litle different than mine.. 1 > Recursive function returning none - Dint understand. 2 > python recursive function returning none instead of string - This is treating a CSV file. A: The problem is that when the case variable is equal to 0, the return valor statement is being executed within the switch() function, but this function is being called recursively so the value of valor is not being returned to the caller. To fix this, you can add another return statement at the end of the switch() function that returns the value of valor when case is 0. This will ensure that the value of valor is returned to the caller, even when the switch() function is being called recursively. def switch(valor): case = int(input('Escolha uma opcao... (0 para encerrar) : ')) if case == 1: valor = funcoes_moeda.aumentar(valor) print('Valor aumentado: {}'.format(valor)) switch(valor) elif case == 2: pass elif case == 3: pass elif case == 4: pass else: return valor # Return the value of valor when case is 0 return valor A: When returning a value the recursion is braking and the value returns to the previous call of the function. If you want to return the value to the first call, you can return the value every time you call switch: import funcoes_moeda def switch(valor): case = int(input('Escolha uma opcao... 
(0 para encerrar) : ')) if case == 1: valor = funcoes_moeda.aumentar(valor) print('Valor aumentado: {}'.format(valor)) return switch(valor) elif case == 2: pass elif case == 3: pass elif case == 4: pass else: return valor valor = float(input('Insira o valor: ')) print("Escolha a funcao a ser aplicada no valor inserido: \n" \ "1 - Aumentar Valor \n" \ "2 - Diminuir Valor \n" \ "3 - Dobrar Valor \n" \ "4 - Dividir Valor \n" \ "0 - Encerrar o Prorama" ) valor = switch(valor) print('Funcao foi aplicada. O valor final ficou: {}'.format(valor)) A: Two issues: The return at the end of the function 'switch' should be at the end of the function and not part of the 'else' case. Because if you select '1' you will change the value of 'valor', but after changing the value you exit the function withou returning a value. The returned value of the recursive call to 'switch' in the code section of 'if case == 1:' is not saved and therefore lost. If you only correct the first point above you will get changed values only from the top call to 'switch', but all changes from the recursive calls will get lost. The solution proposed by thebjorn solves both above listed issues. But treating the issues one by one, might be easier to understand.
My Python Recursive Function is returning None
I made a function that is being called recursively, and the condition for it to keep being called is a user input. The recursion is working but the final value of the variable is being returned as None. I am a beginner at Python and i am trying to learn Functions and Recursion before going to Classes, OOP, Wrappers, etc. Here is my code: Main Py: import funcoes_moeda def switch(valor): case = int(input('Escolha uma opcao... (0 para encerrar) : ')) if case == 1: valor = funcoes_moeda.aumentar(valor) print('Valor aumentado: {}'.format(valor)) switch(valor) elif case == 2: pass elif case == 3: pass elif case == 4: pass else: return valor valor = float(input('Insira o valor: ')) print("Escolha a funcao a ser aplicada no valor inserido: \n" \ "1 - Aumentar Valor \n" \ "2 - Diminuir Valor \n" \ "3 - Dobrar Valor \n" \ "4 - Dividir Valor \n" \ "0 - Encerrar o Prorama" ) valor = switch(valor) print('Funcao foi aplicada. O valor final ficou: {}'.format(valor)) Imported Functions: def aumentar(valor): quantia_aumentada = float(input('Insira a quantidade que voce deseja acrescentar: ')) valor += quantia_aumentada return valor def diminuir(): pass def dobro(): pass def metade(): pass When i tried executing this, what i got was: Insira o valor: 100.00 Escolha a funcao a ser aplicada no valor inserido: 1 - Aumentar Valor 2 - Diminuir Valor 3 - Dobrar Valor 4 - Dividir Valor 0 - Encerrar o Prorama Escolha uma opcao... (0 para encerrar) : 1 Insira a quantidade que voce deseja acrescentar: 100.00 Valor aumentado: 200.0 Escolha uma opcao... (0 para encerrar) : 1 Insira a quantidade que voce deseja acrescentar: 100.00 Valor aumentado: 300.0 Escolha uma opcao... (0 para encerrar) : 0 Funcao foi aplicada. O valor final ficou: None For a test case, you can use: Chose 100.00, option 1 (2 times is enough), increment 100.00 each call. Expected output: Current value = 300.00 (Because 100 + 100 + 100) But i got None at the last print... Please. What am i doing wrong??? 
:( Thank you for all the help. PS: I tried going through the following answers, but i was not able to solve this problem because the explanation was for the problems in the question, and i found it was a litle different than mine.. 1 > Recursive function returning none - Dint understand. 2 > python recursive function returning none instead of string - This is treating a CSV file.
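As the second and third answers point out, the core rule is that the result of the recursive call must itself be returned. A minimal sketch of that pattern, with a list standing in for the interactive input() calls so it runs non-interactively:

```python
def switch(valor, inputs):
    """inputs is a stand-in for the interactive input() calls."""
    case = inputs.pop(0)
    if case == 1:
        valor += inputs.pop(0)        # stands in for funcoes_moeda.aumentar()
        return switch(valor, inputs)  # propagate the inner result outward
    return valor                      # case 0 (or anything else) ends the recursion

# 100 + 100 + 100, then 0 to stop -- mirrors the test case in the question
print(switch(100.0, [1, 100.0, 1, 100.0, 0]))  # 300.0
```

Without the `return` in front of the recursive call, the innermost value is computed and then discarded, and the outer call falls off the end of the function, which is exactly how Python produces the observed None.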
[ "The problem is that when the case variable is equal to 0, the return valor statement is being executed within the switch() function, but this function is being called recursively so the value of valor is not being returned to the caller.\nTo fix this, you can add another return statement at the end of the switch() function that returns the value of valor when case is 0. This will ensure that the value of valor is returned to the caller, even when the switch() function is being called recursively.\ndef switch(valor):\n case = int(input('Escolha uma opcao... (0 para encerrar) : '))\n if case == 1:\n valor = funcoes_moeda.aumentar(valor)\n print('Valor aumentado: {}'.format(valor))\n switch(valor)\n\n elif case == 2:\n pass\n elif case == 3:\n pass\n elif case == 4:\n pass\n else:\n return valor\n\n # Return the value of valor when case is 0\n return valor\n\n", "When returning a value the recursion is braking and the value returns to the previous call of the function. If you want to return the value to the first call, you can return the value every time you call switch:\nimport funcoes_moeda\n\ndef switch(valor):\n case = int(input('Escolha uma opcao... (0 para encerrar) : '))\n if case == 1:\n valor = funcoes_moeda.aumentar(valor)\n print('Valor aumentado: {}'.format(valor))\n return switch(valor)\n\n elif case == 2:\n pass\n elif case == 3:\n pass\n elif case == 4:\n pass\n else:\n return valor\n\nvalor = float(input('Insira o valor: '))\nprint(\"Escolha a funcao a ser aplicada no valor inserido: \\n\" \\\n \"1 - Aumentar Valor \\n\" \\\n \"2 - Diminuir Valor \\n\" \\\n \"3 - Dobrar Valor \\n\" \\\n \"4 - Dividir Valor \\n\" \\\n \"0 - Encerrar o Prorama\"\n )\n\nvalor = switch(valor)\n\nprint('Funcao foi aplicada. O valor final ficou: {}'.format(valor))\n\n", "Two issues:\n\nThe return at the end of the function 'switch' should be at the end of the function and not part of the 'else' case. 
Because if you select '1' you will change the value of 'valor', but after changing the value you exit the function withou returning a value.\nThe returned value of the recursive call to 'switch' in the code section of 'if case == 1:' is not saved and therefore lost.\n\nIf you only correct the first point above you will get changed values only from the top call to 'switch', but all changes from the recursive calls will get lost.\nThe solution proposed by thebjorn solves both above listed issues. But treating the issues one by one, might be easier to understand.\n" ]
[ 1, 0, 0 ]
[]
[]
[ "function", "python", "python_3.x", "recursion" ]
stackoverflow_0074671577_function_python_python_3.x_recursion.txt
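The fix both answers above describe — returning the value of each recursive call — can be shown without `input()` by feeding the menu selections in as data. This is a simplified sketch, not the original program: the list of choices and the 10% increase are stand-ins for the user's typed input and for `funcoes_moeda.aumentar`.

```python
# Simulated menu choices replace input(); a 10% increase stands in for
# funcoes_moeda.aumentar (both are illustrative assumptions).
def apply_choices(valor, choices):
    if not choices:          # no more choices: hand the value back
        return valor
    case, rest = choices[0], choices[1:]
    if case == 1:
        valor = valor * 1.10
        # the crucial part: RETURN the recursive call, so the final
        # value propagates back to the first caller
        return apply_choices(valor, rest)
    else:                    # 0 (or anything else) ends the program
        return valor

final = apply_choices(100.0, [1, 1, 0])
print(final)  # about 121.0
```

Without the `return` in front of the recursive call, `final` would be `None` — the same symptom the question describes.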
Q: (Python) What is wrong in my code causing my program to crash index error Program will run the first time through, but in the second loop it will sometimes crash. It was noted to me that it may be in regard to the toppers line, where a topper can be selected. Error: "IndexError: list index out of range" import random toppers = ['holographic', 'flakies', 'glitter', 'microshimmer'] def topper(): while True: topper = input() #getting input # random number between 1-3 for topper grab randomTop = random.randint(1,4) if topper.upper() == 'N' or topper.upper() == 'NO': print('A clear glossy top coat it is!') break elif topper.upper() == 'Y' or topper.upper() == 'YES': print('The topper you should use is ' + toppers[randomTop] + '.') break elif topper.upper() != ('N', 'NO', 'Y', 'YES') or topper.isalnum(): print('\nPlease enter a valid input. Y or N.') continue playloop() As the program has a break to exit if the user would like, it is expected that the program can loop through endlessly until they are ready to quit. While the first loop works fine, but subsequent trials crash. Any help is appreciated. :) A: toppers = ['holographic', 'flakies', 'glitter', 'microshimmer'] randomTop = random.randint(1,4) print('The topper you should use is ' + toppers[randomTop] + '.') toppers is a list with four items. Python lists are indexed starting at zero, so the valid indexes are 0-3. But you're picking a random number from 1 to 4. If you happen to pick 4, that is out of range. Use random.randint(0,3). Or even better, use random.choice(toppers), so you don't have to worry about the index value at all.
(Python) What is wrong in my code causing my program to crash index error
Program will run the first time through, but in the second loop it will sometimes crash. It was noted to me that it may be in regard to the toppers line, where a topper can be selected. Error: "IndexError: list index out of range" import random toppers = ['holographic', 'flakies', 'glitter', 'microshimmer'] def topper(): while True: topper = input() #getting input # random number between 1-3 for topper grab randomTop = random.randint(1,4) if topper.upper() == 'N' or topper.upper() == 'NO': print('A clear glossy top coat it is!') break elif topper.upper() == 'Y' or topper.upper() == 'YES': print('The topper you should use is ' + toppers[randomTop] + '.') break elif topper.upper() != ('N', 'NO', 'Y', 'YES') or topper.isalnum(): print('\nPlease enter a valid input. Y or N.') continue playloop() As the program has a break to exit if the user would like, it is expected that the program can loop through endlessly until they are ready to quit. While the first loop works fine, but subsequent trials crash. Any help is appreciated. :)
[ "toppers = ['holographic', 'flakies', 'glitter', 'microshimmer']\n\nrandomTop = random.randint(1,4)\nprint('The topper you should use is ' + toppers[randomTop] + '.')\n\ntoppers is a list with four items. Python lists are indexed starting at zero, so the valid indexes are 0-3.\nBut you're picking a random number from 1 to 4. If you happen to pick 4, that is out of range. Use random.randint(0,3).\nOr even better, use random.choice(toppers), so you don't have to worry about the index value at all.\n" ]
[ 1 ]
[]
[]
[ "index_error", "python" ]
stackoverflow_0074671695_index_error_python.txt
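The off-by-one above can be demonstrated directly: `random.randint(1, 4)` is inclusive on both ends, so it can return 4 — one past the last valid index (0–3) of a four-item list — while `random.choice` never needs an index at all. A minimal sketch:

```python
import random

toppers = ['holographic', 'flakies', 'glitter', 'microshimmer']

# random.randint(1, 4) is inclusive on BOTH ends, so it can return 4 --
# one past the last valid index (0-3) of this four-item list.
# random.choice removes the index arithmetic entirely.
picks = [random.choice(toppers) for _ in range(200)]
print(set(picks) <= set(toppers))  # always True: every pick is a real topper
```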
Q: How do I make add the input parts into the loop like the menu? The part of the code that ask for the length of the list and then the actual numbers the users want to use needs to be looped into the program like the menu is looped in. So that once it runs and the program is fulfilled it can loop again. However, the program needs to completely end once the number -1000 is entered or option C from the menu is entered. When I try I keep getting errors. Can someone help, please? I tried to move that statement into the loop, I tried to call it in the while loop, but when I do that it either prints the statement twice or just gives an error. numbersEntered = [] def menu(): print("[A] Smallest") print("[B] Largest") print("[C] Quit Game") lengthNumbers = int(input("Please enter the length of your list/array:")) print("Please enter your numbers individually:") for x in range(lengthNumbers): data=int(input()) numbersEntered.append(data) if (data == -1000): break while numbersEntered[-1] != -1000: menu() Options=str(input(f"Please select either option A,B,or C:")).upper() if Options == 'A': print("The smallest number is:", min(numbersEntered)) elif Options == 'B': print("The largest number is:", max(numbersEntered) ) # check for 'C' input from user to break out of the while loop elif Options == 'C': break print('Quit Game') A: See comments in line. import sys # will be used to exit the game. numbersEntered = [] # Accumulate the numbers provided by the user # Since I see you are familiar with functions, use a function to end def quit_game(): print('Quit Game') sys.exit() def menu(): print("[A] Smallest") print("[B] Largest") print("[C] Quit Game") # Get the starting number of number to process. 
lengthNumbers = int(input("Please enter the length of your list/array:")) print("Please enter your numbers individually:") while True: for x in range(lengthNumbers): data = int(input()) numbersEntered.append(data) if data == -1000: quit_game() # Once all numbers are entered provide the user a choice: # display the smallest or largest of the numbers or exit. menu() Options = str(input(f"Please select either option A,B,or C:")).upper() if Options == 'A': print("The smallest number is:", min(numbersEntered)) elif Options == 'B': print("The largest number is:", max(numbersEntered)) # check for 'C' input from user to break out of the while loop elif Options == 'C': quit_game() # Allow the user to process another set. lengthNumbers = int(input("Please enter the length of your list/array:")) print("Please enter your numbers individually:") # print('Quit Game') Moved to the function
How do I make add the input parts into the loop like the menu?
The part of the code that ask for the length of the list and then the actual numbers the users want to use needs to be looped into the program like the menu is looped in. So that once it runs and the program is fulfilled it can loop again. However, the program needs to completely end once the number -1000 is entered or option C from the menu is entered. When I try I keep getting errors. Can someone help, please? I tried to move that statement into the loop, I tried to call it in the while loop, but when I do that it either prints the statement twice or just gives an error. numbersEntered = [] def menu(): print("[A] Smallest") print("[B] Largest") print("[C] Quit Game") lengthNumbers = int(input("Please enter the length of your list/array:")) print("Please enter your numbers individually:") for x in range(lengthNumbers): data=int(input()) numbersEntered.append(data) if (data == -1000): break while numbersEntered[-1] != -1000: menu() Options=str(input(f"Please select either option A,B,or C:")).upper() if Options == 'A': print("The smallest number is:", min(numbersEntered)) elif Options == 'B': print("The largest number is:", max(numbersEntered) ) # check for 'C' input from user to break out of the while loop elif Options == 'C': break print('Quit Game')
[ "See comments in line.\nimport sys # will be used to exit the game.\n\nnumbersEntered = [] # Accumulate the numbers provided by the user\n\n\n# Since I see you are familiar with functions, use a function to end\n\ndef quit_game():\n print('Quit Game')\n sys.exit()\n\n\ndef menu():\n print(\"[A] Smallest\")\n print(\"[B] Largest\")\n print(\"[C] Quit Game\")\n\n# Get the starting number of number to process.\nlengthNumbers = int(input(\"Please enter the length of your list/array:\"))\nprint(\"Please enter your numbers individually:\")\n\nwhile True:\n for x in range(lengthNumbers):\n data = int(input())\n numbersEntered.append(data)\n if data == -1000:\n quit_game()\n # Once all numbers are entered provide the user a choice:\n # display the smallest or largest of the numbers or exit.\n menu()\n Options = str(input(f\"Please select either option A,B,or C:\")).upper()\n if Options == 'A':\n print(\"The smallest number is:\", min(numbersEntered))\n elif Options == 'B':\n print(\"The largest number is:\", max(numbersEntered))\n # check for 'C' input from user to break out of the while loop\n elif Options == 'C':\n quit_game()\n # Allow the user to process another set.\n lengthNumbers = int(input(\"Please enter the length of your list/array:\"))\n print(\"Please enter your numbers individually:\")\n# print('Quit Game') Moved to the function\n\n" ]
[ 0 ]
[]
[]
[ "arrays", "nested_loops", "python", "while_loop" ]
stackoverflow_0074671052_arrays_nested_loops_python_while_loop.txt
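The restructuring in the answer above can be unit-tested by pulling one batch of entries into a helper and replacing `input()` with plain data. The batches below are made up for illustration; the sentinel behavior matches the question's -1000 rule.

```python
SENTINEL = -1000

def process_batch(numbers):
    """Return ({'smallest': ..., 'largest': ...}, stop_flag) for one batch."""
    kept = []
    for n in numbers:
        if n == SENTINEL:
            return {}, True          # -1000 ends the whole game
        kept.append(n)
    return {'smallest': min(kept), 'largest': max(kept)}, False

# Simulated sessions instead of input(): keep looping until a batch
# contains the sentinel (these batches are made up for illustration)
for batch in [[3, 7, 1], [9, 2], [5, SENTINEL]]:
    result, stop = process_batch(batch)
    if stop:
        print('Quit Game')
        break
    print(result)
```

Separating "read a batch" from "decide whether to stop" is what lets the outer `while True` loop repeat cleanly.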
Q: How can I receive data that someone is sending me through a HTTP GET request (data encoded in the path or query string parameters) I will be receiving data shared via "a HTTP (or HTTPS) GET request" every five minutes, so I am developing a flask server to 'listen' for this data. I understand from trawling google that this is an uncommon way to share data, and I haven't been able to find documentation on how to ingest this data. Any nudges would be very much appreciated. For example, I am asking person to point the webhook data at https://example.com/webhook. Here is an example of the string I am expecting to receive: 'https://example.com/webhook?Date_Event=2013-12-31T15:21:32Z&DeviceID=Item1&Variable=11.1' As I will be receiving this every five minutes, I deploy a flask server associated with that url. I'm confused as to how to accept this string and assign it to my 'receivedstring' variable so that I can process. Thank you! from flask import Flask, request, abort # For deploying the web server app = Flask(__name__) # Step 1 - Initiate flask app @app.route('/webhook', methods=['GET']) def webhook(): if request.method == 'GET': receivedstring = #response? ... A: Here is one way to accept the string and assign it to the receivedstring variable in your Flask server: from flask import Flask, request, abort app = Flask(__name__) @app.route('/webhook', methods=['GET']) def webhook(): if request.method == 'GET': receivedstring = request.args.get('Date_Event') # You can now process the received string as needed. # For example, you can print it to the console: print(receivedstring) # You can also access the other query parameters in the same way: device_id = request.args.get('DeviceID') variable = request.args.get('Variable') print(device_id, variable) # Return a response to the client if needed return "OK" if __name__ == '__main__': app.run() In this code, the request.args.get() method is used to retrieve the query parameters from the request URL. 
This method takes the name of the query parameter (e.g. "Date_Event") as its argument and returns the corresponding value (e.g. "2013-12-31T15:21:32Z"). If you want to retrieve all query parameters at once, you can use the request.args property, which is a dictionary-like object containing all the query parameters and their values. For example: @app.route('/webhook', methods=['GET']) def webhook(): if request.method == 'GET': # Get all query parameters as a dictionary query_params = request.args # You can now process the query parameters as needed # For example, you can print them to the console: print(query_params) # Return a response to the client if needed return "OK" I hope this helps! Let me know if you have any other questions.
How can I receive data that someone is sending me through a HTTP GET request (data encoded in the path or query string parameters)
I will be receiving data shared via "a HTTP (or HTTPS) GET request" every five minutes, so I am developing a flask server to 'listen' for this data. I understand from trawling google that this is an uncommon way to share data, and I haven't been able to find documentation on how to ingest this data. Any nudges would be very much appreciated. For example, I am asking person to point the webhook data at https://example.com/webhook. Here is an example of the string I am expecting to receive: 'https://example.com/webhook?Date_Event=2013-12-31T15:21:32Z&DeviceID=Item1&Variable=11.1' As I will be receiving this every five minutes, I deploy a flask server associated with that url. I'm confused as to how to accept this string and assign it to my 'receivedstring' variable so that I can process. Thank you! from flask import Flask, request, abort # For deploying the web server app = Flask(__name__) # Step 1 - Initiate flask app @app.route('/webhook', methods=['GET']) def webhook(): if request.method == 'GET': receivedstring = #response? ...
[ "Here is one way to accept the string and assign it to the receivedstring variable in your Flask server:\nfrom flask import Flask, request, abort\n\napp = Flask(__name__)\n\[email protected]('/webhook', methods=['GET'])\ndef webhook():\n if request.method == 'GET':\n receivedstring = request.args.get('Date_Event')\n # You can now process the received string as needed.\n # For example, you can print it to the console:\n print(receivedstring)\n # You can also access the other query parameters in the same way:\n device_id = request.args.get('DeviceID')\n variable = request.args.get('Variable')\n print(device_id, variable)\n # Return a response to the client if needed\n return \"OK\"\n\nif __name__ == '__main__':\n app.run()\n\nIn this code, the request.args.get() method is used to retrieve the query parameters from the request URL. This method takes the name of the query parameter (e.g. \"Date_Event\") as its argument and returns the corresponding value (e.g. \"2013-12-31T15:21:32Z\").\nIf you want to retrieve all query parameters at once, you can use the request.args property, which is a dictionary-like object containing all the query parameters and their values. For example:\[email protected]('/webhook', methods=['GET'])\ndef webhook():\n if request.method == 'GET':\n # Get all query parameters as a dictionary\n query_params = request.args\n # You can now process the query parameters as needed\n # For example, you can print them to the console:\n print(query_params)\n # Return a response to the client if needed\n return \"OK\"\n\nI hope this helps! Let me know if you have any other questions.\n" ]
[ 1 ]
[]
[]
[ "flask", "python", "python_requests" ]
stackoverflow_0074671667_flask_python_python_requests.txt
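Independently of Flask, the standard library can show exactly which keys and values `request.args` will expose for the example URL from the question — handy for checking the parameter names before wiring up the server:

```python
from urllib.parse import urlparse, parse_qs

# The example URL from the question
url = ('https://example.com/webhook'
       '?Date_Event=2013-12-31T15:21:32Z&DeviceID=Item1&Variable=11.1')

# parse_qs maps each key to a LIST of values (keys may repeat in a query
# string); take the first value for each key
query = parse_qs(urlparse(url).query)
params = {key: values[0] for key, values in query.items()}
print(params)
```

These are the same names (`Date_Event`, `DeviceID`, `Variable`) that `request.args.get(...)` retrieves inside the Flask view.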
Q: Bag of words process with comments data? I have a training set "x" containing an array, where each list within the array refers to the words of a comment. So if I run len(x) it returns 8000. In particular, I want to choose the common tokens in at least 1% of the comments and then count the number of times each of those tokens appears in each review and generate, for each review, a vector of numeric variables in which each position refers to each of those relevant or common tokens and indicates how many times that token appears in the comment in question. Attached is a sample of x: [['amazon', 'phone', 'serious', 'mind', 'blown', 'serious', 'enjoy', 'use', 'applic', 'full', 'blown', 'websit', 'allow', 'quick', 'track', 'packag', 'descript', 'say'], ['would', 'say', 'app', 'real', 'thing', 'show', 'ghost', 'said', 'quot', 'orang', 'quot', 'ware', 'orang', 'cloth', 'app', 'adiquit', 'would', 'recsmend', 'want', 'talk', 'ghost'], ['love', 'play', 'backgammonthi', 'game', 'offer', 'varieti', 'difficulti', 'make', 'perfect', 'beginn', 'season', 'player']] Any help on how to save in a list the tokens that appear in at least 1% of the reviews? A: To create a list of tokens that appear in at least 1% of the reviews, you can use a dictionary to count the number of times each token appears in the training set, and then filter the dictionary to only include tokens that appear in at least 1% of the reviews.
Here's an example of how you could do this: # First, create an empty dictionary to count the number of times each token appears token_counts = {} # Loop through each review in the training set for review in x: # Loop through each token in the review for token in review: # If the token is not already in the dictionary, add it with a count of 1 if token not in token_counts: token_counts[token] = 1 # If the token is already in the dictionary, increment its count by 1 else: token_counts[token] += 1 # Next, create a list of tokens that appear in at least 1% of the reviews common_tokens = [] # Calculate the number of reviews in the training set num_reviews = len(x) # Loop through each token and its count in the dictionary for token, count in token_counts.items(): # If the token appears in at least 1% of the reviews, add it to the list of common tokens if count / num_reviews >= 0.01: common_tokens.append(token) # Finally, you can use the list of common tokens to generate a vector for each review, where each position in the vector indicates how many times the corresponding token appears in the review for review in x: # Create an empty vector with the same length as the list of common tokens vector = [0] * len(common_tokens) # Loop through each token in the review for token in review: # If the token is in the list of common tokens, increment the corresponding position in the vector if token in common_tokens: vector[common_tokens.index(token)] += 1 # At this point, the vector for the review will be a list of numeric variables, where each position indicates how many times the corresponding common token appears in the review print(vector)
Bag of words process with comments data?
I have a training set "x" containing an array, where each list within the array refers to the words of a comment. So if I run len(x) it returns 8000. In particular, I want to choose the common tokens in at least 1% of the comments and then count the number of times each of those tokens appears in each review and generate, for each review, a vector of numeric variables in which each position refers to each of those relevant or common tokens and indicates how many times that token appears in the comment in question. Attached is a sample of x: [['amazon', 'phone', 'serious', 'mind', 'blown', 'serious', 'enjoy', 'use', 'applic', 'full', 'blown', 'websit', 'allow', 'quick', 'track', 'packag', 'descript', 'say'], ['would', 'say', 'app', 'real', 'thing', 'show', 'ghost', 'said', 'quot', 'orang', 'quot', 'ware', 'orang', 'cloth', 'app', 'adiquit', 'would', 'recsmend', 'want', 'talk', 'ghost'], ['love', 'play', 'backgammonthi', 'game', 'offer', 'varieti', 'difficulti', 'make', 'perfect', 'beginn', 'season', 'player']] Any help on how to save in a list the tokens that appear in at least 1% of the reviews?
[ "To create a list of tokens that appear in at least 1% of the reviews, you can use a dictionary to count the number of times each token appears in the training set, and then filter the dictionary to only include tokens that appear in at least 1% of the reviews.\nHere's an example of how you could do this:\n# First, create an empty dictionary to count the number of times each token appears\ntoken_counts = {}\n\n# Loop through each review in the training set\nfor review in x:\n # Loop through each token in the review\n for token in review:\n # If the token is not already in the dictionary, add it with a count of 1\n if token not in token_counts:\n token_counts[token] = 1\n # If the token is already in the dictionary, increment its count by 1\n else:\n token_counts[token] += 1\n\n# Next, create a list of tokens that appear in at least 1% of the reviews\ncommon_tokens = []\n\n# Calculate the number of reviews in the training set\nnum_reviews = len(x)\n\n# Loop through each token and its count in the dictionary\nfor token, count in token_counts.items():\n # If the token appears in at least 1% of the reviews, add it to the list of common tokens\n if count / num_reviews >= 0.01:\n common_tokens.append(token)\n\n# Finally, you can use the list of common tokens to generate a vector for each review, where each position in the vector indicates how many times the corresponding token appears in the review\nfor review in x:\n # Create an empty vector with the same length as the list of common tokens\n vector = [0] * len(common_tokens)\n # Loop through each token in the review\n for token in review:\n # If the token is in the list of common tokens, increment the corresponding position in the vector\n if token in common_tokens:\n vector[common_tokens.index(token)] += 1\n # At this point, the vector for the review will be a list of numeric variables, where each position indicates how many times the corresponding common token appears in the review\n print(vector)\n\n" ]
[ 0 ]
[]
[]
[ "python" ]
stackoverflow_0074671742_python.txt
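The answer above counts every occurrence of a token across the whole corpus before applying the 1% filter, and looks up positions with `common_tokens.index(token)` (a linear scan). A variant that counts each token at most once per review — which matches "appears in at least 1% of the comments" more literally — and uses a dict for constant-time index lookups, with made-up reviews as stand-in data:

```python
from collections import Counter

# Made-up stand-in reviews (the real x has 8000 of these)
reviews = [
    ['good', 'phone', 'good'],
    ['bad', 'phone'],
    ['good', 'screen'],
]

# Document frequency: count each token at most once per review
doc_freq = Counter(token for review in reviews for token in set(review))
threshold = 0.01 * len(reviews)
common_tokens = sorted(t for t, c in doc_freq.items() if c >= threshold)

# One vector per review; position i counts occurrences of common_tokens[i]
index = {t: i for i, t in enumerate(common_tokens)}
vectors = []
for review in reviews:
    vec = [0] * len(common_tokens)
    for token in review:
        if token in index:
            vec[index[token]] += 1
    vectors.append(vec)

print(common_tokens)
print(vectors)
```

With only three reviews the 1% threshold keeps everything; on the real 8000-review set it would prune rare tokens.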
Q: FileNotFoundError: [Errno 2] No such file or directory: '1.pdf' So I was making a PDF Merger using Python as I found it to be a good project for a beginner like me. I started off with using PyPDF4 and after all the hard work (not that hard) had been done I ran the program only to be greeted by "FileNotFoundError: [Errno 2] No such file or directory: '1.pdf'". First question up, it DID find the filename that is in the specified directory and it does exist there. How did it find its name but still say it doesn't exist? Second Question, How do I get rid of this :< I Use the # thingy to keep the code clean, don't mind if I do! # <------Import Modules--------> from PyPDF4 import PdfFileMerger from os import listdir # <-----------Misc-------------> filedirinput = input("Please enter a directory destination: ") pdf = (".pdf") # <-----Merge our Files------------------ manager = PdfFileMerger()# <------------| PdfFileMerger is now "manager" so that Karens can call it anytime XD for files in listdir(filedirinput):# <--| For all the files in our Input Directory if files.endswith(pdf):# <-------| Check if file ends with .pdf and move to next step manager.append(files)# <--------| Merge all our files in the Directory using Manager (PdfFileMerger) # <--------Output Time YE!!!---------> outputname = input("Please enter your desired filename: ") manager.write(outputname + pdf) # <-----------Misc-------------> print(f"{outputname + pdf} was generated in {filedirinput}") # NOTE This part is in development and you currently CANNOT mail somebody # ALSO, I might turn all of this into a Tkinter GUI program :) print("Do you want to email this to someone: Y/N") yn = input("> ") if yn == "N": print("Thank You for Using PyDF Merger :)") print("Made By NightMX!") Me getting a Error: https://imgur.com/a/sXGpq7R Have a Good Day!, A: You have to pass complete(absolute) path along with file name to manager.append(files) at line#12. 
The directory you got at Ln#6 is used to retrieve the list of files, however you have not used this while appending the files at Ln#12 A: @venkat is correct in his answer. manager.append(files) does not contain the full path, and therefore cannot merge the files. For your example, try for files in listdir(filedirinput): if files.endswith(pdf): manager.append(filedirinput + str("\\") + files) outputname = input("Please enter your desired filename: ") manager.write(filedirinput + str("\\") + outputname + pdf) manager.close()
FileNotFoundError: [Errno 2] No such file or directory: '1.pdf'
So I was making a PDF Merger using Python as I found it to be a good project for a beginner like me. I started off with using PyPDF4 and after all the hard work (not that hard) had been done I ran the program only to be greeted by "FileNotFoundError: [Errno 2] No such file or directory: '1.pdf'". First question up, it DID find the filename that is in the specified directory and it does exist there. How did it find its name but still say it doesn't exist? Second Question, How do I get rid of this :< I Use the # thingy to keep the code clean, don't mind if I do! # <------Import Modules--------> from PyPDF4 import PdfFileMerger from os import listdir # <-----------Misc-------------> filedirinput = input("Please enter a directory destination: ") pdf = (".pdf") # <-----Merge our Files------------------ manager = PdfFileMerger()# <------------| PdfFileMerger is now "manager" so that Karens can call it anytime XD for files in listdir(filedirinput):# <--| For all the files in our Input Directory if files.endswith(pdf):# <-------| Check if file ends with .pdf and move to next step manager.append(files)# <--------| Merge all our files in the Directory using Manager (PdfFileMerger) # <--------Output Time YE!!!---------> outputname = input("Please enter your desired filename: ") manager.write(outputname + pdf) # <-----------Misc-------------> print(f"{outputname + pdf} was generated in {filedirinput}") # NOTE This part is in development and you currently CANNOT mail somebody # ALSO, I might turn all of this into a Tkinter GUI program :) print("Do you want to email this to someone: Y/N") yn = input("> ") if yn == "N": print("Thank You for Using PyDF Merger :)") print("Made By NightMX!") Me getting a Error: https://imgur.com/a/sXGpq7R Have a Good Day!,
[ "You have to pass a complete (absolute) path along with the file name to manager.append(files) at line#12. The directory you got at Ln#6 is used to retrieve the list of files, however you have not used this while appending the files at Ln#12\n", "@venkat is correct in his answer. manager.append(files) does not contain the full path, and therefore cannot merge the files.\nFor your example, try\nfor files in listdir(filedirinput):\n    if files.endswith(pdf):\n        manager.append(filedirinput + str(\"\\\\\") + files)\n\noutputname = input(\"Please enter your desired filename: \")\nmanager.write(filedirinput + str(\"\\\\\") + outputname + pdf)\nmanager.close()\n\n" ]
[ 0, 0 ]
[]
[]
[ "python" ]
stackoverflow_0067101966_python.txt
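Both answers above build paths by concatenating the directory, a hand-written separator, and the file name. `os.path.join` does the same job portably on every platform. A small sketch with hypothetical directory and file names (not the asker's real files):

```python
import os

def pdf_paths(directory, filenames):
    # Join directory and name portably instead of hand-building separators
    return [os.path.join(directory, f) for f in filenames if f.endswith('.pdf')]

paths = pdf_paths('docs', ['1.pdf', 'notes.txt', '2.pdf'])
print(paths)
```

Passing these joined paths to `manager.append(...)` is exactly what fixes the original FileNotFoundError: `listdir` returns bare names, so the merger was looking for `1.pdf` in the current working directory.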
Q: How do you provide an error when an invalid input is provided ie. (a color other than purple or sky blue) or a number while True: try: color1 = str(input("What should the color of the broken window be, Purple or Sky Blue? > ")).lower().strip() color2 = str(input("What should the color of the broken window, white Yellow or Pink? > ")).lower().strip() user_info = {"color2": color1, "color2": color2,} except ValueError: print("Choose a valid input please...") continue else: break Trying to get it to give an error and restart the loop but its not working. A: To check if the input is valid, you can use an if statement inside the try block to check if the input matches the expected colors. If the input is not valid, you can raise a ValueError to indicate that there was an issue with the input. Here is an example of how you can do this: while True: try: color1 = str(input("What should the color of the broken window be, Purple or Sky Blue? > ")).lower().strip() color2 = str(input("What should the color of the broken window, white Yellow or Pink? > ")).lower().strip() if color1 not in ["purple", "sky blue"]: raise ValueError("Invalid input for color1: {}".format(color1)) if color2 not in ["white", "yellow", "pink"]: raise ValueError("Invalid input for color2: {}".format(color2)) user_info = {"color2": color1, "color2": color2,} except ValueError as error: print("Error: {}".format(error)) continue else: break This code will check if the input for color1 is either "purple" or "sky blue", and if the input for color2 is either "white", "yellow", or "pink". If the input is not valid, a ValueError is raised with a message indicating which color had an invalid input. The except block will catch this error and print the error message, then continue the loop to prompt for input again. If the input is valid, the loop will break and the program will continue. I hope this helps. Feel free to ask if you have any other questions. 
A: That works: while True: try: color1 = str(input("What should the color of the broken window be, Purple or Sky Blue? > ")).lower().strip() color2 = str(input("What should the color of the broken window, white Yellow or Pink? > ")).lower().strip() user_info = {"color2": color1, "color2": color2,} valid_input1 = ['purple', 'sky blue'] valid_input2 = ['white', 'yellow', 'pink'] if color1 not in valid_input1 or color2 not in valid_input2: raise ValueError except ValueError: print("Choose a valid input please...") continue else: break You could also do something like this: color1 = str(input("What should the color of the broken window be, Purple or Sky Blue? > ")).lower().strip() color2 = str(input("What should the color of the broken window, white Yellow or Pink? > ")).lower().strip() user_info = {"color2": color1, "color2": color2,} valid_input1 = ['purple', 'sky blue'] valid_input2 = ['white', 'yellow', 'pink'] while color1 not in valid_input1 or color2 not in valid_input2: print("Choose a valid input please...") color1 = str(input("What should the color of the broken window be, Purple or Sky Blue? > ")).lower().strip() color2 = str(input("What should the color of the broken window, white Yellow or Pink? > ")).lower().strip() A: You aren't raising ValueError, you can change your code to check for invalid input using an if statement for each input: while True: color1 = str(input("What should the color of the broken window be, Purple or Sky Blue? > ")).lower().strip() color2 = str(input("What should the color of the broken window, white Yellow or Pink? > ")).lower().strip() user_info = {"color2": color1, "color2": color2,} if color1 not in ["purple", "sky blue"] or color2 not in ["yellow", "pink"]: print("Choose a valid input please...") continue break Also if you are using Python 3, then input returns a string, so passing it to str isn't needed.
How do you provide an error when an invalid input is provided ie. (a color other than purple or sky blue) or a number
while True: try: color1 = str(input("What should the color of the broken window be, Purple or Sky Blue? > ")).lower().strip() color2 = str(input("What should the color of the broken window, white Yellow or Pink? > ")).lower().strip() user_info = {"color2": color1, "color2": color2,} except ValueError: print("Choose a valid input please...") continue else: break Trying to get it to give an error and restart the loop but its not working.
[ "To check if the input is valid, you can use an if statement inside the try block to check if the input matches the expected colors. If the input is not valid, you can raise a ValueError to indicate that there was an issue with the input. Here is an example of how you can do this:\nwhile True:\n try:\n color1 = str(input(\"What should the color of the broken window be, Purple or Sky Blue? > \")).lower().strip()\n color2 = str(input(\"What should the color of the broken window, white Yellow or Pink? > \")).lower().strip()\n if color1 not in [\"purple\", \"sky blue\"]:\n raise ValueError(\"Invalid input for color1: {}\".format(color1))\n if color2 not in [\"white\", \"yellow\", \"pink\"]:\n raise ValueError(\"Invalid input for color2: {}\".format(color2))\n user_info = {\"color2\": color1, \"color2\": color2,}\n except ValueError as error:\n print(\"Error: {}\".format(error))\n continue\n else:\n break\n\nThis code will check if the input for color1 is either \"purple\" or \"sky blue\", and if the input for color2 is either \"white\", \"yellow\", or \"pink\". If the input is not valid, a ValueError is raised with a message indicating which color had an invalid input. The except block will catch this error and print the error message, then continue the loop to prompt for input again. If the input is valid, the loop will break and the program will continue. I hope this helps. Feel free to ask if you have any other questions.\n", "That works:\nwhile True:\n try:\n color1 = str(input(\"What should the color of the broken window be, Purple or Sky Blue? > \")).lower().strip()\n color2 = str(input(\"What should the color of the broken window, white Yellow or Pink? 
> \")).lower().strip()\n user_info = {\"color2\": color1, \"color2\": color2,}\n valid_input1 = ['purple', 'sky blue']\n valid_input2 = ['white', 'yellow', 'pink']\n if color1 not in valid_input1 or color2 not in valid_input2:\n raise ValueError\n except ValueError:\n print(\"Choose a valid input please...\")\n continue\n else:\n break\n\nYou could also do something like this:\n\ncolor1 = str(input(\"What should the color of the broken window be, Purple or Sky Blue? > \")).lower().strip()\ncolor2 = str(input(\"What should the color of the broken window, white Yellow or Pink? > \")).lower().strip()\nuser_info = {\"color2\": color1, \"color2\": color2,}\n\nvalid_input1 = ['purple', 'sky blue']\nvalid_input2 = ['white', 'yellow', 'pink']\n\nwhile color1 not in valid_input1 or color2 not in valid_input2:\n print(\"Choose a valid input please...\")\n color1 = str(input(\"What should the color of the broken window be, Purple or Sky Blue? > \")).lower().strip()\n color2 = str(input(\"What should the color of the broken window, white Yellow or Pink? > \")).lower().strip()\n\n", "You aren't raising ValueError, you can change your code to check for invalid input using an if statement for each input:\nwhile True:\n color1 = str(input(\"What should the color of the broken window be, Purple or Sky Blue? > \")).lower().strip()\n color2 = str(input(\"What should the color of the broken window, white Yellow or Pink? > \")).lower().strip()\n user_info = {\"color2\": color1, \"color2\": color2,}\n if color1 not in [\"purple\", \"sky blue\"] or color2 not in [\"yellow\", \"pink\"]:\n print(\"Choose a valid input please...\")\n continue\n break\n\nAlso if you are using Python 3, then input returns a string, so passing it to str isn't needed.\n" ]
[ 0, 0, 0 ]
[]
[]
[ "python" ]
stackoverflow_0074671719_python.txt
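The validation loops quoted in the answers above can be folded into one reusable helper. This is a sketch, not code from the original answers: the function name and the scripted reader are made up so the prompt logic can run without a live terminal, and the dictionary keys are written as color1/color2 (the original snippets reuse the "color2" key twice, which looks like a typo).

```python
def ask_until_valid(prompt, valid, reader=input):
    """Keep prompting until the (lower-cased, stripped) reply is in `valid`."""
    while True:
        answer = reader(prompt).lower().strip()
        if answer in valid:
            return answer
        print("Choose a valid input please...")

# Demo with a scripted reader instead of the real input():
demo = iter(["  Sky Blue "])
print(ask_until_valid("Window colour? > ", ["purple", "sky blue"],
                      reader=lambda p: next(demo)))  # sky blue
```

In the real program the `reader` argument is simply left at its default, so `input()` is used; the parameter exists only so the loop can be exercised in tests.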
Q: python how can I convert shap values to probability increase/decreases? Issue on shap's repo: https://github.com/slundberg/shap/issues/2783 So currently, I know how to convert the base (expected) value from log odds to probability, with explainer = shap.TreeExplainer(model) shap_values = explainer.shap_values(X_train) odds = np.exp(explainer.expected_value) odds / (1 + odds) This works fine, but the problem comes when I try and convert each individual shap value to a probability increase/decrease. That formula doesn't work, so I'm wondering how I can get the percent increase/decrease that each feature contributes Basically, what percent do each of the lengths (like the length I annotated in red on the picture) take up? I'm looking for a discrete number that corresponds to the percent increase/decrease for the bar of each feature (in probability, not log odds) # this generates the plot shap.force_plot( explainer.expected_value, shap_values[1, :], X_train.iloc[1, :], link='logit' ) A: I think I may have found the answer, not sure if it's correct because the only way to compare is by visual approximation, but here is what I came up with. If anyone could try it out and determine is the calculation is off, that would be amazing! 
First, we have to create a helper function to convert log odds to probabilities def lo_to_prob(x): odds = np.exp(x) return odds / (1 + odds) Set up our shap explainer = shap.TreeExplainer(model) shap_values = explainer.shap_values(X_train) This is the conversion formula I found # this is the row that we want to find the shap outcomes for observation = 0 # this is the column that we want to find the percent change for column_num = 2 # this formula gives us the outcome probability number shap_outcome_val = lo_to_prob( explainer.expected_value + shap_values[observation, :].sum() ) # this give us the shap outcome value, without the single column shap_outcome_minus_one_column = lo_to_prob( explainer.expected_value + shap_values[observation, :].sum() - shap_values[observation, :][column_num] ) # simply subtract the 2 probabilities to get the pct increase that the one column provides pct_change = shap_outcome_val - shap_outcome_minus_one_column pct_change Check the graph to see if the length of the bar of the column we're interested in, is about the length that we get from the calculation shap.force_plot( explainer.expected_value, shap_values[observation, :], X_train.iloc[observation, :], link='logit' ) Again, not sure if this is 100% correct, as the only way to verify is visually. It looks close tho. Try it out and let me know
python how can I convert shap values to probability increase/decreases?
Issue on shap's repo: https://github.com/slundberg/shap/issues/2783 So currently, I know how to convert the base (expected) value from log odds to probability, with explainer = shap.TreeExplainer(model) shap_values = explainer.shap_values(X_train) odds = np.exp(explainer.expected_value) odds / (1 + odds) This works fine, but the problem comes when I try and convert each individual shap value to a probability increase/decrease. That formula doesn't work, so I'm wondering how I can get the percent increase/decrease that each feature contributes Basically, what percent do each of the lengths (like the length I annotated in red on the picture) take up? I'm looking for a discrete number that corresponds to the percent increase/decrease for the bar of each feature (in probability, not log odds) # this generates the plot shap.force_plot( explainer.expected_value, shap_values[1, :], X_train.iloc[1, :], link='logit' )
[ "I think I may have found the answer, not sure if it's correct because the only way to compare is by visual approximation, but here is what I came up with. If anyone could try it out and determine is the calculation is off, that would be amazing!\nFirst, we have to create a helper function to convert log odds to probabilities\ndef lo_to_prob(x):\n odds = np.exp(x)\n return odds / (1 + odds)\n\nSet up our shap\nexplainer = shap.TreeExplainer(model)\nshap_values = explainer.shap_values(X_train)\n\nThis is the conversion formula I found\n# this is the row that we want to find the shap outcomes for\nobservation = 0\n\n# this is the column that we want to find the percent change for\ncolumn_num = 2\n\n# this formula gives us the outcome probability number\nshap_outcome_val = lo_to_prob(\n explainer.expected_value + shap_values[observation, :].sum()\n)\n\n# this give us the shap outcome value, without the single column\nshap_outcome_minus_one_column = lo_to_prob(\n explainer.expected_value + shap_values[observation, :].sum() - shap_values[observation, :][column_num]\n)\n\n# simply subtract the 2 probabilities to get the pct increase that the one column provides\npct_change = shap_outcome_val - shap_outcome_minus_one_column\npct_change\n\nCheck the graph to see if the length of the bar of the column we're interested in, is about the length that we get from the calculation\nshap.force_plot(\n explainer.expected_value,\n shap_values[observation, :],\n X_train.iloc[observation, :],\n link='logit'\n)\n\nAgain, not sure if this is 100% correct, as the only way to verify is visually. It looks close tho. Try it out and let me know\n" ]
[ 0 ]
[]
[]
[ "machine_learning", "math", "probability", "python", "shap" ]
stackoverflow_0074664259_machine_learning_math_probability_python_shap.txt
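The arithmetic in the answer above can be restated without shap or numpy so it is checkable in isolation. Everything here is a toy: `log_odds_to_prob` is the standard logistic function (mathematically identical to `odds / (1 + odds)`), and the input numbers are invented, not from any real model.

```python
import math

def log_odds_to_prob(x):
    """Logistic function: log-odds -> probability (same as odds / (1 + odds))."""
    return 1.0 / (1.0 + math.exp(-x))

def feature_prob_contribution(expected_value, shap_row, column):
    """Probability change attributed to one feature: the full prediction minus
    the same prediction with that feature's SHAP value taken out."""
    total = expected_value + sum(shap_row)
    without = total - shap_row[column]
    return log_odds_to_prob(total) - log_odds_to_prob(without)

# Toy numbers, not from a real model:
print(round(feature_prob_contribution(-0.5, [0.2, 0.8, -0.1], column=1), 4))  # 0.1974
```

Because the logistic function is nonlinear, these per-feature probability deltas do not sum exactly to the total probability shift, which is consistent with the answer's caveat that the bars only approximately match.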
Q: python FastAPI websocket, Popen - print device stdout works, but stdout to websocket client doesn't I have a websocket server and a js/html client. When I click a button on the client, it invokes a shell command on the server. I use Popen to call the command cat /dev/ttyUSB0 to read the content of a device which is constantly changing. When I print the output on terminal it works, but when I send the output through the websocket it doesn't do anything until I interrupt the program. I don't know what's happening. This is what i've tried from fastapi import FastAPI, WebSocket from fastapi.responses import HTMLResponse import json from subprocess import Popen, PIPE, STDOUT import shlex app = FastAPI() html = open('templates/index.html').read() @app.get("/") async def get(): return HTMLResponse(html) Here's where the problem occur async def run_command(websocket: WebSocket): print("running command") command = "cat /dev/ttyUSB0" args = shlex.split(command) with Popen(args, stdout=PIPE, stderr=STDOUT) as process: for output in process.stdout: await websocket.send_text(output.decode()) # print(output.decode(), end='') I tried to send just one readline and it worked, except in a while loop await websocket.send_text( process.stdout.readline() ) Route @app.websocket("/ws") async def websocket_endpoint(websocket: WebSocket): await websocket.accept() while True: message = None data = await websocket.receive_text() try: message = json.loads(data) except: await websocket.send_text("Bad message") if message.get("type") == "monitor": await websocket.send_text("Uploading project") await run_command(websocket) client <form action="" onsubmit="sendMessage(event)"> <button>Send</button> </form> <ul id='output'></ul> <script> var ws = new WebSocket("ws://localhost:8000/ws"); ws.onmessage = function(event) { console.log(event.data) }; function sendMessage(event) { var input = document.getElementById("messageText") ws.send('{"type": "monitor"}') event.preventDefault() } </script>
python FastAPI websocket, Popen - print device stdout works, but stdout to websocket client doesn't
I have a websocket server and a js/html client. When I click a button on the client, it invokes a shell command on the server. I use Popen to call the command cat /dev/ttyUSB0 to read the content of a device which is constantly changing. When I print the output on terminal it works, but when I send the output through the websocket it doesn't do anything until I interrupt the program. I don't know what's happening. This is what i've tried from fastapi import FastAPI, WebSocket from fastapi.responses import HTMLResponse import json from subprocess import Popen, PIPE, STDOUT import shlex app = FastAPI() html = open('templates/index.html').read() @app.get("/") async def get(): return HTMLResponse(html) Here's where the problem occur async def run_command(websocket: WebSocket): print("running command") command = "cat /dev/ttyUSB0" args = shlex.split(command) with Popen(args, stdout=PIPE, stderr=STDOUT) as process: for output in process.stdout: await websocket.send_text(output.decode()) # print(output.decode(), end='') I tried to send just one readline and it worked, except in a while loop await websocket.send_text( process.stdout.readline() ) Route @app.websocket("/ws") async def websocket_endpoint(websocket: WebSocket): await websocket.accept() while True: message = None data = await websocket.receive_text() try: message = json.loads(data) except: await websocket.send_text("Bad message") if message.get("type") == "monitor": await websocket.send_text("Uploading project") await run_command(websocket) client <form action="" onsubmit="sendMessage(event)"> <button>Send</button> </form> <ul id='output'></ul> <script> var ws = new WebSocket("ws://localhost:8000/ws"); ws.onmessage = function(event) { console.log(event.data) }; function sendMessage(event) { var input = document.getElementById("messageText") ws.send('{"type": "monitor"}') event.preventDefault() } </script>
[]
[]
[ "I think the issue is that your websocket server is not handling the client's messages properly. When the client sends a message, the server receives it and parses it as JSON. If the type property of the message is \"monitor\", the server starts running the run_command function. However, your run_command function does not handle the case where the client disconnects from the websocket. As a result, when the client disconnects, the server continues to run the run_command function indefinitely, without being able to send any more messages over the websocket.\nTo fix this, you can add a try block around the for loop in the run_command function, and use the websocket.closed property to check if the client has disconnected. If the client has disconnected, you can break out of the loop and return from the run_command function.\nHere's an example of how you could modify the run_command function to do this:\nasync def run_command(websocket: WebSocket):\n print(\"running command\")\n command = \"cat /dev/ttyUSB0\"\n args = shlex.split(command)\n\n with Popen(args, stdout=PIPE, stderr=STDOUT) as process:\n try:\n for output in process.stdout:\n # If the client has disconnected, break out of the loop\n if websocket.closed:\n break\n await websocket.send_text(output.decode())\n except:\n # Catch any exceptions that occur while sending messages\n pass\n\n" ]
[ -1 ]
[ "fastapi", "popen", "python", "stdout", "websocket" ]
stackoverflow_0074671794_fastapi_popen_python_stdout_websocket.txt
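A likely root cause in the question above is that iterating `process.stdout` inside an `async def` is a blocking call, so the event loop never gets a chance to flush the websocket messages until the program is interrupted. Below is a hedged, framework-free sketch of the non-blocking alternative using asyncio's own subprocess support; `stream_lines` and the demo command are illustrative, and in the FastAPI handler each collected line would instead go to `await websocket.send_text(...)`.

```python
import asyncio
import sys

async def stream_lines(*cmd):
    """Read a subprocess's stdout line by line without blocking the event loop."""
    proc = await asyncio.create_subprocess_exec(
        *cmd, stdout=asyncio.subprocess.PIPE, stderr=asyncio.subprocess.STDOUT
    )
    lines = []
    while True:
        raw = await proc.stdout.readline()  # awaits instead of blocking the loop
        if not raw:                         # b'' means the child closed stdout
            break
        lines.append(raw.decode().rstrip("\r\n"))
        # in the real handler: await websocket.send_text(line)
    await proc.wait()
    return lines

# Demo: a short-lived Python child stands in for `cat /dev/ttyUSB0`.
out = asyncio.run(stream_lines(sys.executable, "-c", "print('a'); print('b')"))
print(out)  # ['a', 'b']
```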
Q: what is the meaning of the line bboxes= utils.format_boxes(bboxes,height,weight) I'm trying for object tracking using webcam using yolov4. I want to know the meaning of this line -> bboxes = utils.format_boxes(bboxes, original_h, original_w). I'm using https://github.com/theAIGuysCode/yolov4-deepsort.git repository for cloning. One can find the above line in object_tracer.py file. - line 151. # format bounding boxes from normalized ymin, xmin, ymax, xmax ---> xmin, ymin, width, height original_h, original_w, _ = frame.shape bboxes = utils.format_boxes(bboxes, original_h, original_w) A: The answer is literally in the comment at the first line of the code you pasted. This method translates bounding boxes in normalized coordinates (xmin, ymin, xmax, y max) to not normalized coordinates (xmin, ymin, width, height). Coordinates are usually expressed in pixels, which is the not normalized form. Normalized coordinates are ones that are divided by the image dimensions, i.e. numbers between 0 and 1. The point (xmin, ymin) is the top-left corner of the bounding box and (xmax, ymax) the bottom-right one. (width, height) is simply the dimensio of the bounding box.
what is the meaning of the line bboxes= utils.format_boxes(bboxes,height,weight)
I'm trying for object tracking using webcam using yolov4. I want to know the meaning of this line -> bboxes = utils.format_boxes(bboxes, original_h, original_w). I'm using https://github.com/theAIGuysCode/yolov4-deepsort.git repository for cloning. One can find the above line in object_tracer.py file. - line 151. # format bounding boxes from normalized ymin, xmin, ymax, xmax ---> xmin, ymin, width, height original_h, original_w, _ = frame.shape bboxes = utils.format_boxes(bboxes, original_h, original_w)
[ "The answer is literally in the comment at the first line of the code you pasted.\nThis method translates bounding boxes in normalized coordinates (xmin, ymin, xmax, ymax) to non-normalized coordinates (xmin, ymin, width, height).\nCoordinates are usually expressed in pixels, which is the non-normalized form. Normalized coordinates are ones that are divided by the image dimensions, i.e. numbers between 0 and 1.\nThe point (xmin, ymin) is the top-left corner of the bounding box and (xmax, ymax) the bottom-right one. (width, height) is simply the dimension of the bounding box.\n" ]
[ 0 ]
[]
[]
[ "python", "yolov4" ]
stackoverflow_0074645659_python_yolov4.txt
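The conversion described in the answer can be illustrated with a plain function. This is a guess at the typical shape of such a helper, not the repository's actual `utils.format_boxes`; it assumes the input order `(ymin, xmin, ymax, xmax)` stated in the code comment.

```python
def format_box(box, image_h, image_w):
    """(ymin, xmin, ymax, xmax) in [0, 1]  ->  (xmin, ymin, width, height) in pixels."""
    ymin, xmin, ymax, xmax = box
    x = int(xmin * image_w)
    y = int(ymin * image_h)
    w = int((xmax - xmin) * image_w)
    h = int((ymax - ymin) * image_h)
    return x, y, w, h

# A box covering the middle of a 600x400 frame:
print(format_box((0.25, 0.10, 0.75, 0.50), image_h=400, image_w=600))  # (60, 100, 240, 200)
```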
Q: RuntimeError: Sizes of tensors must match except in dimension 0. Expected size 30 but got size 31 for tensor number 1 in the list Here's the part of my code. from transformers import BertTokenizer,BertForSequenceClassification,AdamW tokenizer = BertTokenizer.from_pretrained('bert-base-uncased',do_lower_case = True,truncation=True) input_ids = [] attention_mask = [] for i in text: encoded_data = tokenizer.encode_plus( i, add_special_tokens=True, truncation=True, max_length=64, padding=True, #pad_to_max_length = True, return_attention_mask= True, return_tensors='pt') input_ids.append(encoded_data['input_ids']) attention_mask.append(encoded_data['attention_mask']) input_ids = torch.cat(input_ids,dim=0) attention_mask = torch.cat(attention_mask,dim=0) labels = torch.tensor(labels) dataset = TensorDataset(input_ids,attention_mask,labels) train_size = int(0.8*len(dataset)) val_size = len(dataset) - train_size train_dataset,val_dataset = random_split(dataset,[train_size,val_size]) print('Training Size - ',train_size) print('Validation Size - ',val_size) train_dl = DataLoader(train_dataset,sampler = RandomSampler(train_dataset), batch_size = 2) val_dl = DataLoader(val_dataset,sampler = SequentialSampler(val_dataset), batch_size = 2) model = BertForSequenceClassification.from_pretrained( 'bert-base-uncased', num_labels = 2, output_attentions = False, output_hidden_states = False) I know I get this line becase of the un-matched size in torch.cat. I wonder how can i fix it? --------------------------------------------------------------------------- RuntimeError Traceback (most recent call last) Input In [9], in <cell line: 18>() 16 input_ids.append(encoded_data['input_ids']) 17 attention_mask.append(encoded_data['attention_mask']) ---> 18 input_ids = torch.cat(input_ids,dim=0) 19 attention_mask = torch.cat(attention_mask,dim=0) 20 labels = torch.tensor(labels) RuntimeError: Sizes of tensors must match except in dimension 0. 
Expected size 30 but got size 31 for tensor number 1 in the list. I get an error here. It is due to the unmatched dimension . But I have no idea where i can fix it. A: The error message says that you are trying to concatenate tensors of different sizes along the 0th dimension, which is not allowed. This is likely happening because you are not specifying the pad_to_max_length argument when calling tokenizer.encode_plus(), which means that the length of the encoded tensors will not be the same for all input texts. To fix this error, you can either specify pad_to_max_length = True when calling tokenizer.encode_plus(), which will ensure that all tensors are padded to the same length, or you can use the torch.nn.utils.rnn.pad_sequence() function to pad the tensors before concatenating them. Here is an example of how you could use pad_sequence() to fix the error: from torch.nn.utils.rnn import pad_sequence # Encode the input texts and create the input tensors input_ids = [] attention_mask = [] for i in text: encoded_data = tokenizer.encode_plus( i, add_special_tokens=True, truncation=True, max_length=64, padding=True, return_attention_mask= True, return_tensors='pt') input_ids.append(encoded_data['input_ids']) attention_mask.append(encoded_data['attention_mask']) # Pad the input tensors to the same length input_ids = pad_sequence(input_ids, batch_first=True) attention_mask = pad_sequence(attention_mask, batch_first=True) # Create the label tensor labels = torch.tensor(labels) # Create the dataset and dataloaders dataset = TensorDataset(input_ids, attention_mask, labels) train_size = int(0.8 * len(dataset)) val_size = len(dataset) - train_size train_dataset, val_dataset = random_split(dataset, [train_size, val_size]) train_dl = DataLoader(train_dataset, sampler=RandomSampler(train_dataset), batch_size=2) val_dl = DataLoader(val_dataset, sampler=SequentialSampler(val_dataset), batch_size=2) # Create and train the model model = BertForSequenceClassification.from_pretrained( 
'bert-base-uncased', num_labels=2, output_attentions=False, output_hidden_states=False)
RuntimeError: Sizes of tensors must match except in dimension 0. Expected size 30 but got size 31 for tensor number 1 in the list
Here's the part of my code. from transformers import BertTokenizer,BertForSequenceClassification,AdamW tokenizer = BertTokenizer.from_pretrained('bert-base-uncased',do_lower_case = True,truncation=True) input_ids = [] attention_mask = [] for i in text: encoded_data = tokenizer.encode_plus( i, add_special_tokens=True, truncation=True, max_length=64, padding=True, #pad_to_max_length = True, return_attention_mask= True, return_tensors='pt') input_ids.append(encoded_data['input_ids']) attention_mask.append(encoded_data['attention_mask']) input_ids = torch.cat(input_ids,dim=0) attention_mask = torch.cat(attention_mask,dim=0) labels = torch.tensor(labels) dataset = TensorDataset(input_ids,attention_mask,labels) train_size = int(0.8*len(dataset)) val_size = len(dataset) - train_size train_dataset,val_dataset = random_split(dataset,[train_size,val_size]) print('Training Size - ',train_size) print('Validation Size - ',val_size) train_dl = DataLoader(train_dataset,sampler = RandomSampler(train_dataset), batch_size = 2) val_dl = DataLoader(val_dataset,sampler = SequentialSampler(val_dataset), batch_size = 2) model = BertForSequenceClassification.from_pretrained( 'bert-base-uncased', num_labels = 2, output_attentions = False, output_hidden_states = False) I know I get this line becase of the un-matched size in torch.cat. I wonder how can i fix it? --------------------------------------------------------------------------- RuntimeError Traceback (most recent call last) Input In [9], in <cell line: 18>() 16 input_ids.append(encoded_data['input_ids']) 17 attention_mask.append(encoded_data['attention_mask']) ---> 18 input_ids = torch.cat(input_ids,dim=0) 19 attention_mask = torch.cat(attention_mask,dim=0) 20 labels = torch.tensor(labels) RuntimeError: Sizes of tensors must match except in dimension 0. Expected size 30 but got size 31 for tensor number 1 in the list. I get an error here. It is due to the unmatched dimension . But I have no idea where i can fix it.
[ "The error message says that you are trying to concatenate tensors of different sizes along the 0th dimension, which is not allowed. This is likely happening because you are not specifying the pad_to_max_length argument when calling tokenizer.encode_plus(), which means that the length of the encoded tensors will not be the same for all input texts.\nTo fix this error, you can either specify pad_to_max_length = True when calling tokenizer.encode_plus(), which will ensure that all tensors are padded to the same length, or you can use the torch.nn.utils.rnn.pad_sequence() function to pad the tensors before concatenating them.\nHere is an example of how you could use pad_sequence() to fix the error:\nfrom torch.nn.utils.rnn import pad_sequence\n\n# Encode the input texts and create the input tensors\ninput_ids = []\nattention_mask = []\n\nfor i in text:\n encoded_data = tokenizer.encode_plus(\n i,\n add_special_tokens=True,\n truncation=True,\n max_length=64,\n padding=True,\n return_attention_mask= True,\n return_tensors='pt')\n input_ids.append(encoded_data['input_ids'])\n attention_mask.append(encoded_data['attention_mask'])\n\n# Pad the input tensors to the same length\ninput_ids = pad_sequence(input_ids, batch_first=True)\nattention_mask = pad_sequence(attention_mask, batch_first=True)\n\n# Create the label tensor\nlabels = torch.tensor(labels)\n\n# Create the dataset and dataloaders\ndataset = TensorDataset(input_ids, attention_mask, labels)\ntrain_size = int(0.8 * len(dataset))\nval_size = len(dataset) - train_size\ntrain_dataset, val_dataset = random_split(dataset, [train_size, val_size])\n\ntrain_dl = DataLoader(train_dataset, sampler=RandomSampler(train_dataset),\n batch_size=2)\nval_dl = DataLoader(val_dataset, sampler=SequentialSampler(val_dataset),\n batch_size=2)\n\n# Create and train the model\nmodel = BertForSequenceClassification.from_pretrained(\n 'bert-base-uncased',\n num_labels=2,\n output_attentions=False,\n output_hidden_states=False)\n\n" ]
[ 1 ]
[]
[]
[ "bert_language_model", "machine_learning", "python", "sentiment_analysis", "torch" ]
stackoverflow_0074668301_bert_language_model_machine_learning_python_sentiment_analysis_torch.txt
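The padding idea from the answer can be shown without torch at all; this toy `pad_sequences` mirrors what right-padding does to a ragged batch. (In the real code, passing `padding='max_length'` together with `max_length=64` to `encode_plus` is another way to get equal-length tensors — treat that as a pointer to check against the tokenizer docs, not a verified fix.)

```python
def pad_sequences(seqs, pad_value=0):
    """Right-pad variable-length lists to the longest length in the batch,
    analogous to torch.nn.utils.rnn.pad_sequence(..., batch_first=True)."""
    longest = max(len(s) for s in seqs)
    return [list(s) + [pad_value] * (longest - len(s)) for s in seqs]

batch = [[101, 7592, 102], [101, 102]]  # token ids of unequal length
print(pad_sequences(batch))             # [[101, 7592, 102], [101, 102, 0]]
```

Once every row has the same length, stacking them into one tensor (the step that raised the `RuntimeError`) succeeds.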
Q: How do I change the variables inside of a python table? This is my table. playerdata = { 'Saves' : [{ 'name' : charactername, 'class' : playerclass, 'level' : level, 'race' : playerrace, 'Inventory' : [{ 'Slot 1' : 'Test', 'Slot 2' : 'Test', 'Slot 3' : 'Test' }], 'Attributes' : [{ 'Dexterity' : attDexterity, 'Strength' : attStrength, 'Wisdom' : attWisdom, 'Intelligence' : attIntelligence, 'Charisma' : attCharisma, 'Constitution' : attConstitution }] }] } If the variable charactername updates the value inside of the table stays the same. How would I make the value change when the variable changes? The table gets dumped into a json file. That’s how I know it doesn’t change. A: To update the value in the table when the variable charactername changes, you can use the following code: # Update the value of charactername in the table playerdata['Saves'][0]['name'] = charactername This code accesses the Saves key in the playerdata dictionary, then accesses the first element in the Saves list (which is a dictionary), and finally updates the name key with the current value of the charactername variable. Note that you will also need to update the value in the JSON file by writing the updated playerdata dictionary to the file, using the json.dump() method. Here is an example of how you can do that: import json # Update the value of charactername in the table playerdata['Saves'][0]['name'] = charactername # Open the JSON file for writing with open('playerdata.json', 'w') as f: # Dump the updated playerdata dictionary to the file json.dump(playerdata, f)
How do I change the variables inside of a python table?
This is my table. playerdata = { 'Saves' : [{ 'name' : charactername, 'class' : playerclass, 'level' : level, 'race' : playerrace, 'Inventory' : [{ 'Slot 1' : 'Test', 'Slot 2' : 'Test', 'Slot 3' : 'Test' }], 'Attributes' : [{ 'Dexterity' : attDexterity, 'Strength' : attStrength, 'Wisdom' : attWisdom, 'Intelligence' : attIntelligence, 'Charisma' : attCharisma, 'Constitution' : attConstitution }] }] } If the variable charactername updates the value inside of the table stays the same. How would I make the value change when the variable changes? The table gets dumped into a json file. That’s how I know it doesn’t change.
[ "To update the value in the table when the variable charactername changes, you can use the following code:\n# Update the value of charactername in the table\nplayerdata['Saves'][0]['name'] = charactername\n\nThis code accesses the Saves key in the playerdata dictionary, then accesses the first element in the Saves list (which is a dictionary), and finally updates the name key with the current value of the charactername variable.\nNote that you will also need to update the value in the JSON file by writing the updated playerdata dictionary to the file, using the json.dump() method.\nHere is an example of how you can do that:\nimport json\n\n# Update the value of charactername in the table\nplayerdata['Saves'][0]['name'] = charactername\n\n# Open the JSON file for writing\nwith open('playerdata.json', 'w') as f:\n # Dump the updated playerdata dictionary to the file\n json.dump(playerdata, f)\n\n" ]
[ 0 ]
[]
[]
[ "python" ]
stackoverflow_0074671769_python.txt
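The answer's re-assign-then-dump pattern can be exercised without touching the filesystem; here `playerdata` is trimmed to two keys and `io.StringIO` stands in for the real file handle (function and character names are illustrative).

```python
import io
import json

playerdata = {"Saves": [{"name": "placeholder", "level": 1}]}

def save_character(data, charactername, fp):
    """Plain variables captured at dict-creation time don't track later changes,
    so re-assign the key just before dumping."""
    data["Saves"][0]["name"] = charactername
    json.dump(data, fp)

buf = io.StringIO()  # stand-in for open('playerdata.json', 'w')
save_character(playerdata, "Arwen", buf)
print(buf.getvalue())
```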
Q: Timeseries split datetime data type to seperate date and time columns I am importing a CSV file that contains a nonformatted dataset, the date and time are separated but the data type is an object, and the times' time zone is incorrect. The timezone of this original dataset is EET which is currently 7-hour difference from eastern standard time, (sometimes it is 6 hours during daylight savings time) I am trying to convert the objects to date time format and convert the timezone to Eastern standard time, having the date and time in separate columns. a snippet of my code is as follows: Transform the date column to a datetime format df['date'] = pd.to_datetime(df['date']) the above code successfully converts the date column to datetime data type, great. My trouble is when I work with the time column. I am unable to separate the date and time using .split(), the timezone is not accurate so I have compensated by using a different time zone to account for the -7hrs that I am looking for (us/mountain seems to produce the -7hr that I need) Transform the time column to a datetime data type df["time"] = pd.to_datetime( df["time"], infer_datetime_format = True ) Convert the time column to the US/Eastern timezone df["time"] = df["time"].dt.tz_localize("US/Mountain") As it turns out, the output is: 2022-12-03 23:55:00-07:00 I am looking for an output of 16:55 or even 16:55:00 is fine as well. My question is how can I separate the time from date while in datetime format and subtract the -7:00 so the output is 16:55 (or 16:55:00) I have tried using: df['time'].to_datetime().strftime('%h-%m') and receive the following error: AttributeError: 'Series' object has no attribute 'to_datetime' df['time'].apply(lambda x:time.strptime(x, "%H:%M")) gives the following output 64999 (1900, 1, 1, 23, 55, 0, 0, 1, -1) df['Time'] = pd.to_datetime(df['time']).dt.time gives me the original time '23:55:00' To be clear, I am looking for an output of just the converted time, for example 16:55. 
I do not want 23:55:00-07:00 I am looking for an output of date time split into separate columns in the correct timezone example: date | time ------ ------ 2022-12-02 | 16:55 A: sample: data = { "date": ["2022-12-02"], "time": ["23:55:00"] } df = pd.DataFrame(data) code: df["datetime"] = ( pd.to_datetime(df["date"].str.cat(df["time"], sep=" ")) .dt.tz_localize("UTC") .dt.tz_convert("US/Mountain") .dt.tz_localize(None) ) df["date"], df["time"] = zip(*[(x.date(), x.time()) for x in df.pop("datetime")]) print(df) output: date time 0 2022-12-02 16:55:00
Timeseries split datetime data type to seperate date and time columns
I am importing a CSV file that contains a nonformatted dataset, the date and time are separated but the data type is an object, and the times' time zone is incorrect. The timezone of this original dataset is EET which is currently 7-hour difference from eastern standard time, (sometimes it is 6 hours during daylight savings time) I am trying to convert the objects to date time format and convert the timezone to Eastern standard time, having the date and time in separate columns. a snippet of my code is as follows: Transform the date column to a datetime format df['date'] = pd.to_datetime(df['date']) the above code successfully converts the date column to datetime data type, great. My trouble is when I work with the time column. I am unable to separate the date and time using .split(), the timezone is not accurate so I have compensated by using a different time zone to account for the -7hrs that I am looking for (us/mountain seems to produce the -7hr that I need) Transform the time column to a datetime data type df["time"] = pd.to_datetime( df["time"], infer_datetime_format = True ) Convert the time column to the US/Eastern timezone df["time"] = df["time"].dt.tz_localize("US/Mountain") As it turns out, the output is: 2022-12-03 23:55:00-07:00 I am looking for an output of 16:55 or even 16:55:00 is fine as well. My question is how can I separate the time from date while in datetime format and subtract the -7:00 so the output is 16:55 (or 16:55:00) I have tried using: df['time'].to_datetime().strftime('%h-%m') and receive the following error: AttributeError: 'Series' object has no attribute 'to_datetime' df['time'].apply(lambda x:time.strptime(x, "%H:%M")) gives the following output 64999 (1900, 1, 1, 23, 55, 0, 0, 1, -1) df['Time'] = pd.to_datetime(df['time']).dt.time gives me the original time '23:55:00' To be clear, I am looking for an output of just the converted time, for example 16:55. 
I do not want 23:55:00-07:00 I am looking for an output of date time split into separate columns in the correct timezone example: date | time ------ ------ 2022-12-02 | 16:55
[ "sample:\ndata = {\n \"date\": [\"2022-12-02\"],\n \"time\": [\"23:55:00\"]\n}\ndf = pd.DataFrame(data)\n\ncode:\ndf[\"datetime\"] = (\n pd.to_datetime(df[\"date\"].str.cat(df[\"time\"], sep=\" \"))\n .dt.tz_localize(\"UTC\")\n .dt.tz_convert(\"US/Mountain\")\n .dt.tz_localize(None)\n)\ndf[\"date\"], df[\"time\"] = zip(*[(x.date(), x.time()) for x in df.pop(\"datetime\")])\nprint(df)\n\noutput:\n date time\n0 2022-12-02 16:55:00\n\n" ]
[ 0 ]
[]
[]
[ "datetime", "pandas", "python", "time_series", "types" ]
stackoverflow_0074671714_datetime_pandas_python_time_series_types.txt
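The -7 hour shift the question settles on can also be demonstrated with only the standard library. This sketch uses a fixed offset purely for illustration — a real conversion should go through a timezone database (e.g. `zoneinfo`, as the pandas answer does with `tz_convert`) so daylight-saving changes are handled automatically.

```python
from datetime import datetime, timedelta

def to_eastern(date_str, time_str, offset_hours=-7):
    """Join 'YYYY-MM-DD' + 'HH:MM:SS', shift by a fixed offset, and return
    separate ('YYYY-MM-DD', 'HH:MM') values for the two output columns."""
    stamp = datetime.strptime(f"{date_str} {time_str}", "%Y-%m-%d %H:%M:%S")
    shifted = stamp + timedelta(hours=offset_hours)
    return shifted.strftime("%Y-%m-%d"), shifted.strftime("%H:%M")

print(to_eastern("2022-12-02", "23:55:00"))  # ('2022-12-02', '16:55')
```

Note that the shift can move the date as well as the time, which is why the date column must be recomputed from the shifted timestamp rather than reused as-is.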
Q: How to order list of lists of strings by another list of lists of floats in Pandas I have a Pandas dataframe such that df['cname']: 0 [berkshire, hathaway] 1 [icbc] 2 [saudi, arabian, oil, company, saudi, aramco] 3 [jpmorgan, chase] 4 [china, construction, bank] Name: tokenized_company_name, dtype: object and another Pandas dataframe such that tfidf['output']: [0.7071067811865476, 0.7071067811865476] [1.0] [0.3779598156018814, 0.39838548612653973, 0.39838548612653973, 0.3285496573358837, 0.6570993146717674] [0.7071067811865476, 0.7071067811865476] [0.4225972188244829, 0.510750779645552, 0.7486956870005814] I'm trying to sort each list of tokens in f_sp['tokenized_company_name'] by tfidf['output_column'] such that I get: 0 [berkshire, hathaway] # no difference 1 [icbc] # no difference 2 [aramco, arabian, oil, saudi, company] # re-ordered by decreasing value of tf_sp['output_column'] 3 [chase, jpmorgan] # tied elements should be ordered alphabetically 4 [bank, construction, china] # re-ordered by decreasing value of tf_sp['output_column'] Here's what I've tried so far: (f_sp.apply(lambda x: sorted(x['tokenized_company_name'], key=lambda y: tf_sp.loc[x.name,'output_column'][x['tokenized_company_name'].index(y)], reverse=True), axis=1)) But I get the following error: --------------------------------------------------------------------------- IndexError Traceback (most recent call last) Input In [166], in <cell line: 1>() ----> 1 (f_sp.apply(lambda x: sorted(x['tokenized_company_name'], 2 key=lambda y: tf_sp.loc[x.name,'output_column'][x['tokenized_company_name'].index(y)], 3 reverse=True), axis=1)) File ~\.conda\envs\python37dev\lib\site-packages\pandas\core\frame.py:9555, in DataFrame.apply(self, func, axis, raw, result_type, args, **kwargs) 9544 from pandas.core.apply import frame_apply 9546 op = frame_apply( 9547 self, 9548 func=func, (...) 
9553 kwargs=kwargs, 9554 ) -> 9555 return op.apply().__finalize__(self, method="apply") File ~\.conda\envs\python37dev\lib\site-packages\pandas\core\apply.py:746, in FrameApply.apply(self) 743 elif self.raw: 744 return self.apply_raw() --> 746 return self.apply_standard() File ~\.conda\envs\python37dev\lib\site-packages\pandas\core\apply.py:873, in FrameApply.apply_standard(self) 872 def apply_standard(self): --> 873 results, res_index = self.apply_series_generator() 875 # wrap results 876 return self.wrap_results(results, res_index) File ~\.conda\envs\python37dev\lib\site-packages\pandas\core\apply.py:889, in FrameApply.apply_series_generator(self) 886 with option_context("mode.chained_assignment", None): 887 for i, v in enumerate(series_gen): 888 # ignore SettingWithCopy here in case the user mutates --> 889 results[i] = self.f(v) 890 if isinstance(results[i], ABCSeries): 891 # If we have a view on v, we need to make a copy because 892 # series_generator will swap out the underlying data 893 results[i] = results[i].copy(deep=False) Input In [166], in <lambda>(x) ----> 1 (f_sp.apply(lambda x: sorted(x['tokenized_company_name'], 2 key=lambda y: tf_sp.loc[x.name,'output_column'][x['tokenized_company_name'].index(y)], 3 reverse=True), axis=1)) Input In [166], in <lambda>.<locals>.<lambda>(y) 1 (f_sp.apply(lambda x: sorted(x['tokenized_company_name'], ----> 2 key=lambda y: tf_sp.loc[x.name,'output_column'][x['tokenized_company_name'].index(y)], 3 reverse=True), axis=1)) IndexError: list index out of range Why is this happening? Each list of lists has the same number of elements. A: To sort the list of tokens in f_sp['tokenized_company_name'] by the corresponding value in tf_sp['output_column'], you can use the zip function to combine the two columns and then sort the resulting list of tuples by the value of the second element in each tuple (which is the corresponding value from tf_sp['output_column']). 
You can then extract only the first element of each tuple (which is the token) to obtain the sorted list of tokens. Here is an example of how you can achieve this using a lambda function with the apply method of f_sp: f_sp['tokenized_company_name'] = f_sp.apply(lambda x: [t[0] for t in sorted(zip(x['tokenized_company_name'], tf_sp.loc[x.name, 'output_column']), key=lambda t: t[1], reverse=True)], axis=1) This will sort the list of tokens in f_sp['tokenized_company_name'] by the corresponding value in tf_sp['output_column'] and store the sorted list back in f_sp['tokenized_company_name']. Note that this solution assumes that the length of f_sp['tokenized_company_name'] and tf_sp['output_column'] is the same for each row in f_sp. Otherwise, you may need to handle the case where the length of the two columns is different. A: To order a list of lists of strings by another list of lists of floats in Pandas, you can use the "sort_values" method. Here is an example: import pandas as pd # create dataframe with string lists as data df = pd.DataFrame({'strings': [['apple', 'banana', 'cherry'], ['dog', 'cat', 'bird'], ['red', 'green', 'blue']]}) # create dataframe with float lists as data df_floats = pd.DataFrame({'floats': [[1.0, 2.0, 3.0], [4.0, 5.0, 6.0], [7.0, 8.0, 9.0]]}) # sort the string dataframe by the float dataframe df.sort_values(by=df_floats['floats']) This will return a new dataframe with the strings in each list sorted according to the corresponding list of floats.
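Two details worth noting about the original IndexError: in the sample data, row 2 has six tokens ([saudi, arabian, oil, company, saudi, aramco]) but only five tfidf weights, so .index() can return a position past the end of the weight list; and because 'saudi' appears twice, list.index always returns the first occurrence. The zip-based approach sidesteps both problems. A minimal, self-contained sketch (plain lists stand in for one dataframe row; the (-weight, token) sort key also gives the alphabetical tie-break the question asks for):

```python
# Plain-list sketch of the zip-and-sort idea; tokens/weights stand in for
# one row of f_sp['tokenized_company_name'] and tf_sp['output_column'].
tokens = ['china', 'construction', 'bank']
weights = [0.4225972188244829, 0.510750779645552, 0.7486956870005814]

# Sort by weight descending; the token itself breaks ties alphabetically.
ordered = [t for t, w in sorted(zip(tokens, weights), key=lambda p: (-p[1], p[0]))]
print(ordered)  # -> ['bank', 'construction', 'china']

# Tied weights fall back to alphabetical order, as the question requires:
tied = [t for t, w in sorted(zip(['jpmorgan', 'chase'],
                                 [0.7071067811865476, 0.7071067811865476]),
                             key=lambda p: (-p[1], p[0]))]
print(tied)  # -> ['chase', 'jpmorgan']
```

Note also that zip stops at the shorter of the two sequences, so the length-mismatched row 2 is handled without raising, silently dropping the unmatched token.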
How to order list of lists of strings by another list of lists of floats in Pandas
I have a Pandas dataframe such that df['cname']: 0 [berkshire, hathaway] 1 [icbc] 2 [saudi, arabian, oil, company, saudi, aramco] 3 [jpmorgan, chase] 4 [china, construction, bank] Name: tokenized_company_name, dtype: object and another Pandas dataframe such that tfidf['output']: [0.7071067811865476, 0.7071067811865476] [1.0] [0.3779598156018814, 0.39838548612653973, 0.39838548612653973, 0.3285496573358837, 0.6570993146717674] [0.7071067811865476, 0.7071067811865476] [0.4225972188244829, 0.510750779645552, 0.7486956870005814] I'm trying to sort each list of tokens in f_sp['tokenized_company_name'] by tfidf['output_column'] such that I get: 0 [berkshire, hathaway] # no difference 1 [icbc] # no difference 2 [aramco, arabian, oil, saudi, company] # re-ordered by decreasing value of tf_sp['output_column'] 3 [chase, jpmorgan] # tied elements should be ordered alphabetically 4 [bank, construction, china] # re-ordered by decreasing value of tf_sp['output_column'] Here's what I've tried so far: (f_sp.apply(lambda x: sorted(x['tokenized_company_name'], key=lambda y: tf_sp.loc[x.name,'output_column'][x['tokenized_company_name'].index(y)], reverse=True), axis=1)) But I get the following error: --------------------------------------------------------------------------- IndexError Traceback (most recent call last) Input In [166], in <cell line: 1>() ----> 1 (f_sp.apply(lambda x: sorted(x['tokenized_company_name'], 2 key=lambda y: tf_sp.loc[x.name,'output_column'][x['tokenized_company_name'].index(y)], 3 reverse=True), axis=1)) File ~\.conda\envs\python37dev\lib\site-packages\pandas\core\frame.py:9555, in DataFrame.apply(self, func, axis, raw, result_type, args, **kwargs) 9544 from pandas.core.apply import frame_apply 9546 op = frame_apply( 9547 self, 9548 func=func, (...) 
9553 kwargs=kwargs, 9554 ) -> 9555 return op.apply().__finalize__(self, method="apply") File ~\.conda\envs\python37dev\lib\site-packages\pandas\core\apply.py:746, in FrameApply.apply(self) 743 elif self.raw: 744 return self.apply_raw() --> 746 return self.apply_standard() File ~\.conda\envs\python37dev\lib\site-packages\pandas\core\apply.py:873, in FrameApply.apply_standard(self) 872 def apply_standard(self): --> 873 results, res_index = self.apply_series_generator() 875 # wrap results 876 return self.wrap_results(results, res_index) File ~\.conda\envs\python37dev\lib\site-packages\pandas\core\apply.py:889, in FrameApply.apply_series_generator(self) 886 with option_context("mode.chained_assignment", None): 887 for i, v in enumerate(series_gen): 888 # ignore SettingWithCopy here in case the user mutates --> 889 results[i] = self.f(v) 890 if isinstance(results[i], ABCSeries): 891 # If we have a view on v, we need to make a copy because 892 # series_generator will swap out the underlying data 893 results[i] = results[i].copy(deep=False) Input In [166], in <lambda>(x) ----> 1 (f_sp.apply(lambda x: sorted(x['tokenized_company_name'], 2 key=lambda y: tf_sp.loc[x.name,'output_column'][x['tokenized_company_name'].index(y)], 3 reverse=True), axis=1)) Input In [166], in <lambda>.<locals>.<lambda>(y) 1 (f_sp.apply(lambda x: sorted(x['tokenized_company_name'], ----> 2 key=lambda y: tf_sp.loc[x.name,'output_column'][x['tokenized_company_name'].index(y)], 3 reverse=True), axis=1)) IndexError: list index out of range Why is this happening? Each list of lists has the same number of elements.
[ "To sort the list of tokens in f_sp['tokenized_company_name'] by the corresponding value in tf_sp['output_column'], you can use the zip function to combine the two columns and then sort the resulting list of tuples by the value of the second element in each tuple (which is the corresponding value from tf_sp['output_column']). You can then extract only the first element of each tuple (which is the token) to obtain the sorted list of tokens.\nHere is an example of how you can achieve this using a lambda function with the apply method of f_sp:\nf_sp['tokenized_company_name'] = f_sp.apply(lambda x: [t[0] for t in sorted(zip(x['tokenized_company_name'], tf_sp.loc[x.name, 'output_column']), key=lambda t: t[1], reverse=True)], axis=1)\n\nThis will sort the list of tokens in f_sp['tokenized_company_name'] by the corresponding value in tf_sp['output_column'] and store the sorted list back in f_sp['tokenized_company_name'].\nNote that this solution assumes that the length of f_sp['tokenized_company_name'] and tf_sp['output_column'] is the same for each row in f_sp. Otherwise, you may need to handle the case where the length of the two columns is different.\n", "To order a list of lists of strings by another list of lists of floats in Pandas, you can use the \"sort_values\" method. Here is an example:\nimport pandas as pd\n\n# create dataframe with string lists as data\ndf = pd.DataFrame({'strings': [['apple', 'banana', 'cherry'],\n ['dog', 'cat', 'bird'],\n ['red', 'green', 'blue']]})\n\n# create dataframe with float lists as data\ndf_floats = pd.DataFrame({'floats': [[1.0, 2.0, 3.0],\n [4.0, 5.0, 6.0],\n [7.0, 8.0, 9.0]]})\n\n# sort the string dataframe by the float dataframe\ndf.sort_values(by=df_floats['floats'])\n\nThis will return a new dataframe with the strings in each list sorted according to the corresponding list of floats.\n" ]
[ 0, 0 ]
[]
[]
[ "nlp", "pandas", "python", "string", "token" ]
stackoverflow_0074671883_nlp_pandas_python_string_token.txt
Q: asyncpg - transaction requirement I found in the asyncpg documentation that every call to connection.execute() or connection.fetch() should be wrapped in async with connection.transaction():. But in one of the repositories I saw the following code without wrapping it in a transaction: async def bench_asyncpg_con(): start = time.monotonic() for i in range(1, 1000): con = await asyncpg.connect(user='benchmark_user', database='benchmark_db', host='127.0.0.1') await con.fetchval('SELECT * FROM "Post" LIMIT 100') await con.close() end = time.monotonic() print(end - start) And it works. Can you explain to me when I should use transactions and when I shouldn't?
asyncpg - transaction requirement
I found in the asyncpg documentation that every call to connection.execute() or connection.fetch() should be wrapped in async with connection.transaction():. But in one of the repositories I saw the following code without wrapping it in a transaction: async def bench_asyncpg_con(): start = time.monotonic() for i in range(1, 1000): con = await asyncpg.connect(user='benchmark_user', database='benchmark_db', host='127.0.0.1') await con.fetchval('SELECT * FROM "Post" LIMIT 100') await con.close() end = time.monotonic() print(end - start) And it works. Can you explain to me when I should use transactions and when I shouldn't?
[ "The reason why the code in your example doesn't have a transaction is because it's just fetching data from the database. There are no changes happening to the database (no udpates, no inserted data, no deleting of data, etc..) Quoted from asyncpg docs:\n\nWhen not in an explicit transaction block, any changes to the database will be applied immediately. This is also known as auto-commit.\n\nFrom the quote above, when you use asyncpg to execute a query to change data in the database, it will be automatically committed unless you use a transaction. When you wrap your code in a transaction, you have to call commit to have those changes saved. Additionally, transactions allow you to execute queries and if any of those queries fail, you can rollback all of the executed queries that were wrapped in that transaction. Here is an example of a transaction that shows you the mechanics.\ntr = connection.transaction()\nawait tr.start()\ntry:\n ...\nexcept:\n await tr.rollback()\n raise\nelse:\n await tr.commit()\n\n" ]
[ 1 ]
[]
[]
[ "asyncpg", "python" ]
stackoverflow_0071783811_asyncpg_python.txt
Q: How to replace a value in a matrix by index Let's say I have a 4X4 matrix containing value from 1 to 20. If I want the element at line 2 column 3 to be equal to 100, how can I do this ? A: First, import the numpy library: import numpy as np Then, create the matrix: matrix = np.array([[1,2,3], [4,5,6], [7,8,9]]) Finally, use the index to replace the value: matrix[1,2] = 10 The new matrix will be: [[ 1 2 3] [ 4 5 10] [ 7 8 9]]
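The same one-line assignment works on a plain nested list, with no NumPy needed, and maps directly onto the 4x4 case in the question: "line 2, column 3" becomes row index 1, column index 2 because Python indexing is 0-based. (A 4x4 matrix holds 16 values, so this sketch uses 1-16 rather than 1-20.)

```python
# 4x4 matrix of values 1..16, as plain nested lists
matrix = [[1,  2,  3,  4],
          [5,  6,  7,  8],
          [9, 10, 11, 12],
          [13, 14, 15, 16]]

# "line 2, column 3" -> row index 1, column index 2 (0-based)
matrix[1][2] = 100
print(matrix[1])  # -> [5, 6, 100, 8]
```

With a NumPy array the equivalent is matrix[1, 2] = 100, as in the answer's example.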
How to replace a value in a matrix by index
Let's say I have a 4X4 matrix containing value from 1 to 20. If I want the element at line 2 column 3 to be equal to 100, how can I do this ?
[ "First, import the numpy library:\n\nimport numpy as np\n\nThen, create the matrix:\n\nmatrix = np.array([[1,2,3], [4,5,6], [7,8,9]])\n\nFinally, use the index to replace the value:\n\nmatrix[1,2] = 10\nThe new matrix will be:\n\n[[ 1 2 3]\n [ 4 5 10]\n [ 7 8 9]]\n\n" ]
[ 0 ]
[]
[]
[ "python" ]
stackoverflow_0074671888_python.txt
Q: No module named 'Adafruit_GPIO' I am doing the project in this video (https://www.youtube.com/watch?v=9sb_zuHGmY4) with the OLED screen. The step by step guide I followed (https://www.the-diy-life.com/diy-raspberry-pi-4-desktop-case-with-oled-stats-display/) and I get this error: Traceback (most recent call last): File "stats.py", line 23, in <module> import Adafruit_GPIO.SPI as SPI ModuleNotFoundError: No module named 'Adafruit_GPIO' Code: # Author: Tony DiCola & James DeVito # # Permission is hereby granted, free of charge, to any person obtaining a copy # of this software and associated documentation files (the "Software"), to deal # in the Software without restriction, including without limitation the rights # to use, copy, modify, merge, publish, distribute, sublicense, and/or sell # copies of the Software, and to permit persons to whom the Software is # furnished to do so, subject to the following conditions: # # The above copyright notice and this permission notice shall be included in # all copies or substantial portions of the Software. # # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR # IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, # FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE # AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER # LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN # THE SOFTWARE. 
import time import Adafruit_GPIO.SPI as SPI import Adafruit_SSD1306 from PIL import Image from PIL import ImageDraw from PIL import ImageFont import subprocess # Raspberry Pi pin configuration: RST = None # on the PiOLED this pin isnt used # Note the following are only used with SPI: DC = 23 SPI_PORT = 0 SPI_DEVICE = 0 # Beaglebone Black pin configuration: # RST = 'P9_12' # Note the following are only used with SPI: # DC = 'P9_15' # SPI_PORT = 1 # SPI_DEVICE = 0 # 128x32 display with hardware I2C: disp = Adafruit_SSD1306.SSD1306_128_32(rst=RST) # 128x64 display with hardware I2C: # disp = Adafruit_SSD1306.SSD1306_128_64(rst=RST) # Note you can change the I2C address by passing an i2c_address parameter like: # disp = Adafruit_SSD1306.SSD1306_128_64(rst=RST, i2c_address=0x3C) # Alternatively you can specify an explicit I2C bus number, for example # with the 128x32 display you would use: # disp = Adafruit_SSD1306.SSD1306_128_32(rst=RST, i2c_bus=2) # 128x32 display with hardware SPI: # disp = Adafruit_SSD1306.SSD1306_128_32(rst=RST, dc=DC, spi=SPI.SpiDev(SPI_PORT, SPI_DEVICE, max_speed_hz=8000000)) # 128x64 display with hardware SPI: # disp = Adafruit_SSD1306.SSD1306_128_64(rst=RST, dc=DC, spi=SPI.SpiDev(SPI_PORT, SPI_DEVICE, max_speed_hz=8000000)) # Alternatively you can specify a software SPI implementation by providing # digital GPIO pin numbers for all the required display pins. For example # on a Raspberry Pi with the 128x32 display you might use: # disp = Adafruit_SSD1306.SSD1306_128_32(rst=RST, dc=DC, sclk=18, din=25, cs=22) # Initialize library. disp.begin() # Clear display. disp.clear() disp.display() # Create blank image for drawing. # Make sure to create image with mode '1' for 1-bit color. width = disp.width height = disp.height image = Image.new('1', (width, height)) # Get drawing object to draw on image. draw = ImageDraw.Draw(image) # Draw a black filled box to clear the image. draw.rectangle((0,0,width,height), outline=0, fill=0) # Draw some shapes. 
# First define some constants to allow easy resizing of shapes. padding = -2 top = padding bottom = height-padding # Move left to right keeping track of the current x position for drawing shapes. x = 0 # Load default font. font = ImageFont.load_default() # Alternatively load a TTF font. Make sure the .ttf font file is in the same directory as the python script! # Some other nice fonts to try: http://www.dafont.com/bitmap.php # font = ImageFont.truetype('Minecraftia.ttf', 8) while True: # Draw a black filled box to clear the image. draw.rectangle((0,0,width,height), outline=0, fill=0) # Shell scripts for system monitoring from here : https://unix.stackexchange.com/questions/119126/command-to-display-memory-usage-disk-usage-and-cpu-load cmd = "hostname -I | cut -d\' \' -f1" IP = subprocess.check_output(cmd, shell = True ) cmd = "top -bn1 | grep load | awk '{printf \"CPU Load: %.2f\", $(NF-2)}'" CPU = subprocess.check_output(cmd, shell = True ) cmd = "free -m | awk 'NR==2{printf \"Mem: %s/%sMB %.2f%%\", $3,$2,$3*100/$2 }'" MemUsage = subprocess.check_output(cmd, shell = True ) cmd = "df -h | awk '$NF==\"/\"{printf \"Disk: %d/%dGB %s\", $3,$2,$5}'" Disk = subprocess.check_output(cmd, shell = True ) # Write two lines of text. draw.text((x, top), "IP: " + str(IP), font=font, fill=255) draw.text((x, top+8), str(CPU), font=font, fill=255) draw.text((x, top+16), str(MemUsage), font=font, fill=255) draw.text((x, top+25), str(Disk), font=font, fill=255) # Display image. disp.image(image) disp.display() time.sleep(.1) A: When in doubt, take a look at the official docs! You're missing an old library. While you may be able to bring it in, the maintainers have actually deprecated it and suggest another https://github.com/adafruit/Adafruit_Python_GPIO This library has been deprecated in favor of our python3 Blinka library. 
We have replaced all of the libraries that use this repo with CircuitPython libraries that are Python3 compatible, and support a wide variety of single board/linux computers! Do take a look at the updated docs from Adafruit: https://learn.adafruit.com/monochrome-oled-breakouts/python-usage-2 They have a direct example showing the use of these libraries import board import digitalio from PIL import Image, ImageDraw, ImageFont import adafruit_ssd1306 ... A: This is a couple years late, but figured why not post my solution. I’ve put together an updated setup.py script that’s adapted to Adafruit’s newer CircuitPython libraries. To note, I’ve tested this on a 4GB RPI4B running Raspbian GNU/Linux 11 armv7. Before attempting to execute this script, make sure to follow the 'Python Setup' instructions listed here.
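Following the deprecation notice in the first answer, the old Adafruit_GPIO/Adafruit_SSD1306 stack is replaced by Blinka plus the CircuitPython SSD1306 driver. A sketch of the install step on Raspberry Pi OS (the package names are my assumption based on Adafruit's migration docs; verify them against the linked guide before installing):

```shell
# Blinka supplies the board/digitalio layer; the CircuitPython driver
# replaces Adafruit_SSD1306, and Pillow provides Image/ImageDraw.
sudo pip3 install --upgrade Adafruit-Blinka adafruit-circuitpython-ssd1306 pillow
```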
No module named 'Adafruit_GPIO'
I am doing the project in this video (https://www.youtube.com/watch?v=9sb_zuHGmY4) with the OLED screen. The step by step guide I followed (https://www.the-diy-life.com/diy-raspberry-pi-4-desktop-case-with-oled-stats-display/) and I get this error: Traceback (most recent call last): File "stats.py", line 23, in <module> import Adafruit_GPIO.SPI as SPI ModuleNotFoundError: No module named 'Adafruit_GPIO' Code: # Author: Tony DiCola & James DeVito # # Permission is hereby granted, free of charge, to any person obtaining a copy # of this software and associated documentation files (the "Software"), to deal # in the Software without restriction, including without limitation the rights # to use, copy, modify, merge, publish, distribute, sublicense, and/or sell # copies of the Software, and to permit persons to whom the Software is # furnished to do so, subject to the following conditions: # # The above copyright notice and this permission notice shall be included in # all copies or substantial portions of the Software. # # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR # IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, # FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE # AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER # LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN # THE SOFTWARE. 
import time import Adafruit_GPIO.SPI as SPI import Adafruit_SSD1306 from PIL import Image from PIL import ImageDraw from PIL import ImageFont import subprocess # Raspberry Pi pin configuration: RST = None # on the PiOLED this pin isnt used # Note the following are only used with SPI: DC = 23 SPI_PORT = 0 SPI_DEVICE = 0 # Beaglebone Black pin configuration: # RST = 'P9_12' # Note the following are only used with SPI: # DC = 'P9_15' # SPI_PORT = 1 # SPI_DEVICE = 0 # 128x32 display with hardware I2C: disp = Adafruit_SSD1306.SSD1306_128_32(rst=RST) # 128x64 display with hardware I2C: # disp = Adafruit_SSD1306.SSD1306_128_64(rst=RST) # Note you can change the I2C address by passing an i2c_address parameter like: # disp = Adafruit_SSD1306.SSD1306_128_64(rst=RST, i2c_address=0x3C) # Alternatively you can specify an explicit I2C bus number, for example # with the 128x32 display you would use: # disp = Adafruit_SSD1306.SSD1306_128_32(rst=RST, i2c_bus=2) # 128x32 display with hardware SPI: # disp = Adafruit_SSD1306.SSD1306_128_32(rst=RST, dc=DC, spi=SPI.SpiDev(SPI_PORT, SPI_DEVICE, max_speed_hz=8000000)) # 128x64 display with hardware SPI: # disp = Adafruit_SSD1306.SSD1306_128_64(rst=RST, dc=DC, spi=SPI.SpiDev(SPI_PORT, SPI_DEVICE, max_speed_hz=8000000)) # Alternatively you can specify a software SPI implementation by providing # digital GPIO pin numbers for all the required display pins. For example # on a Raspberry Pi with the 128x32 display you might use: # disp = Adafruit_SSD1306.SSD1306_128_32(rst=RST, dc=DC, sclk=18, din=25, cs=22) # Initialize library. disp.begin() # Clear display. disp.clear() disp.display() # Create blank image for drawing. # Make sure to create image with mode '1' for 1-bit color. width = disp.width height = disp.height image = Image.new('1', (width, height)) # Get drawing object to draw on image. draw = ImageDraw.Draw(image) # Draw a black filled box to clear the image. draw.rectangle((0,0,width,height), outline=0, fill=0) # Draw some shapes. 
# First define some constants to allow easy resizing of shapes. padding = -2 top = padding bottom = height-padding # Move left to right keeping track of the current x position for drawing shapes. x = 0 # Load default font. font = ImageFont.load_default() # Alternatively load a TTF font. Make sure the .ttf font file is in the same directory as the python script! # Some other nice fonts to try: http://www.dafont.com/bitmap.php # font = ImageFont.truetype('Minecraftia.ttf', 8) while True: # Draw a black filled box to clear the image. draw.rectangle((0,0,width,height), outline=0, fill=0) # Shell scripts for system monitoring from here : https://unix.stackexchange.com/questions/119126/command-to-display-memory-usage-disk-usage-and-cpu-load cmd = "hostname -I | cut -d\' \' -f1" IP = subprocess.check_output(cmd, shell = True ) cmd = "top -bn1 | grep load | awk '{printf \"CPU Load: %.2f\", $(NF-2)}'" CPU = subprocess.check_output(cmd, shell = True ) cmd = "free -m | awk 'NR==2{printf \"Mem: %s/%sMB %.2f%%\", $3,$2,$3*100/$2 }'" MemUsage = subprocess.check_output(cmd, shell = True ) cmd = "df -h | awk '$NF==\"/\"{printf \"Disk: %d/%dGB %s\", $3,$2,$5}'" Disk = subprocess.check_output(cmd, shell = True ) # Write two lines of text. draw.text((x, top), "IP: " + str(IP), font=font, fill=255) draw.text((x, top+8), str(CPU), font=font, fill=255) draw.text((x, top+16), str(MemUsage), font=font, fill=255) draw.text((x, top+25), str(Disk), font=font, fill=255) # Display image. disp.image(image) disp.display() time.sleep(.1)
[ "When in doubt, take a look at the official docs! You're missing an old library.\nWhile you may be able to bring it in, the maintainers have actually deprecated it and suggest another https://github.com/adafruit/Adafruit_Python_GPIO\n\nThis library has been deprecated in favor of our python3 Blinka library. We have replaced all of the libraries that use this repo with CircuitPython libraries that are Python3 compatible, and support a wide variety of single board/linux computers!\n\nDo take a look at the updated docs from Adafruit: https://learn.adafruit.com/monochrome-oled-breakouts/python-usage-2\nThey have a direct example showing the use of these libraries\nimport board\nimport digitalio\nfrom PIL import Image, ImageDraw, ImageFont\nimport adafruit_ssd1306\n...\n\n", "This is a couple years late, but figured why not post my solution.\nI’ve put together an updated setup.py script that’s adapted to Adafruit’s newer CircuitPython libraries. To note, I’ve tested this on a 4GB RPI4B running Raspbian GNU/Linux 11 armv7.\nBefore attempting to execute this script, make sure to follow the\n'Python Setup' instructions listed here.\n" ]
[ 0, 0 ]
[]
[]
[ "adafruit", "python", "raspberry_pi4" ]
stackoverflow_0065148950_adafruit_python_raspberry_pi4.txt
Q: AssertionError: Label class 15 exceeds nc=1 in data/coco128.yaml. Possible class labels are 0-0 I've been building the yolov5 environment and trying to run it for the last few days. I used the following code to test whether my setup was successful. python train.py --img 640 --data data/coco128.yaml --cfg models/yolov5s.yaml --weights weights/yolov5s.pt --batch-size 16 --epochs 100 And then it gave me the following error, and I tried to find answers on Google, but I didn't see anything useful. I'm devastated right now. Can someone give me a hand? I really appreciate it. Transferred 362/370 items from weights/yolov5s.pt Optimizer groups: 62 .bias, 70 conv.weight, 59 other Scanning labels data\coco128\labels\train2017.cache (32 found, 0 missing, 0 empty, 0 duplicate, for 32 images): 32it [00:00, 3270.57it/s] Traceback (most recent call last): File "train.py", line 456, in <module> train(hyp, opt, device, tb_writer) File "train.py", line 172, in train assert mlc < nc, 'Label class %g exceeds nc=%g in %s. Possible class labels are 0-%g' % (mlc, nc, opt.data, nc - 1) AssertionError: Label class 15 exceeds nc=1 in data/coco128.yaml. Possible class labels are 0-0 I really don't use this site. Forgive me. A: I found this exact error too. In your .txt files you've created for the annotations, there will be an integer number followed by four floats (i.e., 13 0.3434 0.251 0.4364 0.34353) - something like that. This error essentially articulates that your number of classes (i.e., the number of different objects you're trying to train into the model) is too low for the ID number of the classes you're using. In my example above, the ID is 13 (the 14th class since 0 is included). If I set nc=1, then I can only have one class (0). I would need to set nc=14 and ensure that only 0-13 existed. To fix this, simply change the classes so that the IDs sit inside your chosen number of classes. For nc=1, you'll only need class/ID = 0.
As a note (and I fell foul of this), delete train.cache before you re-run the training. That caused me a bit of a nuisance since it was still certain I had classes of >0, when I didn't. A: The error is caused by the fact that you have one or many labels with the number 15 as a class. You have to change the class to an allowed class value (which in your case appears to be only 0); you can do it manually or with a script. I changed the values of the classes manually in my dataset. For finding the files that contained the non-allowed classes, I ran a Python script, which I have adapted for your situation: import os path = 'C:/foo/bar' # path of labels labels = os.listdir(path) for x in labels: with open(os.path.join(path, x)) as f: lines = f.read().splitlines() for y in lines: if y[:1] != '0': print(x) This snippet will print all of the files that contain a class different from 0. For anyone that finds this and has more than one class, you must substitute the value of 0 with the value or values (you could iterate through a list of possible values) of the class or classes that are higher than the number of classes you stated before. A: I also faced this problem, tried a few solutions, and solved it as below: Actually the dataset consists of 11 classes. And when I checked the .xml files, which include the image labels, I saw label 11. So: set nc:12, add an '' value to the label array: ['', 'apple', 'banana', etc.]. Don't forget to remove the label cache: !rm -f data/train/labels.npy
AssertionError: Label class 15 exceeds nc=1 in data/coco128.yaml. Possible class labels are 0-0
I've been building the yolov5 environment and trying to run it for the last few days. I used the following code to test whether My setup was successful. python train.py --img 640 --data data/coco128.yaml --cfg models/yolov5s.yaml --weights weights/yolov5s.pt --batch-size 16 --epochs 100 And then it gave me the following error, and I tried to find answers on Google, but I didn't see anything useful. I'm devastated right now. Can someone give me a hand? I really appreciate it. Transferred 362/370 items from weights/yolov5s.pt Optimizer groups: 62 .bias, 70 conv.weight, 59 other Scanning labels data\coco128\labels\train2017.cache (32 found, 0 missing, 0 empty, 0 duplicate, for 32 images): 32it [00:00, 3270.57it/s] Traceback (most recent call last): File "train.py", line 456, in <module> train(hyp, opt, device, tb_writer) File "train.py", line 172, in train assert mlc < nc, 'Label class %g exceeds nc=%g in %s. Possible class labels are 0-%g' % (mlc, nc, opt.data, nc - 1) AssertionError: Label class 15 exceeds nc=1 in data/coco128.yaml. Possible class labels are 0-0 I really don't use this site. Forgive me.
[ "I found this exact error too.\nIn your .txt files you've created for the annotations, there will be an integer number followed by four floats (ie, 13 0.3434 0.251 0.4364 0.34353) - something like that.\nThis error essentially articulates that your number of classes (ie, the number of different objects you're trying to train into the model) is too low for the ID number of the classes you're using. In my example above, the ID is 13 (the 14th class since 0 is included). If I set nc=1, then I can only have on class (0). I would need to set nc=14 and ensure that 0-12 existed.\nTo fix this, simply change the classes so that the IDs sit inside your chose number of classes. For nc=1, you'll only need class/ID = 0.\nAs a note (and I fell foul of this), delete train.cache before you re-run the training. That caused me a bit of a nuisance since it still was certain I had classes of >0, when I didn't.\n", "The error is caused by the fact that you have one or many labels with the number 15 as a class. You have to change the class to an allowed class value (which in your case appears to be only 0), you can do it manually or with a script. I changed the values of the classes manually in my dataset, for finding the files that contained the non-allowed classes, I ran a python script, which I have adapted for your situation:\npath = 'C:/foo/bar' #path of labels\nlabels = os.listdir('path')\nfor x in labels:\n with open('path'+x) as f:\n lines = f.read().splitlines()\n for y in lines:\n if y[:1]!='0':\n print(x)\n\nThis snippet will print all of the files that contain a class different from 0.\nFor anyone that finds this and has more than one class, you must substitute the value of 0 with the value or values(you could iterate through a list of possible values) of the class or classes that are higher than the number of classes you stated before.\n", "I also faced this problem and tried few solutions and solved as below:\nActually the dataset is consist of 11 class. 
When I checked the .xml files that contain the image labels, I saw label 11. So:\n\nset nc: 12,\nadd an empty '' value to the label array: ['', 'apple', 'banana', etc.]\n\nDon't forget to remove the label cache: !rm -f data/train/labels.npy\n" ]
[ 1, 0, 0 ]
[]
[]
[ "python", "yolov5" ]
stackoverflow_0063950508_python_yolov5.txt
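The label-scanning idea from the second answer above can be turned into a small self-contained sketch. The temp directory and file names here are invented for the demo; real YOLO labels live in one `.txt` per image, each line starting with an integer class ID followed by four floats.

```python
import os
import tempfile

def find_bad_labels(label_dir, nc):
    """Return label files containing a class ID outside 0..nc-1.

    YOLO label lines look like: "<class_id> <cx> <cy> <w> <h>".
    """
    bad = []
    for name in sorted(os.listdir(label_dir)):
        if not name.endswith(".txt"):
            continue
        with open(os.path.join(label_dir, name)) as f:
            for line in f:
                parts = line.split()
                if parts and int(parts[0]) >= nc:
                    bad.append(name)
                    break  # one offending line is enough to flag the file
    return bad

# Demo with two synthetic label files: one valid for nc=1, one not.
tmp = tempfile.mkdtemp()
with open(os.path.join(tmp, "ok.txt"), "w") as f:
    f.write("0 0.5 0.5 0.2 0.2\n")
with open(os.path.join(tmp, "bad.txt"), "w") as f:
    f.write("15 0.3 0.3 0.1 0.1\n")

print(find_bad_labels(tmp, nc=1))  # → ['bad.txt']
```

As the first answer warns, delete the stale `.cache` file after relabelling, or training will keep seeing the old class IDs.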
Q: Django REST Framework - return a value from get_queryset? I am trying to return a value from get_queryset. def get_queryset(self): if self.request.user.is_superuser: return StockPriceModel.objects.order_by('ticker').distinct() elif not self.request.user.is_authenticated: print('in') print(self.request.data) last_price = StockPriceModel.objects.all().filter( ticker=self.request.data['ticker']).order_by('-last_date_time')[0].last_price print(last_price) return last_price last price gets printed without an issue. In return I get various errors: TypeError at /api/stock-prices-upload/ 'float' object is not iterable If I try to return till: StockPriceModel.objects.all().filter( ticker=self.request.data['ticker']).order_by('-last_date_time') It works. As soon as I try to return just the 0 position queryset I get errors. I assume this is because get_queryset is supposed to return a queryset. Not sure how to return just the value. Edit: I am now trying to get only the latest row i.e. [0] from the data but still getting the same errors i.e. StockPriceModel object is not iterable # The current output if I don't add the [0] i.e. try to get the last row of data [{"id":23,"last_price":"395.2","name":null,"country":null,"sector":null,"industry":null,"ticker":"HINDALCO","high_price":null,"last_date_time":"2022-10-20T15:58:26+04:00","created_at":"2022-10-20T23:20:37.499166+04:00"},{"id":1717,"last_price":"437.5","name":null,"country":null,"sector":null,"industry":null,"ticker":"HINDALCO","high_price":438.9,"last_date_time":"2022-11-07T15:53:41+04:00","created_at":"2022-11-07T14:26:40.763060+04:00"}] Expected response: [{"id":1717,"last_price":"437.5","name":null,"country":null,"sector":null,"industry":null,"ticker":"HINDALCO","high_price":438.9,"last_date_time":"2022-11-07T15:53:41+04:00","created_at":"2022-11-07T14:26:40.763060+04:00"}] I have tried using last, get etc. Just won't work. A: Because get_queryset() always returns a queryset of objects or a list of objects. 
You cannot return an object or a field from the get_queryset method. The last_price value will be printed, but it is a field value, and therefore the get_queryset method will not return it. When you add [0], it takes the first object from the filtered queryset. Till that point, it is a queryset of objects. A: A bit hacky, and I am sure there must be a better way to do this. I wanted to return either a single row (based on last_date_time) or just the last_price value. I wrapped the query: # removed the .last_price last_price = StockPriceModel.objects.all().filter( ticker=self.request.data['ticker']).order_by('-last_date_time')[0] last_price = [last_price] # made it into a list, i.e. iterable return last_price And now I can get the last row. Posting here in case someone spends the same number of hours trying to figure this out. If you have the correct way of doing this, please post.
Django REST Framework - return a value from get_queryset?
I am trying to return a value from get_queryset. def get_queryset(self): if self.request.user.is_superuser: return StockPriceModel.objects.order_by('ticker').distinct() elif not self.request.user.is_authenticated: print('in') print(self.request.data) last_price = StockPriceModel.objects.all().filter( ticker=self.request.data['ticker']).order_by('-last_date_time')[0].last_price print(last_price) return last_price last price gets printed without an issue. In return I get various errors: TypeError at /api/stock-prices-upload/ 'float' object is not iterable If I try to return till: StockPriceModel.objects.all().filter( ticker=self.request.data['ticker']).order_by('-last_date_time') It works. As soon as I try to return just the 0 position queryset I get errors. I assume this is because get_queryset is supposed to return a queryset. Not sure how to return just the value. Edit: I am now trying to get only the latest row i.e. [0] from the data but still getting the same errors i.e. StockPriceModel object is not iterable # The current output if I don't add the [0] i.e. try to get the last row of data [{"id":23,"last_price":"395.2","name":null,"country":null,"sector":null,"industry":null,"ticker":"HINDALCO","high_price":null,"last_date_time":"2022-10-20T15:58:26+04:00","created_at":"2022-10-20T23:20:37.499166+04:00"},{"id":1717,"last_price":"437.5","name":null,"country":null,"sector":null,"industry":null,"ticker":"HINDALCO","high_price":438.9,"last_date_time":"2022-11-07T15:53:41+04:00","created_at":"2022-11-07T14:26:40.763060+04:00"}] Expected response: [{"id":1717,"last_price":"437.5","name":null,"country":null,"sector":null,"industry":null,"ticker":"HINDALCO","high_price":438.9,"last_date_time":"2022-11-07T15:53:41+04:00","created_at":"2022-11-07T14:26:40.763060+04:00"}] I have tried using last, get etc. Just won't work.
[ "Because get_queryset() always returns a queryset of objects or a list of objects.\nYou cannot return an object or a field from the get_queryset method.\nThe last_price value will be printed, but it is a field value, and therefore the get_queryset method will not return it.\nWhen you add [0], it takes the first object from the filtered queryset. Till that point, it is a queryset of objects.\n", "A bit hacky, and I am sure there must be a better way to do this.\nI wanted to return either a single row (based on last_date_time) or just the last_price value.\nI wrapped the query:\n# removed the .last_price\nlast_price = StockPriceModel.objects.all().filter(\n    ticker=self.request.data['ticker']).order_by('-last_date_time')[0]\nlast_price = [last_price] # made it into a list, i.e. iterable\nreturn last_price\n\nAnd now I can get the last row.\nPosting here in case someone spends the same number of hours trying to figure this out. If you have the correct way of doing this, please post.\n" ]
[ 0, 0 ]
[]
[]
[ "django", "python" ]
stackoverflow_0074657136_django_python.txt
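A slightly cleaner variant of the list-wrapping workaround above: slicing a Django queryset with `[:1]` (instead of indexing with `[0]`) keeps it a queryset, which is still iterable, so `get_queryset` stays happy. Django is not needed to see the shape of the difference; in this sketch, plain lists and dicts stand in for querysets and model instances:

```python
# Rows as they might come back from StockPriceModel, newest last
rows = [
    {"id": 23, "last_price": "395.2", "ticker": "HINDALCO"},
    {"id": 1717, "last_price": "437.5", "ticker": "HINDALCO"},
]

latest_only = rows[-1:]   # slice -> still a list (like qs[:1] -> queryset)
latest_item = rows[-1]    # index -> a single dict (like qs[0] -> model instance)

print(latest_only)   # a one-element list: iterable, like the "Expected response"
print(latest_item)   # a bare object: not what a DRF list view expects
```

In the actual view that would be `return StockPriceModel.objects.filter(ticker=...).order_by('-last_date_time')[:1]`, which serializes to the one-element response shown under "Expected response".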
Q: Getting field from another model into my custom serializer I am trying to get the 'first_name' and 'last_name' fields into my serializer that uses a model which has no user information: This is the serializers.py file: (screenshot omitted) This is the models.py file (from django-friendship model): (screenshot omitted) I am also attaching views.py: (screenshot omitted) A: In this case I would make a serializer for the user and then use that in the FriendshipRequestSerializer. class UserSerializer(serializers.ModelSerializer): class Meta: model = User fields = ('id', 'first_name', 'last_name',) class FriendshipRequestSerializer(serializers.ModelSerializer): to_user = UserSerializer(many=False) from_user = UserSerializer(many=False) class Meta: model = FriendshipRequest fields = ('id', 'to_user', 'from_user', 'message', ... extra_kwargs = ...
Getting field from another model into my custom serializer
I am trying to get the 'first_name' and 'last_name' fields into my serializer that uses a model which has no user information: This is the serializers.py file: (screenshot omitted) This is the models.py file (from django-friendship model): (screenshot omitted) I am also attaching views.py: (screenshot omitted)
[ "In this case I would make a serializer for the user and then use that in the FriendshipRequestSerializer.\nclass UserSerializer(serializers.ModelSerializer):\n \n class Meta:\n model = User\n fields = ('id', 'first_name', 'last_name',)\n\n\nclass FriendshipRequestSerializer(serializers.ModelSerializer):\n to_user = UserSerializer(many=False)\n from_user = UserSerializer(many=False)\n\n class Meta:\n model = FriendshipRequest\n fields = ('id', 'to_user', 'from_user', 'message', ...\n extra_kwargs = ...\n\n" ]
[ 0 ]
[]
[]
[ "django", "django_rest_framework", "python" ]
stackoverflow_0074671836_django_django_rest_framework_python.txt
Q: how to condition a url depending on the parameters in flask? I currently have my site like this: @main.route("/reports", methods=["GET", "POST"]) def reports(): return render_template( "template.html") I intend to add a new design and place it in the following way: if in the url they add "/reports/1" or "/reports/0", direct them to a different template: @main.route("/reports/<int:ds>", methods=["GET", "POST"]) def reports(ds): View=ds if View == 1: return render_template("template.html") if View == 0: return render_template("templateNew.html") Within templateNew.html I have the option to return to my old layout and place it in the same way by sending a parameter <a href="{{ url_for('main.report_in', ds=1) }}" > Return to previous layout </a> The problem is that the whole project and external projects refer to this url: 127.0.0.1:8000/reportes and it might cause errors if I implement it the way I intended. What I want to know is whether there is any other way to condition the url, so that if they write this url: http://127.0.0.1:8000/reportes they are directed to this: @main.route("/reports/<int:ds>", methods=["GET", "POST"]) def reports(ds): View=ds if View == 1: return render_template( "template.html") if View == 0: return render_template( "templateNew.html") Any suggestions to improve this please?
Anyway you can add one more route to your function as follows: @main.route("/reports", methods=["GET", "POST"]) @main.route("/reports/<int:ds>", methods=["GET", "POST"]) def reports(ds=1): # <-- provide here the default value you want ds to be View=ds if View == 1: return render_template( "template.html") if View == 0: return render_template( "templateNew.html") Another option is to pass the default value in the route definition: @main.route("/reports", methods=["GET", "POST"], defaults={'ds': 1}) @main.route("/reports/<int:ds>", methods=["GET", "POST"]) def reports(ds): View=ds if View == 1: return render_template( "template.html") if View == 0: return render_template( "templateNew.html")
how to condition a url depending on the parameters in flask?
I currently have my site like this: @main.route("/reports", methods=["GET", "POST"]) def reports(): return render_template( "template.html") I intend to add a new design and place it in the following way: if in the url they add "/reports/1" or "/reports/0", direct them to a different template: @main.route("/reports/<int:ds>", methods=["GET", "POST"]) def reports(ds): View=ds if View == 1: return render_template("template.html") if View == 0: return render_template("templateNew.html") Within templateNew.html I have the option to return to my old layout and place it in the same way by sending a parameter <a href="{{ url_for('main.report_in', ds=1) }}" > Return to previous layout </a> The problem is that the whole project and external projects refer to this url: 127.0.0.1:8000/reportes and it might cause errors if I implement it the way I intended. What I want to know is whether there is any other way to condition the url, so that if they write this url: http://127.0.0.1:8000/reportes they are directed to this: @main.route("/reports/<int:ds>", methods=["GET", "POST"]) def reports(ds): View=ds if View == 1: return render_template( "template.html") if View == 0: return render_template( "templateNew.html") Any suggestions to improve this please?
[ "http://127.0.0.1:8000/reportes looks like a typo (extra \"e\").\nAnyway you can add one more route to your function as follows:\[email protected](\"/reports\", methods=[\"GET\", \"POST\"])\[email protected](\"/reports/<int:ds>\", methods=[\"GET\", \"POST\"])\ndef reports(ds=1): # <-- provide here the default value you want ds to be\n    View = ds\n    if View == 1:\n        return render_template(\"template.html\")\n    if View == 0:\n        return render_template(\"templateNew.html\")\n\nAnother option is to pass the default value in the route definition:\[email protected](\"/reports\", methods=[\"GET\", \"POST\"], defaults={'ds': 1})\[email protected](\"/reports/<int:ds>\", methods=[\"GET\", \"POST\"])\ndef reports(ds):\n    View = ds\n    if View == 1:\n        return render_template(\"template.html\")\n    if View == 0:\n        return render_template(\"templateNew.html\")\n\n" ]
[ 0 ]
[]
[]
[ "flask", "python" ]
stackoverflow_0074670576_flask_python.txt
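The heart of the accepted fix is just a Python default argument: both routes dispatch to the same view function, and the bare `/reports` URL supplies no `ds`, so the default kicks in. Stripped of Flask, the dispatch logic looks like this (template names taken from the question; the function is a stand-in for the real view):

```python
def reports(ds=1):
    """Stand-in for the Flask view: pick a template name by ds."""
    return "template.html" if ds == 1 else "templateNew.html"

print(reports())    # /reports      -> old layout, via the ds=1 default
print(reports(1))   # /reports/1    -> old layout
print(reports(0))   # /reports/0    -> new layout
```

With the two stacked `@main.route` decorators, Flask performs exactly this: the `/reports` rule calls the view with no `ds`, and the `/reports/<int:ds>` rule calls it with the converted integer.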
Q: The tkinter window of my rotating cube animation does not showing anyting from tkinter import * from math import * import time root = Tk() canvas = Canvas(root, width=500, height=500, bg='black') canvas.pack() class Cube: def __init__(self, canvas, x, y, size, colors): self.x = x self.y = y self.size = size self.colors = colors self.canvas = canvas self.angleX, self.angleY = 0, 0 self.update() def project(self, x, y, z): return self.x + (x * self.size) / (z + self.size), self.y + (y * self.size) / (z + self.size) def update(self): points = [[-1, -1, -1], [1, -1, -1], [1, 1, -1], [-1, 1, -1], [-1, -1, 1], [1, -1, 1], [1, 1, 1], [-1, 1, 1]] t = [[0, 1, 2, 3], [0, 4, 5, 1], [1, 5, 6, 2], [2, 6, 7, 3], [3, 7, 4, 0], [4, 7, 6, 5]] self.polygons = [] for point in points: x, y = self.project(point[0], point[1], point[2]) points[points.index(point)] = [x, y] for triangle in t: p1 = points[triangle[0]] p2 = points[triangle[1]] p3 = points[triangle[2]] p4 = points[triangle[3]] self.polygons.append([p1, p2, p3, p4]) def rotateX(self, angle): self.angleX += angle for point in [[-1, -1, -1], [1, -1, -1], [1, 1, -1], [-1, 1, -1], [-1, -1, 1], [1, -1, 1], [1, 1, 1], [-1, 1, 1]]: y = point[1] z = point[2] point[1] = y * cos(angle) - z * sin(angle) point[2] = z * cos(angle) + y * sin(angle) self.update() def rotateY(self, angle): self.angleY += angle for point in [[-1, -1, -1], [1, -1, -1], [1, 1, -1], [-1, 1, -1], [-1, -1, 1], [1, -1, 1], [1, 1, 1], [-1, 1, 1]]: x = point[0] z = point[2] point[0] = x * cos(angle) - z * sin(angle) point[2] = z * cos(angle) + x * sin(angle) self.update() def draw(self): for polygon in self.polygons: self.canvas.create_polygon(polygon, fill=self.colors[self.polygons.index(polygon)], outline='black') colors = ["red", "green", "blue", "white", "yellow", "purple"] cube = Cube(canvas, 250, 250, 200, colors) while True: cube.rotateY(0.01) cube.rotateX(0.01) canvas.delete("all") cube.draw() root.update() time.sleep(0.01) root.mainloop() I don't get an 
error but somehow the tkinter window stays black. The program was written by GPT-3 and I also tried to let GPT-3 fix the code by itself, but it wasn't able to. I wasn't able to find the reason for the black screen yet, but I assume that it is caused by the root.mainloop() function not executing properly. If this was the reason I would still not be able to fix it, so here I am. I hope for your help. Philipp
The tkinter window of my rotating cube animation does not show anything
from tkinter import * from math import * import time root = Tk() canvas = Canvas(root, width=500, height=500, bg='black') canvas.pack() class Cube: def __init__(self, canvas, x, y, size, colors): self.x = x self.y = y self.size = size self.colors = colors self.canvas = canvas self.angleX, self.angleY = 0, 0 self.update() def project(self, x, y, z): return self.x + (x * self.size) / (z + self.size), self.y + (y * self.size) / (z + self.size) def update(self): points = [[-1, -1, -1], [1, -1, -1], [1, 1, -1], [-1, 1, -1], [-1, -1, 1], [1, -1, 1], [1, 1, 1], [-1, 1, 1]] t = [[0, 1, 2, 3], [0, 4, 5, 1], [1, 5, 6, 2], [2, 6, 7, 3], [3, 7, 4, 0], [4, 7, 6, 5]] self.polygons = [] for point in points: x, y = self.project(point[0], point[1], point[2]) points[points.index(point)] = [x, y] for triangle in t: p1 = points[triangle[0]] p2 = points[triangle[1]] p3 = points[triangle[2]] p4 = points[triangle[3]] self.polygons.append([p1, p2, p3, p4]) def rotateX(self, angle): self.angleX += angle for point in [[-1, -1, -1], [1, -1, -1], [1, 1, -1], [-1, 1, -1], [-1, -1, 1], [1, -1, 1], [1, 1, 1], [-1, 1, 1]]: y = point[1] z = point[2] point[1] = y * cos(angle) - z * sin(angle) point[2] = z * cos(angle) + y * sin(angle) self.update() def rotateY(self, angle): self.angleY += angle for point in [[-1, -1, -1], [1, -1, -1], [1, 1, -1], [-1, 1, -1], [-1, -1, 1], [1, -1, 1], [1, 1, 1], [-1, 1, 1]]: x = point[0] z = point[2] point[0] = x * cos(angle) - z * sin(angle) point[2] = z * cos(angle) + x * sin(angle) self.update() def draw(self): for polygon in self.polygons: self.canvas.create_polygon(polygon, fill=self.colors[self.polygons.index(polygon)], outline='black') colors = ["red", "green", "blue", "white", "yellow", "purple"] cube = Cube(canvas, 250, 250, 200, colors) while True: cube.rotateY(0.01) cube.rotateX(0.01) canvas.delete("all") cube.draw() root.update() time.sleep(0.01) root.mainloop() I don't get an errror but somehow the tkinter window stays black. 
The program was written by GPT-3 and I also tried to let GPT-3 fix the code by itself, but it wasn't able to. I wasn't able to find the reason for the black screen yet, but I assume that it is caused by the root.mainloop() function not executing properly. If this was the reason I would still not be able to fix it, so here I am. I hope for your help. Philipp
[]
[]
[ "I don't think your loop works like you expect it to.\nYou never exit the while loop to start the main loop.\nInstead you might wanna try something like this:\ndef update_cube():\n    cube.rotateY(0.01)\n    cube.rotateX(0.01)\n    canvas.delete(\"all\")\n    cube.draw()\n    root.after(10, update_cube)  # pass the function itself, don't call it\n\nroot.after(0, update_cube)\nroot.mainloop()\n\nAnd I think there is something wrong with your polygons in your draw method.\nThe attribute is supposed to hold tuples of coordinates, but you have nested lists of values.\nCheck your self.polygons and make sure the values you put in the create_polygon method are correct. Depending on your IDE, if you tell self.canvas to be a Canvas, you might get some information about the argument error. At least in PyCharm this works.\ndef draw(self):\n    for polygon in self.polygons:\n        self.canvas: Canvas\n        self.canvas.create_polygon(polygon, fill=self.colors[self.polygons.index(polygon)], outline='black')\n" ]
[ -1 ]
[ "python", "tkinter" ]
stackoverflow_0074671737_python_tkinter.txt
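The core point of the answer, reschedule with `after()` instead of blocking in `while True`, can be illustrated without a display. The toy loop below is a hypothetical stand-in for Tk's event loop; `MiniLoop` and its attributes are invented for the demo, and only the scheduling shape matches tkinter:

```python
import heapq

class MiniLoop:
    """Toy stand-in for Tk's event loop, just to show after()-style scheduling."""
    def __init__(self):
        self.now = 0.0
        self.queue = []
        self.frames = []          # stand-in for canvas redraws

    def after(self, ms, fn):
        # schedule fn to run ms milliseconds from "now" (id() breaks heap ties)
        heapq.heappush(self.queue, (self.now + ms / 1000.0, id(fn), fn))

    def mainloop(self, max_events):
        while self.queue and max_events > 0:
            self.now, _, fn = heapq.heappop(self.queue)
            fn()
            max_events -= 1

loop = MiniLoop()
angle = 0.0

def update_cube():
    global angle
    angle += 0.01                        # rotate a little...
    loop.frames.append(round(angle, 2))  # ...and "redraw"
    loop.after(10, update_cube)          # reschedule instead of while True

loop.after(0, update_cube)
loop.mainloop(max_events=5)
print(loop.frames)  # → [0.01, 0.02, 0.03, 0.04, 0.05]
```

In real tkinter the same shape is `root.after(10, update_cube)` followed by a single `root.mainloop()`: each callback does one frame of work and reschedules itself, so the event loop stays free to process redraws and input in between.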
Q: I'm learning how to work with files in Python using Jupyter notebook. Why do I have to use open() each time to print what I'd like to? I tried: my_file = open("test.txt") for line in my_file: print("Here it says: " + line) lines = my_file.readlines() print(lines[1]) But the second print command did not print anything. Then I tried: my_file = open("test.txt") for line in my_file: print("Here it says: " + line) my_file = open("test.txt") lines = my_file.readlines() print(lines[1]) and the second print command printed correctly. Why do I have to use open() each time? A: # See comments in line. fi = open("test.txt", 'w') for n in range(10): fi.write(str(n)) fi.close() my_file = open("test.txt") for line in my_file: print("Here it says: " + line) # Iterating leaves the file position at the EOF (end-of-file) marker. lines = my_file.readlines() # starts reading at EOF, so it returns []. print(lines) # prints [] because nothing is left to read.
I'm learning how to work with files in Python using Jupyter notebook. Why do I have to use open() each time to print what I'd like to?
I tried: my_file = open("test.txt") for line in my_file: print("Here it says: " + line) lines = my_file.readlines() print(lines[1]) But the second print command did not print anything. then I tried: my_file = open("test.txt") for line in my_file: print("Here it says: " + line) my_file = open("test.txt") lines = my_file.readlines() print(lines[1]) and the second print command printed correctly. Why do I have to use open() each time?
[ "# See comments in line.\nfi = open(\"test.txt\", 'w')\nfor n in range(10):\n    fi.write(str(n))\nfi.close()\n\nmy_file = open(\"test.txt\")\nfor line in my_file:\n    print(\"Here it says: \" + line)\n# Iterating leaves the file position at the EOF\n# (end-of-file) marker.\nlines = my_file.readlines() # starts reading at EOF, so it returns [].\nprint(lines) # prints [] because nothing is left to read.\n" ]
[ 0 ]
[]
[]
[ "jupyter_notebook", "python" ]
stackoverflow_0074671890_jupyter_notebook_python.txt
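Reopening the file works, but the cheaper fix is to rewind the existing file object: iterating leaves the read position at end-of-file, and `seek(0)` moves it back to the start. A self-contained demo (the temp path is just for the example):

```python
import os
import tempfile

path = os.path.join(tempfile.mkdtemp(), "test.txt")
with open(path, "w") as f:
    f.write("first line\nsecond line\n")

my_file = open(path)
for line in my_file:
    pass                          # the for-loop consumes the file up to EOF
assert my_file.readlines() == []  # position is at EOF, so nothing comes back

my_file.seek(0)                   # rewind instead of calling open() again
lines = my_file.readlines()
my_file.close()
print(lines[1])  # → second line
```

So in the notebook, a single `my_file.seek(0)` between the loop and `readlines()` makes the second `open()` unnecessary.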
Q: How can I make an int to add each value of the matrix and add 20% with input a = [ [200,300,5000,400],[554,500,1000,652],[800,500,650,800],[950,120,470,500],[500,600,2000,100]] for i in range(len(a)): for j in range(len(a[i])): print(a[i][j], end=' ') print() I am trying to increase each value by 20%, and then print the matrix with the increase, For example, this is the original matrix [200,300],[500,400] and I will increase 20% of each and show the matrix with new values [240,360],[600,480] A: This should work if you want to modify the original matrix; if not, Chris' answer is simpler. for x in a: for i in range(len(x)): x[i] = int(x[i] * 1.2) A: You may wish to generate a new "matrix" with your adjusted values, in which case list comprehensions can be useful: b = [[int(y * 1.2) for y in x] for x in a] Printing the matrix we can find the widest number, then use str.rjust to ensure columns are lined up. >>> w = len(str(max(max(x) for x in a))) >>> str(400).rjust(w) ' 400' >>> for x in a: ... print(*(str(y).rjust(w) for y in x), sep=' ', end='\n') ... 200 300 5000 400 554 500 1000 652 800 500 650 800 950 120 470 500 500 600 2000 100 >>>
How can I make an int to add each value of the matrix and add 20% with input
a = [ [200,300,5000,400],[554,500,1000,652],[800,500,650,800],[950,120,470,500],[500,600,2000,100]] for i in range(len(a)): for j in range(len(a[i])): print(a[i][j], end=' ') print() I am trying to increase each value by 20%, and then print the matrix with the increase, For example, this is the original matrix [200,300],[500,400] and I will increase 20% of each and show the matrix with new values [240,360],[600,480]
[ "This should work if you want to modify the original matrix; if not, Chris' answer is simpler.\nfor x in a:\n for i in range(len(x)):\n x[i] = int(x[i] * 1.2)\n\n", "You may wish to generate a new \"matrix\" with your adjusted values, in which case list comprehensions can be useful:\nb = [[int(y * 1.2) for y in x] for x in a]\n\nPrinting the matrix we can find the widest number, then use str.rjust to ensure columns are lined up.\n>>> w = len(str(max(max(x) for x in a)))\n>>> str(400).rjust(w)\n' 400'\n>>> for x in a:\n... print(*(str(y).rjust(w) for y in x), sep=' ', end='\\n')\n... \n 200 300 5000 400\n 554 500 1000 652\n 800 500 650 800\n 950 120 470 500\n 500 600 2000 100\n>>>\n\n" ]
[ 0, 0 ]
[]
[]
[ "python" ]
stackoverflow_0074671824_python.txt
Q: How to extract the member from single-member set in python? I recently encountered a scenario in which if a set only contained a single element, I wanted to do something with that element. To get the element, I settled on this approach: element = list(myset)[0] But this isn't very satisfying, as it creates an unnecessary list. It could also be done with iteration, but iteration seems unnatural as well, since there is only a single element. Am I missing something simple? A: Tuple unpacking works. (element,) = myset (By the way, python-dev has explored but rejected the addition of myset.get() to return an arbitrary element from a set. Discussion here, Guido van Rossum answers 1 and 2.) My personal favorite for getting an arbitrary element is (when you have an unknown number, but also works if you have just one): element = next(iter(myset)) ¹ 1: in Python 2.5 and before, you have to use iter(myset).next() A: Between making a tuple and making an iterator, it's almost a wash, but iteration wins by a nose...: $ python2.6 -mtimeit -s'x=set([1])' 'a=tuple(x)[0]' 1000000 loops, best of 3: 0.465 usec per loop $ python2.6 -mtimeit -s'x=set([1])' 'a=tuple(x)[0]' 1000000 loops, best of 3: 0.465 usec per loop $ python2.6 -mtimeit -s'x=set([1])' 'a=next(iter(x))' 1000000 loops, best of 3: 0.456 usec per loop $ python2.6 -mtimeit -s'x=set([1])' 'a=next(iter(x))' 1000000 loops, best of 3: 0.456 usec per loop Not sure why all the answers are using the older syntax iter(x).next() rather than the new one next(iter(x)), which seems preferable to me (and also works in Python 3.1). However, unpacking wins hands-down over both: $ python2.6 -mtimeit -s'x=set([1])' 'a,=x' 10000000 loops, best of 3: 0.174 usec per loop $ python2.6 -mtimeit -s'x=set([1])' 'a,=x' 10000000 loops, best of 3: 0.174 usec per loop This of course is for single-item sets (where the latter form, as others mentioned, has the advantage of failing fast if the set you "knew" had just one item actually had several). 
For sets with arbitrary N > 1 items, the tuple slows down, the iter doesn't: $ python2.6 -mtimeit -s'x=set(range(99))' 'a=next(iter(x))' 1000000 loops, best of 3: 0.417 usec per loop $ python2.6 -mtimeit -s'x=set(range(99))' 'a=tuple(x)[0]' 100000 loops, best of 3: 3.12 usec per loop So, unpacking for the singleton case, and next(iter(x)) for the general case, seem best. A: I reckon kaizer.se's answer is great. But if your set might contain more than one element, and you want a not-so-arbitrary element, you might want to use min or max. E.g.: element = min(myset) or: element = max(myset) (Don't use sorted, because that has unnecessary overhead for this usage.) A: I suggest: element = myset.pop() A: you can use element = tuple(myset)[0] which is a bit more efficient, or, you can do something like element = iter(myset).next() I guess constructing an iterator is more efficient than constructing a tuple/list. A: There is also Extended Iterable Unpacking which will work on a singleton set or a mulit-element set element, *_ = myset Though some bristle at the use of a throwaway variable. A: One way is to use reduce with lambda x: x. from functools import reduce > reduce(lambda x: x, {3}) 3 > reduce(lambda x: x, {1, 2, 3}) TypeError: <lambda>() takes 1 positional argument but 2 were given > reduce(lambda x: x, {}) TypeError: reduce() of empty sequence with no initial value Benefits: Fails for multiple and zero values Doesn't change the original set Doesn't require data transformations (e.g., to list or iterable) Doesn't need a new variable and can be passed as an argument Arguably less awkward and PEP-compliant A: You can use the more-itertools package. 
All functions below will return one item from the set, with different behaviors if the set does not contain exactly one item: more_itertools.one raises an exception if iterable is empty or has more than one item: element = more_itertools.one(myset) This works like (element,) = myset, but might not be as fast: $ python3.11 -m timeit -s 'myset = {42}' '(element,) = myset' 5000000 loops, best of 5: 57.9 nsec per loop $ python3.11 -m timeit -s 'from more_itertools import one myset = {42}' 'element = one(myset)' 500000 loops, best of 5: 611 nsec per loop more_itertools.only returns a default if iterable is empty or raises an exception if iterable has more than one item: element = more_itertools.only(myset) more_itertools.first raises an exception if iterable is empty, or a default when one is provided: element = more_itertools.first(myset) This works as element = next(iter(myset)), but is arguably more idiomatic. more_itertools.first_true returns the first "truthy" value in the iterable or a default if iterable is empty: element = more_itertools.first_true(myset) You can also use the first package: from first import first element = first(myset) This works like more_itertools.first_true, mentioned above.
How to extract the member from single-member set in python?
I recently encountered a scenario in which if a set only contained a single element, I wanted to do something with that element. To get the element, I settled on this approach: element = list(myset)[0] But this isn't very satisfying, as it creates an unnecessary list. It could also be done with iteration, but iteration seems unnatural as well, since there is only a single element. Am I missing something simple?
[ "Tuple unpacking works.\n(element,) = myset\n\n(By the way, python-dev has explored but rejected the addition of myset.get() to return an arbitrary element from a set. Discussion here, Guido van Rossum answers 1 and 2.)\nMy personal favorite for getting an arbitrary element is (when you have an unknown number, but also works if you have just one):\nelement = next(iter(myset)) ¹\n\n1: in Python 2.5 and before, you have to use iter(myset).next()\n", "Between making a tuple and making an iterator, it's almost a wash, but iteration wins by a nose...:\n$ python2.6 -mtimeit -s'x=set([1])' 'a=tuple(x)[0]'\n1000000 loops, best of 3: 0.465 usec per loop\n$ python2.6 -mtimeit -s'x=set([1])' 'a=tuple(x)[0]'\n1000000 loops, best of 3: 0.465 usec per loop\n$ python2.6 -mtimeit -s'x=set([1])' 'a=next(iter(x))'\n1000000 loops, best of 3: 0.456 usec per loop\n$ python2.6 -mtimeit -s'x=set([1])' 'a=next(iter(x))'\n1000000 loops, best of 3: 0.456 usec per loop\n\nNot sure why all the answers are using the older syntax iter(x).next() rather than the new one next(iter(x)), which seems preferable to me (and also works in Python 3.1).\nHowever, unpacking wins hands-down over both:\n$ python2.6 -mtimeit -s'x=set([1])' 'a,=x'\n10000000 loops, best of 3: 0.174 usec per loop\n$ python2.6 -mtimeit -s'x=set([1])' 'a,=x'\n10000000 loops, best of 3: 0.174 usec per loop\n\nThis of course is for single-item sets (where the latter form, as others mentioned, has the advantage of failing fast if the set you \"knew\" had just one item actually had several). For sets with arbitrary N > 1 items, the tuple slows down, the iter doesn't:\n$ python2.6 -mtimeit -s'x=set(range(99))' 'a=next(iter(x))'\n1000000 loops, best of 3: 0.417 usec per loop\n$ python2.6 -mtimeit -s'x=set(range(99))' 'a=tuple(x)[0]'\n100000 loops, best of 3: 3.12 usec per loop\n\nSo, unpacking for the singleton case, and next(iter(x)) for the general case, seem best.\n", "I reckon kaizer.se's answer is great. 
But if your set might contain more than one element, and you want a not-so-arbitrary element, you might want to use min or max. E.g.:\nelement = min(myset)\n\nor:\nelement = max(myset)\n\n(Don't use sorted, because that has unnecessary overhead for this usage.)\n", "I suggest:\nelement = myset.pop()\n\n", "you can use element = tuple(myset)[0] which is a bit more efficient, or, you can do something like \nelement = iter(myset).next()\n\nI guess constructing an iterator is more efficient than constructing a tuple/list.\n", "There is also Extended Iterable Unpacking which will work on a singleton set or a mulit-element set\nelement, *_ = myset\nThough some bristle at the use of a throwaway variable.\n", "One way is to use reduce with lambda x: x.\nfrom functools import reduce\n\n> reduce(lambda x: x, {3})\n3\n\n> reduce(lambda x: x, {1, 2, 3})\nTypeError: <lambda>() takes 1 positional argument but 2 were given\n\n> reduce(lambda x: x, {})\nTypeError: reduce() of empty sequence with no initial value\n\nBenefits:\n\nFails for multiple and zero values\nDoesn't change the original set\nDoesn't require data transformations (e.g., to list or iterable)\nDoesn't need a new variable and can be passed as an argument\nArguably less awkward and PEP-compliant\n\n", "You can use the more-itertools package. 
All functions below will return one item from the set, with different behaviors if the set does not contain exactly one item:\n\nmore_itertools.one raises an exception if iterable is empty or has more than one item:\nelement = more_itertools.one(myset)\n\nThis works like (element,) = myset, but might not be as fast:\n$ python3.11 -m timeit -s 'myset = {42}' '(element,) = myset'\n5000000 loops, best of 5: 57.9 nsec per loop\n$ python3.11 -m timeit -s 'from more_itertools import one\nmyset = {42}' 'element = one(myset)'\n500000 loops, best of 5: 611 nsec per loop\n\n\nmore_itertools.only returns a default if iterable is empty or raises an exception if iterable has more than one item:\nelement = more_itertools.only(myset)\n\n\nmore_itertools.first raises an exception if iterable is empty, or a default when one is provided:\nelement = more_itertools.first(myset)\n\nThis works as element = next(iter(myset)), but is arguably more idiomatic.\n\nmore_itertools.first_true returns the first \"truthy\" value in the iterable or a default if iterable is empty:\nelement = more_itertools.first_true(myset)\n\n\n\nYou can also use the first package:\nfrom first import first\n\nelement = first(myset)\n\nThis works like more_itertools.first_true, mentioned above.\n" ]
[ 129, 31, 25, 15, 2, 2, 0, 0 ]
[]
[]
[ "python", "set" ]
stackoverflow_0001619514_python_set.txt
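The idioms quoted in the answers above can be tried side by side. Note that in Python 3 the iterator version is spelled next(iter(myset)) — the .next() method shown in one answer is Python 2 — and that set.pop() mutates the set while the other approaches leave it intact:

```python
myset = {42}

# Tuple unpacking: fails loudly if the set does not hold exactly one element.
(element_a,) = myset

# next(iter(...)): grabs an arbitrary element without copying the set.
element_b = next(iter(myset))

# min()/max(): a deterministic (not arbitrary) choice for multi-element sets.
element_c = min({3, 1, 2})

# Extended unpacking: works on singleton or multi-element sets.
element_d, *rest = myset

# pop(): also returns an element, but removes it; work on a copy to be safe.
copy_set = set(myset)
element_e = copy_set.pop()

print(element_a, element_b, element_c, element_d, element_e, copy_set)
```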
Q: Parse date in pandas with unit='D' What is wrong with this? pd.to_datetime('2022-01-01',unit='D') If I do it without the unit pd.to_datetime('2022-01-01') no error is raised. However, instead of the standard unit ns I rather want D. A: There is a quite clear description and examples in the official documentation. Let's take an example from it: pd.to_datetime([1, 2, 3], unit='D', origin=pd.Timestamp('1960-01-01')) Output: DatetimeIndex(['1960-01-02', '1960-01-03', '1960-01-04'], dtype='datetime64[ns]', freq=None) What has happened here? Basically we are taking origin as the base date, and the list at the beginning as a set of offsets. By unit='D' we set the offset unit to days — no problem, let's see how it behaves on a different list: pd.to_datetime([0, 30, 64], unit='D', origin=pd.Timestamp('1960-01-01')) Output: DatetimeIndex(['1960-01-01', '1960-01-31', '1960-03-05'], dtype='datetime64[ns]', freq=None) Now look. 0 means there is no change. 30 means we are adding 30 days to our starting date. Finally 64 means we are adding 64 days to our base date. Let's do it in Excel: var value Base= 01-01-60 +64 05-03-60 So that checks out, doesn't it? Let's try it with a different unit, e.g. s, which stands for seconds: pd.to_datetime([0, 30, 64], unit='s', origin=pd.Timestamp('1960-01-01')) Output: DatetimeIndex(['1960-01-01 00:00:00', '1960-01-01 00:00:30', '1960-01-01 00:01:04'], dtype='datetime64[ns]', freq=None) That was expected. Basically the same thing: we either keep the base value, add 30 seconds, or get 00:01:04 by adding 64 seconds. To sum it up You are misusing the unit= key: it is meant to offset a base datetime by a list of numeric amounts. Your date should be supplied via the origin= key, as origin='2022-01-01'. If you don't want this functionality and you want to extract the day from this value, then look at the other answer. Basically: pd.to_datetime('2022-01-01', format='%Y-%m-%d').day Output: 1 1 is the first day of Jan 2022.
Update From the comments I remember you wanted to cast your datetime with seconds to a date. You can do it with .floor('1D') or .normalize() (note that .ceil('1D') would round up to the next day).
Parse date in pandas with unit='D'
What is wrong with this? pd.to_datetime('2022-01-01',unit='D') If I do it without the unit pd.to_datetime('2022-01-01') no error is raised. However, instead of the standard unit ns I rather want D.
[ "There is a quite clear description and examples on the official documentaiton.\nLet's take an example from it:\npd.to_datetime([1, 2, 3], unit='D',\n origin=pd.Timestamp('1960-01-01'))\n\nOutput:\nDatetimeIndex(['1960-01-02', '1960-01-03', '1960-01-04'], dtype='datetime64[ns]', freq=None)\n\nWhat has happened here? Basically we are taking origin as the base date, and this list in the beginning as a… multiplier? By unit='D' we set it to days, no problem, let's see how it behaves on a different list:\npd.to_datetime([0, 30, 64], unit='D',\n origin=pd.Timestamp('1960-01-01'))\n\nOutput:\nDatetimeIndex(['1960-01-01', '1960-01-31', '1960-03-05'], dtype='datetime64[ns]', freq=None)\n\nNow look. 0 means there is no change.\n30 means we are adding 30 days to our starting date.\nFinally 64 means we are adding 64 days to our base date.\nLet's do it in Excel:\n\n\n\n\nvar\nvalue\n\n\n\n\nBase=\n01-01-60\n\n\n+64\n05-03-60\n\n\n\n\n\nSo, feels legit, does not it?\nLet's try it on some different unit, e.g. s which stands for seconds:\npd.to_datetime([0, 30, 64], unit='s',\n origin=pd.Timestamp('1960-01-01'))\n\nOutput:\nDatetimeIndex(['1960-01-01 00:00:00', '1960-01-01 00:00:30',\n '1960-01-01 00:01:04'],\n dtype='datetime64[ns]', freq=None)\n\nThat was expected. Basically same thing, we are rather taking the base value, or add 30 seconds or get 00:01:04 by adding 64 seconds\nTo sum it up\nYou are misusing this unit= key, it's meant to add up to the base datetime by providing a list of values of how much you want to add up. Your date should be featured in origin= key as origin='2022-01-01'.\nIf you don't want this functionality and you want to cast this value to a day, than look at the other answer. Basically:\npd.to_datetime('2022-01-01', format='%Y-%m-%d').day\n\nOutput:\n1\n\nOne is the first day of Jan 2022.\nUpdate\nFrom the comments I remember you wanted to cast you datetime with seconds to date. You can do it with .ceil('1D').\n" ]
[ 1 ]
[]
[]
[ "datetime", "pandas", "parsing", "python", "timestamp" ]
stackoverflow_0074671728_datetime_pandas_parsing_python_timestamp.txt
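The origin-plus-offset arithmetic that the answer above walks through for unit='D' can be mimicked with the standard library alone, which makes the '1960-03-05' result easy to verify without pandas (to_datetime_days is an illustrative helper, not a pandas API):

```python
from datetime import datetime, timedelta

def to_datetime_days(offsets, origin):
    """Mimic pd.to_datetime(offsets, unit='D', origin=origin):
    each numeric offset is a number of days added to the origin."""
    return [origin + timedelta(days=n) for n in offsets]

base = datetime(1960, 1, 1)
dates = to_datetime_days([0, 30, 64], base)
print(dates)  # 0 -> the origin itself, 30 -> Jan 31, 64 -> Mar 5 (1960 is a leap year)
```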
Q: Solving an ODE with a Time-Dependent Variable For the 2 systems of ODE, I am using RK4 to solve. From 0 <= t <= 30, b is a constant. But at t >= 30, b is a time-dependent variable where b = 1.2 * exp(-0.5 * (t - 30)). I tried to implement it, but there's an error saying setting an array element with a sequence. How should I implement the time-variable? a = 0.05 b = 1.2 def fA(A, F, t): return -A + a * F + A**2 * F def fF(A, F, t): return b - a * F - A**2 * F h = 0.1 t = np.arange(0, 100 + h, h) A = np.zeros(t.shape) F = np.zeros(t.shape) A[0] = 1 F[0] = 1 for i in range(len(t) - 1): if t[i] >= 30: b = 1.2 * np.exp(-0.5 * (t - 30)) # <-- error here kA1 = fA(A[i], F[i], t[i]) kF1 = fF(A[i], F[i], t[i]) kA2 = fA(A[i] + h * kA1 / 2, F[i] + h * kF1 / 2, t[i] + h / 2) kF2 = fF(A[i] + h * kA1 / 2, F[i] + h * kF1 / 2, t[i] + h / 2) kA3 = fA(A[i] + h * kA2 / 2, F[i] + h * kF2 / 2, t[i] + h / 2) kF3 = fF(A[i] + h * kA2 / 2, F[i] + h * kF2 / 2, t[i] + h / 2) kA4 = fA(A[i] + h * kA3 / 2, F[i] + h * kF3 / 2, t[i] + h / 2) kF4 = fF(A[i] + h * kA3 / 2, F[i] + h * kF3 / 2, t[i] + h / 2) kA = (kA1 + 2 * kA2 + 2 * kA3 + kA4) / 6 kF = (kF1 + 2 * kF2 + 2 * kF3 + kF4) / 6 A[i + 1] = A[i] + h * kA F[i + 1] = F[i] + h * kF plt.plot(t, A) plt.plot(t, F) A: In the first code I use the integrator developed in scipy.integrate.solve_ivp from scipy. Code 1. 
########################################## # AUTHOR : CARLOS DUARDO DA SILVA LIMA # # DATE : 03/12/2022 # # LANGUAGE: python # # IDE : GOOGLE COLAB # # CODE 1 : # ########################################## import numpy as np import matplotlib.pyplot as plt from scipy.integrate import odeint, solve_ivp # Ordinary Differential Equations a = 0.05 b = 1.20 def B(t): return 1.2*np.exp(-0.5*(t-30)) def f(t,r): x,y=r if t>=0 and t<=30: ode1 = -x+a*y+x**2*y ode2 = b-a*y-x**2*y s = np.array([ode1,ode2]) return s elif t>30: b_ = B(t) ode1 = -x+a*y+x**2*y ode2 = b_-a*y-x**2*y s = np.array([ode1,ode2]) return s # Time t0<=t<=tf ti = 0.0 tf = 100.0 t = np.array([ti,tf]) # Initial conditions x0 = 1.0 y0 = 1.0 r0 = np.array([x0,y0]) # Solving the set of ordinary differential equations (Initial Value Problem) #sol = solve_ivp(f, t_span = t, y0 = r0,method='Radau', rtol=1E-09, atol=1e-09) #sol = solve_ivp(f, t_span = t, y0 = r0,method='DOP853', rtol=1E-09, atol=1e-09) #sol = solve_ivp(f, t_span = t, y0 = r0,method='RK23', rtol=1E-09, atol=1e-09) sol = solve_ivp(f, t_span = t, y0 = r0,method='RK45', rtol=1E-09, atol=1e-09) t_= sol.t x = sol.y[0, :] y = sol.y[1, :] # graphic plt.figure(figsize = (8,8)) plt.style.use('dark_background') plt.title('scipy.integrate.solve_ivp - RK45') plt.plot(t_,x,'y.',t_,y,'g.') plt.xlabel('t') plt.ylabel('x, y') plt.legend(['x', 'y'], shadow=True) plt.grid(lw = 0.95,color = 'white',linestyle = '--') plt.show() Figure 1: Output graph using scipy.integrate.solve_ivp In the second code I try to solve the problem following his reasoning, creating the fourth order Runge-Kutta method in the initial value problem. Heads up! The initial conditions for x0 and y0 were informed and I use them in both codes (code 1 and code 2), x0 = 1.0 and y0 = 1.0. 
########################################## # AUTHOR : CARLOS DUARDO DA SILVA LIMA # # DATE : 03/12/2022 # # LANGUAGE: python # # IDE : GOOGLE COLAB # # CODE 2 : # ########################################## import numpy as np import matplotlib.pyplot as plt from scipy.integrate import odeint, solve_ivp # Ordinary Differential Equations a = 0.05 b = 1.20 def B(t): return 1.2*np.exp(-0.5*(t-30)) def f(t,x,y): if t>=0 and t<=30: ode1 = -x+a*y+x**2*y ode2 = b-a*y-x**2*y s = np.array([ode1,ode2]) return s elif t>30: b_ = B(t) ode1 = -x+a*y+x**2*y ode2 = b_-a*y-x**2*y s = np.array([ode1,ode2]) return s # Initial conditions ti = 0.0 x0 = 1.0 y0 = 1.0 N = 1000000 h = 1E-4 # Step t = np.zeros(N) x = np.zeros(N) y = np.zeros(N) t[0] = ti x[0] = x0 y[0] = y0 for i in range(0,N-1,1): k11 = h*f(t[i],x[i],y[i])[0] k12 = h*f(t[i],x[i],y[i])[1] k21 = h*f(t[i]+(h/2),x[i]+(k11/2),y[i]+(k12/2))[0] k22 = h*f(t[i]+(h/2),x[i]+(k11/2),y[i]+(k12/2))[1] k31 = h*f(t[i]+(h/2),x[i]+(k21/2),y[i]+(k22/2))[0] k32 = h*f(t[i]+(h/2),x[i]+(k21/2),y[i]+(k22/2))[1] k41 = h*f(t[i]+h,x[i]+k31,y[i]+k32)[0] k42 = h*f(t[i]+h,x[i]+k31,y[i]+k32)[1] x[i+1] = x[i] + ((k11+2*(k21+k31)+k41)/6) y[i+1] = y[i] + ((k12+2*(k22+k32)+k42)/6) t[i+1] = t[i] + h # graphic t_ = t plt.figure(figsize = (8,8)) plt.style.use('dark_background') plt.title('4-Order Runge-Kutta') plt.plot(t_,x,'r.',t_,y,'b.') plt.xlabel('t') plt.ylabel('x, y') plt.legend(['x', 'y'], shadow=True) plt.grid(lw = 0.95,color = 'white',linestyle = '--') plt.show() Figure 2: Output graph 4-Order Runge-Kutta In case you don't want to be limited only between 0<=t<=30, we can use a simple logic, like. def f(t,r): x,y=r if t>=30: b_ = B(t) ode1 = -x+a*y+x**2*y ode2 = b_-a*y-x**2*y s = np.array([ode1,ode2]) return s else: ode1 = -x+a*y+x**2*y ode2 = b-a*y-x**2*y s = np.array([ode1,ode2]) return s You can apply them to both of the above codes by replacing the function f, bounded between 0<=t<=30 and t>30. One last note! 
I replaced variables A and F with x and y. See you soon :).
Solving an ODE with a Time-Dependent Variable
For the 2 systems of ODE, I am using RK4 to solve. From 0 <= t <= 30, b is a constant. But at t >= 30, b is a time-dependent variable where b = 1.2 * exp(-0.5 * (t - 30)). I tried to implement it, but there's an error saying setting an array element with a sequence. How should I implement the time-variable? a = 0.05 b = 1.2 def fA(A, F, t): return -A + a * F + A**2 * F def fF(A, F, t): return b - a * F - A**2 * F h = 0.1 t = np.arange(0, 100 + h, h) A = np.zeros(t.shape) F = np.zeros(t.shape) A[0] = 1 F[0] = 1 for i in range(len(t) - 1): if t[i] >= 30: b = 1.2 * np.exp(-0.5 * (t - 30)) # <-- error here kA1 = fA(A[i], F[i], t[i]) kF1 = fF(A[i], F[i], t[i]) kA2 = fA(A[i] + h * kA1 / 2, F[i] + h * kF1 / 2, t[i] + h / 2) kF2 = fF(A[i] + h * kA1 / 2, F[i] + h * kF1 / 2, t[i] + h / 2) kA3 = fA(A[i] + h * kA2 / 2, F[i] + h * kF2 / 2, t[i] + h / 2) kF3 = fF(A[i] + h * kA2 / 2, F[i] + h * kF2 / 2, t[i] + h / 2) kA4 = fA(A[i] + h * kA3 / 2, F[i] + h * kF3 / 2, t[i] + h / 2) kF4 = fF(A[i] + h * kA3 / 2, F[i] + h * kF3 / 2, t[i] + h / 2) kA = (kA1 + 2 * kA2 + 2 * kA3 + kA4) / 6 kF = (kF1 + 2 * kF2 + 2 * kF3 + kF4) / 6 A[i + 1] = A[i] + h * kA F[i + 1] = F[i] + h * kF plt.plot(t, A) plt.plot(t, F)
[ "In the first code I use the integrator developed in scipy.integrate.solve_ivp from scipy.\nCode 1.\n##########################################\n# AUTHOR : CARLOS DUARDO DA SILVA LIMA #\n# DATE : 03/12/2022 #\n# LANGUAGE: python #\n# IDE : GOOGLE COLAB #\n# CODE 1 : #\n##########################################\n\nimport numpy as np\nimport matplotlib.pyplot as plt\nfrom scipy.integrate import odeint, solve_ivp\n\n# Ordinary Differential Equations\na = 0.05\nb = 1.20\n\ndef B(t):\n return 1.2*np.exp(-0.5*(t-30))\n\ndef f(t,r):\n x,y=r\n if t>=0 and t<=30:\n ode1 = -x+a*y+x**2*y\n ode2 = b-a*y-x**2*y\n s = np.array([ode1,ode2])\n return s\n elif t>30:\n b_ = B(t)\n ode1 = -x+a*y+x**2*y\n ode2 = b_-a*y-x**2*y\n s = np.array([ode1,ode2])\n return s\n\n# Time t0<=t<=tf\nti = 0.0\ntf = 100.0\nt = np.array([ti,tf])\n\n# Initial conditions\nx0 = 1.0\ny0 = 1.0\nr0 = np.array([x0,y0])\n\n# Solving the set of ordinary differential equations (Initial Value Problem)\n#sol = solve_ivp(f, t_span = t, y0 = r0,method='Radau', rtol=1E-09, atol=1e-09)\n#sol = solve_ivp(f, t_span = t, y0 = r0,method='DOP853', rtol=1E-09, atol=1e-09)\n#sol = solve_ivp(f, t_span = t, y0 = r0,method='RK23', rtol=1E-09, atol=1e-09)\nsol = solve_ivp(f, t_span = t, y0 = r0,method='RK45', rtol=1E-09, atol=1e-09)\n\nt_= sol.t\nx = sol.y[0, :]\ny = sol.y[1, :]\n\n# graphic\nplt.figure(figsize = (8,8))\nplt.style.use('dark_background')\nplt.title('scipy.integrate.solve_ivp - RK45')\nplt.plot(t_,x,'y.',t_,y,'g.')\nplt.xlabel('t')\nplt.ylabel('x, y')\nplt.legend(['x', 'y'], shadow=True)\nplt.grid(lw = 0.95,color = 'white',linestyle = '--')\nplt.show()\n\nFigure 1: Output graph using scipy.integrate.solve_ivp\nIn the second code I try to solve the problem following his reasoning, creating the fourth order Runge-Kutta method in the initial value problem. Heads up! 
The initial conditions for x0 and y0 were informed and I use them in both codes (code 1 and code 2), x0 = 1.0 and y0 = 1.0.\n##########################################\n# AUTHOR : CARLOS DUARDO DA SILVA LIMA #\n# DATE : 03/12/2022 #\n# LANGUAGE: python #\n# IDE : GOOGLE COLAB #\n# CODE 2 : #\n##########################################\n\nimport numpy as np\nimport matplotlib.pyplot as plt\nfrom scipy.integrate import odeint, solve_ivp\n\n# Ordinary Differential Equations\na = 0.05\nb = 1.20\n\ndef B(t):\n return 1.2*np.exp(-0.5*(t-30))\n\ndef f(t,x,y):\n if t>=0 and t<=30:\n ode1 = -x+a*y+x**2*y\n ode2 = b-a*y-x**2*y\n s = np.array([ode1,ode2])\n return s\n elif t>30:\n b_ = B(t)\n ode1 = -x+a*y+x**2*y\n ode2 = b_-a*y-x**2*y\n s = np.array([ode1,ode2])\n return s\n\n# Initial conditions\nti = 0.0\nx0 = 1.0\ny0 = 1.0\nN = 1000000\nh = 1E-4 # Step\n\nt = np.zeros(N)\nx = np.zeros(N)\ny = np.zeros(N)\nt[0] = ti\nx[0] = x0\ny[0] = y0\n\nfor i in range(0,N-1,1):\n k11 = h*f(t[i],x[i],y[i])[0]\n k12 = h*f(t[i],x[i],y[i])[1]\n\n k21 = h*f(t[i]+(h/2),x[i]+(k11/2),y[i]+(k12/2))[0]\n k22 = h*f(t[i]+(h/2),x[i]+(k11/2),y[i]+(k12/2))[1]\n\n k31 = h*f(t[i]+(h/2),x[i]+(k21/2),y[i]+(k22/2))[0]\n k32 = h*f(t[i]+(h/2),x[i]+(k21/2),y[i]+(k22/2))[1]\n\n k41 = h*f(t[i]+h,x[i]+k31,y[i]+k32)[0]\n k42 = h*f(t[i]+h,x[i]+k31,y[i]+k32)[1]\n\n x[i+1] = x[i] + ((k11+2*(k21+k31)+k41)/6)\n y[i+1] = y[i] + ((k12+2*(k22+k32)+k42)/6)\n t[i+1] = t[i] + h \n\n# graphic\nt_ = t\nplt.figure(figsize = (8,8))\nplt.style.use('dark_background')\nplt.title('4-Order Runge-Kutta')\nplt.plot(t_,x,'r.',t_,y,'b.')\nplt.xlabel('t')\nplt.ylabel('x, y')\nplt.legend(['x', 'y'], shadow=True)\nplt.grid(lw = 0.95,color = 'white',linestyle = '--')\nplt.show()\n\nFigure 2: Output graph 4-Order Runge-Kutta\nIn case you don't want to be limited only between 0<=t<=30, we can use a simple logic, like.\ndef f(t,r):\n x,y=r\n if t>=30:\n b_ = B(t)\n ode1 = -x+a*y+x**2*y\n ode2 = b_-a*y-x**2*y\n s = np.array([ode1,ode2])\n 
return s\n else:\n ode1 = -x+a*y+x**2*y\n ode2 = b-a*y-x**2*y\n s = np.array([ode1,ode2])\n return s\n\nYou can apply them to both of the above codes by replacing the function f, bounded between 0<=t<=30 and t>30. One last note! I replaced variables A and F with x and y. Up until :).\n" ]
[ 0 ]
[]
[]
[ "math", "ode", "python" ]
stackoverflow_0074647657_math_ode_python.txt
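A minimal sketch of the fix both codes above implement — evaluate b as a scalar function of t inside the derivative, instead of overwriting the global b with an array — using only the standard library (names like b_of_t and rk4_step are illustrative). Note that the fourth RK4 stage uses the full step h and t + h, which the question's kA4/kF4 lines got wrong:

```python
import math

a = 0.05

def b_of_t(t):
    # b is the constant 1.2 up to t = 30, then decays exponentially.
    return 1.2 if t < 30 else 1.2 * math.exp(-0.5 * (t - 30))

def f(t, A, F):
    b = b_of_t(t)  # scalar, evaluated at this time step
    dA = -A + a * F + A**2 * F
    dF = b - a * F - A**2 * F
    return dA, dF

def rk4_step(t, A, F, h):
    kA1, kF1 = f(t, A, F)
    kA2, kF2 = f(t + h/2, A + h*kA1/2, F + h*kF1/2)
    kA3, kF3 = f(t + h/2, A + h*kA2/2, F + h*kF2/2)
    kA4, kF4 = f(t + h, A + h*kA3, F + h*kF3)  # full step for stage 4
    A_next = A + h * (kA1 + 2*kA2 + 2*kA3 + kA4) / 6
    F_next = F + h * (kF1 + 2*kF2 + 2*kF3 + kF4) / 6
    return A_next, F_next

t, A, F = 0.0, 1.0, 1.0
h = 0.1
for _ in range(1000):  # integrate to t = 100
    A, F = rk4_step(t, A, F, h)
    t += h
print(t, A, F)
```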
Q: Why can't my flask app handle more than one client? I created a simple flask application that needs authentication to access the data. When I run this application locally it works fine (accepts more than one client), however when I host the app on Railway or Heroku it can't handle more than one client. Ex: when I access the URL on a computer and log in, then access the URL on my cellphone (different network), I find that same account already logged in. I'm using the latest version of flask and using flask_login to manage authentication. I've tried everything I found on the Internet, such as using app.run(threaded=True); I've also set the number of workers on the gunicorn command, for example. Does anyone have any idea why this is happening? A: As the official Flask documentation says, never run your application in production with the development server (which is what app.run() starts). Please refer to this section if you are going to deploy on a self-hosted machine: https://flask.palletsprojects.com/en/2.2.x/deploying/ And if you are going to deploy to Heroku, you need to prepare a correct Procfile, like this: web: gunicorn run:app
Why my flask app can't handle more than one client?
I created a simple flask application that needs authentication to have access to the data. When I run this application locally it works fine (accepts more than one client), however when I host the app on railway or heroku it can't handle more than one client. Ex: when I access the URL on a computer and log in, if I access the URL on my cellphone (different network) I get to have access to that account logged in. I'm using the latest version of flask and using flask_login to manage authentication. Does anyone have any idea why it's happening? I've tried everything I found out on Internet, such as using app.run(threaded=True) I've also set the numbers of workers on gunicorn command for exemple Does anyone have any idea why it's happening?
[ "As official Flask's documentation says, never run your application in production in dev mode (what app.run() actually is).\nPlease refer to this section if you are going to deploy in self-hosted machine: https://flask.palletsprojects.com/en/2.2.x/deploying/\nAnd if you are going to deploy to Heroku, you need to prepare for correct Procfile, like this:\nweb: gunicorn run:app\n\n" ]
[ 0 ]
[]
[]
[ "flask", "gunicorn", "heroku", "python" ]
stackoverflow_0074671705_flask_gunicorn_heroku_python.txt
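Along the lines of the answer above, the production entry point is a WSGI server rather than app.run(). A minimal config sketch, assuming the Flask object is named app inside run.py (adjust module:variable to your project):

```shell
# Procfile (Heroku) — one line:
web: gunicorn run:app

# Self-hosted equivalent, with explicit workers and bind address:
gunicorn --workers 4 --bind 0.0.0.0:8000 run:app
```

Note that with multiple workers, state kept in process globals is not shared between workers; session data should live in cookies or a shared store.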
Q: Triangulation Plot python curved scattered data I'm trying to get a interpolated contour surface with triangulation from matplotlib. My data looks like a curve and I can't get rid of the data below the curve. I would like to have the outside datapoints as boundaries. I got the code from this tutorial import matplotlib.tri as tri fig, (ax1, ax2) = plt.subplots(nrows=2) xi = np.linspace(-10,150,2000) yi = np.linspace(-10,60,2000) triang = tri.Triangulation(x_after, y_after) interpolator = tri.LinearTriInterpolator(triang, strain_after) Xi, Yi = np.meshgrid(xi, yi) zi = interpolator(Xi, Yi) ax1.triplot(triang, 'ro-', lw=5) ax1.contour(xi, yi, zi, levels=30, linewidths=0.5, colors='k') cntr1 = ax1.contourf(xi, yi, zi, levels=30, cmap="jet") fig.colorbar(cntr1, ax=ax1) ax1.plot(x_after, y_after, 'ko', ms=3) ax2.tricontour(x_after, y_after, strain_after, levels=30, linewidths=0.5, colors='k') cntr2 = ax2.tricontourf(x_after, y_after, strain_after, levels=30, cmap="jet") fig.colorbar(cntr2, ax=ax2) ax2.plot(x_after, y_after, 'ko', ms=3) plt.subplots_adjust(hspace=0.5) plt.show() I found the option to mask the data with this code, but I can't figure out how to define the mask to get what I want triang.set_mask() These are the values for the inner curve: x_after y_after z_after strain_after 39 117.2757 8.7586 0.1904 7.164 40 119.9474 7.152 0.1862 6.6456 37 111.8319 12.0568 0.1671 6.273 38 114.5314 10.4186 0.1651 5.7309 41 122.7482 5.4811 0.1617 9.1563 36 108.8823 13.4417 0.1421 8.8683 42 125.5035 3.8309 0.141 9.7385 33 99.8064 17.6315 0.1357 9.8613 32 96.8869 18.6449 0.1197 4.4147 35 105.8846 14.6086 0.1079 7.7055 28 84.2221 22.0191 0.1076 6.2098 26 77.8689 23.158 0.1067 7.5833 29 87.354 21.2974 0.1044 11.4365 27 81.0778 22.6443 0.1019 8.3794 24 71.4004 23.7749 0.0968 8.6207 34 102.8772 15.9558 0.0959 18.2025 23 68.2124 23.962 0.0939 7.9201 25 74.6905 23.4465 0.0901 9.0361 30 90.5282 20.398 0.0864 14.1051 31 93.802 19.335 0.0794 10.4563 43 128.3489 2.1002 0.0689 9.0292 
22 65.0282 24.1107 0.0654 7.99 21 61.7857 24.0129 0.0543 8.2589 20 58.5831 23.9527 0.0407 9.0087 0 -0.0498 -0.5159 0.0308 7.1073 19 55.3115 23.7794 0.0251 9.6441 5 12.5674 9.3369 0.0203 7.2051 2 4.8147 3.6074 0.0191 8.0103 1 2.363 1.5329 0.0184 7.8285 18 52.0701 23.526 0.016 8.0149 3 7.4067 5.5988 0.0111 8.9994 7 18.2495 12.5836 0.0098 9.771 9 23.9992 15.4145 0.0098 6.7995 16 45.5954 22.5274 0.0098 12.9428 4 9.9776 7.5563 0.0093 6.9804 17 48.9177 23.0669 0.0084 9.3782 13 35.9812 20.0588 0.0066 9.6005 6 15.3578 11.0027 0.0062 9.7801 15 42.2909 21.8663 0.0052 12.0288 11 29.816 17.8723 0.0049 8.9085 8 21.1241 14.0893 0.0032 6.5716 10 26.8691 16.7093 0.0014 6.9672 44 131.1371 0.4155 0.0 11.9578 14 39.0687 20.991 -0.0008 9.9907 12 32.9645 18.9796 -0.0102 9.3389 45 134.083 -1.3928 -0.0616 15.29 A: I managed to find a way to not plot the triangles at the bottom by using the following code: xtri = x_after[triangles] - np.roll(x_after[triangles], 1, axis=1) ytri = y_after[triangles] - np.roll(y_after[triangles], 1, axis=1) maxi = np.max(np.sqrt(xtri**2 + ytri**2), axis=1) max_radius = 4.5 triang.set_mask(maxi > max_radius) A: Most of my code is devoted to build the x, y arrays and the list of triangles in terms of the numbering of the nodes, but I suppose that you already have (or at least you can get) a list of triangles from the mesher program that you have used... if you have the triangles it's as simple as plt.tricontourf(x, y, triangles, z). And here it is the code complete of the boring stuff. 
import matplotlib.pyplot as plt import numpy as np th0 = th2 = np.linspace(-0.5, 0.5, 21) th1 = np.linspace(-0.475, 0.475, 20) r = np.array((30, 31, 32)) x = np.concatenate(( np.sin(th0)*r[0], np.sin(th1)*r[1], np.sin(th2)*r[2])) y = np.concatenate(( np.cos(th0)*r[0], np.cos(th1)*r[1], np.cos(th2)*r[2])) z = np.sin(x)-np.cos(y) nodes0 = np.arange( 0, 21, dtype=int) nodes1 = np.arange(21, 41, dtype=int) nodes2 = np.arange(41, 62, dtype=int) triangles = np.vstack(( np.vstack((nodes0[:-1],nodes0[1:],nodes1)).T, np.vstack((nodes0[1:-1],nodes1[1:],nodes1[:-1])).T, np.vstack((nodes2[:-1],nodes1,nodes2[1:])).T, np.vstack((nodes1[:-1],nodes1[1:],nodes2[1:-1])).T, (0, 21, 41), (20, 61, 40) )) fig, ax = plt.subplots() ax.set_aspect(1) tp = ax.triplot(x, y, triangles, color='k', lw=0.5, zorder=4) tc = ax.tricontourf(x, y, triangles, np.sin(x)-np.cos(y)) plt.colorbar(tc, location='bottom') plt.show()
Triangulation Plot python curved scattered data
I'm trying to get a interpolated contour surface with triangulation from matplotlib. My data looks like a curve and I can't get rid of the data below the curve. I would like to have the outside datapoints as boundaries. I got the code from this tutorial import matplotlib.tri as tri fig, (ax1, ax2) = plt.subplots(nrows=2) xi = np.linspace(-10,150,2000) yi = np.linspace(-10,60,2000) triang = tri.Triangulation(x_after, y_after) interpolator = tri.LinearTriInterpolator(triang, strain_after) Xi, Yi = np.meshgrid(xi, yi) zi = interpolator(Xi, Yi) ax1.triplot(triang, 'ro-', lw=5) ax1.contour(xi, yi, zi, levels=30, linewidths=0.5, colors='k') cntr1 = ax1.contourf(xi, yi, zi, levels=30, cmap="jet") fig.colorbar(cntr1, ax=ax1) ax1.plot(x_after, y_after, 'ko', ms=3) ax2.tricontour(x_after, y_after, strain_after, levels=30, linewidths=0.5, colors='k') cntr2 = ax2.tricontourf(x_after, y_after, strain_after, levels=30, cmap="jet") fig.colorbar(cntr2, ax=ax2) ax2.plot(x_after, y_after, 'ko', ms=3) plt.subplots_adjust(hspace=0.5) plt.show() I found the option to mask the data with this code, but I can't figure out how to define the mask to get what I want triang.set_mask() These are the values for the inner curve: x_after y_after z_after strain_after 39 117.2757 8.7586 0.1904 7.164 40 119.9474 7.152 0.1862 6.6456 37 111.8319 12.0568 0.1671 6.273 38 114.5314 10.4186 0.1651 5.7309 41 122.7482 5.4811 0.1617 9.1563 36 108.8823 13.4417 0.1421 8.8683 42 125.5035 3.8309 0.141 9.7385 33 99.8064 17.6315 0.1357 9.8613 32 96.8869 18.6449 0.1197 4.4147 35 105.8846 14.6086 0.1079 7.7055 28 84.2221 22.0191 0.1076 6.2098 26 77.8689 23.158 0.1067 7.5833 29 87.354 21.2974 0.1044 11.4365 27 81.0778 22.6443 0.1019 8.3794 24 71.4004 23.7749 0.0968 8.6207 34 102.8772 15.9558 0.0959 18.2025 23 68.2124 23.962 0.0939 7.9201 25 74.6905 23.4465 0.0901 9.0361 30 90.5282 20.398 0.0864 14.1051 31 93.802 19.335 0.0794 10.4563 43 128.3489 2.1002 0.0689 9.0292 22 65.0282 24.1107 0.0654 7.99 21 61.7857 24.0129 
0.0543 8.2589 20 58.5831 23.9527 0.0407 9.0087 0 -0.0498 -0.5159 0.0308 7.1073 19 55.3115 23.7794 0.0251 9.6441 5 12.5674 9.3369 0.0203 7.2051 2 4.8147 3.6074 0.0191 8.0103 1 2.363 1.5329 0.0184 7.8285 18 52.0701 23.526 0.016 8.0149 3 7.4067 5.5988 0.0111 8.9994 7 18.2495 12.5836 0.0098 9.771 9 23.9992 15.4145 0.0098 6.7995 16 45.5954 22.5274 0.0098 12.9428 4 9.9776 7.5563 0.0093 6.9804 17 48.9177 23.0669 0.0084 9.3782 13 35.9812 20.0588 0.0066 9.6005 6 15.3578 11.0027 0.0062 9.7801 15 42.2909 21.8663 0.0052 12.0288 11 29.816 17.8723 0.0049 8.9085 8 21.1241 14.0893 0.0032 6.5716 10 26.8691 16.7093 0.0014 6.9672 44 131.1371 0.4155 0.0 11.9578 14 39.0687 20.991 -0.0008 9.9907 12 32.9645 18.9796 -0.0102 9.3389 45 134.083 -1.3928 -0.0616 15.29
[ "I managed to find a way to not plot the triangles at the bottom by using the following code:\nxtri = x_after[triangles] - np.roll(x_after[triangles], 1, axis=1)\nytri = y_after[triangles] - np.roll(y_after[triangles], 1, axis=1)\nmaxi = np.max(np.sqrt(xtri**2 + ytri**2), axis=1)\nmax_radius = 4.5\ntriang.set_mask(maxi > max_radius)\n\n\n", "\nMost of my code is devoted to build the x, y arrays and the list of triangles in terms of the numbering of the nodes, but I suppose that you already have (or at least you can get) a list of triangles from the mesher program that you have used... if you have the triangles it's as simple as plt.tricontourf(x, y, triangles, z).\nAnd here it is the code complete of the boring stuff.\nimport matplotlib.pyplot as plt\nimport numpy as np\n\nth0 = th2 = np.linspace(-0.5, 0.5, 21)\nth1 = np.linspace(-0.475, 0.475, 20)\nr = np.array((30, 31, 32))\n\nx = np.concatenate(( np.sin(th0)*r[0], \n np.sin(th1)*r[1],\n np.sin(th2)*r[2]))\ny = np.concatenate(( np.cos(th0)*r[0], \n np.cos(th1)*r[1],\n np.cos(th2)*r[2]))\n\nz = np.sin(x)-np.cos(y)\n\nnodes0 = np.arange( 0, 21, dtype=int)\nnodes1 = np.arange(21, 41, dtype=int)\nnodes2 = np.arange(41, 62, dtype=int)\n\ntriangles = np.vstack((\n np.vstack((nodes0[:-1],nodes0[1:],nodes1)).T,\n np.vstack((nodes0[1:-1],nodes1[1:],nodes1[:-1])).T,\n np.vstack((nodes2[:-1],nodes1,nodes2[1:])).T,\n np.vstack((nodes1[:-1],nodes1[1:],nodes2[1:-1])).T,\n (0, 21, 41),\n (20, 61, 40)\n ))\n\nfig, ax = plt.subplots()\nax.set_aspect(1)\ntp = ax.triplot(x, y, triangles, color='k', lw=0.5, zorder=4)\ntc = ax.tricontourf(x, y, triangles, np.sin(x)-np.cos(y))\nplt.colorbar(tc, location='bottom')\nplt.show()\n\n" ]
[ 0, 0 ]
[]
[]
[ "interpolation", "matplotlib", "python", "triangulation" ]
stackoverflow_0074659764_interpolation_matplotlib_python_triangulation.txt
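The edge-length mask from the first answer can be checked in isolation — without matplotlib — since all it does is hide triangles whose longest edge exceeds max_radius (the toy points and the max_edge_length helper below are illustrative, not from the original code):

```python
import math

def max_edge_length(tri, x, y):
    """Longest edge of a triangle given by three point indices."""
    i, j, k = tri
    def d(p, q):
        return math.hypot(x[p] - x[q], y[p] - y[q])
    return max(d(i, j), d(j, k), d(k, i))

# Two toy triangles over four points: one compact, one long and thin
# (the kind that Delaunay triangulation creates under a concave curve).
x = [0.0, 1.0, 0.5, 10.0]
y = [0.0, 0.0, 1.0, 0.0]
triangles = [(0, 1, 2), (0, 1, 3)]

max_radius = 4.5  # same cutoff as triang.set_mask(maxi > max_radius) above
mask = [max_edge_length(t, x, y) > max_radius for t in triangles]
print(mask)  # only the long, thin triangle gets masked
```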
Q: How to solve the problem 'zsh: command not found: jupyter' I bought a MacBook Air M1, and I tried to install jupyter notebook with these commands. pip3 install --upgrade pip pip3 install jupyter and I tried to open jupyter notebook with this command. jupyter notebook but then this message appeared. zsh: command not found: jupyter A: First, you need to find where it was installed. pip3 show jupyter | grep Location Example: $ pip3 show pip | grep Location Location: /usr/lib/python3/dist-packages Then you need to ensure that the path you get is in your PATH. Example: $ export PATH=/usr/lib/python3/dist-packages:$PATH A: Try adding Python's bin/ folder path to the $PATH variable. You should be able to find it here - /Users/<your-username>/Library/Python/3.x/bin Step 1 : Open .bash_profile in a text editor (since the error comes from zsh, edit ~/.zshrc instead if zsh is your login shell) open ~/.bash_profile Step 2 : Add the following line at the end of the file export PATH="/Users/<your-username>/Library/Python/3.x/bin:$PATH" Step 3 : Save the edits and restart the terminal
How to solve the problem 'zsh: command not found: jupyter'
I bought Macbook air M1, and I tried to install jupyter notebook with this code. pip3 install --upgrade pip pip3 install jupyter and I tried to open jupyter notebook with this code. jupyter notebook but, then, this message appeared: zsh: command not found: jupyter
[ "First, you need to find where it was installed.\npip3 show jupyter | grep Location\n\nExample:\n$ pip3 show pip | grep Location\n\nLocation: /usr/lib/python3/dist-packages\n\nThen you need to ensure that the path you get is in your PATH.\nExample:\n$ export PATH=/usr/lib/python3/dist-packages:$PATH\n\n", "Try adding Python's bin/ folder path to $PATH variable.\nYou should be able to find it here - /Users/<your-username>/Library/Python/3.x/bin\nStep 1 : Open .bash_profile in text editor\nopen ~/.bash_profile\n\nStep 2 : Add the following line at the end of the file\nexport PATH=\"/Users/<your-username>/Library/Python/3.x/bin:$PATH\"\n\nStep 3 : Save the edits made to .bash_profile and restart the terminal\n" ]
[ 6, 0 ]
[]
[]
[ "jupyter", "jupyter_notebook", "macos", "python" ]
stackoverflow_0069614802_jupyter_jupyter_notebook_macos_python.txt
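Both answers above perform the same operation: prepend the directory that holds the jupyter entry point to PATH. A small Python sketch of that logic (the 3.9 path is illustrative — use the Location that pip3 show jupyter actually reports):

```python
import os

def ensure_on_path(directory, path=None):
    """Return a PATH string with `directory` prepended,
    unless it is already one of the PATH entries."""
    path = os.environ.get("PATH", "") if path is None else path
    parts = path.split(os.pathsep)
    if directory in parts:
        return path
    return directory + os.pathsep + path

# Hypothetical user-level scripts directory on an M1 Mac (adjust 3.9):
user_bin = os.path.expanduser("~/Library/Python/3.9/bin")
new_path = ensure_on_path(user_bin, "/usr/bin:/bin")
print(new_path)
```

Putting the equivalent export line in the shell startup file (~/.zshrc for zsh) makes the change permanent.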
Q: Difference between defining typing.Dict and dict? I am practicing using type hints in Python 3.5. One of my colleague uses typing.Dict: import typing def change_bandwidths(new_bandwidths: typing.Dict, user_id: int, user_name: str) -> bool: print(new_bandwidths, user_id, user_name) return False def my_change_bandwidths(new_bandwidths: dict, user_id: int, user_name: str) ->bool: print(new_bandwidths, user_id, user_name) return True def main(): my_id, my_name = 23, "Tiras" simple_dict = {"Hello": "Moon"} change_bandwidths(simple_dict, my_id, my_name) new_dict = {"new": "energy source"} my_change_bandwidths(new_dict, my_id, my_name) if __name__ == "__main__": main() Both of them work just fine, there doesn't appear to be a difference. I have read the typing module documentation. Between typing.Dict or dict which one should I use in the program? A: There is no real difference between using a plain typing.Dict and dict, no. However, typing.Dict is a Generic type * that lets you specify the type of the keys and values too, making it more flexible: def change_bandwidths(new_bandwidths: typing.Dict[str, str], user_id: int, user_name: str) -> bool: As such, it could well be that at some point in your project lifetime you want to define the dictionary argument a little more precisely, at which point expanding typing.Dict to typing.Dict[key_type, value_type] is a 'smaller' change than replacing dict. You can make this even more generic by using Mapping or MutableMapping types here; since your function doesn't need to alter the mapping, I'd stick with Mapping. A dict is one mapping, but you could create other objects that also satisfy the mapping interface, and your function might well still work with those: def change_bandwidths(new_bandwidths: typing.Mapping[str, str], user_id: int, user_name: str) -> bool: Now you are clearly telling other users of this function that your code won't actually alter the new_bandwidths mapping passed in. 
Your actual implementation is merely expecting an object that is printable. That may be a test implementation, but as it stands your code would continue to work if you used new_bandwidths: typing.Any, because any object in Python is printable. *: Note: If you are using Python 3.7 or newer, you can use dict as a generic type if you start your module with from __future__ import annotations, and as of Python 3.9, dict (as well as other standard containers) supports being used as a generic type even without that directive. A: typing.Dict is a generic version of dict: class typing.Dict(dict, MutableMapping[KT, VT]) A generic version of dict. The usage of this type is as follows: def get_position_in_index(word_list: Dict[str, int], word: str) -> int: return word_list[word] Here you can specify the types of the keys and values in the dict: Dict[str, int] A: As the python.org documentation says: class typing.Dict(dict, MutableMapping[KT, VT]) A generic version of dict. Useful for annotating return types. To annotate arguments it is preferred to use an abstract collection type such as Mapping. This type can be used as follows: def count_words(text: str) -> Dict[str, int]: ... But dict is less general, and the callee will be able to alter the mapping passed in. In fact, with typing.Dict you specify more details. Another tip: Deprecated since version 3.9: builtins.dict now supports []. See PEP 585 and Generic Alias Type. A: If you're coming from Google for TypeError: Too few parameters for typing.Dict; actual 1, expected 2, you need to provide a type for both the key and value. So, Dict[str, str] instead of Dict[str]
Difference between defining typing.Dict and dict?
I am practicing using type hints in Python 3.5. One of my colleagues uses typing.Dict: import typing def change_bandwidths(new_bandwidths: typing.Dict, user_id: int, user_name: str) -> bool: print(new_bandwidths, user_id, user_name) return False def my_change_bandwidths(new_bandwidths: dict, user_id: int, user_name: str) -> bool: print(new_bandwidths, user_id, user_name) return True def main(): my_id, my_name = 23, "Tiras" simple_dict = {"Hello": "Moon"} change_bandwidths(simple_dict, my_id, my_name) new_dict = {"new": "energy source"} my_change_bandwidths(new_dict, my_id, my_name) if __name__ == "__main__": main() Both of them work just fine; there doesn't appear to be a difference. I have read the typing module documentation. Between typing.Dict and dict, which one should I use in the program?
[ "There is no real difference between using a plain typing.Dict and dict, no.\nHowever, typing.Dict is a Generic type * that lets you specify the type of the keys and values too, making it more flexible:\ndef change_bandwidths(new_bandwidths: typing.Dict[str, str],\n user_id: int,\n user_name: str) -> bool:\n\nAs such, it could well be that at some point in your project lifetime you want to define the dictionary argument a little more precisely, at which point expanding typing.Dict to typing.Dict[key_type, value_type] is a 'smaller' change than replacing dict.\nYou can make this even more generic by using Mapping or MutableMapping types here; since your function doesn't need to alter the mapping, I'd stick with Mapping. A dict is one mapping, but you could create other objects that also satisfy the mapping interface, and your function might well still work with those:\ndef change_bandwidths(new_bandwidths: typing.Mapping[str, str],\n user_id: int,\n user_name: str) -> bool:\n\nNow you are clearly telling other users of this function that your code won't actually alter the new_bandwidths mapping passed in.\nYour actual implementation is merely expecting an object that is printable. That may be a test implementation, but as it stands your code would continue to work if you used new_bandwidths: typing.Any, because any object in Python is printable.\n\n*: Note: If you are using Python 3.7 or newer, you can use dict as a generic type if you start your module with from __future__ import annotations, and as of Python 3.9, dict (as well as other standard containers) supports being used as generic type even without that directive.\n", "typing.Dict is a generic version of dict:\n\nclass typing.Dict(dict, MutableMapping[KT, VT])\nA generic version of dict. 
The usage of this type is as follows:\ndef get_position_in_index(word_list: Dict[str, int], word: str) -> int:\n return word_list[word]\n\n\nHere you can specify the type of the keys and values in the dict: Dict[str, int]\n", "as said on python.org:\n\nclass typing.Dict(dict, MutableMapping[KT, VT])\n\n\nA generic version of dict. Useful for annotating return types. To\nannotate arguments it is preferred to use an abstract collection type\nsuch as Mapping.\n\nThis type can be used as follows:\ndef count_words(text: str) -> Dict[str, int]:\n ...\n\nBut dict is less general and you will be able to alter the mapping passed in.\nIn fact, with typing.Dict you specify more details.\nAnother tip:\n\nDeprecated since version 3.9: builtins.dict now supports []. See PEP 585\nand Generic Alias Type.\n\n", "If you're coming from Google for\nTypeError: Too few parameters for typing.Dict; actual 1, expected 2, you need to provide a type for both the key and value.\nSo, Dict[str, str] instead of Dict[str]\n" ]
[ 285, 39, 11, 0 ]
[]
[]
[ "dictionary", "python", "python_typing", "type_hinting" ]
stackoverflow_0037087457_dictionary_python_python_typing_type_hinting.txt
Q: my raspberry pi pico oled display code is returning 'OSError: [Errno 5] EIO' I've been trying to use an ssd1306 oled display with a raspberry pi pico, but every time I run the code it returns an error. I don't know what the error means and can't really find anything online for it. I was able to "fix" it yesterday by changing the address in the library file it uses, but although it worked, the issue came back for no apparent reason, despite me not even changing any of the code. This is the code that I am trying to use: from machine import Pin, I2C from ssd1306 import SSD1306_I2C i2c=I2C(0,sda=Pin(0), scl=Pin(1), freq=400000) oled = SSD1306_I2C(128, 64, i2c) oled.text("hello world", 0, 0) oled.show() This is the error: Traceback (most recent call last): File "<stdin>", line 11, in <module> File "/lib/ssd1306.py", line 110, in __init__ File "/lib/ssd1306.py", line 36, in __init__ File "/lib/ssd1306.py", line 71, in init_display File "/lib/ssd1306.py", line 115, in write_cmd OSError: [Errno 5] EIO And the 'addr=0x3D' was originally 0x3C, which is 60 in decimal, but since my i2c.scan returned 61, I changed it to 0x3D, which fixed it for a little bit, but it stopped working again for some reason: class SSD1306_I2C(SSD1306): def __init__(self, width, height, i2c, addr=0x3D, external_vcc=False): self.i2c = i2c self.addr = addr self.temp = bytearray(2) self.write_list = [b"\x40", None] # Co=0, D/C#=1 super().__init__(width, height, external_vcc) A: Thank you, odog, your error helped me track mine down. This simple example now works for me: from machine import Pin, I2C from ssd1306 import SSD1306_I2C i2c=I2C(0,sda=Pin(0), scl=Pin(1), freq=400000) devices = i2c.scan() try: oled = SSD1306_I2C(128, 64, i2c,addr=devices[0]) oled.text("hello world", 0, 0) oled.show() except Exception as err: print(f"Unable to initialize oled: {err}") Since yours stopped responding to the address you found, I'm not sure if it will help you.
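The decimal/hex mix-up behind the address change is worth spelling out: i2c.scan() returns plain decimal integers, while driver defaults like addr=0x3C are written in hex. A small pure-Python sketch (the helper name is made up for illustration; it is not part of the ssd1306 driver):

```python
def describe_scan(scan_result):
    # i2c.scan() yields decimal integers; show each one in the hex
    # notation the ssd1306 driver uses for its addr argument.
    return {addr: hex(addr) for addr in scan_result}

# The driver default addr=0x3C is decimal 60; a scan returning 61
# means the display actually answers at 0x3D.
print(describe_scan([61]))  # {61: '0x3d'}
print(0x3C, 0x3D)           # 60 61
```

An empty or changing scan result usually suggests a wiring or power problem rather than the driver itself, which would also fit the intermittent EIO described in the question.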
my raspberry pi pico oled display code is returning 'OSError: [Errno 5] EIO'
I've been trying to use an ssd1306 oled display with a raspberry pi pico, but every time I run the code it returns an error. I don't know what the error means and can't really find anything online for it. I was able to "fix" it yesterday by changing the address in the library file it uses, but although it worked, the issue came back for no apparent reason, despite me not even changing any of the code. This is the code that I am trying to use: from machine import Pin, I2C from ssd1306 import SSD1306_I2C i2c=I2C(0,sda=Pin(0), scl=Pin(1), freq=400000) oled = SSD1306_I2C(128, 64, i2c) oled.text("hello world", 0, 0) oled.show() This is the error: Traceback (most recent call last): File "<stdin>", line 11, in <module> File "/lib/ssd1306.py", line 110, in __init__ File "/lib/ssd1306.py", line 36, in __init__ File "/lib/ssd1306.py", line 71, in init_display File "/lib/ssd1306.py", line 115, in write_cmd OSError: [Errno 5] EIO And the 'addr=0x3D' was originally 0x3C, which is 60 in decimal, but since my i2c.scan returned 61, I changed it to 0x3D, which fixed it for a little bit, but it stopped working again for some reason: class SSD1306_I2C(SSD1306): def __init__(self, width, height, i2c, addr=0x3D, external_vcc=False): self.i2c = i2c self.addr = addr self.temp = bytearray(2) self.write_list = [b"\x40", None] # Co=0, D/C#=1 super().__init__(width, height, external_vcc)
[ "Thank you, odog, your error helped me track mine down. This simple example now works for me:\nfrom machine import Pin, I2C\nfrom ssd1306 import SSD1306_I2C\n\ni2c=I2C(0,sda=Pin(0), scl=Pin(1), freq=400000)\n\ndevices = i2c.scan()\ntry:\n oled = SSD1306_I2C(128, 64, i2c,addr=devices[0])\n oled.text(\"hello world\", 0, 0)\n oled.show()\nexcept Exception as err:\n print(f\"Unable to initialize oled: {err}\")\n\nSince yours stopped responding to the address you found, I'm not sure if it will help you.\n" ]
[ 0 ]
[]
[]
[ "python", "raspberry_pi_pico", "thonny" ]
stackoverflow_0074659614_python_raspberry_pi_pico_thonny.txt