content (stringlengths 85-101k) | title (stringlengths 0-150) | question (stringlengths 15-48k) | answers (sequence) | answers_scores (sequence) | non_answers (sequence) | non_answers_scores (sequence) | tags (sequence) | name (stringlengths 35-137)
---|---|---|---|---|---|---|---|---
Q:
Why can't I set the width on my plotly bar chart?
No matter what I try...I cannot seem to adjust the width of my plotly bar chart. Here is my current code and output:
fig = go.Figure(go.Bar(
x=top_10_belg['ID'],
y=top_10_belg['Description'],
marker=dict(color='rgba(50, 171, 96, 0.6)',
line=dict(color='rgba(50, 171, 96, 1.0)',width=1))
,orientation='h'))
fig.update_layout(
title='Top 10 Rule Breaks by ID',
height = 500, width = 400,
yaxis=dict(
showgrid=False,
showline=False,
showticklabels=True
),
xaxis=dict(
zeroline=False,
showline=False,
showticklabels=True,
showgrid=False,
),barmode='group', bargap=0.4,bargroupgap=0.0,
paper_bgcolor='white',
plot_bgcolor='white')
fig.update_yaxes(ticksuffix = " ")
fig.update_yaxes(autorange="reversed")
fig.show()
I had to line out the labels due to confidentiality.
A:
The width of each bar is set through the width argument of go.Bar, which takes a list with one value per bar. A value of 1.0 leaves no gap between bars, so a value such as 0.8 keeps a small gap between the category bars. I have reproduced a similar horizontal bar chart below using your styling.
fig = go.Figure(go.Bar(
            x=top_10_belg['ID'],
            y=top_10_belg['Description'],
            width=[0.8]*len(top_10_belg['Description']), # update
            marker=dict(color='rgba(50, 171, 96, 0.6)',
                        line=dict(color='rgba(50, 171, 96, 1.0)',
                                  width=1)),
            orientation='h'))
Widening the figure itself also widens the x-axis, so the overall plot width can be increased by adding the following code.
fig.update_layout(autosize=True, width=1000)
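For completeness, a self-contained sketch combining both suggestions; the data below is invented, since top_10_belg from the question is not available here.
import plotly.graph_objects as go

# invented stand-in for top_10_belg
ids = [12, 9, 7, 5, 3]
descriptions = ["Rule A", "Rule B", "Rule C", "Rule D", "Rule E"]

fig = go.Figure(go.Bar(
    x=ids,
    y=descriptions,
    width=[0.8] * len(descriptions),   # per-bar thickness, in category units
    orientation='h'))
fig.update_layout(width=1000, height=500)  # overall figure size, in pixels
fig.show()
Note the two different "widths": go.Bar(width=...) controls bar thickness, while update_layout(width=...) controls the figure size.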
| Why can't I set the width on my plotly bar chart? | No matter what I try...I cannot seem to adjust the width of my plotly bar chart. Here is my current code and output:
fig = go.Figure(go.Bar(
x=top_10_belg['ID'],
y=top_10_belg['Description'],
marker=dict(color='rgba(50, 171, 96, 0.6)',
line=dict(color='rgba(50, 171, 96, 1.0)',width=1))
,orientation='h'))
fig.update_layout(
title='Top 10 Rule Breaks by ID',
height = 500, width = 400,
yaxis=dict(
showgrid=False,
showline=False,
showticklabels=True
),
xaxis=dict(
zeroline=False,
showline=False,
showticklabels=True,
showgrid=False,
),barmode='group', bargap=0.4,bargroupgap=0.0,
paper_bgcolor='white',
plot_bgcolor='white')
fig.update_yaxes(ticksuffix = " ")
fig.update_yaxes(autorange="reversed")
fig.show()
I had to line out the labels due to confidentiality.
| [
"The width of the bar chart is set by the number of bars in the list. The value 1.0 sets the interval to zero. So as an example, the width 0.8 is used to list for the category variable. I have reproduced the similar horizontal bar chart in the reference using your decorations.\nfig = go.Figure(go.Bar(\n x=top_10_belg['ID'],\n y=top_10_belg['Description'],\n width=[0.8]*len(top_10_belg['Description']), # update\n marker=dict(color='rgba(50, 171, 96, 0.6)',\n line=dict(color='rgba(50, 171, 96, 1.0)',\n width=1)),\n orientation='h'))\n\n\nSince increasing the x-axis is the same as increasing the width of the graph size, the width of the x-axis can be increased by adding the following code.\nfig.update_layout(autosize=True, width=1000)\n\n\n"
] | [
0
] | [] | [] | [
"plotly",
"python"
] | stackoverflow_0074645139_plotly_python.txt |
Q:
get value of field from an existing module in odoo 15
Every product in Odoo should have a minimum quantity.
I have created a product and set its minimum quantity in a reordering rule.
What I want to do is get the product name and its minimum quantity in Python.
This is my Python file:
from odoo import fields,models,api
class Qty_Min_Alert(models.Model):
_inherit='product.template'
product_name = # here i want to set the product name from product.template
product_qty_min = # here i want to set the product quantity minimum from product.template
I added 'product' to depends in manifest.py.
Can you help me please?
A:
Your question is a bit unclear. From what I understood, you want to set every product to a particular quantity, which in your case will be the minimum quantity?
If so, then you have to inherit product.product and create a method that searches all records and then updates the in_hand_qty to your desired minimum value, since you have already set it on a field.
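If the goal is only to read the values, here is a minimal sketch, assuming the standard stock module is installed: reordering rules live in stock.warehouse.orderpoint, whose product_min_qty field holds the minimum quantity per product variant. The method name below (get_min_qty) is illustrative, not an Odoo requirement.
from odoo import models

class QtyMinAlert(models.Model):
    _inherit = 'product.template'

    def get_min_qty(self):
        self.ensure_one()
        # Reordering rules point at product.product variants of this template.
        rules = self.env['stock.warehouse.orderpoint'].search(
            [('product_id', 'in', self.product_variant_ids.ids)])
        min_qty = min(rules.mapped('product_min_qty')) if rules else 0.0
        # The product name needs no extra field: it is already self.name.
        return self.name, min_qty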
| get value of field from an existing module in odoo 15 | every product in odoo should have quantity minimum
i have created product and i set the quantity min to it in reordering rule
what i want to do is get the product name and it's quantity min in python
this is my python file
from odoo import fields,models,api
class Qty_Min_Alert(models.Model):
_inherit='product.template'
product_name = # here i want to set the product name from product.template
product_qty_min = # here i want to set the product quantity minimum from product.template
i added 'product' in depends on manifest.py
can you help me please
| [
"your question is bit unclear, according to your question what i understood is that you want to set every product in product with a particular quantity which is in your case will be the minimum quantity?\nif so then you have to inherit the product.product and create a method to search all records and then update the in_hand_qty to your desired minimum value as you have set it already on a field\n"
] | [
1
] | [] | [] | [
"odoo",
"odoo_15",
"python",
"ubuntu"
] | stackoverflow_0074615440_odoo_odoo_15_python_ubuntu.txt |
Q:
Python Code Line Endings
Which line endings should be used for platform independent code (not file access)?
My next project must run on both Windows and Linux.
I wrote code on Linux and used Hg to clone to Windows and it ran fine with Linux endings. The downside is that if you open the file in something other than a smart editor the line endings are not correct.
A:
In general newlines (as typically used in Linux) are more portable than carriage return and then newline (as used in Windows). Note also that if you store your code on GitHub or another Git repository, it will convert it properly without you having to do anything.
A:
As John Messenger states, newlines (\n) as opposed to carriage-return/linefeeds (\r\n) are pretty much the canonical line ending everywhere, except for dedicated Windows-only enclaves.
To support this, git has the core.autocrlf option. When enabled on Windows systems, files with standard newline endings will be converted to CRLF when checked out, and converted back on the way back in.
This seems to be a seamless way to cope with the problem of Windows line-endings when running GUI editors and IDEs on Windows. Your editor is probably expecting CRLF line endings on Windows and will get them when using git with this option, which I believe is set to true by default on Windows installs.
If you're not using Git, maybe it's time you did? Even when working alone it has lots of value.
However, some users will be using GitBash / Cygwin / MinGW tools on Windows, possibly with GitBash-supplied vim, and will not appreciate CRLF line-endings. Those users can turn core.autocrlf off (false) before cloning a repo to avoid inappropriate "corrections", and thereby see proper newline-only line endings when editing files. This may also help when using other Linux tools that expect newline line endings while running on Windows.
Git has three levels of settings: "system" (probably set in C:\Program Files\...), "global" (set in your home directory Git config file and affecting all your repos) and "local" set in each repo's .git/config file. Latter levels override the former levels, which can be helpful if your organisation has locked down your C drive.
Here are some of the Git commands to query and update your settings (and you can just edit the configuration files themselves as well):
git config --list --show-origin: Lists all your settings and the file locations where they are set. A setting may be repeated so the lower level will override the higher.
git config --get core.autocrlf: Get the effective setting from the combination of settings you currently have in place.
git config --system core.autocrlf false: Switch off automatic conversion at the system level, for all repos.
git config --global core.autocrlf false: Switch off automatic conversion in all repos from your home directory config file.
Remember, set core.autocrlf to what you need before you clone. Reference here: https://docs.github.com/en/get-started/getting-started-with-git/configuring-git-to-handle-line-endings
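A complement not covered above (my own addition): a .gitattributes file committed to the repository enforces line-ending handling for every clone, regardless of each user's core.autocrlf setting. A minimal example:
# .gitattributes -- normalize text files to LF in the repository
* text=auto
# always check Python sources out with LF endings
*.py text eol=lf
# Windows batch files genuinely need CRLF
*.bat text eol=crlf
This is often preferable on shared projects because it travels with the code instead of relying on per-machine configuration.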
| Python Code Line Endings | Which line endings should be used for platform independent code (not file access)?
My next project must run on both Windows and Linux.
I wrote code on Linux and used Hg to clone to Windows and it ran fine with Linux endings. The downside is that if you open the file in something other than a smart editor the line endings are not correct.
| [
"In general newlines (as typically used in Linux) are more portable than carriage return and then newline (as used in Windows). Note also that if you store your code on GitHub or another Git repository, it will convert it properly without you having to do anything.\n",
"As John Messenger states, newlines (\\n) as opposed to carriage-return/linefeeds (\\n\\r) are pretty much the canonical line ending everywhere, except for dedicated Windows-only enclaves.\nTo support this, git has the core.autocrlf option. When enabled on Windows systems, files with standard newline endings will be converted to CRLF when checked out, and converted back on the way back in.\nThis seems to be a seamless way to cope with the problem of Windows line-endings when running GUI editors and IDEs on Windows. Your editor is probably expecting CRLF line endings on Windows and will get them when using git with this option, which I believe is set to true by default on Windows installs.\nIf you're not using Git, maybe it's time you did? Even when working alone it has lots of value.\nHowever, some users will be using GitBash / Cygwin / Ming tools on Windows, possibly with GitBash supplied vim, and will not appreciate CRLF line-endings. Those users can turn core.autocrlf off (false) before cloning a repo to avoid inappropriate \"corrections\", and thereby see proper newline-only line endings when editing files. This may also help when using other Linux tools when running on Windows that also expect newline line endings.\nGit has three levels of settings: \"system\" (probably set in C:\\Program Files\\...), \"global\" (set in your home directory Git config file and affecting all your repos) and \"local\" set in each repo's .git/config file. Latter levels override the former levels, which can be helpful if your organisation has locked down your C drive.\nHere are some of the Git commands to query and update your settings (and you can just edit the configuration files themselves as well):\n\ngit config --list --show-origin: Lists all your settings and the file locations where they are set. A setting may be repeated so the lower level will override the higher.\ngit config --get core.autocrlf: Get the effective setting from the combination of settings you currently have in place.\ngit config --system core.autocrlf false: Switch off automatic conversion at the system level, for all repos.\ngit config --global core.autocrlf false: Switch off automatic conversion in all repos from your home directory config file.\n\nRemember, set core.autocrlf to what you need before you clone. Reference here: https://docs.github.com/en/get-started/getting-started-with-git/configuring-git-to-handle-line-endings\n"
] | [
1,
0
] | [] | [] | [
"line_endings",
"python"
] | stackoverflow_0035639333_line_endings_python.txt |
Q:
What type of line breaks does a Python script normally have?
My boss keeps getting annoyed at me for having Windows line breaks in my Python scripts, but I can't for the life of me work out how they are causing him a problem.
Is '\r\n' the normal line-break for a Python script? Or does that only happen on IDLE for PC?
PS: OK, it seems when I write the script on a Mac that it has '\n's, but is there any way that '\r\n' will cause a problem?
Edit:
OK... now I'm totally confused. When I read files written on Windows in Python, the lines all come out ending in '\n' when I print them to the screen. Does the Python interpreter for Windows translate line-breaks?
A:
This has nothing to do with Python but with the underlying OS. If you save a text file on Windows, you get CRLF linebreaks, if you save it on Mac/Unix systems, you get LF linebreaks (and on stone-age Macs, CR linebreaks).
Use an editor that allows you to preserve the line break format of your files. No, Notepad doesn't, but most editors I've seen do. UltraEdit and EditPadPro are the ones I know, and I can recommend both. I'm pretty sure that IDEs like PyDev/Eclipse will handle that too, but I haven't tried.
A:
Make a Python script which replaces '\r\n' with '\n'
and run it every time you give the code to your boss.
:)
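A minimal sketch of such a script follows; the in-place rewrite and the command-line interface are my own choices, not part of the answer above.
# crlf_to_lf.py -- hypothetical helper: rewrite files in place, turning CRLF into LF
import sys

def crlf_to_lf(path):
    with open(path, "rb") as f:      # read bytes so Python does not translate newlines
        data = f.read()
    with open(path, "wb") as f:
        f.write(data.replace(b"\r\n", b"\n"))

if __name__ == "__main__":
    for filename in sys.argv[1:]:
        crlf_to_lf(filename)
Run it as: python crlf_to_lf.py script1.py script2.py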
A:
I've had the same problem. Yes, doing a simple find-and-replace should fix this problem. However you'll have to do it every time you share code; you might want to think about automating it somehow. I never did do that myself.
A:
This sed one-liner will convert the Python script to have the desired line endings:
sed 's/\r$//' <windows_script >unix_script
A:
newlines (\n), as opposed to carriage-return/linefeeds (\r\n), are pretty much the canonical line ending everywhere, except for dedicated Windows-only enclaves.
You shouldn't really be repeatedly manually converting line endings. That's not really a "Good Thing" Ⓡ.
To support this, git has the core.autocrlf option. When enabled on Windows systems, files with standard newline endings will be converted to CRLF when checked out, and converted back on the way back in.
This seems to be a seamless way to cope with the problem of Windows line-endings when running GUI editors and IDEs on Windows. Your editor is probably expecting CRLF line endings on Windows and will get them when using git with this option, which I believe is set to true by default on Windows installs.
If you're not using Git, maybe it's time you did? Even when working alone it has lots of value. You can version your own code for backing up and providing alternates.
Some users will be using GitBash / Cygwin / MinGW tools on Windows, possibly with GitBash-supplied vim, and will not appreciate CRLF line-endings. Those users can turn the Git core.autocrlf setting off (false) before cloning a repo to avoid inappropriate "corrections", and thereby see proper newline-only line endings when editing files, even on Windows. This may also help when using other Linux tools that expect newline line endings while running on Windows.
Git has three levels of settings: "system" (probably set in C:\Program Files\...), "global" (set in your home directory Git config file and affecting all your repos) and "local" set in each repo's .git/config file. Latter levels override the former levels, which can be helpful if your organisation has locked down your C drive.
Here are some of the Git commands to query and update your settings (and you can just edit the configuration files themselves as well):
git config --list --show-origin: Lists all your settings and the file locations where they are set. A setting may be repeated so the lower level will override the higher.
git config --get core.autocrlf: Get the effective setting from the combination of settings you currently have in place.
git config --system core.autocrlf false: Switch off automatic conversion at the system level, for all repos.
git config --global core.autocrlf false: Switch off automatic conversion in all repos from your home directory config file.
Remember, set core.autocrlf to what you need before you clone. Reference here: https://docs.github.com/en/get-started/getting-started-with-git/configuring-git-to-handle-line-endings
| What type of line breaks does a Python script normally have? | My boss keeps getting annoyed at me for having Windows line breaks in my Python scripts, but I can't for the life of me work out how they are causing him a problem.
Is '\r\n' the normal line-break for a Python script? Or does that only happen on IDLE for PC?
PS: OK, it seems when I write the script on a Mac that it has '\n's, but is there any way that '\r\n' will cause a problem?
Edit:
OK... now I'm totally confused. When I interpret files written in Windows in Python they all spit out when I print lines to the screen as '\n'. Does the Python interpreter for Windows translate line-breaks?
| [
"This has nothing to do with Python but with the underlying OS. If you save a text file on Windows, you get CRLF linebreaks, if you save it on Mac/Unix systems, you get LF linebreaks (and on stone-age Macs, CR linebreaks).\nUse an editor that allows you to preserve the line break format of your files. No, Notepad doesn't, but most editors I've seen do. UltraEdit and EditPadPro are the ones I know, and I can recommend both. I'm pretty sure that IDEs like PyDev/Eclipse will handle that too, but I haven't tried.\n",
"Make a Python script which replaces '\\r\\n' with '\\n' \nAn run it every time you give the code to your boss.\n:)\n",
"I've had the same problem. Yes, doing a simple find-and-replace should fix this problem. However you'll have to do it every time you share code; you might want to think about automating it somehow. I never did do that myself.\n",
"This sed one-liner will convert the Python script to have the desired line endings:\nsed 's/\\r\\n/\\n/g' <windows_script >unix_script\n\n",
"newlines (\\n), as opposed to carriage-return/linefeeds (\\n\\r), are pretty much the canonical line ending everywhere, except for dedicated Windows-only enclaves.\nYou shouldn't really be repeatedly manually converting line endings. That's not really a \"Good Thing\" Ⓡ.\nTo support this, git has the core.autocrlf option. When enabled on Windows systems, files with standard newline endings will be converted to CRLF when checked out, and converted back on the way back in.\nThis seems to be a seamless way to cope with the problem of Windows line-endings when running GUI editors and IDEs on Windows. Your editor is probably expecting CRLF line endings on Windows and will get them when using git with this option, which I believe is set to true by default on Windows installs.\nIf you're not using Git, maybe it's time you did? Even when working alone it has lots of value. You can version your own code for backing up and providing alternates.\nSome users will be using GitBash / Cygwin / Ming tools on Windows, possibly with GitBash supplied vim, and will not appreciate CRLF line-endings. Those users can turn the Git core.autocrlf setting off (false) before cloning a repo to avoid inappropriate \"corrections\", and thereby see proper newline-only line endings when editing files, even on Windows. This may also help when using other Linux tools when running on Windows that also expect newline line endings.\nGit has three levels of settings: \"system\" (probably set in C:\\Program Files\\...), \"global\" (set in your home directory Git config file and affecting all your repos) and \"local\" set in each repo's .git/config file. Latter levels override the former levels, which can be helpful if your organisation has locked down your C drive.\nHere are some of the Git commands to query and update your settings (and you can just edit the configuration files themselves as well):\n\ngit config --list --show-origin: Lists all your settings and the file locations where they are set. A setting may be repeated so the lower level will override the higher.\ngit config --get core.autocrlf: Get the effective setting from the combination of settings you currently have in place.\ngit config --system core.autocrlf false: Switch off automatic conversion at the system level, for all repos.\ngit config --global core.autocrlf false: Switch off automatic conversion in all repos from your home directory config file.\n\nRemember, set core.autocrlf to what you need before you clone. Reference here: https://docs.github.com/en/get-started/getting-started-with-git/configuring-git-to-handle-line-endings\n"
] | [
1,
1,
0,
0,
0
] | [] | [] | [
"line_breaks",
"python",
"python_2.x"
] | stackoverflow_0006907245_line_breaks_python_python_2.x.txt |
Q:
UserWarning: positional arguments and argument "destination" are deprecated - Pytorch nn.modules.module.state_dict()
I am trying to manage the checkpoints of my Pytorch model through torch.save():
Pytorch 1.12.0 and Python 3.7
torch.save({
            'epoch': epoch,
            'model_state_dict': model.state_dict(),
            'optimizer_state_dict': optimizer.state_dict()
            }, full_path)
But I am getting the following warning for model.state_dict():
/home/francesco/anaconda3/envs/env/lib/python3.7/site-packages/torch/nn/modules/module.py:1384: UserWarning: positional arguments and argument "destination" are deprecated. nn.Module.state_dict will not accept them in the future. Refer to https://pytorch.org/docs/master/generated/torch.nn.Module.html#torch.nn.Module.state_dict for details.
I had a look at the implementation of state_dict() here but I still don't get why I am getting the error since len(args) should be 0:
def state_dict(self, *args, destination=None, prefix='', keep_vars=False):
    warn_msg = []
    if len(args) > 0:
        warn_msg.append('positional arguments')
        if destination is None:
            destination = args[0]
        if len(args) > 1 and prefix == '':
            prefix = args[1]
        if len(args) > 2 and keep_vars is False:
            keep_vars = args[2]

    if destination is not None:
        warn_msg.append('argument "destination"')
    else:
        destination = OrderedDict()
        destination._metadata = OrderedDict()

    if warn_msg:
        # DeprecationWarning is ignored by default
        warnings.warn(
            " and ".join(warn_msg) + " are deprecated. nn.Module.state_dict will not accept them in the future. "
            "Refer to https://pytorch.org/docs/master/generated/torch.nn.Module.html#torch.nn.Module.state_dict for details.")

    return self._state_dict_impl(destination, prefix, keep_vars)
For the sake of completeness, here's the model:
import torch
import torch.nn as nn
import torch.nn.functional as F
class CNN(nn.Module):
    def __init__(self):
        super(CNN, self).__init__()
        self.conv1 = nn.Conv3d(in_channels=1, out_channels=32, kernel_size=3, stride=1, padding=1)
        self.pool1 = nn.MaxPool3d(kernel_size=2)
        self.conv2 = nn.Conv3d(in_channels=32, out_channels=64, kernel_size=3, stride=1, padding=1)
        self.pool2 = nn.MaxPool3d(kernel_size=2)
        self.dropout = nn.Dropout(0.5)
        self.fc1 = nn.Linear(16 * 16 * 16 * 64, 2)
        self.sig1 = nn.Sigmoid()

    def forward(self, x):
        x = F.relu(self.pool1(self.conv1(x)))
        x = F.relu(self.pool2(self.conv2(x)))
        x = x.view(-1, 16 * 16 * 16 * 64)
        x = self.dropout(x)
        x = self.sig1(self.fc1(x))
        return x
Does anyone know what I am missing? Thank you!
A:
I have the same error; as a result it is not logging dict training logs. I'm training using PyTorch Lightning in DDP. It works on a single GPU but gives this warning on a multi-GPU system with DDP.
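A note from reading the source quoted in the question (my own observation, not from this thread): the warning only fires when state_dict receives positional arguments or an explicit destination, so if it appears for a bare model.state_dict() call, something else in the stack (a wrapper or framework hook) is making the positional call. A tiny check against the implementation quoted in the question (PyTorch 1.12):
# Hypothetical check, separate from the question's code.
import warnings
from collections import OrderedDict
import torch.nn as nn

m = nn.Linear(2, 2)
with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter("always")
    m.state_dict()               # keyword-only call: no warning expected
    m.state_dict(OrderedDict())  # positional destination: triggers the deprecation warning
print([str(w.message) for w in caught])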
| UserWarning: positional arguments and argument "destination" are deprecated - Pytorch nn.modules.module.state_dict() | I am trying to manage the checkpoints of my Pytorch model through torch.save():
Pytorch 1.12.0 and Python 3.7
torch.save({
'epoch': epoch,
'model_state_dict': model.state_dict(),
'optimizer_state_dict': optimizer.state_dict()
}, full_path)
But I am getting the following warning for model.state_dict():
/home/francesco/anaconda3/envs/env/lib/python3.7/site-packages/torch/nn/modules/module.py:1384: UserWarning: positional arguments and argument "destination" are deprecated. nn.Module.state_dict will not accept them in the future. Refer to https://pytorch.org/docs/master/generated/torch.nn.Module.html#torch.nn.Module.state_dict for details.
I had a look at the implementation of state_dict() here but I still don't get why I am getting the error since len(args) should be 0:
def state_dict(self, *args, destination=None, prefix='', keep_vars=False):
warn_msg = []
if len(args) > 0:
warn_msg.append('positional arguments')
if destination is None:
destination = args[0]
if len(args) > 1 and prefix == '':
prefix = args[1]
if len(args) > 2 and keep_vars is False:
keep_vars = args[2]
if destination is not None:
warn_msg.append('argument "destination"')
else:
destination = OrderedDict()
destination._metadata = OrderedDict()
if warn_msg:
# DeprecationWarning is ignored by default
warnings.warn(
" and ".join(warn_msg) + " are deprecated. nn.Module.state_dict will not accept them in the future. "
"Refer to https://pytorch.org/docs/master/generated/torch.nn.Module.html#torch.nn.Module.state_dict for details.")
return self._state_dict_impl(destination, prefix, keep_vars)
For the sake of completeness, here's the model:
import torch
import torch.nn as nn
import torch.nn.functional as F
class CNN(nn.Module):
def __init__(self):
super(CNN, self).__init__()
self.conv1 = nn.Conv3d(in_channels=1, out_channels=32, kernel_size=3, stride=1, padding=1)
self.pool1 = nn.MaxPool3d(kernel_size=2)
self.conv2 = nn.Conv3d(in_channels=32, out_channels=64, kernel_size=3, stride=1, padding=1)
self.pool2 = nn.MaxPool3d(kernel_size=2)
self.dropout = nn.Dropout(0.5)
self.fc1 = nn.Linear(16 * 16 * 16 * 64, 2)
self.sig1 = nn.Sigmoid()
def forward(self, x):
x = F.relu(self.pool1(self.conv1(x)))
x = F.relu(self.pool2(self.conv2(x)))
x = x.view(-1, 16 * 16 * 16 * 64)
x = self.dropout(x)
x = self.sig1(self.fc1(x))
return x
Anyone knows what I am missing? Thank you!
| [
"I have the same error, as a result it is not logging dict training logs. I'm training using PyTorch Lightning in DDP. I works on single GPU but gives this warming on multi-gpu system with DDP.\n"
] | [
0
] | [] | [] | [
"python",
"pytorch",
"state_dict",
"warnings"
] | stackoverflow_0071600617_python_pytorch_state_dict_warnings.txt |
Q:
Discord Bot, how do I take this code and try to make it work in videos links as well?
My Issue
The code at the bottom works perfectly but it only works with channels. I understand why it's not working but I can't find the right values for the JSON decoding.
The code directly below this paragraph is the code that needs changing a bit I think.
# Finds channel information #
channel_id = json_data["header"]["c4TabbedHeaderRenderer"]["channelId"]
channel_name = json_data["header"]["c4TabbedHeaderRenderer"]["title"]
channel_logo = json_data["header"]["c4TabbedHeaderRenderer"]["avatar"]["thumbnails"][2]["url"]
channel_id_link = "https://www.youtube.com/channel/"+channel_id
# Prints Channel information to console #
What I've done
However I have tried to follow the JSON formatting of a video link and filled in the headers with ["subscribeButton"], ["subscribeButtonRenderer"] and ["channelId"] for example but it still comes back with this error.
Error
File "c:\Users\tom87\test\youtube video.py", line 63, in on_message
channel_id = json_data["subscribeButton"]["subscribeButtonRenderer"]["channelId"]
~~~~~~~~~^^^^^^^^^^^^^^^^^^^
Does anyone have any idea what entries I would use to have this work? Please keep in mind this is my first ever script, so the code is terrible. Thank you for your help.
JSON File
Here is the json code that I'm using to find what I need. https://pastebin.com/XmSy4SP3
Code
import re
from re import search
from urllib.request import urlopen
from bs4 import BeautifulSoup
import requests
import re
import json
import base64
import os
import webbrowser
import pyperclip
import win32com.client as comclt
import time
import pyautogui
from configparser import ConfigParser
import discord
intents = discord.Intents.all()
client = discord.Client(command_prefix='/', intents=intents)
# Creates or checks for config
if os.path.exists(os.getcwd() + "/config.json"):
with open("./config.json") as f:
configData = json.load(f)
else:
print("Please enter your token and the channel ID of the Discord channel you'd like to use.")
print("If left blank, you'll need to go to the config.json to set them.")
token = str(input("Bot Token: ") or "token goes here...")
discordChannel = str(input("Channel ID: ") or "000000000000000000")
configTemplate = {"Token": (token), "Prefix": "!","discordChannel": (discordChannel)}
print("The script will now crash and show an error. Run 'python QualityYouTube.py' again.")
with open(os.getcwd() + "/config.json", "w+") as f:
json.dump(configTemplate, f)
token = configData["Token"]
prefix = configData["Prefix"]
discordChannel = configData["discordChannel"]
# Boots up the bot
@client.event
async def on_ready():
print('We have logged in as {0.user}'.format(client))
# Bot is checking messages
@client.event
async def on_message(message):
if message.author == client.user:
return
if re.search("http", message.content):
print (message.content)
channelURL = message.content
print(channelURL)
discordChannelInt = int(discordChannel)
if (discordChannelInt == message.channel.id):
if re.search("http", channelURL):
if re.search("://", channelURL):
if re.search("youtu", channelURL):
await message.delete()
soup = BeautifulSoup(requests.get(channelURL, cookies={'CONSENT': 'YES+1'}).text, "html.parser")
data = re.search(r"var ytInitialData = ({.*});", str(soup.prettify())).group(1)
json_data = json.loads(data)
# Finds channel information #
channel_id = json_data["header"]["c4TabbedHeaderRenderer"]["channelId"]
channel_name = json_data["header"]["c4TabbedHeaderRenderer"]["title"]
channel_logo = json_data["header"]["c4TabbedHeaderRenderer"]["avatar"]["thumbnails"][2]["url"]
channel_id_link = "https://www.youtube.com/channel/"+channel_id
# Prints Channel information to console #
print("Channel ID: "+channel_id)
print("Channel Name: "+channel_name)
print("Channel Logo: "+channel_logo)
print("Channel ID: "+channel_id_link)
author = message.author
#Messages
Message_1 = channel_name+" was posted by "+(author.mention)+"(now.shifttime(""))"+""
timeOutMessage10 = " This message will be deleted in 10 seounds."
timeOutMessage60 = " This message will be deleted in 60 seounds."
noURL = " This does not contain a URL."
invalidURL = " This URL is not supported. Please enter a valid URL."
notChannel = """Make sure the channel follows one of the following formats starting with http or https.
\r - http:://youtube.com/user/username
\r - http://youtube.com/channel/username
\r - http://youtube.com/@username\r\r
***We hope to add video support soon***"""
num60 = 60
num10 = 10
await message.channel.send(channel_name+" - "+channel_id_link)
elif message.content.endswith('.com/'):
await message.channel.send(author.mention+notChannel+timeOutMessage60, delete_after=num60)
elif not message.content.includes('channel') or message.content('user') or message.content('@'):
author = message.author
await message.channel.send(author.mention+invalidURL+timeOutMessage60, delete_after=num60)
elif message.content.excludes('.com') or message.content.excludes('wwww') or message.content.excludes(''):
author = message.author
await message.channel.send(author.mention+noURL+timeOutMessage10, delete_after=num10)
else:
print("incorrect channel")
client.run(token)
What I've done
However I have tried to follow the JSON formatting of a video link and filled in the headers with ["subscribeButton"], ["subscribeButtonRenderer"] and ["channelId"] for example but it still comes back with this error.
Error
File "c:\Users\tom87\test\youtube video.py", line 63, in on_message
channel_id = json_data["subscribeButton"]["subscribeButtonRenderer"]["channelId"]
What I was expected
I was expecting it to feed back the channelid to the script.
A:
I resolved the issue by adding pytube support. You can see how I did it here.
https://github.com/flyinggoatman/YouTube-Link-Extractor
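For reference, a minimal sketch of how pytube can pull channel details out of a plain video URL; this is my own illustration using standard pytube attributes, not the code from the linked repository.
# Hypothetical example; the video URL is just a placeholder.
from pytube import YouTube

video_url = "https://www.youtube.com/watch?v=dQw4w9WgXcQ"
yt = YouTube(video_url)

channel_name = yt.author          # uploader / channel name
channel_id = yt.channel_id        # e.g. "UC..."
channel_id_link = yt.channel_url  # https://www.youtube.com/channel/<id>

print(channel_name, channel_id_link)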
| Discord Bot, how do I take this code and try to make it work in videos links as well? | My Issue
The code at the bottom works perfectly but it only works with channels. I understand why it's not working but I can't find the right values for the yson decoding.
The code directly below this paragraph is the code that needs changing a bit I think.
# Finds channel information #
channel_id = json_data["header"]["c4TabbedHeaderRenderer"]["channelId"]
channel_name = json_data["header"]["c4TabbedHeaderRenderer"]["title"]
channel_logo = json_data["header"]["c4TabbedHeaderRenderer"]["avatar"]["thumbnails"][2]["url"]
channel_id_link = "https://www.youtube.com/channel/"+channel_id
# Prints Channel information to console #
What I've done
However I have tried to follow the JSON formatting of a video link and filled in the headers with ["subscribeButton"], ["subscribeButtonRenderer"] and ["channelId"] for example but it still comes back with this error.
Error
File "c:\Users\tom87\test\youtube video.py", line 63, in on_message
channel_id = json_data["subscribeButton"]["subscribeButtonRenderer"]["channelId"]
~~~~~~~~~^^^^^^^^^^^^^^^^^^^
Does anyone have any idea what entries would I use to have this work? Please keep in mind this is my first every script to the code is terrible. Thank you for your help
JSON File
Here is the json code that I'm using to find what I need. https://pastebin.com/XmSy4SP3
Code
import re
from re import search
from urllib.request import urlopen
from bs4 import BeautifulSoup
import requests
import re
import json
import base64
import os
import webbrowser
import pyperclip
import win32com.client as comclt
import time
import pyautogui
from configparser import ConfigParser
import discord
intents = discord.Intents.all()
client = discord.Client(command_prefix='/', intents=intents)
# Creates or checks for config
if os.path.exists(os.getcwd() + "/config.json"):
with open("./config.json") as f:
configData = json.load(f)
else:
print("Please enter your token and the channel ID of the Discord channel you'd like to use.")
print("If left blank, you'll need to go to the config.json to set them.")
token = str(input("Bot Token: ") or "token goes here...")
discordChannel = str(input("Channel ID: ") or "000000000000000000")
configTemplate = {"Token": (token), "Prefix": "!","discordChannel": (discordChannel)}
print("The script will now crash and show an error. Run 'python QualityYouTube.py' again.")
with open(os.getcwd() + "/config.json", "w+") as f:
json.dump(configTemplate, f)
token = configData["Token"]
prefix = configData["Prefix"]
discordChannel = configData["discordChannel"]
# Boots up the bot
@client.event
async def on_ready():
print('We have logged in as {0.user}'.format(client))
# Bot is checking messages
@client.event
async def on_message(message):
if message.author == client.user:
return
if re.search("http", message.content):
print (message.content)
channelURL = message.content
print(channelURL)
discordChannelInt = int(discordChannel)
if (discordChannelInt == message.channel.id):
if re.search("http", channelURL):
if re.search("://", channelURL):
if re.search("youtu", channelURL):
await message.delete()
soup = BeautifulSoup(requests.get(channelURL, cookies={'CONSENT': 'YES+1'}).text, "html.parser")
data = re.search(r"var ytInitialData = ({.*});", str(soup.prettify())).group(1)
json_data = json.loads(data)
# Finds channel information #
channel_id = json_data["header"]["c4TabbedHeaderRenderer"]["channelId"]
channel_name = json_data["header"]["c4TabbedHeaderRenderer"]["title"]
channel_logo = json_data["header"]["c4TabbedHeaderRenderer"]["avatar"]["thumbnails"][2]["url"]
channel_id_link = "https://www.youtube.com/channel/"+channel_id
# Prints Channel information to console #
print("Channel ID: "+channel_id)
print("Channel Name: "+channel_name)
print("Channel Logo: "+channel_logo)
print("Channel ID: "+channel_id_link)
author = message.author
#Messages
Message_1 = channel_name+" was posted by "+(author.mention)+"(now.shifttime(""))"+""
timeOutMessage10 = " This message will be deleted in 10 seounds."
timeOutMessage60 = " This message will be deleted in 60 seounds."
noURL = " This does not contain a URL."
invalidURL = " This URL is not supported. Please enter a valid URL."
notChannel = """Make sure the channel follows one of the following formats starting with http or https.
\r - http:://youtube.com/user/username
\r - http://youtube.com/channel/username
\r - http://youtube.com/@username\r\r
***We hope to add video support soon***"""
num60 = 60
num10 = 10
await message.channel.send(channel_name+" - "+channel_id_link)
elif message.content.endswith('.com/'):
await message.channel.send(author.mention+notChannel+timeOutMessage60, delete_after=num60)
elif not message.content.includes('channel') or message.content('user') or message.content('@'):
author = message.author
await message.channel.send(author.mention+invalidURL+timeOutMessage60, delete_after=num60)
elif message.content.excludes('.com') or message.content.excludes('wwww') or message.content.excludes(''):
author = message.author
await message.channel.send(author.mention+noURL+timeOutMessage10, delete_after=num10)
else:
print("incorrect channel")
client.run(token)
What I've done
However I have tried to follow the JSON formatting of a video link and filled in the headers with ["subscribeButton"], ["subscribeButtonRenderer"] and ["channelId"] for example but it still comes back with this error.
Error
File "c:\Users\tom87\test\youtube video.py", line 63, in on_message
channel_id = json_data["subscribeButton"]["subscribeButtonRenderer"]["channelId"]
What I was expected
I was expecting it to feed back the channelid to the script.
| [
"I resolved the issue by adding pytube support. You can see how I did it here.\nhttps://github.com/flyinggoatman/YouTube-Link-Extractor\n"
] | [
0
] | [] | [] | [
"beautifulsoup",
"discord",
"discord.py",
"python"
] | stackoverflow_0074646591_beautifulsoup_discord_discord.py_python.txt |
Q:
Stopping messagebox from looping the entire textfile after it found a value
I created a register and login app. I made a messagebox that pops up when it finds a value in the text file and also when it doesn't find the value. However, it keeps looping over my entire text file, so it loops until it finds the value. How do I prevent it from looping? I tried break, but it made it stop at the first row of the text file. Please ignore the register button; just the login function matters at the moment.
the textfiles(users) for login info
from tkinter import *
import time
import tkinter.messagebox as tkMessageBox
##3
#### Variable
##3
window = Tk()
window.geometry("600x400")
window.title("Bookloaner")
stud = open("users.txt","r")
logdin = False
username = "admin"
password = "admin123"
stud = open("users.txt","r")
books=[]
books = open("books.txt","r")
##3
#### Define
##3
def closer():
frame.pack_forget()
logframe.pack_forget()
regframe.pack_forget()
extframe.pack_forget()
time.sleep(0.1)
####2
# FRAMES
####2
logframe=Frame(window)
regframe=Frame(window)
extframe=Frame(window)
def Login():
closer()
def Chek():
for line in open("users.txt", "r").readlines():
loginn_info= line.split()
if name.get() == loginn_info[1] and passwd.get() == loginn_info[2]:
tkMessageBox.askokcancel("System","logged",)
else:
tkMessageBox.askokcancel("System","Error",)
frame.pack_forget()
conf = StringVar()
mesg=Entry(logframe, width=30,textvariable=conf,fg="#900",state="readonly",relief=FLAT)
mesg.grid(row=0,column=2)
labname=Label(logframe,text="What is your name?")
labname.grid(row=1, column=1, sticky=E)
name=StringVar()
entname=Entry(logframe, textvariable=name)
entname.grid(row=1, column=2, sticky=W)
labpass=Label(logframe,text="What is your password?")
labpass.grid(row=2, column=1, sticky=E)
passwd=StringVar()
entpass=Entry(logframe, textvariable=passwd)
entpass.grid(row=2, column=2, sticky=W)
checkbtn = Button(logframe, text="Login", fg="#fff", bg="#00f", command=Chek)
checkbtn.grid(row=3,column=2, sticky=E)
logframe.pack()
def Register():
def Chek():
for i in stud.readlines():
i.rstrip("\n")
i = i.split(":")
print(i)
if username in i[0]:
if password in i[1]:
print("logged in!")
break
else: print("wrong username or password")
else: print("User doesn't exist.")
frame.pack_forget()
logframe=Frame(window)
labname=Label(logframe,text="What is your name?")
labname.grid(row=1, column=1, sticky=E)
entname=Entry(logframe)
entname.grid(row=1, column=2, sticky=W)
labpass=Label(logframe,text="What is your password?")
labpass.grid(row=2, column=1, sticky=E)
entpass=Entry(logframe)
entpass.grid(row=2, column=2, sticky=W)
checkbtn = Button(logframe, text="Register", fg="#fff", bg="#090", command=Chek)
checkbtn.grid(row=3,column=2, sticky=E)
logframe.pack()
def Exit():
window.quit()
exit()
def Getbook():
closer()
def Chek():
firstnaem = firstnaem.rstrip("\n")
lastnaem = books.readline().rstrip("\n")
books += [[firstnaem,lastnaem]]
for i in books.readline():
print(i[0])
extframe = Frame(window, height=400, width=600)
bname=StringVar()
leftframe=LabelFrame(extframe, height=400, width=300)
Label(leftframe, text="Get",font="25").place(y=0,x=0)
Label(leftframe, text="Book ID").place(y=30,x=10)
nr=Entry(leftframe, width=45).place(y=50,x=10)
Label(leftframe, text="Book Name").place(y=70,x=10)
Entry(leftframe, textvariable=bname, width=45,state="readonly",).place(y=90,x=10)
Button(leftframe, text="Get", width=38, height=10, bg="yellow", command=Chek).place(y=121,x=9)
leftframe.place(y=0,x=1)
rightframe=LabelFrame(extframe, height=400, width=300)
Label(rightframe, text="Give",font="25").place(y=0,x=0)
rightframe.place(y=0,x=300)
extframe.place(y=0,x=0)
##3
#### Start
##3
frame = Frame(window, height=400, width=600)
welcom=Label(frame,text="Welcome to Book Extange v1",font="50")
welcom.grid(row=0,column=2)
button1 = Button(frame,text="Login",font="25",width="10",height="3",bg="#00f",fg="white",command=Login)
button1.grid(row=1,column=2, sticky=W)
button2 = Button(frame,text="Register",font="25",width="10",height="3",bg="#090",fg="white",command=Register)
button2.grid(row=1,column=2, sticky=E)
frame.pack()
closebtn = Button(window, text="Close", bg="red", command=Exit)
closebtn.place(x=560)
##3
#### DROP DOWN
##3
menubar = Menu(window)
navmenu= Menu(menubar, tearoff=0)
navmenu.add_command(label="Exit",command=Exit)
navmenu.add_command(label="Home")
navmenu.add_command(label="Login", command=Login)
navmenu.add_command(label="Register", command=Register)
navmenu.add_separator()
menubar.add_cascade(label="Menu", menu=navmenu)
navmenu.add_command(label="Extange",command=Getbook)
window.config(menu=menubar)
window.mainloop()
A:
You need to add break after the line tkMessageBox.askokcancel("System","logged",). Also, the else block should be at the same indentation as the for line.
Below is the modified Chek():
def Chek():
for line in open("users.txt", "r").readlines():
loginn_info = line.split()
if name.get() == loginn_info[1] and passwd.get() == loginn_info[2]:
tkMessageBox.askokcancel("System", "logged")
break # break out the loop
else:
# credential not found
tkMessageBox.askokcancel("System", "Error")
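As a side note (my own addition, not part of the answer): with the else aligned to the for, Python runs the else branch only when the loop finishes without hitting break, which is exactly the "no matching credentials" case. A tiny standalone illustration:
# Hypothetical illustration of for/else, separate from the login code above.
credentials = [("alice", "pw1"), ("bob", "pw2")]

for user, pw in credentials:
    if user == "bob" and pw == "pw2":
        print("logged in")   # match found: break skips the else block
        break
else:
    print("wrong username or password")  # runs only if the loop never hit break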
| Stopping messagebox from looping the entire textfile after it found a value | i created register and login app. I made a messagebox that pops up when it found a value in the textfiles and also when it doesnt find the value. However it keeps looping my entire textfiles so it loops untill it find the value. How do i prevent it form looping? I tried break but it made it stop at 1st row of textfiles. Please ignore the register button, just the login function at the moment.
the textfiles(users) for login info
from tkinter import *
import time
import tkinter.messagebox as tkMessageBox
##3
#### Variable
##3
window = Tk()
window.geometry("600x400")
window.title("Bookloaner")
stud = open("users.txt","r")
logdin = False
username = "admin"
password = "admin123"
stud = open("users.txt","r")
books=[]
books = open("books.txt","r")
##3
#### Define
##3
def closer():
frame.pack_forget()
logframe.pack_forget()
regframe.pack_forget()
extframe.pack_forget()
time.sleep(0.1)
####2
# FRAMES
####2
logframe=Frame(window)
regframe=Frame(window)
extframe=Frame(window)
def Login():
closer()
def Chek():
for line in open("users.txt", "r").readlines():
loginn_info= line.split()
if name.get() == loginn_info[1] and passwd.get() == loginn_info[2]:
tkMessageBox.askokcancel("System","logged",)
else:
tkMessageBox.askokcancel("System","Error",)
frame.pack_forget()
conf = StringVar()
mesg=Entry(logframe, width=30,textvariable=conf,fg="#900",state="readonly",relief=FLAT)
mesg.grid(row=0,column=2)
labname=Label(logframe,text="What is your name?")
labname.grid(row=1, column=1, sticky=E)
name=StringVar()
entname=Entry(logframe, textvariable=name)
entname.grid(row=1, column=2, sticky=W)
labpass=Label(logframe,text="What is your password?")
labpass.grid(row=2, column=1, sticky=E)
passwd=StringVar()
entpass=Entry(logframe, textvariable=passwd)
entpass.grid(row=2, column=2, sticky=W)
checkbtn = Button(logframe, text="Login", fg="#fff", bg="#00f", command=Chek)
checkbtn.grid(row=3,column=2, sticky=E)
logframe.pack()
def Register():
def Chek():
for i in stud.readlines():
i.rstrip("\n")
i = i.split(":")
print(i)
if username in i[0]:
if password in i[1]:
print("logged in!")
break
else: print("wrong username or password")
else: print("User doesn't exist.")
frame.pack_forget()
logframe=Frame(window)
labname=Label(logframe,text="What is your name?")
labname.grid(row=1, column=1, sticky=E)
entname=Entry(logframe)
entname.grid(row=1, column=2, sticky=W)
labpass=Label(logframe,text="What is your password?")
labpass.grid(row=2, column=1, sticky=E)
entpass=Entry(logframe)
entpass.grid(row=2, column=2, sticky=W)
checkbtn = Button(logframe, text="Register", fg="#fff", bg="#090", command=Chek)
checkbtn.grid(row=3,column=2, sticky=E)
logframe.pack()
def Exit():
window.quit()
exit()
def Getbook():
closer()
def Chek():
firstnaem = firstnaem.rstrip("\n")
lastnaem = books.readline().rstrip("\n")
books += [[firstnaem,lastnaem]]
for i in books.readline():
print(i[0])
extframe = Frame(window, height=400, width=600)
bname=StringVar()
leftframe=LabelFrame(extframe, height=400, width=300)
Label(leftframe, text="Get",font="25").place(y=0,x=0)
Label(leftframe, text="Book ID").place(y=30,x=10)
nr=Entry(leftframe, width=45).place(y=50,x=10)
Label(leftframe, text="Book Name").place(y=70,x=10)
Entry(leftframe, textvariable=bname, width=45,state="readonly",).place(y=90,x=10)
Button(leftframe, text="Get", width=38, height=10, bg="yellow", command=Chek).place(y=121,x=9)
leftframe.place(y=0,x=1)
rightframe=LabelFrame(extframe, height=400, width=300)
Label(rightframe, text="Give",font="25").place(y=0,x=0)
rightframe.place(y=0,x=300)
extframe.place(y=0,x=0)
##3
#### Start
##3
frame = Frame(window, height=400, width=600)
welcom=Label(frame,text="Welcome to Book Extange v1",font="50")
welcom.grid(row=0,column=2)
button1 = Button(frame,text="Login",font="25",width="10",height="3",bg="#00f",fg="white",command=Login)
button1.grid(row=1,column=2, sticky=W)
button2 = Button(frame,text="Register",font="25",width="10",height="3",bg="#090",fg="white",command=Register)
button2.grid(row=1,column=2, sticky=E)
frame.pack()
closebtn = Button(window, text="Close", bg="red", command=Exit)
closebtn.place(x=560)
##3
#### DROP DOWN
##3
menubar = Menu(window)
navmenu= Menu(menubar, tearoff=0)
navmenu.add_command(label="Exit",command=Exit)
navmenu.add_command(label="Home")
navmenu.add_command(label="Login", command=Login)
navmenu.add_command(label="Register", command=Register)
navmenu.add_separator()
menubar.add_cascade(label="Menu", menu=navmenu)
navmenu.add_command(label="Extange",command=Getbook)
window.config(menu=menubar)
window.mainloop()
| [
"You need to add break after the line tkMessageBox.askokcancel(\"System\",\"logged\",). Also the else block should be in same indentation of the for line.\nBelow is the modified Chek():\n def Chek():\n for line in open(\"users.txt\", \"r\").readlines():\n loginn_info = line.split()\n if name.get() == loginn_info[1] and passwd.get() == loginn_info[2]:\n tkMessageBox.askokcancel(\"System\", \"logged\")\n break # break out the loop \n else:\n # credential not found\n tkMessageBox.askokcancel(\"System\", \"Error\")\n\n"
] | [
0
] | [] | [] | [
"loops",
"messagebox",
"python",
"tkinter"
] | stackoverflow_0074650430_loops_messagebox_python_tkinter.txt |
Q:
Flutter WEB sending file to Django Rest Framework Backend
So in my Flutter front end (web) I am using the image_picker and then image_cropper packages to obtain a file.
I know Flutter web doesn't support dart:io, so instead you have to send your image in a multipart request with fromBytes. Normally for an iOS/Android Flutter app you can use fromFile.
Now I send that to my backend as bytes. However, my Django REST framework view isn't able to save the image to my model.
Here is the code, step by step:
final imagetoSendToAPIasbytes = await cropImageFile.readAsBytes();
List<int> imageaslistint = imagetoSendToAPIasbytes.cast();
final response = await uploadImage(imageaslist, profileid);
uploadimage function:
var profilepic = await http.MultipartFile.fromBytes(
"profilepic", imageaslistint);
request.files.add(profilepic);
http.StreamedResponse response = await request.send();
var responseByteArray = await response.stream.toBytes();
Of course this is not the full code, but I am able to send it to the back end. My Django backend view to handle it:
@api_view(['PATCH', ])
@throttle_classes([updateprofileprofilepicThrottle])
@parser_classes((MultiPartParser, FormParser, JSONParser))
def updateprofileprofilepic(request,):
try:
user = request.user
except User.DoesNotExist:
return Response(status=status.HTTP_404_NOT_FOUND)
try:
if request.method == "PATCH":
profileobjid= json.loads(request.data['profileid'])
profileobj = Profile.objects.get(creator = user, id = profileobjid)
profileobj.profilepic.delete(False)
print(request.FILES['profilepic'])
print(json.loads(request.data['profilepic']))
profileobj.profilepic= json.loads(request.data['profilepic'])
profileobj.save()
Usually request.FILES['profilepic'] allows me to save a file (from iOS/Android),
however that's when it is sent as a multipart request fromPath.
So now with json.loads(request.data['profilepic'])
I'm able to get the bytes? But how do I handle saving that into an image on my model? Thank you, any help would be appreciated.
<QueryDict: {'profileid': ['"xxxx-xxxx-xxxxxxxe-xxxxxxx"'], 'profilepic': ['�PNG\r\n\x1a\n\x00\x00\x00\rIHDR\x00\x00\x01........
do I have to convert the bytesarray back into an image in the backend?
A:
So I figured it out.
When making that call from Flutter web you MUST include the filename:
var profilepic = await http.MultipartFile.fromBytes(
    "profilepic", file, filename: 'hello.png');
The filename is needed for Django REST framework and the MultiPartParser to be able to save it as an image.
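On the Django side, once the part arrives with a filename it shows up in request.FILES, so the view can assign it directly to the ImageField. A minimal sketch, reusing the model and field names from the question (Profile, profilepic, profileid); everything else is assumed:
# Hypothetical simplified view; not the asker's exact code.
import json

from rest_framework import status
from rest_framework.decorators import api_view, parser_classes
from rest_framework.parsers import FormParser, MultiPartParser
from rest_framework.response import Response

from .models import Profile  # assumed import path

@api_view(['PATCH'])
@parser_classes((MultiPartParser, FormParser))
def updateprofileprofilepic(request):
    profile_id = json.loads(request.data['profileid'])
    profileobj = Profile.objects.get(creator=request.user, id=profile_id)
    profileobj.profilepic.delete(False)                   # drop the old picture
    profileobj.profilepic = request.FILES['profilepic']   # works once a filename is sent
    profileobj.save()
    return Response(status=status.HTTP_200_OK)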
| Flutter WEB sending file to Django Rest Framework Backend | So in my flutter front end (WEB). I am using Image_picker and then image_cropper packages to obtain a file.
I know flutter web doesn't support Dart/io so instead you have to send your image in a mulitpart request FromBYtes. Normally for an ios/android flutter app you can use fromFile.
Now I send that to my backend as bytes. however, my django rest framework view isnt able to save the image to my model.
here is the code and step by step:
final imagetoSendToAPIasbytes = await cropImageFile.readAsBytes();
List<int> imageaslistint = imagetoSendToAPIasbytes.cast();
final response = await uploadImage(imageaslist, profileid);
uploadimage function:
var profilepic = await http.MultipartFile.fromBytes(
"profilepic", imageaslistint);
request.files.add(profilepic);
http.StreamedResponse response = await request.send();
var responseByteArray = await response.stream.toBytes();
ofcourse this is not the full code. But I am able to send it to the back end. my django backend view to handle:
@api_view(['PATCH', ])
@throttle_classes([updateprofileprofilepicThrottle])
@parser_classes((MultiPartParser, FormParser, JSONParser))
def updateprofileprofilepic(request,):
try:
user = request.user
except User.DoesNotExist:
return Response(status=status.HTTP_404_NOT_FOUND)
try:
if request.method == "PATCH":
profileobjid= json.loads(request.data['profileid'])
profileobj = Profile.objects.get(creator = user, id = profileobjid)
profileobj.profilepic.delete(False)
print(request.FILES['profilepic'])
print(json.loads(request.data['profilepic']))
profileobj.profilepic= json.loads(request.data['profilepic'])
profileobj.save()
usualy (request.FILES['profilepic']) allows me to save a file (from ios/android)
however thats when its sending as a mulitipart request.fromPATH.
So now (json.loads(request.data['profilepic']))
im able to get the bytes? but How do I handle saving that into an IMAGE on my model. Thank you, any help would be appreciated
<QueryDict: {'profileid': ['"xxxx-xxxx-xxxxxxxe-xxxxxxx"'], 'profilepic': ['�PNG\r\n\x1a\n\x00\x00\x00\rIHDR\x00\x00\x01........
do I have to convert the bytesarray back into an image in the backend?
| [
"so i figured it out.\nwhen making that call from Flutter WEB YOU MUST include:\nvar profilepic = await http.MultipartFile.fromBytes(\n\"profilepic\", file, filename: 'hello.png');\nThe filename. this is needed for django restframework and multiformparser to be able to save it as image.\n"
] | [
1
] | [] | [] | [
"django",
"django_rest_framework",
"flutter",
"multipartform_data",
"python"
] | stackoverflow_0074649077_django_django_rest_framework_flutter_multipartform_data_python.txt |
Q:
Language Translator Using Google API in Python
I have used this code from geeksforgeeks (https://www.geeksforgeeks.org/language-translator-using-google-api-in-python/), I am trying to run it and it runs without any error, and it prints out:
Speak 'hello' to initiate the Translation !
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
but when i say "hello" it does not recognize it and do not start listening for translation.
I have imported all the modules, tried updating every one of them, and also Im using a macbook m1 pro.
And heres the code:
import speech_recognition as spr
from googletrans import Translator
from gtts import gTTS
import os
# Creating Recogniser() class object
recog1 = spr.Recognizer()
# Creating microphone instance
mc = spr.Microphone()
# Capture Voice
with mc as source:
print("Speak 'hello' to initiate the Translation !")
print("~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~")
recog1.adjust_for_ambient_noise(source, duration=0.2)
audio = recog1.listen(source)
MyText = recog1.recognize_google(audio)
MyText = MyText.lower()
# Here initialising the recorder with
# hello, whatever after that hello it
# will recognise it.
if 'hello' in MyText:
# Translator method for translation
translator = Translator()
# short form of english in which
# you will speak
from_lang = 'en'
# In which we want to convert, short
# form of hindi
to_lang = 'hi'
with mc as source:
print("Speak a stentence...")
recog1.adjust_for_ambient_noise(source, duration=0.2)
# Storing the speech into audio variable
audio = recog1.listen(source)
# Using recognize.google() method to
# convert audio into text
get_sentence = recog1.recognize_google(audio)
# Using try and except block to improve
# its efficiency.
try:
# Printing Speech which need to
# be translated.
print("Phase to be Translated :"+ get_sentence)
# Using translate() method which requires
# three arguments, 1st the sentence which
# needs to be translated 2nd source language
# and 3rd to which we need to translate in
text_to_translate = translator.translate(get_sentence,
src= from_lang,
dest= to_lang)
# Storing the translated text in text
# variable
text = text_to_translate.text
# Using Google-Text-to-Speech ie, gTTS() method
# to speak the translated text into the
# destination language which is stored in to_lang.
# Also, we have given 3rd argument as False because
# by default it speaks very slowly
speak = gTTS(text=text, lang=to_lang, slow= False)
# Using save() method to save the translated
# speech in capture_voice.mp3
speak.save("captured_voice.mp3")
# Using OS module to run the translated voice.
os.system("start captured_voice.mp3")
# Here we are using except block for UnknownValue
# and Request Error and printing the same to
# provide better service to the user.
except spr.UnknownValueError:
print("Unable to Understand the Input")
except spr.RequestError as e:
print("Unable to provide Required Output".format(e))
A:
from gtts import gTTS
from io import BytesIO
from pygame import mixer
import time

def speak():
    mp3_fp = BytesIO()
    tts = gTTS('KGF is a Great movie to watch', lang='en')
    tts.write_to_fp(mp3_fp)
    tts.save("Audio.mp3")
    return mp3_fp

mixer.init()
sound = speak()
sound.seek(0)
mixer.music.load(sound, "mp3")
mixer.music.play()
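The answer above covers playback, but the original failure (the script never hearing "hello") is often a microphone-selection or noise-threshold problem rather than a translation problem. A hedged debugging sketch using speech_recognition's own helpers:
# Hypothetical debugging snippet, not part of the answer above.
import speech_recognition as sr

print(sr.Microphone.list_microphone_names())   # pick the right input device index

r = sr.Recognizer()
with sr.Microphone() as source:                # or sr.Microphone(device_index=...)
    r.adjust_for_ambient_noise(source, duration=1)
    print("Energy threshold:", r.energy_threshold)  # very high values make speech get ignored
    audio = r.listen(source, timeout=5, phrase_time_limit=5)

try:
    print("Heard:", r.recognize_google(audio))
except sr.UnknownValueError:
    print("Could not understand the audio")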
| Language Translator Using Google API in Python | I have used this code from geeksforgeeks (https://www.geeksforgeeks.org/language-translator-using-google-api-in-python/), I am trying to run it and it runs without any error, and it prints out:
Speak 'hello' to initiate the Translation !
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
but when I say "hello" it does not recognize it and does not start listening for the translation.
I have imported all the modules and tried updating every one of them; I'm also using a MacBook M1 Pro.
And here's the code:
import speech_recognition as spr
from googletrans import Translator
from gtts import gTTS
import os
# Creating Recogniser() class object
recog1 = spr.Recognizer()
# Creating microphone instance
mc = spr.Microphone()
# Capture Voice
with mc as source:
print("Speak 'hello' to initiate the Translation !")
print("~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~")
recog1.adjust_for_ambient_noise(source, duration=0.2)
audio = recog1.listen(source)
MyText = recog1.recognize_google(audio)
MyText = MyText.lower()
# Here initialising the recorder with
# hello, whatever after that hello it
# will recognise it.
if 'hello' in MyText:
# Translator method for translation
translator = Translator()
# short form of english in which
# you will speak
from_lang = 'en'
# In which we want to convert, short
# form of hindi
to_lang = 'hi'
with mc as source:
print("Speak a stentence...")
recog1.adjust_for_ambient_noise(source, duration=0.2)
# Storing the speech into audio variable
audio = recog1.listen(source)
# Using recognize.google() method to
# convert audio into text
get_sentence = recog1.recognize_google(audio)
# Using try and except block to improve
# its efficiency.
try:
# Printing Speech which need to
# be translated.
print("Phase to be Translated :"+ get_sentence)
# Using translate() method which requires
# three arguments, 1st the sentence which
# needs to be translated 2nd source language
# and 3rd to which we need to translate in
text_to_translate = translator.translate(get_sentence,
src= from_lang,
dest= to_lang)
# Storing the translated text in text
# variable
text = text_to_translate.text
# Using Google-Text-to-Speech ie, gTTS() method
# to speak the translated text into the
# destination language which is stored in to_lang.
# Also, we have given 3rd argument as False because
# by default it speaks very slowly
speak = gTTS(text=text, lang=to_lang, slow= False)
# Using save() method to save the translated
# speech in capture_voice.mp3
speak.save("captured_voice.mp3")
# Using OS module to run the translated voice.
os.system("start captured_voice.mp3")
# Here we are using except block for UnknownValue
# and Request Error and printing the same to
# provide better service to the user.
except spr.UnknownValueError:
print("Unable to Understand the Input")
except spr.RequestError as e:
print("Unable to provide Required Output".format(e))
| [
"from gtts import gTTS\nfrom io import BytesIO\nfrom pygame import mixer\nimport time\n\ndef speak():\n mp3_fp = BytesIO()\n tts = gTTS('KGF is a Great movie to watch', lang='en')\n tts.write_to_fp(mp3_fp)\n tts.save(\"Audio.mp3\")\n return mp3_fp\n\nmixer.init()\nsound = speak()\nsound.seek(0)\nmixer.music.load(sound, \"mp3\")\nmixer.music.play()\n\n"
] | [
0
] | [] | [] | [
"google_api",
"language_translation",
"python"
] | stackoverflow_0074143619_google_api_language_translation_python.txt |
Q:
How do I remove an unwanted word on python using Beautifulsoup?
I just started learning Python, so I have very basic knowledge. I am trying to web-scrape a website and tweet the result.
Here's my code.
def scrape ():
page = requests.get("https://www.reuters.com/business/future-of-money/")
soup = BeautifulSoup(page.content, "html.parser")
home = soup.find(class_="editorial-franchise-layout__main__3cLBl")
posts = home.find_all(class_="text__text__1FZLe text__dark-grey__3Ml43 text__inherit-font__1Y8w3 text__inherit-size__1DZJi link__underline_on_hover__2zGL4")
top_post = posts[0].find("h3", class_="text__text__1FZLe text__dark-grey__3Ml43 text__medium__1kbOh text__heading_3__1kDhc heading__base__2T28j heading__heading_3__3aL54 hero-card__title__33EFM").text.strip()
tweet (top_post)
The result :
FTX ex-CEO Bankman-Fried claims he was unaware of improper use of customer funds -ABC News, article with image
I want to get rid of "article with image"
A:
You have to access the span element within h3 to get your desired output
Change
top_post = posts[0].find(
"h3", class_="text__text__1FZLe text__dark-grey__3Ml43 text__medium__1kbOh text__heading_3__1kDhc heading__base__2T28j heading__heading_3__3aL54 hero-card__title__33EFM").text.strip()
to
top_post = posts[0].find(
"h3", class_="text__text__1FZLe text__dark-grey__3Ml43 text__medium__1kbOh text__heading_3__1kDhc heading__base__2T28j heading__heading_3__3aL54 hero-card__title__33EFM").find_all("span")[0].text.strip()
Full Code
import requests
from bs4 import BeautifulSoup
page = requests.get("https://www.reuters.com/business/future-of-money/")
soup = BeautifulSoup(page.content, "html.parser")
home = soup.find(class_="editorial-franchise-layout__main__3cLBl")
posts = home.find_all(class_="text__text__1FZLe text__dark-grey__3Ml43 text__inherit-font__1Y8w3 text__inherit-size__1DZJi link__underline_on_hover__2zGL4")
top_post = posts[0].find(
"h3", class_="text__text__1FZLe text__dark-grey__3Ml43 text__medium__1kbOh text__heading_3__1kDhc heading__base__2T28j heading__heading_3__3aL54 hero-card__title__33EFM").find_all("span")[0].text.strip()
print(top_post)
Output
FTX ex-CEO Bankman-Fried claims he was unaware of improper use of customer funds -ABC News
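As a side note, the same element can be reached a bit more compactly with a CSS selector; this is just an equivalent sketch and assumes the page keeps the same structure:
top_post = posts[0].select_one("h3 span").get_text(strip=True)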
| How do I remove an unwanted word on python using Beautifulsoup? | I just started learning Python, so I have very basic knowledge. I am trying to web-scrape a website and tweet the result.
Here's my code.
def scrape ():
page = requests.get("https://www.reuters.com/business/future-of-money/")
soup = BeautifulSoup(page.content, "html.parser")
home = soup.find(class_="editorial-franchise-layout__main__3cLBl")
posts = home.find_all(class_="text__text__1FZLe text__dark-grey__3Ml43 text__inherit-font__1Y8w3 text__inherit-size__1DZJi link__underline_on_hover__2zGL4")
top_post = posts[0].find("h3", class_="text__text__1FZLe text__dark-grey__3Ml43 text__medium__1kbOh text__heading_3__1kDhc heading__base__2T28j heading__heading_3__3aL54 hero-card__title__33EFM").text.strip()
tweet (top_post)
The result :
FTX ex-CEO Bankman-Fried claims he was unaware of improper use of customer funds -ABC News, article with image
I want to get rid of "article with image"
| [
"You have to access the span element within h3 to get your desired output\nChange\ntop_post = posts[0].find(\n\"h3\", class_=\"text__text__1FZLe text__dark-grey__3Ml43 text__medium__1kbOh text__heading_3__1kDhc heading__base__2T28j heading__heading_3__3aL54 hero-card__title__33EFM\").text.strip()\n\nto\ntop_post = posts[0].find(\n\"h3\", class_=\"text__text__1FZLe text__dark-grey__3Ml43 text__medium__1kbOh text__heading_3__1kDhc heading__base__2T28j heading__heading_3__3aL54 hero-card__title__33EFM\").find_all(\"span\")[0].text.strip()\n\nFull Code\nimport requests\nfrom bs4 import BeautifulSoup\n\npage = requests.get(\"https://www.reuters.com/business/future-of-money/\")\nsoup = BeautifulSoup(page.content, \"html.parser\")\nhome = soup.find(class_=\"editorial-franchise-layout__main__3cLBl\")\nposts = home.find_all(class_=\"text__text__1FZLe text__dark-grey__3Ml43 text__inherit-font__1Y8w3 text__inherit-size__1DZJi link__underline_on_hover__2zGL4\")\ntop_post = posts[0].find(\n\"h3\", class_=\"text__text__1FZLe text__dark-grey__3Ml43 text__medium__1kbOh text__heading_3__1kDhc heading__base__2T28j heading__heading_3__3aL54 hero-card__title__33EFM\").find_all(\"span\")[0].text.strip()\n\nprint(top_post)\n\nOutput\nFTX ex-CEO Bankman-Fried claims he was unaware of improper use of customer funds -ABC News\n\n"
] | [
1
] | [] | [] | [
"beautifulsoup",
"python",
"web_scraping"
] | stackoverflow_0074650519_beautifulsoup_python_web_scraping.txt |
Q:
Fit function in sklearn KNN model does not work: 'n_neighbors does not take value, enter integer value'
I am working on a project where I want to use the KNN model from the sklearn library. I simplified the original problem to the following one. X1, X2 and X3 are the predictors used to assign each row to a category (the Y variable), which is either 1 or 2. I followed an online tutorial and all went fine until I used the fit function. Here is the code:
#Importing necessary libraries
import pandas as pd
import numpy as np
#Imports for KNN models
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
#Imports for testing the model
from sklearn.metrics import confusion_matrix
from sklearn.metrics import f1_score
from sklearn.metrics import accuracy_score
#Import the data file
data = pd.read_csv("/content/drive/MyDrive/Python/Colab Notebooks/Onlyinttest.csv")
#Split data
X = data.loc[:,['X1','X2','X3']]
Y = data.loc[:,'Y']
X_train, X_test, Y_train, Y_test = train_test_split(X,Y, random_state=0, test_size=0.2)
#Determine k by using sqrt
import math
k = math.sqrt(len(Y_test))
print(k)
#Make k uneven
k = k-1
#KNN Model
classifer = KNeighborsClassifier(n_neighbors=k, p=2,metric='euclidean')
classifer.fit(X_train,Y_train)
The error: ''n_neighbors does not take <class 'float'> value, enter integer value''
All the data in the original dataset were floats, but in every online example I read the algorithm also works with float data, so I do not understand this error.
To double check, I created the csv used in the code above (Onlyinttest.csv), which only contains int values, but the same error still occurs:
CSV data
Can someone help me out here?
A:
In your example, k is a float, not an integer. The n_neighbors value in KNeighborsClassifier(n_neighbors=k, p=2,metric='euclidean') has to be an integer, not a float.
You could convert k into an integer in this example using the math.ceil() function, which returns the smallest integer that is equal to or greater than the float value. Alternatively, you could use the math.floor() function, which returns the largest integer that is less than or equal to the input float.
For example:
#Determine k by using sqrt
import math
k = math.sqrt(len(Y_test))
print(k)
#Make k uneven
k = k-1
k = math.ceil(k)
print(k) # should now be an integer
print(type(k)) # <class 'int'>
#KNN Model
classifer = KNeighborsClassifier(n_neighbors=k, p=2, metric='euclidean')
classifer.fit(X_train, Y_train)
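If the k - 1 step in the question was meant to keep k odd, a small sketch that converts to an integer and preserves oddness (the exact rounding choice here is an assumption, not something the question specifies):
import math

k = int(math.sqrt(len(Y_test)))   # n_neighbors must be an int
if k % 2 == 0:                    # keep k odd, as the question intended
    k -= 1
k = max(k, 1)                     # never let k fall below 1

classifer = KNeighborsClassifier(n_neighbors=k, p=2, metric='euclidean')
classifer.fit(X_train, Y_train)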
| Fit function in sklearn KNN model does not work: 'n_neighbors does not take value, enter integer value' | I am working on a project where I want to use the KNN model from the sklearn library. I simplified the original problem to the following one. X1, X2 and X3 are the predictors used to assign each row to a category (the Y variable), which is either 1 or 2. I followed an online tutorial and all went fine until I used the fit function. Here is the code:
#Importing necessary libraries
import pandas as pd
import numpy as np
#Imports for KNN models
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
#Imports for testing the model
from sklearn.metrics import confusion_matrix
from sklearn.metrics import f1_score
from sklearn.metrics import accuracy_score
#Import the data file
data = pd.read_csv("/content/drive/MyDrive/Python/Colab Notebooks/Onlyinttest.csv")
#Split data
X = data.loc[:,['X1','X2','X3']]
Y = data.loc[:,'Y']
X_train, X_test, Y_train, Y_test = train_test_split(X,Y, random_state=0, test_size=0.2)
#Determine k by using sqrt
import math
k = math.sqrt(len(Y_test))
print(k)
#Make k uneven
k = k-1
#KNN Model
classifer = KNeighborsClassifier(n_neighbors=k, p=2,metric='euclidean')
classifer.fit(X_train,Y_train)
The error: ''n_neighbors does not take <class 'float'> value, enter integer value''
All the data in the original dataset were floats, but in every online example I read the algorithm also works with float data, so I do not understand this error.
To double check, I created the csv used in the code above (Onlyinttest.csv), which only contains int values, but the same error still occurs:
CSV data
Can someone help me out here?
| [
"In your example, k is a float, not an integer. The n_neighbors value in KNeighborsClassifier(n_neighbors=k, p=2,metric='euclidean') has to be an integer, not a float.\nYou could convert k into an integer in this example using the math.ceil() function which return the integer that is equal to or greater than the float value. Alternately, you could use the math.floor() function which will return the integer that less than or equal to the input float.\nFor example:\n#Determine k by using sqrt\nimport math\nk = math.sqrt(len(Y_test))\nprint(k)\n#Make k uneven\nk = k-1\nk = math.ceil(k)\nprint(k) # should now be an integer\nprint(type(k)) # <class 'int'>\n\n#KNN Model\nclassifer = KNeighborsClassifier(n_neighbors=k, p=2, metric='euclidean')\nclassifer.fit(X_train, Y_train)\n\n"
] | [
1
] | [] | [] | [
"knn",
"pandas",
"python",
"scikit_learn"
] | stackoverflow_0074650080_knn_pandas_python_scikit_learn.txt |
Q:
Why can't client.message.create() receive an f-string as argument for its body parameter?
I'm building a Flask API, and one of its use cases consists of sending a WhatsApp message to a requested phone number. So far, I've been testing this feature through Twilio's sandbox & phone number in a trial account.
This is my use case code:
def send_greetings(order_id):
try:
phone = format_phone_number(
retrieve_target_phone(order_id)
) # Retrieves target phone number and formats it (removes whitespace, etc)
twilio_client.messages.create(
from_=f"whatsapp:{current_app.config['TWILIO_WHATSAPP_SENDER']}",
to=f"whatsapp:{phone}",
body=build_message(order_id), # Returns an f-string
)
except:
raise
The code above fails to submit the message, but doesn't raise an exception. However, if I change the body argument from the call to build_message to a regular string, the message is sent. If I change the same parameter to a variable containing an f-string, the message won't be submitted.
It's noteworthy that the message I try to submit doesn't really match any defined template. This is the code for the build_message function:
from flask import current_app
def build_message(order_id: str) -> str:
return f"¡Hola. Has recibido un regalo y junto con el, un saludo especial. Ve a {current_app.config['FRONTEND_DOMAIN']}/destinatario?order_id={order_id} para revisarlo!"
So why is it that when the parameter is a regular string the message is sent, even though it doesn't match any of the 3 predefined templates, but when it's an f-string it's not submitted?
A:
There's nothing magical about f-strings. The bad behavior must be caused by something else.
There is absolutely no difference in the return values of these two functions:
def hello():
return "Hello John"
def hello_f():
name = "John"
return f"Hello {name}"
Anyone calling these functions would see exactly the same return value: a plain old string. The caller would have absolutely no way of knowing that the string was generated using an f-string template.
So there must be some actual difference in the content of the regular string you're using, vs. the output of the build_message() function.
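A quick way to find that difference is to compare the two payloads with repr(), which exposes hidden characters such as stray newlines or non-breaking spaces; working_text below is just a placeholder for whatever hard-coded string was delivered successfully:
built = build_message(order_id)
working_text = "..."              # placeholder: the literal string that worked
print(repr(built))
print(repr(working_text))
print(built == working_text)
Checking the sid and status on the object returned by twilio_client.messages.create() can also confirm whether Twilio accepted the request at all.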
| Why can't client.message.create() receive an f-string as argument for its body parameter? | I'm building a Flask API, and one of its use cases consists of sending a WhatsApp message to a requested phone number. So far, I've been testing this feature through Twilio's sandbox & phone number in a trial account.
This is my use case code:
def send_greetings(order_id):
try:
phone = format_phone_number(
retrieve_target_phone(order_id)
) # Retrieves target phone number and formats it (removes whitespace, etc)
twilio_client.messages.create(
from_=f"whatsapp:{current_app.config['TWILIO_WHATSAPP_SENDER']}",
to=f"whatsapp:{phone}",
body=build_message(order_id), # Returns an f-string
)
except:
raise
The code above fails to submit the message, but doesn't raise an exception. However, if I change the body argument from the call to build_message to a regular string, the message is sent. If I change the same parameter to a variable containing an f-string, the message won't be submitted.
It's noteworthy that the message I try to submit doesn't really match any defined template. This is the code for the build_message function:
from flask import current_app
def build_message(order_id: str) -> str:
return f"¡Hola. Has recibido un regalo y junto con el, un saludo especial. Ve a {current_app.config['FRONTEND_DOMAIN']}/destinatario?order_id={order_id} para revisarlo!"
So why is it that when the parameter is a regular string the message is sent, even though it doesn't match any of the 3 predefined templates, but when it's an f-string it's not submitted?
| [
"There's nothing magical about f-strings. The bad behavior must be caused by something else.\nThere is absolutely no difference in the return values of these two functions:\ndef hello():\n return \"Hello John\"\n\ndef hello_f():\n name = \"John\"\n return f\"Hello {name}\"\n\nAnyone calling these functions would see exactly the same return value: a plain old string. The caller would have absolutely no way of knowing that the string was generated using an f-string template.\nSo there must be some actual difference in the content of the regular string you're using, vs. the output of the build_message() function.\n"
] | [
3
] | [] | [] | [
"flask",
"python",
"twilio",
"twilio_api",
"whatsapi"
] | stackoverflow_0074650558_flask_python_twilio_twilio_api_whatsapi.txt |
Q:
Trailing tab in the string not getting printed using the print function in python (python3)
I am trying to print a string with \t at both beginning and end, like below.
name2print="\tabhinav\t"
lastname="gupta"
print(name2print,lastname)
Expected output should be
abhinav gupta
But the actual output is
abhinav gupta
I tried lstrip like this and, as expected, it strips only the beginning "\t" while the trailing "\t" still shows up in the output:
print(name2print.lstrip(),lastname)
Output:
abhinav gupta
If lstrip() can print the trailing "\t" then why is the print statement ignoring the trailing tab character in the first string while printing? I think I am missing something basic. Please help.
A:
The output is correct. \t adds a variable number of spaces so that the next printed character is at a position which is a multiple of 8 (or whatever is configured in your terminal).
In your example the first \t adds 8 spaces, then you print abhinav (7 characters), the next tab adds 1 space to make it a multiple of 8, then the , in your print statements adds 1 space, then you print gupta:
a b h i n a v g u p t a
1 2 3 4 5 6 7 8 1 2 3 4 5 6 7 8 1 2 3 4 5 6 7 8
└───── \t ────┘ \t, └\t┘
If you always want to print 8 spaces, use " ".
A:
When you print a tab character, Python uses it to align the string to the end of a 4 character boundary so that the next thing that gets printed is aligned to the next boundary. So you will not always get 4 spaces. Rather, you'll get between 1 and 4 spaces. Here's some code to demonstrate this:
print('\ta\t','bbbb')
print('\taa\t','bbbb')
print('\taaa\t','bbbb')
print('\taaaa\t','bbbb')
print('\taaaaa\t','bbbb')
print('1234567890123456')
Result:
a bbbb
aa bbbb
aaa bbbb
aaaa bbbb
aaaaa bbbb
1234567890123456
The bbbb values are aligned to the 10th and 14th positions rather than the 9th and 13th because of the extra space character that the Python print() function adds between terms. The second string being aligned to 4 character boundaries is ' bbbb'.
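A quick way to convince yourself the trailing tab is not being dropped, only rendered as alignment, is to look at the repr of the combined string:
name2print = "\tabhinav\t"
lastname = "gupta"
print(repr(name2print + " " + lastname))   # shows '\tabhinav\t gupta', so the trailing tab is still there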
| Trailing tab in the string not getting printed using the print function in python (python3) | I am trying to print a string with \t at both beginning and end, like below.
name2print="\tabhinav\t"
lastname="gupta"
print(name2print,lastname)
Expected output should be
abhinav gupta
But the actual output is
abhinav gupta
I tried lstrip like this and, as expected, it strips only the beginning "\t" while the trailing "\t" still shows up in the output:
print(name2print.lstrip(),lastname)
Output:
abhinav gupta
If lstrip() can print the trailing "\t" then why is the print statement ignoring the trailing tab character in the first string while printing? I think I am missing something basic. Please help.
| [
"The output is correct. \\t adds a variable number of spaces so that the next printed character is at a position which is a multiple of 8 (or whatever is configured in your terminal).\nIn your example the first \\t adds 8 spaces, then you print abhinav (7 characters), the next tab adds 1 space to make it a multiple of 8, then the , in your print statements adds 1 space, then you print gupta:\n a b h i n a v g u p t a \n1 2 3 4 5 6 7 8 1 2 3 4 5 6 7 8 1 2 3 4 5 6 7 8\n└───── \\t ────┘ \\t, └\\t┘\n\nIf you always want to print 8 spaces, use \" \".\n",
"When you print a tab character, Python uses it to align the string to the end of a 4 character boundary so that the next thing that gets printed is aligned to the next boundary. So you will not always get 4 spaces. Rather, you'll get between 1 and 4 spaces. Here's some code to demonstrate this:\nprint('\\ta\\t','bbbb')\nprint('\\taa\\t','bbbb')\nprint('\\taaa\\t','bbbb')\nprint('\\taaaa\\t','bbbb')\nprint('\\taaaaa\\t','bbbb')\nprint('1234567890123456')\n\nResult:\n a bbbb\n aa bbbb\n aaa bbbb\n aaaa bbbb\n aaaaa bbbb\n1234567890123456\n\nThe bbbb values are aligned to the 10th and 14th positions rather than the 9th and 13th because of the extra space character that the Python print() function adds between terms. The second string being aligned to 4 character boundaries is ' bbbb'.\n"
] | [
0,
0
] | [] | [] | [
"printing",
"python",
"string",
"tabs"
] | stackoverflow_0074650572_printing_python_string_tabs.txt |
Q:
How to crack a cisco type 9 encryption password
I would like to decrypt an encrypted password in type 9 in a cisco device in python.
I do not have any code yet because I do not know where to start.
Thanks,
crack a cisco type 9 password in python
A:
There is no known vulnerability to decrypt a type 9 Cisco password, so the answer is that you cannot decrypt it.
| How to crack a cisco type 9 encryption password | I would like to decrypt an encrypted password in type 9 in a cisco device in python.
I do not have any code yet because I do not know where to start.
Thanks,
crack a cisco type 9 password in python
| [
"These is no known vulnerability to decrypt type 9 cisco password, so the answer is that you can not decrypt it.\n"
] | [
0
] | [] | [] | [
"cisco",
"cracking",
"passwords",
"python",
"scrypt"
] | stackoverflow_0074597471_cisco_cracking_passwords_python_scrypt.txt |
Q:
discord.py - count how many times a user has been mentioned in a specific channel
I'm coding a Discord bot for a friend using Python (discord.py), and there's a specific task I want the bot to do that I can't figure out how to code (I'm kind of a newbie with Python). Here it is: we use a specific text channel to post wins of the games we play, mentioning every participant, and I want the bot to count every mention a user has in that channel and return the number. Reading here and there I saw I should be using either message.raw_mentions or message.mentions, but I don't know how to make it select the proper channel or how to just return the number of a specific user's mentions. So far the code looks like this:
async def testcommand(self, context: Context, user: discord.User) -> None:
member = context.guild.get_member(user.id) or await context.guild.fetch_member(user.id)
mentions = message.mentions
for user in mentions:
if user.id in ???????????
embed = discord.Embed(
title="**Wins:**",
description=f"{member}",
color=0x9C84EF
)
await context.send(embed=embed)
Thanks for your help, have a nice day!
A:
Maybe something like this, where it iterates over every message sent in the current channel?
@bot.command()
async def some_command(ctx):
mentions = 0
async for message in ctx.channel.history(limit=10000000): # set limit to some big number
if ctx.author in message.mentions:
mentions += 1
await ctx.send(mentions)
The above code counts the number of times the author is mentioned in the current channel.
Of course, this is highly inefficient, but it gets the job done.
If you wish to add more parameters (such as the user and the channel):
@bot.command()
async def better_command(ctx, user: discord.Member, channel: discord.TextChannel):
mentions = 0
async for message in channel.history(limit=10000000): # set limit to some big number
if user in message.mentions:
mentions += 1
await ctx.send(mentions)
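As a follow-up to the sketch above, discord.py also accepts limit=None, which walks the entire channel history instead of stopping at a fixed cap (still slow in large channels):
async for message in channel.history(limit=None):   # no cap: iterate the full history
    if user in message.mentions:
        mentions += 1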
| discord.py - count how many times a user has been mentioned in a specific channel | I'm coding a Discord bot for a friend using Python (discord.py), and there's a specific task I want the bot to do that I can't figure out how to code (I'm kind of a newbie with Python). Here it is: we use a specific text channel to post wins of the games we play, mentioning every participant, and I want the bot to count every mention a user has in that channel and return the number. Reading here and there I saw I should be using either message.raw_mentions or message.mentions, but I don't know how to make it select the proper channel or how to just return the number of a specific user's mentions. So far the code looks like this:
async def testcommand(self, context: Context, user: discord.User) -> None:
member = context.guild.get_member(user.id) or await context.guild.fetch_member(user.id)
mentions = message.mentions
for user in mentions:
if user.id in ???????????
embed = discord.Embed(
title="**Wins:**",
description=f"{member}",
color=0x9C84EF
)
await context.send(embed=embed)
Thanks for your help, have a nice day!
| [
"Maybe something like this, where it iterates over every message sent in the current channel?\[email protected]()\nasync def some_command(ctx):\n mentions = 0\n\n async for message in ctx.channel.history(limit=10000000): # set limit to some big number\n if ctx.author in message.mentions:\n mentions += 1\n\n await ctx.send(mentions)\n\nThe above code counts the number of times the author is mentioned in the current channel.\nOf course, this is highly inefficient, but it gets the job done.\n\nIf you wish to add more parameters (such as the user and the channel):\[email protected]()\nasync def better_command(ctx, user: discord.Member, channel: discord.TextChannel):\n mentions = 0\n\n async for message in channel.history(limit=10000000): # set limit to some big number\n if user in message.mentions:\n mentions += 1\n\n await ctx.send(mentions)\n\n"
] | [
1
] | [] | [] | [
"discord.py",
"python"
] | stackoverflow_0074646074_discord.py_python.txt |
Q:
Unable to play & convert .txt to mp3 using GTTS
I'm trying to read a .txt file using Google's text-to-speech API. But, when I try to run it, it gives an error that I can't quite fathom. So your help will be greatly appreciated!
My Python code:
#Import the required module for text
from gtts import gTTS
#required to play the converted file
import os
#The file you want to convert
with open('/Users/humma/Desktop/python_projects/flowers.txt', 'r') as myFile:
fileRead = myFile.read()
#passing file and language to the engine
myObj = gTTS(text = fileRead, lang = 'en-US', slow = False)
myObj.save('flowers.mp3')
os.system("flowers.mp3")
The error I get:
File "c:/Users/humma/Desktop/python_projects/txt-to-speech/txt-to-spch.py", line 12, in <module>
myObj.save('flowers.mp3')
File "C:\ProgramData\Anaconda3\lib\site-packages\gtts\tts.py", line 312, in save
self.write_to_fp(f)
File "C:\ProgramData\Anaconda3\lib\site-packages\gtts\tts.py", line 294, in write_to_fp
raise gTTSError(tts=self, response=r)
gtts.tts.gTTSError: 200 (OK) from TTS API. Probable cause: Unknown
Thank you in advance for your time :)
A:
from gtts import gTTS
from io import BytesIO
from pygame import mixer
import time
def speak():
mp3_fp = BytesIO()
tts = gTTS('You need to read documetation properly', lang='en')
tts.write_to_fp(mp3_fp)
tts.save("Audio.mp3")
return mp3_fp
mixer.init()
sound = speak()
sound.seek(0)
mixer.music.load(sound, "mp3")
mixer.music.play()
A:
It was lang = 'en-US' that caused the error. It's simply: lang = 'en'
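Putting that fix back into the original script, a minimal corrected version looks like this:
from gtts import gTTS

with open('/Users/humma/Desktop/python_projects/flowers.txt', 'r') as myFile:
    fileRead = myFile.read()

myObj = gTTS(text=fileRead, lang='en', slow=False)   # 'en' instead of 'en-US'
myObj.save('flowers.mp3')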
| Unable to play & convert .txt to mp3 using GTTS | I'm trying to read a .txt file using Google's text-to-speech API. But, when I try to run it, it gives an error that I can't quite fathom. So your help will be greatly appreciated!
My Python code:
#Import the required module for text
from gtts import gTTS
#required to play the converted file
import os
#The file you want to convert
with open('/Users/humma/Desktop/python_projects/flowers.txt', 'r') as myFile:
fileRead = myFile.read()
#passing file and language to the engine
myObj = gTTS(text = fileRead, lang = 'en-US', slow = False)
myObj.save('flowers.mp3')
os.system("flowers.mp3")
The error I get:
File "c:/Users/humma/Desktop/python_projects/txt-to-speech/txt-to-spch.py", line 12, in <module>
myObj.save('flowers.mp3')
File "C:\ProgramData\Anaconda3\lib\site-packages\gtts\tts.py", line 312, in save
self.write_to_fp(f)
File "C:\ProgramData\Anaconda3\lib\site-packages\gtts\tts.py", line 294, in write_to_fp
raise gTTSError(tts=self, response=r)
gtts.tts.gTTSError: 200 (OK) from TTS API. Probable cause: Unknown
Thank you in advance for your time :)
| [
"from gtts import gTTS\nfrom io import BytesIO\nfrom pygame import mixer\nimport time\n\ndef speak():\n mp3_fp = BytesIO()\n tts = gTTS('You need to read documetation properly', lang='en')\n tts.write_to_fp(mp3_fp)\n tts.save(\"Audio.mp3\")\n return mp3_fp\n\nmixer.init()\nsound = speak()\nsound.seek(0)\nmixer.music.load(sound, \"mp3\")\nmixer.music.play()\n\n",
"It was lang = 'en-US' that caused the error. It's simply: lang = 'en'\n"
] | [
1,
0
] | [] | [] | [
"gtts",
"operating_system",
"python"
] | stackoverflow_0065981046_gtts_operating_system_python.txt |
Q:
How do I extract username from the json.loads object
I have the following block of code:
import json
from types import SimpleNamespace
data=json.dumps(
{
"update_id": 992108054,
"message": {
"delete_chat_photo": False,
"new_chat_members": [],
"date": 1669931418,
"photo": [],
"entities": [],
"message_id": 110,
"group_chat_created": False,
"caption_entities": [],
"new_chat_photo": [],
"supergroup_chat_created": False,
"chat": {
"type": "private",
"first_name": "test_name",
"id": 134839552,
"last_name": "test_l_name",
"username": "test_username"
},
"channel_chat_created": False,
"text": "test_text",
"from": {
"last_name": "test_l_name",
"is_bot": False,
"username": "test_username",
"id": 134839552,
"first_name": "test_name",
"language_code": "en"
}
}
})
x = json.loads(data, object_hook=lambda d: SimpleNamespace(**d))
print(x.message.text,
x.message.chat
)
Which works fine.
However, when I add
print(x.message.from)
I get an error:
File "<ipython-input-179-dec1b9f9affa>", line 1
print(x.message.from)
^
SyntaxError: invalid syntax
Could you please help me? How can I access fields inside 'from' block?
A:
You can access the fields inside the from block by using square bracket notation instead of dot notation. Here is how your code would look with the changes:
data=json.dumps(
{
"update_id": 992108054,
"message": {
"delete_chat_photo": False,
"new_chat_members": [],
"date": 1669931418,
"photo": [],
"entities": [],
"message_id": 110,
"group_chat_created": False,
"caption_entities": [],
"new_chat_photo": [],
"supergroup_chat_created": False,
"chat": {
"type": "private",
"first_name": "test_name",
"id": 134839552,
"last_name": "test_l_name",
"username": "test_username"
},
"channel_chat_created": False,
"text": "test_text",
"from": {
"last_name": "test_l_name",
"is_bot": False,
"username": "test_username",
"id": 134839552,
"first_name": "test_name",
"language_code": "en"
}
}
})
x = json.loads(data, object_hook=lambda d: SimpleNamespace(**d))
print(x.message.text,
x.message.chat
)
print(x.message["from"])
In Python, you use square bracket notation [] to access dictionary keys, and dot notation . to access object attributes. In this case, from is a reserved keyword in Python, so you cannot use dot notation to access it directly. Instead, you must use square bracket notation to access it.
(ChatGPT AI assisted with this answer)
A:
As from is a keyword, try this x.message.__getattribute__('from'):
>>> x.message.__getattribute__('from')
namespace(last_name='test_l_name',
is_bot=False,
username='test_username',
id=134839552,
first_name='test_name',
language_code='en')
And to get one of its attribute, say username, you could access like what you have been doing:
>>> x.message.__getattribute__('from').username
'test_username'
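getattr() is the more idiomatic spelling of the same lookup, and it also lets you supply a default when the attribute is missing:
frm = getattr(x.message, 'from', None)   # works even though 'from' is a keyword
print(frm.username)                      # 'test_username'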
| How do I extract username from the json.loads object | I have the following block of code:
import json
from types import SimpleNamespace
data=json.dumps(
{
"update_id": 992108054,
"message": {
"delete_chat_photo": False,
"new_chat_members": [],
"date": 1669931418,
"photo": [],
"entities": [],
"message_id": 110,
"group_chat_created": False,
"caption_entities": [],
"new_chat_photo": [],
"supergroup_chat_created": False,
"chat": {
"type": "private",
"first_name": "test_name",
"id": 134839552,
"last_name": "test_l_name",
"username": "test_username"
},
"channel_chat_created": False,
"text": "test_text",
"from": {
"last_name": "test_l_name",
"is_bot": False,
"username": "test_username",
"id": 134839552,
"first_name": "test_name",
"language_code": "en"
}
}
})
x = json.loads(data, object_hook=lambda d: SimpleNamespace(**d))
print(x.message.text,
x.message.chat
)
Which works fine.
However, when I add
print(x.message.from)
I get an error:
File "<ipython-input-179-dec1b9f9affa>", line 1
print(x.message.from)
^
SyntaxError: invalid syntax
Could you please help me? How can I access fields inside 'from' block?
| [
"You can access the fields inside the from block by using square bracket notation instead of dot notation. Here is how your code would look with the changes:\n\ndata=json.dumps(\n{\n \"update_id\": 992108054,\n \"message\": {\n \"delete_chat_photo\": False,\n \"new_chat_members\": [],\n \"date\": 1669931418,\n \"photo\": [],\n \"entities\": [],\n \"message_id\": 110,\n \"group_chat_created\": False,\n \"caption_entities\": [],\n \"new_chat_photo\": [],\n \"supergroup_chat_created\": False,\n \"chat\": {\n \"type\": \"private\",\n \"first_name\": \"test_name\",\n \"id\": 134839552,\n \"last_name\": \"test_l_name\",\n \"username\": \"test_username\"\n },\n \"channel_chat_created\": False,\n \"text\": \"test_text\",\n \"from\": {\n \"last_name\": \"test_l_name\",\n \"is_bot\": False,\n \"username\": \"test_username\",\n \"id\": 134839552,\n \"first_name\": \"test_name\",\n \"language_code\": \"en\"\n }\n }\n})\nx = json.loads(data, object_hook=lambda d: SimpleNamespace(**d))\nprint(x.message.text,\n x.message.chat\n )\nprint(x.message[\"from\"])\n\nIn Python, you use square bracket notation [] to access dictionary keys, and dot notation . to access object attributes. In this case, from is a reserved keyword in Python, so you cannot use dot notation to access it directly. Instead, you must use square bracket notation to access it.\n(ChatGPT AI assisted with this answer)\n",
"As from is a keyword, try this x.message.__getattribute__('from'):\n>>> x.message.__getattribute__('from')\nnamespace(last_name='test_l_name',\n is_bot=False,\n username='test_username',\n id=134839552,\n first_name='test_name',\n language_code='en')\n\nAnd to get one of its attribute, say username, you could access like what you have been doing:\n>>> x.message.__getattribute__('from').username\n'test_username'\n\n"
] | [
0,
0
] | [] | [] | [
"json",
"python",
"telegram_bot"
] | stackoverflow_0074648762_json_python_telegram_bot.txt |
Q:
How to know embed's text and author in discord.py?
I was trying to find a way to get an embed message's text and author, but never found it. So is there any way to do that?
I Googled all over the Internet but could not find it, unfortunately.
A:
Take a look at the docs here
If you have an Embed object, obj just use obj.author to get the author. The text I'm assuming you mean the title, which can be accessed by obj.title.
A:
Assuming you have a Message object,
First, to get message author:
author = message.author
To get the embed from the message:
embed = discord.Embed.from_data(message.embeds[0]) # gets embed from message
Before getting the embed text, we first look at the attributes of a discord.Embed object. It has the attributes title and description [source].
Therefore:
embedTitle = embed.title # get title
embedDescription = embed.description # get description
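For what it's worth, in recent discord.py versions message.embeds already holds Embed objects, so no conversion step is needed; a minimal sketch, assuming the message carries at least one embed:
embed = message.embeds[0]     # already a discord.Embed in discord.py 1.x/2.x
print(embed.title)            # the embed's title text
print(embed.description)      # the embed's body text
print(embed.author.name)      # the embed's author, if one was set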
| How to know embed's text and author in discord.py? | I was trying to find a way to get an embed message's text and author, but never found it. So is there any way to do that?
I Googled all over the Internet but could not find it, unfortunately.
| [
"Take a look at the docs here\nIf you have an Embed object, obj just use obj.author to get the author. The text I'm assuming you mean the title, which can be accessed by obj.title.\n",
"Assuming you have a Message object,\nFirst, to get message author:\nauthor = message.author\n\nTo get the embed from the message:\nembed = discord.Embed.from_data(message.embeds[0]) # gets embed from message\n\nBefore getting the embed text, we first look at the attributes of a discord.Embed object. It has the attributes title and description [source].\nTherefore:\nembedTitle = embed.title # get title\nembedDescription = embed.description # get description\n\n"
] | [
1,
0
] | [] | [] | [
"discord",
"discord.py",
"python"
] | stackoverflow_0074646811_discord_discord.py_python.txt |
Q:
after installing panda while importing in python Error: module 'os' has no attribute 'add_dll_directory'
After installing pandas, I got the following error while importing it in Python:
File "<stdin>", line 1, in <module>
File "C:\Users\ss\AppData\Local\Programs\Python\Python38\lib\site-packages\pandas\__init__.py", line 11, in <module>
__import__(dependency)
File "C:\Users\ss\AppData\Local\Programs\Python\Python38\lib\site-packages\numpy\__init__.py", line 126, in <module>
from numpy.__config__ import show as show_config
File "C:\Users\ss\AppData\Local\Programs\Python\Python38\lib\site-packages\numpy\__config__.py", line 13, in <module>
os.add_dll_directory(extra_dll_dir)
AttributeError: module 'os' has no attribute 'add_dll_directory'
A:
Remove the numpy package and install it again; the error will go away.
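Assuming the packages were installed with pip, that amounts to something like:
pip uninstall -y numpy
pip install numpy pandas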
| after installing panda while importing in python Error: module 'os' has no attribute 'add_dll_directory' | After installing pandas, I got the following error while importing it in Python:
File "<stdin>", line 1, in <module>
File "C:\Users\ss\AppData\Local\Programs\Python\Python38\lib\site-packages\pandas\__init__.py", line 11, in <module>
__import__(dependency)
File "C:\Users\ss\AppData\Local\Programs\Python\Python38\lib\site-packages\numpy\__init__.py", line 126, in <module>
from numpy.__config__ import show as show_config
File "C:\Users\ss\AppData\Local\Programs\Python\Python38\lib\site-packages\numpy\__config__.py", line 13, in <module>
os.add_dll_directory(extra_dll_dir)
AttributeError: module 'os' has no attribute 'add_dll_directory'
| [
"Remove Numpy Package and install again. The error will go away.\n"
] | [
0
] | [] | [] | [
"importerror",
"pandas",
"python"
] | stackoverflow_0061318720_importerror_pandas_python.txt |
Q:
Conda cannot find package despite package being listed on anaconda.org
I am trying to install zipline-reloaded using conda but am encountering a PackagesNotFoundError—this despite running the install command for zipline-reloaded provided on the package's page on anaconda.org. What might be going wrong here, and how can I resolve it?
My steps this far:
conda create -n zipline python=3.8
conda activate zipline
conda install -c ml4t zipline-reloaded (this is directly from the package's page on anaconda.org, linked above, but raises PackagesNotFoundError)
Note that trying to install, e.g., scipy, yields no similar error. Also, using mamba instead of conda leads to the same PackagesNotFoundError error.
The output of conda info is below:
active environment : zipline
active env location : /opt/homebrew/Caskroom/miniforge/base/envs/zipline
shell level : 1
user config file : /Users/name/.condarc
populated config files : /opt/homebrew/Caskroom/miniforge/base/.condarc
/Users/name/.condarc
conda version : 22.9.0
conda-build version : not installed
python version : 3.9.15.final.0
virtual packages : __osx=12.4=0
__unix=0=0
__archspec=1=arm64
base environment : /opt/homebrew/Caskroom/miniforge/base (writable)
conda av data dir : /opt/homebrew/Caskroom/miniforge/base/etc/conda
conda av metadata url : None
channel URLs : https://conda.anaconda.org/ml4t/osx-arm64
https://conda.anaconda.org/ml4t/noarch
https://conda.anaconda.org/conda-forge/osx-arm64
https://conda.anaconda.org/conda-forge/noarch
https://repo.anaconda.com/pkgs/main/osx-arm64
https://repo.anaconda.com/pkgs/main/noarch
https://repo.anaconda.com/pkgs/r/osx-arm64
https://repo.anaconda.com/pkgs/r/noarch
https://conda.anaconda.org/ranaroussi/osx-arm64
https://conda.anaconda.org/ranaroussi/noarch
package cache : /opt/homebrew/Caskroom/miniforge/base/pkgs
/Users/name/.conda/pkgs
envs directories : /opt/homebrew/Caskroom/miniforge/base/envs
/Users/name/.conda/envs
platform : osx-arm64
user-agent : conda/22.9.0 requests/2.28.1 CPython/3.9.15 Darwin/21.5.0 OSX/12.4
UID:GID : 501:20
netrc file : None
offline mode : False
I've tried running:
conda clean -all
conda clean --index-cache
conda update conda
I've also made sure to set offline mode to false (conda config --set offline false), which it is, per the output above.
Finally, running conda search -c ml4t zipline-reloaded -vvv outputs:
DEBUG conda.gateways.logging:set_verbosity(236): verbosity set to 3
Loading channels: ...working... TRACE conda.gateways.disk.test:file_path_is_writable(24): checking path is writable /opt/homebrew/Caskroom/miniforge/base/pkgs/urls.txt
DEBUG conda.core.package_cache_data:_check_writable(268): package cache directory '/opt/homebrew/Caskroom/miniforge/base/pkgs' writable: True
DEBUG conda.core.subdir_data:_load(273): Local cache timed out for https://conda.anaconda.org/ml4t/osx-arm64/repodata.json at /opt/homebrew/Caskroom/miniforge/base/pkgs/cache/b4814506.json
DEBUG conda.core.subdir_data:_load(273): Local cache timed out for https://conda.anaconda.org/ml4t/noarch/repodata.json at /opt/homebrew/Caskroom/miniforge/base/pkgs/cache/091140c5.json
DEBUG conda.core.subdir_data:_load(273): Local cache timed out for https://repo.anaconda.com/pkgs/r/noarch/repodata.json at /opt/homebrew/Caskroom/miniforge/base/pkgs/cache/4ea078d6.json
DEBUG conda.core.subdir_data:_load(273): Local cache timed out for https://conda.anaconda.org/ranaroussi/osx-arm64/repodata.json at /opt/homebrew/Caskroom/miniforge/base/pkgs/cache/639ecbbd.json
DEBUG conda.core.subdir_data:_load(267): Using cached repodata for https://conda.anaconda.org/conda-forge/noarch/repodata.json at /opt/homebrew/Caskroom/miniforge/base/pkgs/cache/09cdf8bf.json. Timeout in 42 sec
DEBUG conda.core.subdir_data:_load(273): Local cache timed out for https://repo.anaconda.com/pkgs/r/osx-arm64/repodata.json at /opt/homebrew/Caskroom/miniforge/base/pkgs/cache/8bd55712.json
DEBUG conda.core.subdir_data:_load(273): Local cache timed out for https://repo.anaconda.com/pkgs/main/noarch/repodata.json at /opt/homebrew/Caskroom/miniforge/base/pkgs/cache/3e39a7aa.json
DEBUG conda.core.subdir_data:_read_pickled(357): found pickle file /opt/homebrew/Caskroom/miniforge/base/pkgs/cache/09cdf8bf.q
DEBUG conda.core.subdir_data:_load(273): Local cache timed out for https://conda.anaconda.org/ranaroussi/noarch/repodata.json at /opt/homebrew/Caskroom/miniforge/base/pkgs/cache/f2f1db2e.json
DEBUG conda.core.subdir_data:_load(267): Using cached repodata for https://conda.anaconda.org/conda-forge/osx-arm64/repodata.json at /opt/homebrew/Caskroom/miniforge/base/pkgs/cache/a850f475.json. Timeout in 37 sec
DEBUG conda.core.subdir_data:_load(273): Local cache timed out for https://repo.anaconda.com/pkgs/main/osx-arm64/repodata.json at /opt/homebrew/Caskroom/miniforge/base/pkgs/cache/9e99ffaf.json
DEBUG conda.core.subdir_data:_read_pickled(357): found pickle file /opt/homebrew/Caskroom/miniforge/base/pkgs/cache/a850f475.q
DEBUG urllib3.connectionpool:_new_conn(1003): Starting new HTTPS connection (1): conda.anaconda.org:443
DEBUG urllib3.connectionpool:_new_conn(1003): Starting new HTTPS connection (1): conda.anaconda.org:443
DEBUG urllib3.connectionpool:_new_conn(1003): Starting new HTTPS connection (1): conda.anaconda.org:443
DEBUG urllib3.connectionpool:_new_conn(1003): Starting new HTTPS connection (1): conda.anaconda.org:443
DEBUG urllib3.connectionpool:_new_conn(1003): Starting new HTTPS connection (1): repo.anaconda.com:443
DEBUG urllib3.connectionpool:_new_conn(1003): Starting new HTTPS connection (1): repo.anaconda.com:443
DEBUG urllib3.connectionpool:_new_conn(1003): Starting new HTTPS connection (1): repo.anaconda.com:443
DEBUG urllib3.connectionpool:_new_conn(1003): Starting new HTTPS connection (1): repo.anaconda.com:443
DEBUG urllib3.connectionpool:_make_request(456): https://conda.anaconda.org:443 "GET /ml4t/osx-arm64/repodata.json HTTP/1.1" 304 0
DEBUG conda.core.subdir_data:fetch_repodata_remote_request(530):
>>GET /ml4t/osx-arm64/repodata.json HTTPS
> User-Agent: conda/22.9.0 requests/2.28.1 CPython/3.9.15 Darwin/21.5.0 OSX/12.4
> Accept: application/json
> Accept-Encoding: gzip, deflate, compress, identity
> Connection: keep-alive
> If-Modified-Since: Sun, 27 Nov 2022 23:15:46 GMT
<<HTTPS 304 Not Modified
< CF-Cache-Status: DYNAMIC
< CF-Ray: 7729c305af6cc3f0-EWR
< Date: Thu, 01 Dec 2022 06:28:16 GMT
< Server: cloudflare
< Set-Cookie: __cf_bm=5_iHfkEj0MHUfLTzo1_JU44eFH7k.fkG4MefCLHFwCs-1669876096-0-Afv4K+DTTGsbF/EJ55ZMcprcOQU03A9xjLx1fVCxk220/+qEj2KgM0D9KShLZZMNwWIGCF/SrfFT/0yOU8PYUb3c1sef5rFCHNw00EumeSIB; path=/; expires=Thu, 01-Dec-22 06:58:16 GMT; domain=.anaconda.org; HttpOnly; Secure; SameSite=None
< Strict-Transport-Security: max-age=31536000
< Vary: Accept-Encoding
< Connection: keep-alive
< Elapsed: 00:00.374252
DEBUG conda.core.subdir_data:_load(292): 304 NOT MODIFIED for 'https://conda.anaconda.org/ml4t/osx-arm64/repodata.json'. Updating mtime and loading from disk
TRACE conda.gateways.disk.update:touch(132): touching path /opt/homebrew/Caskroom/miniforge/base/pkgs/cache/b4814506.json
DEBUG urllib3.connectionpool:_make_request(456): https://conda.anaconda.org:443 "GET /ranaroussi/osx-arm64/repodata.json HTTP/1.1" 304 0
DEBUG conda.core.subdir_data:fetch_repodata_remote_request(530):
>>GET /ranaroussi/osx-arm64/repodata.json HTTPS
> User-Agent: conda/22.9.0 requests/2.28.1 CPython/3.9.15 Darwin/21.5.0 OSX/12.4
> Accept: application/json
> Accept-Encoding: gzip, deflate, compress, identity
> Connection: keep-alive
> If-Modified-Since: Sat, 10 Jul 2021 20:29:09 GMT
<<HTTPS 304 Not Modified
< CF-Cache-Status: DYNAMIC
< CF-Ray: 7729c305b931c436-EWR
< Date: Thu, 01 Dec 2022 06:28:16 GMT
< Server: cloudflare
< Set-Cookie: __cf_bm=L8JbFm6EySvGhs3b0c54CFobV5MnnDkLYszHBXNbkOE-1669876096-0-AXqlxjWUQxlASGQq2FDCPHAie99ZATvZmSa+RepV64uBbnR2GAeRM3oe+VMrkj23RXFUNlC8xP/j689pu+kPxuVwr5M1u96nKabCLWSujD5q; path=/; expires=Thu, 01-Dec-22 06:58:16 GMT; domain=.anaconda.org; HttpOnly; Secure; SameSite=None
< Strict-Transport-Security: max-age=31536000
< Vary: Accept-Encoding
< Connection: keep-alive
< Elapsed: 00:00.449782
DEBUG conda.core.subdir_data:_load(292): 304 NOT MODIFIED for 'https://conda.anaconda.org/ranaroussi/osx-arm64/repodata.json'. Updating mtime and loading from disk
TRACE conda.gateways.disk.update:touch(132): touching path /opt/homebrew/Caskroom/miniforge/base/pkgs/cache/639ecbbd.json
DEBUG conda.core.subdir_data:_read_pickled(357): found pickle file /opt/homebrew/Caskroom/miniforge/base/pkgs/cache/b4814506.q
DEBUG conda.core.subdir_data:_read_pickled(357): found pickle file /opt/homebrew/Caskroom/miniforge/base/pkgs/cache/639ecbbd.q
DEBUG urllib3.connectionpool:_make_request(456): https://conda.anaconda.org:443 "GET /ml4t/noarch/repodata.json HTTP/1.1" 304 0
DEBUG conda.core.subdir_data:fetch_repodata_remote_request(530):
>>GET /ml4t/noarch/repodata.json HTTPS
> User-Agent: conda/22.9.0 requests/2.28.1 CPython/3.9.15 Darwin/21.5.0 OSX/12.4
> Accept: application/json
> Accept-Encoding: gzip, deflate, compress, identity
> Connection: keep-alive
> If-Modified-Since: Sun, 27 Nov 2022 23:15:46 GMT
<<HTTPS 304 Not Modified
< CF-Cache-Status: DYNAMIC
< CF-Ray: 7729c306acba17a5-EWR
< Date: Thu, 01 Dec 2022 06:28:17 GMT
< Server: cloudflare
< Set-Cookie: __cf_bm=OUlfjhiwUIls5GRrkrvx_gSvGNQ8jSFq4C4FQHuUEkI-1669876097-0-AeZAUF8zYwue32HP8rVUfV7ws8cYe8P/MZIXlS5tWbRONDfIemqH4Ze4eE1R6UuAdqaqqw/Rh2jJscMkBWdApRdTtKFIYda2yA5HJ44Urw+N; path=/; expires=Thu, 01-Dec-22 06:58:17 GMT; domain=.anaconda.org; HttpOnly; Secure; SameSite=None
< Strict-Transport-Security: max-age=31536000
< Vary: Accept-Encoding
< Connection: keep-alive
< Elapsed: 00:00.531916
DEBUG conda.core.subdir_data:_load(292): 304 NOT MODIFIED for 'https://conda.anaconda.org/ml4t/noarch/repodata.json'. Updating mtime and loading from disk
DEBUG urllib3.connectionpool:_make_request(456): https://repo.anaconda.com:443 "GET /pkgs/r/osx-arm64/repodata.json HTTP/1.1" 304 0
DEBUG conda.core.subdir_data:fetch_repodata_remote_request(530):
>>GET /pkgs/r/osx-arm64/repodata.json HTTPS
> User-Agent: conda/22.9.0 requests/2.28.1 CPython/3.9.15 Darwin/21.5.0 OSX/12.4
> Accept: application/json
> Accept-Encoding: gzip, deflate, compress, identity
> Connection: keep-alive
> If-Modified-Since: Fri, 19 Aug 2022 21:27:22 GMT
> If-None-Match: W/"bd18071599942dd824e1ec40e9d10873"
<<HTTPS 304 Not Modified
< Age: 70484
< Cache-Control: public, max-age=30
< CF-Cache-Status: HIT
< CF-RAY: 7729c306daf41899-EWR
< Date: Thu, 01 Dec 2022 06:28:17 GMT
< ETag: "bd18071599942dd824e1ec40e9d10873"
< Expires: Thu, 01 Dec 2022 06:28:47 GMT
< Last-Modified: Tue, 03 May 2022 07:57:20 GMT
< Server: cloudflare
< Set-Cookie: __cf_bm=JAZuB9kuZKjDvy1WXV.IHe1gCAnfEEZIs2uQ0YOh.nY-1669876097-0-AcbFb9BTBsrUintQD6w4DP0qheARYSHDdmj1IfpgfRRjTdJlAeFbjfAQhX/8gthx2K7VCEDYMRtjVnJGkgyCwUw=; path=/; expires=Thu, 01-Dec-22 06:58:17 GMT; domain=.anaconda.com; HttpOnly; Secure; SameSite=None
< Vary: Accept-Encoding
< x-amz-id-2: fY9HZBoI/58riYlxXBQEYfV8bKmFamLhAbhZ0J9i5X/O76LU6Xv7DmrLw/lA4gHYZlTeq3dMEDM=
< x-amz-request-id: YY7S26ZWVFSN8T0M
< x-amz-version-id: PxKW1ua_FXQgEcZyls93YYZW2aaplpSK
< Connection: keep-alive
< Elapsed: 00:00.524347
DEBUG urllib3.connectionpool:_make_request(456): https://repo.anaconda.com:443 "GET /pkgs/main/osx-arm64/repodata.json HTTP/1.1" 304 0
DEBUG conda.core.subdir_data:_load(292): 304 NOT MODIFIED for 'https://repo.anaconda.com/pkgs/r/osx-arm64/repodata.json'. Updating mtime and loading from disk
DEBUG urllib3.connectionpool:_make_request(456): https://repo.anaconda.com:443 "GET /pkgs/main/noarch/repodata.json HTTP/1.1" 304 0
TRACE conda.gateways.disk.update:touch(132): touching path /opt/homebrew/Caskroom/miniforge/base/pkgs/cache/091140c5.json
DEBUG urllib3.connectionpool:_make_request(456): https://repo.anaconda.com:443 "GET /pkgs/r/noarch/repodata.json HTTP/1.1" 304 0
DEBUG conda.core.subdir_data:fetch_repodata_remote_request(530):
>>GET /pkgs/main/osx-arm64/repodata.json HTTPS
> User-Agent: conda/22.9.0 requests/2.28.1 CPython/3.9.15 Darwin/21.5.0 OSX/12.4
> Accept: application/json
> Accept-Encoding: gzip, deflate, compress, identity
> Connection: keep-alive
> If-Modified-Since: Wed, 30 Nov 2022 19:20:01 GMT
> If-None-Match: W/"1d4ef7661a7ed1933c0a6c41ba4cc425"
<<HTTPS 304 Not Modified
< Age: 39940
< Cache-Control: public, max-age=30
< CF-Cache-Status: HIT
< CF-RAY: 7729c306dadbc431-EWR
< Date: Thu, 01 Dec 2022 06:28:17 GMT
< ETag: "1d4ef7661a7ed1933c0a6c41ba4cc425"
< Expires: Thu, 01 Dec 2022 06:28:47 GMT
< Last-Modified: Wed, 30 Nov 2022 19:20:01 GMT
< Server: cloudflare
< Set-Cookie: __cf_bm=7x27GyPAJ_UqkTJ5ao1cZqqBeayN2Vq_70IaoJpxzR8-1669876097-0-ATY8FjtZs8mdSSsf4qFxF152kwLQLqkVqwVancCZTg2qapMhP1+FEa/52Z+AOF4oswA3GiYn7Uix0Aw73CW0GRs=; path=/; expires=Thu, 01-Dec-22 06:58:17 GMT; domain=.anaconda.com; HttpOnly; Secure; SameSite=None
< Vary: Accept-Encoding
< x-amz-id-2: Uish9rt+JimvRr8GdFZh7QAY41ytOYotp0TpctIyZ20CIAjj/6MuY/CWhjDJpjKlCv1rRqREnwY=
< x-amz-request-id: JVFF0K2E15ZN38V9
< x-amz-version-id: NpiUwg7Fl941vKA3jucFV5n6mUv51EJP
< Connection: keep-alive
< Elapsed: 00:00.524693
TRACE conda.gateways.disk.update:touch(132): touching path /opt/homebrew/Caskroom/miniforge/base/pkgs/cache/8bd55712.json
DEBUG conda.core.subdir_data:fetch_repodata_remote_request(530):
>>GET /pkgs/main/noarch/repodata.json HTTPS
> User-Agent: conda/22.9.0 requests/2.28.1 CPython/3.9.15 Darwin/21.5.0 OSX/12.4
> Accept: application/json
> Accept-Encoding: gzip, deflate, compress, identity
> Connection: keep-alive
> If-Modified-Since: Tue, 29 Nov 2022 16:03:07 GMT
> If-None-Match: W/"6e04ba60f4112b8b66a702155b149789"
<<HTTPS 304 Not Modified
< Age: 138272
< Cache-Control: public, max-age=30
< CF-Cache-Status: HIT
< CF-RAY: 7729c306da4b8c45-EWR
< Date: Thu, 01 Dec 2022 06:28:17 GMT
< ETag: "6e04ba60f4112b8b66a702155b149789"
< Expires: Thu, 01 Dec 2022 06:28:47 GMT
< Last-Modified: Tue, 29 Nov 2022 16:03:07 GMT
< Server: cloudflare
< Set-Cookie: __cf_bm=8QKdusunJ3shgZum2CdhgDnvVSZar0.5gchEPAhnDBY-1669876097-0-AegybkAav3z1xItNVSkXgTOadb6BWGFIkf1h503nEQXjDZzXBwfpUPK6FUZ2NL6Lm1Vp5CzzYIbuW8VJDG2RDGo=; path=/; expires=Thu, 01-Dec-22 06:58:17 GMT; domain=.anaconda.com; HttpOnly; Secure; SameSite=None
< Vary: Accept-Encoding
< x-amz-id-2: XOcwXl6k5xoUe4CZu165pl5DxNRL491AX2Bo9oR0kKVB7C17o3jsWbDG9NiZYZJdTx5oBpaxcqA=
< x-amz-request-id: GVVB079XW40XCD4Q
< x-amz-version-id: Q4_RUh4DCA9G9p4t7FVjTf1kw3tFux..
< Connection: keep-alive
< Elapsed: 00:00.530062
DEBUG conda.core.subdir_data:fetch_repodata_remote_request(530):
>>GET /pkgs/r/noarch/repodata.json HTTPS
> User-Agent: conda/22.9.0 requests/2.28.1 CPython/3.9.15 Darwin/21.5.0 OSX/12.4
> Accept: application/json
> Accept-Encoding: gzip, deflate, compress, identity
> Connection: keep-alive
> If-Modified-Since: Fri, 28 Oct 2022 15:33:23 GMT
> If-None-Match: W/"93476d5e7aa8d3f8bc0c04afafc94d26"
<<HTTPS 304 Not Modified
< Age: 485667
< Cache-Control: public, max-age=30
< CF-Cache-Status: HIT
< CF-RAY: 7729c306dc0918d0-EWR
< Date: Thu, 01 Dec 2022 06:28:17 GMT
< ETag: "93476d5e7aa8d3f8bc0c04afafc94d26"
< Expires: Thu, 01 Dec 2022 06:28:47 GMT
< Last-Modified: Fri, 28 Oct 2022 15:33:23 GMT
< Server: cloudflare
< Set-Cookie: __cf_bm=RY6dkW_AH.MOyDqWGMA8yJp41UzE7C0EjdUA6z.qv2E-1669876097-0-AX/HkmV1qRZQ5J/UH1Z0z4SbLecq+BzVhaoJGmrI2vgoIR/bJv2tfkoNqHeerl3ZI2S6oqtRb8P0Gav35+nOgOY=; path=/; expires=Thu, 01-Dec-22 06:58:17 GMT; domain=.anaconda.com; HttpOnly; Secure; SameSite=None
< Vary: Accept-Encoding
< x-amz-id-2: D6P8KXSXC8gI7KOzlv0g5TO90T3ZSLUoRW6bdyxr5QPE9G0npKKYCVCJxA2sG2SUDPMQvTjcbxg=
< x-amz-request-id: PRT6QH04241S05D7
< x-amz-version-id: gruUyeXEAuhL5g34laDjUOasClLQRFQz
< Connection: keep-alive
< Elapsed: 00:00.531401
DEBUG conda.core.subdir_data:_load(292): 304 NOT MODIFIED for 'https://repo.anaconda.com/pkgs/main/osx-arm64/repodata.json'. Updating mtime and loading from disk
DEBUG conda.core.subdir_data:_load(292): 304 NOT MODIFIED for 'https://repo.anaconda.com/pkgs/main/noarch/repodata.json'. Updating mtime and loading from disk
DEBUG conda.core.subdir_data:_read_pickled(357): found pickle file /opt/homebrew/Caskroom/miniforge/base/pkgs/cache/091140c5.q
DEBUG conda.core.subdir_data:_load(292): 304 NOT MODIFIED for 'https://repo.anaconda.com/pkgs/r/noarch/repodata.json'. Updating mtime and loading from disk
TRACE conda.gateways.disk.update:touch(132): touching path /opt/homebrew/Caskroom/miniforge/base/pkgs/cache/9e99ffaf.json
TRACE conda.gateways.disk.update:touch(132): touching path /opt/homebrew/Caskroom/miniforge/base/pkgs/cache/4ea078d6.json
TRACE conda.gateways.disk.update:touch(132): touching path /opt/homebrew/Caskroom/miniforge/base/pkgs/cache/3e39a7aa.json
DEBUG conda.core.subdir_data:_read_pickled(357): found pickle file /opt/homebrew/Caskroom/miniforge/base/pkgs/cache/8bd55712.q
DEBUG conda.core.subdir_data:_read_pickled(357): found pickle file /opt/homebrew/Caskroom/miniforge/base/pkgs/cache/4ea078d6.q
DEBUG urllib3.connectionpool:_make_request(456): https://conda.anaconda.org:443 "GET /ranaroussi/noarch/repodata.json HTTP/1.1" 304 0
DEBUG conda.core.subdir_data:fetch_repodata_remote_request(530):
>>GET /ranaroussi/noarch/repodata.json HTTPS
> User-Agent: conda/22.9.0 requests/2.28.1 CPython/3.9.15 Darwin/21.5.0 OSX/12.4
> Accept: application/json
> Accept-Encoding: gzip, deflate, compress, identity
> Connection: keep-alive
> If-Modified-Since: Sat, 10 Jul 2021 20:29:09 GMT
<<HTTPS 304 Not Modified
< CF-Cache-Status: DYNAMIC
< CF-Ray: 7729c306eae4c44f-EWR
< Date: Thu, 01 Dec 2022 06:28:17 GMT
< Server: cloudflare
< Set-Cookie: __cf_bm=z8IMtaIACkUQIrN4AAyFQZdV13R.rBlauiHJyaXMD84-1669876097-0-AeYOO899tSB0BSkyr1RX+7rzFGl8/m2XUWEw0WLe4gaVuzuEzaTXes5vpeUhvoSew0ZVOGdKRcf404U90fpX+srJDKTNNlvNf+wjflTXFgmU; path=/; expires=Thu, 01-Dec-22 06:58:17 GMT; domain=.anaconda.org; HttpOnly; Secure; SameSite=None
< Strict-Transport-Security: max-age=31536000
< Vary: Accept-Encoding
< Connection: keep-alive
< Elapsed: 00:00.645770
DEBUG conda.core.subdir_data:_load(292): 304 NOT MODIFIED for 'https://conda.anaconda.org/ranaroussi/noarch/repodata.json'. Updating mtime and loading from disk
TRACE conda.gateways.disk.update:touch(132): touching path /opt/homebrew/Caskroom/miniforge/base/pkgs/cache/f2f1db2e.json
DEBUG conda.core.subdir_data:_read_pickled(357): found pickle file /opt/homebrew/Caskroom/miniforge/base/pkgs/cache/9e99ffaf.q
DEBUG conda.core.subdir_data:_read_pickled(375): Pickle load validation failed for https://repo.anaconda.com/pkgs/r/osx-arm64/repodata.json at /opt/homebrew/Caskroom/miniforge/base/pkgs/cache/8bd55712.json.
DEBUG conda.core.subdir_data:_read_pickled(357): found pickle file /opt/homebrew/Caskroom/miniforge/base/pkgs/cache/3e39a7aa.q
DEBUG conda.core.subdir_data:_read_local_repdata(330): Loading raw json for https://repo.anaconda.com/pkgs/r/osx-arm64/repodata.json at /opt/homebrew/Caskroom/miniforge/base/pkgs/cache/8bd55712.json
DEBUG conda.core.subdir_data:_read_pickled(357): found pickle file /opt/homebrew/Caskroom/miniforge/base/pkgs/cache/f2f1db2e.q
DEBUG conda.core.subdir_data:_pickle_me(316): Saving pickled state for https://repo.anaconda.com/pkgs/r/osx-arm64/repodata.json at /opt/homebrew/Caskroom/miniforge/base/pkgs/cache/8bd55712.q
done
No match found for: zipline-reloaded. Search: *zipline-reloaded*
Traceback (most recent call last):
File "/opt/homebrew/Caskroom/miniforge/base/lib/python3.9/site-packages/conda/exceptions.py", line 1129, in __call__
return func(*args, **kwargs)
File "/opt/homebrew/Caskroom/miniforge/base/lib/python3.9/site-packages/conda/cli/main.py", line 86, in main_subshell
exit_code = do_call(args, p)
File "/opt/homebrew/Caskroom/miniforge/base/lib/python3.9/site-packages/conda/cli/conda_argparse.py", line 93, in do_call
return getattr(module, func_name)(args, parser)
File "/opt/homebrew/Caskroom/miniforge/base/lib/python3.9/site-packages/conda/cli/main_search.py", line 89, in execute
raise PackagesNotFoundError((str(spec),), channels_urls)
conda.exceptions.PackagesNotFoundError: The following packages are not available from current channels:
This is a duplicate of this question, but none of the solutions outlined there solve the issue.
A:
You are trying to install onto a mac with an arm processor (see platform : osx-arm64 in the output of conda info). But https://anaconda.org/ml4t/zipline-reloaded shows the channel you are attempting to install from does not have an osx-arm64 build.
A:
Will Holtz was right in pointing out that the problem stems from my being on an M1 Mac and there not yet being a build of zipline-reloaded for M1 Macs. A full answer, however, would also give a resolution. The solution below, copied from here, did the trick for me:
CONDA_SUBDIR=osx-64 conda create -n [environment] # create a new environment
conda activate [environment]
conda env config vars set CONDA_SUBDIR=osx-64 # subsequent commands use intel packages
| Conda cannot find package despite package being listed on anaconda.org | I am trying to install zipline-reloaded using conda but am encountering a PackagesNotFoundError—this despite running the install command for zipline-reloaded provided on the package's page on anaconda.org. What might be going wrong here, and how can I resolve it?
My steps this far:
conda create -n zipline python=3.8
conda activate zipline
conda install -c ml4t zipline-reloaded (this is directly from the package's page on anaconda.org, linked above, but raises PackagesNotFoundError)
Note that trying to install, e.g., scipy, yields no similar error. Also, using mamba instead of conda leads to the same PackagesNotFoundError error.
The output of conda info is below:
active environment : zipline
active env location : /opt/homebrew/Caskroom/miniforge/base/envs/zipline
shell level : 1
user config file : /Users/name/.condarc
populated config files : /opt/homebrew/Caskroom/miniforge/base/.condarc
/Users/name/.condarc
conda version : 22.9.0
conda-build version : not installed
python version : 3.9.15.final.0
virtual packages : __osx=12.4=0
__unix=0=0
__archspec=1=arm64
base environment : /opt/homebrew/Caskroom/miniforge/base (writable)
conda av data dir : /opt/homebrew/Caskroom/miniforge/base/etc/conda
conda av metadata url : None
channel URLs : https://conda.anaconda.org/ml4t/osx-arm64
https://conda.anaconda.org/ml4t/noarch
https://conda.anaconda.org/conda-forge/osx-arm64
https://conda.anaconda.org/conda-forge/noarch
https://repo.anaconda.com/pkgs/main/osx-arm64
https://repo.anaconda.com/pkgs/main/noarch
https://repo.anaconda.com/pkgs/r/osx-arm64
https://repo.anaconda.com/pkgs/r/noarch
https://conda.anaconda.org/ranaroussi/osx-arm64
https://conda.anaconda.org/ranaroussi/noarch
package cache : /opt/homebrew/Caskroom/miniforge/base/pkgs
/Users/name/.conda/pkgs
envs directories : /opt/homebrew/Caskroom/miniforge/base/envs
/Users/name/.conda/envs
platform : osx-arm64
user-agent : conda/22.9.0 requests/2.28.1 CPython/3.9.15 Darwin/21.5.0 OSX/12.4
UID:GID : 501:20
netrc file : None
offline mode : False
I've tried running:
conda clean -all
conda clean --index-cache
conda update conda
I've also made sure to set offline mode to false (conda config --set offline false), which it is, per the output above.
Finally, running conda search -c ml4t zipline-reloaded -vvv outputs:
DEBUG conda.gateways.logging:set_verbosity(236): verbosity set to 3
Loading channels: ...working... TRACE conda.gateways.disk.test:file_path_is_writable(24): checking path is writable /opt/homebrew/Caskroom/miniforge/base/pkgs/urls.txt
DEBUG conda.core.package_cache_data:_check_writable(268): package cache directory '/opt/homebrew/Caskroom/miniforge/base/pkgs' writable: True
DEBUG conda.core.subdir_data:_load(273): Local cache timed out for https://conda.anaconda.org/ml4t/osx-arm64/repodata.json at /opt/homebrew/Caskroom/miniforge/base/pkgs/cache/b4814506.json
DEBUG conda.core.subdir_data:_load(273): Local cache timed out for https://conda.anaconda.org/ml4t/noarch/repodata.json at /opt/homebrew/Caskroom/miniforge/base/pkgs/cache/091140c5.json
DEBUG conda.core.subdir_data:_load(273): Local cache timed out for https://repo.anaconda.com/pkgs/r/noarch/repodata.json at /opt/homebrew/Caskroom/miniforge/base/pkgs/cache/4ea078d6.json
DEBUG conda.core.subdir_data:_load(273): Local cache timed out for https://conda.anaconda.org/ranaroussi/osx-arm64/repodata.json at /opt/homebrew/Caskroom/miniforge/base/pkgs/cache/639ecbbd.json
DEBUG conda.core.subdir_data:_load(267): Using cached repodata for https://conda.anaconda.org/conda-forge/noarch/repodata.json at /opt/homebrew/Caskroom/miniforge/base/pkgs/cache/09cdf8bf.json. Timeout in 42 sec
DEBUG conda.core.subdir_data:_load(273): Local cache timed out for https://repo.anaconda.com/pkgs/r/osx-arm64/repodata.json at /opt/homebrew/Caskroom/miniforge/base/pkgs/cache/8bd55712.json
DEBUG conda.core.subdir_data:_load(273): Local cache timed out for https://repo.anaconda.com/pkgs/main/noarch/repodata.json at /opt/homebrew/Caskroom/miniforge/base/pkgs/cache/3e39a7aa.json
DEBUG conda.core.subdir_data:_read_pickled(357): found pickle file /opt/homebrew/Caskroom/miniforge/base/pkgs/cache/09cdf8bf.q
DEBUG conda.core.subdir_data:_load(273): Local cache timed out for https://conda.anaconda.org/ranaroussi/noarch/repodata.json at /opt/homebrew/Caskroom/miniforge/base/pkgs/cache/f2f1db2e.json
DEBUG conda.core.subdir_data:_load(267): Using cached repodata for https://conda.anaconda.org/conda-forge/osx-arm64/repodata.json at /opt/homebrew/Caskroom/miniforge/base/pkgs/cache/a850f475.json. Timeout in 37 sec
DEBUG conda.core.subdir_data:_load(273): Local cache timed out for https://repo.anaconda.com/pkgs/main/osx-arm64/repodata.json at /opt/homebrew/Caskroom/miniforge/base/pkgs/cache/9e99ffaf.json
DEBUG conda.core.subdir_data:_read_pickled(357): found pickle file /opt/homebrew/Caskroom/miniforge/base/pkgs/cache/a850f475.q
DEBUG urllib3.connectionpool:_new_conn(1003): Starting new HTTPS connection (1): conda.anaconda.org:443
DEBUG urllib3.connectionpool:_new_conn(1003): Starting new HTTPS connection (1): conda.anaconda.org:443
DEBUG urllib3.connectionpool:_new_conn(1003): Starting new HTTPS connection (1): conda.anaconda.org:443
DEBUG urllib3.connectionpool:_new_conn(1003): Starting new HTTPS connection (1): conda.anaconda.org:443
DEBUG urllib3.connectionpool:_new_conn(1003): Starting new HTTPS connection (1): repo.anaconda.com:443
DEBUG urllib3.connectionpool:_new_conn(1003): Starting new HTTPS connection (1): repo.anaconda.com:443
DEBUG urllib3.connectionpool:_new_conn(1003): Starting new HTTPS connection (1): repo.anaconda.com:443
DEBUG urllib3.connectionpool:_new_conn(1003): Starting new HTTPS connection (1): repo.anaconda.com:443
DEBUG urllib3.connectionpool:_make_request(456): https://conda.anaconda.org:443 "GET /ml4t/osx-arm64/repodata.json HTTP/1.1" 304 0
DEBUG conda.core.subdir_data:fetch_repodata_remote_request(530):
>>GET /ml4t/osx-arm64/repodata.json HTTPS
> User-Agent: conda/22.9.0 requests/2.28.1 CPython/3.9.15 Darwin/21.5.0 OSX/12.4
> Accept: application/json
> Accept-Encoding: gzip, deflate, compress, identity
> Connection: keep-alive
> If-Modified-Since: Sun, 27 Nov 2022 23:15:46 GMT
<<HTTPS 304 Not Modified
< CF-Cache-Status: DYNAMIC
< CF-Ray: 7729c305af6cc3f0-EWR
< Date: Thu, 01 Dec 2022 06:28:16 GMT
< Server: cloudflare
< Set-Cookie: __cf_bm=5_iHfkEj0MHUfLTzo1_JU44eFH7k.fkG4MefCLHFwCs-1669876096-0-Afv4K+DTTGsbF/EJ55ZMcprcOQU03A9xjLx1fVCxk220/+qEj2KgM0D9KShLZZMNwWIGCF/SrfFT/0yOU8PYUb3c1sef5rFCHNw00EumeSIB; path=/; expires=Thu, 01-Dec-22 06:58:16 GMT; domain=.anaconda.org; HttpOnly; Secure; SameSite=None
< Strict-Transport-Security: max-age=31536000
< Vary: Accept-Encoding
< Connection: keep-alive
< Elapsed: 00:00.374252
DEBUG conda.core.subdir_data:_load(292): 304 NOT MODIFIED for 'https://conda.anaconda.org/ml4t/osx-arm64/repodata.json'. Updating mtime and loading from disk
TRACE conda.gateways.disk.update:touch(132): touching path /opt/homebrew/Caskroom/miniforge/base/pkgs/cache/b4814506.json
DEBUG urllib3.connectionpool:_make_request(456): https://conda.anaconda.org:443 "GET /ranaroussi/osx-arm64/repodata.json HTTP/1.1" 304 0
DEBUG conda.core.subdir_data:fetch_repodata_remote_request(530):
>>GET /ranaroussi/osx-arm64/repodata.json HTTPS
> User-Agent: conda/22.9.0 requests/2.28.1 CPython/3.9.15 Darwin/21.5.0 OSX/12.4
> Accept: application/json
> Accept-Encoding: gzip, deflate, compress, identity
> Connection: keep-alive
> If-Modified-Since: Sat, 10 Jul 2021 20:29:09 GMT
<<HTTPS 304 Not Modified
< CF-Cache-Status: DYNAMIC
< CF-Ray: 7729c305b931c436-EWR
< Date: Thu, 01 Dec 2022 06:28:16 GMT
< Server: cloudflare
< Set-Cookie: __cf_bm=L8JbFm6EySvGhs3b0c54CFobV5MnnDkLYszHBXNbkOE-1669876096-0-AXqlxjWUQxlASGQq2FDCPHAie99ZATvZmSa+RepV64uBbnR2GAeRM3oe+VMrkj23RXFUNlC8xP/j689pu+kPxuVwr5M1u96nKabCLWSujD5q; path=/; expires=Thu, 01-Dec-22 06:58:16 GMT; domain=.anaconda.org; HttpOnly; Secure; SameSite=None
< Strict-Transport-Security: max-age=31536000
< Vary: Accept-Encoding
< Connection: keep-alive
< Elapsed: 00:00.449782
DEBUG conda.core.subdir_data:_load(292): 304 NOT MODIFIED for 'https://conda.anaconda.org/ranaroussi/osx-arm64/repodata.json'. Updating mtime and loading from disk
TRACE conda.gateways.disk.update:touch(132): touching path /opt/homebrew/Caskroom/miniforge/base/pkgs/cache/639ecbbd.json
DEBUG conda.core.subdir_data:_read_pickled(357): found pickle file /opt/homebrew/Caskroom/miniforge/base/pkgs/cache/b4814506.q
DEBUG conda.core.subdir_data:_read_pickled(357): found pickle file /opt/homebrew/Caskroom/miniforge/base/pkgs/cache/639ecbbd.q
DEBUG urllib3.connectionpool:_make_request(456): https://conda.anaconda.org:443 "GET /ml4t/noarch/repodata.json HTTP/1.1" 304 0
DEBUG conda.core.subdir_data:fetch_repodata_remote_request(530):
>>GET /ml4t/noarch/repodata.json HTTPS
> User-Agent: conda/22.9.0 requests/2.28.1 CPython/3.9.15 Darwin/21.5.0 OSX/12.4
> Accept: application/json
> Accept-Encoding: gzip, deflate, compress, identity
> Connection: keep-alive
> If-Modified-Since: Sun, 27 Nov 2022 23:15:46 GMT
<<HTTPS 304 Not Modified
< CF-Cache-Status: DYNAMIC
< CF-Ray: 7729c306acba17a5-EWR
< Date: Thu, 01 Dec 2022 06:28:17 GMT
< Server: cloudflare
< Set-Cookie: __cf_bm=OUlfjhiwUIls5GRrkrvx_gSvGNQ8jSFq4C4FQHuUEkI-1669876097-0-AeZAUF8zYwue32HP8rVUfV7ws8cYe8P/MZIXlS5tWbRONDfIemqH4Ze4eE1R6UuAdqaqqw/Rh2jJscMkBWdApRdTtKFIYda2yA5HJ44Urw+N; path=/; expires=Thu, 01-Dec-22 06:58:17 GMT; domain=.anaconda.org; HttpOnly; Secure; SameSite=None
< Strict-Transport-Security: max-age=31536000
< Vary: Accept-Encoding
< Connection: keep-alive
< Elapsed: 00:00.531916
DEBUG conda.core.subdir_data:_load(292): 304 NOT MODIFIED for 'https://conda.anaconda.org/ml4t/noarch/repodata.json'. Updating mtime and loading from disk
DEBUG urllib3.connectionpool:_make_request(456): https://repo.anaconda.com:443 "GET /pkgs/r/osx-arm64/repodata.json HTTP/1.1" 304 0
DEBUG conda.core.subdir_data:fetch_repodata_remote_request(530):
>>GET /pkgs/r/osx-arm64/repodata.json HTTPS
> User-Agent: conda/22.9.0 requests/2.28.1 CPython/3.9.15 Darwin/21.5.0 OSX/12.4
> Accept: application/json
> Accept-Encoding: gzip, deflate, compress, identity
> Connection: keep-alive
> If-Modified-Since: Fri, 19 Aug 2022 21:27:22 GMT
> If-None-Match: W/"bd18071599942dd824e1ec40e9d10873"
<<HTTPS 304 Not Modified
< Age: 70484
< Cache-Control: public, max-age=30
< CF-Cache-Status: HIT
< CF-RAY: 7729c306daf41899-EWR
< Date: Thu, 01 Dec 2022 06:28:17 GMT
< ETag: "bd18071599942dd824e1ec40e9d10873"
< Expires: Thu, 01 Dec 2022 06:28:47 GMT
< Last-Modified: Tue, 03 May 2022 07:57:20 GMT
< Server: cloudflare
< Set-Cookie: __cf_bm=JAZuB9kuZKjDvy1WXV.IHe1gCAnfEEZIs2uQ0YOh.nY-1669876097-0-AcbFb9BTBsrUintQD6w4DP0qheARYSHDdmj1IfpgfRRjTdJlAeFbjfAQhX/8gthx2K7VCEDYMRtjVnJGkgyCwUw=; path=/; expires=Thu, 01-Dec-22 06:58:17 GMT; domain=.anaconda.com; HttpOnly; Secure; SameSite=None
< Vary: Accept-Encoding
< x-amz-id-2: fY9HZBoI/58riYlxXBQEYfV8bKmFamLhAbhZ0J9i5X/O76LU6Xv7DmrLw/lA4gHYZlTeq3dMEDM=
< x-amz-request-id: YY7S26ZWVFSN8T0M
< x-amz-version-id: PxKW1ua_FXQgEcZyls93YYZW2aaplpSK
< Connection: keep-alive
< Elapsed: 00:00.524347
DEBUG urllib3.connectionpool:_make_request(456): https://repo.anaconda.com:443 "GET /pkgs/main/osx-arm64/repodata.json HTTP/1.1" 304 0
DEBUG conda.core.subdir_data:_load(292): 304 NOT MODIFIED for 'https://repo.anaconda.com/pkgs/r/osx-arm64/repodata.json'. Updating mtime and loading from disk
DEBUG urllib3.connectionpool:_make_request(456): https://repo.anaconda.com:443 "GET /pkgs/main/noarch/repodata.json HTTP/1.1" 304 0
TRACE conda.gateways.disk.update:touch(132): touching path /opt/homebrew/Caskroom/miniforge/base/pkgs/cache/091140c5.json
DEBUG urllib3.connectionpool:_make_request(456): https://repo.anaconda.com:443 "GET /pkgs/r/noarch/repodata.json HTTP/1.1" 304 0
DEBUG conda.core.subdir_data:fetch_repodata_remote_request(530):
>>GET /pkgs/main/osx-arm64/repodata.json HTTPS
> User-Agent: conda/22.9.0 requests/2.28.1 CPython/3.9.15 Darwin/21.5.0 OSX/12.4
> Accept: application/json
> Accept-Encoding: gzip, deflate, compress, identity
> Connection: keep-alive
> If-Modified-Since: Wed, 30 Nov 2022 19:20:01 GMT
> If-None-Match: W/"1d4ef7661a7ed1933c0a6c41ba4cc425"
<<HTTPS 304 Not Modified
< Age: 39940
< Cache-Control: public, max-age=30
< CF-Cache-Status: HIT
< CF-RAY: 7729c306dadbc431-EWR
< Date: Thu, 01 Dec 2022 06:28:17 GMT
< ETag: "1d4ef7661a7ed1933c0a6c41ba4cc425"
< Expires: Thu, 01 Dec 2022 06:28:47 GMT
< Last-Modified: Wed, 30 Nov 2022 19:20:01 GMT
< Server: cloudflare
< Set-Cookie: __cf_bm=7x27GyPAJ_UqkTJ5ao1cZqqBeayN2Vq_70IaoJpxzR8-1669876097-0-ATY8FjtZs8mdSSsf4qFxF152kwLQLqkVqwVancCZTg2qapMhP1+FEa/52Z+AOF4oswA3GiYn7Uix0Aw73CW0GRs=; path=/; expires=Thu, 01-Dec-22 06:58:17 GMT; domain=.anaconda.com; HttpOnly; Secure; SameSite=None
< Vary: Accept-Encoding
< x-amz-id-2: Uish9rt+JimvRr8GdFZh7QAY41ytOYotp0TpctIyZ20CIAjj/6MuY/CWhjDJpjKlCv1rRqREnwY=
< x-amz-request-id: JVFF0K2E15ZN38V9
< x-amz-version-id: NpiUwg7Fl941vKA3jucFV5n6mUv51EJP
< Connection: keep-alive
< Elapsed: 00:00.524693
TRACE conda.gateways.disk.update:touch(132): touching path /opt/homebrew/Caskroom/miniforge/base/pkgs/cache/8bd55712.json
DEBUG conda.core.subdir_data:fetch_repodata_remote_request(530):
>>GET /pkgs/main/noarch/repodata.json HTTPS
> User-Agent: conda/22.9.0 requests/2.28.1 CPython/3.9.15 Darwin/21.5.0 OSX/12.4
> Accept: application/json
> Accept-Encoding: gzip, deflate, compress, identity
> Connection: keep-alive
> If-Modified-Since: Tue, 29 Nov 2022 16:03:07 GMT
> If-None-Match: W/"6e04ba60f4112b8b66a702155b149789"
<<HTTPS 304 Not Modified
< Age: 138272
< Cache-Control: public, max-age=30
< CF-Cache-Status: HIT
< CF-RAY: 7729c306da4b8c45-EWR
< Date: Thu, 01 Dec 2022 06:28:17 GMT
< ETag: "6e04ba60f4112b8b66a702155b149789"
< Expires: Thu, 01 Dec 2022 06:28:47 GMT
< Last-Modified: Tue, 29 Nov 2022 16:03:07 GMT
< Server: cloudflare
< Set-Cookie: __cf_bm=8QKdusunJ3shgZum2CdhgDnvVSZar0.5gchEPAhnDBY-1669876097-0-AegybkAav3z1xItNVSkXgTOadb6BWGFIkf1h503nEQXjDZzXBwfpUPK6FUZ2NL6Lm1Vp5CzzYIbuW8VJDG2RDGo=; path=/; expires=Thu, 01-Dec-22 06:58:17 GMT; domain=.anaconda.com; HttpOnly; Secure; SameSite=None
< Vary: Accept-Encoding
< x-amz-id-2: XOcwXl6k5xoUe4CZu165pl5DxNRL491AX2Bo9oR0kKVB7C17o3jsWbDG9NiZYZJdTx5oBpaxcqA=
< x-amz-request-id: GVVB079XW40XCD4Q
< x-amz-version-id: Q4_RUh4DCA9G9p4t7FVjTf1kw3tFux..
< Connection: keep-alive
< Elapsed: 00:00.530062
DEBUG conda.core.subdir_data:fetch_repodata_remote_request(530):
>>GET /pkgs/r/noarch/repodata.json HTTPS
> User-Agent: conda/22.9.0 requests/2.28.1 CPython/3.9.15 Darwin/21.5.0 OSX/12.4
> Accept: application/json
> Accept-Encoding: gzip, deflate, compress, identity
> Connection: keep-alive
> If-Modified-Since: Fri, 28 Oct 2022 15:33:23 GMT
> If-None-Match: W/"93476d5e7aa8d3f8bc0c04afafc94d26"
<<HTTPS 304 Not Modified
< Age: 485667
< Cache-Control: public, max-age=30
< CF-Cache-Status: HIT
< CF-RAY: 7729c306dc0918d0-EWR
< Date: Thu, 01 Dec 2022 06:28:17 GMT
< ETag: "93476d5e7aa8d3f8bc0c04afafc94d26"
< Expires: Thu, 01 Dec 2022 06:28:47 GMT
< Last-Modified: Fri, 28 Oct 2022 15:33:23 GMT
< Server: cloudflare
< Set-Cookie: __cf_bm=RY6dkW_AH.MOyDqWGMA8yJp41UzE7C0EjdUA6z.qv2E-1669876097-0-AX/HkmV1qRZQ5J/UH1Z0z4SbLecq+BzVhaoJGmrI2vgoIR/bJv2tfkoNqHeerl3ZI2S6oqtRb8P0Gav35+nOgOY=; path=/; expires=Thu, 01-Dec-22 06:58:17 GMT; domain=.anaconda.com; HttpOnly; Secure; SameSite=None
< Vary: Accept-Encoding
< x-amz-id-2: D6P8KXSXC8gI7KOzlv0g5TO90T3ZSLUoRW6bdyxr5QPE9G0npKKYCVCJxA2sG2SUDPMQvTjcbxg=
< x-amz-request-id: PRT6QH04241S05D7
< x-amz-version-id: gruUyeXEAuhL5g34laDjUOasClLQRFQz
< Connection: keep-alive
< Elapsed: 00:00.531401
DEBUG conda.core.subdir_data:_load(292): 304 NOT MODIFIED for 'https://repo.anaconda.com/pkgs/main/osx-arm64/repodata.json'. Updating mtime and loading from disk
DEBUG conda.core.subdir_data:_load(292): 304 NOT MODIFIED for 'https://repo.anaconda.com/pkgs/main/noarch/repodata.json'. Updating mtime and loading from disk
DEBUG conda.core.subdir_data:_read_pickled(357): found pickle file /opt/homebrew/Caskroom/miniforge/base/pkgs/cache/091140c5.q
DEBUG conda.core.subdir_data:_load(292): 304 NOT MODIFIED for 'https://repo.anaconda.com/pkgs/r/noarch/repodata.json'. Updating mtime and loading from disk
TRACE conda.gateways.disk.update:touch(132): touching path /opt/homebrew/Caskroom/miniforge/base/pkgs/cache/9e99ffaf.json
TRACE conda.gateways.disk.update:touch(132): touching path /opt/homebrew/Caskroom/miniforge/base/pkgs/cache/4ea078d6.json
TRACE conda.gateways.disk.update:touch(132): touching path /opt/homebrew/Caskroom/miniforge/base/pkgs/cache/3e39a7aa.json
DEBUG conda.core.subdir_data:_read_pickled(357): found pickle file /opt/homebrew/Caskroom/miniforge/base/pkgs/cache/8bd55712.q
DEBUG conda.core.subdir_data:_read_pickled(357): found pickle file /opt/homebrew/Caskroom/miniforge/base/pkgs/cache/4ea078d6.q
DEBUG urllib3.connectionpool:_make_request(456): https://conda.anaconda.org:443 "GET /ranaroussi/noarch/repodata.json HTTP/1.1" 304 0
DEBUG conda.core.subdir_data:fetch_repodata_remote_request(530):
>>GET /ranaroussi/noarch/repodata.json HTTPS
> User-Agent: conda/22.9.0 requests/2.28.1 CPython/3.9.15 Darwin/21.5.0 OSX/12.4
> Accept: application/json
> Accept-Encoding: gzip, deflate, compress, identity
> Connection: keep-alive
> If-Modified-Since: Sat, 10 Jul 2021 20:29:09 GMT
<<HTTPS 304 Not Modified
< CF-Cache-Status: DYNAMIC
< CF-Ray: 7729c306eae4c44f-EWR
< Date: Thu, 01 Dec 2022 06:28:17 GMT
< Server: cloudflare
< Set-Cookie: __cf_bm=z8IMtaIACkUQIrN4AAyFQZdV13R.rBlauiHJyaXMD84-1669876097-0-AeYOO899tSB0BSkyr1RX+7rzFGl8/m2XUWEw0WLe4gaVuzuEzaTXes5vpeUhvoSew0ZVOGdKRcf404U90fpX+srJDKTNNlvNf+wjflTXFgmU; path=/; expires=Thu, 01-Dec-22 06:58:17 GMT; domain=.anaconda.org; HttpOnly; Secure; SameSite=None
< Strict-Transport-Security: max-age=31536000
< Vary: Accept-Encoding
< Connection: keep-alive
< Elapsed: 00:00.645770
DEBUG conda.core.subdir_data:_load(292): 304 NOT MODIFIED for 'https://conda.anaconda.org/ranaroussi/noarch/repodata.json'. Updating mtime and loading from disk
TRACE conda.gateways.disk.update:touch(132): touching path /opt/homebrew/Caskroom/miniforge/base/pkgs/cache/f2f1db2e.json
DEBUG conda.core.subdir_data:_read_pickled(357): found pickle file /opt/homebrew/Caskroom/miniforge/base/pkgs/cache/9e99ffaf.q
DEBUG conda.core.subdir_data:_read_pickled(375): Pickle load validation failed for https://repo.anaconda.com/pkgs/r/osx-arm64/repodata.json at /opt/homebrew/Caskroom/miniforge/base/pkgs/cache/8bd55712.json.
DEBUG conda.core.subdir_data:_read_pickled(357): found pickle file /opt/homebrew/Caskroom/miniforge/base/pkgs/cache/3e39a7aa.q
DEBUG conda.core.subdir_data:_read_local_repdata(330): Loading raw json for https://repo.anaconda.com/pkgs/r/osx-arm64/repodata.json at /opt/homebrew/Caskroom/miniforge/base/pkgs/cache/8bd55712.json
DEBUG conda.core.subdir_data:_read_pickled(357): found pickle file /opt/homebrew/Caskroom/miniforge/base/pkgs/cache/f2f1db2e.q
DEBUG conda.core.subdir_data:_pickle_me(316): Saving pickled state for https://repo.anaconda.com/pkgs/r/osx-arm64/repodata.json at /opt/homebrew/Caskroom/miniforge/base/pkgs/cache/8bd55712.q
done
No match found for: zipline-reloaded. Search: *zipline-reloaded*
Traceback (most recent call last):
File "/opt/homebrew/Caskroom/miniforge/base/lib/python3.9/site-packages/conda/exceptions.py", line 1129, in __call__
return func(*args, **kwargs)
File "/opt/homebrew/Caskroom/miniforge/base/lib/python3.9/site-packages/conda/cli/main.py", line 86, in main_subshell
exit_code = do_call(args, p)
File "/opt/homebrew/Caskroom/miniforge/base/lib/python3.9/site-packages/conda/cli/conda_argparse.py", line 93, in do_call
return getattr(module, func_name)(args, parser)
File "/opt/homebrew/Caskroom/miniforge/base/lib/python3.9/site-packages/conda/cli/main_search.py", line 89, in execute
raise PackagesNotFoundError((str(spec),), channels_urls)
conda.exceptions.PackagesNotFoundError: The following packages are not available from current channels:
This is a duplicate of this question, but none of the solutions outlined there solve the issue.
| [
"You are trying to install onto a mac with an arm processor (see platform : osx-arm64 in the output of conda info). But https://anaconda.org/ml4t/zipline-reloaded shows the channel you are attempting to install from does not have an osx-arm64 build.\n",
"Will Holtz was right in pointing out that the problem stems from my being on an M1 Mac and there not yet being a build of zipline-reloaded for M1 Macs. A full answer, however, would also give a resolution. The solution below, copied from here, did the trick for me:\nCONDA_SUBDIR=osx-64 conda create -n [environment] # create a new environment\nconda activate [environment]\nconda env config vars set CONDA_SUBDIR=osx-64 # subsequent commands use intel packages\n\n"
] | [
1,
0
] | [] | [] | [
"anaconda",
"conda",
"mini_forge",
"python"
] | stackoverflow_0074637834_anaconda_conda_mini_forge_python.txt |
Q:
list of dictionaries in another list and sort it according to date without using sort
I have lists of dictionaries nested inside another list, and I want to sort those inner lists according to the date, but I can't use the sort function and I don't know how to access the date inside the nested lists (maybe the date is not stored in the correct way).
I WANT TO KNOW HOW TO SORT SOME THING LIKE THIS OR HOW TO GET ACCESS TO THE "DATE"
dr = [
[{"name": "Tom", "age": 10,"group":"sdd","points":2,"date":"2022 3 10"},
{"name": "Mark", "age": 5,"group":"sdo","points":6,"date":"2022 3 10"},
{"name": "Pam", "age": 7,"group":"spp","points":4,"date":"2022 3 10"}],
[{"name": "Tom", "age": 10,"group":"sdd","points":5,"date":"2022 4 12"},
{"name": "Mark", "age": 5,"group":"sdo","points":6,"date":"2022 4 12"},
{"name": "Pam", "age": 7,"group":"spp","points":6,"date":"2022 4 12"}],
[{"name": "Tom", "age": 10,"group":"sdd","points":8,"date":"2022 1 10"},
{"name": "Mark", "age": 5,"group":"sdo","points":12,"date":"2022 1 10"},
{"name": "Pam", "age": 7,"group":"spp","points":6,"date":"2022 1 10"}],
]
I tried like this, but it doesn't work
for j in range(len(dr)):
for k in range(j+1,len(dr)):
if dr[j]["date"] < dr[k]["date"]:
dr[j],dr[k]=dr[k],dr[j]
after sorting it should be like this,
dr = [
[{"name": "Tom", "age": 10,"group":"sdd","points":8,"date":"2022 1 10"},
{"name": "Mark", "age": 5,"group":"sdo","points":12,"date":"2022 1 10"},
{"name": "Pam", "age": 7,"group":"spp","points":6,"date":"2022 1 10"}],
[{"name": "Tom", "age": 10,"group":"sdd","points":2,"date":"2022 3 10"},
{"name": "Mark", "age": 5,"group":"sdo","points":6,"date":"2022 3 10"},
{"name": "Pam", "age": 7,"group":"spp","points":4,"date":"2022 3 10"}],
[{"name": "Tom", "age": 10,"group":"sdd","points":5,"date":"2022 4 12"},
{"name": "Mark", "age": 5,"group":"sdo","points":6,"date":"2022 4 12"},
{"name": "Pam", "age": 7,"group":"spp","points":6,"date":"2022 4 12"}]
]
A:
If the "date" key was actually a date (which currently isn't), this would have worked:
for j in range(len(dr)):
for k in range(j + 1, len(dr)):
if dr[j][0]["date"] < dr[k][0]["date"]:
dr[j], dr[k] = dr[k], dr[j]
Your problem is, because dr is a list of list of dictionaries (i.e. not a list of dictionaries) you have to compare the first date in the list (assuming that all dates within one sublist is equal).
A:
You must be getting an error from attempting to reference a date key from the inner list objects. What you want to do is first grab the first dictionary from each inner list, using [0], and then reference and compare the date key within that dictionary. Also, your use of < will give you a reverse sort. Use > to get the order you show being what you desire. This works:
for j in range(len(dr)):
for k in range(j+1,len(dr)):
if dr[j][0]["date"] > dr[k][0]["date"]:
dr[j],dr[k]=dr[k],dr[j]
Note that this only works after adding quotes around your date strings, and then only works because those strings happen to do the right thing when compared as simple strings. For example, if you had a 2 digit month, I think your sorting wouldn't be right. You should really have datetime objects in your structure and be comparing those.
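As a minimal sketch of that last point (assuming the "date" strings keep the "YYYY M D" layout shown in the question), you can parse each sublist's shared date once and compare the parsed values instead of the raw strings:
from datetime import datetime

def parse_date(sublist):
    # parse the date shared by one inner list, e.g. "2022 3 10"
    return datetime.strptime(sublist[0]["date"], "%Y %m %d")

for j in range(len(dr)):
    for k in range(j + 1, len(dr)):
        if parse_date(dr[j]) > parse_date(dr[k]):
            dr[j], dr[k] = dr[k], dr[j]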
| list of dictionaries in another list and sort it according to date without using sort | I have lists of dictionaries nested inside another list, and I want to sort those inner lists according to the date, but I can't use the sort function and I don't know how to access the date inside the nested lists (maybe the date is not stored in the correct way).
I WANT TO KNOW HOW TO SORT SOME THING LIKE THIS OR HOW TO GET ACCESS TO THE "DATE"
dr = [
[{"name": "Tom", "age": 10,"group":"sdd","points":2,"date":"2022 3 10"},
{"name": "Mark", "age": 5,"group":"sdo","points":6,"date":"2022 3 10"},
{"name": "Pam", "age": 7,"group":"spp","points":4,"date":"2022 3 10"}],
[{"name": "Tom", "age": 10,"group":"sdd","points":5,"date":"2022 4 12"},
{"name": "Mark", "age": 5,"group":"sdo","points":6,"date":"2022 4 12"},
{"name": "Pam", "age": 7,"group":"spp","points":6,"date":"2022 4 12"}],
[{"name": "Tom", "age": 10,"group":"sdd","points":8,"date":"2022 1 10"},
{"name": "Mark", "age": 5,"group":"sdo","points":12,"date":"2022 1 10"},
{"name": "Pam", "age": 7,"group":"spp","points":6,"date":"2022 1 10"}],
]
I tried like this, but it doesn't work
for j in range(len(dr)):
for k in range(j+1,len(dr)):
if dr[j]["date"] < dr[k]["date"]:
dr[j],dr[k]=dr[k],dr[j]
after sorting it should be like this,
dr = [
[{"name": "Tom", "age": 10,"group":"sdd","points":8,"date":"2022 1 10"},
{"name": "Mark", "age": 5,"group":"sdo","points":12,"date":"2022 1 10"},
{"name": "Pam", "age": 7,"group":"spp","points":6,"date":"2022 1 10"}],
[{"name": "Tom", "age": 10,"group":"sdd","points":2,"date":"2022 3 10"},
{"name": "Mark", "age": 5,"group":"sdo","points":6,"date":"2022 3 10"},
{"name": "Pam", "age": 7,"group":"spp","points":4,"date":"2022 3 10"}],
[{"name": "Tom", "age": 10,"group":"sdd","points":5,"date":"2022 4 12"},
{"name": "Mark", "age": 5,"group":"sdo","points":6,"date":"2022 4 12"},
{"name": "Pam", "age": 7,"group":"spp","points":6,"date":"2022 4 12"}]
]
| [
"If the \"date\" key was actually a date (which currently isn't), this would have worked:\nfor j in range(len(dr)):\n for k in range(j + 1, len(dr)):\n if dr[j][0][\"date\"] < dr[k][0][\"date\"]:\n dr[j], dr[k] = dr[k], dr[j]\n\nYour problem is, because dr is a list of list of dictionaries (i.e. not a list of dictionaries) you have to compare the first date in the list (assuming that all dates within one sublist is equal).\n",
"You must be getting an error from attempting to reference a date key from the inner list objects. What you want to do is first grab the first dictionary from each inner list, using [0], and then reference and compare the date key within that dictionary. Also, your use of < will give you a reverse sort. Use > to get the order you show being what you desire. This works:\nfor j in range(len(dr)):\n for k in range(j+1,len(dr)):\n if dr[j][0][\"date\"] > dr[k][0][\"date\"]:\n dr[j],dr[k]=dr[k],dr[j]\n\nNote that this only works after adding quotes around your date strings, and then only works because those strings happen to do the right thing when compared as simple strings. For example, if you had a 2 digit month, I think your sorting wouldn't be right. You should really have datetime objects in your structure and be comparing those.\n"
] | [
0,
0
] | [] | [] | [
"dictionary",
"list",
"python"
] | stackoverflow_0074650675_dictionary_list_python.txt |
Q:
Is there a way to print a list of online members of a channel with a Discord Bot?
I am trying to create a function of my discord bot where on a command it prints the names of online members in a specific channel to the chat. I can get the bot to print all members of a channel but cannot get it to isolate only the online members.
My current code is thus
linkchannel = int(message.channel.topic)
channel = client.get_channel(linkchannel)
members = channel.members
names = [] #(list)
for member in members:
if member.Status == discord.client.Status.online:
names.append(member.name)
print(names)
await message.channel.send(names)
it returns the error 'Member has no attribute Status' despite the docs stating it does. It has also previously failed to identify discord.Status as a valid path despite the documentation stating it is. Any help would be appreciated. My bot has access to all permissions including all privileged gateway intents
A:
You need member intents for this to function.
For more information on how to enable member intents, read the official documentation, or this answer that explains it quite clearly.
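As a rough sketch of what that looks like in code (assuming discord.py 2.x; the command trigger and token are placeholders, not from the original post), both the members and presences intents have to be enabled in the developer portal and passed to the client, and the attribute is the lowercase member.status:
import discord

intents = discord.Intents.default()
intents.members = True    # needed for channel.members to be populated
intents.presences = True  # needed for member.status to reflect real presence

client = discord.Client(intents=intents)

@client.event
async def on_message(message):
    if message.content == "!online":
        names = [m.name for m in message.channel.members
                 if m.status == discord.Status.online]
        await message.channel.send(", ".join(names) or "Nobody is online.")

# client.run("YOUR_BOT_TOKEN")  # token is a placeholder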
| Is there a way to print a list of online members of a channel with a Discord Bot? | I am trying to create a function of my discord bot where on a command it prints the names of online members in a specific channel to the chat. I can get the bot to print all members of a channel but cannot get it to isolate only the online members.
My current code is thus
linkchannel = int(message.channel.topic)
channel = client.get_channel(linkchannel)
members = channel.members
names = [] #(list)
for member in members:
if member.Status == discord.client.Status.online:
names.append(member.name)
print(names)
await message.channel.send(names)
it returns the error 'Member has no attribute Status' despite the docs stating it does. It has also previously failed to identify discord.Status as a valid path despite the documentation stating it is. Any help would be appreciated. My bot has access to all permissions including all privileged gateway intents
| [
"You need member intents for this to function.\nFor more information on how to enable member intents, read the official documentation, or this answer that explains it quite clearly.\n"
] | [
0
] | [] | [] | [
"discord",
"discord.py",
"python"
] | stackoverflow_0074540901_discord_discord.py_python.txt |
Q:
Initialize a superclass with an existing object (copy constructor)
Preface: From my understanding the existing answers to this question assume control over the source or work around the problem.
Given a Super class, and MyClass which derives from it: How can an existing instance of a Super class be used as a base? The goal is to not call super().__init__() with fields from existing_super, but use the existing object. Similar to how in C++ a copy constructor would be used. Only MyClass can be adapted, Super has to remain unchanged.
class Super:
def __init__(self, constructed, by, the, super_factory):
""" no `existing_super` accepted """
pass
class MyClass(Super):
def __init__(self, existing_super):
""" ????? """
pass
s = super_factory()
mine = MyClass(s)
If this is not possible, would monkey patching Super help? What if Super uses/doesn't use slots?
A:
One way to approach it is to convert your instance of the super class into an instance of the subclass using the __class__ attribute.
class A:
def __init__(self):
self.A_attribute = 'from_A'
class B(A):
def __init__(self):
#Create an instance of B from scratch
self.B_attribute = 'from_B'
super().__init__()
def init_A(self):
#Initializes an instance of B from what is already an instance of A
self.B_attribute = 'from_B'
myA = A()
myA.__class__ = B
myA.init_A()
| Initialize a superclass with an existing object (copy constructor) | Preface: From my understanding the existing answers to this question assume control over the source or work around the problem.
Given a Super class, and MyClass which derives from it: How can an existing instance of a Super class be used as a base? The goal is to not call super().__init__() with fields from existing_super, but use the existing object. Similar to how in C++ a copy constructor would be used. Only MyClass can be adapted, Super has to remain unchanged.
class Super:
def __init__(self, constructed, by, the, super_factory):
""" no `existing_super` accepted """
pass
class MyClass(Super):
def __init__(self, existing_super):
""" ????? """
pass
s = super_factory()
mine = MyClass(s)
If this is not possible, would monkey patching Super help? What if Super uses/doesn't use slots?
| [
"One way to approach it is to convert your instance of the super class into an instance of the subclass using the __class__ attribute.\nclass A:\n def __init__(self):\n self.A_attribute = 'from_A'\n\n\nclass B(A):\n def __init__(self):\n #Create an instance of B from scratch\n self.B_attribute = 'from_B'\n super().__init__()\n\n def init_A(self):\n #Initializes an instance of B from what is already an instance of A\n self.B_attribute = 'from_B'\n\nmyA = A()\nmyA.__class__ = B\nmyA.init_A()\n\n"
] | [
0
] | [] | [] | [
"constructor",
"inheritance",
"python",
"super"
] | stackoverflow_0071209560_constructor_inheritance_python_super.txt |
Q:
How do I divide the hourly data into 5-minute intervals and ensure the records are the same for each hour?
I received data similar to this format
Time Humidity Condition
2014-09-01 00:00:00 84 Cloudy
2014-09-01 01:00:00 94 Rainy
I tried to use df.resample('5T')
but it seems the data cannot be replicated within the same hour, and df.resample('5T') needs an aggregation function like mean(), which I do not want.
I tried to do it this way
But the problem is... I don't want to use 'mean', because it does not keep "Humidity" and "Condition" as original. I just want the data be
Time Humidity Condition
2014-09-01 00:00:00 84 Cloudy
2014-09-01 00:05:00 84 Cloudy
2014-09-01 00:10:00 84 Cloudy
.
.
.
2014-09-01 00:55:00 84 Cloudy
2014-09-01 01:00:00 94 Rainy
2014-09-01 01:05:00 94 Rainy
2014-09-01 01:10:00 94 Rainy
.
.
.
I wonder if there is a way out; could I ask if there is any solution to this issue? Many thanks!
A:
Example
data = {'Time': {0: '2014-09-01 00:00:00', 1: '2014-09-01 01:00:00'},
'Humidity': {0: 84, 1: 94},
'Condition': {0: 'Cloudy', 1: 'Rainy'}}
df = pd.DataFrame(data)
df
Time Humidity Condition
0 2014-09-01 00:00:00 84 Cloudy
1 2014-09-01 01:00:00 94 Rainy
Code
i make code for 20T instead 5T, becuz 5T is too short.
(df.set_axis(pd.to_datetime(df['Time']))
.reindex(pd.date_range(df['Time'][0], freq='20T', periods=6))
.assign(Time=lambda x: x.index)
.reset_index(drop=True).ffill())
result:
Time Humidity Condition
0 2014-09-01 00:00:00 84.0 Cloudy
1 2014-09-01 00:20:00 84.0 Cloudy
2 2014-09-01 00:40:00 84.0 Cloudy
3 2014-09-01 01:00:00 94.0 Rainy
4 2014-09-01 01:20:00 94.0 Rainy
5 2014-09-01 01:40:00 94.0 Rainy
| How do I divide the hourly data into 5-minute intervals and ensure the records are the same for each hour? | I received data similar to this format
Time Humidity Condition
2014-09-01 00:00:00 84 Cloudy
2014-09-01 01:00:00 94 Rainy
I tried to use df.resample('5T')
but it seems the data cannot be replicated for the same hour and df.resample('5T') need the function like mean() but I do not need it.
I tried to do it this way
But the problem is... I don't want to use 'mean', because it does not keep "Humidity" and "Condition" as original. I just want the data be
Time Humidity Condition
2014-09-01 00:00:00 84 Cloudy
2014-09-01 00:05:00 84 Cloudy
2014-09-01 00:10:00 84 Cloudy
.
.
.
2014-09-01 00:55:00 84 Cloudy
2014-09-01 01:00:00 94 Rainy
2014-09-01 01:05:00 94 Rainy
2014-09-01 01:10:00 94 Rainy
.
.
.
I wonder if there is a way out; could I ask if there is any solution to this issue? Many thanks!
| [
"Example\ndata = {'Time': {0: '2014-09-01 00:00:00', 1: '2014-09-01 01:00:00'},\n 'Humidity': {0: 84, 1: 94},\n 'Condition': {0: 'Cloudy', 1: 'Rainy'}}\ndf = pd.DataFrame(data)\n\ndf\n Time Humidity Condition\n0 2014-09-01 00:00:00 84 Cloudy\n1 2014-09-01 01:00:00 94 Rainy\n\nCode\ni make code for 20T instead 5T, becuz 5T is too short.\n(df.set_axis(pd.to_datetime(df['Time']))\n .reindex(pd.date_range(df['Time'][0], freq='20T', periods=6))\n .assign(Time=lambda x: x.index)\n .reset_index(drop=True).ffill())\n\nresult:\n Time Humidity Condition\n0 2014-09-01 00:00:00 84.0 Cloudy\n1 2014-09-01 00:20:00 84.0 Cloudy\n2 2014-09-01 00:40:00 84.0 Cloudy\n3 2014-09-01 01:00:00 94.0 Rainy\n4 2014-09-01 01:20:00 94.0 Rainy\n5 2014-09-01 01:40:00 94.0 Rainy\n\n"
] | [
1
] | [] | [] | [
"dataframe",
"pandas",
"python"
] | stackoverflow_0074650541_dataframe_pandas_python.txt |
Q:
django.db.utils.IntegrityError: CHECK constraint failed
when I am migrating I am getting the error django.db.utils.IntegrityError: CHECK constraint failed. I am using django-cms. this error popped up after trying to add editor.js to the project
full Error:
Applying advita.0003_auto_20220615_1506...Traceback (most recent call last):
File "C:\Users\mulla\AppData\Local\Programs\Python\Python38\lib\site-packages\django\db\backends\utils.py", line 84, in _execute
return self.cursor.execute(sql, params)
File "C:\Users\mulla\AppData\Local\Programs\Python\Python38\lib\site-packages\django\db\backends\sqlite3\base.py", line 413, in execute
return Database.Cursor.execute(self, query, params)
sqlite3.IntegrityError: CHECK constraint failed: (JSON_VALID("sub_title") OR "sub_title" IS NULL)
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File ".\manage.py", line 22, in <module>
main()
File ".\manage.py", line 18, in main
execute_from_command_line(sys.argv)
File "C:\Users\mulla\AppData\Local\Programs\Python\Python38\lib\site-packages\django\core\management\__init__.py", line 401, in execute_from_command_line
utility.execute()
File "C:\Users\mulla\AppData\Local\Programs\Python\Python38\lib\site-packages\django\core\management\__init__.py", line 395, in execute
self.fetch_command(subcommand).run_from_argv(self.argv)
File "C:\Users\mulla\AppData\Local\Programs\Python\Python38\lib\site-packages\django\core\management\base.py", line 330, in run_from_argv
self.execute(*args, **cmd_options)
File "C:\Users\mulla\AppData\Local\Programs\Python\Python38\lib\site-packages\django\core\management\base.py", line 371, in execute
output = self.handle(*args, **options)
File "C:\Users\mulla\AppData\Local\Programs\Python\Python38\lib\site-packages\django\core\management\base.py", line 85, in wrapped
res = handle_func(*args, **kwargs)
File "C:\Users\mulla\AppData\Local\Programs\Python\Python38\lib\site-packages\django\core\management\commands\migrate.py", line 243, in handle
post_migrate_state = executor.migrate(
File "C:\Users\mulla\AppData\Local\Programs\Python\Python38\lib\site-packages\django\db\migrations\executor.py", line 117, in migrate
state = self._migrate_all_forwards(state, plan, full_plan, fake=fake, fake_initial=fake_initial)
File "C:\Users\mulla\AppData\Local\Programs\Python\Python38\lib\site-packages\django\db\migrations\executor.py", line 147, in _migrate_all_forwards
state = self.apply_migration(state, migration, fake=fake, fake_initial=fake_initial)
File "C:\Users\mulla\AppData\Local\Programs\Python\Python38\lib\site-packages\django\db\migrations\executor.py", line 227, in apply_migration
state = migration.apply(state, schema_editor)
File "C:\Users\mulla\AppData\Local\Programs\Python\Python38\lib\site-packages\django\db\migrations\migration.py", line 124, in apply
operation.database_forwards(self.app_label, schema_editor, old_state, project_state)
File "C:\Users\mulla\AppData\Local\Programs\Python\Python38\lib\site-packages\django\db\migrations\operations\fields.py", line 236, in database_forwards
schema_editor.alter_field(from_model, from_field, to_field)
File "C:\Users\mulla\AppData\Local\Programs\Python\Python38\lib\site-packages\django\db\backends\sqlite3\schema.py", line 138, in alter_field
super().alter_field(model, old_field, new_field, strict=strict)
File "C:\Users\mulla\AppData\Local\Programs\Python\Python38\lib\site-packages\django\db\backends\base\schema.py", line 571, in alter_field
self._alter_field(model, old_field, new_field, old_type, new_type,
File "C:\Users\mulla\AppData\Local\Programs\Python\Python38\lib\site-packages\django\db\backends\sqlite3\schema.py", line 360, in _alter_field
self._remake_table(model, alter_field=(old_field, new_field))
File "C:\Users\mulla\AppData\Local\Programs\Python\Python38\lib\site-packages\django\db\backends\sqlite3\schema.py", line 283, in _remake_table
self.execute("INSERT INTO %s (%s) SELECT %s FROM %s" % (
File "C:\Users\mulla\AppData\Local\Programs\Python\Python38\lib\site-packages\django\db\backends\base\schema.py", line 142, in execute
cursor.execute(sql, params)
File "C:\Users\mulla\AppData\Local\Programs\Python\Python38\lib\site-packages\django\db\backends\utils.py", line 98, in execute
return super().execute(sql, params)
File "C:\Users\mulla\AppData\Local\Programs\Python\Python38\lib\site-packages\django\db\backends\utils.py", line 66, in execute
return self._execute_with_wrappers(sql, params, many=False, executor=self._execute)
File "C:\Users\mulla\AppData\Local\Programs\Python\Python38\lib\site-packages\django\db\backends\utils.py", line 75, in _execute_with_wrappers
return executor(sql, params, many, context)
File "C:\Users\mulla\AppData\Local\Programs\Python\Python38\lib\site-packages\django\db\backends\utils.py", line 84, in _execute
return self.cursor.execute(sql, params)
File "C:\Users\mulla\AppData\Local\Programs\Python\Python38\lib\site-packages\django\db\utils.py", line 90, in __exit__
raise dj_exc_value.with_traceback(traceback) from exc_value
File "C:\Users\mulla\AppData\Local\Programs\Python\Python38\lib\site-packages\django\db\backends\utils.py", line 84, in _execute
return self.cursor.execute(sql, params)
File "C:\Users\mulla\AppData\Local\Programs\Python\Python38\lib\site-packages\django\db\backends\sqlite3\base.py", line 413, in execute
return Database.Cursor.execute(self, query, params)
django.db.utils.IntegrityError: CHECK constraint failed: (JSON_VALID("sub_title") OR "sub_title" IS NULL)
Can anyone clarify why I am getting this error?
A:
This is happening because the 'sub_title' field already contains values that are not valid JSON.
The table probably already has data in the 'sub_title' column, and you are trying to change its field type to a JSONField. If this is the case, you should update all existing values to valid JSON first.
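One possible way to do that (a sketch only; the model name and migration numbers below are placeholders, only the app label "advita" comes from the traceback) is a data migration, ordered before the failing AlterField migration, that rewrites the old values as valid JSON:
import json
from django.db import migrations

def make_sub_title_valid_json(apps, schema_editor):
    MyModel = apps.get_model("advita", "MyModel")   # model name is a guess
    for obj in MyModel.objects.all():
        try:
            json.loads(obj.sub_title)                # already valid JSON, skip
        except (TypeError, ValueError):
            obj.sub_title = json.dumps(obj.sub_title)  # wrap plain text as a JSON string
            obj.save(update_fields=["sub_title"])

class Migration(migrations.Migration):
    dependencies = [("advita", "0002_previous_migration")]  # placeholder
    operations = [
        migrations.RunPython(make_sub_title_valid_json, migrations.RunPython.noop),
    ]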
| django.db.utils.IntegrityError: CHECK constraint failed | when I am migrating I am getting the error django.db.utils.IntegrityError: CHECK constraint failed. I am using django-cms. this error popped up after trying to add editor.js to the project
full Error:
Applying advita.0003_auto_20220615_1506...Traceback (most recent call last):
File "C:\Users\mulla\AppData\Local\Programs\Python\Python38\lib\site-packages\django\db\backends\utils.py", line 84, in _execute
return self.cursor.execute(sql, params)
File "C:\Users\mulla\AppData\Local\Programs\Python\Python38\lib\site-packages\django\db\backends\sqlite3\base.py", line 413, in execute
return Database.Cursor.execute(self, query, params)
sqlite3.IntegrityError: CHECK constraint failed: (JSON_VALID("sub_title") OR "sub_title" IS NULL)
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File ".\manage.py", line 22, in <module>
main()
File ".\manage.py", line 18, in main
execute_from_command_line(sys.argv)
File "C:\Users\mulla\AppData\Local\Programs\Python\Python38\lib\site-packages\django\core\management\__init__.py", line 401, in execute_from_command_line
utility.execute()
File "C:\Users\mulla\AppData\Local\Programs\Python\Python38\lib\site-packages\django\core\management\__init__.py", line 395, in execute
self.fetch_command(subcommand).run_from_argv(self.argv)
File "C:\Users\mulla\AppData\Local\Programs\Python\Python38\lib\site-packages\django\core\management\base.py", line 330, in run_from_argv
self.execute(*args, **cmd_options)
File "C:\Users\mulla\AppData\Local\Programs\Python\Python38\lib\site-packages\django\core\management\base.py", line 371, in execute
output = self.handle(*args, **options)
File "C:\Users\mulla\AppData\Local\Programs\Python\Python38\lib\site-packages\django\core\management\base.py", line 85, in wrapped
res = handle_func(*args, **kwargs)
File "C:\Users\mulla\AppData\Local\Programs\Python\Python38\lib\site-packages\django\core\management\commands\migrate.py", line 243, in handle
post_migrate_state = executor.migrate(
File "C:\Users\mulla\AppData\Local\Programs\Python\Python38\lib\site-packages\django\db\migrations\executor.py", line 117, in migrate
state = self._migrate_all_forwards(state, plan, full_plan, fake=fake, fake_initial=fake_initial)
File "C:\Users\mulla\AppData\Local\Programs\Python\Python38\lib\site-packages\django\db\migrations\executor.py", line 147, in _migrate_all_forwards
state = self.apply_migration(state, migration, fake=fake, fake_initial=fake_initial)
File "C:\Users\mulla\AppData\Local\Programs\Python\Python38\lib\site-packages\django\db\migrations\executor.py", line 227, in apply_migration
state = migration.apply(state, schema_editor)
File "C:\Users\mulla\AppData\Local\Programs\Python\Python38\lib\site-packages\django\db\migrations\migration.py", line 124, in apply
operation.database_forwards(self.app_label, schema_editor, old_state, project_state)
File "C:\Users\mulla\AppData\Local\Programs\Python\Python38\lib\site-packages\django\db\migrations\operations\fields.py", line 236, in database_forwards
schema_editor.alter_field(from_model, from_field, to_field)
File "C:\Users\mulla\AppData\Local\Programs\Python\Python38\lib\site-packages\django\db\backends\sqlite3\schema.py", line 138, in alter_field
super().alter_field(model, old_field, new_field, strict=strict)
File "C:\Users\mulla\AppData\Local\Programs\Python\Python38\lib\site-packages\django\db\backends\base\schema.py", line 571, in alter_field
self._alter_field(model, old_field, new_field, old_type, new_type,
File "C:\Users\mulla\AppData\Local\Programs\Python\Python38\lib\site-packages\django\db\backends\sqlite3\schema.py", line 360, in _alter_field
self._remake_table(model, alter_field=(old_field, new_field))
File "C:\Users\mulla\AppData\Local\Programs\Python\Python38\lib\site-packages\django\db\backends\sqlite3\schema.py", line 283, in _remake_table
self.execute("INSERT INTO %s (%s) SELECT %s FROM %s" % (
File "C:\Users\mulla\AppData\Local\Programs\Python\Python38\lib\site-packages\django\db\backends\base\schema.py", line 142, in execute
cursor.execute(sql, params)
File "C:\Users\mulla\AppData\Local\Programs\Python\Python38\lib\site-packages\django\db\backends\utils.py", line 98, in execute
return super().execute(sql, params)
File "C:\Users\mulla\AppData\Local\Programs\Python\Python38\lib\site-packages\django\db\backends\utils.py", line 66, in execute
return self._execute_with_wrappers(sql, params, many=False, executor=self._execute)
File "C:\Users\mulla\AppData\Local\Programs\Python\Python38\lib\site-packages\django\db\backends\utils.py", line 75, in _execute_with_wrappers
return executor(sql, params, many, context)
File "C:\Users\mulla\AppData\Local\Programs\Python\Python38\lib\site-packages\django\db\backends\utils.py", line 84, in _execute
return self.cursor.execute(sql, params)
File "C:\Users\mulla\AppData\Local\Programs\Python\Python38\lib\site-packages\django\db\utils.py", line 90, in __exit__
raise dj_exc_value.with_traceback(traceback) from exc_value
File "C:\Users\mulla\AppData\Local\Programs\Python\Python38\lib\site-packages\django\db\backends\utils.py", line 84, in _execute
return self.cursor.execute(sql, params)
File "C:\Users\mulla\AppData\Local\Programs\Python\Python38\lib\site-packages\django\db\backends\sqlite3\base.py", line 413, in execute
return Database.Cursor.execute(self, query, params)
django.db.utils.IntegrityError: CHECK constraint failed: (JSON_VALID("sub_title") OR "sub_title" IS NULL)
Can anyone clarify why I am getting this error?
| [
"This is happening because field 'sub_title' has values earlier which are not valid json.\nTable probably already has values for 'sub_title' field and you are changing trying to change earlier fieldtype to jsonfield if this is the case, you should update all values to valid json first.\n"
] | [
0
] | [] | [] | [
"django",
"python"
] | stackoverflow_0072629593_django_python.txt |
Q:
Python: How to get attribute of attribute of an object with getattr?
How do I evaluate
a = myobject.id.number
and return None if myobject is None
with built-in getattr? Maybe getattr(myobject, "id.number", None)?
A:
This should scale well to any depth:
reduce(lambda obj, attr : getattr(obj, attr, None), ("id","num"), myobject)
A:
getattr(getattr(myobject, "id", None), "number", None)
should work.
A:
my favorites are
from functools import reduce
try:
a = reduce(getattr, ("id", "number"), myobject)
except AttributeError:
a = None
or
from operator import attrgetter
try:
a = attrgetter('id.number')(myobject)
except AttributeError:
a = None
A:
Here's a one-liner
a = myobject is not None and myobject.id.number or None
It doesn't check whether id is None, but that wasn't part of the original question.
A:
A slightly over generic solution keeping in view all members:
if myobject and myobject.id and myobject.id.number:
a = myobject.id.number
else:
a = None
A:
return myobject.id.number if myobject else None
A:
I use the following function which does it for any level.
def Resolve(object, attribute:str):
"""
Resolve attribute of an object.
"""
attribute_list = attribute.split(".")
obj = object
try:
for att in attribute_list:
obj = getattr(obj, att)
except AttributeError:
obj = None
return obj
To use it you write:
a = Resolve(myobject, 'id.number')
The code simply splits the string on the period character and loops through the attributes. If you had another level, say `myobject.id.number.another`, you would use:
a = Resolve(myobject, 'id.number.another')
| Python: How to get attribute of attribute of an object with getattr? | How do I evaluate
a = myobject.id.number
and return None if myobject is None
with built-in getattr? Maybe getattr(myobject, "id.number", None)?
| [
"This should scale well to any depth:\nreduce(lambda obj, attr : getattr(obj, attr, None), (\"id\",\"num\"), myobject)\n\n",
"getattr(getattr(myobject, \"id\", None), \"number\", None)\n\nshould work.\n",
"my favorites are\nfrom functools import reduce\ntry:\n a = reduce(getattr, (\"id\", \"number\"), myobject)\nexcept AttributeError:\n a = None\n\nor\nfrom operator import attrgetter\ntry:\n a = attrgetter('id.number')(myobject)\nexcept AttributeError:\n a = None\n\n",
"Here's a one-liner\na = myobject is not None and myobject.id.number or None\n\nIt doesn't check whether id is None, but that wasn't part of the original question.\n",
"A slightly over generic solution keeping in view all members:\nif myobject and myobject.id and myobject.id.number:\n a = myobject.id.number\nelse:\n a = None\n\n",
"return myobject.id.number if myobject else None\n\n",
"I use the following function which does it for any level.\ndef Resolve(object, attribute:str):\n \"\"\"\n Resolve attribute of an object.\n \"\"\"\n attribute_list = attribute.split(\".\")\n obj = object\n try:\n for att in attribute_list:\n obj = getattr(obj, att)\n except AttributeError:\n obj = None\n \n return obj\n\nTo use it you write:\na = Resolve(myobject, 'id.number')\n\nThe code simply splits the string using the the period character and loops through the attributes. If you had another level say `myobject.id.number.another' you would use:\na = Resolve(myobject, 'id.number.another')\n\n"
] | [
7,
6,
4,
1,
0,
0,
0
] | [] | [] | [
"attributes",
"object",
"python"
] | stackoverflow_0014925239_attributes_object_python.txt |
Q:
Create a Java UDF that uses geoip2 library with the database in a S3 bucket
Correct me if i'm wrong, but my understanding of UDFs in Snowpark is that you can send the UDF function from your IDE and it will be executed inside Snowflake. I have a staged database called GeoLite2-City.mmdb inside an S3 bucket on my Snowflake account and i would like to use it to retrieve information about an ip address. So my strategy was to
1 Register a UDF which would return a response string in my IDE PyCharm
2 Create a main function which would simply query the database about the ip address and give me a response.
The problem is: how can the UDF and my code see the staged file at
s3://path/GeoLite2-City.mmdb
in my bucket; in my case i simply named it so, assuming that it will eventually find it (with geoip2.database.Reader('GeoLite2-City.mmdb') as reader:) since the
stage_location='@AWS_CSV_STAGE' is the same as where the UDF will be saved? But i'm not sure if i understand correctly what the option stage_location is referring to exactly.
At the moment i get the following error:
"Cannot add package geoip2 because Anaconda terms must be accepted by ORGADMIN to use Anaconda 3rd party packages. Please follow the instructions at https://docs.snowflake.com/en/developer-guide/udf/python/udf-python-packages.html#using-third-party-packages-from-anaconda."
Am i importing geoip2.database correctly in order to use it with snowpark and udf?
Do i import it by writing session.add_packages('geoip2') ?
Thank You for clearing my doubts.
The instructions i'm following about geoip2 are here.
https://geoip2.readthedocs.io/en/latest/
my code:
from snowflake.snowpark import Session
import geoip2.database
from snowflake.snowpark.functions import col
import logging
from snowflake.snowpark.types import IntegerType, StringType
logger = logging.getLogger()
logger.setLevel(logging.INFO)
session = None
user = ''*********'
password = '*********'
account = '*********'
warehouse = '*********'
database = '*********'
schema = '*********'
role = '*********'
print("Connecting")
cnn_params = {
"account": account,
"user": user,
"password": password,
"warehouse": warehouse,
"database": database,
"schema": schema,
"role": role,
}
def first_udf():
with geoip2.database.Reader('GeoLite2-City.mmdb') as reader:
response = reader.city('203.0.113.0')
print('response.country.iso_code')
return response
try:
print('session..')
session = Session.builder.configs(cnn_params).create()
session.add_packages('geoip2')
session.udf.register(
func=first_udf
, return_type=StringType()
, input_types=[StringType()]
, is_permanent=True
, name='SNOWPARK_FIRST_UDF'
, replace=True
, stage_location='@AWS_CSV_STAGE'
)
session.sql('SELECT SNOWPARK_FIRST_UDF').show()
except Exception as e:
print(e)
finally:
if session:
session.close()
print('connection closed..')
print('done.')
UPDATE
I'm trying to solve it using a Java UDF, since I already have the 'geoip2-2.8.0.jar' library staged in my staging area. If I could import its methods to get the country of an IP it would be perfect; the problem is that I don't know how to do it exactly. I'm trying to follow these instructions: https://maxmind.github.io/GeoIP2-java/.
I want to query the database and get the ISO code of the country as output, and I want to do it in a Snowflake worksheet.
CREATE OR REPLACE FUNCTION GEO()
returns varchar not null
language java
imports = ('@AWS_CSV_STAGE/lib/geoip2-2.8.0.jar', '@AWS_CSV_STAGE/geodata/GeoLite2-City.mmdb')
handler = 'test'
as
$$
def test():
File database = new File("geodata/GeoLite2-City.mmdb")
DatabaseReader reader = new DatabaseReader.Builder(database).build();
InetAddress ipAddress = InetAddress.getByName("128.101.101.101");
CityResponse response = reader.city(ipAddress);
Country country = response.getCountry();
System.out.println(country.getIsoCode());
$$;
SELECT GEO();
A:
This will be more complicated than it looks:
To use session.add_packages('geoip2') in Snowflake you need to accept the Anaconda terms. This is easy if you can ask your account admin.
But then you can only get the packages that Anaconda has added to Snowflake in this way. The list is https://repo.anaconda.com/pkgs/snowflake/, and I don't see geoip2 there yet.
So you will need to package your own Python code (until Anaconda sees enough requests for geoip2 in the wishlist). I described the process here https://medium.com/snowflake/generating-all-the-holidays-in-sql-with-a-python-udtf-4397f190252b.
But wait! GeoIP2 is not pure Python, so you will need to wait until Anaconda packages the C extension libmaxminddb. But this will be harder, as you can see their docs don't offer a straightforward way like other pip installable C libraries.
So this will be complicated.
There are other alternative paths, like a commercial provider of this functionality (like I describe here https://medium.com/snowflake/new-in-snowflake-marketplace-monetization-315aa90b86c).
There are other approaches to get this done without using a paid dataset, but I haven't written about that yet - but someone else might before I get to do it.
Btw, years ago I wrote something like this for BigQuery (https://cloud.google.com/blog/products/data-analytics/geolocation-with-bigquery-de-identify-76-million-ip-addresses-in-20-seconds), but today I was notified that Google recently deleted the tables that I had shared with the world (https://twitter.com/matthew_hensley/status/1598386009129058315).
So it's time to rebuild in Snowflake. But who (me?) and when is still a question.
| Create a Java UDF that uses geoip2 library with the database in a S3 bucket | Correct me if i'm wrong, but my understanding of the UDF function in Snowpark is that you can send the function UDF from your IDE and it will be executed inside Snowflake. I have a staged database called GeoLite2-City.mmdb inside a S3 bucket on my Snowflake account and i would like to use it to retrieve informations about an ip address. So my strategy was to
1 Register an UDF which would return a response string n my IDE Pycharm
2 Create a main function which would simple question the database about the ip address and give me a response.
The problem is that, how the UDF and my code can see the staged file at
s3://path/GeoLite2-City.mmdb
in my bucket, in my case i simply named it so assuming that it will eventually find it (with geoip2.database.Reader('GeoLite2-City.mmdb') as reader:) since the
stage_location='@AWS_CSV_STAGE' is the same as were the UDF will be saved? But i'm not sure if i understand correctly what the option stage_location is referring exactly.
At the moment i get the following error:
"Cannot add package geoip2 because Anaconda terms must be accepted by ORGADMIN to use Anaconda 3rd party packages. Please follow the instructions at https://docs.snowflake.com/en/developer-guide/udf/python/udf-python-packages.html#using-third-party-packages-from-anaconda."
Am i importing geoip2.database correctly in order to use it with snowpark and udf?
Do i import it by writing session.add_packages('geoip2') ?
Thank You for clearing my doubts.
The instructions i'm following about geoip2 are here.
https://geoip2.readthedocs.io/en/latest/
my code:
from snowflake.snowpark import Session
import geoip2.database
from snowflake.snowpark.functions import col
import logging
from snowflake.snowpark.types import IntegerType, StringType
logger = logging.getLogger()
logger.setLevel(logging.INFO)
session = None
user = '*********'
password = '*********'
account = '*********'
warehouse = '*********'
database = '*********'
schema = '*********'
role = '*********'
print("Connecting")
cnn_params = {
"account": account,
"user": user,
"password": password,
"warehouse": warehouse,
"database": database,
"schema": schema,
"role": role,
}
def first_udf():
with geoip2.database.Reader('GeoLite2-City.mmdb') as reader:
response = reader.city('203.0.113.0')
print('response.country.iso_code')
return response
try:
print('session..')
session = Session.builder.configs(cnn_params).create()
session.add_packages('geoip2')
session.udf.register(
func=first_udf
, return_type=StringType()
, input_types=[StringType()]
, is_permanent=True
, name='SNOWPARK_FIRST_UDF'
, replace=True
, stage_location='@AWS_CSV_STAGE'
)
session.sql('SELECT SNOWPARK_FIRST_UDF').show()
except Exception as e:
print(e)
finally:
if session:
session.close()
print('connection closed..')
print('done.')
UPDATE
I'm trying to solve it using a java udf as in my staging area i have the 'geoip2-2.8.0.jar' library staged already. If i could import it's methods to get the country of an ip it would be perfect, the problem is that i don't know how to do it exactly. I'm trying to follow these instructions https://maxmind.github.io/GeoIP2-java/.
I wanna interrogate the database and get as output the iso code of the country and i want to do it on snowflake worksheet.
CREATE OR REPLACE FUNCTION GEO()
returns varchar not null
language java
imports = ('@AWS_CSV_STAGE/lib/geoip2-2.8.0.jar', '@AWS_CSV_STAGE/geodata/GeoLite2-City.mmdb')
handler = 'test'
as
$$
def test():
File database = new File("geodata/GeoLite2-City.mmdb")
DatabaseReader reader = new DatabaseReader.Builder(database).build();
InetAddress ipAddress = InetAddress.getByName("128.101.101.101");
CityResponse response = reader.city(ipAddress);
Country country = response.getCountry();
System.out.println(country.getIsoCode());
$$;
SELECT GEO();
| [
"This will be more complicated that it looks:\n\nTo use session.add_packages('geoip2') in Snowflake you need to accept the Anaconda terms. This is easy if you can ask your account admin.\nBut then you can only get the packages that Anaconda has added to Snowflake in this way. The list is https://repo.anaconda.com/pkgs/snowflake/, and I don't see geoip2 there yet.\nSo you will need to package you own Python code (until Anaconda sees enough requests for geoip2 in the wishlist). I described the process here https://medium.com/snowflake/generating-all-the-holidays-in-sql-with-a-python-udtf-4397f190252b.\nBut wait! GeoIP2 is not pure Python, so you will need to wait until Anaconda packages the C extension libmaxminddb. But this will be harder, as you can see their docs don't offer a straightforward way like other pip installable C libraries.\n\nSo this will be complicated.\nThere are other alternative paths, like a commercial provider of this functionality (like I describe here https://medium.com/snowflake/new-in-snowflake-marketplace-monetization-315aa90b86c).\nThere other approaches to get this done without using a paid dataset, but I haven't written about that yet - but someone else might before I get to do it.\nBtw, years ago I wrote something like this for BigQuery (https://cloud.google.com/blog/products/data-analytics/geolocation-with-bigquery-de-identify-76-million-ip-addresses-in-20-seconds), but today I was notified that Google recently deleted the tables that I had shared with the world (https://twitter.com/matthew_hensley/status/1598386009129058315).\nSo it's time to rebuild in Snowflake. But who (me?) and when is still a question.\n"
] | [
0
] | [] | [] | [
"python",
"snowflake_cloud_data_platform",
"snowpark",
"user_defined_functions"
] | stackoverflow_0074649140_python_snowflake_cloud_data_platform_snowpark_user_defined_functions.txt |
Q:
Querying dynamodb returns "Can't Pick _thread.lock"?
I'm trying to make a general dynamodb query but keep getting a TypeError: can't pickle _thread.lock object.
response = table.query(
KeyConditionExpression=Key("Key").eq("Whatever")
)
Any possible pointers? I did some research beforehand, and apparently this error appears when trying to do multithreading; however, I am not doing any threading in this script.
A:
try this:
table = boto3.resource("dynamodb", "region").Table('table_name')
response = table.query(KeyConditionExpression=Key("Key").eq("Whatever"))
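For completeness, a self-contained sketch of the same idea (hedged: the region, table name and key name below are placeholders, not values from the question). Rebuilding the resource inside the code path that uses it avoids ever having to pickle or deep-copy an object holding a _thread.lock, which is the usual source of this error:
import boto3
from boto3.dynamodb.conditions import Key

def query_items(partition_value):
    # Build the resource and table handle here, in the process/function that uses them
    table = boto3.resource("dynamodb", region_name="us-east-1").Table("table_name")  # placeholders
    response = table.query(KeyConditionExpression=Key("Key").eq(partition_value))
    return response["Items"]

items = query_items("Whatever")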
| Querying dynamodb returns "Can't Pick _thread.lock"? | I'm trying to make a general dynamodb query but keep getting a TypeError: can't pickle _thread.lock object.
response = table.query(
KeyConditionExpression=Key("Key").eq("Whatever")
)
Any possible pointers? I did some research prior and apparently this error appears when trying to do multithreading, however I am not doing any threading in this script.
| [
"try this:\ntable = boto3.resource(\"dynamodb\", \"region\").Table('table_name')\nresponse = table.query(KeyConditionExpression=Key(\"Key\").eq(\"Whatever\"))\n\n"
] | [
0
] | [] | [] | [
"boto3",
"dynamodb_queries",
"python"
] | stackoverflow_0069922193_boto3_dynamodb_queries_python.txt |
Q:
Pytest - how to skip tests unless you declare an option/flag?
I have some unit tests, but I'm looking for a way to tag some specific unit tests to have them skipped unless you declare an option when you call the tests.
Example:
If I call pytest test_reports.py, I'd want a couple specific unit tests to not be run.
But if I call pytest -<something> test_reports, then I want all my tests to be run.
I looked into the @pytest.mark.skipif(condition) tag but couldn't quite figure it out, so not sure if I'm on the right track or not. Any guidance here would be great!
A:
We are using markers with addoption in conftest.py
testcase:
@pytest.mark.no_cmd
def test_skip_if_no_command_line():
assert True
conftest.py:
in function
def pytest_addoption(parser):
parser.addoption("--no_cmd", action="store_true",
help="run the tests only in case of that command line (marked with marker @no_cmd)")
in function
def pytest_runtest_setup(item):
if 'no_cmd' in item.keywords and not item.config.getoption("--no_cmd"):
pytest.skip("need --no_cmd option to run this test")
pytest call:
py.test test_the_marker
-> test will be skipped
py.test test_the_marker --no_cmd
-> test will run
A:
The pytest documentation offers a nice example on how to skip tests marked "slow" by default and only run them with a --runslow option:
# conftest.py
import pytest
def pytest_addoption(parser):
parser.addoption(
"--runslow", action="store_true", default=False, help="run slow tests"
)
def pytest_configure(config):
config.addinivalue_line("markers", "slow: mark test as slow to run")
def pytest_collection_modifyitems(config, items):
if config.getoption("--runslow"):
# --runslow given in cli: do not skip slow tests
return
skip_slow = pytest.mark.skip(reason="need --runslow option to run")
for item in items:
if "slow" in item.keywords:
item.add_marker(skip_slow)
We can now mark our tests in the following way:
# test_module.py
from time import sleep
import pytest
def test_func_fast():
sleep(0.1)
@pytest.mark.slow
def test_func_slow():
sleep(10)
The test test_func_fast is always executed (calling e.g. pytest). The "slow" function test_func_slow, however, will only be executed when calling pytest --runslow.
A:
There are two ways to do that:
First method is to tag the functions with @pytest.mark decorator and run / skip the tagged functions alone using -m option.
@pytest.mark.anytag
def test_calc_add():
assert True
@pytest.mark.anytag
def test_calc_multiply():
assert True
def test_calc_divide():
assert True
Running the script as py.test -m anytag test_script.py will run only the first two functions.
Alternatively, running py.test -m "not anytag" test_script.py will run only the third function and skip the first two functions.
Here 'anytag' is the name of the tag; it can be anything!
Second way is to run the functions with a common substring in their name using -k option.
def test_calc_add():
assert True
def test_calc_multiply():
assert True
def test_divide():
assert True
Running the script as py.test -k calc test_script.py will run the first two functions and skip the last one.
Note that 'calc' is the common substring present in both function names, and any other function having 'calc' in its name, like 'calculate', will also be run.
A:
Following the approach suggested in the pytest docs, and thus the answer of @Manu_CJ, is certainly the way to go here.
I'd simply like to show how this can be adapted to easily define multiple options:
The canonical example given by the pytest docs highlights well how to add a single marker through command line options. However, adapting it to add multiple markers might not be straightforward, as the three hooks pytest_addoption, pytest_configure and pytest_collection_modifyitems all need to be invoked to allow adding a single marker through a command line option.
This is one way you can adapt the canonical example, if you have several markers, like 'flag1', 'flag2', etc., that you want to be able to add via command line option:
# content of conftest.py
import pytest
# Create a dict of markers.
# The key is used as option, so --{key} will run all tests marked with key.
# The value must be a dict that specifies:
# 1. 'help': the command line help text
# 2. 'marker-descr': a description of the marker
# 3. 'skip-reason': displayed reason whenever a test with this marker is skipped.
optional_markers = {
"flag1": {"help": "<Command line help text for flag1...>",
"marker-descr": "<Description of the marker...>",
"skip-reason": "Test only runs with the --{} option."},
"flag2": {"help": "<Command line help text for flag2...>",
"marker-descr": "<Description of the marker...>",
"skip-reason": "Test only runs with the --{} option."},
# add further markers here
}
def pytest_addoption(parser):
for marker, info in optional_markers.items():
parser.addoption("--{}".format(marker), action="store_true",
default=False, help=info['help'])
def pytest_configure(config):
for marker, info in optional_markers.items():
config.addinivalue_line("markers",
"{}: {}".format(marker, info['marker-descr']))
def pytest_collection_modifyitems(config, items):
for marker, info in optional_markers.items():
if not config.getoption("--{}".format(marker)):
skip_test = pytest.mark.skip(
reason=info['skip-reason'].format(marker)
)
for item in items:
if marker in item.keywords:
item.add_marker(skip_test)
Now you can use the markers defined in optional_markers in your test modules:
# content of test_module.py
import pytest
@pytest.mark.flag1
def test_some_func():
pass
@pytest.mark.flag2
def test_other_func():
pass
A:
If the use-case prohibits modifying either conftest.py and/or pytest.ini, here's how to use environment variables to directly take advantage of the skipif marker.
test_reports.py contents:
import os
import pytest
@pytest.mark.skipif(
not os.environ.get("MY_SPECIAL_FLAG"),
reason="MY_SPECIAL_FLAG not set in environment"
)
def test_skip_if_no_cli_tag():
assert True
def test_always_run():
assert True
In Windows:
> pytest -v test_reports.py --no-header
================== test session starts ===================
collected 2 items
test_reports.py::test_skip_if_no_cli_tag SKIPPED [ 50%]
test_reports.py::test_always_run PASSED [100%]
============== 1 passed, 1 skipped in 0.01s ==============
> cmd /c "set MY_SPECIAL_FLAG=1&pytest -v test_reports.py --no-header"
================== test session starts ===================
collected 2 items
test_reports.py::test_skip_if_no_cli_tag PASSED [ 50%]
test_reports.py::test_always_run PASSED [100%]
=================== 2 passed in 0.01s ====================
In Linux or other *NIX-like systems:
$ pytest -v test_reports.py --no-header
================= test session starts =================
collected 2 items
test_reports.py::test_skip_if_no_cli_tag SKIPPED [ 50%]
test_reports.py::test_always_run PASSED [100%]
============ 1 passed, 1 skipped in 0.00s =============
$ MY_SPECIAL_FLAG=1 pytest -v test_reports.py --no-header
================= test session starts =================
collected 2 items
test_reports.py::test_skip_if_no_cli_tag PASSED [ 50%]
test_reports.py::test_always_run PASSED [100%]
================== 2 passed in 0.00s ==================
MY_SPECIAL_FLAG can be whatever you wish based on your specific use-case and of course --no-header is just being used for this example.
Enjoy.
| Pytest - how to skip tests unless you declare an option/flag? | I have some unit tests, but I'm looking for a way to tag some specific unit tests to have them skipped unless you declare an option when you call the tests.
Example:
If I call pytest test_reports.py, I'd want a couple specific unit tests to not be run.
But if I call pytest -<something> test_reports, then I want all my tests to be run.
I looked into the @pytest.mark.skipif(condition) tag but couldn't quite figure it out, so not sure if I'm on the right track or not. Any guidance here would be great!
| [
"We are using markers with addoption in conftest.py\ntestcase:\[email protected]_cmd\ndef test_skip_if_no_command_line():\n assert True\n\nconftest.py:\nin function\ndef pytest_addoption(parser):\n parser.addoption(\"--no_cmd\", action=\"store_true\",\n help=\"run the tests only in case of that command line (marked with marker @no_cmd)\")\n\nin function\ndef pytest_runtest_setup(item):\n if 'no_cmd' in item.keywords and not item.config.getoption(\"--no_cmd\"):\n pytest.skip(\"need --no_cmd option to run this test\")\n\npytest call:\n py.test test_the_marker \n -> test will be skipped\n\n py.test test_the_marker --no_cmd\n -> test will run\n\n",
"The pytest documentation offers a nice example on how to skip tests marked \"slow\" by default and only run them with a --runslow option:\n# conftest.py\n\nimport pytest\n\n\ndef pytest_addoption(parser):\n parser.addoption(\n \"--runslow\", action=\"store_true\", default=False, help=\"run slow tests\"\n )\n\n\ndef pytest_configure(config):\n config.addinivalue_line(\"markers\", \"slow: mark test as slow to run\")\n\n\ndef pytest_collection_modifyitems(config, items):\n if config.getoption(\"--runslow\"):\n # --runslow given in cli: do not skip slow tests\n return\n skip_slow = pytest.mark.skip(reason=\"need --runslow option to run\")\n for item in items:\n if \"slow\" in item.keywords:\n item.add_marker(skip_slow)\n\nWe can now mark our tests in the following way:\n# test_module.py\nfrom time import sleep\n\nimport pytest\n\n\ndef test_func_fast():\n sleep(0.1)\n\n\[email protected]\ndef test_func_slow():\n sleep(10)\n\nThe test test_func_fast is always executed (calling e.g. pytest). The \"slow\" function test_func_slow, however, will only be executed when calling pytest --runslow.\n",
"There are two ways to do that:\nFirst method is to tag the functions with @pytest.mark decorator and run / skip the tagged functions alone using -m option.\[email protected]\ndef test_calc_add():\n assert True\n\[email protected]\ndef test_calc_multiply():\n assert True\n\ndef test_calc_divide():\n assert True\n\nRunning the script as py.test -m anytag test_script.py will run only the first two functions.\nAlternatively run as py.test -m \"not anytag\" test_script.py will run only the third function and skip the first two functions.\nHere 'anytag' is the name of the tag. It can be anything.!\nSecond way is to run the functions with a common substring in their name using -k option.\ndef test_calc_add():\n assert True\n\ndef test_calc_multiply():\n assert True\n\ndef test_divide():\n assert True\n\nRunning the script as py.test -k calc test_script.py will run the functions and skip the last one.\nNote that 'calc' is the common substring present in both the function name and any other function having 'calc' in its name like 'calculate' will also be run.\n",
"Following the approach suggested in the pytest docs, thus the answer of @Manu_CJ, is certainly the way to go here.\nI'd simply like to show how this can be adapted to easily define multiple options:\nThe canonical example given by the pytest docs highlights well how to add a single marker through command line options. However, adapting it to add multiple markers might not be straight forward, as the three hooks pytest_addoption, pytest_configure and pytest_collection_modifyitems all need to be evoked to allow adding a single marker through command line option.\nThis is one way you can adapt the canonical example, if you have several markers, like 'flag1', 'flag2', etc., that you want to be able to add via command line option:\n# content of conftest.py\n \nimport pytest\n\n# Create a dict of markers.\n# The key is used as option, so --{key} will run all tests marked with key.\n# The value must be a dict that specifies:\n# 1. 'help': the command line help text\n# 2. 'marker-descr': a description of the marker\n# 3. 'skip-reason': displayed reason whenever a test with this marker is skipped.\noptional_markers = {\n \"flag1\": {\"help\": \"<Command line help text for flag1...>\",\n \"marker-descr\": \"<Description of the marker...>\",\n \"skip-reason\": \"Test only runs with the --{} option.\"},\n \"flag2\": {\"help\": \"<Command line help text for flag2...>\",\n \"marker-descr\": \"<Description of the marker...>\",\n \"skip-reason\": \"Test only runs with the --{} option.\"},\n # add further markers here\n}\n\n\ndef pytest_addoption(parser):\n for marker, info in optional_markers.items():\n parser.addoption(\"--{}\".format(marker), action=\"store_true\",\n default=False, help=info['help'])\n\n\ndef pytest_configure(config):\n for marker, info in optional_markers.items():\n config.addinivalue_line(\"markers\",\n \"{}: {}\".format(marker, info['marker-descr']))\n\n\ndef pytest_collection_modifyitems(config, items):\n for marker, info in optional_markers.items():\n if not config.getoption(\"--{}\".format(marker)):\n skip_test = pytest.mark.skip(\n reason=info['skip-reason'].format(marker)\n )\n for item in items:\n if marker in item.keywords:\n item.add_marker(skip_test)\n\n\nNow you can use the markers defined in optional_markers in your test modules:\n# content of test_module.py\n\nimport pytest\n\n\[email protected]\ndef test_some_func():\n pass\n\n\[email protected]\ndef test_other_func():\n pass\n\n",
"If the use-case prohibits modifying either conftest.py and/or pytest.ini, here's how to use environment variables to directly take advantage of the skipif marker.\ntest_reports.py contents:\nimport os\nimport pytest\n\[email protected](\n not os.environ.get(\"MY_SPECIAL_FLAG\"),\n reason=\"MY_SPECIAL_FLAG not set in environment\"\n)\ndef test_skip_if_no_cli_tag():\n assert True\n\ndef test_always_run():\n assert True\n\nIn Windows:\n> pytest -v test_reports.py --no-header\n================== test session starts ===================\ncollected 2 items\n\ntest_reports.py::test_skip_if_no_cli_tag SKIPPED [ 50%]\ntest_reports.py::test_always_run PASSED [100%]\n\n============== 1 passed, 1 skipped in 0.01s ==============\n\n> cmd /c \"set MY_SPECIAL_FLAG=1&pytest -v test_reports.py --no-header\"\n================== test session starts ===================\ncollected 2 items\n\ntest_reports.py::test_skip_if_no_cli_tag PASSED [ 50%]\ntest_reports.py::test_always_run PASSED [100%]\n\n=================== 2 passed in 0.01s ====================\n\nIn Linux or other *NIX-like systems:\n$ pytest -v test_reports.py --no-header\n================= test session starts =================\ncollected 2 items\n\ntest_reports.py::test_skip_if_no_cli_tag SKIPPED [ 50%]\ntest_reports.py::test_always_run PASSED [100%]\n\n============ 1 passed, 1 skipped in 0.00s =============\n\n$ MY_SPECIAL_FLAG=1 pytest -v test_reports.py --no-header\n================= test session starts =================\ncollected 2 items\n\ntest_reports.py::test_skip_if_no_cli_tag PASSED [ 50%]\ntest_reports.py::test_always_run PASSED [100%]\n\n================== 2 passed in 0.00s ==================\n\nMY_SPECIAL_FLAG can be whatever you wish based on your specific use-case and of course --no-header is just being used for this example.\nEnjoy.\n"
] | [
25,
23,
5,
3,
0
] | [] | [] | [
"pytest",
"python",
"unit_testing"
] | stackoverflow_0047559524_pytest_python_unit_testing.txt |
Q:
Telthon client Get link/url for all Groups or channel
Hello, I'm trying to find a way to get a link for every group I have in the Telegram app and save them all in one list.
If it is a group I can generate a link;
if it is a private channel, can I generate a join URL?
I get the group ID, name and all the other info, but I can't get the URL.
Thanks
`
async for dialog in client.iter_dialogs():
if dialog.is_group:
try:
print('name:{0} ids:{1} is_user:{2} is_channel:{3} is_group:{4}'.format(dialog.name,dialog.id,dialog.is_user,dialog.is_channel,dialog.is_group))
#print(dialog.is_banned)
print(dialog.name)
print(dialog.message.peer_id.channel_id)
print(dialog.entity)
exit()
except Exception as e:
print(e)
error(str(row[1]), str(dialog.name), str(e))
print('____________E_R_R_O_R___________')
exit()
`
I have tried some other libraries but I haven't found any solution.
A:
"URL" here means "https://t.me/username": if an entity object has a username attribute, it's public. Otherwise, if you're an owner or admin (with the needed permissions), you get the invite link by making a separate request.
import telethon.tl.functions as _fn
async for d in client.iter_dialogs():
if not d.is_group: continue
try:
print(f'Name:{d.name} ID:{d.id} is_user:{d.is_user} is_channel:{d.is_channel} is_group:{d.is_group}')
en = d.entity
public = hasattr(en, 'username') and en.username
is_chat = d.is_group and not d.is_channel and not en.deactivated
admin = en.creator or (en.admin_rights and en.admin_rights.invite_users)
if not d.is_channel or is_chat: continue
if public: print(f'Link: https://t.me/{en.username}')
elif admin:
if is_chat: r = await client(_fn.messages.GetFullChatRequest(en.id))
else: r = await client(_fn.channels.GetFullChannelRequest(en))
link = r.full_chat.exported_invite
print(f'Link: {link.link}')
except Exception as e:
print(e)
error(str(row[1]), d.name, str(e))
print('____________E_R_R_O_R___________')
exit()
| Telthon client Get link/url for all Groups or channel | Hello im try to find a solution to get for all groups i have in telgram app to save in one list
If it is a group I can generate a link;
if it is a private channel, can I generate a join URL?
I get the group ID, name and all the other info, but I can't get the URL.
Thanks
`
async for dialog in client.iter_dialogs():
if dialog.is_group:
try:
print('name:{0} ids:{1} is_user:{2} is_channel:{3} is_group:{4}'.format(dialog.name,dialog.id,dialog.is_user,dialog.is_channel,dialog.is_group))
#print(dialog.is_banned)
print(dialog.name)
print(dialog.message.peer_id.channel_id)
print(dialog.entity)
exit()
except Exception as e:
print(e)
error(str(row[1]), str(dialog.name), str(e))
print('____________E_R_R_O_R___________')
exit()
`
i have try some other librari but i dont have find any solution
| [
"\"URL\" is \"https://t.me/username\". if an entity object has username attribute; it's public. else if you're an owner or admin (with needed permissions) you get the invite link by making a seperate request.\nimport telethon.tl.functions as _fn \n\nasync for d in client.iter_dialogs():\n if not d.is_group: continue\n try:\n print(f'Name:{d.name} ID:{d.id} is_user:{d.is_user} is_channel:{d.is_channel} is_group:{d.is_group}')\n\n en = d.entity\n public = hasattr(en, 'username') and en.username\n is_chat = d.is_group and not d.is_channel and not en.deactivated\n admin = en.creator or (en.admin_rights and en.admin_rights.invite_users)\n\n if not d.is_channel or is_chat: continue\n\n if public: print(f'Link: https://t.me/{en.username}')\n\n elif admin:\n if is_chat: r = await client(_fn.messages.GetFullChatRequest(en.id))\n else: r = await client(_fn.channels.GetFullChannelRequest(en))\n\n link = r.full_chat.exported_invite\n print(f'Link: {link.link}')\n except Exception as e:\n print(e)\n error(str(row[1]), d.name, str(e))\n print('____________E_R_R_O_R___________')\n exit()\n\n"
] | [
0
] | [] | [] | [
"api",
"python",
"telegram",
"telethon"
] | stackoverflow_0074649844_api_python_telegram_telethon.txt |
Q:
OSMNX graph to torch_geometry Error: "Could not infer dtype of Point"
I am a freshman in graph neural networks. Recently I have been struggling with doing TGCN on the transportation network.
I have a lot of Geospatial data points with timestamps in one area. I want to map /summarize these data to node and edge features of a graph representing the transportation network.
What I have achieved:
[x] Load the data points to Geopandas Dataframe
[x] Retrieve a graph on the transportation network: The OSMNX helped me a lot and generated a NetworkX graph by retrieving info from OpenStreetMap
[x] Pairing the points to nodes and edges of the NetworkX graph
[x] Generate new node or edge features based on paired points
[x] Build an updated Networkx graph from nodes and edges
[ ] Conduct TGCN training
The next step now is: how do I conduct TGCN training on NetworkX graphs? I only found some tutorials on TGCN modeling using PyTorch Geometric, so I tried to transfer the NetworkX graph from OSMNX to torch geometric. I followed the steps in How to load in graph from networkx into PyTorch geometric and set node features and labels?
The problem is that the referred tutorial builds the graph with NetworkX directly. I filtered out the features that I needed and used OSMNX's ox.graph_from_gdfs(new_nodes, new_edges) to build the NetworkX graph. It is ensured that the features in nodes and edges are numeric (except the geometry column).
new_nodes
new_edges = temp_edges.drop(['osmid','oneway','lanes',
'highway', 'ref', 'name', 'maxspeed', 'bridge', 'access',
'uni', 'max_speed', 'min_speed', 'avg_speed', 'count'], axis=1)
new_edges
new_graph = ox.graph_from_gdfs(new_nodes, new_edges)
new_graph
<networkx.classes.multidigraph.MultiDiGraph at 0x28c70c8ad90>
The error arises when transferring the OSMNX NetworkX graph to torch geometric
import networkx as nx
import numpy as np
import torch
from torch_geometric.utils.convert import from_networkx
# Convert the graph into PyTorch geometric
G = new_graph
pyg_graph = from_networkx(G, group_node_attrs = ["street_count"] , group_edge_attrs = ["length"])
# pyg_graph = from_networkx(G)
print(pyg_graph)
So, does anyone know how to transfer the OSMNX graph to torch_geometric? Or how to train TGCN models on OSMNX graphs? Thanks.
A:
I'm not familiar with OSMNX graphs but I just ran into a similar issue where the error message
RuntimeError: Could not infer dtype of CLASS
comes from from_networkx() or more specifically, the convert() subroutine it called.
I fixed the issue by creating a no-data networkx graph:
no_data_graph = networkx.DiGraph() # change to Graph() if your graph is undirected
no_data_graph.add_nodes_from(G.nodes())
no_data_graph.add_edges_from(G.edges())
I guess in your case you can create a new graph that contains only numeric data fields and remove any data field that is of type Point.
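A rough, untested sketch of that idea, keeping only the numeric attributes used in the question's from_networkx call (note this collapses the OSMnx MultiDiGraph into a plain DiGraph, so parallel edges would be merged):
import networkx as nx
from torch_geometric.utils.convert import from_networkx

# Copy nodes/edges but keep only numeric data fields (drop geometry Points etc.)
H = nx.DiGraph()
for node, data in G.nodes(data=True):
    H.add_node(node, street_count=float(data.get("street_count", 0)))
for u, v, data in G.edges(data=True):
    H.add_edge(u, v, length=float(data.get("length", 0.0)))

pyg_graph = from_networkx(H, group_node_attrs=["street_count"], group_edge_attrs=["length"])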
| OSMNX graph to torch_geometry Error: "Could not infer dtype of Point" | I am a freshman in graph neural networks. Recently I have been struggling with doing TGCN on the transportation network.
I have a lot of Geospatial data points with timestamps in one area. I want to map /summarize these data to node and edge features of a graph representing the transportation network.
What I have achieved:
[x] Load the data points to Geopandas Dataframe
[x] Retrieve a graph on the transportation network: The OSMNX helped me a lot and generated a NetworkX graph by retrieving info from OpenStreetMap
[x] Paring the points to nodes and edges of the NetworkX graph
[x] Generate new node or edge features based on paired points
[x] Build an updated Networkx graph from nodes and edges
[ ] Conduct TGCN training
The next step now is how to conduct TGCN training on NetworkX graphs?? I only found some tutorials on TGCN modeling using Pytorch Geometrics. So I tried to transfer the NetworkX graph from OSMNX to torch geometrics. I followed the steps in How to load in graph from networkx into PyTorch geometric and set node features and labels?
The problem is the referred tutorial build graph with NetworkX directly. I filtered out the features that I needed and used the OSMNX ox.graph_from_gdfs(new_nodes, new_edges) to build the NetworkX. It is ensured that the features in nodes and edges are numeric (except the geometry column).
new_nodes
new_edges = temp_edges.drop(['osmid','oneway','lanes',
'highway', 'ref', 'name', 'maxspeed', 'bridge', 'access',
'uni', 'max_speed', 'min_speed', 'avg_speed', 'count'], axis=1)
new_edges
new_graph = ox.graph_from_gdfs(new_nodes, new_edges)
new_graph
<networkx.classes.multidigraph.MultiDiGraph at 0x28c70c8ad90>
The error arises when transfering the OSMNX NetworkX graph to torch geometric
import networkx as nx
import numpy as np
import torch
from torch_geometric.utils.convert import from_networkx
# Convert the graph into PyTorch geometric
G = new_graph
pyg_graph = from_networkx(G, group_node_attrs = ["street_count"] , group_edge_attrs = ["length"])
# pyg_graph = from_networkx(G)
print(pyg_graph)
So, anyone knows how to transfer the OSMNX graph to torch_geometric? Or How to train TGCN models on OSMNX graphs? Thanks.
| [
"I'm not familiar with OSMNX graphs but I just ran into a similar issue where the error message\n\nRuntimeError: Could not infer dtype of CLASS\n\ncomes from from_networkx() or more specifically, the convert() subroutine it called.\nI fixed the issue by creating a no-data networkx graph:\nno_data_graph = networkx.DiGraph() # change to Graph() if your graph is undirected\nno_data_graph.add_nodes_from(G.nodes())\nno_data_graph.add_edges_from(G.edges())\n\nI guess in your case you can create a new graph that contains only numeric data fields and remove any data field that is of type Point.\n"
] | [
0
] | [] | [] | [
"graph",
"osmnx",
"python",
"pytorch",
"pytorch_geometric"
] | stackoverflow_0073991516_graph_osmnx_python_pytorch_pytorch_geometric.txt |
Q:
Store Values generated in a while loop on a list in python
Just a simple example of what i want to do:
numberOfcalculations = 3
count = 1
while contador <= numberOfcalculations:
num = int(input(' number:'))
num2 = int(input(' other number:'))
calculate = num * num2
print(calculate)
count = count + 1
How do i store the 3 different values that "calculate" will be worth in a list []?
A:
When you initialize calculate as list type you can append values with + operator:
numberOfcalculations = 3
count = 1
calculate = []
while count <= numberOfcalculations:
num = int(input(' number:'))
num2 = int(input(' other number:'))
calculate += [ num * num2 ]
print(calculate)
count = count + 1
Also, the original loop condition uses contador, which is never defined or updated, so as written it would fail (or loop forever if it were defined but never changed); I used count here instead so the user can input 3 different calculations.
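As a side note, list.append is the more common idiom for adding one item at a time; the same loop written with it (a minimal sketch using the variable names above):
numberOfcalculations = 3
calculate = []
for _ in range(numberOfcalculations):
    num = int(input(' number:'))
    num2 = int(input(' other number:'))
    calculate.append(num * num2)  # store each result in the list
print(calculate)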
| Store Values generated in a while loop on a list in python | Just a simple example of what i want to do:
numberOfcalculations = 3
count = 1
while contador <= numberOfcalculations:
num = int(input(' number:'))
num2 = int(input(' other number:'))
calculate = num * num2
print(calculate)
count = count + 1
How do i store the 3 different values that "calculate" will be worth in a list []?
| [
"When you initialize calculate as list type you can append values with + operator:\nnumberOfcalculations = 3\ncount = 1\ncalculate = []\nwhile count <= numberOfcalculations:\n num = int(input(' number:'))\n num2 = int(input(' other number:'))\n \n calculate += [ num * num2 ]\n print(calculate)\n count = count + 1\n\nAlso you have to change contador somehow or you will end up in an infinite loop maybe. I used count here instead to give the user the ability to input 3 different calculations.\n"
] | [
0
] | [] | [] | [
"python"
] | stackoverflow_0074650866_python.txt |
Q:
Trying to print the die value
So, as in the title, I'm trying to print the die value when the program runs, but I'm not sure how to do it.
from random import randint
class Die(object):
def __init__(self):
set.value=1
def roll(self):
self.value=randint(1,6)
def getvalue(self):
return self.value
def __str__(self):
return str(self.value)
I have tried to print self value but that doesn't work so I'm lost
A:
First of all, there is a typo in your init. It should be self.value = 1
To print a dice value after a roll, you first need to create an object of your class Die.
new_dice = Die()
Then, call method roll:
new_dice.roll()
Now, the value after the roll is stored in the object. To print the dice value you can do either of the below:
print(new_dice.getvalue())
print(new_dice)
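Putting the typo fix and the printing together, a corrected version of the class plus a quick usage sketch (the roll before printing is just for illustration) could look like:
from random import randint

class Die(object):
    def __init__(self):
        self.value = 1  # was: set.value = 1

    def roll(self):
        self.value = randint(1, 6)

    def getvalue(self):
        return self.value

    def __str__(self):
        return str(self.value)

die = Die()
die.roll()
print(die)             # uses __str__, prints e.g. 4
print(die.getvalue())  # prints the same value as an int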
| Trying to print the die value | So as in the title Im trying to print the dice value when it runs but im not sure how to do it
from random import randint
class Die(object):
def __init__(self):
set.value=1
def roll(self):
self.value=randint(1,6)
def getvalue(self):
return self.value
def __str__(self):
return str(self.value)
I have tried to print self value but that doesn't work so I'm lost
| [
"First of all, there is a typo in your init. It should be self.value = 1\nTo print a dice value after a roll, you first need to create an object of your class Die.\nnew_dice = Die()\n\nThen, call method roll:\nnew_dice.roll()\n\nNow, the value after the roll is stored in the object. To print the dice value you can do either of the below:\nprint(new_dice.getvalue())\n\nprint(new_dice)\n\n"
] | [
0
] | [] | [] | [
"python"
] | stackoverflow_0074650224_python.txt |
Q:
Content pulled via Headless Selenium Chromedriver does not reflect dynamically updating content on webpage (as it does in "headful" mode)
TL;DR: content from a webpage that is known to dynamically update over time only updates in the headful Chromedriver, but does not dynamically update if the Chromedriver is headless. How can I preserve the headful updates in the headless driver condition?
I am using Python Selenium (version=3.141.0) Chromedriver (chromedriver version = 104.0.5112.79; browser version = 105.0.5195.125) to pull information from websites that dynamically update their content over time in the absence of explicit browser refreshes, e.g:
https://www.paddypower.com/football?tab=in-play
If I run a "headful" Chromedriver (e.g. without passing the headless=True argument when instantiating the driver) and pull the data, the pulled content reflects the updated information over time without having to explicitly refresh the page, i.e. every time I pull I get the most up-to-date information without having to run driver.refresh() (note my pulls simply send JavaScript commands via the driver to the webpage to pull all text from specific elements)
However, if I run my exact same data pulls but now with a headless Chromedriver, I can only ever pull the information that was displayed on the page at the time of the driver's deployment, and repeated pulls after this do not reflect changes in that page's information over time unless I explicitly refresh the page (now using driver.refresh()).
Note I want to avoid explicit page refreshes as they can take significant time, and I want to avoid using headful Chromedrivers as I want to open several pages simultaneously.
I routinely pass the following arguments to Chromedriver, none make a difference:
options = Options()
options.headless=headless
options.add_argument('window-size=2000x1500')
options.add_argument('--no-proxy-server')
options.add_argument("--proxy-server='direct://'");
options.add_argument("--proxy-bypass-list=*");
options.add_argument('--disable-gpu');
# bypass OS security
options.add_argument('--no-sandbox')
# don't tell chrome that it is automated
options.add_experimental_option(
"excludeSwitches", ["enable-automation"])
options.add_experimental_option('useAutomationExtension', False)
# disable images
prefs = {"profile.managed_default_content_settings.images": 2}
options.add_experimental_option("prefs", prefs)
Thanks for any help you can give!
A:
The Chromium developers recently added a 2nd headless mode that functions the same way as normal Chrome.
--headless=chrome
(the old way was: --headless or options.headless = True)
The New Headless Mode Usage:
options.add_argument("--headless=chrome")
You should be able to use that mode and get the same results as headed mode.
There's more info on that here: https://bugs.chromium.org/p/chromium/issues/detail?id=706008#c36
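A minimal end-to-end sketch of using that flag (hedged: assumes a recent, matching Chrome/chromedriver pair, and the URL is just a placeholder):
from selenium import webdriver

options = webdriver.ChromeOptions()
options.add_argument("--headless=chrome")  # new headless mode that behaves like headed Chrome
driver = webdriver.Chrome(options=options)
driver.get("https://example.com")          # placeholder URL
print(driver.title)
driver.quit()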
A:
To add some extra help to the answer provided by Michael Mintz, which is very good:
Create a headless browser like so:
options = webdriver.ChromeOptions()
options.add_argument("--headless=chrome")
It's good to note that when this crashes, the browser will still be open. In this case, trying to open a new browser might result in an error if you are using chrome user profiles:
(The process started from chrome location C:\Program Files\Google\Chrome\Application\chrome.exe is no longer running, so ChromeDriver is assuming that Chrome has crashed.)
It'll just be invisible and might be difficult to close. It can be closed by the following:
import os
os.system("taskkill /f /im geckodriver.exe /T")
os.system("taskkill /f /im chromedriver.exe /T")
os.system("taskkill /f /im IEDriverServer.exe /T")
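If the goal is simply to avoid orphaned invisible browsers, a portable complement to the taskkill commands is to make sure driver.quit() always runs, for example:
try:
    driver.get("https://example.com")  # placeholder for the actual scraping work
    # ... pull data ...
finally:
    driver.quit()  # closes the (headless) browser even if the code above raises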
| Content pulled via Headless Selenium Chromedriver does not reflect dynamically updating content on webpage (as it does in "headful" mode) | TL;DR: content from a webpage that is known to dynamically update over time only updates in the headful Chromedriver, but does not dynamically update if the Chromedriver is headless. How can I preserve the headful updates in the headless driver condition?
I am using Python Selenium (version=3.141.0) Chromedriver (chromedriver version = 104.0.5112.79; browser version = 105.0.5195.125) to pull information from websites that dynamically update their content over time in the absence of explicit browser refreshes, e.g:
https://www.paddypower.com/football?tab=in-play
If I run a "headful" Chromedriver (e.g. without passing the headless=True argument when instantiating the driver) and pull the data, the pulled content reflects the updated information over time without having to explicitly refresh the page, i.e. every time I pull I get the most up-to-date information without having to run driver.refresh() (note my pulls simply send JavaScript commands via the driver to the webpage to pull all text from specific elements)
However, if I run my exact same data pulls but now with a headless Chromedriver, I can only ever pull the information that was displayed on the page at the time of the driver's deployment, and repeated pulls after this do not reflect changes in that page's information over time unless I explicitly refresh the page (now using driver.refresh()).
Note I want to avoid explicit page refreshes as they can take significant time, and I want to avoid using headful Chromedrivers as I want to open several pages simultaneously.
I routinely pass the following arguments to Chromedriver, none make a difference:
options = Options()
options.headless=headless
options.add_argument('window-size=2000x1500')
options.add_argument('--no-proxy-server')
options.add_argument("--proxy-server='direct://'");
options.add_argument("--proxy-bypass-list=*");
options.add_argument('--disable-gpu');
# bypass OS security
options.add_argument('--no-sandbox')
# don't tell chrome that it is automated
options.add_experimental_option(
"excludeSwitches", ["enable-automation"])
options.add_experimental_option('useAutomationExtension', False)
# disable images
prefs = {"profile.managed_default_content_settings.images": 2}
options.add_experimental_option("prefs", prefs)
Thanks for any help you can give!
| [
"The Chromium developers recently added a 2nd headless mode that functions the same way as normal Chrome.\n--headless=chrome\n(the old way was: --headless or options.headless = True)\nThe New Headless Mode Usage:\noptions.add_argument(\"--headless=chrome\")\n\nYou should be able to use that mode and get the same results as headed mode.\nThere's more info on that here: https://bugs.chromium.org/p/chromium/issues/detail?id=706008#c36\n",
"To provide extra help to the answer provied by Michael Mintz is very good.\nCreate a headless browser like so:\noptions = webdriver.ChromeOptions() \noptions.add_argument(\"--headless=chrome\")\n\nIt's good to note that when this crashes, the browser will still be open. In this case, trying to open a new browser might result in an error if you are using chrome user profiles:\n (The process started from chrome location C:\\Program Files\\Google\\Chrome\\Application\\chrome.exe is no longer running, so ChromeDriver is assuming that Chrome has crashed.)\n\nIt'll just be invisible and might be difficult to close. It can be closed by the following:\nimport os\n\nos.system(\"taskkill /f /im geckodriver.exe /T\")\nos.system(\"taskkill /f /im chromedriver.exe /T\")\nos.system(\"taskkill /f /im IEDriverServer.exe /T\")\n\n"
] | [
0,
0
] | [] | [] | [
"headless",
"python",
"selenium",
"selenium_chromedriver",
"selenium_webdriver"
] | stackoverflow_0073846185_headless_python_selenium_selenium_chromedriver_selenium_webdriver.txt |
Q:
What's wrong here and why is 'int' not "iterable"
I am making some type of "encoder" program which uses 16 'int' functions in total, but for some reason, they raise an error because they are not "iterable".
This is the code ("from text to numbers" and "from numbers to text" aren't actual code lines):
from text to numbers
chara=str(numbers[int(chara,36)-10])
charb=str(numbers[int(charb,36)-10])
charc=str(numbers[int(charc,36)-10])
chard=str(numbers[int(chard,36)-10])
chare=str(numbers[int(chare,36)-10])
charf=str(numbers[int(charf,36)-10])
charg=str(numbers[int(charg,36)-10])
charh=str(numbers[int(charh,36)-10])
From numbers to text
chara=letters[int(chara)]
charb=letters[int(charb)]
charc=letters[int(charc)]
chard=letters[int(chard)]
chare=letters[int(chare)]
charf=letters[int(charf)]
charg=letters[int(charg)]
charh=letters[int(charh)]
Please note that I'm Argentinian and I don't know the meaning of "iterable" because I haven't learned that word in school yet. Also note that I'm a newbie to coding and I'm 14 years old, so what might seem easy (a piece of cake) for you isn't for me; please don't treat me like garbage like some memes suggest.
Answering the comments, this is the whole code:
numbers = [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26]
letters = [" ", "A", "B", "C", "D", "E", "F", "G", "H", "I", "J", "K", "L", "M", "N", "O", "P", "Q", "R", "S", "T", "U", "V", "W", "X", "Y", "Z"]
characters=["0", "0", "0", "0", "0", "0", "0", "0"]
def num_to_let(number):
chara=0
charb=0
charc=0
chard=0
chare=0
charf=0
charg=0
charh=0
index=0
n=number
for x in n:
characters[index]=x
print(characters)
index= index+1
print(index)
chara=characters[0]
charb=characters[1]
charc=characters[2]
chard=characters[3]
chare=characters[4]
charf=characters[5]
charg=characters[6]
charh=characters[7]
chara=letters[int(chara)]
charb=letters[int(charb)]
charc=letters[int(charc)]
chard=letters[int(chard)]
chare=letters[int(chare)]
charf=letters[int(charf)]
charg=letters[int(charg)]
charh=letters[int(charh)]
print(chara, charb, charc, chard, chare, charf, charg, charh)
def let_to_num(text):
chara=0
charb=0
charc=0
chard=0
chare=0
charf=0
charg=0
charh=0
index=0
t=text
for x in t:
characters[index]=x
print(characters)
index= index+1
print(index)
chara=characters[0]
charb=characters[1]
charc=characters[2]
chard=characters[3]
chare=characters[4]
charf=characters[5]
charg=characters[6]
charh=characters[7]
print(chara, charb, charc, chard, chare, charf, charg, charh)
chara=str(numbers[int(chara,36)-10])
charb=str(numbers[int(charb,36)-10])
charc=str(numbers[int(charc,36)-10])
chard=str(numbers[int(chard,36)-10])
chare=str(numbers[int(chare,36)-10])
charf=str(numbers[int(charf,36)-10])
charg=str(numbers[int(charg,36)-10])
charh=str(numbers[int(charh,36)-10])
print(chara+charb+charc+chard+chare+charf+charg+charh)
num_to_let(91942473)
Copy it, paste it into an IDE, examine it, run it as many times as necessary, then try to answer xd
The final "num_to_let(91942473)" is a testing function call; without that the error just wouldn't appear.
A:
Your error comes from for x in n: in your num_to_let function. In Python, we can't use a for loop to traverse through an integer, because Python doesn't know how.
Instead, you can iterate/traverse through a string. (it is a collection of characters, so Python knows how to traverse through it) Now you can actually cast an integer into a string, which basically means converting its type to a string. You can do so with the str() function. (Read more here) Now seeing as this is actually the goal of the function, perhaps you will use this function instead.
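For example, the smallest change along those lines (a sketch of only the top of num_to_let, everything else unchanged) would be to cast before looping:
def num_to_let(number):
    n = str(number)  # cast the int to a string so the for loop can iterate over its digits
    for index, x in enumerate(n):
        characters[index] = x
    # ... rest of the function unchanged ...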
| What's wrong here and why is 'int' not "iterable" | I am making some type of "encoder" program which uses 16 'int' functions in total, but for some reason, they raise an error bcuz they are not "iterable"
This is the code(from text to numbers and from numbers to text aren't actual code lines):
from text to numbers
chara=str(numbers[int(chara,36)-10])
charb=str(numbers[int(charb,36)-10])
charc=str(numbers[int(charc,36)-10])
chard=str(numbers[int(chard,36)-10])
chare=str(numbers[int(chare,36)-10])
charf=str(numbers[int(charf,36)-10])
charg=str(numbers[int(charg,36)-10])
charh=str(numbers[int(charh,36)-10])
From numbers to text
chara=letters[int(chara)]
charb=letters[int(charb)]
charc=letters[int(charc)]
chard=letters[int(chard)]
chare=letters[int(chare)]
charf=letters[int(charf)]
charg=letters[int(charg)]
charh=letters[int(charh)]
Please note that i'm argentinian and i dont know the meaning of "iterable" bcuz i havent learned that word in school yet, also notice that im a newbie to coding and im 14 years old, so what might seem easy/a piece of cake for you, it isn't for me, dont threat me as garbage like some memes suggest
Answering to the comments, this is the whole code
numbers = [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26]
letters = [" ", "A", "B", "C", "D", "E", "F", "G", "H", "I", "J", "K", "L", "M", "N", "O", "P", "Q", "R", "S", "T", "U", "V", "W", "X", "Y", "Z"]
characters=["0", "0", "0", "0", "0", "0", "0", "0"]
def num_to_let(number):
chara=0
charb=0
charc=0
chard=0
chare=0
charf=0
charg=0
charh=0
index=0
n=number
for x in n:
characters[index]=x
print(characters)
index= index+1
print(index)
chara=characters[0]
charb=characters[1]
charc=characters[2]
chard=characters[3]
chare=characters[4]
charf=characters[5]
charg=characters[6]
charh=characters[7]
chara=letters[int(chara)]
charb=letters[int(charb)]
charc=letters[int(charc)]
chard=letters[int(chard)]
chare=letters[int(chare)]
charf=letters[int(charf)]
charg=letters[int(charg)]
charh=letters[int(charh)]
print(chara, charb, charc, chard, chare, charf, charg, charh)
def let_to_num(text):
chara=0
charb=0
charc=0
chard=0
chare=0
charf=0
charg=0
charh=0
index=0
t=text
for x in t:
characters[index]=x
print(characters)
index= index+1
print(index)
chara=characters[0]
charb=characters[1]
charc=characters[2]
chard=characters[3]
chare=characters[4]
charf=characters[5]
charg=characters[6]
charh=characters[7]
print(chara, charb, charc, chard, chare, charf, charg, charh)
chara=str(numbers[int(chara,36)-10])
charb=str(numbers[int(charb,36)-10])
charc=str(numbers[int(charc,36)-10])
chard=str(numbers[int(chard,36)-10])
chare=str(numbers[int(chare,36)-10])
charf=str(numbers[int(charf,36)-10])
charg=str(numbers[int(charg,36)-10])
charh=str(numbers[int(charh,36)-10])
print(chara+charb+charc+chard+chare+charf+charg+charh)
num_to_let(91942473)
Copy it, paste it into an ide, examine it, run it as many times as necessary, then try to answer xd
The final "num_to_let(91942473)" is a testing function call, without that the error just wouldn't appear
| [
"Your error comes from for x in n: in your num_to_let function. In Python, we can't use a for loop to traverse through an integer, because Python doesn't know how.\nInstead, you can iterate/traverse through a string. (it is a collection of characters, so Python knows how to traverse through it) Now you can actually cast an integer into a string, which basically means converting its type to a string. You can do so with the str() function. (Read more here) Now seeing as this is actually the goal of the function, perhaps you will use this function instead.\n"
] | [
1
] | [] | [] | [
"integer",
"iterable",
"python"
] | stackoverflow_0074647861_integer_iterable_python.txt |
Q:
List directory tree structure in python from a list of path file
The question is intended to broaden the scope of a question already answered on stackoverflow by the topic "List directory tree structure in python?".
The goal is to form a list of strings that visually represent a directory tree, with branches.
But instead of the input being a valid directory path (as in the already answered topic),
the goal is to generate the same behavior with a list of file paths as input.
Naturally the function needs to be recursive to accommodate any depth of files.
Example
input:
['main_folder\\file01.txt',
'main_folder\\file02.txt',
'main_folder\\folder_sub1\\file03.txt',
'main_folder\\folder_sub1\\file04.txt',
'main_folder\\folder_sub1\\file05.txt',
'main_folder\\folder_sub1\\folder_sub1-1\\file06.txt',
'main_folder\\folder_sub1\\folder_sub1-1\\file07.txt',
'main_folder\\folder_sub1\\folder_sub1-1\\file08.txt',
'main_folder\\folder_sub2\\file09.txt',
'main_folder\\folder_sub2\\file10.txt',
'main_folder\\folder_sub2\\file11.txt']
output:
├── file01.txt
├── file02.txt
├── folder_sub1
│ ├── file03.txt
│ ├── file04.txt
│ ├── file05.txt
│ └── folder_sub1-1
│ ├── file06.txt
│ ├── file07.txt
│ └── file08.txt
└── folder_sub2
├── file09.txt
├── file10.txt
└── file11.txt
Transforming the list of file paths into nested dictionaries representing the structure of a directory has been answered in the topic "Python convert path to dict".
With this output:
{'main_folder': {'file01.txt': 'txt',
'file02.txt': 'txt',
'folder_sub1': {'file03.txt': 'txt',
'file04.txt': 'txt',
'file05.txt': 'txt',
'folder_sub1-1': {'file06.txt': 'txt',
'file07.txt': 'txt',
'file08.txt': 'txt'}},
'folder_sub2': {'file09.txt': 'txt',
'file10.txt': 'txt',
'file11.txt': 'txt'}}}
But generating the beautiful layout with branchs remains unsolved.
A:
Possible solution:
paths = {
'main_folder': {
'file01.txt': 'txt',
'file02.txt': 'txt',
'folder_sub1': {
'file03.txt': 'txt',
'file04.txt': 'txt',
'file05.txt': 'txt',
'folder_sub1-1': {
'file06.txt': 'txt',
'file07.txt': 'txt',
'file08.txt': 'txt'
}
},
'folder_sub2': {
'file09.txt': 'txt',
'file10.txt': 'txt',
'file11.txt': 'txt'
}
}
}
# prefix components:
space = ' '
branch = '│ '
# pointers:
tee = '├── '
last = '└── '
def tree(paths: dict, prefix: str = ''):
"""A recursive generator, given a directory Path object
will yield a visual tree structure line by line
with each line prefixed by the same characters
"""
# contents each get pointers that are ├── with a final └── :
pointers = [tee] * (len(paths) - 1) + [last]
for pointer, path in zip(pointers, paths):
yield prefix + pointer + path
if isinstance(paths[path], dict): # extend the prefix and recurse:
extension = branch if pointer == tee else space
# i.e. space because last, └── , above so no more |
yield from tree(paths[path], prefix=prefix+extension)
for line in tree(paths):
print(line)
Ref: List directory tree structure in python?
A:
bigtree is a Python tree implementation that integrates with Python lists, dictionaries, and pandas DataFrame.
For this scenario, we can use 3 lines of code,
path_list = [
'main_folder\\file01.txt',
'main_folder\\file02.txt',
'main_folder\\folder_sub1\\file03.txt',
'main_folder\\folder_sub1\\file04.txt',
'main_folder\\folder_sub1\\file05.txt',
'main_folder\\folder_sub1\\folder_sub1-1\\file06.txt',
'main_folder\\folder_sub1\\folder_sub1-1\\file07.txt',
'main_folder\\folder_sub1\\folder_sub1-1\\file08.txt',
'main_folder\\folder_sub2\\file09.txt',
'main_folder\\folder_sub2\\file10.txt',
'main_folder\\folder_sub2\\file11.txt']
from bigtree import list_to_tree, print_tree
root = list_to_tree(path_list, sep="\\")
print_tree(root)
This will result in output,
main_folder
├── file01.txt
├── file02.txt
├── folder_sub1
│ ├── file03.txt
│ ├── file04.txt
│ ├── file05.txt
│ └── folder_sub1-1
│ ├── file06.txt
│ ├── file07.txt
│ └── file08.txt
└── folder_sub2
├── file09.txt
├── file10.txt
└── file11.txt
Source/Disclaimer: I'm the creator of bigtree ;)
A:
A little modification to @rafaeldss's answer
# prefix components:
space = ' '
branch = '│ '
# pointers:
tee = '├── '
last = '└── '
def tree(paths: dict, prefix: str = '', first: bool = True):
"""A recursive generator, given a directory Path object
will yield a visual tree structure line by line
with each line prefixed by the same characters
"""
# contents each get pointers that are ├── with a final └── :
pointers = [tee] * (len(paths) - 1) + [last]
for pointer, path in zip(pointers, paths):
if first:
yield prefix + path
else:
yield prefix + pointer + path
if isinstance(paths[path], dict): # extend the prefix and recurse:
if first:
extension = ''
else:
extension = branch if pointer == tee else space
# i.e. space because last, └── , above so no more │
yield from tree(paths[path], prefix=prefix+extension, first=False)
for line in tree(paths):
print(line)
diffs
| List directory tree structure in python from a list of path file | The question is intended to broaden the scope of a question already answered on stackoverflow by the topic "List directory tree structure in python?".
The goal is to form a list of strings that visually represent a directory tree, with branches.
But instead of the input being a valid directory path (as in the already answered topic),
the goal here is to produce the same behavior with a list of file paths as input.
Naturally the function needs to be recursive to accommodate any depth of files.
Example
input:
['main_folder\\file01.txt',
'main_folder\\file02.txt',
'main_folder\\folder_sub1\\file03.txt',
'main_folder\\folder_sub1\\file04.txt',
'main_folder\\folder_sub1\\file05.txt',
'main_folder\\folder_sub1\\folder_sub1-1\\file06.txt',
'main_folder\\folder_sub1\\folder_sub1-1\\file07.txt',
'main_folder\\folder_sub1\\folder_sub1-1\\file08.txt',
'main_folder\\folder_sub2\\file09.txt',
'main_folder\\folder_sub2\\file10.txt',
'main_folder\\folder_sub2\\file11.txt']
output:
├── file01.txt
├── file02.txt
├── folder_sub1
│ ├── file03.txt
│ ├── file04.txt
│ ├── file05.txt
│ └── folder_sub1-1
│ ├── file06.txt
│ ├── file07.txt
│ └── file08.txt
└── folder_sub2
├── file09.txt
├── file10.txt
└── file11.txt
Transforming the list of file paths into nested dictionaries representing the structure of a directory has been answered in the topic "Python convert path to dict".
With this output:
{'main_folder': {'file01.txt': 'txt',
'file02.txt': 'txt',
'folder_sub1': {'file03.txt': 'txt',
'file04.txt': 'txt',
'file05.txt': 'txt',
'folder_sub1-1': {'file06.txt': 'txt',
'file07.txt': 'txt',
'file08.txt': 'txt'}},
'folder_sub2': {'file09.txt': 'txt',
'file10.txt': 'txt',
'file11.txt': 'txt'}}}
But generating the beautiful layout with branches remains unsolved.
| [
"Possible solution:\npaths = {\n 'main_folder': {\n 'file01.txt': 'txt',\n 'file02.txt': 'txt',\n 'folder_sub1': {\n 'file03.txt': 'txt',\n 'file04.txt': 'txt',\n 'file05.txt': 'txt',\n 'folder_sub1-1': {\n 'file06.txt': 'txt',\n 'file07.txt': 'txt',\n 'file08.txt': 'txt'\n }\n },\n 'folder_sub2': {\n 'file09.txt': 'txt',\n 'file10.txt': 'txt',\n 'file11.txt': 'txt'\n }\n }\n}\n\n\n# prefix components:\nspace = ' '\nbranch = '│ '\n# pointers:\ntee = '├── '\nlast = '└── '\n\ndef tree(paths: dict, prefix: str = ''):\n \"\"\"A recursive generator, given a directory Path object\n will yield a visual tree structure line by line\n with each line prefixed by the same characters\n \"\"\"\n # contents each get pointers that are ├── with a final └── :\n pointers = [tee] * (len(paths) - 1) + [last]\n for pointer, path in zip(pointers, paths):\n yield prefix + pointer + path\n if isinstance(paths[path], dict): # extend the prefix and recurse:\n extension = branch if pointer == tee else space\n # i.e. space because last, └── , above so no more |\n yield from tree(paths[path], prefix=prefix+extension)\n\n\nfor line in tree(paths):\n print(line)\n\nRef: List directory tree structure in python?\n",
"bigtree is a Python tree implementation that integrates with Python lists, dictionaries, and pandas DataFrame.\nFor this scenario, we can use 3 lines of code,\npath_list = [\n 'main_folder\\\\file01.txt',\n 'main_folder\\\\file02.txt',\n 'main_folder\\\\folder_sub1\\\\file03.txt',\n 'main_folder\\\\folder_sub1\\\\file04.txt',\n 'main_folder\\\\folder_sub1\\\\file05.txt',\n 'main_folder\\\\folder_sub1\\\\folder_sub1-1\\\\file06.txt',\n 'main_folder\\\\folder_sub1\\\\folder_sub1-1\\\\file07.txt',\n 'main_folder\\\\folder_sub1\\\\folder_sub1-1\\\\file08.txt',\n 'main_folder\\\\folder_sub2\\\\file09.txt',\n 'main_folder\\\\folder_sub2\\\\file10.txt',\n 'main_folder\\\\folder_sub2\\\\file11.txt']\n\nfrom bigtree import list_to_tree, print_tree\nroot = list_to_tree(path_list, sep=\"\\\\\")\nprint_tree(root)\n\nThis will result in output,\nmain_folder\n├── file01.txt\n├── file02.txt\n├── folder_sub1\n│ ├── file03.txt\n│ ├── file04.txt\n│ ├── file05.txt\n│ └── folder_sub1-1\n│ ├── file06.txt\n│ ├── file07.txt\n│ └── file08.txt\n└── folder_sub2\n ├── file09.txt\n ├── file10.txt\n └── file11.txt\n\nSource/Disclaimer: I'm the creator of bigtree ;)\n",
"A little modification to @rafaeldss's answer\n# prefix components:\nspace = ' '\nbranch = '│ '\n# pointers:\ntee = '├── '\nlast = '└── '\n\ndef tree(paths: dict, prefix: str = '', first: bool = True):\n \"\"\"A recursive generator, given a directory Path object\n will yield a visual tree structure line by line\n with each line prefixed by the same characters\n \"\"\"\n # contents each get pointers that are ├── with a final └── :\n pointers = [tee] * (len(paths) - 1) + [last]\n for pointer, path in zip(pointers, paths):\n if first:\n yield prefix + path\n else:\n yield prefix + pointer + path\n if isinstance(paths[path], dict): # extend the prefix and recurse:\n if first:\n extension = ''\n else:\n extension = branch if pointer == tee else space\n # i.e. space because last, └── , above so no more │\n yield from tree(paths[path], prefix=prefix+extension, first=False)\n\nfor line in tree(paths):\n print(line)\n\ndiffs\n"
] | [
2,
1,
0
] | [] | [] | [
"directory_structure",
"python",
"tree",
"treeview"
] | stackoverflow_0072618673_directory_structure_python_tree_treeview.txt |
Q:
Python block/halt on importing torch
I have developed a deep-learning object-detection program based on pytorch and it works very well. Today I deployed this program on a PC; everything went well, but the program cannot be launched. Debugging shows that the program blocks, or halts, on importing pytorch.
Simply start a python prompt, type import torch, and the prompt blocks. The top command shows that CPU/memory usage is very low. Pressing ctrl-c cannot stop the prompt. Importing other libraries is fine: I have tried pycrypto and one I wrote myself, and they all work, but pytorch does not.
I have deployed more than 100 times but never met this situation. I also tried to reinstall pytorch, from 1.6 to 1.4, and torchvision from 0.7 to 0.5, and it still does not work. No error is printed, no complaint shown.
Environment:
OS: centos 7.4
CUDA: 10.0
NVIDIA driver: 440.82
GPU: GTX 1660
python: 3.6
pytorch version: 1.6 and 1.4.
Any information is welcome, thanks in advance.
Edit:
Following Szymon's idea, I ran python3 foo.py (which contains only import torch) and pressed ctrl-c; the prompt printed:
Traceback (most recent call last):
File "foo.py", line 1, in <module>
import torch
File "/usr/local/lib64/python3.6/site-packages/torch/__init__.py", line 48, in <module>
if platform.system() == 'Windows':
File "/usr/lib64/python3.6/platform.py", line 1068, in system
return uname().system
File "/usr/lib64/python3.6/platform.py", line 1034, in uname
processor = _syscmd_uname('-p', '')
File "/usr/lib64/python3.6/platform.py", line 788, in _syscmd_uname
f = os.popen('uname %s 2> %s' % (option, DEV_NULL))
File "/usr/lib64/python3.6/os.py", line 980, in popen
bufsize=buffering)
File "/usr/lib64/python3.6/subprocess.py", line 729, in __init__
restore_signals, start_new_session)
File "/usr/lib64/python3.6/subprocess.py", line 1318, in _execute_child
part = os.read(errpipe_read, 50000)
It seems that python hangs on running uname <option>, so I tried uname -a 2> /dev/null on the command line; it returned immediately and nothing looked strange. I also created a file named bar.py with the content below:
import platform
print(platform.system())
and ran it with python3; it works well and printed 'Linux'. I don't think that is the reason, maybe it is just a coincidence?
I also tried a few more times; the other runs looked like running import torch in the python prompt: I could not kill the process and nothing was printed. Once it printed Soft lock up on CPU#4; I thought that was caused by the test process from the last disconnected session.
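One way to see exactly where the interpreter is stuck without pressing ctrl-c (a diagnostic sketch only, not a fix; the 15-second timeout is an arbitrary choice) is to arm faulthandler before the import:
# hang_check.py - dump all thread stacks and abort if the import takes longer than 15s
import faulthandler

faulthandler.dump_traceback_later(15, exit=True)
import torch  # the suspected hanging import
faulthandler.cancel_dump_traceback_later()

print("torch imported:", torch.__version__)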
| Python block/halt on importing torch | I have developed a deep-learning object-detect program based on pytorch and it works very well. Today I deploy this program on a PC, everything goes well, but the program cannot be launched. Debug and find out that, the program blocks, or halts, on importing pytorch.
Simply start a python prompt, type import torch and the prompt blocks. top command shows that CPU/memory usage is very low. Press ctrl-c cannot stop the prompt. While other library importing is fine. I have tried pycrypto and the one I wrote myself, all work but pytorch cannot.
I have deployed more than 100 times, but never meet this situation. I also tried to reinstall pytorch, from 1.6 to 1.4, torchvision from 0.7 to 0.5, still not work. No error printed, no complain shown.
Environment:
OS: centos 7.4
CUDA: 10.0
NVIDIA driver: 440.82
GPU: GTX 1660
python: 3.6
pytorch version: 1.6 and 1.4.
Any information are welcome, thanks in advance.
Edit:
According to Szymon's idea, running python3 foo.py, which with only import torch in it, and press ctrl-c, the prompt prints:
Traceback (most recent call last):
File "foo.py", line 1, in <module>
import torch
File "/usr/local/lib64/python3.6/site-packages/torch/__init__.py", line 48, in <module>
if platform.system() == 'Windows':
File "/usr/lib64/python3.6/platform.py", line 1068, in system
return uname().system
File "/usr/lib64/python3.6/platform.py", line 1034, in uname
processor = _syscmd_uname('-p', '')
File "/usr/lib64/python3.6/platform.py", line 788, in _syscmd_uname
f = os.popen('uname %s 2> %s' % (option, DEV_NULL))
File "/usr/lib64/python3.6/os.py", line 980, in popen
bufsize=buffering)
File "/usr/lib64/python3.6/subprocess.py", line 729, in __init__
restore_signals, start_new_session)
File "/usr/lib64/python3.6/subprocess.py", line 1318, in _execute_child
part = os.read(errpipe_read, 50000)
It seems that python hangs on running uname <option>, so I tried uname -a 2> /dev/null on command line, it returned immediately and nothing strange. Also created a file named bar.py with the content below:
import platform
print(platform.system())
and run it with python3, works well, printed 'Linux'. I don't think it is the reason, maybe just a coincidence?
I also tried more times, other situation looked like running import torch in python prompt, could not kill the process and nothing printed. Once it printed Soft lock up on CPU#4, I thought it was caused by the test process in the last disconnected session.
| [] | [] | [
"In my case, in my directory with the main python file, there was also a file named signal.py, when i renamed it to signal1.py everything started working. So try to find some other python files in your directory and try rename it.\n"
] | [
-1
] | [
"python",
"pytorch"
] | stackoverflow_0064278495_python_pytorch.txt |
Q:
RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cuda:0 and cpu, using Google Colab GPU environment
I'm working with this notebook right now, using Google Colab GPU environment. When I execute the block containing the following code
with torch.no_grad():
generated_images = vae.decode(generated_image_codes)
I got the following error:
---------------------------------------------------------------------------
RuntimeError Traceback (most recent call last)
<ipython-input-36-56287126db2f> in <module>
1 with torch.no_grad():
----> 2 generated_images = vae.decode(generated_image_codes)
3 frames
/usr/local/lib/python3.8/dist-packages/torch/nn/functional.py in embedding(input, weight, padding_idx, max_norm, norm_type, scale_grad_by_freq, sparse)
2197 # remove once script supports set_grad_enabled
2198 _no_grad_embedding_renorm_(weight, input, max_norm, norm_type)
-> 2199 return torch.embedding(weight, input, padding_idx, scale_grad_by_freq, sparse)
2200
2201
RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cuda:0 and cpu! (when checking argument for argument index in method wrapper__index_select)
I tried commenting out all the previous blocks and searched for solutions in similar questions, but nothing helped to solve the problem. Can anyone help me with this?
A:
If you executed the code blocks sequentially, both generated_image_codes and vae should be on the same device, i.e. CPU.
generated_image_codes = torch.cat(generated_image_codes, axis=0).cpu()
and
torch.cuda.empty_cache()
vae.cpu()
To double check, you can run
print(generated_image_codes.device)
print(next(vae.parameters()).device)
Expected output for both is
cpu
cpu
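If the intent is instead to run the decode on the GPU, the opposite move also works; this is just a sketch that reuses the notebook's variable names and assumes generated_image_codes is already a single tensor:
import torch

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
vae.to(device)                                             # move the model parameters
generated_image_codes = generated_image_codes.to(device)   # move the input tensor

with torch.no_grad():
    generated_images = vae.decode(generated_image_codes)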
| RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cuda:0 and cpu, using Google Colab GPU environment | I'm working with this notebook right now, using Google Colab GPU environment. When I execute the block containing the following code
with torch.no_grad():
generated_images = vae.decode(generated_image_codes)
I got the following error:
---------------------------------------------------------------------------
RuntimeError Traceback (most recent call last)
<ipython-input-36-56287126db2f> in <module>
1 with torch.no_grad():
----> 2 generated_images = vae.decode(generated_image_codes)
3 frames
/usr/local/lib/python3.8/dist-packages/torch/nn/functional.py in embedding(input, weight, padding_idx, max_norm, norm_type, scale_grad_by_freq, sparse)
2197 # remove once script supports set_grad_enabled
2198 _no_grad_embedding_renorm_(weight, input, max_norm, norm_type)
-> 2199 return torch.embedding(weight, input, padding_idx, scale_grad_by_freq, sparse)
2200
2201
RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cuda:0 and cpu! (when checking argument for argument index in method wrapper__index_select)
I tried to comment all the previous block and searched for solution in similar questions, but nothing helped to solve the problem. Can anyone give me help about this?
| [
"If you executed the code blocks sequentially, both generated_image_codes and vae should be on the same device, i.e. CPU.\ngenerated_image_codes = torch.cat(generated_image_codes, axis=0).cpu()\n\nand\ntorch.cuda.empty_cache()\nvae.cpu()\n\nTo double check, you can run\nprint(generated_image_codes.device)\nprint(next(vae.parameters()).device)\n\nExpected outputs are\ntorch.device(\"cpu\")\ntorch.device(\"cpu\")\n\n"
] | [
0
] | [] | [] | [
"python",
"pytorch"
] | stackoverflow_0074647808_python_pytorch.txt |
Q:
A problem with output of the program [estimated end of pandemic]
I used a mathematical formula to calculate the estimated number of weeks until the end of the pandemic with respect to each country's data. It was supposed to be a harmonic sequence that accounts for uncertainty, vaccinated people and vaccine doses per 100 people. The output for all countries is the same, which concerns me. I understand that my formula might be wrong, so please help me fix this problem.
def statistica():
country = input('Which country you are from? ')
infected_per_week = 0
#infected per day to find the uncertainty for parameters
infected_per_day = 0
#last week average to find difference
infected_last_week = 0
#Total vaccine doses administered per 100 population
vaccine = 0
#Persons fully vaccinated with last dose of primary series per 100 population
vaccinated = 0
#population of the country
population1 = 0
if country == 'Republic of Korea' or 'Korea' or 'South Korea':
population1 += 52000000
infected_per_week += 376590
vaccinated += 87166
vaccine += 257
infected_per_day += 71476
infected_last_week += 373681
elif country == 'Brazil':
population1 = population1 + 527000000
infected_per_week += 177052
vaccinated += 78932
vaccine += 230
infected_per_day += 39083
infected_last_week += 153292
average = int(infected_per_week / 7)
uncertainty = average - infected_per_day
difference = (1 - (int(infected_last_week / 7) * int(average))) / (int(average))
not_vaccinated = 100000 - vaccinated
all_not_vaccinated = population1 * (not_vaccinated / 100000)
vaccine_for_all = all_not_vaccinated / (vaccine / 100)
n_term_pos_unc = ((all_not_vaccinated - (infected_last_week/7) + uncertainty + difference) / (-difference))
n_term_neg_unc = ((all_not_vaccinated - (infected_last_week/7) - uncertainty + difference) / (-difference))
n_term_neg_unc_vacc = ((all_not_vaccinated - (infected_last_week/7) - uncertainty + difference - vaccine_for_all) / (-difference))
n_term_pos_unc_vacc = ((all_not_vaccinated - (infected_last_week/7) + uncertainty + difference - vaccine_for_all) / (-difference))
return n_term_pos_unc, n_term_neg_unc, n_term_pos_unc_vacc, n_term_neg_unc_vacc
a = statistica()
print(a)
A:
There are a few errors in your code:
As the comment mentioned, the "or" condition is used incorrectly, so the first branch always runs
Your "if" and "elif" are not at the same indentation level
To calculate a harmonic mean, it is better to collect a list of values (instead of a running sum) and divide n by the sum of reciprocals
Regarding your logic/flow, it can be less hard-coded; see the sketch below
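A minimal sketch of the first and third points (the membership test and a harmonic mean computed from a list of daily counts); the daily numbers below are placeholders, not real data:
from statistics import harmonic_mean

country = input('Which country are you from? ')

# membership test instead of `== a or b or c`, which is always truthy
if country in ('Republic of Korea', 'Korea', 'South Korea'):
    daily_infected = [71476, 52310, 48120, 55060, 60010, 47200, 42884]   # placeholder values
elif country == 'Brazil':
    daily_infected = [39083, 21500, 25400, 30100, 27000, 18200, 15769]   # placeholder values
else:
    raise ValueError(f'No data for {country}')

# harmonic mean = n / sum(1/x); keep the raw daily values instead of a running sum
weekly_harmonic_mean = harmonic_mean(daily_infected)
print(round(weekly_harmonic_mean))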
 | A problem with output of the program [estimated end of pandemic] | I used a mathematical formula to calculate the estimated number of weeks until the end of the pandemic with respect to each country's data. It was supposed to be a harmonic sequence that accounts for uncertainty, vaccinated people and vaccine doses per 100 people. The output for all countries is the same, which concerns me. I understand that my formula might be wrong, so please help me fix this problem.
def statistica():
country = input('Which country you are from? ')
infected_per_week = 0
#infected per day to find the uncertainty for parameters
infected_per_day = 0
#last week average to find difference
infected_last_week = 0
#Total vaccine doses administered per 100 population
vaccine = 0
#Persons fully vaccinated with last dose of primary series per 100 population
vaccinated = 0
#population of the country
population1 = 0
if country == 'Republic of Korea' or 'Korea' or 'South Korea':
population1 += 52000000
infected_per_week += 376590
vaccinated += 87166
vaccine += 257
infected_per_day += 71476
infected_last_week += 373681
elif country == 'Brazil':
population1 = population1 + 527000000
infected_per_week += 177052
vaccinated += 78932
vaccine += 230
infected_per_day += 39083
infected_last_week += 153292
average = int(infected_per_week / 7)
uncertainty = average - infected_per_day
difference = (1 - (int(infected_last_week / 7) * int(average))) / (int(average))
not_vaccinated = 100000 - vaccinated
all_not_vaccinated = population1 * (not_vaccinated / 100000)
vaccine_for_all = all_not_vaccinated / (vaccine / 100)
n_term_pos_unc = ((all_not_vaccinated - (infected_last_week/7) + uncertainty + difference) / (-difference))
n_term_neg_unc = ((all_not_vaccinated - (infected_last_week/7) - uncertainty + difference) / (-difference))
n_term_neg_unc_vacc = ((all_not_vaccinated - (infected_last_week/7) - uncertainty + difference - vaccine_for_all) / (-difference))
n_term_pos_unc_vacc = ((all_not_vaccinated - (infected_last_week/7) + uncertainty + difference - vaccine_for_all) / (-difference))
return n_term_pos_unc, n_term_neg_unc, n_term_pos_unc_vacc, n_term_neg_unc_vacc
a = statistica()
print(a)
| [
"There are a few errors in your code\n\nAs the comment mentioned, you are using “or” condition wrongly\nYour “if” and “elif” is not on the same indentation\nTo calculate harmonic mean, it would be better to collect a list of values (instead of a sum), and perform n divide by the sum of reciprocals\nRelating to your logic/flow, it can be less hard coded\n\n"
] | [
0
] | [] | [] | [
"add",
"formula",
"function",
"python",
"sequence"
] | stackoverflow_0074651044_add_formula_function_python_sequence.txt |
Q:
didn't recv UDP dgram socket
There is a server and a client. The client knows the address of the server and sends UDP datagrams to it. But here is the strange thing: the packets seem to leave, yet the server does not read them. That is, in the recv call it does not see any messages until a packet is sent back to the client (whose address is known in advance); only then does it start to see messages from the client. Please tell me, what is the problem? (I don't use broadcasts, because the server and the client are far apart on different networks.)
Code example:
import stun
import socket
import threading

source_ip = "0.0.0.0"
source_port = 9992

external_ip = None
external_port = None
sock = None

def GetIP(ip, port):
    global external_ip, external_port, sock
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    sock.bind((ip, port))
    sock.settimeout(0.1)
    nat_type, nat = stun.get_nat_type(sock, ip, port, stun_host='stun.l.google.com', stun_port=19302)
    if nat['ExternalIP']:
        external_ip = nat['ExternalIP']
        external_port = nat['ExternalPort']
        print("my addr: %s:%s\n" % (external_ip, external_port))
        sock.shutdown(1)
        sock.close()
        return ip, port
    else:
        port += 1
        return GetIP(ip, port)

source_ip, source_port = GetIP(source_ip, source_port)

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
sock.bind((source_ip, source_port))

data = None

remote_ip, remote_port = input(
    "input `addr:port` other machine >"
).split(':')
remote_port = int(remote_port)
remote = remote_ip, remote_port

def read_chat(s):
    global data
    while True:
        try:
            data, addr = s.recvfrom(1024)
            print(addr, '>', data)
        except TimeoutError:
            continue

reader = threading.Thread(target=read_chat, args=[sock])
reader.start()

while True:
    line = input(">")
    if ':' in line:
        remote_ip, remote_port = input(
            "input `addr:port` other machine >"
        ).split(':')
        remote_port = int(remote_port)
        remote = remote_ip, remote_port
    else:
        sock.sendto(line.encode(), remote)
Two such instances are started on the same network. I also launched two instances from different networks and then from the same network. The Windows system firewall is disabled.
I tried different combinations of sockets.
A:
It was all because of NAT. If you are interested in this topic, you can read about ways to bypass NAT (for example, UDP hole punching).
Thanks everyone for the replies.
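For context, the usual workaround is UDP hole punching: both peers keep sending datagrams to each other's public address so that each NAT opens, and keeps, a mapping. A rough sketch, where the peer's public endpoint is a placeholder obtained from a STUN query on the other machine:
import socket
import threading
import time

PEER = ("203.0.113.10", 9992)   # placeholder: the other side's public ip:port from STUN

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.bind(("0.0.0.0", 9992))

def keep_punching():
    # keep the NAT mapping towards the peer alive by sending small packets regularly
    while True:
        sock.sendto(b"punch", PEER)
        time.sleep(1)

threading.Thread(target=keep_punching, daemon=True).start()

while True:
    data, addr = sock.recvfrom(1024)   # replies start arriving once both sides punch
    print(addr, ">", data)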
| didn't recv UDP dgram socket | So. There is a server and a client. The client knows the address of the server based on UDP dgram and sends packets to the server. But strange thing. Packets seem to be leaving, but the server does not read them. That is, in the recv block, he does not see messages until a mutual packet is sent back to the client (knowing his address in advance). And only then he starts to see messages from the client. Tell me please, what is the problem? (I don't use broadcastcasts, because both the server and the client are at a distance).
Code example:
import stun
import socket
import threading
source_ip = "0.0.0.0"
source_port = 9992
external_ip = None
external_port = None
sock = None
def GetIP(ip, port):
global external_ip, external_port, sock
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
sock.bind((ip, port))
sock.settimeout(0.1)
nat_type, nat = stun.get_nat_type(sock, ip, port, stun_host='stun.l.google.com', stun_port=19302)
if nat['ExternalIP']:
external_ip = nat['ExternalIP']
external_port = nat['ExternalPort']
print("my addr: %s:%s\n" % (external_ip, external_port))
sock.shutdown(1)
sock.close()
return ip, port
else:
port += 1
return GetIP(ip, port)
source_ip, source_port = GetIP(source_ip, source_port)
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
sock.bind((source_ip, source_port))
data = None
remote_ip, remote_port = input(
"input `addr:port` other machine >"
).split(':')
remote_port = int(remote_port)
remote = remote_ip, remote_port
def read_chat(s):
global data
while True:
try:
data, addr = s.recvfrom(1024)
print(addr,'>', data)
except TimeoutError:
continue
reader = threading.Thread(target=read_chat, args=[sock])
reader.start()
while True:
line = input(">")
if ':' in line:
remote_ip, remote_port = input(
"input `addr:port` other machine >"
).split(':')
remote_port = int(remote_port)
remote = remote_ip, remote_port
else:
sock.sendto(line.encode(), remote)
Two such instances are started from the same network. also launched two different instances from different networks and then from the same network. Windows system firewall is disabled.
tried different combinations of sockets.
| [
"it was all because of NAT. if you interested in this topic, you can learn about ways to bypass NAT.\nThanks everyone for replies.\n"
] | [
0
] | [] | [] | [
"python",
"sockets",
"udp"
] | stackoverflow_0074634006_python_sockets_udp.txt |
Q:
Remove mode entirely
I am trying to write a method that removes the mode entirely from a list. I've looked up other articles suggesting a for loop, but that doesn't entirely get rid of the mode from the list like it needs to.
this is what my list would look like before I get rid of the mode entirely from a list.
list = 1,3,4,6,3,1,3,
I have already tried the .remove function but it only removes 1 of the numbers
my expected outcome
list = 1,4,6,1
A:
mylist.remove(x) removes only the first occurrence of x from the list.
If you think there may be more than one, use a while loop:
while 3 in mylist:
mylist.remove(3)
If the list is long and/or there might be several 3s in the list, this approach would be more efficient, as it only iterates over the list once:
mylist = [item for item in mylist if item != 3]
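To avoid hard-coding the value 3, the mode can be computed first, for example with collections.Counter (a small sketch using the list from the question):
from collections import Counter

mylist = [1, 3, 4, 6, 3, 1, 3]
most_common_value, _ = Counter(mylist).most_common(1)[0]   # 3 for this list
mylist = [item for item in mylist if item != most_common_value]
print(mylist)   # [1, 4, 6, 1]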
| Remove mode entirely | I am trying to write a method that removes the mode entirely from a list. I've looked up other articles to use a for loop but it doesn't entirely get rid of the mode from the list like it needs to.
this is what my list would look like before I get rid of the mode entirely from a list.
list = 1,3,4,6,3,1,3,
I have already tried the .remove function but it only removes 1 of the numbers
my expected outcome
list = 1,4,6,1
| [
"mylist.remove(x) removes only the first occurrence of x from the list.\nIf you think there may be more than one, use a while loop:\nwhile 3 in mylist:\n mylist.remove(3)\n\nIf the list is long and/or there might be several 3s in the list, this approach would be more efficient, as it only iterates over the list once:\nmylist = [item for item in mylist if item != 3]\n\n"
] | [
2
] | [] | [] | [
"python"
] | stackoverflow_0074651129_python.txt |
Q:
Fail to draw historical data from TWS API
def get_IB_historical_data(self, ibcontract, tickerid, durationStr, barSizeSetting):
    historic_data_queue = finishableQueue(self.init_historicprices(tickerid))
    self.reqHistoricalData(
        tickerid,  # tickerId,
        ibcontract,  # contract,
        datetime.datetime.today().strftime("%Y%m%d %H:%M:%S %Z"),  # endDateTime,
        durationStr,  # durationStr,
        barSizeSetting,  # barSizeSetting,
        "TRADES",  # whatToShow,
        1,  # useRTH,
        1,  # formatDate
        False,  # KeepUpToDate <<==== added for api 9.73.2
        []  ## chartoptions not used
    )

    MAX_WAIT_SECONDS = 10
    historic_data = historic_data_queue.get(timeout=MAX_WAIT_SECONDS)

    if historic_data_queue.timed_out():
        print("historic_data_queue.timed_out")

    self.cancelHistoricalData(tickerid)

    df = pd.DataFrame(historic_data)
    df.columns = ['Datetime', 'Open', 'High', 'Low', 'Close', 'Volume']
    return df

if __name__ == '__main__':
    app = App_Class('127.0.0.1', 7497, 11)
    time.sleep(1234)

    ibcontract = IBcontract()
    ibcontract.secType = 'FUT'
    ibcontract.lastTradeDateOrContractMonth = '20221129'
    ibcontract.symbol = 'HSI'
    ibcontract.exchange = 'HKFE'

    resolved_ibcontract = app.resolve_ib_contract(ibcontract)
    print(resolved_ibcontract)

    df = app.get_IB_historical_data(resolved_ibcontract, 10, durationStr='30 D', barSizeSetting='1 D')
    print(df)
I am a new Python learner. I don't know why I can't print the dataframe; I have subscribed to the data.
The first run of the program printed: 576501930,HSI,FUT,20221129,0.0,,50,HKFE,,HKD,HSIX2,HSI,False,,combo:
ERROR 10 320 Error reading request. String index out of range: 0
historic_data_queue.timed_out
ERROR -1 504 Not connected
ValueError: Length mismatch: Expected axis has 0 elements, new values have 6 elements
The second time I ran the program, and on every rerun after that, it just said "_queue.Empty".
Does anyone know why this happens and how to fix it? Thanks.
A:
Two changes should fix it: mark the futures contract with includeExpired (so it still resolves even if the contract month has already passed), and pass endDateTime in the dash-separated format without the empty %Z suffix:
ibcontract.includeExpired = True
datetime.datetime.today().strftime("%Y%m%d-%H:%M:%S"), # endDateTime,
| Fail to draw historical data from TWS API | def get_IB_historical_data(self, ibcontract, tickerid, durationStr, barSizeSetting):
historic_data_queue = finishableQueue(self.init_historicprices(tickerid))
self.reqHistoricalData(
tickerid, # tickerId,
ibcontract, # contract,
datetime.datetime.today().strftime("%Y%m%d %H:%M:%S %Z"), # endDateTime,
durationStr, # durationStr,
barSizeSetting, # barSizeSetting,
"TRADES", # whatToShow,
1, # useRTH,
1, # formatDate
False, # KeepUpToDate <<==== added for api 9.73.2
[] ## chartoptions not used
)
MAX_WAIT_SECONDS = 10
historic_data = historic_data_queue.get(timeout = MAX_WAIT_SECONDS)
if historic_data_queue.timed_out():
print("historic_data_queue.timed_out")
self.cancelHistoricalData(tickerid)
df = pd.DataFrame(historic_data)
df.columns = ['Datetime', 'Open', 'High', 'Low', 'Close', 'Volume']
return df
if name == 'main':
app = App_Class('127.0.0.1',7497, 11)
time.sleep(1234)
ibcontract = IBcontract()
ibcontract.secType = 'FUT'
ibcontract.lastTradeDateOrContractMonth = '20221129'
ibcontract.symbol = 'HSI'
ibcontract.exchange = 'HKFE'
resolved_ibcontract = app.resolve_ib_contract(ibcontract)
print(resolved_ibcontract)
df = app.get_IB_historical_data(resolved_ibcontract, 10, durationStr='30 D', barSizeSetting='1 D')
print(df)
I am new python learner. I dont know why i cant print the dataframe, I have subscribed the data.
My first run to run the program: 576501930,HSI,FUT,20221129,0.0,,50,HKFE,,HKD,HSIX2,HSI,False,,combo:
ERROR 10 320 Error reading request. String index out of range: 0
historic_data_queue.timed_out
ERROR -1 504 Not connected
ValueError: Length mismatch: Expected axis has 0 elements, new values have 6 elements
My second time to run the program and when i rerun, they said" _queue.Empty"
Anyone know why and how to fix it, thanks.
| [
"ibcontract.includeExpired = True\ndatetime.datetime.today().strftime(\"%Y%m%d-%H:%M:%S\"), # endDateTime,\n"
] | [
0
] | [] | [] | [
"dataframe",
"historical_db",
"interactive_brokers",
"python",
"tws"
] | stackoverflow_0074290321_dataframe_historical_db_interactive_brokers_python_tws.txt |
Q:
How to use pd.apply() to instantiate new columns?
Instead of doing this:
df['A'] = df['A'] if 'A' in df else None
df['B'] = df['B'] if 'B' in df else None
df['C'] = df['C'] if 'C' in df else None
df['D'] = df['D'] if 'D' in df else None
...
I want to do this in one line or function. Below is what I tried:
def populate_columns(df):
    col_names = ['A', 'B', 'C', 'D', 'E', 'F', ...]

    def populate_column(df, col_name):
        df[col_name] = df[col_name] if col_name in df else None
        return df[col_name]

    df[col_name] = df.apply(lambda x: populate_column(x) for x in col_names)
    return df
But I just get Exception has occurred: ValueError. What can I do here?
A:
Looks like you can replace your whole code with a reindex:
ensure_cols = ['A', 'B', 'C', 'D']
df = df.reindex(columns=df.columns.union(ensure_cols))
NB. By default the fill value is NaN, if you really want None use fill_value=None.
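A quick check of what the reindex does, on a toy frame that only has columns A and C:
import pandas as pd

df = pd.DataFrame({"A": [1, 2], "C": [3, 4]})
ensure_cols = ['A', 'B', 'C', 'D']
df = df.reindex(columns=df.columns.union(ensure_cols))
print(df)
#    A   B  C   D
# 0  1 NaN  3 NaN
# 1  2 NaN  4 NaN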
If you want to fix your code, just use a single loop:
col_names = ['A', 'B', 'C', 'D']
for c in col_names:
if c not in df:
df[c] = None
| How to use pd.apply() to instantiate new columns? | Instead of doing this:
df['A'] = df['A'] if 'A' in df else None
df['B'] = df['B'] if 'B' in df else None
df['C'] = df['C'] if 'C' in df else None
df['D'] = df['D'] if 'D' in df else None
...
I want to do this in one line or function. Below is what I tried:
def populate_columns(df):
col_names = ['A', 'B', 'C', 'D', 'E', 'F', ...]
def populate_column(df, col_name):
df[col_name] = df[col_name] if col_name in df else None
return df[col_name]
df[col_name] = df.apply(lambda x: populate_column(x) for x in col_names)
return df
But I just get Exception has occurred: ValueError. What can I do here?
| [
"Looks like you can replace your whole code with a reindex:\nensure_cols = ['A', 'B', 'C', 'D']\ndf = df.reindex(columns=df.columns.union(ensure_cols))\n\nNB. By default the fill value is NaN, if you really want None use fill_value=None.\nIf you want to fix your code, just use a single loop:\ncol_names = ['A', 'B', 'C', 'D']\nfor c in col_names:\n if c not in df:\n df[c] = None\n\n"
] | [
1
] | [] | [] | [
"dataframe",
"pandas",
"python"
] | stackoverflow_0074651065_dataframe_pandas_python.txt |
Q:
Python Correctly Parse a Complex Object into a JSON format
I have the following class, which I'd like to serialize into JSON. The class also holds a list of Item objects.
class Item(JSONEncoder):

    def __init__(self):
        self.Type = ''
        self.Content = ''
        self.N = None
        self.Parent = None
        self.Items = []

    def reprJSON(self):
        d = dict()
        for a, v in self.__dict__.items():
            if (hasattr(v, "reprJSON")):
                d[a] = v.reprJSON()
            else:
                d[a] = v
        return d
So, when I call root.reprJSON() on an instance of the Item class, I get the following result.
{'Type': 'root',
'Content': '',
'N': 'root',
'Parent': None,
'Items': [<Item.Item at 0x10575fb3c88>,
<Item.Item at 0x10575fb3e10>,
<Item.Item at 0x10575fb3eb8>,
<Item.Item at 0x10575fbc080>,
<Item.Item at 0x10575fbc2b0>,
<Item.Item at 0x10575fc6a20>,
<Item.Item at 0x10575fc6a58>,
<Item.Item at 0x10575fc6b70>,
<Item.Item at 0x10575fc6be0>,
<Item.Item at 0x10575fc6c50>,
<Item.Item at 0x10575fc6da0>,
<Item.Item at 0x10575fc6fd0>,
<Item.Item at 0x10575fcb128>,
<Item.Item at 0x10575fcb358>,
<Item.Item at 0x10575fcba90>,
<Item.Item at 0x10575fcbb00>,
<Item.Item at 0x10575fcbb70>,
<Item.Item at 0x10575fcbc18>,
<Item.Item at 0x10575fcbda0>,
<Item.Item at 0x10575fcbfd0>,
<Item.Item at 0x10575fd3208>,
<Item.Item at 0x10575fd34a8>,
<Item.Item at 0x10575fd3550>,
<Item.Item at 0x10575fd35c0>,
<Item.Item at 0x10575fd36d8>,
<Item.Item at 0x10575fd37f0>,
<Item.Item at 0x10575fd3898>,
<Item.Item at 0x10575fd3940>,
<Item.Item at 0x10575fd39b0>,
<Item.Item at 0x10575fd3a20>,
<Item.Item at 0x10575fd3ac8>,
<Item.Item at 0x10575fd3b70>,
<Item.Item at 0x10575fd3c88>,
<Item.Item at 0x10575fd3d68>,
<Item.Item at 0x10575fd3dd8>,
<Item.Item at 0x10575fd3e10>,
<Item.Item at 0x10575fd3ef0>,
<Item.Item at 0x10575fdc080>,
<Item.Item at 0x10575fdc0b8>,
<Item.Item at 0x10575fdc128>,
<Item.Item at 0x10575fdc1d0>,
<Item.Item at 0x10575fdc240>,
<Item.Item at 0x10575fdc390>,
<Item.Item at 0x10575fdc438>,
<Item.Item at 0x10575fdc550>,
<Item.Item at 0x10575fdc5c0>,
<Item.Item at 0x10575fdc630>,
<Item.Item at 0x10575fdc6a0>,
<Item.Item at 0x10575fdc6d8>,
<Item.Item at 0x10575fdc780>,
<Item.Item at 0x10575fdc908>,
<Item.Item at 0x10575fdc9e8>,
<Item.Item at 0x10575fdca58>,
<Item.Item at 0x10575fdcac8>,
<Item.Item at 0x10575fdcb00>,
<Item.Item at 0x10575fdcba8>,
<Item.Item at 0x10575fdccc0>,
<Item.Item at 0x10575fdcd30>,
<Item.Item at 0x10575fdcda0>,
<Item.Item at 0x10575fdce48>,
<Item.Item at 0x10575fdceb8>,
<Item.Item at 0x10575fdcf28>,
<Item.Item at 0x10575fe22e8>,
<Item.Item at 0x10575fe2828>,
<Item.Item at 0x10575fe2940>,
<Item.Item at 0x10575fe2b70>,
<Item.Item at 0x10575fe2be0>,
<Item.Item at 0x10575fe2c88>,
<Item.Item at 0x10575fe2cc0>,
<Item.Item at 0x10575fe2cf8>]}
But I'd like to get the values of those items into a single JSON object as well. I don't know how to do it and would appreciate any help. Thank you.
Edit
The following code creates the instance of the Item class and fills it with data.
def Crawl(parsedPDF):
soup = BeautifulSoup(parsedPDF, "html.parser")
root = Item()
root.Type = "root"
root.N = "root"
parent = root
head = root
body = RemoveEmptyTags(soup.body)
for tag in body:
elements = RemoveEmptyChild(tag.contents)
for element in elements:
if element.name == "head":
head = CreateHeading(root, parent, element)
parent = head.Parent
elif element.name == "p":
AddParagraph(head, element)
elif element.name == "figure":
pass
elif element.name == "figdesc":
pass
elif element.name == "table":
#elem = AddElement(head, element)
pass
else:
#elem = AddElement(head, element)
pass
pass
return root
def AddParagraph(head, element):
# split the paragraph into multiple lines based on alphabetize bullet points
lines = split_with_AplhabetizeBullets(element.text, '\.\s(\(.*?\)\s)')
for line in lines:
item = Item()
item.Content = line
item.Type = element.name
item.Parent = head
head.Items.append(item)
def CreateHeading(root, parent, element):
item = Item()
item.Content = element.text
item.Type = element.name
item.Parent = parent
try:
item.N = element["n"]
except:
pass
if item.N is None:
bracketTextLength = 0
try:
result = re.search(r'\(.*?\)',item.Content)
bracketTextLength = len(result.group)
except:
pass
item.N = item.Content
# to check if the heading without 'N' is a heading or its a subheading
if len(item.Content) > 3 and bracketTextLength == 0:
root.Items.append(item)
item.Parent = item
pass
else:
parent.Items.append(item)
pass
else: # item.N is not None
if parent.N is None:
item.Parent = item
parent = item.Parent
pass
#else: # if the new heading sharing the same reference as of its parent then
if parent.N in item.N[:len(parent.N)]:
parent.Items.append(item)
pass
else: # if the new heading has no parent then add it into root
root.Items.append(item)
item.Parent = item
pass
return item
A:
Looking at your code, you can use this demo solution (here I'm storing objects of a Demo class in the Items list). You need to add serialize() and dumper() methods to the Item class, and the reprJSON method needs to be changed to iterate over the Items list.
from json import JSONEncoder
class Demo():
def __init__(self):
self.name = ''
self.demolist = []
class Item(JSONEncoder):
def __init__(self):
# super().__init__()
self.Type = ''
self.Content = ''
self.N = None
self.Parent = None
self.Items = []
def reprJSON(self):
d = {}
for a, v in self.__dict__.items():
if isinstance(v, list):
for i in v:
if d.get(a, []) == []:
d[a] = []
d[a].append(self.dumper(i))
else:
d[a].append(self.dumper(i))
else:
d[a] = v
return d
def serialize(self):
return self.__dict__
@staticmethod
def dumper(obj):
if "serialize" in dir(obj):
return obj.serialize()
return obj.__dict__
itemobj = Item()
d1 = Demo()
d2 = Demo()
d1.name = 'akash'
d1.demolist = [{'good':[4,6,5],'yyy':'why'},{'ho':{'ksks':'333'}}]
d2.name = 'heheh'
d2.demolist = [4,6,1111]
itemobj.Items.extend([d1,d2])
from pprint import pprint
pprint(itemobj.reprJSON())
Output:
{'Content': '',
'Items': [{'demolist': [{'good': [4, 6, 5], 'yyy': 'why'},
{'ho': {'ksks': '333'}}],
'name': 'akash'},
{'demolist': [4, 6, 1111], 'name': 'heheh'}],
'N': None,
'Parent': None,
 'Type': ''}
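If an actual JSON string is needed rather than a Python dict, the result of reprJSON() can then be passed to json.dumps, as long as every value in it is JSON-compatible (note that in the question's real data the Parent attribute holds another Item, which would need the same serialization treatment):
import json

json_string = json.dumps(itemobj.reprJSON(), indent=2)
print(json_string)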
A:
pip install jsonwhatever
from jsonwhatever import jsonwhatever as jw
class Item():
def __init__(self):
self.Type = ''
self.Content = ''
self.N = None
self.Parent = None #Not to reference father class to avoid infinite recursivity
self.Items = None #You should put None by default to stop recursivity
obj = Item()
obj01 = Item()
obj01.Type = '01'
obj01.Content = 'stuff'
obj01.N = 9
obj01.Parent = None
list_objects = []
list_objects.append(obj01)
obj.Items = list_objects
json_string = jw.jsonwhatever('list_of_items', obj)
print(json_string)
| Python Correctly Parse a Complex Object into a JSON format | I have the following which I'd like to parse it into JSON. The class has a list of item object also
class Item(JSONEncoder):
def __init__(self):
self.Type = ''
self.Content = ''
self.N = None
self.Parent = None
self.Items = []
def reprJSON(self):
d = dict()
for a, v in self.__dict__.items():
if (hasattr(v, "reprJSON")):
d[a] = v.reprJSON()
else:
d[a] = v
return d
So, when I try to parse the instance of Item class, root.reprJSON() I get the following result.
{'Type': 'root',
'Content': '',
'N': 'root',
'Parent': None,
'Items': [<Item.Item at 0x10575fb3c88>,
<Item.Item at 0x10575fb3e10>,
<Item.Item at 0x10575fb3eb8>,
<Item.Item at 0x10575fbc080>,
<Item.Item at 0x10575fbc2b0>,
<Item.Item at 0x10575fc6a20>,
<Item.Item at 0x10575fc6a58>,
<Item.Item at 0x10575fc6b70>,
<Item.Item at 0x10575fc6be0>,
<Item.Item at 0x10575fc6c50>,
<Item.Item at 0x10575fc6da0>,
<Item.Item at 0x10575fc6fd0>,
<Item.Item at 0x10575fcb128>,
<Item.Item at 0x10575fcb358>,
<Item.Item at 0x10575fcba90>,
<Item.Item at 0x10575fcbb00>,
<Item.Item at 0x10575fcbb70>,
<Item.Item at 0x10575fcbc18>,
<Item.Item at 0x10575fcbda0>,
<Item.Item at 0x10575fcbfd0>,
<Item.Item at 0x10575fd3208>,
<Item.Item at 0x10575fd34a8>,
<Item.Item at 0x10575fd3550>,
<Item.Item at 0x10575fd35c0>,
<Item.Item at 0x10575fd36d8>,
<Item.Item at 0x10575fd37f0>,
<Item.Item at 0x10575fd3898>,
<Item.Item at 0x10575fd3940>,
<Item.Item at 0x10575fd39b0>,
<Item.Item at 0x10575fd3a20>,
<Item.Item at 0x10575fd3ac8>,
<Item.Item at 0x10575fd3b70>,
<Item.Item at 0x10575fd3c88>,
<Item.Item at 0x10575fd3d68>,
<Item.Item at 0x10575fd3dd8>,
<Item.Item at 0x10575fd3e10>,
<Item.Item at 0x10575fd3ef0>,
<Item.Item at 0x10575fdc080>,
<Item.Item at 0x10575fdc0b8>,
<Item.Item at 0x10575fdc128>,
<Item.Item at 0x10575fdc1d0>,
<Item.Item at 0x10575fdc240>,
<Item.Item at 0x10575fdc390>,
<Item.Item at 0x10575fdc438>,
<Item.Item at 0x10575fdc550>,
<Item.Item at 0x10575fdc5c0>,
<Item.Item at 0x10575fdc630>,
<Item.Item at 0x10575fdc6a0>,
<Item.Item at 0x10575fdc6d8>,
<Item.Item at 0x10575fdc780>,
<Item.Item at 0x10575fdc908>,
<Item.Item at 0x10575fdc9e8>,
<Item.Item at 0x10575fdca58>,
<Item.Item at 0x10575fdcac8>,
<Item.Item at 0x10575fdcb00>,
<Item.Item at 0x10575fdcba8>,
<Item.Item at 0x10575fdccc0>,
<Item.Item at 0x10575fdcd30>,
<Item.Item at 0x10575fdcda0>,
<Item.Item at 0x10575fdce48>,
<Item.Item at 0x10575fdceb8>,
<Item.Item at 0x10575fdcf28>,
<Item.Item at 0x10575fe22e8>,
<Item.Item at 0x10575fe2828>,
<Item.Item at 0x10575fe2940>,
<Item.Item at 0x10575fe2b70>,
<Item.Item at 0x10575fe2be0>,
<Item.Item at 0x10575fe2c88>,
<Item.Item at 0x10575fe2cc0>,
<Item.Item at 0x10575fe2cf8>]}
But I'd like to get the values of those item also into a single json object. I don't know how to do it, would appreciate any help. Thank you
Edit
Following code create an instance of item class and filled it with data.
def Crawl(parsedPDF):
soup = BeautifulSoup(parsedPDF, "html.parser")
root = Item()
root.Type = "root"
root.N = "root"
parent = root
head = root
body = RemoveEmptyTags(soup.body)
for tag in body:
elements = RemoveEmptyChild(tag.contents)
for element in elements:
if element.name == "head":
head = CreateHeading(root, parent, element)
parent = head.Parent
elif element.name == "p":
AddParagraph(head, element)
elif element.name == "figure":
pass
elif element.name == "figdesc":
pass
elif element.name == "table":
#elem = AddElement(head, element)
pass
else:
#elem = AddElement(head, element)
pass
pass
return root
def AddParagraph(head, element):
# split the paragraph into multiple lines based on alphabetize bullet points
lines = split_with_AplhabetizeBullets(element.text, '\.\s(\(.*?\)\s)')
for line in lines:
item = Item()
item.Content = line
item.Type = element.name
item.Parent = head
head.Items.append(item)
def CreateHeading(root, parent, element):
item = Item()
item.Content = element.text
item.Type = element.name
item.Parent = parent
try:
item.N = element["n"]
except:
pass
if item.N is None:
bracketTextLength = 0
try:
result = re.search(r'\(.*?\)',item.Content)
bracketTextLength = len(result.group)
except:
pass
item.N = item.Content
# to check if the heading without 'N' is a heading or its a subheading
if len(item.Content) > 3 and bracketTextLength == 0:
root.Items.append(item)
item.Parent = item
pass
else:
parent.Items.append(item)
pass
else: # item.N is not None
if parent.N is None:
item.Parent = item
parent = item.Parent
pass
#else: # if the new heading sharing the same reference as of its parent then
if parent.N in item.N[:len(parent.N)]:
parent.Items.append(item)
pass
else: # if the new heading has no parent then add it into root
root.Items.append(item)
item.Parent = item
pass
return item
| [
"Looking at your code you can use this demo solution in your code as I'm storing objects of Demo class in the Items list. You need to write serialize() and dumper() methods in Items class, and also changes need to be done in reprJSON method for iteration on Items list.\nfrom json import JSONEncoder\n\nclass Demo():\n def __init__(self):\n self.name = ''\n self.demolist = []\n\nclass Item(JSONEncoder):\n\n def __init__(self):\n # super().__init__()\n self.Type = ''\n self.Content = ''\n self.N = None\n self.Parent = None\n self.Items = []\n\n def reprJSON(self):\n d = {}\n for a, v in self.__dict__.items():\n if isinstance(v, list):\n for i in v:\n if d.get(a, []) == []:\n d[a] = []\n d[a].append(self.dumper(i))\n else:\n d[a].append(self.dumper(i))\n else:\n d[a] = v\n return d\n\n def serialize(self):\n return self.__dict__\n\n @staticmethod\n def dumper(obj):\n if \"serialize\" in dir(obj):\n return obj.serialize()\n return obj.__dict__\n\n\n\n\nitemobj = Item()\nd1 = Demo()\nd2 = Demo()\nd1.name = 'akash'\nd1.demolist = [{'good':[4,6,5],'yyy':'why'},{'ho':{'ksks':'333'}}]\nd2.name = 'heheh'\nd2.demolist = [4,6,1111]\nitemobj.Items.extend([d1,d2])\n\nfrom pprint import pprint\npprint(itemobj.reprJSON())\n\nOutput:\n{'Content': '',\n 'Items': [{'demolist': [{'good': [4, 6, 5], 'yyy': 'why'},\n {'ho': {'ksks': '333'}}],\n 'name': 'akash'},\n {'demolist': [4, 6, 1111], 'name': 'heheh'}],\n 'N': None,\n 'Parent': None,\n 'Type': ''}```\n\n",
"pip install jsonwhatever\nfrom jsonwhatever import jsonwhatever as jw\n\n\nclass Item():\n def __init__(self):\n self.Type = ''\n self.Content = ''\n self.N = None\n self.Parent = None #Not to reference father class to avoid infinite recursivity\n self.Items = None #You should put None by default to stop recursivity\n\n\n\nobj = Item()\nobj01 = Item()\n\nobj01.Type = '01'\nobj01.Content = 'stuff'\nobj01.N = 9\nobj01.Parent = None\n\nlist_objects = []\n\nlist_objects.append(obj01)\n\nobj.Items = list_objects\n\njson_string = jw.jsonwhatever('list_of_items', obj)\n\nprint(json_string)\n\n\n"
] | [
1,
0
] | [] | [] | [
"dictionary",
"json",
"parsing",
"python"
] | stackoverflow_0058180038_dictionary_json_parsing_python.txt |
Q:
how do I copy over a few folders from one directory into another folder in Linux using python
I'm trying to copy over a bunch of folders, read from a txt file, into another folder. How do I do this? I'm using Python on Linux.
e.g this txt file has the following folder names
001YG
00HFP
00MFE
00N38
00NN7
00SL4
00T1E
00T4B
00X3U
00YZL
00ZCA
01K8X
01KM1
01KML
01O27
01THT
01ZWG
and I want to copy these folders and their contents to a folder called duplicate-folder.
How do I do this? I tried the following code, but it gave an error because I think it only works for files, not folders :)
with open("missing-list-files.txt", 'w') as outfile:
    # outfile.write('\n'.join(temp3))
    for x in temp3:
        outfile.write(x + '\n')

filelistset = temp3
dest = 'mydirectoryhere'
for file in filelistset:
    shutil.copyfile(file, dest + file)
A:
Use shutil.copytree(src, dest, ...) (https://docs.python.org/3/library/shutil.html#shutil.copytree) for directories; it also copies sub-files and sub-directories. Also don't forget to give it the full path for src and dest (I don't know which values your temp3 variable holds). If you only give it a name instead of a full path, it will look in the current working directory.
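Applied to the question's setup, a sketch could look like the following; the source and destination roots are placeholders, and dirs_exist_ok requires Python 3.8+:
import os
import shutil

with open("missing-list-files.txt") as f:
    folder_names = [line.strip() for line in f if line.strip()]

src_root = "/path/to/source"              # placeholder: where 001YG, 00HFP, ... live
dest_root = "/path/to/duplicate-folder"   # placeholder destination

for name in folder_names:
    shutil.copytree(
        os.path.join(src_root, name),
        os.path.join(dest_root, name),
        dirs_exist_ok=True,   # Python 3.8+: don't fail if the target folder exists
    )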
| how do I copy over a few folders from one directory into another folder in Linux using python | I'm trying to copy over a bunch of folders, read from a txt file into another folder . How do I do this? I'm using python in Linux
e.g this txt file has the following folder names
001YG
00HFP
00MFE
00N38
00NN7
00SL4
00T1E
00T4B
00X3U
00YZL
00ZCA
01K8X
01KM1
01KML
01O27
01THT
01ZWG
and I want to copy these folders and their contents to a folder called duplicate-folder
How do I do this? I tried the following code but it gave an error because I think it's only for files, not folders :)
with open("missing-list-files.txt",'w') as outfile:
# outfile.write('\n'.join(temp3))
for x in temp3:
outfile.write(x + '\n')
filelistset = temp3
dest = 'mydirectoryhere'
for file in filelistset:
shutil.copyfile(file ,dest + file)
| [
"use shutil.copytree(src, dest, ...) (https://docs.python.org/3/library/shutil.html#shutil.copytree) for directories (also copies subfiles and -directories. And also don't forget to give it the full path for file and dest (don't know which value your temp3 var holds). If you don't give it a full path, but only a filename, it will search in the current working directory.\n"
] | [
1
] | [] | [] | [
"file",
"io",
"linux",
"python"
] | stackoverflow_0074651147_file_io_linux_python.txt |
Q:
Removing a comma at end a each row in python
I have the below dataframe
After doing the below manipulations to the dataframe, I am getting the output in the Rule column with a comma at the end, which is expected, but I want to remove it. How do I do that?
df['Rule'] = df.State.apply(lambda x: str("'"+str(x)+"',"))
df['Rule'] = df.groupby(['Description'])['Rule'].transform(lambda x: ' '.join(x))
df1 = df.drop_duplicates('Description',keep = 'first')
df1['Rule'] = df1['Rule'].apply(lambda x: str("("+str(x)+")")
I have tried using iloc[-1].replace(",", ""), but it is not working.
A:
Try this: instead of appending a trailing comma to each value, join the values with ', ' so nothing is left dangling at the end:
df['Rule'] = df.State.apply(lambda x: str("'"+str(x)+"'"))
df['Rule'] = df.groupby(['Description'])['Rule'].transform(lambda x: ', '.join(x))
df1 = df.drop_duplicates('Description', keep = 'first')
df1['Rule'] = df1['Rule'].apply(lambda x: str("("+str(x)+")"))
| Removing a comma at end a each row in python | I have the below dataframe
After doing the below manipulations to the dataframe, I am getting the output in the Rule column with comma at the end which is expected .but I want to remove it .How to do it
df['Rule'] = df.State.apply(lambda x: str("'"+str(x)+"',"))
df['Rule'] = df.groupby(['Description'])['Rule'].transform(lambda x: ' '.join(x))
df1 = df.drop_duplicates('Description',keep = 'first')
df1['Rule'] = df1['Rule'].apply(lambda x: str("("+str(x)+")")
I have tried it using ilo[-1].replace(",",""). But it is not working .
| [
"Try this:\ndf['Rule'] = df.State.apply(lambda x: str(\"'\"+str(x)+\"'\"))\ndf['Rule'] = df.groupby(['Description'])['Rule'].transform(lambda x: ', '.join(x))\ndf1 = df.drop_duplicates('Description', keep = 'first')\ndf1['Rule'] = df1['Rule'].apply(lambda x: str(\"(\"+str(x)+\")\"))\n\n"
] | [
0
] | [] | [] | [
"dataframe",
"pandas",
"python"
] | stackoverflow_0074650804_dataframe_pandas_python.txt |
Q:
Finding Missing Quarters for last years in data
I have a pyspark dataframe with quarterly data in it. The data is in the following format
2022-03-01 abc
2022-06-01 xyz
2000-03-01 abcd
Starting from the very first date (somewhere around the 1960s) I need to find whether any quarters are missing from the data, and for the current year, only the quarters that have already passed. For example, for 2022 I only check whether data exists for the first 3 quarters.
The code I have written works fine for the previous years but takes quite a few lines to cover the whole scenario.
I am looking for a one-liner kind of code if possible.
I am looking at all quarters in all years except for 1965, as no full quarter data is available for that year (just that one year is an exception).
My code is something as under.
qtrs = df.groupBy(year("mydate").alias("q_count")).count().filter(col("count")!= 4).filter(~col("qtr_count").isin(1965)).collect()
If len[qtrs] !=0:
return ("Error")
The above works for previous years, but for the current year I have to write separate logic. Is there a way I can incorporate the complete logic into the above one-liner to check all the quarters?
Simply put, I want to make sure that no quarters are missing from the data, starting from a particular year up until the last passed quarter of the current year.
Any help please ?
A:
Here is my solution:
from pyspark.sql import functions as F
# I purposely commented out some part of 2022 so you can see the result
data = [
['2020-03-01', 'x']
, ['2020-04-01', 'y']
, ['2020-05-01', 'x']
, ['2020-06-01', 'x']
, ['2020-01-01', 'y']
, ['2020-01-01', 'y']
, ['2020-07-01', 'y']
, ['2020-08-01', 'y']
, ['2020-09-01', 'y']
, ['2020-10-01', 'y']
, ['2020-11-01', 'y']
, ['2020-12-01', 'y']
, ['2021-03-01', 'x']
, ['2021-04-01', 'y']
, ['2021-05-01', 'x']
, ['2021-06-01', 'x']
, ['2021-01-01', 'y']
, ['2021-01-01', 'y']
, ['2021-07-01', 'y']
, ['2021-08-01', 'y']
, ['2021-09-01', 'y']
, ['2021-10-01', 'y']
, ['2021-11-01', 'y']
, ['2021-12-01', 'y']
, ['2022-03-01', 'x']
, ['2022-04-01', 'y']
, ['2022-05-01', 'x']
, ['2022-06-01', 'x']
, ['2022-01-01', 'y']
, ['2022-01-01', 'y']
, ['2022-07-01', 'y']
# , ['2022-08-01', 'y']
# , ['2022-09-01', 'y']
# , ['2022-10-01', 'y']
# , ['2022-11-01', 'y']
# , ['2022-12-01', 'y']
]
cols = ['mydate', 'id']
# Creating Dataframe
df = spark.createDataFrame(data, cols)
# Group by year(mydate)
# Aggregate by year(mydate) and count distinct the quarter(mydate) where year(mydate) is not 1965
# Filter for years where the count(quarter(mydate)) != 4
res = df.groupBy(F.year('mydate').alias("q_count")).agg(F.countDistinct(F.quarter('mydate')).alias("qrt_count")). where(F.year('mydate') != 1965).filter(F.col('qrt_count') != 4)
res.display()
Here is the output:
A:
I have taken the sample data that you have provided and created a dataframe.
from pyspark.sql.functions import *
data = [ ['2020-03-01', 'x'] , ['2020-06-01', 'y'] , ['2020-09-01', 'x'] , ['2020-12-01', 'x'] , ['2021-03-01', 'x'] , ['2021-06-01', 'y'] , ['2021-09-01', 'x'] , ['2021-12-01', 'x'] , ['2022-03-01', 'x'] , ['2022-06-01', 'y'] , ['2022-09-01', 'x'] , ['2022-12-01', 'x'] ]
df = spark.createDataFrame(data = data, schema=['date', 'id'])
display(df)
Now, I have used the following code to get the desired output, since you don't want to consider the current year or the year 1965.
I used the datetime library to get the current year from today's date; after grouping and finding the years where the quarter count is less than 4, any row for the current year is filtered out.
import datetime
df.groupBy(year('date').alias("quarter_year"))\
.agg(countDistinct(quarter('date')).alias("quarter_count"))\
.where((year('date') != 1965)).filter((col('quarter_count') < 4) & (col('quarter_year') != datetime.datetime.today().year)).show()
Let's say I take out 2020-09-01 and test the code, then it would return the following result.
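If the current year should also be validated, but only against the quarters that have already passed (as the question asks), one possible extension of the same idea is to compare each year's distinct quarter count with an expected count. This sketch reuses the df and 'date' column from the example above and assumes "quarters passed" means fully completed quarters:
import datetime
from pyspark.sql import functions as F

today = datetime.date.today()
current_year = today.year
completed_quarters = (today.month - 1) // 3    # quarters fully passed this year

counts = (
    df.groupBy(F.year("date").alias("yr"))
      .agg(F.countDistinct(F.quarter("date")).alias("qtr_count"))
      .where(F.col("yr") != 1965)
)

# past years must have 4 quarters, the current year only the completed ones
expected = F.when(F.col("yr") == current_year, F.lit(completed_quarters)).otherwise(F.lit(4))
missing = counts.filter(F.col("qtr_count") < expected)
missing.show()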
| Finding Missing Quarters for last years in data | I have a pyspark dataframe with Quarterly data in that. The data is in the following format
2022-03-01 abc
2022-06-01 xyz
2000-03-01 abcd
Starting from the very first date (somewhere around 1960's) I need to find if there are any quarters missing from the date. And for the current year, any quarters that have passed. For example for 2022 checking only first 3 quarters if the data exists for those.
the code i have written works fine for the previous years but takes a few lines to code for the whole scenario to cover.
I am looking for a one liner kind of code if possible.
i am looking for all quarters in all years except for 1965 as there is no full quarter data is available for that year (Just one year is an exception)
My code is something as under.
qtrs = df.groupBy(year("mydate").alias("q_count")).count().filter(col("count")!= 4).filter(~col("qtr_count").isin(1965)).collect()
If len[qtrs] !=0:
return ("Error")
The above works for previous years but for the current year i have to write a separate logic. Is there a way I can incorporate the complete logic in the above one liner ? to check all the quarters.
Simply i want to make sure that no quarters are missing from the data starting from particular year Up until the last Quarter of the current year.
Any help please ?
| [
"Here is my solution:\nfrom pyspark.sql import functions as F\n\n# I purposely commented out some part of 2022 so you can see the result\n\ndata = [\n ['2020-03-01', 'x']\n, ['2020-04-01', 'y']\n, ['2020-05-01', 'x']\n, ['2020-06-01', 'x']\n, ['2020-01-01', 'y'] \n, ['2020-01-01', 'y']\n, ['2020-07-01', 'y']\n, ['2020-08-01', 'y']\n, ['2020-09-01', 'y']\n, ['2020-10-01', 'y']\n, ['2020-11-01', 'y']\n, ['2020-12-01', 'y']\n, ['2021-03-01', 'x']\n, ['2021-04-01', 'y']\n, ['2021-05-01', 'x']\n, ['2021-06-01', 'x']\n, ['2021-01-01', 'y'] \n, ['2021-01-01', 'y']\n, ['2021-07-01', 'y']\n, ['2021-08-01', 'y']\n, ['2021-09-01', 'y']\n, ['2021-10-01', 'y']\n, ['2021-11-01', 'y']\n, ['2021-12-01', 'y']\n, ['2022-03-01', 'x']\n, ['2022-04-01', 'y']\n, ['2022-05-01', 'x']\n, ['2022-06-01', 'x']\n, ['2022-01-01', 'y'] \n, ['2022-01-01', 'y']\n, ['2022-07-01', 'y']\n# , ['2022-08-01', 'y']\n# , ['2022-09-01', 'y']\n# , ['2022-10-01', 'y']\n# , ['2022-11-01', 'y']\n# , ['2022-12-01', 'y'] \n \n]\n\ncols = ['mydate', 'id']\n\n# Creating Dataframe\ndf = spark.createDataFrame(data, cols)\n\n# Group by year(mydate)\n# Aggregate by year(mydate) and count distinct the quarter(mydate) where year(mydate) is not 1965\n# Filter for years where the count(quarter(mydate)) != 4\n\nres = df.groupBy(F.year('mydate').alias(\"q_count\")).agg(F.countDistinct(F.quarter('mydate')).alias(\"qrt_count\")). where(F.year('mydate') != 1965).filter(F.col('qrt_count') != 4)\n\nres.display()\n\nHere is the output:\n\n",
"I have taken the sample data that you have provided and created a dataframe.\nfrom pyspark.sql.functions import *\n\ndata = [ ['2020-03-01', 'x'] , ['2020-06-01', 'y'] , ['2020-09-01', 'x'] , ['2020-12-01', 'x'] , ['2021-03-01', 'x'] , ['2021-06-01', 'y'] , ['2021-09-01', 'x'] , ['2021-12-01', 'x'] , ['2022-03-01', 'x'] , ['2022-06-01', 'y'] , ['2022-09-01', 'x'] , ['2022-12-01', 'x'] ]\ndf = spark.createDataFrame(data = data, schema=['date', 'id'])\ndisplay(df)\n\n\n\nNow, I have used the following code to get the desired output. Since you don't want to consider the current year and the year 1965.\n\nI have taken the condition where I used datetime library to get the year from today's date. If it is equal to any of the records returned from after grouping and obtaining the years where quarter count is less than 4, then we filter them out.\n\n\nimport datetime\n\ndf.groupBy(year('date').alias(\"quarter_year\"))\\\n .agg(countDistinct(quarter('date')).alias(\"quarter_count\"))\\\n .where((year('date') != 1965)).filter((col('quarter_count') < 4) & (col('quarter_year') != datetime.datetime.today().year)).show()\n\n\n\nLet's say I take out 2020-09-01 and test the code, then it would return the following result.\n\n\n"
] | [
0,
0
] | [] | [] | [
"azure_databricks",
"pyspark",
"python"
] | stackoverflow_0074606200_azure_databricks_pyspark_python.txt |
Q:
Replace value of a column converted to day and month to a text using Python
How do I achieve this in Python? The source file is a CSV file, and the value of one column in that file has been converted from numeric to day and month. Thank you very much in advance.
Example below:
Picture of the column:
room column
In my Python script, the values should look like below:
1-Feb ---> 2-1
2-Feb ---> 2-2
3-Mar ---> 3-3
4-Mar ---> 3-4
Here's my script.
import os
import pandas as pd
directory = 'C:/Path'
ext = ('.csv')
for filename in os.listdir(directory):
f = os.path.join(directory, filename)
if f.endswith(ext):
head_tail = os.path.split(f)
head_tail1 = 'C:/Path'
k =head_tail[1]
r=k.split(".")[0]
p=head_tail1 + "/" + r + " - Revised.csv"
mydata = pd.read_csv(f)
# to pull columns and values
new = mydata[["A","Room","C","D"]]
new = new.rename(columns={'D': 'Qty. of Parts'})
new['Qty. of Parts'] = 1
new.to_csv(p ,index=False)
#to merge columns and values
merge_columns = ['A', 'Room', 'C']
merged_col = ''.join(merge_columns).replace('ARoomC', 'F')
new[merged_col] = new[merge_columns].apply(lambda x: '.'.join(x), axis=1)
new.drop(merge_columns, axis=1, inplace=True)
new = new.groupby(merged_col).count().reset_index()
new.to_csv(p, index=False)
A:
If you want to convert the strings to dates and then extract the values from them:
import datetime
# datetime.datetime(1900, 2, 1, 0, 0)
d = datetime.datetime.strptime("1-Feb", "%d-%b")
print(f'{d.month}-{d.day}')
result:
2-1
A:
You can use pandas.to_datetime :
new["Room"]= (
pd.to_datetime(new["Room"], format="%d-%b", errors="coerce")
.dt.strftime("%#m-%#d")
.fillna(new["Room"])
)
Example :
col
0 1-Feb
1 2-Feb
2 3-Mar
3 2-1
Gives :
col
0 2-1
1 2-2
2 3-3
3 2-1
NB: In case you're facing errors, you need to give a minimal reproducible example that shows what your original (.csv) looks like when opened in a text editor, not in Excel.
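One portability note, added here as an aside (not part of the original answer): the %# modifier used above is a Windows-specific strftime extension; on Linux/macOS the equivalent "strip leading zero" modifier is %-. A hedged sketch of a cross-platform variant, assuming the same new dataframe:
import platform
import pandas as pd

# pick the "no leading zero" flag for the current platform
fmt = "%#m-%#d" if platform.system() == "Windows" else "%-m-%-d"
new["Room"] = (
    pd.to_datetime(new["Room"], format="%d-%b", errors="coerce")
      .dt.strftime(fmt)
      .fillna(new["Room"])
)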
| Replace value of a column converted to day and month to a text using Python | how do I achieve this in Python? Source file is a CSV file, and value of one column in that file is converted from numeric to day and month. Thank you very much in advance.
Example below:
Picture of the column:
room column
In my python script, value should look below:
1-Feb ---> 2-1
2-Feb ---> 2-2
3-Mar ---> 3-3
4-Mar ---> 3-4
Here's my script.
import os
import pandas as pd
directory = 'C:/Path'
ext = ('.csv')
for filename in os.listdir(directory):
f = os.path.join(directory, filename)
if f.endswith(ext):
head_tail = os.path.split(f)
head_tail1 = 'C:/Path'
k =head_tail[1]
r=k.split(".")[0]
p=head_tail1 + "/" + r + " - Revised.csv"
mydata = pd.read_csv(f)
# to pull columns and values
new = mydata[["A","Room","C","D"]]
new = new.rename(columns={'D': 'Qty. of Parts'})
new['Qty. of Parts'] = 1
new.to_csv(p ,index=False)
#to merge columns and values
merge_columns = ['A', 'Room', 'C']
merged_col = ''.join(merge_columns).replace('ARoomC', 'F')
new[merged_col] = new[merge_columns].apply(lambda x: '.'.join(x), axis=1)
new.drop(merge_columns, axis=1, inplace=True)
new = new.groupby(merged_col).count().reset_index()
new.to_csv(p, index=False)
| [
"If you want to convert the strings to dates to later get the values\nimport datetime\n# datetime.datetime(1900, 2, 1, 0, 0)\nd = datetime.datetime.strptime(\"1-Feb\", \"%d-%b\")\nprint(f'{d.month}-{d.day}')\n\nresult:\n2-1\n\n",
"You can use pandas.to_datetime :\nnew[\"Room\"]= (\n pd.to_datetime(new[\"Room\"], format=\"%d-%b\", errors=\"coerce\")\n .dt.strftime(\"%#m-%#d\")\n .fillna(new[\"Room\"])\n )\n\nExample :\n col\n0 1-Feb\n1 2-Feb\n2 3-Mar\n3 2-1\n\nGives :\n col\n0 2-1\n1 2-2\n2 3-3\n3 2-1\n\nNB: In case you're facing errors, you need to give a minimal reproducible example to show how look like your original (.csv) opened in a text editor and not in Excel.\n"
] | [
1,
0
] | [] | [] | [
"csv",
"if_statement",
"python"
] | stackoverflow_0074638521_csv_if_statement_python.txt |
Q:
Boto3 SES Client gets SignatureDoesNotMatch error
I have the following setup:
Python Flask API with boto3 installed. I create a boto3 client like so:
client = boto3.client(
"ses",
region_name='eu-west-1',
aws_access_key_id='myAccessKeyID',
aws_secret_access_key='mySecretAccessKey'
)
Then I try to send an email like so:
try:
client.send_email(
Source='VerifiedSourceEmail',
Destination={
'ToAddresses': ['VerifiedRecipientEmail'],
'CcAddresses': [],
'BccAddresses': [],
},
Message={
'Subject': {'Data': 'Test'},
            'Body': {'Text': {'Data': 'Test'}},
        },
    )
except ClientError as e:
return {
'ErrorCode': e.response['Error']['Code'],
'ErrorMessage': e.response['Error']['Message'],
}
When I try to do this, I get:
ErrorCode: SignatureDoesNotMatch
ErrorMessage: The request signature we calculated does not match the signature you provided. Check your AWS Secret Access Key and signing method. Consult the service documentation for details.
I have used the exact same client creation process when connecting to my S3 bucket:
client = boto3.client(
"s3",
region_name='eu-west-1',
aws_access_key_id='myS3AccessKeyID',
aws_secret_access_key='myS3SecretAccessKey'
)
...and this client works fine, I have tested both gets, uploads and deletes with no errors.
I have tested sending a mail directly in AWS, and this works. What am I doing wrong?
A:
From this answer:
The keys to be provided to send emails are not "SMTP credentials". The keys are instead the global access keys, which can be retrieved as described at
http://docs.amazonwebservices.com/ses/latest/GettingStartedGuide/GetAccessIDs.html.
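A minimal sketch of that distinction, assuming an IAM user's access key pair with SES permissions (the key values below are placeholders, not real credentials):
import boto3

# Use the IAM (global) access key pair from the IAM console,
# NOT the SES "SMTP username/password" pair generated for the SMTP interface.
client = boto3.client(
    "ses",
    region_name="eu-west-1",
    aws_access_key_id="AKIAXXXXXXXXXXXXXXXX",      # placeholder IAM access key ID
    aws_secret_access_key="xxxxxxxxxxxxxxxxxxxx",  # placeholder IAM secret key
)

# SMTP credentials only work against the SES SMTP endpoint (e.g. via smtplib);
# boto3 talks to the SES API, which signs requests with the IAM keys above.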
| Boto3 SES Client gets SignatureDoesNotMatch error | I have the following setup:
Python Flask API with boto3 installed. I create a boto3 client like so:
client = boto3.client(
"ses",
region_name='eu-west-1',
aws_access_key_id='myAccessKeyID',
aws_secret_access_key='mySecretAccessKey'
)
Then I try to send an email like so:
try:
client.send_email(
Source='VerifiedSourceEmail',
Destination={
'ToAddresses': ['VerifiedRecipientEmail'],
'CcAddresses': [],
'BccAddresses': [],
},
Message={
'Subject': {'Data': 'Test'},
'Body': {'Text': {'Data': 'Test'},
}
except ClientError as e:
return {
'ErrorCode': e.response['Error']['Code'],
'ErrorMessage': e.response['Error']['Message'],
}
When I try to do this, I get:
ErrorCode: SignatureDoesNotMatch
ErrorMessage: The request signature we calculated does not match the signature you provided. Check your AWS Secret Access Key and signing method. Consult the service documentation for details.
I have used the exact same client creation process when connecting to my S3 bucket:
client = boto3.client(
"s3",
region_name='eu-west-1',
aws_access_key_id='myS3AccessKeyID',
aws_secret_access_key='myS3SecretAccessKey'
)
...and this client works fine, I have tested both gets, uploads and deletes with no errors.
I have tested sending a mail directly in AWS, this works. What am I doing wrong?
| [
"From this answer:\n\nThe keys to be provided to send Emails are not \"SMTP Credentials\" .\nThe keys are instead Global access key which can be retrieved\nhttp://docs.amazonwebservices.com/ses/latest/GettingStartedGuide/GetAccessIDs.html.\n\n"
] | [
0
] | [] | [] | [
"amazon_ses",
"amazon_web_services",
"boto3",
"python",
"python_3.x"
] | stackoverflow_0072462576_amazon_ses_amazon_web_services_boto3_python_python_3.x.txt |
Q:
Custom pattern matching in python
I am trying to write a simple python program to read a log file and extract specific values
I have the following log line I want to look out for
2022-12-02 13:13:10.539 [metrics-writer-1] [INFO ] metrics - type=GAUGE, name=Topic.myTopic1.TotalIncomingBytes.Count, value=20725269
I have many topics such as myTopic2, myTopic3 etc
I want to be able to detect all such lines which show the total incoming bytes for various topics and extract the value.
Is there any easy and efficient way to do so ?
basically I want to be able to detect the following pattern
2022-12-02 13:13:10.539 [metrics-writer-1] [INFO ] metrics - type=GAUGE, name=Topic.${}.TotalIncomingBytes.Count, value=${}
Ignoring the timestamp, of course.
A:
Maybe something like this:
resultLines = []
resultSums = {}
with open('recent.logs') as f:
for idx, line in enumerate(f):
pieces = line.rsplit('.TotalIncomingBytes.Count, value=', 1)
if len(pieces) != 2: continue
value = pieces[1]
pieces = pieces[0].rsplit(' [metrics-writer-1] [INFO ] metrics - type=GAUGE, name=Topic.', 1)
if len(pieces) != 2: continue
topic = pieces[1]
value = int(value)
resultLines.append({
'idx': idx,
'line': line,
'topic': topic,
'value': value,
})
if topic not in resultSums:
resultSums[topic] = 0
resultSums[topic] = resultSums[topic] + value
for topic, value in resultSums.items():  # dict.iteritems() is Python 2 only
print(topic, value)
A:
Here's the way I would do it. This could also be done with a regular expression.
data = """\
2022-12-02 13:13:10.539 [metrics-writer-1] [INFO ] metrics - type=GAUGE, name=Topic.myTopic1.TotalIncomingBytes.Count, value=20725269
2022-12-02 13:13:10.539 [metrics-writer-1] [INFO ] metrics - type=GAUGE, name=Topic.myTopic1.TotalIncomingBytes.Count, value=20725269
2022-12-02 13:13:10.539 [metrics-writer-1] [INFO ] metrics - type=GAUGE, name=Topic.myTopic1.TotalIncomingBytes.Count, value=20725269
"""
counts = {}
for line in data.splitlines():
if '[INFO ] metrics' in line:
parts = line.split(' - ')
parts = parts[1].split(', ')
dct = {}
for part in parts:
key,val = part.split('=')
dct[key] = val
if dct['name'] not in counts:
counts[dct['name']] = int(dct['value'])
else:
counts[dct['name']] += int(dct['value'])
print(counts)
Output:
{'Topic.myTopic1.TotalIncomingBytes.Count': 62175807}
Here's a regex version:
import re

pattern = re.compile(r".* - type=([^,]*), name=([^,]*), value=([^,]*)")
counts = {}
for line in data.splitlines():
if '[INFO ] metrics' in line:
parts = pattern.match(line)
if parts[2] not in counts:
counts[parts[2]] = int(parts[3])
else:
counts[parts[2]] += int(parts[3])
print(counts)
| Custom pattern matching in python | I am trying to write a simple python program to read a log file and extract specific values
I have the following log line I want to look out for
2022-12-02 13:13:10.539 [metrics-writer-1] [INFO ] metrics - type=GAUGE, name=Topic.myTopic1.TotalIncomingBytes.Count, value=20725269
I have many topics such as myTopic2, myTopic3 etc
I want to be able to detect all such lines which show the total incoming bytes for various topics and extract the value.
Is there any easy and efficient way to do so ?
basically I want to be able to detect the following pattern
2022-12-02 13:13:10.539 [metrics-writer-1] [INFO ] metrics - type=GAUGE, name=Topic.${}.TotalIncomingBytes.Count, value=${}
Ignoring the timestamp ofcourse
| [
"Maybe something like this:\nresultLines = []\nresultSums = {}\nwith open('recent.logs') as f:\n for idx, line in enumerate(f):\n pieces = line.rsplit('.TotalIncomingBytes.Count, value=', 1)\n if len(pieces) != 2: continue\n\n value = pieces[1]\n\n pieces = pieces[0].rsplit(' [metrics-writer-1] [INFO ] metrics - type=GAUGE, name=Topic.', 1)\n if len(pieces) != 2: continue\n\n topic = pieces[1]\n value = int(value)\n\n resultLines.append({\n 'idx': idx,\n 'line': line,\n 'topic': topic,\n 'value': value,\n })\n\n if topic not in resultSums:\n resultSums[topic] = 0\n resultSums[topic] = resultSums[topic] + value\n\nfor topic, value in resultSums.iteritems():\n print(topic, value)\n\n",
"Here's the way I would do it. This could also be done with a regular expression.\ndata = \"\"\"\\\n2022-12-02 13:13:10.539 [metrics-writer-1] [INFO ] metrics - type=GAUGE, name=Topic.myTopic1.TotalIncomingBytes.Count, value=20725269\n2022-12-02 13:13:10.539 [metrics-writer-1] [INFO ] metrics - type=GAUGE, name=Topic.myTopic1.TotalIncomingBytes.Count, value=20725269\n2022-12-02 13:13:10.539 [metrics-writer-1] [INFO ] metrics - type=GAUGE, name=Topic.myTopic1.TotalIncomingBytes.Count, value=20725269\n\"\"\"\n\ncounts = {}\n\nfor line in data.splitlines():\n if '[INFO ] metrics' in line:\n parts = line.split(' - ')\n parts = parts[1].split(', ')\n dct = {}\n for part in parts:\n key,val = part.split('=')\n dct[key] = val\n if dct['name'] not in counts:\n counts[dct['name']] = int(dct['value'])\n else:\n counts[dct['name']] += int(dct['value'])\n\nprint(counts)\n\nOutput:\n{'Topic.myTopic1.TotalIncomingBytes.Count': 62175807}\n\nHere's a regex version:\n\npattern = re.compile(r\".* - type=([^,]*), name=([^,]*), value=([^,]*)\")\ncounts = {}\n\nfor line in data.splitlines():\n if '[INFO ] metrics' in line:\n parts = pattern.match(line)\n if parts[2] not in counts:\n counts[parts[2]] = int(parts[3])\n else:\n counts[parts[2]] += int(parts[3])\n\nprint(counts)\n\n"
] | [
1,
0
] | [] | [] | [
"python"
] | stackoverflow_0074651250_python.txt |
Q:
Discord.py channel.connect() never returns
I am currently working on a discord.py-rewrite (1.3.3) bot for my discord server. At the moment, I am trying to make the bot play music in the voice channels. According to the discord.py documentation, you would use the function channel.connect() to connect to a voice channel, which would return a VoiceClient object.
However, I never get a VoiceClient object back from channel.connect(). The bot does join my channel, but it seems to be stuck in an infinite loop. Nothing after the line "await channel.connect()" is executed, so the line "test" is not printed. When I update the bot's role in the server it works once, but after I restart the bot it will no longer work.
# This is just a function, not the command the user calls. The context is passed through
async def join(ctx):
voice_status = ctx.author.voice
# Checking if author voice_status is not none
if voice_status:
# Getting the channel of the author
channel = voice_status.channel
if ctx.voice_client is None:
# Connect the bot
vc = await channel.connect()
print("test")
I have found a few threads on github and overflow where people were experiencing the same problem, but they never fixed it. I'm quite sure that the code is correct.
I have already tried reinstalling and updating discord.py. I have also asked for help in the discord API server but they could not replicate my issue.
This is my first overflow post so I apologize in advance if there is anything wrong with my post.
Cheers
A:
Here's a way to make your bot join a voice channel:
from discord.utils import get  # needed for get() below

async def join(ctx):
    channel = ctx.message.author.voice.channel
    if not channel:
        await ctx.send("You're not connected to any voice channel !")
    else:
        # ctx.bot works for a plain command; inside a cog this would be self.bot
        voice = get(ctx.bot.voice_clients, guild=ctx.guild)
        if voice and voice.is_connected():
            await voice.move_to(channel)
        else:
            voice = await channel.connect()
PS: if you add a play command, you'll still have to get the bot's channel and voice with these two lines:
voice = get(ctx.bot.voice_clients, guild=ctx.guild)
channel = ctx.message.author.voice.channel
A:
I removed your comments and added my own #comments
voice_status = ctx.author.voice
if voice_status: # this check does nothing. This is a discord.VoiceState object
channel = voice_status.channel
# channel is None if you are not connected to a voice channel
# channel is a Channel object if you are connected to a voice channel
if ctx.voice_client is None:
# I may be wrong but if I read the docs right this returns the voice client of the guild, something semi-related to the author.
vc = await channel.connect()
# You are not connected to a voice channel. So channel is None. Now you are trying to connect to a None channel.
print("test")
A:
I know this is old, but I've spent a long time fixing the problem myself and I haven't found any answers online. My problem was that my bot didn't have the intents for the voice features. If your bot is just for a small group of people like your friends, just setting intents to discord.Intents.all() will work. Otherwise you can manually select them.
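A minimal sketch of what enabling those intents looks like, assuming a commands.Bot (the voice_states intent is the one the voice machinery relies on):
import discord
from discord.ext import commands

intents = discord.Intents.default()
intents.voice_states = True  # let the bot receive voice state updates

bot = commands.Bot(command_prefix="!", intents=intents)
# for a small private bot, discord.Intents.all() also works, as the answer notes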
A:
You may have a problem with Discord RTC. Your bot tries to connect, but RTC is blocked by the firewall. You should enable DNS/UDP traffic in the firewall.
| Discord.py channel.connect() never returns | I am currently working on a discord.py-rewrite (1.3.3) bot for my discord server. At the moment, I am trying to make the bot play music in the voice channels. According to the discord.py documentation, you would use the function channel.connect() to connect to a voice channel, which would return a VoiceClient object.
However, I never get a VoiceClient object back from channel.connect(). The bot does join my channel, but it seems to be stuck in an infinite loop. Nothing after the line "await channel.connect()" is executed, so the line "test" is not printed. When I update the bot's role in the server it works once, but after i restart the bot it will no longer work.
# This is just a function, not the command the user calls. The context is passed through
async def join(ctx):
voice_status = ctx.author.voice
# Checking if author voice_status is not none
if voice_status:
# Getting the channel of the author
channel = voice_status.channel
if ctx.voice_client is None:
# Connect the bot
vc = await channel.connect()
print("test")
I have found a few threads on github and overflow where people were experiencing the same problem, but they never fixed it. I'm quite sure that the code is correct.
I have already tried reinstalling and updating discord.py. I have also asked for help in the discord API server but they could not replicate my issue.
This is my first overflow post so I apologize in advance if there is anything wrong with my post.
Cheers
| [
"Here's a way to make your bot join a voice channel:\nasync def join(ctx):\n channel = ctx.message.author.voice.channel\n if not channel:\n await ctx.send(\"You're not connected to any voice channel !\")\n else:\n voice = get(self.bot.voice_clients, guild=ctx.guild)\n if voice and voice.is_connected():\n await voice.move_to(channel)\n else:\n voice = await channel.connect()\n\nPS : if you add a play command, you'll still have to get the bot's channel and voice with those to lines :\nvoice = get(self.bot.voice_clients, guild=ctx.guild)\nchannel = ctx.message.author.voice.channel\n\n",
"I removed your comments and added my own #comments\nvoice_status = ctx.author.voice\nif voice_status: # this check does nothing. This is a discord.VoiceState object\n channel = voice_status.channel\n # channel is None if you are not connected to a voice channel\n # channel is a Channel object if you are connected to a voice channel\n if ctx.voice_client is None:\n # I may be wrong but if I read the docs right this returns the voice client of the guild, something semi-related to the author.\n vc = await channel.connect()\n # You are not connected to a voice channel. So channel is None. Now you are trying to connect to a None channel.\n print(\"test\")\n\n",
"ik this is old, but I've spend a long time fixing the problem myself and I haven't found any answers online. My problem was that my bot didn't have the Intents for voice stuff. If your bot is just for a small group of people like your friends, just settings intents to discord.Intents.all() will work. Otherwise you can manually select them.\n",
"You have problem discord RTC. Your bot try to connect but rtc is blocked by firewall. You should anable DNS UDP in firewall.\n"
] | [
1,
0,
0,
0
] | [] | [] | [
"bots",
"discord",
"discord.py",
"python"
] | stackoverflow_0062557255_bots_discord_discord.py_python.txt |
Q:
Sort rows of curve shaped data in python
I have a dataset that consists of 5 rows that are formed like a curve. I want to separate the inner row from the others, or if possible each row, and store them in separate arrays. Is there any way to do this, like somehow flattening the curved data and sorting it afterwards based on the x and y values?
I would like to assign each row, from left to right, numbers from 0 up to the maximum of the row. Right now the labels for each dot are not useful for me and I can't change the labels.
Here are the first 50 data points of my data set:
x y
0 -6.4165 0.3716
1 -4.0227 2.63
2 -7.206 3.0652
3 -3.2584 -0.0392
4 -0.7565 2.1039
5 -0.0498 -0.5159
6 2.363 1.5329
7 -10.7253 3.4654
8 -8.0621 5.9083
9 -4.6328 5.3028
10 -1.4237 4.8455
11 1.8047 4.2297
12 4.8147 3.6074
13 -5.3504 8.1889
14 -1.7743 7.6165
15 1.1783 6.9698
16 4.3471 6.2411
17 7.4067 5.5988
18 -2.6037 10.4623
19 0.8613 9.7628
20 3.8054 9.0202
21 7.023 8.1962
22 9.9776 7.5563
23 0.1733 12.6547
24 3.7137 11.9097
25 6.4672 10.9363
26 9.6489 10.1246
27 12.5674 9.3369
28 3.2124 14.7492
29 6.4983 13.7562
30 9.2606 12.7241
31 12.4003 11.878
32 15.3578 11.0027
33 6.3128 16.7014
34 9.7676 15.6557
35 12.2103 14.4967
36 15.3182 13.5166
37 18.2495 12.5836
38 9.3947 18.5506
39 12.496 17.2993
40 15.3987 16.2716
41 18.2212 15.1871
42 21.1241 14.0893
43 12.3548 20.2538
44 15.3682 18.9439
45 18.357 17.8862
46 21.0834 16.6258
47 23.9992 15.4145
48 15.3776 21.9402
49 18.3568 20.5803
50 21.1733 19.3041
A:
It seems that your curves have a pattern, so you could select the curve of interest using slicing. I had to offset the selection slightly to get the five curves because the first 8 points are not in the same order as the rest of the data. So the initial 8 data points are discarded, but these could be added back in afterwards if required.
import pandas as pd
import matplotlib.pyplot as plt
df = pd.DataFrame({ 'x': [-6.4165, -4.0227, -7.206, -3.2584, -0.7565, -0.0498, 2.363, -10.7253, -8.0621, -4.6328, -1.4237, 1.8047, 4.8147, -5.3504, -1.7743, 1.1783, 4.3471, 7.4067, -2.6037, 0.8613, 3.8054, 7.023, 9.9776, 0.1733, 3.7137, 6.4672, 9.6489, 12.5674, 3.2124, 6.4983, 9.2606, 12.4003, 15.3578, 6.3128, 9.7676, 12.2103, 15.3182, 18.2495, 9.3947, 12.496, 15.3987, 18.2212, 21.1241, 12.3548, 15.3682, 18.357, 21.0834, 23.9992, 15.3776, 18.3568, 21.1733],
'y': [0.3716, 2.63, 3.0652, -0.0392, 2.1039, -0.5159, 1.5329, 3.4654, 5.9083, 5.3028, 4.8455, 4.2297, 3.6074, 8.1889, 7.6165, 6.9698, 6.2411, 5.5988, 10.4623, 9.7628, 9.0202, 8.1962, 7.5563, 12.6547, 11.9097, 10.9363, 10.1246, 9.3369, 14.7492, 13.7562, 12.7241, 11.878, 11.0027, 16.7014, 15.6557, 14.4967, 13.5166, 12.5836, 18.5506, 17.2993, 16.2716, 15.1871, 14.0893, 20.2538, 18.9439, 17.8862, 16.6258, 15.4145, 21.9402, 20.5803, 19.3041]})
# Generate the 5 dataframes
df_list = [df.iloc[i+8::5, :] for i in range(5)]
# Generate the plot
fig = plt.figure()
for frame in df_list:
plt.scatter(frame['x'], frame['y'])
plt.show()
# Print the data of the innermost curve
print(df_list[4])
OUTPUT:
The 5th dataframe df_list[4] contains the data of the innermost plot.
x y
12 4.8147 3.6074
17 7.4067 5.5988
22 9.9776 7.5563
27 12.5674 9.3369
32 15.3578 11.0027
37 18.2495 12.5836
42 21.1241 14.0893
47 23.9992 15.4145
You can then add the missing data like this:
# Retrieve the two missing points of the inner curve
inner_curve = pd.concat([df_list[4], df[5:7]]).sort_index(ascending=True)
print(inner_curve)
# Plot the inner curve only
fig2 = plt.figure()
plt.scatter(inner_curve['x'], inner_curve['y'], color = '#9467BD')
plt.show()
OUTPUT: inner curve
x y
5 -0.0498 -0.5159
6 2.3630 1.5329
12 4.8147 3.6074
17 7.4067 5.5988
22 9.9776 7.5563
27 12.5674 9.3369
32 15.3578 11.0027
37 18.2495 12.5836
42 21.1241 14.0893
47 23.9992 15.4145
Complete Inner Curve
| Sort rows of curve shaped data in python | I have a dataset that consists of 5 rows that are formed like a curve. I want to separate the inner row from the other or if possible each row and store them in a separate array. Is there any way to do this, like somehow flatten the curved data and sorting it afterwards based on the x and y values?
I would like to assign each row from left to right numbers from 0 to the max of the row. Right now the labels for each dot are not useful for me and I can't change the labels.
Here are the first 50 data points of my data set:
x y
0 -6.4165 0.3716
1 -4.0227 2.63
2 -7.206 3.0652
3 -3.2584 -0.0392
4 -0.7565 2.1039
5 -0.0498 -0.5159
6 2.363 1.5329
7 -10.7253 3.4654
8 -8.0621 5.9083
9 -4.6328 5.3028
10 -1.4237 4.8455
11 1.8047 4.2297
12 4.8147 3.6074
13 -5.3504 8.1889
14 -1.7743 7.6165
15 1.1783 6.9698
16 4.3471 6.2411
17 7.4067 5.5988
18 -2.6037 10.4623
19 0.8613 9.7628
20 3.8054 9.0202
21 7.023 8.1962
22 9.9776 7.5563
23 0.1733 12.6547
24 3.7137 11.9097
25 6.4672 10.9363
26 9.6489 10.1246
27 12.5674 9.3369
28 3.2124 14.7492
29 6.4983 13.7562
30 9.2606 12.7241
31 12.4003 11.878
32 15.3578 11.0027
33 6.3128 16.7014
34 9.7676 15.6557
35 12.2103 14.4967
36 15.3182 13.5166
37 18.2495 12.5836
38 9.3947 18.5506
39 12.496 17.2993
40 15.3987 16.2716
41 18.2212 15.1871
42 21.1241 14.0893
43 12.3548 20.2538
44 15.3682 18.9439
45 18.357 17.8862
46 21.0834 16.6258
47 23.9992 15.4145
48 15.3776 21.9402
49 18.3568 20.5803
50 21.1733 19.3041
| [
"It seems that your curves have a pattern, so you could select the curve of interest using splicing. I had the offset the selection slightly to get the five curves because the first 8 points are not in the same order as the rest of the data. So the initial 8 data points are discarded. But these could be added back in afterwards if required.\nimport pandas as pd\nimport matplotlib.pyplot as plt\n\ndf = pd.DataFrame({ 'x': [-6.4165, -4.0227, -7.206, -3.2584, -0.7565, -0.0498, 2.363, -10.7253, -8.0621, -4.6328, -1.4237, 1.8047, 4.8147, -5.3504, -1.7743, 1.1783, 4.3471, 7.4067, -2.6037, 0.8613, 3.8054, 7.023, 9.9776, 0.1733, 3.7137, 6.4672, 9.6489, 12.5674, 3.2124, 6.4983, 9.2606, 12.4003, 15.3578, 6.3128, 9.7676, 12.2103, 15.3182, 18.2495, 9.3947, 12.496, 15.3987, 18.2212, 21.1241, 12.3548, 15.3682, 18.357, 21.0834, 23.9992, 15.3776, 18.3568, 21.1733],\n 'y': [0.3716, 2.63, 3.0652, -0.0392, 2.1039, -0.5159, 1.5329, 3.4654, 5.9083, 5.3028, 4.8455, 4.2297, 3.6074, 8.1889, 7.6165, 6.9698, 6.2411, 5.5988, 10.4623, 9.7628, 9.0202, 8.1962, 7.5563, 12.6547, 11.9097, 10.9363, 10.1246, 9.3369, 14.7492, 13.7562, 12.7241, 11.878, 11.0027, 16.7014, 15.6557, 14.4967, 13.5166, 12.5836, 18.5506, 17.2993, 16.2716, 15.1871, 14.0893, 20.2538, 18.9439, 17.8862, 16.6258, 15.4145, 21.9402, 20.5803, 19.3041]})\n\n# Generate the 5 dataframes\ndf_list = [df.iloc[i+8::5, :] for i in range(5)]\n\n# Generate the plot\nfig = plt.figure()\nfor frame in df_list:\n plt.scatter(frame['x'], frame['y'])\nplt.show()\n\n# Print the data of the innermost curve\nprint(df_list[4])\n\nOUTPUT:\n\nThe 5th dataframe df_list[4] contains the data of the innermost plot.\n x y\n12 4.8147 3.6074\n17 7.4067 5.5988\n22 9.9776 7.5563\n27 12.5674 9.3369\n32 15.3578 11.0027\n37 18.2495 12.5836\n42 21.1241 14.0893\n47 23.9992 15.4145\n\n\nYou can then add the missing data like this:\n# Retrieve the two missing points of the inner curve\ninner_curve = pd.concat([df_list[4], df[5:7]]).sort_index(ascending=True)\nprint(inner_curve)\n\n# Plot the inner curve only\nfig2 = plt.figure()\nplt.scatter(inner_curve['x'], inner_curve['y'], color = '#9467BD')\nplt.show()\n\nOUTPUT: inner curve\n x y\n5 -0.0498 -0.5159\n6 2.3630 1.5329\n12 4.8147 3.6074\n17 7.4067 5.5988\n22 9.9776 7.5563\n27 12.5674 9.3369\n32 15.3578 11.0027\n37 18.2495 12.5836\n42 21.1241 14.0893\n47 23.9992 15.4145\n\nComplete Inner Curve\n\n"
] | [
1
] | [] | [] | [
"data_analysis",
"python",
"scipy",
"sorting"
] | stackoverflow_0074651199_data_analysis_python_scipy_sorting.txt |
Q:
How to modify (flip sign) secondary y-axis tick labels
Data (this block of code is good; feel free to skip):
#Import statements
import yfinance as yf
import pandas as pd
import matplotlib.pyplot as plt
import matplotlib.ticker as mticker
import matplotlib.dates as mdates
#Constants
start_date = "2018-01-01"
end_date = "2023-01-01"
#Pull in data
tenYear_master = yf.download('^TNX', start_date, end_date)
thirtyYear_master = yf.download('^TYX', start_date, end_date)
#Trim DataFrames to only include 'Adj Close columns'
tenYear = tenYear_master['Adj Close'].to_frame()
thirtyYear = thirtyYear_master['Adj Close'].to_frame()
#Rename columns
tenYear.rename(columns = {'Adj Close' : 'Adj Close - Ten Year'}, inplace= True)
thirtyYear.rename(columns = {'Adj Close' : 'Adj Close - Thirty Year'}, inplace= True)
#Join DataFrames
data = tenYear.join(thirtyYear)
#Add column for difference (spread)
data['Spread'] = data['Adj Close - Thirty Year'] - data['Adj Close - Ten Year']
print(data.head(25))
Adj Close - Ten Year Adj Close - Thirty Year Spread
Date
2018-01-02 2.465 2.811 0.346
2018-01-03 2.447 2.785 0.338
2018-01-04 2.453 2.786 0.333
2018-01-05 2.476 2.811 0.335
2018-01-08 2.480 2.814 0.334
2018-01-09 2.546 2.887 0.341
2018-01-10 2.550 2.891 0.341
2018-01-11 2.531 2.865 0.334
2018-01-12 2.552 2.853 0.301
2018-01-16 2.544 2.836 0.292
2018-01-17 2.578 2.848 0.270
2018-01-18 2.611 2.888 0.277
2018-01-19 2.637 2.912 0.275
2018-01-22 2.665 2.928 0.263
2018-01-23 2.624 2.902 0.278
2018-01-24 2.654 2.938 0.284
2018-01-25 2.621 2.881 0.260
2018-01-26 2.662 2.912 0.250
2018-01-29 2.699 2.943 0.244
2018-01-30 2.726 2.980 0.254
2018-01-31 2.720 2.942 0.222
2018-02-01 2.773 3.005 0.232
2018-02-02 2.854 3.097 0.243
2018-02-05 2.794 3.067 0.273
2018-02-06 2.768 3.044 0.276
This block is also good.
'''Plot data'''
#Delete top, left, and right borders from figure
plt.rcParams['axes.spines.top'] = False
plt.rcParams['axes.spines.left'] = False
plt.rcParams['axes.spines.right'] = False
#Create figure
fig, ax = plt.subplots(figsize = (12.5,7.5))
data.plot(ax = ax, secondary_y = ['Spread'], ylabel = 'Yield', legend = False);
'''Change left y-axis tick labels to percentage'''
left_yticks = ax.get_yticks().tolist()
ax.yaxis.set_major_locator(mticker.FixedLocator(left_yticks))
ax.set_yticklabels((("%.1f" % tick) + '%') for tick in left_yticks);
'''Change x-axis ticks and tick labels'''
# set the locator to Jan, Apr, Jul, Oct
ax.xaxis.set_major_locator(mdates.MonthLocator( bymonth = (1, 4, 7, 10)) )
# set the formater for month-year, with lower y to show only two digits
ax.xaxis.set_major_formatter(mdates.DateFormatter("%b-%y"))
#Add legend
fig.legend(loc="upper center", ncol = 3, frameon = False)
fig.tight_layout()
plt.show()
Note how the right y-axis starts at -0.2 at the bottom and goes up to 0.8. Without changing anything about the data or the shape of the curves, how can I flip the sign of the right y-axis tick labels so that they go from 0.2 at the bottom to -0.8 at the top? I only want to change the sign of the y-axis tick labels in this graph, nothing else.
I tried doing the following:
'''Change right y-axis tick labels'''
#Pull current right y-axis tick labels
right_yticks = (ax.right_ax).get_yticks().tolist()
#Loop through and multiply each right y-axis tick label by -1
for index, value in enumerate(right_yticks):
right_yticks[index] = value*(-1)
#Set new right y-axis tick labels
(ax.right_ax).yaxis.set_major_locator(mticker.FixedLocator(right_yticks))
(ax.right_ax).set_yticklabels(right_yticks)
But I got this:
Note how the right y-axis is incomplete and corrupted.
I'd appreciate any help. Thank you!
A:
The problem here, I think, is that you change the y-ticks before you pass them to set_major_locator; but you don't want to change the ticks, you actually only want to change the labels (as you did for the left y labels).
Change that part to:
"""Change right y-axis tick labels"""
# Pull current right y-axis tick labels
right_yticks = ax.right_ax.get_yticks().tolist()
# Set new right y-axis tick labels
ax.right_ax.yaxis.set_major_locator(mticker.FixedLocator(right_yticks)) # right_yticks need to be unchanged here
# NOW you change them in a comprehension like you did it for the left y-axis
ax.right_ax.set_yticklabels((f"{(-1)*tick:.2f}") for tick in right_yticks)
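An alternative sketch that sidesteps FixedLocator entirely, using a FuncFormatter on the secondary axis (an addition, not part of the original answer): tick positions stay untouched and only the rendered labels are negated.
import matplotlib.ticker as mticker

ax.right_ax.yaxis.set_major_formatter(
    mticker.FuncFormatter(lambda val, pos: f"{-val:.2f}")
)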
| How to modify (flip sign) secondary y-axis tick labels | Data (this block of code is good; feel free to skip):
#Import statements
import yfinance as yf
import pandas as pd
import matplotlib.pyplot as plt
import matplotlib.ticker as mticker
#Constants
start_date = "2018-01-01"
end_date = "2023-01-01"
#Pull in data
tenYear_master = yf.download('^TNX', start_date, end_date)
thirtyYear_master = yf.download('^TYX', start_date, end_date)
#Trim DataFrames to only include 'Adj Close columns'
tenYear = tenYear_master['Adj Close'].to_frame()
thirtyYear = thirtyYear_master['Adj Close'].to_frame()
#Rename columns
tenYear.rename(columns = {'Adj Close' : 'Adj Close - Ten Year'}, inplace= True)
thirtyYear.rename(columns = {'Adj Close' : 'Adj Close - Thirty Year'}, inplace= True)
#Join DataFrames
data = tenYear.join(thirtyYear)
#Add column for difference (spread)
data['Spread'] = data['Adj Close - Thirty Year'] - data['Adj Close - Ten Year']
print(data.head(25))
Adj Close - Ten Year Adj Close - Thirty Year Spread
Date
2018-01-02 2.465 2.811 0.346
2018-01-03 2.447 2.785 0.338
2018-01-04 2.453 2.786 0.333
2018-01-05 2.476 2.811 0.335
2018-01-08 2.480 2.814 0.334
2018-01-09 2.546 2.887 0.341
2018-01-10 2.550 2.891 0.341
2018-01-11 2.531 2.865 0.334
2018-01-12 2.552 2.853 0.301
2018-01-16 2.544 2.836 0.292
2018-01-17 2.578 2.848 0.270
2018-01-18 2.611 2.888 0.277
2018-01-19 2.637 2.912 0.275
2018-01-22 2.665 2.928 0.263
2018-01-23 2.624 2.902 0.278
2018-01-24 2.654 2.938 0.284
2018-01-25 2.621 2.881 0.260
2018-01-26 2.662 2.912 0.250
2018-01-29 2.699 2.943 0.244
2018-01-30 2.726 2.980 0.254
2018-01-31 2.720 2.942 0.222
2018-02-01 2.773 3.005 0.232
2018-02-02 2.854 3.097 0.243
2018-02-05 2.794 3.067 0.273
2018-02-06 2.768 3.044 0.276
This block is also good.
'''Plot data'''
#Delete top, left, and right borders from figure
plt.rcParams['axes.spines.top'] = False
plt.rcParams['axes.spines.left'] = False
plt.rcParams['axes.spines.right'] = False
#Create figure
fig, ax = plt.subplots(figsize = (12.5,7.5))
data.plot(ax = ax, secondary_y = ['Spread'], ylabel = 'Yield', legend = False);
'''Change left y-axis tick labels to percentage'''
left_yticks = ax.get_yticks().tolist()
ax.yaxis.set_major_locator(mticker.FixedLocator(left_yticks))
ax.set_yticklabels((("%.1f" % tick) + '%') for tick in left_yticks);
'''Change x-axis ticks and tick labels'''
# set the locator to Jan, Apr, Jul, Oct
ax.xaxis.set_major_locator(mdates.MonthLocator( bymonth = (1, 4, 7, 10)) )
# set the formater for month-year, with lower y to show only two digits
ax.xaxis.set_major_formatter(mdates.DateFormatter("%b-%y"))
#Add legend
fig.legend(loc="upper center", ncol = 3, frameon = False)
fig.tight_layout()
plt.show()
Note how the right y-axis starts at -0.2 at the bottom and goes up to 0.8. Without changing anything about the data nor the shape of the curves, how can I flip the sign of the right y-axis tick labels so that they go from 0.2 at the bottom to -0.8 at the top? I only want to change the sign of the y-axis tick labels in this graph, nothing else.
I tried doing the following:
'''Change right y-axis tick labels'''
#Pull current right y-axis tick labels
right_yticks = (ax.right_ax).get_yticks().tolist()
#Loop through and multiply each right y-axis tick label by -1
for index, value in enumerate(right_yticks):
right_yticks[index] = value*(-1)
#Set new right y-axis tick labels
(ax.right_ax).yaxis.set_major_locator(mticker.FixedLocator(right_yticks))
(ax.right_ax).set_yticklabels(right_yticks)
But I got this:
Note how the right y-axis is incomplete and corrupted.
I'd appreciate any help. Thank you!
| [
"The problem here I think is, you change the y_ticks before you pass them to set_major_locator, but you don't want to change the ticks, you actually only want to change the label (as you did for the left y labels).\nChange that part to:\n\"\"\"Change right y-axis tick labels\"\"\"\n# Pull current right y-axis tick labels\nright_yticks = ax.right_ax.get_yticks().tolist()\n# Set new right y-axis tick labels\nax.right_ax.yaxis.set_major_locator(mticker.FixedLocator(right_yticks)) # right_yticks need to be unchanged here\n\n# NOW you change them in a comprehension like you did it for the left y-axis\nax.right_ax.set_yticklabels((f\"{(-1)*tick:.2f}\") for tick in right_yticks)\n\n\n"
] | [
0
] | [] | [] | [
"matplotlib",
"pandas",
"python",
"yticks"
] | stackoverflow_0074647364_matplotlib_pandas_python_yticks.txt |
Q:
TypeError: sequence item 1: expected str instance, int found (Python)
Seeking your assistance regarding this issue. I'm trying to resolve it and have tried so many variations of the syntax, but I still get the same error. I have multiple CSV files to be converted and I'm pulling the same data; the script works for one of my CSV files but not the others. Looking forward to your feedback. Thank you very much.
My code:
import os
import pandas as pd
directory = 'C:/path'
ext = ('.csv')
for filename in os.listdir(directory):
f = os.path.join(directory, filename)
if f.endswith(ext):
head_tail = os.path.split(f)
head_tail1 = 'C:/path'
k =head_tail[1]
r=k.split(".")[0]
p=head_tail1 + "/" + r + " - Revised.csv"
mydata = pd.read_csv(f)
# to pull columns and values
new = mydata[["A","Room","C","D"]]
new = new.rename(columns={'D': 'Qty. of Parts'})
new['Qty. of Parts'] = 1
new.to_csv(p ,index=False)
#to merge columns and values
merge_columns = ['A', 'Room', 'C']
merged_col = ''.join(merge_columns).replace('ARoomC', 'F')
new[merged_col] = new[merge_columns].apply(lambda x: '.'.join(x), axis=1)
new.drop(merge_columns, axis=1, inplace=True)
new = new.groupby(merged_col).count().reset_index()
new.to_csv(p, index=False)
The error I get:
Traceback (most recent call last):
File "C:Path\MyProject.py", line 34, in <module>
new[merged_col] = new[merge_columns].apply(lambda x: '.'.join(x), axis=1)
File "C:Path\MyProject.py", line 9565, in apply
return op.apply().__finalize__(self, method="apply")
File "C:Path\MyProject.py", line 746, in apply
return self.apply_standard()
File "C:Path\MyProject.py", line 873, in apply_standard
results, res_index = self.apply_series_generator()
File "C:Path\MyProject.py", line 889, in apply_series_generator
results[i] = self.f(v)
File "C:Path\MyProject.py", line 34, in <lambda>
new[merged_col] = new[merge_columns].apply(lambda x: '.'.join(x), axis=1)
TypeError: sequence item 1: expected str instance, int found
A:
It's hard to say what you're trying to achieve without showing a sample of your data. But anyway, to fix the error, you need to cast the values to strings inside the apply call:
new[merged_col] = new[merge_columns].apply(lambda x: '.'.join(map(str, x)), axis=1)
Or, you can also use pandas.Series.astype:
new[merged_col] = new[merge_columns].astype(str).apply(lambda x: '.'.join(x), axis=1)
| TypeError: sequence item 1: expected str instance, int found (Python) | Seeking for your assistance regarding this issue and I'm trying to resolve it, tried so many syntax but still getting the same error. I got multiple csv files to be converted and I'm pulling the same data, the script works for 1 of my csv file but not on the other. Looking forward to your feedback. Thank you very much.
My code:
import os
import pandas as pd
directory = 'C:/path'
ext = ('.csv')
for filename in os.listdir(directory):
f = os.path.join(directory, filename)
if f.endswith(ext):
head_tail = os.path.split(f)
head_tail1 = 'C:/path'
k =head_tail[1]
r=k.split(".")[0]
p=head_tail1 + "/" + r + " - Revised.csv"
mydata = pd.read_csv(f)
# to pull columns and values
new = mydata[["A","Room","C","D"]]
new = new.rename(columns={'D': 'Qty. of Parts'})
new['Qty. of Parts'] = 1
new.to_csv(p ,index=False)
#to merge columns and values
merge_columns = ['A', 'Room', 'C']
merged_col = ''.join(merge_columns).replace('ARoomC', 'F')
new[merged_col] = new[merge_columns].apply(lambda x: '.'.join(x), axis=1)
new.drop(merge_columns, axis=1, inplace=True)
new = new.groupby(merged_col).count().reset_index()
new.to_csv(p, index=False)
The error I get:
Traceback (most recent call last):
File "C:Path\MyProject.py", line 34, in <module>
new[merged_col] = new[merge_columns].apply(lambda x: '.'.join(x), axis=1)
File "C:Path\MyProject.py", line 9565, in apply
return op.apply().__finalize__(self, method="apply")
File "C:Path\MyProject.py", line 746, in apply
return self.apply_standard()
File "C:Path\MyProject.py", line 873, in apply_standard
results, res_index = self.apply_series_generator()
File "C:Path\MyProject.py", line 889, in apply_series_generator
results[i] = self.f(v)
File "C:Path\MyProject.py", line 34, in <lambda>
new[merged_col] = new[merge_columns].apply(lambda x: '.'.join(x), axis=1)
TypeError: sequence item 1: expected str instance, int found
| [
"It's hard to say what you're trying to achieve without showing a sample of your data. But anyway, to fix the error, you need to cast the values as a string with str when calling pandas.Series.apply :\nnew[merged_col] = new[merge_columns].apply(lambda x: '.'.join(str(x)), axis=1)\n\nOr, you can also use pandas.Series.astype:\nnew[merged_col] = new[merge_columns].astype(str).apply(lambda x: '.'.join(x), axis=1)\n\n"
] | [
0
] | [] | [] | [
"csv",
"join",
"python",
"string"
] | stackoverflow_0074651204_csv_join_python_string.txt |
Q:
How to use a condition inside a while loop?
I have a few lines of code to check whether the entered value exists in the database
and to continue to loop, but inside the while loop it also prints the else statement, which it shouldn't.
cottageNotAvailable = False
mycursor.execute("SELECT * FROM reserved")
occupide = 0
name = input("Enter Name: ")
cottage_row = int(input("Select Cottage Row: "))
while cottage_row < 1 or cottage_row > 2:
cottage_row = int(input("Select Cottage Row: "))
cottage = int(input("Select Cottage: "))
##ROW 1 Cottages
if cottage_row == 1:
subtotals = 1000
##Check Cottage if available
users = mycursor.fetchall()
for allUser in users:
if str(cottage) in str(allUser[3]):
cottageNotAvailable = True
while cottage < 1 or cottage > 5 or cottageNotAvailable:
print("======Cottage is not available=======")
cottage = int(input("Select Cottage: "))
for allUser2 in users:
if(str(cottage) in str(allUser2[3])):
print("1")
else:
print('2')
When the value exists, it prints 1 and 2.
Output:
1
2
When the value does not exist, it just prints 2, which works fine.
2
Inside the while loop, if the value exists, the else shouldn't also trigger.
A:
What are the values of cottage and users? What does allUser2[3] return? Based on the condition, check the value of str(cottage) in str(allUser2[3]).
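A small debugging sketch along those lines (the print calls are assumptions about where to look, purely for inspection, not a fix):
# inside the existing loop over users, before deciding which branch prints
for allUser2 in users:
    print(repr(cottage), repr(allUser2[3]))    # see exactly what is being compared
    print(str(cottage) in str(allUser2[3]))    # membership result for this row
Note that this loop runs once per row in users, so output from both branches can appear across different iterations.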
| How to use a condition inside a while loop? | I have a line of codes to check if the entered value exsist in the database
and will continue to loop but inside the while loop it also print the else statement which it shouldn't
cottageNotAvailable = False
mycursor.execute("SELECT * FROM reserved")
occupide = 0
name = input("Enter Name: ")
cottage_row = int(input("Select Cottage Row: "))
while cottage_row < 1 or cottage_row > 2:
cottage_row = int(input("Select Cottage Row: "))
cottage = int(input("Select Cottage: "))
##ROW 1 Cottages
if cottage_row == 1:
subtotals = 1000
##Check Cottage if available
users = mycursor.fetchall()
for allUser in users:
if str(cottage) in str(allUser[3]):
cottageNotAvailable = True
while cottage < 1 or cottage > 5 or cottageNotAvailable:
print("======Cottage is not available=======")
cottage = int(input("Select Cottage: "))
for allUser2 in users:
if(str(cottage) in str(allUser2[3])):
print("1")
else:
print('2')
When Value exsist it print 1 and 2
Output:
1
2
When Value does not exsist it just print. Which works fine
2
inside the while loop if the value exsist the else shouldn't also trigger
| [
"What is the value of cottage and users? What does allUser2[3] value return ? Based on the condition check the value of str(cottage) in str(allUser2[3])\n"
] | [
0
] | [] | [] | [
"python"
] | stackoverflow_0074651499_python.txt |
Q:
Must have equal len keys and value when setting with an iterable
I have two dataframes as follows:
leader:
0 11
1 8
2 5
3 9
4 8
5 6
[6065 rows x 2 columns]
DatasetLabel:
0 1 .... 7 8 9 10 11 12
0 A J .... 1 2 5 NaN NaN NaN
1 B K .... 3 4 NaN NaN NaN NaN
[4095 rows x 14 columns]
In the DatasetLabel dataframe, columns 0 to 6 contain information about the data, and columns 7 to 12 are indexes that refer to the first column of the leader dataframe.
I want to create a dataset where, instead of the indexes in the DatasetLabel dataframe, I have the value each index points to in the leader dataframe, which is leader.iloc[index, 1].
How can I do it using Python features?
The output should look like:
DatasetLabel:
0 1 .... 7 8 9 10 11 12
0 A J .... 8 5 6 NaN NaN NaN
1 B K .... 9 8 NaN NaN NaN NaN
I have come up with the following, but I get an error:
for column in DatasetLabel.ix[:, 8:13]:
DatasetLabel[DatasetLabel[column].notnull()] = leader.iloc[DatasetLabel[DatasetLabel[column].notnull()][column].values, 1]
Error:
ValueError: Must have equal len keys and value when setting with an iterable
A:
You can use apply to index into leader and exchange values with DatasetLabel, although it's not very pretty.
One issue is that Pandas won't let us index with NaN. Converting to str provides a workaround. But that creates a second issue, namely, column 9 is of type float (because NaN is float), so 5 becomes 5.0. Once it's a string, that's "5.0", which will fail to match the index values in leader. We can remove the .0, and then this solution will work - but it's a bit of a hack.
With DatasetLabel as:
Unnamed:0 0 1 7 8 9 10 11 12
0 0 A J 1 2 5.0 NaN NaN NaN
1 1 B K 3 4 NaN NaN NaN NaN
And leader as:
0 1
0 0 11
1 1 8
2 2 5
3 3 9
4 4 8
5 5 6
Then:
cols = ["7","8","9","10","11","12"]
updated = DatasetLabel[cols].apply(
lambda x: leader.loc[x.astype(str).str.split(".").str[0], 1].values, axis=1)
updated
7 8 9 10 11 12
0 8.0 5.0 6.0 NaN NaN NaN
1 9.0 8.0 NaN NaN NaN NaN
Now we can concat the unmodified columns (which we'll call original) with updated:
original_cols = DatasetLabel.columns[~DatasetLabel.columns.isin(cols)]
original = DatasetLabel[original_cols]
pd.concat([original, updated], axis=1)
Output:
Unnamed:0 0 1 7 8 9 10 11 12
0 0 A J 8.0 5.0 6.0 NaN NaN NaN
1 1 B K 9.0 8.0 NaN NaN NaN NaN
Note: It may be clearer to use concat here, but here's another, cleaner way of merging original and updated, using assign:
DatasetLabel.assign(**updated)
A:
The source code shows that this error occurs when you try to broadcast a list-like object (numpy array, list, set, tuple etc.) to multiple columns or rows but don't specify the index correctly. Of course, list-like objects don't have custom indices like pandas objects, so this usually causes the error.
Solutions to common cases:
You want to assign the same values across multiple columns at once. In other words, you want to change the values of certain columns using a list-like object whose (a) length doesn't match the number of columns or rows and (b) dtype doesn't match the dtype of the columns they are being assigned to.1 An illustration may make it clearer. If you try to make the transformation below:
using a code similar to the one below, this error occurs:
df = pd.DataFrame({'A': [1, 5, 9], 'B': [2, 6, 10], 'C': [3, 7, 11], 'D': [4, 8, 12]})
df.loc[:2, ['C','D']] = [100, 200.2, 300]
Solution: Duplicate the list/array/tuple, transpose it (either using T or zip()) and assign to the relevant rows/columns.2
df.loc[:2, ['C','D']] = np.tile([100, 200.2, 300], (len(['C','D']), 1)).T
# if you don't fancy numpy, use zip() on a list
# df.loc[:2, ['C','D']] = list(zip(*[[100, 200.2, 300]]*len(['C','D'])))
You want to assign the same values to multiple rows at once. If you try to make the following transformation
using a code similar to the following:
df = pd.DataFrame({'A': [1, 5, 9], 'B': [2, 6, 10], 'C': [3, 7, 11], 'D': [4, 8, 12]})
df.loc[[0, 1], ['A', 'B', 'C']] = [100, 200.2]
Solution: To make it work as expected, we must convert the list/array into a Series with the correct index:
df.loc[[0, 1], ['A', 'B', 'C']] = pd.Series([100, 200.2], index=[0, 1])
A common sub-case is if the row indices come from using a boolean mask. N.B. This is the case in the OP. In that case, just use the mask to filter df.index:
msk = df.index < 2
df.loc[msk, ['A', 'B', 'C']] = [100, 200.2] # <--- error
df.loc[msk, ['A', 'B', 'C']] = pd.Series([100, 200.2], index=df.index[msk]) # <--- OK
You want to store the same list in some rows of a column. An illustration of this case is:
Solution: Explicitly construct a Series with the correct indices.
# for the case on the left in the image above
df['D'] = pd.Series([[100, 200.2]]*len(df), index=df.index)
# latter case
df.loc[[1], 'D'] = pd.Series([[100, 200.2]], index=df.index[[1]])
1: Here, we tried to assign a list containing a float to int dtype columns, which contributed to this error being raised. If we tried to assign a list of ints (so that the dtypes match), we'd get a different error: ValueError: shape mismatch: value array of shape (2,) could not be broadcast to indexing result of shape (2,3) which can also be solved by the same method as above.
2: An error related to this one ValueError: Must have equal len keys and value when setting with an ndarray occurs if the object being assigned is a numpy array and there's a shape mismatch. That one is often solved either using np.tile or simply transposing the array.
| Must have equal len keys and value when setting with an iterable | I have two dataframes as follows:
leader:
0 11
1 8
2 5
3 9
4 8
5 6
[6065 rows x 2 columns]
```none
`DatasetLabel`:
```none
0 1 .... 7 8 9 10 11 12
0 A J .... 1 2 5 NaN NaN NaN
1 B K .... 3 4 NaN NaN NaN NaN
[4095 rows x 14 columns]
The Information dataset column names 0 to 6 are DatasetLabel about data and 7 to 12 are indexes that refer to the first column of leader Dataframe.
I want to create dataset where instead of the indexes in DatasetLabel dataframe, I have the value of each index from the leader dataframe, which is leader.iloc[index,1]
How can I do it using python features?
The output should look like:
DatasetLabel:
0 1 .... 7 8 9 10 11 12
0 A J .... 8 5 6 NaN NaN NaN
1 B K .... 9 8 NaN NaN NaN NaN
I have come up with the following, but I get an error:
for column in DatasetLabel.ix[:, 8:13]:
DatasetLabel[DatasetLabel[column].notnull()] = leader.iloc[DatasetLabel[DatasetLabel[column].notnull()][column].values, 1]
Error:
ValueError: Must have equal len keys and value when setting with an iterable
| [
"You can use apply to index into leader and exchange values with DatasetLabel, although it's not very pretty. \nOne issue is that Pandas won't let us index with NaN. Converting to str provides a workaround. But that creates a second issue, namely, column 9 is of type float (because NaN is float), so 5 becomes 5.0. Once it's a string, that's \"5.0\", which will fail to match the index values in leader. We can remove the .0, and then this solution will work - but it's a bit of a hack.\nWith DatasetLabel as:\n Unnamed:0 0 1 7 8 9 10 11 12\n0 0 A J 1 2 5.0 NaN NaN NaN\n1 1 B K 3 4 NaN NaN NaN NaN\n\nAnd leader as:\n 0 1\n0 0 11\n1 1 8\n2 2 5\n3 3 9\n4 4 8\n5 5 6\n\nThen:\ncols = [\"7\",\"8\",\"9\",\"10\",\"11\",\"12\"]\nupdated = DatasetLabel[cols].apply(\n lambda x: leader.loc[x.astype(str).str.split(\".\").str[0], 1].values, axis=1)\n\nupdated\n 7 8 9 10 11 12\n0 8.0 5.0 6.0 NaN NaN NaN\n1 9.0 8.0 NaN NaN NaN NaN\n\nNow we can concat the unmodified columns (which we'll call original) with updated:\noriginal_cols = DatasetLabel.columns[~DatasetLabel.columns.isin(cols)]\noriginal = DatasetLabel[original_cols]\npd.concat([original, updated], axis=1)\n\nOutput:\n Unnamed:0 0 1 7 8 9 10 11 12\n0 0 A J 8.0 5.0 6.0 NaN NaN NaN\n1 1 B K 9.0 8.0 NaN NaN NaN NaN\n\nNote: It may be clearer to use concat here, but here's another, cleaner way of merging original and updated, using assign:\nDatasetLabel.assign(**updated)\n\n",
"The source code shows that this error occurs when you try to broadcast a list-like object (numpy array, list, set, tuple etc.) to multiple columns or rows but didn't specify the index correctly. Of course, list-like objects don't have custom indices like pandas objects, so it usually causes this error.\nSolutions to common cases:\n\nYou want to assign the same values across multiple columns at once. In other words, you want to change the values of certain columns using a list-like object whose (a) length doesn't match the number of columns or rows and (b) dtype doesn't match the dtype of the columns they are being assigned to.1 An illustration may make it clearer. If you try to make the transformation below:\n\nusing a code similar to the one below, this error occurs:\ndf = pd.DataFrame({'A': [1, 5, 9], 'B': [2, 6, 10], 'C': [3, 7, 11], 'D': [4, 8, 12]})\ndf.loc[:2, ['C','D']] = [100, 200.2, 300]\n\nSolution: Duplicate the list/array/tuple, transpose it (either using T or zip()) and assign to the relevant rows/columns.2\ndf.loc[:2, ['C','D']] = np.tile([100, 200.2, 300], (len(['C','D']), 1)).T \n# if you don't fancy numpy, use zip() on a list\n# df.loc[:2, ['C','D']] = list(zip(*[[100, 200.2, 300]]*len(['C','D'])))\n\n\n\n\n\nYou want to assign the same values to multiple rows at once. If you try to make the following transformation\n\nusing a code similar to the following:\ndf = pd.DataFrame({'A': [1, 5, 9], 'B': [2, 6, 10], 'C': [3, 7, 11], 'D': [4, 8, 12]})\ndf.loc[[0, 1], ['A', 'B', 'C']] = [100, 200.2]\n\nSolution: To make it work as expected, we must convert the list/array into a Series with the correct index:\ndf.loc[[0, 1], ['A', 'B', 'C']] = pd.Series([100, 200.2], index=[0, 1])\n\n\nA common sub-case is if the row indices come from using a boolean mask. N.B. This is the case in the OP. In that case, just use the mask to filter df.index:\nmsk = df.index < 2\ndf.loc[msk, ['A', 'B', 'C']] = [100, 200.2] # <--- error\ndf.loc[msk, ['A', 'B', 'C']] = pd.Series([100, 200.2], index=df.index[msk]) # <--- OK\n\n\n\n\n\nYou want to store the same list in some rows of a column. An illustration of this case is:\n\nSolution: Explicitly construct a Series with the correct indices.\n# for the case on the left in the image above\ndf['D'] = pd.Series([[100, 200.2]]*len(df), index=df.index)\n\n# latter case\ndf.loc[[1], 'D'] = pd.Series([[100, 200.2]], index=df.index[[1]])\n\n\n\n\n1: Here, we tried to assign a list containing a float to int dtype columns, which contributed to this error being raised. If we tried to assign a list of ints (so that the dtypes match), we'd get a different error: ValueError: shape mismatch: value array of shape (2,) could not be broadcast to indexing result of shape (2,3) which can also be solved by the same method as above.\n2: An error related to this one ValueError: Must have equal len keys and value when setting with an ndarray occurs if the object being assigned is a numpy array and there's a shape mismatch. That one is often solved either using np.tile or simply transposing the array.\n"
] | [
10,
0
] | [] | [] | [
"dataframe",
"pandas",
"python",
"valueerror"
] | stackoverflow_0048000225_dataframe_pandas_python_valueerror.txt |
Q:
Chrome browser closes immediately after loading from selenium
I am running a basic Python program to open a Chrome window, but as soon as the code executes, the window appears for a second and then closes immediately.
from selenium import webdriver
import time
browser = webdriver.Chrome(executable_path=r"C:\APIR\chromedriver.exe")
browser.maximize_window()
browser.get("https://www.google.com")
Chromedriver version: 91.0.4472.101
Chrome Version: 91.0.4472.164
Any help would be appreciated.
Thank you
A:
It closes because the program ends.
You can:
Wait with time.sleep, for example time.sleep(10) to keep the browser open for 10 seconds after everything is done
Have the user press enter with input()
Or detect when the browser is closed. Many ways to do that.
Example: https://stackoverflow.com/a/52000037/8997916
You could also catch the BrowserUnreachable exception in a loop with a small delay
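A minimal sketch of the first two options, reusing the driver setup from the question:
import time
from selenium import webdriver

browser = webdriver.Chrome(executable_path=r"C:\APIR\chromedriver.exe")
browser.get("https://www.google.com")

time.sleep(10)   # option 1: keep the window open for 10 seconds
# input("Press Enter to close the browser...")   # option 2: wait for the user
browser.quit()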
A:
#For Edge Browser
from selenium import webdriver
from selenium.webdriver.edge.service import Service
from webdriver_manager.microsoft import EdgeChromiumDriverManager
options = webdriver.EdgeOptions()
options.add_experimental_option("detach", True)
driver = webdriver.Edge(options=options, service=Service(EdgeChromiumDriverManager().install()))
driver.maximize_window()
driver.get('https://stackoverflow.com/questions/68543285/chrome-browser-closes-immediately-after-loading-from-selenium')
#For Chrome Browser
from selenium import webdriver
from selenium.webdriver.chrome.service import Service
from webdriver_manager.chrome import ChromeDriverManager
options = webdriver.ChromeOptions()
options.add_experimental_option("detach", True)
driver = webdriver.Chrome(options=options, service=Service(ChromeDriverManager().install()))
driver.maximize_window()
driver.get('https://stackoverflow.com/questions/68543285/chrome-browser-closes-immediately-after-loading-from-selenium')
Try this, this will permanently solve the problem
Thank You
Subhankar Chakraborty
| Chrome browser closes immediately after loading from selenium | I am running a basic python program to open the Chrome Window but as soon as the code executes, the window is there for a sec and then it closes immediately.
from selenium import webdriver
import time
browser = webdriver.Chrome(executable_path=r"C:\APIR\chromedriver.exe")
browser.maximize_window()
browser.get("https://www.google.com")
Chromedriver version: 91.0.4472.101
Chrome Version: 91.0.4472.164
Any help would be appreciated.
Thank you
| [
"It closes because the program ends.\nYou can:\nWait with time.sleep, for example time.sleep(10) to keep the browser open for 10 seconds after everything is done\nHave the user press enter with input()\nOr detect when the browser is closed. Many ways to do that.\nExample: https://stackoverflow.com/a/52000037/8997916\nYou could also catch the BrowserUnreachable exception in a loop with a small delay\n",
"#For Edge Browser\nfrom selenium import webdriver\nfrom selenium.webdriver.edge.service import Service\nfrom webdriver_manager.microsoft import EdgeChromiumDriverManager\n\noptions = webdriver.EdgeOptions()\noptions.add_experimental_option(\"detach\", True)\ndriver = webdriver.Edge(options=options, service=Service(EdgeChromiumDriverManager().install()))\ndriver.maximize_window()\ndriver.get('https://stackoverflow.com/questions/68543285/chrome-browser-closes-immediately-after-loading-from-selenium')\n\n#For Chrome Browser\nfrom selenium import webdriver\nfrom selenium.webdriver.chrome.service import Service\nfrom webdriver_manager.chrome import ChromeDriverManager\n\noptions = webdriver.ChromeOptions()\noptions.add_experimental_option(\"detach\", True)\ndriver = webdriver.Chrome(options=options, service=Service(ChromeDriverManager().install()))\ndriver.maximize_window()\ndriver.get('https://stackoverflow.com/questions/68543285/chrome-browser-closes-immediately-after-loading-from-selenium')\n\nTry this, this will permanently solve the problem\nThank You\nSubhankar Chakraborty\n"
] | [
3,
0
] | [] | [] | [
"chromium",
"google_chrome",
"python",
"selenium",
"selenium_chromedriver"
] | stackoverflow_0068543285_chromium_google_chrome_python_selenium_selenium_chromedriver.txt |
Q:
How to download .xlsm ,docx, png,jpg files using python from a http GET request
I am downloading content from a link using a Python GET request. There are certain content-type headers which I am not able to save. The pdf, csv, jpeg and xlsx file types are getting saved fine when I write the corresponding contents to those file types, but the jpg, png, xlsm and docx contents are not getting saved, even though the content-type headers for these file types are correct and as received from the response.
Please let me know how to save these file types
if(contentType1 == 'application/pdf;charset=UTF-8'):
pod_file = open('downloads/POD_'+str(el)+".pdf",'wb')
pod_file.write(response1.content)
pod_file.close()
elif(contentType1 == 'image/jpeg;charset=UTF-8'):
pod_file = open('downloads/POD_'+str(el)+".jpeg",'wb')
pod_file.write(response1.content)
pod_file.close()
elif(contentType1 == 'image/jpg;charset=UTF-8'):
pod_file = open('downloads/POD_'+str(el)+".jpg",'wb')
pod_file.write(response1.content)
pod_file.close()
elif(contentType1 == 'image/png;charset=UTF-8'):
pod_file = open('downloads/POD_'+str(el)+".png",'wb')
pod_file.write(response1.content)
pod_file.close()
elif(contentType1 == 'image/tiff;charset=UTF-8'):
pod_file = open('downloads/POD_'+str(el)+".tiff",'wb')
pod_file.write(response1.content)
pod_file.close()
elif(contentType1 == 'image/jfif;charset=UTF-8'):
pod_file = open('downloads/POD_'+str(el)+".jfif",'wb')
pod_file.write(response1.content)
pod_file.close()
elif(contentType1 == 'application/vnd.ms-excel;charset=UTF-8'):
pod_file = open('downloads/POD_'+str(el)+".xls",'wb')
pod_file.write(response1.content)
pod_file.close()
elif(contentType1 == 'application/vnd.openxmlformats-officedocument.spreadsheetml.sheet;charset=UTF-8'):
pod_file = open('downloads/POD_'+str(el)+".xlsx",'wb')
pod_file.write(response1.content)
pod_file.close()
elif(contentType1 == 'application/vnd.openxmlformats-officedocument.wordprocessingml.document;charset=UTF-8'):
pod_file = open('downloads/POD_'+str(el)+".docx",'wb')
pod_file.write(response1.content)
pod_file.close()
elif(contentType1 == 'application/vnd.ms-excel.sheet.macroEnabled.12;charset=UTF-8'):
pod_file = open('downloads/POD_'+str(el)+".xlsm",'wb')
pod_file.write(response1.content)
pod_file.close()
elif(contentType1 == 'text/csv;charset=UTF-8'):
pod_file = open('downloads/POD_'+str(el)+".csv",'wb')
pod_file.write(response1.content)
pod_file.close()
A:
For DOC files (you can use a different function for each file type, or reuse the same code)
def save_link(book_link, book_name):
the_book = requests.get(book_link, stream=True)
with open(book_name, 'wb') as f:
for chunk in the_book.iter_content(1024 * 1024 * 2): # 2 MB chunks
f.write(chunk)
save_link("URL","NAME.docx")
For images
import shutil
url = 'http://example.com/img.png'
response = requests.get(url, stream=True)
with open('img.png', 'wb') as out_file:
shutil.copyfileobj(response.raw, out_file)
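A more compact alternative to the long if/elif chain from the question (a sketch that reuses the question's contentType1, el and response1 names; the mapping below only lists a few of the types) is to look the extension up from the Content-Type header:
EXT_BY_TYPE = {
    "application/pdf": ".pdf",
    "image/jpeg": ".jpeg",
    "image/png": ".png",
    "application/vnd.openxmlformats-officedocument.wordprocessingml.document": ".docx",
    "application/vnd.ms-excel.sheet.macroEnabled.12": ".xlsm",
}
mime = contentType1.split(";")[0].strip()  # drop the ";charset=UTF-8" suffix
ext = EXT_BY_TYPE.get(mime)
if ext:
    with open(f"downloads/POD_{el}{ext}", "wb") as pod_file:
        pod_file.write(response1.content)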
| How to download .xlsm ,docx, png,jpg files using python from a http GET request | I am downloading content from a link using python GET request, there are certain content type headers which i am not able to save. The file types of pdf,csv,jpeg,xlsx are getting saved fine when i write the corressponding contents to the file types, but the jpg,png,xlsxm,docx contents not getting saved, though the content type headers for these file types, are correct and as received from the response.
Please let me know how to save these file types
if(contentType1 == 'application/pdf;charset=UTF-8'):
pod_file = open('downloads/POD_'+str(el)+".pdf",'wb')
pod_file.write(response1.content)
pod_file.close()
elif(contentType1 == 'image/jpeg;charset=UTF-8'):
pod_file = open('downloads/POD_'+str(el)+".jpeg",'wb')
pod_file.write(response1.content)
pod_file.close()
elif(contentType1 == 'image/jpg;charset=UTF-8'):
pod_file = open('downloads/POD_'+str(el)+".jpg",'wb')
pod_file.write(response1.content)
pod_file.close()
elif(contentType1 == 'image/png;charset=UTF-8'):
pod_file = open('downloads/POD_'+str(el)+".png",'wb')
pod_file.write(response1.content)
pod_file.close()
elif(contentType1 == 'image/tiff;charset=UTF-8'):
pod_file = open('downloads/POD_'+str(el)+".tiff",'wb')
pod_file.write(response1.content)
pod_file.close()
elif(contentType1 == 'image/jfif;charset=UTF-8'):
pod_file = open('downloads/POD_'+str(el)+".jfif",'wb')
pod_file.write(response1.content)
pod_file.close()
elif(contentType1 == 'application/vnd.ms-excel;charset=UTF-8'):
pod_file = open('downloads/POD_'+str(el)+".xls",'wb')
pod_file.write(response1.content)
pod_file.close()
elif(contentType1 == 'application/vnd.openxmlformats-officedocument.spreadsheetml.sheet;charset=UTF-8'):
pod_file = open('downloads/POD_'+str(el)+".xlsx",'wb')
pod_file.write(response1.content)
pod_file.close()
elif(contentType1 == 'application/vnd.openxmlformats-officedocument.wordprocessingml.document;charset=UTF-8'):
pod_file = open('downloads/POD_'+str(el)+".docx",'wb')
pod_file.write(response1.content)
pod_file.close()
elif(contentType1 == 'application/vnd.ms-excel.sheet.macroEnabled.12;charset=UTF-8'):
pod_file = open('downloads/POD_'+str(el)+".xlsm",'wb')
pod_file.write(response1.content)
pod_file.close()
elif(contentType1 == 'text/csv;charset=UTF-8'):
pod_file = open('downloads/POD_'+str(el)+".csv",'wb')
pod_file.write(response1.content)
pod_file.close()
| [
"FOR DOC File (you can use different thing for download different type of File or USE same code)\ndef save_link(book_link, book_name):\n the_book = requests.get(book_link, stream=True)\n with open(book_name, 'wb') as f:\n for chunk in the_book.iter_content(1024 * 1024 * 2): # 2 MB chunks\n f.write(chunk)\n\nsave_link(\"URL\",\"NAME.docx\")\n\nFOR IMAGE\nimport shutil\n\nurl = 'http://example.com/img.png'\nresponse = requests.get(url, stream=True)\nwith open('img.png', 'wb') as out_file:\n shutil.copyfileobj(response.raw, out_file)\n\n"
] | [
0
] | [] | [] | [
"http_headers",
"python",
"response_headers"
] | stackoverflow_0074651507_http_headers_python_response_headers.txt |
Q:
MongoDB change streams lead to COLLSCAN with getMore
I've been recently using the Change Stream framework in pymongo to update dynamically a collection.
My pipeline is quite simple and is the following :
pipeline = [
{"$match":
{"$and":
[{"updateDescription.updatedFields.updated_data":
{"$exists": True}},
{"operationType": "update"}]
}
}
]
It is used in the following code :
with collection.watch(pipeline) as stream:
for insert_change in stream:
'''DO SOMETHING'''
resume_token = insert_change['_id']
This update happens quite often.
I'm monitoring the COLLSCANS of my db and I've realized that the getMore of the cursor induced by the watch method is performing a collscan each time it is called. Sometime it is a quite small collscan with a hundred of docsExamined but sometimes it inspects way more.
I could not find a way to build an index to eliminate this collscan. I'm thinking that I missed something.
Should I pass parameters to the cursor ? Should I build an index somewhere ?
Thank you in advance for your help !
A:
According to the official documentation, we cannot avoid the COLLSCAN on the oplog collection.
So, in my opinion, in order to reduce the performance impact, watch should be run on the secondary instead of the primary.
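A minimal sketch of that suggestion with pymongo (assuming a replica set; the hosts, database and collection names are placeholders, and pipeline is the one from the question):
from pymongo import MongoClient, ReadPreference

client = MongoClient("mongodb://host1,host2,host3/?replicaSet=rs0")
collection = client.mydb.mycoll.with_options(
    read_preference=ReadPreference.SECONDARY_PREFERRED
)
with collection.watch(pipeline) as stream:
    for insert_change in stream:
        resume_token = insert_change['_id']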
| MongoDB change streams lead to COLLSCAN with getMore | I've been recently using the Change Stream framework in pymongo to update dynamically a collection.
My pipeline is quite simple and is the following :
pipeline = [
{"$match":
{"$and":
[{"updateDescription.updatedFields.updated_data":
{"$exists": True}},
{"operationType": "update"}]
}
}
]
It is used in the following code :
with collection.watch(pipeline) as stream:
for insert_change in stream:
'''DO SOMETHING'''
resume_token = insert_change['_id']
This update happens quite often.
I'm monitoring the COLLSCANs of my db and I've realized that the getMore of the cursor induced by the watch method is performing a collscan each time it is called. Sometimes it is quite a small collscan with a hundred docsExamined, but sometimes it inspects way more.
I could not find a way to build an index to eliminate this collscan. I'm thinking that I missed something.
Should I pass parameters to the cursor ? Should I build an index somewhere ?
Thank you in advance for your help !
| [
"According to the official document, we cannot avoid the COLLSCAN on oplog collection.\nSo, in my opinion, in order to reduce the performance impact, watch should be run on the secondary instead of the primary.\n"
] | [
0
] | [] | [] | [
"changestream",
"mongodb",
"pymongo",
"python"
] | stackoverflow_0060492783_changestream_mongodb_pymongo_python.txt |
Q:
How do I loop this webscrape/tweet script 24/7?
Just started learning Python. I am trying to gather data by webscraping and tweet out info. But everytime I rerun the code. I get
Forbidden: 403 Forbidden
187 - Status is a duplicate.
How do I loop this script without getting this error?
Here's my code :
import requests
import tweepy
from bs4 import BeautifulSoup

def scrape ():
page = requests.get("https://www.reuters.com/business/future-of-money/")
soup = BeautifulSoup(page.content, "html.parser")
home = soup.find(class_="editorial-franchise-layout__main__3cLBl")
posts = home.find_all(class_="text__text__1FZLe text__dark-grey__3Ml43 text__inherit-font__1Y8w3 text__inherit-size__1DZJi link__underline_on_hover__2zGL4")
top_post = posts[0].find("h3", class_="text__text__1FZLe text__dark-grey__3Ml43 text__medium__1kbOh text__heading_3__1kDhc heading__base__2T28j heading__heading_3__3aL54 hero-card__title__33EFM").find_all("span")[0].text.strip()
tweet (top_post)
def tweet (top_post):
api_key = 'deletedforprivacy'
api_key_secret = 'deletedforprivacy'
access_token = 'deletedforprivacy'
access_token_secret = 'deletedforprivacy'
authenticator = tweepy.OAuthHandler(api_key, api_key_secret)
authenticator.set_access_token(access_token, access_token_secret)
api = tweepy.API(authenticator, wait_on_rate_limit=True)
api.update_status(f"{top_post} \nSource : https://www.reuters.com/business/future-of-money/")
print(top_post)
scrape()
A:
The twitter api checks if the content is duplicate and if it is duplicate it returns:
Request returned an error: 403 {"detail":"You are not allowed to create a Tweet with duplicate content.","type":"about:blank","title":"Forbidden","status":403}
I added a simple function to check whether the previous content is the same as the one about to be added.
** Full Code**
from requests_oauthlib import OAuth1Session
import os
import json
import requests
from bs4 import BeautifulSoup
import time
user_id = 000000000000000 # Get userid from https://tweeterid.com/
bearer_token = "<BEARER_TOKEN>"
consumer_key = "<CONSUMER_KEY>"
consumer_secret = "<CONSUMER_SECRET>"
def init():
# Get request token
request_token_url = "https://api.twitter.com/oauth/request_token?oauth_callback=oob&x_auth_access_type=write"
oauth = OAuth1Session(consumer_key, client_secret=consumer_secret)
try:
fetch_response = oauth.fetch_request_token(request_token_url)
except ValueError:
print(
"There may have been an issue with the consumer_key or consumer_secret you entered."
)
resource_owner_key = fetch_response.get("oauth_token")
resource_owner_secret = fetch_response.get("oauth_token_secret")
print("Got OAuth token and secret")
# Get authorization
base_authorization_url = "https://api.twitter.com/oauth/authorize"
authorization_url = oauth.authorization_url(base_authorization_url)
print("Please go here and authorize: %s" % authorization_url)
verifier = input("Paste the PIN here: ")
# Get the access token
access_token_url = "https://api.twitter.com/oauth/access_token"
oauth = OAuth1Session(
consumer_key,
client_secret=consumer_secret,
resource_owner_key=resource_owner_key,
resource_owner_secret=resource_owner_secret,
verifier=verifier,
)
oauth_tokens = oauth.fetch_access_token(access_token_url)
access_token = oauth_tokens["oauth_token"]
access_token_secret = oauth_tokens["oauth_token_secret"]
# Make the request
oauth = OAuth1Session(
consumer_key,
client_secret=consumer_secret,
resource_owner_key=access_token,
resource_owner_secret=access_token_secret,
)
scraper(oauth, bearer_token)
def bearer_oauth(r):
"""
Method required by bearer token authentication.
"""
r.headers["Authorization"] = f"Bearer {bearer_token}"
r.headers["User-Agent"] = "v2UserTweetsPython"
return r
def previous_tweet():
url = "https://api.twitter.com/2/users/{}/tweets".format(user_id)
# Tweet fields are adjustable.
# Options include:
# attachments, author_id, context_annotations,
# conversation_id, created_at, entities, geo, id,
# in_reply_to_user_id, lang, non_public_metrics, organic_metrics,
# possibly_sensitive, promoted_metrics, public_metrics, referenced_tweets,
# source, text, and withheld
params = {"tweet.fields": "text"}
response = requests.request(
"GET", url, auth=bearer_oauth, params=params)
print(response.status_code)
if response.status_code != 200:
raise Exception(
"Request returned an error: {} {}".format(
response.status_code, response.text
)
)
# checking if this is the first post
if response.json() != {'meta': {'result_count': 0}}:
# Since twitter changes html to small url I am splitting at \n to match to new payload
previous_tweet_text = response.json()["data"][0]["text"].split("\n")[0]
previous_payload = {"text": f"{previous_tweet_text}"}
else:
previous_payload = {"text": f""}
return previous_payload
def scraper(oauth, bearer_token):
while True:
page = requests.get(
"https://www.reuters.com/business/future-of-money/")
soup = BeautifulSoup(page.content, "html.parser")
home = soup.find(class_="editorial-franchise-layout__main__3cLBl")
posts = home.find_all(
class_="text__text__1FZLe text__dark-grey__3Ml43 text__inherit-font__1Y8w3 text__inherit-size__1DZJi link__underline_on_hover__2zGL4")
top_post = posts[0].find(
"h3", class_="text__text__1FZLe text__dark-grey__3Ml43 text__medium__1kbOh text__heading_3__1kDhc heading__base__2T28j heading__heading_3__3aL54 hero-card__title__33EFM").find_all("span")[0].text.strip()
# Be sure to add replace the text of the with the text you wish to Tweet. You can also add parameters to post polls, quote Tweets, Tweet with reply settings, and Tweet to Super Followers in addition to other features.
payload = {
"text": f"{top_post}\nSource:https://www.reuters.com/business/future-of-money/"}
current_checker_payload = {"text": payload["text"].split("\n")[0]}
previous_payload = previous_tweet()
if previous_payload != current_checker_payload:
tweet(payload, oauth)
else:
print("Content hasn't changed")
time.sleep(60)
def tweet(payload, oauth):
# Making the request
response = oauth.post(
"https://api.twitter.com/2/tweets",
json=payload,
)
if response.status_code != 201:
raise Exception(
"Request returned an error: {} {}".format(
response.status_code, response.text)
)
print("Response code: {}".format(response.status_code))
# Showing the response as JSON
json_response = response.json()
print(json.dumps(json_response, indent=4, sort_keys=True))
if __name__ == "__main__":
init()
** Output**
Response code: 201
{
"data": {
"id": "1598558336672497664",
"text": "FTX ex-CEO Bankman-Fried claims he was unaware of improper use of customer funds -ABC News\nSource:URL" #couldn't post short url in stackoverflow
}
}
Content hasn't changed
Content hasn't changed
Content hasn't changed
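If you prefer to keep the tweepy code from the question, a minimal sketch of the same idea (assumptions: tweepy 4.x, where a duplicate status raises tweepy.Forbidden, and scrape() is changed to return the headline instead of tweeting it directly):
import time
import tweepy

last_posted = None
while True:
    top_post = scrape()  # assumed to return the current headline
    if top_post and top_post != last_posted:
        try:
            tweet(top_post)  # the question's tweet() function
            last_posted = top_post
        except tweepy.Forbidden:
            pass  # duplicate status, skip it
    time.sleep(600)  # wait 10 minutes between checks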
Hope this helps. Happy Coding :)
| How do I loop this webscrape/tweet script 24/7? | Just started learning Python. I am trying to gather data by webscraping and tweet out info. But everytime I rerun the code. I get
Forbidden: 403 Forbidden
187 - Status is a duplicate.
How do I loop this script without getting this error?
Here's my code :
def scrape ():
page = requests.get("https://www.reuters.com/business/future-of-money/")
soup = BeautifulSoup(page.content, "html.parser")
home = soup.find(class_="editorial-franchise-layout__main__3cLBl")
posts = home.find_all(class_="text__text__1FZLe text__dark-grey__3Ml43 text__inherit-font__1Y8w3 text__inherit-size__1DZJi link__underline_on_hover__2zGL4")
top_post = posts[0].find("h3", class_="text__text__1FZLe text__dark-grey__3Ml43 text__medium__1kbOh text__heading_3__1kDhc heading__base__2T28j heading__heading_3__3aL54 hero-card__title__33EFM").find_all("span")[0].text.strip()
tweet (top_post)
def tweet (top_post):
api_key = 'deletedforprivacy'
api_key_secret = 'deletedforprivacy'
access_token = 'deletedforprivacy'
access_token_secret = 'deletedforprivacy'
authenticator = tweepy.OAuthHandler(api_key, api_key_secret)
authenticator.set_access_token(access_token, access_token_secret)
api = tweepy.API(authenticator, wait_on_rate_limit=True)
api.update_status(f"{top_post} \nSource : https://www.reuters.com/business/future-of-money/")
print(top_post)
scrape()
| [
"The twitter api checks if the content is duplicate and if it is duplicate it returns:\n Request returned an error: 403 {\"detail\":\"You are not allowed to create a Tweet with duplicate content.\",\"type\":\"about:blank\",\"title\":\"Forbidden\",\"status\":403}\n\nI added an simple function to check if the previous content is same as the one about to be added\n** Full Code**\nfrom requests_oauthlib import OAuth1Session\nimport os\nimport json\nimport requests\nfrom bs4 import BeautifulSoup\nimport time\n\nuser_id = 000000000000000 # Get userid from https://tweeterid.com/\nbearer_token = \"<BEARER_TOKEN>\"\nconsumer_key = \"<CONSUMER_KEY>\"\nconsumer_secret = \"<CONSUMER_SECRET>\"\n\n\ndef init():\n\n # Get request token\n request_token_url = \"https://api.twitter.com/oauth/request_token?oauth_callback=oob&x_auth_access_type=write\"\n oauth = OAuth1Session(consumer_key, client_secret=consumer_secret)\n\n try:\n fetch_response = oauth.fetch_request_token(request_token_url)\n except ValueError:\n print(\n \"There may have been an issue with the consumer_key or consumer_secret you entered.\"\n )\n\n resource_owner_key = fetch_response.get(\"oauth_token\")\n resource_owner_secret = fetch_response.get(\"oauth_token_secret\")\n print(\"Got OAuth token and secret\")\n\n # Get authorization\n base_authorization_url = \"https://api.twitter.com/oauth/authorize\"\n authorization_url = oauth.authorization_url(base_authorization_url)\n print(\"Please go here and authorize: %s\" % authorization_url)\n verifier = input(\"Paste the PIN here: \")\n\n # Get the access token\n access_token_url = \"https://api.twitter.com/oauth/access_token\"\n oauth = OAuth1Session(\n consumer_key,\n client_secret=consumer_secret,\n resource_owner_key=resource_owner_key,\n resource_owner_secret=resource_owner_secret,\n verifier=verifier,\n )\n oauth_tokens = oauth.fetch_access_token(access_token_url)\n\n access_token = oauth_tokens[\"oauth_token\"]\n access_token_secret = oauth_tokens[\"oauth_token_secret\"]\n\n # Make the request\n oauth = OAuth1Session(\n consumer_key,\n client_secret=consumer_secret,\n resource_owner_key=access_token,\n resource_owner_secret=access_token_secret,\n )\n scraper(oauth, bearer_token)\n\n\ndef bearer_oauth(r):\n \"\"\"\n Method required by bearer token authentication.\n \"\"\"\n\n r.headers[\"Authorization\"] = f\"Bearer {bearer_token}\"\n r.headers[\"User-Agent\"] = \"v2UserTweetsPython\"\n return r\n\n\ndef previous_tweet():\n url = \"https://api.twitter.com/2/users/{}/tweets\".format(user_id)\n\n # Tweet fields are adjustable.\n # Options include:\n # attachments, author_id, context_annotations,\n # conversation_id, created_at, entities, geo, id,\n # in_reply_to_user_id, lang, non_public_metrics, organic_metrics,\n # possibly_sensitive, promoted_metrics, public_metrics, referenced_tweets,\n # source, text, and withheld\n params = {\"tweet.fields\": \"text\"}\n response = requests.request(\n \"GET\", url, auth=bearer_oauth, params=params)\n print(response.status_code)\n if response.status_code != 200:\n raise Exception(\n \"Request returned an error: {} {}\".format(\n response.status_code, response.text\n )\n )\n # checking if this is the first post\n if response.json() != {'meta': {'result_count': 0}}:\n # Since twitter changes html to small url I am splitting at \\n to match to new payload\n previous_tweet_text = response.json()[\"data\"][0][\"text\"].split(\"\\n\")[0]\n previous_payload = {\"text\": f\"{previous_tweet_text}\"}\n else:\n previous_payload = {\"text\": f\"\"}\n\n return 
previous_payload\n\n\ndef scraper(oauth, bearer_token):\n while True:\n page = requests.get(\n \"https://www.reuters.com/business/future-of-money/\")\n soup = BeautifulSoup(page.content, \"html.parser\")\n home = soup.find(class_=\"editorial-franchise-layout__main__3cLBl\")\n posts = home.find_all(\n class_=\"text__text__1FZLe text__dark-grey__3Ml43 text__inherit-font__1Y8w3 text__inherit-size__1DZJi link__underline_on_hover__2zGL4\")\n top_post = posts[0].find(\n \"h3\", class_=\"text__text__1FZLe text__dark-grey__3Ml43 text__medium__1kbOh text__heading_3__1kDhc heading__base__2T28j heading__heading_3__3aL54 hero-card__title__33EFM\").find_all(\"span\")[0].text.strip()\n # Be sure to add replace the text of the with the text you wish to Tweet. You can also add parameters to post polls, quote Tweets, Tweet with reply settings, and Tweet to Super Followers in addition to other features.\n payload = {\n \"text\": f\"{top_post}\\nSource:https://www.reuters.com/business/future-of-money/\"}\n current_checker_payload = {\"text\": payload[\"text\"].split(\"\\n\")[0]}\n previous_payload = previous_tweet()\n if previous_payload != current_checker_payload:\n tweet(payload, oauth)\n else:\n print(\"Content hasn't changed\")\n time.sleep(60)\n\n\ndef tweet(payload, oauth):\n\n # Making the request\n response = oauth.post(\n \"https://api.twitter.com/2/tweets\",\n json=payload,\n )\n\n if response.status_code != 201:\n raise Exception(\n \"Request returned an error: {} {}\".format(\n response.status_code, response.text)\n )\n\n print(\"Response code: {}\".format(response.status_code))\n\n # Showing the response as JSON\n json_response = response.json()\n print(json.dumps(json_response, indent=4, sort_keys=True))\n\n\nif __name__ == \"__main__\":\n init()\n\n** Output**\nResponse code: 201\n{\n\"data\": {\n \"id\": \"1598558336672497664\",\n \"text\": \"FTX ex-CEO Bankman-Fried claims he was unaware of improper use of customer funds -ABC News\\nSource:URL\" #couldn't post short url in stackoverflow\n }\n}\n\nContent hasn't changed\nContent hasn't changed\nContent hasn't changed\n\n\nHope this helps. Happy Coding :)\n"
] | [
0
] | [] | [] | [
"beautifulsoup",
"python",
"tweepy",
"twitter",
"web_scraping"
] | stackoverflow_0074651027_beautifulsoup_python_tweepy_twitter_web_scraping.txt |
Q:
Check Normal distribution with Kolmogorov test
I am learning statistics using Python, and I have a task to check that data follow a normal distribution with mean=10 and dispersion=5.5.
I've checked the scipy.stats.kstest function, but I don't understand how to interpret the results, or where I should pass the mean and dispersion args.
Thank you for your help
A:
Generate a dataset
import numpy as np
import scipy
import matplotlib.pyplot as plt
# generate data with norm(mean = 0,std = 15)
data = scipy.stats.norm.rvs(loc = 0,scale = 15,size = 1000,random_state = 0)
Perfrom KS-test
# perform KS test on your sample versus norm(10,5.5)
D, p = scipy.stats.kstest(data, 'norm', args= (10, 5.5))
test = 'Reject' if p < 0.05 else 'Not reject'
print(f'D-statistics: {D:.4f},\np-value: {p:.4f}, \ntest-result: {test}')
Out:
'D-statistics:0.5091,p-value:0.0000,test-result:Reject'
Add a plot with distributions and your data
# draw a plot to see the distribution and your data hist
fig, ax = plt.subplots(1, 1)
x = np.linspace(data.min(),data.max(),100)
ax.plot(x, scipy.stats.norm.pdf(x,loc = 10, scale = 5.5), 'r-', color='green', lw=1, alpha=0.6, label='norm(10,5.5) pdf')
ax.hist(data, normed=True, histtype='stepfilled', bins=20, alpha=0.2, label='my data distribution')
ax.legend(loc='best', frameon=False)
plt.title('norm(10,5.5) vs. data')
plt.show()
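One caveat, as an assumption about terminology (the code above treats 5.5 as the standard deviation): if "dispersion = 5.5" means the variance, the scale argument should be its square root:
D, p = scipy.stats.kstest(data, 'norm', args=(10, np.sqrt(5.5)))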
A:
If the error 'Polygon' object has no property 'normed' occurs:
the normed=True argument is deprecated;
change it to density=True.
It will then work.
| Check Normal distribution with Kolmogorov test | I am a learning statistics using python, and I have a task to check that data have Normal Distribution with mean=10 and dispersion=5.5.
I've checked scipy.stats.kstest function, but I don't understand how to interpret the results, and where I should pass mean and dispersion args.
Thank you, for your help
| [
"Generate a dataset\nimport scipy\nimport matplotlib.pyplot as plt\n# generate data with norm(mean = 0,std = 15)\ndata = scipy.stats.norm.rvs(loc = 0,scale = 15,size = 1000,random_state = 0)\n\nPerfrom KS-test\n# perform KS test on your sample versus norm(10,5.5)\nD, p = scipy.stats.kstest(data, 'norm', args= (10, 5.5))\ntest = 'Reject' if p < 0.05 else 'Not reject'\nprint(f'D-statistics: {D:.4f},\\np-value: {p:.4f}, \\ntest-result: {test}')\n\nOut:\n\n'D-statistics:0.5091,p-value:0.0000,test-result:Reject'\n\nAdd a plot with distributions and your data\n# draw a plot to see the distribution and your data hist\n# draw a plot to see the distribution and your data hist\nfig, ax = plt.subplots(1, 1)\nx = np.linspace(data.min(),data.max(),100)\nax.plot(x, scipy.stats.norm.pdf(x,loc = 10, scale = 5.5), 'r-', color='green', lw=1, alpha=0.6, label='norm(10,5.5) pdf')\nax.hist(data, normed=True, histtype='stepfilled', bins=20, alpha=0.2, label='my data distribution')\nax.legend(loc='best', frameon=False)\nplt.title('norm(10,5.5) vs. data')\nplt.show()\n\n\n",
"if error occured,‘Polygon’ object has no property ‘normed’\nit was found,normed=True , this property is deprecated,\nchange to use 'density=True'.\nit will work.\n"
] | [
1,
0
] | [] | [] | [
"numpy",
"pandas",
"python",
"scipy",
"statistics"
] | stackoverflow_0059612155_numpy_pandas_python_scipy_statistics.txt |
Q:
Python LAB - Driving Costs (Functions)
When I run my program my output has the decimal in the wrong place. How would I move the decimal point over and round up? (EX. My output was 6.3198 but should be 63.2) Besides that the rest of my program does not work. Any help would be appreciated. Thank you!
DIRECTIONS
Write a function driving_cost() with input parameters miles_per_gallon, dollars_per_gallon, and miles_driven, that returns the dollar cost to drive those miles. All items are of type float. The function called with arguments (20.0, 3.1599, 50.0) returns 7.89975.
Define that function in a program whose inputs are the car's miles per gallon and the price of gas in dollars per gallon (both float). Output the gas cost for 10 miles, 50 miles, and 400 miles, by calling your driving_cost() function three times.
Output each floating-point value with two digits after the decimal point, which can be achieved as follows:
print(f'{your_value:.2f}')
Ex: If the input is:
20.0
3.1599
the output is:
1.58
7.90
63.20
Your program must define and call a function:
def driving_cost(miles_per_gallon, dollars_per_gallon, miles_driven)
def driving_cost(miles_per_gallon, dollars_per_gallon, miles_driven):
return (miles_driven/miles_per_gallon)*dollars_per_gallon
if __name__ == '__main__':
miles_per_gallon=float(input())
dollars_per_gallon=float(input())
print(driving_cost(400, miles_per_gallon, dollars_per_gallon))
print(driving_cost(50, miles_per_gallon, dollars_per_gallon))
print(driving_cost(10, miles_per_gallon, dollars_per_gallon))
MY OUTPUT
0.157995
1.26396
6.3198
A:
I believe the arguments are being passed in the wrong order: you want the gas cost for 400, 50 and 10 miles, but the calls in main put those mileage values in the wrong parameter position.
Your function:
driving_cost(miles_per_gallon, dollars_per_gallon, miles_driven)
The function in your main write:
driving_cost(miles_per_gallon=400, dollars_per_gallon=miles_per_gallon, miles_driven=dollars_per_gallon)
driving_cost(miles_per_gallon=50, dollars_per_gallon=miles_per_gallon, miles_driven=dollars_per_gallon)
driving_cost(miles_per_gallon=10, dollars_per_gallon=miles_per_gallon, miles_driven=dollars_per_gallon)
I believe it should be
print(driving_cost(miles_per_gallon, dollars_per_gallon, 400))
print(driving_cost(miles_per_gallon, dollars_per_gallon, 50))
print(driving_cost(miles_per_gallon, dollars_per_gallon, 10))
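To also get the two-decimal output the lab asks for, those calls can be wrapped in the f-string format from the prompt (a small addition, not part of the original answer):
print(f'{driving_cost(miles_per_gallon, dollars_per_gallon, 10):.2f}')
print(f'{driving_cost(miles_per_gallon, dollars_per_gallon, 50):.2f}')
print(f'{driving_cost(miles_per_gallon, dollars_per_gallon, 400):.2f}')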
| Python LAB - Driving Costs (Functions) | When I run my program my output has the decimal in the wrong place. How would I move the decimal point over and round up? (EX. My output was 6.3198 but should be 63.2) Besides that the rest of my program does not work. Any help would be appreciated. Thank you!
DIRECTIONS
Write a function driving_cost() with input parameters miles_per_gallon, dollars_per_gallon, and miles_driven, that returns the dollar cost to drive those miles. All items are of type float. The function called with arguments (20.0, 3.1599, 50.0) returns 7.89975.
Define that function in a program whose inputs are the car's miles per gallon and the price of gas in dollars per gallon (both float). Output the gas cost for 10 miles, 50 miles, and 400 miles, by calling your driving_cost() function three times.
Output each floating-point value with two digits after the decimal point, which can be achieved as follows:
print(f'{your_value:.2f}')
Ex: If the input is:
20.0
3.1599
the output is:
1.58
7.90
63.20
Your program must define and call a function:
def driving_cost(miles_per_gallon, dollars_per_gallon, miles_driven)
def driving_cost(miles_per_gallon, dollars_per_gallon, miles_driven):
return (miles_driven/miles_per_gallon)*dollars_per_gallon
if __name__ == '__main__':
miles_per_gallon=float(input())
dollars_per_gallon=float(input())
print(driving_cost(400, miles_per_gallon, dollars_per_gallon))
print(driving_cost(50, miles_per_gallon, dollars_per_gallon))
print(driving_cost(10, miles_per_gallon, dollars_per_gallon))
MY OUTPUT
0.157995
1.26396
6.3198
| [
"I believe you wrote your code wrongly, you want the gallon of gas, for 400 miles, 50 miles and 10 miles, and your function in main having the parameter in the wrong place.\nYour function:\ndriving_cost(miles_per_gallon, dollars_per_gallon, miles_driven)\n\nThe function in your main write:\ndriving_cost(miles_per_gallon=400, dollars_per_gallon=miles_per_gallon, miles_driven=dollars_per_gallon)\ndriving_cost(miles_per_gallon=50, dollars_per_gallon=miles_per_gallon, miles_driven=dollars_per_gallon)\ndriving_cost(miles_per_gallon=10, dollars_per_gallon=miles_per_gallon, miles_driven=dollars_per_gallon)\n\nI believe it should be\nprint(driving_cost(miles_per_gallon, dollars_per_gallon, 400))\nprint(driving_cost(miles_per_gallon, dollars_per_gallon, 50))\nprint(driving_cost(miles_per_gallon, dollars_per_gallon, 10))\n\n"
] | [
0
] | [] | [] | [
"python"
] | stackoverflow_0074651530_python.txt |
Q:
How to download entire directory from azure file share
I am not able to download an entire directory from an Azure file share in Python.
I have tried all the basic approaches available on Google.
A:
I tried in my environment and got below results:
Initially I tried with python,
Unfortunately, the ShareServiceClient class, which interacts with the File Share service at the account level, does not yet support a download operation in the Azure Python SDK.
The ShareClient class, which interacts with a specific file share in the account, does not support downloading a directory or the whole file share either.
But there is one class, ShareFileClient, which supports downloading individual files inside a directory (not the entire directory); you can use this class to download the files from a directory with the Python SDK.
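For illustration, a minimal sketch of that approach (an assumption, not from the original answer: it uses the azure-storage-file-share package, and the connection string, share and directory names are placeholders) that lists one directory and downloads each file with ShareFileClient:
import os
from azure.storage.fileshare import ShareDirectoryClient

conn_str = "<connection-string>"
dir_client = ShareDirectoryClient.from_connection_string(
    conn_str, share_name="myfileshare", directory_path="myFileShareDirectory"
)
os.makedirs("downloads", exist_ok=True)
for item in dir_client.list_directories_and_files():
    if not item["is_directory"]:
        file_client = dir_client.get_file_client(item["name"])
        with open(os.path.join("downloads", item["name"]), "wb") as fh:
            file_client.download_file().readinto(fh)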
You can also check the Azure Portal (select your storage account > File share > directory): there is no option in the Portal to download a directory either, only an option to download a specific file.
As a workaround, if you need to download a directory from a file share, you can use the AzCopy tool to download the directory to your local machine.
I tried to download the directory with the AzCopy command and was able to download it successfully!
Command:
azcopy copy 'https://mystorageaccount.file.core.windows.net/myfileshare/myFileShareDirectory?sv=2018-03-28&ss=bjqt&srs=sco&sp=rjklhjup&se=2019-05-10T04:37:48Z&st=2019-05-09T20:37:48Z&spr=https&sig=/SOVEFfsKDqRry4bk3xxxxxxxx' 'C:\myDirectory' --recursive --preserve-smb-permissions=true --preserve-smb-info=true
Console:
Local environment:
| How to download entire directory from azure file share | Not able to download entire directory from azure file share in python
I have used all basic stuffs available in google
| [
"I tried in my environment and got below results:\nInitially I tried with python,\n\nUnfortunately, ShareServiceClient Class which Interacts with A client to interact with the File Share Service at the account level. does not yet support Download operation in the Azure Python SDK.\nShareClient Class which only interacts with specific file Share in the account does not support Download Directory or file share option Python SDK..\nBut there's one class ShareFileClient Class which supports downloading of individual files inside a directory but not entire directory, you can use this class to download the files from directory with Python SDK.\n\nAlso you check Azure Portal > Select your Storage account > File Share and Directory > There's no option in Portal too to download the directory, but there's an option to download specific file.\nAs workaround, if you need to download directory from file-share, you can use Az copy tool command to download the directory in your local machine.\nI tried to download the Directory with Az-copy command and was able to download to it successfully!\nCommand:\nazcopy copy 'https://mystorageaccount.file.core.windows.net/myfileshare/myFileShareDirectory?sv=2018-03-28&ss=bjqt&srs=sco&sp=rjklhjup&se=2019-05-10T04:37:48Z&st=2019-05-09T20:37:48Z&spr=https&sig=/SOVEFfsKDqRry4bk3xxxxxxxx' 'C:\\myDirectory' --recursive --preserve-smb-permissions=true --preserve-smb-info=true\n\nConsole:\n\nLocal environment:\n\n"
] | [
0
] | [] | [] | [
"azure",
"azure_files",
"azure_storage",
"python"
] | stackoverflow_0074560083_azure_azure_files_azure_storage_python.txt |
Q:
Hatching the definition area of the matplotlib function
I want to hatch the function's definition area, something similar to the example
fig, ax = plt.subplots()
plt.title('$f(x)= x^3 + x^2 + 17 $')
plt.minorticks_on()
plt.grid()
plt.xlabel('x')
plt.ylabel('y')
x = np.linspace(-100, 100)
y = lambda x: x**3 + x**2 + 17
ax.plot(x, y(x))
ax.axhline(color='green', lw=2, alpha=0.7)
canvas = FigureCanvasTkAgg(fig, self)
canvas.draw()
canvas.get_tk_widget().place(x=300, y=0)
I tried filling in the area below the line but it filled in the entire area
A:
Hatches in combination with fill_between should do the trick:
import matplotlib.pyplot as plt
import numpy as np
#your code
fig, ax = plt.subplots()
plt.title('$f(x)= x^3 + x^2 + 17 $')
plt.minorticks_on()
plt.grid()
plt.xlabel('x')
plt.ylabel('y')
x = np.linspace(-100, 100)
y = lambda x: x ** 3 + x ** 2 + 17
ax.axhline(color='green', lw=2, alpha=0.7)
#end of your code
# use two regions to create two hatches, one for y>0, one for y<0
# zorder = 2 to keep the hatches in front of the gridline
# change the amount of '|' to change the hatch density
for mask, color in zip([y(x) > 0, y(x) < 0], ["red", "green"]):
ax.fill_between(x[mask], y(x[mask]), y2=0, hatch='|||', zorder=2,
color="none", edgecolor=color, linewidth=0.0)
ax.plot(x, y(x))
plt.show()
| Hatching the definition area of the matplotlib function | I want to make a hatching of the function definition area, something similar as in the example
fig, ax = plt.subplots()
plt.title('$f(x)= x^3 + x^2 + 17 $')
plt.minorticks_on()
plt.grid()
plt.xlabel('x')
plt.ylabel('y')
x = np.linspace(-100, 100)
y = lambda x: x**3 + x**2 + 17
ax.plot(x, y(x))
ax.axhline(color='green', lw=2, alpha=0.7)
canvas = FigureCanvasTkAgg(fig, self)
canvas.draw()
canvas.get_tk_widget().place(x=300, y=0)
I tried filling in the area below the line but it filled in the entire area
| [
"Hatches in combination with fill_between should to the trick:\n\nimport matplotlib.pyplot as plt\nimport numpy as np\n\n#your code\nfig, ax = plt.subplots()\nplt.title('$f(x)= x^3 + x^2 + 17 $')\nplt.minorticks_on()\nplt.grid()\nplt.xlabel('x')\nplt.ylabel('y')\n\nx = np.linspace(-100, 100)\ny = lambda x: x ** 3 + x ** 2 + 17\nax.axhline(color='green', lw=2, alpha=0.7)\n#end of your code\n\n# use two regions to create two hatches, one for y>0, one for y<0\n# zorder = 2 to keep the hatches in front of the gridline\n# change the amount of '|' to change the hatch density\nfor mask, color in zip([y(x) > 0, y(x) < 0], [\"red\", \"green\"]):\n ax.fill_between(x[mask], y(x[mask]), y2=0, hatch='|||', zorder=2,\n color=\"none\", edgecolor=color, linewidth=0.0)\nax.plot(x, y(x))\nplt.show()\n\n"
] | [
1
] | [] | [] | [
"python"
] | stackoverflow_0074649138_python.txt |
Q:
Sum of value in a dictionary
I’m new here. I wanted to sum all the values inside a dictionary, but my values are all strings, I don’t know how to convert the strings to integers…
I really appreciate if anyone can help with it!
Here’s the dictionary with code:
dic1 = dict()
dic1 = {'2012-03-06':['1','4','5'],'2012-03-12':['7','3','10']}
for i in dic1:
print(i, ',', sum(dic1[i]))
I want the output to be like this:
2012-03-06, 10
2012-03-12, 20
A:
1st Solution: you can do this using map
dic1 = {'2012-03-06':['1','4','5'],'2012-03-12':['7','3','10']}
result_dict = {key: sum(map(int, value)) for key, value in dic1.items()}
print(result_dict)
Output:
{'2012-03-06': 10, '2012-03-12': 20}
2nd Solution: And convert into expected output easily
dic1 = {'2012-03-06':['1','3','5'],'2012-03-12':['7','3','10']}
for key, value in dic1.items():
print(f"{key}, {sum(map(int, value))}")
Output:
2012-03-06, 10
2012-03-12, 20
A:
It can be
dic1 = dict()
dic1 = {'2012-03-06':['1','4','5'],'2012-03-12':['7','3','10']}
for i in dic1:
print(i+','+ str(sum(map(int, dic1[i]))))
In Python 3, map(func, iter) returns an iterator of the results after applying the given function to each item of the given iterable.
| Sum of value in a dictionary | I’m new here. I wanted to sum all the values inside a dictionary, but my values are all strings, I don’t know how to convert the strings to integers…
I really appreciate if anyone can help with it!
Here’s the dictionary with code:
dic1 = dict()
dic1 = {'2012-03-06':['1','4','5'],'2012-03-12':['7','3','10']}
for i in dic1:
print(i,’,’,sum(dic1[i]))
I want the output to be like this:
2012-03-06, 10
2012-03-12, 20
| [
"1st Solution: you can do this using map\ndic1 = {'2012-03-06':['1','4','5'],'2012-03-12':['7','3','10']}\n\nresult_dict = {key: sum(map(int, value)) for key, value in dic1.items()}\nprint(result_dict)\n\nOutput:\n{'2012-03-06': 10, '2012-03-12': 20}\n\n2nd Solution: And convert into expected output easily\ndic1 = {'2012-03-06':['1','3','5'],'2012-03-12':['7','3','10']}\nfor key, value in dic1.items():\n print(f\"{key}, {sum(map(int, value))}\")\n\nOutput:\n2012-03-06, 10\n2012-03-12, 20\n\n",
"It can be\ndic1 = dict() \ndic1 = {'2012-03-06':['1','4','5'],'2012-03-12':['7','3','10']} \nfor i in dic1: \n print(i+','+ str(sum(map(int, dic1[i]))))\n\nIn python3, map(func,iter) returns an iterator(list, tuple etc.) of the results after applying the given function to each item of a given iterable\n"
] | [
2,
-2
] | [] | [] | [
"addition",
"dictionary",
"integer",
"python",
"string"
] | stackoverflow_0074651587_addition_dictionary_integer_python_string.txt |
Q:
Refit python's surprise recommendation system with new data
I've built a recommender system using Python Surprise library.
The next step is to update the algorithm with new data, for example when a new user or a new item is added.
I've dug into the documentation and found nothing for this case. The only possible way seems to be training a new model from scratch from time to time.
It looks like I missed something but I can't figure out what exactly.
Can anybody point me out how I can refit existing algorithm with new data?
A:
Unfortunately Surprise doesn't support partial fit yet.
In this thread there are some workarounds and forks with implemented partial fit.
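Until partial fit lands, the usual workaround is to rebuild the trainset from the enlarged ratings table and fit again. A minimal sketch, assuming the ratings live in a pandas DataFrame named ratings_df with user, item and rating columns:
from surprise import Dataset, Reader, SVD

reader = Reader(rating_scale=(1, 5))
data = Dataset.load_from_df(ratings_df[["user", "item", "rating"]], reader)
algo = SVD()
algo.fit(data.build_full_trainset())  # full retrain on old + new data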
| Refit python's surprise recommedation system with new data | I've built a recommender system using Python Surprise library.
Next step is to update algorithm with new data. For example a new user or a new item was added.
I've digged into documentation and got nothing for this case. The only possible way is to train new model from time to time from scratch.
It looks like I missed something but I can't figure out what exactly.
Can anybody point me out how I can refit existing algorithm with new data?
| [
"Unfortunately Surprise doesn't support partial fit yet.\nIn this thread there are some workarounds and forks with implemented partial fit.\n"
] | [
0
] | [] | [] | [
"python",
"recommendation_engine"
] | stackoverflow_0072439952_python_recommendation_engine.txt |
Q:
Currently only multi-regression, multilabel and survival objectives work with multidimensional target
I used bayes_opt to tune the hyper-parameters of CatBoostRegressor (from catboost) for regression and got the following error:
CatBoostError: catboost/private/libs/target/data_providers.cpp:603: Currently only multi-regression, multilabel and survival objectives work with multidimensional target
Here is the code:
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from catboost import Pool, CatBoostRegressor
from bayes_opt import BayesianOptimization
from bayes_opt.util import Colours
from sklearn.metrics import accuracy_score, r2_score
def get_data():
""" Preparing data ."""
# trainx, testx, trainy, testy= train_test_split(XN, YN, test_size=0.2, random_state= 31)
return trainx, testx, trainy, testy
def CBR_cv(iterations, learning_rate, depth, l2_leaf_reg, min_child_samples, trainx, testx, trainy, testy):
train_pool = Pool(trainx, trainy)
test_pool = Pool(testx)
model = CatBoostRegressor(iterations = iterations, learning_rate = learning_rate, depth = depth,
l2_leaf_reg = l2_leaf_reg, min_child_samples = min_child_samples, loss_function='RMSE' )
# param['learning_rate'] = trial.suggest_discrete_uniform("learning_rate", 0.001, 0.02, 0.001)
# param['depth'] = trial.suggest_int('depth', 9, 15)
# param['l2_leaf_reg'] = trial.suggest_discrete_uniform('l2_leaf_reg', 1.0, 5.5, 0.5)
# param['min_child_samples'] = trial.suggest_categorical('min_child_samples', [1, 4, 8, 16, 32])
# cval = cross_val_score(model, trainx, trainy, scoring='accuracy', cv=4)
# return cval.mean()
## fit the model
model.fit(train_pool)
## evaluate performance
yhat = model.predict(test_pool)
score = r2_score(testy, yhat)
return score
def optimize_XGB(trainx2, testx2, trainy2, testy2):
"""Apply Bayesian Optimization to Random Forest parameters."""
def CBR_crossval(iterations, learning_rate, depth, l2_leaf_reg, min_child_samples):
"""Wrapper of RandomForest cross validation.
Notice how we ensure n_estimators and min_samples_split are casted
to integer before we pass them along. Moreover, to avoid max_features
taking values outside the (0, 1) range, we also ensure it is capped
accordingly.
"""
return CBR_cv(iterations = int(iterations),
learning_rate = max(min(learning_rate, 0.5), 1e-3),
depth = int(depth),
l2_leaf_reg = max(min(l2_leaf_reg, 5.5), 1.0),
min_child_samples = int(min_child_samples),
trainx = trainx2, testx= testx2, trainy = trainy2, testy= testy2)
optimizer = BayesianOptimization(
f=CBR_crossval,
pbounds={
"iterations": (50, 500),
"depth": (2, 25),
"learning_rate": (0.01, 0.5),
"l2_leaf_reg": (1.0, 5.5),
"min_child_samples": (1, 50),
},
random_state=1234,
verbose=2
)
optimizer.maximize(n_iter=1000)
print("Final result:", optimizer.max)
if __name__ == "__main__":
trainx2, testx2, trainy2, testy2 = get_data()
print(Colours.green("--- Optimizing XGB ---"))
optimize_XGB(trainx2, testx2, trainy2, testy2)
A:
If your target is multi-dimensional, then you need to choose another loss function, e.g. MultiRMSE instead of the default RMSE.
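A minimal sketch of that change (the other hyper-parameter values are just placeholders):
model = CatBoostRegressor(
    iterations=500,
    learning_rate=0.05,
    depth=6,
    loss_function="MultiRMSE",  # instead of "RMSE" for a multi-dimensional target
)
model.fit(trainx, trainy)  # trainy shaped (n_samples, n_targets)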
| Currently only multi-regression, multilabel and survival objectives work with multidimensional target | I used bayes_optto tunse hper-parameter of CatBoostRegressor (from catboost) for regression and got the following error:
CatBoostError: catboost/private/libs/target/data_providers.cpp:603: Currently only multi-regression, multilabel and survival objectives work with multidimensional target
Here is the code:
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from catboost import Pool, CatBoostRegressor
from bayes_opt import BayesianOptimization
from bayes_opt.util import Colours
from sklearn.metrics import accuracy_score
def get_data():
""" Preparing data ."""
# trainx, testx, trainy, testy= train_test_split(XN, YN, test_size=0.2, random_state= 31)
return trainx, testx, trainy, testy
def CBR_cv(iterations, learning_rate, depth, l2_leaf_reg, min_child_samples, trainx, testx, trainy, testy):
train_pool = Pool(trainx, trainy)
test_pool = Pool(testx)
model = CatBoostRegressor(iterations = iterations, learning_rate = learning_rate, depth = depth,
l2_leaf_reg = l2_leaf_reg, min_child_samples = min_child_samples, loss_function='RMSE' )
# param['learning_rate'] = trial.suggest_discrete_uniform("learning_rate", 0.001, 0.02, 0.001)
# param['depth'] = trial.suggest_int('depth', 9, 15)
# param['l2_leaf_reg'] = trial.suggest_discrete_uniform('l2_leaf_reg', 1.0, 5.5, 0.5)
# param['min_child_samples'] = trial.suggest_categorical('min_child_samples', [1, 4, 8, 16, 32])
# cval = cross_val_score(model, trainx, trainy, scoring='accuracy', cv=4)
# return cval.mean()
## fit the model
model.fit(train_pool)
## evaluate performance
yhat = model.predict(test_pool)
score = r2_score(testy, yhat)
return score
def optimize_XGB(trainx2, testx2, trainy2, testy2):
"""Apply Bayesian Optimization to Random Forest parameters."""
def CBR_crossval(iterations, learning_rate, depth, l2_leaf_reg, min_child_samples):
"""Wrapper of RandomForest cross validation.
Notice how we ensure n_estimators and min_samples_split are casted
to integer before we pass them along. Moreover, to avoid max_features
taking values outside the (0, 1) range, we also ensure it is capped
accordingly.
"""
return CBR_cv(iterations = int(iterations),
learning_rate = max(min(learning_rate, 0.5), 1e-3),
depth = int(depth),
l2_leaf_reg = max(min(l2_leaf_reg, 5.5), 1.0),
min_child_samples = int(min_child_samples),
trainx = trainx2, testx= testx2, trainy = trainy2, testy= testy2)
optimizer = BayesianOptimization(
f=CBR_crossval,
pbounds={
"iterations": (50, 500),
"depth": (2, 25),
"learning_rate": (0.01, 0.5),
"l2_leaf_reg": (1.0, 5.5),
"min_child_samples": (1, 50),
},
random_state=1234,
verbose=2
)
optimizer.maximize(n_iter=1000)
print("Final result:", optimizer.max)
if __name__ == "__main__":
trainx2, testx2, trainy2, testy2 = get_data()
print(Colours.green("--- Optimizing XGB ---"))
optimize_XGB(trainx2, testx2, trainy2, testy2)
| [
"if your target is multid- then you need to choose another loss function ex. MultiRMSE instead of the default function RMSE\n"
] | [
0
] | [] | [] | [
"catboost",
"catboostregressor",
"python"
] | stackoverflow_0073381894_catboost_catboostregressor_python.txt |
Q:
PySpark: How to create DataFrame containing date range
I am trying to create a PySpark data frame with a single column that contains the date range, but I keep getting this error. I also tried converting it to an int, but I am not sure if you are even supposed to do that.
# Gets an existing SparkSession or, if there is no existing one, creates a new one
spark = SparkSession.builder.appName('pyspark-shellTest2').getOrCreate()
from pyspark.sql.functions import col, to_date, asc
from pyspark.sql.types import TimestampType
import datetime
# Start and end dates for the date range
start_date = "2022-08-20"
end_date = "2022-10-03"
# Create a DataFrame with a single column containing the date range
date_range_df = spark.range(start_date, end_date) \
.withColumn("date", to_date(col("id")))
Error:
A:
you can use the sequence sql function to create an array of dates using the start and end. this array can be exploded to get new rows.
see example below
from pyspark.sql import functions as func

spark.sparkContext.parallelize([(start_date, end_date)]). \
toDF(['start', 'end']). \
withColumn('start', func.to_date('start')). \
withColumn('end', func.to_date('end')). \
withColumn('date_seq', func.expr('sequence(start, end, interval 1 day)')). \
select(func.explode('date_seq').alias('date')). \
show(10)
# +----------+
# | date|
# +----------+
# |2022-08-20|
# |2022-08-21|
# |2022-08-22|
# |2022-08-23|
# |2022-08-24|
# |2022-08-25|
# |2022-08-26|
# |2022-08-27|
# |2022-08-28|
# |2022-08-29|
# +----------+
# only showing top 10 rows
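For context (an assumption about the original error, since the screenshot is not included): spark.range() only accepts integer start/end values, which is why passing date strings fails. An equivalent pure-SQL version of the answer above is:
df = spark.sql(
    f"SELECT explode(sequence(to_date('{start_date}'), to_date('{end_date}'), interval 1 day)) AS date"
)
df.show(5)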
| PySpark: How to create DataFrame containing date range | I am trying to create a PySpark data frame with a single column that contains the date range, but I keep getting this error. I also tried converting it to an int, but I am not sure if you are even supposed to do that.
# Gets an existing SparkSession or, if there is no existing one, creates a new one
spark = SparkSession.builder.appName('pyspark-shellTest2').getOrCreate()
from pyspark.sql.functions import col, to_date, asc
from pyspark.sql.types import TimestampType
import datetime
# Start and end dates for the date range
start_date = "2022-08-20"
end_date = "2022-10-03"
# Create a DataFrame with a single column containing the date range
date_range_df = spark.range(start_date, end_date) \
.withColumn("date", to_date(col("id")))
Error:
| [
"you can use the sequence sql function to create an array of dates using the start and end. this array can be exploded to get new rows.\nsee example below\nspark.sparkContext.parallelize([(start_date, end_date)]). \\\n toDF(['start', 'end']). \\\n withColumn('start', func.to_date('start')). \\\n withColumn('end', func.to_date('end')). \\\n withColumn('date_seq', func.expr('sequence(start, end, interval 1 day)')). \\\n select(func.explode('date_seq').alias('date')). \\\n show(10)\n\n# +----------+\n# | date|\n# +----------+\n# |2022-08-20|\n# |2022-08-21|\n# |2022-08-22|\n# |2022-08-23|\n# |2022-08-24|\n# |2022-08-25|\n# |2022-08-26|\n# |2022-08-27|\n# |2022-08-28|\n# |2022-08-29|\n# +----------+\n# only showing top 10 rows\n\n"
] | [
0
] | [] | [] | [
"apache_spark_sql",
"dataframe",
"date",
"pyspark",
"python"
] | stackoverflow_0074649809_apache_spark_sql_dataframe_date_pyspark_python.txt |
Q:
GraphQL schema to python dataclasses codegen
I have a GraphQL schema defined from server and I'd like to write a nice Python GraphQL client for it. I'm looking for a way to transform my GraphQL schema into python classes with type hints such that I'll be able to see all available queries, mutations, their fields(names & types) and return vals.
I cannot manually write all the Python classes due to schema complexity; I have many filters on each field. See this example from ent on TodoWhereInput to understand how error-prone this would be. I really enjoy using the GraphQL playground with auto-completion, and I want that experience in my Python client.
For example, given this schema as an input:
type Book {
title: String
year: Int
}
type Author {
name: String
books: [Book]
}
I'd like to generate this python code as an output:
from dataclasses import dataclass
@dataclass
class Book:
title: str
year: int
@dataclass
class Author:
name: str
books: list[Book]
same for Inputs in schema.
I already looked at:
codegen which is awesome for typescript! but doesn't have python support :/
gql_schema_codegen is nice, but it generates TypedDicts rather than dataclasses, so I have to change each dict and pass total=False so it won't require all fields by default.
sgqlc code-generator, which doesn't allow type hints; writing queries is still dynamic and error-prone.
A:
I am actually working on a code generator, as part of a library that has the objective of allowing a code-first approach when querying GraphQL API servers from python.
To give you a preview what will be the outcome:
class Book(GQLObject):
title: str
year: int
class Author(GQLObject):
name: str
books: list[Book]
I just published the core of the mapper on github:
https://github.com/dapalex/py-graphql-mapper/
Have a look at the documentation and I hope to publish the code generator soon.
| GraphQL schema to python dataclasses codegen | I have a GraphQL schema defined from server and I'd like to write a nice Python GraphQL client for it. I'm looking for a way to transform my GraphQL schema into python classes with type hints such that I'll be able to see all available queries, mutations, their fields(names & types) and return vals.
I cannot write manually all python classes due to schema complexity, I have many filters on each field. see this example from ent on TodoWhereInput to understand how error prune this will be. I really enjoy using GraphQL playground with auto completion, I want that experience in my python client.
For example, given this schema as an input:
type Book {
title: String
year: Int
}
type Author {
name: String
books: [Book]
}
I'd like to generate this python code as an output:
from dataclasses import dataclass
@dataclass
class Book:
title: str
year: int
@dataclass
class Author:
name: str
books: list[Book]
same for Inputs in schema.
I already looked at:
codegen which is awesome for typescript! but doesn't have python support :/
gql_schema_codegen nice, but generating TypedDict which isn't dataclasses, I have to change each dict and pass total=False so it won't required all fields by default.
sgqlc code-generator which doesn't allow type hints. writing queries is still dynamically and error prune.
| [
"I am actually working on a code generator, as part of a library that has the objective of allowing a code-first approach when querying GraphQL API servers from python.\nTo give you a preview what will be the outcome:\nclass Book(GQLObject):\n title: str\n year: int\n\nclass Author(GQLObject):\n name: str\n books: list[Book]\n\nI just published the core of the mapper on github:\nhttps://github.com/dapalex/py-graphql-mapper/\nHave a look at the documentation and I hope to publish the code generator soon.\n"
] | [
0
] | [] | [] | [
"code_generation",
"graphql",
"python",
"type_hinting"
] | stackoverflow_0074326921_code_generation_graphql_python_type_hinting.txt |
Q:
How can she use the global variable without passing it in the function argument
Currently on day 15 of Angela's 100 days of Python. What I understood from all the exercises and projects is that variables outside a function cannot be used inside the function unless they are passed as arguments or declared with "global" inside the function.
MENU = {
"espresso": {
"ingredients": {
"water": 50,
"coffee": 18,
},
"cost": 1.5,
},
"latte": {
"ingredients": {
"water": 200,
"milk": 150,
"coffee": 24,
},
"cost": 2.5,
},
"cappuccino": {
"ingredients": {
"water": 250,
"milk": 100,
"coffee": 24,
},
"cost": 3.0,
}
}
profit = 0
resources = {
"water": 300,
"milk": 200,
"coffee": 100,
}
def is_resource_sufficient(order_ingredients):
"""Returns True when order can be made, False if ingredients are insufficient."""
for item in order_ingredients:
if order_ingredients[item] > resources[item]:
print(f"Sorry there is not enough {item}.")
return False
return True
def process_coins():
"""Returns the total calculated from coins inserted."""
print("Please insert coins.")
total = int(input("how many quarters?: ")) * 0.25
total += int(input("how many dimes?: ")) * 0.1
total += int(input("how many nickles?: ")) * 0.05
total += int(input("how many pennies?: ")) * 0.01
return total
def is_transaction_successful(money_received, drink_cost):
"""Return True when the payment is accepted, or False if money is insufficient."""
if money_received >= drink_cost:
change = round(money_received - drink_cost, 2)
print(f"Here is ${change} in change.")
global profit
profit += drink_cost
return True
else:
print("Sorry that's not enough money. Money refunded.")
return False
def make_coffee(drink_name, order_ingredients):
"""Deduct the required ingredients from the resources."""
for item in order_ingredients:
resources[item] -= order_ingredients[item]
print(f"Here is your {drink_name} ☕️. Enjoy!")
is_on = True
while is_on:
choice = input("What would you like? (espresso/latte/cappuccino): ")
if choice == "off":
is_on = False
elif choice == "report":
print(f"Water: {resources['water']}ml")
print(f"Milk: {resources['milk']}ml")
print(f"Coffee: {resources['coffee']}g")
print(f"Money: ${profit}")
else:
drink = MENU[choice]
if is_resource_sufficient(drink["ingredients"]):
payment = process_coins()
if is_transaction_successful(payment, drink["cost"]):
make_coffee(choice, drink["ingredients"])
I tried to look at her solution and saw that one of her functions uses the dictionary resources, which is neither declared inside the function nor passed as an argument. I am not very good at English, which is why I am having a hard time searching the internet for what I specifically want to understand. Can someone enlighten me on this topic, please?
NOTE: it is not advised to use global
My code:
(my understanding is that you can never use variables from outside the function unless they are either declared global or passed as arguments)
def use_resources(user_order, machine_menu, machine_resources):
"""Deduct the resources needed for the user's order and returns the current resources of the machine after the
user's order. """
for menu_ingredients_key in machine_menu[user_order]["ingredients"]:
# print(menu_ingredients_key) # REPRESENT KEY water, coffee
# print(menu[order]["ingredients"][menu_ingredients_key]) # REPRESENT VALUES [50,18]
for resources_key in machine_resources:
if resources_key == menu_ingredients_key:
machine_resources[menu_ingredients_key] -= menu[user_order]["ingredients"][menu_ingredients_key]
print(f"Here is your {user_order} ☕. Enjoy! Come again :)")
How can the function use the resources that was declared outside the function and not passed as an argument?
def make_coffee(drink_name, order_ingredients):
"""Deduct the required ingredients from the resources."""
for item in order_ingredients:
resources[item] -= order_ingredients[item]
print(f"Here is your {drink_name} ☕️. Enjoy!")
A:
I would like to add an answer here posted by John in the Udemy Q&A section:
Lists and dictionaries are mutable. That means that you can add and remove elements from the list/dictionary and it still remains the same list/dictionary object. It is not necessary to create a new list/dictionary in this case.
Almost all other Python objects are immutable. That means that once created, they cannot be altered in any way. For example, if a is an integer, then when you do a += 1 an entirely new integer object is created.
Global variables can be read anywhere in the file, including inside functions.
The rule is that functions are not allowed to create new global objects without you giving specific permission using the global keyword.
To illustrate the point, let's see what the memory location of the object is:
my_list = [1, 2, 3]
print(id(my_list), my_list)
my_list += [4] # my_list is the same list object
print(id(my_list), my_list)
print()
my_int = 123
print(id(my_int), my_int)
my_int += 1 # my_int is a new integer object
print(id(my_int), my_int)
See Mutable vs Immutable Objects in Python
A:
Consider a simpler example
resources = {
"water": 300,
"milk": 200,
"coffee": 100,
}
def test(item, val):
resources[item] -= val
test("water", 5)
This program works. test updates resources even though resources isn't a parameter. Python needs to know which variables used in a function are local and which are not. The rule is simple: variables that are assigned in a function are local to the function. The global keyword breaks that rule and lets you assign variables in the global scope.
In this example
def test2():
var1 = "foo"
print(var2)
global var3
var3 = "baz"
var1 is assigned in the function, so python makes it local. var2 is referenced in the function but is not assigned, so python looks into the global scope for its value. var3 is assigned, but its also declared global, so the local variable rule is broken and the assignment will go to the global scope.
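Since the question notes that global is discouraged, it may help to show what a no-global version of the transaction function could look like: have it take the current profit and return the new one. This is only a sketch of the idea, not the course's official solution:
def is_transaction_successful(money_received, drink_cost, profit):
    """Return (success, new_profit) instead of mutating a global."""
    if money_received >= drink_cost:
        change = round(money_received - drink_cost, 2)
        print(f"Here is ${change} in change.")
        return True, profit + drink_cost
    print("Sorry that's not enough money. Money refunded.")
    return False, profit

# at the call site inside the while loop:
# ok, profit = is_transaction_successful(payment, drink["cost"], profit)
# if ok:
#     make_coffee(choice, drink["ingredients"])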
| How can she use the global variable without passing it in the function argument | Currently in day 15 of Angela's 100 days of python. What I understood from all the exercises and project is that variables outside the function cannot be used inside a function unless it is passed as an argument or you input "global" inside the function.
MENU = {
"espresso": {
"ingredients": {
"water": 50,
"coffee": 18,
},
"cost": 1.5,
},
"latte": {
"ingredients": {
"water": 200,
"milk": 150,
"coffee": 24,
},
"cost": 2.5,
},
"cappuccino": {
"ingredients": {
"water": 250,
"milk": 100,
"coffee": 24,
},
"cost": 3.0,
}
}
profit = 0
resources = {
"water": 300,
"milk": 200,
"coffee": 100,
}
def is_resource_sufficient(order_ingredients):
"""Returns True when order can be made, False if ingredients are insufficient."""
for item in order_ingredients:
if order_ingredients[item] > resources[item]:
print(f"Sorry there is not enough {item}.")
return False
return True
def process_coins():
"""Returns the total calculated from coins inserted."""
print("Please insert coins.")
total = int(input("how many quarters?: ")) * 0.25
total += int(input("how many dimes?: ")) * 0.1
total += int(input("how many nickles?: ")) * 0.05
total += int(input("how many pennies?: ")) * 0.01
return total
def is_transaction_successful(money_received, drink_cost):
"""Return True when the payment is accepted, or False if money is insufficient."""
if money_received >= drink_cost:
change = round(money_received - drink_cost, 2)
print(f"Here is ${change} in change.")
global profit
profit += drink_cost
return True
else:
print("Sorry that's not enough money. Money refunded.")
return False
def make_coffee(drink_name, order_ingredients):
"""Deduct the required ingredients from the resources."""
for item in order_ingredients:
resources[item] -= order_ingredients[item]
print(f"Here is your {drink_name} ☕️. Enjoy!")
is_on = True
while is_on:
choice = input("What would you like? (espresso/latte/cappuccino): ")
if choice == "off":
is_on = False
elif choice == "report":
print(f"Water: {resources['water']}ml")
print(f"Milk: {resources['milk']}ml")
print(f"Coffee: {resources['coffee']}g")
print(f"Money: ${profit}")
else:
drink = MENU[choice]
if is_resource_sufficient(drink["ingredients"]):
payment = process_coins()
if is_transaction_successful(payment, drink["cost"]):
make_coffee(choice, drink["ingredients"])
I tried to look at her solution and saw that one of her function is using the dictionary resources that is not declared inside the function nor passed as an argument. I am not very good in english that's why I am having a hard time searching in the internet what I specifically want to understand. Can someone enlighten me with this topic please.
NOTE: it is not advised to use global
My code:
(my understanding is that you can never use variables outside the function if it is not either set to global or passed as an argument)
def use_resources(user_order, machine_menu, machine_resources):
"""Deduct the resources needed for the user's order and returns the current resources of the machine after the
user's order. """
for menu_ingredients_key in machine_menu[user_order]["ingredients"]:
# print(menu_ingredients_key) # REPRESENT KEY water, coffee
# print(menu[order]["ingredients"][menu_ingredients_key]) # REPRESENT VALUES [50,18]
for resources_key in machine_resources:
if resources_key == menu_ingredients_key:
machine_resources[menu_ingredients_key] -= menu[user_order]["ingredients"][menu_ingredients_key]
print(f"Here is your {user_order} ☕. Enjoy! Come again :)")
How can the function use the resources that was declared outside the function and not passed as an argument?
def make_coffee(drink_name, order_ingredients):
"""Deduct the required ingredients from the resources."""
for item in order_ingredients:
resources[item] -= order_ingredients[item]
print(f"Here is your {drink_name} ☕️. Enjoy!")
| [
"would like to add an answer here posted by John in the udemy QnA section:\nLists and dictionaries are mutable. That means that you can add and remove elements from the list/dictionary and it still remains the same list/dictionary object. It is not necessary to create a new list/dictionary in this case.\nAlmost all other Python objects are immutable. That means that once created, they cannot be altered in any way. For example, if a is an integer, then when you do a += 1 an entirely new integer object is created.\nGlobal variables can be read anywhere in the file, including inside functions.\nThe rule is that functions are not allowed to create new global objects without you giving specific permission using the global keyword.\nTo illustrate the point, let's see what the memory location of the object is:\nmy_list = [1, 2, 3]\nprint(id(my_list), my_list)\nmy_list += [4] # my_list is the same list object\nprint(id(my_list), my_list)\n \nprint()\n \nmy_int = 123\nprint(id(my_int), my_int)\nmy_int += 1 # my_int is a new integer object\nprint(id(my_int), my_int)\n\nSee Mutable vs Immutable Objects in Python\n",
"Consider a simpler example\nresources = {\n \"water\": 300,\n \"milk\": 200,\n \"coffee\": 100,\n}\n\ndef test(item, val):\n resources[item] -= val\n\ntest(\"water\", 5)\n\nThis program works. test updates resources even though resources isn't a parameter. Python needs to know which variables used in a function are local and which are not. The rule is simple: variables that are assigned in a function are local to the function. The global keyword breaks that rule and lets you assign variables in the global scope.\nIn this example\ndef test2():\n var1 = \"foo\"\n print(var2)\n global var3\n var3 = \"baz\"\n\nvar1 is assigned in the function, so python makes it local. var2 is referenced in the function but is not assigned, so python looks into the global scope for its value. var3 is assigned, but its also declared global, so the local variable rule is broken and the assignment will go to the global scope.\n"
] | [
1,
0
] | [] | [] | [
"python"
] | stackoverflow_0074648730_python.txt |
Q:
How to update rows in pandas dataframe from another dataframe on condition with different indexes?
I have two sample datasets
test1
ID Label Key
0 K1a89aKkkkkk 23 23_TAMPA
1 Ka18d8Kkkkkk 2 2_MIAMI
2 Kae851Kkkkkk 10 10_WEST PALM BEACH
3 Kf054cKkkkkk 27 27_JACKSONVILLE
4 Ka1129Kkkkkk 2 2_MIAMI
5 Kae8e1Kkkkkk 10 10_WEST PALM BEACH
6 Ka9045Kkkkkk 50 50_ORLANDO
7 K1a95eKkkkkk 51 51_SAINT PETERSBURG
8 Kae931Kkkkkk 19 19_FORT LAUDERDALE
9 Ka1382Kkkkkk 19 19_FORT LAUDERDALE
test2
ID Label Key
9791 Ke53a3Kkkkkk NaN NaN
9792 Ke1c8dKkkkkk NaN NaN
9793 Kf69acKkkkkk NaN NaN
9794 Ke2821Kkkkkk NaN NaN
9795 Ke0a14Kkkkkk NaN NaN
9796 Kf6a4cKkkkkk NaN NaN
9797 Ka83acKkkkkk NaN NaN
9798 Kf4698Kkkkkk NaN NaN
9799 Ke1981Kkkkkk NaN NaN
9800 Ka9a40Kkkkkk NaN NaN
I'm trying to update Label and Key in test2 based on conditions from test1
Ex:
if ID == Ke2821Kkkkkk in test2 Label and Key should be updated with 2, 2_MIAMI
test2.loc[test2["ID"]=="Ke2821Kkkkkk"][["Label", "Key"]] = test1.loc[test1["ID"]=="Ka1129Kkkkkk"][["Label", "Key"]]
test2.loc[test2["ID"]=="Ka83acKkkkkk"][["Label", "Key"]] = test1.loc[test1["ID"]=="Ka9045Kkkkkk"][["Label", "Key"]]
test2.loc[test2["ID"]=="Ka9a40Kkkkkk"][["Label", "Key"]] = test1.loc[test1["ID"]=="Ka1129Kkkkkk"][["Label", "Key"]]
As it has a different index, it's not updating.
How do I update rows in a dataframe from another dataframe on conditions with different indexes?
Desired Output
ID Label Key
9791 Ke53a3Kkkkkk NaN NaN
9792 Ke1c8dKkkkkk NaN NaN
9793 Kf69acKkkkkk NaN NaN
9794 Ke2821Kkkkkk 2 2_MIAMI
9795 Ke0a14Kkkkkk NaN NaN
9796 Kf6a4cKkkkkk NaN NaN
9797 Ka83acKkkkkk 50 50_ORLANDO
9798 Kf4698Kkkkkk NaN NaN
9799 Ke1981Kkkkkk NaN NaN
9800 Ka9a40Kkkkkk 2 2_MIAMI
A:
Convert the values to numpy arrays and, to improve the solution, remove the chained ][ indexing — select the rows and columns in a single .loc call instead:
test2.loc[test2["ID"]=="Ke2821Kkkkkk",["Label", "Key"]] = test1.loc[test1["ID"]=="Ka1129Kkkkkk",["Label", "Key"]].to_numpy()
test2.loc[test2["ID"]=="Ka83acKkkkkk",["Label", "Key"]] = test1.loc[test1["ID"]=="Ka9045Kkkkkk",["Label", "Key"]].to_numpy()
test2.loc[test2["ID"]=="Ka9a40Kkkkkk",["Label", "Key"]] = test1.loc[test1["ID"]=="Ka1129Kkkkkk",["Label", "Key"]].to_numpy()
print (test2)
ID Label Key
9791 Ke53a3Kkkkkk NaN NaN
9792 Ke1c8dKkkkkk NaN NaN
9793 Kf69acKkkkkk NaN NaN
9794 Ke2821Kkkkkk 2.0 2_MIAMI
9795 Ke0a14Kkkkkk NaN NaN
9796 Kf6a4cKkkkkk NaN NaN
9797 Ka83acKkkkkk 50.0 50_ORLANDO
9798 Kf4698Kkkkkk NaN NaN
9799 Ke1981Kkkkkk NaN NaN
9800 Ka9a40Kkkkkk 2.0 2_MIAMI
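If there were many IDs to update, repeating one .loc line per ID would get tedious. Here is a sketch of a more scalable variant — it assumes you can express the pairing as a dict (called id_map below, which is not given in the question) from each test2 ID to the test1 ID it should copy from:
# hypothetical pairing of test2 IDs to the test1 IDs they should copy from
id_map = {"Ke2821Kkkkkk": "Ka1129Kkkkkk",
          "Ka83acKkkkkk": "Ka9045Kkkkkk",
          "Ka9a40Kkkkkk": "Ka1129Kkkkkk"}

lookup = test1.set_index("ID")[["Label", "Key"]]   # test1 rows keyed by ID
source = test2["ID"].map(id_map)                   # which test1 ID each test2 row maps to (NaN if none)
mask = source.notna()
test2.loc[mask, ["Label", "Key"]] = lookup.loc[source[mask]].to_numpy()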
| How to update rows in pandas dataframe from another dataframe on condition with diffrenet indexes? | I have two sample datasets
test1
ID Label Key
0 K1a89aKkkkkk 23 23_TAMPA
1 Ka18d8Kkkkkk 2 2_MIAMI
2 Kae851Kkkkkk 10 10_WEST PALM BEACH
3 Kf054cKkkkkk 27 27_JACKSONVILLE
4 Ka1129Kkkkkk 2 2_MIAMI
5 Kae8e1Kkkkkk 10 10_WEST PALM BEACH
6 Ka9045Kkkkkk 50 50_ORLANDO
7 K1a95eKkkkkk 51 51_SAINT PETERSBURG
8 Kae931Kkkkkk 19 19_FORT LAUDERDALE
9 Ka1382Kkkkkk 19 19_FORT LAUDERDALE
test2
ID Label Key
9791 Ke53a3Kkkkkk NaN NaN
9792 Ke1c8dKkkkkk NaN NaN
9793 Kf69acKkkkkk NaN NaN
9794 Ke2821Kkkkkk NaN NaN
9795 Ke0a14Kkkkkk NaN NaN
9796 Kf6a4cKkkkkk NaN NaN
9797 Ka83acKkkkkk NaN NaN
9798 Kf4698Kkkkkk NaN NaN
9799 Ke1981Kkkkkk NaN NaN
9800 Ka9a40Kkkkkk NaN NaN
I'm trying to update Label and Key in test2 based on conditions from test1
Ex:
if ID == Ke2821Kkkkkk in test2 Label and Key should be updated with 2, 2_MIAMI
test2.loc[test2["ID"]=="Ke2821Kkkkkk"][["Label", "Key"]] = test1.loc[test1["ID"]=="Ka1129Kkkkkk"][["Label", "Key"]]
test2.loc[test2["ID"]=="Ka83acKkkkkk"][["Label", "Key"]] = test1.loc[test1["ID"]=="Ka9045Kkkkkk"][["Label", "Key"]]
test2.loc[test2["ID"]=="Ka9a40Kkkkkk"][["Label", "Key"]] = test1.loc[test1["ID"]=="Ka1129Kkkkkk"][["Label", "Key"]]
As it has a different index it's not updating.
How do I update rows in a dataframe from another dataframe on conditions with different indexes?
Desired Output
ID Label Key
9791 Ke53a3Kkkkkk NaN NaN
9792 Ke1c8dKkkkkk NaN NaN
9793 Kf69acKkkkkk NaN NaN
9794 Ke2821Kkkkkk 2 2_MIAMI
9795 Ke0a14Kkkkkk NaN NaN
9796 Kf6a4cKkkkkk NaN NaN
9797 Ka83acKkkkkk 50 50_ORLANDO
9798 Kf4698Kkkkkk NaN NaN
9799 Ke1981Kkkkkk NaN NaN
9800 Ka9a40Kkkkkk 2 2_MIAMI
| [
"Convert values to numpy arrays, for improve solution remove nested ][:\ntest2.loc[test2[\"ID\"]==\"Ke2821Kkkkkk\",[\"Label\", \"Key\"]] = test1.loc[test1[\"ID\"]==\"Ka1129Kkkkkk\",[\"Label\", \"Key\"]].to_numpy()\ntest2.loc[test2[\"ID\"]==\"Ka83acKkkkkk\",[\"Label\", \"Key\"]] = test1.loc[test1[\"ID\"]==\"Ka9045Kkkkkk\",[\"Label\", \"Key\"]].to_numpy()\ntest2.loc[test2[\"ID\"]==\"Ka9a40Kkkkkk\",[\"Label\", \"Key\"]] = test1.loc[test1[\"ID\"]==\"Ka1129Kkkkkk\",[\"Label\", \"Key\"]].to_numpy()\nprint (test2)\n ID Label Key\n9791 Ke53a3Kkkkkk NaN NaN\n9792 Ke1c8dKkkkkk NaN NaN\n9793 Kf69acKkkkkk NaN NaN\n9794 Ke2821Kkkkkk 2.0 2_MIAMI\n9795 Ke0a14Kkkkkk NaN NaN\n9796 Kf6a4cKkkkkk NaN NaN\n9797 Ka83acKkkkkk 50.0 50_ORLANDO\n9798 Kf4698Kkkkkk NaN NaN\n9799 Ke1981Kkkkkk NaN NaN\n9800 Ka9a40Kkkkkk 2.0 2_MIAMI\n\n"
] | [
1
] | [] | [] | [
"data_manipulation",
"dataframe",
"pandas",
"python"
] | stackoverflow_0074651798_data_manipulation_dataframe_pandas_python.txt |
Q:
Write a Python function that accepts a string and calculate the number of upper case letters and lower case letters. new to programming
I have tried this solution. But I am not receiving any output. Can someone please point out my error.
def num_case(str):
z=0
j=0
for i in str:
if i.isupper():
z=z+1
return z
elif i.islower():
j=j+1
return j
else:
pass
print('upper case:', z)
print('lowercase:', j)
num_case('The quick Brow Fox')
A:
You are returning from your function inside the loop, so as soon as it finds an uppercase or lowercase letter it will immediately return from the function. Just remove the return lines.
A:
You should not put return statements inside the loop. The num_case function should look like this:
def num_case(str):
z = 0
j = 0
for i in str:
if i.isupper():
z = z + 1
elif i.islower():
j = j + 1
return (z, j)
uppercase, lowercase = num_case('The quick Brow Fox')
print("upper case:", uppercase)
print("lowercase:", lowercase)
A:
Don't use builtin names like str.
instead of returning the value inside the loop return or print after the loop.
No need to add else if you are just going to pass
try to use understandable variable names while writing the code.
def num_case(str_val):
upper_count = 0
lower_count = 0
for val in str_val:
if val.isupper():
upper_count += 1
elif val.islower():
lower_count += 1
print('upper case:', upper_count)
print('lowercase:', lower_count)
num_case('The quick Brow Fox')
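As an aside (not required for the exercise), once the loop version works, the same counts can be written with generator expressions; a small sketch:
def num_case(text):
    upper_count = sum(1 for ch in text if ch.isupper())
    lower_count = sum(1 for ch in text if ch.islower())

    print('upper case:', upper_count)
    print('lowercase:', lower_count)


num_case('The quick Brow Fox')  # upper case: 3, lowercase: 12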
| Write a Python function that accepts a string and calculate the number of upper case letters and lower case letters. new to programming | I have tried this solution. But I am not receiving any output. Can someone please point out my error.
def num_case(str):
z=0
j=0
for i in str:
if i.isupper():
z=z+1
return z
elif i.islower():
j=j+1
return j
else:
pass
print('upper case:', z)
print('lowercase:', j)
num_case('The quick Brow Fox')
| [
"You are returning your function inside your loop. So once if finds a uppercase or lowercase it will directly return from the function. Just remove the return lines.\n",
"You should not put return statements inside the loop. The num_case function should look like this:\ndef num_case(str): \n z = 0 \n j = 0 \n for i in str: \n if i.isupper(): \n z = z + 1 \n elif i.islower(): \n j = j + 1 \n \n return (z, j) \n \n \nuppercase, lowercase = num_case('The quick Brow Fox')\n \nprint(\"upper case:\", uppercase) \nprint(\"lowercase:\", lowercase) \n\n",
"\nDon't use builtin names like str.\n\ninstead of returning the value inside the loop return or print after the loop.\n\nNo need to add else if you are just going to pass\n\ntry to use understandable variable names while writing the code.\n\n\ndef num_case(str_val):\n upper_count = 0\n lower_count = 0\n for val in str_val:\n if val.isupper():\n upper_count += 1\n elif val.islower():\n lower_count += 1\n\n print('upper case:', upper_count)\n print('lowercase:', lower_count)\n\n\nnum_case('The quick Brow Fox')\n\n"
] | [
0,
0,
0
] | [] | [] | [
"lowercase",
"python",
"string",
"uppercase"
] | stackoverflow_0074651626_lowercase_python_string_uppercase.txt |
Q:
One Hot Encoder on columns
The data available is as follows:
bread milk butter jam nutella cheese chips
0 bread NaN butter jam nutella NaN NaN
1 NaN NaN butter jam nutella NaN chips
2 NaN milk NaN NaN NaN cheese NaN
3 bread milk butter jam nutella cheese chips
4 bread milk NaN NaN nutella NaN NaN
5 bread milk butter jam NaN cheese chips
6 bread milk NaN NaN nutella NaN NaN
7 bread NaN butter NaN NaN cheese NaN
8 bread NaN butter jam nutella NaN NaN
9 NaN milk butter jam NaN cheese NaN
10 bread NaN NaN jam nutella cheese chips
11 bread milk butter jam nutella NaN NaN
12 bread NaN butter NaN nutella cheese NaN
13 bread NaN butter jam nutella cheese chips
14 bread milk butter jam nutella cheese chips
15 NaN milk butter jam nutella cheese NaN
16 NaN milk NaN jam nutella cheese NaN
17 bread milk butter jam nutella cheese chips
18 bread NaN butter jam nutella cheese NaN
19 bread milk butter NaN nutella cheese NaN
20 NaN milk NaN NaN NaN NaN chips
I want to one-hot encode each column to produce something like the following for all columns (the entire dataset):
bread  milk  butter  jam  nutella  cheese  chips
1      0     1       1    1        0       0
0      0     1       1    1        0       1
Can someone please help me with the code?
So I tried to use the following code:
pd.get_dummies(book_data, columns = ['bread', 'milk','butter','jam', 'nutella','cheese','chips'])
I obtained the following error:
KeyError: "['bread', 'cheese'] not in index"
A:
You can use a trick with pandas.Series.name to replace the column name with 1, then fillna(0).
First make sure to clean up the column names with:
book_data.columns= book_data.columns.str.strip()
And why not also clean the values of each row:
book_data= book_data.replace("\s+", "", regex=True)
Then try this :
out= book_data.apply(lambda x: x.replace(x.name, 1), axis=0).fillna(0).astype(int)
# Output :
print(out)
bread milk butter jam nutella cheese chips
0 1 0 1 1 1 0 0
1 0 0 1 1 1 0 1
2 0 1 0 0 0 1 0
3 1 1 1 1 1 1 1
4 1 1 0 0 1 0 0
5 1 1 1 1 0 1 1
6 1 1 0 0 1 0 0
7 1 0 1 0 0 1 0
8 1 0 1 1 1 0 0
9 0 1 1 1 0 1 0
10 1 0 0 1 1 1 1
11 1 1 1 1 1 0 0
12 1 0 1 0 1 1 0
13 1 0 1 1 1 1 1
14 1 1 1 1 1 1 1
15 0 1 1 1 1 1 0
16 0 1 0 1 1 1 0
17 1 1 1 1 1 1 1
18 1 0 1 1 1 1 0
19 1 1 1 0 1 1 0
20 0 1 0 0 0 0 1
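Since, in the sample shown, each column only ever holds its own product name or NaN, an even shorter sketch that gives the same 0/1 table is to test for non-missing values directly:
out = book_data.notna().astype(int)   # 1 where a product name is present, 0 where NaN
This sidesteps the string comparison entirely, but it relies on the assumption that a cell never contains a different product's name.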
| One Hot Encoder on columns | The data available is as follows:
bread milk butter jam nutella cheese chips
0 bread NaN butter jam nutella NaN NaN
1 NaN NaN butter jam nutella NaN chips
2 NaN milk NaN NaN NaN cheese NaN
3 bread milk butter jam nutella cheese chips
4 bread milk NaN NaN nutella NaN NaN
5 bread milk butter jam NaN cheese chips
6 bread milk NaN NaN nutella NaN NaN
7 bread NaN butter NaN NaN cheese NaN
8 bread NaN butter jam nutella NaN NaN
9 NaN milk butter jam NaN cheese NaN
10 bread NaN NaN jam nutella cheese chips
11 bread milk butter jam nutella NaN NaN
12 bread NaN butter NaN nutella cheese NaN
13 bread NaN butter jam nutella cheese chips
14 bread milk butter jam nutella cheese chips
15 NaN milk butter jam nutella cheese NaN
16 NaN milk NaN jam nutella cheese NaN
17 bread milk butter jam nutella cheese chips
18 bread NaN butter jam nutella cheese NaN
19 bread milk butter NaN nutella cheese NaN
20 NaN milk NaN NaN NaN NaN chips
I want to one hot encode each column to produce something as follows for all column, the entire dataset:
bread  milk  butter  jam  nutella  cheese  chips
1      0     1       1    1        0       0
0      0     1       1    1        0       1
Can someone please help me with the code?
So I tried to use the following code:
pd.get_dummies(book_data, columns = ['bread', 'milk','butter','jam', 'nutella','cheese','chips'])
I obtained the following error:
KeyError: "['bread', 'cheese'] not in index"
| [
"You can use a trick with pandas.Series.name to replace the column name with 1, then fillna(0).\nFirst make sure to clean up the column names with:\nbook_data.columns= book_data.columns.str.strip()\n\nAnd why not also the values of each row :\nbook_data= book_data.replace(\"\\s+\", \"\", regex=True)\n\nThen try this :\nout= book_data.apply(lambda x: x.replace(x.name, 1), axis=0).fillna(0).astype(int)\n\n# Output :\nprint(out)\n\n bread milk butter jam nutella cheese chips\n0 1 0 1 1 1 0 0\n1 0 0 1 1 1 0 1\n2 0 1 0 0 0 1 0\n3 1 1 1 1 1 1 1\n4 1 1 0 0 1 0 0\n5 1 1 1 1 0 1 1\n6 1 1 0 0 1 0 0\n7 1 0 1 0 0 1 0\n8 1 0 1 1 1 0 0\n9 0 1 1 1 0 1 0\n10 1 0 0 1 1 1 1\n11 1 1 1 1 1 0 0\n12 1 0 1 0 1 1 0\n13 1 0 1 1 1 1 1\n14 1 1 1 1 1 1 1\n15 0 1 1 1 1 1 0\n16 0 1 0 1 1 1 0\n17 1 1 1 1 1 1 1\n18 1 0 1 1 1 1 0\n19 1 1 1 0 1 1 0\n20 0 1 0 0 0 0 1\n\n"
] | [
0
] | [] | [] | [
"one_hot_encoding",
"python"
] | stackoverflow_0074651687_one_hot_encoding_python.txt |
Q:
Reshaping data in pandas by converting R code
Using the below R code, I could appropriately reshape my data from wide to long format. I wonder how I can replicate the below R code in pandas!
total_RACE_Reshape<-total_RACE %>% pivot_longer(cols=c('non_Hispanic_Black_65_percent', 'non_Hispanic_White_65_percent'),
names_to='Race',
values_to='Percent')
A:
The equivalent pandas method is melt(). Selecting columns, renaming columns etc. are very similar.
total_RACE_Reshape = total_RACE.melt(
    value_vars=['non_Hispanic_Black_65_percent', 'non_Hispanic_White_65_percent'],
    var_name='Race',
    value_name='Percent'
)
(melt's var_name plays the role of pivot_longer's names_to, and value_name the role of values_to.)
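One more point, in case total_RACE also carries identifier columns that pivot_longer would have left untouched (for example a county or year column — the name below is hypothetical): list them in id_vars so melt keeps them on every row:
total_RACE_Reshape = total_RACE.melt(
    id_vars=['county'],   # hypothetical identifier column(s) to keep
    value_vars=['non_Hispanic_Black_65_percent', 'non_Hispanic_White_65_percent'],
    var_name='Race',
    value_name='Percent'
)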
| Reshaping data in pandas by converting R code | Using the below R code, I could appropriately reshape my data from wide to long format. I wonder how I can replicate the below R code in pandas!
total_RACE_Reshape<-total_RACE %>% pivot_longer(cols=c('non_Hispanic_Black_65_percent', 'non_Hispanic_White_65_percent'),
names_to='Race',
values_to='Percent')
| [
"The equivalent pandas method is melt(). Selecting columns, renaming columns etc. are very similar.\ntotal_RACE_Reshape = total_RACE.melt(\n value_vars=['non_Hispanic_Black_65_percent', 'non_Hispanic_White_65_percent'], \n value_name='Race', \n var_name='Percent'\n)\n\n"
] | [
0
] | [] | [] | [
"pandas",
"python",
"r"
] | stackoverflow_0074650141_pandas_python_r.txt |
Q:
How can I make a recurring async task (I don't control where asyncio.run() is called)
I'm using a library that itself makes the call to asyncio.run(internal_function) so I can't control that at all. I do however have access to the event loop, it's something that I pass into this library.
Given that, is there some way I can set up an recurring async event that will execute every X seconds while the main library is running.
This doesn't exactly work, but maybe it's close?
import asyncio
from third_party import run
loop = asyncio.new_event_loop()
async def periodic():
while True:
print("doing a thing...")
await asyncio.sleep(30)
loop.create_task(periodic())
run(loop) # internally this will call asyncio.run() using the given loop
The problem here of course is that the task I've created is never awaited. But I can't just await it, because that would block.
Edit: Here's a working example of what I'm facing. When you run this code you will only ever see "third party code executing" and never see "doing my stuff...".
import asyncio
# I don't know how the loop argument is used
# by the third party's run() function,
def third_party_run(loop):
async def runner():
while True:
print("third party code executing")
await asyncio.sleep(5)
# but I do know that this third party eventually runs code
# that looks **exactly** like this.
try:
asyncio.run(runner())
except KeyboardInterrupt:
return
loop = asyncio.new_event_loop()
async def periodic():
while True:
print("doing my stuff...")
await asyncio.sleep(1)
loop.create_task(periodic())
third_party_run(loop)
If you run the above code you get:
third party code executing
third party code executing
third party code executing
^CTask was destroyed but it is pending!
task: <Task pending name='Task-1' coro=<periodic() running at example.py:22>>
/usr/local/Cellar/[email protected]/3.10.8/Frameworks/Python.framework/Versions/3.10/lib/python3.10/asyncio/base_events.py:674: RuntimeWarning: coroutine 'periodic' was never awaited
A:
You don't need to await on a created task.
It will run in the background as long as the event loop is active and is not stuck in a CPU bound operation.
According to your comment, you don't have an access to the event loop. In this case you don't have many options other than running in a different thread (which will have its own loop), or changing the loop creation policy in order to get the event loop, which is a very bad idea in most cases.
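For completeness, a minimal sketch of the separate-thread option mentioned above: run the periodic coroutine with its own event loop on a daemon thread, so it is independent of whatever asyncio.run() the library performs in the main thread (this assumes the periodic work doesn't need to share objects with the library's loop):
import asyncio
import threading

async def periodic():
    while True:
        print("doing my stuff...")
        await asyncio.sleep(1)

# asyncio.run() inside the thread creates and owns a private event loop
threading.Thread(target=asyncio.run, args=(periodic(),), daemon=True).start()

third_party_run(loop)  # the library's own asyncio.run() is unaffected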
A:
I found a way to make your test program run. However, it's a hack. It could fail, depending on the internal design of your third party library. From the information you provided, the library has been structured to be a black box. You can't interact with the event loop or schedule a callback. It seems like there might be a very good reason for this.
If I were you I would try to contact the library designer and let him know what your problem is. Perhaps there is a better solution. If this is a commercial project, I would make 100% certain that the team understands the issue, before attempting to use my below solution or anything like it.
The script below overrides one method (new_event_loop) in the DefaultEventLoopPolicy. When this method is called, I create a task in this loop to execute your periodic function. I don't know how often, or for what purpose, the library will call this function. Also, if the library internally overrides the EventLoopPolicy then this solution will not work. In both of these cases it may lead to unforeseeable consequences.
OK, enough disclaimers.
The only significant change to your test script was to replace the infinite loop in runner with a one that times out. This allowed me to verify that the program shuts down cleanly.
import asyncio
# I don't know how the loop argument is used
# by the third party's run() function,
def third_party_run():
async def runner():
for _ in range(4):
print("third party code executing")
await asyncio.sleep(5)
# but I do know that this third party eventually runs code
# that looks **exactly** like this.
try:
asyncio.run(runner())
except KeyboardInterrupt:
return
async def periodic():
while True:
print("doing my stuff...")
await asyncio.sleep(1)
class EventLoopPolicyHack(asyncio.DefaultEventLoopPolicy):
def __init__(self):
self.__running = None
super().__init__()
def new_event_loop(self):
# Override to create our periodic task in the new loop
# Get a loop from the superclass.
# This method must return that loop.
print("New event loop")
loop = super().new_event_loop()
if self.__running is not None:
self.__running.cancel() # I have no way to test this idea
self.__running = loop.create_task(periodic())
return loop
asyncio.set_event_loop_policy(EventLoopPolicyHack())
third_party_run()
| How can I make a recurring async task (I don't control where asyncio.run() is called) | I'm using a library that itself makes the call to asyncio.run(internal_function) so I can't control that at all. I do however have access to the event loop, it's something that I pass into this library.
Given that, is there some way I can set up an recurring async event that will execute every X seconds while the main library is running.
This doesn't exactly work, but maybe it's close?
import asyncio
from third_party import run
loop = asyncio.new_event_loop()
async def periodic():
while True:
print("doing a thing...")
await asyncio.sleep(30)
loop.create_task(periodic())
run(loop) # internally this will call asyncio.run() using the given loop
The problem here of course is that the task I've created is never awaited. But I can't just await it, because that would block.
Edit: Here's a working example of what I'm facing. When you run this code you will only ever see "third party code executing" and never see "doing my stuff...".
import asyncio
# I don't know how the loop argument is used
# by the third party's run() function,
def third_party_run(loop):
async def runner():
while True:
print("third party code executing")
await asyncio.sleep(5)
# but I do know that this third party eventually runs code
# that looks **exactly** like this.
try:
asyncio.run(runner())
except KeyboardInterrupt:
return
loop = asyncio.new_event_loop()
async def periodic():
while True:
print("doing my stuff...")
await asyncio.sleep(1)
loop.create_task(periodic())
third_party_run(loop)
If you run the above code you get:
third party code executing
third party code executing
third party code executing
^CTask was destroyed but it is pending!
task: <Task pending name='Task-1' coro=<periodic() running at example.py:22>>
/usr/local/Cellar/[email protected]/3.10.8/Frameworks/Python.framework/Versions/3.10/lib/python3.10/asyncio/base_events.py:674: RuntimeWarning: coroutine 'periodic' was never awaited
| [
"You don't need to await on a created task.\nIt will run in the background as long as the event loop is active and is not stuck in a CPU bound operation.\nAccording to your comment, you don't have an access to the event loop. In this case you don't have many options other than running in a different thread (which will have its own loop), or changing the loop creation policy in order to get the event loop, which is a very bad idea in most cases.\n",
"I found a way to make your test program run. However, it's a hack. It could fail, depending on the internal design of your third party library. From the information you provided, the library has been structured to be a black box. You can't interact with the event loop or schedule a callback. It seems like there might be a very good reason for this.\nIf I were you I would try to contact the library designer and let him know what your problem is. Perhaps there is a better solution. If this is a commercial project, I would make 100% certain that the team understands the issue, before attempting to use my below solution or anything like it.\nThe script below overrides one method (new_event_loop) in the DefaultEventLoopPolicy. When this method is called, I create a task in this loop to execute your periodic function. I don't know how often, or for what purpose, the library will call this function. Also, if the library internally overrides the EventLoopPolicy then this solution will not work. In both of these cases it may lead to unforeseeable consequences.\nOK, enough disclaimers.\nThe only significant change to your test script was to replace the infinite loop in runner with a one that times out. This allowed me to verify that the program shuts down cleanly.\nimport asyncio\n\n# I don't know how the loop argument is used\n# by the third party's run() function,\ndef third_party_run():\n async def runner():\n for _ in range(4):\n print(\"third party code executing\")\n await asyncio.sleep(5)\n\n # but I do know that this third party eventually runs code\n # that looks **exactly** like this.\n try:\n asyncio.run(runner())\n except KeyboardInterrupt:\n return\n\nasync def periodic():\n while True:\n print(\"doing my stuff...\")\n await asyncio.sleep(1)\n\nclass EventLoopPolicyHack(asyncio.DefaultEventLoopPolicy):\n def __init__(self):\n self.__running = None\n super().__init__()\n \n def new_event_loop(self):\n # Override to create our periodic task in the new loop\n # Get a loop from the superclass.\n # This method must return that loop.\n print(\"New event loop\")\n loop = super().new_event_loop()\n if self.__running is not None:\n self.__running.cancel() # I have no way to test this idea\n self.__running = loop.create_task(periodic())\n return loop\n \nasyncio.set_event_loop_policy(EventLoopPolicyHack())\n\nthird_party_run()\n\n"
] | [
1,
1
] | [] | [] | [
"python",
"python_asyncio"
] | stackoverflow_0074649296_python_python_asyncio.txt |
Q:
How to check if a number is a np.float64 or np.float32 or np.float16?
Other than using a set of or statements
isinstance( x, np.float64 ) or isinstance( x, np.float32 ) or isinstance( np.float16 )
Is there a cleaner way to check if a variable is a floating type?
A:
You can use np.floating:
In [11]: isinstance(np.float16(1), np.floating)
Out[11]: True
In [12]: isinstance(np.float32(1), np.floating)
Out[12]: True
In [13]: isinstance(np.float64(1), np.floating)
Out[13]: True
Note: non-numpy types return False:
In [14]: isinstance(1, np.floating)
Out[14]: False
In [15]: isinstance(1.0, np.floating)
Out[15]: False
to include more types, e.g. python floats, you can use a tuple in isinstance:
In [16]: isinstance(1.0, (np.floating, float))
Out[16]: True
A:
To check the numbers in a numpy array, the dtype provides a 'character code' for the general kind of data.
x = np.array([3.6, 0.3])
if x.dtype.kind == 'f':
print('x is floating point')
See other kinds of data here in the manual
EDITED---------------
Be careful when using isinstance and is operator to determine type of numbers.
import numpy as np
a = np.array([1.2, 1.3], dtype=np.float32)
print(isinstance(a.dtype, np.float32)) # False
print(isinstance(type(a[0]), np.float32)) # False
print(a.dtype is np.float32) # False
print(type(a[0]) is np.dtype(np.float32)) # False
print(isinstance(a[0], np.float32)) # True
print(type(a[0]) is np.float32) # True
print(a.dtype == np.float32) # True
A:
The best way to check if a NumPy array is in floating precision, whether 16, 32 or 64, is as follows:
import numpy as np
a = np.random.rand(3).astype(np.float32)
print(issubclass(a.dtype.type,np.floating))
a = np.random.rand(3).astype(np.float64)
print(issubclass(a.dtype.type,np.floating))
a = np.random.rand(3).astype(np.float16)
print(issubclass(a.dtype.type,np.floating))
In this case all will be True.
The common solution, however, can give wrong outputs, as shown below:
import numpy as np
a = np.random.rand(3).astype(np.float32)
print(isinstance(a,np.floating))
a = np.random.rand(3).astype(np.float64)
print(isinstance(a,np.floating))
a = np.random.rand(3).astype(np.float16)
print(isinstance(a,np.floating))
In this case all will be False
The workaround for the above, though, is
import numpy as np
a = np.random.rand(3).astype(np.float32)
print(isinstance(a[0],np.floating))
a = np.random.rand(3).astype(np.float64)
print(isinstance(a[0],np.floating))
a = np.random.rand(3).astype(np.float16)
print(isinstance(a[0],np.floating))
Now all will be True
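A related option worth mentioning: when you want to test the array's dtype rather than an individual element, numpy's np.issubdtype avoids indexing into the array at all:
import numpy as np

a = np.random.rand(3).astype(np.float16)
print(np.issubdtype(a.dtype, np.floating))   # True for float16/32/64 arrays
print(np.issubdtype(np.int64, np.floating))  # False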
| How to check if a number is a np.float64 or np.float32 or np.float16? | Other than using a set of or statements
isinstance( x, np.float64 ) or isinstance( x, np.float32 ) or isinstance( np.float16 )
Is there a cleaner way to check of a variable is a floating type?
| [
"You can use np.floating:\nIn [11]: isinstance(np.float16(1), np.floating)\nOut[11]: True\n\nIn [12]: isinstance(np.float32(1), np.floating)\nOut[12]: True\n\nIn [13]: isinstance(np.float64(1), np.floating)\nOut[13]: True\n\nNote: non-numpy types return False:\nIn [14]: isinstance(1, np.floating)\nOut[14]: False\n\nIn [15]: isinstance(1.0, np.floating)\nOut[15]: False\n\nto include more types, e.g. python floats, you can use a tuple in isinstance:\nIn [16]: isinstance(1.0, (np.floating, float))\nOut[16]: True\n\n",
"To check numbers in numpy array, it provides 'character code' for the general kind of data.\nx = np.array([3.6, 0.3])\nif x.dtype.kind == 'f':\n print('x is floating point')\n\nSee other kinds of data here in the manual\nEDITED---------------\nBe careful when using isinstance and is operator to determine type of numbers.\nimport numpy as np\na = np.array([1.2, 1.3], dtype=np.float32)\n\nprint(isinstance(a.dtype, np.float32)) # False\nprint(isinstance(type(a[0]), np.float32)) # False\nprint(a.dtype is np.float32) # False\nprint(type(a[0]) is np.dtype(np.float32)) # False\n\nprint(isinstance(a[0], np.float32)) # True\nprint(type(a[0]) is np.float32) # True\nprint(a.dtype == np.float32) # True\n\n",
"The best way to a check if a NumPy array is in floating precision whether 16,32 or 64 is as follows:\nimport numpy as np\na = np.random.rand(3).astype(np.float32)\nprint(issubclass(a.dtype.type,np.floating))\n\na = np.random.rand(3).astype(np.float64)\nprint(issubclass(a.dtype.type,np.floating))\n\na = np.random.rand(3).astype(np.float16)\nprint(issubclass(a.dtype.type,np.floating))\n\nIn this case all will be True.\nThe common solution however can be give wrong outputs as shown below,\nimport numpy as np\na = np.random.rand(3).astype(np.float32)\nprint(isinstance(a,np.floating))\n\na = np.random.rand(3).astype(np.float64)\nprint(isinstance(a,np.floating))\n\na = np.random.rand(3).astype(np.float16)\nprint(isinstance(a,np.floating))\n\nIn this case all will be False\nThe workaround for above though is\nimport numpy as np\na = np.random.rand(3).astype(np.float32)\nprint(isinstance(a[0],np.floating))\n\na = np.random.rand(3).astype(np.float64)\nprint(isinstance(a[0],np.floating))\n\na = np.random.rand(3).astype(np.float16)\nprint(isinstance(a[0],np.floating))\n\nNow all will be True\n"
] | [
53,
3,
0
] | [] | [] | [
"numpy",
"python"
] | stackoverflow_0028292542_numpy_python.txt |
Q:
Can I increase the decrease value from a for loop in python?
How do I increase the amount by which the value decreases on every iteration of a for loop?
Example:
the value should go 7 -> 6 -> 4 -> 1 (decreasing by 1, then 2, then 3).
current code:
for i in range(4,0,-1):
Dev.step(2)
if i == 1 or i == 3:
Dev.turnLeft()
Dev.step(i)
Dev.step(-i)
Dev.turnRight()
else:
Dev.turnRight()
Dev.step(i+4)
Dev.step(-i-4)
Dev.turnLeft()
A:
Updated based on the comment not to use a separate counter variable.
The step of range() is evaluated only once, so it cannot be made to depend on the loop variable. Instead you can derive the current value from the loop index, so that the gap between consecutive values grows by 1 each pass (7, 6, 4, 1):
# n counts iterations; the value drops by 1, then 2, then 3, ...
for n in range(4):
    i = 7 - n * (n + 1) // 2   # i takes the values 7, 6, 4, 1
    Dev.step(2)
    if i == 1 or i == 3:
        Dev.turnLeft()
        Dev.step(i)
        Dev.step(-i)
        Dev.turnRight()
    else:
        Dev.turnRight()
        Dev.step(i+4)
        Dev.step(-i-4)
        Dev.turnLeft()
| Can I increase the decrease value from a for loop in python? | How do I increase the decrease value every loop in for loop ?.
Example:
from decrease by -7 -> -6 -> -4 -> -1.
current code:
for i in range(4,0,-1):
Dev.step(2)
if i == 1 or i == 3:
Dev.turnLeft()
Dev.step(i)
Dev.step(-i)
Dev.turnRight()
else:
Dev.turnRight()
Dev.step(i+4)
Dev.step(-i-4)
Dev.turnLeft()
| [
"Updated based on comment not to use a separate variable\nYou can use the loop variable i to determine the current decrement value.\n# Loop from 4 to 0, using the current value of i as the decrement value\nfor i in range(4,0,-i):\n Dev.step(2)\n if i == 1 or i == 3:\n Dev.turnLeft()\n Dev.step(i)\n Dev.step(-i)\n Dev.turnRight()\n else:\n Dev.turnRight()\n Dev.step(i+4)\n Dev.step(-i-4)\n Dev.turnLeft()\n\n"
] | [
0
] | [] | [] | [
"python"
] | stackoverflow_0074651826_python.txt |
Q:
Python Openpyxl - Modify excel file and its value
I get an exported Excel file displaying ProductItems, locations, and some sale numbers.
Now, the problem is that the ProductItems and Locations are all in one column, indented a bit, like this:
ProductItem_1
Location_a | Quantity | Price
Location_b | Quantity | Price
Location_c | Quantity | Price
(110 locations total)
ProductItem_2
Location_a | Quantity | Price
Location_b | Quantity | Price
Location_c | Quantity | Price
ProductItem_1
Location_a | Quantity | Price
Location_b | Quantity | Price
Location_c | Quantity | Price
.... etc... like 150 ProductItems x 110 Locations ...
My idea is to Insert a Column to the left, it would be empty, and then copy the name of ProductItem name to every row, like this:
ProductItem_1 | Location_a | QuantityVal | PriceVal
ProductItem_1 | Location_b | QuantityVal | PriceVal
ProductItem_1 | Location_c | QuantityVal | PriceVal
ProductItem_2 | Location_a | QuantityVal | PriceVal
ProductItem_2 | Location_b | QuantityVal | PriceVal
ProductItem_2 | Location_c | QuantityVal | PriceVal
ProductItem_3 | Location_a | QuantityVal | PriceVal
ProductItem_3 | Location_b | QuantityVal | PriceVal
ProductItem_3 | Location_c | QuantityVal | PriceVal
How do I accomplish this? I am attaching a screenshot of the Excel file... Any idea how to tackle this with Openpyxl in Python?
Thank you
Desired outcome would look like this:
A:
This example produces the requested format based on your before image and includes rows 1-4 from the original sheet unchanged.
Basically the code loops through the rows from row 5, the header row for the data in columns B, C, D, and E. It moves each range from A to E across by 1 so that column A is then empty, and sets the value in that row's column A to the value of the latest determined 'ProductItems' using the variable 'colA_val'.
'ProductItems' is determined by the leading spaces in the cell text, set by the variable 'num_indent'. Judging from the image, it has 1 leading space. If there is a more definitive way to determine the 'ProductItems' row, this part can be changed.
The following steps are optional
After the data is moved 1 column across, a loop is used to shift the column widths 1 column across so the headers retain the same width as before. Column A is slightly reduced in width since the wider text is now in Column B.
Cell B5 in the header row has an outline border added. The cell contains no text as this is not specified in the question.
...
from openpyxl import load_workbook
from openpyxl.utils import get_column_letter
from openpyxl.styles import Border, Side
filename = "openpyxl2.xlsx"
# Open workbook and select sheet
wb = load_workbook(filename)
ws = wb.active
### Loop the cells
colA_val = '' # Hold the text value to enter in to Col A
col_width = {} # This dict holds the column dimensions so the col widths can be readjusted
num_indent = 1 # How many leading spaces for the 'ProductItems' cell
for row in ws.iter_rows(min_row=5, max_row=ws.max_row):
cur_row =row[0].row
row_colA_value = row[0].value
### If this is a row with data
if row_colA_value is not None:
### Determine if 'ProductItems' row from the leading spaces, set by 'num_indent'
leading_spaces = len(row_colA_value) - len(row_colA_value.lstrip(' '))
### Is current row a ProductItems row, if so get details and delete row
if leading_spaces == num_indent:
### One off row 5 header adjustment and column widths collection
if not col_width:
### Adjust header row 5 one column across
ws.move_range(f'A{cur_row - 1}:E{cur_row - 1}', cols=1)
### Get column dimensions
for x in range(ws.max_column-1):
col_width[row[x].column] = ws.column_dimensions[row[x].column_letter].width
### Get next ProductItem name
colA_val = row_colA_value.lstrip()
### Delete the row the ProductItem was on
ws.delete_rows(cur_row)
### move name, units, sales etc cells 1 column across
ws.move_range(f'A{cur_row}:E{cur_row}', cols=1)
### Enter the ProductItem name into cell in column A
ws.cell(row=cur_row, column=1).value = colA_val
### Reset the column widths using the sizes saved in the col_width dictionary
for c, d in col_width.items():
### Adjust Column A to be a little smaller than before
if c == 1:
ws.column_dimensions['A'].width = d - 10
### Set the columns to the same width as before
ws.column_dimensions[get_column_letter(c + 1)].width = d
### Add Text and Border to header section cell B5
ws.cell(row=5, column=2).value = 'Enter text here'
med_border = Border(left=Side(style='medium'),
right=Side(style='medium'),
top=Side(style='medium'),
bottom=Side(style='medium'))
ws.cell(row=5, column=2).border = med_border
wb.save('out_' + filename)
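If switching tools is acceptable, the same reshaping is usually only a few lines in pandas. The sketch below assumes the headers sit in row 5 and that a 'ProductItems' row is one whose non-name columns are all empty, which matches the screenshot description but may need adjusting for the real file:
import pandas as pd

df = pd.read_excel("openpyxl2.xlsx", skiprows=4)        # row 5 becomes the header row
name_col = df.columns[0]                                # the unnamed first column
is_product = df.iloc[:, 1:].isna().all(axis=1)          # rows that only carry a product name
df["ProductItem"] = df[name_col].where(is_product).ffill().str.strip()
df = df[~is_product]                                    # keep only the location rows
df = df[["ProductItem"] + list(df.columns[:-1])]        # move the new column to the front
df.to_excel("out_pandas.xlsx", index=False)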
| Python Openpyxl - Modify excel file and its value | I get an exported Excel file displaying ProductItems, locations, and some sale numbers.
Now, the problem is that the ProductItems and Locations are all in one column, indented a bit, like this:
ProductItem_1
Location_a | Quantity | Price
Location_b | Quantity | Price
Location_c | Quantity | Price
(110 locations total)
ProductItem_2
Location_a | Quantity | Price
Location_b | Quantity | Price
Location_c | Quantity | Price
ProductItem_1
Location_a | Quantity | Price
Location_b | Quantity | Price
Location_c | Quantity | Price
.... etc... like 150 ProductItems x 110 Locations ...
My idea is to Insert a Column to the left, it would be empty, and then copy the name of ProductItem name to every row, like this:
ProductItem_1 | Location_a | QuantityVal | PriceVal
ProductItem_1 | Location_b | QuantityVal | PriceVal
ProductItem_1 | Location_c | QuantityVal | PriceVal
ProductItem_2 | Location_a | QuantityVal | PriceVal
ProductItem_2 | Location_b | QuantityVal | PriceVal
ProductItem_2 | Location_c | QuantityVal | PriceVal
ProductItem_3 | Location_a | QuantityVal | PriceVal
ProductItem_3 | Location_b | QuantityVal | PriceVal
ProductItem_3 | Location_c | QuantityVal | PriceVal
How do I accomplish this? I am attaching a screenshot of the Excel file... Any idea how to tackle this with Openpyxl in Python?
Thank you
Desired outcome would look like this:
| [
"This example produces the requested format based on your before image and includes rows 1 -4 from the original sheet unchanged.\nBasically the code loops through the rows from row 5, the header row for the data in columns B, C, D, and E. It moves each range from A to E across by 1 so that column A is then empty, and sets the value in that row's column A to the value of the latest determined 'ProductItems' using variable 'colA_val'.\n\n'ProductItems' is determined by the leading spaces in the cell text set by the variable 'num_indent'. Judging from the image has 1 leading space. If there is a more definitive way to determine the 'ProductItems' row this part can be changed.\n\nThe following steps are optional \n\nAfter the data is moved 1 column across a loop is used to apply the column widths 1 across so the Headers retain the same width as before. Column A is slighly reduced in width since the larger width text in now in Column B.\nCell B5 in the header row has an outline border added. The cell contains no text as this is not specified in the question.\n\n...\nfrom openpyxl import load_workbook\nfrom openpyxl.utils import get_column_letter\nfrom openpyxl.styles import Border, Side\n\nfilename = \"openpyxl2.xlsx\"\n\n# Open workbook and select sheet\nwb = load_workbook(filename)\nws = wb.active\n\n### Loop the cells\ncolA_val = '' # Hold the text value to enter in to Col A\ncol_width = {} # This dict holds the column dimenions so the col widths can be readjusted\nnum_indent = 1 # How many leading spaces for the 'ProductItems' cell\n\nfor row in ws.iter_rows(min_row=5, max_row=ws.max_row):\n cur_row =row[0].row\n row_colA_value = row[0].value\n ### If this is a row with data\n if row_colA_value is not None:\n ### Determine if 'ProductItems' row from the leading spaces, set by 'num_indent'\n leading_spaces = len(row_colA_value) - len(row_colA_value.lstrip(' '))\n ### Is current row a ProductItems row, if so get details and delete row\n if leading_spaces == num_indent:\n ### One off row 5 header adjustment and column widths collection\n if not col_width:\n ### Adjust header row 5 one column across\n ws.move_range(f'A{cur_row - 1}:E{cur_row - 1}', cols=1)\n ### Get column dimensions \n for x in range(ws.max_column-1):\n col_width[row[x].column] = ws.column_dimensions[row[x].column_letter].width\n ### Get next ProductItem name\n colA_val = row_colA_value.lstrip()\n ### Delete the row the ProductItem was on\n ws.delete_rows(cur_row)\n\n ### move name, units, sales etc cells 1 column across\n ws.move_range(f'A{cur_row}:E{cur_row}', cols=1)\n ### Enter the ProductItem name into cell in column A\n ws.cell(row=cur_row, column=1).value = colA_val\n\n\n### Reset the column widths using the sizes saved in the col_width dictionary\nfor c, d in col_width.items():\n ### Adjust Column A to be a little smaller than before\n if c == 1:\n ws.column_dimensions['A'].width = d - 10\n ### Set the columns to the same width as before\n ws.column_dimensions[get_column_letter(c + 1)].width = d\n\n### Add Text and Border to header section cell B5\nws.cell(row=5, column=2).value = 'Enter text here'\n\nmed_border = Border(left=Side(style='medium'),\n right=Side(style='medium'),\n top=Side(style='medium'),\n bottom=Side(style='medium'))\nws.cell(row=5, column=2).border = med_border\n\nwb.save('out_' + filename)\n\n"
] | [
0
] | [] | [] | [
"excel",
"openpyxl",
"python"
] | stackoverflow_0074647451_excel_openpyxl_python.txt |
Q:
how to calculate 256 bit number in numba on a CUDA GPU
I'm using Python Numba, and when a number exceeds 64 bits it falls back to the CPU instead of the GPU, so I guess it only supports numbers up to 64 bits. How can I calculate with 256-bit numbers in Numba (for example, adding two 256-bit numbers)?
A:
Generally speaking, GPUs are 32-bit machines with 64-bit addressing capability. All 64-bit integer operations are emulated. In the simplest case (logical operations, additions, subtractions) each 64-bit integer operation requires the execution of two 32-bit integer instructions. Very roughly, emulation of 64-bit multiplication requires about twenty 32-bit instructions, 64-bit division requires about eighty 32-bit instructions.
https://forums.developer.nvidia.com/t/question-about-64-bit-integer-performance/64147
GPUs are optimised for doing 3D rendering calculations. Following the history of OpenGL, these are traditionally done using 32-bit floating point numbers arranged as either vectors of four floats or quaternion matrices of 4x4 floats. So that's the capability GPUs are very good at.
If you want to do floating point with more bits, or 64-bit integerarithmetic, you may find it unsupported or slow.
https://cs.stackexchange.com/a/121119
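To actually do 256-bit arithmetic on the GPU you therefore have to represent the numbers yourself — for example as arrays of 32-bit limbs — and propagate carries manually. Below is a minimal sketch of element-wise 256-bit addition with numba.cuda; the layout (8 little-endian uint32 limbs per number) is an assumption, and any overflow past 256 bits is simply dropped:
import numpy as np
from numba import cuda, uint32, uint64

@cuda.jit
def add256(a, b, out):
    i = cuda.grid(1)
    if i < a.shape[0]:
        carry = uint64(0)
        for k in range(8):                                   # 8 x 32-bit limbs = 256 bits
            s = uint64(a[i, k]) + uint64(b[i, k]) + carry    # limb math done in 64 bits
            out[i, k] = uint32(s & uint64(0xFFFFFFFF))       # keep the low 32 bits
            carry = s >> uint64(32)                          # carry into the next limb

n = 1 << 16
a = np.random.randint(0, 2**32, size=(n, 8), dtype=np.uint64).astype(np.uint32)
b = np.random.randint(0, 2**32, size=(n, 8), dtype=np.uint64).astype(np.uint32)
out = np.zeros_like(a)
threads = 128
add256[(n + threads - 1) // threads, threads](a, b, out)
Subtraction works the same way with borrows; multiplication needs a limb-by-limb schoolbook (or Karatsuba) routine, so for anything beyond addition it may be worth asking whether the problem really needs 256-bit integers on the GPU.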
| how to calculate 256 bit number in numba on a CUDA GPU | I'm using python numba and when a number exceed 64 bit it will use cpu instead of gpu so i guess it only support up to 64 bit number. How to calculate 256 bit number in numba(like adding two 256 bit number)?
| [
"\nGenerally speaking, GPUs are 32-bit machines with 64-bit addressing capability. All 64-bit integer operations are emulated. In the simplest case (logical operations, additions, subtractions) each 64-bit integer operation requires the execution of two 32-bit integer instructions. Very roughly, emulation of 64-bit multiplication requires about twenty 32-bit instructions, 64-bit division requires about eighty 32-bit instructions.\n\nhttps://forums.developer.nvidia.com/t/question-about-64-bit-integer-performance/64147\n\nGPUs are optimised for doing 3D rendering calculations. Following the history of OpenGL, these are traditionally done using 32-bit floating point numbers arranged as either vectors of four floats or quaternion matrices of 4x4 floats. So that's the capability GPUs are very good at.\nIf you want to do floating point with more bits, or 64-bit integerarithmetic, you may find it unsupported or slow.\n\nhttps://cs.stackexchange.com/a/121119\n"
] | [
1
] | [] | [] | [
"cuda",
"numba",
"python"
] | stackoverflow_0074651328_cuda_numba_python.txt |
Q:
full outer join in python without pandas
I'm trying to do a full outer join in Python without using pandas. I already developed code for an inner join but can't really adapt it for the full outer join.
here is my code for the inner join
import collections
import csv
import sys
def c_merge(f1,f2):
with open(f1,'r') as infile:
obj=csv.reader(infile)
header_a=next(obj)
dict_a={row[0]: row[1:] for row in obj}
with open(f2,'r') as infile:
obj=csv.reader(infile)
header_b=next(obj)
dict_b=collections.defaultdict(list)
for row in obj:
dict_b[row[0]].append(row[1:])
with open('newfile.txt','w') as newfile:
w=csv.writer(newfile)
w.writerow(header_a+header_b[1:])
for m in dict_a.keys():
for n in dict_b.get(m, [[]]):
w.writerow([m]+dict_a[m]+n)
if __name__ == "__main__":
c_merge(sys.argv[0],sys.argv[1])
obj=csv.reader(open('newfile.txt','r'))
for x in obj:
print(x)
A:
Hi, Welcome to StackOverflow!
Both of these work:
with open('newfile.txt','w') as newfile:
w=csv.writer(newfile)
w.writerow(header_a+header_b[1:])
for m in set(dict_a.keys()).union(dict_b.keys()):
for n in dict_b.get(m, [[]]):
w.writerow([m]+dict_a.get(m, [])+n)
OR
with open('newfile.txt','w') as newfile:
    w=csv.writer(newfile)
    w.writerow(header_a+header_b[1:])

    for m in dict_a.keys() | dict_b.keys():
        for n in dict_b.get(m, [[]]):
            w.writerow([m]+dict_a.get(m, [])+n)
Complete Code:
import collections
import csv
import sys
def c_merge(f1,f2):
with open(f1,'r') as infile:
obj=csv.reader(infile)
header_a=next(obj)
dict_a={row[0]: row[1:] for row in obj}
with open(f2,'r') as infile:
obj=csv.reader(infile)
header_b=next(obj)
dict_b=collections.defaultdict(list)
for row in obj:
dict_b[row[0]].append(row[1:])
with open('newfile.txt','w') as newfile:
w=csv.writer(newfile)
w.writerow(header_a+header_b[1:])
for m in set(dict_a.keys()).union(dict_b.keys()):
for n in dict_b.get(m, [[]]):
w.writerow([m]+dict_a.get(m, [])+n)
# OR
"""
    for m in dict_a.keys() | dict_b.keys():
for n in dict_b.get(m, [[]]):
w.writerow([m]+dict_a.get(m, [])+n)
"""
if __name__ == "__main__":
c_merge(sys.argv[0],sys.argv[1])
obj=csv.reader(open('newfile.txt','r'))
for x in obj:
print(x)
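One more detail, flagged as an assumption about the desired output (the question doesn't show one): with a full outer join, keys that exist only in the second file get no columns from the first file, so you may want to pad them with blanks to keep every row the same width:
# pad missing left-side values so every output row has the same number of columns
a_width = len(header_a) - 1
for m in set(dict_a.keys()).union(dict_b.keys()):
    left = dict_a.get(m, [''] * a_width)
    for n in dict_b.get(m, [[]]):
        w.writerow([m] + left + n)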
| full outer join in python without pandas | I'm trying to do a full outer join in Python without using pandas. I already developed code for an inner join, but can't really adapt it for the full outer join.
Here is my code for the inner join:
import collections
import csv
import sys
def c_merge(f1,f2):
with open(f1,'r') as infile:
obj=csv.reader(infile)
header_a=next(obj)
dict_a={row[0]: row[1:] for row in obj}
with open(f2,'r') as infile:
obj=csv.reader(infile)
header_b=next(obj)
dict_b=collections.defaultdict(list)
for row in obj:
dict_b[row[0]].append(row[1:])
with open('newfile.txt','w') as newfile:
w=csv.writer(newfile)
w.writerow(header_a+header_b[1:])
for m in dict_a.keys():
for n in dict_b.get(m, [[]]):
w.writerow([m]+dict_a[m]+n)
if __name__ == "__main__":
c_merge(sys.argv[0],sys.argv[1])
obj=csv.reader(open('newfile.txt','r'))
for x in obj:
print(x)
| [
"Hi, Welcome to StackOverflow!\nBoth of these works:\nwith open('newfile.txt','w') as newfile:\n w=csv.writer(newfile)\n w.writerow(header_a+header_b[1:])\n\n for m in set(dict_a.keys()).union(dict_b.keys()):\n for n in dict_b.get(m, [[]]):\n w.writerow([m]+dict_a.get(m, [])+n)\n\nOR\nwith open('newfile.txt','w') as newfile:\n w=csv.writer(newfile)\n w.writerow(header_a+header_b[1:])\n\n for m in dict_a.keys()+dict_b.keys():\n for n in dict_b.get(m, [[]]):\n w.writerow([m]+dict_a.get(m, [])+n)\n\nComplete Code:\nimport collections\nimport csv\nimport sys\n\ndef c_merge(f1,f2):\n with open(f1,'r') as infile:\n obj=csv.reader(infile)\n header_a=next(obj)\n dict_a={row[0]: row[1:] for row in obj}\n\n with open(f2,'r') as infile:\n obj=csv.reader(infile)\n header_b=next(obj)\n dict_b=collections.defaultdict(list)\n for row in obj:\n dict_b[row[0]].append(row[1:])\n\n with open('newfile.txt','w') as newfile:\n w=csv.writer(newfile)\n w.writerow(header_a+header_b[1:])\n\n for m in set(dict_a.keys()).union(dict_b.keys()):\n for n in dict_b.get(m, [[]]):\n w.writerow([m]+dict_a.get(m, [])+n)\n\n # OR\n\n \"\"\"\n for m in dict_a.keys()+dict_b.keys():\n for n in dict_b.get(m, [[]]):\n w.writerow([m]+dict_a.get(m, [])+n)\n \"\"\"\n\nif __name__ == \"__main__\":\n c_merge(sys.argv[0],sys.argv[1])\n obj=csv.reader(open('newfile.txt','r'))\n for x in obj:\n print(x)\n\n"
] | [
0
] | [] | [] | [
"python"
] | stackoverflow_0074651760_python.txt |
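For reference, a small standalone sketch (with hypothetical sample rows) of the idea behind the answer above: the union of the two key sets drives the full outer join, and missing sides are padded with empty strings here as one simple choice:
import csv, io

left = io.StringIO("id,name\n1,alice\n2,bob\n")
right = io.StringIO("id,score\n2,90\n3,75\n")

a = {row[0]: row[1:] for row in list(csv.reader(left))[1:]}
b = {row[0]: row[1:] for row in list(csv.reader(right))[1:]}

for key in sorted(a.keys() | b.keys()):
    print([key] + a.get(key, ['']) + b.get(key, ['']))
# ['1', 'alice', '']
# ['2', 'bob', '90']
# ['3', '', '75']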
Q:
VS code: Updated PYTHONPATH in settings. Import autocomplete now works, but module not found when running program
I have been trying to fix a problem while running python files in VSCode. I have a directory with a program my_program.py that imports a module personal_functions.py from a folder packages_i_made.
project
├── .env
└── folder_a
├── my_program.py
my_packages
├── __init__.py
└── packages_i_made
├── __init__.py
└── personal_functions.py
my_program.py contains:
from packages_i_made import personal_functions as pf
When typing this import line, the text autocompletes. Previously autocomplete didn't work and imports failed with the module not being recognized, which is why I added the .env file which contains PYTHONPATH="C:/Users/user_1/my_packages" . I also updated the workspace settings with a path to the .env file.
I hoped this would fix my import issue, but imports still fail as before when running the file:
Traceback (most recent call last):
File "c:\Users\user_1\...\...\project\folder_a\my_program.py", line 1, in <module>
from packages_i_made import personal_functions as pf
ModuleNotFoundError: No module named 'packages_i_made'
I'm an utter beginner to programming, but my interpretation of the issue is this: As far as I can tell, Pylance can see the my_packages directory and recognize the packages inside it. However, the python interpreter fails to recognize the path to my_packages. If it's important, the my_packages directory is in my system path. I've been banging my head against this for days. Any help is appreciated.
A:
The easiest way to solve the problem is to use the sys.path.append method above your import statement to indicate the path. For you, the code should look like this:
import sys
sys.path.append("C:/Users/user_1/my_packages")
from packages_i_made import personal_functions as pf
You set the .env file just to let VS Code IntelliSense (Pylance for Python) see the package, but in fact the Python interpreter still cannot access your package. The effect is the same as adding the "python.analysis.extraPaths" configuration in settings.json; it adds an additional import search resolution path.
The difference this brings is that when you write import code you get autocompletion and the yellow warning squiggly line disappears. But when you run the code, you still get errors because the python interpreter doesn't actually have access to your package.
VS code uses the currently open folder as the workspace. So which folder you open in vscode is also critical.
A:
Open the vscode from your project root folder. It should work then.
| VS code: Updated PYTHONPATH in settings. Import autocomplete now works, but module not found when running program | I have been trying to fix a problem while running python files in VSCode. I have a directory with a program my_program.py that imports a module personal_functions.py from a folder packages_i_made.
project
├── .env
└── folder_a
├── my_program.py
my_packages
├── __init__.py
└── packages_i_made
├── __init__.py
└── personal_functions.py
my_program.py contains:
from packages_i_made import personal_functions as pf
When typing this import line, the text autocompletes. Previously autocomplete didn't work and imports failed with the module not being recognized, which is why I added the .env file which contains PYTHONPATH="C:/Users/user_1/my_packages" . I also updated the workspace settings with a path to the .env file.
I hoped this would fix my import issue, but imports still fail as before when running the file:
Traceback (most recent call last):
File "c:\Users\user_1\...\...\project\folder_a\my_program.py", line 1, in <module>
from packages_i_made import personal_functions as pf
ModuleNotFoundError: No module named 'packages_i_made'
I'm an utter beginner to programming, but my interpretation of the issue is this: As far as I can tell, Pylance can see the my_packages directory and recognize the packages inside it. However, the python interpreter fails to recognize the path to my_packages. If it's important, the my_packages directory is in my system path. I've been banging my head against this for days. Any help is appreciated.
| [
"The easiest way to solve the problem is to use the sys.path.append method above your import statement to indicate the path. For you, the code should look like this:\nimport sys\nsys.path.append(\"C:/Users/user_1/my_packages\")\n\nfrom packages_i_made import personal_functions as pf\n\nYou set the .env file just to let vscode intellisense (pylance for python) see the package, but in fact the python interpreter still cannot access your package. This effect is the same as adding \"python.analysis.extraPaths\" configuration in settings.json, it will add additional import search resolution path.\nThe difference this brings is that when you write import code you get autocompletion and the yellow warning squiggly line disappears. But when you run the code, you still get errors because the python interpreter doesn't actually have access to your package.\nVS code uses the currently open folder as the workspace. So which folder you open in vscode is also critical.\n",
"Open the vscode from your project root folder. It should work then.\n"
] | [
1,
0
] | [] | [] | [
"path",
"python",
"python_import",
"visual_studio_code"
] | stackoverflow_0074651496_path_python_python_import_visual_studio_code.txt |
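One quick way to tell Pylance's view apart from the interpreter's is to print sys.path from the same interpreter VS Code runs; a minimal check (the path below is the one from the question):
import sys
print(*sys.path, sep="\n")   # C:/Users/user_1/my_packages should appear here
If it is missing, sys.path.append as shown above, a .pth file in the environment's site-packages directory, or setting PYTHONPATH in the shell that launches the program are ways to add it for the interpreter itself, not just for the language server.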
Q:
How to change instance type in EC2 Launch Template using AWS SDK?
I'm looking to change certain things in the launch template, e.g. the instance type, which means creating a new version while doing so.
I have gone through the SDK documentation for both Go and Python. Neither seems to have the parameters that'd let me achieve this.
I'm refering to these:
Go's function,
Python's function
Please help me out...
A:
EC2 launch template is immutable. You must create a new version if you need to modify the current launch template version.
Here is an example of creating a new version and then making it the default version using AWS SDK v2.
Install these two packages:
"github.com/aws/aws-sdk-go-v2/service/ec2"
ec2types "github.com/aws/aws-sdk-go-v2/service/ec2/types"
Assuming you created the AWS config:
func createLaunchTemplateVersion(cfg aws.Config) {
ec2client := ec2.NewFromConfig(cfg)
template := ec2types.RequestLaunchTemplateData{
InstanceType: ec2types.InstanceTypeT2Medium}
createParams := ec2.CreateLaunchTemplateVersionInput{
LaunchTemplateData: &template,
LaunchTemplateName: aws.String("MyTemplate"),
SourceVersion: aws.String("1"),
}
outputCreate, err := ec2client.CreateLaunchTemplateVersion(context.Background(), &createParams)
if err != nil {
log.Fatal(err)
}
if outputCreate.Warning != nil {
log.Fatalf("%v\n", outputCreate.Warning.Errors)
}
// set the new launch type version as the default version
modifyParams := ec2.ModifyLaunchTemplateInput{
DefaultVersion: aws.String(strconv.FormatInt(*outputCreate.LaunchTemplateVersion.VersionNumber, 10)),
LaunchTemplateName: outputCreate.LaunchTemplateVersion.LaunchTemplateName,
}
outputModify, err := ec2client.ModifyLaunchTemplate(context.Background(), &modifyParams)
if err != nil {
log.Fatal(err)
}
fmt.Printf("default version %d\n", *outputModify.LaunchTemplate.DefaultVersionNumber)
}
| How to change instance type in EC2 Launch Template using AWS SDK? | I'm looking to change certain things in the launch template, e.g. the instance type, which means creating a new version while doing so.
I have gone through the SDK documentation for both Go and Python. Neither seems to have the parameters that'd let me achieve this.
I'm refering to these:
Go's function,
Python's function
Please help me out...
| [
"EC2 launch template is immutable. You must create a new version if you need to modify the current launch template version.\nHere is an example of creating a new version and then making it the default version using AWS SDK v2.\nInstall these two packages:\n\"github.com/aws/aws-sdk-go-v2/service/ec2\"\nec2types \"github.com/aws/aws-sdk-go-v2/service/ec2/types\"\n\nAssuming you created the AWS config:\nfunc createLaunchTemplateVersion(cfg aws.Config) {\n ec2client := ec2.NewFromConfig(cfg)\n template := ec2types.RequestLaunchTemplateData{\n InstanceType: ec2types.InstanceTypeT2Medium}\n createParams := ec2.CreateLaunchTemplateVersionInput{\n LaunchTemplateData: &template,\n LaunchTemplateName: aws.String(\"MyTemplate\"),\n SourceVersion: aws.String(\"1\"),\n }\n outputCreate, err := ec2client.CreateLaunchTemplateVersion(context.Background(), &createParams)\n if err != nil {\n log.Fatal(err)\n }\n if outputCreate.Warning != nil {\n log.Fatalf(\"%v\\n\", outputCreate.Warning.Errors)\n }\n // set the new launch type version as the default version\n modifyParams := ec2.ModifyLaunchTemplateInput{\n DefaultVersion: aws.String(strconv.FormatInt(*outputCreate.LaunchTemplateVersion.VersionNumber, 10)),\n LaunchTemplateName: outputCreate.LaunchTemplateVersion.LaunchTemplateName,\n }\n outputModify, err := ec2client.ModifyLaunchTemplate(context.Background(), &modifyParams)\n if err != nil {\n log.Fatal(err)\n }\n fmt.Printf(\"default version %d\\n\", *outputModify.LaunchTemplate.DefaultVersionNumber)\n}\n\n"
] | [
2
] | [] | [] | [
"amazon_ec2",
"amazon_web_services",
"go",
"python"
] | stackoverflow_0074650569_amazon_ec2_amazon_web_services_go_python.txt |
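Since the question also mentions the Python SDK, here is a rough boto3 sketch of the same flow (the template name, source version and instance type are placeholders):
import boto3

ec2 = boto3.client("ec2")

created = ec2.create_launch_template_version(
    LaunchTemplateName="MyTemplate",
    SourceVersion="1",                                  # copy settings from version 1
    LaunchTemplateData={"InstanceType": "t3.medium"},   # only the fields that change
)
new_version = created["LaunchTemplateVersion"]["VersionNumber"]

# Optionally make the new version the default.
ec2.modify_launch_template(
    LaunchTemplateName="MyTemplate",
    DefaultVersion=str(new_version),
)
print("default is now version", new_version)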
Q:
DateTime adjustment in pandas
I have a dataframe with thousands of rows; one column is a datetime:
I would like to adjust the time, roughly like 00 ± 15 -> 00 and 30 ± 15 -> 30.
More precisely, a minute in the range 46<->15 should change to 00 and a minute in the range 16<->45 should change to 30, taking care of the ±1 carry on the hour.
datetime
2022/11/15 00:29
2022/11/15 00:29
2022/11/15 00:29
2022/11/15 00:59
2022/11/15 00:59
2022/11/15 00:59
2022/11/15 01:35
2022/11/15 01:35
2022/11/15 01:35
2022/11/15 02:01
2022/11/15 02:01
2022/11/15 02:01
2022/11/15 02:45
2022/11/15 02:45
2022/11/15 02:45
2022/11/15 02:48
2022/11/15 02:48
2022/11/15 02:48
After adjustment, it would become
datetime
2022/11/15 00:30
2022/11/15 00:30
2022/11/15 00:30
2022/11/15 01:00
2022/11/15 01:00
2022/11/15 01:00
2022/11/15 01:30
2022/11/15 01:30
2022/11/15 01:30
2022/11/15 02:00
2022/11/15 02:00
2022/11/15 02:00
2022/11/15 02:30
2022/11/15 02:30
2022/11/15 02:30
2022/11/15 03:00
2022/11/15 03:00
2022/11/15 03:00
A:
Use Series.dt.ceil by 15 minutes and then Series.dt.floor by 30:
df['datetime'] = pd.to_datetime(df['datetime']).dt.ceil('15Min').dt.floor('30Min')
print (df)
datetime
0 2022-11-15 00:30:00
1 2022-11-15 00:30:00
2 2022-11-15 00:30:00
3 2022-11-15 01:00:00
4 2022-11-15 01:00:00
5 2022-11-15 01:00:00
6 2022-11-15 01:30:00
7 2022-11-15 01:30:00
8 2022-11-15 01:30:00
9 2022-11-15 02:00:00
10 2022-11-15 02:00:00
11 2022-11-15 02:00:00
12 2022-11-15 02:30:00
13 2022-11-15 02:30:00
14 2022-11-15 02:30:00
15 2022-11-15 03:00:00
16 2022-11-15 03:00:00
17 2022-11-15 03:00:00
| DateTime adjustment in pandas | I have a dataframe with thousands of rows; one column is a datetime:
I would like to adjust the time, roughly like 00 ± 15 -> 00 and 30 ± 15 -> 30.
More precisely, a minute in the range 46<->15 should change to 00 and a minute in the range 16<->45 should change to 30, taking care of the ±1 carry on the hour.
datetime
2022/11/15 00:29
2022/11/15 00:29
2022/11/15 00:29
2022/11/15 00:59
2022/11/15 00:59
2022/11/15 00:59
2022/11/15 01:35
2022/11/15 01:35
2022/11/15 01:35
2022/11/15 02:01
2022/11/15 02:01
2022/11/15 02:01
2022/11/15 02:45
2022/11/15 02:45
2022/11/15 02:45
2022/11/15 02:48
2022/11/15 02:48
2022/11/15 02:48
After adjustment, it would become
datetime
2022/11/15 00:30
2022/11/15 00:30
2022/11/15 00:30
2022/11/15 01:00
2022/11/15 01:00
2022/11/15 01:00
2022/11/15 01:30
2022/11/15 01:30
2022/11/15 01:30
2022/11/15 02:00
2022/11/15 02:00
2022/11/15 02:00
2022/11/15 02:30
2022/11/15 02:30
2022/11/15 02:30
2022/11/15 03:00
2022/11/15 03:00
2022/11/15 03:00
| [
"Use Series.dt.ceil by 15 minutes and then Series.dt.floor by 30:\ndf['datetime'] = pd.to_datetime(df['datetime']).dt.ceil('15Min').dt.floor('30Min')\nprint (df)\n datetime\n0 2022-11-15 00:30:00\n1 2022-11-15 00:30:00\n2 2022-11-15 00:30:00\n3 2022-11-15 01:00:00\n4 2022-11-15 01:00:00\n5 2022-11-15 01:00:00\n6 2022-11-15 01:30:00\n7 2022-11-15 01:30:00\n8 2022-11-15 01:30:00\n9 2022-11-15 02:00:00\n10 2022-11-15 02:00:00\n11 2022-11-15 02:00:00\n12 2022-11-15 02:30:00\n13 2022-11-15 02:30:00\n14 2022-11-15 02:30:00\n15 2022-11-15 03:00:00\n16 2022-11-15 03:00:00\n17 2022-11-15 03:00:00\n\n"
] | [
2
] | [] | [] | [
"pandas",
"python"
] | stackoverflow_0074651895_pandas_python.txt |
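A quick way to sanity-check the ceil/floor expression against a few of the sample minutes above (a minimal sketch):
import pandas as pd

s = pd.to_datetime(pd.Series(
    ["2022/11/15 00:29", "2022/11/15 00:59", "2022/11/15 01:35",
     "2022/11/15 02:01", "2022/11/15 02:45", "2022/11/15 02:48"]))
print(s.dt.ceil("15Min").dt.floor("30Min"))
# 00:30, 01:00, 01:30, 02:00, 02:30 and 03:00, matching the expected output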
Q:
discord.py embed help command with reaction pages
I'm trying to make a help command with multiple pages that you can move back and forth through using reactions. It works properly, but when you get to page 2 and go forward again, nothing happens.
The same happens when you are on page 1 and try to go back. How can I make it so that going past the last page wraps around to the first page, and going before the first page takes you to the last page?
@client.hybrid_command(name = "help", with_app_command=True, description="Get a list of commands")
@commands.guild_only()
async def help(ctx):
pages = 2
cur_page = 1
roleplayembed = discord.Embed(color=embedcolor, title="Roleplay Commands")
roleplayembed.add_field(name=f"{client.command_prefix}Cuddle", value="Cuddle a user and add a message(Optional)",inline=False)
roleplayembed.add_field(name=f"{client.command_prefix}Hug", value="Hug a user and add a message(Optional)",inline=False)
roleplayembed.add_field(name=f"{client.command_prefix}Kiss", value="Kiss a user and add a message(Optional)",inline=False)
roleplayembed.add_field(name=f"{client.command_prefix}Slap", value="Slap a user and add a message(Optional)",inline=False)
roleplayembed.add_field(name=f"{client.command_prefix}Pat", value="Pat a user and add a message(Optional)",inline=False)
roleplayembed.set_footer(text=f"Page {cur_page} of {pages}")
roleplayembed.timestamp = datetime.datetime.utcnow()
basicembed = discord.Embed(color=embedcolor, title="Basic Commands")
basicembed.add_field(name=f"{client.command_prefix}Waifu", value="Posts a random AI Generated Image of a waifu",inline=False)
basicembed.add_field(name=f"{client.command_prefix}8ball", value="Works as an 8 ball",inline=False)
basicembed.add_field(name=f"{client.command_prefix}Ara", value="Gives you a random ara ara from Kurumi Tokisaki",inline=False)
basicembed.add_field(name=f"{client.command_prefix}Wikipedia", value="Search something up on the wiki",inline=False)
basicembed.add_field(name=f"{client.command_prefix}Userinfo", value="Look up info about a user",inline=False)
basicembed.add_field(name=f"{client.command_prefix}Ask", value="Ask the bot a question",inline=False)
basicembed.add_field(name=f"{client.command_prefix}Askwhy", value="Ask the boy a question beginning with 'why'",inline=False)
basicembed.add_field(name=f"{client.command_prefix}Avatar", value="Get a user's avatar or your own avatar",inline=False)
basicembed.set_footer(text=f"Page {cur_page+1} of {pages}")
basicembed.timestamp = datetime.datetime.utcnow()
contents = [roleplayembed, basicembed]
message = await ctx.send(embed=contents[cur_page-1])
# getting the message object for editing and reacting
await message.add_reaction("◀️")
await message.add_reaction("▶️")
def check(reaction, user):
return user == ctx.author and str(reaction.emoji) in ["◀️", "▶️"]
# This makes sure nobody except the command sender can interact with the "menu"
while True:
try:
reaction, user = await client.wait_for("reaction_add", timeout=60, check=check)
# waiting for a reaction to be added - times out after x seconds, 60 in this
# example
if str(reaction.emoji) == "▶️" and cur_page != pages:
cur_page += 1
await message.edit(embed=contents[cur_page-1])
await message.remove_reaction(reaction, user)
elif str(reaction.emoji) == "◀️" and cur_page > 1:
cur_page -= 1
await message.edit(embed=contents[cur_page-1])
await message.remove_reaction(reaction, user)
else:
await message.remove_reaction(reaction, user)
# removes reactions if the user tries to go forward on the last page or
# backwards on the first page
except asyncio.TimeoutError:
await message.delete()
break
# ending the loop if user doesn't react after x seconds
A:
Avoid repetition, in your loop:
while True:
try:
reaction, user = await client.wait_for("reaction_add", timeout=60, check=check)
# waiting for a reaction to be added - times out after x seconds, 60 in this
# example
if str(reaction.emoji) == "▶️":
cur_page += 1
elif str(reaction.emoji) == "◀️":
cur_page -= 1
if cur_page > pages: #check if forward on last page
cur_page = 1
elif cur_page < 1: #check if back on first page
cur_page = pages
await message.edit(embed=contents[cur_page-1])
await message.remove_reaction(reaction, user)
except asyncio.TimeoutError:
await message.delete()
break
| discord.py embed help command with reaction pages | I'm trying to make a help command with multiple pages that you can move back and forth through using reactions. It works properly, but when you get to page 2 and go forward again, nothing happens.
The same happens when you are on page 1 and try to go back. How can I make it so that going past the last page wraps around to the first page, and going before the first page takes you to the last page?
@client.hybrid_command(name = "help", with_app_command=True, description="Get a list of commands")
@commands.guild_only()
async def help(ctx):
pages = 2
cur_page = 1
roleplayembed = discord.Embed(color=embedcolor, title="Roleplay Commands")
roleplayembed.add_field(name=f"{client.command_prefix}Cuddle", value="Cuddle a user and add a message(Optional)",inline=False)
roleplayembed.add_field(name=f"{client.command_prefix}Hug", value="Hug a user and add a message(Optional)",inline=False)
roleplayembed.add_field(name=f"{client.command_prefix}Kiss", value="Kiss a user and add a message(Optional)",inline=False)
roleplayembed.add_field(name=f"{client.command_prefix}Slap", value="Slap a user and add a message(Optional)",inline=False)
roleplayembed.add_field(name=f"{client.command_prefix}Pat", value="Pat a user and add a message(Optional)",inline=False)
roleplayembed.set_footer(text=f"Page {cur_page} of {pages}")
roleplayembed.timestamp = datetime.datetime.utcnow()
basicembed = discord.Embed(color=embedcolor, title="Basic Commands")
basicembed.add_field(name=f"{client.command_prefix}Waifu", value="Posts a random AI Generated Image of a waifu",inline=False)
basicembed.add_field(name=f"{client.command_prefix}8ball", value="Works as an 8 ball",inline=False)
basicembed.add_field(name=f"{client.command_prefix}Ara", value="Gives you a random ara ara from Kurumi Tokisaki",inline=False)
basicembed.add_field(name=f"{client.command_prefix}Wikipedia", value="Search something up on the wiki",inline=False)
basicembed.add_field(name=f"{client.command_prefix}Userinfo", value="Look up info about a user",inline=False)
basicembed.add_field(name=f"{client.command_prefix}Ask", value="Ask the bot a question",inline=False)
basicembed.add_field(name=f"{client.command_prefix}Askwhy", value="Ask the boy a question beginning with 'why'",inline=False)
basicembed.add_field(name=f"{client.command_prefix}Avatar", value="Get a user's avatar or your own avatar",inline=False)
basicembed.set_footer(text=f"Page {cur_page+1} of {pages}")
basicembed.timestamp = datetime.datetime.utcnow()
contents = [roleplayembed, basicembed]
message = await ctx.send(embed=contents[cur_page-1])
# getting the message object for editing and reacting
await message.add_reaction("◀️")
await message.add_reaction("▶️")
def check(reaction, user):
return user == ctx.author and str(reaction.emoji) in ["◀️", "▶️"]
# This makes sure nobody except the command sender can interact with the "menu"
while True:
try:
reaction, user = await client.wait_for("reaction_add", timeout=60, check=check)
# waiting for a reaction to be added - times out after x seconds, 60 in this
# example
if str(reaction.emoji) == "▶️" and cur_page != pages:
cur_page += 1
await message.edit(embed=contents[cur_page-1])
await message.remove_reaction(reaction, user)
elif str(reaction.emoji) == "◀️" and cur_page > 1:
cur_page -= 1
await message.edit(embed=contents[cur_page-1])
await message.remove_reaction(reaction, user)
else:
await message.remove_reaction(reaction, user)
# removes reactions if the user tries to go forward on the last page or
# backwards on the first page
except asyncio.TimeoutError:
await message.delete()
break
# ending the loop if user doesn't react after x seconds
| [
"Avoid repetition, in your loop:\nwhile True:\n\n try:\n reaction, user = await client.wait_for(\"reaction_add\", timeout=60, check=check)\n # waiting for a reaction to be added - times out after x seconds, 60 in this\n # example\n\n if str(reaction.emoji) == \"▶️\":\n cur_page += 1\n \n\n elif str(reaction.emoji) == \"◀️\": \n cur_page -= 1\n\n if cur_page > pages: #check if forward on last page\n cur_page = 1 \n elif cur_page < 1: #check if back on first page\n cur_page = pages \n await message.edit(embed=contents[cur_page-1])\n await message.remove_reaction(reaction, user)\n\n \n except asyncio.TimeoutError:\n await message.delete()\n break\n\n"
] | [
0
] | [] | [] | [
"discord",
"discord.py",
"python"
] | stackoverflow_0074648761_discord_discord.py_python.txt |
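The same wrap-around can also be expressed with modular arithmetic, which removes the need for boundary checks altogether; a standalone sketch of just that logic (pages are 1-based, no Discord objects involved):
def next_page(cur_page: int, pages: int, forward: bool) -> int:
    # forward: pages -> 1; backward: 1 -> pages
    return cur_page % pages + 1 if forward else (cur_page - 2) % pages + 1

pages = 2
page = 1
for forward in (True, True, False, False):
    page = next_page(page, pages, forward)
    print(page)   # 2, 1, 2, 1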
Q:
How to simultaneously expand and shrink the dataframe as per some conditions?
I have a df as follows:
df = pd.DataFrame.from_dict({'Type': {0: 'A1', 1: 'A2', 2: 'A2', 3: 'A2', 4: 'A2', 5: 'A3', 6: 'A3', 7: 'A3', 8: 'A3', 9: 'A3', 10: 'A3', 11: 'A3', 12: 'A3', 13: 'A3', 14: 'A3', 15: 'A3', 16: 'A3', 17: 'A3', 18: 'A3', 19: 'A3', 20: 'A3', 21: 'A3', 22: 'A3', 23: 'A3', 24: 'A3', 25: 'A3', 26: 'A3', 27: 'A3', 28: 'A3', 29: 'A3', 30: 'A3', 31: 'A3', 32: 'A3', 33: 'A3', 34: 'A3', 35: 'A3', 36: 'A3', 37: 'A3', 38: 'A3', 39: 'A3', 40: 'A3', 41: 'A3', 42: 'A3', 43: 'A3', 44: 'A3', 45: 'A3', 46: 'A3', 47: 'A3', 48: 'A3', 49: 'A3', 50: 'A3', 51: 'A3', 52: 'A3', 53: 'A3', 54: 'A3', 55: 'A3', 56: 'A3', 57: 'A3', 58: 'A3', 59: 'A3', 60: 'A3', 61: 'A3', 62: 'A3', 63: 'A3', 64: 'A3', 65: 'A3', 66: 'A3', 67: 'A3', 68: 'A3', 69: 'A3', 70: 'A3', 71: 'A3', 72: 'A3', 73: 'A3', 74: 'A3', 75: 'A3'}, 'FN': {0: 'F1', 1: 'F2', 2: 'F3', 3: 'F3', 4: 'F4', 5: 'F5', 6: 'F5', 7: 'F5', 8: 'F6', 9: 'F6', 10: 'F6', 11: 'F6', 12: 'F7', 13: 'F7', 14: 'F1', 15: 'F1', 16: 'F8', 17: 'F8', 18: 'F8', 19: 'F8', 20: 'F8', 21: 'F9', 22: 'F9', 23: 'F9', 24: 'F10', 25: 'F10', 26: 'F11', 27: 'F12', 28: 'F12', 29: 'F13', 30: 'F13', 31: 'F14', 32: 'F14', 33: 'F15', 34: 'F15', 35: 'F16', 36: 'F16', 37: 'F16', 38: 'F17', 39: 'F17', 40: 'F18', 41: 'F3', 42: 'F3', 43: 'F3', 44: 'F3', 45: 'F19', 46: 'F20', 47: 'F21', 48: 'F22', 49: 'F23', 50: 'F23', 51: 'F24', 52: 'F25', 53: 'F26', 54: 'F26', 55: 'F27', 56: 'F27', 57: 'F27', 58: 'F28', 59: 'F29', 60: 'F30', 61: 'F30', 62: 'F31', 63: 'F31', 64: 'F32', 65: 'F32', 66: 'F33', 67: 'F34', 68: 'F34', 69: 'F35', 70: 'F35', 71: 'F36', 72: 'F37', 73: 'F37', 74: 'F38', 75: 'F39'}, 'ID': {0: 'S1', 1: 'S2', 2: 'S3', 3: 'S4', 4: 'S5', 5: 'S6', 6: 'S6', 7: 'S7', 8: 'S8', 9: 'S9', 10: 'S10', 11: 'S11', 12: 'S12', 13: 'S13', 14: 'S1', 15: 'S1', 16: 'S14', 17: 'S15', 18: 'S16', 19: 'S17', 20: 'S17', 21: 'S18', 22: 'S18', 23: 'S19', 24: 'S20', 25: 'S21', 26: 'S22', 27: 'S23', 28: 'S23', 29: 'S24', 30: 'S25', 31: 'S26', 32: 'S27', 33: 'S28', 34: 'S28', 35: 'S29', 36: 'S29', 37: 'S29', 38: 'S30', 39: 'S30', 40: 'S31', 41: 'S32', 42: 'S32', 43: 'S3', 44: 'S3', 45: 'S33', 46: 'S34', 47: 'S35', 48: 'S36', 49: 'S37', 50: 'S38', 51: 'S39', 52: 'S40', 53: 'S41', 54: 'S41', 55: 'S42', 56: 'S43', 57: 'S44', 58: 'S45', 59: 'S46', 60: 'S47', 61: 'S48', 62: 'S49', 63: 'S49', 64: 'S50', 65: 'S50', 66: 'S51', 67: 'S52', 68: 'S52', 69: 'S53', 70: 'S53', 71: 'S54', 72: 'S55', 73: 'S55', 74: 'S56', 75: 'S57'}, 'DN': {0: 'D1', 1: 'D2', 2: 'D3', 3: 'D4', 4: 'D5', 5: 'D6', 6: 'D6', 7: 'D7', 8: 'D8', 9: 'D9', 10: 'D10', 11: 'D11', 12: 'D12', 13: 'D13', 14: 'D1', 15: 'D1', 16: 'D14', 17: 'D15', 18: 'D16', 19: 'D17', 20: 'D17', 21: 'D18', 22: 'D18', 23: 'D19', 24: 'D20', 25: 'D21', 26: 'D22', 27: 'D23', 28: 'D23', 29: 'D24', 30: 'D25', 31: 'D26', 32: 'D27', 33: 'D28', 34: 'D28', 35: 'D29', 36: 'D29', 37: 'D29', 38: 'D30', 39: 'D30', 40: 'D31', 41: 'D32', 42: 'D32', 43: 'D3', 44: 'D3', 45: 'D33', 46: 'D34', 47: 'D35', 48: 'D36', 49: 'D37', 50: 'D38', 51: 'D39', 52: 'D40', 53: 'D41', 54: 'D41', 55: 'D42', 56: 'D43', 57: 'D44', 58: 'D45', 59: 'D46', 60: 'D47', 61: 'D48', 62: 'D49', 63: 'D49', 64: 'D50', 65: 'D50', 66: 'D51', 67: 'D52', 68: 'D52', 69: 'D53', 70: 'D53', 71: 'D54', 72: 'D55', 73: 'D55', 74: 'D56', 75: 'D57'}, 'Group': {0: 'FC', 1: 'SCZ', 2: 'FC', 3: 'SCZ', 4: 'SCZ', 5: 'FC', 6: 'FC', 7: 'FC', 8: 'FC', 9: 'FC', 10: 'FC', 11: 'FC', 12: 'FC', 13: 'FC', 14: 'FC', 15: 'FC', 16: 'BPAD', 17: 'BPAD', 18: 'FC', 19: 'FC', 20: 'FC', 21: 'FC', 22: 'FC', 23: 'FC', 24: 'BPAD', 25: 'SCZ', 26: 'FC', 27: 'PC', 28: 'PC', 29: 'FC', 30: 'FC', 
31: 'FC', 32: 'FC', 33: 'FC', 34: 'FC', 35: 'FC', 36: 'FC', 37: 'FC', 38: 'FC', 39: 'FC', 40: 'FC', 41: 'FC', 42: 'FC', 43: 'FC', 44: 'FC', 45: 'FC', 46: 'FC', 47: 'FC', 48: 'FC', 49: 'FC', 50: 'FC', 51: 'FC', 52: 'FC', 53: 'FC', 54: 'FC', 55: 'FC', 56: 'FC', 57: 'SCZ', 58: 'FC', 59: 'FC', 60: 'FC', 61: 'SCZ', 62: 'PC', 63: 'PC', 64: 'PC', 65: 'PC', 66: 'PC', 67: 'PC', 68: 'PC', 69: 'PC', 70: 'PC', 71: 'PC', 72: 'PC', 73: 'PC', 74: 'PC', 75: 'PC'}, 'POS': {0: 'C1', 1: 'C2', 2: 'C3', 3: 'C3', 4: 'C4', 5: 'C5', 6: 'C6', 7: 'C7', 8: 'C5', 9: 'C5', 10: 'C5', 11: 'C5', 12: 'C5', 13: 'C5', 14: 'C8', 15: 'C7', 16: 'C9', 17: 'C7', 18: 'C5', 19: 'C5', 20: 'C6', 21: 'C5', 22: 'C7', 23: 'C5', 24: 'C7', 25: 'C7', 26: 'C5', 27: 'C5', 28: 'C10', 29: 'C11', 30: 'C5', 31: 'C5', 32: 'C5', 33: 'C5', 34: 'C7', 35: 'C12', 36: 'C5', 37: 'C7', 38: 'C5', 39: 'C7', 40: 'C5', 41: 'C13', 42: 'C5', 43: 'C13', 44: 'C5', 45: 'C5', 46: 'C5', 47: 'C5', 48: 'C5', 49: 'C5', 50: 'C5', 51: 'C5', 52: 'C5', 53: 'C5', 54: 'C14', 55: 'C5', 56: 'C5', 57: 'C5', 58: 'C5', 59: 'C5', 60: 'C5', 61: 'C5', 62: 'C5', 63: 'C7', 64: 'C5', 65: 'C7', 66: 'C5', 67: 'C5', 68: 'C7', 69: 'C5', 70: 'C7', 71: 'C5', 72: 'C5', 73: 'C7', 74: 'C5', 75: 'C15'}, 'VC': {0: 'MI', 1: 'MI', 2: 'IN', 3: 'IN', 4: 'MI', 5: 'MI', 6: 'LOF', 7: 'MI', 8: 'MI', 9: 'MI', 10: 'MI', 11: 'MI', 12: 'MI', 13: 'MI', 14: 'MI', 15: 'MI', 16: 'MI', 17: 'MI', 18: 'MI', 19: 'MI', 20: 'LOF', 21: 'MI', 22: 'MI', 23: 'MI', 24: 'MI', 25: 'MI', 26: 'MI', 27: 'MI', 28: 'MI', 29: 'MI', 30: 'MI', 31: 'MI', 32: 'MI', 33: 'MI', 34: 'MI', 35: 'MI', 36: 'MI', 37: 'MI', 38: 'MI', 39: 'MI', 40: 'MI', 41: 'MI', 42: 'MI', 43: 'MI', 44: 'MI', 45: 'MI', 46: 'MI', 47: 'MI', 48: 'MI', 49: 'MI', 50: 'MI', 51: 'MI', 52: 'MI', 53: 'MI', 54: 'MI', 55: 'MI', 56: 'MI', 57: 'MI', 58: 'MI', 59: 'MI', 60: 'MI', 61: 'MI', 62: 'MI', 63: 'MI', 64: 'MI', 65: 'MI', 66: 'MI', 67: 'MI', 68: 'MI', 69: 'MI', 70: 'MI', 71: 'MI', 72: 'MI', 73: 'MI', 74: 'MI', 75: 'MI'}})
I wanted to expand and shrink columns simultaneously such that the output looks as follows:
Type POS FN VC ID DN FC SCZ BPAD PC
A1 C1 F1 MI S1 D1 1 0 0 0
A2 C2 F2 MI S2 D2 0 1 0 0
C3 F3 IN S3|S4 D3|D4 1 1 0 0
C4 F4 MI S5 D5 0 1 0 0
A3 C5 F5 MI S6 D6 1 0 0 0
F6 MI S8|S9|S10|S11 D8|D9|D10|D11 3 0 0 1
F7 MI S12|S13 D11|D12 2 0 0 0
C6 F5 LOF S6 D6 1 0 0 0
C7 F1 MI S1 D1 1 0 0 0
F5 MI S7 D7 1 0 0 0
F8 MI S15 D15 0 0 1 0
C8 F1 MI S1 D1 1 0 0 0
F8 MI S14 D14 0 0 1 0
I tried the following code to shrink and expand the data
df1 = df.groupby(['Type', 'FN']).agg(lambda x: '|'.join(x.unique()))[['POS', 'VC', 'ID', 'DN']]
df2 = pd.get_dummies(df.set_index(['Type', 'FN'])['Group']).sum(level=[0, 1])
pd.concat([df1, df2], axis=1)
But in the output, POS also got split, whereas I wanted to expand that.
A:
I think you need aggregate per 3 columns:
df1 = df.groupby(['Type','POS', 'FN'])[['VC','ID','DN']].agg(lambda x: '|'.join(x.unique()))
df2 = pd.get_dummies(df.set_index(['Type','POS', 'FN'])['Group']).sum(level=[0, 1, 2])
df = pd.concat([df1, df2], axis=1)
print (df.head(20))
VC ID DN BPAD FC PC SCZ
Type POS FN
A1 C1 F1 MI S1 D1 0 1 0 0
A2 C2 F2 MI S2 D2 0 0 0 1
C3 F3 IN S3|S4 D3|D4 0 1 0 1
C4 F4 MI S5 D5 0 0 0 1
A3 C10 F12 MI S23 D23 0 0 1 0
C11 F13 MI S24 D24 0 1 0 0
C12 F16 MI S29 D29 0 1 0 0
C13 F3 MI S32|S3 D32|D3 0 2 0 0
C14 F26 MI S41 D41 0 1 0 0
C15 F39 MI S57 D57 0 0 1 0
C5 F11 MI S22 D22 0 1 0 0
F12 MI S23 D23 0 0 1 0
F13 MI S25 D25 0 1 0 0
F14 MI S26|S27 D26|D27 0 2 0 0
F15 MI S28 D28 0 1 0 0
F16 MI S29 D29 0 1 0 0
F17 MI S30 D30 0 1 0 0
F18 MI S31 D31 0 1 0 0
F19 MI S33 D33 0 1 0 0
F20 MI S34 D34 0 1 0 0
| How to simultaneously expand and shrink the dataframe as per some conditions? | I have a df as follows:
df = pd.DataFrame.from_dict({'Type': {0: 'A1', 1: 'A2', 2: 'A2', 3: 'A2', 4: 'A2', 5: 'A3', 6: 'A3', 7: 'A3', 8: 'A3', 9: 'A3', 10: 'A3', 11: 'A3', 12: 'A3', 13: 'A3', 14: 'A3', 15: 'A3', 16: 'A3', 17: 'A3', 18: 'A3', 19: 'A3', 20: 'A3', 21: 'A3', 22: 'A3', 23: 'A3', 24: 'A3', 25: 'A3', 26: 'A3', 27: 'A3', 28: 'A3', 29: 'A3', 30: 'A3', 31: 'A3', 32: 'A3', 33: 'A3', 34: 'A3', 35: 'A3', 36: 'A3', 37: 'A3', 38: 'A3', 39: 'A3', 40: 'A3', 41: 'A3', 42: 'A3', 43: 'A3', 44: 'A3', 45: 'A3', 46: 'A3', 47: 'A3', 48: 'A3', 49: 'A3', 50: 'A3', 51: 'A3', 52: 'A3', 53: 'A3', 54: 'A3', 55: 'A3', 56: 'A3', 57: 'A3', 58: 'A3', 59: 'A3', 60: 'A3', 61: 'A3', 62: 'A3', 63: 'A3', 64: 'A3', 65: 'A3', 66: 'A3', 67: 'A3', 68: 'A3', 69: 'A3', 70: 'A3', 71: 'A3', 72: 'A3', 73: 'A3', 74: 'A3', 75: 'A3'}, 'FN': {0: 'F1', 1: 'F2', 2: 'F3', 3: 'F3', 4: 'F4', 5: 'F5', 6: 'F5', 7: 'F5', 8: 'F6', 9: 'F6', 10: 'F6', 11: 'F6', 12: 'F7', 13: 'F7', 14: 'F1', 15: 'F1', 16: 'F8', 17: 'F8', 18: 'F8', 19: 'F8', 20: 'F8', 21: 'F9', 22: 'F9', 23: 'F9', 24: 'F10', 25: 'F10', 26: 'F11', 27: 'F12', 28: 'F12', 29: 'F13', 30: 'F13', 31: 'F14', 32: 'F14', 33: 'F15', 34: 'F15', 35: 'F16', 36: 'F16', 37: 'F16', 38: 'F17', 39: 'F17', 40: 'F18', 41: 'F3', 42: 'F3', 43: 'F3', 44: 'F3', 45: 'F19', 46: 'F20', 47: 'F21', 48: 'F22', 49: 'F23', 50: 'F23', 51: 'F24', 52: 'F25', 53: 'F26', 54: 'F26', 55: 'F27', 56: 'F27', 57: 'F27', 58: 'F28', 59: 'F29', 60: 'F30', 61: 'F30', 62: 'F31', 63: 'F31', 64: 'F32', 65: 'F32', 66: 'F33', 67: 'F34', 68: 'F34', 69: 'F35', 70: 'F35', 71: 'F36', 72: 'F37', 73: 'F37', 74: 'F38', 75: 'F39'}, 'ID': {0: 'S1', 1: 'S2', 2: 'S3', 3: 'S4', 4: 'S5', 5: 'S6', 6: 'S6', 7: 'S7', 8: 'S8', 9: 'S9', 10: 'S10', 11: 'S11', 12: 'S12', 13: 'S13', 14: 'S1', 15: 'S1', 16: 'S14', 17: 'S15', 18: 'S16', 19: 'S17', 20: 'S17', 21: 'S18', 22: 'S18', 23: 'S19', 24: 'S20', 25: 'S21', 26: 'S22', 27: 'S23', 28: 'S23', 29: 'S24', 30: 'S25', 31: 'S26', 32: 'S27', 33: 'S28', 34: 'S28', 35: 'S29', 36: 'S29', 37: 'S29', 38: 'S30', 39: 'S30', 40: 'S31', 41: 'S32', 42: 'S32', 43: 'S3', 44: 'S3', 45: 'S33', 46: 'S34', 47: 'S35', 48: 'S36', 49: 'S37', 50: 'S38', 51: 'S39', 52: 'S40', 53: 'S41', 54: 'S41', 55: 'S42', 56: 'S43', 57: 'S44', 58: 'S45', 59: 'S46', 60: 'S47', 61: 'S48', 62: 'S49', 63: 'S49', 64: 'S50', 65: 'S50', 66: 'S51', 67: 'S52', 68: 'S52', 69: 'S53', 70: 'S53', 71: 'S54', 72: 'S55', 73: 'S55', 74: 'S56', 75: 'S57'}, 'DN': {0: 'D1', 1: 'D2', 2: 'D3', 3: 'D4', 4: 'D5', 5: 'D6', 6: 'D6', 7: 'D7', 8: 'D8', 9: 'D9', 10: 'D10', 11: 'D11', 12: 'D12', 13: 'D13', 14: 'D1', 15: 'D1', 16: 'D14', 17: 'D15', 18: 'D16', 19: 'D17', 20: 'D17', 21: 'D18', 22: 'D18', 23: 'D19', 24: 'D20', 25: 'D21', 26: 'D22', 27: 'D23', 28: 'D23', 29: 'D24', 30: 'D25', 31: 'D26', 32: 'D27', 33: 'D28', 34: 'D28', 35: 'D29', 36: 'D29', 37: 'D29', 38: 'D30', 39: 'D30', 40: 'D31', 41: 'D32', 42: 'D32', 43: 'D3', 44: 'D3', 45: 'D33', 46: 'D34', 47: 'D35', 48: 'D36', 49: 'D37', 50: 'D38', 51: 'D39', 52: 'D40', 53: 'D41', 54: 'D41', 55: 'D42', 56: 'D43', 57: 'D44', 58: 'D45', 59: 'D46', 60: 'D47', 61: 'D48', 62: 'D49', 63: 'D49', 64: 'D50', 65: 'D50', 66: 'D51', 67: 'D52', 68: 'D52', 69: 'D53', 70: 'D53', 71: 'D54', 72: 'D55', 73: 'D55', 74: 'D56', 75: 'D57'}, 'Group': {0: 'FC', 1: 'SCZ', 2: 'FC', 3: 'SCZ', 4: 'SCZ', 5: 'FC', 6: 'FC', 7: 'FC', 8: 'FC', 9: 'FC', 10: 'FC', 11: 'FC', 12: 'FC', 13: 'FC', 14: 'FC', 15: 'FC', 16: 'BPAD', 17: 'BPAD', 18: 'FC', 19: 'FC', 20: 'FC', 21: 'FC', 22: 'FC', 23: 'FC', 24: 'BPAD', 25: 'SCZ', 26: 'FC', 27: 'PC', 28: 'PC', 29: 'FC', 30: 'FC', 
31: 'FC', 32: 'FC', 33: 'FC', 34: 'FC', 35: 'FC', 36: 'FC', 37: 'FC', 38: 'FC', 39: 'FC', 40: 'FC', 41: 'FC', 42: 'FC', 43: 'FC', 44: 'FC', 45: 'FC', 46: 'FC', 47: 'FC', 48: 'FC', 49: 'FC', 50: 'FC', 51: 'FC', 52: 'FC', 53: 'FC', 54: 'FC', 55: 'FC', 56: 'FC', 57: 'SCZ', 58: 'FC', 59: 'FC', 60: 'FC', 61: 'SCZ', 62: 'PC', 63: 'PC', 64: 'PC', 65: 'PC', 66: 'PC', 67: 'PC', 68: 'PC', 69: 'PC', 70: 'PC', 71: 'PC', 72: 'PC', 73: 'PC', 74: 'PC', 75: 'PC'}, 'POS': {0: 'C1', 1: 'C2', 2: 'C3', 3: 'C3', 4: 'C4', 5: 'C5', 6: 'C6', 7: 'C7', 8: 'C5', 9: 'C5', 10: 'C5', 11: 'C5', 12: 'C5', 13: 'C5', 14: 'C8', 15: 'C7', 16: 'C9', 17: 'C7', 18: 'C5', 19: 'C5', 20: 'C6', 21: 'C5', 22: 'C7', 23: 'C5', 24: 'C7', 25: 'C7', 26: 'C5', 27: 'C5', 28: 'C10', 29: 'C11', 30: 'C5', 31: 'C5', 32: 'C5', 33: 'C5', 34: 'C7', 35: 'C12', 36: 'C5', 37: 'C7', 38: 'C5', 39: 'C7', 40: 'C5', 41: 'C13', 42: 'C5', 43: 'C13', 44: 'C5', 45: 'C5', 46: 'C5', 47: 'C5', 48: 'C5', 49: 'C5', 50: 'C5', 51: 'C5', 52: 'C5', 53: 'C5', 54: 'C14', 55: 'C5', 56: 'C5', 57: 'C5', 58: 'C5', 59: 'C5', 60: 'C5', 61: 'C5', 62: 'C5', 63: 'C7', 64: 'C5', 65: 'C7', 66: 'C5', 67: 'C5', 68: 'C7', 69: 'C5', 70: 'C7', 71: 'C5', 72: 'C5', 73: 'C7', 74: 'C5', 75: 'C15'}, 'VC': {0: 'MI', 1: 'MI', 2: 'IN', 3: 'IN', 4: 'MI', 5: 'MI', 6: 'LOF', 7: 'MI', 8: 'MI', 9: 'MI', 10: 'MI', 11: 'MI', 12: 'MI', 13: 'MI', 14: 'MI', 15: 'MI', 16: 'MI', 17: 'MI', 18: 'MI', 19: 'MI', 20: 'LOF', 21: 'MI', 22: 'MI', 23: 'MI', 24: 'MI', 25: 'MI', 26: 'MI', 27: 'MI', 28: 'MI', 29: 'MI', 30: 'MI', 31: 'MI', 32: 'MI', 33: 'MI', 34: 'MI', 35: 'MI', 36: 'MI', 37: 'MI', 38: 'MI', 39: 'MI', 40: 'MI', 41: 'MI', 42: 'MI', 43: 'MI', 44: 'MI', 45: 'MI', 46: 'MI', 47: 'MI', 48: 'MI', 49: 'MI', 50: 'MI', 51: 'MI', 52: 'MI', 53: 'MI', 54: 'MI', 55: 'MI', 56: 'MI', 57: 'MI', 58: 'MI', 59: 'MI', 60: 'MI', 61: 'MI', 62: 'MI', 63: 'MI', 64: 'MI', 65: 'MI', 66: 'MI', 67: 'MI', 68: 'MI', 69: 'MI', 70: 'MI', 71: 'MI', 72: 'MI', 73: 'MI', 74: 'MI', 75: 'MI'}})
I wanted to expand and shrink columns simultaneously such that the output looks as follows:
Type POS FN VC ID DN FC SCZ BPAD PC
A1 C1 F1 MI S1 D1 1 0 0 0
A2 C2 F2 MI S2 D2 0 1 0 0
C3 F3 IN S3|S4 D3|D4 1 1 0 0
C4 F4 MI S5 D5 0 1 0 0
A3 C5 F5 MI S6 D6 1 0 0 0
F6 MI S8|S9|S10|S11 D8|D9|D10|D11 3 0 0 1
F7 MI S12|S13 D11|D12 2 0 0 0
C6 F5 LOF S6 D6 1 0 0 0
C7 F1 MI S1 D1 1 0 0 0
F5 MI S7 D7 1 0 0 0
F8 MI S15 D15 0 0 1 0
C8 F1 MI S1 D1 1 0 0 0
F8 MI S14 D14 0 0 1 0
I tried the following code to shrink and expand the data
df1 = df.groupby(['Type', 'FN']).agg(lambda x: '|'.join(x.unique()))[['POS', 'VC', 'ID', 'DN']]
df2 = pd.get_dummies(df.set_index(['Type', 'FN'])['Group']).sum(level=[0, 1])
pd.concat([df1, df2], axis=1)
But in the output, POS also got split, whereas I wanted to expand that.
| [
"I think you need aggregate per 3 columns:\ndf1 = df.groupby(['Type','POS', 'FN'])[['VC','ID','DN']].agg(lambda x: '|'.join(x.unique()))\n\ndf2 = pd.get_dummies(df.set_index(['Type','POS', 'FN'])['Group']).sum(level=[0, 1, 2])\ndf = pd.concat([df1, df2], axis=1)\nprint (df.head(20))\n VC ID DN BPAD FC PC SCZ\nType POS FN \nA1 C1 F1 MI S1 D1 0 1 0 0\nA2 C2 F2 MI S2 D2 0 0 0 1\n C3 F3 IN S3|S4 D3|D4 0 1 0 1\n C4 F4 MI S5 D5 0 0 0 1\nA3 C10 F12 MI S23 D23 0 0 1 0\n C11 F13 MI S24 D24 0 1 0 0\n C12 F16 MI S29 D29 0 1 0 0\n C13 F3 MI S32|S3 D32|D3 0 2 0 0\n C14 F26 MI S41 D41 0 1 0 0\n C15 F39 MI S57 D57 0 0 1 0\n C5 F11 MI S22 D22 0 1 0 0\n F12 MI S23 D23 0 0 1 0\n F13 MI S25 D25 0 1 0 0\n F14 MI S26|S27 D26|D27 0 2 0 0\n F15 MI S28 D28 0 1 0 0\n F16 MI S29 D29 0 1 0 0\n F17 MI S30 D30 0 1 0 0\n F18 MI S31 D31 0 1 0 0\n F19 MI S33 D33 0 1 0 0\n F20 MI S34 D34 0 1 0 0\n\n"
] | [
1
] | [] | [] | [
"dataframe",
"group_by",
"pandas",
"python"
] | stackoverflow_0074651970_dataframe_group_by_pandas_python.txt |
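Note that Series.sum(level=...) is deprecated in recent pandas versions; an equivalent sketch of the same aggregation using groupby on the index levels (df here is the frame constructed in the question above):
import pandas as pd

# df is the frame built with pd.DataFrame.from_dict in the question
df1 = df.groupby(['Type', 'POS', 'FN'])[['VC', 'ID', 'DN']].agg(lambda x: '|'.join(x.unique()))
df2 = (pd.get_dummies(df.set_index(['Type', 'POS', 'FN'])['Group'])
         .groupby(level=[0, 1, 2]).sum())
out = pd.concat([df1, df2], axis=1)
print(out.head())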
Q:
Is there any way to make a program that runs again on the user inputs
I want to make a program that runs again or stops depending on what the user wants. How do I do it?
I tried while True and a while loop, but nothing seems to work. Am I doing something wrong?
Pictures of my attempt: https://imgur.com/a/vqupw3z
https://imgur.com/a/zq0HgKL
A:
It sounds like you are looking for a way to create a program that can be controlled by the user at runtime. One way to do this is to use a loop that continues until the user specifies that they want to stop the program.
One common way to do this is to use a while loop that repeats until the user enters a specific command to exit the program. For example, you could use a while loop that repeats indefinitely, and inside the loop, you could check if the user has entered the command to exit the program. If they have, you can use the break statement to exit the loop and end the program. Here is an example of how this might look:
while True:
# do some work here
# check if the user wants to exit the program
user_input = input("Enter 'exit' to exit the program: ")
if user_input == "exit":
break
| Is there any way to make a program that runs again on the user inputs | I want to make a program that runs again or stops depending on what the user wants. How do I do it?
I tried while True and a while loop, but nothing seems to work. Am I doing something wrong?
Pictures of my attempt: https://imgur.com/a/vqupw3z
https://imgur.com/a/zq0HgKL
| [
"It sounds like you are looking for a way to create a program that can be controlled by the user at runtime. One way to do this is to use a loop that continues until the user specifies that they want to stop the program.\nOne common way to do this is to use a while loop that repeats until the user enters a specific command to exit the program. For example, you could use a while loop that repeats indefinitely, and inside the loop, you could check if the user has entered the command to exit the program. If they have, you can use the break statement to exit the loop and end the program. Here is an example of how this might look:\nwhile True:\n # do some work here\n \n # check if the user wants to exit the program\n user_input = input(\"Enter 'exit' to exit the program: \")\n if user_input == \"exit\":\n break\n\n"
] | [
0
] | [] | [] | [
"python"
] | stackoverflow_0074651986_python.txt |
Q:
python Ranking on rows having same value is giving random ranking not as ascending = False
Here is the df. I want to rank on value within group "Id", ranking within class.
df['Rank']=df.groupby(["Id"])[' value'].rank(ascending=0)
Sample df (image)
Expected Result (image)
Result I get from the above code (image)
The above code works well if the values are unique.
Example:
Example df (image)
Result (image)
A:
IIUC, use a dense method on pandas.Series.rank with pandas.Series.astype :
df['Rank']= df.groupby('ID')['Value'].rank(ascending=False, method='dense').astype(int)
# Output :
print(df)
ID Class Value Rank
0 US A 10 1
1 US B 10 1
2 US C 2 2
3 US D 2 2
Input used :
ID Class Value
0 US A 10
1 US B 10
2 US C 2
3 US D 2
| python Ranking on rows having same value is giving random ranking not as ascending = False | Here is the df. I want to rank on value within group "Id", ranking within class.
df['Rank']=df.groupby(["Id"])[' value'].rank(ascending=0)
Sample df (image)
Expected Result (image)
Result I get from the above code (image)
The above code works well if the values are unique.
Example:
Example df (image)
Result (image)
| [
"IIUC, use a dense method on pandas.Series.rank with pandas.Series.astype :\ndf['Rank']= df.groupby('ID')['Value'].rank(ascending=False, method='dense').astype(int)\n\n# Output :\nprint(df)\n\n ID Class Value Rank\n0 US A 10 1\n1 US B 10 1\n2 US C 2 2\n3 US D 2 2\n\nInput used :\n ID Class Value\n0 US A 10\n1 US B 10\n2 US C 2\n3 US D 2\n\n"
] | [
0
] | [] | [] | [
"duplicates",
"python",
"rank",
"ranking_functions",
"unique_values"
] | stackoverflow_0074651668_duplicates_python_rank_ranking_functions_unique_values.txt |
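For completeness, a small sketch of how the different rank "method" options treat ties; the default method='average' assigns each tied row the mean of the tied positions, which is what can look arbitrary when several rows share a value:
import pandas as pd

df = pd.DataFrame({"ID": ["US"] * 4, "Class": list("ABCD"), "Value": [10, 10, 2, 2]})
g = df.groupby("ID")["Value"]
for method in ("average", "min", "first", "dense"):
    df[method] = g.rank(ascending=False, method=method)
print(df)
# average: 1.5, 1.5, 3.5, 3.5   min: 1, 1, 3, 3   first: 1, 2, 3, 4   dense: 1, 1, 2, 2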
Q:
How to print my calculations for even and odd integers?
The program instructions follow: Your program should calculate how many values in a list of randomly generated integers are odd and how many are even with the following requirements:
Get the number of values to be generated along with the range of values from the user. After calculating the total number of odd and even values, display the results and allow the user to continue to generate and count new sets of values until they choose to exit.
Example program run:
Enter number of values needed: 100
Enter high end of value range from 1 to: 50
Odd values: 48.7%
Even values: 51.3%
Do you want to generate a new set of values? (Y/N) Y
Enter number of values needed: 20
Enter high end of value range from 1 to: 100
Odd values: 40.0%
Even values: 60.0%
Do you want to generate a new set of values? (Y/N) N
I am able to get it to print the first two prompts, but then it seems that it is infinitely loading.
import random
#if play is True when game starts
play = True
while play:
#Prompt user with Enter number of values needed:
num_values = int(input())
print("Enter number of values needed:", num_values)
# get user input of how many values to generate for i in range(0, num_values)
for i in range(0,num_values):
#Prompt user with Enter high end of value range from 1 to:
high_end = int(input())
print("Enter high end of value range from 1 to:", high_end)
#Get range of values from 1 to user_input list1 = random.randrange(1,user_input)
list1 = random.randrange(1, high_end)
def even():
even_count=0
if (list1 % 2) == 0:#finds odd numbers
even_count += 1 #keeps count of even numbers
return even_count
def odd():
odd_count=0
if (list1 % 2) != 0:#finds odd numbers
odd_count += 1 #keeps count of even numbers
return odd_count
even_total = even_count / num_values
odd_total = odd_count / num_values
    print("Odd values:", odd_total)
print("Even values:", even_total)
letter = str(input())
print("Do you want to generate a new set of values? (Y/N)", play)
if letter != 'Y':
play = False
This is my current code, and I am required to have at least two functions. I am unsure as to why my even_values/odd_values are not processing correctly.
A:
The range consists of two values; it doesn't have to start with 1.
The task is to make a list. You create a list1 variable, and inside the loop you ask for a range for each number.
You don't call the functions you wrote.
import random
play = True
while play:
print("Enter number of values needed: ", end = "")
num_values = int(input())
print("Enter start of value range from 1 to: ", end = "")
start = int(input())
print("Enter end of value range from 1 to: ", end = "")
end = int(input())
list1 = []
for i in range(0,num_values):
list1.append(random.randint(start, end))
print(list1)
def even():
even_count = 0
for i in list1:
if i % 2 == 0:
even_count += 1
return even_count
def odd():
odd_count = 0
for i in list1:
if i % 2 != 0:
odd_count += 1
return odd_count
even_total = (even() / num_values) * 100
odd_total = (odd() / num_values) * 100
print(f"Odd values:{odd_total}%")
print(f"Even values:{even_total}%")
print("Do you want to generate a new set of values? (Y/N)")
letter = str(input())
if letter != 'Y':
play = False
| How to print my calculations for even and odd integers? | The program instructions follow: Your program should calculate how many values in a list of randomly generated integers are odd and how many are even with the following requirements:
Get the number of values to be generated along with the range of values from the user. After calculating the total number of odd and even values, display the results and allow the user to continue to generate and count new sets of values until they choose to exit.
Example program run:
Enter number of values needed: 100
Enter high end of value range from 1 to: 50
Odd values: 48.7%
Even values: 51.3%
Do you want to generate a new set of values? (Y/N) Y
Enter number of values needed: 20
Enter high end of value range from 1 to: 100
Odd values: 40.0%
Even values: 60.0%
Do you want to generate a new set of values? (Y/N) N
I am able to get it to print the first two prompts, but then it seems that it is infinitely loading.
import random
#if play is True when game starts
play = True
while play:
#Prompt user with Enter number of values needed:
num_values = int(input())
print("Enter number of values needed:", num_values)
# get user input of how many values to generate for i in range(0, num_values)
for i in range(0,num_values):
#Prompt user with Enter high end of value range from 1 to:
high_end = int(input())
print("Enter high end of value range from 1 to:", high_end)
#Get range of values from 1 to user_input list1 = random.randrange(1,user_input)
list1 = random.randrange(1, high_end)
def even():
even_count=0
if (list1 % 2) == 0:#finds odd numbers
even_count += 1 #keeps count of even numbers
return even_count
def odd():
odd_count=0
if (list1 % 2) != 0:#finds odd numbers
odd_count += 1 #keeps count of even numbers
return odd_count
even_total = even_count / num_values
odd_total = odd_count / num_values
    print("Odd values:", odd_total)
print("Even values:", even_total)
letter = str(input())
print("Do you want to generate a new set of values? (Y/N)", play)
if letter != 'Y':
play = False
This is my current code, and I am required to have at least two functions. I am unsure as to why my even_values/odd_values are not processing correctly.
| [
"\nThe range consists of two values, it doesn't have to start with 1.\n\nThe task is to make a list. You create a list1 variable and inside the loop ask for a range for each number\n\nYou don't call the functions you wrote\n import random\n play = True\n while play:\n print(\"Enter number of values needed: \", end = \"\")\n num_values = int(input())\n print(\"Enter start of value range from 1 to: \", end = \"\")\n start = int(input())\n print(\"Enter end of value range from 1 to: \", end = \"\")\n end = int(input())\n list1 = []\n for i in range(0,num_values):\n list1.append(random.randint(start, end))\n print(list1)\n\n def even():\n even_count = 0\n for i in list1:\n if i % 2 == 0:\n even_count += 1\n return even_count\n\n def odd():\n odd_count = 0\n for i in list1:\n if i % 2 != 0:\n odd_count += 1\n return odd_count\n\n even_total = (even() / num_values) * 100\n odd_total = (odd() / num_values) * 100\n print(f\"Odd values:{odd_total}%\")\n print(f\"Even values:{even_total}%\")\n\n\n print(\"Do you want to generate a new set of values? (Y/N)\")\n letter = str(input())\n if letter != 'Y':\n play = False\n\n\n\n"
] | [
0
] | [] | [] | [
"python"
] | stackoverflow_0074651528_python.txt |
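For reference, once the list of values exists, the counting itself can be written much more compactly (a sketch, independent of the assignment's prompt/echo structure):
import random

values = [random.randint(1, 50) for _ in range(100)]
even_count = sum(1 for v in values if v % 2 == 0)
odd_count = len(values) - even_count
print(f"Odd values: {odd_count / len(values):.1%}")
print(f"Even values: {even_count / len(values):.1%}")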
Q:
Copying a huge file using Python scripts
I am using the Python function below to make a copy of a file, which I am processing as part of data ingestion in Azure Data Factory pipelines. This works for small files, but fails to process huge files without returning any errors. On calling this function for a 2.2 GB file, it stops the execution after writing 107 KB of data without throwing any exceptions. Can anyone point out what could be the issue here?
with open(Temp_File_Name,encoding='ISO-8859-1') as a, open(Load_File_Name, 'w') as b:
for line in a:
if ' blah blah ' in line:
var=line[34:42]
data = line[0:120] + var + '\n'
b.write(data)
The input and output locations I have used here are files in Azure Blob storage. I am following this approach, as I need to read each line and perform some operation after reading it.
A:
You can use os and rsync.
The --no-whole-file (or --no-W) parameter uses block-level syncing instead of whole-file syncing.
--progress is used for getting the logs of the file transfer.
You can also redirect the output to file_name.log to write the logs into that file instead of the terminal; that file will be saved in the directory where you run the program.
For example:
os.system("rsync -r --progress --no-W --stats 'Source path' 'Destination Path' > file_name.log")
import os
os.system("rsync -r --progress --no-W --stats 'Source path' 'Destination Path'")
A:
Your snippet wastes compute/time loading the file into the ram and writing it back to disk. Use the underlying OS to copy the file using pythons pathlib and shutil.
Have a look at this stackoverflow post. Here's the top answer:
import pathlib
import shutil
my_file = pathlib.Path('/etc/hosts')
to_file = pathlib.Path('/tmp/foo')
shutil.copy(str(my_file), str(to_file)) # For older Python.
shutil.copy(my_file, to_file) # For newer Python.
| Copying a huge file using Python scripts | I am using the Python function below to make a copy of a file, which I am processing as part of data ingestion in Azure Data Factory pipelines. This works for small files, but fails to process huge files without returning any errors. On calling this function for a 2.2 GB file, it stops the execution after writing 107 KB of data without throwing any exceptions. Can anyone point out what could be the issue here?
with open(Temp_File_Name,encoding='ISO-8859-1') as a, open(Load_File_Name, 'w') as b:
for line in a:
if ' blah blah ' in line:
var=line[34:42]
data = line[0:120] + var + '\n'
b.write(data)
The input and output locations I have used here are files in Azure Blob storage. I am following this approach, as I need to read each line and perform some operation after reading it.
| [
"You can use os and rsync\n--no-whole-file or --no-W parameters use the block-level sync instead of the file level syncing.\n--progress is used for getting the logs of file transfer\nYou can also use file_name.log for adding logs into that file instead of on terminal and that file will be saved at the current location where you are running the program\nfor e.g,\nos.system(\"rsync -r --progress --no-W --stats 'Source path' 'Destination Path' > file_name.log\")\nimport os\nos.system(\"rsync -r --progress --no-W --stats 'Source path' 'Destination Path'\")\n\n",
"Your snippet wastes compute/time loading the file into the ram and writing it back to disk. Use the underlying OS to copy the file using pythons pathlib and shutil.\nHave a look at this stackoverflow post. Here's the top answer:\nimport pathlib\nimport shutil\n\nmy_file = pathlib.Path('/etc/hosts')\nto_file = pathlib.Path('/tmp/foo')\n\nshutil.copy(str(my_file), str(to_file)) # For older Python.\nshutil.copy(my_file, to_file) # For newer Python.\n\n"
] | [
2,
0
] | [] | [] | [
"azure_blob_storage",
"python"
] | stackoverflow_0066675216_azure_blob_storage_python.txt |
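If the goal were only a byte-for-byte copy (no per-line filtering), shutil.copyfileobj streams the data in fixed-size chunks without holding the whole file in memory; a minimal sketch with placeholder paths:
import shutil

src_path = "Temp_File_Name"   # placeholder paths
dst_path = "Load_File_Name"

with open(src_path, "rb") as src, open(dst_path, "wb") as dst:
    shutil.copyfileobj(src, dst, length=4 * 1024 * 1024)   # copy in 4 MiB chunks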
Q:
When I scroll, the table sticks to its position and only the text gets scrolled. I want it to scroll with the text
When I scroll, the table sticks to its position and only the text gets scrolled. It should scroll with the text.
The table was created with Entry widgets. The code does not throw any error, but the scrolling is not working properly.
from tkinter import *
import tkinter as tk
from tkinter import scrolledtext
app = tk.Tk(screenName="main")
#Font and orientation setup
app.geometry('600x200')
txtbox = scrolledtext.ScrolledText(app, width=50, height=10)
txtbox.grid(row=0, column=0, sticky=E+W+N+S)
txtbox.insert(INSERT,".\n.\n.\n.\n.\n.\n.\n.\n.\n.\n Physical Properties.\n.\n.\n Physical Properties",)
fortable1 = Frame(txtbox, padx=5, pady=5,width=500,height=200 )
fortable1.place(x=0,y=5)
class Table:
def __init__(self, fortable1):
# code for creating table
self.e = Entry(fortable1, width=12, fg='blue', bg="#FFFFE0",
font=('Arial', 12, 'bold'), justify=CENTER)
self.e.grid(row=0, column=0)
self.e.insert(END, lst[0][0])
self.e = Entry(fortable1, width=12, bg="#FFFFE0",
font=('Arial', 12, 'bold'), justify=CENTER)
self.e.grid(row=0, column=1)
self.e.insert(END, lst[0][1])
self.e = Entry(fortable1, width=12, bg="#FFFFE0",
font=('Arial', 12, 'bold'), justify=CENTER)
self.e.grid(row=0, column=2)
self.e.insert(END, lst[0][2])
self.e = Entry(fortable1, width=12, bg="#FFFFE0",
font=('Arial', 12, 'bold'), justify=CENTER)
self.e.grid(row=0, column=3)
self.e.insert(END, lst[0][3])
for i in range(1, total_rows):
for j in range(total_columns):
self.e = Entry(fortable1, width=12, bg="#FFFFE0",
font=('Arial', 12, 'bold'), justify=CENTER)
self.e.grid(row=i, column=j)
self.e.insert(END, lst[i][j])
lst = [("Sr/N", 'Parameter', 'Value', "Remarks"),
(1, 'Grade Category', '1', '-'),
(2, 'Equivalant Grade', 'N/A', "No such grades \n in our record"),
(3, 'Liquidus', 'Tliq', '_'),
(4, 'Solidus', 'Tsol', '_')]
total_rows = len(lst)
total_columns = len(lst[0])
t = Table(fortable1)
app.mainloop()
A:
It is because the frame fortable1 which holds the table is not part of the content of txtbox since it is just put on top of txtbox using .place(). You need to use txtbox.window_create() to insert the frame into the text box instead.
Below is the updated code:
...
txtbox = scrolledtext.ScrolledText(app, width=50, height=10)
txtbox.grid(row=0, column=0, sticky=E+W+N+S)
fortable1 = Frame(txtbox, padx=5, pady=5,width=500,height=200 )
#fortable1.place(x=0,y=5)
# insert the frame using .window_create()
txtbox.window_create(INSERT, window=fortable1)
txtbox.insert(INSERT, "\n.\n.\n.\n.\n.\n.\n.\n.\n.\n.\n Physical Properties.\n.\n.\n Physical Properties")
...
| When I scroll, the table sticks to its position and only the text gets scrolled. I want it to scroll with the text | When I scroll, the table sticks to its position and only the text gets scrolled. It should scroll with the text.
The table was created with Entry widgets. The code does not throw any error, but the scrolling is not working properly.
from tkinter import *
import tkinter as tk
from tkinter import scrolledtext
app = tk.Tk(screenName="main")
#Font and orientation setup
app.geometry('600x200')
txtbox = scrolledtext.ScrolledText(app, width=50, height=10)
txtbox.grid(row=0, column=0, sticky=E+W+N+S)
txtbox.insert(INSERT,".\n.\n.\n.\n.\n.\n.\n.\n.\n.\n Physical Properties.\n.\n.\n Physical Properties",)
fortable1 = Frame(txtbox, padx=5, pady=5,width=500,height=200 )
fortable1.place(x=0,y=5)
class Table:
def __init__(self, fortable1):
# code for creating table
self.e = Entry(fortable1, width=12, fg='blue', bg="#FFFFE0",
font=('Arial', 12, 'bold'), justify=CENTER)
self.e.grid(row=0, column=0)
self.e.insert(END, lst[0][0])
self.e = Entry(fortable1, width=12, bg="#FFFFE0",
font=('Arial', 12, 'bold'), justify=CENTER)
self.e.grid(row=0, column=1)
self.e.insert(END, lst[0][1])
self.e = Entry(fortable1, width=12, bg="#FFFFE0",
font=('Arial', 12, 'bold'), justify=CENTER)
self.e.grid(row=0, column=2)
self.e.insert(END, lst[0][2])
self.e = Entry(fortable1, width=12, bg="#FFFFE0",
font=('Arial', 12, 'bold'), justify=CENTER)
self.e.grid(row=0, column=3)
self.e.insert(END, lst[0][3])
for i in range(1, total_rows):
for j in range(total_columns):
self.e = Entry(fortable1, width=12, bg="#FFFFE0",
font=('Arial', 12, 'bold'), justify=CENTER)
self.e.grid(row=i, column=j)
self.e.insert(END, lst[i][j])
lst = [("Sr/N", 'Parameter', 'Value', "Remarks"),
(1, 'Grade Category', '1', '-'),
(2, 'Equivalant Grade', 'N/A', "No such grades \n in our record"),
(3, 'Liquidus', 'Tliq', '_'),
(4, 'Solidus', 'Tsol', '_')]
total_rows = len(lst)
total_columns = len(lst[0])
t = Table(fortable1)
app.mainloop()
| [
"It is because the frame fortable1 which holds the table is not part of the content of txtbox since it is just put on top of txtbox using .place(). You need to use txtbox.window_create() to insert the frame into the text box instead.\nBelow is the updated code:\n...\ntxtbox = scrolledtext.ScrolledText(app, width=50, height=10)\ntxtbox.grid(row=0, column=0, sticky=E+W+N+S)\nfortable1 = Frame(txtbox, padx=5, pady=5,width=500,height=200 )\n#fortable1.place(x=0,y=5)\n# insert the frame using .window_create()\ntxtbox.window_create(INSERT, window=fortable1)\ntxtbox.insert(INSERT, \"\\n.\\n.\\n.\\n.\\n.\\n.\\n.\\n.\\n.\\n.\\n Physical Properties.\\n.\\n.\\n Physical Properties\")\n...\n\n"
] | [
1
] | [] | [] | [
"python",
"tkinter",
"tkinter_layout",
"tkinter_text"
] | stackoverflow_0074651927_python_tkinter_tkinter_layout_tkinter_text.txt |
Q:
How to annotate seaborn pairplots
I have a collection of binned data from which I generate a series of seaborn pairplots. Since all of the bins have the same labels, but not bin names, I need to annotate the pairplots with the bin name 'n' below so that I can later associate them with their bins.
import seaborn as sns
groups = data.groupby(pd.cut(data['Lat'], bins))
for n,g in groups:
p = sns.pairplot(data=g, hue="Label", palette="Set2",
diag_kind="kde", size=4, vars=labels)
I noted in the documentation that seaborn uses, or is built upon, matplotlib. I have been unable to figure out how to annotate the legend on the left, or provide a title above or below the paired plots. Can anyone provide examples of pointers to the documentation on how to add arbitrary text to those three areas of a plot?
A:
After following up on mwaskom's suggestion to use matplotlib.text() (thanks), I was able to get the following to work as expected:
p = sns.pairplot(data=g, hue="Label", palette="Set2",
diag_kind="kde", size=4, vars=labels)
#bottom labels
p.fig.text(0.33, -0.01, "Bin: %s"%(n), ha ='left', fontsize = 15)
p.fig.text(0.33, -0.04, "Num Points: %d"%(len(g)), ha ='left', fontsize = 15)
and other useful functionality:
# title on top center of subplot
p.fig.suptitle('this is the figure title', verticalalignment='top', fontsize=20)
# title above plot
p.fig.text(0.33, 1.02,'Above the plot', fontsize=20)
# left and right of plot
p.fig.text(0, 1,'Left the plot', fontsize=20, rotation=90)
p.fig.text(1.02, 1,'Right the plot', fontsize=20, rotation=270)
# an example of a multi-line footnote
p.fig.text(0.1, -0.08,
'Some multiline\n'
'footnote...',
fontsize=10)
A:
In addition, using the seaborn functionality, you can move the title up.
After your p=sns.pairplot line, add
p.fig.suptitle("here is your title", y=1.05)
Which will give you a title that is just above the actual plot area instead of overlapping the plot (depends on what you're plotting and how much you might need the space).
You can also use fontweight (bold, etc) in addition to the fontsize.
| How to annotate seaborn pairplots | I have a collection of binned data from which I generate a series of seaborn pairplots. Since all of the bins have the same labels, but not bin names, I need to annotate the pairplots with the bin name 'n' below so that I can later associate them with their bins.
import seaborn as sns
groups = data.groupby(pd.cut(data['Lat'], bins))
for n,g in groups:
p = sns.pairplot(data=g, hue="Label", palette="Set2",
diag_kind="kde", size=4, vars=labels)
I noted in the documentation that seaborn uses, or is built upon, matplotlib. I have been unable to figure out how to annotate the legend on the left, or provide a title above or below the paired plots. Can anyone provide examples of pointers to the documentation on how to add arbitrary text to those three areas of a plot?
| [
"After following up on mwaskom's suggestion to use matplotlib.text() (thanks), I was able to get the following to work as expected:\np = sns.pairplot(data=g, hue=\"Label\", palette=\"Set2\", \n diag_kind=\"kde\", size=4, vars=labels)\n#bottom labels\np.fig.text(0.33, -0.01, \"Bin: %s\"%(n), ha ='left', fontsize = 15)\np.fig.text(0.33, -0.04, \"Num Points: %d\"%(len(g)), ha ='left', fontsize = 15)\n\nand other useful functionality:\n# title on top center of subplot\np.fig.suptitle('this is the figure title', verticalalignment='top', fontsize=20)\n\n# title above plot\np.fig.text(0.33, 1.02,'Above the plot', fontsize=20)\n\n# left and right of plot\np.fig.text(0, 1,'Left the plot', fontsize=20, rotation=90)\np.fig.text(1.02, 1,'Right the plot', fontsize=20, rotation=270)\n\n# an example of a multi-line footnote\np.fig.text(0.1, -0.08,\n 'Some multiline\\n'\n 'footnote...',\n fontsize=10)\n\n",
"In addition, using the seaborn functionality, you can move the title up.\nAfter your p=sns.pairplot line, add\np.fig.suptitle(\"here is your title\", y=1.05)\nWhich will give you a title that is just above the actual plot area instead of overlapping the plot (depends on what you're plotting and how much you might need the space).\nYou can also use fontweight (bold, etc) in addition to the fontsize.\n"
] | [
12,
0
] | [] | [] | [
"matplotlib",
"pairplot",
"plot_annotations",
"python",
"seaborn"
] | stackoverflow_0032481214_matplotlib_pairplot_plot_annotations_python_seaborn.txt |
Q:
I'm getting File format b'\x1aE\xdf\xa3' not understood. Only 'RIFF' and 'RIFX' supported error when I want to read a wav format audio file
I want to save the uploaded voice with wav format in FastAPI using the below code:
@router.post('/save')
async def save_audio(audio = Form()):
filename = str(uuid.uuid4())
out_file_path = f"{filename}.wav"
with open(out_file_path, "wb") as buffer:
shutil.copyfileobj(audio.file, buffer)
Everything is fine and I can play the voice with the music player, but when I want to open this file with wavfile package using this code:
rate, data = await wavfile.read(f"{filename}.wav")
I got the File format b'\x1aE\xdf\xa3' not understood. Only 'RIFF' and 'RIFX' supported. error.
How can I solve this?
A:
I solved the problem with librosa package
data, sampleRate = await librosa.load(f'{filename}.wav')
| I'm getting File format b'\x1aE\xdf\xa3' not understood. Only 'RIFF' and 'RIFX' supported error when I want to read a wav format audio file | I want to save the uploaded voice with wav format in FastAPI using the below code:
@router.post('/save')
async def save_audio(audio = Form()):
filename = str(uuid.uuid4())
out_file_path = f"{filename}.wav"
with open(out_file_path, "wb") as buffer:
shutil.copyfileobj(audio.file, buffer)
Everything is fine and I can play the voice with the music player, but when I want to open this file with wavfile package using this code:
rate, data = await wavfile.read(f"{filename}.wav")
I got the File format b'\x1aE\xdf\xa3' not understood. Only 'RIFF' and 'RIFX' supported. error.
How can I solve this?
| [
"I solved the problem with librosa package\ndata, sampleRate = await librosa.load(f'{filename}.wav')\n\n"
] | [
0
] | [] | [] | [
"python"
] | stackoverflow_0074651901_python.txt |
Q:
While trying to run cryptocode I get the error: "module 'hashlib' has no attribute 'scrypt'"
As I mentioned in the title Im trying to run the library cryptocode by using this simple code:
import cryptocode
password = "This is a test"
key = "My Key"
def encrypt(password, key):
return cryptocode.encrypt(password, key)
def decrypt(encryptetpass):
return cryptocode.decrypt(encryptetpass, key)
encrypted_pass = encrypt(password, key)
print(encrypted_pass)
print(decrypt(encrypted_pass))
While running it locally on Windows I get no errors, but trying the same on Linux generates me the previously in the title mentioned error:
(venv) pwd$ python3.9 crypt_test.py
Traceback (most recent call last):
File "/crypt_test.py", line 15, in <module>
encrypted_pass = encrypt(password, key)
File "/crypt_test.py", line 8, in encrypt
return cryptocode.encrypt(password, key)
File "/venv/lib/python3.9/site-packages/cryptocode/myfunctions.py", line 16, in encrypt
private_key = hashlib.scrypt(
AttributeError: module 'hashlib' has no attribute 'scrypt'
I tried updating Openssl, reinstalled my venv and Python.
A:
Did you try to build the latest OpenSSL? I see instructions here: https://www.howtoforge.com/tutorial/how-to-install-openssl-from-source-on-linux/
(wasn't able to try this because I'm not running Linux). Please let us know if this worked.
A:
check below points:
To check which all versions are installed
which python
//use python or python3 as per your needs
To check where all versions are installed
where python
//use python or python3 as per your needs
In mycase i was having two paths as below,
/usr/bin/python
/usr/local/bin/python
so, now check VS Code,
Command + Shipt + P
Search/Select Python Interpreter
select appropriate version you are using, if not listed Add the respective path and select the same.
This Solution worked for me:)
| While trying to run cryptocode I get the error: "module 'hashlib' has no attribute 'scrypt'" | As I mentioned in the title Im trying to run the library cryptocode by using this simple code:
import cryptocode
password = "This is a test"
key = "My Key"
def encrypt(password, key):
return cryptocode.encrypt(password, key)
def decrypt(encryptetpass):
return cryptocode.decrypt(encryptetpass, key)
encrypted_pass = encrypt(password, key)
print(encrypted_pass)
print(decrypt(encrypted_pass))
While running it locally on Windows I get no errors, but trying the same on Linux generates me the previously in the title mentioned error:
(venv) pwd$ python3.9 crypt_test.py
Traceback (most recent call last):
File "/crypt_test.py", line 15, in <module>
encrypted_pass = encrypt(password, key)
File "/crypt_test.py", line 8, in encrypt
return cryptocode.encrypt(password, key)
File "/venv/lib/python3.9/site-packages/cryptocode/myfunctions.py", line 16, in encrypt
private_key = hashlib.scrypt(
AttributeError: module 'hashlib' has no attribute 'scrypt'
I tried updating Openssl, reinstalled my venv and Python.
| [
"Did you try to build the latest OpenSSL? I see instructions here: https://www.howtoforge.com/tutorial/how-to-install-openssl-from-source-on-linux/\n(wasn't able to try this because I'm not running Linux). Please let us know if this worked.\n",
"check below points:\nTo check which all versions are installed\n\nwhich python\n\n//use python or python3 as per your needs\n\nTo check where all versions are installed\n\nwhere python\n\n//use python or python3 as per your needs\n\nIn mycase i was having two paths as below,\n/usr/bin/python \n/usr/local/bin/python\n\nso, now check VS Code,\n\nCommand + Shipt + P\nSearch/Select Python Interpreter\nselect appropriate version you are using, if not listed Add the respective path and select the same.\n\nThis Solution worked for me:)\n"
] | [
0,
0
] | [] | [] | [
"cryptography",
"hashlib",
"python"
] | stackoverflow_0069204547_cryptography_hashlib_python.txt |
Q:
Import models from different apps to admin Django
I'm trying to create an admin page for my project including app1 and app2
myproject
settings.py
urls.py
admin.py
app1
app2
In myproject/urls.py
urlpatterns = [
path('admin/', admin.site.urls),
path('app1/', include('app1.urls')),
path('app2/', include('app2.urls')),
]
In myproject/admin.py
from django.contrib import admin
from app1.models import User
from app2.models import Manager, Employee, Task, Template
admin.site.register(User)
admin.site.register(Manager)
admin.site.register(Employee)
admin.site.register(Task)
admin.site.register(Template)
Why doesn't my admin page import any models at all? Thanks!
A:
inside each app you must put admin file so can django track these files , so in your app1 in admin.py file related to app1 directory app1/admin.py , you need to put this code
from django.contrib import admin
from app1.models import User
admin.site.register(User)
and in app2 in admin.py related to app2 directoryapp2/admin.py put this :
from django.contrib import admin
from app2.models import Manager, Employee, Task, Template
admin.site.register(Manager)
admin.site.register(Employee)
admin.site.register(Task)
admin.site.register(Template)
A:
Every app should have their own admin.py file by default design. Django will detect it in the respective app folder and reflect it in admin page.
However if what you're trying here is to compile every model in one admin.py. Django wont detect that file by default. Another way to do this, you can try create a new app "admin" and register every model in that folder. I never did this because admin.py separated in their own app has its own very significant purpose.
| Import models from different apps to admin Django | I'm trying to create an admin page for my project including app1 and app2
myproject
settings.py
urls.py
admin.py
app1
app2
In myproject/urls.py
urlpatterns = [
path('admin/', admin.site.urls),
path('app1/', include('app1.urls')),
path('app2/', include('app2.urls')),
]
In myproject/admin.py
from django.contrib import admin
from app1.models import User
from app2.models import Manager, Employee, Task, Template
admin.site.register(User)
admin.site.register(Manager)
admin.site.register(Employee)
admin.site.register(Task)
admin.site.register(Template)
Why doesn't my admin page import any models at all? Thanks!
| [
"inside each app you must put admin file so can django track these files , so in your app1 in admin.py file related to app1 directory app1/admin.py , you need to put this code\nfrom django.contrib import admin\nfrom app1.models import User\n \nadmin.site.register(User)\n\nand in app2 in admin.py related to app2 directoryapp2/admin.py put this :\n\nfrom django.contrib import admin\n\nfrom app2.models import Manager, Employee, Task, Template\n\n\nadmin.site.register(Manager)\nadmin.site.register(Employee)\nadmin.site.register(Task)\nadmin.site.register(Template)\n\n",
"Every app should have their own admin.py file by default design. Django will detect it in the respective app folder and reflect it in admin page.\nHowever if what you're trying here is to compile every model in one admin.py. Django wont detect that file by default. Another way to do this, you can try create a new app \"admin\" and register every model in that folder. I never did this because admin.py separated in their own app has its own very significant purpose.\n"
] | [
0,
0
] | [] | [] | [
"django",
"python"
] | stackoverflow_0074651506_django_python.txt |
Q:
Filtering or Querying Pandas MultiIndex Dataframe based on column values
I have a multi-index pandas DataFrame such as below, primarily indexed with DateTime object.
>>> type(feed_tail)
<class 'pandas.core.frame.DataFrame'>
>>> feed_tail.index
DatetimeIndex(['2022-11-11', '2022-11-14', '2022-11-15', '2022-11-16',
'2022-11-17', '2022-11-18', '2022-11-21', '2022-11-22',
'2022-11-23', '2022-11-24'],
dtype='datetime64[ns]', name='Date', freq=None)
>>> feed_tail.columns
MultiIndex([( 'Close', 'BALKRISIND.NS'),
( 'Close', 'KSB.NS'),
( 'SMA13', 'BALKRISIND.NS'),
( 'SMA13', 'KSB.NS'),
('ClosegtSMA13', 'BALKRISIND.NS'),
('ClosegtSMA13', 'KSB.NS'),
( 'MTDPerf', 'BALKRISIND.NS'),
( 'MTDPerf', 'KSB.NS')],
names=['Attributes', 'Symbols'])
>>> feed_tail
Attributes Close SMA13 ClosegtSMA13 MTDPerf
Symbols BALKRISIND.NS KSB.NS BALKRISIND.NS KSB.NS BALKRISIND.NS KSB.NS BALKRISIND.NS KSB.NS
Date
2022-11-11 1889.45 1834.40 1933.03 1959.00 False False -3.73 -11.86
2022-11-14 1875.55 1848.60 1927.28 1944.42 False False -4.44 -11.18
2022-11-15 1963.20 1954.15 1928.51 1938.12 True True 0.02 -6.11
2022-11-16 1956.30 1969.75 1929.43 1933.65 True True -0.33 -5.36
2022-11-17 1978.35 1959.55 1932.08 1927.51 True True 0.79 -5.85
2022-11-18 1972.75 1917.90 1932.85 1914.94 True True 0.51 -7.85
2022-11-21 1945.80 1874.70 1932.80 1902.38 True False -0.86 -9.93
2022-11-22 1950.30 1882.85 1932.60 1892.80 True False -0.63 -9.54
2022-11-23 1946.60 1930.90 1936.52 1893.97 True True -0.82 -7.23
2022-11-24 1975.40 1925.80 1941.11 1901.10 True True 0.64 -7.47
I am trying to access/filter the dataframe into another dataframe, for every datetime index in sequence where ClosegtSMA13 column is True but seems like I am failing at understanding the datamodel here. Quest is to iterate over the datetime index in sequence, and get dataframes where the Symbols' ClosegtSMA13 is True or Close is greater than SMA13 with in the same dataframe and then go over the filtered/queried dataframe for further processing within the loop.
Any help towards unravelling this further is sincerely appreciated.
Thank you
Updates:
Following @jezrael's suggestion to use mask. This helps in performing the 'OR' operation in a way that prefers to get all series rows that have satisfying Close gt SMA13 for all symbols though.
>>> feed_tail
Attributes Close SMA13 ClosegtSMA13 MTDPerf
Symbols BALKRISIND.NS KSB.NS BALKRISIND.NS KSB.NS BALKRISIND.NS KSB.NS BALKRISIND.NS KSB.NS
Date
2022-11-11 1889.45 1834.40 1933.03 1959.00 False False -3.73 -11.86
2022-11-14 1875.55 1848.60 1927.28 1944.42 False False -4.44 -11.18
2022-11-15 1963.20 1954.15 1928.51 1938.12 True True 0.02 -6.11
2022-11-16 1956.30 1969.75 1929.43 1933.65 True True -0.33 -5.36
2022-11-17 1978.35 1959.55 1932.08 1927.51 True True 0.79 -5.85
2022-11-18 1972.75 1917.90 1932.85 1914.94 True True 0.51 -7.85
2022-11-21 1945.80 1874.70 1932.80 1902.38 True False -0.86 -9.93
2022-11-22 1950.30 1882.85 1932.60 1892.80 True False -0.63 -9.54
2022-11-23 1946.60 1930.90 1936.52 1893.97 True True -0.82 -7.23
2022-11-24 1975.40 1925.80 1941.11 1901.10 True True 0.64 -7.47
>>> mask = feed_tail['Close'].gt(feed_tail['SMA13']).any(axis=1)
>>> df = feed_tail[mask]
>>> df
Attributes Close SMA13 SMA13gtClose MTDPerf
Symbols BALKRISIND.NS KSB.NS BALKRISIND.NS KSB.NS BALKRISIND.NS KSB.NS BALKRISIND.NS KSB.NS
Date
2022-11-15 1963.20 1954.15 1928.51 1938.12 True True 0.02 -6.11
2022-11-16 1956.30 1969.75 1929.43 1933.65 True True -0.33 -5.36
2022-11-17 1978.35 1959.55 1932.08 1927.51 True True 0.79 -5.85
2022-11-18 1972.75 1917.90 1932.85 1914.94 True True 0.51 -7.85
2022-11-21 1945.80 1874.70 1932.80 1902.38 True False -0.86 -9.93
2022-11-22 1950.30 1882.85 1932.60 1892.80 True False -0.63 -9.54
2022-11-23 1946.60 1930.90 1936.52 1893.97 True True -0.82 -7.23
2022-11-24 1975.40 1925.80 1941.11 1901.10 True True 0.64 -7.47
Overall quest is associated with bigger shape of this dataframe model where I intend to get the top 'MTDPerf' items for each day and this seemingly helps but I would like to filter by making sure they have their 'Close gt SMA13' before checking for their MTDPerf values.
>>> for dt in feed_tail.index:
...
feed_tail['MTDPerf'].loc[dt].head(10).sort_values(ascending=False)
Trying to filter before going for MTDPerf related stuff,
>>> for dt in feed_tail.index:
... d=feed_tail[feed_tail['Close'].loc[dt] > feed_tail['SMA13'].loc[dt]]
... d
...
<stdin>:2: UserWarning: Boolean Series key will be reindexed to match DataFrame index.
Traceback (most recent call last):
File "<stdin>", line 2, in <module>
File "lib/python3.9/site-packages/pandas/core/frame.py", line 3796, in __getitem__
return self._getitem_bool_array(key)
File "lib/python3.9/site-packages/pandas/core/frame.py", line 3849, in _getitem_bool_array
key = check_bool_indexer(self.index, key)
File "lib/python3.9/site-packages/pandas/core/indexing.py", line 2548, in check_bool_indexer
raise IndexingError(
pandas.errors.IndexingError: Unalignable boolean Series provided as indexer (index of the boolean Series and of the indexed object do not match).
Solution (or approach used so far): (many thanks to @jezrael 's leading answer)
mmtd = feed_tail.where(feed_tail['Close'] > feed_tail['SMA13']).where(feed_tail['MTDPerf'] > 0)
for dt in mmtd.index:
dt_str = dt.strftime("%Y-%m-%d")
a = mmtd.loc[dt_str, ['Close', 'MTDPerf']]
a = a[a.notna()]
unstacked_a = a.unstack(0)
if not unstacked_a.empty:
unstacked_a = unstacked_a.sort_values(by=(['MTDPerf']), ascending=False)
print(dt_str, unstacked_a)
A:
Use DataFrame.loc with filter MTDPerf Series:
for dt in feed_tail.index:
mmtd = feed_tail.loc[dt, 'MTDPerf']
d = mmtd[feed_tail.loc[dt, 'Close'] > feed_tail.loc[dt, 'SMA13']]
print (d)
Series([], Name: 2022-11-11 00:00:00, dtype: object)
Series([], Name: 2022-11-14 00:00:00, dtype: object)
Symbols
BALKRISIND.NS 0.02
KSB.NS -6.11
Name: 2022-11-15 00:00:00, dtype: object
Symbols
BALKRISIND.NS -0.33
KSB.NS -5.36
Name: 2022-11-16 00:00:00, dtype: object
Symbols
BALKRISIND.NS 0.79
KSB.NS -5.85
Name: 2022-11-17 00:00:00, dtype: object
Symbols
BALKRISIND.NS 0.51
KSB.NS -7.85
Name: 2022-11-18 00:00:00, dtype: object
Symbols
BALKRISIND.NS -0.86
Name: 2022-11-21 00:00:00, dtype: object
Symbols
BALKRISIND.NS -0.63
Name: 2022-11-22 00:00:00, dtype: object
Symbols
BALKRISIND.NS -0.82
KSB.NS -7.23
Name: 2022-11-23 00:00:00, dtype: object
Symbols
BALKRISIND.NS 0.64
KSB.NS -7.47
Name: 2022-11-24 00:00:00, dtype: object()
Solution with DataFrame.where for replace NaNs if no match:
df = feed_tail['MTDPerf'].where(feed_tail['Close'] > feed_tail['SMA13'])
print (df)
Symbols BALKRISIND.NS KSB.NS
2022-11-11 NaN NaN
2022-11-14 NaN NaN
2022-11-15 0.02 -6.11
2022-11-16 -0.33 -5.36
2022-11-17 0.79 -5.85
2022-11-18 0.51 -7.85
2022-11-21 -0.86 NaN
2022-11-22 -0.63 NaN
2022-11-23 -0.82 -7.23
2022-11-24 0.64 -7.47
And after reshaping:
s = feed_tail['MTDPerf'].where(feed_tail['Close'] > feed_tail['SMA13']).stack()
print (s)
Symbols
2022-11-15 BALKRISIND.NS 0.02
KSB.NS -6.11
2022-11-16 BALKRISIND.NS -0.33
KSB.NS -5.36
2022-11-17 BALKRISIND.NS 0.79
KSB.NS -5.85
2022-11-18 BALKRISIND.NS 0.51
KSB.NS -7.85
2022-11-21 BALKRISIND.NS -0.86
2022-11-22 BALKRISIND.NS -0.63
2022-11-23 BALKRISIND.NS -0.82
KSB.NS -7.23
2022-11-24 BALKRISIND.NS 0.64
KSB.NS -7.47
dtype: float64
| Filtering or Querying Pandas MultiIndex Dataframe based on column values | I have a multi-index pandas DataFrame such as below, primarily indexed with DateTime object.
>>> type(feed_tail)
<class 'pandas.core.frame.DataFrame'>
>>> feed_tail.index
DatetimeIndex(['2022-11-11', '2022-11-14', '2022-11-15', '2022-11-16',
'2022-11-17', '2022-11-18', '2022-11-21', '2022-11-22',
'2022-11-23', '2022-11-24'],
dtype='datetime64[ns]', name='Date', freq=None)
>>> feed_tail.columns
MultiIndex([( 'Close', 'BALKRISIND.NS'),
( 'Close', 'KSB.NS'),
( 'SMA13', 'BALKRISIND.NS'),
( 'SMA13', 'KSB.NS'),
('ClosegtSMA13', 'BALKRISIND.NS'),
('ClosegtSMA13', 'KSB.NS'),
( 'MTDPerf', 'BALKRISIND.NS'),
( 'MTDPerf', 'KSB.NS')],
names=['Attributes', 'Symbols'])
>>> feed_tail
Attributes Close SMA13 ClosegtSMA13 MTDPerf
Symbols BALKRISIND.NS KSB.NS BALKRISIND.NS KSB.NS BALKRISIND.NS KSB.NS BALKRISIND.NS KSB.NS
Date
2022-11-11 1889.45 1834.40 1933.03 1959.00 False False -3.73 -11.86
2022-11-14 1875.55 1848.60 1927.28 1944.42 False False -4.44 -11.18
2022-11-15 1963.20 1954.15 1928.51 1938.12 True True 0.02 -6.11
2022-11-16 1956.30 1969.75 1929.43 1933.65 True True -0.33 -5.36
2022-11-17 1978.35 1959.55 1932.08 1927.51 True True 0.79 -5.85
2022-11-18 1972.75 1917.90 1932.85 1914.94 True True 0.51 -7.85
2022-11-21 1945.80 1874.70 1932.80 1902.38 True False -0.86 -9.93
2022-11-22 1950.30 1882.85 1932.60 1892.80 True False -0.63 -9.54
2022-11-23 1946.60 1930.90 1936.52 1893.97 True True -0.82 -7.23
2022-11-24 1975.40 1925.80 1941.11 1901.10 True True 0.64 -7.47
I am trying to access/filter the dataframe into another dataframe, for every datetime index in sequence where ClosegtSMA13 column is True but seems like I am failing at understanding the datamodel here. Quest is to iterate over the datetime index in sequence, and get dataframes where the Symbols' ClosegtSMA13 is True or Close is greater than SMA13 with in the same dataframe and then go over the filtered/queried dataframe for further processing within the loop.
Any help towards unravelling this further is sincerely appreciated.
Thank you
Updates:
Following @jezrael's suggestion to use mask. This helps in performing the 'OR' operation in a way that prefers to get all series rows that have satisfying Close gt SMA13 for all symbols though.
>>> feed_tail
Attributes Close SMA13 ClosegtSMA13 MTDPerf
Symbols BALKRISIND.NS KSB.NS BALKRISIND.NS KSB.NS BALKRISIND.NS KSB.NS BALKRISIND.NS KSB.NS
Date
2022-11-11 1889.45 1834.40 1933.03 1959.00 False False -3.73 -11.86
2022-11-14 1875.55 1848.60 1927.28 1944.42 False False -4.44 -11.18
2022-11-15 1963.20 1954.15 1928.51 1938.12 True True 0.02 -6.11
2022-11-16 1956.30 1969.75 1929.43 1933.65 True True -0.33 -5.36
2022-11-17 1978.35 1959.55 1932.08 1927.51 True True 0.79 -5.85
2022-11-18 1972.75 1917.90 1932.85 1914.94 True True 0.51 -7.85
2022-11-21 1945.80 1874.70 1932.80 1902.38 True False -0.86 -9.93
2022-11-22 1950.30 1882.85 1932.60 1892.80 True False -0.63 -9.54
2022-11-23 1946.60 1930.90 1936.52 1893.97 True True -0.82 -7.23
2022-11-24 1975.40 1925.80 1941.11 1901.10 True True 0.64 -7.47
>>> mask = feed_tail['Close'].gt(feed_tail['SMA13']).any(axis=1)
>>> df = feed_tail[mask]
>>> df
Attributes Close SMA13 SMA13gtClose MTDPerf
Symbols BALKRISIND.NS KSB.NS BALKRISIND.NS KSB.NS BALKRISIND.NS KSB.NS BALKRISIND.NS KSB.NS
Date
2022-11-15 1963.20 1954.15 1928.51 1938.12 True True 0.02 -6.11
2022-11-16 1956.30 1969.75 1929.43 1933.65 True True -0.33 -5.36
2022-11-17 1978.35 1959.55 1932.08 1927.51 True True 0.79 -5.85
2022-11-18 1972.75 1917.90 1932.85 1914.94 True True 0.51 -7.85
2022-11-21 1945.80 1874.70 1932.80 1902.38 True False -0.86 -9.93
2022-11-22 1950.30 1882.85 1932.60 1892.80 True False -0.63 -9.54
2022-11-23 1946.60 1930.90 1936.52 1893.97 True True -0.82 -7.23
2022-11-24 1975.40 1925.80 1941.11 1901.10 True True 0.64 -7.47
Overall quest is associated with bigger shape of this dataframe model where I intend to get the top 'MTDPerf' items for each day and this seemingly helps but I would like to filter by making sure they have their 'Close gt SMA13' before checking for their MTDPerf values.
>>> for dt in feed_tail.index:
...
feed_tail['MTDPerf'].loc[dt].head(10).sort_values(ascending=False)
Trying to filter before going for MTDPerf related stuff,
>>> for dt in feed_tail.index:
... d=feed_tail[feed_tail['Close'].loc[dt] > feed_tail['SMA13'].loc[dt]]
... d
...
<stdin>:2: UserWarning: Boolean Series key will be reindexed to match DataFrame index.
Traceback (most recent call last):
File "<stdin>", line 2, in <module>
File "lib/python3.9/site-packages/pandas/core/frame.py", line 3796, in __getitem__
return self._getitem_bool_array(key)
File "lib/python3.9/site-packages/pandas/core/frame.py", line 3849, in _getitem_bool_array
key = check_bool_indexer(self.index, key)
File "lib/python3.9/site-packages/pandas/core/indexing.py", line 2548, in check_bool_indexer
raise IndexingError(
pandas.errors.IndexingError: Unalignable boolean Series provided as indexer (index of the boolean Series and of the indexed object do not match).
Solution (or approach used so far): (many thanks to @jezrael 's leading answer)
mmtd = feed_tail.where(feed_tail['Close'] > feed_tail['SMA13']).where(feed_tail['MTDPerf'] > 0)
for dt in mmtd.index:
dt_str = dt.strftime("%Y-%m-%d")
a = mmtd.loc[dt_str, ['Close', 'MTDPerf']]
a = a[a.notna()]
unstacked_a = a.unstack(0)
if not unstacked_a.empty:
unstacked_a = unstacked_a.sort_values(by=(['MTDPerf']), ascending=False)
print(dt_str, unstacked_a)
| [
"Use DataFrame.loc with filter MTDPerf Series:\nfor dt in feed_tail.index:\n mmtd = feed_tail.loc[dt, 'MTDPerf']\n d = mmtd[feed_tail.loc[dt, 'Close'] > feed_tail.loc[dt, 'SMA13']]\n\n\n print (d)\n\nSeries([], Name: 2022-11-11 00:00:00, dtype: object)\nSeries([], Name: 2022-11-14 00:00:00, dtype: object)\nSymbols\nBALKRISIND.NS 0.02\nKSB.NS -6.11\nName: 2022-11-15 00:00:00, dtype: object\nSymbols\nBALKRISIND.NS -0.33\nKSB.NS -5.36\nName: 2022-11-16 00:00:00, dtype: object\nSymbols\nBALKRISIND.NS 0.79\nKSB.NS -5.85\nName: 2022-11-17 00:00:00, dtype: object\nSymbols\nBALKRISIND.NS 0.51\nKSB.NS -7.85\nName: 2022-11-18 00:00:00, dtype: object\nSymbols\nBALKRISIND.NS -0.86\nName: 2022-11-21 00:00:00, dtype: object\nSymbols\nBALKRISIND.NS -0.63\nName: 2022-11-22 00:00:00, dtype: object\nSymbols\nBALKRISIND.NS -0.82\nKSB.NS -7.23\nName: 2022-11-23 00:00:00, dtype: object\nSymbols\nBALKRISIND.NS 0.64\nKSB.NS -7.47\nName: 2022-11-24 00:00:00, dtype: object()\n\nSolution with DataFrame.where for replace NaNs if no match:\ndf = feed_tail['MTDPerf'].where(feed_tail['Close'] > feed_tail['SMA13'])\nprint (df)\nSymbols BALKRISIND.NS KSB.NS\n2022-11-11 NaN NaN\n2022-11-14 NaN NaN\n2022-11-15 0.02 -6.11\n2022-11-16 -0.33 -5.36\n2022-11-17 0.79 -5.85\n2022-11-18 0.51 -7.85\n2022-11-21 -0.86 NaN\n2022-11-22 -0.63 NaN\n2022-11-23 -0.82 -7.23\n2022-11-24 0.64 -7.47\n\nAnd after reshaping:\ns = feed_tail['MTDPerf'].where(feed_tail['Close'] > feed_tail['SMA13']).stack()\nprint (s)\n Symbols \n2022-11-15 BALKRISIND.NS 0.02\n KSB.NS -6.11\n2022-11-16 BALKRISIND.NS -0.33\n KSB.NS -5.36\n2022-11-17 BALKRISIND.NS 0.79\n KSB.NS -5.85\n2022-11-18 BALKRISIND.NS 0.51\n KSB.NS -7.85\n2022-11-21 BALKRISIND.NS -0.86\n2022-11-22 BALKRISIND.NS -0.63\n2022-11-23 BALKRISIND.NS -0.82\n KSB.NS -7.23\n2022-11-24 BALKRISIND.NS 0.64\n KSB.NS -7.47\ndtype: float64\n\n"
] | [
1
] | [] | [] | [
"dataframe",
"multi_index",
"numpy",
"pandas",
"python"
] | stackoverflow_0074652085_dataframe_multi_index_numpy_pandas_python.txt |
Q:
How to install Talib (on windows machine) in colab (2022-12)?
Yesterday I tried to run a python code which contains a talib package. The package failed to run in colab, ModuleNotFoundError: No module named 'talib'
I used this code, which normally worked, but after yesterday it didn't.
url = 'https://anaconda.org/conda-forge/libta-lib/0.4.0/download/linux-64/libta-lib-0.4.0-h516909a_0.tar.bz2'
!curl -L $url | tar xj -C /usr/lib/x86_64-linux-gnu/ lib --strip-components=1
url = 'https://anaconda.org/conda-forge/ta-lib/0.4.19/download/linux-64/ta-lib-0.4.19-py37ha21ca33_2.tar.bz2'
!curl -L $url | tar xj -C /usr/local/lib/python3.7/dist-packages/ lib/python3.7/site-packages/talib --strip-components=3
The idea is to install the following items:
import talib as ta
from talib import RSI, BBANDS, MACD
I tried this as well, without succes:
!pip install TA-Lib as ta
from TA-Lib import RSI, BBANDS, MACD
Does someone know how to install this package in colab?
A:
Since Google Colab is a notebook, you can use the ! operator with pip to install the TA-Lib package.
Try this :
!pip install TA-Lib
As suggested by @DarknessPlusPlus, you can also use the magic command % :
%pip install TA-Lib
This answer by @jakevdp explains the difference between the two commands.
# Edit :
If you still can't import the modules, run this in a Google Colab cell :
!curl -L http://prdownloads.sourceforge.net/ta-lib/ta-lib-0.4.0-src.tar.gz -O && tar xzvf ta-lib-0.4.0-src.tar.gz
!cd ta-lib && ./configure --prefix=/usr && make && make install && cd - && pip install ta-lib
# Result :
But before that, make sure to uninstall any talib package with :
%pip uninstall TA-lib talib-binary
| How to install Talib (on windows machine) in colab (2022-12)? | Yesterday I tried to run a python code which contains a talib package. The package failed to run in colab, ModuleNotFoundError: No module named 'talib'
I used this code, which normally worked, but after yesterday it didn't.
url = 'https://anaconda.org/conda-forge/libta-lib/0.4.0/download/linux-64/libta-lib-0.4.0-h516909a_0.tar.bz2'
!curl -L $url | tar xj -C /usr/lib/x86_64-linux-gnu/ lib --strip-components=1
url = 'https://anaconda.org/conda-forge/ta-lib/0.4.19/download/linux-64/ta-lib-0.4.19-py37ha21ca33_2.tar.bz2'
!curl -L $url | tar xj -C /usr/local/lib/python3.7/dist-packages/ lib/python3.7/site-packages/talib --strip-components=3
The idea is to install the following items:
import talib as ta
from talib import RSI, BBANDS, MACD
I tried this as well, without succes:
!pip install TA-Lib as ta
from TA-Lib import RSI, BBANDS, MACD
Does someone know how to install this package in colab?
| [
"Since Google Colab is a notebook, you can use the ! operator with pip to install the TA-Lib package.\nTry this :\n!pip install TA-Lib\n\nAs suggested by @DarknessPlusPlus, you can also use the magic command % :\n%pip install TA-Lib\n\nThis answer by @jakevdp explains the difference between the two commands.\n# Edit :\nIf you still can't import the modules, run this in a Google Colab cell :\n!curl -L http://prdownloads.sourceforge.net/ta-lib/ta-lib-0.4.0-src.tar.gz -O && tar xzvf ta-lib-0.4.0-src.tar.gz\n!cd ta-lib && ./configure --prefix=/usr && make && make install && cd - && pip install ta-lib\n\n# Result :\n\nBut before that, make sure to uninstall any talib package with :\n%pip uninstall TA-lib talib-binary\n\n"
] | [
3
] | [] | [] | [
"python",
"ta_lib"
] | stackoverflow_0074652073_python_ta_lib.txt |
Q:
What are the valid values for --platform, --abi, and --implementation for pip download?
pip download has several flags that I would like to play with --platform, --abi, and --implementation.
Where can I find the complete list of valid values for these flags?
A:
I don't think there is one definitive list. You have to collect it from different sources. Start with PEP 425: https://www.python.org/dev/peps/pep-0425/
python tag: ‘py27’, ‘cp33’
abi tag: ‘cp32dmu’, ‘none’
platform tag: ‘linux_x86_64’, ‘any’
--implementation:
cp: CPython
ip: IronPython
pp: PyPy
jy: Jython
--platform:
win32
linux_i386
linux_x86_64
A:
If you have access to the PC (or similar platform) for which you need to download the package, per the documentation, the following function can be called to get the explicit platform name.
distutils.util.get_platform()
The platform tag is simply distutils.util.get_platform() with all hyphens - and periods . replaced with underscore _.
In our case, we have offline PCs (which must remain offline); so this approach works perfectly to ensure we download the correct platform for those PCs.
A:
If you are only downloading a single package, you could attempt go to pypi.org and search for what's available.
E.g. for orjson https://pypi.org/project/orjson/3.8.2/#files, you can see there's things like:
win_amd64
manylinux_2_28_x86_64
manylinux_2_28_aarch64
manylinux_2_17_x86_64
manylinux2014_x86_64
manylinux_2_17_armv7l
manylinux2014_armv7l
manylinux_2_17_aarch64
manylinux2014_aarch64
macosx_10_9_x86_64
macosx_11_0_arm64
macosx_10_9_universal2
macosx_10_7_x86_64
| What are the valid values for --platform, --abi, and --implementation for pip download? | pip download has several flags that I would like to play with --platform, --abi, and --implementation.
Where can I find the complete list of valid values for these flags?
| [
"I don't think there is one definitive list. You have to collect it from different sources. Start with PEP 425: https://www.python.org/dev/peps/pep-0425/\npython tag: ‘py27’, ‘cp33’\nabi tag: ‘cp32dmu’, ‘none’\nplatform tag: ‘linux_x86_64’, ‘any’ \n--implementation:\ncp: CPython\nip: IronPython\npp: PyPy\njy: Jython\n\n--platform:\nwin32\nlinux_i386\nlinux_x86_64\n\n",
"If you have access to the PC (or similar platform) for which you need to download the package, per the documentation, the following function can be called to get the explicit platform name.\ndistutils.util.get_platform()\n\n\nThe platform tag is simply distutils.util.get_platform() with all hyphens - and periods . replaced with underscore _.\n\n\nIn our case, we have offline PCs (which must remain offline); so this approach works perfectly to ensure we download the correct platform for those PCs.\n",
"If you are only downloading a single package, you could attempt go to pypi.org and search for what's available.\nE.g. for orjson https://pypi.org/project/orjson/3.8.2/#files, you can see there's things like:\n\nwin_amd64\nmanylinux_2_28_x86_64\nmanylinux_2_28_aarch64\nmanylinux_2_17_x86_64\nmanylinux2014_x86_64\nmanylinux_2_17_armv7l\nmanylinux2014_armv7l\nmanylinux_2_17_aarch64\nmanylinux2014_aarch64\nmacosx_10_9_x86_64\nmacosx_11_0_arm64\nmacosx_10_9_universal2\nmacosx_10_7_x86_64\n\n"
] | [
13,
4,
0
] | [] | [] | [
"pip",
"python"
] | stackoverflow_0049672621_pip_python.txt |
Q:
Looking for a Python editor that will let me collapse functions
I really loved this feature when I used Eclipse for Java programming, but I can't find the same functionality for a Python editor. IDLE and Pyscripter are nice, but they don't help in this area.
Basically, I just want the option to collapse or otherwise hide functions that I don't feel like looking at for a while. Know of anything like this?
A:
In addition to the aforementioned (great) editors, you might want to give PyDev a shot as well.
A:
Geany can do this.
A:
Notepad++ has this feature.
A:
Komodo Edit IDE, for Windows, Mac and Linux, for Python, PHP, Ruby, JavaScript, Perl and Web Dev.
A:
I've used Komodo Edit and Notepad++ in the past but my current preference is Sublime Text Edit 2.
Although not free (and actually quite expensive), it can be used in free mode with only an occasional reminder and no other restrictions.
It is actually written in Python so you get a Python console built in - you can also get other consoles such as JavaScript. It is VERY flexible & has some very good features. It is also has an excellent community with loads of very useful plugins.
It is much lighter on resource usage than Komodo, can use Textmate bundles directly (so gets loads of formatting options for different file types). It is cross-platform and doesn't even need installation on Windows.
A:
Pycharm CE, from Jet Brains, indeed, wonderful. Functions and comments collapse is ready out of the box, as well as edit helpers. Project files and assets organization, integrated python console, powerful debugging tools,... Then, lots of plugins: git integration, tinycode view, extra languages' helpers and highlighters,.... anything you need when coding, but simple and easy to use. There's a Pro (paid) version for those who want even more.
https://www.jetbrains.com/pycharm/download
(This question is more than 10 years old. I got surprised, nobody answered about Pycharm before...)
| Looking for a Python editor that will let me collapse functions | I really loved this feature when I used Eclipse for Java programming, but I can't find the same functionality for a Python editor. IDLE and Pyscripter are nice, but they don't help in this area.
Basically, I just want the option to collapse or otherwise hide functions that I don't feel like looking at for a while. Know of anything like this?
| [
"In addition to the aforementioned (great) editors, you might want to give PyDev a shot as well.\n",
"Geany can do this.\n",
"Notepad++ has this feature.\n",
"Komodo Edit IDE, for Windows, Mac and Linux, for Python, PHP, Ruby, JavaScript, Perl and Web Dev.\n",
"I've used Komodo Edit and Notepad++ in the past but my current preference is Sublime Text Edit 2.\nAlthough not free (and actually quite expensive), it can be used in free mode with only an occasional reminder and no other restrictions.\nIt is actually written in Python so you get a Python console built in - you can also get other consoles such as JavaScript. It is VERY flexible & has some very good features. It is also has an excellent community with loads of very useful plugins.\nIt is much lighter on resource usage than Komodo, can use Textmate bundles directly (so gets loads of formatting options for different file types). It is cross-platform and doesn't even need installation on Windows.\n",
"Pycharm CE, from Jet Brains, indeed, wonderful. Functions and comments collapse is ready out of the box, as well as edit helpers. Project files and assets organization, integrated python console, powerful debugging tools,... Then, lots of plugins: git integration, tinycode view, extra languages' helpers and highlighters,.... anything you need when coding, but simple and easy to use. There's a Pro (paid) version for those who want even more.\nhttps://www.jetbrains.com/pycharm/download\n(This question is more than 10 years old. I got surprised, nobody answered about Pycharm before...)\n"
] | [
4,
3,
2,
1,
1,
0
] | [] | [] | [
"python"
] | stackoverflow_0010394223_python.txt |
Q:
Derive path in nested list tree structure (python)
I have created a tree data structure using a nested list in python. I have code which works through the list and prints a list of of all nodes and their parent node:
def printParents(node,adj,parent):
if (parent == 0):
print(node, "-> root")
else:
print(node, "->", parent)
for cur in adj[node]:
if (cur != parent):
printParents(cur,adj,node)
The result is:
1 -> root
2 -> 1
3 -> 1
10 -> 3
11 -> 3
14 -> 11
15 -> 11
16 -> 11
17 -> 11
18 -> 11
19 -> 11
4 -> 1
5 -> 1
I am struggling to write a function which works backward to derive the path back to root given a specific node. For example with the input of 17 it should return something like 17 -> 11 -> 3 -> 1 (output format can vary).
Any help would be greatly appreciated. I am not a super experienced code so answers that treat me like I'm dumb would be appreciated.
A:
You could use a generator, that builds the path when coming back from recursion:
def paths(adj, node, parent=None):
yield [node]
for child in adj[node]:
if child != parent:
for path in paths(adj, child, node):
yield [*path, node]
Here is how to call it:
# example tree
adj = [
[1],
[0,2,3,4,5],
[1],
[1,10,11],
[1],[1],[],[],[],[],[3],
[3,14,15,16,17,18,19],
[],[],[11],[11],[11],[11],[11],[11]
]
for path in paths(adj, 0):
if len(path) > 1:
print(" -> ".join(map(str, path[:-1])) + " -> root")
This will output:
1 -> root
2 -> 1 -> root
3 -> 1 -> root
10 -> 3 -> 1 -> root
11 -> 3 -> 1 -> root
14 -> 11 -> 3 -> 1 -> root
15 -> 11 -> 3 -> 1 -> root
16 -> 11 -> 3 -> 1 -> root
17 -> 11 -> 3 -> 1 -> root
18 -> 11 -> 3 -> 1 -> root
19 -> 11 -> 3 -> 1 -> root
4 -> 1 -> root
5 -> 1 -> root
If you want to only output the paths that start at a leaf, then change the generator function to:
def paths(adj, node, parent=None):
children = [child for child in adj[node] if child != parent]
if not children: # it's a leaf
yield [node]
else:
for child in children:
for path in paths(adj, child, node):
yield [*path, node]
Corresponding output:
2 -> 1 -> root
10 -> 3 -> 1 -> root
14 -> 11 -> 3 -> 1 -> root
15 -> 11 -> 3 -> 1 -> root
16 -> 11 -> 3 -> 1 -> root
17 -> 11 -> 3 -> 1 -> root
18 -> 11 -> 3 -> 1 -> root
19 -> 11 -> 3 -> 1 -> root
4 -> 1 -> root
5 -> 1 -> root
| Derive path in nested list tree structure (python) | I have created a tree data structure using a nested list in python. I have code which works through the list and prints a list of of all nodes and their parent node:
def printParents(node,adj,parent):
if (parent == 0):
print(node, "-> root")
else:
print(node, "->", parent)
for cur in adj[node]:
if (cur != parent):
printParents(cur,adj,node)
The result is:
1 -> root
2 -> 1
3 -> 1
10 -> 3
11 -> 3
14 -> 11
15 -> 11
16 -> 11
17 -> 11
18 -> 11
19 -> 11
4 -> 1
5 -> 1
I am struggling to write a function which works backward to derive the path back to root given a specific node. For example with the input of 17 it should return something like 17 -> 11 -> 3 -> 1 (output format can vary).
Any help would be greatly appreciated. I am not a super experienced code so answers that treat me like I'm dumb would be appreciated.
| [
"You could use a generator, that builds the path when coming back from recursion:\ndef paths(adj, node, parent=None):\n yield [node]\n for child in adj[node]:\n if child != parent:\n for path in paths(adj, child, node):\n yield [*path, node]\n\nHere is how to call it:\n# example tree\nadj = [\n [1],\n [0,2,3,4,5],\n [1],\n [1,10,11],\n [1],[1],[],[],[],[],[3],\n [3,14,15,16,17,18,19],\n [],[],[11],[11],[11],[11],[11],[11]\n]\n\nfor path in paths(adj, 0):\n if len(path) > 1:\n print(\" -> \".join(map(str, path[:-1])) + \" -> root\")\n\nThis will output:\n1 -> root\n2 -> 1 -> root\n3 -> 1 -> root\n10 -> 3 -> 1 -> root\n11 -> 3 -> 1 -> root\n14 -> 11 -> 3 -> 1 -> root\n15 -> 11 -> 3 -> 1 -> root\n16 -> 11 -> 3 -> 1 -> root\n17 -> 11 -> 3 -> 1 -> root\n18 -> 11 -> 3 -> 1 -> root\n19 -> 11 -> 3 -> 1 -> root\n4 -> 1 -> root\n5 -> 1 -> root\n\nIf you want to only output the paths that start at a leaf, then change the generator function to:\ndef paths(adj, node, parent=None):\n children = [child for child in adj[node] if child != parent]\n if not children: # it's a leaf\n yield [node]\n else:\n for child in children:\n for path in paths(adj, child, node):\n yield [*path, node]\n\nCorresponding output:\n2 -> 1 -> root\n10 -> 3 -> 1 -> root\n14 -> 11 -> 3 -> 1 -> root\n15 -> 11 -> 3 -> 1 -> root\n16 -> 11 -> 3 -> 1 -> root\n17 -> 11 -> 3 -> 1 -> root\n18 -> 11 -> 3 -> 1 -> root\n19 -> 11 -> 3 -> 1 -> root\n4 -> 1 -> root\n5 -> 1 -> root\n\n"
] | [
1
] | [] | [] | [
"list",
"multidimensional_array",
"nested",
"python",
"tree"
] | stackoverflow_0074650373_list_multidimensional_array_nested_python_tree.txt |
Q:
How to get input from InputText() without a button press in PySimpleGui
Is there a way to be able to get the input from an InputText() without having to rely on a button press? I am trying to make a form where the submit button is only available when the input text is not empty, however the only way to get the input from InputText() that I have found is with a button which needs to be clicked in order for me to receive the input.
A:
Decide what event by yourself to send the content of the Input element.
click a button - Add one button into your layout.
import PySimpleGUI as sg
layout = [[sg.Input(key='-IN-'), sg.Button('Submit')]]
window = sg.Window('Title', layout)
event, values = window.read()
if event == 'Submit':
print(values['-IN-'])
window.close()
Button element with Enter key - Wth option bind_return_key=True
import PySimpleGUI as sg
layout = [[sg.Input(key='-IN-'), sg.Button('Submit', bind_return_key=True)]]
window = sg.Window('Title', layout)
event, values = window.read()
if event == 'Submit':
print(values['-IN-'])
window.close()
Input element with Enter key - Binding event '<Return>' to your Input element.
import PySimpleGUI as sg
layout = [[sg.Input(key='-IN-')]]
window = sg.Window('Title', layout, finalize=True)
window['-IN-'].bind('<Return>', ' ENTER')
event, values = window.read()
if event == '-IN- ENTER':
print(values['-IN-'])
window.close()
click any keyboard - With option enable_events=True
import PySimpleGUI as sg
layout = [[sg.Input(enable_events=True, key='-IN-')]]
window = sg.Window('Title', layout)
while True:
event, values = window.read()
if event == sg.WIN_CLOSED:
break
elif event == '-IN-':
print(values['-IN-'])
window.close()
| How to get input from InputText() without a button press in PySimpleGui | Is there a way to be able to get the input from an InputText() without having to rely on a button press? I am trying to make a form where the submit button is only available when the input text is not empty, however the only way to get the input from InputText() that I have found is with a button which needs to be clicked in order for me to receive the input.
| [
"Decide what event by yourself to send the content of the Input element.\n\nclick a button - Add one button into your layout.\n\nimport PySimpleGUI as sg\n\nlayout = [[sg.Input(key='-IN-'), sg.Button('Submit')]]\nwindow = sg.Window('Title', layout)\nevent, values = window.read()\nif event == 'Submit':\n print(values['-IN-'])\nwindow.close()\n\n\nButton element with Enter key - Wth option bind_return_key=True\n\nimport PySimpleGUI as sg\n\nlayout = [[sg.Input(key='-IN-'), sg.Button('Submit', bind_return_key=True)]]\nwindow = sg.Window('Title', layout)\nevent, values = window.read()\nif event == 'Submit':\n print(values['-IN-'])\nwindow.close()\n\n\nInput element with Enter key - Binding event '<Return>' to your Input element.\n\nimport PySimpleGUI as sg\n\nlayout = [[sg.Input(key='-IN-')]]\nwindow = sg.Window('Title', layout, finalize=True)\nwindow['-IN-'].bind('<Return>', ' ENTER')\nevent, values = window.read()\nif event == '-IN- ENTER':\n print(values['-IN-'])\nwindow.close()\n\n\nclick any keyboard - With option enable_events=True\n\nimport PySimpleGUI as sg\n\nlayout = [[sg.Input(enable_events=True, key='-IN-')]]\nwindow = sg.Window('Title', layout)\nwhile True:\n event, values = window.read()\n if event == sg.WIN_CLOSED:\n break\n elif event == '-IN-':\n print(values['-IN-'])\nwindow.close()\n\n"
] | [
1
] | [] | [] | [
"pysimplegui",
"python"
] | stackoverflow_0074651936_pysimplegui_python.txt |
Subsets and Splits