Dataset columns:
question_id: int64 (values 59.5M to 79.4M)
creation_date: string (lengths 8 to 10)
link: string (lengths 60 to 163)
question: string (lengths 53 to 28.9k)
accepted_answer: string (lengths 26 to 29.3k)
question_vote: int64 (values 1 to 410)
answer_vote: int64 (values -9 to 482)
60,899,445
2020-3-28
https://stackoverflow.com/questions/60899445/how-to-connect-kafka-topic-with-web-endpoint-using-faust-python-package
I have a simple app, with two functions, one for listening to topic and other for web endpoint. I want to create server side event streaming (SSE) i.e text/event-stream, so that on client end I could listen to it using EventSource. I have the following code for now, where each function is doing its particular job: import faust from faust.web import Response app = faust.App("app1", broker="kafka://localhost:29092", value_serializer="raw") test_topic = app.topic("test") @app.agent(test_topic) async def test_topic_agent(stream): async for value in stream: print(f"test_topic_agent RECEIVED -- {value!r}") yield value @app.page("/") async def index(self, request): return self.text("yey") Now, I want in the index, something like this code, but using faust: import asyncio from aiohttp import web from aiohttp.web import Response from aiohttp_sse import sse_response from datetime import datetime async def hello(request): loop = request.app.loop async with sse_response(request) as resp: while True: data = 'Server Time : {}'.format(datetime.now()) print(data) await resp.send(data) await asyncio.sleep(1, loop=loop) return resp async def index(request): d = """ <html> <body> <script> var evtSource = new EventSource("/hello"); evtSource.onmessage = function(e) { document.getElementById('response').innerText = e.data } </script> <h1>Response from server:</h1> <div id="response"></div> </body> </html> """ return Response(text=d, content_type='text/html') app = web.Application() app.router.add_route('GET', '/hello', hello) app.router.add_route('GET', '/', index) web.run_app(app, host='127.0.0.1', port=8080) I have tried this: import faust from faust.web import Response app = faust.App("app1", broker="kafka://localhost:29092", value_serializer="raw") test_topic = app.topic("test") # @app.agent(test_topic) # async def test_topic_agent(stream): # async for value in stream: # print(f"test_topic_agent RECEIVED -- {value!r}") # yield value @app.page("/", name="t1") @app.agent(test_topic, name="t") async def index(self, request): return self.text("yey") But it gives me the following error: Traceback (most recent call last): File "/Users/maverick/.pyenv/versions/3.8.1/envs/faust_demo/lib/python3.8/site-packages/faust/cli/base.py", line 299, in find_app val = symbol_by_name(app, imp=imp) File "/Users/maverick/.pyenv/versions/3.8.1/envs/faust_demo/lib/python3.8/site-packages/mode/utils/imports.py", line 262, in symbol_by_name module = imp( # type: ignore File "/Users/maverick/.pyenv/versions/3.8.1/envs/faust_demo/lib/python3.8/site-packages/mode/utils/imports.py", line 376, in import_from_cwd return imp(module, package=package) File "/Users/maverick/.pyenv/versions/3.8.1/lib/python3.8/importlib/__init__.py", line 127, in import_module return _bootstrap._gcd_import(name[level:], package, level) File "<frozen importlib._bootstrap>", line 1014, in _gcd_import File "<frozen importlib._bootstrap>", line 991, in _find_and_load File "<frozen importlib._bootstrap>", line 975, in _find_and_load_unlocked File "<frozen importlib._bootstrap>", line 671, in _load_unlocked File "<frozen importlib._bootstrap_external>", line 783, in exec_module File "<frozen importlib._bootstrap>", line 219, in _call_with_frames_removed File "/Users/maverick/company/demo1/baiohttp-demo/app1.py", line 18, in <module> async def index(self, request): File "/Users/maverick/.pyenv/versions/3.8.1/envs/faust_demo/lib/python3.8/site-packages/faust/app/base.py", line 1231, in _decorator view = view_base.from_handler(cast(ViewHandlerFun, fun)) File 
"/Users/maverick/.pyenv/versions/3.8.1/envs/faust_demo/lib/python3.8/site-packages/faust/web/views.py", line 50, in from_handler return type(fun.__name__, (cls,), { AttributeError: 'Agent' object has no attribute '__name__' During handling of the above exception, another exception occurred: Traceback (most recent call last): File "/Users/maverick/.pyenv/versions/faust_demo/bin/faust", line 8, in <module> sys.exit(cli()) File "/Users/maverick/.pyenv/versions/3.8.1/envs/faust_demo/lib/python3.8/site-packages/click/core.py", line 829, in __call__ return self.main(*args, **kwargs) File "/Users/maverick/.pyenv/versions/3.8.1/envs/faust_demo/lib/python3.8/site-packages/click/core.py", line 781, in main with self.make_context(prog_name, args, **extra) as ctx: File "/Users/maverick/.pyenv/versions/3.8.1/envs/faust_demo/lib/python3.8/site-packages/faust/cli/base.py", line 407, in make_context self._maybe_import_app() File "/Users/maverick/.pyenv/versions/3.8.1/envs/faust_demo/lib/python3.8/site-packages/faust/cli/base.py", line 372, in _maybe_import_app find_app(appstr) File "/Users/maverick/.pyenv/versions/3.8.1/envs/faust_demo/lib/python3.8/site-packages/faust/cli/base.py", line 303, in find_app val = imp(app) File "/Users/maverick/.pyenv/versions/3.8.1/envs/faust_demo/lib/python3.8/site-packages/mode/utils/imports.py", line 376, in import_from_cwd return imp(module, package=package) File "/Users/maverick/.pyenv/versions/3.8.1/lib/python3.8/importlib/__init__.py", line 127, in import_module return _bootstrap._gcd_import(name[level:], package, level) File "<frozen importlib._bootstrap>", line 1014, in _gcd_import File "<frozen importlib._bootstrap>", line 991, in _find_and_load File "<frozen importlib._bootstrap>", line 975, in _find_and_load_unlocked File "<frozen importlib._bootstrap>", line 671, in _load_unlocked File "<frozen importlib._bootstrap_external>", line 783, in exec_module File "<frozen importlib._bootstrap>", line 219, in _call_with_frames_removed File "/Users/maverick/company/demo1/baiohttp-demo/app1.py", line 18, in <module> async def index(self, request): File "/Users/maverick/.pyenv/versions/3.8.1/envs/faust_demo/lib/python3.8/site-packages/faust/app/base.py", line 1231, in _decorator view = view_base.from_handler(cast(ViewHandlerFun, fun)) File "/Users/maverick/.pyenv/versions/3.8.1/envs/faust_demo/lib/python3.8/site-packages/faust/web/views.py", line 50, in from_handler return type(fun.__name__, (cls,), { AttributeError: 'Agent' object has no attribute '__name__' I event tried this: import faust from faust.web import Response app = faust.App("app1", broker="kafka://localhost:29092", value_serializer="raw") test_topic = app.topic("test") # @app.agent(test_topic) # async def test_topic_agent(stream): # async for value in stream: # print(f"test_topic_agent RECEIVED -- {value!r}") # yield value @app.agent(test_topic, name="t") @app.page("/", name="t1") async def index(self, request): return self.text("yey") But I get following error: [2020-03-28 10:32:50,676] [29976] [INFO] [^--Producer]: Creating topic 'app1-__assignor-__leader' [2020-03-28 10:32:50,695] [29976] [INFO] [^--ReplyConsumer]: Starting... [2020-03-28 10:32:50,695] [29976] [INFO] [^--AgentManager]: Starting... [2020-03-28 10:32:50,695] [29976] [INFO] [^---Agent: app1.index]: Starting... 
[2020-03-28 10:32:50,696] [29976] [ERROR] [^Worker]: Error: TypeError("__init__() missing 1 required positional argument: 'web'") Traceback (most recent call last): File "/Users/maverick/.pyenv/versions/3.8.1/envs/faust_demo/lib/python3.8/site-packages/mode/worker.py", line 273, in execute_from_commandline self.loop.run_until_complete(self._starting_fut) File "/Users/maverick/.pyenv/versions/3.8.1/lib/python3.8/asyncio/base_events.py", line 612, in run_until_complete return future.result() File "/Users/maverick/.pyenv/versions/3.8.1/envs/faust_demo/lib/python3.8/site-packages/mode/services.py", line 736, in start await self._default_start() File "/Users/maverick/.pyenv/versions/3.8.1/envs/faust_demo/lib/python3.8/site-packages/mode/services.py", line 743, in _default_start await self._actually_start() File "/Users/maverick/.pyenv/versions/3.8.1/envs/faust_demo/lib/python3.8/site-packages/mode/services.py", line 767, in _actually_start await child.maybe_start() File "/Users/maverick/.pyenv/versions/3.8.1/envs/faust_demo/lib/python3.8/site-packages/mode/services.py", line 795, in maybe_start await self.start() File "/Users/maverick/.pyenv/versions/3.8.1/envs/faust_demo/lib/python3.8/site-packages/mode/services.py", line 736, in start await self._default_start() File "/Users/maverick/.pyenv/versions/3.8.1/envs/faust_demo/lib/python3.8/site-packages/mode/services.py", line 743, in _default_start await self._actually_start() File "/Users/maverick/.pyenv/versions/3.8.1/envs/faust_demo/lib/python3.8/site-packages/mode/services.py", line 767, in _actually_start await child.maybe_start() File "/Users/maverick/.pyenv/versions/3.8.1/envs/faust_demo/lib/python3.8/site-packages/mode/services.py", line 795, in maybe_start await self.start() File "/Users/maverick/.pyenv/versions/3.8.1/envs/faust_demo/lib/python3.8/site-packages/mode/services.py", line 736, in start await self._default_start() File "/Users/maverick/.pyenv/versions/3.8.1/envs/faust_demo/lib/python3.8/site-packages/mode/services.py", line 743, in _default_start await self._actually_start() File "/Users/maverick/.pyenv/versions/3.8.1/envs/faust_demo/lib/python3.8/site-packages/mode/services.py", line 760, in _actually_start await self.on_start() File "/Users/maverick/.pyenv/versions/3.8.1/envs/faust_demo/lib/python3.8/site-packages/faust/agents/manager.py", line 58, in on_start await agent.maybe_start() File "/Users/maverick/.pyenv/versions/3.8.1/envs/faust_demo/lib/python3.8/site-packages/mode/services.py", line 795, in maybe_start await self.start() File "/Users/maverick/.pyenv/versions/3.8.1/envs/faust_demo/lib/python3.8/site-packages/mode/services.py", line 736, in start await self._default_start() File "/Users/maverick/.pyenv/versions/3.8.1/envs/faust_demo/lib/python3.8/site-packages/mode/services.py", line 743, in _default_start await self._actually_start() File "/Users/maverick/.pyenv/versions/3.8.1/envs/faust_demo/lib/python3.8/site-packages/mode/services.py", line 760, in _actually_start await self.on_start() File "/Users/maverick/.pyenv/versions/3.8.1/envs/faust_demo/lib/python3.8/site-packages/faust/agents/agent.py", line 282, in on_start await self._on_start_supervisor() File "/Users/maverick/.pyenv/versions/3.8.1/envs/faust_demo/lib/python3.8/site-packages/faust/agents/agent.py", line 312, in _on_start_supervisor res = await self._start_one( File "/Users/maverick/.pyenv/versions/3.8.1/envs/faust_demo/lib/python3.8/site-packages/faust/agents/agent.py", line 251, in _start_one return await self._start_task( File 
"/Users/maverick/.pyenv/versions/3.8.1/envs/faust_demo/lib/python3.8/site-packages/faust/agents/agent.py", line 617, in _start_task actor = self( File "/Users/maverick/.pyenv/versions/3.8.1/envs/faust_demo/lib/python3.8/site-packages/faust/agents/agent.py", line 525, in __call__ return self.actor_from_stream(stream, File "/Users/maverick/.pyenv/versions/3.8.1/envs/faust_demo/lib/python3.8/site-packages/faust/agents/agent.py", line 552, in actor_from_stream res = self.fun(actual_stream) TypeError: __init__() missing 1 required positional argument: 'web' [2020-03-28 10:32:50,703] [29976] [INFO] [^Worker]: Stopping... [2020-03-28 10:32:50,703] [29976] [INFO] [^-App]: Stopping... [2020-03-28 10:32:50,703] [29976] [INFO] [^-App]: Flush producer buffer... [2020-03-28 10:32:50,703] [29976] [INFO] [^--TableManager]: Stopping... Could there be a way for this? Thanks a lot in advance!
The Faust worker will also expose a web server on every instance, that by default runs on port 6066. The server will use the aiohttp HTTP server library and you can take advantage of this thing and create a server-side event streaming (SSE) like in your example code. You can create an agent that will read from Kafka topic test and will update a variable last_message_from_topic with the last message from the topic, this variable will be visible also from your web pages. In the index page (@app.page('/')) the EventSource interface is used to receive server-sent events. It connects to the server over HTTP and receives events in text/event-stream format from the page /hello without closing the connection. The web page /hello at every second is sending a message text with the last message from the Kafka topic test and with the current time from the server. here is my file my_worker.py code: import asyncio from datetime import datetime import faust from aiohttp.web import Response from aiohttp_sse import sse_response app = faust.App( "app1", broker='kafka://localhost:9092', value_serializer='json', ) test_topic = app.topic("test") last_message_from_topic = ['No messages yet'] @app.agent(test_topic) async def greet(greetings): async for greeting in greetings: last_message_from_topic[0] = greeting @app.page('/hello') async def hello(self, request): loop = request.app.loop async with sse_response(request) as resp: while True: data = f'last message from topic_test: {last_message_from_topic[0]} | ' data += f'Server Time : {datetime.now()}' print(data) await resp.send(data) await asyncio.sleep(1, loop=loop) return resp @app.page('/') async def index(self, request): d = """ <html> <body> <script> var evtSource = new EventSource("/hello"); evtSource.onmessage = function(e) { document.getElementById('response').innerText = e.data } </script> <h1>Response from server:</h1> <div id="response"></div> </body> </html> """ return Response(text=d, content_type='text/html') now you have to start the Faust worker with the following command: faust -A my_worker worker -l info on your web browser you can access http://localhost:6066/: here is the code to send messages to Kafka on the topic test (from another python file): import time import json from kafka import KafkaProducer producer = KafkaProducer(bootstrap_servers=['localhost:9092'],value_serializer=lambda x: json.dumps(x).encode('utf-8')) for i in range(220): time.sleep(1) producer.send('test', value=f'Some message from kafka id {i}')
12
14
60,858,424
2020-3-25
https://stackoverflow.com/questions/60858424/authentication-with-hashing
I need to make a connection to an API using a complicated authentication process that I don't understand. I know it involves multiple steps and I have tried to mimic it, but I find the documentation to be very confusing... The idea is that I make a request to an endpoint which will return a token to me that I need to use to make a websocket connection. I did get a code sample which is in Python that I don't know the syntax of, but I can use it as a guide to convert it to C#-syntax. This is the Python code sample: import time, base64, hashlib, hmac, urllib.request, json api_nonce = bytes(str(int(time.time()*1000)), "utf-8") api_request = urllib.request.Request("https://www.website.com/getToken", b"nonce=%s" % api_nonce) api_request.add_header("API-Key", "API_PUBLIC_KEY") api_request.add_header("API-Sign", base64.b64encode(hmac.new(base64.b64decode("API_PRIVATE_KEY"), b"/getToken" + hashlib.sha256(api_nonce + b"nonce=%s" % api_nonce).digest(), hashlib.sha512).digest())) print(json.loads(urllib.request.urlopen(api_request).read())['result']['token']) So I have tried to convert this into C# and this is the code I got so far: static string apiPublicKey = "API_PUBLIC_KEY"; static string apiPrivateKey = "API_PRIVATE_KEY"; static string endPoint = "https://www.website.com/getToken"; private void authenticate() { using (var client = new HttpClient()) { ServicePointManager.SecurityProtocol = SecurityProtocolType.Tls12 | SecurityProtocolType.Tls11 | SecurityProtocolType.Tls; // CREATE THE URI string uri = "/getToken"; // CREATE THE NONCE /// NONCE = unique identifier which must increase in value with each API call /// in this case we will be using the epoch time DateTime baseTime = new DateTime(1970, 1, 1, 0, 0, 0); TimeSpan epoch = CurrentTime - baseTime; Int64 nonce = Convert.ToInt64(epoch.TotalMilliseconds); // CREATE THE DATA string data = string.Format("nonce={0}", nonce); // CALCULATE THE SHA256 OF THE NONCE string sha256 = SHA256_Hash(data); // DECODE THE PRIVATE KEY byte[] apiSecret = Convert.FromBase64String(apiPrivateKey); // HERE IS THE HMAC CALCULATION } } public static String SHA256_Hash(string value) { StringBuilder Sb = new StringBuilder(); using (var hash = SHA256.Create()) { Encoding enc = Encoding.UTF8; Byte[] result = hash.ComputeHash(enc.GetBytes(value)); foreach (Byte b in result) Sb.Append(b.ToString("x2")); } return Sb.ToString(); } So the next part is where I'm really struggling. There needs to be some HMAC-calculation that needs to be done but I'm completely lost there.
The main task here is to reverse the API-Sign SHA-512 HMAC calculation. Use DateTimeOffset.Now.ToUnixTimeMilliseconds to get the API nonce, it will return a Unix timestamp milliseconds value. Then it all boils down concating byte arrays and generating the hashes. I'm using a hardcoded api_nonce time just to demonstrate the result; you'll have to uncomment string ApiNonce = DateTimeOffset.Now.ToUnixTimeMilliseconds to get the current Unix timestamp milliseconds each time the API-Sign key is calculated. Python API-Sign generation: import time, base64, hashlib, hmac, urllib.request, json # Hardcoce API_PRIVATE_KEY base 64 value API_PRIVATE_KEY = base64.encodebytes(b"some_api_key_1234") # time_use = time.time() # Hardcode the time so we can confirm the same result to C# time_use = 1586096626.919 api_nonce = bytes(str(int(time_use*1000)), "utf-8") print("API nonce: %s" % api_nonce) api_request = urllib.request.Request("https://www.website.com/getToken", b"nonce=%s" % api_nonce) api_request.add_header("API-Key", "API_PUBLIC_KEY_1234") print("API_PRIVATE_KEY: %s" % API_PRIVATE_KEY) h256Dig = hashlib.sha256(api_nonce + b"nonce=%s" % api_nonce).digest() api_sign = base64.b64encode(hmac.new(base64.b64decode(API_PRIVATE_KEY), b"/getToken" + h256Dig, hashlib.sha512).digest()) # api_request.add_header("API-Sign", api_sign) # print(json.loads(urllib.request.urlopen(api_request).read())['result']['token']) print("API-Sign: %s" % api_sign) Will output: API nonce: b'1586096626919' API_PRIVATE_KEY: b'c29tZV9hcGlfa2V5XzEyMzQ=\n' API-Sign: b'wOsXlzd3jOP/+Xa3AJbfg/OM8wLvJgHATtXjycf5EA3tclU36hnKAMMIu0yifznGL7yhBCYEwIiEclzWvOgCgg==' C# API-Sign generation: static string apiPublicKey = "API_PUBLIC_KEY"; // Hardcoce API_PRIVATE_KEY base 64 value static string apiPrivateKey = Base64EncodeString("some_api_key_1234"); static string endPoint = "https://www.website.com/getToken"; public static void Main() { Console.WriteLine("API-Sign: '{0}'", GenApiSign()); } static private string GenApiSign() { // string ApiNonce = DateTimeOffset.Now.ToUnixTimeMilliseconds().ToString(); // Hardcode the time so we can confirm the same result with Python string ApiNonce = "1586096626919"; Console.WriteLine("API nonce: {0}", ApiNonce); Console.WriteLine("API_PRIVATE_KEY: '{0}'", apiPrivateKey); byte[] ApiNonceBytes = Encoding.Default.GetBytes(ApiNonce); byte[] h256Dig = GenerateSHA256(CombineBytes(ApiNonceBytes, Encoding.Default.GetBytes("nonce="), ApiNonceBytes)); byte[] h256Token = CombineBytes(Encoding.Default.GetBytes("/getToken"), h256Dig); string ApiSign = Base64Encode(GenerateSHA512(Base64Decode(apiPrivateKey), h256Token)); return ApiSign; } // Helper functions ___________________________________________________ public static byte[] CombineBytes(byte[] first, byte[] second) { byte[] ret = new byte[first.Length + second.Length]; Buffer.BlockCopy(first, 0, ret, 0, first.Length); Buffer.BlockCopy(second, 0, ret, first.Length, second.Length); return ret; } public static byte[] CombineBytes(byte[] first, byte[] second, byte[] third) { byte[] ret = new byte[first.Length + second.Length + third.Length]; Buffer.BlockCopy(first, 0, ret, 0, first.Length); Buffer.BlockCopy(second, 0, ret, first.Length, second.Length); Buffer.BlockCopy(third, 0, ret, first.Length + second.Length, third.Length); return ret; } public static byte[] GenerateSHA256(byte[] bytes) { SHA256 sha256 = SHA256Managed.Create(); return sha256.ComputeHash(bytes); } public static byte[] GenerateSHA512(byte[] key, byte[] bytes) { var hash = new HMACSHA512(key); var result = 
hash.ComputeHash(bytes); hash.Dispose(); return result; } public static string Base64EncodeString(string plainText) { var plainTextBytes = System.Text.Encoding.UTF8.GetBytes(plainText); return System.Convert.ToBase64String(plainTextBytes); } public static string Base64Encode(byte[] bytes) { return System.Convert.ToBase64String(bytes); } public static byte[] Base64Decode(string base64EncodedData) { var base64EncodedBytes = System.Convert.FromBase64String(base64EncodedData); return base64EncodedBytes; } Will output: API nonce: 1586096626919 API_PRIVATE_KEY: 'c29tZV9hcGlfa2V5XzEyMzQ=' API-Sign: 'wOsXlzd3jOP/+Xa3AJbfg/OM8wLvJgHATtXjycf5EA3tclU36hnKAMMIu0yifznGL7yhBCYEwIiEclzWvOgCgg==' You can see it working and the result in this .NET Fiddle.
7
8
60,960,889
2020-3-31
https://stackoverflow.com/questions/60960889/django-formset-creates-multiple-inputs-for-multiple-image-upload
I am trying to create a simple post sharing form like this one. I'm using formset for image upload. But this gives me multiple input as you can see. Also each input can choose single image. But I'm trying to upload multiple image with single input. views.py def share(request): ImageFormSet = modelformset_factory(Images, form=ImageForm, extra=3) # 'extra' means the number of photos that you can upload ^ if request.method == 'POST': postForm = PostForm(request.POST) formset = ImageFormSet(request.POST, request.FILES, queryset=Images.objects.none()) if postForm.is_valid() and formset.is_valid(): post = postForm.save(commit=False) post.author = request.user post.save() for form in formset.cleaned_data: # this helps to not crash if the user # do not upload all the photos if form: image = form['image'] photo = Images(post=post, image=image) photo.save() return redirect("index") else: print(postForm.errors, formset.errors) else: postForm = PostForm() formset = ImageFormSet(queryset=Images.objects.none()) return render(request, "share.html", {"postForm": postForm, 'formset': formset}) share.html <form method="POST" id="post-form" class="post-form js-post-form" enctype="multipart/form-data"> {% csrf_token %} {{ formset.management_form }} {% for form in formset %} {{ form }} {% endfor %} </form> if you need, forms.py class PostForm(forms.ModelForm): class Meta: model = Post fields = ["title", "content"] def __init__(self, *args, **kwargs): super(PostForm, self).__init__(*args, **kwargs) self.fields['title'].widget.attrs.update({'class': 'input'}) self.fields['content'].widget.attrs.update({'class': 'textarea'}) class ImageForm(forms.ModelForm): image = forms.ImageField(label='Image') class Meta: model = Images fields = ('image', ) def __init__(self, *args, **kwargs): self.fields['image'].widget.attrs.update( {'class': 'fileinput', 'multiple': True}) models.py from django.db import models from django.contrib.auth.models import User from django.template.defaultfilters import slugify class Post(models.Model): # on_delete ile, bu kullanıcı silindiğinde bu kullanıcıya ait tüm postlar da silinecek. author = models.ForeignKey( "auth.User", on_delete=models.CASCADE, verbose_name="Yazar") title = models.CharField(max_length=150, verbose_name="Başlık") content = models.TextField(verbose_name="İçerik") # auto_now_add = True ile veritabanına eklendiği tarihi otomatik alacak created_date = models.DateTimeField(auto_now_add=True) # admin panelinde Post Object 1 yazması yerine başlığı yazsın istersek... def __str__(self): return self.title def get_image_filename(instance, filename): title = instance.post.title slug = slugify(title) return "post_images/%s-%s" % (slug, filename) class Images(models.Model): post = models.ForeignKey( Post, on_delete=models.DO_NOTHING, default=None) image = models.ImageField(upload_to=get_image_filename, verbose_name='Image', default="images/default_game_img.png")
If you need one field for multiple image upload, try this:

views.py

from django.shortcuts import render
from .forms import PostForm
from .models import Post, Images

def share(request):
    form = PostForm()
    if request.method == 'POST':
        post = Post()
        post.title = request.POST['title']
        post.content = request.POST['content']
        post.author = request.user
        post.save()
        for image in request.FILES.getlist('images'):
            image_obj = Images()  # the Images model from the question
            image_obj.post_id = post.id
            image_obj.image = image
            image_obj.save()
    return render(request, 'share.html', {'form': form})

forms.py

from django import forms

class PostForm(forms.Form):
    title = forms.CharField(label='', widget=forms.TextInput(attrs={'class': 'input'}))
    content = forms.CharField(label='', widget=forms.Textarea(attrs={'class': 'textarea'}))
    images = forms.ImageField(widget=forms.ClearableFileInput(attrs={'multiple': True}))

share.html

<form method="POST" id="post-form" class="post-form js-post-form" enctype="multipart/form-data">
    {% csrf_token %}
    {% for elem in form %}
        {{ elem }}
    {% endfor %}
</form>
7
8
60,928,718
2020-3-30
https://stackoverflow.com/questions/60928718/python-how-to-replace-tqdm-progress-bar-by-next-one-in-nested-loop
I use the tqdm module in a Jupyter Notebook. Let's say I have the following piece of code with a nested for loop:

import time
from tqdm.notebook import tqdm

for i in tqdm(range(3)):
    for j in tqdm(range(5)):
        time.sleep(1)

The output looks like this:

100%|██████████| 3/3 [00:15<00:00, 5.07s/it]
100%|██████████| 5/5 [00:10<00:00, 2.02s/it]
100%|██████████| 5/5 [00:05<00:00, 1.01s/it]
100%|██████████| 5/5 [00:05<00:00, 1.01s/it]

Is there any option to show only the current j progress bar during the run, so that the final output after finishing the iteration would look like this?

100%|██████████| 3/3 [00:15<00:00, 5.07s/it]
100%|██████████| 5/5 [00:05<00:00, 1.01s/it]
You can use the leave parameter when creating the progress bar. Something like this:

import time
from tqdm import tqdm

for i in tqdm(range(3)):
    for j in tqdm(range(5), leave=bool(i == 2)):
        time.sleep(1)
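A slightly more general variant of the same idea (my own sketch, not from the original answer): compute the condition from the length of the outer iterable so it does not hard-code the last index; this works the same way with tqdm.notebook.tqdm in Jupyter:

import time
from tqdm import tqdm  # or: from tqdm.notebook import tqdm in a notebook

outer = range(3)
for i in tqdm(outer):
    # keep the inner bar only on the final outer iteration
    for j in tqdm(range(5), leave=(i == len(outer) - 1)):
        time.sleep(1)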
10
12
60,951,814
2020-3-31
https://stackoverflow.com/questions/60951814/how-to-avoid-conda-activate-base-from-automatically-executing-in-my-vs-code-edit
PS E:\Python and Data Science\PythonDatabase> conda activate base conda : The term 'conda' is not recognized as the name of a cmdlet, function, script file, or operable program. Check the spelling of the name, or if a path was included, verify that the path is correct and try again. At line:1 char:1 + conda activate base + ~~~~~ + CategoryInfo : ObjectNotFound: (conda:String) [], CommandNotFoundException + FullyQualifiedErrorId : CommandNotFoundException PS E:\Python and Data Science\PythonDatabase> & C:/Users/Lenovo/Anaconda3/python.exe "e:/Python and Data Science/PythonDatabase/CourseHelper.py" Hello World PS E:\Python and Data Science\PythonDatabase>
You can set "python.terminal.activateEnvironment": false in your settings to deactivate activation of your environment. Alternatively, you can set "python.condaPath" to where conda exists so the extension can use conda appropriately.
16
26
60,928,734
2020-3-30
https://stackoverflow.com/questions/60928734/robust-way-to-ensure-other-people-can-run-my-python-program
I wish to place a python program on GitHub and have other people download and run it on their computers with assorted operating systems. I am relatively new to python but have used it enough to have noticed that getting the assorted versions of all the included modules to work together can be problematic. I just discovered the use of requirements.txt (generated with pipreqs and deployed with the command pip install -r /path/to/requirements.txt) but was very surprised to notice that requirements.txt does not actually state what version of python is being used so obviously it is not the complete solution on its own. So my question is: what set of specifications/files/something-else is needed to ensure that someone downloading my project will actually be able to run it with the fewest possible problems. EDIT: My plan was to be guided by whichever answer got the most upvotes. But so far, after 4 answers and 127 views, not a single answer has even one upvote. If some of the answers are no good, it would be useful to see some comments as to why they are no good.
Have you considered setting up a setup.py file? It's a handy way of bundling all of your... well, setup into a single location. All your user has to do is A) clone your repo and B) run pip install . to run the setup.py. There's a great Stack Overflow discussion about this, as well as a handy example written by the author of requests. This should cover most use cases. If you want to make it truly distributable, you'll want to look into publishing it on PyPI, the official distribution hub. Beyond that, if you're asking how to make a program "OS independent", there is no one-size-fits-all answer; it depends on what your code does and requires researching how your particular code interacts with each OS.
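As a rough illustration of the idea (a minimal sketch, not taken from the answer above; the package name, version and dependencies are placeholders), a setup.py can also pin the supported Python versions via python_requires, which addresses the point that requirements.txt does not record the Python version:

from setuptools import setup, find_packages

setup(
    name="myproject",            # hypothetical package name
    version="0.1.0",
    packages=find_packages(),
    python_requires=">=3.6",     # declares the supported Python versions
    install_requires=[
        "requests>=2.20",        # example pinned dependency
    ],
    entry_points={
        "console_scripts": [
            "myproject=myproject.main:main",  # hypothetical command-line entry point
        ]
    },
)

With that in place, pip install . installs the package and its dependencies and refuses to install on unsupported Python versions.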
17
16
60,949,451
2020-3-31
https://stackoverflow.com/questions/60949451/how-to-send-a-cvmat-to-python-over-shared-memory
I have a c++ application that sends data through to a python function over shared memory. This works great using ctypes in Python such as doubles and floats. Now, I need to add a cv::Mat to the function. My code currently is: //h #include <iostream> #include <opencv2\core.hpp> #include <opencv2\highgui.hpp> struct TransferData { double score; float other; int num; int w; int h; int channels; uchar* data; }; #define C_OFF 1000 void fill(TransferData* data, int run, uchar* frame, int w, int h, int channels) { data->score = C_OFF + 1.0; data->other = C_OFF + 2.0; data->num = C_OFF + 3; data->w = w; data->h = h; data->channels = channels; data->data = frame; } //.cpp namespace py = pybind11; using namespace boost::interprocess; void main() { //python setup Py_SetProgramName(L"PYTHON"); py::scoped_interpreter guard{}; py::module py_test = py::module::import("Transfer_py"); // Create Data windows_shared_memory shmem(create_only, "TransferDataSHMEM", read_write, sizeof(TransferData)); mapped_region region(shmem, read_write); std::memset(region.get_address(), 0, sizeof(TransferData)); TransferData* data = reinterpret_cast<TransferData*>(region.get_address()); //loop for (int i = 0; i < 10; i++) { int64 t0 = cv::getTickCount(); std::cout << "C++ Program - Filling Data" << std::endl; cv::Mat frame = cv::imread("input.jpg"); fill(data, i, frame.data, frame.cols, frame.rows, frame.channels()); //run the python function //process py::object result = py_test.attr("datathrough")(); int64 t1 = cv::getTickCount(); double secs = (t1 - t0) / cv::getTickFrequency(); std::cout << "took " << secs * 1000 << " ms" << std::endl; } std::cin.get(); } //Python //transfer data class import ctypes class TransferData(ctypes.Structure): _fields_ = [ ('score', ctypes.c_double), ('other', ctypes.c_float), ('num', ctypes.c_int), ('w', ctypes.c_int), ('h', ctypes.c_int), ('frame', ctypes.c_void_p), ('channels', ctypes.c_int) ] PY_OFF = 2000 def fill(data): data.score = PY_OFF + 1.0 data.other = PY_OFF + 2.0 data.num = PY_OFF + 3 //main Python function import TransferData import sys import mmap import ctypes def datathrough(): shmem = mmap.mmap(-1, ctypes.sizeof(TransferData.TransferData), "TransferDataSHMEM") data = TransferData.TransferData.from_buffer(shmem) print('Python Program - Getting Data') print('Python Program - Filling Data') TransferData.fill(data) How can I add the cv::Mat frame data into the Python side? I am sending it as a uchar* from c++, and as i understand, I need it to be a numpy array to get a cv2.Mat in Python. What is the correct approach here to go from 'width, height, channels, frameData' to an opencv python cv2.Mat? I am using shared memory because speed is a factor, I have tested using the Python API approach, and it is much too slow for my needs.
The general idea (as used in the OpenCV Python bindings) is to create a numpy ndarray that shares its data buffer with the Mat object, and pass that to the Python function. Note: At this point, I'll limit the example to continuous matrices only. We can take advantage of the pybind11::array class. We need to determine the appropriate dtype for the numpy array to use. This is a simple 1-to-1 mapping, which we can do using a switch: py::dtype determine_np_dtype(int depth) { switch (depth) { case CV_8U: return py::dtype::of<uint8_t>(); case CV_8S: return py::dtype::of<int8_t>(); case CV_16U: return py::dtype::of<uint16_t>(); case CV_16S: return py::dtype::of<int16_t>(); case CV_32S: return py::dtype::of<int32_t>(); case CV_32F: return py::dtype::of<float>(); case CV_64F: return py::dtype::of<double>(); default: throw std::invalid_argument("Unsupported data type."); } } Determine the shape for the numpy array. To make this behave similarly to OpenCV, let's have it map 1-channel Mats to 2D numpy arrays, and multi-channel Mats to 3D numpy arrays. std::vector<std::size_t> determine_shape(cv::Mat& m) { if (m.channels() == 1) { return { static_cast<size_t>(m.rows) , static_cast<size_t>(m.cols) }; } return { static_cast<size_t>(m.rows) , static_cast<size_t>(m.cols) , static_cast<size_t>(m.channels()) }; } Provide means of extending the shared buffer's lifetime to the lifetime of the numpy array. We can create a pybind11::capsule around a shallow copy of the source Mat -- due to the way the object is implemented, this effectively increases its reference count for the required amount of time. py::capsule make_capsule(cv::Mat& m) { return py::capsule(new cv::Mat(m) , [](void *v) { delete reinterpret_cast<cv::Mat*>(v); } ); } Now, we can perform the conversion. py::array mat_to_nparray(cv::Mat& m) { if (!m.isContinuous()) { throw std::invalid_argument("Only continuous Mats supported."); } return py::array(determine_np_dtype(m.depth()) , determine_shape(m) , m.data , make_capsule(m)); } Let's assume, we have a Python function like def foo(arr): print(arr.shape) captured in a pybind object fun. Then to call this function from C++ using a Mat as a source we'd do something like this: cv::Mat img; // Initialize this somehow auto result = fun(mat_to_nparray(img)); Sample Program #include <pybind11/pybind11.h> #include <pybind11/embed.h> #include <pybind11/numpy.h> #include <pybind11/stl.h> #include <opencv2/opencv.hpp> #include <iostream> namespace py = pybind11; // The 4 functions from above go here... 
int main() { // Start the interpreter and keep it alive py::scoped_interpreter guard{}; try { auto locals = py::dict{}; py::exec(R"( import numpy as np def test_cpp_to_py(arr): return (arr[0,0,0], 2.0, 30) )"); auto test_cpp_to_py = py::globals()["test_cpp_to_py"]; for (int i = 0; i < 10; i++) { int64 t0 = cv::getTickCount(); cv::Mat img(cv::Mat::zeros(1024, 1024, CV_8UC3) + cv::Scalar(1, 1, 1)); int64 t1 = cv::getTickCount(); auto result = test_cpp_to_py(mat_to_nparray(img)); int64 t2 = cv::getTickCount(); double delta0 = (t1 - t0) / cv::getTickFrequency() * 1000; double delta1 = (t2 - t1) / cv::getTickFrequency() * 1000; std::cout << "* " << delta0 << " ms | " << delta1 << " ms" << std::endl; } } catch (py::error_already_set& e) { std::cerr << e.what() << "\n"; } return 0; } Console Output * 4.56413 ms | 0.225657 ms * 3.95923 ms | 0.0736127 ms * 3.80335 ms | 0.0438603 ms * 3.99262 ms | 0.0577587 ms * 3.82262 ms | 0.0572 ms * 3.72373 ms | 0.0394603 ms * 3.74014 ms | 0.0405079 ms * 3.80621 ms | 0.054546 ms * 3.72177 ms | 0.0386222 ms * 3.70683 ms | 0.0373651 ms
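For completeness, if one keeps the raw shared-memory route from the question instead, the pixel bytes themselves (not just the uchar* pointer) would have to be copied into the mapped region by the C++ side; the Python side could then view them without a copy using numpy. A rough sketch of that idea, under the assumption that the header struct is followed directly by h*w*channels bytes of image data in the same mapping (the sizes and names here are illustrative, not from the original code):

import ctypes
import mmap

import numpy as np

import TransferData  # the ctypes struct module from the question

def read_frame(max_pixel_bytes=1920 * 1080 * 3):  # assumed upper bound on frame size
    header_size = ctypes.sizeof(TransferData.TransferData)
    shmem = mmap.mmap(-1, header_size + max_pixel_bytes, "TransferDataSHMEM")
    data = TransferData.TransferData.from_buffer(shmem)
    n_bytes = data.h * data.w * data.channels
    # Zero-copy view over the pixel bytes; OpenCV functions accept this array directly
    frame = np.frombuffer(shmem, dtype=np.uint8, count=n_bytes, offset=header_size)
    return frame.reshape((data.h, data.w, data.channels))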
9
15
60,954,146
2020-3-31
https://stackoverflow.com/questions/60954146/how-to-create-date-from-year-month-and-day-in-pyspark
I have three columns for year, month and day. How can I use these to create a date in PySpark?
You can use concat_ws() to concat columns with - and cast to date. #sampledata df.show() #+----+-----+---+ #|year|month|day| #+----+-----+---+ #|2020| 12| 12| #+----+-----+---+ from pyspark.sql.functions import * df.withColumn("date",concat_ws("-",col("year"),col("month"),col("day")).cast("date")).show() +----+-----+---+----------+ |year|month|day| date| +----+-----+---+----------+ |2020| 12| 12|2020-12-12| +----+-----+---+----------+ #dynamic way cols=["year","month","day"] df.withColumn("date",concat_ws("-",*cols).cast("date")).show() #+----+-----+---+----------+ #|year|month|day| date| #+----+-----+---+----------+ #|2020| 12| 12|2020-12-12| #+----+-----+---+----------+ #using date_format,to_timestamp,from_unixtime(unix_timestamp) functions df.withColumn("date",date_format(concat_ws("-",*cols),"yyyy-MM-dd").cast("date")).show() df.withColumn("date",to_timestamp(concat_ws("-",*cols),"yyyy-MM-dd").cast("date")).show() df.withColumn("date",to_date(concat_ws("-",*cols),"yyyy-MM-dd")).show() df.withColumn("date",from_unixtime(unix_timestamp(concat_ws("-",*cols),"yyyy-MM-dd"),"yyyy-MM-dd").cast("date")).show() #+----+-----+---+----------+ #|year|month|day| date| #+----+-----+---+----------+ #|2020| 12| 12|2020-12-12| #+----+-----+---+----------+
13
19
60,945,542
2020-3-31
https://stackoverflow.com/questions/60945542/string-formatting-using-many-pandas-columns-to-create-a-new-one
I would like to create a new column in a pandas DataFrame, just like I would using a Python f-string or the format function. Here is an example:

df = pd.DataFrame({"str": ["a", "b", "c", "d", "e"], "int": [1, 2, 3, 4, 5]})
print(df)

  str  int
0   a    1
1   b    2
2   c    3
3   d    4
4   e    5

I would like to obtain:

  str  int concat
0   a    1   a-01
1   b    2   b-02
2   c    3   c-03
3   d    4   d-04
4   e    5   e-05

So something like concat = f"{str}-{int:02d}", but directly with elements of the pandas columns. I imagine the solution uses pandas map, apply or agg, but I have not been successful. Many thanks for your help.
Use a list comprehension with f-strings:

df['concat'] = [f"{a}-{b:02d}" for a, b in zip(df['str'], df['int'])]

Or it is possible to use apply:

df['concat'] = df.apply(lambda x: f"{x['str']}-{x['int']:02d}", axis=1)

Or the solution from the comments with Series.str.zfill:

df["concat"] = df["str"] + "-" + df["int"].astype(str).str.zfill(2)
print(df)

  str  int concat
0   a    1   a-01
1   b    2   b-02
2   c    3   c-03
3   d    4   d-04
4   e    5   e-05
12
15
60,919,782
2020-3-29
https://stackoverflow.com/questions/60919782/how-to-get-count-on-mongodbs-motor-driver
I want to get a count with Motor's driver but I got this error: AttributeError: 'AsyncIOMotorCursor' object has no attribute 'count' This is my code: await MOTOR_CURSOR.users.find().count()
MotorCollection.find() returns an AsyncIOMotorCursor, which does not have a count method. You should call MotorCollection.count_documents() instead: await db.users.count_documents({'x': 1}) Also worth noting that what you're referring to as MOTOR_CURSOR is a MotorDatabase instance; it would be preferable to call it a db instance rather than a cursor to avoid confusion.
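If the goal is simply the total number of documents, the same call with an empty filter works; Motor also exposes estimated_document_count(), which uses collection metadata instead of scanning (a small sketch, assuming db is an AsyncIOMotorDatabase):

# Exact count of documents matching a filter (empty filter = all documents)
total = await db.users.count_documents({})

# Faster, metadata-based estimate of the collection size
approx = await db.users.estimated_document_count()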
7
10
60,879,235
2020-3-27
https://stackoverflow.com/questions/60879235/python-windows-10-launching-an-application-on-a-specific-virtual-desktop-envir
I have 3 different Windows 10 virtual desktops. When the computer starts up, I want python to load all of my applications in the different virtual desktops. Right now I can only start things in Desktop 1. How do I tell python to launch an app but in Desktop 2 and 3? I'm using python 3.6.
How do I tell python to launch an app but in Desktop 2 and 3? This can be achieved by launching your applications with subprocess.Popen(), then changing virtual desktop by calling GoToDesktopNumber() from VirtualDesktopAccessor.dll with the help of ctypes, and launching your applications again. Tested with 64-bit Windows 10 Version 10.0.18363.720. VirtualDesktopAccessor.dll by Jari Pennanen exports the functions a part of the mostly undocumented (by Microsoft) Virtual Desktop API. Put the dll in the current working directory. import ctypes, time, shlex, subprocess def launch_apps_to_virtual_desktops(command_lines, desktops=3): virtual_desktop_accessor = ctypes.WinDLL("VirtualDesktopAccessor.dll") for i in range(desktops): virtual_desktop_accessor.GoToDesktopNumber(i) time.sleep(0.25) # Wait for the desktop to switch for command_line in command_lines: if command_line: subprocess.Popen(shlex.split(command_line)) time.sleep(2) # Wait for apps to open their windows virtual_desktop_accessor.GoToDesktopNumber(0) # Go back to the 1st desktop command_lines = r""" "C:\Program Files (x86)\Google\Chrome Beta\Application\chrome.exe" "C:\Program Files (x86)\Adobe\Acrobat Reader DC\Reader\AcroRd32.exe" "C:\StudyGuide.pdf" "C:\Program Files\Mozilla Firefox\firefox.exe" "C:\Program Files\VideoLAN\VLC\vlc.exe" """.splitlines() launch_apps_to_virtual_desktops(command_lines) The time.sleep() calls are needed because Windows doesn't change virtual desktops instantly (presumably because of animations), and to give the processes time to create windows. You might need to tweak the timings. Note that some applications only allow one instance/process, so you can't get multiple separate windows for each virtual desktop (e.g. Adobe Reader with default settings). Another strategy I tried was launching the applications, sleeping for a bit to allow the windows to be created, then calling MoveWindowToDesktopNumber() to move every window created by the new processes to different virtual desktops. The problem with that is, for applications like Chrome or Firefox, the new process is immediately closed if an existing process already exists, so it doesn't move the new windows (which actually belong to another, older process) to another desktop. import ctypes, time, shlex, subprocess from ctypes.wintypes import * from ctypes import windll, byref def get_windows(pid): current_window = 0 pid_local = DWORD() while True: current_window = windll.User32.FindWindowExA(0, current_window, 0, 0) windll.user32.GetWindowThreadProcessId(current_window, byref(pid_local)) if pid == pid_local.value: yield current_window if current_window == 0: return def launch_apps_to_virtual_desktops_by_moving(command_lines, desktops=3): virtual_desktop_accessor = ctypes.WinDLL("VirtualDesktopAccessor.dll") for i in range(desktops): pids = [] for command_line in command_lines: if command_line: args = shlex.split(command_line) pids.append(subprocess.Popen(args).pid) time.sleep(3) for pid in pids: for window in get_windows(pid): window = HWND(window) virtual_desktop_accessor.MoveWindowToDesktopNumber(window, i) command_lines = r""" "C:\Program Files (x86)\Google\Chrome Beta\Application\chrome.exe" "C:\Program Files (x86)\Adobe\Acrobat Reader DC\Reader\AcroRd32.exe" "C:\StudyGuide.pdf" "C:\Program Files\Mozilla Firefox\firefox.exe" "C:\Program Files\VideoLAN\VLC\vlc.exe" """.splitlines() launch_apps_to_virtual_desktops_by_moving(command_lines)
10
14
60,905,976
2020-3-28
https://stackoverflow.com/questions/60905976/cloudfront-give-access-denied-response-created-through-aws-cdk-python-for-s3-buc
Created Cloud Front web distribution with AWS CDK for S3 bucket without public access. Able to create Origin access identity, and deploy but on successful deploy i get access denied response on browser. Grant Read Permissions on Bucket from Origin settings will be set to No, setting this to Yes manually everything will work fine, but this setting needs to be achieved through AWS CDK and python. Below is my code. from aws_cdk import aws_cloudfront as front, aws_s3 as s3 class CloudFrontStack(core.Stack): def __init__(self, scope: core.Construct, idx: str, **kwargs) -> None: super().__init__(scope, idx, **kwargs) bucket = s3.Bucket.from_bucket_name(self, 'CloudFront',bucket_name="bucket_name") oia = aws_cloudfront.OriginAccessIdentity(self, 'OIA', comment="Created By CDK") bucket.grant_read(oia) s3_origin_source = aws_cloudfront.S3OriginConfig(s3_bucket_source=bucket, origin_access_identity=oia) source_config = aws_cloudfront.SourceConfiguration(s3_origin_source=s3_origin_source, origin_path="bucket_path", behaviors=[aws_cloudfront.Behavior(is_default_behavior=True)]) aws_cloudfront.CloudFrontWebDistribution(self, "cloud_front_name", origin_configs=[source_config], comment='Cloud Formation created', default_root_object='index.html') I also tried adding the permissions to the as below but still no luck. policyStatement = aws_iam.PolicyStatement() policyStatement.add_resources() policyStatement.add_actions('s3:GetBucket*'); policyStatement.add_actions('s3:GetObject*'); policyStatement.add_actions('s3:List*'); policyStatement.add_resources(bucket.bucket_arn); policyStatement.add_canonical_user_principal(oia.cloud_front_origin_access_identity_s3_canonical_user_id); code_bucket.add_to_resource_policy(policyStatement);
I tried to mimic this and was able to integrate Cloudfront distribution to a private S3 bucket successfully. However, I used TS for my stack. I am sure it will be easy to correlate below code to Python version. Assume there is an index.html file in dist aws-cdk v1.31.0 (latest as of March 29th, 2020) import { App, Stack, StackProps } from '@aws-cdk/core'; import { BucketDeployment, Source } from '@aws-cdk/aws-s3-deployment'; import { CloudFrontWebDistribution, OriginAccessIdentity } from '@aws-cdk/aws-cloudfront'; import { BlockPublicAccess, Bucket, BucketEncryption } from '@aws-cdk/aws-s3'; export class HelloCdkStack extends Stack { constructor(scope: App, id: string, props?: StackProps) { super(scope, id, props); const myFirstBucket = new Bucket(this, 'MyFirstBucket', { versioned: true, encryption: BucketEncryption.S3_MANAGED, bucketName: 'cdk-example-bucket-for-test', websiteIndexDocument: 'index.html', blockPublicAccess: BlockPublicAccess.BLOCK_ALL }); new BucketDeployment(this, 'DeployWebsite', { sources: [Source.asset('dist')], destinationBucket: myFirstBucket }); const oia = new OriginAccessIdentity(this, 'OIA', { comment: "Created by CDK" }); myFirstBucket.grantRead(oia); new CloudFrontWebDistribution(this, 'cdk-example-distribution', { originConfigs: [ { s3OriginSource: { s3BucketSource: myFirstBucket, originAccessIdentity: oia }, behaviors: [ { isDefaultBehavior: true } ] } ] }); } } == Update == [S3 bucket without Web Hosting] Here is an example where S3 is used as an Origin without Web hosting. It works as expected. import { App, Stack, StackProps } from '@aws-cdk/core'; import { BucketDeployment, Source } from '@aws-cdk/aws-s3-deployment'; import { CloudFrontWebDistribution, OriginAccessIdentity } from '@aws-cdk/aws-cloudfront'; import { BlockPublicAccess, Bucket, BucketEncryption } from '@aws-cdk/aws-s3'; export class CloudfrontS3Stack extends Stack { constructor(scope: App, id: string, props?: StackProps) { super(scope, id, props); // Create bucket (which is not a static website host), encrypted AES-256 and block all public access // Only Cloudfront access to S3 bucket const testBucket = new Bucket(this, 'TestS3Bucket', { encryption: BucketEncryption.S3_MANAGED, bucketName: 'cdk-static-asset-dmahapatro', blockPublicAccess: BlockPublicAccess.BLOCK_ALL }); // Create Origin Access Identity to be use Canonical User Id in S3 bucket policy const originAccessIdentity = new OriginAccessIdentity(this, 'OAI', { comment: "Created_by_dmahapatro" }); testBucket.grantRead(originAccessIdentity); // Create Cloudfront distribution with S3 as Origin const distribution = new CloudFrontWebDistribution(this, 'cdk-example-distribution', { originConfigs: [ { s3OriginSource: { s3BucketSource: testBucket, originAccessIdentity: originAccessIdentity }, behaviors: [ { isDefaultBehavior: true } ] } ] }); // Upload items in bucket and provide distribution to create invalidations new BucketDeployment(this, 'DeployWebsite', { sources: [Source.asset('dist')], destinationBucket: testBucket, distribution, distributionPaths: ['/images/*.png'] }); } } == UPDATE == [S3 Bucket imported instead of creating in the same stack] When we refer to an existing S3 bucket the issue can be recreated. Reason: The root cause of the issue lies here in this line of code. autoCreatePolicy will always be false for an imported S3 bucket. 
To make addResourcePolicy work either the imported bucket has to already have an existing Bucket policy so that the new policy statements can be appended or manually create new BucketPolicy and add the policy statements. In the below code I have manually created the bucket policy and add the required policy statements. This is very close to the github issue #941 but the subtle difference is between creating a bucket in the stack vs importing an already created bucket. import { App, Stack, StackProps } from '@aws-cdk/core'; import { CloudFrontWebDistribution, OriginAccessIdentity } from '@aws-cdk/aws-cloudfront'; import { Bucket, BucketPolicy } from '@aws-cdk/aws-s3'; import { PolicyStatement } from '@aws-cdk/aws-iam'; export class CloudfrontS3Stack extends Stack { constructor(scope: App, id: string, props?: StackProps) { super(scope, id, props); const testBucket = Bucket.fromBucketName(this, 'TestBucket', 'dmahapatro-personal-bucket'); // Create Origin Access Identity to be use Canonical User Id in S3 bucket policy const originAccessIdentity = new OriginAccessIdentity(this, 'OAI', { comment: "Created_by_dmahapatro" }); // This does not seem to work if Bucket.fromBucketName is used // It works for S3 buckets which are created as part of this stack // testBucket.grantRead(originAccessIdentity); // Explicitly add Bucket Policy const policyStatement = new PolicyStatement(); policyStatement.addActions('s3:GetBucket*'); policyStatement.addActions('s3:GetObject*'); policyStatement.addActions('s3:List*'); policyStatement.addResources(testBucket.bucketArn); policyStatement.addResources(`${testBucket.bucketArn}/*`); policyStatement.addCanonicalUserPrincipal(originAccessIdentity.cloudFrontOriginAccessIdentityS3CanonicalUserId); // testBucket.addToResourcePolicy(policyStatement); // Manually create or update bucket policy if( !testBucket.policy ) { new BucketPolicy(this, 'Policy', { bucket: testBucket }).document.addStatements(policyStatement); } else { testBucket.policy.document.addStatements(policyStatement); } // Create Cloudfront distribution with S3 as Origin const distribution = new CloudFrontWebDistribution(this, 'cdk-example-distribution', { originConfigs: [ { s3OriginSource: { s3BucketSource: testBucket, originAccessIdentity: originAccessIdentity }, behaviors: [ { isDefaultBehavior: true } ] } ] }); } }
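Translating the first TypeScript example back into the Python CDK (v1) classes already used in the question, a minimal sketch looks roughly like this; the key point is that the bucket is created in the same stack, so grant_read can append the OAI to its bucket policy (bucket and construct names are placeholders):

from aws_cdk import core, aws_s3 as s3, aws_cloudfront as cloudfront

class CloudFrontStack(core.Stack):
    def __init__(self, scope: core.Construct, idx: str, **kwargs) -> None:
        super().__init__(scope, idx, **kwargs)

        # Bucket created in this stack, so the CDK can manage its bucket policy
        bucket = s3.Bucket(self, "SiteBucket",
                           block_public_access=s3.BlockPublicAccess.BLOCK_ALL)

        oia = cloudfront.OriginAccessIdentity(self, "OIA", comment="Created by CDK")
        bucket.grant_read(oia)  # adds the OAI canonical user to the bucket policy

        source_config = cloudfront.SourceConfiguration(
            s3_origin_source=cloudfront.S3OriginConfig(
                s3_bucket_source=bucket,
                origin_access_identity=oia),
            behaviors=[cloudfront.Behavior(is_default_behavior=True)])

        cloudfront.CloudFrontWebDistribution(self, "SiteDistribution",
                                             origin_configs=[source_config],
                                             default_root_object="index.html")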
14
47
60,914,361
2020-3-29
https://stackoverflow.com/questions/60914361/python-import-error-module-factory-has-no-attribute-fuzzy
I'm new to the factory_boy module. In my code, I import factory and then use this import to access the fuzzy attribute with factory.fuzzy, but it throws the error module 'factory' has no attribute 'fuzzy'. I solved this problem by importing like this:

import factory
from factory import fuzzy

With that, there were no errors. What is the reason for this?
Why does this happen? When you import a Python module (your import factory), you can then directly access what is declared in that module (e.g. factory.Factory): all symbols declared in the module are automatically exported. However, if a nested module is not imported in its parent, you have to import it directly. Here, factory.Factory is available because factory/__init__.py contains: from .base import Factory => When you type factory.Factory, Python looks up the symbol named Factory in factory/__init__.py, which is (per the above line) a reference to the Factory class defined in factory/base.py. Since there is no from . import fuzzy line in factory/__init__.py, Python cannot load it this way. But why not add this line? Other modules in the factory_boy package have dependencies on third-party packages; for instance, factory.django imports Django. If factory/__init__.py contained the from . import django line (required to have factory.django available from import factory), every program that runs import factory would require Django to be installed. In order to allow users of the package to decide what they depend on, the choice was made not to add those direct imports at the package top level when possible; this allows future versions to add external dependencies without breaking existing code.
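The same behaviour is easy to reproduce with a toy package (a hypothetical layout, not part of factory_boy itself): a submodule only becomes an attribute of its parent package once something imports it.

# Layout of a hypothetical package:
#   pkg/__init__.py   ->  contains:  from .base import Base
#   pkg/base.py       ->  defines class Base
#   pkg/extras.py     ->  defines class Extra

import pkg

pkg.Base          # works: re-exported by pkg/__init__.py
# pkg.extras      # would raise AttributeError: module 'pkg' has no attribute 'extras'

from pkg import extras   # explicitly importing the submodule...
pkg.extras.Extra          # ...also makes it reachable as an attribute of pkg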
8
6
60,932,036
2020-3-30
https://stackoverflow.com/questions/60932036/check-if-pandas-column-contains-all-elements-from-a-list
I have a df like this:

frame = pd.DataFrame({'a': ['a,b,c', 'a,c,f', 'b,d,f', 'a,z,c']})

And a list of items:

letters = ['a', 'c']

My goal is to get all the rows from frame that contain at least the two elements in letters. I came up with this solution:

for i in letters:
    subframe = frame[frame['a'].str.contains(i)]

This gives me what I want, but it might not be the best solution in terms of scalability. Is there any 'vectorised' solution? Thanks
I would build a list of Series, and then apply a vectorized np.all:

import numpy as np

contains = [frame['a'].str.contains(i) for i in letters]
result = frame[np.all(contains, axis=0)]

It gives, as expected:

       a
0  a,b,c
1  a,c,f
3  a,z,c
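An alternative sketch of the same filter (my addition, not part of the answer above) treats each cell as a set of tokens, which also sidesteps the substring pitfall of str.contains if tokens ever share characters:

required = set(letters)
# split each cell on commas and check that all required tokens are present
mask = frame['a'].str.split(',').map(required.issubset)
result = frame[mask]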
25
21
60,895,196
2020-3-27
https://stackoverflow.com/questions/60895196/pandas-dataframe-droping-certain-hours-of-the-day-from-20-years-of-historical
I have stock market data for a single security going back 20 years. The data is currently in a Pandas DataFrame. The problem is, I do not want any "after hours" trading data in my DataFrame. The market in question is open from 9:30 AM to 4 PM (09:30 to 16:00 on each trading day). I would like to drop all rows of data that are not within this time frame. My instinct is to use a Pandas mask, which I know how to do if I want certain hours of a single day:

mask = (df['date'] > '2015-07-06 09:30:0') & (df['date'] <= '2015-07-06 16:00:0')
sub = df.loc[mask]

However, I have no idea how to use one on a revolving basis to remove the data for certain times of day over a 20-year period.
Problem here is how you are importing data. There is no indicator whether 04:00 is am or pm? but based on your comments we need to assume it is PM. However input is showing it as AM. To solve this we need to include two conditions with OR clause. 9:30-11:59 0:00-4:00 Input: df = pd.DataFrame({'date': {880551: '2015-07-06 04:00:00', 880552: '2015-07-06 04:02:00',880553: '2015-07-06 04:03:00', 880554: '2015-07-06 04:04:00', 880555: '2015-07-06 04:05:00'}, 'open': {880551: 125.00, 880552: 125.36,880553: 125.34, 880554: 125.08, 880555: 125.12}, 'high': {880551: 125.00, 880552: 125.36,880553: 125.34, 880554: 125.11, 880555: 125.12}, 'low': {880551: 125.00, 880552: 125.32,880553: 125.21, 880554: 125.05, 880555: 125.12}, 'close': {880551: 125.00, 880552: 125.32,880553: 125.21, 880554: 125.05, 880555: 125.12}, 'volume': {880551: 141, 880552: 200,880553: 750, 880554: 17451, 880555: 1000}, }, ) df.head() date open high low close volume 880551 2015-07-06 04:00:00 125.00 125.00 125.00 125.00 141 880552 2015-07-06 04:02:00 125.36 125.36 125.32 125.32 200 880553 2015-07-06 04:03:00 125.34 125.34 125.21 125.21 750 880554 2015-07-06 04:04:00 125.08 125.11 125.05 125.05 17451 880555 2015-07-06 04:05:00 125.12 125.12 125.12 125.12 1000 from datetime import time start_first = time(9, 30) end_first = time(11, 59) start_second = time(0, 00) end_second = time(4,00) df['date'] = pd.to_datetime(df['date']) df= df[(df['date'].dt.time.between(start_first, end_first)) | (df['date'].dt.time.between(start_second, end_second))] df date open high low close volume 880551 2015-07-06 04:00:00 125.0 125.0 125.0 125.0 141 Above is not good practice, and I strongly discourage to use this kind of ambiguous data. long time solution is to correctly populate data with am/pm. We can achieve it in two way in case of correct data format: 1) using datetime from datetime import time start = time(9, 30) end = time(16) df['date'] = pd.to_datetime(df['date']) df= df[df['date'].dt.time.between(start, end)] 2) using between time, which only works with datetime index df['date'] = pd.to_datetime(df['date']) df = (df.set_index('date') .between_time('09:30', '16:00') .reset_index()) If you still face error, edit your question with line by line approach and exact error.
9
9
60,909,380
2020-3-29
https://stackoverflow.com/questions/60909380/django-serializers-vs-rest-framework-serializers
What is the difference between Django serializers and rest_framework serializers? I am making a webapp where I want the API to be part of the primary app created by the project, not a separate app for the API functionality. Which serializer do I need to use so that it works for Django views and models and, at the same time, for the API? from django.core import serializers https://docs.djangoproject.com/en/3.0/topics/serialization/ from rest_framework import serializers https://www.django-rest-framework.org/api-guide/serializers/
tl;dr If you want to create just a few very small API endpoints and don't want to use DRF, you're better off manually building the dictionaries. Django core serializers are not meant for external consumers. You can use the same primary app in your project and make it work with DRF in parallel. Just add a serializers.py file with the definitions, add the DRF logic in the same views.py file and do the routing. You could use function based views. Detailed explanation of differences Let's say you have the following model class Employee(models.Model): identification_number = models.CharField(max_length=12) first_name = models.CharField(max_length=50) last_name = models.CharField(max_length=50) And you want to create an endpoint /employees/ that returns all such objects with JSON representation { "first_name": "Jon", "last_name": "Skeet" } With Django serializers from django.core import serializers from django.http import HttpResponse class EmployeeView(View): def get(self, request): employees = Employee.objects.all() serialized = serializers.serialize( 'json', employees, fields=('first_name', 'last_name'), ) return HttpResponse(serialized) and the result you get would be a list of dictionaries of the form { "fields" : { "first_name" : "Jon", "last_name" : "Skeet" }, "model" : "employees.Employee", "pk" : 12 } But this isn't what we're looking for. Django core serializers are meant to serialize models as representations of what's in the database. This is made explicit by the fact that the dumpdata command uses it. python manage.py dumpdata employees.Employee | json_pp [ { "fields" : { "identification_number" : "20201293", "first_name" : "Jon", "last_name" : "Skeet" }, "model" : "employees.Employee", "pk" : 12 } ] Now, of course you could do some things to your code to get the representation you want, but this module is not meant to be used for API views to be consumed by a external consumer. With Django REST framework Here we can create serializer classes that are independent of the Model. This is important since the external representation of the object is kept separate from the internal one. class EmployeeSerializer(serializers.ModelSerializer): class Meta: model = Employee fields = ( 'first_name', 'last_name', ) and, trying to use only the most basic serialization-deserialization features of DRF, we would get from rest_framework.renderers import JSONRenderer from django.http import HttpResponse class EmployeeView(View): def get(self, request): employees = Employee.objects.all() serialized = EmployeeSerializer(employees, many=True) json_representation = JSONRenderer().render(serialized.data) return HttpResponse(json_representation) and result in the representation we were looking for. Now, of course you usually don't use DRF as in the last example, but instead from rest_framework import viewsets class EmployeeViewSet(viewsets.ReadOnlyModelViewSet): queryset = Employee.objects.all() serializer_class = EmployeeSerializer It takes care of all the boilerplate so it's really convenient and, in contrast with the Django core serializers, this is really meant for external consumers.
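To keep the API inside the primary app, as the question asks, the viewset from the last example can be exposed through that same app's urls.py with a DRF router. A minimal sketch (module paths are illustrative):

# urls.py of the primary app (paths are illustrative)
from django.urls import include, path
from rest_framework.routers import DefaultRouter

from .views import EmployeeViewSet

router = DefaultRouter()
router.register(r'employees', EmployeeViewSet)

urlpatterns = [
    # the app's regular Django views go here as usual
    path('api/', include(router.urls)),
]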
7
12
60,920,851
2020-3-29
https://stackoverflow.com/questions/60920851/django-models-how-can-i-create-abstract-methods
In Django, abstract classes seem to be possible by using: class Meta: abstract = True However, I do not see how to declare abstract methods/functions within these classes that do not contain any logic, e.g. class AbstractClass(models.Model): def abstractFunction(): class Meta: abstract = True The abc library, respectively the @abstractmethod decorator, doesn't seem to be applicable here, or am I wrong?
From what I understand, you're correct. Meta: abstract=True in Django models is there to make sure Django doesn't create database tables for the model. It predates Python's ABC, which might explain the naming/functional confusion. Inheriting from both ABC and django.db.models.Model raises a metaclass exception. However, a quick search gave me this, don't know if it breaks anything: https://gist.github.com/gavinwahl/7778717 I do like how ABC raises an exception at class instantiation, so it would be pretty neat. Let me know if you find it useful.
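To make the gist's idea concrete, here is a minimal sketch (model and method names are made up for illustration, and it assumes combining the two metaclasses is acceptable in your project):

from abc import ABCMeta, abstractmethod
from django.db import models

# Let ABCMeta and Django's ModelBase metaclass cooperate.
class AbstractModelMeta(ABCMeta, type(models.Model)):
    pass

class AbstractQuestion(models.Model, metaclass=AbstractModelMeta):
    class Meta:
        abstract = True

    @abstractmethod
    def get_answer(self):
        ...

class ConcreteQuestion(AbstractQuestion):
    def get_answer(self):
        return "42"

# AbstractQuestion() raises TypeError at instantiation; ConcreteQuestion() works.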
7
8
60,887,648
2020-3-27
https://stackoverflow.com/questions/60887648/colorize-the-background-of-a-seaborn-plot-using-a-column-in-dataframe
Question How to shade or colorize the background of a seaborn plot using a column of a dataframe? Code snippet import numpy as np import seaborn as sns; sns.set() import matplotlib.pyplot as plt fmri = sns.load_dataset("fmri") fmri.sort_values('timepoint',inplace=True) ax = sns.lineplot(x="timepoint", y="signal", data=fmri) arr = np.ones(len(fmri)) arr[:300] = 0 arr[600:] = 2 fmri['background'] = arr ax = sns.lineplot(x="timepoint", y="signal", hue="event", data=fmri) Which produced this graph: Desired output What I'd like to have, based on the value in the new column 'background' and any palette or user-defined colors, is something like this:
ax.axvspan() could work for you, assuming backgrounds don't overlap over timepoints. import numpy as np import seaborn as sns; sns.set() import matplotlib.pyplot as plt fmri = sns.load_dataset("fmri") fmri.sort_values('timepoint',inplace=True) arr = np.ones(len(fmri)) arr[:300] = 0 arr[600:] = 2 fmri['background'] = arr fmri['background'] = fmri['background'].astype(int).astype(str).map(lambda x: 'C'+x) ax = sns.lineplot(x="timepoint", y="signal", hue="event", data=fmri) ranges = fmri.groupby('background')['timepoint'].agg(['min', 'max']) for i, row in ranges.iterrows(): ax.axvspan(xmin=row['min'], xmax=row['max'], facecolor=i, alpha=0.3)
10
13
60,908,298
2020-3-28
https://stackoverflow.com/questions/60908298/cannot-use-assignment-expressions-with-subscript
if session['dp'] := current_user.avatar : ^ SyntaxError: cannot use assignment expressions with subscript Why does Python forbid this use of the walrus operator?
Because, as the alternative name (named expressions) suggests, the left-hand side of the walrus operator has to be a NAME. Therefore, by definition, expressions such as the subscript in your question, as well as, for instance, attribute or function-call targets, cannot be assigned to in this form. The documentation also specifies: Single assignment targets other than a single NAME are not supported To further this argument, one can notice that CPython explicitly checks whether the target is Name_kind: if (target->kind != Name_kind) { const char *expr_name = get_expr_name(target); if (expr_name != NULL) { ast_error(c, n, "cannot use assignment expressions with %s", expr_name); } return NULL; }
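A common workaround, sketched from the snippet in the question, is to bind to a plain name first and do the subscript assignment separately:

if dp := current_user.avatar:
    session['dp'] = dp
    # ... rest of the branch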
20
13
60,910,345
2020-3-29
https://stackoverflow.com/questions/60910345/django-how-to-annotate-multiple-fields-from-a-subquery
I'm working on a Django project in which I have a queryset of 'A' objects ( A.objects.all() ), and I need to annotate multiple fields from a 'B' objects' Subquery. The problem is that the annotate method can only deal with one field type per parameter (DecimalField, CharField, etc.), so, in order to annotate multiple fields, I must use something like: A.objects.all().annotate(b_id =Subquery(B_queryset.values('id')[:1], b_name =Subquery(B_queryset.values('name')[:1], b_other_field =Subquery(B_queryset.values('other_field')[:1], ... ) Which is very inefficient, as it creates a new subquery/subselect in the final SQL for each field I want to annotate. I would like to use the same subselect with multiple fields in its values() params, and annotate them all on A's queryset. I'd like to use something like this: b_subquery = Subquery(B_queryset.values('id', 'name', 'other_field', ...)[:1]) A.objects.all().annotate(b=b_subquery) But when I try to do that (and access the first element A.objects.all().annotate(b=b_subquery)[0]) it raises an exception: {FieldError}Expression contains mixed types. You must set output_field. And if I set Subquery(B_quer...[:1], output_field=ForeignKey(B, models.DO_NOTHING)), I get a DB exception: {ProgrammingError}subquery must return only one column In a nutshell, the whole problem is that I have multiple Bs that "belong" to an A, so I need to use Subquery to, for every A in A.objects.all(), pick a specific B and attach it to that A, using OuterRefs and a few filters (I only want a few fields of B), which seems like a trivial problem to me. Thanks for any help in advance!
What I do in such situations is to use prefetch_related: a_qs = A.objects.all().prefetch_related( models.Prefetch('b_set', # NOTE: no need to filter with OuterRef (it won't work anyway) # Django automatically filters and matches B objects to A queryset=B_queryset, to_attr='b_records' ) ) Now a.b_records will be a list containing a's related b objects. Depending on how you filter your B_queryset, this list may be limited to only 1 object.
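A possible usage sketch of what that gives you (assuming B points to A through a ForeignKey with the default reverse name b_set, and that B_queryset is ordered so the B you want comes first):

for a in a_qs:
    b = a.b_records[0] if a.b_records else None   # the specific B attached to this A
    print(a.pk, b.name if b else None)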
23
19
60,909,455
2020-3-29
https://stackoverflow.com/questions/60909455/what-will-happen-if-you-dont-await-a-async-function
If I don't use await to call the async function, I will get back a coroutine. In that case, what will happen to the coroutine? Do I have to manually execute the coroutine? Or will this coroutine continue to run by itself in the background? Using await async def work(): result = await stuff() Without await async def work(): result = stuff()
From the official docs: Note that simply calling a coroutine will not schedule it to be executed: That means the coroutine body never actually runs: nothing executes it and nothing waits for it if you do not place await before the call (or schedule it some other way). You could instead schedule it as a task, or several tasks, using asyncio: import asyncio async def main(): loop = asyncio.get_event_loop() t1 = loop.create_task(stuff()) t2 = loop.create_task(stuff()) await asyncio.gather(t1, t2) loop = asyncio.get_event_loop() loop.run_until_complete(main()) To find out more about this, I would recommend reading https://docs.python.org/3/library/asyncio-task.html
10
8
60,904,532
2020-3-28
https://stackoverflow.com/questions/60904532/first-row-to-header-with-pandas
I have the following pandas dataframe df : import pandas as pd from io import StringIO s = '''\ "Unnamed: 0","Unnamed: 1" Objet,"UnitΓ©s vendues" Chaise,3 Table,2 Tabouret,1 ''' df = pd.read_csv(StringIO(s)) which looks like: Unnamed: 0 Unnamed: 1 0 Objet UnitΓ©s vendues 1 Chaise 3 2 Table 2 3 Tabouret 1 My goal is to make the first row the header. I use: headers = df.iloc[0] df.columns = [headers] However, the "0" appears as the columns' name (which is normal, because this 0 was in the first row). 0 Objet UnitΓ©s vendues 1 Chaise 3 2 Table 2 I tried to delete it in many ways, but nothing works: neither del df.index.name from this post nor df.columns.name = None from this post or this one (which is the same situation). How can I get this expected output: Objet UnitΓ©s vendues 1 Chaise 3 2 Table 2
What about defining that when you load your table in the first place? pd.read_csv('filename', header = 1) Otherwise, I guess you can just do this: df.drop('0', axis = 1)
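If the dataframe is already loaded, a small sketch of the usual post-hoc fix (assigning the row directly as the columns, then clearing the leftover name):

df.columns = df.iloc[0]     # first data row becomes the header
df = df.iloc[1:]            # drop that row from the data
df.columns.name = None      # remove the leftover "0" name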
7
3
60,903,145
2020-3-28
https://stackoverflow.com/questions/60903145/typeerror-cannot-instantiate-typing-optional
I have this method: def select_unassigned_variable(self, variables: List[V]) -> Optional(V): I want this to return something of type V, or None in some cases. But I get this error: TypeError: Cannot instantiate typing.Optional What should I change?
You need to use it with brackets instead of parentheses: def select_unassigned_variable(self, variables: List[V]) -> Optional[V]: like you did with List.
7
22
60,894,798
2020-3-27
https://stackoverflow.com/questions/60894798/importerror-cannot-import-name-bigquery
This must be a super trivial issue, but I've updated my Windows virtual machine with: pip install --upgrade google-cloud-storage However, when I run the script I still receive the following error: Traceback (most recent call last): File "file.py", line 6, in <module> from google.cloud import bigquery, storage ImportError: cannot import name 'bigquery' Any suggestions or workarounds? Thanks, Neel R
From a fresh setup of the VM; Failed - pip3 install --upgrade google-cloud Worked - pip3 install --upgrade google-cloud-bigquery Worked - pip3 install --upgrade google-cloud-storage It appears that individual product solutions should be installed instead of the generic google-cloud. If you're still stuck, this helped!
25
17
60,882,638
2020-3-27
https://stackoverflow.com/questions/60882638/install-a-particular-version-of-python-package-in-a-virtualenv-created-with-reti
When using reticulate package in order to use Python inside R, we can create a virtualenv thanks to the command reticulate::virtualenv_create specifying env name and the path to the python bin. We can also add packages to the previously created environment like this: reticulate::virtualenv_create(envname = 'venv_shiny_app', python = '/usr/bin/python3') reticulate::virtualenv_install('venv_shiny_app', packages = c('numpy', 'xlrd', 'pandas', 'beautifulsoup4', 'joblib')) Is it possible to install a specific version of those packages?? Thanks
You can request a specific version of a package by pinning it in the packages argument, for example: reticulate::virtualenv_install('venv_shiny_app', packages = c("numpy==1.8.0"))
7
10
60,892,714
2020-3-27
https://stackoverflow.com/questions/60892714/how-to-get-the-weight-of-evidence-woe-and-information-value-iv-in-python-pan
I was wondering how to calculate the WOE and IV in Python. Are there any dedicated functions in numpy/scipy/pandas/sklearn? Here is my example dataframe: import numpy as np import pandas as pd np.random.seed(100) df = pd.DataFrame({'grade': np.random.choice(list('ABCD'),size=(20)), 'pass': np.random.choice([0,1],size=(20)) }) df
Formulas for WOE and IV (as implemented below): for each category, WOE = ln(share of target=1 / share of target=0), where the shares are the column-normalized counts, and IV = sum over categories of (share of target=1 - share of target=0) * WOE. Code to achieve this: import numpy as np import pandas as pd np.random.seed(100) df = pd.DataFrame({'grade': np.random.choice(list('ABCD'),size=(20)), 'pass': np.random.choice([0,1],size=(20)) }) feature,target = 'grade','pass' df_woe_iv = (pd.crosstab(df[feature],df[target], normalize='columns') .assign(woe=lambda dfx: np.log(dfx[1] / dfx[0])) .assign(iv=lambda dfx: np.sum(dfx['woe']* (dfx[1]-dfx[0])))) df_woe_iv output pass 0 1 woe iv grade A 0.3 0.3 0.000000 0.690776 B 0.1 0.1 0.000000 0.690776 C 0.2 0.5 0.916291 0.690776 D 0.4 0.1 -1.386294 0.690776
7
16
60,841,650
2020-3-25
https://stackoverflow.com/questions/60841650/how-to-test-one-single-image-in-pytorch
I created my model in PyTorch and it is working really well, but when I want to test just one image (batch_size=1) it always returns the second class (in this case a dog). I tried to test with batch > 1 and in all cases this works! The architecture: model = models.densenet121(pretrained=True) for param in model.parameters(): param.requires_grad = False from collections import OrderedDict classifier = nn.Sequential(OrderedDict([ ('fc1', nn.Linear(1024, 500)), ('relu', nn.ReLU()), ('fc2', nn.Linear(500, 2)), ('output', nn.LogSoftmax(dim=1)) ])) model.classifier = classifier So my tensors are [batch, 3, 224, 224]. I have tried: resize reshape unsqueeze(0) The response when there is one image is always [[0.4741, 0.5259]] My Test Code from PIL import * imsize = 256 loader = transforms.Compose([transforms.Scale(imsize), transforms.ToTensor()]) def image_loader(image_name): """load image, returns cuda tensor""" image = Image.open(image_name) image = loader(image).float() image = image.unsqueeze(0) return image.cuda() image = image_loader('Cat_Dog_data/test/cat/cat.16.jpg') with torch.no_grad(): logits = model.forward(image) ps = torch.exp(logits) _, predTest = torch.max(ps,1) print(ps) ## same value in all cases imagen_mostrar = images[ii].to('cpu') helper.imshow(imagen_mostrar,title=clas_perro_gato(predTest), normalize=True) Second Test Code andrea_data = datasets.ImageFolder(data_dir + '/andrea', transform=test_transforms) andrealoader = torch.utils.data.DataLoader(andrea_data, batch_size=1, shuffle=True) dataiter = iter(andrealoader) images, labels = dataiter.next() images, labels = images.to(device), labels.to(device) ps = torch.exp(model.forward(images)) _, predTest = torch.max(ps,1) print(ps.float()) If I change my batch_size to 1, it always returns a tensor that says it is a dog, [0.43, 0.57] for example. Thanks!
I realized that my model wasn't in eval mode. So I just added model.eval() and that's all; it now works for any batch size.
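For reference, a minimal sketch of the inference pattern implied by this fix, reusing the image_loader from the question:

model.eval()                      # put BatchNorm/Dropout layers into inference behaviour
with torch.no_grad():
    image = image_loader('Cat_Dog_data/test/cat/cat.16.jpg')  # shape [1, 3, H, W]
    ps = torch.exp(model(image))  # model ends with LogSoftmax, so exp gives probabilities
    _, pred = torch.max(ps, 1)
print(ps, pred)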
7
11
60,883,397
2020-3-27
https://stackoverflow.com/questions/60883397/using-pymongo-upsert-to-update-or-create-a-document-in-mongodb-using-python
I have a dataframe that contains data I want to upload into MongoDB. Below is the data: MongoRow = pd.DataFrame.from_dict({'school': {1: schoolID}, 'student': {1: student}, 'date': {1: dateToday}, 'Probability': {1: probabilityOfLowerThanThreshold}}) school student date Probability 1 5beee5678d62101c9c4e7dbb 5bf3e06f9a892068705d8420 2020-03-27 0.000038 I have the following code which checks if a row in mongo contains the same student ID and date, if it doesn't then it adds the row: def getPredictions(school): schoolDB = DB[school['database']['name']] schoolPredictions = schoolDB['session_attendance_predicted'] Predictions = schoolPredictions.aggregate([{ '$project': { 'school': '$school', 'student':'$student', 'date':'$date' } }]) return list(Predictions) Predictions = getPredictions(school) Predictions = pd.DataFrame(Predictions) schoolDB = DB[school['database']['name']] collection = schoolDB['session_attendance_predicted'] import json for i in Predictions.index: schoolOld = Predictions.loc[i,'school'] studentOld = Predictions.loc[i,'student'] dateOld = Predictions.loc[i,'date'] if(studentOld == student and date == dateOld): print("Student Exists") #UPDATE THE ROW WITH NEW VALUES else: print("Student Doesn't Exist") records = json.loads(df.T.to_json()).values() collection.insert(records) However if it does exist, I want it to update the row with the new values. Does anyone know how to do this? I have looked at pymongo upsert but I'm not sure how to use it. Can anyone help? '''''''UPDATE''''''' The above is partly working now, however, I am now getting an error with the following code: dateToday = datetime.datetime.combine(dateToday, datetime.time(0, 0)) MongoRow = pd.DataFrame.from_dict({'school': {1: schoolID}, 'student': {1: student}, 'date': {1: dateToday}, 'Probability': {1: probabilityOfLowerThanThreshold}}) data_dict = MongoRow.to_dict() for i in Predictions.index: print(Predictions) collection.replace_one({'student': student, 'date': dateToday}, data_dict, upsert=True) Error: InvalidDocument: documents must have only string keys, key was 1
To upsert you cannot use insert() (deprecated), insert_one() or insert_many(). You must use one of the collection-level operators that support upserting. To get started I would point you towards reading the dataframe line by line and using replace_one() on each line. There are more advanced ways of doing this, but this is the easiest. Your code will look a bit like: collection.replace_one({'Student': student, 'Date': date}, record, upsert=True)
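A minimal sketch of that loop for the dataframe in the question (this assumes the column names student and date, and uses to_dict('records') so every document has string keys, which also avoids the InvalidDocument error from the update):

for record in MongoRow.to_dict('records'):
    collection.replace_one(
        {'student': record['student'], 'date': record['date']},  # match key
        record,                                                   # new document body
        upsert=True,                                              # insert if no match
    )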
13
10
60,879,982
2020-3-27
https://stackoverflow.com/questions/60879982/attributeerror-timedelta-object-has-no-attribute-dt
I have a df: id timestamp data group Date 27001 27242 2020-01-01 09:07:21.277 19.5 1 2020-01-01 27002 27243 2020-01-01 09:07:21.377 19.0 1 2020-01-01 27581 27822 2020-01-02 07:53:05.173 19.5 1 2020-01-02 27582 27823 2020-01-02 07:53:05.273 20.0 1 2020-01-02 27647 27888 2020-01-02 10:01:46.380 20.5 1 2020-01-02 ... and I would like to calculate the time difference between row 1 and row 2 in seconds. I could do it with df['timediff'] = (df['timestamp'].shift(-1) - df['timestamp']).dt.total_seconds() However, when I zoom in to look at only 2 rows, ie. row1 and row0, with code: difference = (df.loc[0, 'timestamp'] - df.loc[1, 'timestamp']).dt.total_seconds() it returned error AttributeError: 'Timedelta' object has no attribute 'dt' Why is this happening?
As @hpaulj said in the comments, the .dt accessor is only available on Series-like objects; subtracting two scalar timestamps gives a single Timedelta, which exposes total_seconds() directly. So to obtain the total seconds you have to use difference = (df.loc[0, 'timestamp'] - df.loc[1, 'timestamp']).total_seconds()
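A tiny sketch of the distinction, reusing the two lines from the question:

# column-wise difference -> a timedelta Series, which has the .dt accessor
df['timediff'] = (df['timestamp'].shift(-1) - df['timestamp']).dt.total_seconds()

# scalar difference of two cells -> a single Timedelta, call total_seconds() directly
difference = (df.loc[0, 'timestamp'] - df.loc[1, 'timestamp']).total_seconds()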
7
3
60,878,196
2020-3-27
https://stackoverflow.com/questions/60878196/seaborn-rc-parameters-for-set-context-and-set-style
In the tutorial for setting up the aesthetics of your plots, there are a few different methods: set_style set_context axes_style Each one of these accepts an rc keyword parameter dictionary. In each individual API page for the above three functions, it says: rcdict, optional: Parameter mappings to override the values in the preset seaborn style dictionaries. This only updates parameters that are considered part of the style definition. Back in the tutorial page, under axes_style it goes on to say exactly how you can see what parameters are available for the rc dictionary for this one function: If you want to see what parameters are included, you can just call the function with no arguments, which will return the current settings: However, using this on the other functions always returns None. So, for example, I am using the following mix of matplotlib and seaborn to set parameters: mpl.rcParams['figure.figsize'] = [16,10] viz_dict = { 'axes.titlesize':18, 'axes.labelsize':16, } sns.set_context("notebook", rc=viz_dict) sns.set_style("whitegrid") I also noticed that putting my dictionary in the set_style method does nothing, while, at least for those parameters, it only works in set_context. This means that they each have mutually exclusively characteristics that can be edited. However, this is not outlined anywhere in the docs. I want to know which one of these three functions will accept a parameter for figsize. I'd also be curious to see what else they accept that might help me fine-tune things. My goal is to exclusively use the seaborn interface as often as possible. I don't need the fine tune control of things matplotlib provides, and often find it awkward anyway.
It would appear that the answer is 'none of the above'. The valid keys for set_style and set_context are listed here: _style_keys = [ "axes.facecolor", "axes.edgecolor", "axes.grid", "axes.axisbelow", "axes.labelcolor", "figure.facecolor", "grid.color", "grid.linestyle", "text.color", "xtick.color", "ytick.color", "xtick.direction", "ytick.direction", "lines.solid_capstyle", "patch.edgecolor", "patch.force_edgecolor", "image.cmap", "font.family", "font.sans-serif", "xtick.bottom", "xtick.top", "ytick.left", "ytick.right", "axes.spines.left", "axes.spines.bottom", "axes.spines.right", "axes.spines.top",] _context_keys = [ "font.size", "axes.labelsize", "axes.titlesize", "xtick.labelsize", "ytick.labelsize", "legend.fontsize", "axes.linewidth", "grid.linewidth", "lines.linewidth", "lines.markersize", "patch.linewidth", "xtick.major.width", "ytick.major.width", "xtick.minor.width", "ytick.minor.width", "xtick.major.size", "ytick.major.size", "xtick.minor.size", "ytick.minor.size",] Also note that set_style is just a convenience function which calls axes_style. So you will have to use matplotlib.rcParams, although if the typical rcParams['figure.figsize'] = [16,10] syntax is not amenable you could of course create your own style.
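A small sketch of the practical combination, reusing the settings from the question (figsize has to go through matplotlib, since it is neither a style key nor a context key):

import matplotlib as mpl
import seaborn as sns

sns.set_context("notebook", rc={"axes.titlesize": 18, "axes.labelsize": 16})  # context keys
sns.set_style("whitegrid")                                                    # style keys
mpl.rcParams["figure.figsize"] = [16, 10]                                     # everything else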
7
11
60,878,959
2020-3-27
https://stackoverflow.com/questions/60878959/attributeerror-numpy-ndarray-object-has-no-attribute-save
I have a short script that crops every image in a folder, using the bounding boxes I labeled and saved as a CSV, with OpenCV, like this: import os, sys from PIL import Image import cv2 import pandas as pd # The annotation file consists of image names, text label, # bounding box information like xmin, ymin, xmax and ymax. ANNOTATION_FILE = 'data/annot_crop_plate.csv' df = pd.read_csv(ANNOTATION_FILE) #image directory path IMG_DIR = 'data/images' # The cropped images will be stored here CROP_DIR = 'data/crops' files = df['filename'] size = (200,200) for file in files: print(file) img = cv2.imread(IMG_DIR +'/' + file) annot_data = df[df['filename'] == file] xmin = int(annot_data['xmin']) ymin = int(annot_data['ymin']) xmax = int(annot_data['xmax']) ymax = int(annot_data['ymax']) crop = img[ymin:ymax,xmin:xmax] new_crop = cv2.resize(crop, dsize=size, interpolation=cv2.INTER_CUBIC) new_crop.save(CROP_DIR + '/' + file.split('.')[0] + '.png', 'PNG', quality=90) but the last line raises "AttributeError: 'numpy.ndarray' object has no attribute 'save'"
Try using cv2.imwrite(path, img_to_save) as the last line; OpenCV images are plain numpy arrays, so they have no .save() method (that belongs to PIL's Image).
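Applied to the loop from the question, the last line would become (a sketch using the question's own variable names):

cv2.imwrite(CROP_DIR + '/' + file.split('.')[0] + '.png', new_crop)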
7
15
60,842,487
2020-3-25
https://stackoverflow.com/questions/60842487/python-was-not-found-but-can-be-installed-from-the-microsoft-store-march-2020
I started watching a Python course on YouTube in which the guy giving the lesson teaches using VSCode. He started with software installation (Python & PyCharm). Then, in VSCode he downloaded the Python extension (the one made by Microsoft) and the extension called "Code Runner" to run the Python code in VSCode. When I try running my code it hits me with the following error, which you can also see in the image at the link at the end of the question. I'm not able to post a screenshot of it because I'm new on this platform. Thanks to whoever sees this. [Running] python -u "c:\Users\Ryan\Desktop\Python\app.py" Python was not found but can be installed from the Microsoft Store: https://go.microsoft.com/fwlink?linkID=2082640 Screenshot of the VSCode error screen:
You don't have the command python installed into your PATH on Windows which is the default if you didn't get your copy of Python from the Windows Store. If you selected your Python interpreter in VS Code (look in the status bar), then I would disable Code Runner. That way the Python extension is what provides the ability to run Python (the Play button will be green instead of white).
42
35
60,872,434
2020-3-26
https://stackoverflow.com/questions/60872434/django-onetoonefield-relatedobjectdoesnotexist
I have this two following classes in my model: class Answer(models.Model): answer = models.CharField(max_length=300) question = models.ForeignKey('Question', on_delete=models.CASCADE) def __str__(self): return "{0}, view: {1}".format(self.answer, self.answer_number) class Vote(models.Model): answer = models.OneToOneField(Answer, related_name="votes", on_delete=models.CASCADE) users = models.ManyToManyField(User) def __str__(self): return str(self.answer.answer)[:30] In the shell I take the first Answer: >>> Answer.objects.all()[0] <Answer: choix 1 , view: 0> I want to get the Vote object: >>> Answer.objects.all()[0].votes Traceback (most recent call last): File "<console>", line 1, in <module> File "C:\Users\Hippolyte\AppData\Roaming\Python\Python38\site-packages\django\db\models\fields\related_descriptors.py", line 420, in __get__ raise self.RelatedObjectDoesNotExist( questions.models.Answer.votes.RelatedObjectDoesNotExist: Answer has no votes. But an error occured. I don't understand why the related_name is not recognized. Could you help me ?
Your related_name IS recognized, but it is only assigned to the instance if the related object exists. In your case, there is no Vote instance in your database whose answer field points to your Answer instance. Just catch the exception and return None if you want to proceed (the raised RelatedObjectDoesNotExist is a subclass of Vote.DoesNotExist, so that is the cleanest thing to catch): answer = Answer.objects.all().first() try: vote = answer.votes except Vote.DoesNotExist: vote = None If you want to shorten this you can use hasattr(answer, 'votes') to check, but this will mask ALL exceptions arising from the db lookup, if any: answer = Answer.objects.all().first() vote = answer.votes if hasattr(answer, 'votes') else None Note that since you used a OneToOneField, answer.votes will always return a single instance if the related Vote exists. As such, it would be more appropriate to use related_name='vote' (without the s)
7
14
60,868,060
2020-3-26
https://stackoverflow.com/questions/60868060/module-on-test-pypi-cant-install-dependencies-even-though-they-exist
I have done this small package that I want to distribute in my community. It is now on test.pypi and when I want to try to install it, it gives an error that dependencies couldn't be found. setup.py ... install_requires=[ 'defcon>=0.6.0', 'fonttools>=3.31.0' ] ... throws this error ERROR: Could not find a version that satisfies the requirement defcon>=0.6.0 (from sameWidther==0.6) (from versions: none) ERROR: No matching distribution found for defcon>=0.6.0 (from sameWidther==0.6) but when I manually install, it works pip install 'fonttools>=3.6.0' pip install 'defcon>=0.6.0'
-i URL, or --index-url URL means "use URL for installing packages from exclusively". By passing -i https://test.pypi.org/simple/, you thus prohibit searching and downloading packages from PyPI (https://pypi.org/simple). To use both indexes, use --extra-index-url: $ python -m pip install --extra-index-url https://test.pypi.org/simple/ sameWidther
7
22
60,870,128
2020-3-26
https://stackoverflow.com/questions/60870128/cant-install-geopandas-in-anaconda-environment
I am trying to install the geopandas package with Anaconda Prompt, but after I use conda install geopandas an unexpected thing happens: Collecting package metadata (current_repodata.json): done Solving environment: failed with initial frozen solve. Retrying with flexible solve. Solving environment: failed with repodata from current_repodata.json, will retry with next repodata source. Collecting package metadata (repodata.json): done Solving environment: failed with initial frozen solve. Retrying with flexible solve. Solving environment: - Found conflicts! Looking for incompatible packages After this, it proceeds to search for conflicts, but hours pass without finishing. In the end, I still cannot use geopandas. I have also tried installing geopandas in a different virtual environment and it works, but I do not know how to use that environment in Jupyter Notebooks. I would like to know: how can I install geopandas without a separate environment? Or, alternatively, how can I use geopandas in Jupyter Notebooks after installing it in a separate environment?
You can install geopandas with pip, however, geopandas requires other dependencies (such as pandas, fiona, shapely, pyproj, rtree). You need to make sure that they are properly installed. After that you should be able to use them in jupyter with a simple import geopandas.
15
-3
60,869,306
2020-3-26
https://stackoverflow.com/questions/60869306/how-to-simple-crop-the-bounding-box-in-python-opencv
I am trying to learn OpenCV and implement a research project by testing some use cases. I am trying to crop the bounding box inside the image using Python OpenCV. I have successfully created the bounding box but failed to crop it. This is the image: import cv2 import matplotlib.pyplot as plt img = cv2.imread("Segmentacion/Img_183.png") gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY) dst = cv2.Canny(gray, 0, 150) blured = cv2.blur(dst, (5,5), 0) MIN_CONTOUR_AREA=200 img_thresh = cv2.adaptiveThreshold(blured, 255, cv2.ADAPTIVE_THRESH_GAUSSIAN_C, cv2.THRESH_BINARY_INV, 11, 2) Contours,imgContours = cv2.findContours(img_thresh, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE) for contour in Contours: if cv2.contourArea(contour) > MIN_CONTOUR_AREA: [X, Y, W, H] = cv2.boundingRect(contour) box=cv2.rectangle(img, (X, Y), (X + W, Y + H), (0,0,255), 2) cropped_image = img[X:W, Y:H] print([X,Y,W,H]) cv2.imwrite('contour.png', cropped_image )
I figured out the formula for cropping the bounding box from the original image: cropped_image = img[Y:Y+H, X:X+W] print([X,Y,W,H]) plt.imshow(cropped_image) cv2.imwrite('contour1.png', cropped_image)
15
33
60,869,243
2020-3-26
https://stackoverflow.com/questions/60869243/how-can-i-filter-dataframe-based-on-null-not-null-using-a-column-name-as-a-varia
I want to list a dataframe where a specific column is either null or not null, I have it working using - df[df.Survive.notnull()] # contains no missing values df[df.Survive.isnull()] #---> contains missing values This works perfectly but I want to make my code more dynamic and pass the column "Survive" as a variable but it's not working for me. I tried: variableToPredict = ['Survive'] df[df[variableToPredict].notnull()] I get the error - ValueError: cannot reindex from a duplicate axis I'm sure I'm making a silly mistake, what can I do to fix this?
The idea: the mask used for filtering always has to be a Series, list or 1d array. If you want to test only one column, use a scalar column name: variableToPredict = 'Survive' df[df[variableToPredict].notnull()] But if you add [], the output is a one-column DataFrame, so you have to reduce it with any (keep rows with at least one non-missing value, which makes sense for multiple columns) or all (keep rows where all values are non-missing): variableToPredict = ['Survive'] df[df[variableToPredict].notnull().any(axis=1)] variableToPredict = ['Survive', 'another column'] df[df[variableToPredict].notnull().any(axis=1)] Sample: df = pd.DataFrame({'Survive':[np.nan, 'A', 'B', 'B', np.nan], 'another column':[np.nan, np.nan, 'a','b','b']}) print (df) Survive another column 0 NaN NaN 1 A NaN 2 B a 3 B b 4 NaN b First, testing only one column: variableToPredict = 'Survive' df1 = df[df[variableToPredict].notnull()] print (df1) Survive another column 1 A NaN 2 B a 3 B b print (type(df[variableToPredict])) <class 'pandas.core.series.Series'> #Series print (df[variableToPredict]) 0 NaN 1 A 2 B 3 B 4 NaN Name: Survive, dtype: object print (df[variableToPredict].isnull()) 0 True 1 False 2 False 3 False 4 True Name: Survive, dtype: bool If you use a list - here a one-element list: variableToPredict = ['Survive'] print (type(df[variableToPredict])) <class 'pandas.core.frame.DataFrame'> #one column DataFrame print (df[variableToPredict]) Survive 0 NaN 1 A 2 B 3 B 4 NaN When testing per row, the output is the same for any or all: print (df[variableToPredict].notnull().any(axis=1)) 0 False 1 True 2 True 3 True 4 False dtype: bool print (df[variableToPredict].notnull().all(axis=1)) 0 False 1 True 2 True 3 True 4 False dtype: bool Testing one or more columns given as a list: variableToPredict = ['Survive', 'another column'] print (type(df[variableToPredict])) <class 'pandas.core.frame.DataFrame'> print (df[variableToPredict]) Survive another column 0 NaN NaN 1 A NaN 2 B a 3 B b 4 NaN b print (df[variableToPredict].notnull()) Survive another column 0 False False 1 True False 2 True True 3 True True 4 False True #at least one non-missing value per row, i.e. at least one True print (df[variableToPredict].notnull().any(axis=1)) 0 False 1 True 2 True 3 True 4 True dtype: bool #all values per row non-missing, i.e. all Trues print (df[variableToPredict].notnull().all(axis=1)) 0 False 1 False 2 True 3 True 4 False dtype: bool
17
27
60,868,629
2020-3-26
https://stackoverflow.com/questions/60868629/valueerror-solver-lbfgs-supports-only-l2-or-none-penalties-got-l1-penalty
I'm running the process of feature selection on a classification problem, using the embedded method (L1 - Lasso) with LogisticRegression. I'm running the following code: from sklearn.linear_model import Lasso, LogisticRegression from sklearn.feature_selection import SelectFromModel # using logistic regression with penalty l1. selection = SelectFromModel(LogisticRegression(C=1, penalty='l1')) selection.fit(x_train, y_train) But I'm getting an exception (on the fit command): selection.fit(x_train, y_train) File "C:\Python37\lib\site-packages\sklearn\feature_selection\_from_model.py", line 222, in fit self.estimator_.fit(X, y, **fit_params) File "C:\Python37\lib\site-packages\sklearn\linear_model\_logistic.py", line 1488, in fit solver = _check_solver(self.solver, self.penalty, self.dual) File "C:\Python37\lib\site-packages\sklearn\linear_model\_logistic.py", line 445, in _check_solver "got %s penalty." % (solver, penalty)) ValueError: Solver lbfgs supports only 'l2' or 'none' penalties, got l1 penalty. I'm running under Python 3.7.6 and the scikit-learn version is 0.22.2.post1. What is wrong and how can I fix it?
This is cleared up in the documentation. solver : {β€˜newton-cg’, β€˜lbfgs’, β€˜liblinear’, β€˜sag’, β€˜saga’}, default=’lbfgs’ ... β€˜newton-cg’, β€˜lbfgs’, β€˜sag’ and β€˜saga’ handle L2 or no penalty β€˜liblinear’ and β€˜saga’ also handle L1 penalty Call it like this: LogisticRegression(C=1, penalty='l1', solver='liblinear')
26
45
60,867,546
2020-3-26
https://stackoverflow.com/questions/60867546/save-failed-in-google-colab
I opened a number of tabs at the same time. I think that's why Google Colab was not able to support the heavy load. The message stated: Save failed This file could not be saved. Please use the File menu to download the .ipynb and upload the notebook to make a copy that includes your recent changes. Is downloading the file and uploading again the only solution?
This can happen if you open the same notebook in multiple tabs and make incompatible edits to the notebook. At this point, the only way to save your work is to follow the advice in the dialog. To prevent this in the future, avoid simultaneously editing the same notebook in multiple browser windows.
8
-1
60,862,145
2020-3-26
https://stackoverflow.com/questions/60862145/what-is-the-point-of-norm-fit-in-scipy
I'm generating a random sample of data and plotting its pdf using scipy.stats.norm.fit to generate my loc and scale parameters. I wanted to see how different my pdf would look if I just calculated the mean and std using numpy without any actual fitting. To my surprise, when I plot both pdfs and print both sets of mu and std, the results I get are exactly the same. So my question is, what is the point of norm.fit if I can just calculate the mean and std of my sample and still get the same results? This is my code: import numpy as np from scipy.stats import norm import matplotlib.pyplot as plt data = norm.rvs(loc=0,scale=1,size=200) mu1 = np.mean(data) std1 = np.std(data) print(mu1) print(std1) mu, std = norm.fit(data) plt.hist(data, bins=25, density=True, alpha=0.6, color='g') xmin, xmax = plt.xlim() x = np.linspace(xmin, xmax, 100) p = norm.pdf(x, mu, std) q = norm.pdf(x, mu1, std1) plt.plot(x, p, 'k', linewidth=2) plt.plot(x, q, 'r', linewidth=1) title = "Fit results: mu = %.5f, std = %.5f" % (mu, std) plt.title(title) plt.show() And these are the results I got: Pdf of a random set of values mu1 = 0.034824979915482716 std1 = 0.9945453455908072
The point is that there are several other distributions out there besides the normal distribution. Scipy provides a consistent API for learning the parameters of these distributions from data. (Want an exponential distribution instead of a normal distribution? It’s scipy.stats.expon.fit.) So sure, your way also works because the parameters of the normal distribution happen to be the mean and standard deviation. But this is about providing a consistent interface across distributions, including ones where that’s not true.
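A small sketch of that consistency (the fitted (loc, scale) pair means something different for each distribution):

from scipy.stats import norm, expon

data = expon.rvs(scale=2, size=200)
loc_n, scale_n = norm.fit(data)    # here scale is the standard deviation
loc_e, scale_e = expon.fit(data)   # here scale is 1/lambda of the exponential
print(loc_n, scale_n, loc_e, scale_e)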
8
8
60,858,862
2020-3-25
https://stackoverflow.com/questions/60858862/write-a-functon-to-modify-a-certain-string-in-a-certain-way-by-adding-character
I have to write a function that takes a string, and will return the string with added "asteriks" or "*" symbols to signal multiplication. As we know 4(3) is another way to show multiplication, as well as 4*3 or (4)(3) or 4*(3) etc. Anyway, my code needs to fix that problem by adding an asterik between the 4 and the 3 for when multiplication is shown WITH PARENTHESIS but without the multiplication operator " * ". Some examples: "4(3)" -> "4*(3)" "(4)(3)" -> "(4)*(3)" "4*2 + 9 -4(-3)" - > "4*2 + 9 -4*(-3)" "(-9)(-2) (4)" -> "(-9)*(2) *(4)" "4^(3)" -> "4^(3)" "(4-3)(4+2)" -> "(4-3)*(4+2)" "(Aflkdsjalkb)(g)" -> "(Aflkdsjalkb)*(g)" "g(d)(f)" -> "g*(d)*(f)" "(4) (3)" -> "(4)*(3)" I'm not exactly sure how to do this, I am thinking about finding the left parenthesis and then simply adding a " * " at that location but that wouldn't work hence the start of my third example would output "* (-9)" which is what I don't want or my fourth example that would output "4^*(3)". Any ideas on how to solve this problem? Thank you. Here's something I've tried, and obviously it doesn't work: while index < len(stringtobeconverted) parenthesis = stringtobeconverted[index] if parenthesis == "(": stringtobeconverted[index-1] = "*"
I'll share mine. def insertAsteriks(string): lstring = list(string) c = False for i in range(1, len(lstring)): if c: c = False pass elif lstring[i] == '(' and (lstring[i - 1] == ')' or lstring[i - 1].isdigit() or lstring[i - 1].isalpha() or (lstring[i - 1] == ' ' and not lstring[i - 2] in "*^-+/")): lstring.insert(i, '*') c = True return ''.join(lstring) Let's check against your inputs. print(insertAsteriks("4(3)")) print(insertAsteriks("(4)(3)")) print(insertAsteriks("4*2 + 9 -4(-3)")) print(insertAsteriks("(-9)(-2) (4)")) print(insertAsteriks("(4)^(-3)")) print(insertAsteriks("ABC(DEF)")) print(insertAsteriks("g(d)(f)")) print(insertAsteriks("(g)-(d)")) The output is: 4*(3) (4)*(3) 4*2 + 9 -4*(-3) (-9)*(-2) (4) (4)^(-3) ABC*(DEF) g*(d)*(f) (g)-(d) [Finished in 0.0s]
7
2
60,865,887
2020-3-26
https://stackoverflow.com/questions/60865887/exclude-env-directory-from-flake8-tests
Problem I'm getting thousands of flake8 errors stemming from my local .env. An example of some of the error messages: ./env/lib/python3.7/site-packages/pip/_vendor/pyparsing.py:3848:80: E501 line too long (85 > 79 characters) ./env/lib/python3.7/site-packages/pip/_vendor/pyparsing.py:3848:84: E202 whitespace before ')' ./env/lib/python3.7/site-packages/pip/_vendor/pyparsing.py:3855:51: E201 whitespace after '(' ./env/lib/python3.7/site-packages/pip/_vendor/pyparsing.py:3855:65: E202 whitespace before ')' ./env/lib/python3.7/site-packages/pip/_vendor/pyparsing.py:3858:50: E201 whitespace after '(' ./env/lib/python3.7/site-packages/pip/_vendor/pyparsing.py:3858:78: E202 whitespace before ')' ./env/lib/python3.7/site-packages/pip/_vendor/pyparsing.py:3861:31: E231 missing whitespace after ',' ./env/lib/python3.7/site-packages/pip/_vendor/pyparsing.py:3865:1: W293 blank line contains whitespace ./env/lib/python3.7/site-packages/pip/_vendor/pyparsing.py:3866:1: E302 expected 2 blank lines, found 1 My directory looks like this: My flake8 file looks like this: [flake8] exclude = migrations, __pycache__, manage.py, settings.py, Question How can I add my env file contents to the exclude list in flake8? What I've tried I've tried adding: env, ./env, I've tried taking off commas and adding: [flake8] exclude = migrations __pycache__ manage.py settings.py env .env ./env env/ .env/ Running flake8 --exclude migrations,pycache,manage.py,settings.py,env returns: ./app/core/models.py:7:80: E501 line too long (93 > 79 characters) ./app/core/models.py:13:66: E225 missing whitespace around operator ./app/core/models.py:13:80: E501 line too long (103 > 79 characters) ./app/core/admin.py:10:1: W293 blank line contains whitespace ./app/core/admin.py:11:1: W293 blank line contains whitespace ./app/core/admin.py:13:1: W293 blank line contains whitespace ./app/core/tests/test_commands.py:23:47: W292 no newline at end of file ./app/core/management/commands/wait_for_db.py:21:69: W292 no newline at end of file ./app/project/urls.py:23:2: W292 no newline at end of file ./app/inventory/serializers.py:4:1: E302 expected 2 blank lines, found 1 ./app/inventory/serializers.py:8:1: W391 blank line at end of file ./app/inventory/urls.py:5:70: W291 trailing whitespace ./app/inventory/urls.py:7:26: W292 no newline at end of file ./app/inventory/views.py:1:1: F401 'rest_framework.response.Response' imported but unused ./app/inventory/views.py:7:1: E302 expected 2 blank lines, found 1 ./app/inventory/views.py:11:1: W293 blank line contains whitespace ./app/inventory/views.py:12:5: E303 too many blank lines (2) ./app/inventory/views.py:36:43: W292 no newline at end of file ./app/inventory/tests/test_api.py:2:1: F401 'django.urls.reverse' imported but unused ./app/inventory/tests/test_api.py:7:1: W293 blank line contains whitespace ./app/inventory/tests/test_api.py:9:8: E111 indentation is not a multiple of four ./app/inventory/tests/test_api.py:10:8: E111 indentation is not a multiple of four ./app/inventory/tests/test_api.py:10:40: W291 trailing whitespace ./app/inventory/tests/test_api.py:11:1: W293 blank line contains whitespace ./app/inventory/tests/test_api.py:12:8: E111 indentation is not a multiple of four ./app/inventory/tests/test_api.py:13:8: E111 indentation is not a multiple of four ./app/inventory/tests/test_api.py:18:41: W291 trailing whitespace ./app/inventory/tests/test_api.py:19:1: W293 blank line contains whitespace ./app/inventory/tests/test_api.py:25:37: W291 trailing whitespace 
./app/inventory/tests/test_api.py:26:41: W291 trailing whitespace ./app/inventory/tests/test_api.py:27:1: W293 blank line contains whitespace ./app/inventory/tests/test_api.py:31:1: W391 blank line at end of file ./app/inventory/tests/test_api.py:31:1: W293 blank line contains whitespace
I notice your .flake8 file is inside app folder. I presume you are starting flake8 from outside the app folder, in other words from the project root. Move .flake8 to the project root, and everything's gonna work: mv app/.flake8 .
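For completeness, a sketch of what the project-root .flake8 could look like if you also want to exclude the virtualenv folder explicitly (directory names taken from the question):

[flake8]
exclude =
    migrations,
    __pycache__,
    manage.py,
    settings.py,
    env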
37
13
60,844,846
2020-3-25
https://stackoverflow.com/questions/60844846/read-a-body-json-list-with-fastapi
The body of an HTTP PUT request is a JSON list - like this: [item1, item2, item3, ...] I can't change this. (If the root was a JSON object rather than a list there would be no problem.) Using FastAPI I seem to be unable to access this content in the normal way: @router.put('/data') def set_data(data: DataModel): # This doesn't work; how do I even declare DataModel? I found the following workaround, which seems like a very ugly hack: class DataModel(BaseModel): __root__: List[str] from fastAPI import Request @router.put('/data') async def set_data(request: Request): # Get the request object directly data = DataModel(__root__=await request.json()) This surely can't be the 'approved' way to achieve this. I've scoured the documentation both of FastAPI and pydantic. What am I missing?
Descending from the model perspective to primitives In FastAPI, you derive from BaseModel to describe the data models you send and receive (i.e. FastAPI also parses for you from a body and translates to Python objects). Also, it relies on the modeling and processing from pydantic. from typing import List from pydantic import BaseModel class Item(BaseModel): name: str class ItemList(BaseModel): items: List[Item] def process_item_list(items: ItemList): pass This example would be able to parse JSON like {"items": [{"name": "John"}, {"name": "Mary"}]} In your case - depending on what shape your list entries have - you'd also go for proper type modeling, but you want to directly receive and process the list without the JSON dict wrapper around it. You could go for: from typing import List from pydantic import BaseModel class Item(BaseModel): name: str def process_item_list(items: List[Item]): pass Which is now able to process JSON like: [{"name": "John"}, {"name": "Mary"}] This is probably what you're looking for and the last adaption to take is depending on the shape of your item* in the list you receive. If it's plain strings, you can also go for: from typing import List def process_item_list(items: List[str]): pass Which could process JSON like ["John", "Mary"] I outlined the path from models down to primitives in lists because I think it's worth knowing where this can go if one needs more complexity in the data models.
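To tie it back to the route in the question, a short sketch of wiring the List[Item] variant into the endpoint (router and path names taken from the question; FastAPI treats a non-scalar annotation like this as the request body):

from typing import List
from fastapi import APIRouter
from pydantic import BaseModel

router = APIRouter()

class Item(BaseModel):
    name: str

@router.put('/data')
def set_data(data: List[Item]):
    # data is already parsed and validated from the JSON list body
    return {"received": [item.name for item in data]}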
15
36
60,852,962
2020-3-25
https://stackoverflow.com/questions/60852962/training-time-of-gensim-word2vec
I'm training word2vec from scratch on a 34 GB pre-processed MS_MARCO corpus (of 22 GB originally; the preprocessed corpus is sentencepiece tokenized and so it is larger). I'm training my word2vec model using the following code: from gensim.test.utils import common_texts, get_tmpfile from gensim.models import Word2Vec class Corpus(): """Iterate over sentences from the corpus.""" def __init__(self): self.files = [ "sp_cor1.txt", "sp_cor2.txt", "sp_cor3.txt", "sp_cor4.txt", "sp_cor5.txt", "sp_cor6.txt", "sp_cor7.txt", "sp_cor8.txt" ] def __iter__(self): for fname in self.files: for line in open(fname): words = line.split() yield words sentences = Corpus() model = Word2Vec(sentences, size=300, window=5, min_count=1, workers=8, sg=1, hs=1, negative=10) model.save("word2vec.model") My model has now been running for more than 30 hours. This seems suspicious, since on my i5 laptop with 8 cores I'm using all 8 cores at 100% the whole time. Plus, my program seems to have read more than 100 GB of data from the disk by now. I don't know if there is anything wrong here, but the main reason for my doubt about the training is this 100 GB read from the disk. The whole corpus is 34 GB, so why has my code read 100 GB of data from the disk? Does anyone know how much time it should take to train word2vec on 34 GB of text, with 8 cores of an i5 CPU running all in parallel? Thank you. For more information, I'm also attaching the photo of my process from the system monitor. I want to know why my model has read 112 GB from disk, even though my corpus is only 34 GB in total? Will my training ever get finished? Also, I'm a bit worried about the health of my laptop, since it has been running constantly at peak capacity for the last 30 hours. It is really hot now. Should I add any additional parameter to Word2Vec for quicker training without much performance loss?
Completing a model requires one pass over all the data to discover the vocabulary, then multiple passes, with a default of 5, to perform vector training. So, you should expect to see about 6x your data size in disk-reads, just from the model training. (If your machine winds up needing to use virtual-memory swapping during the process, there could be more disk activity – but you absolutely do not want that to happen, as the random-access pattern of word2vec training is nearly a worst-case for virtual memory usage, which will slow training immensely.) If you'd like to understand the code's progress, and be able to estimate its completion time, you should enable Python logging to at least the INFO level. Various steps of the process will report interim results (such as the discovered and surviving vocabulary size) and estimated progress. You can often tell if something is going wrong before the end of a run by studying the logging outputs for sensible values, and once the 'training' phase has begun the completion time will be a simple projection from the training completed so far. I believe most laptops should throttle their own CPU if it's becoming so hot as to become unsafe or risk extreme wear on the CPU/components, but whether yours does, I can't say, and definitely make sure its fans work & vents are unobstructed. I'd suggest you choose some small random subset of your data – maybe 1GB? – to be able to run all your steps to completion, becoming familiar with the Word2Vec logging output, resource usage, and results, and tinkering with settings to observe changes, before trying to run on your full dataset, which might require days of training time. Some of your shown parameters aren't optimal for speedy training. In particular: min_count=1 retains every word seen in the corpus-survey, including those with only a single occurrence. This results in a much, much larger model - potentially risking a model that doesn't fit into RAM, forcing disastrous swapping. But also, words with just a few usage examples can't possibly get good word vectors, as the process requires seeing many subtly-varied alternate uses. Still, via typical 'Zipfian' word-frequencies, the number of such words with just a few uses may be very large in total, so retaining all those words takes a lot of training time/effort, and even serves a bit like 'noise' making the training of other words, with plenty of usage examples, less effective. So for model size, training speed, and quality of remaining vectors, a larger min_count is desirable. The default of min_count=5 is better for more projects than min_count=1 – this is a parameter that should only really be changed if you're sure you know the effects. And, when you have plentiful data – as with your 34GB – the min_count can go much higher to keep the model size manageable. hs=1 should only be enabled if you want to use the 'hierarchical-softmax' training mode instead of 'negative-sampling' – and in that case, negative=0 should also be set to disable 'negative-sampling'. You probably don't want to use hierarchical-softmax: it's not the default for a reason, and it doesn't scale as well to larger datasets. But here you've enabled in in addition to negative-sampling, likely more-than-doubling the required training time. Did you choose negative=10 because you had problems with the default negative=5? Because this non-default choice, again, would slow training noticeably. 
(But also, again, a non-default choice here would be more common with smaller datasets, while larger datasets like yours are more likely to experiment with a smaller negative value.) The theme of the above observations is: "only change the defaults if you've already got something working, and you have a good theory (or way of testing) how that change might help". With a large-enough dataset, there's another default parameter to consider changing to speed up training (& often improve word-vector quality, as well): sample, which controls how-aggressively highly-frequent words (with many redundant usage-examples) may be downsampled (randomly skipped). The default value, sample=0.001 (aka 1e-03), is very conservative. A smaller value, such as sample=1e-05, will discard many-more of the most-frequent-words' redundant usage examples, speeding overall training considerably. (And, for a corpus of your size, you could eventually experiment with even smaller, more-aggressive values.) Finally, to the extent all your data (for either a full run, or a subset run) can be in an already-space-delimited text file, you can use the corpus_file alternate method of specifying the corpus. Then, the Word2Vec class will use an optimized multithreaded IO approach to assign sections of the file to alternate worker threads – which, if you weren't previously seeing full saturation of all threads/CPU-cores, could increase your throughput. (I'd put this off until after trying other things, then check if your best setup still leaves some of your 8 threads often idle.)
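A short sketch of the suggested adjustments applied to the call from the question (the specific values are illustrative starting points, not prescriptions):

import logging
from gensim.models import Word2Vec

# INFO-level logging shows vocabulary-survey results and training progress estimates.
logging.basicConfig(format='%(asctime)s : %(levelname)s : %(message)s', level=logging.INFO)

model = Word2Vec(
    sentences,
    size=300, window=5, workers=8, sg=1,
    min_count=5,    # default; discards words with too few usage examples
    negative=5,     # default negative-sampling; hs left at its default of 0
    sample=1e-5,    # more aggressive downsampling of very frequent words
)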
7
14
60,851,784
2020-3-25
https://stackoverflow.com/questions/60851784/remove-pandas-columns-based-on-list
I have a list: my_list = ['a', 'b'] and a pandas dataframe: d = {'a': [1, 2], 'b': [3, 4], 'c': [1, 2], 'd': [3, 4]} df = pd.DataFrame(data=d) What can I do to remove the columns in df based on list my_list, in this case remove columns a and b
This is very simple: df = df.drop(columns=my_list) drop removes columns by specifying a list of column names
9
15
60,850,596
2020-3-25
https://stackoverflow.com/questions/60850596/calculate-average-of-every-n-rows-from-a-csv-file
I have a csv file that has 25000 rows. I want to put the average of every 30 rows in another csv file. I've given an example with 9 rows as below and the new csv file has 3 rows (3, 1, 2): | H | ======== | 1 |---\ | 3 | |--->| 3 | | 5 |---/ | -1 |---\ | 3 | |--->| 1 | | 1 |---/ | 0 |---\ | 5 | |--->| 2 | | 1 |---/ What I did: import numpy as np import pandas as pd m_path = "file.csv" m_df = pd.read_csv(m_path, usecols=['Col-01']) m_arr = np.array([]) temp = m_df.to_numpy() step = 30 for i in range(1, 25000, step): arr = np.append(m_arr,np.array([np.average(temp[i:i + step])])) data = np.array(m_arr)[np.newaxis] m_df = pd.DataFrame({'Column1': data[0, :]}) m_df.to_csv('AVG.csv') This works well but Is there any other option to do this?
You can use integer division of the index by step to form consecutive groups and pass that to groupby, then aggregate with mean: step = 30 m_df = pd.read_csv(m_path, usecols=['Col-01']) df = m_df.groupby(m_df.index // step).mean() Or: df = m_df.groupby(np.arange(len(m_df)) // step).mean() Sample data: step = 3 df = m_df.groupby(m_df.index // step).mean() print (df) H 0 3 1 1 2 2
9
9
60,842,775
2020-3-25
https://stackoverflow.com/questions/60842775/how-to-package-a-python-module-with-extras-as-default
My Python package has optional features (extras_require) and I would prefer them to be selected by default. More specifically, I'd like that pip install mypackage behaves like pip install mypackage[extra] and that I can install a minimal version with something like pip install mypackage[core]. setup( name="mypackage", ... extras_require={ "extra": ["extra1>=1.2", "extra2"], "core": [], } ) Is it possible to achieve this with a setup script similar to above?
Unfortunately this is not possible with the current state of Python packaging metadata & tooling. See a long discussion here as to why.
11
7
60,775,172
2020-3-20
https://stackoverflow.com/questions/60775172/pyenvs-python-is-missing-bzip2-module
I used pyenv to install python 3.8.2 and to create a virtualenv. In the virtualenv, I used pipenv to install pandas. However, when importing pandas, I'm getting the following: [...] File "/home/luislhl/.pyenv/versions/poc-prefect/lib/python3.8/site-packages/pandas/io/common.py", line 3, in <module> import bz2 File "/home/luislhl/.pyenv/versions/3.8.2/lib/python3.8/bz2.py", line 19, in <module> from _bz2 import BZ2Compressor, BZ2Decompressor ModuleNotFoundError: No module named '_bz2' After some googling, I found out some people suggesting I rebuild Python from source after installing bzip2 library in my system. However, after trying installing it with sudo dnf install bzip2-devel I see that I already had it installed. As far as I know, pyenv builds python from source when installing some version. So, why wasn't it capable of including the bzip2 module when building? How can I manage to rebuild Python using pyenv in order to make bzip2 available? I'm in Fedora 30 Thanks in advance UPDATE I tried installing another version of python with pyenv in verbose mode, to see the compilation output. There is this message in the end of the compilation: WARNING: The Python bz2 extension was not compiled. Missing the bzip2 lib? But as I stated before, I checked I already have bzip2 installed in my system. So I don't know what to do.
On Ubuntu 22 LTS Missing Library Problem in Python Installation with Pyenv Before the fix: $> pyenv install 3.11.0 command result: pyenv: /home/user/.pyenv/versions/3.11.0 already exists continue with installation? (y/N) y Downloading Python-3.11.0.tar.xz... -> https://www.python.org/ftp/python/3.11.0/Python-3.11.0.tar.xz Installing Python-3.11.0... WARNING: The Python bz2 extension was not compiled. Missing the bzip2 lib? WARNING: The Python readline extension was not compiled. Missing the GNU readline lib? WARNING: The Python lzma extension was not compiled. Missing the lzma lib? TLDR; Recipe to fix: sudo apt-get install build-essential zlib1g-dev libffi-dev libssl-dev libbz2-dev libreadline-dev libsqlite3-dev liblzma-dev libncurses-dev tk-dev Result After the fix: $> pyenv install 3.11.0 Command result: pyenv: /home/user/.pyenv/versions/3.11.0 already exists continue with installation? (y/N) y Downloading Python-3.11.0.tar.xz... -> https://www.python.org/ftp/python/3.11.0/Python-3.11.0.tar.xz Installing Python-3.11.0... Installed Python-3.11.0 to /home/user/.pyenv/versions/3.11.0
59
118
60,736,556
2020-3-18
https://stackoverflow.com/questions/60736556/pandas-rolling-apply-using-multiple-columns
I am trying to use a pandas.DataFrame.rolling.apply() rolling function on multiple columns. Python version is 3.7, pandas is 1.0.2. import pandas as pd #function to calculate def masscenter(x): print(x); # for debug purposes return 0; #simple DF creation routine df = pd.DataFrame( [['02:59:47.000282', 87.60, 739], ['03:00:01.042391', 87.51, 10], ['03:00:01.630182', 87.51, 10], ['03:00:01.635150', 88.00, 792], ['03:00:01.914104', 88.00, 10]], columns=['stamp', 'price','nQty']) df['stamp'] = pd.to_datetime(df2['stamp'], format='%H:%M:%S.%f') df.set_index('stamp', inplace=True, drop=True) 'stamp' is monotonic and unique, 'price' is double and contains no NaNs, 'nQty' is integer and also contains no NaNs. So, I need to calculate rolling 'center of mass', i.e. sum(price*nQty)/sum(nQty). What I tried so far: df.apply(masscenter, axis = 1) masscenter is be called 5 times with a single row and the output will be like price 87.6 nQty 739.0 Name: 1900-01-01 02:59:47.000282, dtype: float64 It is desired input to a masscenter, because I can easily access price and nQty using x[0], x[1]. However, I stuck with rolling.apply() Reading the docs DataFrame.rolling() and rolling.apply() I supposed that using 'axis' in rolling() and 'raw' in apply one achieves similiar behaviour. A naive approach rol = df.rolling(window=2) rol.apply(masscenter) prints row by row (increasing number of rows up to window size) stamp 1900-01-01 02:59:47.000282 87.60 1900-01-01 03:00:01.042391 87.51 dtype: float64 then stamp 1900-01-01 02:59:47.000282 739.0 1900-01-01 03:00:01.042391 10.0 dtype: float64 So, columns is passed to masscenter separately (expected). Sadly, in the docs there is barely any info about 'axis'. However the next variant was, obviously rol = df.rolling(window=2, axis = 1) rol.apply(masscenter) Never calls masscenter and raises ValueError in rol.apply(..) > Length of passed values is 1, index implies 5 I admit that I'm not sure about 'axis' parameter and how it works due to lack of documentation. It is the first part of the question: What is going on here? How to use 'axis' properly? What it is designed for? Of course, there were answers previously, namely: How-to-apply-a-function-to-two-columns-of-pandas-dataframe It works for the whole DataFrame, not Rolling. How-to-invoke-pandas-rolling-apply-with-parameters-from-multiple-column The answer suggests to write my own roll function, but the culprit for me is the same as asked in comments: what if one needs to use offset window size (e.g. '1T') for non-uniform timestamps? I don't like the idea to reinvent the wheel from scratch. Also I'd like to use pandas for everything to prevent inconsistency between sets obtained from pandas and 'self-made roll'. There is another answer to that question, suggessting to populate dataframe separately and calculate whatever I need, but it will not work: the size of stored data will be enormous. The same idea presented here: Apply-rolling-function-on-pandas-dataframe-with-multiple-arguments Another Q & A posted here Pandas-using-rolling-on-multiple-columns It is good and the closest to my problem, but again, there is no possibility to use offset window sizes (window = '1T'). Some of the answers were asked before pandas 1.0 came out, and given that docs could be much better, I hope it is possible to roll over multiple columns simultaneously now. The second part of the question is: Is there any possibility to roll over multiple columns simultaneously using pandas 1.0.x with offset window size?
How about this: import pandas as pd def masscenter(ser: pd.Series, df: pd.DataFrame): df_roll = df.loc[ser.index] return your_actual_masscenter(df_roll) masscenter_output = df['price'].rolling(window=3).apply(masscenter, args=(df,)) It uses the rolling logic to get subsets via an arbitrary column. The arbitrary column itself is not used, only the rolling index is used. This relies on the default of raw=False which provides the index values for those subsets. The applied function uses those index values to get multi-column slices from the original dataframe.
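For illustration, here is a minimal, hedged sketch of what the actual mass-center computation could look like with this pattern, reusing the frame and column names from the question (an offset window such as '1T' works too, since the rolling is driven by the DatetimeIndex):
import pandas as pd

df = pd.DataFrame(
    [['02:59:47.000282', 87.60, 739],
     ['03:00:01.042391', 87.51, 10],
     ['03:00:01.630182', 87.51, 10],
     ['03:00:01.635150', 88.00, 792],
     ['03:00:01.914104', 88.00, 10]],
    columns=['stamp', 'price', 'nQty'])
df['stamp'] = pd.to_datetime(df['stamp'], format='%H:%M:%S.%f')
df = df.set_index('stamp')

def masscenter(ser, df):
    # ser only carries the window's index; use it to slice every column we need
    window = df.loc[ser.index]
    return (window['price'] * window['nQty']).sum() / window['nQty'].sum()

out = df['price'].rolling('1T').apply(masscenter, args=(df,), raw=False)
print(out)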
40
46
60,827,896
2020-3-24
https://stackoverflow.com/questions/60827896/how-to-configure-an-equivalent-of-ssh-stricthostkeychecking-no-in-python-paramik
I am using Paramiko for sshing from a Python script. My ssh command is listed below: ssh -A -o strictHostKeyChecking=no <hostname> I need the equivalent Paramiko code in Python.
In Paramiko, an equivalent of OpenSSH StrictHostKeyChecking=no is the default behaviour of MissingHostKeyPolicy, which implements missing_host_key to simply do nothing. client = paramiko.SSHClient() client.set_missing_host_key_policy(paramiko.MissingHostKeyPolicy()) client.connect(hostname, ...) Though you should not do this (and neither StrictHostKeyChecking=no). You are losing a protection against Man-in-the-middle attacks this way. For correct solution, see Paramiko "Unknown Server".
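For context, a minimal end-to-end sketch of the above (hostname, username and the command are placeholders; with allow_agent left at its default of True, Paramiko will also try keys loaded in your local ssh-agent):
import paramiko

client = paramiko.SSHClient()
client.set_missing_host_key_policy(paramiko.MissingHostKeyPolicy())
client.connect('example.com', username='user')  # placeholders for your host/credentials
stdin, stdout, stderr = client.exec_command('uname -a')
print(stdout.read().decode())
client.close()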
8
13
60,768,676
2020-3-20
https://stackoverflow.com/questions/60768676/what-is-the-default-install-path-for-poetry
I installed poetry, however I'm getting the following error when attempting to call poetry zsh: command not found: poetry I know I have it installed because I get the following output when trying to run the following install script $ curl -sSL https://raw.githubusercontent.com/python-poetry/poetry/master/get-poetry.py | python Retrieving Poetry metadata Latest version already installed.
The default install location is ~/.poetry/bin/poetry I added the following to my .zshrc export PATH="$HOME/.local/bin:$PATH"
22
56
60,814,982
2020-3-23
https://stackoverflow.com/questions/60814982/how-to-setup-pip-to-download-from-mirror-repository-by-default
I am forced to download python packages from local mirror PyPi repository. I do this by using the -i and --trusted-host options. Whole installation command looks like this: pip install -i https://sampleurl.com/pypi-remote/simple --trusted-host sample.host.com package_name Having to type in that options each time is kinda annoying though (in reality those are long URL's). I've tried to create get_package.bat file (I'm working on Windows 10) with following content: pip install -i https://sampleurl.com/pypi-remote/simple --trusted-host sample.host.com "%1" It works perfectly fine, although when I wanted to execute pip search command, it turned out to be useless since it has hard-coded install command and there is no way to use it with search. Is there any way in which I can setup pip to download from mirror repository by default, so that I can execute pip install package_name or pip search package_name without any additional options? Eventually I could try making .bat file that would take 2 parameters like this: pip %1 -i https://sampleurl.com/pypi-remote/simple --trusted-host sample.host.com "%2" But I wonder if there's more "elegant" way to do this.
using pip config, on user or global level. I have /etc/pip.conf configured like this: [global] index=https://my-company/nexus/repository/pypi-group/pypi index-url=https://my-company/nexus/repository/pypi-group/simple trusted-host=my-company but you can configure this using pip config on user or global level, something like: pip config --user set global.index https://my-company/nexus/repository/pypi-group/pypi pip config --user set global.index-url https://my-company/nexus/repository/pypi-group/simple pip config --user set global.trusted-host my-company #NOTES --index-url is used by pip install --index is used by pip search
76
127
60,839,470
2020-3-24
https://stackoverflow.com/questions/60839470/pymongo-errors-serverselectiontimeouterrorlocalhost27017winerror-10061-no-c
I am following the Python tutorial from W3Schools. I just started the MongoDB chapter. I installed MongoDB and checked it with: import pymongo without getting an error. But as soon as I enter the following code: import pymongo myclient = pymongo.MongoClient("mongodb://localhost:27017/") mydb = myclient["mydatabase"] mycol = mydb["customers"] mydict = { "name": "John", "address": "Highway 37" } x = mycol.insert_one(mydict) print(x.inserted_id) I get these messages and an error message at the bottom in cmd: cd C:\Users\xxx\myname python index.py Output: Traceback (most recent call last): File "index.py", line 8, in <module> x = mycol.insert_one(mydict) File "C:\Users\path...\pymongo\collection.py", line 695, in insert_one self._insert(document, File "C:\Users\path...\pymongo\collection.py", line 610, in _insert return self._insert_one( File "C:\Users\path...\pymongo\collection.py", line 599, in _insert_one self.__database.client._retryable_write( File "C:\Users\path...\pymongo\mongo_client.py", line 1490, in _retryable_write with self._tmp_session(session) as s: File "C:\Program Files\WindowsApps\path...\lib\contextlib.py", line 113, in __enter__ return next(self.gen) File "C:\Users\path...\pymongo\mongo_client.py", line 1823, in _tmp_session s = self._ensure_session(session) File "C:\Users\path...\pymongo\mongo_client.py", line 1810, in _ensure_session return self.__start_session(True, causal_consistency=False) File "C:\Users\path...\pymongo\mongo_client.py", line 1763, in __start_session server_session = self._get_server_session() File "C:\Users\path...\pymongo\mongo_client.py", line 1796, in _get_server_session return self._topology.get_server_session() File "C:\Users\path...\pymongo\topology.py", line 482, in get_server_session self._select_servers_loop( File "C:\Users\path...\pymongo\topology.py", line 208, in _select_servers_loop raise ServerSelectionTimeoutError( pymongo.errors.ServerSelectionTimeoutError: localhost: 27017: [WinError 10061] Could not connect because target computer actively refused connection I also tried too disabling firewall temporarily, but the error kept coming up. I used: "python 3.8.2 , mongoDB 4.2.5.0 , pymongo 3.10.1 , windows 10 home" What is going wrong?
There is nothing wrong with your code. If you have disabled your firewall, the most likely reason is that the MongoDB service is not installed or running. On Windows, press the Windows key and type services to open the services application. Check the service MongoDB Server is listed and has a Running status. You can test local connectivity by opening your favourite Windows terminal or PowerShell and typing mongo. If it is working you should see: PS> mongo MongoDB shell version v4.2.3 connecting to: mongodb://127.0.0.1:27017/?compressors=disabled&gssapiServiceName=mongodb Implicit session: session { "id" : UUID("1b5499b8-166a-4de6-a8c9-643499f04e66") } MongoDB server version: 4.2.3
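As a small, hedged addition: you can also verify connectivity from Python itself and fail fast instead of waiting for the default 30-second server selection timeout, for example:
import pymongo
from pymongo.errors import ServerSelectionTimeoutError

client = pymongo.MongoClient("mongodb://localhost:27017/", serverSelectionTimeoutMS=2000)
try:
    client.server_info()  # forces a round trip to the server
    print("MongoDB is reachable")
except ServerSelectionTimeoutError as exc:
    print("MongoDB is not reachable:", exc)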
9
11
60,832,201
2020-3-24
https://stackoverflow.com/questions/60832201/how-can-i-do-real-time-voice-activity-detection-in-python
I am performing voice activity detection on a recorded audio file to detect speech vs non-speech portions in the waveform. The output of the classifier looks like this (highlighted green regions indicate speech): The only issue I face here is making it work for a stream of audio input (e.g. from a microphone) and doing real-time analysis for a stipulated time-frame. I know PyAudio can be used to record speech from the microphone dynamically, and there are a couple of real-time visualization examples of a waveform, spectrum, spectrogram, etc., but I could not find anything relevant to carrying out feature extraction in a near real-time manner.
You should try using Python bindings to webRTC VAD from Google. It's lightweight, fast and provides very reasonable results, based on GMM modelling. As the decision is provided per frame, the latency is minimal. # Run the VAD on 10 ms of silence. The result should be False. import webrtcvad vad = webrtcvad.Vad(2) sample_rate = 16000 frame_duration = 10 # ms frame = b'\x00\x00' * int(sample_rate * frame_duration / 1000) print('Contains speech: %s' % (vad.is_speech(frame, sample_rate))) Also, this article might be useful for you. UPDATE December 2022 As the topic still draws attention, I'd like to update my answer. SileroVAD is a very fast and very accurate VAD that was released recently under the MIT license.
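To tie this to the microphone part of the question, here is a rough, hedged sketch of streaming 30 ms frames from PyAudio into webrtcvad (the aggressiveness level and frame length are just example choices):
import pyaudio
import webrtcvad

vad = webrtcvad.Vad(2)          # aggressiveness 0-3
sample_rate = 16000
frame_ms = 30                   # webrtcvad accepts 10, 20 or 30 ms frames
samples_per_frame = int(sample_rate * frame_ms / 1000)

pa = pyaudio.PyAudio()
stream = pa.open(format=pyaudio.paInt16, channels=1, rate=sample_rate,
                 input=True, frames_per_buffer=samples_per_frame)
try:
    while True:
        frame = stream.read(samples_per_frame, exception_on_overflow=False)
        print('speech' if vad.is_speech(frame, sample_rate) else 'silence')
except KeyboardInterrupt:
    pass
finally:
    stream.stop_stream()
    stream.close()
    pa.terminate()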
19
25
60,782,785
2020-3-20
https://stackoverflow.com/questions/60782785/python3-m-pip-install-vs-pip3-install
I always use pip install (which I think is equivalent to pip3 install since I only have python3 in my env) to install packages. But I recently heard python3 -m pip install is better. Why?
I would advise against ever calling any pip somecommand (or pip3) script directly. Instead it's much safer to call pip's executable module for a specific Python interpreter explicitly, something of the form path/to/pythonX.Y -m pip somecommand. There are many advantages to this, for example: It is explicit for which Python interpreter the projects will be pip-installed (Python 2 or 3, inside the virtual environment or not, etc.) For a virtual environment, one can pip-install (or do other things) without activating it: path/to/venv/bin/python -m pip install SomeProject Under Windows this is the only way to safely upgrade pip itself path\to\venv\Scripts\python.exe -m pip install --upgrade pip But yes, if all is perfectly setup, then python3 -m pip install SomeProject and pip3 install SomeProject should do the exact same thing, but there are way too many cases where there is an issue with the setup and things don't work as expected and users get confused (as shown by the many questions about this topic on this platform). References Brett Cannon's article "Why you should use python -m pip" pip's documentation section on "Upgrading pip" venv's documentation section on "Creating virtual environments": "You don’t specifically need to activate an environment [...]"
12
10
60,830,938
2020-3-24
https://stackoverflow.com/questions/60830938/python-multiprocessing-logging-via-queuehandler
I have a Python multiprocessing application to which I would like to add some logging functionality. The Python logging cookbook recommends using a Queue. Every process will put log records into it via the QueueHandler and a Listener Process will handle the records via a predefined Handler. Here is the example provided by the Python logging cookbook: # You'll need these imports in your own code import logging import logging.handlers import multiprocessing # Next two import lines for this demo only from random import choice, random import time # # Because you'll want to define the logging configurations for listener and workers, the # listener and worker process functions take a configurer parameter which is a callable # for configuring logging for that process. These functions are also passed the queue, # which they use for communication. # # In practice, you can configure the listener however you want, but note that in this # simple example, the listener does not apply level or filter logic to received records. # In practice, you would probably want to do this logic in the worker processes, to avoid # sending events which would be filtered out between processes. # # The size of the rotated files is made small so you can see the results easily. def listener_configurer(): root = logging.getLogger() h = logging.handlers.RotatingFileHandler('mptest.log', 'a', 300, 10) f = logging.Formatter('%(asctime)s %(processName)-10s %(name)s %(levelname)-8s %(message)s') h.setFormatter(f) root.addHandler(h) # This is the listener process top-level loop: wait for logging events # (LogRecords)on the queue and handle them, quit when you get a None for a # LogRecord. def listener_process(queue, configurer): configurer() while True: try: record = queue.get() if record is None: # We send this as a sentinel to tell the listener to quit. break logger = logging.getLogger(record.name) logger.handle(record) # No level or filter logic applied - just do it! except Exception: import sys, traceback print('Whoops! Problem:', file=sys.stderr) traceback.print_exc(file=sys.stderr) # Arrays used for random selections in this demo LEVELS = [logging.DEBUG, logging.INFO, logging.WARNING, logging.ERROR, logging.CRITICAL] LOGGERS = ['a.b.c', 'd.e.f'] MESSAGES = [ 'Random message #1', 'Random message #2', 'Random message #3', ] # The worker configuration is done at the start of the worker process run. # Note that on Windows you can't rely on fork semantics, so each process # will run the logging configuration code when it starts. def worker_configurer(queue): h = logging.handlers.QueueHandler(queue) # Just the one handler needed root = logging.getLogger() root.addHandler(h) # send all messages, for demo; no other level or filter logic applied. root.setLevel(logging.DEBUG) # This is the worker process top-level loop, which just logs ten events with # random intervening delays before terminating. # The print messages are just so you know it's doing something! def worker_process(queue, configurer): configurer(queue) name = multiprocessing.current_process().name print('Worker started: %s' % name) for i in range(10): time.sleep(random()) logger = logging.getLogger(choice(LOGGERS)) level = choice(LEVELS) message = choice(MESSAGES) logger.log(level, message) print('Worker finished: %s' % name) # Here's where the demo gets orchestrated. Create the queue, create and start # the listener, create ten workers and start them, wait for them to finish, # then send a None to the queue to tell the listener to finish. 
def main(): queue = multiprocessing.Queue(-1) listener = multiprocessing.Process(target=listener_process, args=(queue, listener_configurer)) listener.start() workers = [] for i in range(10): worker = multiprocessing.Process(target=worker_process, args=(queue, worker_configurer)) workers.append(worker) worker.start() for w in workers: w.join() queue.put_nowait(None) listener.join() if __name__ == '__main__': main() My question: The example from the Python logging cookbook implies that the Queue has to be passed to every function that will be executed in multiprocessing mode. This sure works if you have a small application, but it gets ugly if you have a bigger program. Is there a way to use something like a Singleton Queue that is created once via logging.config.dictConfig and then shared by all processes without having to pass it to every function?
In your case a few simple classes will do the trick. Have a look and let me know if you need some further explanations or want something different. import logging import logging.handlers import multiprocessing import multiprocessing.pool from random import choice, random import time class ProcessLogger(multiprocessing.Process): _global_process_logger = None def __init__(self): super().__init__() self.queue = multiprocessing.Queue(-1) @classmethod def get_global_logger(cls): if cls._global_process_logger is not None: return cls._global_process_logger raise Exception("No global process logger exists.") @classmethod def create_global_logger(cls): cls._global_process_logger = ProcessLogger() return cls._global_process_logger @staticmethod def configure(): root = logging.getLogger() h = logging.handlers.RotatingFileHandler('mptest.log', 'a', 1024**2, 10) f = logging.Formatter('%(asctime)s %(processName)-10s %(name)s %(levelname)-8s %(message)s') h.setFormatter(f) root.addHandler(h) def stop(self): self.queue.put_nowait(None) def run(self): self.configure() while True: try: record = self.queue.get() if record is None: break logger = logging.getLogger(record.name) logger.handle(record) except Exception: import sys, traceback print('Whoops! Problem:', file=sys.stderr) traceback.print_exc(file=sys.stderr) def new_process(self, target, args=[], kwargs={}): return ProcessWithLogging(self, target, args, kwargs) def configure_new_process(log_process_queue): h = logging.handlers.QueueHandler(log_process_queue) root = logging.getLogger() root.addHandler(h) root.setLevel(logging.DEBUG) class ProcessWithLogging(multiprocessing.Process): def __init__(self, target, args=[], kwargs={}, log_process=None): super().__init__() self.target = target self.args = args self.kwargs = kwargs if log_process is None: log_process = ProcessLogger.get_global_logger() self.log_process_queue = log_process.queue def run(self): configure_new_process(self.log_process_queue) self.target(*self.args, **self.kwargs) class PoolWithLogging(multiprocessing.pool.Pool): def __init__(self, processes=None, context=None, log_process=None): if log_process is None: log_process = ProcessLogger.get_global_logger() super().__init__(processes=processes, initializer=configure_new_process, initargs=(log_process.queue,), context=context) LEVELS = [logging.DEBUG, logging.INFO, logging.WARNING, logging.ERROR, logging.CRITICAL] LOGGERS = ['a.b.c', 'd.e.f'] MESSAGES = [ 'Random message #1', 'Random message #2', 'Random message #3', ] def worker_process(param=None): name = multiprocessing.current_process().name print('Worker started: %s' % name) for i in range(10): time.sleep(random()) logger = logging.getLogger(choice(LOGGERS)) level = choice(LEVELS) message = choice(MESSAGES) logger.log(level, message) print('Worker finished: {}, param: {}'.format(name, param)) return param def main(): process_logger = ProcessLogger.create_global_logger() process_logger.start() workers = [] for i in range(10): worker = ProcessWithLogging(worker_process) workers.append(worker) worker.start() for w in workers: w.join() with PoolWithLogging(processes=4) as pool: print(pool.map(worker_process, range(8))) process_logger.stop() process_logger.join() if __name__ == '__main__': main()
10
12
60,788,680
2020-3-21
https://stackoverflow.com/questions/60788680/what-is-the-time-complexity-of-type-casting-function-in-python
For example, int(x) float(x) str(x) What is time complexity of them?
There is no definite answer to this because it depends not just what type you're converting to, but also what type you're converting from. Let's consider just numbers and strings. To avoid writing "log" everywhere, we'll measure the size of an int by saying n is how many bits or digits it takes to represent it. (Asymptotically it doesn't matter if you count bits or digits.) For strings, obviously we should let n be the length of the string. There is no meaningful way to measure the "input size" of a float object, since floating-point numbers all take the same amount of space. Converting an int, float or str to its own type ought to take Θ(1) time because they are immutable objects, so it's not even necessary to make a copy. Converting an int to a float ought to take Θ(1) time because you only need to read at most a fixed constant number of bits from the int object to find the mantissa, and the bit length to find the exponent. Converting an int to a str ought to take Θ(n2) time, because you have to do Θ(n) division and remainder operations to find n digits, and each arithmetic operation takes Θ(n) time because of the size of the integers involved. Converting a str to an int ought to take Θ(n2) time because you need to do Θ(n) multiplications and additions on integers of size Θ(n). Converting a str to a float ought to take Θ(n) time. The algorithm only needs to read a fixed number of characters from the string to do the conversion, and floating-point arithmetic operations (or operations on bounded int values to avoid intermediate rounding errors) for each character take Θ(1) time; but the algorithm needs to look at the rest of the characters anyway in order to raise a ValueError if the format is wrong. Converting a float to any type takes Θ(1) time because there are only finitely many distinct float values. I've said "ought to" because I haven't checked the actual source code; this is based on what the conversion algorithms need to do, and the assumption that the algorithms actually used aren't asymptotically worse than they need to be. There could be special cases to optimise the str-to-int conversion when the base is a power of 2, like int('11001010', 2) or int('AC5F', 16), since this can be done without arithmetic. If those cases are optimised then they should take Θ(n) time instead of Θ(n2). Likewise, converting an int to a str in a base which is a power of 2 (e.g. using the bin or hex functions) should take Θ(n) time.
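As a rough, machine-dependent illustration of the quadratic int-to-str cost described above (the digit counts below stay under the conversion-length limit that newer CPython versions impose by default):
import timeit

for digits in (500, 1000, 2000, 4000):
    n = 10 ** digits  # an integer with digits+1 decimal digits
    t = timeit.timeit(lambda: str(n), number=100)
    print(f'{digits:>5} digits: {t:.4f} s')
# doubling the number of digits should roughly quadruple the time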
14
10
60,765,317
2020-3-19
https://stackoverflow.com/questions/60765317/how-to-create-an-openapi-schema-for-an-uploadfile-in-fastapi
FastAPI automatically generates a schema in the OpenAPI spec for UploadFile parameters. For example, this code: from fastapi import FastAPI, File, UploadFile app = FastAPI() @app.post("/uploadfile/") async def create_upload_file(file: UploadFile = File(..., description="The file")): return {"filename": file.filename} will generate this schema under components:schemas in the OpenAPI spec: { "Body_create_upload_file_uploadfile__post": { "title": "Body_create_upload_file_uploadfile__post", "required":["file"], "type":"object", "properties":{ "file": {"title": "File", "type": "string", "description": "The file","format":"binary"} } } } How can I explicitly specify the schema for UploadFiles (or at least its name)? I have read FastAPIs docs and searched the issue tracker but found nothing.
You can edit the OpenAPI schema itself. I prefer to just move these schemas to the path (since they are unique to each path anyway): from fastapi import FastAPI, File, UploadFile from fastapi.openapi.utils import get_openapi app = FastAPI() @app.post("/uploadfile/") async def create_upload_file(file1: UploadFile = File(...), file2: UploadFile = File(...)): pass def custom_openapi(): if app.openapi_schema: return app.openapi_schema openapi_schema = get_openapi( title="Custom title", version="2.5.0", description="This is a very custom OpenAPI schema", routes=app.routes, ) # Move autogenerated Body_ schemas, see https://github.com/tiangolo/fastapi/issues/1442 for path in openapi_schema["paths"].values(): for method_data in path.values(): if "requestBody" in method_data: for content_type, content in method_data["requestBody"]["content"].items(): if content_type == "multipart/form-data": schema_name = content["schema"]["$ref"].lstrip("#/components/schemas/") schema_data = openapi_schema["components"]["schemas"].pop(schema_name) content["schema"] = schema_data app.openapi_schema = openapi_schema return app.openapi_schema app.openapi = custom_openapi
13
4
60,762,378
2020-3-19
https://stackoverflow.com/questions/60762378/exec-python-executable-file-not-found-in-path
I am running Arduino IDE 1.8.12 on Ubuntu 18.04.4 LTS. I am trying to compile the example code for the ESP32 Camera module (standard camera module with the default example in the Arduino IDE) and I got this error (which I think is not an Arduino issue, but a Python one): "exec: "python": executable file not found in $PATH Error compiling for board ESP32 Wrover Module" I get the same message with all ESP32 boards. I also did sudo apt install python and got back this: Reading package lists... Done Building dependency tree Reading state information... Done python is already the newest version (2.7.15~rc1-1). 0 upgraded, 0 newly installed, 0 to remove and 6 not upgraded. When I type python in the terminal, I get this back: Python 2.7.17 (default, Nov 7 2019, 10:07:09) [GCC 7.4.0] on linux2 Type "help", "copyright", "credits" or "license" for more information. Thank you for the help! BR, Valters
To solve and fix the following upload error from the Arduino IDE to the ESP32-CAM (and for the ESP32 too): Environment: Ubuntu 20.04 64-bit, Arduino 1.8.13, ESP32-CAM and YP-05 (for the ESP's serial connection). exec: "python": executable file not found in $PATH Error compiling for board AI Thinker ESP32-CAM. The solution is: 1) Install the python-is-python3 package (for example python-is-python3_3.8.2-4_all): sudo dpkg -i python-is-python3_3.8.2-4_all.deb 2) Wiring. Wire colors: | black | NO | WHITE | GRAY | BROWN | EMPTY YP-05 leg order: | GND | EMPTY | VCC | TX | RX | DIR ESP32-CAM: | GND | EMPTY | 3.3V | GPIO 3 UOR | GPIO 1 UOT | EMPTY *** Just for upload: short ESP32-CAM IO0 & GND I hope it will save time to start using the ESP32-CAM (and the ESP32 too). That's it - solved and running!
10
2
60,739,653
2020-3-18
https://stackoverflow.com/questions/60739653/gdown-is-giving-permission-error-for-particular-file-although-it-is-opening-up-f
I am not able to download a file using the gdown package. It gives a permission error, but when I open the link manually it gives no such error and opens up fine. Here is the code I am using and the link: import gdown url='https://drive.google.com/uc?id=0B1lRQVLFjBRNR3Jqam1menVtZnc' output='letter.pdf' gdown.download(url, output, quiet=False) The error is: Permission denied: https://drive.google.com/uc?id=0B1lRQVLFjBRNR3Jqam1menVtZnc Maybe you need to change permission over 'Anyone with the link'?
If you're working with big files (in my case was a >1gb file), you can solve by copying the url from 'Download anyway' button in Google Drive.
32
10
60,819,376
2020-3-23
https://stackoverflow.com/questions/60819376/fastapi-throws-an-error-error-loading-asgi-app-could-not-import-module-api
I tried to run FastAPI using uvicorn webserver but it throws an error. I run this command, uvicorn api:app --reload --host 0.0.0.0 but there is an error in the terminal. Uvicorn running on http://0.0.0.0:8000 (Press CTRL+C to quit) Started reloader process [23445] Error loading ASGI app. Could not import module "api". Stopping reloader process [23445]
TL;DR Add the directory name in front of your filename uvicorn src.main:app or cd into that directory cd src uvicorn main:app Long Answer It happens because you are not in the same folder as your FastAPI app instance. More specifically: Let's say I have an app tree like this: my_fastapi_app/ ├── app.yaml ├── docker-compose.yml ├── src │ └── main.py └── tests ├── test_xx.py └── test_yy.py $ pwd # Present Working Directory /home/yagiz/Desktop/my_fastapi_app I'm not inside the same folder as my app instance, so if I try to run my app with uvicorn I'll get an error like yours: $ uvicorn main:app --reload INFO: Uvicorn running on http://127.0.0.1:8000 (Press CTRL+C to quit) INFO: Started reloader process [40645] using statreload ERROR: Error loading ASGI app. Could not import module "main". The answer is simple: add the folder name in front of your filename uvicorn src.main:app --reload or you can change your working directory cd src Now I'm inside the folder with my app instance src └── main.py Run your uvicorn again $ uvicorn main:app --reload INFO: Uvicorn running on http://127.0.0.1:8000 (Press CTRL+C to quit) INFO: Started reloader process [40726] using statreload INFO: Started server process [40728] INFO: Waiting for application startup. INFO: Application startup complete.
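As an optional, hedged addition: if you prefer to avoid the working-directory question entirely, uvicorn can also be started programmatically from a small launcher script at the project root (the module path and port are just example choices):
# run.py, placed at the project root next to src/
import uvicorn

if __name__ == "__main__":
    # the import string is resolved just like the CLI argument, and is required for reload=True
    uvicorn.run("src.main:app", host="127.0.0.1", port=8000, reload=True)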
171
316
60,816,403
2020-3-23
https://stackoverflow.com/questions/60816403/get-week-number-with-week-start-day-different-than-monday-python
I have a dataset with a date column. I want to get the week number associated with each date. I know I can use: x['date'].isocalendar()[1] But it gives me the week num with start day = monday. While I need the week to start on a friday. How do you suggest I go about doing that?
tl;dr The sections "ISO Standard" and "What you want" are there to clarify your need. You could just copy-paste the code in the section "Solution" and see if the result is what you want. ISO Standard Definition Weeks start with Monday. Each week's year is the Gregorian year in which the Thursday falls. Result of Python Standard Library datetime >>> datetime(2020, 1, 1).isocalendar() (2020, 1, 3) # The 3rd day of the 1st week in 2020 >>> datetime(2019, 12, 31).isocalendar() (2020, 1, 2) # The 2nd day of the 1st week in 2020 >>> datetime(2019, 1, 1).isocalendar() (2019, 1, 2) >>> datetime(2017, 1, 1).isocalendar() (2016, 52, 7) >>> datetime(2016, 12, 26).isocalendar() (2016, 52, 1) >>> datetime(2015, 12, 31).isocalendar() (2015, 53, 4) >>> datetime(2016, 1, 1).isocalendar() (2015, 53, 5) Calendar Sketch # Mo Tu Wd Th Fr Sa Sn # [2019-52w] DEC/ 23 24 25 26 27 28 29 /DEC # [2020-1w] DEC/ 30 31 1 2 3 4 5 /JAN # [2019-1w] DEC/ 31 1 2 3 4 5 6 /JAN # [2016-52w] DEC/ 26 27 28 29 30 31 1 /JAN # [2015-53w] DEC/ 28 29 30 31 1 2 3 /JAN # [2016-1w] JAN/ 4 5 6 7 8 9 10 /JAN What You Want Definition Weeks start with Friday. Each week's year is the Gregorian year in which the Monday falls. Calendar Sketch # Fr Sa Sn. Mo Tu Wd Th # [2019-51w] DEC/ 20 21 22. 23 24 25 26 /DEC # [2019-52w] DEC/ 27 28 29. 30 31 1 2 /JAN # [2020-1w] JAN/ 3 4 5. 6 7 8 9 /JAN # [2018-53w] DEC/ 28 29 30. 31 1 2 3 /JAN # [2019-1w] JAN/ 4 5 6. 7 8 9 10 /JAN # [2016-52w] DEC/ 23 24 25. 26 27 28 29 /DEC # [2017-1w] DEC/ 30 31 1. 2 3 4 5 /JAN # [2015-52w] DEC/ 25 26 27. 28 29 30 31 /DEC # [2016-1w] JAN/ 1 2 3. 4 5 6 7 /JAN Solution from datetime import datetime, timedelta from enum import IntEnum WEEKDAY = IntEnum('WEEKDAY', 'MON TUE WED THU FRI SAT SUN', start=1) class CustomizedCalendar: def __init__(self, start_weekday, indicator_weekday=None): self.start_weekday = start_weekday self.indicator_delta = 3 if not (indicator_weekday) else (indicator_weekday - start_weekday) % 7 def get_week_start(self, date): delta = date.isoweekday() - self.start_weekday return date - timedelta(days=delta % 7) def get_week_indicator(self, date): week_start = self.get_week_start(date) return week_start + timedelta(days=self.indicator_delta) def get_first_week(self, year): indicator_date = self.get_week_indicator(datetime(year, 1, 1)) if indicator_date.year == year: # The date "year.1.1" is on 1st week. return self.get_week_start(datetime(year, 1, 1)) else: # The date "year.1.1" is on the last week of "year-1". return self.get_week_start(datetime(year, 1, 8)) def calculate(self, date): year = self.get_week_indicator(date).year first_date_of_first_week = self.get_first_week(year) diff_days = (date - first_date_of_first_week).days return year, (diff_days // 7 + 1), (diff_days % 7 + 1) if __name__ == '__main__': # Use like this: my_calendar = CustomizedCalendar(start_weekday=WEEKDAY.FRI, indicator_weekday=WEEKDAY.MON) print(my_calendar.calculate(datetime(2020, 1, 2))) To Test We could simply initialize CustomizedCalendar with the original ISO settings, and verify that the outcome is the same as the original isocalendar()'s result. my_calendar = CustomizedCalendar(start_weekday=WEEKDAY.MON) s = datetime(2019, 12, 19) for delta in range(20): print(my_calendar.calculate(s) == s.isocalendar()) s += timedelta(days=1)
12
10
60,783,222
2020-3-21
https://stackoverflow.com/questions/60783222/how-to-test-a-fastapi-api-endpoint-that-consumes-images
I am using pytest to test a FastAPI endpoint that gets in input an image in binary format as in @app.post("/analyse") async def analyse(file: bytes = File(...)): image = Image.open(io.BytesIO(file)).convert("RGB") stats = process_image(image) return stats After starting the server, I can manually test the endpoint successfully by running a call with requests import requests from requests_toolbelt.multipart.encoder import MultipartEncoder url = "http://127.0.0.1:8000/analyse" filename = "./example.jpg" m = MultipartEncoder( fields={'file': ('filename', open(filename, 'rb'), 'image/jpeg')} ) r = requests.post(url, data=m, headers={'Content-Type': m.content_type}, timeout = 8000) assert r.status_code == 200 However, setting up tests in a function of the form: from fastapi.testclient import TestClient from requests_toolbelt.multipart.encoder import MultipartEncoder from app.server import app client = TestClient(app) def test_image_analysis(): filename = "example.jpg" m = MultipartEncoder( fields={'file': ('filename', open(filename, 'rb'), 'image/jpeg')} ) response = client.post("/analyse", data=m, headers={"Content-Type": "multipart/form-data"} ) assert response.status_code == 200 when running tests with python -m pytest, that gives me back a > assert response.status_code == 200 E assert 400 == 200 E + where 400 = <Response [400]>.status_code tests\test_server.py:22: AssertionError -------------------------------------------------------- Captured log call --------------------------------------------------------- ERROR fastapi:routing.py:133 Error getting request body: can't concat NoneType to bytes ===================================================== short test summary info ====================================================== FAILED tests/test_server.py::test_image_analysis - assert 400 == 200 what am I doing wrong? What's the right way to write a test function test_image_analysis() using an image file?
You see a different behavior because requests and TestClient are not exactly the same in every aspect, as TestClient wraps requests. To dig deeper, refer to the source code (FastAPI uses TestClient from the starlette library, FYI): https://github.com/encode/starlette/blob/master/starlette/testclient.py To solve it, you can get rid of MultipartEncoder because requests can accept file bytes and encode them in form-data format, with something like # change it r = requests.post(url, data=m, headers={'Content-Type': m.content_type}, timeout = 8000) # to r = requests.post(url, files={"file": ("filename", open(filename, "rb"), "image/jpeg")}) and modifying the FastAPI test code: # change response = client.post("/analyse", data=m, headers={"Content-Type": "multipart/form-data"} ) # to response = client.post( "/analyse", files={"file": ("filename", open(filename, "rb"), "image/jpeg")} )
20
33
60,827,999
2020-3-24
https://stackoverflow.com/questions/60827999/use-dictionary-in-tf-function-input-signature-in-tensorflow-2-0
I am using Tensorflow 2.0 and facing the following situation: @tf.function def my_fn(items): .... #do stuff return If items is a dict of Tensors like for example: item1 = tf.zeros([1, 1]) item2 = tf.zeros(1) items = {"item1": item1, "item2": item2} Is there a way of using input_signature argument of tf.function so I can force tf2 to avoid creating multiple graphs when item1 is for example tf.zeros([2,1]) ?
The input signature has to be a list, but elements in the list can be dictionaries or lists of Tensor Specs. In your case I would try: (the name attributes are optional) signature_dict = { "item1": tf.TensorSpec(shape=[2], dtype=tf.int32, name="item1"), "item2": tf.TensorSpec(shape=[], dtype=tf.int32, name="item2") } # don't forget the brackets around the 'signature_dict' @tf.function(input_signature = [signature_dict]) def my_fn(items): .... # do stuff return # calling the TensorFlow function my_fn(items) However, if you want to call a particular concrete function created by my_fn, you have to unpack the dictionary. You also have to provide the name attribute in tf.TensorSpec. # creating a concrete function with an input signature as before but without # brackets and with mandatory 'name' attributes in the TensorSpecs my_concrete_fn = my_fn.get_concrete_function(signature_dict) # calling the concrete function with the unpacking operator my_concrete_fn(**items) This is annoying but should be resolved in TensorFlow 2.3. (see the end of the TF Guide to 'Concrete functions')
14
15
60,789,886
2020-3-21
https://stackoverflow.com/questions/60789886/error-failed-to-create-temp-directory-c-users-user-appdata-local-temp-conda
When I try to activate my environment with conda activate tensorflow_cpu I get: Error : Failed to create temp directory "C:\Users\user\AppData\Local\Temp\conda-\"
It is due to a bug on the conda side: the temp path contains names with spaces. To work around it, reassign the TEMP and TMP environment variables (on Windows): go to Environment Variables, in the "User variables for ..." section look for TEMP and TMP, double click on TMP and in "Variable value" type "C:\conda_tmp", then do the same for TEMP and close the Environment Variables dialog. Restart the Anaconda prompt and the error should vanish.
12
20
60,839,909
2020-3-24
https://stackoverflow.com/questions/60839909/errors-running-pandas-profile-report
I'm trying to run a Profile Report for EDA in conda Jupyter NB, but keep getting errors. Here is my code thus far: import pandas_profiling from pandas_profiling import ProfileReport profile = ProfileReport(data) and profile = pandas_profiling.ProfileReport(data) both of which produce: TypeError: concat() got an unexpected keyword argument 'join_axes' Research recommended upgrading to Pandas 1.0, which I'm using. Also tried data.profile_report() AttributeError: 'DataFrame' object has no attribute 'profile_report' Any tips on where I am going wrong? Addendum...So I finally figured it out. Needed to install latest version of pandas-profiling in conda, which was 202003 version. Too easy.
Installed most recent version (March 2020) of pandas-profiling in conda. conda install -c conda-forge/label/cf202003 pandas-profiling Was then able to import pandas_profiling in jupyter notebook
12
4
60,835,421
2020-3-24
https://stackoverflow.com/questions/60835421/pyspark-topandas-function-is-changing-column-type
I have a pyspark dataframe with following schema: root |-- src_ip: integer (nullable = true) |-- dst_ip: integer (nullable = true) When converting this dataframe to pandas via toPandas(), the column type changes from integer in spark to float in pandas: <class 'pandas.core.frame.DataFrame'> RangeIndex: 9847 entries, 0 to 9846 Data columns (total 2 columns): # Column Non-Null Count Dtype --- ------ -------------- ----- 0 src_ip 9607 non-null float64 1 dst_ip 9789 non-null float64 dtypes: float64(2) memory usage: 154.0 KB Is there any way to keep integer value with toPandas() or I can only cast column type in resulting pandas dataframe?
SPARK-21766 (https://issues.apache.org/jira/browse/SPARK-21766) explains the behavior you observed. As a workaround, you can call fillna(0) before toPandas(): df1 = sc.createDataFrame([(0, None), (None, 8)], ["src_ip", "dest_ip"]) print(df1.dtypes) # Reproduce the issue pdf1 = df1.toPandas() print(pdf1.dtypes) # A workaround pdf2 = df1.fillna(0).toPandas() print(pdf2.dtypes)
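If filling the nulls with 0 would distort your data, another option (a sketch that assumes a reasonably recent pandas with nullable integer support) is to cast the columns after conversion:
pdf = df1.toPandas()
# 'Int64' is pandas' nullable integer dtype, so the missing values survive as <NA>
pdf = pdf.astype({'src_ip': 'Int64', 'dest_ip': 'Int64'})
print(pdf.dtypes)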
8
3
60,828,641
2020-3-24
https://stackoverflow.com/questions/60828641/simplest-way-to-perform-logging-from-google-cloud-run
I followed this guide https://firebase.google.com/docs/hosting/cloud-run to set up a Cloud Run docker container. Then I tried to follow this guide https://cloud.google.com/run/docs/logging to perform a simple log. I am trying to write a structured log to stdout. This is my code: trace_header = request.headers.get('X-Cloud-Trace-Context') if trace_header: trace = trace_header.split('/') global_log_fields['logging.googleapis.com/trace'] = "projects/sp-64d90/traces/" + trace[0] # Complete a structured log entry. entry = dict(severity='NOTICE', message='This is the default display field.', # Log viewer accesses 'component' as jsonPayload.component'. component='arbitrary-property', **global_log_fields) print(json.dumps(entry)) I cannot see this log in the Cloud Logs Viewer. I do see the HTTP GET logs each time I call the container. Am I missing anything? I am new to this and wondered what is the simplest way to log information and view it, assuming the container I created followed exactly the steps from the guide (https://firebase.google.com/docs/hosting/cloud-run). Thanks
I am running into the exact same issue. I did find that flushing stdout causes the logging to appear when it otherwise would not. Looks like a bug in Cloud Run to me. print(json.dumps(entry)) import sys sys.stdout.flush() Output with flushing
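A slightly shorter variant of the same workaround (just an illustrative sketch) is to let print do the flushing itself:
import json

entry = dict(severity='NOTICE', message='This is the default display field.')
print(json.dumps(entry), flush=True)  # flush=True pushes the line out immediately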
14
14
60,778,279
2020-3-20
https://stackoverflow.com/questions/60778279/fastapi-middleware-peeking-into-responses
I try to write a simple middleware for FastAPI peeking into response bodies. In this example I just log the body content: app = FastAPI() @app.middleware("http") async def log_request(request, call_next): logger.info(f'{request.method} {request.url}') response = await call_next(request) logger.info(f'Status code: {response.status_code}') async for line in response.body_iterator: logger.info(f' {line}') return response However it looks like I "consume" the body this way, resulting in this exception: ... File ".../python3.7/site-packages/starlette/middleware/base.py", line 26, in __call__ await response(scope, receive, send) File ".../python3.7/site-packages/starlette/responses.py", line 201, in __call__ await send({"type": "http.response.body", "body": b"", "more_body": False}) File ".../python3.7/site-packages/starlette/middleware/errors.py", line 156, in _send await send(message) File ".../python3.7/site-packages/uvicorn/protocols/http/httptools_impl.py", line 515, in send raise RuntimeError("Response content shorter than Content-Length") RuntimeError: Response content shorter than Content-Length Trying to look into the response object I couldn't see any other way to read its content. What is the correct way to do it?
I had a similar need in a FastAPI middleware and although not ideal, here's what we ended up with: app = FastAPI() @app.middleware("http") async def log_request(request, call_next): logger.info(f'{request.method} {request.url}') response = await call_next(request) logger.info(f'Status code: {response.status_code}') body = b"" async for chunk in response.body_iterator: body += chunk # do something with body ... return Response( content=body, status_code=response.status_code, headers=dict(response.headers), media_type=response.media_type ) Be warned that such an implementation is problematic with responses streaming a body that would not fit in your server RAM (imagine a response of 100GB). Depending on what your application does, you can judge whether it is an issue or not. In the case where some of your endpoints produce large responses, you might want to avoid using a middleware and instead implement a custom ApiRoute. This custom ApiRoute would have the same issue with consuming the body, but you can limit its usage to particular endpoints. Learn more at https://fastapi.tiangolo.com/advanced/custom-request-and-route/
15
14
60,776,749
2020-3-20
https://stackoverflow.com/questions/60776749/plot-confusion-matrix-without-estimator
I'm trying to use plot_confusion_matrix, from sklearn.metrics import confusion_matrix y_true = [1, 1, 0, 1] y_pred = [1, 1, 0, 0] confusion_matrix(y_true, y_pred) Output: array([[1, 0], [1, 2]]) Now, when using the following (with 'classes' or without 'classes'): from sklearn.metrics import plot_confusion_matrix plot_confusion_matrix(y_true, y_pred, classes=[0,1], title='Confusion matrix, without normalization') or plot_confusion_matrix(y_true, y_pred, title='Confusion matrix, without normalization') I expect to get a similar output to this, apart from the numbers inside. Since it plots a simple diagram, it should not require the estimator. Using mlxtend.plotting, from mlxtend.plotting import plot_confusion_matrix import matplotlib.pyplot as plt import numpy as np binary1 = np.array([[4, 1], [1, 2]]) fig, ax = plot_confusion_matrix(conf_mat=binary1) plt.show() It provides the same output. Based on this it requires a classifier, disp = plot_confusion_matrix(classifier, X_test, y_test, display_labels=class_names, cmap=plt.cm.Blues, normalize=normalize) Can I plot it without a classifier?
plot_confusion_matrix expects a trained classifier. If you look at the source code, what it does is perform the prediction to generate y_pred for you: y_pred = estimator.predict(X) cm = confusion_matrix(y_true, y_pred, sample_weight=sample_weight, labels=labels, normalize=normalize) So in order to plot the confusion matrix without specifying a classifier, you'll have to go with some other tool, or do it yourself. A simple option is to use seaborn: import seaborn as sns cm = confusion_matrix(y_true, y_pred) f = sns.heatmap(cm, annot=True)
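For a slightly more polished plot with the question's data, a hedged sketch (the labels and colormap are just example choices):
import matplotlib.pyplot as plt
import seaborn as sns
from sklearn.metrics import confusion_matrix

y_true = [1, 1, 0, 1]
y_pred = [1, 1, 0, 0]
cm = confusion_matrix(y_true, y_pred)
ax = sns.heatmap(cm, annot=True, fmt='d', cmap='Blues',
                 xticklabels=[0, 1], yticklabels=[0, 1])
ax.set_xlabel('Predicted label')
ax.set_ylabel('True label')
plt.show()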
8
10
60,827,049
2020-3-24
https://stackoverflow.com/questions/60827049/how-to-document-small-changes-to-complex-api-functions
Let's say we have a complex API function, imported from some library. def complex_api_function( number, <lots of positional arguments>, <lots of keyword arguments>): '''really long docstring''' # lots of code I want to write a simple wrapper around that function to make a tiny change. For example, it should be possible to pass the first argument as a string. How to document this? I considered the following options: Option 1: def my_complex_api_function(number_or_str, *args, **kwargs): ''' Do something complex. Like `complex_api_function`, but first argument can be a string. Parameters ---------- number_or_str : int or float or str Can be a number or a string that can be interpreted as a float. <copy paste description from complex_api_function docstring> *args Positional arguments passed to `complex_api_function`. **kwargs Keyword arguments passed to `complex_api_function`. Returns ------- <copy paste from complex_api_function docstring> Examples -------- <example where first argument is a string, e.g. '-5.0'> ''' return complex_api_function(float(number_or_str), *args, **kwargs) Disadvantage: User has to look at the docs of complex_api_function to get information about *args and **kwargs. Needs adjustment when the copy pasted sections from complex_api_function change. Option 2: Copy and paste complex_api_function's signature (instead of using *args and **kwargs) and its docstring. Make a tiny change to the docstring that mentions that the first argument can also be a string. Add an example. Disadvantage: verbose, has to be changed when complex_api_function changes. Option 3: Decorate my_complex_api_function with functools.wraps(complex_api_function). Disadvantage: There's no information that number can also be a string. I'm looking for an answer that does not hinge on the details of what changes in my_complex_api_function. The procedure should work for any tiny adjustment to the original complex_api_function.
I'd recommend something like the following: def my_complex_api_function(number_or_str, *args, **kwargs): """This function is a light wrapper to `complex_api_function`. It allows you to pass a string or a number, whereas `complex_api_function` requires a number. See :ref:`complex_api_function` for more details. :param number_or_str: number or str to convert to a number and pass to `complex_api_function`. :param args: Arguments to pass to `complex_api_function` :param kwargs: Keyword arguments to pass to `complex_api_function` :return: Output of `complex_api_function`, called with passed parameters """ This is clear and concise. But please also remember that, if using a documentation system like sphinx, to link the functions with :ref:`bob` or similar.
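If you prefer the Option 3 route instead, one possible sketch (using a stand-in for the imported complex_api_function, since the real one lives in your library) is to let functools.wraps copy the original docstring and then prepend only the delta, so the string-argument behaviour is still documented:
import functools

def complex_api_function(number, *args, **kwargs):
    """Stand-in for the imported library function."""
    return number

@functools.wraps(complex_api_function)
def my_complex_api_function(number_or_str, *args, **kwargs):
    return complex_api_function(float(number_or_str), *args, **kwargs)

# wraps() copied the original docstring; prepend the one-line difference
my_complex_api_function.__doc__ = (
    "Like `complex_api_function`, but the first argument may also be a string.\n\n"
    + (complex_api_function.__doc__ or "")
)

print(my_complex_api_function("3.5"))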
8
3
60,747,047
2020-3-18
https://stackoverflow.com/questions/60747047/spyder-4-deactivate-automatic-highlighting-of-last-word-after-few-seconds
When I stop typing in Spyder 4, all occurrences of the last word are automatically highlighted after about two seconds. Is it a bug or can I disable it? I use Spyder on Ubuntu 18.04.
I found the setting: Tools > Preferences > Editor > Display > Highlight occurrences after
9
16
60,736,569
2020-3-18
https://stackoverflow.com/questions/60736569/timestamp-subtraction-must-have-the-same-timezones-or-no-timezones-but-they-are
There are questions that address the same error TypeError: Timestamp subtraction must have the same timezones or no timezones but none of them faces the same issue as this one. I have 2 UTC Timestamps that throw that error when subtracted. print(date, type(date), date.tzinfo) >>> 2020-07-17 00:00:00+00:00 <class 'pandas._libs.tslibs.timestamps.Timestamp'> UTC print(date2, type(date2), date2.tzinfo) >>> 2020-04-06 00:00:00.000000001+00:00 <class 'pandas._libs.tslibs.timestamps.Timestamp'> UTC date - date2 >>> TypeError: Timestamp subtraction must have the same timezones or no timezones Edit: I'm using Python 3.6.9 and Pandas 1.0.1
After checking the timezone types: type(date.tzinfo) gives <class 'datetime.timezone'> and type(date2.tzinfo) gives <class 'pytz.UTC'>, so according to the pandas source code they are not considered equal even if they are both UTC. So the solution was to make them have the same tzinfo type (either pytz or datetime.timezone). This is an open issue on GitHub: https://github.com/pandas-dev/pandas/issues/32619
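One minimal sketch of that normalisation (whether the construction below reproduces the error depends on your pandas version, but the conversion itself is harmless either way):
from datetime import timezone
import pandas as pd
import pytz

date = pd.Timestamp('2020-07-17', tz=timezone.utc)   # tzinfo is datetime.timezone.utc
date2 = pd.Timestamp('2020-04-06', tz=pytz.UTC)      # tzinfo is pytz.UTC
# converting both to the same tz object makes the subtraction safe
print(date.tz_convert('UTC') - date2.tz_convert('UTC'))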
13
6
60,793,752
2020-3-21
https://stackoverflow.com/questions/60793752/extract-artwork-from-table-game-card-image-with-opencv
I wrote a small script in python where I'm trying to extract or crop the part of the playing card that represents the artwork only, removing all the rest. I've been trying various methods of thresholding but couldn't get there. Also note that I can't simply record manually the position of the artwork because it's not always in the same position or size, but always in a rectangular shape where everything else is just text and borders. from matplotlib import pyplot as plt import cv2 img = cv2.imread(filename) gray = cv2.cvtColor(img,cv2.COLOR_BGR2GRAY) ret,binary = cv2.threshold(gray, 0, 255, cv2.THRESH_OTSU | cv2.THRESH_BINARY) binary = cv2.bitwise_not(binary) kernel = np.ones((15, 15), np.uint8) closing = cv2.morphologyEx(binary, cv2.MORPH_OPEN, kernel) plt.imshow(closing),plt.show() The current output is the closest thing I could get. I could be on the right way and try some further wrangling to draw a rectangle around the white parts, but I don't think it's a sustainable method : As a last note, see the cards below, not all frames are exactly the same sizes or positions, but there's always a piece of artwork with only text and borders around it. It doesn't have to be super precisely cut, but clearly the art is a "region" of the card, surrounded by other regions containing some text. My goal is to try to capture the region of the artwork as well as I can.
I used Hough line transform to detect linear parts of the image. The crossings of all lines were used to construct all possible rectangles, which do not contain other crossing points. Since the part of the card you are looking for is always the biggest of those rectangles (at least in the samples you provided), i simply chose the biggest of those rectangles as winner. The script works without user interaction. import cv2 import numpy as np from collections import defaultdict def segment_by_angle_kmeans(lines, k=2, **kwargs): #Groups lines based on angle with k-means. #Uses k-means on the coordinates of the angle on the unit circle #to segment `k` angles inside `lines`. # Define criteria = (type, max_iter, epsilon) default_criteria_type = cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER criteria = kwargs.get('criteria', (default_criteria_type, 10, 1.0)) flags = kwargs.get('flags', cv2.KMEANS_RANDOM_CENTERS) attempts = kwargs.get('attempts', 10) # returns angles in [0, pi] in radians angles = np.array([line[0][1] for line in lines]) # multiply the angles by two and find coordinates of that angle pts = np.array([[np.cos(2*angle), np.sin(2*angle)] for angle in angles], dtype=np.float32) # run kmeans on the coords labels, centers = cv2.kmeans(pts, k, None, criteria, attempts, flags)[1:] labels = labels.reshape(-1) # transpose to row vec # segment lines based on their kmeans label segmented = defaultdict(list) for i, line in zip(range(len(lines)), lines): segmented[labels[i]].append(line) segmented = list(segmented.values()) return segmented def intersection(line1, line2): #Finds the intersection of two lines given in Hesse normal form. #Returns closest integer pixel locations. #See https://stackoverflow.com/a/383527/5087436 rho1, theta1 = line1[0] rho2, theta2 = line2[0] A = np.array([ [np.cos(theta1), np.sin(theta1)], [np.cos(theta2), np.sin(theta2)] ]) b = np.array([[rho1], [rho2]]) x0, y0 = np.linalg.solve(A, b) x0, y0 = int(np.round(x0)), int(np.round(y0)) return [[x0, y0]] def segmented_intersections(lines): #Finds the intersections between groups of lines. 
intersections = [] for i, group in enumerate(lines[:-1]): for next_group in lines[i+1:]: for line1 in group: for line2 in next_group: intersections.append(intersection(line1, line2)) return intersections def rect_from_crossings(crossings): #find all rectangles without other points inside rectangles = [] # Search all possible rectangles for i in range(len(crossings)): x1= int(crossings[i][0][0]) y1= int(crossings[i][0][1]) for j in range(len(crossings)): x2= int(crossings[j][0][0]) y2= int(crossings[j][0][1]) #Search all points flag = 1 for k in range(len(crossings)): x3= int(crossings[k][0][0]) y3= int(crossings[k][0][1]) #Dont count double (reverse rectangles) if (x1 > x2 or y1 > y2): flag = 0 #Dont count rectangles with points inside elif ((((x3 >= x1) and (x2 >= x3))and (y3 > y1) and (y2 > y3) or ((x3 > x1) and (x2 > x3))and (y3 >= y1) and (y2 >= y3))): if(i!=k and j!=k): flag = 0 if flag: rectangles.append([[x1,y1],[x2,y2]]) return rectangles if __name__ == '__main__': #img = cv2.imread('TAJFp.jpg') #img = cv2.imread('Bj2uu.jpg') img = cv2.imread('yi8db.png') width = int(img.shape[1]) height = int(img.shape[0]) scale = 380/width dim = (int(width*scale), int(height*scale)) # resize image img = cv2.resize(img, dim, interpolation = cv2.INTER_AREA) img2 = img.copy() gray = cv2.cvtColor(img,cv2.COLOR_BGR2GRAY) gray = cv2.GaussianBlur(gray,(5,5),cv2.BORDER_DEFAULT) # Parameters of Canny and Hough may have to be tweaked to work for as many cards as possible edges = cv2.Canny(gray,10,45,apertureSize = 7) lines = cv2.HoughLines(edges,1,np.pi/90,160) segmented = segment_by_angle_kmeans(lines) crossings = segmented_intersections(segmented) rectangles = rect_from_crossings(crossings) #Find biggest remaining rectangle size = 0 for i in range(len(rectangles)): x1 = rectangles[i][0][0] x2 = rectangles[i][1][0] y1 = rectangles[i][0][1] y2 = rectangles[i][1][1] if(size < (abs(x1-x2)*abs(y1-y2))): size = abs(x1-x2)*abs(y1-y2) x1_rect = x1 x2_rect = x2 y1_rect = y1 y2_rect = y2 cv2.rectangle(img2, (x1_rect,y1_rect), (x2_rect,y2_rect), (0,0,255), 2) roi = img[y1_rect:y2_rect, x1_rect:x2_rect] cv2.imshow("Output",roi) cv2.imwrite("Output.png", roi) cv2.waitKey() These are the results with the samples you provided: The code for finding line crossings can be found here: find intersection point of two lines drawn using houghlines opencv You can read more about Hough Lines here.
12
5
60,838,082
2020-3-24
https://stackoverflow.com/questions/60838082/altair-line-chart-with-stroked-point-markers
I'm trying to create a line chart with point markers in Altair. I'm using the multi-series line chart example from Altair's documentation and trying to combine it with the line chart with stroked point markers example from Vega-Lite's documentation. Where I'm confused is how to handle the 'mark_line' argument. From the Vega example, I need use "point" and then set "filled" to False. "mark": { "type": "line", "point": { "filled": false, "fill": "white" } }, How would I apply that in Altair? I figured out that setting 'point' to 'True' or '{}' added a point marker, but confused on how to get the fill to work. source = data.stocks() alt.Chart(source).mark_line( point=True ).encode( x='date', y='price', color='symbol' )
You can always pass a raw vega-lite dict to any property in Altair: source = data.stocks() alt.Chart(source).mark_line( point={ "filled": False, "fill": "white" } ).encode( x='date', y='price', color='symbol' ) or you can check the docstring of mark_line() and see that it expects point to be an OverlayMarkDef() and use the Python wrappers: alt.Chart(source).mark_line( point=alt.OverlayMarkDef(filled=False, fill='white') ).encode( x='date', y='price', color='symbol' )
7
9
60,832,569
2020-3-24
https://stackoverflow.com/questions/60832569/pandas-rolling-aggregate-boolean-values
Is there any rolling "any" function in a pandas.DataFrame? Or is there any other way to aggregate boolean values in a rolling function? Consider: import pandas as pd import numpy as np s = pd.Series([True, True, False, True, False, False, False, True]) # this works but I don't think it is clear enough - I am not # interested in the sum but a logical or! s.rolling(2).sum() > 0 # What I would like to have: s.rolling(2).any() # AttributeError: 'Rolling' object has no attribute 'any' s.rolling(2).agg(np.any) # Same error! AttributeError: 'Rolling' object has no attribute 'any' So which functions can I use when aggregating booleans? (if numpy.any does not work) The rolling documentation at https://pandas.pydata.org/pandas-docs/version/0.23.4/generated/pandas.DataFrame.rolling.html states that "a Window or Rolling sub-classed for the particular operation" is returned, which doesn't really help.
This method is not implemented; the closest you can get is to use Rolling.apply: s = s.rolling(2).apply(lambda x: x.any(), raw=False) print (s) 0 NaN 1 1.0 2 1.0 3 1.0 4 1.0 5 0.0 6 0.0 7 1.0 dtype: float64 s = s.rolling(2).apply(lambda x: x.any(), raw=False).fillna(0).astype(bool) print (s) 0 False 1 True 2 True 3 True 4 True 5 False 6 False 7 True dtype: bool Better here is to use strides - generate numpy 2d arrays and process them later: s = pd.Series([True, True, False, True, False, False, False, True]) def rolling_window(a, window): shape = a.shape[:-1] + (a.shape[-1] - window + 1, window) strides = a.strides + (a.strides[-1],) return np.lib.stride_tricks.as_strided(a, shape=shape, strides=strides) a = rolling_window(s.to_numpy(), 2) print (a) [[ True True] [ True False] [False True] [ True False] [False False] [False False] [False True]] print (np.any(a, axis=1)) [ True True True True False False True] Here the first NaN pandas values are omitted; you can prepend values for processing, here Falses: n = 2 x = np.concatenate([[False] * (n), s]) def rolling_window(a, window): shape = a.shape[:-1] + (a.shape[-1] - window + 1, window) strides = a.strides + (a.strides[-1],) return np.lib.stride_tricks.as_strided(a, shape=shape, strides=strides) a = rolling_window(x, n) print (a) [[False False] [False True] [ True True] [ True False] [False True] [ True False] [False False] [False False] [False True]] print (np.any(a, axis=1)) [False True True True True True False False True]
6
3
60,820,508
2020-3-23
https://stackoverflow.com/questions/60820508/how-to-make-pytest-cases-runnable-in-intellij
I want to write my test using pytest and to be able to run them (individually) in IntelliJ. I have pytest installed, along with (obviously) Python plugin for the IDE. My test file (tests/test_main.py) looks like that: from app.main import sum_numbers def test_sum_numbers(): assert sum_numbers(1, 2) == 3 assert sum_numbers(10, 20) == 30 assert sum_numbers(1, 2, -3) == 0 Running pytest from terminal works perfectly. But IntelliJ seems to treat this as a regular file, there are no "run" icons before the declarations. Also, running the file from the IDE does nothing, as it is not treated as a test file. What can I do?
Here is what you need to do: 1) Remove all the existing run configurations for your test file. 2) Make sure that Preferences | Tools | Python Integrated Tools | Default test runner is set to pytest. After that, you should see the run icon next to your test functions, and right-clicking the test file should suggest running it with pytest. See the docs for more details.
13
27
60,741,970
2020-3-18
https://stackoverflow.com/questions/60741970/optional-cli-arguments-with-python-click-library-option
I'm having a conundrum with the Python Click library when parsing some CLI options. I would like an option to act as a flag by itself, but optionally accept string values. E.g.: $ myscript β‡’ option = False $ myscript -o β‡’ option = True $ myscript -o foobar β‡’ option = Foobar Additionally, I would like the option to be "eager" (e.g. in "Click" terms abort execution after a callback), but this can be ignored for now. When I define my arguments like this: @click.command() @click... @click.option("-o", "option", is_flag=True, default=False) def myscript(..., option): I achieve example 1 and 2, but 3 is naturally impossible because the flag detects present/not present only. When I define my arguments like this: @click.command() @click... @click.option("-o", "--option", default="") # Let's assume I will cast empty string to False def myscript(..., option): I achieve 1 and 3, but 2 will fail with an Error: -c option requires an argument. This does not seems like an out-of-this world scenario, but I can't seem to be able to achieve this or find examples that behave like this. How can I define an @click.option that gets parsed like: False when not set True when set but without value str when set with value
One way that I have managed to achieve this behaviour is by actually using arguments, as below. I'll post this as a workaround while I try to see if it could be done with an option, and I'll update my post accordingly. @click.command(context_settings={"ignore_unknown_options": True}) @click.argument("options", nargs=-1) def myscript(options): option = False if options == (): option = False if '-o' in options or '--option' in options: option = True if len(options) > 1: option = options[1] print(option) Later Edit Using an option, I have managed to achieve this by adding an argument to the command definition. @click.command() @click.option('-o', '--option', is_flag=True, default=False, is_eager=True) @click.argument('value', nargs=-1) def myscript(option, value): if option and value != (): option = value[0] print(option) The nargs can be removed if you only expect at most one argument to follow, and can be treated as not required. @click.command() @click.option('-o', '--option', is_flag=True, default=False, is_eager=True) @click.argument('value', required=False) def myscript(option, value=None): if option and value is not None: option = value print(option) It might also be possible by putting together a context generator and storing some state, but that seems the least desirable solution, since you would be relying on the context to store your state.
10
5
60,814,081
2020-3-23
https://stackoverflow.com/questions/60814081/how-to-convert-a-rgb-image-into-a-cmyk
I want to convert a RGB image into CMYK. This is my code; the first problem is when I divide each pixel by 255, the value closes to zero, so the resulting image is approximately black! The second problem is that I don't know how to convert the one-channel resultant image to 4 channels. Of course, I'm not sure the made CMYK in the following code is correct. Thank you for your attention import cv2 import numpy as np import time img = cv2.imread('image/dr_trump.jpg') B = img[:, :, 0] G = img[:, :, 1] R = img[:, :, 2] B_ = np.copy(B) G_ = np.copy(G) R_ = np.copy(R) K = np.zeros_like(B) C = np.zeros_like(B) M = np.zeros_like(B) Y = np.zeros_like(B) ts = time.time() for i in range(B.shape[0]): for j in range(B.shape[1]): B_[i, j] = B[i, j]/255 G_[i, j] = G[i, j]/255 R_[i, j] = R[i, j]/255 K[i, j] = 1 - max(B_[i, j], G_[i, j], R_[i, j]) if (B_[i, j] == 0) and (G_[i, j] == 0) and (R_[i, j] == 0): # black C[i, j] = 0 M[i, j] = 0 Y[i, j] = 0 else: C[i, j] = (1 - R_[i, j] - K[i, j])/float((1 - K[i, j])) M[i, j] = (1 - G_[i, j] - K[i, j])/float((1 - K[i, j])) Y[i, j] = (1 - B_[i, j] - K[i, j])/float((1 - K[i, j])) CMYK = C + M + Y + K t = (time.time() -ts) print("Loop: {:} ms".format(t*1000)) cv2.imshow('CMYK by loop',CMYK) cv2.waitKey(0) cv2.destroyAllWindows()
You can let PIL/Pillow do it for you like this: from PIL import Image # Open image, convert to CMYK and save as TIF Image.open('drtrump.jpg').convert('CMYK').save('result.tif') If I use IPython, I can time loading, converting and saving that at 13ms in toto like this: %timeit Image.open('drtrump.jpg').convert('CMYK').save('PIL.tif') 13.6 ms Β± 627 Β΅s per loop (mean Β± std. dev. of 7 runs, 100 loops each) If you want to do it yourself by implementing your formula, you would be better off using vectorised Numpy rather than for loops. This takes 35ms. #!/usr/bin/env python3 import cv2 import numpy as np # Load image bgr = cv2.imread('drtrump.jpg') # Make float and divide by 255 to give BGRdash bgrdash = bgr.astype(np.float)/255. # Calculate K as (1 - whatever is biggest out of Rdash, Gdash, Bdash) K = 1 - np.max(bgrdash, axis=2) # Calculate C C = (1-bgrdash[...,2] - K)/(1-K) # Calculate M M = (1-bgrdash[...,1] - K)/(1-K) # Calculate Y Y = (1-bgrdash[...,0] - K)/(1-K) # Combine 4 channels into single image and re-scale back up to uint8 CMYK = (np.dstack((C,M,Y,K)) * 255).astype(np.uint8) If you want to check your results, you need to be aware of a few things. Not all image formats can save CMYK, that's why I saved as TIFF. Secondly, your formula leaves all your values as floats in the range 0..1, so you probably want scale back up by multiplying by 255 and converting to uint8. Finally, you can be assured of what the correct result is by simply using ImageMagick in the Terminal: magick drtrump.jpg -colorspace CMYK result.tif
9
13
60,805,253
2020-3-22
https://stackoverflow.com/questions/60805253/matplotlib-turning-axes-off-and-setting-facecolor-at-the-same-time-not-possible
Can someone explain why this simple code won't execute the facecolor command while setting the axis off? fig = plt.figure(1) ax = fig.add_subplot(211, facecolor=(0,0,0), aspect='equal') ax.scatter(np.random.random(10000), np.random.random(10000), c="gray", s=0.25) ax.axes.set_axis_off() Thanks in advance!
The background patch is part of the axes. So if the axes is turned off, so is the background patch. Some options: Re-add the background patch ax = fig.add_subplot(211, facecolor=(0,0,0), aspect='equal') ax.set_axis_off() ax.add_artist(ax.patch) ax.patch.set_zorder(-1) Create new patch ax = fig.add_subplot(211, facecolor=(0,0,0), aspect='equal') ax.set_axis_off() ax.add_patch(plt.Rectangle((0,0), 1, 1, facecolor=(0,0,0), transform=ax.transAxes, zorder=-1)) Turn axis spines and ticks invisible ...but keep the axis on. ax = fig.add_subplot(211, facecolor=(0,0,0), aspect='equal') for spine in ax.spines.values(): spine.set_visible(False) ax.tick_params(bottom=False, labelbottom=False, left=False, labelleft=False)
7
11
60,811,307
2020-3-23
https://stackoverflow.com/questions/60811307/check-if-all-sides-of-a-multidimensional-numpy-array-are-arrays-of-zeros
An n-dimensional array has 2n sides (a 1-dimensional array has 2 endpoints; a 2-dimensional array has 4 sides or edges; a 3-dimensional array has 6 2-dimensional faces; a 4-dimensional array has 8 sides; etc.). This is analogous to what happens with abstract n-dimensional cubes. I want to check if all sides of an n-dimensional array are composed by only zeros. Here are three examples of arrays whose sides are composed by zeros: # 1D np.array([0,1,2,3,0]) # 2D np.array([[0, 0, 0, 0], [0, 1, 0, 0], [0, 2, 3, 0], [0, 0, 1, 0], [0, 0, 0, 0]]) # 3D np.array([[[0, 0, 0, 0], [0, 0, 0, 0], [0, 0, 0, 0]], [[0, 0, 0, 0], [0, 1, 2, 0], [0, 0, 0, 0]], [[0, 0, 0, 0], [0, 0, 0, 0], [0, 0, 0, 0]]]) How can I check if all sides of a multidimensional numpy array are arrays of zeros? For example, with a simple 2-dimensional array I can do this: x = np.random.rand(5, 5) assert np.sum(x[0:, 0]) == 0 assert np.sum(x[0, 0:]) == 0 assert np.sum(x[0:, -1]) == 0 assert np.sum(x[-1, 0:]) == 0 While this approach works for 2D cases, it does not generalize to higher dimensions. I wonder if there is some clever numpy trick I can use here to make it efficient and also more maintainable.
Here's how you can do it: assert(all(np.all(np.take(x, index, axis=axis) == 0) for axis in range(x.ndim) for index in (0, -1))) np.take does the same thing as "fancy" indexing.
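For instance, run against the 2D example from the question, the check passes (a minimal usage sketch; x is just the array being tested):

import numpy as np

x = np.array([[0, 0, 0, 0],
              [0, 1, 0, 0],
              [0, 2, 3, 0],
              [0, 0, 1, 0],
              [0, 0, 0, 0]])

# for each axis, take the first and last slice and verify it is all zeros
assert all(np.all(np.take(x, index, axis=axis) == 0)
           for axis in range(x.ndim)
           for index in (0, -1))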
15
10
60,810,463
2020-3-23
https://stackoverflow.com/questions/60810463/is-this-a-correct-way-to-create-a-read-only-view-of-a-numpy-array
I would like to create a read-only reference to a NumPy array. Is this a correct way to make b a read-only reference to a (a is any NumPy array)? def get_readonly_view(a): b = a.view() b.flags.writeable = False return b Specifically, I would like to ensure the above does not 'copy' the contents of a? (I tried testing this with np.shares_memory and it does return True. But I am not sure if that is a correct test.) In addition I wonder if get_readonly_view is already implemented in NumPy? Update. It has been suggested to turn the array to a class property to make it read-only. I think this does not work: import numpy as np class Foo: def __init__(self): self._a = np.arange(15).reshape((3, 5)) @property def a(self): return self._a def bar(self): print(self._a) But the client can change contents of _a: >> baz = Foo() >> baz.bar() [[ 0 1 2 3 4] [ 5 6 7 8 9] [10 11 12 13 14]] >> baz.a[1, 2] = 10 >> baz.bar() [[ 0 1 2 3 4] [ 5 6 10 8 9] [10 11 12 13 14]] while I would like baz.a[1, 2] = 10 to raise an exception.
Your approach seems to be the suggested way of creating a read-only view. In particular, arr.view() (which can also be written as a slicing arr[:]) will create a reference to arr, while modifying the writeable flag is the suggested way of making a NumPy array read-only. The documentation also provides some additional information on the inheritance of the writeable property: The data area can be written to. Setting this to False locks the data, making it read-only. A view (slice, etc.) inherits WRITEABLE from its base array at creation time, but a view of a writeable array may be subsequently locked while the base array remains writeable. (The opposite is not true, in that a view of a locked array may not be made writeable. However, currently, locking a base object does not lock any views that already reference it, so under that circumstance it is possible to alter the contents of a locked array via a previously created writeable view onto it.) Attempting to change a non-writeable array raises a RuntimeError exception. Just to reiterate and inspect what is going on: import numpy as np def get_readonly_view(arr): result = arr.view() result.flags.writeable = False return result a = np.zeros((2, 2)) b = get_readonly_view(a) print(a.flags) # C_CONTIGUOUS : True # F_CONTIGUOUS : False # OWNDATA : True # WRITEABLE : True # ALIGNED : True # WRITEBACKIFCOPY : False # UPDATEIFCOPY : False print(b.flags) # C_CONTIGUOUS : True # F_CONTIGUOUS : False # OWNDATA : False # WRITEABLE : False # ALIGNED : True # WRITEBACKIFCOPY : False # UPDATEIFCOPY : False print(a.base) # None print(b.base) # [[0. 0.] # [0. 0.]] a[1, 1] = 1.0 # ...works b[0, 0] = 1.0 # raises ValueError: assignment destination is read-only
10
9
60,792,029
2020-3-21
https://stackoverflow.com/questions/60792029/modulenotfounderror-in-docker
I have imported my entire project into docker, and I am getting a ModuleNotFoundError from one of the modules I have created. FROM python:3.8 WORKDIR /workspace/ COPY . /workspace/ RUN pip install pipenv RUN pipenv install --deploy --ignore-pipfile #EXPOSE 8000 #CMD ["pipenv", "run", "python", "/workspace/bin/web.py"] I tried looking around for answers, but I cannot seem to get it working. commands: docker build -t atletico . docker run -p 8000:8000 atletico Docker Build: https://pastebin.com/FXMrY2En Traceback (most recent call last): File "/workspace/bin/web.py", line 3, in <module> from bin.setup import setup_app ModuleNotFoundError: No module named 'bin' A copy of my directory: β”œβ”€β”€ Dockerfile β”œβ”€β”€ Pipfile β”œβ”€β”€ Pipfile.lock β”œβ”€β”€ README.md β”œβ”€β”€ bin β”‚ β”œβ”€β”€ __init__.py β”‚ β”œβ”€β”€ __pycache__ β”‚ β”‚ └── web.cpython-38.pyc β”‚ β”œβ”€β”€ setup.py β”‚ └── web.py β”œβ”€β”€ docker-compose.yml β”œβ”€β”€ frio β”‚ β”œβ”€β”€ __init__.py β”‚ β”œβ”€β”€ __pycache__ β”‚ β”‚ └── __init__.cpython-38.pyc β”‚ β”œβ”€β”€ app_events.py β”‚ └── config.py β”œβ”€β”€ routes β”‚ β”œβ”€β”€ __init__.py docker-compose.yml: version: '3' services: db: image: postgres:12 ports: - "5432:5432" environment: - POSTGRES_USER=postgres - POSTGRES_PASSWORD=postgres - POSTGRES_DB=test_db redis: image: "redis:alpine" web: env_file: - .env.local build: . ports: - "8000:8000" volumes: - .:/workspace depends_on: - db - redis command: "pipenv run python /workspace/bin/web.py"
So I finally fixed the issue. For those who may be wondering how I fixed it: you need to define a PYTHONPATH environment variable, either in the Dockerfile or in docker-compose.yml, as sketched below.
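A minimal sketch of what that can look like (the /workspace path simply mirrors the WORKDIR from the question; adjust it to your own layout). In the Dockerfile:

FROM python:3.8
WORKDIR /workspace/
ENV PYTHONPATH=/workspace
COPY . /workspace/
RUN pip install pipenv
RUN pipenv install --deploy --ignore-pipfile

Or, equivalently, in docker-compose.yml under the web service:

web:
  environment:
    - PYTHONPATH=/workspace

With the project root on PYTHONPATH, from bin.setup import setup_app resolves because bin is importable from /workspace.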
8
21
60,792,121
2020-3-21
https://stackoverflow.com/questions/60792121/filling-area-under-the-curve-with-matplotlib
I have a pandas series, and this is the graph: I want to fill the area under the curve. The problem is that calling plt.fill(y) outputs: As seen in other answers, this is because we need to send a polygon to the function, so we have to add a (0,0) point. (And a (lastPoint, 0), but in this case it's not necessary). However, the proposed solution is writing the following code: plt.fill([0]+[*range(0,len(y))], [0]+pd.Series.tolist(y)) I refuse to believe this is the best solution. The code is horrible, not at all easy to read, and I am losing information (no dates on x axis): Furthermore, if I call both plot and fill (to have the red line on the top), an error occurs: /usr/local/anaconda3/lib/python3.7/site-packages/matplotlib/dates.py in refresh(self) 1446 def refresh(self): 1447 'Refresh internal information based on current limits.' -> 1448 dmin, dmax = self.viewlim_to_dt() 1449 self._locator = self.get_locator(dmin, dmax) 1450 /usr/local/anaconda3/lib/python3.7/site-packages/matplotlib/dates.py in viewlim_to_dt(self) 1197 'often happens if you pass a non-datetime ' 1198 'value to an axis that has datetime units' -> 1199 .format(vmin)) 1200 return num2date(vmin, self.tz), num2date(vmax, self.tz) 1201 ValueError: view limit minimum -36868.15 is less than 1 and is an invalid Matplotlib date value. This often happens if you pass a non-datetime value to an axis that has datetime units So I was hoping somebody could help me write better code and resolve this issue. I think matplotlib should add a function fill_area or similar. What do you guys think about this?
There is such a function: matplotlib.pyplot.fill_between() import matplotlib.pyplot as plt plt.plot(y, c='red') plt.fill_between(y.index, y, color='blue', alpha=0.3)
7
11
60,779,234
2020-3-20
https://stackoverflow.com/questions/60779234/how-to-add-a-default-array-of-values-to-arrayfield
Is it possible to add a default value to ArrayField? I tried to do this for the email field, but this did not work: constants.py: ORDER_STATUS_CHANGED = 'order_status_changed' NEW_SIGNAL = 'new_signal' NOTIFICATION_SOURCE = ( (ORDER_STATUS_CHANGED, 'Order Status Changed'), (NEW_SIGNAL, 'New Signal'), ) models.py: from notifications import constants from django.contrib.postgres.fields import ArrayField class NotificationSetting(models.Model): user = models.OneToOneField(User, on_delete=models.CASCADE, primary_key=True, related_name='notification_setting') telegram = ArrayField(models.CharField( choices= constants.NOTIFICATION_SOURCE, max_length=30 ), default=list) email = ArrayField(models.CharField( choices= constants.NOTIFICATION_SOURCE, max_length=16 ), default=list(dict(constants.NOTIFICATION_SOURCE).keys())) class Meta: db_table = 'notification_settings' def __str__(self): return f'Notification setting for user {self.user}' Overriding the save method of the model would be bad practice, I think. The problem is that in the django admin site I see that the default values were not applied when the object was created. (UPD: Maybe I have a problem with my custom ChoiceArrayField widget.) And I get this message: WARNINGS: notifications.NotificationSetting.email: (postgres.E003) ArrayField default should be a callable instead of an instance so that it's not shared between all field instances. HINT: Use a callable instead, e.g., use `list` instead of `[]`.
The default property on an ArrayField should be a callable. You can read more about that here: https://docs.djangoproject.com/en/3.0/ref/contrib/postgres/fields/. What you get by placing list(dict(constants.NOTIFICATION_SOURCE).keys()) directly there is just a warning, so it should still add the defaults to the field. With the default placed directly like that, the migration will contain the following, and the value will be shared across all field instances: default=['order_status_changed', 'new_signal'] To get rid of the warning you should create a function that returns the default value: def get_email_default(): return list(dict(constants.NOTIFICATION_SOURCE).keys()) and set the function as the default of the field: email = ArrayField(models.CharField( choices= constants.NOTIFICATION_SOURCE, max_length=16 ), default=get_email_default) By doing this the warning will be gone, and inside the function you can have logic for choosing the default value. After doing this, the default value in the migrations will look like this: default=my_model.models.get_email_default
13
21
60,761,175
2020-3-19
https://stackoverflow.com/questions/60761175/how-to-solve-importerror-dlopen-symbol-not-found-expected-in-flat-name
Can anyone help me solve this issue? ImportError: dlopen(/Users/......./venv/lib/python3.6/site-packages/recordclass/mutabletuple.cpython-36m-darwin.so, 2): Symbol not found: __PyEval_GetBuiltinId Referenced from: /Users/......./venv/lib/python3.6/site-packages/recordclass/mutabletuple.cpython-36m-darwin.so Expected in: flat namespace in /Users/......../venv/lib/python3.6/site-packages/recordclass/mutabletuple.cpython-36m-darwin.so I'm using a Mac if that's of any relevance
I couldn't quite figure out what the issue was but I'm assuming __PyEval_GetBuiltinId was broken/uninstalled. So all I did to fix this was pip uninstall recordclass and then pip install --no-cache-dir recordclass and it seemed to have worked
16
14
60,796,555
2020-3-22
https://stackoverflow.com/questions/60796555/certain-members-of-a-torch-module-arent-moved-to-gpu-even-if-model-todevice-i
An MWE is as follows: import torch import torch.nn as nn class model(nn.Module): def __init__(self): super(model,self).__init__() self.mat = torch.randn(2,2) def forward(self,x): print('self.mat.device is',self.mat.device) x = torch.mv(self.mat,x) return x device = torch.device('cuda' if torch.cuda.is_available() else 'cpu') m = model() m.to(device) x = torch.tensor([2.,1.]) x = x.to(device) m(x) The output is self.mat.device is cpu and right after that comes Traceback (most recent call last): File "Z:\cudatest.py", line 21, in <module> print(m(x)) File "E:\Python37\lib\site-packages\torch\nn\modules\module.py", line 532, in __call__ result = self.forward(*input, **kwargs) File "Z:\cudatest.py", line 11, in forward x = torch.mv(self.mat,x) RuntimeError: Expected object of device type cuda but got device type cpu for argument #1 'self' in call to _th_mv The code works fine if I set device = torch.device('cpu'). It seems that the problem is that model.mat is not moved to the GPU even after m.to(device) is called. Why doesn't this work? How can I fix this? Please note the following: Even though this particular example can be fixed by using self.mat = nn.Linear(2,2) and x = self.mat(x) instead, in my original program, I need a temporary tensor to store some data in forward() that is also used in some arithmetic. How can I construct such a tensor and send it to the GPU when calling m.to(device)? It is not known in advance whether the computer has a GPU or not. Therefore, writing self.mat = self.mat.cuda() is not a good solution for my case.
pytorch applies Module methods such as .cpu(), .cuda() and .to() only to sub-modules, parameters and buffers, but NOT to regular class members. pytorch has no way of knowing that self.mat, in your case, is an actual tensor that should be moved around. Once you decide if your mat should be a parameter or a buffer, simply register it accordingly, e.g. class model(nn.Module): def __init__(self): super(model,self).__init__() self.register_buffer(name='mat', tensor=torch.randn(2,2))
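If mat should instead be trainable, the same idea works with a parameter rather than a buffer. A short sketch (not from the original answer, just standard nn.Parameter usage):

class model(nn.Module):
    def __init__(self):
        super(model, self).__init__()
        # registered as a parameter: moved by m.to(device) and visible to the optimizer
        self.mat = nn.Parameter(torch.randn(2, 2))

In both cases the tensor is still accessed as self.mat inside forward(), and m.to(device) now moves it together with the rest of the module.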
7
8
60,785,825
2020-3-21
https://stackoverflow.com/questions/60785825/vscode-how-to-pass-pytest-command-line-arguments-running-in-debugger
I've written tests to be run by pytest in a vscode project. The configuration file .vscode/settings.json allow passing additional command line parameters to pytest using: "python.testing.pytestArgs": [ "test/", "--exitfirst", "--verbose" ], How can I also pass custom script arguments to the test script itself? like invoking pytest from the command line as: pytest --exitfirst --verbose test/ --test_arg1 --test_arg2
After much experimentation I finally found how to do it. What I needed was to pass user name and password to my script in order to allow the code to log into a test server. My test looked like this: my_module_test.py import pytest import my_module def login_test(username, password): instance = my_module.Login(username, password) # ...more... conftest.py import pytest def pytest_addoption(parser): parser.addoption('--username', action='store', help='Repository user') parser.addoption('--password', action='store', help='Repository password') def pytest_generate_tests(metafunc): username = metafunc.config.option.username if 'username' in metafunc.fixturenames and username is not None: metafunc.parametrize('username', [username]) password = metafunc.config.option.password if 'password' in metafunc.fixturenames and password is not None: metafunc.parametrize('password', [password]) Then in my settings file I can use: .vscode/settings.json { // ...more... "python.testing.autoTestDiscoverOnSaveEnabled": true, "python.testing.unittestEnabled": false, "python.testing.nosetestsEnabled": false, "python.testing.pytestEnabled": true, "python.testing.pytestArgs": [ "--exitfirst", "--verbose", "test/", "--username=myname", "--password=secret", // ...more... ], } An alternative way is to use pytest.ini file instead: pytest.ini [pytest] junit_family=legacy addopts = --username=myname --password=secret
32
20
60,786,220
2020-3-21
https://stackoverflow.com/questions/60786220/attributeerror-gridsearchcv-object-has-no-attribute-best-params
Grid search is a way to find the best parameters for any model out of the combinations we specify. I have formed a grid search on my model in the below manner and wish to find best parameters identified using this gridsearch. from sklearn.model_selection import GridSearchCV # Create the parameter grid based on the results of random search param_grid = { 'bootstrap': [True],'max_depth': [20,30,40, 100, 110], 'max_features': ['sqrt'],'min_samples_leaf': [5,10,15], 'min_samples_split': [40,50,60], 'n_estimators': [150, 200, 250] } # Create a based model rf = RandomForestClassifier() # Instantiate the grid search model grid_search = GridSearchCV(estimator = rf, param_grid = param_grid, cv = 3, n_jobs = -1, verbose = 2) Now I want to find the best parameters of the gridsearch as the output grid_search.best_params_ Error: ----> grid_search.best_params_ AttributeError: 'GridSearchCV' object has no attribute 'best_params_' What am I missing?
You cannot get best parameters without fitting the data. Fit the data grid_search.fit(X_train, y_train) Now find the best parameters. grid_search.best_params_ grid_search.best_params_ will work after fitting on X_train and y_train.
24
38
60,780,831
2020-3-20
https://stackoverflow.com/questions/60780831/python-how-to-cut-out-an-area-with-specific-color-from-image-opencv-numpy
So I've been trying to code a Python script, which takes an image as input and then cuts out a rectangle with a specific background color. However, what causes a problem for my coding skills is that the rectangle is not at a fixed position in every image (the position will be random). I do not really understand how to manage the numpy functions. I also read something about OpenCV, but I'm totally new to it. So far I just cropped the images through the ".crop" function, but then I would have to use fixed values. This is how the input image could look, and now I would like to detect the position of the yellow rectangle and then crop the image to its size. Help is appreciated, thanks in advance. Edit: @MarkSetchell's way works pretty well, but I found an issue with a different test picture. The problem with the other picture is that there are 2 small pixels with the same color at the top and the bottom of the picture, which cause errors or a bad crop.
Updated Answer I have updated my answer to cope with specks of noisy outlier pixels of the same colour as the yellow box. This works by running a 3x3 median filter over the image first to remove the spots: #!/usr/bin/env python3 import numpy as np from PIL import Image, ImageFilter # Open image and make into Numpy array im = Image.open('image.png').convert('RGB') na = np.array(im) orig = na.copy() # Save original # Median filter to remove outliers im = im.filter(ImageFilter.MedianFilter(3)) # Find X,Y coordinates of all yellow pixels yellowY, yellowX = np.where(np.all(na==[247,213,83],axis=2)) top, bottom = yellowY[0], yellowY[-1] left, right = yellowX[0], yellowX[-1] print(top,bottom,left,right) # Extract Region of Interest from unblurred original ROI = orig[top:bottom, left:right] Image.fromarray(ROI).save('result.png') Original Answer Ok, your yellow colour is rgb(247,213,83), so we want to find the X,Y coordinates of all yellow pixels: #!/usr/bin/env python3 from PIL import Image import numpy as np # Open image and make into Numpy array im = Image.open('image.png').convert('RGB') na = np.array(im) # Find X,Y coordinates of all yellow pixels yellowY, yellowX = np.where(np.all(na==[247,213,83],axis=2)) # Find first and last row containing yellow pixels top, bottom = yellowY[0], yellowY[-1] # Find first and last column containing yellow pixels left, right = yellowX[0], yellowX[-1] # Extract Region of Interest ROI=na[top:bottom, left:right] Image.fromarray(ROI).save('result.png') You can do the exact same thing in Terminal with ImageMagick: # Get trim box of yellow pixels trim=$(magick image.png -fill black +opaque "rgb(247,213,83)" -format %@ info:) # Check how it looks echo $trim 251x109+101+220 # Crop image to trim box and save as "ROI.png" magick image.png -crop "$trim" ROI.png If still using ImageMagick v6 rather than v7, replace magick with convert.
12
9
60,763,529
2020-3-19
https://stackoverflow.com/questions/60763529/unable-to-import-pandas-pandas-libs-window-aggregations
I've lost a couple of hours trying to solve this so I guess it's time to ask someone/somehwere. I (think) I have uninstalled everything related to python and than installed again. I just installed python most recent version and used pip to install pandas. I also try to install it with anaconda but the error persists. I've also tried installing directly from github but was unsuccessful. I'm using windows 10. C:\Users\m>python -V Python 3.8.2 C:\Users\m>pip install pandas Collecting pandas Downloading https://files.pythonhosted.org/packages/07/12/5a087658337a230f4a77e3d548c847e81aa59b332cdd8ddf5c8d7f11c4a1/pandas-1.0.3-cp38-cp38-win32.whl (7.6MB) |β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 7.6MB 3.3MB/s Collecting pytz>=2017.2 (from pandas) Downloading https://files.pythonhosted.org/packages/e7/f9/f0b53f88060247251bf481fa6ea62cd0d25bf1b11a87888e53ce5b7c8ad2/pytz-2019.3-py2.py3-none-any.whl (509kB) |β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 512kB 3.2MB/s Collecting python-dateutil>=2.6.1 (from pandas) Downloading https://files.pythonhosted.org/packages/d4/70/d60450c3dd48ef87586924207ae8907090de0b306af2bce5d134d78615cb/python_dateutil-2.8.1-py2.py3-none-any.whl (227kB) |β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 235kB 6.4MB/s Collecting numpy>=1.13.3 (from pandas) Downloading https://files.pythonhosted.org/packages/5d/b3/f3543d9919baa11afc24adc029a25997821f0376e5fab75fdc16e13469db/numpy-1.18.2-cp38-cp38-win32.whl (10.8MB) |β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 10.8MB 6.4MB/s Requirement already satisfied: six>=1.5 in c:\users\m\appdata\roaming\python\python38\site-packages (from python-dateutil>=2.6.1->pandas) (1.13.0) Installing collected packages: pytz, python-dateutil, numpy, pandas Successfully installed numpy-1.18.2 pandas-1.0.3 python-dateutil-2.8.1 pytz-2019.3 WARNING: You are using pip version 19.2.3, however version 20.0.2 is available. You should consider upgrading via the 'python -m pip install --upgrade pip' command. C:\Users\m>python Python 3.8.2 (tags/v3.8.2:7b3ab59, Feb 25 2020, 22:45:29) [MSC v.1916 32 bit (Intel)] on win32 Type "help", "copyright", "credits" or "license" for more information. 
>>> import pandas Traceback (most recent call last): File "<stdin>", line 1, in <module> File "C:\Users\m\AppData\Local\Programs\Python\Python38-32\lib\site-packages\pandas\__init__.py", line 55, in <module> from pandas.core.api import ( File "C:\Users\m\AppData\Local\Programs\Python\Python38-32\lib\site-packages\pandas\core\api.py", line 29, in <module> from pandas.core.groupby import Grouper, NamedAgg File "C:\Users\m\AppData\Local\Programs\Python\Python38-32\lib\site-packages\pandas\core\groupby\__init__.py", line 1, in <module> from pandas.core.groupby.generic import DataFrameGroupBy, NamedAgg, SeriesGroupBy File "C:\Users\m\AppData\Local\Programs\Python\Python38-32\lib\site-packages\pandas\core\groupby\generic.py", line 60, in <module> from pandas.core.frame import DataFrame File "C:\Users\m\AppData\Local\Programs\Python\Python38-32\lib\site-packages\pandas\core\frame.py", line 124, in <module> from pandas.core.series import Series File "C:\Users\m\AppData\Local\Programs\Python\Python38-32\lib\site-packages\pandas\core\series.py", line 4572, in <module> Series._add_series_or_dataframe_operations() File "C:\Users\m\AppData\Local\Programs\Python\Python38-32\lib\site-packages\pandas\core\generic.py", line 10349, in _add_series_or_dataframe_operations from pandas.core.window import EWM, Expanding, Rolling, Window File "C:\Users\m\AppData\Local\Programs\Python\Python38-32\lib\site-packages\pandas\core\window\__init__.py", line 1, in <module> from pandas.core.window.ewm import EWM # noqa:F401 File "C:\Users\m\AppData\Local\Programs\Python\Python38-32\lib\site-packages\pandas\core\window\ewm.py", line 5, in <module> import pandas._libs.window.aggregations as window_aggregations ImportError: DLL load failed while importing aggregations: The specified module could not be found.
I finally ended up trying a slightly older version of pandas to resolve this exact error. The default install was 1.0.3, so I backed up a couple of release points: pip uninstall pandas pip install pandas==1.0.1 Then I could import pandas without error
10
14
60,783,788
2020-3-21
https://stackoverflow.com/questions/60783788/virtualenv-virtualenvwrapper-virtualenv-error-unrecognized-arguments-no-sit
I am trying to upgrade python from 3.6 to 3.8. I was successfully using virtualenv/wrapper successfully (although only one environment and no bells, whistles, or hooks), but the upgrade has not gone smoothly. I deleted everything and tried to start again. I am trying to make a new environment with mkvirtualenv test, and I am now getting the error: virtualenv: error: unrecognized arguments: --no-site-packages after it gives a man(ual) suggestion on how to invoke virtualenv, which leads me to believe virtualenvwrapper is working, but I've missed something. Here are my details: terminal (osx - 10.13.6 (17G65)) today@5 ~/dev/MST/server(master)$ which python /usr/bin/python today@5 ~/dev/MST/server(master)$ which python3 /usr/local/bin/python3 today@5 ~/dev/MST/server(master)$ which pip /usr/local/bin/pip today@5 ~/dev/MST/server(master)$ which pip3 today@5 ~/dev/MST/server(master)$ pip -V -bash: /usr/local/bin/pip: /usr/local/opt/python/bin/python3.6: bad interpreter: No such file or directory today@5 ~/dev/MST/server(master)$ pip3 -V pip 20.0.2 from /usr/local/lib/python3.8/site-packages/pip (python 3.8) today@5 ~/dev/MST/server(master)$ pip3 install virtualenv virtualenvwrapper Requirement already satisfied: virtualenv in /usr/local/lib/python3.8/site-packages (20.0.13) Requirement already satisfied: virtualenvwrapper in /usr/local/lib/python3.8/site-packages (4.8.4) Requirement already satisfied: filelock<4,>=3.0.0 in /usr/local/lib/python3.8/site-packages (from virtualenv) (3.0.12) Requirement already satisfied: appdirs<2,>=1.4.3 in /usr/local/lib/python3.8/site-packages (from virtualenv) (1.4.3) Requirement already satisfied: six<2,>=1.9.0 in /usr/local/lib/python3.8/site-packages (from virtualenv) (1.14.0) Requirement already satisfied: distlib<1,>=0.3.0 in /usr/local/lib/python3.8/site-packages (from virtualenv) (0.3.0) Requirement already satisfied: stevedore in /usr/local/lib/python3.8/site-packages (from virtualenvwrapper) (1.32.0) Requirement already satisfied: virtualenv-clone in /usr/local/lib/python3.8/site-packages (from virtualenvwrapper) (0.5.3) Requirement already satisfied: pbr!=2.1.0,>=2.0.0 in /usr/local/lib/python3.8/site-packages (from stevedore->virtualenvwrapper) (5.4.4) today@5 ~/dev/MST/server(master)$ which virtualenv /usr/local/bin/virtualenv today@5 ~/dev/MST/server(master)$ which virtualenvwrapper today@5 ~/dev/MST/server(master)$ today@5 ~/dev/MST/server(master)$ workon today@5 ~/dev/MST/server(master)$ today@5 ~/dev/MST/server(master)$ mkvirtualenv test usage: virtualenv [--version] [--with-traceback] [-v | -q] [--app-data APP_DATA] [--clear-app-data] [--discovery {builtin}] [-p py] [--creator {builtin,cpython3-posix,venv}] [--seeder {app-data,pip}] [--no-seed] [--activators comma_sep_list] [--clear] [--system-site-packages] [--symlinks | --copies] [--download | --no-download] [--extra-search-dir d [d ...]] [--pip version] [--setuptools version] [--wheel version] [--no-pip] [--no-setuptools] [--no-wheel] [--symlink-app-data] [--prompt prompt] [-h] dest virtualenv: error: unrecognized arguments: --no-site-packages ~/.bash_profile #… export WORKON_HOME=$HOME/.virtualenvs export PROJECT_HOME=$HOME/Devel export VIRTUALENVWRAPPER_PYTHON=/usr/local/bin/python3 export VIRTUALENVWRAPPER_VIRTUALENV=/usr/local/bin/virtualenv export VIRTUALENVWRAPPER_VIRTUALENV_ARGS='--no-site-packages' source /usr/local/bin/virtualenvwrapper.sh #… NB - trying to install virtualenvwrapper it says version 4.8.4, but the online docs say it is in 5.x
--no-site-packages is the default for virtualenv (and has been for something like 5 years). You can remove export VIRTUALENVWRAPPER_VIRTUALENV_ARGS='--no-site-packages' from your .bash_profile (or .bashrc). It appears that in virtualenv>=20 this option was removed entirely, which is why virtualenv now reports it as an unrecognized argument. See the trimmed profile sketch below.
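In other words, the relevant block of the ~/.bash_profile shown in the question can be reduced to something like this (a sketch; paths are taken unchanged from the question):

export WORKON_HOME=$HOME/.virtualenvs
export PROJECT_HOME=$HOME/Devel
export VIRTUALENVWRAPPER_PYTHON=/usr/local/bin/python3
export VIRTUALENVWRAPPER_VIRTUALENV=/usr/local/bin/virtualenv
source /usr/local/bin/virtualenvwrapper.sh

After opening a new terminal (or re-sourcing the profile), mkvirtualenv test no longer passes the removed flag to virtualenv.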
23
36
60,780,433
2020-3-20
https://stackoverflow.com/questions/60780433/how-can-i-pass-a-list-of-columns-to-select-in-pyspark-dataframe
I have a list of column names. columns = ['home','house','office','work'] and I would like to pass those values as column names to the dataframe's "select". I have tried it... df_tables_full = df_tables_full.select('time_event','kind','schema','table',columns) but I received the error below... TypeError: Invalid argument, not a string or column: ['home', 'house', 'office', 'work'] of type <class 'list'>. For column literals, use 'lit', 'array', 'struct' or 'create_map' function. Do you have any ideas? Thank you guys!
Use * before columns to unpack the list into the .select call. columns = ['home','house','office','work'] #select the list of columns df_tables_full.select('time_event','kind','schema','table',*columns).show() df_tables_full = df_tables_full.select('time_event','kind','schema','table',*columns)
16
36
60,766,714
2020-3-19
https://stackoverflow.com/questions/60766714/pyserial-flush-vs-reset-input-buffer-reset-output-buffer
I am trying to use pySerial==3.4, and find the documentation on serial.Serial.flush() rather lacking: Flush of file like objects. In this case, wait until all data is written. Source Questions What is a "file like object"? What is being flushed? When would one use flush as opposed to just individually resetting the input/output buffers? serial = Serial("COM3") # Option 1 serial.flush() # Option 2 serial.reset_input_buffer() serial.reset_output_buffer() Relevant Questions using pyserial flush method
Taking the three questions in turn: What is a "file like object"? What is exactly a file-like object in Python? file-like objects are mainly StringIO objects, connected sockets and well.. actual file objects. If everything goes fine, urllib.urlopen() also returns a file-like object supporting the necessary methods. file-like object A synonym for file object. file object An object exposing a file-oriented API (with methods such as read() or write()) to an underlying resource. Depending on the way it was created, a file object can mediate access to a real on-disk file or to another type of storage or communication device (for example standard input/output, in-memory buffers, sockets, pipes, etc.). File objects are also called file-like objects or streams. There are actually three categories of file objects: raw binary files, buffered binary files and text files. Their interfaces are defined in the io module. The canonical way to create a file object is by using the open() function. io - Core tools for working with streams The io module provides Python's main facilities for dealing with various types of I/O. There are three main types of I/O: text I/O, binary I/O and raw I/O. These are generic categories, and various backing stores can be used for each of them. A concrete object belonging to any of these categories is called a file object. Other common terms are stream and file-like object. What is being flushed? The data held in the output buffer. When would one use flush as opposed to just individually resetting the input/output buffers? flush() is used once data has been written with write() and you want to be sure it has actually been transmitted, typically before closing the port. flush() has nothing to do with the input buffer or reset_input_buffer(). flush() has a different function than reset_output_buffer(): flush() sends all the data in the output buffer to the peer, while reset_output_buffer() discards the data in the output buffer. reset_output_buffer() Clear output buffer, aborting the current output and discarding all that is in the buffer. Note, for some USB serial adapters, this may only flush the buffer of the OS and not all the data that may be present in the USB part.
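To make the difference concrete, here is a small sketch (the port name and payloads are made up; assumes pySerial 3.x):

import serial

ser = serial.Serial("COM3", timeout=1)

ser.write(b"hello")
ser.flush()                # block until the queued bytes have actually been transmitted

ser.write(b"never mind")
ser.reset_output_buffer()  # discard anything still waiting to be sent

ser.reset_input_buffer()   # discard anything received but not yet read

ser.close()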
7
5
60,761,243
2020-3-19
https://stackoverflow.com/questions/60761243/how-to-specify-several-marks-for-the-pytest-command
Reading http://doc.pytest.org/en/latest/example/markers.html I see the example of including or excluding certain python tests based on a mark. Including: pytest -v -m webtest Excluding: pytest -v -m "not webtest" What if I would like to specify several marks for both include and exclude?
Use and/or to combine multiple markers, same as for -k selector. Example test suite: import pytest @pytest.mark.foo def test_spam(): assert True @pytest.mark.foo def test_spam2(): assert True @pytest.mark.bar def test_eggs(): assert True @pytest.mark.foo @pytest.mark.bar def test_eggs2(): assert True def test_bacon(): assert True Selecting all tests marked with foo and not marked with bar $ pytest -q --collect-only -m "foo and not bar" test_mod.py::test_spam test_mod.py::test_spam2 Selecting all tests marked neither with foo nor with bar $ pytest -q --collect-only -m "not foo and not bar" test_mod.py::test_bacon Selecting tests that are marked with any of foo, bar $ pytest -q --collect-only -m "foo or bar" test_mod.py::test_spam test_mod.py::test_spam2 test_mod.py::test_eggs test_mod.py::test_eggs2
22
31
60,758,625
2020-3-19
https://stackoverflow.com/questions/60758625/sort-pandas-dataframe-by-sum-of-columns
I have a dataframe that looks like this Australia Austria United Kingdom Vietnam date 2020-01-30 9 0 1 2 2020-01-31 9 9 4 2 I would like to create a new dataframe that includes countries whose column sum is > 4, and I do it with df1 = df[[i for i in df.columns if int(df[i].sum()) > 4]] this gives me Australia Austria United Kingdom date 2020-01-30 9 0 1 2020-01-31 9 9 4 I now would like to sort the countries based on the sum of their column and then take the first 2 Australia Austria date 2020-01-30 9 0 2020-01-31 9 9 I know I have to use sort_values and tail. I just can't work out how
IIUC, you can do: s = df.sum() df[s.sort_values(ascending=False).index[:2]] Output: Australia Austria date 2020-01-30 9 0 2020-01-31 9 9
21
21
60,751,868
2020-3-19
https://stackoverflow.com/questions/60751868/how-to-check-whether-a-bucket-exists-in-gcs-with-python
Code: from google.cloud import storage client = storage.Client() bucket = ['symbol_wise_nse', 'symbol_wise_final'] for i in bucket: if client.get_bucket(i).exists(): BUCKET = client.get_bucket(i) If the bucket exists, I want to do client.get_bucket. How do I check whether the bucket exists or not?
There is no method to check if the bucket exists or not; however, you will get an error if you try to access a non-existent bucket. I would recommend either listing the buckets in the project with storage_client.list_buckets() and then using the response to confirm in your code that the bucket exists, or, if you wish to call client.get_bucket on every bucket in your project, simply iterating through the response directly. Hope you find this information useful.
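Following that suggestion, a possible sketch of the snippet from the question (bucket names taken from the question; list_buckets() yields Bucket objects with a name attribute):

from google.cloud import storage

client = storage.Client()
wanted = ['symbol_wise_nse', 'symbol_wise_final']

# names of the buckets that actually exist in the project
existing = {b.name for b in client.list_buckets()}

for name in wanted:
    if name in existing:
        BUCKET = client.get_bucket(name)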
10
2
60,751,819
2020-3-19
https://stackoverflow.com/questions/60751819/valueerror-2-columns-passed-passed-data-had-1-columns
I have a list with names of organizations like this: name = ['ALPHABET INC', 'AMAZON COM INC', 'APPLE INC',....] and another list of cu values like this: cu = ['02079K305', '023135106', '037833100',....] When I'm trying to convert it to a dataframe it's giving me an error message saying, "ValueError: 2 columns passed, passed data had 1 columns" My code to convert the lists to a dataframe: df = pd.DataFrame([name, cu], columns=['name of issuer', 'cusip']) Where am I going wrong? Thanks in advance!
I think the simplest approach is to create a dictionary: df = pd.DataFrame({'name of issuer': name, 'cusip':cu}) Your solution is also possible with zip; in recent versions of pandas the list call can be omitted: df = pd.DataFrame(list(zip(name, cu)), columns=['name of issuer', 'cusip']) print (df) name of issuer cusip 0 ALPHABET INC 02079K305 1 AMAZON COM INC 023135106 2 APPLE INC 037833100
7
12
60,749,032
2020-3-18
https://stackoverflow.com/questions/60749032/how-to-keep-a-docker-container-run-for-ever
I created a Dockerfile which sets up a python environment so that when people run the image on their host, they can run my python file without installing multiple python packages themselves. The problem is that after I build and run the image, the container stops immediately (because it completed). How can I keep the container running forever? My understanding is that after people pull and run my image, they can start to run the python file by running "python file.python". My Dockerfile looks like this (may not be correct. I am still learning): FROM python:3-alpine ADD . /app WORKDIR /app RUN pip install configparser
From HOW TO KEEP DOCKER CONTAINERS RUNNING, we know that docker containers, when run in detached mode (the common -d option), are designed to shut down immediately after the initial entrypoint command (the program that is run when the container starts from the image) is no longer running in the foreground. So to keep the docker container running even after the program inside has finished, just add CMD tail -f /dev/null as the last line of the Dockerfile.
What's more important is to understand what docker is intended for and how to use it properly. Using docker as an environment foundation for applications on the host machine is not a good choice; docker is designed to run applications in an environment-independent way. Applications should be placed into the docker image via docker build and run inside the docker container at runtime.
13
60,749,802
2020-3-19
https://stackoverflow.com/questions/60749802/python-with-pandas-file-size-44546-not-512-multiple-of-sector-size-512
After reading an excel file with pandas, I get the following warning: key code: pd_obj = pd.read_excel("flie.xls", dtype=str, usecols=usecols, skiprows=3) for idx, row in pd_obj.iterrows(): json_tmpl = copy.deepcopy(self.details) json_tmpl["nameInBank"] = row["nameInBank"] json_tmpl["totalBala"] = row["totalBala"].replace(",", '') # parse pdf file status = self._get_banksplip_json(json_tmpl["bankReceipts"], row) json_buf.append(copy.deepcopy(json_tmpl)) warning info: WARNING *** file size (48130) not 512 + multiple of sector size (512) WARNING *** file size (44546) not 512 + multiple of sector size (512)
This appears to be a normal warning from the underlying XLRD library, and it seems safe to ignore. A pandas issue (#16620) was opened and closed without a conclusive resolution. However, the discussion did provide an alternative that would allow you to suppress the warnings: from os import devnull import pandas as pd import xlrd wb = xlrd.open_workbook('file.xls', logfile=open(devnull, 'w')) pd_obj = pd.read_excel(wb, dtype=str, usecols=usecols, skiprows=3, engine='xlrd') You can read a more detailed analysis of the actual cause of the error on the forum here: https://groups.google.com/forum/m/#!topic/python-excel/6Lue-1mTPSM Moral of the story: whenever you get a warning you aren't sure about, you should search for the keywords that appear (discard any specific parts like file sizes or local paths). This answer is based on the first two results to show up on Google.
7
14