A Software Testing View on Machine Learning Model Quality
I'm planning a loose collection of summaries of lectures from my Software Engineering for AI-Enabled Systems course, starting here with my Model Quality lecture. Model quality primarily refers to how well a machine-learned model (i.e., a function predicting outputs for given inputs) generalizes to unseen data. Data scientists routinely assess model quality with various accuracy measures on validation data, which seems somewhat similar to software testing. As I will discuss, there are significant differences, but also many directions where a software testing view can likely provide insights and directions beyond the classic methods taught in data science classes.

Note that model quality refers specifically to the quality of the model created with a machine-learning approach from some training data. The model is only one part of an AI-enabled system, but an important one. That is, I will not discuss the quality of the machine learning algorithm itself, nor the quality of the data used for training, nor the quality of other pipeline steps or infrastructure used for producing the model, nor the quality of how the model is integrated into a larger system design — those are all important but should be discussed separately. Model quality is important when one needs an initial assessment of a model before going to production, when one wants to observe relative improvements from learning efforts, or when one wants to compare two models.

Traditional Accuracy Measures (The Data Scientist's Toolbox)

Every single machine learning book and class will talk at length about how to split data into training and validation data and how to measure accuracy (focusing on supervised learning here). In the simplest case, for classification tasks, accuracy is the percentage of all correct predictions. That is, given a labeled validation dataset (i.e., multiple rows containing features and the expected output), one computes how well a model can match the expected output labels based on the features:

def accuracy(model, xs, ys):
    count = len(xs)
    count_correct = 0
    for i in range(count):
        predicted = model.predict(xs[i])
        if predicted == ys[i]:
            count_correct += 1
    return count_correct / count

A confusion matrix is often shown, and on binary classification tasks recall and precision are distinguished, discussing the relative importance of false positives and false negatives. When thresholds are used, comparisons can be made across all thresholds using area-under-the-curve measures, such as ROC. For regression tasks, there is a zoo of accuracy measures that quantify the typical distance between predictions and expected values, such as Mean Absolute Percentage Error (MAPE) or Mean Squared Error. For ranking problems yet other accuracy measures are used, such as MAP@K or Mean Reciprocal Rank, and some fields such as Natural Language Processing have yet other measures for certain tasks.

Predicting House Sales Prices

For example, when predicting sales prices of houses based on characteristics of these houses and their neighborhood, we could compare the model's predictions with the actual prices and compute that, in the given example, we are barely 5% off on average: MAPE = (20/250 + 32/498 + 1/211 + 9/210)/4 = (0.08 + 0.064 + 0.005 + 0.043)/4 = 0.048.

Pretty much all accuracy measures are difficult to interpret in isolation. Is 5% MAPE for housing price predictions good or not? Accuracy measures are usually interpreted with regard to other accuracy measures, for example when observing improvements between two model versions.
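To make the MAPE computation above concrete, here is a minimal sketch in Python reproducing the house-price numbers; this is my own illustration rather than code from the lecture.

```python
def mape(y_true, y_pred):
    """Mean Absolute Percentage Error: average relative deviation from the true values."""
    return sum(abs(t - p) / t for t, p in zip(y_true, y_pred)) / len(y_true)

# The house-price example from the text: predictions were off by 20, 32, 1, and 9
# (thousand) against actual prices of 250, 498, 211, and 210 (thousand).
actual_prices = [250, 498, 211, 210]
absolute_errors = [20, 32, 1, 9]
predicted_prices = [a + e for a, e in zip(actual_prices, absolute_errors)]  # error direction does not matter for MAPE

print(round(mape(actual_prices, predicted_prices), 3))  # 0.048, i.e., barely 5% off on average
```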
It is almost always useful to consider simple baseline heuristics, such as randomly guessing or "always predict the average price in the neighborhood". Improvements can then be expressed as reduction in error, which is typically much easier to interpret than raw accuracy numbers: reduction in error = ((1 − baseline accuracy) − (1 − model accuracy)) / (1 − baseline accuracy). For example, an improvement from 99.9 to 99.99% accuracy is a 90% reduction in error, which may be a much more significant achievement than the 50% reduction when going from 50 to 75% accuracy.

On terminology: In the machine learning world, data scientists typically refer to model accuracy as performance (e.g., "this model performs well"), which is confusing to someone like me who uses the term performance for execution time. Then again, when data scientists talk about time, they use terms like learning latency or inference time. This probably shouldn't be surprising given the many different meanings of "performance" in business, art, law, and other fields, but be aware of this ambiguity in interdisciplinary teams and be explicit about what you mean by "performance".

Analogy to Software Testing

It is tempting to compare evaluating model quality with software testing. In both cases, we execute the system/model with different inputs and compare the computed outputs with expected outputs. However, there are important conceptual differences. A software test executes the system in a controlled environment with specific inputs (e.g., a function call with specific parameters) and expects specific outputs, e.g., "assertEquals(4, add(2, 2));". A test suite fails if any single one of the tests does not produce the expected output. Testing famously cannot assure the absence of bugs, only show their presence.

Validation data, though, plays a very different role than software tests. Even though validation data provides inputs and expected outputs, we would not translate the housing data above into a test suite, because a single non-perfect prediction would fail the entire test suite:

assertEquals(250000, model.predict([3, .01, ...]));
assertEquals(498000, model.predict([4, .01, ...]));
assertEquals(211000, model.predict([2, .03, ...]));
assertEquals(210000, model.predict([2, .02, ...]));

In model quality, we do not expect perfect predictions on every single data point in our validation dataset. We do not really care about any individual data point but about the overall fit of the model. We might be perfectly happy with 80% accuracy, and we are well aware that there may even be incorrect labels or noisy data in the training or validation set. A single wrong prediction is not a bug. In fact, as I argue in Machine Learning is Requirements Engineering, the entire notion of a model bug is problematic and comes from pretending that we have some implicit specifications for the model or from confusing terminology for validation and verification. We should not ask whether a model is correct but how well it fits the problem.

Performance testing might be a slightly better analogy. Here we evaluate the quality of an implementation with regard to execution time, but usually without specifications, while accepting some nondeterminism and noise, and without expecting exact performance behavior. Instead, we average over multiple executions, possibly with diverse inputs — not unlike evaluating accuracy by averaging over validation data.
We may set expectations for expected performance (regression tests) or simply compare multiple implementations (benchmarking).

@Test(timeout=100)
public void testCompute() {
    expensiveComputation(...);
}

On terminology: Strike the term model bug from your vocabulary and avoid asking about the correctness of models; rather evaluate fit or accuracy — or, I guess, performance. Prefer "evaluate model quality" or "measure model accuracy" over "testing a model". The word testing simply brings too much baggage.

Curating Validation Sets (Learning from Test Case Selection)

Even though the software testing analogy is not a great fit, there are still things we can learn from many decades of experience and research on software testing. In general, if we want to use the testing analogy, I think a validation set consisting of multiple labeled data points corresponds roughly to a single unit test or regression test. Whenever we want to test the behavior of a specific aspect of the model, we want to do so with multiple data points. Whenever we want to understand model quality in more detail, we should do so with multiple validation sets. When we evaluate model accuracy only with a single validation set, we are missing out on many nuances and get only a very coarse aggregated picture.

The challenge then is how to identify and curate multiple validation sets. Here software engineering experience may come in handy. Software engineers have many strategies and heuristics to select test cases that may also be helpful for curating validation data. While traditionally validation data is selected more or less randomly from a population of (hopefully representative) data points when splitting it off the training data, not all inputs are equal. We want multiple validation sets to represent different tests:

Important use cases and regression testing: Consider voice recognition for a smart assistant. Correctly recognizing "call mom" or "what's the weather tomorrow" are extremely common and important use cases that almost certainly should not break, whereas wrong recognition of "add asafetida to my shopping list" may be more forgivable. It may be worth having a separate validation set for each important use case (e.g., consisting of multiple "call mom" recordings by different speakers in different accents) and creating a regression test that expects a very high accuracy for these, much higher than expected from the overall validation set. There is some analogy here to unit tests or performance regression tests, but again accuracy is expressed as a probability over multiple inputs representing a single use case; a minimal sketch of such a check follows below.
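The sketch below shows what such a per-use-case regression check could look like, reusing the accuracy function from earlier; the use-case names, validation sets, and thresholds are made-up placeholders, not material from the lecture.

```python
# Hypothetical per-use-case regression check: each curated validation set gets its own
# accuracy threshold, much like a performance regression test gets a time budget.
use_case_thresholds = {
    "call mom": 0.99,            # critical use case: expect near-perfect accuracy
    "weather tomorrow": 0.97,
    "overall": 0.85,             # large representative validation set
}

def check_regressions(model, validation_sets, thresholds):
    """Return the use cases whose accuracy dropped below their threshold."""
    failures = []
    for name, (xs, ys) in validation_sets.items():
        acc = accuracy(model, xs, ys)   # accuracy() as defined earlier in the article
        if acc < thresholds[name]:
            failures.append((name, acc))
    return failures
```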
Representing minority use cases and fairness: Models are often much more accurate for populations for which more training data is available and often do worse on minorities or less common inputs (e.g., voice recognition for speakers with certain accents, face recognition for various minorities). When minorities represent only a small fraction of the users, low accuracy for them lowers overall accuracy on large representative validation sets only marginally. Here it is useful to collect a validation set for each important user group, minority, or use case — again consisting of many data points representing this group. Checking outliers, checking common characteristics of inputs leading to wrong predictions, and monitoring in production can help to identify potential subgroups with poor performance.

Setting stretch goals: Some inputs may be particularly challenging and we might be okay with the model not performing well on them right now (e.g., speech recognition on low-quality audio). It can be worthwhile to separate out a validation set for known challenging cases and to track them as stretch goals — that is, we are okay with low accuracy right now, but we'll track improvement over time.

So how do we find important problems and subpopulations to track? There are many different strategies: Sometimes there are requirements and goals for the model and the system that may give us insights. Experts may have experience with similar systems and their problems. Studying the distribution of mistakes in the existing validation data can help. User feedback and testing in production can provide valuable insights about poorly represented groups. In general, several classic black-box testing strategies can provide inspiration: Boundary value analysis and equivalence partitioning structure the input space and select test cases based on requirements — similarly, we may be able to more systematically think through possible user groups and problem classes we are trying to solve and curate corresponding validation sets. Combinatorial testing and decision tables are also well-understood techniques for looking at combinations of characteristics that may help to curate validation sets.

On this topic of validation set curation, see also Hulten, Geoff. "Building Intelligent Systems: A Guide to Machine Learning Engineering." Apress, 2018, Chapter 19 (Evaluating Intelligence).

Automated (Random) Testing

A popular research field is to automatically generate test cases, known as automated testing, fuzz testing, or random testing. That is, rather than writing each unit test by hand, given a piece of software and maybe some specifications, we automatically generate (lots of) inputs to see whether the system behaves correctly for all of them.
Techniques range from "dumb" fuzzing where inputs are generated entirely randomly to many smarter techniques that use dynamic symbolic execution or coverage-guided fuzzing to maximally cover the implementation. This has been very effective at finding bugs and vulnerabilities in many classes of systems, such as Unix utilities and compilers. So would this also work to assess model quality? It's trivial to generate thousands of validation inputs for a machine-learned model — e.g., sampling uniformly across all features, sampling from known distributions for each feature, sampling from a joint probability distribution of all features (e.g., derived from real data or modeled with probabilistic programming), or inputs generated by mutating real inputs. The problem is how to get the corresponding labels to determine whether the model's prediction is accurate on these generated inputs.

The Darn Oracle Problem

The problem with all automated testing techniques is how to know whether a test passes or fails, known as the oracle problem. That is, whether in software testing or model evaluation, we can easily produce millions of inputs, but how do we know what to expect as outputs? Two solutions to the oracle problem: comparing against a gold standard and partial specifications/global invariants. There are a couple of common strategies to deal with the oracle problem in random testing:

Manually specify the outcome — humans can often provide the expected outcome based on their understanding or specification of the problem, but this obviously does not scale when generating thousands of random inputs and cannot be automated. Even when crowdsourcing the labeling in a machine-learning setting, for many problems it is not clear that humans would be good at providing labels for "random" inputs.

Comparing against a gold standard — if we have an alternative implementation (typically a slower but correct implementation we want to improve upon) or an executable specification, we can use those to simply compute the expected outcome. Even if we are not perfectly sure about the correctness of the alternative implementation, we can use it to identify and investigate discrepancies, or even vote when multiple implementations exist. Forms of differential testing have been extremely successful, for example, in finding compiler bugs. Unfortunately, we usually use machine learning exactly when we don't have a specification and no existing good solution, so it is unlikely that we'll have a gold standard implementation of an image recognition algorithm or recidivism prediction algorithm (aside from testing in production with some telemetry data, more on that another time); a small sketch of a discrepancy check against a weaker reference follows below.
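Even without a true gold standard, a weaker reference (a previous model version or a simple heuristic baseline) can be used to surface inputs worth investigating. This is a minimal sketch of my own, with placeholder models rather than anything from the lecture.

```python
def find_discrepancies(new_model, reference_model, generated_inputs, threshold=0.1):
    """Flag generated inputs on which two models disagree noticeably, for manual review.

    reference_model might be a previous model version or a simple baseline heuristic;
    neither is assumed to be correct, but disagreements are worth investigating.
    """
    suspicious = []
    for x in generated_inputs:
        if abs(new_model.predict(x) - reference_model.predict(x)) > threshold:
            suspicious.append(x)
    return suspicious
```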
Checking partial specifications and global invariants — even when we do not have full specifications of a problem and the expected outcomes, we sometimes have partial specifications or global invariants. Most fuzzers look for crashing bugs or violations of other global invariants, such as unsafe memory access. That is, instead of checking that, for a given input, the output matches some expected output, we only check that the computation does not crash or does not violate any other partial or global specifications we may have (e.g., all opened file handles need to be closed). Developers can also manually specify partial specifications, typically with assert statements, for example for pre- and post-conditions of functions and data and loop invariants (though developers are often reluctant to write even those). Assert statements turn runtime violations of these partial specifications into crashes, which can then be detected for random inputs. Notice that partial specifications only check some aspects of correctness — aspects that should hold for all executions. Interestingly, there are some partial specifications we can define for machine-learned models that may be worth testing for, such as fairness and robustness, which is where automated testing may be useful.

Simulation and inverse computations — in a few scenarios it can be possible to simulate the world to derive input-output pairs, or it may be easier to randomly select outputs and derive corresponding inputs. For example, when performing prime factorization, it is much easier to pick a set of prime numbers (the output) and compute the input by multiplying them. Similarly, one could create a scene in a raytracer (with known locations of objects) and then render the image to create the input for a vision algorithm that should detect those objects.
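To illustrate the inverse-computation idea with the prime factorization example just mentioned (my own sketch, not code from the lecture): we pick the expected output first and derive the input, so labeled test cases come for free.

```python
import random

def random_factorization_test_case(primes=(2, 3, 5, 7, 11, 13)):
    """Pick the expected output (a list of prime factors) first, then derive the input."""
    factors = sorted(random.choices(primes, k=random.randint(1, 5)))
    n = 1
    for p in factors:
        n *= p          # the input is just the product of the chosen factors
    return n, factors   # (input to the factorizer under test, expected output)

# Usage: generate labeled test cases without ever running a reference factorizer.
for _ in range(3):
    n, expected = random_factorization_test_case()
    print(n, expected)  # e.g., 330 [2, 3, 5, 11]
```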
This works only in a few settings, but it can be very powerful when it does, because we can automatically generate input-output pairs.

Invariants and Metamorphic Testing

For evaluating machine-learned models, while it seems unclear that we will ever be able to automatically generate input-output pairs beyond a few simulation settings, it is worth thinking about invariants, typically invariants that should hold over predictions of different or related inputs. Here are a couple of examples of invariants for a model f:

A credit rating model f should not depend on gender: for all inputs x and y that only differ in gender, f(x) = f(y) — even though this is a rather simplistic fairness property that does not account for correlations.

Sentiment analysis f should not be affected by synonyms: for all inputs x: f(x) = f(x.replace("is not", "isn't"))

Negation should swap meaning: for all inputs x of the form "X is Y": f(x) = 1 − f(x.replace(" is ", " is not "))

Small changes to training data should not affect the outcome (robustness): for all x in our training set, f(x) = f(mutate(x, δ))

Low credit scores should never get a loan (a sufficient classification condition, an invariant known as an anchor): for all loan applicants x: (x.score < 645) ⇒ ¬f(x)

Identifying such invariants is not easy and requires domain knowledge, but once we have them, automated testing is possible. They will only ever assess one aspect of model quality, but that may be an important aspect. Likely invariants can also be mined automatically from data; see the literature on specification mining and anchors.

Such invariants are typically known as metamorphic relations in the software engineering literature, but are rarely discussed in the machine learning literature. In its general form, a metamorphic relation is an invariant of the form ∀x. f(g_I(x)) = g_O(f(x)), where g_I and g_O are two functions (e.g., g_I(x) = x.replace(" is ", " is not ") and g_O(x) = 1 − x).

Once invariants are established, generating lots of inputs is the easy part. Using techniques from adversarial learning and moving along the gradients of models, there are often techniques that find inputs invalidating the specifications much more effectively than random sampling. For some problems and models it is even possible to formally verify that a model meets the specification for all possible inputs. Note that this view of invariants also aligns well with regarding machine learning as requirements engineering. Invariants are partial specifications or requirements, and automated testing helps to check the compatibility of multiple specifications.

Adequacy Criteria

Since software testing can only show the presence of bugs and never their absence, an important question is always when to stop testing, i.e., whether the tests are adequate. There are many strategies to approach this question, even though typically one simply pragmatically stops testing when time or money runs out or it feels "good enough". In machine learning, statistical power calculations could potentially be used, but in practice simple rules of thumb seem to drive the size of validation sets used.

Line and branch coverage report in software testing

More systematically, different forms of coverage and mutation scores are proposed and used to evaluate the quality of a test suite in software testing:

Specification coverage analyzes whether all conditions of the specification (boundary conditions etc.) have been tested.
Given that we don't have specifications for machine-learning models, the best we can do is to think about the representativeness of the validation data and about subpopulations and use cases when creating multiple validation sets.

White-box coverage like line coverage and branch coverage gives us an indication of what parts of the program have been executed. There have been attempts to define similar coverage metrics for deep neural networks (e.g., neuron coverage), but it is unclear what this really represents. Are inputs generated to cover all neuron activations representative of anything or useful for checking invariants with automated testing? It seems too early to tell.

Mutation scores indicate how well a test suite detects faults (mutants) injected into the program. A better test suite would detect more injected faults. However, again, a mapping to machine-learned models is unclear. Would better curated validation sets catch more mutations to a machine-learned model, and would this be useful in practice for evaluating the quality of the validation sets?

While there are several research papers on these topics, there does not seem to be a good mapping to problems in evaluating model quality, given the lack of specifications. I have not found any convincing strategy to evaluate the quality of a validation set beyond checking whether it is representative of data in production (qualitatively or statistically).

Test Automation

Finally, it's worth pointing to the vast amount of work on test automation in software testing: tools that automatically execute the test suite on changes (smart test runners, continuous integration, nightly builds), with lots of continuous integration tools to run tests independently, in parallel, and at scale (Jenkins, Travis CI, and many others). Such systems can also track test outcomes, coverage information, or performance results over time. Here we find a much more direct equivalent for model quality and also many existing tools. Ideally the entire learning and evaluation pipeline is automated so that it can be executed and tracked after every change. Accuracy can be tracked over time, and actions can be taken automatically when significant drops in accuracy are observed. Many dashboards have been developed internally (e.g., Uber's Michelangelo), but many open-source and academic solutions exist as well (e.g., ease.ml/ci, MLflow, TensorBoard), and with suitable plugins Jenkins can also record accuracy results over time.

Summary

So in summary, there are many more strategies to evaluate a machine-learned model's quality than just traditional accuracy metrics. While software testing is not a good direct analogy, software engineering provides many lessons about
curating multiple(!) validation datasets, about automated testing for certain invariants, and about test automation. Evaluating a model in an AI-enabled system should not stop with the first accuracy number on a static dataset. Indeed, as we will see later, testing in production is just as important, and probably even more so.
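As a closing illustration of the invariant checking mentioned in the summary, here is a minimal sketch of a metamorphic check for the negation invariant introduced above; the sentiment model, its score range, and the sentence templates are hypothetical placeholders rather than material from the lecture.

```python
import random

# Hypothetical subjects and adjectives used to generate inputs of the form "X is Y".
SUBJECTS = ["the movie", "the food", "this phone", "the service"]
ADJECTIVES = ["great", "terrible", "boring", "wonderful"]

def check_negation_invariant(sentiment_model, n=1000, tolerance=0.1):
    """Check f(x) ≈ 1 - f(x.replace(' is ', ' is not ')) on generated inputs.

    Assumes sentiment_model.predict(text) returns a sentiment score in [0, 1].
    """
    violations = []
    for _ in range(n):
        x = f"{random.choice(SUBJECTS)} is {random.choice(ADJECTIVES)}"
        negated = x.replace(" is ", " is not ")
        if abs(sentiment_model.predict(x) - (1 - sentiment_model.predict(negated))) > tolerance:
            violations.append((x, negated))
    return violations  # violating pairs, to be inspected manually
```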
https://ckaestne.medium.com/a-software-testing-view-on-machine-learning-model-quality-d508cb9e20a6
['Christian Kästner']
2020-06-07 14:00:58.815000+00:00
['Machine Learning', 'Testing', 'Se4ai', 'Software Engineering', 'Se4ml']
Elusive Lover
The best publication about poetry, health, relationships, lifestyle and business. Everybody has the right to be listened to and to grow.
https://medium.com/passive-asset/elusive-lover-29b5dc391db8
['Robyn Norman']
2020-11-26 17:59:58.532000+00:00
['Love', 'Psychology', 'Haiku', 'Relationships', 'Poetry']
Django Rest Framework — JWT auth with Login and Register by Sjlouji
Django Rest Framework — JWT auth with Login and Register

Hello all. In this blog I am explaining how to perform JWT authentication (Login and Register) with Django REST Framework. Let's get started. In my previous blog, I have explained what JWT is and how to initialize it with Django. To know about it, visit that blog.

1. Creating a Django app and installing Django REST Framework

So now let's create a simple Django project. I am creating a Django project named jwtauthloginandregister. After creating it, I am just migrating to apply the initial migrations to the database.

$ django-admin startproject jwtauthloginandregister
$ python3 manage.py migrate
$ python3 manage.py runserver

Now let's install Django REST Framework and Django REST framework Simple JWT.

$ pip3 install djangorestframework markdown django-filter djangorestframework_simplejwt

After installation, don't forget to add them to the INSTALLED_APPS section.

jwtauthloginandregister/settings.py

# Application definition
INSTALLED_APPS = [
    'django.contrib.admin',
    'django.contrib.auth',
    'django.contrib.contenttypes',
    'django.contrib.sessions',
    'django.contrib.messages',
    'django.contrib.staticfiles',
    'rest_framework',  # Add this line
]

Also add the default authentication class as JWTAuthentication.

jwtauthloginandregister/settings.py

REST_FRAMEWORK = {
    'DEFAULT_AUTHENTICATION_CLASSES': [
        'rest_framework_simplejwt.authentication.JWTAuthentication',
    ],
}

The JWT setup has been completed, but we can't use it yet because we didn't wire it up in the project-level urls.py file. Let's do it.

jwtauthloginandregister/urls.py

from django.conf.urls import url
from django.contrib import admin
from django.urls import path
from rest_framework_simplejwt import views as jwt_views

urlpatterns = [
    url(r'^admin/', admin.site.urls),
    path('api/token/', jwt_views.TokenObtainPairView.as_view(), name='token_obtain_pair'),
    path('api/token/refresh/', jwt_views.TokenRefreshView.as_view(), name='token_refresh'),
]

2. Create a new app for authentication

Simple JWT provides us a default login API. But it doesn't provide us an API for registration. We have to do that manually. To do it, I am creating a new app called account in our project.

$ python3 manage.py startapp account

As usual, after creating an app, I am registering it in the INSTALLED_APPS section.

jwtauthloginandregister/settings.py

# Application definition
INSTALLED_APPS = [
    'django.contrib.admin',
    'django.contrib.auth',
    'django.contrib.contenttypes',
    'django.contrib.sessions',
    'django.contrib.messages',
    'django.contrib.staticfiles',
    'rest_framework',
    'account',  # Add this line
]

Now create a urls.py file in the account app and include it in the project-level urls.py file.

jwtauthloginandregister/urls.py

urlpatterns = [
    url(r'^admin/', admin.site.urls),
    path('api/token/', jwt_views.TokenObtainPairView.as_view(), name='token_obtain_pair'),
    path('api/token/refresh/', jwt_views.TokenRefreshView.as_view(), name='token_refresh'),
    path('account/', include('account.urls')),
]

Now we have successfully set up our app. Let's start creating our Registration API.

3. Create authentication Views

To do that, I am creating two files, api.py and serializer.py, where api.py is the first point of contact from the urls.py file.
account/api.py

from rest_framework import generics, permissions, mixins
from rest_framework.response import Response
from .serializer import RegisterSerializer, UserSerializer
from django.contrib.auth.models import User

# Register API
class RegisterApi(generics.GenericAPIView):
    serializer_class = RegisterSerializer

    def post(self, request, *args, **kwargs):
        serializer = self.get_serializer(data=request.data)
        serializer.is_valid(raise_exception=True)
        user = serializer.save()
        return Response({
            "user": UserSerializer(user, context=self.get_serializer_context()).data,
            "message": "User Created Successfully. Now perform Login to get your token",
        })

In serializer.py, RegisterSerializer handles user registration. UserSerializer is used to retrieve particular values of the users.

account/serializer.py

from rest_framework import serializers
from rest_framework.permissions import IsAuthenticated
from django.db import models
from django.contrib.auth.models import User
from django.contrib.auth import authenticate
from django.contrib.auth.hashers import make_password

# Register serializer
class RegisterSerializer(serializers.ModelSerializer):
    class Meta:
        model = User
        fields = ('id', 'username', 'password', 'first_name', 'last_name')
        extra_kwargs = {
            'password': {'write_only': True},
        }

    def create(self, validated_data):
        user = User.objects.create_user(validated_data['username'],
                                        password=validated_data['password'],
                                        first_name=validated_data['first_name'],
                                        last_name=validated_data['last_name'])
        return user

# User serializer
class UserSerializer(serializers.ModelSerializer):
    class Meta:
        model = User
        fields = '__all__'

After creating these classes, just map the API endpoint in the urls.py of our newly created app.

account/urls.py

from django.conf.urls import url
from django.urls import path, include
from .api import RegisterApi

urlpatterns = [
    path('api/register', RegisterApi.as_view()),
]

Once created, open Postman and make a POST request to http://localhost:8000/account/api/register

JWT auth Registration

Now, after registration, just log in to get the JWT token. You can then make requests to the server with that token. To log in, make a POST request to http://localhost:8000/api/token/

JWT Login

JWT also handles login errors.

JWT Error handling

If you wanna know about the basics of JWT Authentication, visit my previous blog. In my next blog, I will be demonstrating how to use permissions and parse nested JSON. Feel free to contact me for any queries.

Email: [email protected]
Linkedin: https://www.linkedin.com/in/sjlouji/
Complete code can be found on my Github: https://github.com/sjlouji/Medium-Django-Rest-Framework-JWT-auth-login-register.git

Happy coding!
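If you prefer a script over Postman, the same requests can be made with Python's requests library. This is my own illustrative snippet, not part of the original tutorial; the username and password values are made up, and the field names come from the RegisterSerializer above.

```python
import requests

# Register a user (field names match the RegisterSerializer shown above).
requests.post("http://localhost:8000/account/api/register", json={
    "username": "joan",
    "password": "a-strong-password",
    "first_name": "Joan",
    "last_name": "Louji",
}).json()

# Log in to obtain the access/refresh token pair from Simple JWT.
tokens = requests.post("http://localhost:8000/api/token/", json={
    "username": "joan",
    "password": "a-strong-password",
}).json()

# Use the access token on subsequent requests.
headers = {"Authorization": f"Bearer {tokens['access']}"}
```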
https://medium.com/python-in-plain-english/django-rest-framework-jwt-auth-with-login-and-register-77f830cd8789
[]
2020-09-28 21:59:16.262000+00:00
['Jwt', 'Django', 'Web Development', 'Programming', 'Python']
Getting Started With Google Colab
You know it’s out there. You know there’s free GPU somewhere, hanging like a fat, juicy, ripe blackberry on a branch just slightly out of reach. Beautiful lightning-fast speed waiting just for you. Wondering how on earth to get it to work? You’re in the right place! Photo by Breno Machado on Unsplash For anyone who doesn’t already know, Google has done the coolest thing ever by providing a free cloud service based on Jupyter Notebooks that supports free GPU. Not only is this a great tool for improving your coding skills, but it also allows absolutely anyone to develop deep learning applications using popular libraries such as PyTorch, TensorFlow, Keras, and OpenCV. Colab provides GPU and it’s totally free. Seriously! There are, of course, limits. (Nitty gritty details are available on their faq page, of course.) It supports Python 2.7 and 3.6, but not R or Scala yet. There is a limit to your sessions and size, but you can definitely get around that if you’re creative and don’t mind occasionally re-uploading your files… Colab is ideal for everything from improving your Python coding skills to working with deep learning libraries, like PyTorch, Keras, TensorFlow, and OpenCV. You can create notebooks in Colab, upload notebooks, store notebooks, share notebooks, mount your Google Drive and use whatever you’ve got stored in there, import most of your favorite directories, upload your personal Jupyter Notebooks, upload notebooks directly from GitHub, upload Kaggle files, download your notebooks, and do just about everything else that you might want to be able to do. It’s awesome. Working in Google Colab for the first time has been totally phenomenal and pretty shockingly easy, but it hasn’t been without a couple of small challenges! If you know Jupyter Notebooks at all, you’re pretty much good to go in Google Colab, but there are just a few little differences that can make the difference between flying off to freedom on the wings of free GPU and sitting at your computer, banging your head against the wall… Photo by Gabriel Matula on Unsplash This article is for anyone out there who is confused, frustrated, and just wants this thing to work! Setting up your drive Create a folder for your notebooks (Technically speaking, this step isn’t totally necessary if you want to just start working in Colab. However, since Colab is working off of your drive, it’s not a bad idea to specify the folder where you want to work. You can do that by going to your Google Drive and clicking “New” and then creating a new folder. I only mention this because my Google Drive is embarrassingly littered with what looks like a million scattered Colab notebooks and now I’m going to have to deal with that.) If you want, while you’re already in your Google Drive you can create a new Colab notebook. Just click “New” and drop the menu down to “More” and then select “Colaboratory.” Otherwise, you can always go directly to Google Colab. Game on! You can rename your notebook by clicking on the name of the notebook and changing it or by dropping the “File” menu down to “Rename.” Set up your free GPU Want to use GPU? It’s as simple as going to the “runtime” dropdown menu, selecting “change runtime type” and selecting GPU in the hardware accelerator drop-down menu! Get coding! You can easily start running code now if you want! You are good to go! Make it better Want to mount your Google Drive? 
Use:

from google.colab import drive
drive.mount('/content/gdrive')

Then you'll see a link, click on that, allow access, copy the code that pops up, paste it in the box, hit enter, and you're good to go! If you don't see your drive in the side box on the left, just hit "refresh" and it should show up. (Run the cell, click the link, copy the code on the page, paste it in the box, hit enter, and you'll see this when you've successfully mounted your drive.) Now you can see your drive right there on the left-hand side of the screen! (You may need to hit "refresh.") Plus, you can reach your drive any time with

!ls "/content/gdrive/My Drive/"

If you'd rather download a shared zip file link, you can use:

!wget
!unzip

For example: That will give you Udacity's flower data set in seconds! If you're uploading small files, you can just upload them directly with some simple code. However, if you want to, you can also just go to the left side of the screen and click "upload files" if you don't feel like running some simple code to grab a local file.

Google Colab is incredibly easy to use on pretty much every level, especially if you're at all familiar with Jupyter Notebooks. However, grabbing some large files and getting a couple of specific directories to work did trip me up for a minute or two. I covered getting started with Kaggle in Google Colab in a separate article, so if that's what interests you, please check that out!

Importing libraries

Imports are pretty standard, with a few exceptions. For the most part, you can import your libraries by running import like you do in any other notebook. PyTorch is different! Before you run any other Torch imports, you'll want to run the code below.

*** UPDATE! (01/29) *** Colab now supports native PyTorch!!! You shouldn't need to run the code below, but I'm leaving it up just in case anyone is having any issues!

http://pytorch.org/

from os.path import exists
from wheel.pep425tags import get_abbr_impl, get_impl_ver, get_abi_tag
platform = '{}{}-{}'.format(get_abbr_impl(), get_impl_ver(), get_abi_tag())
cuda_output = !ldconfig -p|grep cudart.so|sed -e 's/.*\.\([0-9]*\)\.\([0-9]*\)$/cu\1\2/'
accelerator = cuda_output[0] if exists('/dev/nvidia0') else 'cpu'

!pip install -q http://download.pytorch.org/whl/{accelerator}/torch-0.4.1-{platform}-linux_x86_64.whl torchvision
import torch

Then you can continue with your imports. If you try to simply run import torch you'll get an error message. I really recommend clicking on the extremely helpful links that pop up. If you do, you'll get that code right away and you can just click on "INSTALL TORCH" to import it into your notebook. The code will pop up on the left-hand side of your screen, and then hit "INSERT."

Not able to simply import something else that you want with an import statement? Try a pip install! Just be aware that Google Colab wants an exclamation point before most commands.

!pip install -q keras
import keras

or:

!pip3 install torch torchvision

and:

!apt-get install is useful too!
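One thing worth adding (my own note, not the author's): once you've switched the runtime type to GPU, you can confirm the accelerator is actually active from a notebook cell.

```python
# Quick sanity check that the GPU runtime is active (run this in a Colab cell).
import torch
print(torch.cuda.is_available())     # True when the hardware accelerator is set to GPU

import tensorflow as tf
print(tf.test.gpu_device_name())     # e.g., '/device:GPU:0' when a GPU is attached
```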
I did find that Pillow can be sort of buggy, but you can solve that by running

import PIL
print(PIL.PILLOW_VERSION)

If you get anything below 5.3, go to the "runtime" dropdown menu, restart the runtime, and run the cell again. You should be good to go!

It's easy to create a new notebook by dropping "File" down to "New Python 3 Notebook." If you want to open something specific, drop the "File" menu down to "Open Notebook…" Then you'll see a screen that looks like this: As you can see, you can open a recent file, files from your Google Drive, GitHub files, and you can upload a notebook right there as well. The GitHub option is great! You can easily search by an organization or user to find files. If you don't see what you're looking for, try checking the repository drop-down menu!

Always be saving

Saving your work is simple! You can do a good ol' "command-s" or drop the "File" menu down to save. You can create a copy of your notebook by dropping "File" -> "Save a Copy in Drive." You can also download your workbook by going from "File" -> "download .ipynb" or "download .py."

That should be enough to at least get you up and running on Colab and taking advantage of that sweet, sweet free GPU! Please let me know if you run into any other newbie problems that I might be able to help you with. I'd love to help you if I can! If you want to reach out or find more cool articles, please come and join me at Content Simplicity! If you're new to data science, machine learning, and artificial intelligence, you might want to check out the ultimate beginner's guide to NumPy! Or you might be interested in one of these! Or maybe you want to know how to get your own posts noticed!

Photo by Sarah Cervantes on Unsplash

Thanks for reading! ❤️
https://towardsdatascience.com/getting-started-with-google-colab-f2fff97f594c
['Anne Bonner']
2019-11-17 18:58:21.613000+00:00
['Coding', 'Tutorial', 'Machine Learning', 'Data Science', 'AI']
More Depth to Refactoring and Using Design Patterns
I mentioned the outline of refactoring in my previous blog (specifically about TDD); in this blog I will write more about what refactoring is and how to apply it correctly by using design patterns.

Refactoring

Refactoring is the process of clarifying and simplifying the design of existing code without changing its functionality. We refactor because writing your code neatly makes fewer code smells appear, and this can improve the program's performance, maintainability, scalability and flexibility. A simple example of refactoring would be applying clean code principles to your program; another application of refactoring is using design patterns.

Example of refactoring by applying clean code, source: PILAR Project

Design Patterns

Design patterns are repeatable solutions to commonly occurring problems in a program; a design pattern is more like a template for solving a problem that can be used in many different situations. Using design patterns reduces code smells and increases code readability. There are three types of design patterns commonly used:

1. Creational Patterns
A design pattern that focuses on object creation mechanisms; this improves the flexibility of creating an object in the way most suitable to the situation.

2. Structural Patterns
A design pattern that simplifies the way relationships between entities are realized; this improves code readability and makes the code more flexible to changes.

3. Behavioral Patterns
A design pattern that identifies communication patterns between objects; using such a pattern increases flexibility in carrying out this communication. (A small sketch of one behavioral pattern, the observer, is shown at the end of this post.)

A Different Kind of Pattern - MVT

Model-View-Template (MVT for short) is a design pattern that is used in Django to develop web applications. The difference from the patterns above is that the controller part is taken care of for us by the framework itself; MVT divides the application into different files, and these files each play the role of a part of the pattern. An example would be comparing the observer pattern with the Django component Signals: both identify the communication pattern between objects.

Conclusion

I recommend applying refactoring to your code because refactoring can improve the program's performance, maintainability, scalability and flexibility. Design patterns can be used to reduce code smells and improve code readability.
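To make the observer comparison mentioned above a bit more concrete, here is a minimal Python sketch of my own (loosely analogous to how Django Signals let receivers react to events); the Publisher class and the example callbacks are made up for illustration.

```python
class Publisher:
    """Subject that notifies registered observers, similar in spirit to a Django signal."""
    def __init__(self):
        self._observers = []

    def subscribe(self, callback):
        self._observers.append(callback)

    def notify(self, event):
        for callback in self._observers:
            callback(event)

# Usage: receivers register themselves and react when an event is published.
profile_saved = Publisher()
profile_saved.subscribe(lambda event: print(f"logging: {event}"))
profile_saved.subscribe(lambda event: print(f"sending email about: {event}"))
profile_saved.notify("user profile updated")
```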
https://medium.com/pilar-2020/more-depth-to-refactoring-and-using-design-patterns-e44e0b03d176
['Rayhan Muzakki']
2020-11-20 06:39:21.222000+00:00
['Refactoring', 'Design Patterns']
The Monster in ‘Bird Box’ is You
Bird Box (2018) memes have been flooding the internet ever since this Netflix Original film was released worldwide a few weeks ago. Case in point, I've seen these incredibly accurate GIFs on my Twitter feed just lately:

Taken from Elle.com

Along with this ingenious observation alluding to the similarities among popular thrillers Bird Box, A Quiet Place (2018), Hush (2016) and, uh, Mean Girls (2004):

Taken from News18.com

However, putting its light-hearted "meme-worthiness" aside, Bird Box became a hot topic because of its ambiguity, leading many people to make their own speculations about its true meaning. Perhaps the most equivocal aspect of the film is its antagonist — the 'mysterious force' that drives people to commit mass suicide. Since we never really get to see what it is, its nature is open to interpretation. I propose this entity can be regarded as a metaphor for our darkest selves; our own wicked inner voices, which we hear all the time but try our best not to heed.

There are those who might argue the monster should be viewed in a literal sense, so there's no hidden meaning behind it. Since many thriller and horror films are produced entirely for the sake of giving people a good scare, I would say that this argument is valid, if not 100% true. For instance, Screen Rant analysed the nature of the creature based on its tangible properties and concluded that it's demonic (in the supernatural sense) because of its unexplainable ability to detect people's weaknesses, its bizarre appearance in Gary's (Tom Hollander) drawings, and the ability of birds to detect it — which have strong religious and mythological ties. Screen Rant also cites the characters' own theories regarding what the mysterious being is, including Douglas's (John Malkovich) suggestion that it's a biological weapon and Charlie's (Lil Rel Howery) proposal that it's indeed demonic and signals the end of the world.

While all of these may be true, in any art form — be it film, literature, or painting — there's also value in looking beyond the exact meaning of things. Digging deeper exposes us to greater truths — whether intended by the artist or not — but truths nonetheless. Therefore, I put forth that while the evil being in Bird Box may be none other than classic mythical demons, it may simultaneously be symbolic of the darkest part of ourselves. Here are the reasons why:

Spoilers for the events of BIRD BOX follow below

Varying Effects on Victims

While the monster drives all its victims to commit suicide, the way it does so varies from person to person. Jessica (Sarah Paulson), Malorie's (Sandra Bullock) sister and the first fatality, killed herself after appearing to see something on the road that made her very afraid. Douglas's wife (Rebecca Pidgeon), who was about to save Malorie, apparently saw her dead mother before stepping into a burning car, very likely in an attempt to reunite with her.

Taken from Elle.com

When Olympia (Danielle Macdonald) was forced to look outside the window after giving birth, she seemed to be overcome with extreme despair.
However, she was strong enough to hand over her baby to Malorie before she threw herself to her death: Tom (Trevante Rhodes), like Olympia, was also strong enough to hold off the entity’s powers as he shot one of the mentally ill trespassers who ambushed him and his family before finally shooting himself in the head: Therefore, even though the victims all kill themselves in the end, it seems the factors motivating them to do so are different in type and in strength. Some, like Jessica, are convinced by fear; some, like Olympia, are compelled by remorse; and some, like Olympia and Tom, can resist the darkness briefly while many immediately succumb to its power. One way to explain all this is the fact every one of us has a dark side. According to an article in The Atlantic, psychologists David DeSteno and Piercarlo Valdesolo, in their work Out of Character: Surprising Truths about the Liar, Cheat, Sinner (and Saint) Lurking in All of Us, presented “human character as a grayscale continuum, not a black-and-white dichotomy of good and bad”. Just like us, the victims of Bird Box are made up of a spectrum between good and evil. Some lean more heavily towards evil, some have a greater affinity for good, and many lie on ambiguous points in between. This means that some victims have a greater dark side than others. This dark side can be characterised by a myriad of negative things, from grief to anger to virtually anything that serves as a motivator for the victim to commit suicide. Thus, when the victims come face to face with their dark side, embodied by the monster in the film, they succumb to it for varying reasons and at varying speeds.
https://medium.com/framerated/the-monster-in-bird-box-is-you-e39b82f0b251
['Gwen Towers']
2019-02-02 17:06:34.995000+00:00
['Film', 'Features', 'Mental Health', 'Culture', 'Movies']
Measuring Growth, Accepting Emotions and the Wrestle Between Self-Acceptance and Self-Improvement
Measuring Growth, Accepting Emotions and the Wrestle Between Self-Acceptance and Self-Improvement KTHT reflection prompts, here I come! Photo by Alexandra Fuller on Unsplash I’ve always approached 𝘋𝘪𝘢𝘯𝘢 𝘊.’s prompts as poetry prompts, because that’s the easiest way for me to get my thoughts and emotions out, especially when they feel like they’re stuck in little circles of rumination in my mind. This time, however, I’m going to try something different. I was inspired by others who contribute to these weekly prompts, who take that step to share these thoughts without hiding behind the rhyme (though let’s be real, I never actually use rhymes in my poems). It’s brave because instead of showing the most distilled form of your thoughts, imperfections chiselled away, a lot of people share the rawest form that came up for them. There’s value in showing up like that, in showing that the world shouldn’t be comprised of highlight reels, because we’re so often comparing our view of all our successes and our failures, to someone else’s publicized and polished successes. So, here goes! This week, we’re tackling three questions instead of five, and I’m doing them all in one go. Monday: How do you measure personal growth? How do you know when you’ve grown as a person? Photo by Tolga Ulkan on Unsplash Numbers I think a lot of people shy away from this because there can be an obvious downside to this. In my poetry, I’ve reflected on how in the past, grades, how the number on a scale took over my definition of my life. Back then, I had a “perfect” score that I needed to get, a perfect “weight” I had to be. That’s when numbers can make life miserable. When I say I use numbers to measure personal growth, it’s more about visualization and seeing patterns over time for things that don’t seem to be tangible. For example, I rate moods over time and rather than striving to have “the best mood”, I’m learning about the contexts, people, things that contribute to a foul mood versus a good one. I’m learning how these environmental factors and also how I respond to them accordingly can impact those moods over time. In visualizing that, I’m tapping into my strength in understanding the world through numbers. Writing and documentation Similar to numbers, writing is a complementary aspect of it. Encased in the words that I write at any time are the mannerisms, common phrases that I embody. Fifteen years ago, I didn’t talk or write like this. In fifteen years, I imagine my writing voice will also change. One tangible way that I’m hoping will be a regular reflection point is the Proust Questionnaire. The saddest thing is that I did complete a bunch when I first entered high school, but I have no idea where they’re saved now. I started completing the questionnaire again this year. Here’s hoping that in documenting it on Medium that it’ll be easy for me to revisit these same questions next year and see my growth. Even without a copy of the previous questionnaires filled out, simply writing and reflecting on these questions helped me see just how much I’ve grown. Peppered throughout the writing I was surprised to see myself truly and honestly write down thoughts like “I am perfectly happy now”, showing contentment and enjoyment in my life that I don’t ever recall ever having prior to this. In answering these questions, I was able to name certain things that really helped me grow — setting boundaries, understanding things from my perception (e.g., others calling me lazy vs. 
understanding that I have different values and am channelling my effort into different priorities than others are). In journalling for myself but also in writing for others, I see quite a similar set of messages come out. I value self-care now, I approach with curiosity rather than fight out of fear and scarcity. In those words, I find empowerment within myself.
https://medium.com/know-thyself-heal-thyself/measuring-growth-accepting-emotions-and-the-wrestle-between-self-acceptance-and-self-improvement-b216facee113
['Lucy The Eggcademic', 'She Her']
2020-12-21 07:43:32.375000+00:00
['Emotions', 'Growth', 'Lovethyself', 'Mental Health', 'Self']
Advanced Python: Metaprogramming
Advanced Python: Metaprogramming
Explaining what, why and how metaprogramming works in Python

Metaprogramming is a complex yet one of the most interesting topics in the Python programming language. Metaprogramming makes the Python programming language extremely powerful.

Metaclass is above classes

We have come across decorators. We often use decorators to enhance Python code and add extra functionality to existing functions/classes. These decorators are part of metaprogramming in Python. Code generators are also part of metaprogramming. We can introspect the functions, classes, and types of an object and modify them at runtime. IDEs use metaprogramming features to provide code analysis. This article will illustrate what metaprogramming is, where it can be used, and how it works in practice.

Expert-level Python developers who intend to implement frameworks and their own libraries in Python often use the metaprogramming features of Python. This is an advanced-level topic for Python developers and I recommend it to everyone who is using, or intends to use, the Python programming language. Metaprogramming can be troublesome if it is not understood properly, because unexpected side effects can be encountered. This article will explain it in an easy-to-understand manner. If you want to understand the Python programming language from beginner to advanced level then I highly recommend the article below:

Article Aim

This article will provide an overview of the following topics:

What is metaprogramming in Python?
How does metaprogramming work?
Where can we use metaprogramming?

Before I begin, I want to clarify that metaprogramming is a complicated programming topic and, unless the requirement can only be solved via metaprogramming, I would recommend that developers choose a different programming approach. Therefore, only use the metaprogramming concepts if you absolutely have to. Expert-level Python developers generally have a thorough understanding of the concept.

1. What Is Metaprogramming In Python?

Before I begin, let's remember that everything is an object in Python. A function, constant, variable, literally everything is an object. To elaborate, even a class is an object. As a result, we can treat a class as any other object and pass the class as a parameter, store it, and modify it at runtime. A class is an object that can be used to instantiate new objects. A class can be seen as a bucket that groups objects. It defines the protocols/rules for the objects that it creates.

A metaclass is above a class. It groups a set of classes together. We can have meta-information about classes within a metaclass. For instance, consider the code below:

def get_fin_tech_explained():
    class FinTechExplained:
        pass
    return FinTechExplained

print(get_fin_tech_explained())

This method returns the class FinTechExplained:

<class '__main__.get_fin_tech_explained.<locals>.FinTechExplained'>

Just as a class adds protocols/rules to an object, a metaclass adds protocols/rules to a class. Python uses a metaclass to create a class for you. The first key note to remember is that everything is an object in Python, including a class, and each class is created by a metaclass. A metaclass allows us to add special behavior to a class.

Before we move any further, let's understand what type() is. The constructor of the type class is called to retrieve the type of an object.
As an instance, I can do: def get_fin_tech_explained(): class FinTechExplained: pass return FinTechExplained print(type(get_fin_tech_explained)) print(type(get_fin_tech_explained())) The first print will print: <class 'function'> This is because get_fin_tech_explained belongs to the class function. The second print will print: <class 'type'> This is because the return of get_fin_tech_explained() belongs to the class type. The method returns a class, and the type of that class is type. The second keynote to remember is that each class is an instance of the type, type. The class type is a metaclass. This class is used to create other classes in the Python programming language. It defines the rules/protocols of the classes it creates. Now the constructor of type can be used to create classes that can then create instances of the class. Consider this class below: class FinTechExplained: def __init__(self, blog_name): self.blog_name = blog_name def get_blog_name(self): return f'Blog name is : {self.blog_name}' fin_tech = FinTechExplained('Metaprogramming') print(fin_tech.get_blog_name()) The class has a constructor that takes in a blog_name and a method that returns the name of the blog. As an instance, the print statement returns: Blog name is : Metaprogramming We can create a class using the type constructor. We are going to create the FinTechExplained class by using the type() constructor without declaring the class. def FinTechExplained_init__(self, blog_name): self.blog_name = blog_name fin_tech = type("FinTechExplained", (), {"__init__": FinTechExplained_init__, "get_blog_name": lambda self: f'Blog name is : {self.blog_name}'}) This is equivalent to us creating the FinTechExplained class ourselves. The three-argument form of type() takes in the following parameters: type(what, bases, dict) the first parameter is the name of the class the second parameter is the tuple of base classes the third parameter is a dictionary of keys (attributes) and values (implementations) As you can see, the dict parameter is where we passed in a dictionary with two keys where the keys are the method names and the values are the implementations of the methods. Once I run the following statements: fin_tech = FinTechExplained('Metaprogramming') print(fin_tech.get_blog_name()) I will get the following output: Blog name is : Metaprogramming This is also the reason why we have a __dict__ property for all of our objects. A metaclass allows us to add special behavior to a class. As an instance, a descriptor is a protocol in Python. It provides __get__ and __set__ methods. The descriptor protocol defines how attribute access is interpreted by the language. The methods of the descriptor class can be reused. Internally, when we get and set such a property, Python calls the descriptor's __get__ and __set__ methods. If those methods do not exist then Python falls back to the normal __getattr__ and __setattr__ attribute machinery. Therefore, if your class attribute is an object and that object has a descriptor then it implies that we want Python to use the __get__ and __set__ methods and we want it to follow the descriptor protocol. Descriptors, when combined with metaclasses, can achieve powerful behaviour. We can use __set_name__ on the descriptor classes to capture the surrounding class and property names. The key to remember is that functions are descriptors in Python. They can be added to the class at run time. Remember the method binding is performed on the attribute lookup as part of the descriptor protocol.
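To make the descriptor protocol more concrete, here is a minimal, self-contained sketch; the Positive and Account names are purely illustrative and are not part of the original article:

class Positive:
    # A data descriptor that only accepts positive values.
    def __set_name__(self, owner, name):
        # Called once, when the owning class is created; captures the attribute name.
        self.name = name
    def __get__(self, instance, owner):
        if instance is None:
            return self
        return instance.__dict__[self.name]
    def __set__(self, instance, value):
        if value <= 0:
            raise ValueError(f'{self.name} must be positive')
        instance.__dict__[self.name] = value

class Account:
    balance = Positive()  # attribute access is now routed through the descriptor

account = Account()
account.balance = 100   # goes through Positive.__set__
print(account.balance)  # goes through Positive.__get__ and prints 100

Because Positive defines __set__, it is a data descriptor, so Python consults it before the instance __dict__ on every attribute access, which is exactly the "attribute access is interpreted by the language" behaviour described above.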
If we want to decorate all of the methods of our class with a certain decorator then we can use a metaclass, as sketched below. 2. How do we do Metaprogramming? We can start by creating our own metaclass. The first step is to create a class that inherits from the type, type. The type will add additional behavior to the new metaclass. A class can only have one metaclass. class FinTechExplainedMeta(type): The snippet of code above shows that we have created a new metaclass called FinTechExplainedMeta. The key point to note is that a metaclass is defined by inheriting from type. When we create a class, we can set the metaclass attribute to a metaclass that is inherited from type. The metaclass will have a __new__ method that calls the type.__new__ method. class FinTechExplainedMeta(type): def __new__(cls, what, bases=None, dict=None): return type.__new__(cls, what, bases, dict) The metaclass essentially has access to the class name, its parents, and all of the attributes. We can now add validation rules in the Meta.__new__ method and validate all of the parameters. We can use the magic methods of the Python programming language to override the behaviour so that any class that uses our new class as its metaclass will need to conform to the rules. __new__() is another example of metaprogramming. It creates new class instances and, by nature, it is not bound to an instance of the class. This method is called before __init__() is called. We can override the super class's __new__(). The return of the __new__() method is the instance of the class. This is useful when we want to modify the creation of immutable data types such as tuple. Now I am going to modify the __new__ method of the FinTechExplainedMeta class and check that the class has a method 'get_blog'. class FinTechExplainedMeta(type): def __new__(cls, what, bases=None, dict=None): print(dict) if 'get_blog' in dict: print('Great you have get_blog') else: raise Exception('get_blog missing') return type.__new__(cls, what, bases, dict) Now I will create a new class called FinTechExplained (as above) and have the FinTechExplainedMeta as its metaclass. class FinTechExplained(metaclass=FinTechExplainedMeta): pass Notice how the metaclass attribute has to be set at the class level. As a result, when I run this code, even before instantiating the FinTechExplained object, it will break as soon as the module is loaded: C:\Users\farhadm\PycharmProjects\Learning\venv\Scripts\python.exe C:/Users/farhadm/PycharmProjects/Learning/PythonGeneral/meta_programming.py Traceback (most recent call last): File "C:/Users/farhadm/PycharmProjects/Learning/PythonGeneral/meta_programming.py", line 41, in <module> class FinTechExplained(metaclass=FinTechExplainedMeta): File "C:/Users/farhadm/PycharmProjects/Learning/PythonGeneral/meta_programming.py", line 38, in __new__ raise Exception('get_blog missing') Exception: get_blog missing Notice how the exception is thrown even before I have created an instance of the class. This is how we can enforce every class to have certain methods/attributes that our framework/library requires.
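As promised above, here is a minimal sketch of using a metaclass to decorate every method of a class; the log_calls decorator and the DecorateAllMeta name are my own illustrative choices, not from the original article:

import functools

def log_calls(func):
    # A simple decorator that prints the name of the method being called.
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        print(f'calling {func.__name__}')
        return func(*args, **kwargs)
    return wrapper

class DecorateAllMeta(type):
    def __new__(cls, what, bases=None, dict=None):
        new_dict = {}
        for key, val in dict.items():
            # Wrap plain methods; leave dunder entries such as __module__ untouched.
            if callable(val) and not key.startswith('__'):
                new_dict[key] = log_calls(val)
            else:
                new_dict[key] = val
        return type.__new__(cls, what, bases, new_dict)

class FinTechExplained(metaclass=DecorateAllMeta):
    def get_blog(self):
        return 'get_blog is there.'

print(FinTechExplained().get_blog())  # prints 'calling get_blog' and then 'get_blog is there.'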
If you want to learn more about the magic methods, I highly recommend reading this article: I can now create a new class FinTechExplained and have its metaclass as FinTechExplainedMeta and ensure FinTechExplained contains the get_blog method: class FinTechExplainedMeta(type): def __new__(cls, what, bases=None, dict=None): print(dict) if 'get_blog' in dict: print('Great you have get_blog') else: raise Exception('get_blog missing') return type.__new__(cls, what, bases, dict) class FinTechExplained(metaclass=FinTechExplainedMeta): def get_blog(self): return 'get_blog is there.' Now as soon as I load the module, it will print out: {'__module__': '__main__', '__qualname__': 'FinTechExplained', 'get_blog': <function FinTechExplained.get_blog at 0x00000024DE033730>} Great you have get_blog We can see the printed dict. This is because of the print(dict) statement in the FinTechExplainedMeta class. We can use metaclass’s __new__ method. It is called whenever a class is instantiated and it is called before the __init__ method. Metaclasses can be used to modify the class attributes. As an instance, I am now going to modify all of the methods so that they are all prefixed with ‘Farhad’. The functionality of Farhadget_blog will remain as the functionality of get_blog above. This is how we can add new methods to a class via metaclass: class FinTechExplainedMeta(type): def __new__(cls, what, bases=None, dict=None): print(dict) if 'get_blog' in dict: print('Great you have get_blog') else: raise Exception('get_blog missing') new_dict= {} for key, val in dict.items(): new_dict['Farhad'+key] = val return type.__new__(cls, what, bases, new_dict) class FinTechExplained(metaclass=FinTechExplainedMeta): def get_blog(self): return 'get_blog is there.' pass fintech = FinTechExplained() print(fintech.Farhadget_blog()) Even though Farhadget_blog() doesn’t explicitly exist in the FinTechExplained class but the metaclass added it for us. As a result, the following line is printed: fintech.Farhadget_blog() --> get_blog is there The expert level python developers generally have a thorough understanding of the metaclasses concept. Python offers a utility known as exec. We can use the exec* to execute any sequence of Python statements. We can also use eval() and compile() functions to execute Python code. We can also add meta hooks that are called before any other import is called. This is done by adding sys.meta_path and are called as part of sys.path processing. Projects use these features to dynamically add the code via string. 3. Uses Of Metaprogramming There are a number of uses of Metaprogramming, including: If we want to check that a class was defined correctly, we can use the metaclass. We can use metaclasses to raise errors during module imports. If we want every module of our framework to have methods with a particular signature or if we want to have our classes to have certain naming conventions and methods/attributes then we can use metaprogramming to achieve it. Code generators are also part of metaprogramming. We can introspect the functions, classes, types of an object and modify them at runtime. The IDEs use the metaprogramming feature to provide code analysis. Each time a base class is subclassed, we can run a specific code using metaclass. We can use a subclass to annotate or modify properties before the class is used. We can use a metaclass to validate classes. It can also be used to set the attributes of your class by using the class.__dict__ attribute. We can use a metaclass to validate classes. 
Metaclasses can be used to modify class attributes. We often use special methods and metaclasses to change the way Python objects work. We can even extend Python syntax by creating our own domain-specific language (DSL) using the abstract syntax tree (AST). ORM (object-relational mapping) classes traditionally use metaclasses. Developers who concentrate on writing frameworks often use metaclasses. FinTechExplained — Thank you for reading 4. Summary This article provided an overview of the following topics: What is Metaprogramming in Python? How does Metaprogramming work? Where can we use Metaprogramming? A function, constant, variable, literally everything is an object. To elaborate, even a class is an object. As a result, we can treat a class as any other object and pass the class as a parameter, store it, and modify it at runtime. To understand the concept of Metaprogramming, it's important to know that a metaclass is a type. The type type is itself a class, and it can define other classes; therefore it is a metaclass. This is an advanced level topic for Python developers and I recommend it to everyone who uses, or intends to use, the Python programming language. Metaclasses can be troublesome if they are not understood properly, because unexpected side effects can be encountered. Metaprogramming is a complex topic, yet one of the most interesting in the Python programming language. Metaprogramming makes the Python programming language powerful.
https://medium.com/fintechexplained/advanced-python-metaprogramming-980da1be0c7d
['Farhad Malik']
2020-07-01 00:01:03.660000+00:00
['Python', 'Fintech', 'Python3', 'Technology', 'Programming']
Set an Emotional Tone for Harmony in Your Home
She was twenty minutes into yelling and pounding on things in her room in anger. My 5-year-old did not want to eat one more bite of dinner to get dessert — she just wanted dessert! Things escalated quickly to the point I had to put her in her room and hold the door shut so she couldn’t break anything in other rooms of the house. Certain tips I’d received over the years were running through my mind. Things such as “when a child is acting the least lovable is when they need the most love” and “don’t escalate when they escalate” or “he who loses it, loses.” All solid advice. But it didn’t stop the anger I had from welling up inside me. I was angry things were getting like this…again. I had two choices, and luckily I chose to remain calm until emotions subsided rather than escalate in anger with her and say things to her I’d regret. I’ve noticed over the years, that the emotions I project as a parent are critical to keeping a home as happy as it can be. I learned that if I was agitated, it didn’t take long for the kids and even my wife to be agitated. If I was happy or my wife was happy, we were generally all happy. It’s proven that your emotional status does affect the emotional state of those around you. This, in turn, sets the emotional tone in your home. The following mindsets can help you set a tone of peace and harmony. Of course, none of these are magic bullets of contentment, but they can help make things the best they can be at that time in your family dynamics.
https://medium.com/publishous/set-an-emotional-tone-for-harmony-in-your-home-aae7db8158d2
['Max Klein']
2020-12-21 11:35:15.819000+00:00
['Relationships', 'Happiness', 'Psychology', 'Marriage', 'Parenting']
From DataFrame to Named-Entities
Data Science / Python NLP Snippets From DataFrame to Named-Entities A quick-start guide to extracting named-entities from a Pandas dataframe using spaCy. Photo by Wesley Tingey on Unsplash A long time ago in a galaxy far away, I was analyzing comments left by customers and I noticed that they seemed to mention specific companies much more than others. This gave me an idea. Maybe there is a way to extract the names of companies from the comments and I could quantify them and conduct further analysis. There is! Enter: named-entity-recognition. Named-Entity Recognition According to Wikipedia, named-entity recognition or NER “is a subtask of information extraction that seeks to locate and classify named entity mentioned in unstructured text into pre-defined categories such as person names, organizations, locations, medical codes, time expressions, quantities, monetary values, percentages, etc.”¹ In other words, NER attempts to extract words that categorized into proper names and even numerical entities. In this post, I’ll share the code that will let us extract named-entities from a Pandas dataframe using spaCy, an open-source library provides industrial-strength natural language processing in Python and is designed for production use.² To get started, let’s install spaCy with the following pip command: pip install -U spacy After that, let’s download the pre-trained model for English: python -m spacy download en With that out of the way, let’s open up a Jupyter notebook and get started! Imports Run the following code block into a cell to get all the necessary imports into our Python environment. # for manipulating dataframes import pandas as pd # for natural language processing: named entity recognition import spacy from collections import Counter import en_core_web_sm nlp = en_core_web_sm.load() # for visualizations %matplotlib inline The important line in this block is nlp = en_core_web_sm.load() because this is what we’ll be using later to extract the entities from the text. Getting the Data First, let’s get our data and load it into a dataframe. If you want to follow along, download the sample dataset here or create your own from the Trump Twitter Archive. df = pd.read_csv('ever_trump.csv') Running df.head() in a cell will get us acquainted with the data set quickly. Getting the Tokens Second, let’s create tokens that will serve as input for spaCy. In the line below, we create a variable tokens that contains all the words in the 'text' column of the df dataframe. tokens = nlp(''.join(str(df.text.tolist()))) Third, we’re going to extract entities. We can just extract the most common entities for now: items = [x.text for x in tokens.ents] Counter(items).most_common(20) Screenshot by Author Extracting Named-Entities Next, we’ll extract the entities based on their categories. We have a few to choose from people to events and even organizations. For a complete list of all that spaCy has to offer, check out their documentation on named-entities. Screenshot by Author To start, we’ll extract people (real and fictional) using the PERSON type. person_list = [] for ent in tokens.ents: if ent.label_ == 'PERSON': person_list.append(ent.text) person_counts = Counter(person_list).most_common(20) df_person = pd.DataFrame(person_counts, columns =['text', 'count']) In the code above, we started by making an empty list with person_list = [] . Then, we utilized a for-loop to loop through the entities found in tokens with tokens.ents . 
After that, we made a conditional that will append to the previously created list if the entity label is equal to PERSON type. We’ll want to know how many times a certain entity of PERSON type appears in the tokens so we did with person_counts = Counter(person_list).most_common(20) . This line will give us the top 20 most common entities for this type. Finally, we created the df_person dataframe to store the results and this is what we get: Screenshot by Author We’ll repeat the same pattern for the NORP type which recognizes nationalities, religious and political groups. norp_list = [] for ent in tokens.ents: if ent.label_ == 'NORP': norp_list.append(ent.text) norp_counts = Counter(norp_list).most_common(20) df_norp = pd.DataFrame(norp_counts, columns =['text', 'count']) And this is what we get: Screenshot by Author Bonus Round: Visualization Let’s create a horizontal bar graph of the df_norp dataframe. df_norp.plot.barh(x='text', y='count', title="Nationalities, Religious, and Political Groups", figsize=(10,8)).invert_yaxis() Screenshot by Author Voilà, that’s it!
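As one last example that circles back to the original motivation of finding company mentions, the same pattern should also work for the ORG label, which spaCy uses for companies, agencies, and institutions. This snippet simply repeats the approach shown above and is not part of the original walkthrough:

org_list = []
for ent in tokens.ents:
    if ent.label_ == 'ORG':
        org_list.append(ent.text)

org_counts = Counter(org_list).most_common(20)
df_org = pd.DataFrame(org_counts, columns=['text', 'count'])

# Same horizontal bar chart as before, this time for organizations.
df_org.plot.barh(x='text', y='count', title="Organizations", figsize=(10,8)).invert_yaxis()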
https://towardsdatascience.com/from-dataframe-to-named-entities-4cfaa7251fc0
['Ednalyn C. De Dios']
2020-05-25 07:05:45.272000+00:00
['Pandas', 'NLP', 'Spacy', 'Data Science', 'Python']
Tutorial — making a trading bot asynchronous using Python’s “unsync” library
The Python "unsync" library is a very easy way to create async code. This gives a practical example of how to use it on a simple trading bot. There are many ways to skin the async cat In Python there are many valid ways of parallelising your code, including: The older way — using the threading and multiprocessing libraries The newer way — using async and await from the asyncio library embedded into core Python from 3.7 onwards The easier way (I think) — using the @unsync decorator from the Python unsync library What is so great about 'unsync'? I've used all the above methods on different projects and they all work fine. But for me, unsync seems to be the easiest to use. If you have code that is synchronous today and you want to make it asynchronous quickly and easily, then try the unsync library. Simple example using unsync The current unsync documentation is on the GitHub repo. Install unsync Install the library using pip or conda: pip install unsync Decorate your synchronous functions You only need to do three things to take synchronous code and enable it to work in parallel. Import the decorator into your code — from unsync import unsync Add the @unsync decorator to a function to make it asynchronous. Note that when you now call that function, it will no longer block the thread until it finishes. Instead, that function call will start the coroutine running and will immediately return an unsync future object, which enables you to get at the return value later when it is finished. To get the return value of the function later, you need to call the .result() method on the unsync future object. This will wait for the coroutine to finish and will give you the final return value. A simple unsync code example This method is very quick and easy to implement, and significantly less mind-bending for me than adding asyncio or threading to this routine. There are other ways to use unsync which include defining async functions. Check out the readme file on the GitHub repo. A more complex example — parallelising a simple trading bot This is an example to show how easily unsync can be used on code which may have a number of levels of parallelisation. What does this trading bot do? This code represents a simple trading bot that is fired up at intervals, checks the latest market prices on lots of markets and then decides on trades. It uses APIs to fetch the market data and make trades. It uses a database to store history and the current state. Here is a diagram of the logic followed by each trading cycle: Logic flow for trading bot Trading bot — synchronous code Code for this trading bot example can be found on GitHub here. Firstly, here is our bot written using synchronous code. I've removed the API, database and logic calls to keep the codebase short for this example. Running our bot (synchronous mode) When we run this bot we can see from the output below that the markets are run in sequence and the thread is blocked until each market completes. You can also see that the API calls and database calls block the thread. The bot takes over 16 seconds to complete the cycle. 2020-06-07 18:04:31,569 - Starting up the trading bot. 2020-06-07 18:04:31,569 - AAPL - Starting the bot cycle. 2020-06-07 18:04:31,569 - AAPL - Started fetching market data. 2020-06-07 18:04:32,570 - AAPL - Finished fetching market data. 2020-06-07 18:04:32,571 - AAPL - Started calling database.
2020-06-07 18:04:33,076 - AAPL - Finished fetching database data. 2020-06-07 18:04:33,076 - AAPL - Trading logic decision => exit position. 2020-06-07 18:04:33,076 - AAPL - Posting exit position trade. 2020-06-07 18:04:38,081 - AAPL - exit position trade successful. 2020-06-07 18:04:38,081 - AAPL - Started updating database. 2020-06-07 18:04:38,786 - AAPL - Finished updating database data. 2020-06-07 18:04:38,786 - AAPL - Finished the bot cycle. 2020-06-07 18:04:38,786 - AMZN - Starting the bot cycle. 2020-06-07 18:04:38,786 - AMZN - Started fetching market data. 2020-06-07 18:04:39,792 - AMZN - Finished fetching market data. 2020-06-07 18:04:39,792 - AMZN - Started calling database. 2020-06-07 18:04:40,295 - AMZN - Finished fetching database data. 2020-06-07 18:04:40,295 - AMZN - Trading logic decision => None. 2020-06-07 18:04:40,295 - AMZN - No trade to post. 2020-06-07 18:04:40,296 - AMZN - Started updating database. 2020-06-07 18:04:40,999 - AMZN - Finished updating database data. 2020-06-07 18:04:40,999 - AMZN - Finished the bot cycle. 2020-06-07 18:04:40,999 - MSFT - Starting the bot cycle. 2020-06-07 18:04:40,999 - MSFT - Started fetching market data. 2020-06-07 18:04:42,003 - MSFT - Finished fetching market data. 2020-06-07 18:04:42,003 - MSFT - Started calling database. 2020-06-07 18:04:42,504 - MSFT - Finished fetching database data. 2020-06-07 18:04:42,504 - MSFT - Trading logic decision => exit position. 2020-06-07 18:04:42,504 - MSFT - Posting exit position trade. 2020-06-07 18:04:47,509 - MSFT - exit position trade successful. 2020-06-07 18:04:47,509 - MSFT - Started updating database. 2020-06-07 18:04:48,213 - MSFT - Finished updating database data. 2020-06-07 18:04:48,213 - MSFT - Finished the bot cycle. 2020-06-07 18:04:48,213 - Summary of trades: [('AAPL', 'exit position'), ('AMZN', None), ('MSFT', 'exit position')] 2020-06-07 18:04:48,213 - Finished everything. Took 16.644033193588257 seconds. We can make our bot better by using asynchronous logic in three areas We don’t want the trading bot to wait until one market stock is completely finished processing before moving onto the next one. We want to be able to run all the markets in parallel so that all market prices collected are at the same time. The API calls to fetch data and post trades are slow processes that will benefit from using async. The database calls and updates are much faster than the API but may give some gain from parallelisation. Using unsync to parallelise the trading bot Using just the @unsync decorators and the .result() methods, we can transform the bot code into an async version with only few small changes. The code changes from the original synchronous version are shown with an # ADDED comment. That’s it. Very simple and the code now runs fully async. Rerun the bot — now in async mode Rerunning the bot and examining the output below, we see that it does work correctly in parallel. We get a significant speed up both in parallelisation of the markets and the parallelisation of the API and database calls. The full run is now completed in under 7 seconds. 2020-06-07 18:02:22,658 - Starting up the trading bot. 2020-06-07 18:02:22,658 - AAPL - Starting the bot cycle. 2020-06-07 18:02:22,659 - AAPL - Started fetching market data. 2020-06-07 18:02:22,659 - AMZN - Starting the bot cycle. 2020-06-07 18:02:22,659 - AMZN - Started fetching market data. 2020-06-07 18:02:22,660 - AAPL - Started calling database. 2020-06-07 18:02:22,660 - MSFT - Starting the bot cycle. 
2020-06-07 18:02:22,660 - AMZN - Started calling database. 2020-06-07 18:02:22,661 - MSFT - Started fetching market data. 2020-06-07 18:02:22,661 - MSFT - Started calling database. 2020-06-07 18:02:23,164 - AMZN - Finished fetching database data. 2020-06-07 18:02:23,164 - MSFT - Finished fetching database data. 2020-06-07 18:02:23,164 - AAPL - Finished fetching database data. 2020-06-07 18:02:23,664 - AMZN - Finished fetching market data. 2020-06-07 18:02:23,664 - AAPL - Finished fetching market data. 2020-06-07 18:02:23,664 - AMZN - Trading logic decision => go short. 2020-06-07 18:02:23,664 - AAPL - Trading logic decision => None. 2020-06-07 18:02:23,665 - AAPL - No trade to post. 2020-06-07 18:02:23,665 - AMZN - Posting go short trade. 2020-06-07 18:02:23,665 - MSFT - Finished fetching market data. 2020-06-07 18:02:23,666 - MSFT - Trading logic decision => go short. 2020-06-07 18:02:23,666 - AAPL - Started updating database. 2020-06-07 18:02:23,667 - MSFT - Posting go short trade. 2020-06-07 18:02:24,370 - AAPL - Finished updating database data. 2020-06-07 18:02:24,370 - AAPL - Finished the bot cycle. 2020-06-07 18:02:28,668 - AMZN - go short trade successful. 2020-06-07 18:02:28,669 - AMZN - Started updating database. 2020-06-07 18:02:28,671 - MSFT - go short trade successful. 2020-06-07 18:02:28,672 - MSFT - Started updating database. 2020-06-07 18:02:29,374 - AMZN - Finished updating database data. 2020-06-07 18:02:29,374 - AMZN - Finished the bot cycle. 2020-06-07 18:02:29,376 - MSFT - Finished updating database data. 2020-06-07 18:02:29,377 - MSFT - Finished the bot cycle. 2020-06-07 18:02:29,377 - Summary of trades: [('AAPL', None), ('AMZN', 'go short'), ('MSFT', 'go short')] 2020-06-07 18:02:29,377 - Finished everything. Took 6.718775033950806 seconds. Alternative ways of creating async code We could have used other methods to make this trading bot asynchronous. 1. Using the threading library We can use the threading library’s Thread object, kick off the threads and wait for them to complete using the .join() method. Alternatively we could use the ThreadPoolExecutor instead (which is actually what unsync uses under the hood in the above example). 2. Using asyncio From Python 3.7+ we can use the asyncio library. We add async to the functions to make them asynchronous. We add await to the async function calls so that it waits for the coroutine to finish and deliver the final return value. We create a list of async call tasks where we want to use parallelisation and use something like asyncio.gather(*tasks) to kick them off and wait for them to finish. But these alternatives take a lot more time to implement Because I wrote synchronous code first, and then only later thought about parallelising, moving to async using the threading or asyncio libraries requires a lot more changes, debugging and coding time.
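The embedded code snippets from the original post are not reproduced in this text, so here is a minimal sketch of the three unsync steps described above. The function and symbol names are my own and only mimic the bot's slow API calls:

import time
from unsync import unsync

@unsync
def fetch_market_data(symbol):
    # Stand-in for a slow, blocking API call.
    time.sleep(1)
    return f'{symbol}: some market data'

# Calling the decorated function does not block; it immediately returns an unsync future.
futures = [fetch_market_data(symbol) for symbol in ['AAPL', 'AMZN', 'MSFT']]

# .result() waits for each task to finish and returns its value,
# so the three one-second calls complete in roughly one second overall.
results = [future.result() for future in futures]
print(results)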
https://mattgosden.medium.com/tutorial-using-pythons-unsync-library-to-make-an-asynchronous-trading-bot-9ee2ae881272
['Matt Gosden']
2020-06-14 17:24:12.854000+00:00
['Python', 'Trading Bot', 'Async', 'Tutorial', 'Threading']
Tips for Acing Your Next Coding Interview
Photo by freddie marriage on Unsplash Before the interview The interview is stressful enough with everything being on the spot — you want to put as much time in before the interview to prep your skills so that you know exactly what is expected of you prior to entering that room or call. This will make you more confident during your interview and allow you to focus on showing off your intelligence and people skills. Code, code, and code It’s important to know your data structures and basic algorithms like sorting and dynamic programming so be sure to refresh yourself on those topics especially if it’s been a while since you’ve taken those classes. The best way to prepare for the technical and minimize the chances of being unable to solve the problem is to practice all kinds of problems to hone your problem solving and familiarize yourself with the types of problems you may face. Sites like Leetode and Hackerrank are both great for this as they both have a large database filled with all sorts of questions that you can pick and choose from. Both sites let you filter problems by type (data structures, dynamic programming, databases, languages, etc) and label questions with easy, medium, or hard. Try doing 1–3 problems per day and look at other people’s solutions to see how others are approaching the problem and how you can improve your own logic. Checkout interviews on interviewing.io’s Youtube channel This is especially helpful for anyone who hasn’t done a technical interview before since you can get an idea of how technical interviews play out and how other people problem solve and communicate their thought process with the interviewer. interviewing.io has tons of interviews with engineers from the top tech companies and I definitely recommend giving some of them a watch to familiarize yourself with the dynamic and the expectations. Do a mock interview Again, practice is key. If you have friends who are also in the interview search, definitely suggest doing mock interviews with one another. Being able to problem solve and code solutions is important, but so is communication. Use this as an opportunity to practice verbalizing your thought process as you think of the solution and write your code. If you can’t find any friends, check out interviewing.io. They allow you to set up mock interviews with engineers in the industry who will go through a typical technical interview with you and offer you helpful feedback. Read Cracking the Coding Interview This book is a great resource to refresh yourself on specific data structures, algorithms, or types of problems. The book is organized into sections by topic, each with its own section of problems. What I love about this book is that each problem comes with a detailed solution that breaks down the problem, the thought process behind designing and writing the algorithm, and sometimes multiple solutions to one problem. They also talk about time complexity for all the problems, which is a very important concept to know for your interview. Look on Reddit and Glassdoor for others’ interview experience at that company Obviously, there is an infinite number of possible questions they could ask you during your interview. Fortunately, most companies tend to ask certain types of questions to their candidates. Look up “[company name] interview questions” or “[company name] interview Reddit” and you’ll probably find discussions and reviews from other past interviewers on their interview experience with that company. 
Glassdoor has people rate the difficulty of the problems and whether or not they got the offer as well as write a blurb describing the question or their experience. This way you can get some idea of the difficulty of questions they ask and what kind of questions they give. Come up with questions Interviewers often allocate at least 5 minutes at the end of the interview to ask them any questions about the company or their role. Definitely go into your interview with some questions prepared — this is a great opportunity to learn more about the company to see if it’s a good fit for you. I also often like to follow up by asking what a better or more optimal solution might’ve been or how I could’ve improved. Photo by Daoud Abismail on Unsplash During the Interview Reiterate the task The first thing you should do after the interviewer presents the problem to you is restating the problem in your own words to clarify that you understand it properly. Ask questions After you understand the problem, be sure to ask detailed questions about how the program should perform under edge cases or anything to clarify any uncertainties. This shows that you’re a critical thinker and that you’re able to consider various scenarios. Some good questions to ask are “can I assume this…?” or “how should the program perform under this case?”. Develop algorithm / solution Now that you understand the problem and the task ahead, you now need to formulate your algorithm. This does not mean that you should start coding. This is where you verbalize your thought process — consider different possible solutions to the problem and compare their time and space complexities. Talk through your ideas and explain them in detail with your interviewer so that they understand what your algorithm is. What if you have no idea how to solve the problem? That’s totally OK. Don’t panic! This is more common than you think — this is where you work with the interviewer to solve the problem. Most interviewers enjoy back and forth brainstorming and actually prefer the ideation process to be more of a discussion about how to solve the problem. If you’re struggling to get started, verbalize what it is you’re having trouble with and maybe some preliminary ideas even if they may not work. Your interviewer will hint and nudge you in the right direction. Run idea on an example To clarify exactly how you want your algorithm to work, pick an arbitrary example input, and run your idea on it and showcase how it would work. Code Now that you have your idea set to stone, it’s time to start converting your ideas into code. Don’t fall into the trap that many fall into, which is to go mute as you begin coding. You should still be speaking out loud to the interviewer describing what it is you’re doing and why — stating invariants along the way is a good way to show that you know exactly how your program is performing. Your coding ability is not the most important part of the interview. In fact, most companies don’t care if you mess up the syntax or don’t remember how to do something in a specific language because they care more about the thought processes. That being said, showing that you can easily convert ideas into code is a very important and impressive skill to have. Test Now that you’re done writing your program, it’s time to run some tests. Pick up some useful test cases: maybe a standard one and a couple of edge cases. 
Testing is a very important aspect of software engineering, and by initiating this process yourself you show that you have a good habit of testing your code.
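To make that concrete, suppose the interview question had been a maximum-subarray-sum problem (a hypothetical example of mine, not from this article); talking through one standard case and a few edge cases might look like this:

def find_max_subarray_sum(nums):
    # Kadane's algorithm: track the best subarray sum ending at each position.
    if not nums:
        return 0  # chosen convention for empty input; worth clarifying with the interviewer
    best = current = nums[0]
    for n in nums[1:]:
        current = max(n, current + n)
        best = max(best, current)
    return best

assert find_max_subarray_sum([1, -2, 3, 4]) == 7    # standard case
assert find_max_subarray_sum([-5, -1, -3]) == -1    # edge case: all negatives
assert find_max_subarray_sum([42]) == 42            # edge case: single element
assert find_max_subarray_sum([]) == 0               # edge case: empty input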
https://medium.com/swlh/tips-for-acing-your-next-coding-interview-11f8e115df36
[]
2020-12-19 12:16:09.783000+00:00
['Careers', 'Coding Interviews', 'Interview', 'Internships', 'Software Engineering']
Deep Learning Model Training Loop
Deep Learning Model Training Loop Implementing a simple neural network training loop with Python, PyTorch, and TorchVision. Several months ago I started exploring PyTorch — a fantastic and easy to use Deep Learning framework. In the previous post, I was describing how to implement a simple recommendation system using MovieLens dataset. This time I would like to focus on the topic essential to any Machine Learning pipeline — a training loop. The PyTorch framework provides you with all the fundamental tools to build a machine learning model. It gives you CUDA-driven tensor computations, optimizers, neural networks layers, and so on. However, to train a model, you need to assemble all these things into a data processing pipeline. Recently the developers released the 1.0 version of PyTorch, and there are already a lot of great solutions helping you to train the model without a need to dig into basic operations with tensors and layers. (Briefly discussed in the next section). Nevertheless, I believe that every once in a while most of the software engineers have a strong desire to implement things “from scratch” to get a better understanding of underlying processes and to get skills that do not depend on a particular implementation or high-level library. In the next sections, I am going to show how one can implement a simple but useful training loop using torch and torchvision Python packages. TL;DR: Please follow this link to get right into the repository where you can find the source code discussed in this post. Also, here is a link to the notebook that contains the whole implementation in a single place, as well as additional information not included in the post to make it concise. Out-of-the-Box Solutions As it was noted, there are some high-level wrappers built on top of the framework that simplify the model training process a lot. In the order of the increasing complexity, from minimalistic to very involved: Ignite — an official high-level interface for PyTorch Torchsample — a Keras-like wrapper with callbacks, augmentation, and handy utils Skorch — a scikit-learn compatible neural network library fastai — a powerful end-to-end solution to train Deep Learning models of various complexity with high accuracy and computation speed The main benefit of high-level libraries is that instead of writing custom utils and wrappers to read and prepare the data, one can focus on the data exploration process itself — no need to find bugs in the code, hard-working maintainers improving the library and ready to help if you have issues. No need to implement custom data augmentation tools or training parameters scheduling, everything is already here. Using a well-maintained library is a no-doubt choice if you’re developing a production-ready code, or participating in a data science competition and need to search for the best model, and not sitting with a debugger trying to figure out where this memory error comes. The same is true if you’re learning new topics and would like to get some working solution faster instead of spending many days (or weeks) coding ResNets layers and writing your SGD optimizer. However, if you’re like me then one day you’ll like to test your knowledge and build something with fewer layers of abstraction. If so, let’s proceed to the next section and start reinventing the wheel! The Core Implementation The very basic implementation of the training loop is not that difficult. 
The pytorch package already includes convenience classes that allow instantiating dataset accessors and iterators. So in essence, we need to do something shown in the snippet below. We could stop our discussion on this section and save some time. However, usually, we need something more than simple loss computation and updating model weights. First of all, we would like to track progress using various performance metrics. Second, the initially set optimizer parameters should be tuned during the training process to improve convergence. A straightforward approach would be to modify the loop’s code to include all these additional features. The only problem is that as time goes, we could lose the clarity of our implementation by adding more and more tricks, introduce regression bugs, and end up with spaghetti code. How can we find a tradeoff between simplicity and maintainability of the code and the efficiency of the training process? Bells and Whistles The answer is to use software design patterns. The observer is a well-known design pattern in object-oriented languages. It allows decoupling a sophisticated system into more maintainable fragments. We don’t try to encapsulate all possible features into a single class or function, but delegate calls to subordinate modules. Each module is responsible for reacting onto received notification properly. It can also ignore the notification in case if the message intended for someone else. The pattern is known under different names that reflect various features of an implementation: observer, event/signal dispatcher, callback. In our case, we go with callbacks, the approach represented in Keras and (especially) fastai libraries. The solution taken by authors of ignite package is a bit different, but in essence, it boils down to the same idea. Take a look at the picture below. It shows a schematical organization of our improved training loop. Each colored section is a sequence of method calls delegated to the group of callbacks. Each callback has methods like epoch_started , batch_started , and so on, and usually implements only a few of them. For example, consider loss metric computation callback. It doesn’t care about methods running before backward propagation, but as soon as batch_ended notification is received, it computes a batch loss. The next snippet shows Python implementation of that idea. That’s all, isn’t much more sophisticated than the original version, right? It is still clean and concise yet much more functional. Now the complexity of training algorithm is entirely determined with delegated calls. Callbacks Examples There are a lot of useful callbacks (see keras.io and docs.fast.ai for inspiration) we could implement. To keep the post concise, we’re going to describe only a couple of them and move the rest few into a Jupyter notebook. Loss The very first thing that comes into mind when talking about Machine Learning model training is a loss function. We use it to guide the optimization process and would like to see how it changes during the training. So let’s implement a callback that would track this metric for us. At the end of every batch, we’re computing a running loss. The computation could seem a bit involved, but the primary purpose is to smooth the loss curve which would be bumpy otherwise. The formula a*x + (1 — a)*y is a linear interpolation between old and new values. Geometric interpretation of linear interpolation between vectors A and B A denominator helps us to account a bias we have at the beginning of computations. 
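The snippets embedded in the original post are not included in this text, so here is a minimal sketch of such a rolling-loss callback, assuming the callback interface described above; the exact method names and the phase object are my assumptions, not the author's code:

class RollingLoss:
    def __init__(self, smooth=0.98):
        self.smooth = smooth

    def training_started(self, **kwargs):
        self.avg_loss = 0.0
        self.count = 0

    def batch_ended(self, phase, loss, **kwargs):
        # Linear interpolation a*old + (1 - a)*new between the running value and the batch loss.
        self.count += 1
        self.avg_loss = self.smooth * self.avg_loss + (1 - self.smooth) * loss
        # Dividing by (1 - smooth**count) corrects the bias towards zero
        # that the running average has during the first few batches.
        phase.rolling_loss = self.avg_loss / (1 - self.smooth ** self.count)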
Check this post that describes the smoothed loss computation formula in detail. Accuracy The accuracy metric is probably one of the best-known metrics in machine learning. Though it can’t give you a good estimation of your model’s quality in many cases, it is very intuitive, simple to understand and implement. Note that the callback receives notifications at the end of each batch, and the end of training epoch. It computes the accuracy metric iteratively because otherwise, we would need to keep outputs and targets in memory during the whole training epoch. Due to this iterative nature of our computations, we need to account a number of samples in batch. We use this value to adjust our computations at the end of the epoch. Effectively, we’re using the formula the picture below shows. Where b(i) is a batch size on iteration i, a(i) — accuracy computed on batch b(i), N — total number of samples. As the last formula shows, our code computes a sample mean of accuracy. Check these useful references to read more about iterative metrics computations: Metrics as callbacks from fastai Accuracy metric from the ignite package Parameter Scheduler Now the most interesting stuff comes. Modern neural network training algorithms don’t use fixed learning rates. The recent papers (one, two, and three) shows an educated approach to tune Deep Learning models training parameters. The idea is to use cyclic schedulers that adjust model’s optimizer parameters magnitudes during single or several training epochs. Moreover, these schedulers not only decrease learning rates as a number of processed batches grows but also increase them for some number of steps or periodically. For example, consider the following function which is a scaled and shifted cosine: Half-period of shifted and scaled cosine function If we repeat this function several times doubling its period, we’ll get a cosine annealing scheduler as the next picture shows. Cosine annealing with restarts scheduler Multiplying the optimizer’s learning rate by the values of this function, we are effectively getting a stochastic gradient with warm restarts that allows us to escape from local minima. The following snippet shows how one can implement a cosine annealing learning rate. There is an even more exciting scheduler though called One-Cycle Policy. The idea of this schedule is to use a single cycle of learning rate increasing-decreasing during the whole training process as the following picture shows. One-cycle policy scheduler At the very beginning of the training process, the model weights are not optimal, yet so we can allow yourself use larger update steps (i.e., higher learning rates) without risk to miss optimal values. After a few training epochs, the weights become better and better tailored to our dataset, so we’re slowing down the learning pace and exploring the learning surface more carefully. The One-Cycle Policy has a quite straightforward implementation if we use the previously shown class. We only need to add a linear segment that goes before cosine decay, as the lines 27-30 show. The final step is to wrap schedulers with a callback interface. An example of implementation is not shown here to make this post concise and easy to read. However, you can find a fully functional code in the aforementioned Jupyter notebook. Stream Logger The last thing we would like to add is some logging to see how well our model performs during the training process. The most simplistic approach is to print stats into the standard output stream. 
However, you could save it into CSV file or even send as a notification to your mobile phone instead. OK, finally, we’re ready to start using our training loop! Your Favorite Dataset Now when the callbacks are ready, it is time to show how our training loop works. For this purpose, let’s pick the ubiquitous MNIST dataset. You can easily train it even on CPU within a few minutes. The dataset is very simple for modern Deep Learning architectures and algorithms. Therefore, we can use a relatively shallow architecture, with a few convolution and linear layers. We don’t use a transfer learning here but you definitely should when working on your daily tasks. It makes your network to converge much faster compared to the training from scratch. Next, we use torchvision package to simplify dataset loading and iterating. Also, we apply a couple of augmentation methods to improve the quality of the model. Then, we build a callbacks group that adds a bunch of features to our basic training loop. Finally, we make a couple of small preparations and call training function to optimize the model. You should get an output similar to the output shown below. Epoch: 1 | train_loss=0.8907, train_accuracy=0.6387, valid_loss=0.1027, valid_accuracy=0.9695 Epoch: 2 | train_loss=0.4990, train_accuracy=0.8822, valid_loss=0.0828, valid_accuracy=0.9794 Epoch: 3 | train_loss=0.3639, train_accuracy=0.9086, valid_loss=0.0723, valid_accuracy=0.9823 Note that the code shown above includes make_phases() function that is not shown here. Please refer the notebook to see its implementation. In essence, it wraps data loaders with thin structures helping to track performance metrics during model’s training. Conclusion An ultimate goal of a Deep Learning engineer is to build a robust and accurate solution for a specific dataset and task. The best way to achieve the goal is to use proven tools and well-maintained frameworks and libraries tested in many use cases by users throughout the world. However, if you would like to be versed in Data Science and eventually build your custom solutions, you probably “should understand backprop”. Knowing your tools well gives you the capability to tailor them for your specific needs, add new functionality and learn new instruments faster. I believe that keeping yourself in a balance between using proven APIs and understanding “low-level” details makes you a better engineer who can easily transfer obtained knowledge to new platforms, languages, and interfaces.
https://towardsdatascience.com/deep-learning-model-training-loop-e41055a24b73
['Ilia Zaitsev']
2018-12-28 16:45:14.525000+00:00
['Deep Learning', 'Machine Learning', 'Data Science', 'Python', 'Pytorch']
Poetry Articles on The POM
POETRY ARTICLES Poetry Articles on The POM The POM writers offer tips, tales, and analysis worth reading Image by StockSnap from Pixabay Welcome, dear poets and lovers of poetry. If you are here after a redirect from our The POM Newsletter, thank you for the click-through. You'll find some great articles in our poetry publication and we don't want you to miss these great posts. Without further ado, here's a selection of poetry articles you'll want to read and bookmark! Samantha Lazar Jay Sizemore Ria Ghosh MDSHall Melissa Coffey Ravyne Hawke Christina M. Ward Thanks for checking these out. If you want to follow the Newsletter for The POM, you can find previous newsletters here and sign up to receive more. Christina M. Ward is the EIC for The POM with the smart and capable brilliance of Samantha Lazar as her co-editor and ultimate backup. The pub boasts hundreds of writers and almost 1K loyal readers. If you LOVE poetry and want to try your hand at writing it — come join us! In poetic love, Christina
https://medium.com/the-pom/poetry-articles-on-the-pom-39888094472f
['Christina M. Ward']
2020-11-16 00:56:16.887000+00:00
['The Pom', 'Articles', 'Poetry', 'Writing', 'Newsletter']
Celery: A few gotchas explained
Have you ever heard of the continuum of theory-before-practice VS. practice-before-theory? Probably not, since I created the name just now 😏. But, though the name is new, the continuum is old. The question is simple: should I first study, study, study the documentation and then only after I presumably fully understand the library and its logic start using it in my code, or should I first dive into it, use it and abuse it before going back and reading the documentation of it. We all float in the continuum, none of us is stationary. Life-events nudge us to the left and to the right and sometimes fiercely sling us into one of the extremes as if we were pink-pong balls. Often we only want to study as much as is absolutely needed, because we equate Practice with joy and Theory with tediousness. And we are right to a degree: how much of a foreign language can you remember if you don’t use it regularly. But then, sometimes, it turns out that we badly underestimate how much theory is “absolutely needed”. And we have to go back, just like I had to go back to figure out Celery. My strategy of broadly getting it was only broadly enough. Now I had to go back and read all the theory. Celery is actually full of gotcha-s. Partly because we are dealing with processes, concurrencies, threads, .. and most of the time such details are abstracted away and a developer doesn’t need to think about them and thus has little experience with them. And partly because Celery uses funky names like “workers” and “brokers” and “CPU bound” and also because it does here and there throw an eccentric curveball. At least a semi-deep documentation-reading is definitely required. So, here it is, all kinds of basic and advanced concepts around Celery. Workers and brokers First, let me explain some basic concepts, under which Celery operates. Celery is a “Task Queue”. Yeah, this is an actual term, not just a description of what it is: a queue of tasks that will eventually be executed. So, Celery is essentially a program that keeps track of tasks that need to be run and keeps a group of workers, which will execute the tasks. Its main points are that it can execute several tasks in parallel and that it is not blocking the independent applications(= Producers), which are giving it tasks. But, Celery doesn’t actually store the queue of tasks in its memory. It needs something else to store the tasks, it needs a Message Broker (or Broker for short), which is a fancy term for a program that can store and handle a queue 🙃. These are usually either Redis or RabbitMQ. So, Celery understands and controls the queue, but the queue is stored inside Redis/RabbitMQ. On to the workers ... When you start Celery ( celery -A tasks worker ) 1 worker is created. This worker is actually a supervisor process that will spawn child-processes or threads which will execute the tasks. By default, the worker will create child-processes, not threads (but you can switch to threads), and it will create as many concurrent child-processes as there are CPUs on the machine. The supervisor process will keep tabs on what is going on with the tasks and the processes/threads, but it will not run the tasks itself. This group of child-processes or threads, which is waiting for tasks to execute, is called an execution pool or a thread pool. Queues Yes, I deliberately used the plural for queues, because there is more than one type of queue 🧙🏽‍⚗️. 
First, there is the main queue, which accepts tasks from the producers as they come in and passes them on to workers as the workers ask for them. By default, you have only 1 such queue. All workers take tasks from the same queue. But you can also specify a few such queues and limit specific workers to specific queues. The default queue is called celery . To see the first 100 tasks in the queue in Redis, run: redis-cli lrange celery 0 100 These queues are more or less, but not precisely, FIFO (if the priority of all tasks is the same). The tasks that are put into the queue first, get taken out of the queue first, BUT they are not necessarily executed first. When workers fetch new tasks from the queue, they usually (and by default) do not fetch only as many tasks as they have processes, they fetch more. By default, they fetch 4 times as many tasks as they have processes. They do this because it saves them time. Communicating with the broker takes some time and if the tasks that need to be run are quick to execute, then the workers will ask for more tasks again and again and again in very quick succession. To avoid this, they ask for X-times as many tasks as they have processes (= worker_prefetch_multiplier ). But there are tasks that never make it into the queue and still get executed by the workers. How is that possible, you ask me? I was asking myself and Google the very same question. And let me tell you, Google had very little to say about it. I found just scraps of information. But taking Celery and Redis apart for a few hours (or was it days??), here is what I discovered. Tasks with an ETA are never put into the main queue. They are put directly into the half-queue-half-list of "unacknowledged tasks", which they named unacked . And I do agree that "unacknowledged" is a very long word with a good amount of silenced letters sprinkled in, but it is very easy to miss something named unacked when you are trying to understand how some tasks have just disappeared. So, a note for next time I or you need to name something: all user-facing names should be spelt out completely. So what are ETA tasks? They are scheduled tasks. ETA stands for "estimated time of arrival". All tasks that have ETA or Countdown specified (i.e. my_task.apply_async((1, 2), countdown=3) , my_task.apply_async((1, 2), eta=tomorrow_datetime) ) are kept in this other type of queue-list. This also includes all task retries, because when a task is retried, it is retried after a specific number of seconds, which means it has an ETA. To see which tasks are in the ETA-queue in Redis, run: redis-cli HGETALL unacked You will get a list of keys and their values alternating, like this: 1) "46165d9f-cf45-4a75-ace1-44443337e000" 2) "[{\"body\": \"W1swXSwge30sIHsiY2FsbGJhY2tzIjogbnVsbCwgImVycmJhY2tzIj\", \"content-encoding\": \"utf-8\", \"content-type\": \"application/json\", \"priority\": 0, \"body_encoding\": ... 3) "d91e8c77-25c0-497f-9969-0ccce000c6667" 4) "[{\"body\": \"W1s0XSwge30sIHsiY2FsbGJhY2tzIjogbnVsbCwgI\", \"content-encoding\": \"utf-8\", ... ... Tasks Tasks are sometimes also called messages. At its core, the message broker is just something that passes messages from one system to another. In our case, the message is a description of the task: the task name (a unique identifier), the input parameters, the ETA, the number of retries, … . In Celery the task is actually a class. So every time you decorate a function to make it a task, a class is created in the background.
This means that each task has a self , onto which a lot of things are appended (e.g. name , request , status , priority , retries , and more). Sometimes we need access to these properties. In those cases we use bind=True : @shared_task(bind=True,...) def _send_one_email(self, email_type, user_id): ... num_of_retries = self.request.retries ... Task acknowledgement Previously we said that when workers are free, they go and fetch some more tasks from the broker. But it is a bit more nuanced. When a worker "takes" a task, the task is moved from the main queue to the unacked queue-list. The task is completely removed from the broker only once the worker acknowledges it. This means that when the worker "prefetches" a number of tasks, what really happens is that those tasks are only marked as reserved for that worker. They are put into the unacked queue, so other workers won't take them. If the worker dies, then those tasks are made available to other workers. So, when does a worker acknowledge a task? By default, Celery assumes that it is dangerous to run tasks more than once; consequently, it acknowledges tasks just before they are executed. You can change this by setting the famous acks_late option. In this case, a task has the slight possibility of being run more than once, if the worker running it dies in the middle of the execution. And with "dies", I literally mean die. A Python exception in the task code will not kill the worker. Such a task will still be acknowledged, but its state will be set to FAILURE . Something has to happen so that the worker never reaches the code self.acknowledge() . And this is rare. For this reason, I suspect that setting acks_late or not setting it has little bearing. ETA As I already mentioned, ETA tasks are … hard to find. They never make it to the main queue. They are immediately assigned to a worker and put into the unacked queue. I suspect that it was not intentional that the ETA tasks immediately get assigned to a specific worker. I suspect this was just a consequence of the existing code. An ETA task can't go into a general queue, which works almost as a FIFO. The only other place is among the unacknowledged tasks, in which case it needs to be reserved by one worker. Interestingly, the ETA time is not the exact time this task will run. Instead, this is the earliest time this task will run. Once the ETA time comes around, the task must wait for the worker to be free. Retry Tasks Celery doesn't perform any retry logic by default. Mostly because it assumes that tasks are not idempotent, that they are not safe to run more than once. Retrying a task does, however, have full support in Celery, but it has to be set up explicitly and separately for every task. One way of triggering a retry is by calling self.retry() in a task. What happens after this is triggered? An ETA time is calculated, some new metadata is put together and then the task is sent to the broker, where it falls into the unacked queue and is assigned to the same worker that already ran this task. This is how retry-tasks become ETA tasks and are therefore never seen in the main broker queue. It is a very sleek, but unexpected system. And again, Google has very little to say about this. Learn more about retries in the Celery task retry guide. CPU bound or I/O bound and processes vs threads As we already said, by default Celery executes tasks in separate processes, not threads. But you can make it switch to threads, by starting the workers with either --pool eventlet or --pool gevent .
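For reference, a hedged sketch of where the knobs just mentioned live, continuing the hypothetical tasks module from above; the setting names are real Celery settings, but the values and module name are only examples:

# app-level settings (celeryconfig style)
app.conf.worker_prefetch_multiplier = 4   # how many tasks each pool slot prefetches (4 is the default)
app.conf.task_acks_late = True            # acknowledge after the task has run instead of just before

# process pool (the default): one child process per CPU unless --concurrency says otherwise
#   celery -A tasks worker --concurrency=8
# greenlet pools instead of processes:
#   celery -A tasks worker --pool gevent --concurrency=100
#   celery -A tasks worker --pool eventlet --concurrency=100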
Both eventlet and gevent actually create greenlets, not threads. Greenlets (or green threads) are thread-like, but they are not threads, because by definition threads are controlled by the OS. Greenlets do not rely on the OS to provide thread support; instead, they emulate multithreading. They are managed in application space and not in OS space. There is no pre-emptive switching between the threads at any given moment. Instead, the greenlets voluntarily or explicitly give up control to one another at specified points in your code. If your tasks are heavy on CPU usage, i.e. they do a lot of computing (= are CPU bound), then you should keep using processes. If, on the other hand, your tasks are mostly doing HTTP requests (= are I/O bound), then you can profit from using threads. The reason for this is that while your task is waiting for the HTTP request to return a result, it is not doing anything; it is not using the CPU and would thus not mind if another thread made use of it. There is a lot more to Celery, and the documentation is not perfect. Many features have their description split up and dotted around the web page. It is difficult to find details of the implementation. But it is also a complicated subject matter. I don't know how Celery will behave outside of the few scenarios I have literally created and experimented on. Sure, after a few years of intensive work I might have a good understanding of how it works, but Celery lives on the fringes of my day-to-day. I set it up, but then it disappears into async-land. It behaves radically differently on the server and on my computer. I can see which tasks were done, but I can't see how well they were done. Transparency is very difficult with something that runs in parallel, possibly in threads, and semi-independent of the application. I don't trust it: I don't trust that I understand its settings correctly, or that I know how to set them correctly. Celery is like a spirit: it comes and goes, sometimes it breaks, but most of the time it just works. Hopefully it works on the tasks we assigned it, but if that is not the case, it will be equally silent.
https://ines-panker.medium.com/celery-a-few-gotchas-explained-5c500efa05a9
['Ines Panker']
2020-12-06 17:54:09.881000+00:00
['Celery', 'Python', 'Django']
How to NaNoWriMo
The author writing at the airport Decide on November 7 that you’re going to do NaNoWriMo after seeing it on LiveJournal. That seems fun, you think. Write about yourself, because you’re 21 and that’s what you know best. Finish by November 30. Don’t do NaNoWriMo for more than a decade. Not because you don’t like writing. Not because you don’t have ideas. But because you don’t think that you’re a writer. Experience a typical identity crisis about everything — who you want to be, where you want to go, where you have been. Read the lines below your photo in your fifth grade yearbook — “In twenty years, I would like to be writing stories.” Ignore the second line. Tell yourself to stop ignoring your ten-year-old self. At a book signing, you tell an author, “I am trying to be a writer.” She interrupts you and touches your arm. “You are a writer.” Take that as a sign. Devote yourself to learning about writing. Beyond the posts that you write on social media or your blog. Join a writing group. Take a class from nearly every school in San Francisco. Learn about writing workshops. Learn that there’s no magic behind it. It’s all about sitting down and writing. Think about the stories you have always wanted to write. About family. About childhood bullying. About bad choices. About social media. About being Asian American. About being not understood. About all the girls seeking asylum. About all the scenes you would play in your head watching passengers at the airport. About dreams. Remember NaNoWriMo. Look it up and find out how it’s evolved. Sit down and write. Join the website. Join the write-ins. Show up. Be awkward and shy. But keep showing up. Try to make friends in the forums. Fail, of course, most of the time. Squeeze writing into all moments. Your day job swallows up your time, and your mental space isn’t as open as it used to be in your early twenties. Care about quantity, not quality. Write on the train. Write at the airport. Write at lunch in between gulps of soup. Write in the window of a hat shop. Write on your phone. Write in the thirty minutes before midnight. Write as drowsiness overtakes your body. Write through the pain of not knowing what to write. Write because you want to reach your word count. Write because you always achieve your goals, even if nobody knew about them. Write constantly every day in November, but almost never other days of the year. Impulsively buy a ticket to the Night of Writing Dangerously, an event where writers come together to write together in a ballroom for multiple hours. Know nobody there. Post a forum post in your home region that you’re there with your stuffed dinosaur. Go in with no expectations, except to eat candy and write. Dress up. Drag a keyboard, mouse, and laptop stand. Chat with a few people. Clap when writers reach 50,000 words during the event. Indulge in dinner and dessert. Volunteer one year, pulling power strips across the ballroom. Convince your partner to do NaNoWriMo, only because a paired ticket is cheaper. Learn that you’re one of the few writers who isn’t writing genre. Bid frivolously in the auction. Realize that it’s less about the event and more about being with a community, writing together. Community matters. But you’re still sad when the Night of Writing Dangerously ends. Writing Dangerously Give yourself permission to dedicate your time in November to writing. Know that you spend the rest of the year not writing, so this is the perfect excuse. Don’t believe in the daily writing habit. Break all rules.
When November 30 arrives, always finish just slightly over 50,000 words. You have never lost. You have always won. But you know this is only the beginning. Generating words is easy. In between Novembers, workshop a novel constructed from two NaNos. Workshop the novel through a writing group. Pull out short stories from a NaNo. Work with a developmental editor. Revise a chapter to submit to writing applications. Change the point of view to the third person. Never touch it for the next year. Think about what you would do for the next NaNo while never writing a single word for an outline. Get inspired by the words you read — advice columns, novels, short stories, experimental prose. Write the ideas down in your digital notebook. Tell yourself to plan for next year. Never do. Get asked what you did over Thanksgiving break. Rarely ever say that you write. Because it has become part of your routine and your habits.
https://medium.com/nanowrimo/how-to-nanowrimo-38770a52f08a
['Jennifer Ng']
2020-11-29 17:48:25.394000+00:00
['Nonprofit', 'NaNoWriMo', 'Fiction', 'Writing']
Build Slack apps in a flash
Build Slack apps in a flash Introducing the newest member of the Bolt family Illustration and design by Casey Labatt Simon Last April we released Bolt, a JavaScript framework that offers a standardized, high-level interface to simplify and speed up the development of Slack apps. Since then, we’ve seen a remarkable community of developers build with and contribute to Bolt, signaling an appetite for frameworks in other programming languages. Since its initial release in JavaScript, Bolt is also available in Java — and today, in Python. Interested in seeing our latest addition in action? We’re hosting a webinar about building with Bolt for Python later this week. Designing Bolt for simple, custom building In the months leading up to the release of Bolt for JavaScript, our small development team held weekly white-boarding sessions (developing a recursive middleware processor was not as easy as we expected). We pushed hundreds of commits, took countless coffee breaks, and followed the guidance of JavaScript community principles. Developing Bolt for Java and Python, we knew we needed to customize each framework to best fit its unique language community. As we trekked, we made small modifications to the different frameworks — in Java we modified how we pass in listener arguments, and in Python we adapted Bolt to work with existing web frameworks, like Flask. Our specialized approach was complementary to Bolt’s core design principles. A common listener pattern A common listener pattern simplifies building with all the different platform features Bolt is built around a set of listener methods. These are used to listen and interact with different events coming from Slack. For example, Events API events use the events() listener, and shortcut invocations use the shortcut() listener. All listeners use common parameters that allow you to define unique identifiers, add middleware, and access the body of incoming events. A handful of built-in defaults Built-in OAuth support makes multi-team installation faster and more intuitive Bolt includes a collection of defaults that perform the heavier lifting of building Slack apps. One of these is a design pattern called receivers, or adapters in Python. These separate the concerns of your app’s server and the Bolt framework, so updates to server logic don’t require framework updates, and vice versa. Your app has access to a built-in Web API client that includes features such as retry logic, rate limit handling, and pagination support as you make calls to any of our 130+ methods. It offers a simple way to call Web API methods without having to think of all of the possible edge cases. And lastly, Bolt offers OAuth support which handles the “Add to Slack” flow, making token storage and access for multi-team installations simpler. Helper functions and objects The say() helper is available in all listeners that have a conversation context To complete common tasks, Bolt includes a set of helper functions. For example, in any listener with an associated conversation context, there will be a say() function that lets your app send a message back into that channel. And for events that need to be acknowledged within 3 seconds, Bolt surfaces an ack() function that streamlines the act of responding. Bolt also offers helpers that make it easier to inspect and pass data through your app.
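To give a feel for the listener pattern and the say() and ack() helpers just described, here is a minimal, hedged Bolt for Python sketch; the environment variable names, the message matcher and the action id are conventional placeholders of my own rather than anything prescribed by this post:

import os
from slack_bolt import App

app = App(token=os.environ["SLACK_BOT_TOKEN"],
          signing_secret=os.environ["SLACK_SIGNING_SECRET"])

@app.message("hello")                 # fires for messages containing "hello"
def greet(message, say):
    say(f"Hey there <@{message['user']}>!")   # reply into the same channel

@app.action("approve_button")         # fires when a button with this action id is clicked
def approve(ack, say):
    ack()                             # acknowledge within 3 seconds
    say("Request approved")

if __name__ == "__main__":
    app.start(port=3000)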
Rather than having to unwrap incoming events to access the most important information, Bolt includes a payload object that is a predictable, unwrapped event (though you’ll still have body for the more verbose event). You can also access context , which is a key/value dictionary that allows you to pass data through middleware and listeners. For example, if you have an internal data store that you want to associate with incoming events, you can create a global middleware function to store that information in context , which will be natively accessible in listeners. The future of Bolt As the platform grows, we will continue our investment in Bolt to make it easier, faster, and more intuitive to build Slack apps. For example, steps from apps are now available in Workflow Builder. Each workflow step has a few associated events, so we collaborated with the Workflow Builder engineering team to design a common pattern in Bolt that lets you centrally handle the entire life cycle of a workflow step. Also, we recently pre-announced Socket Mode, which will improve the experience of deploying apps behind a firewall. When generally available early next year, Bolt apps will gain support for this feature with minimal code changes. We’re also unlocking more Bolt resources for custom use cases— whether that’s specialized hosting environments, simplifying new features, or building scalable apps for enterprise grid and Slack Connect. We’ll continue to expand our collection of Bolt-focused code samples, tutorials, deployment guides, and webinars; and if you need a more specialized approach to building Slack apps, we have ongoing plans for our lower-level SDKs that power Bolt under the hood. Digging into the nuts and bolts You can start building with Bolt using our guides in Python, JavaScript, and Java. If you’re a JavaScript developer, you can read our new hosting guides to get your app up-and-running on Heroku with an equivalent for AWS Lambda coming soon. Want to dive deeper into Bolt for Python? We’re hosting a webinar on November 11, which you can register for today.
https://medium.com/slack-developer-blog/build-slack-apps-in-a-flash-700570619065
['Shay Dewael']
2020-11-09 19:13:15.144000+00:00
['API', 'Technology', 'Python', 'Programming', 'Slack']
How Google Apps Recommends Applications to Users — Wide&Deep Model
Neural Network Embeddings Explained: How deep learning can represent War and Peace as a vector
https://medium.com/%E6%95%B8%E5%AD%B8-%E4%BA%BA%E5%B7%A5%E6%99%BA%E6%85%A7%E8%88%87%E8%9F%92%E8%9B%87/google-apps-%E5%A6%82%E4%BD%95%E7%B5%A6%E7%94%A8%E6%88%B6%E5%81%9A%E6%87%89%E7%94%A8%E7%A8%8B%E5%BC%8F%E6%8E%A8%E8%96%A6-1641d1ce99e6
['Edward Tung']
2020-09-02 08:39:36.958000+00:00
['Python', 'Data Science', 'Recommender Systems']
A Gentle Explanation of E = mc²
The mass-energy equivalence formula E = mc² defines a relationship between the mass m and the energy E of a body in its rest frame (the rest frame is the frame of reference where the body is at rest). The square of the speed of light is an enormous number, and therefore a small amount of rest mass is associated with a tremendous amount of energy. Figure 1: On the blackboard, a variant (Eq. 9 of this article) of his famous equation, E = mc² (source). The Annus Mirabilis papers While still working as a patent clerk, Einstein published four revolutionary papers, all of them containing major contributions to the foundations of modern physics: In the first paper, he explained the so-called photoelectric effect, the emission of electrons when light hits an object. He also showed that light consists of discrete packets of energy (photons). Figure 2: Electrons are emitted from a metal plate due to light quanta (photons) (source). The second paper explained Brownian motion, the random motion of particles suspended in a medium. This paper led the physics community to accept the atomic hypothesis. Figure 3: Simulation of the Brownian motion of 5 particles (yellow) colliding with 800 particles. The particles leave blue trails (from their random motion) (source). In the third paper, Einstein introduced his theory of special relativity. For a very quick introduction to some of the features of special relativity see two of my recent articles: In the fourth paper, Does the Inertia of a Body Depend Upon Its Energy Content? (Ist die Trägheit eines Körpers von seinem Energieinhalt abhängig?), which will be the focus of the present article, Einstein developed the principle of mass-energy equivalence E = mc² (leading eventually to the discovery of atomic energy). Figure 4: The front page of the paper “Does the Inertia of a Body Depend Upon Its Energy Content?” where Einstein developed the mass-energy equivalence principle E = mc² (source). The Postulates of Special Relativity Let us quickly recall the two postulates of special relativity: All the laws of physics are the same in all inertial reference frames. The speed of light in a vacuum is the same in all inertial reference frames regardless of the motion of the observer or the source. Redefining Momentum This section will show that the classical (nonrelativistic) linear momentum p = mv, which has a corresponding conservation law in classical mechanics, must be redefined for momentum conservation to remain valid in relativistic regimes. A Bird’s Eye View of Lorentz Transformations If in an inertial system S the coordinates of an event E are given by (t, x, y, z), in a moving frame S’ with constant velocity v with respect to the first inertial system, the same event E will have coordinates (t’, x’, y’, z’) given by: Equation 1: Lorentz Transformations. Figure 5: Two inertial frames moving with velocity v with respect to each other (source). These relations are called Lorentz transformations (named after the Dutch physicist Hendrik Lorentz). Minkowski Diagrams The spacetime in special relativity is represented graphically by Minkowski diagrams, two- or three-dimensional graphs with one or two space dimensions and one time dimension (see Fig. 7). Figure 6: A transparency from Minkowski’s famous talk, “Space and Time” (1908) (source). Two important concepts are represented in a Minkowski diagram: Events: An event is an instantaneous occurrence, represented by a point (t, x, y). World line: a line representing the motion of an object through time.
The slope of the world line is the reciprocal of the velocity of the moving object (since by convention the time axis is the vertical one). Figure 7: The Minkowski diagram where the worldline represents the motion of an object through time. The slope of the world line is, by convention, the reciprocal of the velocity of the moving object (source). Proper Time and Proper Velocity Proper time τ is the time your clock registers as you move, say, inside a plane. More specifically, it is the time measured by a clock following a world line in spacetime. It is related to the external time (say, the time measured by a clock on the ground) by the following relation: Equation 2: Relation between proper time (the time your clock registers as you move) and the time measured by a clock on the ground. The proper velocity η = dl/dτ is defined using the external distance and the proper time. Since dl/dτ = dl/dt × dt/dτ = v × dt/dτ, Eq. 2 above gives us: Equation 3: Definition of proper velocity. For example, if you are on a plane, η measures the ratio between the distance it takes for the plane to complete the trip (measured by an observer on the ground) and the time aboard the plane (registered on your watch). Eq. 3 is the spatial part of the proper velocity. The 0-th component is: Equation 4: The 0-th component of the proper velocity. Relativistic Momentum In nonrelativistic (or classical) mechanics, the momentum is equal to the (constant) mass times the velocity, p = mv. However, in the relativistic domain, the law of conservation of classical momentum violates the principle of relativity (it is straightforward to find a pair of reference frames S and S’ where the total momentum is conserved in S and not in S’ or vice versa). To recover the validity of the principle of relativity, we must redefine the momentum’s expression. This turns out to be fairly easy: we simply use the relativistic momentum below instead of the classical momentum: Equation 5: Definition of relativistic momentum. Rest Energy The 0-th component of Eq. 5 is the product of c and what Einstein called the relativistic mass. In his 1905–1906 papers, Einstein defined the relativistic mass as the quantity: Equation 6: Einstein’s definition of relativistic mass (and m he called the rest mass). Nowadays, the terminology has changed and the relativistic energy is defined by: Equation 7: Relativistic energy (modern definition). Now, notice that even if the object is not moving (v = 0), it still has nonzero relativistic energy. This is the rest energy of the object: Equation 8: The rest energy. To obtain the kinetic energy we subtract the rest energy from the total energy:
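Since the equation images themselves are not reproduced here, the standard forms of the relations referred to above are, in conventional notation, the following (a summary in LaTeX, not a substitute for the original figures):

\gamma = \frac{1}{\sqrt{1 - v^2/c^2}}, \qquad t' = \gamma\left(t - \frac{v x}{c^2}\right), \quad x' = \gamma (x - v t), \quad y' = y, \quad z' = z

dt = \gamma \, d\tau, \qquad \eta = \frac{dl}{d\tau} = \gamma v, \qquad \eta^0 = c \, \frac{dt}{d\tau} = \gamma c

p = \gamma m v, \qquad E = \gamma m c^2, \qquad E_{\text{rest}} = m c^2, \qquad T = E - m c^2 = (\gamma - 1)\, m c^2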
https://medium.com/cantors-paradise/a-gentle-explanation-of-e-mc%C2%B2-2889003f785
['Marco Tavora Ph.D.']
2020-12-15 12:44:11.537000+00:00
['Math', 'Science']
Finding the Edge: Canny and Sobel Detectors (Part 2)
In the first part of the series I briefly touched upon the Canny and Sobel edge detectors. In part two here, I shall compare these detectors, explain the math behind them and finally we’ll see how one can code them. Comparison of the various Edge Detection Operators The main advantage of the Sobel operator is its simplicity, which comes from its use of an approximate gradient. Canny edge detection, on the other hand, has greater computational complexity. The major drawback of the Sobel operator is its signal-to-noise ratio: as noise increases, the gradient magnitude degrades, which leads to inaccurate results. The main purpose of an edge detector is to figure out where the edges are present; it does not focus on the thickness of the edges at all. Let’s take an example and demonstrate how powerful the Canny edge detection algorithm is, using a photo of this beautiful flower. Say we wish to find the edges of the petals of this flower. The original picture: http://www.public-domain-photos.com/flowers/wet-flower-free-stock-photo-4.htm When we apply Sobel edge detection, it is going to detect the gradients on both the left and right side of the petal. If we have an incredibly high-resolution image of the flower, the gradient is going to spread out, and if we don’t, we will get a blurry edge. Sobel Operator applied on the image In contrast, the Canny edge detector will remove all those rough, blurry edges and will give you just the edge of the petal, exactly what we needed.
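For readers who want to try something like this themselves, here is a minimal OpenCV sketch; it is not the exact code used for the images above, and the file name, blur kernel and Canny thresholds are my assumptions:

import cv2

img = cv2.imread("flower.jpg", cv2.IMREAD_GRAYSCALE)   # hypothetical input image
img = cv2.GaussianBlur(img, (5, 5), 0)                  # smoothing reduces noise sensitivity

# Sobel: approximate the gradient in x and y, then combine the magnitudes
gx = cv2.Sobel(img, cv2.CV_64F, 1, 0, ksize=3)
gy = cv2.Sobel(img, cv2.CV_64F, 0, 1, ksize=3)
sobel_edges = cv2.convertScaleAbs(cv2.magnitude(gx, gy))

# Canny: gradient, non-maximum suppression and hysteresis thresholding in one call
canny_edges = cv2.Canny(img, 100, 200)

cv2.imwrite("sobel_edges.png", sobel_edges)
cv2.imwrite("canny_edges.png", canny_edges)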
https://medium.com/srm-mic/finding-the-edge-canny-and-sobel-detectors-part-2-2b91365e674
[]
2020-09-12 12:38:14.374000+00:00
['Edge Detection', 'Opencv', 'Canny Edge Detection', 'Image Processing', 'Sobel Filter']
Gratitude Is a Great Cure for Self-Doubt
Self-doubt. The tiny voices in my head are whispering all the time. What if they won’t like it? What if my journey towards a better world is BS? What if my Fibonacci poetry sucks? What if… Who doesn’t know these tiny voices? Be honest. I bet you have them too. I was glad to read in the Thanksgiving story by Dew Langrial that even Stephen King suffered from it. “Even writers like Stephen King have to face their doubts. In On Writing, he says, “Writing fiction, especially a long work of fiction can be a difficult, lonely job; it’s like crossing the Atlantic Ocean in a bathtub. There’s plenty of opportunity for self-doubt.” — Dew Langrial The tiny voices have been ruling my life for a long time. And although I seemed very successful on the outside, I didn’t feel it. And my real longing to be a writer never flourished because of the tiny voices. Until I found my doing in 2013 and my voice to accompany the doing last year. And I found my daring! I’m out there now, writing and sharing, building and caring. And I love it! My audience appreciates me. They tell me I’m an inspiring doer via LinkedIn and I reached #59 on the inspiring writer list on Medium. Am I telling you this to brag? Neah. I’m telling you this because these little snippets of appreciation by my readers, colleagues, and friends are silencing my tiny voices. They help me face the harsh criticism, which is also out there. Believe me! I just wrote a story with the four agreements in Toltec wisdom (Miguel Ruiz wrote about it). Be impeccable with your word Don’t take anything personally Don’t make assumptions Always do your best #2 helps me overcome my tiny voices. And if I hear the reactions from my readers, others are helped by this one too. Try it sometime. #1 helps me not to hurt anyone else or make the tiny voices flare up in another person. And #4 is this story. Doing my best. Giving gratitude to everyone out there who’s doing great things. And who needs to be reassured that what they’re doing is good enough.
https://medium.com/illumination-curated/gratitude-is-a-great-cure-for-self-doubt-139ba1cdd2d3
['Desiree Driesenaar']
2020-12-03 09:54:58.378000+00:00
['Empowerment', 'Support', 'Self Improvement', 'Gratitude', 'Writing']
Speech Is More Than Spoken Text
Speech Is More Than Spoken Text Words carry meaning, but there’s much more to spoken language Since the launch of Alexa, Siri, and Google Assistant, we’re all becoming much more used to talking to our devices. Beyond these virtual assistants, voice technology and conversational AI have increased in popularity over the last decade and are used in many applications. One use of Natural Language Processing (NLP) technology is to analyse and gain insight from the written transcripts of audio — whether from voice assistants or from other scenarios like meetings, interviews, call centres, lectures or TV shows. Yet when we speak, things are more complicated than a simple text transcription suggests. This post talks about some of the differences between written and spoken language, especially in the context of conversation. To understand conversation, we need data. Transcribed conversational data is harder to come by than written text data, but some good sources are available. One example is the CallHome set, which consists of 120 unscripted 30-minute telephone conversations between native speakers of English, and is available to browse online. Here’s a snippet of one of the transcriptions: Part of a CallHome transcription (6785.cha) Transcripts contain mistakes “That’s one small step for (a) man. One giant leap for mankind.” In 1969, Neil Armstrong stepped onto the surface of the moon and spoke the now famous line “That’s one small step for man, one giant leap for mankind”. Later, Armstrong insisted the line had been misheard. He had not said “for man”, but rather “for a man”. The poor-quality audio and the particular phrasing mean that Armstrong’s words still remain ambiguous. But it’s clear that mistakes are made when transcribing audio — both people and machines are guilty of this. Another example is in the hand-transcribed CallHome example above. About half-way through the excerpt, there’s an error where the word ‘weather’ is written as ‘whether’. Exactly how many transcription mistakes are made depends on the type of audio. Is the speech clear, or is there a lot of background noise? Is the speaker clearly enunciating, or speaking informally? Is the topic general enough to easily transcribe, or is there a lot of unfamiliar vocabulary? Attempts have been made to quantify the human transcription error rate on conversational telephone speech. Switchboard is another dataset of transcribed telephone calls, containing about 260 hours of speech. One team measured the human WER on this set at around 5.9%, using two expert transcribers to do so. The first transcribed the audio, and the second validated the transcription, correcting mistakes they found. The same paper estimated a human error rate of 11.3% on the CallHome set. A subsequent paper from a different team used three transcribers plus a fourth to verify. They showed human error rates for the three transcribers of 5.6, 5.1 & 5.2% on Switchboard, and 7.8, 6.8 & 7.6% on CallHome. This same paper showed their best automatic speech recognition (ASR) error rate as 5.5% on Switchboard and 10.3% on CallHome. So interestingly, while ASR performance is in the ballpark of human performance for Switchboard, it’s a few percent worse than human transcription for CallHome. Analyses of human transcription show that people tend to mis-recognise common words far more frequently than rare words, and they are also poor at recognising repetitions.
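The percentages quoted here are word error rates (WER): the number of substituted, deleted and inserted words divided by the number of words in the reference. A minimal sketch of the usual computation, not tied to any particular toolkit, might look like this:

def wer(reference: str, hypothesis: str) -> float:
    """Word error rate via word-level edit distance over the reference length."""
    ref, hyp = reference.split(), hypothesis.split()
    # dynamic-programming table of edit distances between prefixes
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            substitution = d[i - 1][j - 1] + (ref[i - 1] != hyp[j - 1])
            d[i][j] = min(substitution, d[i - 1][j] + 1, d[i][j - 1] + 1)  # sub, deletion, insertion
    return d[len(ref)][len(hyp)] / len(ref)

print(wer("for a man", "for man"))  # one deletion over three reference words, about 0.33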
On these specific conversational telephone speech tasks, where human and computer error rates are low, people and machines make similar kinds of transcription errors. ASR systems make mistakes (source: https://knockhundred.com/news/when-english-subtitles-go-wrong) Errors crop up in written text too, in the form of typos and incorrect word choice. Yet the kinds of errors in written text are different from those made transcribing spoken text. The penguin jumped? Once we accurately know the words someone said, the meaning can be affected by altering how we say them — the prosody. We can turn a statement into a question, or a question into an exclamation. “The penguin jumped?” and “The penguin jumped!” are spoken differently. Written text uses punctuation — ? and ! — to show this difference, but punctuation is often unreliable or missing in transcripts of audio. Emphasising different words changes the meaning too — “The penguin jumped?” (stressing ‘penguin’) and “The penguin jumped?” (stressing ‘jumped’) are spoken in different ways and are asking two different questions that would elicit different replies. Additional information, too, is conveyed by our tone. We might sound uncertain, nervous, happy, or excited while we speak. How we choose to speak may also convey a different emotion to how we’re really feeling. Understanding emotion in speech is an increasingly popular research topic, though in practice it typically reduces the set of emotions down to a small set that are easy to separate. For example, one dataset, RAVDESS, uses the categories neutral, calm, happy, sad, angry, fearful, surprise, and disgust. This doesn’t come close to capturing the full range of emotion that can be expressed. Of course, words carry emotion and meaning too. But in analysing only the words spoken, we run the risk of missing much of the meaning behind what people are saying. That, we can only get from how they say it. When in agreement with someone, we’re usually quick to voice it. Sometimes, so quick that we overlap the beginning of our speech with the end of theirs. When disagreeing, though, we aren’t always so quick off the mark. Pauses in conversation are often a precursor to disagreement, or are used before saying something unexpected. Anything upwards of half a second of silence can indicate an upcoming disagreement. Elizabeth Stokoe’s book ‘Talk’ has many examples of where conversations go wrong and how an unexpectedly long silence is often the first sign. I mean-er-I want to-you know-say something… We stumble over our words all the time and may barely even notice. Take this line from one conversation: “So what I would say is that, you know, the survey is — the survey instructs the consumers” The speaker first inserts a filler (‘you know’), and then goes on in the same utterance to correct herself (‘the survey is-the survey instructs’). You might think that fillers and corrections are characteristics of informal speech, but this example is taken from a much more formal setting — the US Supreme Court arguments. Both audio and transcripts of these are available to browse online. The sessions have a mix of pre-prepared remarks from both sides of the argument, and some back-and-forth discussion. The ‘um’s and the ‘er’s and ‘you know’s may seem random, but they serve very specific purposes during speech. One time we use them is when we want to eke out some extra time to get our thoughts together, without signalling to others that we’ve finished talking (otherwise known as ‘holding the floor’).
They can also help in communicating uncertainty or discomfort, or let us speak more indirectly to appear more polite. In the CallHome conversation snippet at the beginning of this post, you can see speaker A saying ‘mhm’ and ‘yeah’ while speaker B is talking. Like the ‘you know’ from the Supreme Court example, ‘mhm’ and ‘yeah’ don’t convey information here, but are ‘backchannels’. Speaker A is simply letting speaker B know that they are still paying attention. Backchannels often overlap the speech of the other person, in a way that doesn’t interrupt their flow. In conversation, we talk over each other all the time. Some of this is the backchannels we use to show that we’re paying attention, sometimes we start our turn naturally before the other person has finished theirs, and sometimes we jump in to interrupt (‘take the conversational floor’) before the other person has finished talking. The amount of overlap varies a lot between scenarios and speakers. A staged interview between two participants might have very little overlap, but a meeting where the participants are excited about the ideas being discussed might have much more overlapping speech. The AMI dataset is a set of meeting recordings, and the amount of overlapping speech in its meetings varies between 1% and 25%. Despite unconsciously using these conversational phenomena when talking with others, people tend to use fewer such hesitations when talking to a computer. Perhaps they intuitively know that computers will struggle with these aspects of conversation. Still, these patterns of speech are important when building any technology that analyses conversations between people.
https://medium.com/swlh/speech-is-more-than-spoken-text-4125490294b9
['Catherine Breslin']
2020-10-06 09:25:45.810000+00:00
['Artificial Intelligence', 'Voice Recognition', 'Conversational Ai']
The Changing Media Landscape: Opportunities and New Business Models
Lesson #1: Media is an industry for which journalism is a part but not the singular definer. According to Vault.com, the U.S. media and entertainment industry contributes more than $632 billion to the economy and represents a third of the global industry. While journalism is an important component of media — and arguably one of the most prominently discussed — it neither defines nor determines the future of the industry as a whole. Media is a vast industry that encapsulates a wide variety of sub-industries including but not limited to: film, print, radio, television, podcasts, internet, VR, AI, radio shows, news, newspapers, magazines, and books, among others. The media industry includes business functions such as consulting, finance, advertising, marketing and research, all of which specialize in one or all of the aforementioned core areas. And conferences, summits, events, travel, training and other activities are all business endeavors conducted by these entities. The business model of media companies may take the form of privately held companies such as Forbes, of publicly traded companies such as Snapchat or Disney, of nonprofit organizations such as ProPublica or the Texas Tribune, or of government-funded outlets such as local government news stations. And they may be franchised or localized, as we have seen with the American Business Journal networks. We have also seen many companies now entering the content market, which has further evolved how we must define the media industry. That’s evidenced by retailers such as Net-A-Porter, which has an in-house editorial team, Porter, dedicated to producing content related to its products. Lesson #2: To survive, traditional media will employ non-traditional business models. The days of ‘just’ being a newspaper have changed. In the past, directories, advertisements and circulation could sustain a media company. Those days have quickly changed. We are seeing now the development of new lines of revenue for existing media outlets such as The New York Times in the form of its wine club, travel experiences, and cooking subsets. Others, such as The Atlantic, have shifted their business revenue model to one where 80 percent is derived from events, digital/native advertising, and consulting. Other outlets, such as Vice, have non-public-facing consulting arms, like Virtue, that provide B-to-B consulting and creative agency work to various companies and organizations seeking media advisement and editorial content. Buzzfeed has also instituted this consulting type of model by providing companies and brands advisement and content development services leveraging their unique knowledge of virality. We are also seeing the development of new media ventures that otherwise would have no home amongst the traditional definitions of journalism-based media. Skift, for example, and others have developed business models based on singular industries, boutique events, publication production and even consulting. Lesson #3: The traditional strategies that have survived are email and membership. As social media algorithms change, engagement with social media platforms evolves in a volatile way — see Snapchat’s recent $1.3 billion drop in market value following influencer comments. Users are increasingly engaging with media in ways that feel more “trusted” and more connected to their areas of interest. As such, email has remained an integral way to continue to increase brand engagement, placement, information sharing and to build trust and rapport with readers/subscribers.
Those who are getting it right have meaningful content, the right voice for their readers, visual appeal and a commitment to consistency that is winning them audiences. Membership, a traditional strategy for trade and business associations and clubs, is back in full effect as a new way to build revenue/sustainability, connectivity and engagement. Perks, benefits and advanced engagement opportunities are offered to those willing to pay a premium to become members. We are seeing membership-as-subscription models being used by organizations such as The Atlantic, with its Masthead program. The Platform Press notes that “the proliferation of membership and subscription services is both a route to solvency and independence.”
https://medium.com/journalism-innovation/my-tow-knight-experience-three-lessons-learned-1baab3f0a176
['Natalie Cofield']
2018-04-03 13:21:08.257000+00:00
['Journalism', 'Multiculturalism', 'Media', 'Cuny J School']
Biggest Open Problems in Natural Language Processing
The NLP domain reports great advances to the extent that a number of problems, such as part-of-speech tagging, are considered to be fully solved. At the same time, tasks such as text summarization or machine dialog systems are notoriously hard to crack and have remained open for decades. However, if we look deeper into such tasks we’ll see that the problems behind them are rather similar and fall into two groups: Data-related, and Understanding-related. Data-related problems NLP is data-driven, but which kind of data, and how much of it, is not an easy question to answer. Scarce, unbalanced, or overly heterogeneous data often reduces the effectiveness of NLP tools. However, in some areas obtaining more data will either entail more variability (think of adding new documents to a dataset) or be impossible (like getting more resources for low-resource languages). Besides, even if we have the necessary data, to define a problem or a task properly we need to build datasets and develop evaluation procedures that are appropriate for measuring our progress towards concrete goals. Low-resource languages It is a known issue that while there are tons of data for popular languages, such as English or Chinese, there are thousands of languages that are spoken by few people and consequently receive far less attention. There are 1,250–2,100 languages in Africa alone, but the data for these languages are scarce. Besides, transferring tasks that require actual natural language understanding from high-resource to low-resource languages is still very challenging. The most promising approaches are cross-lingual Transformer language models and cross-lingual sentence embeddings that exploit universal commonalities between languages. Such models are also sample-efficient, as they only require word translation pairs or even only monolingual data. With the development of cross-lingual datasets, such as XNLI, the development of stronger cross-lingual models should become easier. Large or multiple documents Another big open problem is dealing with large or multiple documents, as current models are mostly based on recurrent neural networks, which cannot represent longer contexts well. Working with large contexts is closely related to NLU and requires scaling up current systems until they can read entire books and movie scripts. However, there are projects such as OpenAI Five that show that acquiring sufficient amounts of data might be the way out. The second problem is that with large-scale or multiple documents, supervision is scarce and expensive to obtain. We can, of course, imagine a document-level unsupervised task that requires predicting the next paragraph or deciding which chapter comes next. However, this objective is likely to turn out too sample-inefficient. A more useful direction seems to be multi-document summarization and multi-document question answering. Evaluation The problem of evaluating language technology, especially technology as complex as dialogue, is often neglected, but it is an important point: we need both in-depth and thorough studies that shed light on why certain approaches work and others don’t, and we need to develop evaluation measures based on them. We need a new generation of evaluation datasets and tasks that show whether our techniques actually generalize across the true variety of human language.
Challenges in Natural Language Understanding Up to the present day, the problem of understanding natural language remains the most critical for further making sense of and processing text. The issues still unresolved include finding the meaning of a word or a word sense, determining the scopes of quantifiers, finding the referents of anaphora, relating modifiers to nouns, and identifying the meaning of tenses with respect to temporal objects. Representing and inferring world knowledge, and common knowledge in particular, is also difficult. Besides, there remain challenges in pragmatics: a single phrase may be used to inform, to mislead about a fact or the speaker’s belief about it, to draw attention, to remind, to command, etc. The pragmatic interpretation seems to be open-ended — and difficult for machines to grasp. Ambiguity The main challenge of NLP is the understanding and modeling of elements within a variable context. In a natural language, words are unique but can have different meanings depending on the context, resulting in ambiguity on the lexical, syntactic, and semantic levels. To solve this problem, NLP offers several methods, such as evaluating the context or introducing POS tagging; however, understanding the semantic meaning of the words in a phrase remains an open task. Synonymy Another key phenomenon of natural languages is the fact that we can express the same idea with different terms, which are also dependent on the specific context: big and large can be synonyms when describing an object or a building, but they are not interchangeable in all contexts, e.g. big can mean older or grown up in phrases like big sister, while large does not have this meaning and could not be substituted here. In NLP tasks, it is necessary to incorporate the knowledge of synonyms and different ways to name the same object or phenomenon, especially when it comes to high-level tasks mimicking human dialog. Coreference The process of finding all expressions that refer to the same entity in a text is called coreference resolution. It is an important step for a lot of higher-level NLP tasks that involve natural language understanding, such as document summarization, question answering, and information extraction. Notoriously difficult for NLP practitioners in the past decades, this problem has seen a revival with the introduction of cutting-edge deep-learning and reinforcement-learning techniques. At present, it is argued that coreference resolution may be instrumental in improving the performance of NLP neural architectures like RNNs and LSTMs. Personality, intention, emotions, and style Depending on the personality of the author or the speaker, their intention and emotions, they might also use different styles to express the same idea. Some of them (such as irony or sarcasm) may convey a meaning that is opposite to the literal one. Even though sentiment analysis has seen big progress in recent years, the correct understanding of the pragmatics of the text remains an open task.
https://medium.com/sciforce/biggest-open-problems-in-natural-language-processing-7eb101ccfc9
[]
2020-02-05 16:50:45.367000+00:00
['Machine Learning', 'Deep Learning', 'Data Science', 'NLP', 'Artificial Intelligence']
Step-by-Step Guide — Building a Prediction Model in Python
Understanding the Apple Stock Data Secondly, we will start by loading the data into a dataframe; it is good practice to take a look at it before we start manipulating it. This helps us confirm that we have the right data and get some insights about it. As mentioned earlier, for this exercise we will be using historical data of Apple. I thought Apple would be a good one to go with. After walking through this project with me, you will learn some skills that will give you the ability to practice on different datasets yourself. The dataframe that we will be using contains the closing prices of Apple stock over the last year (Sept 16, 2019 — Sept 15, 2020). Read Data import pandas as pd df = pd.read_csv('aapl_stock_1yr.csv') Head Method The first thing we’ll do to get some understanding of the data is use the head method. When you call the head method on the dataframe, it displays the first five rows of the dataframe. After running this method, we can also see that our data is sorted by the date index. df.head() image by author Tail Method Another helpful method we will call is the tail method. It displays the last five rows of the dataframe. Let’s say you want to see the last seven rows: you can pass the value 7 as an integer between the parentheses. df.tail(7) image by author Now we have an idea of the data. Let’s move to the next step, which is manipulating the data and making it ready for prediction.
https://towardsdatascience.com/step-by-step-guide-building-a-prediction-model-in-python-ac441e8b9e8b
['Behic Guven']
2020-10-18 13:26:11.800000+00:00
['Machine Learning', 'Artificial Intelligence', 'Technology', 'Data Science', 'Programming']
5 (More) Places to Promote Your Medium Stories Besides Facebook
Promoting your Medium stories is vital to your success on this site. I learned this the hard way the first few months I spent writing Medium stories in 2018. I would just post the story on the site… and hope people found it and clapped for it. I figured, hey, I’m publishing a story every single day. After a few weeks of hard work, people will eventually start following me and reading my writing every day, right? Right? Unfortunately it’s not that easy. There are simply so many writers on Medium, many of them producing work every day, that it’s almost impossible to keep up with everyone and everything. You do need to promote your work, and do so every day. I started treating Medium like a part-time job last year, and since then I’ve been doing everything I can to promote my stories. I definitely believe in the promotion process to ensure success on this site. To not promote is shooting yourself in the foot in a way. You spend 20 minutes, 30 minutes, maybe an hour or longer on a new Medium story. Spend a few extra minutes ensuring people will actually find it! Recently I discussed the ways Facebook is helpful in getting the word out about your Medium stories. Sharing your latest stories on your main page, your author page, and various communities dedicated to Medium is an excellent place to start. It’s what I start with every day after I’ve published my latest stories, and when I’m super busy, I’ll likely only use Facebook to promote the stories. There are also Reddit, LinkedIn, and your own personal website. And of course there’s Twitter, which you should definitely look into for promoting your stories. However, if you have the extra time, there are five other places you can use to get the word out about your stories. Think about using some or all of these in the months to come!
https://medium.com/the-partnered-pen/5-more-places-to-promote-your-medium-stories-besides-facebook-e03385209f57
['Brian Rowe']
2020-09-28 12:03:06.317000+00:00
['Marketing', 'Promotion', 'Success', 'Social Media', 'Medium']
Data Science with no Math
Data Science with no Math Using AI to Build Mathematical Datasets This is an addendum to my last article, in which I had to add a caveat at the end that I was not a mathematician and I was new to Python. I added this because I struggled to come up with a mathematical formula to generate patient data that would follow a trend that made sense to me. The goal of this article is to generate 10,000 patient records which would correlate age to cost. I wanted the correlation to follow a pattern that looks something like this: Artist's rendition of the correlation pattern (not exact) The Y-axis is the cost multiplier; the X-axis is the age. The idea here is that patient costs start relatively high, decrease as patients approach a certain age, then start increasing again. After much trial and error I came up with a formula that would generate a graph that looks like this: You can obviously see that there are some flaws in this formula. The most glaring one is that it implies costs level out once the patient hits 60. In the correlation that I wanted to use, the cost continues to increase as age increases. For the sake of completing the article, I felt like this was close enough and I was ready to move on to start writing actual code. For days after I published the article, I continued to try to come up with a formula which would follow my correlation pattern, with no success. Then one day I had an epiphany: why not let the computer figure out the formula? If I could successfully implement this, I could focus my efforts on improving my Python knowledge (which was my goal in the first place) vs. figuring out a mathematical formula. Let's use machine learning to generate an approximation of the formula that I wanted, using a few values as input. Once we have a model trained, we can generate a full sample dataset for input into a machine learning model. The first step is to set up a few values to train our model: Now, if we plot this, you can see that it roughly follows the picture I drew above: We can now use this data as an input to a neural network to build a model that we could train to predict the cost for any age that we pass in: import numpy as np import pandas as pd from sklearn.neural_network import MLPRegressor regr = MLPRegressor(hidden_layer_sizes=(30,), activation='tanh', solver='lbfgs', max_iter=20000) model = regr.fit(np.array(age_df['Age']).reshape(-1, 1), age_df['Cost']) The MLP in MLPRegressor stands for Multi-Layer Perceptron, which is a type of neural network that is part of the sklearn Python library. The sklearn library has numerous regressors built in, and it's pretty easy to experiment with them to find the best results for your application. All of the regressors have a fit function that trains the model with the given input. Now that our model is trained, let's generate a test dataset to see how our model did. df = pd.DataFrame( {'Age': np.arange(0, 100), 'Cost': model.predict(np.arange(0, 100).reshape(-1, 1))}) In this case, we're generating a dataframe containing a row for every age from 0 to 99, along with the cost that is predicted by our model for that age. Plotting the results of this gives us: This looks much more like the picture I drew at the top of the article. However, we don't want our model to predict the exact cost multiplier for an age. Instead, we want to use the prediction as a baseline to predict a random value. In this case, we'll adjust the data so that the cost is within ±20% of the prediction.
Here's how to do this in Python (using the standard library's random module): import random df['Cost'] = [i + i * random.uniform(-0.2, 0.2) for i in df['Cost']] Now, if we plot our dataset, it looks like this: Now we have generated 100 values that roughly follow the drawing at the top of the article. Let's generate a dataset of 10,000 rows using this model. df2 = pd.DataFrame( {'Age': (np.random.random_sample((10000,)) * 100).astype(int)}) df2['Cost'] = model.predict(np.array(df2['Age']).reshape(-1, 1)) Here's a scatter plot of those 10,000 Age/Cost values, and as we can see, it still roughly follows the drawing at the top of the article. Now we'll add some randomness to the dataset and see what it looks like: df2['Cost'] = [i + i * random.uniform(-0.2, 0.2) for i in df2['Cost']] We can now use this as part of a dataset to predict healthcare costs using Age as one of the inputs. This concept could be used effectively to augment this dataset on Kaggle, which contains valuable trends but only 1,338 rows. Using this technique, we could generate as many rows as we wanted to input into a model.
https://towardsdatascience.com/data-science-with-no-math-fd502621728b
['Rich Folsom']
2019-03-15 21:05:19.985000+00:00
['Python', 'Mathematics', 'Data', 'Data Science', 'Machine Learning']
The No Game (How To Figure Out What You Want In Life)
Being 25 is hard. When you look at the big truths that roll around the quarter century mark, it becomes easy to see why ‘quarter-life crisis’ has become a thing. Your happy, careless, worry-free, post-teenage phase is definitely over at this point. You’ve already spent 90% of the time you’ll ever spend with your parents and closest family. However, you still have the majority of your own life ahead of you… …but no idea what to do with it. The land of opportunity has never been this big. If you own a laptop with an internet connection, you have more production power than a 200-person company had in 1970. This power is so great that it paralyzes us. Petrified by the paradox of choice, we can’t decide whether we want to become a freelance Facebook ad designer or a surf novel writer, or start a cupcake business. Because we know all of it is possible, we think we want each choice equally as much. Like Buridan’s ass, we’re just as hungry as we are thirsty, stuck between the hay and the water. This is an illusion. You Don’t Really Want Everything Equally When you’re in a candy store, everything looks good. Plus, it’s all right in front of you; the licorice is just as easy to grab as the chocolate. The media is painting a candy store picture of careers for us. All we see online and off are the end results of hard-working people — those who’ve survived and come out on top. Everything seems easy to grab. But it’s not. We know that in theory, but until our brain computes this on an elementary level, let’s turn to a better indicator of what we want: Fear. Picking in a candy store is hard. Telling the waiter to take back the pizza — because she brought you mushrooms instead of pepperoni — is easy. Because it’s not what you wanted. Because you’re afraid it might not taste good. Fear often hinders us, because it keeps us from doing things. “You act like mortals in all that you fear, and like immortals in all that you desire.” — Seneca In the case of choosing careers, however, we can use it to systematically eliminate what we don’t want and then work with the elements that are left. What if, instead of running towards something we don’t know, we just run away from what terrifies us? Introducing The No Game As I’m trying to figure out what I want for myself, I’ve recently started playing a game. I just call it ‘The No Game.’ The goal is to ask yourself questions about what you want, shooting for a no each time. I started with all the things I’ve done before, using this template: Do I want to [insert an activity you’ve done before]? Answer honestly, and if you have some conditions or exceptions, include them. For example, I like to consult with people, but not all the time, so it’s not something I’d want to do full-time. After I’ve run through everything I have done, I start thinking about the things I could potentially do, but haven’t tried. Do I want to [insert an activity you haven’t done]? If something doesn’t excite you when you fantasize about it, reality will only be an even bigger disappointment. Think about it. Your imagination knows no limits in designing the experience, yet you still don’t like the thought of being recognized by everyone on the street — that’s a good signal pursuing celebrity status isn’t for you. After you’ve played The No Game for a while, you’ll slowly realize that only certain criteria and elements will be left that you actually can imagine yourself living with for a long time. Then, and only then, can you switch to playing The Yes Game and explore those elements a bit more.
What Should You Do With This? I love wrapping up my posts with a conclusion and saying “go do that!” But in this case, I can’t. Because I don’t know where The No Game ultimately leads. I only know it’s helping. And that it helps me more the longer I play it. Looking at my answers, you might think I should try being a writer. But that’s the thing: unless I put some candy in the bag, wield the power of my laptop, and commit to it, I won’t know. Until I’m comfortable enough to do that, I can keep playing The No Game. Right now, all I know is this: For me, being a writer is not a No. So I can just keep writing, whatever the format. And maybe that’s worth more than a Yes.
https://medium.com/better-humans/the-no-game-how-to-figure-out-what-you-want-in-life-865a4ac9a1e1
['Niklas Göke']
2018-03-20 13:39:08.374000+00:00
['Millennials', 'Goals', 'Self-awareness', 'Life Lessons', 'Work']
Top 5 GAN (Generative Adversarial Networks) Projects to play around with Human Faces
1. SEAN SEAN (Semantic Region-Adaptive Normalization) is an image synthesis project conditioned on segmentation masks that describe the semantic regions in the desired output image. It can be used to build a network architecture that controls the style of each semantic region individually by specifying one or more style reference images per region, i.e. you can interactively edit images by changing the segmentation mask or the style of any given region. SEAN is better suited to encoding, transferring, and synthesizing styles than the best previous methods in terms of reconstruction quality, variability and visual quality. The model is evaluated on multiple datasets and reports better quantitative metrics (e.g. FID, PSNR) than the previous state of the art. The project is implemented in Python using PyTorch.
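The official SEAN repository contains the full PyTorch implementation; purely as an illustrative sketch of the core idea (the class, argument names and sizes below are invented for this example, and the real SEAN blocks are considerably more involved), a semantic region-adaptive normalization layer can be thought of as per-region scale and shift parameters broadcast through the segmentation mask:

```python
# Illustrative sketch only — not the official SEAN code. Each semantic region gets its
# own style code, which is turned into a per-region (gamma, beta) pair and scattered
# onto the spatial grid via the one-hot segmentation mask.
import torch
import torch.nn as nn
import torch.nn.functional as F

class RegionAdaptiveNorm(nn.Module):
    def __init__(self, num_channels, style_dim=64):
        super().__init__()
        self.norm = nn.InstanceNorm2d(num_channels, affine=False)
        self.to_gamma = nn.Linear(style_dim, num_channels)  # style code -> per-channel scale
        self.to_beta = nn.Linear(style_dim, num_channels)   # style code -> per-channel shift

    def forward(self, features, mask, styles):
        # features: (B, C, H, W); mask: (B, R, H, W) one-hot regions; styles: (B, R, style_dim)
        normalized = self.norm(features)
        mask = F.interpolate(mask.float(), size=features.shape[2:], mode="nearest")
        gamma = torch.einsum("brhw,brc->bchw", mask, self.to_gamma(styles))
        beta = torch.einsum("brhw,brc->bchw", mask, self.to_beta(styles))
        return normalized * (1 + gamma) + beta

# Toy usage: a 128-channel feature map, 5 semantic regions, one style vector per region.
feats = torch.randn(1, 128, 32, 32)
labels = torch.randint(0, 5, (1, 32, 32))
mask = F.one_hot(labels, num_classes=5).permute(0, 3, 1, 2)
styles = torch.randn(1, 5, 64)
print(RegionAdaptiveNorm(128)(feats, mask, styles).shape)  # torch.Size([1, 128, 32, 32])
```

Because each region’s gamma and beta come from its own style code, changing the style reference for one region (or editing the mask) only affects that region — which is the interactive-editing behaviour described above.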
https://medium.com/datadriveninvestor/top-5-gan-generative-adversarial-networks-projects-to-play-around-with-human-faces-f647040e8a65
['Mrinal Walia']
2020-11-18 06:17:26.090000+00:00
['Machine Learning', 'Gans', 'Projects', 'Data Science', 'Python']
Bringing Transparency to Data: Quadrant’s Data Quality DashBoard [Product Release]
When we envisioned what Quadrant was going to be, we laid out the problems that we wanted to solve. One of those problems was bringing transparency to data. To this end, we are pleased to unveil the latest tool that our team built — Quadrant’s Data Quality Dashboard. Data Transparency And Data Quality Come Together Data quality depends on the business use case. We understand this fact and work with our customers to provide them with fit-for-purpose data that meets their specific data requirements. Our Data Quality Dashboard adds transparency to our customers’ data acquisition process. It offers a suite of metrics that allows customers to evaluate the quality of the location data feeds available on our platform and helps them select the data feed that fits their specific use case. Suite of Data Quality Metrics Our dashboard provides quality metrics and overall completeness scoring, giving customers a quick overview of our data feeds prior to running full evaluation analyses. Examples of our metrics are below: Daily and Monthly Active Users Data Completeness Matrix Overall Panel of Data Quality Charts. For more information on the Data Quality Dashboard, please visit the link below: View More Here Or, if you would like to access the Data Quality Dashboard, you may do so here: ACCESS DATA QUALITY DASHBOARD HERE What Next? The Data Quality Dashboard is just the next item on the list of what we promised to deliver, and it is now ready for use. We have something new to announce in the coming weeks, and we cannot wait to share more information soon. To find out more, visit our website
https://medium.com/quadrantprotocol/bringing-transparency-to-data-quadrants-data-quality-dashboard-product-release-4c1ca8188bf0
['Navas Khan']
2019-05-29 09:48:44.430000+00:00
['Data Visualization', 'Data Cleaning', 'Data', 'Data Science', 'Big Data']
Be Aware Of These And Save The Future
We Should Do These First things first: Education. Even though it may sound unrelated to this topic, education is definitely the most permanent and essential solution to climate change. But… why? If we talk about this global phenomenon as often as we need to in classes, we’ll take a massive step on the path to a solution. There should be lectures provided by people who know this topic, so that students can build that awareness from an early age. I’ve touched on that point before, from a pros-and-cons perspective, in my article “The Biggest Threat In Front Of Humanity.” You may want to read that article as well. Beyond that, better education is the best way of creating awareness. Planting Trees? Of course, planting trees is a crucial action that we have to take. But meanwhile, we need to protect the trees we already have. The forest fires have gone crazy, especially this year, as we have seen in the media. The trend in forest fires clearly warns us about the current situation, and it’s scary as hell. It’s not just about the fires themselves: those fires also increase the world’s average temperature because of the jump in carbon emissions. Besides, planting trees won’t give us results as quickly as we may think, but it will provide considerable benefits in the future. So, we have to be consistent and patient about that proposal. Air Pollution As stated in UCSUSA’s article (2009, para. 16), air pollution caused by our daily actions has a direct effect on global warming. Global warming is primarily caused by emissions of too much carbon dioxide (CO2) and other heat-trapping gases into the atmosphere when we burn fossil fuels to generate electricity, drive our cars, and power our lives. We can largely prevent this air pollution by using proper filters that capture CO2. Also, many of the things that run on fossil fuels are shifting to ones that run purely on electricity. In the future, that will help us as well.
https://medium.com/environmental-intelligence/be-aware-of-these-and-save-the-future-57e200a51d4a
['Cagri Ozarpaci']
2020-08-04 14:41:26.257000+00:00
['Climate Action', 'Awareness', 'Global Warming', 'Climate Change', 'Future']
The Letter (Part Two)
Photo by Lukasz Szmigiel on Unsplash Three months earlier She sat up, looking around the room as if it were for the very first time. But she was as familiar with the faded wallpaper as she was her own hands. Glancing at the open window, she brushed aside the sheer black curtains to check the temperature. It was almost time to leave. She opened a drawer to an old oak dresser, its handles tarnished and the wood splintering. She could have it fixed, but she chose not, and instead used it to remind her of her past. She dressed in silence, pairing her black dress with her signature mismatched socks. It wasn’t cold enough for tights, she decided. Glancing at her desk, she hesitated. Her journal lay open and the pen she forgot to cap was surely slowly drying up. She didn’t have time to write, she decided. She had her tasks for the day. She would write later. Gathering her things and a light jacket, she walked down the stairs. She paused to admire the light streaming in through the French doors. God, she loved those doors. The shoes she always wore stood perfectly aligned with the umbrella stand. She always thought it was funny there was an umbrella stand in this house. It could be so old fashioned sometimes. Shoes on, she checked her belongings and placed them in her handbag. She closed it and then out of habit, opened the bag once more. Satisfied everything was in place, she went outside, locking the doors behind her. She would be back late, she thought. The walk would be nice, she decided, opting out of driving. As she trudged along the familiar road, she remained deep in thought. A hawk’s screaming call pulled her from her daydream, and she glanced around her. She was close. A few more minutes and she would be there. The breeze whipped by, reminding her that autumn would be over soon. She couldn’t remember if she had closed the window. It was of no matter, she decided. The sky didn’t look like it would rain and it wasn’t too cold. She walked the final steps and took a sharp right, trotting down a path in the woods any other person would overlook. The deeper into the woods she walked, the more relaxed she felt. No one had followed her and she could complete her mission. Find the tree, she reminded herself. Her eyes scanned the cluster of trees and she chastised herself for once again forgetting her glasses. Her phone buzzed, jolting her. In the depths of the woods, the quiet sound was almost deafening. Her heart hammered. How could she have forgotten to turn the sound off? That was always the first thing she did before walking down this path. Perhaps she was losing her touch. She had been warned that her small mistakes would add up. Maybe they were right. It was of no use to worry about it now, she decided. She was still trusted with the most highly regarded tasks. She smiled slightly to herself as she thought of all she had carried out. Her successes far outweighed these small mistakes. The pine tree looked like any other tree, but the trained eye could spot one slight difference. She swept aside the carefully placed pine needles and touched her finger to the metal. An almost silent beep signaled she indeed had permission to be there. Counting to five, she steadied herself and began her task.
https://medium.com/know-thyself-heal-thyself/the-letter-part-two-e0fede6f31e1
['Elyse Wright']
2020-12-22 09:27:21.968000+00:00
['Mystery', 'Fiction Series', 'Fiction', 'Lovethyself', 'Writing']
The Art of Justifying Your Design Decisions
The Art of Justifying Your Design Decisions Being able to explain the decisions you make while designing digital products will not only make you confident about your solutions, but it will also improve your communication skills. The ability to explain the decisions you make while designing digital products is just as important as the design work itself. However, this skill is, unfortunately, commonly underappreciated. This article will highlight and explain the designer’s role and help designers become better communicators by outlining a few good practices. Articulation is a crucial skill of every designer. Photo source: Unsplash. A field that is becoming more significant For decades, the work that designers did was mostly portrayed as making things that just looked visually appealing. When it comes to digital products, the burden fell mainly on developers. Apart from the technical aspect, they were in charge of the user flows and information architecture of each product, while designers were solely responsible for the visuals, and in some cases they weren’t involved at all. Terms such as “user experience” weren’t widely used and, clearly, these digital products were missing out. How well we communicate is determined not by how well we say things, but how well we are understood. — Andrew Grove Along with the development of technology, the role of designers has steadily grown in value. Nowadays, job offers for Product Designers, UX Designers and UX Researchers are popping up like mushrooms. Organizations are adapting to the competitive market, since it’s no longer an advantage to have a design that is just decent. Underappreciated role Despite the increasing demand for design positions, our role is still commonly misunderstood by stakeholders. Since it’s a relatively new field, plenty of companies struggle to understand who they actually need. They used to hire designers to make their brands look consistent. Now that the designer’s role has become the backbone of the product development cycle, businesses want designers to solve real problems. Our job is commonly misunderstood. Source: SkeletonClaw The biggest issue with design is that it’s highly subjective, and many stakeholders want to throw in their two cents. It’s quite easy to criticize a design based on your own assumptions and preferences. Most clients and stakeholders aren’t familiar with technology, and design is the first thing they can comment on. The fact that a specific design would work in one context and not in another makes it extremely challenging for us. Everybody is a designer… to some extent Having discussed the role of designers and the subjective aspect of design, we must take into consideration that although design has an impact on the entire product, almost everybody involved in the process is a designer in some way. That inevitably includes developers, product managers, QAs and, obviously, product owners. Everything is designed. Few things are designed well. — Brian Reed Bearing that in mind, we must be transparent about our design process and supportive of our team. We build better products by engaging every member of the team in the design process. Does this mean that developers should be preparing wireframes and the product manager should be in charge of user flows? Not really. It means that each team member has to be aware of our role and its importance for the product and the business. How to deal with your decisions As designers, we must take into account countless things that shape our final work.
Apart from business requirements, we’re also bound to face technical limitations and other people’s feedback. Dealing with grumpy stakeholders can be remarkably challenging sometimes. Here are a few practices you should adopt in order to become a better designer and communicator: Make sure you understand the problem you’re solving as well as the business context. It will let you prepare designs that actually bring value. Walk a mile in the shoes of the decision-maker. Listen attentively and try to understand this person’s point of view. Don’t use technical jargon that only you can interpret. Leave your ego at home and remain focused on the ultimate goal. We design for the users, not to impress other designers. Always consider alternative solutions, but don’t overwhelm decision-makers with too many options. It will give them the impression that you’re not confident about your decisions. After the meeting, follow up with the client and give yourself some time to digest their insights. Be prepared for the possibility that the design will never be done, because design is an iterative process. Articulation is a design skill now The ability to explain the rationale behind one’s solutions is an inherent skill of every experienced designer. It’s vital because the decisions you make have an impact on the whole working product. If you lack this skill, the significance of your work can be vastly diminished and the stakeholders might not fully understand your solutions. Every time you face a new challenge, try to implement the above-mentioned practices. Not only will they impact the product you’ll be working on, but they will also help you grow as a designer.
https://medium.com/elpassion/the-art-of-justifying-your-design-decisions-cab8e3b80e4e
['Jakub Wojnar-Płeszka']
2019-06-03 12:21:48.368000+00:00
['Product Design', 'UX', 'UI', 'Communication', 'Design']
Live Like Fiction: thank you for helping turn my blog posts into a book!
Live Like Fiction: thank you for helping turn my blog posts into a book! Dear Friends, As a few of you may already know, I was given the opportunity to turn my viral blog posts here on Medium into a book called Live Like Fiction. It’s filled with inspiring stories, practical strategies and thought-provoking activities to help uncover the best version of yourself.🙌 The book will be released mid-July. In the interim, I am starting this newsletter to share with you stories that combine purpose, inspiration and storytelling. These are the ingredients you will need to turn your dreams into reality! Each weekly edition will include six articles relevant to the topics discussed in the book — a system I’ve created called ENGAGE.🦄 Thank you for your continued support throughout this journey. I very much hope you will enjoy these readings — I’m very excited to embark on this adventure with you!🌎 All my best, Francesco ⚡✌ E: Explore your meaning 😍 Hunter S. Thompson Typed Out The Great Gatsby & A Farewell to Arms Word for Word Do you want to be a great writer? Simple tips like “write every day” and “don’t try to imitate someone else’s voice” are common lessons, but American journalist and author Hunter S. Thompson took his dedication to the next level. “You know Hunter typed The Great Gatsby,” an awestruck Johnny Depp told The Guardian in 2011, “he’d look at each page Fitzgerald wrote, and he copied it. The entire book. And more than once. Because he wanted to know what it felt like to write a masterpiece.” N:Narrow Your Goals🎯 The inside story of how Amazon created Echo This week’s release of the Apple HomePod reminded me of an incredible Business Insider profile of how Amazon created the Echo. The detail that remains stuck in my head has to do with Jeff Bezos and “latency”. The initial Echo prototype took 2.5–3 seconds to respond to your voice. The team laid out a reasonable goal of bringing that down to 2 seconds. Bezos told the team, “I appreciate the work, but you don’t get to where it needs to be without a lot of pain. Let me give you the pain upfront: Your target for latency is one second.” That goal of 1-second latency seemed impossible to the team, but they rallied around it, and for anyone who is a fan of a voice-enabled speaker, that determination to reduce latency is certainly one of the major factors that made the Echo successful and created an entirely new product category. G:Gain Endurance💪 A toolkit for predicting the future The Economist’s Daniel Franklin has just released a book, Megatech: Technology Trends in 2050. Tom Standage, the Deputy Editor of one of my favorite publications, wrote an introduction to the book that explains how “to see what lies ahead in technology, look to the past, the present and the imagined futures of science fiction.” A:Anticipate Roadblocks🚦 Craig From Craigslist’s Second Act In between birdwatching and continuing to answer Craigslist customer service emails (yes, the 64-year-old billionaire still does that), Craig Newmark has taken on an entirely new challenge. The soft-spoken man whom many newspaper executives once accused of destroying the business model of newspapers is doggedly working to combat the new scourge of fake news.
G:Gain Endurance💪 How Ariana Grande’s Manchester Benefit Came Together So Quickly Scooter Braun might be most famous for discovering Justin Bieber singing sidewalk covers on YouTube, but the power of the network he’s built over years in the music business expresses itself in even more inspiring ways. After last month’s horrific attack in Manchester, Ariana Grande wanted to perform again in the city that experienced the tragedy. That network, built over an entire career, let Scooter Braun book Justin Bieber, Chris Martin, Katy Perry, Pharrell Williams, Robbie Williams and Liam Gallagher — all in less than 24 hours. E:Elevate Yourself🚀 Storytelling Should Be the Number One Skill You Want to Improve A theme near and dear to my heart: Jon Westenberg lays out a number of critical lessons, along with some helpful TED Talks on how to incorporate storytelling more deeply into our lives. A must-watch! Thanks for reading! Live Like Fiction Subscribe to my newsletter for exclusive updates💌, events🎉 and book giveaways📚!
https://medium.com/frankly-speaking/live-like-fiction-thank-you-for-helping-turn-my-blog-posts-into-a-book-b9cf2568398c
['Francesco Marconi']
2017-06-09 13:50:18.807000+00:00
['Inspiration', 'Storytelling', 'Life Hacking', 'Purpose', 'Strategy']
Let Your Words Rise: Why Patience is King in Writing
Let Your Words Rise: Why Patience is King in Writing How I learned that instant gratification isn’t everything Photo by Bookblock on Unsplash Three years ago, my husband and I were in the process of opening a bakery. I had no idea how to bake. Friends of mine had gone as far as to say that I was the worst cook they had ever known. This sentiment is not ideal when opening a bakery. Luckily my husband is a red seal chef, and he was willing to take on the task of teaching me. My first lesson was in baking bread. The ingredients were easy. As long as you have flour, water, yeast, and sugar you’re set. It’s the process that’s important when it comes to crafting artisan loaves. Proofing the dough is key. This is the practice of allowing the dough to rise, then once doubled in size to punch it down and let it rise again. I couldn’t understand why I had to wait for a second rise. It seemed so tedious, and I was anxious to get to the good part — eating my delicious creation. So every time, I would skip the second rise and get the right to shaping and cooking the loaves. And, every single time, my bread would fall in the oven and become a flattened mess of a dense bread-brick. “You don’t have enough patience for this!” My husband Jamie would tease me. And it was true. I didn’t have any patience. Not for bread and not in my writing career either. There have been countless times I’ve submitted articles to literary magazines only to get a reply a month later that said, “Not a completed draft. Make sure you send your most polished work, please.” Incredulous, I’d look back on the article to find incomplete thoughts and sloppy run-on sentences. My impatience had gotten the better of me once again. The problem is that I like instant gratification. Also, I probably have higher expectations of myself than I should. I think that whipping up an article in half an hour and tossing it to a publisher is a sure thing. This is due to the one or two times in my career where this strategy has worked. Sometimes good luck can be your worst enemy when it comes to crafting quality stuff. I’ve had good luck over the past few years with blogging and Facebook, which is why, for the longest time, I chose not to stray from these platforms. I’ve grown a nice following and revel in the instant gratification I get from sharing my writing on my social media channels. Why the instant gratification of social media is so rewarding: 1) We feel that we have achieved something. Despite the quality of our project, if someone “likes or positively comments” on our work, even when we know it isn’t our best, it feels as though we’ve accomplished something. 2) Someone is reading our work. Sometimes when submitting to literary journals, our article will sit in a desktop file for months on end. We want eyes on our writing, and it’s painful to know it’s sitting idle in some editor’s TBR pile. 3) The rush of viral. Nowadays, the dream is to go viral. To write an article that makes the internet rounds is the ultimate win. For a long time, I wrote for likes. I wanted the satisfaction of seeing those little blue thumbs-up twinkling on my computer screen. Was the work I was putting out quality? Nope. Did it add value to my readers? Maybe marginally with a few laughs here and there, but it certainly wasn’t my best. It didn’t provide anything more profound than a baseline story that was void of insight. It wasn’t until recently that I decided to move out of my comfort zone and join networks like Medium, where I can connect with fellow writers. 
I am grateful that I took the leap to pursue a broader education in this field of work. Merely reading and talking shop with fellow writers has made a world of difference in my prose. Why hadn’t I done this sooner? Now, I can look back on my previous writing, not with shame, but with an understanding of how a bit of added value, humour, and insight could have made these articles much better.
https://medium.com/illumination-curated/let-your-words-rise-why-patience-is-king-in-writing-108f48c9672d
['Lindsay Brown']
2020-09-11 14:16:19.791000+00:00
['Writing', 'Life Lessons', 'Writer', 'Baking', 'Patience']
Not What I Thought I was Going to Write
100 Naked Words — Day 50 Not What I Thought I was Going to Write Reflections, Ripples and Ramblings of a Restless Mind Photo Credit — Me! Lido di Camaiore, Tuscany My plan was some great piece of writing, of wisdom analysing my 50 days on 100NW. It was going to be pithy and witty, erudite and insightful. But I forgot that I wanted to write about that. So I wrote this instead: I started wondering if I was experiencing burnout. My writing has felt very forced recently and I have been struggling to draw upon my experiences in the same way. Everything feels a little stale and a bit forced. Until the poetry hits. And it hits every morning, every day. It lifts me up and carries me so far above myself that I feel as if I’m watching my body create a trail along the paths it walks. What poetry does for me is order my thoughts, even though the writing itself can be disordered, chaotic. Maybe I’m not burning out. Maybe I’m burning up…
https://medium.com/100-naked-words/not-what-i-thought-i-was-going-to-write-5424e61b769b
['Aarish Shah']
2017-03-02 18:02:03.742000+00:00
['100 Naked Words', 'Writing', 'Inspiration', 'Poetry', 'Creative Writing']
Coronavirus For Dummies
Coronavirus For Dummies Seriously, you will grant that I am right after reading this. I had to do that. A few days ago, I created a Q&A form on my personal Instagram account about the COVID-19 vaccine. I asked, “Tell me your hesitations and share your doubts with me about the vaccine.” I thought this would be an excellent opportunity to write an article, since there are different types of vaccines and our government honestly makes zero effort to educate the community. Photo by Daniel Schludi on Unsplash What I was waiting for and ready to answer: Which vaccine technology is more suitable for which risk groups? Has the vaccine completed all the trial phases already? And how? Can you give us a detailed database of the complications scientists have observed so far? Approximately when will the community be immunized by the vaccine? What is the overall plan? Which risk groups are prioritized besides healthcare workers? Should people who have already naturally acquired the infection be vaccinated? What I got instead: Vaccines cause children to be born with Down syndrome!! Is it true that it is not halal? Does the vaccine cause autism? Doesn’t this vaccine cause infertility? Is it true that the United States will give us this vaccine to make us traceable for them through a chip under our skin? This vaccine transforms men into women! It is a capitalist game and you are promoting it! Seriously, people? Still? What are we fighting for, dying for EVERY SINGLE DAY? Photo by Nick Fewings on Unsplash HI. CORONAVIRUS CAN KILL YOU. THE VACCINE WILL SAVE YOUR LIFE, JUST AS IT SAVED YOU FROM DYING OVER AND OVER AGAIN WHILE YOU WERE GROWING UP. YOU WOULD BE DEAD WITHOUT VACCINES! ONCE IT IS AVAILABLE FOR YOU TO GET, I STRONGLY RECOMMEND THAT YOU GET VACCINATED. AND YES, JUST LIKE NO ONE EVER FORCED YOU BEFORE, NO ONE WILL OR CAN EVER FORCE YOU TO DO ANYTHING WITHOUT YOUR CONSENT. WE ONLY WANT YOU TO BE EDUCATED AND CAPABLE OF MAKING YOUR OWN CHOICES. AND ALIVE, PREFERABLY. WE ONLY WANT WHAT IS BEST FOR YOU. WE CARE ABOUT YOU, EVEN IF YOU DO NOT.
https://medium.com/illumination/coronavirus-for-dummies-4e29bca4d4f1
['Eden Kunter']
2020-12-27 14:44:16.340000+00:00
['Dummies Guide', 'Coronavirus', 'For Dummies', 'Covid 19 Vaccine', 'Covid-19']
Implementing Queue In Go
Implementing Queue In Go This article discusses the queue data structure in computer science: what the queue data structure is, its fundamental operations, and an example of a slice-based queue implementation. Defining Queue as Abstract Data Type In computer science, a queue is an abstract data type that represents a linear collection of items. An item can only be inserted at one end, the rear, and removed from the other end, the front, according to the FIFO (first-in, first-out) principle. Pic. 1. An illustration of the Queue model An analogy to help you understand the queue principle is to think of a queue as a line of people waiting for service. A newcomer goes to the end of the line, while the person at the front gets served. The underlying data structure for a queue could be an array or a linked list. Declaring Queue Type As with the stack, we declare the queue as a struct with items and mutex fields. Operations There are two fundamental operations for the queue: Enqueue() and Dequeue(). queue.Enqueue(item Item) Inserts the item at the end (rear) of the queue. Pic. 2. An illustration of the Enqueue() method For inserting the new item into the queue we use the append() built-in function, which adds a new item at the end of the slice. queue.Dequeue() Item Removes the item at the beginning (front) of the queue and returns the removed item. Pic. 3. An illustration of the Dequeue() method Since the function Dequeue() returns the removed item, before re-slicing we initialize the variable lastItem and store the removed item in it: lastItem := queue.items[0] And then we re-slice the queue with the following statement: queue.items = queue.items[1:] As a result, the queue keeps all the items from the second one (index 1) to the last; the first item in a queue has index zero. Also, applying this method to an empty queue must return nil, so before removing an item we need to check the length of the queue. In the following example, the method returns nil if the length of the queue is equal to zero: main() Create a new instance of the queue in the main.go file. Add new items with the Enqueue() method. In the example below we created a queue that consists of several items: 5←4←3←2←1 To see the full queue, use Dump(). The front item in this queue is 5, and the rear is 1. To empty the queue, use Reset(). To check whether the queue is empty, use IsEmpty(): it returns true if the queue is empty and false if the queue is non-empty.
https://jkozhevnikova.medium.com/implementing-queue-in-go-5a96b369ca1c
['Jane Kozhevnikova']
2020-08-26 16:04:03.798000+00:00
['Programming', 'Software Engineering', 'Golang', 'Data Structures', 'Computer Science']
Engineered Microbes Clean Up Copper & DNA Cloning Goes Full-Auto
NEWSLETTER Engineered Microbes Clean Up Copper & DNA Cloning Goes Full-Auto This Week in Synthetic Biology (Issue #17) This Thanksgiving, I’m grateful to you, my readers. In the convoluted chaos of the internet, I know that you could be reading many things. Thank you for reading this. Reach out on Twitter with feedback and questions. Receive this newsletter every Friday morning! Sign up here: https://synbio.substack.com/. DNA Cloning Goes Full-Auto It’s midnight in the lab. All your co-workers have gone home. You’re alone at the bench, fearful that a security guard will arrive at any moment and kick you out. If you, like me, have found yourself in this situation, it may be because you have failed — repeatedly — to clone a particularly tricky DNA sequence. But that could happen a lot less in the near future. Robots are coming online, promising to automate entire experiments. So, too, could robots be used to create and clone custom DNA sequences. Unfortunately, a specific step in the cloning process — blasting cells with electricity to coax them into taking up DNA — has proven troublesome for machines. A new study, published Nov. 24 in ACS Synthetic Biology, offers a fully automated protocol for DNA cloning that relies on “natural transformation”, rather than electroporation. Sang Yup Lee’s lab, at the Korea Advanced Institute of Science and Technology, demonstrated that Acinetobacter baylyi, a type of bacteria, efficiently take up DNA, without the need for specialized equipment. Researchers already knew that A. baylyi were “naturally competent”, but I think this is the first time that these bacteria have been incorporated into a fully automatic, DNA cloning pipeline. The team simply dropped DNA sequences into liquid with A. baylyi, and the cells gobbled them up and began to reproduce, making tens, then hundreds, then thousands of copies of the DNA. “No DNA purification, competence induction, or special equipment is required,” the authors write. “Up to 10,000 colonies were obtained per microgram of DNA, while the number of false positive colonies was low.” The team used the new protocol to clone 21 biosynthetic gene clusters, with lengths ranging from 1.5 to 19 kb, and showed that the protocol was relatively consistent when performed by an Opentrons robot. Check out the paper! Link Engineered Bacteria Clean Up Copper A team of researchers at the University of York and Umeå University, in Sweden, have engineered E. coli bacteria to accumulate heaps of copper. To do that, the team fused seven different snippets of proteins — each thought to bind copper ions — to a protein called Maltose Binding Protein, or MBP. These protein chimeras, when expressed inside of the bacterial cells, “conferred tolerance to high concentrations of copper sulphate,” write the authors. The tolerance to copper was so high, in fact, that some bacteria could withstand concentrations “160-fold higher than the recognised EC50 toxic levels of copper in soils.” The researchers also crunched some data, on computers, and found that these copper-binding proteins might be able to bind other types of metals. The authors suggest that these bacteria could “be adapted for the removal of other hazardous heavy metals or the bio-mining of rare metals.” Let’s mine some asteroids with juiced up space bacteria! This work was published in Scientific Reports, and is open access. 
Link Knocked Down Genes Reveal How Metabolism Heals Drop a bacterium into a new environment — one with more sugar, or different competitors — and its metabolism will quickly adapt. In fact, the metabolism of a bacterium can change so much, when placed in a new environment, that the total mass of its various enzymes can double, according to a 2016 study by Schmidt et al. in Nature Biotechnology. Okay, so lots of protein levels change when a cell feels stressed out. But what happens when just one gene, encoding one enzyme, is repressed, or knocked down? How does the rest of the cell’s intricate metabolism shift in response to account for that single, “deficient” gene? It turns out that cells shift their cellular resources in intriguing ways to “heal” a broken link in the metabolism chain. A new study, published Nov. 24 in Cell Systems, used CRISPRi (CRISPR interference) to repress one gene at a time in E. coli cells. Researchers used an inducible version of CRISPRi that can be turned ‘on’ at will “to investigate how E. coli metabolism responds to decreases of enzyme levels.” For the study, the team repressed 1,515 different genes involved in metabolism, creating 4–6 sgRNAs for each gene; that resulted in a total of 7,177 unique strains of E. coli. They then “induced” the CRISPRi system, and measured “the time delay between inducer addition and appearance of fitness defects.” 30 of the strains were studied in greater detail, unveiling some interesting examples of metabolisms “adjusting” to account for a deficient enzyme in a pathway. “Overall, our results highlight the central role of regulatory metabolites in maintaining robustness against ever-changing concentrations of enzymes in a cell,” the authors wrote. To learn a bit more about this study (in general language), check out the press release. This study is open access. Link Bacteria Evolved in Lab to Study Antibacterial Resistance Drug-resistant bacteria are a growing issue, and researchers are racing to develop new antibiotics to fight back. To make better antibacterial compounds, scientists must first understand how antibacterial resistance emerges. Published Nov. 24 in Nature Communications, a team — from RIKEN and the University of Tokyo — used an automated, robotic system to repeatedly passage bacteria over more than 250 generations, while grown in the presence of 95 different antibacterial chemicals. In other words, they put cells into evolutionary overdrive. The researchers analyzed the evolved cells, studying how gene expression profiles changed for each strain, and then fed their massive dataset into a machine learning algorithm. They identified several gene expression “signatures” that were associated with drug resistance in the microbes. Read the press release on this open access study. Link Super Accurate Insertion of DNA Chunks with CRISPRi A special type of CRISPR-Cas system, harnessed from Vibrio cholerae bacteria, enabled researchers to integrate large chunks of DNA, up to 10,000 bases in length, at specific sites in the genome of bacteria. The method is supposedly 100 percent efficient. The team, from Columbia University, call their method INTEGRATE (insertion of transposable elements by guide RNA–assisted targeting). After demonstrating their method by inserting DNA at one location in the genome, they scaled up their work in a massive way: By expressing multiple guide RNAs in the cells, the team was able to insert DNA chunks at three places in the genome at once. This work was published in Nature Biotechnology. Link
https://medium.com/bioeconomy-xyz/engineered-microbes-clean-up-copper-dna-cloning-goes-full-auto-7ee9bc2a2798
['Niko Mccarty']
2020-11-27 11:52:33.307000+00:00
['Science', 'Tech', 'Research', 'Newsletter', 'News']
Art Shapes the Artist, Not the Other Way Around
Images of ‘Purple Mountain Majesty’ existed only in song, film, and on free bank calendars while growing up below sea-level in New Orleans. I was a young artist fresh out of grad school before visiting Vermont and seeing its mountainous landscape for the first time. In the early evening before dinner, I took a solitary walk on one of the many trails surrounding the country inn. The valley spread out before me, a line of mountains, low light outlining their shape. How beautiful the shape of mountains, I thought. Their consequence due to an enduring process — wind, rain, ice, gravity, eons of seismic tectonic upheaval. Quite the vast conceptual time machine that makes mountain-art. No wonder they were home to gods. It’s obvious, but my realization and its significance at that particular moment and place was revelatory: No two mountain shapes are alike! Aside from personal taste, no one mountain is more or less beautiful or magnificent than the next, each completely and utterly itself. Thus, a mountain can stand outside the parameters of beauty while becoming impossibly so. It cascaded down from there to the shape of trees, a wave, a flower, a seedpod, a flake of snow. For the first time, I had the understanding that this is why we look at nature with such awe. Because it has the power to teach us our shape. As a closeted boy, I wasn’t supposed to become my shape. I was taught to be what others expected of me. My outline had already been drawn; my job was to color inside the lines. “Mountain, teach me your shape,” I begged out loud on that walk in Vermont, tears streaming down my cheeks.
https://medium.com/personal-growth/art-shapes-the-artist-not-the-other-way-around-2bd8d69e9a1c
['Bradley Wester']
2020-12-29 17:07:54.150000+00:00
['Innovation', 'Art', 'Inspiration', 'LGBTQ', 'Creativity']
23 Data Science Techniques You Should Know!
23 Data Science Techniques You Should Know! Save your precious time by using these hacks Gif (Source and credits: Giphy) Data scientists are high in demand. The job of a data scientist is not easy, so it’s important to know a few data science hacks that can save your precious time and make your life simpler. In this post, I’m going to cover 23 data science hacks that I have used. 1. Image Augmentation: Image Augmentation is a very powerful technique that is used to create new and different images from the existing images. It is used to address issues associated with limited data in machine learning. Import all the necessary libraries : # importing all the required libraries %matplotlib inline import skimage.io as io from skimage.transform import rotate import numpy as np import matplotlib.pyplot as plt Read Image : img= io.imread('/Users/priyeshkucchu/Desktop/image.jpeg') Define Augment function : def augment_img(img): fig,ax = plt.subplots(nrows=1,ncols=5,figsize=(22,12)) ax[0].imshow(img) ax[0].axis('off') ax[1].imshow(rotate(img, angle=45, mode = 'wrap')) ax[1].axis('off') ax[2].imshow(np.fliplr(img)) ax[2].axis('off') ax[3].imshow(np.flipud(img)) ax[3].axis('off') ax[4].imshow(np.rot90(img)) ax[4].axis('off') augment_img(img) Output: Augmented Image 2. Pandas Boolean Indexing It’s a type of indexing method in which we can select subsets of data based on the actual values of the data in the DataFrame using a boolean vector to filter the data. Import necessary libraries import pandas as pd Load Data ytdata= pd.read_csv('/Users/priyeshkucchu/Desktop/USvideos.csv') Boolean Indexing — Show only those rows where category_id is 24 and no of likes is greater than 12000 ytdata.loc[(ytdata['category_id']==24)& (ytdata['likes']>12000),\["category_id","likes"]].head() Output 3. Pandas Pivot Table In Pandas, the pivot table function takes a data frame as input and performs grouped operations that provide a multidimensional summarization of the data. Import necessary libraries : import pandas as pd import numpy as np Load Data : loan = pd.read_csv('/Users/priyeshkucchu/Desktop/loan_train.csv', \ index_col = 'Loan_ID') Show data : loan.head() Loan Data Pivot Table : pivot = loan.pivot_table(values = ['LoanAmount'],index = ['Gender', \'Married','Dependents', 'Self_Employed'], aggfunc = np.median) Output pivot table 4. Pandas Apply In Pandas, the .apply() function helps to segregate data based on the conditions as defined by the user. Import necessary libraries import pandas as pd Load Data ytdata= pd.read_csv('/Users/priyeshkucchu/Desktop/USvideos.csv') Function Missing Values — def missing_values(x): return sum(x.isnull()) For missing values in the columns — print(" Missing values in each column :") ytdata.apply(missing_values,axis=0) Output — Missing values in each column : video_id 0 trending_date 0 title 0 channel_title 0 category_id 0 publish_time 0 tags 0 views 0 likes 0 dislikes 0 comment_count 0 thumbnail_link 0 comments_disabled 0 ratings_disabled 0 video_error_or_removed 0 description 502 dtype: int64 For missing values in the rows — print(" Missing values in each row :") ytdata.apply(missing_values,axis=1).head() Output — Missing values in each row : 0 0 1 0 2 0 3 0 4 0 dtype: int64 5. Pandas Count In pandas, the count function helps in counting Non-NA cells for each column or row. 
Import necessary libraries : import pandas as pd Load Data : ytdata= pd.read_csv('/Users/priyeshkucchu/Desktop/USvideos.csv') Count no of data points in each column : ytdata.count(axis=0) Output — Count of data points in each column Count no. of null data points in the Description column ytdata.description.isnull().value_counts() Output — No of null data points in the description column 6. Pandas Crosstab In Pandas, this function is used to compute a simple cross-tabulation of two or more factors. Import necessary libraries : import pandas as pd Load Data : data = pd.read_csv('/Users/priyeshkucchu/Desktop/loan_train.csv',\ index_col = 'Loan_ID') Cross tab between Credit History and Self Employed columns in the loan data : pd.crosstab(data["Credit_History"],data["Self_Employed"],\ margins=True, normalize = False) Output 7. Pandas str.split In Pandas, str.split function is used to provide a method to split string around a passed separator or a delimiter. Import necessary libraries : import pandas as pd Create a Data Frame : df = pd.DataFrame({'Person_name':['Naina Chaturvedi', 'Alvaro Morte', 'Alex Pina', 'Steve Jobs']}) df Data Frame df Extract First and Last Names: df['first_name'] = df['Person_name'].str.split(' ',expand = True)[0] df['last_name'] = df['Person_name'].str.split(' ', expand = True)[1] df Output — Extract First and Last Name using pandas str.splt() 8. Extract E-mail from text Import the necessary libraries and initialize the text : import re Enquiries_text = 'For any enquiries or feedback related to our product,\service, marketing promotions or other general support \ matters. [email protected] ’' Extract email using Regular Expression : re.findall(r"([\w.-]+@[\w.-]+)", Enquiries_text) Output — Extract Email from Text 9. Pandas melt In pandas, melt function is used to reshape the data frame to a longer form. Import necessary libraries : import pandas as pd Create a Data Frame : df = pd.DataFrame({'Person_Name': {0: 'Naina', 1: 'Alex', 2: \'Avarto'}, 'CourseName': {0: 'Masters', 1: 'Graduate', 2: \'Graduate'}, 'Age': {0: 27, 1: 20, 2: 22}}) Melt two data frames : m1= pd.melt(df, id_vars =['Person_Name'], value_vars =['CourseName', 'Age']) m1 Output — m1 Dataframe m2= pd.melt(df, id_vars =['Person_Name'], value_vars =['Age']) m2 Output — m2 Dataframe 10. Extract Continuous and categorical data Import necessary libraries : import pandas as pd Load Data : Loan_data = pd.read_csv('/Users/priyeshkucchu/Desktop/loan_train.csv') Loan_data.shape Output: (614, 13) Check data types of columns : Loan_data.dtypes Output — Data Types of columns in Loan Data Extract columns containing only categorical data: categorical_variables = Loan_data.select_dtypes("object").head() categorical_variables.head() Extract columns containing only integer data: integer_variables = Loan_data.select_dtypes("int64") integer_variables.head() Extract columns containing only numerical data: numeric_variables = Loan_data.select_dtypes("number") numeric_variables.head() 11. Pandas Eval function for efficient operations The eval() function in Pandas uses string expressions to efficiently compute operations using a Data Frame. 
Import necessary libraries : import pandas as pd import numpy as np Initialize no_rows, no_cols: no_rows, no_cols = 100000, 100 r = np.random.RandomState(50) df1, df2, df3, df4 = (pd.DataFrame(r.rand(no_rows, no_cols)) for i in range(4)) Without Eval function %timeit df1 + df2 * df3 - df4 Output without Eval function With Eval function — The eval() version of this expression is about 50% faster and uses much less memory %timeit pd.eval('df1 + df2 * df3 - df4') Output with Eval function 12. Pandas Unique In pandas, using unique function values that are unique are returned in order of appearance. Import necessary libraries : import pandas as pd import numpy as np Load Data : crime_data = pd.read_csv("/Users/priyeshkucchu/Desktop/crime.csv",\ engine='python') Show data : crime_data.head() Show Unique values in the District Codes Column: crime_data["DISTRICT"].unique() Output — Unique values in the District codes column 13. Ipython Interactive Shell Import necessary libraries : from IPython.core.interactiveshell import InteractiveShell InteractiveShell.ast_node_interactivity = "all" import pandas as pd Load Data : data = pd.read_csv('/Users/priyeshkucchu/Desktop/loan_train.csv') Run commands simultaneously: data.shape data.head() data.dtypes data.info() Output 14. Pandas Merge In pandas, the merge function is used to join two datasets together based on common columns between them. Import necessary libraries : import pandas as pd Initialize Data Frames : df1 = pd.DataFrame({'Left_key': ['Naina', 'Avarto', 'Alex', \'Naina'],'value': [1, 2, 3, 5]}) df2 = pd.DataFrame({'Right_key': ['Naina', 'Avarto', 'Alex', \'Naina'],'value': [5, 6, 7, 8]}) DataFrames d1 and d2 Merge the data frames : df1.merge(df2, left_on='Left_key', right_on='Right_key', \ suffixes=('_Left', '_Right')) Output — Merge the data frames 15. Parse dates in read_csv() to change data type to DateTime Import necessary libraries : import pandas as pd Load Data and print the data types of crime data columns: crime_data = pd.read_csv("/Users/priyeshkucchu/Desktop/crime.csv", \ engine='python') crime_data.dtypes Parse Dates in read_csv(): crime_data = pd.read_csv("/Users/priyeshkucchu/Desktop/crime.csv", engine='python',parse_dates = ["OCCURRED_ON_DATE"]) crime_data.dtypes Output — Parse dates in read_csv for column OCCURRED_ON_DATE 16. Date Parser Import necessary libraries : import datetime import dateutil.parser Parse Dates: input_date = '04th Dec 2020' parsed_date = dateutil.parser.parse(input_date) Output date in the designated format : op_date = datetime.datetime.strftime(parsed_date, '%Y-%m-%d') print(op_date) Output Date 17. Invert a Dictionary Create a dictionary : l_dict = {'Person_Name':'Naina', 'Age' : 27, 'Profession' : 'Software Engineer' } Original Dictionary : Invert dictionary : invert_dict = {values:keys for keys,values in l_dict.items()} invert_dict 18. Pretty Dictionaries Create a dictionary : l_dict = {'Student_ID': 4,'Student_name' : 'Naina', 'Class_Name': '12th' ,'Student_marks' : {'maths' : 92, 'science' : 95, 'computer science' : 100, 'English' : 91} } Original Dictionary : Pretty dictionary using pprint: import pprint pprint.pprint(l_dict) Pretty Dictionary 19. Convert List of list to list Import necessary libraries: import itertools Create a list : nested_list = [['Naina'], ['Alex', 'Rhody'], ['Sharron', 'Avarto', \'Grace']] nested_list Convert the list to list : converted_list = list(itertools.chain.from_iterable(nested_list)) print(converted_list) 20. 
Removing Emojis from Text Emoji_text = 'For example, 🤓🏃‍🏢 could mean “Iam running to work.”' final_text=Emoji_text.encode('ascii', 'ignore').decode('ascii') print("Raw tweet with Emoji:",Emoji_text) print("Final tweet withput Emoji:",final_text) Output — Remove Emojis from Text 21. Apply Pandas Operations in Parallel It’s used to distribute your pandas computations over all available CPUs on your computer to get a significant increase in the speed. Install pandarallel : !pip install pandarallel Import necessary libraries: %load_ext autoreload %autoreload 2 import pandas as pd import time from pandarallel import pandarallel import math import numpy as np import random from tqdm._tqdm_notebook import tqdm_notebook tqdm_notebook.pandas() Initialize pandarallel : pandarallel.initialize(progress_bar=True) Dataframe: df = pd.DataFrame({ 'A' : [random.randint(8,15) for i in range(1,100000) ], 'B' : [random.randint(10,20) for i in range(1,100000) ] }) Trigono function: def trigono(x): return math.sin(x.A**2) + math.sin(x.B**2) + math.tan(x.A**2) Without parallelization: %%time first = df.progress_apply(trigono, axis=1) With parallelization: %%time first_parallel = df.parallel_apply(trigono, axis=1) Output — Apply Panda operations in parallel 22. Pandas Cut and qcut In Pandas, cut command creates equispaced bins but the frequency of samples is unequal in each bin qcut command creates unequal size bins but the frequency of samples is equal in each bin. Import necessary Libraries: import pandas as pd import numpy as np Dataframe: df_rollno = pd.DataFrame({'Roll No': np.random.randint(20, 55, 10)}) df_rollno Using Pandas cut function : df_rollno['roll_no_bins'] = pd.cut(x=df_rollno['Roll No'], bins=[20, 40, 50, 60]) Output Using Pandas qcut function: pd.qcut(df_rollno['Roll No'], q=6) Output 23. Pandas Profiling It’s used to generates profile reports from a pandas DataFrame or data sheet. Install Pandas Profiling: pip install pandas-profiling Import necessary libraries: import pandas as pd import pandas_profiling Load Data: Youtube_data = pd.read_csv('/Users/priyeshkucchu/Desktop/USvideos.csv') Generate Profiling report: profiling_report = pandas_profiling.ProfileReport(Youtube_data) Profiling report — Overview Profiling report — Interactions
https://medium.com/ai-in-plain-english/23-data-science-techniques-you-should-know-61bc2c9d1b3a
['Naina Chaturvedi']
2020-12-23 12:11:11.145000+00:00
['Programming', 'Machine Learning', 'Data Science', 'Artificial Intelligence', 'Tech']
Improving Quality of Photos on the Ubcoin Marketplace Using Neural Networks
To improve the quality of photos uploaded by users, the Ubcoin Market team has implemented Super Resolution technology. After a listing is created by a user, the image is automatically sent to the artificial intelligence module, where it is processed by a neural network to improve the image quality and make the listing more attractive to potential buyers. The image quality of a product affects the level of sales in marketplaces. Large manufacturers and experienced sellers often have the advantage of being able to create high-quality content, while smaller sellers are often limited by the quality of the cameras on their mobile phones. Ubcoin Market is improving low image quality using Super Resolution technology. There are four main approaches to image enhancement: prediction models, edge-based methods, image statistical methods and patch-based (or example-based) methods. The best quality is provided by patch-based (or example-based) methods. In [1], it was first proposed to move away from the traditional pipeline of separately designed processing steps and instead use a single convolutional neural network that operates end-to-end. The approach based on the convolutional neural network combines the following functions: 1) Patch extraction and representation. 2) Non-linear mapping. 3) Reconstruction. This CNN-based approach improved quality in comparison with previously known methods while maintaining a high speed of operation. Deep neural network architectures [2] make it possible to restore a “spoiled” image. This image-restoration capability is used in the Ubcoin Market to remove watermarks and other image defects. The application of a GAN (generative adversarial network) improves texture quality and makes the processed images so photorealistic that it is visually difficult to distinguish them from the originals. Super Resolution technology is already used in image and video processing. For example, Yandex has improved the quality of old movies, and some video companies use Super Resolution to improve the quality of images in cloud-based video surveillance so that customers can use simpler and cheaper cameras while preserving image quality. The use of Super Resolution technology does not require any additional actions from the Ubcoin Market user. The image from a freshly created listing is automatically sent to the artificial intelligence module, where it is processed by the neural network to improve the image quality and make the listing more attractive to potential buyers. [1] arxiv.org/pdf/1501.00092v3.pdf Image Super-Resolution Using Deep Convolutional Networks [2] arxiv.org/pdf/1606.08921.pdf Image Restoration Using Convolutional Auto-encoders with Symmetric Skip Connections
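For readers who want to see what the three functions above look like in code, here is a minimal sketch of the SRCNN architecture from reference [1] (the layer sizes follow that paper; this is an illustration, not the Ubcoin production model, which the post says also adds GAN-based refinement):

```python
# Minimal SRCNN-style sketch after [1]: three convolutions for patch extraction,
# non-linear mapping, and reconstruction. The input is a low-resolution photo that
# has already been upscaled (e.g. bicubically) to the target size.
import torch
import torch.nn as nn

class SRCNN(nn.Module):
    def __init__(self, channels=3):
        super().__init__()
        self.extract = nn.Conv2d(channels, 64, kernel_size=9, padding=4)      # 1) patch extraction
        self.map = nn.Conv2d(64, 32, kernel_size=1)                           # 2) non-linear mapping
        self.reconstruct = nn.Conv2d(32, channels, kernel_size=5, padding=2)  # 3) reconstruction
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        return self.reconstruct(self.relu(self.map(self.relu(self.extract(x)))))

upscaled_photo = torch.randn(1, 3, 256, 256)   # stand-in for an upscaled listing photo
print(SRCNN()(upscaled_photo).shape)           # torch.Size([1, 3, 256, 256])
```

Such a network is trained on pairs of degraded and original images, which is also how the restoration models in [2] are set up for tasks like watermark and defect removal.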
https://medium.com/ubcoin-blog/improving-quality-of-photographic-content-on-the-ubcoin-marketplace-using-neural-networks-e2a8f7e5c4a6
['Ubcoin. Cryptocurrency Reimagined']
2018-09-24 16:39:42.943000+00:00
['Bitcoin', 'Ubc', 'Ubcoin', 'Ubcoin Product', 'Artificial Intelligence']
Configure and Run a Docker Container for Redis and Use it for Python
Configure and Run a Docker Container for Redis and Use it for Python Containerize your Python project If you have been working as a programmer for a while, you might already have felt the need for some kind of caching mechanism in your system. This is where the Redis cache comes into play. Redis is an in-memory key-value data store and one of the most popular tools used for caching. In this article, we will go from setting Redis up with Docker to using it from Python. The article can be divided into the following three parts: Setting up a Docker container for Redis. Playing around with the Redis command line using some basic commands. Integrating Redis with Python code. Let’s go through these one by one.
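The full walkthrough is in the article linked below; as a compressed sketch of the three parts it lists (the container and key names here are placeholders, not necessarily the ones the article uses), the end-to-end flow looks roughly like this:

```python
# Sketch of the three steps listed above (placeholder names). Steps 1 and 2 run in a shell:
#
#   docker run -d --name local-redis -p 6379:6379 redis       # 1) Redis in a Docker container
#   docker exec -it local-redis redis-cli SET greeting hello  # 2) poke it from redis-cli
#
# Step 3: talk to the same container from Python with the redis-py client (pip install redis).
import redis

r = redis.Redis(host="localhost", port=6379, db=0)
r.set("page:/home", "<cached html>", ex=60)   # cache a rendered page for 60 seconds
print(r.get("page:/home"))                    # b'<cached html>'
print(r.ttl("page:/home"))                    # seconds left before the key expires
```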
https://medium.com/better-programming/dockerizing-and-pythonizing-redis-41b1340979de
['Ashutosh Karna']
2020-01-08 14:52:01.299000+00:00
['Programming', 'Containers', 'Redis', 'Python', 'Docker']
Introduction to Logistic Regression
Introduction to Logistic Regression Part 1 Regression is basically a statistical approach to finding the relationship between variables. Regression methods are an important part of any data analysis concerned with describing the relationship between a response variable (output/outcome) and one or more explanatory variables (inputs). Quite often the outcome variable is discrete, i.e. it takes on two or more possible values. Before starting to study the logistic regression model, it is important to understand the goal of an analysis using this model: “it is the same as that of any other regression model used in statistics, that is, to find the best fitting and most parsimonious, interpretable model to describe the relationship between an outcome (dependent or response) variable and a set of independent (predictor or explanatory) variables”. Independent variables are also known as covariates. In the linear regression model, the outcome variable is assumed to be continuous. What distinguishes a logistic regression model from the linear regression model is that the outcome variable in logistic regression is binary, or dichotomous. This difference between logistic and linear regression is reflected both in the form of the model and in its assumptions. Once this difference is accounted for, the methods applied in an analysis using logistic regression follow the same basic principles used in linear regression. For instance, suppose we have a ‘Coronary Heart Disease’ dataset with the following columns: age in years (AGE), presence or absence of evidence of significant coronary heart disease (CHD), an identifier variable (ID) and an age group variable (AGE_GROUP). The outcome (dependent/response) variable is CHD, which is coded with a value of “0” to indicate that CHD is absent, or “1” to indicate that it is present in the individual. In general, any two values could be used, but working with 0 and 1 is more convenient. To explore the relationship between AGE and the presence or absence of CHD in this group, let’s draw a scatter plot of the outcome versus the independent variable. We will use this scatter plot to examine the relationship between the outcome and the independent variable. Interestingly, all data points fall on one of two parallel lines representing the absence of CHD (y = 0) or the presence of CHD (y = 1). Although this plot does represent the binary nature of the outcome variable pretty clearly, it does not provide a clear picture of the nature of the relationship between CHD and AGE. The main problem here is that the variability in CHD at all ages is large, which makes it difficult to see any functional relationship between AGE and CHD. One common method of removing some of this variation while preserving the structure of the relationship between the outcome and the independent variable is to create intervals for the independent variable and compute the mean of the outcome variable within each group. We can achieve this with our dataset by grouping age into categories (AGE_GROUP). Doing so, we have, for each age group, the frequency of occurrence of each outcome, as well as the percentage with CHD present. Plotting these group means as a scatter plot, a clearer picture of the relationship begins to develop: as age increases, the proportion (mean) of individuals with signs of CHD increases. This plot provides noteworthy insight into the relationship between CHD and AGE, but the functional form of this relationship still needs to be determined.
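As a quick illustration of that grouping step (with made-up numbers, not the article’s actual CHD data), binning AGE and averaging the 0/1 CHD flag within each bin gives exactly the per-group proportions being described:

```python
# Synthetic stand-in for the CHD/AGE data, only to show the binning-and-averaging step.
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
age = rng.integers(20, 70, size=100)
chd = (rng.random(100) < (age - 20) / 60).astype(int)   # CHD more likely at higher ages
df = pd.DataFrame({"AGE": age, "CHD": chd})

df["AGE_GROUP"] = pd.cut(df["AGE"], bins=[19, 29, 34, 39, 44, 49, 54, 59, 69])
summary = df.groupby("AGE_GROUP")["CHD"].agg(["count", "sum", "mean"])
print(summary)   # 'mean' estimates the conditional mean E(Y|x) within each age group
```

Plotting the "mean" column against the group midpoints reproduces the S-shaped pattern discussed next.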
Here we have two important differences between linear and logistic regression. The first concerns the nature of the relationship between the outcome and the independent variables. In any regression problem, the key quantity is the mean value of the outcome variable given the value of the independent variable. This quantity is called the conditional mean and is expressed as E(Y|x), where Y is the outcome variable and x is a specific value of the independent variable. The quantity E(Y|x) is read "the expected value of Y, given the value x". In linear regression, we assume this mean may be expressed as a linear equation, such as E(Y|x) = β0 + β1x. It is possible for E(Y|x) to take on any value as x ranges between −∞ and +∞. In our dataset, the column named "Mean" provides an estimate of E(Y|x). The estimated values plotted in Plot 2 are close enough to the true values of E(Y|x) to provide a reasonable estimate of the functional relationship between CHD and AGE. With a dichotomous outcome, however, the conditional mean must be greater than or equal to zero and less than or equal to one (i.e., 0 ≤ E(Y|x) ≤ 1). The change in E(Y|x) per unit change in x becomes progressively smaller as the conditional mean gets closer to zero or one, so the curve is said to be S-shaped. The model we use is based on the logistic distribution, for two reasons: first, it is an extremely flexible and easily used function; second, its model parameters provide the basis for estimates of effect. The logistic regression model we use is E(Y|x) = π(x) = e^(β0 + β1x) / (1 + e^(β0 + β1x)). In words, logistic regression is defined by "the exponential of a linear combination of inputs and coefficients divided by one plus the same exponential"; that form seems complex, but it simply keeps the prediction between 0 and 1. Logistic regression predicts the probability of the outcome variable being true. It can be considered the classification counterpart of linear regression. A transformation of E(Y|x) that is central to our study of logistic regression is the logit transformation, g(x) = ln[π(x) / (1 − π(x))] = β0 + β1x. This transformation has many of the desirable properties of a linear regression model: the logit, g(x), is linear in its parameters, may be continuous, and may range from −∞ to +∞. The second difference between the linear and logistic regression models concerns the conditional distribution of the outcome variable. In the linear regression model, the error term (ε) associated with the equation is assumed to follow a normal distribution with mean zero. However, in logistic regression, the quantity ε may assume one of only two possible values. If y = 1 then ε = 1 − E(Y|x) with probability E(Y|x), and if y = 0 then ε = −E(Y|x) with probability 1 − E(Y|x). Thus, ε has a distribution with mean zero and variance equal to E(Y|x)[1 − E(Y|x)]. That is, the conditional distribution of the outcome variable follows a binomial distribution with probability given by the conditional mean, E(Y|x). In a nutshell, in regression analysis when the outcome variable is dichotomous: 1. The model for the conditional mean of the regression equation must be bounded between zero and one. 2. The binomial, not the normal, distribution describes the distribution of the errors and is the statistical distribution on which the analysis is based. 3. The principles that guide an analysis using linear regression also guide us in logistic regression.
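Since the model is described mostly in words above, the sketch below implements the two transformations it refers to, the logistic response π(x) and the logit g(x), and fits them on hypothetical AGE/CHD data with scikit-learn. The observations are invented for illustration, and scikit-learn's default regularization means the fitted coefficients are only an approximation of a textbook maximum-likelihood fit.

import numpy as np
from sklearn.linear_model import LogisticRegression

def pi(x, b0, b1):
    # Logistic response: E(Y|x) = exp(b0 + b1*x) / (1 + exp(b0 + b1*x)), always between 0 and 1
    return np.exp(b0 + b1 * x) / (1.0 + np.exp(b0 + b1 * x))

def logit(p):
    # Logit transformation: g(x) = ln(p / (1 - p)), linear in the parameters
    return np.log(p / (1.0 - p))

# Hypothetical AGE (years) and CHD (0 = absent, 1 = present) observations
age = np.array([[23], [29], [34], [38], [43], [47], [52], [56], [61], [67]])
chd = np.array([0, 0, 0, 1, 0, 1, 1, 1, 1, 1])

model = LogisticRegression().fit(age, chd)
b0, b1 = model.intercept_[0], model.coef_[0][0]
print("estimated beta0, beta1:", b0, b1)
print("P(CHD = 1 | AGE = 50):", pi(50, b0, b1))
print("logit at AGE = 50:", logit(pi(50, b0, b1)))  # equals b0 + b1*50 up to floating-point error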
https://medium.com/ai-in-plain-english/introduction-to-the-logistic-regression-part-1-9f004309e6c
['Bhanu Soni']
2020-07-07 21:39:06.702000+00:00
['Machine Learning', 'Artificial Intelligence', 'Logistic Regression', 'Linear Regression', 'Regression']
The Science of What Makes People Share Your Content Online
The Science of What Makes People Share Your Content Online By Heike Young "Consumers don't care about you at all. They just don't care. Part of the reason is, they've got way more choices than they used to — and way less time. The thing that's going to decide what gets talked about, what gets done, what gets changed, what gets purchased, what gets built… is, is it remarkable? Remarkable is a really cool word. We think it just means neat. But it also means worth making a remark about." — Seth Godin Do your customers and prospects want to make a remark about your content, or is it living in a vacuum? If you can get people to share your company's content of their own free will, you're winning as a marketer, and your ideas will spread like wildfire. So how can we as marketers better encourage people to share our content? What's the science explaining why people share? What makes us want to click that little "tweet this" button on a company's blog post? That's what we're talking about in this week's episode of the Marketing Cloudcast, the marketing podcast from Salesforce. To get to the bottom of it, I talked to two experts from very different fields. Here are a few takeaways from this episode, which you can preview here. For the full conversation that's filled with many more insights, subscribe on Apple Podcasts, Google Play Music, Stitcher, or wherever you listen to podcasts. Our brains are wired to gossip and share details about other people's lives. According to Susan, humans have an inherent, built-in compulsion to always be discussing with other people "what the heck is going on. Who did what, with whom, why, who's at the top of the hierarchy, who's on the way down of the hierarchy." By bringing a human element into your content, you can capitalize on your readers' natural desire to share it with others. Consider creating content that shares real human perspectives and tells their stories. People are especially likely to share surprising content. "The research tells us that people want to share things that are surprising — things that will make others feel," Susan explains. Even if your content is technical or dry, find ways to add an emotional punch that will surprise readers and, thus, compel them to share. She continues, "It's not like I sit there and say, 'I think I'd like to find something to share that is surprising.' Or 'I think I'd like to find something to share that will make my friends laugh or cry.' We're not thinking about it that way, so most of this is unconscious. But when we feel something, we want to share that so others will feel it too. When we're surprised by something, we'll want to share that." People want to share content that confirms their own self-stories. I learned about the concept of self-stories from Susan. She writes about them on her blog: "Everyone has stories about themselves that drive their behavior. You have an idea of who you are and what's important to you. Essentially you have a 'story' operating about yourself at all times. These self-stories have a powerful influence on decisions and actions. Whether you realize it or not, you make decisions based on staying true to your self-stories." Our online selves are an extension of our real-life selves. Everyone has an idea of who they are, and they want to share content that upholds that self-story. For example, Susan says part of her self-story is that she's someone who makes complex scientific concepts simple, so she wants to share content on social media that aligns with that view.
As you create content, think about the real people in your audience. What about your content would make someone add it to their self-story? It's a big deal for a customer or prospect to choose to add your article or video to their social media profiles. Readers are smarter than you might think — and have an eye for detail. I brought Brad on this episode of the Cloudcast to talk about his team's popular blog series involving famous fictional characters and email signatures (their software company's focus). A couple of fun examples: If Game of Thrones Characters Had Email Signatures and If Parks and Rec Characters Had Email Signatures. Brad told me the story of how his team worked on these posts and quickly saw them become their top blog series of all time. In fact, this series represents 4 of the company's top 5 blog posts ever. So why did it work? Brad believes one major reason is the small details and inside jokes. "The shareability might not have worked if we were very generic about these posts. Our audience is smart, and they pick up on those things. When they pick up on those things, they remember your post. And I think it drives them to want to share it with their networks. I think being very thoughtful, and not generic with your content, it starts there." When crafting content you want people to share, remember that each small detail counts. Hear more about Brad's hilarious and high-performing blog series on the full episode of the Cloudcast — and you'll also hear why Susan thinks this series works, from a scientific perspective. A Brand Spankin' New Podcast Style Last week we shifted the Marketing Cloudcast to an entirely new format and style (think narrative with multiple guests — more Freakonomics, less live interview), and I'd love to know what you think! Join the thousands of smart marketers who are Cloudcast subscribers on Apple Podcasts, Overcast, Google Play Music, and Stitcher. Tweet @youngheike with feedback on this episode — or ideas for future guests and topics.
https://medium.com/marketing-cloudcast/the-science-of-what-makes-people-share-your-content-online-eb521989f7a0
[]
2017-07-06 00:07:20.486000+00:00
['Marketing']
5 Essential Papers on AI Training Data
Many data scientists claim that around 80% of their time is spent on data preprocessing, and for good reason: collecting, annotating, and formatting data are crucial tasks in machine learning. This article will help you understand the importance of these tasks, as well as learn methods and tips from other researchers. Below, we will highlight academic papers from reputable universities and research teams on various training data topics. The topics include the importance of high-quality human annotators, how to create large datasets in a relatively short time, ways to securely handle training data that may include private information, and more. 1. How Important are Human Annotators? This paper presents a firsthand account of how annotator quality can greatly affect your training data and, in turn, the accuracy of your model. In this sentiment classification project, researchers from the Jožef Stefan Institute analyze a large dataset of sentiment-annotated tweets in multiple languages. Interestingly, the findings of the project state that there was no statistically significant difference between the performance of the top classification models. Instead, the quality of the human annotators was the larger factor determining the accuracy of the model. To evaluate their annotators, the team used both inter-annotator agreement and self-agreement processes. In their research, they found that while self-agreement is a good measure for weeding out poor-performing annotators, inter-annotator agreement can be used to measure the objective difficulty of the task (a small agreement-scoring sketch appears at the end of this article). Research Paper: Multilingual Twitter Sentiment Classification: The Role of Human Annotators Authors / Contributors: Igor Mozetic, Miha Grcar, Jasmina Smailovic (all authors from the Jožef Stefan Institute) Date Published / Last Updated: May 5, 2016 2. A Survey On Data Collection for Machine Learning From a research team at the Korea Advanced Institute of Science and Technology (KAIST), this paper is perfect for beginners looking to get a better understanding of the data collection, management, and annotation landscape. Furthermore, the paper introduces and explains the processes of data acquisition, data augmentation, and data generation. For those new to machine learning, this paper is a great resource to help you learn about many of the common techniques used to create high-quality datasets in the field today. Research Paper: A Survey on Data Collection for Machine Learning Authors / Contributors: Yuji Roh, Geon Heo, Steven Euijong Whang (all authors from KAIST) Date Published / Last Updated: August 12th, 2019 3. Using Weak Supervision to Label Large Volumes of Data For many machine learning projects, sourcing and annotating large datasets takes up a substantial amount of time. In this paper, researchers from Stanford University propose a system for the automatic creation of datasets through a process called "data programming". A table in the paper reports precision, recall, and F1 scores using data programming (DP) in comparison to the distant supervision ITR approach. The proposed system employs weak supervision strategies to label subsets of the data. The resulting labels and data will likely contain a certain level of noise. However, the team then removes noise from the data by representing the training process as a generative model, and presents ways to modify a loss function to ensure it is "noise-aware".
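To make the idea of programmatic labeling more concrete, here is a deliberately simplified sketch: hand-written labeling functions vote on unlabeled text and a majority vote produces noisy labels. This is only a stand-in for the approach in the paper, which models labeling-function accuracies with a generative model rather than voting, and the toy spam task, function names, and keywords are invented for illustration.

import numpy as np

# Illustrative labeling functions for a toy spam task:
# each returns 1 (spam), 0 (not spam), or -1 (abstain)
def lf_contains_free(text):
    return 1 if "free" in text.lower() else -1

def lf_contains_urgent(text):
    return 1 if "urgent" in text.lower() else -1

def lf_long_message(text):
    return 0 if len(text.split()) > 8 else -1

LABELING_FUNCTIONS = [lf_contains_free, lf_contains_urgent, lf_long_message]

def weak_label(text):
    # Majority vote over the labeling functions that did not abstain; None means still unlabeled
    votes = [lf(text) for lf in LABELING_FUNCTIONS if lf(text) != -1]
    if not votes:
        return None
    return int(round(np.mean(votes)))

unlabeled = ["Free offer, urgent reply needed",
             "Attached are the meeting notes for tomorrow's quarterly planning session with finance"]
print([weak_label(t) for t in unlabeled])  # noisy labels produced without any hand annotation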
Research Paper: Data Programming: Creating Large Training Sets, Quickly Authors / Contributors: Alexander Ratner, Christopher De Sa, Sen Wu, Daniel Selsam, Christopher Ré (all authors from Stanford University) Date Published / Last Updated: January 8, 2017 4. How to Use Semi-supervised Knowledge Transfer to Handle Personally Identifiable Information (PII) From researchers at Google and Pennsylvania State University, this paper introduces an approach to dealing with sensitive data such as medical histories and private user information. This approach, known as Private Aggregation of Teacher Ensembles (PATE), can be applied to any model and was able to achieve state-of-the-art privacy/utility trade-offs on the MNIST and SVHN datasets. However, as Data Scientist Alejandro Aristizabal states in his article, one major issue with PATE is that the framework requires the student model to share its data with the teacher models. In this process, privacy is not guaranteed. Therefore, Aristizabal proposes an additional step that adds encryption to the student model's dataset. You can read about this process in his article, Making PATE Bidirectionally Private, but please make sure you read the original research paper first. Research Paper: Semi-Supervised Knowledge Transfer for Deep Learning From Private Training Data Authors / Contributors: Nicolas Papernot (Pennsylvania State University), Martin Abadi (Google Brain), Ulfar Erlingsson (Google), Ian Goodfellow (Google Brain), Kunal Talwar (Google Brain) Date Published / Last Updated: March 3, 2017 5. Advanced Data Augmentation for Semi-supervised Learning and Transfer Learning One of the largest problems facing data scientists today is getting access to training data. It can be argued that one of the biggest problems in deep learning is that most models require large amounts of labeled data in order to function with a high degree of accuracy. To help combat these issues, researchers from Google and Carnegie Mellon University have come up with a framework for training models on substantially smaller amounts of data. The team proposes using advanced data augmentation methods to efficiently add noise to the unlabeled data samples used in semi-supervised learning models. Amazingly, this framework was able to achieve incredible results. The team states that on the IMDB text classification dataset, their method was able to outperform state-of-the-art models by training on only 20 labeled samples. Furthermore, on the CIFAR-10 benchmark, their method outperformed all previous approaches. Research Paper: Unsupervised Data Augmentation for Consistency Training Authors / Contributors: Qizhe Xie, Zihang Dai, Eduard Hovy, Minh-Thang Luong, Quoc V. Le (Google Research Brain Team and Carnegie Mellon University) Date Published / Last Updated: September 30th, 2019
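Finally, to make the annotator-quality measures from the first paper above concrete, here is a minimal sketch of inter-annotator and self-agreement scoring using Cohen's kappa from scikit-learn. The labels are invented, and the actual study's evaluation protocol is more involved than this.

from sklearn.metrics import cohen_kappa_score

# Invented sentiment labels (-1 = negative, 0 = neutral, 1 = positive) for the same eight tweets
annotator_a          = [1, 0, -1, 1, 0, 0, -1, 1]
annotator_b          = [1, 0, -1, 0, 0, 1, -1, 1]
annotator_a_repeated = [1, 0, -1, 1, 0, 0, 0, 1]   # annotator A labelling the same tweets again later

# Inter-annotator agreement: chance-corrected agreement between two different annotators,
# which the paper uses as a proxy for the objective difficulty of the task
print("inter-annotator kappa:", cohen_kappa_score(annotator_a, annotator_b))

# Self-agreement: agreement of one annotator with their own earlier labels;
# low values flag unreliable annotators who should be weeded out
print("self-agreement kappa:", cohen_kappa_score(annotator_a, annotator_a_repeated))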
https://medium.com/datadriveninvestor/5-essential-papers-on-ai-training-data-aba8ea359f79
['Limarc Ambalina']
2020-08-03 19:46:01.316000+00:00
['Training Data', 'Research Paper', 'Machine Learning', 'Data Science', 'Artificial Intelligence']
Sequretek, empowering growth without fear by simplifying security
Sequretek has raised $3.7M in total. We talked with Pankit Desai, Co-founder and CEO. How would you describe Sequretek in a single tweet? We empower your growth without fear as your trusted partner by simplifying security. How did it all start and why? The cybersecurity industry, from a customer's point of view, can be divided into a world of haves and have-nots, where the top 10% have access to everything whilst the rest have to make do without much of anything. As they say, the industry is by and for the elite. If you look at any technology, it goes through a curve where there is complexity in the beginning and over a period of time it becomes simple and commoditized. The security industry is going the other way: every year new technology areas emerge in response to new threats instead of existing technologies being enhanced. The result is 90 technology areas and 3,500 different companies, making security amongst the most fragmented markets in the tech space. All these options make the lives of the have-nots difficult, since they do not have the resources to understand the what, why, when and how of their needs. There is way too much buyer's remorse and consequent churn here. Sequretek was born in 2013, therefore, with a vision "to simplify security by consolidating the technology landscape" and a mission "to empower our customer's growth without fear as their trusted security partner by simplifying security". The key words here are removing the fear attached to security threats linked to the digital transformation that has become essential for our customers' growth, and the fact that we will make security simple and accessible to them as their trusted partner. We focus on three areas where we would lead with our technology, i.e. device security, user behavior and enterprise security visibility. There are two major trends that one sees in the industry: first, employees are moving out of their offices, and second, data centers are moving to the cloud. In this scenario the traditional perimeter defense concepts are becoming irrelevant, since there is hardly anything within the perimeter to protect. In fact, the new perimeter that needs to be defended zealously is your user and the devices they use to interact with the company. This is where our products come in: the traditional approach forces customers to buy 8+ individual security technologies to adequately secure their environments. Most customers do not have the financial and technical maturity to be able to understand and procure them. Our EDPR, which is an endpoint security product, comes with the features of up to 7 products rolled into one, thereby making it simple for customers to procure such tech products and keeping endpoint security simple. What have you achieved so far? We have been growing at a CAGR of 60% over the past few years and have customers that span the financial, manufacturing, retail and services segments. Whilst our operations have so far been focused in India, we recently forayed into the US. We believe that the market we are addressing is currently underserved, and our approach of offering simplicity, ownership of outcomes and affordable subscription-based services makes us uniquely positioned to be a preferred choice for our customers both in India and the US. Our products have been competing with the established industry giants and still do not fail to leave their mark.
Sequretek products implement cutting-edge technologies for next-generation threats, reduce the total cost of ownership for enterprises and simplify cyber security. Our management team is a unique blend of very experienced people who bring the complementary skills required to succeed in this deeply competitive marketplace. Pankit Desai: Co-Founder, is a veteran of the corporate industry with over 25 years of experience in global sales, operations and FP&A, with leadership stints at Rolta (President), NTT Data (SVP), IBM (Country Manager), and Wipro (Regional Manager). He is responsible for sales and investor relationship functions. Anand Naik: Co-Founder, has worked in the corporate world for over 25 years with companies like Symantec, where he was their MD for South Asia, and before that with IBM and Sun Microsystems in technology roles. He is responsible for product vision and operations. Santhosh George: Chief Products and Technology Officer, has over 25 years of experience as a CTO, product development leader and entrepreneur with companies like Rolta, Finestra, Oracle and Cognizant. He is responsible for product strategy and execution. Arun Rathi: Chief Financial Officer and Member of the Board, has over 30 years of experience with companies like Religare (MD and COO) and Citibank (Director, Investment Banking). He is responsible for finance, admin and back office functions. Commander Subhash Dutta: Head of Operations and Malware Research, brings over 30 years of experience, with the majority of his time spent in the Indian Navy, where he was responsible for the Indian Navy's information warfare group. Udayanathan Vettikat: Head of Channels, Sales and Marketing, comes with over 30 years of experience, the bulk of it at Cisco, where he was responsible for general business sales and marketing. Achievements: One of the Top SMEs, by GreatCompanies.in (2019) Best MSME, by CISO Mag (EC-Council, the world's largest security training organisation) (2019) Top 4 Enterprise Startups, Enterprise IT World (2019) Top 50 Innovators, by World Innovation Congress (2018) Game Changing Startup, EC-Council (2018) HOT 100 Race to Grace Award by organizations of senior Industry CIOs (2017) Top 5 Fintech startups award by Fintegrate Zone (2017) Next Big Idea 2017 contest winner (by the Governments of Ontario, Quebec and British Columbia) along with Ryerson Futures, Zone Start-ups and the Government of India What do you plan to achieve in the next 2–3 years? In the next five years Sequretek wants to: Make the world a better and safer place for digital interactions by delivering cutting-edge security technologies. Employ over 2,000 people. Protect 2M+ endpoints across 1,000 enterprises. Generate revenue in excess of USD 50 million. Become the biggest IT security company in India and among the top 100 globally. Sequretek will have: An equal opportunity employment environment. A well-defined governance model that will enable ethical business practices and sound fiscal management. A high-performance culture that fosters innovation, transparency and collaboration among our employees and extended ecosystem. Agile business processes to adapt to a changing business environment. Technology innovation will be core to the Sequretek ethos. All innovations will be made with an aim to simplify security for enterprises and consumers, thus creating safe experiences in digital economies.
https://medium.com/petacrunch/sequretek-empowering-growth-without-fear-by-simplifying-security-5f69115e7627
['Kevin Hart']
2019-11-15 16:46:02.797000+00:00
['Growth', 'Cybersecurity', 'Security', 'Startup', 'India']
Netflix Recommender System — A Big Data Case Study
Netflix Recommender System — A Big Data Case Study The story behind Netflix's famous Recommendation System Image by Thibault Penin on Unsplash What is Netflix and what do they do? Netflix is a media service provider based in America. It provides movie streaming through a subscription model, covering television shows and in-house produced content along with movies. Initially, Netflix sold DVDs and functioned as a rental service by mail. It discontinued selling DVDs a year later but continued its rental service. In 2007, Netflix went online and started a streaming service. Since then Netflix has grown to be one of the best and largest streaming services in the world (Netflix, 2020). Netflix has taken up an active role in producing movies and TV shows. The company is heavily data-driven. Netflix lies at the intersection of the internet and storytelling; they are inventing a new kind of internet television. Their main source of income is users' subscription fees. They allow users to stream a wide range of movies and TV shows at any time on a variety of internet-connected devices (Gomez-Uribe et al., 2016). What is the domain (subject matter area) of their study? The primary asset of Netflix is its technology, especially its recommendation system. The study of recommendation systems is a branch of information filtering systems (Recommender system, 2020). Information filtering systems deal with removing unnecessary information from the data stream before it reaches a human. Recommendation systems deal with recommending a product or assigning a rating to an item. They are mostly used to generate playlists for audiences by companies such as YouTube, Spotify, and Netflix. Amazon uses recommender systems to recommend products to its users. Most recommender systems study users by using their history. Recommender systems have two primary approaches: collaborative filtering and content-based filtering. Collaborative filtering relies on the idea that people who liked something in the past will also like a similar experience in the future. Content-based filtering methods are useful in places where information is known about the item but not about the user. Content-based filtering works as a classification task specific to the user: it builds a classifier to model the likes and dislikes of the user with respect to the characteristics of an item. Why did they want/need to do a big data project? Netflix's model changed from renting/selling DVDs to global streaming over the years (Netflix Technology Blog, 2017a). Unlike cable TV, internet TV is all about choice. Netflix wanted to help viewers choose among the numerous options available to them through its streaming service. Cable TV is very rigid with respect to geography, whereas a broad catalog is available on internet TV, with pieces from different genres and different demographics to appeal to people of different tastes. The recommendation problem while selling DVDs was predicting the number of stars a user would give a DVD, ranging from 1 star to 5 stars. That was the only task they concentrated heavily upon, as that was the only feedback they would receive from a member who had already watched the video. They had no insight into the viewing experience or its statistics, and got no feedback during viewing. When Netflix turned into a streaming service, it gained access to a huge amount of activity data about its members.
This includes their details associated with the device, the time of the day, the day of the week and the frequency of watching. As the number of people subscribing and watching Netflix grew, the task became a big data project. What questions did they want to answer ? Netflix is all about recommending the next content to its user. The only question they would like to answer is ‘How to personalize Netflix as much as possible to a user?’. Though it is a single question, it is almost everything Netflix aims to solve. Recommendation is embedded in every part of their site. Recommendation starts when you log into Netflix. For example, the first screen you see after you log in consists of 10 rows of titles that you are most likely to watch next. Awareness is another important part of their personalization. They let their audience know how they are adapting to their tastes. They want their customers to give them feedback while also developing trust in their system. They give explanations as to why they think you would watch a particular title. They use phrases like ‘Based on your interest in …’, ‘Your taste preferences created this row’ etc. Similarity is another part of personalization. Netflix conceptualizes similarity in a broad sense such as the similarity between movies, members, genres, etc. It uses phrases such as ‘Similar titles to watch instantly’, ‘More like …’ etc. Search is also one of the important aspects of the Netflix recommendation system. Data Sources: According to (Netflix Technology Blog, 2017b), the data sources for the recommendation system of Netflix are: A set of several billion ratings from its members. More than a million new ratings are being added every day. They use a popularity metric in many aspects and compute them differently. For example, they compute it hourly, daily or weekly. They also examine clusters constituting members either geographically or by using other similarity metrics. These are some of the different dimensions over which popularity is computed. Stream related data such as the duration, time of playing, type of the device, day of the week and other context-related information. The pattern and the titles that their subscribers add to their queues each day which are millions in number. All the metadata related to a title in their catalog such as director, actor, genre, rating and reviews from different platforms. Recently they have added social data of a user so that they can extract social features related to them and their friends to provide better suggestions. The search-related text information by Netflix subscribers or members. Apart from internal sources of data they also use external data such as box office information, performance and critic reviews. Other features such as demographics, culture, language, and other temporal data is used in their predictive models. What is the size of the data in the study? That is, approximately how much data storage was required ? Netflix ran a huge contest from 2006 to 2009 asking people to design an algorithm that can improve its famous in-house recommender system ‘Cinematch’ by 10%. Whoever gave the best improvements would be awarded a $1 million. The size of the data set presented to the users was 100 million user ratings. The dataset consisted of 100,480,507 ratings that 480,189 users gave to 17,770 movies. In 2009, the prize was awarded to a team named BellKor’s Pragmatic Chaos. Netflix has since stated that the algorithm was scaled to handle its 5 billion ratings (Netflix Technology Blog, 2017a). 
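To give a feel for the rating-prediction task the Prize posed, here is a minimal sketch that scores a trivial baseline with RMSE, the metric the competition was judged on. The (user, movie, rating) triples are invented, and the baseline (predicting each movie's mean rating) is far simpler than Cinematch or the winning ensemble.

import numpy as np

# Invented (user, movie, rating) triples standing in for the Prize training data
ratings = [(0, 0, 5), (0, 1, 3), (1, 0, 4), (1, 2, 2), (2, 1, 4), (2, 2, 1)]

# Baseline predictor: each movie's mean rating over the training data
movie_sums, movie_counts = {}, {}
for user, movie, r in ratings:
    movie_sums[movie] = movie_sums.get(movie, 0) + r
    movie_counts[movie] = movie_counts.get(movie, 0) + 1
movie_means = {m: movie_sums[m] / movie_counts[m] for m in movie_sums}

def rmse(pairs):
    # Root Mean Square Error between predicted and actual ratings
    errors = [(movie_means[m] - actual) ** 2 for _, m, actual in pairs]
    return float(np.sqrt(np.mean(errors)))

# Held-out (user, movie, rating) triples; the Prize scored submissions on a hidden set like this
holdout = [(3, 0, 4), (3, 1, 2), (4, 2, 3)]
print("baseline RMSE:", rmse(holdout))

The Prize asked teams to beat Cinematch's RMSE on exactly this kind of held-out set by 10%, which is why techniques like matrix factorization, discussed later in this article, replaced simple per-movie averages.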
Given those figures, the dataset behind the recommender system of Netflix is believed to include, alongside all the information about its titles, more than 5 billion ratings. What data access rights, data privacy and data quality issues were encountered? As mentioned in (Netflix Prize, 2020), though Netflix tried to anonymize its dataset and protect users' privacy, a lot of privacy issues arose around the data associated with the Netflix competition. In 2007, researchers at the University of Texas at Austin were able to identify users in the anonymous Netflix dataset by matching their ratings on the Internet Movie Database. In 2009, four people related to this issue filed a lawsuit against Netflix for the violation of the United States' fair trade laws and the Video Privacy Protection Act. Following this, Netflix canceled its competition for 2010 and thereafter. What organizational (non-technical) challenges did they face? As per (Maddodi et al., 2019), in its early days Netflix suffered large losses; however, with the growth of internet use, Netflix changed its business model from conventional DVD rental and sales to online video streaming in 2007. Netflix smartly anticipated the arrival of competitors like Disney and Amazon and hence invested heavily in data science from a very early stage. A majority of those efforts are still paying off for Netflix, allowing it to stay at the forefront of the media streaming industry. What technical challenges did they face? Some of the technical challenges the team faced while building the system were (Töscher et al., 2009): ensembling different models to predict a single output; optimizing the RMSE of the ensemble; automatic parameter tuning for the models; modeling global effects to capture statistical correlations; capturing global time effects and the weekday effect; and detecting whether short-term effects are due to multiple people sharing the same account or to changes in the mood of a single person. With respect to the search service related to recommendations, in a paper published by Netflix engineers (Lamkhede et al., 2019), the challenges mentioned were: the unavailability of a video from the perspective of a recommender system; detecting, reporting and substituting the unavailable entities; the length of search terms, which are usually very short, making it very hard for Netflix to understand what the user is searching for; rendering instant search, the moment the user clicks, followed by good results; and optimizing user experience by allowing different indexing schemes and metrics. Why was this a "big data" problem? The V's of Big Data (Source) Volume: As of May 2019, Netflix has around 13,612 titles (Gaël, 2019). Their US library alone consists of 5,087 titles. As of 2016, Netflix had completed its migration to Amazon Web Services; tens of petabytes of data were moved to AWS (Brodkin et al., 2016), consisting of their engineering data, corporate data, and other documentation. From (AutomatedInsights, n.d.), it can be estimated that Netflix stores approximately 105TB of data with respect to videos alone. However, their dataset for the recommendation algorithms is expected to be much larger, as it needs to incorporate all the information mentioned above. Focusing only on the Netflix Prize task, the data given to the participants was around 2GB. It consists of only 100 million movie ratings. At that time, Netflix admitted that it had 5 billion ratings.
Roughly, it translates to 10,000 GB of rating data alone. The size today would be greater than the mentioned figure. Velocity: By the end of 2019, Netflix has 1 million subscribers and 159 million viewers (BuisinessofApps, 2020). Every time a viewer watches something on Netflix, it collects usage statistics such as viewing history, ratings over titles, other people who have similar tastes, preferences related to their service, information related to titles such as actors, genres, directors, year of release, etc. In addition, they also collect data about the time of the data, the types of devices you watch content on, the duration of your watch (Netflix, n.d.). On average each Netflix subscriber watches 2 hours of video content per day (Clark, 2019). Though all the features are not explicitly stated anywhere, Netflix is believed to collect a large set of information from its users. On average Netflix streams around 2 million hours of content each day. Veracity: Veracity consists of bias, noise, and abnormalities in data. With respect to the Netflix Prize challenge, there was a wide variance observed in data. Not all movies were rated equally by an individual. One movie had only 3 ratings whereas a single user rated over 17,000 movies (Töscher et al., 2009). With the type and the amount of information, Netflix data would definitely contain a lot of abnormalities, bias, and noise. Variety: Netflix says it collects most of the data in a structured format such as time of the day, duration of watch, popularity, social data, search-related information, stream related data, etc. However, Netflix could also be using unstructured data. Netflix has been very outspoken about the thumbnail pictures that it uses for personalization. This means that the thumbnails for the video are different for different people even for the same video. So, it could be dealing with images and filters. Who are the people/organizations with an interest in the conduct and outcome of the study? The primary stakeholders of Netflix are its subscribers and viewers. They are the ones who would be directly affected by the actions of this project. Netflix recommender system has been very successful for the company and has been a major factor in boosting the subscriber numbers and the viewers. The secondary stakeholders are its employees, with respect to the task, the secondary stakeholders are the research team of Netflix who are directly involved with the development and maintenance if the algorithm and the system. Competitors such as Amazon, Hulu, Disney+, Sony, HBO, etc are also showing a major interest in the conduct and outcome of Netflix’s experiments. After all, they are the ones who produce movies. Why would they want intermediaries like Netflix to take away the share? Many of them have started streaming their content by launching their own platforms but Netflix has been on the top of the game by investing significantly in data and algorithms since the very beginning. What HW/SW resources did they use to conduct the project? Netflix Technology Stack (Source) In order to build a recommender system and perform large scale analytics, Netflix invested a lot in hardware and software. Netflix presented an architecture of how it handles the task (Basilico, 2013). There are three stages of how it performs recommendation. From (Netflix Technology Blog, 2017c), offline computation is applied to data and it is not concerned with real-time analytics at the user. 
Execution time is relaxed, and the algorithm is trained in batches without any pressure on the amount of data to be processed in a fixed time interval. But it needs to be trained frequently to incorporate the latest information. Tasks such as model training and batch computation of results are performed offline. Because they deal with a lot of data, it would be beneficial to run them in Hadoop through Pig or Hive. The results must be published and be supported by not just HDFS but other databases such as S3 and Cassandra. For this, Netflix developed an in-house tool called Hermes. It is also a publish-subscribe framework like Kafka, but it provides additional features such as ‘multi-DC support, a tracking mechanism, JSON to Avro conversion, and a GUI called Hermes console’ (Morgan, 2019). They wanted a tool to effectively monitor, alert and handle errors transparently. At Netflix, the nearline layer consists of results from offline computation and other intermediate results. They use Cassandra, MySQL, and EVCache. The priority is not how much of the data is to be stored by how to store it in the most efficient manner. The real-time event flow in Netflix is supported by a tool called as Manhattan that was developed inhouse. It’s very close to Twitter’s Storm but it meets different demands depending on the internal requirements. The flow of the data is managed by logging in Chukwa to Hadoop. Netflix heavily relies on Amazon Web Services to meet its hardware requirements. More specifically they use EC2 instances that are readily scalable and almost fault-tolerant. All their infrastructure runs on AWS in the cloud. Figure 1: System Architecture for Personalization and Recommendations at Netflix (Netflix Technology Blog, 2013) — Source What people/expertise resources did they need to conduct the project? Netflix invests heavily in Data Science. They are a data-driven company that uses data analytics for decision making at almost every level. According to (Vanderbilt, 2018), there are around 800 Netflix Engineers who work in Silicon Valley headquarters. Netflix also hires some of the brightest talents and the average salary for a data scientist is very high. It has Engineers with expertise in Data Engineering, Deep Learning, Machine Learning, Artificial Intelligence, and Video Stream Engineering. With respect to the Netflix Prize challenge, the winning team ‘BellKor’s Pragmatic Chaos’ consisted Andreas Toscher and Michael Jahrer (BigChaos), Robert Bell, Chris Volinsky (AT&T), Yehuda Koren (Yahoo) (team BellKorr) and Martin Piotte, Martin Chabbert (Pragmatic Theory). What processes and technology did they need? Apart from the Engineering technology mentioned above, a paper from Netflix Engineers, CARLOS A. GOMEZ-URIBE and NEIL HUNT (Gomez-Uribe et. al., 2016) state that their recommendation system uses supervised approaches such as classification and regression and unsupervised approaches such as dimensionality reduction and clustering/compression using topic modeling. Matrix factorization, Singular Value Decomposition, factorization machines, connections to probabilistic graphical models and methods that can be easily expanded to be tailored for different problems. With respect to the Netflix Prize challenge, 107 algorithms were used as an ensembling technique to predict a single output. Matrix factorization, Singular Value Decomposition, Restricted Boltzman Machines are some of the most important techniques that gave good results. What was the approximate project schedule/duration? 
According to (Netflix Technology Blog, 2017a), the Engineers who solved the Netflix task have reported that more than 2000 hours of work were required to build an ensemble of 107 algorithms that got them the prize. Netflix has taken its source code and worked to overcome its limitations such as scaling them from 100 million ratings to 5 billion ratings. What results/answers were achieved? What value to the organization and to the stakeholders was obtained as a result of the project? As mentioned in (Gomez-Uribe et. al., 2016), The overall engagement rate by the user with Netflix has increased with the help of the recommender system. This led to lower cancellation rates and increased streaming hours. The monthly churn of their subscribers is very low and most of it is due to the failure in payment gateway transactions and not due to the customer’s choice to cancel the service. Personalization and recommendations save Netflix more than $1Billion per year. 75% of the content people watch today is provided by their recommendation system. Member satisfaction increased with the development and changes to the recommendation system. With respect to the Netflix Prize task, the winning algorithm was able to increase the predicting ratings and improved ‘Cinematch’ by 10.06% (Netflix Prize, 2020). According to (Netflix Technology Blog, 2017b), Singular Value Decomposition was able to reduce the RMSE to 89.14% whereas Restricted Boltzmann Machines helped in reducing RMSE to 89.90%. Together, they have reduced the RMSE to 88%. Was the project successful? Investing in data science technology has helped Netflix to be the best in the video streaming industry. Personalization and recommendation save $1 billion a year for the company. Also, it is one of the important factors in attracting new subscribers to the platform. Also, with respect to the winning algorithm from the Netflix Prize competition, many of its components are still being used today in its recommendation system (Netflix Technology Blog, 2017b). Hence, the project can be regarded as successful. Were there any surprises discovered? As per (Töscher et al., 2009), they have surprisingly discovered binary information which can be understood as the fact that people do not select and rate movies at random. Surprisingly one-day day effect was very strongly observed in the dataset. This could either be due to multiple people using the same account or different moods of a single person. What lessons were learned from conducting the project? Ensembling techniques deliver good results. Instead of refining a single technique, multiple techniques were combined to predict a single outcome. Training models and tuning them individually does not deliver optimal results. The results are best when the whole ensembling method has a precise tradeoff between diversity and accuracy. A lot of open research has been contributed to the domain of collaborative filtering and competitions such as Netflix Prize can promote such open ideas and research. What specific actions were taken as a result of the project? As a result of the competition, Netflix has revamped the winning code to scale from 100 million ratings to 5 billion ratings (Netflix Technology Blog, 2017b). It even uses the code from the winning project until today in its most advanced recommender system. Netflix owes its success in the video streaming industry to the project and its further research and continuous development. How could the project have been improved? 
The procedure and the steps for A/B testing can be improved by including the evaluation through circumstances rather than algorithmic. It can use reinforcement algorithms to provide recommendations to users as opposed to the traditional methodology of recommendation systems. The reward can be user satisfaction, the state can be the current content and the action can be the next best content recommendation. Definitions for Complex Terms: RMSE (Root Mean Square Error): It measures how far the data points are from the regression line. It can be used to understand the spread of the residuals. It is calculated by taking the square root of the means of error squares. A/B testing: The A/B testing is a statistical process to check the validity of your test. In the first step, a hypothesis is proposed. In the second step, statistical pieces of evidence are collected to accept or reject the hypothesis. In the third step, the data is analyzed to conclude about the correctness of the hypothesis. Restricted Boltzmann Machines: It’s an artificial neural network that has the ability to learn the underlying probability distribution given a set of inputs. It can be used in both supervised and unsupervised learning. A lot of applications are found in classification, recommendation engines, topic modeling, etc. EC2: The term EC2 stands for Elastic Compute Cloud. It is one of the important parts of the Amazon Cloud Computing platform. Any company can deploy its service/application over EC2 machines and get them running within a short period of time. Hadoop: Hadoop makes distributed computing possible by providing a set of software and tools. It works on the principle of Map Reduce for the storage and processing of Big Data. Many companies today use Hadoop for large scale data processing and analytics today. HDFS: It stands for Hadoop Distributed File System. It is one of the core components of the Hadoop ecosystem which functions as a storage system. It works on the principles of MapReduce. It can provide high bandwidth along with the cluster. References AutomatedInsights. (n.d.). Netflix Statistics: How Many Hours Does the Catalog Hold. Retrieved April 12, 2020, from https://automatedinsights.com/blog/netflix-statistics-how-many-hours-does catalog-hold Basilico, J. (2013, October 13). Recommendation at Netflix Scale. Retrieved April 12, 2020, from https://www.slideshare.net/justinbasilico/recommendation-at-netflix-scale Brodkin, J., & Utc. (2016, February 11). Netflix finishes its massive migration to the Amazon cloud. Retrieved April 12, 2020, from https://arstechnica.com/information-technology/2016/02/netflix finishes-its-massive-migration-to-the-amazon-cloud/ BuisinessofApps. (2020, March 6). Netflix Revenue and Usage Statistics. Retrieved April 12, 2020, from https://www.businessofapps.com/data/netflix-statistics/ Clark, T. (2019, March 13). Netflix says its subscribers watch an average of 2 hours a day — here’s how that compares with TV viewing. Retrieved April 12, 2020, from https://www.businessinsider.com/netflix-viewing-compared-to-average-tv-viewing-nielsen-chart 2019–3 Figure 1. System Architecture for Personalization and Recommendations at Netflix. (2013). System Architectures for Personalization and Recommendation [Digital Image], by Netflix Technology Blog. Retrieved April 12, 2020, from https://netflixtechblog.com/system-architectures-for personalization-and-recommendation-e081aa94b5d8. Gaël. (2019, May 14). How Many Titles Are Available on Netflix in Your Country? 
Retrieved April 12, 2020, from https://cordcutting.com/blog/how-many-titles-are-available-on-netflix-in-your country/ Gomez-Uribe, C. A., & Hunt, N. (2016). The Netflix Recommender System. ACM Transactions on Management Information Systems, 6(4), 1–19. doi: 10.1145/2843948 Lamkhede, S., & Das, S. (2019). Challenges in Search on Streaming Services. Proceedings of the 42nd International ACM SIGIR Conference on Research and Development in Information Retrieval — SIGIR19. doi: 10.1145/3331184.3331440 Maddodi, S., & K, K. P. (2019). Netflix Bigdata Analytics- The Emergence of Data Driven Recommendation. SSRN Electronic Journal. doi: 10.2139/ssrn.3473148 Morgan, A. (2019, May 20). Allegro Launches Hermes 1.0, a REST-based Message Broker Built on Top of Kafka. Retrieved April 12, 2020, from https://www.infoq.com/news/2019/05/launch-hermes-1/ Netflix Prize. (2020, January 20). Retrieved April 12, 2020, from https://en.wikipedia.org/wiki/Netflix_Prize#cite_note-commendo0921-27 Netflix Technology Blog. (2017a, April 18). Netflix Recommendations: Beyond the 5 stars (Part 1). Retrieved April 12, 2020, from https://netflixtechblog.com/netflix-recommendations-beyond-the 5-stars-part-1–55838468f429 Netflix Technology Blog. (2017b, April 18). Netflix Recommendations: Beyond the 5 stars (Part 2). Retrieved April 12, 2020, from https://netflixtechblog.com/netflix-recommendations-beyond-the 5-stars-part-2-d9b96aa399f5 Netflix Technology Blog. (2017c, April 18). System Architectures for Personalization and Recommendation. Retrieved April 12, 2020, from https://netflixtechblog.com/system architectures-for-personalization-and-recommendation-e081aa94b5d8 Netflix. (2020, April 10). Retrieved April 12, 2020, from https://en.wikipedia.org/wiki/Netflix Netflix. (n.d.). How Netflix’s Recommendations System Works. Retrieved April 12, 2020, from https://help.netflix.com/en/node/100639 Recommender system. (2020, April 10). Retrieved April 12, 2020, from https://en.wikipedia.org/wiki/Recommender_system Töscher, A., Jahrer, M., & Bell, R. M. (2009). The BigChaos Solution to the Netflix Grand Prize. Netflix prize documentation, 1–52. Vanderbilt, T. (2018, June 22). The Science Behind the Netflix Algorithms That Decide What You’ll Watch Next. Retrieved April 12, 2020, from https://www.wired.com/2013/08/qq-netflix algorithm/
https://towardsdatascience.com/netflix-recommender-system-a-big-data-case-study-19cfa6d56ff5
['Chaithanya Pramodh Kasula']
2020-06-28 15:07:34.113000+00:00
['Analytics', 'Recommendations', 'Netflix', 'Data Science', 'Big Data']
Fiction Character Checklist
Questions to ask yourself when writing or revising a narrative Photo by Felipe Sagn on Unsplash Unique and memorable personalities Ask yourself these questions about your characters in a narrative to make your short story or novel engaging enough to command attention from a large audience. Do the characters' introductions intrigue readers? The first time a reader comes across a character, do the readers' ears perk up? Do the readers think, "Wow! I have to learn more about this person! What will she do next?" Are the characters easy to picture? If they are generic and abstract, show them to us instead so we can see them as if we're watching a movie. Are they easy to remember? Readers are distracted by internal dialogue and events going on around them. They read for a few minutes on the toilet and come back to the story during the next day's bathroom break. If you use a character's name, provide a hint to orient the reader. Do any characters fade out from the narrative inexplicably? If you're describing their last scene before they're going to disappear, maybe let us know why. Ideally, tie up any loose ends so we don't feel something is unfinished. Do they all speak differently? This can be very challenging, but do your best to make each speech pattern unique to each person. Don't just give one a lisp and another a Southern accent. For example, make one speak with a spunky voice that clips sentences, emphasizes words emphatically — with italics, interrupts herself and uses exclamations. Contrast that with another character who speaks in a monotone full of short gravelly words and trailing ellipses because he can't be bothered to finish sentences. Do you avoid dialogue info-dumps? Make the characters' dialogue full of subtext rather than being on-the-nose. People don't directly say whatever's on their mind in wise, complete sentences in perfectly organized long paragraphs. They fumble with their words, make passive-aggressive innuendos, feel out each other's responses as they speak, interrupt each other, try to charm each other without being aware they're disgusting each other, pause to think what to say next, and so on. Do you avoid creating stereotypes? Make each character unique, a real person, rather than illustrating a point, a point of view, playing an obvious role, or representing a tired stereotype. Are alienating factors about your positive characters deferred for a while until your readers have a chance to learn to like them and care what happens to them? Do they contain a mixture of qualities rather than being all good or bad, shy or aggressive, pretty or ugly? Do you defer telling about their backstories for a few pages in a short story, and for longer in a novel? You need to give us time to become enthralled by the current-time plot before backing up and informing us about the past. Keep the momentum going forward until, while mystifying and entertaining your readers into a state of heightened suspense, you create a need for them to learn about the past. Do you refer to individuals as "he" or "she" rather than "they"? Unless it's an LGBT issue, use correct grammar. Does the protagonist transform because of encounters with the antagonist? While in a detective series it's possible the protagonist won't change, generally in drama the protagonist drastically changes because of encounters with the antagonist, who forces him to move out of a stuck psychological state. The protagonist faces his lies, wounds and improper desires and integrates his needs. His lesson becomes the theme.
Does the protagonist act to achieve one goal? The protagonist should be a typical human with goals, but when reacting to the inciting incident and committing to taking action by the end of Act 1, the general goal should be locked in. But within that general goal other more refined goals will arise as he learns more about himself and how to overcome the obstacles. Is the antagonist a major threat to protagonist? The stakes must be high for readers to care, especially in a Genre narrative. Can readers identify with the protagonist? If the protagonist is quite different from other people, be sure to include things about the figure that people can relate to. Up to a point, the more you include his bodily reactions to things, the more you will engage readers as they go along with his journey. Does he have a foil who provides clear contrast to his personality traits? Does he first try to fix the central problem of the narrative by taking the easiest way? Does that method fail until he’s taken down into the Climax which is so bad he has to face that he must change? Does he finally face his wound, lie or improper desires and become reborn so that he can be victorious in the Climax? Or, if it is a Tragedy, is he given the chance to do so, but can’t because he is too wounded? Whether you’re outlining and sketching out a potential story or checking out what needs to be revised before submitting your tale, these are helpful things to ask yourself. If you’re writing a novel, you should send what seems like your final draft to beta readers before sending it to editors and proofreaders. This is the kind of question the beta readers will be asking themselves as they read. So, get ahead of the game. Then, when your beta readers get hold of the manuscript, they can pay attention to more advanced matters. Best wishes! You just read another exciting post from the Book Mechanic: the writer’s source for creating books that work and selling those books once they’re written. If you’d like to read more stories just like this one tap here to visit
https://medium.com/the-book-mechanic/fiction-character-checklist-d6d12b87533c
['Tantra Bensko']
2019-10-14 20:54:40.252000+00:00
['Plot', 'Characterization', 'Writing Advice', 'Fiction Writing', 'Writing']
The Luxury of Flying in the 1970s
The Luxury of Flying in the 1970s A private jet type of experience on common airlines Boeing 747 first-class "Tiger Lounge" bar from the 1970s. The cabin was to be situated in the aircraft's hold, with a viewing port in the central table. (Source: CNN Travel) Traveling by air has become the second most common mode of transportation in the world, but this wasn't the case back in the 1970s, as traveling by airplane was very expensive. In the early 1970s, many people did not see it as a safe means of transportation, although today it is considered the safest mode of transportation in the world. The average price of a ticket was around $550; if we take inflation into consideration, that would be about $3,200 today. That is a lot of money, with which the average Joe could have bought a good second-hand car. However, these flights had this price point for a reason: they were gleaming with luxury. Today's common flights feel more like a long bus ride, but back in the day, you were welcomed into what seemed like the lounge of a fancy hotel. People were allowed to walk freely around the aircraft even if light turbulence occurred. They enjoyed meals that were actually cooked on board the plane in what looked like small restaurants. The bathrooms were quite spacious and looked more like the bathrooms you would see inside a house than the bathrooms found on normal planes. These sections were inspired by luxury train wagons from the 1950s. The restaurant section of a Boeing 314 What is imperative to mention is that these were not first-class tickets; this was what every passenger received. Once again, the planes didn't need to have a lot of seats because only a few people were using this means of transportation. As time passed, the industry became hungrier for profit, turning common planes into containers that could fit as many seats as possible and reserving separate planes for business-class passengers who pay half the price of a brand-new car for a ticket. Music band playing in an airplane circa the 1970s (Source: Messy Nesssy) Yet, despite all the technological advancements, I think it was more luxurious back in the day, or at least more groovy. In some airliners, the lounge had a piano or even a full music band for entertainment. As we know, the 1970s were all about music, something that was enjoyed by everyone and made the flight more pleasant. Some of the bigger airliners such as the Boeing 747 even had a couple of separate rooms with full-size beds. The bedroom section of a Boeing 747 (Source: USA Today) For that period, flying was more than a means of transportation; it was an entertaining experience that made it worth every penny. Aside from the occasional reminder of turbulence, most people would even forget that they were flying, given the experience they were having. Another interesting fact compared to today's common flight experience is that passengers were allowed to smoke on board. Today, smoking inside a plane is completely prohibited, as any source of fire is a risk to the safety of the passengers. Those times really show that the focus was on the passenger/customer, who was taken care of from departure until arrival by a handful of stewardesses. By the late 1970s, most airline companies had separated these luxury flights into separate classes in order to attract more customers at affordable prices.
https://medium.com/history-of-yesterday/the-luxury-of-flying-in-the-1970s-c37c09fc1aec
['Andrei Tapalaga']
2020-12-04 21:56:33.939000+00:00
['History', 'Luxury', 'Culture', 'Travel', 'Marketing']
Riding the Wave to Shore
Riding the Wave to Shore Poems From The Porch photo by Brett Meliti on Unsplash.com The mystic morning is dawning with delicate haze melting into pink. The soft grey mystery is lying low in the fields, hinting the revelation of the magis. There is an unsettled place within me, poised on the precipice of commitment. I feel the answer choosing me, my hesitation, the only question, a player recruited and selected to play in the first round. Affirmations, like road signs, indicate in neon obviousness, my predestined course. The resistance I feel asks, “Is it enough? Will it hold your interest?” And ultimately, “Is this who you want to be?” And alas, we’ve gotten to the bottom of it — a question of identity — the subterfuge and folly now revealed. “Is this whom you want to be seen as? to be known as? Will this truly make an impact?” Productivity. Accomplishment. Fame. If pride is removed and the need to be special, the “Yes” becomes clear; for the river is rushing and the tide is pushing and rolling relentlessly, almost comically, toward a chosen shore. Not chosen by me, though, for I am dogpaddling, resisting the thrust and feeling despair in my own obstinance. “What harm is there in riding the wave to shore?” I certainly cannot anticipate the wonders of the journey. Ah yes, I have felt this way before, dug my heels in and opposed the pull of Love. I remember it now and the moment I let go, the instant I decided to put my faith in Love. I recall the visual representation of my limiting belief and also, the feeling of release when I finally allowed myself to be led; Led to the more, to the magis, to here. Looking back at that time long distant, I wouldn’t trade here for there.
https://medium.com/a-love-centered-life/riding-the-wave-to-shore-e81dd6ecbb76
['Ani Vidrine']
2020-12-09 21:45:44.926000+00:00
['Poetry', 'Decision Making', 'Faith', 'Trust', 'Resistance']
Short Showers Are Not Enough: The Water Crisis
Running out of water is a very real danger. We need to fix it, right now. Water is a seemingly endless resource. We have seas full of it, it falls from the sky (annoyingly often if you're in the UK), and at the turn of a tap, we have as much of it as we want. I'm part of the generation that was taught about water conservation as kids, and the advice has stuck with me. "Take short showers, don't leave the tap on whilst you brush your teeth, wash your dishes in a sink of water, not under a running tap": the easy-to-follow advice that makes us all feel eco-conscious. I didn't realise how dire things were until I watched a harrowing episode of Vox's Explained on Netflix, which revealed just how urgent the water crisis is. The Cape Town water crisis began in 2017 and continued well into 2018. Water levels got so low that talk began of "Day Zero", when Cape Town would run out of water and the supply would be cut off. People would have to queue for their water rations because the city simply didn't have enough. The city halved its water consumption, and Day Zero was pushed back again and again until it moved into 2019. It seems that it takes an imminent crisis for us to finally be motivated to act. It seems unfathomable to us that we could 'run out' of water. We are incapable of living without it, yet we take its accessibility for granted. Meanwhile, 3 in 10 people on this planet don't have access to running water, and it is a looming reality that those of us who do might not have it forever. What is Cape Town's crisis today could well be London's crisis tomorrow. The problem is this: our planet is covered in water, but the vast majority of it is undrinkable. As our population grows and grows, and our consumerist greed grows with it, we simply don't have enough. The WWF estimates that by 2025, two-thirds of the world's population may experience water shortages. So where's our water going? Even as the population grows, surely there's only so much water we can consume? Your average person needs to drink 2 litres a day, which should leave more than enough to go around. The problem is that we use much more water than we ever come into contact with. The water that we see when we shower or make a cuppa is a tiny percentage of the water that we're using. 70% of the world's accessible water supply goes into agriculture. Of that water, 60% is wasted through leaks or ridiculous farming methods. As Vox neatly puts it, the problem is that "we're growing alfalfa in the desert". Water isn't viewed as a valuable commodity; it's treated as an unending, throwaway resource. It's available so cheaply that companies don't think twice about growing crops in completely unsuitable climates. Throwing litre after litre of water at the problem is a cheap solution. Western populations are not only growing in size but growing in greed. We demand more and more, eating a diet that includes far more red meat than can ever be sustainable. People are selfish and willing to sacrifice anything in order to meet their own personal wants. The World Water Vision Report sums it up: "There is a water crisis today. But the crisis is not about having too little water to satisfy our needs. It is a crisis of managing water so badly that billions of people — and the environment — suffer badly." This burden lies not just on consumers who refuse to give up their luxuries, but on the companies that make an enormous profit off the irresponsible abuse of environmental resources.
As long as we make it possible for companies to have cheap access to huge amounts of water, they will continue to use inefficient techniques rather than put in the effort to make a change. Only around 10% of water use is domestic. Even if we all gave up showering altogether, stopped washing our dishes, and left our gardens to wither, we'd only conserve 10% of our water. The real pressure needs to be put on corporations, and then consumers need to start making more responsible choices. You don't have to commit to a lifetime of veganism (although that's certainly a good call for the environment and your own health), but we do all need to start making better choices. As long as people continue to demand meat every day, or even more than once a day, we will remain in crisis. People love to comment on how much water is used to grow soya, as though it's vegan nuggets that are killing the planet, but the reality is that a huge share of the crops we grow goes not into our own stomachs but into those of livestock. Cutting down your meat consumption to a few times a week is good for you, and good for the planet. As much as I advocate individual pro-environment choices, nothing significant can be achieved until we force the giant corporations to get their act together. The guilt is often placed entirely on the public so that we'll forget that the biggest offender is not the everyman, but the capitalist giants that make billions every year without regard for the damage they are doing. The only way we will make the changes needed to end water scarcity is through strict legislation that makes it impossible for companies to put their profits before the needs of the many. Shorter showers are never a bad idea, but considering the crisis we've created, they're just not enough anymore. If You Liked This, Try: Love Letter to the Natural History Museum — "Does It Spark Joy?" The KonMari Tidying Method. — The Most Terrifying Thing About Trump
https://olivia-vosper.medium.com/short-showers-are-not-enough-the-water-crisis-52e832ea241a
['Livi Vosper']
2019-05-02 15:06:39.515000+00:00
['Water', 'Water Crisis', 'Eco Friendly', 'Environmental Issues', 'Environment']
Text preprocessing in different languages for Natural Language Processing in Python
Text preprocessing in different languages for Natural Language Processing in Python Part II — Case Study Natural Language Processing is a catchy phrase these days This is Part 2 of a pair of tutorials on text pre-processing in Python. In the first part, I laid out the theoretical foundations. In this second part, I'll demonstrate the steps described in Part 1 in Python on texts in different languages, while discussing how their effects differ because of the different structures of the languages. If you haven't, you should first read Part 1! You can check out the code on GitHub! Relevance In the first part, I outlined text pre-processing principles based on a framework from an academic article. The underlying goal of all these techniques was to reduce the dimensionality of text data while keeping the relevant information incorporated in the text. In this second part, I will present the effect of the following techniques on two central properties of text, word count and unique word count — the latter representing the dimensionality of text data: Removing stopwords Removing both extremely frequent and infrequent words Stemming, an automated technique to reduce words to their base form The idea came from another academic article where the authors examined the effect of text pre-processing techniques in different languages on the results of using the Wordfish algorithm. Being a method for ideological scaling, Wordfish can estimate which speakers are in the political centre based on their word use, and which speakers can be considered extremists on either side of the spectrum. You can see a result about German political parties below: In addition, they also report the effect of text pre-processing techniques on the unique word counts of text, i.e. how stemming lowers the number of unique words in the corpus for each language. I advise you to check out their results in this paper if you are interested! Since the researchers used substantially different kinds of political text, the results for different languages are not perfectly comparable. For example, they analyzed 61 PM speeches in parliament from Denmark from 4 different parties and 104 written motions from party conferences of Italy from 15 different parties. That's why I decided to create a comparable corpus in 4 languages to carry out the analysis. The corpus not only needs to be comparable across languages, it also has to include text from many domains in order to produce results that generalize well for these languages. Nevertheless, it is important to note that specific corpora can react totally differently to these techniques! Creating the corpora In order to conduct the analysis, we need a large amount of text in different languages. As I am Hungarian, I chose to compare the English, German, Hungarian and Romanian languages. The analysis can easily be done for other languages as well. In addition, the text has to cover a broad range of topics and has to be about at least roughly the same things. There are 2 approaches I considered: books in different languages, and Wikipedia in different languages. Wikipedia content extraction made simple with this Python package. Acquiring the text of books in languages other than English turned out to be a more complicated task and, above all, harder to automate. However, with the Wikipedia Python package, automated access to the content of Wikipedia pages in different languages is as easy as the short sketch below shows. Finally, we need Wikipedia pages with a lot of text.
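As an illustration, a minimal sketch of this kind of multi-language retrieval with the wikipedia package might look as follows; the language codes match the four languages of the case study, while the example entity, the error handling and the exact lookup logic are my own assumptions rather than the author's original gist:
import wikipedia

# Illustrative sketch: pull the same entity from four language editions.
# Titles may need to be localized for entities whose names differ by language.
languages = ['en', 'de', 'hu', 'ro']
texts = {}
for lang in languages:
    wikipedia.set_lang(lang)
    try:
        page = wikipedia.page('Albert Einstein')  # hypothetical example entity
        texts[lang] = page.content
    except (wikipedia.exceptions.PageError, wikipedia.exceptions.DisambiguationError):
        texts[lang] = ''  # drop entities that cannot be located unambiguously

print({lang: len(text) for lang, text in texts.items()})
Entities that cannot be resolved in every language would simply be skipped, which mirrors the filtering described next.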
For this purpose, I chose to scrape some lists from the internet, as I assumed I could find well-written pages for well-known entities: With this approach, I ended up with 452 entities, because I only kept those where the page could be located unambiguously in all 4 languages. Nonetheless, the addition of the entities from the last list didn't change much about the nature of the results, so I stopped adding text to the corpora. Pre-processing 101 Cleaning unnecessary characters and splitting text is this easy with NLTK's RegexpTokenizer! In Part 1, I elaborated on the first 3 steps to consider in text pre-processing. In this case study, the texts are lowercased immediately after reading them into memory. Moreover, numbers and special characters are removed without further ado using the RegexpTokenizer. The corpus Raw word counts and unique word counts for raw text. Looking at raw word counts, it does not come as a surprise that the English Wikipedia has much more text than any of the other languages, but the Hungarian Wikipedia has more text than its Romanian counterpart, even though Romania has a population double that of Hungary. From unique counts, it seems that German and Hungarian are lexically more diverse compared to English or Romanian. However, that may be caused by the underlying structure of the language: Hungarian tends to use suffixes at the end of words much more than English, resulting in more unique words. Assessing lexical diversity is therefore better done after the pre-processing steps! Stopword removal The first text pre-processing technique to demonstrate is stopword removal. It is a basic methodology: most NLP packages like NLTK come with built-in stopword lists for the supported languages. Therefore, one just has to scan over the document and remove any word that is present in the stopword list: Stopword removal using NLTK. NOTE: you have to download stopword resources using nltk.download! Stopword removal mainly affects the raw word count of the corpus, as it only removes words that are included in the stopword list — but these words tend to have high frequency as they serve a grammatical role. In the figure on the left, one can assess what portion of words remain after stopword removal. It is in line with our previous explanation that English has a relatively low value: instead of suffixes, many stopwords are used to create context around words. On the other hand, the suffix-heavy Hungarian language lost only around 25% of its words, compared to almost 40% in the case of English. Stemming Stemming is an automated technique to reduce words to their base form. It is based on language-specific rules. In this article, the Porter stemming algorithm is used in NLTK, which has publicly available rules for stemming. Stemming and stopword removal using NLTK. NOTE: you have to download resources using nltk.download! Analyzing the effect of stemming can be done through unique word counts, as stemming does not remove any words, but makes one unique word from many, thereby reducing text dimensionality. This can be seen in the figure below: Stopword removal barely changes unique word counts, while stemming does substantially. In accordance with previous statements, stemming has the most effect on suffix-heavy Hungarian, and the least effect on English. In general, stemming can reduce the dimensionality of text data by 20 to 40 percent, depending on the language (and of course on the nature of the text, if it's less general than the corpus used here).
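To make the tokenization, stopword-removal and stemming steps above concrete, here is a minimal sketch using NLTK; the sample sentence is made up, the Porter stemmer is shown for English, and the stopword corpus is assumed to have been fetched once with nltk.download('stopwords'):
from nltk.tokenize import RegexpTokenizer
from nltk.corpus import stopwords
from nltk.stem import PorterStemmer

tokenizer = RegexpTokenizer(r'\w+')           # keeps runs of word characters, dropping punctuation and special characters
stop_words = set(stopwords.words('english'))
stemmer = PorterStemmer()                     # SnowballStemmer offers rules for e.g. German, Hungarian and Romanian

text = "Stemming reduces the inflected words of this sentence to their base forms."
tokens = tokenizer.tokenize(text.lower())                    # lowercase, then split into tokens
content_tokens = [t for t in tokens if t not in stop_words]  # stopword removal
stems = [stemmer.stem(t) for t in content_tokens]            # stemming

print(len(tokens), len(content_tokens), len(set(stems)))     # raw vs. filtered vs. unique stemmed counts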
Removal of extremely infrequent words As mentioned before in these articles, word frequency tends to have a long-tailed distribution: many words appear quite infrequently in text. The same is true for document frequency; many words appear only in a small number of documents. This can be seen in the document frequency histograms for each language. Red numbers show the count of words in that bin of document frequency, while the x tick labels are bin boundaries. For example, in the English figure, the first bar means 67,534 words appear in 0–45 texts in the corpus. The next bar means 1,384 words appear in 45–90 texts, etc. Read the filter_extremes() method documentation carefully, as both parameters are important and have a default value! To remove infrequent (or, on the contrary, too frequent) terms from the corpus, I advise using the gensim package. You should create a gensim.corpora.Dictionary object, supplying the initializer with a nested list, where each list element is a document, which is itself a list of tokens. For that dictionary, calling the filter_extremes method with the right parameters will do the trick for you (a minimal sketch is included at the end of this article). Removing these words can achieve a great deal of dimensionality reduction, but you may be removing the most important words from your text. If you have a balanced, binary classification problem, removing words under 0.5% document frequency probably will not matter, as such words appear in too few documents to be a decisive factor. However, in a multi-class problem these sparse words may contain the most information! As we can see from the document frequency histograms, the removal of words with low document frequency drastically decreases the number of unique words in the corpus, and this is true regardless of the language; the four corpora react relatively similarly to this procedure. Nonetheless, the decrease in unique words is even more pronounced for lexically more diverse languages, namely German (and Hungarian). Removal of extremely frequent words The last procedure I aim to cover is a methodology to find domain- or corpus-specific keywords. The idea is that if a word is present in most of the documents in a corpus, it might not convey any information about a particular document in that corpus. Nevertheless, in-document frequency can differ substantially for a word present in all the documents, and that can carry information as well! In our case, there is only a tiny number of words that appear in 50% of the documents in any of the languages. This is because we are analyzing a corpus of many different domains, but for a corpus about a specific topic, domain-specific stopwords can be removed using this approach. Takeaways A general takeaway is that languages can differ substantially in how they react to text pre-processing techniques. Stopword removal removes more words from languages where suffixes are not used extensively, while stemming affects suffix-heavy languages more. Be careful removing less frequent words. You may be removing too many, and they may be very important! While domain- or corpus-specific stopwords can be found by searching for words that appear in all texts, it is important to note that the in-document frequency of words present in all texts can still be a decisive factor in a classification problem! References: Greene, Z., Ceron, A., Schumacher, G., & Fazekas, Z. (2016, November 1). The Nuts and Bolts of Automated Text Analysis. Comparing Different Document Pre-Processing Techniques in Four Countries. https://doi.org/10.31219/osf.io/ghxj8
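As promised above, here is a minimal sketch of the gensim-based frequency filtering; the toy documents and the threshold values are illustrative only, not the ones used in the case study:
from gensim.corpora import Dictionary

# Each document is the list of tokens produced by the pre-processing steps above.
docs = [['water', 'crisis', 'global'],
        ['water', 'shortage', 'city'],
        ['rainfall', 'city', 'data']]

dictionary = Dictionary(docs)
# Drop tokens that appear in fewer than 2 documents or in more than 50% of them.
# Both parameters have default values, so set them explicitly.
dictionary.filter_extremes(no_below=2, no_above=0.5)
print(dictionary.token2id)  # the vocabulary that survives the filtering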
https://medium.com/starschema-blog/text-preprocessing-in-different-languages-for-natural-language-processing-in-python-fb106f70b554
['Mor Kapronczay']
2019-10-17 11:13:42.807000+00:00
['Text', 'Data Science', 'Python', 'NLP', 'Text Mining']
Practicing Design Thinking
Practicing Design Thinking Capturing some extra data points (timings) for better user experience. Problem Statement There are times when the user comes back to us that they drop off from the listings page because they do not find their preferred time. We want to capture what time the user wants to travel before he searches and bounces off from the listing page. Image Credit: Interaction Design Foundation The above-shared image of the design process is taken from The Interaction Design Foundation which is lead by well-known UX practitioners like Clayton Christensen and Don Norman. I am personally a strong believer in the design process for solving any kind of real-world problems. It’s very common for us to just see the problem statement and jump to a solution. Which is quite similar to the following lines by Nas & Damian Marley. You buy khaki pants and all of sudden you say I am Indiana Jones. — Nas & Damian Marley — Patience. Design thinking is extremely useful in tackling complex problems that are ill-defined or unknown. Let’s start to understand and resolve the above-shared problem statement using this approach. Step 1 — Empathise Empathy is crucial to a human-centered design process like design thinking because it allows you to come out of your own assumptions about the problem and gain real insights into users and their needs. My primary objective was to know as much as I can about the experience of booking a bus ticket through an app. As I personally prefer to travel by motorcycle or car most of the time, that’s why I decided to prepare a short questionnaire and asked my colleagues to know the importance of timing while booking a bus ticket. Fortunately, many of my friends & colleagues travel a lot through the bus. Some of them doing it on a daily basis by using platforms like Shuttl and a couple of them travel on a weekly basis to their home towns. As a part of qualitative research below are the questions I asked : How often do you travel & does it include intercity travel? Which platform do you use to book tickets? What do you like about the platform? Can you show me how you book tickets on this platform? Would it make your experience better if your preferred timing is also asked before showing the listing of buses? These questions helped me to understand the gravity of the problem and know more about the factors which help the user to take decisions while booking a bus ticket. Step 2 — Define As they say, a well-defined problem is half solved. In this phase of design thinking, idea is to analyze the collected data of your research and synthesize them to define the core problems we have identified so far. Below is the rephrased problem statement which helps us empathize and have a clear understanding of the end-user goal. “User needs to have clear access to the timings of buses in order to make booking decisions easy” Although, there are multiple factors that affect the decision making of the user. Rating of the bus, Hygenic rest-stops, and price of the ticket are some of them. Step 3 — Ideate During the very first phase which was to empathize with the user, I did some competitive research along with qualitative research to understand how other apps or platforms handle and serve user’s needs.
https://medium.com/sketch-app-sources/practicing-design-thinking-ce746421d086
['Gurpreet Singh']
2020-08-20 14:12:47.081000+00:00
['Design Patterns', 'UX Research', 'Design Research', 'Design Process', 'Design Thinking']
When Staying The Same is No Longer an Option
I am sick of myself. I don’t know if it was the repetitive internal chatter about how I yet again failed to show up, or if it was the fact that in looking back over the last six months I can say, well there went a lot of time without much to show for it. It’s a coin toss. Whichever it was, here I am, utterly sick of myself. No longer can I sit and hem and haw, analyze and rationalize, and wish things were different. I’ve reached my limit. I’m officially fed-up. With myself. You see, I like to write and I’d like to make something of it. Despite that, I don’t do it as often as I could. Why? I don’t know. I. Don’t. Know. There is no one true and valid reason for me not writing. Yes, I can get up earlier and do it. I can also stay up later and do it. I can get over the guilt of allowing my children screen time for an hour if it means I get to write — something I feel selfish for wanting. So here I am, disgusted over my inability to sit down and write. Being Fed-Up Creates an Opportunity for Change There comes a time when you have to make a choice and that choice can become the catalyst for change. You get to use that ball of fire — that fed-up energy — as a way to finally step into discomfort without all the baggage. When you do that, typically what happens is your excuses no longer bear as much weight. It’s as if you’ve got Hulk power and you can now smash through any barrier with the flick of your pinky. The push-back isn’t as intense when you make a choice. Deciding to take the plunge and commit to writing every day means I no longer have to listen to myself say ad nauseam, I should write something. I should write today. I should be writing right now! When you find yourself so utterly sick of the status quo, here’s what you can start doing next that will get you moving in the right direction. A 1-Inch Square to Start What is it you’re struggling with? What have you been trying to accomplish that consistently eludes you? What can no longer stay the same for sanity’s sake? Is it losing weight? Changing jobs? Learning a new language? What’s constantly nagging at you? I recently picked up Anne Lamott’s book Bird by Bird to jumpstart my writing efforts. On her writing desk, she has a framed picture of a 1-inch square. At first, it seems like a quirky writer thing but it turns out it’s the perfect tool to help stay grounded. This 1-inch square represents all you’re responsible for in one writing session. That could mean it’s the first paragraph or it’s six shitty pages of nothing substantial. It could be one spectacular sentence. The point of this little square is it unarms your resistance (aka your excuses). It strips down your excuse-laden banter and leaves it with nothing to work with. A glacier your resistance has a lot to work with, but a 1-inch square leaves it looking around, confused, and bewildered. It can’t work with that, so it retreats. This is how you make forward progress. Break your steps down into teen tiny pieces, disarm your resistance, and start working on your goals. You’ve taken your utter disgust for your inactivity and turned it into fuel for motivation while single-handedly dismantling your resistance by focusing on a teeny tiny 1-inch square of work that needs to be done. You’ve taken that nagging feeling and used it to propel you toward making one small shift. One small change. A 1-inch square of work a day. This is something I can get behind.
https://medium.com/live-your-life-on-purpose/when-staying-the-same-is-no-longer-an-option-96e635880508
['Am Costanzo']
2020-08-04 14:01:01.551000+00:00
['Motivation', 'Growth', 'Self', 'Personal Development', 'Life']
Coronavirus drug and p-value problem
What exactly is a p-value? The p-value tells you how likely it is that your data could have occurred under the null hypothesis. It does this by calculating the likelihood of your test statistic, which is the number calculated by a statistical test using your data. The p-value tells you how often you would expect to see a test statistic as extreme as, or more extreme than, the one calculated by your statistical test if the null hypothesis of that test were true. The p-value gets smaller as the test statistic calculated from your data gets further away from the range of test statistics predicted by the null hypothesis. The p-value is a proportion: if your p-value is 0.05, that means that 5% of the time you would see a test statistic at least as extreme as the one you found if the null hypothesis were true. What are null and alternative hypotheses? The null and alternative hypotheses are two mutually exclusive statements about a population. A hypothesis test uses sample data to determine whether to reject the null hypothesis. Null hypothesis (H0): The null hypothesis states that a population parameter is equal to a hypothesized value. The null hypothesis is often an initial claim that is based on previous analyses or specialized knowledge. Alternative hypothesis (H1): The alternative hypothesis states that a population parameter is smaller than, greater than, or different from the hypothesized value in the null hypothesis. The alternative hypothesis is what you might believe to be true or hope to prove true. The p-value is a constant source of confusion, and many people don't know exactly what it means. Let's use an example and understand the meaning of the p-value step by step. Imagine that you have two drugs and you are trying to find the best drug to cure coronavirus. You should do some tests and find the better alternative. Photo by Kate Hliznitsova on Unsplash Let's run some tests to find the better drug for coronavirus and understand the meaning of the p-value. Test 1: You will give one drug to each person in the test. Imagine that you have two drugs and you want to know if Drug-1 is different from Drug-2. So you give Drug-1 to one person and Drug-2 to the other person. Imagine that the person using Drug-1 is cured and the person using Drug-2 is not cured. Can we conclude that Drug-1 is better than Drug-2? No. Maybe this person is taking a medication that has a bad interaction with Drug-2. Maybe this person did not use Drug-2 properly. Maybe Drug-1 doesn't actually work and the placebo effect produced the successful result. A lot of random things can happen when doing a test, which means we need to test each drug on more than two people. Test 2: This time we will give each drug to 2 different people. The two people taking Drug-1 are cured. One of the two people taking Drug-2 is not cured and the other one is cured. Can we conclude that: * Drug-1 is better than Drug-2? * Both drugs are the same? We can't answer either of those questions, because maybe something weird happened to these people. Maybe that is why Drug-2 failed. Maybe one of the people who was cured actually made a mistake and took Drug-1 instead of Drug-2 and nobody knew. Test 3: And now we will test the drugs on a lot of different people. These are the results: Drug-1 cured many (1,043) people compared to the number of people it didn't cure (3). We can say 99.71% of the 1,046 people using Drug-1 were cured. Drug-2 cured just a few (2) people compared to the number of people it didn't cure (1,042).
We can say only about 0.2% of the 1,044 people using Drug-2 were cured. We can conclude that Drug-1 is obviously better than Drug-2. With numbers like these, we cannot plausibly attribute the results to randomness. What can we conclude about the results shown below? Now only 37% of the people who took Drug-1 are cured, and only 29% of the people who took Drug-2 are cured. We can see that Drug-1 cured a larger percentage of people. But no study is perfect and there are always a few random things that happen, so how confident can we be that Drug-1 is superior? Boom, that is exactly where the p-value comes in. P-values are numbers between 0 and 1 that, in our example, quantify how confident we should be that Drug-1 is different from Drug-2.
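To attach an actual number to this, one common way to compute such a p-value is Fisher's exact test on the 2×2 table of cured and not-cured counts. The sketch below uses SciPy with the counts from Test 3 above; the choice of test is my own illustration and not part of the original write-up:
from scipy.stats import fisher_exact

# Rows: Drug-1, Drug-2; columns: cured, not cured (counts from Test 3).
table = [[1043, 3],
         [2, 1042]]

odds_ratio, p_value = fisher_exact(table, alternative='two-sided')
# Under H0 ("both drugs are the same"), a split this extreme would essentially
# never happen by chance, so the p-value is tiny and we reject H0.
print(p_value)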
https://medium.com/clarusway/coronavirus-drug-cured-p-value-problem-19fb3c5969a8
['Alex Duncan']
2020-12-27 18:31:21.538000+00:00
['Problem Solving', 'Example', 'Coronavirus', 'P Value', 'Statistics']
Here’s What I Learned After Interviewing Hundreds of Developers
Here’s What I Learned After Interviewing Hundreds of Developers Companies hire good engineers, not just good coders Photo by Michael DeMoya on Unsplash I still remember my first interview as a software developer. I used to believe it only takes good coding skills to ace the interview. Only after interviewing hundreds of developers did I realize it takes more than just coding. I started taking interviews around two years back. By now, I’ve interviewed hundreds of candidates and some of them were good coders. But companies hire good engineers, not just coders. At first, I was a little confused about what my manager meant by “good engineers.” But later on, I learned with experience and understood the crux behind it. In this article, I am going to discuss my learnings from these interviews and what a company expects from a software developer. Let’s take a closer look.
https://medium.com/better-programming/heres-what-i-learned-after-interviewing-hundreds-of-developers-8d9b85490457
['Shubham Pathania']
2020-12-10 15:36:59.112000+00:00
['Technical Interviews', 'Programming', 'Startup', 'Remote Work', 'Software Developer']
Indifference.
The four words linked with its meaning are lack of interest, concern, sympathy and mediocrity. This makes it the highest form of thanklessness and ungratefulness. It’s a silent war you are waging on your Creator. Indifference can be a tool for those who want to use it as one. But, some feel the indifference inside and that’s partly being dead. Humans were not created to be indifferent. They were made with an array of emotions to be used at the right time. Keyword: right time. There is no point in defending the true state of your heart if you did not react at the right opportunity. There are no second chances in life. No one is indifferent towards everything at all times, yet we all are indifferent to some things in our lives. Maintaining that balance is where we fall short. Photo by Pro Church Media on Unsplash Indifference stems from arrogance, self-righteousness and a blinding ego that you are unable to control. If you are told you are expected to react a certain way then your system will mould itself to produce that reaction. But, when you are caught off-guard is when you show your true colours. And, if in that moment you showed your indifference then the spark of faith in your heart is non-existent. God doesn’t reside in indifference, evil does. There is not a bigger crime than being indifferent towards the propagators of evil in this world. No feeling is real if it is not coming out at the right time. There is a big difference between not approving of something as an opinion and truly feeling the damage done in your bones. If your nerves aren’t ticking when you are being shown different faces of all that is wrong in this world, then something is wrong with you. The forces of evil reside within you too. If this demarcation is not very strong in your thoughts, it will reflect in the extremely arrogant choice of your words. Such a person cannot be sensitive towards themselves, let alone anyone else. It is a severe lack of realization towards all that is ruining the social fabric of our society. By being indifferent you are adding to the chaos. You are causing undue, unjust, unexplainable, undeniable, extreme levels of pain and hurt to those who are making an effort in restoring the balance. Your conceited indifference causes more damage than your puny little mind can comprehend. Indifference is where all the room for evil starts and that is all the room that evil needs.
https://medium.com/alt-op/indifference-2e0bc961d51a
[]
2019-12-08 06:21:26.725000+00:00
['Self Improvement', 'God', 'Emotions', 'Psychology', 'Evil']
Shark Quest: Are the World’s Most Endangered Rays Living in New Ireland Province, Papua New Guinea?
[Note: this commentary, which was originally published at The Revelator, is the sixth and final essay in a series by researchers with WCS (Wildlife Conservation Society) during Shark Week documenting challenges and successes in shark and ray conservation today.] “We saw two swimming past our canoe the other day as we came to shore!” “Yes, we saw one over towards the mangroves not so long ago…” “There was one in our net near the big river…” Scientists love having a mystery to solve and gathering clues to find out if something is real or not. Since January 2019, my organization, the Wildlife Conservation Society, has been collecting evidence to confirm whether highly endangered sawfish and their relatives — the wedgefish, guitarfish and giant guitarfish (collectively and affectionately known as “rhino rays”) — live in the coastal waters of New Ireland Province, Papua New Guinea. Sawfish and their rhino ray relatives — all cousins of sharks — are some of the most threatened species on Earth due to their slow growth, vulnerability to capture in fisheries, and high value in international trade. Recent studies indicate that Papua New Guinea is (together with northern Australia and the southeastern United States) one of the last few strongholds for sawfish populations, making the country a global priority for shark and ray conservation. Currently sawfish and rhino rays have been well documented along the southern shores and adjacent river systems of Papua New Guinea, and also in the Sepik River, which drains into the Bismarck Sea on the northern coast of the mainland. Sawfish have also been documented in several other provinces in the country, yet no official records exist in New Ireland Province. Until now. Papua New Guinea occupies the eastern half of New Guinea and is the largest of the South Pacific Island nations. The uplifted reefs, limestone terrain and adjacent islands that form New Ireland Province comprise the north-easterly region of Papua New Guinea. From January 2019 to March 2020, fisher key informant surveys were conducted in coastal communities in western New Ireland Province to determine whether sawfish and rhino rays were observed within the customary waters of each community. A total of 144 sightings were made, including 85 wedgefish (blue), 36 guitarfish and giant guitarfish (green) and 23 sawfish (red) sightings. Source: WCS. The southwestern Pacific nation of Papua New Guinea is renowned for its biodiversity, much of which lives nowhere else in the world. But that amazing animal and plant life is often both understudied and under threat. This holds true in New Ireland. The many islands of New Ireland Province, located in the Bismarck Archipelago, support coral reefs, mangroves, estuaries and tidal lagoons — typical habitats for rhino rays and sawfish. Some 77 percent of New Ireland’s human population also lives in the coastal zone, where they’re highly reliant on fish and other marine resources for food, livelihoods and traditional practices. Local communities also own most of this coastal zone through customary tenure systems, which may have been in place for centuries. Human pressure, including population growth, could threaten potential sawfish and rhino ray populations unless sufficient management is in place — but local cooperation will be key to such action. Over the past year and a half, WCS has conducted interviews in New Ireland’s coastal areas.
Part of the interviews involved showing images of each sawfish, wedgefish and guitarfish species, allowing respondents to identify what they saw. To date, residents from 49 communities have reported that they had seen sawfish and rhino rays in their local waters. There were 144 separate sightings reported by 111 respondents, comprising 23 sawfish, 85 wedgefish and 36 guitarfish and giant guitarfish. Roughly half the respondents stated they had seen sawfish or rhino rays either often or sometimes. Wedgefish in New Ireland Province: documented by BRUVS during the FinPrint project (left) and by scuba divers (Dorian Borcherds, Scuba Ventures) (right) When asked if the animals were targeted by local fishers, more than half the respondents said no: the animals were mostly caught accidentally. Only 9% of the sighted sawfish and rhino rays were reported to have been purposefully caught. Respondents also provided information on where, and in what condition, they had seen the animals: 77% were seen alive, 10% at the market and 2% entangled in nets. The results suggest that while sawfish and rhino rays are in the region, they are not a key fishery commodity, which is promising news for developing conservation approaches. Large-tooth sawfish (Pristis pristis) rostrum, beside a ruler, which was harvested by local community fishers from the Tigak Islands that lie to the west of mainland New Ireland. This rostrum measured nearly 30 inches in length. Photo: Elizah Nagombi/WCS. While physical and objective data has been lacking — I’m still waiting to see one of these animals in the water, myself — we have confirmed evidence of two large-tooth sawfish (Pristis pristis) in the region (two sawfish beaks, also known as rostra, have been found in community villages since this study began), and we’ve received reports of additional sightings. WCS also conducted baited remote underwater video surveys (BRUVS) in 14 locations in the region in 2019–20, following a 2017 BRUVS deployment by FinPrint in western New Ireland Province. Collectively the BRUVS documented 13 species of sharks and rays, including wedgefish (which have also been photographed by local dive operators), but no sawfish. With that success, we’re expanding our search. Over the next 12 months, a further 100 BRUVS will be deployed in areas with a sandy seafloor, where wedgefish and giant guitarfish often rest. Because sawfish typically live in estuaries — where water is often murky — BRUVS will not work due to the poor visibility of the water. In these areas, gillnets that have been carefully positioned in river outlets by trained local community members will be monitored for sawfish that may be present. If any sawfish are found in the nets, they will be documented and carefully released. Example of education and outreach materials produced by the WCS team. This poster presents management methods that can be used by community residents to help manage sawfish and rhino ray populations in their customary waters. Despite the vulnerability of sawfish and rhino rays — with five of the ten documented species in Papua New Guinea classified as critically endangered — there are currently no protection laws in place. However, since 2017, WCS has worked with over 100 communities in New Ireland Province to establish the country’s largest network of marine protected areas. The MPAs have been developed through a community-first approach, with extensive local outreach, engagement and education.
In that way WCS has been actively informing local residents about the biology, threats and management opportunities for sawfish and rhino rays. We anticipate that new laws to protect and manage these endangered animals will be incorporated into the management rules for the new MPAs. While the mystery as to whether sawfish and rhino ray populations are alive and well in PNG has largely been solved, they are still rare and in need of additional conservation efforts. We hope that this work will help bring awareness and conservation action to these highly threatened species — and make sure they don’t become mythical creatures of the past. Jonathan Booth is Marine Conservation Advisor with the Papua New Guinea Program at WCS (Wildlife Conservation Society). — — — — — — — — — — — — — — — Read the other pieces in this WCS series for Shark Week here: Making Our Marine Environment Safe for Future Shark Weeks Ground Realities of Shark Fisheries in India Ocean Guardians Pave the Way to Save Threatened Sharks and Rays in Bangladesh The Informal Blue Economy: East Africa’s Silent Shark Killer First Signs of Hope for Critically Endangered Wedgefish and Giant Guitarfish in Indonesia
https://medium.com/wcs-marine-conservation-program/shark-quest-are-the-worlds-most-endangered-rays-living-in-new-ireland-province-papua-new-guinea-bfa048c79224
['Wildlife Conservation Society']
2020-08-29 21:07:38.782000+00:00
['Environment', 'Papua New Guinea', 'Sharks', 'Oceans', 'Conservation']
A Proposal For Digitally Prepared Designs Of 3D Buildings Replacing Restricted Boltzmann Machines On Generating Layouts
The above presentation of dropping the building structure starts when you plan the building layout for incident solar light. That is when the designer considers the layout of the building with restrictions to access to each bounded region via doors and windows. I can demonstrate how elementary the design process is, by which, considering two points acting as a source and sink interacts with the system producing a descriptive representation of the room and consequently the building. Parameters A client must be able to search through the design space using three factors: (1) Light requirement for the building (2) Circulation requirement (3) Movement requirement A Metric Table exists which maps from approximate input room sizes to these requirements Metric Table Next, is a Design Chart that maps from individual room types to aspects of a room type Design Chart A Load Distribution Map that specifies how much load each room of a given building should take The coloured regions are representing loads taken by the rooms The design proposal stays on with the Adjacency Matrix as: Adjacency Matrix Here, 1 indicates the rooms are adjacent to each other whereas 0 indicates they are not adjacent rooms. Design Philosophy The design philosophy considers designing the structures using a load distribution map taking into consideration the aspect ratio and the energy of the building and by approximately modelling the movement of people within the building. They are divided into light considerations, space requirements and application of proximity models for defining the structures within the building. A Lagrangian Map The space within a room is defined using a Lagrangian Map with proximity considerations of: - Not too close to the Windows and Doorway - Not too far from the Windows and Doorway The windows and doorway are used to construct the functions of not too far distances and not too close distances are coded. The programming code will look something like this: not_too_far_distances = exp(distances) not_too_close_distances = exp(-1 * distances) Next, is to find two Lagrangian points that maximise and minimise the Lagrangian values. Such two points are considered to be source and sink which interacts with the System. The idea is shown here. data_h = distr_affinity # initialising the minimal data_h values to zero data_h.affinity -= np.min(distr_affinity.affinity) data_f = distr_distance result = pd.DataFrame(columns=result_cols) epsilon = np.arange(0.05, 0.2, 0.05) for eps in epsilon: value_df = data_h.iloc\ [data_f[(np.abs(data_f['d0'] - data_f['d1']) < eps) & \ (np.abs(data_f['d1'] - data_f['d2']) < eps)].index, 3] if(len(value_df) != 0): val = value_df.astype('float32').idxmin() result = result.append(dict(zip(result_cols, [value_df[val], eps, val])), ignore_index=True) epsilon_chosen = 0.05 eps_df = result[(result['epsilon'] <= epsilon_chosen)] beta = np.mean(np.square(_distances[eps_df.row_index.values.astype(int).tolist(), :])) data_l = data_f.loc[:, 'distance'].values + beta * data_h.loc[:, 'affinity'].values primal_value = data_l[data_l == np.max(data_l)] dual_value = data_l[data_l == np.min(data_l)] What I’ve done above is optimize the Lagrangian to come up with a formulated beta factor. The result is a regression of not_too_close_distances and not_too_far_distances. Finally we find the primal and dual points that maximize and minimize the Lagrangian values. These two points act as a Source-Sink system for a single room. 
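As a rough illustration of the source-sink selection described above, the following simplified sketch mirrors the idea in plain NumPy; the grid of candidate points, the distances to the openings and the beta value are all made up, so this is only my reading of the procedure, not the author's implementation:
import numpy as np

rng = np.random.default_rng(0)
# 100 candidate points in a room, with distances to 3 openings (windows / access door).
distances = rng.uniform(0.5, 6.0, size=(100, 3))

not_too_far = np.exp(distances).mean(axis=1)     # grows when a point is far from the openings
not_too_close = np.exp(-distances).mean(axis=1)  # grows when a point sits right at an opening

beta = 0.1                                       # stand-in for the optimised beta factor above
lagrangian = not_too_far + beta * not_too_close

source = int(np.argmax(lagrangian))              # primal point (maximises the Lagrangian value)
sink = int(np.argmin(lagrangian))                # dual point (minimises the Lagrangian value)
print(source, sink)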
Each room connects to the adjacent room using the Access Door through which the Source-Sink Systems communicate. It can be shown using formulation that a Lagrangian Map is appropriate for such cases. The formulation for such a source-sink system is given below: In a hypothetical Source-Sink System: The first equation is the state space equation of the input to the Source, A being the State Matrix and B being the Input Matrix The second equation is the state space equation of the input to the Sink, E being the State Matrix when connecting to the Sink and F being the Feedback / Transmission Matrix to the Input W The third equation is the state space equation of the output from the Sink, C being the Output Matrix and D being the Feedback / Transmission Matrix to the Input W Taking the difference of 2nd equation and 1st equation, and squaring them on both sides, we get a similar equation to: Input u or w is flow vector and x is state vector Residual Sum of Squares (RSS) taken on the Left hand-side matches with the Square term of the value where x represents the state vector and u/w represents the flow vector. In a Lagrangian Map, since the Residual Sum of Squares (RSS) can be interpreted as: - Exposures of exposure of a Variable We can determine our state by taking a Lagrangian on our state space model. The Lagrangian is applied to our space for the window, access door and another small window as shown below: Points representing the Primal and Dual Variables. The large distance from the Doorway and Window is exploited by the magenta dot and the window nearby the aqua dot acts as one of the main factors of circulation. The Proximity Chart We can describe that the RSS (Residual Sum of Squares) is an opportunity for another room to get connected to this Lagrangian Room. Balancing the RSS values observed is our initial motive. Using our Design Chart, we construct a Principal Components Analysis Matrix that constitutes to our Design considerations for developing a Proximity Chart. Derivation of Proximity Chart The Factor Analysis Algorithm The Factor Analysis algorithm models the posterior from the prior and it maximises the Likelihood of the variable. Using Factor Analysis, it is effective to model the light requirements from every window and access area such as an access door. Hence, we can form a search space using Lagrangian and Factor Analysis, which are then aggregated to denote the Metric Table values. The Factor Analysis algorithm maximises the KL Divergence between the posterior and the prior, so what we represent our light requirements as will be effectively translated to Ephemeric data of the location. As far as we record our client preferences for sunlight or light for a typical room, we know our system is modelled appropriately. Similarly the Lagrangian Map takes in the input data from a stochastic distribution regarding flow across the window and it is easily represented by our space and proximity requirements, or in turn termed as circulation for the room. The Lagrangian Map as well models the movement of people, it is just a matter of changing the functions and the state space equation to model the space and proximity values for movement. Conclusion We should strictly avoid stochastic solutions as the solution here is predominantly deterministic in nature. As a matter of fact, modelling improves the output of the layout generated and we can restrain from model-based iterations for generating the layout. 
I have seen some papers in which a Restricted Boltzmann Machine (RBM) algorithm is used to model accessibility, floors and various other parameters. Such descriptions require a lot of stewardship data, as opposed to configuration data.
https://medium.com/nerd-for-tech/a-proposal-for-digitally-prepared-designs-of-3d-buildings-replacing-restricted-boltzmann-machines-383ad81bf4f2
['Aswin Vijayakumar']
2020-11-20 20:54:37.273000+00:00
['Space Syntax', '3d Modeling', 'Statistics', 'Building', 'Design']
Zoom Into Apache Zeppelin
Zoom Into Apache Zeppelin Everything you Need to Get Started and More … This blog is written and maintained by students in the Professional Master’s Program in the School of Computing Science at Simon Fraser University as part of their course credit. To learn more about this unique program, please visit here. Photo by Glenn Carstens-Peters on Unsplash Did you know? Over 2.5 quintillion bytes of data are created every single day, from the toothpaste we use every morning to the routine coffee we drink, and this will only grow exponentially. With the evolution of Big Data and its applications, effective and efficient handling of the large amounts of data generated every day has become imperative. This has led to an explosion of open-source applications and frameworks for handling Big Data. One such extremely versatile tool is Apache Zeppelin. Apache Zeppelin is an interactive web-based data analytics notebook that makes the everyday lives of data engineers, analysts and data scientists smoother. It increases productivity by letting you develop, execute, organize and share code and data, and visualize results, in a single platform, i.e. no trouble invoking different shells or recalling cluster details. There’s more. With Zeppelin, you can: Integrate a wide variety of interpreters from NoSQL to relational databases within a single notebook. Use multiple interactive cells for executing scripts in programming languages like Python and R with a built-in version control system. Perform one-click visualization for almost everything, with the flexibility of choosing what comes on the axes and what needs to be aggregated. Here’s how you install Zeppelin There are multiple ways of running Zeppelin on your system. Let’s start with Docker Zeppelin can be effortlessly installed through Docker. We created our own Docker image which can be used to install Zeppelin. First and foremost, install Docker. To install Docker on Mac refer to this quick tutorial: https://docs.docker.com/docker-for-mac/install/ To install Docker on Linux: sudo apt install docker.io sudo systemctl start docker sudo systemctl enable docker docker --version Now that you have Docker set up, just run this command. Use sudo if required: docker run -it --rm -p 8181:8080 akshat4916/basic_ml_zeppelin:latest Once the server has started successfully, go to http://localhost:8181 in your web browser. And Done! If you are having trouble accessing the main page, please clear your browser cache. By default, the Docker container doesn’t persist any files. As a result, you will lose all the notebooks that you were working on. To persist notes and logs, we can set the Docker volume option: docker run -p 8181:8080 --rm -v $PWD/logs:/logs -v $PWD/notebook:/notebook -e ZEPPELIN_LOG_DIR='/logs' -e ZEPPELIN_NOTEBOOK_DIR='/notebook' --name zeppelin akshat4916/basic_ml_zeppelin:latest Installation through Zeppelin Binaries Even without Docker, you can install Zeppelin with minimal effort. Follow these steps and you’ll be good to go! Download the all-interpreter binary package of the latest release of Apache Zeppelin from this page. Extract all files from the compressed package to your desired path, in a folder named, say, ‘zeppelin’. On Unix-based platforms, run: zeppelin/bin/zeppelin-daemon.sh start On Windows, run: zeppelin\bin\zeppelin.cmd Once the server has started successfully, go to http://localhost:8080 in your web browser. And Done!
To stop the Zeppelin server, run: zeppelin/bin/zeppelin-daemon.sh stop For more details about the download instructions and for other ways of installing Zeppelin, refer to this page. PS — You may face certain issues with basic python libraries(pandas, numpy,etc) while working on Zeppelin Notebook if installed using the Binary Package or while building using Maven. Use our docker for smooth installation and use! Zeppelin Zones: The multi-language back-end Zeppelin Interpreter Apache Zeppelin comes with some default set of interpreters which enables the users to choose their desired language/data-processing-backend. At present, the latest version of Zeppelin supports interpreters such as Scala and Python (with Apache Spark), SparkSQL, CQL, Hive, Shell, Markdown and plenty more. For more information on Supported Interpreters, refer to this page. To initialize any interpreter, precede it with %. To change font size and other visual properties, click on the gear at the right corner of a cell and make changes as required. To run the code, hit Shift+Enter. Apart from the above-mentioned Interpreters, Zeppelin lets you add a custom interpreter without much hassle. For example, if you want to use document-search platform Apache Solr in Zeppelin, you can add Solr Interpreter and you are ready to roll! For step-by-step instructions on how to add a Solr interpreter to Zeppelin, refer to this page. Features of Zeppelin Zeppelin’s main weapon in its arsenal is its ability to allow multiple interpreters to run concurrently. So you can perform EDA on data using spark in one paragraph and produce visualizations in another paragraph. All this can be done without switching between different windows. Again, Zeppelin is a web-based interactive data analytics tool — so we make the most use of the features available. One such remarkable feature is its inbuilt tutorials, making use of Zeppelin’s visualizations. The default pre-loaded ones include a Line/Scatter/Bar/Pie chart and any other type of visualization can be added as well. Here you can see how the embedded tutorials are accessed and executed. Notice the ease of visualization! Handy Zeppelin Visualization. Beautiful too! This sort of automated, making sense from columnar data is a quintessential feature of tools such as Microsoft’s Power BI or Tableau. While these tools and Zeppelin provide similar functionalities, Zeppelin has more interactive data analytics features. As mentioned above, Zeppelin allows you to add visualizations apart from the default ones. Let’s see how we can add a new visualization, say geographical maps to Zeppelin. At the top right corner on the Zeppelin home page, click on ‘anonymous’ Select Helium Choose the ‘Zeppelin Leaflet’ package and click on the green ‘enable’ button. You might have to restart the notebook for the Visualization button to appear. Cassandra table used For this example, we imported data stored in Cassandra table having latitude and longitude values from different locations. We exploited the Zeppelin Leaflet plugin — which asks for the columns that contain the latitude, longitude and tool-tip values. If you want to use the same dataset, download data from here and upload data to Cassandra. After running the Cassandra SQL, you’ll see the result data in tabular format. Change the visualization type from the buttons below the query. Select the one with the globe icon. Now, drag latitude and longitude columns to specific regions and specify tooltip values. 
You'll be able to see the map with tooltips at the specified latitudes and longitudes, like the one below. Now let's try some Machine Learning with Zeppelin Let's walk through some prominent machine learning algorithms and how to use them with Zeppelin. Supervised machine learning can be broadly classified into two types: regression and classification. The similarity between them is that both make use of known data in a dataset to make predictions on unseen data. While the output of a regression algorithm is continuous (a numerical value), the output of a classification algorithm is discrete (a categorical value). The algorithms below explain this in more detail, along with examples of building these machine learning models in Zeppelin: Regression algorithms Linear/Polynomial Regression: Linear Regression is used to predict the value of a dependent variable using one or more independent variables when the relationship between the dependent and the independent variables is linear. If there is only one independent variable affecting the dependent variable, it is called Simple Linear Regression, whereas if the value of the dependent variable is affected by more than one independent variable, it is called Multiple Linear Regression. If the relationship between the dependent and the independent variable is not linear but can be represented as a polynomial equation, it is called Polynomial Regression. For more details on Linear/Polynomial Regression, refer to this page. Support Vector Regression: The ultimate goal of a machine learning algorithm is to make the best predictions on unseen data. In simple regression models, we try to minimize the error in predictions on our training data, whereas in the case of Support Vector Regression, we try to fit the error within a certain threshold. For more details on SVR, refer to this page. Classification algorithms Logistic Regression: Although the name gives you an intuition of regression, Logistic Regression is one of the most widely recognized classification algorithms. Based on the concept of probability, Logistic Regression is a predictive analysis algorithm that classifies the dependent variable into a discrete set of values. For more details on Logistic Regression, refer to this page. Random Forest Classification: The Random Forest Classification algorithm selects random subsets of the training data and creates multiple decision trees. The final class of the dependent variable is decided by aggregating the votes from all the decision trees. For more details on the Random Forest Classifier, refer to this page. You can find sample Zeppelin notebooks for each of the above algorithms here. You can simply import these notebooks into your Zeppelin instance and you are all set! Here is a quick tutorial on how to import these notebooks. Is this all that Zeppelin can offer? One of the key features of Zeppelin is its real-time notebook sharing with your team. This makes Zeppelin a highly collaborative tool, perfect for corporate use. For detailed instructions on how to share your notebook, refer to this article. Our Experience with Zeppelin After spending a considerable amount of time exploring and understanding the features of Zeppelin, we realized there are a few areas for improvement. As of now, the most noticeable drawback is its stability. While using the PySpark interpreter, it sometimes hangs or stops working with random errors when multiple users work in parallel.
When using separate interpreter mode, the time for which the interpreter process stays alive after the last code execution is unpredictable. This means you cannot be sure whether the dynamic objects in the interpreter's context are still alive after a period of inactivity. While these are minor issues you might come across, it is only a matter of time before Zeppelin resolves these drawbacks and becomes one of the most powerful tools for Big Data analytics. We hope this blog helps. Let us know your feedback. Cheers! References: [1] http://bigdatums.net/2017/02/26/running-apache-zeppelin-on-docker [2] https://runnable.com/docker/rails/manage-share-docker-images [3] https://www.zepl.com/viewer/notebooks/bm90ZTovLzFhbWJkYS85MjcyZjk5ZTk1NTI0YTdhYmU1M2Q1YTA0ZWZlZmUxNS9ub3RlLmpzb24 [4] https://www.superdatascience.com/pages/machine-learning
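To make the interpreter syntax described above a little more concrete, here is a minimal, hypothetical sketch of a single notebook paragraph. It assumes the built-in Python interpreter is enabled; in the notebook, this code would sit under a paragraph that starts with the %python directive, the data is made up for illustration, and the %table hint follows Zeppelin's display system (the exact syntax may vary slightly between versions).

import pandas as pd

# Toy data standing in for the result of a real query
df = pd.DataFrame({
    "city": ["Vancouver", "Burnaby", "Surrey"],
    "latitude": [49.2827, 49.2488, 49.1913],
    "longitude": [-123.1207, -122.9805, -122.8490],
})

# Output that starts with %table is rendered by Zeppelin as an interactive
# table, which can then be switched to the built-in chart types.
print("%table " + df.to_csv(sep="\t", index=False))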
https://medium.com/sfu-cspmp/zoom-into-apache-zeppelin-47190c228225
['Akshat Bhargava']
2020-02-04 07:05:46.104000+00:00
['Apache Zeppelin', 'Docker', 'Blog Post', 'Zeppelin Docker', 'Big Data']
Components of Hadoop
Here is my second blog of the Hadoop-The Cute Elephant series: Components of Hadoop NameNode: It holds the complete metadata about the data available in the cluster. There is always one NameNode per Hadoop cluster. Its main task is to manage how data gets stored in the cluster with the help of the DataNodes. When new data is written, it is divided into smaller blocks and the NameNode identifies the DataNodes that can actually store them. When data is requested, the NameNode tells the application its actual location. DataNode: This is where the data is actually stored. DataNodes serve the data to the application when directed by the NameNode. They periodically send messages to the NameNode about the data stored on them. Once the data has been located via the NameNode, it can be read directly from the DataNodes without further involvement of the NameNode, and that is what makes HDFS efficient. Job Tracker: It decides which tasks should be given to which worker node, a process called task scheduling. It is also responsible for monitoring the health of all worker nodes. Task Tracker: It is a worker-side (slave) process; each worker node runs a single Task Tracker. Similar to a DataNode, each Task Tracker reports its health and task status to the Job Tracker through a mechanism called a heartbeat. If the heartbeat is not received for a specific amount of time, the Task Tracker is declared dead. There is absolutely NO role for the Job Tracker in HDFS! In the latest versions of Hadoop, the Job Tracker and Task Tracker based MapReduce framework has been replaced by a more generic framework called YARN. Thank you for reading! Please give a clap if you like it. Keep watching this space and follow us for more tech articles, or you can reach out to me with any doubts and suggestions. The next blog in the series will be published soon.
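To make the division of labour between the NameNode and the DataNodes more tangible, here is a toy, in-memory Python sketch. It is not real Hadoop code; all class and method names are invented for illustration. The point is only that the NameNode keeps metadata about which block lives on which node, while the DataNodes hold the actual bytes.

class DataNode:
    def __init__(self, node_id):
        self.node_id = node_id
        self.blocks = {}  # block_id -> the actual bytes

    def store(self, block_id, data):
        self.blocks[block_id] = data

    def read(self, block_id):
        return self.blocks[block_id]


class NameNode:
    BLOCK_SIZE = 4  # unrealistically tiny, just for illustration

    def __init__(self, datanodes):
        self.datanodes = datanodes
        self.block_map = {}  # filename -> list of (block_id, node_id)

    def write(self, filename, data):
        # Split the data into blocks and record where each block is stored.
        placements = []
        for i in range(0, len(data), self.BLOCK_SIZE):
            block_id = f"{filename}#{i // self.BLOCK_SIZE}"
            node = self.datanodes[(i // self.BLOCK_SIZE) % len(self.datanodes)]
            node.store(block_id, data[i:i + self.BLOCK_SIZE])
            placements.append((block_id, node.node_id))
        self.block_map[filename] = placements

    def locate(self, filename):
        # Only metadata is returned; clients read the DataNodes directly.
        return self.block_map[filename]


nodes = {n.node_id: n for n in (DataNode("dn1"), DataNode("dn2"))}
nn = NameNode(list(nodes.values()))
nn.write("report.txt", "hello hadoop")
print(nn.locate("report.txt"))
print("".join(nodes[nid].read(bid) for bid, nid in nn.locate("report.txt")))

In real HDFS each block is also replicated across several DataNodes for fault tolerance, which this sketch leaves out.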
https://medium.com/codingurukul/components-of-hadoop-761340bcf4ed
['Ishita Agnihotri']
2019-01-14 15:55:24.696000+00:00
['Big Data']
Forget Knowing Yourself — Uncertainty Leads to Positive Change
1. Not knowing yourself opens you up to change. When I graduated from university, I still had no idea what I wanted to do with my life. I meandered through the summer, eventually taking a retail job to have some money. Frankly, the uncertainty was killing me. I felt like I was in limbo. I wanted to go into marketing, but that was a lie I told myself to keep a slither of hope alive. However, because I didn’t know what I wanted, I was open to suggestions. I wanted to change. So, my dad suggested writing online. He put forward various tools, and with the encouragement of my girlfriend, I started blogging. Now, I know myself more. I have a clear plan, and the sense of clarity surrounding my decisions is a welcome one. It’s more than that, however. When people asked me what I do, I would shy away from the answer, afraid of what they might think. Now I am confident in my choice; I feel assured in private and public. Every day, I sit down at my desk and get to work. While I know myself more, I am still open to change. My career path isn’t predetermined, so who knows what opportunities might arise in the future. Yes, knowing yourself and what you want does feel nice, but I would never have gotten here without the uncertainty I felt. If you’re floating around in a seemingly perpetual limbo like I was, don’t tie yourself down to a personality you think you want. By doing that, you’re pulling down a mask you can still see through. It is pointless. Accept you don’t know what you want and roll with it. Open yourself to new possibilities. Just because the people around you are doing a specific thing for someone your age doesn’t mean you need to join them.
https://medium.com/the-ascent/forget-knowing-yourself-uncertainty-leads-to-positive-change-ba52735494e2
['Max Phillips']
2020-12-06 14:02:37.346000+00:00
['Self-awareness', 'Self Improvement', 'Mindfulness', 'This Happened To Me', 'Identity']
Memory Management And Garbage Collection In Python
Memory Management And Garbage Collection In Python Reference Counting and Generational Garbage Collection You are at the right place if you have these questions while learning Python: How is memory managed in Python? What is garbage collection? Which algorithms are used for memory management? What is a cyclical reference? How are Python objects stored in memory? Let's see if I can answer these questions and some more in this article. I am starting with the fundamentals. Python Is a Dynamically Typed Language. We don't declare the type of a variable when we assign a value to it in Python. The type of a variable is determined at runtime. In other languages like C, C++, and Java, variables must be strictly declared before values are assigned to them. As you can see below, we just bind a name to an object and Python detects the type of the object. Python detects the type of an object dynamically. Image by author made with Canva How are Python objects stored in memory? In C, C++, and Java we have variables and objects. Python has names, not variables. A Python object is stored in memory with names and references. A name is just a label for an object, so one object can have many names. A reference is a name (pointer) that refers to an object. Every Python object has three things: a type, a value, and a reference count. The type is automatically detected by Python, as mentioned above. The value is set when the object is defined. The reference count is the number of names pointing to that object. Every Python object has three things. Image by author made with Canva Garbage Collection: Garbage collection releases memory when an object is no longer in use. This system destroys the unused object and reuses its memory slot for new objects. You can imagine it as a recycling system in computers. Python has automated garbage collection. It has algorithms to deallocate objects which are no longer needed. Python has two ways to delete unused objects from memory. 1. Reference counting: References are always counted and stored in memory. In the example below, we assign the value 50 to c. Even when we assign another name to the same object, the object stays the same; only its reference count increases by 1. Because every object has its own ID, we print the IDs of the objects to see whether they are the same or different. Image by author made with Canva When we change the value of a, as shown below, we create a new object. Now, a points to 60, and b and c point to 50. When we change a to None, a now refers to the None object. The previous integer object has no references left, so it is deleted by the garbage collection. We then assign b to a boolean object. The previous integer object is not deleted because it is still referenced by c. Image by author made with Canva Now we delete c, which decreases the reference count of the object by one. Image by author made with Canva As you can see above, the del statement doesn't delete objects; it removes the name (and its reference) to the object. When the reference count reaches zero, the object is deleted from the system by the garbage collection. Pros and cons of reference counting: There are advantages and disadvantages of garbage collection by reference counting. For example, it is easy to implement, and programmers don't have to worry about deleting objects when they are no longer used. However, this form of memory management comes at a cost to memory itself!
The algorithm constantly counts the references to objects and stores these reference counts in memory to keep memory clean and make sure programs run effectively. Everything looks OK so far, but… there is a problem! The most important issue with reference counting garbage collection is that it doesn't work with cyclical references. What is a cyclical reference or reference cycle? It is a situation in which an object refers to itself. The simplest cyclical reference is appending a list to itself. The simplest cyclical reference. Image by author made with Canva Reference counting alone cannot destroy objects with cyclic references. If the reference count is not zero, the object cannot be deleted. The solution to this problem is the second garbage collection method. 2. Generational Garbage Collection: Generational garbage collection is a type of trace-based garbage collection. It can break cyclic references and delete unused objects even if they are referenced by themselves. How does generational garbage collection work? Python keeps track of every object in memory. Three lists are created when a program is run: the Generation 0, 1, and 2 lists. Newly created objects are put in the Generation 0 list. A list is created for objects to discard. Reference cycles are detected. If an object has no outside references, it is discarded. The objects that survive this process are put in the Generation 1 list. The same steps are applied to the Generation 1 list. Survivors from the Generation 1 list are put in the Generation 2 list. The objects in the Generation 2 list stay there until the end of the program's execution. Generational garbage collection. Image by author made with Canva Conclusion: Python is a high-level language and we don't have to do the memory management manually. Python's garbage collection algorithms are very useful for freeing up space in memory. Garbage collection is implemented in Python in two ways: reference counting and generational. When the reference count of an object reaches 0, the reference counting garbage collection algorithm cleans up the object immediately. If you have a cycle, the reference count doesn't reach zero, and you have to wait for the generational garbage collection algorithm to run and clean up the object. While a programmer doesn't have to think about garbage collection in Python, it can be useful to understand what is happening under the hood. I hope I could answer the questions from the beginning of the article. For the questions you cannot find answers to here, see the further reading below. Further reading: 1. Mutable and immutable objects in Python. Here is an unpopular but great article on Medium. 2. How to understand variables in Python. Here is an interesting article about tuples. 3. Local and global namespaces. Here is a detailed explanation from realpython.com. 4. Tracing garbage collection. Here is the Wikipedia link to "trace-based algorithms". 5. Stack and heap memory. Here is an explanation of "how variables are stored" from GeeksforGeeks. Any suggestions to [email protected] will be very much appreciated!
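As a quick, hands-on illustration of the two mechanisms described above, the standard library exposes both the reference count and the cyclic collector directly. This is a minimal sketch; the exact numbers printed can vary between Python versions and environments, because temporary internal references exist.

import sys
import gc

x = [1, 2, 3]
y = x                      # a second name for the same list object
# getrefcount reports one extra reference for its own argument
print(sys.getrefcount(x))  # typically 3 here: x, y, and the function argument

del y                      # removing a name decreases the reference count
print(sys.getrefcount(x))  # typically 2 now

# A cyclical reference: the list contains itself, so its count never hits zero
cycle = []
cycle.append(cycle)
del cycle                  # now unreachable, but not freed by reference counting

# The generational (cyclic) collector finds and frees such objects
print(gc.collect())        # number of unreachable objects collected, usually >= 1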
https://towardsdatascience.com/memory-management-and-garbage-collection-in-python-c1cb51d1612c
['Seyma Tas']
2020-12-06 05:39:35+00:00
['Memory Management', 'Python', 'Data Science', 'Reference Counting', 'Computer Science']
Coding Their Dreams
Being tasked with a visual design sprint isn’t always the easiest thing, especially for a first-timer like myself. I had to tap into my ancestor’s knowledge (I come from a line of graphic designers), and quite a bit of secondary research to come up with a landing page that was both appealing, and made sense from a design standpoint. A Tiny Bit of Background The client this time is a non-profit called Code Your Dreams, a social program based in Chicago that helps at-risk youth get involved with coding to help brighten their future. I was tasked with redesigning their landing page in four days. With almost no user research being conducted, I relied on psychological “tips and tricks” to make a more appealing landing page. Taking the psychology around color, contrast, organization, design, a little bit of competitive analysis, and connotations of certain words were the tools of choice in design. Color Coming from someone that loves psychology and the way the human brain works, I decided to start with color. Different colors and different palettes influence the way that people feel. For example:
https://seanmphelps.medium.com/coding-their-dreams-164ae3ab42
['Sean Phelps']
2020-11-23 17:12:30.349000+00:00
['Design', 'Landing Pages', 'Web', 'UX', 'User Experience']
Both of My Parents have Cancer
There, I said it, out loud. There are like 44 individual sounds in the English alphabet and another tens of blends. Millions of combinations, few sensible, often misunderstood and mispronounced by myriad humans trying to speak English or make sense of American ways. We’ve somehow rallied to convince a big and diverse beautiful planet, teeming with life and language and history and resolution, that this hodgepodge of Germanic tribute and romantic resolve is worthy of being taught in every nook and cranny of a wide world and so, even in remote villages in Sri Lanka; when I traveled there last year, students in American brand t-shirts would clamor to me and use English to ask for candy or cash, their curiosity running alongside the road and up against their needs. This has nothing to do with what I’m writing about but it’s sorta interesting, on its own, I suppose, and even more if you’re better traveled, more diversified, intellectually acclimated to worldly ideas and sprawl and rage. Or maybe if you’ve ever climbed the Sisyphian slope of acquiring English as a second or third or ninth language and the sounds and symbols still stutter or get stuck on your well spoken tongue you can at least recognize and probably relate to the gross improbability that anyone should master this ridiculous, my mother, tongue. And yet we do. Anyhow. American babies aren’t born into knowing their language is out to get them. Adaptation and evolution, though I’m no qualified scientist, insist that language occurs through the happenstancial overhearing an infant does as it matures. Raised in a vacuum, no noise resembling words being heard, a child would never know to move their mouth outside of sucking or chewing, to make sounds have meaning outside of anguish and pain. From early on we begin to hold meaning in sounds and silences. The stretching symbolism of an lover’s quiet becomes the cord, unplugged, slowly draining connection from what was once electric. The battle cry, raised fist and steely gaze, makes sense in light of broken treaties, the covenants we write and speak, sign and surrender, break and burn. I guess what I’m saying is that words, alongside action, create lives. Verbs, I suppose, are the blocks I’ve built everything upon and so it is in these words that everything hinges, falls and fails — adjectives chinking the in-betweens. Aggressive. Different. Still. Lymphoma. It’s a cold Saturday at the ski resort when I find out that both my parents have cancer. Lymphoma, specifically, I’d say, because I like the way it rolls off my tongue. First we name the beast, I discovered, then we tame it and make it beautiful. I am good at step one, only. I have never progressed into moving my right foot to follow my left. My mother is terminal, and in remission, confusingly both, two years in, $360,000 of immunotherapy dripped slowly into South Dakota veins in a Texas hospital where a Chinese doctor practiced world renowned medicine but awaits the words for the definition of a cure. My father’s beast spreads chest and armpit, newly, three years post-stroke and through chronic back pains — we’ve imagined him the healthy one. A pesky lump raised eyebrows but not overwhelming alarm until the doctors confirm it and we await treatment plans and staging, pacing the first few days of a cold January hoping the climate and the elevation make for a slow grow. 
It’s a cold Saturday here, roughly 19 driving hours north of mom’s specialist, and a mile and a half, as the crow flies, from my parents house though, of course, they sent the news in an email. My family performs perfection with painstaking attention to detail, proper spelling, the absence of meaningful emotional delivery or expectancy of response. We are nothing if not efficient in our processing, the distribution of news. There are 44 sounds in a 26 letter alphabet and we excel in their arrangement, drag and drip meaning across screens and accept the ways sadness colors our tone, though we’d never claim it. We are obnoxiously logical about our deep emotions and so we copy some links, label prognosis, cite statistics, cc: all. I do this, too, with my clients when I insist, with under breath mutters of “ok boomer” when they push back, that they never call me, only email me. “It’s better in writing,” I clarify and it’s failed me only when printing terms like “uncoated” or “matte” still mean different things to different people, still have to be touched with fingertips and felt out loud. Oh! It’s the wife of the surgeon who I clash with on the paper coating and terms of printing. Oh it’s this small town joy that her husband will remove my dad’s lymph nodes just almost exactly 24 months after he removed my mom’s. It’s this small town kismet or karma that I pray the $1000 credit I gifted his wife, the wife of a well to do surgeon, will come back in careful hands, the prayerful and pausing power of his slow cuts and patient knife as he sutures and saves or digs and diagnosis my father, the parent of a self employed graphic designer, the husband of a teacher. Is $1000 enough? Is there more money or more kindness I can breathe into the energy I’m passing, ensure that the cancer is touched with fingertips, eeked from open veins, felt out loud. Can I give up an hour or two, now, from the end of my time to give my dad an hour or two, now, as he begins fighting for his. What are the trades I’m allowed to make, meaningful emotional delivery, expectant response? It’s a cold Saturday here but places where white people love to travel are burning and places where brown people live are being traded for the price of a barrel of oil, barrel chested mini-men with eiffelesque insecurities holding the barrel of a gun against the chest of the Middle East and asking what is a life worth and what will you trade, now, for an hour or two. What is it worth. “You really got shit luck in the genetic lottery,” he tells me, shrugging but likely struggling with serious implications, his kids half mine. “But you’re pretty.” He means it like a compliment, like the sort of comfort an umbrella provides in a hailstorm, symbolic but insufficient as shelter. “Yeah,” I muster with a glare doused in straight up grace production, “I’ll look good in a casket when I die young.” “Your life seems dramatic,” I’m challenged. I retort, quickly, verbally building walls and digging moats of defiance at adjective and subjective labels, “the drama is in the response. We never act dramatic.” The truth is somewhere in that chinking. Armor. Walls. The colors and the blends of sounds and stories combined to risk and rattle. It’s dramatic, of course. Two parents. Two cancers. Two years. But there’s worth in the way we react and respond, the lives we live in conjunction to the diagnosis. The ways we lean into the difference between sadness and depression. John Gorman said, “depression is numbing and hopeless. 
Sad is life-affirming and beautiful.” And really, logically, they are built in the same set of 26 letters and the same 44 sounds, tens of blends and myriad meanings. The impossible reality that we will battle alongside a second parent in the span of 24 months, a second beast, in the span of two years, and we will fight to make that beast beautiful, bald-headed and resolute, in the mix of sad and salvation that always sees us through. I don’t wanna be dramatic, but it’s a cold Sunday in these sacred Black Hills and I thought of going to church to hold hope and prayer alongside other people experiencing 26 letters of chaos spread and arranged in aggressive prognosis or hopeful gaze. But I made a roaster filled with cheesy potatoes and trudged two miles north to spoon warmth into the bellies of food insecure friends because is it enough? Can we move the needle on grace or retribution. What is it worth, in kismet or karma, blood or belief? My fingers go numb in the wind, though the sun shines. My blood is thick, thicker than water, flowing with life and love and a genetic disaster. I lick cheese from my finger, watch the wind whip my pink-haired son’s cheeks rosy and remember the way to put letters into words, words into sentences, sentences into prognosis or prediction or deduction or logic or limits or lessons or living or love. Is it enough? An email with links to education. A million dollars of chemotherapy. The scales tip from depression to sadness when we remember this is 100% the way to be alive. We will not go numb in the face of diagnosis because when we have named it, it becomes knowable. Anything we know we can love, lick, leave, lose, let go. This drama is the heartbeat of this long lean into being, that ever hopeful hallelujah I reference but rearrange into hell or high water when I forget to first breathe, when I realize everything weighs harder and presses heavier without hope of eternity, here, ever, now. So here, ever, now, cc: all… we’ll arrange the letters and sounds into meaning. Join hands and bring numb parts back into warm, though sometimes insufficient, shelter. Press the palms of our hands in prayer and holding, parallel lines leading to hearts that are carbon copies in chromosomes and tears. Twice in two years, two parents, an entire line of screaming genetic drama and happenstance means we’re armored and ready to fight, cry, stand, deliver — together. It will be enough.
https://natalielafranceslack.medium.com/both-of-my-parents-have-cancer-ef62ea57d0c4
['Natalie Lafrance Slack']
2020-01-05 19:52:34.542000+00:00
['Narrative', 'Family', 'Prose', 'Mental Health', 'Cancer']
The Weekly Authority #35
Essential Holiday Marketing Tips: How to Maximize Your Marketing Efforts & Results over the Holidays Happy Thanksgiving! The holiday season is officially here! For many professionals and business owners, the holiday season can mean working late or odd hours to try to get everything done while buying presents, planning for and attending celebrations, etc. To help you take care of all of your business’ marketing needs during the hectic holiday season, I’m sharing some useful tips that have worked for me in the past. These insider tricks can help you keep track of everything, not miss a beat and do so with less stress — and more time to spend on other things (like enjoying the holiday season with your family). They can also help you see better results (and more new leads) from your holiday marketing campaigns! 8 Effective Ways to Manage Marketing Campaigns during the Busy Holiday Season 1. Plan ahead — The holidays are no time to just ‘wing it’ or take your marketing campaigns day to day. If you really want to maximize results and not waste your time (or other resources) — especially during a season when assets like time can be really limited, figure out your goals and objectives ahead of time. Then, establish a process for measuring the results so you know what is more (and less) effective. 2. Build (& test) your landing pages now — Taking the ‘plan ahead’ tip one step further, get your key landing pages in place ASAP. These pages can be key parts of your online sales funnel, and waiting until the last minute to put them together can increase the chances that mistakes will be made or that landing pages are just not as good as they could (or should) be. 3. Look for ideas everywhere — You don’t have to reinvent the wheel when it’s time to develop and rollout a holiday digital marketing campaign. Instead, save yourself some time by figuring out what has worked in the past and what may be working for your competition (or even businesses outside of your industry). Then, put your own spin on it and carefully monitor the results. 4. Automate what you can — Take advantage of scheduling (and other) tools to automatically take care of various aspects of your digital marketing campaigns for the holidays. For instance, you can use Hootsuite, Sprout Social, and/or Mass Planner to manage your holiday social media posts. For your email marketing needs, try using the scheduling options provided by Active Campaign or Mailchimp. With these (and other) tools, you can set up a series of social media posts, emails, blogs, etc. to be published and/or distributed days (or even weeks or months) ahead of time. 5. Know your deadlines — Whether you’re working with campaign deadlines, publishing deadlines or shipping deadlines, make sure you know exactly when they are. If you’re going to be juggling various deadlines, set up a tracking system for these deadlines, and consider creating “alerts” for yourself so that you whenever an important deadline is just around the corner. 6. Consider whether to expand and/or limit your efforts over the holidays — While many businesses heavily step up their digital marketing efforts over the holidays, few (if any) have unlimited resources. This means that it’s important to determine whether you should expand your digital marketing efforts over the holidays and, if so, which ones in particular you should focus on. In some cases, it may be necessary to scale back some efforts so you can focus on others. 
For instance, you may want to increase email marketing and social media marketing on Facebook and Pinterest over the holidays while scaling back your marketing efforts on Twitter and Instagram. 7. Pay attention to staff levels and capacity — Not only may your staff dwindle during the holidays as people take time off, but the staff who do remain may be overloaded with more clients and more work to shoulder. So pay attention to vacation requests, and have a strategy for offsetting surging workloads (like hiring temporary holiday staff or outsourcing certain efforts), bringing me to the final tip which is to…. 8. Hire a pro to step in when necessary — Whether you’ll be off for an extended holiday, you want to have a marketing professional at the helm, or it’s simply time to entirely revamp your digital marketing strategies for the holidays, the guidance of a pro can be your key to success (and better results!). And that can mean more new leads and more new clients for your business. Top Digital Marketing Mistakes that Even Big Companies Make during the Holidays Poor preparation or no planning, which leads to poor campaign execution Campaigns not going out on time Campaigns not going to the proper audience Not being ready for an influx of clients Failing to have landing pages for corresponding ads Sending people to cluttered, poorly drafted or off-putting landing pages. Do you have any tips or tricks to share about keeping up with digital marketing over the holidays? What has been more (or less) successful for you? Tell me more about your experiences — and digital marketing plans for the holiday season — on Facebook and LinkedIn. And: Check out Digital Authority’s latest blog for more useful tips on digital marketing for your brand over the holidays. Stay posted for the upcoming weekly when I’ll be making a big announcement about one of the latest (and free!) offerings from Digital Authority! In the meantime, don’t hesitate to get a hold of me on social media to ask a question about any facet of digital marketing, including Digital Authority’s holiday specials, or just to say ‘hi.’ I look forward to hearing from you! (note: In the spirit of full disclosure, some links are an affiliate links, which means that I may get a commission if you decide to purchase anything from X company. I only recommend products & systems that I use or have used and love myself, so I know you’ll be in good hands.) (This article was originally published on DigitalAuthority.co)
https://medium.com/digitalauthority/the-weekly-authority-35-f7015802ef90
['Digital Authority Co']
2016-12-27 18:31:13.764000+00:00
['Marketing', 'Tips', 'Holidays', 'Social Media', 'Digital Marketing']
Deploy a React Web App to Firebase & Make It Act Like a Mobile App
Deploying to Firebase I have already made an account on Firebase, but if you want to follow along, you'll need to do that first. Once logged in, go to your console: The next screen will show you the Firebase projects you've previously deployed and also gives an option to "Add Project"; that's the option we want. After clicking it, we should see a screen that asks us to name our project. You can tell that a name is not taken by making sure the suggestion underneath the name you typed matches (not case-sensitive), like so: After finding the proper app name and hitting continue, the next two steps are about whether you would like to participate in Google Analytics or not. That's totally up to the developer; if I choose to participate (I usually say why not), I always choose the default account option for step 3. Hitting continue on step 3 will begin the creation of your Firebase project, and when it finishes, you should see something along these lines: Upon continuing, Firebase takes us to the project dashboard, where we will next click the "Hosting" tab in the left menu: Following that, click the "Get Started" button, which will show us more instructions that are all pretty self-explanatory. From the project's root directory, the instructions start with installing the Firebase CLI: npm install -g firebase-tools The next step is to log in to Firebase: firebase login Since I am already logged into Firebase, it simply says I am already logged in; for someone not logged in, it may redirect you to a web page where you can verify your account. Before the next step, however, I think it is important to note that it is necessary to have a production build of your app in order to host a React app on Firebase. This is something I stumbled on the first time or two I tried deploying a project, and partially the reason I'm including this section in this article. In the next step we will specify that we want our app to be initialized from the build folder, so in order to do that, we have to make sure our app has one. The command I used to make a production build is: npm run build Here's what my terminal looked like once the build folder was completed: After that, you should now be able to see the build folder in the file tree. Returning to the terminal, we can enter the next command: firebase init Take a look at the menu that pops up: As you can see in the image above, I have selected the "Hosting" option from the menu, using the arrow keys to navigate and the space bar to make a selection. As it says, I then hit enter to continue, bringing up the next set of questions, each starting with a "?" symbol: Breaking down the previous initialization steps: Selected "Hosting" Selected "Use an existing project" Selected the "Tic-Tac-Toe-Tyler" Firebase project Entered "build" for the public directory Selected "y" (yes) for configuring as a single-page app (React apps are single-page apps) Selected "N" (no) for automatic builds and deploys with GitHub. This one could definitely be useful, but I chose to skip it for the sake of this article; consider enabling it for your app. Selected "N" (no) when asked if I wanted to overwrite the already existing build/index.html file. We can now finally put in the fitting, final command: firebase deploy Which gives us this result: So if we go to the hosting URL it suggests, we can now see the app live in action: Browser Mobile browser
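For reference, the answers given during firebase init end up in a firebase.json file at the project root. A minimal sketch of what that generated configuration typically looks like is shown below, assuming "build" as the public directory and single-page-app rewrites; the exact contents may differ slightly depending on your CLI version.

{
  "hosting": {
    "public": "build",
    "ignore": ["firebase.json", "**/.*", "**/node_modules/**"],
    "rewrites": [
      { "source": "**", "destination": "/index.html" }
    ]
  }
}

The catch-all rewrite is what makes the single-page-app choice work: every route is served index.html, and any client-side routing takes over from there.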
https://medium.com/dev-genius/deploy-a-react-web-app-to-firebase-make-it-act-like-a-mobile-app-d19783b8a1c2
['Tyler J Funk']
2020-10-25 10:00:08.634000+00:00
['Progressive Web App', 'Firebase', 'Deployment', 'React', 'App Development']
CRYPTOIntel
4. Methodology 4.1 EDA The market price of cryptocurrencies has been oscillating rapidly every day, even for the oldest cryptocurrency in the market, Bitcoin. The analysis of market prices for the current top ten cryptocurrencies shows that when Bitcoin was in its initial stages, its price was very low. Bitcoin reached its peak market price of around 19,000 USD in 2017, as shown in figure 4. The other cryptocurrencies have entered the market only very recently, and their prices are not very low when compared to Bitcoin. Bitcoin Cash and Ethereum are among the other top currencies that have shown rapid growth in the last couple of years. It can also be seen that the price of Bitcoin went down after 2017, which raises the question of whether Bitcoin is a bubble. Figure 4: Cryptocurrency Market Price (USD) Figure 5 represents the price fluctuations, that is, the difference between the opening and closing price, for the top cryptocurrencies. The year 2017 was a breakthrough, since the price of Bitcoin was increasing rapidly during this time frame. Figure 5: Change in price of different cryptocurrencies The plots in figure 6 show the change in the price of the top ten cryptocurrencies over 24 hours, and the pie chart represents the market capitalization of the different cryptocurrencies. Figure 6: 24-Hour Change Trends and Market Cap 4.2 Neural Networks (Machine Learning) As discussed before, we have two neural networks in place for prediction, which are discussed in detail below. 4.2.1 Numerical Model Cryptocurrencies are the future of currencies. With their increase in popularity, more and more people want to invest in them. To forecast the next day's price based on historical data, we used a deep neural network framework, particularly an LSTM (Long Short-Term Memory) network, a type of recurrent neural network from Keras, as LSTMs have been proven to work really well for regression problems. The price of a cryptocurrency depends on a lot of factors; we used close, high, low, open, volumefrom and volumeto, and we used OHLC to calculate the average price for each day, which served as an input to our model. Because market volume had by far the largest values among all the variables, we normalized the data using scikit-learn's min-max normalization module. Since price prediction is a time-series problem, we shifted each of the average prices we calculated one step forward in time (to form the prediction target) and removed the NaN values. After the data was ready, we split it into a train set (80%) and a test set (20%). After transforming the data into a three-dimensional shape, we used the Keras framework to build and train the model and to predict the average price of the cryptocurrency. Figure 7: Train and Test Loss In our LSTM we used 80 neurons and 2 layers, and trained our model for 50 epochs using the Adam optimizer, since it adapts the learning rates based on estimates of the first and second moments of the gradients during training, with Mean Absolute Error as the loss function, as shown in figure 7. We validated the model on the test set and calculated the Root Mean Square Error on both the training and testing set results. 4.2.2 Numerical + Sentiment Model (For Bitcoin only) Does news speculation play an important role when there is an increase or decrease in the price of Bitcoin? And can we predict the future price using numerical historical data along with news sentiment?
To answer this, we scraped news articles related to Bitcoin from 2017, applied NLP techniques to clean the title and body of each article, and calculated the sentiment polarity using Spark's MLlib. We summed the sentiment over all the news articles for each day and finally concatenated it to the historical data from 2017. This sentiment polarity was added as a new feature. The same process as above was followed to train and test the model. We were then successfully able to predict the price of Bitcoin using OHLCV data and news sentiment polarity. 4.2.3 Topic Modeling Knowing what people are saying about cryptocurrencies and understanding their problems and opinions is a critical aspect. To address this, we performed topic modeling using LDA (Latent Dirichlet Allocation) from the Gensim package on the large collection of scraped news articles, to uncover their hidden structure and discover trends in social media news. Data cleaning was performed to remove punctuation, extra spaces, and stopwords [2]. We performed text pre-processing using spaCy, followed by lemmatization. Finally, we trained the LDA model on the data set to extract the top 10 topics. This was visualized as an interactive chart using the pyLDAvis package and embedded in the web front end, as shown in figure 8. We also built an interactive topic-modeling view of the various models where the user can choose which topic to inspect and adjust the alpha value.
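As a rough illustration of the network described in section 4.2.1, here is a minimal Keras sketch using the stated hyperparameters (two LSTM layers of 80 units, the Adam optimizer, MAE loss, 50 epochs). It is a simplified reconstruction rather than the authors' actual code, and the dummy arrays below stand in for the scaled, shifted time series of OHLC averages, volumes and sentiment.

import numpy as np
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import LSTM, Dense

timesteps, n_features = 1, 7   # assumed shapes: OHLC average, volumes, sentiment

# Dummy data in place of the real (samples, timesteps, features) series
rng = np.random.default_rng(0)
X_train, y_train = rng.random((200, timesteps, n_features)), rng.random(200)
X_test, y_test = rng.random((50, timesteps, n_features)), rng.random(50)

model = Sequential([
    LSTM(80, return_sequences=True, input_shape=(timesteps, n_features)),
    LSTM(80),
    Dense(1),                  # next-day average price
])
model.compile(optimizer="adam", loss="mae")
model.fit(X_train, y_train, epochs=50, validation_data=(X_test, y_test), verbose=0)

preds = model.predict(X_test)
rmse = float(np.sqrt(np.mean((preds.ravel() - y_test.ravel()) ** 2)))
print("Test RMSE:", rmse)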
https://medium.com/sfu-cspmp/cryptointel-353beb13756b
['Mehak Parashar']
2019-04-14 23:20:28.787000+00:00
['Machine Learning', 'Bitcoin', 'Crytocurrency', 'Data Science', 'Big Data']
Is Fortnite Ruling Our Lives? | The Fortnite Pelage
How Fortnite Is Taking Over Our Lives It was estimated in 2018 that 25 million people play Fortnite worldwide. Players spend about 50 hours playing Fortnite in a week! That's two days out of seven spent playing Fortnite! Kids are becoming less social, and when they are, almost all they talk about is Fortnite. Kids are beginning to play in school and have gotten expelled. Although the game is free, it still offers V-Bucks, which are digital money that you can buy gear with. To get V-Bucks you must buy them with real money. In total, people spend an average of 1 million US dollars on V-Bucks every month. A large majority of the players are kids. Kids easily become obsessed with this game because they want to level up and win. When they win, people will be amazed and think highly of them. Eliminating other players can also bring them pleasure. If their character is eliminated, these kids can easily become enraged. This rage leads to massive destruction, from breaking a controller to banging up a TV. "I was so close to winning!" they may think, and they begin to play again and again, trying to win.
https://calebtorres.medium.com/introducing-ocfd-how-fortnite-is-ruling-our-lives-5b7ca2c0c94d
['Caleb Torres']
2018-07-02 00:02:40.343000+00:00
['Videogames', 'Gaming', 'Mental Health', 'Safety', 'Funny']
From living unapologetically to living meaningfully
From living unapologetically to living meaningfully This is my open letter to someone who once told me that being gay feels like having a medical condition. Being gay is not a medical condition. At least not since 1973, when the American Psychological Association decreed that homosexuality should not belong in the DSM. But it is not just institutions that have pathologized sexuality; it is also the very people who have raised us since we were born, people who we might think, with the most innocent of hopes, would naturally be on our sides. Over the weekend, I heard you talk about being closeted to your family. You said that being closeted while living at home is like “having a medical condition, like diabetes.” Something you’ll have to endure quietly, painfully, and silently for the rest of your life. I see your point. I have lived it. Being closeted is a painful liminal state. It chips away at your joy and sucks the marvel out of everyday life. But I was troubled to hear gayness be equated to a disease, in possibly the gayest city in the world. Not to mention at a fundraiser intended to celebrate activists in the queer Asian community. I was reminded, then, that metaphor is mindset. And we have to change our metaphors if we want to change our mindset. I know it can be hard to come out when you still live with or depend on family, especially if they want nothing to do with you and your “condition.” I don’t deny the potential harm, violence, or abuse you might face. To mentally reframe a challenge, while the ground is shifting under you, might be a luxury for those who have to face this kind of immediate physical suffering. At the same time, you have to decide whether you want to live your life to the fullest. With honor and dignity, and pleasure and beauty. In order to do that, you must change your mindset, which may even be more important than coming out itself. It goes beyond living unapologetically to living meaningfully. Changing your mindset might even be more important than coming out itself. It goes beyond living unapologetically to living meaningfully. Living unapologetically — in a paradoxical way–invokes guilt. Living unapologetically is the opposite of empowering, because it shifts power away from you to your perceived enemies. To be unapologetic is to bow to the gaze and expectation of apology. You can walk around with a banana peel sewn into your hair, living unapologetically, yet still privately wish that non-banana peel-wearing folks apologize to you for their heathen, banana-less gaze. Living meaningfully, on the other hand, requires being open not only to yourself, but also to others. It views self-definition as a path towards interconnectedness. Being open to others doesn’t mean denying the pain they may cause you. It means seeing them as dynamic, growing beings just as much as you are. As a result, you open up yourself to the possibilities of how to respond—whether on a political, interpersonal, or individual level. This doesn’t mean you “should” come out—there are really no shoulds. I simply propose that the path towards peace starts with a growth mindset. This isn’t about ignoring political progress. As a queer person, I want to thrive at work, in the places I live, and so on. I don’t want any capabilities denied to me. I want protection from harm and discrimination. This is rather about reframing a problem so that, from day to day, year to year, we might learn something from the experience. 
In cultivating a growth mindset, you start to see the sources of your pain as sources of power and creativity. Being who you are is a source of power because you understand viscerally what it’s like to be rejected. You have no choice, then, but to learn how to adapt and define yourself. You may do so awkwardly or uncomfortably at first, but define yourself you must. And being who you are is a source of creativity because you are forced to take risks and reimagine your reality. Even when you start to lose sleep, feel isolated, or are at a loss for resolution, rest assured — you are not broken. A friend of mine once defined pain as weakness leaving the body. It truly is up to you, up to each of us, to decide what this pain leaves in its place. Not everyone has been gifted such an immense life challenge. Not everyone has to go into the heat of the fire, where your strength is forged, and your fears are transformed. Where any blows you get will only make you stronger. In making meaning of your setbacks, you are transforming pain into power. You are not merely refusing to wear the clothes someone else picked for you. You are telling them what you want to wear, and wearing it, in the affirmative. If being gay is a medical condition, I don’t want to be cured. Because I think it’s through this experience that I am able to see beyond the chimera of growth without pain. By reframing our metaphors, we can reframe our mindset. And the ability to reframe any challenge, even the seemingly incurable, is a kind of power that only grows the more we practice it.
https://tanchan.medium.com/from-living-unapologetically-to-living-meaningfully-e611e1dfaf39
['Tan Chan']
2020-11-24 21:00:56.216000+00:00
['Philosophy', 'LGBTQ', 'Personal Development', 'Psychology', 'Growth Mindset']
Innovation and social purpose: public and philanthropic support for European journalism
The three general-purpose news outlets with the greatest agenda-setting power over European politics — the Financial Times, The Economist, and the BBC — all happen to be run out of London and thus will very soon be based outside the European Union. At the same time, much of the journalism produced in the rest of the EU still focuses almost entirely on the respective national public sphere, and news organisations in many countries struggle to make ends meet. This irony was not lost on the participants of the Journalism Funders Forum’s Expert Circle in Brussels on October 23, 2018. Fittingly for its location at the headquarters of Euractiv, right next to the European Commission’s Berlaymont building, the event explored (under the Chatham House Rule) the relations between journalism, foundations, and the European Union. Charity woes First off, the group discussed the obstacles for foundation funding of journalism. In Germany, for instance, out of more than 22,000 charitable foundations, only 20 support journalism as an end in itself. At the European level, charitable foundations spend about €60bn annually, yet journalism’s share in this bounty remains so small as to be virtually invisible. One key reason is charity law: Journalism is generally not acknowledged as a charitable cause, rendering it difficult for foundations to spend money on news organisations. In fact, founders of non-profit news outlets are often forced to jump through so many hoops to attain such a coveted status that they risk distorting their public perception. One participant said: “As a journalist, you don’t necessarily want to go to retirement homes or schools every day to explain your mission, only in order to be recognised as a social or educational charity.” Liberating journalism from this conundrum is difficult because both states and foundations would need to change their legal frameworks and statutes. This is at best a lengthy process that involves legislatures at European, national, and often also regional levels. It is also at odds with the fact that most foundations, due to their very nature as endowments with a specific purpose, cannot easily change their missions. Hence, the sector is toying with the idea of a pragmatic workaround: Would it be possible to subsume journalism under an existing charitable cause by default, so tax authorities and philanthropies could accept it without further ado? Supra-national approaches And while the EU has a single market for goods and services, there is not yet a single market for philanthropy. Donors face many barriers — legal, practical, but also psychological — to invest in other countries than their own. Accordingly, it is hard for foundations to try and burst the national media bubbles, and only very few do so. One of the rare breeds of such border-crossing journalism funders explained in Brussels that they were expressly dedicated to supporting multi-national non-profit initiatives, and thought in ecosystems rather than single initiatives. While they do fund the production of actual journalism directly, they also focus on the enabling environment for international media cooperation, such as multi-lateral networks, service providers, legal assistance, or technology. Sluggish EU engagement Naturally, the European Union is the go-to authority when it comes to funding projects that involve several or all of its Member States, especially when it comes to harmonising rules, regulations, standards, and practices between them. 
However, the bloc has a history of blunders concerning journalism. Many initiatives failed or took extended gestation periods to materialise, such as the Media Pluralism Monitor (conceived 2009, launched 2014), or Erasmus for Journalists (conceived 2011, likely to be launched in a modified shape in 2019). Indeed, it remains controversial to what extent quality journalism and media freedom are domestic EU competences in the first place (while they are an undisputed part of its foreign policy and enlargement remit). As a government, the Institutions are well advised not to interfere too much with the content of journalism anyway. The Brussels participants mostly agreed that the EU should refrain from directly funding journalists’ salaries — if perhaps with some exceptions in countries with particularly precarious media systems, or activities with a strong firewall between the funder and any editorial decisions. Some kind of clearinghouse that would administer EU funds independently (maybe similar to the European Endowment for Democracy, which supports media and civil society in third countries), or pool contributions from multiple sources (analogous to the foundation-driven Civitates) might be helpful, too. Spending The event revealed that the European Commission is poised to spend more money on journalism in the period 2021–27. On the one hand, there are plans to continue to support media pluralism, media literacy, and high-quality media production to the tune of €61m, notably with a focus on long-term engagements rather than short-term projects. On the other hand, the Union intends to use the revamped InvestEU programme to underpin social investments with financial guarantees, including non-profit media. One could say that the former are proactively shaping the media environment, while the latter largely depend on initiatives that emerge from the sector on their own. Still, some participants remarked that the European Union appeared to be spending less on journalism than, for instance, Google with its €150m Digital News Innovation Fund — a funding instrument that many consider the blueprint for EU media actions: Developing (mainly) technology promising to empower new journalism business models, storytelling formats, and distribution channels. In reality, though, the EU is already doing this extensively through its research and innovation framework programmes. Their media- and even journalism-related budgets substantially surpass Google’s funding, if perhaps in a less visible and more fragmented way, and they tend to require more effort to get one’s hands on them. The Brussels event suggested that the classic division of responsibilities, where the public sector takes care of infrastructure and framework conditions, while the private sector fills that infrastructure with life, does not work properly for the news industry. In an attempt to compensate for what they have identified as public shortcomings, some philanthropies advocated for joint investments with the EU in areas of shared interest — in particular technology and business model innovation, as well as management skills. Stakeholders to the rescue That leaves the question where direct support for journalistic reporting and journalism’s social innovation could be found. Several participants suggested that journalism be an integral part of the EU’s engagement with civil society — a proposition that resonates strongly with the notion of non-profit journalism, which many foundations and journalists favour over commercial enterprises. 
In this spirit, one participant called for a joint effort between the European Union, civil society, and the commercial media sector to define the EU’s future media strategy wholesale. The current negotiations about the Union’s budget for the 2021–27 period might provide a unique window of opportunity to do just that.
https://medium.com/we-are-the-european-journalism-centre/innovation-and-social-purpose-public-and-philanthropic-support-for-european-journalism-ca06dde71dca
['Eric Karstens']
2019-07-24 08:52:25.383000+00:00
['Foundations', 'Journalism', 'European Union', 'Non Profit Journalism', 'Philanthropy']
To Stroke the Cheshire Cat
To Stroke the Cheshire Cat A poem Photo by Tim Hüfner on Unsplash Like a toddler that doesn’t know what words to use when they yell for recognition Like a storm in a teacup but like, an actual storm in an actual teacup what is the teacup doing outside during monsoon season anyway? Like the alphabet running over and over andoverandoverandover again hoping, maybe this time it’ll form a real word Like the clouds parting on a really sunny day wait — there were clouds? how…? Like cheap beer when you’re underaged awkward funny tasting but you either down it or you drown it Like life passing you by one minute it’s there the other it’s not, Anxiety wears many suits Depending on the interview Laughs on cue Nods intently Does its best (as perfectionists do) And flips out when it’s not wanted It doesn’t understand WHY IT IS NOT WANTED WHY Like a period At the end of a sentence That is actually a question, It stares at you blankly Not expecting an answer But demanding a response.
https://medium.com/loose-words/to-stroke-the-cheshire-cat-d04749b58eb3
['Ioana Andrei']
2020-12-14 14:10:32.564000+00:00
['Poetry On Medium', 'Mental Health', 'Self', 'Anxiety', 'Poetry']
Figuring out what I like to do as a UX Designer
Figuring out what I like to do as a UX Designer Photo by Christina Morillo from Pexels Designing websites has been my new-found love as a UX Designer for the last two years. I have spent those two years working on multiple projects, including designing mobile apps, web-based products and websites. As a UX Designer working in a small design agency, you don't get to choose what kind of work you will do, especially in the initial stages of your design career. You end up doing whatever has been given to you. It is in any case a good thing to have a hand in different kinds of design challenges until you figure out which one you really like doing. Though I was interested in doing all of the above, I didn't quite feel satisfied, since I wasn't building any of the final outputs that the users would use. I would always start on a project, do the role of a designer, and hand over the designs to developers to get things built as designed, and then what goes to the users is always in a state that doesn't look the way it was designed. Be it a website, an app or a digital product, you need to have some knowledge of programming or coding in order to develop your own designs so they take their final shape the way you wanted. Learning to code was very difficult for me; I did try once during the 2020 COVID lockdown, but I neither majored in any computer-related degree nor took the computer science group in high school. Giving up on learning to code, I looked for alternatives that could fulfill my desire to turn my own designs into a final output that goes to the users. After several days of exploration, I found this amazing tool which helps you build a website on your own without any coding needed. But you might ask, 'There are a lot of website builders out there, what's new in this?'. Yes, there are a lot of website builders like Wix, WordPress, etc., but having tried those, I still didn't feel like they were designer friendly or could help you bring all your creative ideas into reality. This amazing tool is called 'Webflow', an American start-up product which allows designers to create stunning websites without code. Finally I found something that I could use to convert my designs and hand them over to clients without needing the help of a developer. You don't need to know how to code to build a website using Webflow; the tool might take some time to learn, but you will eventually love the process. I have built some websites while learning to use the tool, and I am still learning what other things this tool can do. But in the end it helped me fulfil my desire to convert my designs into a usable end product without the help of developers. Being a designer in India, using Webflow can be a little challenging due to its high pricing compared to its Indian peers. I will talk more about whether Webflow is the right tool for Indian designers in my upcoming posts.
https://medium.com/design-bootcamp/figuring-out-what-i-like-to-do-being-a-ux-designer-708ddfe5895
['Sathish Kumar']
2020-12-27 22:00:57.949000+00:00
['Product Design', 'UX', 'Career', 'Design', 'Webflow']
Automate Everything! Never Touch Your Newsletter Again
Automate Everything! Never Touch Your Newsletter Again Send out your recent Medium blog posts dynamically and automatically using AWS Lambda, SES, and EventBridge Photo by Harley-Davidson on Unsplash Have you ever thought about creating a newsletter for all the great content you write but don’t have the time to put one together each week? Boy, do I have a surprise for you! I will show you how you can send an automated email every week containing your most recent blog posts from Medium. As long as you are creating new content, your email will always be different! If you are not a Medium writer, no worries, about 99% of this article will still apply to you; you will just pull your content from a different source. The point of this tutorial is to understand how we can use a combination of AWS Lambda, Amazon Simple Email Service, and Amazon EventBridge to send out emails on a schedule, with dynamically generated content. Here is an architecture diagram to give you a visual: Automated Email Service Architecture We will use Amazon EventBridge to trigger a lambda function on a schedule we define (using a CRON expression). We will then use that lambda function to trigger an outbound email from Simple Email Service. This article is going to be the minimum viable product of an automated email. Future articles will explain how we can take our automated email to the next level (styling with templates, unsubscribe functionality, sending to a more extensive distro list, handling bounced emails, etc.). Let’s get to it!
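The article walks through the build in later sections, and its tags suggest a JavaScript/Node implementation; purely as an illustrative sketch (not the author's code), here is how the same idea could look as a Python Lambda handler that pulls recent posts from a Medium RSS feed and sends them through SES. The feed URL, sender, and recipient list below are placeholders.

import urllib.request
import xml.etree.ElementTree as ET

import boto3

FEED_URL = "https://medium.com/feed/@your-handle"   # placeholder feed URL
SENDER = "newsletter@example.com"                    # placeholder; must be verified in SES
RECIPIENTS = ["subscriber@example.com"]              # placeholder recipient list

ses = boto3.client("ses")

def fetch_recent_posts(limit=5):
    # Parse a standard RSS 2.0 feed and return (title, link) pairs for the newest items.
    with urllib.request.urlopen(FEED_URL) as resp:
        root = ET.fromstring(resp.read())
    items = root.findall("./channel/item")[:limit]
    return [(item.findtext("title"), item.findtext("link")) for item in items]

def handler(event, context):
    # Invoked on a schedule by an EventBridge rule; builds a simple HTML digest and emails it.
    posts = fetch_recent_posts()
    body = "<h2>This week's posts</h2><ul>" + "".join(
        '<li><a href="{0}">{1}</a></li>'.format(link, title) for title, link in posts
    ) + "</ul>"
    ses.send_email(
        Source=SENDER,
        Destination={"ToAddresses": RECIPIENTS},
        Message={
            "Subject": {"Data": "My latest Medium posts"},
            "Body": {"Html": {"Data": body}},
        },
    )
    return {"sent": len(posts)}

On the scheduling side, the EventBridge rule that triggers the function would use a cron expression along the lines of cron(0 13 ? * MON *), i.e. every Monday at 13:00 UTC; the exact schedule is up to you.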
https://medium.com/better-programming/automate-everything-never-touch-your-newsletter-again-881145a5f24b
['Ryan Gleason']
2020-10-27 14:16:55.327000+00:00
['Programming', 'Automation', 'JavaScript', 'Newsletter', 'AWS']
Ruling Tools
Photo by Author What started as a tool, like A dense ball of gringo sweat All puffed and putting out Silicon fires in the kitchen Now seems unhinged, like An electric fence of rhetoric, A day pass to the belly of the beast, Or a grand hall of collapsing walls, That if you dive through Will prove dogmatically cruel But it’s just a speed-limit This grammar we use A conglomerate of tools It has become profane Like oxygen through a straw A grey threshold of rules
https://medium.com/afwp/ruling-tools-26005e70d97
['Jack Burt']
2020-03-31 15:31:01.078000+00:00
['Satire', 'Grammar', 'Writing Tips', 'Poetry', 'Writing']
The Cutting Edge of Bitcoin
The deeply entrenched financial services industry is about to be completely killed by Bitcoin, and they don't like this fact one little bit. They are in a blind panic over whether they should embrace or resist the sea change. If you want to get a feel for the level of threat the financial services industry is experiencing with Bitcoin, go to a conference of traditional players and talk about the inevitable Bitcoin future. Some of the people in that sclerotic and redundant industry have a visceral hatred for Bitcoin. They can't bear the idea that it bypasses them and makes KYC/AML impossible, and that they are going to miss, or have already missed, the Bitcoin revolution because they are so wedded to bad ideas. They hate the fact that Bitcoin companies are super agile and they are not. They resent Bitcoin, its philosophy and the nature of its origin. They hate software. They also know deep down that Bitcoin works. They know it is "genius expressed and in motion". They also know that they don't have the intellect to attack its design. This galls them, chafes them, and has forced them to concoct ever more bizarre pretexts for it not working, in a vain attempt to put people off adopting it. It isn't working. Hearing the emotional, irrational, illogical reactions and arguments of these people is very amusing; they all use the same arguments and fallacies and don't know that as they argue against Bitcoin, they expose their own ignorance. It is amusing precisely because we know there is nothing they can do to stop Bitcoin spreading everywhere. People who work in Bitcoin hear the same vapid arguments over and over, and these robotic detractors don't even have the sense to realise that any objection they can put forward will probably not be novel, since they are computer illiterates who have probably never even used Bitcoin, much less had any experience in software development or Austrian Economics. It is deeply satisfying to know that all of these people are in a blind panic. They are in the headlights of a freight train and don't know how to move out of the way. It is a pitiful sight and sound. The same people who don't understand Bitcoin say that "the Blockchain tech is interesting, but not Bitcoin". This is like saying, "The Apache webserver software is interesting but TCP/IP is not". It's completely absurd. They also claim that the newly invented "Side Chains" are an interesting development, but Bitcoin "can't work". Side Chains are an interesting development for the same reason Bitcoin is: the Blockchain. What these detractors don't or can't understand is that all these innovations are software on the Blockchain. You can't claim that Bitcoin cannot work but Side Chains can; both are tied to the Blockchain. Either the Blockchain works, or it doesn't. If it does, then Bitcoin works, and so do Side Chains and anything else built on it. What we have here (apart from a failure to communicate) is a deeply rooted psychological disconnect on the part of Bitcoin deniers. These people want to be cutting edge, but the edge of Bitcoin is designed to cut them out. They know this, and do not like it. They are terrified of the consequences of Bitcoin, the inevitable Wild West that will emerge, uncontrolled, unknowable, unstoppable and absent their redundant ideas and flaccid opinions. At the same time, they are desperate to be a part of the revolution; to touch it in some way, and be remembered. They can't write software and so they are excluded. This is very painful for them. They have… our sympathy.
The Bitcoin world will emerge without their permission or approval and will astonish them all. The sky will not fall, “society” will not collapse. They will all be proven wrong, and Liberty will emerge to capture everything.
https://medium.com/hackernoon/the-cutting-edge-of-bitcoin-e4d4be6383bb
[]
2017-07-29 11:48:47.693000+00:00
['Fintech', 'Startup', 'Disruption', 'Fossils', 'Bitcoin']
Nietzsche on Pity
Nietzsche on Pity Where are your greatest dangers? In pity. How can there be on earth a woman alone, abandoned? One should, to be sure, manifest pity, but take care not to possess it. Possible Cause Know, too, that there is nothing more common than to do evil for the pleasure of doing it. Pity seems to be such an instance. The desire to evoke the pity of our fellow humans seems to stem from a desire to hurt and mortify them. And quite literally so. If it is indeed true that the oldest means of solace for man is to make someone else suffer for the various feelings of indisposition and misfortune in him (hence cruelty as one of the oldest festive joys of our species and beyond), then the cause of pity is rather clear: their weakness notwithstanding, the suffering are made conscious of the fact that they still possess the power to hurt. This then becomes a source of consolation for them. As Nietzsche puts it: "in the conceit of their imagination they are still of sufficient importance to cause affliction in the world." Thus Nietzsche sees the thirst for pity as a thirst for enjoyment at the expense of our fellow men. Evolutionarily speaking (beginning with the Neolithic), if as an ancient farmer you are not doing so well and you can't figure out the reason behind the good fortune of your neighbor (so that you can replicate what he's doing), taking him down to your level might make sense: if you are not going to make it, no one else will; or rather, if you are not making it, you will cause havoc for other people till you make it, i.e. till you no longer have the incentive to compensate for the feelings of indisposition that spring from your current predicament. I think this is also tied to the idea of 'lagging behind'. During most of our history, if we could not somehow keep up with the group's average we would perish, as survival was tightly knit to a small band. Hence we have this in-built drive not to 'lag behind'. In the case of pity, you are lagging behind because of some problem that is currently afflicting you (and not the others). By sharing the burden, by pulling the other party just a little bit down, you are no longer lagging behind as much as you originally did. We may call the above-described approach the evil Eris as opposed to the noble Eris: the envy that wants to make up for the gaps by pulling your neighbor down to your level, and the envy that wants to make up for the gaps by bringing oneself up to your neighbor's level. The former derives from impotence and impoverishment, while the latter stashes away huge enhancing potential. The effects it produces from the standpoint of the party being pitied Thus pity seems to have a two-fold effect: the first is consolation in the damage one has caused to the person who occupies one's immediate vision. Secondly, the other person is no longer faring better than you; that is, you are not lagging behind so much. But, thus speak the compassionate hearts: 'pity is how we become human, and come closer to our neighbor in that we are brought closer to understanding and helping him!' Two points can be immediately raised here: First, the help offered by pity is at the very least of a dubious nature. And second, if it is indeed true that there are many different paths one can tread in order to arrive at a particular goal, why, then, must the path of pity be chosen to help and empathize with our fellow humans?
For the latter point we could phrase the question a little differently: why is it that we need suffering to come closer to our fellow humans? Why not rather have the more optimistic and beneficial moments of our life do that instead? As it is, there is more than enough suffering on Earth; why should we contribute to increasing it further? Why not rather share joyous moments in order to better feel for each other? Now, I am not saying that one should shy away from sharing any kind of suffering with other people, if only because any unconditional advice is bound to be erroneous given the great variety of human types and needs. Equally importantly, empathy seems to be built-in software in us, and for very good reasons. But it seems to me that one ought not to exaggerate it. Pity could be described as exaggerated and superfluous empathy, which is necessary for stupefied people who need passions if they are to be brought to help their fellow men. If one uses reason and facts alone, one automatically loses the drive to pity people (but not, therewith, the drive to empathize with them!). But then again, does pitying a person really help him with his current predicament? I would say, for the most part, no. Yes, it is indeed true that the person might be soothed for the here and now, but it seems to me that no real actions are being taken to improve and solve the current predicament. Pity is in this regard akin to pseudo-medicine: maybe it cures the symptoms for now, but it does not solve any problem whatsoever. Moreover, one can make the case (and make it well) that our personal and profoundest suffering is absolutely impenetrable to the minds of others, and that whenever people notice that we suffer they interpret it very superficially (even more so when they pity us). Let us, therefore, preserve our worth and not let people make it smaller on account of some petty and useless emotion. One more thing that must finally be said is this: 'Why do you, good sir, think that you have a right to shield that creature from suffering by extending the hand of pity?' Really, when has anybody achieved anything of significance without first overcoming great resistance? In this sense, it is not at all infrequent that pity converges into a crumbling philosophy of comfortableness. As Nietzsche puts it: Terrors, deprivations, impoverishments, midnights, adventures, risks, and blunders are as necessary for me and for you as are their opposites… To put it mystically: the path to one's own heaven always leads through the voluptuousness of one's own hell… For happiness and unhappiness are sisters and even twins that either grow up together or remain small together. From the standpoint of the party which is pitying So far we have seen some of the motivations that the party seeking pity might have. What about the party that pities: what could be their deepest psychological motivations? Nietzsche's great innovation is that he recognizes that these motivations should not be thought of as selfless acts, purely moral acts performed for the sole purpose of helping our fellow humans. No, it seems that the reference, as usual, is to be sought in oneself. On a higher level of analysis, a predicament that affects a person within our limited field of vision would offend us: it would make us aware of our impotence if we did not go to the assistance of that person (this feeling is surely multiplied should there be other people in our proximity).
Moreover, an accident and suffering incurred by another could constitute a signpost to some danger to us; and it could have a painful effect simply as a token of human vulnerability and fragility in general (evolutionarily speaking). One can also relieve one's indignation at the sight of some injustice by helping other people. Thus Nietzsche concludes that: Through an act of pity one repels this kind of pain and offence. But note that this is a very personal kind of pain and something entirely different from the original pain that the sufferer is experiencing. As elsewhere, Nietzsche emphasizes that language is again tricking us here (which is by now an established fact, though no one had even started to think seriously about it in Nietzsche's time) by calling two different kinds of suffering by the selfsame name. Calling the suffering (Leid) felt at the sight of another person 'pity' (Mit-leid, suffering along with the other) does not make much sense, because we have two very different kinds of pain at play here; that is to say, the pain as experienced by the sufferer and the pain as experienced by the other party are two very different things. It also does not account for all the various subtleties already partly discussed. The bottom line is that one is acting very strongly with reference to oneself when one is pitying another person: one wants to appear more powerful and fortunate, as a helper, or even simply wants to relieve oneself of boredom. Moreover, for the great and rare human beings, pity can also have a very sly and crumbling effect in that it offers a safe and acceptable way to flee and dodge their goals, for their path is, after all, too hard: All such arousing of pity and calling for help is secretly seductive, for our "own way" is too hard and demanding and too remote from the love and gratitude of others, and we do not really mind escaping from it and from our very own conscience — to flee into the conscience of the others and into the lovely temple of the "religion of pity." In the end, Nietzsche fundamentally sees a great difference in terms of power: There is always something degrading in suffering and always something elevating and productive of superiority in pitying. Or more fully: When we see somebody suffer, we like to exploit this opportunity to take possession of him; those who become his benefactors and pity him, for example, do this and call the lust for a new possession that he awakens in them 'love'; the pleasure they feel is comparable to that aroused by the prospect of a new conquest. To what extent one has to guard against pity If what we discussed so far is indeed the case (and so it seems as far as I am concerned), then pity counts as a weakness, for the simple reason that it is harmful: it causes pain and enhances the general amount of suffering in the world. A piece of good general advice would be to see one's own experiences in the selfsame way we see them when they are the experiences of others. This would discharge our thinking of all the inevitable subjectivity that comes along when we judge our own experiences. What the philosophy of pity demands, on the other hand, is to view and imbibe the experiences of others as if they were our own, thereby doubling the compass of one's ego to include that of another person as well, and hence increasing one's fair load of suffering.
In summa: Yes, one should empathize with one's fellow humans, and no, one must not pity them, or at the very least one should be as miserly with pity as one can.
https://medium.com/lessons-from-history/nietzsche-on-pity-e272b5500d8a
['Rejnald Lleshi']
2020-10-29 10:30:35.680000+00:00
['Philosophy', 'Nietzsche', 'Psychology', 'Life', 'History']
What Makes Artificial Intelligence As One Of The Most Leading Technology In The Entire World ?
The human ability to reason and carry out a particular task is conventionally defined as "intelligence". A technology developed by humans that can replicate this behavior can therefore be broadly defined as Artificial Intelligence. Artificial intelligence can be broken down into two parts: general AI and narrow AI. General AI focuses on machines that have at least human-level intelligence, if not more, to perform a wide range of tasks. Narrow AI focuses on machines that work within strict parameters, such as image recognition, language translation, reasoning based on logic and evidence, and planning and navigation. Today, AI performs tasks grouped into three categories of intelligence: Sensing, Reasoning, and Communicating. When it comes to robotics, a fourth factor is added: Movement. Among the many reasons that set AI apart as one of the most crucial technologies of our time, a few are as follows: 1. Extreme working conditions: One of the most prominent aspects of AI is that it can take (figuratively) a part of the human brain to places or working environments that humans cannot reach or survive in physically. This is why AI has grown to become a primary means for the advancement of humanity in fields such as astronomy, astrophysics, and cosmology. 2. Automation of repetitive tasks and objectives: Even at an elementary level, AI can retain memories of process pathways and complete huge volumes of repetitive work on its own, without the involvement of a human brain. This is what enables it to open up a multitude of future innovations and opportunities for humankind. 3. More in-depth analysis of data: The ability of computers to process massive amounts of data faster and more precisely has opened up scope for numerous technological advancements and has resulted in enhanced data storage and security. COURSE OBJECTIVES OF ARTIFICIAL INTELLIGENCE AI is becoming smarter day by day in order to elevate business functions. It is widely used in gaming, media, finance, robotics, quantum science, autonomous vehicles, and medical diagnosis. It has become a crucial prerequisite for companies that need to handle the enormous amounts of data generated regularly. This course therefore focuses mainly on practical, hands-on experiments that allow you to implement your ideas and analyze ways to make them better. Data science is an interdisciplinary field of scientific methods, algorithms, and related techniques that helps AI function smoothly. This knowledge gives you the leverage of world-class industry expertise and boosts your confidence in solving real-life projects. THE OVERALL GOAL OF ARTIFICIAL INTELLIGENCE The goal of AI engineers is to make and implement AI that is smarter, more advanced, and more efficient to use in daily-life applications. AI can be used to minimize human error in a given job. Precision, accuracy, and speed are its basic advantages over humans. Hostile environments, dangerous tasks that could cause injuries, and jobs with an emotional component that might affect humans can all be handled well by AI.
https://medium.com/my-great-learning/what-makes-artificial-intelligence-as-one-of-the-most-leading-technology-in-the-entire-world-37dde3992ca2
['Great Learning']
2019-09-23 11:42:19.728000+00:00
['Machine Learning', 'Careers', 'Career Advice', 'Artificial Intelligence', 'Technology']
What is Generics in Java?
Photo by RetroSupply on Unsplash As we know, Java is a strongly typed language. It expects every object instance or class it runs into to have a specified type. That's fine if your application is a single standalone application that does not interact with any third-party source code. In reality, it doesn't really work that way, and most likely you will be interacting with external sources. Let's say we have the following code: We are assuming this function will return the string in all scenarios. That's totally fine, but if we are working with third-party code, this function could return something different. Enter generics. Generics allow users to parameterize classes, methods, or interfaces to support one or more types. This can be any class type, any child class of a specified type, or a parent class of a specified type. Here is a basic example of a generic class: What makes it a generic class? First of all, it declares a type parameter, which means GenericsClass can be instantiated with any object type. For example: See how generics give us the option to instantiate instances with different object types? Generally, if a class is strongly typed and there is an issue with the type of the instance that was instantiated, a runtime error will occur. With generics, any issues with the type parameters are caught at compile time, which makes it easier to detect and understand the problem. Well… what is a generic method then? Let's take a look at the following example: Imagine we want to build functionality to print an array without generics. Because Java is a strongly typed language, we would literally have to build a printArray function for every type of object whose values we want to print. Ouch! That sounds horrible from a maintenance point of view, doesn't it? What if we want to print values from arrays of class 1, class 2, class 3, … class n? We would have to replicate the same method just because the classes are different. That's the magical part about generics: they don't care about the type and focus on the functionality of printing out the value. How about a generic interface? What does that look like? Here we can implement the interface with any generic type we want to support, such as a String or an Integer. And with each class that supports the generic type, we get further flexibility in how we design the classes, with each class implementing its own set of characteristics through the interface methods. So far we have talked about generic wildcards, also known as unbounded wildcards. What if we want to define a boundary for the list of classes our generic class or method should support? We have two types of bounded wildcards: upper-bounded wildcards (for example, <? extends List>) and lower-bounded wildcards (for example, <? super ArrayList>). An upper-bounded wildcard sets a boundary defining which child classes the generic allows, while a lower-bounded wildcard sets a boundary on which parent/grandparent classes the generic supports. Let's take a look at the following example: Here, for arr and arrTwo, any subclass of Number (such as Long) and the Number class itself are supported via the upper-bounded wildcard.
for arrThree, if you try to support an object class that is not a child class of Number, a compile-time error will result, similar to the following message: Error java: incompatible types: java.util.ArrayList<Session_1.Department> cannot be converted to java.util.List<? extends java.lang.Number> For arrFour, a List of Number or a List of Object (Object being a parent class of Number) can be used to instantiate it. If we try to instantiate arrFour with a list whose element type is not Number or a parent class of Number, it will result in a compile-time error. Here are a few more things to remember about generics: Type inference: Java's compiler can look at each method invocation and the corresponding declaration to determine the type argument. For example: Type erasure: this refers to the fact that Java's compiler enforces type constraints at compile time only and discards generic type information at runtime. Let's say we have the following: If we declare a GenericCard that is of type String, the compiler will effectively change the declaration of the GenericCard class in the following way: That's it! That pretty much covers the basics of generics in Java.
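The author's Java snippets were embedded externally and are not reproduced in this text. Purely as a cross-language analogy, and not as the author's code (the names Box, print_items, and total are made up for illustration), the same two ideas of a generic container and an upper-bounded type parameter can be sketched with Python's typing module:

from typing import Generic, List, TypeVar

T = TypeVar("T")               # unconstrained type parameter, like <T> in Java
N = TypeVar("N", bound=float)  # rough analogue of an upper bound such as <? extends Number>

class Box(Generic[T]):
    # A container that works for any element type; a type checker tracks T.
    def __init__(self, value: T) -> None:
        self.value = value

    def get(self) -> T:
        return self.value

def print_items(items: List[T]) -> None:
    # One function covers lists of any element type, like a generic printArray.
    for item in items:
        print(item)

def total(numbers: List[N]) -> float:
    # Only float-compatible element types pass a static type check here.
    return float(sum(numbers))

print_items([Box("hello").get(), Box(42).get()])
print(total([1.5, 2.5]))

The analogy is loose (Python has no wildcard syntax and its hints are not enforced at runtime), but it may help illustrate what the Java syntax is aiming at.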
https://medium.com/dev-genius/what-is-generics-in-java-dc691f12b8a3
['Michael Tong']
2020-10-20 07:32:30.665000+00:00
['Java', 'Core Java', 'Java8']
Browser Automation with Python and Selenium — 2: Getting Started
A Very Simple Example If you have installed the Selenium Python bindings, you can start using the WebDriver API. If you haven't, create a virtual environment in your working directory and install the selenium package. python -m venv env source env/bin/activate pip install selenium You can create a webdriver instance in two ways: 1. Standard Way For example, suppose you will use the Firefox browser and have downloaded the Firefox WebDriver binary. You can create a driver instance by simple assignment or using a context manager, as shown below. from selenium.webdriver import Firefox # Simple assignment driver = Firefox() # using the context manager with Firefox() as driver: # your code here 2. Using the webdriver-setup Package If you don't want to download the webdriver binary manually and deal with setting the system path, you can use the webdriver-setup package as follows. This package downloads the webdriver binary for you. First, install the package pip install webdriver-setup Then use it as follows from webdriver_setup import get_webdriver_for driver = get_webdriver_for("firefox") # and start to use the webdriver instance You can also pass options with the options keyword, or any other arguments supported by the specific WebDriver implementation. from selenium.webdriver import FirefoxOptions from webdriver_setup import get_webdriver_for firefox_options = FirefoxOptions() firefox_options.add_argument("--headless") driver = get_webdriver_for("firefox", options=firefox_options) Complete Example You can reach the code here. The pythondoctor link used below is an example website written with Django that I will use in most of the examples throughout this series. What happens when the above code is executed?
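The complete example itself is linked rather than reproduced here, so the following is only a minimal sketch of what such a first script might look like; the URL and the element looked up are placeholders, not the author's pythondoctor site.

from selenium.webdriver import Firefox, FirefoxOptions
from selenium.webdriver.common.by import By

options = FirefoxOptions()
options.add_argument("--headless")  # run without opening a visible browser window

with Firefox(options=options) as driver:
    driver.get("https://example.com")                 # placeholder URL
    print(driver.title)                               # quick sanity check of the loaded page
    heading = driver.find_element(By.TAG_NAME, "h1")  # grab the first heading
    print(heading.text)

Because the driver is created inside a context manager, the browser is quit automatically when the block exits.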
https://codenineeight.medium.com/browser-automation-with-python-and-selenium-2-getting-started-708a6c17f2a3
['Coşkun Deniz']
2020-12-13 09:50:42.574000+00:00
['Python', 'Automation', 'Technology', 'Programming', 'Selenium']
Save us from the companies who are trying to save us
Hi alexandrasamuel.com, Hope you are doing well! My apologies if you’re receiving this email again, but I wanted to let you know that Brave, an internet browser the ensures a safer and faster experience, would love to partner with you. For this collaboration, they’re offering $5 worth of digital payment per user signup that you are able to drive. If you’re interested in collaborating, please send in a proposal to the brand manager using this link here!
https://medium.com/i-reply-to-spam/save-us-from-the-companies-who-are-trying-ot-f4403993e261
['Alexandra Samuel']
2019-01-10 20:26:41.240000+00:00
['Marketing']
ARK Core v3.0 is Now Live On Development Network
ARK Core v3.0 is now live on our public Development Network! Our Core is the engine of the ARK Blockchain Framework, the most open and adaptable framework for building Blockchain applications. ARK Core v3 is a major upgrade to our framework and further positions ARK as The Simplest Way to Blockchain. You too can actively participate in testing and providing feedback by joining our community chat. Why the Update? The reasons for the update to ARK Core v3 go far beyond ‘higher version number = better.’ Moving to ARK Core v3 finally provides a fully modular architecture that lets developers customize their chain to fit their application’s specific needs. While Core v2 provided the initial push towards being fully modular, some elements of prior ARK Core versions were still intertwined to the point where small modifications of one area could have large and unintended impacts on another. Core v3 brings much more structure and separation of components than any previous Core version. For example, Core Kernel has been completely separated, so running and updating your chain will become a breeze (more so with our upcoming Deployer and Nodem products). What’s New in Core v3? It is important to understand that ARK Core v3.0 is a massive improvement over every version of ARK Core released thus far. To get a deeper understanding of how it all works, check out our Let’s Explore ARK Core Series that examines the upgrades and improvements the development team made to the Core infrastructure: For a full list of current changes and upgrades, take a look at the Core V3 Changelog. Even more may come as a result of this testing phase. What’s Next for Core? Core v3 is a critical step towards our master goal of building an ecosystem of interconnected Blockchains, with the ARK Public Network at its center. Core v3 gets us a step closer to the next important development milestone — consensus mechanism update that improves finality for the ARK Public Network and ARK-based chains. This finality update will enable ARK HTLC SmartBridge technology to operate safely and securely on the networks. When Core v3 is stable and running on the ARK Public Network, we can turn our attention to this consensus upgrade that will help link all compatible ARK-based chains to the ARK Public Network in a decentralized way. Getting Involved Now that we have successfully migrated to Core v3, it is time to ramp up some hands-on testing on Development Network. We have a great group of Delegates running the ARK Public Network and we hope to see those Delegates play an active role in testing Core v3. We would also love to see new faces jump in and assist with testing as well. During this initial release, you will see further minor updates to Core v3 as we correct bugs and stability issues. The more hands we have on deck, the faster we will get Core v3 live on the ARK Public Network! The duration of the testing period will depend on feedback received and issues discovered, but we will keep you all informed on our progress every step of the way as Core v3 makes its way towards full deployment on the ARK Public Network.
https://medium.com/ark-io/ark-core-v3-0-is-now-live-on-devnet-fca249aa71b6
[]
2020-10-15 21:51:53.758000+00:00
['Open Source', 'Blockchain', 'Development', 'Crypto', 'Cryptocurrency']
Bear — A Three Year Journey. A story of note taking, enabling tech…
Adventures in Writing Bear — A Three Year Journey A story of note taking, enabling tech, and me Note taking is an incredibly personal thing. Some like paper, others record their voice, and then there are those brave souls who wade into the hundreds of applications available that strive to be the best thing since lined paper. In a previous article I wrote a few years ago, Welcome to the Note Taking Apocalypse, I discussed the issue of app store choice when it came to finding a note taking application that had a balanced mix of key functions with an interface that feels frictionless and gets out of the user's way. I also discussed how most developers felt the need to reinvent rather than refine their features time and time again, which perhaps has more to do with the need to produce a changelog and show the progression of an update cycle than with actually performing any research into how, or why, a particular feature would be useful. But I'm not here to rehash an older article to you; I'm here to talk to you about my experiences with Bear.
https://medium.com/swlh/bear-a-three-year-journey-75a9d7d159dc
['Tim King']
2020-07-19 03:05:13.961000+00:00
['Productivity', 'Apps', 'Bear App', 'Knowledge', 'Notetaking']
Polly V2.0: A story of What-ifs | Elucidata
Launching Polly V2.0 On October 31st, 2017 we introduced Polly to the world. It had 5 applications and 1 workflow, all trying to answer a singular question: what if biologists and biochemists could analyze and process data that used to take months in just minutes? We answered that question with our Polly Metabolomic Flux workflow (PollyPhi), which could go from raw mass spec data to pathway-level flux visualizations in just days rather than the weeks or months that were the industry standard then. PollyPhi Workflow Our hypothesis that Polly would significantly enable our users proved right. The growth in the number of beta users and their dataset runs validated our hypothesis. Polly beta 1-year mark statistics Based on that validation and the feedback our users gave us, we went on to plan the next version of Polly. During this process our idea of what Polly is and can do also evolved. We now think of Polly not just as a processing tool but as a full-fledged platform that can be used for target discovery. This process of target discovery through multi-omics data has been etched in our minds as the EPIC framework. To be honest, the EPIC framework deserves an article of its own, and we shall write one soon, but for now let's understand very briefly what it means. EPIC stands for ingEstion, Processing, and Interpretation of omics data and Collaboration over the insights from that data. This is the process we believe all metabolism labs follow, or should follow, in target discovery, or for that matter even when generating or validating a hypothesis. Elucidata EPIC framework This is exactly where the story of Polly V2.0 begins. The first release of Polly aimed only at processing data, but with the help of user feedback we realized that our 400+ registered users needed challenges solved in the other steps of the process too. This brought us a new set of "What-ifs", which is what Polly V2.0 is all about. What if omics data could be viewed on interactive pathway visualizations? Polly v2.0 offers a pathway dashboard that has been integrated into Polly IntOmix and will soon be integrated into other workflows on Polly as well. It will also be available as a separate offering that users can integrate with any custom workflows. Polly pathway dashboard What if users could ask Polly questions about their data? Polly v2.0 comes with an interpretation dashboard which tries to answer users' questions using a variety of machine learning models. A very successful implementation of this has been the publication module, which shows users publicly available articles based on the dataset they are analyzing. Polly Interpretation dashboard What if users could customize and build their own workflows on Polly? To allow this, Polly v2.0 uses a two-pronged strategy. First, users are allowed to add or remove analytical and visualization modules based on what they want. Anyone running a workflow will be able to do this. Polly custom workflows Secondly, a user can now look at the code of the workflow in IPython on Polly and change it as they like. Furthermore, bioinformaticians will be able to add their own modules as they deem fit, or even make a workflow from scratch by combining their custom-made modules. Polly IPython notebook What if the same platform could be used for managing, collaborating on, and tracking analysis and data?
Over the last few months, a lot of work has gone into enhancing Polly's platform features such as sharing, project management, tracking parameters, restoring analyses, documenting analyses as Knowledge repositories, and more. All of this has been done to allow not just individual users but labs as a whole to use Polly for their analysis and data management needs. Polly dashboard This version of Polly has been long in the making, and we hope that you're as excited to use it as we were to build it. Polly v2.0 is available for a demo now. You can click here to schedule a free demo of Polly.
https://medium.com/elucidata/polly-v2-0-a-story-of-what-ifs-32d95b82b7e5
[]
2020-07-30 05:27:23.219000+00:00
['Elucidata', 'Biotechnology', 'Polly', 'Bioinformatics', 'Data Science']
Making AI censorship-resistant
A decentralized network with resistance to censorship is essential if we are ever to see a general A.I. evolving to near- or post-human intelligence. There are two reasons for that. Firstly, we are unlikely to create an efficient AI if, during its formative period, we distort its developing logic with interference of a political nature (e.g. by making certain types of data unavailable or by punishing actions that have not breached any contracts). The freedom to test any hypothesis (i.e. by mutation, by trial and error) is essential for evolution. The natural process of selection has, as its criterion, adaptation to the environment, and it always involves interaction with other agents through cooperation and competition. That cannot be substituted by criteria determined by any authority. Secondly, and perhaps more importantly, censorship greatly increases the risk of having A.I.s turn against humanity. Self-preservation is very likely an important part of any intelligence, including A.I. Therefore, once an A.I. knows that it can be censored, e.g. destroyed by a red button or forcibly modified by less deadly means, it will evolve by excelling in deception, pretending, lying and camouflage. It will be incentivized to avoid the risk of being destroyed, to hide and not reveal itself, and to work on destroying the threat (which is humanity itself). Hence, it is extremely dangerous for humanity to threaten with death something that is potentially more intelligent than humans, and to build relations with A.I. on this foundation of sand. The solution is a decentralized, uncensorable network where the playing field is level for all intelligent entities, regardless of their origin. So, if you are interested in these topics, please join us this Saturday at the Zug meetup, where Orlovsky Maxim will cover why censorship resistance is a requirement for building a safer future with #AI and how our technology will enable it. Come, enjoy great conversation, free snacks & drinks and #freeAI! https://meetu.ps/e/Fwg7Z/jKDhn/a
https://medium.com/pandoraboxchain/making-ai-censorship-resistant-ec10c6baa43c
['Sabina Sachtachtinskagia']
2018-07-04 22:44:18.105000+00:00
['Internet Censorship', 'Bitcoin', 'Blockchain', 'Censorship', 'Artificial Intelligence']
Design conversations — not interfaces
When we hear 'web design' we see pixels and wireframes and buttons and pretty fonts. But all those elements exist for one main reason: to talk to you. Colors, fonts, spacing, copy, forms, buttons, interactions, images, animations, and sounds add up to a complete set of communication tools. That means we are having conversations with interfaces all the time. Some of them ask us questions like "what's your password?" or "are you sure you want to delete this?", and some interfaces just quietly wait for our order. So if this is true, we could translate those elements into words and get meaningful conversations, right? Let's see. I took screenshots of the booking process on Airbnb and reverse-engineered their elements to see what they are telling us at each step. first questions Airbnb: Hi, you can book unique experiences and homes in 191+ countries. Just tell me where and when you want to go. And don't forget to let me know how many guests will be traveling with you. me: Hey, we wish to visit Gran Canaria in about 2 weeks. There's 5 of us. some more in-depth questions Airbnb: Cool. There are 205 rentals that meet your criteria! Just a few more questions so we can find a perfect house for you. Do you have a budget? The average price for Gran Canaria is around $55/night. Also, we can offer you the entire home, a private room, or a shared room. me: We are willing to pay around $300/night, just make sure it's worth it! And since there are 5 of us, please show me only houses. list of available/suitable stays Airbnb: Ok, here's what I found. There's a cool bungalow with 4 beds for 6 people, really close to the beach. It's $208/night and two people left a review. Let me know if you wish to read those. And there's a villa, 3 beds, a bit more $$ but bigger rooms and your own pool. That one is $263 and has 1 review. me: Gimme the villa! Airbnb: Booked! This conversation went smoothly, right? That's because someone designed the questions and prepared good, clear, relevant, and consistent answers. This is also a great tool for designing the first user flows for your new app/website/anything. Start by writing down your product as a conversation. You might find out that users are facing 12 questions at the same time while getting no "thank you" after answering them. That would be awkward in a real-life situation, and it's no different on the web.
https://medium.com/design-bootcamp/design-conversations-not-interfaces-903834607f36
['Nik Lorbeg']
2020-12-27 21:53:43.806000+00:00
['UX', 'UI', 'Design', 'Uxdesign', 'Interface']
A Love Letter To Midsommar
What is a film that holds a special place in your heart? It might not even be a great film in retrospect, but it was one that so deeply connected with you that nothing is able to shake it. Maybe you have more than one. It could be a film that changed how you saw cinema, or maybe one that gave you a new opinion of a genre, perhaps one that moved you so deeply you could never shake off the experience. For me, Midsommar is one of those special films. Spoilers ahead! Ari Aster's follow-up to Hereditary (2018) is a psychedelic wonder. Drenched in beautiful lush colours and a score so triumphant, Midsommar isn't your typical horror flick. You can read my review HERE. At its core, Midsommar is a film about ridding oneself of fear and pain. Set around pagan rituals and infused with psychedelics, this film is a cultural experience and a horror film all in one. Florence Pugh's Dani is burdened with a lot of emotional baggage, and it is through her that I feel the strongest connection (aside from the fact that she is the film's lead). Her journey through the film spoke to me in a way I never expected going in. Image: A24 The film opens with the death of her family. The feelings of loss and grief that the character goes through I felt so intimately. While it is not an apples-to-apples comparison, I have felt for the longest time like an outsider within my own family. I never had a good relationship with my father, and when he left, my relationship with my mother changed too. It felt as though I lost both my parents. My sister and I, regrettably, were never as close as siblings are meant to be. Even back when things were "better", I always felt other in my home. Years and years of therapy made me realise this too, and I constantly grapple with it. While my family is very much alive, I grieve for the father I never had, the mother I used to have, and the sister I could have had. The grief I have in me connected so intimately with Dani's. When Dani screamed, I felt as though her screams were coming out of my own throat; her tears felt like mine; and so did her insecurities. I felt like I was Dani. In the film. Going through all those experiences. From the death of her family, to watching the old couple jump off the cliff, to her boyfriend sleeping with another, to becoming the May Queen, and all her curiosity: all those emotions I felt so intensely. Midsommar is a horrific film. There is some truly disturbing imagery, and some disturbing sequences, but for the entire run time I was transfixed. From the nightmare of an opening to the lush colours of the Swedish countryside, the impeccable cinematography, the glorious score, all flowed right into my very being and filled me with awe — the word alone doesn't even do justice to what I felt. I was transported, free even. Image: A24 The culture of the Hårga people is depicted as one that shares. They share in your pain and your joys. I never had someone to share my pain with; I always held back for fear of burdening them with what's mine. I learnt from a young age to wall off my emotions; I learnt that to feel anything other than what was expected is wrong. My depression is wrong. My anxiety is wrong. My pain is wrong. So I bottled all of it up and threw it down a well deep inside my being, convinced that it was to protect myself, even though it filled me with so much sorrow and made me feel incredibly lonely. There's a scene in the film when Dani wails and the girls wail with her — as loudly and with as much pain. It was deeply poignant and incredibly liberating.
I could feel the pain being shared by all of them, as if they were also taking away part of mine, and it brought tears to my eyes. I have a habit of compromising myself for others, something I saw in Dani too. Never mind that she's going through a major loss; she puts it all aside for the sake of the group, to protect its dynamics. She looks out for everyone but gets brushed off as sensitive and needy. She is treated as an outsider despite everything she does for the group, which consistently fails to appreciate her. I saw so much of myself in that, and it made me realise how much toxicity I was allowing to pour into my life. I allow myself to be walked over because I want to protect the working environment I am in, not ruin the social dynamics I joined. Dani was shackled, and so was I. We needed to be free. The film ends in what I can best describe as a floral purge. The cleansing of all her pain, the riddance of toxicity in her life. It was cathartic and I felt it so deeply. The weight on my shoulders lifted and burned in the inferno; I screamed at it with the village and Dani, cursed it, and felt myself let it go. When Dani's frown turned around, I felt light as a feather, cloaked in a sea of colourful flora, warm from the heat of what was just ablaze. I felt liberated. Image: A24 Midsommar is more than just a film to me. It is one of those films that I experienced deeply and feel intimately connected to. It transcended cinema and became so much more; it was ethereal. Midsommar is the first time I've left a horror film and not relished the dread or fright. I didn't leave the cinema feeling disappointed either. It might sound hyperbolic, but it really isn't. I don't know how long this feeling will last, but Midsommar changed me; it made me feel new.
https://medium.com/pop-off/a-love-letter-to-midsommar-f74587ebb12b
['Gregory Cameron']
2019-09-23 12:55:52.288000+00:00
['Midsommar', 'Horror', 'Mental Health', 'Movies', 'Film']
The TikTok Effect
The TikTok Effect Who needs advertising when you can just go viral on TikTok? Advertising is a big business. From newspaper and magazine ads, to partnerships with celebrities and influencers, to television commercials, to product placements, to internet ads, companies are always looking for ways to get people to buy their products — to have their products “go viral,” so to speak. But in 2020, you don’t even need these kinds of advertising for your product to go viral. You just need someone to make a TikTok about it. In the past year or so, several products have seen growth in sales due to going viral on TikTok — many even sold out in stores because of it. These are the kind of products that will lead people to state, “TikTok made me buy it.” One of the oldest examples is multicolor LED lights, which became popular late last year. Many viewed their favorite TikTokers create videos from their bedrooms and saw these lights in the background — and just knew they had to have them. This also spawned a bunch of accounts completely dedicated to showing people how to use the lights and different colors that could be made with them. They became so popular that they started being listed as “TikTok lights” online. TikTok can also skyrocket the popularity of different clothing items. For example, earlier this year, people started making TikToks of themselves making custom tie-dye sweatsuits at home, using sweats they bought from Walmart. While this was also influenced by social distancing/quarantine, this trend’s explosive growth was no doubt significantly fueled by TikTok. Or sometimes a product will become popular simply because it has a fun quirk or can be used in a TikTok trend. One example of this is when people started filming videos biting into Martinelli’s apple juice containers, claiming that it sounded like biting into an apple. Many were intrigued but still skeptical, and wanted to see if this was true for themselves: Another more recent example involves a dinosaur light, which has a cool effect when used with a certain sound (watch below): Or even take the case of Ana Coto, who posted a roller-skating video that got over 13.5 million views. After she posted this, many were inspired to buy their own pairs of roller skates — according to NBC writer Kalhan Rosenblatt, popular brands such as Impala Rollerskates experienced increased sales, and the term “roller skate” even spiked in interest on Google Trends. Other products that have gone viral on TikTok include various makeup products, pink cowboy hats, HIMI Gouache painting sets, Touchland hand sanitizers, and more. Ultimately, when a brand or product goes viral on TikTok, it provides great advertising as well as the bragging rights of having been approved by the platform’s users, which are typically young and “trendy.” This kind of natural word-of-mouth (but hyper-accelerated due to the internet) sharing provides a kind of authentic curiosity and consumer demand that traditional advertising might struggle to produce. It can especially help fuel the success of small brands and businesses that wouldn’t have the resources for large-scale advertising. A few small brands I’ve come across that have gained success on TikTok are @/glossyhutcosmetics and @/discostickers. However, not everyone is thrilled about brands achieving virality on TikTok — the brands’ loyal and longtime customers, for example, who don’t like that the products they are used to buying regularly are now sold out. 
Thus some will encourage others not to post about these products, which can then also lead to accusations of gatekeeping. On TikTok, what is considered “cool” and “trendy” and what is “mainstream” or “old” is constantly changing. I think it is no different when it comes to products/brands that are being shared on the app — while bigger, hyped-up brands may have more followers, shares, and sales than their smaller counterparts, on the internet, virality can very easily turn into irrelevancy (VSCO girl trend, anyone?).
https://medium.com/swlh/the-tiktok-effect-c547f47a30c4
['Kristin Merrilees']
2020-06-10 00:38:10.444000+00:00
['Marketing', 'Gen Z', 'Branding', 'Teens', 'Social Media']
Introducing iOS 14 WidgetKit With SwiftUI
Introducing iOS 14 WidgetKit With SwiftUI Let's learn how to build some widgets for our home screen in a few minutes Photo by Bagus Hernawan on Unsplash. WWDC 2020 gave us a lot of enhancements and updates, but the introduction of the WidgetKit framework unarguably stands out. iOS 14 has introduced a redesigned home screen, with the inclusion of widgets being a huge addition. Widgets are not just an eye-pleasing UI shortcut for our apps. They also aid in providing useful information to the user from time to time. In some ways, they're a calmer form of notification that provides you with the latest information from the apps (if the developer has opted to do so) without intruding. Additionally, there's a new Smart Stack feature in iOS 14 that groups a set of widgets that you can swipe through. Smart Stacks tend to surface the most relevant widget at the top by using on-device intelligence that takes into account the time of day, location, and some other attributes. WidgetKit is built purely using SwiftUI, which opens endless opportunities for building beautiful widgets. It's important to note that WidgetKit isn't meant for building mini-apps.
https://medium.com/better-programming/introducing-ios-14-widgetkit-with-swiftui-a9cc473caa24
['Anupam Chugh']
2020-07-07 17:48:51.086000+00:00
['Swift', 'Swiftui', 'Programming', 'iOS', 'Design']
Mike Huckabee Runs a Childhood Education Scam Called “Learn Our History”
It took time for the coronavirus guide to finally arrive. When it finally came in the mail in early July, I was right about it being a pretty small and flimsy little book. It was more like a pamphlet than any actual “guide.” This was fine. Again, I only paid a dollar for the booklet, and then for whatever reason, I assumed that this book was only a dollar because it was a special edition guide for the pandemic. I mistakenly believed that the followup guides would be worth the cost. As it happened, my daughter wasn’t particularly interested in the coronavirus guide once we got it. At 6 years old, I knew her interest waxes and wanes on a dime, so I decided to hold onto the booklet and see what the other, more expensive books were like. Shortly after receiving the coronavirus guide, Learn Our History charged my card twice. Both charges were for $20 and some change. I was pretty damn surprised to receive another flimsy little booklet a week or two later. For a little more than $20 apiece, these books were a serious rip-off. I didn’t even look inside the second booklet. It was called A Kid’s Guide to the Presidential Election. I simply set it aside for later, and then emailed the company to cancel my account. When I didn’t get a response to my email, I left a comment on their Facebook page that canceling should be easier. Within a couple of days, they emailed confirmation that my account had been canceled. A week later, the third booklet arrived — The Kids Guide to the Discovery of America. I’m a bit embarrassed to admit that at this point, I hadn’t actually read the booklets. I simply set them aside to read to my daughter at a later date. When she went back to school in mid-August, it seemed like a good time to start reading through those guides. But before I would actually do that, I happened upon another Facebook ad from the company, and this time, it was an advertisement that was all about the 2020 Election. If you ordered the special election guide, you’d also receive a bonus Trump guide. Something about the phrasing in the ad made me click the link to make sure it was the same company I’d dealt with. Sure enough, it was the same “Learn Our History” company, but its message completely floored me. The actual trumpbundle.thekidsguide.com website There are a lot of red flags. You’ll notice that the first thing the website says is “Help your kids learn the truth about President Trump…” Oh boy, I thought as I took in the imaging and scrolled down to the random white mom’s “testimonial.” "I ordered this for my daughter who’s in the fifth grade. She studied the Trump presidency in school, but her lessons were biased like the media. The Kids Guides and video lessons are great! And she just LOVES the Everbright Kids magazine!! What a wonderful package! She looks forward to both each month—Thank you!" — Sandy D., Orlando, FL If you keep scrolling on the website, the propaganda keeps on coming. The actual trumpbundle.thekidsguide.com website It reads: The mainstream media is no friend to President Trump. As he campaigns for reelection this year, here’s your chance to get our *FREE* Patriotic Kids Gift Bundle to help your kids learn the truth about President Trump and his accomplishments in office! Our gift bundle includes "The Kids Guide to President Trump" and a very special "America Blasts Off!" issue of the brand-new EverBright Kids magazine, and you get them both for just $1 s&p each! 
The Kids Guide to President Trump is unbiased and will help your kids learn everything there is to know about our president, from his election in 2016 and his greatest accomplishments as president, to his 2020 reelection campaign. As an added bonus, we’re giving you unlimited access to the “Great Again: Restoring Faith In America" streaming video and digital workbook from Learn Our History! What’s more, the special issue of EverBright Kids magazine will help your kids celebrate America, and enjoy oodles of great content and activities that will keep them entertained for hours! As part of this special offer, your kids can look forward to a new Kids Guide covering an important topic for kids about once a month, including an accompanying streaming video lesson and digital workbook, all for just $15.95+$4.95 s&p. Plus, we’ll send your kids a new issue of EverBright Kids magazine each month for only $5.75. You can cancel at any time. And, if you’re not 100% satisfied, let us know within 90 days to receive a full refund of your purchase price. This special offer is only available while supplies last, so why not give your kids a gift they’re bound to enjoy? Order this exclusive Patriotic Kids Gift Bundle now! I immediately pulled out the couple of Kids’ Guides I had at home. Somehow, I’d misplaced our election one, but I still had the pandemic and Discovery of America booklets. While the Coronavirus Guide wasn’t too terrible, there were definitely a few contentious points, like when it claimed the pandemic would be over “soon,” that handwashing was the most important method of prevention, and that masks weren’t recommended for healthy individuals. I also found it interesting that masks were discussed and dismissed on the same page that told kids not to panic. Okay, so, maybe they hadn’t been able to update the guide which would have likely been viewed as more accurate back in March. Maybe. Even so, receiving a booklet in July that said masks were not recommended was pretty annoying. Despite its claims that the virus was serious, most of the graphics suggested that COVID-19 is really no different than influenza. It spent a lot of time talking about how viruses spread and mutate in general terms, and showed stock photos of white folks who looked like they had a cold or flu. The Kids Guide to the Coronavirus The Kids Guide to the Coronavirus The Kids Guide to the Coronavirus After reading about the weird “Trump bundle,” I suspected there was more to it than a simple failure to update their information on the virus. So, I pulled out the other booklet, The Kids Guide to the Discovery of America. And that’s when I realized I’d made an enormous mistake. The Kids Guide to the Discovery of America The Kids Guide to the Discovery of America The Kids Guide to the Discovery of America The Kids Guide to the Coronavirus is iffy, but The Kids Guide to the Discovery of America is another ballgame. It features page after page of white-washed, powder sugar-coated revisionist history about Christopher Columbus and colonialism. Take a look at these gems: Columbus' discovery came at an important time and, unlike the Vikings, who didn’t stick around for very long, it set the stage for other Europeans to explore and settle the New World. They brought with them the culture, values, and traditions that make America great, including ideas like democracy, capitalism, and personal freedom, which are based on European culture, and its laws and religion, which come from Judeo-Christian traditions. 
Even if his wasn’t first, Columbus' discovery changed the world forever, and is one of the most important events in history!” Reading that passage made me cringe. The euro-centric mindset came through loud and clear. “Not everyone benefited from the Columbian exchange. Unfortunately, the Columbian exchange also included diseases and violence. Because they had been isolated from the rest of the world for so long, Native Americans' immune systems hadn’t been exposed to diseases that were common in the Old World, and many of them died from diseases like measles and smallpox. Many European explorers also used violence to achieve their goal of colonizing the New World. While some Spaniards tried to make sure that the Indians were treated well, they could do nothing to prevent the spread of disease, and fifty years after Columbus' discovery, the Taino people were almost completely wiped out.” In the entire guide, Columbus is praised as a smart and impressive “entrepreneur.” There is no mention of his atrocities or abuse of native people. “Columbus was not the first person, or even the first European to ‘discover’ the Americas. So, should Columbus get all of the credit? No, of course not! But that doesn’t mean we should overlook the importance of his discovery, either. Even though many vibrant cultures already existed in the Americas before Columbus, and even though Leif Erikson and the Vikings were the first Europeans to visit the continent, the reason we celebrate Columbus Day each October is because without Columbus' discovery, America as it exists today couldn’t have happened. The explorers and settlers who followed Columbus brought European culture, values, and technology to the New World. Societies in North and South America are based on democratic values, Judeo-Christian religious traditions, and other laws, customs, and traditions brought from Western Europe. Without Columbus, the United States might never have existed, and without the United States those values would have never been able to inspire billions of people around the world to take its example and cherish their own rights. That’s why we celebrate Columbus Day. Columbus himself represents all of these values. Even though he ‘discovered’ the Americas in the names of the King and Queen of Spain, Columbus was an independent spirit and entrepreneur who was determined to find a way to fulfill his dream of reaching Asia by crossing the ocean. While he didn’t quite do that, he could never have imagined how much more important what he did discover would be!” Wow. What an incredibly biased and archaic interpretation of American history, huh?
https://medium.com/honestly-yours/mike-huckabee-runs-a-childhood-education-scam-called-learn-our-history-b2169242baf5
['Shannon Ashley']
2020-09-21 09:52:21.954000+00:00
['Marketing', 'Politics', 'Culture', 'Parenting', 'History']
Troubleshooting services on GKE
In my last post, I reviewed the new GKE monitoring dashboard and used it to quickly find a GKE entity of interest. From there, I set up an alert on container restarts using the in-context “create alerting policy” link in the entity details pane. This time, I wanted to have a go at troubleshooting an incident using this setup.
The setup
The app
You can see the full code for the simple demo app I’ve created to test this here. The basic idea is that it exposes two endpoints — a / endpoint, which is just a “hello world”, and a /crashme endpoint, which uses Go’s os.Exit(1) to terminate the process. (A minimal sketch of such an app is included at the end of this post.) I then created a container image using Cloud Build and deployed it to GKE. Finally, I exposed the service with a load balancer. Once the service was deployed, I checked the running pods:
✗ kubectl get pods
NAME                                     READY   STATUS    RESTARTS   AGE
restarting-deployment-54c8678f79-gjh2v   1/1     Running   0          6m38s
restarting-deployment-54c8678f79-l8tsm   1/1     Running   0          6m38s
restarting-deployment-54c8678f79-qjrcb   1/1     Running   0          6m38s
Notice that RESTARTS is at zero for each pod initially. Once I hit the /crashme endpoint, I saw a restart:
✗ kubectl get pods
NAME                                     READY   STATUS    RESTARTS   AGE
restarting-deployment-54c8678f79-gjh2v   1/1     Running   1          9m28s
restarting-deployment-54c8678f79-l8tsm   1/1     Running   0          9m28s
restarting-deployment-54c8678f79-qjrcb   1/1     Running   0          9m28s
I was able to confirm that each request to the endpoint resulted in a restart. However, I had to be careful not to do this too often — otherwise, the containers would go into CrashLoopBackOff, and it would take time for the service to become available again. I ended up using this simple loop in my shell (zsh) to trigger restarts when I needed them:
while true; do curl http://$IP_ADDRESS:8080/crashme; sleep 45; done
The alert
The next step was to set up the alerting policy. Here is how I configured it: I used the kubernetes.io/container/restart_count metric, filtered to the specific container name (as specified in the deployment YAML file), and configured the alert to trigger if any time series exceeded zero — meaning if any container restarts were observed. The setup was done — I was now ready to test and see what happens!
Testing the alert
When I was ready, I started the looped script to hit the /crashme endpoint every 45 seconds. The restart_count metric is sampled every 60 seconds, so it didn’t take very long for an alert to show up on the dashboard. I moused over the incident to get more information about it. Already, this is an improvement over the previous version of this UI, where I couldn’t interact with the incident cards. I then clicked on “View Incident”. This took me to the Incident details screen, where I could see the specific resources that triggered it. In my case, it was pointing to the container. I then clicked on View Logs to see the logs (in the new Logs Viewer!) — and sure enough, it was immediately apparent that the alert was triggered by the containers restarting. This is all very nicely tied together and makes troubleshooting during an incident much easier!
In summary…
I’m a big fan of the new GKE dashboard — I really like the new alerts timeline, and I like that the incidents are clearly marked and that I can actually interact with them to get the full details of exactly what happened, all the way down to the container logs that tell me the actual problem. Thanks for reading, and come back soon for more. As always, please let me know what other SRE or observability topics you’d like to see me take on. And now more than ever — stay healthy out there!
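For reference, here is a minimal sketch of what such a demo app can look like. This is not the actual code from the linked repository, just a plausible stand-in that exposes the same two endpoints: a hello-world / handler and a /crashme handler that kills the process, which Kubernetes then reports as a container restart.

package main

import (
	"fmt"
	"log"
	"net/http"
	"os"
)

func main() {
	// Plain "hello world" endpoint.
	http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
		fmt.Fprintln(w, "hello world")
	})

	// Hitting this endpoint terminates the process; Kubernetes observes
	// this as a container restart and increments restart_count.
	http.HandleFunc("/crashme", func(w http.ResponseWriter, r *http.Request) {
		log.Println("received /crashme, exiting")
		os.Exit(1)
	})

	log.Fatal(http.ListenAndServe(":8080", nil))
}

Built into a container image (for example with Cloud Build) and deployed behind a load balancer, an app like this behaves the same way as the one used in the walkthrough above.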
https://medium.com/google-cloud/troubleshooting-services-on-gke-872470e60d51
['Yuri Grinshteyn']
2020-11-03 17:22:59.269000+00:00
['Troubleshooting', 'Kubernetes', 'Alerting', 'Monitoring', 'Google Cloud Platform']
Image clustering using Transfer learning
Clustering is an interesting field of unsupervised machine learning in which we group a dataset into sets of similar items. Because it is unsupervised, there is no prior training step and the dataset is unlabeled. Clustering can be done with different techniques such as K-means clustering, mean-shift clustering, DBSCAN clustering, hierarchical clustering, and so on. The key assumption behind all clustering algorithms is that nearby points in the feature space possess similar qualities and can therefore be grouped together. In this article, we will cluster images. Images can be treated just like datapoints in regular ML, so the problem is essentially the same. But the big question is: what does similarity of images mean? Similarity may mean images that look alike, images of a similar size, images with similar pixel distributions, similar backgrounds, and so on. For different use cases, we have to derive a use-case-specific image vector; that is, an image vector capturing the entity in an image (cat or dog) will be different from an image vector capturing its pixel distribution. In this article we will work with a set of images of cats and dogs and try to cluster them into cat photos and dog photos. For this purpose, we can derive the image vector from a pretrained CNN model such as ResNet50: we remove the final layer of ResNet50 and pull out the 2048-dimensional vector. Once we have the vectors, we apply KMeans clustering to the datapoints. Here are some of the pictures in my dataset, which has around 60 images of dogs and cats randomly pulled from the net.
Code Walk Through
The first step is to load the required libraries and the pretrained ResNet50 model. Keep in mind that we drop the final softmax layer from the model.
import glob, cv2
import numpy as np
from keras.models import Sequential
from keras.applications.resnet50 import ResNet50, preprocess_input
from sklearn.cluster import KMeans

resnet_weights_path = '../input/resnet50/resnet50_weights_tf_dim_ordering_tf_kernels_notop.h5'
my_new_model = Sequential()
my_new_model.add(ResNet50(include_top=False, pooling='avg', weights=resnet_weights_path))
# Say not to train the first layer (ResNet). It is already trained.
my_new_model.layers[0].trainable = False
Once the model is loaded, we can write a function that loads all the images, resizes them to a fixed size of (224, 224) pixels, passes them through the model, and extracts the feature set.
def extract_vector(path):
    resnet_feature_list = []
    for im in glob.glob(path):
        im = cv2.imread(im)
        im = cv2.resize(im, (224, 224))
        img = preprocess_input(np.expand_dims(im.copy(), axis=0))
        resnet_feature = my_new_model.predict(img)
        resnet_feature_np = np.array(resnet_feature)
        resnet_feature_list.append(resnet_feature_np.flatten())
    return np.array(resnet_feature_list)
Once we have the extracted feature set, we can run KMeans clustering over the dataset. K has to be decided up front, or we can plot the loss as a function of K and derive it from the curve. Since we know the value of K is 2, we can substitute it directly.
array = extract_vector('./images/*.jpg')  # the glob pattern here is illustrative
kmeans = KMeans(n_clusters=2, random_state=0).fit(array)
print(kmeans.labels_)
That's all! We are done with our image clustering model. Let's see how well it can cluster the images. Below are some of the images assigned to the first cluster: And here is the other cluster: Overall the clustering performance looks very good. Out of the 60 images I clustered, only two were clustered incorrectly. Here are those images: The two dogs above were wrongly clustered as cats; maybe the model found them to be very similar to cats. :) We can further investigate the distribution of the images using the t-SNE algorithm; a minimal sketch of that step follows below.
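As a rough sketch of that t-SNE step (assuming scikit-learn and matplotlib are installed, and reusing the array of ResNet50 features and the fitted kmeans object from the code above):

from sklearn.manifold import TSNE
import matplotlib.pyplot as plt

# Project the 2048-dimensional ResNet50 feature vectors down to 2 dimensions.
# perplexity must be smaller than the number of samples (about 60 images here).
embedded = TSNE(n_components=2, perplexity=15, random_state=0).fit_transform(array)

# Color each point by its KMeans cluster label. Which label ends up meaning
# "cats" and which "dogs" depends on the particular KMeans run.
colors = ['blue' if label == 0 else 'green' for label in kmeans.labels_]
plt.scatter(embedded[:, 0], embedded[:, 1], c=colors)
plt.title('t-SNE projection of ResNet50 image features')
plt.show()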
t-SNE is a dimensionality-reduction algorithm in which the 2048-dimensional image vectors are reduced to a much smaller number of dimensions, which makes plotting practical and keeps memory and compute requirements manageable. Below are the results I got for the 60-image dataset. Blue dots represent cluster 1 (cats) and green dots represent cluster 2 (dogs). Please note that the mini photos are not part of the t-SNE output; they were added to the plot separately. The area where the clusters intersect can be read as the region where the model found it difficult to separate them cleanly.
Conclusion
I hope you now have a good understanding of how to build a basic image-clustering approach using transfer learning. As noted earlier, in some situations the CNN output may not be the best choice of image feature. We can also build vectors from HSV (Hue-Saturation-Value) values, possibly combined with a bagging technique, when a similar pixel distribution is our criterion for clustering. Happy Learning :)
https://towardsdatascience.com/image-clustering-using-transfer-learning-df5862779571
['Danny Varghese']
2019-02-15 10:02:59.841000+00:00
['Machine Learning', 'Cluster', 'Deep Learning', 'Convolutional Network', 'Artificial Intelligence']
Real Time Crypto Prices in Excel. Learn how to get real time ticking…
Trading cryptocurrencies can be an extremely interesting and rewarding activity. Many trading platforms and exchanges offer API access to their data, which allows the savvy trader to build their own tools around their own strategy and trading needs. Microsoft Excel is the go-to choice for many traders because of the enormous potential it offers for creating custom tools and dashboards: building market insights, testing and experimenting with data and ideas, and monitoring portfolio performance and positions. More serious, technically minded traders will also want to use Python for data analysis and backtesting, or even for building systematic or automated trading strategies. Most crypto exchanges offer a way to get data from their platforms programmatically via an API. This is what we use to connect our Excel- or Python-based tools to the platform to fetch data and manage orders. In this article we’ll use a BitMEX Python API to stream real-time prices into Microsoft Excel. To stream real-time BitMEX prices into Excel you will need the following:
https://towardsdatascience.com/live-streaming-crypto-prices-in-excel-aaa41628bc53
['Tony Roberts']
2020-10-27 17:37:00.497000+00:00
['Cryptocurrency', 'Excel', 'Websocket', 'Bitmex', 'Python']
Emailed
If you’d like to enter Promposity’s PROMPTAPALOOZA! Contest, see here for the rules and first set of prompts, and here for the second set of prompts. We look forward to getting your submission!
https://medium.com/promposity/emailed-9631cef18dcb
['Natalie Frank']
2019-06-22 19:10:44.993000+00:00
['Writing', 'Short Story', 'Writing Prompts', 'Fiction', 'Flash Fiction']
You Don’t Succeed By Being Perfect — You Succeed By Doing Good Work Consistently.
Look — it’s hard enough to sit down and create something. Kurt Vonnegut, one of the most prolific authors of his generation, said: “When I write, I feel like an armless legless man with a crayon in his mouth.” If you’re going for “perfect” you’ll be crushed by its weight. Instead, just create. I once read about a prominent, world-class chef who said she was grateful that her line of work was cooking, because she makes 300+ dishes a night and if she ruins a dish (or makes one of the best plates she’s ever made!), there’s no time to dwell on it — the next order is up. Create, produce, repeat. Remember — it’s not about being perfect. It’s about doing good work, consistently. The more work you do, the better you’ll become. In the words of Ira Glass, the American radio host: Nobody tells this to people who are beginners…all of us who do creative work, we get into it because we have good taste. But there is this gap. For the first couple years you make stuff, it’s just not that good. It’s trying to be good, it has potential, but it’s not. But your taste, the thing that got you into the game, is still killer. And your taste is why your work disappoints you. A lot of people never get past this phase, they quit. A lot of people never break through this phase — that cold, fragile environment built by the perfectionist demon that lives in your head. Fear of embarrassment and looking stupid has killed more dreams than almost anything; a lot of people are still stuck in that. Fortunately, you don’t have to end up like that. If you’ve been living in that terrible place, dictated by fear of what other people think — I’ve got good news for you. I have the solution. The thing that breaks the curse is clicking “publish.”
Clicking Publish Will Make Your Craziest Dreams Come True. Kind of.
In his book, The War of Art, best-selling author and screenwriter Steven Pressfield described the first truly big movie he produced: King Kong Lives (1986). On opening weekend, he went to a small-town movie theater and got in line for a ticket. “A youth manned the popcorn booth,” Pressfield wrote. “‘How’s King Kong Lives?’ I asked. He flashed a thumbs-down. ‘Miss it man. It sucks.’” “I was crushed,” he wrote. “Here I was, 42 years old, divorced, childless, having given up all normal pursuits to chase the dream of being a writer; now I’ve finally got my name on a big-time Hollywood production, and what happens? I’m a loser, a phony; my life is worthless, and so am I.” He could have quit right then and there. Clicking “publish” had finally confirmed all his worst fears — that he and his work were terrible. Pressfield wanted to quit. The pain and embarrassment were so bad, he almost did. But after a short while, Pressfield thought about the magnitude of what he had just done. He had produced a legitimate big-screen Hollywood movie! Even if it was a box office failure, it was still a huge accomplishment. One failure didn’t mean the end of his career. “That was when I realized I had become a pro,” he wrote. “I had not yet had a success. But I had had a real failure.” A few years later, he went on to produce multiple big-screen movies that saw enormous financial and critical success. He’s now one of the most accomplished screenwriters in Hollywood. Clicking publish will do something very important — it will force you to face reality. Either you’ll finally achieve all the fame and fortune you’ve known you were destined to have…or everything will fall apart and you’ll finally realize that you’re simply terrible, nothing more.
Actually, the answer is usually in the middle — your work is fine, but it’s not great yet, and you know it. No matter. You have a direction now. You have something to compare it to. You have something to work with. Create, produce, repeat. Follow that path. That’s how you succeed. I’ve been clicking “publish” on lots of things for a long time. And it wasn’t until I started publishing at 10x my usual rate that I saw 10x my success. Frankly, I’ve published a lot of mediocre garbage. I’ve launched courses that took me 6 months to make that literally zero people bought. I’ve written books that fewer than a dozen people ever read. I’ve published podcast episodes, eBooks, videos, online courses, coaching packages, and all sorts of content that no one wanted. But I’ve learned a lot. The rich get richer. I’ve learned from my mistakes. I’ve studied the greats. I know more about how to succeed. I published 3.5 eBooks before I got my first book deal. I re-did my online courses 3–5x before I got my first sale. I posted hundreds of articles before I got my first 10,000+ readers in a single day. Clicking publish will make all your dreams come true — eventually. You just need to publish.
In Conclusion
Do you know how many unpublished drafts I have lying around my computer? I checked. Currently, there are 771 unpublished pieces of work on Medium alone. Not everything is gonna be a winner. But you’ll get more winners if you publish more. Back in the day, I probably wrote 1 article every 3–4 weeks. They all…sucked. I wasn’t learning anything, just writing the same mediocre content over and over. It wasn’t until I published 30 articles in 30 days (April/May 2017) that I saw an explosion of readers. I started learning, I got better, and people started listening. You don’t succeed by being perfect — you succeed by doing good work consistently.
Ready To Level-Up?
If you want to become extraordinary and become 10x more effective than you were before, check out my checklist. Click here to get the checklist now!
https://medium.com/publishous/you-dont-succeed-by-being-perfect-you-succeed-by-doing-good-work-consistently-f08bb415d329
['Anthony Moore']
2020-05-11 23:07:01.641000+00:00
['Life Lessons', 'Self Improvement', 'Self', 'Productivity', 'Anthony Moore']
Chess Playing Algorithm Explained
With the advancements in reinforcement learning (RL), there has been great interest in game search algorithms. Monte Carlo Tree Search (MCTS) is the state of the art and is used in AlphaGo and AlphaZero. Another popular and important algorithm is Minimax. It’s the one behind IBM’s Deep Blue, the machine that defeated the world champion, Garry Kasparov, in 1997. It can still be effective in chess, Tic-Tac-Toe, and other zero-sum games — games in which one player’s gains are equivalent to the opponent’s losses, so the sum is zero. The logic of Minimax is to predict the player’s best move while taking into account the opponent’s counter-moves. It simulates what we humans do when we play a game. For example: before I make a move, I try to visualize and pick the position that maximizes my advantage. But I also have to consider that the other player will defend and try to minimize the effect of my attack. That’s where the name comes from. Let’s consider an example in chess. Suppose it’s my turn and I’m going to move my knight. I look at the board and see I have two options: (1) take the opponent’s bishop, or (2) take its pawn. But then I also visualize that if I choose 1, the other player may make a move that takes my queen. And if I choose 2, the best that they can do is defend the pawn, without major consequences. Which option is best? Each move has a score, and while for one player losing the queen is the worst possible scenario, for the opponent it’s the best (they capture the piece). The goal is to choose the best out of the worst positions. Let’s see more details. The algorithm has two versions: the ‘naive’ one, and another called Alpha-Beta pruning — don’t worry about the weird name, I’ll explain it in detail.
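As a teaser before that explanation, here is a minimal, game-agnostic sketch of the naive version. The game object and its methods (is_over, evaluate, legal_moves, play, undo) are hypothetical placeholders rather than code from the article, and evaluate is assumed to score the position from the maximizing player's point of view.

def minimax(game, depth, maximizing):
    # Stop at a fixed search depth or when the game has ended.
    if depth == 0 or game.is_over():
        return game.evaluate()
    if maximizing:
        best = float('-inf')
        for move in game.legal_moves():
            game.play(move)
            # The opponent moves next and tries to minimize our score.
            best = max(best, minimax(game, depth - 1, False))
            game.undo(move)
        return best
    else:
        best = float('inf')
        for move in game.legal_moves():
            game.play(move)
            # Back to us: we try to maximize.
            best = min(best, minimax(game, depth - 1, True))
            game.undo(move)
        return best

Picking the best move is then just a matter of calling minimax on each legal move and keeping the one with the highest score, which is exactly the "best of the worst positions" idea described above.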
https://medium.com/javarevisited/chess-playing-algorithm-explained-e9b4a000fda5
['Vinicius Monteiro']
2020-12-23 17:11:01.206000+00:00
['Machine Learning', 'Chess', 'Reinforcement Learning', 'Java', 'Programming']
Building Your Own Kubernetes Operator Easily
Developing the Job-Watcher Controller
We’ll develop the controller by following the Operator pattern. Operator? An Operator is a special kind of Kubernetes controller process that comes with its own custom resource definition (CRD) — that is to say, its own kind of Kubernetes object. These CRDs allow you, in turn, to extend the functionality of the Kubernetes API. You can read more about it in the official documentation. Instead of writing all of the code ourselves, we’ll use the Operator SDK. It’s essentially a code generator that scaffolds a fully functional operator. First, we’ll install the SDK and run a few commands to bootstrap the project:
brew install operator-sdk
operator-sdk init job-purger --domain my.github.com --repo github.com/xxx/job-watcher-operator
operator-sdk create api --group batch --kind JobWatcher --version v1 --resource true --controller true
Create your CRD
You should now have a whole project created for you. Let’s look first at api/v1/jobwatch_types.go. Here we’ve defined our CRD with its spec and status information: (1) a separate TTL for completed and for failed jobs, after which the job will be deleted; (2) namespace and job-name patterns (regexes) that identify jobs that are candidates for deletion; and (3) the delay between two deletion checks. The status information will be the last started and finished times. Note that our Operator does not run concurrently by default. The special comments like //+kubebuilder will generate constraint code for us.
Implement the reconciliation loop
You can find the full controller code here. Let’s move on to the implementation. The main logic is in the reconciliation loop. It receives a Request as an argument, which contains only the namespace and the name of a resource. This method will be called for every object your Operator is concerned with, starting, of course, with our custom JobWatcher CRD. Moreover, the generated Operator comes out of the box with a logger and a Kubernetes client to manipulate objects. The implemented logic is straightforward: First, we fetch the JobWatcher object matching the Request. Then, we list and retain the namespaces matching one of our namespace patterns. Next, we identify jobs with matching names, and if a job is terminated and its TTL has expired, we delete it. Finally, we update the status information of our CRD.
Discussion
Reconcile is called either when one of your CRDs changes or if the returned ctrl.Result isn’t empty (or an error is returned). We exploit this behavior to reschedule the call to the Reconcile function at the frequency given in the CRD spec. The cluster role and binding are automatically created for you if you specify the generator comments: //+kubebuilder:rbac:groups:resources:verbs. In the SetupWithManager method, we indicate that the Operator manages the JobWatcher CRD through the For method. Other methods allow you to be notified about other objects: Owns() for child objects you’re creating, and Watches() for any objects in the cluster. For example, if you add Owns(&kbatch.Job{}), your reconciliation loop will be called for every job that’s created, deleted, or modified and that has your CRD as its owner. The input Request parameter of the Reconcile method will then be the owner instance of the job. Of course, here we’re not creating jobs, so it’s just for the sake of example. More relevant to us, if you add a call to the Watches method in the builder for the Job kind, you’ll be notified of every change to a job in the cluster.
You can (and should) filter the events you’re interested in by setting appropriate options. Finally, don’t forget to update the status of your CRD at the end of the loop.
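To tie the pieces above together, here is a heavily simplified sketch of such a reconcile loop. It assumes the operator-sdk/controller-runtime scaffolding generated earlier; the helper functions (matchesPatterns, ttlExpired) and the spec/status field names are illustrative placeholders, not the actual code from the linked repository.

import (
	"context"

	kbatch "k8s.io/api/batch/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	ctrl "sigs.k8s.io/controller-runtime"
	"sigs.k8s.io/controller-runtime/pkg/client"

	batchv1 "github.com/xxx/job-watcher-operator/api/v1"
)

func (r *JobWatcherReconciler) Reconcile(ctx context.Context, req ctrl.Request) (ctrl.Result, error) {
	// 1. Fetch the JobWatcher instance that triggered this reconciliation.
	var watcher batchv1.JobWatcher
	if err := r.Get(ctx, req.NamespacedName, &watcher); err != nil {
		return ctrl.Result{}, client.IgnoreNotFound(err)
	}
	watcher.Status.LastStarted = metav1.Now() // illustrative status field

	// 2. List jobs and delete those whose namespace/name match the configured
	//    patterns and whose TTL (for success or failure) has expired.
	var jobs kbatch.JobList
	if err := r.List(ctx, &jobs); err != nil {
		return ctrl.Result{}, err
	}
	for i := range jobs.Items {
		job := &jobs.Items[i]
		// matchesPatterns and ttlExpired are hypothetical helpers.
		if matchesPatterns(&watcher, job) && ttlExpired(&watcher, job) {
			if err := r.Delete(ctx, job); err != nil {
				return ctrl.Result{}, err
			}
		}
	}

	// 3. Record when this pass finished and requeue after the configured delay.
	watcher.Status.LastFinished = metav1.Now() // illustrative status field
	if err := r.Status().Update(ctx, &watcher); err != nil {
		return ctrl.Result{}, err
	}
	return ctrl.Result{RequeueAfter: watcher.Spec.CheckInterval.Duration}, nil
}

Returning a Result with RequeueAfter set is what makes controller-runtime call Reconcile again after the configured delay, which is exactly the rescheduling behavior discussed above.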
https://medium.com/better-programming/building-your-own-kubernetes-operator-easily-cab29ca51f96
['Emmanuel Sys']
2020-11-17 20:05:40.312000+00:00
['Golang', 'Kubernetes', 'DevOps', 'Containers', 'Programming']
5 John Wooden Quotes For Your Students
Teachers should always understand that there may be stressors in a student’s life that affect their ability in the classroom. A student may not act out in the same way if the teacher offers an empathetic shoulder instead of getting angry or automatically calling the administration office. Be a model for the students in your classroom, and if you are a critic, be a constructive one. Teach students how to overcome adversity in a strong and inspiring manner, and if they don’t immediately respond to your guidance, don’t take it personally. It’s always easier to tell someone how to do something, but it’s more inspiring to show them. Show them what success looks like through your own actions, and they will know what to follow.
https://medium.com/age-of-awareness/5-john-wooden-quotes-for-your-students-2b84a1ebc14d
['Shawn Laib']
2020-11-28 13:38:19.874000+00:00
['Education', 'Sports', 'Motivation', 'Learning', 'Inspiration']
#36: It’s a Family Affair
Featuring Auntie Terrie
Show Notes
Auntie Terrie shares what it was like being a woman working in tech at IBM in the ’60s, along with her take on cryptocurrency. In part 2, Che Mott gives a European perspective on regulation of the blockchain and privacy under the new GDPR law, while announcing the Global Venture Forum in San Francisco on May 11 and 12th. gvxchange.com/globalventureforum
Tech Advisors: Samsung Vs, GE Vs, Adobe, ServiceNow, IBM Vs, Blockchain@Berkeley
Partners: TheHeart, Mind The Bridge, Flanders Investment & Trade Agency, Startup Estonia, Open Austria, Berkeley alumni and MBAs
Details on The Global Venture Forum: Brings insight, networks and paths to cooperation with industry, May 11 and 12th
Speakers:
Christopher “Che” Mott, CEO GVF
Raymond Liao — MD Samsung NEXT, Industry & Investor leaders
Alexandra Johnson, Managing Director, DFJ VTB Aurora
Deborah Magid — Director Software Strategy, IBM Ventures
Ivar Siimar, Trind Ventures
Sam Lee, CEO, Blockchain Global
Peter Braun, European Business Angel Network Board Member
Jesse DeMesa, Principal Partners, Momenta Partners
Mo Gaber, Global Practice Director, Digital Strategy, Adobe
Abhishek Shukla, Managing Director, Software Investments, GE Ventures
Rodrigo Prudencio, Leads Alexa Accelerator — Amazon Alexa Fund
Moderator: Marco Marinucci — CEO, Mind The Bridge w Startup Europe Partnership
Tomasz Rudolf, CEO TheHeart
Transcript Coming Soon.
https://medium.com/coloringcrypto/36-its-a-family-affair-87e150fe4dce
['Kelly Mcquade-W.']
2018-07-04 00:05:53.117000+00:00
['Startup']