title (string, 1-200 chars, nullable) | text (string, 10-100k chars) | url (string, 32-885 chars) | authors (string, 2-392 chars) | timestamp (string, 19-32 chars, nullable) | tags (string, 6-263 chars)
---|---|---|---|---|---
Say Hi to Mura! | In 2017, I decided to leave CNN-IBN, a place that had been home for more than half a decade. “Don’t quit” was the advice I received from nearly everyone from the industry.
Why? Since I had made it on-air, I was expected to stay there. It was the ‘sweet spot’ that nearly everyone in a TV newsroom was told to aspire for.
Many well-wishers volunteered to make me see sense in staying back: “Why are you quitting?! You look so good on-air!”
While I had made up my mind to leave, that statement meant to convince me to stay is what reassured me in my exit.
I left that ‘sweet spot’ to help pilot data journalism projects across multiple newsrooms in India. During the last two years, I’ve had the opportunity to learn things about journalism that have little to do with mass media, big newsrooms, first news breaks and conventional storytelling. Why did I choose to learn new things? As my friend Lakshmi Sivadas puts it succinctly: in a non-linear world, it does not pay to be linear gatherers and distributors of information.
In my previous post, I talked about why I decided to become a Tow-Knight fellow — understanding how people consumed the news outside of big newsrooms was important to me.
Over the course of my interviews, I learnt that there are students in India who consume the daily news with the sole purpose of learning from it to clear the general knowledge sections of various exams. I am hoping to build Mura, my project as a Tow-Knight fellow, as a news-based learning tool that can help this community.
Mura is a Telugu word roughly translated as a “hand’s measure”, used here to inspire taking one small step at a time.
Keeping this in mind, I envision Mura to be a virtual mentor that teaches students from the news and tracks their learning. For this, I built my first prototype in April and had 10 people test it.
Based on the feedback, the key takeaways have been both for understanding the technology needed to build this, as well as figuring out a conversational voice for Mura to be an effective learning tool. | https://medium.com/journalism-innovation/say-hi-to-1327efbe259a | ['Kamala Sripada'] | 2019-04-08 19:37:04.206000+00:00 | ['Journalism'] |
Guild AI: Better Models by Measuring | Experiments in Guild AI, screenshot by author
Guild AI is a lightweight, open source tool used to run, capture, and compare machine learning experiments.
To create an experiment, run your script with Guild AI from a command prompt:
$ guild run train.py
Guild captures a full record of your experiment.
Source code
Hyperparameters (learning rate, batch size, etc.)
Script output
Script results (accuracy, AUC, loss, etc.)
Generated files (saved models, data sets, plots, etc.)
Guild lets you compare results with a variety of tools including TensorBoard, HiPlot, file diff, and Guild View. Explore and compare experiments to answer questions about what ran and how it performed.
Compare runs using parallel coordinates view, screenshot by author
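To make this concrete, the comparison workflow is also driven from the command line. The commands below are a sketch that assumes Guild AI is installed and that some runs already exist; they are an added illustration, not a transcript from the original article:
$ guild runs          # list recent runs with status and labels
$ guild compare       # tabular flag/metric comparison in the terminal
$ guild tensorboard   # open the captured runs in TensorBoard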
Why track experiments?
If you don’t record results, you depend on memory and intuition. This gets you only so far. Without systematic measurement, it’s hard to answer important questions.
Which hyperparameters yield the best results for a given approach?
Did my latest change break something unexpectedly?
What caused that sudden improvement in performance last week?
How do our AUC-ROC curves compare over time?
Did my colleague get the same result? What was different? What can we learn from those differences?
The list goes on.
If you don’t measure, you miss opportunities to learn from your work and make better decisions.
But experiment tracking is a pain
Experiment tracking can be a pain, depending on your approach.
Spreadsheets. Copy-and-paste is tedious and error prone. You miss crucial details like source code, script output, and files generated by your script.
Roll-your-own experiment tracking. It seems simple: just copy results to a timestamped directory. This misconception leads many to develop their own experiment tracking systems. Do you want to spend time building tools, or using them?
Paid services. If you have to create an account, provide credit card information, or otherwise submit data to a corporation — just to capture an experiment — you’re less likely to capture an experiment.
Modifications to code. Experiment tracking tools often mandate invasive changes to code. Even simple operations like writing a file need specialized libraries. Each code change is a distraction from your work. Worse, it ties your scripts to non-standard frameworks.
Required databases, agents, file systems, etc. Many experiment tracking tools use external systems that you install, configure and maintain. Even if you have the expertise, this work is a distraction from the goal of building better models.
Despite your best intentions to track experiments, you may conclude that it’s not worth the pain.
Experiment tracking is NOT a pain
Guild AI is different.
Guild does not mandate changes to your code — your code runs as-is without ties to a framework.
Guild does not use databases, exotic file systems, or back-end services.
Guild never asks you to create an account.
Guild is 100% open source and platform independent. It comes free of charge and without strings attached.
If experiment tracking is fast and easy — as it is with Guild — you’re more likely to do it.
Guild AI design philosophy
Guild AI adheres to the Unix philosophy. It’s designed to be simple and lightweight without compromising features.
Guild saves each run in a unique directory. There are no databases or exotic file systems. External systems are a pain to set up and maintain. You don’t need them, so Guild doesn’t use them.
Guild captures inputs and outputs through standard process interfaces. It sets hyperparameters using command line arguments, environment variables, and specialized support for Python modules. Guild gets output from standard IO and industry standard summary files. This keeps your code 100% independent of the experiment tracking framework.
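For example, flag values can be passed directly on the command line. The flag names below (learning_rate, batch_size) are illustrative assumptions rather than flags from a specific script, and passing a list of values asks Guild to run the operation once per value:
$ guild run train.py learning_rate=0.01 batch_size=64
$ guild run train.py learning_rate='[0.001,0.01,0.1]'   # runs a small grid over three values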
Guild saves everything you need with each run. This includes source code, system properties, software libraries, and a host of metadata. It’s hard to appreciate the breadth of saved information until you need it. Luckily Guild handles this for you.
If you’re tempted to write your own experiment tracking code, try Guild first. You get the same straightforward approach but with a full-featured, proven tool.
Hidden benefits of experiment tracking
When you use Guild AI to run experiments, you get powerful extras.
Automated pipelines. Output from a run can be used as input to other runs to create automated pipelines.
Hyperparameter search. Automate grid search, random search, and Bayesian optimization.
Remote runs. Run on remote systems the same way you run locally.
Backup and restore. Copy runs to cloud services or on-prem backup servers.
Collaboration. Share results with colleagues and collect their feedback on each run.
Open source and platform independent
Guild AI is 100% open source software. You can use Guild in your projects without limiting others.
Guild AI is platform independent. When you use Guild, you don’t connect to a back-end system. You don’t share your data. You’re free to run your code how and where you want without influence from a corporate interest.
These are important factors when considering the cost of experiment management. Independent, open source software is not only free in terms of money. It’s free in terms of autonomy — you make the best decisions for your project and your code.
Does Guild AI use magic?
Guild AI supports some magic to keep things simple for new users. This support is limited. To be explicit and control what Guild does, add a Guild file to your project.
Consider an example. Guild auto-detects project source code by looking for typical source code files. What if Guild gets this wrong? Use a Guild file to control what’s copied:
# guild.yml (located in project directory)
train:                  # operation definition
  sourcecode:           # source code config
    - exclude: data     # don't copy files from data dir
For a complete description of Guild’s configuration support, see Guild File Reference.
Better models by measuring
No matter how formidable your intuition, data science is inherently experimental. You don’t know whether a decision is helpful or harmful until you test it and observe an outcome.
Guild AI lowers the barrier to formal measurement in data science. Run your script with Guild to capture a full record of the operation. With each measurement you collect more evidence to inform your next steps — and to build better models.
To learn more about Guild AI, visit https://guild.ai. | https://gar1t.medium.com/guild-ai-better-models-by-measuring-88e482650faa | ['Garrett Smith'] | 2020-10-14 23:40:47.371000+00:00 | ['Experiment Management', 'Software Engineering', 'Guild Ai', 'Data Science', 'Machine Learning'] |
The Infinite Fount of Novel Ideas | But there was a personal surprise, a gift just for myself, that slayed my greatest novel writing fear:
I don’t run out of ideas.
I wrote my first novel four years ago; I used my first NaNoWriMo in 2012 to edit and finish that novel. Cheating, I know, but I needed the push to finish.
(I could use that same sort of push now, to finish this novel! — Anyone up for a NaNo-Finishing-Mo? Um, what’s up with the crickets?)
I clung to my first novel through the rest of 2012, and most of 2013, rewriting the same pages over and over again. I was convinced it had to be complete and perfect and publisher-ready before I wrote anything else.
Then October 2013 approached, with its belated San Francisco summer warmth, and I knew I needed to write something new, to release the suffocating attachment I had to my first novel.
From the great abyss of all ideas, new characters arose from my PaperMate Profile pen in my 9x12 writing book.
Set in San Francisco, like my first novel, these characters were new, different, and had their own quirks entirely unlike those in my first novel. The only similarity, in fact, was that I told their story much in the same way as my first novel — alternating points of view by chapter between a mother and a daughter. 55K words later, that NaNo was won.
Then this year, in marched the idea to write the sequel to my first (still unpublished) novel — this time alternating between father and daughter points of view when the daughter visits the father for a month in Manhattan.
62K words and counting, I realized the biggest surprise of all:
Faith that there are always new novel-length ideas.
Always.
My biggest fear was that at best I’d be a one-book-wonder, my first lonely tome destined to be the only title listed next to my name on my hypothetical Amazon.com bookshelf.
But once I created an outlet for my inner writer, the ideas haven’t stopped. I’ll be back next November, and although I don’t know what I’ll write next year, I trust the idea will arrive right on time. | https://medium.com/nanowrimo/the-infinite-fount-of-novel-ideas-2f7ffd0caa1d | ['Julie Russell'] | 2015-03-09 04:02:04.532000+00:00 | ['Novel', 'NaNoWriMo', 'Writing'] |
The cost of hiring a data scientist. | “Deus ex machina” — Menander
Data science, machine learning, Ai, and predictive analytics are the current ‘rock star’ *shudder*, skillsets of digital marketing, and here at Distributed we provide our clients with access to all of them, either to work on in-house proprietary technology, or to augment their teams and help deliver first-rate client work.
As we have quite a bit of experience in hiring awesome data scientists I thought it would be helpful to benchmark what costs you should be anticipating when building your own data science team. Below are a few examples which should shed some light on the discipline and what it takes to bring these skillsets in-house.
Firstly, let’s make sure that it’s a data scientist that you need.
Here’s a great graphic to help visualise that we’re looking for in a data scientist.
Jack of all
So what we’re looking for in a data scientist is someone who is capable of overseeing complex data projects from beginning to execution. In addition to having great technical skills, they need to be able to effectively communicate their findings to others in the organisation. They should also be able to manage a team. They should be able to query databases like an analyst, but also able to perform much more sophisticated analysis using statistical techniques and machine learning, depending on the task at hand.
Like most of us that work in specialisms, each data scientist will lean towards a particular specialism, here’s a quick look at the different specialisms. | https://medium.com/distributed/the-cost-of-hiring-a-data-scientist-819f8c37ad50 | ['Callum Adamson'] | 2017-06-26 06:51:55.488000+00:00 | ['Data Scientist', 'Artificial Intelligence', 'Distributed Teams', 'Data Science', 'Hiring'] |
How does understanding impermanence make life better? | In life, only change and death occur 100% of the time. After a near-fatal car accident that left me unable to walk, I had a lot of time for contemplation. My mom brought me the book The Tibetan Art of Living and Dying, and I became obsessed with understanding the Buddhist teachings about suffering. However, the teachings about impermanence were the ones that troubled me. I danced with impermanence and briefly slipped away from this life.
Instead of learning to embrace impermanence, I spent decades trying to create something stable, meaningful, and permanent. My life was filled with fear and trepidation. PTSD controlled where I went and how I traveled. Each time I got into a car, I was acutely terrorized by the knowledge that I could die and leave my children without a mother. Or worse, I could lose them. People mocked me about my fears as if fatal car crashes were something that never happened.
As the years passed, impermanence screamed louder. Mass shootings propelled my community into darkness. The murder of my dear friend and mentor reminded me again to savor each moment because you genuinely do not know which will be your last — more loved ones perished by accidental overdose, cancer, and suicide.
Impermanence hurt like hell. Impermanence. I must fight.
How does understanding that impermanence exists make life better? I didn’t get it.
With time my community became stronger, we leaned on each other, and we loved harder. Amazingly, the wrenching sadness caused by each of the tragedies was impermanent too. Of course, there is still a sense of loss, but it is no longer all-consuming.
We pulled together in love. We move forward.
It turns out impermanence is not the enemy, for it brings joy as well as despair. In my darkest times, I now understand that they too are impermanent. In this I find comfort.
Instead of grasping tightly to what is — for fear of it going away — I try to savor each beautiful moment so I will have no regrets later. Armed with this knowledge, I step onto my loosely charted path with excitement instead of fear. After all, stability is an illusion. | https://medium.com/narrative/fighting-impermanence-6a20d28fa188 | ['Robin Harwick'] | 2019-09-28 13:55:26.691000+00:00 | ['Life Lessons', 'Mental Health', 'Self Improvement', 'Self', 'Trauma'] |
Using Pandas Profiling to Accelerate Our Exploratory Analysis | Using Pandas Profiling to Accelerate Our Exploratory Analysis
Pandas Profiling is a library that generates reports from a pandas DataFrame
Image by Cassandra Norman
Pandas Profiling is a library that generates reports from a pandas DataFrame. The df.describe() function that we normally use in Pandas is great, but it is a bit basic for a more serious and detailed exploratory data analysis. pandas_profiling extends the pandas DataFrame with df.profile_report() for a quick data analysis.
The following statistics for each column are presented in an interactive report.
Type information: detect the types of columns in a dataframe.
Essentials: type, unique values, missing values
Quantile statistics such as minimum value, Q1, median, Q3, maximum, range, interquartile range
Descriptive statistics such as mean, mode, standard deviation, sum, median absolute deviation, coefficient of variation, kurtosis, skewness
The most frequent values
Histograms
Highlighted correlations of highly correlated variables, with Spearman, Pearson and Kendall matrices
Missing values: matrix, count, heat map and dendrogram of missing values
Text analysis: learns about categories (Uppercase, Space), scripts (Latin, Cyrillic) and blocks (ASCII) of text data.
File and image analysis: extracts file sizes, creation dates and dimensions, and scans for images that are truncated or contain EXIF information.
For this Notebook we will be working on the dataset found in the following link Meteorite landings
If you want to follow the Notebook of this exploratory data analysis, you can download it here.
This comprehensive data set from The Meteoritical Society contains information on all known meteorite landings. It is interesting to observe the places on earth where these objects have fallen, following the coordinates of the dataset.
Meteorite fall map
Image by Author
Let’s start now by importing the dataset, in order to understand a little bit the data we will work with
In [1]:
import pandas as pd
We have saved the Meteorite Falling dataset (Meteorite landings) in the ‘datasets’ folder of the present working environment, so we selected the right path for the import
In [2]:
df = pd.read_csv("datasets/Meteorite_Landings.csv")
And now we check the data
In [4]:
df.head()
Out[4]:
Image by Author
In [5]:
df.shape
Out[5]:
(45716, 10)
It is a very interesting dataset, where we can observe the name that the scientists gave to the meteorite, the type of class recclass, the weight in grams mass (g), the date on which it fell and the coordinates where it fell.
It is also important to note that this is a very complete dataset, with 45,716 records and 10 columns. This information is given by the .shape attribute.
For more information about the dataset you can enter here: Meteorite — Nasa
Now, as we mentioned at the beginning, with Pandas we have gotten used to running the .describe() command to generate a descriptive analysis on the dataset in question. Descriptive statistics include those that summarize the central tendency, dispersion, and form of distribution of a data set, excluding the NaN values.
It analyzes both numerical and object series, as well as DataFrame column sets of mixed data types. The result will vary depending on what is provided. For more information about .describe() and the parameters we can pass, you can find the info here: Pandas describe
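One of those parameters is include; as a small aside that is not part of the original notebook, it can be used to make describe() summarize the non-numeric columns as well:
# include categorical/object columns in the summary as well
df.describe(include='all')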
Now let’s run this command over our dataset and see the result
In [6]:
df.describe()
Out[6]:
Image by Author
The describe() method skips over the categorical columns (string type) and makes a descriptive statistical analysis on the numerical columns. Here we can see that the id column might not be useful for this analysis, since it is only a unique identifier for each row (the primary key of the table), while the mass is useful and interesting to understand, for example, the minimum and maximum value, the mean and the percentiles (25, 50, 75).
As we can see, this is a very basic analysis, without further relevant information. If we want more relevant information we must start writing code.
This is where Pandas Profiling comes in and shows its usefulness. The documentation of this library can be found in the following link: Pandas profiling. The installation is done in the following way
Installing and importing Pandas Profiling
!pip3 install 'pandas-profiling[notebook,html]'
Image by Author
The package spec must be passed between quotes, followed by notebook,html; this is because we will need these two extras of the library.
If you are using Conda, these are the other ways to install it: Installing Pandas profiling
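For example, with Conda the package is typically available from the conda-forge channel (shown here as a convenience, not taken from the original article):
conda install -c conda-forge pandas-profiling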
Creating relevant columns to the analysis
Now we are going to create a series of columns that will be relevant to the analysis we will do with Pandas Profiling. The first one will be a constant variable for all the records; this time we will say that all the records belong to NASA, and we do the following
In [8]:
df['source'] = "NASA"
In [9]:
df.head()
Image by Author
As we can see, this column was indeed created. We are now going to create a boolean variable at random, simulating some kind of boolean output for each record.
Remember that this is done so that our exploratory analysis can identify this type of data in the result.
In [11]:
# we imported numpy, it should have been installed with Pandas. If you don't
# have it, you can install it with the `pip3 install numpy` command
import numpy as np
In [12]:
# numpy is going to help us create those random booleans in the next line of code
df['boolean'] = np.random.choice([True, False], df.shape[0])
In [13]:
df.head()
Image by Author
As we see, the column boolean was created with random values of True or False for each of the rows of our dataset. This is thanks to df.shape[0], which refers to the rows or records of the dataset; that is to say, it performed this operation 45,716 times, which is the total number of records.
Let’s do now something similar, but mixing numerical data types and categorical data types (strings)
In [14]:
df['mixed'] = np.random.choice([1, 'A'], df.shape[0])
In [15]:
df.head()
Image by Author
As we can see, here we are simulating that a column has two types of data mixed together, both numerical and categorical. This is something that we can find in real datasets, and describe() of Pandas will simply ignore them, and will not give us any analysis about that column (remember that describe() only gives results about numerical columns, it even ignores the boolean columns too)
Now let’s do something even more interesting. We are going to create a new column by simulating a high correlation with an existing column. In particular, we will do it on the column reclat that talks about the latitude where the meteorite has fallen, and we will add a normal distribution with a standard deviation of 5 and a sample size equal to the dataset longitude.
If you want to see how to create a simulation of a normal distribution with random numbers with Numpy, check this link. Random normal numpy
In [16]:
df['reclat_city'] = df['reclat'] + np.random.normal(scale=5, size=(len(df)))
In [17]:
df.head()
Image By Author
Let’s check the result of the last command, we can see that this column reclat_city now has a high correlation with reclat , because when one observation or row is positive the other one too, and when one is negative, the other one too.
To analyze correlations with Pandas we use a different method than describe() , in this case we use the corr() command. However, with Pandas profiling both analyses (descriptive statistics and correlations) we will obtain them with only one command. We will see this in a few moments when we run our exploratory analysis.
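For reference, the plain-pandas check of that relationship would look roughly like this (an added illustration, not a cell from the original notebook):
# pairwise Pearson correlation between the original and the simulated column
df[['reclat', 'reclat_city']].corr()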
Remember that for now what we are doing is adding columns to the dataframe in order to see all the possibilities offered by the Pandas profiling tool.
We are now going to simulate another common scenario in datasets: having duplicate observations or rows. We will do it like this:
In [18]:
duplicates_to_add = pd.DataFrame(df.iloc[0:10])
In [19]:
duplicates_to_add
Image by Author
What we just did was to create a new dataframe from the first 10 rows of our original dataframe. To do this we use an iloc that serves to select rows and a slice selector to select from row 0 to row 10-1.
Now let’s change the name to identify them later, but the other values remain the same
In [20]:
duplicates_to_add['name'] = duplicates_to_add['name'] + " copy"
In [21]:
duplicates_to_add
Image by Author
If we look, now all the names have the word ‘copy’ at the end. We already have this new dataset ready to concatenate it to the original dataset, so we can have duplicated data. Let’s do now the append
In [22]:
df = df.append(duplicates_to_add, ignore_index=True)
In [23]:
df.head()
Image by Author
df.shape
(45726, 14)
The original dataset contains 45,716 rows; now we have 10 more rows, which are the duplicates. In fact, we can see some of them in the display above!
Using Pandas profiling
Now we have arrived at the expected moment: we have added some columns to the dataset that will allow us to see interesting analyses on it. But before that, we must be fair to Pandas’ describe() and see what analysis it gives us on the resulting dataset
In [25]:
df.describe()
Image by Author
As we see, very little difference, it does not give us additional information about:
Boolean columns
Mixed columns
Correlations
This is where Pandas profiling shines by its simplicity to perform an exploratory analysis on our datasets. Without further ado, let’s run the following command
In [26]:
## we already have the library installed, now we need to import it
import pandas_profiling
from pandas_profiling.utils.cache import cache_file
In [27]:
## now we run the report
report = df.profile_report(sort='None', html={'style':{'full_width':True}})
In [28]:
report
Image by Author
Understanding the results
The output speaks for itself. In comparison with Pandas describe() or even Pandas corr() it is quite significant, and from the beginning we can observe a lot of additional data and analysis that will help us to better interpret the dataset we are working with. Let's analyze for example the columns we recently added
In the Overview we can see the duplicate rows report: Duplicate rows 10
In the Type of variable we can see the Boolean column: BOOL 1
In the Overview, under Warnings, we can see the high correlation between the columns we created: reclat_city is highly correlated with reclat (High correlation)
We can see after the Overview an analysis of each column/variable
In the variable mixed we can see the analysis of the randomly generated values
we can see the analysis of the randomly generated values Further down in the section Interactions we can see the different types of graphs and their correlations between variables.
we can see the different types of graphs and their correlations between variables. Then we can see an analysis of correlations, which is always important to understand the interdependence of the data and the possible predictive power that these variables have
We can also see an analysis of the “missing values”, which is always interesting to make some kind of cleaning or normalization of the data.
Finally, we might want to have this report in a different format than a Jupyter Notebook. The library offers us the possibility to export the report to html, which is useful for showing it in a friendlier environment for the end user, one you can even navigate using the navigation bars.
In [29]:
report.to_file(output_file="report_eda.html")
Image by Author
If we click on it, it will open in the browser. Personally, I like this format quite a lot, since it does not interfere with the code, and you can navigate through the analysis, show it to the stakeholders interested in it, and make decisions based on it.
Image by Author
Final Notes
As you can see, it is very easy to use the tool, and it is a first step before starting to perform feature engineering and/or predictions. However there are some disadvantages about the tool that are important to take into account:
The main disadvantage of pandas profiling is its use with large data sets. With increasing data size, the time to generate the report also increases a lot.
One way to solve this problem is to generate the profile report for a portion of the data set. But while doing this, it is very important to make sure that the data are sampled randomly so that they are representative of all the data we have. We can do this for example:
In [30]:
data = df.sample(n=1000)
In [31]:
data.head()
Image by Author
len(data)
Out[32]:
1000
As we can see, 1000 samples have been selected at random, so the analysis will not be done on more than 40,000 samples. If we have, say 1,000,000 samples, the difference in performance will be significant, so this would be a good practice
In [33]:
profile_in_sample = data.profile_report(sort='None', html={'style':{'full_width':True}})
In [34]:
profile_in_sample
Image by Author
As we see it takes less time to run with a sample of 1,000 examples.
Alternatively, if you insist on getting the report for the whole data set, you can do it using the minimal mode.
In the minimal mode a simplified report will be generated with less information than the full one, but it can be generated relatively quickly for a large data set.
The code for it is given below:
In [35]:
profile_min = data.profile_report(minimal=True)
In [36]:
profile_min
Image by Author
As we can see, it is a faster report but with less information about the exploratory analysis of the data. We leave it up to you to decide what type of report you want to generate. If you want to see more advanced features of the library please go to the following link: Advanced Pandas profiling
I have this other posts in Towards Data Science
I hope you enjoyed this reading! you can follow me on twitter or linkedin | https://towardsdatascience.com/using-pandas-profiling-to-accelerate-our-exploratory-analysis-953d9397b5a6 | ['Daniel Morales'] | 2020-12-08 13:04:58.592000+00:00 | ['Data Science', 'Machine Learning', 'Artificial Intelligence', 'Pandas Dataframe', 'Pandas Profiling'] |
How to use images from a private container registry for Kubernetes: AWS ECR, Hosted Private Container Registry. | How to use images from a private container registry for Kubernetes: AWS ECR, Hosted Private Container Registry. Joe Blue Follow Dec 24 · 3 min read
Accessing the hosted private container registry from Kubernetes
Some container registry providers in the industry give public and private access to the images in the registry repositories. For public access, as in Docker Hub, there is no issue you have to tackle in the Kubernetes (K8s) cluster. However, when it comes to private images, you have to define a way to access those images securely. This article explains the methodology used in Kubernetes to access a private container registry.
Before continuing with the Kubernetes, make sure to apply the Client Machine Settings to all worker nodes of K8s defined at the Creating a Private Container Registry: Repository and Web Service.
The Structure of the Deployment Object
Kubernetes Deployment, and the Pod object, has a special tag/field, imagePullSecrets , to interact with private repositories. With this tag, you can reference the name of the object that holds the required credentials to interact with the private registry. A sample Deployment file with these settings is shown below.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
  labels:
    app: my-app
spec:
  replicas: 1
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      imagePullSecrets:
        - name: secret-registry
      containers:
        - name: my-app
          image: ip-172-31-82-125.ec2.internal:5000/nginx
          #image: 046402772087.dkr.ecr.us-east-1.amazonaws.com/my-nginx:latest
          imagePullPolicy: Always
          ports:
            - containerPort: 80
The private registry access information is stored in a Secret object; this object is referenced in the Deployment file with the imagePullSecrets field. The name points to the Secret that contains the credentials to access the private registry.
The Structure of Secret Object
The Kubernetes Secret object has special type values for private registries:
type: kubernetes.io/dockercfg
type: kubernetes.io/dockerconfigjson
Let’s remember the structure of Secret object:
apiVersion: v1
kind: Secret
metadata:
  name: secret-registry
type: kubernetes.io/dockercfg
data:
  .dockercfg: |
    "<base64 encoded ~/.docker/config.json-file>"
or
apiVersion: v1
kind: Secret
metadata:
  name: secret-registry
type: kubernetes.io/dockerconfigjson
data:
  .dockerconfigjson: |
    "<base64 encoded ~/.docker/config.json-file>"
As long as we have defined this secret file and referenced it in the Deployment or Pod definition file, the access to the private registry should run smoothly.
Now let’s discuss the ways to create the Secret file.
Create Secret Object from login Command
When we log in to the container registry, the credentials are saved in the ~/.docker/config.json file. We can get the required information from this file and place it inside the data portion of the Secret file.
For AWS ECR;
aws ecr get-login-password --region us-east-1 | docker login --username AWS --password-stdin <your-id>.dkr.ecr.us-east-1.amazonaws.com
For hosted private container registry;
docker login -u username -p password https://private-registry
Firstly, after logging in to the container registry, create secret data from the stored data as follows;
cat ~/.docker/config.json | base64
You can now copy the output as secret data and place it in the file's data portion. Next, give it a try in the K8s cluster.
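Assuming the Secret and the Deployment shown earlier are saved to files (the file names below are placeholders, not from the original article), applying them follows the usual kubectl workflow:
kubectl apply -f secret-registry.yaml
kubectl apply -f deployment.yaml
kubectl get pods    # the pod should now pull the image from the private registry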
Create Secret Object with kubectl Command
It is also possible to create the Secret object with the help of the kubectl command. The options are listed as follows.
Secondly, you can also create the Secret object from ~/.docker/config.json directly as follows;
kubectl create secret generic secret-registry \
--from-file=.dockerconfigjson=~/.docker/config.json \
--type=kubernetes.io/dockerconfigjson
Thirdly, you can also create the Secret object by entering the credentials as follows;
kubectl create secret docker-registry secret-registry \
  --docker-server=https://private-registry \
  --docker-username=user-name \
  --docker-password=password
Note that when it comes to AWS ECR, the command aws ecr get-login-password --region us-east-1 gives the password, and the user name is AWS .
The secret-registry is the name of the Secret object to reference inside the Deployment or Pod file. | https://medium.com/clarusway/how-to-use-images-from-a-private-container-registry-for-kubernetes-aws-ecr-hosted-private-13a759e2c4ea | ['Joe Blue'] | 2020-12-28 12:45:53.463000+00:00 | ['Aws Ecr', 'Container Registry', 'Kubernetes', 'Kubernetes Deployment', 'Kubernetes Secret'] |
Building a Predictable Lead Generation Strategy for One of New England’s Top Colleges | Building a Predictable Lead Generation Strategy for One of New England’s Top Colleges Ideometry Follow Oct 20, 2017 · 3 min read
Based in Worcester, MA, Assumption College is one of the top Catholic colleges in New England, boasting over 40 majors for undergrads, and seven programs for graduate students.
The Challenge
Assumption’s marketing team needed new, more predictable streams of quality leads for their MBA and Clinical Counseling Psychology graduate programs. Though they had invested in email marketing, event marketing initiatives, content marketing, and billboards, Assumption had yet to find a way to consistently generate leads for these programs.
At the same time, Assumption was also launching a brand new graduate program — Applied Behavioral Analysis — and wanted to seed the initial class with high-quality candidates.
The Solution
We started via a discovery and research phase, interviewing staff, administrators, current students, and alumni to better understand what differentiates Assumption and their programs in addition to what are the key factors prospective students consider in their decision making process.
Based on our research, we determined that the best way to reach these people was through custom Facebook and SEM campaigns. Our approach to the Facebook and SEM campaigns for all three programs was to build out dozens of viable target audiences and hundreds of unique ads for each program. We like to call this “ad and audience experimentation at scale.” In testing hundreds of creative ad concepts across all of the target audiences, we were able to figure out which messages resonate with which specific audiences.
Once the campaigns were live, we could see which audiences and creative were generating the best results and reallocate the remaining budget to the highest-performing ads. To track our results, we built highly accurate conversion attribution funnels to identify precisely which ads/audiences are responsible for driving each lead.
After a year of running the campaign, Assumption had a predictable number of high-quality leads coming in for each program. As we optimized the creative concepts and audiences, we continued to see increases in the number of leads coming in.
Beyond the hundreds of leads that were generated, these campaigns generated over 11 million impressions, and thousands of people “liked” or shared the Assumption FB page. The reach and engagement level of our campaign was also a great push for overall brand awareness and visibility. Additionally, the research we conducted on which audiences responded best to the ads was a great exercise in target realignment, which gave Assumption a better idea of the audiences they should be targeting with all marketing communications.
***
If you liked what you saw here, check out some of the other branding and creative campaigns we’ve done for a major credit union and a tech startup.
Need help generating fresh leads to fill your sales pipeline? Get in touch with us today. | https://medium.com/ideometry/building-a-predictable-lead-generation-strategy-for-one-of-new-englands-top-colleges-f00f98bc5d15 | [] | 2017-10-24 14:45:39.906000+00:00 | ['Higher Education', 'Marketing', 'Digital Marketing', 'Lead Generation', 'Social Media Marketing'] |
Need to Vigorously Fight Vaccine Misinformation | There is so much misinformation about vaccines out there, now that worldwide vaccination programs against COVID-19 have started. We need to vigorously fight against this misinformation because the fewer people get vaccinated, the longer this virus will wreak havoc in our societies and our world, and the longer it will take for this pandemic to end.
Robert Turner — Editor and co-Founder of Being Well and Medika Life (as well as my esteemed colleague) — does an excellent job showing how the vaccine is really, truly safe. And he doesn’t even use science to do it. Check it out. | https://medium.com/beingwell/need-to-vigorously-fight-vaccine-misinformation-cd6e16b98053 | ['Dr. Hesham A. Hassaballa'] | 2020-12-21 18:39:50.252000+00:00 | ['Covid 19', 'Misinformation', 'Vaccines', 'Medicine', 'Science'] |
Are Some People Just Born Lucky? | To me, I’d attribute my luck to my bubbly personality and being blessed with the ability to learn quickly and get things done right away. I also rarely deviated from the rules and was extremely obedient. And obviously, I also took advantage of every single opportunity I received.
However, in my eyes, even though things came a little easier for me in comparison to other people, I never really deemed myself as a lucky person. In the grand scheme of things, I actually considered myself less fortunate than the average middle class.
My parents immigrated from another country, knowing no one and having very little. They did this so they could give me the opportunities that I have now — And I am eternally grateful. But, growing up, we didn’t have everything.
Our house wasn’t huge. We never went to Disney World. We never had a dishwasher. I always had to work growing up. Going to college, I didn’t receive any financial aid from my parents — Every dime I had for college came directly from student loans.
Of course, this isn’t me dismissing the things that I did have. By no means were we poor. We got by. But, it was a humbling thing which taught me a lot about being thankfulness.
As I got older, I realized that the less you had, the more appreciative of a person you were because you know what it’s like to live with less. In turn, the more you know about actually being able to live with less, the easier it is for you to live with less.
So, where does luck fit into all of this? | https://lindseyruns.medium.com/are-some-of-us-just-born-lucky-d879a545ed38 | ['Lindsey', 'Lazarte'] | 2019-09-27 01:58:39.857000+00:00 | ['Self-awareness', 'Personal Growth', 'Advice', 'Life', 'Self'] |
Named Entity Recognition — Simple Transformers —Flask REST API | Named Entity Recognition — Simple Transformers —Flask REST API Soumibardhan Follow Jun 26 · 4 min read
Image by author
Named-entity recognition (NER) is a subtask of information extraction that seeks to locate and classify named entities mentioned in unstructured text into pre-defined categories such as person names, organizations, locations, medical codes.
When I wanted to start with NER, I looked at spaCy and NLTK. spaCy already has a pre-trained NER model, which can be custom trained. I started with that approach. Then I decided to explore transformers in NLP.
I looked for something that could be implemented fast and found the amazing Simple Transformers library created by Thilina Rajapakse.
Simple Transformers lets you quickly train and evaluate Transformer models. The Simple Transformers library is built on top of the Transformers library by Hugging Face.
The process of Named Entity Recognition here has six steps:
Obtaining training and testing data
Preprocessing the data into dataframes required for the model inputs
Training the model
Testing the model
Tuning the model further for better accuracy
Creating an interface for the model
Obtaining training and testing data
Regarding obtaining the data, the CoNLL dataset can be used to start with. I was given the data from a company for a project, so the data isn’t publicly available.
So here is the format we need the data in: A list of lists with three columns, sentence number, word and the tag. Same for the test data as well.
Image by author
I tried it in Google Colab and connected Drive to get the data. Here is the format my dataset was in, which is what a real-life dataset constructed manually will look like.
My raw data was in this format :
Text: the sentence
Rest (unnamed columns): tags corresponding to each word in the sentence
Preprocessing the data into dataframes required for the model inputs
To parse the data in the format required by simple transformers, I divided the data in 85:15 ratio and wrote a parser.
df: dataframe that loads the data CSV file into a pandas dataframe.
Sentences: list to store the Text column of the dataframe
df['Labels']: new column in the dataframe holding the strings of all tags from the unnamed columns of the dataframe
Labels: list to store the Labels column of the dataframe
traindata: list of lists, each list being [sentence number, word, tag], from the 0th sentence to the 94th sentence.
testdata: list of lists, each list being [sentence number, word, tag], from the 95th sentence to the 106th sentence.
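The parser gist itself is not embedded in this export. The sketch below is an illustrative reconstruction based on the variable descriptions above; the file name, the tag-column detection and the helper function are assumptions, not the author's actual code:
import pandas as pd

df = pd.read_csv('ner_dataset.csv')   # hypothetical file name
Sentences = df['Text'].tolist()

# assume every column other than 'Text' holds one tag per word (assumption)
tag_columns = [c for c in df.columns if c != 'Text']
df['Labels'] = df[tag_columns].apply(
    lambda row: ' '.join(str(t) for t in row if pd.notna(t)), axis=1)
Labels = df['Labels'].tolist()

def build_rows(start, end):
    rows = []
    for i in range(start, end):
        words = Sentences[i].split()
        tags = Labels[i].split()
        for word, tag in zip(words, tags):
            rows.append([i, word, tag])
    return rows

traindata = build_rows(0, 95)     # sentences 0-94
testdata = build_rows(95, 107)    # sentences 95-106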
Training the model
Training the model is really easy, though it may take some time.
The first trial gave a very low F1 score; that was because, by default, the number of epochs was set to one.
Here I used the bert-base-cased model to train on, as my data was cased. The model is saved to the MODEL folder so that we do not need to train the model every time we want to test it. We can load the trained model directly from the folder.
train_batch_size: batch size; a power of 2 usually gives good results
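The training gist is not embedded in this export either. Below is a minimal sketch of how Simple Transformers is typically driven; the specific args values are assumptions rather than the author's exact settings:
from simpletransformers.ner import NERModel
import pandas as pd

# The model expects three columns named sentence_id, words and labels
train_df = pd.DataFrame(traindata, columns=['sentence_id', 'words', 'labels'])
test_df = pd.DataFrame(testdata, columns=['sentence_id', 'words', 'labels'])
labels = list(train_df['labels'].unique())

model = NERModel('bert', 'bert-base-cased', labels=labels,
                 args={'num_train_epochs': 5,        # illustrative values
                       'train_batch_size': 32,
                       'max_seq_length': 128,
                       'output_dir': 'MODEL/'})

model.train_model(train_df)
result, model_outputs, preds_list = model.eval_model(test_df)
print(result)   # eval_loss, precision, recall, f1_score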
Testing the model
Now check the outputs by passing any arbitrary sentence:
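The prediction snippet is likewise missing from this export; calling the trained model looks roughly like this (the sentence is only an example):
predictions, raw_outputs = model.predict(["What is the tax rate"])
print(predictions)   # one list of {word: tag} dicts per input sentence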
In place of ‘tax rate’, any arbitrary sentence can be entered to check output tags.
Tuning the model further for better accuracy
Now let’s try tuning the hyperparameters (args). These are the args that are available:
Here is the link to all the hyperparameters.
Increasing the number of epochs and max_seq_length helped in getting the accuracy to 86%. By using a large model (roBERTa large), I was able to achieve 92.6% accuracy on the test data.
Outputs:
Image by author
INFO:simpletransformers.ner.ner_model:{'eval_loss': 0.6981344372034073, 'precision': 0.9069767441860465, 'recall': 0.9512195121951219, 'f1_score': 0.9285714285714286}
There are several pretrained models that can be used. You can choose one depending on whether your data is in a specific language, or if your data is cased. My training data was financial and in English. So BERT and roBERTa were my best options. I did try distilBERT as well.
Supported model types for Named Entity Recognition in Simple Transformers:
BERT
CamemBERT
DistilBERT
ELECTRA
RoBERTa
XLM-RoBERTa
Creating an interface for the model
I worked on creating an interface for the model. Using Flask and a REST API, I created a web page with two buttons, one to display the results of the sentence entered (GET NER TAGS).
Here is the Flask app.py code :
Make sure you use the same args to load the model as used to train the model.
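The embedded app.py gist is not included in this export. The sketch below shows a stripped-down version of the idea as a JSON endpoint rather than the two-button web page described above; the route name, folder path and args are assumptions, not the author's actual code:
from flask import Flask, request, jsonify
from simpletransformers.ner import NERModel

app = Flask(__name__)

# Load the trained model from the MODEL folder with the same args used in training
model = NERModel('bert', 'MODEL/', args={'max_seq_length': 128}, use_cuda=False)

@app.route('/predict', methods=['POST'])
def predict():
    sentence = request.json.get('sentence', '')
    predictions, _ = model.predict([sentence])
    # predictions holds one list of {word: tag} dicts per input sentence
    return jsonify(predictions[0])

if __name__ == '__main__':
    app.run(debug=True)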
When you click on the (GET CSV) button, a csv of the words in the sentence and the respective tags gets downloaded automatically.
Image by author
Structure of tags.csv:
Image by author
Simple transformers is a really awesome tool! | https://medium.com/swlh/named-entity-recognition-simple-transformers-flask-rest-api-ec14a7a444cb | [] | 2020-07-05 22:32:53.423000+00:00 | ['NLP', 'Deep Learning', 'Bert', 'Python', 'Named Entity Recognition'] |
Book-Destiny | 2: The Haunted Bookshop
One of my favorite pastimes — it is, in fact, a sort of solitaire, in which books assume the place of playing cards, with letters, stories, poems, and memoirs standing in for hearts, diamonds, clubs, and spades — is the maintenance of an imaginary bookcase. This small stack of shelves and its carefully chosen contents belong to an equally imaginary, yet nonetheless gracious guest bedroom that lies somewhere beneath (in space) and beyond (in time) the parking lot for toys that currently qualifies as our home’s hosting pad. No matter, the books in mind are real enough to me, and I take endless pains to balance their qualities so that our phantom visitors will be treated to an inviting menu of the pleasures to be taken between covers. Large questions of reading matter and moment are entertained as I select and arrange these bedside companions: do I opt for diverting tales or absorbing tomes? For prose that teases or that tranquilizes? Is it best that essays plumb pools of meditation, or merely skim their surface with wit and clever commentary? Are dreams deepened by weighty thoughts or leavened by light reading? Should the words I wish upon my guests induce sleep or encourage the dangerous intrigue of insomnia? It’s a game that gives a diverting outward cast to a lifetime’s inward occupation, and one I suspect that’s played, in some form or another, by anyone who trades in books.
It is certainly played in a spirited fashion by Roger Mifflin in Chapter III of The Haunted Bookshop, as he prepares a spare bedroom for his new shop assistant, Miss Titania Chapman, in the “comfortable old brown-stone” dwelling he shares with his wife, the former Helen McGill (with whom readers of Morley’s first Mifflin novel, Parnassus on Wheels, are already well-acquainted). In addition to living quarters, of course, the Mifflins’ Brooklyn brownstone houses — in “warm and comfortable obscurity, a kind of drowsy dusk” — Parnassus at Home, the stationary incarnation of the itinerant literary tabernacle that gave the earlier book its name. Having hung some pictures on the walls of his acolyte’s new quarters, Mifflin “bethought himself of the books that should stand on the bedside shelf,” devoting the “delighted hours of the morning” to consideration of what to put there. What follows is as careful a disquisition on the subject of the “proper books for a guest-room” — “a question that admits of the utmost nicety of discussion” — as one is ever likely to come across, one that not only generates a list of alluring titles that will send you scurrying to your own favorite sources of secondhand books, but also speaks volumes about the bookseller’s faith in the life-enhancing force these titles represent.
Such faith, after all, is the animating spirit of The Haunted Bookshop, despite the very real diversion and suspense delivered by the novel’s capering plot, in which an ambitious (albeit unlettered) advertising man, Mr. Aubrey Gilbert, woos a comely young heiress (Miss Chapman) by doggedly uncovering the designs of nefarious foreign agents operating just down the block from Mifflin’s den of serenity.** While Gilbert is busy deciphering the perilous puzzle he has stumbled upon, whose pieces include a local druggist, hotel chefs, and, largest and most mysterious of all, a vanishing copy of Thomas Carlyle’s edition of Oliver Cromwell’s Letters and Speeches, Mifflin goes about his bookish business, celebrating, with his “convincing air of competent originality,” the magic and motive power of books. His enthusiasm is of course endorsed and abetted by Morley himself, who allows his erudite protagonist to offer diagnoses (“I can see just by looking at you that your mind is ill for lack of books but you are blissfully unaware of it!”) and prescriptions (“If your mind needs phosphorous, try Trivia by Logan Pearsall Smith. If your mind needs a whiff of strong air, blue and cleansing, from hilltops and primrose valleys, try The Story of My Heart by Richard Jefferies. If your mind needs a tonic of iron and wine, and a thorough rough-and-tumbling, try Samuel Butler’s Notebooks or The Man Who Was Thursday, by Chesterton”) that will cheer the heart of any book-besotted reader. To say nothing of the happy lessons Mifflin imparts to Titania as he extemporizes an introductory course in the art of bookselling, or the trade-based disputations of the convention of his colleagues — “The Corn Cob Club” — assembled in Chapter II (the latter half of which, Morley coyly advises us, “may be omitted by all readers who are not booksellers”).
In short, for all the entertainment one can take in the pages of The Haunted Bookshop and its predecessor (and the pleasures they proffer are considerable), the enduring delights of both novels belong to their unabashed and merry exaltation of books. The romance of the bookshop (with the attendant charms of dust and discovery, study and serendipity, the errors of pursuit and the truths of apprehension — “We have what you want, though you may not know you want it”) is nowhere better caught than in the environs of Roger Mifflin’s humble Parnassus, and most often on his own tongue. “To spread good books about,” he tells Titania, “to sow them on fertile minds, to propagate understanding and a carefulness of life and beauty, isn’t that high enough mission for a man?” From where I sit, I can only hope — no, trust — that it is.
That books can influence, amuse, enhance, intensify, heal, even determine our lives is the ardent message wrapped within the warm folds of this cozy tale. Like ghosts, books haunt us. Which is why the ones placed on a bedside shelf must be chosen carefully; in the rooms within my ken, it goes without saying, such a fateful selection is bound to include both The Haunted Bookshop and Parnassus on Wheels. | https://jamesmustich.medium.com/book-destiny-88aabc62a530 | ['James Mustich'] | 2019-11-05 01:27:44.939000+00:00 | ['Literature', 'Culture', 'Reading', 'Books'] |
Is it May already? | Dreaming of the outdoors
Dear Readers,
We sent out our last newsletter in February. A lot has happened since then, as you may have noticed! But global pandemic aside, some excellent stories have made their way into Fourth Wave, a publication by/for/about women and other disempowered groups.
Some address these unprecedented times head on. Some look the other way and provide a distraction. We have a lot of book and movie reviews because…what else is there to do?!? :p
We know it’s a long list. That’s because we don’t want to bombard you with newsletters. But we hope you’ll pick some to read and appreciate our writers, and maybe even think about contributing something yourself. Fourth Wave is a growing community of readers, writers, and thinkers who care about equity and changing the world for the better. Here’s a round-up of what we’ve been up to lately:
What We Published in May (so far)…
What to Binge Watch if You’re Anxious
Setting the “Portrait of a Lady” on Fire
Armed Men in Public Aren’t a Symbol of “Freedom”
The Heart of “The Nightengale”
Self-Care is the Antidote to Self-Loathing
What We Published in April
Believe Reade and Vote for Biden
Quarantine Life With Cats and “Control Issues”
Scenes From the In Between
What to Read in Quarantine
What’s the Difference Between Grift and Graft?
Twelve Films to Inspire Discussion About Gender Equality
What We Published in March (when we were feeling poetic!)
What to do in a Pandemic (a poem)
Prophesy (a poem)
Writing is a Starry Night (a poem)
Pandemic Panacea
Open Poem to Lawrence Ferlinghetti From The Urban Forester (a poem)
Ferlinghetti Day is Coming For You!
The Problem With Bucket Lists
Porn, Tears & Talk: How I Wrote _Freeloader in the House of Love_
And What We Published in February After Sending the Newsletter…
Pundits Are Poobahs
More Senseless Destruction From SFDPW
Why Intermittent Fasting Works For Me
What Do Trump Voters Believe He’s Doing For Them?
Harvey Weinstein’s False Memory Defense and its Shocking Origin Story
Wow! I had no idea Fourth Wave had been so productive. Just writing that list of links was exhausting :p (Plus I began another publication called Tarot Me This, so if you’re interested in tarot cards, psychology, archetypes, or divination, check that out.) Yet somehow all I remember from the past few months is staring out the window with longing, like my daughter’s cat in the picture up top. So I guess that’s a bit of good news, anyway. Even if you don’t think you’re being productive, perhaps when you sit down and tote it all up, you are. Or maybe productivity is happening under the surface? Could be. We’ll see what happens next…
In the meantime, please stay safe, stay positive, and stay alive. And keep reading!
Yours truly,
Patsy Fergusson
Editor
Fourth Wave | https://medium.com/fourth-wave/is-it-may-already-2dbd1c23716c | ['Patsy Fergusson'] | 2020-05-16 03:29:36.906000+00:00 | ['Stories That Matter', 'Writing On Medium', 'Women', 'Feminism', 'Writing'] |
Compare the Best Javascript Chart Libraries | Compare the Best Javascript Chart Libraries
Chart.js, Highcharts, C3, NVD3, Plotly.js, Chartist, Victory
Read the original article on Sicara’s blog here.
In this article, I provide a decision tree to quickly decide which open source javascript charting library is right for your project. You can use any framework like React, Angular, AngularJS, VueJS or just use vanilla javascript. Links to the graph libraries for each framework are provided below.
I purposely exclude D3.js among the chart libraries. D3 is an awesome javascript library, but it has no “ready to ship” charts and graphs. If you want to learn about D3 you can read my article A starting point on using D3 with React. I will however list NVD3 which is a chart library built on top of D3.
Edit: this post has been updated since its first publication. Last update: 09/11/2017
Table of content
The most used javascript chart libraries
I reviewed all the libraries listed in the aforementioned blog post. This post comes on the first page if you google ‘best javascript chart libraries’. I used this tool to get the following graph. It represents the number of downloads per day of each library through npm (the weekends were removed). | https://medium.com/sicara/compare-best-javascript-chart-libraries-2017-89fbe8cb112d | ['Adil Baaj'] | 2019-12-05 11:31:21.134000+00:00 | ['Development', 'Charts', 'Web Development', 'Data Visualization', 'JavaScript'] |
Conversion Optimization: 3 Simple Steps to Improve Your Conversion Rate Using Google Analytics | All websites are created for a purpose. They can make your customers more aware of your brand, encourage them to buy your product, or push them to visit your storefront.
The effectiveness of your website in accomplishing these various objectives can be summarized in one metric — conversion rate.
Your conversion rate is a measure of how effectively your website is able to accomplish the purpose it was designed for. As such, everything about your website should be geared towards optimizing your conversion rate.
However, tracking and optimizing your conversion rate in the real world is often like using a lighthouse as a landmark when sailing on a dark and stormy night — while you might know where you are going in general, it is still difficult to chart a course that will get you to your destination safely.
Luckily, free tools like Google Analytics can serve as a compass that guides you through the complex, stormy sea of digital marketing. In other words, it’ll help you accomplish your business objectives by safely leading you through the night.
For this reason, we spent the last couple of months touching on each aspect of Google Analytics in an attempt to help you leverage it for your business to the greatest extent possible.
As a brief summary, here’s what we’ve done so far.
How to quickly identify the areas you should focus on in order to optimize user experience with simple metrics (linked here).
How to identify your core audience groups and better understand them (linked here).
How to identify where your core audiences come from and how to apply channel-specific strategies accordingly (linked here).
How to identify and optimize sub-optimal pages on your website (linked here).
How to design a user behavior flow that facilitates conversion (linked here).
This post, as the final post of the series, serves as both a conclusion and a wrap-up that helps you bring together everything we’ve talked about before. It’ll also provide you with an easy-to-understand business framework that will help you derive actual business benefit from Google Analytics through increased optimization.
More specifically, we will provide you with a three-step analyze-act-monitor framework (summarized in the graphic below) that will help you identify the core audiences you should be optimizing for, make clear the actions you should be taking to optimize for those audiences, and showcase how you should monitor the effects of those optimization steps.
Let’s begin.
Analyze: Identify your “Core Audience”
Because modern technology allows businesses to quickly and efficiently contact potential customers in large numbers, the modern customer is often overwhelmed with outreach attempts from businesses every day.
Therefore, unless your message is specifically tailored towards a specific customer group, addresses their burning problem, and is delivered through a specific channel that they frequent, it is difficult to make an impact on the busy minds of your customers.
For this reason, the first step of any conversion optimization process is to figure out exactly who you are optimizing your website for. This is your “core audience”.
An easy way to identify your “core audience” is to target the audience group that is already engaging with and converting on your website.
Our previous article (linked here) talked at length about how to do just that but the basic principle is to slice and dice your key traffic, engagement, and conversion metrics across multiple demographic dimensions such as age, gender, and location in order to find a significant group that has a high rating across each of your metrics.
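If you prefer to pull those numbers programmatically instead of clicking through the interface, a rough sketch against the (Universal Analytics) Reporting API v4 could look like the following; the key file and view ID are placeholders, and the metric and dimension names assume a Universal Analytics property:

from google.oauth2 import service_account
from googleapiclient.discovery import build

# Placeholders -- substitute your own service-account key file and view ID.
KEY_FILE = "service-account.json"
VIEW_ID = "123456789"

credentials = service_account.Credentials.from_service_account_file(
    KEY_FILE, scopes=["https://www.googleapis.com/auth/analytics.readonly"])
analytics = build("analyticsreporting", "v4", credentials=credentials)

# Traffic, engagement, and conversion metrics sliced by age bracket and gender.
response = analytics.reports().batchGet(body={
    "reportRequests": [{
        "viewId": VIEW_ID,
        "dateRanges": [{"startDate": "30daysAgo", "endDate": "today"}],
        "metrics": [
            {"expression": "ga:sessions"},
            {"expression": "ga:bounceRate"},
            {"expression": "ga:goalConversionRateAll"},
        ],
        "dimensions": [
            {"name": "ga:userAgeBracket"},
            {"name": "ga:userGender"},
        ],
    }]
}).execute()

for row in response["reports"][0]["data"].get("rows", []):
    print(row["dimensions"], row["metrics"][0]["values"])

Segments that score consistently high across all three columns are good candidates for your “core audience.”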
After deciding which “core audience” to target, you’re ready to move on to the next step. Now, you need to come up with strategies that will improve your interaction with those audience groups.
Action: Find The “Lowest Hanging Fruit” and Chart the Course of Action
When choosing which action to perform, the key is to focus on identifying the “lowest hanging fruit” that is most likely to bring a large improvement in the experience of your “core audiences” and, ultimately, conversion.
I will begin this section by first introducing actions you can take to improve your “core audience” experience, and then talk about how to use analytics to figure out which specific actions to take.
The actions you can perform to improve your interaction with your “core audience” group can be divided into two categories: optimizing acquisition and optimizing on-site experience.
Acquisition focuses on getting your “core audience” to visit your website and keep on coming back. Business actions related to acquisition include:
Identifying and targeting channels that your “core audience” frequents.
Tailoring your messaging to target the key emotional drivers of your “core audience”
Launching outreach campaigns such as paid search and social media acquisition, and optimizing those campaigns over time
Launching retention campaigns such as email marketing and social media engagement to get your “core audience” to come back to your website.
On the other hand, optimizing user experience focuses on improving your visitors’ website experience and, ultimately, their chance of converting. Actions that can help you optimize user experience include:
Improving the landing pages of your website to increase the chance that visitors will go beyond the first page (minimize bounce rate)
Increasing engagement and trust. This can be done by improving the user experience so they are able to learn more about your company and are more likely to perform actions that bring them closer to converting (increase engagement)
Fixing drop-off points. This can be done by improving pages that users are often exiting from. This will further increase on-site engagement (reduce dropoff rate)
To decide which actions to take, reference the 8 core metrics of Google Analytics with a specific eye to your “core audience” (you can accomplish this by setting up a custom segment), and follow the logic below (we discussed those metrics in more detail in this blog post):
If your traffic metrics are decreasing over time (users, sessions, pageviews), then you should focus on acquisition steps that are geared towards attracting both new and return visitors.
If the number of new sessions coming to your website is decreasing over time, you should focus on acquiring new visitors with methods such as SEO, paid search ads, and social media ads.
If the number of new sessions is stable, but traffic still decreases over time, then you should focus on re-engaging your current website visitors with methods such as email marketing, content marketing, or social media engagement.
If your bounce rate is high (which is a bad thing), then you should focus on optimizing your landing page with A/B testing.
If your engagement metrics are low (avg. session duration and avg. pages per session), then you should focus on optimizing information flow on your website and identifying sub-optimal pages.
If your conversion rate is low but all other metrics are stable over time, then you should focus on actions that can facilitate more purchases from your visitors, such as offering free shipping or adding a countdown clock.
As you decide which business actions to take, you also need to consider the best way to track the effectiveness of those actions.
When choosing metrics, it is important to choose ones that are not only easy to collect, but also that are closely linked with your business objectives.
For example, when conducting a Facebook advertising campaign, the number of likes your page receives is, at best, indirectly linked with an increase or decrease in your conversion rate. Metrics such as “percent of website visits” or “percent conversion” (trackable through Facebook Pixel) are much better choices.
We also wrote a guide to help you through this process that you can find here. This will help you avoid common mistakes and choose the correct metrics.
After you have figured out the actions to take that will best engage your “core audience”, it’s time to implement the actions and rapidly iterate.
Monitor: Iterate and Improve
This stage is where the digital marketing truly happens.
Almost inevitably, many of your action steps won’t survive first contact with your “core audience.” This means that you’ll need to figure out not only which actions were sub-optimal, but also why those action steps led to sub-optimal results in the first place.
To identify those suboptimal action steps, we recommend you take a holistic, analytical approach and consider the impact of each action step in both the short term (with metrics such as conversion rate and click-through rate) and the long term (with metrics such as differences in traffic or total conversions before and after the campaign).
The reason we recommend a long-term approach to identifying campaign effectiveness is that very often the true value of a specific campaign lies not in short-term, one-off purchases by your customers, but in the long-term, repeated engagement they have with your business long after the campaign’s conclusion.
After identifying sub-optimal campaigns, it is time to understand why they have failed.
To identify the underlying reasons behind your failed actions, you need to take a customer-driven mindset and actually talk to people who belong to your “core audience” group. (Here is a guide by HubSpot on how to do this). These conversations will serve as a way for you to understand the flaws in your original assumption while also helping you develop empathy regarding the needs of your customers.
Based on the feedback you receive from your “core audience” you should modify both your long-term and short-term objectives and keep iterating.
The key to success in this step is a combination of patience and vigilance.
You should take a long-term view in understanding that all optimization steps take time to take effect, and a short-term view in realizing that constant adjustment on a small scale is what makes those wonderful long-term effects happen in the first place.
While the changes may be marginal in the beginning, they will become more and more obvious as you are consistently trying to become better at making data-driven decisions and adjustments. | https://medium.com/analytics-for-humans/conversion-optimization-use-google-analytics-to-improve-your-conversion-rate-using-3-simple-steps-639c3f8ac279 | ['Bill Su'] | 2018-06-08 19:35:14.124000+00:00 | ['Analytics', 'Google Analytics', 'Marketing', 'Small Business', 'Digital Marketing'] |
Data Quality at Airbnb | Introduction
At Airbnb, we’ve always had a data-driven culture. We’ve assembled top-notch data science and engineering teams, built industry-leading data infrastructure, and launched numerous successful open source projects, including Apache Airflow and Apache Superset. Meanwhile, Airbnb has transitioned from a startup moving at light speed to a mature organization with thousands of employees. During this transformation, Airbnb experienced the typical growth challenges that most companies do, including those that affect the data warehouse.
In the first post of this series, we shared an overview of how we evolved our organization and technology standards to address the data quality challenges faced during hyper growth. In this post we’ll focus on Midas, the initiative we developed as a mechanism to unite the company behind a shared “gold standard” that serves as a guarantee of data quality at Airbnb.
Defining the Gold Standard
As Airbnb’s business grew over the years, the company’s data warehouse expanded significantly. As the scale of our data assets and the size of the teams developing and maintaining them grew, it became a challenge to enforce a consistent set of standards for data quality and reliability across the company. In 2019, an internal customer survey revealed that Airbnb’s data scientists were finding it increasingly difficult to navigate the growing warehouse and had trouble identifying which data sources met the high quality bar required for their work.
This was recognized as a key opportunity to define a consistent “gold standard” for data quality at Airbnb.
A Multi-dimensional Challenge
While all stakeholders agreed that data quality was important, employee definitions of “data quality” encompassed a constellation of different issues. These included:
Accuracy: Is the data correct?
Consistency: Is everybody looking at the same data?
Usability: Is data easy to access?
Timeliness: Is data refreshed on time, and on the right cadence?
Cost Efficiency: Are we spending on data efficiently?
Availability: Do we have all the data we need?
The scope of the problem meant that standards focused on individual data quality components would have limited impact. To make real headway, we needed an ambitious, comprehensive plan to standardize data quality expectations across multiple dimensions. As work began, we named our initiative Midas, in recognition of the golden touch we hoped to apply to Airbnb’s data.
End-to-end Data Quality
In addition to addressing multiple dimensions of data quality, we recognized that the standard needed to be applicable to all commonly consumed data assets, with end-to-end coverage of all data inputs and outputs. In particular, improving the quality of data warehouse tables was not sufficient, since that covered only a subset of data assets and workflows.
Many employees at Airbnb will never directly query a data warehouse table, yet use data on a daily basis. Regardless of function or expertise, data users of all types are accustomed to viewing data through the lens of metrics, an abstraction which does not require familiarity with the underlying data sources. For a data quality guarantee to be relevant for many of the most important data use cases, we needed to guarantee quality for both data tables and the individual metrics derived from them.
In Airbnb’s data architecture, metrics are defined in Minerva — a service that enables each metric to be uniquely defined in a single place — and broadly accessed across company data tools. A metric defined in Minerva can be directly accessed in company dashboarding tools, our experimentation and A/B testing framework, anomaly detection and lineage tools, our ML training feature repository, and for ad-hoc analysis using internal R and Python libraries.
For example, take Active Listings, a top-line metric used to measure Airbnb’s listing supply. An executive looking up the number of Active Listings in a Apache Superset dashboard, a data scientist analyzing the Active Listings conversion funnel in R, and an engineer reviewing how an experiment affected Active Listings in our internal experiment framework will all be relying on identical units for their analysis. When you analyze a metric across any of Airbnb’s suite of data tools, you can be sure you are looking at the same numbers as everybody else.
In Airbnb’s offline data architecture, there is a single source of truth for each metric definition shared across the company. This key architectural feature made it possible for Midas to guarantee end-to-end data quality, covering both data warehouse tables and the metric definitions derived from them.
The Midas Promise
To build data to meet consistent quality standards, we created a certification process. The goal of certification was to make a single, straightforward promise to end users: “Midas Certified” data represents the gold standard for data quality.
In order to make this claim, the certification process needed to collectively address the multiple dimensions of data quality, guaranteeing each of the following:
Accuracy: certified data is fully validated for accuracy, with exhaustive one-off checks of all historical data, and ongoing automated checks built into the production pipelines.
Consistency: certified data and metrics represent the single source of truth for key business concepts across all teams and stakeholders at the company.
Timeliness: certified data has landing time SLAs, backed by a central incident management process.
Cost Efficiency: certified data pipelines follow data engineering best practices that optimize storage and compute costs.
Usability: certified data is clearly labeled in internal tools, and supported by extensive documentation of definitions and computation logic.
Availability: certification is mandatory for important company data.
As a last step, once data was certified, that status needed to be clearly communicated to internal end users. Partnering with our analytics tools team, we ensured data that was “Midas Certified” would be clearly identified through badging and context within our internal data tools.
Fig 1: Midas badging next to metric and table names in Dataportal, Airbnb’s data discovery tool.
Fig 2: Midas badging for metrics in Airbnb’s Experimentation Reporting Framework (ERF).
Fig 3: Pop-up with Midas context in Airbnb’s internal data tools.
The comprehensive Midas quality guarantee, coupled with clear identification of certified data across Airbnb’s internal tools, became our big bet to guarantee access to high quality data across the company.
The Midas Certification Process
The certification process we developed consists of nine steps, summarized in the figure below. | https://medium.com/airbnb-engineering/data-quality-at-airbnb-870d03080469 | ['Vaughn Quoss'] | 2020-11-24 21:08:07.704000+00:00 | ['Analytics Engineering', 'Data', 'Data Engineering', 'Data Warehouse', 'Data Science'] |
Power to the Indigenous Peoples! | St. Lawrence Island Yupik share their knowledge and concerns about local waterways given loss of sea ice and increase in commercial shipping. Photo ©WCS.
By David Wilkie and Michael Painter
October 23, 2018
In the wake of the recent Global Climate Action Summit, we need to replay and reinforce the message that Indigenous Peoples and traditional communities — by exercising their rights, securing their wellbeing, and maintaining their cultural identities — will play an enormously important part in keeping forests intact and helping humanity avoid a climate change catastrophe.
WCS has long supported the rights of Indigenous Peoples to decide how to benefit from stewardship of their lands and waters. During the summit, our staff in the Democratic Republic of Congo, in the boreal north of Canada, and in Bolivia communicated passionately about the vital role that Indigenous Peoples play in making our planet more resilient to climate change.
Local women helping WCS staff to monitor fish landings in Tanjung Luar village in Lombok, Indonesia. Photo ©WCS.
We noted how civil society has and continues to play a key supporting role in securing rights to Indigenous Peoples’ territory and helping them to build their capacity to manage their natural resources.
But while the importance of securing rights and building capacity for Indigenous Peoples is well understood, we less frequently discuss why it is essential that Indigenous Peoples hold the power to govern their traditional territories and exercise their authority. Without clout, they are often thwarted in their efforts to manage their natural resources.
Time and again, politically or economically powerful actors have undermined or usurped the legitimate authority of Indigenous Peoples. On the other hand, when civil society actors have empowered Indigenous Peoples partners to govern their own territory, they and not others have benefitted.
It is essential that Indigenous Peoples hold the power to govern their traditional territories and exercise their authority.
A little over 180 km west of Kabul in the Hindu Kush mountains of central Afghanistan, the Hazara people have depended for generations on the Band-e-Amir landscape to graze their livestock, provide fuel and fodder, and facilitate rain-fed crop production.
In 2006, WCS helped establish the Band-e-Amir Protected Area Committee (BAPAC). BAPAC is comprised of elected representatives from all 15 villages in the area and members of various ministries and levels of government. Community representatives are the majority of the BAPAC members and consequently have the ability to guide the discussion on whether, why, and how to protect the Band-e-Amir area. | https://medium.com/communities-for-conservation/power-to-the-indigenous-peoples-145992dfbcc6 | ['Wildlife Conservation Society'] | 2018-10-24 17:33:02.226000+00:00 | ['Indigenous People', 'Indigenous', 'Wildlife', 'Environment', 'Conservation'] |
Is it on the Calendar? | The number one excuse for missing anything in business is “it wasn’t on the calendar”. That’s true if it’s an internal meeting of your team or a webinar held for your prospects.
The calendar has become an important tool in managing our time, both personally and professionally. It’s on every computer, every phone and included in every productivity application ever created.
Companies such as Franklin Covey have made hundreds of millions of dollars helping us manage our calendars. A quick search on Google for the term “productivity calendar planners” will produce over 30 million results. We crave a better way to manage our time, to be more productive, and get better results. And in most cases, we come back to the calendar to help us do just that.
My question is: if we know that about managing our own lives, why do we believe anyone else is different? If we manage our time through our calendar, isn’t it a logical conclusion to believe our prospects and customers do the same thing? Aren’t their lives just as busy as ours, with dozens of tasks to be completed today, and hundreds during a month?
If the answer to that question is ‘yes’, they why not leverage the calendar (which we all know is being used by them) and get our marketing activities right to the calendar. A calendar “event” sent to them is the same as that meeting request you sent to your team this morning. It gets on their calendar, and they can make plans accordingly.
We wrote an Ideabook to help start your idea engine. It’s really just a matter of thinking a bit differently about how you communicate and how you ask for someone’s time. We take that very seriously; we don’t believe in calendar spam any more than we believe in email spam. But we do believe that even if someone has great intentions to attend your “event”, if it’s not on their calendar, there is a higher possibility of them not attending (as evidenced by webinar attendance rates of 40%).
So, download the Ideabook, flip through the ideas we’ve presented, and see if one or two might spark a thought on how to improve a marketing activity. Right now, it will only cost a few minutes of your time, but we believe, in the end there are major benefits of using the calendar in our marketing activities. | https://medium.com/calendar-marketing/is-it-on-the-calendar-908c68061139 | ['Arnie Mckinnis'] | 2016-12-27 12:54:46.867000+00:00 | ['Digital Marketing', 'Time Management', 'Marketing'] |
Continuous Integration for mobile and web projects | Greetings!
Today I want to share some knowledge about one commonly-known practice in IT development — Continuous Integration.
To make my text stand out from the usual Wikipedia article, I want to mention that our team had the task of creating a unified process for all our projects. We have various mobile (iOS, Android) and web (services, sites, portals) projects.
Introduction
In the beginning let’s look at the term itself and see how it can be helpful.
According to Wikipedia, continuous integration is a practice in software development that automates building the product in order to find integration problems early.
Simply put, it’s a way to have a ready build of your product (site, app) after one click of a button (or even fully automated).
How can it be useful?
First of all, your QA department will thank you, because they won’t need to ask a developer every hour for a test build. Secondly, you’ll have the option to test your entire system, which will help you find all the integration mistakes in advance.
Thirdly, your client will also be grateful to receive a single link to the project that is accessible all the time, where they can see the stage of development and find out where their money was invested.
So, the goal is to create a system for project builds for various platforms (web + mobile). Searching for an answer led us to several solutions in this field:
Hudson
CruiseControl
TeamCity
FastBuilder
After looking carefully at every one of them and exploring all the pros and cons, we decided to go with Hudson. This tool was the most flexible and the best supported by developers around the world. Later we found out that there is a fork of it called Jenkins, but we never moved our projects there because Hudson worked fine for us.
As a version control system for the builds we have historically had Git; with its help we store all the source code and builds. Moreover, with the help of hooks we publish current versions of the projects to the test server.
Finally, the thing that we decided to create in the first place was a unified corporate web portal that manages user access to the builds; it stores information about them and lets users log in, download, and test all the builds. We named it Publisher. It is a simple web service written in Python (Django).
It looks like this:
Realization
Now it’s time to see how it all works together.
Right after a new project is started, its initial code and a new Hudson job are also created. In the job settings we enter the repository address and set it to be polled every 5 minutes. If the repository has been modified, a new build starts automatically. There are many Hudson plugins that can automatically build various types of projects, but we wanted to create our own approach. In the end, our build process is a one-line command script.
The script itself varies based on the type of the project.
The simplest one is the web project. In that case the current version of the repository is packed into a single archive and published for people to access. An Android project is a little more complex to build. First you need to run the command that updates the ant build script (supported by the Android SDK by default), then run the generated build script. For example, like here:
android update project --target 10 --name ProjectName
ant debug
As a result we’ll have a ready apk file that you can download and install on your device. In this example we didn’t cover the scenario of preparing the project for publishing to Google Play. Still, that can also be done with the command-line tools in the SDK utilities.
Finally, the most interesting part is the iOS build. With Apple’s standard flow you need to build the app, then sign it with a developer certificate, and only after that can you install it (usually this is done through iTunes). We decided to simplify this process as much as we could. In the end we arrived at the following: with the help of the xcodebuild utility we compile and package the project into an .app; then with the help of the xcrun utility we sign it with the developer certificate (accessible from the build server) and publish the ready ipa file.
xcodebuild -project ProjectName.xcodeproj
xcrun -sdk iphoneos PackageApplication -v ProjectName.app -o ~/path/projectname.ipa -embed "~/path/provision.mobileprovision"
In this case we build the exact target that was set up in the developer’s Xcode, but this parameter can also be changed manually.
In the end all builds (ipa, apk, zip) go to a shared location where they are ready for users to check. There are no problems with web projects, since they all have links to the test servers, and the Android apk file can be downloaded on your phone, where you can install it. The main difficulty is once again with the iOS app. Ideally, the user needs to go to the portal from their device, click on the link, and install the app. And there is such a possibility. After reading a pile of documentation and searching the internet we solved the problem. For an iPhone user to be able to download an app by clicking the link, it is supposed to look like this:
itms-services://?action=download-manifest&url=http://server/projectname.plist
At this address you can access an xml file, whose structure is described in the documentation; it corresponds to the project you build in Xcode with the ‘Enterprise’ checkbox ticked. The main fields of this file are:
<!-- link to the ipa file -->
<key>url</key>
<string>http://server/distribs/projectname.ipa</string>
<!-- app identifier -->
<key>bundle-identifier</key>
<string>ru.handh.projectname</string>
<!-- name of the app -->
<key>title</key>
<string>ProjectName</string>
This file is generated automatically in our case. That is why, after a new build is made and the Install link is clicked, we see a prompt asking for permission to install the app. Upon agreeing we can watch the app being downloaded and installed.
Conclusion
Right now every project user can access the portal, download the latest version of the project and test it. Right now we can publish there all different projects for various platforms (Web, iOS, Android). Later on we want to add there an ability to publish the results of Unit testing and various notifications about failed adjustments. | https://medium.com/the-mind-of-heads-and-hands/continuous-integration-for-mobile-and-web-projects-7537b5078b48 | ['Heads'] | 2017-01-19 15:08:12.190000+00:00 | ['Application Development', 'Continuous Integration', 'iOS', 'Mobile App Development', 'Web Development'] |
Tableau Services Manager (TSM) API —The undocumented “Passwordless” Authentication | Tableau is a complex platform with tons of APIs for analysts, developers, and platform administrators. If you are a server admin, most probably you’ve already used REST API for basic stuff like managing users and contents. However, platform level activities such as starting and stoping the Server, creating backups, getting detailed status information about each service, changing topology, or retrieving license information rely on a different API: the Tableau Services Manager (TSM) API.
Tableau Services Manager’s API is still in alpha status with version 0.0
Usually, the TSM API is used from the tsm command-line utility, which is part of the Server installation. But you can use the TSM API to perform all of tsm’s functionality remotely, invoking simple HTTP endpoints. Some of the things you can do with this API are:
Start and stop Tableau Server
View the status of services and nodes
Back up and restore Tableau Server
Make configuration and topology changes
Change port assignments for services
Create log file archives
In this post, I will go through a basic use case: getting status information from a Tableau Server cluster, its nodes, and its services. We will check how to authenticate and get information from TSM, as well as see TSM’s hidden gem: the passwordless login support.
But before trying to reimplement tsm status from our own code, let’s see an alternative option from pre-TSM times for getting server status.
The systeminfo.xml endpoint
Historically, the only supported way to get status information from Tableau Server was the so-called systeminfo.xml endpoint. It supports both authentication and whitelist-based access, allowing third-party tools to retrieve the Server status information without implementing any complex API or authentication logic, with a simple HTTP GET call.
To allow external apps (or yourself) to access systeminfo.xml in an unauthenticated session, add your IP to the whitelist by:
tsm configuration set -k wgserver.systeminfo.allow_referrer_ips -v <ip address>
Then open /admin/systeminfo.xml:
http://my_tableau_server/admin/systeminfo.xml
It should return something like this:
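Abbreviated and purely illustrative (the real response lists every process on every node, and names and ports will differ), the XML is shaped roughly like this:

<?xml version="1.0" encoding="utf-8"?>
<systeminfo xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
  <machines>
    <machine name="MY-TABLEAU-NODE">
      <repository worker="MY-TABLEAU-NODE:8060" status="Active" preferred="true"/>
      <dataengine worker="MY-TABLEAU-NODE:27042" status="Active"/>
      <vizqlserver worker="MY-TABLEAU-NODE:9100" status="Active"/>
      <backgrounder worker="MY-TABLEAU-NODE:8250" status="Active"/>
    </machine>
  </machines>
  <service status="Active"/>
</systeminfo>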
This looks useful; however, it misses a few key points:
We don’t know the desired state: it’s great to know that the server is stopped, but did we want to stop it? Or maybe it’s running, but we have already issued a stop command. Systeminfo has no information about our intent (the service desired state).
The systeminfo.xml endpoint is only available when vizportal is running. For real monitoring, we want to know the status all the time, not just when the Server is running.
The TSM API
This is when the TSM API could be helpful, as it gets the status directly from tabadmincontroller, which is supposed to be always running and available even when the Server is in stopped state.
The public part of the API is documented here. For getting status info, we need to call two endpoints:
login to authenticate and status to get server, nodes, and services status
Authenticating with TSM credentials
Under the hood, TSM uses OS credential validation (like PAM on Linux). All OS users who are members of the tsmadmin group can access the API.
Unlike a regular API, the TSM API uses cookies for subsequent authenticated calls after login. While it sounds like an anti-pattern, I guess the reason for this cookie-based security is that the API was primarily created for Tableau’s own web app, where a cookie-based implementation is required anyway. Long story short, the /login endpoint sets a cookie called AUTH_COOKIE, and you should pass that cookie with all requests until you log out.
Login from the command line:
curl -k https://localhost:8850/api/0.5/login -X POST -H "Content-Type: application/json" --data '{
"authentication": {
"name": "tsmuser",
"password": "tsmpassword"
}
}' -v

* About to connect() to localhost port 8850 (#0)
* Trying 127.0.0.1...
* Connected to localhost (127.0.0.1) port 8850 (#0)
[...]
> POST /api/0.5/login HTTP/1.1
> User-Agent: curl/7.29.0
> Host: localhost:8850
> Accept: */*
> Content-Type: application/json
> Content-Length: 104
>
< HTTP/1.1 204 No Content
< Date: Wed, 23 Dec 2020 08:16:55 GMT
< X-Application-Context: application
< Cache-Control: no-store
< Set-Cookie: AUTH_COOKIE=eyJjdHkiOiJKV1QiLCJlbmMiOiJBMTI4Q0JDLUhTMjU2IiwiYWxnIjoiZGlyIn0..xpoNg5Hg3n9AvLFN40ND7Q.13LYw0uQT9bDF2V7R-fMu3rXWpNg7vQk7QQeLbsUUqM1pEkoLNouizD-kco2gkms0b_mn2rOIGek7htCxnf13bF5Y3ZLogNwMpQIHBIrt9Y4mapIZvLdIB_kmwaTpqLWvAjz5rpVo7l4uGUd42cvDXkOJbFElyuHwRgfEr_hXhj7lWRyB2iDJm1HBbqBovGaIsUCEvu8m6PE7UmglmFyQFW7ys3NuoKIq46Dcv4HnBD0S7bpkNZOEeMWx-0b9t.xazmQ1ReLQk-y0g7RoELL;Path=/;Expires=Wed, 23-Dec-2020 10:16:56 GMT;Max-Age=7200;Secure;HttpOnly
< Expires: Thu, 01 Jan 1970 00:00:00 GMT
< X-Content-Type-Options: nosniff
< X-XSS-Protection: 1; mode=block
< X-Frame-Options: DENY
< Content-Security-Policy: default-src 'self'; script-src 'self'; style-src 'self' 'unsafe-inline'; img-src 'self' data:; frame-ancestors 'none';
That ugly cookie is our ticket for the next calls.
Get system status info
Getting the status is as simple as calling GET /status.
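A plain curl call with the cookie captured from the login response does the trick (the cookie value below is a placeholder):

curl -k https://localhost:8850/api/0.5/status -H "Cookie: AUTH_COOKIE=<value from the Set-Cookie header above>"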
If all goes well (and why wouldn’t it), we should see something like:
{
"clusterStatus": {
"nodes": [{
"services": [{
"serviceName": "filestore",
"instances": [{
"code": "ACTIVE",
"processStatus": "Active",
"instanceId": "0",
"timestampUtc": 1497060680268,
"currentDeploymentState": "Enabled",
"binaryVersion": "<build>"
}],
"rollupStatus": "Running",
"rollupRequestedDeploymentState": "Enabled"
}, {
"..."
}],
"nodeId": "node1",
"rollupStatus": "Running",
"rollupRequestedDeploymentState": "Enabled"
}],
"href": "/api/0.5/status",
"rollupStatus": "Running",
"rollupRequestedDeploymentState": "Enabled"
}
}
This is exactly what we wanted: detailed status for each service (we will see more services here than in systeminfo.xml), the desired state, and, last but not least, the result is in a fancy JSON format, so we don’t have to parse XML (which sucks anyway).
Using TSM Passwordless authentication
So what if I do not want to use my precious OS user credentials in monitoring tools? How can I interact with the TSM API without a username and password, just like with systeminfo? Fear not, Tableau supports passwordless login beginning with 2019.2. The documentation mentions two requirements:
The account you are running commands with is a member of the TSM-authorized group, by default, the tsmadmin group. The Tableau unprivileged user (by default, the tableau user) and root account may also run TSM commands.
You are running commands locally on the Tableau Server that is running the Tableau Server Administration Controller service. By default, the Tableau Server Administration Controller service is installed and configured on the initial node in a distributed deployment.
But how does this passwordless auth work? It is not documented (yet), but the process is fairly easy:
First, the application connects to a local named pipe (on Windows) or a UNIX domain socket (on Linux)
TSM validates the connecting process user id and group membership
If the user is part of the tsmadmin group it returns a cookie for the session. That cookie can be used as an AUTH_COOKIE for other API calls.
Quite simple, huh? It is, indeed.
The communication between our process and the TSM controller is based on the Apache Thrift protocol. Thus, you can generate the passwordless authentication code in any programming language of your choice using the thrift command.
enum PasswordLessLoginReturnCode {
PLL_SUCCESS = 0,
PLL_NOT_AUTHORIZED = 1,
PLL_ERROR = 2
}
struct PasswordLessLoginResult {
1: PasswordLessLoginReturnCode returnCode,
2: string username,
3: string cookieName,
4: string cookieValue,
5: i32 cookieMaxAge
}
service PasswordLessLogin {
PasswordLessLoginResult login()
}
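Assuming the IDL above is saved as passwordless.thrift (the filename is an arbitrary choice), the bindings can be generated with the thrift compiler:

# the generated module typically ends up in gen-rs/passwordless.rs
thrift --gen rs passwordless.thrift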
After compiling the thrift file, you should be able to call the login() function, which will return the cookieName and cookieValue.
An example implementation in Rust might look something like this:
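The sketch below is rough and untested: it assumes the thrift-generated code was copied into the crate as a module named passwordless, that the controller speaks Thrift’s buffered binary protocol over the socket, and the socket path is a placeholder you would need to locate on your own node.

use std::os::unix::net::UnixStream;
use thrift::protocol::{TBinaryInputProtocol, TBinaryOutputProtocol};
use thrift::transport::{TBufferedReadTransport, TBufferedWriteTransport};

// Generated from the IDL above (e.g. `thrift --gen rs passwordless.thrift`),
// copied into src/ as passwordless.rs.
mod passwordless;
use crate::passwordless::{PasswordLessLoginSyncClient, TPasswordLessLoginSyncClient};

fn main() -> Result<(), Box<dyn std::error::Error>> {
    // Placeholder path -- find the actual tabadmincontroller login socket on your node.
    let socket = UnixStream::connect("/var/opt/tableau/tableau_server/tsm-login.sock")?;
    let reader = socket.try_clone()?;

    // Assumption: buffered transport + binary protocol; adjust if the controller differs.
    let i_prot = TBinaryInputProtocol::new(TBufferedReadTransport::new(reader), true);
    let o_prot = TBinaryOutputProtocol::new(TBufferedWriteTransport::new(socket), true);

    let mut client = PasswordLessLoginSyncClient::new(i_prot, o_prot);
    let result = client.login();
    println!("{:?}", result);
    Ok(())
}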
This will print the API results as:
Ok(PasswordLessLoginResult {
return_code: Some(PllSuccess),
username: Some("tsmsvc"),
cookie_name: Some("AUTH_COOKIE"),
cookie_value: Some("fSd[...]mC8DQ"),
cookie_max_age: Some(7200)
})
In the tabadmincontroller_node1-0.log log file, we can also see that TSM recognizes our user — even without sending it over the communication channel.
2020-12-23 07:45:48.453 +0100 Thread-17 : INFO com.tableausoftware.tabadmin.webapp.impl.linux.LinuxPasswordLessLoginManager - Password-less login request from user 'tsmuser' with uid '1000'
2020-12-23 07:45:48.462 +0100 Thread-17 : INFO com.tableausoftware.tabadmin.webapp.impl.linux.LinuxPasswordLessLoginManager - User with uid '1000' and username 'tsmuser' is logged in via password-less auth
This is it. We can now get the server status from our code without storing or passing credentials over the line.
TSM functionalities for non-admins
In some circumstances, it would be great to limit who can use which functionalities in TSM. At some of our clients the Level 2 support team is limited to certain actions like starting and stopping the Server and creating backups, but they cannot change the configuration or reset the admin password, for instance. Out of the box, the authorization scheme in TSM is all or nothing: if you are logged in, you are a superuser.
The way we help our customers fine-tune TSM user levels and assign the right permissions to the right folks is to use setuid binaries along with passwordless login. Let’s say you have two users: tsmuser and support. tsmuser is part of the tsmadmin group but support isn’t. In that case, the binary that interacts with the TSM API should be owned by tsmuser and have the setuid file system flag:
$ chmod 4755 your_tsm_app
$ ls -l your_tsm_app
-rwsr-xr-x. 2 tsmadmin tsmadmin 8367064 Dec 26 08:12 your_tsm_app
And the app should also adopt the effective uid (granted by the OS due to the setuid flag) as its real uid:
#include <unistd.h>

setuid(geteuid());
When our binary is executed by the support user, the application will switch its current user to tsmuser. Later, when the passwordless code is invoked, Tableau Services Manager will grant the session tsmuser’s AUTH_COOKIE.
Passwordless over Network
If you are brave enough, you can even map or proxy the local domain sockets to a network port or a remote host using socat(1). It will allow passwordless login from remote endpoints as well.
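A sketch of what that proxying could look like; the socket path is a placeholder (locate the real tabadmincontroller login socket on your node), and anything like this should of course be tightly firewalled:

socat TCP-LISTEN:8851,fork,reuseaddr UNIX-CONNECT:/path/to/tabadmincontroller/login.sock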
Putting everything together: tableau-monitoring-execd
I made a quick plugin for telegraf for monitoring Tableau Server using systeminfo.xml and the TSM API. It uses all the above-mentioned techniques to get system status information as easily and reliably as possible. The project page is here:
This small app is an execution daemon for telegraf: a continuously running process that collects information from external services (like Tableau Server through the TSM API) and returns the results in InfluxDB’s line protocol. We use this as a critical part of our monitoring efforts, complemented with proactive monitoring and alerting. | https://medium.com/starschema-blog/tableau-services-manager-tsm-api-the-undocumented-passwordless-authentication-9b76ed00119d | ['Tamas Foldi'] | 2020-12-27 10:53:16.220000+00:00 | ['Dataviz', 'Tableau Server', 'Telegraf', 'Monitoring', 'Tsm Api'] |
Why fast-track leaders lose their mojo, and how to gain it back | Congratulations! Your company made Inc. magazine’s 500/5000 list of fastest-growing companies.
The invitation to go to the grand induction ceremony is on your desk. Your baby, Rocket Inc., is now on the map. Local papers have printed the announcement on the front page. The phone has been ringing off the hook from potential investors.
Wooohoooo!!! You, the founder/executive/ CEO/leader, made it into this fast-track exclusive club. What could be better?
So…….Why do you feel like a phony as you cut the celebratory cake? Where is your elation of the accomplishment? Why are you engulfed with fear that the whole facade will come crashing down and hit the dirt?
You know and the statistics show that fast growth generates a myriad of new problems. The fast road to success is not littered with rose petals alone — there are many, many thorns embedded in between.
Founders/executives/leaders of most fast-growing companies are inadequately prepared for the many financial, operational and personal risks that pop up and become killjoys. I can safely bet that you relate to at least a few, if not many, of the following:
Risks that rob your mojo
Cash flow crunch. The ebb and flow of cash in and cash out gets extremely complicated. Fast-growing startups burn money for years before generating positive cash flow. Your monthly expenses may exceed the inflow. A common scenario is exceeding your credit line and suddenly one bad sales month spells disaster.
Turnover. You funded the last few cash flow shortages by postponing payment of your salary and that of some of the other senior team members.
Some of the employees, including your right-hand people, are polishing their resumes. Contractors and suppliers are threatening to stop providing products/services and may actually follow through.
Need to spend to become a big company. Your team is pushing for more robust processes and equipment to deal with the present load and the growth trajectory. All this requires cash, but cash is scarce.
Finding good investors. Those who can partner with you and be supportive to help you grow and prosper. They exist, but where are they? How do you screen them?
Sometimes, smelling blood in the water, what you are getting are offerings laced with crazy conditions. These predators-masquerading-as-investors want their pound of flesh even if the venture fails. You want to say “no.” Are you in a position to say no?
Investor relations is cutting into `productive’ effort. You had collected enough investment dollars to tide you over the erratic cash flows. You are now indebted to many people who demand your time whenever a thought occurs to them.
Every week, new cash flow projections have to be prepared in “their” format, not the one you winged last week. When do you do real business?
Trying to race a VW Beetle at the speed of a Ferrari. VCs have invested big money in you to grow fast and become a “unicorn.” You have to keep going faster and faster. The more you speed up, the more is required from you. Where is that money coming from?
Operations and customer service problems. When you started everyone and anyone could handle all aspects of the business. The processes were simple and one-person-smooth. Now you have silos, and the left hand does not know what the right does.
Meanwhile, the internal team is praying for a slowdown so that they can catch their breath. You can already hear the stress level and petty fights in the voices of customers and employees.
Over-reliance on early customers. The growth that put you on the map was from the few early customers. They know it and are continuing to tighten the screws. In order to be successful, you are now trying to diversify your product line, expand geography, and increase margins.
The early customers are not agreeing to change to the new programs. Do you let them go or continue to serve them as a loss leader?
Star salespeople bomb out. To keep up with ever-expanding sales requirements you hire many new salespeople. They were stars in their previous positions. In your venture, a good percent cannot deliver for many months.
They incur costs of on-boarding, training, travel and any “guaranteed” bonuses. The costs pile up. However, sales are a promise in the future.
Responsibility for the entire value chain. You meet the family of your team members in the company picnic. You smile dutifully but cringe inside at the thought of how many people are dependent on you.
You worry about everyone in the complex value chain of suppliers, customers, business partners, and stakeholders. You must keep them engaged and successful. The burden of the entire chain is on you.
I could go on, but you get the idea. Passion, what passion? This was your dream, your energizer. What happened? Your enthusiasm is waning. You are killing yourself physically, mentally and emotionally.
You question whether all this insane effort was worth the physical and emotional pain. How are the problems of rapid growth better than those when you had little growth?
Growth risk is an important issue. It is equally as relevant as economic downturns or technology obsolescence. Those issues are also on your mind. While others celebrate the growth of your company, many organizational, professional and personal issues have formed a twisted rope that is slowly strangling you.
You were dreaming of a smooth upward curve, like the one you saw in the PR documents of winning companies. Instead, you are riding a scary roller coaster of evolution and revolution. These are strewn with risks that ebb and flow and are constantly changing. You are bogged down and losing your mojo. You cry out, “I need help!”
You are definitely not alone. I have been there. I have felt that. My last company was in the Inc. 500/5000 for five years in a row. At one time or another, I have felt the pain and agony of many of these issues and more.
Now what?
Don’t despair, you are not alone. There are many groups in your area that can help you and share some of this burden with you. In hindsight, I wish I had made use of that resource when I was struggling with these very issues while building a fast-track company.
Who can help?
Your team: It is their role to do a good job and tackle problems. You have hired very carefully.
Board of directors: It is their fiduciary responsibility to look after the interests of the stakeholders, particularly the shareholders.
Board of advisers: It is their duty to be the domain experts and augment the skills of the team.
Investors: They invested in you because of the past rapid growth. They can provide casual, often contradictory advice.
Professional service providers (accountants, lawyers, etc.): For a fee, they provide services in their area of expertise.
Consultants: They are hired for a specific service that may be needed. They are the domain experts.
Mentors: Your mentors can provide objective feedback and support.
Trade groups: They provide industry-specific non-competitive information and lobbying services on behalf of all members.
These are all the people who can help you shoulder some of the tasks to do it right, outsource your load, and reduce the risks associated with growth.
You are already familiar with these resources and are taking advantage of their help and counsel to lighten your load. And yet, your problem persists.
The intense and convoluted nature of issues faced uniquely by the CEOs of a fast-growing enterprise is not fully tackled by all of these supporters.
The cry for help remains muffled inside. You cannot bring many of your issues to anyone in these groups. You are the boss, after all. You have to appear not just strong, but also invincible. Many of your issues either remain unresolved or are sub-optimized.
As you set out to change the world and your biggest home run is yet to be hit, you may have an unsettled feeling that there are even more problems coming ahead. And, you are stuck as it is. What about what is around the corner?
It is very lonely at the top. Help!
A higher resource
The biggest challenge is your lack of connections with people who are in a similar situation and who, through grit and trial, may have made inroads on some of the very issues that are causing you to lose sleep. In turn, you may have ideas for them.
It behooves you to consider an additional, often neglected, source of help.
A peer group of CEOs can make the most impact on you and your business for tackling the issues that you face. Each group is purpose-built to help members help each other mitigate risks and improve performance and outcomes of their businesses.
The peer group can help you move from fear-based and response management to purpose-driven leadership. The members are selected so that they are not competing with you. They have no ax to grind. They can be truthful, however harsh it may seem at first. They are there for you and the group’s welfare.
The rapid pace of growth creates a certain emptiness in the leader. A peer group of CEOs has the capability of filling that emptiness with renewed potential. Such a group can help you get your mojo back like no other group can.
I am presenting this bald advocacy because I wish I had made use of this resource when I was struggling with these very issues while building a fast-growing company. You can take advantage of my hindsight.
A peer group defined
A peer group of CEOs rests on a solid premise. It comprises ambitious and hard-working leaders who are committed to helping every member of the group achieve continued success through shared experiences and mutual support. If you decide to add such a peer group to your mix of support, here are a few things to look out for:
Leaders only: The peer group is made up of leaders of companies. All of them are founders, presidents, CEOs and business owners.
As equals, they can openly discuss and support each other with strategies that enhance personal life and personal freedom while continuing sustainable growth and profits.
Empathy: Group members are more likely to understand and empathize with your life and work situation because they are all in a very similar situation.
Non-competitive: The members are selected so they belong to non-competitive businesses. Without that assurance, you may be reluctant to divulge critical details or ask for help. Non-competitive selection allows the discussion to be free and open and gives the chance for members to be honest and vulnerable.
Community: You are part of a community — a network of similar executives who want to grow their business and manage risks, just like you. The group members help each other gain perspectives, break down walls, and look beyond in creative ways.
Often you are not aware of your own blind spots. The other members shine a light on them. In group interaction, the members should be able to challenge and be comfortable being challenged to take their business and personal life to the next level.
Diversity with commonality: The leaders share some degree of commonality in size and/or stage of evolution (homogeneous) and are different in other sets of skills or domains (heterogeneous). This homogeneous/heterogeneous combination makes it ideal to connect with the group. Such a group can provide an even and immediate exchange of benefits while being totally non-competitive and agenda-free.
Goal: In spite of the many benefits of a social organization, the peer group of CEOs should not primarily be a social organization. Its agenda is business and personal development. The collective goal is to turn each other’s “What to do???” into “What to do.” Moreover, the group holds you accountable for actually doing it.
Psychic benefit: In addition to getting help for you, an additional bonus is an immeasurable fulfillment of helping other fellow leaders with your own experience and insights.
Role of a facilitator
With or without a facilitator, a group of leaders of fast-growing companies in one room presents enormous potential. However, with a competent facilitator leading them, it can add several key benefits to the dynamics.
Dedication: Facilitators are often accomplished individuals in their own right and are dedicated to increasing the effectiveness and enhancing the lives of fellow leaders. They have the capability to counsel because they have walked the walk. Their raison d’être is for the members to achieve personal and organizational results.
Curb the competitive streak: High-powered leaders are used to their lofty stature and may find it difficult to control their competitive streak. They are all soaring eagles, after all. Empathetic listening is not necessarily their strong suit. Without a facilitator, group dynamics can be dodgy, as in any unsupervised activity. It is the facilitator’s role to streamline the process, guide the personalities, and enable the best outcome to emerge in the group.
Wear many hats: The facilitator is an accomplished leader who has traversed this journey before. (S)he can wear different hats with the group. The facilitator is a coach, adviser and mentor rolled in one and someone totally committed to your success.
Thinking outside the box: A facilitator can dig down into the issue with the members so that they start to look beyond their own comfort zone and explore solutions that they may have never thought of before. This is where the magic happens. It could be more illuminating and transformative than anything else they’ve done before.
Extract all the juice: A facilitator who moderates the group, and is not part of the group, can stimulate thinking, frame issues, guide discussions, keep the process on track, limit tangents, and ensure accountability in the group. Their role is to ensure that every bit of the experience and wisdom is developed, shared and applied.
Expand the moment: Discussions can get passionate and even heated before the Aha! moment for you. The facilitator can expand that precious moment to make it a broader learning for the entire group.
Complementary content: Based upon the need of the group, the facilitator can call in additional resources for learning and proven ideas.
Selecting the right members: The most important role of a facilitator is to select the right members for the group. Groups work best when the members have something to learn from the other members and they, in turn, can reciprocate with similar contributions.
Maintaining confidentiality is paramount. A dysfunctional member can dampen it for all members. The facilitator may ask a member to leave if that member is deemed not to be a good fit for the group.
A facilitated peer group of leaders of fast-growing companies can provide the members with a distinctive place for tackling tough issues, gaining esoteric wisdom, and achieving personal transformation.
Help is available. Such a group may be just the answer that you and your fast-growing company need. You can get back your mojo.
A version of this article first appeared in Upsize Magazine.
To see other opinion columns go to “Planting Seeds”.
Rajiv Tandon is executive director of the Institute for Innovators and Entrepreneurs and an advocate for the future of entrepreneurship in Minnesota. He facilitates peer groups of fast growth Minnesota CEOs. He can be reached at [email protected]. | https://medium.com/on-startups-and-such/why-fast-track-leaders-lose-their-mojo-and-how-to-gain-it-back-8689d11a058d | ['Dr. Rajiv Tandon'] | 2019-03-06 23:24:05.521000+00:00 | ['Growth Challenges', 'Fast Growth', 'Entrepreneurship', 'Peer Group'] |
Data Mining: A Plague, Not a Cure | ‘Data mining’ or pulling out perceived patterns from large data sets can be a tempting endeavor, however, patterns are inevitable and generally meaningless. Gary Smith, author of ‘The 9 Pitfalls of Data Science’ and ‘The Phantom Pattern Problem’, draws examples from experiments into scurvy in sailors centuries ago, as well as cycles in economies and stock markets.
The scientific revolution that has improved our lives in so many wonderful ways is based on the fundamental principle that theories about the world we live in should be tested rigorously. For example, centuries ago, more than 2 million sailors died from scurvy, a ghastly disease that is now known to be caused by a prolonged vitamin C deficiency. The conventional wisdom at the time was that scurvy was a digestive disorder caused by sailors working too hard, eating salt-cured meat, and drinking foul water. Among the recommended cures were the consumption of fresh fruit and vegetables, white wine, sulfate, vinegar, sea water, beer, and various spices.
In 1747, a doctor named James Lind (1716–1794, left) did an experiment while he was on board a British ship. He selected 12 scurvy patients who were “as similar as I could have them.” They were all fed a common diet of water-gruel, mutton-broth, boiled biscuit, and other unappetizing sailor fare. In addition, Lind divided the 12 patients into groups of two so that he could compare the effectiveness of six recommended cures. Two patients drank a quart of hard cider every day; two took 25 drops of sulphate; two were given two spoonsful of vinegar three times a day, two drank seawater; two were given two oranges and a lemon every other day; and two were given a concoction that included garlic, myrrh, mustard, and radish root.
Lind concluded that,
“… the most sudden and visible good effects were perceived from the use of oranges and lemons; one of those who had taken them, being at the end of six days fit for duty … The other was the best recovered of any in his condition; and … was appointed to attend the rest of the sick.” (DR JAMES LIND’S “TREATISE ON SCURVY” PUBLISHED IN EDINBURGH IN 1753)
Unfortunately, his experiment was not widely reported and Lind did little to promote it. He later wrote that, “The province has been mine to deliver precepts; the power is in others to execute.”
The medical establishment continued to believe that scurvy was caused by disruptions of the digestive system caused by the sailors’ hard work and bad diet, and that it could be cured by “fizzy drinks” containing sulphuric acid, alcohol, and spices.
Finally, in 1795, the year after Lind died, Gilbert Blane, a British naval doctor, persuaded the British navy to add lemon juice to the sailors’ daily ration of rum — and scurvy virtually disappeared on British ships.
Lind’s experiment was a serious attempt to test six recommended medications, though it could have been improved in several ways:
– There should have been a control group that received no treatment. Lind’s study found that the patients given citrus fared better than those given seawater. But maybe that was due to the ill effects of seawater, rather than the beneficial effects of citrus.
– The distribution of the purported medications should have been determined by a random draw. Maybe Lind believed citrus was the most promising cure and subconsciously gave citrus to the healthiest patients.
– It would have been better if the experiment had been double-blind in that neither Lind nor the patients knew what they were getting. Otherwise, Lind may have seen what he wanted to see and the patients may have told Lind what they thought he wanted to hear.
– The experiment should have been larger. Two patients taking a medication is anecdotal, not compelling, evidence.
Nonetheless, Lind’s experiment remains a nice example of the power of statistical tests that begin with a falsifiable theory, followed by the collection of data — ideally through a controlled experiment to test the theory.
It is tempting to believe that patterns are unusual and their discovery meaningful; in large data sets, patterns are inevitable and generally meaningless.
Today, powerful computers and vast amounts of data make it tempting to reverse the process by putting data before theory. Using our scurvy example, we might amass a vast amount of data about the sailors and notice that sufferers often had earlobe creases, short last names, and a fondness for pretzels, and then invent fanciful theories to explain these correlations.
This reversal of statistical testing goes by many names, including data mining and HARKing (Hypothesizing After the Results are Known). The harsh sound of the word itself reflects the dangers of HARKing: It is tempting to believe that patterns are unusual and their discovery meaningful; in large data sets, patterns are inevitable and generally meaningless.
Decades ago, data mining was considered a sin comparable to plagiarism. Today, the data mining plague is seemingly everywhere, cropping up in medicine, economics, management, and, now, history. Scientific historical analyses are inevitably based on data: documents, fossils, drawings, oral traditions, artifacts, and more. But now, historians are being urged to embrace the data deluge as teams systematically assemble large digital collections of historical data that can be data mined.
Tim Kohler, an eminent professor of Archaeology and Evolutionary Anthropology has touted the “glory days” created by opportunities for mining these stockpiles of historical data:
By so doing we find unanticipated features in these big-scale patterns with the capacity to surprise, delight, or terrify. What we are now learning suggests that the glory days of archaeology lie not with the Schliemanns of the nineteenth century and the gold of Troy, but right now and in the near future, as we begin to mine the riches in our rapidly accumulating data, turning them into knowledge.
The promise is that an embrace of formal statistical tests can make history more scientific. The peril is the ill-founded idea that useful models can be revealed by discovering unanticipated patterns in large databases where meaningless patterns are endemic. Statisticians bearing algorithms are a poor substitute for expertise.
For example, one algorithm that was used to generate missing values in a historical database concluded that Cuzco, the capital of the Inca Empire, once had only 62 inhabitants, while its largest settlement had 17,856 inhabitants. Humans would know better.
Peter Turchin has reported that his study of historical data revealed two interacting cycles that correlate with social unrest in Europe and Asia going back to 1000 BC. We are accustomed to seeing recurring cycles in our everyday lives: night and day, planting and harvesting, birth and death. The idea that societies have long, regular cycles, too, has a seductive appeal to which many have succumbed. It is instructive to look at two examples that can be judged with fresh data.
Data mining is a plague, not a panacea.
Based on movements of various prices, the Soviet economist Nikolai Kondratieff concluded that economies go through 50–60 year cycles (now called Kondratieff waves). The statistical power of most cycle theories is bolstered by the flexibility of starting and ending dates and the co-existence of overlapping cycles. In this case, that includes Kitchin cycles of 40–59 months, Juglar cycles of 7–11 years, and Kuznets swings of 15–25 years. Kondratieff, himself, believed that Kondratieff waves co-existed with both intermediate (7–11 years) and shorter (about 3 1/2 years) cycles. This flexibility is useful for data mining historical data, but undermines the credibility of the conclusions, as do specific predictions that turn out to be incorrect.
In the 1980s and 1990s, some Kondratieff enthusiasts predicted a Third World War in the early 21st century:
“the probability of warfare among core states in the 2020s will be as high as 50/50.”
More recently, there have been several divergent, yet incorrect, Kondratieff-wave economic forecasts:
“in all probability we will be moving from a ‘recession’ to a ‘depression’ phase in the cycle about the year 2013 and it should last until approximately 2017–2020.”
The Elliott Wave theory is another example of how coincidental patterns can be discovered in historical data. In the 1930s, an accountant named Ralph Nelson Elliott studied Fibonacci series and concluded that movements in stock prices are the complex result of nine overlapping waves, ranging from Grand waves that last centuries to Subminuette waves that last minutes.
Elliott proudly proclaimed that, “because man is subject to rhythmical procedure, calculations having to do with his activities can be projected far into the future with a justification and certainty heretofore unattainable.” He was fooled by the phantom patterns that can be found in virtually any set of data, even random coin flips. And, just like coin flips, guesses based on phantom patterns are sometimes right and sometimes wrong.
In March 1986, Elliott-wave enthusiast Robert Prechter was called the “hottest guru on Wall Street” after a bullish forecast he made in September 1985 came true. Buoyed by this success, he confidently predicted that the Dow Jones Industrial Average would rise to 3600–3700 by 1988. The highest level of the Dow in 1988 turned out to be 2184. In October 1987, Prechter said that, “The worst case [is] a drop to 2295,” just days before the Dow collapsed to 1739. In 1993 the Dow hit 3600, just as Prechter predicted, but six years after he said it would.
Finding patterns in data is easy. Finding meaningful patterns that have a logical basis and can be used to make accurate predictions is elusive.
Data are essential for the scientific testing of well-founded hypotheses, and should be welcomed by researchers in every field where reliable, relevant data can be collected. However, the ready availability of plentiful data should not be interpreted as an invitation to ransack the data for patterns or to dispense with human knowledge. The data deluge makes common sense, wisdom, and expertise essential.
Data mining is a plague, not a panacea. | https://medium.com/science-uncovered/data-mining-a-plague-not-a-cure-b30ec520d00e | ['Oxford Academic'] | 2020-06-12 11:01:02.101000+00:00 | ['Algorithms', 'Oxford University Press', 'Mathematics', 'Science', 'Data Science'] |
Why You Should Read One Hundred Years of Solitude | As I was nearing the last few pages of the book, I realized I never wanted One Hundred Years of Solitude to end. I had no idea that a book could pull my imagination to travel so far into an outrageous and magical realm of the living and dead of Macondo.
If I could sum up One Hundred Years of Solitude in just one phrase, it would be, “What the f*ck.” I use that phrase in the most enchanting and astonishing way possible.
As for the author, Gabriel Garcia Marquez, I believe he seriously cannot be sane, because this book embodies the entirety of what it means to be unique.
This book is so beautifully depicted that you become completely tangled in the absurdities of its magic realist narrative, following a hundred years of many fortunes and misfortunes of seven generations of the Buendia family.
It is a hard read, for there are many complex characters with mostly the same names. Luckily for my edition, there is a family tree to follow. However, I guarantee you it will be an enriching reading experience.
Here are 3 reasons why you should read One Hundred Years of Solitude. | https://medium.com/books-are-our-superpower/why-you-should-read-one-hundred-years-of-solitude-6114afcc0c36 | ['Jess Tam'] | 2020-11-20 07:31:24.230000+00:00 | ['Magic Realism', 'Books And Authors', 'Book Review', 'History', 'Books'] |
What would you do with $1M to make your city’s tech ecosystem more inclusive and representative? | This week, the Kapor Center launched the Tech Done Right Challenge, a $1M grant competition to catalyze cross-sector collaborations to help build more intentional, intersectional, and inclusive tech ecosystems in cities across the U.S. The challenge provides one-time “seed” grants to learn about the models that are working and not working across U.S. cities.
We know the barriers, it’s time for solutions
In 2018, the Kapor Center designed and released a research framework to understand the causes and systemic barriers that underlie the lack of diversity in tech. The Leaky Tech Pipeline (LTP) Report established baseline metrics and a common language so that we can — collectively — work on eradicating the complex problem of the lack of diversity in tech.
A key part of the challenge is to find solutions that embrace lean startup approaches through prototypes, experimentation, and iteration through public and private collaborations. We need to build and test partnerships and services that are inclusive by design, not as an afterthought.
At the Kapor Center, we work to increase representation of people of color in technology and entrepreneurship through increasing access to tech and STEM education programs, conducting research on access and opportunity in computing, investing in community organizations and gap-closing social ventures, and increasing access to capital among diverse entrepreneurs.
We also know first hand how challenging — yet crucial — it is to build place-based initiatives aimed at creating a local inclusive tech ecosystem. For example, we have helped East Bay talent secure national tech apprenticeships through TechHire Oakland, and, through our Innovation Lab & the Oakland Startup Network, we have given entrepreneurs of color a three-month residency with mentors and coaches to help incubate their startups at a critical stage where no other resources are available. We also know that this work is complex and requires not only foundations like ours, but also local governments, academia, tech companies, community-based training organizations, and entrepreneurs to tackle the systemic issues outlined in the Leaky Tech Pipeline.
These problems are certainly not unique to Oakland. We know other cities face similar challenges yet many may not have access to the education, research, and capital resources we are privileged to have. For that reason, we are launching the #TechDoneRight Challenge as a prototype of a new approach to grant funding in the tech philanthropy sector.
What if the tech philanthropic funding community could see themselves more as investors? What if this shift in perspective of “investing” instead of “gifting” is what our communities deserve to feel empowered and valued?
So, if you are a funder reading this, we need you to invest in new ways and in tech-powered organizations whose theory of change may not follow the typical “five-year arc” but that are instead iterative, forward-learning organizations that validate what the community really needs and then decide what is scalable or not;
If you are an organization on the ground looking for a “seed investment” to tackle the ecosystem gaps you know you really need to address but have not had the resources to do so — until NOW — this challenge is for you; and/or
If you are an organization with a lean approach and have the lived journey of the problem you are tackling — this challenge is also for you.
Join the #TechDoneRight Challenge and let’s move towards a new way of problem-solving with, for, and by the community we are looking to serve. Join the conversation using #TDRChallenge!
More info click here — #TDRChallenge! | https://medium.com/kapor-the-bridge/what-would-you-do-with-a-1m-to-make-your-citys-tech-ecosystem-more-inclusive-and-representative-7d4def325f47 | ['Lili Gangas'] | 2019-03-20 18:33:45.238000+00:00 | ['Tech', 'Nonprofit', 'Entrepreneurship', 'Empowerment', 'Lean Startup'] |
by Lorisa Griffith | SONG NOSTALGIA
“On Your Side” / Pete Yorn
Every September when the air catches a chill, I revisit this song and the feelings it brings up. I have been over the crush I associated with the song for a long time. He was seriously bad news, but it reminds me of how naive I was back then. I thought my love and attention could change someone who didn’t want to change. Breaking apart, coming back together, and starting over couldn’t save us. That’s because God knows best. Some things just are not meant to be and for good reason.
I spent a lot of time downtown in those days. I remember the way the town looked when the sun would go down and the sky would go all sorts of orange, purple, and pink at twilight. The streetlights came on as shops closed and bars opened. Restaurants got ready for the dinner crowd. I would catch up to friends and people watch for a while as I waited for the usual crowd to come out. There was something simpler about life then. Work, friends, and dating (or not) were things that filled my days even though they felt more empty to me then. | https://medium.com/the-composite/song-nostalgia-pete-yorn-on-your-side-4248c2c01c68 | ['Lorisa Griffith'] | 2017-09-04 19:29:23.454000+00:00 | ['Music', 'Relationships', 'Love', 'Art', 'Life'] |
What Would You Ask for If You Knew The Answer Was Yes? | What Would You Ask for If You Knew The Answer Was Yes?
9.) Something to smile about on my half-birthday (Nov. 7)…
Photo: PublicDomainPictures.net
I came across the question in the title a few years ago on Instagram, and it stopped me dead in my scrolling.
It was a welcome respite from all those narcissistic selfies and show-off holiday snaps that made me give up on Instagram after only a few months. The question got me thinking, and with the next gift-shopping holiday season fast approaching, it’s gotten me thinking again.
I rarely receive presents, and frankly, I wouldn’t have it any other way. It’s not that I’m a man who already has everything. In fact, I have closer to nothing. I’ve spent the past nearly 17 months wandering around Asia and Europe with most of my belongings in one suitcase and two carry-ons.
I keep my possessions in check because minimalism is my thing. I don’t collect anything that can’t be stored digitally. Physical belongings only lead to clutter, to which I’m violently allergic. I have pretty much all I want/need. I’d never turn down a nice hoodie, but I’m more into the intangible.
Although I’m borderline-impossible to shop for, I’m not without a wish list. What would I ask for if I knew the answer was yes? Well, that one’s easy.
1. The One (Love, baby, love)
Have we met? Did I already love and lose him? I spent decades waiting for the man (to borrow a line from a song by The Velvet Underground about a more dangerously addictive love) before I gave up and went home. I’ll probably never return to that corner, but my door is still open.
2. Home
The week I scrolled upon the aforementioned Instagram post, I gave a presentation at work about my professional story, which I separated into three distinct chapters: 1) The magazine era in New York City. 2) The book era in Buenos Aires, Melbourne, Bangkok, and Cape Town. 3) The digital era in Sydney.
“So where do you consider to be home then?” someone asked when I was finished.
I still have no idea, but I’m more than ready to find out. As the golden Burt Bacharach/Hal David oldie so eloquently put it, a house is not a home. Nor is a home necessarily home. And as much as I loved my home in Sydney, the city was never home.
How will I know I’m finally home? When I feel secure and happy enough somewhere to take a major leap of faith and sign up for a long-term Internet plan.
3. An idea for a groundbreaking app
I’ve been fantasizing about this ever since Marnie’s ex-boyfriend Charlie got rich by getting into the app business during the second season of Girls. He was my favorite of all the show’s male love interests, but I’d be sure to request an ending that’s happier than Charlie’s. By season five, he’d lost his health, his hygiene, and his hope to a drug habit. I’d take everything he gave up (aside from Marnie) over the app fortune.
4. A gift for writing fiction
I’m the kind of guy who’s in awe of people who excel at things I cannot do well, or at all, like sing, cook, or tell a compelling story that didn’t actually happen. I’ll probably always feel a little bit like a failure for not mastering the latter.
5. A portable bathroom that fits into my backpack
I know. Impossible. But if it weren’t, I’d want mine to be self-cleaning. As a frequent pee-er, never again being stuck out and about when I’m dying to go would be like living the dream — one in which I wouldn’t have to squeeze myself into those filthy water closets while traveling around Europe by bus and train.
A travel-size portable loo certainly would have come in handy in 2014 when I went camping in the Serengeti, with only nylon walls separating me inside my sleeping bag from the lions and elephants roaming by. I would, of course, have needed a much bigger tent.
Number-twos probably would remain strictly an at-home thing, but if I ever decide to join the mile-high club, I wouldn’t have to worry about impatient fellow passengers waiting outside.
6. A private personal gym in every city I visit
Sometimes the presence of multiple hardbodies can provide motivation, but I’d give it up if it meant I’d never again have to wait on someone who’s hogging the bench-press area.
7. A digital copy of House of Bondage (photos by Ernest Cole, words by Thomas Flaherty)
House of Bondage is a rare book that was published and banned in South Africa during the apartheid era, making it virtually impossible to buy there even today. I first learned about it via an exhibit at The Apartheid Museum in Johannesburg, the one that made me break down in tears in 2013 and inspired my in-the-works second book, Storms in Africa: Notes from the Motherland.
I haven’t been able to find a new copy in any bookstore, or online, for less than $100, much less in eBook form. When I finally decide to hand over $100-plus for a hard copy on Amazon, with an extra $50 tacked on to my bill for international delivery, I’ll probably read it and weep… again.
“House of Bondage” Photo: Ernest Cole via Pixelle.co
8. Peace of mind
Panic disorder is the bane of my 24/7. It makes “living every day like it’s your last” too close to literal for comfort.
9. The downfall of Donald Trump
I want it to be hard and humiliating, for him and for his fans. At this point, I’m not sure whom I resent more — the idiot in charge or the ones who gave him the keys to the White House. The rest of us have been talking about a revolution for two years. Can we please end this national nightmare and make revolution a reality on November 6?
10. World peace
Of course. Because we don’t need another Boston or Brussels or Istanbul or Paris or 9/11. (The latter is my least favorite New York City experience over the course of my 15 years living there.) Because I don’t want to read about yet another black person, another member of the LGBTQ community, another refugee, another Muslim, or another woman being harassed, bashed, or murdered. Because both guns and people kill.
So, what would you ask for if you knew the answer was yes? | https://medium.com/recycled/what-would-you-ask-for-if-you-knew-the-answer-was-yes-23c64bf38f6e | ['Jeremy Helligar'] | 2018-10-24 12:01:02.038000+00:00 | ['Love', 'Nonfiction', 'Life', 'Holidays', 'Gifts'] |
Clean Architecture, the right way | What is so Clean about Clean Architecture?
In short, you get the following benefits from using Clean Architecture:
Database Agnostic: Your core business logic does not care if you are using Postgres, MongoDB, or Neo4J for that matter.
Client Interface Agnostic: The core business logic does not care if you are using a CLI, a REST API, or even gRPC.
Framework Agnostic: Using vanilla nodeJS, express, fastify? Your core business logic does not care about that either.
Now, if you want to read more about how clean architecture works, you can read the fantastic blog post, The Clean Architecture, by Uncle Bob. For now, let’s jump to the implementation. To follow along, view the repository here.
Clean-Architecture-Sample
├── api
│ ├── handler
│ │ ├── admin.go
│ │ └── user.go
│ ├── main.go
│ ├── middleware
│ │ ├── auth.go
│ │ └── cors.go
│ └── views
│ └── errors.go
├── bin
│ └── main
├── config.json
├── docker-compose.yml
├── go.mod
├── go.sum
├── Makefile
├── pkg
│ ├── admin
│ │ ├── entity.go
│ │ ├── postgres.go
│ │ ├── repository.go
│ │ └── service.go
│ ├── errors.go
│ └── user
│ ├── entity.go
│ ├── postgres.go
│ ├── repository.go
│ └── service.go
├── README.md
Entities
Entities are the core business objects that can be realized by functions. In MVC terms, they are the model layer of the clean architecture. All entities and services are enclosed in a directory called pkg . This is actually what we want to abstract away from the rest of the application.
If you take a look at entity.go for user, it looks like this:
pkg/user/entity.go
Entities are used in the Repository interface, which can be implemented for any database. In this case, we have implemented it for Postgres, in postgres.go. Since repositories can be realized for any database, they are independent of their implementation details.
pkg/user/repository.go
Services
Services include interfaces for higher-level, business-logic-oriented functions. For example, FindByID might be a repository function, but login or signup are service functions. Services are a layer of abstraction over repositories in that they do not interact with the database directly; rather, they interact with the repository interface.
pkg/user/service.go
Services are implemented at the user interface level.
Interface Adapters
Each user interface has its own separate directory. In our case, since we have an API as an interface, we have a directory called api.
Now since each user-interface listens to requests differently, interface adapters have their own main.go files, which are tasked with the following:
Creating Repositories
Wrapping Repositories inside Services
Wrapping Services inside Handlers
Here, Handlers are simply user-interface level implementation of the Request-Response model. Each service has its own Handler. See user.go | https://medium.com/gdg-vit/clean-architecture-the-right-way-d83b81ecac6 | ['Angad Sharma'] | 2020-01-12 15:59:59.237000+00:00 | ['Software Architecture', 'Go', 'Development', 'Clean Architecture', 'Technology'] |
Decorator Design Pattern in Modern C++ | In software engineering, Structural Design Patterns deal with the relationship between object & classes i.e. how object & classes interact or build a relationship in a manner suitable to the situation. The Structural Design Patterns simplify the structure by identifying relationships. In this article of the Structural Design Patterns, we’re going to take a look at the not so complex yet subtle design pattern that is Decorator Design Pattern in Modern C++ due to its extensibility & testability. It is also known as Wrapper.
/!\: This article has been originally published on my blog. If you are interested in receiving my latest articles, please sign up to my newsletter.
By the way, if you haven’t checked out my other articles on Structural Design Patterns, then here is the list:
The code snippets you see throughout this series of articles are simplified not sophisticated. So you often see me not using keywords like override , final , public (while inheritance) just to make code compact & consumable(most of the time) in single standard screen size. I also prefer struct instead of class just to save line by not writing " public: " sometimes and also miss virtual destructor, constructor, copy constructor, prefix std:: , deleting dynamic memory, intentionally. I also consider myself a pragmatic person who wants to convey an idea in the simplest way possible rather than the standard way or using Jargons.
Note:
If you stumbled here directly, then I would suggest you go through What is design pattern? first, even if it is trivial. I believe it will encourage you to explore more on this topic.
All of this code you encounter in this series of articles are compiled using C++20(though I have used Modern C++ features up to C++17 in most cases). So if you don’t have access to the latest compiler you can use https://wandbox.org/ which has preinstalled boost library as well.
Intent
To facilitate attaching additional functionality to objects.
Sometimes we have to augment the functionality of existing objects without rewriting or altering existing code, just to stick to the Open-Closed Principle. This also preserves the Single Responsibility Principle by keeping the extra functionality on the side.
Decorator Design Pattern Examples in C++
And to achieve this we have two different variants of Decorator Design Pattern in C++:
1. Dynamic Decorator: Aggregate the decorated object by reference or pointer.
2. Static Decorator: Inherit from the decorated object.
Dynamic Decorator
struct Shape {
virtual operator string() = 0;
}; struct Circle : Shape {
float m_radius; Circle(const float radius = 0) : m_radius{radius} {}
void resize(float factor) { m_radius *= factor; }
operator string() {
ostringstream oss;
oss << "A circle of radius " << m_radius;
return oss.str();
}
}; struct Square : Shape {
float m_side; Square(const float side = 0) : m_side{side} {}
operator string() {
ostringstream oss;
oss << "A square of side " << m_side;
return oss.str();
}
};
So, we have a hierarchy of two different Shapes (i.e. Square & Circle) & we want to enhance this hierarchy by adding colour to it. Now we're suddenly not going to create two other classes e.g. coloured circle & a coloured square. That would be too much & not a scalable option. Rather we can just have ColoredShape as follows.
struct ColoredShape : Shape {
const Shape& m_shape;
string m_color; ColoredShape(const Shape &s, const string &c) : m_shape{s}, m_color{c} {}
operator string() {
ostringstream oss;
oss << string(const_cast<Shape&>(m_shape)) << " has the color " << m_color;
return oss.str();
}
}; // we are not changing the base class of existing objects
// cannot make, e.g., ColoredSquare, ColoredCircle, etc.
int main() {
Square square{5};
ColoredShape green_square{square, "green"};
cout << string(square) << endl << string(green_square) << endl;
// green_circle.resize(2); // Not available
return EXIT_SUCCESS;
}
Why is this a dynamic decorator?
Because you can instantiate the ColoredShape at runtime by providing the needed arguments. In other words, you can decide at runtime which Shape (i.e. Circle or Square) is going to be coloured.
You can even mix the decorators as follows:
struct TransparentShape : Shape {
const Shape& m_shape;
uint8_t m_transparency; TransparentShape(const Shape& s, const uint8_t t) : m_shape{s}, m_transparency{t} {} operator string() {
ostringstream oss;
oss << string(const_cast<Shape&>(m_shape)) << " has "
<< static_cast<float>(m_transparency) / 255.f * 100.f
<< "% transparency";
return oss.str();
}
}; int main() {
TransparentShape TransparentShape{ColoredShape{Square{5}, "green"}, 51};
cout << string(TransparentShape) << endl;
return EXIT_SUCCESS;
}
Limitation of Dynamic Decorator
If you look at the definition of Circle, you can see that the circle has a method called resize(). We cannot use this method, because we aggregated via the base interface Shape and are bound by only the methods it exposes.
Static Decorator
The dynamic decorator is great if you don’t know which object you are going to decorate and you want to be able to pick it at runtime. But sometimes you know the decorator you want at compile time, in which case you can use a combination of C++ templates & inheritance.
template <class T> // Note: `class`, not typename
struct ColoredShape : T {
static_assert(is_base_of<Shape, T>::value, "Invalid template argument"); // Compile time safety
string m_color;
template <typename... Args>
ColoredShape(const string &c, Args &&... args) : m_color(c), T(std::forward<Args>(args)...) { } operator string() {
ostringstream oss;
oss << T::operator string() << " has the color " << m_color;
return oss.str();
}
}; template <typename T>
struct TransparentShape : T {
uint8_t m_transparency; template <typename... Args>
TransparentShape(const uint8_t t, Args... args) : m_transparency{t}, T(std::forward<Args>(args)...) { } operator string() {
ostringstream oss;
oss << T::operator string() << " has "
<< static_cast<float>(m_transparency) / 255.f * 100.f
<< "% transparency";
return oss.str();
}
}; int main() {
ColoredShape<Circle> green_circle{"green", 5};
green_circle.resize(2);
cout << string(green_circle) << endl; // Mixing decorators
TransparentShape<ColoredShape<Circle>> green_trans_circle{51, "green", 5};
green_trans_circle.resize(2);
cout << string(green_trans_circle) << endl;
return EXIT_SUCCESS;
}
As you can see, we can now call the resize() method, which was the limitation of the Dynamic Decorator. You can even mix the decorators as we did earlier.
So essentially, what this example demonstrates is that if you’re prepared to give up the dynamic composition nature of the decorator and to define all the decorators at compile time, you get the added benefit of using inheritance.
And that way you actually get the members of whatever object you are decorating being accessible through the decorator & mixed decorator.
Functional Approach to Decorator Design Pattern using Modern C++
Up until now, we were talking about the Decorator Design Pattern which decorates over a class but you can do the same for functions. Following is a typical logger example for the same:
// Need partial specialization for this to work
template <typename T>
struct Logger; // Return type and argument list
template <typename R, typename... Args>
struct Logger<R(Args...)> {
function<R(Args...)> m_func;
string m_name; Logger(function<R(Args...)> f, const string &n) : m_func{f}, m_name{n} { } R operator()(Args... args) {
cout << "Entering " << m_name << endl;
R result = m_func(args...);
cout << "Exiting " << m_name << endl;
return result;
}
}; template <typename R, typename... Args>
auto make_logger(R (*func)(Args...), const string &name) {
return Logger<R(Args...)>(std::function<R(Args...)>(func), name);
} double add(double a, double b) { return a + b; } int main() {
auto logged_add = make_logger(add, "Add");
auto result = logged_add(2, 3);
return EXIT_SUCCESS;
}
The above example may seem a bit complex to you, but if you have a clear understanding of variadic templates, then it won’t take more than 30 seconds to understand what’s going on here.
Benefits of Decorator Design Pattern
1. Decorator facilitates the augmentation of functionality for an existing object at run-time & compile time.
2. Decorator also provides the flexibility of adding any number of decorators, in any order, and of mixing them.
3. Decorators are a nice solution to permutation issues because you can wrap a component with any number of Decorators.
4. It is a wise choice to apply the Decorator Design Pattern to already shipped code, because it enables backward compatibility of the application & less unit-level testing, as changes do not affect other parts of the code.
Summary by FAQs
When to use the Decorator Design Pattern?
— Employ the Decorator Design Pattern when you need to be able to assign extra behaviours to objects at runtime without breaking the code that uses these objects.
— When the class has final keyword which means the class is not further inheritable. In such cases, the Decorator Design Pattern may come to rescue.
What are the drawbacks of using the Decorator Design Pattern?
— Decorators can complicate the process of instantiating the component because you not only have to instantiate the component but wrap it in a number of Decorators.
— Overuse of Decorator Design Pattern may complicate the system in terms of both i.e. Maintenance & learning curve.
Difference between Adapter & Decorator Design Pattern?
— Adapter changes the interface of an existing object
— Decorator enhances the interface of an existing object
Difference between Proxy & Decorator Design Pattern?
— Proxy provides a somewhat same or easy interface
— Decorator provides enhanced interface | https://medium.com/dev-genius/decorator-design-pattern-in-modern-c-4af9053ccc7f | ['Vishal Chovatiya'] | 2020-09-14 19:23:09.240000+00:00 | ['Programming', 'Software Development', 'Design Patterns', 'Coding', 'Cpp'] |
Descriptive and Inferential Statistics for Data Science | MATHS FOR DATA SCIENCE
Descriptive and Inferential Statistics for Data Science
Inferential and Descriptive Statistics!
Photo by Dhru J on Unsplash
Statistics is mainly divided into two domains: descriptive and inferential. While one is used to describe a summary of a dataset, the other is used to draw meaningful conclusions about a population from a sample.
In the series of “Maths for Data Science” articles, I have talked about Probability, Regression, Normal Distribution, and other important subjects. But, this article is going to be on the lighter side of knowledge. Some key concepts I will cover are of descriptive statistics including mean, median, standard deviation, etc. Along with it, the power of inferential statistics will also be covered.
Descriptive Statistics
Before diving into descriptive statistics, let’s talk about data first.
Data is the new oil — Clive Humby
You may think of data as numbers in a spreadsheet, but data can come in many forms — from text and spreadsheets to images and databases. In this modern era, data is an important asset for navigating a new way of the world. No matter what field you are in — insurance, banking, or healthcare — data is utilized to make better decisions and accomplish goals.
There are mainly two types of data — Quantitative and Categorical.
Quantitative data takes on numeric values that allow us to perform mathematical operations (like the number of students in a class).
Quantitative- Continuous and Discrete
Continuous data can be split into smaller and smaller units like height. We can split height into centimeters, meters, millimeters, feet, inches, and many more units. Discrete data only takes on countable values like number of students in a class.
Categorical is used to label a group or set of items (like gender — male, female, transgender).
Categorical — Nominal and Ordinal!
Ordinal data take on a ranked ordering like Excellent, Good, Poor. And, Nominal data do not have an order or ranking like gender of a person.
Time for descriptive stats introduction! A descriptive statistic as the name suggests describes a dataset quantitatively.
Use the summary features of a dataset to answer questions!
Quantitative data can be described from four aspects:
1. Measures of Center (Mean, Median, Mode)
2. Measures of Spread
3. The Shape of the data
4. Outliers
Measures of Center
Mean: Mean is the total sum of values divided by total number of values.
Median: The middle number in the list from lowest to highest.
Mode: Mode is the value that appears the most or the most frequent number in your dataset.
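As a quick, hedged illustration of these three measures, here is how they can be computed with Python's built-in statistics module; the small sample below is made up purely for demonstration.
from statistics import mean, median, mode
# A small made-up sample: number of books read by 7 students
books = [2, 3, 3, 5, 7, 8, 30]
print(mean(books))    # total of the values divided by how many there are
print(median(books))  # the middle value once the list is sorted
print(mode(books))    # the most frequent value in the sample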
Measures of Spread
Measures of Spread are used to provide us an idea of how spread out our data is from one another.
Before jumping into the methods, let’s first understand the five number summary:
1. Minimum: The smallest number in the dataset.
2. Q1: The value such that 25% of the data falls below.
3. Q2: The value such that 50% of the data falls below.
4. Q3: The value such that 75% of the data falls below.
5. Maximum: The largest value in the dataset.
Now, let’s look at some of the most common methods to check the data spread.
Range: It is the difference between maximum and minimum value of the list or your data.
Interquartile Range (IQR): The interquartile range is calculated as the difference between Q3 and Q1.
Variance: The variance is the average squared difference of each observation from the mean.
Standard Deviation: The typical distance of each observation from the mean. The standard deviation is the square root of the variance.
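For a hedged sketch of these spread measures in code, NumPy covers all of them; the same made-up sample as above is reused here, and note that np.var and np.std default to the population versions.
import numpy as np
books = np.array([2, 3, 3, 5, 7, 8, 30])
data_range = books.max() - books.min()           # Range: maximum minus minimum
q1, q2, q3 = np.percentile(books, [25, 50, 75])  # pieces of the five number summary
iqr = q3 - q1                                    # Interquartile Range: Q3 minus Q1
variance = books.var()                           # average squared difference from the mean
std_dev = books.std()                            # square root of the variance
print(data_range, iqr, variance, std_dev)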
The shape of the Data
Using a histogram is the best way to visualize the spread of data. Looking at it, we can identify the shape of our data — right-skewed, left-skewed, or symmetric.
Symmetric: Mean equals Median
Right-Skewed: Mean greater than Median
Left-Skewed: Mean less than Median
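To see the mean-versus-median rule in action, here is a tiny hedged experiment with a right-skewed sample; the long right tail pulls the mean above the median.
import numpy as np
rng = np.random.default_rng(0)
right_skewed = rng.exponential(scale=10, size=10_000)  # long tail to the right
print(right_skewed.mean())      # pulled upward by the few very large values
print(np.median(right_skewed))  # stays near the bulk of the data, so mean > median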
Outliers
Outliers are the points that fall very far from the rest of the data points. This generally affects measures like the mean and standard deviation.
When dealing with outliers, consider the following:
1. Check whether the value is a typo or data-entry error.
2. Try to understand why the outliers exist.
3. Use the five-number summary to understand the outliers.
You can use these formulas to check that a given number is an outlier or not.
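The most common version of that check is the 1.5 * IQR fence: a value is flagged as a suspected outlier if it falls below Q1 - 1.5 * IQR or above Q3 + 1.5 * IQR. A small sketch in Python, reusing the sample from earlier:
import numpy as np
books = np.array([2, 3, 3, 5, 7, 8, 30])
q1, q3 = np.percentile(books, [25, 75])
iqr = q3 - q1
lower_fence = q1 - 1.5 * iqr   # anything below this is a suspected outlier
upper_fence = q3 + 1.5 * iqr   # anything above this is a suspected outlier
outliers = books[(books < lower_fence) | (books > upper_fence)]
print(outliers)  # here only the value 30 falls outside the fences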
Inferential Statistics
Now that we have described our data, it is time to draw conclusions about a larger population using the collected sample.
For inferential statistics, we must be clear about a few terms:
Sample: a subset of the population.
Population: our entire group of interest.
Parameter: numeric summary about a population.
Statistic: numeric summary about a sample.
For example, we send our ad catalog to 10,000 individuals, and 5,000 of them reply. Of these 5,000 individuals, 75% were interested in our catalog ad. But we want to find out how many individuals from the whole population are interested in our ad.
Population = 10,000
Sample = 5,000
Statistic = 75%
Parameter = Proportion of the population that is interested in the ad
So, we have to draw a conclusion about all of the 10,000 individuals from the responses we have got, i.e., only 5,000. Draw a conclusion from these 5,000 and apply it to the bigger population!
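One standard way to make that leap is to put a confidence interval around the sample proportion. Here is a hedged sketch using the 5,000 replies and the 75% figure from the example, with a normal approximation and the assumption that the replies behave like a random sample of the population.
import math
n = 5000       # sample size: the people who replied
p_hat = 0.75   # sample proportion interested in the ad
# standard error of the sample proportion under the normal approximation
se = math.sqrt(p_hat * (1 - p_hat) / n)
# roughly 95% confidence interval: p_hat plus or minus 1.96 standard errors
lower, upper = p_hat - 1.96 * se, p_hat + 1.96 * se
print(f"~95% CI for the population proportion: ({lower:.3f}, {upper:.3f})")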
Binomial Distribution, Normal Distribution, Confidence Intervals, Regression, and the Central Limit Theorem are some of the key tools of inferential statistics.
I have made separate articles on them!
Summary
I have talked about Descriptive and Inferential statistics in this article:
1. Descriptive Statistics
2. Inferential Statistics
Peace!
In Plain English
Enjoyed this article? If so, get more similar content by subscribing to our YouTube channel! | https://medium.com/ai-in-plain-english/descriptive-and-inferential-statistics-for-data-science-e8b8b56b1ed2 | ['Vishal Sharma'] | 2020-06-10 06:38:19.363000+00:00 | ['Machine Learning', 'Data Science', 'AI', 'Math', 'Mathematics'] |
nxneo4j: NetworkX-API for Neo4j — A new chapter | Recently, I have had the opportunity to work with nxneo4j and I am excited to share it with the world!
What is nxneo4j?
nxneo4j is a python library that enables you to use networkX type of commands to interact with Neo4j.
Neo4j is the most common graph database. NetworkX is the most commonly used graph library. Combining the two brings us the nxneo4j!
Neo4j has the following advantages:
Neo4j is a database which means it persists the data. You can create the data once and run graph algorithms as many times as you want. In networkX, you need to construct the graph every time you want to run it.
Neo4j graph algorithms are scalable and production-ready. Neo4j algorithms are written in Java and performance tested. NetworkX is a single node implementation of a graph written in Python.
The response time is much faster in Neo4j.
Neo4j supports graph embeddings in the form of Node Embeddings, Random Projections, and GraphSAGE. These are not available in nxneo4j yet, but they will be available in future versions.
So, why not use Neo4j?
nxneo4j is designed to help users interact with Neo4j quickly and easily. It uses the famous networkX API. You will use your well-accustomed scripts and but the scripts will run against Neo4j. Cool!
Just to be clear, Mark Needham had already created nxneo4j, and you might have used it in the past. This version updates the entire library for Neo4j 4.x and the new Graph Data Science library, since the older Graph Algorithms library is not supported with Neo4j 4.x. More importantly, it significantly improves the core functionality with property support, node and edge views, remove-node features, etc. So, this is more like “A new chapter” or “Welcome back” for the library, and it will have continuous support.
If you are like me and prefer Jupyter Notebooks instead, here is the link:
https://github.com/ybaktir/networkx-neo4j/blob/master/examples/nxneo4j_tutorial_latest.ipynb
Prerequisite 1: Neo4j itself
You need to have an active Neo4j 4.x running. Make sure you have Neo4j 4 and above. You have four options here:
Neo4j Desktop: It is a free desktop application that runs locally on your computer. It is super easy to install. Follow the instructions on this page: https://neo4j.com/download/ Neo4j Sandbox: This is a free temporary Neo4j instance and it is the fastest way to get started with Neo4j. So fast that, you can have your Neo4j instance running under 60 seconds. Just use the following link: https://neo4j.com/sandbox/ Neo4j Aura: This a pay-as-go cloud service. You can start as low as $0.09/hour. This is most suitable for long term projects where you don’t have to worry about the infrastructure. In case you want to give it a try: https://neo4j.com/aura/ Your enterprise Neo4j instance. You know it when you have it. Otherwise, you can ask your architect to run an instance for you. Be careful during the experimentation.
Prerequisite 2: APOC and GDS plugins
In Neo4j Desktop, you can easily install them like the following:
Image by the Author
Image by the Author
The libraries come pre-installed in the Sandbox and Aura.
Connect to Neo4j
No matter which option you choose, you need to connect to the Neo4j. The library to use is “neo4j”
pip install neo4j
Then, connect to the Neo4j instance.
from neo4j import GraphDatabase

uri = "bolt://localhost"        # in Neo4j Desktop; use the custom URL for Sandbox or Aura
user = "neo4j"                  # your user name; the default is always "neo4j" unless you have changed it
password = your_neo4j_password

driver = GraphDatabase.driver(uri=uri, auth=(user, password))
If everything went smoothly so far, you are ready to use nxneo4j!
nxneo4j
To get the most up to date version, install it directly from the Github page.
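Presumably this is the usual pip-install-from-GitHub route; based on the repository linked in the Resources section below, the command would look something like:
pip install git+https://github.com/ybaktir/networkx-neo4j.git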
This will install nxneo4j 0.0.3. The version 0.0.2 is available on pypi but it is not stable. 0.0.3 will be published on pypi soon. Until then, please use the above link.
Then create the Graph instance:
import nxneo4j as nx

G = nx.Graph(driver) # undirected graph
G = nx.DiGraph(driver) # directed graph
Let’s add some data:
G.add_node(1)                    # single node
G.add_nodes_from([2,3,4])        # multiple nodes
G.add_edge(1,2)                  # single edge
G.add_edges_from([(2,3),(3,4)])  # multiple edges
Check nodes and edges:
>>> list(G.nodes())
[1, 2, 3, 4]

>>> list(G.edges())
[(1, 2), (2, 3), (3, 4)]
To add nodes and edges with features:
G.add_node('Mike',gender='M',age=17)
G.add_edge('Mike','Jenny',type='friends',weight=3)
Check individual nodes data:
>>> G.nodes['Mike']
{'gender': 'M', 'age': 17}

>>> G.nodes['Mike']['gender']
'M'
Check all nodes and edges data:
>>> list(G.nodes(data=True))
[(1, {}),
(2, {}),
(3, {}),
(4, {}),
('Mike', {'gender': 'M', 'age': 17}),
('Jenny', {})] >>> list(G.edges(data=True))
[(1, 2, {}),
(2, 3, {}),
(3, 4, {}),
('Mike', 'Jenny', {'type': 'friends', 'weight': 3})]
Visualize with like the following:
>>> nx.draw(G)
Image by the Author
To delete all the data
G.delete_all()
Config file
Neo4j has some additional requirements for data storage. In Neo4j, the relationships have to have a relationship label. The labels of the nodes are highly recommended. Since the NetworkX syntax has no room for label modification, we store this knowledge in the “config” file.
The config file is a python dictionary, and the default config file has the following statements:
{
'node_label': 'Node',
'relationship_type': 'CONNECTED',
'identifier_property': 'id'
}
You can easily change this dictionary and create an instance with new modifications. For example:
config = {
'node_label': 'Person',
'relationship_type': 'LOVES',
'identifier_property': 'name'
} G = nx.Graph(driver, config=config)
You can also change the default values after the instance creation:
G.direction = 'UNDIRECTED'   # for Undirected Graph
G.direction = 'NATURAL'      # for Directed Graph

G.identifier_property = 'name'
G.relationship_type = 'LOVES'
G.node_label = 'Person'
To check the config file:
>>> G.base_params()
{'direction': 'NATURAL',
'node_label': 'Person',
'relationship_type': 'LOVES',
'identifier_property': 'name'}
Built-in Data Sets
nxneo4j has 3 built-in datasets:
Game of Thrones
Twitter
Europe Road
1.Game of Thrones data
Created by Andrew Beveridge, the data set contains the interactions between the characters across the first 7 seasons of the popular TV show.
There are 796 nodes and 3,796 relationships.
All nodes are the TV characters labeled “Character”. The relationship types are “INTERACTS1”, “INTERACTS2”, “INTERACTS3” and “INTERACTS45”
The only node property is “name”
The relationship properties are “book” and “weight”.
You can load it with the following command:
G.load_got()
2. Europe Roads
Created by Lasse Westh-Nielsen, the data set contains the European cities and the distances between them.
There are 894 nodes and 2,499 relationships.
All nodes are labeled “Place” and the relationships types are all “EROAD”
Node properties are “name” and “countryCode”
Relationship properties are “distance”, “road_number” and “watercrossing”.
You can load the data with the following code:
G.load_euroads()
3. Twitter
Created by Mark Needham, the data contains Twitter followers of the graph community.
There are 6526 nodes and 177,708 relationships.
All node labels are “User” and all relationship types are “FOLLOWS”
Node properties are “name”, “followers”, “bio”, “id”, “username”, “following”. Relationships don’t have any property. To get the data, run:
G.load_twitter()
Graph Data Science
It is algorithm time!
There are at least 47 builtin graph algorithms in Neo4j. nxneo4j will expand to cover all of them in the future versions. For now, the following networkX algorithms are supported:
pagerank
betweenness_centrality
closeness_centrality
label_propagation
connected_components
clustering
triangles
shortest_path
shortest_weighted_path
Let’s clear the data and load Game of Thrones data set:
G.delete_all()
G.load_got()
Visual inspection:
nx.draw(G) # You can zoom in and interact with the nodes
# when running on Jupyter Notebook
Image by the Author
Centrality Algorithms:
Centrality algorithms help us understand the individual importance of each node.
>>> nx.pagerank(G)
{'Addam-Marbrand': 0.3433842763728652,
'Aegon-Frey-(son-of-Stevron)': 0.15000000000000002,
'Aegon-I-Targaryen': 0.3708563211936468,
'Aegon-Targaryen-(son-of-Rhaegar)': 0.15000000000000002,
'Aegon-V-Targaryen': 0.15000000000000002,
'Aemon-Targaryen-(Dragonknight)': 0.15000000000000002,
'Aemon-Targaryen-(Maester-Aemon)': 1.1486743815905878,
... } >>> nx.betweenness_centrality(G)
{'Addam-Marbrand': 0.0,
'Aegon-Frey-(son-of-Stevron)': 0.0,
'Aegon-I-Targaryen': 0.0,
'Aegon-Targaryen-(son-of-Rhaegar)': 0.0,
'Aegon-V-Targaryen': 0.0,
'Aemon-Targaryen-(Dragonknight)': 0.0,
'Aemon-Targaryen-(Maester-Aemon)': 186.58333333333334,
... } >>> nx.closeness_centrality(G)
{'Addam-Marbrand': 0.3234782608695652,
'Aegon-Frey-(son-of-Stevron)': 0.0,
'Aegon-I-Targaryen': 0.3765182186234818,
'Aegon-Targaryen-(son-of-Rhaegar)': 0.0,
'Aegon-V-Targaryen': 0.0,
'Aemon-Targaryen-(Dragonknight)': 0.0,
'Aemon-Targaryen-(Maester-Aemon)': 0.33695652173913043,
... }
2. Community Detection Algorithms
Community Detection algorithms show how nodes are clustered or partitioned.
>>> list(nx.label_propagation_communities(G))
[{'Addam-Marbrand',
'Aegon-I-Targaryen',
'Aerys-II-Targaryen',
'Alyn',
'Arthur-Dayne',
... ] >>> list(nx.connected_components(G))
[{'Raymun-Redbeard'},
{'Hugh-Hungerford'},
{'Lucifer-Long'},
{'Torghen-Flint'},
{'Godric-Borrell'},
... ] >>> nx.number_connected_components(G)
610 >>> nx.clustering(G)
{'Colemon': 1.0,
'Desmond': 1.0,
'High-Septon-(fat_one)': 1.0,
'Hodor': 1.0,
'Hosteen-Frey': 1.0,
... } >>> nx.triangles(G)
{'Addam-Marbrand': 0,
'Aegon-Frey-(son-of-Stevron)': 0,
'Aegon-I-Targaryen': 0,
'Aegon-Targaryen-(son-of-Rhaegar)': 0,
'Aegon-V-Targaryen': 0,
... }
3. Path Finding Algorithms
Path Finding algorithms show the shortest path between two or more nodes.
>>> nx.shortest_path(G, source="Tyrion-Lannister", target="Hodor")
['Tyrion-Lannister', 'Luwin', 'Hodor'] >>> nx.shortest_weighted_path(G, source="Tyrion-Lannister", target="Hodor",weight='weight')
['Tyrion-Lannister', 'Theon-Greyjoy', 'Wyman-Manderly', 'Hodor']
Resources:
Project Github Page:
Jupyter Notebooks:
https://github.com/ybaktir/networkx-neo4j/blob/master/examples/nxneo4j_tutorial_latest.ipynb
Changelog:
https://github.com/ybaktir/networkx-neo4j/blob/master/CHANGELOG.md
Credits:
Mark Needham for creating the library.
David Jablonski for adding the functionalities while improving the core functionality. | https://medium.com/neo4j/nxneo4j-networkx-api-for-neo4j-a-new-chapter-9fc65ddab222 | ['Yusuf Baktir'] | 2020-09-02 20:00:07.895000+00:00 | ['Neo4j', 'Python', 'Networkx', 'Data Science', 'Graph'] |
Startup Spotlight: Evreka | Startup Spotlight: Evreka
Reshaping the Waste Management Business with End-to-End Solutions
Evreka aims to reshape the waste management business by delivering the most intelligent and modular solutions. Evreka enables cross-border waste management companies, local authorities, and municipalities to be more sustainable, smarter, greener, and more digital. Evreka was recognized as the winner of the 2019 Sustainability Innovation Award, presented by Oracle, for its intelligent waste management solutions.
Umutcan Duman, co-founder and CEO of Evreka, discovered his passion for entrepreneurship during his university years. Afterward, he closed down two unsuccessful ventures, which ended with many lessons learned, and focused on solving the world’s waste problem with Evreka. Since Evreka’s foundation, he has been working with EvrekaCrew for five years, and they have been actively operating in 40 countries. With the contribution of his achievements with Evreka, Umutcan has been chosen for the Forbes 30 Under 30 list and the Fortune 40 Under 40 list as one of the most successful young people in Turkey.
— In a single sentence, what does Evreka do?
Evreka provides the most comprehensive intelligent ERP solution designed and focused on the entire category of waste management by delivering end-to-end modular solutions and cutting-edge products, as well as guaranteeing operational excellence across the globe.
— What sets Evreka apart in the market?
With comprehensiveness at its core, Evreka creates a platform that can cross-integrate with all sorts of third-party applications, hardware, and software. Competing head-to-head with its peers in 2016, Evreka had developed its foundational modular platform by 2019. It combined this with the technologies of its old competitors, truck-related devices, and world leaders in the ERP and CRM sectors like Oracle. This comprehensive, enabling structure lets Evreka stand out from its competitors. Evreka shifted from being a competitor into the most agile and comprehensive waste management solution, leading its peers.
— How did Evreka come to be? What was the problem you found and the ‘aha’ moment?
In early 2015, we realized the insufficiency of waste collection processes — the lack of optimization and solutions that were far from sustainable — and we committed to solving these problems and offering more efficient methods. We worked on it night after night and created the EvrekaCrew.
In time, we started to serve clients across the world. The deeper we went, the more we realized the similarities between waste operations around the world. They were being managed almost the same and they all needed the same change: Digitization and traceability. With this gained new knowledge, we started to offer our solutions to city cleaners to improve their operations and increase their efficiency.
And today, with the vision of “always better,” we are proud to offer the industry’s most comprehensive, intelligent, and modular waste management software to more than 40 countries.
— Have you pursued funding and if so, what steps did you take?
Like every startup, we fundraised to accelerate our growth globally. We have achieved great success with a small investment of about $2 million, and we continue to do so. Our pioneering role in the sector encourages us to be always better. That is why we are still open to new smart investors who can help us tackle the problem we are aiming at more deeply and with more comprehensive technologies.
— What milestone are you most proud of so far?
We have written a success story with EvrekaCrew, a fantastic team. Since our establishment, we have been waking up to every new day with great enthusiasm, trying to implement our mission in every aspect of our business and offering the industry's most comprehensive solution for waste management. While doing all this, of course, we have passed some milestones. For as long as Evreka has existed, we have witnessed precious moments together, and I would like to share the most exciting of them with you.
The first moment we hit the field! The day we carry out our first tests in the field with our first user is precious. This was a moment of pride in which we can see the result of our work for the first time.
Our first shot to the global market, our first overseas experience, our first foreign users, and partners! This turning point means a lot to us because we have added tremendous speed to our growth from now on.
The first enterprise client, which is one of the milestones that leveled us up. This is the clearest and most concrete proof that we are a proven company with a proven solution. We are now a company that has trusted references in our users' eyes and can adapt its services to everyone and every situation, regardless of business type and size.
— What KPIs are you tracking that you think will lead to revenue generation/growth?
We are aware of the importance of standard KPIs that drive sales processes, and of course, we use them. We manage our pipeline with excellent planning. While we are creating new leads, we are trying to advance these leads’ processes on the pipeline successfully. Since our product and solution are unique, we have achieved a rapid transition to the purchasing process for all potential users who have tried the product. We are working to make this purchasing process more successful day by day and maximize our growth.
— How do you build and develop talent?
It is the most critical responsibility of the founder CEO. We call our team EvrekaCrew, and it is the most important thing that we care about most in our business.
We have a team mainly focused on solving inefficiency and old-school techniques encountered in waste management in the world. We continuously work together to improve this area and improve it to the moon and back.
On the other hand, we have defined our vision and values together and have been trying to pass them on to our new team members. We are very attached to each other and to our culture. Who wouldn't want to work in a job that makes a tremendous contribution to the world?
— What are the biggest challenges for the team?
Unfortunately, in 2017 during the early stages, we lost our close friend and co-founder Berkay. This situation deeply affected all founders, shareholders, and the Crew. But as a team, we overcame this by engaging each other more tightly and carrying Berkay’s dream on our shoulders. Now, we have him in our hearts and his vision in our minds. Every single day, we ask ourselves what Berkay would do if he were here and we are doing our best to make him proud.
— What’s been the biggest success for the team?
There is no such thing as the biggest success because each success has its own enjoyment and separate teaching. That’s why we like to celebrate each success separately and focus on the drivers behind it. As EvrekaCrew, we have placed hard work, learning from the best practitioners, and celebrating success all together in our corporate culture.
— What’s something that’s always on your mind?
The only thing I always have in mind is how we can get better. That's why our motto is "always better." This motto truly embraces me and each member of EvrekaCrew, and we live every moment of our daily lives in the light of this frame of mind. We strive to do so not only when doing business but also in our personal lives.
— What advice would you give to other founders?
As founders, we have no luxury to complain. We have to handle every problem we face with care and find the best solution for everyone.
— Have you been or are you part of a corporate startup program or accelerator? If so, which ones and what have been the benefits?
Oracle’s global startup program supports Evreka. Since the first day we were selected for the program, we have been using Oracle’s Cloud infrastructure and have been fed with a wide range of support, such as various mentoring and marketing services. We are also in Oracle’s global catalog.
Being involved in a highly applicable, supportive, and agile program, and having the opportunity to work with Oracle, is one of the things accelerating Evreka's growth. Although we have been accepted into several international programs, our preference has been to grow with Oracle and make the most of Oracle's vast knowledge. With its proven global success, Oracle is a name we are proud to be mentioned alongside.
— Who is your cloud provider?
We work with Oracle not only for the Cloud system but also for other services that they provide. We are a part of Oracle for Startups, and we offer the most comprehensive solution in the industry for a more sustainable and smart future with Oracle solutions. | https://medium.com/startup-grind/startup-spotlight-evreka-66e3ba304603 | ['The Startup Grind Team'] | 2020-08-11 15:21:00.921000+00:00 | ['Startup Lessons', 'Startup Spotlight', 'Startup', 'Technology', 'Startup Life'] |
10 Best Coursera Python Certification & Courses to Learn Coding in 2021 | 10 Best Coursera Python Certification & Courses to Learn Coding in 2021
These are the best Coursera courses for Python. You can join these specializations to both learn Python and get a Certification, most of these courses are free-to-audit.
Hello guys, if you want to learn Python and looking for the best Coursera courses and certifications for Python then you have come to the right place. Earlier, I have shared the best Python courses for beginners, and today, I am going to share the best Python certifications from Coursera.
While there are many online platforms out there to learn Python Programming, Coursera is one of the most reputed ones. The best thing about Coursera is that it provides access to courses taught at the World’s top universities like the University of Michigan and Rice University, one of the top 20 universities in the USA.
It also has the best Python certifications, offered both by organizations like IBM and Google and by the world's top universities like the University of Michigan. That's why many people flock to Coursera to learn Python and other Computer Science and Software Engineering skills.
The Trust level of Coursera is also very high, and that’s why when you put Coursera Certifications on your resume and LinkedIn, Recruiters pay attention. Their courses are also in-depth and well structured, which provides you the confidence and knowledge to justify those certifications.
Because of all these, more and more people are choosing Coursera for their online learning journey. Even many organizations are choosing Coursera as their learning partner, which means you may get to access many of the top Coursera certifications for free if your organization has tied up with them.
While I have shared some of the best Python Courses from Udemy and other platforms earlier, I have been continuously asked to write a similar article for Best Coursera Courses for Python and Data Science, and this article is for all those who have requested for best Coursera courses and certification on Python and Computer Science.
Now that we have talked about Coursera, let's talk a little bit about Python, because if you are here, then you are not only interested in Coursera but, more importantly, in Python. In short, it's possibly the most straightforward yet dominant programming language you can learn in 2021.
Unlike learning JavaScript, which is focused on Web Development, learning Python opens a lot of doors. You can not only create a Web application using Django and Flask, but also do a lot of automation using different Python libraries, and become a Data Scientist and Machine Learning expert by using Pandas, TensorFlow, and PyTorch.
In short, Python has immense scope, and everybody should learn Python.
By the way, if you like Udemy courses then I also recommend Jose Portilla's Complete Python Bootcamp: Go from zero to hero in Python course. It's the most popular Python course on the planet. More than 1,164,741 (1 million+) students have already joined this course.
10 Best Python Courses & Certifications on Coursera
Now that we have touched base on both Python and Coursera, its time to deep dive on the courses and certification which Coursera offers for aspiring Programmer who wants to learn Python.
While there are many great Python courses on Coursera, I have only included the best of the best ones, which is trusted by many of my friends, colleagues, and recommended to me.
It’s also not necessary to go through all these courses at once; instead, you will be served better if you go for a Coursera Specialization like Python for Everybody Specialization, which bundles related courses together with a hands-on project to teach you a skill rather than just syntax and semantics.
Anyway, without wasting any more of your time, here is my list of best Coursera courses for Python and Computer Science:
This is the most popular and one of the best Coursera course on Python. It is evident from the fact that more than 975,145 students have already enrolled in this. It’s offered by the University of Michigan, one of the most significant academic institutions in not just the USA but the World.
The average course at Michigan costs around $15,000 USD, but you can access this course for FREE, thanks to Coursera. If you want to get a certificate, though, you need to pay for the Specialization, which costs around $39 per month if I am not wrong.
As the name suggests, this course aims to teach everyone the basics of programming computers using Python. It will show you the basics of how one constructs a program from a series of simple instructions in Python, which makes it very useful for absolute beginners.
The course has no pre-requisites and avoids all but the simplest mathematics. Anyone with moderate computer experience should be able to master the materials in this course. This course will cover Chapters 1–5 of the textbook “Python for Everybody.”
Here is the link to join this Python course — Programming for Everybody
This course covers Python 3, the most popular version of Python, and provides a good launchpad for more advanced Python courses like web scraping using Python, accessing databases, and doing data analysis in Python. This course is also part of the Python for Everybody specialization, which means completing this course will count towards your certification.
Better process for zsh virtualenvwrapper plugin | Virtualenvwrapper is one of the must have scripts for building a Python development environment. With the power of zsh oh-my-zsh plugin framework, we can automate some processes such as activate & deactivate.
But the build-in virtualenvwrapper fails to work sometimes what it sould be. I decided to fix it by myself after googling for existed solutions.
virtualenvwrapper plugin analyze & optimize
The existing virtualenvwrapper plugin is quite useful most of the time, but it malfunctions occasionally. So I want to break it down in this section to find proper ways to improve it.
As it is a plugin for zsh environment, the code is quite straight.
The first part checks whether the required system variables are set and the virtualenvwrapper scripts have been installed. We don't have to dig much into how this part works.
The logic lives in the second part of the plugin code, which itself contains two pieces. The first piece is the core, while the second simply attaches the workon_cwd function from the first onto the chpwd hook. So we only need to pay attention to the workon_cwd function.
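For reference, that second piece is trivial: it just registers workon_cwd as a chpwd hook so the function runs on every directory change. A minimal sketch of the idea (the real plugin may guard the registration slightly differently):

# Register workon_cwd so zsh calls it after every cd
autoload -U add-zsh-hook
add-zsh-hook chpwd workon_cwd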
The original code is shown below.
function workon_cwd {
  if [ ! $WORKON_CWD ]; then
    WORKON_CWD=1
    # Check if this is a Git repo
    # Get absolute path, resolving symlinks
    PROJECT_ROOT="${PWD:A}"
    while [[ "$PROJECT_ROOT" != "/" && ! -e "$PROJECT_ROOT/.venv" ]]; do
      PROJECT_ROOT="${PROJECT_ROOT:h}"
    done
    if [[ "$PROJECT_ROOT" == "/" ]]; then
      PROJECT_ROOT="."
    fi
    # Check for virtualenv name override
    if [[ -f "$PROJECT_ROOT/.venv" ]]; then
      ENV_NAME=`cat "$PROJECT_ROOT/.venv"`
    elif [[ -f "$PROJECT_ROOT/.venv/bin/activate" ]]; then
      ENV_NAME="$PROJECT_ROOT/.venv"
    elif [[ "$PROJECT_ROOT" != "." ]]; then
      ENV_NAME="${PROJECT_ROOT:t}"
    else
      ENV_NAME=""
    fi
    if [[ "$ENV_NAME" != "" ]]; then
      # Activate the environment only if it is not already active
      if [[ "$VIRTUAL_ENV" != "$WORKON_HOME/$ENV_NAME" ]]; then
        if [[ -e "$WORKON_HOME/$ENV_NAME/bin/activate" ]]; then
          workon "$ENV_NAME"
          export CD_VIRTUAL_ENV="$ENV_NAME"
        elif [[ -e "$ENV_NAME/bin/activate" ]]; then
          source $ENV_NAME/bin/activate
          export CD_VIRTUAL_ENV="$ENV_NAME"
        fi
      fi
    elif [[ -n $CD_VIRTUAL_ENV && -n $VIRTUAL_ENV ]]; then
      # We've just left the repo, deactivate the environment
      # Note: this only happens if the virtualenv was activated automatically
      deactivate
      unset CD_VIRTUAL_ENV
    fi
    unset PROJECT_ROOT
    unset WORKON_CWD
  fi
}
From lines 6 to 12, it determines the project root folder; the criterion is whether there is a .venv in the folder. If none is found, the PROJECT_ROOT variable is set to . to represent the current folder.
From lines 13 to 22, it checks whether there is a .venv file that overrides the virtualenv name, having searched from the PROJECT_ROOT folder up through its parent folders recursively. The issue here is that the variable ENV_NAME will be set to an empty string if no .venv file could be found. Then, on line 23, the ENV_NAME check fails because of this, and the plugin fails to activate the virtualenv.
The .venv file should be used to determine whether the folder has a virtualenv, not just to override the virtualenv name. So we could change the code to set ENV_NAME to the current folder name when no .venv file is found.
# Check for virtualenv name override
if [[ -f "$PROJECT_ROOT/.venv" ]]; then
  ENV_NAME=`cat "$PROJECT_ROOT/.venv"`
elif [[ -f "$PROJECT_ROOT/.venv/bin/activate" ]]; then
  ENV_NAME="$PROJECT_ROOT/.venv"
elif [[ "$PROJECT_ROOT" != "." ]]; then
  ENV_NAME="${PROJECT_ROOT:t}"
else
  FOLDER_NAME=`pwd`
  ENV_NAME=`basename "$FOLDER_NAME"`
fi
Here we seem to have set everything up correctly. But when we move into sub-folders of the virtualenv root folder, the ENV_NAME check fails and the environment gets deactivated, because the sub-folder name doesn't match any virtualenv name. The new logic is illustrated below.
The code with above logic is shown below.
# Activate the environment only if it is not already active
if [[ "$VIRTUAL_ENV" != "$WORKON_HOME/$ENV_NAME" ]]; then
  if [[ -e "$WORKON_HOME/$ENV_NAME/bin/activate" ]]; then
    workon "$ENV_NAME"
    export CD_VIRTUAL_ENV=`pwd`
  elif [[ -e "$ENV_NAME/bin/activate" ]]; then
    source $ENV_NAME/bin/activate
    export CD_VIRTUAL_ENV=`pwd`
  elif [[ `pwd` != "$CD_VIRTUAL_ENV"* ]]; then
    # We've just left the repo, deactivate the environment
    # Note: this only happens if the virtualenv was activated automatically
    deactivate
    unset CD_VIRTUAL_ENV
  fi
fi
With that, the new optimized virtualenvwrapper plugin for zsh is finished. It can now automatically activate & deactivate the environment when entering and leaving a project folder. The full file can be downloaded from here.
Historical Stock Price Data in Python | Stocks are very volatile instruments and it’s therefore very important that we thoroughly analyze the price behaviour before making any trading decisions. This is why fetching and analyzing the prices is crucial. And python comes in handy to do that.
The stock data can be downloaded with different packages such as yahoo finance, quandl and alpha vantage. In this article, we will look at fetching daily and minute-level data from yahoo finance. This is a brief version of how to get the data; if you are looking for other methods and different data types, then you can refer below.
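As a minimal sketch of the idea (assuming the yfinance package, which wraps Yahoo Finance; the exact wrapper used in the full article may differ):

# pip install yfinance
import yfinance as yf

# Daily OHLCV candles for Apple over a fixed date range
daily = yf.download("AAPL", start="2019-01-01", end="2020-01-01", interval="1d")

# Minute-level candles (Yahoo only serves intraday bars for recent days)
minute = yf.download("AAPL", period="5d", interval="1m")

print(daily.head())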
The Phenomenal, Indomitable Human Spirit | By Wayne Saalman
[Photo by Zac Durant]
BE INSPIRED. BE VERY INSPIRED. So we should tell ourselves every day, for this world of ours is a treasure trove of possibilities. One need only look at what we humans have brought into existence in the realms of the creative arts and scientific innovation, and one is left marveling. Clearly, the fire of desire occupies a very special place among evolution’s prime directives, for otherwise, the intensity of that passion would not be so compelling. After all, if nature cared only that we survive, we humans would be as incapable of self-awareness as the insects, animals, birds of the air and the creatures of the sea.
Humans do more than survive, however. We thrive.
Over the centuries, we have fought against stifling heat and the freezing cold, beasts of the wild, deadly diseases and so much more. We have organized ourselves into tribes and nations, erected great governments, cities and monetary systems, invented innumerable necessities and created fantastic cultural artworks. We have also surrounded ourselves with wondrous luxuries and technological marvels. We have triumphed in too many ways to count.
That’s not all. We have tended, not only to our physical interests, but to our metaphysical intrigues, as well.
Hardcore scientific materialists doubtless believe that far too many of us delude ourselves, however, in terms of what is possible as biological entities born of sheer happenstance. Admittedly, we’re great, they say, but only in a limited manner. Yes, we can dream big in certain imaginative ways. We can seek and achieve worldly fame and fortune. We can grow intellectually and thereby become acclaimed in a specific field such as business, science, politics or the arts. We can even win our ideal lover if we are charming and persistent enough.
What no hardcore scientific materialist will agree to, though, is that humanity possesses an innate ability to achieve spiritual salvation or enlightenment, obtain psychic powers or enjoy survival beyond this physical, material world in which we now find ourselves.
Scientific materialists, of course, believe that the brain generates consciousness, as well as the sense of self we experience as human beings and that with the demise of the body these sensate qualities simply cease.
Yet, from a certain perspective, one might say that if our consciousness and sense of self are “only” the product of a physical organ known as the brain, then the dreams that brain conjures are still pretty spectacular.
According to polls, however, most of us believe that there is, indeed, an actual spiritual dimension and that our essential being is what we call the “soul”. The soul, most say, continues on after the demise of the physical body and subsequently resides in a realm that is both boundless and eternal in nature.
Despite these major differences of ideation and perception, the one thing that can be definitively stated is that even if our sense of self is but an illusory figment of electrochemical processes, it is still phenomenal and remarkable, for even an atheist can live a robust, moral, creative and fulfilling life, despite having nothing but oblivion on his or her personal horizon.
This implies that, at the very least, the human spirit is an immaterial, evanescent “something” possessed of a numinous, transcendental quality of some nature. It is a quality that is inherent to our essential being, even if that essence is limited and given to a sense of boundlessness only within the sphere of the human imagination.
While brain activity may be the source of the self-reflexive mind, the possibility exists that the psycho-spiritual complex might not only be real, but sourced in a supraluminal plane; a domain where energy and information actually move faster than the speed of light as we know it in this universe of ours. After all, neuroscientists admit, straight out, that they cannot prove that the brain gives rise to consciousness and a few have even argued that the brain may indeed be more like a radio receiver, mediating waves of energy and information external to it, rather than being the actual generator of thought and perception.
Some cutting edge thinkers have even gone so far as to propose that the one thing that may be genuinely supraluminal within our cosmos — and us — is thought itself.
Thought, of course, is an activity of the mind and if the mind is indeed sourced in a supraluminal plane, then the possibility exists that the mind may not be bound by the laws of physics as we know them. Perhaps, this can account for such “impossible” phenomena as telepathy, remote viewing, psychokinesis, clairvoyance, out of body experiences, and other events of “high strangeness”.
One of the most esteemed psychoanalysts of all time, Carl Jung, postulated that there may indeed be a supraluminal dimension and that dimension may be the source of the archetypal forces that shape psychic reality. Archetypes are a recurrent constellation of motifs within each of us that drive us toward a definitive destiny.
What archetypes are believed to do is propel us into acts of heroism, transform us into lovers, mothers, fathers, teachers, political or religious leaders, soldiers, police officers, merchants, artists, dramatic actors and comedic tricksters, among other roles (some of a dark nature, unfortunately, such as dictators, murderers, thieves and gangsters, for example).
In short, these recurrent forces are what motivate the human spirit.
Ideally, of course, that motivation is for noble ends, not for greedy nor evil ones.
Perhaps, what drives each of us to continually reach for all that we desire in this life (both the possible and seemingly impossible), is down to the particular archetypes that dominate within each of us. We must remember, however: there are karmic implications to all that we do. As we sow, we reap.
One thing is certain: the more creative we are with our personal energetic constellation of psychic dominants, the more powerful they become.
So, yes, be inspired! Be very inspired (rather than “afraid, very afraid”), I say, and create in a positive manner as you desire with boundless passion. We can do that for the duration of this known lifetime or go further and choose to create as if there is not only tomorrow, but all of eternity in which to enrich ourselves and everyone else in this amazing world of ours.
In brief, we humans are blessed in too many ways to count and that is down to one thing: the phenomenal, indomitable spirit within.
[Photo by Julia Caesar] | https://waynesaalman.medium.com/the-phenomenal-indomitable-human-spirit-c72feaad5fac | ['Wayne Saalman'] | 2019-06-01 08:11:18.196000+00:00 | ['Consciousness', 'Spirituality', 'Self Improvement', 'Culture', 'Psychology'] |
Why 35 and up Is a Little Too Old to Be Having “Sleepovers” | Alex and Phil were close friends, not the best of friends by any stretch, but close enough to where it wasn’t uncommon for them to stay up late at night, drinking beers, talking into the evening breeze.
If one of them got too drunk, they’d pass out, and one would let the other crash on his couch for the night.
It happened on one of those nights where Phil and Alex were both getting very good and drunk.
Phil shot his wife a text saying that he was probably going to stay another night over. After sending the text he sat in the darkness with Alex.
They sat together by the fire quietly, staring into the flames.
Maybe it was the alcohol. Maybe it was everything finally coming to a head, but out of nowhere Phil started crying.
It was light at first, but overwhelmed with emotion he started sobbing harder and harder.
Weeping over his dead brother.
In between fits of tears, he would throw up, the snot from the tears mixed with strings of spit and bile stretching from his face.
Alex was shocked.
He tried to get Phil to calm down and drink some water, but he just pushed him away.
He kept repeating “I’m sorry, I’m so so sorry” while weeping.
To his brother or to him, Alex never knew. He just watched over him until he finally passed out, exhausted from the tears and the alcohol.
It wasn’t until Phil had rolled over in the grass that Alex saw.
Phil had shit himself.
The garden hose was in the backyard next to the fire pit. Alex pulled it around and gently sprayed Phil to clean him, waking him up in the process.
Upon waking Phil was much more subdued and violently ashamed. Still too drunk to move effectively, he let Alex pull him out of his clothes and throw them away, while he gave him new ones.
Both men were fiercely embarrassed.
Phil, embarrassed at crying in the other man’s presence, Alex embarrassed to have seen the other man crying.
Around four in the morning, Phil crept back to his house crawling into bed with his wife. She asked if everything was all right and he simply shook his head. | https://medium.com/writers-blokke/why-35-and-up-is-a-little-too-old-to-be-having-sleepovers-8fd1574dba1c | ['Jesse Ya Diul'] | 2020-11-10 03:41:34.518000+00:00 | ['Short Story', 'Life Lessons', 'Humor', 'Writing', 'Life'] |
Lights | Haiku is a form of poetry usually inspired by nature, which embraces simplicity. We invite all poetry lovers to have a go at composing Haiku. Be warned. You could become addicted.
You Should Be Asking Yourself These Questions Right Now | People are legitimately worried about the effect the pandemic will have on mental health — not only because so many aren’t working, but also because so many of us are isolated. Not to mention sickness and death. But I do wonder, when this subject is broached, how many consider the responsibility that we have for our own mental health. Though we may suffer trauma and its effects through no fault of our own, we still are not helpless to address it. This assumes three important things:
1. We acknowledge we have experienced trauma (pain, fear, isolation, etc.)
2. We recognize that there are areas we aren’t mentally healthy.
3. We pursue and engage in a process of healing.
And there’s a whole lot of unhealthiness going on out there right now. The panic buttons have been activated and our deeper, unresolved issues are bubbling up. Now is the time to take inventory of ourselves, not only to assess where we are, but to plan for who and where we want to be once this is past us.
I agree with self-care and recommendations to be kind to yourself during these trying times, but now is not the time to numb out or to self-medicate with our chosen vices. Now is also not the time to be melting down into a pit of despair or frenzied worry.
It’s the time to ask yourself some critical questions:
What are my go-tos when I experience something sad or painful? This should be relevant to everyone. If you don’t actually experience the feelings, you’re probably numb to them, and that isn’t healthy. Most of us prefer to turn to something else to avoid or alleviate those feelings rather than work through them. And usually those things don’t serve us for the better.
How do I process anger, and how does it show up in my life? Anger is not a bad emotion — a lot of our anger is justifiable, at its root. But what we do with it (how it shows up) can be toxic to ourselves and others. How does your anger affect you and the people you care about? If you live in a constant state of anger or outrage, you are robbing yourself of freedom and quality of life.
How does fear influence my decisions and life choices? This is a big one, and it’s not just about the ‘now.’ Fear is what drives our need to control, and when control is removed as an option, fear takes the wheel. It decides where we go, when we go, and how. It often masquerades as other emotions, like anger or even apathy. One of the most helpful things we can do for ourselves is to be able to identify our fears and understand where they come from. Once we’ve brought these things out of the shadows and into the light, they are able to be dealt with more effectively, if we choose to take them on. And for nearly all of us, fear is what holds us back from living the way we’d like to live.
Where does denial and defensiveness show up in my life? I know several people for whom denial is a way of life, and it saddens me so much. Wounded people often create their own reality in an effort to control their environment, bending or full-out denying what is plainly seen to others as the truth. When the truth threatens their sense of self, their walls go up and they will tell you how it really is — that is, the ‘truth’ behind their walls. As you would expect, walls like that make for strained or non-existent relationships, and further distance between themselves and reality. Those are extreme cases, but most of us have at least a touch of this denial and defensiveness in our lives. Where and how is yours showing up?
What are the things I would change about myself, if I could? This is not to say we shouldn’t love ourselves the way we are, but many of us are unhappy with ourselves about things we have the power to change. I’ve personally spent at least a couple of decades hating how emotionally fragile I was, until I got so sick of it, I put all my effort into changing — that is, doing the hard work of therapy (and beyond) until my life looked different. My emotional stability is its own reward, but I also have the bonus of feeling pretty proud of myself for accomplishing that huge goal. There is so much hope for a better quality of life than we are living right now.
What are the things I like about myself? It is vitally important that we know what these things are — and if we can’t think of many, that’s a strong warning sign that we need help and change in our lives. What we do like, we need to nurture and grow, like a flowering plant. If you want to continue to bloom through the seasons, you have to cultivate those things, prune away the dead areas, and pull up the weeds that want to strangle the life and beauty out of you. Lean hard into your own goodness, talents, and strengths.
What are my values? Do I live according to them? How? Make a list of at least 5 (but preferably 10) things/areas you value (for example, family, honesty, integrity, solitude, generosity, etc). If you end up with a big list, rank them as best you can. Does the way you live your life reflect what is important to you? In what ways? If your life is in conflict with your values, you will feel unfulfilled, unfocused, and in general, unhappy. If you identify areas you are living out of step with your values, what “values” are you actually demonstrating? | https://medium.com/interfaith-now/you-should-be-asking-yourself-these-questions-right-now-cf15d35ad9c5 | ['Michelle Wuesthoff'] | 2020-05-03 22:47:29.139000+00:00 | ['Mental Health', 'Transformation', 'Pandemic', 'Covid 19', 'Self Care'] |
How to Sync Mobile Development Teams with Kotlin Multiplatform | The use of mobile apps is increasing, and the demand for app development is contributing to its growth. For example, Statista estimated that by the end of 2021, 73% of e-commerce sales will take place on a mobile device. And, this high demand is not only related to e-commerce. Mobile app demand is also growing because of a very competitive marketplace. Consequently, mobile app development is in its major growth stage, and there needs to be a better way to build apps faster and more efficiently.
In mobile app development, the main challenge for developers is whether the app should be built for native or cross-platform frameworks. The best way to solve this dilemma is to actually test both of them using the Kotlin Multiplatform approach. Teams can easily synchronize and deliver apps for different platforms simultaneously with this method.
What is Kotlin Multiplatform?
Kotlin is a multiplatform language for building mobile apps so developers can write code that can be used on multiple devices including Android, iOS, and the web.
Kotlin Multiplatform is a new approach for development teams to build a single user flow for several operating systems or devices. Applying Kotlin Multiplatform can reduce time and costs by 30% or more. The app market is increasing, and users are becoming more demanding regarding the quality of apps. Startups are always struggling to launch their product on time and on budget, and building an application for Android, iOS and the web can triple expenses and time. Kotlin Multiplatform offers the opportunity to create a universal code that works on all of these platforms.
Let’s discuss how the Kotlin Multiplatform approach works, and what the challenges of using it are.
What are the key problems in synchronization?
Even though Kotlin Multiplatform reduces the time it takes to develop an app, it’s still challenging to develop an app using multi-team app development. Building multiplatform apps means you need a team where developers have different experiences, and different platforms shouldn’t interact in the process of building the final product.
Here are some of the main challenges:
Different experience needed
The main problem of using Kotlin Multiplatform is that you need to use different approaches to solve one problem. It is necessary to gather a team with diverse expertise. Different devices offer users different experiences, and several teams of developers can come up with different ways of solving the product requirements.
Communication within the teams
Another challenge is to gather a team where developers have different skills and expertise. While creating multiplatform approaches with the help of different teams, communication is a big challenge. For example, two different teams can create an application for different operating systems, and the customer may change the development requirements for one team, and leave them as-is for the other. This can lead to conflicting results.
Each platform has specific challenges
When you are building multiplatform apps, each of the operating systems has specific requirements especially, for the user interface, as it is necessary to match the design of the devices for a specific operating system. When developing each platform you need to use specific languages, frameworks, and libraries for each platform. The challenge is to build the same app, with a similar design while taking into account different frameworks, etc.
Advantages and disadvantages of cross-platform solutions
As far as cross-platform application development goes, there are only a few solutions, like React Native, Flutter, and Kotlin. In recent years, Kotlin has become very popular because Google announced support for the language on Android, and Spring Boot 2 offers Kotlin support as well. Kotlin code can run in JavaScript, native, and iOS applications, so Kotlin helps to share the code between them.
Development is carried out in the Kotlin language in IntelliJ IDEA or Android Studio, which is a big advantage when compiling the resulting code to the native target of each platform (i.e. Obj-C/native, JS, and the JVM). The build system is Gradle, which supports both Groovy syntax and Kotlin script (kts). Kotlin MPP has the concept of target platforms: in these blocks, the operating systems we need are configured, and the Kotlin code is compiled for them. Source sets, as the name suggests, hold the source code for the platforms.
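As a rough sketch of what such a build script can look like (the target names, version number, and options are illustrative and vary between Kotlin releases):

// build.gradle.kts
plugins {
    kotlin("multiplatform") version "1.4.10" // illustrative version
}

kotlin {
    // Target platforms: each block configures one compilation target
    jvm()            // Android / server-side JVM
    js { browser() } // JavaScript for the web
    ios()            // native iOS binaries

    // Source sets: where shared and platform-specific code lives
    sourceSets {
        val commonMain by getting // code shared by every target
        val jvmMain by getting
        val jsMain by getting
        val iosMain by getting
    }
}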
Advantages of using such an approach include:
Saving time and money. When you use Kotlin Multiplatform for mobile and web development, you are going to save around 30% of the time needed for development. This means you will cut expenses by 30%. Compared to Java, Kotlin can cut 40% of the code, so you save money as long as you don’t need to pay for more developers.
Native-like solution. The code in Kotlin is similar to Swift, as well as being 100% interoperable with Java.
Fewer bugs. Thanks to Kotlin, you will get the most accurate code, and while some bugs emerge, it’s very easy to find and fix them before runtime.
Team productivity. Your team will be working on the same task so this can help keep tasks efficient.
Disadvantages are:
Kotlin is very useful if you want to reduce cost and time. But it’s costlier and more time-consuming when it comes to learning Kotlin. The transition to Kotlin can be challenging for your team as they need to spend time learning it.
Some limitations of using it. One of these limitations is multithreading. It is based on Coroutines technology which isn’t supported in Swift.
Lack of experts in the market. Kotlin requires very high expertise in different areas, and it’s much easier to find a Java or Swift developer than those who work with Kotlin. The main problem is that you can’t easily replace Kotlin developers since the technology isn’t as widely used as native programming languages.
How Kotlin Multiplatform can come in handy
According to TechCrunch, just two years ago at the I/O 2017 conference, Google announced support for Kotlin in its IDE, Android Studio. This came as a surprise, given that Java had remained the language of choice for developing Android applications for a long time.
Over the past two years, the popularity of Kotlin has increased. More than 50% of professional Android developers use the language to develop their apps, according to Google.
It looks like Google is increasing its support for Kotlin. “The next big step that we’re making is that we’re going Kotlin-first,” said Chet Haase, an engineer on the Android UI Toolkit development team at Google. “We understand that not everybody is on Kotlin right now, but we believe that you should get there,” Haase said. “There may be valid reasons for you to still be using the C++ and Java programming languages and that’s totally fine. These are not going away.”
To share the same code between platforms, you can create a common module that depends on the Kotlin standard library. In case we need to access platform-specific APIs, like the DOM in JavaScript or file APIs in Java, the best solution is platform-specific declarations. This means that in our shared module, we use a keyword to declare a function, and its implementation is then provided separately on each platform.
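That keyword pair is expect/actual. A minimal sketch of how it reads (the function names here are made up purely for illustration):

// commonMain: shared code declares what it needs
expect fun platformName(): String

fun greeting(): String = "Hello from ${platformName()}"

// jvmMain: JVM/Android implementation
actual fun platformName(): String = "JVM"

// jsMain: JavaScript implementation
actual fun platformName(): String = "JS"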
Wrapping Up
Kotlin Multiplatform development is a great choice for saving time and budget. This is a good approach where you can use common code for the different operating systems of the apps that will be used on different devices.
The main benefits of using Kotlin Multiplatform are the ease of sharing one code for multiple platforms, reduced costs and time by around 30%, and there is no need to hire extra staff to implement Kotlin, which significantly reduces time as there is no need to build Android, iOS and web apps with different codes.
The main pitfall is that it’s hard to find highly-qualified professionals who can deliver high-quality Kotlin Multiplatform apps, but as the technology develops so will the specialists. | https://medium.com/quick-code/how-to-sync-mobile-development-teams-with-kotlin-multiplatform-52a0ad4b8bd5 | ['Archer Software'] | 2020-10-12 15:46:06.870000+00:00 | ['Mobile App Development', 'Kotlin', 'Kotlin Multiplatform', 'Crossplatform Mobile App', 'Cross Platform Developmen'] |
RL Agents That Think Like Children — Exploring Exploration | Despite recent progress in artificial intelligence (AI) research, human children are still by far the most skilled learners we know of, learning noble skills like language and high-level thinking from minimal data. An efficient, hypothesis-driven exploration helps children’s learning. In fact, they explore so well that numerous ML researchers have been encouraged to put videos like the one below in their speeches to motivate study into exploration methods. However, because using results from studies in developmental psychology can be challenging, this video is often the degree to which such research correlates with human cognition.
A time-lapse of a baby playing with toys (https://www.youtube.com/watch?v=8vNxjwt2AqY)
Why is directly employing research from developmental psychology to problems in Artificial Intelligence so tricky? Taking motivation from developmental studies can be challenging because the environments that human children and RL agents are typically investigated in can be very diverse. Traditionally, reinforcement learning (RL) investigation takes place in grid-world-like environments or other 2-dimensional games, whereas children act in the rich and 3-dimensional real world. Moreover, similarities between children and AI agents are challenging because the tests are not regulated and often have an objective mismatch; much of the developmental psychology study with children takes place with children involved in the free exploration, whereas most AI research is goal-driven. Lastly, it can be difficult to ‘close the loop’ and build agents inspired by children and learn about human perception from AI research results. By examining children and artificial agents in the identical controlled, 3D environment, we can probably mitigate several of these difficulties above and ultimately advance research in AI and cognitive science.
Recently, Deepmind and Berkeley Artificial Intelligence Research (BAIR) collaborated on a research and developed a directly contrasting agent and child exploration platform based on DeepMind Lab.
How do Children Explore?
The foremost thing we know about infant exploration is that infants form hypotheses about how the world operates, and they engage in exploration to test those hypotheses. For instance, studies such as Liz Bonawitz et al., 2007 showed that what children observe influences their exploratory play. If it seems like there are multiple ways a plaything could work, but it's not obvious which one is right (namely, the evidence is causally confounded), children engage in hypothesis-driven exploration and will investigate the plaything for significantly longer than when the dynamics and result are simple (in which case they instantly move on to a new plaything).
Stahl and Feigenson, 2015 showed us that when children as young as 11 months are shown gadgets that disrupt physical laws in their environment, they will investigate them more and even engage in hypothesis-testing behaviours that reflect the particular kind of disruption seen. For instance, if they observe a car floating in the air (as in the video at the top), they find this unexpected; subsequently, children bang the toy on the desk to explore how it works. In other words, these disruptions significantly guide the children's exploration.
How do AI Agents Explore?
Classic work in computer science and AI concentrated on improving search methods for reaching a goal. For instance, a depth-first search approach will explore a particular path until either the goal or a terminal state is reached. If a path ends without reaching the goal, it will backtrack until the next unexplored path is found and then continue down that path. Unlike children's exploration, methods like these have no concept of exploring more given unexpected evidence, gathering information, or testing hypotheses. Recent work in RL has seen the development of other varieties of exploration algorithms. For example, intrinsic motivation methods give a reward for exploring interesting regions, such as those that have not been visited much before or those which are surprising. While these seem more related to children's exploration, they are typically used to expose agents to a diverse set of experiences during training rather than to encourage rapid learning and exploration at decision time.
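As a toy illustration of the count-based flavour of intrinsic motivation mentioned above (a generic sketch, not the specific method used in the work discussed here):

from collections import defaultdict
import math

visit_counts = defaultdict(int)

def intrinsic_reward(state, beta=0.1):
    """Exploration bonus that shrinks as a state is visited more often."""
    visit_counts[state] += 1
    return beta / math.sqrt(visit_counts[state])

# During training the agent optimizes: extrinsic_reward + intrinsic_reward(state)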
Conclusion
DeepMind and BAIR did a range of experiments and presented their conclusions on arXiv. Take a look at the paper for a detailed explanation of the experimental setup and results. In conclusion, this work only opens several deep questions about how children and AI agents explore. The experiments are shown here begin to discuss how much children and AI agents are ready to explore, whether free versus goal-directed exploration policies differ, and how reward shaping influences exploration.
The authors of the paper believe that to truly build intelligent agents, they must do as children do: actively explore their environments, perform experiments, and gather information to weave together into a rich model of the world.
In this way, we will understand how children and agents explore novel environments and how to close the gap between them.
Reference
Exploring Exploration: Comparing Children with RL Agents in Unified Environments.
Eliza Kosoy, Jasmine Collins, David M. Chan, Sandy Huang, Deepak Pathak, Pulkit Agrawal, John Canny, Alison Gopnik, and Jessica B. Hamrick arXiv preprint arXiv:2005.02880 (2020) | https://medium.com/swlh/rl-agents-that-think-like-children-exploring-exploration-f217cf347287 | ['Karthik Bhaskar'] | 2020-11-20 18:29:40.909000+00:00 | ['Artificial Intelligence', 'Machine Learning', 'Reinforcement Learning', 'Data Science', 'Deep Learning'] |
Software security at the organizational level | Security awareness, and knowledge increased a lot recently. More and more developers know what is XSS and they have already heard at least about CSRF, organizations have security departments, and most project leaders understand that security auditors and ethical hackers are their friends not their enemies.
And still, everyone who is involved in the application security business has a typical, recurring experience: that old vulnerabilities, well-known security problems come back again and again, and many applications are still ridiculously easy to hack.
Why is that so?
One obvious answer is that security is a matter of focus.
Photo by Paul Skorupskas on Unsplash
Even if you are a developer who knows about a given vulnerability, you might still introduce it to your code, simply because you focus so much on functionality that you forget to realize the security aspect. When you are running deeply within one track, it is hard to stop, climb out of your pit and look around from a higher perspective. It even happens to security experts, who turn to development, sometimes they introduce vulnerabilities without realizing it, or (more often) realize it only after they look through their code with their “security glass” on.
Another important reason is that at the organizational level security is a typical weakest link problem.
Photo by Shaojie on Unsplash
If there is just one Junior developer in the team, who does not understand SQLi, he can easily introduce several SQLi vulnerabilities before anyone realizes it, and teaches him how to do it better.
Actually there is more about it than just the typical weakest link problem. There is something that we can call the accumulation of knowledge. Some types of knowledge accumulate very well, while others accumulate hardly, or not at all.
If there is one developer who understands graph-algorithms well, she can do all the graph-related tasks, and the quality of those solutions will not depend on the graph-knowledge of other developers. It is enough if others know that she is the right person to find, when there is a graph related problem. If one developer creates a great UI component others can use it even without fully understanding how it works. In many aspects, IT knowledge accumulates extremely well (that’s why we have such a rapid improvement in the software industry).
But alas, security knowledge cumulates hardly. The fact that one developer knows an immense lot about security does not really help other developers who are less knowledgeable. The problem is that those other developers won’t know when to ask a question. After all, if you never heard about XXE, and you use an XML parser, you won’t go and ask your colleague: ‘Hey, is there any possible security issue with using an xml parser with DTD enabled?’. In order to ask such questions you already have to know at least something about XXE.
So to sum it up, there are three problems which help hackers and hinder devs: focusing solely on functionality, the weakest link problem, and that security knowledge does not accumulate well.
How can we address these problems?
I believe (with many other security experts), that a true solution can only be found at the organizational level. You have to create the right processes, roles, practices at organizational level.
Create a code-review system to guarantee that no code of junior developers (or optimally, no code of anyone) can get it’s way into the repo without someone reviewing it first.
to guarantee that no code of junior developers (or optimally, no code of anyone) can get it’s way into the repo without someone reviewing it first. Make it a norm that security questions pop up during the design phase as well as during testing.
as well as during testing. Allocate time for your devs to look at their own code from different angles, checking the code’s overall quality, including security quality . (And you have to make sure that they don’t use that time for something else, even when a deadline is approaching, and the client is eager to see all the new functionalities!)
. (And you have to make sure that they don’t use that time for something else, even when a deadline is approaching, and the client is eager to see all the new functionalities!) Ensure that everyone gets basic education about security as early as possible. Monitor how the onboarding process effectively involves learning your security standards.
about security as early as possible. Monitor how the onboarding process effectively involves learning your security standards. Make security related knowledge and skills refreshment programs a regular part of annual professional improvement.
a regular part of annual professional improvement. Facilitate security knowledge transfer within the organization.
within the organization. Identify the security stuff that everyone has to know within a project, and separate it from the knowledge that only security-experts have to care for.
within a project, and separate it from the knowledge that only security-experts have to care for. Do have security experts with well defined roles and meaningful responsibilities.
These are short bullet points only. I will publish a longer article on the matter with more details soon in the defdeveu journal.
Stay tuned for the next post here introducing how to implement and maintain in a longer term the above in the framework of an advanced security champions program. | https://medium.com/defdeveu/software-security-at-the-organizational-level-980174615443 | ['Nyilasy Péter'] | 2020-06-11 08:17:24.450000+00:00 | ['Software Engineering', 'Organizational Culture', 'Software Development', 'Software Security', 'Coding'] |
Fibonacci’s Echo | Math is like an echo.
Fibonacci knew this.
When you know one thing,
It leads to another.
It’ll always lead to a known place.
Echos are reliable.
Just like math.
In response to Richard L. Boyer ‘Echo’ prompt
The Magnificent Jesse Boy
As always, thanks to our hosts, Kathy Jacobs and Harper Thorpe, and the rest of the talented Chalkboard team! | https://medium.com/chalkboard/fibonaccis-echo-1a4dc52be4aa | ['Lee-Anne Hancock'] | 2020-09-23 23:44:40.229000+00:00 | ['One Line', 'Nature', 'Design', 'Chalkboard', 'Poetry'] |
Gold Plating Software Products | Exit psychic abilities and enter research
Simply adding a washing machine won’t cut it, so it seems. You might have just opened pandora’s box of requests and complaints. More customer satisfaction they said, it will cost almost nothing they said.
An argument can be made here that some people would have been turned off by the lack of a washing machine and would rather rent somewhere else. Exit psychic abilities and enter research. The decision to buy a washing machine should have been inspired by solid market analysis. How many people actually asked for one? How many of them decided to rent regardless? How many free condos do you have on average? Can you increase the price for condos with a washing machine? What’s the recommended percentage of condos to put a washing machine in? How long do washing-machine-loving customers rent on average and how does that compare to others?
Additionally, if your decision was based on thorough analysis, it would necessitate its own project or sprint. It won’t be part of your MVP or an add-on to a completely different scope. Just as a thought exercise, try to think of reasons why or why not to buy a microwave and share your thoughts in a comment.
Software Analogy
I’m sure most readers have already picked up on what I’m trying to say with the washing machine example but, after all, this is a software-focused article. So how exactly does this apply to software projects?
A project manager or a software engineer thinks it's a super-awesome idea to add feature X. They go on to develop this feature and, after the deadline gets pushed back a week or two, they finally finish, thinking it was worth it.
So here are a few different things that can come out of pandora’s box, that is, if the client notices the feature at all!
It’s broken! Looks like you didn’t cover all the corner cases and urgent unasked-for maintenance is now due.
Ok that’s a decent feature but can you do Y on top of/instead of X? Y will take you a sprint to do. Do you say no or ask for extra payment and upset the client? Maybe just swallow the cost? Either way, you lose. Looks like you forgot to propose your idea first.
your idea first. Actually, this feature is confusing my team and they might misuse it, can you disable it? They are the smartest I can afford to hire, sorry about subverting your expectations, I guess…
Upgrading the application, exciting times huh? Except remember that useless feature you added a few months earlier? It’s proving a bit problematic. So you can feel free to omit it and upset the 5% of your users utilizing it (or the client, by proxy) or spend a week or two trying to get it to work in the new version. Such a fun decision to make!
Oh btw, a few useless features down the line and it looks like bugs have been getting a longer and longer lifespan and we have a bunch of angry users. We need urgent maintenance. Do we pause other projects and potentially miss on opportunities? Or maybe we can hire more developers, increase our maintenance costs, and introduce a bit of newcomer’s chaos into the team! Or perhaps we can just ask everyone to commit to deadlines and work unpaid overtime and risk them suddenly disappearing to work somewhere else that is more organized and less stressful. What can go wrong with constant recruitment costs and team chaos?
What do we do about it?
It’s your job to have a proper conversation with the client/users to help them understand and convey to you what they really want.
Make a plan and stick to it! You might think that this is easier said than done but, let’s think about it. You do want to impress the client, maybe you are worried they will go for another provider if they don’t like the product and you don’t want to scare them off with too much commitment. Maybe you think you will lure in more users with nice features. You want to impress them by giving them “extra value” or “special features”. So far so good but the moment you decide you know what they want, things go south.
The best way to design a wonderful product is to do your homework beforehand. Do research, write user stories, have conversations, and most importantly don’t assume you know what the user wants! In fact, do you think they even fully know what they want? It’s your job to have a proper conversation with the client/users to help them understand and convey to you what they really want.
Take a look at this popular GIF, with a little twist
You should also realize that everything has a cost. The immediate cost might seem plausible but how about future costs? Software is not too different from a washing machine. You will need to maintain it and upgrade it. Every time you upgrade the frameworks you are using or move to new technologies, that feature is going to be a task “make sure it still works”. Every time a user finds a scenario that breaks it or an OS/Platform update renders the way it’s implemented obsolete, you will have to maintain it. So, think ahead!
So, before adding a new feature, think about the following points: | https://medium.com/swlh/gold-plating-software-products-7bffe427b215 | ['Hesham Meneisi'] | 2020-12-29 01:16:16.163000+00:00 | ['Engineering', 'Software', 'Project Management', 'Software Development', 'Software Architecture'] |
The River as Spiritual Teacher | “Eventually, all things merge into one, and a river runs through it. The river was cut by the world’s great flood and runs over rocks from the basement of time. On some of those rocks are timeless raindrops. Under the rocks are the words, and some of the words are theirs.” Norman Maclean
Once, when I was a boy in the backseat of a big car, we crossed a road where the river had burst its banks. We were the last car to get through that night.
My mother in the passenger seat and my brother beside me were in a panic as the rushing water rose up to the windows. I must have been 12 maybe, but instead of fear, trusted the river to let us pass when it could so easily have swept us away.
I have loved rivers ever since, although perhaps respect is a better word. Strange things happen by them and in them. Communities spring up, needing the river’s life-giving qualities and people suffer and sometimes drown.
I was 16 when my first love told me she had a secret. She looked nervous in the telling and I knew this was something important, something worthy of my full attention.
She had recently moved down south following a tragedy.
On the banks of the River Clyde, transport artery for Glasgow and its shipping yards, before the days of mobile phones, her mother had left the car to use a phone box.
The old girl was a talker and was on the phone for a long time, leaving her son in the backseat with the dog.
She had left the window open. Out climbed the dog followed by the boy, down to the river rushing by, not to be seen for a very long time.
The frogmen took four weeks to find a child’s body some way downstream; the river had claimed another victim, tore her family apart. So, they came south to leave the past, not forget it. They never would.
But what takes life can also give it. The mother birthed a second boy, now her only living son, years after most women have children.
It was heralded a miracle; I always thought the river had a hand in Alasdair’s arrival, its mysterious life-giving power carrying into life a newborn like Moses on the Nile.
In my own life, a therapy client who came to me saying he knew he was going to die, slipped in the shower and fulfilled the fate he was expecting just weeks after we started talking.
Two years later, I was walking the River Thames and heard a voice telling me to turn off my music and pay attention. I turned around and there was a bench with his name on. It felt like the river was speaking to me.
In Siddhartha, Herman Hesse’s wonderful novella, we see the river as protagonist, a funnel around which the plot weaves, returning to its imagery again and again.
It acts as marker for the different seasons of the eponymous hero’s life as he pursues his own truth, for many years becoming lost and unconscious, embroiled in the world, until the river rescues him once more.
The book begins ‘in the sunshine of the river bank’ as if the river is the sun itself shining its beacon on the young man, a wayshower for sure.
Most of us like our mentors to be more human, less enigmatic, but here the river is the perfect accompaniment to the hero as he peers into its unending depths on his quest for enlightenment.
There is always more to be learned from it and as he grows and matures he sees more and more, until finally he sees the whole of life itself reflected in its waters.
In mythology, the Self is often symbolized as a complementary pair of opposites, a king and queen for instance, reflecting the polarities of our own nature.
In Siddhartha, the river represents the unity of selves, the perfect synthesis of all our warring personalities and the wholeness that underlies all.
Toward the end of the book when he meets the ferryman Vasudeva, Siddhartha is told that by looking into the river he can learn everything worth knowing.
But at that point he remains too divided within himself to see this truth, yet gradually a change, guided by the ferryman, begins to take place.
Carl Jung, the great psychiatrist described the human task as to wake up to the primordial unity that joins the divided and conflicted parts of ourselves.
The river symbolizes that unity and the soul’s eternity.
When Siddhartha listened attentively to this river to the song of a thousand voices ; when he did not listen to the sorrow or laughter , when he did not bind his soul to any one particular voice and absorb it in his Self, but heard them all, the whole, the unity; then the great song of a thousand voices consisted of one word: Om perfection.
At another time, under instruction from the old sage masquerading as a simple ferryman, he is shown how to listen to the river and learn from it.
They listened. Softly sounded the river, singing in many voices. Siddhartha looked into the water, and images appeared to him in the moving water: his father appeared, lonely, mourning for his son; he himself appeared, lonely, he also being tied with the bondage of yearning to his distant son; his son appeared, lonely as well, the boy, greedily rushing along the burning course of his young wishes, each one heading for his goal, each one obsessed by the goal, each one suffering. The river sang with a voice of suffering, longingly it sang, longingly, it flowed towards its goal, lamentingly its voice sang.
He finally finds the correct understanding of life embracing both pain and pleasure, seeking nothing, as the oneness of life supersedes the notion of being a separate self.
In the end, the river is seen as the soul itself, both the individual soul and the world soul:
“Have you also learned that secret from the river; that there is no such thing as time?” That the river is everywhere at the same time, at the source and at the mouth, at the waterfall, at the ferry, at the current, in the ocean and in the mountains, everywhere and that the present only exists for it, not the shadow of the past nor the shadow of the future.”
As Norman Maclean, who spent a lifetime ripening before writing his classic A River Runs Through It says, all things merge into one and a river runs through it.
The river is seen as distinct from life as we know it, timeless and always present, but is ultimately life itself, creator and destroyer, like a mountain-top Shiva resting at the head of the Ganges.
In Sufism, the divine is often referred to as a river of loving; this is not the stop and start of love in the physical world, but a constant flow that ultimately heals us all.
The waters of life contain both our beginning and our end.
© simon heathcote
https://medium.com/soul-sea/this-is-the-holy-grail-you-seek-1281ced94c2b | https://medium.com/interfaith-now/the-river-as-spiritual-teacher-7f22c4dccff5 | ['Simon Heathcote'] | 2019-10-14 12:51:01.344000+00:00 | ['Fiction', 'Spirituality', 'Life Lessons', 'Buddhism', 'Books'] |
The 30 Most Popular Email Signoffs, Translated | The 30 Most Popular Email Signoffs, Translated
On the subtle art of saying “Thanks!” when you mean “Go die in a fire”
Those with a gift for the dark arts of email valedictions can speak volumes with a single word, an abbreviation, or an artfully placed (or withheld) exclamation point. But like all great endeavors in poetic expression, email signoffs are too often ignored, unappreciated, or willfully misunderstood by the heedless masses who genuinely think you’re expressing gratitude when you say something as nuanced as “Thanks;” or well-wishes when you say something as devastating as “Best.”
These are doubtless the same people who will text you “ok” or even “k” to mean “alright,” even though it is well-established at this point that either of those utterances in a text message signifies the beginning of a blood feud; or who write “ha” when they mean “hahahahahahaha,” which is like saying “Fuck off and die” when you mean “Nice to meet you.” With all that being said, it seems like there is a real and urgent need for an “Email Signoff Translator” to help folks who are subtextually challenged better navigate their professional lives, and so here we are. Hopefully this will clear up a lot of confusion in the future.
All the best, = “I grudgingly respect and fear you.”
As ever, = “You bore me so relentlessly that it is almost interesting.”
Best = “I have decided to spare you, for now.”
Best regards = “I’m on my best behavior because I need something from you.”
Best wishes = “Please do not email me again!”
Cheers, = “I like you but I don’t respect you.”
Cordially, = “For some time now I have been plotting a violent and surprisingly creative revenge upon your person, but I’ve recently decided that being compelled to go through life as the unctuous wretch that you are is more punishment than I could ever possibly mete out, and so I have decided (with some regret) to leave you to it.”
Hope this helps! = “Hope this helps (you dull-witted pillock)!”
Hope you’re well! = “I have already forgotten who you are or what this email was about.”
Kind regards, = “A long, long, long time ago, in an incident that was doubtless so trivial to you at the time that you thought nothing of it, you hurt someone with your carelessness. That person was me, and while there are certainly many others who have fallen prey to your cruel disregard, the relevant difference between the latter instances and the first one is that your other victims have not found you yet. But I have. Now, oblivious, you have fallen into my web.”
Let me know if you have any questions = *see “Hope this helps!”
Love = “I’m actually genuinely quite deeply in love with you but don’t know how to properly say it in another context.”
Regards = “I’m told it’s not possible to give someone palsy just by sending them an email, but that hasn’t stopped me trying.”
Respectfully = “You really fucked up this time!”
Rgrds = “I don’t respect you enough for vowels.”
Sincerely, = “I dislike you but I need something from you.”
Take care = “I think you are cool and I want you to think I am cool, too!”
Talk soon = “I’m painfully aware that you don’t read my emails.”
Thank you so much! = “I’m too personally insecure to know how I feel about you.”
Thank you very much! = “I want to be very clear that you have done me precisely zero favors.”
Thank you! = “Please hurry up and do this extremely simple thing for me.”
Thanks = “When justice finally catches up with you, you will not be spared.”
Thanks in advance = “I expect you to fail me.”
Thanks! = “Thanks for nothing!”
Thx = “Thx for nothing.”
TY = “You are not worth the effort to write out full words, and even this half-hearted scrap of feigned gratitude is more than you deserve.”
Warm regards = “I love you.”
Xoxo = “I love you but I don’t respect you.”
Yours, = “I love you but I don’t know you.”
Yours truly, = “I have always loved you, and the stars themselves shine more brightly when I think about you.” | https://medium.com/sharks-and-spades/the-30-most-popular-email-signoffs-translated-a486d8e7d8f1 | ['Jack Shepherd'] | 2020-10-22 17:33:23.036000+00:00 | ['Email', 'Work', 'Humor', 'Writing', 'Digital Marketing'] |
Create your UX / UI portfolio #02 : Goals and Purpose | What kind of professional do you want to present to the world?
In the competitive corporate world of information technology, many people walk around with sparse projects or send a doc. document attached to the email to apply for a job openings. And some of the questions that may arise at this point are: What is the difference between curriculum and a portfolio? Send both or how to choose between one?
Maia (our character in this journey) had these doubts, so she researched the subject. She ended up realizing that there are usually fields for sending resumes and fields for online portfolio URL when applying for a job via the registration form. She sends both. So she decided when she was going to apply for a job via email, she will also send her attached resume and her portfolio link (or even the PDF version of the portfolio).
But for her to achieve the objective of positively impacting the person responsible for the selection process, Maia needs to be very clear about the purpose of each of these pieces so that they are complementary and not redundant, and help her as much as possible in a job hunting.
The curriculum should represent all the professional achievements of Maia’s life, as well as having a short and attractive variation.
Every company wants to recruit a person who is not only fit for the job but also has other social and cultural activities; therefore, Maia includes other information, such as hobbies and other activities that make her a versatile person. She knows she doesn’t necessarily have to be an excellent hobby, but they must reflect that she has other skills and can be considered an expandable candidate. For example, she says that she really likes to watch films, which shows that she has a certain cultural inclination and that she is concerned about health when she says that one of her hobbies is cycling.
On the other hand, the project portfolio is a collection of samples of work or activities that are presented to a potential employer or recruiter to show their work in action. With a portfolio, people involved in the recruitment process can witness her talent. This adds a tangible aspect to Maia’s work, thus providing the recruiting professional with a clearer picture of the type of work she does, and what is the quality of her work.
For Maia, the portfolio is an invaluable companion to your resume. It is where she can show all of her work in detail and even create a project timeline. So, if you want to win a place, you should probably invest in preparing the portfolio, just like Maia.
Elaboration
During the elaboration of the portfolio, Maia was assisted by colleagues with market experience, who shared important suggestions, such as:
Sharing screenshots of your work is not enough! Her portfolio should include case studies that detail her specific contribution and creative process. For example, in all of Maia’s case studies, she always presents the problem and her collaboration on the project.
Project Overview The goal of the project was to design a tablet app concept where children from the ages from 6 to 14 can stream their favorite TV and movie content. My Contributions I was responsible for this project from start to finish. This included doing research, defining personas, creating flowcharts and wireframes as well as designing a finished user interface.
Carefully consider the purpose of your portfolio. How do you want the world to see you? Maia defined that she will continue as a UX and UI professional so her portfolio should prioritize and highlight these projects and her skills in the field.
How do you want the world to see you? Maia defined that she will continue as a UX and UI professional so her portfolio should prioritize and highlight these projects and her skills in the field. Recruiters spend little time analyzing each candidate’s portfolio. So go for it! Try to be assertive, always presenting yourself as a problem solver and making it clear what was your role in each project. Everything must be clear and objective.
OVERVIEW
In addition to sharing your latest work, here’s what you should keep in mind for an eye-catching digital portfolio.
Build your brand: you want to create a consistent personal brand throughout your professional presence. Everything can start with your new logo, as Maia created in her portfolio. If you are not very skilled with graphic design, you can simply use your name with a legal font (Just beware of copyright. If you want you can use platforms like dafont.com and choose from thousands of free options to use), or still, ask colleagues who work in graphic design to help you with these issues.
Images: configure your images in the best possible resolution, but they are accessible (for example, Maia used a standard image with 1900x1200 px for her portfolio. You can use this size as reference)
Improved performance: you can write code from your website. But you can use ready-made platforms without any problem. Just know how to use these tools well.
Don’t just use images that you think are beautiful: although you should strive to create visually appealing content, good design is the sum of form and function and we want your portfolio to communicate that. Maia worked on her communication using a minimalist style and always focusing on projects, with vibrant and attractive colors.
THE CONTENT
You need to carefully consider what should be included and how the content should be presented.
Defining a line: Maia makes it clear what her portfolio style is and how she presents herself as a professional. You can follow the same style and create a unit for your portfolio.
Selling yourself: Some theorists believe that “about me” can be the most important part of your portfolio. Pay special attention to the “about me” part, which can present some statistics, recommendations, and even a short video reel. Be very clear about whatwas your role in each project and even ass the feedback of some clients. And believe me, it makes a difference.
Storytelling: the ability to communicate through the written word is increasingly valued in design and marketing. To demonstrate your ability to tell stories, use storytelling concepts for the cases you will be presenting.
The cases in your portfolio
Case Studies via Storytelling
Every day, those responsible for recruitment processes receive dozens of professional profiles. Many of these profiles will be lost in so much content. But there are ways to overcome confusion and help your profile stand out: you can reach people in a more personal way, letting your projects explain what kind of professional you are.
For this, Maia started writing customer case studies and adding the projects it has in the narrative format, and we will see how this contributed to the production of a strong and assertive portfolio.
Bringing your projects to life
As a creative professional, Maia usually has a very particular perspective on a project or product. For example, she will be concerned with conceptualizing and developing an application. But the customer will focus on their daily use: their look, functionality, and so on. Since a case study is written from the perspective of problem-solving, it tends to crystallize the benefits better than a typical marketing message. It can become even more powerful if you include personal testimonials and other comments from the people you served. That is, the customer. Don’t forget to include testimonials from clients in all cases (if possible) in your portfolio.
Educating the public
A case study usually follows a problem/solution format. That is, it describes the challenge that was faced — such as creating a new product, improving the interface, or failing communication — and explains how you helped solve the customer’s problem. This allows the reader to feel as if they are watching everything “over your shoulder”, as that person benefits from your product or service. So use simple, yet effective, and engaging language. Maia always starts by presenting the project’s challenge and how she collaborated to solve the problem. All with a pertinent but simple language.
Talking to the target audience.
Most people decide in a short time if they are interested in your profile. Then you must win them over immediately. A case study does not speak to all customers or potential prospects but speaks directly to those who share a similar background because they belong to the same industry or face a similar challenge, such as reducing operating expenses. Such a targeted approach is likely to produce a much better result than a “unique size” marketing speech. See the example of Maia’s case in which there was no budget for user research and how she worked around this problem.
User research on a $0 budget For this project I didn’t have a budget I could spend on research but because of a very specific target audience of this app I knew I can’t skip this step. Thankfully there were studies that others have done. Nielsen Norman Group had two studies, one about kids and one about teens, and the summary with most important findings was available for free on their website.
Delivering a happy ending
Good case studies shouldn’t look like marketing. They must first look like stories, and everyone loves stories, even if they are about companies that need to reduce their logistics costs. Why? Because at the heart of these stories are real people facing real challenges. Even better, a case study always has a happy ending. Note how Maia always faces the problems and improves the customer’s life and product, as in the case of a streaming application for which she researched a lot even with budget limitations and presented significant improvements such as navigation and profile sharing.
Conducting interviews After going through the NNg studies I decided to go a bit deeper and talk to some users. First I interviewed a couple of teachers and parents. What I learned from a teacher who works with kids aged 6–10 is that they know how to use a smartphone and are comfortable with using the search function to find their favorite YouTubers. I also learned that most of them don’t own their smartphone so using a tablet for streaming is a likely scenario. It helped a lot.
Influence
Case studies not only resonate with consumers but also encourage their professional colleagues. After all, most companies work to help people, and their professionals are happy to learn that they have made a difference in their customers’ lives. In defining the appearance of success, case studies also provide their professionals with a goal and standard for measuring their success. Show how you not only produce but also understand what you are doing, whether by relating your case to concepts or an actual idea. You must aim for a certain quality standard in your projects, both conceptual and visual, and try to show this to the world with your portfolio.
Good habits
If you plan to create a nice case study for your portfolio, here are some rules Maia uses, that you can also follow:
In a sentence or two, explains what you did for the client and how you helped with the project.
Describe the customer’s business to situate the problem.
Explain the challenge the customer faced.
Describe you collaboration on the project and the solution you provided and how you came to this solution.
Explain in detail how the customer has benefited from your solution.
Always try to include customer feedback.
Include sketches and prototypes of the project, which is important to show your technical skills.
Maia continues to transmit success: she publishes the case study in her online portfolio, sends it by e-mail to the client’s database, and shares it on social networks. That way, she creates a network of references.
When Maia’s case studies are ready, she checks to see if they reach potential audiences in social segments like Linkedin. And when she is inspired she even writes an article about the experience on her social media.
In the sequence, we will talk the elements of a good narrative. | https://medium.com/thesequence/create-your-ux-ui-portfolio-02-goals-and-purpose-c38655608515 | ['Deborah M.'] | 2020-11-18 04:13:31.760000+00:00 | ['Professional Development', 'UX Design', 'Design', 'Technology', 'UX'] |
The Terrifying World of Benadryl Addiction | The Terrifying World of Benadryl Addiction
It’s not uncommon to completely lose touch with reality, forgetting they swallowed the pills, forgetting where they are, and even forgetting who they are. Seizures. Lifelike hallucinations. Psychosis. Coma. Death.
Photo by Pavel Krugley on Unsplash
“First thing I want to say, I hate DPH. I think DPH is horrible. I took DPH because I wanted to mentally hurt myself. I will never take DPH again,” a user posted to the r/DPH subreddit as a preface to his experience taking 1200 milligrams of DPH — shortened form of diphenhydramine, or more commonly, Benadryl — during a suicide attempt. He took two entire boxes of Benadryl, which is around 50 to 100 times the therapeutic dose. By the time I spoke to him a month after his post, he had gone back on this reassurance and said he now found himself “hooked” on Benadryl.
“It has this unsettling and inherent darkness that I’ve never experienced with any other drug,” mentioned one user with years of heavy Benadryl dependence. With trip reports detailing walls covered in spiders, shadowy ghost figures skulking around their bedrooms, and pets melting in their owner’s arms, DPH possesses a unique ability to cause almost wholly nightmarish trips, transporting the user to a world reminiscent of the Silent Hill franchise. While in this delirium, the user often can’t tell where reality ends and the drug begins. It’s not uncommon to completely lose touch with reality, forgetting they swallowed the pills, forgetting where they are, and even forgetting who they are. Seizures. Lifelike hallucinations. Psychosis. Coma. Death. Doctors have noted the often inconsistent but often deadly serious symptoms of antihistamine overdose.
Photo via Reddit screenshot
DPH, once an obscure high, has risen in popularity due to social media personalities and its spread on social media. For instance, the “Benadryl challenge” on TikTok sees users down massive amounts of Benadryl and record the experience for their followers. Often the drug’s most common users are teenagers and other young adults without access to traditional recreational drugs, and Benadryl’s over-the-counter availability placates any of their health concerns. After all, if it comes from a Walgreens shelf, how bad can it be?
The DPH community on Reddit is a popular forum for recreational Benadryl users. While some brag with photos of hundreds of pills laid out, similar to how traditional drug dealers might post their hauls, most adamantly tell new users to stay far away from this drug. DPH addicts are a dime a dozen here, often lamenting the drug and how it has ruined their life, relationships, and brain health, whilst remaining in the throes of their addiction.
Is the brain damage permanent? Did the drugs break my brain? Questions plaguing the subreddit in between posts about trip reports, dosage questions, and memes. Others make posts warning about their use exacerbating their mental health problems, from depression to borderline personality disorder, to one user waking up in a mental hospital after a DPH binge. The teenager constantly felt intense paranoia and was unable to stop experiencing tactile and visual hallucinations. Others report brain fog, suicidal ideation, heart issues, memory problems, and vision and hearing loss. The lives of these users — many of them teenagers in already vulnerable situations — make Requiem for a Dream look like Sesame Street.
“I’m honestly quite sad how many people, especially teens, are taking this drug recreationally. I don’t want anyone else to have to experience the pure agony, terror, and confusion DPH has caused me,” infamously wrote one user of the Reddit forum. The user is believed to have fatally overdosed, after publicly and bluntly speaking about their addiction on the board for months. According to their comments, they started taking DPH in 2019 as a legal and cheap alternative to LSD and quickly became addicted as their physical and mental health deteriorated. On several experiences, medics rushed them to the hospital after seizures while shopping; they drove to work while on 2000 mg and described swerving to avoid hitting hallucinations; and had several stays in the cardiac intensive care unit. Their last post was a photo of 276 pink Benadryl pills — 6900 milligrams or 172 to 276 times the recommended dose. Responding to concerned users on the board, this person replied “6.9g may be the dose that finally frees me.”
Many users on Reddit discuss the adverse neurological effects they’ve suffered from DPH abuse, yet there have been nearly no studies on the effects of heavy Benadryl addiction on a young person’s brain. However, overdosing on Benadryl isn’t a new phenomenon in medical journals.
In another case reported by researchers, a distraught 14-year-old girl swallowed 150 capsules, each containing 50 mg of diphenhydramine. She quickly fell unconscious and experienced seizures. Despite aggressive and speedy treatment, doctors declared her brain dead the following day after detecting no cerebral blood flow. Fatal doses for children can be as low as 500 mg, while the fatal dose for adults is generally considered to be between 20 to 40 mg/kg.
Addiction, regardless of the drug, regularly ravages the user’s life in catastrophic ways but few substances are as wholly negative and all-consuming as recreational Benadryl. While substances like mushrooms, LSD, and even cannabis in some areas remain illegal, some young and vulnerable users resort to over-the-counter drugs that prove ruinous for their physical and mental health.
Those pushed to obscure highs like DPH might get some relief next year, at least for those in Oregon. Starting February 1, 2021, Oregon’s drug decriminalization law takes effect and drug users will no longer be arrested for possessing small amounts of illegal substances. Instead of a prison sentence, small possession charges will be punished with only a small fee similar to a traffic ticket. Under this new law, Oregon drug users won’t be pushed into the prison system and instead can get the help they need. Hopefully, this will allow local drug users to make safer choices. If you’d like to help the drug decriminalization effort nationwide, research drug decriminalization and its intersections with prison reform. By starting the conversation in your friend circle, these issues will become less stigmatized. | https://medium.com/an-injustice/the-terrifying-world-of-benadryl-addiction-7595d2b6501b | ['Raisa Nastukova'] | 2020-12-16 21:57:34.693000+00:00 | ['Drugs', 'Addiction', 'Reddit', 'Trends', 'Mental Health'] |
Deploying Classification Model using Flask | In my previous article, I’ve described the process of building an Image Classification model using Fast.ai. In this article, let’s see how to deploy it on a web application made out of Flask.
Small Background of the model that we’ve built — Classification of vehicles into emergency and non-emergency. Now that the model is built, what next? How do you want to use your trained model? A simple answer would be to build a web application where you would be able to pass the image and ask for prediction. There are 2 python based options for web development: Flask & Django. I chose Flask since it is light weight compared to Django and also because my deployment will only be based on request/response in text form. Django has lots of things which might not be needed for my webapp and also one can do anything and everything in Flask that can be done in Django.
So this article starts where the last article ended i.e., after getting the “.pkl” file of the model. We built the webapp which runs the trained model in the backend to get the prediction. So let’s get started… | https://medium.com/analytics-vidhya/deploying-classification-model-using-flask-5b92fbe1def5 | ['Manohar Krishna'] | 2020-11-16 14:36:28.353000+00:00 | ['Image Classification', 'Flask', 'Deep Learning', 'Python', 'Deployment'] |
3 Deceptively Simple Life Lessons Every Ambitious Individual Needs To Learn Before They Turn 30 | 3 Deceptively Simple Life Lessons Every Ambitious Individual Needs To Learn Before They Turn 30
The view from the peak is more enjoyable when you’ve enjoyed each step of the climb.
Photo by bruce mars on Unsplash
Ambitious individuals often fall victim to their pursuit of success.
By constantly preparing for the future or analyzing the past, successful people restrict themselves from being fully immersed in the present moment.
Most individuals with a thirst for achievement fail to recognize the very traits that make them successful also prevent them from developing a greater sense of fulfillment.
Strategizing for the future and delaying gratification for long-term gains builds materialistic wealth. Consistent, quality habits lead to the conventional markers of achievement. But playing the game of wealth or influence robs most individuals of happiness by convincing them that their fulfillment is tied to achievement.
In truth, you cannot become happy through achievement. You can only be happy. And you can make that choice — of releasing suffering and accessing a deeper state of gratitude — in any moment.
While many have mastered the formula to acquiring material wealth, few have discovered the path to building inner wealth.
Read the list below to learn three ways to acquire inner wealth.
1. Recognize that you will never be satisfied
No matter how much money, fame, admiration, or power you earn, it will never satisfy your thirst.
Your mind is built to project into the future or ruminate on past events.
Thinking about the future may increase your drive and determination, but it also prevents your ability to soak up the nectar of this moment. It leads to anxiety, over-analyzing decisions, and cost-benefit analyses that reduce the complexity of human connection.
Even if you’re able to interrupt your constant thought-stream, you’ll likely still fall victim to the trap of tying your emotional happiness to external objects.
If the object you desire is a BMW, soon after acquiring the object, it will lose its specialness and you will feel driven to buy something else, something even more impressive.
The thirst of your ego is endless.
Learn to move your mind from the future to the present moment. See through the trap of materialism.
To be happy, you need to recognize that no matter what you have, your mind will always want more.
2. Realize that if you cannot be happy now, you will never be happy
Because the mind is always looking forward to bigger and better things, no amount of materialistic success or power will satisfy its hunger.
If the mind is always hungry, then the game you’re playing is futile unless you develop the ability to allow happiness to enter your heart.
Learn to meditate and cultivate true presence.
When you are fully immersed in the here and now, you are in a state of no-mind. When the mind is absent, fulfillment and happiness enter your heart, enabling you to enjoy what you have now.
There’s no need to stop acquiring material wealth if that’s what you feel called to do, but you need to recognize that if you fail to pay attention to your inner wealth, your successes will only magnify your inner emptiness.
3. Learn the language of true happiness
Dominant cultural narratives spread lies about happiness.
They tell us to acquire power over others, to earn lots of money, and to engage in all sorts of pleasure-seeking activities. Unfortunately, after eating that nice meal, making your first million, or getting that big promotion, life returns to normal.
True happiness comes from being in the present moment.
The words happiness and happens have the same linguistic root because happiness happens as a byproduct of full engagement with the here-and-now.
In a state of no-mind, there is no contemplation of the future or the past, there is only full enjoyment of whatever is arising in consciousness.
No matter who you are, where you are, what you have, or what you don’t possess, you have the ability to experience a deep sense of contentment in life.
Maximizing one’s inner richness to match one’s outer wealth leads to a greater integration of balance, health, and fulfillment.
Discover the blockages that are preventing your own enjoyment of the journey so that you can overcome all obstacles and enjoy your road to success.
The view from the peak is more satisfying when you enjoy each step of the climb.
(A previous version of this article first appeared in Inc Magazine) | https://medium.com/wholistique/3-deceptively-simple-life-lessons-every-ambitious-individual-needs-to-learn-before-they-turn-30-2d44da601dd5 | ['Dr. Matthew Jones'] | 2020-11-19 07:36:53.113000+00:00 | ['Life Lessons', 'Inspiration', 'Happiness', 'Self', 'Psychology'] |
Create a weather app UI with 3D-like illustrations | Welcome to the third step by step UI guide! Let’s practice some UI skills — I encourage you to take some time and follow my little tutorial. You will most likely end up having a nice UI example to share on your socials and portfolio :)
This time, we’re going to create a visually pleasing, simple weather app with 3d-like illustrations.You won’t need any fancy software to do this — I work entirely in Sketch, but you will be able to recreate this tutorial in Figma.
Please note, that this is not a “legit” process on how to create a product. We will focus on creating a clean, consistent UI, and we skip all the research/user experience/whatever you like to call it/steps.
Basic idea and wireframe of an app
Since we’re going to create a weather app UI, let’s think about what elements and information are crucial for this kind of product:
• Time and place
• Weather condition (visual representation is always nice here!)
• Temperature
• Additional information (humidity, wind, UV, pollution level etc.)
• Weather forecast for the next days
Having these points in mind, let’s create a very rough wireframe: | https://uxdesign.cc/create-a-weather-app-ui-with-3d-like-illustrations-4a6a5686c5ea | ['Diana Malewicz'] | 2020-10-28 23:02:57.855000+00:00 | ['UI', 'UX', 'Tutorial', 'Design', 'Apps'] |
Oracle ADF BC REST — Performance Review and Tuning | I thought to check how well ADF BC REST scales and how fast it performs. For that reason, I implemented sample ADF BC REST application and executed JMeter stress load test against it. You can access source code for application and JMeter script on my GitHub repository. Application is called Blog Visitor Counter app for a reason — I’m using same app to count blog visitors. This means each time you are accessing blog page — ADF BC REST service is triggered in the background and it logs counter value with timestamp (no personal data).
Application structure is straightforward — ADF BC REST implementation:
When REST service is accessed (GET request is executed) — it creates and commits new row in the background (this is why I like ADF BC REST — you have a lot of power and flexibility in the backend), before returning total logged rows count:
New row is assigned with counter value from DB sequence, as well as with timestamp. Both values are calculated in Groovy. Another bonus point for ADF BC REST, besides writing logic in Java — you can do scripting in Groovy — this makes code simpler:
Thats it — ADF BC REST service is ready to run. You may wonder, how I’m accessing it from blog page. ADF BC REST services as any other REST, can be invoked through HTTP request. In this particular case, I’m calling GET operation through Ajax call in JavaScript on client side. This script is uploaded to blogger HTML:
Performance
I’m using JMeter to execute performance test. In below example, REST GET request is invoked in infinite loop by 100 concurrent threads. This creates constant load and allows to measure how ADF BC REST application performs under such load:
ADF BC REST scales well, with 100 concurrent threads it does request processing in 0.1–0.2 seconds. If we would compare it to ADF UI request processing time, it would be around 10 times faster. This is expected, because JSF and ADF Faces UI classes are not used during ADF BC REST request. Performance test statistics for 100 threads, see Avg logged time in milliseconds:
Tuning
1. Referenced Pool Size and Application Module Pooling ADF BC REST executes request is stateless mode, REST nature is stateless. I though to check, what this mean for Application Module tuning parameters. I have observed that changing Referenced Pool Size value doesn’t influence application performance, it works either with 0 or any other value in the same way. Referenced Pool Size parameter is not important for ADF BC REST runtime:
Application performs well under load, there are no passivations/activations logged, even when Referenced Pool Size is set to zero.
However, I found that it is still important to keep Enable Application Module Pooling = ON. If you switch it OFF — passivation will start to appear, which consumes processing power and is highly unrecommended. So, keep Enable Application Module Pooling = ON.
2. Disconnect Application Module Upon Release
It is important to set Disconnect Application Module Upon Release = ON (read more about it — ADF BC Tuning with Do Connection Pooling and TXN Disconnect Level). This will ensure there will be always near zero DB connections left open:
Otherwise if we keep Disconnect Application Module Upon Release = OFF:
DB connections will not be released promptly:
This summarises important points related to ADF BC REST tuning. | https://medium.com/oracledevs/oracle-adf-bc-rest-performance-review-and-tuning-c3acadecd477 | ['Andrej Baranovskij'] | 2018-05-29 17:13:19.610000+00:00 | ['Oracle', 'Oracle Adf', 'Rest', 'Java'] |
A Comprehensive Introduction to Different Types of Convolutions in Deep Learning | Intuitively, dilated convolutions “inflate” the kernel by inserting spaces between the kernel elements. This additional parameter l (dilation rate) indicates how much we want to widen the kernel. Implementations may vary, but there are usually l-1 spaces inserted between kernel elements. The following image shows the kernel size when l = 1, 2, and 4.
Receptive field for the dilated convolution. We essentially observe a large receptive field without adding additional costs.
In the image, the 3 x 3 red dots indicate that after the convolution, the output image is 3 x 3 pixels. Although all three dilated convolutions provide output of the same dimension, the receptive field observed by the model is dramatically different. The receptive field is 3 x 3 for l = 1. It is 7 x 7 for l = 2. The receptive field increases to 15 x 15 for l = 4. Interestingly, the numbers of parameters associated with these operations are essentially identical. We “observe” a large receptive field without adding additional costs. Because of that, dilated convolution is used to cheaply increase the receptive field of output units without increasing the kernel size, which is especially effective when multiple dilated convolutions are stacked one after another.
The authors in the paper “Multi-scale context aggregation by dilated convolutions” build a network out of multiple layers of dilated convolutions, where the dilation rate l increases exponentially at each layer. As a result, the effective receptive field grows exponentially while the number of parameters grows only linearly with layers!
The dilated convolution in the paper is used to systematically aggregate multi-scale contextual information without losing resolution. The paper shows that the proposed module increases the accuracy of state-of-the-art semantic segmentation systems at that time (2016). Please check out the paper for more information.
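To make this concrete, below is a minimal PyTorch sketch (my own illustration; the single-channel 64 x 64 input is an arbitrary choice, not from the paper). It builds 3 x 3 dilated convolutions with l = 1, 2, and 4 and prints the weight count, the effective kernel size of a single layer, and the output shape. The weight count stays at 9 for every dilation rate. Note that the 3 x 3 / 7 x 7 / 15 x 15 receptive fields discussed above come from stacking such layers, not from a single layer.

```python
# A minimal sketch of dilated convolution in PyTorch (illustrative sizes only).
import torch
import torch.nn as nn

x = torch.randn(1, 1, 64, 64)  # (batch, channels, height, width)

for l in (1, 2, 4):
    # 3 x 3 kernel "inflated" with dilation rate l: still 9 weights,
    # but the taps are spread further apart.
    conv = nn.Conv2d(1, 1, kernel_size=3, dilation=l)
    effective_kernel = 3 + (3 - 1) * (l - 1)  # 3, 5, 9 for l = 1, 2, 4
    print(l, conv.weight.numel(), effective_kernel, tuple(conv(x).shape))
```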
8. Separable Convolutions
Separable Convolutions are used in some neural net architectures, such as the MobileNet (Link). One can perform separable convolution spatially (spatially separable convolution) or depthwise (depthwise separable convolution).
8.1. Spatially Separable Convolutions
The spatially separable convolution operates on the 2D spatial dimensions of images, i.e. height and width. Conceptually, spatially separable convolution decomposes a convolution into two separate operations. In the example shown below, a Sobel kernel, which is a 3x3 kernel, is divided into a 3x1 and a 1x3 kernel.
A Sobel kernel can be divided into a 3 x 1 and a 1 x 3 kernel.
In convolution, the 3x3 kernel directly convolves with the image. In spatially separable convolution, the 3x1 kernel first convolves with the image. Then the 1x3 kernel is applied. This would require 6 instead of 9 parameters while doing the same operations.
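To make the decomposition concrete, here is a tiny NumPy check (my own illustration, assuming the horizontal Sobel kernel shown in the figure): the outer product of the 3 x 1 and 1 x 3 pieces reconstructs the full 3 x 3 kernel exactly, which is why convolving with the two 1D kernels in sequence gives the same result as convolving with the original 2D kernel.

```python
# The outer product of the two 1D Sobel pieces reconstructs the 3 x 3 kernel.
import numpy as np

col = np.array([[1], [2], [1]])       # 3 x 1 kernel
row = np.array([[-1, 0, 1]])          # 1 x 3 kernel
sobel = np.array([[-1, 0, 1],
                  [-2, 0, 2],
                  [-1, 0, 1]])        # full 3 x 3 Sobel kernel

print(np.array_equal(col @ row, sobel))  # True
```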
Moreover, one needs fewer multiplications in spatially separable convolution than in standard convolution. For a concrete example, convolution on a 5 x 5 image with a 3 x 3 kernel (stride=1, padding=0) requires scanning the kernel at 3 positions horizontally (and 3 positions vertically). That is 9 positions in total, indicated as the dots in the image below. At each position, 9 element-wise multiplications are applied. Overall, that’s 9 x 9 = 81 multiplications.
Standard convolution with 1 channel.
On the other hand, for spatially separable convolution, we first apply a 3 x 1 filter on the 5 x 5 image. We scan such a kernel at 5 positions horizontally and 3 positions vertically. That’s 5 x 3 = 15 positions in total, indicated as dots on the image below. At each position, 3 element-wise multiplications are applied. That is 15 x 3 = 45 multiplications. We have now obtained a 3 x 5 matrix. This matrix is now convolved with a 1 x 3 kernel, which scans the matrix at 3 positions horizontally and 3 positions vertically. For each of these 9 positions, 3 element-wise multiplications are applied. This step requires 9 x 3 = 27 multiplications. Thus, overall, the spatially separable convolution takes 45 + 27 = 72 multiplications, which is fewer than the 81 multiplications needed by the standard convolution.
Spatially separable convolution with 1 channel.
Let’s generalize the above examples a little bit. Let’s say we now apply convolutions on an N x N image with an m x m kernel, with stride=1 and padding=0. Traditional convolution requires (N-2) x (N-2) x m x m multiplications (keeping the (N-2) x (N-2) output size of the 3 x 3 example above). Spatially separable convolution requires N x (N-2) x m + (N-2) x (N-2) x m = (2N-2) x (N-2) x m multiplications. The ratio of computation costs between spatially separable convolution and the standard convolution is
For layers where the image size N is larger than the filter size (N >> m), this ratio becomes 2 / m. It means that in this asymptotic situation (N >> m), the computational cost of spatially separable convolution is 2/3 of that of the standard convolution for a 3 x 3 filter. It is 2 / 5 for a 5 x 5 filter, 2 / 7 for a 7 x 7 filter, and so on.
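Here is a quick numeric sanity check of these counts (a small sketch of my own that simply codes the formulas above):

```python
# Multiplication counts for an N x N image and an m x m kernel (stride=1, padding=0),
# using the formulas from the text.
def standard_cost(N, m):
    return (N - 2) * (N - 2) * m * m

def separable_cost(N, m):
    return (2 * N - 2) * (N - 2) * m

print(standard_cost(5, 3), separable_cost(5, 3))        # 81 72 -- the 5 x 5 example above
for m in (3, 5, 7):
    ratio = separable_cost(1000, m) / standard_cost(1000, m)
    print(m, round(ratio, 3), round(2 / m, 3))          # the ratio approaches 2 / m for N >> m
```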
Although spatially separable convolutions save cost, they are rarely used in deep learning. One of the main reasons is that not all kernels can be divided into two smaller kernels. If we replace all traditional convolutions by the spatially separable convolution, we limit ourselves to searching only a subset of all possible kernels during training. The training results may be sub-optimal.
8.2. Depthwise Separable Convolutions
Now, let’s move on to the depthwise separable convolutions, which are much more commonly used in deep learning (e.g. in MobileNet and Xception). The depthwise separable convolutions consist of two steps: depthwise convolutions and 1x1 convolutions.
Before describing these steps, it is worth revisiting the 2D convolution and 1 x 1 convolution we talked about in the previous sections. Let’s have a quick recap of standard 2D convolutions. For a concrete example, let’s say the input layer is of size 7 x 7 x 3 (height x width x channels), and the filter is of size 3 x 3 x 3. After the 2D convolution with one filter, the output layer is of size 5 x 5 x 1 (with only 1 channel).
Standard 2D convolution to create output with 1 layer, using 1 filter.
Typically, multiple filters are applied between two neural net layers. Let’s say we have 128 filters here. After applying these 128 2D convolutions, we have 128 5 x 5 x 1 output maps. We then stack these maps into a single layer of size 5 x 5 x 128. By doing that, we transform the input layer (7 x 7 x 3) into the output layer (5 x 5 x 128). The spatial dimensions, i.e. height & width, are shrunk, while the depth is extended.
Standard 2D convolution to create output with 128 layer, using 128 filters.
Now with depthwise separable convolutions, let’s see how we can achieve the same transformation.
First, we apply depthwise convolution to the input layer. Instead of using a single filter of size 3 x 3 x 3 as in 2D convolution, we use 3 kernels separately. Each filter has size 3 x 3 x 1. Each kernel convolves with 1 channel of the input layer (1 channel only, not all channels!). Each such convolution provides a map of size 5 x 5 x 1. We then stack these maps together to create a 5 x 5 x 3 image. After this, we have an output of size 5 x 5 x 3. We have now shrunk the spatial dimensions, but the depth is still the same as before.
Depthwise separable convolution — first step: Instead of using a single filter of size 3 x 3 x 3 as in 2D convolution, we use 3 kernels separately. Each filter has size 3 x 3 x 1. Each kernel convolves with 1 channel of the input layer (1 channel only, not all channels!). Each such convolution provides a map of size 5 x 5 x 1. We then stack these maps together to create a 5 x 5 x 3 image. After this, we have an output of size 5 x 5 x 3.
As the second step of depthwise separable convolution, to extend the depth, we apply the 1x1 convolution with kernel size 1x1x3. Convolving the 5 x 5 x 3 input image with each 1 x 1 x 3 kernel provides a map of size 5 x 5 x 1.
Thus, after applying 128 1x1 convolutions, we can have a layer with size 5 x 5 x 128.
Depthwise separable convolution — second step: apply multiple 1 x 1 convolutions to modify depth.
With these two steps, depthwise separable convolution also transforms the input layer (7 x 7 x 3) into the output layer (5 x 5 x 128).
The overall process of depthwise separable convolution is shown in the figure below.
The overall process of depthwise separable convolution.
So, what’s the advantage of doing depthwise separable convolutions? Efficiency! One needs far fewer operations for depthwise separable convolutions compared to 2D convolutions.
Let’s recall the computation costs for our example of 2D convolutions. There are 128 3x3x3 kernels that move 5x5 times. That is 128 x 3 x 3 x 3 x 5 x 5 = 86,400 multiplications.
How about the separable convolution? In the first depthwise convolution step, there are 3 3x3x1 kernels that move 5x5 times. That is 3 x 3 x 3 x 1 x 5 x 5 = 675 multiplications. In the second step of 1 x 1 convolution, there are 128 1x1x3 kernels that move 5x5 times. That is 128 x 1 x 1 x 3 x 5 x 5 = 9,600 multiplications. Thus, overall, the depthwise separable convolution takes 675 + 9,600 = 10,275 multiplications. This is only about 12% of the cost of the 2D convolution!
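The same comparison can be written as a short PyTorch sketch (my own illustration; bias terms are turned off only to keep the counts clean). The depthwise step uses groups=3 so that each 3 x 3 x 1 kernel sees a single input channel, and the pointwise step is a plain 1 x 1 convolution to 128 channels:

```python
# The 7 x 7 x 3 -> 5 x 5 x 128 example as a depthwise + pointwise pair in PyTorch.
import torch
import torch.nn as nn

x = torch.randn(1, 3, 7, 7)                                         # (batch, channels, H, W)

standard  = nn.Conv2d(3, 128, kernel_size=3, bias=False)            # standard 2D convolution
depthwise = nn.Conv2d(3, 3, kernel_size=3, groups=3, bias=False)    # step 1: one kernel per channel
pointwise = nn.Conv2d(3, 128, kernel_size=1, bias=False)            # step 2: 128 1 x 1 convolutions

print(standard(x).shape)                   # torch.Size([1, 128, 5, 5])
print(pointwise(depthwise(x)).shape)       # torch.Size([1, 128, 5, 5])

# Weight counts mirror the multiplication counts above (each weight is applied
# at the same 5 x 5 = 25 output positions):
print(standard.weight.numel())                               # 3456 -> 3456 * 25 = 86,400
print(depthwise.weight.numel(), pointwise.weight.numel())    # 27, 384 -> (27 + 384) * 25 = 10,275
```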
So, for an image of arbitrary size, how much time can we save if we apply depthwise separable convolution? Let’s generalize the above examples a little bit. Now, for an input image of size H x W x D, we want to do 2D convolution (stride=1, padding=0) with Nc kernels of size h x h x D. This transforms the input layer (H x W x D) into the output layer (H-h+1 x W-h+1 x Nc). The overall number of multiplications needed is
Nc x h x h x D x (H-h+1) x (W-h+1)
On the other hand, for the same transformation, the number of multiplications needed for depthwise separable convolution is
D x h x h x 1 x (H-h+1) x (W-h+1) + Nc x 1 x 1 x D x (H-h+1) x (W-h+1) = (h x h + Nc) x D x (H-h+1) x (W-h+1)
The ratio of multiplications between depthwise separable convolution and 2D convolution is now:
For most modern architectures, it is common that the output layer has many channels, e.g. several hundreds if not several thousands. For such layers (Nc >> h x h), the above expression reduces to 1 / (h x h). It means that in this asymptotic situation, if 3 x 3 filters are used, 2D convolutions spend 9 times more multiplications than depthwise separable convolutions. For 5 x 5 filters, 2D convolutions spend 25 times more multiplications.
Are there any drawbacks of using depthwise separable convolutions? Sure, there are. The depthwise separable convolutions reduce the number of parameters in the convolution. As such, for a small model, the model capacity may be decreased significantly if the 2D convolutions are replaced by depthwise separable convolutions. As a result, the model may become sub-optimal. However, if properly used, depthwise separable convolutions can give you the efficiency without dramatically damaging your model performance.
9. Flattened Convolutions
The flattened convolution was introduced in the paper “Flattened convolutional neural networks for feedforward acceleration”. Intuitively, the idea is to apply filter separation. Instead of applying one standard convolution filter to map the input layer to an output layer, we separate this standard filter into 3 1D filters. This idea is similar to that of the spatially separable convolution described above, where a spatial filter is approximated by two rank-1 filters.
The image is adopted from the paper.
One should notice that if the standard convolution filter is a rank-1 filter, such a filter can always be separated into cross-products of three 1D filters. But this is a strong condition, and the intrinsic rank of the standard filter is higher than one in practice. As pointed out in the paper: “As the difficulty of classification problem increases, the more number of leading components is required to solve the problem… Learned filters in deep networks have distributed eigenvalues and applying the separation directly to the filters results in significant information loss.”
To alleviate this problem, the paper restricts connections in receptive fields so that the model can learn 1D separated filters upon training. The paper claims that training with flattened networks that consist of consecutive sequences of 1D filters across all directions in 3D space provides comparable performance to standard convolutional networks, with much lower computation costs due to the significant reduction of learnable parameters.
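Just to give a feel for the idea, here is a rough PyTorch sketch (my own approximation; the exact layer arrangement and training scheme in the paper differ). One 3 x 3 filter bank is replaced by consecutive 1D convolutions, one across the channel direction (1 x 1), one vertical (3 x 1), and one horizontal (1 x 3), which cuts the parameter count considerably:

```python
# A rough approximation of a flattened convolution as three consecutive 1D convolutions.
import torch
import torch.nn as nn

flattened = nn.Sequential(
    nn.Conv2d(64, 64, kernel_size=1, bias=False),                                   # channel direction
    nn.Conv2d(64, 64, kernel_size=(3, 1), padding=(1, 0), groups=64, bias=False),   # vertical
    nn.Conv2d(64, 64, kernel_size=(1, 3), padding=(0, 1), groups=64, bias=False),   # horizontal
)
standard = nn.Conv2d(64, 64, kernel_size=3, padding=1, bias=False)

x = torch.randn(1, 64, 32, 32)
print(flattened(x).shape, standard(x).shape)             # both torch.Size([1, 64, 32, 32])
print(sum(p.numel() for p in flattened.parameters()),    # 4,480 parameters
      standard.weight.numel())                           # 36,864 parameters
```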
10. Grouped Convolution
Grouped convolution was introduced in the AlexNet paper (link) in 2012. The main reason for implementing it was to allow network training over two GPUs with limited memory (1.5 GB memory per GPU). The AlexNet below shows two separate convolution paths at most of the layers. It’s doing model-parallelization across two GPUs (of course one can do multi-GPU parallelization if more GPUs are available).
This image is adopted from the AlexNet paper.
Here we describe how the grouped convolutions work. First of all, conventional 2D convolutions follow the steps shown below. In this example, the input layer of size (7 x 7 x 3) is transformed into the output layer of size (5 x 5 x 128) by applying 128 filters (each filter is of size 3 x 3 x 3). Or in the general case, the input layer of size (Hin x Win x Din) is transformed into the output layer of size (Hout x Wout x Dout) by applying Dout kernels (each is of size h x w x Din).
Standard 2D convolution.
In grouped convolution, the filters are separated into different groups. Each group is responsible for a conventional 2D convolution with a certain depth. The following examples can make this clearer.
Grouped convolution with 2 filter groups.
Above is the illustration of grouped convolution with 2 filter groups. In each filter group, the depth of each filter is only half of that in the nominal 2D convolutions. They are of depth Din / 2. Each filter group contains Dout / 2 filters. The first filter group (red) convolves with the first half of the input layer ([:, :, 0:Din/2]), while the second filter group (blue) convolves with the second half of the input layer ([:, :, Din/2:Din]). As a result, each filter group creates Dout/2 channels. Overall, the two groups create 2 x Dout/2 = Dout channels. We then stack these channels in the output layer with Dout channels.
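PyTorch exposes this directly through the groups argument of Conv2d. Below is a minimal sketch (my own illustration; I picked Din = 4 so that the channels split evenly into 2 groups). With 2 filter groups, each of the 128 filters sees only Din / 2 = 2 input channels, so the weight count is halved compared to the standard convolution:

```python
# Grouped convolution with 2 filter groups vs. a standard 2D convolution.
import torch
import torch.nn as nn

x = torch.randn(1, 4, 7, 7)                                # Din = 4

standard = nn.Conv2d(4, 128, kernel_size=3, bias=False)
grouped  = nn.Conv2d(4, 128, kernel_size=3, groups=2, bias=False)

print(standard(x).shape, grouped(x).shape)                 # both torch.Size([1, 128, 5, 5])
print(standard.weight.numel(), grouped.weight.numel())     # 4608 vs. 2304 -- half the parameters
```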
10.1. Grouped Convolution vs. Depthwise Convolution
You may already observe some linkage and difference between grouped convolution and the depthwise convolution used in the depthwise separable convolution. If the number of filter groups is the same as the number of input layer channels, each filter is of depth Din / Din = 1. This is the same filter depth as in depthwise convolution.
On the other hand, each filter group now contains Dout / Din filters. Overall, the output layer is of depth Dout. This is different from that in depthwise convolution, which does not change the layer depth. The layer depth is extended later by 1x1 convolution in the depthwise separable convolution.
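A tiny PyTorch check (again my own illustration) makes this concrete: with groups equal to the number of input channels, each filter has depth 1, just as in depthwise convolution, yet the number of output channels can still differ from the input depth.

```python
# Grouped convolution with groups == number of input channels.
import torch.nn as nn

conv = nn.Conv2d(4, 8, kernel_size=3, groups=4, bias=False)
# 8 filters, each of depth 4 / 4 = 1; each group holds 8 / 4 = 2 (= Dout / Din) filters.
print(conv.weight.shape)   # torch.Size([8, 1, 3, 3])
```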
There are a few advantages of doing grouped convolution.
The first advantage is efficient training. Since the convolutions are divided into several paths, each path can be handled separately by a different GPU. This procedure allows the model to be trained over multiple GPUs in a parallel fashion. Such model-parallelization over multiple GPUs allows more images to be fed into the network per step, compared to training everything on one GPU. Model-parallelization is considered to be better than data parallelization here. The latter splits the dataset into batches and then trains on each batch. However, when the batch size becomes too small, we are essentially doing stochastic rather than batch gradient descent. This would result in slower and sometimes poorer convergence.
Grouped convolutions become important for training very deep neural nets, as in the ResNeXt architecture shown below.
The image is adapted from the ResNeXt paper.
The second advantage is that the model is more efficient, i.e. the number of model parameters decreases as the number of filter groups increases. In the previous examples, filters have h x w x Din x Dout parameters in a nominal 2D convolution. Filters in a grouped convolution with 2 filter groups have (h x w x Din/2 x Dout/2) x 2 parameters. The number of parameters is reduced by half.
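This parameter reduction is easy to check in code. Below is a minimal sketch using PyTorch's groups argument (the channel counts and kernel size here are arbitrary illustrative choices, and bias terms are disabled so the counts match the h x w x Din x Dout formula above):

import torch.nn as nn

d_in, d_out = 8, 128
standard_conv = nn.Conv2d(d_in, d_out, kernel_size=3, bias=False)
grouped_conv = nn.Conv2d(d_in, d_out, kernel_size=3, groups=2, bias=False)

count_params = lambda m: sum(p.numel() for p in m.parameters())
print(count_params(standard_conv)) # 3 x 3 x 8 x 128 = 9216
print(count_params(grouped_conv)) # (3 x 3 x 4 x 64) x 2 = 4608, i.e. half

Setting groups equal to the number of input channels turns the same layer into the depthwise convolution discussed in subsection 10.1 above.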
The third advantage is a bit surprising. Grouped convolution may provide a better model than a nominal 2D convolution. Another fantastic blog (link) explains this. Here is a brief summary.
The reason relates to the sparsity of filter relationships. The image below shows the correlation across filters of adjacent layers. The relationship is sparse.
The correlation matrix between filters of adjacent layers in a Network-in-Network model trained on CIFAR10. Pairs of highly correlated filters are brighter, while less correlated filters are darker. The image is adapted from this article.
How about the correlation map for grouped convolution?
The correlations between filters of adjacent layers in a Network-in-Network model trained on CIFAR10, when trained with 1, 2, 4, 8 and 16 filter groups. The image is adapted from this article.
The image above shows the correlation across filters of adjacent layers, when the model is trained with 1, 2, 4, 8, and 16 filter groups. The article proposes one line of reasoning (link): “The effect of filter groups is to learn with a block-diagonal structured sparsity on the channel dimension… the filters with high correlation are learned in a more structured way in the networks with filter groups. In effect, filter relationships that don't have to be learned are no longer parameterized. In reducing the number of parameters in the network in this salient way, it is not as easy to over-fit, and hence a regularization-like effect allows the optimizer to learn more accurate, more efficient deep networks.”
AlexNet conv1 filter separation: as noted by the authors, filter groups appear to structure learned filters into two distinct groups, black-and-white and color filters. The image is adapted from the AlexNet paper.
In addition, each filter group learns a unique representation of the data. As noticed by the authors of AlexNet, filter groups appear to structure learned filters into two distinct groups, black-and-white filters and color filters.
11. Shuffled Grouped Convolution
Shuffled grouped convolution was introduced in the ShuffleNet paper from Megvii Inc (Face++). ShuffleNet is a computation-efficient convolution architecture, which is designed specifically for mobile devices with very limited computing power (e.g. 10–150 MFLOPs).
The ideas behind the shuffled grouped convolution are linked to the ideas behind grouped convolution (used in MobileNet and ResNeXt for examples) and depthwise separable convolution (used in Xception).
Overall, the shuffled grouped convolution involves grouped convolution and channel shuffling.
In the section about grouped convolution, we learned that the filters are separated into different groups. Each group is responsible for conventional 2D convolutions with a certain depth. The total operations are significantly reduced. For example, in the figure below, we have 3 filter groups. The first filter group convolves with the red portion of the input layer. Similarly, the second and the third filter groups convolve with the green and blue portions of the input. The kernel depth in each filter group is only 1/3 of the total channel count in the input layer. In this example, after the first grouped convolution GConv1, the input layer is mapped to the intermediate feature map. This feature map is then mapped to the output layer through the second grouped convolution GConv2.
Grouped convolution is computationally efficient. But the problem is that each filter group only handles information passed down from a fixed portion of the previous layers. For example, in the image above, the first filter group (red) only processes information that is passed down from the first 1/3 of the input channels. The third filter group (blue) only processes information that is passed down from the last 1/3 of the input channels. As such, each filter group is limited to learning only a few specific features. This property blocks information flow between channel groups and weakens representations during training. To overcome this problem, we apply channel shuffling.
The idea of channel shuffle is that we want to mix up the information from different filter groups. In the image below, we get the feature map after applying the first grouped convolution GConv1 with 3 filter groups. Before feeding this feature map into the second grouped convolution, we first divide the channels in each group into several subgroups. Then we mix up these subgroups.
Channel shuffle.
After such shuffling, we continue performing the second grouped convolution GConv2 as usual. But now, since the information in the shuffled layer has already been mixed, we essentially feed each group in GConv2 with different subgroups in the feature map layer (or in the input layer). As a result, we allow the information flow between channels groups and strengthen the representations.
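For reference, this shuffle is typically implemented with nothing more than a reshape and a transpose. The sketch below uses NumPy and a tiny made-up example so the reordering is easy to see:

import numpy as np

def channel_shuffle(x, groups):
    # x has shape (channels, height, width); channels must be divisible by groups
    c, h, w = x.shape
    x = x.reshape(groups, c // groups, h, w) # split the channels into (groups, channels per group)
    x = x.transpose(1, 0, 2, 3) # swap the two channel axes so subgroups from different groups interleave
    return x.reshape(c, h, w) # flatten back to (channels, height, width)

x = np.arange(6).reshape(6, 1, 1) # channel indices 0..5, i.e. groups [0, 1], [2, 3], [4, 5]
print(channel_shuffle(x, groups=3).ravel()) # [0 2 4 1 3 5]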
12. Pointwise Grouped Convolution
The ShuffleNet paper (link) also introduced the pointwise grouped convolution. Typically for grouped convolution such as in MobileNet (link) or ResNeXt (link), the group operation is performed on the 3x3 spatial convolution, but not on 1 x 1 convolution.
The ShuffleNet paper argues that 1 x 1 convolutions are also computationally costly. It suggests applying grouped convolution to 1 x 1 convolutions as well. Pointwise grouped convolution, as the name suggests, performs group operations for 1 x 1 convolutions. The operation is identical to grouped convolution, with only one modification — it is performed on 1x1 filters instead of NxN filters (N>1).
In the ShuffleNet paper, the authors utilized three types of convolutions we have learned about: (1) shuffled grouped convolution; (2) pointwise grouped convolution; and (3) depthwise separable convolution. Such an architecture design significantly reduces the computation cost while maintaining accuracy. For example, the classification errors of ShuffleNet and AlexNet are comparable on actual mobile devices. However, the computation cost has been dramatically reduced from 720 MFLOPs in AlexNet down to 40–140 MFLOPs in ShuffleNet. With relatively small computation cost and good model performance, ShuffleNet gained popularity in the field of convolutional neural networks for mobile devices.
Thank you for reading the article. Please feel free to leave questions and comments below.
Reference
Blogs & articles
“An Introduction to different Types of Convolutions in Deep Learning” (Link)
“Review: DilatedNet — Dilated Convolution (Semantic Segmentation)” (Link)
“ShuffleNet: An Extremely Efficient Convolutional Neural Network for Mobile Devices” (Link)
Separable convolutions: “A Basic Introduction to Separable Convolutions” (Link)
Inception network: “A Simple Guide to the Versions of the Inception Network” (Link)
“A Tutorial on Filter Groups (Grouped Convolution)” (Link)
“Convolution arithmetic animation” (Link)
“Up-sampling with Transposed Convolution” (Link)
“Intuitively Understanding Convolutions for Deep Learning” (Link)
Papers | https://towardsdatascience.com/a-comprehensive-introduction-to-different-types-of-convolutions-in-deep-learning-669281e58215 | ['Kunlun Bai'] | 2019-02-11 22:44:56.620000+00:00 | ['Convolutional Network', 'Machine Learning', 'Deep Learning', 'Artificial Intelligence', 'Towards Data Science'] |
Slicing data with Images in Power BI | Slicing data with Images in Power BI
You can slice data in Power BI in many different ways. But, did you know that you can also use images for slicing and dicing? It’s simpler than you think…
Photo by Andrew Robert Lauko on Unsplash
You know that I'm a huge Power BI evangelist. One of the things I like most about Power BI is that it gives you the flexibility to tell the story with your data in many different ways. Basically, you're not limited to ready-made solutions only (even though those solutions are quite often sophisticated enough to satisfy all your reporting needs).
With some creativity and a little tweaking here and there, you can create an unforgettable experience for your users. And they will appreciate you because of that!
Simple request — simple solution!
A few weeks ago, I was working on a report for our Sales department. Guys from Sales are not very demanding when it comes to reporting design — they want to see their numbers as quickly as possible, and don’t pay too much attention to the visual look.
Essentially, they would be grateful for having something like this:
So, two simple visuals, and a slicer for slicing and dicing the data. And there is nothing wrong with it! But, since I’ve had some additional time to invest in this report’s development, I’ve decided to offer them a much more eye-pleasing solution…
As usual, I’m using a Contoso sample database in all of my examples.
Take advantage of using PowerPoint
When I want to create a nice report background, I like using PowerPoint. This gives me the possibility to group all ideas, visuals' placeholders, text boxes, etc. in one single place. Working this way will also reduce the number of visuals on my Power BI report canvas and, consequently, the time needed for report rendering.
Using PowerPoint, you are substituting multiple visuals with only one! The trick is — as soon as you create your desired background, just save it as a .png or .jpeg file:
By the way, here is what my report background will look like:
Let me just briefly explain the idea for the new report look. Two large placeholders in the middle of the canvas will contain the main visuals. On the left-hand side is where the magic happens!
I've decided to use images instead of a plain, simple slicer visual. This way, my users will have an app-like experience, like they are navigating through a website.
Bookmarks hold the key!
Bookmarks are one of the coolest features in Power BI! Essentially, what a Bookmark does is capture the current state of the report, so you can use it as a reference in your actions through buttons, images, shapes… This basically gives you the possibility to create an app-like experience for your report and make it look interactive.
There are really dozens of use cases where you can apply bookmarking, and this is one of them.
The first step is to put our background created with PowerPoint, as a background image in Power BI:
Now, I will insert icons in the placeholders on the left. These icons will be used to navigate the user between various bookmarks:
Short explanation about the images used in the report:
Images marked with the green arrow will be used as “slicers”
Images marked with the red arrow will be used to indicate which “slicer” is currently applied
The image marked with the yellow arrow will help us to revert to the starting point and see data for all product categories (no slicers applied)
The first bookmark I’m creating will capture the default look of the report — all product categories included:
The most important thing here is to leave the Data option checked, as that will capture the current state of the page, including all filters applied (in our case, no filters had been applied)!
Let’s now create a view for the Audio product category. I will hide all images except for Audio. The key thing here is to apply the filter in the Filters pane, in order to show data for the Audio product category only!
Again, leave the Data option checked when creating a bookmark. Repeat the same steps for the other product categories, so the final look of the Bookmarks pane will resemble something like this:
Now, we need to specify actions for our images, so they work as navigation between different bookmarks. Let me show you how it’s done for the Audio product category, and you can then easily follow the same pattern for the remaining categories:
The final step is to hide the Filters pane from our users, as they don’t need to know how the report works under the hood:
After I published my report to Power BI Service, here is how it looks:
If you ask me, that looks much better compared to our starting point:)
Conclusion
As I’ve already said many times, Power BI is so powerful when it comes to flexibility. You’ve just seen how some basic background formatting, clever usage of bookmarks and images, can push your report to a whole new level.
I don’t need to say that my Sales team was in awe after getting this report. And that is the most important takeaway from this article: always try to put some additional effort, in order to enable your users to enjoy using the report!
Thanks for reading!
Subscribe here to get more insightful data articles! | https://towardsdatascience.com/slicing-data-with-images-in-power-bi-32b210449ccc | ['Nikola Ilic'] | 2020-11-16 14:30:41.553000+00:00 | ['Creativity', 'Towards Data Science', 'Data Science', 'Power Bi', 'Data Visualization'] |
4 Amazing Templates to Make Product Strategy an Asset, Not a Liability | 4 Amazing Templates to Make Product Strategy an Asset, Not a Liability Brandon Lopez Follow Mar 10 · 2 min read
I recently wrote a beginner’s guide to product marketing to help startups and SMBs understand the acute need for the function and common pitfalls to avoid. Once you have the right resources in place, the next step is to succeed at strategic planning, roadmap development, crafting a narrative/messaging/positioning, and building a product point of view.
Thanks go to my selfless friend Chris Mann, who's been a product leader at Bizo (acquired by LinkedIn) and Coremetrics (acquired by IBM), and CEO at BrightFunnel (acquired by Terminus); he's shared a few of his amazing and actionable templates to help get you started.
You can find these product-related templates on Valley Innovators' site here, or you can use the summary with links below. Keep in mind these templates have been battle-tested in the field by Chris and they work in practice. Enjoy!
Strategic Planning Matrix
View Template
Improve the planning process and get stakeholders across the organization including executives, product, marketing, sales, and operations aligned on key strategic and product initiatives with this google sheet template. Use this template to:
Rank strategic initiatives to improve performance
Assign clear ownership and gain better alignment
Increase accountability across teams and owners
Improve communication and shorten timelines
Messaging Blueprint
View Template
Leverage this blueprint to achieve better collaboration and alignment on messaging with this google doc template. Use this template to:
Align on the problem your company solves, differentiators, value propositions, target audience, and core capabilities.
Develop initial messaging for your PR boilerplate and elevator pitch.
Track and analyze competitor messaging.
Solution Spec
View Template
Gather critical product and executive input in this Solution Spec google doc. Use this template to:
Think through product strategy and closely align execs with product leadership
Discuss roadmap timelines, phasing, and implications for better utilization of resources.
Articulate risks and dependencies to remove critical roadblocks
Product Point of View (POV)
View Template
Leverage this POV framework to think through and gain alignment on strategic product initiatives. Use this template to:
Think about and articulate the fundamental challenges and opportunities of a particular product strategy.
Gain input from other stakeholders for better clarity and alignment.
Challenge assumptions on market opportunity, product roadmap, risks, and resources needed for success.
A special thanks again to Chris Mann for sharing these templates. Listen to Valley Innovators’ podcast with Chris for more insights on how to use them or contact Persimmon Marketing for help implementing them to improve your product marketing and development process. | https://medium.com/datadriveninvestor/4-amazing-templates-to-make-product-strategy-an-asset-not-a-liability-585a5e6c03ec | ['Brandon Lopez'] | 2020-03-11 12:02:30.934000+00:00 | ['Planning', 'Startup', 'Strategy', 'Product Development', 'Product Management'] |
How to Steal (I Mean Borrow) Clients from Your Competitors | Want to steal clients from your competitors? No, because then you just sound like a jerk. So, I guess a nicer way to say it would be, “Want the clients of your competition to hire you instead?”. Yeah, that’s better. Probably not what we all say behind closed doors, though, right?
We all want more business. In order for that to happen, we need people to hire us. So, what are some realistic ways to go about doing it? In our experience, it comes down to combining an accessible image and service with unique value points and smart marketing. Let’s break it down a little and discuss how these components can carve you out a larger piece of your industry’s pie.
Accessibility
Is your homepage up to date? Is it something you’d feel comfortable navigating if you had never seen it before? Do you respond to customer complaints on social media? Does your online blog offer a personalized voice of authority in your industry?
These marketing strategies are about increasing the accessibility of your company. If you want to lead the industry, you need to be accessible to your customers. This means providing communication channels for questions, feedback, and education that are always open and expertly managed. It also means using easy-to-understand language, clear calls-to-action, and strong, relevant design. Incorporating aspects of your brand identity that highlight ease of use and accessibility will allow you to win over customers by offering them the comfort and convenience they want. Remember, when choosing between simple and complex, customers will choose simple every time.
Offering New Value
Another great way to differentiate yourself from the competition is by honing in on a new value element you can offer your customer base that the competition has yet to tap into. For example, Amazon Prime’s subscription success skyrocketed after they streamed online shows and movies. One of the best ways to decide what new value element to incorporate is by gathering customer feedback and performing a competitive audit (link to both blogs). Find where your niche in the market is and then capitalize on it.
Strategic Targeting
Being available for your customers is great, and having valuable elements to offer them is essential. However, waiting for them to come to you is not going to be enough to win them from your competition. To truly steal (I mean borrow) your competitor’s customers, you’re going to need to use technology to your advantage and intentionally and strategically target your competitor’s customers with content. For example, Facebook’s Lookalike Audience tool lets you directly target fans of other businesses, and Twitter’s Follower targeting creates segmented audiences based on the businesses or specific thought leaders your customers already respect. Once you get their attention, hit your competitor’s customers with awesome content that will make them remember you and view you as an asset. If you can do that, you’re sure to make some waves.
Understanding the Competition
Let's face it, it's all about being in the right place at the right time. We all have competition. Do you understand your competition? I would say that a lot of you probably have a good hunch about what they offer. You probably have even been in their store or tried one of their products. That's great, but do you REALLY understand what makes you different? Why would someone go through all the trouble of severing a relationship with one of your competitors to give you a chance? It's not an easy process. There has to be a specific “pain” that we are solving that this potential client can't get (or is not getting) from anyone else. That's why competitive analysis is such a big deal and why it's one of the first things we do.
At the end of the day, if you want to BORROW your competitor’s customers you need to be there when they aren’t and perform at 99% (because 100% is a bit unrealistic) every time.
—–
Want to be more effective with the time you have creating marketing strategies? You can read The 5 Step Process For a More Structured Marketing Strategy eBook for a more in-depth discussion of these concepts and how you can begin to implement them. | https://medium.com/thoughts-of-a-brand-strategist/how-to-steal-i-mean-borrow-clients-from-your-competitors-83c7bcfe1e77 | ['Skot Waldron'] | 2018-05-08 03:03:53.787000+00:00 | ['Digital Marketing', 'Competitive Analysis', 'Marketing', 'Branding'] |
A million things… | There are millions of things worth doing.
Millions of things worth seeing.
Tens of thousands of people worth kissing.
Hundreds of places worth visiting.
Dozens of books worth writing.
Hundreds of thousands of conversations worth having.
Thousands of movies worth watching.
Millions of people worth hugging.
Hundreds of thousands of books worth reading.
Millions of songs worth listening to.
And one life worth living.
Don’t waste it… | https://medium.com/thought-pills/a-million-things-2d1f12fab910 | ['Yann Girard'] | 2017-11-06 19:18:32.553000+00:00 | ['Life Lessons', 'Inspiration', 'Writing', 'Life', 'Poetry'] |
Under the Hood of K-Nearest Neighbors (KNN) and Popular Model Validation Techniques | Under the Hood of K-Nearest Neighbors (KNN) and Popular Model Validation Techniques
Exploring the mathematics and methodology behind the K-Nearest Neighbors Algorithm, Traditional Train/Test Split, and Repeated K-Fold Cross Validation
This article contains in-depth algorithm overviews of the K-Nearest Neighbors algorithm (Classification and Regression) as well as the following Model Validation techniques: Traditional Train/Test Split and Repeated K-Fold Cross Validation. The algorithm overviews include detailed descriptions of the methodologies and mathematics that occur internally with accompanying concrete examples. Also included are custom, fully functional/flexible frameworks of the above algorithms built from scratch using primarily NumPy. Finally, there is a fully integrated Case Study which deploys several of the custom frameworks (KNN-Classification, Repeated K-Fold Cross Validation) through a full Machine Learning workflow alongside the Iris Flowers dataset to find the optimal KNN model.
GitHub: https://github.com/Amitg4/KNN_ModelValidation
Please use the imports below to run any included code within your own notebook or coding environment.
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
from sklearn.datasets import load_iris
from statistics import mean,stdev
from itertools import combinations
import math
%matplotlib inline
KNN Algorithm Overview
K-Nearest Neighbors is a popular pattern recognition algorithm used in Supervised Machine Learning to handle both classification and regression-based tasks. At a high level, this algorithm operates according to an intuitive methodology:
New points' target values are predicted according to the target values of the K most similar points stored in the model's training data.
Traditionally, ‘similar’ is interpreted as some form of a distance calculation. Therefore, another way to interpret the KNN prediction methodology is that predictions are based off the K closest points within the training data, hence the name K-Nearest Neighbors. With the concept of distance introduced, a good initial question to answer is how distance will be computed. While there are several different mathematical metrics that are viewed as a form of computing distance, this study will highlight the 3 following distance metrics: Euclidean, Manhattan, and Chebyshev.
KNN Algorithm Overview — Distance Metrics
Euclidean distance, based in the Pythagorean Theorem, finds the straight line distance between two points in space. In other words, this is equivalent to finding the shortest distance between two points by drawing a single line between Point A and Point B. Manhattan distance, based in taxicab geometry, is the sum of all N distances between Point A and Point B in N dimensional feature space. For example, in 2D space the Manhattan distance between Point A and Point B would be the sum of the vertical and horizontal distance. Chebyshev distance is the maximum distance between Point A and Point B in N dimensional feature space. For example, in 2D space the Chebyshev distance between Point A and Point B would be max(horizontal distance, vertical distance), in other words whichever distance is greater between the two distances.
Consider Point A = (A_1, A_2, … , A_N) and Point B = (B_1, B_2, … , B_N) both exist in N dimensional feature space. The distance between these two points can be described by the following formulas:
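Euclidean distance: d(A, B) = sqrt( (A_1 - B_1)^2 + (A_2 - B_2)^2 + … + (A_N - B_N)^2 )
Manhattan distance: d(A, B) = |A_1 - B_1| + |A_2 - B_2| + … + |A_N - B_N|
Chebyshev distance: d(A, B) = max( |A_1 - B_1|, |A_2 - B_2|, … , |A_N - B_N| )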
As with most mathematical concepts, distance metrics are often easier to understand with a concrete example to visualize. Consider Point A = (0,0) and Point B = (3,4) in 2D feature space.
sns.set_style('darkgrid')
fig = plt.figure()
axes = fig.add_axes([0,0,1,1])
axes.scatter(x=[0,3],y=[0,4],s = 50)
axes.plot([0,3],[0,4],c='blue')
axes.annotate('X',[1.5,1.8],fontsize = 14,fontweight = 'bold')
axes.plot([0,0],[0,4],c='blue')
axes.annotate('Y',[-0.1,1.8],fontsize = 14,fontweight = 'bold')
axes.plot([0,3],[4,4],c='blue')
axes.annotate('Z',[1.5,4.1],fontsize = 14,fontweight = 'bold')
axes.annotate('(0,0)',[0.1,0.0],fontsize = 14,fontweight = 'bold')
axes.annotate('(3,4)',[2.85,3.7],fontsize = 14,fontweight = 'bold')
axes.grid(lw=1)
plt.show()
Segment X is the straight line distance between Point A and Point B. Segment Y is the vertical distance between Point A and Point B. Segment Z is the horizontal distance between Point A and Point B. Using the distance metrics as detailed above, let us compute the distance between Point A and Point B:
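Euclidean distance = sqrt( (3 - 0)^2 + (4 - 0)^2 ) = sqrt(25) = 5
Manhattan distance = |3 - 0| + |4 - 0| = 3 + 4 = 7
Chebyshev distance = max( |3 - 0|, |4 - 0| ) = max(3, 4) = 4
In terms of the plot above, the Euclidean distance is the length of segment X, the Manhattan distance is the combined length of segments Y and Z, and the Chebyshev distance is the length of segment Y (the longer of the two).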
Important Note: Because KNN utilizes distance measurements as a key component, feature scaling is especially important to ensure one feature does not mask the input of other features
KNN Algorithm Overview — Classification
Classification is a supervised machine learning concept which categorizes data into classes.
Suppose a dataset is split into training data, data which allows the model to learn, and test data, data that allows us to validate model performance. The KNN Algorithm supports classification tasks through the following general algorithm structure:
Step 1: Select a K Value, Distance Metric, and Weighting Schema (covered later!) for the model
Step 2: Train the model with training data
Step 3: Compute distance using the selected distance metric between ith test/new observation and every training observation, while storing distances and training labels in a data structure
Step 4: Sort the data structure in ascending order by distance
Step 5: Use the K closest points and the selected weighting schema to vote on the class of the test/new observation, where the class with the majority vote is selected as the prediction
Step 6: Repeat Steps 3–5 until a prediction has been made for every test observation
Step 7: Return predictions
Step 8: If predictions are for test observations, then compare test labels to predictions in order to assess model accuracy
Classification Weighting Schema
Mentioned above in the classification algorithm steps was selecting and utilizing a ‘Weighting Schema’. This subsection will briefly walk through what this means alongside a concrete example.
A weighting schema dictates how much influence each of the K closest points will have towards making a final prediction. While there are many weighting schemas out there used alongside KNN, this study will focus on two:
Schema 1 — Traditional or ‘standard’: This voting schema assumes all points have equal influence over the final prediction. In a classification setting this means each of the K closest points will cast 1 vote
Schema 2 — Custom Weighted or ‘weighted’: This voting schema assumes that the closer a point is, the more influence it has over the final prediction. In a classification setting this means the closest point will cast K votes, the 2nd closest will cast K — 1 votes, and so on and so forth until the Kth closest point casts 1 vote. Formulaically, this is shown by # of votes = K+1-position, where position is how close it is to the new point.
See below for a concrete example:
sns.set_style('darkgrid')
fig = plt.figure()
axes = fig.add_axes([0,0,1,1])
axes.scatter(x=[1.3,1.7],y=[0.5,1],s = 50)
axes.scatter(x=[1.5,1,1.2],y=[0.8,1.2,1.5],s = 50)
axes.scatter(x=[1.5],y=[0],s = 50)
axes.annotate('1',[1.3,0.6],fontsize = 14)
axes.annotate('2',[1.5,0.85],fontsize = 14)
axes.annotate('3',[1.7,1.05],fontsize = 14)
axes.grid(lw=1)
plt.show()
The example above shows 5 total points in the training data, where every point is either class blue or class orange. The green data point is the new point which we are trying to classify based off the training data. Assuming a Euclidean distance metric and a 3-Nearest Neighbors framework, Points 1,2 and 3 are the closest points to the new observation in that order.
The ‘standard’ weighting schema would vote in the following format:
Point 1 would have 1 vote towards class blue
Point 2 would have 1 vote towards class orange
Point 3 would have 1 vote towards class blue
With two votes for class blue and one vote for class orange, the new observation would be classified as class blue under the 'standard' schema.
The ‘weighted’ weighting schema would vote in the following format:
Point 1 would have 3 votes towards class blue
Point 2 would have 2 votes towards class orange
Point 3 would have 1 vote towards class blue
With four votes for class blue and two votes for class orange, the new observation would be classified as class blue under the 'weighted' schema.
The Algorithm
Below is our construction of the KNN Algorithm for Classification built entirely from scratch using NumPy. The framework is fully flexible for any sized data set (and any number of classes!), and supports the following methods:
First the model must be instantiated. Generalized syntax will be model = KNNClass(k,weighting = ‘standard’,distance_metric = ‘euclidean’) . The user must pass in a positive integer for k, and may optionally pass in values for weighting (‘standard’,’weighted’) and distance_metric (‘euclidean’,’manhattan’,’chebyshev’)
. The user must pass in a positive integer for k, and may optionally pass in values for weighting (‘standard’,’weighted’) and distance_metric (‘euclidean’,’manhattan’,’chebyshev’) After the model has been instantiated, it must be fit with training data. Generalized syntax will be model.fit(xtrain,ytrain)
After the model has been instantiated and fit with training data, the model can begin predicting values. Generalized syntax will be predictions = model.predict(xtest) where predictions for the test data will be returned in a NumPy array
where predictions for the test data will be returned in a NumPy array After the model has been instantiated and fit with training data, the score method can be used to return an accuracy score. Generalized syntax will be model.score(X,Y). The method will then make predictions for X, compare them to Y, and return the proportion of correct predictions.
For more information on the inner workings of the custom framework below, please refer to the comments and general code within the code block.
#KNN CLASSIFICATION FRAMEWORK
class KNNClass:
def __init__(self,k,weighting = 'standard',distance_metric = 'euclidean'):
#WHEN INSTANTIATING A MODEL, SPECIFY A MANDATORY K - VALUE SIGNIFYING THE NUMBER OF NEAREST NEIGHBORS THE MODEL
#WILL UTILIZE TO MAKE EACH PREDICTIONS. OPTIONALLY, THE USER MAY ALSO SPECIFY A WEIGHTING SCHEMA (STANDARD/WEIGHTING)
#AND DISTANCE_METRIC (EUCLIDEAN/MANHATTAN/CHEBYSHEV)
self.k = k
self.weighting = weighting
self.distance_metric = distance_metric
def fit(self,Xtrain,Ytrain):
#BEFORE MAKING PREDICTIONS, THE MODEL MUST BE FIT WITH THE TRAINING DATA. THIS MEANS THE FEATURE DATA AS XTRAIN AND
#LABELS AS YTRAIN.
self.xtrainmatrix = np.matrix(Xtrain)
self.ytrainmatrix = np.matrix(Ytrain)
if self.xtrainmatrix.shape[0] == 1:
self.xtrainmatrix = self.xtrainmatrix.transpose()
if self.ytrainmatrix.shape[0] == 1:
self.ytrainmatrix = self.ytrainmatrix.transpose()
#IN ADDITION TO STORING THE TRAINING DATA, THE FIT METHOD WILL ALSO STORE A LIST OF THE LABELS AND THE TOTAL NUMBER OF
#FEATURES
self.labellist = list(set(np.array(self.ytrainmatrix).squeeze()))
self.numfeatures = self.xtrainmatrix.shape[1]
def predict(self,Xtest):
#AFTER THE MODEL HAS BEEN INSTANTIATED AND FIT WITH TRAINING DATA, THE PREDICT METHOD CAN BE USED TO RETURN A NUMPY
#ARRAY OF PREDICTED VALUES. TO PROPERLY EXECUTE THIS METHOD, PASS IN THE TEST FEATURE DATA (UNLABELED) FOR XTEST.
self.xtestmatrix = np.matrix(Xtest)
if self.xtestmatrix.shape[0] == 1:
self.xtestmatrix = self.xtestmatrix.transpose()
#BEGIN COMPARING TEST OBS TO TRAINING OBS
preds = []
for testobs in self.xtestmatrix:
distance = []
for trainobs in self.xtrainmatrix:
#CALCULATE DISTANCE BETWEEN TEST OBS AND TRAINING OBS
#DISTANCE_METRIC = 'EUCLIDEAN' CALCULATES EUCLIDEAN DISTANCE: SQRT(SUM((X-Y)**2))
#DISTANCE_METRIC = 'MANHATTAN' CALCULATES MANHATTAN DISTANCE: SUM(ABS(X-Y))
#DISTANCE_METRIC = 'CHEBYSHEV' CALCULATES CHEBYSHEV DISTANCE: MAX(ABS(X-Y))
if self.distance_metric == 'euclidean':
sums = 0
for num in range(0,self.numfeatures):
sums = sums + (testobs[0,num]-trainobs[0,num])**2
distance.append(np.sqrt(sums))
elif self.distance_metric == 'manhattan':
sums = 0
for num in range(0,self.numfeatures):
sums = sums + abs(testobs[0,num]-trainobs[0,num])
distance.append(sums)
elif self.distance_metric == 'chebyshev':
sums = []
for num in range(0,self.numfeatures):
sums.append(abs(testobs[0,num]-trainobs[0,num]))
distance.append(max(sums))
#CREATE A MATRIX WITH YTRAIN AND DISTANCE COLUMN, REPRESENTING THE DISTANCE BETWEEN TEST/TRAINING OBS AND THE TRAINING OUTPUT
#SORT MATRIX BY DISTANCE
distancecol = np.matrix(distance).transpose()
ytrainmatrix2 = np.hstack((self.ytrainmatrix,distancecol))
            ytrainmatrix2 = np.matrix(sorted(np.array(ytrainmatrix2),key=lambda x:float(x[1]))) #CAST THE DISTANCE TO FLOAT SO THAT STRING CLASS LABELS DO NOT FORCE A LEXICOGRAPHIC (TEXT) SORT OF THE DISTANCES
#CREATE VOTING FRAMEWORK, TRACK NUMBER OF VOTES PER LABEL ACROSS N NEAREST TRAINING POINTS FOR TEST OBS
#IF WEIGHTING = 'STANDARD', A STANDARD VOTING SCHEMA IS FOLLOWED WHERE EACH OF THE K NEAREST POINTS ARE GIVEN THE SAME WEIGHT
#IF WEIGHTING = 'WEIGHTED', A WEIGHTED VOTING SCHEMA IS FOLLOWED WHERE CLOSER POINTS WILL HAVE A HEAVIER WEIGHT BY MEANS OF INCREASED NUMBER OF VOTES
if self.weighting == 'standard':
labelcount = []
for labels in self.labellist:
count = 0
for num in range(0,self.k):
if ytrainmatrix2[num,0] == labels:
count = count + 1
labelcount.append(count)
elif self.weighting == 'weighted':
labelcount = []
for labels in self.labellist:
count = 0
for num in range(0,self.k):
if ytrainmatrix2[num,0] == labels:
count = count + self.k - num
labelcount.append(count)
#CREATE AN ADDED VOTING LAYER FOR SCENARIOS IN WHICH THERE IS A TIE FOR MOST VOTES
predlabel = []
for num in range(0,len(labelcount)):
if labelcount[num] == max(labelcount):
predlabel.append(self.labellist[num])
if len(predlabel) == 1:
preds.append(predlabel[0])
else:
preds.append(predlabel[np.random.randint(0,len(predlabel))])
#RETURN A NUMPY ARRAY OF PREDICTED VALUES
return np.array(preds)
def score(self,X,Y):
#AFTER THE MODEL HAS BEEN INSTANTIATED AND FIT, THE SCORE METHOD MAY BY USED TO RETURN AN ACCURACY SCORE FOR THE MODEL
#GIVEN TEST FEATURES AND TEST LABELS.
#THIS METHOD WORKS BY PASSING IN THE TEST FEATURES (X), UPON WHICH THE PREDICT
#METHOD ABOVE WILL CALLED TO RETURN A NUMPY ARRAY OF PREDICTED VALUES.
preds = self.predict(X)
Y = np.array(Y)
totalpreds = len(preds)
#THESE VALUES WILL BE COMPARED TO THE TEST LABELS(Y) AND AN ACCURACY SCORE WILL BE
#RETURNED AS A VALUE BETWEEN 0-1 COMPUTED BY (CORRECTED PREDS)/(TOTAL PREDS).
comparelist = []
count = 0
for num in range(0,totalpreds):
if preds[num] == Y[num]:
comparelist.append(1)
else:
comparelist.append(0)
return sum(comparelist)/len(comparelist)
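As a quick sanity check (a sketch that assumes the import block and the KNNClass definition above have already been run), we can recreate the toy example from the weighting schema section. The first test point is the new (green) observation from that example; the second test point is an extra made-up point, included because the predict method interprets a single-row input as a single-feature column:

xtrain_toy = [[1.3, 0.5], [1.7, 1.0], [1.5, 0.8], [1.0, 1.2], [1.2, 1.5]]
ytrain_toy = ['blue', 'blue', 'orange', 'orange', 'orange']
toy_model = KNNClass(k = 3, weighting = 'weighted', distance_metric = 'euclidean')
toy_model.fit(xtrain_toy, ytrain_toy)
print(toy_model.predict([[1.5, 0.0], [1.1, 1.3]])) #SHOULD PRINT ['blue' 'orange'], MATCHING THE WEIGHTED VOTE WORKED OUT EARLIER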
KNN Algorithm Overview — Regression
Regression is a supervised machine learning task which predicts continuous values for data.
Suppose a dataset is split into training data, data which allows the model to learn, and test data, data that allows us to validate model performance. The KNN Algorithm supports regression tasks through the following general algorithm structure:
Step 1: Select a K Value, Distance Metric, and Weighting Schema for the model
Step 2: Train the model with training data
Step 3: Compute distance using the selected distance metric between ith test/new observation and every training observation, while storing distances and training labels in a data structure
Step 4: Sort the data structure in ascending order by distance
Step 5: Use the K closest points and the selected weighting schema to predict the value of the test/new observation
Step 6: Repeat Steps 3–5 until a prediction has been made for every test observation
Step 7: Return predictions
Step 8: If predictions are for test observations, then compare test labels to predictions in order to assess model accuracy
Regression Weighting Schema
Similar to the classification example, a weighting schema can also be used in KNN Regression to dictate how much influence each of the K closest points will have towards making a final prediction. While there are many weighting schemas out there used alongside KNN, this study will focus on two:
Schema 1 — Traditional or ‘standard’: This voting schema assumes all points have equal influence over the final prediction. In a regression setting this means each of the K closest points will be weighted equally or, in other words, the predicted value is the average of the K closest points.
Schema 2 — Custom Weighted or 'weighted': This voting schema assumes that the closer a point is, the more influence it has over the final prediction. In a regression setting this means that each of the K closest points will be assigned a weight of (K + 1 - position) / (1 + 2 + … + K),
where position indicates how close it is to the new point. The final prediction is then the weighted average of the K closest points.
See below for a concrete example:
sns.set_style('darkgrid')
fig = plt.figure()
axes = fig.add_axes([0,0,1,1])
axes.scatter(x=[1.3,1.7],y=[0.5,1],s = 50,color = 'red')
axes.scatter(x=[1.5,1,1.2],y=[0.8,1.2,1.5],s = 50,color = 'red')
axes.scatter(x=[1.5],y=[0],s = 50)
axes.annotate('1',[1.3,0.6],fontsize = 14)
axes.annotate('2',[1.5,0.85],fontsize = 14)
axes.annotate('3',[1.7,1.05],fontsize = 14)
axes.grid(lw=1)
plt.show()
The example above shows 5 total points in the training data, marked by the color red. The blue data point is the new point which we are trying to predict based off the training data. Assuming a Euclidean distance metric and a 3-Nearest Neighbors framework, Points 1, 2 and 3 are the closest points to the new observation in that order. Suppose Points 1, 2, and 3 have values 50, 55, and 60 units respectively.
The ‘standard’ weighting schema would calculate in the following format:
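Predicted value = (50 + 55 + 60) / 3 = 55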
The 'weighted' weighting schema would calculate in the following format:
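Predicted value = (3/6 x 50) + (2/6 x 55) + (1/6 x 60) = 25 + 18.33 + 10 ≈ 53.33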
The Algorithm
Below is our construction of the KNN Algorithm for Regression built entirely from scratch using NumPy. Similar to the classification framework, the regression framework is fully flexible for any sized data set, and supports the following methods:
First the model must be instantiated. Generalized syntax will be model = KNNReg(k,weighting = ‘standard’,distance_metric = ‘euclidean’) . The user must pass in a positive integer for k, and may optionally pass in values for weighting (‘standard’,’weighted’) and distance_metric (‘euclidean’,’manhattan’,’chebyshev’)
. The user must pass in a positive integer for k, and may optionally pass in values for weighting (‘standard’,’weighted’) and distance_metric (‘euclidean’,’manhattan’,’chebyshev’) After the model has been instantiated, it must be fit with training data. Generalized syntax will be model.fit(xtrain,ytrain)
After the model has been instantiated and fit with training data, the model can begin predicting values. Generalized syntax will be predictions = model.predict(xtest) where predictions for the test data will be returned in a NumPy array
where predictions for the test data will be returned in a NumPy array After the model has been instantiated and fit with training data, the score method can be used to return an accuracy score. Generalized syntax will be model.score(X,Y,error_metric = ‘rmse’). The method will then make predictions for X, compare them to Y, and return an error score according to the optional error_metric (‘rmse’,’mse’,’mae’) pass in. The error_metric values indicate scoring based off Root Mean Square Error, Mean Square Error, and Mean Absolute Error.
For more information on the inner workings of the custom framework below, please refer to the comments and general code within the code block.
#KNN REGRESSION FRAMEWORK
class KNNReg:
def __init__(self,k,weighting = 'standard',distance_metric = 'euclidean'):
#WHEN INSTANTIATING A MODEL, SPECIFY A MANDATORY K - VALUE SIGNIFYING THE NUMBER OF NEAREST NEIGHBORS THE MODEL
#WILL UTILIZE TO MAKE EACH PREDICTIONS. OPTIONALLY, THE USER MAY ALSO SPECIFY A WEIGHTING SCHEMA (STANDARD/WEIGHTING)
#AND DISTANCE_METRIC (EUCLIDEAN/MANHATTAN/CHEBYSHEV)
self.k = k
self.weighting = weighting
self.distance_metric = distance_metric
def fit(self,Xtrain,Ytrain):
#BEFORE MAKING PREDICTIONS, THE MODEL MUST BE FIT WITH THE TRAINING DATA. THIS MEANS THE FEATURE DATA AS XTRAIN AND
#LABELS AS YTRAIN.
self.xtrainmatrix = np.matrix(Xtrain)
self.ytrainmatrix = np.matrix(Ytrain)
if self.xtrainmatrix.shape[0] == 1:
self.xtrainmatrix = self.xtrainmatrix.transpose()
if self.ytrainmatrix.shape[0] == 1:
self.ytrainmatrix = self.ytrainmatrix.transpose()
#IN ADDITION TO STORING THE TRAINING DATA, THE FIT METHOD WILL ALSO STORE THE TOTAL NUMBER OF FEATURES
self.numfeatures = self.xtrainmatrix.shape[1]
def predict(self,Xtest):
#AFTER THE MODEL HAS BEEN INSTANTIATED AND FIT WITH TRAINING DATA, THE PREDICT METHOD CAN BE USED TO RETURN A NUMPY
#ARRAY OF PREDICTED VALUES. TO PROPERLY EXECUTE THIS METHOD, PASS IN THE TEST FEATURE DATA (UNLABELED) FOR XTEST.
self.xtestmatrix = np.matrix(Xtest)
if self.xtestmatrix.shape[0] == 1:
self.xtestmatrix = self.xtestmatrix.transpose()
#BEGIN COMPARING TEST OBS TO TRAINING OBS
preds = []
for testobs in self.xtestmatrix:
distance = []
for trainobs in self.xtrainmatrix:
#CALCULATE DISTANCE BETWEEN TEST OBS AND TRAINING OBS
#DISTANCE_METRIC = 'EUCLIDEAN' CALCULATES EUCLIDEAN DISTANCE: SQRT(SUM((X-Y)**2))
#DISTANCE_METRIC = 'MANHATTAN' CALCULATES MANHATTAN DISTANCE: SUM(ABS(X-Y))
#DISTANCE_METRIC = 'CHEBYSHEV' CALCULATES CHEBYSHEV DISTANCE: MAX(ABS(X-Y))
if self.distance_metric == 'euclidean':
sums = 0
for num in range(0,self.numfeatures):
sums = sums + (testobs[0,num]-trainobs[0,num])**2
distance.append(np.sqrt(sums))
elif self.distance_metric == 'manhattan':
sums = 0
for num in range(0,self.numfeatures):
sums = sums + abs(testobs[0,num]-trainobs[0,num])
distance.append(sums)
elif self.distance_metric == 'chebyshev':
sums = []
for num in range(0,self.numfeatures):
sums.append(abs(testobs[0,num]-trainobs[0,num]))
distance.append(max(sums))
#CREATE A MATRIX WITH YTRAIN AND DISTANCE COLUMN, REPRESENTING THE DISTANCE BETWEEN TEST/TRAINING OBS AND THE TRAINING OUTPUT
#SORT MATRIX BY DISTANCE
distancecol = np.matrix(distance).transpose()
ytrainmatrix2 = np.hstack((self.ytrainmatrix,distancecol))
ytrainmatrix2 = np.matrix(sorted(np.array(ytrainmatrix2),key=lambda x:x[1]))
            #CREATE WEIGHTING SCHEMA AMONG THE K CLOSEST POINTS.
            #IF WEIGHTING = 'STANDARD' THEN THE PREDICTED VALUE WILL BE THE AVERAGE OF THE K CLOSEST POINTS
            #IF WEIGHTING = 'WEIGHTED' THEN THE PREDICTED VALUE WILL BE A WEIGHTED AVERAGE OF THE K CLOSEST POINTS, WHERE CLOSER POINTS ARE WEIGHTED MORE STRONGLY TOWARDS THE PREDICTED VALUE
if self.weighting == 'standard':
sums = 0
for num in range(0,self.k):
sums = sums + ytrainmatrix2[num,0]
preds.append(sums/self.k)
elif self.weighting == 'weighted':
sums = 0
arr = np.arange(1,self.k + 1)
arrsum = np.sum(arr)
for num in range(0,self.k):
sums = sums + ytrainmatrix2[num,0]*((self.k - num)/arrsum)
preds.append(sums)
#RETURN A NUMPY ARRAY OF PREDICTED VALUES
return np.array(preds)
def score(self,X,Y,error_metric = 'rmse'):
#AFTER THE MODEL HAS BEEN INSTANTIATED AND FIT, THE SCORE METHOD MAY BY USED TO RETURN AN ACCURACY SCORE FOR THE MODEL
#GIVEN TEST FEATURES AND TEST LABELS.
#THIS METHOD WORKS BY PASSING IN THE TEST FEATURES (X), UPON WHICH THE PREDICT
#METHOD ABOVE WILL CALLED TO RETURN A NUMPY ARRAY OF PREDICTED VALUES.
preds = self.predict(X)
Y = np.array(Y)
totalpreds = len(preds)
#THESE VALUES WILL BE COMPARED TO THE TEST LABELS(Y) AND AN ACCURACY SCORE WILL BE
#RETURNED. THIS VALUE WILL FOLLOW EITHER A ROOT MEAN SQUARE ERROR(RMSE), MEAN SQUARE ERROR(MSE), OR
#MEAN ABSOLUTE ERROR(MAE) METHODOLOGY DEPENDING ON THE VALUE PASSED IN FOR ERROR_METRIC.
#RMSE = SQRT(SUM((PREDICTED - ACTUAL)**2)/(TOTAL NUMBER OF PREDICTIONS))
#MSE = SUM((PREDICTED - ACTUAL)**2)/(TOTAL NUMBER OF PREDICTIONS).
#MAE = SUM(ABS(PREDICTED-ACTUAL))/(TOTAL NUMBER OF PREDICTIONS)
if error_metric == 'rmse':
error = 0
for num in range(0,totalpreds):
error = error + (preds[num] - Y[num])**2
return np.sqrt(error/totalpreds)
elif error_metric == 'mse':
error = 0
for num in range(0,totalpreds):
error = error + (preds[num] - Y[num])**2
return error/totalpreds
elif error_metric == 'mae':
error = 0
for num in range(0,totalpreds):
error = error + abs(preds[num] - Y[num])
return error/totalpreds
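To tie this back to the regression weighting example above, here is a small sanity check (a sketch that assumes the import block and the KNNReg definition have been run). The three values 50, 55, and 60 are assigned to Points 1, 2, and 3 from that example; the values for the two remaining training points are made up, since they are not among the 3 nearest neighbors of the new point anyway:

xtrain_toy = [[1.3, 0.5], [1.7, 1.0], [1.5, 0.8], [1.0, 1.2], [1.2, 1.5]]
ytrain_toy = [50, 60, 55, 40, 45] #POINT 1 = 50, POINT 3 = 60, POINT 2 = 55; THE LAST TWO VALUES ARE MADE UP FOR ILLUSTRATION
reg_model = KNNReg(k = 3, weighting = 'weighted', distance_metric = 'euclidean')
reg_model.fit(xtrain_toy, ytrain_toy)
print(reg_model.predict([[1.5, 0.0], [1.1, 1.3]])) #THE FIRST PREDICTION SHOULD BE ROUGHLY 53.33, MATCHING THE HAND CALCULATION ABOVE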
Popular Model Validation Techniques
This section will explore 2 popular model validation techniques: Traditional Train/Test Split, and Repeated K-Fold Cross Validation.
Traditional Train/Test Split
The model validation process essentially evaluates the model’s ability to accurately predict new data. This is generally done by first splitting labeled data into a training set and testing set. There is no accepted rule of thumb for what ratio of the data to include in training vs test sets. Some recommendations say 70/30, some say 80/20, but in reality the user can fully customize their ratio. The model is then intuitively trained on the training set before being tested on the test set. Testing in this circumstance refers to the model predicting values for the test set, comparing them to the real values for the test set, and using an error metric to evaluate the model’s predictions.
Traditional Train/Test Split follows this exact methodology for a single iteration. Below is our construction of the Traditional Train/Test Split built entirely from scratch using primarily NumPy (there is some pandas for data export purposes as needed). The function is flexible for any sized data set and generally the syntax will be as follows:
xtrain, xtest, ytrain, ytest = traintestsplit(X, Y, test_size)
The user will pass in all their feature data (X), all their labels (Y), and a test_size decimal/fraction between 0 and 1. This value will dictate the size of the test set. For example, a test_size = 0.3 indicates a 70/30 split (70% of the data is training data, while 30% of the data is test data). The function will then return four data structures:
xtrain , feature data for model training, contains 1 — test_size proportion of the data
, feature data for model training, contains 1 — test_size proportion of the data xtest , feature data for model testing, contains test_size proportion of the data
, feature data for model testing, contains test_size proportion of the data ytrain , labels for xtrain
, labels for xtrain ytest, labels for xtest
These structures can be passed directly into the KNN Frameworks mentioned above. For more information on the inner workings of the custom function below, please refer to the comments and general code within the code block.
#TRAIN TEST SPLIT FUNCTION
def traintestsplit(X,Y,test_size):
"""Pass into features (X), labels (Y), and test_size (decimal between 0 and 1). The function will return 4 objects: xtrain,xtest,ytrain,ytest. xtest (features) and ytest (labels) should be used for model testing purposes and contains test_size proportion of the total amount of data passed in. xtrain and ytrain contain the remaining data and are used to initially train the model"""
#CONVERT TO X AND Y TO MATRICES
Xnew = np.matrix(X)
Ynew = np.matrix(Y)
if Xnew.shape[0] == 1:
Xnew = Xnew.transpose()
if Ynew.shape[0] == 1:
Ynew = Ynew.transpose()
#IDENTIFY UNIQUE INDEXES FOR TRAINING DATA AND TEST DATA, AS WELL AS THE SIZE OF THE TEST SIZE
idlist = [] #TEST SET INDICES
idlist2 = [] #TRAINING SET INDICES
#IDENTIFY TEST INDICES THROUGH THE FOLLOWING PROCESS:
#STEP 1: CALCULATE SIZE OF TEST SET
testsize = np.round(test_size*len(X))
#STEP 2: CREATE A LIST OF SIZE TESTSIZE CONTAINING UNIQUE INTEGERS. INTEGERS CANNOT BE LESS THAN 0 OR GREATER THAN lEN(X)
for num in range(0,int(testsize)):
#STEP 2A: FOR THE FIRST ITERATION, APPEND A RANDOM INTEGER BETWEEN 0 AND LEN(X) TO IDLIST
if num == 0:
idlist.append(np.random.randint(0,len(X)))
else:
#STEP 2B: FOR ALL OTHER ITERATIONS, GENERATE A RANDOM INTEGER BETWEEN 0 AND LEN(X). THEN COMPARE IT TO ALL VALUES
#CURRENTLY IN IDLIST. IF IT IS EQUAL TO ANY OF THE VALUES THEN DO NOTHING. IF IT IS NOT EQUAL TO ALL OF THE VALUES
#THEN APPEND IT TO THE LIST. THIS PROCESS WILL REPEAT UNTIL A VALUE HAS BEEN APPENDED, MEANING A UNIQUE VALUE WAS
#GENERATED AND ADDED TO THE LIST.
count = 0
while count == 0:
newloc = np.random.randint(0,len(X))
for locations in idlist:
if newloc == locations:
count = 1
break
if count == 1:
count = 0
else:
count = 1
idlist.append(newloc)
#IDENTIFY TRAINING INDICES THROUGH THE FOLLOWING PROCESS:
#STEP 1: LOOP THROUGH ALL NUMBERS BETWEEN 0 AND LEN(X).
for num in range(0,len(X)):
count = 0
#STEP 2: COMPARE NUMBER TO INDEXES HELD IN IDLIST. IF THE NUMBER DOES NOT EXIST IN IDLIST, THEN APPEND TO IDLIST 2
#MEANING IT HAS BEEN ADDED AS A TRAINING INDEX
for locations in idlist:
if num == locations:
count = 1
break
if count == 0:
idlist2.append(num)
#SORT INDEXES, IDLIST2 IS TRAINING DATA INDEXES WHILE IDLIST IS TESTSET INDEXES
idlist.sort()
idlist2.sort()
#STORE TRAINING AND TEST DATA IN LISTS BASED OFF OF IDENTIFIED INDEXES
xtrain = []
xtest = []
ytrain = []
ytest = []
for items in idlist2:
xtrain.append(np.array(Xnew[items])[0])
ytrain.append(Ynew[items,0])
for items in idlist:
xtest.append(np.array(Xnew[items])[0])
ytest.append(Ynew[items,0])
#RECONVERT TRAINING AND TEST DATA BACK INTO THE FORMAT/DATA TYPE THAT THE DATA WAS ORIGINALLY PASSED IN AS
#ONLY EXCEPTION IS NUMPY NDARRAYS WILL BE RETURNED AS NUMPY MATRIX AND 1 X N DIM MATRICES WILL BE RETURNED AS N X 1 DIM
    #CONVERSION FOR PANDAS DF - FEATURES
if type(X) == pd.core.frame.DataFrame:
xtrain = pd.DataFrame(xtrain)
xtest = pd.DataFrame(xtest)
xtrain.index = idlist2
xtest.index = idlist
    #CONVERSION FOR PANDAS SERIES - FEATURES
if type(X) == pd.core.series.Series:
xtrain = pd.Series(xtrain)
xtest = pd.Series(xtest)
xtrain.index = idlist2
xtest.index = idlist
    #CONVERSION FOR NUMPY MATRIX/ARRAY - FEATURES
if type(X) == np.matrix or type(X) == np.ndarray:
xtrain = np.matrix(xtrain)
xtest = np.matrix(xtest)
if xtrain.shape[0] == 1:
xtrain = xtrain.transpose()
if xtest.shape[0] == 1:
xtest = xtest.transpose()
    #CONVERSION FOR PANDAS DF - LABELS
if type(Y) == pd.core.frame.DataFrame:
ytrain = pd.DataFrame(ytrain)
ytest = pd.DataFrame(ytest)
ytrain.index = idlist2
ytest.index = idlist
    #CONVERSION FOR PANDAS SERIES - LABELS
if type(Y) == pd.core.series.Series:
ytrain = pd.Series(ytrain)
ytest = pd.Series(ytest)
ytrain.index = idlist2
ytest.index = idlist
    #CONVERSION FOR NUMPY MATRIX/ARRAY - LABELS
if type(Y) == np.matrix or type(Y) == np.ndarray:
ytrain = np.matrix(ytrain)
ytest = np.matrix(ytest)
if ytrain.shape[0] == 1:
ytrain = ytrain.transpose()
if ytest.shape[0] == 1:
ytest = ytest.transpose()
#RETURN TRAINING AND TEST DATA SPLITS
return xtrain,xtest,ytrain,ytest
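As a short end-to-end illustration (a sketch that assumes the import block and the KNNClass framework above have been run; feature scaling is skipped here purely for brevity, and the exact score will vary from run to run because the split is random):

data = load_iris()
X = pd.DataFrame(data['data'], columns = ['sepal_length_cm','sepal_width_cm','petal_length_cm','petal_width_cm'])
Y = pd.Series(data['target'])
xtrain, xtest, ytrain, ytest = traintestsplit(X, Y, test_size = 0.3)
holdout_model = KNNClass(k = 5, weighting = 'standard', distance_metric = 'euclidean')
holdout_model.fit(xtrain, ytrain)
print(holdout_model.score(xtest, ytest)) #PROPORTION OF CORRECT PREDICTIONS ON THE 30% HOLD-OUT SET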
Repeated K-Fold Cross Validation
Repeated K-Fold Cross Validation addresses three significant problems that can occur within a Traditional Train/Test Split methodology. The first problem is sampling bias, as there is no guarantee that the user made a good, comprehensive split of the data. The second problem is that the methodology does not use all the data for training and testing, so the model's predictive capability often still cannot be appropriately generalized. Lastly, in scenarios where the initial data set is small, Repeated K-Fold Cross Validation is especially useful to allow for maximum usage of the data.
Repeated K-Fold Cross Validation attempts to combat the issues mentioned above through the following generalized steps:
Step 1: User select number of repeats (number of times they want K-Fold Cross Validation to occur) and number of folds (number of subsets the data is split into). Suppose number of repeats = R and number of folds = K where R and K are positive integers.
Step 2: The data set is split into K folds
Step 3: The ith fold is used as the test set (xtest, ytest) while the remaining folds are used as the training set (xtrain, ytrain).
Step 4: Train the model on the training data, while using the test data to evaluate the iterations prediction accuracy. Store the prediction accuracy value.
Step 5: Repeat steps 3–4 until each fold has been used as the test set (K — 1 repeats). This helps to reduce sampling bias (by ensuring model validation is not subjected to just one train/test split) while also ensuring all the data is used for training and testing at some point in the process.
Step 6: Repeat steps 2–5 R — 1 times. This further helps to minimize sampling bias by splitting the data into K different folds for each iteration.
Step 7: Take the average of all R*K prediction accuracy values to get a more generalized score for model performance and accuracy
Below is our construction of the Repeated K-Fold Cross Validation Framework built entirely from scratch using NumPy. The framework is flexible for any sized data set and supports the following methods:
First, the framework must be instantiated. The general syntax will be rfk = Repkfoldcv(n_repeats, n_splits, shuffle = False). n_repeats and n_splits refer to the number of repeats (1 = standard K-Fold Cross Validation) and the number of folds respectively, while shuffle (True/False) optionally specifies whether the user needs the data shuffled.
After the framework has been instantiated, the user simply passes in the feature data (X) through the split method. General syntax will be rfk.split(X), which will return a nested list of n_repeats*n_splits lists each of which contains a set of train and test indices.
This framework can be integrated directly with the KNN frameworks detailed above. A brief Python example of the syntax required to utilize this framework with the KNN frameworks is as follows:
scores = []
model = KNNClass(k = 5, weighting = 'standard', distance_metric = 'euclidean')
rfk = Repkfoldcv(n_repeats = 5, n_splits = 5, shuffle = True)
for train,test in rfk.split(X):
xtrain,xtest,ytrain,ytest = X[train], X[test], Y[train], Y[test] #if X or Y is a pandas structure, use X.iloc[train/test] or Y.iloc[train/test]
model.fit(xtrain,ytrain)
scores.append(model.score(xtest,ytest))
print(mean(scores))
For more information on the inner workings of the custom function below, please refer to the comments and general code within the code block.
#REPEATED K FOLD CV FRAMEWORK
class Repkfoldcv:
def __init__(self,n_repeats,n_splits,shuffle = False):
#WHEN INSTANTIATING THE OBJECT, SPECIFY N_REPEATS (1 SIGNIFIES STANDARD K FOLD CV), N_SPLITS (SPECIFIES THE NUMBER
#FOLDS THE DATA WILL BE BROKEN INTO) PER ITERATION, AND OPTIONALLY WHETHER THE USER WANTS THE DATA SHUFFLED
self.n_repeats = n_repeats
self.n_splits = n_splits
self.shuffle = shuffle
def split(self,X):
#CREATE LIST TO STORE INDEXES FOR HOLD OUT FOLDS AND NOT HOLD OUT FOLDS
indexes = []
#IDENTIFY BASELINE FOLD SIZE
foldsize = math.floor(len(X)/self.n_splits)
#OUTER LOOP FOR NUMBER OF REPEATS
for repeats in range(0,self.n_repeats):
#CREATE LIST OF SIZE LEN(X) CONTAINING NATURAL NUMBERS BETWEEN 1-N_SPLITS. EACH NATURAL NUMBER WILL OCCUR AT MINIMUM
#FOLDSIZE TIMES IN THE LIST
assignfold = []
for num in range(0,self.n_splits):
for num1 in range(0,foldsize):
assignfold.append(num+1)
while len(assignfold) < len(X):
assignfold.append(np.random.randint(1,self.n_splits+1))
#SHUFFLE LIST IF USER DECLARES SHUFFLE = TRUE
if self.shuffle == True:
np.random.shuffle(assignfold)
#ITERATE THROUGH THE LIST N_SPLITS TIMES. FOR EACH ITERATION, APPEND TO TWO LISTS WHERE LIST 1 STORES THE INDEX POSITION
#WHENEVER THE NATURAL NUMBER IS EQUAL TO THE NUM AND LIST 2 STORES WHENEVER THE NATURAL NUMBER IS NOT EQUAL TO THE NUM.
#THIS CREATES N_SPLITS SETS OF A HOLD OUT (TEST) FOLD AND NOT HOLD OUT (TRAIN) FOLD.
for num in range(0,self.n_splits):
infold = []
notinfold = []
count = 0
for assignments in assignfold:
if assignments == num + 1:
infold.append(count)
else:
notinfold.append(count)
count = count + 1
#APPEND TRAIN FOLD AND TEST FOLD INDEXES TO THE INDEXES LIST
indexes.append([notinfold,infold])
#RETURN INDEXES AS A NESTED LIST
return indexes
Case Study — Finding an Optimal KNN Model for the Iris Flowers Data Set
This section will conduct an in-depth case study utilizing the methodologies and custom frameworks mentioned above with the Iris Flowers Dataset. Specifically, this case study will include some initial data exploration, feature engineering/scaling, the KNN Classification framework, Repeated K-Fold Cross Validation, and a custom grid search framework. The goal of the study is to find the best mix of features, a K value, a distance metric, and a weighting schema which, when used in combination, maximize prediction accuracy, creating an optimal KNN model.
This case study will use NumPy, Pandas, Matplotlib, and Seaborn, along with scikit-learn's load_iris loader and the combinations function from itertools.
Iris Flowers Data Set
The Iris Flowers Data Set contains 150 rows and 5 columns. Each row contains data pertaining to a single flower. 4 of the columns include the following numerical feature data: sepal_length_cm, sepal_width_cm, petal_length_cm, and petal_width_cm. The final column contains the target/label: setosa/versicolor/virginica. During model validation and general model prediction, this is the value we will be attempting to classify.
#LOAD IN IRIS DATASET FOR KNN CLASSIFICATION
data = load_iris()
flowers = []
for items in data['target']:
if items == 0:
flowers.append('setosa')
elif items == 1:
flowers.append('versicolor')
else:
flowers.append('virginica')
iris = pd.DataFrame(data['data'])
iris.columns = ['sepal_length_cm','sepal_width_cm','petal_length_cm','petal_width_cm']
iris['target'] = flowers
#FIRST 5 ROWS OF THE DATASET
iris.head()
Data Exploration
Before delving into any model development, we will first undergo some initial data exploration and visualization to get a better understanding of how the data looks.
2D Plots
#PLOTTING THE NUMBER OF LABELS PER CLASS
fig = plt.figure()
axes = fig.add_axes([0,0,1,1])
sns.countplot(x=iris['target'])
axes.set_title('Number of Labels Per Class',fontsize = 18, fontweight = 'bold')
axes.set_xlabel('Class',fontsize = 14, fontweight = 'bold')
axes.set_ylabel('Number of Labels',fontsize = 14, fontweight = 'bold')
plt.show()
#PLOTTING 2D RELATIONSHIPS BETWEEN THE NUMERICAL FEATURES ON A SCATTER PLOT GRID USING SEABORN
sns.pairplot(iris,hue='target')
plt.show()
Feature Engineering
Using the 4 feature columns, we will create some additional features of unique pairwise ratios between the columns.
#CREATE A COLUMN FOR EVERY UNIQUE PAIRWISE RATIO AMONG THE 4 EXISTING FEATURES
for items in list(combinations(iris.columns[:-1],2)):
iris[items[0] + '/' + items[1]] = iris[items[0]]/iris[items[1]]
#SHOW FIRST 5 ROWS OF UPDATED IRIS DATASET
iris = iris[['sepal_length_cm', 'sepal_width_cm', 'petal_length_cm',
'petal_width_cm', 'sepal_length_cm/sepal_width_cm',
'sepal_length_cm/petal_length_cm', 'sepal_length_cm/petal_width_cm',
'sepal_width_cm/petal_length_cm', 'sepal_width_cm/petal_width_cm',
'petal_length_cm/petal_width_cm','target']]
iris.head()
Feature Scaling
As stated earlier, because KNN is a distance-based algorithm, feature scaling is especially important to prevent the scale of one feature from masking the impact of the other features. Feature scaling addresses this problem by ensuring each feature column has the same mean and standard deviation. A common way to execute this practice is to subtract the column mean from each value and then divide the difference by the column standard deviation (that is, scaled value = (value - column mean) / column standard deviation), resulting in a column with an approximate mean of 0 and approximate standard deviation of 1. This is the method we will execute within this study.
Before feature scaling the data set, we will view descriptive statistics for each of the feature columns. This is important information to know in case new data is being streamed into live models for predictions, or if the presence of new labeled data results in the model being retrained with updated data.
iris.describe().iloc[[0,1,2,3,7]].transpose()
#FEATURE SCALING THE DATA
for cols in iris.columns[:-1]:
iris[cols] = (iris[cols] - iris[cols].mean())/(iris[cols].std())
#SHOW FIRST 5 ROWS OF FEATURE SCALED IRIS DATA SET
iris.head()
Following the feature scaling process, by again using the .describe() method we can see that the descriptive statistics of each feature have changed to reflect a mean of ~ 0 and a standard deviation of ~ 1.
iris.describe().iloc[[0,1,2,3,7]].transpose()
Model Development, Validation, and Selection
With the data properly inspected and scaled, we are ready to undergo the model development, validation, and selection process. This process will include a grid search methodology along with the custom KNN Classification and Repeated K-Fold Cross Validation frameworks above to find an optimal KNN Classification model. The search will find a subset of features, a K value, a distance metric, and a weighting schema which, when used together, maximize prediction accuracy.
First we will create a list which holds all possible feature subsets:
featuresubsets = []
for num in range(1,len(iris.columns[:-1])+1):
#FOR SUBSETS OF SIZE 1, APPEND EACH FEATURE COLUMN NAME TO THE FEATURE SUBSETS LIST
if num == 1:
for cols in iris.columns[:-1]:
featuresubsets.append(cols)
#FOR SUBSETS OF SIZE > 1, APPEND ALL UNIQUE SUBSETS OF THAT SIZE TO THE FEATURE SUBSETS LIST
else:
for subsets in list(combinations(iris.columns[:-1],num)):
featuresubsets.append(subsets)
len(featuresubsets) #OUTPUT: 1023
We have now created a list holding all 1023 possible unique subsets of the feature data. With these subsets identified, we can build our grid search. Our grid search will test each feature subset with a K value between 1–20. In addition, for each feature subset/K value combination 6 models will be generated, where each model uses a different weighting schema/distance metric combination. For model validation, each model will be subjected to a Repeated K-Fold Cross Validation process with 5 folds (an 80/20 split for each iteration) and 5 repeats. In summary, our grid search algorithm will build 1023 * 20 * 6 = 122760 total models, where each model is validated 5 * 5 = 25 times to provide a generalized accuracy score depicting model performance:
#LISTS TO STORE THE RESULTS OF THE GRID SEARCH
featureNames = []
numberOfFeatures = []
numberOfNeighbors = []
weightingSchema = []
distanceMetric = []
trainingAccuracy = []
testingAccuracy = []
#STORING THE FEATURE AND LABEL DATA IN SEPARATE STRUCTURES
X = iris.iloc[:,:-1]
Y = iris['target']
#GRID SEARCH ALGORITHM
#LOOP THROUGH ALL SUBSET POSSIBILITIES
for num in range(0,len(featuresubsets)):
#CREATE XNEW, MODIFIED VERSION OF X THAT ONLY HOLDS THE FEATURE SUBSET
if type(featuresubsets[num]) == str:
XNew = X[[featuresubsets[num]]]
else:
XNew = X[list(featuresubsets[num])]
#CREATE MODELS FOR K VALUES BETWEEN 1 AND 20 FOR EACH FEATURE SUBSET
for K in range(1,21):
#FOR EACH K VALUE/SUBSET COMBINATION, CREATE 6 TOTAL MODELS WITH EVERY WEIGHTING SCHEMA/DISTANCE METRIC COMBINATION
for weight in ['standard','weighted']:
for distance in ['euclidean','manhattan','chebyshev']:
model = KNNClass(K,weighting = weight,distance_metric=distance)
#STORE MODEL TRAIN/TEST ACCURACY SCORES FOR EACH ITERATION OF REPEATED K-FOLD CV MODEL VALIDATION PROCESS
modeltrain = []
modeltest = []
#RUN EACH MODEL THROUGH A 5X5 REPEATED K-FOLD CV PROCESS, WHILE STORING ITS TRAIN/TEST ACCURACY SCORES FOR EVERY ITERATION
rfk = Repkfoldcv(n_repeats = 5,n_splits = 5,shuffle=True)
for train,test in rfk.split(XNew):
xtrain,xtest,ytrain,ytest = XNew.iloc[train],XNew.iloc[test],Y.iloc[train],Y.iloc[test]
model.fit(xtrain,ytrain)
modeltrain.append(model.score(xtrain,ytrain))
modeltest.append(model.score(xtest,ytest))
#STORE FEATURE NAMES OF MODEL
featureNames.append(featuresubsets[num])
#STORE NUMBER OF FEATURES OF MODEL
numberOfFeatures.append(len(XNew.columns))
#STORE K VALUE OF MODEL
numberOfNeighbors.append(K)
#STORE WEIGHTING SCHEMA OF MODEL
weightingSchema.append(weight)
#STORE DISTANCE METRIC OF MODEL
distanceMetric.append(distance)
#STORE AVERAGE TRAIN/TEST ACCURACY SCORES THROUGH MODEL VALIDATION PROCESS, PROVIDING GENERALIZED SCORE OF MODEL PERFORMANCE
trainingAccuracy.append(mean(modeltrain))
testingAccuracy.append(mean(modeltest))
#STORE GRID SEARCH RESULTS IN PANDAS DATAFRAME AND RETURN 10 BEST PERFORMING MODELS ACCORDING TO TESTING ACCURACY
SummaryReport = pd.DataFrame()
SummaryReport['Feature Names'] = featureNames
SummaryReport['Number of Features'] = numberOfFeatures
SummaryReport['Number of Neighbors'] = numberOfNeighbors
SummaryReport['Weighting Schema'] = weightingSchema
SummaryReport['Distance Metric'] = distanceMetric
SummaryReport['Training Accuracy'] = trainingAccuracy
SummaryReport['Testing Accuracy'] = testingAccuracy
SummaryReport.sort_values('Testing Accuracy', ascending=False,inplace=True)
SummaryReport = SummaryReport.head(10)
SummaryReport.set_index('Feature Names',inplace = True)
SummaryReport
Based on the summary report of the grid search results, a 3-NN model using 3 ratio features (sepal_length_cm/petal_length_cm, sepal_width_cm/petal_length_cm, sepal_width_cm/petal_width_cm) alongside a standard weighting schema and euclidean distance metric was the best performing model across a Repeated K-Fold CV process with 5 repeats and 5 folds (25 model validation iterations). This best performing model also has near identical training/testing accuracies, meaning that the model is neither overfitting nor underfitting the data. Finally, with a relatively low number of features and a low number of neighbors, the model has certainly achieved model parsimony, being a fairly simple model with extremely high predictive power.
Closing Remarks
The KNN Algorithm is one of the most widely used models for both classification and regression-based tasks. In closing, we wanted to provide some additional insight on the benefits and drawbacks of the model to help identify the best settings in which to deploy it.
Benefits of KNN
Intuitive distance-based methodology
Minimal prior assumptions of the data unlike other algorithms (linear/multiple regression)
Easy model training as the model simply stores the training data
Highly adaptable to new training data and multi-class frameworks
Very flexible/customizable framework (adapts to classification/regression, linear/non-linear problems, and can be used with a variety of distance metrics/weighting schemas)
Drawbacks of KNN
Computationally expensive algorithm as distance needs to be computed between every test observation and every training observation before being sorted
Larger datasets can severely slow down the algorithm as a result of the above
Curse of Dimensionality: As the number of features increase, it becomes harder to find data points that are truly close together in higher dimensional space
Feature scaling is necessary to ensure the scale of one feature does not mask the impact of others
Finding the optimal K-value can be difficult and time consuming
Imbalanced data and outliers can cause the model to be hypersensitive
Luckily, Python's scikit-learn library provides these models and algorithms premade and ready to implement through a single line of code. Scikit-learn provides source code for each of these options, and while we refrained from viewing outside references for the sake of building truly custom frameworks from scratch, we highly recommend utilizing the full power of the scikit-learn library. Please view below for the relevant imports that coincide with the content discussed throughout the article, followed by a short sketch of the equivalent scikit-learn workflow.
#DISTANCE METRICS
from sklearn.neighbors import DistanceMetric
#KNN CLASSIFICATION
from sklearn.neighbors import KNeighborsClassifier
#KNN REGRESSION
from sklearn.neighbors import KNeighborsRegressor
#KNN REGRESSION ERROR METRICS
from sklearn.metrics import mean_squared_error,mean_absolute_error #RMSE IS SQUARE ROOT OF MEAN_SQUARED_ERROR
#TRADITIONAL TRAIN/TEST SPLIT
from sklearn.model_selection import train_test_split
#REPEATED K-FOLD CROSS VALIDATION
from sklearn.model_selection import RepeatedKFold
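As a quick sketch of how the same end-to-end workflow could look with scikit-learn directly (using the X and Y structures defined in the case study above; the parameter values simply mirror the 5-NN, standard weighting, euclidean distance setup from earlier), something along these lines would reproduce a 5x5 repeated cross validation score:
#ROUGH SCIKIT-LEARN EQUIVALENT OF THE CUSTOM WORKFLOW ABOVE
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import RepeatedKFold, cross_val_score
#5-NN WITH UNIFORM ("STANDARD") WEIGHTS AND EUCLIDEAN DISTANCE
model = KNeighborsClassifier(n_neighbors = 5, weights = 'uniform', metric = 'euclidean')
#5 FOLDS X 5 REPEATS = 25 VALIDATION ITERATIONS
rkf = RepeatedKFold(n_splits = 5, n_repeats = 5, random_state = 1)
scores = cross_val_score(model, X, Y, scoring = 'accuracy', cv = rkf)
print(scores.mean())
Note that for classification problems scikit-learn also offers RepeatedStratifiedKFold, which additionally preserves the class balance within each fold.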
About the Author
Amit Gattadahalli is a Consultant at Boulevard with a focus in data science. He recently graduated from the University of Maryland College Park with a B.S. in Mathematics and Statistics. His work within the consulting industry has concentrated on supervised/unsupervised machine learning, custom data science-centric algorithm development, data visualization, and general software development.
About Boulevard
The Boulevard Consulting Group, a Native American-owned, Small Business Administration (SBA) 8(a) small business, is a modern management and technology consulting firm that helps clients leverage the power of data science, operational transformation, and cutting-edge technology to build a better future. We develop highly scalable, impactful solutions by combining the latest technology trends with time-tested improvement methodologies, and with the personalized attention of a smaller firm. For more information about Boulevard and its services, visit our website at www.boulevardcg.com. | https://medium.com/swlh/under-the-hood-of-k-nearest-neighbors-knn-and-popular-model-validation-techniques-85ab0964d563 | ['Boulevard Consulting'] | 2020-09-03 18:51:10.446000+00:00 | ['Iris', 'Python', 'K Nearest Neighbors', 'Crossvalidation', 'Machine Learning'] |
Tonight’s comic is about seltzer. |
I make cartoons and t-shirts at www.dieselsweeties.com & @rstevens. Send me coffee beans. | https://rstevens.medium.com/tonights-comic-is-about-seltzer-b352045a4c58 | [] | 2019-10-01 03:16:02.356000+00:00 | ['Wellness', 'Comics', 'Seltzer', 'Scam'] |
Perfection | A Tale of How Chase of Being can create Pressure, Anxiety, Myths and Doubts, wheras Perfection lies in …
Chase of Perfection,
Often ends up with Pressure,
Pressure of Excellence,
Pressure of Achievement,
Pressure to be simply Perfect.
Paths of Perfection,
Often leads to Anxiety,
Anxiety of Failing,
Anxiety of Losing,
Anxiety to be simply Perfect.
Waves of Perfection,
Often creates tides of Fear,
Fear of Rejection,
Fear of Future,
Fear to be simply Perfect.
Myths of Perfection,
Often cascades mirage of Doubts,
Doubts of Self-abilities,
Doubt of not being Enough,
Doubt to be simply Perfect.
Perfection is a Pressure,
Perfection is an Anxiety,
Perfection is a Fear,
Perfection is a Myth,
Perfection is simply an Imperfection.
……………………………..
To Follow My Journey Connect —
📍 LinkedIn —
📍 Instagram —
https://www.instagram.com/daman873/ | https://medium.com/illumination/perfection-2a59305dd0b1 | ['Daman Sandhu'] | 2020-08-02 09:30:19.912000+00:00 | ['Poem On Medium', 'Perfection', 'Poetry On Medium', 'Writing', 'Poetry'] |
Quick-Start Guide: Minimalistic Python Workflow | Accessing the terminal
For those who are already familiar, feel free to skip this section.
However, if you are new to development, it’s essential for you to understand how to access the Command-Line Interface (CLI) of your operating system.
The terms “CLI”, “command-line interface”, “command-line”, and “terminal” are used interchangeably by developers. A CLI is a program used to send instructions to your operating system, one of its sub-systems, or an application you’ve installed. Windows 10 has “PowerShell” and “Command Prompt”, whereas Linux and MacOS have the “Terminal”.
In Ubuntu 18+, right-click anywhere on the desktop and select “Open in Terminal”. There are plenty of other ways to get to the terminal, so if that doesn’t work for you then just look up how to open terminal in Ubuntu and try a few alternatives.
In Windows 10, click the start menu icon and type in “PowerShell” to search for Windows PowerShell. Command Prompt is an alternative that you can use, but it’s not as user-friendly as PowerShell in many cases and nowadays there are multiple commands that overlap between Linux’s Bash syntax for terminal and PowerShell’s syntax (making it easier to learn both at once).
This is the Ubuntu 19.04 terminal. Don’t let the blank screen scare you.
If you’ve never used a CLI before or are using a new one that you’re not familiar with, I’d highly recommend you start by typing help and hitting Enter.
No, I’m not joking. CLI’s typically have very impressive documentation and “help” is a fairly universal command. There are very few features in any given terminal that cannot somehow be traced back to the “help” command. By starting there, you should be armed with the ability to find out more.
Make sure you are able to open one up and type the command above. Type some other stuff as well. Try to break it…. you likely won’t guess any system-breaking commands on your own. It will come in handy later on, I promise.
Installing and Verifying Python
In the interest of getting up and running as quickly as possible, I recommend installing base Python until there is a significant need to do otherwise.
Installing Python on Windows 10
For Windows 10 users, download Python here and run the installer. The important thing to note is that there are two check-boxes on the very first screen in the installation wizard that you should pay attention to (something along the lines of “Install for all users” and “Add Python to PATH”).
Please ensure that “Add Python to PATH” is checked. Most tutorials recommend that you also check “Install for all users”, but I actually recommend that you don’t check that option because sometimes an installation for all users leads to permissions issues when you are trying to download packages via Python’s package manager, pip. Once both of those settings are properly configured, click “Install Now” and follow the remaining prompts.
If you install Python for just your own user, you shouldn’t run into any problems when trying to install packages. Otherwise, you may have to use the Administrator account (which is not the same as your personal account having Administrator rights…).
Installing Python on Ubuntu 18+
If you are using Ubuntu 18+, you likely already have Python 2.7 installed (accessible by typing python into the terminal). However, there are a number of improvements that have been made in Python 3. In order to get those updates, the first thing you should do is type python3 into your terminal and see what happens when you run it.
You will likely see the following:
Command 'python3' not found, but can be installed with: sudo apt install python3
Assuming you see that error, first make sure the APT package installer is updated before attempting to install Python 3. Open up the terminal and type in the following:
sudo apt update && sudo apt upgrade
Once that process finishes, go ahead and run the command to install Python 3:
sudo apt install python3
Verifying the installation (Windows 10 or Ubuntu 18+):
Try running the python3 command in Linux terminal (see below) or the python command in PowerShell for Windows. The output should look roughly the same, indicating the version of Python that started up.
The >>> symbol indicates that Python is running and waiting for some code to be entered. This is commonly referred to as a Read-Evaluate-Print-Loop (REPL). In a REPL, you type in commands by hand and they are executed (or throw exceptions) when you hit enter. It’s very important for Python beginners to understand that this is possible. Try typing in the following command in the REPL (excluding the >>> symbol) and hit the Enter key:
>>> print('Hello, world!')
You should see Hello, world! print out in the terminal. This is the most classic command used as a test across all programming languages.
However, it’s boring. Run the following command instead:
>>> x = lambda x : print('<3' * x); x(9000);
Both of the commands above were valid and should show some output without complaining or throwing errors. If you run into an error, you need to check your installation steps. Feel free to add comments below if you run into issues.
Once you get a command to work successfully, try executing a command that is absolutely not valid:
>>> qerfwerfwerg
I encourage this for beginners because errors happen all the time and it’s important not to be afraid of failure.
You ran into a NameError. It certainly won’t be the last one you ever see, so don’t worry about it.
Go ahead and exit the Python interpreter:
>>> exit()
Typically, processes that are running in any terminal (regardless of operating system) can be stopped by typing Ctrl+C. However, there are a few exceptions and Python is one of them. You must use the command above or close the terminal to exit the Python REPL.
By this point, you’ve successfully configured and executed Python commands in a real interpreter. The Python REPL is very powerful and is also one of the easiest ways to test an idea separately from your project-in-progress.
If you’re using Linux, you’ll also want to install the package manager for Python (pip), so go ahead and install it for both Python 2.7 and Python 3.x using the commands below. Pip allows you to download code that others have written and use it in your own programs.
sudo apt install python-pip
sudo apt install python3-pip
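To see pip in action once it's installed, you can install a small third-party package and check that Python can import it (requests below is only an example; nothing later in this guide depends on it):
pip3 install requests
python3 -c "import requests; print(requests.__version__)"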
If you can get this far, you are in business!
Installing git Bash
Git will likely be installed if you are using Ubuntu, but doesn’t come with Windows by default.
Git is a version control system used by developers across the world and is, without a doubt, the current world standard for code management and distribution. Note that git is separate from GitHub (which is an incredibly popular hosting site for git repositories).
Git is important to use for the following reasons:
Making sure your code is safe and can be reverted back to a prior version if it breaks
Making sure you can track your changes (and other people’s changes) with a clear history log
Enabling you and others to share code with each other and contribute to each other’s projects without breaking them
You don’t need to understand quite how to use it yet, but you should install it and watch/read a few tutorials on it at the very least.
Similarly to Python, you can test if git is installed by typing the git command into your terminal (the command is the same between Windows and Ubuntu).
If you don’t see a list of commands and descriptions print out when you run the git command, download it here:
The default settings for the git installer should be fine to use. Once git is installed, run the following command:
git status
The output should look something like this:
fatal: not a git repository (or any of the parent directories): .git
That looks bad, but it’s simply an error indicating that the folder the terminal is running from is not a git repository. If you see that message, it means git has been successfully installed!
There are two last commands to configure git. These commands will set your username and email for all commits.
git config --global user.name "YourUserName"
git config --global user.email "[email protected]"
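Once that's done, the day-to-day loop you'll use most often boils down to a handful of commands run inside a project folder (the commit message below is just an example):
git init
git add .
git commit -m "My first commit"
git log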
To learn more, I’d highly recommend you read the documentation and watch a few tutorials.
Installing and Configuring Visual Studio Code
Visual Studio Code is a text editor and lightweight Integrated Development Environment (IDE) for creating applications in just about any programming language. It’s used to edit and manage files in a project as well as providing tools for debugging your applications and integrating with all kinds of development tools and services.
Everyone has a favorite text editor and IDE, they’re often two different applications, but Visual Studio Code is the best of both worlds in many ways. It’s free, it’s extensible, and it’s much less intimidating to brand new coders than most full-blown IDE’s. There’s very little you can’t do with VS Code, and it starts out very lightweight so you can add additional components to it as you learn and grow as a developer.
You should have no issues installing Visual Studio Code on any major platform. Simply click the link above and run through the installer or download it from the Ubuntu Software store. Once it’s installed, I highly recommend updating your settings to the following (this can be done via the GUI, but it’s easier to click the icon in the screenshot below):
{
"window.restoreWindows": "none",
"editor.mouseWheelZoom": true,
"editor.renderWhitespace": "all",
"workbench.startupEditor": "newUntitledFile",
"window.titleBarStyle": "custom",
"editor.minimap.maxColumn": 50,
"editor.minimap.renderCharacters": false,
"breadcrumbs.enabled": true
}
The top right (see the cursor) has a button you can click to manually paste in the settings above.
Visual Studio Code also allows for the download and usage of extensions. There are thousands of them out there, but if you’re using VS Code for Python development then I would recommend the following:
Python: Basic Python support with syntax highlighting, linting, debugging, etc. This one is an absolute must.
autoDocstring: Automatic generation of documentation within your classes, methods, and functions. Makes it extremely easy to generate robust and consistent documentation for your code.
Markdown All in One: Improved Markdown previewing, editing, and table of contents.
Markdownlint: Markdown linting to ensure proper formatting.
GitLens: Robust tooling for viewing historic changes in git, blame analysis, comparing against prior revisions, etc.
Conclusion and further reading
If you followed the instructions above, you should have a workflow that is fully capable of building applications and tools using the Python language. I highly recommend you read the documentation of both git and Python rather than relying heavily on tutorial videos. By exposing yourself to the real documentation, the mystery of how these technologies work will be diminished. | https://sj-porter.medium.com/quick-start-guide-minimalistic-python-workflow-97f4ff2f814a | ['Sj Porter'] | 2019-12-15 07:30:52.360000+00:00 | ['Python Programming', 'Python3', 'Git', 'Python', 'Visual Studio Code'] |
Experience Index | Photo by Author
12, 17, 27
What do they all share?
I can remember a desk
A chair
Something to write with
A room in my head
Filled to the top
More than I asked for
Overload on thought
And just pouring
Dribbling
Existing
Onto a page under light | https://medium.com/afwp/experience-index-edf2ce4a5991 | ['Jack Burt'] | 2020-06-04 15:31:01.362000+00:00 | ['Spirituality', 'Satire', 'Writing', 'Life', 'Poetry'] |
How to Write Content in 11 Easy Steps | People used to ask me, “Yo Seb, [nevermind the fact no one calls me Seb] what do you do anyway?” I would sheepishly reply “I’m a financial analyst”. You know, that purposely ambiguous term that means nothing other than obsessing over Excel spreadsheets for a profit. I would then tell them how much I hate my job, and hate my life, and hate pretty much everything. We would all have a nice laugh together and sip our drinks quietly after it all died down. I would later go home and cry myself to sleep.
NOT ANYMORE!
I am a successful writer now. Meaning, my pieces on Medium get up to several dozens of views a month. Now, while I mostly write fiction, I figured I could mansplain my secrets of success to you, seasoned writers and 15 year old bloggers alike.
Here are the eleven easy steps to write very successful content on Medium:
1. Write content. The mantra is JFDI — Just Fucking Do It.
2. Google what people mean with “content”. I would do it for you, but I’m too busy writing 4 to 5 listicles a day.
3. WRITE WRITE WRITE!
4. Be a man. With that I mean, like, be born with a penis. And stay that way. It’s been proven that this can increase your chances of success by over 9,000%! Also, don’t like other men. Like… sexually (yikes!). And be white. In general, be white, that really goes a long way in this (or any) business.
5. Write short bullet points. Nobody reads “Crime and Punishment” anymore. I mean, who has time to read 600 pages of whining? Now if Dostoyevsky had written “The 101 reasons why the Russian justice system sucks baaaaallzzz” he’d surely be on all sorts of Buzzfeed lists about hot 19th century writers.
6. Add a cool picture. A good image can make or break you on Medium. So whatever you’re writing, make sure to get the sickest pic of yourself in a dope car with a confused look on your face. Now, if you’re like me, just Google a dude that looks good instead.
7. Add a personal anecdote. This makes the read more relatable. When I was six years old, my grandpa worked in a mine and told me a crazy story about it. I don’t remember exactly what the story was, but it was pretty cool. And relatable. The moral here, be like grandpa and work in a mine, it’ll make you cool and stuff.
8. Stay focused on the topic at hand. No one likes to read an article or essay or any story and then clearly see how the author’s mind wanders astray in a cloud of thoughts totally unrelated to the story that could have easily been edited out. Like, when my grandpa was telling me his story, for some reason he went on a tangent about how grandma had cheated on him decades ago and now was afraid my mom wasn’t really his daughter. And that, like, he had cheated on his wife in return and now had a secret family in a different city and felt so sad for not providing his real daughter the love and support he had given someone who might not even be his blood relative and was now spending time with her offspring instead of his own. At one point he broke down and started crying. He kept repeating names I didn’t know between sobs and even got in fetal position and started rocking back and forth on the floor. All this and I was like “Yo, what the fuck happened in the mine? Did I really stop watching Pokémon for this shit?” Don’t judge, I was six at the time. So yeah, don’t be like grandpa, the fumes in a mine can really fuck up your brain and stuff.
9. Write good shit. I hear people like that. Google how to write good shit. Turns out, I’m not the only one.
10. Avoid cliches. Everyone thinks they’re a writer nowadays, so everyone writes the same stuff. To stand out, you have to be different. You have to change the status quo. If you want to be different, change. So, go be the change you want to see in the world, and avoid using old cliches. Use the new ones instead.
11. Always make lists that end in unusual numbers. There are way too many top five and top ten lists, so make it a different number to be ah… different and such. Also, the higher the number the more likely people will think you have more to say than the unimaginatively mediocre top ten losers.
So there you go, the more than 10 steps to have words on a page. Now be a baller and hit me with the clap below. And don’t forget to comment and subscribe if u want more DOPE content like thiiiiisss!!!1!1!!!! | https://medium.com/slackjaw/how-to-write-content-in-11-easy-steps-ba9481e28ff9 | ['Sebastian Sd'] | 2020-11-25 21:28:53.072000+00:00 | ['Medium', 'Satire', 'Humor', 'Writing', 'Lists'] |
To A Better 2019 | What’s an Email Refrigerator?
Ok ok ok. This isn’t just a regular email. So…What is it? Great question. Typically, the fridge is where you go to snack and find tasty things. Aaalllllso, on the outside, it’s where the best work and ideas get hung. So I’m here to feed you and digitally put a magnet on my best projects, insights, and ideas for you to see (I also just think that “newsletter” sounds like spam).
This coming year, I’m stepping up my writing game and my first step is to share more. (Well, the first step is to write more.) And then I’m excited to share these with you because you’ve been someone whose opinion I trust and whose support I’ve relied on. If you have any thoughts, I’d love to hear them.
Ok here we go…
Illustration by Burnt Toast
2018 → 2019
I just wrapped up my 2018 annual review and 2019 plan. It’s a process I’ve been iterating on for the last decade and it’s helped me organize my life more intentionally. It’s not perfect, but it works for me. If you’re interested in my template or having me explain it, let me know.
Each year has a theme and a three word thesis.
A year ago, I set out to expand Caveday into a more profitable company, tried to meet more neighbors and be a local in our new home in Jersey City, get more recognition for my art, and start a family. So 2018’s Theme was “The Year of Cultivation” and my thesis was “Prepare for Growth.”
This coming year, I’m expecting more change. I’m exploring what it means to be a good father, changing the dynamics of my marriage now that our daughter Golda is here, deepening my sense of community, and mapping out a new career. That’s why I’ve called 2019’s theme “The Year of Redefining” and the thesis is “Captain my ship.” (More about this in the coming Email Refrigerator)
Life Reader 2018
For the fourth year in a row, I’ve compiled a life reader. What’s a life reader? Remember in college you’d have a course reader– compiled by your professor and xeroxed articles and readings and images? It’s similar to that. I curate the best art and articles that have influenced me this year, and then write a little introduction that outlines some of the themes and patterns that came up. I publish it as a tangible book for sale, and also share the PDF here for free.
This year, some of the more interesting themes include public perception vs internal realities, political fundamentalism vs resistance, hatred vs self love, grief vs acceptance, feeling stuck, changing identities, technology vs humanity, and the performances of life (“performing” friendship, marriage, being single, etc).
There are some great articles and amazing art in this year’s life reader.
Or check out past ones from 2017, 2016, and 2015.
Painting by Jarek Puczel
New Website / New Career Path
As I mentioned, one of the big themes for 2018 was feeling stuck and changing identities. After a lot of writing, thinking, talking and action, I’ve rewritten my career plan. The way I’m talking about my shift is that I am an artist and a teacher. I am pioneering the future of work, helping people thrive in a world of distractions by leading them in unlearning bad habits and replacing them with better ones.
Basically, my intention is to do more teaching, writing, speaking, and designing workshops. I’m going to be spending a lot of time growing Caveday, teaching with The School of Life and designing my own workshops. So I redesigned and rewrote my website and bio to reflect that change. Take a peek!
It’s uncomfortable to stop referring to myself as a designer or an advertising guy. I’m sure much of my income this year will come from that kind of work. But I’m working through the transition and setting my direction. I’ll be writing more and sharing about that redefinition and the things that helped me get unstuck.
Illustration by Rebecca Mock
Ok. Let’s Wrap It Up.
Thank you for being a part of my life and for being interested in reading what I have to say, and replying with your own thoughts. I’m looking forward to starting a conversation every time you’re looking for a little snack and head to the Email Refrigerator. I’ll keep it stocked for you.
With love and gratitude, here’s to sharing the best year ahead,
Jake | https://medium.com/email-refrigerator/to-a-better-2019-d1c81986a2b0 | ['Jake Kahana'] | 2020-12-27 18:42:41.662000+00:00 | ['Self-awareness', 'Career Change', 'New Year'] |
Simple Solutions to Problems of a Front-end Web Developer | Simple Solutions to Problems of a Front-end Web Developer
And some things that can make your life much easier.
Photo by Karl Pawlowicz on Unsplash
“NASA has landed robots on Mars and here we are struggling to center-align our divs!”
I once heard it and it resonated with me so much as it has a lot of truth in it. Doing something as obvious as centering a div may seem common sense driven but it was actually very difficult for me to work out in CSS.
Being a front end developer is a dream of many but comes with challenges we have to face and it is a part of our lifelong learning process.
Web development has changed a lot in the past years. I remember coding my first website in notepad and the eureka moment when I figured out how to insert images that can be hyperlinked.
Things have changed a lot since then, and for the better. Now we have powerful text editors, CSS preprocessors, and amazing libraries. Websites are way more interactive and beautiful, and this poses a new challenge to a front-end developer: what technologies to learn in this vast ocean.
But there are some common things we all encounter, and it can be a great relief when we find a good solution that is not going to break the site for some unknown reason.
Photo by Andrea Piacquadio from Pexels
Here are some simple solutions to problems faced by a front-end web developer
and things that are going to make our lives much easier-
Centering a div
Stylesheet 404
Vertical Align
No responsiveness
Viewport units
Adding Custom fonts
Minified CSS and JavaScript
Clearfix for float
Compressed or stretched images
Some things to ease your life as a front end developer- | https://medium.com/front-end-weekly/simple-solutions-to-problems-of-a-front-end-web-developer-488949c6201b | ['Deepak Gangwar'] | 2020-06-04 00:07:38.398000+00:00 | ['Development', 'Front End Development', 'Web Development', 'CSS', 'Education'] |
Tableau Extensions Addons Introduction: Synchronized Scrollbars | At this year’s Tableau Conference, I tried to stay on the safe side and show mostly Tableau supported solutions: visualizations on real-time data, Jupyer Notebook like experiences and so on. I had only one exception, my “Tableau Extensions Add-ons” microframework that allowed me to show use cases like synchronized scrolling of dashboard elements, getting and validating user names and sessions and building user interaction heatmaps. People loved all of these, so let me show you the synchronized scrolling magic.
Basic Concepts
Tableau Extension API is basically a simple HTML object in the dashboard (an iframe) that can access some of the vizql functionality via window messages. When your extension asks for summary data or sets parameter values, it simply notifies its parent frame (the Viz frame) to return the data or execute the tabdoc:set-parameter-value vizql command. Since the extension is hosted on a different host (you theoretically cannot deploy extensions on Server itself), there is no other way to interact with the Viz parent frame. If you try to access the Viz parent frame via window.parent, you will get DOMException: Blocked a frame with origin, for good reasons. This is great and secure: the only commands an extension can execute are the ones that are handled in the Viz frame by the Extension API server-side code.
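If you have never worked with window messages before, the underlying browser mechanism is just postMessage plus a message listener. The snippet below only illustrates that generic mechanism with made-up event names; it is not Tableau's actual internal message format:
// child frame (the extension) sends a request to its parent (the Viz frame)
window.parent.postMessage({ event_id: 'some-command', data: 'payload' }, '*');
// the parent frame listens for the message and reacts to it
window.addEventListener('message', function (event) {
    if (event.data && event.data.event_id === 'some-command') {
        // handle the request, then optionally reply with event.source.postMessage(...)
    }
});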
But what if we need more functionality than the Viz frame exports?
Adding our own event listener to the Viz frame
The solution is as easy as it sounds, we need to add our own window message handlers to the parent frame to support additional functionality while staying as secure as possible. The quickest way to add our event listener that receives the command from our Extension and sets up everything inside the Viz frame is to modify one of the existing javascript files in Tableau Server and append our code to it. Sounds pretty hacky? Well, until Tableau Software provides a better way to add site-specific javascript and CSS files, I don’t know a better way and besides, it’s fun to hack Tableau. My usual place to store these additions is vqlweb.js , located in C:\Program Files\Tableau\Tableau Server\packages\vizqlserver.20194.19.0923.1135\public\v_201941909231135\javascripts directory on Windows and the matching directory in Linux. You need to add the following code (details later) to the top of this file:
This is what the patched, add-ons ready vqlweb.js looks like
Don’t forget to repack or rename the .br and .gz files (compressed version of this javascript)
And now let's see what we added exactly. As I mentioned, this really is a microframework; this is practically everything:
const EXTENSION_ADDON_URL_PREFIX = 'https://your-addons-server/extension-lib';
const LOAD_MODULE_REQUEST_MESSAGE = 'LoadModule-Request';
const LOAD_MODULE_RESPONSE_MESSAGE = 'LoadModule-Response';
window.addEventListener('message', function (event) {
    if (event.data && event.data.event_id === LOAD_MODULE_REQUEST_MESSAGE) {
        // load our addons external js file
        var addon = event.data.data.replace(/[^a-z0-9\-]/gi, '_').toLowerCase();
        var script = document.createElement('script');
        script.src = EXTENSION_ADDON_URL_PREFIX + "/tableau-extensions-" + addon + "-server.js";
        script.onload = function () {
            event.source.postMessage({ event_id: LOAD_MODULE_RESPONSE_MESSAGE, data: event.data.data }, "*");
        }
        document.head.appendChild(script);
    }
});
What is happening here? If we receive a LoadModule-Request message from our extension, we load the js file https://your-addons-server/extension-lib/tableau-extensions-<modulename>-server.js into the Viz frame. We don't have to deal with the same-origin policy; window messages work well across domains.
Is this secure?
As secure as the files you are loading are secure. The code only loads javascript files from a predefined folder on a single server. This is basically a whitelist method: administrators control what additional JS files can be loaded as extension add-ons. Also, the code runs with the logged-in user's authorization — no one without appropriate user permissions can access or perform actions.
TL;DR — it’s secure.
Now let’s see a real use case.
Synchronized Scrolling
Let's consider the example of synchronized scrolling. It's quite a popular request; you can vote for this feature to be supported out of the box here. Basically, what we want is:
1. Add code to the Viz frame that gets two or more zone ids or div selectors and establishes synchronized scrolling
2. Add code to the extension API that exposes the synchronized scroll functions
3. We want to make this a repeatable process, to support other use cases in the future
From a visual perspective, we should achieve something like this:
Synchronized Scrolling on Superstore, based on Klaus Schulte blog post
Implementation
I wanted to create a really clean API to load Extension add-ons, trying to be as native as I could. I ended up with the following code for this Sync Scroll use case:
<html lang="en">
<head>
    <script type="text/javascript"
        src="https://cdn.jsdelivr.net/gh/tableau/extensions-api/lib/tableau.extensions.1.latest.js"></script>
    <script type="text/javascript" src="../extension-lib/tableau-extensions-addons.js"></script>
    <script type="text/javascript">
        document.addEventListener("DOMContentLoaded", function (event) {
            tableau.extensions.initializeAsync().then(function () {
                tableau.extensions.initalizeAddonsAsync("sync-scrollbar").then(function () {
                    tableau.extensions.dashboardContent.dashboard.syncScrollbars(0, 1);
                });
            });
        });
    </script>
</head>
<body></body>
</html>
In order of events, this is what happens:
1. Load the usual Extension API javascript file
2. Load the tableau-extensions-addons.js file. This will add new functions like initalizeAddonsAsync and dashboard.syncScrollbars
3. Call dashboard.syncScrollbars with the two dashboard items
Step 2 is quite interesting, this is where the magic happens (code is here: https://github.com/tfoldi/tc19/blob/master/extension-lib/tableau-extensions-addons.js). It first sends a message to the parent frame to load the sync-scrollbar-server, then it loads the sync-scrollbar-client to the current frame (which adds the syncScrollbars to the dashboard extension object). This sounds might complex, but it is painfully simple: we load one file to our frame, one file to the parent, and that's it.
Chain of events: we load the Extensions API, then the extension add-ons API, which loads our client add-on file in the current frame and the server add-on in the Viz frame
And how are the views synchronized? When we call dashboard.syncScrollbars, it sends a message to window.parent:
{
const SCROLLBAR_SYNC_REQUEST_MESSAGE = "ScrollbarSync-Request";
tableau.extensions.dashboardContent.dashboard.__proto__.syncScrollbars = function (idx0, idx1) {
console.log("Sync request from client");
window.parent.postMessage(
{
event_id: SCROLLBAR_SYNC_REQUEST_MESSAGE,
data: [idx0, idx1]
},
"*");
}
}
And in the Viz frame we just capture this message and set up the syncing:
{
    const SCROLLBAR_SYNC_REQUEST_MESSAGE = "ScrollbarSync-Request";
    window.addEventListener('message', function (event) {
        if (event.data && event.data.event_id === SCROLLBAR_SYNC_REQUEST_MESSAGE) {
            const left = event.data.data[0];
            const right = event.data.data[1];
            // ... actual sync scroll implementation
            // https://github.com/tfoldi/tc19/blob/master/extension-lib/tableau-extensions-sync-scrollbar-server.js
        }
    });
}
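For completeness, the elided part essentially resolves the two indexes to the scrollable zone elements inside the Viz frame and mirrors their scroll positions. A minimal sketch of that idea (the function name and guard flag below are illustrative assumptions, not the code from the linked repository) could look like this:
// illustrative sketch only -- the real implementation lives in the repository linked above
function syncScroll(leftEl, rightEl) {
    var syncing = false; // guard flag so the two listeners do not keep triggering each other
    function mirror(source, target) {
        source.addEventListener('scroll', function () {
            if (syncing) { syncing = false; return; }
            syncing = true;
            target.scrollTop = source.scrollTop;   // copy the vertical scroll position
            target.scrollLeft = source.scrollLeft; // copy the horizontal scroll position
        });
    }
    mirror(leftEl, rightEl);
    mirror(rightEl, leftEl);
}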
Pretty slick, though it's a shame that we have to change a file on the Server to make it work. All the code from above is here: https://github.com/tfoldi/tc19/
Deployment
To deploy this example on your own environment, all you have to do is this:
Have a Tableau Server where you have admin/root rights at the OS level.
Add this code to vqlweb.js as described earlier in this post.
as described earlier in this post. Whitelist https://tfoldi.github.io/tc19/scroll-sync-extension/ URL as extension.
Deploy this workbook.
Enjoy.
Questions? Feedback?
Does it work — yes or no? Do you like this? Drop me a line, feedback is always appreciated. | https://medium.com/starschema-blog/tableau-extensions-addons-introduction-synchronized-scrollbars-5c6614e42ec | ['Tamas Foldi'] | 2019-12-02 18:18:50.912000+00:00 | ['Dataviz', 'Tableau Server', 'Extensions Api', 'Tableau'] |
Part 4. Node.js + Express + TypeScript: Unit Tests with Jest | It is not possible to develop an application efficiently without Unit Tests. This is especially true for an HTTP API server. In practice, I never run the server while I develop new functionality; instead, I develop Unit Tests together with the code and run them to verify the code, step by step, until I finish the functionality.
In previous parts of the tutorial, we developed a Node.js+Express+Open API App with GET /hello and GET /goodbye requests. If you start from this part, you can get it by:
$ git clone https://github.com/losikov/api-example.git
$ cd api-example
$ git checkout tags/v3.3.0
$ yarn install
Initial Setup
I used different frameworks for testing, but eventually settled on jest alone for all projects and all test types. Jest has nice documentation. To develop and run the tests with TypeScript I use ts-jest. Let's install them as dev dependencies (-D flag), and create a default jest.config.js:
yarn add -D jest @types/jest ts-jest
$ yarn ts-jest config:init
To make jest test files see the @exmpl scope, update the just-created jest.config.js and add a moduleNameMapper property:
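Assuming the @exmpl scope maps to the src folder, as configured in tsconfig.json in the earlier parts, the mapping can look like this (the regex and the target path are the assumptions here; adjust them to your own paths):
// jest.config.js
module.exports = {
  preset: 'ts-jest',
  testEnvironment: 'node',
  moduleNameMapper: {
    '^@exmpl/(.*)$': '<rootDir>/src/$1',
  },
}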
Now, we can run our tests with the yarn jest command, but let's add a new script to package.json for convenience:
"scripts": {
...
+ "test:unit": "ENV_FILE=./config/.env.test jest"
},
As you see, we pass it a test env file. We don't add any variables to it yet. Just create an empty file:
$ touch config/.env.test
Now, we can run our tests with the yarn test:unit command. Let's move on to our first unit test and try to run it.
Simple Test
The easiest function to test, which doesn't have any dependencies, is the auth function in src/api/services/user.ts. Create a src/api/services/__tests__ folder, and a user.ts file in it. As you see, we put our test files in a __tests__ folder next to the file being tested, and name the test file the same as the tested file. We follow this pattern all the time, unless there's a specific case. Copy and paste the following content into it:
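A minimal version of that test file can look like the snippet below. The token strings and the expected objects are placeholders to adjust to your own auth implementation from the previous parts; what matters is the structure, which the next paragraphs refer to by line number:
import { auth } from '../user'

describe('auth', () => {
  it('should resolve with true and valid userId for hardcoded token', async () => {
    const result = await auth('fakeToken')
    expect(result).toEqual({ userId: 'fakeUserId' }) // placeholder: match your auth() return value
  })

  it('should resolve with false for invalid token', async () => {
    const result = await auth('invalidToken')
    expect(result).toEqual({ error: { type: 'unauthorized', message: 'Authentication Failed' } }) // placeholder as well
  })
})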
Analyze the content. Pay attention to lines 3, 4, and 9:
auth/it should resolve with true and valid userId for hardcoded token
auth/it should resolve with false for invalid token
Keep the naming clear to you and others. You can use it() or test(), and use the one which fits your naming style. To organize tests you can use multiple describe() levels.
In lines 5 and 10, we perform an action, and on lines 6 and 11 we validate the action's result with jest's expect function. We use one of its methods, toEqual. Refer to the documentation to find the method you need for value validation.
Run yarn test:unit to see the result: | https://losikov.medium.com/part-4-node-js-express-typescript-unit-tests-with-jest-5204414bf6f0 | ['Alex Losikov'] | 2020-10-10 05:40:37.979000+00:00 | ['Mocking', 'Unittest', 'Typescript', 'Test Coverage', 'Jest'] |
ORCA Telegram 2.0: Group Migration | Getting back from a short holiday we’re starting our week with some important stuff right away. We’re migrating our Telegram Group and setting up a new one which is significantly better, faster, stronger and more efficient than our last one.
The decision to migrate the group was approved for a number of reasons. First of all, due to the nature of Telegram's coding, we suspect a significant number of accounts present in the legacy group might be bots. These accounts display peculiar inactivity in the group, do not engage with other members, and may have compromising goals. These suspicious accounts were one of the reasons why we had to mute our group during the first hours of ORCA ICO Round I.
What is more, our admins and moderators have faced multiple challenges when trying to set up moderation rights and implement automated services within the group.
Inside the new group, we plan to initiate new measures oriented towards community building.
The most appropriate news and announcements will be shared within the group, and community support will be available 24/7, coupled with the most relevant news, tweets, and articles coming from the crypto industry.
To prevent the issues that caused problems in the legacy group, in a couple of days' time we're initiating a 'prove you're a person' validation process, similar to the CAPTCHA programs you find on websites across the internet.
The updated policy of the group is structured to prevent bot accounts from joining and staying present in the group.
One more new feature is that we will be awarding karma points to existing users for their active participation. Acquired points will be listed in the group's leaderboards and rankings. Community points will be earned not only through active chatting but also via engagement in weekly community activities. Other group members will have the option to award points to each other based on their merits.
The address of the new group is t.me/ORCA_Official. We are going to update the pinned message in the old group and will redirect all new entrants coming through the legacy links to the new one.
These changes are part of an iterative process aimed at higher efficiency and user satisfaction. We believe the quality of our Telegram group and its functions are an important asset of our project, and we are continuously striving to improve them.
Be sure to check out our new Telegram Group and evaluate the changes for yourself. We are looking for feedback and are always open for suggestions on how to improve even more.
See you inside ORCA 2.0! | https://medium.com/cc-connecting-crypto-with-banking/orca-telegram-2-0-group-migration-393ef16a4c57 | [] | 2018-10-01 12:18:11.374000+00:00 | ['Telegram', 'Startup', 'Cryptocurrency', 'Updates', 'Fintech'] |
🤩 Meet the team 🤩 | It is with immense pleasure that we release the Team Video, which introduces the people working at Mobigraph, the company behind PEP Network
Enjoy!
➡️ https://youtu.be/U0iKEu0x1nk | https://medium.com/pep-ico/meet-the-team-b7458abac5b0 | ['Sharik Khan'] | 2018-06-13 12:44:32.429000+00:00 | ['ICO', 'Blockchain', 'Ethereum', 'Entrepreneurship', 'Cryptocurrency'] |
The Scale of Modern Communications | Why stop with the numbers here?
Google sees 1.2 trillion searches each year, and there are over 3.5 billion internet users.
Advertising spend worldwide in 2016 was almost $500 billion.
To put this into context, the entire GDP of India is $2.25 trillion, and the total workforce of South Korea is 26.6 million.
Too Many Disciplines
Let’s get back to communications.
I want to give you an idea of what is actually involved in this work today and just how wide-ranging the disciplines are within it.
I’ll break down the field into sub-fields, and then look at the disciplines within each of those.
To start with, we can look at communications as separated into the following:
Corporate Communications
Traditional Marketing
Digital Communications
Creative Production
Data Analysis
Communications Technology
There are many ways to segment communications activity, and I’ve chosen this one for simplicity.
Every single one of these sub-disciplines is its own full-time job, with distinct career paths all the way to the most senior levels.
Corporate Communications
Covers all aspects of an organisation’s communications at a strategic level, with a focus on aligning any communications strategy with business goals.
Disciplines in this field include:
Crisis communication is a sub-field of the public relations profession involving responding to potential threats to your brand and organisation from a communications point of view. This is part of disaster recovery strategy.
Communicating within an organisation, such as between employees and management. Important for clear dissemination of information, team-building, developing a positive internal culture, increasing job satisfaction etc.
Stakeholder Communications
A focus on communicating to key partners and stakeholders, such as investors, funders etc.
A large amount of work goes into developing an effective approach to making communications work for an organisation. Most senior communicators do only this, full-time. It mostly focuses on aligning organisational goals with the communications function. Which of the many disciplines covered in this article will contribute towards driving the organisation forwards?
Brand
Brands are everything these days. Apple’s brand is valued at around $170 billion, and it’s their brand that drives their eye-watering sales revenues. Branding work covers many of the disciplines in this article, costs a small fortune to work on and can take a long time, as it affects everything to do with an organisation.
Traditional Marketing
This is what most people think of as marketing. It includes working with the press, printed media, advertising on TV and radio, events and many others. It’s called traditional because many of these disciplines have been in existence for hundreds of years (when was the first poster advertised, I wonder?), as opposed to digital communications, which has only been around since the early 90’s.
Some of the major elements include:
The press release has been a staple of communication since newspapers were a thing (the first release was made in 1906). These form the backbone of many communications teams around the world, much to the annoyance of colleagues who are regularly prodded into providing quotes and speaking to journalists.
Some key parts of a good PR function include:
• Building and growing relationships with the media, journalists and other PR teams in your sector
• Writing press releases (the sign-off process alone takes significant time here)
• Distributing the press release to press contacts
• Measuring the performance of press releases (involves 15–20 metrics, several KPI’s, tracking and logging coverage, calculating Return-on-Investment, or ROI and much more)
• Newsjacking (hijacking current news around the world to insert your own brand, product or services)
• Often acts as spokesperson for company
• Dealing with journalist requests (such as comments on a story)
Newsjacking is a challenging, complex, but highly rewarding approach.
Printed Media
Working with posters, flyers, banners and anything that can be printed out physically. Strong link with creative production fields as marketing professionals rarely have design skills.
• Coordination of brand assets and copy (text)
• Liaising with designers and printers
• Organising distribution
TV and radio
Still the largest slice of the GDP generated by the creative industries, TV remains a dominant platform for modern communications. Radio is still relevant, but nowhere near as strong as it once was, although access is easier to both platform with the proliferation of the internet — particularly over the last 5 years with the rise of mobile as a way of consuming content.
A major success in this field is that of Netflix, as well as Amazon and BBC iPlayer, whose digital video services are massively popular worldwide.
I would need a whole other article just to cover the disciplines involved in TV and radio, but here’s just a snapshot:
• Production, from set design, to project planning and organisation
• Writing screenplays, scripts etc
• Advertising, commercials
• Camera and audio capture
• Editing
In the science communications field, such programmes as Planet Earth, COSMOS, and Wonders of the Universe are excellent examples that attract huge audiences.
Events
There’s nothing quite like getting out there and communicating with people in person. This isn’t always possible, given events can be one of the most expensive ways to communicate and often have small audiences, but the levels of engagement you can get from attending or hosting an event are some of the best of any method of communicating.
If you’ve ever had or attended a wedding, you know how much work can go into planning just a single day, let alone an event which involves hundreds or even thousands of people, perhaps in another country.
To give an example, when Earlham Institute’s communications team went to the Plant & Animal Genomes Conference (PAG) in San Diego, USA in 2014, it took over 3 months of work to organise and cost a large chunk of the yearly budget.
Don’t even get me started on arranging for 10 large boxes of merchandise and stand equipment to be shipped.
The event paid dividends, however, with one of the busiest stands of the event, seeing well over 1500 people in five days.
Digital Communications
With roots in the mid-80’s (something to do with floppy disks and car advertising) and exploding in the 90’s, communicating in the age of digital now covers an astonishing number of disciplines and has permanently changed how all the traditional and corporate approaches listed above.
Like it or not, digital is here to stay — and it’s very much leading the way in practically every area.
Digital is my specialism, but this doesn’t really clarify any of what I actually do. Here are some of the more prominent disciplines within digital — bear in mind that every single one of these areas has a career path attached to it. | https://medium.com/wheres-the-evidence/the-scale-of-modern-communications-163e2d1283a | ['Chris J. Bennett'] | 2017-03-03 09:48:25.071000+00:00 | ['Communications', 'Marketing', 'Public Relations', 'Digital Marketing', 'Social Media'] |
AutoML: Not A Magic Bullet, But A Powerful Business Tool | by Allen Chen, Andrew Mendoza, Gael Varoquaux, Steven Mills and Vladimir Lukic
When AI was first introduced into business processes, it was transformative, enabling companies to leverage the vast amounts of accumulated data to improve planning and decision making. It soon became apparent, however, that integrating AI into business processes at scale required significant resources. First, companies had to recruit highly sought-after (and highly paid) data scientists to create the data models behind AI. Second, the process of building and training the machine learning models that accelerated the data analysis process required a significant expenditure of time and energy. This, in turn, led to the development of automated machine learning (AutoML), techniques that essentially automate core aspects of the machine learning process including model selection, training, and evaluation.
In effect, AutoML seeks to trade machine (processing) time for human time. This automation brings many benefits. First and foremost, it decreases labor costs. It also reduces human error, automates repetitive tasks, and enables the development of more effective models. By reducing the technical expertise required to create an ML model, AutoML also lowers the barriers to entry, enabling business analysts to leverage advanced modeling techniques — without assistance from data scientists. And by relieving data scientists from repetitive tasks of the machine learning process, AutoML frees these costly resources to pursue higher-value projects.
New solutions invariably raise new questions
As data scientists ourselves, we initially thought little of AutoML. Yes, these techniques and tools could produce reasonably effective models. But that was essentially all they could do — and, of course, not without drawbacks. In the early stages, AutoML tools were far less advanced and typically no more complex than what could be implemented by a data scientist using existing tools. These barriers to acceptance were compounded by AutoML’s black box nature, which makes trained models less interpretable and meaningful, and by the difficultly in immediately finding use for it in non-academic settings. Moreover, AutoML suites of tools were far narrower in scope and solved only a portion of the problem — and with little value add.
AutoML has come a long way since then. In fact, it is now ubiquitous in most of the prevailing machine learning libraries, open-source tools and major cloud-compute platforms. Commercially available AutoML tools are making feature engineering and the development of complex machine learning models as easy as a few clicks of the button, enabling business users to deploy these models themselves in a production-ready state. As these more powerful AutoML tools proliferate, new questions arise, such as:
· Should we be using AutoML?
· If so, when should or shouldn’t we use them?
· Can we expect the results to be better than hand-crafted models?
· Can these tools take the next step and replace data scientists altogether?
Blindly optimizing a metric risks enhancing biases
As we assess AutoML, we must recognize that performance is not the complete story, and that bias can play an important role in AI. Taking human data scientists out of the process does not necessarily result in bias-free results. A computer does not, for example, know that there is anything wrong with training facial-recognition algorithms using the faces of white people only — or that the result of doing so is that a phone may fail to unlock when presented with the face of a non-white user. It is therefore the responsibility of data scientists themselves to mitigate these biases by checking and correcting models that advantage one race, gender or protected class over another.
Allowing biases to skew results can have negative consequences for businesses in virtually any industry. An example of bias in health care was recently published in Science magazine. The algorithm in question was designed to see which patients would benefit from high-risk care-management programs. It was, according to the report, the kind of algorithm routinely used to determine care levels for more than 200 million people in the U.S. The article authors found that the algorithm incorrectly determined that fewer Black people than white people were in need of such care programs — even though the Black patients in the data set had 26.3% more chronic illnesses than their white counterparts. The error occurred for two reasons: First, the algorithm used total individual health care costs for the previous year to determine need. Since Black citizens tend to be poorer than white citizens, they spent less on health care regardless of how much care they might actually have needed. Second, the data set used to train the algorithm included seven times more data on whites than it did on Blacks.
Similarly, Reuters noted in 2018 that the algorithm Amazon used for years to drive its hiring process unfairly excluded female candidates. Indeed, the hiring algorithm was trained by analyzing patterns of resumes submitted to Amazon over the previous ten years. Since the vast majority of applicants were men, the algorithm learned that male candidates were more likely to be selected. The algorithm also gave lower scores to resumes that “included the word ‘women’s, as in ‘women’s chess club captain” and downgraded graduates of two all-women’s colleges.
These are just two examples of the potential ways that bias can insinuate itself into business decision making. Given how broadly AI-based processes are used to inform such decisions — some of which affect hundreds of millions of people — companies must be aware of biases and take all possible steps to remove or mitigate them.
The best data science model: Humans + AI
Nonetheless, in spite of the risk posed by undetected biases, we believe that the ease and potential time saving of developing models using AutoML makes it a tool that every data scientist and data-science department should have on hand. It is a low-cost, high-potential tool that, at minimum, provides as solid a performance baseline for hand-crafted approaches. In the best scenarios, AutoML will do this much faster than a human — and produce better models as well — as we will discuss in Part 2 of this series. Data scientists need to be particularly careful that both the assumptions they use to design their models and the data they use to train them do not result in unintended consequences.
A final possible reason for lack of AutoML uptake may be the fact that some data scientists have expressed concern that AutoML will soon make them redundant. This is similar to the concern accountants had in the early 1980s when Microsoft introduced Excel. Rather than put accountants out of work as they had feared, Excel made their jobs easier, automating many of the mundane tasks involved in managing financial documents.
Similarly, we believe that AutoML will make data scientists more efficient. Rather than spending time iterating over and tuning models, data scientists with access to AutoML tools can spend less time on these tasks and more time on higher-value efforts such as applying domain and industry knowledge. Given the paucity and expense of data scientists, this ability to shift resources should be a welcome development for business leaders.
Data scientists can rest assured knowing that not only can they continue to play a central role in AI development — they must continue to play such a role. If companies are to avoid the unforeseen consequences of bias in automation, humans must remain at the center of data modelling.
In the second of this two-part series, we will look at the strengths and limitations of AutoML, and underscore the critical role humans play in AI projects. | https://medium.com/bcggamma/automl-not-a-magic-bullet-but-a-powerful-business-tool-be386b1abae8 | ['Bcg Gamma Editor'] | 2020-06-29 15:37:57.042000+00:00 | ['Responsible Ai', 'Machine Learning', 'Artificial Intelligence', 'Data Science', 'Ethical Ai'] |
Spark Under the Hood: randomSplit() and sample() Inner Workings | Spark Implementation of randomSplit()
Signature Function
The signature function of randomSplit() includes a weight list and a seed specification. The weight list is to specify the number of splits and percentage (approximate) in each and the seed is for reproducibility. The ratio is approximate due to the nature of how it is calculated.
For example, the following code in Figure 3 would split df into two data frames, train_df being 80% and test_df being 20% of the original data frame. By using the same value for random seed, we are expecting that the same data points are in the same split if we were to re-run the script or Spark internally rebuilds the splits.
Figure 3: randomSplit() signature function example
Under the Hood
The following process is repeated to generate each split data frame: partitioning, sorting within partitions, and Bernoulli sampling. If the original data frame is not cached then the data will be re-fetched, re-partitioned, and re-sorted for each split calculation. This is the source of potential anomalies. In summary, randomSplit() is equivalent to performing sample() for each split with the percentage to sample changing with the split being performed. This is evident if you examine the source code for randomSplit() in PySpark³. This blog⁴ also provides some more information and visuals on how randomSplit() is implemented.
Let’s walk through an example. Figure 4 is a diagram of the sample() for each split, starting with the 0.80 split.
Figure 4: Process of generating the 0.8 split. Identical to sample() implementation.
Spark utilizes Bernoulli sampling, which can be summarized as generating random numbers for an item (data point) and accepting it into a split if the generated number falls within a certain range, determined by the split ratio. For a 0.8 split data frame, the acceptance range for the Bernoulli cell sampler would be [0.0,0.80].
The same sampling process is followed for the 0.20 split in Figure 5, with just the boundaries of acceptance changing to [0.80, 1.0].
Figure 5: Process of generating the 0.2 split. Identical to sample(). implementation. Partition contents remain constant and sorting order is preserved ensuring a valid split.
The data frame is re-fetched, partitioned, and sorted within partitions again. You can see in the example that RDD partitions are idempotent. Which means that the data points in each partition in Figure 4, remain in the same partition in Figure 5. For example, points b and c are in Partition 1 in both Figure 4 and 5. Additionally, the seed associated with each partition always remains constant, and the order within partitions is identical. All three of these points are fundamental to both sample() and randomSplit(). Ensuring that the same sample is produced with the same seed in the former, and guaranteeing no duplicates or disappearing data points in the latter.
Solutions To Avoiding Inconsistencies
Fixing these issues lies in ensuring that RDD partitions and sorting order are idempotent. Any one of the following three methods ensure this and can be applied: 1) caching the data frame before operations 2) repartitioning by a column or a set of columns, and 3) using aggregate functions⁵. An example of each method is shown in Figure 6.
Figure 6: Three different methods to avoid inconsistencies in randomSplit() and sample()
Caching the original data frame leads to partition content being held in memory. So instead of re-fetching data, partitioning and sorting, Spark continues operations using the partitioned data in memory. Note that cache() is an alias for persist(pyspark.StorageLevel.memory_only) which may not be ideal if you have memory limitations. Instead, you can consider using persist(pyspark.StorageLevel.memory_and_disk_only) . If there is no memory or disk space available, Spark will re-fetch and partition data from scratch, so it may be wise to monitor this from the Spark Web UI. Caching is the solution I chose in my case.
Summary and Key Takeaways
Moral of the story is: if unexpected behavior is happening in Spark, you just need to dig a bit deeper! Here is a summary of all the key points of this article:
randomSplit() is equivalent to applying sample() on your data frame multiple times, with each sample re-fetching, partitioning, and sorting your data frame within partitions.
The data distribution across partitions and sorting order is important for both randomSplit() and sample(). If either change upon data re-fetch, there may be duplicates or missing values across splits and the same sample using the same seed may produce different results.
These inconsistencies may not happen on every run, but to eliminate them completely, persist (aka cache) your data frame, repartition on a column(s), or apply aggregate functions such as groupBy.
Good luck in all of your Spark endeavors, and I hope this article is a good starting point to understanding the inner workings of Spark. I’d appreciate any and all comments you may have! | https://medium.com/udemy-engineering/pyspark-under-the-hood-randomsplit-and-sample-inconsistencies-examined-7c6ec62644bc | ['Meltem Tutar'] | 2020-05-28 01:30:57.237000+00:00 | ['Data Science', 'Big Data', 'Spark', 'Pyspark'] |
A WebSocket ML Model Deployment | A WebSocket ML Model Deployment
Using a WebSocket service to deploy an ML model
This blog post builds on the ideas started in three previous blog posts.
In this blog post I’ll show how to deploy the same ML model that we deployed as a batch job in this blog post, as a task queue in this blog post, inside an AWS Lambda in this blog post, as a Kafka streaming application in this blog post, a gRPC service in this blog post, and as a MapReduce job in this blog post.
The code in this blog post can be found in this github repo.
Code
Introduction
In the world of web applications, the ability to create responsive and interactive experiences is limited when we do normal request-response requests against a REST API. In the request-response programming paradigm, requests are always initiated by the client system and fulfilled by the server and continuously sending and receiving data is not supported. To fix this problem, the Websocket standard was created. Websockets allow a client and service to exchange data in a bidirectional, full-duplex connection which stays open for a long period of time. This approach offers much higher efficiency in the communication between the server and client. Just like a normal HTTP connection, Websockets work in ports 80 and 443 and support proxies and load balancers. Websockets also allow the server to send data to the client without having first received a request from the client which helps us to build more interactive applications.
Just like other web technologies, Websockets are useful for creating applications that run in a web browser. Websockets are useful for deploying machine learning models when the predictions made by the model need to be available to a user interface running in a web browser. One benefit of the Websocket protocol is that we are not limited to making a prediction when the client requests it, since the server is able to send a prediction from the model to the client at any time without waiting for the client to make a prediction request. In this blog post we will show how to build a Websocket service that works with machine learning models.
Package Structure
To begin, we set up the project structure for the websocket service:
- model_websocket_service (python package for websocket service)
- static (Javascript files)
- templates (HTML templates for UI)
- __init__.py
- config.py (configuration for the service)
- endpoints.py (Websocket handler)
- ml_model_manager.py (class for managing models)
- schemas.py (schemas for the API data objects)
- views.py (web views for the UI)
- scripts (test script)
- tests (unit tests)
- Dockerfile
- Makefle
- README.md
- requirements.txt
- setup.py
- test_requirements.txt
This structure can be seen here in the github repository.
Websockets
Websockets are fundamentally different from normal HTTP connections. They are full-duplex, which means that the client and server can exchange data in both directions. Websocket connections are also long-lived, which means that the connection stays open even when no messages are being exchanged. Lastly, websocket connections are event-based, which means that messages from the server are handled by the client in an “event handler” function that is registered to an event type. The same happens in the server code, which handles events from the client by registering handlers. There are four default events that are built into the Websocket protocol: open, message, error, and close. Apart from these event types, we are free to add our own event types and exchange messages through them.
Installing the Model
To begin working on a Websocket service that can host any ML model, we’ll need a model to work with. For this, we’ll use the same model that we’ve used in the previous blog posts, the iris_model package. The package can be installed directly from the git repository where it is hosted with this command:
This command should install the model code and parameters, along with all of its dependencies. To make sure everything is working correctly, we can make a prediction with the model in an interactive Python session:
from iris_model.iris_predict import IrisModel
>>> model = IrisModel()
>>> model.predict({“sepal_length”:1.1, “sepal_width”: 1.2, “petal_width”: 1.3, “petal_length”: 1.4})
{‘species’: ‘setosa’}
Now that we have a working model in the Python environment, we’ll need to point the service to it. To do this, we’ll add the IrisModel class to the configuration in the config.py file:
class Config(dict):
models = [{
“module_name”: “iris_model.iris_predict”,
“class_name”: “IrisModel”
}]
The code above can be found here.
This configuration gives us flexibility when adding and removing models from the service. The service is able to host any number of models, as long as they are installed in the environment and added to the configuration. The module_name and class_name fields in the configuration point to a class that implements the MLModel interface, which allows the service to make predictions with the model.
As in previous blog posts, we’ll use a singleton object to manage the ML model objects that will be used to make predictions. The class that the singleton object is instantiated from is called ModelManager. The class is responsible for instantiating MLModel objects, managing the instances, returning information about the MLModel objects, and returning references to the objects when needed. The code for the ModelManager class can be found here. A complete explanation of the ModelManager class can be found in this blog post.
Defining the Service
The websocket service is built around the Flask framework, which can be extended to support Websockets with the flask_socketio extension. The Flask application is initialized in the __init__.py file of the package like this:
app = Flask(__name__)
The code above can be found here.
Now that we have an application object, we can load the configuration into it:
if os.environ.get(“APP_SETTINGS”) is not None:
app.config.from_object(“model_websocket_service.config
{}”.format(os.environ[‘APP_SETTINGS’]))
The code above can be found here.
The configuration is loaded according to the value in the APP_SETTINGS environment variable. This allows us to change the setting based on the environment we are running in. Now that we have the app configured we can initialize the Flask extensions we’ll be using:
bootstrap = Bootstrap(app)
socketio = SocketIO(app)
The code above can be found here.
The Bootstrap extensions will be used to build a user interface and the SocketIO extension will be used to handle the Websocket connections and events. With the extensions loaded, we can now import the code that handles the Websocket events, REST requests, and renders the views of the UI:
import model_websocket_service.endpoints
import model_websocket_service.views
The code above can be found here.
Lastly, we will instantiate the ModelManager singleton at application startup. This function is executed by the Flask framework before the application starts serving requests. The models that will be loaded are retrieved from the configuration object that we loaded above.
@app.before_first_request
def instantiate_model_manager():
model_manager = ModelManager()
model_manager.load_models(configuration=app.config[“MODELS”])
The code above can be found here.
With this code, we set up the basic Flask application that will handle the Websocket events.
Websocket Event Handler
With the application set up, we can now work on the code that handles the Websocket events. This code is in the endpoints.py module. To begin, we’ll import the Flask app object and the socketio extension object from the package:
from model_websocket_service import app, socketio
The code above can be found here.
A websocket handler is just a function that is decorated with the @socketio.on() decorator. The decorator registers the function as a Websocket event handler with the Flask framework, which will call the function whenever an event of the type described in the decorator is received by the application. We’ll use the decorator here to handle events of type “prediction_request”, which will handle the prediction requests that clients send to the server.
@socketio.on(‘prediction_request’)
def message(message):
try:
data = prediction_request_schema.load(message)
except ValidationError as e:
response_data = dict(
type=”DESERIALIZATION_ERROR”, message=str(e))
response = error_response_schema.load(response_data)
emit(‘prediction_error’, response)
return
The code above can be found here.
The first thing we do when receiving a message from the client is to try to deserialize it with the PredictionRequest schema. This schema contains the inputs to the model predict() method and also the model’s qualified name. If the deserialization fails, we’ll respond to the client by emitting a prediction error message back to the client using the ErrorResponse schema. The emit() function is provided by the socketio extension and is used to send events to the client from the server.
Now that we have a deserialized prediction request from a client, we’ll try to get a reference to the model from the model manager. The service will emit an ErrorResponse object back to the client system if it fails to find the model that is requested by the client.
model_manager = ModelManager()
model_object = model_manager.get_model(
qualified_name=data[“model_qualified_name”]) if model_object is None:
response_data = dict(
model_qualified_name=data[“model_qualified_name”],
type=”ERROR”, message=”Model not found.”)
response = error_response_schema.load(response_data)
emit(‘prediction_error’, response)
The code above can be found here.
If the model is found, then this code will be executed:
else:
try:
prediction = model_object.predict(data[“input_data”])
response_data = dict(
model_qualified_name=model_object.qualified_name,
prediction=prediction)
response = prediction_response_schema.load(response_data)
emit(‘prediction_response’, response)
except MLModelSchemaValidationException as e:
response_data = dict(
model_qualified_name=model_object.qualified_name,
type=”SCHEMA_ERROR”, message=str(e))
response = error_response_schema.load(response_data)
emit(‘prediction_error’, response)
except Exception as e:
response_data = dict(
model_qualified_name=model_object.qualified_name,
type=”ERROR”, message=”Could not make a prediction.”)
response = error_response_schema.load(response_data)
emit(‘prediction_error’, response)
The code above can be found here.
If the prediction is made successfully by the model, a PredictionResponse object is serialized and emitted back to the client through the ‘prediction_response’ event type. If the model raises an MLModelSchemaValidationException error, the error is serialized and sent back by emitting an ErrorResponse object back to the client. If any other type of exception is raised, a ErrorResponse object is created and sent back to the client.
The Websocket handler that we built in this section is the only one that we need to add to the service in order to expose any machine learning models to clients of the Websocket service. The handler is able to forward prediction requests to any model that is loaded in the ModelManager singleton. The handler is also able to handle any exceptions raised by the model and return the error back to the client.
REST Endpoints
In order to make the Websocket service easy to use, we will be adding two REST endpoints that expose data about the models that are being hosted by the service. Even though the models can be reached directly by connecting to the Websocket endpoint and sending prediction request events, knowing what models are available and data to send into each model is helpful for users of the service.
The first REST endpoint queries the ModelManager for information about all of the models in it and returns the information as a JSON data structure to the client.
@app.route(“/api/models”, methods=[‘GET’])
def get_models():
model_manager = ModelManager()
models = model_manager.get_models()
response_data = model_collection_schema.dumps(
dict(models=models))
return response_data, 200
The code above can be found here.
The second REST endpoint is used to return metadata about a specific model hosted by the service. The metadata returned includes the input and output schemas that the model uses for it’s prediction function.
@app.route(“/api/models/<qualified_name>/metadata”, methods=[‘GET’])
def get_metadata(qualified_name):
model_manager = ModelManager()
metadata = model_manager.get_model_metadata(
qualified_name=qualified_name)
if metadata is not None:
response_data = model_metadata_schema.dumps(metadata)
return Response(response_data,
status=200, mimetype=’application/json’)
else:
response = dict(type=”ERROR”, message=”Model not found.”)
response_data = error_response_schema.dumps(response)
return Response(response_data, status=400,
mimetype=’application/json’)
The code above can be found here.
Using the Service
In order to test the Websocket server we wrote a short python script that connects through a websocket, sends a prediction request, and receives and displays a prediction response. The script can be found in the scripts folder.
The script’s main function connects to localhost on port 80 and sends a single message to the prediction_request channel:
sio = socketio.Client()
def main():
sio.connect(‘http://0.0.0.0:80')
data = {‘model_qualified_name’: ‘iris_model’,
‘input_data’: {‘sepal_length’: 1.1,
‘sepal_width’: 1.1, ‘petal_length’: 1.1,
‘petal_width’: 1.1}}
sio.emit(‘prediction_request’, data)
The code above can be found here.
To receive a prediction response from the server, we register a function that will be called on every message in the “prediction_response” channel:
@sio.on(‘prediction_response’)
def on_message(data):
print(‘Prediction response: {}’.format(str(data)))
The code above can be found here.
To use the script, we first start the server with these commands:
export PYTHONPATH=./
export APP_SETTINGS=ProdConfig
gunicorn — worker-class eventlet -w 1 -b 0.0.0.0:80 model_websocket_service:app
Then we can run the script with this command:
python scripts/test_prediction.py
The script will send the prediction request and then print the response from the server to the screen:
Prediction response: {‘prediction’: {‘species’: ‘setosa’}, ‘model_qualified_name’: ‘iris_model’}
Building a User Interface
In order to show how to use the Websocket service in a real-world client application we built a simple website around the Websocket and REST endpoints that were described above. The user interface leverages the models and metadata REST endpoints to display information about the models being hosted by the service, and it uses the Websocket endpoint to make predictions with the models.
This user interface is similar to the one we built for this blog post, where we showed how to deploy models behind a Flask REST service. We are reusing a lot of the same code here.
Flask Views
The Flask framework supports rendering HTML web pages through the Jinja templating engine. We created an HTML template that displays the model available through the service. The view code uses the ModelManager object to get a list of the model being hosted, then renders the list to an HTML document that is returned to the client’s web browser:
Model List Page
In order to show a model’s metadata, we built a view that queries the model object directly and renders an HTML view with the information:
Model Metadata Page
Both of these views are rendered in the service and do not use the REST endpoints to retrieve the information about the models.
Dynamic Web Form
The last webpage we’ll build for the application is special because it renders a dynamically -generated form that is created from the model’s input schema. The webpage uses the model’s metadata REST endpoint to get the input schema of the model and uses the brutusin forms package to render the form in the browser.
The form accepts input from the user and sends it to the server as a Websocket event of type ‘prediction_request’. The webpage also has a Websocket event listener that is able to render all of the ‘prediction_response’ and ‘prediction_error’ Websocket events that the server emits back to the client. The code for this webpage can be found here.
Prediction Web Form Page
Closing
The Websocket protocol is a simple way to build more interactive web pages that has wide support in modern browsers. By deploying ML models in a Websocket service, we’re able to integrate predictions from the models into web applications quickly and easily. As in previous blog posts, the service is built so that it is able to host any ML model that implements the MLModel interface. Deploying a new ML model is as simple as installing the python package, and adding the model to the configuration of the service. Combining the Websocket protocol with machine learning models is quick and easy if the code is written in the right way. | https://brianschmidt-78145.medium.com/a-websocket-ml-model-deployment-22d73938541b | ['Brian Schmidt'] | 2020-04-05 01:01:48.143000+00:00 | ['Machine Learning', 'Software Engineering', 'Data Science'] |
AI Article Title Generator | Why has nobody coded up an AI “article title generator” yet?
Here’s a bunch of circles that google thinks is an AI. I’m not convinced.
This seems to me to be one of the easiest ways to become fabulously rich. Here’s the idea.If someone with access to the right data wanted to, they could tune an artificial neural network (AI) to generate near perfect article titles.
How We Get Fabulously Rich
Modern media companies make their money based on clicks, which are farmed from social media. Classic model media organizations do still exist, but the definition of “classic” basically means “about to get rusty and fall apart at any moment.” The future of media is in the clicks. Most of the traffic comes from secondary sharing. Possibly, nearly all of it. And people share titles.
A staggering 59% of article shares happen without the sharer even reading the article. I literally just did it. That link? It’s to a Forbes article I’ve never read, and here I am linking it on Medium. I could have linked the Washington Post article on the same topic, but the Forbes one had a slightly better headline on my Google Search.
But now I’ve linked them both. Never read either.
Oh damn. Now we’re seeing how this works, in real time.
But that spills through. Watch the math. Let’s pretend we have a social network, such as Facebook, where each user has a couple hundred connections. Let’s say for the sake of argument that ten of those connections are tight enough to the user, and aligned enough with the ideology of the article, that they might be interested in sharing it. Six of the ten share it without even reading it. Then thirty six of the sixty users in layer two share it without reading it. Then two hundred sixteen….
…you get the idea. It’s a cascade.
Given that the title is responsible for over 50% of the shares, the title is more important than the body of the article. More important than the content. More important than the truth.
There’s a lot of actual money, when you do the analysis, tied up in having the perfect article title. We should be able to tune an AI to generate near perfect ones. For a refresher on how these things work, click here and skim to the second section:
You’d need some esoteric data though. You’d need a database of every article title you could find, as well as its time series traffic history. You’d need a language heuristics package to parse phrases into data models of phrases. You’d probably need more mysterious programming mammer jammer I haven’t thought of. And then you feed the titles into the input layer and tune it to click histories in the output layer. Once tuned, it becomes a traffic prediction engine, where you feed in a new title and it tells you the traffic that title probably gets you. And then, because “computer,” you feed it a kabillion randomly generated article titles, sort the kabillion titles by score, and filter out what makes any sort of sense inside the top 1%. Then write the article around the title. Because who cares, nobody reads the article anyway. (Are you reading this? HELLO! Help, I’m trapped inside a fortune cookie factory.)
Does Popular Science have one of these things already?
Our first problem: Big Media companies are the only entities with access to [article title vs traffic] data in enough depth to tune the model. So how do we get rich? Here’s our plan: We convince them to let us tune our model at AI Article Title Generator Incorporated (stock ticker: AIATG) towards their data. A tool like this could be worth whole percentage points in gross revenue to them. They’d pay out the nose. If Big Media doesn’t already have this, then it’s probably only because they don’t realize they need it. In particular, they need to get it before the other Big Media company down the street gets it.
I really don’t think it would be that hard to do, technically. We cook up AIATG Incorporated in Silicon Valley or Raleigh Durham, pitch to NBC or similar, and knock the project out in a year for a few million dollars, which is nothing compared to the potential payout. Big payout. Whole percentage points on NBC’s gross revenue payout.
But then the next big media corporation comes knocking on AIATG’s door, because they are getting killed in the (secondary sharing from title only) marginal traffic marketplace. Obviously we sell them one too, tuned to their data, for the same price as the first sale. The development for the second one is fantastically easy, because we’re just re-tuning the thing we’ve already built to different data. That’s almost a push-button exercise. We make bank there, and another client comes along, and another. And we all become fabulously rich.
But it doesn’t end there.
After AIATG Inc. has eight of these AI installations in place, we will also have eight times more actual data than any of the major media companies had on their own. So AIATG Media, the newly founded media wing of the company, can develop its own article title generator orders of magnitude more capable than all the other media outlets. AIATG Media begins eating NBC’s lunch by keeping all the best article titles for ourselves. AIATG not only has the potential to be a major media player, but to quite possibly destroy all legacy media.
It’s Bezos big. Way bigger than Musk.
And what are the other Big Media companies going to do? Come after us? Accuse us of manipulating the media? They’re all complicit in the scheme, and even if they weren’t, we have a weaponized article title generator far more powerful than they do, so we destroy them. In the media.
Weird Thoughts
It’s not at all a hard thing to envision, if you look at the comps.
China is probably already got something like this running, related to their creepy social credit score. They’d do it like this. The higher your score is, the more loyal you are to the Chinese government. That’s almost literally how that score works. They’d map their processed article title database against social credit score fluctuations. Figure out what article titles make the people the most patriotic, or the most anarchic. All buried in the ANN, and predictive. They could control an entire population that way. And if there’s one thing the Chinese have got, it’s population. No more Tiananmen Square incidents if everyone’s loyal.
I wonder if this sort of project isn’t exactly why the Chinese invented the social credit score in the first place. Maybe they already have the AIATG.
But holy smokes, if that’s true then they could weaponize it against us. What would a weaponized version look like? What if they tuned it to anger and hatred, measured by some publicly available metric, or just some complicated balance of daily proxies for measures of patriotism or something. Tune it to that. But then they’d need a media network to deploy it, do disseminate article titles that destabilize a populace. They don’t really have such a media network in the English speaking countries.
But Russia does. Maybe Russia’s been doing this crap to us for a while. I’m not sure why they would have developed it over there. Were they thinking about this sort of idea back into the cold war? Computers were garbage back then.
I don’t think Russia has a system like this, because if they did then the stuff that came out of their social media botfarms in 2016 wouldn’t have been so laughably bad. But maybe they’re working on it.
And the last thought I have on the matter, is to wonder how highly an article entitled “AI Article Title Generator” would score in the AIATG predictive engine.
I should probably write an article like that, to find out. | https://medium.com/handwaving-freakoutery/ai-article-title-generator-56eee8cb13df | ['Bj Campbell'] | 2019-02-10 15:17:43.642000+00:00 | ['Artificial Intelligence', 'Social Media', 'Media Criticism', 'Culture', 'Media'] |
Will AI Take My Writing Job? | Do you have a side gig? An idea for a novel you’ve never written? Awesome, we’ll come back to that later. Now the big one: are you afraid that AI will take your job?
If you’re a UX writer, you may worry it’ll take “the writing part”. Not so! Let’s discuss.
First, here’s how artificial intelligence and machine learning work:
At its core, AI is math. And lots of it. AI is a research field, and ML is a type of technology within that field. An ML-based algorithm analyzes a bunch of data, finds patterns, then makes predictions about new data, based on what it learned from the analysis. Theoretically, it gets better at this over time.
For writing, the easiest way to see this in action is when a smartphone’s autocorrect picks up patterns in the words you type, and predicts which words usually come next. This, of course, is not the same as writing, and it can sometimes come off as unhelpful or weird. However, there are ways in which these predictions can be super helpful for actual writers. But let’s start at the weird part.
There are many funny articles about writing only using your mobile phone’s autocorrect feature. This is most intriguing when algorithms suggest things like “I love you” as a closing sign off in response to lunch invitations. Writers and researchers who work on these systems have to flag “I love you” as something not to suggest. Even though the algorithm is picking up that this is a high probability phrase, it’s not right, and like, very awkwardly not right. Just because you say it a lot doesn’t mean you want to say it all the time.
That makes sense, right? It does to you, but it doesn’t to an ML model.
Let’s talk about this AI-generated Harry Potter fanfiction:
A machine wrote a book based on the most popular aspects of the collection of Harry Potter novels. Bananas, right? How did a machine-learning model write a book? So weird. Let’s explore.
The ML Harry Potter book was actually based on a huge database of writing. Writing by a human. Without that, you couldn’t have ML Harry Potter.
For machine learning to actually produce content, it needs a whole lot of information, or data, to learn from.
In short: to write a book, an ML model needs a huge database of words and phrases, exactly like the book you intend for it to write. To generate meaningful phrases, the algorithm also needed labels for important functional elements — like protagonist, villain, etc. With the help of people who can label the words with actual meanings, books can become training data for an ML model.
In the world of ML, the Harry Potter series isn’t actually very much data. Which is why the story was so very inhumanly incoherent. All the algorithm could do is predict which words usually follow other words in the context of these specific novels.
To make these ML-predicted word strings hang together, the story “was constructed with the help of multiple writers, who edited the sentences generated through a combination of algorithmic suggestion and authorial discretion.” So, in short, the ML Harry Potter was written by a bunch of (I’m sure very giggly) writers with an undoubtedly high penchant for Mad Libs®.
In order for AI to actually write like a person, it would need to come up with an idea, or a story that it wanted to tell. It would need to generate narrative. It would need to set up emotional tension, dramatic highs and lows, cause and effect.
There are currently people who are working on teaching AI how story works. Professor Patrick Winston at MIT, who has been working on this for ~15 years, has said that stories are “a fundamental differentiating capability of us humans.” And machines don’t get it. (Yet.)
That’s where you come in.
You know how to tell a story to other humans. In fact, you can assess a situation, an audience, or a person, and figure out what kind of story will help foster understanding within them. You can tell stories that haven’t been told yet — ML can only return remixes of ones that have.
Now, let’s talk about writing:
For writers, what part of your job is really tedious? Are there any parts that are unreasonably hard, boring, or inherently uncreative?
For me, it’s remembering pre-established, researched, patterns for a specific user context, then writing for those standards.
For that stuff, I’d love a reminder of a story that’s already been told. I could definitely use a suggestion for the best, most deeply-researched way to write error strings for a particular server malfeasance.
It would be nice to have a program that could recommend a selection of pre-approved error strings that are standard across my company, that I can check, lift, drop, tweak, and be done with.
It could be up to me to decide when and where these examples or suggestions make the most sense.
Then, I wouldn’t have to go and dig up that research deck from another team that proved that a particular phrasing is actually clearer.
That would be rad. And that job can be done by ML. But first!
It’s up to us to write the story for it to learn, so it can then give it back to us when we need it.
To write the best for humans:
You have to decide the best way to communicate to people.
You have to figure out the most interesting analogy, the simplest language, and the clearest prose.
As a result, you get to create something glorious.
Since we (writers and designers) are the keepers of the stories, the only jobs that AI can take from us are the ones we want to give away. You can teach a machine what writing is, but it can’t write a UI, description, or article on it’s own — it needs your ideas first.
However, with a little help from ML, you can enjoy the benefit of having the stuff that should be uniform and perfect, be uniform and perfect.
What one could do, to make one’s life a little easier, is create a very useful database of well-designed strings, and use ML to recommend a couple good candidates for your situation. This would be most useful for strings that are best standardized — the ones you don’t want to spend a ton of time reinventing.
Once you have this set of standard strings, you could label them based on what makes sense for each context in your UI. Those labels can help teach an ML model which strings make sense for each context. In theory, you could use this dataset as your testing data to generate string suggestions for a couple predictable scenarios. You would basically have a pre-programmed set of standard options that your ML can generate for a narrow set of situations. That’s about how much good writing ML can do — it can serve up what you already put in, ideally at the time and place when you need it again.
So where does that leave us when writing with AI?
Let’s think about this with a handy metaphor. You know how your microwave has a potato or a popcorn button that helps you cook those things pretty effectively and uniformly? Instead of using several kitchen implements and waiting a long time, you can put these things into a microwave, and press a button — it already knows the best time and temperature. The machine can pretty much guarantee a decently cooked potato. This is what a database of strings can do for you: it can give you a standard set for a given scenario.
This is useful because I don’t want to have to re-learn how to microwave a potato. I’d like it to be perfect on the first try, so I can avoid potential potato explosions, and because potato is one element of a delicious dish. Similar to how a single string is one element of your entire UI. For those boring simple ones, it would be nice to press a button and have those at least halfway done.
Now that you have the boring and predictable part done, you can really get wild. Personally, I’m going to use that microwaved potato as a base for something delicious and interesting that my microwave has no idea about. It doesn’t know what dish I’m making, and it doesn’t know for whom I’m making it — I, the chef, i.e. user experience creator, know the end product and the user journey. The microwave just knows that for potatoes, heat on high for 4 minutes, stop, then do the same thing again.
ML the boring stuff so you can write even better.
So far, we’ve learned that machine learning can do those parts of our job that are:
A. Hard or boring;
B. Should, for safety and efficiency and lower clean-up costs, be standardized;
and
C. Are not inherently creative. An algorithm doesn’t have narrative laced through it, just as your microwave has no idea what those freakin’ potatoes are for.
Having some part of the job done easily, correctly, predictably, and programmatically can loosen us up to get even more creative. If you don’t have to spend time boiling potatoes, what are you doing? Perhaps, when you’re not reviewing very boring strings for consistency, you’re out there keeping users safe in other ways.
As a writer with the heavy lifting of consistency gone, you can focus on other stuff:
(1) Defining the voice and tone.
(2) Designing the information architecture.
(3) Labeling the most valuable parts of the content.
(4) Coming up with new ways to present information.
(5) Scaling good, tested, consistent, UX writing to more products so more people have access to information around the world.
You can also ensure that the UX writers of the future, who will also be ML writers, are thoughtful, prepared, and empowered to do great work for the benefit of users.
So that’s it! That’s why AI isn’t taking your writing job. It could actually make your job way easier, while opening up time for the creative, explorative, fun work that your brain is really good at. So you can let the machines do the little stuff and get back to writing that novel. | https://medium.com/google-design/will-ai-take-my-writing-job-bb4400fdfdd0 | ['Roxanne Pinto'] | 2019-05-17 14:15:03.043000+00:00 | ['Machine Learning', 'Writing', 'UX', 'Ux Writing'] |
Create a Customized Color Theme in Material-UI | Create a Customized Color Theme in Material-UI
Making a custom color theme for your next React project is easy
Photo by Sharon McCutcheon on Unsplash
I recently started using Material-UI for all of my React projects and have been enjoying how easy it is to build beautiful interfaces. In this article, I want to explain how to get started with Material-UI and customize a color palette theme to use for your next project.
What is Material-UI?
Material-UI is a React UI framework that follows the principles of Material design. Material design is a design system created by Google. If you are interested to learn more about Material Design, have a look at the video below.
What I like about Material-UI is that it makes customization easy. You can easily change colors, typography, spacing, and more in individual components or globally by customizing the theme. Let’s get started!
Getting Started
First off, I will assume that you are able to set up a basic react app. The easiest way will be to use create-react-app. Once you have your basic app set up, we need to install the Material-UI core package. Open your terminal and run the following code.
// with npm
npm install @material-ui/core // with yarn
yarn add @material-ui/core
Next, we want to clean up the code from our src folder. Delete the CSS files and clean up the App.js file. Your App.js file should look something like this.
import React from 'react'; const App = () => {
return (
<div>App</div>
);
} export default App;
What is the Default Theme?
Material-UI has a default theme that sets default values for breakpoints, typography, color palette, and more. The value that we are interested in here is palette. In the palette object, there are nested objects for primary and secondary, and within each object, there is a light, main, dark, and contrastText property.
You will see that the default main primary palette is set to a bluish color (#3f51b5), and the default main secondary palette is set to a pinkish color (#f50057). These are the values that we will be overriding.
Material-UI Default Palette Theme
Creating a Theme with createMuiTheme
The next step will be to create a custom theme. In your App.js file, import createMuiTheme and ThemeProvider (we will be using this in the next section) from ‘@material-ui/core’. We will then create a theme calling the createMuiTheme function. In the function, we will pass in an object as the argument and set the main value for the primary and secondary color palettes. This will override the default theme, and set a new color palette for our project. You only need to set the main color, as Material-UI will automatically set the light and dark colors for us based on the value of the main color. It will look something like this.
import { createMuiTheme, ThemeProvider } from '@material-ui/core'; const theme = createMuiTheme({
palette: {
primary: {
main: "HEXADECIMAL COLOR"
},
secondary: {
main: "HEXADECIMAL COLOR"
}
}
});
Please note that in the Material-UI documentation, they show an example of importing purple and green from the Material-UI core package. This is another way you can set the color, but I find that just using a hexadecimal value has been an easier approach for me.
import { createMuiTheme } from '@material-ui/core/styles';
import purple from '@material-ui/core/colors/purple';
import green from '@material-ui/core/colors/green';
const theme = createMuiTheme({
palette: {
primary: {
main: purple[500]
},
secondary: {
main: green[500]
}
}
});
Here is the fun part! Choose two colors that you want to use as the primary and secondary colors for your app. This color tool from Material.io has helped me when choosing the right colors. It will show you examples of where and when the primary and secondary colors will be used, and you can get a visual image of the color scheme working together. Play around with it until you find the two colors you want to use.
Choosing the right color palette for your application is not easy. If you are just starting out, I recommend choosing colors that you like and think look good, and not spending too much time worrying about this initially. By customizing the theme this way, it makes it easy to change your color scheme later in your project if you choose to do so.
If you are like me and not great at design and choosing a color palette that will be effective, here is an article that can give you some insights on choosing the right colors for your website.
Using the ThemeProvider Component
Now that you have chosen your color palette and created a theme, it is time to implement the theme into your project. We can do this by using the ThemeProvider component. The ThemeProvider component will wrap any components that will be using our custom theme. It should ideally be at the root of the component tree. We will pass a theme prop to this component which will be equal to the theme object you created earlier.
<ThemeProvider theme={theme}>
COMPONENTS TO USE THEME
</ThemeProvider>
Wrapping Up
If you followed this far, your final App.js file should look something like this. In the example below, I also imported the Button component from the Material-UI library and am rendering two buttons. One is with the primary color and one is with the secondary color.
The output should look like this. As you can see, another great thing that Material-UI does for us is it automatically changes the text color for us. The primary color is set to a dark green color, so the text is light, while the secondary color is set to a lighter orange color, so the text is dark. | https://medium.com/swlh/create-a-customized-color-theme-in-material-ui-7205163e541f | ['Chad Murobayashi'] | 2020-11-30 13:21:36.730000+00:00 | ['React', 'Material Ui', 'Material Design', 'Programming', 'Colors'] |
Univers Labs Web Design Guide | Univers Labs Web Design Guide
How to Design for Univers Labs and Ensure that your
Digital Designs are Developer Ready
Following over 300 digital design and development projects, we have compiled our experience into this guide to ensure that your designs are fit for the web and look amazing when finally built. This is the very same guide that we use internally at Univers Labs to ensure that our designers produce the highest quality and efficient projects.
Guide Contents 📕
Key Points Design Software Break Points & Safe Areas Design Annotations Components & Style Guides Content Fonts Assets Layout Modules Developer Ready Design Checklist Useful Resources
Key Points 🏹
Designs should be completed in UX/UI software such as Adobe XD or Figma.
All designs must be optimised for the breakpoints and safe zones .
be optimised for the breakpoints and . Take note of the highly important content guidelines.
Check our designer checklist for elements that must be designed for the web.
Make use of master components and component states inline with the atomic design paradigm
Design Software ✏️
We recommend completing designs in Adobe XD. Adobe XD is fit specifically for designing digital UX. It makes working on a project dramatically easier and ensures that what you design can be built to the highest-quality standards.
Software
Status
Notes
Adobe XD
✅ Preferred
Figma
🆗 Alternative
Prototyping is not as efficient
Sketch
😒 Not advised
We don’t like it as much 🙃, developer tools not as useful
Adobe Illustrator
😒 Not advised
Buggy, causes fractional pixels, designed for illustrating.
Adobe InDesign
⛔️ Banned
Not fit for professional developer ready designs.
Designed for print publications and typography.
Photoshop
⛔️ Banned
Not fit for professional developer ready designs.
Designed for photo/bitmap post-production.
Breakpoints & Safe Areas 📐📱 💻 🖥
Designing for the web is challenging because of resolution management across multiple and changing mobile, laptop and desktop resolutions. At Univers Labs, we support five device categories in our project proposals/contracts. To ensure the websites we build look great across all of these devices, we use designs that are optimised for three breakpoints with responsive scaling in between breakpoint widths.
Feel free to use our pre-made Adobe XD Templates which are already correctly setup.
UL Breakpoints v1.4 21st July.xd
Please ensure you check and confirm the proposal/contract for the supported breakpoints. Some contracts may offer more than three breakpoints as per our extended Full Breakpoint Table.
Please ensure your design files art boards are set to the above dimensions and that you are using guides or layout grids so that designs are optimised for the safe areas.
Design Annotations 📝
Where necessary, you should annotate your designs to indicate anything that isn’t shown. You can add these annotations on another layer.
These annotations could highlight:
Animations (Speed, curves etc, example After Effects is acceptable. Even something like “fast and flashy, gentle and delicate, subtle, not subtle”)
Show / Hide functionality
How the design should scale on different screen sizes
Where something is a button and what it should do.
Anything else you think is worth noting
Rollovers / other states
Style Guides 🎨
Make use of XD’s or Figma’s master component system and states to ensure consistency of reused elements throughout your design. It’s likely you’ve designed with a set, it is helpful for us if you have a separate art board that shows all of these styles. It should show:
Content
Content Plan
Ensure you have a good understanding of the content plan for the project you’re working on. An initial content plan is usually provided shortly after a project kick-off with the end client. It should contain:
Sitemap Audiences Summary list of content for each page Functionality for each page Examples, references, and notes
Our Content Process
At Univers Labs, the content process is a two-way street. We collaborate with strategising and advising a client on the best way to present content throughout a site or app via the following process:
Realistic Content
To ensure our designs can be properly reviewed and tested we must ensure designs use content that is either from:
Worst Case Content
Make sure that the content used fits a “worst case” scenario: If you feel a section will need a lot of text from a client, ensure that it works with a lot of text. If the content plan says we are bringing content over from an existing website’s blog, try and identify the “worst-case” content. Lots of formatting, images, text, etc and try to fit it in to your designs. This will save headaches later on in the process.
Content Constraints
Where you advise limitations and rules to content within your designs, you should configure text and image box constraints in your design software and make sure you annotate these.
Visual Content
You should use client on-brand and relevant imagery or industry applicable stock photos (unsplash is a good source). Photos of Cats (whilst utterly wonderful) are not appropriate. Take a moment to do image searches for contextually relevant images. Try to ensure these are in the public domain and available for reuse.
Fonts 🔡
Make sure that all fonts used in your designs are able to be used on the web as webfonts. In the first instance, you should check the following sources for the font and if it is available as a webfont.
Google Fonts
Adobe Fonts
If a font a designer wishes to use is not available on these sites, we ask that they contact us before using the font as we will need to procure it correctly.
Assets 💾
Ensure that you mark assets (icons, logos, images) in your design as being exportable from your design software. In XD this can be done by selecting “Mark for Export” on the objects.
If you are providing bitmap (photo, pixel) assets, you should make sure that they are at least 144dpi for use on Retina displays.
Layout Modules
For web design, Univers Labs always uses layout modules via a CMS. This means that the design is broken into distinct design elements that logically can be reused throughout a website. This allows our users to create new pages easily from a library of layout modules, giving them the flexibility to expand and adapt their website.
When designing layout modules, keep in mind the following points:
At Univers Labs, layout modules need to work across our minimum three standard breakpoints. Layout modules should be designed to work and look good in any combination with other layout modules. To ensure a consistent experience and design, layout modules should be made up of reusable component instances as per the Atomic Design Methodology.
Univers Labs Developer Ready Design Checklist ☑
Designs have been reviewed by a developer and 2nd UX designer for checking fitness for development ™️ Favicon is designed 🍪 Has the client supplied an up to date and GDPR compliant Cookie Policy and Privacy policy documentation? Can this be designed for all breakpoints? If not why and how can we make it so it can? 📝You have provided annotations for design details such as responsive and interactive behaviour. 📱💻 Ensure that all art boards are available in one XD/Figma file per page template with corresponding breakpoints art boards next to each page/ screen art board. 💾 Assets such as icons and illustrations are marked for export Content meets the requirements set out in this guide 🔍📐 Have you checked your design safe areas for each breakpoint? 📱Does mobile look useable on landscape? If not what should change?
Useful Resource Links
Unsplash
High quality, royalty free stock imagery
ICONS8
Premium icons, product level more so than the free noun project
Dribbble
Useful UI inspiration
The Noun Project
Library of icons with royalty free use
Google Fonts
Adobe Fonts
Web font libraries
Atomic Design Methodology
Design system for modern digital applications such as web, apps and software | https://medium.com/universlabs/univers-labs-web-design-guide-89e0c11b3de6 | ['Univers Labs'] | 2020-08-25 16:04:10.614000+00:00 | ['Web', 'Design', 'Digital Development', 'Digital'] |
Creating chRistmas caRds | Diverging bar plot
Image by Author.
Let’s start with an easy one. The key point of a diverging bar plot is comparing data with a midpoint/baseline. We will use this idea to create symmetric bars that will represent our Christmas tree.
This is the most cumbersome of the approaches we present, since we need to create the data frame manually. What we need for this 5 level tree:
Specify each of the 10 bars (however, only 5 unique values, since we are specifying both, the left and the right side of the diverging bar plot). This is shown in the wish column, where we can be a bit sneaky and include a secret message!
Set divergence values. Values at the same level need to be the same in the absolute sense, with one of them being positive and the other negative.
Add labels for different parts of the tree (so we can color it later).
That’s it, we are ready for ggplot2 to do the magic. Create the bars, select your colors, throw in some ornaments in the end and your first card is ready!
Dirichlet sampling
Image by Author.
Now let us dip our toes into statistics.
Dirichlet distribution is a multivariate generalization of the beta distribution, parameterized by vector α of positive reals and length K. The support of a Dirichlet distribution of order K is the open standard (K-1)-simplex, which is a generalization of a triangle. For example, for K=3 we get an equilateral triangle.
A triangle? Hmm … Christmas tree is sort of a triangle, isn’t it?
All we have to do is sample uniformly from an equilateral triangle to get the points for the tree and the ornaments. Then we map the values from ℝ³ to ℝ² and plot them using ggplot2. To top it off, we also add a nice star on the top and wish everyone a merry Christmas!
3D spiral
Image by Author.
For the last one we move from 2D to 3D. We draw a simple spiral and decorate it with some bigger spheres for the ornaments, and some smaller ones for the twinkly Christmas lights.
In the 2D plane a spiral has the following parametric representation:
x = r(φ)cos(φ) and y = r(φ)sin(φ).
We set r(φ) = φ and φ= i/30, where i is the iteration index. To stretch it out in 3D we also need to add the third coordinate z, which can be just z = n_tree-i, where n_tree is the number of points we want.
We generate the data and then sample from that to get the coordinates for the ornaments and the lights. We reduce the z coordinate for the ornaments by some constant, so they appear below the line, and add some Gaussian noise to the lights, for them to spread out around the tree.
We plug the data into plotly, specify the colors, size and other details … and here it is! Our very first 3D Christmas tree. | https://towardsdatascience.com/christmas-cards-81e7e1cce21c | ['Greta Gasparac'] | 2020-12-17 14:41:22.825000+00:00 | ['Christmas Cards', 'R', 'Ggplot2', 'Plotly', 'Data Science'] |
How to Measure AI Product Performance the Right Way | But in order to quantify the accuracy of the underlying algorithm or the interaction of the AI product with the user, business indicators are not enough. This requires operational KPIs and proxy metrics for data driven decision making:
Operational KPI
Reminder: what is a key performance indicator? I like this short and memorable definition:
The introduction and iterative further development of AI products is capital-intensive. In this respect, it is important to measure product performance on an ongoing basis and to make data-driven investment decisions in roadmap planning.
The goal is now to determine the influence of AI capabilities on operational product KPIs such as cost per acquisition, new sessions, retention rate, conversion rate or number of trial sign-ups. The hypothesis is: The more pronounced the AI capabilities in the product are, the greater their influence on operational KPIs:
For a product with high AI capability, it is legitimate to put KPI changes in direct connection with it. For example: Google Home Devices in combination with the Google Assistant have a high AI capability. Hence the hypothesis: Rising sales figures or the number of active users correlate with the quality of the Google Assistant.
The other extreme: in an email client, an ML algorithm ensures the classification of new emails into “work”, “private”, “social” and “marketing”. The AI capability of the product is low. In this respect, operational KPI changes are not due to AI features.
AI and Machine Learning Proxy Metrics
Regardless of the form of AI capability, the operational KPIs mentioned are not suitable for assessing the AI aspects of a product in isolation. Let alone improve the product iteratively in the build-measure-learn rhythm. The accuracy of a spam filter or a recommendation engine does not correlate directly with new sessions or the cost per question. And if it does, it is only possible to isolate the effect with great effort.
AI proxy metrics are better suited to quantify questions about the accuracy of the algorithm or the contribution of AI to the user experience. Proxy metrics only show part of the big picture, but ideally correlate positively with the high level KPI. The following proxy metrics are suitable from a product perspective:
Objective Function
A common approach to measuring success is an objective function. I consider it a proxy metric because it does not necessarily have an impact on financial KPIs.
An objective function is a mathematical formula that needs to be minimized or maximized in order to solve a specific problem using AI. If the goal of the algorithm is to optimize a travel connection, the objective function must be minimized in order to achieve a short travel time. If the objective function quantifies accuracy, the goal is to maximize the function:
Classification Accuracy
A classification algorithm assigns a new data point to an existing category [2]. Now I want to know the accuracy with which the algorithm makes the classification. In the simplest case, it is a “binary classifiers” with two target categories. For example, the classification of e-mails into junk or relevant. Four results are possible:
- True positive: Correct positive prediction. - True negative: Correct negative prediction. - False positive: Incorrect positive prediction. - False negative: Incorrect negative prediction.
The key figures True Positive Rate (TPR) and False Positive Rate (FPR) [3] are now suitable for determining the classification accuracy:
TP is the absolute number of true positive results. FN is the absolute number of false negative results.
Mean Absolute Error (MAE)
Mean Absolute Error is one of the simplest and most popular metrics for determining the accuracy of predictions from regressions.
The starting point is the assumption that an error is the absolute difference between the actual and the predicted value. MAE uses the average of the absolute difference of all errors and gives a final result between 0 and infinity. MAE is best used in scenarios where the magnitude of the error is irrelevant because the errors are not weighted.
The mean absolute error (MAE) can be shown in a currency or other unit. For example, if the goal of an ML algorithm is to predict the development of property prices, the unit of the MAE is euros or dollars. One example of the result of the MAE calculation is: “The forecast of property price developments deviates by an average of EUR 50,000 from the actual value.”
Root Mean Squared Error (RMSE)
Like the MAE, the RMSE is used to determine the accuracy of predictions. Because of the fact that all errors are squared before they are averaged, the RMSE gives weight to larger errors. If the magnitude of the errors plays a role, RMSE is suitable for determining the average model prediction errors.
Sensibleness and Specificity Average (SSA)
SSA [4] is a metric developed by Google to quantify how natural a dialogue with a chatbot feels to people. The Sensibleness and Specificity Average (SSA) focuses on two central aspects of human communication: whether something makes sense (sensibleness) and whether it is specific (specificity).
How natural does a dialogue feel? // design.absurd
In the first step, the tester judges whether the chatbot’s response is reasonable in the context of the dialogue. In the second step, he judges whether he considers the answer to be context specific. The SSA score is the average of these two results.
AI and Machine Learning OKRs
Objectives and key result are ideal for measuring the success of an AI product. The aim of the OKR framework is to set ambitious goals. The progress on the way to goal achievement is tracked with key results [5].
OKRs shouldn’t be top down // design.absurd
OKRs are so well suited because key results are created bottom-up. The operational team determines which metrics are best suited to continuously improve a product. In an AI/ML context OKRs are for example:
Objective
“To develop an AI product which generates predictions for our B2C users.”
Key result 1
“Generate over 10000 B2C user signups via the Android and iOS apps.”
Key result 2
“No more than 1% false positives on the ML prediction model.”
Key result 3
“Introduce continuous deployment to speed up ML model deployment.”
It is important to formulate ambitious goals (experience has shown that 60–70% goal achievement is a good value). OKRs should always be transparent for the entire organization to give others an insight into the own projects.
Conclusion
The success measurement of an AI product is possible to a certain point with the usual operational product KPIs such as sessions or number of sign-ups. To isolate the accuracy e.g. of an Machine Learning algorithm, proxy metrics are used. OKRs are ideal for higher-level progress measurement on the way to achieving goals. | https://medium.com/swlh/how-to-measure-ai-product-performance-the-right-way-2d6791c5f5c3 | ['Simon Schreiber'] | 2020-06-30 11:25:22.501000+00:00 | ['Analytics', 'Artificial Intelligence', 'Data Science', 'Product Management', 'Machine Learning'] |
Object Detection using OpenCV | Photo by timJ on Unsplash
Object detection is an important part of image processing. The autonomous vehicle has to detect lanes, road surfaces, other vehicles, people, signs, and signals, etc. We live in such a dynamic world, and everything is constantly changing. While ago, a friend gave me a PDF file, told me to locate and get a specific value in PDFs. The application of Object Detection is everywhere. Here is another example, Data scientists are using Object Detection to identify the planet disease from the vegetable leaf.
A few weeks ago, I was doing research on Deep Learning and Computer Vision on autonomous cars. This article includes some of the interesting things from my research. Feature Detection is one of the tasks of Object Detection. So, What is feature detection? For Humans, We understand the pattern, shape, size, color, lengths, and others to identify the object. It’s somewhat similar to computers as well. The Feature is something we train our model to know if it finds one. A Feature can be anything like shape, edge, length, etc., and also combinations of all of them. In one of my previous projects about DeepFake detection, I used MSE (Mean Square Error), PSNR (peak signal-to-noise ratio), SSIM (Structural Similarity Index), and histogram as a feature to identify DeepFake images form real images.
A feature can be anything unique that can be found in general. A good feature has to be repeatable and extensible. For example, suppose the goal is to detect dogs from the large set of images, which also contains the images of cats and other animals.
Image by Gerhard G. from Pixabay
Going back to the previous statement about feature has to be something unique and need to be presented in the majority of data. If we have a majority of images similar to the above two images, What could be the bad Feature here?
A bad feature, in this case, is ear size. Our understanding was the size of the dog ear is larger in general. But that is not the cause in our sample images. In the first image, the dog ear is a similar size to a cat or even smaller. If we use ear size as the feature to train our model only using these two images, we will have 50% true negative or false positive. This brings another important point. Which is if you want to have higher success in your model, you should pick the feature carefully. A tale can’t be a good feature to separate either because it’s not visible clearly. Size is also not a good choice.
Our goal here is to identify another object, such as a truck on the road. We can use a technique like Harris Corner Detection or canny edge detection to detect the edges. We need to separate cars, pedestrians, signs from the images. We can use OpenCV to identify trucks specifically.
import cv2 cv2.matchTemplate()
Template Matching simply a technique that slides the input image over the template image and compares the template images and input image under a template image. It returns a grayscale image, and each pixel denotes how many neighbors of that pixels matches with a template. There are numerous template matching methods available in OpenCV. Here is the mathematical formula for the correlation coefficient.
Once the match is found in both images, it will singles the bright point. OpenCV official docs have details with code examples here. Let’s find the truck on the road.
import cv2
import numpy as np
import matplotlib.image as mpimg from matplotlib import pyplot as plt
%matplotlib inline image_color = cv2.imread('actual_truck.jpg')
plt.imshow(image_color)
We read the image from the file. We are going to locate the truck in this image.
Original Image Source: Photo by Ernesto Leon on Unsplash
Image height and width
Convert Image to Gray Scale
The reason to use grayscale is to make an image as simple as possible. There is no need for a multicolored image. Colors add complexity to the image and increase the signal-to-noise ratio.
image_gray = cv2.cvtColor(image_color, cv2.COLOR_BGR2GRAY)
plt.imshow(image_gray, cmap = 'gray')
Creating a template image
This is our template image. OpenCV usage this image collects the feature and locates the truck.
import cv2
import numpy as np
import matplotlib.image as mpimg
from matplotlib import pyplot as plt
%matplotlib inline image_color = cv2.imread('sample_truck.jpg')
x= 235
y = 350
h = 200
w = 150
cropped=image_color[y:y+h, x:x+w]
plt.imshow(cropped)
status = cv2.imwrite('t.jpg', cropped)
print("Image written to file-system : ",status)
Performing template matching
# Perform template matching using OpneCV
result = cv2.matchTemplate(image_gray, template, cv2.TM_CCOEFF_NORMED)
print(result.shape)
plt.imshow(result)
Locate truck
min_val, max_val, min_loc, max_loc = cv2.minMaxLoc(result)
top_left = max_loc
bottom_right = (top_left[0] + w, top_left[1] + h)
cv2.rectangle(image_color, top_left, bottom_right, (10,10,255), 5)
plt.imshow(image_color)
Conclusion
In this article, we cover what image processing and its application are. We discuss features and some bad feature examples. Choosing a bad feature will result in a false positive, or true negative. Then we discuss about cv2.matchTemplate(). Later we use template matching to identify similar trucks on the road. | https://medium.com/towards-artificial-intelligence/object-detection-using-opencv-cec18e5b746d | ['Neupane Parlad'] | 2020-12-31 03:19:23.642000+00:00 | ['Data Science', 'Opencv', 'Computer Science', 'Image Processing'] |
From a Self-Taught Data Scientist, Here’s Why You Shouldn’t Use Bootcamps | 1. Bootcamps are a sub-optimal solution.
A strategy that I constantly use in my education, in my career, and in my life is asking myself “what is the objective of doing this?” And so, I found myself asking “what is the objective of learning data science?”
Personally, there are two objectives for myself:
To develop and refine my knowledge in data science To gain credibility to show employers that I’m are a qualified candidate.
Now with these two objectives in mind, consider the image below:
Image created by Author
I’m not going to go through each point, but you can see that there are a number of alternatives that have a much higher impact in achieving the two objectives that I outlined above.
You might be thinking “well aren’t bootcamps good because I’m learning AND getting a certification?” and the answer to that is no. Generally, bootcamps are extremely surface-level, explaining only the “what” and not the “why”. And in terms of gaining credibility, bootcamp credentials get you nowhere — it’s what you do with it that matters (see point #3).
And to tell you the honest truth, I made the same mistake when I started out. Initially, I cared so much about “looking” certified on paper that I was more focused on piling certifications and less focused on developing my knowledge. What I got out of that was wasted time and useless certifications.
My goal isn’t to try to shut you down from learning. I understand that everyone has their own learning methods that work best for them, which is great. As I said in the preface, learning with bootcamps is certainly better than not learning at all! All that I want to do is make sure that you’re aware of more effective alternatives that will help you achieve your goals much faster (aka. becoming a data scientist).
So if you feel like you’re chasing certifications and not education, I strongly recommend that you take a step back and ask yourself what your objective(s) is (are). | https://towardsdatascience.com/from-a-self-taught-data-scientist-heres-why-you-shouldn-t-use-bootcamps-b9f7ae72b4e3 | ['Terence Shin'] | 2020-12-08 13:29:49.705000+00:00 | ['Data Science', 'Machine Learning', 'Opinion', 'Productivity', 'Education'] |
Behind the Scenes: The Inaugural Enigma Parseathon | What is your Senator’s worth (in assets, that is)? Would knowing their net worth change your perception of their voting record, or how you’d view their representation of you?
Using data from the Center for Responsive Politics, we found that the average Senator is worth about $2 million more than their House counterpart. Public data allows us to address these questions, and for that reason and more, we at Enigma really, really like public data.
Last month, employees from across our company — from accountants and marketers to interns and engineers — spent a whole day acquiring, cleaning, and parsing public datasets like the one above.
The Parseathon was born out of a desire to connect our employees to a process that is a central part of Enigma’s mission and powers our products: data acquisition. It was also a chance for everyone in the company to contribute a dataset to Enigma Public, our platform built upon the broadest collection of public data. Relaunched just a few weeks ago, Enigma Public takes on the hefty task of making unwieldy public data accessible to researchers, journalists, and curious citizens, among others.
For this year’s Parseathon, Enigma employees voted to focus on datasets related to political accountability. One of our data curators, India Kerle, hunted down over 40 datasets, including ones that contained the social media accounts of just about every politician, charity expenditures in Canada, and the net worth of U.S. lawmakers.
On the day of the Parseathon, employees chose datasets they were interested in and worked in small groups to complete their projects. For some employees, the Parseathon was their first time interacting with an in-house Extract-Transform-Load (ETL) tool called ParseKit. ParseKit allows for just about anyone to write their own data parser, even without knowing a coding language such as Python or R. It enables the user to point to a data source, set up a schema consisting of the column names and data types, and finally, specify a desired output. It also allows for more complex, custom steps if the data is particularly finicky. ParseKit plugs in nicely to Concourse, our platform for scheduling and maintaining ongoing parsers for data that is updated on a regular basis.
Jaleh Afrooze, our Finance and Operations Manager, was one of the first-time parsers. In her day-to-day work, she has little contact with ParseKit directly. For Jaleh, the Parseathon was an opportunity to see firsthand what she had previously only heard described by her colleagues. She found the technical problems of parsing quite exciting, and felt that the experience afforded a new kind of understanding of both what many people at Enigma do on a daily basis and the types of data problems we solve as a company.
Stephanie Spiegel, our People and Operations Manager and another first-time user, echoed that thought. For her, the Parseathon was a great way to dip her toes into technical work.
Like many at Enigma, Stephanie has strong opinions about what makes good public data. “Why a dataset would be published without a clear dictionary of terms is one of life’s greatest mysteries,” she said.
At the Parseathon, Stephanie chose to “metabase” existing datasets. “Metabasing” is part of the lingua franca at Enigma, referring to the process of adding metadata to each dataset on Enigma Public, with the intention of providing additional context, more precise names to columns, etc. Data dictionaries are, of course, essential to proper metabasing.
Thankfully, we have Stephanie and many others at Enigma to act as good stewards of public data, by adding that layer of context and accountability. Our team metabased over 20 datasets on the day of the Parseathon.
Beyond the data and the toolkit, the Parseathon was an opportunity to work with colleagues most would not interact with on an everyday basis — all over banh mi and mini ice-cream sandwiches. At the end of the day, Eve Ahearn, the Parseathon’s principal organizer, awarded “Data Wrangler” bandanas to the event’s most enthusiastic participants.
Jaleh was one such winner. She described the day as “one of the most fun days at Enigma.”
You can view our political accountability datasets and more on Enigma Public. Let us know what you find. (You can find us on Twitter, @enigma_data.)
Do you love public data? Check out opportunities at Enigma. Come help us empower people to interpret and improve the world around them. | https://medium.com/enigma/behind-the-scenes-the-inaugural-enigma-parseathon-b931b624d24f | ['Rashida Kamal'] | 2017-10-03 19:19:29.495000+00:00 | ['Data Science', 'Metadata', 'Open Data', 'Engineering', 'Data'] |
Your Existence and Good Content Aren’t Good Enough | Have you ever stumbled upon a writer’s page and noticed the number of their following and the number of claps on their stories don’t add up?
I have.
They have followers in the thousands, I’m talking 5k to 10k, yet the claps on their stories never exceed 200 to 500.
Sure, claps don’t tell you how many people read the story, but a low number of claps is a telltale sign that the view count isn’t particularly high, either.
Haven’t you ever wondered why that is?
I’m constantly seeing the same piece of advice floating around on Medium about how to get more followers, usually titled something like, “How to get 10,000 followers in three months!” or “Get 200 to 500 followers a week with this one simple trick!”
Its:
Follow 50 people everyday
It doesn’t matter who they are, what they write about, or whether they write at all, just follow fifty people a day, and 10 to 15 people might follow you back.
It suddenly hit me.
This is why the claps don’t add up.
Followers Vs. Engagement
What are the benefits of having a large number of followers? Well, for one the larger the number, the nicer it looks on your profile page and the more brag-worthy it is.
But, let’s say a writer has 7k followers. How many of those 7,000 people are actually clicking on their stories?
It depends.
Here’s a question: Would you rather have a group of ten below average friends who aren’t very dependable and can’t be trusted or two really good friends who you can trust with your life? They’re always there for you and they always show up.
Obviously, the latter, right?
Quality over quantity.
Well, that’s the same way followers work (or should work.)
Following fifty people a day trying to hit 10k followers as quickly as you can is good for you in the short-term if your only goal is to have a nice number to show off on your profile page, but what does it do for you long-term?
In the long run, most of those followers aren’t actually going to be reading your stories. They aren’t going to be sharing your work on Twitter and they certainly aren’t going to be a direct component of your success level on Medium.
This is why you need to engage.
‘Enhanced Stats’ a Google Chrome plug-in with more…well, enhanced stats, tells me I have seventy-one articles, but 869 responses. That means, I’m engaging. I’m responding to comments and I’m leaving them.
A lot.
This is how I’ve made so many great connections on Medium. This is how, with a smaller following than others, even some of my non-curated stories have significantly more views than they would have otherwise.
But, the best part is, a lot of them have really genuine, heartwarming and valuable comments from other writers.
The type of comments that, even when you’re suffering from a vile case of writer’s block, motivate and inspire you to keep writing, like how I began writing this piece.
Compliments leave lasting impressions
Engaging on Medium is what has helped me build my tribe (you know who you are!) and make invaluable connections with other writers.
Every day, the first thing I do when I sit down with my morning coffee is open up Medium and spend at least an hour (sometimes two) engaging. Responding to comments, reading other writers’ stories and of course, leaving well-thought-out comments.
It also gives me something to take my mind off of writing. Rather than stress about my next story, I use my downtime to read and engage.
Believe it or not, I refer to my Medium tribe in my real life when I talk about writing with my significant other. I call them my Medium friends and I acknowledge that without their friendship, I wouldn’t be as successful.
Your existence and good content aren’t good enough
Followers and readers are two vastly different things. Followers see your stories but most won’t click on them. Readers are there to do just that; read your stories.
Getting others to follow you requires only your existence. Getting others to follow you who actually read your work requires your existence, good content and engagement.
Writers want to feel like their work is unique and appreciated. We like specifics; did you like my introduction? The punch line? The style?
We like feedback. We want to know what exactly it was that made you click on a story, or finish it all the way through. What worked, what didn’t?
Criticism is welcomed and helpful when it’s constructive.
Most writers aren’t in it for the money (obviously) — they’re in it for passion.
Here’s my question to you: Do you want followers or readers?
If you want followers, go ahead and follow fifty random people a day. But, if you want readers, you need to engage. Leave comments on other writers' works and not generic comments like, “Great piece!” or “well done.” I’m talking about real, genuine, thought-provoking comments.
Comments that implant you in other writers’ minds, that when they see your name pop up in their feed they think, “Oh, right, we had a great conversation about —” and they actually click on your story.
If you’re a writer you’re most likely a reader, too
I’m not going to sugarcoat this though, don’t bother leaving empty praise-comments and engaging with writers whose works you don’t actually like or on pieces you didn’t actually read, maybe just skimmed the body and headed straight to the conclusion to highlight the closing sentence.
It’s not as subtle as you may think.
If you’re a writer you’re most likely a reader, too. You don’t really want your Medium feed cluttered with stories you don’t have any intention of reading, but with stories that you can’t wait to open.
Engage with writers whose work you genuinely enjoy reading. Engage with writers who show reciprocity and engage back.
This is how two people can gain a reader for life.
Slow and steady wins the race
Sure, it might take longer to reach your goal, whatever it may be, but I would rather it take me two years to reach 10,000 followers who actively read and enjoy my work, than wake up to 10,000 followers tomorrow who:
Don’t know me and don’t care to
Don’t read my stories
Are only following me in hopes I’ll follow them back so they can grow their number of followers they’ll never engage with
When you use the follow-for-follow approach, Medium turns into a popularity contest. Medium is meant to be a platform for content creators, not social media. We have Twitter for that.
I used to follow every single person I stumbled upon, regardless of whether they actually published articles on Medium or engaged with my work. What resulted was a disastrous feed riddled with stories that although may be fantastically well-written, are not in topics that I enjoy reading about or something I would ever actually click on.
In addition, it was nearly impossible to find the stories of the writers whose works I truly do enjoy and look forward to. I felt like I was missing out on all these great stories because they were getting buried in my feed.
Now, I only follow writers whose works I genuinely enjoy and those who engage with me and show reciprocity.
Some writers say you need to follow fifty random people a day if you want to increase your following. But, say you don’t want to increase your following, you want to increase your readers and consequently, your views.
How do you do that? It’s simple.
Leave ten comments a day.
Every day, read ten different stories by ten different writers and leave a valuable comment on each story you genuinely enjoyed, explaining why you enjoyed it.
Create a bond, a friendship, a tribe, and gain lifelong readers.
And remember, slow and steady wins the race. 🐢 | https://medium.com/wreader/you-dont-want-more-followers-you-want-more-readers-ab72f6cd944d | ['Fatim Hemraj'] | 2020-12-05 17:40:54.448000+00:00 | ['Advice', 'Writers On Writing', 'Writing', 'Writers On Medium', 'Writer'] |
Why I Interned at a Startup Instead of a Tech Giant: The myth of the “good” job | There is hysteria sweeping over Computer Science students these days: a crippling obsession with getting a “good” job. It is impossible to talk to my peers without someone dropping the name of a company they would give up everything to work for — like Google, Microsoft, Facebook, or Amazon — that every other job would pale in comparison to.
A year ago, I was no exception to this hysteria. Having always succeeded in school, I was blessed with getting everything I wanted academically. Naturally, when I started looking for a summer internship, I wanted to get a job at one of these companies. I felt as though any other job would be a disappointment — just a filler until I could reapply the following year.
Student life
When I started applying for internships, I was so excited to be interviewing with all the companies I dreamt of working for. But after a few sub-par interviews, I found myself frustrated that my strong suits weren’t being highlighted in these companies’ interview processes. So I decided it was time to open my eyes to the hundreds of other tech companies out there. I begrudgingly took a look at the job board for co-op students, and blindly applied to the first company on the list: Axiom Zen, a Canadian venture studio. After a surprisingly gruelling interview process, I ended up accepting an internship at Axiom Zen, and was onboarded to their portfolio company ZenHub.
As it turns out, Axiom Zen isn’t just a good company, it’s an amazing one. They were named first among Canada’s Most Innovative Companies by Canadian Business in 2016. And while ZenHub might not have Google’s name recognition (yet!), that doesn’t matter to me anymore.
My experiences here have expanded my horizons and helped me see past the fallacy that name recognition correlates with job satisfaction. Instead, students should look for the three things I’ve found at ZenHub: impactful work, learning potential, and personal responsibility.
Impactful work
In my first weeks, to say I was stressed out would be an understatement. I was thrown into the deep end of a large project and as one co-worker accurately put it, it was sink or swim. I had three bugs assigned to me before I even left the first day. Not only did I not know ZenHub’s JavaScript code base, I barely knew JavaScript.
Seeing my bug fixes in production by day three was awesome: talk about impactful work early on! (Contrast this with an intern I met who was working at a big tech company; two months in, they had only shipped six lines of code.) Seeing the impact of my work early on was the push I needed to learn quickly.
Collaborating at ZenHub
Besides making an impact on ZenHub’s products, I’ve also had the opportunity to impact the community of women in Computer Science. I am the only female engineer in my small team. Right away, I wondered how I could help remedy the situation, and was mentally preparing to make my voice heard on the issue. Instead, the team approached me to share my thoughts on building a more diverse workplace, and invited me to help come up with strategies to attract more women. I was invited to share my voice before I even began to speak.
Learning potential
My main motivation for an internship was to learn quickly: after almost five years at a previous company, I needed something to force me out of my comfort zone. But during the chaos of applications, the allure of a big company somehow overtook this original goal. Working at ZenHub has reminded me how important learning is above all else.
Early on in my internship, I was introduced to Ramda, a JavaScript library, and fell in love with its functional programming style. Around the same time, I expressed my interest in public speaking. Pablo, our VP of Engineering, encouraged me to speak at a local meetup about functional programming. While this was a nice idea, I didn’t feel prepared to talk to fellow developers about a topic I barely knew! But fast forward a month later and, despite all my reservations, I applied to speak at GitHub HQ in San Francisco. Getting my talk accepted and actually delivering it was one of my proudest moments.
Giving the talk solidified all the reasons why I love my job. It enabled me to step far out of my comfort zone and allowed me to improve my public speaking — a skill I’d hoped to learn, but never imagined doing as a programming intern! The whole experience set a new bar for what I thought was possible during a summer internship.
Personal responsibility
From the moment I jumped on board at ZenHub, I was given a lot of responsibility. This was intimidating at first, but now I realize it’s a requirement for every future role I’ll take on. Sure, every job comes with responsibilities, but the scale at ZenHub was pretty dramatic. Every ZenHub developer does front-end, backend, and QA (and sometimes even blogs!).
At ZenHub, code reviews are the great equalizer. Everyone critiques each other’s code; not only was my code being torn to shreds, but I was doing the same to our VP of Engineering. Many developers would consider this a nightmare, but I look at our code reviews as a shining example of the responsibility we are afforded. Being able to review experienced developers’ code gives me a chance for revenge! No — in all seriousness, it exposes me to examples of what great coding looks like. I think it’s a great opportunity that I probably wouldn’t get at a larger company.
When I started on my quest for a job a year ago, this is definitely not where I pictured myself ending up. But boy, am I ever glad I’m here. I finished my internship feeling truly inspired by my work. I am no longer an intern at ZenHub — I’m an employee!
If you’re in computer science, take a step back and rethink your criteria for what you want in your next internship. When my peers ask me where I worked in the summer, I don’t talk about a name — I talk about a company that you may not have heard of, but that exceeded all my expectations for what a “good” job should be.
Sunrise hike with a few brave Axioms
ZenHub is part of the Axiom Zen Family. We’re always looking for curious people with the integrity needed to make an impact! If you’re interested, you can apply here. | https://medium.com/axiomzenteam/why-i-interned-at-a-startup-instead-of-a-tech-giant-the-myth-of-the-good-job-170b8e54c7d5 | ['Christine Legge'] | 2017-05-31 22:18:56.974000+00:00 | ['Internships', 'Women In Tech', 'Project Management', 'Startup', 'General'] |
Recognizing Handwritten Digits Using Scikit-Learn in Python | Recognizing handwritten text is a problem that can be traced back to the first automatic machines that needed to recognize individual characters in handwritten documents. Classifying handwritten text or numbers is important for many real-world scenarios. For example, a postal service can scan postal codes on envelopes to automate grouping of envelopes which has to be sent to the same place. This article presents recognizing the handwritten digits (0 to 9) using the famous digits data set from Scikit-Learn, using a classifier called Logistic Regression.
For example, the ZIP codes on letters at the post office and the automation needed to recognize these five digits. Perfect recognition of these codes is necessary in order to sort mail automatically and efficiently. Here we are going to analyze the digits data-set of the Sci-Kit learn library using Jupyter Notebook.
First we start with importing required Libraries:
Then understanding the dataset:
Data Understanding
The data files train.csv and test.csv contain gray-scale images of hand-drawn digits, from zero through nine.
Each image is 28 pixels in height and 28 pixels in width, for a total of 784 pixels in total. Each pixel has a single pixel-value associated with it, indicating the lightness or darkness of that pixel, with higher numbers meaning darker. This pixel-value is an integer between 0 and 255, inclusive.
Now we load the Digits dataset into the notebook. After loading the data we can read lots of information about the datasets by calling the DESCR attribute.
The images of the handwritten digits are contained in a digits.images array. Each element of this array is an image that is represented by an 8x8 matrix of numerical values that correspond to a grayscale from white, with a value of 0, to black, with the value 15.
Let’s visually check the contents of this result using the matplotlib library.
The numerical values represented by images, i.e. the targets, are contained in the digit.targets array.
Observed that dataset is a training set containing 1,797 images.
This dataset contains 1,797 elements, and so we can consider the first 1,791 as a training set and will use the last six as a validation set. Here we can see in detail these six handwritten digits by using the matplotlib library:
Confusion matrix
A confusion matrix is a table that is often used to evaluate the accuracy of a classification model. We can use Seaborn or Matplotlib to plot the confusion matrix. We will be using Seaborn for our confusion matrix.
Confusion Matrix
The non-linear model gives approx. 92.5% accuracy than of linear model 90.27% accuracy. Thus, going forward, let’s choose hyperparameters corresponding to non-linear models.
The evaluation metric for this contest is the categorization accuracy, or the proportion of test images that are correctly classified. For example, a categorization accuracy of 0.97 indicates that you have correctly classified all but 3% of the images.
Let’s predict the accuracy of our model using KNN classifier.
Conclusion
From this article, we can see how easy it is to import a dataset, build a model using Scikit-Learn, train the model, make predictions with it, and finding the accuracy of our prediction(which in our case is 98.33%). I hope this article helps you with your future endeavors!
Hope you like my article!! | https://medium.com/swlh/recognizing-handwritten-digits-using-scikit-learn-in-python-34c109fbdb2 | ['Shrusti Warekar'] | 2020-11-01 12:36:32.875000+00:00 | ['Data Science', 'Scikit Learn', 'Python'] |
Why Coding is More Fun than Engineering | Because coding is play and engineering is work.
What’s less obvious is that play gets more work done than work. Free from obligation, you can explore more options, discover new possibilities, and stay in flow for longer.
Let me show you.
Here’s a typical week of engineering according to my time tracker. I try to click the button every time I context switch for more than 5 minutes. Call it research into office culture, if you will.
This data is by no means perfect. Context switches happen within categories; I forget to click the button; and I’ve given up capturing the small distractions that happen.
I also spend too much time in faux meetings on Slack. Those don’t show up here.
But you can see a trend: most work happens in half-hour and aaaalmost-an-hour intervals. Then something comes up, and I have to switch. QA bugging out, absolutely horribly urgent code reviews that must happen right this instance, a deployment here and there, or a short meeting or two.
If I’m very lucky, I get to work on something for almost 2 hours. Sometimes, if the stars align just so, almost 3. That happened three times this week. Three times all week that I focused for more than an hour. 🙄 | https://medium.com/swizec-a-geek-with-a-hat/why-coding-is-more-fun-than-engineering-9605d6c906f0 | ['Swizec Teller'] | 2016-10-19 21:50:22.264000+00:00 | ['Software Engineering', 'Work Life Balance', 'Coding', 'Software Development', 'Front End Development'] |
Simple Edge Detection Model using Python | In this post, I will show you how to detect the edges in an image. Writing an edge detection program will give you some understanding of computer vision and self-driving cars. Edge detection is commonly used to understand objects in an image. It also helps the machines to make better predictions. Writing an edge detection program is a great way to understand how machines see the outside world. This will give us a better perspective when it comes to computer vision.
I covered a couple of computer vision concepts in my previous articles: face detection, face recognition, and text recognition. And today, we will work on edge detection using python.
Table of Contents | https://towardsdatascience.com/simple-edge-detection-model-using-python-91bf6cf00864 | ['Behic Guven'] | 2020-11-14 21:28:44.140000+00:00 | ['Programming', 'Deep Learning', 'Artificial Intelligence', 'Technology', 'Machine Learning'] |
Feeling Enough | An ongoing situation for real. Yet I get glimpses of it, and other times I feel it fully, and other times it starts to slip away, and I must cling to it like fire clings to wood.
The ways in which water flows down its path, is what we must learn. To flow down the path of energy, from our most ethereal aspects to our densest physicality. To flow down the path laid for us, that’ll satisfy every single thought, feeling, imagination, desire, and expectation we need and prefer to be satisfied. It’s a game nahmeeean?
A game that’ll go every which way, or go round every day which’ll lead one to stay, to stay, to stay, to stay stay stay stay stagnay.
But it’s always ok.
Always enough.
It’s always enough too.
It’s always enough.
See it, feel it, breathe it, know it, show it, act it, fun it, succeed in it, revel in it, relish in it, revere it, sincerely.
Sincerely,
Shant | https://medium.com/light-year-one/feeling-enough-c632b623e097 | ['Shant Aumeta'] | 2017-04-01 08:44:08.533000+00:00 | ['Life', 'Poetry', 'Fun', '365 Days', 'Storytelling'] |
Understanding Word Embedding Arithmetic: Why there’s no single answer to “King − Man + Woman = ?” | Understanding Word Embedding Arithmetic: Why there’s no single answer to “King − Man + Woman = ?” plotly Follow May 8 · 7 min read
Representing words in a numerical format has been a challenging and important first step in building any kind of Machine Learning (ML) system for processing natural language, be it for modelling social media sentiment, classifying emails, recognizing names inside documents, or translating sentences into other languages. Machine Learning models can take as input vectors and matrices, but they are unable to directly parse strings. Instead, you need to preprocess your documents, so they can be correctly fed into your ML models. Traditionally, methods like bag-of-words have been very effective in converting sentences or documents into fixed-size matrices. Although effective, they often result in very high-dimensional and sparse vectors. If you want to represent every word in your vocabulary and you have hundreds of thousands of words, you will need many dimensions to fully represent your documents. Instead, what if we numerically represent each word in a document separately and use models specifically designed to process them?
A graphical representation of bag-of-words. Retrieved from Chapter 4 of “Applied Text Analysis”.
Being able to embed words into meaningful vectors has been one of the most important reasons why Deep Learning has been so successfully applied in the field of Natural Language Processing (NLP). Without the ability to map words like “king”, “man”, or “woman” into a low-dimensional dense vector, models like Recurrent Neural Networks (RNNs) or Transformers might not have been so successful for NLP tasks. In fact, learned embeddings were an important factor in the success of Deep Learning for Neural Machine Translation and Sentiment Analysis, and are still used to these days in models such as BERT.
Word embeddings have been an active area of research, with over 26,000 papers published since 2013. However, a lot of early success in the subject can be attributed to two seminal papers: GloVe (a method that generates vectors based on co-occurrence of words) and word2vec (which learns to predict the context of a word, or a missing word out of a sequence). Whereas both have been used extensively and show great results, a lot of interesting spatial properties were examined for the latter. Not only are similar words like “man” and “woman” close to each other (in terms of cosine distance), but it is also possible to compute arithmetic expressions such as king - man + queen .
An illustration of the spatial properties of word2vec. Figure retrieved from Mikolov 2013.
Building a Dash app to explore word arithmetic
At Plotly, we have used word embeddings in multiple AI and ML projects and demos, and being able to better understand the arithmetic properties of those methods is important to us. Rather than simply choosing a single nearest neighbor for an expression like king - man + woman or france - paris + tokyo , we wanted to explore the neighborhood of each operation that we are applying to the starting word (i.e. king and france in our case). We felt that Dash was the perfect tool for this, since we were able to leverage powerful Python libraries like NumPy and scikit-learn for directly building a responsive and fully-interactive user interface — all inside a single Python script. As a result, we built a Word Embedding Arithmetic app, which we styled using Dash Enterprise Design Kit and deployed through Dash Enterprise App Manager. For the purpose of this demo, we used a subset of the word2vec embedding trained on the Google News Dataset.
An intuitive way to represent this is to use a network of arithmetic operations, where each operation we are applying to a starting term is connected with a directed edge. For example, for an expression like france - paris + tokyo , we connect the node france to france - paris and france + tokyo , since they are the results of respectively applying subtraction and addition to france . Similarly, we get france - paris + tokyo by applying the same operators to the intermediate nodes. Each of those nodes are colored in blue.
This fully-interactive network was created with Dash Cytoscape, using pure Python.
Then, each of those blue nodes (which are themselves vectors) is connected to its 8 nearest neighbors (represented in gray if they appear in the arithmetic expression, and red otherwise).
One important note is that the length of the edges does not depend on the similarity of the neighbors; as a visual distinction, we change the shape of the nearest neighbor (that is not part of the expression) into a star. In addition to this, we made it possible to click on any node in the network to display the similarity of that node to all the nearest neighbors inside a plotly.express bar chart. To create this interaction, you simply need to assign a callback that is fired whenever you click a node on a Dash Cytoscape component:
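Here is a hedged sketch of what such a callback can look like; the component IDs and the find_nearest_neighbors helper are illustrative placeholders rather than the app’s actual code:

import plotly.express as px
import dash_cytoscape as cyto
from dash import Dash, Input, Output, dcc, html

app = Dash(__name__)
app.layout = html.Div([
    cyto.Cytoscape(id="cytoscape-network", elements=[], layout={"name": "cose"}),
    dcc.Graph(id="neighbor-similarity-bar"),
])

@app.callback(
    Output("neighbor-similarity-bar", "figure"),
    Input("cytoscape-network", "tapNodeData"),
)
def display_similarities(node_data):
    # Fired whenever a node is clicked (tapped) in the Cytoscape component.
    if not node_data:
        return px.bar(title="Click a node to see its nearest neighbors")
    # find_nearest_neighbors is a placeholder for the app's own embedding lookup.
    words, scores = find_nearest_neighbors(node_data["id"], topn=8)
    return px.bar(x=scores, y=words, orientation="h",
                  title=f"Similarity to '{node_data['id']}'")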
Such callbacks can be written in less than 50 lines of Python code, and result in a highly interactive app: | https://medium.com/plotly/understanding-word-embedding-arithmetic-why-theres-no-single-answer-to-king-man-woman-cd2760e2cb7f | [] | 2020-05-08 22:05:17.800000+00:00 | ['Machine Learning', 'Artificial Intelligence', 'NLP', 'Deep Learning'] |
Foresting Featured Among The 13 Most Promising Blockchain Startups in Korea | Promising Blockchain Startups
Foresting is truly honoured to be featured among the top 13 blockchain startups in Korea. The list was curated and published by Fintech News Singapore.
It listed Foresting among other blockchain businesses likely to disrupt the industry.
Blockchain is a bit of a divided subject in Korea. With a few national controversies hitting the headlines, such as the controversial ban on ICOs and two crypto exchange hacks in as many weeks, you’d be forgiven for thinking that this is a volatile country for blockchain startups.
However, with Korea’s expertise in advanced tech, it should be no surprise that so many blockchain startups have been popping up in the country. These startups come from a wide range of industries, including real estate, fintech and more.
Read more about the article here: 13 Most Promising Blockchain Startups in Korea. | https://medium.com/foresting/foresting-featured-among-the-13-most-promising-blockchain-startups-in-korea-2e18b6fceb2c | [] | 2018-07-11 08:40:37.304000+00:00 | ['Blockchain Startup', 'Startup', 'South Korea', 'Blog', 'Fintech'] |
Configure HTTPD server and setup Python Interpreter on Docker Container | Docker is a software platform that allows you to build, test, and deploy applications quickly. Docker packages software into standardized units called containers that have everything the software needs to run including libraries, system tools, code, and runtime. Using Docker, you can quickly deploy and scale applications into any environment and know your code will run.
Let’s do the task…
1. Configuring HTTPD Server on Docker Container
Step-1 : First of all, we need to install the Docker software on our system.
To install Docker, first create a repo file for Docker inside the /etc/yum.repos.d/ folder and add the following configuration to it.
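Based on Docker’s official CentOS repository (treat the exact contents as an assumption, since the original screenshot isn’t reproduced here), the file looks roughly like this:

[docker-ce-stable]
name=Docker CE Stable - $basearch
baseurl=https://download.docker.com/linux/centos/$releasever/$basearch/stable
enabled=1
gpgcheck=1
gpgkey=https://download.docker.com/linux/centos/gpg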
Use the yum install docker-ce --nobest command to install Docker.
Check the Docker version: docker --version
After installation, we need to check whether the Docker service is running or not: systemctl status docker
To start the Docker service, use: systemctl start docker
Docker has now been installed successfully.
Step-2 : Pull a CentOS image from Docker Hub to launch a container: docker pull centos:latest
Step-3 : Launch a container named “web-conf” using the CentOS image: docker run -it --name web-conf centos:latest
Step-4 : Install the httpd web server using the command: yum install httpd
Step-5 : Create webpages in the /var/www/html/ document root.
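The page itself can be anything; for example (the file name webpage.html simply matches the URL used in Step-7):

echo "<h1>Welcome to the web-conf container</h1>" > /var/www/html/webpage.html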
Step-6 : Now, start the httpd service. Since a Docker container doesn’t support the systemctl command, use /usr/sbin/httpd instead.
To get the IP address of the container we need the ifconfig command, but it is not installed in the OS, so install the net-tools package that provides it: yum install net-tools
Check the IP address using the command: ifconfig
Step-7 : All the configuration is done. Now see the webpage in the browser by typing: http://172.17.0.2/webpage.html
2. Setting up Python Interpreter and Running Python Code on Docker Container
Step-1 : First, we need to install python3 inside the Docker container: yum install python3
Step-2 : To confirm the Python installation, check the version: python3 --version
Step-3 : Now, use the Python3 REPL interpreter to run your Python code.
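For example, a quick session inside the container might look like this (the snippet itself is just an illustration):

# python3
>>> print("Hello from inside the Docker container!")
Hello from inside the Docker container!
>>> 2 + 2
4
>>> exit()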
Task completed ✌️✌️
Thanks for Reading 😇 | https://priyanka-bhide173.medium.com/configure-httpd-server-and-setup-python-interpreter-on-docker-container-249543b3f08e | ['Priyanka Bhide'] | 2020-12-18 13:54:23.362000+00:00 | ['Web Server', 'Docker', 'Python'] |
Google Docs template for summarizing your health history | I prefer a co-editing tool like Google Docs because it allows me to collaborate on the document with people in real time, and they can in turn opt to share it with other family members or caregivers.
It’s nothing too fancy, but it can help you start to get your thoughts together, determine your priorities, and start to see things in one place. Health records and patient portals just don’t capture or easily display many of the key things you may want to express (or that doctors want to know) — like detailed info about your symptoms, alternative treatments that are helping, what’s been ruled out, etc.
The ultimate goal of this document is to help you understand your health history and communicate better with your doctors — to build consensus on what’s happened in the past so you can agree on how to move forward. | https://medium.com/pictal-health/google-docs-template-for-summarizing-your-health-history-e08db8228e4f | ['Katie Mccurdy'] | 2018-06-25 12:26:45.902000+00:00 | ['Healthcare', 'Design', 'Patient Experience', 'User Experience', 'UX'] |
Understanding Machine Learning through Memes | “What we learn with pleasure, we never forget” — Alfred Mercier
What better way to have fun than scrolling some memes?
Machine Learning has become an integral part of our everyday lives. But so have memes. As proof, we now have more memes every day than them Good Morning wishes in family groups.
Well, why do we need to learn ML? What is ML?
Machine learning is a specific field of AI where a system learns to find patterns in examples in order to make predictions.
Computers learning how to do a task without being explicitly programmed to do so.
Or, in a more friendly definition, Machine Learning algorithms are those that can tell you something interesting about the data (patterns!), without you having to write any custom code specific to the problem. Instead of writing code explicitly, we feed data to these ML algorithms and they build their own logic based on the data and its patterns.
An example, again, is that you can make an ML model to automatically detect and delete them Good morning wishes posters/images with striking accuracy.
Irritating, aren’t they ?
And that’s just the tip of the iceberg. There’s a lot more that is done using ML. If you see your daily usage, everything from Google Search prediction, Autocorrect, weather prediction, Google assistant (or Siri or Alexa), facial recognition; requires and implements ML in one way or another.
So I guess you’d know by now what can ML do.
So here’s one on that:
PS: ML enables the machine to do it all: paint a canvas, write a symphony, and all that.
And memes might as well be one good way to get started with ML, and this blog might help.
For those of you who are already “Machine Learning Enthusiasts”, you’d have no difficulty relishing these meticulously made mesmerizing ML memes.
If you’re someone who doesn’t know much about ML, here’s what Andrew Ng’s got to say:
So the first question, again, What is ML?
We saw the definition already, well, here’s a memer’s take on this:
MATH + ALGORITHM = MACHINE LEARNING
And here’s what Wikipedia says:
Machine learning (ML) is the scientific study of algorithms and statistical models that computer systems use to perform a specific task without using explicit instructions, relying on patterns and inference instead.
Decide for yourself what you like better.
You’d notice the word statistics in the definition. Well ML did pop up out of Statistics and even today, some of the models used regularly are nothing but statistical calculations.
So, we now present, this:
And
Mathematics and ML have had a long long relationship.
And because of this relationship, many students find it hard to study ML, because, well, Maths.
I so wanted to share this ML meme. ;)
true, indeed :(
But then,
Well, this is one of Andrew Ng’s favourite lines. (Andrew Ng: hailed as a god by people starting ML from his courses; they’re goood goood courses, see this course, and this one too from deeplearning.ai.)
Those who know, know.
But you don’t need to have had scored an A (or A+ ;) in maths to be able to use ML for your projects or be well versed with these models. And many a people don’t even care about the mathematics behind ML.
So one more question that arises frequently is :
How are ML and AI different?
Jokes aside, Artificial Intelligence is defined as any technology which appears to do something smart, or, say, mimics human behaviour. This can be anything from programmed software to deep learning models which mimic human intelligence.
Whereas Machine learning is a specific kind of artificial intelligence but rather than a rule-based approach, the system learns how to do something from examples rather than being explicitly told what to do.
At this point, you’d be impressed by what ML and AI can do, but there’s a dangerous aspect to it as well. If not used carefully, this tech can be dangerous, but thats not as much of an issue as the media portrays it to be.
Another term that’s often interchangeably used with ML is Deep Learning
So what is Deep Learning?
Crying yet?
No, not this 😂.
So deep learning is a specific type of machine learning that uses a technique known as a neural network, which connects multiple models together to solve even more complex types of problems. (More on neural networks later.)
This is the relation between AI, ML and DL.
There are different types and Models of ML.
One of the most basic ones is Linear Regression or Regression:
Regression is one of the most important and broadly used machine learning and statistics tools out there. It allows you to make predictions from data by learning the relationship between features of your data and some observed, continuous-valued response. Regression is used in a massive number of applications ranging from predicting stock prices to understanding gene regulatory networks.
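If you’re curious what that looks like in practice, here’s a tiny, illustrative scikit-learn sketch (the numbers are made up):

import numpy as np
from sklearn.linear_model import LinearRegression

X = np.array([[500], [750], [1000], [1250], [1500]])  # house size in sq. ft (made up)
y = np.array([50, 72, 95, 120, 148])                  # price (made up)

model = LinearRegression().fit(X, y)
print(model.predict([[1100]]))  # predicted price for an 1100 sq. ft house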
Then there’s k-means for Clustering algorithms.
Now, clustering means determining how closely related items are to each other, and arranging them to form clusters of related data items. The K-means algorithm is an iterative algorithm that tries to partition the dataset into ‘k’ pre-defined, distinct, non-overlapping subgroups (clusters), where each data point belongs to only one group.
See how easily you can grasp this concept using this meme.
k = 4 ; 😛
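And if you want to try it yourself, a minimal scikit-learn version is only a few lines (the points here are random, just for illustration):

import numpy as np
from sklearn.cluster import KMeans

points = np.random.rand(200, 2)             # 200 random 2-D points
kmeans = KMeans(n_clusters=4).fit(points)   # k = 4, like in the meme
print(kmeans.cluster_centers_)              # the 4 cluster centres
print(kmeans.labels_[:10])                  # cluster assigned to the first 10 points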
Most importantly, there’s Neural Network.
Neural networks are a set of algorithms, modelled loosely after the human brain, that are designed to recognize patterns. That’s it. Some nodes or say neurons connected to each other which pass information like the ones in the brain do. Here each neuron processes the info and passes on to the next one.
Initially, like a child’s brain, the neural net is random, it hasn’t learnt anything and as we feed data to it, it starts learning. As a child slowly learns features one by one, with increasing complexity, so does a neural network.
Source: Analytics India Website
More on Neural network in this blog.
If you don’t train it properly, or before training it’ll give random results
A NN uses a function called an activation function, which activates neurons depending on different conditions.
For example there’s Relu, Rectified Linear Unit, which activates a neuron only if output (y=wx+b) is greater than zero.
Well here’s one meme for the ones who know NN already.
The explanation of this meme is beyond the scope of this blog. Try contacting your nearest ML expert for the explanation.
One advanced model frequently used these days is GAN (Generative Adversarial Network)
Basically, these are neural-network-based models which, given a training set, can learn to generate new data with the same statistics as the training set data.
For example, a GAN can create new faces or new paintings that look like any other but don’t actually exist; they are made up by your machine. See, the machine is actually learning.
How do you do ML? What is the language? What framework?
Thanks to the modern frameworks (TensorFlow, Keras, Theano, PyTorch and others) used these days, writing code for an ML model has become just a few lines of work. You can use Python or even C++ together with these libraries.
So what is Keras?
Keras is a high-level neural networks API, written in Python and capable of running on top of TensorFlow, CNTK, or Theano. It was developed with a focus on enabling fast experimentation.
So, what is TensorFlow?
TensorFlow is a free and open-source symbolic math library that is used extensively for machine learning applications such as neural networks. TF enables you to express complex models in just a few lines of code.
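To give you a feel for “complex models in a few lines”, here’s a hedged, illustrative Keras snippet for a tiny binary classifier (the layer sizes and input shape are placeholders, not from any real project):

import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.layers.Dense(64, activation="relu", input_shape=(20,)),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"),  # probability of the positive class
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
# model.fit(X_train, y_train, epochs=10)  # X_train / y_train would be your own data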
Also, you can use pretrained models and tweak some parameters to get the result of your choice.
So,
*Stay cool and learn ML meme*
Signing off…
This blog was a part of Learning ML series by the author. Click here to get to the previous part of the series. The aim of this blog was to present some of the most relatable ML memes I found throughout the web (including ones I made and ones I collected)in a learning handbook kind of format so as to act as a beginner’s guide, with the fun ;) | https://medium.com/nybles/understanding-machine-learning-through-memes-4580b67527bf | ['Harsh Aryan'] | 2020-11-23 18:14:53.954000+00:00 | ['Programming', 'Deep Learning', 'Data Science', 'Science', 'Machine Learning'] |
A Holiday Miracle Part III | My name is Sunny Alexander, and I’m Henry James, and we’re writers for Dark Sides of the Truth magazine.
Part I, Part II
Never is such a long time. When the words came out of Damen’s mouth, we both gazed at each other and shared acknowledgment of a single emotion.
Things weren’t looking too good for Jericho at the moment.
“Damen, is there anything you can do?”
“We’ve already done everything, Sunny. Now it’s up to him and fate.”
“Don’t know if I like those odds, Damen.”
“I’m sorry, Henry, I really am, but there’s nothing we can do now but wait. Look, I’ve got rounds. But I’ll come back and check on him and Ruth. Are you guys planning on sticking around a bit?”
“Not for long. We really need to get back. I guess Ruth will need to come with us.”
“How about this, Sunny? I’ve got some friends here at the hospital she can stay with. They can make sure she’s taken care of and ferry her back and forth to be with her brother.”
“You sure?”
“Positive Sis. She’ll be fine. Besides, you’re going to want to get out ahead of the snowfall anyway.”
“Snow? It never snows in Houston.”
“Not true, Henry. Two years ago, we had snow in November. I haven’t taken a look outside lately, but I’m willing to bet we’re already starting to get a light dusting.”
“Could this year get any weirder?”
“Any year spent with you, Henry is guaranteed to be weird.”
“Yeah, I’ve always said he’s a magnet for this kind of crazy.”
“Ya know what you two? You can..”
“We know Henry. We can bite your ass. Okay, let me go give Ruth the plan. Damen, we can’t thank you enough for all your help.”
“No problem Shaundrika. She’ll be fine.”
We all stood, and after a handshake and a hug, Damen nodded, then turned and walked away.
“Sit tight, Henry. This will only take a minute.”
“No problem. Hey, if Damen’s right about the weather, you need to make it snappy.”
“Chill old man. This won’t take long.”
And it didn’t. That’s because we discovered Ruth was no longer there beside Jericho’s bed in the ICU area.
“Where the hell did she go?”
“Bathroom, maybe?”
“Okay, go take a look.”
“Whoa, whoa, whoa, Alexander. I’m not about to go poking my head into a lady’s bathroom.”
“Fine, I’ll go look. You start scouting around. We don’t leave until we’ve filled her in, got it?”
“Yes, mon Capitan.”
“Stop joking around James and start looking.”
After parting company and when left to my own devices, I began to think about Ruth, try to put myself in her place. Sunny may have been right. Maybe she did just have to go pee, but she’d hardly eaten anything when we stopped for lunch.
As one who marches on his stomach, I guessed maybe she’d gone down to the cafeteria on the ground floor, to shag something. When my intestines began to rub against themselves with a shrill whine, I told myself maybe I needed to find something for me as well.
When I stepped out of the elevator and gazed at the signs on the wall, I took the suggested path toward the chow hall, passing a vast expanse of glass doors on my left.
I caught a glimpse of her, then backed up and stopped.
She was sitting in an outside dining area centered in the middle of the hospital wings. Her eyes closed, her hands clasped together, and tucked against her chest.
Damen had been right about the snowfall.
Gazing at Ruth, I saw tiny white flakes drift down against her hair and shoulders, instantly disappearing as the heat of her body melted them. Aside from a gentle outward, then an inward motion of her hands against her chest, she seemed content to sit there beneath the snowfall.
Although I didn’t want to disturb her, I knew I had to at least let her know what we planned. Besides, in her frail condition, sitting in the cold, eventually becoming drenched, the chilled night air would probably make her ill.
At this point, half of her entire family was already in the hospital. This didn’t need to become a complete family affair.
As I neared the glass doors, an actuator opened the doors inward. I was met with a chilling rush of air, and thrust my hands into my jean pockets, stopping near her, not saying a word. Although she kept her eyes closed, a slight turn of her head acknowledged she was aware of someone standing nearby.
“Ruth, honey. You need to come inside.”
She continued to clutch her hands together at her chest.
“Mr. Henry?”
“Yes, dear.”
“Do you believe in God?”
“Yes, I suppose I do.”
“So do I, And I believe God brought you and Miss Sunny into me and Jericho’s lives for a reason.”
“I think it had more to do with dumb luck, Ruth.”
“No, dumb luck don’t exist. Although my brother did a stupid thing, Jericho could have tried robbing any bank. There was a ton of ’em between your bank and our house. But he chose your bank, Mr. Henry. Yours. And not only did God help Jericho find your bank, but he timed you going inside at the exact same time. God wanted Jericho to meet you and Miss Sunny. He sent you and Miss Sunny to help us when I was dying, and we didn’t have no money. And now, we’re together again, ain’t we, Mr. Henry?”
“Yes, hon, I believe we are.”
“What do you suppose God’s trying to tell us, Mr. Henry?”
“I honestly don’t know, Ruth. Honey, please let me take you inside. This cold and damp isn’t good for you. You need to stay fit for Jericho right now. Like he did for you. You need to take care of him.”
Ruth sighed, a prolonged release of breath, clouds of condensation from her lips billowing into the night air, then she unclasped her hands and grasped the wheels of her chair.
“Suppose you’re right. I need to be there when Jericho wakes up, huh?”
I paused, not really sure if my comment would assure her with any amount of sincerity then said, “yes, sweetheart, you do. Come on. Let’s get you inside where it’s warm.”
Let’s keep in touch: [email protected]
© P.G. Barnett, 2019. All Rights Reserved. | https://medium.com/dark-sides-of-the-truth/a-holiday-miracle-part-iii-73ff474d0a1a | ['P.G. Barnett'] | 2019-12-14 13:31:02.451000+00:00 | ['Short Story', 'Fiction', 'Stories', 'Henry And Sunny', 'Storytelling'] |
Richard Brautigan in Heaven | Richard Brautigan sketch by Oliver Dalmon
Somewhere in America
Richard Brautigan is still fishing for trout.
It is not this America
but if you close your eyes
you can see it from here.
He is casting a line silhouetted
against lemon-colored mist rising
off a stream that runs down
a mountainside in a place that looks
like Montana.
His floppy hat has blown off to somewhere
and his hair has grown into a map
of every trout stream he ever fished:
a wild tangle of rivulets
currents and flashing rainbows.
The lines on his face are tiny poems.
It might be strange for us
but not for Richard.
He has arrived at the silence of himself
and is listening to it. | https://medium.com/assemblage/richard-brautigan-in-heaven-2a660da62943 | ['Kara B. Imle'] | 2020-07-14 11:27:28.993000+00:00 | ['Poets Unlimited', 'Homage', 'Writing', 'Poems On Medium', 'Poetry'] |
Quantum Physics is Cool! Turning water into ice in the quantum realm | Quantum Physics is Cool! Turning water into ice in the quantum realm
Turning liquid water to ice is a piece of cake in our everyday lives. But, on a quantum scale, it requires much more effort.
Transitions between water and ice couldn’t be more of an everyday phenomenon in our lives. Especially in a scorching hot summer, putting water-filled ice trays into the freezer and pulling out ice cubes later is frequent, and essential.
So reporting on research that features physicists turning water to ice may not initially sound that exciting. The difference here is, the researchers in question — from the University of Colorado and the University of Toronto — have achieved this familiar transition of state with a cloud of ultracold atoms.
The team discovered that it could nudge these quantum materials to undergo transitions between “dynamical phases” — essentially, jumping between two states in which the atoms behave in completely different ways. The findings are published in a paper in the journal Science Advances.
Study co-author Ana Maria Rey, a fellow at JILA, a joint institute between CU Boulder and the National Institute of Standards and Technology (NIST), says: “This happens abruptly, and it resembles the phase transitions we see in systems like water becoming ice.
“But unlike that tray of ice cubes in the freezer, these phases don’t exist in equilibrium. Instead, atoms are constantly shifting and evolving over time.”
The findings, say the researchers, provide an insight into materials that are hard to investigate in the lab.
Rey explains the practical application of the team’s findings: “If you want to, for example, design a quantum communications system to send signals from one place to another, everything will be out of equilibrium.
“Such dynamics will be the key problem to understand if we want to apply what we know to quantum technologies.”
Fermionic atoms — the wallflowers of the atomic dance
Scientists have previously observed similar transitions in ultracold atoms, but only among a few dozen charged atoms, or ions; this new research stands out because of its scale. Rey and her team used clouds made up of tens of thousands of uncharged, or neutral, fermionic atoms.
Graphic depicting the weak interactions between neutral atoms in an ultracold gas. ( Steven Burrows/JILA)
Fermionic atoms are the introverts of the periodic table of elements, says Rey. They don’t want to share their space with their fellow atoms, which can make them harder to control in cold atom laboratories.
Study co-author Joseph Thywissen, a professor of physics at the University of Toronto, continues: “We were really wandering in a new territory not knowing what we would find.”
In order to navigate this new territory, the team used the weak interactions that occur between neutral atoms when they collide in a confined space.
Thywissen and his team in Canada cooled a gas made up of neutral potassium atoms to just a fraction of a degree above absolute zero. They then turned the atoms so that their “spins” had the same orientation.
Spin is a quantum property: it isn’t the literal spinning you might expect, but an intrinsic form of angular momentum that gives the particle a magnetic orientation. Thywissen points out that a particle or atom’s spin has an orientation in a similar way that Earth’s magnetic field — which currently points to the north — does.
Once the atoms were all standing in formation, the group then tweaked them to change how strongly they interacted with each other. And that’s where the fun began.
Thywissen explains: “We ran the experiment using one kind of magnetic field, and the atoms danced in one way.
“Later, we ran the experiment again with a different magnetic field, and the atoms danced in a completely different way.”
Synchronising the atomic dance
In the first dance — or when the atoms barely interacted at all — these particles fell into chaos. The atomic spins began to rotate at their own rates and quickly all pointed in different directions.
The researchers compare this to standing in a room filled with thousands of clocks with second hands all ticking at different tempos.
When the group increased the strength of the interactions between atoms, however, they stopped acting like disordered individuals and more like a collective.
Their spins still ticked, but now they ticked in sync.
In this synchronous phase, the atoms are no longer independent, explains Peiru He, a graduate student in physics at CU Boulder and one of the lead authors of the new paper.
He continues: “They feel each other, and the interactions will drive them to align with each other.”
With the right tweaks, the group also discovered that it could do something else: revert both the synchronized and disordered phases back to their initial state.
In the end, the researchers were only able to maintain those two different dynamical phases of matter for about 0.2 seconds. If they can increase that time, He said, they may be able to make even more interesting observations.
He concludes: “In order to see richer physics, we probably have to wait longer.” | https://medium.com/swlh/quantum-physics-is-cool-turning-water-into-ice-in-the-quantum-realm-e2b7c560aa9 | ['Robert Lea'] | 2019-08-03 19:32:58.551000+00:00 | ['Quantum Physics', 'Quantum Mechanics', 'Physics', 'Chemistry', 'Science'] |
Current Trends in Regulated Industries | IBM Garage enables large enterprises to accelerate, breakthrough, and work more like startups.
Traditionally software development is not a core function in many regulated sectors such as food, aviation, medical devices, pharmaceutical, financial services, and railways. However, as these industries grow their digital capabilities, the software is increasingly playing a more significant role in regulatory compliance. Nowadays, cloud-based emerging technologies and rapid scaling of digital innovations through DevSecOps influence standard business architectures in all industries in every sector and pave new ways of working for regulatory bodies and the industries they regulate.
In the first part of this blog series, I discussed various product development challenges in the regulated sectors from the perspective of software development and introduced the readers to IBM Garage Method.
In this (second) part of the series, I will review the current trends in various regulated industries that are reshaping the regulatory landscape and how the IBM Garage Methodology can help organizations identify and prioritize innovation opportunities throughout the enterprise to take advantage of the industry trends.
Fourth Industrial Revolution
The convergence of emerging technologies and societal needs has ushered in a new era — Fourth Industrial Revolution (4IR)[1] — through innovative business models using disruptive technologies. Artificial intelligence (AI), Machine learning (ML), Blockchain, Automation, Internet of Things, 5G, and Edge computing are becoming pervasive in many industries and yield better ways to serve consumer needs and disrupt industry value chains. Increased technology usage in everyday life introduces changes in consumer behavioral patterns and prompts businesses to rethink how they engage with their consumers.
A key trend of 4IR is the emergence of technology-enabled platforms that combine both demand and supply to disrupt existing industry structures through sharing or on-demand economy (1). These platforms bring people, assets, and data together to allow new ways to offer and consume products and services. Market entrants challenge industry incumbents and influence consumer behavior by bringing new products and services quicker through better technologies on connected digital platforms.
Increased adoption of emerging technologies is putting more emphasis on consumer experience, privacy, and security. For example, in the healthcare industry, the topline value drivers are customer satisfaction, increased customer retention, and reduced customer acquisition cost (2). Modern AI-driven data platforms help healthcare organizations define policies by providing useful insights into customer trends (2) (3).
Technology adoption is traversing industries because businesses are trying to take advantage of the latest technological innovations in other industries to meet the consumer demands in theirs. For example, in the automotive industry, consumer data security and privacy are becoming critical differentiators because vehicles are becoming more autonomous and bringing personalized experiences to their users from multiple sources (4).
Current public policy and decision-making systems that follow linear and mechanistic processes to develop regulatory frameworks cannot keep up with 4IR’s rapid pace of change and broad impacts (1). As industries innovate, regulators are also taking more human-centric, collaborative, and agile approaches to protect the consumers while supporting innovation by allowing safe use of technological advancement. For example, relatively new legislations such as General Data Protection Regulation (GDPR), California Consumer Privacy Act (CCPA), and Brazilian General Data Protection Law (LGPD) govern the collection and use of consumer data to protect their privacy and security.
Iterative Risk-based Regulation
Connected digital platforms are launching new digital products and rapidly augmenting physical devices with new digital capabilities. Digital products need to be updated frequently to stay relevant and secure. Embedded software in physical devices also needs continual updates to ensure the proper function of these devices. FDA’s Center for Devices and Radiological Health (CDRH) adopted a risk-based approach towards regulating digital health technology because the traditional approach used to regulate higher-risk hardware-based medical devices was not practical for the faster, iterative development approach used for software-based medical technologies (5). FDA also noted that digital health products that leverage connectivity could continually improve their safety and effectiveness through frequent modifications and updates (5).
Emphasis on the Quality Culture
In recent years, regulatory organizations are focusing more on organization culture than just end products. For example, FDA’s CDRH pre-certifies eligible digital health vendors who demonstrate quality culture and organizational excellence based on specific objective criteria (5). Organizations that nurture a culture of quality are essentially customer-centric, where leaders are quality advocates, and empowered employees work in a collaborative culture utilizing organizational systems and structures that promote continual improvement in their products and services.
For organizations that focus on quality and transparency, FDA also proposed a total product lifecycle-based regulatory framework for AI/ML-based technologies that would allow for software modifications in medical devices to be made from real-world learning and adaptation. The proposed regulatory framework (6) could enable the FDA and device manufacturers to track and react to a digital health product’s premarket development and postmarket performance. This potential framework allows for the regulatory oversight to embrace the iterative improvement power of AI/ML-based software as a medical device while assuring patient safety.
Various health authorities included quality culture in guidance documents and inspection protocols such as PIC/S Data Integrity Guidance, FDA New Inspection Protocol Project (NIPP), and MHRA Data Integrity Guideline (7).
Increased Experimentation
In the financial services industry, many regulators set up sandboxes for both startups and incumbents to experiment with new ideas and technologies with some exemption from regulatory requirements. This framework aims to encourage businesses to build innovative products and services while protecting their consumers. Sandboxes have become essential vehicles for regulatory monitoring and incremental innovation in the industry.
For example,
United Kingdom’s Financial Conduct Authority (FCA) created the first formal regulatory sandbox and propagated the concept throughout the world. It helped over 175 innovative businesses within a year to operate (8) (9).
In the United States, the Consumer Financial Protection Bureau (CFPB) created a Disclosure sandbox for companies to iteratively test and improve disclosure content, format, and delivery mechanisms (10).
Canadian Securities Administrators (CSA) offers a regulatory sandbox to better understand how technology innovations impact capital markets and regulate fintech (11).
In Banking, Federal Deposit Insurance Corporation (FDIC) recognized that smaller banks could compete more effectively using AI/ML-based technologies. Banks and regulators have been weighing how fintech can safely play into banking and regulatory activities, such as developing anti-money-laundering controls. FDIC introduced an office of innovation, FDITech, to promote innovative technologies across the financial services sector. On July 20, 2020 the FDIC announced that it was seeking the public’s input on the potential for a public/private standard-setting partnership to promote the efficient and effective adoption of innovative technologies at FDIC-supervised financial institutions.
Incremental Regulations
Regulators are increasingly moving away from regulating processes upfront to avoid the methodological and informational limitations of front-end analysis of impact. They rely on incremental updates to the regulations based on data-driven research, innovation, and process outcomes. For example, the National Highway Traffic Safety Administration (NHTSA) introduced the Federal Automated Vehicles Policy (FAVP) in 2016 to address the lack of steering wheels and drivers in autonomous vehicles. NHTSA subsequently updated the rule to handle new technologies used in autonomous vehicles in 2017 (12).
European Union (EU) rolled out Markets in Financial Instruments Directive II (MiFID II) in January 2018 (13) that updated MiFID I and broadened its scope to improve the competitiveness of financial markets and protect investors. With MiFID II, the range of reporting expanded to include commodities, currencies, and credit products in addition to the equities and bonds from MiFID I. Essentially, the scope of reporting increased to cover almost anything traded on a multilateral trading facility. Organizations that narrowly focused on meeting MiFID I requirements had to rework their solution to address MiFID II compliance requirements. On the other hand, organizations that took an iterative approach to take advantage of technological innovations met the new needs that emerged from incremental MiFID II regulation with relative ease.
COVID-19 Pandemic
As the global workforce moved their offices to their living rooms and kitchens, the pandemic laid bare needs for new and enhanced digital services. The crisis necessitated regulators and businesses to rapidly implement a wide range of measures based on evidence and data-driven predictive models to accommodate consumer behavior changes and overcome financial and operational stress. Regulators reduced regulatory burden on the market operators in both agile and innovative ways that may serve as a basis for a flexible and responsive approach in the post-pandemic world. For example, changes in HIPAA requirements for telehealth (14), easier licensing, and increased reimbursements helped telemedicine practices being adopted rapidly during the pandemic. Network/infrastructure sectors demonstrated resilience in delivering essential services during the global COVID-19 emergency because of adaptive regulatory controls. Responsive regulations, technology adoption using cloud technologies, and digitization of business workflows will define post-pandemic imperatives in almost all industries.
Need for an Innovation Framework
Current trends indicate that the regulators are rethinking traditional regulatory models and taking more iterative and responsive approaches towards regulating new technologies without hampering innovation. Regulators are also collaborating with industries to address opportunities and threats of disruptive technologies that often cross traditional industry barriers. These trend-influenced regulatory changes will also necessitate businesses to rethink governance and development approaches that once allowed them to operate successfully within old policy frameworks.
Novel business models based on disruptive technological innovations benefit from hypothesis-driven experimentation instead of following pre-planned steps to build products or deliver services. Industry incumbents need to become more agile to address challenges that exist within their large complex organizations.
The regulatory landscape will continue to evolve as the digital economy expands through technological innovation and influences both regulators and regulated entities to experiment and adopt emerging cloud-based technologies, API Platforms, and AI at scale. These technologies perform better when adopted iteratively into products and services using a structured problem-solving approach (e.g., design thinking) and introducing incremental culture change towards agility by building a positive attitude towards risk and change management through organizational integration (15).
IBM Garage Method can help organizations identify and prioritize innovation opportunities throughout the enterprise by careful evaluation of risks and uncertainties across horizons against their product roadmaps.
Figure 1: IBM Garage Method helps organizations increase speed to value across four horizons
Using governance as an enabler for innovation, IBM Garage Method separates experimentation from process-driven regulated activities. Once experimentation is successfully validated to be compliant and approved, it is integrated into the target regulated product, process, or service(s) roadmap. This approach has the following benefits:
1. Each hypothesis is validated into a proof of concept backed by relevant metrics and artifacts in a system of record that can help establish early evidence of conformance to the applicable standards.
2. The target product can be incrementally enhanced and validated as it progresses through its lifecycle, resulting in a possible reduction in cost and effort to obtain final regulatory approval.
IBM Garage practices can help organizations evaluate hypothetical business scenarios to make better use of disruptive technologies.
Garage Practices
Culture is the heart of the IBM Garage Method; it promotes product quality through user-centric design by building a positive attitude towards risk and change management.
Discover practices help teams and organizations dig deep into their problem domain, align everyone on common goals and success criteria, and identify potential problems and bottlenecks early.
Envision practices provide development teams with a repeatable approach to rapidly deliver innovative user experiences through MVP iterations.
Develop practices enable teams to collaborate and produce high-quality code that can be confidently delivered to production using continuous integration, continuous delivery, and automation.
Reason practices enable organizations to infuse artificial intelligence into a business workflow to make better decisions faster and more accurately.
Operate practices focus on building automation that enables high availability and resiliency by continuously monitoring status and performance.
Learn practices promote continuous experimentation: learning to develop the right solution and gathering metrics to validate initial hypotheses.
IBM Garage Method also puts particular emphasis on data and documentation strategy for regulatory compliance by addressing digital tooling needs at the outset. With repeatable approaches and tools, IBM Garage Method focuses on quality and automation from the conceptual stages of product development. To test a hypothesis, IBM Garage squad(s) rapidly prototype by employing user-centric design techniques and quickly iterate through an MVP or a series of them using a test-centric approach by combining pair programming (PP) with test-driven development (TDD). As IBM Garage Method incorporates continuous verification and validation of code intrinsically through TDD and periodic review of other deliverables, adopting this framework may streamline processes around intermediate regulatory assessments and accelerate final regulatory approvals.
Next...
In the next, and final, part of this blog series I will discuss how organizations can implement the IBM Garage Methodology for software development in regulated sectors. | https://medium.com/ibm-garage/ibm-garage-in-regulated-industries-part-ii-487369603a13 | ['Neal Bhattacharya'] | 2020-10-07 20:04:22.404000+00:00 | ['Regulatory Compliance', 'Organizational Culture', 'Development', 'Cloud Transformation', 'Digital Transformation'] |
Which wich for the witch? | If you purge your soul onto paper and are doing it for the love of writing. Then this publication is for you. We love gritty, raw, emotional, thought-provoking, rebellious, sexual, spiritual, nature-related writings and comic strips.
Follow | https://medium.com/the-rebel-poets-society/which-wich-for-the-witch-d9b295e9fce6 | ['Andy Anderson'] | 2020-10-19 01:19:48.245000+00:00 | ['Witch', 'Humor', 'Comics', 'Funny', 'Storytelling'] |
I’m A Writer, A Real Freaking Writer! | I did it. I jumped. Well, dove head first without checking to see if there was 20 feet of water or a puddle below. I first learned about NaNoWriMo years ago and thought what a massive project that was. Why would anyone want to torture themselves like that? And yet, here I am.
In the past several years, nearly all my writing has been non-fiction. The thought of writing fiction felt too difficult and too time-consuming. It’s just so free and full of possibility–how frightening. Then I did fictional writing for video game narratives a couple of years ago. That was fun. Maybe I could attempt some fiction. But a novel? An actual book, like that may one day be read by an agent and an editor and then published and I could call myself an author and have fancy book signings and my book would sit on shelves in libraries and be bought by people at The Strand and…
As exciting as that sounded, I was quite sure it was something I was not capable of. I mean, books are magical, authors are legends. As I found myself spending more time with writers though, it helped me realize that they are actually human beings. They have faults like me. Some have faults that are way worse than mine even. Those names on my bookshelves are not the gods I’ve made them out to be in my head. So maybe, just maybe, I could write a novel.
In the past year, I’ve been working on a collection of personal essays. Most are still in progress. Many are stories I’m afraid to tell. I get to them when I have time, and when I’m feeling particularly brave. There is one I’ve been wanting to tell for years. I’m not sure the moment I decided I needed to tell this story, but it’s been at least 10 years. I began writing a personal essay about it, and I had so much to tell I realized an essay could never contain all of my story.
After BinderCon, I was full of vigor, inspired to write and write and write. I felt capable and brave. I was ready to rip off the chains that have held me back and fearlessly put myself out there. I’ve been submitting to publications and reaching out to editors. I have been doing it.
I saw #NaNoWriMo come across my Twitter timeline about two weeks before November 1st. I decided to at least check out the website, for future reference. A few minutes later, I found myself signing up for it. I kept it to myself for a few days. I felt embarrassed. Who was I to write a novel? What a fraud—I am not a real writer.
I’d had these same feelings a few weeks earlier at BinderCon. I nearly convinced myself to skip the speed pitches with the editors. I told myself I had no talent, I was not a real writer, and I was wasting their time. I had no business being there and my pitches were awful.
I worked for a while the night before, tweaking my pitches, choosing some writing samples, and freaking the fuck out. I finished up and was still considering my options. I could cancel now and let the BinderCon organizers know so they could open up my appointments to others. That would be a nice thing to do. I imagined myself telling everyone I felt a little ill. I can’t admit I’m scared shitless or that I think I’m worthless. I’d have a bit of a headache, and pass on an opportunity to someone more worthy than me. That would be for the best. Thankfully though, instead, I took some slow breaths. I read some inspiration. I went to sleep.
I woke up the next morning, and somehow felt ready to take on the world. I had a couple of hours to get ready, washed away all my doubts and self-deprecation in the shower. In line before my speed-pitch, I met others who were just as nervous. They had never pitched before. They were sure their book was nowhere near ready to be seen by an editor. They had thought of skipping too. I felt enormous relief.
I walked into the room and found the first editor I was to meet with. I admitted I was nervous, stating I’d never done a face to face pitch before. She smiled and said, “I know, it’s really odd. I don’t even know exactly how this works.” And then I felt okay. We went over my pitch and she liked my ideas.
The next publication I met with had recently shifted their format and my pitch would no longer work. However we spoke about my experience and other writing, and came up with several other pitches I can research and get back to her with. I will hopefully have relationships with both publications going forward. I walked out of that room feeling like a rock star. I am a writer, dammit.
With all that negativity and self-doubt once again creeping up on me after signing up for NaNoWriMo, I reminded myself that I am a freaking writer. I also told myself anyone can write a book. I’ve spent enough time in bookstores to know that is true.
I revealed the fact I’d signed up on my 5 Things post a few days before NaNoWriMo kicked off. I was afraid to say it aloud. I didn’t want to tell anyone, I was afraid I’d be made fun of. I thought people wouldn’t understand. I’d hear it was a waste of time, that this would get me nowhere. I can hear a certain someone asking me, “aren’t there better things to do with your free time?”
Silently putting the news out there in a Tumblr post and then connecting with fellow Binders and others who had also signed up made me feel more confident. I wasn’t foolish. This is a thrilling endeavour I was embarking on. In a month’s time, I will have a rough draft of my first novel. That is a massive accomplishment. The next day I spoke to those closest to me, and they were all stoked. Those I’ve told have been nothing short of supportive.
My friend and I were just discussing serendipity. I can’t help but think the timing of BinderCon, some small writing successes I’ve had, and an essay by Sara Benincasa all came together to give me a much-needed kick in the ass. I have decided to do it anyway. To pitch to as many places as I can. To reach out and ask for a gig. To write a freaking book. To call myself a writer, a real freaking writer!
Good luck to everyone else who has stocked up on caffeine and candy for the month. Extra good luck to our loved ones who will have to put up with us during this time. Feel free to find me here so we can be buddies and help each other through the inevitable hurdles to come. I have made a Pinterest board for NaNoWriMo and writing, full of inspiration and tips that I’m frequently adding to.
I leave you now with a bit of wisdom. I believe it was Ernest Hemingway who said:
“It’s like I got this music in my mind, saying it’s gonna be alright, cause the writers gonna write, write, write.”
Happy #NaNoWriMo to all! | https://medium.com/nanowrimo/im-a-writer-a-real-freaking-writer-45806841ea74 | [] | 2015-10-03 16:34:30.509000+00:00 | ['Writers', 'NaNoWriMo', 'Writing'] |
Seasons Come And Go Slowly | Seasons change without ever a twinkle light shining not so bright. One season leaves us to make way for another.
When was the last time you witnessed the ever-living forest take a huge breath and releasing it into the atmosphere?
Gentle change released into the environment giving us life for another…day.
Haiku/tanka prompt “gentle” | https://medium.com/house-of-haiku/seasons-come-and-go-slowly-5447022d1234 | ['Pierre Trudel'] | 2020-09-26 14:12:58.513000+00:00 | ['Love', 'House Of Haiku Prompt', 'Seasons', 'Change', 'Environment'] |
Getting an Agent Does Not Mean Your Book Will Be Published | Getting an Agent Does Not Mean Your Book Will Be Published
I learned the hard way that even an expert, motivated agent might not be able to place your manuscript as soon as you’d hoped.
Photo by Steve Johnson on Unsplash
Everybody wants an agent. There are a lot of good reasons, not the least of which is that most publishers won’t even look at non-agented manuscripts.
I lucked out and had an easy time finding an agent. I emailed a plot outline and got a call from the agency the next day, “We think this story would make a great children’s picture book.”
That was five years ago.
The first thing I learned is that a plot outline is not a book.
Marching orders: Write the book. Create characters and settings and dialogue.
My agent sent my draft manuscript to a trusted colleague at Publisher A. Her response was that the subject matter, two child refugees in wartime England, was too mature for a picture book. She suggested a middle-grade (ages 9–12) chapter book. Me? A chapter book?
After more than a year of learning and struggles and rewrites — creating more complex characters and situations, and excitement and suspense — the revised manuscript was submitted to Publisher B. An acquisition editor there was enthusiastic, and we thought they were going to take it. Then the marketing department turned it down. “Not right for our list.”
FYI, first-time fiction authors do not get contracts based on an outline and a few sample chapters. You have to write the whole book. And get it in tip-top shape so editors at the publishing house will have very little to do.
Next, a referral from my agent brought me to a consultant, a developmental editor who’d previously worked on children’s books at several top NYC publishing houses. She let me know in no uncertain terms what could be improved, especially staying in the moment of each scene without digressions and flashbacks. She agreed that the story line was fine and that I’d done the research. But that won’t sell a book. In her opinion, my MC (main character), a young refugee from Nazi-occupied Austria, was too passive, not engaging enough. She didn’t move the plot forward; my MC merely reacted to the horrific events happening around her. Kids today, the editor emphasized, need to be inspired by active characters who change the world around them. Keep the adult characters out of the way, she advised, or outsmart them. Wow, a 12-year-old girl in 1939 outsmarting her parents and the Nazi regime. Not easy to do.
Draft after draft.
After more than another year’s work — not every day, but in between freelance nonfiction assignments — my agent submitted a new, much-improved draft to Publisher C. An important editor there loved it and held onto it for almost a year, shopping it around the company, with apparently positive feedback. Her colleagues asked for more information about my background and other published writing. A meeting was set. And then there was a crisis at Publisher C that made the newspapers. That story line ended with a broken heart. Mine.
At that point, I started thinking about self-publishing. Coincidentally, I was offered and accepted the freelance assignment of reading and critiquing 25 self-published novels that had been entered in the 2018 Writer’s Digest competition. Doing that taught me that the self-published world was not one I wanted to be in. Reading and critiquing those books was difficult. Some of them were so amateurish in every way, from the concept to the writing to the cover design, that it was difficult to come up with anything positive to say.
Back to the drawing board with another developmental editor. With my agent still cheering me on, I worked hard on the next draft: More drama. Deep, true emotions. Making sure kids 9–12 will want to root for my MC. Tightening up every scene. Weaving in every unraveled thread that didn’t 100 percent make sense. Some people think that writing for kids is easier, “a good place to start your writing career.” It’s not. There are specific conventions for every age group: from the 32-page picture book to the 75,000-word YA novel with contemporary themes ripped from the headlines.
Now we’re finally getting somewhere.
Currently, we have a big, enthusiastic bite from Publisher D — who wants to see the manuscript again with a shorter word count, several scenes cut, and a bit more “kid appeal.” Play up the fashions and friends. Delete everything of interest only to adults, like the excerpt from Winston Churchill’s speech my MC heard over the wireless. I’m working hard on it. Perseverance is the name of the game. In the past, an agent was unable to sell two of my nonfiction book proposals. But I wasn’t especially upset. I turned the concepts into magazine articles. But this middle-grade historical novel is my baby. One I nurtured and love… and really want to get out there and into readers’ hands.
I hope I’ll have good news to share in the very near future.
And, of course, this isn’t only happening to me. As just one example, a close friend, an Emmy-winning television producer, recently spent four years on an action thriller for adults. He worked hard, hand-in-hand with his agent… who just last week told him that she was unable to sell it.
Wish us both luck.
Why did I tell you all this? Not to discourage you. Keep writing! Just to let you know that if you want to be commercially published — even if you have a dedicated agent making introductions, submitting, and selling — you might have to tread a similar path. I hope not. But you might. So be prepared. | https://ellenshapiro.medium.com/getting-an-agent-does-not-mean-your-book-will-be-published-f6ecb954b694 | ['Ellen M. Shapiro'] | 2019-08-20 15:58:05.033000+00:00 | ['Perseverance', 'Childrens Books', 'Publishing', 'Writing'] |