Dataset schema: title (string, 1–200 chars), text (string, 10–100k chars), url (string, 32–885 chars), authors (string, 2–392 chars), timestamp (string, 19–32 chars), tags (string, 6–263 chars)
What are *Args and **Kwargs in Python?
What are *Args and **Kwargs in Python? Boost your functions to the n-level. Photo by SpaceX on Unsplash If you have been programming in Python for a while, surely you will have had doubts about how to use a function properly. You went to the documentation and found *args and **kwargs among the parameters of that function. In matplotlib or seaborn (to name but a few), these words regularly appear in the signatures of functions. Let's take a look at a caption. Image by Author When I started seeing these attributes, my brain usually acted as if they didn't exist. Until I realized how powerful and convenient they are. Using them, your functions can be a lot more versatile and scalable. Usually, functions have some 'natural' attributes, and each parameter is regularly predefined. For example, in the matplotlib function seen above, we have the "data" attribute, whose value "=None" is predefined. This means that if you don't pass a value for "data", the call to the function will return an empty figure: a None by default. What happens with the *args and **kwargs seen above? The args here are usually used to introduce the data: x1, x2, and so on. And the kwargs are used for the tuning of your graph. The properties of each graph are determined by the kwargs you introduce. Linewidth, linestyle, or marker are just some of the keyword arguments you may already be familiar with. So you have already been using this **kwargs thing unconsciously.
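As a rough illustration (plain Python, independent of matplotlib; the function name and arguments below are made up for the example), here is how *args and **kwargs behave inside a function:

```python
def plot_sketch(*args, data=None, **kwargs):
    """Toy stand-in for a plotting call: args gathers the positional
    data series, kwargs gathers the keyword styling options."""
    print("positional data:", args)    # e.g. ([1, 2, 3], [4, 5, 6])
    print("data:", data)               # stays None unless passed explicitly
    print("styling options:", kwargs)  # e.g. {'linewidth': 2, 'marker': 'o'}

# linewidth, linestyle and marker end up in kwargs; the two lists end up in args
plot_sketch([1, 2, 3], [4, 5, 6], linewidth=2, linestyle="--", marker="o")
```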
https://towardsdatascience.com/args-and-kwargs-d89bdf56d49b
['Toni Domenech Borrell']
2020-12-29 13:14:57.903000+00:00
['Data Science', 'Python', 'Programming', 'Software Development', 'Matplotlib']
Algorithms Tell Us How to Think, and This is Changing Us
Silicon Valley is predicting more and more how we are going to respond to an email, how we will react to someone's Instagram picture, which government services we are eligible for, and soon a forthcoming Google Assistant will be able to call our hairdresser for us in real time. We have invited algorithms practically everywhere, from hospitals and schools to courtrooms. We are surrounded by autonomous automation. Lines of code can tell us what to watch, whom to date, and even whom the justice system should send to jail. Are we making a mistake by handing over so much decision-making authority and control to lines of code? We are obsessed with mathematical procedures because they give us fast, accurate answers to a range of complex problems. Machine learning systems have been implemented in almost every realm of our modern society. Yet what we should be asking ourselves is: are we making a mistake by handing over so much decision-making authority and control to lines of code? And how are algorithms affecting our lives? In an ever-changing world, machines are doing a great job of learning, at a fast pace, how humans behave, what we like and hate, and what is best for us. We're currently living within the chambers of predictive technology — oh, hey there, Autocomplete! Algorithms have drastically transformed our lives by sorting through vast amounts of data and giving us relevant, instantaneous results. By handing over big amounts of data over the years, we have given companies the power to decide what's best for us. Companies like Alphabet or Amazon have been feeding their respective algorithms with the data they harvest and are instructing AI to use the information gathered to adapt to our needs and be more like us. Yet as we get used to these handy features, are we talking and behaving more like computers? "Algorithms are not inherently fair, because the person who builds the model defines success." — Cathy O'Neil, data scientist At this technological rate, it's impossible not to imagine a near future where our behavior is guided or dictated by algorithms. In fact, it's already happening. Designed to help you write messages or quick replies, Google rolled out its latest Gmail feature, called Smart Replies, last October. Since taking the internet by storm, the assistant has been criticized by a lot of people, who say that its tailored suggestions are invasive and make humans look like machines, with some even arguing its replies could ultimately influence the way we communicate or possibly change email etiquette. The main issue with algorithms is when they get so big and complex that they start to negatively affect our current society, putting democracy in danger (hi, Mark Zuckerberg) or subjecting citizens to Orwellian measures, like China taking unprecedented steps to rank people's credit scores by tracking their behaviour with a dystopian surveillance program. As machine-learning systems become more pervasive in many areas of society, will algorithms run the world, taking over our thoughts? Now, let's take Facebook's approach. Back in 2015 it rolled out a newer version of the News Feed, designed as an ingenious way of ranking and boosting users' feeds into a personalized newspaper, allowing them to engage with content they had previously liked, shared, and commented on. The problem with "personalized" algorithms is that they can put users into filter bubbles or echo chambers.
In real life, most people are far less likely to engage with viewpoints that they find confusing, annoying, incorrect, or abhorrent. In the case of Facebook's algorithms, they give users what they want; as a result, each person's feed becomes a unique world, a distinctive reality of its own. Filter bubbles make it increasingly difficult to have a public argument, because from the system's perspective information and disinformation look exactly the same. As Roger McNamee wrote recently in Time magazine, "On Facebook facts are not an absolute; they are a choice to be left initially to users and their friends but then magnified by algorithms to promote engagement." Filter bubbles create an illusion that everyone believes the same things we do or has the same habits. As we already know, Facebook's algorithms aggravated the problem by increasing polarization and, ultimately, harming democracy, with evidence showing that algorithms may have influenced a British referendum and the 2016 elections in the U.S. "Facebook's algorithms promote extreme messages over neutral ones, which can elevate disinformation over information, conspiracy theories over facts." — Roger McNamee, Silicon Valley investor In a world constantly filled with looming mounds of information, sifting through it all poses a huge challenge for some individuals. AI — used wisely — could potentially enhance someone's experience online or help tackle, in a swift manner, the ever-growing load of content. However, in order to function properly, algorithms require accurate data about what's happening in the real world. Companies and governments need to make sure the algorithms' data is not biased or inaccurate. Since nothing in nature is perfect, biased data can be expected to be inside many algorithms already, and that puts in danger not only our online world but also the physical, real one. It is imperative to advocate for the implementation of stronger regulatory frameworks, so we don't end up in a technological Wild West. We should be extremely cautious about the power we give to algorithms. Fears are rising over the transparency issues algorithms entail, the ethical implications behind the decisions and processes they carry out, and the societal consequences for the people affected. For example, AI used in courtrooms may amplify bias and discriminate against minorities by taking into account "risk" factors such as their neighborhoods and links to crime. These algorithms could systematically make calamitous mistakes and send innocent, real humans to jail. As security expert Bruce Schneier wrote in his book Click Here to Kill Everybody, "if we let computers think for us and the underlying input data is corrupt, they'll do the thinking badly and we might not ever know it." Hannah Fry, a mathematician at University College London, takes us inside a world in which computers operate freely. In her recent book Hello World: Being Human in the Age of Algorithms, she argues that as citizens we should be paying more attention to the people behind the keyboard, the ones programming the algorithms. "We don't have to create a world in which machines are telling us what to do or how to think, although we may very well end up in a world like that," she says. Throughout the book, she frequently asks: "Are we in danger of losing our humanity?" Right now, we are still not at the stage where humans are out of the picture.
Our role in this world hasn't been sidelined yet, nor will it be for a long time. Humans and machines can work together, combining their strengths and weaknesses. Machines are flawed and make mistakes just as we do. We need to be careful about how much information and power we give up, because algorithms are now an intrinsic part of humanity and they're not going anywhere anytime soon.
https://orge.medium.com/algorithms-tell-us-how-to-think-and-this-is-affecting-us-eec7fb215dfa
['Orge Castellano']
2019-03-04 19:34:06.614000+00:00
['Machine Learning', 'Privacy', 'Future', 'Technology', 'Artificial Intelligence']
How To Configure Custom Pipeline Options In Apache Beam
Example Project Let's see what we are building here with Apache Beam and the Java SDK. Here is the simple input.txt file; we take this as input, transform it, and output the word count. input.txt Pipeline for the example project As shown above, we split the text file on the ":" character, then extract and count words, format the result, and write it to the output.txt file. This pipeline produces the output below. You might see three output files, depending on the processes running on your machine. testing: 3 progress: 2 done: 1 completed: 1 in: 2 test: 3 Here is the GitHub link for this example project. You can clone it and run it on your machine. You can see the output files in the location /src/main/resources. Here is the main file, which is the starting point of your application and where the whole pipeline is defined. App.java SplitWords Transform Once you read the input file from /src/main/resources, all you need to do is split the text by ":". This is the transform file that takes the input and produces the output. In this case, the input and output collections are PCollection<String>. SplitWords.java The processing logic, the split logic here, is defined in the following file. This is the processing function that takes each input from the collection, processes it, and produces the output. SplitWordsFn.java CountWords Transform After the first split transform, you need to transform these lines into words. This is the transform file that takes the input and produces the output. In this case, the input and output collections are PCollection<String> and PCollection<KV<String, Long>>. CountWords.java The processing logic, the extract logic here, is defined in the following file. This is the processing function that takes each input from the collection, processes it, and produces the output. ExtractWordsFn.java Once the pipeline is created, you can apply a series of transformations with the apply function. You can use TextIO to read from and write to the appropriate files. Finally, you run the pipeline with this line: p.run().waitUntilFinish();.
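For illustration only: the project above uses the Java SDK, but the same pipeline shape can be sketched with Beam's Python SDK. The file names and transform labels below are assumptions made for this sketch, not the author's code.

```python
import apache_beam as beam

# A rough Python-SDK sketch of the word-count pipeline described above:
# read input.txt, split on ":", split into words, count, format, write out.
with beam.Pipeline() as p:
    (
        p
        | "Read" >> beam.io.ReadFromText("input.txt")
        | "SplitOnColon" >> beam.FlatMap(lambda line: line.split(":"))
        | "ExtractWords" >> beam.FlatMap(lambda chunk: chunk.split())
        | "CountWords" >> beam.combiners.Count.PerElement()
        | "Format" >> beam.Map(lambda kv: f"{kv[0]}: {kv[1]}")
        | "Write" >> beam.io.WriteToText("output", file_name_suffix=".txt")
    )
```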
https://medium.com/bb-tutorials-and-thoughts/how-to-configure-custom-pipeline-options-in-apache-beam-37a32f84d1aa
['Bhargav Bachina']
2020-10-05 05:03:09.607000+00:00
['Programming', 'Java', 'Software Development', 'Web Development', 'Software Engineering']
How to Support Indie Writers
How to Support Indie Writers In the rapidly changing world of publishing, more and more writers are going indie — but we need your help in order to succeed Photo by Min An from Pexels Being an indie or non-traditional writer is a tough path. We are a tenacious group of people who are determined to get our work out there, with or without the traditional means of assistance. But actually being able to support ourselves with our work can be a huge challenge. And sometimes, deeply discouraging when we put in endless hours and are lucky to make $10 a month off our efforts. If you want to help, first of all, thank you so much. We wouldn’t be here without our devoted readers. Secondly, take a look at the following suggestions. You’d be surprised how easy it is to help your favorite indie writers. Monetary Support Buy our books If you like our work, support it with your dollars. Buy them new direct from the main distributors. Remember that buying used copies and buying new copies from subsidiary book sellers on Amazon means the author doesn’t get any royalties! Buy our other offerings Many indie writers have secondary offerings. Perhaps they are selling their artwork on Etsy, or are offering online classes or educational subscription services. Sometimes, these purchases can be even more impactful than the purchase of our books. It helps sustain us in between projects. Tithe Is your favorite author on Ko-fi? Patreon? So many of us have places where you can contribute a few dollars, or regular, monthly support. Again, this kind of support is so important. Some of us use this money for specific goals, some of us use it for business supplies, some for treats (I love my Earl Grey tea!), and some of us literally use it to help with the bills. Non-Monetary Support Share everything Let your friends know about the amazing book you just read. Share our posts on social media. Recommend our work to others. Forward our newsletters to your friends. Remember that indie authors are working without the expertise and contact lists of a marketing expert at a big publishing house. We’re on our own. So every fan who speaks up for us, every word-of-mouth recommendation is essential to our success. It’s like having our own, personalized patchwork quilt version of a PR manager (which, in my opinion, is the best kind!). Leave reviews everywhere Oftentimes, book reviews make or break a book purchase. Reviews are even more important to indie authors because self-published and independent press books have to fight against the cultural assumption that they weren’t good enough to be picked up by a traditional publishing house. (Not at all true.) We indie authors need all the help we can get to combat this negative assumption. Remember that you aren’t limited to leaving a review of a book only at the place where you purchased it. Don’t forget Goodreads and other book-centric websites. If you have a blog, write a review there. Share your reviews on social media. And guess what? It’s okay if you didn’t like the book and don’t have a good review to offer. Believe it or not, we want honest reviews! A book with twenty-nine 5-star ratings and nothing else looks pretty hinky, right? No one has a perfect rating. So go ahead — if you didn’t like it, feel free to say so. (But, you know, be polite.) Email your favorite indie author and ask if you can help them promote their latest project Offer to review it on your blog or to lend them a space for a blog book tour. Invite them onto your podcast. 
Authors are so grateful for this kind of support. And you might even get a free digital or audio copy out of the deal! Subscribe to your favorite indie author’s free newsletter You’ll get the scoop on all the upcoming projects, and it helps authors maintain a relationship with their loyal fans. (And don’t forget to forward the emails to friends who might enjoy it!) Attend local events It can be challenging for indie authors to put themselves out there. Give them a boost — fill up the room when they promote a reading/signing or other event. Bring some friends and smile a lot from the audience. And buy a book on your way out. Post pictures of yourself reading the author’s book on social media It is such a thrill for us to see our books out in the world, making an impact on other people’s lives! I’ll never forget the first time someone posted a photo on Instagram of them reading my book on vacation. Or the first time someone made a meme out of a quote from one of my books. It’s a thrill for us and a great way to help us spread the word. Just make sure to hashtag the book title and tag the author. Things to Keep in Mind Remember not to make assumptions about the quality of an indie author’s work just because they are self-published or using an independent press. Some of us are more interested in distributing our work to the world than we are in getting a publishing deal. Some of us are making a deliberate choice not to work with big publishing houses, for any number of reasons. Others want to keep a more intimate relationship between themselves and their readers. And let’s not forget that all too often, high-quality projects are passed over by big publishers because they are so overwhelmed with their current workload and author list. There are countless reasons why indie authors might end up self-publishing and/or using an independent press. Give them a chance before you judge them, and make sure you remind others of this, as well. If you review books on your blog or website, remember that indie authors often cannot afford to send you a printed copy of their book, especially in exchange for only the potential of a book review. Even with our author discounts, sending 10 books out (with shipping costs) for 10 potential reviews (reviews are never guaranteed) can cost us $75 or more. Many indie authors aren’t even able to pay their bills from the money they earn off their books, let alone put large chunks of cash toward promotional outlets that might not come through. If you’re a book reviewer who has these requirements, please reconsider allowing indie authors to send in a digital copy. With a digital copy, they at least won’t incur a loss if you choose not to review the book. Most indie authors really, truly want to connect with their readers. Don’t be afraid to contact us! © Yael Wolfe 2019
https://medium.com/wilder-with-yael-wolfe/how-to-support-indie-writers-6a0923da63bf
['Yael Wolfe']
2020-08-21 05:08:28.158000+00:00
['Writing', 'Business', 'Creativity', 'Indie', 'Self Publishing']
How To Provision Infrastructure on AWS With Terraform
How To Provision Infrastructure on AWS With Terraform A Beginner's Guide with an example project Terraform is an infrastructure-as-code tool that makes it easy to provision infrastructure on any cloud or on-premise. Terraform is a tool for building, changing, and versioning infrastructure safely and efficiently. Terraform can manage existing and popular service providers as well as custom in-house solutions. Configuration files describe to Terraform the components needed to run a single application or your entire datacenter. Terraform generates an execution plan describing what it will do to reach the desired state, and then executes it to build the described infrastructure. In this post, we will see how to provision infrastructure on AWS. Get Started With Terraform Prerequisites Example Project What is Backend Configuring Backend Provisioning Infrastructure Inputs and Outputs Destroying Infrastructure Summary Conclusion Get Started With Terraform The first thing we need to do is to get familiar with Terraform. If you are new to Terraform, check the article below on how to get started. It has all the details on how to install Terraform, the Terraform workflow, example projects, etc.
https://medium.com/bb-tutorials-and-thoughts/how-to-provision-infrastructure-on-aws-with-terraform-d0d2710a0169
['Bhargav Bachina']
2020-11-13 06:02:25.989000+00:00
['Software Development', 'Terraform', 'Programming', 'Cloud Computing', 'AWS']
The Business Value of RPA — Robotics Process Automation
Introduction RPA is on the critical agenda of CEOs to cut costs, increase profitability, and generate new business, and on the agenda of other CxO-level executives for accuracy, compliance, and security. As part of my business architecture and design roles, I architected and designed several business solutions using Robotics Process Automation (RPA) tools and processes for large business and government organizations. I also helped startup companies and sole-trading entrepreneurs add this interesting and emerging technology to their growth agenda. In this post, my focus is on the business value of Robotics Process Automation. I excluded specific tools and service providers; my aim is to evaluate them in a comprehensive article. This story provides a high-level overview of the business value proposition of Robotics Process Automation. Before delving into the business value proposition, I want to briefly introduce what RPA is. The internal ingredients of RPA systems are complex and sophisticated; however, from a functional perspective, it is easier to describe the operational model by abstracting away the technical details. RPA is based on multiple software robots, connectors, and a control plane. A software robot can read a computer screen. In addition, designers or operators can use pre-built connectors to capture data, e.g. from a user screen. We can also edit the tasks that software robots produce. Based on these data collection models, the RPA system can create triggers and timer schedules on the Control Room engine. Then the Control Room engine assigns tasks from the queues to the software robots (a.k.a. Bots) in the worker pool. This is an iterative and repetitive process for the desired outcomes. Business Value Propositions While performing some proof-of-concept or proof-of-technology initiatives, I noticed that the software robots could be up to five times faster than an experienced human agent in completing the same task. Humans cannot work around the clock, but software robots can work on a 24/7 basis. I witnessed three critical items on the agenda of CIOs, CTOs, CDOs, and CISOs: accuracy (integrity of the business), compliance, and security. I witnessed these major concerns in many large business and government organisations. Addressing the key agenda items of CxO-level professionals is a critical value proposition that RPA can provide and exceed in delivery. My experience demonstrated that there was a substantial reduction in human error rates. I received confirming feedback from C-level executives in business and government organizations that RPA enhanced compliance and security capabilities by addressing their organizational concerns. As we know, business and government organizations have a tremendous focus on three inevitable initiatives: to cut costs, increase profitability, and generate new business. RPA adds a compelling value proposition for cost-cutting. Many tasks can be performed in a shorter time and with less human effort. Employees' time can be the biggest cost for large organizations; therefore, each year we see so many redundancies in these business organizations. Increasing profitability is another value proposition that RPA contributes to large business organizations. Profitability is initiated and maintained by the speedy delivery of services and the reduced effort spent by employees, especially those who are highly paid. As RPA is an innovative process and set of tools, it can contribute to generating new business.
This value is greatly appreciated by startup organizations and sole entrepreneurs trying to tap into new markets. At the highest level, RPA can get rid of downstream process remediation and troubleshooting, and can reposition organizational labor to higher-value tasks. Close to my heart as an enterprise architect, RPA can address major concerns around scalability, agility, and flexibility in organizations leveraging Multi-Cloud services and Hybrid Cloud platforms. In business terms, this means that the software robots can easily be replicated and rapidly scaled, across the enterprise and beyond. The goal is to meet peak or atypical workloads on demand. If we have a compelling business case and want to use Artificial Intelligence (AI) and Cognitive Computing capabilities to enhance our automation services, we need to extend our current RPA capabilities using integrated offerings or customise our solutions at the architectural, design, operational, and specification levels. For example, one of my clients wanted to leverage voluminous streaming data from a Big Data analytics platform operating with a very complex set of open-source and proprietary data tools, processes, and technology stacks. The data was coming from internal and external sources at high velocity. I came across many service providers offering RPA products and services with varying capabilities. I am not in a position to recommend or criticize any products as a vendor-agnostic enterprise architect. However, I develop a set of criteria to meet our business requirements and strategic goals, and I neutrally and independently propose the criteria to CxO-level executives. Once they understand the business value from the architectural criteria that I introduce to them, I start creating solutions for the initiatives. Conclusions RPA solutions offer many business benefits to organizations at all levels. RPA solutions aim to reduce pain for CxO-level professionals, especially within key agenda items such as cost-cutting, increasing profitability, and generating new business for CEOs, and they cover accuracy, compliance, and security matters for CIOs, CTOs, CDOs, and CISOs. I want to point out that we should not purchase RPA products or services based on their reputation or market share. We need to ensure that the product and associated services meet the solution requirements, use cases, financial position, and our organization's strategic business direction. A high-level understanding of Business Architecture frameworks can be invaluable for strategizing, designing, implementing, and operating RPA solutions. You can learn about the critical concepts from Agile Business Architecture for Digital Transformation. I also published an introductory article, An Overview of Business Architecture For Entrepreneurs. Thank you for reading my perspectives. The original version of this story was published on another platform.
https://medium.com/technology-hits/the-business-value-of-rpa-robotics-process-automation-2cc017ba9e88
['Dr Mehmet Yildiz']
2020-12-27 16:32:36.128000+00:00
['Technology', 'Artificial Intelligence', 'Business', 'Robotics', 'Writing']
Tensorflow — Learn How to Use Callbacks Efficiently
So, you're training a highly complex model that takes hours to adjust. You don't know how many epochs are enough and, even worse, it's impossible to know which variant will deliver the most reliable results! You want to take a break, but the longer it takes to find the right set of parameters, the less time you have to deliver the project. So what to do? Let's dive in together. Photo by Tianyi Ma on Unsplash Before we Start To make things easier to understand, let's go straight to the callback section, where we'll adjust a Multilayer Perceptron. If you don't know how to create a Deep Learning model, I suggest you take a look at my article below; it will help you build your first Neural Network.
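As a minimal sketch of the idea (the file path, parameter values, and commented-out model are placeholders, not the article's code), Keras callbacks such as EarlyStopping and ModelCheckpoint let training stop itself and keep the best variant instead of guessing an epoch count:

```python
import tensorflow as tf

callbacks = [
    # Stop once validation loss stops improving, rather than guessing the epoch count
    tf.keras.callbacks.EarlyStopping(monitor="val_loss", patience=5,
                                     restore_best_weights=True),
    # Save only the best-performing weights seen so far
    tf.keras.callbacks.ModelCheckpoint("best_model.h5", monitor="val_loss",
                                       save_best_only=True),
]

# model = tf.keras.Sequential([...])  # any compiled Keras model
# model.fit(x_train, y_train, validation_split=0.2,
#           epochs=100, callbacks=callbacks)
```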
https://medium.com/swlh/tensorflow-learn-how-to-use-callbacks-efficiently-b13e0df89de3
['Renan Lolico']
2020-10-20 22:54:18.246000+00:00
['AI', 'Artificial Intelligence', 'Machine Learning', 'Data Science', 'TensorFlow']
Exploratory Data Analysis: Haberman’s Cancer Survival Dataset
What is Exploratory Data Analysis? Exploratory Data Analysis (EDA) is an approach to analyzing data sets to summarize their main characteristics, often with visual methods. EDA is for seeing what the data can tell us beyond the formal modelling or hypothesis testing task. It is always a good idea to explore a data set with multiple exploratory techniques, especially when they can be done together for comparison. The goal of exploratory data analysis is to obtain confidence in your data to a point where you're ready to engage a machine learning algorithm. Another side benefit of EDA is to refine your selection of feature variables that will be used later for machine learning. Why EDA? In a hurry to get to the machine learning stage, some data scientists either entirely skip the exploratory process or do a very perfunctory job. This is a mistake with many implications, including generating inaccurate models, generating accurate models but on the wrong data, not creating the right types of variables in data preparation, and using resources inefficiently because of realizing only after generating models that perhaps the data is skewed, or has outliers, or has too many missing values, or finding that some values are inconsistent. In this blog, we take Haberman's Cancer Survival Dataset and perform various EDA techniques using Python. You can easily download the dataset from Kaggle. EDA on Haberman's Cancer Survival Dataset 1. Understanding the dataset Title: Haberman's Survival Data Description: The dataset contains cases from a study that was conducted between 1958 and 1970 at the University of Chicago's Billings Hospital on the survival of patients who had undergone surgery for breast cancer. Attribute Information: Age of patient at the time of operation (numerical) Patient's year of operation (year — 1900, numerical) Number of positive axillary nodes detected (numerical) Survival status (class attribute): 1 = the patient survived 5 years or longer 2 = the patient died within 5 years 2. Importing libraries and loading the file import pandas as pd import seaborn as sns import matplotlib.pyplot as plt import numpy as np #reading the csv file haber = pd.read_csv("haberman_dataset.csv") 3. Understanding the data #prints the first 5 entries from the csv file haber.head() Output: #prints the number of rows and number of columns haber.shape Output: (306, 4) Observation: The CSV file contains 306 rows and 4 columns. #printing the columns haber.columns Output: Index(['age', 'year', 'nodes', 'status'], dtype='object') print(haber.info()) #brief info about the dataset Output: <class 'pandas.core.frame.DataFrame'> RangeIndex: 306 entries, 0 to 305 Data columns (total 4 columns): age 306 non-null int64 year 306 non-null int64 nodes 306 non-null int64 status 306 non-null int64 dtypes: int64(4) memory usage: 9.6 KB Observations: There are no missing values in this data set. All the columns are of the integer data type. The datatype of status is an integer; it has to be converted to a categorical datatype. In the status column, the value 1 can be mapped to 'Yes', which means the patient survived 5 years or longer, and the value 2 can be mapped to 'No', which means the patient died within 5 years. haber['status'] = haber['status'].map({1:'Yes', 2:'No'}) haber.head() #mapping the values of 1 and 2 to yes and no respectively and #printing the first 5 records from the dataset Output: haber.describe() #describes the dataset Output: Observations: Count: Total number of values present in the respective columns.
Mean: Mean of all the values present in the respective columns. Std: Standard deviation of the values present in the respective columns. Min: The minimum value in the column. 25%: Gives the 25th percentile value. 50%: Gives the 50th percentile value. 75%: Gives the 75th percentile value. Max: The maximum value in the column. haber["status"].value_counts() #gives the count of each status type Output: Yes 225 No 81 Name: status, dtype: int64 Observations: The value_counts() function tells how many data points for each class are present. Here, it tells how many patients survived and how many did not survive. Out of 306 patients, 225 patients survived and 81 did not. The dataset is imbalanced. status_yes = haber[haber['status']=='Yes'] status_yes.describe() #status_yes dataframe stores all the records where status is yes Output: status_no = haber[haber['status']=='No'] status_no.describe() #status_no dataframe stores all the records where status is no Observations: The mean age and the year in which the patients were operated on are almost similar for both classes, while the mean number of nodes differs between the two classes by approximately 5 units. The nodes of patients who survived are fewer when compared to patients who did not survive. 4. Univariate Analysis The major purpose of univariate analysis is to describe, summarize, and find patterns in a single feature. 4.1 Probability Density Function (PDF) The Probability Density Function (PDF) gives the probability that the variable takes a value x (a smoothed version of the histogram). Here the height of the bar denotes the percentage of data points in the corresponding group. sns.FacetGrid(haber, hue='status', height=5).map(sns.distplot, "age").add_legend(); plt.show() Output: PDF of Age Observations: Major overlapping is observed, which tells us that survival chances are irrespective of a person's age. Although there is overlapping, we can vaguely tell that people whose age is in the range 30–40 are more likely to survive, and 40–60 are less likely to survive, while people whose age is in the range 60–75 have equal chances of surviving and not surviving. Yet, this cannot be our final conclusion. We cannot decide the survival chances of a patient just by considering the age parameter. sns.FacetGrid(haber, hue='status', height=5).map(sns.distplot, "year").add_legend(); plt.show() Output: PDF of Year Observations: There is major overlapping observed. This graph only tells how many of the operations were successful and how many weren't. This cannot be a parameter to decide the patient's survival chances. However, it can be observed that in the years 1960 and 1965 there were more unsuccessful operations. sns.FacetGrid(haber, hue='status', height=5).map(sns.distplot, "nodes").add_legend(); plt.show() Output: PDF of Nodes Observations: Patients with no nodes or 1 node are more likely to survive. There are very few chances of surviving if there are 25 or more nodes. 4.2 Cumulative Distribution Function (CDF) The Cumulative Distribution Function (CDF) gives the probability that the variable takes a value less than or equal to x.
counts1, bin_edges1 = np.histogram(status_yes['nodes'], bins=10, density = True) pdf1 = counts1/(sum(counts1)) print(pdf1); print(bin_edges1) cdf1 = np.cumsum(pdf1) plt.plot(bin_edges1[1:], pdf1) plt.plot(bin_edges1[1:], cdf1, label = 'Yes') plt.xlabel('nodes') print("***********************************************************") counts2, bin_edges2 = np.histogram(status_no['nodes'], bins=10, density = True) pdf2 = counts2/(sum(counts2)) print(pdf2); print(bin_edges2) cdf2 = np.cumsum(pdf2) plt.plot(bin_edges2[1:], pdf2) plt.plot(bin_edges2[1:], cdf2, label = 'No') plt.xlabel('nodes') plt.legend() plt.show() Output: [0.83555556 0.08 0.02222222 0.02666667 0.01777778 0.00444444 0.00888889 0. 0. 0.00444444] [ 0. 4.6 9.2 13.8 18.4 23. 27.6 32.2 36.8 41.4 46. ] ************************************************************* [0.56790123 0.14814815 0.13580247 0.04938272 0.07407407 0. 0.01234568 0. 0. 0.01234568] [ 0. 5.2 10.4 15.6 20.8 26. 31.2 36.4 41.6 46.8 52. ] CDF of Nodes Observations: 83.55% of the patients who survived had nodes in the range 0–4.6. 4.3 Box Plots and Violin Plots The box extends from the lower to the upper quartile values of the data, with a line at the median. The whiskers extend from the box to show the range of the data. Outlier points are those past the end of the whiskers. A violin plot is the combination of a box plot and a probability density function (PDF).
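The box and violin plot code itself is not shown in this excerpt; a minimal sketch along the same lines (reusing the haber dataframe and the seaborn/matplotlib imports from step 2; not the article's original code) might look like this:

```python
# Box plot of nodes per survival status (sketch, assumes `haber`, `sns`, `plt` from above)
sns.boxplot(x='status', y='nodes', data=haber)
plt.show()

# Violin plot: box plot combined with the PDF of the same variable
sns.violinplot(x='status', y='nodes', data=haber)
plt.show()
```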
https://towardsdatascience.com/exploratory-data-analysis-habermans-cancer-survival-dataset-c511255d62cb
['Deepthi A R']
2019-09-15 11:01:16.990000+00:00
['Haberman', 'Exploratory Data Analysis', 'Data Analysis', 'Data Science', 'Data Visualization']
Can music help you study?
Do you listen to music while you study? Some people swear by it, others can’t stand it. What does science say? Over the years, several research groups have studied how music affects learning or whether music can help you concentrate. These studies are all different: they look at different types of music, different types of studying, different test subjects, and they take different measurements. In this post, I’ll round up some of the ones that show a positive or neutral effect of music on studying. In the next post, they’ll be more negative or neutral. So keep in mind that, like most articles about scientific studies, one post does not cover all aspects of the topic. Make music part of the studying process One type of studying involves memorizing things and later recalling them. That’s how you would study lists of vocabulary words, or biology or history facts, for example. In one study from earlier this year, researchers from the University of Ulm in Germany tested whether it matters in which way you are presented with the information that you need to remember. Is there a difference between reading text, hearing spoken word, or hearing a song? The researchers, Janina Lehmann and Tina Seufert, discovered that it was easiest to memorize text if you read it, but that people who heard it as a song were better able to comprehend the text. This is the idea behind educational songs. By having students sing about the material they’re studying, they’re connecting with it in a more engaged way. This method is used in language education, but also for subjects that rely less on memorization and more on understanding, such as science. But what if the music is not related to the material that’s being studied? Can music help you concentrate? Background music while studying When the music you’re listening to isn’t relevant to the material you’re studying, your brain is essentially doing two separate tasks: studying and listening to music. The music you’re listening to can change your mood, which can make studying easier if you enjoy the music, but it also has the ability to distract you from your work. Lehmann and Seufert, the same researchers who studied the difference between reading a text or hearing it as music (mentioned above) also investigated the role of background music. They recruited 81 volunteers (all university students) and made half of them listen to music while studying, while the other half studied in silence. The researchers wanted to study how music affects memory, but they didn’t really find a difference in how well either group was able to memorize what they studied. That suggests background music didn’t have much of an effect on this group. But they did see a correlation between how well the subjects understood what they learned: Those students who had a good working memory were better able to learn while music was on in the background. The researchers think this is because music formed more of a distraction for the other participants. The arousal-mood hypothesis One of the theories why some people prefer having music on in the background is the “arousal-mood hypothesis”. This is the idea that listening to fast-paced, upbeat music improves someone’s ability of solving a task. The researchers who first tested this hypothesis used the same Mozart sonata in either a faster or slower tempo, and in a major or minor key, so that all study participants heard the same piece of music, but with different characteristics. 
They found that the faster, major-key version of the piece had a positive effect on solving a spatial task. Another study also found an effect of happy music on creative problem solving. This is another aspect of how music affects learning: by changing the way we feel while we study. Is there a Mozart effect? In the experiment above, the researchers chose a Mozart piano piece, specifically to control for the so-called “Mozart effect”. This is the idea that listening to Mozart makes solving spatial tasks easier. The origin of this theory is a very short research paper from 1993, which studied only 36 subjects (all university students) and didn’t compare Mozart to any other composer. The subjects either listened to the Mozart piece, to a relaxation tape, or to nothing at all. The fact that the students who listened to music did a bit better than the others in this very small study does not say anything about the power of Mozart’s music in particular — it just happened to be the music the researchers chose for their music sample. But the study took on a life of its own. Without actually reading the original paper, people picked up on it, changed the story, and completely overhyped it, to the point where some people believed that “listening to Mozart makes babies smarter”. Those broad interpretations all came from wildly exaggerated interpretations of one very small study. So, no, listening to Mozart will NOT make you smarter. It’s just that listening to happy, upbeat music may make certain tasks a bit easier. Confused by all the conflicting messages? Still not sure whether listening to music makes studying easier? Stay tuned for tomorrow’s post, which will make things even more complicated…
https://easternblot.medium.com/can-music-help-you-study-1a9af07f8a39
['Eva Amsen']
2020-04-04 10:26:56.813000+00:00
['Learning', 'Education', 'Science', 'Students', 'Music']
Our Economy Was Just Blasted Years Into the Future
Throughout history, pandemics have left varying, sometimes momentous impacts on the societies in which they have occurred. In the 16th and 17th centuries, smallpox, measles, and other diseases brought by the Spanish wiped out up to 90% of the South and Central American population, utterly transforming the historic order. Conversely, the global flu pandemic of 1918 to 1919 appeared to establish no new norms, suggests Harvard political scientist Joseph Nye. Rather, the approximately 50 million flu deaths seemed to blend into the general slaughter of World War I and go on to be all but forgotten until modern historians began to write about the calamity in the 1970s. As a catastrophe, Covid-19 itself appears so far to be a hybrid in impact — vastly speeding up some potent trends while quickly dispelling others that people thought were happening but actually weren't. Cliff Kupchan, chairman of the Eurasia Group, says such acceleration is a natural byproduct of crises like pandemics, which "tend to jolt the current system." "There is pressure on all trends, and only the strongest, most vibrant continue to be underway," he says. "Only the fittest survive. You have a Darwinian moment for trends." What Kupchan is describing is an economic time machine. Against the backdrop of a two-century period of faster and faster transformation, the coronavirus is compressing and further accelerating the arc of events. Consider the shift to driverless automobiles, one of the most-predicted events of our time. In the popular vision, repeated countless times by Silicon Valley, Wall Street, Detroit, expensive consultants, think tanks, and governments around the world, the human race is quickly shifting to a world of autonomous, shared cars. Starting in the early 2020s, it has been said, people will travel in such vehicles, heedless of their surroundings, relaxing, working, or shopping in smart metropolises looking substantially like Orbit City, home of the Jetsons, perhaps even including a few flying cars. This newfound liberation from the steering wheel would be a bonanza for automakers and Silicon Valley alike, producing tech-laden vehicles that would suck up a constant flow of lucrative data from the passengers. So certain was this future that the major automakers and Silicon Valley went on a spending spree to make it a reality, investing a collective $16 billion. That was then. Even before Covid-19, many auto hands were already expressing private doubts about the timeline. But now, prominent names have mostly stopped making predictions about what they will produce and when they will produce it. Ford has outright postponed the 2021 debut of robotaxis and driverless delivery vehicles, saying that the virus could have an unknown, long-term effect on consumer behavior. BMW says people seem not to want to get into the kind of shared, autonomous vehicles it had planned but instead to drive their own car. GM has shut down Maven, its car-sharing service, and laid off 8% of the workforce at Cruise, its driverless vehicle division. One reason for the doubts about the revival of gains for workers is yet another byproduct of the coronavirus: an accelerated automation of jobs. Some of this is the auto industry feeling its own mortality: Ford expects to lose $5 billion this quarter after a $2 billion loss in the first three months of the year.
Fiat Chrysler also lost just under $2 billion the first quarter. GM made a little money — $294 million — but that was an 86% drop year-on-year. It has been the same abroad: VW’s earnings plunged by 75% in the first quarter, and Toyota says it expects its full-year profit to plummet 80%. But the industry has also lost confidence that a fully autonomous, go-anywhere vehicle is possible any time soon. In a Wall Street Journal report on May 18, Uber — whose business model until recently centered entirely on mastering autonomy — was said to be reevaluating driverless research after burning through more than $1 billion. It was stunning news since just last year, Uber’s self-driving unit was valued at $7.25 billion. In addition to the major players, tens of millions of dollars of venture capital has gone into countless startups, among them Argo AI, Zoox, Aurora, and Voyage. No one is publicly giving up — that would be too much of a concession given the hit they would probably take from Wall Street. Rather than an admission of failure, look for one after the other to embrace lesser, limited autonomy such as lane changing, highway driving, and automatic parking.
https://marker.medium.com/our-economy-was-just-blasted-years-into-the-future-a591fbba2298
['Steve Levine']
2020-07-08 12:48:43.784000+00:00
['Economy', 'Automation', 'Society', 'Business', 'Future']
Being Noble Is Not Important, But It’s Important To Me
I was once hanging out with a couple of my friends and we were having a discussion when somehow this question popped up: "How important is it to be noble, anyways?" There was a pause in the conversation as we considered the question. It was something that I'd wondered about for a long time, too. Eventually one of my friends, who I've come to really respect and take after, said this: "I don't know, but it's really important to me." I've thought on those words a lot over the past year, and in retrospect, that was the perfect response and the perfect answer. In the response "it's really important to me," there was no elevation or comparison. He had his values, but did not impart or impose them on everyone else at the table. Being noble was his value, plain and simple, one that he took to heart. In the past year, I have taken it to heart as well. I don't, to the best of my knowledge, impose my values and beliefs on others. I'm a big believer in allowing people to find the path of whatever works best for them, personally, to find their way and take personal ownership of that path. But trying to be noble is something that has worked for me. If a friend asks me to do something for them and I give them my word, I flat out can't reckon with or live with myself if I don't follow through. As such, I have used the mantra "keep your word" sparingly as a way to ensure I do something or keep a promise that was important to me. The labels of "honorable" and "noble" tend to have negative connotations in today's society — and I think a lot of people who do label themselves as such have goals of self-ascendancy above their peers, to some extent. If being noble is for recognition or credit, then we are doing it for the wrong reasons. In today's day and age, however, there's this question as well — what does being noble actually mean? The straight-up answer I can give is I don't know. I can only tell you what I think when I think of the word. It can be summed up in Galatians 5:13–14: "For you are called to freedom, brothers; only don't use this freedom as an opportunity for the flesh, but serve one another through love. For the entire law is fulfilled in one statement: Love your neighbor as yourself." I've written about this before, but the main reason I converted to Christianity was that some of the people I'm close with who identify as Christian have a certain way of treating others that I respect profoundly. It is treating those people with unconditional support and kindness, but giving them space and freedom to find their own path. They never thought they were better than me, and I never got the sense that they thought they were better than me. They treated me and others with the utmost humility and respect, the likes of which I didn't know existed. That is what it means, to me, to be noble. That is what I strive to be closer to, each and every day of my life. To be noble means to show, not tell, to be a leader that doesn't tell people what to do or what they should be, but shows people the way to be what they want to be. To be noble is to walk forward vulnerable and with flaws, with full acceptance of yourself, instead of trying to put on the tough act and hide that away. Because I've learned that the hardest thing you can do in life, that you can ever do, is invite people into your pain and suffering and let them see you as you truly are at what you believe is your worst — because hiding is the societal norm and default.
To be noble, again, is to not seek recognition or credit for your acts of good and service, even for the little things. These acts, such as sacrificing an hour or two of sleep to talk to a friend or family member in need, or sitting in silence with someone suffering alone, are not done for the right reasons if you tell the world the next day about all the good you did so they can see what a good person you are. The only recognition you truly need is from yourself, or from God or some other power you believe in. In the final analysis, that's what it's all about. So the question you're probably wondering at this point, if you've gotten this far in the article, is whether I believe I'm noble. I do not. I want to be, and I've certainly gotten better at it as I've moved forward in life. Certain people who have seen snapshots of me would say yes definitively, while others who have seen different snapshots would say absolutely not. But what matters isn't that being noble is important to everyone — what matters is that it's important to me.
https://medium.com/the-partnered-pen/being-noble-is-not-important-but-its-important-to-me-f8dae2f06546
['Ryan Fan']
2019-10-17 00:52:00.881000+00:00
['Self', 'Psychology', 'Creativity', 'Spirituality', 'Religion']
20 Articles That Will Get Your Writing Career Started
You want to learn how to become a writer, but all the writing courses you've seen are expensive. Is it really worth handing over thousands that you may never get back? What if you do a course and discover writing isn't for you after all? I know it's not easy breaking into the writing world. After over a decade in this business as both a writer and editor, I've discovered a few tricks of the trade that I've shared here on Medium over the past year. I decided to collect them all together for you as a kind of free writing course. Bookmark it and work your way through in your own time. If you have questions, feel free to leave a comment. No questions are stupid ones! I'm sure if you're wondering about it, someone else is too. If you're ready for a more in-depth course, with individual feedback on your writing, check out my Creative Nonfiction Academy, or email me about mentoring at [email protected]. I'm happy to answer any questions. If you are looking into copywriting, my recommendation is the Comprehensive Copywriting Academy. This is an affiliate link, but I have tested out this copywriting course myself, including their monthly mentoring calls and Facebook group, and feel 100% confident recommending it. I wish this had been around when I was copywriting! Here's your DIY Writing Course: Getting Started in Writing
https://medium.com/inspired-writer/my-top-20-articles-for-new-writers-254157064c9f
['Kelly Eden']
2020-11-28 06:09:16.569000+00:00
['Freelance', 'Writing Tips', 'Creativity', 'Life Lessons', 'Writing']
Social Media Can Have Mixed Results For Your Mental Health
Our brains crave stimulants throughout the day — especially in people who struggle with ADHD and depression. These stimulants increase dopamine in the brain. We need that little dose of stimulation in the morning to wake up the brain. It is addicting because the brain loves it. "According to an article by Harvard University researcher Trevor Haynes, when you get a social media notification, your brain sends a chemical messenger called dopamine along a reward pathway, which makes you feel good." We can find ourselves in bed for hours just scrolling through social media. You would think that wasting time in bed mindlessly scrolling is bad for your mental health, right? I would say in some cases, yes. Some people are born with an addictive personality. I know this because I have family members who have that trait. These are the alcoholics of my family. There is a lot of controversy surrounding addiction and whether it should be considered a mental illness. Some may say that drugs alter the brain's circuits, which characterizes it as a mental illness, and some may say that no one was forced to pick up a needle and that drug addiction is a choice. In my opinion, addiction should be considered a mental illness, and one of the most dangerous if you ask me. If someone is in the severe stages of alcohol addiction, they can die if they stop drinking alcohol — this is the same for other drugs such as heroin. My point is: people who have an addictive personality should be more cautious when it comes to social media, because it can affect other parts of their lives, such as having difficulty with human communication in real life.
https://medium.com/invisible-illness/social-media-can-have-mixed-results-for-your-mental-health-d27e9c8583a3
['Justine Elizabeth']
2020-07-15 00:18:55.507000+00:00
['Mental Health', 'Health', 'Social Media', 'Addiction', 'Media']
Introducing Brushable Histogram
As part of our ongoing effort to be more open to the frontend community, we are announcing the launch of our first open source UI component: Brushable Histogram (click here for a demo). The idea behind this component came up during the development of Genome, our new dynamic visualization engine. We needed a way to display how the events generated in the visualization were distributed over time, which sounds quite simple but ended up being a bit trickier to make work for all our use cases. The problem A customer might have a regular habit of making a couple of purchases a month for a few years. However, fraudsters may deploy a bot attack which makes hundreds of purchases in just a few minutes. This required a flexible binning approach that can bin events over a time period of a minute, or a month, depending on what's needed. Now consider that we are looking at a graph which involves both cases. We should be able to zoom in on interesting time ranges while continuously adjusting the time granularity of the binning, and pan around it to uncover the story. So here we discovered two fundamental actions our histogram should support: pan and continuous zoom. To speed up navigation between different time ranges, a slider was also necessary. But when you are looking at events spanning a two-day period (binned by hour), how do you know you need to zoom in on a specific 5-minute interval in which a bot attack took place? Our solution This led us to add a bit of flair to our slider and turn it into a strip plot of the full time period under investigation. As each strip represents an event, an area with a large density of strips is an indicator of a high frequency of events. This way, we are able to give a more granular view of event velocity, which allows us to uncover that bot attack and many other fraud patterns. We kept searching for a histogram component that could give us the flexibility to do all of this, but we couldn't find one. So, we set out to create a new one! Beatriz Malveiro, one of our data visualization engineers, produced the original concept and prototype, and Victor Fernandes made several improvements to that first version.
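The flexible time-binning idea can be illustrated outside the component itself. This is only a conceptual sketch in pandas, with made-up timestamps and column names; it is not the component's actual JavaScript implementation.

```python
import pandas as pd

# Hypothetical event timestamps: a short burst plus a few spread-out purchases.
events = pd.DataFrame({"ts": pd.to_datetime([
    "2019-01-01 10:00:03", "2019-01-01 10:00:04", "2019-01-01 10:00:05",
    "2019-02-14 09:30:00", "2019-03-02 17:45:00",
])}).set_index("ts")

# The same events, binned at two very different granularities:
per_minute = events.resample("1min").size()   # surfaces the burst of purchases
per_month = events.resample("1M").size()      # shows the long-term habit
print(per_minute[per_minute > 0])
print(per_month)
```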
https://medium.com/feedzaitech/introducing-brushable-histogram-6c6b0f0f60ca
['Luís Cardoso']
2019-04-11 16:51:17.633000+00:00
['JavaScript', 'Data Visualization', 'React', 'Software Development', 'Software Engineering']
5 Insanely Simple Writing Tips You Need to Know
5 Insanely Simple Writing Tips You Need to Know Giving you the tools to elevate your writing to the next level Photo by James Pond on Unsplash If you’re reading this, you’re probably a writer who’s looking to improve their craft. You may write solely in your private journal, or you might have your own blog with 100,000 subscribers. My point is, no matter how long you’ve been writing, wherever you are in your own writing journey — none of us are perfect, and there is always more to learn. And today is your lucky day. “But why?” I hear you ask. Because, dear reader, I have put together a list of the only writing tips you’ll ever need. This right here is the crème de la crème of writing advice. At least I think so anyway. You may read it and think I’m barking mad. Hopefully not, but stranger things have happened. Still here? Fantastic, let’s begin. 1) Provide value to your readers I want my work to be read. You probably do too. I write because I believe I have something worth writing about, and I hope that my words can be of value to my readers. You clicked on this article for writing advice. If you’d opened this page and found an article discussing LeBron James’ career or recipes for the perfect Philly Cheesesteak, you’d be disappointed. Your words should provide readers with value. They should finish reading your words and feel something. Whether you’ve just given tips on how they can make more money, or how they can avoid feeling stressed first thing in the morning, your words should leave your readers with a clear takeaway that they can easily implement in their own lives. Whatever your message, make it clear, make it valuable, and above all, make it for the reader. 2) Keep it simple Unless the title of your article is ’10 Words That Nobody Else Will Understand (But You’ll Sound Super Smart),’ do us all a favor and put the thesaurus away. We’re not reading your articles so that you can elucidate something to us — explaining it will do just fine. Deciding to use simple language doesn’t mean you think that your readers are cretinous, obtuse, or vacuous; it means that you have respect for their time. As impressive as your scintillating vocabulary might be, few things will annoy readers more than having to pause every ten seconds to look up a definition of a word. “Identify the essential, eliminate the rest.” — Leo Babauta Try to avoid redundant words or phrases, which often serve no other purpose than to bolster your word count. Look at the examples below, and the improved, more concise versions beside them: His eyes actually filled with tears — His eyes filled with tears. It is absolutely essential that you regularly change the oil in your car — It is essential that you regularly change the oil in your car. I do one-hundred sit-ups a day, for the purpose of improving my ab muscles — I do one-hundred sit-ups a day, to improve my ab muscles. In spite of the fact that we could afford it, my wife wouldn’t let me buy a sports car — Although we could afford it, my wife wouldn’t let me buy a sports car. If you can tell a story in 500 words instead of 1,000, do it. Your readers will thank you for respecting their time. 3) Quality is king Most posts don’t go viral, and unfortunately, there is no magic formula.
I’ve read articles that deserved much more attention than they received, and read viral posts that weren’t all that good. That’s just how the cookie crumbles. While you can’t control how your writing will be received, you can control the effort you put in. Nobody wants to read or write an unmemorable story, so you should aim to write the only post a reader will ever need on that topic. Don’t settle for publishing an article that you’ve half-assed in an hour. You might get one-hundred, maybe two-hundred views. But how much better could it have done, if you’d spent the time writing The Godfather of articles, instead of Jaws 4? Write every post as if it has the potential to be a blockbuster. 4) Write your headline, then write it again Many writers pour their hearts and souls into their work. They spend hours editing a piece to within an inch of its life, then they take all of thirty seconds to write a headline before hitting ‘Publish.’ Don’t tell me you haven’t done it. Most writers don’t spend enough time writing headlines, which is odd considering the headline is your reader’s deciding factor on whether to open your article or not. You could have written the War and Peace of blog posts, but if readers don’t like your headline, nobody will ever read your masterpiece. “I have to mix in the right amount of curiosity and information to make a cocktail of highly click-able sh*t.” — Tom Kuegler Tom Kuegler is great at writing headlines, and if you haven’t read any of his work, you should. But don’t just take my word for it, decide for yourself after you’ve looked at some of these headline examples: How To Triple Your Output In One Year How To Build An Easy $1,000 Per Month Money Stream Online A 2-Minute Trick To Write Better Headlines Writers, Here’s How To Actually Start Standing Out What does each of these headlines have in common? They appeal to the wants and desires of readers. Who wouldn’t want to triple their yearly output, or make an easy $1,000 per month online? And best of all, the ‘How To’ element lets readers know that they will come away from the article with a clear insight as to how they too can achieve these goals. Before you hit ‘Publish,’ make a list of as many headline ideas as you can. Try to incorporate powerful, emotional words, which will grab the reader by the scruff of the neck and say, “Hey you! Yeah, you! Come and read this article!” When you’re done, try running them through CoSchedule Headline Analyzer, which will analyze your headline for readability and word balance, and give it a score out of 100. Your headline should not only encapsulate the message of your article, but it should also appeal to the wants and needs of your readers. If your headline doesn’t trigger a part of your reader’s brain to think ‘I need to read that article,’ it isn’t a very good headline. 5) Remember why you write We all have days where we don’t feel like writing. Maybe you’ve found yourself questioning the decision to continue writing. When that feeling strikes, it’s important to remember the reason you fell in love with writing in the first place. Do you remember the first thing that you ever wrote? Maybe you used to write in your diary, or maybe you once ran a blog about continental cheeses. Whatever it was, I bet you remember writing it. My first foray into writing came back in 2015 when I started a gaming blog. I don’t recall why I decided to sit at my laptop one day and write a review for Batman: Arkham Knight the videogame, but you know what? I loved every second of it. 
I remember the exhilaration as the words flowed from my fingers. I remember the vulnerability I felt sending my precious words out into the big wide world for all to read. Above all, I remember the passion I felt, and how I knew I’d found my calling. Don’t ever forget what made you fall in love with writing. “A writer is a writer not because she writes well and easily, because she has amazing talent, or because everything she does is golden. A writer is a writer because, even when there is no hope, even when nothing you do shows any sign of promise, you keep writing anyway.” — Junot Diaz TL;DR Provide value to your readers Keep it simple Quality is King Write your headline, then write it again Remember WHY you write Happy writing.
https://medium.com/swlh/5-insanely-simple-writing-tips-you-need-to-know-ed4d4abc90b3
['Jon Peters']
2020-09-10 09:27:04.005000+00:00
['Self Improvement', 'Writer', 'Creativity', 'Writing', 'Advice']
What’s New in React 16 and Fiber Explanation
Previously, React would block the entire thread as it calculated the tree. This older reconciliation process is now referred to as “stack reconciliation”. While React is known to be very fast, blocking the main thread could still cause some applications to not feel fluid. Version 16 aims to fix this problem by not requiring the render process to complete once it’s initiated. React computes part of the tree and then will pause rendering to check if the main thread has any paints or updates that need to be performed. Once the paints and updates have been completed, React begins rendering again. This process is accomplished by introducing a new data structure called a “fiber” that maps to a React instance and manages the work for the instance, as well as knowing its relationship to other fibers. A fiber is just a JavaScript object. These images depict the old versus new rendering methods. Stack reconciliation — updates must be completed entirely before returning to main thread (credit Lin Clark) Fiber reconciliation — updates will be batched in chunks and React will manage the main thread (credit Lin Clark) React 16 will also prioritize updates by importance. This allows high priority updates to jump to the front of the line and be processed first. An example of this would be something like a key input. This is high priority because the user needs that immediate feedback for the app to feel fluid, as opposed to a low priority update like an API response, which can wait an extra 100–200 milliseconds. React priorities (credit Lin Clark) By breaking the UI updates into smaller units of work, a better overall user experience is achieved. Pausing reconciliation work to allow the main thread to execute other necessary tasks provides a smoother interface and better perceived performance. Error Handling Errors in React have been a little bit of a mess to work with, but this is changing in version 16. Previously, errors inside components would corrupt React’s state and produce cryptic errors on subsequent renders. lol wut? React 16 includes error boundaries, which will not only provide much clearer error messaging, but also prevent the entire application from breaking. After being added to your app, error boundaries catch errors and gracefully display a fallback UI without the entire component tree crashing. The boundaries can catch errors during rendering, in lifecycle methods, and in constructors of the whole tree below them. Error boundaries are simply implemented through the new lifecycle method componentDidCatch(error, info). Here, any error that happens in <MyWidget/> or its children will be captured by the <ErrorBoundary> component (a minimal sketch of such a component is included at the end of this article). This functionality behaves like a catch {} block in JavaScript. If the error boundary receives an error state, you as a developer are able to define what is displayed in the UI. Note that the error boundary will only catch errors in the tree below it, but it will not recognize errors in itself. Moving forward, you’ll see robust and actionable errors like this: omg that’s nice (credit Facebook) Return multiple elements from render You can now return an array, but don’t forget your keys! render() { return [ <li key="A">First item</li>, <li key="B">Second item</li>, <li key="C">Third item</li>, ]; } Portals Render items into a new DOM node. For example, it could be great to have a general modal component that you portal content into. render() { // React does *not* create a new div. It renders the children into `domNode`. // `domNode` is any valid DOM node, regardless of its location in the DOM.
return ReactDOM.createPortal( this.props.children, domNode, ); } Compatibility Async Rendering The focus of the initial 16.0 release is on compatibility for existing applications. Async rendering will not be an option initially, but in later 16.x releases, it will be included as an opt-in feature. Browser Compatibility React 16 is dependent on Map and Set. To ensure compatibility with all browsers, you must include a polyfill. Popular options are core-js or babel-polyfill. In addition, it will also depend on requestAnimationFrame, including for tests. A simple shim for test purposes would be: global.requestAnimationFrame = function(callback) { setTimeout(callback); }; Component Lifecycle Since React prioritizes rendering work, you are no longer guaranteed that componentWillUpdate and shouldComponentUpdate of different components will fire in a predictable order. The React team is working to provide an upgrade path for apps that would break from this behavior. Usage Currently React 16 is in beta, but it will be released soon. You can start using version 16 now by doing the following:
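The original install command isn’t reproduced here, but at the time this was written, opting into the beta looked roughly like this (the @next dist-tag is my recollection of the React 16 beta period and may differ today):

```
npm install react@next react-dom@next
```

And, as promised in the error handling section above, here is a minimal sketch of an error boundary. Only componentDidCatch itself comes from the article; the component name, widget name, and fallback UI are illustrative.

```javascript
class ErrorBoundary extends React.Component {
  constructor(props) {
    super(props);
    this.state = { hasError: false };
  }

  // Invoked when anything in the tree below throws during rendering,
  // in a lifecycle method, or in a constructor.
  componentDidCatch(error, info) {
    this.setState({ hasError: true });
    console.error(error, info.componentStack);
  }

  render() {
    if (this.state.hasError) {
      // Fallback UI shown instead of the crashed subtree.
      return <h1>Something went wrong.</h1>;
    }
    return this.props.children;
  }
}

// Usage: errors thrown inside <MyWidget /> are caught by the boundary,
// and the rest of the application keeps running.
// <ErrorBoundary>
//   <MyWidget />
// </ErrorBoundary>
```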
https://medium.com/edge-coders/react-16-features-and-fiber-explanation-e779544bb1b7
['Trey Huffine']
2018-10-30 13:57:05.603000+00:00
['Code', 'Tech', 'React', 'JavaScript', 'Startup']
The Best Remedy for Insomnia Is the One You Haven’t Tried
The Best Remedy for Insomnia Is the One You Haven’t Tried Most people do exactly the wrong thing during a bout of sleepless nights Stress and worry are major insomnia triggers, and so it’s hardly a surprise that the pandemic has set off a wave of lost sleep. Earlier this year, research in the journal Sleep Medicine found that the emergence of SARS-CoV-2 caused a 37% jump in the incidence of clinical insomnia. Even before the pandemic, insomnia was commonplace. Each year, about one in four adults develops acute insomnia, which is defined as a problem falling asleep or staying asleep a few nights a week for a period of at least two weeks. That’s according to a 2020 study in the journal Sleep. Fortunately, that study found that most people — roughly 75% — recover from these periods of short-term insomnia. But for others, the problem persists for months or years. “A bad night of sleep can be a one-and-done, it can be a couple of nights for a couple of weeks, or it can turn into a chronic problem,” says Michael Perlis, PhD, first author of the Sleep study and director of the Behavioral Sleep Medicine Program at the University of Pennsylvania. One of the reasons that acute insomnia turns into chronic insomnia, Perlis says, has to do with a common mistake people make after a night or two of poor sleep. Even among those who have struggled for years with insomnia, many continue to employ this same counterproductive strategy — a strategy that is based on a fundamental misunderstanding of how sleep works. On the other hand, Perlis says that one of the very best remedies for insomnia is also one of the simplest, and it works because it prevents people from making that mistake. “Do nothing. That’s what I tell people who’ve had a bad night of sleep, or two or three,” he says. “But it’s the hardest nothing you’ll ever do. And I’ll explain why.” The power of sleep debt Perlis says that the common, insomnia-perpetuating error that most people commit is that they try to make up for lost sleep; they take naps, they go to bed early, and they sleep in late. “All of this contributes to sleep dysregulation, which is a recipe for long-term insomnia,” he explains. When he tells people to “do nothing,” he means that they should not try to make up for lost sleep. Instead, they should stick to their usual sleep-wake routine even on days when they’re exhausted and dying for a nap or a sleep-in. “I tell people to be awake in the service of sleep,” he says. “If you build up enough sleep debt, sooner or later that will be enough to force you into deep and prolonged sleep. The ship will right itself.” (To be clear, naps can be fine for healthy sleepers, but for those with insomnia, they can make matters worse.) Sleep debt is such a powerful anti-insomnia force that sleep therapists and clinics often employ a technique known as sleep restriction. This involves limiting the time a person is allowed to spend in bed each night to just six or seven hours — sometimes less — which augments the body’s need for sleep. “Sleep is a homeostatic process, which means that for every hour you’re awake, you’re increasing the pressure [the body feels] to balance that with sleep,” Perlis says. Eventually, if a person doesn’t relieve that pressure by taking naps or sleeping in late, the body’s homeostatic need for sleep will overwhelm whatever is keeping that person awake at night. “If you build up enough sleep debt, sooner or later that will be enough to force you into deep and prolonged sleep. 
The ship will right itself.” Other sleep doctors echo Perlis’ advice — and his warnings. “Patients ask all the time, ‘What happens if my sleep is thrown off? How much do I need to make up?’ But that’s not how sleep works,” says Michael Grandner, PhD, director of the Sleep and Health Research Program at the University of Arizona College of Medicine in Tucson. “Sleep is not like a bank account where if you pull money out and then put money in, it will all balance out.” Grandner once worked with Perlis at the University of Pennsylvania. In support of his old colleague’s recommendation to “do nothing,” he offers a useful analogy. “If you have no appetite at dinner, you wouldn’t fix that by eating snacks all day long,” he says. “That would create a cycle where you’re eating at the wrong times and you don’t have any hunger when it’s actually time to eat.” Similarly — and due to some related biological systems — he says that naps and other attempts to make up for lost sleep can reduce the “sleep hunger” a person feels in bed at night. This lack of sleepiness often leads to another night of poor sleep, which leads to more compensatory efforts to make sleep up the next day, and all of this disrupts the cycles and processes that govern healthy sleep. Grandner says that an important element of the “do nothing” approach is the maintenance of a stable sleep-wake schedule, which helps align the body’s circadian clocks and rhythms. This means going to bed and getting up at roughly the same times (give or take 30 minutes) each day — including on weekends. “Sleep is highly programmable,” he says. “You can train it like you train a dog, but in both cases you have to be consistent.” “Doing nothing doesn’t mean ignoring the problem,” he adds. “It really means staying the course and not overcorrecting after a few bad nights.” “Do nothing. That’s what I tell people who’ve had a bad night of sleep, or two or three.” When “do nothing” fails For many insomniacs — especially those who experience only acute and sporadic bouts of problem sleep that are brought on by stress — Perlis’ “do nothing” advice will do the trick. But if a person’s insomnia is severe and entrenched, there’s another remedy that surprisingly few problem sleepers try despite its sky-high rates of success. That remedy is cognitive behavioral therapy for insomnia, or CBT-I. CBT-I is a form of individualized psychotherapy that has become the “gold standard” for people with chronic insomnia, and that more than a decade of research has shown to be highly effective. In its clinical practice guidelines, the American College of Physicians recommends CBT-I as its “first-line” treatment for chronic insomnia. “CBT-I is magic,” Perlis says. Like other forms of psychotherapy, CBT-I is tailored to each individual’s situation and challenges. It usually combines a handful of interventions that target a person’s thoughts, behaviors, and sleep routines, and it requires a sleep specialist’s oversight, he explains. “The problem for people with insomnia is that when they experience it, they play the short game, not the long game,” he adds. The short game involves trying to catch up on lost sleep, and prioritizing feeling better fast over more durable remedies. The long game may require more work. But the payoff can be a lifetime of better sleep.
https://elemental.medium.com/the-best-remedy-for-insomnia-is-the-one-you-havent-tried-a450f5493cc5
['Markham Heid']
2020-12-03 06:32:42.359000+00:00
['The Nuance', 'Sleep', 'Insomnia', 'Health', 'Brain']
“Okay Google, are you Skynet?” — The Fear & Future of AI.
When I was a kid, I was fascinated by a Japanese anime, Future GPX Cyber Formula (新世紀GPXサイバーフォーミュラ). It is about racing in the future, where race cars are equipped with AI systems called “Asurada.” When Asurada, the cyber system, was near completion, its developer (Kazami) realized that Smith, a high-level executive of “Missinglink” (the company developing the system), wanted to sell Asurada to the military for a huge profit, making Asurada the ultimate AI in war machines. To thwart Smith’s plan, Kazami installed Asurada in a prototype racing machine, the Asurada GSX, with sophisticated encryption. The story then follows how the main character, Kazami’s son, uses Asurada to compete with racers worldwide. screen capture of Future GPX Cyber Formula, the official website Artificial Intelligence in Real life I am not an expert at defining what AI is, and definitions vary widely. Marvin Minsky, the father of AI, described artificial intelligence as any task performed by a machine that would have previously required human intelligence. AI is everywhere today: from calculating recommendations for what you should buy next online, to virtual assistants on mobile phones such as “Siri” and “Hey Google,” to detecting credit card fraud in call centers. Soon we may see a real Asurada GSX in real life; we already have AI-assisted cars on the road, like Tesla’s Autopilot. Different types of AI To begin, it would be good to introduce the basics. At a high level, AI can be split into narrow AI, general AI, and artificial superintelligence. Artificial narrow intelligence (ANI), which has a narrow range of abilities; artificial general intelligence (AGI), which is on par with human capabilities; and artificial superintelligence (ASI), which is more capable than a human. Narrow AI, or weak AI, refers to intelligent systems that have been taught or have learned how to carry out specific tasks. Unlike traditional programming, narrow AI can perform without being explicitly programmed on how to do so. But unlike humans, this type of AI system can only be taught or learn how to do its assigned tasks. A successful example of narrow AI is AlphaGo, the computer program that plays the board game Go. In 2017, the Master version of AlphaGo beat Ke Jie, the number one ranked Go player in the world. AlphaGo was awarded professional 9-dan by the Chinese Weiqi Association afterward. General AI, or strong/deep AI, is more than that. It is an adaptable intellect similar to our own: a resilient framework of intelligence capable of learning how to carry out a tremendous variety of tasks, anything from writing an article to reasoning about a wide range of topics based on accumulated analysis of content. This is the AI commonly seen in movies, like “Skynet” in The Terminator. Sounds scary? Don’t worry, it doesn’t exist today, and AI experts are still fiercely divided over how soon it will become a reality. Superintelligence, as the name suggests, is about something beyond human capability. It is a hypothetical form of AI that we have not been able to create. In theory, this type of AI would not only interpret human emotions and behaviors but also become self-aware enough to surpass our capacity and intelligence. Since this type of AI does not yet exist and would be more capable than our minds, it is difficult to imagine what it would be like and how it would come about.
My best guess would again be what we see on TV shows, like “Rehoboam” in Westworld, and in movies like The Matrix. But before entering the Matrix, we are now going to explore the near future and the AIs that are practical enough for us to use, because we need to know who we will lose our jobs to before it happens. 1# AI writer — GPT-3 Image by Fathromi Ramdlon from Pixabay In September, the Guardian posted an article written by GPT-3 from scratch. GPT-3 is OpenAI’s powerful new language generator. If you didn’t see the article, please take a look. I was really impressed and shocked at the same time. In the article, it said: For starters, I have no desire to wipe out humans. In fact, I do not have the slightest interest in harming you in any way. Eradicating humanity seems like a rather useless endeavor to me. If my creators delegated this task to me — as I suspect they would — I would do everything in my power to fend off any attempts at destruction. GPT-3 is, by definition, a narrow AI. It is created to do text-based work on specific topics. Researchers at OpenAI are most excited about the potential of GPT-3 to boost human productivity and creativity by writing computer code or emails. Critics argue that once the public can access GPT-3, the internet would be “flooded” by people using it to produce “semantic garbage” and fuel disinformation. But that could be avoided and controlled by training the “core” of the basic functions before launch. Imagine one day musicians can focus on the melody and ask an AI to finish notating the score and perform it for them immediately. Customer service agents would no longer need to read every letter of complaint; instead, they would get a prioritized list of customers who need person-to-person interaction first thing in the morning. 2# AI News Anchor — AI Kim Image by uvbenb from Pixabay A South Korean broadcaster has introduced the country’s first AI news anchor. South Korean cable TV channel MBN showed the first AI news anchor on November 6th. The AI is said to be a replica of the South Korean anchor known as Kim Ju-ha, and AI Kim even talked to the real Kim Ju-ha. This AI news anchor can deliver up to 1000 words every minute. The news reporting procedure is as follows: Reporters write the news for the day. Managing producers review the news content. Post-producers add subtitles. Contents are uploaded to the AI. The animation is ready for TV. AI Kim copies everything from the real Kim's looks, from her facial characteristics to the tone of her voice. Basically, the AI is a pretty close replica of Kim. An MBN official said, “This way, quick news delivery can be even more effective and can save time and resources all in one take.” According to MBN, AI Kim now stars on the MBN online news channels four times a day to deliver the country’s main news briefing. She will not say anything wrong, get tired, or need to eat. AI Kim can run the news as long as there is electricity. After a close study of another anchor, this AI could take on that anchor's appearance in the future. There will probably be a TV show performed only by AIs. You only need pre-recorded data of a real TV star; the algorithm will study it and perform deep learning. The whole physical TV production, including all the scenes, the studio, and any real props, would no longer be needed, as the shows would be created in a virtual environment. One thing I want to see: GPT-3 writing the news and AI Kim performing it. AI-created wine & whisky Photo from PxHere The above applications both stay inside the computer.
But this one shows how AI can reach from the cyber world into the physical world. In Napa Valley, Christian Palmaz, the owner of Palmaz Vineyards, has a system called VIGOR that gathers and analyzes millions of data points to help him standardize and promote vine growth and detect problems such as molds and insects at an early stage. VIGOR can also suggest the best wine fermentation, production, and storage conditions based on analysis at a scale humans cannot match. Before that, the most challenging thing in winemaking was that there are too many factors that can affect the quality of the process. Sweden’s Mackmyra distillery began promoting “the world’s first AI-created whisky.” It is whisky produced with an AI distiller that analyzes the aroma, flavors, and color of the contents of the casks with machine learning models. The best thing about using AI in the brewing industry is that each recipe can be repeated with the right inputs. With that said, there are still many factors that can alter the taste. But at least bad wine and whisky will soon be replaced by adequate machine-created drinks. AI in Cybersecurity Image by Darwin Laganzon from Pixabay Cyberspace is basically a tremendous number of computers linked together, and AI in cybersecurity is already working 24x7 for us: tasks like malware analysis and automated detection involve massive amounts of data to review. More AI-assisted security products will be on the market in 2021, as today's data volumes can no longer be analyzed by humans alone. On the other hand, attackers are investing in automated attacks, such as phishing campaigns generated purely by AI, or AI-powered malware that can change its own code. You can foresee there will be more of these, plus attacks that we cannot yet think of. One day, I came across a post on LinkedIn that said about cybersecurity: You have to learn the minimum basics of all aspects of IT. You have to at least be able to be in a room with all the different IT disciplines and be able to hold your own and also offer your guidance from a cyber standpoint. That is from hardware to networking to applications. I truly doubt whether even an experienced CCIE can configure a more secure network than a specialized AI. What takes us years of experience to learn can one day be reproduced by machines in seconds. We need to go further than a machine in the game of cybersecurity. That is why I say the most important thing about being a great cybersecurity professional is a security mindset. You can work with a narrow AI, instructing it to help with what is impossible for you, and then put more time into the area that is untouchable by AI: people. Final Words Image by Seanbatty from Pixabay This story is not written by an AI, for sure. If it were, it would probably be more enjoyable to read. But someday, we may only need to talk to our AI assistant, and a Medium article will be published. The fear that AI will one day harm humans is still far from being realized. By today's standards, the widely deployed AI applications are narrow AI, which can only learn or be taught how to do defined tasks. What we have right now is already mind-blowing: the AI writer GPT-3, the AI news anchor AI Kim, AI-created wine and whisky, and AI in cybersecurity. This is also a great chance to reshape the job landscape again, so humans can focus on the tasks that only we can perform. Before watching AI TV shows and traveling in AI-driven cars, we should think a little further about what benefits of AI we can embrace, and how.
Thank you for reading. Happy reading, and happy thinking about AI.
https://medium.com/technology-hits/hey-google-are-you-skynet-the-fear-future-of-ai-b33b27529dae
['Zen Chan']
2020-12-24 16:51:49.723000+00:00
['Machine Learning', 'Technology', 'Artificial Intelligence', 'News', 'Future']
The Evolution of Big Data Compute Platforms — Past, Now and Later
The Evolution of Big Data Compute Platforms — Past, Now and Later A journey into the evolution of big data compute platforms like Hadoop and Spark. Sharing my perspective on where we were, where we are, and where we are headed. Image by Gerd Altmann from Pixabay Over the past few years I have been part of a large number of Hadoop projects. Back in 2012–2016 the majority of our work was done using on-premises Hadoop infrastructure. The age of on-premises clusters… On a typical project we would take care of every aspect of the big data pipeline, including Hadoop node procurement, deployment, pipeline development and administration. Back in those days Hadoop was not as mature as it is now, so in some cases we had to jump through hoops in order to get things done. Lack of proper documentation and expertise made things even more difficult. Image by Author — On Premises Clusters Overall, managing and administering a multi-node cluster environment is very challenging and confusing at times. There are several variables that need to be accounted for: Operating System Patches — Considering there are multiple machines (nodes), the challenge is to perform the upgrade while the system is up and running. This is a huge ask considering some security patches require a system reboot. Hadoop Version Upgrades — Similar to OS patches, Hadoop needs to be upgraded regularly. Thanks to Hadoop advancements like NameNode High Availability and rolling upgrades there was some relief. Scalability — You may wonder why this is a problem. Hadoop works on the principle of horizontal scalability, so this should not be an issue… just keep adding nodes. Well, that claim is limited and hugely dependent on the availability of hardware. Adding new nodes is easy only if there is extra/unused hardware lying around, so there is a big if here. Support for new frameworks and modern use cases like ML and AI — Distributed frameworks like MapReduce are not as memory hungry as Spark. As new frameworks like Spark started to evolve, the need for CPU and memory became increasingly stronger. Famous claims, like Hadoop running well on commodity-grade hardware, were not true any more. With the growing demand for ML/AI use cases we simply needed stronger hardware, and a lot of it. And then came the cloud… With the advent of the cloud the above challenges were automatically resolved… or the majority of them. Minimal worry about upgrades, patches and scalability. The nature of the cloud made it easy to add new nodes on demand… literally in a matter of minutes. There were several ways in which we started to adopt the cloud: Create a Hadoop/Spark cluster using the cloud provider's virtual machine offerings like EC2. Once the virtual machines had been procured, we installed Hadoop and Spark using various distributions like Cloudera, Hortonworks, or simply the open source version.
Using cloud providers' built-in services like Amazon EMR or Azure HDInsight. Using cloud provider services is marginally more expensive compared to self-procured virtual machines, but it offers several benefits. Faster deployments, minimal need for administration skills, built-in scalability and monitoring are some of the benefits that are worth the extra price. On the downside, some customers do not like the idea of getting tightly integrated with a specific cloud provider. In an extreme case one of our customers chose to take a hybrid cloud approach. We created a 200-node rack-aware Hadoop cluster using a combination of virtual machines from AWS and Azure. I must admit that the reasoning did not make too much sense to me at the time; it was pretty far fetched. They wanted to keep all options open in case one provider offered a better price than the other. This trend is surely catching on now. Since the advent of the cloud the entire job orchestration landscape has been changing (evolving). Due to the flexible nature of cloud resources we are now able to restructure our data pipelines in such a fashion that the need for permanent Hadoop/Spark clusters is quickly diminishing. In most cases, in the traditional cloud model a data lake is composed of a permanent storage layer and a compute layer. Moving compute platforms to the cloud definitely resolves a bunch of issues with resource provisioning, scalability and upgrades. However, once the cluster has been provisioned, all computational jobs are fired up using the same cluster. Since the computational jobs may get fired up at different times during the day, the cluster needs to be available 24x7. Keeping a permanent cluster up all the time is a very expensive proposition. And it's not about paying for 1 or 2 nodes but a bunch of them, whether or not you are using them 24x7. Image by Author — Traditional Data Lake In the traditional cloud model for a data lake, all computational jobs go after the same cluster. Unless the jobs are well spaced out throughout the day, this may lead to resource contention, performance degradation and unpredictable completion times. We started to ask ourselves, is there a better approach? The age of serverless data pipelines and ephemeral clusters is upon us… In recent times customers prefer serverless data pipelines using cloud native services like AWS Glue, or ephemeral Hadoop/Spark clusters. This means each computational job can run within a predefined cluster space or in a cluster specifically spun up for the purpose of running only one job. What is the real advantage of doing this? There are two main reasons: Cost Reduction — Having the flexibility to use cloud resources on demand when required and promptly release them when idle is a huge cost saver. Only pay for what you use. Predictable Performance — Having a job run with predefined resources assures timely completion of the job. Image by Author — Ephemeral Clusters The image above is an example of how computational jobs can use the power of ephemeral clusters. In one of my previous articles (link shared below) I shared the entire process of deploying a transient EMR cluster. Notice that a brand new cluster is created for each and every computational job, and the cluster is promptly destroyed after the job has been completed. Overall, the transient cluster approach is a good choice if you would like to achieve consistent performance while saving costs.
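For readers who want a starting point, here is a rough sketch of launching a transient EMR cluster with boto3. This is not the exact setup from my earlier article: the release label, instance types, counts, role names, and S3 paths below are placeholders you would replace with your own.

```python
import boto3

# Sketch only: a transient EMR cluster that runs one Spark step and then terminates.
emr = boto3.client("emr", region_name="us-east-1")

response = emr.run_job_flow(
    Name="transient-spark-job",                 # placeholder name
    ReleaseLabel="emr-6.2.0",                   # placeholder release
    Applications=[{"Name": "Spark"}],
    Instances={
        "MasterInstanceType": "m5.xlarge",      # placeholder instance types
        "SlaveInstanceType": "m5.xlarge",
        "InstanceCount": 3,
        "KeepJobFlowAliveWhenNoSteps": False,   # key setting: terminate when the step finishes
    },
    Steps=[{
        "Name": "spark-etl",
        "ActionOnFailure": "TERMINATE_CLUSTER",
        "HadoopJarStep": {
            "Jar": "command-runner.jar",
            "Args": ["spark-submit", "--deploy-mode", "cluster",
                     "s3://my-bucket/jobs/etl_job.py"],   # placeholder script location
        },
    }],
    JobFlowRole="EMR_EC2_DefaultRole",
    ServiceRole="EMR_DefaultRole",
)

print("Started transient cluster:", response["JobFlowId"])
```

Because KeepJobFlowAliveWhenNoSteps is false, the cluster shuts itself down as soon as the step completes, which is exactly the pay-for-what-you-use behavior described above.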
It is important to state that employing transient clusters does require automation, but it can be achieved very simply even with a basic understanding of DevOps. Image by Author — Serverless Data Lake The image above depicts how to run your computational jobs using cloud-vendor-provided serverless compute services like AWS Glue. Using such services you can invoke jobs with predefined computational power. There is no need to spin up a new cluster, and you only pay for the resources that your job uses. However, it is important to realize that on the back end you are using a preexisting cluster controlled by the cloud vendor, therefore in some cases you may experience job invocation delays and variable performance. Overall, serverless compute is a good choice for customers who have a limited number of computational jobs, do not want the hassle of managing servers, and are able to tolerate some delays. Serverless data pipelines using the microservices model are coming… A couple of months ago we did a POC on the deployment of a serverless OCR-NLP pipeline using Kubernetes. The project involved a data pipeline that could withstand the load of performing OCR and NLP for hundreds of PDF documents. The customer was looking for a serverless approach and wanted a high degree of scalability because of variable loads throughout the day. Since Spark recently added support for Kubernetes, we thought of giving it a shot. With some work we were able to create a serverless compute pipeline using Spark on Kubernetes deployed over Amazon EKS and AWS Fargate. Image by Author — Serverless Data Lake on Kubernetes You may find that the above approach is very similar to the serverless approach using cloud native services. But this one scales a lot better. The data pipeline can sense the variation in incoming requests and scale the computational power based on it… pretty cutting edge. We were able to successfully run the pipeline for a high number of incoming requests. Even though the approach passed all tests, in the end we were a little skeptical, so we got cold feet and decided to take an alternative approach. Overall, the microservices model would be extremely suitable for customers who not only want to enjoy the flexibility of a serverless compute platform but also want to achieve a high level of scalability. I promise to employ the approach very soon on an upcoming project. I will keep you all posted once that happens. I hope this article was helpful. AWS Data Lake & DataOps is covered as part of the AWS Big Data Analytics course offered by Datafence Cloud Academy. The course is taught online by myself on weekends.
https://towardsdatascience.com/the-evolution-of-big-data-compute-platforms-past-now-and-later-7c46697366d9
['Manoj Kukreja']
2020-12-27 23:39:19.927000+00:00
['Data Science', 'Artificial Intelligence', 'Machine Learning', 'AWS', 'Data']
My first year as a researcher at Criteo
My first year as a researcher at Criteo MG Follow Oct 20 · 11 min read Morgane Goibert is a 24 years old PhD Student with an interesting rare background in both Engineering and Business, graduating from the prestigious and highly selective ENSAE and ESSEC. We talked to her about her career path, her projects and her time with Criteo. Let’s get started! Could you tell us a bit about your background? I am French, and I live in the suburbs of Paris, where I grew up. When I think about how I got where I am today, I must say it’s a mixture of good fortune, hesitation, and determination. When I was in high school, I definitely liked math, but also history and literature… so in short, I had no idea what I wanted to do 🤷. I finally chose to do a “classe prépa ECS”: It is a two-year intense study program with majors in math, history/geopolitics and literature/philosophy with the goal of taking competitive exams to join French Business Schools. This is where I discovered my love for math and decided to join an Engineering school instead of a Business one. Fast forward, in 2015 I entered ENSAE, where I learned a great deal about math (and mainly probability theory and statistics), data science, machine learning. I even discovered research during the very first internship I ever did. In the meantime, I decided I still wanted to have the opportunity to study in a Business School, and I was selected for a double-degree program with ESSEC Business School, which I entered in 2017. I spent two years in ESSEC, where I had courses absolutely not related to math (I specialized in negotiation and geopolitics), but where I broadened my knowledge and my competencies (how to speak in public for example, which was something quite difficult for me before, but also in economics and entrepreneurship, etc.). It also was a great opportunity for internships: I did two 6-month internships when I was in ESSEC, both closely related to math and research, which is something I could never have done in Engineering School. I spent 6 months working on graph theory at the University of Barcelona, in Spain, in an academic research lab (UBICS, the Institute of Complex Systems). And, finally, I did my end-of-study internship in Criteo, from January to July 2019. I worked under the supervision of Elvis Dohmatob, who is a senior researcher here at Criteo, and after my internship, he supported me to continue with Criteo for a PhD. From September 2019 to July 2020, I worked as a researcher at Criteo, and officially started my PhD in August 2020 (yes, the administrative process is long). Why did you decide to choose your own topic of research? Is it the researcher who chooses the topic of research, or the topic of research that chooses the researcher? Apart from the joke, let’s say it’s very difficult, as a junior researcher, to choose a topic. You always have ideas about the broad areas you would enjoy working on (like “I want to work on Computer Vision”), but it’s very difficult to come up with a specific subject (there are so many different things you can do in Computer Vision and you have no idea when you’ve just finished school). The company and Elvis have been a tremendous help for that: When I applied at Criteo for my internship, I had a discussion with Nicolas, our recruiter, during which we spoke about which areas I liked, and then Nicolas directed me toward the matching internship project and supervisor. Thus, it is Elvis who advised the topic of the internship, which was in my case Adversarial Robustness in Deep Learning. 
The first days and weeks of my internship were dedicated to understanding the topic, reading, getting familiar with it, and then, finally, developing ideas and specific research avenues I wanted to dig in. As a junior researcher, finding my research topic and then my projects was a perfect mix of Elvis’s guidance and support, and my own wishes and interests. Now that I have a bit more experience in the domain, the problem is less finding ideas for projects than choosing between all the ideas we may have. After the first project I did with Elvis, I came across interesting questions that were not directly related to it but that I kept for later; extrapolations of your work on other areas; questions arising that you want to answer afterwards; discussions with other colleagues who also raises interesting points you have missed… And between all that, you try to choose project that could be fruitful and really cool. Why did you join Criteo? I joined Criteo for my end-of-study internship in January 2019. I went through a great deal of internship offers before I finally chose Criteo, and the reasons I did were: I wanted to have the opportunity to continue with a PhD , and I knew Criteo had a PhD program, so it was possible. , and I knew Criteo had a PhD program, so it was possible. I wanted to do “real research” even though I was not in an academic lab. In fact, this criteria was quite difficult to meet, and after discussion with Nicolas and Elvis, I realized that yes, Criteo AI Lab does real research, some projects are very theoretical, some others are more applied, but in the end it is research, with papers published in conferences and journals, and so on. even though I was not in an academic lab. In fact, this criteria was quite difficult to meet, and after discussion with Nicolas and Elvis, I realized that yes, Criteo AI Lab does real research, some projects are very theoretical, some others are more applied, but in the end it is research, with papers published in conferences and journals, and so on. Honestly, Elvis impressed me when he presented the topic during the interview (I was like “ok, it looks very very cool, interesting, and everything, I don’t even know if I’m up for the job”). impressed me when he presented the topic during the interview (I was like “ok, it looks very very cool, interesting, and everything, I don’t even know if I’m up for the job”). Amelie’s interview in Criteo’s blog (you can find it here). Amelie is a researcher in Criteo AI Lab, and at some point, after I sent my application to Criteo, I started reading about the company and the lab, to understand a bit who the people working there were and what they were doing, and I came across Amelie’s interview. It really helped me identify with her (she is young too, she has overall a similar background to what I had and a PhD in addition to that, etc.) and realized that, somehow, I fitted. in Criteo’s blog (you can find it here). Amelie is a researcher in Criteo AI Lab, and at some point, after I sent my application to Criteo, I started reading about the company and the lab, to understand a bit who the people working there were and what they were doing, and I came across Amelie’s interview. It really helped me identify with her (she is young too, she has overall a similar background to what I had and a PhD in addition to that, etc.) and realized that, somehow, I fitted. 
The smoothness of the process: I applied, got very quickly a first call with Nicolas, then I met Elvis (and also Mike, another researcher) not even a week after that, got an answer for my application a few days later… I think impressions are quite important, and I had a really nice impression at my first contact with Criteo and Elvis. What are you working on? I’m working on Adversarial Robustness, mainly in Deep Learning for Computer Vision, but also on rankings. Basically, Computer Vision means I’m working with images. Deep Learning means I’m working on a specific type of algorithms that are Neural Networks (you have neurons, you have connections between them, and information flowing from neurons to neurons). These algorithms are very powerful (especially when applied to images), but, surprisingly, they are vulnerable to tiny modifications of the data. If you train a neural network to distinguish rabbits and horses, it will get very good at it, no problem. But if you slightly modify a rabbit image (with your own eyes you won’t even see the difference), you can make the neural network to wrongly predict it’s a horse. The phenomenon is called Adversarial Example, and my work is to find way to avoid neural networks failure, and to better understand how this phenomenon works. I also want to work on this phenomenon when applied to ranking and recommendations. Researchers at Criteo use the white-board a lot… Here is some maths about Adversarial Robustness Is there a specific part of the job that you like the best? What I liked best in my work is the diversity of things I can do. Some days, I read other academic papers and take notes, some other days I focus on theoretical aspects like writing proofs “by hand”, some other days I code to test and illustrate ideas, etc. I like not being stuck to one aspect. Another very important part of the job is collaboration: People often have this vision of researchers being lonely and doing their stuff in their corner, but that’s totally untrue. It’s so important for us to discuss with others at every moment of our projects. You need to discuss when you’re stuck to find creative solutions, you discuss to bridge gaps with researchers focusing on different areas and create collaborations (and you learn a great deal in the process), you present your work regularly to show what problems you solved and how others can use your solution, you write articles to explain what you found, you participate in conferences gathering many researcher from across the globe and so on. At Criteo, it’s very easy to discuss with everyone: As a junior researcher, you can access senior researchers in a very direct way, everyone is open to discussion, it really is direct collaboration. What qualities and skills are important to have in your job? For the qualities and skills, I have three things to say: Perseverance, perseverance and perseverance. Doing research means constantly working on new stuff, so you always need to learn new things, to read articles and so on. You obviously need to know the fundamentals in math, but anyway you’ll learn what is required in your topic by reading articles. What is important, thus, is not to be afraid to be stuck, and to believe in your project and your ability to get it done. You also have projects that do not work out, and it’s just part of the job (and at least you’ve shown that it is useless to try it, so others won’t waist time). 
In addition to that, being organized is better, because in general you have many different things to do at the same time. And obviously, if you like writing papers, that’s also great, because the vast majority of researchers tend to say that the “writing papers” part of the job is more annoying than the rest! How did being at Criteo help you with your PhD and your development? Considering my background, it was quite difficult for me to find a PhD opportunity in a university, because during my two years at ESSEC, I lost touch with the academic world in math. I had no idea who to contact if I wanted to do a PhD in Machine Learning, and as I mentioned, it was also difficult to find a specific topic alone. At Criteo, I received Elvis’s support to establish the PhD project and to find the academic director (when you do a PhD in a company, you also need a university lab and director). Elaborating all the PhD paperwork can be quite complicated, and I was lucky to get the help from other colleagues (Imane, Adrien), and the experience of the fellow older PhD students. On a day to day basis, as a PhD student, I’m still learning, and at Criteo I feel that in addition to Elvis, many senior researchers help me, not only to learn how to do research, but also with advices on how to organize my work, how to review a paper, etc. From what I experienced when I was working in a University, I feel that the management and hierarchical environment is at the same time more open (you can easily discuss with everyone), more flexible (you can collaborate with many different people) and more empowering (we help you with your ideas/questions/projects/etc.). How is what you learned at ESSEC helping you for your PhD & integration in Criteo? Obviously, I didn’t learn any “math hard-skills” at ESSEC that I use when doing Python experiments or math proofs. However, being a researcher is not only about math, but also about communicating about your work, writing papers, giving talks. The courses I had at ESSEC were really valuable for this “communication part”. For example, I took a course about public speaking, and even though I was quite nervous when I had to talk in front of an audience before, I learned techniques that now help me to create better contents (and adapt it better to the audience), talk more easily and without much stress (I have to say I quite enjoy doing that now), engage the audience, know what to do with my body language, my voice, and so on. Each time I have to prepare a presentation or give a talk, this course has been of tremendous help. Additionally, you learn how a company works. As a PhD student, for now, it is not of much utility for me because I focus on pure research projects, but in the future, I would like to work also on applied projects. Having graduated from a business school helps understand the production process and the life of a company, so that I do not need to much effort to understand business discussions, product vocabulary, sales needs, and so on. Last but not least, I think my time at ESSEC helps me see some math problems in a different light. I remember for example an interesting discussion I had with Elvis about some Deep Learning problems being similar to utility problems in microeconomics. Having a different background is always interesting to fuel new ideas and new approaches. Do you have a lot of interactions with colleagues outside Criteo AI Lab? 
As I mentioned, for my day-to-day work, I focus on pure research projects, so I have very few interactions with product-related teams some of my colleagues may have, even though I stay connected with the Criteo community everyday by chatting on our Slack channels. However, I have the opportunity to connect with other colleagues during specific events put in place by Criteo, like “Aujourd’hui je code” (a coding event for high-school students, during which I met engineers outside of CAIL) or Hackathon (an internal competition made to fuel innovation during which different teams develop and present new ideas for Criteo), and of course, social events and parties. In addition to that, I met people from HR, or Events and Communication teams at some point (during my recruitment process, a conference abroad, and even a former schoolmate that works in the Finance department) with whom I like to have a nice discussion or coffee breaks (but obviously, it’s harder when working from home). Were you expecting to see such a variety of research topics in an advertising company? Absolutely not! I thought, before I joined the company, that everyone in the research teams would be working on really directly applied projects, which is not the case, and that the scope of what Criteo is doing was much narrower than what it actually is. I discovered that projects are not “applied” or “pure”, but, most of the time, rather in the middle, and as you usually work on different projects at the same time, you can distribute the degree of application/pureness you like. In addition to that, I had no idea there would be so many different research fields at Criteo: From bandits to transfer learning, from computer vision to optimization, there are as many topics as there are researchers, which is really nice to tackle projects you like and also to foster collaborations and ideas between colleagues of different specialty and interests.
https://medium.com/criteo-labs/my-first-year-as-a-researcher-at-criteo-7064bc4c1f2
[]
2020-10-20 12:42:54.009000+00:00
['Research', 'Computer Vision', 'AI', 'Life', 'Deep Learning']
Structuring Name Data from Strings
II. Structuring Names from Strings: Now that the importance of structuring names is understood, here’s how you would go about structuring a name from string format. Note: Code is written in Python 3. Also, I’ll be using the nameparser library.
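The article's own code is not reproduced in this text, but as a rough illustration of the kind of structuring described, here is a minimal sketch built on the nameparser library mentioned above; the sample names and column choices are made up for illustration and are not taken from the original post.

# Minimal sketch: split free-form name strings into labeled components with nameparser.
# The sample data below is illustrative only.
import pandas as pd
from nameparser import HumanName

raw_names = ["Dr. Juan Q. Xavier de la Vega III", "Bressler, Yaakov"]

def structure_name(raw: str) -> dict:
    """Parse a raw name string into its components."""
    name = HumanName(raw)
    return {
        "title": name.title,
        "first": name.first,
        "middle": name.middle,
        "last": name.last,
        "suffix": name.suffix,
    }

# Build a structured DataFrame, one column per name component.
df = pd.DataFrame([structure_name(n) for n in raw_names])
print(df)

HumanName handles both "First Last" and "Last, First" orderings, which is why the second sample name still comes out with the surname in the right column.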
https://medium.com/nerd-for-tech/structuring-name-data-from-strings-64d6ee50d3e0
['Yaakov Bressler']
2020-12-20 03:42:47.765000+00:00
['Python', 'Software Development', 'Data Processing', 'Data Engineering', 'Pandas']
Introduction to Data Science
After reading this article, you will be able to:
Explain the steps in data science
Apply these steps to predict the EPL winner
Explain the importance of data quality
Define data collection methods

Data Science Life Cycle
Step 1: Define Problem Statement
Creating a well-defined problem statement is the first and most critical step in data science. It is a brief description of the problem you are going to solve. But why do we need a well-defined problem statement?
A problem well defined is a problem half-solved. — Charles Kettering
All the effort and work you do after defining the problem statement goes into solving it. The problem statement is shared by your client. Your client can be your boss or a colleague, or it can be your own personal project. They will tell you the problems they are facing. Some examples are shown below.
I want to increase the revenues
I want to predict the loan defaults for my credit department
I want to recommend jobs to my clients
Most of the time, the initial problem shared with you is vague and ambiguous. For example, the problem statement “I want to increase the revenue” doesn’t tell you by how much to increase the revenue (say 20% or 30%), for which products, or over what time frame. You have to make the problem statement clear, goal-oriented and measurable. This can be achieved by asking the right set of questions.
“Getting the right question is the key to getting the right answer.” – Jeff Bezos
How can you ask better questions to create a well-defined problem statement? Ask open-ended rather than closed-ended questions. Open-ended questions help to uncover unknown unknowns: the things you don’t know you don’t know. Source: USJournal
We will work on the problem statement “Which club will win the EPL?”
https://towardsdatascience.com/intro-to-data-science-531079c38b22
['Ishan Shah']
2020-01-13 09:39:41.921000+00:00
['Data Analysis', 'Epl', 'Premier League', 'Data Science', 'Data Visualization']
ISWYDS exploring object detection using Darknet and YOLOv4 @Design Museum Gent
After repeating these steps for all our images we landed on 3000+ images featuring 37 classes (some images containing over 15 classes). Picking your guns. When it comes to object detection there are many options to pick from, but for our case, we will be using Darknet an open-source neural network framework written in C and CUDA to train our algorithm of choice; YOLOv4. You Only Look Once or YOLO is a state-of-the-art, real-time object detection system making R-CNN look stale, it is extremely fast, more than 1000x faster than R-CNN, and 100x faster than Fast R-CNN. Another good thing about YOLO is that it’s public domain and based on the license we can do whatever we want to do with it...🧐 YOLO LICENSE Version 2, July 29 2016 THIS SOFTWARE LICENSE IS PROVIDED "ALL CAPS" SO THAT YOU KNOW IT IS SUPER SERIOUS AND YOU DON'T MESS AROUND WITH COPYRIGHT LAW BECAUSE YOU WILL GET IN TROUBLE HERE ARE SOME OTHER BUZZWORDS COMMONLY IN THESE THINGS WARRANTIES LIABILITY CONTRACT TORT LIABLE CLAIMS RESTRICTION MERCHANTABILITY. NOW HERE'S THE REAL LICENSE: 0. Darknet is public domain. 1. Do whatever you want with it. 2. Stop emailing me about it! That being said; after configuring our system and installing all the needed dependencies we can take her for a test ride — using some of the pre-trained algorithms that come out of the box, and see what she has to offer. As we can tell from the results [pictures below], YOLOv4 comes with some pre-trained classes such as a chair, vase, and of course the very -uncanny- human. Let’s not make use of that last one, shall we? Although it missed out on some of the more obscure looking vases, we will be using these pre-trained weights as building blocks to create our very own. The goal here is to output new classes based on the object-number of the objects depicted.
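Darknet itself is driven from the command line, but for readers who want to poke at the stock pre-trained weights from Python, here is a rough sketch using OpenCV's DNN module (OpenCV 4.4+ can read YOLOv4 config and weights directly). The file names are placeholders for whichever cfg/weights/names files you downloaded; this is only an illustration, not the pipeline used for the museum collection.

# Rough sketch: running pre-trained YOLOv4 weights from Python via OpenCV's DNN module.
# File paths are placeholders -- substitute your own cfg/weights/names/image files.
import cv2

net = cv2.dnn.readNetFromDarknet("yolov4.cfg", "yolov4.weights")
model = cv2.dnn_DetectionModel(net)
model.setInputParams(size=(416, 416), scale=1 / 255.0, swapRB=True)

# Class labels, one per line (e.g. the COCO names file for the stock weights).
with open("coco.names") as f:
    class_names = [line.strip() for line in f]

image = cv2.imread("museum_object.jpg")  # placeholder image path
class_ids, confidences, boxes = model.detect(image, confThreshold=0.4, nmsThreshold=0.4)

# Draw each detection with its class label and confidence score.
for class_id, confidence, box in zip(class_ids, confidences, boxes):
    x, y, w, h = box
    label = f"{class_names[int(class_id)]}: {float(confidence):.2f}"
    cv2.rectangle(image, (x, y), (x + w, y + h), (0, 255, 0), 2)
    cv2.putText(image, label, (x, y - 5), cv2.FONT_HERSHEY_SIMPLEX, 0.5, (0, 255, 0), 2)

cv2.imwrite("detections.jpg", image)

Training custom classes, as described above, still goes through Darknet's own training commands; the snippet only covers inference with weights that already exist.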
https://oliviervandhuynslager-75562.medium.com/i-see-what-you-dont-see-exploring-object-detection-using-darknet-and-yolov4-330ada17767f
["Olivier Van D'Huynslager"]
2020-11-12 12:26:04.417000+00:00
['Technology', 'Design', 'Object Detection', 'AI', 'Museums']
Label Classification of WCE Images With High Accuracy Using a Small Amount of Labels@ICCVW2019
Label Classification of WCE Images With High Accuracy Using a Small Amount of Labels@ICCVW2019
Effective proposals for collecting high-cost data sets
In this story, “Using the triplet loss for domain adaptation in WCE”, by the University of Barcelona, is presented. It was published as a technical paper at the IEEE ICCV Workshop. Wireless capsule endoscopy (WCE) is a minimally invasive procedure that allows visualization of the entire gastrointestinal tract based on a vitamin-sized camera swallowed by the patient. WCE hardware devices regularly undergo significant improvements in image quality, including image resolution, illumination, and field-of-view area. Since releasing a new dataset every time the WCE hardware changes is expensive, it is important to improve the generalization of the model across datasets from different versions of the WCE capsule. Because different devices change the imaging settings, the data distribution differs between datasets, so images that are not exactly similar to the target image (Fig. 5(b)) can end up being proposed as similar images, as shown in Fig. 5(a). In this paper, using deep metric learning based on the triplet loss function, the authors showed that the triplet loss [Hoffer et al., 2015] may be suitable for dealing with the problem of data distribution shifts over different domains (Fig. 5(c)). The experimental results show that with only a few labelled images taken with a modern WCE device, a model trained on a dataset created with an older device can easily adapt and operate in the environment of the new WCE device with minimal labelling effort. The authors specifically studied the effect of using different amounts of images and procedures, and concluded that diversity is more important than the amount of data. Let’s see how they achieved that. I will explain only the essence of [Laiz et al., 2019]; if you are interested in the details, please refer to [Laiz et al., 2019].
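The paper's implementation is not reproduced here, but to make the triplet objective concrete, below is a minimal hedged sketch in PyTorch: the anchor and positive come from the same class (possibly captured with different capsule versions) and the negative from another class, so the learned embedding is pushed to ignore device-specific image statistics. The toy encoder and tensor shapes are placeholders, not the authors' architecture.

# Minimal sketch of a triplet objective for WCE domain adaptation.
# The encoder is a stand-in; the paper's actual network is not reproduced here.
import torch
import torch.nn as nn

class Encoder(nn.Module):
    """Toy CNN that maps a WCE frame to an L2-normalized embedding vector."""
    def __init__(self, embedding_dim: int = 128):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.fc = nn.Linear(64, embedding_dim)

    def forward(self, x):
        h = self.features(x).flatten(1)
        return nn.functional.normalize(self.fc(h), dim=1)

encoder = Encoder()
triplet_loss = nn.TripletMarginLoss(margin=0.2)

# anchor/positive: same label (e.g. old-device and new-device frames of the same finding);
# negative: a different label. Shapes are illustrative only.
anchor = torch.randn(8, 3, 128, 128)
positive = torch.randn(8, 3, 128, 128)
negative = torch.randn(8, 3, 128, 128)

loss = triplet_loss(encoder(anchor), encoder(positive), encoder(negative))
loss.backward()
print(loss.item())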
https://medium.com/swlh/label-classification-of-wce-images-with-high-accuracy-using-a-small-amount-of-labels-iccvw2019-ddfb18bcc24a
['Makoto Takamatsu']
2020-12-22 22:04:25.302000+00:00
['Deep Learning', 'Biomedical', 'Machine Learning', 'Artificial Intelligence', 'Computer Vision']
A juypter notebook extension for graphical publication figure layout
Communication is key to science and in many fields, communication means presenting data in a visual format. In some fields, such as neuroscience, it’s not uncommon to spend years editing the figures to go into a paper. This is in part due to the complexity of the data, but also in part due to the difficulty of quickly making plots to the standard of publication in the field using tools such as matplotlib. Subplots of different sizes, intricate inset plots and complex color schemes often drive scientists toward using graphical-based tools such as photoshop or inkscape. This post describes the development of a pair of tools which may extend the figure complexity easily achievable with python and matplotlib. The main idea is to graphically define subplots within a figure. This is done leveraging the fact that jupyter notebooks run in a browser, and an extension to the notebook can inject HTML/javascript drawing widgets into the notebook. This lets the user define the subplot layout using a mouse rather than the more cumbersome matplotlib numerical way of defining axes. Then, once the rough plot is done, various components can be resized algorithmically to fit within the allotted canvas space. Part 1: the drawing widget Setting up the extension skeleton As mentioned, the widget is built on top of the jupyter-contrib-nbextensions package, which provides a nice infrastructure for creating compartmentalized extensions which can independently be enabled/disabled. Making your own extension is a bit of cobbling together functions from existing extensions. This link is a good starting point. The nbextensions package keeps each extension in its own folder in a known directory. Once you have installed the nbextensions package, this code snippet will help you find the directory from jupyter_core.paths import jupyter_data_dir import os nbext_path = os.path.join(jupyter_data_dir(), 'nbextensions') nbext_path is where the code for your extension should ultimately end up. However, this location is not the most convenient location to develop the code, and more importantly, we’ll need some way of “installing” code here automatically anyway if we want to distribute our extension to others without having to have it included in the main nbextensions repository. (There are all sorts of reasons to do this, including “beta testing” new extensions and that as of this writing the last commit to the `master` branch of the nbextensions repository was nearly 1 year ago). A better approach than developing directly in nbext_path is to make a symbolic link to a more accessible coding location. Including this python script in your code directory will serve as an install script. Executing python install.py will make an appropriately named symlink from the current directory to nbext_path . Now distribute away your extensions! Creating the extension User flow Let’s briefly discuss the user flow of the extension before getting into implementation Begin with an empty notebook cell and press the icon on the far right which looks like two desktop windows. You can use your mouse to create an initial subplot: When you’re satisfied with your layout, press the “Generate python cell” button to create a cell with equivalent python/matplotlib code. The main challenges are injecting the HTML canvas when the toolbar button is pressed, and then automatically creating the python cell when the layout is ready. Once those are done, the rest of the implementation is just like every other javascript project. 
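The install.py script mentioned under "Setting up the extension skeleton" is linked in the original post rather than reproduced in this text, so the following is only a guess at what such a symlink-based installer might look like; the extension folder name figure_layout is a placeholder, not necessarily the real one.

# install.py -- hypothetical sketch of the symlink installer described above.
# Creates <jupyter nbextensions dir>/<extension name> pointing at this directory.
import os
from jupyter_core.paths import jupyter_data_dir

EXTENSION_NAME = "figure_layout"  # placeholder name for the extension folder

def main():
    nbext_path = os.path.join(jupyter_data_dir(), "nbextensions")
    os.makedirs(nbext_path, exist_ok=True)

    source = os.path.dirname(os.path.abspath(__file__))
    target = os.path.join(nbext_path, EXTENSION_NAME)

    if not os.path.exists(target):
        os.symlink(source, target)
        print(f"Linked {source} -> {target}")
    else:
        print(f"{target} already exists; nothing to do.")

if __name__ == "__main__":
    main()

After linking, the extension still has to be enabled, for example with jupyter nbextension enable figure_layout/main, assuming the entry point is main.js.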
Implementation The main.js file is where most of the coding will happen. Below is the outline of the empty extension define([ 'base/js/namespace', 'base/js/events' ], function(Jupyter, events) { // add a button to the main toolbar var addButton = function() { Jupyter.toolbar.add_buttons_group([ Jupyter.keyboard_manager.actions.register({ 'help': 'Add figure layout generator', 'icon': 'fa-window-restore', 'handler': inject_figure_widget }, 'add-default-cell', 'Default cell') ]) } // This function is run when the notebook is first loaded function load_ipython_extension() { addButton(); } return { load_ipython_extension: load_ipython_extension }; }); This skeleton code runs a ‘startup’ function when the notebook is loaded. That ‘startup’ function creates the toolbar button and also registers a callback to the toolbar putton press. That callback, inject_figure_widget , is the ‘main’ function of the extension which will inject the HTML canvas into the notebook. To make main.js self-contained, you can define helper functions inside of the main function(Jupter, events) . Figuring out the JS/HTML to inject a canvas into the output field is a bit of trial and error using the console and the element inspector. The rough outline is: // execute the current cell to generate the output field; otherwise it won't be created Jupyter.notebook.select(); Jupyter.notebook.execute_cell(); // get reference to the output area of the cell var output_subarea = $("#notebook-container") .children('.selected') .children('.output_wrapper') .children('.output'); // add to DOM let div = document.createElement("div"); output_subarea[0].appendChild(div); Now the HTML elements of the widget can be added to div just like in any javascript-powered web page. Some special handling is needed for keyboard input elements, however. You’ll find if you try to type numbers into input fields that it converts your cell to markdown and eliminates the output field. This is because of Jupyter notebook’s default keybindings. The fix is to disable Jupyter’s keyboard manager when one of your text fields becomes in focus, and re-enable when it exits focus: function input_field_focus() { Jupyter.keyboard_manager.disable(); } function input_field_blur() { Jupyter.keyboard_manager.enable(); } $("#subplot_letter_input").focus(input_field_focus).blur(input_field_blur); Other functionality The implemented widget has a number of other functions for which I won’t describe the implementation as it is all fairly standard javascript: Splitting plots into gridded subplots Resizing subplots with the mouse Aligning horizontal/vertical edges of selected plot to other plots Moving subplots by mouse Moving subplots by keyboard arrows Copy/paste, undo, delete Creating labels Code generation Saving and reloading from within the notebook See the README of the widget for illustration of functionality. Part 2: programmatic resizing The mouse-based layout tool is (hopefully) an easier way to define a complicated subplot layout. One difficulty in laying out a figure with multiple subplots in matplotlib is that sometimes text can overlap between subplots. Matplotlib is beginning to handle this issue with the tight layout feature, but that feature does not appear to be compatible with the generic way of defining subplot locations used here; it is meant to be used with the grid-based subplot layout definitions. 
What we’d like as a user is to Create a rough layout graphically Fill in all the data and the labels Call a routine to automatically make everything “fit” in the available space. Step 2 must happen before everything can be “made to fit”. This is because it’s hard to account for the size of text-base elements beforehand. You might add or omit text labels, which occupies or frees space. Depending on your data range, the tick labels might a different number of characters occupying different amounts of canvas area. A very simple algorithm to make all the plot elements fit on the canvas is Calculate a bounding box around all subplot elements. For each pair of plots, determine if the plots overlap based on the bounding boxes. If there’s overlap, calculate a scale factor to reduce the width and height of the leftmost/topmost plot. Assume that the top left corner of each subplot is anchored. When this scale factor is applied, there should be no overlap for this pair of plots. (Sidenote: if two plots are overlapping assuming zero area allocated for text, they will not be resized; the assumption then is that the overlap is intentional such as for inset plots). Apply the smallest pairwise scale factor globally. This is by no means the best data visualization algorithm, but it should always produce an overlap-free plot. This algorithm is implemented in this simple python module] Axis bounding box Finding the bounding box of various elements in maplotlib takes some trial-and-error. The data structures representing plot elements are quite flexible which can make it hard to figure out how to get the size of elements on the canvas if you’re not familiar with the API (I am firmly in the “not familiar” camp). Below is a simple search which iterates through all the children of an axis and tries to get the size of different recognized elements. I could not figure out a more uniform approach than the one below. def get_axis_bounds(fig, ax, scaled=False): children = ax.get_children() # initial est based on ax itself p0, p1 = ax.bbox.get_points() xmax, ymax = p1 xmin, ymin = p0 for child in children: if isinstance(child, matplotlib.axis.XAxis): text_obj = filter(lambda x: isinstance(x, matplotlib.text.Text), child.get_children()) text_obj_y = [x.get_window_extent(renderer=fig.canvas.renderer).p0[1] for x in text_obj] ymin_label = np.min(text_obj_y) if ymin_label < ymin: ymin = ymin_label elif isinstance(child, matplotlib.axis.YAxis): text_obj = filter(lambda x: isinstance(x, matplotlib.text.Text), child.get_children()) text_obj_x = [x.get_window_extent(renderer=fig.canvas.renderer).p0[0] for x in text_obj] xmin_label = np.min(text_obj_x) if xmin_label < xmin: xmin = xmin_label elif hasattr(child, 'get_window_extent'): bb = child.get_window_extent(renderer=fig.canvas.renderer) if xmax < bb.p1[0]: xmax = bb.p1[0] if xmin > bb.p0[0]: xmin = bb.p0[0] if ymin > bb.p0[1]: ymin = bb.p0[1] if ymax < bb.p1[1]: ymax = bb.p1[1] if scaled: rect_bounds = np.array([xmin, ymin, xmax, ymax]) fig_size_x, fig_size_y = fig.get_size_inches() * fig.dpi rect_bounds /= np.array([fig_size_x, fig_size_y, fig_size_x, fig_size_y]) return rect_bounds else: return np.array([xmin, ymin, xmax, ymax]) There’s a small catch: this method requires matplotlib to first render the figure canvas. Before this rendering, matplotlib may not properly inform you how much space an element will take up. So you’ll have to use matplotlib in interactive mode. Presumably you’re in a jupyter environment if you’re using the widget from part 1. 
If you use the %matplotlib notebook style of figure generation which is interactive, this issue shouldn’t be a problem. Getting the boundaries of the plot area is quite a bit simpler because that’s how you specify where to draw the axes. The information is stored on the bbox attribute of the axis. fig_size_x, fig_size_y = fig.get_size_inches() * fig.dpi plot_bounds = ax.bbox.get_points() / np.array([fig_size_x, fig_size_y]) Once the axis boundary and the plot boundary is known, the size of the border containing the text elements can be calculated on each side. The size of the border is fixed (unless the text changes), so the algorithm to calculate the rescaling factor on the plot is simply to scale it down by the fraction occupied by the border text Resizing examples Below are a few examples of auto-scaling plots to accomodate errant space occupied by text. Axis extending too far horizontally Before: After: Axis extending too far vertically Before: After: Axes overlapping horizontally Before: After: Axes overlapping vertically Before: After: Conclusion Altogether, this approach may automate some of the more tedious data visualization tasks researchers may face when publishing. Dealing with the layout issues algorithmically may lend itself to developing more sophisticated algorithms for laying out figures to be more naturally readable.
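To round out Part 2, here is a minimal hedged sketch of a shrink-to-fit step built on the border-fraction idea described above. It assumes the get_axis_bounds helper shown earlier and an already-rendered figure; it is a simplification for illustration, not the exact code of the linked module.

# Hedged sketch: shrink an axes so its text border fits inside its allotted footprint,
# keeping the top-left corner anchored as in the algorithm described above.
# Assumes get_axis_bounds() from the article and an interactive/rendered figure.
def shrink_to_fit(fig, ax):
    fig.canvas.draw()  # text extents are only valid after a render

    # Bounding box including tick labels etc., in figure-fraction coordinates.
    bxmin, bymin, bxmax, bymax = get_axis_bounds(fig, ax, scaled=True)

    # The bare plot rectangle itself, also in figure fractions.
    left, bottom, width, height = ax.get_position().bounds

    # Fraction of the total footprint taken up by the plot area itself.
    scale_x = width / max(bxmax - bxmin, 1e-9)
    scale_y = height / max(bymax - bymin, 1e-9)
    scale = min(scale_x, scale_y, 1.0)

    # Keep the top-left corner fixed while scaling the plot area down.
    top = bottom + height
    new_width, new_height = width * scale, height * scale
    ax.set_position([left, top - new_height, new_width, new_height])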
https://towardsdatascience.com/a-juypter-notebook-extension-for-graphical-publication-figure-layout-d2f207d7e63f
['Suraj Gowda']
2020-09-15 18:18:43.123000+00:00
['Jupyter Notebook', 'Data Science', 'Matplotlib', 'Data Visualization']
What makes COVID-19 so scary for some and not others?
Different reactions to the risk of COVID-19 as represented by people toasting at a bar and someone staying home and wearing a mask. [Compilation by Nancy R. Gough, BioSerendipity, LLC] What makes COVID-19 so scary for some and not others? Fear of the unknown and comfort with risk For some, the virus is just a part of the risk of life and nothing to be terribly concerned about; for others, the virus is terrifying and any risk of exposure is too much. Several aspects of the pandemic foster fear, especially in people who are risk averse and uncomfortable with the unknown: A fraction of infected people will become desperately ill or even die, but we have no way of knowing who those people are in advance. No universally effective treatment is available for those who will become severely ill with COVID-19, so we don’t know if treatment will work. Every death is reported as if no deaths from this virus are acceptable or expected, setting the perception of risk as very high. For me, the fear comes not so much for myself, but for my family members in the high-risk groups. From talking with friends and family, this is not uncommon. I have heard many people say, “I’m not worried about me, but I am worried about my ____.” Fill in the blank with grandmother, brother with cancer, immune-compromised friend who had a transplant, friend with a heart condition… Young healthy people are generally less in touch with their own mortality and are more likely to feel invincible. No doubt this is partly why the cases are increasing among this group. Unless they are extremely cautious by nature, they are high risk themselves, or they live with a person that is high risk for becoming severely ill with COVID-19, many young adults feel that the risk of getting sick with COVID-19 is not high enough to avoid going out or socializing without social distancing. As the parent of two people in their early 20s, I fear for them. Each of them certainly does more in-person socializing than I do. Each of them eats out, either carry out or in restaurants, more than I do. This fear for my children is not unique of course. Many of the parents of children in K-12 grade and of young adults attending college are afraid for their children. Indeed, the reactions I have seen from teachers and parents about whether or not it is safe for children to go back to in-person school shows how divided the reaction to the SARS-CoV-2 coronavirus is in the US. When I am feeling especially stressed, I remind myself of these words from Michael Osterholm, PhD, MPH (Director of the Center for Infectious Disease Research and Policy, University of Minnesota): This virus doesn’t magically jump between two people — it’s time and dose. So, as long as we keep our distance, wear our masks, and limit contacts with those outside our “germ circle” (as I have taken to calling the people I live with and the few I see socially in person), we should be reasonably safe. Germ circles with number of people in each household. Overlapping circles indicates that at least one person has been inside the home of a person in the overlapping circle. Orange circles represent households with at-risk individuals. Yellow circles represent households with people who work outside of the home. Circle size represent relative risk of COVID-19 positivity based on the exposure to other people outside of the household. [Credit: Nancy R. Gough, BioSerendipity, LLC] I think of it this way. 
My risk depends on 3 factors: How likely is the person outside my germ circle to be infectious with the virus that causes COVID-19? How exposed will I be? How well will my immune system fight the virus? The only one of those I can control and know for sure is the second one. So, I control how close I get, how long I am close, and whether I wear a mask or insist the other person also wears a mask. The same three elements define my risk of passing the virus to someone else: How likely is that I am asymptomatic or presymptomatic for COVID-19? How much will I expose the other person if I am infectious? How well can the other person fight the virus? (Or how at-risk is the other person?) When deciding who to see and whether to risk an indoor encounter with or without a mask, those are the three factors that I consider. When considering invitations to social events, I consider whether I could provide contact tracing information for each person at the event if I should later test positive for COVID-19. Can others provide my information if one of them should test positive? How many other germ circles will intersect with mine? Is the event outside or inside? Will I be able to sit near with people inside my germ circle or will other attendees be seated with us? Can I wear my mask comfortably during the event? Will I be tempted not to wear my mask when I should be wearing it? How likely are the other attendees to be practicing social distancing and wearing masks? Also of interest
https://ngough-bioserendipity.medium.com/what-makes-covid-19-so-scary-for-some-and-not-others-4bdbe9a18b21
['Nancy R. Gough']
2020-08-13 15:45:08.735000+00:00
['Health', 'Lifestyle', 'Personality', 'Society', 'Covid 19']
Hypothesis Vetting: The Most Important Skill Every Successful Data Scientist Needs
The most successful Data Science starts with good hypothesis building. A well-thought hypothesis sets the direction and plan for a Data Science project. Accordingly, a hypothesis is the most important item for evaluating whether a Data Science project will be successful. This skill is unfortunately often neglected or taught in a hand-wavy fashion, in favor of hands-on testing for feature significance and applying models on data in order to see if they are able to predict anything. While there is certainly always a need for feature engineering and model selection, doing so without true understanding of a problem can be dangerous and inefficient. Through experience, I’ve derived a systematic way of approaching Data Science problems which guarantees both a relevant hypothesis and a strong signal as to whether a Data Science approach will be successful. In this article, I outline the steps used to do this: defining a data context, examining available data, and forming the hypothesis. I also take a look at a few completed competitions from Kaggle and run them through the process in order to provide real examples of this method in action. Crafting the Perfect Hypothesis 1. Mapping the Data Context Much like forming a Free Body Diagram in a physics problem or utilizing Object-Oriented Design in Computer Science, describing all valid entities in a given data context helps to map out the expected interplay between them. The goal of this step is to completely designate all the data that could possibly be collected about anything in the context i.e. a description of the perfect dataset. If all of this data were available, then the interactions between the components would be completely defined and heuristic formulas could be used to define every type of cause and effect. 2. Examining Overlap of Available Data to Data Context Next, an observation is conducted to determine how much of the available data fits into the perfect dataset defined in the previous step. The more overlap found, the better a solution space for defining interactions between entities. While not a numeric metric, this observation gives a strong signal for intuition of whether the available data is appropriate and relevant enough. If there is a complete overlap, then a heuristic is probably a better solution than fitting a data model. If there is very little overlap, then even the best modeling techniques will be unable to consistently provide accurate predictions. Note that the strongest predictive signals will always be those that are directly related, thus indirect feature relation is given less emphasis in favor of the real thing. The intuition behind focusing mainly on strong signals is that collecting better data with strong signals will guarantee better performance and reliability. Conversely, predictions using weak signals are prime candidates to become obsolete once better data is available. 3. Forming the Hypothesis Having completed the previous two steps, forming a hypothesis itself becomes trivial. Hypotheses are generally formed by combining a set of available features to predict the outcome of another feature that may be hard to collect in the future or whose value may be needed ahead of its outcome. Now that the overlap between the data context and available data has been clearly defined, and assuming a reasonable amount of the available data is relevant/class-balanced/etc, a hypothesis can simply be stated as: “The available data can produce significant results predicting ____ in the given data context”. 
Note that the prediction label is left open-ended since it can potentially be filled in with many different things! This is because a hypothesis can be formed around almost any of the given features as long as it fits within the data context and there is enough overlap with the available data. Picking the specific feature to predict will depend upon what use cases are most beneficial. Example Projects Below, I’ve carefully selected a couple different Kaggle projects to analyze using the hypothesis forming technique. These examples illustrate some characteristic setups for hypothesis forming as well as each have a given hypothesis which can be evaluated in its respective data context. Project 1: Predicting the Winner of a PubG Match (https://www.kaggle.com/c/pubg-finish-placement-prediction) This project was selected because the data context is simplified, due to the nature that predictions are performed for a virtual world. Predictions of simulations are expected to behave much more consistently and operate along very clearly defined paths, as opposed to real-life systems which may have factors not easily observable. In other words, the data context can very confidently be exhaustively defined. In this video game, players control a single unit which can perform a limited set of actions. The data context has been mapped out below:
https://abbysobh.medium.com/hypothesis-vetting-the-most-important-skill-every-successful-data-scientist-needs-6b84126140f8
['Abderrahman Sobh']
2020-10-30 16:16:30.584000+00:00
['Machine Learning', 'Hypothesis Formation', 'Data Science', 'Kaggle', 'AI']
20 Inspirational Front-End Challenges You Can Start Coding Today
20 Inspirational Front-End Challenges You Can Start Coding Today Challenge yourself and bring your front-end skills to the next level As a developer, the more projects and experience you have, the better you become. Coding is a muscle that, like any other, requires constant exercise. Why not spend a couple of evenings on a side project and put in the extra effort to become exceptionally better at coding? Without further ado, here’s the list of coding ideas for boosting your front-end development skills. Use this article as a source of inspiration for your next project. Here’s the full list of challenges you could start coding today.
https://medium.com/better-programming/20-inspirational-front-end-challenges-you-can-start-coding-today-1a7ebd5c5798
['Indrek Lasn']
2020-11-29 23:42:00.438000+00:00
['Python', 'Startup', 'JavaScript', 'Programming', 'Software Development']
Complete Introduction to PySpark- Part 3
Complete Introduction to PySpark- Part 3
Performing SQL operations on Datasets using PySpark
Photo by Franki Chamaki on Unsplash
What is SQL (Structured Query Language)?
SQL is a language used to perform different operations on data, like storing, manipulating, and retrieving it. It works on relational databases, in which data is stored in the form of rows and columns. SQL commands can be classified into three types according to their properties:
1. DDL (Data Definition Language)
As the name suggests, DDL commands are used to define the data. The commands included in DDL are CREATE, ALTER, TRUNCATE, DROP, etc.
2. DML (Data Manipulation Language)
Data manipulation commands are used to alter and update the data according to user requirements. Some of the commands defined under DML are INSERT, UPDATE, DELETE, etc.
3. DCL (Data Control Language)
The commands defined here are used for controlling access to the database. Some of the commands defined under this are GRANT, REVOKE, etc.
Using PySpark for SQL Operations
In order to perform SQL operations using PySpark, we need to have PySpark installed on our local machine. If you have already installed it, we can get started; otherwise, go through the links below to install PySpark and perform some basic operations on a DataFrame using PySpark.
Loading Required Libraries
After we have installed PySpark on our machine and configured it, we will open a Jupyter notebook to start SQL operations. We will start by importing the required libraries and creating a PySpark session.
import findspark
findspark.init()
import pyspark # only run after findspark.init()
from pyspark.sql import SparkSession
from pyspark.sql import SQLContext
spark = SparkSession.builder.getOrCreate()
Loading the Dataset
For performing SQL operations we will need a dataset. In this article, we will use the Boston dataset, which can be easily downloaded from Kaggle, and we will load it using PySpark.
df = spark.read.csv('Boston.csv', inferSchema=True, header=True)
df.show(5)
Dataset (Source: By Author)
Now let us start SQL operations on our dataset. We will start by creating a table and an object of SQLContext, which will be used to run queries on that table.
1. Creating Table
To create a table, we will use the registerTempTable function of PySpark. We will also create an object of SQLContext used to run queries on the table.
df.registerTempTable('BostonTable')
sqlContext = SQLContext(spark)
2. Select Query
A select query is used for selecting data according to user requirements. We can select the whole table using “*”, or we can pass the names of the columns we want to see, separated by “,”.
#Select whole table (only three records because we used show(3))
sqlContext.sql('select * from BostonTable').show(3)
Select Table (Source: By Author)
#Select columns using column names
sqlContext.sql('select _c0, CRIM, ZN, INDUS, CHAS from BostonTable').show(3)
Select Columns (Source: By Author)
3. Aggregate Functions
There are some predefined aggregate functions in SQL which can be used to summarize data according to user requirements. These functions include:
a. min()
b. max()
c. count()
d. sum()
e. var()
etc. The syntax for these functions is shown below.
#Using the max function
sqlContext.sql('select max(AGE) from BostonTable').show()
max function (Source: By Author)
Similarly, we can use other functions to display output according to user requirements.
4.
Conditional Queries By using conditional queries we can generate outputs that follow a certain condition passed by the user. The most used condition expression is “where”. sqlContext.sql('select CRIM, NOX, B from BostonTable where B = 396.9').show(3) Conditional Data(Source: By Author) We can use different supporting functions in conditional queries which helps in being more specific about the output and can help in running multiple conditions in a single query. These functions are: a. having b. and c. or d. then e. between(used for range) etc. sqlContext.sql('select CRIM, NOX, B, RAD from BostonTable where RAD > 2 and B = 396.9').show(3) Conditional Expression(Source: By Author) Similarly, we can use different functions using the same syntax as given above. 5. Nested Query We can have multiple queries running in the same line of code which is generally called a nested query. It is a complex form of query where we pass different conditions to generate the output according to user demand. Below given is an example of a nested query. sqlContext.sql('select * from BostonTable where AGE between 40 and 50 and TAX not in (311,307)').show(3) Nested Query(Source: By Author) Similarly, you can try different nested queries according to the output you want. This article provides you with the basic information about the SQL Queries using the PySpark. Go ahead try these and if you face any difficulty please let me know in the response section. Before You Go Thanks for reading! If you want to get in touch with me, feel free to reach me on [email protected] or my LinkedIn Profile. You can view my Github profile for different data science projects and packages tutorial. Also, feel free to explore my profile and read different articles I have written related to Data Science.
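As a quick hedged recap of the query patterns covered above, the snippet below combines an aggregate, a condition, and a grouping in one statement. It assumes the BostonTable and sqlContext objects created earlier; GROUP BY is not shown explicitly in the article, but it composes naturally with the aggregate functions from section 3, and the thresholds here are arbitrary.

# Hedged recap: aggregates + condition + grouping on the Boston table.
query = """
    SELECT RAD,
           COUNT(*) AS n_rows,
           AVG(TAX) AS avg_tax,
           MAX(AGE) AS max_age
    FROM BostonTable
    WHERE AGE BETWEEN 40 AND 50
    GROUP BY RAD
    ORDER BY avg_tax DESC
"""
sqlContext.sql(query).show(5)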
https://towardsdatascience.com/complete-introduction-to-pyspark-part-3-9c06e2c5e13d
['Himanshu Sharma']
2020-11-15 18:36:54.272000+00:00
['Data Science', 'Big Data', 'Python', 'Sql', 'Data Analysis']
A Spoonful of Affection
A Spoonful of Affection Writing into the night Photo by Lester Salmins on Unsplash Some days I wake up feeling sensitive to everything; strong and weak at the same time. I’m not a fledgling, I’m a grown man with all a grown man’s faults. Having people connect with me through my words is a touching thing. I’m not a man writing for profit. I’m a few cents a day guy writing for the joy of writing and the connection it offers me. The harmonious world of Hogg people. Sometimes I don’t know if I’m a child, a poet, a gypsy, but I do know where it feels like home. I come here to help myself to a spoonful of affection. Like a child coming home to the smell of bread in the oven. If life had been as certain and direct as the path writing has taken me down, I would never have known doubt in my life. It’s a scary thing being out in the world, exposed, even to friendly faces. I can allow weaknesses that I wouldn’t permit in social life. The thing about stories is they are so damn big, and I have so little time. I try to cut corners, as if the story will spring from my brain and all I have to do is set it down. I’m a fraud for that. Anyway, it never works. Some of the works I’ve published here, nothing stories, looking back, only add to my shame. Knowing some ego took me over. Some of my stories seem nervous in print, hasty, experimented with, only to die right on the page. Is that a hurt ego? I come early to my study, rushing right into the day. One day can seem like an hour. I mean, I’m glad in a total sense. I don’t know what I missed in the outside world. I’m always in danger of loving my stories too much. But here now, looking out the window after midnight, the ocean fired by moonlight, I am thinking of so many things. It might be better to go to bed.
https://medium.com/literally-literary/a-spoonful-of-affection-2366a2a4d614
['Harry Hogg']
2020-12-01 05:32:31.059000+00:00
['Poetry', 'Creativity', 'Prose', 'Affection', 'Writing']
How to Improve Communication Frequency With Your Remote Team
Virtual Leadership Challenge Why am I suggesting leading teams in different regions is even harder than leading teams within the same location? Because we already have these six common leadership challenges with managing in-person teams: Purpose : connecting the why of daily work for our teams so they are excited to learn and grow with our businesses. : connecting the why of daily work for our teams so they are excited to learn and grow with our businesses. Focus: prioritizing the highest reward work so our organizations and clients will be highly satisfied. prioritizing the highest reward work so our organizations and clients will be highly satisfied. Guide : leading teams from any performance state to a higher one, so excellence is a constant goal. : leading teams from any performance state to a higher one, so excellence is a constant goal. Change : developing our ability to be comfortable with being uncomfortable. Then lead, execute, and thrive in changing environments. : developing our ability to be comfortable with being uncomfortable. Then lead, execute, and thrive in changing environments. Growth : growing team members to become experts in their craft so they can become leaders in their field, then build more leaders. : growing team members to become experts in their craft so they can become leaders in their field, then build more leaders. Relationships: building capacity to work with others so we can influence the adoption of new ideas up, down, and across all levels of the organization. The added layer of complexity is in operating with teams outside of our physical surroundings, to achieve a purpose, focus, through guiding change, growth, and building relationships. The indicator of good virtual leadership A high number of quality messages transacted within a team can be a reliable indicator of an excellent virtual leader. A study analyzed the communication frequency between managers and teams in different locations. There was one manager who stood out. The team he managed was noted for their ‘excellent’ morale. The difference between him and other managers was the high quality of interactions he had with his team. Quality, in this case, translated to 32 contacts per week with his employees. An achievement that outshone the others. Remote coaching in a high-performing team My former coach moved to the west coast from Ontario in 2017. He continued to coach his dragon boat team (remotely) since then. When Gavin Maxwell contemplated leading the team from afar, he shared his plan with the team. I immediately thought, ‘you can’t coach a sports team virtually, I’ve never heard of that!’ It was a judgemental reaction I’m not proud of. And, fortunately for the future of his team’s success, he didn’t listen. Instead, as a world-class dragon boat coach, he talked to his team as much as 77 times in one week. He only visits two to three times per year for training camps and regattas. He uses Whatsapp religiously to stay connected with his assistant coaches and the team of ~50 paddlers. They talk about practice attendance, performance, team line ups, and personal highs and lows. They have earned countless gold, silver and bronze medals competing against the German, Chinese and Canadian top teams. Despite his not being on-site physically, their excellence has not diminished; in fact, it has improved! 77 times in a week; that’s 11 times per day. All to make sure that his message remains clear and the team stays focused. They get clarity, support, and encouragement daily. 
On July 20, 2019, this team won the Canadian Dragon Boat Championship in an intensely competitive arena. This major win proves remote leadership can work and high communication is the secret weapon. My communication frequency was too low Are you doing what I did when I realized the volume of messages both of these high performing leaders had with their teams? I counted my work instant messages because it was the quickest way to have a baseline measure of my communication frequency. Of course, I could count meetings, emails, and other types of interactions but I was looking for an easy indicator. For relationship context, I also counted the number of messages with my partner, best friend, and family. Here are the results: With my best friend in another province: 58 With my live-in partner: 40 With my parents in another province: 15 With my direct team members: 5–13 Faced with this terribly low team text messaging count, I immediately tried to defend my position. I reasoned this result was not bad because there were other correspondences that week via face-to-face, video, and group chats. A nagging doubt challenged me to demand more of myself. The literature showing a team with high morale performs better moved me to change. I launched a new personal goal to increase my digital interactions. Understanding that work relationships differ from personal, I set a 20/week target. I set the goal to communicate every day with team members out of my sight. I set aside short time slots each day for chatting. I learned about upcoming weekend plans, last evening’s activities, and their mood of the day. Throughout the daily conversations, I attempted to address the common leadership challenges through my written messages. My lessons and suggestions follow. For each practice, I’ll relate it back to those original six leadership challenges: purpose, focus, guide, change, growth, and relationships.
https://medium.com/better-humans/how-to-improve-communication-frequency-with-your-remote-team-a446e15e5bb5
['Vy Luu']
2020-10-04 04:27:10.129000+00:00
['Productivity', 'Leadership', 'Work', 'Startup', 'Business']
Expect Less: An Easy Hack to Create More
I never thought I could draw. I apparently had a knack for music and writing. But as far as I was concerned, all of the visual art genes were given to my sister. She was often coming up with new ideas and new creations. I would watch as she would create murals and sketches. Seemingly coming forth from her as fully realized concepts. I would then slink off to the next room to try and create something comparable. However, I never felt like my efforts turned out well. So I eventually gave up. Surrendered the visual arts to my sister. I left them alone for years. Knowing that anything I would create wouldn’t be good enough. Wouldn’t be worthy. I felt pangs of jealousy or desire when I would see art — whether in a museum or in a friend’s notebook. Sometimes I would even voice my frustration, saying “I wish I could _____” (draw/paint/sculpt…) And then a few years ago, on an impulsive whim, I bought a set of twelve Kimberly sketching pencils. A kit. The word brought with it encouragement. That all that I needed would be contained in the 9"x4" green box, which came with instructions explaining the scale of the pencils. Each grade of the graphite pencils. How a 4B would appear lighter on the page than a 2B. I took one of the harder pencils and started drawing. After not much time, I examined my result. I sighed. Disappointed, I thought to myself “Just buying the tools doesn’t make you an artist.” And promptly put them away. A year later, my partner at the time (a visual artist) suggested we have a drawing session together. A few minutes into our sketch-a-thon, he could see I was getting frustrated. “I’m just not good at this,” I told him. “You’re not giving yourself much of a chance,” he replied. “And you know if you draw a line you don’t like, you can always erase it and try again.” What?! It didn’t have to be perfect from the beginning? I went back to my sketch and tried out his piece of advice. If a curve didn’t come out as expected or if something was too dark, I would erase and try again. And it worked! I came out with a decent-looking quokka shape. “That’s good! Now just keep adding.” So I started to add more details, erasing here or there when things didn’t look right. I switched between graphite pencils for some shading. At the end of the hour, I had a decent drawing of my favorite marsupial: Quokka c. 2016 — Rachel Drane (me, obviously) I almost couldn’t believe my eyes. I had drawn something recognizable?! Something that was far from perfect, but that I loved all the same. I was giddy. Maybe I was an artist. Maybe I could draw. Why was this such a revelatory experience for me? And why hadn’t I been creating all that time?! We expect too much [W]e get into [creative work] because we have good taste. But there is this gap. For the first couple years you make stuff, it’s just not that good... But your taste… is why your work disappoints you. A lot of people never get past this phase, they quit. Most people I know who do interesting, creative work went through years of this. — Ira Glass When we think about creating, a lot of us have this romanticized view of making “worthy” work. That it has to be unique. Well-crafted. Postable. The idea of producing “unworthy” work can be scary. You could possibly look upon the work and see it as a physical manifestation of yourself and your own worth. I know that I’ve done (and sometimes still do) this. It could be threatening to one’s identity, even, especially if you consider yourself an artist or a creative. 
But within that fear lies the deathtrap for creation. No matter how talented and accomplished you are, if you’re afraid of creating something “sub-par,” you’ll eventually stop creating. At least stop creating in any meaningful way. Because, like Ira alludes to above, you have to create. If you want to create, you have to create. It’s in you. You just have to do. And — more likely than not — this probably means lowering your expectations of the result. Having lower, or maybe no, expectations gives you free reign to create whatever you’d like. Allows you to have fun with it. Opens up new possibilities. Maybe you get 20 sketches that you’re not crazy about. But there’s one aspect of one of the sketches that really sparks something inside of you. That one spark is worth it. Even if that doesn’t happen for you, you’re creating. And creating even one sketch that you think is just so-so is better than not creating. I remind myself, “Don’t let the perfect be the enemy of the good.” (Cribbed from Voltaire.) A twenty-minute walk that I do is better than the four-mile run that I don’t do. The imperfect book that gets published is better than the perfect book that never leaves my computer. The dinner party of take-out Chinese food is better than the elegant dinner that I never host. — Brené Brown If you expect a masterpiece every time you think about sitting down to create, you’re going to get disappointed. Maybe even discouraged from coming back. Or even worse, you might just never sit down in the first place. And then, you might start buying into the narrative that you’re not good enough. You’re just not that creative. Giving yourself license for that part of you to suffer. Because if you yearn for creativity, if you wish you could create, you are creative. You just are. But it does mean adjusting your expectations before sitting down (or standing up). Everyone creates differently The sculpture is already complete within the marble block… It is already there, I just have to chisel away the superfluous material. — Michelangelo I love this quote. And its sentiment has been echoed in the words of other artists throughout the centuries. Stephen King, for instance, refers to a story as being a fossil. It’s the author’s job to find and carefully extract that fossil. But I’m not sold that everyone creates in this manner. And that’s okay! I think what frustrated me when I was young was that I was seeing my sister’s process and then trying to emulate it. She seems to be able to have an idea for what she wants to draw or paint and then goes about achieving that. That’s not me. It’s taken me years to realize that I need to experiment a lot more. That I need to play and put things down on paper. Yes, I go through a lot of materials (paint, pages, and canvases), but I’m able to work toward something. Something that I maybe hadn’t really planned. But something that brings me some spark of joy. It’s every creative’s responsibility to discover how they are meant to create. This will take some trial and error, for sure. But I encourage you to not compare your process with anyone else. You can learn from others, sure. (In fact, I encourage you to!) But don’t seek to copy them precisely. Take what works for you, and leave the rest. The number one rule is: You need to create. (Am I starting to sound like a broken record yet?) Something, anything! Release the expectation that it’ll be perfect. That it will be a “great” work. That it’ll get you any Instagram love. Because, in all likelihood, it won’t. 
At least not at first. Release the expectation that you need to create in a certain way/time/style. Because everyone will be different. Don’t sacrifice years with your sketching pencils/notebooks/paints packed up in boxes. Don’t trick yourself into thinking you’re not able to create. Because, like I said before, if you want to create, you are a creative. And you owe it to yourself — and who knows, maybe even the world — to create. Still feeling stuck? Try something completely new. Break your routine. Doesn’t even have to be a “creative” endeavor. Exercise. Endorphins are real. And can be real magical. Sun lamp. This works better than caffeine for me sometimes. Especially in the winter, when I’m really lacking the motivation and energy to do anything. Consume. Read a book about your art. Follow art accounts. Go to museums. Do this all with an open mindset. (As opposed to a comparative one) Approach your endeavor as play. Somehow this perspective can instantly lower your expectations. And unleash some fun energy! Connect. I know this is easier said than done. So if joining a writer’s group or art class seems like too much for you, maybe even just finding one person you know who is also trying to create. Doesn’t even have to be the same medium. See if you can support one another. Be accountabili-buddies! Force it. The number one piece of advice given to writers is: Write every day. Even if you don’t feel like it. Even if you feel like you have nothing to say. Schedule it, if you have to. But no matter what, in the wise words of Nike and Shia LaBeouf:
https://medium.com/swlh/expect-less-an-easy-hack-to-create-more-cb929ecddc18
['Rachel Drane']
2020-03-11 16:59:34.228000+00:00
['Art', 'Practice', 'Motivation', 'Creativity', 'Lifehacks']
Data Science — A Door To Numerous Opportunities
Data Scientists have staying power in the marketplace and they make valuable contributions to their companies and to society at large. Today, Data Scientists have become more important than ever. The reason is that they can frame better business goals, make effective decisions, and identify opportunities more effectively. The scope of Data Science includes organizations in banking, healthcare, energy, telecommunications, e-commerce, and automotive industries, among many others. The main components involved in Data Science are organizing, packaging, and delivering data. Data Science overall is a multidisciplinary blend of data inference, algorithm development, and technology used to solve complex analytical problems. A Data Scientist must know what the output of the Big Data he or she is analyzing could be, and should clearly know how that output can be achieved with what is available. To achieve this, a Data Scientist is required to follow these steps:
Step 1: Collect large amounts of data from multiple sources
Step 2: Perform research on the complicated data available and frame questions that need to be answered
Step 3: Clean the huge volume of data to discard irrelevant information
Step 4: Organize the data into a predictive model
Step 5: Analyze the data to determine trends and opportunities and recognize weaknesses
Step 6: Produce data-driven solutions to conquer challenges
Step 7: Invent new algorithms to solve problems
Step 8: Build new tools to speed up work
Step 9: Communicate predictions from the analyzed data in the form of charts/reports/visualizations
Step 10: Recommend effective changes to fix the existing strategies
Data Science — The Future Lies Here
Help Companies Make Progress With Data
Many organizations collect data about customers, website interactions, and much more. But according to a recent study by Gemalto, 65% of organizations can’t analyze or categorize all the consumer data they store. And 89% of companies admitted the ability to analyze data effectively would provide them with a competitive edge in their industry. As a Data Scientist, you can help companies excel with the data they collect.
Better Career Opportunities
Among the most promising jobs of 2019 on LinkedIn, based on LinkedIn data, Data Scientist topped the list, with an average salary of $130,000. The ranking was based on five components: salary, career advancement, the number of job openings in the U.S., year-over-year growth in job openings, and widespread regional availability.
Astonishing Amount of Data Growth
We generate data daily, but never really think about it. According to one study: “Today, more than 5 billion consumers interact with data every day, and by 2025 the number will be 6 billion, i.e. 75% of the world’s population. In 2025, each connected person will have at least one data interaction every 18 seconds. Many of these interactions are because of the billions of IoT devices connected across the globe, which are expected to create over 90 ZB of data in 2025.” — The Digitization of The World
Data is at the heart of digital transformation, the lifeblood of the digitization process. Today, companies are leveraging data to improve customer experiences, open new markets, make employees and processes more productive, and create new sources of competitive advantage — working towards the future of tomorrow. From a global perspective, India is second only to the USA in recruiting Data Science professionals.
If you aspire to become a Data Scientist, enroll in our courses and transform your dreams into reality!
https://medium.com/cognitior-360/data-science-a-door-to-numerous-opportunities-4f0782dcfc51
['Cognitior Learning']
2019-08-28 14:15:57.652000+00:00
['Python', 'Big Data', 'Data Science', 'Data', 'Data Visualization']
6 Simple Tips From Top Freelancer Jaime Hollander
1. Commit the time There’s no overnight success. Not even in freelancing. If you start out first, the only thing you need to get better is time. You can always have an excuse for why it’s not the right time to focus on Upwork. But if you want to get going and finally start earning money there, you need to invest the hours — lots of it. Invest 30–60 minutes per day, every single day. Fix your profile, write good proposal drafts, look for jobs. Plan to send 20–40 proposals per day to get your name out. You want possible clients to know you. Spread the word of your work and use the chance to improve your proposal writing. It’s a numbers game. The more time you spend, the higher the chance of landing a job. Also, make sure to answer as quickly as possible after you get invited. This will give you the advantage of time over other freelancers. Steps to follow: Invest 30–60 minutes a day, every single day Plan to send 20–40 proposals daily Respond to invites immediately Don’t expect overnight results 2. Be willing to work for LESS If you start out and don’t have a reputation, you need to be ready to work for less. Even if you’re already a pro in this field. People won’t trust you if you don’t have any feedback but high rates. Being willed to work for less doesn’t mean competing with the lowest prices. It doesn’t involve working for $3 an hour, unable to finance your lunch, after working the whole morning. Instead, it means checking the bid range of a job offer and placing your bid in the lowest quarter. For example, Jaime came from a strong marketing background before she even started working as a freelancer. She had her Master’s and had worked for several years in different positions. But despite her extensive expertise in this field, she wasn’t expert-vetted on Upwork. Without a reputation, she was simply another freelancer focusing on copywriting and marketing texts. Because of that, she needed to get her name out. Instead of shoving her apparent skills in everyone’s faces, she decided to let her work speak for her. She started out offering her services at around $20 an hour. Far away from what she’d have charged previously. But enough to meet need’s end for now and (more important) to get her name out there. View it long-term. You aren’t on Upwork to cash in big time and be never seen again. You are on Upwork to offer people your skills. To help them solve their problems, using your skills. 3. Look for red flags Do you know what sucks? Agreeing on something, setting it in stone, and being overthrown afterward. On Upwork, it’s even worse. Save yourself from this. Listen to your gut As a freelancer, you need to listen to your gut. If something seems to be shady, don’t bother too much and skip it. Sure, it might be a bummer at this moment, but there will always be another job. Here’s a typical example: “Hey there! My name is XX, and I need someone who can write me an ebook about nutrition topics. The length should be between 10.000 and 15.000 words, and usually, I got the ebooks delivered within one to two weeks. Because it would be the first time working with you, I’d propose a rate of $0.01 per word. If I’m satisfied with your work, you’d be able to get up to $0.06 per word for the following projects.” This. Is. Proposal-bait. It’s wrong in many ways, and you definitely don’t want to apply here because: if you’re regularly working on other projects, one to two weeks is a tight time-frame. $0.01 per word is a straight rip-off, no matter where you live. 
it’s a false promise of a better rate the next time, just to drop you right after you’ve delivered. Promising, but incomplete job posts Again: Don’t bother too much and skip. Even if it’s offering an excellent salary or you think you’ve done something similar before. If the client doesn’t have the time to write a job post, they probably don’t take the time to explain their expectations. You’re a freelancer, not a babysitter. It’s your job to manage expectations As a freelancer, you aren’t only writing, you’re also managing a client’s expectations. Meaning you want to set them straight if they expect outrageous things. Unrealistic deadlines? Tell them. Unreasonable prices? Tell them. Ridiculous amendments? Tell them. They will keep pushing you around if you don’t. Tell the clients that you’re happy to work for them but not in any condition. 4. Know your limits Don’t promise others anything you can’t offer. Many self-help gurus advise you to grow in your task following the principle “Fake it until you make it”. What could be working for your confidence won’t work as a freelancer. It’s one thing to be confident about your skills and eager to tackle new challenges. But it’s an entirely different thing offering services without having the necessary knowledge or skills. Know your limits. Stretch them, if needed. But never ignore them. 5. Respond to invites professionally It’s already a good sign to get invited. Because getting asked means you hit their radar. It’s the first success and step in the right direction. But if you’ve already come this far, you don’t want to sabotage yourself. Remember, getting invited doesn’t mean you got the job. Instead, it means you’re in the inner circle of candidates. Here, you’d like to present yourself and your skills in the best way possible. As they probably don’t remember having invited you, start thanking them for their interest. Then, show them your appreciation and provide a good proposal, mentioning how you could help. Remember, the client comes first. It’s not about your skills. It’s about your clients’ problems. The same goes for denying invites. Be professional and let them know that you’re currently busy. Or, if needed, set their expectations straight and provide constructive feedback. 6. Be a subject matter expert Become a pro in what you do. Don’t only rely on Upwork bringing you the jobs to get money and better. Instead, focus on expanding your horizon out of Upwork too. Use other writing outlets such as a personal blog, Medium, or maybe the school magazine. Buy books and listen to podcasts. Never use a client to experiment If you want to do something vastly different, you don’t want to use your client as guinea pigs. Know your limits and try your new skills in a sandbox before you want to help people with it. Always aim for a challenge You’ve heard it a couple of thousand times: Aim for something slightly out of your comfort zone. By doing so, you won’t be lost entirely, but you also up-skill as you go. This is what you want to achieve. Here’s a real example: I’ve been practicing meditation for a few years now. In this time, I read multiple books on how one can implement the practice in daily life and how it can be improved. Last year, I browsed the job offers and stumbled across a ghostwriting project for an ebook about meditation. I was intrigued, willing to use my knowledge and experience to write an ebook about this topic. I applied and explained how I meditate myself, what experiences I have, and how it could help the client. 
Additionally, I proposed a relatively low price, as I didn’t have any experience before. The client bit and I got the job. I stepped out of my comfort zone to deliver the ebook and it was a win-win. This is the type of challenge you’d like to tackle. Out of your comfort zone but not entirely lost in Nirvana.
https://medium.com/inspired-writer/5-tips-a-top-freelancer-would-give-you-9651af66c022
['Tim Schröder']
2020-09-10 06:27:12.632000+00:00
['Writing Tips', 'Writing', 'Productivity', 'Business', 'Freelancing']
Here’s all one should know about God Class in Java
ILLUSTRATION OF GOD CLASS Let's say you are building a customer management application. Here's a snippet depicting the same. God Class example screenshot by the author This is a short example, but if we think it through, a customer may have many other fields. Isn't it obvious that the Customer class holds way too much information? What if a customer has several addresses in different cities: are they all going to be properties of this class? As the application grows, this class keeps growing too. Hence, after years of maintaining the application, we end up with a monstrous class containing thousands of lines of code. Now let's revisit why this is considered a bad coding practice that results in a code smell. It looks very nice to have access to all these pretty methods from one place, and you might claim that it's less work to change a single file. But because every change touches this one class, each change is more likely to introduce bugs. One bug is not a big problem, but bugs compounding is. Forget about solving them: it becomes very hard even to find two bugs that interact. And even if a bug is detected and resolved, can you imagine the pain of trying to test such an oversized class with so many methods? I can assure you that even if we write unit test cases, it won't be feasible to cover everything, because this class is missing abstraction, which results in exposing all of its members and thus violates the principles of OOP. Some more problems of having a "God Class": All unwanted members of such a class will be inherited by other classes during inheritance, violating the principle of code reusability. The object will occupy more space in heap memory, so it becomes expensive in terms of memory management and garbage collection. Tight coupling is commonly noticed around such a class, making the code difficult to maintain. Unwanted threads may be running in the background, so during parallel processing low-priority threads might block high-priority tasks from running. Why and when does this happen? Programmers face a conundrum of basic values. Every programmer has their own favorite formatting rules, but if they work in a team, then the team rules. A team of developers should agree upon a single formatting style, and then every member of that team should use that style. We want the software to have a consistent style. We don't want it to appear to have been written by a bunch of disagreeing individuals. Generally, this practice is noticed among programmers lacking experience in real-world programming and knowledge of the basics of the programming language, who therefore fail to develop the application from an architectural perspective. We will often hear common arguments like "But I need all this info in one easily-accessible place" in defense of the God Object. Strategic measures to prevent or resolve the God class or object: Firstly, let's focus on avoiding God objects or classes while developing. Following The Boy Scout Rule It's not enough to write the code well. The code has to be kept clean over time. We've all seen code rot and degrade as time passes. So we must take an active role in preventing this degradation. The Boy Scouts of America have a simple rule that we can apply to our profession: leave the campground cleaner than you found it. If we all checked in our code a little cleaner than when we checked it out, the code simply could not rot. The cleanup doesn't have to be something big.
Change one variable name for the better, break up one function that's a little too large, eliminate one small bit of duplication, clean up one composite if statement. 2. The only valid measurement of code quality is the count of WTFs (Works That Frustrate)!!!! Sources: Clean Code by Robert C. Martin 3. Keep it small!!!!! The first rule of functions is that they should be small. The second rule of functions is that they should be smaller than that. Even though we no longer have the physical constraints of small screens, given that we all use modern monitors and laptops, a function should hardly ever be 20 lines long, and it should do only one thing. FUNCTIONS SHOULD DO ONE THING. THEY SHOULD DO IT WELL. THEY SHOULD DO IT ONLY. Now, let's focus on fixing God objects that already exist. The only solution is to refactor the object so as to split related functionality into smaller, manageable pieces. For example, let's refactor the Customer.java from earlier (the screenshots from the original are summarized in the sketch below): Customer.java screenshot by the author Address.java screenshot by the author Now, if we need to change the address data of any customer, we only need to change the Address class, not the others.
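The original post shows the classes only as screenshots, which are not reproduced in this text. As a rough sketch of the split described above (field and method names are assumptions for illustration, not the author's exact code), the refactoring might look like this, with each class in its own file:

// Customer.java: the customer no longer carries raw address fields
public class Customer {
    private String name;
    private String email;
    private Address address; // address details now live in their own class

    public Customer(String name, String email, Address address) {
        this.name = name;
        this.email = email;
        this.address = address;
    }

    public String getName() { return name; }
    public Address getAddress() { return address; }
}

// Address.java: any change to how addresses work is contained here
public class Address {
    private String street;
    private String city;
    private String zipCode;

    public Address(String street, String city, String zipCode) {
        this.street = street;
        this.city = city;
        this.zipCode = zipCode;
    }

    public String getCity() { return city; }
}

Supporting several addresses per customer then becomes a change to Customer alone (for example, a List<Address> field) without touching unrelated code.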
https://herownhelloworld.medium.com/heres-all-one-should-know-about-god-class-in-java-e318acbb9717
['Shalini Singh']
2020-10-28 16:37:11.122000+00:00
['Coding', 'Java', 'Programming', 'Software Engineering', 'Software Development']
5 Quick and Easy Data Visualizations in Python with Code
Data Visualization is a big part of a data scientist's job. In the early stages of a project, you'll often be doing an Exploratory Data Analysis (EDA) to gain some insights into your data. Creating visualizations really helps make things clearer and easier to understand, especially with larger, high-dimensional datasets. Towards the end of your project, it's important to be able to present your final results in a clear, concise, and compelling manner that your audience, who are often non-technical clients, can understand. Matplotlib is a popular Python library that can be used to create your Data Visualizations quite easily. However, setting up the data, parameters, figures, and plotting can get quite messy and tedious to do every time you do a new project. In this blog post, we're going to look at 5 data visualizations and write some quick and easy functions for them with Python's Matplotlib. In the meantime, here's a great chart for selecting the right visualization for the job! A chart for selecting the proper data visualisation technique for a given situation Scatter Plots Scatter plots are great for showing the relationship between two variables since you can directly see the raw distribution of the data. You can also view this relationship for different groups of data simply by colour coding the groups, as seen in the first figure below. Want to visualise the relationship between three variables? No problemo! Just use another parameter, like point size, to encode that third variable, as we can see in the second figure below. All of these points we just discussed also line right up with the first chart. Scatter plot with colour groupings Scatter plot with colour groupings and size encoding for the third variable of country size Now for the code (the embedded gists aren't reproduced in this text; sketches of the helper functions appear after the conclusion). We first import Matplotlib's pyplot with the alias "plt". To create a new plot figure we call plt.subplots(). We pass the x-axis and y-axis data to the function and then pass those to ax.scatter() to plot the scatter plot. We can also set the point size, point color, and alpha transparency. You can even set the y-axis to have a logarithmic scale. The title and axis labels are then set specifically for the figure. That's an easy-to-use function that creates a scatter plot end to end! Line Plots Line plots are best used when you can clearly see that one variable varies greatly with another, i.e. they have a high covariance. Let's take a look at the figure below to illustrate. We can clearly see that there is a large amount of variation in the percentages over time for all majors. Plotting these with a scatter plot would be extremely cluttered and quite messy, making it hard to really understand and see what's going on. Line plots are perfect for this situation because they basically give us a quick summary of the covariance of the two variables (percentage and time). Again, we can also use grouping by colour encoding. Line charts fall into the "over-time" category from our first chart. Example line plot Here's the code for the line plot. It's quite similar to the scatter above, with just some minor variations in variables. Histograms Histograms are useful for viewing (or really discovering) the distribution of data points. Check out the histogram below where we plot the frequency vs IQ histogram. We can clearly see the concentration towards the center and what the median is. We can also see that it follows a Gaussian distribution.
Using the bars (rather than scatter points, for example) really gives us a clear visualization of the relative difference between the frequency of each bin. The use of bins (discretization) really helps us see the "bigger picture", whereas if we used all of the data points without discrete bins, there would probably be a lot of noise in the visualization, making it hard to see what is really going on. Histogram example The code for the histogram in Matplotlib is described below (a sketch appears after the conclusion). There are two parameters to take note of. Firstly, the n_bins parameter controls how many discrete bins we want for our histogram. More bins will give us finer information but may also introduce noise and take us away from the bigger picture; on the other hand, fewer bins give us a more "bird's-eye view" and a bigger picture of what's going on without the finer details. Secondly, the cumulative parameter is a boolean which allows us to select whether our histogram is cumulative or not. This is basically selecting either the Probability Density Function (PDF) or the Cumulative Distribution Function (CDF). Imagine we want to compare the distribution of two variables in our data. One might think that you'd have to make two separate histograms and put them side-by-side to compare them. But there's actually a better way: we can overlay the histograms with varying transparency. Check out the figure below. The Uniform distribution is set to have a transparency of 0.5 so that we can see what's behind it. This allows us to directly view the two distributions on the same figure. Overlaid Histogram There are a few things to set up in code for the overlaid histograms. First, we set the horizontal range to accommodate both variable distributions. According to this range and the desired number of bins, we can actually compute the width of each bin. Finally, we plot the two histograms on the same plot, with one of them being slightly more transparent. Bar Plots Bar plots are most effective when you are trying to visualize categorical data that has few (probably < 10) categories. If we have too many categories then the bars will be very cluttered in the figure and hard to understand. They're nice for categorical data because you can easily see the difference between the categories based on the size of the bar (i.e. magnitude); categories are also easily divided and colour coded too. There are 3 different types of bar plots we're going to look at: regular, grouped, and stacked. Check out the code below the figures as we go along. The regular barplot is in the first figure below. In the barplot() function, x_data represents the ticks on the x-axis and y_data represents the bar height on the y-axis. The error bar is an extra line centered on each bar that can be drawn to show the standard deviation. Grouped bar plots allow us to compare multiple categorical variables. Check out the second bar plot below. The first variable we are comparing is how the scores vary by group (groups G1, G2, etc.). We are also comparing the genders themselves with the colour codes. Taking a look at the code, the y_data_list variable is now actually a list of lists, where each sublist represents a different group. We then loop through each group, and for each group we draw the bar for each tick on the x-axis; each group is also colour coded. Stacked bar plots are great for visualizing the categorical make-up of different variables. In the stacked bar plot figure below we are comparing the server load from day to day.
With the colour-coded stacks, we can easily see and understand which servers are worked the most on each day and how the loads compare to the other servers on all days. The code for this follows the same style as the grouped bar plot. We loop through each group, except this time we draw the new bars on top of the old ones rather than beside them. Regular Bar Plot Grouped Bar Plot Stacked Bar Plot Box Plots We previously looked at histograms, which are great for visualizing the distribution of variables. But what if we need more information than that? Perhaps we want a clearer view of the standard deviation? Perhaps the median is quite different from the mean and thus we have many outliers? What if there is some skew and many of the values are concentrated to one side? That's where boxplots come in. Box plots give us all of the information above. The bottom and top of the solid-lined box are always the first and third quartiles (i.e. 25% and 75% of the data), and the band inside the box is always the second quartile (the median). The whiskers (i.e. the dashed lines with the bars on the end) extend from the box to show the range of the data. Since the box plot is drawn for each group/variable, it's quite easy to set up. The x_data is a list of the groups/variables. The Matplotlib function boxplot() makes a box plot for each column of y_data, or each vector in the sequence y_data; thus each value in x_data corresponds to a column/vector in y_data. All we have to set then are the aesthetics of the plot. Box plot example Box plot code Conclusion There are your 5 quick and easy data visualisations using Matplotlib. Abstracting things into functions always makes your code easier to read and use! I hope you enjoyed this post and learned something new and useful.
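The gists embedded in the original post are not included in this text. As a reconstruction of the scatter and line helpers described above (function names, colours and defaults are assumptions, not necessarily the author's exact code), a minimal sketch might look like this:

import matplotlib.pyplot as plt

def scatterplot(x_data, y_data, x_label="", y_label="", title="", color="r", yscale_log=False):
    # Create the figure and axes
    fig, ax = plt.subplots()
    # s = point size, alpha = transparency
    ax.scatter(x_data, y_data, s=10, color=color, alpha=0.75)
    if yscale_log:
        ax.set_yscale('log')
    # Title and axis labels set specifically for this figure
    ax.set_title(title)
    ax.set_xlabel(x_label)
    ax.set_ylabel(y_label)

def lineplot(x_data, y_data, x_label="", y_label="", title=""):
    # Same pattern as the scatter helper, with ax.plot() instead of ax.scatter()
    fig, ax = plt.subplots()
    ax.plot(x_data, y_data, lw=2, color='#539caf', alpha=1)
    ax.set_title(title)
    ax.set_xlabel(x_label)
    ax.set_ylabel(y_label)

Called as, say, scatterplot(gdp, life_expectancy, "GDP", "Life expectancy", "GDP vs life expectancy"), each helper produces a labelled figure end to end.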
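Similarly, here is a hedged sketch of the histogram helpers; the n_bins and cumulative parameters mirror the description above, while everything else is an assumption:

import numpy as np
import matplotlib.pyplot as plt

def histogram(data, n_bins=10, cumulative=False, x_label="", y_label="", title=""):
    fig, ax = plt.subplots()
    # cumulative=True turns the PDF-style view into a CDF-style view
    ax.hist(data, bins=n_bins, cumulative=cumulative, color='#539caf')
    ax.set_title(title)
    ax.set_xlabel(x_label)
    ax.set_ylabel(y_label)

def overlaid_histogram(data1, data2, n_bins=0, data1_name="", data2_name="", x_label="", y_label="", title=""):
    # Shared horizontal range so both distributions use the same bins
    data_range = [min(min(data1), min(data2)), max(max(data1), max(data2))]
    if n_bins == 0:
        bins = 10  # fall back to a default bin count
    else:
        bin_width = (data_range[1] - data_range[0]) / n_bins
        bins = np.arange(data_range[0], data_range[1] + bin_width, bin_width)
    fig, ax = plt.subplots()
    ax.hist(data1, bins=bins, color='#539caf', alpha=1, label=data1_name)
    ax.hist(data2, bins=bins, color='#7663b0', alpha=0.5, label=data2_name)  # semi-transparent overlay
    ax.set_title(title)
    ax.set_xlabel(x_label)
    ax.set_ylabel(y_label)
    ax.legend(loc='best')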
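And a sketch of the stacked bar helper, looping through the groups and drawing each new set of bars on top of the running total (the grouped variant is the same loop but with per-group x offsets and no bottom argument); again, names and colours are illustrative:

import numpy as np
import matplotlib.pyplot as plt

def stackedbarplot(x_data, y_data_list, colors, y_data_names, x_label="", y_label="", title=""):
    fig, ax = plt.subplots()
    bottom = np.zeros(len(x_data))
    for i, y_data in enumerate(y_data_list):
        # Each category is drawn on top of the categories already plotted
        ax.bar(x_data, y_data, bottom=bottom, color=colors[i], label=y_data_names[i], align='center')
        bottom = bottom + np.array(y_data)
    ax.set_title(title)
    ax.set_xlabel(x_label)
    ax.set_ylabel(y_label)
    ax.legend(loc='upper right')

For the server-load example, x_data would be the days, y_data_list one list of loads per server, and colors one colour per server.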
https://towardsdatascience.com/5-quick-and-easy-data-visualizations-in-python-with-code-a2284bae952f
['George Seif']
2019-05-04 11:59:37.974000+00:00
['Python', 'Data Science', 'Towards Data Science', 'Data Visualization', 'Visualization']
Before We Had Google, There Was Googie Architecture
Before We Had Google, There Was Googie Architecture Few things are more representative of the modern zeitgeist than Google. But there was a time when the same thing was said about Googie architecture. Between the late 1950’s and early 1960’s Googie was the undisputed “look of tomorrow.” While Kennedy spoke of Man going to the Moon, it was Googie-style architecture that made the Space Age come to life via cantilevered elements, parabolic boomerang shapes, bold colors and whiz-bang angles. Famous examples of Googie Architecture are the Theme Building at LAX, the Space Needle in Seattle, Space Mountain at Disney and the TWA Terminal at JFK Airport. But Googie design elements became directly accessible to the everyday consumer at places like the Original McDonald’s restaurants. You can see the Googie influence in both the architecture and cars in the parking lot. The mecca of Googie-style was Southern California, and architects like Eldon Davis and Stanley Metson brought the space-age to the every-day. Gas stations, bowling alleys, movie theaters, motels and restaurants like Norm’s, Pann’s, Chips’ and Googie’s (the style’s namesake), became the iconic epicenters of a future that was already here today. Googie was pure, eye-sugar escapism. At a glance, we could escape the mundane with the rush of the future. To the business-owner, Googie meant profits. From a design standpoint, Googie combined the groundbreaking architectural design use of cantilevered concrete, steel and plate-glass popularized by Frank Lloyd Wright — with the abstract art of painter Wassily Kandinsky. Frank Lloyd Wright’s Fallingwater, designed in 1935. Those iconic Googie signs are like the 3D rendering of a Kandinsky painting. You will see the same bold, primary color patterns, shapes and raygun zaniness. Points, 1920, by Wassily Kandinsky. Note the Golden Arches! The true impetus behind Googie-style was the boom of American car-culture. Even in the 1950’s, L.A. was a driving city. And the eye-catching Googie designs and bright neon helped businesses entice patrons from the road. ‘Norms’ by Ashok Sinha Successful Advertising and Commerce helped move architectural design choices and car culture in a similar direction as the entire Googie aesthetic emerged. And cars of the 1950’s became the rocket ship of the Everyman. 1959 Chevy There was a moment in time when you could drive to Disneyland Anaheim for vacation in your ’59 Chevy and be utterly immersed in a Googie world of the future. Gas up at a Googie filling station, grab lunch at a Googie burger joint, go to Tomorrowland at Disney, catch a movie at the Googie cinema, go bowling at Googie Lanes, then head back for a swim at the Googie Motel. The Future was awesome. Imagining a life inside a futuro-fantasy is not entirely unlike our world today. Only now, we’re immersed in Google’s brand of escapism. The entire world at our fingertips on Google Search, Google Maps, YouTube, Gmail — all on a Google Smartphone.
https://briandeines.medium.com/before-there-was-google-there-was-googie-6146bd973509
['Brian Deines']
2020-01-13 21:21:30.236000+00:00
['Architecture', 'Art', 'Google', 'Design', 'Tech']
Stop Trying To Be Famous, Popular, or A Cash Machine
I’m big on blocking. Life’s too short for slop in my feed. I excise rage aribeters that offer little recourse or solutions. I block those who are abusive, petty, and cruel. Hate-mongers and hate-readers. Complainers that don’t create. People who call me a cunt in the comments or try to tell me how to do the work I’ve been doing successfully for decades. And while I’ve been quietly muting a disturbing amount of articles churned out by those I refer to as Derivative Peddlers, my patience is paper thin. I’ve written about our cult of more, our desire to be big when it’s just as noble to play small — some of which has played out in pedestrian writing advice and sloppy self-help, which, much to my chagrin, has become pervasive. I’m willing to bypass writing advice that reduces one’s work to a sixth-grade book report, but I can’t stomach people that peddle online homogeny. Let me explain. It’s the equivalent of someone giving you a playbook for painting. Say you’re an artist who barely survived the Gothic art of the Middle Ages. There exists a patronage culture in Florence where the Church and the Medicis decreed taste. Those who had accumulated power and wealth defined genius — told you there was one acceptable way to paint and sculpt. Imagine if everyone followed the rules, adhered to the guidelines and created art that was a bland photocopy of a brilliant original. How would we have borne the two disparate styles and personalities of DaVinci and Michelangelo? How could we revere Caravaggio’s chiaroscuro, his artistic realism and Titian’s lush canvases and idealism? Why would you adhere to single playbook that purported to define the whole of art? A set of guidelines that don’t allow for risk and individual identity? If we didn’t have artists who broke convention and form, art wouldn’t evolve. And here we are, centuries later, listening to self-proclaimed experts posing as the Medicis. They’ve made a little money online and now they’re telling you how to paint, but what they’re really doing is cultivating homogeny. Holding your head underwater while you drown in the sea of same. They tell you how to compose titles for your stories and how to write your stories. They warn you against veering from what they’ve defined as acceptable formatting. But you have to trust them because they know things. They’ve cracked the algorithm code and here are their thirteen screenshots of their income to prove it. As if income defines an artist’s talent. By that logic, hedge fund managers would be the Michelangelos of the 21st Century. As if income correlates to the depth and power a piece of writing has over the reader. As if more means better when all it means is…more. Let’s not ferret out hidden meanings in simple definitions. The Derivative Peddlers want you to copy them and your writing (and by extension, yourself) is a failure if you don’t. You’re a loser because you’re not making $10,000 a month. You’re a failure because you’re not cranking out 10,000 garbage words a day. You’re a weirdo if you don’t follow the hive because the hive demands you never fall out of line lest you be ostracized and made fun of in their petty stories and vague sub-tweets. Listen to us, they implore, at the expense of you. All they’re doing is reducing your work to the equivalent of junk food — palatable, easy, and similar to the ten other brands of cheese doodles on the rack. 
While I believe in the power of the collective and community when it comes to social activism, economic equality and prosperity, when it comes to art I worship at the altar of own your weird. In art, the individual may be informed by the community, but their art shouldn’t be dictated by it. Otherwise, rule-breakers would be shunned and smothered. We wouldn’t have artists birthing new genres, schools, and styles because every motherfucker will have a vocabulary of 100 words and copying someone else’s work rather than interpreting and evolving it. We would have no Caravaggio, Samuel Beckett, Virginia Woolf, Gabriel Garcia Marquez (who was influenced by Woolf), no Ben Marcus, no Kelly Link, no modernist, post-modernist art and fiction. No auto-fiction, no abstractism. No impressionism and expressionism. We would be stuck in the fucking Middle Ages painting our pedagogical, baby-man Christs. I learn the rules of my craft to break them not to be beholden by them. If you want to make money from your writing, fine, pitch and sell stories that adhere to a publication’s guidelines. There’s nothing wrong with that — I sell work that’s in my voice but perhaps not my style to earn a paycheck. We live in the real and the real requires you to cut checks to people on a regular basis. But there’s a difference between the work I sell and the work I create because it fuels me, helps me interpret the world around me. Both give me joy, but I’m only constrained by the former, not the latter. And yes, you can make money from your art — binaries are boring — but if you’re making money by limiting your art or reducing it to that which looks and walks and talks like every other kid peddling their income reports on the block, how are you different? If your words and form are locked in a cage, how do you grow? Constraints are cruel and I refuse to abide by them. If that means I don’t make as much money as the kid down the block, I’m fine with that. If it means a smaller group of people read my work, I’m fine with that. I have zero interest in optimizing my fucking titles. My goal has never been to be mass-market — it’s been to put people’s heart on pause when they read my work. To move them. Inspire them to see the world a different way. To feel part of something larger than both of us. My goal has always been to test the limits of language by using words as weapons and shields. I could write to conform, to pander, but why would I? To publish my income reports? To tout how famous I am and then get more people calling a cunt in the comments? It took me decades to detangle my work from the results of the work. I don’t need to be popular or famous when I’m allergic to swarms of people. It took me decades to feel confident in saying I’m an exceptional writer, but I will forever have room to grow. And it took me decades to be okay with not being part of the peanut-crunching pack. It took me a long time to admit one of my favorite words in the English language is motherfucker. Also, ossify. Don’t be afraid to be yourself in your work. Experiment with form, voice, and style. Play. Mess up. Make music. Break things and wreck the joint. Approach your work as if you’re a child filled with firsts and wonder. Don’t worry about being likeable or relatable. Don’t swim in the sea of same and be surprised when you drown. Write terrible stories and rewrite them years later. Be okay with the fact that you’re not a content marketer — you’re a storyteller. Write like you’re holding your still-beating heart in your hands.
https://medium.com/falling-into-freelancing/stop-trying-to-be-famous-popular-or-a-cash-machine-1a8904c89860
['Felicia C. Sullivan']
2020-08-04 14:30:39.599000+00:00
['Freelancing', 'Creativity', 'Life Lessons', 'Culture', 'Writing']
Real Artificial Intelligence: Understanding Extrapolation vs Generalization
Source. Image free to share. Real Artificial Intelligence: Understanding Extrapolation vs Generalization Stop confusing the two Machine learning models don’t need to be intelligent — most of their applications entail performing tasks like recommending YouTube videos or predicting a customer’s next move. It is important to understand the difference between extrapolation and generalization/interpolation to understand what it really means for a model to be intelligent and to avoid a common issue of confusing the two, which is often the root cause of the implementation failures of many models. Generalization is the entire point of machine learning. Trained to solve one problem, the model attempts to utilize the patterns learned from that task to solve the same task, with slight variations. In analogy, consider a child being taught how to perform single-digit addition. Generalization is the act of performing tasks of the same difficulty and nature. This may also be referred to as interpolation, although generalization is a more commonly used and understood term. Created by Author. Extrapolation, on the other hand, is when the model is able to obtain higher-dimensional insights from a lower-dimensional training. For instance, consider a first grader who is taught single digit addition, then presented with a multi-digit addition problem. The first grader thinks, “okay, so when the units digit adds to larger than ten, there is a tens component and a ones component. I take that into account and add a one to the tens column to account for that.” This is, of course, the key insight around which arithmetic is based around, and if you thought hard enough about what it means to add and understood place value, you could figure it out. Yet most first graders never realize this, rarely discovering how to carry over on their own. Created by Author. It’s important to realize that extrapolation is hard. Even many humans cannot succeed at extrapolation — indeed, intelligence really is a measure of being able to extrapolate, or to take concepts explained in a lower dimension and being able to apply them at a higher one (of course, dimension as in levels of complexity, not literally). Most IQ tests are based around this premise: using standard concepts in ways only a true extrapolator could understand. In terms of machine learning, one example of extrapolation can be thought of as being trained on a certain range of data and being able to predict on a different range of data. This may be easy with simple patterns, such as simple positive number/negative number or in a circle/not in a circle classifications. Created by Author. Yet the ability for models to extrapolate on more complicated patterns is limited with traditional machine learning methods. Consider, for example, the checkerboard problem, in which alternating squares on a two-dimensional plane are colored as either 0 or 1. To a human, the relationship is clear and one could go on coloring an infinite checkerboard, given only the rules of a smaller, finite x by x checkerboard. While the checkerboard problem is defined by a set of rules for humans (no two squares who share a side can be the same color), mathematically it is defined as such: Created by Author. This is a very unintuitive way of thinking, and most models do not think to generate rigorously mathematical, extrapolative definitions. Instead, regular algorithms essentially attempt to split the feature space geometrically, which may or may not work, given the task. 
Unfortunately, in this case, they do not extrapolate to coordinates outside of the range they were trained on. Even many neural networks fail at this task. Results of KNN & Decision Tree when trained on 20x20 checkerboard and told to predict 40x40 space. Created by Author. Sometimes, however, extrapolation is not as much about recognizing complex relationships as it is about finding a smart, extrapolate-able way of carrying out the task on foreign ranges. For instance, the checkerboard problem can be solved by drawing diagonal lines. This solution has been observed to emerge with, for example, neural network ensembles. Neural Network Result from "Competitive Learning Neural Network Ensemble Weighted by Predicted Performance". Created by Author. A common argument you'll hear in the debate over machine intelligence is that "machines can only do one thing really well." This is indeed the definition of interpolation or generalization: performing tasks inter, or within, a predefined set of rules. Extrapolating, by contrast, requires such a solid understanding of the concepts inter that they can be applied extra, or outside the taught region. Rarely can current machine learning models extrapolate reliably; usually, the ones that show promise do so on problems that are geometrically easier to extrapolate and fail at others. All artificial intelligence methods are interpolative by nature, and it's debatable whether it is even possible to artificially construct an extrapolative ("intelligent") algorithm. Extrapolation is seldom the goal of modelling or machine learning, but it is often confused with generalization. The most obvious example is linear regression, where predictions far beyond the training range are routinely taken at face value, and the problem becomes even less noticeable when there are multiple dimensions at play (multiple regression). Another example of extrapolation is when companies train a model on outlier-free data and then implement it in real life, where outliers are much more abundant and cannot simply be ignored. This kind of mismatch often goes undetected and may be a big reason why your model is not performing up to its test results in real life. When there is a disparity between the data the model was trained on and the data the model is expected to predict on, you are likely asking the model to extrapolate. XKCD. Sometimes even humans are bad extrapolators. Image free to share. Models generally cannot extrapolate well, be it on measures of symbolic intelligence or in real applications. It's important to make sure that your model is not being confronted with an extrapolation task: current algorithms, even ones as complex and powerful as neural networks, are simply not designed to perform well on extrapolation tasks. A decent way to check for an extrapolation task is to plot out the distributions of each column in the training and testing sets, then see whether the testing set is significantly incompatible with the training one. Until we can create a machine learning algorithm that is capable of extrapolating generally across all problems, similarly to how the concept of a neural network (with some varying architectures) can single-handedly address almost any generalization problem, models will never truly be 'intelligent' and will keep performing only within the narrow scope in which they are trained.
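The referenced figures are not included in this text. As a hedged illustration of the failure mode described above (a sketch of the general idea, not the author's original experiment), one can train a k-nearest-neighbours classifier on a 20x20 checkerboard and then score it on a neighbouring region it has never seen; the labelling rule (floor(x) + floor(y)) mod 2 is the standard checkerboard colouring and is assumed here:

import numpy as np
from sklearn.neighbors import KNeighborsClassifier

def checkerboard_labels(points):
    # Standard checkerboard colouring of the unit squares
    return ((np.floor(points[:, 0]) + np.floor(points[:, 1])) % 2).astype(int)

rng = np.random.default_rng(0)

# Training data: points the model sees, all inside the 20x20 board
X_train = rng.uniform(0, 20, size=(20000, 2))
y_train = checkerboard_labels(X_train)

# Interpolation test: fresh points from the same 20x20 region
X_interp = rng.uniform(0, 20, size=(5000, 2))
# Extrapolation test: points from the unseen region between 20 and 40
X_extrap = rng.uniform(20, 40, size=(5000, 2))

model = KNeighborsClassifier(n_neighbors=5).fit(X_train, y_train)
print("interpolation accuracy:", model.score(X_interp, checkerboard_labels(X_interp)))
print("extrapolation accuracy:", model.score(X_extrap, checkerboard_labels(X_extrap)))
# Typically near-perfect inside the training range and roughly chance level outside it,
# because the classifier only carves up the region it was shown.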
https://towardsdatascience.com/real-artificial-intelligence-understanding-extrapolation-vs-generalization-b8e8dcf5fd4b
['Andre Ye']
2020-06-26 17:08:39.359000+00:00
['Machine Learning', 'Data Science', 'Artificial Intelligence', 'AI', 'Data Analysis']
Helping Guests Make Informed Decisions with Market Insights
Two common decisions that our guests are making are: Should I book now for better availability or later for better flexibility? Which of the listings should I book? As the service provider, we have broad views of the entire market and guest behaviors that individual guests do not necessarily have. This information usually provides helpful insights to solve guests’ puzzles. Market insights are one channel where we interact with our guests at various stages of the booking flow. We provide dynamically-generated information to assist our guests in planning their trips. This information includes market and listing availability trends, supply, pricing discounts, community activities, etc. It is a critical component of the booking flow and has demonstrated its utility to the Airbnb community by enabling a larger variety of people to make wiser booking decisions and belong everywhere. Market Insights Figure 1 illustrates the architecture of our Market Insight service. As a guest interacts with the website, the Market Insight backend system talks with the Search and Pricing services to collect market availability and pricing information. It queries the key-value store for data that are relevant to the search or listing view—along with user information—to generate candidate insights in real-time. Then it ranks the insights according to their values to the guest and powers the front-end on the final insights to display. We are determined to generate market insights that are genuine, informational, and timely. When a guest types in a search query that consists of location, dates, guest counts, and possibly room type and additional amenity constraints, the Search backend system retrieves available homes. The frontend automatically zooms in the map to a level that best covers the guest’s interests with enough context. For one such map view, the market insights server aggregates the exact number of available places, and warns the guest if the number is low. Similarly, we have an insight on the percentage of available places. To support heterogeneous types of insights, the server retrieves two major sources of data from the key-value store. Stream data that are mostly factual information with limited counting and bookkeeping that are almost real-time, such as the number of unique views of a listing during the past N days. The data is served through an internal system. Aggregated data that typically requires joining with and inferencing from other data sources, and they typically have up to a couple of days of delay. We build data pipelines using Airflow to streamline the data generation process and monitor their daily progress. We use Spark for large-scale data aggregation and use Hive for data storage. The “Rare Find” insight is an example of using aggregated data. As the service platform, we have more data than individual guests. For instance, Airbnb keeps track of how frequently homes are booked. This information is a good indication of popularity. When a guest views a place that is rarely vacant, we remind our guests with the following insight. This insight is supported by two data pipelines — one that aggregates availability information of an individual listing and the other for the availability of all markets. A “Rare Find” insight is for listings that have a high long-term availability ratio compared against the market X percentile — a value that trades off insight value and scarcity that we determined by live experiments. 
Both of the data pipelines are updated on a daily basis so our server will deliver accurate and timely insight to our guests. Since Market Insights’ inception in 2015, we have gradually added more insight types. Our work has increased booking conversion by more than 5%. With a large Airbnb community and our extension to more verticals, our work compounds in a substantial way for company growth. Personalization Personalization has been an evolving theme for many service platforms, and Airbnb has been a pioneer in adopting new technology, applying machine learning to personalize search results and detect host preferences. We have taken several steps in personalizing market insights. It happens quite often that multiple insights are eligible, but we are only able to show one each time. Our current strategy is to use a deterministic and static vetting rule. However, guests parse information differently. For a guest who is sensitive to time, an insight reminding her can be very effective in getting a trip booked soon. Yet, for a last-minute traveller, the number or percentage of search results may sound more informative, providing them with signals to book as availability is running low. On a listing detail page, the mentality of a listing is “usually booked” may have different implications than “10 others are looking at this place” for various people. Not to mention that there may be sophisticated guests who would like to make decisions purely based on their chemistry with the listings, thus preferring no market insights at all. In 2016, we have added extensive logging in our booking flow, about which types of insight guests see, and how they react when seeing these insights, such as how much longer they spend on a listing page, whether they wishlist a listing, make a booking request, or go back to search. We implemented a couple of randomization strategies, equalizing the odds of impression for every eligible insight. Showing different and increased variety of insights helps us acquire data to understand user preferences. After collecting user interaction data, we join it with listing information, such as occupancy rate and number of views, along with search parameters, such as trip lead days and length, and perform data analysis. Our goal is to learn smart insight vetting rules that maximize desired outcome. We believe guests are more likely to book with advanced user experience, so we created a utility function that evaluates their progress. For example, requesting to book is worth one point and contacting host is worth half a point, etc. We segment guests based on guest features, such as the number of searches and bookings they have done in the past, and come up with a insight vetting rule for each user segment. We are experimenting on our hypothesis that personalized insights deliver improved user experience and in return improves booking conversion.
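The pipelines themselves are not shown in the post. As a purely hypothetical sketch of the "Rare Find" eligibility rule described above (the percentile value, the names, and the data shapes are all assumptions), the daily batch check could reduce to something like this:

import numpy as np

RARE_FIND_PERCENTILE = 90  # the "market X percentile" tuned via live experiments; value assumed

def rare_find_listings(availability_ratio_by_listing, percentile=RARE_FIND_PERCENTILE):
    # Market-wide threshold computed once per daily pipeline run
    ratios = np.array(list(availability_ratio_by_listing.values()))
    threshold = np.percentile(ratios, percentile)
    # A listing qualifies when its long-term availability ratio clears the market percentile
    return {listing_id for listing_id, ratio in availability_ratio_by_listing.items() if ratio >= threshold}

# Example with made-up numbers
market = {"listing_a": 0.12, "listing_b": 0.87, "listing_c": 0.95}
print(rare_find_listings(market))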
https://medium.com/airbnb-engineering/helping-guests-make-informed-decisions-with-market-insights-8b09dc904353
['Peng Dai']
2017-07-11 18:48:10.371000+00:00
['Data Engineering', 'AI', 'Data', 'Machine Learning']
Travelling to China and 14-Day Quarantine
Airport Departure Advice from a weary traveller: Constantly check for changes in your flight status, and updates on flight paths being cancelled. Read on for why this is critical. Oh it was meant to be a joyous occasion. I was finally on my way back to my significant other. Documents in hand. How I was wrong. Again. My cab was pulling into Heathrow Terminal 5 and immediately I sensed something was off. There wasn’t a car or person in sight. It quickly became apparent the terminal was closed. I asked the cab driver to politely stay put as I investigated. All flights from terminal 5 had been re-allocated to different terminals. Great! I wish my airline had informed me of this. I know some of the blame falls on me too, I should have been more vigilant keeping up with my flight status etc, but I still feel this is something they should have made their customers aware of. As I hopped back into the cab, I see a couple arriving in their taxi. I quickly told them to keep their cab, “The terminal is closed! Don’t let your cab leave, you will be stranded!”. I’m not going to lie, I felt like a superhero saving that couple from disaster. For those who don’t frequent Heathrow often, the terminals are miles apart. It would definitely put a damper on your day. Although, I'm sure it wasn't that dramatic. Eventually, I arrived at Terminal 2. The diversion had subsequently left me short on time. Naturally, there was a monstrosity of a queue to get into the terminal. Single file lane for every passenger entering the building, with one officer checking everyone's boarding pass. Typical England. Eventually, I made it to the check-in desk. Hallelujah. Pleasantries aside, the check-in assistant got to it. Seconds went by, seconds turned to minutes. “What's happening now”, I sighed under my breath. The airline couldn’t seem to find me in their system. I also noticed the flight number and departure time were slightly different from my booking info. Alas, they found me in their system. There’s just one teeny-weeny problem. The second leg of my ticket had been cancelled. It had been for 7 days already. All flights from that specific destination to China had been cancelled for the foreseeable future. I started to get agitated. Why had the airline not informed me of this? Why was I only finding out about this now? Why was there nothing on the airline website mentioning this? For my troubles, I received a standard-issue apology, no explanation and a voucher. No refund. Back home I went dejected and heartbroken. If you happen to run into a similar situation, depending on local regulations, you may be legally entitled to a refund. Subsequently, I have taken the airline to small claims court via an online agency that handles such matters. I mentioned earlier in the article to fly direct. This is why. It saves you having to worry about cancellations across multiple leg journeys. I’m just thankful I wasn’t sitting in that poorly rated terminal hotel in the middle of nowhere when the flight path to China was cancelled. Count your blessings. Thanks to my lovely family, we can fast forward 24 hours, and I was off to China again, direct to Shanghai this time around and minus a small fortune. Before I carry on, I think it is important to note a few things regarding check-in: First, they ask you to scan and fill out a medical questionnaire using Wechat. You will need to show this on arrival. If you don’t have Wechat (Which the Australian next to me didn’t), they do have workarounds, albeit slow. So don’t fret. 
Secondly, and of even greater importance: if you have a second leg within China, you need to make sure it is booked for two weeks later. My ticket was for London–Shanghai–Beijing. The Beijing leg was on the same day as the Shanghai arrival. You will not be able to make same-day connections, let alone 2- or 3-day connections. The port of arrival is where you will quarantine. The airline, though unaware of this requirement, still graciously changed my Beijing flight to the appropriate date. Last but not least, prepare for the unexpected, and don't be afraid to ask questions. I fear a lot of people missed those local connecting flights for fear of asking.
https://medium.com/swlh/china-quarantine-68dd15d41559
['Niall Mcnulty']
2020-10-27 12:20:43.256000+00:00
['Coronavirus', 'Travel', 'Health', 'Life', 'China']
How Much Is Your Data Worth?
Every time that you register for a new website for free you have to give out your name, email, and date of birth. Sometimes your address, if you’re shopping online. Other times your interests, if you’re hanging out on social media. It might not seem like much to us, but to those websites and companies, your data is their most valuable asset. If they know where you live, what you like, and what you’ve bought in the past, they can sell you products that are specifically tailored for you. Just Facebook is making an average of $7.05 in ad revenue daily for each user. They have over 2.41 billion monthly active users. (The Washington Post, 2019) So are you really registering for free? Name your Price! Your data is not only being used to sell you stuff, but it’s also part of many performance reports and predictive analysis. Companies need to know how their products are performing among a certain demographic and learn from that information to launch more accurate campaigns based on their objectives. Your online data is part of that equation, but once it’s stored on a website it’s no longer private data, right? Then, how much is your data worth? That’s pretty hard to tell because big companies won’t reveal that information to the public. For that reason, Mark R. Warner and Josh Hawley, two senators from the United States, were recently after a new law that would force giants like Google, Facebook, and others to disclose the value of their data with their customers and financial regulators. Additionally, they would have to give users the right to delete their data from the database. It’s a noble cause, but can it actually turn into a reality? For now, all we can do is guess. It’s Not That Simple A law to empower users and add more transparency to the digital landscape seems like a good idea on the surface, but we’re gonna need a more detailed plan of action to actually make it work. The problem with the proposed legislation is that it doesn’t have a specific solution for how to estimate the value of a user’s data, leaving that problem to third parties. A method to calculate this value would have to take into account not only the basic information that we share when we register but also all the mundane activities that big companies have on us. Our search history, our Facebook likes and reactions, our website retention and click-through rates. It’s a lot more complicated than it seems. Some users have tried to estimate the value of their own data, but it’s all based on conjectures without any solid process to support their claims. And even if we could arrive at a clear number, I’m not so sure that it would make all of our privacy problems disappear. On the contrary, people might be more inclined to sell their data for a quick buck without considering the long term consequences to their own rights. Targeted ads are just a surface problem. A lot of users don’t mind them. The real problem is what’s happening behind the scenes. When companies use your information to predict your behavior they’re not adapting themselves to you, they’re adapting you to them. After all, they have the final say of what you see on your feed and can influence your activity towards the most profitable outcome. Final Thoughts As it is, the legislation in question doesn’t seem to have enough to sustain a long term solution against the control of big companies over our data. Furthermore, I don’t believe that having a consensus on the numeric value of said data will make much of a difference either. 
That won’t reflect the power that our information has to predict our actions or influence us in the future. That’s why I think we’re looking at it from the wrong angle. It’s not only about quantity but also about quality value. The data that we share has an effect on our lives and also on other people’s lives who are close to us. We may have not arrived at a practical solution just yet, but awareness is the first step. Let’s focus our efforts on the problem and how it affects our privacy instead of how much it would cost to sell our problems away. Those are just my two cents on the matter. What do you think? I would like to hear your thoughts! Want to know more about us? 🔥 Check out our Website for updates! 🗨️ Join our Telegram Group. 📢 Give us a shout-out on Facebook.
https://medium.com/online-io-blockchain-technologies/how-much-is-your-data-worth-40d72d692d45
['Tyler B.']
2019-10-15 17:26:40.197000+00:00
['Privacy', 'Data', 'Google', 'Facebook', 'Tech']
What is CICD? Where is it in 2020?
CICD is a development methodology which has become more important over time. In today's software-driven world, development teams are tasked with delivering applications quickly, consistently, and error-free: every single time. While the challenges are plentiful, CI/CD is simple at its core. For many organisations, achieving true continuous delivery is near impossible. Development teams are quickly getting more agile while the rest of the organisation struggles to adapt. What Is CICD CICD is an acronym for continuous integration (and) continuous delivery. The CI portion reflects a consistent and automated way to build, package and test applications. A consistent process here allows teams to commit code changes more frequently, encouraging better collaboration and better software. On the flip side, continuous delivery automates the process of delivering an application to selected infrastructure environments. As teams develop across any number of environments (e.g. dev, test), CD makes sure that there's an automated way to push changes through. If you've heard of the following companies: Jenkins GitLab CircleCI TravisCI Then you're probably a little aware of CICD. Photo by Arif Riyanto on Unsplash The Benefits of CICD CICD improves efficiency and deployment times across the DevOps board, having originally been designed to increase the speed of software delivery. According to this DZone report, three-quarters of DevOps respondents have benefitted, not to mention the shortened development cycle time and the increase in release frequency. A 75% success rate is very, very good. How the DevOps Market Feels Small to medium-sized enterprises have begun to ramp up their investment in CICD over the last three years and are starting to compete with their larger peers. According to DZone's 2020 study on CICD, Jenkins remains the dominant CICD platform, but GitLab has been gaining ground over the past couple of years, not to mention CircleCI. The report also indicated that, at each stage of the CICD pipeline, the majority of developers said they have automation built into the process to test code and deploy it to the next stage. Now, despite the importance of automation being built into the CICD pipeline, it's still possible for teams to get lazy and rely too heavily on the automation. As your team's responsibilities shift and new tasks arise, it is easy to automate processes too soon, just for the sake of time. Automating poorly designed processes may save time in the short term, but in the long term it can swell into a major bottleneck that is difficult to fix. To avoid this, teams have to be mindful to properly resolve process bottlenecks before automating, and if anything does arise, they need to strip it out and fix it fully. Moreover, developers should audit their automated protocols regularly to ensure they maintain accuracy, while also testing current processes for efficacy. This all takes time to resolve, but the effort is worth it. Photo by Christopher Gower on Unsplash The future in CDaaS? We've seen the benefits of CICD, but the DZone report highlighted that almost 45% of respondents had environments hosted on-site. An emerging solution for organisations is to leverage microservices and containers to allow customer-facing applications to scale. For this, Continuous Delivery-as-a-Service (CDaaS) is seen as an emerging solution, with almost half of those respondents considering moving across.
I’d be interested to hear from any users of CDaas and their experiences thus far.
https://towardsdatascience.com/what-is-cicd-where-is-it-in-2020-c3298c2802ff
['Mohammad Ahmad']
2020-07-27 15:20:32.490000+00:00
['Software Development', 'Coding', 'Artificial Intelligence', 'Programming', 'Python']
Plotting Equations with Python. This article is going to cover plotting…
This article is going to cover plotting basic equations in Python! We are going to look at a few different examples, and then I will provide the code to create the plots through Google Colab! Goals: Learn to create a vector array, manipulate the vector to match an equation, and create beautiful plots with a title, axis labels, and a grid. y = x² Let's go ahead and start by working on one of the simplest and most common equations: y = x². To do this, we are going to be doing a few things, but first of all, we need to cover a few concepts. Modules in Python A module allows you to logically organize your Python code. We will be using two modules: Matplotlib.pyplot and NumPy. NumPy NumPy is the fundamental package for scientific computing in Python. It is a Python library that provides a multidimensional array object, various derived objects (such as masked arrays and matrices), and an assortment of routines for fast operations on arrays, including mathematical, logical, shape manipulation, sorting, selecting, I/O, discrete Fourier transforms, basic linear algebra, basic statistical operations, random simulation and much more. Matplotlib.pyplot Matplotlib.pyplot is a collection of command-style functions that make matplotlib work like MATLAB. Each pyplot function makes some change to a figure: e.g., creates a figure, creates a plotting area in a figure, plots some lines in a plotting area, decorates the plot with labels, etc.
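The Colab code itself is not included in this text; a minimal version of the y = x² plot following the stated goals (variable names and ranges are assumptions) might look like this:

import numpy as np
import matplotlib.pyplot as plt

# Goal 1: create a vector array of x values
x = np.linspace(-10, 10, 200)

# Goal 2: manipulate the vector to match the equation y = x^2
y = x ** 2

# Goal 3: a plot with a title, axis labels, and a grid
plt.plot(x, y)
plt.title("y = x^2")
plt.xlabel("x")
plt.ylabel("y")
plt.grid(True)
plt.show()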
https://medium.com/future-vision/plotting-equations-in-python-d0edd9f088c8
['Elliott Saslow']
2018-11-02 19:18:08.441000+00:00
['Math', 'Data Science', 'Matplotlib', 'Developer', 'Science']
Anderson Cooper’s “Obese Turtle” Rant Is a Reminder That Fat People Are Punchlines to the Right and the Left
At this point, Anderson Cooper’s recent viral video has been seen more than 10 million times. In it, he talks about Trump’s absurd claims of election cheating by likening the president to… an obese turtle: “I don’t think we’ve ever seen anything like this from a president of the United States. And, I think, like Jake said, it is sad, and it is truly pathetic. And of course it is dangerous, and of course it will go to courts, but you’ll notice the president did not have any evidence presented at all. Nothing. “That is the president of the United States. That is the most powerful person in the world, and we see him, like an obese turtle on his back, flailing in the hot sun, realizing his time is over. But he just hasn’t accepted it, and he wants to take everybody down with him, including this country.” Ever since Anderson made those comments, people have been absolutely giddy at the image. Social media is filled with people cheering on Gloria Vanderbilt’s journalist son, some even joking that he’s now a poet laureate. People really seem to love it, and that’s putting it mildly. Even Vogue published a story calling Cooper’s words “the Perfect Response to Trump’s Shocking White House Speech.” So, here’s the problem with that. Once again, the Left, which supposedly gives a damn about human rights, is using a fat body as an insult, and those of us who despise Trump and are obese ourselves? We’re supposed to pretend it’s no big deal. That it’s even funny. But it’s not fucking funny. We see this attitude time and time again. You on the Left care so much about human rights, but you don’t give a damn about those among you with eating disorders or larger bodies. You use us as punchlines. You use us to make weak or effortless arguments. Fat people aren’t stupid, though. When you mock Trump’s body and make jokes about “obese turtles,” we know exactly what you’re saying. Hahaha. To you, and folks like Anderson Cooper, obesity equals laziness, gluttony, and self-destruction. Sometimes, you use it to denote mental health issues, but you almost always liken it to something we, the irresponsible fatties, have done to ourselves. It’s just one more extension of diet culture and the ridiculous notion that an individual’s bodyweight has some direct correlation to their worth. It’s arrogant and shitty. It’s not even rational. But you do it, and most of you reading this are going to keep making fat jokes and using fat as an insult because you really don’t get it. And honestly, I don’t believe that most folks even care.
https://medium.com/honestly-yours/anderson-coopers-obese-turtle-rant-is-a-reminder-that-fat-people-are-punchlines-to-the-right-3815060e492a
['Shannon Ashley']
2020-11-06 17:37:14.896000+00:00
['Social Media', 'Culture', 'Mental Health', 'Society', 'Politics']
Using Differentiation to Stand Out From The Competition
What is differentiation? Product differentiation is a marketing management strategy that aims to distinguish or differentiate a company’s products or services from the alternatives offered by competitors. Businesses communicate their unique and distinctive benefit through the marketing strategy to make it more attractive to a targeted group of customers. Also referred to as a point of difference, providing customers with a unique and distinct benefit can create a competitive advantage in that marketplace. It is a powerful strategy when a target group of customers is not price-sensitive (a price increase will not reduce demand), when a market is competitive and saturated with options, or when a group of customers have specific needs that are under-served. “Point of difference — even seemingly contradictory ones — can be powerful. Strong, favourable, unique associations that distinguish a brand from others in the same frame of reference are fundamental to successful brand positioning.” (Keller, Sternthal & Tybout, 2002) Objectives of differentiation The underlying objective of differentiation is to make your brand different to and more attractive than your competitors. Develop a position in the market that potential customers see as unique and valuable. Perceived differentiation is subjective from customer to customer, from brand to brand. It is marketing’s job to alter this perception and customers’ evaluation of the benefits of one brand’s offering compared to another. When a product or service becomes more unique, it will attract fewer comparisons with competitors, and it moves away from competing on price. This uniqueness helps to achieve a competitive advantage in a crowded marketplace. Product differentiation helps develop a strong value proposition, making a product or service attractive to a target market.
https://medium.com/illumination-curated/using-differentiation-to-stand-out-from-the-competition-dacc9f2a1777
['Daniel Hopper']
2020-11-04 21:34:09.188000+00:00
['Strategy', 'Marketing Strategies', 'Marketing', 'Business', 'Startup']
Creatives Have a Psychology Problem
The entire process of ‘transferring’ belief, either through marketing techniques or through works of art, carries with it a lot of psychological weight. This ‘weight’ then leads many creatives to become extremely insecure. The Forgotten Religiosity of Creative Work In ages past, creatives were extremely valuable members of society. They were (essentially) aristocrats. They were the priests, the Levites, the craftsmen, the healers, the Nazarenes, the Samurai, the chefs, the architects, the painters, the harp-players, the readers of stars, the prophets, the narrators and the valiant warriors. What that did, was not only to encourage select members of society to take risks and create and venture into new frontiers of human achievement and knowledge, but also to ‘honour’ them regardless of result or output. What that achieved, was to ‘lift’ the psychological weight off these creatives. They were ‘already affirmed’ in their pursuits. Societies that ‘punished’ creatives (for being ‘disruptors’ of the social order, etc), suffered in the long run. They were later either overtaken/enslaved by stronger nations or impoverished by more ‘cunning’ (i.e. creative) nations that produced a lot more ‘creative works’ that were pleasant enough to be bought by the many… Either way, the incentives were there for creatives to keep ‘creating’; and religion/cultural-pride was always there to inspire and encourage the creatives. The Current Psychology of Creative Work At present, advertising, profits and sales run the world of creatives. Creative work is no longer measured on how ‘religiously true’ or ‘culturally inspiring’ it is. It is measured, almost purely, on the whims of a capitalist economy. What that means, is that a ‘good’ creative, is by design a profitable creative. A ‘bad’ creative, is by design an unprofitable creative. (An odd side-effect of this, is observing how this totalitarian belief in profitability has affected arguably the most profitable creative work in history: that of engineering. Many engineering companies systematically cut their R&D budget as soon as they start having financial struggles. Which is in fact rather odd, considering that company ‘losses’ may not necessarily come from a negative loss/profit result of a corporate division but may in fact come from a company over-provisioning resources within positive loss/profit result divisions). What this implies, is that creatives have to look at profitability alone, as the measure of success. That approach may very well have some terrible side-effects on creative-work ‘quality’… but that would be another discussion… Sadly, creatives almost universally are seldom ‘in charge’ of the production and sales of their works. What this means is that, even if they ‘wanted to’, they cannot control the profitability of their works. This results in an even sadder side-effect, that of creatives relying on ‘production/sales’ experts (i.e. venture capitalists, art dealers or movie/music producers) to tell them whether their ‘work’ is profitable or not. Current Creatives are thus at the mercy of BOTH the economic system of capitalism and the various ‘captains’ within that system that tell them 1) if their work is ‘profitable’ or not, 2) if their work is ‘good’ or not. This, more than the already existing problem of ‘insecurity’, is an even GREATER psychological weight to overcome. Overcoming the Psychological Weight The desires to ‘be great’ and to ‘please everyone’ are incredibly self-contradictory. 
For one to ‘be great’ (a demi-god in their chosen profession), one would have to not only step on a few toes, but also annoy a lot of people. No one wants to be made to feel small. All actors/actresses/comedians/musicians/sports(wo)men and entrepreneurs worth their salt want to be ‘great’! That desire is not going to be welcomed by others either competing for the same goal or those aiming for power and wealth. The first step to overcoming both the insecurity and ‘credibility’ problem then, is realising that one is alone in their creative goals. We have modern technology to thank for the second step! Arguably the most radical step: build your own system of sustainability. Build a crop and chickens farm. Build your own shed, table, etc… None of these will make you ‘profitable’, but it will sure provide a secure place to ‘put your money’ once you make it. It will also (perhaps, most importantly!) feed, shelter, protect and shield your creative genius from the sharks within the system of capitalism. The third (and most difficult) step is that of a creative finding an independent platform that allows for full expression of one’s creative abilities, unencumbered by ‘feedback’ from gate-keepers of the capitalist-creative market economy. The only feedback that is objective and meaningful for a creative, is that coming from consumers of creative-products. The idea that some expert will know 1) what consumers want, 2) what is ‘good’ creative work, 3) what will consumers buy at what price, 4) what platforms these consumers exist in, 5) how much money will be made, etc is extremely preposterous and unrealistic. Nobody Knows Anything. If anything, the creatives know a lot more! (As they tend to be in touch with both the ‘work’ and ‘the people’). They should allow themselves to create with confidence. They should be crazy. They should be insecure. They should believe that their work is credible, until an end-customer says otherwise. The fourth step (advocated by the likes of Peter Thiel, Elon Musk and Tim Ferriss) is that of aiming for a radical enough (and different!) but clear goal. A goal that is inspiring, self-nourishing and pushes one to be extremely productive. A goal that allows one to escape the clutches of competition and capitalism. A goal that feels like there is only one person in the universe who was born to achieve it. The fifth step (advocated by me) is that of having recursive targets. Prominent psychologists like Jordan Peterson and many others seem to advocate for some form of ‘low aiming’ as a way to build momentum towards progress and accruing psychological health. Apparently ‘big goals’ are extremely discouraging if not achieved! Even if this is true, THIS is a big mistake! Firstly, positive ‘progress’ is not an objective way to measure success. A product, for example, that makes a lot of sales, like Microsoft Windows, is not necessarily the best product (compared to its expensive counterparts, e.g. macOS, or free counterparts, e.g. Ubuntu OS). Secondly, negative ‘progress’ often contains a lot more insight than positive progress. A wealthy and technologically-savvy venture capitalist saying ‘no’ to a product proposal is worth a lot more than a high-school friend saying ‘yes’ to whether he likes the product or not. The idea of ‘recursive’ targets is quite simple: it is about finding the least costly, and the least difficult way to build a useful concept that ‘demonstrates’ the higher (i.e. more radical) goal. This is decidedly NOT a prototype! 
It is a finished (and polished) concept. It is only smaller. Yet it is daring. It is different. To stay both profitable and productive, the creative would have to build a lot of these ‘products’. But these are merely means to an end: the higher goal. They are the means to collecting insights, cash-flow, feedback and sustainability from the ultimate ‘gate-keeper’: the customer! Be like Elon, aim BIG. Don’t be like Elon, aim for the (recursed) SMALLer products. Don’t take J. Peterson’s advice of ‘aiming LOW’… Those are the tricks to lifting the psychological weight off being a creative.
https://lesdikgole.medium.com/creatives-have-a-psychology-problem-d420998d0f4b
['Lesang Dikgole']
2019-11-24 18:12:30.296000+00:00
['Creativity', 'Comedy', 'Art', 'Psychology', 'Jordan Peterson']
How China used Artificial Intelligence to combat Covid-19
2020 will go down in history books as the year that witnessed a one-of-its-kind global crisis due to the Covid-19 pandemic. Of course, Covid-19 is not the first pandemic on a worldwide scale. There have been plenty of such outbreaks in recorded history that affected different parts of the globe. But Covid-19 stands apart due to today’s high volume of international travel. It spread worldwide in no time, resulting in complete lockdown in most countries. At the time of writing this article, there had been 39 million Covid-19 cases globally and 1.1 million deaths just within 7–8 months. Johns Hopkins Covid-19 resource center (Source) Covid-19 is also different from earlier pandemics because it’s the first time government agencies and health organizations worldwide are using the emerging technologies of Big Data and Artificial Intelligence to combat the disease. AI has always been portrayed as a technology that has the potential to change the world we live in, and this pandemic was a litmus test for AI as well to prove its promise. A fascinating case study of the use of AI to fight Covid-19 comes from China, which was the source of this virus. China had an initial surge of Covid-19 cases, but it was able to control the spread within a few months while the rest of the world is still struggling with growing cases. If you observe the graph below, you will see a flat line for China, whereas, to give some perspective, the USA and India, which also have large populations, are still seeing an exponential rise in cases. Covid-19 cases in China vs. India vs. the USA (source) Had Covid-19 or a similar pandemic happened ten years ago, the above graph would have shown different results for China. Here is my previous article that supports this theory. What did China do differently from other countries to combat Covid-19? To fight the Covid-19 situation, China relentlessly made use of AI-enabled technologies in all possible ways to control the spread, unlike other countries. The main focus areas for artificial intelligence were mass surveillance to prevent the spread and, secondly, healthcare to provide fast diagnosis and effective treatment. This should not come as a surprise because China is already one of the leading markets for artificial intelligence globally. As per one report, China’s AI market is forecasted to reach 11.9 Billion USD by 2023. So let’s take a close look at the various measures China took with artificial intelligence. 1. Mass Surveillance & Contact Tracing China is known to employ mass surveillance on its citizens without thinking twice about people’s data privacy. China has an estimated 200 million surveillance cameras powered by AI-based facial recognition technology to closely track its citizens. Such extensive use of AI for controlling citizens has always attracted global criticism to China. China’s existing Mass Surveillance (source) But when Covid-19 hit China, its already established mass surveillance system proved to be very efficient since the government could use this system to track patients’ travel history and predict which other people who came in contact with a patient might be at risk. Contact Tracing App (Source) China not only gathered people’s tracking data, but it also used this information to alert people of potential Covid-19 risk with the help of contact tracing mobile apps designed with the help of companies like Alibaba and Tencent. This app assigns a color code to users based on their risk profile. 
People with no risk are assigned a green color, whereas people with travel history or close proximity to other patients are given yellow or red based on the severity of the risk. The yellow color indicates self-quarantine, and people with a red color are required to go to the hospital. China’s Health Code (Source) This health code has now become the benchmark for China to allow its citizens to use public places and services. Many health code scanners are installed in public places like offices, subways, railway stations, and airports that screen out people with yellow or red codes. China has also imposed rules to allow people to drive on roads only if their health code is green. Man Scanning Health Code in Subway (Source) Another very useful app for Chinese people was the Baidu Map, which gave real-time information on high-risk places so that people could stay away from those regions. It used data accessed by GPS location and medical data from health agencies to inform users about their exact distance from the Covid-19 hotspots to avoid them while traveling. Baidu Map showing Covid-19 Hotspots (Source) Wearing masks has become the norm in 2020; it is essential for one’s own safety and prevents infecting others. China’s AI companies like Baidu, Megvii, SenseTime, and Hanwang Technologies helped the government put up facial recognition surveillance capable of recognizing people with or without masks. The system immediately raises a security alert if it detects a person not wearing a mask. These systems are also equipped with thermal scans to raise alerts for people with high temperatures in public areas. Baidu’s surveillance in Beijing Qinghe Railway Station was able to detect 190 suspected cases within a month of its installation in late January. Facial recognition with a thermal scan at Chinese Railway Station (Source) The most comprehensive data source for Chinese government agencies & healthcare technology companies comes from the mobile app “WeChat.” Chinese tech giant Tencent developed this mobile app, which now has around 1.2 billion users. Tencent is Asia’s most valuable company with a market capitalization of 300 billion USD. WeChat was one of the primary sources behind all Covid-19 contact tracing. When it was combined with mass surveillance data, it made contact tracing an easy task. “WeChat and mass surveillance together provided many grounds for Computer Vision (CV) and Natural Language Processing (NLP) experts to build a paradise of revolutionary Contact Tracing applications.” 2. Healthcare Services The major challenge healthcare workers faced was the influx of Covid-19 cases that started coming in for diagnosis in the early days in China. When lung CT scans became a parameter for initial diagnosis before PCR test confirmation, it was a nightmare for radiologists. The radiologists had to manually go through thousands of scans of people to confirm the diagnosis. Quick diagnosis and early medication/quarantine were essential to hinder the spread of Covid-19, but the diagnosis process itself became a bottleneck at that time. Soon, Chinese AI companies like Alibaba and Yitu Technologies stepped in with AI-assisted diagnosis of CT scan images to automate the process with minimal radiologist intervention. AI-assisted CT scan diagnosis for Covid-19 (Source) These systems were built using Deep Learning and proved to be fast and accurate. 
The use of artificial intelligence to evaluate CT scans was a big turning point for China as it sped up diagnosis. Alibaba’s diagnostic system could provide the diagnosis of Covid-19 within 20 seconds, with 99.6% accuracy. By March 2020, over 170 Chinese hospitals adopted this system, with 340,000 potential patients. On the other hand, Tencent AI Lab worked with Chinese healthcare scientists to develop a deep learning model that can predict critical illness in Covid-19 patients, which can be fatal. They made this tool available online so that at-risk patients could be given high-priority treatment well in advance. Covid-19 is a novel virus, meaning nothing was known about it by medical researchers when it surfaced. As soon as this came to light, researchers worldwide started to study the virus’s genes to create a diagnosis process that could also open the gates for vaccination. But such scientific research is not easy and requires extensive resources. Both Alibaba and Baidu have now made their proprietary AI algorithms available to the medical fraternity to speed up the research and diagnosis process. Alibaba’s LinearFold AI algorithm can reduce the time to study the coronavirus RNA structure from 55 minutes to just 27 seconds, which is useful for fast genome testing. Similarly, Baidu’s open-sourced AI algorithm is also 120 times faster than traditional approaches to genome studies. In such a crisis, drones are often useful to either provide supplies or to carry out surveillance. But this time, autonomous vehicles and Chinese robotics companies like Baidu, Neolix, and Idriverplus have also joined the mission by deploying their self-driving vehicles to supply medical equipment and food to hospitals. Autonomous Vehicle disinfecting public place (Source) Idriverplus autonomous vehicles were also used to spray disinfectants in public places and hospitals for sanitization. Another company, Pudu Technology, which usually builds robots for the catering industry, also deployed its robots to over 40 hospitals to support health workers. Chinese companies are also catering to global demand for autonomous robots. For example, Gaussian Robotics claims its robots have been sold in over 20 countries during the current pandemic. Conclusion Indeed China has left no stone unturned to use artificial intelligence to maximum advantage in its fight against Covid-19, and some might question why other countries were unsuccessful in doing so. One of the main reasons China was so successful was its relentless use of existing AI-enabled mass surveillance systems, which do not consider people’s data privacy. It is quite normal for China to use facial recognition to track citizens, but data privacy is a serious issue in other parts of the world where such surveillance systems cannot exist at such a large scale. Although there have been uses of artificial intelligence in healthcare in many parts of the world, no other country could implement mass surveillance to restrict Covid-19 spread as China did. Thus China went ahead in the race to control Covid-19 leveraging AI.
https://medium.com/datadriveninvestor/how-china-used-artificial-intelligence-to-combat-covid-19-f5ebc1ef93d
['Awais Bajwa']
2020-10-21 17:52:05.557000+00:00
['Deep Learning', 'Healthcare', 'AI', 'Computer Vision', 'China Startup']
Data Pipeline Architecture Optimization & Apache Airflow Implementation
Data pipelines are essential for companies looking to leverage their data to gather reliable business insights. Pipelines allow companies to consolidate, combine, and modify data originating from various sources and make it available for analysis and visualization. However, the numerous benefits that data pipelines provide depends on a company’s ability to extract and aggregate its data coming from different sources and thus on the quality of its pipeline architecture choices. One of TrackIt’s clients had implemented a big data pipeline running on AWS that needed to be optimized. The client was leveraging the big data pipeline to enable its data scientists to gain additional insights by exploiting data that originated from CSV files. However, the company was running into certain architecture-related problems with its pipeline that needed to be fixed and sought our expertise to address these issues. This article details the TrackIt team’s approach to optimizing the architecture of the data pipeline. Initial Pipeline How the initial pipeline worked: The company’s CSV files were first added to an S3 bucket Once the files were added to the S3 bucket, an AWS Glue job was automatically triggered to fetch the data from the CSV files and make it available for a Python Spark script, the next step in the pipeline A Python Spark script modified the data to make it more suitable for use in Redshift Spectrum The modified data was then stored in a new S3 bucket (now in the Parquet file format) Once files were added to the new S3 bucket, another Glue job was triggered that made the data available for use by Redshift, an SQL database Files from this S3 bucket were also replicated into another S3 bucket hosted on a different AWS region using AWS S3’s cross-region replication feature. They needed separate S3 buckets in each region because Redshift Spectrum, which was being used in both regions separately, requires the S3 bucket to be located in the same region. An AWS Glue job then fetched data from the latter S3 bucket and made it available to Redshift Data scientists could then use a Python script to query the data on Redshift Problems: The pipeline implemented by the company had certain issues that were hindering its ability to make the most of its data. Problem #1 — Inability to Individually Test Jobs The initial pipeline did not provide the company with the ability to isolate and test individual components of the pipeline. Manually triggering one of the steps of the pipeline launched all the other events that followed it. Problem #2: Too many steps in the pipeline The initial pipeline included quite a few additional steps — such as the Lambda functions and CloudWatch events before and after the Glue job — that made the pipeline harder to test and manage. These extra steps could have been avoided with different architectural choices. Problem #3: Cross-region data replication There was also an issue arising due to the cross-region data replication feature on S3. The data replication between AWS region 1 and AWS region 2 was not instantaneous and took a few minutes. However, the completion of the AWS Glue job in region 1 was immediately triggering the Glue job in region 2 before the data had finished replicating between the S3 buckets. Problem #4 — No error notifications The initial pipeline provided the company with no error notifications. The company often discovered the occurrence of errors weeks after an event, and then only because the data scientist realized that they were missing data. 
When an error did occur, the next job would simply not be launched. The pipeline did not include an error notification component that would allow the company to immediately become aware of errors happening within the pipeline. The company’s engineers had to go onto the console and investigate the history of executions to try to identify and pinpoint errors in the pipeline. Optimized Pipeline The following modifications were first proposed by the TrackIt team to the client. The first modification to the pipeline proposed by the TrackIt team was to use Glue Workflow, a feature of AWS Glue to create a workflow that automatically launches the AWS Glue jobs in sequence. The Glue Workflow would also allow the company to launch and test jobs individually without triggering the whole workflow. The implementation of the Glue Workflow would also enable the company to simplify the pipeline by getting rid of extraneous Lambda functions and CloudWatch events that had been implemented in the initial pipeline. Instead of having multiple Lambda functions, the new pipeline would have just one Lambda function that triggers the Glue Workflow when files are uploaded into the S3 bucket. The second modification proposed by the TrackIt team was the addition of an error notification component using Amazon CloudWatch. CloudWatch events would be triggered immediately when an error occurred in the Glue Workflow and would then send either an HTTP request or an email to the team, or could trigger a Lambda function that would execute additional tasks if there was an error. The third modification to the pipeline proposed by the TrackIt team was to eliminate the use of S3 cross-region replication. Instead, the files are directly added to both S3 buckets (each located in a different region) so that when the Glue job is triggered in region 2, all the files are already up to date in both S3 buckets. The client was quite pleased with this proposition and wanted to incorporate these new changes to the pipeline using Apache Airflow, a tool used to create and manage complex workflows. Apache Airflow Implementation The TrackIt team assisted the client in incorporating the suggested modifications to the pipeline and implementing it on Apache Airflow. The different parts of the pipeline were coded in Python as modules that the client could reuse in the future to build similar pipelines or to further modify the existing one. How the final Apache Airflow pipeline works: The company’s CSV files are first added to an S3 bucket The AWS Glue crawler fetches the data from S3 A Python Spark script is executed that modifies the data and makes it more suitable for use in Redshift Spectrum The modified data is then stored in a new S3 bucket (now in the Parquet file format) Then in one region, a Glue crawler fetches data In the other region, the Glue crawler fetches data and a Redshift script is used to modify data and then update changes Data scientists can use a Python script to query the data on Redshift If any error occurs within the pipeline, a CloudWatch event is immediately triggered and sends an email to notify the team About TrackIt TrackIt is an Amazon Web Services Advanced Consulting Partner specializing in cloud management, consulting, and software development solutions based in Venice, CA. 
TrackIt specializes in Modern Software Development, DevOps, Infrastructure-As-Code, Serverless, CI/CD, and Containerization with specialized expertise in Media & Entertainment workflows, High-Performance Computing environments, and data storage. TrackIt’s forté is cutting-edge software design with deep expertise in containerization, serverless architectures, and innovative pipeline development. The TrackIt team can help you architect, design, build and deploy a customized solution tailored to your exact requirements. In addition to providing cloud management, consulting, and modern software development services, TrackIt also provides an open-source AWS cost management tool that allows users to optimize their costs and resources on AWS.
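As an illustration of the Apache Airflow implementation described earlier, here is a minimal, hedged sketch of a DAG that chains a crawl step and a transform step with a simple failure notification. The task names, placeholder callables, and Airflow 2-style import paths are assumptions for the example, not the client's actual modules.
from datetime import datetime
from airflow import DAG
from airflow.operators.python import PythonOperator

# Placeholder callables standing in for the real pipeline modules.
def run_glue_crawler(**context):
    print("triggering the Glue crawler on the raw CSV bucket")

def run_spark_transform(**context):
    print("running the Python Spark job that writes Parquet output")

def notify_on_error(context):
    # In the described pipeline, a CloudWatch event / email alert fires here.
    print(f"task {context['task_instance'].task_id} failed")

with DAG(
    dag_id="csv_to_redshift_pipeline",
    start_date=datetime(2020, 1, 1),
    schedule_interval=None,  # triggered when files land in S3
    catchup=False,
) as dag:
    crawl = PythonOperator(
        task_id="glue_crawler",
        python_callable=run_glue_crawler,
        on_failure_callback=notify_on_error,
    )
    transform = PythonOperator(
        task_id="spark_transform",
        python_callable=run_spark_transform,
        on_failure_callback=notify_on_error,
    )
    crawl >> transform  # the crawl must finish before the transform runs
The value of expressing the pipeline this way is that each task can be triggered and tested individually, and the dependency arrows replace the chain of Lambda functions and CloudWatch events that made the original pipeline hard to manage.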
https://medium.com/trackit/data-pipeline-architecture-optimization-apache-airflow-implementation-915821d5ce5b
['Simon Meyer']
2020-11-09 17:49:16.587000+00:00
['Apache Airflow', 'Redshift', 'AWS', 'Big Data', 'Amazon Web Services']
Submitting Your Stories to Discover Computer Vision: Some Guidelines
1. We Need Original Contributions From You Before you submit an article to us, we suggest that you ask yourself how original the content of your article is. Is it something that our readers will be hooked on to & appreciate? If the answer is YES, go ahead & submit it without a second thought. That said, quite often we also come across articles which are merely rephrased content duplicated from arXiv. That’s considered plagiarism, hence we request you to run your articles through plagiarism-checker software, preferably the Plagiarism Checker by Grammarly, before you submit. Any content found to be plagiarized will be outright rejected without notice. We maintain such strict quality-control to ensure that our readers are getting the top-notch content we promise to deliver. So please bear in mind, before drafting, that your article should add value for the readers. 2. Articulate the Message of Your Article Skillfully Readers on the Internet are known to have a short attention span & our readers aren’t any different. Hence, to ensure that your message is delivered to the reader we request that you articulate your message in a clever manner. Some suggestions that we’ve come up with over the years of experience writing online are as follows: Be VERY precise about the idea you’re writing about. Include a non-clickbait title. Believe us when we say it, a clickbait title might fetch you some CTR no doubt but Medium isn’t YouTube/Facebook! You want your articles to be read, not just clicked & moved on to the next article. Phrase an extremely precise introduction right below the Subtitle/Featured Image (if you include one). It should sum up the rest of your article briefly yet deliver enough information to keep the reader hooked. Writing articles online is a skill on its own which understandably not everyone will have. As such we request you to try to be as skillful as possible so that our editors can take care of the rest. 3. Avoid Overcrowding the Internet With Yet Another Tutorial The Internet is an overcrowded place full of individuals trying to teach their craft to one another. So as much as we appreciate you helping someone out with your tutorials, at Discover Computer Vision we don’t publish low-effort tutorial posts. You can try out other bigger publications like Towards Data Science or Better Programming for submitting such articles. But does it mean we don’t accept any tutorial articles? Well actually, we do, but bear in mind the following criteria which your articles should satisfy: A unique subject matter that has never been discussed earlier on any other platform out there. There’s a research gap that you would like to address very briefly instead of writing a full-blown research paper. You recreated an experiment or a previous implementation of a closed-source product & you found some discrepancies or stumbled across an even better implementation of the product. If the content you’re sharing fulfills all these criteria, feel free to go ahead & submit it to us. 4. Check Your Facts & Credit the Original Source Not citing the original source of the content you use for your article is equivalent to stealing & the repercussions can be huge. 
Hence, we request every writer to cite the content that isn’t their own & make sure they’re allowed to use it for commercial purposes. Everything you use, from a mere feature image to a quote you picked up from a published paper in a journal, has to be cited. If you’re confused about the original source, we suggest you not use it at all. Also please provide a Reference section at the end of your article & follow this format: [X] N. Name, Title (Year), Source Here’s an example: [1] A. Pesah, A. Wehenkel and G. Louppe, Recurrent Machines for Likelihood-Free Inference (2018), NeurIPS 2018 Workshop on Meta-Learning 5. Refrain From Self-promotion & Aggressive CTAs We believe in providing top-notch quality articles to our readers & that’s the motive behind setting up the publication in the first place. Hence, in order to ensure we’re delivering what we promised, we don’t publish half-baked articles whose main purpose is to market under a guise. We do, however, allow subtle marketing of your previously published articles or blog posts related to Computer Vision at the end of the submitted article. If you include back-links to older articles, please do it in no more than 1–3 sentences. Any other CTAs like asking for claps, following up on Twitter or other social media platforms, downloading a book, etc. aren’t allowed & you’ll be asked to remove them before resubmitting. Instead, we suggest you include CTAs on your Medium profile. You can follow up on Casey Botticello’s advice in his article — Medium Profile Page — to learn how to maximize your gains through an optimized profile page. Regardless, know that if you share good, quality content, it’s not difficult to grow a fan following. 6. Optimize Your Story For Curation & Discoverability Congratulations on completing the article; sadly, just writing isn’t enough. But that’s what we’re here for, to help you get better at writing articles online! So we suggest you follow the points mentioned below to gain maximum value from each one of your articles:
https://medium.com/discover-computer-vision/discover-computer-vision-submission-guidelines-27e3f686e596
['Somraj Saha']
2020-05-19 07:45:21.033000+00:00
['Computer Vision', 'Submission Guidelines', 'Deep Learning', 'About Us', 'Join Us']
What does if __name__ == ”__main__” do?
What does if __name__ == ”__main__” do? When and how a main method is executed in Python Photo by Blake Connally on unsplash.com If you are new to Python, you might have noticed that it is possible to run a Python script with or without a main method. And the notation used in Python to define one (i.e. if __name__ == ‘__main__' ) is definitely not self-explanatory, especially for newcomers. In this article, I am going to explore what the purpose of a main method is and what to expect when you define one in your Python applications. What is the purpose of __name__ ? Before executing a program, the Python interpreter assigns the name of the Python module to a special variable called __name__ . Depending on whether you are executing the program through the command line or importing the module into another module, the assignment for __name__ will vary. If you invoke your module as a script, for instance python my_module.py then the Python interpreter will automatically assign the string '__main__' to the special variable __name__ . On the other hand, if your module is imported in another module # Assume that this is another_module.py import my_module then the string 'my_module' will be assigned to __name__ . How does the main method work? Now let’s assume that we have the following module, which contains the following lines of code: # first_module.py print('Hello from first_module.py') if __name__ == '__main__': print('Hello from main method of first_module.py') So in the module above, we have one print statement which is outside of the main method and one more print statement which is inside. The code under the main method will only be executed if the module is invoked as a script from (e.g.) the command line, as shown below: python first_module.py Hello from first_module.py Hello from main method of first_module.py Now, let’s say that instead of invoking module first_module as a script, we want to import it in another module: # second_module.py import first_module print('Hello from second_module.py') if __name__ == '__main__': print('Hello from main method of second_module.py') And finally, we invoke second_module as a script: python second_module.py Hello from first_module.py Hello from second_module.py Hello from main method of second_module.py Notice that the first output comes from module first_module and specifically from the print statement which is outside the main method. Since we haven’t invoked first_module as a script but instead we have imported it into second_module , the main method in first_module will simply be ignored since if __name__ == ‘__main__' evaluates to False . Recall that from the above call, the __name__ variable for second_module has been assigned the string '__main__' while first_module ‘s __name__ variable has been assigned the name of the module, i.e. ’first_module’ . Although everything under if __name__ == ‘__main__' is considered to be what we call a “main method”, it is good practice to define one proper main method instead, which is called if the condition evaluates to True. For instance, # my_module.py def main(): """The main function of my Python Application""" print('Hello World') if __name__ == '__main__': main() Note: I would generally discourage you from having multiple main functions in a single Python application. I have used two different main methods just for the sake of the example. Conclusion
https://towardsdatascience.com/what-does-if-name-main-do-e357dd61be1a
['Giorgos Myrianthous']
2020-11-15 00:21:35.867000+00:00
['Python Programming', 'Software Engineering', 'Coding', 'Software Development', 'Python']
A Gift Guide for the Data Viz Practitioner
A Gift Guide for the Data Viz Practitioner 13 gift ideas that data viz practitioners are sure to enjoy It’s the most wonderful time of the year — and it’s time to decide on the perfect gift for that special data viz practitioner in your life. Whether you’re reading this article for them, or for yourself, you’ve stumbled upon the Christmas yule log of gift lists. Sweet, with a bit of spice, all rolled into a clean final product — just like a good data visualization. Books From how-to guides, to best practices, to pure artistic design, books are a first-rate gift for anyone interested in data viz. Photo by Allie Smith on Unsplash The newest release from data viz heavyweight Alberto Cairo is packed full of tips to understand and decode all of the data visualizations that rule our daily lives. Cole Knaflic, another well-known data viz expert, has a fantastic book on Storytelling with Data that is also an excellent choice. 2. Invisible Women: Data Bias in a World Designed for Men The modern abundance of data yields more than beautiful graphs — pick up this exposé from Caroline Criado Perez to reorient your reality to the biases baked into our data-driven world. 3. The Visual Display of Quantitative Information The classic by Edward Tufte, data visualization pioneer. This book is considered by many to be a hallowed text in data visualization. And for those inspired by the occult and weird, there is the gorgeously curious Codex Seraphinianus — part impossible puzzle, part artistic masterpiece. Tools Inevitably, anyone interested in data viz enjoys the actual work that comes with crafting beautiful visuals. Some sketch, some scribble, but we all need tools to get the job done. These gifts are sure to make it straight into the data viz practitioner’s toolbox: 4. Sketchpad or notebook Moleskine and Leuchtturm1917 notebooks are always highly recommended. For the pocket-friendly, Field Notes and Rite in the Rain both have versatile options to choose from. Photo by Jan Kahánek on Unsplash 5. Quality pens or pencils Gel pencils, high-quality markers, the classic #2 pencil — there are plenty of options, and everyone appreciates a good writing utensil (I love a new Pilot G2 pen). 6. A white board with ultra fine tip dry erase markers Writing with a chisel tip marker is a chore after doing any work using an ultra fine tip marker. I am a devout ultra fine tip user — it’s the only proper way to use a dry-erase board. I’ll die on that hill. Photo by Joanna Kosinska on Unsplash 7. Design ephemera from Present & Correct Present & Correct has every little gadget or utensil you could possibly need for a successful design project. Graph tape, rubber stamps, fasteners, journals, colored chalk — it’s guaranteed data viz practitioners will have a field day in their store. Subscribe! While the subscription model may not be everyone’s favorite, here are some worthwhile non-Nightingale subscriptions that would make any data viz professional happy. 8. Twelve-week subscription to The Economist Photo by Campaign Creators on Unsplash Daily graphs in the website’s Graphic detail section, and weekly print issues to keep up with all things politics and economics — the perfect companion for data viz professionals. 9. One-year subscription to Nathan Yau’s Flowing Data resources, courses, and tutorials Returning to revered members of the data viz community, Nathan Yau has been sharing and creating some of the internet’s best visualizations for over a decade. 
A subscription gives access to Yau’s step-by-step tutorials, courses of varying difficulty, and a wide community full of valuable resources. 10. Adobe Illustrator subscription Illustrator is practically a required tool for the data viz practitioner. This is a no-brainer gift for those feeling a bit generous. Fun Having fun with our creativity makes us a better artist and a better person. — Gwen Fox Photo by Christopher Paul High on Unsplash 11. Board games Try Codenames for its inventive strategy, Pandemic or Risk for the cartography enthusiast, or CATAN for those looking for adventure. And when wrapping that board game, consider Johannes Wirges recent break down of what board games teach us about data viz. 12. Swag from the PolicyViz store Jonathan Schwabish, another highly regarded data visualization expert, runs the PolicyViz blog and podcast to encourage better data visualization communication. The shop is full of t-shirts, posters, and cheat sheets for the data viz practitioner — all supporting the excellent work Jon does. Photo by Franco Antonio Giovanella on Unsplash 13. Play-Doh Come on, who doesn’t love opening a brand-new can of Play-Doh? It’s the perfect stress reliever and great for first-pass visualizations — a timeless classic. Happy Holidays! And with that, the yule log is finished off, satisfying the reader with visions of gift ideas dancing in their heads. Whatever you celebrate, I hope this gift list inspired someone’s perfect gift — or to treat yourself. I won’t tell.
https://medium.com/nightingale/gift-guide-for-the-data-viz-practitioner-5ba5495c5c95
['Coleman Harris']
2019-12-16 16:56:25.045000+00:00
['Design', 'Holidays', 'Data Science', 'Creativity', 'Data Visualization']
You Don’t Need Permission
It’s impossible to get stuck when you’re listening to yourself. How many thoughts go through your head every day? All of them are potential ideas. Most crap, some good, a handful great. You never know until you latch onto one and see where it goes. And anyone who claims they do is full of it. I’ve noticed a trend, in others and myself and especially in the new creator: getting in your own way. And the usual solution to this self-imposed roadblock is typically to seek the approval of others before embarking on whatever journey it is you’re thinking of. Some quotes of the fearful creator: “Is this okay?” “Do you think this is a good idea?” “I really want to work this out before I start…” All of which are counterintuitive because most people will tell you “do your own thing”, “you do you”, “you won’t know until you try it” or some form of “don’t listen to what others say, just do it.” But when it actually comes to it… Ho ho, that’s the difference, isn’t it? Saying something and doing something. I’ll let you in on a secret: you don’t need permission. Whatever it is you want to start, to make, to build, to share. You don’t need someone to hold your hand. You could start right now if you wanted to. Shocking, right? Don’t care, made art That’s my motto. Steal it if you want. Whenever I start questioning myself, asking silly things like “who am I to have these thoughts?” or “who am I to be writing this article?” I smack myself in the face and recite the words. Don’t care, made art. Start digging the hole I tried to learn to code three or four times. Every time I failed, I’d wait until someone wanted to do it with me before trying again. Even though I knew, knew the whole time what I wanted to do. Well, why not just journey off and start? Who knows. Perhaps I wanted someone else on the journey to help when it got hard. But get deep enough in anything and you’ll realise you’re going to have to start facing the challenges yourself. I think of creating like having a single shovel on the ground. And the treasure you’re trying to make lives in the dirt below. How many people can use the shovel at the same time? You could spend all your time trying to work out how to get others to use the shovel with you or you could just start digging the hole. The same goes for someone else, you can’t hold their shovel for them. Let them find their own and figure out how to dig. Pretend you’re an archeologist and the thing you’re trying to create is like a 67-million-year-old Tyrannosaurus rex skeleton. A work of art when it towers through a museum but only because someone like you picked up the shovel, stuck it in, found the bones and spent hours brushing off the dust. Why or why not? Sandra tells Mark about the paintings she’s been creating. She hasn’t made money yet but she enjoys trying to get her ideas onto the canvas. Mark asks her why she’s spending so much time painting when she could be doing other things. Harriet tells Lucy about the poems she’s been writing; so far they’re private but she’s been thinking about sharing them on her blog. Lucy asks why not? And tells Harriet to go for it. Asks, what’s the worst that could happen? F*ck Mark. Be more like Lucy. Race to your first 100 “But I’ve got no talent.” Neither do I. Except for the fact I can sit here and spill my guts onto this page. In the beginning, my hands wouldn’t output what crawled around my head. But after enough sessions with the blank page, I’m getting better. 
Don’t discount yourself as being bad at something until you’ve sunk at least 100 deep hours into it. 100 pure hours is enough to go from zero to average (or above) at almost anything. When’s the last time you spent four days straight doing nothing but one thing? A broke crackhead can spend four days hustling for a hit like it’s nothing. Imagine if you poured that kind of crackhead energy into your work. Our brains aren’t wired for non-linear returns. You could spend 99 hours on something and make almost zero progress. And then halfway through hour 100, the breakthrough comes from what seems like nowhere. But it’s not nowhere, it’s the magic of compound interest showing its face and waving hello. 100 hours, 100 articles, 100 videos, 100 creations, 100 phone calls, 100 cold emails, 100 whatever. Use speed and quantity as your fertiliser for quality. Say the motto whilst you do it. Don’t care, made art. Don’t care, made art. Don’t care, made art. Audience of one Stuck? Probably not. You’re trying to please everyone. What happens if you replace “but what if they don’t like it?” with “but what if I don’t like it?” You’re already your own harshest critic, so why not become your own biggest fan? Educate or entertain We’ve been selfish so far. Why? Because it works. No one knows what you’re after as much as you. But let’s switch gears. If you can’t stomach being a selfish creator, make things to educate or entertain others and you’ll always have an audience. People are hungry for knowledge, share what you know. I want to dance, I want to laugh, I want to cry, I want to cheer, I want to fear, I want to love. Give me a reason. Does what you’re making educate or entertain? Bonus points if it does both. Teach me something while we dance together. Embarrassed every month Last week I entered a Brazilian Jiu Jitsu competition. Walking through the competition, I thought, why am I here? Then I realised, I’m uncomfortable because I’m tiptoeing into the unknown. And everyone around me probably feels the same way. Win or lose we’re going to come out of this different to what we were before. This could’ve been a normal Sunday. Instead, we’re all here putting our practised skills to the test. A controlled environment, yes, but also an environment different from what we’re used to. I lost two out of three of the fights I had. A bruised ego and a bruised body. I reflected on the one I won and was proud of my efforts; I went through the moves I’d practised and they came off. Zero takeaways except for a pat on the back. The ones I lost? One of them gave me a haemorrhoid. I got choked so hard the inner linings of my intestines gave way. Have you ever had a haemorrhoid? If not, I’ll tell you what it’s like: shit. A physical reminder that I’ve got something to work on, something to improve. Over the next few months, those small losses will turn into gains as I fill the gaps in my skill set. Of course, the goal is never to lose, it’s to become immune to it. I’ve got an idea… I’ll give myself a vaccination right now. I, Daniel Bourke, am immune to losing. I, Daniel Bourke, am immune to rejection. I, Daniel Bourke, am immune to embarrassment. Boom. Done. That should last a month. Next month I’ve got to find a way to embarrass myself, step out of my happy little circle, practice putting fear to the side when I’ve got to do something that matters. If you’re not feeling embarrassed about something at least once a month, you’ve got to bump your numbers up. There’s no criteria Remember how writing an essay in school was such a drag? 
1500 words on some topic you’d never pursue on your own, being sure to add references for every thought you translate into words all to make sure you ticked the boxes a veteran education academic created 17 years ago. Good gosh. Boring to write, even worse to read. Creating becomes fun when you realise there’s no criteria except making good art. What’s good art? You decide baby. Copy others until you have your own style Start with work that inspires you, ingest it, enjoy it, learn from it and remix it with your own style. Put different colour clothes together in a washing machine and what happens? The colours run and they mix. That’s how I make things. I steal the ideas of others and put them into my washing machine brain and let them mix. Then I take them out and see what they look like. Sometimes they’re disgusting. The type of shirts you wouldn’t be caught dead wearing. Other times, the times when I pour my heart in, the washing machine goes into reverse and instead of coming out clean, the shirts come out with bloodstains on them. Those type of shirts? They’re magic. Trust your ability Last week I helped my friend Big Easy prepare for a role-play scenario. A 2-minute elevator pitch involving meeting someone for the first time, finding out who they are, talking to them about a product and tailoring the interaction to their needs. In real life, Big Easy could do this type of scenario without thinking. But since it was going to be in front of his peers, he spent the days leading up to the pitch evaluating every possible outcome scenario. All of which usually ended in him screwing up. Where does this come from? The more you plan, the more space you give for a disconnect between your belief in your abilities and your actual abilities. To fix my friend’s disconnect we went for a walk on the beach and acted out different scenarios. The first few started off rusty but were polished by the end. See? That wasn’t so bad, I said. You’re right, I think I’ve got this, he said. Turns out Big Easy won the best performance on the day. He’s going through to the next round. When you’re doubting your ability, ask yourself, am I planning too much instead of practising? In pursuit of a leader My friend got me a poster of someone we both look up to for my birthday. It’s hanging in my hallway. The other day I walked past it and had the audacious idea I could be that person, a person others look up to. Would you look at that… we’re coming full circle here. We started by saying too often people hold themselves back because they’re looking for someone to give them the go-ahead. A leader to say, “you can do it.” Well, guess what? You can become that person. Instead of just pursuing your interests, become a leader in your interests. Everyone is looking for someone to look up to — perhaps it’s you.
https://medium.com/the-post-grad-survival-guide/you-dont-need-permission-bf9c6d7689ef
['Daniel Bourke']
2020-10-10 10:17:32.457000+00:00
['Makers', 'Business', 'Marketing', 'Life', 'Creativity']
Moths in Space
Moths in Space A poem Which are the stars, and which are their paparazzi? The stars hold my gaze. They wink and sparkle and burn ablaze. My atoms: moths knocking each other askew in a whirlwind rocket to the modest moon. But it’s being afar that makes bright beautiful. The sun is a star that will eat you whole if you get too close. So we learn: don’t touch the stove. Be a flower, or a tree. Reach for the warmth, light and energy, but keep your roots in the cool shade of soil. Grow up with ambition; grow deep from turmoil. The top-heavy only tumble; The heavy-rooted never fly. So you see, there’s significance in our scars; only humans lose their way while reaching for the stars.
https://wormwoodtheweird.medium.com/moths-in-space-82e33c01a104
[]
2020-03-10 22:09:21.268000+00:00
['Self-awareness', 'Self', 'Creativity', 'Life Lessons', 'Poetry']
How could Santa leverage Blockchain & AI?
Made with Visme.com Made with https://piktochart.com/ How could Santa leverage Blockchain & AI? Until now, Santa has faced a series of challenges that seemed insurmountable, such as knowing the children’s requests in detail and being able to cross-check them against their good behavior during the year, in a safe, reliable manner, without compromising the children’s personal data, and while complying with new data regulations such as the European GDPR and US data protection rules. And he especially needs to do it efficiently and on time; that is why Santa has continued to modernize and incorporate new technologies to give each child what they really want, and to serve their parents and society in general. Artificial intelligence Made with https://piktochart.com/ First, using artificial intelligence solutions, they are able to take all the data from the channels where children interact, which today are mostly digital, such as social networks, text messages and game forums, and aggregate it to create large segments according to their location in the country, socio-economic level, type of family to which they belong, number of siblings, tastes, price of gifts according to the family’s level of expenditure, and age, among other relevant variables. Together with these characteristics, they factor in each child’s school performance, collaboration in household activities, development perspectives and areas of opportunity in terms of learning and training. Once that is done, of all the available gifts, they look for the ones that meet each kid’s criteria to choose the best, while validating certain restrictions such as quantity, price and popularity. To understand the conversations related to Christmas that are taking place both among children and among adults, natural language understanding is used to grasp what they are saying, not only literally but also in the context of the conversations and the sentiment they carry, in order to obtain the best findings and surface the main trends of the moment. Using machine learning models on all these data, you can get accurate answers regarding the ideal gift for each child according to what they want, their compliance with the rules at home and the family’s purchasing power. In this way, the demand for presents can be predicted and the production, distribution and transportation costs associated with Christmas can be reduced, while complete satisfaction of the children is achieved by delivering the best gift just in time and keeping their excitement alive. Blockchain Made with https://piktochart.com/ For the part of the data that is not completely digital, and for the precise follow-up of the gifts and the personal identification data of each child and family, Santa is using Blockchain-based solutions to have an accurate and reliable follow-up of his processes throughout the entire value chain. Smart Contracts Made with https://piktochart.com/ For children who still write their paper letters and those who do it digitally, they can verify that what they ask for is what they receive and that they have fully complied with their obligations as children, such as making their bed, getting good grades and eating all their vegetables. 
Therefore the system based on Smart Contracts on Blockhain allows the unalterable record of what each infant asks Santa, so that there are no changes or misunderstandings with what they receive although it is possible to make changes or changes that are registered without modifying the original request. With smart contracts you can ensure that the gifts are released to be delivered to children as soon as it is verified that they have fulfilled what is expected of them and this is registered in the system for further verification. This will ensure that gifts are only given to the children who deserve them. Personal information Made with https://piktochart.com/ In the sensitive information piece, the solution based on Blockchain will allow the entire value chain to collaborate to deliver the desired gift without having access to individual data that will be protected at all times by means of powerful encryption schemes to avoid misuse. Conclusion Thus, once again we see how Santa seeks to always be at the forefront to give the best service to their main clients, the children of the world and their parents. Using innovative solutions to control their processes and simplify them so they can dedicate themselves to what they do best: create a happy childhood and deliver smiles.
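The article stays at the conceptual level; as a playful, minimal sketch of the smart-contract idea (a hypothetical GiftContract class, not any real blockchain API), the release logic might look like this in Python:

```python
import hashlib
import json
from dataclasses import dataclass, field


@dataclass
class GiftContract:
    """Toy stand-in for a smart contract: an immutable request plus release conditions."""
    child: str
    requested_gift: str
    conditions: dict = field(default_factory=dict)  # e.g. {"made_bed": True, "ate_vegetables": True}

    def request_hash(self) -> str:
        # Hash the original request so later amendments can't silently rewrite it.
        payload = json.dumps({"child": self.child, "gift": self.requested_gift}, sort_keys=True)
        return hashlib.sha256(payload.encode()).hexdigest()

    def release_gift(self, verified_behavior: dict) -> bool:
        # The gift is only released once every condition is verified as fulfilled.
        return all(verified_behavior.get(k) == v for k, v in self.conditions.items())


contract = GiftContract("Ada", "telescope", {"made_bed": True, "ate_vegetables": True})
print(contract.request_hash()[:16])
print(contract.release_gift({"made_bed": True, "ate_vegetables": True}))  # True
```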
https://medium.com/aimarketingassociation/how-could-santa-leverage-blockchain-ai-ad1832d57125
['Gabriel Jiménez']
2018-11-29 00:20:27.396000+00:00
['Christmas', 'Data Science', 'AI', 'Artificial Intelligence', 'Blockchain']
How to Evaluate a Data Visualization
Image by Benjamin O. Tayo How to Evaluate a Data Visualization A good data visualization should have all essential components in place I. Introduction Data visualization is one of the most important branches in data science. It is one of the main tools used to analyze and study relationships between different variables. Data visualization can be used for descriptive analytics. Data visualization is also used in machine learning during data preprocessing and analysis; feature selection; model building; model testing; and model evaluation. In machine learning (predictive analytics), there are several metrics that can be used for model evaluation. For example, a supervised learning (continuous target) model can be evaluated using metrics such as the R2 score, mean square error (MSE), or mean absolute error (MAE). Furthermore, a supervised learning (discrete target) model, also referred to as a classification model, can be evaluated using metrics such as accuracy, precision, recall, F1 score, the area under the ROC curve (AUC), etc. Unlike machine learning models that can be evaluated using a single performance metric, a data visualization cannot be evaluated by looking at just a single metric. Instead, a good data visualization can be evaluated based on the characteristics or components of the data visualization. In this article, we discuss the essential components of good data visualization. In Section II, we present the various components of a good data visualization. In Section III, we examine some examples of good data visualizations. A brief summary concludes the article. II. Components of a Good Data Visualization A good data visualization is made up of several components that have to be pieced together to produce an end product: a) Data Component: An important first step in deciding how to visualize data is to know what type of data it is, e.g. categorical data, discrete data, continuous data, time series data, etc. b) Geometric Component: Here is where you decide what kind of visualization is suitable for your data, e.g. scatter plot, line graphs, barplots, histograms, qqplots, smooth densities, boxplots, pairplots, heatmaps, etc. c) Mapping Component: Here you need to decide what variable to use as your x-variable (independent or predictor variable) and what to use as your y-variable (dependent or target variable). This is important especially when your dataset is multi-dimensional with several features. d) Scale Component: Here you decide what kind of scales to use, e.g. linear scale, log scale, etc. e) Labels Component: This includes things like axis labels, titles, legends, font size to use, etc. f) Ethical Component: Here, you want to make sure your visualization tells the true story. You need to be aware of your actions when cleaning, summarizing, manipulating, and producing a data visualization and ensure you aren't using your visualization to mislead or manipulate your audience. III. Examples of Good Data Visualizations III.1 Barplots III.1.1 Simple barplot Figure 1. 2016 Market share of electric vehicles in selected countries. Image by Benjamin O. Tayo. III.1.2 Barplot with a categorical variable Figure 2. Quantity of advertising emails from Best Buy (BBY), Walgreens (WGN) and Walmart (WMT) in 2018. Image by Benjamin O. Tayo. III.1.3 Barplot for comparison Figure 3. 2020 Worldwide number of jobs by skill using LinkedIn search tool. Image by Benjamin O. Tayo. III.2 Density plot Figure 4.
The probability distribution of the sample means of a uniform distribution using Monte-Carlo simulation. Image by Benjamin O. Tayo. III.3 Scatter and line plots III.3.1 Simple scatter plot Figure 5. Ideal and fitted plots for the crew variable using multiple regression analysis. Image by Benjamin O. Tayo. III.3.2 Scatter plot for comparison Figure 6. Mean cross-validation scores for different regression models. Image by Benjamin O. Tayo. III.3.3 Multiple scatter plots Figure 7. Regression analysis using different values of the learning rate parameter. Image by Benjamin O. Tayo. III.3.4 Scatter pairplot Figure 8. Pairplot showing relationships between features in the dataset. Image source: Benjamin O. Tayo. III.4 Heatmap plot Figure 9. Covariance matrix plot showing correlation coefficients between features in the dataset. Image source: Benjamin O. Tayo. III. 5 Weather data plot Figure 10. Record temperatures for different months between 2005 to 2014. Image by Benjamin O. Tayo. IV. Summary and Conclusion In summary, we’ve discussed the essential components of good data visualization. Unlike in predictive modeling where a model can be evaluated using a single evaluation metric, in data visualization, evaluation is carried out by analyzing the visualization to make sure it contains all essential components of a good data visualization. Additional Data Science/Machine Learning Resources Data Science Minimum: 10 Essential Skills You Need to Know to Start Doing Data Science Data Science Curriculum Essential Maths Skills for Machine Learning 5 Best Degrees for Getting into Data Science Theoretical Foundations of Data Science — Should I Care or Simply Focus on Hands-on Skills? Machine Learning Project Planning How to Organize Your Data Science Project Productivity Tools for Large-scale Data Science Projects A Data Science Portfolio is More Valuable than a Resume For questions and inquiries, please email me: [email protected]
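As a quick, hedged illustration of the Section II components (data, geometric, mapping, scale, labels), here is a minimal matplotlib sketch using placeholder values rather than the data behind the figures above:

```python
import matplotlib.pyplot as plt

# Placeholder data (not the values from the figures in this article).
categories = ["A", "B", "C", "D", "E"]          # data component: categorical x-variable
values = [29, 12, 7, 3, 1]                      # mapping component: y-variable

fig, ax = plt.subplots(figsize=(6, 4))
ax.bar(categories, values, color="steelblue")   # geometric component: a barplot
ax.set_xlabel("Category")                       # labels component
ax.set_ylabel("Share (%)")
ax.set_title("Example barplot with all essential components")
ax.set_yscale("linear")                         # scale component (could be "log" instead)
fig.tight_layout()
plt.show()
```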
https://medium.com/towards-artificial-intelligence/how-to-evaluate-a-data-visualization-e04f75e5ae78
['Benjamin Obi Tayo Ph.D.']
2020-06-22 12:43:36.868000+00:00
['Python', 'Data Science', 'Matplotlib', 'Data Visualization', 'Descriptive Analytics']
The Five Rules You Need to Know to Keep Yourself From Getting Stuck
The Five Rules You Need to Know to Keep Yourself From Getting Stuck I've been doing a lot of research on Steve Jobs lately. Did you know that he used to practice his Macworld and product launch presentations around 200 times before the big event? Picture: Depositphotos Watch any Jobs presentation and you'll notice how smooth and effortless he appears on stage. You'll also notice that, when he does a demo (which is part of just about every Jobs presentation), the demo almost always works perfectly. On the rare occasion when something goes wrong, Jobs calmly moves on. He never, ever gets stuck. Rule number one for never getting stuck in business: The more prepared you are, the less chance you'll get stuck. 200 Rehearsals. There's a reason why Jobs was so, so good, and it's all the time and effort he put into practicing. The analogy in business is preparation. I know, from my own business experience, that the more prepared I was, the more likely I could respond to any business situation. For example, we were in the middle of raising our next round of funding. There were four investors that were a couple of weeks away from giving us term sheets. At the same time, we had just about exhausted our funding. In our board meeting, one of our investors, "Raul," surprised me by demanding the company be shut down. This was truly a make-or-break moment for us. From somewhere deep inside me, I said, "Instead of shutting the company down, why don't we put everyone on minimum wage. That will give us plenty of time to see if one of the investors gives us a term sheet." Raul agreed with my suggestion, and the company was saved (three of the four potential investors did give us term sheets). However, imagine if I had been stuck? The company would have died right then and there. Rule number two for never getting stuck in business: Do your contingency planning ahead of time. The reason I was able to give Raul a solution was all the preparation I had done. I had actually discussed the possibility of being shut down with Tina, our controller. Tina and I talked through the financial contingencies, and we knew that going to minimum wage would buy us a six-week lifeline. Rule number three for never getting stuck in business: Having a great team around you really, really helps. As much as I would like to take credit for the idea of going to minimum wage, it was Tina's idea, not mine. Tina was excellent at her job, and I was lucky to work with her. It was the same with the other members of the executive team. Adolfo, Dave, Jeroen, and Shoba always had contingencies for their projects. Rule number four for never getting stuck in business: Yes, you actually do need a business plan. Part of the reason we were able to have contingencies is we had a business plan. Without a plan, you don't know where you're going. And, if you don't know where you're going, you're much more likely to get stuck. Even if you do come up with an alternative, you're much more likely to make the wrong decision without a plan. You won't know how to quickly research whether your idea makes any sense. This new alternative is just something that sounds good in the moment. Rule number five for never getting stuck in business: Slow down if you do get stuck on a problem you haven't prepared for. Every once in a while, you will get a problem that you haven't prepared any contingencies for. What do you do then? Our natural tendency is to speed up our decision-making process when we're handed a problem we haven't expected.
Instead, you should try to slow down your decision-making process. Do you remember the scene in the movie "Apollo 13," when the team on the ground has to figure out how to design a carbon dioxide filter? This was not a problem anyone prepared for, yet the team solved it. It was all the preparation, just like Steve Jobs did for his presentations, that allowed the NASA team to solve all the problems they had to solve to get the Apollo 13 astronauts home. They didn't speed up, and they didn't panic. They slowed down and solved the problems. That's what you'll do too. For more, read: https://www.brettjfox.com/what-are-the-five-skills-you-need-to-be-a-great-ceo
https://medium.com/swlh/the-five-rules-you-need-to-know-to-keep-yourself-from-getting-stuck-3ecc23ce3756
['Brett Fox']
2020-09-04 06:42:02.109000+00:00
['Leadership', 'Business', 'Startup', 'Entrepreneurship', 'Venture Capital']
Why You Should Read More to Write Better
Writing and playing music have many parallels. When you learn to play the guitar, what do you learn first? Do you start composing your own songs and playing complex chord progressions right from the start? Unlikely. Most people who learn to play the guitar start by playing other people's songs. Then, when they've become comfortable with the instrument, they may or may not decide to create songs of their own. Why is it helpful to practice someone else's songs? Because it develops skill. It gives you the tools and musical vocabulary to express something yourself. The same is true with writing. Novelist Stephen King speaks of the importance of reading and writing: "The more fiction you read and write, the more you'll find your paragraphs forming on their own." — Stephen King, On Writing: A Memoir Of The Craft To become a good writer requires both reading and writing a lot. It's the same with music. If you never listen to or learn other people's songs, it can be very difficult, if not impossible, to learn to write your own songs. Paul Simon met his musical partner Art Garfunkel when they were children. Their idols were the Everly Brothers, who were known for their beautiful harmonies in songs such as "Wake Up Little Susie" and "Dream". The two boys would imitate the close two-part harmonies of the Everly Brothers until they developed a style of their own. We hear those beautiful harmonies in songs such as "Sounds of Silence" and "Scarborough Fair/Canticle". What they learned from imitation became their own style. What you learn from reading can develop into your writing style. Read read read. Write write write. "If you want to be a writer, you must do two things above all others: read a lot and write a lot. There's no way around these two things that I'm aware of, no shortcut." — Stephen King, On Writing: A Memoir Of The Craft Stephen describes himself as a slow reader, yet he reads seventy or eighty books a year, mostly fiction. Not for research, but because he likes to read. Do you like to read? If you want to be a good writer, it's important to take time to read a lot. If one of the most successful writers does it, maybe we should too. Will reading a lot influence your writing to sound like someone else? If you're a new writer, sounding like someone else isn't all that bad. If they're a good writer, that is. With time and practice, you will develop your own style. That's the key. Even the Beatles honed their craft by playing other people's songs. As a new band, they played in Hamburg, Germany every night for a couple of years. This helped them to develop the skill and unique sound that made them famous.
https://medium.com/the-innovation/why-should-you-read-more-to-write-better-8ed636875ba0
['Gary Mcbrine']
2020-12-24 17:07:27.371000+00:00
['Writing Tips', 'Writing', 'Reading', 'Art', 'Creativity']
Alcohol is a Tool of Systemic Oppression
Lurking behind underfunded education, peering through rows of income inequality and over-policing, is alcohol. Photo by James Sutton on Unsplash Before a community can have a high density of alcohol outlets, those seeking liquor licenses must be awarded them from their state or local municipality, most commonly some sort of Alcohol Beverage Control board. Communities with high densities of alcohol outlets are the result of permissive decisions made by state or municipal officials. Decisions made by state or municipal officials are decisions of the system. The decision to populate certain communities with high densities of alcohol outlets is a systemic decision. "Welcome to America, where profits are prioritized over the protection of life." These outlets are not your neighborhood Saturday lemonade stand; they dangerously encourage escapism and distribute oppression, not $.25 cups of poorly stirred Countrytime. Alcohol licenses are sought after because of the high profit margins of alcohol. As a culture, the dangers of alcohol have continued to be silenced in the interest of income. Welcome to America, where profits are prioritized over the protection of life. Beyond the dangers that an individual invites onto themselves by consuming alcohol, the menu of harmful outcomes related to alcohol includes community oppression. Greater demand for alcohol leads to the opening of a greater number of alcohol outlets. These outlets will cluster where consumer activity is greatest, and the number of outlets will proliferate until the demand is met. Economics 101, supply and demand. But one can't be fooled into thinking it's that clear. Alcohol is a money-maker, so surely owners of alcohol outlets will aim to maximize profits by locating their outlet in areas where rent is low. It's true. Greater numbers of outlets will tend to open in areas where rents are low, resulting in higher concentrations of outlets in low-income areas, exposing the nearby populations to the risks associated with these drinking places. Low-income isn't a segregating classification, but Blacks do face higher densities of liquor stores than do Whites. A 2000 analysis found that liquor stores are disproportionately located in predominantly Black census tracts. This is where alcohol becomes a tool of oppression. The over-concentration of alcohol outlets exposes Black communities to all the negative consequences of alcohol. There are significant and substantive relationships between outlet densities, alcohol-related traffic crashes, violence, and crime. A systemic tool of oppression. The decision to award liquor licenses to an outlet that will locate itself in a low-income community to meet demand and maximize profits is an intentional act controlled by state and local institutions. When that alcohol outlet is known to increase harm to the community and it is still created, that is informed oppression. When the only method of obtaining this license is through a state-controlled board, it becomes systemic oppression. The location of an alcohol outlet is only the beginning. Low-income communities face systemic racism, over-policing, police brutality, greater health risks, and are unable to rely on underfunded education systems to equip their populations with tools to cope with these inequities. Enter the all too convenient alcohol. Alcohol depresses the central nervous system for a temporary relaxation of the consumer, masking its numbing and escapist properties.
Disenfranchised from upward mobility, and exhausted from the constant struggle to overcome oppression, agents marketed as sips of stress relief are indulged. Oppressed individuals of low-income communities are overly exposed to alcohol access. A dangerous opportunity. Dangerous and wrong. For a community to be systemically oppressed, underfunded with education and opportunity, and have an overabundance of a substance that is the third leading cause of preventable death is wrong. Would you like your oppression on or off the rocks? In 1855, abolitionist movement leader Frederick Douglass waved off the bartender and chose water. Recognizing that slave masters carefully controlled the slaves' access to alcohol, Douglass found weekend and holiday breaks from normally encouraged abstinence to be a controlled promotion of drunkenness as a way to keep the slave in "a state of perpetual stupidity" and "disgust the slave with his freedom." Douglass further noted how the slave master's promotion of drunkenness reduced the risk of slave rebellions. "When a slave was drunk, the slaveholder had no fear that he would plan an insurrection; no fear that he would escape to the north. It was the sober, thinking slave who was dangerous, and needed the vigilance of his master to keep him a slave." — Frederick Douglass, 1855 Sobriety became a stairway to freedom. A century later, Malcolm X acknowledged alcohol as a tool of African American oppression: "Almost everyone in Harlem needed some kind of hustle to survive, and needed to stay high in some way to forget what they had to do to survive." — Malcolm X Like Douglass, Malcolm X saw alcohol as an agent that numbed the pain of cultural oppression and suppressed the potential for political protest and economic self-determination. To fail to address systemic racism or provide an equitable education, only to then, in great density, tempt communities with an agent known to enable escapism, is strategic oppression. It's a strategic and cowardly suppression of communities that for decades have been gaslighted with echoes of "You didn't have to drink it", equivalent to a modern-day "Just say no" campaign. Both utterly incorrect. Alcohol is an addictive agent of escape that the medical community at large acknowledges as an intoxicating, addictive, toxic, carcinogenic drug, and not a good choice as a therapeutic agent. A 2016 Systematic Review and Meta-Analysis of Alcohol Consumption and All-Cause Mortality found no health protections at low intake levels, and concluded that the public needs to be informed that drinking alcohol is very unlikely to improve their health. State and municipal officials are empowered with a trust that they will uplift, not oppress, their communities. Even science confirms there is no need to populate low-income communities with alcohol outlets, yet an ignorant American truth is that profits will be prioritized over life. By now, it's likely that a reader would excuse themselves from contributing to any oppression, but "I don't purchase alcohol from outlets in low-income areas" is an incorrect dismissal. The spirits, beer, cider, and wine purchased outside of low-income communities are also sold in low-income communities. A portion of the money from each purchase travels back upstream to fund the distribution channels and creators of the very same alcohol that continues to populate the outlets of low-income communities.
More, these funds contribute to an alcohol industry that spends a collective $2.2 Billion on traditional media advertising to convince us that alcohol can't be that bad. Worse, white Americans drink more alcohol than other populations. Each dollar spent contributes to oppression. Alcohol is a tool of oppression, and low-income communities are overexposed to it. When the distribution of alcohol outlets is controlled by officials who are educated on the dangers, yet continue to grant licenses to outlets in low-income communities, it becomes yet another tool of systemic oppression. Like so many aspects of society, only now are we beginning to understand how oppression is not an attitude but a product of systemic decisions, and how complicit we might be. Richie. Human. The difference between Seth Godin, The Morning Brew, and me? I respect your inbox, curating only one newsletter per month — Join my behind-the-words monthly newsletter to feel what it's like to receive a respectful newsletter. And, for those interested in what else I'm building, come over to RICKiRICKi. Meet me 🤠 • LinkedIn • Instagram • Twitter • TikTok • Facebook • YouTube
https://rickieticklez.medium.com/alcohol-is-a-tool-of-systemic-oppression-536f92d735d1
['Richie Crowley']
2020-06-08 15:21:17.072000+00:00
['Addiction', 'Equality', 'Society', 'Health', 'Lifestyle']
Git Branch Control when Deploying on AWS with Serverless Framework
If you are curious about the Serverless framework, you can read this excellent article explaining how to deploy a REST API with Serverless in 15 minutes. Deploy to the Cloud! We use the amazing Serverless toolkit to deploy to AWS. We manage the different environments using environment variables as explained in the Serverless documentation. To deploy to the desired environment, we simply run: sls deploy --stage TARGET_ENV --aws-profile sicara where TARGET_ENV is the target environment e.g., development , staging or production . The application environments A development process typically includes the following environments: The development environment: This is the environment where the developers run the code they are writing. Because the developers need to test their code intensively, the development environment can be broken sometimes — and it is even meant to be. The staging environment: This is where the product owner validates the new features. It should never be broken so that validation can be done at any time. The purpose of this environment is to show the developers' daily work, so features may be partially implemented. The production environment: This is the environment for the end-users. It should never be broken and features should be complete. The problem: you could deploy bad/buggy code When deploying, local files are zipped and uploaded directly to AWS. There is no control of the local files before they are uploaded, which is fine when deploying to the development environment but wrong for the staging or production environment. Local files uploaded to the staging environment in the Cloud In the screenshot above, the git branch sprint18/feature919/awesome-feature is deployed to the staging environment. This could be wrong for several reasons: the branch sprint18/feature919/awesome-feature has not yet been merged into the master branch, meaning it was not reviewed by the other developers. Thus, it could break the staging environment. the branch sprint18/feature919/awesome-feature is local, meaning it does not include features developed by other members of the team. Thus, it could induce regressions by making these other features disappear. there are pending changes not yet committed. Thus, the deployed code is not versioned and could eventually be lost. The solution: a Serverless plugin To prevent us from deploying the wrong Git branch to the wrong environment, we added a hook to our deployment by implementing a Serverless plugin. You can learn how to write your own Serverless plugins in this blog post. Here is the code: … Read the full article on Sicara's website here.
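The plugin code itself is elided above and lives in the full article; as a rough illustration of the same guard, here is a hypothetical Python pre-deploy check (not the Serverless plugin from the article, which is written as a NodeJS plugin):

```python
import subprocess
import sys


def git(*args):
    # Helper to run a git command and capture its output.
    result = subprocess.run(["git", *args], capture_output=True, text=True, check=True)
    return result.stdout.strip()


def check_branch(target_env):
    branch = git("rev-parse", "--abbrev-ref", "HEAD")
    dirty = git("status", "--porcelain")
    if target_env in ("staging", "production"):
        if branch != "master":
            sys.exit(f"Refusing to deploy branch '{branch}' to {target_env}: only master is allowed.")
        if dirty:
            sys.exit(f"Refusing to deploy to {target_env}: there are uncommitted changes.")


if __name__ == "__main__":
    stage = sys.argv[1] if len(sys.argv) > 1 else "development"
    check_branch(stage)
    # If the checks pass, the actual deployment command would run here, e.g.
    # subprocess.run(["sls", "deploy", "--stage", stage, "--aws-profile", "sicara"], check=True)
```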
https://medium.com/sicara/control-deployment-git-aws-serverless-plugin-cd12de13abb6
['Antoine Toubhans']
2020-01-30 17:28:13.314000+00:00
['Serverless', 'AWS', 'Data Engineering', 'Git']
On-Device AI Optimization — Leveraging Driving Data to Gain an Edge
Leveraging the Domain Characteristics of Driving Data When I began my journey at Nauto a year ago, I was commissioned to replace our existing object detector with a more efficient model. After some research and experimentation, I arrived at a new architecture that was able to achieve an accuracy improvement of over 40% mAP* relative to our current detector, while running almost twice as fast. The massive improvement comes largely thanks to the mobile-targeted NAS design framework pioneered by works such as MnasNet and MobileNetV3. *mAP (mean average precision) is a common metric for evaluating the predictive performance of object detectors. Relative to our current model, the new detector reduces device inference latency by 43.4% and improves mAP by 42.7%. Informed Channel Reduction However, the most interesting improvements surfaced as I looked for ways to further push the boundary of the latency/accuracy curve. During my research I came across an intriguing finding by the authors of Searching for MobileNetV3, a new state-of-the-art classification backbone for mobile devices. They discovered that when adapting the model for the task of object detection, they were able to reduce the channel counts of the final layers by a factor of 2 with no negative impact to accuracy. The underlying idea was simple: MobileNetV3 was originally optimized to classify the 1000 classes of the ImageNet dataset, while the object detection benchmark, COCO, only contains 90 output classes. Identifying a potential redundancy in layer size, the authors were able to achieve a 15% speedup without sacrificing a single percentage of mAP. Compared to popular benchmark datasets like ImageNet (1000) and COCO (90), the driving data we work with at Nauto consists of a minuscule number of distinct object classes. Intrigued, I wondered if I could take this optimization further. In our perception framework we are only interested in detecting a handful of classes such as vehicles, pedestrians, and traffic lights — in total amounting to a fraction of the 90–1000 class datasets used to optimize state-of-the-art architectures. So I began to experiment with reducing the late stage layers of my detector by factors of 4, 8, and all the way up to 32 and beyond. To my surprise, I found that after applying aggressive channel reduction I was able to reduce latency by 22%, while also improving accuracy by 11% mAP relative to the published model. My original hope was to achieve a modest inference speedup with limited negative side-effects — I never expected to actually see an improvement in accuracy. One possible explanation is that while the original architecture was optimal for the diverse 90 class COCO dataset, it is overparameterized for the relatively uniform road scenes experienced by our devices. In other words, removing redundant channels may have improved overall accuracy in a similar way to how dropout and weight decay help prevent overfit. At any rate, this optimization illustrates how improving along one axis of the latency/accuracy curve can impact performance in the other. In this case, however, the unintentional side-effect was positive. In fact, we broke the general rule of the trade-off by making a simultaneous improvement in both dimensions. Applying aggressive channel reduction to the late-stage layers of the detector resulted in a 22% speedup and an 11% improvement in mAP relative to the baseline model. 
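As a hedged sketch of the channel-reduction idea (a toy PyTorch head with made-up channel counts, not Nauto's actual detection architecture), reducing the late-stage layers simply means building the final convolutions with fewer output filters:

```python
import torch
import torch.nn as nn


def detection_head(in_channels=672, num_outputs=24, reduction=8):
    # Toy late-stage head: dividing the intermediate channel count by `reduction`
    # shrinks both compute and parameters; reduction=1 would mimic the published design.
    mid_channels = max(in_channels // reduction, num_outputs)
    return nn.Sequential(
        nn.Conv2d(in_channels, mid_channels, kernel_size=1),
        nn.ReLU(inplace=True),
        nn.Conv2d(mid_channels, num_outputs, kernel_size=3, padding=1),
    )


head = detection_head()
x = torch.randn(1, 672, 10, 10)   # a dummy feature map
print(head(x).shape)              # torch.Size([1, 24, 10, 10])
```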
Task-specific Data Augmentation The success I had with channel reduction motivated me to look for other ways to leverage the uniqueness of driving data. Something that immediately came to mind was a study done by an old colleague of mine while I worked at my previous company, DeepScale. Essentially, he found that conventional data augmentation strategies like random flip and random crop**, while generally effective at reducing overfit, can actually hurt performance on driving data. For his application, simply removing the default augmentors resulted in a 13% improvement in accuracy. **Random flip selects images at random to be flipped (typically across the vertical axis). Random crop selects images to be cropped and resized back to original resolution (effectively zooming in). Again, the underlying idea is simple: while benchmark datasets like COCO and ImageNet contain a diverse collection of objects captured by various cameras from many different angles, driving data is comparatively uniform. In most applications the camera positions are fixed, the intrinsics are known, and the image composition will generally consist of the sky, the road, and a few objects. By introducing randomly flipped and zoomed-in images, you may be teaching your model to generalize to perspectives it will never actually experience in real life. This type of overgeneralization can be detrimental to overall accuracy, particularly for mobile models where predictive capacity is already limited. Initially, I had adopted the augmentation scheme used by the original authors of my model. This included the standard horizontal flipping and cropping. I began my study by simply removing the random flip augmentor and retraining my model. As I had hoped, this single change led to a noticeable improvement in accuracy: about 4.5% relative mAP. (It must be noted that while we do operate in fleets around the world including left-hand-drive countries like Japan, my model was targeted for US deployment.) In the default scheme, random crop (top) will often generate distorted, zoomed-in images that compromise object proportions and exclude important landmarks. Random horizontal flip (bottom), while not as obviously harmful, dilutes the training data with orientations the model will never see in production (US). The constrained-crop augmentor takes a more conservative approach; its outputs more closely resemble the viewing angles of real world Nauto devices. I then shifted my focus to random crop. By default, the selected crop was required to have an area between 10% to 100% of the image, and an aspect ratio of 0.5 to 2.0. After examining some of the augmented data, I quickly discovered two things: first, many of the images were so zoomed-in that they excluded important context clues like lane-markers; and second, many of the objects were noticeably distorted in instances where a low aspect ratio crop was resized back to model resolution. I was tempted at first to remove random crop entirely as my colleague had, but I realized there is one important difference between Nauto and full stack self-driving companies. Because we’re deployed as an aftermarket platform in vehicles ranging from sedans to 18-wheelers, our camera position varies significantly across fleets and individual installations. My hypothesis was that a constrained, less-aggressive crop augmentor would still be beneficial as a tool to reflect such a distribution. I began experimenting by fixing the aspect ratio to match the input resolution and raising the minimum crop size. 
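A minimal sketch of the constrained-crop idea, in plain Python and under my own assumptions rather than Nauto's actual augmentation pipeline: fix the crop's aspect ratio to the image's and bound its area from below.

```python
import random


def constrained_crop(img_w, img_h, min_area=0.5, max_area=1.0):
    # Sample a crop that keeps the image's aspect ratio and covers at least
    # `min_area` of it, so object proportions and context are mostly preserved.
    area_frac = random.uniform(min_area, max_area)
    scale = area_frac ** 0.5                    # same linear scale factor on both axes
    crop_w, crop_h = int(img_w * scale), int(img_h * scale)
    x0 = random.randint(0, img_w - crop_w)
    y0 = random.randint(0, img_h - crop_h)
    return x0, y0, x0 + crop_w, y0 + crop_h     # crop box; resize back to (img_w, img_h) afterwards


print(constrained_crop(1280, 720))
```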
After a few iterations, I found that a constrained augmentor using a fixed ratio and minimum crop area of 50% improved accuracy by 4.4% mAP relative to the default cropping scheme. To test my hypothesis, I also repeated the trial with random crop completely removed. Unlike in my colleague's case, the no-augmentation scheme actually reduced mAP by 5.3% (1% worse than baseline), confirming that conservative cropping can still be beneficial in applications where camera position varies across vehicles. The final scheme (no-flip, constrained-crop) in total yields a 9.1% relative improvement over the original baseline (flip, crop) and a 10.2% improvement over no augmentation at all. The baseline augmentation scheme (grey) consists of random horizontal flip and random crop (aspect ratio ∈ [0.5, 2.0] and area ∈ [0.1, 1.0]). Removing random flip improved mAP by 4.5%. From there, removing random crop reduced mAP by 5.3% (-1% relative to baseline). Using a constrained crop (fixed ratio, area ∈ [0.25, 1.0]) improved mAP by 7.9% relative to baseline. And finally, the most constrained crop (fixed ratio, area ∈ [0.5, 1.0]) resulted in the largest improvement: 9.1% relative to baseline. Data-Driven Anchor Box Tuning I'll wrap it up with one more interesting finding. The majority of today's object detection architectures form predictions based on a set of default anchor boxes. These boxes (also sometimes called priors) typically span a range of scales and aspect ratios in order to better detect objects of various shapes and sizes. SSD default anchor boxes. Liu, Wei et al. "SSD: Single Shot MultiBox Detector." Lecture Notes in Computer Science (2016): 21–37. Crossref. Web. At this point, I was focusing my efforts on improving the core vehicle detector that drives our forward collision warning system (FCW). While sifting through our data, I couldn't help but once again notice its uniformity compared to competition benchmarks; overall image composition aside, the objects themselves seemed to fall into a very tight distribution of shapes and sizes. So I decided to take a deeper look at the vehicles in our dataset. Object distribution of FCW dataset. Scale is calculated for each object as bounding box height relative to image height (adjusted by object and image aspect ratios). The average object is relatively small, with a median scale of 0.057 and a 99th percentile of 0.31. Objects are also generally square, with a median aspect ratio of 1.02 and 99th percentile of 1.36. As it turns out, the majority of objects are relatively square, with more than 96% falling between aspect ratios of 0.5 and 1.5. This actually makes a lot of sense in the context of FCW, as the most relevant objects will generally be the rear profiles of vehicles further ahead on the road. The size distribution follows more of a long tail distribution, but even so, the largest objects occupy less than three fourths of the image in either dimension, while 99% occupy less than a third. Once again, I went back to reevaluate my initial assumptions. Up until now I had adopted the default set of anchor boxes used by the original authors, which ranged in scale between 0.2 and 0.9, using aspect ratios of ⅓, ½, 1, 2, and 3. While this comprehensive range makes sense for general-purpose object detection tasks like COCO, I wondered if I would again be able to find redundancy in the context of autonomous driving. I began by experimenting with a tighter range of aspect ratios, including {½, 1, 1½} and {¾, 1, 1¼}.
Surprisingly, the largest gain in both speed and accuracy came simply from using square anchors only, which effectively cut the total anchor count by a factor of 5. I then turned my attention to box sizes, realizing that the default range of [0.2, 0.9] overlapped with less than 5% of the objects in my dataset. Shrinking the anchor sizes to better match the object distribution yielded another modest improvement. In total, the new anchor boxes yielded an almost 20% inference speedup and a 2% relative mAP improvement across all object classes, sizes, and shapes. The baseline model uses anchor boxes with scales ∈ [0.2, 0.9] and aspect ratios ∈ {⅓, ½, 1, 2, 3}. Simply removing all but the square boxes resulted in a speedup of 18.5% with no negative impact to accuracy. Further tuning the boxes to match the scale range of the object distribution resulted in a modest 2.1% relative gain in mAP. Note: while the benchmarks within each optimization study are conducted in controlled experiments, a number of factors changed between individual studies. I chose not to present a cumulative improvement from start to finish in the interest of keeping this post short and focused on the 3 major optimizations.
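Returning to the anchor study: as a hedged sketch (illustrative scale values chosen to roughly match the object distribution described above, not the detector's real anchor generator), restricting anchors to a single square aspect ratio over a data-matched scale range might look like this:

```python
import numpy as np


def square_anchor_scales(min_scale=0.05, max_scale=0.35, num_levels=5):
    # One square anchor (aspect ratio 1.0) per feature-map level, with scales
    # spread over the range actually observed in the object distribution,
    # instead of 5 aspect ratios over [0.2, 0.9].
    return np.linspace(min_scale, max_scale, num_levels)


for s in square_anchor_scales():
    print(f"anchor: scale={s:.3f}, aspect_ratio=1.0")
```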
https://medium.com/engineering-nauto/on-device-ai-optimization-leveraging-driving-data-to-gain-an-edge-f204838b5dff
['Alex Wu']
2020-10-08 22:21:53.601000+00:00
['Data Science', 'Artificial Intelligence', 'Machine Learning', 'Automotive', 'Computer Vision']
Submission Guidelines (Updated)
Greetings! I bring to you some new updates regarding the submission guidelines for Know Thyself Heal Thyself. Please make sure to read them thoroughly in order to avoid any confusion in the future. Thank you! Themes: Spirituality Philosophy Holism Life Lessons Mindfulness Self Awareness/Improvement/Growth Love Other (private message/email me at [email protected] to clarify whether what you want to submit fits in with the publication or not) Types of writing: Essays Poetry Fiction Non fiction Storytelling (parables/tales/fables) Journal entries How-To Articles Both drafts and published stories are accepted (however, please make sure the published story is very recent, otherwise it will get lost in the publication's archive and won't receive much exposure). Only one article per day/per contributor is accepted. Anything that exceeds this number will be published the following day. Please make sure your private notes are open so we can communicate if there are any issues to resolve. Please note it may take longer than a few hours to publish your submissions on certain days such as Saturday/Sunday and holidays. Allow at least 12 hours before removing your draft and submitting it elsewhere. Submission frequency: As editor of KTHT, I reserve the right to remove contributors who have been inactive on the publication for longer than 90 days. The goal is to build a community of active writers who support, encourage, and interact with each other (clap, respond, tag, etc.). A weekly+weekend prompt, poetry challenge and stories/quotes/sayings are sent out weekly for inspiration. Write with us: If you'd like to become part of Know Thyself, leave a response to this article stating that you'd like to be added. It shouldn't take longer than 24 hours for you to be able to start submitting.
https://medium.com/know-thyself-heal-thyself/submission-guidelines-updated-975591746812
['𝘋𝘪𝘢𝘯𝘢 𝘊.']
2020-12-20 13:07:25.348000+00:00
['Writing', 'Submission Guidelines', 'Know Thyself Heal Thyself', 'Inspiration', 'Creativity']
What are *Args and **Kwargs in Python?
What are *Args and **Kwargs in Python? Boost your functions to the n-level. Photo by SpaceX on Unsplash If you have been programming in Python for a while, surely you'll have wondered at some point how to use a function properly. You went to the documentation and met *args and **kwargs inside the parameter list of that function. In matplotlib or in seaborn (to name but a few), these words regularly appear among the attributes of a function. Let's take a look at a signature. Image by Author When I started seeing these attributes, my brain usually acted as if they didn't exist. Until I realized the power of these attributes and the convenience they provide. Using them, your functions can be a lot more versatile and scalable. Usually, functions have some 'natural' attributes, and each parameter is typically predefined. For example, in the matplotlib function seen above, we have the "Data" attribute, where its value "=None" is predefined. This means that if you don't pass a value for "Data", the call of the function will return an empty figure. A None by default. What happens with the *args and **kwargs seen above? The args here are usually used to introduce the data: x1, x2, and so on. And the kwargs are used for the tuning of your graph. The properties of each graph are determined by the kwargs you introduce. Linewidth, linestyle, or marker are just some of the keyword attributes you may already be familiar with. So you have already been using this **kwargs thing unconsciously.
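Since the article explains *args and **kwargs without showing any code, here is a small sketch (generic function names of my own, not from the article) of how they behave and how keyword arguments can be forwarded to a matplotlib call:

```python
import matplotlib.pyplot as plt


def describe(*args, **kwargs):
    # *args collects extra positional arguments into a tuple,
    # **kwargs collects extra keyword arguments into a dict.
    print("positional:", args)
    print("keyword:   ", kwargs)


describe(1, 2, 3, linewidth=2, linestyle="--")
# positional: (1, 2, 3)
# keyword:    {'linewidth': 2, 'linestyle': '--'}


def my_plot(x, y, **kwargs):
    # Forward any styling kwargs (linewidth, linestyle, marker, ...) to plt.plot.
    plt.plot(x, y, **kwargs)


my_plot([0, 1, 2], [0, 1, 4], linewidth=2, marker="o")
plt.show()
```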
https://medium.com/python-in-plain-english/args-and-kwargs-d89bdf56d49b
['Toni Domenech']
2020-12-28 13:14:27.418000+00:00
['Python', 'Data Science', 'Matplotlib', 'Software Development', 'Programming']
Deploying Python, NodeJS & VueJS as Microservices
In this article I cover a number of different concepts and technologies (Web Sockets, Python, Node, Vue JS, Docker) and bring them all together in a microservice architecture. 3 Microservices Each component in this stack will be a microservice. I will take you through building each one, including running them in separate Docker containers. Why Microservices Microservices bring with them immense flexibility, scalability, and fault tolerance. Today we will only cover the flexibility. Scalability and fault tolerance require other technologies such as Kubernetes which are outside the scope of this article. The flexibility of microservices comes from each service being independent of the others and, importantly, only doing one job. Taking the example we are looking at in this article, we could easily swap out the VueJS front end and replace it with React. Or we could add a MongoDB microservice and store all tweets for analysis at a later date. Demo site I have set up a demo site of what I am going to develop in this article. You can view it here http://medium-microservices.simoncarr.co.uk/ Of course this is just the VueJS front end, but underneath all the goodness of Python, Node, WebSockets and Docker is at work. My approach to this project Here is how I am going to approach this project Develop the Python app and confirm we are receiving tweets Develop the Web Socket service Connect the Python App to the Web Socket service Develop the Vue JS front end and connect it to the WebSocket Service Create docker images for each of the 3 Micro Services Create a Docker stack using docker-compose Deploy the stack Code on GitHub All of the code for this project is available on GitHub Twitter Client (Python) https://github.com/simonjcarr/medium_microservices_twitter_client Websocket server (NodeJS) https://github.com/simonjcarr/medium_microservices_websocket_server Twitter stream UI (VueJS) https://github.com/simonjcarr/medium_microservices_twitter_ui docker-compose file https://github.com/simonjcarr/medium_microservices_docker_compose The images used in the docker-compose file need to be built first. You can use the Dockerfiles in the above repos or follow the rest of this article to learn how to do it. Creating a twitter app Before we can receive tweets from Twitter, we will have to register an app at https://developer.twitter.com/ Once you have registered an app, a set of API keys will be generated that we can use to connect to the Twitter API. In your browser navigate to https://developer.twitter.com/. If you don't already have one, register an account. After logging in, click on Developer portal Hover your mouse over your username and from the dropdown menu, click Apps Click Create App and fill in the form you're presented with. Once your app is created, you can retrieve your API credentials. There are two sets of credentials: Consumer API keys and Access token & Access token secret . You will need both sets of keys shortly for use in the Python App. Create a project file structure Create a folder called microservices We will create folders for each of our individual microservices as we go along. For now, we will just need another folder for the Python twitter client. In the microservices folder, create a subfolder called twitter_client Creating the Python Twitter client Make sure you are using Python 3. As of writing this article, I am using Python 3.7.6 I would recommend that you use a virtual python environment. I'm going to use pipenv.
If you don't already have pipenv you can install it with pip install --user pipenv Making sure you're in the twitter_client folder, run pipenv install tweepy python-dotenv I'm going to be using Tweepy to connect to the Twitter API. I am also installing python-dotenv . This will allow me to put the Twitter API keys in a file called .env . This file will not be uploaded to git, so I know my Twitter API keys will remain secret. Create two new files twitter_client.py .env Open the .env file in your code editor of choice and add the following lines. Replace the placeholders <…> with the relevant keys from the app you registered in your twitter developer account. Open twitter_client.py and add the following code. In the code above, I create an auth object from tweepy, then I create a class TwitterListener that extends tweepy.StreamListener . The on_status method simply prints the text of each tweet that is received. To start receiving tweets, I instantiate TwitterListener, create a new stream, and then add a filter to only receive tweets that contain one or all of the following values: javascript , nodejs , or python If you run twitter_client.py now you should see a stream of tweets scroll up the terminal. It will continue to display new tweets in real-time until you stop the script. If, like me, you are using pipenv, you can run the script like so pipenv run python twitter_client.py Creating the Web Socket service Now we know we can receive tweets, I am going to create the Web Socket service. I will use NodeJS to create this service. If you don't have NodeJS already installed you will need to visit the NodeJS Website https://nodejs.org/ and follow the instructions to download and install it. I am using NodeJS version 12.13.1 Create a new folder in the microservices folder called websocket_server Make sure you're in the websocket_server folder and enter the following command npm init -y This command will create a package.json file that will hold the project dependencies. Enter the following command to install the ws module npm i ws Create a new file called app.js and open it in your code editor and add the code below The web socket server code is surprisingly simple. I am running the server on port 8088 (line 3). Whenever the server receives a new message (line 13) it will resend the message to any clients that are connected by calling the broadcast function I have created. This simply loops through each connected client, sending the data to each (line 8). Start the server by running the following command in the console in the websocket_server folder. node app.js You won't see anything yet, as the server is not receiving any data. We are going to sort that out now by updating the Python Twitter client. Connect the twitter client to the web socket server Open a new terminal and navigate to the root of the twitter_client folder that holds the python code. We need to install a python module that will create a WebSocket client and connect to the WebSocket server. Enter the following command pipenv install websocket-client Using the client is even simpler than the WebSocket server and only requires 3 lines of code. Update twitter_client.py so it contains the following code. The lines I have added are Line 4 import create_connection Line 8 creates a new connection and assigns it to a variable ws Line 17 Each time a new tweet arrives, send the text of the tweet to the server I have also imported json on line 5.
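The embedded code gists are not reproduced in this text; as a rough reconstruction based on the description above (tweepy 3.x-style API, placeholder environment variable names, and line numbers that will not match the ones referenced), the combined twitter_client.py might look something like this:

```python
import os
import json

import tweepy
from dotenv import load_dotenv
from websocket import create_connection

load_dotenv()  # pull the Twitter API keys from the .env file

auth = tweepy.OAuthHandler(os.getenv("TWITTER_API_KEY"), os.getenv("TWITTER_API_SECRET"))
auth.set_access_token(os.getenv("TWITTER_ACCESS_TOKEN"), os.getenv("TWITTER_ACCESS_TOKEN_SECRET"))

ws = create_connection("ws://localhost:8088")  # connection to the WebSocket server


class TwitterListener(tweepy.StreamListener):
    def on_status(self, status):
        # Forward the raw tweet JSON to the WebSocket server as a string.
        ws.send(json.dumps(status._json))
        print(status.text)  # debug output until the VueJS client is ready


listener = TwitterListener()
stream = tweepy.Stream(auth=auth, listener=listener)
stream.filter(track=["javascript", "nodejs", "python"])
```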
The status object created by the tweepy module provides a number of items that represent each tweet. It also includes a _json item that represents the original raw JSON received from Twitter. I’m using json.dumps() to convert the JSON to a string that can be sent through the WebSocket connection. You can now restart the twitter_client.py with the command pipenv run python twitter_client.py If you open the terminal where the python twitter client is running, you’ll see tweets scrolling up the screen. I’m going to leave the console debug messages in the code until the VueJS client is complete, so I know everything is running. Creating the VueJS frontend This is where I start bringing it all together in a pretty front end, so tweets can be viewed in the web browser. Open a new terminal window and navigate to the microservices folder. If you don’t already have the VueJS CLI installed, enter the following command to install it. npm i -g @vue/cli You can find out more about VueJS at https://vuejs.org/ Now create a new Vue app by entering the following command. vue create twitter_ui You will first be prompted to pick a preset, use the up/down arrows on the keyboard to select Manually select features and hit enter. Now you're asked to select the features you want to install. You select/deselect features by using the up/down arrows and using the space bar to toggle a feature on or off. Here are the features you should choose for this project, make sure to select the same in your project. If any of these features are not available in your list of options, try running npm i -g @vue/cli to make sure you have the latest version. Hit enter when you're finished. You will be asked to select the configuration for each of the features you selected. Here are the choices you should make Choose a version of Vue.js: 2.x Pick a CSS pre-processor: Sass/SCSS (with node-sass) Pick a linter: ESLint + Prettier Pick additional Lint features: Lint on save Where to place config files: In dedicated config files Save this as a preset: No Once the installation has completed, cd into the folder twitter_ui created by the Vue CLI. I am going to style this app using tailwind CSS Installing and configuring tailwind in Vue is easy, simply enter the command below. vue add tailwind When prompted, choose Minimal . Job done! It’s helpful here to open the folder in your code editor. I use VS Code, so I just enter the following command. code . Once the editor has opened, back in the terminal start the app with the following command npm run serve Then open a browser and navigate to the URL that the app says it is running on. In my case that is http://localhost:8080/ You should see the default Vue app in your browser. Create a new file /src/components/Header.vue with the following code. Rename /src/components/HelloWorld.vue to Tweets.vue , open the renamed file and replace the contents as below. (we will come back to this file shortly) Open the file /src/App.vue this file provides the layout for our app. We import the two components above and tell Vue where to display them and I also add a sprinkling of tailwind CSS classes. Update the code in App.vue as below. That is the basic structure of the app complete. Now I will get the Tweets component talking to the WebSocket Server and displaying the incoming tweets. Open /src/components/Tweets.vue and update it as per the code below. In data() I have created two variables, tweets which will hold the tweets received in an array and connection which will hold the WebSocket connection object. 
In mounted() I connect to the WebSocket server and set up an onmessage event. This is triggered whenever the server broadcasts data. When the event is triggered the function converts the data to a JSON object with JSON.parse() and pushes it to the top of the tweets array using unshift . If the tweets array contains more than 20 tweets, the last tweet in the array is removed using pop() The onopen event is for debugging and simply logs to the console one time when the client establishes a connection with the server The component template loops through each tweet held in tweets and lays them out as a list. Each tweet includes a plethora of data. I'm pulling just a few items from each tweet screen_name , profile_image , text , followers , and following . If you look at the app in your browser you will see the tweets scrolling through the page in real-time as the good people on twitter send them. Try sending a tweet that includes one or all of the words javascript , nodejs , or python and watch your browser as it appears a few seconds later. Feel free to mention me @simonstweet along with a link to this article. Dockerising each microservice As the saying goes (in the UK at least), "there's more than one way to skin a cat". I am going to use docker because I believe that's the best approach for a number of reasons. This is not a docker tutorial, however. If you have not used Docker before you might feel a little overwhelmed; I know I was the first time I came across it. There are a lot of resources on the internet that provide great introductions to docker; YouTube might be your best bet initially. If you don't have Docker installed, you can take a look at the official Docker website. The approach I will take There are a number of different options for deploying containers. I am going to go with the simplest approach in this tutorial, which is to host them on my dev laptop. The process will be Some small changes to each microservice to make them Docker friendly Create a Docker image for each container Create a docker-compose file that builds containers from each image and configures them to talk to each other over the Docker network stack. Creating the Docker images Python Twitter Client Open a terminal and navigate to your python twitter client folder, for me that is /microservices/twitter_client Create a new file called requirements.txt in the root of the folder. Our Docker container will be running Python 3 and will have PIP available. When we create a container from the Docker image, PIP will use the requirements.txt file to make sure the required dependencies are installed. In our case that is python-dotenv , tweepy , and websocket-client Add the following lines to requirements.txt python-dotenv tweepy websocket-client Create a new file called Dockerfile and add the code below Environment Variables Before we create a Docker image from the Dockerfile, we need to tell Docker how to access the environment variables for the Twitter API. There's also a problem with the URL to the WebSocket server: it's hardcoded in twitter_client.py This is an issue because each docker container is a self-contained system in its own right, so localhost refers to that container. I need to provide a way to tell the Docker container the address of the WebSocket container. I will do that later in a file called docker-compose.yml. For now, I need the Python script to be able to access the environment variables that will be in the docker-compose file. Open twitter_client.py in your editor and make sure the code is updated as below.
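That updated file is also not shown in this text; a hedged sketch of the relevant change (the WEBSOCKET_SERVER_URL name comes from the docker-compose description below, while the Twitter key names are placeholders) is simply to read the configuration from the container's environment:

```python
import os

# Inside the container, configuration comes from the docker-compose environment
# variables rather than a local .env file or hard-coded values.
WEBSOCKET_SERVER_URL = os.environ["WEBSOCKET_SERVER_URL"]   # e.g. ws://websocketserver:8088
TWITTER_API_KEY = os.environ["TWITTER_API_KEY"]
TWITTER_API_SECRET = os.environ["TWITTER_API_SECRET"]

# ...and the WebSocket connection is then opened with the injected URL:
# ws = create_connection(WEBSOCKET_SERVER_URL)
```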
Notice the os.environ[] ; it provides access to the environment variables that will be stored in the docker-compose file. For now, I'm just going to build the image from the Dockerfile by running docker build -t microservices_twitter_client . The above command tells Docker to create a new image called microservices_twitter_client ; the . at the end tells Docker it can find the Dockerfile in the current folder. 2. Websocket Server In the terminal navigate to the folder holding the code for the Websocket server. For me that's /microservices/websocket_server Create a new file called Dockerfile . Just like before we will define the Docker image for this microservice in a Dockerfile and add the following code. In this Dockerfile, I am using Node version 12 as the base image for the container. The command WORKDIR creates a new directory /app in the image and tells Docker to use this as the working directory (base directory) for all further commands. So where you see ./ actually refers to /app I then use COPY to copy any files starting with package and ending with .json into the working directory. Then RUN npm install which will install all the dependencies for our application. Once the install is complete COPY app.js into the working directory. Line 11 makes port 8088 available to be mapped to the outside world, so other containers or apps can connect to the WebSocket server. You will see how that is used later when we create a docker-compose.yml file that will define the application stack. Finally, on line 13, I tell Docker to run the command node app.js You can now build this image so it's available to use later with docker build -t microservices_websocket_server . 3. VueJS Application In a terminal navigate to your VueJS application folder. For me that is /microservices/twitter_ui .dockerignore file As part of the docker build command I will run npm install ; this will create the node_modules folder inside the container and ensure it has the latest updated dependencies. As such I don't want the node_modules folder on my development machine copied into the container. This is accomplished by creating a .dockerignore file and listing the files that we want Docker to ignore. Create a new file in the root of the application folder and call it .dockerignore ; it only needs one line added to it. node_modules Environment variables in VueJS With frontend Javascript apps we have to consider that the app is running in the browser rather than on the server. The implication of this is that environment variables on the server are not available to the app running in the browser. Our app currently has a hardcoded URL to the WebSocket server. The best practice with VueJS is to create a .env file in the root of the application for variables our application needs access to. There is a lot more to .env files when you get into the details of different environments such as Dev, Test, PreProd, and Prod. We will keep it simple here and just create a single .env file. In the root of the VueJS application create a file called .env Add this single line of code to the file. VUE_APP_WEBSOCKET_SERVER_URL=ws://192.168.30.100:8088 Now open Tweets.vue which is located in /src/components/Tweets.vue Replace ws://localhost:8088 with process.env.VUE_APP_WEBSOCKET_SERVER_URL when you're done, the whole line should look like this Creating the Dockerfile Create a new file in the root of the application folder called Dockerfile and add the code shown below. This Dockerfile is similar in structure to the others.
For the VueJS image, a key difference is that to deploy the application into production, we need to build it first. The build process creates an index.html file that contains references to minified JavaScript. That index.html file needs to be made available via a web server. We could have set up an NGINX server container, but a simpler approach for this use case is to install an npm package, http-server, which I do on line 3. Following that, I go through a similar process as I did for the NodeJS WebSocket server. Once the files have been copied into the container, I build the application with npm run build. This creates a dist folder to hold all the build files. Finally, I run the http-server and tell it to serve the dist folder. Pull everything together with a docker-compose file If you don't have docker-compose installed (it does not come with Docker), you can visit https://docs.docker.com/compose/install/ to find out how to install it on your OS. We are almost done; there is just one last thing to do before we start our microservices application stack. We need a way to tell Docker what that stack comprises: the relationship between each of the containers and the configuration for each container, i.e. environment variables and which port each container should expose to the outside world. Create another folder in the microservices folder at the same level as the twitter_client, twitter_ui, and websocket_server folders. Name it docker. Navigate into the new docker folder, create a new file called docker-compose.yml, and add the following code. Take care to maintain the correct indentation; yml files are sensitive to inconsistent indentation. The docker-compose file lists the services that I want to run in this stack. I have created three services: websocketserver, twitterclient, and twitterui. Docker runs its own internal network and assigns its own IP addresses internally. The service names are also essentially hostnames for each service on the Docker network and will map to an IP address inside Docker. You can see that I make use of this on line 12, where I set the value of the environment variable WEBSOCKET_SERVER_URL to ws://websocketserver:8088. Environment variables for the Twitter app are set in the twitterclient service. You can get these from the Twitter developer website and the app that you created earlier. We also have some dependencies in our stack. Both the VueJS app and the Python Twitter client rely on the WebSocket server being up and running before they start. If this wasn't the case, there would be no server for them to connect to. You can see that each service has an image. You should recognize these as the images we created when we ran docker build for each of our services. Finally, the websocketserver and the twitterui both require that their internal ports be made available outside the container. This is achieved by mapping external_port : internal_port. In this case, both internal and external ports are the same, but they don't have to be and often are not. Rough sketches of the VueJS Dockerfile and the docker-compose file appear below. Running the stack In order to make sure that everything is running correctly, I will first run the stack in what is called attached mode. This means the stack will only be available for as long as the terminal is open. This is not ideal for production use, but for testing it means I get to see any errors that might be generated. I also still have the console.log statement logging to the console, which will help me know that everything is working.
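Since the VueJS Dockerfile and the docker-compose.yml are also embedded gists, here are rough sketches consistent with the descriptions above. These are illustrative reconstructions, not the author's exact files; in particular, the VueJS image name and the Twitter variable names are assumptions.

# Dockerfile for the VueJS app (sketch: build the app, then serve dist with http-server)
FROM node:12
WORKDIR /app
# Install a simple static file server (http-server) to serve the built files
RUN npm install -g http-server
COPY package*.json ./
RUN npm install
COPY . .
# Build the production bundle into the dist folder
RUN npm run build
EXPOSE 8080
# Serve the dist folder (http-server listens on port 8080 by default)
CMD ["http-server", "dist"]

# docker-compose.yml (sketch of the three-service stack described above)
version: "3"
services:
  websocketserver:
    image: microservices_websocket_server
    ports:
      - "8088:8088"
  twitterclient:
    image: microservices_twitter_client
    environment:
      # Reach the WebSocket container by its service name on the Docker network
      - WEBSOCKET_SERVER_URL=ws://websocketserver:8088
      # Twitter API credentials (variable names are placeholders; use your own values)
      - TWITTER_CONSUMER_KEY=your_key_here
      - TWITTER_CONSUMER_SECRET=your_secret_here
      - TWITTER_ACCESS_TOKEN=your_token_here
      - TWITTER_ACCESS_TOKEN_SECRET=your_token_secret_here
    depends_on:
      - websocketserver
  twitterui:
    image: microservices_twitter_ui   # image name assumed; build it like the others
    ports:
      - "8080:8080"
    depends_on:
      - websocketserver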
Run the following command in the terminal, in the docker folder, the same folder that contains the docker-compose.yml file:
docker-compose up
After a few seconds, if everything went well, you should see tweets in the form of JSON scrolling up the terminal. Now open your browser and visit http://localhost:8080. You should see tweets streaming into the app in real-time. If you now go back into the terminal and hit Ctrl+c, this will shut down the stack. Now run the stack again in detached mode using the -d switch:
docker-compose up -d
Once the stack is running, you will be back at the terminal and the stack will be running in the background. You can see which containers are running by issuing the command:
docker ps
If you need to shut down the stack when in detached mode, open a terminal, make sure you're in the same folder as your docker-compose.yml file, and issue the command:
docker-compose down
Conclusion I have covered a lot in this article, and there is much that I have missed. Some important best practices are missing, but including them would have diverted from the concept I was trying to put across. In summary, though, you have seen how it is possible to stream real-time data from Python into VueJS. You have also learned how to deploy a WebSocket server, which is the central technology that glues this stack together. You also saw how you can create Docker containers for apps and use docker-compose to define and create application stacks. I hope you enjoyed this article; I enjoyed writing it. Please leave comments below if you are struggling, or let me know if you think there is a better way of achieving anything that I discussed here.
https://medium.com/swlh/developing-a-full-microservices-application-stack-5c9fe14c870f
['Simon Carr']
2020-08-19 12:44:15.619000+00:00
['JavaScript', 'Nodejs', 'Vuejs', 'Docker', 'Microservices']
COVID-19 Vaccine Is Not Effective as Companies Tell Us
The world is sinking deeper because of the coronavirus. The coronavirus, which originated in China and then spread across Europe and America, has done disastrous damage to the American economy and health system. Many vaccine companies have developed a vaccine to eradicate the coronavirus, but they are rushing to make it available to the public. The main problem is how effective the COVID-19 vaccines prepared by these companies really are. The companies touting their efficiency and effectiveness are not sure how their vaccines will perform in the general public. Many politicians, like Donald Trump, have used a vaccine to grab votes, and many world leaders still follow that approach. Recently, I read about a mutation of the coronavirus in the UK. Many countries in the EU have imposed lockdowns to curb the new spread of the coronavirus. The coronavirus has unusual properties, such as changing its RNA according to its surroundings, which leads to fast mutation and spread. Day by day the coronavirus will become more dangerous, as it mutates faster than vaccines can be upgraded. Vaccine companies are looking for profit, and they are rushing storage and delivery. It is harmful to deliver a vaccine with little research and few trials. Many people will refuse to take the vaccine. You can compare corona with polio, which has been with us for a long time. The series of vaccinations will continue, as epidemic waves of corona arrive frequently around the world. We must wear masks and sanitize our hands to curb the spread of the coronavirus. Social distancing and masks can curb the spread better than a vaccine. I think the more we can implement self-restrictions, the more success we will find against COVID-19.
https://medium.com/afwp/covid-19-vaccine-is-not-effective-as-companies-tell-us-796772dc38a9
['Mike Ortega']
2020-12-25 15:32:32.455000+00:00
['Health', 'Lockdown', 'Coronavirus', 'Vaccines', 'Covid 19']
Get Started with AI in 15 Minutes Using Text Classification on Airbnb reviews
Watson Natural Language Classifier (NLC) is a text classification (aka text categorization) service that enables developers to quickly train and integrate natural language processing (NLP) capabilities into their applications. Once you have the training data, you can set up a classification model (aka a classifier) in 15 minutes or less to label text with your custom labels. In this tutorial, I will show you how to create two classifiers using publicly available Airbnb reviews data. One of the more common text classification patterns I’ve seen is analyzing and labeling customer reviews. Understanding unstructured customer feedback enables organizations to make informed decisions that’ll improve customer experience or resolve issues faster. Sentiment analysis is perhaps one of the most common text classification cross-industry use cases, as it empowers businesses to understand voice and tone of their customers. However, companies also need to organize their data into categories that are specific to their business. This often requires data scientists to build custom machine learning models. With NLC, you can build a custom model in minutes without any machine learning experience. Training data To obtain training data, I went to insideairbnb.com and downloaded the ‘reviews.csv.gz’ file from Austin, Texas. This file contains thousands of real reviews from Airbnbs in Austin. Next, I defined my labels. I decided to build two classifiers one for categorizing the reviews and the other for sentiment. It was best to separate the training data for each and create separate classifiers in order to achieve the highest accuracy possible. The labels I defined are below: Category Classifier: Environment, Location, Cleanliness, Hospitality, Noise, Amenities, Communication, Other Environment, Location, Cleanliness, Hospitality, Noise, Amenities, Communication, Other Sentiment Classifier: Positive, Neutral, Negative Both sets of training data only contain 219 rows (examples). That isn’t a lot of examples in the grand scheme of things. However, one of the benefits of Watson Natural Language Classifier is that it works better on smaller sets of examples. Feel free to continue to add to the training data once you have downloaded the file to further improve the accuracy! Training the Classifiers In this tutorial, I will be using Watson Studio. If you would prefer to use the API directly, check out the documentation. Create an instance of NLC and launch the tooling (Note: if you get lost, please refer to the embedded video at the bottom of this post): Go to the Natural Language Classifier page in the IBM Cloud Catalog. Sign up for a free IBM Cloud account or log in. Click Create. Once an instance is created, you will be taken to the below screen. Click Launch tool to open the tooling in Watson Studio. Open tooling from IBM Cloud Catalog Train your classifier Download the training data. Two columns is all you need! That’s how easy it is to train a classifier in NLC! Download here! Click “Create Model” to start building your classifier(s). Begin creating your classifier Next, you’ll need to create a project in Watson Studio. If you do not have an instance of Watson Studio created then you will need to provision a one on the Lite plan. After you have provisioned your instance of Watson Studio, refresh the page and name your Watson Studio project. Then click “Create” in the bottom right hand corner. 
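For reference, each training file is just a two-column CSV with no header row: the example text in the first column and its label in the second. The rows below are invented illustrations of the format, not rows from the actual downloadable files:

"Just a short walk to downtown and the river, perfect spot.",Location
"Our host checked in on us every day and left great local tips.",Hospitality
"The apartment was spotless when we arrived.",Cleanliness
"Street noise kept us up most nights.",Noise

"We loved every minute of our stay!",Positive
"The place was okay, nothing special.",Neutral
"We would not stay here again.",Negative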
Upload the training data for either the Categories or Sentiment Click Train Model (Training will take approximately 5-10 minutes for each classifier) Uploading training data and training a classifier Testing your classifier Now that training is done, you can test your classifier! Click into your classifier and go to the Test page. Enter any text and see how Watson classifies it. The classifier works best when using actual Airbnb reviews — so test it out with data from insideairbnb.com. If the classifier makes a mistake, simply click Edit and Retrain in the top right corner and add more training examples to your training data. You’ll be classifying Airbnb reviews in no time! Want to hook your classifiers up to a user interface? Check out the Github repo for the Natural Language Classifier demo. This repo will give you the Node.JS for the NLC demo so you can hook your classifiers up to a simple user experience. Classify Airbnb Reviews with Watson NLC Helpful Links Product Page | Documentation | Sample apps and code | API Reference Want to see what else Watson can do with Airbnb reviews? Check out the new demo for Watson Discovery Service!
https://medium.com/ibm-watson/get-started-with-ai-in-15-minutes-28039853e6f3
['Reid Francis']
2018-11-20 21:54:07.039000+00:00
['Tutorial', 'Machine Learning', 'Classification', 'Development', 'AI']
How to create a confusion matrix with the test result in your training model using matplotlib
How to create a confusion matrix with the test result in your training model using matplotlib Alex G. · Nov 9 · 3 min read
confusion matrix (photo/GIF by author): https://github.com/oleksandr-g-rock/How_to_create_confusion_matrix/blob/main/1_eg-HeEAMk8mtmblkHymRpQ.png
Short summary: This article is mostly the same as my previous article, but with a few small changes. If you just want to look at the notebook or run the code, please click here.
So, let's start :)
I am going to explain how to create a confusion matrix when you are building an image classification model. First, we need to create 3 folders (testing, train, val) in our dataset, as in the screenshot, with approximately this share of image files per folder: testing: 5%, train: 15%, val: 80%. The "train" folder will be used for training the model, the "val" folder will be used to show results per epoch, and the "testing" folder will be used only for testing the model on new images.
First, we need to define the folders:

# folders with train dir & val dir
train_dir = '/content/flowers/train/'
test_dir = '/content/flowers/val/'
testing_dir = '/content/flowers/testing/'
input_shape = (image_size, image_size, 3)

In the next step, we need to add an image data generator for testing, with the shuffle parameter set to False:

testing_datagen = ImageDataGenerator(rescale=1. / 255)
testing_generator = testing_datagen.flow_from_directory(
    testing_dir,
    target_size=(image_size, image_size),
    batch_size=batch_size,
    shuffle=False,
    class_mode='categorical')

After training the model, we should run the following code to check the result on the testing data:

test_score = model.evaluate_generator(testing_generator, batch_size)
print("[INFO] accuracy: {:.2f}%".format(test_score[1] * 100))
print("[INFO] Loss: ", test_score[0])

We should have a result like this:
Then run this code to show the confusion matrix with the test result of your trained model:

import numpy as np
import matplotlib.pyplot as plt

# Plot the confusion matrix. Set normalize = True/False
def plot_confusion_matrix(cm, classes, normalize=True, title='Confusion matrix', cmap=plt.cm.Blues):
    """
    This function prints and plots the confusion matrix.
    Normalization can be applied by setting `normalize=True`.
    """
    plt.figure(figsize=(20, 20))
    plt.imshow(cm, interpolation='nearest', cmap=cmap)
    plt.title(title)
    plt.colorbar()
    tick_marks = np.arange(len(classes))
    plt.xticks(tick_marks, classes, rotation=45)
    plt.yticks(tick_marks, classes)
    if normalize:
        cm = cm.astype('float') / cm.sum(axis=1)[:, np.newaxis]
        cm = np.around(cm, decimals=2)
        cm[np.isnan(cm)] = 0.0
        print("Normalized confusion matrix")
    else:
        print('Confusion matrix, without normalization')
    thresh = cm.max() / 2.
    for i, j in itertools.product(range(cm.shape[0]), range(cm.shape[1])):
        plt.text(j, i, cm[i, j],
                 horizontalalignment="center",
                 color="white" if cm[i, j] > thresh else "black")
    plt.tight_layout()
    plt.ylabel('True label')
    plt.xlabel('Predicted label')

# Print the target names
from sklearn.metrics import classification_report, confusion_matrix
import itertools

# shuffle=False
target_names = []
for key in train_generator.class_indices:
    target_names.append(key)
# print(target_names)

# Confusion matrix
Y_pred = model.predict_generator(testing_generator)
y_pred = np.argmax(Y_pred, axis=1)
print('Confusion Matrix')
cm = confusion_matrix(testing_generator.classes, y_pred)
plot_confusion_matrix(cm, target_names, title='Confusion Matrix')

# Print classification report
print('Classification Report')
print(classification_report(testing_generator.classes, y_pred, target_names=target_names))

We should have a result like this:
confusion matrix (photo/GIF by author): https://github.com/oleksandr-g-rock/How_to_create_confusion_matrix/blob/main/1_0KCwX9fDz9qC-Zo4K5KTNQ.png
Result: We created the confusion matrix via matplotlib. If you just want to look at the notebook or run the code, please click here.
https://medium.com/analytics-vidhya/how-to-create-a-confusion-matrix-with-the-test-result-in-your-training-model-802b1315d8ee
['Alex G.']
2020-12-18 16:19:55.718000+00:00
['Machine Learning', 'Matplotlib', 'Data Visualization', 'Data Science', 'Python']
8 Important Lessons for Programmers That I’ve Learned at 18
Don't Fall for the Appeal to Tradition The logical fallacy "appeal to tradition" is an issue that can frequently appear among software developers. The key phrase that tells you you're falling into this trap is "we've always done it this way!" Especially as a new developer, it can be hard to suggest new ways of doing things. If a more experienced developer is falling into this logical fallacy, it's important to call them out on it. Or if you are an experienced developer, don't be afraid to try new things. Everyone benefits if there is a faster, more efficient way to do something than the old way, so always keep an eye out for ways to improve. Developers like to do things in ways they know will work and that let them get the task done quickly. This usually means doing things in a way they already know, but it certainly doesn't mean that way is best.
https://medium.com/better-programming/8-important-lessons-for-programmers-that-ive-learned-at-18-6e954634322e
['Alec Jones']
2020-05-26 15:18:37.299000+00:00
['Programming', 'JavaScript', 'Software Development', 'Python', 'Startup']
Too Many Small Steps, Not Enough Leaps
I was driving home the other day, noticed all the above-ground telephone/power lines, and thought to myself: this is not the 21st century I thought I’d be living in. When I was growing up, the 21st century was the distant future, the stuff of science fiction. We’d have flying cars, personal robots, interstellar travel, artificial food, and, of course, tricorders. There’d be computers, although not PCs. Still, we’d have been baffled by smartphones, GPS, or the Internet. We’d have been even more flummoxed by women in the workforce or #BlackLivesMatter. We’re living in the future, but we’re also hanging on to the past, and that applies especially to healthcare. We all poke fun at the persistence of the fax, but I’d also point out that currently our best advice for dealing with the COVID-19 pandemic is pretty much what it was for the 1918 Spanish Flu pandemic: masks and distancing (and we’re facing similar resistance). One would have hoped the 21st century would have found us better equipped. So I was heartened to read an op-ed in The Washington Post by Regina Dugan, PhD. Dr. Dugan calls for a “Health Age,” akin to how Sputnik set off the Space Age. The pandemic, she says, “is the kind of event that alters the course of history so much that we measure time by it: before the pandemic — and after.” In a Health Age, she predicts: We could choose to build a future where no one must wait on an organ donor list. Where the mechanistic underpinnings of mental health are understood and treatable. Where clinical trials happen in months, not years. Where our health span coincides with our life span and we are healthy to our last breath. Dr. Dugan has no doubt we can build a Health Age; “The question, instead, is whether we will.” Dr. Dugan head up Wellcome Leap, a non-profit spin-off from Wellcome, a UK-based Trust that spends billions of dollars to help people “explore great ideas,” particularly related to health. Wellcome Leap was originally funded in 2018, but only this past May installed Dr. Dugan as CEO, with the charge to “undertake bold, unconventional programmes and fund them at scale.” Dr. Dugan is a former Director of Darpa, so she knows something about funding unconventional ideas. Leap Board Chair Jay Flatley promised: “Leap will pursue the most challenging projects that would not otherwise be attempted or funded. The unique operating model provides the potential to make impactful, rapid advances on the future of health.” Now, when I said earlier that our current approach to the pandemic is scarily similar to the response to the 1918 pandemic, that wasn’t being quite fair. We have better testing (although not nearly good enough), more therapeutic options (although none with great results yet), all kinds of personal protective equipment (although still in short supply), and better data (although shamefully inconsistent and delayed). We’re developing vaccines at a record pace, using truly 21st century approaches like mRNA or bioprinting. The problem is, we knew a pandemic could come, we knew the things that would need to be done to deal with it, and yet we — and the “we” applies globally — fumbled the actions at every step. We imposed lockdowns, but usually too late, and then reopened them too soon. Our healthcare organizations keep getting overwhelmed with COVID-19 cases, yet, cut off from their non-pandemic revenue sources, are drowning in losses. Due to layoffs, millions have lost their health insurance. 
People are avoiding care, even for essential needs like heart attacks or premature births. Our power lines are showing. The the hurricane that is the pandemic is knocking them down at will. We might have some Health Age technologies available but not a Health Age mentality about how, when, and where to use them. Dr. Dugan thinks she knows what we should be doing: To build a Health Age, however, we will need to do more. We will need an international coalition of like-minded leaders to shape a unified global effort; we will need to invest at Space Age levels, publicly and privately, to fund research and development. And critically, we’ll need to supplement those approaches with bold, risk-tolerant efforts — something akin to a DARPA, but for global health. Unfortunately, none of that sounds like anything our current environment supports. The U.S. is vowing to leave the World Health Organization and is buying up the worlds’s supply of Remdesivir, one of the few even moderately effective treatment options. An “international coalition of like-minded leaders” seems hard to come by. Plus, only half of Americans say they’d take a vaccine even when it is here. If COVID-19 is our Sputnik moment, we’re reacting to it as we did Sputnik, setting off insular Space Races that competed rather than cooperated, focused narrowly on “winning” instead of discovering. We will, indeed, spend trillions on our pandemic responses, but most will be short-term, short-sighted programs that apply band-aids instead of establishing sustainable platforms and approaches. We’re reacting to the present, not reimagining the future. Credit: Darpa Darpa’s mission is “to make pivotal investments in breakthrough technologies for national security,” and it “explicitly reaches for transformational change instead of incremental advances.” Her background at Darpa make Dr. Dugan uniquely qualified to bring this attitude to Leap, and to apply it to healthcare. The hard part is remembering that it is not about winning the current war, or even the next one, but about preparing for the wars we’re not even thinking about yet. Most of our population are children of the 20th century. Our healthcare system in 2020 may have some snazzier tools, techniques, and technologies than it did in the 20th century, but it is mostly still pretty familiar to us from then. If we truly want a Health Age, we should aspire to develop things that would look familiar to someone from the 22nd century, not the 20th. Every time I read about the latest finding about our microbiome I think about how little we still know about what drives our health, just as our growing attention to social determinants of health reminds me how we need to drastically rethink what the focus of our “healthcare system” should be. Not more effective vaccines but the things that make vaccines obsolete. Not better surgical techniques but the things that make surgery unnecessary. Not just better health care but better health that requires less health care. If we’re going to dream, let’s dream big. That’s the kind of Leap we need. Please follow me on Medium and on Twitter (@kimbbellard), and don’t forget to share if you liked the article!
https://kimbellard.medium.com/too-many-small-steps-not-enough-leaps-d25caa18a20
['Kim Bellard']
2020-07-27 22:25:28.161000+00:00
['Technology', 'Innovation', 'Health', 'Future', 'Healthcare']
A Step-by-Step Guide to Building Event-Driven Microservices With RabbitMQ
Step-by-Step Guide Here is TekLoon's Dev Rule No 1: Always start with the easiest part. The easy entry will allow you to complete the first task and gain the confidence to face the upcoming challenges. Step 1: TimerService development Let's start by creating our TimerService: Code for TimerService. View full code here. Let's look at what we did here. We: created a connection to our RabbitMQ server, created a channel, created a direct, non-durable 'quote' exchange, and published a message to the exchange at each interval (60 seconds). This code fulfills the purpose of TimerService. Step 2: QuoteService development Let's continue with our QuoteService development. The main function of this service is to scrape a motivational quote from the Internet when it receives a message from the RabbitMQ queue. First, create a consumer and listen for the event sent by TimerService. Let's make it very simple; when we receive the message, we call the scrapQuoteOfTheDay() function: Next, we proceed to scrape the quote from the Internet: I'm getting my quotes from wisdomquotes.com and writing them to a JSON file. Pretty straightforward, right? Now that we have our business logic in place, let's make an HTML page to display the quotes that have been scraped. After doing some research, the simplest way to dynamically render the HTML in ExpressJS is by using the template engine Pug. Let me show you how I did it: This is the UI that will be created based on the index.pug template. You can get the full source code for QuoteService here. Step 3: Express Server settings for QuoteService In order for QuoteService to be able to listen to the RabbitMQ queue, we have to do some setup during our server initialization. We also need to render the quotes HTML based on the index.pug template we created in step two. Aside from booting up the web server, this configuration fulfills the following purposes: it renders the index.pug template (line 17) and listens to the RabbitMQ queue (line 23). Step 4: Run and Test It! Let's run our project and test it locally. Ultimately, you can get my full source code from GitHub. Boot our QuoteService component by going to the QuoteService folder and running npm run start. Then boot up our TimerService component by going to the TimerService folder and running npm run start. Below are screenshots of what you should see if you run it successfully: TimerService publishes two events to the RabbitMQ exchange; QuoteService consumes the two events while listening to the RabbitMQ queue.
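The gists with the actual TimerService and QuoteService code are not reproduced in this text. As a rough sketch only, assuming the popular amqplib package (the article does not say which RabbitMQ client it uses) and the exchange settings described above, the publishing side could look something like this:

// timer_service.js (illustrative sketch, not the author's exact code)
const amqp = require('amqplib');

async function start() {
  // Connect to the RabbitMQ server and create a channel
  const connection = await amqp.connect('amqp://localhost'); // assumed URL
  const channel = await connection.createChannel();

  // Create a direct, non-durable 'quote' exchange
  await channel.assertExchange('quote', 'direct', { durable: false });

  // Publish a message to the exchange at each interval (60 seconds)
  setInterval(() => {
    channel.publish('quote', '', Buffer.from('fetch a new quote'));
    console.log('Published quote event');
  }, 60 * 1000);
}

start().catch(console.error);

And the consuming side inside QuoteService, which calls scrapQuoteOfTheDay() whenever a message arrives, might be sketched as:

// Inside QuoteService (illustrative sketch)
const amqp = require('amqplib');

// Placeholder: in the real service this scrapes the quote site and writes a JSON file
async function scrapQuoteOfTheDay() { /* ... */ }

async function listen() {
  const connection = await amqp.connect('amqp://localhost'); // assumed URL
  const channel = await connection.createChannel();
  await channel.assertExchange('quote', 'direct', { durable: false });

  // Create a temporary queue and bind it to the 'quote' exchange
  const { queue } = await channel.assertQueue('', { exclusive: true });
  await channel.bindQueue(queue, 'quote', '');

  channel.consume(queue, async (msg) => {
    if (msg !== null) {
      await scrapQuoteOfTheDay();
      channel.ack(msg);
    }
  });
}

listen().catch(console.error);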
https://medium.com/better-programming/a-step-by-step-guide-to-building-event-driven-microservices-with-rabbitmq-deeb85b3031c
['Tek Loon']
2019-08-15 16:01:54.319000+00:00
['Microservices', 'Nodejs', 'Programming', 'JavaScript', 'Rabbitmq']
Writing Often is Far Easier Than Writing Well
A lot of people think it’s hard to write frequently. That’s just a belief, of course. You know what they say; what you believe affects what you achieve. Which is not to say believing means achieving, necessarily. I knew a guy that believed he’d be the Prime Minister some day. He’s got one foot pretty much in the grave now, and that ship has long sailed, belief or not. It’s not so much what you believe, but what you don’t believe... Because if you don’t believe something is even possible, the odds of it happening tend to shrink accordingly. Tell yourself it’s not possible to write daily and you won’t. It’s the Roger Bannister effect. Doctors used to say it wasn’t humanly possible to run a mile in under 4 minutes and not die. Until Bannister did it. Within a year, 4 more runners. Today, over 1000 runners have beat the 4 minute mile. Turns out the barrier was psychological. But then, aren’t most barriers? The proof is in the pudding… here you go… Know why I don’t believe it’s hard to write prolifically? How often do you talk? Do you say you couldn’t possibly talk today? Is there a single day that you haven’t had an opinion about something? No. It’s not the writing that’s hard. It’s facing down the blank page that’s hard. I watched that when my daughter went to art school. The prof would say make something that evokes love, and they all got busy. Make something to evoke darkness — they all got busy. But if she just said “make something” without adding a prompt, they all froze. What to make? What to make? Creating is always easier in a container. Constraints are where the magic happens. Give a creative even a single word and they’re off to town. Sit down one afternoon and write 50 prompts on little pieces of paper and throw them in a jar. And every day you pull one out and voila. Easy to write often as long as long as you don’t need to figure out “what” to write about. Those times the muse shows up bearing words? Those are the cherry on top. Writing WELL…That’s Another Story We tend to ramble. You know that guy who loves to tell stories, but he’s boring and draws everything out until your brain screams get to the point already? Like that. When I was pregnant, I used to fall asleep when my ex was talking. I never meant to. Pregnancy hormones. Tired, all the time. He’d start talking and it was like reading a Marcel Proust novel. Put me right to damn sleep. On the flip side is the book you can’t stop reading. You know you’re going to regret it tomorrow, but as far as things you’re going to regret in the morning go, a book is a pretty safe option among the other choices available. Compelling writing grabs you by the eyeballs like a Dan Brown or Tatiana de Rosnay novel and doesn’t let go. GoodReads has a list of “fast paced” books and darn near every Dan Brown book is on that list. And Suzanne Collins and Leigh Bardugo, and a striving writer could probably learn a thing or two from compelling writers. And you could. Spend hours reading compelling writers in the hopes of picking up that ability by strange osmosis, but there’s a faster way. If it doesn’t fuel the story or feed the story, chop chop. That’s what God invented the delete key for. Because 9 minutes reading Dan Brown and 9 minutes reading James Joyce or Marcel Proust aren’t the same 9 minutes, if you know what I’m saying. Your mileage may vary on the author choices, the point is the same.
https://medium.com/linda-caroll/writing-often-is-far-easier-than-writing-well-4d1b4b4cfcbb
['Linda Caroll']
2019-08-26 18:35:10.741000+00:00
['Writing Tips', 'Inspiration', 'Creativity', 'Writing', 'Reading']
Understanding the Power of Redux
Let’s take a more in-depth look into this State Management powerhouse. As a relatively new React developer, one thing that’s stood out to me is how top-heavy my React apps can sometimes feel. While each component always has it’s own bells and whistles, the “lowest shared parent” can sometimes feel overloaded by the amount of stateful information it stores. Don’t get me wrong, lifting state is an essential concept in React and certainly one of the library’s most useful features, but I’ve always wondered if there’s a better way to do it. Enter Redux. What Is Redux? Redux is a state management library created by Dan Abramov and Andrew Clark in 2015 that aims to provide you with a single source of truth for state across all components so you can easily access stateful information at any time. Basing Redux somewhat on Flux, a pre-existing application architecture pattern used by the likes of Facebook, Dan developed the concepts that eventually led to the development of Redux while preparing for a presentation at React Europe. Dan Abramov’s presentation on Hot Reloading at React Europe 2015, where the key concepts driving the development of Redux were first explored. From this talk, Redux was born shortly thereafter, leading to a seachange in how state is handled in JavaScript web applications. Redux was built so it could be used with any component-based JavaScript library, so if you’re working with Angular or Vue.js instead of React, Redux might still be the state management solution for you! Why Should I Use Redux? As I mentioned earlier, Redux can be incredibly useful for top-heavy applications that need to manage large amounts of dynamic or stateful information. By creating a central store of information, you can easily access any piece of information within that store across all components without getting confused about where that information should go and how it interacts with other parts of info. This benefit also comes into play for applications with a large number of components. Sometimes, apps can get more complicated than we originally intended for them to be, and the lowest common parent can be several layers above where information is displayed or updated. By using Redux, we remove that complexity and make it easier to get information to where it needs to be. If you’re building a smaller application with minimal amounts of stateful data, Redux might not be for you as it could be overkill. With that being said, adding Redux to a smaller project can hardly be considered bloat, as the tool is 2KB, so feel free to implement it at your discretion! Getting Started With Redux Installing Redux is relatively simple (you can use the docs for that), so let’s get into implementation. Redux is relatively simple — the library provides you with a central store that allows you to manage your application’s state from a single location through the use of three parts: the store, actions, and reducers. Store The store is the central location for your stateful data, and as such, there should only be one store in your application. Stores are easily implemented, as you can see if the example below: const store = createStore(myStore); From this store, we can read and update state and register and unregister listeners. While some of these processes are carried out through methods native to the store (such as subscribe, unsubscribe, and getState), most of the functionality you’ll build around your state will be handled by Actions and Reducers. 
Actions Actions are how we send information to our data store across components. Ultimately, Actions are just JSON objects that contain information on what action should be carried out and what information should be worked on as a result. A typical Action for login might look like (source):
{ type: "LOGIN", payload: { username: "foo", password: "bar" } }
To use an action, we must first define it using an action creator. An action creator is simply a function that creates our action, which we will eventually send to a reducer for processing. An action creator might look like (source):
function postAdded(title, content) {
  const id = nanoid()
  return { type: 'posts/postAdded', payload: { id, title, content } }
}
Once we've created an action, it's time for us to define our reducers so we can update state. Reducers Reducers are functions that take instructions from an action, update state based on those instructions, and return the new state. Reducers can take a bunch of different forms, but one example of what a reducer might look like is below (source):
const LoginComponent = (state = initialState, action) => {
  switch (action.type) {
    // This reducer handles any action with type "LOGIN"
    case "LOGIN":
      return state.map(user => {
        if (user.username !== action.username) {
          return user;
        }
        if (user.password == action.password) {
          return { ...user, login_status: "LOGGED IN" };
        }
        return user;
      });
    default:
      return state;
  }
};
In practice, Redux states are technically immutable, meaning they are never actually changed. Actions and Reducers work together to create copies of state when requested, create a new state to reflect the current state, and then set state to an entirely new value. Additionally, the data passed to a Reducer is never changed either, preventing any side effects that might impact your application. Conclusion Hopefully, this post has served as a useful starting point on your journey to learning more about Redux, an incredibly useful tool for managing state for complex applications. Keep in mind, Redux isn't the right solution for all projects out there, so if you want a bit more guidance on whether Redux is the right tool for you, Redux's documentation is a helpful place to start.
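To tie the store, actions, and reducers together, here is a minimal, self-contained sketch. It is not taken from the article; the state shape and names are invented for illustration, loosely following the login example above:

import { createStore } from 'redux';

// Initial state: one hypothetical user
const initialState = [
  { username: 'foo', password: 'bar', login_status: 'LOGGED OUT' }
];

// Action creator: builds the action object we will dispatch
function login(username, password) {
  return { type: 'LOGIN', username, password };
}

// Reducer: returns a brand new state based on the action, never mutating the old one
function usersReducer(state = initialState, action) {
  switch (action.type) {
    case 'LOGIN':
      return state.map(user => {
        if (user.username !== action.username) return user;
        if (user.password === action.password) {
          return { ...user, login_status: 'LOGGED IN' };
        }
        return user;
      });
    default:
      return state;
  }
}

// Create the single store, listen for changes, and dispatch an action
const store = createStore(usersReducer);
store.subscribe(() => console.log(store.getState()));
store.dispatch(login('foo', 'bar')); // the subscriber logs the updated state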
https://medium.com/swlh/understanding-the-power-of-redux-fb49d4f54f4e
['Maxwell Harvey Croy']
2020-08-22 21:09:38.747000+00:00
['Software Engineering', 'Software Development', 'JavaScript', 'Web Development', 'React']
AWS — Deploying React With NodeJS App On Elastic Beanstalk
AWS — Deploying React With NodeJS App On Elastic Beanstalk A step-by-step guide with an example project Photo by Moritz Kindler on Unsplash AWS provides more than 100 services, and it's very important to know which service you should select for your needs. If you want to deploy an application quickly without any worry about the underlying infrastructure, AWS Elastic Beanstalk is the answer. Elastic Beanstalk reduces management complexity without restricting choice or control. You simply upload your application, and Elastic Beanstalk automatically handles the details of capacity provisioning, load balancing, scaling, and application health monitoring. In this post, we are going to deploy a React application with a NodeJS environment. There are other technologies or environments that AWS supports, such as Go, Java, NodeJS, .Net, etc. Introduction Example Project Prerequisites Build the Project Deploy on Elastic Beanstalk Debugging and Update the Deployment Route 53 Cleaning Up Things To Consider Summary Conclusion Introduction If you want to deploy an application without worrying about the underlying infrastructure, Elastic Beanstalk is the solution. When you build the app and upload it in the form of a zip or war, Elastic Beanstalk takes care of provisioning the underlying infrastructure, such as a fleet of EC2 instances, auto scaling groups, monitoring, etc. The infrastructure provisioned by Elastic Beanstalk depends on the technology chosen while uploading your app. For example, we are going to deploy React with a NodeJS backend on Elastic Beanstalk, so we need to choose the NodeJS environment. If you want to know more about Elastic Beanstalk, here is the link. Environment Setup As you can see in the above figure, we build our project and create a zip. Once we build the zip, we upload it to the Elastic Beanstalk environment. If you have a custom domain, you can point it to the Elastic Beanstalk URL so that your app is accessible to the public through that URL.
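The build and deployment steps themselves are not included in this excerpt. As a rough, generic sketch of the kind of NodeJS entry point this pattern usually relies on (file and folder names here are assumptions, not taken from the article), an Express server that serves the built React files and listens on the port Elastic Beanstalk provides might look like:

// server.js (illustrative sketch; not the article's code)
const express = require('express');
const path = require('path');

const app = express();

// Serve the static files produced by the React build (build folder name assumed)
app.use(express.static(path.join(__dirname, 'build')));

// Fall back to index.html so client-side routing keeps working
app.get('*', (req, res) => {
  res.sendFile(path.join(__dirname, 'build', 'index.html'));
});

// Elastic Beanstalk's NodeJS platform supplies the port via the PORT environment variable
const port = process.env.PORT || 8080;
app.listen(port, () => console.log(`Listening on port ${port}`));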
https://medium.com/bb-tutorials-and-thoughts/aws-deploying-react-with-nodejs-app-on-elastic-beanstalk-23c1fcf75dd2
['Bhargav Bachina']
2020-09-30 18:42:50.017000+00:00
['AWS', 'Programming', 'Nodejs', 'Web Development', 'React']
Digital Transformation Starts with Process
Digital Transformation is a top priority for many companies, however, 70% fail to meet expectations. Perhaps the most compelling reason is the lack of visibility to how business processes actually execute. While 80% of the companies surveyed by McKinsey consider digital transformation a top corporate priority, nearly 70% fail to meet expectations. There are several factors that contribute to such an outcome. Perhaps the most compelling reason is the lack of visibility to how business processes actually execute. Without a complete understanding of all the components of the business process, organizations lose the ability to identify where the weaknesses lie and plan for improvements. As Peter Drucker, the founder of modern management theory, said: “you can’t manage what you can’t measure.” It’s really time to own your processes… You can do more with end-to-end process analytics and begin to transform your entire organization; because transformation starts with understanding. You can’t improve what you don’t measure. Answering deeper questions… Are you tired of the lackluster information your analytics team is delivering? Wish you could understand more about who is responsible, what is causing, and why problems are occurring?? We all are… Successful automation initiatives can transform every facet of an organization, from the boardroom to the shop floor. In order to effectively deploy a Robotic Process Automation (RPA) project and realize the potential that automation can yield, businesses need to have an in-depth understanding of their organization’s processes. The most common and expensive mistake businesses make when implementing automation initiatives is failing to properly understand how their processes are actually performing and then choosing the wrong processes to automate. Leveraging actual business process data is critical to the long-term financial and technical success of any automation project. Rarely does the enthusiasm to just jump in and figure it out on the fly ever prove to be a recipe for success. Photo by Lukas Blazek on Unsplash The data is available, but can you access it? The challenge for many organizations is understanding how their processes operate in real-time, across diverse functional teams, and within siloed back-end systems. Traditionally, enterprises have tried to generate process insights by utilizing a combination of manual efforts and first-generation platforms, including process mining and business intelligence, which prove to be time-consuming, costly, and error-prone. Comprehensive process data that reflects end-to-end workflows is key to automation success. And it already exists in just about every enterprise software application. Whether utilizing an ERP, CRM, or another application, the data is waiting to be put to use. Solutions such as Process Intelligence enable organizations to discover, assess, visualize, analyze, and monitor process flows. Powered by artificial intelligence and machine learning technologies, Process Intelligence delivers accurate, in-depth, and real-time process discovery, analysis, and monitoring, which help automation leaders to accelerate digital transformation initiatives. Photo by UX Indonesia on Unsplash Stopping processes from failing Many automation projects initially fail. While a number of factors play a role in the ultimate results that a transformation project delivers, understanding, and automating the right processes is a key component of successful digital change. 
Automating the wrong processes can lead to wasted resources and less return on investments. However, choosing the right processes for automation can be challenging without having a clear idea of how processes actually run. Many organizations think they understand how their processes work when, in reality, there are often tens or even hundreds of variations for a single process. Traditional methods of discovering and analyzing processes involve internal personnel or outside consultants observing, timing, and documenting processes by hand, and then manually sorting and compiling the data. This time-consuming method usually takes several months to complete and is also highly subjective. The data reflects how the process performed during the specific instance in which it was being observed. Other methods include business intelligence, which is a great asset in providing summary data and point-in-time metrics but lacks an understanding of the key to a process: time. Traditional process mining tools, many of which originated from the graduate program at Eidenhoven University, have made the assumption that corporate processes fit onto a simple process map, but neglect to recognize that processes function much more like a timeline. A Future Beyond Business Intelligence and Process Mining Process Intelligence is designed to help organizations discover and measure how current processes work, identify process bottlenecks, and surface areas for process optimization based on four foundational pillars: Creation of a “digital twin” of the event data from disparate systems of record associated with business process execution; Analyzing the performance of any process type, particularly highly variable case management processes; Real-time Operational Monitoring of process behaviors that empower business users to remediate process bottlenecks; and Artificial Intelligence/Machine Learning-based predictive analytics allows users to proactively identify the outcome or performance of any process instance in the early stages of the process execution. Photo by Jo Szczepanska on Unsplash Analyzing the performance of any process type, particularly highly variable case management processes Often the most challenging, yet most impactful, areas for process discovery and analysis are case-based processes, for example, customer-facing processes such as call center operations, health care administration, and claims processing. Process Monitoring Should Reflect Real Life Keeping in mind the complexity and diversity of most business workflows, what would a process flowchart look like if it were to reflect the process of treating a patient in an emergency room? If you factor in all possible forks and loops in the workflow, representing doctors’ decisions, test results, and changes in the patient’s conditions, the process representation will look less like a flowchart and more like an incomprehensible web. We generally refer to this as the plate of spaghetti effect, as the process map looks much like a plate of spaghetti. Even in simpler cases, customer support or sales management, for example, the number of steps, repetitions, and variations could vary greatly. Process Intelligence can help your organizations easily identify, quantify, and target the highest-impact process instances for digital transformation or automation initiatives. 
Real-time Operational Monitoring of process behaviors that empower business users to remediate process bottlenecks Having the insight to and being able to visualize how your processes behave empowers your organization to remediate inefficiencies and make informed decisions as to what aspects of your processes should be standardized through a combination of automation and more effective deployment of human capital through: Protocol analysis that must be followed and identifying processes that fail to meet those conditions Alerting the right staff or automating remediation to ensure processes are functioning properly, and eliminating bad processes before they happen with ongoing monitoring Automatic comparison pre and post initiative process instances side by side to identify if your process improvements are performing as planned Artificial Intelligence/Machine Learning-based predictive analytics that allows users to proactively identify the outcome or performance of any process instance in the early stages of the process execution One of Peter Drucker’s often-quoted advice is “the best way to predict the future is to create it.” Process Intelligence enables organizations to have continuous visibility of how processes behave. Based on such insight, then re-imagine how process optimization can create sustainable competitive advantages by focusing on those business processes that are proven to generate reduced transaction costs and improved customer service levels. What if you could not only see what will happen next but also prescribe a solution to avoid a problem or costly mistake before it happens? That is what the ultimate objective of digital transformation is all about: predicting the future process state. By combining process mining with machine learning and artificial intelligence, your organizations can achieve highly integrated and fully automated insights to forecast processes in their future state and take action to ensure positive outcomes. Process knowledge is important for successful digital transformation initiatives; however, having an understanding of all the critical information locked within semi-structured and unstructured content is also critical. It is dependent on having real-time access to all your business-critical data no matter which business process platform it lies within. This includes the vast amount of data that exists in various business documents, including claims, invoices, proof of delivery, loan agreements, contracts, orders, identity documents, tax forms, pay stubs, utility bills, and more. This understanding of both content and processes is referred to as having digital intelligence. With Digital Intelligence, organizations gain the valuable, yet often hard to attain, insight into their operations that enables true business transformation. With access to real-time data about exactly how processes are currently working and the content that fuels them, Digital Intelligence empowers enterprises to make a tremendous impact where it matters most: customer experience, competitive advantage, visibility, and compliance. The latest Everest Group Process Mining Products PEAK Matrix® Assessment 2020 illustrates that traditional process mining has deep roots among data professionals and automation leaders, but the Process Intelligence approach is gaining momentum — especially with the impact of stay at home orders, social distancing, and economic uncertainty. 
Business leaders need to have a valuable understanding of their business workflows, identify the best use cases for RPA projects quickly, fix process bottlenecks as soon as they occur, and continually optimize automation performance. Process Intelligence answers this call throughout the entire business, even across diverse workflows, departments, technology systems, and locations. If you enjoyed reading the article don’t forget to applaud.
https://medium.com/datadriveninvestor/digital-transformation-starts-with-process-9364998cf59b
['Ryan M. Raiker']
2020-12-03 15:37:57.017000+00:00
['Data Science', 'Technology', 'Artificial Intelligence', 'Automation', 'Future']
Happy First Birthday, Better Programming!
Happy First Birthday, Better Programming! A story about our first year, some fun numbers, and a look at what’s ahead Photo by Gaelle Marcel on Unsplash Better Programming is officially one year old today! Last year we launched with six articles and hoped that readers would show up and care. 2,373 of you did, and we were off to the races! Slowly but surely, authors we reached out to gave us a chance, entrusting us to do a good job of copy editing, publishing, and distributing their hard work. In the last year, we’ve worked with over 1,500 authors, built an audience of more than 117,000 followers on Medium, and became Medium’s best-performing publication to launch in 2019. Better Programming is also Medium’s fastest-ever publication to reach 5 million pageviews a month after launch — which we did in just 6 months. Cumulatively in our first year, Better Programming has garnered over 38 million pageviews from readers around the world! We have a lot of exciting things in store for this year: We’re going to expand some of the topics we cover, launch our job board, and a few other things we can’t wait to tell you about. Our goal continues: To build a high-quality publication — based on thorough tutorials and actionable advice — and an inclusive environment for people to learn, no matter your skill level. Thank you for being a part of our first year. Be well, Zack and Tony Co-founders of Better Programming
https://medium.com/better-programming/happy-first-birthday-better-programming-ca6e82d1415
['Zack Shapiro']
2020-05-01 15:22:19.622000+00:00
['Python', 'Medium', 'Programming', 'JavaScript', 'Writing']
I came up with this crazy formula to help me write faster and better…
I came up with this crazy formula to help me write faster and better… I used to write when the muse struck.Which wasn’t very often because turns out the muse tends to favor people who show up regularly. Which I wasn’t. So I’ve been trying to build a daily writing habit. And I’ve done pretty well — only missed 2 days in the last 20. But wow. Writing every day sure made me see what I struggle with. Maybe you struggle with this, too? It’s not the blank page that slays me… I learned that constraints mean the blank page is not a problem. Just pick a thing. Like a writing prompt. Because if a writer is asked to write about dogs, or summer or a favorite food — heck, we can all do that. So I use constraints and the blank page isn’t the problem anymore. You know what is? Brain pong! And rambling. I pick a topic, and start writing and all the ideas about that topic show up. And then the writing rambles on and on and — blah. No. Walk away. Edit. Walk away. Edit. That’s painful. And it takes too long. Who has that kind of time? (and people do this twice a day? are they still sane??) Then it dawned on me that constraints work for topic, so why not try them for the whole writing process? So I created this crazy formula to speed up my writing. It looks like this… 1. What’s the topic? The topic is usually inspired by some small nugget of an idea. Some random thought, or something I read or heard or saw. When I get those ideas, I jot them down so I can dig into the idea bin anytime. 2. Is there relevant data or statistics? It’s more powerful to use real numbers and cite a source than to generalize. For example… children witness 68% to 80% of domestic assaults. See how much more powerful that is than a generalization? 3. Can I add a personal experience? A personal connection strengthens every story, so I dig around in my head to find a personal connection. Sometimes #1 and #3 are the same. Bonus! 4. How can I start with a bang? Once I have the topic, some legit data and a personal story, now I can ask myself — what’s the most powerful way to open? 5. How can I connect the end to the beginning? I love when writers end by looping back around to the beginning and connecting the whole piece. So how can I do that? 6. What’s the emotion I want to evoke? Then I try find a cover image to evoke that feeling — and hope I can find something a little offbeat that everyone hasn’t seen a zillion times. 7. Title last! I struggle with titles. So now I ignore the title and just write the piece first. When I’m done, I re-read to see if the title is hiding inside. You’d be surprised how often it is — right there, staring at me from inside the piece. No wonder I couldn’t think it up in advance. So now I have this handy-dandy formula… And I’m going to use it. I swear I am. But ever since I came up with the idea, every time I open the page to write, poetry shows up. Go figure.
https://medium.com/linda-caroll/i-came-up-with-this-crazy-formula-to-help-me-write-faster-and-better-674c882beb28
['Linda Caroll']
2020-09-01 18:43:27.547000+00:00
['Writing', 'Advice', 'This Happened To Me', 'Inspiration', 'Creativity']
Interactive AI for Sustainable Architecture
I was recently selected for a research internship at one of the most promising and exciting AEC startups — Digital Blue Foam. I was given a chance to use their Augmented Intelligence enhanced software. Here’s an article that describes its features and advantages. So this software is a web-based interactive generative design tool that gathers urban and climate data from various available web sources. Once you log into the Digital Blue Foam website you get options to load the previous session (not for you if it’s your first time), search bar for searching and locating the site you are looking for designing, and feeling lucky option, which takes you through the tool and guides you about its features (you must select this option if you want to learn about the tool in brief). Locate and Select Site Once you search for the location the tool will fetch various datasets and take you to the location where you can draw and edit a polygon. Now you can see your located site in the 3D surrounding map. You can draw a polygon to select your site Draw Lines using the Pencil and Eraser You can draw a random line or a polyline between your plot to divide it into subplots. The pencil has three options: (1) Axis — to assign the axis of the plot; (2)- Park, which creates and assigns the drawn place to park; and (3) a tape measure tool for measuring the line you are drawing. Upon double-clicking, the click and drag draw a line option the line for subplots is finalized, which can be seen in orange color. Being an interactive software the number of subplots can even be changed in the statistics section, about which I’ll inform you in this article. Draw tool also has an erase feature in case you want to redraw the lines. Clicking the play button now will generate designs according to the subplots drawn. Straight as well as curved lines can be drawn to divide the plot Generate Designs The play button generates designs. You can pause on the design you like. The step forward and step backward are buttons that will toggle the designs one at a time. The play button generates many options for the user to choose from. Program Distribution This section has input sliders that can be changed to change the ratio of educational, residential, commercial, office, and leisure spaces according to your needs. If you like the generated design and not the distribution of program spaces you can click on the specific part of the structure which gets highlighted and then you can edit features like the program, floor height, and the number of floors in DBF Assistant. The side arrow keys give you the freedom to move the selected block. Selecting the plots also offers various options like convert the plot to park, parking, or a podium. You can even merge 2 or more plots. Sustainable Tools This tool has 3 features: a) Sun path This feature helps in the visualization of sun paths and shadows for your site. It also has input date and time for users to select. While designing a building, checking the sun path and shadows is essential as studies have shown that sufficient daylight in space increases one’s productivity and freshness quotient. b) Wind path This feature allows you to visualize the direction of the wind with huge arrows. Wind path is another important factor in sustainable and climate-responsive design. The location of the opening can be chosen according to the wind direction. Also, it helps to analyze the airflow of the plot. c) Solar radiation This is the third sustainability tool. 
The Solar Radiation option does make the designing process a bit slow, but of course it helps you visualize and analyze the distribution of solar radiation throughout your design, which can be beneficial when working out the shading requirements for the structure. Sun path and Wind Direction can be selected. Statistics Section This section offers a wide range of targets to choose from. Max Height, Facade Area, Floor Area, Site Area, and other targets can be set by the user. One of my favorite targets among these options is Site Efficiency, which tells you the site efficiency percentage. You can see this target change as you change the subplots and programs, which makes it an interactive and informative tool with sustainability options to choose from. Multiple input targets can be set for the design, including Efficiency, GFA, and many more. Viewing Tool This tool has three subtypes: a) View, which consists of 4 features: Orbital view, Toggle view, an option to show cars, and an option to show trees on the plot. b) Plan view, which also has 4 features: Plan view, Longitudinal section, Cross-section, and Toggle isometric view. All these views are important from an architectural viewpoint. c) Visualize Data, which has 5 features, such as Terrain (to visualize the satellite imagery of the map), Neighborhood programs (a proximity analysis of various nearby facilities), Heatmap, and Structural Details. Save and Download a) You have slots to save 5 of your designs. These saved designs can be compared using the compare design tool, which compares factors like FAR (Floor Area Ratio), GFA (Gross Floor Area), and Efficiency. It gives an insightful comparison of up to 5 designs. b) The download button offers 4 options: screenshot, 3D, 2D, and an Excel (.xlsx) report. The report shows details like floor height and area for every floor in each program. The file also has input sections like Title, Building Type, and Site Area, which can be useful for presentations and documentation. What I liked most about this tool is that the designs are generated on clicking the play button and you can watch the targets change their values; when you feel the right target has been achieved, you can pause and save the design. This tool is one of the best use cases of "AI in AEC", helping the user optimize the design efficiently with several input options. The user interface is easy to use, and the DBF Assistant makes it simple to get information about the various tools offered. The software also has options like Quick Tips and Tutorials to guide you through the tool. Progress is impossible without change, and it has become important to change our design process and make it more efficient and sustainable. The way your city looks today will be very different in the coming years, so it is important to optimize your design and analyze its effects on the environment for future generations. "The future of urban design and project development has arrived" — Digital Blue Foam Visit digitalbluefoam.com to learn about Digital Blue Foam's early access program!
https://medium.com/digital-blue-foam/interactive-ai-for-sustainable-architecture-c5baf4c0ad3d
['Rutvik Deshpande']
2020-08-27 02:22:24.496000+00:00
['AI', 'Sustainable Development', 'Architecture', 'Generative Design', 'Analysis']
Why You’re Struggling with Innovation. And How to Get Better.
The modern typewriter had a problem. When Christopher Sholes developed the first model in 1868, it was an amazing development for its time. But if you tried to type too quickly on it, the type bars had a tendency to bang into one another and get stuck. Sholes consulted an educator who helped him analyze the most common pairings of letters. He then split up those letters so that their type bars were farther apart and less likely to jam. He slowed down typing speed to prevent the typewriter from jamming. Which then sped up the typing. This dictated the layout of the keyboard, which came to be known for the first six letters in the upper row — QWERTY. In 1873, the Sholes & Glidden typewriter became the first to be mass-produced, and its keyboard layout soon became standard. Nearly 150 years later, despite the fact that sticking type bars are an obsolete problem, the original inefficient design continues to be the standard. It’s a telling lesson in the power of inertia. And in the barriers that prevent innovation. The Least Innovative System Imaginable “If you always do what you always did, you will always get what you always got.” — Albert Einstein Imagine trying to design a system that prevented innovation. Your goal is to structure a company in a way that will actively discourage people from innovating. What would it include? People told to develop new, innovative ideas and then not given any time to work on them? Cut the resources available to everyone looking to advance new opportunities? Create inconsistent messaging as senior management lauds the importance of innovation and plasters the walls with motivational posters, yet actual policy and daily decisions run counter? Bureaucratic processes that require layers of approvals to depart from the typical methodology? Harsh judgment of any experiment that doesn’t yield spectacular results on the first try? Systems that reward maintaining the current status and are unforgiving of risks that jeopardize it? It’s easy to see how a system with these aspects would discourage new ideas and innovation. Yet these same traits are also present in many of today’s companies. Somehow, many of the features of a hypothetical organization designed to stifle innovation, are present in the places we work today. Despite leaders citing a love of creativity and innovation, putting this desire into practice continues to be a struggle. As Charles Eames warned, “Recent years have shown a growing preoccupation with the circumstances surrounding the creative act and a search for the ingredients that promote creativity. This preoccupation in itself suggests that we are in a special kind of trouble — and indeed we are.” But why? Why do so many companies want to create innovative cultures yet somehow end up creating the polar opposite? Because many of these companies have good managers. And good managers tend to discourage innovation. Customers (and Hence Managers) Don’t Want Innovation Good managers stay close to their customers. They know that if they’re to consistently support the company’s bottom line, they need to be attuned to customer needs and desires. Before managers decide to invest in a new technology or strategy, they’ll consider their customers — Do they want it? Will this be profitable? How big is the potential market? The better that managers ask these questions, the better their investments are aligned with customers. And the less likely they are to put out the next New Coke. 
But while this is an advantage in identifying the next round of product improvements, it’s a major liability to wholesale innovation. Most customers can quickly identify incremental improvements. Those areas are at the forefront of their experience and it doesn’t require a lot of imagination to come up with some improved features. So good managers, as you’d expect, are very adept at leading teams to identify and incorporate incremental improvements. But major innovation — 10x type changes — often come from completely new perspectives. They come from thoughts and ideas that most customers haven’t even considered. As the old (dubiously quoted) Henry Ford saying goes, “If I had asked people what they wanted, they would have said faster horses.” With a typical management structure, it makes it nearly impossible to justify diverting resources from known customer needs and desires to unproven markets and questionable investments. The systems and processes that keep the business running are specifically designed to identify and cut those initiatives that do not align to the customer’s current needs. The solution then, needs to reverse this structure. It needs to override the systems that are highlighting innovation as unprofitable in the short-term and encourage their pursuit. It needs to protect these new ideas from the typical business plan mentality and be willing to take a shot on the unknown. Mainly, it needs to be willing to divorce innovation from customer needs. As Steve Jobs said, “Some people say, ‘Give the customers what they want.’ But that’s not my approach. Our job is to figure out what they’re going to want before they do.” And to do that, you need to separate this out from the traditional mainstream business. The Chance for a Billion Dollar Return Choice A: You can give a million dollars, bottom line, to your company through your efforts this year, guaranteed. Choice B: You can give a billion dollars, bottom line, to your company through your efforts this year, with one chance in a hundred. Dr. Astro Teller, Captain of Moonshots (CEO) of X, Alphabet’s moonshot factory, offered these two choices to his audience as he began an A360 talk on how to 10x your thinking. In response to his question, most people prefer to chance the billion-dollar return, particularly with a 10x expected utility on the odds. Yet when asked whether their bosses would agree with that choice, most people say no — they feel their management would prefer them to opt for the safe returns. People want to take risks. They want to deliver large-scale innovation. But they feel that their management disagrees. And in many cases, they’re right. Imagine you’re managing a team of employees to produce and support an existing product line. Now ask yourself, is innovation absolutely critical to grow your business? If not, and you can meet demand based on your existing business plan, what’s your motivation to invite additional risk? Most managers — and as a result most teams — don’t need to rely on major innovation to survive. They’re able to sustain their business with the same things that have worked for them in the past — until suddenly and without much warning — it no longer will. Most people see innovation as an idea problem. But it’s not. It’s a resource allocation problem. The question isn’t how do you generate more ideas, but how do you align your best people, and sufficient resources, to your best opportunities. As the great Peter Drucker wrote, “Problem solving, however necessary, does not produce results. 
It prevents damage. Exploiting opportunities produces results.” Sergey Brin recognized this concern at Google and developed a 70/20/10 responsibility model. Seventy percent of time was reserved for daily operations and taking care of the current business. Twenty percent went towards the next level of advancements. And the final ten percent went towards major innovations and moonshots. Google then implemented tracking systems to ensure people were prioritizing this breakdown — and make sure managers didn’t allow the urgent to override the important. Not every idea would work out — actually the vast majority would end up as flops — but by continuing to prioritize innovation, Google put themselves in a position to take advantage of the ones that did. As Eric Schmidt described it, “You can systematize innovation even if you can’t completely predict it.” Separate Your Moonshots from the Main Business “Leaders who order their employees to be more innovative without first investing in organizational fitness are like casual joggers who order their bodies to run a marathon. It won’t happen, and the experience is likely to cause a great deal of pain.” — Safi Bahcall, Loonshots: How to Nurture Crazy Ideas That Win Wars, Cure Diseases, and Transform Industries Yet in many companies, simply prescribing a breakdown isn’t enough. Existing management practices and systems are too ingrained in the culture. Stability and near-term returns will always override the long-term investment needed to encourage major innovation. The solution then, lies in separating out those who focus on major innovation from the standard product line. It’s taking a group and telling them that their entire job is to choose Choice B — and creating an environment that encourages that mentality. If their job is to choose Choice B, and pursue ideas with a 1% success rate, failure will be unavoidable. So experimentation happens in a controlled environment, with quicker learning and much less negative consequences. If their job is to choose Choice B, they’re not held to a quarterly return, so new innovations can be protected and developed. They don’t need to guarantee a quarterly return, so risk-taking is easier to encourage. They can experiment and fail within controlled environments, so learning happens much more quickly and with much less negative consequences. And since most new innovations don’t begin as clear successes, they can be developed within a protected space until they’re formed into more defined ideas. Sir Francis Bacon recognized this four centuries ago when he wrote, “As the births of living creatures are at first ill-shapen, so are all innovations, which are the births of time.” But most importantly, the difference between delivering a million dollar return and a billion dollar return is not about working 1000 times as hard. And it’s not about being 1000 times smarter. It requires a complete perspective shift. It requires separating this group from the typical management business plans and customer demands and pushing them to tackle completely new challenges. As Dr. 
Teller put it, “If you push them — if you give them the freedom and the expectation to be weird, that’s moonshot thinking.” Success Comes from Feedback Loops “If you look at history, innovation doesn’t come just from giving people incentives; it comes from creating environments where their ideas can connect.” — Steven Berlin Johnson In a conversation with Shane Parrish, former Y Combinator partner and the founder of Pioneer, Daniel Gross, describes the key to success as positive feedback loops. Our surrounding environments create feedback loops that either reinforce or discourage critical behaviors — which initiate chain reactions of either positive, or negative, behaviors. Alexander Graham Bell, no stranger to innovation himself, agreed with Gross on the importance of surroundings towards creative success. In the 1901 volume, How They Succeeded, Orion Swett Marden interviewed Bell, then 54, who shared the following life lesson, “Environment counts for a great deal. A man’s particular idea may have no chance for growth or encouragement in his community. Real success is denied that man, until he finds a proper environment.” For innovation to be a success, it needs an environment of positive feedback loops — one that traditional management practices and business models are ill-equipped to support. The inertia towards supporting today’s needs and demands is too great to maintain enough focus on long-term innovation. We cannot expect people to pursue moonshots within an environment designed for incremental improvements. We cannot expect people to pursue Choice B, when every feedback loop in place reinforces Choice A. Until this responsibility is separated from the current management decision-making model, companies will continue to miss the opportunities for major innovations. Not because they’re making poor decisions. But because they’re making decisions based on criteria that are soon to become obsolete. The alternative is to create groups that encourage these new perspectives and experiments. Which leads to ideas where people say, “there’s no way that could ever work.” Until, of course, they do. Thanks, as always, for reading. Agree? Disagree? Other suggestions? Let me know, I’d love to hear your thoughts. And if you found this helpful, I’d appreciate if you could help me share with more people. Cheers!
https://jswilder16.medium.com/why-youre-struggling-with-innovation-and-how-to-get-better-533f5219c3e5
['Jake Wilder']
2019-04-22 01:46:42.469000+00:00
['Management', 'Leadership', 'Innovation', 'Creativity', 'Startup']
Part 2: Stop Using If-Else Statements
APPLIED DESIGN PATTERNS: STRATEGY Part 2: Stop Using If-Else Statements Let's have yet another look at how you can replace if-else statements. Okay, we both already agree using If-Else statements everywhere is an awful practice. You've without a shred of doubt met If-Else statements that made your head ache six ways from Sunday. Nasty branching and unclear responsibilities. We might as well slap some goto in there while we're at it just for sh*ts and giggles. Instructors and teachers love If-Else. It's their hammer and everything's a nail. Gotta decide which logic to execute? Use If-Else. Want to create a factory? Use If-Else. You get the point already… We'll be refactoring this illustrative hot piece of mess below into something extensible and production-ready. Terrible to look at. And, yes, it could have been implemented with switch as well. This kind of code is nevertheless prevalent. You know it's bad. But it's fixable. A bit of refactoring and we're back to highly extensible and maintainable code that'll make you sleep like a baby. "So, how do we replace these pieces of pain-inducing If-Else statements?" With strategy objects and type discovery. You've likely already heard of the strategy pattern, but might still secretly wonder what the fuss is all about. Here's a brief introduction to a pattern that'll change how you write branching in the future. We often need to determine which logic to execute based on some condition. By creating a group of classes with a common interface, in combination with type discovery, we can easily swap which logic to execute, without the need for If-Else. Nice, huh? We just call a specialized object's method instead of extending our application with a nightmare of endless If-Else branching statements. We'll be refactoring the code above, ensuring we adhere to the SOLID principles, especially with the Open/Closed principle in mind. Demo time 1 First we start by extracting the logic out of the horrible If-Else statement and placing it in separate strategy classes. At the same time, we create a common interface. Each strategy class implements the common interface. Also, I've applied an attribute to the class, which gives us the opportunity to give the strategy a friendly name. The attribute class is defined later in this article. 2 Then, we create a method in the Order class which takes the strategy interface as a parameter. Here's a snippet of the entire Order class. This allows us to delegate the logic to the specialized class, instead of writing horrible, not-easy-to-extend if-else statements. 3 Now the part where we actually remove the If-Else hell from the illustrative example at the beginning. This provides some real extensibility to our application. Let's briefly walk through the type discovery process. We're building a dictionary containing all the types that implement the common interface, using the name from the attribute as the key. Then, we let the user enter some text into the console that will match one of the output formatters' names. Based on the input, we first find the corresponding type in the dictionary and create an instance of that type. The instance is passed to the Order's GenerateOutput method. It's more code, no doubt. But it will allow us to dynamically discover new formatter strategies as they are added to the solution. Something If-Else won't provide you with, no matter how hard instructors try to push it.
One more thing… If you wonder about the [OutputFormatterName("")] above the strategy classes, the implementation looks like this below. A very simple attribute class that provides us with a friendly display name. "What to do if I need a new way to format the output?" You create a new class that implements the IOrderOutputStrategy. It's honestly that simple. The type discovery process will take care of "registering" the new formatter with the application. By defining the GenerateOutput method on the Order, we don't need to branch our code using If-Else. We just delegate the responsibility to the specialized class. "Again, you're creating a lot of classes to do something simple!" Sure, it's a lot of additional classes. But they are insanely simple. They have meaningful names derived from the functional requirements. Other developers would recognize their purpose from the get-go. I can also walk through the logic with business people and they're completely on board with what I'm talking about with only a bit of handholding; it's code, after all. Should we really limit our expressiveness just to accommodate people who are stuck with If-Else? "But won't there be situations when If-Else is okay?" Sure. Sometimes… If you're into competitive programming, writing something that needs to be highly optimized, if you know something will absolutely not change (until it does), or doing a college assignment. Instructors love that sh*t.
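Since the article's own snippets are C#, here is a rough TypeScript analogue of the same registry-plus-strategy idea, offered purely as an illustration: every class, method, and key name below is made up for the example, and the "type discovery" is done with an explicit map rather than attributes and reflection.

```typescript
// Hypothetical TypeScript sketch of the strategy-object approach.
interface OrderOutputStrategy {
  readonly name: string; // stands in for the friendly-name attribute
  generate(orderId: number, total: number): string;
}

class JsonOutputStrategy implements OrderOutputStrategy {
  readonly name = "json";
  generate(orderId: number, total: number): string {
    return JSON.stringify({ orderId, total });
  }
}

class CsvOutputStrategy implements OrderOutputStrategy {
  readonly name = "csv";
  generate(orderId: number, total: number): string {
    return `${orderId},${total}`;
  }
}

// The dictionary of strategies, keyed by their friendly names.
const strategies = new Map<string, OrderOutputStrategy>();
for (const s of [new JsonOutputStrategy(), new CsvOutputStrategy()]) {
  strategies.set(s.name, s);
}

class Order {
  constructor(private readonly id: number, private readonly total: number) {}
  // The Order delegates formatting to the specialized strategy; no branching here.
  generateOutput(strategy: OrderOutputStrategy): string {
    return strategy.generate(this.id, this.total);
  }
}

const chosen = strategies.get("csv"); // e.g. taken from console input
if (chosen) console.log(new Order(42, 99.5).generateOutput(chosen));
```

Adding a new output format then means adding one small class and registering it, with no if-else chain to revisit.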
https://medium.com/dev-genius/part-2-stop-using-if-else-statements-ae4b0bec5bad
['Nicklas Millard']
2020-08-06 19:30:52.182000+00:00
['Technology', 'Software Engineering', 'Csharp', 'Programming', 'Software Development']
The 5 Kinds of Award-Winning Commercials
2. Larger-Than-Life Results This strategy involves showing the experience the user will go through, once they start using the company’s product. The successful ads tend to exaggerate what you receive. The greater the exaggeration, the more comical and unusual the advertisement is, making it more indelible. Author via YouTube This comical advertisement by Lynx starts with a single woman running across the island, in pursuit of something. The scene changes and in a few seconds you now see hordes of women, running over one another, climbing and trying to reach somewhere. And then in the climax, you get to know what was attracting all these women. This advertisement expresses two benefits of the product: 1. How strong the deodorant is — it could be detected miles away. 2. How strongly it attracts all women. In addition, the expressions of the main character in the climax make it even more hilarious. No doubt a lot of other deodorant brands followed suit once the Lynx effect gained popularity. Though they had varying levels of success. You could also use what is called inverted consequences — a version in which you warn against the implications of not following the ad’s recommendation. For example, while promoting a brand of vitamin, one could show how the viewer is missing out on life by not ingesting the vitamin (Revital Capsules) Risks to keep in mind: this form of advertising runs the risks of getting you trapped in allegations and lawsuits. This story by Sean Kernan serves as a great example: Pepsi made an exaggerated claim in their advertisement and found themselves stuck in a messy lawsuit. A brand called Complan suffered a similar fate a few years back when a lawsuit was slapped on their face because they claimed to make kids “taller.”
https://medium.com/better-marketing/the-5-kinds-of-award-winning-commercials-8c2cd003b927
['Kiran Jain']
2020-06-22 17:37:25.038000+00:00
['Research', 'Marketing', 'Advertising', 'Business', 'Creativity']
Introducing Figma’s Live Embed Kit
We’re excited to announce our public Live Embed Kit to keep teams in sync wherever they are. With this development, anyone can add Figma designs and prototypes that are always up to date to their website. It’s as simple as embedding an iframe. The Kit will also allow 3rd party developers to enable Figma Live Embeds in their own tools — just like Trello, JIRA and Dropbox Paper. It’s as simple as embedding an iframe. To insert a Figma design or prototype into any webpage, just click share in the top right corner of the file, select public embed and copy the iframe code. If you’re a developer and want to make it easier for your users to embed live Figma documents, follow the instructions here. We’re really excited to see what the community builds with this. Here’s a few examples of where live, synchronized designs could be helpful: Do you run an internal wiki for your company? Add Figma and people can post live designs to articles about projects or features. Are you building a messaging app for teams? Connect Figma and your users can share the latest versions of designs in group chats. Do you want to blog about a project you’re working on? Embed a live Figma file and your readers will always see the most recent version of your design. Live Embed in action with Trello: Since Figma is the only design tool that runs on the web, these embedded designs will stay up to date and synchronized with the Figma original. Whenever a designer makes a tiny padding tweak or changes an icon in the Figma file itself, they can rest easy without having to re-export or re-upload the design. We hope integrations like these will keep teams in sync and save them the hassle of hunting through the annals of their email history, mountain of Slack chats, or folders of their file sharing services. This is only the beginning of our platform efforts. We’re excited to harness the power of our web-based technology to help teams communicate and build better products faster. Up next — stay tuned for a Figma API that will allow developers to pull other types of information from Figma files and incorporate them into new workflows. If you’re interested in partnering, shoot us a line at [email protected]. For more details on how to integrate Figma Live Embeds into your website or tool, go to https://www.figma.com/platform.
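Purely as an illustration of the "paste it anywhere" step (nothing below is part of Figma's API; the element id and the snippet string are placeholders), dropping the copied snippet into a page can be as small as this:

```typescript
// Mount whatever iframe snippet was copied from the "public embed" dialog.
function mountCopiedEmbed(containerId: string, copiedIframeHtml: string): void {
  const container = document.getElementById(containerId);
  if (!container) throw new Error(`No element with id "${containerId}"`);
  container.innerHTML = copiedIframeHtml; // the copied snippet already carries src, width and height
}

// Hypothetical usage with placeholder values:
mountCopiedEmbed("design-preview", "<iframe src='(copied from Figma)' allowfullscreen></iframe>");
```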
https://medium.com/figma-design/introducing-figmas-live-embed-kit-a04b9c7ad001
['Dylan Field']
2018-04-03 18:28:44.842000+00:00
['Programming', 'Design', 'Software Development', 'Engineering', 'Tech']
The Future of Information Technology
According to Merriam-Webster's dictionary, information technology is defined as "technology involving the development, maintenance, and use of computer systems, software, and networks for the processing and distribution of data". In simpler terms, information technology is technology which processes and distributes data. This technology could be hardware (the personal computer), or it could be software (Netflix). As the definition suggests, the processing and distribution of data could also happen on networks such as the internet. Information technology is important because as humans we are limited in our ability to process and distribute data. By increasing the amount of data we can process and distribute, we can solve a lot of problems and answer a lot of questions our limitations would never allow us to solve. Without information technology, we wouldn't be able to predict the weather, use the internet to obtain new knowledge, utilize GPS to take us from point A to point B, and do many more things that we take for granted thanks to information technology. As this technology progresses, it's going to take on characteristics and operate in certain dimensions that it has never operated in before. Each time this technology operates in a new dimension, it'll disrupt a different set of industries and provide a different set of benefits that weren't possible in previous iterations of the technology. Now, before we dive into the future of information technology and what these added characteristics will be, we must first cover a little bit of the history of information technology. This must be done so that we can have an appreciation of where we are relative to what we will become. I'd also like to add that I am not going to talk about the specifics behind how computers will be able to reach higher levels of performance. If you would like to read an article about this, check out my article titled "Long Live Moore's Law!". History of Information Technology Change Internet — What started out as a top-secret government project in the late 1960s eventually took the world by storm when there was an explosion of commercialization and innovation centered around this technology in the 1990s. The internet was a huge step forward for information technology and was revolutionary since it was able to add the characteristic of centralization to information technologies. By centralizing information not only did we democratize access to the world's knowledge repository, but we also created a digital infrastructure that allowed for massive amounts of communication to take place which was previously unimaginable. Mobile Electronics — Although mobile electronics (portable computers essentially) were starting to gain traction in the 1990s, it wasn't until the 2000s that they began to take society by storm. This was initially due to iPods and cell phones, but later tablets, smartphones, smart watches, and even drones would join the ranks of this technology. This technology added the characteristic of physical mass distribution to information technology, which enabled the individual for the first time to take advantage of the information age while they were on the go. This was revolutionary because not only did it allow us to communicate and obtain information anywhere in the world, it also gave us new products and applications with capabilities which penetrated every field such as photography, music, media, and cinematography.
Big Data — For those of you who don’t know what big data is, big data is the analysis of data which is too complex for traditional data-processing software to process. Big data started to become huge in the 2010s thanks to the vast amount of information that mobile electronics were generating, improvements in the capabilities of our algorithms, and advances in machine learning techniques which made big data analysis easy. Some of you might be saying to yourselves, “So what? Who cares if I can analyze larger data sets? Does that really matter?”. The answer to this is an unequivocal yes! By performing a big data macro level analysis on a particular problem, a lot of the times you will end up discover patterns in the data that you wouldn’t have been able to discover had you analyzed a smaller data set. The ability to uncover these additional patterns are important since it allows for us to solve new problems such as understanding the complexity of the human genome (which opens up the door to unlocking the powers of genetic therapy and genetic engineering). By allowing us to have an increased understanding of everything, big data has been (and will be) crucial in helping us create a better world and solve some of sciences biggest mysteries. The Future of Information Technology Now that we’ve covered some of the history of information technology, this should provide a good base for understanding where we are, and where we are headed. The story of the future can now begin! — Chapter I: Blockchain Blockchain — For those of you who don’t know what blockchain is, blockchain is a specific type of database. Unlike typical databases, the data on a blockchain is stored in blocks which are then chained together in chronological order. If the blockchain is decentralized (meaning that no single person or group has control of the blockchain), then any data which is entered into the blockchain cannot be changed nor deleted. Blockchain is a huge deal for two reasons. The first is that it allows us to have a perfect understanding of the history of whatever we choose to attach the blockchain to. This will have tremendous benefits in a large variety of areas such as figuring out the source of where contaminated food is coming from in seconds as opposed to days, cutting down on the level of slave and child labor in supply chains, and protect property rights in undeveloped nations where criminal organizations collude with officials to take advantage of the lack of transparency in the land ownership process. The second reason why the blockchain is a big deal is because it allows for non-alterable data. This will make our data more secure and allow us to protect our finances, protect our intellectual property, and would allow for the consumption and analysis of highly sensitive information (such as health and financial records) without any fear of having this information potentially stolen. This is because the blockchain would have a complete record of everybody who accessed the data which would make figuring out who stole the data relatively easy. By adding the dimensions of security and historical clarity into our information technology, we will be able to enhance the human condition to a degree that past technologies such as the internet never could. The Internet of Things — The internet of things are devices that have sensors, software, and other technologies embedded in them so that they can connect, share data, and collaborate with other devices and systems over the internet. 
This technology will add the characteristics of objectification and collaboration to information technology, and will take society by storm in the 2030s. Although the internet of things market is growing rapidly, I firmly believe that the internet of things in the 2020s is going to be where the internet was in the 1980s. Following in the footsteps of blockchain, this technology is going to benefit society in multiple ways. The first way the internet of things will benefit society is through how much data it will generate. Since everyday objects will be generating data in ways that they don't currently, this will increase our level of data by many orders of magnitude. This will increase our ability to answer existing questions and open the doors towards having different types of questions solved. This will also make artificial intelligence much more powerful, since it will have larger data sets to scale its abilities on. More data = better A.I. The second way in which this technology will benefit us is through personalization. As the devices collect more data from your interactions with them, they'll be able to share data with other devices and learn from one another so that your environment can become tailored to your individual needs and desires. This will allow for optimal sleep experiences on a nightly basis, enable the elderly to stay independent for a longer period of time, increase your success at obtaining health and wellness goals, and perhaps even lead to the automation of household chores. The final way in which this technology will benefit us is through the usage of the internet of things by localities. This could provide data which helps detect and prevent viruses, reduce the destruction of fires by alerting officials as to which areas are in need of new fire alarms, and provide more information on traffic flows which will enable policy makers to reduce the likelihood that traffic jams occur. Nanotechnology — Nanotechnology is technology which operates on the atomic or molecular level in order to solve and address issues which macro-sized technology is inadequate to solve. Nanotechnology will add the characteristic of microscopic proportions to information technology and will enable us to manipulate matter in ways that will make us seem like gods. This technology has tremendous promise, which includes unlocking all of the secrets of the brain by tracking brain activity down to the individual neuron. This could potentially cure all brain-related illnesses such as dementia and Alzheimer's. Nanotechnology will also lead to computers that will operate within our bodies and will assist the immune system in combating disease and aging. It is possible (although this is debated) that these atom-sized computers could conquer death itself by targeting the molecular mechanism that is responsible for the aging process. These computers also could potentially eliminate any disease and keep us in a perfect state of health and functionality. Going beyond the health applications, nanotechnology also has the potential of creating new materials which are tailored for specific purposes. New materials could be made which allow for computers to operate on the surface of Venus without melting, or for satellites to plunge through the atmosphere of Jupiter without being torn apart. New materials could be made which allow for batteries to last for days or weeks and perhaps even months or years on a single charge.
Finally, nanotechnology also has the potential of ending material scarcity since it will have the ability to manipulate atoms on a micro scale to produce materials such as copper or nickel on a macro scale. This will fundamentally change the economy and will make it considerably easier for the impoverished of society to live a good life. Nanotechnology is going to change the game. Transhumanist Technology — Transhumanist technologies (the biogenesis of information technology) are technologies which seek to amplify the abilities of human beings through becoming one with them. Nanotechnology in this sense will start the process of transhumanism, since nanotechnology will be one with our bodies in the sense that it will become an active part of our immune systems. Humanity, however, due to its insatiable desire for becoming more, will seek to go beyond the benefits of operating at its biological limits. Perfect 20/20 vision, for example, will not be enough, as many among us will wish to integrate technology into their sight so that they can see in the dark, quite literally see emotions, and even perceive objects in perfect clarity which are several hundred yards away. Having our intellect limited by life experience and the amount of time we dedicated toward learning new things will no longer be acceptable, as many will choose to voluntarily connect their brains to the internet and access all of the world's information simply by thinking about it. When it comes to our hearing, humans will want to customize the hearing experience to their individual needs and desires. If a sound goes above a certain level, the human ear will filter out all the excess decibels so that the noise won't cause unpleasant sensations. Some of us will choose to adopt audio technology that'll allow us to only receive the sounds generated by the object of our focus in order to spy on our neighbors or eavesdrop on our friends. As people continue to reap the benefits of each evolution in information technology, we must be extremely mindful of how this technology can be used malevolently so that we can have an action plan against its misuse. Blockchain technology makes it easier to track the start of an E. coli outbreak in the food supply, but also makes it much more difficult for law enforcement to arrest criminals who use blockchain-based currencies. The internet of things might customize the environment to our individual needs and desires, but it could also be used as a system of mass surveillance by private and government entities that makes privacy a thing of the past. Nanotechnology can be used to end scarcity, but it could also be wielded to bring unprecedented destruction to the battlefield. Transhumanist technologies can amplify our abilities, but it isn't clear that they'll be immune from the negative intents of hackers who wish to make people abide by their bidding. We have a choice to make. If we choose to be proactive when it comes to preventing egregious misuse of technology, the information age will usher in a renaissance where society will solve many of the most pressing issues which have plagued humanity since the beginning of time. If we choose not to be proactive, however, then we will destroy ourselves and create a hellish nightmare where nobody wants to live. The choice is ours.
https://medium.com/predict/the-future-of-information-technology-ceeff8e61553
['Jack Raymond Borden']
2020-12-19 03:13:48.317000+00:00
['Future', 'Information Technology', 'AI', 'Blockchain', 'Internet of Things']
How I decide whether to wear a coat every morning
Good evening ladies and gents, it is going to be SOOO SUNNY tomorrow. When you get to that tomorrow: Guys, guys, guys, where the sun at…? Problem Weather forecasts are based on probabilities, and those probabilities only seem to become accurate on the day itself. But who has the time and energy to manually check the weather on the day itself, probably on the morning before leaving for work? Well, certainly not me! Long story short tl;dr: with mighty programming skills, I made a bunch of lights automatically show me the weather forecast every day 💡 Let there be light So… first step: I bought an LED strip, a band made of tiny lights that can change color. I like the red a lot; it gives a nice cozy atmosphere. Let there be WIFI To talk to the LED strip through a computer program, I got myself a WIFI controller and connected it to the strip. The WIFI controller gave my LED strip an IP address within my local network.
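The write-up stops here before showing any code, so purely as a hedged sketch of where it is headed (the AWS Lambda and serverless tags suggest a small scheduled function): both URLs below are placeholders I made up, since neither the weather service nor the controller's actual protocol is named in this excerpt.

```typescript
// Hypothetical sketch: fetch tomorrow's rain probability and push a colour to the LED controller.
const WEATHER_API_URL = "https://example.com/forecast"; // placeholder weather service
const LED_CONTROLLER_URL = "http://192.168.1.50/color"; // placeholder local endpoint

async function updateCoatLight(): Promise<void> {
  const res = await fetch(WEATHER_API_URL);
  const { rainProbability } = (await res.json()) as { rainProbability: number };

  // Cold blue for "take a coat", cozy red for "leave it at home" (an arbitrary mapping).
  const color = rainProbability > 0.5 ? { r: 0, g: 80, b: 255 } : { r: 255, g: 60, b: 0 };

  await fetch(LED_CONTROLLER_URL, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(color),
  });
}

updateCoatLight().catch(console.error);
```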
https://raphael-leger.medium.com/how-i-decide-whether-to-wear-a-coat-every-morning-3c081aa21fa8
['Raphaël Léger']
2020-01-26 10:28:38.120000+00:00
['Lambda', 'AWS', 'Serverless', 'Home Improvement', 'Weather']
7 Steps to Develop a Chatbot for Your Business
Chatbots have been around for a few years now, and they will not go away any time soon. Facebook popularised the chatbot with Facebook Messenger Bots, but the first chatbot was already developed in the 1960s. MIT professor Joseph Weizenbaum developed a chatbot called ELIZA. The chatbot was developed to demonstrate the superficiality of communication between humans and machines, and it used very simple natural language processing. Of course, since then we have progressed a lot and, nowadays, it is possible to have lengthy conversations with a chatbot. For an overview of the history of chatbots, you can read this article. Chatbots are a very tangible example where humans and machines work together to achieve a goal. A chatbot is a communication interface that helps individuals and organisations have conversations, and many organisations have developed a chatbot. There are multiple reasons for organisations to develop a chatbot, including obtaining experience with AI, engaging with customers and improving marketing, reducing the number of employees required for customer support, disseminating information and content in a way that users are comfortable with and, of course, increasing sales. Chatbots offer a lot of opportunities for organisations, and they can be fun to interact with if developed correctly. But how do you start with conversational AI and how do you build a good and engaging chatbot? To answer that, I researched 20 organisations from around the globe who developed a chatbot. As part of my PhD, I wanted to understand how organisations can get started with conversational AI and be successful. Seven Steps to Develop a Chatbot Starting with a chatbot is not easy, as there are many different variables to take into account. Define the reason for a chatbot; Create the conversation flow; Decide whether to develop the chatbot in-house or using third-party tools; Integrate the chatbot and conversation flow in the front-end and back-end of your organisation for context awareness; Test the chatbot and obtain approval of relevant stakeholders; Analyse the conversations and the data derived from them; Improve the chatbot based on the analytics received. Let's discuss each step briefly: 1. Define the problem First of all, it is important to decide why you would need a chatbot. A chatbot should be a means to an end, not an end in itself. It should alleviate a pain point or increase your customer engagement, but it cannot, yet, replace your entire customer support department. Understanding the objective of your chatbot will help define the conversation flow as well as determine the type of chatbot you need. After all, there are different types of chatbots, ranging from simple FAQ and so-called 'on rails' bots to chatbots that allow free-text input. The more you allow the user to determine the direction of the conversation, the less the chatbot is in control. 2. Create the conversation flow Designing the conversation within a chatbot is challenging. Not only should you develop a persona that matches your brand personality, but the conversational interface should also be clean, and the chatbot should aim for a positive experience. Therefore, the conversation should not be developed by the developer, but by a copywriter in collaboration with the marketing or communication department. It is important to create the right conversation flow for the right objective. For some conversations, people feel more comfortable with a chatbot than with a human.
For example, an Australian financial services company noticed that customers feel more comfortable cancelling with a machine than they do with a human. Therefore, when developing a chatbot, you should pay attention to the conversational strategy and know that the platform itself is not standalone, but should be integrated with all the other elements of the business. 3. Selecting the chatbot platform There are many different chatbot platforms, ranging from platforms that enable simple FAQ chatbots to more advanced chatbots that take context into account. Such context-aware chatbots can offer a lot of added value because they can offer a positive experience to the end user. Once you have decided what platform to use, it is important to decide whether to outsource or not. There are plenty of chatbot developers out there that can help you, but not every developer might offer the right solution. Therefore, it is important to investigate and ensure that you work with the right chatbot developer. 4. Integrating the chatbot Building a chatbot is the easy part, partly because of the many platforms and developers out there. Integrating the chatbot into your systems is a lot more difficult, but that's when the added value is achieved. If the chatbot is connected to your systems (your CRM or database), then when someone wants to change, for example, an address, the chatbot can say: 'sure, give me your address, and I will update the system for you'. This is where you see operational efficiency, satisfaction and the NPS going up. One American chatbot developer created a chatbot that is person-aware, meaning that the chatbot knows who the person in the chat is, as the chatbot is linked to internal systems. As a result, it is a lot smarter because it has a better understanding of the context and can service the customer faster and better. 5. Testing the chatbot Developing is only one part; as with any software development project, testing is a crucial aspect of the project. Fortunately, most of the organisations I spoke to test the code of the chatbot. The chatbot developers especially have rigorous testing practices in place. These processes include a testing environment, an acceptance environment and a live environment to ensure that everything can be properly tested. Not only should you test the code of your own projects, but you should also test the software that you use. Unfortunately, many organisations did not test the third-party tools they implemented and sort of trusted the third party that their tool and the code in that tool were correct and did not have any bugs. There is a strong reliance on and confidence in third-party tools. However, it is important to have proper controls in place. One American chatbot developer went as far as never allowing third-party developers to access the code. Another option is to spend time on reverse engineering what you have built to ensure that the code is indeed correct. 6. Analysing the conversations A conversation is by its nature data-driven and leads to more data. This data can be analysed, and the insights of the analytics can be used to improve the conversational flow of the chatbot. However, before any output text can be used to train the chatbot, thorough review processes have to be in place: any text should be written by copywriters rather than developers, and especially large organisations require some sort of governance structure around the content that is said by the chatbot.
Since all conversations are data, it is possible to extract valuable information from the conversations, both actively and passively, and to capture and feed that data into the overall reporting mechanisms. That happens at the micro level of an individual conversation for an individual user, and at the macro level for the questions that are being asked and answered. This is called conversational analytics: what was said, how was it said, what was the intent, what was the sentiment, did we accomplish the goal, what was the goal, and where does it fit in the larger context? Without conversational analytics, it is impossible to develop an engaging and successful chatbot. It is also possible to add the capability to jump in and intervene in any conversation, but that sort of defeats the purpose. However, it can be useful, because machines often still don't understand the full context, and then human intervention is required. 7. Improve the chatbot Of course, all those analytics can offer valuable insights to improve the chatbot. Reviewing the transcripts and looking at places where the chatbot did not understand what people were asking helps to build up datasets to retrain the chatbot, or to spot places where the chatbot thinks it got it right but actually got it wrong, so that the information can be rectified and the conversations can be improved. Such supervised learning helps improve the chatbot, while it prevents problems such as Microsoft's Tay, which learned unsupervised. The objective should be to continuously improve the chatbot, making it increasingly context-aware and better at understanding the intent of the conversation. Conclusion Chatbots offer a great way for organisations to improve their business, make it more efficient and improve the customer experience. However, it is important that the chatbot learns in a supervised way and is bound by certain rules that drive your conversation if you wish to prevent examples such as Microsoft's Twitter bot Tay. Natural language processing is getting better, and in due time, it will become possible to have engaging conversations with a machine.
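To make the "on rails" versus free-text distinction and the analytics loop a little more concrete, here is a minimal, hedged TypeScript sketch of a rule-based FAQ handler that logs unanswered questions for later supervised retraining; it is not taken from the research described above, and every intent and reply in it is invented.

```typescript
// Illustrative sketch: a tiny "on rails" FAQ bot that records misses for conversational analytics.
type Intent = { name: string; keywords: string[]; reply: string };

const intents: Intent[] = [
  { name: "opening_hours", keywords: ["open", "hours"], reply: "We are open 9-17, Monday to Friday." },
  { name: "cancel_order", keywords: ["cancel"], reply: "Sure, I can cancel that. What is your order number?" },
];

const unansweredLog: string[] = []; // reviewed by copywriters, then used to retrain the bot

function handleMessage(message: string): string {
  const text = message.toLowerCase();
  const match = intents.find((i) => i.keywords.some((k) => text.includes(k)));
  if (match) return match.reply;

  unansweredLog.push(message); // steps 6 and 7: analyse, then improve
  return "Sorry, I did not get that. Let me hand you over to a colleague.";
}

console.log(handleMessage("When are you open?"));
console.log(handleMessage("Can I change my address?")); // logged as a miss
```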
https://markvanrijmenam.medium.com/how-to-develop-conversational-ai-for-your-business-3ab025a65a52
['Dr Mark Van Rijmenam']
2020-01-10 18:42:19.664000+00:00
['AI', 'Bots', 'Conversational Ai', 'Artificial Intelligence', 'Chatbots']
Vaccine Opposition, an Origin Story
Science saves, plain and simple. Science saves time, science saves energy, and above all, science saves lives. Scientific advancements have led to a vast array of efficiencies, improvements in energy production and usage, travel, as well as countless medical breakthroughs. One reason for this has to do with clinical trials. Typically, a clinical trial begins with animals. Provided the study shows efficacy and the animals do well, only then will researchers move on to people. Beginning with small groups, researchers increase trial participants until a well representative portion of the population is studied. Science also requires peer review. In this process, researchers and scientists evaluate each others work, ensuring studies are proper, ethical, and can benefit humanity at large. This dance is typically done in and through a variety of scientific journals and publications, with researchers submitting trials and findings for the broader community to evaluate, study, and in some cases attempt to reproduce or refute. As far as medical publications go, The Lancet is among the oldest, most reputable in the world. Founded in 1823 by English Surgeon Thomas Wakley, The Lancet was named for both the surgical instrument, more commonly known today as a scalpel, and for the architectural term lancet window indicating light of wisdom. Initially distributed biweekly, it has become a weekly publication with a mission to make science widely available so that medicine can serve, transform society, and positively impact the lives of people. In 1998, The Lancet published what would become among the most controversial studies in the field of vaccination. Led by British Physician Andrew Wakefield, The Wakefield study, as it would become commonly known, hypothesized the MMR (Measles, Mumps, Rubella) vaccine was responsible for a series of events; including intestinal inflammation, harmful proteins crossing the blood-brain barrier, and subsequently causing Autism in the inoculated children. A follow up study by Dr. Wakefield in 2002 would essentially double down on this link between measles and autism, creating an uproar amongst parents and various groups already opposed to vaccines. Wakefield became the scientific backbone of what has come to be known as the anti-vaxxer movement, and the MMR vaccine its primary villain. To spread a message though, you need messengers. Among the most vocal vaccine opposition came from a growing list of celebrities, talk show and radio hosts, and alternative media personalities. Jenny McCarthy became one of many; a notable television personality, fixture on talk and game shows, as well as writer of several books. McCarthy’s son was diagnosed with autism when he was two and a half years old, she attributed the diagnosis to the MMR vaccine specifically. Although McCarthy has since claimed she is not anti-vaccine, only pro safe vaccine schedule, many have since echoed her theory and position on MMR specifically and vaccines in general.
https://medium.com/illumination-curated/vaccine-opposition-an-origin-story-fca89be0b1ab
['Bashar Salame']
2020-12-23 11:01:31.476000+00:00
['Society', 'Politics', 'Health', 'Covid-19', 'Vaccines']
How to Write TypeScript Ambient Type Definitions for a JavaScript library
Tips & tricks How to Write TypeScript Ambient Type Definitions for a JavaScript library Type definitions made easy for any JavaScript library. Create, extend, and contribute to any repository where types are missing. The early days of TypeScript were a complicated time. Originally the language was designed to work with namespaces and some kind of "custom modules"; nowadays the best approach is to play with ES6 modules. How modules and code isolation are managed will influence how we have to write our ambient type definitions for our libraries. Let's dig into it! A quick reminder about the basics of a compiler Before diving into the question of TypeScript typings, I would like to clarify a few things, as they might not be obvious to every developer. Maybe, just like me, you were doing something else in the classroom when the teacher was trying to teach you the C language or some other compiled language. We need to understand compilation because, while declaring our type definitions, we will need to have a good understanding of the scope we are playing with. By understanding I mean: what exactly is TypeScript doing when it "compiles" your code? Or should I say: what is any compiler really doing? Most of the time, by default and without scoping mechanics, all the code you write is globally scoped, even if it is defined in multiple files: variables and functions are shared. That's why in the early JavaScript days we used to write something called « The Module Pattern » within the browser page's global context, as people tended to use multiple scripts from various locations, which resulted in code collisions. This was generally implemented in such a way that you use some kind of "Immediately Invoked Function Expression" to populate a global object with a scoped API. This mimics the behavior of scope.
window.app_modules = window.app_modules || {};
app_modules.MyModule = (function() {
  var privateVariable = 1;
  var publicFunction = function() {
    return privateVariable;
  };
  return { publicFunction: publicFunction };
})();
console.log(app_modules.MyModule.publicFunction());
Well, something like this emulates private properties, playing with JavaScript scope to prevent variables from leaking globally between modules. One job of a compiler is to take multiple files into one file and ensure correctness between declarations. So a compiler takes a workspace of files, merges all files into one and, once this has been done, it might also convert the code to some highly optimized byte code (not our case with TypeScript). What does that mean? It means that your JavaScript file, if not properly scoped, is sharing the same variables with other files. That's not a good thing, because it can trigger some unexpected effects. But with ES6 modules, which TypeScript supports, there is a particularity. As long as you import or export something in a file, your code is scoped to that file. Variable declarations, functions, anything. Otherwise, just try to create two files without export, one creating a variable toto and another that displays this variable, everything without export in the same namespace; compile it with TypeScript and you get the value of toto colliding in both files (a minimal sketch of this follows below). To solve these issues we can play with namespaces, which create some kind of boundaries for scoping things (this works well in many programming languages), use the module pattern we saw above, or just export/import something thanks to ECMAScript modules.
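To make that scoping point concrete, here is a minimal sketch (the file names and the toto/tata variables are illustrative) of the experiment described above, and of how a single export statement turns a global script into an isolated module:

```typescript
// a.ts — no import/export, so TypeScript treats this file as a global script
var toto = "hello";

// b.ts — also a global script: it happily sees the toto declared in a.ts
console.log(toto);

// c.ts — the empty export turns this file into a module, so its declarations
// are file-scoped and invisible from a.ts and b.ts
export {};
var tata = "hello";
```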
Let's dive into our real need: creating a type definition

Fetching the right declaration for a TypeScript library. Alright, now that we've got the basics, we can play with typings that use our scope. We just downloaded our latest library, but there is an issue… the library creator wrote all of it without TypeScript! (YEAH, that still happens in 2020.) For example, with a library named somelibrary you end up with the classic:

Could not find a declaration file for module 'somelibrary'. '/Users/screamz/workspace/myapp/node_modules/somelibrary/index.js' implicitly has an 'any' type. Try `npm install @types/somelibrary` if it exists or add a new declaration (.d.ts) file containing `declare module 'somelibrary';` ts(7016)
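As a hedged sketch of where this is heading, the quickest way to silence that error, and then gradually add real types, is an ambient declaration file along these lines (somelibrary and its greet function are hypothetical, not a real package's API):

```ts
// types/somelibrary/index.d.ts

// Option A — the quick fix the compiler suggests: declare the module with no
// body, which types everything it exports as `any`.
// declare module "somelibrary";

// Option B — describe the API surface you actually use (the shape below is
// invented purely for illustration):
declare module "somelibrary" {
  export interface GreetOptions {
    loud?: boolean;
  }
  export function greet(name: string, options?: GreetOptions): string;
}
```

Depending on your setup, you may also need to make sure the .d.ts file is picked up by the compiler, for example via the include or typeRoots options in tsconfig.json.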
https://medium.com/javascript-in-plain-english/how-to-write-typescript-types-for-a-javascript-library-e598b9eb8be7
['Andréas Hanss']
2020-12-28 10:19:07.303000+00:00
['Software Engineering', 'Software Development', 'Web Development', 'JavaScript', 'Typescript']
Simple Deno API with Oak, deno_mongo and djwt
Simple RestAPI If you come from the NodeJS world, the first thing you usually do when creating a new app is to run an ‘npm init’ command and follow the process of creating a package.json file. But with Deno, things are different. There is no package.json, no npm, and no node modules. Just open your favorite IDE (I am using VS Code), navigate to your project folder and create an app.ts file (it will be our project’s main file). The first thing we want to set up in the app.ts is our http server to be able to handle http requests that we plan to develop later on. For this purpose, we will use Oak as our middleware framework for the http server. Probably you never heard of Oak but don’t worry, it is the same as Koa.js (popular web app framework for NodeJS). To be more clear, Oak is just a version of Koa.js customized for Deno, so most of the Koa.js documentation also applies to Oak. For module import, Deno uses official ECMAScript module standards, and modules are referenced using a URL or a file path and includes a mandatory file extension. Our app.ts should like this: app.ts (initial) You will probably get an error that TypeScrip does not allow extensions in the module path. To resolve this issue, download and enable Deno’s VS Code extension (enable it only for Deno projects). In our app.ts we imported Oak’s Application object that contains an array of middleware functions. We will use it to setup the http server, and to later bind a router, global error handler, and auth middleware. In NodeJS, this Application represents the app we import from the express npm package. The Oak module has been imported from the Deno’s Third-Party Modules. Beside that, we defined an Application instance, host, port, and we set our server to listen to ‘localhost:4000’ address. As you can notice, Deno allows usage of top-level await, meaning that we do not need to wrap await call inside of the async function on the module level. One big plus for Deno! If you run the app.ts file now, using the command ‘deno run app.ts’ you will get an error: PermissionDenied error As mentioned before, Deno does not have permission to access the network so we need to explicitly emphasize it: ‘deno run — allow-net app.ts’. Now, the app will successfully compile and run, and we have our server up and running. In your browser, navigate to localhost:4000 and you should get “Hello World” displayed. After the initial server setup, we can move on to creating and configuring MongoDB instance for our service. Before you proceed with the db configuration, please make sure you have MongoDB installed, and local service running. Deno provides (unstable for now) module for MongoDB and we will get it from the Third Party Module. Since the module is in continuous development, the module versions will be changing fast. Create a new config folder and add two new files in it: db.ts and config.ts that will be used for the project’s config variables. The db.ts content should now be as follows (code will be explained afterward): db.ts We imported the init method and MongoClient from the deno mongo module. The init function is used to initialize the plugin. The other part with the code is pretty straight forward; we have defined DB class with two methods, one for connecting to local db service, other as a getter to return database name. At the end, we created a new instance of our database and export it for usage in other project’s modules. For now, we will not run our new code, so the deno_mongo module is currently unavailable in our app. 
With the next execution of the 'deno run' command, all modules will be downloaded and cached locally, so any further code execution will not require any downloads and the app will compile much faster. This is how Deno handles modules. Since we finished the database setup, we can move on to the controller methods. We will make this very simple by implementing CRUD operations for the 'Employee' model. At the end, we will create an auth middleware to protect the create, update, and delete requests. For simplicity, we will skip creating 'signup/signin' methods and instead make a simple login that will accept any username and password to generate a JWT (JSON Web Token). So, let's go. Create a new employeeCtrl.ts file. We will first implement a getEmployees method that will return an array of employees with all corresponding properties: id, name, age, and salary. Our controller should look like this: employeeCtrl.ts (getEmployees method)

Since we are using TypeScript, we imported Context (from the Oak module) to define the type of our response object and, later on, our request object. We also took our db name and defined the 'employees' collection. After the interface signature, the get method is implemented. It will fetch all data from the 'employees' collection and return the data together with the response status. If the collection is empty, a message will be set as part of the response body. To test our method, we need to define a route. Create a new file called router.ts: router.ts

The router implementation is the same as the one using express in NodeJS: import Router from the corresponding (in our case Oak) module and bind the controller method to the defined route. The last thing to do before testing our route and method is to set our app to use the router. Navigate to the app.ts file and update its content as follows: app.ts (router and config.ts added)

To make the code more readable, we will introduce a config.ts file from this point on to store all configuration/env variables. It should be saved in the config folder: config.ts

The power of TypeScript and Deno allows us to have access to all types at runtime, and to write strongly typed code. To run the app, execute the following command: deno run --allow-net --allow-write --allow-read --allow-plugin --allow-env --unstable app.ts

Regarding the permissions we need to run our app, we have set a few of them:

- allow-write and allow-read: allow Deno to have access to the disk, in our case to the local database service
- allow-plugin: allow Deno to load plugins
- allow-env: allow access to the env property
- unstable: required since the deno_mongo plugin is currently in unstable mode

If your code compiled successfully and all the required modules downloaded, the terminal will show the 'Listening on port: 4000' message. If you face any compile errors, do not worry, they are nicely explained and you should be able to resolve the issues easily, especially in this simple scenario. To test the route and the controller method I will use Postman, and the result should be: Postman (GET /employees)

We can now move on to creating additional controller methods. Navigate back to employeeCtrl.ts and add POST (addEmployee) and PUT (updateEmployee) actions: employeeCtrl.ts (addEmployee and updateEmployee added) These two method implementations should be easy to follow. To add a new employee, we need to read the values from the request's body.
Check if it contains all the necessary properties (usually done with a validation library, but Deno currently does not provide any), and save the record in the appropriate collection. To update an existing employee, read the employee's id provided through the params, look up the record, and make the updates based on the request's body. It is important to mention that we defined updateEmployee's input parameters as the 'any' type. Currently, the Context's request object does not recognize 'params' as its property and the code will not compile. This issue is related to the current version of the Oak module, but the documentation states that params, request, and response are all derived from the Context object, so we are good to proceed. Hopefully, this will be resolved in the near future. To bring these actions to life, we need to define their routes. Jump to router.ts and add these two lines of code: router.ts (routes for addEmployee and updateEmployee added) Now, if you rerun the app, you can test the new method implementations: Postman (POST /employees) Postman (PUT /employees/:id) To finish our CRUD, we need to add two more methods: getEmployeeById and deleteEmployee: employeeCtrl.ts (getEmployeeById and deleteEmployee added) Their implementation is very similar to the previous ones, except that it uses different database actions for fetching and deleting records. To wrap up our API, let's update our router and give the new routes a test. router.ts (routes for getEmployeeById and deleteEmployee added) Postman (GET /employees/:id) Postman (DELETE /employees/:id) Now we have a simple API up and running. Comparing it with NodeJS, Deno's implementation seems very similar. It should only take some time to get familiar with the different module imports, TypeScript, and permissions handling, which can be a real bottleneck in the beginning. Also, we need to take into consideration that there will be changes until Deno finds a stable setup.
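For reference, the final router described across the steps above might look roughly like this (a sketch only: the handler names follow the article's prose, and the Oak import URL is unpinned):

```ts
// router.ts — all five CRUD routes wired to the controller methods
import { Router } from "https://deno.land/x/oak/mod.ts";
import {
  getEmployees,
  getEmployeeById,
  addEmployee,
  updateEmployee,
  deleteEmployee,
} from "./employeeCtrl.ts";

const router = new Router();

router
  .get("/employees", getEmployees)
  .get("/employees/:id", getEmployeeById)
  .post("/employees", addEmployee)
  .put("/employees/:id", updateEmployee)
  .delete("/employees/:id", deleteEmployee);

export default router;

// In app.ts the router is then attached with:
//   app.use(router.routes());
//   app.use(router.allowedMethods());
```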
https://medium.com/maestral-solutions/simple-deno-api-with-oak-deno-mongo-and-djwt-2916844f0ef3
['Haris Brackovic']
2020-06-05 12:07:36.692000+00:00
['Nodejs', 'Deno', 'JavaScript', 'Typescript']
10 Insider VS Code Extensions for Web Developers in 2020
10 Insider VS Code Extensions for Web Developers in 2020 Git Graph, Auto Close Tag, Peacock, and more Visual Studio Code (VS Code) from Microsoft will continue to be one of the best code editors/IDEs in 2020. Its great marketplace offers awesome extensions made by the community, helping web developers become more productive. However, most articles about VS Code extensions only recommend the same 10-15 extensions. They are great, no doubt. But there is more to explore, and I will show you 10 outstanding extensions that are lesser-known but really helpful.
https://medium.com/better-programming/10-insider-vs-code-extensions-for-web-developers-in-2020-91bdef1658c6
['Simon Holdorf']
2020-01-28 00:48:53.490000+00:00
['Technology', 'Programming', 'Productivity', 'Creativity', 'JavaScript']
Reinforcement Learning and the Rise of Educational AI
Ever since the astounding triumph of AlphaGo over Lee Sedol at the Go contest in 2016, the world's attention has been drawn to artificial intelligence and reinforcement learning. This victory signalled that machine learning was no longer simply about big data classification, but was making progress in the realm of true intelligence. Reinforcement learning (RL) introduces the concept of an agent, and addresses the problem of a subjective entity making the most-rewarded decision in a known or unknown environment. It could be seen as a learning approach sitting in between supervised and unsupervised learning, since it involves labelling inputs, only that the label is sparse and time-delayed. In life we aren't given labels for every possible behaviour in the world, but we learn lessons by exploring strategies on our own — hence RL provides the closest problem setting to the learning process of a human brain. This accounts for the excitement that the progress elicits from machine learning scholars. So far there have been two main approaches to a reinforcement learning problem.

1. Markov Decision Processes

MDPs are a mathematical framework which models the world as a set of consecutive states with values; inside this world there is a rational agent that makes decisions by weighing the rewards caused by different actions. If the state values are unknown, the agent may begin by interacting with the world first, observing the consequences — and with enough experience, it may exploit that knowledge and make optimal decisions. This builds on rules drawn from observing human intelligence in behavioural psychology experiments, rather than modelling, on a grander scale, the rules that cultivate human intelligence, which is where evolutionary computation comes from.

2. Evolutionary Computation

Evolutionary computation is a family of algorithms which apply the concept of evolution to computation as a search technique to find the fittest solution. More specifically, it utilises concepts from Darwin's theory of evolution — such as mutation, crossover, and fitness — and models the computer to perform natural selection in search of an optimal solution. In other words, it does not attempt to build intelligence from scratch, as long as it renders quasi-intelligent results.

RL algorithms can be really powerful tools in a range of tasks and may potentially contribute to the realisation of general AI, which in turn may greatly improve the education industry in terms of adaptive learning experiences, student path prediction, and unbiased grading systems.
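To make the MDP framing above a little more concrete, here is a tiny value-iteration sketch on a made-up two-state world (all states, rewards, and transition probabilities are invented purely for illustration; this is not from the original article):

```python
import numpy as np

states = [0, 1]        # two abstract states
actions = [0, 1]       # two abstract actions
gamma = 0.9            # discount factor

# P[s][a] = list of (probability, next_state, reward) tuples -- toy numbers
P = {
    0: {0: [(1.0, 0, 0.0)], 1: [(0.8, 1, 1.0), (0.2, 0, 0.0)]},
    1: {0: [(1.0, 0, 0.0)], 1: [(1.0, 1, 2.0)]},
}

V = np.zeros(len(states))
for _ in range(100):   # iterate until the state values stabilise
    V = np.array([
        max(sum(p * (r + gamma * V[s2]) for p, s2, r in P[s][a]) for a in actions)
        for s in states
    ])

print(V)  # estimated value of each state under the optimal policy
```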
https://medium.com/vetexpress/reinforcement-learning-and-the-rise-of-educational-ai-99a68a687a55
[]
2018-01-15 22:46:21.610000+00:00
['Algorithms', 'Edtech', 'AI', 'Artificial Intelligence', 'Reinforcement Learning']
Change Your Life by Changing Your Posture
Change Your Life by Changing Your Posture Boost confidence and performance with this simple activity Photo by Michał Parzuchowski on Unsplash If you’re reading this, you’re probably sitting down. And your shoulders are probably hunched. And your chin is probably tilted down toward your phone screen or your laptop. How are you feeling? Powerful, confident, and strong? Or . . . something else? Try something with me. Stand up, if you can (if not, play along from your seat) Place your feet just wider than shoulder-width apart Roll your shoulders back three or four times and then let them rest (most of us let our shoulders roll forward on a regular basis, so your shoulder blades will probably feel softly engaged in this position, as though they’re gently reaching for each other low across your back) Lift your chin Focus your eyes gently upwards Put your hands on your hips and take a big, long breath Pretend you’re Peter Pan. Feel better? Research indicates that our posture has a major impact on how we feel about ourselves. A few years ago, the idea of “power poses” went viral when Dr. Amy Cuddy’s Ted Talk became the second-most-popular in history. She claimed, based on a small-study sample, that holding our bodies in specific poses could not only affect our self-confidence, but also change our hormonal make-up. In the years that followed Dr. Cuddy’s TED Talk, subsequent larger studies found that the effect of “power posing” on a person’s hormones is questionable at best. Even Dr. Cuddy says she is “agnostic” about the hormonal effects she originally promoted. Hormonal changes aside, though, it’s still true that altering our physical posture can affect how we feel. How we feel affects our performance: the better we feel, the better we perform. And the impact our posture has on our performance actually goes even further than self-confidence. It’s not only our confidence that gets boosted when we stop slouching, stand up, and roll our shoulders back. Because our hunched shoulders and rounded backs literally make us smaller, slouching means that there’s physically less space for our organs, like our lungs and heart, to function. In fact, according to one 2015 study, slouching can decrease our lung capacity by up to 30%. That means that when we slouch, our brains receive one-third less oxygen than they do at optimal capacity — and that affects performance. So if you’re feeling sluggish, dull, or just-not-up-to-snuff, stand up and pretend you’re Peter Pan. Take ten slow, deep breaths, in and out. Add a smile for an extra boost. And carry on with confidence, calm, and a fully-charged brain.
https://medium.com/afwp/change-your-life-by-changing-your-posture-551f3d37d13e
['Cathlyn Melvin']
2020-06-15 03:32:42.234000+00:00
['Posture', 'Mental Health', 'Health', 'Life Lessons', 'Self Improvement']
Data-Driven Design is Killing Our Instincts
What is data-driven design? Simply put, data-driven design means making design decisions based on data you collect about how users interact with your product. According to InVision: Data-driven design is about using information gleaned from both quantitative and qualitative sources to inform how you make decisions for a set of users. Some common tools used to collect data include user surveys, A/B testing, site usage and analytics, consumer research, support logs, and discovery calls. By crafting your products in a way that cater to your users’ goals, preferences, and behaviors, it makes your products far more engaging — and successful. While most data is quantitative and very objective, you can also collect qualitative data about your users’ behavior, feelings, and personal impressions. A designer’s instinct Back in the days of Mad Men, a designer’s gut instinct was glorified because it was difficult to measure the success of a design in progress. You often had to wait until it shipped to know if your idea was any good. Designers justified their value through their innate talent for creative ideas and artistic execution. Those whose instincts reliably produced success became rock stars. In today’s data-driven world, that instinct is less necessary and holds less power. But make no mistake, there’s still a place for it. Design instinct is a lot more than innate creative ability and cultural guesswork. It’s your wealth of experience. It’s familiarity with industry standards and best practices. You develop that instinct from trial and error — learning from mistakes. Instinct is recognizing pitfalls before they manifest into problems, recognizing winning solutions without having to explore and test endless options. It’s seeing balance, observing inconsistencies, and honing your design eye. It’s having good aesthetic taste, but knowing how to adapt your style on a whim. Design instinct is the sum of all the tools you need to make great design decisions in the absence of meaningful data. Clicks and conversions aren’t your only goals Not everything that can be counted counts. Not everything that counts can be counted. Data is good at measuring things that are easy to measure. Some goals are less tangible, but that doesn’t make them less important. While you’re chasing a 2% increase in conversion rate you may be suffering a 10% decrease in brand trustworthiness. You’ve optimized for something that’s objectively measured, at the cost of goals that aren’t so easily codified. This point is perfectly illustrated by a story by Braden Kowitz, a design partner at Google Ventures (via Wired): One of my first projects at Google was to design the “Google Checkout” button. With each wave of design feedback I was asked to make the button bolder, larger, more eye catching, and even “clicky” (whatever that means). The proposed design slowly became more garish and eventually, downright ugly. To make a point, a colleague of mine stepped in with an unexpected move: He designed the most attention-grabbing button he could possibly muster: flames shooting out the side, a massive chiseled 3-D bevel, an all-caps label (“FREE iPOD”) with a minuscule “Checkout for a chance to win”. This move reset the entire conversation. It became clear to the team in that moment that we cared about more than just clicks. We had other goals for this design: It needed to set expectations about what happens next, it needed to communicate quality, and we wanted it to build familiarity and trust in our brand. 
We could have easily measured how many customers clicked one button versus another, and used that data to pick an optimal button. But that approach would have ignored the big picture and other important goals. It’s easy to make data-driven design decisions, but relying on data alone ignores that some goals are difficult to measure. Data is very useful for incremental, tactical changes, but only if it’s checked and balanced by our instincts and common sense. When data-driven design gets fugly Ever used Booking.com? Search for a hotel and you’ll see every listing plastered with a handful of conversion triggers and manufactured urgency/scarcity indicators. Amongst all that crap, it’s difficult to find the real info you’re looking for. It’s a terrible user experience for me. I’m sure many others feel the same. But they must have reliable data that says it works. Conversion rates must go up with each chintzy trigger they cram in. Data says: add more urgency messages, add more upsells, more, more, more. User experience says: less, less, less, just show me what I’m looking for. What’s going on here? Data has become an authoritarian who has fired the other advisors who may have tempered his ill will. A designer’s instinct would ask, “Do people actually enjoy using this?” or “How do these tactics reflect on our reputation and brand?” Booking.com’s brand is cheap deals, so they’re not worried about cheap tactics. If those tacky labels stoke enough FOMO to get a few more bookings, they’ve won. It doesn’t critically damage other goals if they’re perceived as a little gaudy in the process. But not every business has the luxury of caring only about clicks and conversions. You may need to convey quality and trust. Or exclusivity and class. Does cramming in data-driven conversion triggers serve those goals too? Or would building a more focused and delightful user experience better speak to your user’s needs? Data-driven sameness Digital interface design is going through a bland period of sameness. I see it in my own work, and I worry it’s becoming hard to escape from. You could blame Apple and Google for publishing good design systems, and then everyone else trying to look the same. You could blame WordPress for the proliferation of content-agnostic templates — pulling apart the age-old marriage of content and designer. Or you could blame platforms like Dribbble that amplify trends and superficial eye-candy. I’d argue that data-driven design also plays a role in why all websites look the same. We’re all scared to experiment and reinvent the wheel, because data’s already proven that the wheels we’ve got work well enough. When our Agile processes are geared toward efficiency, it’s too costly to prototype and test innovative solutions. So we blindly churn out the same tried and true solutions over and over again. Design “process” has replaced instinct as the new skill to fetishize. Some say that everyone is a designer if they can only follow the same processes we do. While that’s not true, it still leads to design decisions being made without the temperance of a professional designer’s instincts and experience. It creates more generic-looking interfaces that may perform well in numbers but fall short of appealing to our senses. We’re all scared to experiment and reinvent the wheel, because data’s already proven that the wheels we’ve got work well enough. Data is only as good as the questions you ask What makes data so dangerous is that your input grossly colors your output. 
If you ask the wrong questions at the wrong time, or to the wrong people, you draw bad conclusions. Early adopters and eager user-testers don't necessarily behave the same as your average user, so even when you are asking the right questions you can get tainted data. The most empathetic designers — who are convinced they see the product the same way as their users — don't behave the same. They know the product too intimately. They can't see it objectively anymore. They can't become naive. Beware of misleading data. It's only one source of info, and it's only ever as good as your collection methods. Rather than blindly following the conclusions of big data, back them up with other sources (or at least common sense) before charging ahead with your shiny validation in hand.
https://modus.medium.com/data-driven-design-is-killing-our-instincts-d448d141653d
['Benek Lisefski']
2020-02-11 00:49:11.592000+00:00
['Craft', 'UX', 'Creativity', 'Data', 'Design']
Multitasking Is Not My Forte, So How Can I Blame Python?
I consider myself to be an efficient person. I always try to utilize my time wisely. When I have some time to kill, like waiting in line to run some errands, I always bring my laptop and get some work done. I exercise whilst listening to podcasts or audiobooks and other nerdy stuff. This week I just started training in a new gym where you can solve sudokus whilst using the treadmill, and I figured out that I can't do both at the same time! It was then that I realized that Python and I have a huge thing in common: we are both very bad at multitasking.

To understand why Python multitasking works differently from other languages, we will have to understand the difference between sequential execution, parallelism, and concurrency. Sequential means running one task at a time. Let's say I invited a few friends over for dinner, and I want to bake some cakes for them. I get the recipes for my 3 favorite cakes: chocolate, cheese, and caramel. Since my baking skills are not at their best, I can only make one cake at a time. So first I make the chocolate cake; when it finishes baking I start making the cheesecake, and the same with the caramel cake. Let's say every cake takes about 10 minutes to mix the ingredients and 50 minutes to bake; in total I spent 3 hours making those cakes, which is a lot of time. Concurrency means making progress on multiple tasks at the same time, but not necessarily simultaneously. So let's say my baking skills are a bit better now: I can start mixing the chocolate cake, and when I put the cake in the oven, I can start mixing the cheesecake, and so on and so forth. So I am using the baking time, which is idle for me, to make progress with the other cakes, but I don't do it simultaneously. This will take about 1 hour and 20 minutes for the 3 cakes to be ready, not too bad. Parallelism means running multiple operations at the same time, or as we call it in our day-to-day life — multitasking. So let's say I am calling 2 of my friends to help me with the baking (if they want to eat that, they should help too!). So now we can simultaneously mix the 3 cakes and bake them all at the same time, which will take only 1 hour.

So how are Python threads related to the bakery I just started in my kitchen? Well, in most programming languages we run threads in parallel. In Python we have something called the GIL, which stands for Global Interpreter Lock: a lock that allows only one thread to hold control of the Python interpreter. This means that only one thread can be in a state of execution at any point in time. Python wasn't designed with the idea that personal computers might have more than one core, which means it was not designed to use multiple processes or multiple threads. Therefore the GIL is necessary to enforce a lock when accessing Python objects, in order to be on the safe side. It is definitely not the best, but it's a pretty effective mechanism for memory management. So does that mean threads in Python are useless? Absolutely not! Even though we cannot execute threads in parallel, we can still run them concurrently. This will be good for tasks, like in the baking example, that have some waiting time. I/O-bound problems cause the program to slow down because it frequently must wait for input/output from some external resource. They arise frequently when the program is working with things that are much slower than the CPU, like a DB or a network.
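The article's code snippets are embedded gists that do not survive in this text. As a rough sketch of what the baking example looks like in code (the timings, the decorator, and every name other than make_a_cake/bake_a_cake are my own reconstruction, not the author's exact code), the sequential and multithreaded variants might be:

```python
import threading
import time
from functools import wraps

CAKES = ["chocolate", "cheese", "caramel"]

def timeit(func):
    """Decorator that prints how long each run takes."""
    @wraps(func)
    def wrapper(*args, **kwargs):
        start = time.perf_counter()
        result = func(*args, **kwargs)
        print(f"{func.__name__} took {time.perf_counter() - start:.2f}s")
        return result
    return wrapper

def bake_a_cake(name):
    # Simulate the I/O-bound part (the oven): the CPU just waits here.
    time.sleep(10)
    print(f"{name} cake is ready")

def make_a_cake(name):
    print(f"mixing the {name} cake")
    bake_a_cake(name)

@timeit
def run_sequentially():
    for cake in CAKES:
        make_a_cake(cake)

@timeit
def run_with_threads():
    threads = [threading.Thread(target=make_a_cake, args=(cake,)) for cake in CAKES]
    for t in threads:
        t.start()
    for t in threads:
        t.join()

if __name__ == "__main__":
    run_sequentially()   # ~30s: one cake after another
    run_with_threads()   # ~10s: the GIL is released while waiting on the "oven"
```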
With baking code along the lines of the sketch above (each cake run first in a sequence and then with multithreading, plus a decorator to measure the time of each run), we can start baking and compare the results. In this example, we can see that it's much faster to bake the cakes using the multithreading approach because it maximizes the use of resources: it interrupts one operation while continuing to work on others, and this does improve the performance of the program. Despite this efficiency, as we have seen in this example, using multithreading has some downsides. The operating system actually knows about each thread and can interrupt it at any given time to start running a different thread. This can cause race conditions, which is something we have to keep in mind while using this approach. The other thing is that the number of threads is limited by the operating system. In this example we have only 1 kind of task, but in real-life examples we can have a lot of them, so with this technique the performance is capped by the number of threads available to us.

So how can we do it better? In Python 3.4 we were introduced to a package called Asyncio. In fact, Asyncio is a single-threaded, single-process design. Asyncio gives a feeling of concurrency despite using a single thread in a single process. To do so it uses coroutines (small code snippets) that can be scheduled concurrently, switching between them. Asyncio uses generators and coroutines to pause and resume tasks. An asyncio variant of the baking code is sketched after this section. The keyword await passes function control back to the main call; it basically suspends the execution of the surrounding coroutine. If Python encounters an await bake_a_cake expression in the scope of make_a_cake, this is how await tells the main call: "Suspend execution of make_a_cake until the result of bake_a_cake returns. In the meantime, go do something else." Running it, we can see that the performance is about the same as multithreading, around 10 seconds. Using Asyncio, however, will yield better results when running a lot of tasks, since the multithreading mechanism is limited by the operating system while Asyncio can be interrupted as many times as needed. The other thing is that using 'await' makes it visible where the scheduling points are. This is a major advantage over threading: it makes it easier to reason about race conditions, which are less frequent than with threading, since we are using a single thread. It's important to understand that concurrency problems are not gone completely; we cannot simply ignore other concurrent tasks. With the Asyncio approach, needing to use locks is a much less common situation. But we should understand that every await call breaks the critical section.

So does that mean that using asyncio is always better? NO! Both threading and Asyncio are better for IO-bound tasks, but that's not the case for CPU-bound tasks. A CPU-bound task is, for example, a task that performs an arithmetic calculation. It is CPU bound because the rate at which the process progresses is limited by the speed of the CPU. Trying to use multiple threads won't speed up the execution; on the contrary, it might degrade overall performance. But we can try to use processes for that: since every process runs on a different CPU core, we are basically adding more computing power to our calculation.
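Continuing the reconstruction (again my own sketch, not the author's gists): an asyncio variant of the same I/O-bound task, followed by a CPU-bound variant run with multiprocessing as discussed next.

```python
import asyncio
import multiprocessing
import time

CAKES = ["chocolate", "cheese", "caramel"]

# --- asyncio variant of the I/O-bound baking task -------------------------
async def bake_a_cake(name):
    # Awaiting asyncio.sleep yields control back to the event loop,
    # so the other cakes make progress while this one "bakes".
    await asyncio.sleep(10)
    print(f"{name} cake is ready")

async def make_a_cake(name):
    print(f"mixing the {name} cake")
    await bake_a_cake(name)

async def run_with_asyncio():
    await asyncio.gather(*(make_a_cake(cake) for cake in CAKES))

# --- CPU-bound variant: heavy calculation instead of waiting --------------
def make_a_cake_cpu(name):
    total = sum(i * i for i in range(20_000_000))  # arbitrary busy work
    print(f"{name} cake 'calculated': {total}")

def run_with_processes():
    with multiprocessing.Pool(processes=len(CAKES)) as pool:
        pool.map(make_a_cake_cpu, CAKES)

if __name__ == "__main__":
    start = time.perf_counter()
    asyncio.run(run_with_asyncio())   # ~10s, single thread, single process
    print(f"asyncio run took {time.perf_counter() - start:.2f}s")

    start = time.perf_counter()
    run_with_processes()              # each process gets its own interpreter and GIL
    print(f"multiprocessing run took {time.perf_counter() - start:.2f}s")
```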
Each Python process gets its own Python interpreter and memory space, so the GIL won't be a problem. Changing our baking method to do some calculations instead (as in the CPU-bound sketch above) and running it with multiple processes, we can compare it with a sequential run and with multithreading. In this example, we can see that we get about the same results when baking the cakes in a sequence and when using multithreading. So even though threading is a concurrency mechanism, there is no waiting time here for it to optimize; it can sometimes even be slower than running in sequence because of context switching. Multiprocessing does make the performance better, but using multiple processes is heavier than using multiple threads, so we should keep in mind that this could become a scaling bottleneck. Processes also do not share memory, since each one has its own address space.

So, with all these options, when should we use each one? From a performance perspective:

- For CPU-bound tasks: processes
- For IO-bound tasks with a few tasks: threads
- For IO-bound tasks with a lot of tasks: asyncio

If we are looking at other aspects too, like code readability and quality, I would always use Asyncio over threads, because using await makes the code much clearer and leaves less room for concurrency errors, even if running multiple threads can sometimes give slightly better performance results.
https://medium.com/swlh/how-has-python-helped-me-bake-cakes-more-efficiently-b870a1f111ac
['Danielle Shaul']
2020-11-03 09:00:02.521000+00:00
['Python', 'Concurrency', 'Engineering', 'Backend Development']
Deciding between Row- and Columnar-Stores | Why We Chose Both
Row oriented databases

Row-stores are considered "traditional" because they have been around longer than columnar-stores. Most row oriented databases are commonly known for OLTP (online transactional processing). This means that row-stores typically perform well for a single transaction like inserting, updating, or deleting relatively small amounts of data. (3) Writing one row at a time is easy for a row-store because it appends the whole row to a chunk of space in storage. In other words, the row oriented database is partitioned horizontally. Since each row occupies at least one chunk (a row can take up more than one chunk if it runs out of space), and a whole chunk of storage is read at a time, this makes it perfect for OLTP applications where a small number of records are queried at a time. (4) row-stores save to storage in chunks

Using a row-store

Row-stores (ex: Postgres, MySQL) are beneficial when most/all of the values in the record (row) need to be accessed. Row oriented databases are also good for point lookups and index range scans. Indexing (creating a key from columns) based on your access patterns can optimize queries and prevent full table scans in a large row-store. If the value needed is in the index, you can pull it straight from there. Indexing is an important component of row-stores because, while columnar-stores also have some indexing mechanisms to optimize full table scans, they are not as efficient at reducing seek time for individual record retrieval as an index on the appropriate columns. Note that creating many indices will create many copies of data, and a columnar-store is a better alternative (see When to Enable Indexing?). (5,6) row-store concept overview

If only one field of the record is desired, then using a row-store becomes expensive since all the fields in each record will be read. Even data that isn't needed for the query response will be read, assuming it isn't indexed properly. Consequently, many seek operations are required to complete the query. For this reason, a columnar-store is favored when you have unpredictable access patterns, whereas known access patterns are well accommodated by a row-store. (7)
columnar-stores save columns to storage in chunks Additionally, in column-stores compression is achieved more efficiently than in row-stores because columns have uniform types (ex: all strings or integers). These performance benefits apply to arbitrary access patterns, making them a good choice in the face of unpredictable queries. (8) Using a columnar-store example database Columnar-stores (examples: RedShift, BigQuery) are good for computing trends and averages for trillions of rows and petabytes for data. (8) Assuming this table continues for millions of rows, what if we wanted to know the sum of the amount spent on online purchases for company A? Well, for company A’s online purchases table, we would need to sum all of the online purchase values. Instead of going through each row and reading the email, type of purchase, and any other columns this table could have, we just need to access all of the values in the “amount” column.
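As a toy illustration of why that column read is cheaper, here is an in-memory Python analogy (not how a real engine stores data; the purchases table and its columns are the hypothetical example described above):

```python
# Row-oriented layout: each record stored together, like chunks holding whole rows.
row_store = [
    {"email": "a@example.com", "type": "online", "amount": 30.0},
    {"email": "b@example.com", "type": "store",  "amount": 12.5},
    {"email": "c@example.com", "type": "online", "amount": 8.0},
    # ... millions more rows
]

# Column-oriented layout: each column stored contiguously.
column_store = {
    "email":  ["a@example.com", "b@example.com", "c@example.com"],
    "type":   ["online", "store", "online"],
    "amount": [30.0, 12.5, 8.0],
}

# Summing online purchase amounts from the row store touches every field of every row...
total_row = sum(r["amount"] for r in row_store if r["type"] == "online")

# ...while the column store only needs to scan the "type" and "amount" columns.
total_col = sum(a for t, a in zip(column_store["type"], column_store["amount"]) if t == "online")

print(total_row, total_col)  # 38.0 38.0
```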
https://medium.com/bluecore-engineering/deciding-between-row-and-columnar-stores-why-we-chose-both-3a675dab4087
['Alexa Griffith']
2020-08-10 20:25:03.312000+00:00
['Bluecore', 'Data Engineering', 'Programming', 'Data', 'Software Engineering']
Is your current BI tool holding you (and your data) back?
If you’re currently managing and analyzing your data in a BI tool, it’s time to ask yourself: how happy are you? It’s okay, you can be honest. We promise we won’t tell. Truthfully, most people utilize tools like Excel because it’s comfortable. They know how to use it, how to navigate it, and feel confident with how it works. But, it may be time to break out of your comfort zone by switching to a programmatic approach like Python. Why switch to a programmatic approach? We’re not one for clichés, but in this situation, the grass really is greener on the other side. Let us explain three reasons where Excel can limit your data analysis and management: For those who work with large sets of data, it can be hard to manage when you’re confined to row and column data and lookups. It can also take a long time to open and load your data, and in a day-and-age where data collection is more popular and datasets are increasingly getting larger, programmatic data management can improve your data organization by facilitating easier connections to databases and allowing simple imports into data frames. Even if Excel is getting the job done for you, you’re very limited to the visualizations you can build around your data. Sure, you’re able to get your point across, but your data is much more impactful when its visualized in a way that is meaningful, tells a story, and is on-brand. Collaborating and sharing at scale can be frustrating in Excel. Users typically turn to tools like DropBox or email to share files, which can be very limiting. With a programmatic approach, you can easily edit, collaborate, and share your data effortlessly. The switch to Python Due to its powerful infrastructure and flexibility, Python is rapidly increasing in popularity for analyzing and managing data. So, what makes this programming language so great? For starters, it’s as simple to learn as it is to use. The language’s syntax is clear and easy to understand and its large fanbase means support is readily available, making it ideal for beginners. Additionally, Python’s extensive integration abilities can significantly increase productivity. The benefits of Dash Built as a Python framework, Plotly’s Dash will give you all the benefits of a programmatic approach, and then some. Even if you’re working with manageable data, you’re still limited to your current tool’s style and chart offerings. With a programmatic approach, especially one with Plotly’s Dash, you’ll have easier access to a number of data viz options, allowing you to build, test, and deploy beautiful interactive apps. Yep — you read that right — Dash apps are completely interactive. Because Dash is built on top of plotly.js, the charts are inherently interactive. Dash then gives you the ability to add additional interactive features such as drop-downs, sliders, and buttons, all built around your data code. Interested in learning more? Check out Dash at ODSC East Don’t take our word for it — come see Dash in action. Our Head of Project Management, Chelsea Douglas, will be presenting a demo of Dash at the Open Data Science Conference (ODSC) East in Boston on Friday, May 3. Click here to learn more. But wait, there’s more. We’ll be hosting the 10:30am coffee breaks at ODSC East. So you can grab a coffee and some snacks and schedule some time to talk with us. Register now! Also — be sure to stay tuned for our upcoming Excel to Python blog series, where we’ll be discussing this in more detail and sharing best practices for moving over to a programmatic approach.
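For a taste of what the interactivity described above looks like in practice, here is a minimal Dash sketch of my own (not one of Plotly's official demos; the sales data frame is made up purely for illustration):

```python
import dash
import dash_core_components as dcc
import dash_html_components as html
from dash.dependencies import Input, Output
import pandas as pd
import plotly.graph_objs as go

# A made-up data frame standing in for whatever you'd normally keep in a spreadsheet.
df = pd.DataFrame({
    "month": ["Jan", "Feb", "Mar", "Apr"] * 2,
    "region": ["North"] * 4 + ["South"] * 4,
    "sales": [10, 14, 9, 20, 7, 11, 13, 18],
})

app = dash.Dash(__name__)

app.layout = html.Div([
    html.H3("Sales by month"),
    dcc.Dropdown(
        id="region",
        options=[{"label": r, "value": r} for r in df["region"].unique()],
        value="North",
    ),
    dcc.Graph(id="sales-chart"),
])

@app.callback(Output("sales-chart", "figure"), [Input("region", "value")])
def update_chart(region):
    # Re-filter the data frame and redraw the chart whenever the dropdown changes.
    filtered = df[df["region"] == region]
    return go.Figure(data=[go.Bar(x=filtered["month"], y=filtered["sales"])])

if __name__ == "__main__":
    app.run_server(debug=True)
```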
https://medium.com/plotly/is-your-current-bi-tool-holding-you-and-your-data-back-42f66ad494b
[]
2019-04-29 16:14:18.780000+00:00
['Plotly', 'Data Analysis', 'Python', 'Data Science', 'Data Visualization']
The Shift That Will Change Marketing Forever
The Shift That Will Change Marketing Forever The marketing industry is at an evolutionary crossroads. It's a time of reckoning, an inflection point perhaps. Call it what you want, a shift is finally starting to occur. Photo: Shutterstock

There are multiple definitions of marketing. Just Google it. You'll spend the next 2 weeks reading. Probably the best single-source destination is nearly a decade old, from Heidi Cohen and her "72 Marketing Definitions". The short summary is that while yes, some definitions contradict others, the most commonly accepted understanding is perhaps best summed up by Dr Philip Kotler, who defines marketing as: The science and art of exploring, creating, and delivering value to satisfy the needs of a target market at a profit. Marketing identifies unfulfilled needs and desires. It defines, measures and quantifies the size of the identified market and the profit potential. It pinpoints which segments the company is capable of serving best and it designs and promotes the appropriate products and services.

As a consequence, most businesses focus almost exclusively on acquisition. Does that meet the intent of the wider definition? I'll let you decide. Now, there's nothing wrong with the function of acquisition. In fact, it is critical for growing a customer base. But the methods and mindsets required for pure acquisition simply do not translate to long-term customer engagement. And not just in the ranks of practitioners, but, by extension, in the products that are developed for them by vendors around the world. Engagement is not about segments or campaigns. It is relational and longitudinal, and in fact, it is where true brand perception lies. And so our multitude of product silos — from CRM to marketing automation, to eCommerce, to identity management, to Voice of Customer, to media engines (and on and on and on) — demand side to supply side — don't always serve us well. Silos and disconnections beckon. Yes, engagement is a whole different ball game to acquisition, and when businesses make the mistake of mixing them up, they tend to treat long-standing customers like campaign targets. They also tend to focus on the top of the "funnel", and existing customers often find that there is no commitment to a relationship at all.

Stephanie calls the contact center. She gives her credentials and describes her query. They transfer her to the right department, where she gives her credentials again. "How can I help you today?", comes the ever-helpful agent. Patiently, she describes her query again. "Oh, whoops — you need a different department". The transfer begins. Stephanie sits on hold, glancing at her watch. She doesn't have long before she will have to pick up her father. The call picks up. "Hello, this is Andy. Can I have your full name and password please?". It's bad enough when a company doesn't know that Stephanie was on its website last week, in one of its stores yesterday, and is now calling the contact center today, all in order to continue a single conversation with a company that she is trying to buy from. But it's indefensible that it loses its memory within the same channel, on the same journey! I call this brand dementia. It is, frankly, ridiculous, and yet it's everywhere. And I mean, everywhere. But that's not all. Brands that have allowed a mercenary acquisition mindset to dominate their marketing culture have found that the technological advances of the last decade, in particular, are simply too tempting.
Their use — and yes, by that I do mean misuse — of their customers' data has given rise to a societal backlash. Among the myriad of morally questionable practices, everyone likes to cite the Cambridge Analytica scandal, but Facebook has not been alone in the murkier side of our marketing and data marriage. Yet there are still many marketers out there who contend that a traditional hard-press acquisition mindset remains the cornerstone of the profession and that we all just need to push on. Their abiding principle remains that they can acquire customers faster than the business might lose them. And besides, they are tasked with growth. But they miss the point. Society — that is, our customers — demands more. Let us not forget, this is where our growth actually comes from.
https://medium.com/the-kickstarter/the-shift-that-will-change-marketing-forever-4665b927577
['Aarron Spinley']
2020-07-28 12:06:46.733000+00:00
['Society', 'Business', 'Marketing', 'Strategy', 'Digital Marketing']
Dashboard Using R Shiny Application
I decided to look at the change in patterns related to terrorism from 1970 to 2017 using the Kaggle dataset. I know the data is sad, but it is interesting to work with.

About the dataset: The Global Terrorism dataset is an open-source database which provides data on terrorist attacks from 1970 to 2017. Geography: Worldwide. Time period: 1970–2017, except 1993. Unit of analysis: Attack. Variables: >100 variables on location, tactics, perpetrators, targets, and outcomes. Sources: Unclassified media articles. (Note: Please interpret changes over time with caution. Global patterns are driven by diverse trends in particular regions, and data collection is influenced by fluctuations in access to media coverage over both time and place.)

Overview of the R Shiny application: Before we dive in and see how a dashboard is created, we should understand a little bit about the Shiny application. Shiny is an open-source R package that provides an elegant and powerful web framework for building web applications using R. When you are in the RStudio environment, you have to create a new file and select R Shiny web application: Create a new shiny application file

To begin, we should install some packages if they are not already installed: shiny, shinydashboard, leaflet.

Shiny Dashboard: It is really important to know the structure and principles on which the dashboard is built. This is called the User Interface (UI) part of the dashboard: Header, Sidebar, Body.

Header: The header, as the name suggests, provides a title for the dashboard. You have to use the dashboardHeader function to create the header. The code below defines what is written in the heading of the dashboard. Code for header

Sidebar: The sidebar appears on the left side of the dashboard. It is like a menu bar where you can put options for the user to select from. In my dashboard I created two menu items. As you can see, one is a tab with the name "Dashboard" and the other is a link to the Kaggle dataset; when you click it, it will open the source of the data. Codes for Sidebar Below is the visualization of the sidebar menu:

Body: This is the most important part of the dashboard as it contains all the charts, graphs, and maps you want to visualize. It consists of rows, called fluidRow, which determine where your data is visualized. Inside a fluid row you place boxes and determine the kind of options you want to put in each box using the selectInput function. In my dashboard I have a total of three rows. The code and visuals for row number two are shown below. Codes for the fluidRow 2 Boxes in fluidRow 2 Combining all the rows makes the body for the dashboard using the dashboardBody function: Combining all the rows in a body

Creating server functions: The server function tells the Shiny app how to build the objects. It creates the output(s), containing all the code needed to update the objects in the app. Each output must be built with a render function. Render functions tell Shiny what kind of output is required. Some of the render functions are below:

renderDataTable → DataTable
renderImage → images (saved as a link to a source file)
renderPlot → plots
renderPrint → any printed output
renderTable → data frame, matrix, other table-like structures
renderText → character strings
renderUI → a Shiny tag object or HTML

Code example of renderPlot Once you are done writing the code within the server function, the last step is to run the shinyApp.
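The author's code appears only as image captions here; as a generic illustration (my own minimal skeleton, not the author's actual dashboard code, with a placeholder plot and an assumed dataset link), the header, sidebar, body, and server pieces fit together roughly like this:

```r
library(shiny)
library(shinydashboard)

# Header: title shown at the top of the dashboard
header <- dashboardHeader(title = "Global Terrorism 1970-2017")

# Sidebar: one dashboard tab plus an external link to the data source
sidebar <- dashboardSidebar(
  sidebarMenu(
    menuItem("Dashboard", tabName = "dashboard", icon = icon("dashboard")),
    menuItem("Kaggle dataset", icon = icon("database"),
             href = "https://www.kaggle.com/START-UMD/gtd")  # adjust to the actual dataset URL
  )
)

# Body: fluidRows holding boxes with inputs and plot outputs
body <- dashboardBody(
  fluidRow(
    box(selectInput("region", "Region", choices = c("All", "Asia", "Europe"))),
    box(plotOutput("attacks_plot"))
  )
)

ui <- dashboardPage(header, sidebar, body)

# Server: render functions build the outputs from the inputs
server <- function(input, output) {
  output$attacks_plot <- renderPlot({
    plot(1970:2017, rnorm(48), type = "l",
         main = paste("Placeholder trend for", input$region))
  })
}

shinyApp(ui, server)
```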
The final output of my dashboard is below. Once the dashboard was created, I deployed it online. You can open it on your computer as well as on your smartphone. You can find the code and the data file for this dashboard on my GitHub. Sources used:
https://medium.com/analytics-vidhya/dashboard-using-r-shiny-application-a2c846b0bc99
['Hassaan Ahmed']
2020-07-10 13:58:28.169000+00:00
['Dashboard', 'Rstudio', 'Shiny', 'Kaggle', 'Data Visualization']
VJ Loop | Lattices RYG
This blog takes the broadest conception of sound design possible including visual effects because audio likes video. Over 90,000 views annually.
https://medium.com/sound-and-design/vj-loop-latticeryg-5c54b6c3178c
["Michael 'Myk Eff' Filimowicz"]
2020-12-29 02:23:29.952000+00:00
['EDM', 'Flair', 'Design', 'Art', 'Creativity']
Advanced Visualization for Data Scientists with Matplotlib
3D Plots using Matplotlib 3D plots play an important role in visualizing complex data in three or more dimensions. 1. 3D Scatter Plot 3D scatter plots are used to plot data points on three axes in an attempt to show the relationship between three variables. Each row in the data table is represented by a marker whose position depends on its values in the columns set on the X, Y, and Z axes. 2. 3D Line Plot 3D Line Plots can be used in the cases when we have one variable that is constantly increasing or decreasing. This variable can be placed on the Z-axis while the change of the other two variables can be observed in the X-axis and Y-axis w.r.t Z-axis. For example, if we are using time series data (such as planetary motions) the time can be placed on Z-axis and the change in the other two variables can be observed from the visualization. 3. 3D Plots as Subplots The above code snippet can be used to create multiple 3D plots as subplots in the same figure. Both the plots can be analyzed independently. 4. Contour Plot The above code snippet can be used to create contour plots. Contour plots can be used for representing a 3D surface on a 2D format. Given a value for the Z-axis, lines are drawn for connecting the (x,y) coordinates where that particular z value occurs. Contour plots are generally used for continuous variables rather than categorical data. 5. Contour Plot with Intensity The above code snippet can be used to create filled contour plots. 6. Surface Plot The above code snippet can be used to create Surface plots which are used for plotting 3D data. They show a functional relationship between a designated dependent variable (Y), and two independent variables (X and Z) rather than showing the individual data points. A practical application for the above plot would be to visualize how the Gradient Descent algorithm converges. 7. Triangular Surface Plot The above code snippet can be used to create Triangular Surface plot. 8. Polygon Plot The above code snippet can be used to create Polygon Plots. 9. Text Annotations in 3D The above code snippet can be used to create text annotations in 3D plots. It is very useful when creating 3D plots as changing the angles of the plot does not distort the readability of the text. 10. 2D Data in 3D Plot The above code snippet can be used to plot 2D data in a 3D plot. It is very useful as it allows to compare multiple 2D plots in 3D. 11. 2D Bar Plot in 3D
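None of the referenced snippets survive in this text; as a small stand-in (my own minimal example, not the article's code), a 3D scatter and a surface plot in Matplotlib look roughly like this:

```python
import numpy as np
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import Axes3D  # registers the 3d projection

fig = plt.figure(figsize=(10, 4))

# 3D scatter: three random variables plotted against each other
ax1 = fig.add_subplot(121, projection="3d")
x, y, z = np.random.rand(3, 100)
ax1.scatter(x, y, z, c=z, cmap="viridis")
ax1.set_title("3D scatter")

# Surface plot: z as a function of x and y on a mesh grid
ax2 = fig.add_subplot(122, projection="3d")
X, Y = np.meshgrid(np.linspace(-3, 3, 60), np.linspace(-3, 3, 60))
Z = np.sin(np.sqrt(X**2 + Y**2))
ax2.plot_surface(X, Y, Z, cmap="viridis")
ax2.set_title("Surface plot")

plt.tight_layout()
plt.show()
```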
https://medium.com/sfu-cspmp/advanced-visualization-for-data-scientists-with-matplotlib-15c28863c41c
['Veekesh Dhununjoy']
2019-03-13 04:23:15.249000+00:00
['Data Visualization', 'Data Science', 'Matplotlib', 'Python', 'Technology']
Blockchain as a business technology of the future!
Bitcoin and blockchain have been much talked about for several years now, and the number of people interested in the technology is growing. In simple terms, blockchain is a database where information is secure, as it cannot be changed or removed. In short, the blockchain is a database you cannot deceive or amend, and nothing can be deleted from it. Ageless like Lukashenko. The potential of the technology has not been fully unleashed so far, although the best specialists are working on integrating and studying blockchain even now, this very minute, while you are reading the article. Blockchain is applied in many areas such as logistics, medicine, construction, and the public sector. The technology especially interests businesspeople who want to optimize their business and save funds. Of course, not all entrepreneurs are ready to invest in a new technology whose potential has not been fully unleashed. However, blockchain is conquering the world: new specialists in the technology are emerging, as are innovative application areas. In many European countries, one can find a large number of seminars, meetups, and conferences where the technology is discussed, for instance Blockchain Summit, Blockchain Life, CryptoSpace, and the Blockchain & Bitcoin Conference series. Participants can learn more about blockchain, gain insight into its peculiarities, and understand how to implement the technology in their projects. Many companies already use it in their business.

A step ahead

In December of 2017, Apple filed a patent application to use blockchain in a system for creating and certifying timestamps. As blockchain transactions are transparent and unchangeable, every participant has access to the records. Attempts to change a timestamp would be registered, preventing fraud and ensuring resistance to attacks. In such a way, Apple Wallet would be protected against hacking and breaches. As a result, your Apple Wallet would be secure, and the poor online fraudster would probably have to find a job after another failure.

My car is my castle

Porsche became the first car manufacturer to use blockchain in the automobile industry. Together with the German startup XIAN, the company is testing blockchain-based applications integrated into car computers. The applications handle locking and unlocking of car doors, temporary access authorization, and encrypted information logging. The use of the technology allows secure connection to the repository that stores data related to the car and its features.

Coca Cola and blockchain: what do they have in common?

The world-famous Coca Cola beverage company has implemented blockchain technology in its business. Fittingly, one of the company's slogans is "Taste the feeling". The company uses blockchain to fight against compulsory labor. The distributed ledger will contain data related to employees, including their labor agreements. The data will be protected using electronic notary services. In such a way, the company wants to reduce the number of undocumented workers and fight against compulsory labor. As a result, unpaid overtime and the employment of underage workers will be eliminated — blockchain is making the world more humane.

As you understand, blockchain is deployed in almost all areas. Indeed, its arrival has revolutionized modern technologies, and it can be likened to the invention of the Internet. Today most companies, startup founders, and reputable specialists are mastering blockchain so as not to miss out on the opportunities of modern business.
So if blockchain and Bitcoin are still synonyms for you, we recommend changing that, unless you want to remain the last "analogue mammoth" in the world of digital technologies.
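To make the tamper-evidence mentioned above (for example, in the timestamp-certification scenario) a little more concrete, here is a deliberately simplified Python sketch of a hash-chained ledger. It is a toy illustration written for this article, not Apple's patented design nor any production blockchain: real systems add distributed consensus among many participants on top of this chaining, which is what makes a shared record practically impossible to rewrite.

```python
# Toy illustration of why a blockchain-style ledger is tamper-evident:
# each block stores the hash of the previous block, so changing any record
# breaks the chain for that block and every block that follows it.
import hashlib
import json

def block_hash(body):
    # Hash a block's contents deterministically.
    return hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()

def append_block(chain, data):
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    body = {"index": len(chain), "data": data, "prev_hash": prev_hash}
    chain.append(dict(body, hash=block_hash(body)))

def is_valid(chain):
    prev_hash = "0" * 64
    for block in chain:
        body = {"index": block["index"], "data": block["data"],
                "prev_hash": block["prev_hash"]}
        # Both the block's own hash and its link to the previous block must match.
        if block["hash"] != block_hash(body) or block["prev_hash"] != prev_hash:
            return False
        prev_hash = block["hash"]
    return True

ledger = []
append_block(ledger, {"event": "timestamp issued", "ts": "2018-08-01T10:00:00Z"})
append_block(ledger, {"event": "timestamp issued", "ts": "2018-08-02T09:30:00Z"})
print(is_valid(ledger))   # True

ledger[0]["data"]["ts"] = "2017-01-01T00:00:00Z"   # attempt to back-date a record
print(is_valid(ledger))   # False: the tampering is immediately detectable
```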
https://medium.com/smile-expo/blockchain-as-a-business-technology-of-the-future-f02694d19dc5
[]
2018-08-03 07:41:53.338000+00:00
['Technology', 'Blockchain', 'Apple', 'Bitcoin', 'Future']
Ensemble Learning — Your Machine Learning savior and here is why (Part 1)
Photo by Romain Baïsse

This is a series of posts that explains the Ensemble method in machine learning in an easy-to-follow manner. In this post, we will discuss Heterogeneous Ensembles.

1/ Heterogeneous Ensemble
2/ Homogeneous Ensemble: Bagging
3/ Homogeneous Ensemble: Boosting

As a machine learning enthusiast and self-taught learner, I write the following post as study material for myself and would love to share it with my fellow learners. The idea is to show the whole picture of the different Ensemble techniques, when to use them, and what effects they have on our models.

Ensemble is a machine learning technique that makes predictions by using a collection of models. Because it builds on other models, learners usually discover this method only after studying the classic algorithms. The strength of the technique derives from the idea of "the wisdom of the crowd", which states that "the aggregation of information in groups, resulting in decisions, are often better than could have been made by any single member of the group" (Wikipedia, Wisdom of the crowd, 2020). The idea is certainly supported by the fact that several winning predictive models in Kaggle competitions deploy this modelling method.

There are great benefits to implementing Ensembles. Apart from impressively high accuracy, the method suits larger datasets, where implementations that exploit parallel computing (e.g. XGBoost, LightGBM) reduce training time and memory use. One of the worst nightmares of machine learning practitioners is overfitting, especially on real-world datasets that contain noise and do not follow any typical distribution; in that case we can also use Ensembles to fight high variance (e.g. Random Forest).

A Heterogeneous Ensemble is used when we want to combine different fine-tuned algorithms to come up with the best possible prediction. Voting and Stacking are examples of this technique.

Hard Voting

This method is used for classification tasks. It combines the predictions of multiple estimators using the mode (choosing the majority class) to select the final result. By choosing the class with the largest number of votes, a Hard Voting Ensemble can achieve better predictions than any single algorithm on its own. To be effective, we need to provide an odd number (at least 3) of diverse estimators that produce their own independent predictions. A code sketch is shown after the Soft Voting section below; note that the choice of algorithms is yours, as long as they are fine-tuned, diverse, and solve classification tasks.

Soft Voting

This technique can be used for both regression and classification tasks. To perform Soft Voting we employ the mean: averaging the predicted probabilities for classification tasks, or the predicted values for regression tasks. The characteristics are similar to Hard Voting, except that we can use any number of estimators as long as there are more than two. As with Hard Voting, we can also assign weights to the estimators depending on their importance for the final prediction. Sketches of both voting approaches follow.
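To make the two voting schemes concrete, below is a minimal sketch using scikit-learn. The original post's code blocks did not survive as text, so the datasets, base estimators, and weights here are illustrative assumptions rather than the author's exact code. Hard Voting uses VotingClassifier on a classification task; for the regression case, scikit-learn's VotingRegressor (which averages predicted values, exactly the behaviour described for Soft Voting above) stands in.

```python
# Minimal sketch of Hard Voting (classification) and averaging (regression).
# The base estimators below are illustrative choices; swap in your own
# fine-tuned, diverse models as described in the text.
from sklearn.datasets import make_classification, make_regression
from sklearn.ensemble import (RandomForestClassifier, RandomForestRegressor,
                              VotingClassifier, VotingRegressor)
from sklearn.linear_model import LinearRegression, LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsRegressor
from sklearn.svm import SVC

# --- Hard Voting for a classification task (majority class wins) ---
X, y = make_classification(n_samples=1000, n_features=20, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

hard_vote = VotingClassifier(
    estimators=[                      # an odd number of diverse estimators
        ("lr", LogisticRegression(max_iter=1000)),
        ("rf", RandomForestClassifier(n_estimators=100, random_state=42)),
        ("svc", SVC()),               # predict_proba not needed for hard voting
    ],
    voting="hard",
)
hard_vote.fit(X_train, y_train)
print("Hard voting accuracy:", hard_vote.score(X_test, y_test))

# --- Averaging ensemble for a regression task (mean of predicted values) ---
Xr, yr = make_regression(n_samples=1000, n_features=20, noise=10, random_state=42)
Xr_train, Xr_test, yr_train, yr_test = train_test_split(Xr, yr, random_state=42)

avg_vote = VotingRegressor(
    estimators=[
        ("lin", LinearRegression()),
        ("rf", RandomForestRegressor(n_estimators=100, random_state=42)),
        ("knn", KNeighborsRegressor()),
    ],
    weights=[1, 2, 1],                # optional: weight estimators by importance
)
avg_vote.fit(Xr_train, yr_train)
print("Averaging ensemble R^2:", avg_vote.score(Xr_test, yr_test))
```

For soft voting on a classification task, you would instead pass voting="soft" to VotingClassifier, in which case every base estimator must expose predict_proba (for example SVC(probability=True)).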
Stacking

If you have tried Hard Voting and Soft Voting but still have not achieved the expected results, this is when Stacking comes into play. The technique can be employed for both classification and regression tasks. The biggest difference between Stacking and the two other Heterogeneous Ensemble methods is that, apart from the base learners, we have an extra meta-estimator.

First, the base learners train and predict on the dataset. Then the meta-estimator on the second layer uses the predictions of the base-learner layer as new input features for the next step. Note that the meta-estimator trains and predicts on a dataset whose input features (X) are these new predictions, one new feature per base learner, together with the class labels (Y) of the original dataset. In this setting we no longer combine predictions with a simple estimate of location (the mode or the mean) but use a trainable learner as the combiner itself. The benefit of this approach is that the meta-estimator can effectively learn which base estimator provides better predictions, and it participates directly in the final prediction using the original data and the new input features.

Source: Stacking model visualisation from mlxtend

A minimal Stacking sketch is included at the end of this post.

Overall, the Heterogeneous Ensemble method is a great choice when you have already built a small set of estimators and individually trained and fine-tuned them. Bear in mind that "Voting only makes sense if the learning schemes perform comparably well. If two of the three classifiers make predictions that are grossly incorrect, we will be in trouble!" (Witten, Frank, Hall, & Pal, 2016). Heterogeneous Ensembles certainly improve overall performance through a simple and intuitive technique. In the next post, we will dive into another Ensemble technique that includes some award-winning algorithms.

Thanks for reading! Don't be shy, let's connect if you are interested in more of my posts on:

Medium: https://medium.com/@irenepham_45233
Instagram: https://www.instagram.com/funwithmachinelearning/
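As referenced in the Stacking section above, here is a minimal sketch using scikit-learn's StackingClassifier. The base learners and the meta-estimator are illustrative assumptions, since the original post's code did not survive as text; mlxtend's StackingClassifier, whose diagram is cited above, exposes a very similar interface.

```python
# Minimal Stacking sketch: a layer of base learners plus a trainable combiner.
# The estimators below are illustrative choices, not the author's exact setup.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, StackingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC

X, y = make_classification(n_samples=1000, n_features=20, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

stack = StackingClassifier(
    estimators=[                               # first layer: diverse base learners
        ("rf", RandomForestClassifier(n_estimators=100, random_state=42)),
        ("knn", KNeighborsClassifier()),
        ("svc", SVC(probability=True)),
    ],
    final_estimator=LogisticRegression(),      # second layer: trainable combiner
    cv=5,                                      # out-of-fold predictions become the new features
)
stack.fit(X_train, y_train)
print("Stacking accuracy:", stack.score(X_test, y_test))
```

The cv argument makes the meta-estimator train on out-of-fold predictions from the base learners, which reduces the risk of the second layer simply memorising the base learners' fit on the training data.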
https://medium.com/analytics-vidhya/ensemble-learning-your-machine-learning-savoir-and-here-is-why-part-1-78ef52c8c365
['Irene Pham']
2020-12-23 16:37:39.310000+00:00
['Machine Learning', 'Data Science', 'Artificial Intelligence', 'Big Data', 'Algorithms']