Dataset columns (all strings): title (1–200 characters), text (10–100k), url (32–885), authors (2–392), timestamp (19–32), tags (6–263).
7 Waking Actions You Can Do to Stay Productive for the Entire Day
I use a Chrome extension called New Tab Draft, which basically turns your new tabs and windows into a type-able notepad. I use it to jot down random thoughts and quick reminders but, most importantly, my to-do list. Having your to-do list in a highly visible and accessible place gives you a constant reminder to stay on track. It’s been highly effective for me because everything I do is on the internet. With my to-do list being the first thing I see when I open a new tab, it becomes almost impossible to even open Facebook or YouTube, because it reminds me of the things I should be focusing on. This trick works well for everyone — I’ve seen startup founders use it, kitchen caretakers, mothers, office workers, lawyers, etc. The key is not to overwhelm your to-do list with tasks: focus on solving no more than 3 (preferably 1) major tasks that day. More details below. 3. Brainstorm 10 ideas I got this idea from a very prolific writer, James Altucher, who says that by brainstorming 10 ideas each day, you force your mind to become more creative. I practice this exercise religiously as a writer. Most of the topics I write down are scrapped, but there are always one or two out of the 10 that make for a great piece. I’d suggest that anyone, even non-writers, practice this. If you’re a mom trying to figure out what to cook for dinner, practice writing down 10 things to cook. If you’re a comedian, come up with 10 jokes. If you’re an artist, do 10 quick sketches. The true value of coming up with 10 ideas, besides the improvement in creativity, is that it’s a numbers game: supposedly, the more ideas you come up with, the more good ideas you can generate. 4. Disable all notifications/Turn phone on silent This is pretty self-explanatory, but it’s something that many people still don’t do at work. Social media notifications, email alerts, or any kind of notification in general are a major distraction from getting things done. We click on it, the badge with the red numbers, because our minds are built that way — to feel good when we see that we’ve accomplished a task or received a message from someone. The reason we feel good clicking these numbers is that our brain releases dopamine, a neurotransmitter that gets released anytime our brain wants us to feel good. It’s the same reason why people drink, gamble, and have sex. Disabling notifications is easy — there are generally settings or apps available for most phones that’ll allow you to set notifications to silent or disable them entirely. I use ‘Do Not Disturb’ on both my MacBook and iPhone to achieve this. As for things I can’t avoid, like Medium notifications (because I write for a living), I use Stylish, a Chrome extension that allows you to alter the appearance of a website. It’s for slightly more technical people, but you basically hide the notification elements entirely using CSS rules.
https://tiffany-sun.medium.com/7-waking-actions-you-can-do-to-stay-productive-for-the-entire-day-48625d7a0f37
['Tiffany Sun']
2019-11-12 19:36:01.633000+00:00
['Creativity', 'Ideas', 'Productivity', 'Advice', 'Life Lessons']
What makes you a better developer?
Photo by MORAN on Unsplash There are many different paths to becoming a developer. But becoming a better developer matters to many of us, whether as a personal challenge, out of curiosity about how computers work, or for the pride of creating something. Some people decide they want to be a developer at a very early age, inspired by other cool developers. Others only start wondering how computers work in high school or college and try to implement some applications by themselves. There are many different paths, and many factors have a great impact on becoming a developer. I remember my first program was a simple hangman game, but at that time there were so many popular video games I wanted to play. Back then I spent more time playing those games than creating them. But now it is the opposite: I spend more time on software development. I had the chance to meet and work with many successful developers, and I tried to learn from them to become a better developer. I have read many articles about how people became successful at what they were trying to do, and much of that reading gave me another point of view. Personally, I believe this applies to many fields: becoming better at what you want to accomplish requires hands-on experience. There are many people who are very successful in their jobs and in their lives, and many of them share the same principles. Practice If you want to accomplish something (believe me, it can be anything), you have to practice it. There is a famous quote from Edison from the time he was trying to invent the electric lamp: “I have not failed 10,000 times. I have not failed once. I have succeeded in proving that those 10,000 ways will not work. When I have eliminated the ways that will not work, I will find the way that will work.” (Thomas Edison) People get frustrated when they cannot accomplish something or are afraid to make mistakes. If you don’t make these mistakes now, you might make them later on. Do not be afraid of being wrong or of making mistakes. If none of us made mistakes, there would be nothing to learn from them.
https://medium.com/quick-code/what-makes-you-better-developer-c2d196abc858
['Melih Yumak']
2020-08-06 11:16:51.156000+00:00
['Software Development', 'Development', 'Productivity', 'Developer', 'Software']
Clouds on Mars could have been caused by meteors
Clouds on Mars could have been caused by meteors How did Mars get its clouds? New research suggests the key ingredient could have been meteors. Clouds in Mars’ middle atmosphere, which begins about 18 miles — 30 kilometres — above the surface, have been known to scientists and astronomers, but they have, thus far, struggled to explain how they formed. A new study — published on June 17th in the journal Nature Geoscience — may have solved this martian cloud mystery. The paper from researchers at CU Boulder examines those wispy accumulations, suggesting that they are a result of a phenomenon known as meteoric smoke — the icy dust created by space debris slamming into the planet’s atmosphere. An important aspect of the paper’s findings is the reminder that the atmospheres of planets and their weather patterns should not be considered isolated from their host solar systems. Victoria Hartwick is a graduate student in the Department of Atmospheric and Ocean Sciences (ATOC) and lead author of the new study. She elaborates: “We’re used to thinking of Earth, Mars and other bodies as these really self-contained planets that determine their own climates. But climate isn’t independent of the surrounding solar system.” Discovering the origins of martian cloud seeds The research — co-authored by Brian Toon at CU Boulder and Nicholas Heavens at Hampton University in Virginia — centres on the fact that the clouds seem to form from nowhere. Hartwick points out that this doesn’t mean they appear spontaneously. She says: “Clouds don’t just form on their own. They need something that they can condense onto.” An example is clouds on our own planet: low-lying clouds begin life as tiny grains of sea salt or dust blown high into the air, and water molecules then collect around these particles. This gathering of water molecules becomes bigger and bigger, eventually forming the accumulations we can see from the ground. Hartwick says that the problem in cracking the martian cloud mystery is that those sorts of cloud seeds don’t exist in Mars’ middle atmosphere. This led Hartwick and her colleagues to suspect that meteors and space debris could act as de facto cloud seeds. As Hartwick says, approximately two to three tons of space debris crash into Mars every day, ripping apart in the planet’s atmosphere and injecting a huge volume of dust into the air. To discover if those dust clouds would be sufficient to give rise to Mars’ mysterious clouds, Hartwick and her team created massive computer simulations to model the flows and turbulence of the planet’s atmosphere. When they accounted for meteors in their calculations, clouds appeared. Hartwick says: “Our model couldn’t form clouds at these altitudes before. But now, they’re all there, and they seem to be in all the right places.” Mars cloud simulations by Hartwick and her team (Hartwick) Whilst Hartwick concedes that the model may initially sound unlikely, research has also shown that similar interplanetary debris and dust may help to seed clouds near Earth’s poles. Despite this, the team warn that we shouldn’t expect to see gigantic thunderheads forming above the surface of Mars anytime soon, as the clouds produced in the simulation were thinner and more nebulous than those we experience on Earth. A NASA balloon mission examined the rippling, electric blue clouds found high over Earth’s poles during twilight in summertime. (NASA) Hartwick adds that just because the clouds are thin and wouldn’t be visible to the naked eye, that doesn’t mean they can’t have an effect on the dynamics of the climate.
The simulations show that middle atmosphere clouds, for instance, could have a large impact on the Martian climate. Depending on where the team looked, those clouds could cause temperatures at high altitudes to swing up or down by as much as 10°C. It is this climatic impact which Brian Toon — a professor in ATOC — finds most compelling. He believes that the team’s findings with regard to modern-day Martian clouds may also help to reveal details of the red planet’s evolution, including how it once managed to support liquid water at its surface. Toon points out: “More and more climate models are finding that the ancient climate of Mars, when rivers were flowing across its surface and life might have originated, was warmed by high altitude clouds. “It is likely that this discovery will become a major part of that idea for warming Mars.” Original research: http://dx.doi.org/10.1038/s41561-019-0379-6
https://medium.com/swlh/clouds-on-mars-could-have-been-caused-by-meteors-ee4c74239374
['Robert Lea']
2019-06-17 15:11:30.784000+00:00
['Science', 'Space', 'Space Exploration', 'Mars', 'Weather']
Why Is Gold so Valuable?
How something useless becomes valuable Something useless is valuable if we decide to treat it as valuable. Seashells were once valued in many parts of the world and used as currency. Cigarettes have long been valued as money among prison inmates regardless of whether they smoked or not. And when smoking was banned, inmates started to value cans of mackerel, even expired ones. And you and I and everyone else value little pieces of paper with portraits and numbers printed on them. We value things that everyone else values. And for some reason, everyone decided to value gold.
https://medium.com/i-wanna-know/why-is-gold-so-valuable-221ca9194015
['David B. Clear']
2020-12-10 06:31:56.671000+00:00
['Money', 'Chemistry', 'Society', 'Science', 'Economics']
Programming in Your Browser Is (Almost) Here
How It Works Screenshot by the author. By going to any GitHub repository and clicking three buttons, you’ll be taken to a working VS Code window in your browser within about 30 seconds. In this one tab, it’s possible to code, build, test, and deploy faster than ever before. The possibilities are extended even more with a ready-to-go terminal and the ability to install VS Code extensions within your codespace. You can even connect to a codespace in VS Code on your own machine using the Visual Studio Codespaces extension. Repositories can also have specific settings to further customize Codespaces, including automatically installing extensions, forwarding ports, and more. This is done in a new devcontainer.json file and is documented nicely. One benefit of using Codespaces is that new developers can be ready to help out in just a few seconds, with fewer errors along the way. New contributors can instantly have access to all the tools and info needed to make their first commit to a project. Take a look at this clip from the Satellite 2020 keynote for a live demo of Codespaces:
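As a rough sketch of what that customization can look like (an illustrative example, not taken from the article or any particular repository; the container image tag and extension ID are assumptions), a devcontainer.json for a small Node.js project might resemble the following, preinstalling an extension, forwarding a port, and running a setup command when the codespace is created:

{
  "name": "node-sample",
  "image": "mcr.microsoft.com/vscode/devcontainers/javascript-node:14",
  "extensions": ["dbaeumer.vscode-eslint"],
  "forwardPorts": [3000],
  "postCreateCommand": "npm install"
}

The full schema is covered in the dev container documentation mentioned above, so treat these keys as a starting point rather than a complete configuration.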
https://medium.com/better-programming/programming-in-your-browser-is-almost-here-a6a68ce63b60
['Ben Soyka']
2020-08-31 14:43:42.280000+00:00
['Software Development', 'Github', 'Startup', 'Software Engineering', 'Programming']
My Top 10 Favorite Facebook Advertising Features
Video is the future of Facebook. Someday, Facebook might even be all video, all day. And there’s good reason for that. People love to watch videos. At last count, Facebook users are watching 100 million hours of video per day on the social network. Are you using Facebook Ads to grow your business? If not, you should be. Here are nine reasons why. Facebook has many great ad formats, targeting options, and campaign types. Here are my top 10 favorite Facebook advertising features. 1. Lead Ads In addition to being cheap and insanely effective, Facebook Lead Ads totally eliminate the need for people to visit a landing page on your website. With Lead Ads you can acquire valuable contact information from potential customers who are using Facebook on a mobile device. You can use these ads to get people to sign up for your email newsletter, offer deals or discounts, schedule appointments, and more. 2. Video Ads Video ads are an awesome and cheap Facebook advertising feature — you can pay as little as a penny per video view! More memorable than the usual text and image combo, Facebook video ads deliver strong brand recall and high engagement — and drive purchase intent. Simply upload the video to Facebook’s native video player, customize the description, thumbnail, budget, and targeting, and go! 3. Engagement Ads on Wall Posts Engagement ads can help make your Facebook Page look super popular to anyone who is checking out your business. Facebook will only show this type of ad to the people who are most likely to engage with your post — reacting, commenting, or sharing. Sure, getting thousands of comments and reactions is ultimately just vanity — but people want to be part of the in-crowd. Facebook Pages with zero fan interaction always looks a bit suspect. If your business is so great, where are all your customers? 4. Remarketing Facebook remarketing lets you reach people who have already interacted with or checked out your brand in some way. Maybe they visited your website (or a specific page on it), took some sort of action in your app or game, or gave you their email address or phone number. Facebook tags these people with cookie. Your remarketing ads will show to those people as they go through their Facebook News Feed so they will remember you and perhaps convert on one of your hard offers. People who are familiar with your brand are 2x more likely to convert and 3x more likely to engage. Ridiculously powerful stuff! 5. Interest Targeting Facebook’s interest targeting helps you find the people who are likely to be interested in buying your product or service. You can reach specific audience based on their interests, their activities, and the pages they’ve liked. You can also combine interests to expand the reach of your ad. Whether you want to target people who are interested in technology, fitness and wellness, entertainment, or a certain business/industry, this Facebook ad feature will help you do it. 6. Demographic Targeting You can target people based on where they live, their age, their gender, their political leanings, their job title, or by specific life events (e.g., engagement, birthday, anniversary) Facebook also offers financial targeting. You can specify that you only want to show it to people who make more than an income level you specify, whether it’s as low as $30,000 or more than $500,000 If you sell a pricy product, you want to make sure your ads are shown to people who can afford to buy your stuff! 7. 
Behavior Targeting Facebook’s behavior targeting lets you reach people based on purchase history, intent, device usage, and more. Facebook uses data from third-party partners to figure out what people are purchasing, online and offline. After matching up that data with user IDs, Facebook lets advertisers target audience segments based on thousands of different purchasing behaviors. For instance, you can target ads to people who have purchased clothing, health and beauty, technology, or pet products. Or if you wanted to target based on travel, you could choose options such as frequent travelers, international travelers, cruises, or whether someone has used a travel app in the past month. 8. The Facebook Pixel Facebook’s tracking pixel tracks actions that happen on your website as a result of your paid ads (as well as your organic posts). All you have to do is add some code to any pages you want to track. Actions include things like adding an item to a cart, viewing content, making a purchase, and completing registration. The tracking pixel will help you measure conversions, optimize your ads and targeting, and gain insights about the Facebook users visiting your website. 9. Website Conversion Campaigns You want to use conversion campaigns when the objective of your ad is to get people to do something specific on your website or in your mobile app. You define that action, whether it’s completing a purchase, adding something to a cart, or a page view. 10. Carousel Ads Carousel Ads let you display multiple images or videos (up to 10) within the same ad unit. Each image or video can link to a different page of your website You can use these images to highlight products, features, or a promotion. When done well, Carousel ads have proven to significantly increase conversions and click-through rates. Bonus: Facebook Messenger Bots Businesses can now create bots for Facebook Messenger that will “talk” to your customers anytime, 24/7. How cool is that? Facebook’s chat bots have a ton of potential in terms of customer service and sales. They can provide automated information, take orders, help you buy products or services, or provide shipping notifications. And you never have to leave Facebook Messenger to shop or get the information you want. Those are my favorite Facebook advertising features. What are yours? Originally posted on Inc.com About The Author Larry Kim is the CEO of Mobile Monkey and founder of WordStream. You can connect with him on Twitter, Facebook, LinkedIn and Instagram.
https://medium.com/marketing-and-entrepreneurship/my-top-10-favorite-facebook-advertising-features-238916a1bd78
['Larry Kim']
2017-04-28 16:18:06.320000+00:00
['Marketing', 'Facebook', 'Social Media', 'Digital Marketing', 'Advertising']
yes, this is an apocalypse — but it’s not what you think.
a few weeks ago, a few friends of mine were sitting in our apartment discussing the news around COVID-19 — at the time, in the American mainstream, the issue was still a mere rumbling, a trickle of ominous but still-remote stories about Italian hospitals and Iranian patients and the occasional reminder to “wash your hands and cover your cough.” back then — in those naïve days of early March — none of us present knew anyone who had been personally affected by the disease (that would change less than a week later). our lives still proceeded with minimal disruption. yet we had noticed that stores were selling out of hand sanitizer, that some offices were encouraging employees to work from home, that healthcare experts were appearing increasingly on national news to issue warnings that jarred our stubborn sense of immunity. murmurs about travel restrictions and school closings were beginning to rise, a subtle unease leaking into the air. adding up the signs, one of my friends asked, with a furrowed brow, “so, is this like, an apocalypse?” i smiled, probably chuckled a little, and asked if, rather than “apocalypse,” he had meant to use the word “epidemic.” he nodded, “yeah i guess so,” he said, although he was perhaps just being polite. i’ve thought about that exchange many times since. in the moment, i took his remark as a sort of verbal typo — of course he didn’t mean “apocalypse” — that was not a word used in literal, adult discussions. for me, the term is inseparable from the Biblical book of Revelation, the “end times” narrative, an account laden with symbolic images of cosmic upheaval, supernatural destruction, and Final Judgement. so of course it doesn’t apply here, right? if the heavens aren’t being split open, if “angels of woe” aren’t ravaging the earth, if the moon (or is it the sun?) hasn’t turned blood red — then apocalypse is not what we’re experiencing. so my thinking went. until i recalled that the root meaning of the word “apocalypse” (ἀποκάλυψις or apokalypsis) is simply: “to uncover, reveal, expose.” so, antichrists and demon armies aside, a literal apocalypse is merely an event in which a hidden object, reality, or truth is revealed. suddenly, my friend’s words feel — dare i say it — prophetic. because, so far, this has all felt like a vast revelation, a global exposure — of truths we have too long, too often, covered over.
https://sarahaziza1.medium.com/yes-this-is-an-apocalypse-but-its-not-what-you-think-d84185193f08
['Sarah Aziza']
2020-03-26 14:59:16.831000+00:00
['Health', 'Poetry', 'Spirituality', 'Covid 19', 'Mental Health']
How to Thrive as a Freelancer Working From Home
As a freelancer, I have the good fortune of being able to work from home. Lots of people dream about earning a living from the comfort of their own home. You can choose your own hours, you don’t have a long commute to work, no moody colleagues, … the list of advantages is almost endless — especially if you’re an introvert, like me. But even if it feels like a veritable paradise at first, working from home can quickly degenerate into a nightmare if you don’t approach it with the necessary discipline and self-management. In fact, a freelancer survey has shown that 62% of freelancers feel stressed, and 54% find it hard to stay productive at home. We all know that time is money, but unfortunately, a lot of it gets lost in disorganization and disruption. What’s more, we’re faced with a constant barrage of technology, people, and tasks that can contribute to this disorganization. Many freelancers find that they rush from one task to the next, trying to get everything done. In the end, it’s not just your productivity that suffers but your mental health too. So, to all of you out there who are considering earning your living by working from home, here are a few helpful tips to get you started.
https://medium.com/swlh/how-to-thrive-as-a-freelancer-working-from-home-52d73dbd5580
['Kahli Bree Adams']
2020-06-30 04:17:37.001000+00:00
['Business', 'Small Business', 'Entrepreneurship', 'Productivity', 'Freelancing']
How storage works in Prometheus, and why is this important?
How storage works in Prometheus, and why is this important? Learn the basics that make Prometheus such a great solution for monitoring your workloads, and use them to your own benefit. Photo by Vincent Botta on Unsplash Prometheus is one of the key systems in today’s cloud architectures. It was the second project to graduate from the Cloud Native Computing Foundation (CNCF), after Kubernetes itself, and it is the monitoring solution par excellence for most of the workloads running on Kubernetes. If you have already used Prometheus for some time, you know that it relies on a time-series database (TSDB), and that this is one of its key elements. In the words of the official Prometheus documentation: every time series is uniquely identified by its metric name and optional key-value pairs called labels. A series is similar to a table in a relational model, and inside each of those series we have samples, which are similar to tuples. Each sample contains a float value and a millisecond-precision timestamp. Default on-disk approach By default, Prometheus uses a local-storage approach, storing all those samples on disk. The data is distributed across different files and folders that group different chunks of data. Each folder holds a block of data (by default a two-hour block) and can contain one or more files, depending on the amount of data ingested in that period of time, since each folder contains all the samples for that specific time range. Additionally, each folder has metadata files that help locate the metrics inside each of the data files. A block is only persisted in its complete form once its time window is over; before that, the data is kept in memory, and a write-ahead log technique is used to recover it in case the Prometheus server crashes. So, at a high level, the data directory of a Prometheus server contains one folder per block (each with its chunk files, an index, and block metadata) plus the write-ahead log. Remote Storage Integration The default on-disk storage is good, but it has some limitations in terms of scalability and durability, even considering the performance improvements of the latest versions of the TSDB. So, if we’d like to explore other options to store this data, Prometheus provides a way to integrate with remote storage locations. It exposes an API that allows the samples being ingested to be written to a remote URL and, at the same time, to be read back from that remote URL, as shown in the picture below. As always with anything related to Prometheus, the number of adapters built on this pattern is huge, and they can be seen in detail in the following link: Summary Knowing how Prometheus storage works is critical to understanding how we can optimize its usage, improve the performance of our monitoring solution, and provide a cost-efficient deployment. In the following posts, we’re going to cover how to optimize the usage of this storage layer, making sure that only the metrics and samples that are important to us are being stored, and how to analyze which metrics are using most of the time-series database so we can make good decisions about which metrics should be dropped and which should be kept. So, stay tuned for the next post on how to have a better life with Prometheus and not die in the attempt.
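A taste of the kind of analysis hinted at in the summary (a minimal sketch, not part of the original post): recent Prometheus versions (2.15 and later) expose a TSDB stats endpoint, and a few lines of Python are enough to list the metric names that hold the most series. It assumes a Prometheus server reachable at http://localhost:9090 and the requests package installed.

import requests

PROMETHEUS_URL = "http://localhost:9090"  # assumption: point this at your own server

def top_series_by_metric(base_url=PROMETHEUS_URL):
    # /api/v1/status/tsdb reports, among other stats, the metrics with the highest series counts
    resp = requests.get(f"{base_url}/api/v1/status/tsdb", timeout=10)
    resp.raise_for_status()
    stats = resp.json()["data"]
    for entry in stats.get("seriesCountByMetricName", []):
        print(f'{entry["name"]}: {entry["value"]} series')

if __name__ == "__main__":
    top_series_by_metric()

Metrics that dominate this list are the natural candidates to drop or relabel when tuning what gets stored.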
https://medium.com/dev-genius/how-storage-works-in-prometheus-and-why-is-this-important-1882c340fee2
['Alex Vazquez']
2020-11-15 10:51:42.125000+00:00
['Software Development', 'Technology', 'Kubernetes', 'Cloud Computing', 'Programming']
General Purpose Tensorflow 2.x Script to train any CSV file
With that out of the way, let's get started. First, you just need to specify a few parameters. DATASET_PATH — where your dataset is located in the file system LABEL_NAME — target label column name TASK — “r” for regression and “c” for classification DUMMY_BATCH_SIZE — don’t worry about this. Just set it to 5 BATCH_SIZE EPOCHS TRAIN_FRAC — the fraction of data to be used for training. float between 0 and 1. CHECKPOINT_DIR — folder to store checkpointed models Imports from collections import defaultdict import os import numpy as np import matplotlib.pyplot as plt import tensorflow as tf import kerastuner as kt import IPython during the whole process, we will be using tf.data to handle the dataset. By using tf.data you can load massive datasets one chunk at a time. It also has a lot of useful functionality like caching and prefetch the next chunk of data while the model is training on the current chunk. The below function will return a fresh dataset object that you can iterate over one chunk(batch size) at a time. def get_dataset(batch_size = 5): return tf.data.experimental.make_csv_dataset(DATASET_PATH, batch_size = batch_size, label_name = LABEL_NAME, num_epochs = 1) If the task is regression, the number of output nodes is one if TASK == "r": OUTPUT_NODES = 1 If it is a classification task then we need to find the number of labels(classes) to determine the number of output layers by iterating over the whole dataset. elif TASK == "c": unique_labels = set() for _ , label in get_dataset(batch_size=BATCH_SIZE): for ele in label.numpy(): unique_labels.add(ele) num_labels = len(unique_labels) if num_labels <= 2: OUTPUT_NODES = 1 else: OUTPUT_NODES = num_labels As I mentioned before, this script will do all the preprocessing on its own without the need to have external functions because all the preprocessing logic is embedded as one of the layers in the model. This enables you to just give the raw data to the deployed model. The first step in achieving that is to define what is known as model inputs. It's simply a dictionary where the keys are the column names and values are tf.keras.Input objects model_inputs = {} for batch, _ in get_dataset(batch_size=DUMMY_BATCH_SIZE).take(1): for col_name, col_values in batch.items(): model_inputs[col_name] = tf.keras.Input(shape=(1,), name=col_name, dtype=col_values.dtype) Sample Output: { "clouds_all": <tf.Tensor "clouds_all:0" shape=(None, 1) dtype=int32>, "holiday": <tf.Tensor "holiday:0" shape=(None, 1) dtype=string>, "rain_1h": <tf.Tensor "rain_1h:0" shape=(None, 1) dtype=float32>, "snow_1h": <tf.Tensor "snow_1h:0" shape=(None, 1) dtype=float32>, "temp": <tf.Tensor "temp:0" shape=(None, 1) dtype=float32>, "weather_description": <tf.Tensor "weather_description:0" shape=(None, 1) dtype=string>, "weather_main": <tf.Tensor "weather_main:0" shape=(None, 1) dtype=string> } One feature that tf.data has is that it can automatically detect the data type of each column. 
Let's use that to our advantage and split the model_input dictionary into multiple dictionaries according to the data type integer_inputs = {} float_inputs = {} string_inputs = {} for col_name, col_input in model_inputs.items(): if col_input.dtype == tf.int32: integer_inputs[col_name] = col_input elif col_input.dtype == tf.float32: float_inputs[col_name] = col_input elif col_input.dtype == tf.string: string_inputs[col_name] = col_input Sample integer_inputs: {'clouds_all': <tf.Tensor 'clouds_all:0' shape=(None, 1) dtype=int32>} Sample float_inputs: { 'rain_1h': <tf.Tensor 'rain_1h:0' shape=(None, 1) dtype=float32>, 'snow_1h': <tf.Tensor 'snow_1h:0' shape=(None, 1) dtype=float32>, 'temp': <tf.Tensor 'temp:0' shape=(None, 1) dtype=float32> } Sample string_inputs: { 'holiday': <tf.Tensor 'holiday:0' shape=(None, 1) dtype=string>, 'weather_description': <tf.Tensor 'weather_description:0' shape=(None, 1) dtype=string>, 'weather_main': <tf.Tensor 'weather_main:0' shape=(None, 1) dtype=string> } Now, for the preprocessing, the general flow for the integer and float columns is to first concatenate the layers by passing them through tf.layers.Concatenate and then normalize them through a normalization layer. But to calculate the mean and std for normalization, we have to iterate through the whole dataset once. when it comes to preprocessing the string columns, we first get the vocabulary(list of unique words) for each column. Then we pass the input through a string lookup layer tf.keras.layers.experimental.preprocessing.StringLookup and then through a one-hot encoding layer tf.keras.layers.experimental.preprocessing.CategoryEncoding. You can also perform some simple operations like converting everything to lowercase and eliminating leading and trailing white spaces. StringLookup layer maps strings from a vocabulary to integer indices. CategoryEncoding then takes those integer indices and creates a one-hot vector. Pass the inputs through their corresponding functions to get preprocessed layers integer_layer = numerical_input_processor(integer_inputs) float_layer = numerical_input_processor(float_inputs) string_layer = string_input_processor(string_inputs) Add all the inputs to a list preprocessed_inputs = [] if integer_layer is not None: preprocessed_inputs.append(integer_layer) if float_layer is not None: preprocessed_inputs.append(float_layer) if string_layer is not None: preprocessed_inputs.append(string_layer) preprocessed_inputs might look something like this [<tf.Tensor 'normalization/truediv:0' shape=(None, 1) dtype=float32>, <tf.Tensor 'normalization_1/truediv:0' shape=(None, 3) dtype=float32>, <tf.Tensor 'concatenate_1/concat:0' shape=(None, 66) dtype=float32>] Concatenate all the inputs if len(preprocessed_inputs) > 1: preprocessed_inputs_cat = tf.keras.layers.Concatenate()(preprocessed_inputs) else: preprocessed_inputs_cat = preprocessed_inputs Finally, create a tf.keras.Model that takes the model_inputs and returns the preprocessed outputs preprocessing_head = tf.keras.Model(model_inputs, preprocessed_inputs_cat) You can also plot the preprocessing model to see what the individual layers look like. Increase the value of dpi if you want more the plot to have a higher clarity. tf.keras.utils.plot_model(model = preprocessing_head, rankdir="LR", dpi=72, show_shapes=True, expand_nested=True, to_file="preprocessing_head.png") You might get an image something like this
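To close the loop (an illustrative sketch rather than part of the original script), the preprocessing head can be attached to a small trainable body and fitted on the same tf.data pipeline. This assumes the TASK, OUTPUT_NODES, BATCH_SIZE, and EPOCHS parameters plus the get_dataset, model_inputs, and preprocessing_head objects defined above, and numeric labels (0 to num_labels - 1 for multi-class problems).

import tensorflow as tf

def build_full_model(preprocessing_head, model_inputs):
    # Small dense body stacked on top of the preprocessed features
    body = tf.keras.Sequential([
        tf.keras.layers.Dense(64, activation="relu"),
        tf.keras.layers.Dense(OUTPUT_NODES),
    ])
    preprocessed = preprocessing_head(model_inputs)
    outputs = body(preprocessed)
    model = tf.keras.Model(model_inputs, outputs)
    # Pick a loss that matches the task and the number of output nodes
    if TASK == "r":
        loss = tf.keras.losses.MeanSquaredError()
    elif OUTPUT_NODES == 1:
        loss = tf.keras.losses.BinaryCrossentropy(from_logits=True)
    else:
        loss = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True)
    model.compile(optimizer="adam", loss=loss)
    return model

model = build_full_model(preprocessing_head, model_inputs)
model.fit(get_dataset(batch_size=BATCH_SIZE), epochs=EPOCHS)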
https://zahash.medium.com/general-purpose-tensorflow-2-x-script-to-train-any-csv-file-c5a5abe5c7fd
['Zahash Z']
2020-12-01 14:10:39.137000+00:00
['Python', 'TensorFlow', 'Machine Learning', 'Data Science', 'Artificial Intelligence']
5 Words We Need to Stop Using in Climate Change Conversations
5 Words We Need to Stop Using in Climate Change Conversations I’m not saying that language is what’s stopping us from adequately addressing the climate emergency. But I am saying that it has a part to play, and that some of the words commonly used around the topic of climate change are problematic. Here are 5 of those words. 1. Change Photo by Ross Findon on Unsplash Let’s start with the obvious one: change. ‘Climate change’ is now the most commonly used phrase to describe the long-term shifts in weather conditions and temperature caused by human-generated greenhouse gases. It used to be described as global warming — but, quite rightly, there was a feeling that ‘warming’ did not truly reflect the reality of the impacts this phenomenon is responsible for, suggesting temperature changes alone. ‘Change’, though, has its own issues. The main issue is that ‘change’ is a neutral term. Change can be good, and change can be bad. There’s no urgency, and nothing that suggests climate change is something that will have vast, negative impacts. This is dangerous because it could inadvertently play into the main narrative of climate sceptics — that our earth’s climate naturally fluctuates, and we’re simply in a period of increased temperatures currently, one that has nothing to do with human activity. Over recent years we’ve seen growth in the use of the terms ‘climate crisis’ and ‘climate emergency’ instead of climate change, which have the benefit of invoking urgency and the need to act now. However, these terms could also end up being damaging — we tend to associate ‘crisis’ and ‘emergency’ with short-term problems which are over relatively quickly. Climate change doesn’t fit that. It’s a long-term emergency, and the danger is that these terms lose their impact with the general public as time goes on. “There is a limited semantic ‘budget’ for using the language of emergency, and it’s possible you can lose audiences over time, particularly if there are no meaningful policies addressing the fact that there really is an ongoing emergency.” — Dr David Holmes, director of the Climate Change Communication Research Hub There’s no perfect answer, but it’s important to be aware of the different connotations of these terms when having conversations with others about climate change. 2. Believe Photo by Ran Berkovich on Unsplash “Do you believe in climate change?” It’s a question I see all too often — from Twitter threads to political debates. The problem with this question lies in the use of the word ‘believe’. Belief has no place in conversations about climate change. 97% of the world’s research scientists agree that climate change exists, is caused by human activity, and will have vast and devastating impacts unless we start making drastic changes right now to the way that we live. It’s fact. Proven, scientific fact. There’s simply no room for belief. When we continue to use this word around the topic of climate change, we allow there to be room for belief. And when we allow room for belief, we also allow room for disbelief, scepticism, and denial. Either you agree that we need to address climate change now, or you’re wrong (and likely have an ulterior motive to do with allowing capitalism to continue to thrive). 3. Goals Photo by Markus Winkler on Unsplash Whether it’s goals for reaching net-zero or targets set by The Paris Agreement, it’s very common to see headlines focused on climate goals and targets — and usually focused on the likelihood of us missing them.
These goals are important if we are to tackle the climate emergency. But when we focus on facts, targets, and statistics, it becomes too easy to dehumanise the climate emergency. What those goals represent are lives. If Western societies fail to reach net-zero carbon by 2050 then we are threatening the lives of those at the front line of climate impacts, and failing to protect our fellow humans. It’s all too easy to forget that when we talk about climate change in terms of goals and targets. 4. Fight Climate action and climate policy are often framed as part of the ‘fight against climate change’. We paint the picture that we are battling with this external force of ‘climate change’ which is overpowering us. The reality is that it is humans who have caused climate change. And it is humans who have the power to halt it. It’s nothing to do with us ‘defeating’ the mighty power of climate change. The power lies in our hands, or rather, in the hands of the oil and gas giants to stop extracting and burning fossil fuels to drive our capitalist consumerist society, and politicians to force them to do so. The word ‘solve’ can be added to this list for very similar reasons. We don’t need to ‘solve’ climate change. The solution is simple: stop pouring more and more greenhouse gases into the atmosphere. Searching for ‘solutions’ is only a distraction. 5. Neutral How many adverts have you seen from large brands claiming that they’re going ‘100% carbon neutral’? Well, watch out, because it’s a classic greenwashing term. The term carbon neutral means that the creation of a product or service does not produce any carbon emissions. Sounds great. But carbon neutrality usually comes from a combination of a brand reducing the carbon emissions of their practice and supply chain (for instance, through installing solar panels on a warehouse roof to provide renewable energy), and from the brand purchasing carbon offsets — paying someone else to capture or avoid emitting enough carbon emissions to ‘neutralise’ the carbon emissions they are causing. Carbon offsets and carbon neutrality offer brands a way to assuage their climate guilt without actually doing anything to reduce their direct carbon footprint, and the ability to plaster ‘sustainable’ all over their marketing in order to win over ethically-minded customers. It’s similar to the idea of ‘net zero’ which you’ll often hear politicians talk of, and which represents a combination of reducing emissions as well as relying on carbon capture technology to fix the problem. We want the focus to be on reducing direct carbon emissions, or we’re never going to get ourselves out of this crisis, simply racing against the planet to find more and more ways to capture the growing carbon emissions we continue to rack up. These are terms to be very wary of. Make sure you do your research before you buy from that ‘carbon neutral’ brand or trust the claims coming out of a politician’s mouth when they talk about ‘net zero carbon’.
https://medium.com/age-of-awareness/5-words-we-need-to-stop-using-in-climate-change-conversations-3a2d43ca08cc
['Tabitha Whiting']
2020-10-15 12:47:40.602000+00:00
['Environment', 'Words', 'Language', 'Sustainability', 'Climate Change']
6 Actionable Tips To Achieve Your Fitness Goals In 2021
I have been practicing yoga for three years now. Last year I started my yoga diploma course. All the advanced poses were part of the syllabus, including middle split, handstand, standing splits, and some other extreme balancing poses. I needed to perform all the extreme poses within a span of 6 months. So, I used to practice every day for 90 minutes. My yoga flow was intense and back-breaking. Motivating myself to do the same routine 6 days a week was a challenge. If you are new to the fitness world then you know what a struggle it is to follow a consistent workout routine. It takes a great deal of self-discipline to practice the same routine day after day until you get perfect at it. Here are some techniques to keep you motivated to exercise so that you can achieve all your fitness goals in 2021. Set S.M.A.R.T Goals We make big goals on 1st January that we struggle to achieve for a few days before throwing them out in the dumpster. Most people are likely to give up their resolution by January 19. You need to make goals by using the S.M.A.R.T method. S:- Specific “I want to get healthy” is not a specific goal; you need to break it down as to how you are planning to achieve that goal. Maybe you need to lose weight to get healthy. But how much weight do you need to lose to get healthy? It may be 10 pounds or 15 pounds. Set a specific goal to have a clear sight of what you want to do. M:- Measurable You can measure the amount of weight lost on a weighing scale, so losing a certain amount of weight is a measurable goal. Working out five times a week and walking 6000 steps daily are also measurable goals. No matter how you measure your goal, it should be able to reflect success accurately. A:- Attainable You should be mindful of setting attainable and realistic goals. If you set goals that you have no power over then you will be left disappointed. You cannot get shredded six-pack abs in a week. That is an unattainable goal that will crush your fitness routine. R:- Relevant Your goals should be aligned with your values. They should be able to transform your life for the better. They should be relevant to your journey. Set goals that help you achieve your expected result. T:- Time-Bound All goals should have a window of execution. Giving yourself deadlines will help you in the timely execution of your goal. It will make you more serious about your goal. It is easy to track a goal when you set up a time limitation on it. Time limits motivate you to push forward. Follow the 2-Minute Rule You need to build a habit of working out regularly to achieve all your health and fitness goals. Instilling a habit of being active every day is not as hard as everyone makes it out to be. You need to start small to become consistent with your practice. Start with a 30-minute walk rather than aiming for 10,000 steps. Start any exercise and do it for just 2 minutes; you will not achieve your desired result in those 2 minutes, but you will get motivated to follow through with your real workout. It is all about getting up from the couch to exercise. Start small to build a foundation for bigger and better habits. Make It A Habit To Be Active In The Morning Morning exercise is the miracle cure you need to start your day in the right way. Morning exercise energizes you for the day. Doing a 15–20 minute workout in the morning sets you up for the day, as it helps you release endorphins, which result in a natural high.
Completing a portion of your daily workout earlier in the day will provide you motivation later in the day to smash your goal. Embrace The Internet There is a host of free workouts for you to try on the internet. You can find yoga, HIIT, Pilates, and many more types of workout on youtube. There is a channel for every kind of workout, you can easily get started in fitness with the help of youtube. All you need is your phone, internet, and the will to become better. Here is a list of top fitness influencers that you can follow in various sports. Yoga:- Yoga with Adriene Pilates:- blogilates HIIT:- Pamela Rief At-Home Workout:- Chloe Ting Jumping Rope:- Jump Rope Dudes Set A Specific Time To Workout Exercising daily at the same time helps you build a routine. This routine is what you need to make working out a top-priority habit for 2021. Working out at the same time regularly helps you build an automated habit. Working out becomes easy when you have assigned a particular time for it. You automatically close that window for any other activity. It helps you stay focused on your goal. Maintain a Fitness Journal Maintaining a fitness journal is one of the best tools that can help you achieve all your fitness goals. It helps you track your progress over time. You can track your progress daily, weekly, or every 15 days. It is up to you. You can also track your rest days. Tracking your progress helps you in getting a real-time limit for achieving your goals. It encourages you to push forward.
https://medium.com/in-fitness-and-in-health/6-actionable-tips-to-achieve-your-fitness-goals-in-2021-9d6b414407f7
['Khyati Jain']
2020-12-28 22:00:56.249000+00:00
['Fitness', 'Fitness Tips', 'Health', 'Productivity', 'Sports']
Vitamin D for Covid-19: New Research Shows Promise
Vitamin D for Covid-19: New Research Shows Promise Studies highlight potential life-saving benefits. But some experts aren’t convinced. The study’s findings were significant — “spectacular” even, in the words of at least one expert commenter. A team of doctors at Reina Sofía University Hospital in Córdoba, Spain, split 76 newly admitted Covid-19 patients into two groups. One group got the standard treatment at the time, which included a cocktail of antibiotics and immunosuppressant drugs. The second group got the same standard treatment — plus a drug designed to raise vitamin D levels in the blood. Among the 26 hospitalized people who received standard care alone, fully half went on to the intensive care unit (ICU) because their disease had worsened. Two of them died. But among the 50 people who received the vitamin D treatment on top of standard care, only one person ended up in the ICU. None died. In their study write-up, published in October in the Journal of Steroid Biochemistry and Molecular Biology, the Spanish researchers explained that their experiment was a “pilot” study that requires follow-up work. But they also pointed out that theirs is not the first piece of evidence linking vitamin D to a reduced risk for severe respiratory infection. Far from it. “Vitamin D supports a range of innate antiviral immune responses while simultaneously dampening down potentially harmful inflammatory responses,” says Adrian Martineau, PhD, a clinical professor of respiratory infection and immunity at Queen Mary University of London. “The evidence that low vitamin D levels are a risk factor for severe [Covid-19] disease is not definitive, but many lines of research suggest that this is likely.” Martineau was not involved with the Spanish study, but he has published several papers on vitamin D for the treatment and prevention of viral infections. In a 2017 research review, which appeared in the journal BMJ, he and his co-authors concluded that taking a daily or weekly vitamin D supplement is associated with a reduced risk for respiratory infection — especially among those who have low levels of the vitamin in their blood. Martineau and others say it’s very possible — though not yet proven — that a vitamin D supplement could provide a measure of protection against SARS-CoV-2 and Covid-19. How vitamin D may combat the coronavirus Thanks in part to Martineau’s work on vitamin D and respiratory infections, the “sunshine vitamin” — so called because the human body requires UV light to make it — has been the focus of Covid-19 research almost since the start of the pandemic. During the spring, several groups identified apparent associations between low levels of vitamin D and increased Covid-19 risks. Since that time, others have replicated their work. For a study published September 17 in PLOS One, researchers found that a person’s risk for a positive SARS-CoV-2 infection is “strongly and inversely” associated with blood levels of vitamin D. Taken together, these findings suggest that adequate vitamin D levels may help prevent a SARS-CoV-2 infection and also keep infections that do occur from growing worse. The mechanisms that may explain vitamin D’s benefits are numerous. For example, macrophages are helpful white blood cells that play a number of virus-clearing roles. “Vitamin D deficiency impairs the ability of macrophages to mature,” says Petre Cristian Ilie, PhD, a Covid-19 investigator and research director at The Queen Elizabeth Hospital in the U.K. 
Moreover, Ilie says that vitamin D may increase levels of certain cell enzymes that help repel the coronavirus. There’s also evidence that the presence of vitamin D may dampen elements of the immune system that are involved in the so-called cytokine storm that is associated with severe Covid-19. These are just a sampling of the many ways in which vitamin D may protect against SARS-CoV-2. “Even before the coronavirus pandemic, I think there was good evidence to support taking vitamin D supplements,” says Walter Willett, MD, a professor of epidemiology and nutrition at Harvard T.H. Chan School of Public Health. “This pandemic adds another reason.” Since the early days of the pandemic, Willett has been investigating the relationship between vitamin D and Covid-19. “The evidence that low vitamin D levels are a risk factor for severe [Covid-19] disease is not definitive, but many lines of research suggest that this is likely,” he says. “The healthier you are, the more your vitamin D levels will rise naturally.” He points out that African Americans and other people of color, due to elevated levels of melanin in the skin, require more sun exposure than lighter-skinned individuals to produce like amounts of vitamin D. “We know from national surveys that Black people living in the U.S. have about 17 times higher rates of severe vitamin D deficiency than White people,” he says. Black Americans have also experienced disproportionately high rates of severe Covid-19. While many other factors contribute to these inequalities — including differences in income, work, and health care access — Willett says that it is “highly possible that low vitamin D levels can explain part of the huge disparities in severe Covid-19 infection.” Research from Europe has found that countries hit hardest by Covid-19 — such as Spain and Italy — have a higher prevalence of vitamin-D deficiency than countries with populations that tend to be sufficient in D. It’s also worth noting that the burden of deadly Covid-19 appeared to dip during the sunnier summer months in the U.S., U.K., and elsewhere. There are many plausible explanations for this drop that have nothing to do with vitamin D or sun exposure — including improved clinical management of the disease. But some researchers have speculated that populationwide increases in sun exposure during the summer, and consequently improved vitamin D status, may have contributed to the disease’s apparent softening. Not everyone is convinced For all vitamin D’s promise, some experts say that the sunshine vitamin may turn out to be fool’s gold. “I’m as excited as anyone about vitamin D, but I’m not ready to jump on the bandwagon,” says Mark Moyad, MD, the Jenkins/Pokempner director of preventive and alternative medicine at the University of Michigan Medical Center. Moyad is among the world’s leading authorities on the risks and benefits of supplements. He reels off a number of potential confounders or complicating factors that could eventually squelch the current enthusiasm for vitamin D. “As you gain weight, vitamin D goes down because it’s sequestered in adipose tissue,” he explains. In fact, almost any disease that is associated with metabolic dysfunction or inflammation tends to drive down the blood’s quantity of vitamin D. And so it’s possible that low vitamin D is simply a marker of health issues — such as obesity and Type 2 diabetes — that are known to make Covid-19 worse, he says. 
(Similarly, some researchers have posited that vitamin D is an indicator of adequate sun exposure, which they say may provide a number of health benefits that are frequently misattributed to vitamin D.) For those who want to raise their vitamin D levels safely, Moyad says that the best way to do so doesn’t involve a pill. Furthermore, Moyad points out that some researchers have failed to find correlations between low levels of vitamin D and an increased risk for Covid-19 — including among people of color. Even if it turns out that vitamin D plays a role in moderating some aspect of SARS-CoV-2 infection or disease, it’s not a certainty that swallowing the vitamin as a supplement will do any good. “We’ve been trolled and teased before by these sorts of correlations, and we’ve paid the price,” he says. To illustrate his point, he describes the decades of promising research that linked low vitamin D levels to bone weakness. But when, for a 2019 JAMA study, people took high daily doses of vitamin D for three years, their bones actually got weaker, not stronger. While the known risks of taking moderate amounts of vitamin D as a supplement are minimal, the JAMA study’s findings — as well as the findings of many other past vitamin studies — show that there can be unexpected and often unwanted consequences associated with supplement use. For those who want to raise their vitamin D levels safely, Moyad says that the best way to do so doesn’t involve a pill. “Eat right, stop smoking, get some exercise, get outside in the sun, lose weight,” he says. “The healthier you are, the more your vitamin D levels will rise naturally.” For those dead set on taking a vitamin D supplement, he recommends taking no more than 600 to 800 IU per day, which is the National Institutes of Health (NIH) recommended daily amount for kids and adults. Martineau — the London-based respiratory disease expert — offers similar advice. Based on some of his recent work, he says that daily doses of vitamin D in the 400 to 1,000 IU range appear to be safe and effective for the prevention of infections. Harvard’s Willett is a bit more bullish. He says that 2,000 IU per day is “a reasonable dose” for adults. But he also says more research is needed to identify optimal intakes. “I love vitamin D, and I’m excited to see if it works,” Moyad adds. “But we may end up disappointed.”
https://elemental.medium.com/vitamin-d-for-covid-19-new-research-shows-promise-b2593e782933
['Markham Heid']
2020-10-08 05:32:47.127000+00:00
['The Nuance', 'Health', 'Covid 19', 'Science', 'Nutrition']
The Most Impressive YouTube Channels for You to Learn AI, Machine Learning, and Data Science
This channel publishes interviews with data scientists from big companies like Google, Uber, Airbnb, etc. From these videos, you can get an idea of ​​what it is like to be a data scientist and acquire valuable advice to apply in your life. Xander Steenbrugge is a machine learning researcher at ML6. His YouTube channel summarizes the critical points about machine learning, reinforcement learning, and AI in general from a technical perspective while making them accessible for a bigger audience. A new ML Youtube channel that everyone should check out, Machine Learning 101 posts explainer videos on beginner AI concepts. The channel also posts podcasts with expert data scientists and professionals working on AI in commercial industries. FreeCodeCamp is an incredible non-profit organization. It is an open-source community that offers a collection of resources that helps people learn to code for free and create their projects. Its website is entirely free for anyone to learn about coding. Also, they have their news platform that shares articles on programming and projects. Kevin Markham creates in-depth YouTube tutorials to understand AI and machine learning. Data School focuses on the topics you need to master first and offers in-depth tutorials that you can understand regardless of your educational background. Machine Learning TV has resources for computer science students and enthusiasts to understand machine learning better. This YouTube channel aims to make machine learning and reinforcement learning more approachable for everyone. There is a 12 video playlist for a full-introduction to neural networks for beginners, and it seems a subsequent intermediate neural network series is currently in production. Andreas Kretz is a data engineer and founder of Plumbers of Data Science. He broadcasts live tutorials on his channel on how to get hands-on experience in data engineering and videos with questions and answers about data engineering with Hadoop, Kafka, Spark, and so on. Edureka is an e-learning platform with several tutorials and guidelines on trending topics in the areas of Big Data & Hadoop, DevOps, Blockchain, Artificial Intelligence, Angular, Data Science, Apache Spark, Python, Selenium, Tableau, Android, PMP certification, AWS Architect, Digital Marketing and many more. Ng was named one of Time’s 100 Most Influential People in 2012 and Fast Company’s Most Creed. He co-founded Coursera and deeplearning.ai and was a former vice president and chief scientist at Baidu. He is an adjunct professor at Stanford University. The official Deep Learning AI YouTube channel has video tutorials from the deep learning specialization on Coursera. Founded by Andrew Ng, DeepLearning.AI is an education technology company that develops a global community of AI talent. DeepLearning.AI’s expert-led educational experiences provide AI practitioners and non-technical professionals with the necessary tools to go all the way from foundational basics to advanced application, empowering them to build an AI-powered future. Tech With Tim is a brilliant programmer who teaches Python, game development with Pygame, Java, and Machine Learning. He creates high-level coding tutorials in Python. Created in 2016, Machine Learning University (MLU) is an initiative by Amazon with a direct objective: to train as many employees as possible to master the technology, essential for the company to achieve the “magic” of offering products with this integrated technology. 
This YouTube channel has tutorial videos related to science, technology, and artificial intelligence. Sentdex creates one of the best Python programming tutorials on YouTube. His tutorials range from beginners to more advanced. With more than 1000 videos about Python Programming tutorials, it goes further than just the basics. You can learn about machine learning, finance, data analysis, robotics, web development, game development, and more. Joma Tech is a YouTuber who makes videos to help people get into the technology industry. He worked for large technology companies as a data scientist and software engineer. Based on his experience, he makes videos of interviews with experts and lifestyle in Silicon Valley and makes data science more accessible. Python Programmer content includes tutorials on Python, Data Science, Machine Learning, book recommendations, and more. This YouTube channel features topics such as how-to’s, reviews of software libraries and applications, and interviews with key individuals in the field of deep learning. DeepLearning.TV is all about Deep Learning, the field of study that teaches machines to perceive the world. Starting with a series that simplifies Deep Learning, the channel features topics such as How To’s, reviews of software libraries and applications, and interviews with key individuals in the field. Through a series of concept videos showcasing the intuition behind every Deep Learning method, we will show you that Deep Learning is actually simpler than you think. YouTube videos to help you build what’s next with secure infrastructure, developer tools, APIs, data analytics, and machine learning, Helping you build what’s next with secure infrastructure, developer tools, APIs, data analytics, and machine learning. Keith Galli is a recent graduate from MIT. He makes educational videos about computer science, programming, board games, and more. Data Science Dojo is a channel that promises to teach data science to everyone in an easy to understand way. You will find a multitude of tutorials, lectures, and courses on data engineering and science.
https://medium.com/swlh/21-amazing-youtube-channels-for-you-to-learn-ai-machine-learning-and-data-science-for-free-486c1b41b92a
['Jair Ribeiro']
2020-12-11 11:52:36.350000+00:00
['Data Science', 'Online Learning', 'Artificial Intelligence', 'AI', 'Learning']
53 Python Interview Questions and Answers
1. What is the difference between a list and a tuple? I’ve been asked this question in every python / data science interview I’ve ever had. Know the answer like the back of your hand. Lists are mutable. They can be modified after creation. Tuples are immutable. Once a tuple is created it cannot by changed Lists have order. They are an ordered sequences, typically of the same type of object. Ie: all user names ordered by creation date, ["Seth", "Ema", "Eli"] Tuples have structure. Different data types may exist at each index. Ie: a database record in memory, (2, "Ema", "2020–04–16") # id, name, created_at 2. How is string interpolation performed? Without importing the Template class, there are 3 ways to interpolate strings. name = 'Chris' # 1. f strings print(f'Hello {name}') # 2. % operator print('Hey %s %s' % (name, name)) # 3. format print( "My name is {}".format((name)) ) 3. What is the difference between “is” and “==”? Early in my python career I assumed these were the same… hello bugs. So for the record, is checks identity and == checks equality. We’ll walk through an example. Create some lists and assign them to names. Note that b points to the same object as a in below. a = [1,2,3] b = a c = [1,2,3] Check equality and note they are all equal. print(a == b) print(a == c) #=> True #=> True But do they have the same identity? Nope. print(a is b) print(a is c) #=> True #=> False We can verify this by printing their object id’s. print(id(a)) print(id(b)) print(id(c)) #=> 4369567560 #=> 4369567560 #=> 4369567624 c has a different id than a and b . 4. What is a decorator? Another questions I’ve been asked in every interview. It’s deserves a post itself, but you’re prepared if you can walk through writing your own example. A decorator allows adding functionality to an existing function by passing that existing function to a decorator, which executes the existing function as well as additional code. We’ll write a decorator that that logs when another function is called. Write the decorator function. This takes a function, func , as an argument. It also defines a function, log_function_called , which calls func() and executes some code, print(f'{func} called.') . Then it return the function it defined def logging(func): def log_function_called(): print(f'{func} called.') func() return log_function_called Let’s write other functions that we’ll eventually add the decorator to (but not yet). def my_name(): print('chris') def friends_name(): print('naruto') my_name() friends_name() #=> chris #=> naruto Now add the decorator to both. @logging def my_name(): print('chris') def my_name():print('chris') @logging def friends_name(): print('naruto') def friends_name():print('naruto') my_name() friends_name() #=> <function my_name at 0x10fca5a60> called. #=> chris #=> <function friends_name at 0x10fca5f28> called. #=> naruto See how we can now easily add logging to any function we write just by adding @logging above it. 5. Explain the range function Range generates a list of integers and there are 3 ways to use it. The function takes 1 to 3 arguments. Note I’ve wrapped each usage in list comprehension so we can see the values generated. range(stop) : generate integers from 0 to the “stop” integer. [i for i in range(10)] #=> [0, 1, 2, 3, 4, 5, 6, 7, 8, 9] range(start, stop) : generate integers from the “start” to the “stop” integer. [i for i in range(2,10)] #=> [2, 3, 4, 5, 6, 7, 8, 9] range(start, stop, step) : generate integers from “start” to “stop” at intervals of “step”. 
[i for i in range(2,10,2)] #=> [2, 4, 6, 8] Thanks Searge Boremchuq for suggesting a more pythonic way to do this! list(range(2,10,2)) #=> [2, 4, 6, 8] 6. Define a class named car with 2 attributes, “color” and “speed”. Then create an instance and return speed. class Car : def __init__(self, color, speed): self.color = color self.speed = speed car = Car('red','100mph') car.speed #=> '100mph' 7. What is the difference between instance, static and class methods in python? Instance methods : accept self parameter and relate to a specific instance of the class. Static methods : use @staticmethod decorator, are not related to a specific instance, and are self-contained (don’t modify class or instance attributes) Class methods : accept cls parameter and can modify the class itself We’re going to illustrate the difference around a fictional CoffeeShop class. specialty = 'espresso' def __init__(self, coffee_price): self.coffee_price = coffee_price # instance method def make_coffee(self): print(f'Making {self.specialty} for ${self.coffee_price}') # static method @staticmethod def check_weather(): print('Its sunny') class CoffeeShop:specialty = 'espresso'def __init__(self, coffee_price):self.coffee_price = coffee_price# instance methoddef make_coffee(self):print(f'Making {self.specialty} for ${self.coffee_price}')# static methoddef check_weather():print('Its sunny') @classmethod def change_specialty(cls, specialty): cls.specialty = specialty print(f'Specialty changed to {specialty}') # class methoddef change_specialty(cls, specialty):cls.specialty = specialtyprint(f'Specialty changed to {specialty}') CoffeeShop class has an attribute, specialty , set to 'espresso' by default. Each instance of CoffeeShop is initialized with an attribute coffee_price . It also has 3 methods, an instance method, a static method and a class method. Let’s initialize an instance of the coffee shop with a coffee_price of 5 . Then call the instance method make_coffee . coffee_shop = CoffeeShop('5') coffee_shop.make_coffee() #=> Making espresso for $5 Now call the static method. Static methods can’t modify class or instance state so they’re normally used for utility functions, for example, adding 2 numbers. We used ours to check the weather. Its sunny . Great! coffee_shop.check_weather() #=> Its sunny Now let’s use the class method to modify the coffee shop’s specialty and then make_coffee . coffee_shop.change_specialty('drip coffee') #=> Specialty changed to drip coffee coffee_shop.make_coffee() #=> Making drip coffee for $5 Note how make_coffee used to make espresso but now makes drip coffee ! 8. What is the difference between “func” and “func()”? The purpose of this question is to see if you understand that all functions are also objects in python. def func(): print('Im a function') func #=> function __main__.func> func() #=> Im a function func is the object representing the function which can be assigned to a variable or passed to another function. func() with parentheses calls the function and returns what it outputs. 9. Explain how the map function works map returns a map object (an iterator) which can iterate over returned values from applying a function to every element in a sequence. The map object can also be converted to a list if required. def add_three(x): return x + 3 li = [1,2,3] [i for i in map(add_three, li)] #=> [4, 5, 6] Above, I added 3 to every element in the list. A reader suggested a more pythonic implementation. Thanks Chrisjan Wust ! 
def add_three(x): return x + 3 li = [1,2,3] list(map(add_three, li)) #=> [4, 5, 6] Also, thanks Michael Graeme Short for the corrections! 10. Explain how the reduce function works This can be tricky to wrap your head around until you use it a few times. reduce takes a function and a sequence and iterates over that sequence. On each iteration, both the current element and output from the previous element are passed to the function. In the end, a single value is returned. from functools import reduce def add_three(x,y): return x + y li = [1,2,3,5] reduce(add_three, li) #=> 11 11 is returned which is the sum of 1+2+3+5 . 11. Explain how the filter function works Filter literally does what the name says. It filters elements in a sequence. Each element is passed to a function which is returned in the outputted sequence if the function returns True and discarded if the function returns False . def add_three(x): if x % 2 == 0: return True else: return False li = [1,2,3,4,5,6,7,8] [i for i in filter(add_three, li)] #=> [2, 4, 6, 8] Note how all elements not divisible by 2 have been removed. 12. Does python call by reference or call by value? Be prepared to go down a rabbit hole of semantics if you google this question and read the top few pages. In a nutshell, all names call by reference, but some memory locations hold objects while others hold pointers to yet other memory locations. name = 'object' Let’s see how this works with strings. We’ll instantiate a name and object, point other names to it. Then delete the first name. x = 'some text' y = x x is y #=> True del x # this deletes the 'a' name but does nothing to the object in memory z = y y is z #=> True What we see is that all these names point to the same object in memory, which wasn’t affected by del x . Here’s another interesting example with a function. name = 'text' def add_chars(str1): print( id(str1) ) #=> 4353702856 print( id(name) ) #=> 4353702856 # new name, same object str2 = str1 # creates a new name (with same name as the first) AND object str1 += 's' print( id(str1) ) #=> 4387143328 # still the original object print( id(str2) ) #=> 4353702856 add_chars(name) print(name) #=>text Notice how adding an s to the string inside the function created a new name AND a new object. Even though the new name has the same “name” as the existing name. Thanks Michael P. Reilly for the corrections! 13. How to reverse a list? Note how reverse() is called on the list and mutates it. It doesn’t return the mutated list itself. li = ['a','b','c'] print(li) li.reverse() print(li) #=> ['a', 'b', 'c'] #=> ['c', 'b', 'a'] 14. How does string multiplication work? Let’s see the results of multiplying the string ‘cat’ by 3. 'cat' * 3 #=> 'catcatcat' The string is concatenated to itself 3 times. 15. How does list multiplication work? Let’s see the result of multiplying a list, [1,2,3] by 2. [1,2,3] * 2 #=> [1, 2, 3, 1, 2, 3] A list is outputted containing the contents of [1,2,3] repeated twice. 16. What does “self” refer to in a class? Self refers to the instance of the class itself. It’s how we give methods access to and the ability to update the object they belong to. Below, passing self to __init__() gives us the ability to set the color of an instance on initialization. class Shirt: def __init__(self, color): self.color = color s = Shirt('yellow') s.color #=> 'yellow' 17. How can you concatenate lists in python? Adding 2 lists together concatenates them. Note that arrays do not function the same way. a = [1,2] b = [3,4,5] a + b #=> [1, 2, 3, 4, 5] 18. 
What is the difference between a shallow and a deep copy? We’ll discuss this in the context of a mutable object, a list. For immutable objects, shallow vs deep isn’t as relevant. We’ll walk through 3 scenarios. i) Reference the original object. This points a new name, li2 , to the same place in memory to which li1 points. So any change we make to li1 also occurs to li2 . li1 = [['a'],['b'],['c']] li2 = li1 li1.append(['d']) print(li2) #=> [['a'], ['b'], ['c'], ['d']] ii) Create a shallow copy of the original. We can do this with the list() constructor, or the more pythonic mylist.copy() (thanks Chrisjan Wust !). A shallow copy creates a new object, but fills it with references to the original. So adding a new object to the original collection, li3 , doesn’t propagate to li4 , but modifying one of the objects in li3 will propagate to li4 . li3 = [['a'],['b'],['c']] li4 = list(li3) li3.append([4]) print(li4) #=> [['a'], ['b'], ['c']] li3[0][0] = ['X'] print(li4) #=> [[['X']], ['b'], ['c']] iii) Create a deep copy. This is done with copy.deepcopy() . The 2 objects are now completely independent and changes to either have no affect on the other. import copy li5 = [['a'],['b'],['c']] li6 = copy.deepcopy(li5) li5.append([4]) li5[0][0] = ['X'] print(li6) #=> [['a'], ['b'], ['c']] 19. What is the difference between lists and arrays? Note: Python’s standard library has an array object but here I’m specifically referring to the commonly used Numpy array. Lists exist in python’s standard library. Arrays are defined by Numpy. Lists can be populated with different types of data at each index. Arrays require homogeneous elements. Arithmetic on lists adds or removes elements from the list. Arithmetic on arrays functions per linear algebra. Arrays also use less memory and come with significantly more functionality. I wrote another comprehensive post on arrays. 20. How to concatenate two arrays? Remember, arrays are not lists. Arrays are from Numpy and arithmetic functions like linear algebra. We need to use Numpy’s concatenate function to do it. import numpy as np a = np.array([1,2,3]) b = np.array([4,5,6]) np.concatenate((a,b)) #=> array([1, 2, 3, 4, 5, 6]) 21. What do you like about Python? Note this is a very subjective question and you’ll want to modify your response based on what the role is looking for. Python is very readable and there is a pythonic way to do just about everything, meaning a preferred way which is clear and concise. I’d contrast this to Ruby where there are often many ways to do something without a guideline for which is preferred. 22. What is you favorite library in Python? Also subjective, see question 21. When working with a lot data, nothing is quite as helpful as pandas which makes manipulating and visualizing data a breeze. 23. Name mutable and immutable objects Immutable means the state cannot be modified after creation. Examples are: int, float, bool, string and tuple. Mutable means the state can be modified after creation. Examples are list, dict and set. 24. How would you round a number to 3 decimal places? Use the round(value, decimal_places) function. a = 5.12345 round(a,3) #=> 5.123 25. How do you slice a list? Slicing notation takes 3 arguments, list[start:stop:step] , where step is the interval at which elements are returned. a = [0,1,2,3,4,5,6,7,8,9] print(a[:2]) #=> [0, 1] print(a[8:]) #=> [8, 9] print(a[2:8]) #=> [2, 3, 4, 5, 6, 7] print(a[2:8:2]) #=> [2, 4, 6] 26. What is pickling? Pickling is the go-to method of serializing and unserializing objects in Python. 
In the example below, we serialize and unserialize a list of dictionaries. import pickle obj = [ {'id':1, 'name':'Stuffy'}, {'id':2, 'name': 'Fluffy'} ] with open('file.p', 'wb') as f: pickle.dump(obj, f) with open('file.p', 'rb') as f: loaded_obj = pickle.load(f) print(loaded_obj) #=> [{'id': 1, 'name': 'Stuffy'}, {'id': 2, 'name': 'Fluffy'}] 27. What is the difference between dictionaries and JSON? Dict is python datatype, a collection of indexed but unordered keys and values. JSON is just a string which follows a specified format and is intended for transferring data. 28. What ORMs have you used in Python? ORMs (object relational mapping) map data models (usually in an app) to database tables and simplifies database transactions. SQLAlchemy is typically used in the context of Flask, and Django has it’s own ORM. 29. How do any() and all() work? Any takes a sequence and returns true if any element in the sequence is true. All returns true only if all elements in the sequence are true. a = [False, False, False] b = [True, False, False] c = [True, True, True] print( any(a) ) print( any(b) ) print( any(c) ) #=> False #=> True #=> True print( all(a) ) print( all(b) ) print( all(c) ) #=> False #=> False #=> True 30. Are dictionaries or lists faster for lookups? Looking up a value in a list takes O(n) time because the whole list needs to be iterated through until the value is found. Looking up a key in a dictionary takes O(1) time because it’s a hash table. This can make a huge time difference if there are a lot of values so dictionaries are generally recommended for speed. But they do have other limitations like needing unique keys. 31. What is the difference between a module and a package? A module is a file (or collection of files) that can be imported together. import sklearn A package is a directory of modules. from sklearn import cross_validation So packages are modules, but not all modules are packages. 32. How to increment and decrement an integer in Python? Increments and decrements can be done with +- and -= . value = 5 value += 1 print(value) #=> 6 value -= 1 value -= 1 print(value) #=> 4 33. How to return the binary of an integer? Use the bin() function. bin(5) #=> '0b101' 34. How to remove duplicate elements from a list? This can be done by converting the list to a set then back to a list. a = [1,1,1,2,3] a = list(set(a)) print(a) #=> [1, 2, 3] Note that sets will not necessarily maintain the order of a list. 35. How to check if a value exists in a list? Use in . 'a' in ['a','b','c'] #=> True 'a' in [1,2,3] #=> False 36. What is the difference between append and extend? append adds a value to a list while extend adds values in another list to a list. a = [1,2,3] b = [1,2,3] a.append(6) print(a) #=> [1, 2, 3, 6] b.extend([4,5]) print(b) #=> [1, 2, 3, 4, 5] 37. How to take the absolute value of an integer? This can be done with the abs() function. abs(2) #=> 2 abs(-2) #=> 2 38. How to combine two lists into a list of tuples? You can use the zip function to combine lists into a list of tuples. This isn’t restricted to only using 2 lists. It can also be done with 3 or more. a = ['a','b','c'] b = [1,2,3] [(k,v) for k,v in zip(a,b)] #=> [('a', 1), ('b', 2), ('c', 3)] 39. How can you sort a dictionary by key, alphabetically? You can’t “sort” a dictionary because dictionaries don’t have order but you can return a sorted list of tuples which has the keys and values that are in the dictionary. d = {'c':3, 'd':4, 'b':2, 'a':1} sorted(d.items()) #=> [('a', 1), ('b', 2), ('c', 3), ('d', 4)] 40. 
How does a class inherit from another class in Python? In the below example, Audi , inherits from Car . And with that inheritance comes the instance methods of the parent class. class Car(): def drive(self): print('vroom') class Audi(Car): pass audi = Audi() audi.drive() 41. How can you remove all whitespace from a string? The easiest way is to split the string on whitespace and then rejoin without spaces. s = 'A string with white space' ''.join(s.split()) #=> 'Astringwithwhitespace' 2 readers recommended a more pythonic way to handle this following the Python ethos that Explicit is better than Implicit . It’s also faster because python doesn’t create a new list object. Thanks Евгений Крамаров and Chrisjan Wust ! s = 'A string with white space' s.replace(' ', '') #=> 'Astringwithwhitespace' 42. Why would you use enumerate() when iterating on a sequence? enumerate() allows tracking index when iterating over a sequence. It’s more pythonic than defining and incrementing an integer representing the index. li = ['a','b','c','d','e'] for idx,val in enumerate(li): print(idx, val) #=> 0 a #=> 1 b #=> 2 c #=> 3 d #=> 4 e 43. What is the difference between pass, continue and break? pass means do nothing. We typically use it because Python doesn’t allow creating a class, function or if-statement without code inside it. In the example below, an error would be thrown without code inside the i > 3 so we use pass . a = [1,2,3,4,5] for i in a: if i > 3: pass print(i) #=> 1 #=> 2 #=> 3 #=> 4 #=> 5 continue continues to the next element and halts execution for the current element. So print(i) is never reached for values where i < 3 . for i in a: if i < 3: continue print(i) #=> 3 #=> 4 #=> 5 break breaks the loop and the sequence is not longer iterated over. So elements from 3 onward are not printed. for i in a: if i == 3: break print(i) #=> 1 #=> 2 44. Convert the following for loop into a list comprehension. This for loop. a = [1,2,3,4,5] a2 = [] for i in a: a2.append(i + 1) print(a2) #=> [2, 3, 4, 5, 6] Becomes. a3 = [i+1 for i in a] print(a3) #=> [2, 3, 4, 5, 6] List comprehension is generally accepted as more pythonic where it’s still readable. 45. Give an example of the ternary operator. The ternary operator is a one-line if/else statement. The syntax looks like a if condition else b . x = 5 y = 10 'greater' if x > 6 else 'less' #=> 'less' 'greater' if y > 6 else 'less' #=> 'greater' 46. Check if a string only contains numbers. You can use isnumeric() . '123a'.isnumeric() #=> False '123'.isnumeric() #=> True 47. Check if a string only contains letters. You can use isalpha() . '123a'.isalpha() #=> False 'a'.isalpha() #=> True 48. Check if a string only contains numbers and letters. You can use isalnum() . '123abc...'.isalnum() #=> False '123abc'.isalnum() #=> True 49. Return a list of keys from a dictionary. This can be done by passing the dictionary to python’s list() constructor, list() . d = {'id':7, 'name':'Shiba', 'color':'brown', 'speed':'very slow'} list(d) #=> ['id', 'name', 'color', 'speed'] 50. How do you upper and lowercase a string? You can use the upper() and lower() string methods. small_word = 'potatocake' big_word = 'FISHCAKE' small_word.upper() #=> 'POTATOCAKE' big_word.lower() #=> 'fishcake' 51. What is the difference between remove, del and pop? remove() remove the first matching value. li = ['a','b','c','d'] li.remove('b') li #=> ['a', 'c', 'd'] del removes an element by index. 
li = ['a','b','c','d'] del li[0] li #=> ['b', 'c', 'd'] pop() removes an element by index and returns that element. li = ['a','b','c','d'] li.pop(2) #=> 'c' li #=> ['a', 'b', 'd'] 52. Give an example of dictionary comprehension. Below we’ll create dictionary with letters of the alphabet as keys, and index in the alphabet as values. # creating a list of letters import string list(string.ascii_lowercase) alphabet = list(string.ascii_lowercase) # list comprehension d = {val:idx for idx,val in enumerate(alphabet)} d #=> {'a': 0, #=> 'b': 1, #=> 'c': 2, #=> ... #=> 'x': 23, #=> 'y': 24, #=> 'z': 25} 53. How is exception handling performed in Python? Python provides 3 words to handle exceptions, try , except and finally . The syntax looks like this. try: # try to do this except: # if try block fails then do this finally: # always do this In the simplistic example below, the try block fails because we cannot add integers with strings. The except block sets val = 10 and then the finally block prints complete .
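A minimal sketch matching that description could look like the following (an illustration using the val variable named above, not the author's exact snippet):

try:
    val = 1 + 'two'    # fails: an int and a str cannot be added
except:
    val = 10           # the except block runs and sets val = 10
finally:
    print('complete')  # the finally block always runs

print(val) #=> 10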
https://towardsdatascience.com/53-python-interview-questions-and-answers-91fa311eec3f
['Chris I.']
2020-04-27 16:10:45.288000+00:00
['Python', 'Coding', 'Software Engineering', 'Data Science', 'Programming']
Can we assume that storytelling is a piece of art?
Can we assume that storytelling is a piece of art? Photo by OVAN from Pexels As long as I remember myself, I have been creating art with my hands, including jewellery, sculpture, and mosaic. It was my godmother's idea and further encouragement in 2008… So, I started my journey to art by sculpting with plastilina when six months later, I blended art with holidays by attending two courses of mosaic in Mykonos island and marble sculpture in Paros island, Greece. Then on, my art journey was contagious, and I wanted more and more… so attended additional jewellery classes of silversmithing and a dozen others, which were never enough for me… and I was hungrily added one more course to my to-do-list. But what is Art for me? A Ritual Treatment of my soul… Art is my way to give colours to my feelings, create a shape to my chaos, and give a picture to my inner self … At the same time, art gives me the opportunity to express my emotions in red or blue, unfold my deepest fears in yellow and share my story with others in rose. So, I was wondering: are my stories and this enthusiasm to write them a new type of art for me? Yes, they are, indeed. It’s been a lot of years since I have watched the movie “Eat Pray Love.” Since then, I had been dreaming of writing a book about my life, including the parts I wanted to change and could not accept as problematic, and at the end, my unique way to improve my life and keep on walking. Every time of thinking to write down my story, I always said 'this is not possible, you are not a writer, you haven’t studied literature… Natalia, you think too highly for yourself.’ Until a July’s Sunday night, after a simple conversation with my boyfriend, seating on a bench by the Thames and drinking homemade Pimm’s in coffee mugs, when I realised how much I wanted to write stories about my life and connect with others who have similar beliefs or perspectives about our interactive trip to Cosmos… Honestly, I cannot write about formulas on how to live longer, make your skin fresher or change your job… The only thing I can tell you is how important is to do things that give joy to my life… and how mistakes are part this and how much denial is strong at this trip… And then, out of the blue, how my smile arrives back and move on, and make my change and all obstacles belong to the past. Furthemore I can agree with you, my life might be irrelevant to you and my approach can work for you in a full, slight or not at all version. Nevertheless, we all have to accept the body and shaking tremors when we read a story that is familiar to our subconscious inner self… We use to say ‘we do not know what to do next’ or ‘how to do this’ but we all deep inside have the answer. I believe that conditions in lives do not miraculously change if we do not react in advance… I strongly believe that since we start changing our destiny, the universe does its best to make things fit in us. Also, the universe’s best move might not correlate to our expectations. Photo by Magda Ehlers from Pexels When I create art by writing my story and sharing it in the community, I have first experienced those feelings or moments by the time I talk to you about. So, I basically draw the honest version of my “present” by realising my wise reactions, wrong acts and moreover my stupidity. This is not meant to deteriorate my audience or to compare myself to others. It is just an attempt to interact with myself and you, and in general to find fellow travellers on this ride.
https://natkokla.medium.com/can-we-assume-that-storytelling-is-a-piece-of-art-ad2e7b1f9858
['Natalia Kokla']
2018-09-05 19:53:59.433000+00:00
['Loving Myself', 'Art', 'Life', 'Psychology', 'Writing']
Engineering Management like a Pro — 101😮
In this era of technology, everything from ordering food to transferring money is available in a single tap, thanks of course to tech. But building a technology solution is not a piece of cake; it takes a great deal of effort to bring that food-ordering or banking app to your smartphone. Straight to the point: this article talks about good practices and hacks that can help you build a great software product with maximum productivity and efficiency, in less time and at lower cost. Let's start from step zero of product development:- It is very important for any product team to understand the idea behind the product, its impact and its goal. Analyse all the consumer needs and discuss the feasibility and cost of development with the engineering and development team; this will make your engineering team aware of what they have to develop, the user segment and any other constraints. Now, focus on the engineering and development segment:- Higher-level application architecture:- Decide on a highly scalable and easy-to-maintain architecture for the whole application, covering the backend, frontend, DevOps and any other segments. Technology selection:- Decide on the technology and tools you are going to use to develop the application, and make sure that the selected technology satisfies the application's needs and goals. Analyse resources:- Check whether your people are sufficiently skilled in the selected technology stack; if not, get your workforce trained on that technology first, so that each team member can tell good practices from bad ones, and don't start development by relying on Stack Overflow alone🚀 Segment-level architecture:- Now define the micro-level architecture of every segment of your application, i.e. frontend, backend, DevOps etc.; make sure to discuss it with the engineering team and give all your members the opportunity to weigh the pros and cons of the chosen architecture. Application performance from day 1:- Nobody wants to use a slow or buggy application, which is why performance considerations really matter; every developer should implement each feature in an optimized way wherever possible, because rework can be far more costly. Define a code standard:- Define a code standard and design patterns and ask every developer to follow them; doing this can reduce development time and maintenance effort to a large extent. Decide a success metric:- Analyzing results is essential, so define a minimum threshold for every feature of your application; it will help you deliver more polished features and fewer bugs. Set up a proper test environment:- Analyse all the edge cases and provide developers with a proper testing environment so that they can test each feature while developing; this helps reduce the number of iterations. Test everything from min scale to max scale:- Let's understand this with the help of an example: suppose the application has a table. Test it with zero records and then with thousands of records; if the table lags, you need to change the strategy. This will not only help performance and stability but also improve the UX of the application. Decide and test the deployment architecture:- Deployment is a really crucial part of any software product; don't cut corners here, because if your deployment infrastructure is not good, even a well-developed application cannot perform well and may leave your users frustrated.
That is all about Engineering Management for this article; in upcoming articles I will talk about each of the above points in depth.
https://medium.com/codingurukul/engineering-management-like-a-pro-101-3e89706a16da
['Suraj Kumar']
2020-04-12 04:01:12.012000+00:00
['User Experience', 'Product Development', 'Engineering Mangement', 'Startup', 'Engineering']
Is Artificial Intelligence Possible
“ Artificial Intelligence has been brain-dead since the 1970s.” This rather ostentatious remark made by Marvin Minsky co-founder of the world-famous MIT Artificial Intelligence Laboratory was referring to the fact that researchers have been primarily concerned on small facets of machine intelligence as opposed to looking at the problem as a whole. This article examines the contemporary issues of artificial intelligence (AI) looking at the current status of the AI field together with potent arguments provided by leading experts to illustrate whether AI is an impossible concept to obtain. Because of the scope and ambition, artificial intelligence defies simple definition. Initially, AI was defined as “the science of making machines do things that would require intelligence if done by men”. This somewhat meaningless definition shows how AI is still a young discipline and similar early definitions have been shaped by technological and theoretical progress made in the subject. So for the time being, a good general definition that illustrates the future challenges in the AI field was made by the American Association for Artificial Intelligence (AAAI) clarifying that AI is the “scientific understanding of the mechanisms underlying thought and intelligent behaviour and their embodiment in machines”. The term “artificial intelligence” was first coined by John McCarthy at a Conference at Dartmouth College, New Hampshire, in 1956, but the concept of machine intelligence is in fact much older. In ancient Greek mythology the smith-god, Hephaestus, is credited with making Talos, a “bull-headed” bronze man who guarded Crete for King Minos by patrolling the island terrifying off impostors. Similarly, in the 13th century, mechanical talking heads were said to have been created to scare intruders, with Albert the Great and Roger Bacon reputedly among the owners. However, it is only in the last 50 years that AI has really begun to pervade popular culture. Our fascination with “thinking machines” is obvious, but has been wrongfully distorted by the science-fiction connotations seen in literature, film and television. In reality, the AI field is far from creating the sentient beings seen in the media, yet this does not imply that successful progress has not been made. AI has been a rich branch of research for 50 years and many famed theorists have contributed to the field, but one computer pioneer that has shared his thoughts at the beginning and still remains timely in both his assessment and arguments is British mathematician Alan Turing. In the 1950s Turing published a paper called Computing Machinery and Intelligence in which he proposed an empirical test that identifies an intelligent behaviour “when there is no discernible difference between the conversation generated by the machine and that of an intelligent person.” The Turing test measures the performance of an allegedly intelligent machine against that of a human being and is arguably one of the best evaluation experiments at this present time. The Turing test, also referred to as the “imitation game” is carried out by having a knowledgeable human interrogator engage in a natural language conversation with two other participants, one a human the other the “intelligent” machine communicating entirely with textual messages. If the judge cannot reliably identify which is which, it is said that the machine has passed and is therefore intelligent. 
Although the test has a number of justifiable criticisms such as not being able to test perceptual skills or manual dexterity it is a great accomplishment that the machine can converse like a human and can cause a human to subjectively evaluate it as humanly intelligent by conversation alone. Many theorists have disputed the Turing Test as an acceptable means of proving artificial intelligence, an argument posed by Professor Jefferson Lister states, “not until a machine can write a sonnet or compose a concerto because of thoughts and emotions felt, and not by the chance fall of symbols, could we agree that machine equals brain”. Turing replied by saying “that we have no way of knowing that any individual other than ourselves experiences emotions and that therefore we should accept the test.” However, Lister did have a valid point to make, developing an artificial consciousness. Intelligent machines already exist that are autonomous; they can learn, communicate and teach each other, but creating an artificial intuition, a consciousness, “is the holy grail of artificial intelligence.” When modelling AI on the human mind many illogical paradoxes surface and you begin to see how the complexity of the brain has been underestimated and why simulating it has not been as straightforward as experts believed in the 1950s. The problem with human beings is that they are not algorithmic creatures; they prefer to use heuristic shortcuts and analogies to situations well known. However, this is a psychological implication, “it is not that people are smarter then explicit algorithms, but that they are sloppy and yet do well in most cases.” The phenomenon of consciousness has caught the attention of many Philosophers and Scientists throughout history and innumerable papers and books have been published devoted to the subject. However, no other biological singularity has remained so resistant to scientific evidence and “persistently ensnarled in fundamental philosophical and semantic tangles.” Under ordinary circumstances, we have little difficulty in determining when other people lose or regain consciousness and as long as we avoid describing it, the phenomenon remains intuitively clear. Most Computer Scientists believe that the consciousness was an evolutionary “add-on” and can, therefore, be algorithmically modelled. Yet many recent claims oppose this theory. Sir Roger Penrose, an English mathematical physicist, argues that the rational processes of the human mind are not completely algorithmic and thus transcends computation and Professor Stuart Hameroff’s proposal that consciousness emerges as a macroscopic quantum state from a critical level of coherence of quantum level events in and around cytoskeletal microtubules within neurons. Although these are all theories with not much or no empirical evidence, it is still important to consider each of them because it is vital that we understand the human mind before we can duplicate it. Another key problem with duplicating the human mind is how to incorporate the various transitional states of consciousness such as REM sleep, hypnosis, drug influence and some psychopathological states within a new paradigm. If these states are removed from the design due to their complexity or irrelevancy in a computer then it should be pointed out that perhaps consciousness cannot be artificially imitated because these altered states have a biophysical significance for the functionality of the mind. If consciousness is not algorithmic, then how is it created? 
Obviously, we do not know. Scientists who are interested in subjective awareness study the objective facts of neurology and behaviour and have shed new light on how our nervous system processes and discriminates among stimuli. But although such sensory mechanisms are necessary for consciousness, it does not help to unlock the secrets of the cognitive mind as we can perceive things and respond to them without being aware of them. A prime example of this is sleepwalking. When sleepwalking occurs (Sleepwalking comprises approximately 25 per cent of all children and 7 per cent of adults) many of the victims carry out dangerous or stupid tasks, yet some individuals carry out complicated, distinctively human-like tasks, such as driving a car. One may dispute whether sleepwalkers are really unconscious or not, but if it is, in fact, true that the individuals have no awareness or recollection of what happened during their sleepwalking episode, then perhaps here is the key to the cognitive mind. Sleepwalking suggests at least two general behavioural deficiencies associated with the absence of consciousness in humans. The first is a deficiency in social skills. Sleepwalkers typically ignore the people they encounter, and the “rare interactions that occur are perfunctory and clumsy, or even violent.” The other major deficit in sleepwalking behaviour is linguistics. Most sleepwalkers respond to verbal stimuli with only grunts or monosyllables or make no response at all. These two apparent deficiencies may be significant. Sleepwalkers use of protolanguage; short, grammar-free utterances with referential meaning but lack syntax, may illustrate that the consciousness is a social adaptation and that other animals do not lack understanding or sensation, but that they lack language skills and therefore cannot reflect on their sensations and become self-aware. In principle Francis Crick, co-discover of double helix DNA structure believed this hypothesis. After he and James Watson solved the mechanism of inheritance, Crick moved to neuroscience and spent the rest of his trying to answer the biggest biological question; what is the consciousness? Working closely with Christof Koch, he published his final paper in the Philosophical Transactions of the Royal Society of London and in it he proposed that an obscure part of the brain, the claustrum, acts like a conductor of an orchestra and “binds” vision, olfaction, somatic sensation, together with the amygdala and other neuronal processing for the unification of thought and emotion. And the fact that all mammals have a claustrum means that it is possible that other animals have high intelligence. So how different are the minds of animals in comparison to our own? Can their minds be algorithmically simulated? Many Scientists are reluctant to discuss animal intelligence as it is not an observable property and nothing can be perceived without reason and therefore there is not much-published research on the matter. But, by avoiding the comparison of some human mental states to other animals, we are impeding the use of a comparative method that may unravel the secrets of the cognitive mind. However, primates and cetacean have been considered by some to be extremely intelligent creatures, second only to humans. Their exalted status in the animal kingdom has lead to their involvement in almost all of the published experiments related to animal intelligence. 
These experiments coupled with analysis of primate and cetacean’s brain structure has to lead to many theories as to the development of higher intelligence as a trait. Although these theories seem to be plausible, there is some controversy over the degree to which non-human studies can be used to infer the structure of human intelligence. By many of the physical methods of comparing intelligence, such as measuring the brain size to body size ratio, cetacean surpasses non-human primates and even rival human beings. For example “dolphins have a cerebral cortex which is about 40% larger a human being. Their cortex is also stratified in much the same way as humans. The frontal lobe of dolphins is also developed to a level comparable to humans. In addition, the parietal lobe of dolphins which “makes sense of the senses” is larger than the human parietal and frontal lobes combined. The similarities do not end there; most cetaceans have large and well-developed temporal lobes which contain sections equivalent to Broca’s and Wernicke’s areas in humans.” Dolphins exhibit complex behaviours; they have a social hierarchy, they demonstrate the ability to learn complex tricks, when scavenging for food on the sea floor, some dolphins have been seen tearing off pieces of sponge and wrapping them around their “bottlenose” to prevent abrasions; illustrating yet another complex cognitive process thought to be limited to the great apes, they apparently communicate by emitting two very distinct kinds of acoustic signals, which we call whistles and clicks and lastly dolphins do not use sex purely for procreative purposes. Some dolphins have been recorded having homosexual sex, which demonstrates that they must have some consciousness. Dolphins have a different brain structure than humans that could perhaps be algorithmically simulated. One example of their dissimilar brain structure and intelligence is their sleep technique. While most mammals and birds show signs of rapid REM (Rapid Eye Movement) sleep, reptiles and cold-blooded animals do not. REM sleep stimulates the brain regions used in learning and is often associated with dreaming. The fact that cold-blooded animals do not have REM sleep could be enough evidence to suggest that they are not conscious and therefore their brains can definitely be emulated. Furthermore, warm-blood creatures display signs of REM sleep, and thus dream and therefore must have some environmental awareness. However, dolphins sleep unihemispherically, they are “conscious” breathers, and if fall asleep they could drown. Evolution has solved this problem by letting one half of its brain sleep at a time. As dolphins utilise this technique, they lack REM sleep and therefore a high intelligence, perhaps consciousness is possible that does not incorporate the transitional states mentioned earlier. The evidence for animal consciousness is indirect. But so is the evidence for the big bang, neutrinos, or human evolution. As in any event, such unusual assertions must be subject to the rigorous scientific procedure, before they can be accepted as even vague possibilities. Intriguing, but more proof is required. However merely because we do not understand something does not mean that it is false — or not. Studying other animal minds is a useful comparative method and could even lead to the creation of artificial intelligence (that does not include irrelevant transitional states for an artificial entity), based on a model not as complex as our own. 
Still, the central point being illustrated is how ignorant our understanding of the human brain, or any other brain is and how one day a concrete theory can change thanks to enlightening findings. Furthermore, an analogous incident that exemplifies this argument happened in 1847, when an Irish workman, Phineas Cage, shed new light on the field of neuroscience when a rock blasting accident sent an iron rod through the frontal region of his brain. Miraculously enough, he survived the incident, but even more, astonishing to the science community at the time were the marked changes in Cage’s personality after the road punctured his brain. Where before Cage was characterized by his mild-mannered nature, he had now become aggressive, rude and “indulging in the grossest profanity, which was not previously his custom, manifesting but little deference for his fellows, impatient of restraint or advice when it conflicts with his desires” according to the Boston physician Harlow in 1868. However, Cage sustained no impairment with regards to his intelligence or memory. The serendipity of the Phineas Cage incident demonstrates how architecturally robust the structure of the brain is and by comparison how rigid a computer is. All mechanical systems and algorithms would stop functioning correctly or completely if an iron rod punctured them, that is with the exception of artificial neural systems and their distributed parallel structure. In the last decade, AI has begun to resurge thanks to the promising approach of artificial neural systems. Artificial neural systems or simply neural networks are modelled on the logical associations made by the human brain, they are based on mathematical models that accumulate data, or “knowledge,” based on parameters set by administrators. Once the network is “trained” to recognize these parameters, it can make an evaluation, reach a conclusion and take action. In the 1980s, neural networks became widely used with the backpropagation algorithm, first described by Paul John Werbos in 1974. The 1990s marked major achievements in many areas of AI and demonstrations of various applications. Most notably in 1997, IBM’s Deep Blue supercomputer defeated the world chess champion, Garry Kasparov. After the match, Kasparov was quoted as saying the computer played “like a god.” That chess match and all its implications raised profound questions about neural networks. Many saw it as evidence that true artificial intelligence had finally been achieved. After all, “a man was beaten by a computer in a game of wits.” But it is one thing to program a computer to solve the kind of complex mathematical problems found in chess. It is quite another for a computer to make logical deductions and decisions on its own. Using neural networks, to emulate brain function provides many positive properties including the parallel functioning, relatively quick realisation of complicated tasks, distributed information, weak computation changes due to network damage (Phineas Cage), as well as learning abilities, i.e. adaptation upon changes in environment and improvement based on experience. These beneficial properties of neural networks have inspired many scientists to propose them as a solution for most problems, so with a sufficiently large network and adequate training, the networks could accomplish many arbitrary tasks, without knowing a detailed mathematical algorithm of the problem. 
Currently, the remarkable ability of neural networks is best demonstrated by the ability of Honda’s Asimo humanoid robot that cannot just walk and dance but even ride a bicycle. Asimo, an acronym for Advanced Step in Innovative Mobility, has 16 flexible joints, requiring a four-processor computer to control its movement and balance. Its exceptional human-like mobility is only possible because the neural networks that are connected to the robot’s motion and positional sensors and control its ‘muscle’ actuators are capable of being ‘taught’ to do a particular activity. The significance of this sort of robot motion control is the virtual impossibility of a programmer being able to actually create a set of detailed instructions for walking or riding a bicycle, instructions which could then be built into a control program. The learning ability of the neural network overcomes the need to precisely define these instructions. However, despite the impressive performance of the neural networks, Asimo still cannot think for itself and its behaviour is still firmly anchored on the lower-end of the intelligent spectrum, such as reaction and regulation. Neural networks are slowly finding there way into the commercial world. Recently, Siemens launched a new fire detector that uses a number of different sensors and a neural network to determine whether the combination of sensor readings are from a fire or just part of the normal room environment such as dust. Over fifty per cent of fire call-outs are false and of these well over half are due to fire detectors being triggered by everyday activities as opposed to actual fires, so this is clearly a beneficial use of the paradigm. But are there limitations to the capabilities of neural networks or will they be the solution to creating strong-AI? Artificial neural networks are biologically inspired but that does not mean that they are necessarily biologically plausible. Many Scientists have published their thoughts on the intrinsic limitations of using neural networks; one book that received high exposure within the Computer Scientist community in 1969 was Perceptron by Minsky and Papert. Perceptron brought clarity to the limitations of neural networks, although many scientists were aware of the limited ability of an incomplex perceptron to classify patterns, Minsky’s and Papert’s approach of finding “what are neural networks good for?” illustrated what is impeding future development of neural networks. Within its time period Perceptron was exceptionally constructive and its identifiable content gave the impetus for later research that conquered some of the depicted computational problems restricting the model. An example is the exclusive-or problem. The exclusive-or problem contains four patterns of two inputs each; a pattern is a positive member of a set if either one of the input bits is on, but not both. Thus, changing the input pattern by one-bit changes the classification of the pattern. This is the simplest example of a linearly inseparable problem. A perceptron using linear threshold functions requires a layer of internal units to solve this problem, and since the connections between the input and internal units could not be trained, a perceptron could not learn this classification. Eventually, this restriction was solved by incorporating extra “hidden” layers. 
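As an aside, the exclusive-or limitation and its fix with a hidden layer can be reproduced in a few lines of modern Python; the sketch below uses scikit-learn's MLPClassifier purely as an assumed stand-in for any small multi-layer network trained by backpropagation:

from sklearn.neural_network import MLPClassifier

# The four exclusive-or patterns: the class is 1 when exactly one input bit is on.
X = [[0, 0], [0, 1], [1, 0], [1, 1]]
y = [0, 1, 1, 0]

# A single layer of linear threshold units cannot separate these points,
# but one hidden layer of a few units can.
clf = MLPClassifier(hidden_layer_sizes=(4,), activation='tanh',
                    solver='lbfgs', max_iter=2000, random_state=1)
clf.fit(X, y)
print(clf.predict(X))  # expected [0 1 1 0]; convergence can depend on the random seed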
Although advances in neural network research have solved many of the limitations identified by Minsky and Papert, numerous ones still remain: networks using linear threshold units still violate the limited-order constraint when faced with linearly inseparable problems, and the scaling of weights as the size of the problem space increases remains an issue. It is clear that the dismissive views about neural networks disseminated by Minsky, Papert and many other computer scientists have some evidential support, but still, many researchers have ignored their claims and refused to abandon this biologically inspired system. There have been several recent advances in artificial neural networks made by integrating other specialised theories into the multi-layered structure in an attempt to improve the methodology and move one step closer to creating strong AI. One promising area is the integration of fuzzy logic, invented by Professor Lotfi Zadeh. Other notable algorithmic ideas include quantum-inspired neural networks (QUINNs) and the "network cavitations" proposed by S. L. Thaler. The history of artificial intelligence is replete with theories and failed attempts. It is inevitable that the discipline will progress with technological and scientific discoveries, but will it ever clear the final hurdle?
https://medium.com/quick-code/is-artificial-intelligence-possible-4ec99a140294
[]
2019-04-23 18:49:49.766000+00:00
['Artificial Intelligence', 'AI', 'Machine', 'Programmer', 'Technology']
Word Analysis using Word Cloud in Python
Word Cloud In this article we will try to understand the usage of a very handy and useful tool known as the word cloud, and we will implement it using Python code. This will be a very short article. A word cloud can be used to analyse the words present in a corpus. Suppose you have a document of 2,000–3,000 words and you want to find out which words are the most common or most repeated in it; in that scenario a word cloud is a very handy tool to use. People generally use it to get a quick understanding of what a document is about. Likewise, suppose you have 2,000–3,000 tweets and you quickly want to understand the nature of the tweets, whether they are positive or negative; a word cloud is very handy here as well. Below are the steps we will follow to implement a word cloud from scratch, right from the installation. Step 1: First we will install the word cloud package by executing the pip command below from the terminal.
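A minimal sketch of these steps, assuming the commonly used wordcloud package together with matplotlib (both assumptions here, since the original snippet is not shown), could look like this:

# Step 1: install the package from the terminal
#   pip install wordcloud matplotlib

from wordcloud import WordCloud
import matplotlib.pyplot as plt

# Step 2: generate the cloud from a corpus; the most frequent words are drawn largest
text = "data science machine learning data python nlp data word cloud analysis data"
wc = WordCloud(width=800, height=400, background_color='white').generate(text)

# Step 3: display the result
plt.imshow(wc, interpolation='bilinear')
plt.axis('off')
plt.show()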
https://medium.com/analytics-vidhya/word-analysis-using-word-cloud-in-python-b273ded249dc
['Akash Deep']
2020-08-31 04:58:39.642000+00:00
['NLP', 'Word Cloud', 'Matplotlib', 'Python', 'Sentiment Analysis']
Setting Up Your Home Office as a New Freelancer
Assuming you do have the option to set up your own home office in a separate room, this is what you’ll need to bear in mind: Furniture and set-up: You’re going to spend a lot of time in your home office, so it pays to make it as pleasant and comfortable as possible. You will need a large enough desk that is the right height for you to work in comfort and a proper office chair with arm and back support that allows you to sit comfortably. Don’t skimp on the chair — your body will thank you! I also highly recommend a footstool under the desk, so you can put your feet up. I have found this to be very comfortable for long stints at the PC, and it has definitely become indispensable for me. Finally, don’t forget to get some storage options for your home office. The last things you’ll want is paperwork piling up on your desk. Technical equipment: No home office is complete without a few technical essentials. Let’s take a look at what these are. Lighting First of all, it is essential to have proper lighting in your home office. Ideally, you should have lighting installed over your reading area, over the computer and behind you so that there’s no reflection off the computer monitor. Computer It goes without saying that you will need a PC or Mac. As a freelancer writer or creative, you don’t usually need an enormous hard drive, but your computer should have plenty of RAM so you can research on the internet and run various applications concurrently. If your budget allows, I would also purchase a laptop in addition to a desktop PC. Technology has a tendency to fail just when you most need it (e.g. just as you’re about to finish that important 10,000-word project for a new client), so it is definitely worth having a second machine to work on in an emergency, or if you ever feel like working on the go. Monitor As freelancers, we typically stare at the computer screen all day long, so I’d recommend investing in a separate large monitor (at least 17'’). This will make working a lot easier and more comfortable. Internet connection A broadband or ADSL connection has become commonplace for most of us, and as a freelancer, you will definitely need one in order to upload and download large files and surf at faster speeds so as to not slow you down while researching a term or delivering a file. Printer A printer is another indispensable item for your home office. Clients will likely send you NDAs to sign at some point, you may need to print out vendor agreements or contracts, or you may want to print out a text and proofread the hard copy rather than checking it on screen. External hard disk drive You may have already experienced a computer crashing without you having backed up your data in a while. It has certainly happened to me before! It is therefore imperative to purchase an external hard disk drive as soon as possible and run daily backups. Surge protector A surge protector basically works as a shield, as it blocks excessive voltage spikes and surges. This will protect your electronics from damage, which is obviously a good idea. Smartphone A smartphone is important to have if you want the flexibility of being able to step out of your office during office hours and still being available in the sense that you can respond to email enquiries promptly without making your clients wait until you return. The client will feel reassured even if you just briefly acknowledge receipt of their email, e.g. 
“I’ll be back in the office in two hours and will respond to your email then.” Having a smartphone means there is no pressure to rush back to the office to check if you’ve missed any emails, as you can stay on top of things even while you are out and about. You can even open documents and see what files clients have sent you.
https://medium.com/the-lucky-freelancer/setting-up-your-home-office-as-a-new-freelancer-fcd22b3561b2
['Kahli Bree Adams']
2020-07-15 22:22:53.669000+00:00
['Entrepreneurship', 'Business', 'Small Business', 'Productivity', 'Freelancing']
Quantum Computing And The Meaning Of Life—Not Just ‘42’
But what exactly is quantum computing? To understand why it's so incredible, one must look at the difference between a quantum computer and a regular computer. A regular computer works by switching millions of tiny transistors between 1 and 0, or "on" and "off". The computer can only tell each transistor to either let an electric current pass or not. There's no other way and no in-between. So a computer has to switch through the different combinations, one by one. First it's, for example, 1000101, then 0101101 and then 1100100. These three random numbers already represent 3 different setups and have to occur one after the other. The computer cannot produce all 3 of them simultaneously. And though coming up with these 3 will only take the computer a few nanoseconds, having to go through billions of combinations with a lot more numbers (transistors) involved can quickly become a time-consuming effort. A quantum computer makes use of a physical phenomenon that takes place in the still quite mysterious quantum world. A so-called "qubit", which replaces the traditional transistor, consists of a molecule that's deliberately spun at incredible speed by shooting it with lasers at pinpoint accuracy while it is kept suspended in a near-absolute-zero environment. Under these conditions it falls into a so-called superposition. Remember the transistor? It's either 1 or 0. The qubit, however, can be 0, or 1, or anything in between (meaning a little of both at the same time). It uses a quantum state, which basically means it's everything and nothing at the same time. To describe it really simply: instead of having to go through the three binary number examples one after the other, a quantum computer can calculate and display all three at the same time. Imagine the game where you put a little ping pong ball under one of three plastic cups and start switching the cups around. If you were to work like a regular computer, you'd lift them up one by one to find the ball. A quantum computer simply lifts up all three at the same time, finds the ball, and then acts as if it never lifted the two empty cups in the first place.
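To make the classical half of that comparison concrete, here is a small illustrative Python sketch (nothing quantum about it, just plain brute force) showing how a regular computer has to step through bit combinations one at a time, and how the number of combinations doubles with every extra bit. The target pattern and bit widths are invented purely for the example.

import itertools

def brute_force_search(n_bits, target):
    # Step through every n-bit combination in order, like a classical computer.
    checked = 0
    for bits in itertools.product('01', repeat=n_bits):
        checked += 1
        if ''.join(bits) == target:
            return checked  # how many combinations were tried before the match
    return checked

# The search space doubles with every extra bit: 2**n combinations in total.
for n in (7, 10, 20):
    print(f"{n} bits -> {2 ** n} possible combinations to step through")

# Example: locate the 7-bit pattern 1100100 mentioned above, one combination at a time.
print(brute_force_search(7, '1100100'), "combinations checked")

A quantum computer, as described above, is claimed to explore those superposed possibilities together rather than looping through them one by one.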
https://medium.com/illumination/quantum-computing-and-the-meaning-of-life-not-just-42-b1d638c6cdd0
['Kevin Buddaeus']
2020-09-06 02:46:13.153000+00:00
['Technology', 'Data Science', 'Future', 'Science', 'Life']
How to facilitate understanding of data? Data visualization [Examples and tools]
Data is everywhere. We live in a world with a lot of data, and we may even think there is too much of it. It's true, and the signs are easy to spot: there is more and more data, and it is difficult to analyze it, process it or draw conclusions from it, which is why it is so important to be able to present it properly. In 2000, when the Sloan Digital Sky Survey telescope started work in New Mexico, it collected more data in its first weeks than had been gathered in the entire history of astronomy, and today roughly the same amount of data is collected every few weeks. Walmart has 20,000 stores in 70 countries worldwide and processes 2.5 petabytes of customer data every hour. For comparison, 1.5 PB is about 10 billion Facebook photos. This is an unprecedented amount of data. It seems that we live in a world ruled by the need to collect data; this is what the economy of the modern world is based on. Each of us "produces" data: each of us carries a phone with applications installed that send further information about the user. In effect, data has become a new currency. How many times have you received something "for free" only in exchange for providing your data? Promotions for setting up a bank account, permission to receive marketing materials such as e-books, applications — Facebook, search engines and Google tools. Data is a new business. With such a huge amount of information, the art is to select the most important data, invest time and human resources, and do it in a way that interests the audience. What's the purpose of data visualization? If you want your Facebook post to get record-breaking results, what do you do? You add catchy, attractive graphics. This works the same way with reports. Good visualization attracts attention, is easier to understand, and helps you reach your audience quickly. With dashboards and graphics adjusted to the target group, even huge datasets can be clear and understandable. Why? Because most people are visual. So if you want your meetings with colleagues to be effective and your customers to understand your data better and faster, you should turn boring charts into eye-catching graphics. Here are some interesting numbers that confirm the importance of visualization: People receive 90% of all information through their eyes, Photos increase the readability of a text by 80%, People remember 10% of what they hear, 20% of what they read and 80% of what they see, If a leaflet contains no illustrations, people will remember 70% of it, and adding graphics can increase that number to 95%. Proper visualization of data also provides many benefits for your company: Quick decision making. Summarizing data is easy and fast thanks to graphics that let you quickly see whether a specific column is higher than the others or whether a given indicator exceeds a predetermined threshold, all without browsing several pages of statistics in Google Sheets or Excel. More engagement. Most people are better at seeing and remembering information presented in graphics with clear messages. A better understanding of data.
Well-made reports are clear not only to technical specialists, analysts and data scientists but also to non-technical managers such as a CMO or CEO, and they help each employee make decisions in their own area of responsibility. With this influx of data, visual communication can become a key way to attract and retain users for longer, or to help stakeholders understand and learn from the data presented, and that is the problem this article discusses. So, how do you do it? Surprisingly, the biggest problem in data visualization is not choosing the wrong tool or lacking skills, but something more prosaic and strategic: a lack of focus on the end user. As a result, visualizations are often produced automatically, without asking whether the recipient for whom the graphic is made will understand the presented results and, above all, whether reading it will take them more time than if the data had been left as it was. This often produces a "mess" of data, when we try to present more than the human brain can quickly and effectively analyze. Examples of how NOT to do visualization: completely incorrect proportions of data; obvious manipulation of scale or proportion; 3D charts, which work well very rarely… To avoid such mistakes, I suggest you stick to the following (often forgotten) basic questions to set the right mindset for your next data visualization challenge. For whom is this visualization intended? RECIPIENT Identify the highest-priority people (e.g. a teacher, a classroom, a management board, end users). What are the company's current problems, what does management expect, and what difficulties prevent the problem from being solved? Resist the temptation to create a visualization that meets the needs of every individual. Why are you doing this visualization? CONTEXT Specify what issues you would like to address in the presentation, e.g. a business presentation. It is worth considering the type of decision: strategic (e.g. a single answer, such as whether to buy a given property), operational (issues requiring a response many times a day) or tactical (issues reviewed regularly at a weekly or monthly meeting). What do you want to achieve with the presentation? By carefully selecting the relevant information, you can significantly influence stakeholders' next decisions, e.g. by contrasting sales results that exceed the statistically significant norm in a specific period of time. How will you create this visualization? TYPE Standard charts, or artistic visualizations that require more work but bring better results (properly made, they affect emotions subconsciously and, going further, decisions). How can you tell if a visualization is "good"? Three simple criteria. A good visualization should have: a story (functional and at the same time engaging [storytelling] content), an adjusted form (design adapted to the target person, e.g. simple charts for a weekly report, artistic charts for a presentation aimed at evoking emotions), and values (consistency of the information pointing to a solution). And what is "bad" visualization?
Bad visualization means: Lack of functionality (effect — uselessness) Lack of appropriate form (effect — misunderstanding) Inconsistency (possible consequence — potential manipulation) Examples of well-made visualization (subjective) Interactivity A perfect example of a data visualization that combines all the necessary ingredients of an effective and engaging piece: it uses colour to easily distinguish trends; it allows the viewer to get a global sense of the data; it engages users by allowing them to interact with the piece, and it is surprisingly simple to understand in a single glance. Story This example tells the story of every known drone attack (obviously not controlled by AI…) and the victims in Pakistan. By sorting the information, the dramatic facts were presented in an easy to understand visual format. Time-saving This visualization shows 100 years of the evolution of rock in a single page. Not only does it simplify information for you, but also provides actual audio samples for each genre, from electronic blues to dark metal. The context The goal of this insightful interactive piece by Nikon is to give users a sense of the size of objects, both big and small, by using comparisons. Next to the Milky way, for example, a common object such as a ball or a car seem smaller than we ever imagined. An ideal example for presentations e.g. of the board of directors. The aim is to create a chart that shows the average price per carat of a diamond over five years. What should be done to attract attention, increase the memorization of the information, and perhaps even affect further decisions such as entering a specific market? Add the context — “Diamonds WERE a girl’s best friend”. Which visualization tools should I use? It depends. Data visualization is a form of communication used in various fields, e.g. science, business, journalism etc. Therefore, everything depends on the purpose of presenting data, the level of advancement of visualization and your experience with a particular program. If you are just beginning with data visualization or simply lack an idea for a proper graph, then visit the website — https://datavizproject.com To select the right tool, you need to specify an (often contradictory) objective: - Analysis or presentation? — Do you want to research data (R, Python) or build visualizations for e.g. a client (D3.js, Illustrator) or maybe something in between (Tableau, Ggvis, Plotly)? - Changes — Will you change your data while doing the visualization? In Illustrator you have to start building your chart from the beginning when you change/add value. In D3.js you can change the data in an external location and update the database by re-importing. In Plotly and Lyra — just import the database once and you can freely change it in the tool without losing a lot of valuable time. - Basic or unusual chart types? You need standard “bar” or “line” charts (Excel, Highcharts); or maybe more original? (D3.js). If you don’t know how to code, then the solution to the second situation may be the Lyra application, where you can change any element without entering even a single line of code. - Interactivity vs. static: You need to create interactive graphics e.g. for a website (D3.js, Highcharts) or maybe you just need static graphics in PDF/SVG/PNG format (R, Illustrator). “There are no perfect tools for everything, there are only good tools for people with specific goals.” The tools should be tailored to a specific need (blue — software libraries, red — programs). 
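For readers who end up on the "code it yourself" side of the tool choices above, a standard chart really is only a few lines of Python. The sketch below uses matplotlib with invented monthly sales figures purely for illustration; it is roughly the static equivalent of what Excel or Highcharts would produce through a UI, with the "predetermined threshold" idea from the benefits list drawn in as a reference line.

import matplotlib.pyplot as plt

# Invented monthly sales figures, purely for illustration.
months = ['Jan', 'Feb', 'Mar', 'Apr', 'May', 'Jun']
sales = [120, 135, 128, 150, 170, 165]

fig, ax = plt.subplots(figsize=(8, 4))
ax.bar(months, sales, color='skyblue')
ax.set_title('Monthly sales (example data)')
ax.set_ylabel('Units sold')
ax.axhline(140, color='coral', linestyle='--', label='Threshold')  # a predetermined threshold
ax.legend()
plt.tight_layout()
plt.show()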
What tools do we use in Wrocode to visualize data? Free Google Fusion Tables (a great tool for presenting geographical data, unfortunately, Google is cancelling its support for this program on December 3, 2019), Tableau Public (easy to use — often used by us to present e.g. sales data) Paid Sisense (simple interface — huge possibilities → definitely worth recommending) …and many others depending on need.
https://medium.com/wrocode/how-to-facilitate-understanding-of-data-data-visualization-examples-and-tools-c07405bef679
['Łukasz Busz']
2019-08-26 12:58:30.284000+00:00
['Data Visualization', 'Big Data', 'Data Science', 'Graphic Design', 'Dashboard']
Like a Stone:
May 2020 marks the 3rd anniversary of Cornell's death. This was, is and will always be my love-letter to his family and to musicians and music lovers everywhere. My wheels crawled along the asphalt and I breathed in the afternoon sky. Brushstrokes of cotton candy melting with fireside abstracts served my daily commute home from work well. How could I possibly mind rush hour when my drive literally reminds me to appreciate the view? Time pushed as traffic crept along the California coastline and so did I. My thoughts swirled around nothing and everything as my eyes took deliberate turns between sky and road. The volume on my radio was low enough to dim the car commercials but still present enough to tap my ear when I landed on a tune I liked. Enter Chris Cornell. And just like that. My sunset had a soundtrack. As soon as I heard those pipes the fact that I couldn't peg which song he was singing (a rarity) didn't matter. His voice is undeniable; uncompromised passion with a bellowing tone that weaved through the speakers straight into your blood. I was only a few seconds in when it hit me — I had no idea what song this was. As a retired party girl who made her living on the stripper stage in the 90s, I take great pride in being 'in the know' with musicians from back in my day. I worked the clubs in Waikiki from Milli Vanilli and Terence Trent D'Arby to Mötley Crüe and Fatboy Slim. Now decades later I can still tell you, with each song I hear — where I was working, what beaded leather or fluorescent lace costume I wore and which drug dealer had the best coke. Knowing his voice but not recognizing which song was more than annoying — it was a treat. I turned up the volume and without warning his lyrics pulled me inside a part of myself I was not expecting to revisit on a random afternoon drive home from the office. "And I sat in regret Of all the things I've done For all that I've blessed And all that I've wronged In dreams until my death I will wander on" I could blame the sudden tickle in my nose and watery eyes on PMS or low blood sugar, but the fact is — Chris Cornell was more than his voice — he was a rock and roll poet. My introduction to Audioslave and their new tune Like a Stone reignited something in me that's difficult to describe. My pole dancing days far behind me, I wrestled with feelings of anxiety remembering who I was then compared to the woman I've become. There I was, driving home from my corporate job in an upper class, conservative town, and a song I fell in love with became a magic carpet ride to my past. With each note and lyric, I danced through a wormhole to a time when I was old enough to be on my own and make reckless choices, yet young enough to find my way out. It seemed Chris Cornell and I both found our way out. He too was a survivor of the party scene in the 90s. I didn't follow his personal life, and to be honest wasn't aware that he was back on the charts fronting his new band at the time, Audioslave. But like a long lost sister who powered through Aquanet hairspray and faded jeans torn at the knee, I was proud we were both doing well in our new lives.
https://medium.com/narrative/like-a-stone-e5738bbe43
['Christine Macdonald']
2020-08-29 16:08:16.096000+00:00
['Depression', 'Death', 'Mental Health', 'Suicide', 'Music']
Why Going Vegan Might Not Be a Solution at All
Why More Cattle? Well, what does this all have to do with more cattle? We live in a broken world. And we want to come to a sustainable, desirable world like I just described. We want to be wise humans living within the ecosystems. And don't think that means we will be going back in time, living like cavemen. I'm convinced we will be living in harmony with nature in cities too. But for that to happen, we need a transition period. And the most important part of the transition is the restoration of the soil of our planet. We have been degrading the soil for ages and ages by maximizing our food production without care for the land. We need to increase biodiversity, and scientists estimate that at least one-quarter of our species on planet Earth live in the soil. Soil Food Web for biodiversity. Source: commons.wikimedia And the damage is so bad by now that farmland and nature reserve land are both very degraded. All over the world. Soil creatures cannot breathe anymore because the layers have compacted. Due to this compaction, water cannot be stored in the soil and plants grow with very superficial roots. I work with farmers in my home country, the Netherlands, and the first thing we teach in nature-inclusive farming is that we have to prevent more compaction from happening. We have to stop compacting the soil with heavy tractors. If we don't do that, all other efforts for creating biodiversity will fail. The next steps are natural manure to give enough nitrogen and phosphorus to our soils, combined with lots and lots of decomposing organic matter to build up humus: the healthy topsoil that our planet needs so badly. Our agriculture has played a large role in degrading the land. Mindful agriculture can play a big part in restoring our soil to become healthy again. And that's where cattle and chickens and pigs come in. Animals are an intrinsic part of the web of life. Cattle have hooves and they trample the soil in such a way that compaction is solved in a natural way. Chickens have claws and they love to eat the maggots out of the cattle manure. And pigs just love to turn soil with their snouts. Allan Savory makes the case for more cattle with what he calls Holistic Management. Yes, overgrazing has been a problem degrading our soil. A solution, however, is not to go for less grazing. Allan says we need more cattle and chickens and pigs doing what they do best: grazing our grasslands, pooing and trampling the soil. We need to make sure the cattle are in big herds, grazing very quickly and pooping an immense amount of manure, and then moving on, leaving the land alone for one or two complete years (natural cycles of plant growth). I do agree with Allan very much when he says to environmentalist organizations that they need to work with cattle in Nature Reserves to restore degraded soils this way. Yes, we should. And in this short film, permaculture expert Geoff Lawton gives his perspective on the matter. He gives nuances to Allan's story that I relate to very much. We need to look at every situation locally and decide what tools of people-and-animal collaboration work best. Yes, we need animals. There's not a doubt in my mind that we need animals on a healthy planet. So what about the ammonia farts of cows, you ask? Healthy soil will absorb all. The problem is we don't have healthy soil anymore… Sorry, dear vegans, I will personally not go vegan. And I do not think that going vegan is a solution to all of our problems. During the transition period, we need to work together.
People in teams with animals to make our planet healthy again. And make sure we have enough food for all. And earn a living in our economies. We have to find new ways of being together, producing our human food as well as restoring ecosystems.
https://medium.com/climate-conscious/why-going-vegan-might-not-be-a-solution-at-all-e85c28c4bcf
['Desiree Driesenaar']
2020-10-07 11:02:41.757000+00:00
['Climate Change', 'Vision', 'Sustainability', 'Vegan', 'Climate Action']
How to Unlock the 5-Paragraph Essay to Improve Your Writing
How to Unlock the 5-Paragraph Essay to Improve Your Writing Why Beginnings, Middles, and Ends Still Matter in Writing Photo by Magda Ehlers from Pexels I taught the dreaded 5-paragraph thesis essay for 25 years in college. The students hated it. I hated reading their dull, lifeless essays. And yet, we persisted. There is a lesson in writing structure in the 5-paragraph essay, a human need for a beginning, middle, and end. Success in school is often measured by how well a student writes a 5-paragraph thesis essay. College entrance exams, AP tests, the SAT, ACT, and exit exams in freshman writing classes often use the 5-paragraph thesis essay as the standard-bearer of good writing. Most of you reading this today probably had courses that emphasized the 5-paragraph essay, with its formulaic beginning with thesis statement as last sentence, its three body paragraphs to express three supporting points, and its conclusion focused on restatement of main points. The 5-paragraph thesis essay is so formulaic that it doesn't even have to be 5 paragraphs long. It could be 20 paragraphs, with a single paragraph introduction, 18 paragraphs for each point to be made (1 point per paragraph), and 1 concluding paragraph. The format is extendable, like a ladder, but not flexible. For all its faults, the 5-paragraph thesis essay is a perfect fundamental starting point to structuring writing. It is not, however, a model to hold up and worship. On the contrary, many writing theorists will attest to its limitations. But it does emphasize the absolute basics about writing that most of us practice today — a beginning, a middle, and an end. The limitations of the 5-paragraph essay I once taught a freshman college student, a young man from western Kansas, who wrote page after page of dull 5-paragraph essays. He was a hard worker, but he was not improving. After 3 essays, we sat down together to analyze his efforts. Upon closer inspection and in talking with him, I discovered that upon the advice of his high school English teacher, he had employed the most severe 5-paragraph formula I had ever seen. Each essay was no more and no less than 20 sentences long. Each paragraph had four sentences — a topic sentence, a specific discussion point, an example, and a transitional sentence. In the introduction, the transitional sentence was the thesis statement, the overall point of the essay. In the conclusion, the transitional sentence was a concluding statement broadening the point of the whole essay or rounding it out with a clever statement. In short, my student's essays lacked development, specific details, life's blood. It was all skeleton and no flesh. With the 20-sentence formula, there was no room for elaboration, a second (or third or fourth example), a personal anecdote, an insightful quotation. I don't fault my student for trying what he was taught, nor do I fault his teacher for presenting a formula to prepare students for standardized college entrance exams. But the rigors of college thinking and learning were not suited for such a formulaic 5-paragraph thesis essay. Thought and ideas do not come neatly packaged with only three points. What if an essay idea needed a fourth point? What if a paragraph point needed a second example to help support and illustrate an idea?
In the student's formula, he would not have the opportunity to help his own case. The origins of the 5-paragraph thesis essay This story may be apocryphal, but it serves a good point. During graduate school, in my teacher training program, our director explained the origins of the 5-paragraph thesis essay thus: During WWII with the enormous response of men of all stripes volunteering to fight fascism, the drill sergeants needed a method to teach the recruits to get them through basic training and to the front lines as quickly as possible. Recruits came from a great variety of socioeconomic and educational backgrounds, from the very sharp to the very dull-witted. No one who was physically able, however, was turned away. To teach to the common denominator, the drill sergeants hammered home a three-part structure. 1. Tell them what you are going to tell them. 2. Tell them. 3. Tell them what you told them. For instance: 1. Today's lesson is about keeping your helmet on your head, the most important of all your safety gear. 2. Keep your helmet on your head, for it is the most important of all your safety gear. 3. That's it for your helmet. Remember to keep your helmet on your head to keep your head safe. In this way, recruits heard the advice three times. In the rush to have recruits learn so much material in such little time, repetition was looked at as a way to get people to remember the most important points. Say something often enough and people will remember it. If even one recruit remembers to keep his helmet on to prevent a fatal injury, the drill sergeant can sleep soundly knowing he did the job as best he could. Many of these drill sergeants turned into the college professors that filled America's universities following WWII. They took the theories and ideas they learned from the service and applied them to their own work as writing theorists. And thus, the 5-paragraph thesis essay was born, with its 3-part structure: Introduction — Tell your readers what you are going to tell them. Body — Tell your readers your main points. Conclusion — Tell your readers what you told them. Here's the rub. For all the millions of 5-paragraph thesis essays written by students over the years, this is not a form that occurs in nature. Browse through dozens of Medium articles today and look for a 5-paragraph thesis essay. I defy you to find one. Browse through magazines, such as The New Yorker, National Geographic, Esquire, Vogue, even Time and Newsweek, and you won't find a 5-paragraph thesis essay there either. English teachers worth their salt know that the 5-paragraph thesis essay is a starting point for most students, not the goal. With declining literacy rates and skills today, you have to start somewhere. Most teachers want their students to excel beyond the 5-paragraph thesis essay, to learn the fundamental form so that they can escape from that limiting box into something else like a flowering vine that will meander and wind where it will. The 5-paragraph thesis essay: beginning, middle, and end Writing needs beginnings, middles, and ends. Human life is organized around this 3-part structure. Childhood / prime of life (adolescence and adulthood) / old age. Or non-working life / working life / retirement — whatever structure you put to it — we are biologically predisposed to see life in this 3-part structure.
This doesn’t mean that every article, every story, every narrative has an overt beginning, middle, and end. Not every story starts with “Once upon a time” or “A long time ago, in a galaxy far far away . . .” Some of the greatest literary works start famously in medias res, in the middle of things, such as Hamlet, in which Hamlet’s father has already been killed and the plot set firmly in motion. Homer’s Iliad starts with an argument about going to war with Troy, long after the beautiful Greek Helen was kidnapped by Prince Paris of Troy, the action that prompted the war. Some stories start far before their stated offerings. The great comic novel, The Life and Opinions of Tristram Shandy, Gentleman, is presumably about Tristram’s life, yet the first 150 pages are devoted to philosophical arguments between his father and uncle and various personages and getting Tristram himself born. Thus the “beginning” of this tale is way before the beginning of the main character’s life. Or to take a more contemporary example, consider the enthralling Christopher Nolan movie Memento, which starts at the end, and evolves backward toward the beginning through the lens of Lenny suffering from short-term amnesia. So much writing about writing discusses the opening or “the hook,” such as a personal anecdote to lead a reader into the essay. That is the beginning. And then the middle of an essay is the development of the main points, the details, perhaps an enumerated list (usually more than 3 things) with detailed examples of the main ideas. And then the end of the essay is included as a takeaway point, the one thing that you want your readers to remember. Even today, we are bound in so many ways to this 3-part “beginning, middle, and end” structure. The takeaway Writing is as much art as craft. The art of writing is hiding the craft from the reader, to hide the scaffolding, the hammer and nails and wooden planks that build the structure, while still offering a beginning, middle, and end. Or to use another metaphor, we clothe our writing so the skeleton doesn’t show. At first, our essays are skin and bones, undernourished adolescent structures. Once we flesh them out, they grow and flourish, and they need new clothes for their new shapes. So after all, we are all still writing 5-paragraph thesis essays — essays composed of beginnings, middles, and ends, with a point to make, ideas for development, and a take-away point. We just clothe our essays in finer silks than their cousin, the 5-paragraph thesis essay.
https://medium.com/swlh/how-to-unlock-the-5-paragraph-essay-to-improve-your-writing-3f53e5eb96dd
['Lee G. Hornbrook']
2020-04-16 15:00:17.451000+00:00
['Writing Tips', 'Writing', 'Essay', 'Higher Education', 'Creativity']
Will Tech’s Monopolies Survive 2020?
Will Tech’s Monopolies Survive 2020? How the triple turmoil of a pandemic, protests, and a presidential election threatens Silicon Valley’s status quo. Photo: Wang Ying/Xinhua via Getty Welcome back to Pattern Matching, OneZero’s weekly newsletter that puts the week’s most compelling tech stories in context. There was a brief moment, at the peak of the Covid-19 pandemic’s first wave in the United States, when it looked like Big Tech might be back in the public’s good graces. With stay-at-home orders across the country, screens were no longer an addictive distraction from real life, but the locus of real life itself. Zoom was powering business meetings; Houseparty, happy hours. Facebook was once again a dominant force in news; Apple and Google were partnering on a privacy-conscious contact tracing app. Politicians in the United States and Europe who had been laying the groundwork for new regulations suddenly had more urgent things to worry about. That moment has passed. The lifting of lockdowns has been greeted not with sighs of relief at a return to the status quo, but with rallying cries to change it. There are protests in the streets. A presidential election looms. While criminal justice reform tops the domestic agenda, the appetite for tech reform appears to have returned as well. The Pattern Big Tech is back in the hot seat. 💬 The European Commission this week opened two antitrust probes against Apple, focusing on how its App Store rules and Apple Pay system, respectively, hamstring competitors. The App Store investigation was sparked by a 2019 complaint from Spotify about Apple’s practice of taking 30 percent of all subscription revenues from users who sign up for third-party apps on iOS. That puts Spotify at a disadvantage in competing with Apple’s own Apple Music service, from which Apple keeps 100 percent of revenues. (In 2018, Spotify stopped allowing users to pay via iOS.) The Apple Pay investigation, meanwhile, will examine how Apple limits the use of its devices’ “tap and go” payment functionality to Apple Pay alone, once again giving its own service a big edge over competitors. 💬 Spotify is hardly the only company affected. I wrote in depth in February about the brewing antitrust case against Apple, and the developers lining up to testify against it. In a twist of timing, one of those developers, Basecamp, launched a new paid email app, called Hey, on the same day the EU investigation was announced. Apple rejected it, on the grounds that it doesn’t allow the in-app subscription options that would give Apple its 30-percent cut. Protocol’s David Pierce recounted how that decision went down, while The Verge’s Dieter Bohn blasted Apple for inconsistencies in how it enforces its rules. Even Apple blogger Jon Gruber, who often defends the company, agreed that the company’s rent-seeking has gone too far. (Meanwhile, if you’re interested in Hey, read developer Kaya Thomas’ OneZero review of the buzzy, pricey new email platform.) 💬 Even mighty Facebook can’t get its apps onto Apple devices when they compete directly with Apple’s own offerings. The New York Times reported Thursday that Apple has rejected Facebook Gaming, the social network’s new casual gaming app, at least five times in the past four months, citing policies against apps that function primarily as game stores. Google, for its part, quickly approved Facebook Gaming on the Google Play store in April. 
Illustrating the user-unfriendly effects of Apple’s restrictions, the Times article explains that Facebook’s approach to getting its app approved has involved continually making the interface less intuitive, on the theory that this would make it less store-like. 💬 That Apple governs its App Store with impunity, and often to its own advantage, is not new. Neither is it new that Apple’s own apps sometimes compete with, copy, and crowd out those made for its platforms by independent developers. What is different now are the scope of Apple’s first-party app ambitions, the number of developers willing to risk the giant’s ire by speaking out, and the willingness of people in power to listen. In OneZero this week, Owen Williams argues that the EU case could be “a defining moment for the technology industry, as companies like Google and Facebook may find themselves scrutinized in a similar way.” And speaking of Google… 💬 Google made a similar power play this week by integrating its Meet videoconferencing software into the Gmail app. It’s a transparent attempt to leverage Google’s dominance in one market — email, in this case — against a rival (Zoom) that was outcompeting it in another market. 💬 The regulatory fervor is not confined to Europe. Back in the United States, the right broadened its assault on Section 230 this week, as Sen. Josh Hawley introduced a bill that would make it easier to sue tech companies for inconsistencies in how they moderate content. The move comes two weeks after Donald Trump signed an executive order challenging the legal protections that online platforms enjoy under Section 230 of the Communications Decency Act. Gizmodo’s Dell Cameron argues that the bill, like Trump’s order, is mostly toothless: It still allows companies to set their own rules of moderation, as long as they stick to them and apply them equally to all parties “in good faith.” 💬 And yet that even that mushy qualifier could open the door to enough lawsuits that some companies may simply decide a more hands-off approach is safest. Which is, of course, what Trump and Hawley want: for social platforms to keep their paws off of racist or false content from right-wing sources, including the president himself. This week conveniently brought us an illustration of the kind of dustup that could turn into a lawsuit, when NBC News reported that Google had banned the financial site ZeroHedge and conservative political site The Federalist from its ad network for spreading racist conspiracy theories about the anti-police brutality protests. As an uproar spread — both sites have large, vocal followings — Google disputed NBC News’ story. Google said The Federalist was never demonetized, but that it had reached an agreement with the publisher that involved removing racist comments from The Federalist’s comment section. 💬 These are the types of interventions that liberals and civil rights activists, along with some of tech companies’ own employees, have been calling for. (Some civil rights groups are now calling on companies to boycott Facebook’s ad platform.) They’re also the type that get the right riled up and build momentum for bills like Hawley’s, as Ben Shapiro and Ted Cruz were quick to rail against Google’s moves this week. As I wrote in a previous newsletter, we appear to have at last reached the point where the big platforms have to pick a side. 
Twitter was the first to do so, when it started flagging some of Trump’s tweets as misleading, and the company kept up its enforcement against him this week by putting a warning label on a video he tweeted. The video fabricated fake CNN footage of a “terrified” Black toddler running away from a “racist baby,” then implied that the network was spreading divisive fake news (a classic example of Trumpian projection). 💬 Facebook has opted for the laissez-faire approach to Trump’s posts, but even it felt compelled to take action this week when the liberal blog Media Matters for America reported that the president was running Facebook ads with Nazi iconography. (Trump’s campaign then claimed the inverted red triangle was an antifa symbol — which is a lie, according to historians who study the group.) Facebook removed the 88 offending ads. 💬 Antitrust enforcement and Section 230 reform are separate issues. But the growing momentum behind both is indicative of a larger trend: Big Tech has lost the benefit of the doubt. That happened long ago in Europe, but it is finally happening in the United States as well, from both major parties. And any notion that the pandemic or a Republican presidency would ease the regulatory pressure on Silicon Valley has now been put to rest. Americans of all ideologies are fed up with business as usual, their polarization arguably stoked by the tech platforms themselves, and their economic stability undermined by the rise of the gig economy. In other words, we’re living in a mess that is partly of the tech industry’s making. And now that mess is coming back to haunt it. Undercurrents Under-the-radar trends, stories, and random anecdotes worth your time 🗨️ Two Black leaders at Pinterest left the company over racial discrimination, saying they were subjected to offensive comments, unfair pay, and retaliation. CEO Ben Silbermann subsequently issued a public apology and admitted “parts of our culture are broken,” Bloomberg’s Sarah Frier reported. But the ex-employees, Ifeoma Ozoma and Aerica Shimizu Banks, who made up two-thirds of the platform’s public policy and social impact team, said on Twitter that they heard the apology only through the media. Their allegations are part of a wider reckoning over tech companies’ treatment of Black employees, and they dent the reputation of a platform that had previously earned praise for some progressive policies — which, it turns out, Ozoma and Banks had been criticized by their managers for championing. Read Ozoma’s full thread here. 🗨️ Lesser-known face recognition companies are eagerly courting law enforcement, looking to fill the vacuum after IBM, Microsoft, and Amazon stepped back. Clearview AI, NEC, nd Ayonix are among those poised to capitalize by ignoring the anger over surveillance technology’s discriminatory effects on Black communities, the Wall Street Journal reported. My OneZero colleague Dave Gershgorn has written about an even longer list of companies, including many that you might not expect, that have been trying to cash in on a face recognition gold rush. In Bloomberg Opinion, Cathy O’Neil makes the case that face recognition by law enforcement will continue until or unless Congress repeals post-9/11 legislation, such as the Real ID Act, that prioritized antiterrorism efforts over civil liberties. 🗨️ Instagram’s algorithm systematically incentivizes its users to show skin in their photos, according to a report from the nonprofit AlgorithmWatch. 
🗨️ The great scourge of bots on social media may be overstated, bot expert Darius Kazemi argued, in a New York Times article by Siobhan Roberts. Headlines of the Week Facebook Groups Are Destroying America — Nina Jankowicz and Cindy Otis, Wired Devin Nunes’ Attorney Says He’s at ‘Dead End’ in Quest to Reveal Identity of Twitter Cow — Kate Irby, Fresno Bee Thanks for reading. Reach me with tips and feedback by responding to this post on the web, via Twitter direct message at @WillOremus, or by email at [email protected].
https://onezero.medium.com/will-techs-monopolies-survive-2020-90a8ea05b6c3
['Will Oremus']
2020-06-20 13:57:29.136000+00:00
['Technology', 'Apple', 'Facebook', 'Pattern Matching']
Easy Peasy Stores With Public and Private Actions
Easy Peasy Stores With Public and Private Actions Easy Peasy provides a better API and experience on top of Redux Bengtskar Lighthouse, Finland, 2020. Photo by the author. Since the end of 2019, I have been using Easy Peasy to manage the state of my applications both professionally and personally. The library has a familiar API and logic with a lightweight feel and good flexibility. If you're using or have used Redux and aren't fully sold on it, take a look at Easy Peasy. It may be worth it. "Easy Peasy is an abstraction of Redux, providing a reimagined API that focuses on developer experience." — Easy Peasy's official website Each time I've set up an Easy Peasy store, I've experimented more with its implementation. I've asked myself more questions about not only what the library can do but what can be done with it. During my latest project, I asked myself, "What if I wanted my app to have access to only a specific subset of actions? Can Easy Peasy create private and public actions for its stores?" I looked through the docs but didn't find an answer to this question. I'd hoped for something like JavaScript classes, where I could preface any action or state value with private. But while that wasn't the case, it doesn't mean there wasn't an answer. I mentioned that with each project, I explored more of not only what Easy Peasy could do but what I could do with it — and this will be an example of the latter. Note: This article will assume a base understanding of creating a store and Hooks using Easy Peasy. Some code samples will roughly include these concepts but will not explain them or show them in full. Please visit the Easy Peasy docs for better information on getting started.
https://medium.com/better-programming/easy-peasy-stores-with-public-and-private-actions-5cc1682765da
['Daniel Yuschick']
2020-10-29 14:38:41.187000+00:00
['Programming', 'JavaScript', 'Redux', 'Typescript', 'React']
Would You Buy A Book For $117? Here’s How I’ll Sell It
Who would spend that much on a book? Nobody. But I'm not positioning this as a book. I'm selling hard-to-find information. Some of it cannot be found elsewhere. The book acts as the vehicle for delivering the info. I'm selling it as a physical book only. $117 sounds like a lot of money for a book. I should also mention you can't buy it on Amazon or at any bookstore. Plus, it appeals to a narrow audience. Only a few can benefit from the information. None of this seems to make sense, right? It's hard to find the product. It's expensive. It appeals to a narrow audience. The format may not be convenient. What am I thinking? It all seems counter-intuitive. The truth is, to sell a book for that much money you need to do things differently. Some of it seems puzzling at first glance. Here's why it works. Narrow The Audience Niche down to a smaller audience with specific needs and desires. You'll likely find an under-served audience. This audience must feel passion or have a desperate need for what you are selling. For example, let's suppose you sell a course on dog training. You'll compete with hordes of others. Most folks seeking out this information will find tons of options. It's unlikely that you'll offer something so earth-shattering that you can charge a premium price for a book. You would need a different approach. Now, let's niche down and limit the audience to Catahoula Leopard dogs. You're now facing a tiny audience. Though small, their passion for these dogs radiates. Only a few providers serve them. If you can fill that gap, you hold more pricing power. Narrow The Focus Now we're facing a handful of competitors. How could we narrow the focus and dive deep into a really specific part of Catahoula Leopard Dogs? What about training Catahoula Leopard dogs for competitive dog shows? Now we're drilling down into a small but focused area of dog training. A small but fanatical crew of prospects exists. Our audience shrinks but the remaining prospects spend big dollars. That's our opening. Limit Availability Scarcity drives up demand. Sure, that's the first lesson in copywriting. If we offer our information in digital format, it's unlimited. A physical book you buy directly from the source screams scarcity. I can state that I'm printing only five hundred copies. Once I sell out, they're not coming back. It's tough to sell that story on Amazon, even if it's true. The limited printing gives it a feeling of exclusivity. You'll be one of only five hundred to possess this information. Your buyer feels a sense of prestige. Oversize The Offer The buyer gets more than just a book. The added bonuses increase the perceived value. You see, the book is just one piece of the overall offer. The onslaught of extras makes the price a no-brainer. I may even include some limited premium bonuses for the first twenty buyers. Imagine crafting a sales campaign to owners of Catahoula competition dogs. See how specific we can get in our marketing message? You can drill down into the exact needs and challenges they face. Compare that to targeting all dog owners. There's no way you can reach them on the same personal level. There's one more advantage of going higher-priced with a small audience. I can send a handwritten thank-you note to all buyers. Try doing that with a seven-dollar ebook.
https://medium.com/writtenpersuasion/would-you-buy-a-book-for-117-heres-how-i-ll-do-it-68e7ee8eeb8d
['Barry Davret']
2017-03-26 21:43:08.187000+00:00
['Marketing', 'Business', 'Persuasive Writing', 'Psychology', 'Digital Marketing']
An Ultimate Cheat Sheet for Data Visualization in Pandas
An Ultimate Cheat Sheet for Data Visualization in Pandas All the Basic Types of Visualization Available in Pandas, Plus Some Advanced Visualizations That Are Extremely Useful and Time-Saving We use Python's pandas library primarily for data manipulation in data analysis. But we can use pandas for data visualization as well. You do not even need to import the Matplotlib library for that. Pandas itself can use Matplotlib in the backend and render the visualization for you. It makes it really easy to make a plot using a DataFrame or a Series. Pandas uses a higher-level API than Matplotlib, so it can make plots using fewer lines of code. I will start with the very basic plots using random data and then move to the more advanced ones with a real dataset. I will use a Jupyter notebook environment for this tutorial. If you do not have that installed, you can simply use a Google Colab notebook. You won't even have to install pandas on it. It already has that installed for us. If you want a Jupyter notebook installed, that's also a great idea. Please go ahead and install the anaconda package. It's a great package for data scientists and it's free. Then install pandas using: pip install pandas or in your anaconda prompt conda install pandas You are ready to rock n roll! Pandas Visualization We will start with the most basic one. Line Plot First import pandas. Then, let's just make a basic Series in pandas and make a line plot. import pandas as pd a = pd.Series([40, 34, 30, 22, 28, 17, 19, 20, 13, 9, 15, 10, 7, 3]) a.plot() The most basic and simple plot is ready! See how easy it is. We can improve it a bit. I will add: a figure size to make the plot bigger, a color to change the default blue, a title on top that shows what this plot is about, and a font size to change the default size of the numbers on the axes. a.plot(figsize=(8, 6), color='green', title = 'Line Plot', fontsize=12) There are a lot more styling techniques we will learn throughout this tutorial. Area Plot I will use the same Series 'a' and make an area plot. Here, I can use the .plot() method and pass the parameter kind to specify the kind of plot I want, like: a.plot(kind='area') or I can write it like this: a.plot.area() Both of the methods I mentioned above will create this plot: The area plot makes more sense and also looks nicer when there are several variables in it. So, I will make a couple more Series, make a DataFrame, and make an area plot from it. b = pd.Series([45, 22, 12, 9, 20, 34, 28, 19, 26, 38, 41, 24, 14, 32]) c = pd.Series([25, 38, 33, 38, 23, 12, 30, 37, 34, 22, 16, 24, 12, 9]) d = pd.DataFrame({'a':a, 'b': b, 'c': c}) Let's plot this DataFrame 'd' as an area plot now, d.plot.area(figsize=(8, 6), title='Area Plot') You do not have to accept those default colors. Let's change those colors and add some more style. d.plot.area(alpha=0.4, color=['coral', 'purple', 'lightgreen'],figsize=(8, 6), title='Area Plot', fontsize=12) Probably the parameter alpha is new to you. The 'alpha' parameter adds some translucency to the plot. It is very useful when we have overlapping area plots, histograms or dense scatter plots. This .plot() function can make eleven types of plots: line area bar barh pie box hexbin hist kde density scatter I would like to show the use of all of those different plots. For that, I will use the NHANES dataset by the Centers for Disease Control and Prevention. I downloaded this dataset and kept it in the same folder as this Jupyter notebook.
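Before moving on to the real dataset, here is a consolidated, self-contained version of the toy examples above that you can paste into a single cell. It only repeats the same Series and styling parameters already shown; the plt.show() calls are there so it also works outside Jupyter.

import pandas as pd
import matplotlib.pyplot as plt

# The same toy Series used above.
a = pd.Series([40, 34, 30, 22, 28, 17, 19, 20, 13, 9, 15, 10, 7, 3])
b = pd.Series([45, 22, 12, 9, 20, 34, 28, 19, 26, 38, 41, 24, 14, 32])
c = pd.Series([25, 38, 33, 38, 23, 12, 30, 37, 34, 22, 16, 24, 12, 9])

# Styled line plot of a single Series.
a.plot(figsize=(8, 6), color='green', title='Line Plot', fontsize=12)
plt.show()

# Combine the Series into a DataFrame and draw a translucent, colored area plot.
d = pd.DataFrame({'a': a, 'b': b, 'c': c})
d.plot.area(alpha=0.4, color=['coral', 'purple', 'lightgreen'],
            figsize=(8, 6), title='Area Plot', fontsize=12)
plt.show()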
Please feel free to download the dataset and follow along: Here I import the dataset: df = pd.read_csv('nhanes_2015_2016.csv') df.head() This dataset has 30 columns and 5735 rows. Before starting to make plots, it is important to check the columns of the dataset: df.columns output: Index(['SEQN', 'ALQ101', 'ALQ110', 'ALQ130', 'SMQ020', 'RIAGENDR', 'RIDAGEYR', 'RIDRETH1', 'DMDCITZN', 'DMDEDUC2', 'DMDMARTL', 'DMDHHSIZ', 'WTINT2YR', 'SDMVPSU', 'SDMVSTRA', 'INDFMPIR', 'BPXSY1', 'BPXDI1', 'BPXSY2', 'BPXDI2', 'BMXWT', 'BMXHT', 'BMXBMI', 'BMXLEG', 'BMXARML', 'BMXARMC', 'BMXWAIST', 'HIQ210', 'DMDEDUC2x', 'DMDMARTLx'], dtype='object') The names of the columns might look strange. But do not worry about that. I will keep explaining the meaning of the columns as we go. And we will not use all the columns. We will use some of them to practice these plots. Histogram I will use the weight of the population to make a basic histogram. df['BMXWT'].hist() As a reminder, a histogram shows the frequency distribution. The picture above shows that about 1825 people have a weight of around 75. Most people's weight falls in the range of 49 to 99. What if I want to put several histograms in one plot? I will make three histograms in one plot using weight, height, and body mass index (BMI). df[['BMXWT', 'BMXHT', 'BMXBMI']].plot.hist(stacked=True, bins=20, fontsize=12, figsize=(10, 8)) But if you want three separate histograms, that is also possible using just one line of code, like this: df[['BMXWT', 'BMXHT', 'BMXBMI']].hist(bins=20,figsize=(10, 8)) It can be even more dynamic! We have systolic blood pressure data in the 'BPXSY1' column and the level of education in the 'DMDEDUC2' column. If we want to examine the distribution of systolic blood pressure for each education level, that can also be done in just one line of code. But before doing that, I want to replace the numeric values of the 'DMDEDUC2' column with more meaningful string values: df["DMDEDUC2x"] = df.DMDEDUC2.replace({1: "less than 9", 2: "9-11", 3: "HS/GED", 4: "Some college/AA", 5: "College", 7: "Refused", 9: "Don't know"}) Make the histograms now, df[['DMDEDUC2x', 'BPXSY1']].hist(by='DMDEDUC2x', figsize=(18, 12)) Look! We have the distribution of systolic blood pressure levels for each education level in just one line of code! Bar Plot Now let's see how systolic blood pressure changes with marital status. This time I will make a bar plot. Like before, I will replace the numeric values of the 'DMDMARTL' column with more meaningful strings. df["DMDMARTLx"] = df.DMDMARTL.replace({1: "Married", 2: "Widowed", 3: "Divorced", 4: "Separated", 5: "Never married", 6: "Living w/partner", 77: "Refused"}) To make the bar plot we need to preprocess the data, that is, group the data by the different marital statuses and take the mean of each group. Here I process the data and make the plot in the same line of code. df.groupby('DMDMARTLx')['BPXSY1'].mean().plot(kind='bar', rot=45, fontsize=10, figsize=(8, 6)) Here we used the 'rot' parameter to rotate the x ticks 45 degrees. Otherwise, they would be too cluttered. If you like, you can make it horizontal as well, df.groupby('DMDEDUC2x')['BPXSY1'].mean().plot(kind='barh', rot=45, fontsize=10, figsize=(8, 6)) I want to make a bar plot with multiple variables. We have a column that contains the ethnic origin of the population. It will be interesting to see if people's weight, height, and body mass index change with ethnic origin.
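If you do not have the NHANES file handy, you can still try the histogram and grouped bar-plot calls above on a synthetic stand-in. In the sketch below the values are random, and only the column names (BMXWT, BPXSY1, DMDMARTLx) are borrowed from the real file, so treat it purely as a way to exercise the same pandas calls; with the real dataset the identical lines work unchanged.

import numpy as np
import pandas as pd
import matplotlib.pyplot as plt

# Synthetic stand-in: random values, NHANES-style column names.
rng = np.random.default_rng(0)
fake = pd.DataFrame({
    'BMXWT': rng.normal(80, 15, 500),      # body weight
    'BPXSY1': rng.normal(125, 18, 500),    # systolic blood pressure
    'DMDMARTLx': rng.choice(['Married', 'Widowed', 'Divorced', 'Never married'], 500),
})

# Histogram of weight.
fake['BMXWT'].hist(bins=20)
plt.show()

# Mean systolic blood pressure per marital status, as a bar plot.
fake.groupby('DMDMARTLx')['BPXSY1'].mean().plot(kind='bar', rot=45, figsize=(8, 6))
plt.show()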
To plot the relationship between ethnic origin and those three body measurements, we need to group the three columns (weight, height, and body mass index) by ethnic origin and take the mean. df_bmx = df.groupby('RIDRETH1')[['BMXWT', 'BMXHT', 'BMXBMI']].mean().reset_index() This time I did not change the ethnic origin data. I kept the numeric values as they are. Let's make our bar plot now, df_bmx.plot(x = 'RIDRETH1', y=['BMXWT', 'BMXHT', 'BMXBMI'], kind = 'bar', color = ['lightblue', 'red', 'yellow'], fontsize=10) Looks like ethnic group 4 is a little higher than the rest of them. But they are all very close. No significant difference. We can stack the different parameters (weight, height, and body mass index) on top of each other as well. df_bmx.plot(x = 'RIDRETH1', y=['BMXWT', 'BMXHT', 'BMXBMI'], kind = 'bar', stacked=True, color = ['lightblue', 'red', 'yellow'], fontsize=10) Pie Plot Here I want to check if marital status and education have any relationship. I need to group the marital statuses by education level and count the population in each group. Sounds too wordy, right? Let's see it: df_edu_marit = df.groupby('DMDEDUC2x')['DMDMARTL'].count() pd.Series(df_edu_marit) Using this Series, we can make a pie plot very easily: ax = pd.Series(df_edu_marit).plot.pie(subplots=True, label='', labels = ['College Education', 'high school', 'less than high school', 'Some college', 'HS/GED', 'Unknown'], figsize = (8, 6), colors = ['lightgreen', 'violet', 'coral', 'skyblue', 'yellow', 'purple'], autopct = '%.2f') Here I added a few style parameters. Please feel free to try more style parameters. Boxplot For example, I will make a box plot using body mass index, leg, and arm length data. color = {'boxes': 'DarkBlue', 'whiskers': 'coral', 'medians': 'Black', 'caps': 'Green'} df[['BMXBMI', 'BMXLEG', 'BMXARML']].plot.box(figsize=(8, 6),color=color) Scatter Plot For a simple scatter plot, I want to see if there is any relationship between body mass index ('BMXBMI') and systolic blood pressure ('BPXSY1'). df.head(300).plot(x='BMXBMI', y= 'BPXSY1', kind = 'scatter') This was so simple! I used only 300 rows of data because if I use all the data, the scatter plot becomes too dense to understand. Though you could use the alpha parameter to make it translucent, I preferred to keep it light for this tutorial. Now, let's check a slightly more advanced scatter plot with the same single line of code. This time I will add some color shades. I will make a scatter plot, putting weight on the x-axis and height on the y-axis. There is a little twist! I will also add the length of the leg, shown as shades: the longer the leg, the darker the shade. df.head(500).plot.scatter(x= 'BMXWT', y = 'BMXHT', c ='BMXLEG', s=50, figsize=(8, 6)) It shows the relationship between weight and height. You can also see whether there is any relationship between leg length and height or weight. Another way of adding a third parameter is to vary the size of the markers. Here, I am putting height on the x-axis, weight on the y-axis, and body mass index as an indicator of the bubble size. df.head(200).plot.scatter(x= 'BMXHT', y = 'BMXWT', s =df['BMXBMI'][:200] * 7, alpha=0.5, color='purple', figsize=(8, 6)) Here the smaller dots mean lower BMI and the bigger dots mean higher BMI. Hexbin This is another beautiful type of visualization, where the dots are hexagonal. When the data is too dense, it is useful to put it in bins.
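To see the binning idea in isolation before the NHANES hexbin examples that follow, here is a tiny sketch on purely synthetic data; the column names and values are invented, and the only point is how a dense cloud of points collapses into colored hexagonal bins.

import numpy as np
import pandas as pd
import matplotlib.pyplot as plt

# Random, correlated points just to produce a dense cloud.
rng = np.random.default_rng(1)
x = rng.normal(0, 1, 5000)
y = 0.6 * x + rng.normal(0, 1, 5000)
dense = pd.DataFrame({'x': x, 'y': y})

# Each hexagon's color reflects how many points fall into it.
dense.plot.hexbin(x='x', y='y', gridsize=25, figsize=(8, 6))
plt.show()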
As you can see, in the previous two plots I used only 500 and 200 rows, because if I put all the data in the plot it becomes too dense to understand or to draw any information from. In this case, using a spatial distribution can be very useful. I am using hexbin, where the data is represented in hexagons. Each hexagon is a bin, and its color represents the density of the data in that bin. Here is an example of the most basic hexbin. df.plot.hexbin(x='BMXARMC', y='BMXLEG', gridsize= 20) Here the darker color represents a higher density of data and the lighter color represents a lower density. Does it sound like a histogram? Yes, right? Instead of bars, the counts are represented by colors. If we add an extra parameter 'C', the picture changes: it will not behave like a histogram anymore. The parameter 'C' specifies the value at each (x, y) coordinate; these values are accumulated for each hexagonal bin and then reduced using reduce_C_function. If reduce_C_function is not specified, it uses np.mean by default. You can specify it the way you want: np.mean, np.max, np.sum, np.std, etc. Look at the documentation for more information here Here is an example: df.plot.hexbin(x='BMXARMC', y='BMXLEG', C = 'BMXHT', reduce_C_function=np.max, gridsize=15, figsize=(8,6)) Here a darker hexagon means that np.max of the population height ('BMXHT') is higher for that bin, since I used np.max as the reduce_C_function. You can use a colormap instead of shades of a single color: df.plot.hexbin(x='BMXARMC', y='BMXLEG', C = 'BMXHT', reduce_C_function=np.max, gridsize=15, figsize=(8,6), cmap = 'viridis') Looks pretty, right? And also very informative. Some Advanced Visualization I explained some basic plotting above that people use in everyday work with data. But data scientists need some more. The Pandas library has some more advanced visualizations as well, which can provide a lot more information in one line of code. Scatter_matrix Scatter_matrix is very useful. It provides a huge amount of information packed in one plot, and it can be used for general data analysis or for feature engineering in machine learning. Let's see an example first; I will explain after that. from pandas.plotting import scatter_matrix scatter_matrix(df[['BMXWT', 'BMXHT', 'BMXBMI', 'BMXLEG', 'BMXARML']], alpha = 0.2, figsize=(10, 8), diagonal = 'kde') Look at that! I used five features here and got the relationship of all five variables with each other. On the diagonal, it gives the density plot of each individual feature. We discuss density plots more in the next example. KDE or density plots KDE plots, or Kernel Density Estimate plots, show the probability distribution of a series or a column in a DataFrame. Let's see the probability distribution of the weight variable ('BMXWT'). df['BMXWT'].plot.kde() You can visualize several probability distributions in one plot. Here I am making the probability distributions of height, weight, and BMI in the same plot: df[['BMXWT', 'BMXHT', 'BMXBMI']].plot.kde(figsize = (8, 6)) You can use the other style parameters we described before as well; I like to keep it simple. Parallel_coordinates This is a good way of showing multi-dimensional data. It clearly shows the clusters, if there are any. For example, I want to see if there is any difference in height, weight, and BMI between men and women. Let's check.
from pandas.plotting import parallel_coordinates parallel_coordinates(df[['BMXWT', 'BMXHT', 'BMXBMI', 'RIAGENDR']].dropna().head(200), 'RIAGENDR', color=['blue', 'violet']) You can see the clear difference in body weight, height, and BMI between men and women. Here, 1 is men and 2 is women. Bootstrap_plot This is a very important plot for research and statistical analysis, and it can save a lot of time. The bootstrap plot is used to assess the uncertainty of a given dataset. This function takes a random sample of a specified size. Then the mean, median, and midrange are calculated for that sample. This process is repeated a specified number of times. Here I am creating a bootstrap plot using the BMI data: from pandas.plotting import bootstrap_plot bootstrap_plot(df['BMXBMI'], size=100, samples=1000, color='skyblue') Here, the sample size is 100 and the number of samples is 1000. So, it took a random sample of 100 data points to calculate the mean, median, and midrange, and the process is repeated 1000 times. This is an extremely useful technique and a time saver for statisticians and researchers. Conclusion I wanted to make a cheat sheet for data visualization in Pandas. If you use matplotlib and seaborn directly, there are many more options and types of visualization, but these basic types cover most of the everyday work with data. Using pandas for this visualization will make your code much simpler and save a lot of lines of code. Feel free to follow me on Twitter and like my Facebook page. More Reading:
https://towardsdatascience.com/an-ultimate-cheat-sheet-for-data-visualization-in-pandas-4010e1b16b5c
['Rashida Nasrin Sucky']
2020-11-13 14:49:54.337000+00:00
['Artificial Intelligence', 'Data Science', 'Python', 'Data Visualization', 'Programming']
Why React Hooks Are the Wrong Abstraction
Hooks Problem #1: Attached During Render As a general rule of design, I’ve found that we should always first try to disallow our users from making mistakes. Only if we’re unable to prevent the user from making a mistake should we then inform them of the mistake after they’ve made it. For example, when allowing a user to enter a quantity in an input field, we could allow them to enter alphanumeric characters and then show them an error message if we find an alphabetic character in their input. However, we could provide better UX if we only allowed them to enter numeric characters in the field, which would eliminate the need to check whether they have included alphabetic characters. React behaves quite similarly. If we think about Hooks conceptually, they are static through the lifetime of a component. By this, I mean that once declared, we cannot remove them from a component or change their position in relation to other Hooks. React uses lint rules and will throw errors to try to prevent developers from violating this detail of Hooks. In this sense, React allows the developer to make mistakes and then tries to warn the user of their mistakes afterward. To see what I mean, consider the following example: This produces an error on the second render when the counter is incremented because the component will remove the second useState hook: Error: Rendered fewer hooks than expected. This may be caused by an accidental early return statement. The placement of our Hooks during a component’s first render determines where the Hooks must be found by React on every subsequent render. Given that Hooks are static through the lifetime of a component, wouldn’t it make more sense for us to declare them on component construction as opposed to during the render phase? If we attach Hooks during the construction of a component, we no longer need to worry about enforcing the rules of Hooks because a hook would never be given another chance to change positions or be removed during the lifetime of a component. Unfortunately, function components were given no concept of a constructor, but let’s pretend that they were. I imagine that it would look something like the following: By attaching our Hooks to the component in a constructor, we wouldn’t have to worry about them shifting during re-renders. If you’re thinking, “You can’t just move Hooks to a constructor. They need to run on every render to grab the latest value” at this point, then you’re totally correct! We can’t just move Hooks out of the render function because we will break them. That’s why we’ll have to replace them with something else. But first, the second major problem of Hooks.
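For reference, here is a minimal sketch of the kind of component the "consider the following example" passage above describes. This is my reconstruction, not the author's original snippet: a second useState call is only reached while count is zero, so the hook disappears from the call order on the second render and React throws "Rendered fewer hooks than expected."

import React, { useState } from "react";

function Counter() {
  const [count, setCount] = useState(0);

  // Deliberately breaking the rules of Hooks: this hook is only called
  // while count === 0, so after the first click React sees fewer hooks
  // than it did on the initial render and throws.
  if (count === 0) {
    useState("You haven't clicked yet");
  }

  return (
    <button onClick={() => setCount(count + 1)}>
      Clicked {count} times
    </button>
  );
}

export default Counter;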
https://medium.com/better-programming/why-react-hooks-are-the-wrong-abstraction-8a44437747c1
['Austin Malerba']
2020-12-14 18:42:43.966000+00:00
['Programming', 'React Hook', 'JavaScript', 'React', 'Software Engineering']
Why We Find Patterns in Randomness
Why We Find Patterns in Randomness It helps us survive, and we naturally evolved as humans to do so From Free Nature Stock on Pexels I see patterns in randomness all the time, even if sometimes, a cigar is just a cigar, not a phallic symbol. For me, everything going wrong on a given morning is a sign of God wanting me to be challenged that morning. The run that went poorly was a sign of God's plan that running wasn't for me that day. The fact that I'm not motivated to do my work means God wants me to do something else at a given moment. The gambler's fallacy is the tendency of someone who places bets to look for patterns to take advantage of. However, those patterns tend not to actually be there, so according to the American physicist Richard A. Muller, "the roulette wheel really is random — at least at an honest casino." Each spin of the roulette wheel is inherently random, and in the world of gambling, streaks don't empirically exist. The gambler's fallacy is also called the Monte Carlo fallacy and the fallacy of the maturity of chances, and it is widely cited as a reason why people gamble as much as they do. There's actually a psychological term for the tendency to ascribe patterns to randomness: apophenia. Psychiatrist Klaus Conrad defined apophenia as the "unmotivated seeing of connections [accompanied by] a specific feeling of abnormal meaningfulness." Within the sphere of apophenia is pareidolia, the perception of images and sounds in random stimuli, like seeing a face in an inanimate object, or seeing the face of Jesus on a tree stump. Dr. Po Chi Wu in Psychology Today asks, then, whether life is just a series of random events. He talks about various levels of consciousness. Our tendency towards apophenia doesn't mean everything is random, as much as I might believe. But does a lack of randomness mean there are patterns? And who created those patterns if they exist? Are those patterns comprehensible for humans? For me, my faith dictates that I will never know God's plan or ways. Job acknowledges in Job 42:2 the power of God: "I know that you can do all things and that no purpose of yours can be thwarted." Vox made a YouTube video asking why we so often find faces in purses, or why we experience pareidolia. We are fascinated by faces of Jesus in tree trunks. Pareidolia, in Greek, means "beyond the image." It came to be considered a form of psychosis after psychologists started testing for pareidolia with the Rorschach test. But why are our brains constantly trying to make patterns out of things where there intrinsically aren't patterns? For example, why do we constantly see patterns in the stars? The stars did not align to look like a dipper or a bear, so why did we give them those names and try to make sense of the universe's nonsense? A 2009 study from Hadjikhani et al. in NeuroReport found that the fusiform face area became activated when people saw faces in non-face objects. Using magnetoencephalography (MEG), the researchers found that people who perceived faces in objects had early activation in the fusiform face area, faster than for perceptions of common objects, and only slower than for the perception of actual faces: "Our findings suggest that face perception evoked by face-like objects is a relatively early process, and not a late re-interpretation cognitive phenomenon," Hadjikhani et al. said. Wu asks the grand question at the end of the day: how much of our future can we actually influence? I believe that my success, when I have it, is a series of lucky coincidences. I work hard.
I'm passionate. But a million things had to break my way, like growing up with both parents, growing up in America, having a roof over my head, and having food on my plate. He talks a lot about entrepreneurs, who often describe success as being in the right place at the right time. He says that they tend to have a "single-minded focus" for good timing and, as a result, interpret random events as meaningful. In that state of single-mindedness, is there a liberation in not overthinking things and just doing? Is there a benefit to ascribing so much meaning to randomness?
https://medium.com/publishous/why-we-find-patterns-in-randomness-d4913e85814d
['Ryan Fan']
2020-09-22 15:18:53.871000+00:00
['Neuroscience', 'Life Lessons', 'Psychology', 'Spirituality', 'Philosophy']
Video Streaming Using Flask and OpenCV
Stream video using OpenCV and Flask. (Image Source) My name is Anmol Behl and I am a member of team Bits-N-Bytes. We are a group of three team members pursuing B.Tech in Computer Science and Engineering from KIET Group of Institutions, Ghaziabad. This article explains how to stream video using Flask and OpenCV, taking face detection as an example. Nowadays, machine learning is becoming a highly popular field. Due to daily needs and the rapid pace of development, there is a growing need to deploy machine learning models, but it is often difficult or impossible to deploy them on mobile devices. One option is to use mobile machine learning frameworks such as TensorFlow Lite to call pre-trained models. Are there any easier options? Yes! With 5G approaching, it will take only about 0.01 seconds to upload a 100KB image at a speed of about 100Mbps, so we can deploy almost everything, including face recognition, as a service on the server side. Considering face detection as an example, this article will demonstrate how to stream video from Linux servers using Python, Flask, and OpenCV. Let's Begin… STEP 1: Creating and activating Environment We will create a virtual environment for the project. The virtualenv package is required to create virtual environments. You can install it with pip: $ pip install virtualenv To create a virtual environment, you must specify a path. We are creating our environment's local directory called 'Videorecognition' in the home folder; to create the folder, type the following: $ virtualenv Videorecognition To activate the environment, execute the following command: $ source Videorecognition/bin/activate STEP 2: Installing Flask and OpenCV First, we need to refresh and upgrade the pre-installed packages/libraries with the apt-get package manager: $ sudo apt-get update $ sudo apt-get upgrade Now we will execute the following commands to install OpenCV and Flask: $ sudo apt install python3-opencv $ pip install Flask STEP 3: Creating Project Structure Now that all the prerequisites are installed, let's set up our project: ├── VideoStreaming/ │ ├── camera.py │ ├── main.py │ ├── haarcascade_frontalface_alt2.xml │ ├── templates/ │ │ ├── index.html STEP 4: Detecting faces using OpenCV Now that we have created our project structure, we will detect faces using OpenCV. We will use the most basic and easiest approach, i.e., Haar cascades.

# camera.py
# import the necessary packages
import cv2

# defining the face detector
face_cascade = cv2.CascadeClassifier("haarcascade_frontalface_alt2.xml")
ds_factor = 0.6

class VideoCamera(object):
    def __init__(self):
        # capturing video
        self.video = cv2.VideoCapture(0)

    def __del__(self):
        # releasing the camera
        self.video.release()

    def get_frame(self):
        # extracting frames
        ret, frame = self.video.read()
        frame = cv2.resize(frame, None, fx=ds_factor, fy=ds_factor,
                           interpolation=cv2.INTER_AREA)
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        face_rects = face_cascade.detectMultiScale(gray, 1.3, 5)
        for (x, y, w, h) in face_rects:
            cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
            break
        # encode the OpenCV raw frame to jpg and return it as bytes
        ret, jpeg = cv2.imencode('.jpg', frame)
        return jpeg.tobytes()

STEP 5: Creating a webpage for displaying video Now we will create a webpage to display our video.
<!-- index.html --> <html> <head> <title>Video Streaming Demonstration</title> </head> <body> <h1>Video Streaming Demonstration</h1> <img id="bg" src="{{ url_for('video_feed') }}"> </body> </html> STEP 6: Creating Streaming Server Now that we have detected faces using the Haar cascade and created a webpage to display the video, we will integrate these two modules with our server.

# main.py
# import the necessary packages
from flask import Flask, render_template, Response
from camera import VideoCamera

app = Flask(__name__)

@app.route('/')
def index():
    # rendering the webpage
    return render_template('index.html')

def gen(camera):
    while True:
        # get a camera frame and yield it as one part of the multipart response
        frame = camera.get_frame()
        yield (b'--frame\r\n'
               b'Content-Type: image/jpeg\r\n\r\n' + frame + b'\r\n\r\n')

@app.route('/video_feed')
def video_feed():
    return Response(gen(VideoCamera()),
                    mimetype='multipart/x-mixed-replace; boundary=frame')

if __name__ == '__main__':
    # defining the server ip address and port
    app.run(host='0.0.0.0', port=5000, debug=True)

STEP 7: Starting and Accessing the Server Open a terminal window in the project directory by executing the following commands: $ cd Videorecognition $ cd VideoStreaming To start the server, execute the following command: python main.py To access the server, open a browser and navigate to the server URL (for example, http://localhost:5000): Conclusion I hope this tutorial can help others find their way of beginning with Flask and OpenCV! For details and the final code, please visit my GitHub repository: Video-Streaming-with-Flask See you in my next tutorial! Thank you, Anmol
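Once the server is running, one quick way to sanity-check the /video_feed endpoint from another terminal is a short requests script. This snippet is my own addition rather than part of the original tutorial, and it assumes the default host and port from main.py:

import requests

# open the MJPEG stream without downloading it all at once
resp = requests.get('http://localhost:5000/video_feed', stream=True, timeout=5)
print(resp.status_code)                    # expect 200
print(resp.headers.get('Content-Type'))    # expect multipart/x-mixed-replace; boundary=frame

# read a few chunks and look for a JPEG start-of-image marker
for i, chunk in enumerate(resp.iter_content(chunk_size=1024)):
    if b'\xff\xd8' in chunk:
        print('received a JPEG frame')
        break
    if i > 50:
        break
resp.close()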
https://medium.com/datadriveninvestor/video-streaming-using-flask-and-opencv-c464bf8473d6
['Anmol Behl']
2020-08-31 16:42:14.333000+00:00
['Machine Learning', 'Flask', 'Computer Vision', 'Opencv', 'Python']
From Utopia to Reality: Marketing and the Big Data Revolution
Perfect information. First Degree Price Discrimination. You might know these terms from high school economics defining utopian conditions in the marketplace. One involves having instantaneous knowledge of all market prices, utilities, and cost functions. The other involves selling each customer a good as per his individual willingness to pay. Read More
https://medium.com/data-analytics-and-ai/from-utopia-to-reality-marketing-and-the-big-data-revolution-7e1b6514a060
['Ella William']
2019-06-15 13:20:59.219000+00:00
['Big Data', 'Marketing', 'Analytics', 'Data Visualization', 'Data Science']
The Mind-Body Connection Is Stronger Than You Think
‘Biofeedback’ Could Ease Headaches, Anxiety — and Maybe a Lot Else By providing a window onto the body’s inner workings, biofeedback could help people control what was once thought to be ungovernable The man’s boasts were outlandish. Wim Hof, a 51-year-old Dutch endurance athlete, claimed that he could voluntarily control his own immune system — ramping its activity up or down at will. Moreover, he said that he could teach this skill to others. Hof’s assertions might never have been put to the test but for the fact that he held several remarkable world records — including one for the longest time spent submerged neck-deep in an ice bath, and another for the fastest half marathon run barefoot on snow. His accomplishments earned him the nickname “the Iceman” and generated enough public interest — at least in the Netherlands — that scientists decided to investigate his claims. For a 2012 study that appeared in the journal Psychosomatic Medicine, a team of Dutch researchers injected Hof with a toxin from E. coli bacteria. In past experiments, the toxin had reliably caused nausea and other symptoms and had also produced steep elevations in blood biomarkers of inflammation. After receiving the injection, Hof reported almost no symptoms, and his blood samples revealed “a remarkably mild inflammatory response,” the study authors reported. In 2014, a follow-up study repeated the experiment using a group of 12 people who had trained using Hof’s methods. Blood samples indicated that, like Hof, they were able to exert some control over their body’s inflammatory response, which is a component of the immune system usually thought to be wholly involuntary and that, when overactive, can cause or contribute to a wide range of health problems. “Hof had effectively found the off-switch for his immune system,” says Scott Carney, an investigative journalist who details his own experience with Hof’s techniques in the 2017 bestseller What Doesn’t Kill Us. “The [Hof] studies showed that something that should be impossible to do was possible.” What does Hof’s method entail? Carney says it combines meditation and breathing exercises with cold exposures, such as a frigid shower. During these cold-exposure intervals, which Carney duly undertook, he would concentrate on speeding up his own metabolism in order to generate body heat and stop himself from shaking or shivering. Through this and related exercises, “you start to sort of gain control of internal bodily processes that we don’t generally try to access,” he says. “I’ve seen people use this for remission of Crohn’s, arthritis — just crazy things.” It’s possible that biofeedback, by giving people a real-time look at the internal workings of their nervous system, can facilitate these self-calming practices. While all of this may sound wild and far-fetched, some of Hof’s practices dovetail with a long-studied form of therapy known as biofeedback. “As the word suggests, biofeedback involves feeding back to the patient their own bio-signals, such as blood pressure or heart rate variability or muscle tension — anything that the body is displaying as a result of activity of the autonomic nervous system,” says Stefan Hofmann, PhD, a professor of psychology at Boston University. (Hofmann is not related to Hof.) The autonomic nervous system (ANS) is so named because its processes are largely automatic — meaning involuntary and unconscious. The ANS plays a role in breathing, heart rate, digestion, thermoregulation, and a lot else. 
While Hof’s unconventional methods use the body’s response to cold as its source of biofeedback, this therapy has traditionally used specialized sensors to produce real-time measures of a person’s heart rate variability or other ANS-generated internal signals. “By feeding these signals back to people, they can, to some extent, gain control over them,” Hofmann explains. The research on biofeedback Researchers have been exploring biofeedback since the 1960s. Some of the strongest work in support of its therapeutic power has examined its effect among people with headaches. A 2019 review from the U.S. Department of Veterans Affairs found “high-confidence evidence that biofeedback is effective for reducing the frequency, duration, and intensity of migraine and tension-type headaches.” While biofeedback techniques vary, many headache studies have involved hooking people up to electromyogram (EMG) sensors that measure electrical activity in the skin and muscles, which ebbs and flows in response to headache pain. This EMG data is “fed back” to a person as sounds, images, or both. For example, as a person’s headache worsens, a computer connected to the EMG may display a colored circle that contracts or grows red. “You might focus on widening the circle or changing its color,” Hofmann says. By doing this, people often find that they’re able to turn down their headache’s intensity. Apart from its role in headache management, EMG biofeedback has helped people recover muscle function and mobility following a stroke. The technique can also treat incontinence, blood flow problems, and — as the Hof research suggests — maybe even inflammation and symptoms of some autoimmune disorders. A 2019 study from researchers at UCLA found that virtual reality–based biofeedback — using VR headsets to show people visual representations of their own breathing patterns — helped to reduce pain among people with rheumatoid arthritis and lupus. Researchers have also found that biofeedback, perhaps by increasing activity in the vagus nerve, may have some anti-inflammatory effects. More recently, scientists have started looking at biofeedback as a treatment for anxiety disorders. A 2018 study in the journal Stress found that biofeedback lowered symptoms of burnout and improved math performance among a group of college students. Boston University’s Hofmann co-authored a 2017 research review that found biofeedback based on heart rate variability — a measure of the time between heart beats — could produce a large drop in self-reported stress and anxiety. “There has been a long-standing view among behavioral scientists that if you can feel and perceive something, then you may be able to control it.” How does biofeedback do all this? “That’s still a bit of a mystery,” Hofmann says. But there are theories. He explains that with some concentration and practice, people have the ability to regulate their own breathing or muscle tension, which can have a calming effect on heart rate, blood pressure, and other elements of the autonomic nervous system that are associated with unhealthy states of arousal. It’s possible that biofeedback, by giving people a real-time look at the internal workings of their nervous system, can facilitate these self-calming practices. “There has been a long-standing view among behavioral scientists that if you can feel and perceive something, then you may be able to control it,” adds Hugo Critchley, MD, PhD, chair of psychiatry at the University of Sussex in the U.K. 
One of Critchley’s primary areas of research concerns the ways in which the mind, brain, and body interact during states of arousal. He points out that, somewhat surprisingly, research on experienced Buddhist meditators has found that they are no better than non-meditators when it comes to detecting their own heartbeat. By giving people a sharper view of their heartbeat and other internal states, biofeedback may help people take the wheel of processes or functions that were long presumed to be ungovernable, Critchley says. Bringing biofeedback out of the lab In the past, technologies capable of providing people with accurate, real-time biofeedback tended to be expensive and cumbersome. But that’s changing. “There’s a revolution going on with high-tech companies measuring bio-signals, and biofeedback will have a role in that,” Hofmann says. Some companies are planning to roll out VR headset programs that work with the Apple Watch and other wearable heart rate monitors to provide helpful biofeedback measures. And already there are a number of commercial wearables that are capable of measuring heart rate variability. (This is different from a simple heart rate monitor, which Hofmann says is not a helpful biofeedback output.) A lot more research is needed to clarify biofeedback’s therapeutic uses and mechanisms of action. It could turn out that coupling biofeedback with meditation or muscle-relaxation techniques could enhance these practices’ well-studied health benefits. It’s also possible — though far from certain — that biofeedback could help people turn down harmful inflammation or other arousal-linked internal states. “If you’re able to be more aware of what’s going on in your body, that can give you better control of arousal,” Critchley says. “And a lot of arousal is inappropriate.”
https://elemental.medium.com/biofeedback-could-ease-headaches-anxiety-and-maybe-a-lot-else-630f3c51f99f
['Markham Heid']
2020-09-17 05:31:01.485000+00:00
['Health', 'Biology', 'Anxiety', 'The Nuance', 'Science']
Python and Bokeh: Part II. The beginner’s guide to creating…
Photo by Ronit Shaked on Unsplash The beginner’s guide to creating interactive dashboards: Bokeh server and applications. This is the second part of our tutorial series on Bokeh visualization library. In this part of the series, we will explore Bokeh application and how to serve them using Bokeh server. Please, refer to the first part of these series to cover the basics of building scatter, line, and bar plots, and learn how to apply styling and embed plots in web pages. All the examples in Part I of the series displayed simple embedded visualizations with only the basic interactivity. However, we often need to create full-fledged applications to properly visualize the data, as real-world visualizations usually require not only interactions with a user via buttons, dropdown menus, and other elements, but also dynamic data updates. Bokeh allows this, as plots can be not only embedded, but also combined into large and elaborate web applications, with the built-in Bokeh server. It handles data and plot updates between backend server in Python and client-side Bokeh JavaScript library, which is responsible for actual drawing in a browser. Anatomy of a Bokeh application Bokeh defines a set of abstractions for storing and transporting objects between backend and frontend. The main and most important one is the document. A Bokeh document is a container, which incorporates all the elements, including plots, widgets and interactions. Each document contains one or more models. In the context of Bokeh, models are plots , axes , tools or any other visual or non-visual element, which is drawn or used to draw something else on the user screen. Bokeh backend serializes documents to JSON which are sent and displayed by the BokehJS client. In turn, Bokeh application is an entity, which creates and populates documents with updates on the backend with all the necessary configuration to properly transfer data between the server and client. Although Bokeh applications are written in Python and handled by the server, it is also possible to add custom JavaScript functionality on a client side, with JavaScript callback or otherwise. This is a very powerful tool, but we will use it only occasionally, and mostly for styling. Bokeh JS client can also handle user actions. Without further ado let’s proceed to actual coding. We will start by exploring the main building blocks of Bokeh apps and leverage that knowledge to create a truly interactive dashboard for real-world data visualizations with all the interactivity and data facilities we may need. Running Bokeh server Let’s create a simple application, which plots some random data. The most basic way to do so is to use a single Python script, which takes an empty document and fills it with models: Bokeh exposes curdoc() function, which provides a handle for the current default document. Although you can create and populate documents manually, with all the flexibility possible, curdoc() is the most straightforward way to get a document and start working with it, while not compromising on flexibility too much. After that, we can use figure() to create a model, which is a Figure instance in this case: Although we already specified figure size, this model is still empty and is not attached to the document. Let’s plot some random values on a newly created figure and attach it to the document we have: Each document has one or more root elements. Root elements are the direct children of a document and, as we will see later, they can be directly referenced elsewhere. 
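A minimal sketch of the bare-bones app described so far, as my reconstruction rather than the author's exact listing; the figure size and the random data are my own choices:

import numpy as np
from bokeh.plotting import figure, curdoc

bokeh_doc = curdoc()                       # handle to the current document

sample_plot = figure(plot_height=400, plot_width=400)
sample_plot.circle(x=np.random.normal(size=100),   # some random values to plot
                   y=np.random.normal(size=100))

bokeh_doc.add_root(sample_plot)            # attach the figure as a root element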
As simple as it looks, we have our first Bokeh application. Just a final touch: we will define a page title to be displayed in the browser tab with bokeh_doc.title = "Sample Bokeh App" and we're ready to pass it to Bokeh server: As you might have already figured out this command bokeh will run a server and manage backend-related tasks, like downloading data sets. run subcommand launches a server, while --show option indicates, which app should be open in a browser window. The complete code for this application: Basic Bokeh application Under the hood, Bokeh server performed a lot of tasks: it added a figure model to the document, serialized the model and sent it to the client browser session. All of that happened without our intervention at all. Widgets and callbacks Now we have a working app which displays a static scatter plot. Apparently, we have covered static plots in Part I, so what’s the difference? It’s simple: Bokeh server provides a rich functionality to make plots and other document elements actually dynamic and interactive. This is achieved by a system of callbacks and attribute updates, which are handled by Bokeh server. For example, to update some plot with new data, we only need to add that to a corresponding data source. No need to send updates to a client session, Bokeh with perform this for us. Callbacks and various data source update mechanisms are the main building blocks of Bokeh interactivity, so let’s explore them. Periodic callbacks The main and, probably, the most common type of callback used in dynamic dashboards, is the periodic callback. After being registered to a document, it will be fired by Bokeh server on specified time intervals. Periodic callbacks are most commonly used to fetch new data and update plots. Although it’s better to perform long-running I/O operations outside main thread, we will not bother about it in this tutorial, as this is a very general Python topic, not specific to Bokeh. To illustrate, how periodic callbacks are used, let’s create a simple application with an empty figure in it: To add data to the figure, we use a periodic callback, which draws random numbers at each run: The method bokeh_doc.add_periodic_callback notifies Bokeh server, that add_circles function must be fired every second, or once per 1000 milliseconds. Note, that each callback will add new renderer to the plot, so after a while, we will have a bunch of independent glyphs in our app. While it is fine for the sake of this presentation, more efficient scenarios will require ColumnDataSource functionality (to be introduced later in this tutorial). As previously, we launch the app: If everything works as expected, the app will add a circle to the plot every second. The complete code for this app is as follows: Application with a periodic callback Widgets and attributes callbacks Periodic callbacks help to make Bokeh applications dynamic, but we also want to make our plots responsive to user interactions, like clicks and selections. The typical mechanism of interaction is when the user engages with some element or visual component, like a Button, Dropdown menu, Slider, etc., to change some aspect of plotting. For example, a user may select a category from a dropdown menu to filter the data for display, based on the selected category. Or, we may want to turn on or off periodic callback on a button click (yes, these callbacks may be added or removed dynamically). Both types of interactions basically work the same. 
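Before looking at how widget and attribute callbacks are wired up, here is a compact sketch of the periodic-callback app just described. The name add_circles and the one-second interval follow the text above; the figure size and number of points are my own assumptions:

import numpy as np
from bokeh.plotting import figure, curdoc

bokeh_doc = curdoc()
sample_plot = figure(plot_height=400, plot_width=400)

def add_circles():
    # each run adds a new renderer with a handful of random points
    sample_plot.circle(x=np.random.normal(size=5),
                       y=np.random.normal(size=5))

bokeh_doc.add_root(sample_plot)
bokeh_doc.add_periodic_callback(add_circles, 1000)   # fire once per 1000 ms
bokeh_doc.title = "Sample Bokeh App"

With that in place, back to widget-driven interactions.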
User interaction typically causes some changes in a model or models attributes. For example, slider value is bound to value attribute, and as a user interacts with a slider, the attribute changes correspondingly. Bokeh allows to bind a callback to these attribute changes and the way to register such callback is through on_change method, which is exposed by various models. The callback function must have a specific signature, as you will see in a moment. This mechanism is extremely powerful and enables custom handling of virtually any changes in a Bokeh application. We will extend our app from previous section: Bokeh provides predefined widgets, like the Button, which can be used immediately in an application: We use a predefined style called success for a green color button (very similar to the Bootstrap CSS framework) with Generate label on it. So far the app does nothing: no data is plotted to the sample_plot , the button is not responsive to a click, and it is not even added to the document. Buttons in Bokeh expose a simpler callback mechanism to handle clicks: callback function should have no arguments and is registered with on_click method. You certainly can use any Python functional tool to create such a callback function from a generic function with an arbitrary signature. Now we need to add a plot and a button to the document. We will use a basic column layout for this, with only a minor change: we will wrap the button in an element called widgetbox , which is responsible for the proper placement and padding of the widgets (check how it will look without it): So far we have not created any attribute callbacks, but we will shortly. When any of the high-level plotting methods like circle , vbar are applied on a Figure instance, those methods add an additional renderer to that figure (look into the renderers attribute of a Figure instance). As you already probably figured out, we can attach a callback to any such change. To illustrate this, let's create a very basic callback with the correct signature: Note the signature: it’s generic, so that the same callback function may be bound to different attributes: the attribute name itself is the input parameter to the function. Now, on each button click, we will see "attribute 'renderers' changed" in the terminal. As simple as that. Each time we click the button, we add a new renderer to the plot and Bokeh fires renderer_added callback. The power of this mechanism is that we do not connect this renderer_added callback to the button at all. We just properly handle the sequence of events, launched by the button click. This allows to wire callbacks in Bokeh application in a complex and flexible way. The complete code for this app is as follows: Providing data In all the examples above, we plotted data in a straightforward way: by providing numpy arrays to a corresponding glyph method. In larger applications, this simple implementation has a couple of important drawbacks. First, each call to a circle or any other glyph function adds a new renderer to the plot which makes them hard to track. Second, this approach couples the data layer to the view layer, which makes the code entangled and harder to maintain as the app grows larger. The right way to handle data in Bokeh applications is via ColumnDataSource and CDSView . ColumnDataSource is a data container, which was introduced in Part I of this tutorial. 
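As a quick refresher, here is a minimal sketch of my own showing how such a data source is created and handed to a glyph; data_scr and sample_plot follow the names the article uses:

from bokeh.models import ColumnDataSource

# an empty source with two columns; data will be streamed into it later
data_scr = ColumnDataSource({'x': [], 'y': []})

# the glyph references columns by name instead of holding raw arrays
sample_plot.circle(x='x', y='y', source=data_scr)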
CDSView is a filtering mechanism for column data sources, which allows us to visualize only certain data elements with a single data source under the hood. We will use CDSView later in our examples and in our dashboard application. Streaming Let’s rewrite our button application in a more efficient way. First, we create a data source and use it for plotting: For now, sample_plot depends on data_scr as the source for its data to be presented as circles. But this data source is still empty. How should we populate it with data? Instead of plotting data directly on a button click, we will now stream data into the data source: Let’s breakdown this code: the data_scr.stream method of ColumnDataSource appends new data to the data source. Remember, Bokeh keeps track of the two versions of our data: one on the server side and the other on the client side. Using data_scr.stream ensures that only diff changes will be sent to the client, while the data source itself will not be recreated or resend from scratch. This is important, as sometimes you may need to stream large amounts of data, and creating new data sources on each update is costly both in terms of resources and performance. Note also the rollover argument: it caps the maximum number of most recent data points that the client will keep. Again, this is more suitable for applications, which update frequently and with large amounts of data. The complete code for this app is as follows: Patching While streaming is used to provide new data, sometimes you need to change the data, which is already in the data source. For this use case ColumnDataSource exposes a patch method. Again, only diff updates will be sent to the client with a minimal network overhead. Let’s rewrite our callback function — we will randomly select 3 circles and change their x field: The patch method requires a dictionary, with keys being the fields of data source to be changed. You do not need to change all the fields at once. Values are sequences of pairs, where the 0-th element is the index to patch at, and the 1-st element is the new value to patch with. Patching in Bokeh has two drawbacks to be mentioned, though. One is the very strict type checking in data_scr.patch : int will pass, while np.int64 or np.int32 won't. That's why we need to do patch_idx = [int(ix) for ix in patch_idx] . Also, patching won’t handle generators, so zip won't work without transforming it to the actual data sequence. The complete code for this app is as follows: Streaming and patching methods enable us to decouple the data layer from the view layer. Now our data updates are efficiently handled by Bokeh, so we do not have to bother with how data changes will find their way to the client. We can now design Bokeh applications around data, and instruct plots to use specific data sources for drawing. Using tables One important widget to explore before creating a real dashboard is a table. A common use case is to display actual data values for inspection. Let’s create a simple application with tables. Along the way, we will also explore filtering with CDSView . First, we need to add some imports: In Bokeh, a table is a collection of columns, linked to some data source (what a surprise!). To create a table, we need to construct table columns first: Note, that we provide not only field names, which correspond to columns in a data source, but also titles, which will be displayed in the header row. 
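The column definitions look roughly like this hedged sketch; the field names and titles here are illustrative placeholders rather than the article's own:

from bokeh.models.widgets import TableColumn

table_columns = [
    TableColumn(field='x', title='X value'),   # field must match a data source column
    TableColumn(field='y', title='Y value'),
]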
We need also the data source and then we’re ready to create the table itself: Now, to illustrate how Bokeh handles filtering and selections, we will add the second table, with the same columns, but with CDSView : This code looks foreign, so let’s break it down a bit. In this table, we want to display only filtered rows, and we start by creating a mask. This mask filters nothing as for now, but we only need a placeholder to create a CDSView . We will update it later. To create a boolean mask in CDSView , we use a BooleanFilter . Another option may be an IndexFilter , which allows selecting which indices should be displayed in a view. Finally, we create the table itself and instruct it to show data from the data_scr according to columns and subject to any filtering, as defined in the data_view . Now, how do we change, which rows are displayed in our filtered table? Remember, Bokeh tracks all the changes in the document and its children. What we need to do is just to change the view, and Bokeh will transfer the changes to the client. Let’s first arrange our application together: If you launch it now, nothing interesting will happen: tables will be absolutely identical. To make them look different, let’s add a periodic callback: In this callback, we randomly select a subset of rows to recreate the view. No additional actions are needed: Bokeh will notify every model, which needs to re-render on this change (in this case, only filtered_table ). Note several useful things about tables and data in Bokeh in general: you can sort table rows by some column value by clicking at the column header, to get to original order, you need to Ctrl+click on the column header. Moreover, if you select one or more rows in one table, you will see, that the same rows are selected in another table (except those, which are filtered at the moment). This is an important observation: in reality, you selected not rows in the table, but rows in the underlying ColumnDataSource . Bokeh notices the client change and notifies every model (tables, plots, and others), which uses the same data source, causing all of them to respond to the change. Actually, you can even attach a callback to the selection and handle even more elaborate behavior. The complete code for this app is as follows: Next steps Now that we know how to create glyphs, provide them with data (filtered or now), and create dynamic and interactive Bokeh applications, we can proceed to the final part of the series: dynamic dashboards. In the next part, we will create a real interactive dashboard, using the real data, coming from an external data source. We will handle all the aspects of the dynamic dashboard: data management, plotting, interactivity, and styling. Stay tuned!
https://medium.com/y-data-stories/python-and-bokeh-part-ii-d81024c9578f
['Gleb Ivashkevich']
2019-07-20 21:41:11.524000+00:00
['Bokeh', 'Y Data', 'Data Visualization', 'Python', 'Visualization']
We Achieved More This Year Than We Give Ourselves Credit For
We Achieved More This Year Than We Give Ourselves Credit For When merely surviving is an accomplishment Photo by Michelangelo Buonarroti from Pexels For many of us it’s been the most difficult year of our lives. Even if worse things have happened to us in the years before, most of us haven’t spent a whole year in confusion and paranoia. Even the most powerful governments and renowned experts were not prepared for this, so we can’t blame ourselves for waking up into chaos one morning and not knowing what to do next. At first, it may have felt like we’re handling this pretty well, but the cost of living in lockdown kept piling up week after week. Maybe that’s why showing some kindness to ourselves is overdue. Despite the worsening woes of lockdown life, we have made it this far. The fact that we’re alive and breathing right now is indicative of the right steps we have taken and the luck we had on our side. That’s a great place to grow gratefulness. We’ve had to accept life within four walls as the new normal As much as we love coming home, being home all the time and bringing home everything we did elsewhere — like work and exercise — are challenges none of us signed up for. We gave up our temples and turned them into trains in a loop, living in the irony of constant restlessness in a place meant for resting. We accepted looking at faces we love only through a flat-screen, and showing our battlegrounds-of-residence to strangers on camera. We shared our losses and griefs from a distance and came up with creative ideas to celebrate birthdays, anniversaries, and graduations in front of blue light emitters we were already overexposed to. We accepted Zoom as a household name, something most of us would probably never know of in different circumstances. We cheered and protested, supported and voted, taught and learned, loved and protected, all within an ever-shrinking walled perimeter. And we still continue to hold our own. The last thing we need is undeserved guilt We need to rid ourselves of wild expectations, of perfection and invincibility, on top of everything we’ve been carrying already. Yes, we lost jobs, but we also spent months brainstorming to fortify our fragile livelihoods going forward. We learned new skills, found new opportunities, came across ideas that we probably wouldn’t have in another reality. Yes, we lost loved ones and opened our hearts to others who suffered the same, but we learned to embrace uncertainty as an inevitable component of life, vowing to express love and thankfulness more often, more sincerely. All of that means we owe ourselves a pat on the back, a hug for the soul, and a high-five to our nerves. It wasn’t a year of setbacks and slowdowns It’s rather been a year of finding better solutions for survival. We’re ending a year knowing we’re united not only in our battles but in our triumphs too, as long as we count ourselves as a part of a larger entity: Humanity. Even when some of us find these simple truths to be obvious, it’s not that hard to forget them when we measure ourselves against astronomical expectations in circumstances where merely surviving is an accomplishment. So let’s take a kind look at the mirror and say, “You did great!” And remember that each of us, and by aggregation all of us, reached a milestone worthy of celebration.
https://medium.com/live-your-life-on-purpose/we-achieved-more-this-year-than-we-give-ourselves-credit-for-d390edec40b4
['Mutasim Billah']
2020-12-24 12:25:15.631000+00:00
['Society', 'Pandemic', '2020', 'Motivation', 'Self Love']
New Survey Identifies 98 Long-Lasting Covid Symptoms
New Survey Identifies 98 Long-Lasting Covid Symptoms Early research helps quantify coronavirus long-haulers’ experiences Photo: RuslanDashinsky/Getty Images There’s a growing number of people around the globe who have survived Covid-19, only to find persistent symptoms lasting weeks or months, and even new effects like hair loss that don’t show up until weeks after they’ve been declared Covid-free. They’re called long-haulers. Their experiences are poorly understood by the medical community and often dismissed by doctors as psychological issues, writes epidemiologist and Covid survivor Margot Gage Witvliet, PhD, in an article on The Conversation. But the aches, pains, and inconveniences are real, according to Witvliet, who had Covid-19 four months ago and is now suffering from tinnitus, chest pain, and heart-racing. 98 long-haul effects A new survey of 1,567 long-haulers now shows just how wide-ranging these long-term symptoms are, stretching from sadness and blurry vision to diarrhea and joint pain. Here are the top 10 complaints and the percentage of people reporting each one (many long-haulers report several effects): 100% Fatigue 66.8% Muscle or body aches 65.1% Shortness of breath or difficulty breathing 59.0% Difficulty concentrating or focusing 58.5% Inability to exercise or be active 57.6% Headache 49.9% Difficulty sleeping 47.6% Anxiety 45.6% Memory problems 41.9% Dizziness The survey, which includes self-reported post-Covid symptoms, grew out of a Facebook page called Survivor Corps, a grassroots group devoted to educating Covid-19 long-haulers and connecting them to the medical and research communities. The findings were analyzed and presented by Natalie Lambert, PhD, an associate research professor in medicine at Indiana University and Wendy Chung, MD, a neurodevelopmental specialist at Columbia University Irving Medical Center. Their paper has not been formally peer-reviewed nor published in a journal, but the findings echo other research and the growing number of documented if anecdotal cases. In all, survey respondents noted 98 different effects, far more than the 11 common symptoms that the U.S. Centers for Disease Control and Prevention lists as possible signs that a person has the disease. Several of the ills are far from benign: tinnitus; cramps; flashes or floaters in vision; night sweats, pain in the hand and feet. A quarter of the effects involved pain. “The results of this survey suggest that the brain, whole body, eye and skin symptoms are also frequent-occurring health problems for people recovering from Covid-19,” the researchers state. ‘We expected to see a lot of long-term damage’ Studies have shown that Covid-19 infects much more than the respiratory system. By late spring, we knew the disease was affecting the body from head to toe, swelling the brain and compromising many of the body’s organs, and that it could be a blood vessel disease. A new study suggests Covid-19 can infect the thyroid gland, causing excess hormone release. And as time goes on, studies have begun looking at potential long-term effects. Heart images taken 10 weeks after people contracted Covid-19 found 78 of 100 had some sort of inflammation or other abnormalities, even if the people had few or no preexisting cardiovascular issues, researchers reported July 27 in the journal JAMA Cardiology. 
“We expected to see a lot of long-term damage from Covid-19: scarring, decreased lung function, decreased exercise capacity,” Ali Gholamrezanezhad, a radiologist at the Keck School of Medicine at the University of Southern California, tells Science Magazine. “It’s going to take months to a year or more to determine if there are any long-lasting, deleterious consequences of the infection.” There are ongoing odd effects, too. Beyond enduring pain, hundreds of Covid-19 survivors are experiencing hair loss. “We are seeing patients who had Covid-19 two to three months ago and are now experiencing hair loss,” says Shilpi Khetarpal, MD, a dermatologist at the Cleveland Clinic. The demoralizing effect was made vivid recently in a Twitter video posted by the celebrity Alyssa Milano, herself recovering — in some ways — from the disease. Khetarpal says, however, that the effects should be temporary. Answers coming, in months or years Covid-19 would not be the only virus to cause chronic symptoms. Polio usually causes mild cold or flu-like symptoms. But in about 1% of cases, it damages the neurological system and can leave a person partially paralyzed. Epstein-Barr virus and the herpes virus are both suspected of causing chronic fatigue syndrome, but scientists aren’t sure. Given the current pandemic is only months old, nobody can say for sure if any of the post-Covid complications will become life-long problems. “It’s going to take months to a year or more to determine if there are any long-lasting, deleterious consequences of the infection,” Dr. Anthony Fauci, director of the National Institute of Allergy and Infectious Disease, said last month in an interview on Facebook. “We just don’t know that now. We haven’t had enough time.” That leaves sufferers like Witvliet, the epidemiologist, in limbo, mostly just resting while wondering when her headaches, brain fog, and extreme fatigue might clear up. “It’s too soon to say we’re disabled,” she writes. “It’s also too soon to know how long the damage will last.”
https://elemental.medium.com/new-survey-identifies-98-long-lasting-covid-symptoms-87935b258a3e
['Robert Roy Britt']
2020-08-14 05:31:01.020000+00:00
['Health', 'Pandemic', 'Covid 19', 'Symptoms', 'Coronavirus']
What is a Dashboard Style Guide and Why Should You Use One?
What is a Dashboard Style Guide and Why Should You Use One? Dashboard style guide (part 1) Many organizations employ BI developers and data analysts to create data visualizations and dashboards. Many of them, do not come from a design background and they focus their efforts on the mere creation of dashboards. Often, the design is pushed out and falls into random visual choices which leave a result that looks messy and unclear. A well-designed dashboard is not just “more beautiful”, but also easier to understand. When UX aspects are not taken into account, you may end up with a dashboard that does not serve the purpose it was made for in the first place and is inaccessible to end users. What is a style guide? A style guide is a document which provides guidelines for planning and designing components or pages. The objective is to create uniformity and consistency while reducing design efforts by reusing components. A style guide is crucial especially for large companies as it helps them maintain a consistent brand and design language (internally and externally) — even when many designers are working on the company’s products. A UX style guide focuses mainly on functionality, while a UI style guide emphasizes graphic design: colors, fonts, layouts, typography, etc.
https://medium.com/tint-studio/what-is-a-dashboard-style-guide-and-why-should-you-use-one-fb84ce8ffbb0
['Anat Sifri']
2019-04-30 09:05:10.840000+00:00
['Design', 'Dashboard', 'Data Visualization', 'Uxui Design']
The Jackpot of the Availability Cascade
You’re a soldier in the information war, but do you know what the endgame feels like? Photo by Chansereypich Seng on Unsplash Imagine two combatants of relatively equal strength. They’re both very good, very well practiced. They know all the moves. The only way one of them can win is when the other one slips up and lets their guard down. Maybe it’s exhaustion that causes the slip. Maybe it’s the surprise of a secret move they’ve never trained against, a move their opponent was holding onto for just the right moment. Whatever the reason, the match ends and a champion is declared. Some in the crowd erupt in joy while others fall into a shocked silence. It was a long fight. A tense fight. But in the end, the winner was decided. Those who cheered for the victor go home energized and self-satisfied; those who were rooting for the loser go home disappointed and turn their attention to other battles. Some people were neutral the whole time, just spectators to the fight, but not many. It’s almost impossible to watch a contest like this — a brawl between two powerful duelists at the top of their game — and not give in to the very human urge to pick a side. I’m Not Writing about a Real Fight, of Course I’m describing the intellectual clash between millions of Americans on issues in the public sphere — abortion, universal healthcare, illegal immigration, welfare, climate change. Like the two combatants I described above, the opposing sides in these battles have become entrenched. They can’t convince their opponents to give up. Nor have they been able to persuade enough neutral fighters to rally to their cause and overwhelm their adversary. Many of these battles span years, decades, and even generations. In some cases — like the conflict over what power states should have versus what power the federal government should have — the battle has been raging since the founding of our nation. And the battlefield is massive. It’s more of a battle-space, actually, because it spans both the real and conceptual realms. For the largest and longest-running battles, those who lead the charge against the opposing side have managed to accumulate a large supply of rhetorical munitions, a “pro” for every “con.” For the newest battles, the attackers wonder whether they can use the element of surprise to achieve victory quickly. Can the defenders be defeated swiftly before they have a chance to muster? Or will the attackers charge forward valiantly, winning a few quick victories only to become bogged down in a massive war of attrition they’re not yet prepared to fight? The Path to Victory If you’re still new to the information wars — or maybe you’ve been around awhile and just can’t quite grasp the metaphor I’m using — it might seem that I’m describing more of an ongoing process, something like history rather than a battle that actually can be won. But there’s a reason lobbyists and dark-money groups and nonprofits throw billions of dollars every year into the prolonged battles they’re fighting. It’s because their ideological crusaders are holding the line and hoping for a Big Event — bad luck, an error by their opponent, or something totally unexpected — that will break the deadlock and pave the way to victory. But before I provide you with a technical definition for what’s known as an “availability cascade,” consider the following: On September 10, 2001, many Americans were aware that global terrorism was a threat but were disinterested in committing U.S. forces overseas. 
Four days later — on September 14, 2001 — the American attitude had changed so rapidly and decisively that the U.S. Congress authorized the President to wage a global war on terrorism: only one member of the U.S. House of Representatives opposed the resolution, and the U.S. Senate agreed unanimously with only two Senators abstaining. You know what happened between September 10th and September 14th, 2001, don’t you? Suffice to say, even though skepticism about deploying American forces overseas was quite high throughout the 1990s and early 2000s, the 9/11 attack was such a spectacular event that a long, long time had to pass before anyone could publicly criticize American aggression overseas without being accused of treachery. Vice President Dick Cheney, along with his buddies in the military-industrial complex, had warned for years about the threat of global terrorism only to be rebuffed by bureaucrats and politicians who said things like “the American public doesn’t like military adventurism” and “the American public can’t stand to see Americans dying on foreign soil.” Then, all of a sudden — voila! In the span of just a few days, the Vice President won his battle over U.S. intervention abroad as the arguments of his opposition collapsed just as quickly and suddenly as the buildings of the World Trade Center on that September morning. Technical Definition If you’re itching for a technical definition for this phenomenon as opposed to just a single illustrative case study, here it is. The term “availability cascade” was first used by economist Timur Kuran and legal scholar Cass Sunstein in a 1999 paper published in the Stanford Law Review. The term was not used in the context of convincing Americans to go to war, however, but in the context of risk regulation. Kuran and Sunstein noticed the way environmental and public-health activists — who they gave the fancy title of “availability entrepreneurs”—used the mass media to persuade the public on issues related to health and the environment. The two academics were concerned about how activists could whip up public sentiment over a particular issue and force the government to take unwarranted or even destructive actions based on populist fear and anger rather than sound economics or science. Kuran and Sunstein defined the availability cascade as follows: A self-reinforcing process of collective belief formation by which an expressed perception triggers a chain reaction that gives the perception of increasing plausibility through its rising availability in public discourse. Said another way, the availability cascade is basically a meme that becomes legitimized and popularized — whether rightly or wrongly, accurately or not — by “going viral.” And I don’t mean the sort of one-off visual/textual meme we’re exposed to on Facebook or Twitter, although these are certainly innovative weapons in the information wars. Rather, the availability cascade is a process of repetition that frequently follows from a Big Event and leads to widespread belief adoption, suddenly and decisively shifting public perception and creating an overwhelming call to action. Other Examples Whether you want to think about the availability cascade as a meme, “going viral,” or a come-to-Jesus moment, it’s absolutely a real phenomenon. Like a river that suddenly bursts free of its banks and etches a new course into the landscape, the availability cascade can lead to a decisive victory in the information wars when little resistance is encountered. 
While we’re speaking of rivers, one of the most famous availability cascades in American history occurred in the summer of 1969 when the Cuyahoga River caught fire in downtown Cleveland. The river, which was one of the most chemically polluted in the United States, had actually caught fire something like 13 times before, including a massive 1952 fire that resulted in hundreds of thousands of dollars in damage to boats, a bridge, and a riverfront office building. Although the 1969 fire wasn’t that bad and no photographs of it were available, this didn’t stop Time magazine from publishing a dramatic photo from the far more destructive 1952 fire, shocking enough people that the public outcry eventually led to various environmental improvements such as the Clean Water Act and the creation of the Environmental Protection Agency. You could convincingly argue that the entire cascade surrounding the Cuyahoga River fire was manufactured given that the photographic evidence had been manipulated. And indeed, that’s exactly what happens when smart and savvy availability entrepreneurs craft a compelling narrative out of small events and/or less-than-complete information. But lest we worry that all of history has been manipulated, it’s helpful to also consider examples of purely organic availability cascades. While approaching its mooring in Lakehurst, New Jersey, from Germany in 1937, the hydrogen-filled airship Hindenburg burst into flames and crashed. The disaster was recorded on film and quickly rushed out to the theaters via newsreel. Although the airship industry had been sputtering for years, the Hindenburg crash touched off an availability cascade that airships were not safe and permanently ended their use for commercial passengers practically overnight. If you haven’t seen (or don’t recall) the Hindenburg newsreel footage, it’s worth discovering (or rediscovering). I can only imagine what it would have been like in 1937, as an average person who had up until that point seen only primitive special effects out of Hollywood, to sit down in the darkness of a theater and watch real footage of tiny human beings scattering like ants beneath the hulk of a flaming dirigible as it fell to Earth. Hitting the Jackpot But while the Hindenburg newsreel is dramatic, it plays only a soft prelude to some of the more intense and opinion-swaying imagery we’ve seen in the years since. Although availability cascades are not always touched off by fires, explosions, and violence, it certainly seems to help — among the first things that come to mind are atomic bombs detonating, U.S. television coverage of the 1968 Tet Offensive in Vietnam, the World Trade Center falling down, and the chilling gun-sight camera video from 2007 that shows jovial U.S. soldiers gunning down two Reuters journalists and other civilians in the streets of Iraq. More recently, the widely watched video that showed the brutal and unnecessary killing of George Floyd by police officers in Minneapolis, Minnesota, touched off a public conversation that had all the hallmarks of an availability cascade in the making. Although it remains to be seen whether more significant changes are ahead for police departments across the country, it’s possible that in retrospect the video of the killing will be viewed as the “tipping point” that forced America to finally confront the disproportionate violence police officers commit against people of color. 
A strong visual component of the availability cascade is clearly important, and herein lies the challenge for today’s information warriors who hope to hit the jackpot and win a decisive victory. For one thing, many issues do not yield well to dramatic visual imagery. This is, thankfully, one of the many challenges faced by tin-foilers in the anti-vaxx space, who lack a compelling image that “proves” vaccines cause autism. (They don’t.) Meanwhile, even if a strong visual component exists, information warriors still face an uphill battle given how thoroughly Americans have already been exposed to dramatic imagery — including fictional explosions of spaceships, planets, Death Stars, and the like. Though obviously much of the violence we have seen isn’t real, our subconscious minds don’t necessarily know this. Imagery is imagery. As people’s sensitivity thresholds rise, so too does the threshold needed to ignite the availability cascade and give it a life of its own. This means that information warriors will search out ever-more-stupefying ways to get people’s attention and win a decisive battle on issues in the public sphere. So consider yourself warned. Or — if you happen to be a foot-soldier in the information wars — consider yourself informed.
https://medium.com/swlh/the-jackpot-of-the-availability-cascade-72adbec8da7e
['Jack Luna']
2020-10-13 22:00:04.310000+00:00
['Politics', 'Society', 'Psychology', 'Media', 'Social Media Marketing']
3 Beliefs I Abandoned After 3 Years of Professional Coding
2. Programmers Shouldn’t Be Insecure About Their Work Now, I’ve got some really skilled senior developers working with me. They are competent individuals who hold my pull requests to high standards; otherwise, they will reject them without remorse. I used to think of these people as developers incapable of doing any wrong. I would look at them in awe and wonder if one day I would reach their level of skill and never code anything incorrectly. But this belief was wrong. It only took a little more experience, and a few honest conversations with my colleagues, to realize that they have insecurities too. They don’t always know everything and are not always sure about the best possible solution for a given problem. They have to constantly renew their knowledge, just as I do. They know that sometimes they just have to go with the flow and trust their intuition about what is best for the project. Freeing myself from this belief means I no longer feel stuck in my own world, convinced I will never be a senior developer, because I know that seniors share my feelings of insecurity.
https://medium.com/better-programming/3-beliefs-i-abandoned-after-3-years-of-professional-coding-d4a71b588100
['Piero Borrelli']
2020-11-18 15:31:48.843000+00:00
['Python', 'JavaScript', 'Technology', 'Productivity', 'Programming']
I Wish I Knew Then What I Know Now
I Wish I Knew Then What I Know Now How I got my compassion to include myself Photo by Giulia Bertelli on Unsplash All my friends say I am a kind person. I call when you are sick. I bring food over when you can’t cook. I give to those on the street who ask for spare change. Everyone knows I bend over backwards to help my kids. I am just not very kind to myself. In fact, I am darn hard on myself. Looking back on my life, if I knew then what I know now, I would have cut myself some slack. Let me tell you the story of four words that changed my life. And no, the four words were not, “will you marry me?” It began at the end of an aerobics class when we were gathering up our belongings. I overheard someone say, “I am more self-compassionate.” A bright, beam-like light turned on in my brain—just four words. I had never considered self-compassion — there was no connection in my mind between the words “self” and “compassion.” It sounded so new-age-granola-woo woo or something. At best, it felt self-indulgent like a day at the spa, something I never had time for. I had a million things to get done, — who the f*ck could think about self-compassion? But it sounded soothing. Who knows why those words resonated with me so I made a mental note, filing it away for future reference. We all struggle For most, life is a struggle. It’s not perfect, and some of us have a more challenging road than others. Illness. Anxiety. Grief. Loneliness. Financial issues. Relationship problems. Depression. Guilt. Failure. Trauma. Addiction. Toxic work environments. Maybe we love and care for someone who suffers. If you don’t identify with any of these descriptors, just living through these uncertain pandemic times is a struggle. Just do it I was raised with the expectation to work hard, help others and “just do it.” Like many, my life was a whirlwind of constant pressure that I thought I thrived on. I could monitor my success by the items crossed off on my to-do list. Many spiral notebooks filled with lists documented the progress of my life. I was an entrepreneur working 80+ hours a week in a start-up that eventually grew. I had two children to mother. Being a parent is a transformative experience, but it also brings a lifetime of thinking about their health, behaviour, grades, friends, etc. Nothing prepared me for my son’s depression as a teenager and now as an adult, a crushing and unrelenting illness. Parents with kids struggling with a mental illness know the concern, worry, and stress, not to mention the guilt and self-blame. Did I miss a sign? Was it because I had a demanding job? Was what I did somehow not enough? I was heartbroken because no love or healthy meal could fix or help or alleviate his struggles, leaving me emotionally emptied. After three years in a long-term residence, my father died. He was the person I spoke to every day and whose care was always on my mind. It is estimated that approximately 30% of us provide help to a chronically ill, disabled, or aged family member or friend during any given year and spend an average of 20 hours per week providing care for someone we love. All this left no time for me. How self-compassion helped me My inner voice was a harsh taskmaster and kept screaming, “just do it” and don’t think about anything else. I confronted life’s challenges by being in a perpetual problem-solving mode. “I can do it all,” was my mantra. That was great until I crashed and burned. 
After reflection and a little help, I realized that the tough “just do it” voice that screamed in my head was not sustainable. It was a long journey to replace that harsh taskmaster’s voice. Self-compassion is an easy concept to explain but challenging to implement. It involves treating yourself with the same caring and kindness you would treat a family member, a friend or even a stranger who is having a hard time. Our culture emphasizes being kind to others struggling, but it does not emphasize being kind to ourselves. I had to sit down and have a little talk with myself to understand why being kind to myself did not come naturally. I had to undo a lifelong pattern of thinking to be convinced that having compassion for myself was the same as having compassion for a friend who is suffering. But eventually, I got there. For me, being self-compassionate gave me the space to step back and observe my life without judgement: this is the way it is for now. It connected me to the notion that many suffer or go through rough times: I am not alone. It allowed me to speak kindly to myself: the way I would speak to a friend going through a hard time. It takes practice — being kind to yourself is not easy — but like any skill, the more you practice, the better you get. How self-compassion can help you While I had a slave-driver inner voice, for others the inner voice can be a harsh critic driving you to be tough on yourself, beat yourself up, think you are the only one to have failed, or believe you are not good enough. The voice is a merciless judge of your inadequacies and shortcomings. Still others live with the crushing fist of depression and anxiety. Or we care for someone who is sick. Perhaps we have a demanding ageing parent or a terrible boss. Our reality feels like being shrink-wrapped, unable to punch our way out. I am not an expert or a therapist, and there isn’t one formula for developing self-compassion. But I know it works. Here is what you can try:
sit down somewhere quiet and identify what your situation is, accepting it without judgement or criticism or trying to figure out what to do
accept that the rough time you are going through is painful
realize struggles are a reality shared by all of us
talk to yourself as if speaking to your best friend with kindness, care and reassurance
allow yourself to be more healthy and happy because you care about yourself and not because you have to fix something or you are worthless or unacceptable the way you are
repeat often and notice the shift in your thinking.
Image created by author with photo licensed from Adobe Stock Research and resources Whatever your struggle is, research-based evidence confirms that the benevolent inner voice of self-compassion shifts your mindset to a more positive place. It provides you with confidence, resilience and a belief in your abilities. It lowers levels of depression and anxiety. Resources for further reading: If you are interested in learning more about self-compassion, I recommend the books written by Dr. Kristin Neff, a pioneer and leader in the field of self-compassion research for over 15 years. She has an excellent website. I know all this now and wonder why it took me so long to figure it out. But better late than never — so the saying goes. I hope self-compassion can be a soothing force in your life too.
https://medium.com/crows-feet/i-wish-i-knew-then-what-i-know-now-adb8df39a088
['Alice Goldbloom']
2020-11-21 01:22:26.928000+00:00
['Health', 'It Happened To Me', 'Self Compassion', 'Life', 'Mental Health']
taylor swift’s “evermore”: track-by-track review
an overview of evermore On July 24, 2020, Taylor Swift sent shock waves through the music industry when she unexpectedly dropped her 8th studio album, folklore. She had always spaced out her album eras by more than two years and this album came less than a year after her 7th studio album, Lover. In addition to being a surprise (announced to the world less than 24 hours before its release), it was also notable for three other reasons. The first is that it was the first high-profile music release produced during the COVID-19 pandemic. The second is that it represented a marked departure from Taylor Swift’s increasingly pop music-oriented sound. The third is that it was good; like really, really good. Within a few weeks, folklore had become the best-selling album of 2020. The first single, “cardigan,” became Taylor Swift’s sixth #1 hit on the Billboard Hot 100 and gave her the distinction of being the first artist to debut atop the album and single charts in the same week. The album received the best reviews of her career to date, obtaining an astonishing average score of 88/100 on Metacritic. It was recently nominated for 5 Grammys, including Album of the Year (her fourth nomination in that category; she’s one of only two female artists who have ever won it twice). And it is currently racking up other end-of-the-year accolades, including being in the top spot on many high profile publications’ lists of the year’s best albums. Perhaps the only thing more shocking than the surprise release and subsequent success of folklore was the surprise drop of her 9th studio album, evermore, only 20 weeks later. Taylor followed the same strategy with evermore that she did with folklore. She announced it via social media less than 24 hours before it debuted with cover art, links to her website for pre-orders and merchandise, and a statement about the origin of the album. The statement read: “To put it plainly, we just couldn’t stop writing songs. To try and put it more poetically, it feels like we were standing on the edge of the folklorian woods and had a choice: to turn and go back or to travel further into the forest of this music. We chose to wander deeper in … I’ve never done this before. In the past I’ve always treated albums as one-off eras and moved onto planning the next one after an album was released. There was something different with Folklore. In making it, I felt less like I was departing and more like I was returning. I loved the escapism I found in these imaginary/not imaginary tales. I loved the ways you welcomed the dreamscapes and tragedies and epic tales of love lost and found into your lives. So I just kept writing them.” — Taylor Swift Only a few days after its December 11 release, the album has already amassed enough sales to guarantee a #1 debut on next week’s Billboard 200 chart, spawned a notable hit in the form of lead single “willow,” received similar critical acclaim to its predecessor (its current Metacritic score is 85/100), and led numerous critics to retool their “best of” end-of-year list. Rather than reflect a distinct new “era,” evermore represents a “sister album” to folklore. It could also be described as a companion album or sequel. What it is not is an album of outtakes from the folklore recording sessions that were assembled together to prolong the buzz around that album. It is a fully realized, cohesive artistic vision and, in fact, is even longer than folklore. 
It clocks in at over an hour without even counting the two forthcoming bonus tracks available on the yet-to-be shipped physical release of the album. Image copyright: Republic Records/Taylor Swift Taylor Swift continues to work with the same creative team on evermore, but with a different emphasis. Whereas folklore’s tracks were split pretty evenly between those produced by Jack Antonoff and Aaron Dessner, this one much more heavily features Dessner’s work and also collaborations from Bon Iver, HAIM, and Marcus Mumford (of Mumford & Sons). The result is that there is even more of an indie rock sound to many of the songs. But it also takes interesting detours into chamber rock, grunge, folk music, and her country roots. Even more than was the case with folklore, there is little sense of tempo or urgency on evermore. The songs take their time to breathe and build and the tempo shifts not only between songs but very often within them. In terms of production, the songs frequently go to genuinely unexpected places. And while some of these experiments work better than others, they are all fascinating. As is usually the case with any Taylor Swift album, the highlight is the song-writing. She continues her time-honored tradition of delivering heartfelt confessionals, albeit with more psychological complexity and maturity than ever before. She also further experiments with third-person storytelling and interwoven narratives. We may not get something as clever and audacious as the teenage love story trilogy from folklore, but there are fascinating recurring characters and themes here. The themes of devastating heartbreak, begrudging forgiveness, romantic neglect, forbidden love, human evil, nostalgia, and grief are all prominent. Like folklore, evermore cannot be appreciated after a single listen. It took me multiple listens to fully be cast under its spell and appreciate the intricacies of its writing and production. It’s far from Taylor Swift’s most accessible album, but it’s one of her very best. Without further ado, here is my track-by-track review of evermore. Image copyright: Republic Records/Taylor Swift evermore: track-by-track review “willow” This folksy, yearning love song was an interesting choice for the album’s lead single. It is hardly the album’s catchiest, most provocative, or most powerful song, but it is a strong and fitting start to the album. The lyrics depict the complexities of wanting someone to love you back and Swift’s vocals effectively oscillate between deep and plaintive and heightened and breathless. As is virtually always the case, Swift said it best when she described the glockenspiel-driven song by saying, “I think it sounds like casting a spell to make somebody fall in love with you.” Favorite lyrics: “Wait for the signal, and I’ll meet you after dark/ Show me the places where the others gave you scars/ Now this is an open-shut case/ I guess I should’ve known from the look on your face/ Every bait-and-switch was a work of art” “champagne problems” Swift describes the storyline of this song as involving “longtime college sweethearts [who] had very different plans for the same night, one to end it and one who brought a ring.” It is a wrenching piano-driven ballad co-written with her romantic partner Joe Alwyn (under the pseudonym William Bowery). It is filled with stunning details that evoke vivid imagery and also delves into the female protagonist’s mental health struggles. One of the most interesting aspects of the song to me is its title.
“Champagne problems” is typically a phrase that is used to describe problems that may seem very real and painful to an individual but are truly insignificant when compared to the suffering of others. But a broken engagement and mental illness are undeniably painful topics. Is she acknowledging that in the epic tragedy of 2020, singing about a breakup feels trivial? Or is she sharing the protagonist’s perspective of the situation? Questions like this are part of what drive the brilliance of evermore. Favorite lyrics: “Sometimes you just don’t know the answer/ ’Til someone’s on their knees and asks you/ ‘She would have made such a lovely bride/ What a shame she’s fucked in the head,’ they said/ But you’ll find the real thing instead/ She’ll patch up your tapestry that I shred” “gold rush” The only song on the album produced by Jack Antonoff, who produced about half the songs on each of her last three albums, this one opens with an ethereal chant and then unexpectedly evolves into a quicker pace and poppier sound. The lyrics find the protagonist falling for someone that is universally adored and being pursued with the fervor of settlers looking for gold in California (hence the title). She is filled with longing and jealousy and ultimately realizes that the chase and the fight isn’t worth it. Favorite lyrics: “At dinner parties, I call you out on your contrarian shit/ And the coastal town we wandered ‘round had nеver seen a love as pure as it/ And thеn it fades into the gray of my day-old tea/ ’Cause it could never be” “‘tis the damn season” One of my favorite songs on the album, “‘tis the damn season” is propelled by an electric guitar strum and tells the story of a woman who left her small town of Tupelo, Mississippi to make it in Hollywood. She is back for the holidays and reunites with an old flame. Her ambivalence about both leaving and returning home is palpable and Taylor Swift’s aching vocal performance is one of her best. The song is a winner on its own, but is made even richer when you reach the track “Dorothea” and realize that this track is also about her, albeit from a different perspective. It’s the kind of world building and intertwining narratives that were a big part of what made folklore so ambitious and breathtaking. Favorite lyrics: “Sleep in half the day just for old times’ sake/ I won’t ask you to wait if you don’t ask me to stay/ So I’ll go back to L.A. and the so-called friends/ Who’ll write books about me if I ever make it/ And wonder about the only soul/ Who can tell which smiles I’m fakin’/ And the heart I know I’m breakin’ is my own/ To leave the warmest bed I’ve ever known” Image copyright: Republic Records/Taylor Swift “tolerate it” Many albums ago, Taylor Swift’s fans noticed that her fifth tracks tend to be a particularly wrenching and confessional ballads (e.g., “All Too Well,” “The Archer,” “My Tears Ricochet”). The fifth track of evermore is no exception. The refrain “I know my love should be celebrated/ But you tolerate it” is a punch to the gut and it perfectly sums up the song, which tells of a woman who has been faithfully devoted to her partner for years but is becoming increasingly resentful that he fails to demonstrate fidelity and passion in return…or even interest. The song evokes the heartbreaking dynamic of being in love with someone who appears largely indifferent toward you. Favorite lyrics: “While you were out building other worlds, where was I? 
Where’s the man who’d throw blankets over my barbed wire?/ I made you my temple, my mural, my sky/ Now I’m begging for footnotes in the story of your life/ Drawing hearts in the byline/ Always taking up too much space or time” “no body, no crime (feat. HAIM)” My favorite song on the album — and one of my favorite songs Taylor Swift has ever made — is this ambitious collaboration with the rock band HAIM. In the grand tradition of Cher’s “Dark Lady” and Carrie Underwood’s “Two Black Cadillacs,” the song tells the story of a murder (a double murder, no less!) from various perspectives. I fell in love with it instantly, but it took me numerous listens to fully comprehend the shifting narrative. The first verse involves the narrator’s friend telling her that she thinks her husband is cheating. The second verse tells of the narrator’s friend’s disappearance and the narrator’s suspicion that it was her husband that killed her. The third verse tells of the narrator murdering her late friend’s husband and subsequently covering it up. There is a chilling refrain and the song’s productions and vocals perfectly fit the cinematic and macabre content. It is also one of the most authentically and unapologetically country songs she has produced in ages. Favorite lyrics: “Good thing my daddy made me get a boating license when I was fifteen/ And I’ve cleaned enough houses to know how to cover up a scene/ Good thing Este’s sister’s gonna swear she was with me/ Good thing his mistress took out a big life insurance policy” “happiness” Taylor Swift reportedly wrote this song only a week before the album’s release, but it hardly feels like a rush job. It tells the story of someone in the aftermath of a breakup realizing that although they are devastated, they know there will be happiness again. It is fittingly slow, deliberate, and contemplative in its production. It also finds Swift engaging in a remarkably mature and complex approach to a breakup that contrasts markedly with the music she wrote as a teenager. Just look at lines like “No one teaches you what to do/ When a good man hurts you/ And you know you hurt him, too.” Favorite lyrics: “Honey, when I’m above the trees/ I see it for what it is/ But now my eyes leak acid rain on the pillow where you used to lay your head/ After giving you the best I had/ Tell me what to give after that/ All you want from me now is the green light of forgiveness/ You haven’t met the new me yet/ And I think she’ll give you that” “dorothea” The dyad from “‘tis the damn season” is revisited here, albeit from the perspective of the male this time. He talks about his high school girlfriend (the titular Dorothea) who went off to become a star in Hollywood. He pines for her and wonders if she ever thinks about him now that she has moved on. Although it’s more uptempo and upbeat than many of the other songs, it has a heartbreaking innocence and earnestness. Favorite lyrics: “It’s never too late to come back to my side/ The starts in your eyes shined brighter in Tupelo/ And if you’re ever tired of bеing known for who you know/ You know that you’ll always know me, Dorothea” “coney island (feat. The National)” This indie rock duet with Matt Berninger of the rock band the National (whose bandmate Aaron Dressner produced much of evermore), is intensely nostalgic. It depicts with vivid detail and heartbreaking longing the story of a couple looking back on a relationship that fell apart largely because of unequal levels of commitment. 
For me, it never reaches the dramatic heights I was hoping it would, but the lyrics are gorgeous and the mix of Swift’s mellifluous vocals with Berninger’s raspy baritone is truly special. Favorite lyrics: “Break my soul in two looking for you/ But you’re right here/ If I can’t relate to you anymore/ Then who am I related to?/ And if this is the long haul/ How’d we get here so soon?/ Did I close my first around something delicate?/ Did I shatter you?” “ivy” Swift delves back into the lyrical theme of infidelity on this banjo-driven track that features harmonizing from Justin Vernon of Bon Iver. It tells the story of a woman who has reluctantly fallen in love with a man who is not her husband. She longs to be faithful, but cannot stop herself even though she is perfectly aware of the havoc it will wreak. The track is similar thematically to “illicit affairs,” which is a stand out from folklore (and like “ivy” was the tenth track on that album). Favorite lyrics: “Clover blooms in the fields/ Spring breaks loose, the time is near/ What would he do if he found us out?/ Crescent moon, coast is clear/ Spring breaks loose, but so does fear/ He’s gonna burn this house to the ground” Image copyright: Republic Records/Taylor Swift “cowboy like me” Of all the songs on evermore, this song snuck up on me the most. I was somewhat indifferent to it upon first listen and yet I kept returning to it over and over due to its hypnotic production and exceedingly interesting lyrics. Like “no body, no crime” this song goes back to her country roots and also involves vivid storytelling. But the similarities pretty much end there. Rather than a double murder, this song tells the story of two grifters who are constantly looking for wealthy people to romantically pursue but unexpectedly break all their own rules by falling in love with each other. The lyrics are beautifully complemented by backup vocals from Marcus Mumford (of the popular country rock band Mumford & Sons) and an elegant orchestration involving guitars, mandolins, and harmonicas. Favorite lyrics: “And the skeletons in both our closets/ Plotted hard to f*** this up/ And the old men that I’ve swindled/ Really did believe I was the one/ And the ladies lunching have their stories about/ When you passed through town/ But that was all before I locked it down” “long story short” This tale of rising from the ashes is one of the few songs on her one-two punch of folklore and evermore that with slightly different production could have been right at home on one of her previous, pop-oriented albums. It’s an indie-rock track that heavily features drums and guitars and has a jaunty, catchy chorus. It is another meditation on the cruel treatment Taylor Swift received from the media that she received due to various controversies and (mostly) some old-fashioned misogyny a few years earlier. But the way she cheerfully dismisses it and refocuses on her current happiness with her romantic partner suggests that she has fully moved on, making it one of the album’s truly heartwarming songs. Favorite lyrics: “Past me/ I wanna tell you not to get lost in these petty things/ Your nemeses/ Will defeat themselves before you get the chance to swing/ And he’s passing by/ Rare as the glimmer of a comet in the sky/ And he feels like home/ If the shoe fits, walk in it wherever you go” “marjorie” Yet another example of how folklore and evermore are “sister albums” is the fact that the thirteenth track of each is an ode one of her grandparents. 
While folklore’s “epiphany” focuses on her grandfather’s time in WWII, evermore’s 13th track “marjorie” tells the story of her grandmother. Identified by many critics and fans as a highlight of evermore, it is a gorgeously written tale of guilt, grief, and regret. The subject, Marjorie Finlay, was an opera singer who died when Taylor was just 13 years old. As such, the song takes an alternating perspective of young Taylor not appreciating her grandmother enough while she had her and grown Taylor wishing she could do it all over. To make it all the more poignant, the song samples her grandmother’s actual vocals for the backing track. Favorite lyrics: “I should’ve asked you questions/ I should’ve asked you how to be/ Asked you to write it down for me/ Should’ve kept every grocery store receipt/ ’Cause every scrap of you would be taken from me/ Watched as you signed your name ‘Marjorie’/ All your closets of backlogged dreams/ And how you left them all to me” “closure” Aptly described by genius.com as a “wild industrial folk number,” “closure” tells the story of a woman who keeps getting offered the opportunity for closure with an ex-partner, but repeatedly rejects it. She does so not out of a desire to hurt him, but rather a reflection that he is doing it for selfish reasons and she simply doesn’t need it. She has moved on. Its unique, complex, and unpredictable orchestration and production make it one of the album’s most sonically bold tracks. Favorite lyrics: “I know I’m just a wrinkle in your new life/ Staying friends would iron it out so nice/ Guilty, guilty, reaching out across the sea/ That you put between you and me/ But it’s fake and it’s oh so unnecessary” “evermore (feat. Bon Iver)” Although it never reaches the power of “exile,” the collaboration with Bon Iver that appeared on folklore and is one of Swift’s best songs, there is a lot to admire here. Particularly, there is the thrilling bridge that occurs mid-song that features Taylor and Justin Vernon of Bon Iver trading lines and harmonizing. The majority of the song surrounding the hook is a somber, piano ballad that delves into the narrator’s depression. This one doesn’t work quite as well for me lyrically as most of the rest of the album, but it is beautifully performed, fascinating in its production, and caps the album on an appropriately somber but hopeful note. Favorite lyrics: “Hey December/ Guess I’m feeling unmoored/ Can’t remember/ What I used to fight for/ I rewind thе tape/ But all it does is pause/ On thе very moment all was lost/ Sending signals/ To be double-crossed” Image copyright: Republic Records/Taylor Swift In Sum Admittedly, I was a bit worried on December 10th when Taylor Swift announced evermore. Her previous surprise release was such a career-redefining masterwork and such an innovative and powerful artistic creation that I was worried that if it wasn’t as good as folklore or came off as a cash grab, it could have cheapened folklore’s legacy. But it doesn’t cheapen it. It deepens it. Four full listens in and I’m still finding new things to appreciate lyrically and sonically. The very act of poring over the lyrics in order to select my favorites for this article somehow increased my already substantial appreciation for the scope and nuance of Swift’s songwriting. Standout tracks for me are “no body, no crime,” “‘tis the damn season,” “champagne problems,” and “cowboy like me.” But, as was the case with folklore, there isn’t a single bad track on the album. 
The question most Swifties and music critics will be dogged by is: “Sure it’s good, but is it as good as folklore?” In my opinion, this isn’t the correct question to ask. My answer to that would be a tentative “no.” For me, it ever-so-slightly lacks the cohesion and emotional punch of folklore. However, this is likely heavily influenced by the facts that folklore was a marked departure from her prior album (and thus totally unexpected) and I have now had nearly five months to savor and explore it. I think the question that should be asked is: “Is evermore also a masterpiece like folklore is?” And my answer to that question would be an unequivocal “Yes.” Choosing between folklore and evermore is like deciding which movie is better — The Godfather or The Godfather Part II? Before Sunrise, Before Sunset, or Before Midnight? Alien or Aliens? Toy Story, Toy Story 2, or Toy Story 3? I may have my personal preferences, but comparing them is ultimately splitting hairs. They are all masterpieces that are head and shoulders above virtually everything else being produced in the medium and they complement and deepen each other beautifully. Rating for “evermore”: 5/5 stars
https://medium.com/rants-and-raves/taylor-swifts-evermore-track-by-track-review-17a14557dda9
['Richard Lebeau']
2020-12-16 21:42:18.209000+00:00
['Music', 'Writing', 'Culture', 'Entertainment', 'Feminism']
How Does Spotify Know You So Well?
Recommendation Model #1: Collaborative Filtering First, some background: When people hear the words “collaborative filtering,” they generally think of Netflix, as it was one of the first companies to use this method to power a recommendation model, taking users’ star-based movie ratings to inform its understanding of which movies to recommend to other similar users. After Netflix was successful, the use of collaborative filtering spread quickly, and is now often the starting point for anyone trying to make a recommendation model. Unlike Netflix, Spotify doesn’t have a star-based system with which users rate their music. Instead, Spotify’s data is implicit feedback — specifically, the stream counts of the tracks and additional streaming data, such as whether a user saved the track to their own playlist, or visited the artist’s page after listening to a song. But what is collaborative filtering, truly, and how does it work? Here’s a high-level rundown, explained in a quick conversation: Image source: Collaborative Filtering at Spotify, by Erik Bernhardsson, ex-Spotify. What’s going on here? Each of these individuals has track preferences: the one on the left likes tracks P, Q, R, and S, while the one on the right likes tracks Q, R, S, and T. Collaborative filtering then uses that data to say: “Hmmm… You both like three of the same tracks — Q, R, and S — so you are probably similar users. Therefore, you’re each likely to enjoy other tracks that the other person has listened to, that you haven’t heard yet.” Therefore, it suggests that the one on the right check out track P — the only track not mentioned, but that his “similar” counterpart enjoyed — and the one on the left check out track T, for the same reasoning. Simple, right? But how does Spotify actually use that concept in practice to calculate millions of users’ suggested tracks based on millions of other users’ preferences? With matrix math, done with Python libraries! In actuality, this matrix you see here is gigantic. Each row represents one of Spotify’s 140 million users — if you use Spotify, you yourself are a row in this matrix — and each column represents one of the 30 million songs in Spotify’s database. Then, the Python library runs this long, complicated matrix factorization formula: Some complicated math… When it finishes, we end up with two types of vectors, represented here by X and Y. X is a user vector, representing one single user’s taste, and Y is a song vector, representing one single song’s profile. The User/Song matrix produces two types of vectors: user vectors and song vectors. Image source: From Idea to Execution: Spotify’s Discover Weekly, by Chris Johnson, ex-Spotify. Now we have 140 million user vectors and 30 million song vectors. The actual content of these vectors is just a bunch of numbers that are essentially meaningless on their own, but are hugely useful when compared. To find out which users’ musical tastes are most similar to mine, collaborative filtering compares my vector with all of the other users’ vectors, ultimately spitting out which users are the closest matches. The same goes for the Y vector, songs: you can compare a single song’s vector with all the others, and find out which songs are most similar to the one in question. Collaborative filtering does a pretty good job, but Spotify knew they could do even better by adding another engine. Enter NLP.
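To make the mechanics a bit more concrete, here is a minimal sketch of the idea in Python with NumPy. It is not Spotify’s actual pipeline: the tiny play-count matrix, the choice of a truncated SVD in place of a production implicit-feedback solver such as alternating least squares, and all of the variable names are illustrative assumptions rather than anything the article confirms.

```python
import numpy as np

# Toy implicit-feedback matrix: rows are users, columns are songs, and each
# entry is a play count (0 means the user never streamed that track).
# Spotify's real matrix is on the order of 140 million users by 30 million songs.
plays = np.array([
    [5.0, 3.0, 4.0, 2.0, 0.0],   # user 0
    [0.0, 4.0, 5.0, 3.0, 1.0],   # user 1 -- overlaps heavily with user 0
    [1.0, 0.0, 0.0, 0.0, 4.0],   # user 2 -- quite different taste
])

# Factorize the matrix into low-rank "taste" vectors. A truncated SVD stands in
# here for the implicit-feedback factorization (e.g. alternating least squares)
# that a production system would run with a dedicated library.
k = 2                                  # number of latent dimensions
U, s, Vt = np.linalg.svd(plays, full_matrices=False)
user_vectors = U[:, :k] * s[:k]        # one k-dimensional vector per user (the "X" vectors)
song_vectors = Vt[:k, :].T             # one k-dimensional vector per song (the "Y" vectors)

def cosine(a, b):
    """Cosine similarity between two 1-D vectors."""
    return float(a @ b) / (np.linalg.norm(a) * np.linalg.norm(b))

# Find the user whose taste vector is closest to user 0's.
target = 0
others = [u for u in range(plays.shape[0]) if u != target]
most_similar = max(others, key=lambda u: cosine(user_vectors[target], user_vectors[u]))

# Recommend tracks the similar user streamed but the target user never has.
unheard = np.where(plays[target] == 0)[0]
recommendations = [int(song) for song in unheard if plays[most_similar, song] > 0]

print(f"User most similar to user {target}: user {most_similar}")
print(f"Suggested song indices for user {target}: {recommendations}")
```

At Spotify’s scale, brute-force comparison of 140 million user vectors would be impractical, so a real system would typically weight play counts by confidence and use approximate nearest-neighbour search; those details are beyond this sketch.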
https://medium.com/s/story/spotifys-discover-weekly-how-machine-learning-finds-your-new-music-19a41ab76efe
['Sophia Ciocca']
2020-04-08 22:49:04.455000+00:00
['Machine Learning', 'Artificial Intelligence', 'Spotify', 'Tech', 'Music']
A Whole Life Approach to Writing
A Whole Life Approach to Writing How to contain, capture and corral creative inspiration That loving feeling One of my personal mantras as it relates to writing is I write because I love it. It is on my profile, it shows up in a poem here and an essay there, and it is totally un-unique. Yep, you read that right — un-unique. Even though a deep passion for writing pulses through my veins, I know I am not the special bearer of creative desires. A love for communicating through the written word is the universal soul beat of wordsmiths across the globe. It is a creative energy all true writers feel. Sometimes. Oh no, you may be thinking about now, not another article on how to keep pressing on when feelings of inspiration are nowhere to be found! Fear not, I will not be giving any tips on how to discipline yourself through the dry spells and persevere through the grueling hours of tedium. Although, if you need that kind of thing, there are many good resources out there. The bookends of the whole life approach I would like to take a different approach — a whole life approach to keeping that love of writing alive. First, a little personal background may be in order, to frame what I will be addressing and set the stage for what I will not be. I remember the early days of my own writing passion when, as far back as first grade, I discovered that creative writing assignments were not work, but fun. It is the memory of times alone in my room, however, when an overwhelming desire to write a story would fall upon me, that I remember most. I would sit down, with my wide-ruled notebook paper and #2 pencil, and start to sketch out the characters and conversations that were a part of my make-believe world. It was exhilarating! Until it wasn’t. I would get about 2 or 3 chapters in, the enthusiasm would wane, and the story would end up somewhere in my room, probably in the triangular space behind my catty-corner dresser that served as my fort. I kept waiting for that feeling to come back so I could finish my creation, but alas. I also recall the first time I discovered that I could produce a finished product by sheer discipline with no feeling of inspiration whatsoever. It was my senior year of high school, when I had to write a literary analysis paper for my English class. It was one of those papers where you have to back up your thesis with sub-points supported by quotations from the book, and I had likely not even read the whole book. Somehow, I miraculously produced a worthy paper. But what was eye-opening to me at the time was that I could write something I was proud of without feeling like writing or even caring about the topic I was writing about. In the end, however, the exercise did produce several feelings: a sense of accomplishment, a boost in confidence and the satisfaction of self-discipline. These little snapshots represent what I like to think of as two very important bookends of a writer’s life: we need times of both deep inspirational creativity and structured discipline if we are going to bring our ideas to fruition. We have only to look at nature for an example of how beauty, spontaneity, order, and structure are intermingled throughout the created world. Those ordinary moments But what about all of the other moments of everyday life? The times when seedling thoughts lie dormant, when our little ant army lives march along as we do our duties, or when inner and outer storms rage leaving no time for structured planning, let alone space for contemplation and inspiration? 
The truth is that, for most of us, even those who write for a living, these non-writing times represent the majority of our existence. Eating and sleeping alone can take up close to half of every 24-hour period. If we want to stay healthy and maintain relationships with other human beings, that adds in a few more hours a day, and I could go on and on. Trust me, I am a scheduler, and I have parsed my life carefully over the years in an effort to understand where all the time goes. As a writing lover, I need to remember that writing is a whole life experience and includes much more than the moments I sit down to write, whether from major inspiration or necessary discipline. Between these times are the ordinary moments, the everyday occurrences that are easily lost, but worthy of guarding and gleaning. I find that most of my inspiration comes either directly or indirectly through normal life experiences—when I am brushing my teeth, shuttling kids around, sweeping the floor or having a conversation with family and friends. It is in these very times that the little fireflies of creativity float across the landscape of my mind. It is also at these very times that those same fireflies will flutter and flicker right back out into the void of night if I don’t (humanely, of course) contain, capture and corral them. If I don’t want them to disappear, I need to safeguard them until I can release their illuminating sparks into my writing. Containing, capturing and corralling creativity. These alliterative categories are not perfectly clear-cut, but instead of trying to squeeze creativity into a mold, I am satisfied that the overlap is further proof of the whole life nature of the writing process. Following is how I seek to hold on to the brief flashes of my fireflies. Contain the emotions Ah, those sometimes pleasant, sometimes painful, often unwieldy emotions. How to contain them when their definition is unclear, their nature so fleeting and life moves on leaving us with only an undefined memory? We know what words to use to tell — confusion, anger, love — but how do we hold on to the moment — contain it — so that we show the emotion in our writing? I recently found myself in a situation where I felt extremely agitated. At the time, I probably felt any number of emotions — sadness, anger, confusion and eventually peace. See, I just told you how I felt, but, while I’m sure you can relate, it probably didn’t stir up any emotion in you. As I was processing my emotions and trying to calm myself, I felt the urge to write, but I wasn’t at a place of clarity to do so and it wasn’t really the right time. Instead, I decided to contain the emotions within the framework of my experience. I wrote stream of consciousness notes — with no care for spelling, structure, flow — into my phone, intermingling sights, sounds, feelings and thoughts into one long rambling string. Afterwards, every time I read those notes, I recalled the incident vividly and eventually shaped the experience into a poem. Capture the senses We are constantly using our senses. They are the physical intake valves through which we relate to the world around us. They not only enable us to experience pleasure and pain, but they can stir up emotions and trigger deep thinking. Hence, the difficulty in placing them in a completely separate category. For my current purposes, I am making a distinction that I find helpful. 
When I think of capturing the senses, I am less concerned with trying to identify what I am feeling and more concerned with making sure that the fleeting revelations my senses bring to mind do not escape. Here is an example. One of the things I love to do is go for country drives. They expand my view, help me take deep breaths, pull me out of my stress pits. But why? It is because, as I am driving along, I see broad open spaces, beautiful contrasting colors, surprising vistas around the bend. I need to capture those. Or what about the other senses? I find that smells and sounds often bring with them wafts and echoes of nostalgia. The smell of my grandparents’ home or the distant sound of a train whistle remind me of what it felt like to be a child — not only remind me, but carry me back in time. The taste of macadamia nut crusted mahi mahi takes me to a romantic open-air dinner on the edge of the Pacific Ocean. The touch of my grandson’s little lips on my cheek brings with it both today’s new joys and yesterday’s memories of young motherhood. Whenever I am in a situation where my senses bring refreshment to my soul, clarity to my thinking or reminders of past situations, I try to capture them by recording them in short phrases that will jog my sensory memories when I decide to use them in a story or poem or perhaps even a non-fiction piece. Corral the concepts Whether out of the clear blue sky or triggered by one of the above two items, I have days where a concept races through my mind, and I think, “I need to write about that!” It could be something based on a very simple observation of a daily frustration like, “Why is there always a mismatch between plastic storage containers and lids?” Or it could be a quick glance in the mirror that triggers thoughts on the mysteries of aging, “how can I still feel so young on the inside while the outside insists on growing old?” Recently, not on a country drive, but on a suburban drive along a winding back road, I was provided a real-life illustration of a concept I think of often — perspective. After having driven this 3-mile stretch several times from one direction — on my way home from somewhere — I now approached it from the opposite direction — on my way to that somewhere — and was struck by the unfamiliarity of it all. I was aware of new scenery I had never noticed, and instead of being confident about the speed I should take as I rounded a curve, I was cautious and apprehensive, not knowing what would be around the bend and unsure of how much further I needed to drive to get to the end. It felt like a completely different road. Perspective. Knowing there was a life lesson in there, I corralled the concept by recording it with a sentence or two and stored it away in my computer Incubator folder. That folder contains documents entitled Miscellaneous Ideas (so original!) and Quirky and Fleeting Thoughts (that’s better!) as well as individual documents for ideas I have spent time expounding on a bit more extensively. Concluding encouragement My goal in sharing these things is not to provide methods, but to encourage all writers, including myself, to keep our love of writing alive even in times when we are not experiencing light bulb illuminations or when we are unable to carve out intentional time for writing. 
By recognizing that our lives are full of inspirational moments just waiting to be contained, captured and corralled, we have the opportunity to dot the landscape of our everyday moments with fireflies, keeping our own creative lights burning, while waiting for the perfect time to release our new and glowing creations into the world. So what fireflies have flown across your creative landscape today? I know you’ve seen some. Now just take a few moments to contain, capture or corral them. Meanwhile, I need to go take care of a few of my own.
https://medium.com/literally-literary/a-whole-life-approach-to-writing-a082ff5abcbf
['Valori Maresco']
2020-04-12 03:03:42.493000+00:00
['Creativity', 'Writing Tips', 'Writing Life', 'Essay', 'Writing']
How to Set Up a Publication in Medium
This post will be a quickie primer in how to set up a Medium publication to house your Blog-Your-Own-Book Challenge posts. We’ll talk about how to set up the publication in general, and then how to set up navigation in the publication so that your posts are all housed together nice and neat. I hear you thinking: Do I really need to do this? The answer is: Yes. You should have a publication on Medium if your plan is to be a writer on Medium. My friend Ashley Shannon always says “every post needs a home.” She’s right. Sometimes that home is in someone else’s publication. But sometimes it should be in your own. In your own publication, you can start gathering up an audience that’s yours. How cool is that? Ready? Create Your Publication First steps first. You need to actually build a publication. This is way, way easier than you might think. You can do it in ten or fifteen minutes. Maybe a little longer if you want to spend time creating graphics for the header, avatar, and logo. Step One: Go to the Publication Page Click on your little round photo in the upper right corner of your screen, then click on ‘publications.’ Screenshot: Author Step Two: Open a New Publication Page Click on the ‘new publication’ button. Screenshot: Author Step Three: Create Your New Publication Here are the questions you’ll be asked: Name. This is where you’ll name your publication. It will become part of your URL: medium.com/YOURPUBNAME, so think carefully and choose wisely! This is where you’ll name your publication. It will become part of your URL: medium.com/YOURPUBNAME, so think carefully and choose wisely! Tagline. This is a sentence about your publication. This is a sentence about your publication. Description. This is a longer description than your tagline. Maybe two or three sentences. It will go in the footer of all your posts and show up in searches on Medium. This is a longer description than your tagline. Maybe two or three sentences. It will go in the footer of all your posts and show up in searches on Medium. Publication Avatar. This is the picture that will show up in the little round circle at the top of page for your publication, similar to your personal avatar picture. It also shows up in previews of your content. It should be square and at least 60 X 60 pixels. This is the picture that will show up in the little round circle at the top of page for your publication, similar to your personal avatar picture. It also shows up in previews of your content. It should be square and at least 60 X 60 pixels. Publication Logo. This one is optional. It’s something you can create that will be added to the top of every post. It should be 600 X 72 pixels. This one is optional. It’s something you can create that will be added to the top of every post. It should be 600 X 72 pixels. Add your contact info, including your email address and social media links. These will be public, so consider creating a dedicated email address. Add up to five tags to your publication, similar to when you write a post. Add other writers or editors if you have them. Click ‘next.’ Step Four: Set Up Your Publications Header On the next page, you’ll be able to set up the header for your publication — what it will look like for readers. We’re going to look at this tool bar: Screenshot: Author
https://medium.com/the-write-brain/how-to-set-up-a-publication-in-medium-97c64243d8b2
['Shaunta Grimes']
2020-07-24 03:40:23.888000+00:00
['Byob', 'Writing', 'Blogging', 'Medium', 'Creativity']
COVID-19 Lays Bare Inequities In Our Health Care System
The coronavirus is disproportionately affecting Black Americans and intensifying social determinants of health. Social determinants of health are conditions in which people live, work, and learn that impact health risks and health outcomes. COVID-19 is showing the world how social determinants of health matter, especially to Black Americans. According to The Henry J. Kaiser Family Foundation, social determinants of health include factors like socioeconomic status, education, neighborhood and physical environment, employment, and social support networks, as well as access to health care. Addressing social determinants of health is vital for improving health and reducing health disparities. There are a growing number of initiatives to shape policies and practices in non-health sectors in ways that promote health and health equity, like Medicaid-specific initiatives. However, many challenges remain and are visible in the COVID-19 data. There are over 1.4 million cases of the coronavirus worldwide, according to Johns Hopkins University, and more than 80,000 deaths. Today, as stated by Worldometer, there are 400,412 coronavirus cases and 12,854 deaths in the US. Fatalities are disproportionately Black. Nikole Hannah-Jones has done a great job aggregating the data in a Twitter thread: Now, even as Black Americans risk higher exposure, they are already disproportionately suffering the comorbidities that make COVID-19 so deadly. Nikole Hannah-Jones explains, “Black Americans are 40 % more likely to have hypertension than white, twice as likely to have diabetes, up to 3x asthma hospitalization.” Additionally, poor Black people are more vulnerable to COVID-19. Black Americans are more likely to work service-sector jobs where they can’t practice social distancing or work from home. So what do we do? Below are just a few ways to address the social determinants of health and help those most at risk:
https://medium.com/humble-ventures/covid-19-lays-bare-inequities-in-our-health-care-system-1bf65a1aaf6c
['Harry Alford']
2020-04-08 02:44:19.887000+00:00
['Covid 19', 'Social Determinants', 'Black People', 'Coronavirus', 'Health']
History Warns of the Deadly Threat to Humanity from Artificial Intelligence
An article published in Nature magazine in autumn 2017 makes for interesting reading. It reports on research carried out by Washington State University and Arizona State University, which shows that the wealth disparity in human societies was insignificant until the development of agriculture. That occurred in different parts of the world around 13,000 years ago. What happened next should be a warning to humans in the age of artificial intelligence (AI). Land cultivation started when groups of nomads stayed in one location, probably due to illness, injury, bad weather, or fear of other tribes. A few individuals experimented with seeds and plants and discovered that they could grow edible crops in dedicated plots and repeat the process each year. That reduced their need to constantly hunt, fish, and search for wild fruit and vegetables. Some grabbed more land than others and became the wealthiest of the group. The wealth gap increased even more when some people learned how to tame large animals like oxen and horses and used them to till larger areas and, in the case of horses, more effectively fight adversaries and so acquire more land. Having the latest and most powerful technology — in the broadest sense of the word — has always meant riches and power. The industrial revolution, which replaced much animal and human sweat with steam power, made the owners of the steam engines and factories very wealthy. Today’s technological equivalents of oxen, horses, and steam engines are computer systems and, just as in the days of the early humans, those who control that new technology are among the richest. Technology itself, however, may soon upend that age-old equivalence. Many respected experts predict that the processing power of computers will surpass that of humans within the next few decades. Some of those experts, including Tesla and SpaceX CEO Elon Musk and the late theoretical physicist Stephen Hawking, worry that artificial intelligence machines will eventually become conscious — that is, gain self-awareness — be smarter than humans, and continue to get smarter quickly. These ultra-smart machines, the experts warn, will pose an existential threat to humanity because humans will not know what they’re thinking and so won’t be able to control them. Of course, nobody knows for sure that this will happen and, if it does, precisely when, but the expert warnings are credible enough to be taken seriously. Scientists call the hypothetical moment when machines become conscious the “singularity.” If that moment arrives, the experts suggest a number of possible scenarios. The most benign is that the machines will work for the benefit of their human creators and that there would be no reason for them to harm humans. Yet how could anyone be sure that that would be the case, since humans would not know what the machines are thinking? Even today, computer scientists don’t fully understand why complex computers make some of the decisions they make. Autonomous machines that can harm humans already exist. Drones without a human controller can be programmed to locate and attack targets. Some scientists argue that if machines become conscious, they are likely to regard humans as unnecessary and inefficient and eliminate them. Elon Musk, among others, suggests that the only way to match these super-intelligent machines is to augment human intelligence by joining human brains to the machines. In that science-fiction scenario, the human race would become a race of cyborgs.
Since cyborgs’ machine elements will doubtless evolve more quickly than the biological elements, humans will gradually but effectively turn into machines. That suggests that Elon Musk’s proposal is not really a solution at all and that the machines will eventually take over one way or another. Thanks to AI, the rich will get richer until, ironically, humans lose control of the technology they invented. That’s unlikely to happen anytime soon, but it could happen sooner than most people think. When it does, for the first time in human history, being very wealthy will count for very little.
https://medium.com/digital-diplomacy/history-warns-of-the-deadly-threat-to-humanity-from-artificial-intelligence-e08eccfc9a5f
['George J. Ziogas']
2020-12-16 12:25:12.629000+00:00
['Artificial Intelligence', 'Society', 'Technology', 'Data Science', 'History']
Software Roles and Titles
I’ve noticed a lot of confusion in the industry about various software roles and titles, even among founders, hiring managers, and team builders. What are the various roles and responsibilities on a software team, and which job titles tend to cover which roles? Before I dig into this too much, I’d like to emphasize that every team is unique, and responsibilities tend to float or be shared between different members of the team. Anybody at any time can delegate responsibilities to somebody else for various reasons. If your team isn’t exactly what I describe here, welcome to the club. I suspect very few teams and particular software roles will match perfectly with what we’re about to explore. This is just a general framework that describes averages more than any particular role or team. I’ll start with management titles and work my way through various roles roughly by seniority. I’d also like to emphasize that you should not feel constrained by your job title. I like to build an engineering culture which favors: Skills over titles Continuous delivery over deadlines Support over blame Collaboration over competition I like to reward initiative with increased responsibility, and if somebody has the skills and initiative to take on and outgrow the title they’re hired for, I like to promote rather than risk losing a rising star to another company or team. Software Development Roles Engineering Fellow CEO CTO CIO/Chief Digital Officer/Chief Innovation Officer VP of Engineering/Director of Engineering Chief Architect Software Architect Engineering Project Manager/Engineering Manager Technical Lead/Engineering Lead/Team Lead Principal Software Engineer Senior Software Engineer/Senior Software Developer Software Engineer Software Developer Junior Software Developer Intern Software Developer We’ll also talk a little about how these roles relate to other roles including: VP of Product Management/Head of Product Product Manager VP of Marketing Note: Sometimes “director” or “head” titles indicate middle managers between tech managers and the C-Suite. Often, “Chief” titles indicate a C-suite title. C-suite employees typically report directly to the CEO, and have potentially many reports in the organizations they lead. At very large companies, those alternate titles often fill similar roles to C-suite executives, but report to somebody who is effectively the CEO of a smaller business unit within the larger organization. Different business units sometimes operate as if they are separate companies, complete with their own isolated accounting, financial officers, etc. Different business units can also have VPs, e.g., “Vice President of Engineering, Merchant Operations”. Engineering Fellow The title “fellow” is the pinnacle of achievement for software engineers. It is typically awarded in recognition of people who have made outstanding contributions to the field of computing, and is usually awarded after an engineer writes a number of top-selling books, wins prizes like the Turing Award, the Nobel Prize, etc. In other words, fellows are usually already famous outside the organization, and the company is trying to strengthen their brand by more strongly associating themselves with admired and influential people. In my opinion, organizations should not try to hire for “fellow” roles. Instead, find the best and brightest, hire them, and then grant the title (and benefits) if the engineer is deserving of it. A fellow typically also holds another title at the company.
Often a CTO, Architect, VP of Engineering, or principal role, where they are in a position to lead, mentor, or serve as an example and inspiration to other members of the organization. CEO The CEO is the position of most authority in an organization. Typically, they set the vision and north star for the company. They rally everybody around a common understanding of why the company exists, what the mission is, and what the company’s values are. Frequently, CEOs are also the public face of the company, and in some cases, become synonymous with the brand (e.g., Steve Jobs with Apple, Elon Musk with Tesla/SpaceX, etc.) In some cases, CEOs are also the technical founder of a software organization, in which case, they also often fill the CTO role, and may have a VPs of Operations, Sales, Strategy, and Marketing helping with some of the other common CEO responsibilities. The CEO of a small company frequently wears a lot of hats, as you may have picked up from all the other roles that fell out of the CEO title when I mentioned that some CEOs lead the technology team. In any case, if there are important organizational decisions to be made, you can’t run it up the chain of responsibility any higher than the CEO. If you are a CEO, remember that you’re ultimately responsible, and you should trust your instincts, but don’t forget that even most famous CEOs have mentors and advisors they consult with on a regular basis. Trust your gut, but seek out smart, insightful people to challenge you to improve, as well. CTO Like the CEO role, the CTO role shape-shifts over time. At young startups, the CTO is often a technical cofounder to a visionary or domain-driven CEO. Frequently they are not qualified to take the title at a larger company, and hopefully grow into it as the company grows. Frequently, a startup CTO finds that they prefer more technical engineering roles, and settle back into other roles, like Principal Engineer, VP of Engineering, or Chief Architect. In many organizations, the mature CTO role is outward facing. They participate in business development meetings, frequently helping to land large partnerships or sales. Many of them hit the conference circuit and spend a lot of time evangelizing the development activities of the organization to the wider world: sharing the company’s innovations and discovering opportunities in the market which match up well with the company’s core competencies. CTOs frequently work closely with the product team on product strategy, and often have an internal-facing counterpart in engineering, such as the VP of Engineering. CTOs also frequently set the vision and north star of the engineering team. The goals for the team to work towards. CIO/Chief Digital Officer/Chief Innovation Officer The Chief Innovation Officer (CIO) is like a CTO, but typically employed by a company that would not normally be considered a “tech company”. The goal of the CIO is to reshape the company into one that consumers perceive as tech-savvy and innovative: To show the world what the future of the industry looks like, no matter what that industry is. For example, a home remodeling superstore chain might have a CIO responsible for partnering with tech companies to build a mixed reality app to show shoppers what a specific couch or wall color would look like in their living room, or using blockchains and cryptocurrencies to enhance the security and efficiency of supply chain logistics. 
Not to be confused with a Chief Information Officer (CIO), a title which is typically used in companies who are even more detached from technology, interested about as far as it aids their core operations. Unlike a Chief Innovation Officer, A Chief Information Officer is more likely to be leading tech integration and data migration projects than building new apps and trying to figure out how a company can disrupt itself from the inside. There are Chief Information Officers who act more like Chief Innovation Officers, but in my opinion, they should use the appropriate title. Most tech-native companies (app developers, etc) don’t have either kind of CIO. Instead, those responsibilities fall to the CTO and VP of Engineering. VP of Engineering/Director of Engineering While CTOs often face outward, the VP of Engineering often faces inward. A VP of Engineering is frequently responsible for building the engineering team and establishing the engineering culture and operations. The CTO might tell the engineering team what needs to get done on the grand scale, e.g., “be the leading innovator in human/computer interaction”. The VP of Engineering helps foster a culture that manages the “how”. The best VPs of Engineering at first come across as somebody who’s there to help the team work efficiently, and then they almost disappear. Developers on the team collaborate well, mentor each other, communicate effectively, and they think, “Hey, we’re a great team. We work really well together!” and maybe they think that’s all a lucky accident. The truth is that almost never happens by accident. It happens because there’s a VP of Engineering constantly monitoring the team’s progress, process, culture, and tone of communications. They’re encouraging developers to use certain tools, hold specific kinds of meetings at specific times in order to foster better collaboration with fewer interruptions. The best VPs of Engineering have been engineers, both on dysfunctional teams, and on highly functional teams. They know the patterns and anti-patterns for effective software development workflows. They work with the heads of product and product managers to ensure that there’s a good product discovery process (they don’t lead it or take charge of it, just make sure that somebody is on it and doing it well), and that product and design deliverables are adequately reviewed by engineers prior to implementation hand offs. I’m going to stop there before I write a book on all the work that goes into leading effective development operations. For more of my thoughts on this topic, check out How to Build a High Velocity Development Team. Many startups are too small to hire a full time VP of Engineering, but it’s still very important to get engineering culture right as early as possible. If you need help with this, reach out. Chief Architect At small organizations, the chief architect could be a technical co-founder with the self-awareness to realize that they won’t want the responsibilities of a CTO as the company grows. Maybe they don’t like to travel, or are simply more interested in software design than conference talks, business development, and sales calls that infiltrate the lives of many CTOs. The chief architect may be responsible for selecting technology stacks, designing collaborations and interfaces between computing systems, assessing compute services offerings (AWS, Azure, ZEIT Now, etc.), and so on. 
A chief architect may evaluate a wide range of industry offerings and make pre-approved or favored recommendations to work with particular vendors. As the company matures, the chief architect may also need to work closely with the CTO, and sometimes partner organizations to develop integrations between services. At many companies, the CTO also serves as the chief architect. Software Architect A software architect serves many of the purposes of a chief architect, but is generally responsible for smaller cross-sections of functionality. Architects will often work with the chief architect to implement their slice of the larger architectural vision. Software architects often make tech stack choices for particular applications or features, rather than company-wide decisions. Engineering Project Manager/Engineering Manager/Project Manager An Engineering Project Manager (also called “Engineering Manager” or simply “Project Manager”) is in charge of managing the workflow of an engineering team. Some larger companies have both Engineering Managers and Project Managers. In that case, the Engineering Manager typically acts like the VP of Engineering at the local team scope, while the Project Manager takes on the responsibilities described here. Project Managers typically interface with both product leaders and an engineering leader such as VP of Engineering, CTO, or a middle manager to cultivate and prune the work backlogs, track the progress of work tickets, detailed progress reports (milestone burn down charts, completed vs open tickets, month/month progress reports, etc.) You can think of them as the analog of a shop manager for a manufacturing assembly line. They watch the work floor and make sure that the assembly line runs smoothly, and work product isn’t piling up on the floor in front of a bottleneck. The best Project Managers also spend a lot of time classifying issues and bugs in order to analyze metrics like bug density per feature point, what caused the most bugs (design error, spec error, logic error, syntax error, type error, etc.) and so on. Those kinds of metrics can be used to measure the effectiveness of various initiatives, and point out where improvements can be made to the engineering process. Engineering Managers tend to develop a good understanding of the strengths of various team members, and get good at assigning work tickets to the appropriate responsible parties, although, this should be a collaborative effort, seeking feedback from individual developers on what their career goals are and what they want to focus on, within the bounds of the project scope available. If there is time pressure or work backlogs piling up, the Project Manager should collaborate with the engineering and product leaders to figure out the root cause and correct the dysfunction as soon as possible. Wherever possible, the Project Managers should be the only ones directly delegating tasks to individual engineers in order to avoid the multiple bosses problem. Engineers should have a clear idea of who they report directly to, and who’s in charge of delegating to them. If you’re a different kind of engineering leader, and you’re guilty of delegating directly to engineers, it’s probably a good idea to coordinate with the Engineering Manager in charge of the report you’re delegating to and delegate through them so that the work receives correct, coordinated prioritization, and the Engineering Manager is aware of what each engineer is actively working on at any given moment. 
At very small organizations, the Engineering Manager is often also the CTO and VP of Engineering (with or without the corresponding titles). If that’s you, don’t worry about the previous paragraph. A common dysfunction is that the Engineering Manager can begin to think that because product hands off work for engineering to implement, and Engineering Managers work closely with product teams, that the Engineering Manager reports to a Product Manager. In every case I’ve seen that happen, it was a mistake. See “Avoiding Dysfunctions…” below. Tech Lead/Team Lead The Tech Lead or Team Lead is usually the leader of a small number of developers. They are usually senior engineers who act like mentors, examples, and guides for the rest of the team. Usually, engineers report to the project manager or engineering manager, but a tech lead may be responsible for the team’s code quality measures, such as ensuring that adequate code reviews are being conducted, and that the team’s technical standards (such as TDD) are being upheld. Engineer Career Progressions Generally, engineers can take one of two career paths: move into management, or keep coding. Management positions aren’t for everyone. Lots of engineers prefer to stay on the technical path. That progression can take many directions, twists, and turns, but could look something like this: Intern -> Junior Software Developer -> Software Developer/Engineer -> Senior Software Engineer -> Principal Software Engineer -> Software Architect -> Senior Software Architect -> Chief Architect -> CTO -> Engineering Fellow Alternatively, for those engineers interested in a people leadership role, a progression might look something like this: Intern -> Junior Software Developer -> Software Developer/Engineer -> Team Lead/Tech Lead -> Engineering Manager/Project Manager -> Senior Engineering Manager -> Director of Engineering -> VP of Engineering Avoiding Dysfunctions in Engineering Leadership IMO, VP of Engineering, CTO, VP of Product, and VP of Marketing should all report directly to the CEO. Each of them needs to be in charge of their own process. External facing CTOs should not have direct reports (if they do, it usually means they are filling both the CTO and VP of Engineering Roles). Instead, the Engineering leaders report to the VP of Engineering. This is to avoid the two bosses dysfunction, but also because these roles are fundamentally different: one focused on the customer and how the organization fits into the wider world, and the other focused on internal, day-to-day operations. They’re two wildly different skill sets, with sometimes competing priorities. I’ve seen a lot of dysfunction in engineering leadership because of confusion about which engineering leaders are responsible for what, and it tends to be a recipe for disaster. Whatever is right for your organization, make sure that responsibilities and chain of authority are clear, in order to avoid engineers feeling torn between two or three different “bosses”. Likewise, in an organization of sufficient size, product and engineering need to be two separately led teams. What I mean by that is that the product managers should own the product roadmap. They should be evangelists for the users, and they should be really plugged into the users, often engaging with them 1:1 and learning about their workflows and pain-points in great depth. They should be experts on what the market needs, and they should be very familiar with the company’s strengths and capabilities to fill those needs. 
That said, the VP of Engineering (or whomever is filling that role) needs to be in charge of delivery, and production pace. While the product managers should own the roadmap, the engineering managers need to be responsible for taking those roadmap priorities, matching them to the engineering capacity, and reporting on the timing. Product and marketing teams will have strong opinions about when something should ship, but only the engineering management has a good gauge of whether or not those delivery timelines are possible given the roadmap requirements. The engineering team needs the authority not simply to push back on timing, but in most cases, to completely own timing, working with the CEO, product, and marketing teams to figure out priorities, understand strategic needs of the company, and then help shape a development cadence that can meet those needs without imposing drop-dead deadlines that ultimately hurt the company’s ability to deliver quality products at a reliable pace. The best performing teams I’ve ever been involved with subscribed to the no deadlines approach. We build great products without announcing them in advance, and then let the marketing teams promote work that is already done. Alternatively, when you’re working in the public view, transparency is a great solution. Instead of cramming to meet an arbitrary deadline, actively share your progress, with ticket burn-down charts, a clear view of remaining work, progress, pace, and remaining scope, and change over time that can indicate scope creep. When you share detailed information about the progress being made, and share the philosophy that we can’t promise a delivery date, but we can share everything we know about our progress with you, people can see for themselves the work and the pace. Because of differing, often competing goals, product, marketing and engineering need to be separate roles reporting directly to the CEO where none of them can dictate to each other. If your team feels time pressure to work overtime, or crunch to get some key deliverable out before some drop-dead deadline, it points to a dysfunction here. Either the engineering managers are reporting to the wrong people, or the team lacks a strong engineering leader who understands the futility of software estimates and the need for a collaborative give-and-take between engineering and product in order to ensure the flexibility of shipping scaled-back MVPs to hit delivery targets. Product should own the continuous discovery process. Engineering should own the continuous delivery process. Marketing should work hand-in-hand with the product team to ensure that product messaging to the wider world is on-point. The whole thing should fit together like a pipeline, creating a smoothly flowing, positive feedback cycle. Like an assembly line, the slowest bottleneck in the process must set the pace for the rest of the process, otherwise, it will lead to an ever-growing backlog that piles up so much that backlog items become obsolete, and backlog management becomes a full-time job. Product teams who feel like engineering is not keeping pace should focus first on quality of engineering hand-off deliverables. Have we done adequate design review? Has an engineer had a chance to provide constructive feedback before handoff? 80% of software bugs are caused by specification or UX design errors, and many of those can be caught before work ever gets handed off to an engineering team. 
Once you have that process finely tuned, ask yourself if you’ve really explored the product design space thoroughly enough. Did you build one UX and call it done, or did you try multiple variations? Building and testing variations on user workflows is one of the most valuable contributions a product team can make. Do you have a group of trusted users or customers you can run A/B prototype tests with? One of the biggest dysfunctions of software teams is that the product team is producing sub-par deliverables (sometimes little more than a few rushed, buggy mock-ups), and failing to run any of them by customers or engineers prior to handing them off. That dysfunction causes a pileup of re-work and engineering backlog that often gets blamed on engineering teams. Make sure that the delegation of responsibilities makes sense, that you’re not putting undue time pressure on engineering, and that you have a great product team engaged in a collaborative product discovery process, working with real users to build the best product. Engineering managers, I’m not letting you off the hook. If these dysfunctions exist on your team, it’s your responsibility to address them with product, marketing, and business leadership, and spearhead requirements for engineering hand-offs. It’s also your responsibility to protect the productive pace of your team, go to bat for additional resources if your team is being pressured to produce more than your current capacity can handle, to report clearly on the work pacing and backlog, and to demo completed work and ensure that your team is getting due credit for the fine work that is being done. Don’t place blame, but do demonstrate that your team is doing their very best work.
https://medium.com/javascript-scene/software-roles-and-titles-e3f0b69c410c
['Eric Elliott']
2020-07-17 00:54:42.751000+00:00
['Engineering Management', 'Software Engineering', 'Software Development', 'Technology', 'Startup']
We All Have a Story to Tell
I believe writers are open books. Our readers are privy to our life experiences, our families, our personality traits and our thoughts and feelings. They share our secrets and our memories through our written words, but what happens when we have a story we yearn to tell but it can’t be told? What happens when we can’t share the truth and there is no outlet? There are circumstances when it’s just not enough to write it for yourself, when you feel the need to be heard but you vow yourself to silence. Perhaps it’s too private or we fear consequences, or it’s a part of us we aren’t willing to share with the world. It could be the story is too painful to share. I recently wrote a story that will never be published. No one other than me will ever read it. My story is too painful. It keeps me awake at night crushing me. Allowing others in is my only hope for escape. Therefore, I will never be free. “It must be a divine power or non-human force because every day I tell myself I’m done, yet I’m still here. I want to surrender but there’s this unexplainable will to keep going that I never knew I had.” At the same time my story will eat me alive. It weighs heavy on me. Each word is carefully thought out. Emotions are adjectives. Actions are verbs. Specifics are nouns, yet no one will ever be given the opportunity to connect with each carefully chosen word. There are some secrets I cannot share. There are certain truths that cannot be told. There’s a blank page in every book. “I have no resolution. I’m incapable of freeing you from your pain. Each new day picks up where the last left off and I feel as desperate as you do. While you feel undeserving I feel as if I’m not doing enough.” I am failing myself and my readers. I’m stifling characters, burying details and barricading scenes. There isn’t always a hero in the hero’s journey. I am no hero. I’m incapable of navigating the unknown world in real life and when we put things in writing they become a part of us that will always be there. I refer to this as the curse of the creative. We become dependent on processing life through the process of writing. We become human when we are recognized as human. We become connected and bonded to those who we hook with our opening paragraph and cheer for us when we triumph in the end.
https://erikasauter.medium.com/we-all-have-a-story-to-tell-ae1af9a87b81
['Erika Sauter']
2018-09-14 17:52:32.014000+00:00
['Life', 'Writing', 'Life Lessons', 'Inspiration', 'Creativity']
My Favorite Pro Creative Apps for 2021
Bear: The one and only home for all my ideas. The tagging system sold me on this. It’s not the only app with this feature, but it does do it rather well. Instead of categories, Bear allows you to add multiple sorting tags to each note. With categories, one note has one category. With tags, one note can have multiple “categories.” As a result, the organization is more complete. Which matters to me because connections between thoughts are important. For instance, let’s say I create a note about a new strategy for my trading algorithm. That would obviously belong in the “Code” category. But what if I also want to add it to my post queue, the “Posts” category? I can’t categorize it as both “Code” and “Posts.” But with Bear, I can. I simply add both “#code” and “#posts” to that one note. Done. Now I can find it under both “#code” and “#posts” in the sidebar. In other apps, I’d have to duplicate the information. Maybe I’d add “post about new trading strategy” to a different note under the Posts category. Obviously, Bear’s tag system is superior. Bear’s note editor in fullscreen mode. Source: author. Moving on, the Bear editor is one of the best I’ve used for on-the-fly ideas. Fast. Clean. Effective. Plus, Markdown support is excellent — I can format quickly even on my phone. This was a problem with my ex-idea jotter, Agenda. Editing was too slow. In contrast, there’s less friction with Bear. A big deal when I’ve got three things to write down at the same time. Now, I’m punctilious when it comes to UI. I probably apply more selection criteria to user interfaces than I’ll apply to my wife. Corner radius of a single button looks off? Junk it. There’s no compromise when it comes to design. I look at these tools for hours a day. They must be pretty. And I’m happy to report Bear passes all the tests. In fact, I might even call it impressive. Dark. Clear. Intuitive. I can get behind the aesthetic. The Dieci theme suits my fancy on my iPhone and iPad. On Mac, Solarized Dark suits the bigger screen. You’ve got plenty of skin options if you go pro — more on that later. Extras But things get fun when you leave the app, too. The developers thought of everything. It’s pretty much as integrated as a 1st party app. For instance, I set up a Siri Shortcut to dictate a new Bear note using Siri. When I suddenly come up with a bug fix while cooking, this is a lifesaver. In addition, I have a shortcut to record a post idea from my clipboard — useful when something I’m reading inspires me. Just copy some text, run the shortcut, and the idea is stored in Bear under “#posts”. The whole process takes three seconds. Not all note apps can do this. And the Bear widgets on my iPad Pro’s Today view? Remarkably useful. I set it up to show my most recent notes. So with one tap, I can pick up where I left off. Right from the home screen. No digging through categories. No filtering. This is what notes should be: Natural. Accessible. An extension of your own mind. Bear gets as close to that as I’ve ever experienced. I mean, you can even access your local Bear Notes database on macOS. For the non-programmers out there, this means you have infinite automation options for manipulating your notes. Granted, most of you won’t need this. But hey, it’s an option. Pricing Finally, let’s talk cost. Bear Pro will cost you $1.49 per month. If you don’t have commitment issues, unlike me, you could pay roughly $1.25 per month, billed annually. Whichever option you prefer, just buy it. This pricing is absurd. 
I’m not sure how the good folks who make this are turning a profit. A dollar and a half per month? You can find that walking down the street. The free version doesn’t have sync. And themes are rather limited. If you’re a light user, I guess you could make it work. But it’s a deal-breaker for me. I use this every day. All the time. For hours. To do everything. They could charge ten times their current price, and it’d still be a bargain. It helps that I’m sold on the brand, too. I mean, what developer offers free themed wallpapers? Well-played, Bear. You got me there. In summary, I can’t recommend Bear enough. It’s simple if that’s all you need. But when the rubber meets the road, it keeps up. Value for money, capable, and pretty. What else do you need? Migrating from your existing notes app is trivial, too. So there’s not much stopping you. Alternative: Drafts One alternative I’ve heard about is Drafts: Similar editing, similar organization, just five times uglier. Can’t stand it. Not enough negative space in the layout of the editor. The menus look messy. Too distracting. Overall, not as consistent as Bear. I need serenity in my idea sanctuary. But if you’re less pedantic, why not give it a shot.
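One practical follow-up on the earlier point about accessing the local Bear Notes database on macOS: the sketch below shows how note titles might be read from it with Python. Treat it as a rough illustration only — the database path and the ZSFNOTE/ZTITLE/ZTRASHED/ZMODIFICATIONDATE names are assumptions about Bear's Core Data schema, which can change between versions, so verify them on your own machine and open the file read-only.

```python
# Rough sketch: reading note titles from Bear's local SQLite store on macOS.
# The path and the ZSFNOTE/ZTITLE column names are assumptions about Bear's
# Core Data schema and may differ between versions -- verify before relying on them.
import sqlite3
from pathlib import Path

BEAR_DB = Path.home() / (
    "Library/Group Containers/9K33E3U3T4.net.shinyfrog.bear/"
    "Application Data/database.sqlite"
)

def list_note_titles(db_path: Path = BEAR_DB) -> list[str]:
    # Open read-only so the live database cannot be modified by accident.
    uri = f"file:{db_path}?mode=ro"
    with sqlite3.connect(uri, uri=True) as conn:
        rows = conn.execute(
            "SELECT ZTITLE FROM ZSFNOTE WHERE ZTRASHED = 0 "
            "ORDER BY ZMODIFICATIONDATE DESC"
        ).fetchall()
    return [title for (title,) in rows if title]

if __name__ == "__main__":
    for title in list_note_titles()[:10]:
        print(title)
```

From here, the same read-only connection could feed any automation you like — a nightly backup of note text, a word-count report, or a script that lists notes carrying a particular tag.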
https://medium.com/swlh/best-pro-productivity-creative-apps-crypto-trader-blogger-2021-1aafbdefa919
['Mika Y.']
2020-12-22 08:21:01.713000+00:00
['Business', 'Work', 'Technology', 'Productivity', 'Startup']
How to Start a New Brand
Starting a new brand is no easy feat. By “brand” we mean essentially starting a new business. To do this effectively you first need to make sure you are solving a problem people actually need help solving. Unfortunately, many businesses get too ahead of themselves and think they are solving a problem but come to find out that no one really needs their solution. This, in essence, is why so many businesses and startups fail. If you are indeed solving a problem AND solving it well, you can then proceed to identify what we like to refer to as the first step in branding. Ask yourself: “What is it that you want to be known for?”. Allow your customers to identify with you Answering this question is a safe way to slowly grow a brand that encapsulates what it is you stand for and do. From there, your target audience will do their part in identifying with your business. This part comes naturally. The hope is that the “identification” period is a pleasant one. If it is, consumers will continue to come to you for whatever it is you offer. Our video answers the question of “How to Start a New Brand” by taking our own brand (Couple of Creatives) and explaining the building blocks behind why we started it. Follow along and try to find what it is about your own business that is unique. If you can identify something, focus on ways to make it valuable to your customers. If they can identify with you over your competition then your branding efforts are working.
https://medium.com/couple-of-creatives/how-to-start-a-new-brand-25fe0089ad57
['Andy Leverenz']
2017-02-22 02:45:14.813000+00:00
['Business', 'Branding', 'Entrepreneurship', 'Design', 'Freelancing']
The 12 Days of Christmas if It Were Written by Jane Austen
Image by author The 12 Days of Christmas if It Were Written by Jane Austen After you watch Bridgerton, sing this Regency-themed carol While watching the new show Bridgerton on Netflix and admiring all the historical drama, don’t forget the lady who inspired much of it. Jane Austen not only wrote novels, she wrote parodies of popular tunes including The 12 Days of Christmas. Though not as well-known as Pride and Prejudice, it’s worth a listen or a sing-along. On the first day of Christmas Jane Austen gave to me; A proposal to a lady. Image public domain On the second day of Christmas Jane Austen gave to me; Two pompous men, and a proposal to a lady. On the third day of Christmas Jane Austen gave to me; Three French gowns, two pompous men, and a proposal to a lady. On the fourth day of Christmas Jane Austen gave to me; Four carriages, three French gowns, two pompous men, and a proposal to a lady. Image public domain On the fifth day of Christmas Jane Austen gave to me; Five ruined girls! Four carriages, three French gowns, two pompous men, and a proposal to a lady. Image public domain On the sixth day of Christmas Jane Austen gave to me; Six ladies-a-walking, five ruined girls! Four carriages, three French gowns, two pompous men, and a proposal to a lady. On the seventh day of Christmas Jane Austen gave to me; Seven schemers-a-scheming, six ladies-a-walking, five ruined girls! Four carriages, three French gowns, two pompous men, and a proposal to a lady. On the eighth day of Christmas Jane Austen gave to me; Eight mothers-a-worrying, seven schemers-a-scheming, six ladies-a-walking, five ruined girls! Four carriages, three French gowns, two pompous men, and a proposal to a lady. Image public domain On the ninth day of Christmas Jane Austen gave to me; Nine soldiers-a-flirting, eight mothers-a-worrying, seven schemers-a-scheming, six ladies-a-walking, five ruined girls! Four carriages, three French gowns, two pompous men, and a proposal to a lady. Image public domain On the tenth day of Christmas Jane Austen gave to me; Ten characters misunderstanding, nine soldiers-a-flirting, eight mothers-a-worrying, seven schemers-a-scheming, six ladies-a-walking, five ruined girls! Four carriages, three French gowns, two pompous men, and a proposal to a lady. On the eleventh day of Christmas Jane Austen gave to me; Eleven meaningful glances, ten characters misunderstanding, nine soldiers-a-flirting, eight mothers-a-worrying, seven schemers-a-scheming, six ladies-a-walking, five ruined girls! Four carriages, three French gowns, two pompous men, and a proposal to a lady. Image public domain On the twelfth day of Christmas Jane Austen gave to me; Twelve maids actually working, eleven meaningful glances, ten characters misunderstanding, nine soldiers-a-flirting, eight mothers-a-worrying, seven schemers-a-scheming, six ladies-a-walking, five ruined girls! Four carriages, three French gowns, two pompous men, and a proposal to a lady.
https://medium.com/jane-austens-wastebasket/the-12-days-of-christmas-if-it-were-written-by-jane-austen-d61b0ac0492d
['Kyrie Gray']
2020-12-26 18:30:40.435000+00:00
['Books', 'Literature', 'Satire', 'Music', 'Humor']
This is What Could Make or Break the U.S. This Fall
This is What Could Make or Break the U.S. This Fall If the United States doesn’t get its act together, it’s going to be a tough autumn As May gave way to June, rates of Covid-19 cases and deaths were falling across much of the United States, especially in New York, New Jersey, Michigan, and many of the virus’s springtime hot spots. Some epidemiological models even predicted that warm and sunny weather, coupled with more open windows and outdoor-centric lifestyles, would push infection rates so low that much of America could return to a state of relative normalcy. Of course, things haven’t played out that way. Falling case and death counts helped lull many parts of the country into a false sense of security. In many states, imprudent reopenings, coupled with poor adherence to commonsense safety measures, gave a dwindling virus a big boost. “We declared victory at a plateau a couple months ago, and now we have a brand-new peak that has broken the previous record by twice the magnitude,” says Mark Cameron, PhD, an associate professor in the Department of Population and Quantitative Health Sciences at Case Western Reserve University School of Medicine in Cleveland. Cameron and other experts say that if the United States makes the right moves now — starting today — there may still be time to right the ship before the fall. But if we don’t, prognostications for the coming months are almost uniformly dire. “Whether you’re looking at individual states or the whole country, the outlook right now is grim,” he says. Creating fresh reservoirs Over and over again, virus experts highlight two foreseeable events as likely to cause major trouble this fall. Those events are school reopenings and the advent of the cold and flu season. “I understand the need for parents to send their kids back to school, but I think school reopenings are going to create huge reservoirs of infected people,” says Lee Riley, MD, a professor and division head of infectious diseases and vaccinology at UC-Berkeley. Riley says that if students and teachers rigorously adhere to mask guidelines and other virus-blocking protocols, and if people who develop symptoms or who are exposed to an infected individual immediately self-quarantine, it’s possible that schools could safely reopen. But he says that the politicization of these measures — and masks in particular — will make this level of adherence unlikely. “Kids will bring the infection home, and many people will develop severe infections, especially parents and grandparents who have preexisting conditions,” he says. “I don’t think there’s any question that will happen.” “We are roughly seven months into this pandemic in the U.S., which is the most technically sophisticated country on the planet, and we’re still struggling to get aggressive and widespread testing in every community.” Experts say the arrival of the cold and flu season will worsen this already difficult situation — and in more ways than one. For starters, Riley says that people who catch one of the seasonal bugs may be more likely to develop a serious infection if also exposed to SARS-CoV-2. “I think it will be very important for people to get the flu vaccine, and I think demand for that vaccine will create shortages,” he says. Clinics overwhelmed by non-Covid-19 patients Common cold and flu cases could also put a strain on the country’s still-feeble Covid-19 testing capabilities. 
“When we start seeing the normal circulation of influenza and this whole huge family of viruses that cause respiratory illness at this time every year, we’re going to have a lot of people developing symptoms that are consistent with Covid-19,” says Julie Fischer, PhD, a microbiologist at the Georgetown University Center for Global Health Science and Security. Fischer says that easy access to quick and accurate Covid-19 testing will be absolutely vital when it comes to identifying the true coronavirus cases and ensuring that those people receive prompt and appropriate treatment. Testing will also ensure that hospitals and clinics aren’t overwhelmed with non-Covid-19 patients. Unfortunately, it seems unlikely that these testing capabilities will be in place by fall. “We are roughly seven months into this pandemic in the U.S., which is the most technically sophisticated country on the planet, and we’re still struggling to get aggressive and widespread testing in every community,” Fischer says. “We’re still rationing.” She says that America’s testing deficiencies do not stem from a lack of technical know-how or capability. “The problem is that [testing] requires a lot of planning and communication and supporting policies, all of which are leadership issues,” Fischer says. In states or cities that have strong leaders who have “been guided by science,” she says that testing capabilities are good and getting better. “But in places where leadership or communication or coordination have broken down, we’re still struggling with testing,” she says. “The fact that we’re still dealing with these basic testing questions is incredibly frustrating.” If everyone got on board with those measures, and if the country’s leaders ensured that testing access is appropriately ramped up, the fall could be a time of minor and controlled outbreaks. Time is running out There’s still time — roughly two months — to ramp up testing capabilities across the country. “But this will take real leadership, including at the national level,” Fischer says. Other experts agree that the United States still has a chance to get its act together in time to prevent calamity this fall. “If we are diligent and do everything we can to break the chains of transmission, we could get down the other side of this current wave by October,” says Case Western Reserve’s Cameron. “But we can’t make the same mistakes we made after the spring.” What needs to happen? The answers aren’t surprising. “Other countries have been able to lower their curves by having good mask compliance, by controlling the opening of risky businesses like bars, and by avoiding large groups of people getting together without adequate social distancing,” Cameron says. If everyone got on board with those measures, and if the country’s leaders ensured that testing access is appropriately ramped up, the fall could be a time of minor and controlled outbreaks. Throw in the emergence of a new and highly effective form of Covid-19 treatment — and several are in the works — and there’s a slim chance that autumn could turn out to be a happier and safer time in the United States than summer has proven to be. But it’s not looking good. “I wish I had a more hopeful view for the fall, but in the U.S. there are a lot of people who don’t like to follow rules,” says Efraín Rivera-Serrano, PhD, a molecular virologist at the University of North Carolina at Chapel Hill. 
Rivera-Serrano says that all the models that predicted lower transmission rates this summer were based primarily on environmental variables — such as more UV light and greater humidity levels. Those optimistic models didn’t account for huge chunks of the country ignoring social distancing directives and other safety measures. “I don’t have high hopes, because I don’t trust people’s behavior,” he adds.
https://elemental.medium.com/theres-still-some-hope-for-the-u-s-this-fall-9b9527e9cb97
['Markham Heid']
2020-08-07 19:29:42.614000+00:00
['The Nuance', 'Covid 19', 'Coronavirus', 'Public Health', 'Health']
Blood pressure data analysis from NHANES dataset.
Speaking of sex differences in BP, I highly recommend the following article for further understanding of the data: https://academic.oup.com/ajh/article/31/12/1247/5123934, and for further explanation of the effects of aging on BP in both sexes, I recommend this article: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4768730/. Overall, the analysis of the NHANES dataset showed that the difference between the sexes in diastolic (t = -2.47) and systolic BP (t = 3.1) in the age band [0–10 years] is small compared to other age bands. This is explained, among other reasons, by the low sexual differentiation in this pre-puberty period. The difference between males and females in systolic BP reaches its maximum value in the age band [20–30 years] (t = 37.31), and in diastolic BP in the age band [30–40 years] (t = 19.68), with both BPs being higher in males than in females (this is why the t value is positive, since variable 1 is always the male group and variable 2 the female group). After the greatest difference in systolic BP in [20–30 years], the difference drops over the years, reaching a negative t value of -1.9 in the age band [60–70 years]. The negative t value means that, in our sample, the mean systolic BP of females is higher than that of males. The diastolic BP difference between males and females also shrinks over the years. Many factors contribute to this observed result; one of them may be the drop in sex hormones in females as they get older. Using the “t age” values observed for systolic BP, we can see that males show a large increase in BP, especially between the age bands [10–20 years] (t = 33.16) and [20–30 years] (t = 34.71), which slows down in older groups, for example [70–80 years] (t = 4.17). Females, on the other hand, start with a slower increase in systolic BP in the age bands [10–20 years] (t = 23.85) and [20–30 years] (t = 17.47). Females then increase their t values in the age band [40–50 years] (t = 20.9), reaching a higher mean systolic BP in the age band [60–70 years] (in this sample only). Using the “t age” values for diastolic BP, I would highlight a few brief observations. First of all, it’s interesting to note the “U” shape of the data over the years for both sexes, increasing until the age band [40–50 years] and decreasing after it. The purpose of calculating “t male” and “t female” was to observe variations between diastolic and systolic BP in males and females, respectively, between age bands. In the first age band [0–10 years] we observe a “t female” of 104.14 and a “t male” of 103.36, which increase in the age band [10–20 years] and fall as age increases. This was not an expected result, since the difference between the means increases after 40 years old. However, we also see a continuous increase in the standard deviation of the variables as the groups get older, especially in systolic BP. It is important to highlight that many biases might be contained in this data analysis. For example, smoking, HDL, cholesterol, obesity, nutrition behaviour, alcohol consumption and diabetes were not explored between age bands and sexes and might influence BP. In summary, the difference between males and females in BP is minimal between 0–10 years old. The difference becomes clear between 10–40 years old, with males having greater systolic and diastolic BP. Those differences are reduced between 40–70 years old; between 70–80 years old the systolic difference reverses (higher in females in this sample), while the diastolic difference keeps shrinking. Both sexes show an increase in systolic BP as they get older. 
Both sexes also show an increase in diastolic BP as they get older until 40–50 years; after this point, diastolic BP decreases.
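For readers who want to reproduce this kind of comparison, here is a minimal sketch of how per-age-band t-values like the ones above could be computed with pandas and SciPy. This is not the author's original code: the NHANES variable names (RIAGENDR, RIDAGEYR, BPXSY1, BPXDI1) and the choice of Welch's t-test are assumptions about how the merged dataset was prepared.

```python
# Minimal sketch: Welch t-tests of male vs. female BP per 10-year age band.
# Assumes a merged NHANES dataframe with the standard variable names
# RIAGENDR (1 = male, 2 = female), RIDAGEYR, BPXSY1, BPXDI1.
import pandas as pd
from scipy import stats

def bp_t_by_age_band(df: pd.DataFrame) -> pd.DataFrame:
    df = df.dropna(subset=["RIAGENDR", "RIDAGEYR", "BPXSY1", "BPXDI1"]).copy()
    # 10-year bands: [0-10), [10-20), ..., [70-80)
    df["age_band"] = pd.cut(df["RIDAGEYR"], bins=range(0, 90, 10), right=False)

    rows = []
    for band, grp in df.groupby("age_band", observed=True):
        males = grp[grp["RIAGENDR"] == 1]
        females = grp[grp["RIAGENDR"] == 2]
        if len(males) < 2 or len(females) < 2:
            continue
        # Variable 1 = males, variable 2 = females, so positive t means higher male BP.
        t_sys, p_sys = stats.ttest_ind(males["BPXSY1"], females["BPXSY1"], equal_var=False)
        t_dia, p_dia = stats.ttest_ind(males["BPXDI1"], females["BPXDI1"], equal_var=False)
        rows.append({"age_band": str(band),
                     "t_systolic": t_sys, "p_systolic": p_sys,
                     "t_diastolic": t_dia, "p_diastolic": p_dia})
    return pd.DataFrame(rows)

# Example usage, assuming the demographics and BP files were already merged on SEQN:
# result = bp_t_by_age_band(nhanes_df)
# print(result)
```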
https://joaoguilherm3.medium.com/blood-pressure-data-analysis-from-nhanes-dataset-51ec40d30cec
['João Guilherme']
2020-11-06 20:46:20.094000+00:00
['Python', 'Data Visualization', 'Pandas', 'Science', 'Blood Pressure']
Machine Learning — How it works. Definitions of AI and ML. What their…
Photo by Alina Grubnyak on Unsplash You hear the words “Machine Learning” and “Artificial Intelligence” tossed around all the time nowadays. News articles pop up about how Artificial Intelligence (AI) is being used for predictive policing. You hear how companies are rushing to implement Machine Learning (ML) into their products. But what are AI and ML? How do they work? Read on to find out. What’s the Difference? What are They? In everyday life, we use ML interchangeably with AI, but they’re different! AI is the whole domain whereas ML is a specific field in AI. In other words, AI is to math as ML is to geometry. So what are they? AI is the act of getting a machine to do actions that typically require human intelligence. This is a broad definition because it’s the definition for the whole field. AI applications range from chatbots, to composing music, to designing airplane parts! How does AI do this? One approach is ML. Machine Learning processes give machines data, then have them learn automatically from that data to produce actions. This may seem simple at first, but actually writing a program by hand to classify cats and dogs seems like an impossible task! But don’t worry, it’s possible, and you’ll learn how it works in the next section.
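To make that definition concrete, here is a minimal sketch of the idea — the model is given labeled data and learns the rule itself, rather than us writing the rule by hand. This example is not from the article; the scikit-learn classifier, the toy features (weight and ear length), and the labels are all illustrative assumptions.

```python
# A tiny sketch of "give the machine data and let it learn", using scikit-learn.
# The features (weight in kg, ear length in cm) and labels are invented for
# illustration -- not a real cats-vs-dogs dataset.
from sklearn.tree import DecisionTreeClassifier

# Training data: each row is [weight_kg, ear_length_cm]
features = [[4.0, 6.5], [5.2, 7.0], [3.8, 6.0],        # cats
            [20.0, 12.0], [30.5, 14.0], [25.0, 11.5]]  # dogs
labels = ["cat", "cat", "cat", "dog", "dog", "dog"]

# The model learns a rule from the data instead of us hand-coding one.
model = DecisionTreeClassifier()
model.fit(features, labels)

# Predict the class of a new, unseen animal.
print(model.predict([[4.5, 6.8]]))  # expected output: ['cat']
```

The same pattern — collect labeled examples, fit a model, predict on new inputs — scales up from this toy case to image classifiers trained on millions of cat and dog photos.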
https://medium.com/datadriveninvestor/machine-learning-how-it-works-900b53d0e3d7
['Dickson Wu']
2020-11-10 20:33:42.282000+00:00
['Machine Learning', 'Artificial Intelligence', 'AI', 'Data Science', 'Programming']
Top 10 Benefits Of Artificial Intelligence
Did you know that Artificial Intelligence will contribute a whopping $15.7 trillion to the global economy by 2030!? In addition to economic benefits, AI is also responsible for making our lives simpler. This article on the Benefits Of Artificial Intelligence will help you understand how Artificial Intelligence is impacting all domains of our lives and ultimately benefiting humankind. I’ll be discussing the benefits of Artificial Intelligence in the following domains: Automation Productivity Decision Making Solving Complex Problems Economy Managing Repetitive Tasks Personalization Global Defense Disaster Management Lifestyle Increased Automation Artificial Intelligence can be used to automate anything, ranging from tasks that involve extreme labor to the process of recruitment. That’s right! There are any number of AI-based applications that can be used to automate the recruitment process. Such tools help free employees from tedious manual tasks and allow them to focus on complex tasks like strategizing and decision making. Increased Automation — Benefits Of Artificial Intelligence — Edureka An example of this is the conversational AI recruiter Mya. This application focuses on automating tedious parts of the recruitment process such as scheduling, screening, and sourcing. Mya is trained using advanced Machine Learning algorithms, and it also uses Natural Language Processing (NLP) to pick up on details that come up in a conversation. Mya is also responsible for creating candidate profiles, performing analytics, and finally shortlisting applicants. Increased Productivity Artificial Intelligence has become a necessity in the business world. It is being used to manage highly computational tasks that require maximum effort and time. Did you know that 64% of businesses depend on AI-based applications for their increased productivity and growth? Increased Productivity — Benefits Of Artificial Intelligence — Edureka An example of such an application is the Legal Robot. I call it the Harvey Specter of the virtual world. This bot uses Machine Learning techniques like Deep Learning and Natural Language Processing to understand and analyze legal documents, find and fix costly legal errors, collaborate with experienced legal professionals, clarify legal terms by implementing an AI-based scoring system, and so on. It also allows you to compare your contract with those in the same industry to ensure yours is standard. Smart Decision Making One of the most important goals of Artificial Intelligence is to help in making smarter business decisions. Salesforce Einstein, which is a comprehensive AI for CRM (Customer Relationship Management), has managed to do that quite effectively. As Albert Einstein said: “The definition of genius is taking the complex and making it simple.” Smart Decision Making — Benefits Of Artificial Intelligence — Edureka Salesforce Einstein is removing the complexity of Artificial Intelligence and enabling organizations to deliver smarter, more personalized customer experiences. Driven by advanced Machine Learning, Deep Learning, Natural Language Processing, and predictive modeling, Einstein is implemented in large-scale businesses for discovering useful insights, forecasting market behavior, and making better decisions. Solve Complex Problems Throughout the years, AI has progressed from simple Machine Learning algorithms to advanced machine learning concepts such as Deep Learning.
This growth in AI has helped companies solve complex issues such as fraud detection, medical diagnosis, weather forecasting and so on. Solve Complex Problems — Benefits Of Artificial Intelligence — Edureka Consider the use case of how PayPal uses Artificial Intelligence for fraud detection. Thanks to deep learning, PayPal is now able to identify possible fraudulent activities very precisely. PayPal processed over $235 billion in payments from four billion transactions by its more than 170 million customers. Machine learning and deep learning algorithms mine data from the customer’s purchasing history in addition to reviewing patterns of likely fraud stored in its databases and can tell whether a particular transaction is fraudulent or not. Strengthens Economy Regardless of whether AI is considered a threat to the world, it is estimated to contribute over $15 trillion to the world economy by the year 2030. According to a recent report by PwC, the progressive advances in AI will increase the global GDP by up to 14% between now and 2030, the equivalent of an additional $15.7 trillion contribution to the world’s economy. Strengthens Economy — Benefits Of Artificial Intelligence — Edureka It is also said that the most significant economic gains from AI will be in China and North America. These two countries will account for almost 70% of the global economic impact. The same report also reveals that the greatest impact of Artificial Intelligence will be in the field of healthcare and robotics. The report also states that approximately $6.6 trillion of the expected GDP growth will come from productivity gains, especially in the coming years. Major contributors to this growth include the automation of routine tasks and the development of intelligent bots and tools that can perform all human-level tasks. Presently, most of the tech giants are already in the process of using AI as a solution to laborious tasks. However, companies that are slow to adopt these AI-based solutions will find themselves at a serious competitive disadvantage. Managing Repetitive Tasks Performing repetitive tasks can become very monotonous and time-consuming. Using AI for tiresome and routine tasks can help us focus on the most important tasks in our to-do list. An example of such an AI is the Virtual Financial assistant used by the Bank Of America, called Erica. Erica implements AI and ML techniques to cater to the bank’s customer service requirements. It does this by creating credit report updates, facilitating bill payments and helping customers with simple transactions. Erica’s capabilities have recently been expanded to help clients make smarter financial decisions, by providing them with personalized insights. As of 2019, Erica has surpassed 6 million users and has serviced over 35 million customer service requests. Personalization Research from McKinsey found that brands that excel at personalization deliver five to eight times the marketing ROI and boost their sales by more than 10% over companies that don’t personalize. Personalization can be an overwhelming and time-consuming task, but it can be simplified through artificial intelligence. In fact, it’s never been easier to target customers with the right product. An example of this is the UK based fashion company ‘Thread’ that uses AI to provide personalized clothing recommendations for each customer. Personalization — Benefits Of Artificial Intelligence — Edureka Most customers would love a personal stylist, especially one that comes at no charge. 
But staffing enough stylists for 650,000 customers would be expensive. Instead, UK-based fashion company Thread uses AI to provide personalized clothing recommendations for each of its customers. Customers take style quizzes to provide data about their personal style. Each week, customers receive personalized recommendations that they can vote up or down. Thread uses a Machine Learning algorithm called Thimble that uses customer data to find patterns and understand the likes of the buyer. It then suggests clothes based on the customer's taste. Global Defense The most advanced robots in the world are being built with global defense applications in mind. This is no surprise, since any cutting-edge technology first gets implemented in military applications. Though most of these applications don't see the light of day, one example that we know of is the AnBot. Global Defense — Benefits Of Artificial Intelligence — Edureka The AI-based AnBot, an armed police robot designed by China's National Defence University, is capable of reaching top speeds of 11 mph and is intended to patrol areas and, in the case of danger, deploy an "electrically charged riot control tool." The intelligent machine stands at a height of 1.6m and can spot individuals with criminal records. The AnBot has contributed to enhancing security by keeping track of any suspicious activity happening in its vicinity. Disaster Management For most of us, precise weather forecasting makes vacation planning easier, but even the smallest advancement in predicting the weather majorly impacts the market. Accurate weather forecasting allows farmers to make critical decisions about planting and harvesting. It makes shipping easier and safer. And most importantly, it can be used to predict natural disasters that impact the lives of millions. Weather Forecast — Benefits Of Artificial Intelligence — Edureka After years of research, IBM partnered with the Weather Company and acquired tons and tons of data. This partnership gave IBM access to the Weather Company's predictive models, which provided tons of weather data that it could feed into IBM's AI platform Watson to attempt to improve predictions. In 2016 the Weather Company claimed their models used more than 100 terabytes of third-party data daily. The product of this merger is the AI-based IBM Deep Thunder. The system provides highly customized information for business clients by using hyper-local forecasts — at a 0.2 to 1.2-mile resolution. This information is useful for transportation companies, utility companies and even retailers. Enhances Lifestyle In the recent past, Artificial Intelligence has evolved from a science-fiction movie plot to an essential part of our everyday lives. Since the emergence of AI in the 1950s, we have seen exponential growth in its potential. We use AI-based virtual assistants such as Siri, Cortana and Alexa to interact with our phones and other devices; it is used to predict deadly diseases such as ALS and leukemia. Enhanced Lifestyle — Benefits Of Artificial Intelligence — Edureka Amazon monitors our browsing habits and then serves up products it thinks we'd like to buy, and even Google decides what results to give us based on our search activity. Despite being considered a threat, AI still continues to help us in many ways. 
As Eliezer Yudkowsky, co-founder and research fellow at the Machine Intelligence Research Institute, put it: "By far, the greatest danger of Artificial Intelligence is that people conclude too early that they understand it." On that note, I'd like to conclude by asking you: how do you think Artificial Intelligence will help us create a better world? With this, we come to the end of this Benefits Of Artificial Intelligence blog. Stay tuned for more blogs on the most trending technologies. If you wish to check out more articles on the market's most trending technologies like Artificial Intelligence, DevOps and Ethical Hacking, you can refer to Edureka's official site. Do look out for other articles in this series, which will explain the various other aspects of Deep Learning.
https://medium.com/edureka/benefits-of-artificial-intelligence-dc2d3e64ba80
['Sahiti Kappagantula']
2020-10-06 12:49:10.728000+00:00
['Deep Learning', 'Artificial Intelligence', 'AI', 'Decision Making']
Odds of Dying: What You Should Really Worry About
Odds of Dying: What You Should Really Worry About The things that could seriously get in the way of your well-being, according to the latest statistics from the National Safety Council. When lightning struck during a monsoon storm near our home on July 28, 2016. We went out to chase it. My son, Marius Britt, got this photo. Notice the total absence of sharks. Standing in a thunderstorm with utter glee, I love being pummeled by raindrops the size of crickets and daring God to strike me. I live among rattlesnakes in the Arizona desert, and respect them greatly, but I don’t really fear them so long as I’ve got a good body’s length between us (except that one time when one trapped me in the garden at the side of the house and I had to use my cell phone to call my wife inside the house so she could call the fire department, but that’s another story). Flying doesn’t scare me (as long as I’m not the pilot, alone and lost over the Northern California wilderness in a developing thunderstorm with a faulty compass, but that’s also another story). Anyway, here’s what really scares me: I’m terrified being eaten by a shark. So much so that dangling my feet even in a man-made lake gives me the heebie-jeebies. I can muster no happiness at all swimming in the ocean. The fear is totally irrational, I know. OK, Finally, the News … Blue sharks are supposedly not very aggressive. I get zero comfort in that scientific viewpoint. Photo: NOAA/NEFSC In the pursuit of happiness, fear and worry can be real downers (death even more so). So to put your mind at ease about everything from bee stings to asteroid strikes, here are the leading causes of death, announced yesterday by the National Safety Council. These are the things that could seriously get in the way of your well-being. Top 10 Lifetime Odds of Dying By … Heart Disease: 1 in 6 Cancer: 1 in 7 Chronic lower respiratory disease: 1 in 27 Suicide: 1 in 88 Opioid overdose: 1 in 96 Motor vehicle crash: 1 in 103 Fall: 1 in 114 Gun assault: 1 in 285 Pedestrian incident: 1 in 556 Choices, choices. Photo: Pixabay/SplitShire The data are for 2017, the most recent year compiled and analyzed. They cover “selected categories,” not all possible ways to croak. Here is the rest of the published list — more stuff to worry about, though maybe a little less so (with a few comments by yours truly): Motorcyclist: 1 in 858 (Note #1: these are the odds if you are one, not the odds of being killed by one) (Note #1: these are the odds if you are one, not the odds of being killed by one) Drowning: 1 in 1,117 (this is, like, right behind sharks on my fear scale; hmm…) (this is, like, right behind sharks on my fear scale; hmm…) Fire or smoke: 1 in 1,474 Choking on food: 1 in 2,696 (who knew? practice your Heimlich!) (who knew? practice your Heimlich!) Bicyclist: 1 in 4,047 (see Note #1 above) (see Note #1 above) Accidental gun discharge: 1 in 8,527 Sunstroke: 1 in 8,912 Electrocution, radiation, extreme temperatures and pressure: 1 in 15,638 (one has to use one’s imagination on this one) (one has to use one’s imagination on this one) Sharp objects: 1 in 28,000 (self-inflicted? probably) (self-inflicted? probably) Cataclysmic storm: 1 in 31,394 (if you’re the only person killed by a given tornado, are you part of this stat?) (if you’re the only person killed by a given tornado, are you part of this stat?) 
Hot surfaces and substances: 1 in 46,045 Hornet, wasp and bee stings 1 in 46,562 Dog attack: 1 in 115,111 Plane crash: 1 in 188,364 Lightning: 1 in 218,106 Railway passenger: 1 in 243,765 Lowering the Odds Of course, if you never ride a motorcycle, bicycle or train, you can scratch those off your kick-the-bucket list. And to your friends who are terrified to fly, you can offer a different angle on that tired but true automobile axiom. Tell them: “You’re more likely to die in a dog attack!” Since this article is part of a science-reporting project, I have to add: Eat well, don’t drink too much, sleep better, chill out and get lots of exercise. There, you just lowered the odds of death by some of the Top 10 items above. Fear is Healthy, But … Meanwhile, the NSC has some advice around knowing the numbers and using them wisely: “Fear is natural and healthy,” the agency says. “It can help us respond to danger more quickly or avoid a dangerous situation altogether. “It can also cause us to worry about the wrong things, especially when it comes to estimating our level of risk. If we overestimate our risk in one area, it can lead to anxiety and interfere with carrying out our normal daily routine. Ironically, it also leads us to underestimate real risks that can injure or kill us.” Image: Pixabay/Bibbi228 Sharks & Asteroids Odds of death by asteroid, by the way, weren’t on the NSC list. But two scientists, working separately, have put that figure at 1 in 700,000 and/or 1 in 1.6 million. Take your pick, then worry about something else. Finally, the odds of dying in a shark attack are 1 in 3.7 million, according to one expert estimate. I find zero comfort in that, but I’m happy I live in the desert. You can see the full NSC report here.
https://medium.com/luminate/odds-of-dying-what-you-should-really-worry-about-cc761901565b
['Robert Roy Britt']
2019-01-23 20:46:35.662000+00:00
['Self Improvment', 'Death And Dying', 'Happiness', 'Science', 'Health']
Social Media is Not “Social”
Social Media is Not "Social" It's media. Start treating it that way. Social Media is no longer just Social; it's the most important and underutilized Media Channel available to marketers. It's 2017, and if you haven't been paying really close attention over the last two years, digital and, more importantly, social media have dramatically shifted. You've likely read some headlines and seen digital spending reports rising, but most marketers have largely ignored them. As a marketer, you can no longer look at Social Media as this siloed-off marketing step-child that you pay little or no attention to (except on Wednesdays and every other weekend) each month. You are likely more involved and infatuated with your other favorite marketing channels. You've had them longer and you are more invested in them. They're like family. Entrepreneur and tech soothsayer Gary Vaynerchuk articulated it in a powerful way late last year: "Social Media is just a slang term for the current state of the internet." This is 100% spot on. In its earliest beginnings, Social was a handful of networks, each with their own audiences and capabilities — and it was mostly considered "for the kids." That may have been true 5 or 10 years ago. But in 2017, the networks of Facebook (including Instagram, Facebook Messenger and WhatsApp), Snapchat, Pinterest and YouTube are THE NETWORKS where billions of consumers are spending their time. Furthermore, they are quickly evolving from merely social channels into full-fledged media companies. If you don't buy into Gary V's above thesis on how to look at social in today's digital landscape, then consider the following: In March 2017, Snapchat went public, and in its first week it was already valued higher than Delta, Target, Hershey, Viacom, CBS and Hilton (just to name a few). Chart from Statista Not too bullish on Snapchat? OK, then consider these stats from comScore, eMarketer and Nielsen. 66% of all digital time is spent on a mobile device. 89% of all mobile device time is spent on apps. 49% of mobile app time is on Social Media and Messaging Apps. "Mobile first" doesn't just pertain to your website anymore; it's about evolved consumer behavior patterns toward mobile and social. Do these consumer usage stats have your attention? They should rock you to your marketing core. Social Media has gone beyond being a handful of channels that you can activate your marketing on; these are now the channels you must be activating on. Still, so many seasoned marketers continue to look at it this way — yeah, that's cute, no thanks. It's in the same vein as how they laughed off the importance of Facebook a decade ago, and the same way channels like Snapchat are getting laughed off today. Over 150M users per day are spending minutes and hours on Snapchat alone. That's called scale, and that's why it matters. You can't disregard or disrespect a platform like Snapchat just because you don't understand it or because it's not the same feed format that you've grown accustomed to with Facebook, Twitter, LinkedIn, etc. Mega hotel chains laughed off Airbnb. Major metropolitan taxi medallion holders laughed off Uber. Barnes & Noble and Wal-Mart laughed off Amazon. P&G-Gillette laughed off Dollar Shave Club. …Billions of dollars of lost revenue later, here we are. Digital, social, peer-to-peer messaging apps and the gig economy continue to win, and they will take down more giants formerly assumed to be unkillable as they mature. You have to pay the piper. 
In 2014, marketers freaked out when Facebook changed its algorithm and they lost their pages' organic reach. FREAKED. THE. F. OUT. What brands and marketers had been getting for free, they now needed to pay for — and still not achieve the same results. It sucked, sure. But what marketers took for granted was that this change gave birth to the single most powerful branding and advertising vehicle in history. Print ads were king for over 150 years until radio came along in 1920 and disrupted a centuries-old medium. Radio ads were king until TV advertising hit scale in the 1950s. TV was king, and will be dethroned in 2017 by digital ad spending for the first time in history. Digital moves fast. The consumer internet is really only about 20 years old. Smartphones are only 10 years old. Let that sink in, and consider the dramatic shifts in the marketing/advertising landscape over the last 20 years compared to the previous 300. It took 38 years before 50 million people gained access to radios. It took television 13 years to get to 50 million. It took Instagram a year and a half. It took Flickr two years to reach 100 million uploaded pictures. It took Instagram eight months. Again, digital isn't just fast, it's lightning fast, and the combination of smartphones and social has taken over how we use the internet itself. You likely (or better) understand the relevance of digital marketing in today's landscape. Now you need to wrap your head around the idea that social can, and will, overtake Paid Search, Display, AdWords, etc. in the very near future. The internet itself, as we know it, will give way to the IoT, voice search and AR/VR in the next 15 years… but that's a separate post and rabbit hole to go down altogether. Here's the rub… You cannot be nostalgic or romanticize the advertising and marketing methods that worked for you in the past — 50 years, 15 years or 5 years ago. They are irrelevant to today's society and consumer behaviors. Social media has already become the next thing. Marketers as a whole just haven't been capitalizing on it and giving it the respect it deserves. The cost of entry is as low as it will ever be right now because the mega-brands haven't completely priced out the competition yet. But the clock is ticking, and if you have been reading the tea leaves, it's changing. As an example: in the U.S., the Super Bowl is the most valuable TV ad buy, year in and year out, based on exposure and viewership. Yet in 2017, decade-long major-brand players like Kraft-Heinz and Frito-Lay's Doritos decided to drop out. So, why do you suppose they bailed — and where do you expect those tens of millions of dollars for production and ad buys are going this year? Here's a hint: it's not direct mail. Once the big-boy brands decide to go all-in on social advertising because of the micro-targeting and delivery vehicles it can provide compared to traditional channels, the cost of entry will skyrocket. SMBs are going to feel the strain of those tens of millions of dollars, and the CPM to reach your target is going to go up to 3–5x what it is today by 2020. The supply and demand curve will shift as more businesses continue investing larger spends in digital. "Anti-social" social (or peer-to-peer messaging) is the next iteration of digital communication. Once marketers start positioning themselves around the current state of what consumers are actually doing, they will start making an impact where it matters. 
Instead of scrambling to figure it out after something they initially dismissed actually hit scale and started to matter. Case in point: messaging apps. The next big thing that marketers need to pay attention to is the mass usage of messaging apps. "Kids aren't on Facebook anymore," said the out-of-touch digital marketer. Yes, they are. They're just using different iterations of the platform in the form of peer-to-peer messaging apps like Facebook Messenger, WhatsApp and a slew of others globally. A projected 1.2 billion users will be on Facebook-owned WhatsApp in 2017. Facebook Messenger has another 1B+ active users. This is "where the kids are." Most popular mobile messaging apps worldwide as of January 2017, monthly active users (in millions) from Statista For marketers, Social Media should no longer be a "Should we or shouldn't we" conversation. If you're still having that conversation, asking "What's the ROI of shifting dollars from our PPC spend to social?", or even saying "We want to know how to do social" — then you are still a million miles away from the bulls-eye. What you need to do is reconfigure your entire understanding of digital marketing. Stop focusing on immediate ROI and 12-month marketing calendars that tether you to traditional media buying cycles. Start thinking about the long game: Brand Value and Customer Retention.
https://medium.com/on-advertising/social-media-is-not-social-its-media-start-treating-it-that-way-6cda5c39881a
['Chad Anderson']
2017-03-13 16:03:06.719000+00:00
['Social Media Marketing', 'Facebook', 'Social Media', 'Digital Marketing', 'Marketing']
Google Analytics is the Only SEO Analytics Tool You Need: Here’s How to Use It (Part 1 of 2)
Google Analytics is the Only SEO Analytics Tool You Need: Here's How to Use It (Part 1 of 2) Also, How to Prove the True Value of SEO for Your Boss, Client, or Business SEO is often a Catch-22 for small and medium-sized businesses. One common question we get from our clients perfectly illustrates why: "We hear buzzwords like SEO a lot, but how do we know if investing time and money into SEO is worth it? How do I know how much real value and profit SEO adds to my business?" Interest over time on Google Trends for "SEO" in the United States, 2004 — present. Image via Google Trends. One option to test the value of SEO for your business is to hire a full-time digital marketer, digital analyst, or SEO consultant. Of course, they will have to prove their business value to your company using analytics. Or maybe you want to test the value of SEO yourself using the many SEO tools out there on the market. Backlinko recently created a comprehensive review of 189 SEO tools that are currently on the market. But most of them are costly, or at best freemium tools that lock away most of the valuable features behind the paid tier. And herein lies the chicken-and-egg problem facing many small and medium-sized businesses: businesses want to investigate whether SEO will really have a positive return on their investment of time and money. But they often must pay a big sum upfront for a digital marketer or consultant to analyze whether SEO is the right marketing channel for them in the first place. It doesn't have to be that way — this is where Google Analytics comes in! As you probably know, Google Analytics is a free digital analytics tool, and by some measures, it is already being used by more than half of all websites on the internet. And for most SMBs, it's actually the only SEO analytics tool you need to evaluate the value of SEO for your business. So in this week's post (Part 1), we'll first show you how to find your organic search traffic metrics in Google Analytics. Then we'll walk you through how to use Google Analytics to measure the value of SEO for your manager or client. Part 1 will cover: How to find your organic search traffic in Google Analytics reports How to measure the value of your organic search traffic with Google Analytics Which SEO-related metrics and reports you should track in Google Analytics How to build an SEO dashboard to see your key SEO metrics at a glance Let's begin! How to find your organic search traffic in Google Analytics reports Just so we're on the same page, let's start with some basic definitions. SEO, or search engine optimization, is the technique of growing the amount of high-quality organic search traffic to a website via search engines like Google. By "organic search traffic," we mean website visitors who search a keyword and click on a search engine result, rather than a pay-per-click (PPC) ad. In other words, this is free traffic from search engines, rather than paid traffic from digital ads. Now that we know what SEO is, let's start with how to find and isolate your organic traffic in Google Analytics. There are at least two main ways to look at how your organic traffic is performing. Option 1: Drill Down in the Channels Report The first way is to simply go to your Channels report (Acquisition >> All Traffic >> Channels). This will show you how your different channel groupings are performing in terms of traffic, engagement and conversions. By channels, we mean the different ways that visitors are getting to your website (e.g. 
traffic from Paid Search, Referral, Social, etc). Click on Organic Search to drill down on your organic search traffic (i.e. see metrics for only your visitors from organic search). This will then display your Organic Keywords report, which shows your top-performing keywords sorted by the metrics of your choosing (Acquisition >> All Traffic >> Channels >> Keyword). For example, you may want to sort your organic keywords by bounce rate (the percent of users who get to your site and immediately leave without further action). This will let you see which keywords drive the most highly engaged, high-quality traffic. Segment by Search Engine: You can also segment organic search traffic by source if you want to look at specific search engines (i.e. how many visitors are coming from Google, Yahoo, Bing, etc). Go to Acquisition >> All Traffic >> Channels >> Source (tab). Segment by Landing Page: Lastly, you may want to identify the landing pages that are driving the most organic search traffic to your site. By landing page, we mean the first web page that a visitor sees when they visit your website. To see the highest-traffic landing pages for your organic search traffic, click the Landing Page primary dimension in the Organic Keywords report (Acquisition >> All Traffic >> Channels >> Landing Page). Option 2: Add "Organic Traffic" as a Segment in Any Report However, perhaps you want to further analyze your organic traffic in a different report. If that's the case, you can add the "Organic Traffic" default segment at the top of the report. This will allow you to dive deeper into your Audience, Behavior, and Conversion reports for your Organic Traffic segment. Now that you know how to find SEO metrics in Google Analytics, let's talk about which specific metrics to analyze to quantify the value of your organic traffic. How to measure the value of your organic traffic with Google Analytics So why do you want to measure the impact of your SEO? Because it's typically undervalued by CEOs and SMB owners. After all, quantifying the value of SEO and organic traffic for your business is a unique challenge. Because organic search traffic (like social traffic) often sits at the top of the marketing funnel, many if not most consumers these days do not convert to paid customers the first time they encounter your website. As I mentioned in last week's post, you may need an email marketing or remarketing campaign to make the conversion. As such, organic search traffic often doesn't get the credit it deserves for bringing revenue into the business, and is chronically undervalued by management. That's why I use the "Multi-Channel Funnels" reports in Google Analytics to measure the value of SEO. Specifically, I recommend using the "Assisted Conversions" report (Conversions >> Multi-Channel Funnels >> Assisted Conversions). This is possibly the best report for investigating whether Google Analytics is underestimating the value of organic search traffic (or any channel) with last-click attribution (an attribution model that gives credit for a conversion to whichever channel the "last click" came from). The report focuses on the "Assisted Conversions" metric, which represents conversions in which a channel appeared on the conversion path but was not the final conversion interaction. Like a player in basketball, the value of a channel in digital marketing lies not just in the points scored directly, but also in the number of assists. 
Think about an assist in basketball, where a player may not be the person who actually puts the ball through the hoop, but may assist their teammate. Because assists provide important value to the basketball team and make “scoring” (or “conversions”) possible, the number of assists is an important metric to track to understand the value of a player for a team in both basketball and digital marketing. Therefore, the Assisted Conversion report is your best bet for understanding the true impact of your SEO. You might be surprised to find that the measurable value of your SEO efforts is double what you thought it was. Step 1: Go to your Assisted Conversions report Click Conversions >> Multi-Channel Funnels >> Assisted Conversions. There you’ll see the number of assisted conversions and the value of these conversions for all your channel groupings. As you can see in this screenshot, Display has the highest Assisted / Last Click Conversions ratio for the Google Merchandise Store, meaning Display’s impact on the company’s number of conversions is the most understated. Step 2: Look at your Assisted Conversions for organic search traffic Click “Organic Search” under “MCF channel grouping.” Here you’ll see Step 3: Compare your Assisted Conversion Value with Direct Conversion Value for Each Source (Search Engine) For the business in this screenshot, most of the assisted conversions for organic search is coming from Google. If you compare this company’s Assisted Conversion Value of Google organic search ($2912.08) with its Direct Conversion Value ($2717.08), you’ll see that the real value of Google organic search for this business is more than double its Direct Conversion value. In other words, an SEO analyst can show their manager or client that the value of SEO is double what they originally thought it was based on direct conversions alone! Now that you know how to demonstrate the value of SEO with Google Analytics, let’s talk about which metrics to monitor to understand how organic search is performing for your business. Next Steps Today, we covered: (1) how to isolate your organic search traffic in Google Analytics reports, and (2) how to measure the value of your organic traffic with Google Analytics. As you can tell, learning how to analyze SEO with Google Analytics is not an easy task. It takes a serious amount of investment in time and learning. That’s why at Humanlytics, we’ve been helping a few dozen businesses optimize their digital channels, including their SEO and organic search traffic. Many of these businesses are led by very smart and technical cofounders. But even these entrepreneurs who are trained in digital marketing and data analytics often don’t have the bandwidth or resources to distill actionable insights from their SEO data. This is the reason the next feature we’re building in our digital analytics platform is an AI-based tool to recommend the right digital channels to focus on. This AI tool will tell you whether SEO is the right channel for your business based on your Google Analytics data, so you won’t have to waste any money on the wrong marketing activities. Our AI-based marketing analytics tool whether SEO — or any channel — is right for your business. PC: The Daily Dot In other words, the tool automates everything we’ve explained in this tutorial so you can spend less time learning this stuff through trial-and-error, and more time doing what you do best — running your business. 
If you’re interested in beta testing this feature for free (or need help setting up your conversion goals), sign up with the form at the end of the post, or shoot me an email at [email protected]. Tune in next week for Part 2 of tracking SEO with Google Analytics, where we’ll walk you through how to choose the right SEO metrics for your SEO dashboard with Google Analytics. Specifically we’ll cover:
https://medium.com/analytics-for-humans/google-analytics-is-the-only-seo-analytics-tool-you-need-heres-how-to-use-it-part-1-of-2-451e66ff102
['Patrick Han']
2018-06-08 19:41:55.873000+00:00
['SEO', 'Google Analytics', 'Digital Marketing', 'Startup', 'AI']
Java Concurrency: Locks
A lock is a thread synchronization mechanism like a synchronized block, except that locks can be more sophisticated than Java's synchronized blocks. Locks are created using synchronized blocks, so we cannot get rid of the synchronized keyword entirely. From Java 5, the package java.util.concurrent.locks contains several lock implementations, so you may not have to implement your own locks. But you will still need to know how to use them, and it can still be useful to know the theory behind their implementation. Lock vs Synchronized Block There are a few differences between using a synchronized block and using the Lock API: A synchronized block is fully contained within a method — we can have the Lock API's lock() and unlock() operations in separate methods. A synchronized block doesn't support fairness: any thread can acquire the lock once it is released, and no preference can be specified. We can achieve fairness within the Lock APIs by specifying the fairness property. It makes sure that the longest-waiting thread is given access to the lock. A thread gets blocked if it can't get access to the synchronized block. The Lock API provides the tryLock() method, with which the thread acquires the lock only if it's available and not held by any other thread. This reduces the blocking time of a thread waiting for the lock. A thread that is in the "waiting" state to acquire access to a synchronized block can't be interrupted. The Lock API provides a method lockInterruptibly() which can be used to interrupt the thread while it's waiting for the lock. Lock Reentrance All implicit monitors implement reentrant characteristics. Reentrant means that locks are bound to the current thread. A thread can safely acquire the same lock multiple times without running into deadlocks (e.g. a synchronized method calls another synchronized method on the same object). When the thread first acquires the lock, a hold count is set to one. Before unlocking, the thread can re-enter the lock again, and each time the hold count is incremented by one. For every unlock request, the hold count is decremented by one, and when the hold count reaches 0, the resource is unlocked. Lock Fairness Java's synchronized blocks make no guarantees about the sequence in which threads trying to enter them are granted access. Therefore, if many threads are constantly competing for access to the same synchronized block, there is a risk that one or more of the threads are never granted access — that access is always granted to other threads. This is called starvation. To avoid this, a Lock should be fair. Reentrant locks also offer a fairness parameter, by which the lock abides by the order of lock requests, i.e. after a thread unlocks the resource, the lock goes to the thread which has been waiting the longest. This fairness mode is set up by passing true to the constructor of the lock. Lock Methods Let's take a look at the methods in the Lock interface: lock() — a call to the lock() method increments the hold count by 1 and gives the lock to the thread if the shared resource is initially free. unlock() — a call to the unlock() method decrements the hold count by 1. When this count reaches zero, the resource is released. 
tryLock() — if the resource is not held by any other thread, then a call to tryLock() returns true and the hold count is incremented by one. If the resource is not free, then the method returns false and the thread exits without blocking. tryLock(long timeout, TimeUnit unit) — as per the method, the thread waits for a certain time period, as defined by the arguments of the method, to acquire the lock on the resource before exiting. lockInterruptibly() — this method acquires the lock if the resource is free, while allowing the thread to be interrupted by some other thread while it is acquiring the resource. It means that if the current thread is waiting for the lock and some other thread interrupts it, the current thread returns immediately (with an InterruptedException) without acquiring the lock. Lock Implementations Multiple lock implementations are available in the standard JDK: ReentrantLock ReentrantReadWriteLock StampedLock ReentrantLock The class ReentrantLock is a mutual exclusion lock with the same basic behavior as the implicit monitors accessed via the synchronized keyword, but with extended capabilities. As the name suggests, this lock implements reentrant characteristics, just like implicit monitors. A lock is acquired via lock() and released via unlock(). It's important to wrap your code in a try/finally block to ensure unlocking in case of exceptions. This mechanism is thread-safe just like its synchronized counterpart. If another thread has already acquired the lock, subsequent calls to lock() pause the current thread until the lock has been unlocked. Only one thread can hold the lock at any given time. ReentrantReadWriteLock The ReentrantReadWriteLock class implements the ReadWriteLock interface. The ReadWriteLock interface specifies another type of lock maintaining a pair of locks for read and write access. The idea behind read-write locks is that it's usually safe to read mutable variables concurrently as long as nobody is writing to them. So the read lock can be held simultaneously by multiple threads as long as no thread holds the write lock. This can improve performance and throughput in cases where reads are more frequent than writes. Let's see the rules for acquiring the read lock or write lock by a thread: Read Lock — if no thread has acquired the write lock or requested it, then multiple threads can acquire the read lock. Write Lock — if no threads are reading or writing, then only one thread can acquire the write lock. The example first acquires a write lock in order to put a new value into the map after sleeping for one second. Before this task has finished, two other tasks are submitted that try to read the entry from the map and sleep for one second. When you execute this code sample, you'll notice that both read tasks have to wait the whole second until the writing task has finished. 
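A minimal sketch of basic ReentrantLock usage as described above (the Counter class and its fields are illustrative, not taken from the article):

import java.util.concurrent.locks.ReentrantLock;

class Counter {
    private final ReentrantLock lock = new ReentrantLock();
    private int count = 0;

    void increment() {
        lock.lock();        // blocks until the lock is available
        try {
            count++;        // critical section
        } finally {
            lock.unlock();  // always release, even if an exception is thrown
        }
    }
}

And a sketch along the lines of the read/write-lock example described here, assuming an ExecutorService named executor, a shared HashMap named map and a ReentrantReadWriteLock named lock (these names are assumptions):

import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.locks.ReadWriteLock;
import java.util.concurrent.locks.ReentrantReadWriteLock;

public class ReadWriteLockDemo {
    public static void main(String[] args) {
        ExecutorService executor = Executors.newFixedThreadPool(3);
        Map<String, String> map = new HashMap<>();
        ReadWriteLock lock = new ReentrantReadWriteLock();

        // Writer: holds the write lock for one second, then publishes a value
        executor.submit(() -> {
            lock.writeLock().lock();
            try {
                TimeUnit.SECONDS.sleep(1);
                map.put("foo", "bar");
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            } finally {
                lock.writeLock().unlock();
            }
        });

        // Two readers: both block until the write lock is released,
        // then hold the read lock at the same time
        Runnable readTask = () -> {
            lock.readLock().lock();
            try {
                System.out.println(map.get("foo"));
                TimeUnit.SECONDS.sleep(1);
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            } finally {
                lock.readLock().unlock();
            }
        };
        executor.submit(readTask);
        executor.submit(readTask);

        executor.shutdown();
    }
}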
After the write lock has been released, both read tasks are executed in parallel and print the result simultaneously to the console. They don't have to wait for each other to finish, because read locks can safely be acquired concurrently as long as no write lock is held by another thread. StampedLock Java 8 ships with a new kind of lock called StampedLock, which also supports read and write locks just like in the example above. In contrast to ReadWriteLock, the locking methods of a StampedLock return a stamp represented by a long value. You can use these stamps to either release a lock or check if the lock is still valid. Another feature provided by StampedLock is optimistic locking. Most of the time, read operations don't need to wait for a write operation to complete, and as a result the full-fledged read lock isn't required. Instead, we can read optimistically and upgrade to a read lock only when necessary. Working With Conditions The Condition interface provides the ability for a thread to wait for some condition to occur while executing the critical section. This can occur when a thread acquires access to the critical section but doesn't have the necessary condition to perform its operation. For example, a reader thread can get access to the lock of a shared queue which still doesn't have any data to consume. Traditionally, Java provides the wait(), notify() and notifyAll() methods for thread intercommunication. Conditions have similar mechanisms, but in addition we can create multiple conditions for a single lock. Semaphores In addition to locks, the Concurrency API also supports counting semaphores. Whereas locks usually grant exclusive access to variables or resources, a semaphore is capable of maintaining whole sets of permits. This is useful in different scenarios where you have to limit the amount of concurrent access to certain parts of your application. Here's how to limit access to a long-running task simulated by sleep(5): the executor can potentially run 10 tasks concurrently, but we use a semaphore of size 5, thus limiting concurrent access to 5. It's important to use a try/finally block to properly release the semaphore even in case of exceptions. Executing the code results in the following output:
Semaphore acquired
Semaphore acquired
Semaphore acquired
Semaphore acquired
Semaphore acquired
Could not acquire semaphore
Could not acquire semaphore
Could not acquire semaphore
Could not acquire semaphore
Could not acquire semaphore
The semaphore permits access to the actual long-running operation simulated by sleep(5) for a maximum of 5 threads at a time. Every subsequent call to tryAcquire() waits out the maximum timeout of one second and then gives up, resulting in the console output that no semaphore permit could be acquired. Conclusion In this article, we have seen different implementations of the Lock interface and learned how to use them in multithreaded applications. In the next article, we will look at Executors. Stay tuned.
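A minimal sketch of the semaphore throttling described above, which should reproduce the output shown there (the class name and task body are assumptions based on the description):

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Semaphore;
import java.util.concurrent.TimeUnit;
import java.util.stream.IntStream;

public class SemaphoreDemo {
    public static void main(String[] args) {
        ExecutorService executor = Executors.newFixedThreadPool(10);
        Semaphore semaphore = new Semaphore(5);

        Runnable longRunningTask = () -> {
            boolean permit = false;
            try {
                // wait at most one second for one of the five permits
                permit = semaphore.tryAcquire(1, TimeUnit.SECONDS);
                if (permit) {
                    System.out.println("Semaphore acquired");
                    TimeUnit.SECONDS.sleep(5);  // the long-running operation
                } else {
                    System.out.println("Could not acquire semaphore");
                }
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            } finally {
                if (permit) {
                    semaphore.release();
                }
            }
        };

        IntStream.range(0, 10).forEach(i -> executor.submit(longRunningTask));
        executor.shutdown();
    }
}

Ten tasks are submitted at once; five of them get a permit and sleep for five seconds, while the other five time out after one second and print the failure message.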
https://medium.com/javarevisited/java-concurrency-locks-9d161e1d1847
['Dmytro Timchenko']
2020-11-14 04:47:00.100000+00:00
['Software Development', 'Technology', 'Software Engineering', 'Java', 'Programming']
Another story about microservices: Hexagonal Architecture
When you hear stories about the most gigantic projects having a microservice architecture, you are tempted to introduce dozens of tiny applications that would work for you, like house elves, invisible and undemanding. However, system architectures lie on a spectrum. What we imagine is the extreme end of that spectrum: tiny applications exchanging many messages. At the other end of the spectrum you imagine a giant monolith that stands alone to do too many things. In reality, there are many service-oriented architectures lying somewhere between those two extremes. In a nutshell, a microservice architecture means that each application's, or microservice's, code and resources are its very own and will not be shared with any other app. When two applications need to communicate, they use an application programming interface (API) — a controlled set of rules that both programs can handle. Developers can make many changes to each application as long as it plays well with the API. This idea comes in many flavors, each retaining a different share of the monolith architecture. In this post, we are going to discuss one such variation of microservice architecture, known as Hexagonal Architecture. The first key concept of this architecture is to keep all the business models and logic in a single place, and the second concept — each hexagon should be independent. What is Hexagonal Architecture? Invented by Alistair Cockburn in 2005, Hexagonal Architecture, or, to call it properly, Ports and Adapters, is driven by the idea that the application is central to your system. All inputs and outputs reach or leave the core of the application through a port that isolates the application from external technologies, tools and delivery mechanics. Allow an application to equally be driven by users, programs, automated test or batch scripts, and to be developed and tested in isolation from its eventual run-time devices and databases. Alistair Cockburn Hexagonal Architecture draws a thick line between the software's inside and outside parts, decoupling the business logic from the persistence and service layers. The inside part consists of the use cases and the domain model they are built upon. The outside part includes the UI, database, etc. The connection between them is realized via ports, and their implementation counterparts are called adapters. In this way, Hexagonal Architecture ensures encapsulation of logic in different layers, which brings higher testability and control over the code. Architecture components Each side of the hexagon represents an input, a port that uses an adapter for the specific type of interaction. Hence, ports and adapters form the two major components of Hexagonal Architecture: Ports A port is a gateway, provided by the core logic. It allows the entry or exit of data to and from the application. The simplest implementation of a Port is an API layer. Ports exist in two types: inbound and outbound. An inbound port is the only part of the core exposed to the world, and it defines how the Core Business Logic can be used. An outbound port is an interface the core needs in order to communicate with the outside world. Adapters An adapter transforms one interface into another, creating a bridge between the application and the service that it needs. In Hexagonal Architecture, all communication between the application ports and the primary actors (which use the system to achieve a particular goal) and secondary actors (which the system uses to achieve the primary actor's goals) is done with the help of adapters. 
Therefore, adapters can also be of two types: Primary Adapters The primary, or driving, adapters represent the UI. They are the pieces of code between the user and the core logic. They are called driving adapters because they drive the application and start actions in the core application. Examples of primary adapters are API controllers, web controllers or views. Secondary Adapters The secondary, or driven, adapters represent the connection to back-end databases, external libraries, mail APIs, etc. Each is an implementation of a secondary port, which is an interface. These adapters react to actions initiated by the primary adapters. Domain Model The third component of the architecture is the domain model, a conceptual model that represents the meaningful concepts of the domain that need to be modelled in software. These concepts include the data involved in the business and the rules the business applies to that data. Benefits of Hexagonal Architecture High maintainability, since changes in one area of an application don't affect others. Ports and adapters are replaceable with different implementations that conform to the same interface. The application is agnostic to the outside world, so it can be driven by any number of different controls. The application is independent of external services, so you can develop the inner core before building external services, such as databases. It is easier to test in isolation, since the code is decoupled from the implementation details of the outside world. Our implementation of Hexagonal Architecture One of Sciforce's projects required separating often-changing external elements from internal ones, to lessen the impact of change and simplify the testing process. The standard architectural template is based on the Spring set of frameworks: The Core contains objects that represent the business logic of the service. As is typical for Hexagonal Architecture, the core knows nothing about the outside world, including the network and the file system. All communication with the outside world is handled by the Inbound and Outbound gateway layers. In our application, a Port is a Groovy interface used to access either a Core or an Outbound object, and an Adapter is an implementation of a Port interface that understands how to transform external representations to and from the Core's internal data model. You can swap out an Adapter in the gateway layer: for example, you can enable accepting messages from RabbitMQ instead of HTTP clients with no impact on the Core. We can also substitute Service Stubs for outbound gateways, increasing the speed and reliability of integration tests. Example: Simple application In the example, we show a scheme of a simple application that uses the Spring Framework to accept a REST request with some text, convert the text to lowercase and save it to MongoDB. In the first step, when the client sends an HTTP request, the RestController is responsible for handling it. In terms of Hexagonal Architecture, this controller serves as an Adapter that translates the request from the HTTP protocol into the internal domain model (a simple string in this case). It is also responsible for calling into the Core via the Port and sending the results back to the client. The ConversionService is the object that the inbound Adapter (RestController) invokes via the ConversionPort interface. The service itself doesn't access anything outside of the process: all it needs to do its job is access to the in-memory objects. 
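As an illustration, here is a minimal sketch of what the ports and the core service described above could look like, written in plain Java rather than the project's Groovy; the method names and signatures are assumptions, not the original code:

// Inbound port: how the outside world drives the core
interface ConversionPort {
    String convert(String text);
}

// Outbound port: what the core needs from the outside world
interface PersistencePort {
    void save(String text);
}

// Core service: pure business logic, unaware of HTTP, Spring wiring or MongoDB
class ConversionService implements ConversionPort {

    private final PersistencePort persistence;

    ConversionService(PersistencePort persistence) {
        this.persistence = persistence;
    }

    @Override
    public String convert(String text) {
        String result = text.toLowerCase();  // the core's only piece of business logic
        persistence.save(result);            // delegate storage through the outbound port
        return result;
    }
}

An inbound adapter such as a Spring REST controller would depend only on ConversionPort, and an outbound adapter such as the MongoDB gateway would implement PersistencePort, so either side can be swapped without touching the core.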
After performing all the necessary processing (converting the text to lowercase), it delegates the task of storing the results to the outbound PersistencePort. The MongoDBGateway is the implementation of the PersistencePort. It knows how to adapt the Core's internal model, which is plain text, into a form that MongoDB can handle. Though the example is basic, in more sophisticated systems it might implement exception handling, logging and retry logic in the code. This example shows only three objects, but in an actual application you would have multiple objects in play. For example, a single REST controller would respond to different URLs by calling different services, which then call different outbound gateways. Conclusion To sum things up, the main idea of Hexagonal Architecture is decoupling the application logic from the inputs and outputs. It helps to free your most important code of unnecessary technical details and to achieve flexibility, testability and other important advantages that make your working process more efficient. However, like other architectures, Hexagonal Architecture has its limitations and downsides. For instance, it will effectively double the number of classes on your boundary. When is it the best choice? As it facilitates the detachment of your external dependencies, it will help you most with the classes that you anticipate will be swapped out in production in the future and the classes that you intend to fake in tests.
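To make that swap-out point concrete, here is a rough sketch of what an outbound adapter like the MongoDBGateway might look like, reusing the PersistencePort interface from the earlier sketch; the collection wiring and field name are assumptions:

import com.mongodb.client.MongoCollection;
import org.bson.Document;

// Outbound adapter: implements the core's PersistencePort and hides MongoDB details
class MongoDBGateway implements PersistencePort {

    private final MongoCollection<Document> collection;

    MongoDBGateway(MongoCollection<Document> collection) {
        this.collection = collection;
    }

    @Override
    public void save(String text) {
        // Adapt the core's internal model (plain text) into a form MongoDB can handle
        collection.insertOne(new Document("text", text));
    }
}

In tests, this adapter could be replaced by an in-memory stub of PersistencePort without any change to the core.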
https://medium.com/sciforce/another-story-about-microservices-hexagonal-architecture-23db93fa52a2
[]
2020-01-10 14:40:55.620000+00:00
['Programming', 'Microservices', 'Software Engineering', 'Software Development', 'Software Architecture']
Good Collaborations Are Art, Great Ones Are Kitsch
Heron Preston’s collaboration with oral care brand MOON sounds like something out of MSCHF factory. Known for its purposefully absurd and random viral stunts, MSCHF is the creator of Nike sneakers filled with Holy Water, toaster-shaped bath bombs, and an app making stock investments based on astrological signs. While it certainly wouldn’t look out of place next to the squeaky chicken bong popularized by the “factory”, the limited edition stain removal whitening toothpaste in fact dropped on StockX on October 27th. Asking price climbed from $15 to $27 via DropX and the toothpaste came in a limited batch of 350 items. Collaborations like MOON x Heron Preston, Colgate x Supreme, Aimé Leon Dore x Porsche 964, McDonalds x Travis Scott or White Castle x Telfar are often dismissed as stunt-y, tongue-in-cheek, garish and lowbrow. Be that as it may, most of them aim to be appreciated in an ironic and knowing way. Good collaborations are art, great collaborations are kitsch. They fit into the definition of kitsch perfectly: a replica that’s purposefully fake, and that’s where the joke is. Take it seriously, and you are a goon. There are already obvious parallels between collaborations and the world of art (and kitsch): there are auctions, collectors, dealers, critics, resale marketplaces, monographs. Just like art, collaborations aim to shock and surprise. They can’t be criticized, and they strive to reach high prices and cultural immortality. Power to the Mundane “You know it’s art when the check clears,” said Andy Warhol. With Roy Lichtenstein and Robert Indiana, Warhol made his way into museums by turning the mundane world into works of art by enriching it with pop references, connotations and associations. Warhol’s art is commercial and his commercials are art (a Warhol ad launched Absolut vodka in 1986). At the same time, fine art went from museums into fashion, design and pop culture. Elsa Schiaparelli — the original creator of the newspaper print dress — was probably the proto fashion collaborator who featured her Surrealist friends like Salvador Dali on her designs. In the ’80s, New York designer Willi Smith invited artists, performers and graphic designers to join his project of making art part of daily life. In the early 00s, Jeff Koons, Damien Hirst, Takashi Murakami and Stephen Spouse joined forces with Louis Vuitton where then creative director Marc Jacobs turned the fashion-art collaboration into global cash cows. Recently, Cindy Sherman collaborated with Undercover and Yoyoi Kusama has just released her new Veuve Clicquot La Grande Dame limited-edition bottle and gift box. It retails for $30,000 and comes with a poem. When someone buys a Cindy Sherman x Undercover, they aren’t actually buying a bag or a t-shirt; they’re buying a legit work of art. When they wear it, a person shows off their knowledge and cultural awareness. They also see themselves through a new lens: not as mere consumers, but as collectors. Done right, collaborations generate collectibles, justify high prices, create cult objects, and initiate brands in the domain of intangibles. Thanks to this newly-acquired timelessness, symbolic authority and post-materialistic form, Undercover isn’t a mere commercial entity, but a shrine of culture and human creativity. Through collaborations, brands ingrain themselves in culture, not in a market segment. Culture x Commerce Collaborations transform non-culture into culture. It’s a great business model: collaborations don’t need financial capital, only a strong brand capital. 
Supreme can put its logo on a brick and collaborate with Colgate as long as its brand equity is attractive. Moncler, Mini and Aimé Leon Dore made collaborations integral parts of their DNA. With good reason: compressed trend cycles force brands to constantly come up with the new stuff. Consumers today expect physical products at the unattainable speed of Instagram. A quick solve is to riff off already popular and familiar stuff. Cue in the endless Air Jordan and Supreme collaborations. A brand uses Air Jordan or Supreme’s aesthetic just enough to become kitsch, which gives it a new context and an ironic read and turns it into an insider joke. Collaborations work well in mature markets, where consumers are bored and products are commodified. There are only so many Uniqlo items that a person can own, but not if those items were made by Jun Takashi, Jil Sander or Pharrell. Having the fashion link allows Uniqlo to cultivate “elitism to all:” it can sell a lot of Pharrell t-shirts to a lot of people without diluting its symbolic value. This symbolic value makes a commodity incomparable: a very few people will pick MOON over Crest or Colgate in a pharmacy. But many will select it to add some flex to their bathrooms. Limited editions keep the cultural pioneers interested in the brand, and MOON can enjoy a temporary monopoly by rendering its competition irrelevant: collaborations are hard to replicate. Hardest to replicate are inconsistent and random collaborations, like Heron Preston x MOON or Steven Alan x Mucinex. Their genius is in that they shun any coherence. Coherence is for suckers, because collaborations aren’t brand extensions. They’re a creative expression of a brand that let it flex its zeitgeist muscles, promote it as a trendsetter and turn its products into brand communication. A Chanel snowboard or IKEA x Craig Green make Chanel and IKEA modern and culturally present and curious. An unexpected collaboration attracts collectors, cultural pioneers and hypebeasts. It becomes the source of a brand’s aspirational power. Collaborations Trade in Aspiration For a brand, having aspirational power is everything. In the modern economy, the growth motor isn’t a price. It’s taste, aesthetics, identity and thrill. Economic growth doesn’t come from products, but from the intangible social and cultural capital that a brand creates. Products are just a vehicle for beauty, thrill, identity, transformational experiences and a life aesthetically worth living. Moon toothpaste, in its own words, “is destined to elevate your everyday, oral care routine into a true oral beauty experience.” By collaborating with Heron Preston, MOON puts this mission on steroids. It makes brushing teeth more culturally and socially relevant. Having an orange toothpaste turns everyday hygiene into a creative and inspiring ritual. Every time we brush our teeth with Heron Preston x Moon, we create a social distance between ourselves as those unenlightened enough to use Crest. We also create a link between us and all other cultural pioneers of oral care. Collaborations aren’t a brand gloss. They’re a strategic transformation of a brand’s operating system. In the aspirational economy, this transformation is a matter of a brand’s long-term renewal and cultural relevance. Strategic collaborations across a brand’s entire value chain are akin to making a safe bet on a brand’s cultural and business future. 
At the level of marketing and sales, collaborations protect pricing power, ensure high margins and reframe consumers’ perception of the brand. At the level of a product concept and production, collaborators provide value innovation. At the level of distribution, collaborations expand a brand’s market and renew its customer base. A collaboration between luxury brands and Chinese KOLs give these brands an in with the Chinese customer. A collaboration between Rimowa and streetwear pioneers like Supreme, Bape and Anti Anti Social Club renews brand associations. At the level of merchandising, collaborations give halo to the core collection, re-evaluate brand perception and increase brand consideration. Before Nike launches a new model, it seeds it on runways of its fashion collaborators like Undercover or Sacai. The collaborators add their imprint, making it culturally noteworthy and spurring interest in the model’s later commercial release by Nike. Collaborations are basically a constant brand re-contextualization: they take it from one context and put it into another one. In that sense, there isn’t a “bad” collaboration: collaborations are calculated cultural and business tests. Some contexts are more fertile than others, but just as evolution constantly mixes stuff up to see what sticks (theropods didn’t), a brand stays alive through remixes. Collaborations are the strategy of brand awareness, market expansion and its fountain of youth. Through re-contextualization, collaborations: Allow brands to start trading in exchange value, not in use value. Use value is defined by a product’s functionality. Exchange value is defined by a product’s social appeal. A social hit becomes a market hit. Brands that insert their products in the cultural exchange system and not in a market segment, win. Give everyday products identity. In a crowded competitive landscape, a brand is the key product differentiator. A brand makes products stand for something more than their function and separates them from commodities. A collaboration enforces brand identity, ensures its continuity, and connects products into a narrative. Infuse taste and meaning into ordinary consumption. Today, a brand’s products and services do not only fulfill their basic functions. Their job is to aesthetically enrich their buyers’ lives and become social links that signal status, social distinction and belonging. Collaborations are easier to understand once they’re taken out the domain of brand stunts and into the domain of art. Art is a big business. Art is also a big social and cultural commentator, critic and cynic. It tells us what we need to know about the world we live in and about where the future is going. Collaborations do the same. This article was originally published on Highsnobiety.
https://medium.com/swlh/good-collaborations-are-art-great-ones-are-kitsch-e583fb2374fb
['Ana Andjelic']
2020-11-23 08:30:18.256000+00:00
['Entrepreneurship', 'Art', 'Collaboration', 'Culture', 'Marketing']
Best practices for combining data science and design
By Ricky Hennessy, Sheetal D Raina & Amanda Ward At Fjord, we’ve been combining data science and design on integrated teams for the past three years. By bringing together these two disparate disciplines, we’ve been able to deliver tremendous value to our clients through data-driven products and services that address user needs and solve critical business problems. Despite the enormous potential, most organizations struggle to enable effective collaboration between data science and design. In this article, we’ll share best practices and lessons learned on how data scientists and designers can work together to deliver transformative products and services. Create a Shared Understanding With different backgrounds and ways of seeing the world, it’s important to create a shared understanding between data scientists and designers so we can get the most out of this powerful collaboration. Instead of parallel tracks working in isolation, take advantage of opportunities for knowledge sharing and question asking. By remaining flexible and open to different approaches, data scientists can benefit from a deep understanding of user needs and designers can unlock possibilities presented by large sets of data. When working with one of our aerospace clients, our data science team was performing exploratory data analysis while our design team was interviewing users. By co-locating the team and providing opportunities for spontaneous check-ins, the design team was able to validate research findings using real data, and the data science team was able to gain insight into the peculiarities contained in the data. This led to a much deeper understanding of both the end users and the data. Don’t Lose Sight of Business Goals Data science is a process for solving business problems using data. Before we can begin the process, it’s critical that we have a clear understanding of the business. Too often, the design process leads to an overly user-centric view that ignores real-world realities and constraints. On the other hand, data scientists can become so focused on solving technical problems that they lose sight of core issues that need to be addressed. For any data science and design collaboration to be successful, it’s critical to align user needs with business goals. Ensuring that the team is equipped with business understanding will lead to solutions that deliver value while also addressing user needs. While working with a Fortune 100 manufacturing company, it became apparent that the initial problem at hand was too broad. If we had focused exclusively on what users were looking for or what project stakeholders wanted, we would have developed solutions that delivered little to no value for the organization. Instead, we viewed user needs and stakeholder input through a “business goals” lens, allowing us to narrow our focus to high value problems. Execute Through Seamless Collaboration Defining the right problem starts with the utilization of a “business goals” mindset. However, defining the problem is just the beginning. Individual understanding leads nowhere unless there is effective collaboration between team members. At every stage, integrating various viewpoints with different skills and expertise helps the team solve the right business problem in unison. At Fjord, we have developed a framework that translates business problems into solvable data science problems using a human-centered design approach. 
The key lies in integrating strategy, design, and data science in a way that makes them more potent together than when they work separately.
https://medium.com/design-voices/best-practices-for-combining-data-science-and-design-3aebdb4e076e
['Ricky Hennessy']
2019-07-08 11:35:46.491000+00:00
['Design', 'Data Science', 'Design Thinking', 'Collaboration', 'AI']
💭DRAWING: Jung💭
Artist’s Note №1 For synchronicity’s sake, here’s my drawing of legendary psychoanalyst Carl Jung. When I think of Switzerland, I first think of chocolate — then I think of Carl Jung. It’s probably pathological. Artist’s Note №2 This drawing will earn no money (Medium recently all-but-demonetized poetry, cartoons, flash fiction and other short articles). Please consider buying me a coffee. More coffee=more drawings for you to enjoy. Artist’s Note №3 This drawing is brought to you by the letter “D.” “D” is for Dr. Franklin’s Staticy Cat and Other Outrageous Tales, my collection of humorous stories and drawings for children. Artist’s Note №4 My new one-man Medium magazine is called — Rolli. Subscribe today. Artist’s Note №5 From now on, I’m letting my readers determine how often I post new material. When this post reaches 1000 claps — but not before — I’ll post something new.
https://medium.com/pillowmint/drawing-jung-3beee4e6bb98
['Rolli', 'Https', 'Ko-Fi.Com Rolliwrites']
2020-01-27 19:26:16.207000+00:00
['Art', 'Drawing', 'Mental Health', 'Psychology']
One vs One & One vs All
One vs One & One vs All Binary Class Classification Approach to Solve Multi Class Classification Problem What is a multi class classification problem? When we predict one class out of many possible classes, that is known as multi class classification. Suppose your mother has given you the task of bringing a mango from a basket holding a variety of fruits; indirectly, your mother has asked you to solve a multi class classification problem. Our main aim here is to apply the binary classification approach to predict the result from multiple classes. Why do we need One vs Rest and One vs One? Some classification algorithms were not built to solve multi class classification problems directly; examples are LogisticRegression and SupportVectorClassifier. By applying a heuristic approach to these algorithms, we can still solve multi class classification problems. Let's get started. One Vs Rest Suppose we have three classes: [Machine Learning, Deep Learning, NLP]. We have to find the final prediction result out of them. Step 1: Take the multi class data and split it into multiple binary problems. Say we have 100 classes. Then, using One Vs Rest, 1 class belongs to (class 0) and the remaining 99 classes belong to (class 1). This process repeats until each and every class has formed its own model. Number of models created = number of classes. Figure 1 Step 2: Each model predicts a probability; whichever model has the highest probability is used for the final binary classification to get the result. Say the probabilities for Model 1, Model 2 and Model 3 are 0.3, 0.5 and 0.2 respectively. Since Model 2 has the highest probability, we use Model 2 for the final prediction of class 0 or class 1, i.e. Model 2: — Deep Learning (Class 0) & [Machine Learning, NLP] (Class 1). If it predicts class 0, our predicted result is Deep Learning; otherwise the result comes from class 1, i.e. [Machine Learning, NLP]. # logistic regression for multi-class classification using a one-vs-rest from sklearn.datasets import make_classification from sklearn.linear_model import LogisticRegression from sklearn.multiclass import OneVsRestClassifier # define dataset X, y = make_classification(n_samples=1000, n_features=10, n_informative=5, n_redundant=5, n_classes=3, random_state=1) # define model model = LogisticRegression() # define the ovr strategy ovr = OneVsRestClassifier(model) # fit model ovr.fit(X, y) # make predictions yhat = ovr.predict(X) Disadvantage Since it builds as many models as there are classes, prediction becomes slow; it has high time complexity. With hundreds of classes, the task becomes arduous. One Vs One Suppose you have n classes; One vs One builds a model for every pair of classes, n(n-1)/2 models in total. Say you have four classes in a dataset: A, B, C, D. Step 1: Convert the multi class dataset into binary problems. Here the number of models will be 4(4-1)/2 = 6; Figure 2 The 6 binary problems formed are shown below; Figure 3 Step 2: Find the probability from each model; whichever has the highest probability gives the final predicted output. Figure 4 Since Model 3 has the highest probability, a prediction of class 0 gives result A and a prediction of class 1 gives result D.
# SVM for multi-class classification using one-vs-one from sklearn.datasets import make_classification from sklearn.svm import SVC from sklearn.multiclass import OneVsOneClassifier # define dataset X, y = make_classification(n_samples=1000, n_features=10, n_informative=5, n_redundant=5, n_classes=3, random_state=1) # define model model = SVC() # define ovo strategy ovo = OneVsOneClassifier(model) # fit model ovo.fit(X, y) # make predictions yhat = ovo.predict(X) Conclusion I hope you liked this blog; if you have any suggestions for further improvement, please do comment below. If you want to learn more about this topic, please click this link
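As a quick check of the model counts discussed above (one binary model per class for One vs Rest, one per pair of classes for One vs One), the fitted scikit-learn wrappers expose their underlying binary models through the estimators_ attribute. The minimal sketch below uses a synthetic four-class dataset mirroring the A, B, C, D example; it is only an illustration and not part of the original article's code.

# Compare how many binary models One-vs-Rest and One-vs-One actually train
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.multiclass import OneVsRestClassifier, OneVsOneClassifier

# synthetic dataset with 4 classes (think A, B, C, D)
X, y = make_classification(n_samples=1000, n_features=10, n_informative=5,
                           n_redundant=5, n_classes=4, random_state=1)

ovr = OneVsRestClassifier(LogisticRegression(max_iter=1000)).fit(X, y)
ovo = OneVsOneClassifier(LogisticRegression(max_iter=1000)).fit(X, y)

print("OvR models:", len(ovr.estimators_))  # 4 -> one per class
print("OvO models:", len(ovo.estimators_))  # 6 -> n*(n-1)/2 pairs

Printing the two counts confirms the trade-off described above: One vs Rest stays linear in the number of classes, while One vs One grows quadratically but each binary problem is smaller.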
https://medium.com/ai-in-plain-english/one-vs-one-one-vs-all-binary-class-classification-approach-to-solve-multi-class-classification-f285b72dc12c
['Akhil Anand']
2020-12-25 10:55:01.866000+00:00
['Logistic Regression', 'Svm', 'Machine Learning', 'AI', 'Artificial Intelligence']
Your Punctuality is Reflecting More Than You Know
Life is anything but predictable. People will argue the validity of free will until the end of time. One thing remains true, though. So much of life boils down to how you react to it. I like to think that being proactive is always the best approach, for most things at least. Yet, we don't always have a choice in the matter if unpredictable circumstances arise. We are thrown into reactive mode. How you respond is everything, and it speaks volumes about the kind of person you are deep down. One of the foundations of social and professional life is punctuality. Unfortunately, this is often at odds with the curveballs life throws at you. There can be a plethora of natural or human causes that will put a hard stop on your punctuality. How you respond matters. How you communicate matters. Ask yourself: Do I value punctuality and understand how it reflects upon me? Why does punctuality matter? Trust & Dependability So much of our world comes down to this one word. We place an underappreciated amount of trust in absolute strangers. While many say that we earn trust, for many it is pre-loaded when meeting people. In the workplace, hiring is a vetting process in itself. There's a baseline of trust that many have for new coworkers, managers or direct reports. It makes us comfortable to think we can safely hand off responsibility to someone. If you are the person who is late to your interview, that IS your first impression. Follow that up with habitual tardiness to meetings or deadlines, and you're now that person. While arguably better than lacking interpersonal skills, it still breeds distrust. The act of being punctual will strengthen or confirm the trust others have in you. We all know who the most dependable people are in our lives. Also in our workplaces. When they say they will be somewhere to help you or make a meeting, you have no doubt it will happen. Be that person and make it habitual. Respect What is one of the first feelings you have when someone is late for a meeting? Or lunch? If they're normally punctual and reliable, it's surprising. Likely the assumption that something came up and it's out of their control. But they'll arrive. Now, what if that person is always late? I'm willing to guess one of the thoughts that circulate is that they don't respect your time. Or the plans. Or the meeting. While this may not be true, it can be hard to avoid this internal accusation. The presumption that someone's actions are indicative of who they are at the core. Less about uncontrollable circumstances. It comes down to one thing: repetition. Being punctual shows you have respect for yourself, others and the subject at hand. People want to believe you have that respect and enjoy returning the favor. Give the sign that doing so is worth their time. Integrity We all know the phrase 'your word is your bond.' Like it or not, this holds true throughout all aspects of life. When you give your word, it immediately stands to reflect upon you as a person. As a coworker. A friend. And while other judgments are reversible, damage to your integrity isn't the same. Losing the value of your word can be irreversible, depending on the circumstances. It stands to reason that your stamp, the seal of your personal brand, gets its value from your word. Make it unwavering.
https://medium.com/imposters-inc/your-punctuality-is-reflecting-more-than-you-know-31d6500e7c14
['Michael Lanasa']
2020-07-22 00:01:09.623000+00:00
['Business', 'Startup', 'Leadership', 'Life Lessons', 'Entrepreneurship']
Beautiful Intelligence
Mark Rothko used color to paint abysses deep enough to lose yourself in. Prince used his voice to conjure images of raspberry berets and oceans of violets in bloom. Break dancers and ballerinas use the body in motion to express everything from love to fury. We instinctively engage with color, sound, and motion from the time we’re born. These universal forms of expression take us places the written word sometimes can’t go, viscerally animating ideas and providing context. But people often associate these communicative tools more with the arts than with corporate spaces, which is perhaps why we haven’t traditionally seen these elements in digital productivity ecosystems. Color, sound, and motion fell by the wayside in a world of suits and ties and black and white documents. That world is increasingly antiquated. Even in offices where you may still have to cover tattoos and remove piercings, people are starting to welcome gifs and emojis in professional content. And why not? Whether your desired aesthetic feels more Bauhaus or more pop, moving beyond letters and numbers makes productivity more authentic, beautiful, and resonant. In Microsoft 365, we’re blending art and science to create a library of high-quality content that leverages our powerful AI. From creating original work in-house to commissioning designs from artists worldwide, we’ll be rolling out a curated array of new illustrations, looping videos, photography, fonts, animations, stickers, and much more, beginning primarily within Office.
https://medium.com/microsoft-design/beautiful-intelligence-6e03cdfe8ab0
['Rachel Romano']
2020-09-17 17:05:22.448000+00:00
['Microsoft', 'Design', 'AI']
How CRY makes money
I realized that we’ve never taken the time to fully explain all that we do at CRY, including how we earn revenue. It’s fairly simple but does warrant some discussion. First, we are a three-person team. It’s hard for me to give you titles because, for the most part, all of us do a little of everything. At our core, we are content creators. Whether it’s here on CRY Mag or our We CRY Together celebration, our goal is to create or curate stories that are in line with our vision of elevating emerging artists, building a connected creative community or navigating the emotional aspects of what it means to be a writer or artist. How we make money As much as we love what we do, we need money for our business to work. And yes, we are a business. As much fun as we have doing this, we’re actually very focused on driving revenue because we know that’s what it takes to give us the freedom to keep creating. Ghostwriting Our main source of revenue comes from ghostwriting. We help people write stories of accomplishment, overcoming some kind of obstacle, memoirs, or family histories that people want to capture. This means someone contracts us to write an entire book. The cost for this is tens of thousands of dollars, so it’s reserved for those who are committed to telling their story and have the finances to do so. Not everyone who chooses this option is “rich.” You’d be surprised how resourceful people can be when they’re motivated. Proposals (story structures) We definitely understand that not everyone can afford for us to write their book in full. As a separate offering, we write proposals. A proposal can have different meanings, but for CRY, it’s more like an outline for your book. If someone who is not a writer has an idea for a book, it’s actually really difficult for them to understand how to get the idea out of their head and onto the page. That’s where we help. We provide structure, a succinct synopsis, and a chapter-by-chapter summary, along with other outputs, so by the time we’re finished, our clients know exactly what to write and how to format their story. We do this for anywhere between $2,000-$3,000, depending on a few variables. Editing Sometimes, clients come to us with hundreds of pages of blog posts, journal entries and loose notes. We take those pages and turn them into a real book. The cost for this varies depending on the amount of work we have to do, but typically comes in around $6,000 — $8,000. Why are we sharing this? Because it’s important to be transparent about money. There shouldn’t be this secrecy around how revenue is earned, especially if it can help inform or educate someone who is trying to do something similar. We love what we do at CRY, and earning revenue through these streams allows us to create and curate content. We definitely have plans to expand our offerings, and when we do, we’ll be sure to share another update!
https://medium.com/cry-mag/how-cry-makes-money-ca79e7eeb24a
['Kern Carter']
2020-10-28 13:14:10.215000+00:00
['Creativity', 'Ghostwriting', 'Education', 'Money', 'Writing']
Exploring Google Cloud Vision API and Feature Demonstration With Python
Features of Google Cloud Vision API 1. Face detection The Vision API can perform feature detection on local image files and remote image URLs. DETECT_FACES and DETECT_FACES_URI functions can perform multiple-face detection within an image, along with the associated key facial attributes, such as emotional state and headwear. 2. Image attributes Detects general attributes of the image, such as dominant colors and appropriate crop hints. 3. Detect labels The Vision API can detect and extract information about entities in an image, across a broad group of categories. Labels can identify general objects, locations, activities, animal species, products, and more. 4. Optical Character Recognition (OCR) Detects and extracts text from images. TEXT_DETECTION and DOCUMENT_TEXT_DETECTION annotations support OCR. 5. Web Detection Web Detection detects web references to an image. Searches the web for the best guess label and pages with full and partial matching images. 6. Detect multiple objects The Cloud Vision API can detect and extract multiple objects in an image with Object Localization — a module that identifies information about the object, the position of the object, and rectangular bounds for the region of the image that contains the object. 7. Detect explicit content (SafeSearch)
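To make the features above concrete, here is a minimal sketch of calling label detection from Python with the google-cloud-vision client library. It assumes the library is installed, the Vision API is enabled, and authentication is configured (for example via the GOOGLE_APPLICATION_CREDENTIALS environment variable); the file path is a placeholder, and the exact Image/ImageAnnotatorClient spelling can vary slightly between client library versions, so treat this as an illustration rather than copy-paste-ready code.

# Label detection sketch with the google-cloud-vision client library
import io
from google.cloud import vision

client = vision.ImageAnnotatorClient()

# placeholder path -- replace with your own local image
with io.open("local_image.jpg", "rb") as image_file:
    content = image_file.read()

image = vision.Image(content=content)

# detect labels (general objects, locations, activities, etc.)
response = client.label_detection(image=image)
for label in response.label_annotations:
    print(label.description, round(label.score, 3))

# the same client exposes the other features listed above, for example:
# client.face_detection(image=image), client.text_detection(image=image),
# client.web_detection(image=image), client.safe_search_detection(image=image)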
https://medium.com/better-programming/exploring-google-cloud-vision-api-and-feature-demonstration-with-python-1f02e1dbdfd3
['Ulku Guneysu']
2020-12-15 02:24:31.137000+00:00
['Google Cloud Platform', 'Python', 'Machine Learning', 'Programming', 'Computer Vision']
Q&A: Sam Felix, Director of Audience + Platforms @ The New York Times
Q&A: Sam Felix, Director of Audience + Platforms @ The New York Times This week, The Idea caught up with Sam Felix to discuss how The Times thinks about reaching new audiences, how it evaluates new and different platforms, and how it adjusts to changes in the tech-universe. Tell us about your role and what you do. I focus on the company-wide strategy, partnerships and relationships with key tech partners — focusing a majority of my time on Google, Apple, Facebook, Snapchat, etc. — and figuring out how and to what extent we should be working with these platforms. My job is to think deeply about how The Times’ journalism is reflected off-platform and monetized and to make sure there’s a clear value to us participating in the platform, whether that be direct revenue or a strong audience value. We spend most of our time focusing on projects, big and small, that help connect The Times to these sometimes hard to understand, hard to reach audiences off-platform and look for ways to drive them back to our O&O (owned and operated). So the types of projects we tend to work on and lead are things like Subscribe with Google, AMP, Apple News, Snapchat Discover. How does your team fit within the larger organization? We’re in a pretty unique position within the company — we work across most of the departments to help spread a consistent strategy for the platforms. We’re technically part of the marketing organization, but physically sit in the newsroom. We of course have no influence over the coverage and really maintain an appropriate separation from the journalists, but it’s very clear and central to our work that we partner closely with the newsroom audience team and the newsroom thinkers to ensure that we’re building audiences off-platform that reflect our overall strategy. Can you talk more about The Times’ overall strategy? How do you evaluate new or emerging platforms? When we’re thinking about new platforms, we are vigilant about our strategy. We look at these new opportunities and these new platforms through different lenses: Can the platform help us demonstrate the breadth of our report to a new audience that we don’t otherwise reach? Does the platform help drive this audience back to our O&O? We believe that the best way to experience The Times’ journalism is on our properties, and so our goal is to bring users back to our properties, fully immersing them in the journalism, with the goal of developing a relationship and a daily habit with our readers. We also ask ourselves: does this platform give us a new way of thinking about storytelling — so things like Snapchat Discover — in a way that we might not otherwise do on our properties. One example is Reddit. Obviously Reddit is not new, it’s actually been around for a very long time, but they are sort of emerging for us, in a way. When we look at how we engage with Reddit and what sort of investment we put into it, one of the things that was most attractive to us was leveraging AMAs as a way to expose this very engaged Times audience to our journalists and answer questions about how the report is made, pulling back the curtain on The New York Times. Again, something we can’t always do as easily, especially for a new audience on our platforms, and so we can leverage these partners to help us really tell more of The New York Times’ story. Thinking about some of the big changes that have happened within the last year regarding Google and Facebook, have they had any impact on how you approach or think about these platforms? 
These changes have definitely impacted how we optimize on a daily basis, but I wouldn’t say that they’ve dramatically, or very much at all, changed our strategy for how we approach Google and Facebook holistically. Facebook’s been making these types of changes, big and small, to the algorithm for at least a year, if not more. We’ve certainly tracked the changes and we monitor how they impact our distribution mix and try to respond in our programming strategy, but we are not reliant on Facebook traffic to generate audience, so it doesn’t impact our business in a material way or our ability to reach our readers. At the same time, we’ve definitely seen the shift towards search and Google, or the shift back to Google. We have a very strong partnership with Google and make it a priority to work very closely with them on things like Subscribe with Google — to not only take advantage of the resurgence of search, but also to ensure that we are providing the best possible subscriber experience for our readers, no matter where they choose to meet us. We also want to do what we can to help Google create products and experiences within its platform that support the news business so that we don’t find ourselves in the land of too many pivots. We’re constantly keeping an eye on what’s happening on each of those platforms, but trying to be thoughtful and cautious in our approach overall. You mentioned part of your strategy is to ultimately drive users back to your O&O. How does Google AMP fit into that strategy? We were an early partner with Google on AMP and took a very measured approach the last couple years to understanding the format, the experience, and working with Google to enhance the story page. We did a lot of testing around the impacts on audience as well as advertising, and in that time, worked with Google, watched how the industry had shifted, and eventually become more comfortable with the format. And so we recently made the decision to convert most of our articles, save for some that the AMP format doesn’t quite support yet, into AMP pages. So far it’s continued to enhance performance and have a positive impact on our audience within the Google ecosystem. We took our time and we weren’t one of the first ones to jump all in on AMP, because it is important that the AMP page that is presented to the reader is a positive experience that we believe is equal to, or at least nearly as great as, what we hope we’re able to provide on our web pages. We also wanted to make sure, and this is why Subscribe with Google has been great, that we’re able to align our business model consistently across AMP and our website and various other touch points. So technically it is served from a Google server, but Google’s done a lot of work to make it come as close to being a part of our own universe as possible. How are you thinking about news aggregation app like Apple News and Flipboard? How do aggregate apps like that fit into the NYT’s strategy? Well, we are cautious. We think about these platforms as an opportunity to reach readers who are more interested in a browsing experience, specifically browsing a wide universe of publishers in a short amount of time — so these are the skimmers who want their top news fast from many outlets. By having a presence on these platforms, we’re able to expose our journalism to these readers and try to look for and cultivate opportunities to bring them back to our platform and pull them more deeply into the report. 
For example, we’ve had a lot of success driving newsletter signups on Apple News. So even though that audience lives exclusively off-platform, it does drive a lot of reach, though it doesn’t come into our analytics in the same way as say Google or direct traffic. So traffic, in this case, has a slightly different meaning. But because we are finding that that user base is coming back to us within the Apple News ecosystem and is then taking the next steps to sign up for a newsletter, that demonstrates some real value to us and tells us that it does play a role in the subscriber journey. Is there a cool project you’ve been working on recently? Is there anything you’ve learned or hope to learn from it? At the beginning of each year, we reevaluate what our position is with each of the major platforms and how we want to situate The Times within this larger tech-universe. So we’re in the midst of all of that 2019 planning and reviewing the landscape of all of the major tech platforms. It’s a project in flight, but it’s been a really fascinating body of work this year given everything that’s happened in 2018 — with all of the platforms and the big questions we have to think about when it comes to privacy, trust, aggregation, bundling subscriptions, and what it means when a platform might be doing something that feels directly competitive to our business model, or on the surface feels really helpful to getting journalism to big audiences, but might not ultimately help us actually achieve that goal in the long run. It’s a big thought experiment, but it’s something I think all publishers should be spending a big chunk of time doing often as these platforms update and change and regulation starts to get into the conversation. What’s the most interesting thing you’ve seen recently from a media outlet other than your own? I’m still a big fan of Axios. I love everything they’re doing. I’ve been continually impressed with their ability to leverage their newsletter strategy into a segmented audience strategy where they can build clear, almost communities around each of these subject lines. They’re developing a direct relationship with their readers. Being in a reader’s inbox is probably one of the closest relationships you can have, so I thought it was a great move when they acquired Sports Internet to flesh out their sports vertical. I’m really excited to see what they do with that and what else they expand into and hope they can continue to be as engaging in their approach for these new verticals.
https://medium.com/the-idea/q-a-sam-felix-director-of-audience-platforms-the-new-york-times-609885934524
['Lizzy Raben']
2019-05-06 14:37:39.734000+00:00
['Journalism', 'Atlantic Media', 'Facebook', 'Subscriber Spotlight', 'The New York Times']
Going Full Greta
As with many others, Greta Thunberg is my current number one hero. She makes me think. And, importantly, she walks her talk. Compared to most Americans I have a teeny tiny carbon footprint but Greta has got me thinking about how I might make it even smaller. I’ve realized there is more I can do but I’ve also realized that I am not sure I can go ‘Full Greta’ and I’ll explain why in a minute. Greta does not use fossil fuel powered vehicles for transportation. For many Americans that would be unthinkable. At my last job I had a co-worker who lived just two blocks from the office yet she drove to work each day. When her car broke down she was beside herself. She didn’t know what to do. She was calling all her friends trying to get rides to work. IT NEVER OCCURRED TO HER to simply go out her front door and walk two blocks. IT NEVER OCCURRED TO HER! Sadly, this is the mindset of so many Americans. They simply will not go anywhere without driving. I have not owned a fossil fuel powered car — or any vehicle — in six years. I’ve gone for long stretches of time without a car several times in the past as well. I walk! And my legs do not pollute. And they don’t require license plates, a license, insurance, gas, or constant repair bills. They have not only saved the planet from some pollution but they have saved me countless tens of thousands of dollars. And walking has also greatly improved my health. Of course I must admit that going car-less has not always been about environmental activism. Usually it was brought about by debilitating poverty. It was not until I was forced to go car-less that I began understanding the environmental impact of doing so. Now I’m quite happy and healthy being car-less. As poverty continues to skyrocket in America many others will be forced to give up their expensive car addictions. This will probably help the environment a little but I don’t think this is the best way to address the climate crisis. The results are not as powerful when we are forced into action as they are when we consciously and purposefully take action through choice. Greta also does not utilize air travel, which is by far one of the most polluting forms of travel carbon-per-person-wise. The last time I was on a plane was in the early 1990s. Back during the first 30 years of my life I was an ardent lover of air travel. I flew a lot and enjoyed the heck out of it. But then the joy seemed to drain away and I quit flying. So there’s not much I can do to shrink my carbon footprint by cutting back on flying since I don’t fly anyway. Greta tries to never buy new clothes. She wears hand-me-downs from her older sister and also gets clothes from thrift stores. When I was a kid I absolutely hated being given hand-me-down clothes from my older brother. I simply refused to wear them. I didn’t want his vibes on me. And for most of my life I would not even consider buying clothes at a thrift store. Buying used clothes? Ewe gross! Right? But over this last decade I have been slowly softening my hard-headed stance about this. It all started about 8 years ago when I needed a blazer for a certain event. I had burned all my ties and business suits way back when I quit the corporate world and I had zero money to buy a new blazer. So I bit the bullet and went to the local thrift store. To my surprise I found a blazer that was in mint condition that fit me perfectly. I bought it for $2 and when I got home I found a $5 bill in one of the pockets. 
I dare anyone to find a deal like that at any popular clothing retail fashion store. I have since shopped at the thrift store on a somewhat regular basis, mostly for household items and books and potential birthday presents for people under the age of 10. But I have also bought some shirts and trousers and jackets. I even bought the first umbrella I’ve ever owned at that thrift store. But to be honest I have to say that I vehemently draw the line at socks and underwear. I buy those at the nearby evil Wal-Mart. There is one part of Greta’s environmentally friendly lifestyle that I personally have trouble with, though, and that is the fact that she is a vegan. I just don’t think I can take that step. Over recent decades I have radically decreased my intake of meat — around 90%. I don’t eat pork and I rarely eat beef or chicken but one of my favorite foods is organic, grass-fed bison meat. It’s expensive, though, so I can rarely afford to buy it. But I still manage to have around 5 to 7 bison burgers a year — usually on special occasions or holidays. I feel good knowing the local rancher and his family who raise the organic, grass-fed bison and I know the spiritual ways in which they handle the entire harvesting process. And their ranch is only about 95 miles away so not much gas is used in transport. I know that I could take that step and give up meat entirely but there are two animal products that I simply cannot imagine giving up and those are organic, cage-free chicken eggs and organic real butter made from grass-fed cows. I consume those products almost every single day. Egg yolks are probably my very favorite food in the world. I only eat the yolks, never the whites. 97.41869% (approximately) of all the nutrition found in an egg is in the yolk. And 99.03574% (approximately) of all the flavor of an egg is found in the yolk. The whites are useless and go in the compost bucket. There is no more orgasmic culinary experience than plopping a hot, yet still liquid, egg yolk into one’s mouth and letting its yumminess explode with flavor throughout the mouth. And real, natural, organic butter made from grass-fed cows is perhaps the most crucially important food we can eat to maintain cardiovascular health. I like the fact that we don’t have to kill the chicken to get the egg and we don’t have to kill the cow to make the butter. The problem, of course, is that we think we must have huge factory farms to produce animal products to scale for an exploding population. (‘Scale’ is my least favorite word in the American Business Lexicon.) So I always try to ‘source’ organic, cage-free chicken eggs from local farmers, several of which I know personally who live just outside of town. I just don’t think I can go 100% ‘full Greta’ but I can get close. I can try. The important thing that I am grateful for about Greta is that she has prompted me to make closer observations of my own actions in order to find new ways to help be a part of mitigating the effects of the climate crisis. Very importantly, she is helping to expand mass awareness of how everyone can help. I give her my own personal Nobel Prize. But there is something that I’ve never heard Greta talking about… For years, many woo-woo masters have talked about how our outer environment reflects our inner environment. As the saying goes, ‘As within, so without,’ or something like that. 
The pollution in our outer environment is a reflection of the pollution within us so we cannot fix the outer pollution until we fix the inner pollution; our psychological pollution, our emotional pollution, our collective attitudes and beliefs and self-loathing, our fear, our guilt, our hate, our prejudices, our greed… I’ve been working on cleaning out my inner pollution for decades. Every time I clean out some inner pollution I find more inner pollution that needs to be cleared out. I clear it out then find another layer of inner pollution that needs to be worked on. There are so many layers of inner pollution. Sometimes I feel like a big old fat onion. But I keep working on it. So as we begin to observe our outer behavior in order to help heal the planet’s environment we must also be sure to more closely observe our inner workings to see what can be healed within. I feel that we will achieve much greater and faster success by working both without and within simultaneously. One thing is for certain and that is that we simply cannot continue to go on in an unobserved path of somnambulist denial.
https://whitefeather9.medium.com/going-full-greta-a532991ec2d7
['White Feather']
2019-10-20 16:30:51.142000+00:00
['Vegan', 'Environment', 'Self', 'Climate Change', 'Food']
How Kanye West Built A Multi-Billion Dollar Sneaker Brand
How Kanye West Built A Multi-Billion Dollar Sneaker Brand What made the brand successful, and what can potentially kill it An Adidas Yeezy. By andreimihaiducu from Pixabay The chances are pretty high that you’ve already heard of The Yeezy. It’s the sneaker brand of Kanye West in collaboration with Adidas, and now Gap. To grab these babies, people across the U.S. would camp outside shoe stores in the dead cold of winter, hire taskers to wait in line on their behalf, and even deploy sneaker-buying bots to nab a pair online before they sell out. Those who are unable to purchase a pair within the few precious seconds before they sell out end up spending thousands of dollars over market value to buy them on the resale market. The man behind all this craze is Kanye West. He has successfully created one of the most hyped sneakers of all time. However, there are some potential roadblocks in the way ahead that he must navigate to help the company stand the test of time.
https://medium.com/better-marketing/how-kanye-west-built-a-multi-billion-dollar-sneaker-brand-78eed4acfcaa
['Kiran Jain']
2020-07-06 18:58:02.122000+00:00
['Marketing', 'Fashion', 'Celebrity', 'Business', 'Startup']
There Are Only 3 Things A Writer Must Do Every Morning
There’s no shortage of people telling you what you “must” do in the morning if you want to change your life. You know exactly what I mean, too. Wake up early. Have a cold shower. Meditate. Exercise. Plan your day. Set your priorities. Blah, blah, blah. I call bull. Those tips have been around for decades. Centuries, probably. Despite all the well-meaning advice, most of us are still tired, overworked, underpaid, disorganized and wishing we knew how to get it together. It’s an easy slide into self blame. But it’s not your fault. It’s faulty information. Know why all that stuff doesn’t work? Because it’s like trying to eat the proverbial elephant. Too much. That is not how the human brain works. The human brain is infinitely malleable. Change is possible. For all of us and any of us. But mass rebuild works better for cars than people. We have to change in small increments. Tiny steps. Even more important? Our brain needs to buy into what we’re doing. If your brain doesn’t buy it, your body isn’t going to follow through. Simple as that. You can say affirmations about how “rich” you are until the cows come home, and your brain is saying yeah? Well then why is my bank account dry? Know what I mean? If you are a writer, or want to be a writer, there’s only 3 things you need to do every morning. Not big cataclysmic changes. Just 3 tiny things. 1. Take a few minutes for you In his book, Miracle Morning, Hal Elrod says how we wake up each day dramatically affects us all through the day. Even more so, it affects the level of success at everything we do all day. Ever have one of those mornings where the alarm doesn’t ring, or you turn it off and when you wake up already late? You’re rushing around trying to get out the door, and forget stuff, and traffic sucks. And somehow, the crazy morning just affects the whole day. Know what I mean? We’ve all been there. The opposite is true, too. When I was caring for Dad at the end of his life I learned that the hard way. It was hard work, and that’s the understatement of the year. Juggling work and caring for a dying parent would test the patience of a saint. I am not a saint. If I started the day bolting out of bed to him calling me? By the end of the day I felt like something the cat dragged in. I learned to wake up early enough to take a few lazy minutes for myself and it made all the difference. Doesn’t even matter how you spend it. Watching the sun rise or feeding the birds. Reading a book, playing solitaire. You get to pick how much time and how to spend them. But do that for yourself. So every day starts with your needs, not someone else’s.
https://medium.com/linda-caroll/there-are-only-3-things-a-writer-must-do-every-morning-f4769933739e
['Linda Caroll']
2020-09-15 06:40:51.711000+00:00
['Creativity', 'Self', 'Advice', 'Habits', 'Writing']
How to Decide Between Algorithm Outputs Using the Validation Error Rate
How to Decide Between Algorithm Outputs Using the Validation Error Rate Monument (www.monument.ai) enables you to quickly apply algorithms to data in a no-code interface. But, after you drag the algorithms onto data to generate predictions, you need to decide which algorithm or combination of algorithms is most reliable for your task. In the ocean temperature tutorial, we cleaned open remote sensing data and fed the data into Monument in order to forecast future ocean temperatures. In that case, we used visual inspection to evaluate the accuracy of different algorithms, which was possible because the historical data roughly formed a sine curve. Visual inspection is one tool in the data science toolbox, but there are other tools as well. The Validation Error Rate is another useful tool in cases where you want to get more fine-grained or where visual inspection does not yield obvious insights. There are other error functions that can be used, but Validation Error Rate is the default error function in Monument. What Is The Validation Error Rate And Why Is It Important? The Validation Error Rate measures the distance between “out of sample” values and estimates produced by the algorithm. You can find this metric in the INFO box in the lower-left corner of the MODEL workspace. As a general rule of thumb, the “more negative” your Validation Error Rate is, the more accurate the model is. Negative infinity would be a perfect model. In the real world, as we will see with our ocean temperature data, sometimes the best you can do is a small, but nevertheless positive number. Currently, Monument only displays one Validation Error Rate at a time. To view the Validation Error Rate for other algorithms that you have trained, click the drop-down arrow on the right side of the algorithm pill and select SHOW ERROR RATE. To compare the performance of the models, I have pasted below a table of all the Validation Error Rates applied to the ocean temperatures data, sorted from lowest to highest. Algorithm Performance On The HABSOS Data As we discovered in the tutorial, with default parameters, AR and G-DyBM perform the best on the cleaned and transformed data. How To Improve Algorithm Performance Typically, we can improve the Validation Error Rate — i.e. make it “more negative” — by adjusting the algorithms’ parameters. You can access an algorithm’s parameters by selecting PARAMETERS in the algorithm pill drop-down. Choosing which parameters to edit to improve performance depends heavily on your business objectives and the nature of the data you’re looking at. We will cover common cases in future tutorials, but the best approach is to experiment yourself to develop an intuition around which parameters most improve results for different kinds of data. Certain algorithms allow for automated parameter adjustment. In Monument, the LSTM and LightGBM algorithms also have “AutoML,” which is short for Automated Machine Learning. AutoML automatically adjusts an algorithm’s parameters to optimize performance. You can select AUTOML from the algorithm drop-down to access these capabilities. For example, when we run AutoML on the HABSOS data, we can lower the Validation Error Rate by 0.04 from 3.273 to 3.233. Not a huge improvement on this particular data, but an improvement nonetheless. Often, the gains are much greater. 
There are other reports within Monument that we can use to improve algorithm performance, including, dependent variables, forecast training convergence, and feature importance. We’ll explore these topics in future tutorials.
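Monument surfaces all of this through its no-code interface, but the underlying idea is simple: hold out a validation set, score every candidate model on it, and prefer the model with the lowest validation error. The sketch below reproduces that idea with scikit-learn on a toy sine-wave series purely as an illustration of the concept; it is not how Monument computes its Validation Error Rate internally, and the model choices are arbitrary.

# Illustration only: compare candidate models by error on a held-out validation set
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LinearRegression
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.metrics import mean_absolute_error

rng = np.random.default_rng(0)
t = np.arange(1000)
y = np.sin(2 * np.pi * t / 365) + 0.1 * rng.standard_normal(len(t))  # toy seasonal series
X = t.reshape(-1, 1)

# keep the last 20% as the out-of-sample validation set (no shuffling for time series)
X_train, X_val, y_train, y_val = train_test_split(X, y, test_size=0.2, shuffle=False)

candidates = {"linear": LinearRegression(),
              "gbm": GradientBoostingRegressor(random_state=0)}
for name, model in candidates.items():
    model.fit(X_train, y_train)
    err = mean_absolute_error(y_val, model.predict(X_val))
    print(f"{name}: validation error = {err:.4f}")  # lower is better

Whichever candidate reports the lowest held-out error is the one to trust, which is the same decision rule the article applies when reading Monument's INFO box.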
https://medium.com/swlh/how-to-decide-between-algorithm-outputs-using-the-validation-error-rate-c288a358ca9b
[]
2020-08-15 23:34:51.982000+00:00
['Machine Learning', 'Automl', 'Algorithms', 'Artificial Intelligence', 'Big Data']
A More Strategic Way to Delete Your Emails
Photo: JohnnyGreigg / Getty Images For all the ways that G Drive has made our lives run better, it makes getting rid of those memories kind of complicated. The most intuitive way to delete messages in Gmail is scrolling and clicking. Forever. So technology columnist Angela Lashbrook tried to find another way, a journey she writes about in Debugger, Medium’s new publication about consumer technology. Turns out you need a two-part system: Let an app clear out most of the junk. Then employ search-and-destroy strategies to find groups of emails to delete all at once. Her tips are surprising but easy and effective. The best approach, however, is the simplest: daily maintenance. “That means deleting all the emails you don’t need as you receive them so you don’t get caught up in the mess I spent the last several days cleaning up,” Lashbrook writes. And in that way, your inbox is like any important relationship: Pay a little attention every day so you can focus on the present, not thousands of pieces of baggage from the past.
https://forge.medium.com/how-your-inbox-is-like-any-other-relationship-in-your-life-df4f9d77f2b
['Ross Mccammon']
2020-10-21 14:49:38.703000+00:00
['Inbox', 'Google', 'Email', 'Productivity']
QC — Quantum programming: implementation issues
Photo by NeONBRAND We are still in the early stage of quantum computing. Expect surprises! A quantum algorithm depends on the available qubits and also on how those qubits are connected. Different models of quantum computers may do it differently. This is like a program that runs on an Intel i7 processor but fails on the next-generation processor. Decoherence is another major issue in quantum computing. Don’t expect to pause a running program and grab a coffee. Once you hit the run button, you are under the clock. The program must finish quickly before the quantum information decays. In this article, we will cover some implementation issues as well as the decoherence problem. Swap gate The physical implementation of qubits may not be symmetric. A controlled-not gate may take qubit-0 as the control and qubit-1 as the target but not the opposite. This depends strongly on the implementation. The diagram here shows how 2-qubit CNOT gates may be connected on IBM Q5. As shown below, if we connect q1 and q2 with a controlled-not gate, we will get an error. Having q1 as the control qubit with q2 as the target is not allowed on this machine. By reshuffling the gates, the error is gone. Here is another example for IBM Q20. IBM Q 20 Tokyo (20-qubits) — Modified from source To overcome the limitation, we can use a swap gate to swap two qubits. There are other possibilities to overcome the issue as well. Universal Gate In classical computing, the NAND gate is a universal gate from which all remaining operations can be built. We may implement a few more universal gates for performance reasons. Complex gates are built on top of these physical gates. For quantum gates, there is a corresponding universal set, which can be further reduced and approximated to any precision by CNOT, H, S, and T gates with some overhead. In practice, IBM Q (ibmqx4) implements a small set of physical gates. For the programming interface, a richer set of operations is provided in IBM Q, built on top of the gates above. This actually leads us to one important topic. Not all qubits are equal Precision and errors remain an issue for quantum computing. As indicated above, different qubits have different gate errors and readout errors. Gate error is about the precision in applying a quantum gate, i.e. how accurately can we control the superposition? Readout error is the error in measuring qubits. Multi-qubit gate error is the error in operating a 2-qubit gate. This information may be taken into consideration when implementing an algorithm. At the very least, it should be documented with your execution run for future comparison or reference. Decoherence & errors As mentioned before, every quantum program runs under the clock. Once you hit the run button, quantum information starts degrading because of the interaction with the environment (any electric and magnetic fields in the vicinity). Your program must be completed before the quantum state becomes garbage. So you should be aware of the duration of the physical quantum gates used in your program. Since the program is written in logical quantum gates, knowledge of how logical gates are translated into physical gates is helpful. The quality of a quantum computer can be measured by the relaxation time (T1), coherence time (T2), readout errors, and gate errors. Source IBM The decoherence process is measured by T1 and T2 above. T1 — Energy relaxation: the time taken for the excited |1⟩ state to decay toward the ground state |0⟩. T2 — Dephasing, which affects the superposition phase. 
T2 includes the effect of dephasing as well as energy relaxation. That is why this information is always published for your reference. Fault tolerance Fault-tolerant computing has not been taught in engineering for a very long time. Quantum computing brings the subject back. Quantum calculations are vulnerable to errors. To counterbalance the problem, we can add additional quantum gates for error detection or error correction. This is one major reason why we always need far more qubits than we might think. The following sample is an encoder and a decoder that tolerates a one-qubit error. Source: IBM Next Now we have finished with quantum gates. Next, we will start learning how to program a quantum algorithm. Here is the link for the whole series:
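As a small illustration of the connectivity workaround discussed in the swap-gate section, here is a sketch using Qiskit (assumed to be installed). It reverses the direction of a CNOT by sandwiching the allowed CNOT between Hadamard gates and checks that the result matches the "forbidden" direction. Which qubit pairs and directions are actually allowed varies from device to device, so this is only an illustration of the idea rather than code for any particular IBM Q machine.

# Qiskit sketch: reversing a CNOT's direction with Hadamard gates
from qiskit import QuantumCircuit
from qiskit.quantum_info import Operator

# the "forbidden" direction: control on qubit 1, target on qubit 0
reversed_cnot = QuantumCircuit(2)
reversed_cnot.cx(1, 0)

# the same unitary using only the "allowed" direction (control 0, target 1),
# sandwiched between Hadamards on both qubits
workaround = QuantumCircuit(2)
workaround.h([0, 1])
workaround.cx(0, 1)
workaround.h([0, 1])

print(Operator(workaround).equiv(Operator(reversed_cnot)))  # True

A SWAP gate can be decomposed in a similar spirit, into three alternating CNOTs, which is what makes the swapping workaround mentioned above possible on devices with limited connectivity.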
https://jonathan-hui.medium.com/qc-quantum-programming-implementation-issues-51e3a146645e
['Jonathan Hui']
2019-01-17 00:43:52.232000+00:00
['Programming', 'Data Science', 'Science', 'Software Development', 'Artificial Intelligence']
My Doctor Is a Gardener
Want more? No? Okay. Nevermind. But in case you change your mind, subscribe to my newsletter to get notified whenever I publish something new.
https://medium.com/the-haven/my-doctor-is-a-gardener-18ad2a810fac
['David B. Clear']
2020-12-10 21:06:52.212000+00:00
['Humor', 'Health', 'Covid 19', 'Comics', 'Coronavirus']
You are Music
Image courtesy of John Mosca Music awakens, inspires and elevates humanity Everyone should play his own music — chat, write and communicate through art, music and images. Words are no longer sufficient to transfer one’s own thoughts and feelings. Words are an obsolete, poor and very slow way to convey one’s own invisibility to others. That’s why music is becoming the new language and a new way of communication for the coming post-human civilisation. Let’s make this world beautiful. ‘You are Gods’ who have forgotten to be such. Music is the forgotten language through which the Gods used to talk to each other and transfer their will to men. Music is the way that allows every man to return to his own peaceful creative nature, and so be able to communicate, in all directions and to all human beings, the original, powerful language of democracy, joy and beauty. Music is a declaration of life that gives equal opportunities to all of its members. Within music, races, nations, politics and religion dissolve. Music is the only law, and creativity, its order. It doesn’t matter whether your Music is happy or sad, loud or soft, weak or strong, harmonious or not, it comes from You; and whatever it is, it’s your Music, it’s always projecting your aliveness, your uniqueness, your Dream. The origin of your Music is more important than the Music itself, because YOU ARE MUSIC. Music is the simplest, most powerful way to spread democracy and love all over the world; it is the manifesto of a new humanity, free from divisions and conditionings. Without nation, without religion, without politics, a humanity endowed with infinite creativity, free from discrimination and conflict, tending towards peace, unity, love.
https://medium.com/age-of-awareness/you-are-music-e8ed4364f62
["Elio D'Anna"]
2020-12-11 15:22:00.167000+00:00
['Beyourself', 'Musicians', 'Communication', 'Inspiration', 'Music']
R2D2 as a model for AI collaboration
This essay is about the way we design relationships between humans and machines — and not just how we design interactions with them (though that is a part of it), but more broadly, what are the postures we have towards machine intelligences in our lives and what are their postures toward us? What do we want those relationships to look and feel like? As I’ve been thinking through these ideas, science fiction characters are one of the lenses I’ve been finding helpful as a way of thinking about different models for our relationships with machines. Specifically, I’ve been using these three — C3PO, Iron Man, & R2D2 — as notable archetypes that describe different approaches to how we might design machine intelligences to engage with humans. C3PO C3PO is a protocol droid; he’s supposedly designed to not only emulate human social interaction, but to do so in a highly skilled way such that he can negotiate and communicate across many languages and cultures. But he’s really bad at it. He’s annoying to most of the characters around him, he’s argumentative, he’s a know-it-all, he doesn’t understand basic human motivations, and often misinterprets social cues. In many ways, C3PO is the perfect encapsulation of the popular fantasy of what a robot should be and the common failures inherent in that model. Now this idea didn’t emerge in 1977. The idea of a mechanical person, a humanoid robot, has been around since at least the 1920s. Eric the Robot, or “Eric Robot” as he was more commonly known, was one of the earliest humanoid robots. He was pretty rudimentary, as you can see from the kind of critique that was leveled at Eric by The New York Times in the above quotes — it critiques his social demeanor and also questions his utility — why is a robot that moves and converses better that the one that can tell when the water levels in Washington are too high? Is this the best form or use for a robot? But despite these criticisms, people were fascinated by Eric, as well as by the other early robots of his ilk, because it was the first moment that there was this idea, this promise, that machines might be able to walk among us and be animate. We’ve had this idea in our social imagination for nearly a century now—the assumption that, given sophisticated enough technology, we would eventually create robotic beings that would converse and interact with us just like human beings. Our technology today is many orders of magnitude more sophisticated than Eric’s creators could have imagined. And we are still trying to fulfill this promise of a humanoid robot. We see the state of the art today with robots like Sophia. This is so much closer to that humanoid ideal than Eric was, but it falls so short and just feels awkward and creepy. On the software side (“bots” vs. “robots”), we have examples of the failure of the C3PO model in the voice assistants we use every day. They’ve gotten pretty “good”, but still can’t understand context well enough to respond appropriately in a consistent way, and the interactions are far from satisfying. It turns out that even with incredibly rich computational and machine learning resources, interacting like a human is really hard. It’s hard to program a computer to do that in any satisfying way because in many ways it’s hard for humans to do successfully either. We get it wrong so much of the time. Interacting like a human is hard (even for humans). We don’t all talk to each other the same way. We don’t all have the same set of cultural backgrounds or conversational expectations. 
Below are charts created by British linguist Richard Lewis to show the conversational process of negotiating a deal in different cultures. These challenges have come to the fore lately with everyone from search engines to banks trying to create convincing conversational interfaces. We see companies struggling with the limitations of this approach. In some cases they have addressed those challenges by hiring humans to either support the bots or in some cases actually pose as chatbots. Welcome to 2020, where humans pretend to be bots who are in turn pretending to be human. Stop trying to make machines be like people Obviously, the ideal of creating machines that interact with us just like people do is incredibly complicated and may actually be unattainable. But more importantly, this isn’t actually a compelling goal for how we should be implementing computational intelligence in our lives. We need to stop trying to make machines be like people and find some more interesting constructs for how to think about these entities. The reason people started with the C3PO model is because human conversation is the dominant metaphor we have for interacting and engaging with others. My hypothesis is that this whole anthropomorphic model for robots is fundamentally just a skeuomorph because we haven’t developed new constructs for machine intelligence yet, in the same way that desks and file drawer metaphors were the first attempts at digital file systems, and when TV first emerged, people read radio scripts in front of cameras. Instead, a more compelling approach would be to exploit the unique affordances of machines. We already have people. Humans are already quite good at doing human things. Machines are good at doing different things. So why don’t we design for what they’re good at, what their unique abilities are and how those abilities can enhance our lives?
https://alexis.medium.com/r2d2-as-a-model-for-ai-collaboration-9a2638bfbd09
['Alexis Lloyd']
2020-11-21 16:39:03.281000+00:00
['AI', 'Robots', 'Design', 'UX']
Round in Circles
Photo Credit — Me! Greenwich, London The anticipation has my heart racing All I want to do is pick up the pace and move forward keep on going keep on growing in this life plant the seeds I've been sewing reap the rewards of the knowing and the learnings from past endeavours Cross a million Bridges over rivers This spoken word s*** has me thinking keeps me writing stops me sinking this spoken flow is how I show the world just who I am Not just who I am but what I can do can you imagine where this life will take us Just what this life will make us ain't no one gonna forsake us because it's up to us how we move forward This better life we're moving towards and our Horizons are filled with bright light our Horizons eradicate trite spite get rid of all the hate that sits within us the debates they try to spin us round in circles instead of straight lines We'll create our fate and we'll be just fine.
https://medium.com/poets-unlimited/round-in-circles-dd469019f210
['Aarish Shah']
2017-09-16 03:21:31.685000+00:00
['Writing', 'Photography', 'Inspiration', 'Poetry', 'Creativity']
Top 10 Safest Jobs from AI
This is a series of 4 articles I am sharing here, for people who are concerned and eager to understand more about the job displacement impact potentially caused by artificial intelligence technology. You will read about “safe” versus “endangered” jobs in this series. The jobs listed in each article are illustrative, drawn from my research and technological knowledge, and may or may not fit your personal scenario. I highly encourage readers to take those as references and inspirations, and to start re-imagining and re-strategizing your career today with our shared future — powered by AI. How to determine what jobs are safe/unsafe? White collar: Repetition vs. strategic: Does your job have minimal repetition of tasks? Do you regularly come up with insights that are important to your company? Do you make key decisions that cross functions for your company? Simplicity vs. complexity: Do most decisions in your job require complexity or deliberation? In your job, do you need to regularly learn and understand a lot of complex information? Blue collar Dexterity vs. repetition: Does it require at least a year of training to be qualified for your job? Does your job involve very little repetition of the same task(s)? Fixed vs. unstructured environment Is your job usually performed in different environments each time? (e.g., a taxi driver would always work in the same taxi) Is your work environment unstructured? For all jobs: human-contact / empathy / compassion Is communication and persuasion one of the most important parts of your job? Do you spend >30% of your work time with people who are not employed by your company (e.g., customers, potential customers, partners)? Is a key part of your job performance measured by how well you interact with people? Does your job result in happiness, safety, or health of those you directly serve? Do you lead or manage people in your job? Top 10 Safest Jobs from AI Psychiatrists Psychiatrists, social workers, and marriage counselors are all professions that require strong communication skills, empathy, and the ability to win trust from clients. These are the weakest areas for AI. Also, with the changing times, growing inequality, and job displacements, the need for these services is likely to increase. Therapists (occupational, physical, massage) Dexterity is one of the challenges of AI. Physical therapy (such as chiropractics or massage therapy) involves applying very delicate pressures and sensing equally minute responses from the client’s body. In addition, there are added challenges of customizing care for each client, consequences of hurting a client, and the need for person-to-person interaction. The human interaction includes the ongoing therapy, professional advice, small talk, as well as encouragement and empathy. These aspects make the job impregnable to AI for the short term. Medical caregivers (nurses, elderly care) The overall healthcare industry is expected to grow substantially, due to increased income, greater benefits, AI lowering the cost of care, and an aging population (which requires much more care). Many of these reasons will foster a symbiotic environment where AI helps with the analytical and repetitive aspects of healthcare, as more of the healthcare profession shifts to attentiveness, compassion, support, and encouragement. AI researchers and engineers As AI grows, there will be a jump in the market demand for AI professionals. Gartner estimates that in the next few years, these increases will outnumber the jobs replaced. 
However, one needs to keep in mind that AI tools are getting better, so some of the entry-level positions in AI will be automated over time as well. AI professionals will need to keep up with the changes, just like over the years software engineers had to learn about assembly language, high-level language, object-oriented programming, mobile programming, and now AI programming. Fiction writers Story-telling is one of the highest levels of creativity, an area in which AI will be weak. Writers have to ideate, create, engage, and write with style and beauty. In particular, a great fictional book has original ideas, interesting characters, an engaging plot, surprising emotions and poetic language. All of these are hard to replicate. While AI will be able to write social media messages, title suggestions, or even imitate writing styles, the best books, movies, and plays will be written by humans for the foreseeable future. Entertainment will be a hot area in the era of AI, because we will have more wealth overall, and more free time. Teachers AI will become a great tool for teachers and education, capable of helping personalize the education curriculum based on each student’s competence, progress, aptitude, and temperament. However, education will be more about helping each student to find what he or she wants, helping hone each student’s ability to learn independently, and being the friend and mentor to help each student learn to interact with others and gain trust. These are jobs that can only be done by teachers, and they require a low student:teacher ratio (like 5:1 or fewer). So there will be many more such humanistic teacher positions created in the future. In fact, a parent may be the best such teacher — if future governments would be wise enough to compensate home schooling parent-teachers. If you are or want to be a teacher, learn more about how to connect with your students, one student at a time or in small groups, and less about how to lecture to 50 students. Criminal defense attorney Top lawyers will have nothing to worry about when it comes to their jobs — reasoning across domains, winning the trust of clients, years working with many judges, and persuading the jury are the perfect combination of complexity, strategy, and human interaction that is so hard for AI. However, a lot of paralegal and preparatory work in document review, analysis, and recommendations can be done much better by AI. And many tasks performed by paralegals (legal discovery, creating contracts, and handling small claims and parking cases) will be increasingly handled by AI. The cost of law makes it worthwhile for AI companies to go after AI-paralegals and AI-junior lawyers, but not the top lawyers themselves. Computer Scientists & Engineers As the information age advances, a McKinsey report groups engineering jobs (computer scientists, engineers, IT administrators, IT workers, tech consulting) into a high-wage category that is expected to increase by 20 million to 50 million globally by 2030. But these jobs require staying up-to-date with technology, and moving into areas that are not automated by technologies. Scientists Scientists are the ultimate profession of human creativity. AI can only optimize based on goals set by human creativity. While AI is not likely to replace scientists, AI would make a great tool for scientists. For example, for drug discovery, AI can be used to hypothesize and test possible uses of known drugs for diseases, or filter possible new drugs for scientists to consider. AI will amplify human scientists. 
Manager/Leaders Good managers have human interaction skills including motivation, negotiation, persuasion. Good managers will effectively connect on behalf of the companies to employees, and vice versa. More importantly, the best managers are leaders, who establish a strong culture and values, and through their action and words, cause the employees to follow with their heart. While AI can be used to manage performance, managers will continue to be humans. That said, if a manager is merely a bureaucrat sitting behind a desk and giving employees orders, he or she will be replaced by other humans.
https://kaifulee.medium.com/top-10-safest-jobs-from-ai-1824cacd1954
['Kai-Fu Lee']
2020-12-03 09:04:59.386000+00:00
['Artificial Intelligence', 'Technology', 'Tech', 'Jobs', 'AI']
Keeping a Distance — From Friend and Foe Alike
Written by Almost famous cartoonist who laughs at her own jokes and hopes you will, too.
https://marcialiss17.medium.com/keeping-a-distance-from-friend-and-foe-alike-28f1bcad6f70
[]
2020-04-01 14:14:51.249000+00:00
['Mental Health', 'Health', 'Comics', 'Trump', 'Covid 19']
Artistic Voronoi Diagrams in Python
Making a Voronoi Diagram Let’s start by importing all the libraries we need. Script 1 — Importing libraries. If you get any ModuleNotFoundError, let’s take care of it by using Anaconda to install the missing packages. I will only go over the less common packages, since it’s likely you already have Pandas, Matplotlib, and other widely used packages installed. Open Anaconda Prompt and navigate to the desired conda environment. To install PIL, a Python package used for image processing: conda install -c anaconda pillow Now let’s define the helper function that will do the bulk of the work: Script 2 — Helper function that makes the Voronoi diagram. You can find the original Python script here: https://rosettacode.org/wiki/Voronoi_diagram#Python. You can read the doc string to understand the parameters the function takes. Click here to see some of the color palettes you can use. To create the Voronoi diagram, sites are randomly drawn from a 2D Gaussian distribution. Then, each site is assigned a random color from one of the two palettes. Finally, the diagram is created pixel by pixel and then saved. Now we are ready to make a cool looking diagram.
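The embedded gists (Script 1 and Script 2) do not come through in this text export, so here is a minimal, self-contained sketch of the approach described above: Gaussian-distributed sites, a random palette color per site, and pixel-by-pixel nearest-site coloring, saved with PIL. The palette values, canvas size, and function name are placeholders of mine, not the author's original code.
import numpy as np
from PIL import Image

def make_voronoi(width=600, height=600, n_sites=40, seed=0, out_path="voronoi.png"):
    rng = np.random.default_rng(seed)
    # Sites drawn from a 2D Gaussian centered on the canvas.
    sites = rng.normal(loc=(width / 2, height / 2), scale=(width / 5, height / 5), size=(n_sites, 2))
    # Two placeholder palettes (RGB); each site gets a random color from a randomly chosen palette.
    palette_a = [(38, 70, 83), (42, 157, 143), (233, 196, 106)]
    palette_b = [(231, 111, 81), (244, 162, 97), (20, 33, 61)]
    colors = np.array(
        [(palette_a if rng.random() < 0.5 else palette_b)[rng.integers(3)] for _ in range(n_sites)],
        dtype=np.uint8,
    )
    # Color every pixel with the color of its nearest site (brute force, one pass per site).
    xs, ys = np.meshgrid(np.arange(width), np.arange(height))
    px, py = xs.ravel(), ys.ravel()
    best_dist = np.full(px.shape, np.inf)
    nearest = np.zeros(px.shape, dtype=int)
    for i, (sx, sy) in enumerate(sites):
        dist = (px - sx) ** 2 + (py - sy) ** 2
        closer = dist < best_dist
        best_dist[closer] = dist[closer]
        nearest[closer] = i
    # Build the image array and save it.
    img = colors[nearest].reshape(height, width, 3)
    Image.fromarray(img, mode="RGB").save(out_path)

make_voronoi()
A faster variant would use scipy's KD-tree for the nearest-site lookup, but the brute-force loop keeps the idea easy to follow.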
https://medium.com/i-want-to-be-the-very-best/artistic-voronoi-diagrams-in-python-928bdae85dd8
['Frank Ceballos']
2019-12-24 13:08:41.324000+00:00
['Python', 'Math', 'Art', 'Design', 'Voronoi']
Bottoms-Up: How the Pinterest growth team decentralizes its structure
Neeraj Chandra | Growth engineering On the growth team at Pinterest, we describe our structure as “bottoms-up”, meaning ideas and responsibilities flow throughout the team in a way that provides a lot of autonomy to work on the projects each person wants to tackle, across a variety of roles. The result is a team that is very scalable and flexible. And it means that each of us on the team has a lot of liberty to choose how we spend our time. How this approach works Ultimately, my main priority is to build experiences that help contribute to the acquisition, activation, and retention of users, which is also the mechanism by which our team is evaluated. Specifically, my team focuses on user conversion, and in the context of Pinterest, that means showing the value of Pinterest to prospective users and converting them into active users. To do so, we come up with ideas for improvements, build and launch experiments to measure the change, and analyze results. If the experiment contributed a valuable increase, we ship the experiment to users. Conversely, if the experiment had no impact on the growth funnel or negatively contributed to user growth, we shut it down, remove the experiment, and analyze the results to learn and improve. Our team runs hundreds of experiments per quarter, so in order to accurately measure the impact of a specific experiment, we randomly bucket the population into control and experiment groups. To run so many experiments, our team is constantly searching for new ideas. We don’t believe that idea generation should belong only to one person; instead the entire team is responsible. We regularly spend time brainstorming and evaluating new ideas. Anyone can pitch an idea to the team to receive feedback, and if the idea holds up it gets moved into the backlog. Part of demonstrating that an idea is worthy is showing there is enough of an opportunity, and so we must also explain why the idea is important and large enough to work on. In a similar fashion, the responsibility to analyze results also falls to each team member. We’ve built an experimentation framework that calculates and aggregates key metrics, but often times we need to dig further to understand user behavior. And so, a key part of my job is to utilize our data tools to construct queries and analyze the results of an experiment. While I can certainly receive help if needed, it’s expected that I drive the analysis and help complete our understanding of Pinners. A side benefit of everyone being engaged in the analysis is we’re always thinking about the type of data and logs needed in advance of any new experiment. Because we all have experience using logs to answer questions, additional input throughout the development process can only make us better. Why we do it At this point, you might be wondering why we’ve organized ourselves in such a way. We still have some specialization, to be sure, but on the whole it’s a much looser structure. Why have everyone spend time on generating ideas and analyzing results? This particular structure is ideal for us given the nature of our work, as it allows us to focus on the hard problems, remain scalable and flexible, and encourage our own growth. Some additional key reasons: Make the right decisions quickly. Team input enables us to move faster in the development of high quality ideas so that we can focus on solving the right problems. 
If we pick the wrong project to work on, it doesn’t matter how efficient we are at development and analysis, for the entire idea is likely to have poor results and be shut down. By including everyone in the process, we can generate more solutions to our problems and incorporate a wider diversity of perspectives. In addition to coming up with ideas that solve a user problem, it’s also important to estimate the potential impact of any opportunity, so that we can prioritize accordingly. Predicting the expected impact from a new project can be difficult for just one person to do, but by having the entire team involved we can hold each other accountable and arrive at a more objective and rigorous result. In practice, this means that we have a weekly meeting where individuals pitch the team on any ideas they have. As a team, we discuss each idea for its merits and its potential opportunity, and then use that to shape our prioritization. Adjust to focus on the current problems. By involving everyone in each part of the process, we remain flexible and scalable. If our team is low on ideas, we can all shift our focus for the week to generating new ones. If we have a healthy backlog of ideas, we prioritize developing those ideas and launching experiments. If we need to analyze ongoing experiments, we can each perform the necessary analysis to arrive at a conclusion. The result is we can adjust to solve the current problems, regardless of where they are in our process; we can also easily onboard new members without changing our process. New ideas lead to professional development. Finally, each of us can learn and grow by being exposed to the different parts of the process. I personally find this to be really exciting, because I get to be involved in generating ideas, prioritizing work, developing experiments, and analyzing results. 
Each of these requires a different set of skills, and as the nature of my work shifts or my interests change, I can adjust accordingly. Since joining the team, I’ve gotten better at each of these things. I’ve improved my ability to come up with ideas for consideration, and I’ve improved my ability to dig deep into data analysis. Getting better at one part of the process helps me with the other parts too. For example, by becoming more familiar with analyzing experiments, I can incorporate better logs during development, and by becoming better at generating ideas, I can understand what questions to answer during analysis. It also means that, in addition to building new experiences, I get to think about the larger problem of user acquisition and activation. This is great for me and great for the business; with each of us learning more about user growth, we can in turn contribute better and higher-impact ideas, while ensuring our teams move at a fast pace. What the approach requires This bottoms-up approach may not work for every team. Because we tend to build smaller initiatives to test our hypotheses, before committing to larger efforts, we’re constantly having to ideate, adapt and learn. Consequently, our team benefits tremendously from our flexible nature and focus on ideation. The entire team, to its credit, has also really embraced this mentality, and it’s so exciting to see everyone adopt a curious mindset and consider new opportunities. Through refinement, we’ve found success making the bottoms-up approach work for us. But how does it work for management? I asked Ludo Antonov, the head of growth engineering at Pinterest, about his thoughts on this approach and why he helped structure the team this way. This is what he told me: “I love ‘bottoms-up’ and decentralized teams because it allows everyone on the team to feel ownership over the product and the company. It encourages various functions such as design, marketing, engineering, and product to come together and learn from each other’s perspective and build a world class product. I always say that we should strive for the people that spend the most of their personal time on turning an idea into reality to have the most say in its direction. This leads to a much higher quality of execution and happiness of the team, while fostering learning and excellence. I get ecstatic when I see engineers, designers or marketers propose radically different ideas than what we have in the product today and channel that passion into making these ideas successful. In that sense, I think of anyone in a leadership position on the team (myself included) as being an advisor to them, guiding and enabling them in that process of creating value. My favorite part is that when I have ideas that I think we should build, I know that the team is empowered to say ‘no’ to me. Thus, I know that when we actually get aligned on building something, it is because they believe it’s worth their time and will make the team and the product more successful, versus building things because of a leader’s opinion.” Tying it all together In the end, our team has found success with a bottoms-up culture. It’s encouraged each of us to take ownership over our domain while generating a healthy diversity of ideas. Being able to test my own ideas is incredibly empowering too — there’s no substitute for trying out an idea and seeing how it actually performs. 
What I enjoy most is being able to spend time throughout my day thinking about our strategy and coming up with new ideas, and then turning those ideas into reality. I get to be a part of the bigger conversations, and it’s an incredible opportunity to have influence on a product that over 250 million people use each month. Thanks to Brian Lee, Ludo Antonov, Hayder Casey, and Jeff Chang for their help with this post!
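As a rough illustration of the control/experiment bucketing mentioned earlier, here is a minimal Python sketch of deterministic, hash-based assignment. It is not Pinterest's actual experimentation framework; the salt format and the 50/50 split are assumptions of mine.
import hashlib

def assign_bucket(user_id: str, experiment_name: str, treatment_share: float = 0.5) -> str:
    """Deterministically assign a user to 'control' or 'treatment'.

    Hashing user_id together with an experiment-specific salt keeps the
    assignment stable across sessions while remaining independent of
    other experiments.
    """
    digest = hashlib.sha256(f"{experiment_name}:{user_id}".encode()).hexdigest()
    # Map the hash to a number in [0, 1] and compare against the split.
    fraction = int(digest[:8], 16) / 0xFFFFFFFF
    return "treatment" if fraction < treatment_share else "control"

# Example: the bucket is stable for the same user and experiment.
print(assign_bucket("user_42", "new_signup_flow"))
Stable, per-experiment assignment like this is what makes it possible to aggregate metrics for control and experiment groups over the life of a test.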
https://medium.com/pinterest-engineering/bottoms-up-how-the-pinterest-growth-team-decentralizes-team-structure-d9f890fa8869
['Pinterest Engineering']
2019-04-05 19:13:29.008000+00:00
['Growth', 'Engineering', 'Startup', 'Team Collaboration']
Joyful Exercises for Contributing to Low-Fat, Lean Muscles, and Dense Bones
Joyful Exercises for Contributing to Low-Fat, Lean Muscles, and Dense Bones Fitness is a passion for me. I do it as a ritual at home nowadays. I used to go to the gym and loved the rituals. However, nowadays, it is difficult for me to go to the gym. Not going to the gym does not mean giving up fitness goals. I created a customised gym at home for my specific needs. I perform a wide variety of workout regimes. As I get older, the type of exercises I do has changed a lot. I used to do a lot of cardio when I was younger. My main focus is weight training, resistance training, callisthenics, high-intensity interval training (HIIT), and mild cardio. The purpose of this post is to share with you three joyful exercises I perform on a daily basis to keep my lean muscles, bone density, and low fat percentage. Even though I passionately work out, my approach to exercise is gentle. For example, instead of using too-heavy dumbbells, I prefer using my body weight. It is natural and produces the required outcomes for me. Let me introduce you to the three simple exercises I do almost every day to maintain my low body-fat percentage, lean muscles, and dense bones. Apart from a few hundred push-ups, I use the pull-up machine every day. Here are the three joyful exercises; enjoy.
https://medium.com/illumination-curated/joyful-exercises-for-contributing-to-low-fat-lean-muscles-and-dense-bones-18a9d7614afc
['Dr Mehmet Yildiz']
2020-12-28 16:52:20.670000+00:00
['Self Improvement', 'Fitness', 'Writing', 'Health', 'Technology']
The Forgotten Key to Great Software: Human Touch (and Special Goggles)
Cast your mind back to 2006. RHCP’s Stadium Arcadium, Yeah Yeah Yeahs’ Show Your Bones, and The Killers’ Sam’s Town were grooving through your FM stereo. The second Pirates of the Caribbean movie was netting blockbuster sales (sails?), as the book Eat, Pray, Love was encouraging us all to take chances, live a little louder, and drink white wine at 10am (in a fun way, not in an alcoholic sort of way). The iPod had been dominating the music world for five full years, and had a near-monopoly on the listening-device industry. But a new foe approached. One that promised to take the MP3 fight straight to those turtleneck-wearing hipsters in Cupertino. In November of 2006 the Microsoft Zune, a.k.a. the iPod killer, was unveiled. On paper the two devices looked incredibly similar. In fact the Zune had a leg up in some areas, with its larger screen and built-in FM tuner. In a battle of two tech giants, both yearning to get their device into your pocket, you’d think sales figures would be neck & neck. Yet after ~2 years of production the Zune finally reached 2 million in total sales, while 3.53 million iPods were being sold every single month. Why the substantial mismatch in sales? Sure the iPod was introduced 5 years earlier and had momentum, but that doesn’t fully explain the wildly different results. I’d argue it’s because the iPod was built around its users, from its conception, to every design choice made, to every line of code written and hardware component created. The iPod seemed intuitive, cool, and easy to use. It made sense. It felt good. That scroll wheel was smooth yet tactile as your finger raced around the rim. The menus & UX let you jump quickly to the perfect song, while also allowing you to meander through your catalog, looking for inspiration. Simple games like Texas Hold ’Em and that brick breaking game, though not cutting edge in their gameplay or graphics, gave the user something to concentrate on as they grooved to their favorite songs. Even though the Zune stood toe-to-toe technically with the iPod, it lacked that human-centered touch. Instead of the easy-to-master scroll wheel, the Zune stuck with traditional directional buttons, causing you to “click, click, click, click…” through selection after selection until you found the right jam. The device was larger, heavier, and generally felt clunkier than the sleek iPod. The respective stores (iTunes vs. Zune Store) offered very different user experiences and ease of integration with their products. On paper the two devices were incredibly similar, and if feature lists were the only criterion of a well-received piece of tech then Apple and Microsoft would’ve posted similar sales. But Apple built a device entirely around its users, and that made all the difference. Software Can Fulfill All Requirements, and Still Be a Disaster We’ve all had those sprint reviews, where devs unveil the feature they worked an unholy number of hours on, putting their blood, sweat, and tears into making sure reality perfectly matches each requirements specification, only to be bombarded with questions from other stakeholders during the demo. “Why do I have to click two buttons to reach this page?” “I can’t find the Submit button, where is it?” “Why do I have to toggle back and forth between pages? Can’t I just see all this info on one page?” Worse still is when the whiz-bang feature that’s been clamored for finally gets deployed to production, and no one touches it for months on end. When asked, users respond with statements like: “Oh that? 
Yeah I found it easier to just put all my comments in Notepad” “I tried that approach, but it just seemed so complicated. Whereas the old way I’m familiar with.” “Wait, you can do that? I never knew!” As engineers, we like building cool things. Therefore we tend to focus exclusively on the engineering & technical aspects of our software. How robust is the design? How clean is the code? How scalable and maintainable is the system? Engineers should care about these dimensions, and fight for them if need be, but they’re only half the battle when creating a great system. When using an iPod, 12-year old me wasn’t appreciating the masterful architecture that allowed each piece of the system to interface flawlessly with the greater whole, or the brilliant algorithms that saved heaps of space and time during repetitive calculations. I liked the fact that I could buy my favorite Rise Against album (remember them?) on iTunes, transfer it onto the iPod with ease, and within a few clicks listen to my new tunes wherever I wanted to. As an end user I didn’t care about the technical aspects of the system, I cared about the experience I had when using the system. UX, not UI So how do you build a great user experience? The old-school approach was to hire a couple designers, send them a list of fields & elements to be displayed on each screen, and have interns write CSS/GUI code to look like the resulting mockups. UI done, developers could now design the rest of the system around what they deemed important/interesting. While good interface design is critical, it only scratches the layer of the entire experience. Most software goes beyond a series of screens. It’s dynamic; calculating, reacting, and sometimes being shaped by users’ input, in an intricate dance of man & machine. Good user interfaces don’t mean much if the user has to wait 45 seconds for a central page to load, or is redirected to unexpected, disjointed places as they flow through the system. The responsibility of creating a great user experience can’t be entirely shouldered by designers. Every member of the team must keep this goal in mind, from the product owner who selects the right features to prioritize (not necessarily the most popular), to testers who, on top of teasing out crashes, bring up feelings of ambiguity or annoyance felt when using the system, to developers, who write every line of code with the intention of helping users achieve their goals or have an experience. User Goggles Jay Leno, famed software visionary Stacks of books, papers, and articles have been written about the art of making a great user experience, and I recommend that all software professionals take at least a cursory dive into this field. But you don’t need a degree in Human Computer Interaction to create software that users love. Instead you just need the human touch, a.k.a. empathy, a.k.a. walking in another’s shoes, a.k.a user goggles. The D.A.R.E., S.A.A.D., and P.A.D.B.T.O.B.A.T.G.W.J.D.T.W.O.K (People Against Drinking Before Twenty-One But After That Go Wild Just Don’t Touch Weed O.K.) programs in high schools relish the chance to break out their Drunk Goggles at school assemblies. These goggles simulate the visual, balance, & coordination impairment caused by a few too many White Russians, leading their wearer to stumble around the room like a drunkard on a Friday night. The person isn’t actually drunk, they’re probably stone sober, but they interface with the world as if they were a heavy drinker. 
When giving fresh code a first run, I like to imagine putting on my “user goggles”. Although I’ve spent anywhere from hours to months in the intricate details of this system, I forget all that and pretend that I’m a first-time user, trying to accomplish a specific task. It’s tough, and sometimes it feels like another distraction from writing useful code… But as I’m stumbling through my task, I notice things: The Submit button’s on the left. That’s weird. I’d expect it to be on the right. I clicked Continue, is it doing anything? Hello? I feel like all my time is spent toggling between these tabs. Back and forth. Back and forth. Ugh. I keep submitting this form, and each time the next page takes at least 10 seconds to load. I’ve actually started checking my phone each time I click Submit, that’s how bad it is. I write down all these little thoughts. These criticisms, annoyances, and tics. My users would think the exact same thoughts if they loaded my system for the first time in its current state. I use these thoughts to fuel a second pass at the code, to get in front of this criticism. It’s much easier to change the system now than 4 months later when most of the dev team’s moved on and the code base is 5,000 lines thicker. Every team member should practice some form of this exercise. While writing user stories, Product Owners should close their eyes and say “As a user, what do I actually want to do? What do I need to do? What’s the purpose of me loading this system? Does this user story help me execute my purpose for being here?” While trying new features, Testers should load the home page and say “I’m Jane Soandso. I loaded somerandomsystem.com because I want to do ___. Let’s see if I can figure out how to get this done quickly, easily, and painlessly” Write down every hiccup, concern, or negative thought, and fix the root cause if possible. Remember: if you notice something, then you can almost guarantee that your users will notice the same thing, and their experience will be hampered as a result. Conclusion What’s presented here isn’t new (walk a mile in someone else’s shoes, anyone?), but user empathy’s sorely underutilized in software development. Development team members often see software in a very specific light. We’re creating a piece of art. A massive collection of code, interfaces, ideas, and test plans assembled by highly-skilled professionals over the course of months (or years). But the people who use this software (and directly/indirectly give us money for rent & Tostitos) see our creation as a tool, to be used for certain tasks or experiences, and nothing more. By simulating the mindset of our users mid-work, we can constantly readjust our creation to better fit their needs, lives, wants, hates, and secret, innermost desires (that last one’s tricky, and may require some imagination). By wearing our user goggles mid-development we can later release our software with confidence, knowing it’ll achieve its ultimate purpose: to be used & loved by actual people in the real world.
https://medium.com/walk-before-you-sprint/the-forgotten-key-to-great-software-the-human-touch-297ca8300301
['Grant Gadomski']
2019-05-04 02:00:50.856000+00:00
['UX', 'Software Development', 'Software', 'Design', 'Software Engineering']
Unveiling the Algorithm behind “Picks For You”
The Picks For You module on 1688.com is now much more than just a product recommendation channel. It has led to the development of some major online promotions and good marketing scenarios including Top-ranking Products, Must-buy List, Theme Marketplace, and Discover Quality Goods. Inserting marketing scenarios into the Picks For You display in the form of cards helps to distribute traffic and improve overall position exposure gain. This article illustrates how to insert marketing scenario cards into the Picks For You display. Background Currently, the marketing scenarios inserted into Picks For You on the 1688.com app are mostly product collections. With IPV-related metrics as the current focus, we use exposure gain to measure model results. Exposure gain is calculated as the total clicks guided by the position (including clicks that arrive through inserted cards) divided by the total exposures; in the case of only product recommendations, this metric is equal to PV_CTR. Figure 1 The Picks For You display on the homepage of the 1688.com app Challenges and Solutions At present, sellers decide and provide the mapping between marketing scenario cards and the products to display in the Picks For You column. How do we insert certain marketing scenario cards into product recommendations? Status Before Iteration It is very straightforward to randomly attach a card to recommended products with a certain probability. This method lowers exposure gain as it ignores the capacity of cards and user preference for different cards. While causing metric values to drop significantly, this method helps to accumulate initial data in a short time. Weak Personalization To transform a product into a card, define a card quality score and a user preference score, then use the formula shown in Figure 2 to determine which type of card to use. Figure 2 Card selection formula At the same time, instead of simply applying the product-card relationships provided by sellers, filter the provided collection of product-card pairs. There are multiple cards under each card type for every product, therefore filtering by card quality scores is critical. User preference scores for different card types are calculated offline. iGraph synchronizes these two types of data daily. In online scheduling, cards are inserted once they are attached to the corresponding products based on the formula shown in Figure 2. For better results, carefully arrange the display frequency of cards, ensuring that there is a certain number of products in between to avoid the situation wherein too many cards appear on one screen. The exposure gain grows by 3.23% in comparison to the results before iteration. However, it is still lower than the base value without inserting any card. It is significant that the improvement remains limited even when experimenting with multiple variations of the formula in Figure 2. Machine Learning Model Now, with so many cards available to attach to a recommended product, the question arises as to which one is most likely to draw clicks. The suggested model attempts to answer this question by transforming it into a Click-Through-Rate (CTR) estimation problem. Sort the estimated CTR values and pick the highest CTR. The final display results are subject to several rules. Examples and Features From the Picks For You data, select the Exposure and Click data of recommended products that are available for attaching cards as training examples. Features fall into three parts: user features, (trigger) item features, and card features. Use the product form as a special card form. 
Select 85 features as the model input, including 62 real number features, 19 categorical features, and 4 cross features. Real number features are statistical features in the user, item, and card dimensions. For example, the statistics of CTR of a product (item) on the Picks For You platform, and CTR in different forms. For categorical features, make sure to embed the same before inserting them into the model. Recall Based on the final product recommendations of Picks For You, recall the candidate collection, item2item2card, from the selected product-card collection. Currently, the mapping between products and cards only uses the above-mentioned card quality score, but does not consider the relationship between products and cards. The overall capacity of a card is not necessarily equal to the capacity when it is attached to a certain item. Therefore, add an item2theme mapping, where the theme represents item-card. SWING algorithm helps to construct this method using Card Exposure and Click data of several days and considering an item-card pair as an item entity. Online A/B testing shows that after adding this recall, exposure gain increases by 0.79%. Sorting Model Use the Wide & Deep Model (WDL) as the sorting model, despite using the Deep & Cross Network (DCN) during iterations. A/B testing shows little difference in exposure gain between the two models, with DCN 0.03% higher than WDL. Use XTensorFlow to train models every day and push to RTP. Figure 3 WDL (left) and DCN (right) Effectiveness The effectiveness is composed of two factors: card content and precise card distribution for product recommendation. If the card content is poor, it will further reduce users’ interest in clicking the card again, and also affects upstream distribution. Compared to the weak personalization, the current strategy increases exposure gain by 6.77% and the number of products clicked per user by 18.60%. Secondly, against the product recommendation method, the current strategy boosts exposure gain by 1.58% and the number of products clicked per user by 0.01%. System Process Online Scheduling The product recommendations from Picks For You determine the overall product sequence, whereas the card sorting model decides which cards are attached to which products and eventually where to display based on rules (the card interval strategy used in the weak personalization phase still applies). Figure 4 shows a flowchart of online scheduling, which includes the weak personalization and machine learning module. Figure 4 Scheduling flow diagram, containing weak personalization and machine learning model Card Fallback and Cold Start If certain card types are missing in the final result of a single request, then according to a pre-set probability, up to one card will be inserted for each missing card type. The interval policy also affects the fallback result. A card can’t be inserted if there is no suitable position. This ensures card fallback and cold start; and increases the diversity of cards, allowing users to see other types of cards. Road Ahead The current model is merely a card selector, which fails to consider the overall sequence of products and cards. Currently, the card selector only models the exposure and click data at the Picks For You level. Considering that we aim to take into account clicks inside cards, there is room for optimization. 
In the future, we plan to use the existing product recommendation results as one recall source and the card selector as another recall method for cards, and to train a mixed sorting model downstream for overall sorting.
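To make the online scheduling step described above concrete, here is a minimal Python sketch of attaching at most one card to a product while enforcing a minimum interval between inserted cards. This is not Alibaba's production code; the data shapes, example scores, and the interval of 4 products are assumptions of mine.
from typing import Dict, List, Optional, Tuple

def schedule_cards(
    ranked_products: List[str],
    card_candidates: Dict[str, List[Tuple[str, float]]],  # product -> [(card_id, predicted_ctr), ...]
    min_gap: int = 4,                                      # assumed minimum number of products between cards
) -> List[Tuple[str, Optional[str]]]:
    """Attach at most one card per product, keeping at least `min_gap`
    card-free positions between any two inserted cards."""
    feed: List[Tuple[str, Optional[str]]] = []
    last_card_pos = -min_gap - 1
    for pos, product in enumerate(ranked_products):
        card_id = None
        candidates = card_candidates.get(product, [])
        if candidates and pos - last_card_pos > min_gap:
            # Pick the candidate card with the highest estimated CTR.
            card_id, _ = max(candidates, key=lambda c: c[1])
            last_card_pos = pos
        feed.append((product, card_id))
    return feed

# Example usage with made-up ids and scores.
products = [f"item_{i}" for i in range(10)]
candidates = {"item_1": [("top_ranking", 0.031), ("must_buy", 0.027)],
              "item_3": [("theme_market", 0.024)],
              "item_7": [("discover_quality", 0.029)]}
print(schedule_cards(products, candidates))
The interval rule plays the same role as the display-frequency constraint mentioned in the article: it prevents several cards from crowding onto one screen regardless of how the scores come out.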
https://medium.com/dataseries/unveiling-the-algorithm-behind-picks-for-you-cc5ed8a82772
['Alibaba Cloud']
2020-02-06 06:35:03.431000+00:00
['Alibaba', 'Machine Learning', 'Algorithms', 'Artificial Intelligence', 'Big Data']
The Perfect Micro-Workout For Writers That Won’t Kill Your Flow
Recently we bought a treadmill, but sadly I’m not using it as much as I’d hoped I would. I enjoy it but finding a time in my day to devote to exercise has been problematic. Our house is small and the basement is my daycare so we don’t have the kind of extra space that most people have in their homes. The treadmill lives in our bedroom, which is right beside our daughter’s room and it’s also over the daycare. That means that I can’t use it in the mornings when my daughter is sleeping (I probably could, but I’m not a jerk) or when the daycare kids are sleeping. So that rules out the two best times for me to exercise. Also, I write in the morning, so exercising at that time would waste my best, most focused energy. When the daycare kids are sleeping, I’m eating my lunch and editing usually. After work, I’m getting dinner ready, and I’m not an evening person. Image by author via Canva. I know people who go to the gym or a yoga class after dinner, but we don’t eat until around 7:30 so that’s not going to work. By 8 pm, I’m in my jammies, on my computer again doing courses, writing more, or getting caught up with my favorite Medium writers. Even though I want to exercise, part of the problem is finding the right opportunity during the day. Finding a time to exercise can be the trickiest part of getting your workout done, even if you don’t dread it. I think this is why I’m always most successful when getting my exercise incidentally.
https://medium.com/illumination-curated/the-perfect-micro-workout-for-writers-that-wont-kill-your-flow-5f0bdb9826c7
['Erin King']
2020-11-03 02:40:15.927000+00:00
['Health', 'Lifestyle', 'Writers Life', 'Self', 'Writing']
What I Learned After 360 Days of Journaling
Journaling Helps Build Positive Habits With Ease “Depending on what they are, our habits will either make us or break us. We become what we repeatedly do.” ―Sean Covey Journaling every day is a habit that you build. It does not come as easily as you may think. It is time-consuming, and some days, it feels harder to pick up that pen and paper. Like building any habit, it takes effort, patience, and consistency. Once I started to build journaling as a habit, it actually helped me develop other positive habits with much more ease. I became used to being patient with myself. I was putting the effort and mindfulness towards working at building that habit every day. I’ve found myself working out more consistently. I’ve begun to see patterns in my behaviors over the last few months and to make a conscious change from the parts of me that were toxic. I managed to write each day, adjusting how much I wrote and finding what time worked best. Set Yourself Up for Success I started small — writing one sentence for a week. My initial goal was only to sum up my day in one sentence. Starting small and working your way up sets you up for success. When starting anything new, it’ll become overwhelming if you dive right into the deep end. You don’t want to let yourself drown. As time went on, I started to have the urge to write more. So, I took the training wheels off, no longer limiting myself to only one sentence, and took off writing as much as I felt like. (Although to continue building muscle, I made sure that I wrote a minimum of at least one to three sentences.) It was hard finding the time of day to journal at first. At first, I typically wrote as soon as I woke up, as the entries resembled Morning Pages. However, some mornings I would wake up later and not have time before work, so I’d write before bed. For a few months, I ended up buying a second journal to test out writing one in the mornings and one before bed. However, that did not work for me; it just took up too much time, and I had to label these journals to make sure I wrote in them at the right time. I then tried to write my entries in many different ways. At first, I treated it like my therapist, letting every thought out — the good and the bad. It felt like a long-overdue release. Then, for a few weeks, I focused on gratitude. I began to write more about what I was grateful for in my life. I would write it with as much clear visualization as possible. Like being grateful for the sun and ocean, I would write how I felt the breeze brush my face with kisses, reminding me that I was alive. Or how the sounds of the ocean were reminding me that I am strong, that everything in life reflects the rise and fall of the waves. Building this habit, I didn’t set too many limits and boundaries, to ease into journaling consistently. The first three months were trial and error. Eventually, you’ll start to gain momentum and find ways that work. Results After four months, I began to find a consistent time that worked best with my schedule. It was better for me to write as soon as I woke up. It gave me the time to process events and thoughts from the previous day, while also being able to incorporate what I’d dreamt the night before. It’s helped me keep one journal so that my thoughts are consecutive and together in one place. It makes it a lot easier when I want to go back and read the past months to reflect further. Sometimes, when I think I don’t have much to say, it has helped me create themes or focus on entries. 
There are weeks where I want to feel more grateful, so I write as a reminder for what I’m grateful for. There are weeks where I begin to feel stagnant or really struggling. So I started writing letters to the universe asking for strength or visualizing letters writing as my future self, thanking the universe for bringing me what I’ve always dreamed of. I write in specific detail, tapping into all of my senses, realizing that I could get there — I will get there. As I kept writing, I kept building the amount that I wrote each week. By the 3rd month, I was able to write one full page without stopping. Now, I can write more than one page, as if I’ve just written one sentence. What You Can Do to Start Your Journaling Practice
https://medium.com/age-of-awareness/what-i-learned-after-360-days-of-journaling-843914e7bd78
['Jess Tam']
2020-12-15 18:02:58.508000+00:00
['Creativity', 'Journaling', 'Self Love', 'Self Improvement', 'Writing']
Direct from the investor: 5 lessons for entrepreneurs in 2021
Direct from the investor: 5 lessons for entrepreneurs in 2021 The end of the year is good for taking stock. And it’s also good for, who knows, tearing up some manuals. Photo by Andreas Klassen on Unsplash I consider myself a beginner start-up adviser and investor: a great apprentice, modesty aside, but with a lot to learn. Having been an administrator for over four years, I reinvented myself in the business world by doing M&As. And, six months ago, I immersed myself in this world that excites and fascinates me as a startup advisor and investor. At 24, all right. It’s not too bad to feel like a boy. It feels good. And so I keep going, and I will continue to learn. That said, here are my five boy cents for you, the entrepreneur, to start 2021:
https://medium.com/datadriveninvestor/direct-from-the-investor-5-lessons-for-entrepreneurs-in-2021-f0c67fc502c6
['Marco Antonio']
2020-12-27 17:32:29.075000+00:00
['Investing', 'Entrepreneurship', 'Business', 'Technology', 'Productivity']
Is the New Instagram Update a New Form of Dark Pattern?
Image by natanaelginting on freepik In the last days, the most recent Instagram update has been in the news for the worst reasons. Many users and influencers have publicly spoken out their dissatisfaction, namely James Charles, who, in a rant video, advised his followers not to update their apps. At the center of these negative opinions is the new layout, where the basic Instagram options — camera and notifications — have been removed from the bottom menu bar and instead are displayed in the upper-right corner of the home page, next to the direct message button. Author/Copyright holder: Instagram. Copyright terms and license: Fair Use. The new bottom menu features the “Reels” (mimicking Instagram’s competitor TikTok) and “Shop” button. This means that these two options are now easily accessible on every screen, while the camera and the notifications button are inaccessible in some tabs. Instagram is very explicit with its intentions on its blog: to prioritize Reels and its e-commerce platform over traditional posts. The company states that this change was taken in order to keep up with the fast-paced evolution of technology use patterns. However, this approach seems to be forcing the users to have new intentions when using the app and not otherwise. It even makes me wonder if we are facing a new type of dark pattern. In my recent post about Dark Patterns, I showed how Instagram uses the Roach Motel pattern. Indeed, we already know this company has no problem in resorting to dark patterns to reach their goals. However, if Instagram is being honest about its intentions with this update, we cannot consider it a dark pattern, right? The issue is the new bottom menu layout takes advantage of years of muscle memory developed by the users, who now have a high probability of accidentally clicking on the Reels or Shop button when looking for the app’s traditional features. Certainly, Instagram has a competent UX Team who is able to understand the real users’ needs and motivations. It’s very possible they knew users are not interested in the same features the company is aiming to promote. Maybe this is the reason why Instagram’s official blog post seems to be anticipating users’ disappointment: Author/Copyright holder: Instagram. Copyright terms and license: Fair Use. What about you, what do you think about Instagram’s new update? Do you also consider it unfair for users or do you find it a good business decision?
https://uxplanet.org/is-the-new-instagram-update-a-new-form-of-dark-pattern-1776697cffd8
['Mariana Vargas']
2020-11-19 11:12:20.132000+00:00
['Visual Design', 'UX', 'Creativity', 'Design', 'Technology']
Three Stories Every Entrepreneur Should Know
Storyboard for Breaking Bad, and we all know how that ended. Image from Uproxx. What would you say your company does? It seems like a simple question that should have a simple answer, but that’s rarely the case. Here’s something I’ve done over my 20+ years as an entrepreneur to keep everyone focused on the tasks at hand while also keeping an eye on the future. Entrepreneurs usually blur the lines of what their startup is, what it will be, and what it should be. This is fine until you try to start planning around those stories. At that point, you need to be asking: What are the priorities today and how do we execute on those priorities without mortgaging the future? The reverse question is just as important: How much time do we spend working on those new things that aren’t generating revenue yet? The Three Story Rule Every startup should have three stories, loosely related to the three arcs most storytellers use in episodic storytelling. An easy way to think about it is a television series. When you watch an episode of a TV show, the writers are usually working on three storylines: Story A: Story with an arc that begins and ends in this episode (or maybe a two-parter). Story B: Story with a longer arc that lasts a few episodes or more. This current episode will advance the plot of Story B in smaller increments, and maybe drop a twist in here or there. Story C: Story with a much longer arc, maybe out to the end of the season or the end of the series itself. This current episode might not advance Story C at all, or it may just drop a few hints. At the end of the season or the series, you’ll be able to look back and piece Story C together, but that won’t be easy or even possible in real time. Now let’s take that story strategy and apply it to your startup, and I’ll use my most recent startup as an example. Story A: Right Now Story A is the what your company is doing today that is generating revenue, building market share, and adding value to the company. Story A is about this fiscal quarter, this fiscal year, and next fiscal year. At Automated Insights, Story A was our story for the first few years while we were known as Statsheet, a company that aggregated sports statistics and turned them into visualizations and automated content. This is how we made our money — either using our own data to generate content or using data like Yahoo Fantasy Football to generate Fantasy Matchup Recaps. While we were breaking new ground in the arena of sports stats, we were one player in a sea of players, and while automating content from sports stats gave us a competitive advantage, sports was still a highly commoditized and difficult marketplace. Story B: What’s Next Story B is what’s going to open up new markets using new technologies or new products. Story B is about what you could do if the stars aligned properly or if you raised enough money for a longer runway, because Story B usually comes with a lot more risk for a lot more reward. A few years into Statsheet, when we went to raise our Series A round, we pitched using our proprietary automated content engine on all kinds of data, generating machine-written stories in finance, marketing, weather, fitness, you name it. We changed our name to Automated Insights and pivoted completely with a $5 million raise. That pivot came with a ton of risk. We had friends (and potential acquirers) in sports and we would now be making sports just a part of our story. 
In return, we would be one of the first players in the nascent Natural Language Generation (NLG) market, a pre-cursor to the “AI” market. It was not a coincidence that the acronym for our new company name was also AI. Story C: The Billion-Dollar Story Story C usually involves a seismic change that disrupts existing markets, and as you can imagine, it’s a million times more difficult to pull off. Uber and Lyft are on Story C. They’re no longer known as a better taxi or for solving a specific problem. They’re about creating a market in which a large portion of people can no longer live without them. In most urban areas, ride hailing services are now a necessity, as the ability they offer to do more things cheaply has made a major impact on lifestyle. There’s just no going back. Story C was actually where my vision split from my former startup. I was focused more on real-time, plain-word insights generated from a mesh of private and public data, i.e. Alexa, Google Assistant, and Siri. The company was turning towards more of a B2B approach, first as a SaaS NLG tool, and then as a business intelligence tool. No one was wrong here, but the latter was the direction the company took. So now I’m working on a new Story A at a new startup. And I’ve got Stories B and C in my purview. So which story do you tell? Well, it depends on who you’re talking to. For the press, for customers, and for potential employees, stick to Story A — if these folks aren’t jazzed about Story A, then you’re not spending enough time on Story A. In fact, you should consider Story B and Story C to be intellectual property. It’s not the kind of thing you want to go too deeply into without an NDA or some protection in place. For your board, your investors, and your employees, focus on Story A, of course, but also keep them aware of Story B and drop hints about Story C. Story B is where you’re headed next. It might be what you raise your next round on, or it may be your next big pivot. Story C is best kept in the distance until you’ve crushed Story A and made significant progress on Story B. It’s a goal, mainly, and you should just be making sure you’re not closing doors to it as you move forward. Once you get your stories straight, then it’s just about execution. But come back to them often, every quarter or even every sprint, and make sure everyone is still on the same page.
https://jproco.medium.com/three-stories-every-entrepreneur-should-know-3476909629bb
['Joe Procopio']
2019-01-04 19:56:32.109000+00:00
['Product Management', 'Business', 'Entrepreneurship', 'Startup', 'Technology']
My Phobia: (British) Baked Beans.
My Phobia: (British) Baked Beans. The most sickening article I’ve written. Photo by Deepansh Khurana on Unsplash\ I feel sick to my stomach. My face is tight with anxiety and it’s causing crows’ feet on either side of my eyes. They age me beyond my years. They make me shiver. My shoulders are tense. Why can’t I stop staring at them? They say it’s tough to stop looking at unsettling things, and this is beyond repulsive. I’m captivated by them- in the worst possible way. I have a phobia of baked beans. Most people have an aversion to snakes, spiders and Tom Cruise movies. Society accepts them as ‘normal’. Leguminophobia is the irrational fear of baked beans. And it turns out, I am not alone in my struggle. I read an article about a cook who had to quit his job because he couldn’t cook a Full English Breakfast, in which they are a staple. Questions have been posted on Reddit and Quora by those wondering if this affects others. There is even a hashtag on Twitter #Leguminophobia. This is the solidarity I needed. You can stop laughing now. This is serious(ish). If I was a Goliath, baked beans would be my David. The truth is, laughter is a standard response to those who learn about my phobia. I have had to come to terms with this reaction over the years. Most people have an aversion to snakes, spiders and Tom Cruise movies. Society accepts them as ‘normal’. I am well aware that my detestation of a low-cost popular foodstuff is plain weird. Acknowledging the fact they exist in the same world as I, is traumatic. Never mind the fact people choose to eat them as food- shudder. If I was a Goliath, Baked Beans would be my David. British baked beans- most famously made by Heinz- are premade and come in a tin. It’s shiny silver, rudimentary and sturdy. It would break a toe (or brick for that matter) if dropped on one, and remain unblemished. The opening reveals a cold, shiny tomato-based red gloop that glistens in even the dimmest of light. How can food make my eyes squint? They taste overly sweet and artificial despite being made with no additives. The beans are Haricot Beans apparently-whatever they are. They have a dry centre and mushy texture meaning they stick to your teeth when chewed. The flatulence they produce is almost unique in its smell. “Did you eat baked beans today? It smells like you did”. It all started in my family home as a child where they were deemed an acceptable dish at any time. For breakfast, lunch, or a side dish at dinner. Baked beans were everywhere. I recall my brother warming some in the microwave for lunch. He nuked them for too long meaning he could hardly hold the bowl without burning his hands. I was just about coping with them in my vicinity until I saw a thin layer of red skin on the top, which was caused by excessive heating. He proceeded to scrape it off with a spoon, flick it into the bin, before taking a mouthful. For me, this episode was trauma in its purest form. Photo by Cottonbro on Pexels “They keep you regular” my mom used to say. Which, to me, was just another reason, on the already exhaustive list, to hate them. When I spy someone chowing down on beans without a care in the world, I feel desperately sorry for their partner, friends, family, or colleagues who will have this human airbag in their company later on. The flatulence they produce is almost unique in its smell. “Did you eat baked beans today? It smells like you did”. The scent is as sinister as it is distinctive. Watching them being ingested is all engaging for me. 
Like falling into anaphylaxis when stung by a bee. Why do baked beans seem to infect every other item on the plate? The bean ‘jus’ is unable to be hoarded into one part of the plate and kept there willingly. I once saw someone build a dam of sausages and toast to save their mushrooms from drowning on their plate. Their attempt was futile. I can only assume the only way to segregate your food from baked beans and their fluid is to keep them in a bowl independent from your meal. That would, however, require one to eat a spoonful of pure baked beans, with nothing to distract from the noxious taste or texture. After they have been eaten, my torture isn’t over. When the bean-tainted plate is removed there is always, ALWAYS a bean remaining on the table. It’s sitting there, thinking it’s got away from its bio-bin grave, surrounded by a thin layer of sauce smeared on the counter. Cold and still gleaming in the light. How can just one bean make me squint? When it’s wiped up, the weak bean structure squashes into the cloth, causing further insult. God forbid I have to wash up the dishes. The dishwater will be polluted with a reddish tint and the sink strainer will expose a naked pale bean after having its coating washed off. A world without baked beans would be a better one. Can we agree to get fibre and protein from other foodstuffs? It would make my life infinitely happier.
https://medium.com/writers-blokke/my-phobia-british-baked-beans-7050173c32f9
['Nathan Foolchand']
2020-12-22 21:34:29.384000+00:00
['Health', 'Short Story', 'Life', 'Writing', 'Food']
How To Get Hired At Your Dream Job In Just 11 Lines Of Python Code
How To Get Hired At Your Dream Job In Just 11 Lines Of Python Code A complete newbie-friendly guide on how to use Machine Learning to be hired at your dream job “There has literally never been a time in the history of humankind when this short article could have been more relevant.” In the U.S. alone an estimated 47 million jobs are predicted to disappear in the post-COVID-19 world. To put that in perspective, a staggering 32% of the workforce might become unemployed. Similar statistics can be found for almost all countries across the globe. What does this mean? The majority of these people are not simply going to accept such a fate. Governments around the world have pledged relief funds to their citizens. Although such funds have been quite generous in many countries, they are by no means enough to feed the millions of people who are soon going to be left without a fixed source of income. It thus becomes obvious that most of those affected are going to try to find a new place of employment. In other words, if you considered the job market prior to COVID-19 to be extremely competitive, finding an occupation in 2020 will prove to be exponentially more difficult, as the competition is going to be significantly higher than it was in the pre-COVID-19 world. Taking all of this into consideration, the aim of this article is to walk you through the complete process of creating a beginner-friendly machine learning model which is going to almost guarantee that you are accepted at the job of your dreams. If you like this article and are interested in receiving exclusive monthly content for free, subscribe to my exclusive mailing list at the end of this article (it can also be accessed directly from here).
https://medium.com/datadriveninvestor/get-hired-at-your-dream-job-in-just-11-lines-of-python-code-7a8d41bbb335
['Filippos Dounis']
2020-06-05 00:21:48.339000+00:00
['Business', 'Entrepreneurship', 'Programming', 'Data Science', 'Artificial Intelligence']
Visualising high-dimensional datasets using PCA and t-SNE in Python
Update: April 29, 2019. Updated some of the code to not use ggplot but instead use seaborn and matplotlib . I also added an example for a 3d-plot. I also changed the syntax to work with Python3. The first step around any data related challenge is to start by exploring the data itself. This could be by looking at, for example, the distributions of certain variables or looking at potential correlations between variables. The problem nowadays is that most datasets have a large number of variables. In other words, they have a high number of dimensions along which the data is distributed. Visually exploring the data can then become challenging and most of the time even practically impossible to do manually. However, such visual exploration is incredibly important in any data-related problem. Therefore it is key to understand how to visualise high-dimensional datasets. This can be achieved using techniques known as dimensionality reduction. This post will focus on two techniques that will allow us to do this: PCA and t-SNE. More about that later. Lets first get some (high-dimensional) data to work with. MNIST dataset We will use the MNIST-dataset in this write-up. There is no need to download the dataset manually as we can grab it through using Scikit Learn. First let’s get all libraries in place. from __future__ import print_function import time import numpy as np import pandas as pd from sklearn.datasets import fetch_mldata from sklearn.decomposition import PCA from sklearn.manifold import TSNE %matplotlib inline import matplotlib.pyplot as plt from mpl_toolkits.mplot3d import Axes3D import seaborn as sns and let’s then start by loading in the data mnist = fetch_mldata("MNIST original") X = mnist.data / 255.0 y = mnist.target print(X.shape, y.shape) [out] (70000, 784) (70000,) We are going to convert the matrix and vector to a Pandas DataFrame. This is very similar to the DataFrames used in R and will make it easier for us to plot it later on. feat_cols = [ 'pixel'+str(i) for i in range(X.shape[1]) ] df = pd.DataFrame(X,columns=feat_cols) df['y'] = y df['label'] = df['y'].apply(lambda i: str(i)) X, y = None, None print('Size of the dataframe: {}'.format(df.shape)) [out] Size of the dataframe: (70000, 785) Because we dont want to be using 70,000 digits in some calculations we’ll take a random subset of the digits. The randomisation is important as the dataset is sorted by its label (i.e., the first seven thousand or so are zeros, etc.). To ensure randomisation we’ll create a random permutation of the number 0 to 69,999 which allows us later to select the first five or ten thousand for our calculations and visualisations. # For reproducability of the results np.random.seed(42) rndperm = np.random.permutation(df.shape[0]) We now have our dataframe and our randomisation vector. Lets first check what these numbers actually look like. To do this we’ll generate 30 plots of randomly selected images. plt.gray() fig = plt.figure( figsize=(16,7) ) for i in range(0,15): ax = fig.add_subplot(3,5,i+1, title="Digit: {}".format(str(df.loc[rndperm[i],'label'])) ) ax.matshow(df.loc[rndperm[i],feat_cols].values.reshape((28,28)).astype(float)) plt.show() Now we can start thinking about how we can actually distinguish the zeros from the ones and two’s and so on. If you were, for example, a post office such an algorithm could help you read and sort the handwritten envelopes using a machine instead of having humans do that. 
Obviously nowadays we have very advanced methods to do this, but this dataset still provides a very good testing ground for seeing how specific methods for dimensionality reduction work and how well they work. The images are all essentially 28-by-28 pixel images and therefore have a total of 784 ‘dimensions’, each holding the value of one specific pixel. What we can do is reduce the number of dimensions drastically whilst trying to retain as much of the ‘variation’ in the information as possible. This is where we get to dimensionality reduction. Lets first take a look at something known as Principal Component Analysis. Dimensionality reduction using PCA PCA is a technique for reducing the number of dimensions in a dataset whilst retaining most information. It is using the correlation between some dimensions and tries to provide a minimum number of variables that keeps the maximum amount of variation or information about how the original data is distributed. It does not do this using guesswork but using hard mathematics and it uses something known as the eigenvalues and eigenvectors of the data-matrix. These eigenvectors of the covariance matrix have the property that they point along the major directions of variation in the data. These are the directions of maximum variation in a dataset. I am not going to get into the actual derivation and calculation of the principal components — if you want to get into the mathematics see this great page — instead we’ll use the Scikit-Learn implementation of PCA. Since we as humans like our two- and three-dimensional plots lets start with that and generate, from the original 784 dimensions, the first three principal components. And we’ll also see how much of the variation in the total dataset they actually account for. pca = PCA(n_components=3) pca_result = pca.fit_transform(df[feat_cols].values) df['pca-one'] = pca_result[:,0] df['pca-two'] = pca_result[:,1] df['pca-three'] = pca_result[:,2] print('Explained variation per principal component: {}'.format(pca.explained_variance_ratio_)) Explained variation per principal component: [0.09746116 0.07155445 0.06149531] Now, given that the first two components account for about 25% of the variation in the entire dataset lets see if that is enough to visually set the different digits apart. What we can do is create a scatterplot of the first and second principal component and color each of the different types of digits with a different color. If we are lucky the same type of digits will be positioned (i.e., clustered) together in groups, which would mean that the first two principal components actually tell us a great deal about the specific types of digits. plt.figure(figsize=(16,10)) sns.scatterplot( x="pca-one", y="pca-two", hue="y", palette=sns.color_palette("hls", 10), data=df.loc[rndperm,:], legend="full", alpha=0.3 ) From the graph we can see the two components definitely hold some information, especially for specific digits, but clearly not enough to set all of them apart. Luckily there is another technique that we can use to reduce the number of dimensions that may prove more helpful. In the next few paragraphs we are going to take a look at that technique and explore if it gives us a better way of reducing the dimensions for visualisation. The method we will be exploring is known as t-SNE (t-Distributed Stochastic Neighbouring Entities). 
For a 3d-version of the same plot ax = plt.figure(figsize=(16,10)).gca(projection='3d') ax.scatter( xs=df.loc[rndperm,:]["pca-one"], ys=df.loc[rndperm,:]["pca-two"], zs=df.loc[rndperm,:]["pca-three"], c=df.loc[rndperm,:]["y"], cmap='tab10' ) ax.set_xlabel('pca-one') ax.set_ylabel('pca-two') ax.set_zlabel('pca-three') plt.show() T-Distributed Stochastic Neighbouring Entities (t-SNE) t-Distributed Stochastic Neighbor Embedding (t-SNE) is another technique for dimensionality reduction and is particularly well suited for the visualization of high-dimensional datasets. Contrary to PCA it is not a mathematical technique but a probablistic one. The original paper describes the working of t-SNE as: “t-Distributed stochastic neighbor embedding (t-SNE) minimizes the divergence between two distributions: a distribution that measures pairwise similarities of the input objects and a distribution that measures pairwise similarities of the corresponding low-dimensional points in the embedding”. Essentially what this means is that it looks at the original data that is entered into the algorithm and looks at how to best represent this data using less dimensions by matching both distributions. The way it does this is computationally quite heavy and therefore there are some (serious) limitations to the use of this technique. For example one of the recommendations is that, in case of very high dimensional data, you may need to apply another dimensionality reduction technique before using t-SNE: | It is highly recommended to use another dimensionality reduction | method (e.g. PCA for dense data or TruncatedSVD for sparse data) | to reduce the number of dimensions to a reasonable amount (e.g. 50) | if the number of features is very high. The other key drawback is that it: “Since t-SNE scales quadratically in the number of objects N, its applicability is limited to data sets with only a few thousand input objects; beyond that, learning becomes too slow to be practical (and the memory requirements become too large)”. We will use the Scikit-Learn Implementation of the algorithm in the remainder of this writeup. Contrary to the recommendation above we will first try to run the algorithm on the actual dimensions of the data (784) and see how it does. To make sure we don’t burden our machine in terms of memory and power/time we will only use the first 10,000 samples to run the algorithm on. To compare later on I’ll also run the PCA again on the subset. N = 10000 df_subset = df.loc[rndperm[:N],:].copy() data_subset = df_subset[feat_cols].values pca = PCA(n_components=3) pca_result = pca.fit_transform(data_subset) df_subset['pca-one'] = pca_result[:,0] df_subset['pca-two'] = pca_result[:,1] df_subset['pca-three'] = pca_result[:,2] print('Explained variation per principal component: {}'.format(pca.explained_variance_ratio_)) [out] Explained variation per principal component: [0.09730166 0.07135901 0.06183721] x time_start = time.time() tsne = TSNE(n_components=2, verbose=1, perplexity=40, n_iter=300) tsne_results = tsne.fit_transform(data_subset) print('t-SNE done! Time elapsed: {} seconds'.format(time.time()-time_start)) [out] [t-SNE] Computing 121 nearest neighbors... [t-SNE] Indexed 10000 samples in 0.564s... [t-SNE] Computed neighbors for 10000 samples in 121.191s... 
[t-SNE] Computed conditional probabilities for sample 1000 / 10000 [t-SNE] Computed conditional probabilities for sample 2000 / 10000 [t-SNE] Computed conditional probabilities for sample 3000 / 10000 [t-SNE] Computed conditional probabilities for sample 4000 / 10000 [t-SNE] Computed conditional probabilities for sample 5000 / 10000 [t-SNE] Computed conditional probabilities for sample 6000 / 10000 [t-SNE] Computed conditional probabilities for sample 7000 / 10000 [t-SNE] Computed conditional probabilities for sample 8000 / 10000 [t-SNE] Computed conditional probabilities for sample 9000 / 10000 [t-SNE] Computed conditional probabilities for sample 10000 / 10000 [t-SNE] Mean sigma: 2.129023 [t-SNE] KL divergence after 250 iterations with early exaggeration: 85.957787 [t-SNE] KL divergence after 300 iterations: 2.823509 t-SNE done! Time elapsed: 157.3975932598114 seconds Now that we have the two resulting dimensions we can again visualise them by creating a scatter plot of the two dimensions and coloring each sample by its respective label. df_subset['tsne-2d-one'] = tsne_results[:,0] df_subset['tsne-2d-two'] = tsne_results[:,1] plt.figure(figsize=(16,10)) sns.scatterplot( x="tsne-2d-one", y="tsne-2d-two", hue="y", palette=sns.color_palette("hls", 10), data=df_subset, legend="full", alpha=0.3 ) This is already a significant improvement over the PCA visualisation we used earlier. We can see that the digits are very clearly clustered in their own sub groups. If we would now use a clustering algorithm to pick out the seperate clusters we could probably quite accurately assign new points to a label. Just to compare PCA & T-SNE: plt.figure(figsize=(16,7)) ax1 = plt.subplot(1, 2, 1) sns.scatterplot( x="pca-one", y="pca-two", hue="y", palette=sns.color_palette("hls", 10), data=df_subset, legend="full", alpha=0.3, ax=ax1 ) ax2 = plt.subplot(1, 2, 2) sns.scatterplot( x="tsne-2d-one", y="tsne-2d-two", hue="y", palette=sns.color_palette("hls", 10), data=df_subset, legend="full", alpha=0.3, ax=ax2 ) PCA (left) vs T-SNE (right) We’ll now take the recommendations to heart and actually reduce the number of dimensions before feeding the data into the t-SNE algorithm. For this we’ll use PCA again. We will first create a new dataset containing the fifty dimensions generated by the PCA reduction algorithm. We can then use this dataset to perform the t-SNE on pca_50 = PCA(n_components=50) pca_result_50 = pca_50.fit_transform(data_subset) print('Cumulative explained variation for 50 principal components: {}'.format(np.sum(pca_50.explained_variance_ratio_))) [out] Cumulative explained variation for 50 principal components: 0.8267618822147329 Amazingly, the first 50 components roughly hold around 85% of the total variation in the data. Now lets try and feed this data into the t-SNE algorithm. This time we’ll use 10,000 samples out of the 70,000 to make sure the algorithm does not take up too much memory and CPU. Since the code used for this is very similar to the previous t-SNE code I have moved it to the Appendix: Code section at the bottom of this post. The plot it produced is the following one: PCA (left) vs T-SNE (middle) vs T-SNE on PCA50 (right) From this plot we can clearly see how all the samples are nicely spaced apart and grouped together with their respective digits. This could be an amazing starting point to then use a clustering algorithm and try to identify the clusters or to actually use these two dimensions as input to another algorithm (e.g., something like a Neural Network). 
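To make that clustering suggestion concrete, here is a minimal sketch (my own illustration, not part of the original walkthrough) that runs k-means on the two t-SNE dimensions produced by the appendix code (the tsne_pca_results array) and checks how well the discovered clusters line up with the true digit labels. The choice of 10 clusters simply reflects the fact that we know there are 10 digits.

from sklearn.cluster import KMeans
from sklearn.metrics import adjusted_rand_score

# Cluster the 10,000 samples in the 2-D t-SNE space
kmeans = KMeans(n_clusters=10, random_state=42)
cluster_labels = kmeans.fit_predict(tsne_pca_results)

# Compare the discovered clusters with the true digit labels;
# a score close to 1 means the clusters match the digits almost perfectly
print('Adjusted Rand index: {:.3f}'.format(adjusted_rand_score(df_subset['y'], cluster_labels)))

A high score here would confirm that the low-dimensional embedding preserves enough structure for a simple clustering algorithm to recover the digit classes.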
So we have explored using various dimensionality reduction techniques to visualise high-dimensional data using a two-dimensional scatter plot. We have not gone into the actual mathematics involved but instead relied on the Scikit-Learn implementations of all algorithms. Roundup Report Before closing off with the appendix… Together with some likeminded friends we are sending out weekly newsletters with some links and notes that we want to share amongst ourselves (why not allow others to read them as well?). Appendix: Code Code: t-SNE on PCA-reduced data time_start = time.time() tsne = TSNE(n_components=2, verbose=0, perplexity=40, n_iter=300) tsne_pca_results = tsne.fit_transform(pca_result_50) print('t-SNE done! Time elapsed: {} seconds'.format(time.time()-time_start)) [out] t-SNE done! Time elapsed: 42.01495909690857 seconds And for the visualisation
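a minimal sketch of what that visualisation could look like, following the same seaborn pattern used for the earlier plots (the 'tsne-pca50-one' and 'tsne-pca50-two' column names are my own choice for illustration, not necessarily the ones used in the original notebook):

df_subset['tsne-pca50-one'] = tsne_pca_results[:,0]
df_subset['tsne-pca50-two'] = tsne_pca_results[:,1]
plt.figure(figsize=(16,10))
sns.scatterplot(
    x="tsne-pca50-one", y="tsne-pca50-two",
    hue="y",
    palette=sns.color_palette("hls", 10),
    data=df_subset,
    legend="full",
    alpha=0.3
)
plt.show()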
https://towardsdatascience.com/visualising-high-dimensional-datasets-using-pca-and-t-sne-in-python-8ef87e7915b
['Luuk Derksen']
2019-04-29 21:25:39.322000+00:00
['Data Science', 'Machine Learning', 'Data Visualization', 'Visualization', 'Python']
The NLP Cypher | 12.20.20
NeurIPS & Knowledge Graphs Michael Galkin returns with his round-up of graph news this time out of NeurIPS 🔥. Around five percent of papers from the conference were on graphs so lots to discuss. His TOC: Blog: For an enterprise take w/r/t the power of knowledge graphs: Training Data Extraction Attack 👀 A new paper (with authors from every major big tech), was recently published showing how one can attack language models like GPT-2 and extract information verbatim like personal identifiable information from just by querying the model. 🥶! The information extracted derived from the models’ training data that was based on scraped internet info. This is a big problem especially when you train a language model on a private custom dataset. The paper discusses causes and possible work arounds. Paper Book Me Some Data Looks like Booking.com wants a new recommendation engine and they are offering up their dataset of over 1 million anonymized hotel reservations to get you in the game. Pretty cool if you want a chance in working with real-world data. Here’s the training dataset schema: user_id — User ID check-in — Reservation check-in date checkout — Reservation check-out date affiliate_id — An anonymized ID of affiliate channels where the booker came from (e.g. direct, some third party referrals, paid search engine, etc.) device_class — desktop/mobile booker_country — Country from which the reservation was made (anonymized) hotel_country — Country of the hotel (anonymized) city_id — city_id of the hotel’s city (anonymized) utrip_id — Unique identification of user’s trip (a group of multi-destinations bookings within the same trip) “The eval dataset is similar to the train set except that the city_id of the final reservation of each trip is concealed and requires a prediction.” UFO Files Dumped on Archive.org There’s a nice dump of UFO files spanning several decades and countries if you want to get your alien research on. Apparently there was some copyright beef between content owners and media publishers which led ultimately to a third party obtaining a copy in the wild and uploading the files to archive.org 😭. Anyway, it’s a good data source to try out your latest OCR algo or if you are interested in searching for anti-gravity propulsion tech. 👽: The Air Force Ported µZero Apparently the US Air Force decided to port DeepMind’s µZero to the navigation/sensor system of a U-2 “dragon lady” spy plane. And they have called it ARTUµ, inspired by R2-D2 from Star Wars 😭. Recently, they ran the first ever simulated flight to show off the AI’s capabilities. The mission was for ARTUµ to conduct reconnaissance of enemy missile launchers on the ground while the pilot looked for aerial threats. DARPA be like: Article: GitHub Search Index Update GitHub will get rid of your repo from its code search index if it’s been inactive for more than a year. So how do you stay ‘active’? “Recent activity for a repository means that it has had a commit or has shown up in a search result.” Getting Rid of Intents Alan Nichol opines on the latest state of conversational AI and his RASA platform with regards to the goal of getting rid of intents as being paramount for conversational AIs to achieve Kurzweil levels of robustness. They are currently experimenting with end-2-end learning as an alternative to intents. In RASA 2.2 and beyond, intents will be optional. 
Blog: Speech Transformers & Datasets & WMT20 Model Checkpoints from FAIR XLSR-53: Multilingual Self-Supervised Speech Transformer Multilingual pre-trained wav2vec 2.0 models Multi-Lingual LibriSpeech Dataset WMT Models Out Facebook FAIR’s WMT’20 news translation task submission models Repo Cypher 👨‍💻 A collection of recently released repos that caught our 👁
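As a small, hands-on illustration of the Booking.com schema described earlier, here is a hedged pandas sketch (the file name and exact column spellings are my assumptions; adjust them to whatever the challenge download actually provides) that loads the training reservations and derives the quantity the evaluation set conceals, namely the final city of each trip:

import pandas as pd

# File name is an assumption; the schema above lists utrip_id, city_id and check-in
trips = pd.read_csv('booking_train_set.csv')
trips['checkin'] = pd.to_datetime(trips['checkin'])

# Sort each trip chronologically; the last reservation's city_id is the hidden target
trips = trips.sort_values(['utrip_id', 'checkin'])
final_city = trips.groupby('utrip_id')['city_id'].last()
print(final_city.head())

Anything from a simple popularity baseline to a sequence model can then be scored against that final_city series.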
https://medium.com/towards-artificial-intelligence/the-nlp-cypher-12-20-20-cfc4b197517c
['Quantum Stat']
2020-12-20 23:24:27.464000+00:00
['Machine Learning', 'Deep Learning', 'Data Science', 'AI', 'Artificial Intelligence']
Create Highly Available Websites using AWS Infrastructure
In this age, the transition to e-commerce is happening at a very fast pace. These e-commerce companies, both small and big would like to have to their website up and running 24 hours and 365 days a year not to lose customers and business. For instance, think about amazon.com going down for a couple of hours. This is where Cloud Computing comes into the picture. Cloud provides agility to build Highly Available (HA) and fault-tolerant applications which was not possible with the on-premise data centers. The Cloud vendors have Data Centers across multiple geographical locations with redundancy to help us build an HA website. The same applies to any of the business-critical applications that need to run all the time. Currently, AWS Global Infrastructure spans 21 geographical Regions with 66 Availability Zones (AZ) with more Regions and AZs to come in the near future. In this article, we will focus on building HA applications in the Cloud. AWS Global Infrastructure AWS Global Infrastructure has: Regions and Availability Zones (AZ) Data Centers in AZs Edge Locations Let’s discuss each of these in detail. Regions & Availability Zones Each Region is a geographical area with more than one isolated location called Availability Zones (AZ). Each AZ can be a combination of one or more Data Centers. For Example, North Virginia is a Region with a code of us-east-1 and has 6 AZ. You can learn more about regions in AWS documentation. As shown in the above diagram, there are multiple AWS Regions. And each Region has a minimum of two AZs. Each AZ has multiple Data Centers. Note: Some of the resources in AWS are global, while some are region-specific. When we create an IAM User it is Global and when we create an EC2 Instance it is regional. While creating an EC2 Instance, we got an option to pick the region and which AZ. It is not the same with the IAM User as it is global. Data Centers in Availability Zones Amazon is very secretive of the DC locations but that is not the case with AZs. For Example, the Mumbai AZ ap-south-1a might be in the east of Mumbai and ap-south-1c in the west. This way if there is any natural catastrophe like flood or fire around one of the AZs, it doesn’t affect the other AZs. Also, each of the Data Center (DC) has its own redundancy of Power Supply, UPS, Routers, Internet Connectivity, etc to avoid any Single-Point-Of-Failure. There is no common infrastructure between two DCs and the DCs are connected by high-speed internet connectivity for low latency. While creating a resource, AWS provides us an option to pick the Region and the AZs, but not the DC within it. The following factors are used for picking an AWS Region. Pricing (North Virginia Region is the oldest and the cheapest Region) Security and compliance requirement User/customer location Service availability Latency Note: For the sake of learning AWS, North Virginia is best as it is one the cheapest AWS Region and usually is the first one to support any new AWS feature for us to try it out. Edge Locations In AWS Global Infrastructure, the Edge Locations used for caching the static and streaming data. Currently, there is a global network of 187 Points of Presence (176 Edge Locations and 11 Regional Edge Caches) in 69 cities across 30 countries. When compared to the AZs, the number of edge locations is almost triple and close to the end-user. This makes the Edge Locations a ripe candidate for caching data, while the regions are for hosting web servers, databases and so on. 
These Edge Locations provide lower latency when compared to the Regions. When a request is made by the user, the Edge Location checks if the data is there locally and if not then gets the data from the appropriate Region, stores it locally and then passes it on to the user. Edge Locations is an AWS term provided by the AWS CloudFront Service. An AWS Edge Location is called PoP (Points of Presence) in general term. A similar service is provided by Content Distribution Network (CDN) providers like Akamai, Cloudflare, etc. These CDN providers cache the data like streaming video during a live match to provide a better experience to the end-user. Below is the table comparing the AWS and the general terminology. Creating a Highly Available Application using the AWS Global Infrastructure AWS Global Infrastructure provides a set of Regions, AZs and Edge Locations to create a Highly Available and Fault Tolerant application. Example If a web server is hosted in a single AZ, any problem with that AZ will make the website unavailable. To get around this the webserver can be deployed in multiple AZs within the same region as shown below. Similarly, any problem with a region which will also make the website unavailable. To make the website even more available, we can have the webserver across multiple regions, this way the availability of the website is not dependent on the availability of a single region. HA website and applications are created by using redundancy, but the problem with redundancy is that it comes at a cost. We need to run the same website at multiple locations. Also, the same web server can be deployed in some other Cloud like GCP or Azure. Here we are running the same web server in different Clouds and so the configuration is called Multi-Cloud Configuration. Google Anthos helps in building hybrid and Multi-Cloud applications using Kubernetes. When we have the webserver across multiple locations, how does the traffic gets distribute among them? We can’t simply keep them sitting idle. This is where the AWS Elastic Load Balancers come into play. The AWS ELB takes the request from the end-user and distributes the same across multiple web servers. Conclusion While setting up our own DC gives the flexibility to design our hardware and software, it comes at the cost of time and money. By leveraging AWS, with a few clicks or API calls we can create a highly available, secure, fault-tolerant, reliable, performant application. The different Cloud Vendors like Google, Amazon, Microsoft have been spending billions to set up new DC in different geographical locations. If you wish to check out more articles on the market’s most trending technologies like Artificial Intelligence, DevOps, Ethical Hacking, then you can refer to Edureka’s official site. Do look out for other articles in this series which will explain the various other aspects of AWS.
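To make the multi-AZ idea concrete, here is a minimal boto3 sketch (an illustration only, not an official template; the AMI ID and instance type are placeholders) that lists the Availability Zones of the Mumbai Region and launches one web server instance in each of two AZs:

import boto3

ec2 = boto3.client('ec2', region_name='ap-south-1')  # Mumbai Region

# Every Region exposes its Availability Zones through the API
zones = [az['ZoneName'] for az in ec2.describe_availability_zones()['AvailabilityZones']]
print(zones)  # e.g. ['ap-south-1a', 'ap-south-1b', 'ap-south-1c']

# One web server per AZ, so the site survives the loss of a single AZ
for az in zones[:2]:
    ec2.run_instances(
        ImageId='ami-xxxxxxxx',   # placeholder AMI for your web server image
        InstanceType='t3.micro',
        MinCount=1,
        MaxCount=1,
        Placement={'AvailabilityZone': az},
    )

An Elastic Load Balancer registered with both instances would then spread incoming requests across the two AZs, which is exactly the role the ELB plays in the setup described above.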
https://medium.com/edureka/create-websites-using-aws-1577a255ea36
['Vishal Padghan']
2020-09-11 10:04:57.347000+00:00
['Aws Global Infrastructure', 'Web Development', 'AWS', 'Amazon Web Services', 'Cloud Computing']
“Stimulus”, or Survival?
“Stimulus”, or Survival? The longer Washington plays games with our lives, the more needless suffering real people are forced to experience. Photo by Maria Oswalt on Unsplash The dead of winter is coming, and with it comes the culmination of the housing and economic crisis that has only continued to build since the start of the coronavirus pandemic that’s ripping through the United States even worse than before. As I have said countless times before, whether it be the total lack of support for parents and teachers struggling to help children get through this unprecedented school year, the loss of over 300,000 lives, or the fact that that tens of millions of people are about to lose their housing through no fault of their own, it’s irrefutable that the government has failed us in every conceivable way virtually from the moment we were made aware of how significant the effect of this pandemic would be. The idea that another “stimulus” check for the American people is even controversial is testament alone to how little our lawmakers tasked with representing us actually care. One need only look at the unemployment sub-reddit on Reddit to get a glimpse of just how bad things have been, and continue to get worse. Jeff Stein with The Washington Post shared an image of one of the posts concerning unemployment, where the user writes: “I hope we all get some good news today. I’m on the verge of being homeless as well since I haven’t paid anything to my roommate and their girlfriend (ruthless) wants to kick me out into the cold. (Not her problem, as she says. Lol) I applied for SNAP yesterday and I have been dumpster diving for food the last few days outside of restaurants… I don’t have a car so every time I walk to food banks and stuff like that they’re either closed or need proof of residency and I don’t have that as I’m just couch surfing…” Under the ‘unemployed sub-reddit, another user wrote: “I’m losing it. I’ve been unemployed for 10 months now because of Covid. I’ve applied to over 600 jobs and only one interview. I have BS in Business Admin and MS in International Relations. I’ve had my resume redone, done interview prep, updated LinkedIn, and reached out to contacts/ connections. I’m coming off a terrible experience at last job with the US government that left me diagnosed with PTSD, major depressive disorder, and severe anxiety. Im really really not well enough mentally to work full-time and my depression has been more than overwhelming the last few months. I’m struggling getting out of bed, taking my dog out, and I am having an extremely difficult time. I can’t sleep through the night and have difficulty eating. Therapy isn’t working anymore. I live alone (immune compromised). No family support (financially or emotionally). Unemployment ran out two weeks ago. I’ve lived in 7 different places in the last year. Lost my apartment in June, managed to find new place in October, and I am hanging on by threads. Rent due in two weeks. Little savings left, high credit card debt and all bills sitting on credit card. I’m not sure how much longer I can keep doing this. Do I check into mental health facility? What do I do please help. Please any advice would be appreciated.” In a consumer-driven economy like that of the United States, it should go without saying that the stability of the economy is dependent upon the ability of average Americans to actually be able to spend. 
It should come as no surprise then, that the direct relief payments the American people desperately need have been dubbed “stimulus checks”. That said at this rate, the situation has been so dire the need has nothing to do with economic stimulus, but rather the ability just to survive. It’s near impossible not to read stories like the one above, and not sink into feelings of total, utter hopelessness and raw anger. For months I have been wondering what it would take for the American people to rise, whether it be mass demonstrations in the streets or a general strike, and demand that basic needs be met in the richest nation on earth. But when reflecting on what millions of people are currently going through, can we really be surprised that people don’t have the time or the energy to organize? Can we really be surprised that people have become so conditioned to the cruelty, our lawmakers feel entirely comfortable subjecting their constituents to unnecessary trauma while they hold our stability hostage? Are they capable of realizing that the longer they allow this to fester, the less likely it is that any amount they send us will be enough to help people catch up on their bills and overdraft fees, let alone stimulate their beloved economy? The wealth of the richest people in the country has grown by such an unprecedented amount as a direct result of this pandemic, a study found that a one time wealth tax could bring in enough money to send every single American a check of $3000, and still leave the elites with more money than they had at the beginning of the pandemic. Meanwhile, single mothers who lost their jobs as a direct result of the same circumstances that saw a surge in their fortunes are putting their bank accounts into overdraft just to buy diapers for their babies. I can’t help wondering what it will take for our lawmakers to actually care. What are the American people going to have to resort to in order to get through to people who have been so insulated from the pain and the rage, that they have no ability to understand how their failures are directly impacting tens of millions of people? How much more are we going to be expected to put up with before they realize these checks aren’t some frivolous luxury they don’t feel we’re worthy of receiving, but an absolute necessity?
https://medium.com/discourse/stimulus-or-survival-f63c68d16310
['Lauren Elizabeth']
2020-12-20 03:58:59.724000+00:00
['Economy', 'Politics', 'Society', 'Coronavirus', 'Government']
Learnin’​ Good All This AI Stuff for Product Management
How product managers can learn what they need to know about AI to be at peak effectiveness The most frequent question I get about AI from colleagues, product managers and others, is, “What do I need to know about AI and what’s the best way to learn it?” I’ve invested a considerable amount of time taking numerous courses, so I dug into my emails to collect some of the suggestions I’ve doled out. Is this necessary? First, it’s worth addressing the extent to which a product manager even needs to understand how AI works in order to be effective. There is an endless stream of business articles about what AI is, what it does and how it is going to disrupt this and that, all of which is great, but I am talking about understanding how it works (e.g. stochastic gradient descent, convolutional neural networks, tree search, supervised vs unsupervised learning, and all the other mumbo jumbo that sounds complicated until you know what it is). As Marty Cagan pointed out in Inspired (a must-read), product managers can come from a variety of different vertical disciplines, including those that are not necessarily technical, such as marketing or sales. Can these individuals, or even product managers who come from engineering but don’t necessarily have a background in AI, be successful managing AI products? In his article Behind Every Great Product, Marty says, While you don’t need to be able to invent or implement the new technology yourself in order to be a strong product manager, you do need to be comfortable enough with the technology that you can understand it and see its potential applications. With this, I could not agree more. As mentioned in my last post, if a product manager’s two key responsibilities are assessing opportunities and defining products to be built, these become particularly challenging without an understanding of the underlying technology. AI, however, is raising the bar. Almost 3 years ago Jon Evans welcomed everyone to the era of “hardtech” by comparing beginner tutorials for Android and TensorFlow. (I program in both and the latter is much more difficult.) Things have not gotten easier in the intervening years. For those who want to product manage in AI (which I highly recommend), at this very early stage of AI commercialization, it is critical to understand how it works in order to grasp what it cannot yet do and envision what is possible. You might be able to bumble around in PowerPoint for a while, but whether you are technical or not, it is important to learn the technology in order to be able to understand it and see its potential, not to mention communicate credibly with engineering. If you are still skeptical, allow me to also offer a comprehensive list of all the associated costs with learning AI: If you don’t think you have what it takes to learn AI, whatever you imagine that to be, you are probably mistaken. Read on or jump to the bottom for some inspiration (then come back). How to get there First, if you don’t know how already, learn how to code (a little). There has always been a lot of discussion about whether or not founders need to know how to code. TechCrunch had an article many years ago about “ The Trouble with Non-Tech Co-founders” and then Steve Blank (required reading) opined on “ Why Founders Should Know How to Code.” More recently TechCrunch implored people to “ Please Don’t Learn to Code” before posting an article on how the CEO of Auth0 still codes. 
Recognizing that product managers and founders are different things, although running product is most always one of the responsibilities of a founder, the relationship here is apt. Regardless, discussions around whether or not these people need to know how to code is complicated. The good news is that, the amount of coding skill you need to be able to get what you need out of your AI studies is relatively easy. You are not going to write production code, or any code that will go into the products you manage; you are just going to write enough to get through the classes. Take an online introduction to Python class (I have never taken one so cannot make a recommendation), get yourself an account on Stack Overflow (check out that juicy rep) and you should be good to go. Once you have the programming basics, then what? While no one could ever explore every AI class under the sun, here is the list of some MOOC s that I recommend, with a little commentary: Udacity Deep Learning Nanodegree: This is the first one I took, without ever having written a line of Python. (Not recommended, but I was already proficient in Java and R.) I was in the first group that ever went through this course, so it was more than a little rough, but it was fun and amazing. I was instantly hooked. Udacity now also has an AI Nanodegree, but I’m unfamiliar with that. Coursera Machine Learning offered by Stanford: I took this one, too, and it’s fantastic. Prof. Andrew Ng is a wonderful instructor and the way he organizes the concepts to cascade one into the next is beautiful. It uses Matlab (you can use Octave for free) which puts some people off, but it is worth it. Coursera Deep Learning Specialization offered by Stanford: I took this one, although as part of Stanford’s CS230 course (see below), again with Prof. Andrew Ng. Like his ML course, it is marvelous. Coursera Machine Learning Foundations offered by University of Washington: I have not taken this one, but it comes highly recommend from my good friend Anthony Stevens, Global Enterprise AI Architect at IBM. fast.ai: I have not taken this, but I’ve heard it is outstanding, and it is free. They tout not requiring any advanced math prerequisites. Leave a comment if you have any opinions. EdX Artificial Intelligence offered by Columbia University: I haven’t taken this one either, but I’ve heard great things. Udacity AI Product Manager Nanodegree: This one certainly sounds like it is in the bullseye, and I’m a fan of the Udacity Deep Learning Nanodegree, but this one is non-technical (no programming required), so I’m skeptical for the reasons described above. I haven’t taken it, and nobody else has yet either, because it was only announced yesterday. If you want to do it for real, which I am sure you do, then I’d recommend one of the courses above. (They are hard, but not that hard.) As soon as I learn something about this one I’ll report back here. Going all the way I am not recommending this for product managers because it is overkill, a ton of work with deadlines, and expensive, but I am in the middle of Stanford’s 4-course, 16-unit Graduate Certificate in Artificial Intelligence. These are not MOOCs or continuing education (I’ve done those, too), but rather 200-level, master’s degree classes, with serious prerequisites and full-time students, at Stanford, in the lecture hall, with TAs and exams and team projects and the rest of the ball of wax. 
Having completed CS230 and CS221, I don’t know which two courses I’ll take next (I want to take all of them), but I’m aiming to finish in March 2020. Pacman competition assignment from Stanford’s CS221 — AI Principals & Techniques Many friends have asked me, “Why?” It was a bit impulsive, but upon reflection I have a few answers: It makes me better at my job. In my product management role at PARC, helping Xerox launch AI-powered applications, I want to explore every possibility to bring value to the team. I can see Hoover Tower from my office window. Proximity to campus isn’t a reason for anything, but it is motivational. The classes are excellent. AI is amazing. The assignments are fun. In CS221 we developed AI to play Pacman. The class had a competition to see whose AI could generate the highest average score and I placed 11th out of ~350 students, so not too bad. As an electrical engineer before getting an MBA, this feels a little like I am getting “back to my roots.” You can potentially go on to get a Stanford master’s degree in CS! If accepted into the program, you can roll over the units from the Graduate Certificate, which means you are already a third of the way there. Apparently there are department emails that refer to these classes as “tryouts,” so if I do well enough (I have heard it very challenging to get in even with straight As) I’ll apply. Learning is awesome. Bringing it home As Rachel Thomas, deep learning researcher and co-founder of fast.ai, said recently during a talk about AI, “It’s natural to feel like if you’re not a math genius or you don’t have a PhD from Stanford that you couldn’t possibly hope to understand what’s going on, much less to get involved. I’m here to tell you that is false.” Agreed! Sure, there is a bit of math, but this isn’t string theory. Rachel goes on to talk about Melissa Fabros, an English literature Ph.D. (almost literally a poet) who took the fast.ai course and went on to win a grant to work in computer vision. With some determination and perseverance, almost everyone can learn the basics of how AI works, and then some. If you are a product manager, or would like to become one, who wants to work in AI, or is perhaps there already, then there is no reason not to learn a little Python and get going with the MOOCs. If you are passionate about the technology, which frankly you better be if this is what you want to do, then you will get there. So get pumped, sign yourself up for a class, start cranking and let me know how it goes in the comments below. Good luck!
https://medium.com/swlh/learnin-good-all-this-ai-stuff-for-product-management-1133b69aee
['Mark Cramer']
2019-08-09 15:50:42.157000+00:00
['Machine Learning', 'Product Management', 'Artificial Intelligence', 'AI', 'Learning']
4 Uncommon Python Tricks You Should Learn
1. Multiple Assignment
When you want to give several variables the same values, you will often see code that repeats the assignment statement for each variable. It may look something like the following code, which uses one separate assignment per variable:

a = 1
b = 1
c = 1

In Python, we can simplify this, and assign to every variable at once.

a = b = c = 1
# a == 1
# b == 1
# c == 1

After doing this, every variable was assigned to the right-most value in the chain. Since it just takes the right-most value, we can also replace 1 with a variable.
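One caveat worth adding (my note, not part of the original tip): with a mutable object, chained assignment makes every name refer to the same object, so a change made through one name is visible through all of them.

a = b = c = []   # all three names refer to the *same* list
a.append(1)
print(b, c)      # [1] [1] -- b and c see the change too

# If each variable needs its own list, assign them separately
x, y, z = [], [], []
x.append(1)
print(y, z)      # [] []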
https://medium.com/better-programming/4-uncommon-python-tricks-you-should-learn-2d3a156c10f2
['Devin Soni']
2020-02-12 17:24:54.423000+00:00
['Programming', 'Coding', 'Software Engineering', 'Python', 'Technology']