Remote teaching: Why a front-loaded and fast-paced class fared well in the wake of the COVID-19 Pandemic
Every teacher and professor has had their share of strain brought on by the ongoing COVID-19 pandemic. Through listening, adaptation, and planning, my students and I made it to the finish line this semester with quite a bit of energy to spare. One key decision helped me have a very smooth remote teaching experience: preparing a front-loaded syllabus. I made that choice consciously, though of course without any prediction of the implications of the then-developing pandemic. I aimed to empower students with a broad spectrum of tools early in the semester and use the rest of the time for individual project development. This strategy ended up working well for remote teaching.

Shaping a classroom through Agency, Adaptation, and Tooling

Over the years, I observed that front-loaded classes and workshops fared much better on many counts. Combined with a longer exploration process, they turned, in the end, into a teaching formula for achieving high-yield, high-quality results. I prepared the syllabus for the 2020 spring semester class I taught at MIT's School of Architecture and Planning with three things in mind:

1- Agency

When you prepare a front-loaded class or workshop, you shock the students while they are at their freshest. This enables them to see a lot of material in a short period, but more importantly to pick and choose whatever makes more sense to them. No student learns everything we try to teach, and it is better if we empower them with the material they are interested in. This kind of agency helps students become a more integral part of the teaching process. If the teaching material is comprehensive and the teaching style is generous enough, students can and will have a say in what they learn. If the material is not malleable, they will either have a hard time adapting, or won't align with the classroom dynamics at all.

2- Adaptation

Teaching is about bi-directional adaptation. As the students adapt to the class and the teaching material, the instructor needs to adapt to the overall drive of the class. Instructors are inclined to expect the students to adapt, but not all of them consider that for a symbiosis to happen, both parties need to act. A front-loaded class helps both parties adjust and make choices early in the semester. I revealed the goals and mechanics of the subject in the very first class and dived right into the material. With a fast follow-up in the second week, I quickly developed a sense of whether my choices about the amount and delivery of the teaching material were working. In the meantime, the students came with questions to figure out if the class was the right one for them. Another benefit of a fast-paced start was that students who were not going to adapt moved on quickly. At MIT, students 'shop' for classes during the first week or two, and showing the intensity of the class early in the semester helps refine the crowd. In short, if you are open to changing things along the way — which in my mind is a must — a front-loaded class helps you plan earlier in the semester.

3- Fluency

If you are teaching a class that includes skill-building components, there are fundamentally two tracks you can follow.

1 — You can move incrementally and distribute skill-building sessions throughout the semester. This helps students learn and digest skills over a longer period, and a slower pace can also help them add skills more easily. This is a low-stress and low-risk choice, but it takes away from the time that could be invested in applying and refining those skills.

2 — Alternatively, you can front-load the syllabus with skill-building sessions and then observe the tendencies of students in picking things up. This is a riskier move, as not all students will be able to follow the pace of the class. Students may need more support when you want to build on top of something you have already taught, which puts more work-hours on the instructor.

Looking at the overall picture, the first option appears to be the more logical, safer bet. Yet the second option, although it comes with some risks, increases the chances of breakthrough achievements — if they are ever to happen in the class. Moving within a fast-paced setting, students hit a steeper learning curve, but at the same time become accustomed to the tools of the class earlier. This helps them become fluent in the tools they are using more quickly. Especially for an application- and making-oriented class, the second option works miraculously better. How so?

I learned how to teach over 15 years of piecemeal teaching

I have quite a mixed background in teaching, and nowhere near the experience of a full-time academic. Yet jumping back and forth between academia and professional practice, or spending time in both simultaneously, helped me translate teaching strategies across these two domains. What did I do? I co-taught design studios. I developed design, geometry, and scripting classes. I initiated and led an undergraduate design program, somewhat early in my career. Last but not least, I conducted many workshops in different schools, cultures, and countries. These workshops, which ran anywhere from three hours to two weeks, taught me a lot about developing syllabuses, even more so than semester-long classes did. The diversity of students' backgrounds, ages, and interests taught me a lot as well. While in professional practice, I happened to teach people whose age was (more than) double mine. Later, I found chances to teach kids fresh out of high school. Over and over again, I discovered that front-loaded scenarios fared better: starting vertical (and going deep) and then going horizontal (and expanding).

I applied this strategy to my latest teaching adventure. I asked the students to develop a "design" that had to be re-thought, letting go of its preconceived "parts." My motivation stemmed from my ever-unfolding inquiry into part-whole relationships, which I explained in my latest story. I deployed the teaching material through four tracks: Presence, Function, Quality, and The Whole.
https://medium.com/age-of-awareness/remote-teaching-why-a-front-loaded-and-fast-paced-class-fared-well-in-the-wake-of-the-covid-19-cdcabb0a85fc
['Onur Yuce Gun']
2020-06-22 16:38:40.909000+00:00
['Design', 'Creativity', 'Education', 'Technology', 'Innovation']
Analyzing Data Distributions with Seaborn
Analyzing Data Distributions with Seaborn

A practical guide with many example plots (image by author)

Data visualizations are key players in data science. They are powerful tools for exploring variables and the relations among them, and they are much preferred over plain numbers for delivering results and findings. In this article, we will see how data visualizations can be used to explore the distribution of variables. The examples use Seaborn, a popular Python data visualization library.

It is essential to interpret the distribution of variables. For instance, some machine learning models perform best when the variables are normally distributed, so the distribution of variables directs our strategy for approaching problems. Distributions are also an integral part of exploratory data analysis: we can detect outliers and skewness, or get an overview of the measures of central tendency (mean, median, and mode).

Having highlighted the importance of data distributions, we can now start on the examples. We will be using an insurance dataset that you can obtain from Kaggle. The first step is to import the libraries and read the dataset into a Pandas dataframe.

import numpy as np
import pandas as pd
import seaborn as sns
sns.set(style='darkgrid')

insurance = pd.read_csv("/content/insurance.csv")
insurance.head()

(image by author)

The dataset contains some measures (i.e. features) about the customers of an insurance company and the amount charged for the insurance.

The first type of visualization we will see is the histogram. It divides the value range of a continuous variable into discrete bins and shows how many values fall in each bin. The following is a basic histogram of the bmi variable.

sns.displot(insurance, x='bmi', kind='hist', aspect=1.2)

Histogram of bmi (image by author)

We use the displot function of Seaborn and specify the type of plot with the kind parameter. The aspect parameter adjusts the width-height ratio of the figure. The bmi variable has a roughly normal distribution, except for a few outliers above 50.

The displot function also allows adding a kde plot on top of histograms. The kde (kernel density estimation) plot is a non-parametric way to estimate the probability density function of a random variable.

sns.displot(insurance, x='bmi', kind='hist', kde=True, aspect=1.2)

(image by author)

We can create only the kde plot by setting the kind parameter of the displot function to 'kde'. In that case, we do not need the kde parameter.

We can plot the distribution of a variable separately based on the categories of another variable. One way is to use the hue parameter. The figure below shows the histogram of the bmi variable for smokers and non-smokers separately.

sns.displot(insurance, x='bmi', kind='hist', hue='smoker', aspect=1.2)

(image by author)

We can also show the bars side by side by using the multiple parameter.

sns.displot(insurance, x='bmi', kind='hist', hue='smoker', multiple='dodge', aspect=1.2)

(image by author)

It is possible to create a grid of plots with the displot function, which is a highly useful feature. We can create more informative visualizations by using the hue and col parameters together.

sns.displot(insurance, x='charges', kind='hist', hue='smoker', col='sex', height=6, aspect=1)

(image by author)

The figure above shows the distribution of the charges variable in different settings. We clearly see that charges tend to be higher for people who smoke. The ratio of smokers is also higher for males than for females.

We can also create two-dimensional histograms that give us an overview of the cross-distribution of two variables. The x and y parameters of the displot function are used to create a two-dimensional histogram.

sns.displot(insurance, x='charges', y='bmi', kind='hist', height=6, aspect=1.2)

(image by author)

This figure shows the joint distribution of the bmi and charges variables. The darker parts of the grid are denser in terms of the number of data points (i.e. rows) they contain.

Another feature we can use with distributions is the rug plot. It draws ticks along the x and y axes to represent marginal distributions. Let's add a rug plot to the two-dimensional histogram created in the previous step.

sns.displot(insurance, x='charges', y='bmi', kind='hist', rug=True, height=6, aspect=1.2)

(image by author)

The plot is more informative now. In addition to the two-dimensional histogram, the rug plot on the axes provides an overview of the distribution of each individual variable.

The hue parameter can also be used with two-dimensional histograms.

sns.displot(insurance, x='charges', y='bmi', kind='hist', rug=True, hue='smoker', height=6, aspect=1.2)

(image by author)

We can also create bivariate kde plots. For instance, the plot below is the kde version of the previous two-dimensional histogram.

sns.displot(insurance, x='charges', y='bmi', kind='kde', rug=True, hue='smoker', height=6, aspect=1.2)

(image by author)

The density of the lines gives us an idea about the distribution. We can use the fill parameter to make it look more like a histogram.

sns.displot(insurance, x='charges', y='bmi', kind='kde', rug=True, hue='smoker', fill=True, height=6, aspect=1.2)

(image by author)

Scatter plots are mainly used to check the correlations between two numerical variables, but they also give us an idea about the distributions. Seaborn is quite flexible in combining different kinds of plots into a more informative visualization. For instance, the jointplot function combines scatter plots and histograms.

sns.jointplot(data=insurance, x='charges', y='bmi', hue='smoker', height=7, ratio=4)
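For convenience, the steps above can be combined into a single runnable script. This is a minimal sketch assuming the Kaggle insurance dataset used throughout the article; the CSV path is a placeholder you should adjust to your own setup.

```python
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt

sns.set(style='darkgrid')

# Load the Kaggle insurance dataset (path is an assumption).
insurance = pd.read_csv("insurance.csv")

# Univariate histogram of bmi with a kde overlay.
sns.displot(insurance, x='bmi', kind='hist', kde=True, aspect=1.2)

# Two-dimensional histogram with marginal rug plots, split by smoker status.
sns.displot(insurance, x='charges', y='bmi', kind='hist',
            rug=True, hue='smoker', height=6, aspect=1.2)

plt.show()
```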
https://towardsdatascience.com/analyzing-data-distributions-with-seaborn-a8607961a212
['Soner Yıldırım']
2020-12-25 17:08:22.111000+00:00
['Machine Learning', 'Data Science', 'Python', 'Artificial Intelligence', 'Data Visualization']
DialogRPT With 🤗 Huggingface Transformers: Which Comments Get More Likes, More Replies and Are More Engaging?
Microsoft recently released DialogRPT, a new Dialogue Ranking Pretrained Transformer model. Some human replies are more engaging than others, spawning more follow-up interactions. Since we want our conversational models to be more interactive, we need some metric to know which comment is more likely to engage the user. The authors of DialogRPT therefore trained it on 133M pairs of human feedback data, and the resulting ranker outperformed several baselines. If you are not interested in the paper summary, jump directly to the code.

Why do we need such a pretrained transformer model?

Human-like conversation is perhaps one of the most difficult challenges in AI. With recent advancements in the field, human annotators sometimes cannot reliably distinguish between human- and machine-generated responses. However, human responses do more than stay relevant to the context: sometimes they are interesting enough to prompt a rich listener reaction. (Although conversations can also be boring, heated, funny, or awkward 🤭.) On the other side, chatbots and conversational AI models, often tuned to avoid producing inappropriate comments, end up being vague and unengaging.

A successful dialog turn must be proactive, engaging, and consistent with social norms. The solution the authors propose is to use existing human feedback data (e.g., numbers of replies and likes) from online social communities (here, Reddit). While there has been work on feedback prediction, this is the first time it has been applied to a dialogue and response generation system.

Fig 1. Example of an engaging dialogue. Courtesy: DialogRPT

Let's dive deep into how the model addresses the problem of engaging comments. Posts and comments typically form a tree structure, and each comment has its own number of replies and upvotes (refer to Fig 1; upvotes are also termed "likes" in some social communities). These can be used as engagingness labels after careful normalization and formulation. Using a dataset of 133M pairs of human comments and their associated number of replies or up-/downvotes, the authors train a set of large-scale transformer-based feedback ranking models which outperform several baselines.

Understanding feedback metrics

Consider a parent node as the context c and the reply to it as the response r. For each dialogue (c, r), we consider the following feedback: Width, the number of direct replies to r; Depth, the maximum length of the dialogue after this turn; and Updown, the number of upvotes minus the number of downvotes. These feedback metrics cannot be directly used as a measure of reply engagingness: studies show that while popularity, measured by Updown, generally increases with quality, posts of similar quality can exhibit very different upvote counts.

Tasks

Given a context and a list of responses, the task is to predict a ranking based on the feedback they received, as measured by three separate metrics: (1) Width, (2) Depth, and (3) Updown. There is also an additional task, (4) human-vs-fake, which measures how human-like the response is.

Problem formulation and training objective

A contrastive learning approach was used to train the model: given the confounding factors affecting feedback mentioned above, the model is trained on pairs of samples (c, r+) and (c, r−), rather than being fitted to score each dialogue individually. The model is trained to predict a higher score for the better response than for the less appropriate one.

Model ensemble

For machine generation: machine generations are required to be both human-like and preferred by a human. To rank the machine generations, the authors took the joint probability:

P(r = preferred, human-like | c) = P(r = preferred | r = human-like, c) · P(r = human-like | c)

Human calibration: to estimate the correlation between the feedback score and human response preferences, the authors showed pairs of responses for the same context to a set of human annotators, asking them to select the response they would prefer to send or receive.

Model and training

The model is a 12-layer transformer based on the GPT-2 architecture and initialized with DialoGPT-medium model weights. DialoGPT is a large-scale dialogue response generation model, pre-trained on 147M Reddit conversations. Each model (for updown, width, depth, and human-vs-rand) has 354.8M parameters and was trained on an Nvidia Tesla V100 4-core GPU with batch size 256, at an average training speed of 0.33M pairs of samples per hour. Each model took around 70 hours to converge (until validation loss on a fixed set of 1024 samples ceased to improve).

Findings

Responses that receive fewer replies or upvotes tend to be less contentful. In contrast, comments that attract more feedback are typically different in character: for instance, questions (indicated by ?, why, how, what, who) often lead to a longer conversation (greater Depth). The model trained only on Width data performs reasonably well on Depth prediction, and vice versa, consistent with the high correlation between their labels. The Updown label is less correlated with these, so the model trained on Updown data performs poorly on Width and Depth data. This is in keeping with the complementary relationship between these models.

In the human-vs-generated task, the authors evaluate the model's ability to discriminate between human and generated responses. A model trained only on human-vs-rand data performs poorly on this task, indicating that the generated responses are sufficiently relevant to the context to yield a higher score than a random response. However, the feedback prediction models, Width, Depth, and Updown, show much higher accuracy on the human-vs-generated task, even though they were not trained on any generated responses. This implies that, according to the DialogRPT ranking models, responses generated by DialoGPT (another generative model by Microsoft) may not be as proactive or as engaging as human responses.

Conclusion

To sum up the paper: the authors used Reddit human feedback data to build and release a large-scale training dataset for feedback prediction. They then trained the GPT-2-based model on 133M pairs of human feedback data and demonstrated that the trained model, DialogRPT, outperforms several baselines. Human evaluation of machine-generated responses ranked by DialogRPT shows higher human preference. For future work, this model could be integrated with a generation model (perhaps via reinforcement learning), using the ranking score from DialogRPT as a reward signal.

Python Demo

Install libraries:

!git clone https://github.com/huggingface/transformers.git
%cd transformers
!pip install -e .
%cd src

Implementation for the various tasks

Task: upvotes/likes prediction. The updown score predicts how likely the response is to get upvoted.

Output:
Score  Response
0.125  Me too!
0.640  Here's a free textbook (URL) in case anyone needs it.

Task: human vs. machine. The human_vs_machine score predicts how likely the response is to come from a human rather than a machine.

Output:
Score  Response
0.000  I'm not sure if it's a good idea.
0.419  Me too!

Task: human vs. random. The human_vs_rand score predicts how likely the response is to correspond to the given context, rather than being a random response.

Output:
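The scoring snippets behind the output tables above did not survive extraction. As a sketch of what such a scoring call looks like, the DialogRPT checkpoints are published on the Hugging Face model hub (e.g. microsoft/DialogRPT-updown), and the helper below follows the pattern from the model card:

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# Swap the checkpoint for DialogRPT-human-vs-machine, -human-vs-rand,
# -width, or -depth to score the other tasks.
model_name = "microsoft/DialogRPT-updown"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name)

def score(context: str, response: str) -> float:
    """Return a 0-1 score for how likely `response` is to be upvoted."""
    # Context and response are joined with the GPT-2 end-of-text token.
    inputs = tokenizer.encode(context + "<|endoftext|>" + response,
                              return_tensors="pt")
    with torch.no_grad():
        result = model(inputs, return_dict=True)
    return torch.sigmoid(result.logits).item()

print(score("I love NLP!", "Me too!"))
print(score("I love NLP!", "Here's a free textbook in case anyone needs it."))
```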
https://medium.com/swlh/dialogrpt-with-huggingface-transformers-which-comments-get-more-likes-more-replies-and-are-5e7e13a5429f
['Parth Chokhra']
2020-10-25 21:17:50.968000+00:00
['Machine Learning', 'Artificial Intelligence', 'NLP', 'AI', 'Data Science']
How Spiders Build Webs in Space
How Spiders Build Webs in Space

Spiders on the International Space Station spin odd webs, and it teaches us something new (Pixabay, Couleur)

Gotcha! Not all spiders make webs, but most of us can't help but think of cobwebs when we think of spiders. The beautiful silky homes of our eight-legged friends are marvels of instinctive engineering. Spiders don't simply make webs to hang around in: they can spin one around their own abdomen as a cocoon for their offspring, or use light spider silk as a balloon to fly around with. Some spiders are more active hunters, and among those, some use webs as 'nets' like arachnid gladiators.

The webs, in their many forms, are not only marvels of engineering, but also of materials science. Spider silk differs in properties between species, but it can be up to three times tougher than Kevlar and five times stronger than steel. Add to that that it's very flexible, water soluble, biocompatible, and biodegradable, and you've got a miracle material for things as diverse as armor, running shoes, and surgical thread. In fact, a lot of people are trying to produce spider silk on an industrial scale through a variety of approaches.

But back to the engineering part. Why do spiders build their webs the way they do? A lot of spider webs — the hanging around kind — are asymmetrical. The hub of the web, the point where the threads converge, is generally higher up. And, sitting there, the spider almost always looks down. The possible reason? Gravity. It's easier to run fast when you're going down, and gravity gives you a little extra 'oomph'. What would happen if we put web-building spiders in a low or zero gravity environment?

Space spiders

You've got to love science, because a new study tried to figure it out for us.

Female golden silk orb-weaver (Wikimedia Commons, Charles J Sharp)

Two juvenile golden silk orb-weavers (Trichonephila clavipes) — well-known web-builders — hitched a ride to the International Space Station. Once there, the spiders wanted to make it a bit more homely, regardless of gravity's absence. Time to spin a web. As expected, the webs of the young spiders were generally more symmetrical than their earthly creations. However, there was still quite a bit of variability in space web symmetry. When the scientists looked a bit closer at their eight-legged colleagues, they found that the webs the spiders started building when the lights were on were asymmetrical, while the webs initiated in the dark were symmetrical. More specifically, webs built in illuminated conditions had their hub asymmetrically close to the light, and the spiders themselves also faced away from the light source. In the dark, the spiders positioned themselves randomly in terms of gazing direction. It turns out that spiders can use light as a backup engineering guide. (Backup, because on Earth webs are asymmetrical even when constructed at night, and spiders face down even in the dark.)

(An interesting aside: the female spider built a record-setting 34 webs.)

To conclude:

…in the absence of gravity, the direction of light can serve as an orientation guide for spiders during web building and when waiting for prey on the hub.

Spider webs. In space. Gotta love science.
https://medium.com/predict/how-spiders-build-webs-in-space-c7d1abaf9983
['Gunnar De Winter']
2020-12-26 19:22:56.165000+00:00
['Science', 'Space', 'Biology', 'Animals', 'Engineering']
Understanding the Builder Design Pattern
Example 2: Creating Video Game Heroes

You've now seen the classic theoretical example, so you understand the responsibilities of each of the classes in the pattern. Now, here's another example where we identify each of these classes with a specific problem. Our problem is the representation of different heroes or characters in a video game. We'll focus on the classic WoW (World of Warcraft) game, in which the heroes are divided into two races: humans and orcs. Each of these heroes can have armor, weapons, or different skills depending on whether the hero is a human or an orc.

If the builder pattern is not applied, the Hero class ends up with a constructor that takes a long list of parameters (race, armor, skills, etc.), which in turn forces logic into the constructor to decide whether the armor is human or orc. With this initial solution, the problem is tightly coupled: any change in the business logic would mean rewriting quite a few pieces of code, with hardly any possibility of reuse.

So, the first thing we have to do is stop and think about how the builder pattern helps us solve this problem. We focus on the UML diagram that solves this problem and begin to implement it.

Builder pattern applied to the Hero creation problem of a video game.

In this example, we follow the same order as in the previous example and start with the model or object that we want to build flexibly. The Hero class defines the race, armor, weapon, and skills properties. All these attributes could be objects, but to keep this example simple we've left them as character strings.

The HeroBuilder interface defines methods for specific builders. Look at how the Hero object is configured, little by little, with setArmor, setWeapon, and setSkills. Finally, we have the build method, which finishes the configuration of the object and extracts the Hero object.

Once the builder is defined (as an abstract class or interface), we must build the two specific builders that our problem requires: HumanHeroBuilder and OrcHeroBuilder. In the demo code, each builder fills in these properties with a different string. It's important to note that the build method of each of the builders returns the built object (Hero) and resets the state of the builder, so it can build another one.

The last element of the pattern is the HeroDirector class, which allows you to store configurations that are repeated throughout the code. In our example, we created three hero-creation setups. For example, the createHero method builds a complete hero — that is, it assigns armor, abilities, and weapons. We also create a hero without any equipment with the createHeroBasic method. Finally, to illustrate another configuration, the createHeroWithArmor method is defined, which returns a hero for whom only the armor has been assigned.
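The article's demo code did not survive extraction (the original is TypeScript, per the article's tags). As a minimal sketch of the structure described above, transcribed into Python with the same class and method names, and placeholder strings:

```python
class Hero:
    def __init__(self):
        # All attributes could be objects; kept as strings for simplicity.
        self.race = None
        self.armor = None
        self.weapon = None
        self.skills = None

class HumanHeroBuilder:
    def __init__(self):
        self.reset()

    def reset(self):
        self.hero = Hero()
        self.hero.race = "Human"

    def setArmor(self):
        self.hero.armor = "Human armor"
        return self

    def setWeapon(self):
        self.hero.weapon = "Human weapon"
        return self

    def setSkills(self):
        self.hero.skills = "Human skills"
        return self

    def build(self):
        # Return the built Hero and reset so the builder can be reused.
        hero = self.hero
        self.reset()
        return hero

# OrcHeroBuilder would mirror HumanHeroBuilder with orc-specific strings.

class HeroDirector:
    """Stores hero configurations that would otherwise be repeated."""
    def __init__(self, builder):
        self.builder = builder

    def createHero(self):  # fully equipped hero
        return self.builder.setArmor().setWeapon().setSkills().build()

    def createHeroBasic(self):  # no equipment
        return self.builder.build()

    def createHeroWithArmor(self):  # armor only
        return self.builder.setArmor().build()

director = HeroDirector(HumanHeroBuilder())
hero = director.createHero()
print(hero.race, hero.armor, hero.weapon, hero.skills)
```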
https://medium.com/better-programming/understanding-the-builder-design-pattern-f4f56fa18c9
['Carlos Caballero']
2020-12-21 17:59:34.537000+00:00
['JavaScript', 'Programming', 'Design Patterns', 'Typescript', 'Builder Pattern']
Creating Custom Face Datasets: From Zero To Hero
Face recognition, classification, and detection have numerous real-world applications. Right from a mobile device's camera to your office's attendance system, the ever-increasing demand for face detection systems has given rise to a number of online APIs, services, and apps that perform this task for developers; for instance, Firebase MLKit, Google Cloud Vision, Microsoft Face API, and IBM Visual Recognition.

If you're a seasoned ML developer, you may prefer to train your own model using your own data. The reasons for collecting your own data could be:

The task the model will perform is different from what the tech giants' services/APIs offer.
Your model performs badly in real-world situations because the existing dataset does not include varied samples.

In this short story, we'll walk through how to collect images from the internet like a pro! We'll build an end-to-end solution to our problem: given a keyword, we'll directly get a .npy file containing cropped and resized images. You can find the Python implementation in this Colab notebook.

Google Image Search is the place where we'll search for images. Yeah, absolutely! But we won't randomly download each image and crop faces out of it by hand. We'll use a web crawler that downloads each image from a Google Search result one by one and saves it to your machine. Specifically, we'll use the icrawler package, available via pip, to scrape images from Google Search.

Snippet 1

You can also use more attributes for type, size, and license. See here.

Wait, the downloaded images are very big. I only need a cropped part of the face. We know that face classification systems require cropped and aligned face images like the ones shown below. Somehow, we need to crop multiple faces from these images.

Cropped face images.

We can use dlib, a library with a face detection system usable from Python. Given an image, it outputs a bounding box for each detected face. Using these bounding boxes, we can crop the faces and store them separately in a list. We may also resize the cropped images using PIL to the size our model requires, such as 224 * 224.

Snippet 2

That's all! The image dataset is ready to use as a .npy file! You may visualize the cropped images using matplotlib like this:

Snippet 3

The output will look like this:
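The embedded snippets referenced above did not survive extraction. A minimal end-to-end sketch of the pipeline they described, using icrawler, dlib, PIL, NumPy, and matplotlib, might look like the following; the keyword, directory names, and image size are placeholders:

```python
import os
import numpy as np
import dlib
import matplotlib.pyplot as plt
from PIL import Image
from icrawler.builtin import GoogleImageCrawler

# Snippet 1: crawl Google Image Search results for a keyword.
crawler = GoogleImageCrawler(storage={'root_dir': 'raw_images'})
crawler.crawl(keyword='face', max_num=50)

# Snippet 2: detect faces with dlib, then crop and resize them with PIL.
detector = dlib.get_frontal_face_detector()
faces = []
for filename in os.listdir('raw_images'):
    image = Image.open(os.path.join('raw_images', filename)).convert('RGB')
    for box in detector(np.array(image), 1):  # 1 = upsample the image once
        crop = image.crop((box.left(), box.top(), box.right(), box.bottom()))
        faces.append(np.array(crop.resize((224, 224))))

# Save the dataset as a single .npy file.
np.save('faces.npy', np.stack(faces))

# Snippet 3: visualize a few of the cropped faces.
for i, face in enumerate(faces[:9]):
    plt.subplot(3, 3, i + 1)
    plt.imshow(face)
    plt.axis('off')
plt.show()
```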
https://medium.com/analytics-vidhya/creating-custom-face-datasets-from-zero-to-hero-824461dd1391
['Shubham Panchal']
2020-06-17 04:04:22.108000+00:00
['Machine Learning', 'Artificial Intelligence', 'Image Processing', 'Data Science', 'Data']
Image Processing with OpenCV
Image processing is a field of knowledge that falls within computer vision. The premises of machine learning were first laid down by computer vision theory, applying a whole set of techniques to process and analyze imagery data and extract valuable information that computers and machines can use for a wide range of applications, such as:

Stitching: turning overlapping photos into a seamless panorama
Morphing: changing or merging different pictures through a smooth transition to create a new one
3D modeling: converting 2D snapshots into a 3D composition
Face detection: identifying human faces in digital images
Visual authentication: automatically logging your family members onto a computer or mobile while sitting in front of a webcam

The intended purpose of vision is to reconstruct the complex, colorful, and vivid three-dimensional world from the most simple and elementary building blocks, leveraging reliable models that help interpret images in a predictable way. Preprocessing, namely image processing, is a prior step in computer vision, where the goal is to convert an image into a form suitable for further analysis. Operations such as exposure correction, color balancing, image noise reduction, and sharpening are highly important, and demand great care, to achieve acceptable results in most computer vision applications, like computational photography or even face recognition.

In this article, I propose to introduce some of the commonly used image processing techniques leveraging a very popular computer vision library, OpenCV. I'll briefly describe how each operation works and focus on tackling the topic practically, giving you all the code you need so you have hands-on experience with the material. The image given below will be used in our experiments.
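The article's code was lost in extraction. As a hedged sketch of the kinds of operations it introduces (noise reduction, sharpening, and a simple exposure correction), with the filenames as placeholders:

```python
import cv2
import numpy as np

# Load the working image (placeholder filename).
image = cv2.imread("input.jpg")

# Noise reduction: non-local means denoising for color images.
denoised = cv2.fastNlMeansDenoisingColored(image, None, 10, 10, 7, 21)

# Sharpening: convolve with a simple sharpening kernel.
kernel = np.array([[0, -1, 0],
                   [-1, 5, -1],
                   [0, -1, 0]])
sharpened = cv2.filter2D(denoised, -1, kernel)

# Simple exposure/contrast correction: alpha scales contrast, beta shifts brightness.
corrected = cv2.convertScaleAbs(sharpened, alpha=1.2, beta=10)

cv2.imwrite("output.jpg", corrected)
```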
https://towardsdatascience.com/exploring-image-processing-techniques-opencv-4860006a243
['Aymane Hachcham']
2020-04-29 22:52:00.723000+00:00
['Machine Learning', 'Data Science', 'Computer Vision', 'Image Processing', 'Opencv']
Diabetes Can’t Be Cured With Low-Carb Diets
I remember a time in my childhood, when I was about 7 or 8, when my parents went through a concerted effort to lose weight. Suddenly it was all grapefruit for breakfast, slimming shakes for lunch, and measly salads for dinner. They did not last long.

These days, they probably wouldn't have gone for the grapefruit. You see, the biggest diet movement of the last decade is a bit more on the meat side: Low-Carb High-Fat (LCHF). LCHF diets cover the gamut from paleo to keto — short for paleolithic and ketogenic respectively — and basically follow the prescription laid out by Atkins in the 90s, except with varying amounts of bone broth and kale.

Pictured: "Healthy"

And now there's new evidence, reported literally everywhere — from the Guardian to the New York Times — that not only are LCHF diets good for losing weight, they might have a special power: curing diabetes. This is, of course, total nonsense.

Scientific Shenanigans

Usually, I spend a lot of time going over the science. I talk about the pros and cons, what it actually means, and why the media might have gotten it a bit wrong. So here goes. What was the study? A scientific survey. A survey of people in a Facebook group who follow LCHF diets and rate them highly as a method for diabetes control. The researchers asked 316 people in a Facebook group dedicated to using LCHF diets to control diabetes whether the diet helped them control their diabetes, and they responded that it did.

That's it. THE ENTIRE STUDY. I'm not kidding. Here are the study methods.

I could go in depth into the methods and talk about bias, statistics, and control groups, but honestly I don't think it's necessary. There's no point going in depth on a survey, because all that surveys prove is that people have opinions. The researchers spent time confirming these opinions, but whether this is just a small, self-selected group or a real phenomenon is yet to be demonstrated. This was the equivalent of asking a bunch of cyclists if they thought cycling was a good idea, and then printing news headlines screaming: "CYCLING CURES ALL HUMAN DISEASE, SAYS STUDY". Total rubbish.

And this time, it's not all the media's fault. Some of the statements made in the study itself by the scientists were pretty unusual — the claim, for example, that people were achieving "exceptional" diabetes control. Since all the researchers had were self-reported before-and-after values for blood sugar, and there are innumerable known issues with asking people questions like this, the conclusion that this diet did anything at all for blood sugar is a wild guess.

Pictured: Wild guess

Media Madness

Ultimately, this is yet another tedious story where incredibly preliminary findings have been blown out of all proportion.

Pictured: "Our results are meaningless but hopefully future research agrees with us and THEN we'll be vindicated"

If you look at the conclusions of the study (above), it's actually quite easy to see just how useless the study is for making claims about diabetes. The authors spend an entire paragraph hedging about whether their results mean anything at all. Which anyone who'd read the study would realize.

Unfortunately, "Research doesn't prove much at all" doesn't sell newspapers.

Sadly, it's another story of preliminary research being blown out of all proportion by media sources desperate for a story. LCHF diets have been gaining popularity, and there's some indication that they might be useful for weight loss, but there's also good evidence that they're no different from any other type of calorie restriction. If you've got diabetes, don't listen to the hype. Talk to your doctor, who is a much more reliable source of information about diabetes than sensationalist media pieces. It turns out the best diet may just be the one that is best for you. Not a shock, really.

If you enjoyed this, follow me on Medium, Twitter or Facebook!
https://gidmk.medium.com/diabetes-cant-be-cured-with-low-carb-diets-8b645c935373
['Gideon M-K', 'Health Nerd']
2019-07-02 03:18:51.519000+00:00
['Science', 'Weight Loss', 'Low Carb', 'Diabetes', 'Health']
Advertising to Facebook “Fans” Alone Contradicts Marketing Theory
Not only does this mean that very few people are seeing a brand's content, but those exposed to this content are only the page "Likes" or, as we used to call them, Facebook 'fans'. This implies that many marketers believe there's value in consistently publishing content for their social fan community alone — essentially, driving business results through a loyalty approach. The problem is, we know loyalty campaigns (across any channel) have consistently proven ineffective at driving brand growth. Penetration campaigns outperform loyalty campaigns, every time. Of the 880 cases analyzed from the IPA databank by Les Binet & Peter Field in 2007, only 44% of loyalty campaigns reported "very large" business effects (sales, market share, penetration, loyalty, price sensitivity, or profit), while penetration campaigns had a success rate of 77% on the same metrics.
https://medium.com/comms-planning/speaking-to-your-facebook-fans-contradicts-marketing-theory-1888b234bc98
['Alysha Lalji']
2016-10-10 20:55:17.506000+00:00
['Marketing', 'Social Media', 'Facebook', 'Facebook Marketing', 'Advertising']
NaNoWriMo Superhero on Medium: Leo, 6th grader
Welcome to the tenth and last November episode of NaNoWriMo Superheroes and Superheroines on Medium! In this episode we meet Leo, who completed his first novel last year during NaNo at age 10, and who this year, on November 9, said to his parents, "I have to start writing! November is the month I write a novel." Hear more from Leo about novel writing in the interview, and I hope you're as inspired as I am by this young writer.

Leo, pondering the nature of penguins.

I asked Leo the same questions as his schoolmates Janet and Adi from Barrington Lloyd-Lovett, Josh Gauthier, and my daughter Ava. If you leave a response to this post, I'll make sure Leo sees it.
https://medium.com/nanowrimo/nanowrimo-superhero-on-medium-leo-6th-grader-9cd7ca495ffb
['Julie Russell']
2017-12-09 17:58:40.787000+00:00
['Creativity', 'Writing', 'Fiction', 'Nanowrimo Superheroes', 'NaNoWriMo']
8 Backyard Plants to Boost Your Immune System
8 Backyard Plants to Boost Your Immune System

Because Winter is Coming

Winter is coming, and there's a good chance you will be exposed to cold germs, the flu, or Covid-19. Let's hope not, but here are some plants that can help you build a wall of immunity around yourself. I love medicinal plants, and there are many in your backyard that can help with a range of ailments, but with Covid-19 back on the rise, now seems to be the time to create an immune bubble for ourselves. And bonus: you can find many of these plants in your own backyard. But don't wait until winter to find them. Fall is the best time to track down these plants, and I personally love a walk in the woods or park in fall. The crisp air, sun beams cutting through the tree branches, life taking one last look before winter.

Pumpkin Seeds

Yes, that's right: pumpkin seeds. And what better to do with that old pumpkin? Well, there are 26 other things you can do with your old pumpkin if you don't like the seeds. Pumpkin seeds are packed with zinc, magnesium, iron, and vitamin E, all great for boosting immune function. Just one serving of pumpkin seeds can supply 14% to 42% of the daily target for these essential nutrients. Pumpkin seeds also have anti-fungal and anti-viral properties. They help with cell growth, improve your mood, and even support quality sleep. You can sprinkle pumpkin seeds on just about anything, from oatmeal or overnight oats to garden salads, cooked veggies, stir-fries, soups, whole-grain dishes, tacos, and desserts. Roasted shell-on pumpkin seeds make a great snack or trail-mix add-in. Pumpkin seed butter can be whipped into smoothies, swirled into yogurt, drizzled over fruit, or used as the base for energy bars or balls.

Peppermint

Peppermint grows like a weed, inside or out. Its leaves and essential oils contain active components, including menthol and rosmarinic acid, which have antiviral and anti-inflammatory activity. Peppermint-leaf extract can battle respiratory syncytial virus (RSV) and significantly decrease levels of inflammatory compounds. Peppermint oil can also kill bacteria such as E. coli, Listeria, and Salmonella in food, as well as Staphylococcus and pneumonia-linked bacteria. Menthol, another extract from peppermint, can kill bacteria too. Since it's the extract and not the leaves that carries these effects, you will need to extract the oils and chemicals to see the benefits.

Burdock

Burdock root is high in vitamins C and E, which build immunity. Clip the prickly heads and the brilliant pink flowers and focus on the root. You'll want to look for first-year plants for the greatest benefit. A great article on how to prepare this plant tells you how not to have a bad experience the first time. You can roast the root, make kinpira gobo, or stir-fry it.

Rose Hips

Go to any garden and you'll see the little bulbs left over after the roses have bloomed. Roses grow wild in every state except Hawaii, as well as in gardens. If you have your own, refrain from dead-heading the spent flowers; otherwise the seed pods won't form. The rose hip is the oval-shaped fruit or seed pod of the rose plant and is found directly beneath the flower after successful pollination. They are often missed because people prune their rose bushes back to encourage more flowers. Rose hips can range in color from an orange-red to a dark purple or even black, depending on the species and harvesting time. Amazingly, these are full of vitamins C, A, B, E, and K. But they are best after the first frost, when sugar is concentrated and they become soft. If your area doesn't have frost, plan for a late fall harvest, when the rose hips gain a deep-red hue and turn slightly soft to the touch when gently squeezed. Eating them fresh is best, as the drying process destroys from 45 to 90 percent of the vitamin C, and infusions extract only about 40 percent of what's left.

Photo by vasilisa.via from PxHere

Before use, rose hips require de-seeding to remove the tiny hairs that surround the seeds. Rose hips can be dried in the sun or by using a dehydrator; afterwards they can be cut into pieces or powdered for later use. You can eat rose hips raw or make them into a tincture, tea, jelly, or syrup. Tea is really easy: pour boiling water over the rose hips, let them sit 10–15 minutes, then strain out the rose hips.

Elderberry

This was one of the first medicinal plants I bought for my garden — it's an immune system superstar. Be careful not to confuse these with pokeweed: elderberry carries many more berries and is a bigger shrub than the simple pokeweed. ID it by the berries and the reddish branches they hang from. Elderberry tincture and its derivatives have long been used as medicine for cold and flu season, with a history of use in preventing and treating symptoms of influenza, colds, and sinusitis, and anti-viral activity against influenza virus and herpes simplex. The berries are also edible, but are best cooked in order to avoid any stomach upset — they contain toxins that create hydrogen cyanide in the gut. Make an immune-boosting elderberry syrup and tincture, or try elderberry mead, soda, or fermented honey. They can also be included in a medicinal herbal tea blend.

Cherry Bark

Growing up on the east coast, I remember ravenously eating the sour cherries from the tree in my backyard, but I never thought about the bark. The inner bark of the black cherry or choke cherry tree can be used as a cough remedy to support the immune system and stop unproductive coughing so a person can sleep. Follow the step-by-step instructions here and enjoy foraging for this wild medicinal herb, or try out this bark and root recipe with cherry bark. If you can't find a cherry tree, you can buy a tincture and syrup from Naturopathica.

Garlic

Yeah, I know winter is cuffing season, but sometimes you've got to fend off that vampire. Garlic is a powerful antimicrobial and liver protector. Garlic is great because it's friendly to your gut flora and fauna but kills off the invaders. Garlic can also be employed as a preventative for the common cold. It has been shown to reduce the incidence of colds by 63%, decrease the severity of symptoms, and accelerate recovery time when compared with a placebo. Garlic has anti-viral activity against influenza A and B, cytomegalovirus, rhinovirus, HIV, herpes, viral pneumonia, and rotavirus — note that it does not prevent any of these.

Echinacea

Echinacea is native to North America and is commonly known as purple coneflower. The species used medicinally are Echinacea angustifolia and Echinacea purpurea. This is another plant found easily in backyards. Echinacea is an immune stimulant. The main actives in the plant are alkylamides, which give a characteristic tingle on the tongue while making your mouth water. Echinacea may lower the risk of developing colds by more than 50% and shorten the duration of colds by one and a half days. Echinacea has also been known to help with depression and anxiety, perfect for the cold, dark winters. E. purpurea is easier to cultivate, and the whole plant can be used, including the leaf, flower, seed, and root. The roots, however, shouldn't be harvested until the plant is at least two years old. Echinacea has a great safety profile and can be taken as a tincture or capsule. You can also enjoy E. purpurea as a carefully dried tea, to preserve the alkylamides.
https://medium.com/the-innovation/8-backyard-plants-to-boost-your-immune-system-ea04b32ea5dd
['Marcus Griswold']
2020-11-03 19:50:04.153000+00:00
['Health', 'Covid 19', 'Psychology', 'Nature', 'Food']
The Origin of The Miranda Warning
The Origin of The Miranda Warning

How Ernesto Miranda's case came before the Supreme Court

"You have the right to remain silent. Anything you say can be used against you in court. You have the right to talk to a lawyer for advice before we ask you any questions. You have the right to have a lawyer with you during questioning. If you cannot afford a lawyer, one will be appointed for you before any questioning if you wish. If you decide to answer questions now without a lawyer present, you have the right to stop answering at any time." — The Miranda warning

Miranda warnings today are universal in the justice system as a constitutional right to an attorney and against self-incrimination. But few today know the origin and story behind the Miranda warning. According to the National Constitution Center, the "Miranda" in the term comes from Ernesto Miranda, a man arrested in March of 1963 for rape and kidnapping. While in police custody, Miranda confessed to kidnapping and rape charges. However, his lawyers wanted to overturn the conviction: during cross-examination in his trial, it emerged that Miranda wasn't told he had the right to remain silent and the right to a lawyer. That would later be the framework for Miranda v. Arizona, a 1966 case that determined the Fifth Amendment to the Constitution ensures people must be read their rights to consult an attorney before and during questioning, and that every defendant has the right against self-incrimination.

Who was Ernesto Miranda?

Ernesto Miranda — Wikipedia Commons

Mitchell Caldwell and Michael Leif at American Heritage magazine say that Ernesto Arturo Miranda was born in Mesa, Arizona, on March 9, 1941, to a Mexican immigrant. After his mother died, he didn't get along well with his father, who remarried. Miranda was first convicted of a crime when he was in eighth grade — he was convicted of burglary and sent to reform school. In that period, Miranda attended the Queen of Peace Grammar School in Mesa, Arizona. After his release, Miranda was arrested for attempted rape and assault. In 1956, Miranda was released from the reform school at the Arizona State Industrial School for Boys (ASISB), and he returned there several times after getting in trouble with the law. He eventually moved to Los Angeles, where he served 45 days in county detention for curfew violations and an armed robbery, and was then sent back to Arizona.

For Miranda, his only option at the time was going into the U.S. Army. He spent a year and a half in the military, a third of it doing hard labor for peeping Tom acts and going absent without leave. He eventually left the military on an honorable discharge and went to Texas, where he was sent to federal prison for a year after stealing cars. At that point, Miranda seemed to get his life together. He moved in with a woman named Twila Hoffman in California, who had just separated from her husband but didn't have money for a divorce. Miranda was 21, Hoffman was 29, and Hoffman had two children. The two had a daughter together, and Miranda, Hoffman, and their children moved back to Mesa. Miranda worked as a dockworker, while Twila Hoffman worked at a nursing school. He hadn't held a single job for more than two weeks up to that point, yet he worked so well that his supervisor said he was "one of the best workers [he had] ever had."

The attack

In 1963, an 18-year-old woman named Patricia Weir (not her real name), who lived in Phoenix, worked at a local movie theater. The day was March 2, 1963, and one movie caused her to stay late at the theater. The bus didn't reach her stop until 12:10 a.m. She got off and walked up the street, but before she reached home, a man jumped out of a car, grabbed her, and put a hand over her mouth: "Don't scream, and you won't get hurt." She asked the man to "let me go, please let me go," but he dragged her into the car, tied her hands behind her back, and made her lie down in the back. The man drove the car for 20 minutes, then untied Weir and forced himself upon her, and then made her give him whatever money she had in the backseat. He drove her back to her house and said: "Whether you tell your mother what has happened or not is none of my business, but pray for me."

Weir pounded on the door, crying as she told her sister what happened. The sister called the police, and Weir was taken to the Good Samaritan Hospital to be examined, where two detectives interviewed her. She said the attacker was: "[a] Mexican male, twenty-seven or twenty-eight years old, five feet eleven inches, 175 pounds, slender build, medium complexion with black, short-cut, curly hair, wearing Levi's, a white T-shirt, and dark-rim glasses." However, when asked later, she wasn't sure what the man's nationality was. She said he might have been Italian, and the police didn't have much to go on to continue the investigation. However, Weir's brother-in-law, who picked her up at the bus stop after the attack, said he had seen a Packard with license plate DFL-312 prowling the street. The car, a 1953 Packard, belonged to Twila Hoffman.

The confession

The Maricopa County Courthouse — From Marine69–71 on Wikipedia Commons

Caldwell and Leif go on to say that Miranda went to bed one night after a 12-hour shift at work, and a couple of detectives arrived at his home. According to Ron Dungan at AZ Central, Hoffman answered the door with a baby, and the two children were with her. Miranda awoke, and the detectives asked him to come with them to the police station. "We'd rather not talk to you about this in front of your family," Detective Carroll Cooley said. They led him to a four-man lineup, where Weir identified him. They also led Miranda to an interrogation room, where, over two hours, he confessed to rape and kidnapping. Miranda later said the cops coerced him into a confession while he was dead tired from the graveyard shift. The detectives later brought Weir into the room, and Miranda said: "that's the girl." He gave a detailed account that matched Weir's story and then agreed to a written statement of his confession. On the confession was a disclaimer that the suspect was confessing "with full knowledge of my legal rights, understanding any statement I make can be used against me." Miranda signed the disclaimer, and the district attorney filed charges against Ernesto Miranda for rape and kidnapping.

The trial

Miranda was represented by an attorney named Alvin Moore at the Maricopa County Courthouse. Caldwell and Leif describe Moore as "a passionless defense attorney" in a very run-of-the-mill case at the courthouse. Four witnesses took the stand, and Miranda's confession was presented as well. Weir testified too, an emotional testimony that had a tremendous effect on the jury. Moore, however, at one point questioned Cooley about the interrogation. He asked if Cooley read Miranda his rights. Cooley confirmed that he read the statement. He then questioned Cooley about the statement — it didn't include anything about the defendant being entitled to the advice of an attorney. Knowing this, Moore made a formal motion to exclude the confession, but the motion was denied. The jury deliberated after being instructed by the judge. The jury of three women and nine men found him guilty, unanimously.

Moore later filed an appeal of Miranda's trial, in which he asked two pivotal questions that would shape legal history: "Was [Miranda's] statement made voluntarily?" "Was [he] afforded all the safeguards to his rights provided by the Constitution of the United States and the law and rules of the courts?" Miranda himself pushed the case further after the Arizona Supreme Court decided that his confession had been given voluntarily and properly admitted, since Miranda had not sought counsel. Miranda then filed a request himself to the Supreme Court, which caught the attention of Robert J. Corcoran, an attorney at the ACLU. Corcoran fought to have Miranda's Supreme Court case considered, but Moore refused to press on with the case. Corcoran found assistance in a trial lawyer named John J. Flynn, who worked at a firm that took two cases a year for the ACLU. Flynn solicited the help of an associate named John P. Frank, who had clerked for Hugo L. Black, and the two of them worked on the case. Frank and Flynn corresponded with Miranda frequently, and Miranda was very thankful for their help, saying "to know that someone has taken an interest in my case, has increased my moral enormously."

Miranda v. Arizona

Chief Justice Earl Warren — Public Domain

Preceding Miranda v. Arizona in the Supreme Court were cases like Gideon v. Wainwright, a 1963 case that guaranteed the right to an attorney, and Escobedo v. Illinois, a 1964 case that protected the right of criminal suspects to counsel during police interrogations. Both of these rights were protected under the Sixth Amendment. For the Warren Court, which fervently protected constitutional rights in desegregating the school system and prohibiting prayer in public schools, the Miranda case was the convenient successor. By 1966, Escobedo v. Illinois had many clerks marking "Escobedo" on cases that involved coerced confessions — and Earl Warren then chose Miranda's case, in which he was, in the words of Caldwell and Leif, "interrogated for two hours without being informed of either his right to remain silent or his right to counsel."

Escobedo's case, however, was different from Miranda's. The Arizona Attorney General's Office argued that Escobedo's confession was more clearly coerced: the police had made an effort to stop him from seeing a lawyer, while Miranda faced no such concerted effort. Escobedo also had a clean record, and Miranda did not, suggesting that Miranda knew the interrogation process. Frank, on the other hand, gave an argument about constitutional rights. Caldwell and Leif say: "It was the battle of constitutional rights versus the possibility of turning dangerous criminals back into the streets. It was the battle of good versus evil. But what was the ultimate good? And which was the worst evil?"

On June 13, 1966, the Warren Court decided, 5–4, in favor of Miranda. The liberal justices of the court decided in favor, while the conservative justices decided against. The Court made sure that the Fifth Amendment would be the backbone of the Miranda protections. Earl Warren wrote the opinion: "This Court has recognized that coercion can be mental as well as physical, and that the blood of the accused is not the only hallmark of an unconstitutional inquisition." Warren went on to write the words that are cited almost verbatim in the Miranda warning today. If a person is subject to interrogation: "[He] has the right to remain silent … that anything said can and will be used against the individual in court … that he has the right to consult with a lawyer and to have the lawyer with him during interrogation … [and] that if he is indigent, a lawyer will be appointed to represent him." Warren said that Miranda was not told of his right to an attorney and to have one present, nor was Miranda given his right against self-incrimination. His confession was inadmissible since he had not been given these warnings.

Legacy

Caldwell and Leif note that opposition to the Warren Court grew because of the Miranda ruling — the Court had, after all, let a confessed rapist go free. Miranda warnings are now standard in the justice system, and the reading of these rights has become universal. Yet Ernesto Miranda wasn't actually free after the Warren Court ruled his confession inadmissible. He became a celebrity and "the most popular inmate at the Arizona State Prison," frequently signing autographs and giving legal advice. However, the Maricopa County District Attorney's Office retried the case without the confession. Before the trial, Twila Hoffman talked with the prosecutor and said that Miranda, when she visited him in prison, had confessed that he'd kidnapped and raped an 18-year-old girl, and had then asked Hoffman to visit Weir's family. He asked Hoffman to convey his promise to marry Weir if she dropped the charges, saying he would return to Hoffman later, but wanted to get out of jail. With Twila Hoffman and the victim on the stand, Miranda was sentenced by the jury to 20 to 30 years for rape and kidnapping, the same sentence as in his original trial.

Hoffman later changed her name to Twila Mae Spears. Flynn once wrote to her in 1973 asking for visitation rights, according to Dungan. She wrote in response: "This letter is very close to harassment. Any other correspondence as to this matter will result in Legal action."

In 1972, Ernesto Miranda was paroled. However, he violated his parole, went back to jail, and was freed again in 1975. He was 34 years old, and in a bar called "the Deuce" in Phoenix, he was playing cards with two Mexican immigrants. They got into an argument, and the two men stabbed Miranda to death. The bartender said the fight ended very quickly. By the time he arrived at the hospital, Miranda was pronounced dead. When police arrested Fernando Rodriguez, one of the men who killed Miranda, they read him his Miranda rights, in English and Spanish.
https://medium.com/crimebeat/the-origin-of-the-miranda-warning-394e7a03fe8a
['Ryan Fan']
2020-10-10 08:36:48.098000+00:00
['History', 'Nonfiction', 'Justice', 'True Crime', 'Society']
Applications of Deep Learning for real-time Object Detection
The global computer vision market was valued at $27.3 billion in 2019, with a CAGR of 19% from 2020 to 2027 [1]. Object detection is one of the core computer vision tasks and has a broad range of industrial applications, such as:
Cancer detection in radiology-based images [Healthcare]
Detection of manufacturing defects, factory floor surveillance [Manufacturing]
Detection of seat belts, parking in restricted areas [Public Safety]
Stock level analysis and inventory management [Retail]
What is object detection? There are four types of visual recognition tasks in computer vision. First, image classification, which is the assignment of labels to images, for example, labelling cows in a picture of a farm as cows. Second, object detection, which is to not only label the cow but also to locate it using a bounding box. Third, semantic segmentation, which predicts a label for each pixel of an image without differentiating between objects with the same label. Fourth, instance segmentation, which involves labelling as well as segmentation of each individual object.
Different types of computer vision tasks
What is Deep Learning? Deep learning is a subset of machine learning that can process data from a very wide variety of sources. Compared to traditional machine learning, it requires less data preprocessing by humans and can often produce more accurate predictions from the data. In deep learning, interconnected layers of software-based calculators known as neurons form a neural network. There are many layers of such neurons, hence the word "deep" neural network. The network ingests data and processes it through each layer of the neural network, with each layer learning increasingly complex features of the data. Once a deep neural network has learned how to make determinations from input data correctly, it can then use what it has learned to make determinations about new data. For example, once it learns what an object looks like, it can recognize the object in a new image. In other words, a deep neural network that has learned how to recognize cows can quickly detect cows in new images. How a "neural network" a.k.a. "AI model" works: the network processes signals by sending them through a network of nodes analogous to neurons. Signals pass from one node to another along links. "Learning" improves the outcome by adjusting the weights that amplify or damp the signals each link carries. Nodes are typically arranged in a series of layers, in other words, a "deep" neural network.
Image from Waldrop, M. M., PNAS, 2019, 116(4)
Technical detail: How does deep learning for object detection work?
Sequence of tasks involved in object detection
Use of a deep neural network for object detection
Recent trends in applications of deep learning for object detection
Overall, the accuracy and performance of state-of-the-art deep learning models reported in 2019 are significantly higher than those of previous years. Higher accuracy has a profound impact on applications of the technology in medical imaging as well as surveillance systems. Improvement in performance means results can be inferred much faster on modern edge-based computing systems, paving the way for applications such as real-time drone-based video analytics. 
Specifically, the new improvements to deep learning models came by way of the following advancements:
1. Face detection mean average precision went above 90%
Face detection is a computer vision problem of detecting human faces in images, which is the first step in applications such as face verification, face alignment, and facial recognition. Face detection differs from generic object detection in two ways. First, the range of object scales is larger in face detection, and blurring is more common. Second, face detection has a single target and depends strongly on the structural characteristics of the face. WIDER FACE is currently the most commonly used benchmark for evaluating face detection algorithms. The high variance of face scales and the large number of faces per image make WIDER FACE the hardest benchmark for face detection, with three evaluation metrics (easy, medium, and hard). In 2019, PyramidBox++ [2], VIM-FD [3], ISRN [4], Retinaface [5], AlnnoFace [6] and RefineFace [7] all reported mAP scores of greater than 90% on the easy, medium, and hard metrics. This is a significant improvement over previous years.
2. Recent trends in pedestrian detection
CityPersons is a new and challenging benchmark for pedestrian detection. The dataset is split into different subsets according to the height and visibility level of the objects, and thus it's able to evaluate detectors in a more comprehensive manner. In 2019, the APD model reported a 30% improvement in object detection performance over 2018 [8].
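To make the detection task described above concrete, here is a minimal sketch of running a pretrained detector. It uses PyTorch's torchvision Faster R-CNN, which is an illustrative choice on my part (not one of the models surveyed above), and the image path is a placeholder.

import torch
import torchvision
from torchvision.transforms import functional as F
from PIL import Image

# Load a detector pretrained on COCO; weights download on first use.
model = torchvision.models.detection.fasterrcnn_resnet50_fpn(pretrained=True)
model.eval()

# "farm.jpg" is a placeholder path; any RGB image works.
image = Image.open("farm.jpg").convert("RGB")
tensor = F.to_tensor(image)  # shape (3, H, W), values in [0, 1]

with torch.no_grad():
    prediction = model([tensor])[0]

# Each detection has a bounding box, a COCO class id, and a confidence score.
for box, label, score in zip(prediction["boxes"], prediction["labels"], prediction["scores"]):
    if score > 0.8:
        print(label.item(), round(score.item(), 3), [round(v, 1) for v in box.tolist()])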
https://medium.com/unitx-ai-magazine/recent-trends-in-the-application-of-deep-learning-for-object-detection-aabed8e705bc
['Kiran Narayanan']
2020-08-04 11:39:14.904000+00:00
['Machine Learning', 'Artificial Intelligence', 'AI', 'Deep Learning', 'Digital Transformation']
How to Not Suck at Design, a 5 Minute Guide for the Non-Designer.
6. Use a list view for results, if order is important
Most mobile and web apps have some type of search, and there can be some healthy design debates on how to display the results. If order is important, then a list view is most effective. If order doesn't matter and you would like to encourage discovery (like Pinterest or Airbnb), then a grid view will encourage a gaze pattern that supports discovery.
7. Design in black and white first, add color later
Designing in black and white will keep the focus on solving and designing the core experience of your app. Color evokes strong emotional responses and often interrupts our ability to focus on the core design problem.
8. Create comfortable design
Hand strain is a real issue; consider the graphic below from Luke Wroblewski's amazing article: Responsive Navigation: Optimizing for Touch Across Devices. Luke lays out the areas of a phone that are easiest to reach and use (at least for right-handers). I'd love to see apps have a setting where you can switch the interface from right-hand dominant to left-hand dominant. Many effective mobile apps keep navigation and core actions in the bottom third of the phone.
Image Credit: Luke Wroblewski, Responsive Navigation: Optimizing for Touch Across Devices
9. Borrow Color Palettes
Color is a bit of an elusive dark art. I highly recommend heading over to Dribbble and searching for "Color Palettes", or use a color palette generator like Coolors or Color Claim. Save yourself the hours of endless debate and second-guessing.
10. Use Apple and Google OS Conventions
Apple and Google have created incredible resources for anyone building software for Android or iOS. For example, the Google Material spec has guidelines, resources, colors, icons, and components to help jump-start the design of your app. Apple has the HIG, their Human Interface Guidelines, which outline everything you need to know about how to design an iOS app.
https://medium.com/startup-grind/how-to-not-suck-at-design-a-5-minute-guide-for-the-non-designer-291efac43037
['Marc Hemeon']
2020-08-13 16:56:29.187000+00:00
['Design', 'Innovation', 'Marketing', 'UX', 'Business']
Late last year, Sir David Attenborough attended crucial UN climate talks and implored that human…
Late last year, Sir David Attenborough attended crucial UN climate talks and implored that human behaviour and attitudes to climate change and global warming must change if we want to save our planet. Attenborough states, "given the chance it can recover, and we know how to do that." It's not too late. In observing data from Climate Central, there is an obvious correlation between dramatic increases in carbon dioxide in the air and temperature rise. Carbon dioxide accounts for 75% of greenhouse gases and is largely a result of human behaviours such as burning fossil fuels and deforestation. These greenhouse gases trap heat within the Earth's atmosphere that would normally be re-emitted back into space at night when the temperature drops. Essentially, these gases resemble the glass in a greenhouse, which allows sunlight to pass into the 'greenhouse' but prevents heat from escaping into space, leading to a global temperature increase. Image source: Climate Central (2018) 'GLOBAL TEMPERATURE & CARBON DIOXIDE', Available at: http://www.climatecentral.org/gallery/download/co2-and-rising-global-temperatures (accessed: 27.11.2018) National Geographic suggests that the continued increase in the global temperature could have catastrophic impacts on our wildlife, sea levels, and the rate of melting ice caps and glaciers. The frequency and intensity of extreme weather events will also be affected. Already we have seen heatwaves, extreme storms, and seasonal changes, as well as changing animal behaviour, all of which indicate climate change and global warming. Plus, a recent article in The Guardian revealed that scientist Brad Lister returned to a Puerto Rican rainforest after 35 years to find that 98% of ground insects had vanished, with the most likely reason being global warming. This crash in insect numbers impacts the foundations of the rainforest food chain, risking "ecological Armageddon". The decline has had a snowball effect on other creatures who feed on the insects. This includes the Puerto Rican Tody, numbers of which have dropped by 90%. Lister describes this as a "bottom-up trophic cascade" because "when the invertebrates are declining the entire food web is going to suffer and degrade." This is further supported by an article in The Guardian, by Jonathan Watts, which explains that "45% of all potential environmental collapses are interrelated and could amplify one another." Action must be taken before this cascades into an even wider, and more devastating, problem. Whilst businesses and governments need to take a stronger stance on climate policy, there are small changes that individuals can also make to reduce their carbon footprint and have a positive impact on the environment. Individuals can focus on small domestic changes, such as switching off plugs, using low-carbon or public transport rather than fuel-based vehicles, utilising renewable energy sources, and insulating their homes. EnergiToken is incentivising people to make these changes by awarding those who display energy-saving behaviour with EnergiTokens (ETK). ETK is a financially tangible cryptocurrency which is given to customers who, for example, purchase low-carbon transport, solar panels, or energy-efficient appliances from one of their partners. This ETK can then be spent by customers within the EnergiToken ecosystem of approved partners. 
EnergiToken’s partner, ON5, encourages domestic and commercial changes in energy consumption through workshops, and their recently launched Energy100 which educates people on how to be more efficient in their homes and workplaces. Individuals may be surprised by how small changes to one’s behaviour can really have a meaningful impact on the wider environment. If everyone makes a small change to their consumer habits, collectively the impact can be huge. Working with several energy efficient and environmentally passionate partners, EnergiToken plans to lead decarbonisation, and energy consumption reduction in order to help reduce the human-caused, damaging impact on the environment. Incentivising behavioural change with ETK, will motivate more people to make ‘greener’ choices until it becomes instinctive practice, thus having a sustained, and increasing impact on the environment. Small changes can have big consequences. “Every action counts.” — Juan Rocha, Stockholm Resilience Centre Reduce consumption and get rewarded. Visit www.energitoken.com today to find out more.
https://medium.com/energitokennews/late-last-year-sir-david-attenborough-attended-crucial-un-climate-talks-and-implored-that-human-f354355a5018
[]
2019-01-22 14:44:28.915000+00:00
['Climate Change', 'Environment', 'Sustainability', 'Global Warming', 'Pollution']
Is It Time to Get Over Design Patterns?
One of the core principles of good programming is don't solve the same problem twice. If someone has already invented the perfect bubblesort, you have no business rolling your own. If you've got a reasonable regular expression to validate email addresses, you don't need to do it yourself. And so on. This logic is easy to understand. Every time you reinvent a piece of functionality, there's a risk that things will go sideways. You could introduce new bugs or stumble into unexpected shortcomings. At best, your code will suck up extra testing time. At worst, you'll create problems that will hide in the seams and joints of your application, like bedbugs in the corners of an old bed frame. So it's easy to understand the allure of design patterns. If we're going to solve the same problems over and over again, wouldn't we be wise to use the canonical solutions, ones created by far smarter programmers and tested over the eons? Or, to put it another way, don't we have the responsibility to use battle-tested patterns to save time and ensure the best possible final product? This is how design patterns reel you in.
A brief history of design patterns
The idea of patterns — conceptual models that you can define and reuse — has deep roots, stretching back to real architecture (of buildings) and the work of Christopher Alexander. But design patterns as most programmers know them sprang into existence in 1994, when four coding geniuses wrote a book called Design Patterns: Elements of Reusable Object-Oriented Software.
The book that launched a thousand design reviews
Design Patterns set out 23 foundational patterns grouped into three categories: Creational, Structural, and Behavioral. You can review all of them here. Amazingly enough, when people talk about design patterns today — some 25 years later — they're usually referring to one of the ancient patterns first codified in this book. This sort of success is no accident. And there's no denying that the original design patterns were written by sharper programmers than you or me. But design patterns aren't a neutral part of software design, and using them has a price that's often overlooked.
The cost of complexity
Design patterns are often sold to programmers with architectural analogies. Imagine you were building a new home. Would you want the tradespeople doing the work to reinvent domestic plumbing systems? Would you want the electrician to cook up his own approach to wiring fuses?
A couple of design patterns short of perfection / [Pixabay]
But building software systems is very different than building houses. For one thing, design patterns aren't ingredients you can drop straight into your code, like a handy function from a class library. Instead, each pattern is a model that needs to be implemented. Most design patterns define an interaction that spans different objects, which means you need to make changes to several classes. The sheer weight of this extra code complicates your design. They're especially dangerous for new developers, who never see a coding side-trip they don't want to take. Even when design patterns are at their best, they force you to trade simplicity for something else. Often, that "something else" is just a vague promise of good encapsulation and a warm fuzzy feeling.
The general problem / XKCD
Design patterns are opinionated. They embed themselves in your code, and they pull your classes in specific directions. Even the simplest patterns have a cost and introduce complexity. 
Consider the humble Singleton pattern—a class that only allows one instance. Despite its conceptual simplicity, there are roughly a dozen different techniques for implementing the Singleton pattern, depending on whether you need thread safety, lazy loading, serializability, support for inheritance, or you just love enums. It's not that Singleton design is an advanced concept. It's just impossible to design any single code ingredient to be perfectly generalized and perfectly suitable to every use case. And to this day, architects still debate whether the Singleton is a virtuous gold-plated pattern or an anti-pattern — something you should strive to avoid, because someday it will betray you.
Ambiguous extensibility
Design patterns are all about increasing abstraction in your code. Patterns like Proxy, Bridge, Adapter, and Facade add layers in between objects. At first, this seems like programming paradise. What virtuous programmer doesn't want less dependency between objects? We all know the rule: All problems in computer science can be solved by another level of indirection. But there's a side effect, too. Every extra layer of indirection adds a new place where you can put a solution. In other words, the more you abstract your design with patterns, the more places you open up for someone else to change the code. Future programmers are going to have trouble figuring out what part of the system to modify, and how they can extend the code without having their work collide with someone else's changes. The worst offender is the Mediator pattern, which aims to let two objects interact without knowing anything about each other. The result is either a holy nirvana of abstraction, or a way to seriously confuse responsibilities in your class model. There are two ways to wreck a car: 1) Tear it apart. 2) Call it a generalized road-limited transport container and start adding to it.
Mismatching and bad fits
It's easy to rush into implementing patterns without understanding the context—in other words, how do these patterns fit into your chosen language, framework, and type of application? The answer can be murky. Modern language features like generics change the way patterns are used. Dynamic languages from Lisp to Python make many patterns obsolete, according to no less a programmer than Peter Norvig. And functional programming languages exist in a parallel universe with completely different patterns. These inconsistencies aren't limited to language features. Other patterns don't play nicely with certain types of infrastructure. For example, you don't want chatty objects if you're dealing with network protocols, and multithreaded code can break the standard implementations of most of the original 23 design patterns. Patterns are at their best in the hands of framework designers, who can integrate them directly into a framework. For example, events are a modern example of the Observer pattern. The Prototype pattern is fused into JavaScript (and is the source of all its object-oriented features). Server-side web frameworks like ASP.NET implement the legendary Model View Controller pattern. And so on.
The antidote: Be simple
If design patterns are dangerous, what's the solution? The answer is to take a simple, solemn pledge. It's a sort of Hippocratic Oath of the programming world: First, be simple. If you're deep in a thorny problem, in a fog of semicolons and class relationships, trying to untangle responsibilities and keep everything manageable, pause. 
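To circle back to the Singleton discussed above: here is a minimal sketch of just one of those dozen-odd variants (thread-safe, lazily created) in Python. It is an illustrative implementation, not the book's, and it shows how quickly even the simplest pattern picks up machinery.

import threading

class Singleton:
    _instance = None
    _lock = threading.Lock()

    def __new__(cls):
        # Double-checked locking: skip the lock once the instance exists.
        if cls._instance is None:
            with cls._lock:
                if cls._instance is None:
                    cls._instance = super().__new__(cls)
        return cls._instance

# Both names refer to the same object.
a, b = Singleton(), Singleton()
assert a is b

Swap in an eager or enum-style variant and the code changes completely, which is exactly the point about hidden implementation choices.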
Don’t let design patterns short-circuit your critical thinking. After all, having a pattern does not protect you from a bad design. There’s no guarantee that the problem you think you’re solving with a pattern is the problem you need to solve. And adding patterns that don’t address the right problems — or any problem at all — is a certain path to Software Maintenance Hell. If you can guarantee nothing else about your program, promise to keep it simple. Standards / XKCD Patterns are a design language The real value of design patterns is not prescriptive (telling you what to do). It’s descriptive (telling others what you’ve done). Design patterns aren’t recipes. They’re a language. Good things happen when you think of design patterns as a language that can help you talk about application design. You don’t need to start out trying to use patterns. Instead — with experience — you’ll begin to recognize the outlines of patterns crystallizing in your code. For example, if you code web services, you’re almost certainly using the Facade pattern, whether you recognize it or not. Once you recognize the emerging structure of your code, you can use the language of design patterns — concepts like factory, decorator, and facade — to formalize what you’ve done. Design patterns can’t teach you software architecture. They aren’t meant as an excuse to write a lot of code or a way to avoid thinking deeply about design. But they can help you think about your designs at a higher level of abstraction. And that’s probably what the Gang of Four were hoping all along.
https://medium.com/young-coder/is-it-time-to-get-over-design-patterns-8851864a6834
['Matthew Macdonald']
2019-11-20 20:08:53.799000+00:00
['Software Engineering', 'Software Development', 'Software Architecture', 'Programming', 'Design Patterns']
Why Most Entrepreneurs Struggle
No matter what, control your environment. You are the smartest thing you have ever met. Use that.
2. FIND THAT FAMOUS "ONE SINGLE THING" THAT YOU ARE INSANELY PASSIONATE ABOUT
Come on. It's not rocket science. You are not looking for the love of your life. If you are reading this, I assume you have spent at least 18 years on this planet and have at least an idea of what you like and what you don't. Remember, you don't have to reinvent the wheel. Discover what you are really passionate about and what you would change in your life or people's lives. Think about what one thing you can do better than others, or how you could improve on an existing problem. I understand you need inspiration, but wasting a million days in a row jumping from one distracting website to another won't help you find the idea of your life. How do you make sure you've found that one single idea? Maybe Mark Cuban can help with his 12 Rules for Startups: Don't start a company unless it's an obsession and something you love. If you have an exit strategy, it's not an obsession.
3. YOU FOUND THE THING OF YOUR LIFE? GREAT! NOW DO ONE THING: START!
Stop the vicious cycle and just start. You will be impressed by how many people will contact you or will want to work with you once you begin. Scratch your idea onto a piece of paper (yes, just a piece of paper) and walk out of your apartment. Talk to people. By the way, get a life and stop that bullshit of, "What if they steal my idea?" Talk to as many people as you can. Let them steal it if they really will; you will kick their ass. Please also stop saying, "I need to find an investor." How about making some sales first? Maybe you will even realize you don't need to give any share of your business to an investor. Even if you really need an investor, by making some sales, you will have a stronger hand to play.
4. NOW THAT YOU'VE STARTED, HERE IS WHAT YOU SHOULD REMEMBER EVERY SINGLE DAY: SAY NO TO OTHER IDEAS, KEEP DOING ONE THING, AND DO IT FU*KING WELL
Just because you started working on your idea doesn't mean you won't meet distractions along the way. Your mind will be about to explode with all the ideas you can apply to your business, the many features you can add to your product, etc. Keep your focus. Say no to distracting ideas. If you keep doing that, another thing that will impress you is the power of what you studied in your marketing book: word of mouth. You will be truly impressed by the number of people coming back and asking for more work. Who cares about competition? You are just beating yourself. Isn't that what matters in the end?
5. OH, BY THE WAY, STOP GIVING A F*CK ABOUT WHAT OTHER PEOPLE THINK.
If you are an entrepreneur because you want to prove to other people that you are successful, then please go back to step one above. Make sure you are following your passion because you have a vision and you are out to change something in this world. Otherwise, this will kill you. You will have difficulty focusing, and you will keep being distracted because you won't really be passionate about getting anything done. Now what? Stop wasting your time reading this article and get your ass back to work.
https://medium.com/swlh/the-single-biggest-reason-most-entrepreneurs-fail-in-2014-4c7e41e013cb
['Ali Mese']
2020-05-12 19:57:42.716000+00:00
['Self Improvement', 'Tech', 'Startup', 'Business', 'Entrepreneurship']
DeOldify: GAN based Image Colorization
https://medium.com/merzazine/deoldify-gan-based-image-colorization-6abaa74dd250
['Vlad Alex', 'Merzmensch']
2020-06-09 08:30:57.027000+00:00
['AI', 'Published Tds', 'Artificial Intelligence']
Enable Multiple Apps to Access the Same Google Cloud Services
Understanding Mobile Development
Enable Multiple Apps to Access the Same Google Cloud Services
Google Cloud Services can be shared
Photo by Alex Machado on Unsplash
It has been some time since I worked on Google Maps. The key is there, and everything is working fine. There's no need to access the Google Cloud Console. Lately, I have been working on introducing App Bundle, which uses a different key to upload the app. For internal testing, I sign with the upload key. All worked fine until I noticed that Google Maps was no longer working!! Having forgotten how enabling the map worked, I wondered if I needed a new key, a new certificate, etc. I tried googling around, and nothing addressed the question I had.
Google Cloud Services
After some quick hunting around, I learned that Google Maps is part of Google Cloud Services. To enable any of these services, do check out the formal guide by Google. The entire setup can be summarized by the diagram below:
Set up billing (you can have more than one billing account if you like)
Create a project and attach it to the billing account you'd like to be billed through
In the project, select which APIs you would like to enable (e.g. Google Maps, Places API)
Lastly, create the API keys, and decide if each key should have access to all the APIs or be restricted to some of them
If my app is signed differently, do I need a new API key?
Now, the problem I faced is this: given that I'm using App Bundle, the app is signed differently before uploading. For internal testing, we have a local APK that is signed with the upload keystore. So how can I give it the same access as the main app that is uploaded to the store?
Option 1: Create a different key for the internal app. To enable this to work, other than creating the new key, I would need to:
Change my build Gradle to have different key access for my internal testing app vs the production version
Set all the API key restrictions the original key has; any future changes need to be catered for as well
The pro of this approach, though, is that you can have separate control over which API services can be accessed by the internal and production versions.
Option 2: Sign the APK with the same original keystore. This is back to square one, where the internal app's keystore signature is the same as production's. There are no changes to Gradle, nor the need to introduce a new API key. The drawback is that a key benefit of App Bundle, no longer needing to handle the original keystore, is discarded.
Option 3: Use the same API key, but support a different keystore signature. Wow, actually this is possible. To do so, you can go to menu → APIs & Services → Credentials. From there you'll get a list of API keys. Select the one that you're currently using
https://medium.com/mobile-app-development-publication/enable-multiple-app-access-to-same-google-cloud-services-cbb13e410ba4
[]
2020-11-24 08:44:40.344000+00:00
['Mobile App Development', 'Google Cloud Platform', 'Google', 'Android', 'Android App Development']
The rise of (audio)books. We have come a long way — from renting...
Image Source: Photo by Elice Moore on Unsplash We have come a long way — from borrowing books from libraries, buying and reviewing them online, to reading and highlighting on electronic reading devices (Kindle) that feel like books but can carry dozens of titles without the weight. It took us some time to try, adopt, and get comfortable with listening to narrated audiobooks. Some of us are still resisting this change. While books/ebooks have some nostalgia associated with them, audiobooks have made reading hands-free and added a new dimension to reading: tonality, aka emotion. When I searched "Audible" on Google, a Google ad popped up stating: "Listening Is The New Reading". Below are the five prominent ways in which I expect audiobooks, or books in general, to evolve.
1) Narrator-less Audiobooks
Image Source: BigSnarf Blog
While driverless cars steal attention and make more attractive headlines, a nearer reality is one in which Artificial Intelligence gradually eliminates the need for human narrators of audiobooks. Unlike a few years ago, robots don't sound robotic anymore. They sound more human, as tonality is now infused into the voice of our digital assistants. Yes, we can still distinguish a computerized voice from a real human's voice. But we are not far from making that distinction unnoticeable. The ousting of human narrators will lead to mass production of audiobooks, greater profit margins, and faster time to market for audiobooks.
2) Increasing Competition
Image source: Wired
Audiobooks are not competing with books or e-books. They're competing with podcasts, our music playlists, meditation apps, YouTube, phone calls, and even in-person conversations. Thanks to the popularity and convenience of Apple's AirPods, listening is on the rise, and so is the ambition of each of the audio apps above. While Netflix, Disney+, and Amazon Prime compete on screen time, podcasts, audiobooks, music apps, and phone calls will compete for ear time.
3) Informative → Engaging
Image Source: Margaritaville Resort
One reason why some book/ebook readers haven't yet turned into audiobook listeners is possibly because they zone out. In order to convert these readers to listeners, we may see the introduction of a lot more knobs to customize the AI-generated narration, from "monotonous" to "dramatic" to maybe "rap music". Voice emulation has unbelievable power. In the last couple of years, there have been a good number of videos on our social media feeds cautioning us about how voice emulation could result in the rise of deepfakes and hence can be a danger to our society. While such dangers do exist, any technology is only as good as how we put it to use. It is worth spending some time thinking about the bright side of the possibilities it opens up for all of humanity. To see what is possible, check out this story on how DeepMind and Google recreated ALS patient and former NFL linebacker Tim Shaw's voice using AI. Coming back to audiobooks, we can imagine voice emulation would allow us to choose narrators from hundreds of voices, from celebrities to grandpa/ma.
4) Convergence of Media
Image Source: Agri Investor
There are book lovers who love to read. Then, there are audiobook fanatics who are glued to their AirPods. But it is not all black and white. Some people prefer reading and listening at the same time. Currently, book, ebook, and audiobook purchases for the same title are considered separate. 
This could be caused by any of three reasons: 1) poor integration, 2) each trying to optimize their individual revenue, 3) the assumption that people always choose one experience exclusively over the other. In the future, we can expect audiobooks to be bundled for free (or for a nominal price) with book and ebook purchases. Captions on audiobooks will be the new ebook. Interestingly, Audible is already developing two new features addressing exactly this: 1) Immersion Reading, 2) Audible Captions.
5) Auto-translation
Image Source: Wikimedia
Market expansion is not just limited to converting readers to listeners. There's a massive untapped market of people across the globe who prefer not to read English, cannot read English, or for that matter cannot read at all. Making stories available for listening in local/regional languages and accents can open up a huge market opportunity, particularly for the self-help category. Translating millions of books into 1000+ languages in many different accents and voices is something that only technology can scale. Stories and knowledge can transform people, whether it be an investment banker on Wall Street or a fruit seller in Asia. If you're unaware of the power of the language translation capabilities that Google has developed, then check out the real-time translation that's in store with the upcoming Google Pixel Buds. The technology is not perfect, but it is promising, especially when translation is not required in real time. Can you imagine how real-time language translation with Google Pixel Buds would change travel forever? But that's for another post. If you like this, please give it some love 👏. Do you think I missed something? Did you find any holes in my arguments? Did you think these were pretty obvious? Let me know in the comments. If you're interested in reading more about the possibilities that lie ahead, subscribe to this publication: Predict.
https://medium.com/predict/the-future-of-audio-books-cbc5a21355dc
['Akash Mukherjee']
2020-07-19 01:37:08.089000+00:00
['Books', 'Futurism', 'Audiobooks', 'AI', 'Voice Assistant']
BI in Startups
BI in Startups
How People.ai Set Up a BI/Analytics Function in Under 6 Weeks
One of the growth challenges that startups face is the need for every decision to be data-driven. But to be data-driven, you have to have a Data and Business Intelligence (BI) function that can power your analytics and inform your decisions. With data spread across tens of systems, providing holistic insights using spreadsheets is not only difficult but not scalable. Setting up Analytics is necessary because it is a key foundation of your business and one on which future growth depends. We thought we'd share the story of how we created our BI/Analytics function in a period of six weeks. This is the technical story behind it, and we hope it and the lessons learned will be helpful to other startups preparing to establish their BI/Analytics function.
4 Basic Components of Your BI/Data Infrastructure
To begin with, at a very fundamental level, there are four logical components that go into building most BI/Data Warehouse systems:
Ingestion and compute layer to ingest and transform data (ETL/ELT)
A persistent data layer to store the raw/modeled data (Data Lake or Data Warehouse)
A visualization layer to provide insights via various BI tools
Additional components like workflow automation, Data Quality, CI/CD, etc.
How to Evaluate Infrastructure Vendors
The first challenge is to identify the technology/vendor for each infrastructure component. In today's BI/data space, there are hundreds of options, with pros and cons to each selection. So, how should you go about selecting components that will scale as the company matures from early-stage to growth-stage? Here's what we learned:
Step 1: What fits in the company? A few questions can help answer this:
Cloud or On-premise: Are the company's engineering and business systems hosted on cloud or on-premise? What are the organization's plans in one to two years (any big migration in the pipeline)? What cloud vendor is being used?
Data Security: Are you planning to process and store sensitive data? What level of compliance (HIPAA/PII/PCI/SOX/GDPR) is needed?
Budget: How much money is your company willing to spend on BI (both on people and technology) before seeing an ROI, and how do those budgets scale up and down depending on company growth? This helps in choosing systems with less stickiness and more elasticity, to scale up/down quickly.
Delivery Speed: With multiple departments needing data, realistically, what are the pressing and critical needs that can help move the needle? This will help in choosing components that scale versus provide speed, in the interim.
On a slightly related note, a big factor that helps in identifying and prioritizing the business needs and urgency is having an Analytics champion in the C-suite and every function. Partnering with the champion helps you to take a holistic view of the 'true' urgency and priority of needs, and that in turn helps in selection. 
Step 2: Understand the business use cases and the 3 V's
Volume: Create an inventory of data-producing systems and what volume of data each one produces. Have a multiplier of 10 to account for future use cases.
Variety: This is very critical, as it helps you come up with the ingestion layer. You likely have multiple data-producing systems (departments that use dashboards, such as Product, Sales, Finance, Marketing, HR, Engineering, Customer Success). What type of data do they produce? Is it structured or semi-structured?
Velocity: Identify how often decisions need to be made in various functions and levels (from C-staff to Sales Rep). With the exception of a few use cases, this may not be super critical at an early-stage company. However, as the company matures, this will become a huge value add. Consider having near-real-time (15 mins) use cases and use cases for weekly, monthly, and quarterly dashboards. For most use cases, a daily cadence for data and report refreshes will suffice.
Making the Vendor Selection
Armed with the above information, these are some example requirements of vendors that you may establish:
The solution needs to be in the cloud
All components must scale linearly in both directions
Minimal resource contention, especially between concurrent ETL writes and dashboard reads
Minimal to no maintenance cost
Has all compliances and satisfies security needs
Can store and process any file and data formats
Is within budget considerations
Offers less vendor risk and components with less stickiness (easily portable)
Data Store: The above requirements will help you narrow down possible vendors, for example: AWS Redshift or AWS RDS (if you are an AWS shop), AWS S3 + Hive/Presto layer, Snowflake, etc.
ETL/ELT: Next comes the ETL/ELT layer. With tens of on-premise and cloud options, along with the possibility of building in-house frameworks (using Python/Java), it is hard to narrow the field. Go back to your requirements. If your business needs something quick, however exciting it is to set up an in-house ETL/ELT framework, you may have to say no to on-premise tools and in-house frameworks. Many startups simply don't have the resources or time to build and maintain them. If you are a cloud-based startup with multiple systems to integrate quickly, you may consider a managed data pipeline platform. Look for solutions with little to no maintenance that come with pre-built connectors for hundreds of applications. The goal is to streamline this process and be able to set up integrations quickly.
BI Tools: For the BI layer, look at cloud solutions that offer maturity, richness in visualization, user choice, reasonable cost, and good in-house experience.
These choices made our analytics easier, faster, and more reliable; but, as People.ai continues in its growth stage, the complexity increases. There will always be a need to evolve our infrastructure (systems and processes), to adapt to the growing needs of the business. As it goes, the only thing that is constant is change! We hope sharing our story gives you a place to start and a framework for how to get your BI/Analytics function up and running in no time! ABOUT PEOPLE.AI People.ai accelerates enterprise growth through the power of AI. With the industry’s only Revenue Intelligence System, People.ai frees all customer-facing teams, including Sales, Marketing, and Customer Success, from manual data entry by automatically capturing all contact and customer activity data, dynamically updating CRM and other systems of record, and providing actionable intelligence across management tools to realize the full selling capacity of the enterprise.
https://medium.com/people-ai-engineering/bi-in-startups-8830044f24e1
['Chaitanya Mamdur']
2019-08-22 16:34:07.343000+00:00
['Big Data', 'Data Warehouse', 'Business Intelligence', 'Data Visualization']
Kubernetes, Istio and The World Outside Rapido
If you are running Kubernetes (k8s) clusters in production and security is of utmost importance to you, you have probably been at a crossroads choosing between a private and a public cluster. Most of the major cloud providers give these options via their managed Kubernetes service solutions, and all you need to do is choose one. And once you choose a private cluster, one of the immediate problems to tackle will be handling egress traffic (to the outside world). With a private cluster, all the nodes will only have a private IP, and all egress traffic will need to be routed through some kind of gateway that can talk to the internet. We were also at this crossroads and made the decision to use a GKE private cluster and allow internet access via a Cloud NAT. While this worked very well for us from a security perspective, it caused us some problems when it came to handling outbound traffic bursts, and we had to think about alternate solutions for handling this more efficiently. Since we were already using a service mesh within our k8s clusters, we started thinking about whether we could leverage it in some way to build a better solution for our problem. The final design we came up with was to use the service mesh to route traffic to a set of proxies with SSL pass-through, running on nodes outside the k8s cluster with external IP addresses, thereby bypassing Cloud NAT. We also had to ensure that nothing changed from an application perspective, like the URLs being configured, and that the proxies had failovers. The setup when we started off was very simple and is shown below.
Egress via NAT
There is a private k8s cluster and a Cloud NAT, which was set up to perform NAT on the primary address range of the subnet. This works well when the RPS is not that high (how high will have to be derived from the math stated below). Once you have high-RPS workloads in the cluster, the problem will slowly start to show up. We started seeing errors in application logs saying the connection was refused, and debugging further led us to the Cloud NAT logs, where we saw "connection dropped" errors being logged. An example of the NAT log entry is shown below. This prompted us to go back to the Cloud NAT specifications and look for the section where they explain how ports are allocated based on the external IP addresses (https://cloud.google.com/nat/docs/ports-and-addresses). At a high level, this is what it says: the number of ports allocated per node restricts the total number of simultaneous connections that can be established to one unique destination from that node. The destination is derived from IP address, port, and protocol.
The math we overlooked:
ports_per_vm = 64
total_nat_ip_address = 3
ports_per_nat_ip = 64512
total_vms = 100
Doesn't look bad, right? We need a total of 64 * 100 = 6400 ports, and we have much more than that here: 3 * 64512 = 193536. The issue was not the availability of the total allocatable ports in Cloud NAT; rather, it was the number of ports allocated per node. In this case, it's 64. This means we can only have 64 simultaneous connections to, let's say, https://example.com (assuming it resolves to one public IP) from that node. Now imagine a case where two pods are running on the same node and each has an RPS of 100 and needs to make an external call per request. This can lead to port exhaustion on that node and errors in the application. This is exactly what the Cloud NAT logs were telling us by saying connections were dropped. 
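As a quick sanity check on that math, a few lines of Python reproduce both the comfortable-looking aggregate and the per-node bottleneck; the numbers are the ones above.

ports_per_vm = 64
total_nat_ip_address = 3
ports_per_nat_ip = 64512
total_vms = 100

print(ports_per_vm * total_vms)                 # 6400 ports needed across all nodes
print(total_nat_ip_address * ports_per_nat_ip)  # 193536 ports available in aggregate

# The catch: each node gets only ports_per_vm ports per unique
# (destination IP, port, protocol) tuple, so two pods on one node
# calling the same external host at 100 RPS each can exhaust the 64 ports.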
GCP also mentions that they induce a 2-minute delay before the gateway can reuse the same source address and port for establishing a new connection. This only makes things worse. Check out the link for more details. So then, we decided to ensure there are enough ports allocated per node for handling the traffic bursts.
The new math:
ports_per_vm = 8192
ports_per_nat_ip = 64512
total_vms = 100
total_nat_ip_address = (ports_per_vm * total_vms) / ports_per_nat_ip = 8192 * 100 / 64512 = ~13
Whether you need so many ports per node is a question you will need to answer based on the traffic patterns you have. We did try with lower numbers and kept increasing in steps till we reached this value and stopped seeing the connection dropped errors. (This could have been due to the fact that we have our infrastructure spread across multiple networks and there are certain connections that have to be made through public IPs.) Yes, 13 addresses are not that bad. But we had two issues with this:
It became a bit cumbersome to get all these IPs whitelisted at the third-party network we had to connect to.
To satisfy the use case for a subset of the workloads, we were ensuring all nodes had the minimum required number of ports allocated. This means most of these ports were sitting idle, which is not an efficient way of utilizing resources.
To address these problems, we started thinking about a solution that would allow us to route our high-RPS external calls through a cluster of egress proxies, each with a public IP assigned to it. This would allow us to bypass the NAT, so we wouldn't have to worry about deciphering the right value for the ports_per_vm variable. This means we can keep the NAT IPs to a minimum and expand the egress cluster as and when needed. We could use Nginx as the egress proxy with SSL pass-through using the stream module, and not have to worry about the limit on simultaneous outbound connections any more.
Proposed Solution
Proposed Solution (With Istio)
Since all the calls are from within the k8s cluster equipped with Istio, we could use ServiceEntry, WorkloadEntry, DestinationRule, and VirtualService to configure Istio, and in turn Envoy, to perform the routing through the egress proxies in a reliable way. Let's understand briefly what each of these components of an Istio service mesh is responsible for:
A ServiceEntry allows you to add services outside the mesh into the service registry of Istio, thereby enabling traffic management to these services. We will create service entries for the hosts we are connecting to and the egress proxies, as both reside outside the mesh.
A WorkloadEntry, along with a ServiceEntry, allows you to configure Clusters in Envoy. A Cluster is nothing but a group of upstream targets to which traffic has to be routed based on certain match conditions. We will use workload entries to create a cluster for the egress proxy with multiple endpoints.
A DestinationRule allows you to configure what happens to the traffic for a given cluster. We will use destination rules to configure health checks and ejection for the egress proxy endpoints.
A VirtualService allows you to configure routes in Envoy. Routes allow us to specify the upstream cluster to which traffic has to be routed based on a set of conditions. We will use a virtual service to route the traffic to external hosts via the egress proxies. 
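Before walking through our concrete case, here is a minimal sketch of what the WorkloadEntry and ServiceEntry pair can look like; the resource names and hostnames are illustrative, not our production config, and the DestinationRule and VirtualService are written in the same resource style.

# One WorkloadEntry per egress proxy VM; the label ties it to the ServiceEntry.
apiVersion: networking.istio.io/v1beta1
kind: WorkloadEntry
metadata:
  name: egress-proxy-1
spec:
  address: egress-1.mydomain.internal
  labels:
    app: egress-proxies
---
# The ServiceEntry groups the proxies into one logical upstream cluster.
apiVersion: networking.istio.io/v1beta1
kind: ServiceEntry
metadata:
  name: egress-proxies
spec:
  hosts:
    - egress.mydomain.internal
  location: MESH_EXTERNAL
  resolution: DNS   # hostnames as endpoint addresses require DNS resolution
  ports:
    - number: 443
      name: tls
      protocol: TLS
  workloadSelector:
    labels:
      app: egress-proxies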
In our case, let’s say the external API we are calling is https://example.com and the egress proxies are egress-1.mydomain.internal & egress-2.mydomain.internal, what we need to tell envoy is that : If you see an SNI named example.com in the request, please forward it to one of the healthy instances among egress-1.mydomain.internal or egress-2.mydomain.internal. The details of the service entries, workload entries, destination rules and virtual services for achieving the above result is shown below : Workload entries per egress proxy and tagged with a label (app:egress-proxies) Service entry with a workload selector targeting the two workload entries created in the above step. Destination rule instructing Envoy to eject endpoints that have crossed the threshold of failures within a given period of time. You can play around with the outlier detection configuration to match your needs. Virtual service instructing Envoy to route the traffic to example.com via the egress proxy. Once the above setup is done you should have the traffic to example.com routed via the egress proxies. To verify all is good, you could run the sleep deployment in Istio docs and do a curl request and check the Istio proxy logs as mentioned there. You should see the routing being done to the clusters we created and not to the PassthroughCluster (a virtual cluster to which all traffic which Istio doesn’t care about is routed). You could verify the Istio configuration using these commands : ## Verify the listeners istioctl proxy-config listeners <pod-name> --port 443 --address 0.0.0.0 -o json | jq ## Output filterChainMatch": { "serverNames": [ "example.com" ] ...... } Routing rule when matched : "name": "envoy.tcp_proxy", "typedConfig": { " "statPrefix": "outbound|443||egress.mydomain.internal", "cluster": "outbound|443||egress.mydomain.internal", ..... "name": "envoy.tcp_proxy","typedConfig": { @type ": "type.googleapis.com/envoy.config.filter.network.tcp_proxy.v2.TcpProxy","statPrefix": "outbound|443||egress.mydomain.internal",..... ## Verify the cluster and it's endpoints istioctl proxy-config clusters <pod-name> --fqdn "outbound|443||egress.mydomain.internal" -o json | jq ## Output { "name": "outbound|443||egress.mydomain.internal", ..... "clusterName": "outbound|443||egress.mydomain.internal", "endpoints": [ { "locality": {}, "lbEndpoints": [ { "endpoint": { "address": { "socketAddress": { "address": "egress-1.mydomain.internal", "portValue": 443 } } }, ..... }, { "endpoint": { "address": { "socketAddress": { "address": "egress-2.mydomain.internal", "portValue": 443 } } }, .... } } If you want to test the ejection of unhealthy endpoints, you could kill one of the egress servers, keep firing requests and watch the output of endpoints command to see the endpoint is marked as unhealthy. watch -n 1 'istioctl -n services proxy-config endpoints <pod-name> --cluster "outbound|443||egress.mydomain.internal"' ### You should see something like this : ENDPOINT STATUS OUTLIER CHECK CLUSTER x.x.x.x HEALTHY OK outbound|443||egress.mydomain.internal x.x.x.x HEALTHY Fail outbound|443||egress.mydomain.internal A gist of the nginx config we used is mentioned here for your reference. Note the use of the stream module. We are using nginx 1.17.9 and the stream module is enabled by default in this version. We have come across some older versions where it’s not enabled by default. 
stream {
    server {
        listen 443;
        ssl_preread on;
        proxy_pass $ssl_preread_server_name:$server_port;
    }
}
Since these egress proxies are running on nodes outside the k8s cluster, we do not have an auto-scaling solution today. Autoscaling groups could be a potential solution, or running a small separate public k8s cluster to host the egress proxies might also be an option (subject to how the dynamic external IP addresses are handled, since that can cause issues while whitelisting). We are spiking on these and will keep publishing our findings. In summary, service entries and workload entries allow us to add services outside the mesh into the service registry of Istio and have additional traffic management configuration imposed on them via virtual services and destination rules. We used these to add the hosts of the third-party services we talk to, and the egress proxies, into the service registry, and we configured the traffic to be routed via these proxies. This allowed us to bypass Cloud NAT, as these proxies have their own public IPs, and thereby not worry about optimizing ports allocated per node to suit our workloads. — We are always looking out for passionate people to join our Engineering team in Bangalore. Check out the link for open roles: https://bit.ly/2V08LNc
https://medium.com/rapido-labs/kubernetes-istio-and-the-world-outside-rapido-75da3666db4a
['Sree Rajan']
2020-08-29 12:02:27.445000+00:00
['Engineering', 'Cloud Nat', 'Google Cloud Platform', 'Kubernetes', 'Istio']
6 Things to Know to Get Started With Python Data Classes
3. Equality/Inequality Comparisons Besides the initialization and representation methods, the dataclass decorator also implements the comparison-related functionality for us. We know that for a regular custom class, we can't have meaningful comparisons between instances if we don't define the comparison behaviors. Consider the following custom class that doesn't use the dataclass decorator. Equality Comparisons As shown above, with a regular class, two instances with the same values for all attributes are evaluated to be unequal, because custom class instances are compared by their identities by default. In this case, these two instances are two distinct objects, and they're deemed to be unequal. However, with a data class, such an equality comparison evaluates to True. This is because the dataclass decorator also automatically generates the __eq__ special method for us. Specifically, the equality comparison is conducted as if each of these instances were a tuple containing the fields in the order they are defined. Because the two data class instances have fields of the same values, they're considered equal. How about inequality comparisons, such as greater than and less than? They're also possible with the dataclass decorator, by specifying the order parameter for the decorator, as shown below in Line 1. Inequality Comparisons Similar to the equality comparisons, data class instances are compared as if they were tuples of these fields, and they're compared lexicographically. For a proof of concept, the above code only includes two fields, and as you can see, the comparison results are based on the tuples' order.
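Since the "Equality Comparisons" and "Inequality Comparisons" code gists referenced above don't render here, the sketch below reconstructs the behavior the section describes; the class and field names are my own illustrations.

from dataclasses import dataclass

class PlainPoint:
    def __init__(self, x, y):
        self.x = x
        self.y = y

@dataclass(order=True)  # order=True also generates __lt__, __le__, __gt__, __ge__
class Point:
    x: int
    y: int

# A regular class compares by identity: equal values, still unequal objects.
print(PlainPoint(1, 2) == PlainPoint(1, 2))  # False

# A data class compares field by field, as if each instance were the tuple (x, y).
print(Point(1, 2) == Point(1, 2))  # True
print(Point(1, 2) < Point(1, 3))   # True: (1, 2) < (1, 3) lexicographically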
https://medium.com/better-programming/6-things-to-know-to-get-started-with-python-data-classes-c795bf7e0a74
['Yong Cui']
2020-11-18 17:07:40.546000+00:00
['Data Science', 'Python', 'Software Development', 'Programming', 'Artificial Intelligence']
Starting Stats: Day 1
Age: 44
Height: 5'4"
Weight: 155 lbs
How my body feels: I feel pretty good. Minor headache, fuzzy head. It's not a real headache — I'm just not clear-headed. Does that make sense?
My emotional state: I am excited to get this project started because I know a system like this is good for me. I am motivated to do THIS project more than just the ketosis journey. I'm getting ready to catch a flight out this afternoon and start travelling most of the week, so that means I will have to be resourceful in order to work the plan. So, I'm a little anxious. I don't want to eff it up right at the beginning.
What I can celebrate: STARTING. That's the first step. It matters.
https://medium.com/my-keto-story/starting-stats-day-1-ec05e37d77f5
['Mary Lucus-Flannery']
2016-10-16 16:20:38.871000+00:00
['Storytelling', 'Journey', 'Weight Loss', 'Health', 'Ketosis']
Phase IV: Narrowing the Context
I now know that I want to work on something related to making the workings of algorithm models more transparent and fair — below is a short list of specific contexts that I'll explore.
In the field of e-commerce, on sites such as Amazon or Staples, there are price tags that fluctuate according to the user's geolocation, neighborhood, and other personal data, to try to push for the maximum amount of $$ that the user might be willing to pay. It could be interesting to explore the socioeconomic or gender variables within this context.
Most job searches are now conducted online — i.e. Google. It could be interesting to test just how much of an impact the personal data that Google has stored about me (age, gender, socioeconomic status, etc.) has on the search results. Do men's and women's job searches show different results?
In the context of social media platforms, does everyone's newsfeed differ according to how they interact? How much do they differ?
Points of Interest
Transparency in the algorithm models
Agency & control by the users
Fairness
Building back trust between the user & the company
Possible Method
1. Categorization of a user's individual data into two groups: 're-usable' and 'not re-usable' by third parties. If there's a way to aggregate the types of data we're mined for and to categorize them to figure out what purpose each of them serves for the companies versus the users, maybe we could develop a system where a <digital tool> could inform the user what the possible consequences are, how likely they are to happen (on a scale system), and the possible gains on the user's part (curated content, etc.), so that the user can control what to reveal and what to withhold. ALSO to set limitations on how long that data, if the user chooses to share it, can be used for. The user has transparency over what the possible ramifications are. The user has agency over what data about themselves to release or not. Somewhat utopian and not immediately commercially applicable or lucrative for companies, but companies on moral grounds need to be open to audits (they hold so much power), and it'll build trust back into the relationship between users and big corps. This data (that's aggregated) would need to be constantly updated, but I imagine the categorization system would remain somewhat steady.
2. A digital you that you're the sole owner of and that others can access only on your terms. What if there's a digital "me", perhaps multiples of it, that I can consciously choose to present depending on different situations? Help the user curate the right kind of model(s) for themselves. In Master Algorithm, Pedro Domingos talks about a personal Digital Bank that stores personal data, anonymizes that information for the user, then gives the user control over what aspects of the data should be used and how, when interacting with third-party sites such as Facebook and Google.
Potential Contexts
Staples.com product search, and its fluctuating prices; flight search depending on the country of origin and country of search, and the fluctuating prices; job searches displaying different results depending on the user's gender, age, or socio-economic status
Context: Google search algorithm
"Gender distributions of specific occupations were unrepresentative of the gender distributions in the real world and that search can influence people's perception of gender distribution in reality." I'm seeing two different problems here: 1. the search displaying biased or 'unrepresentative' results, and 2. 
the user not having control over what kind of data is pulled, and where from. As much as I’d like to tackle both problems, the first is a much, much larger problem: it requires far more technical skill than I have, and it is not at a scale I alone could work on. We can’t dig into the question of bias and discrimination without first setting benchmarks on what is fair, when something should be considered biased, or what neutrality even means. I’ll be focusing on the latter; if I can’t immediately impact the first, I think it’ll be important in the near future to enable the actual users to see what is going on (transparency and agency). So that even though users might not be able to control what they are being shown, at least they have a say in what kind of data they would like to store in search engines’ databases, so that they can consciously create their own models (ref. The Master Algorithm: “what mental model do you want the algorithm to have of you?”). And perhaps with options to create multiples of it (personas for different purposes), you could a. open up the world of possibilities (by searching for the same thing with different personas, a.k.a. models, to yield different results), and b. have both agency and fairness, as well as trust in the company.

Open questions: What else could this system do for the user? For the companies? What other utility would this serve? What would I need to do as a designer? What would the design process be? How would I go about researching this? How would this work in social media? User-to-one-other-company (i.e., Google search) might be plausible because you’re just setting the reusability status for one other party. What about Facebook? When you first register, how would the information you’re inputting (birthdate, gender, occupation, etc.) get displayed for Facebook itself and your friends (or any other people)? As a platform/company learns more and more about you, can it provide better or more curated, premium services?

Inspirations: https://www.ghostery.com/ http://www.businessinsider.com/datawallet-lets-you-sell-your-data-2016-6 https://idcubed.org/ http://citizenme.com/ https://www.qiyfoundation.org/about-qiy/ http://dataprivacylab.org/projects/onlineads/ http://jots.pub/index.html https://webtap.princeton.edu/
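The ‘re-usable’ vs. ‘not re-usable’ categorization above is concrete enough to sketch in code. To be clear, the original post contains no implementation; the following is a minimal illustration of my own, and every name in it (DataPermission, consequence_likelihood, expires_after_days, and so on) is a hypothetical label for an idea the post describes: the two-way reusability split, the likelihood scale, and the time limit on shared data.

```python
from dataclasses import dataclass
from datetime import date, timedelta
from typing import Optional

@dataclass
class DataPermission:
    """One category of mined personal data and the user's sharing terms.

    All field names are illustrative, not from the original post.
    """
    category: str                   # e.g. "geolocation", "search history"
    purpose_for_company: str        # what the company gains from it
    gain_for_user: str              # what the user gains (curated content, etc.)
    consequence_likelihood: int     # 1 (unlikely) .. 5 (near-certain): the "scale system"
    reusable_by_third_parties: bool
    shared_on: Optional[date] = None
    expires_after_days: Optional[int] = None  # the time limit on use

    def is_active(self, today: date) -> bool:
        """A permission lapses once its time limit runs out."""
        if self.shared_on is None:
            return False  # never shared at all
        if self.expires_after_days is None:
            return True   # shared with no expiry
        return today <= self.shared_on + timedelta(days=self.expires_after_days)

# The <digital tool> could then answer: "what am I currently revealing?"
profile = [
    DataPermission("geolocation", "dynamic pricing", "local results", 4,
                   reusable_by_third_parties=False,
                   shared_on=date(2016, 11, 1), expires_after_days=30),
    DataPermission("search history", "ad targeting", "curated content", 5,
                   reusable_by_third_parties=True,
                   shared_on=date(2016, 10, 1), expires_after_days=90),
]
today = date(2016, 12, 4)
for p in profile:
    status = "active" if p.is_active(today) else "expired or withheld"
    print(f"{p.category}: {status}, third-party reuse={p.reusable_by_third_parties}")
```

The point of the sketch is that once each category of mined data carries its own terms, the <digital tool> the post imagines reduces to simple queries over these records.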
https://medium.com/breaking-out-of-filter-bubbles/phase-iv-narrowing-the-context-a6c126770e82
['Min Kim']
2016-12-04 14:31:53.019000+00:00
['Machine Learning', 'Design', 'Algorithms', 'Artificial Intelligence']
Labeling a Food “Plant-Based” and What it Really Means
Labeling a Food “Plant-Based” and What it Really Means This reminds me of the “made with real fruit” scam that advertisers used to feed to us Photo by Tamanna Rumee I think by now we should all realize that commercials, advertisements, and the labels on our food are trying to trick us. We should know this, yet we don’t. Why? Jingles, loyalty rewards, ads, commercials, lies, repetition. I digress. As the plant-based diet becomes noticeably more popular, as a vegan, I have observed many companies switching their strategies. They are beginning to change the way they market their products to us by using little plant-based logos at the top of their packages or by actually printing on the package that the product is plant-based. This is done even when their product is not completely plant-based. I hate to say this, I really do, but they have every right to do so. There are no great laws governing our food that say they cannot. The system was built to fool and confuse us. Trickery in advertising and labels is not frowned upon, for some reason. So, as long as there is some form of plant in the food, it can be called plant-based. Just like Sunny D isn’t really orange juice; it only has 2% concentrated OJ in it. It’s just sugar water. Actually, high-fructose corn syrup water. They just put a big picture of a cut orange on the label to lure us in. It works, too. Please don’t drink that. Now that I think about it, Sunny D is now labeled as a fruit drink and not a juice, but I do remember a time as a kid of the eighties when that was not the case. It does seem like the powers that be have made this juice/drink thing right. Progress can be slow. Getting back to the topic, all of these illusions can be tricky for people who are actually vegan or only eat plant-based foods. Many of these products that have a plant-based logo on the front have ingredients like eggs and milk in them. Again, technically these products are still plant-based in advertising speak. You have to remember to always check the ingredients. I wish there were an easier way to get this information, or that we could just trust what a company says, but no, almost always no. When I come across a new product I am interested in, I always ask it, “What are you trying to hide from me?” So get out your phone, set it to your favorite camera app, and use it as a magnifying glass to read those tiny little ingredient labels. I personally find no shame in this and hope you will not either. When I finally do zoom in on that ingredient list, I find that milk powder is the one I run into most often. This usually happens in cereals and in snacks that have some sort of seasoning. In fact, if you plan on buying any kind of flavored chips or snacks, there is really only one thing to do. That is right, check the label before buying. Doritos, for instance, have cheese, milk, buttermilk, whey, and other milk-based ingredients in them. What! That is a no-go for me. I’ve been Dorito-free for three years now and couldn’t care less. Now, when I buy chips, they are baked and have simple seasonings or are just plain. I know I can make something at home that will make these chips taste amazing. Many loaves of bread are out to fool you as well. They can contain milk, butter, eggs, or lard. Noodles and pasta have the habit of including eggs in the recipe. Orange juice that is enriched with omega-3 is another one. The omega-3 usually comes from fish oil. OJ and fish oil don’t seem like a particularly amazing combination, yet it does exist.
Actually, anything enriched with omega-3 is suspect because of this. I will say it one more time: check the ingredients. I have come up with this fictional scenario to share with you. Again, fictional. Let’s say I really want to make some money off of stupid people: people who believe everything they hear on Facebook, or just on the internet and in advertising in general. How can I profit from this? First, I have to create a company, which is very easy to do. A product? I realize most people love burgers, not just stupid people, so I want to create a plant-based burger that tastes and feels like a real beef burger. The goal is to get in on this blitz of new plant-based meat alternatives entering the market and take advantage of people who do not read labels. I hire a research and development crew and let them know what I want. They play around in the kitchen and come up with something they believe is going to be the next best thing. This is the recipe they come up with. For every eight-ounce burger, we use eight ounces of ground beef (cow), 1/2 teaspoon pea protein, 1/4 teaspoon beet powder. Contains less than 2% of the following: milk powder, soy sauce, whey, natural seasoning, liquid smoke, and dehydrated egg protein. Abracadabra, our new plant-based burger is born! Now it is time to take this burger to the market. On the label, I will make sure our advertising team puts plant-based and all-natural in big bold words. The package will be green and white and have plants all along the border. The commercial would say something like, “Grills and sizzles just like a real burger, but made with plants.” Would it sell? Probably. This would all be legal for me to do, I think. Maybe I would need to add some onions or peppers in there. I don’t really want to look up the laws of what it takes to call a food plant-based. I don’t know the percentages. Perhaps that can be a future project. I think you get my point by now. What we all need to take away from this is that our health is not what these companies are thinking about when they design the labels on their products or choose the ingredients they put in the products themselves. Money is all these companies care about. The cheaper they make the product, the more money they make. After all, the stock price must go higher. It must. Hopefully, someday this will change, but this is where we are now. From now on, when you are picking up a product, I encourage you to ask it this: “What are you hiding from me?”
https://medium.com/in-fitness-and-in-health/labeling-a-food-plant-based-and-what-it-really-means-100988d25473
['Dan Stout']
2020-12-26 16:38:01.176000+00:00
['Health', 'Advertising', 'Vegan', 'Food', 'Life']
Why We Can't Ease Up on Social Distancing
Hold the Line. ‘This virus is unforgiving to unwise choices’ Photo: Boston Globe/Getty Images As an infectious disease epidemiologist (albeit a junior one), I feel morally obligated to provide information on what we are seeing from a transmission dynamic perspective and how it applies to the social distancing measures. Like any good scientist, I have noticed two things that are either not well articulated or not present in the “literature” of online media. I have also relied on other infectious disease epidemiologists for peer review of this piece. Specifically, I want to make two aspects of these distancing measures very clear and unambiguous. First, we are in the very infancy of this epidemic’s trajectory. That means that even with these measures in place, we will see cases and deaths continue to rise globally, nationally, and in our own communities. This may lead some to think that the social distancing measures are not working. They are. They may feel futile. They aren’t. You will feel discouraged. You should. This is normal in chaos. This is the normal epidemic trajectory. Stay calm. The enemy we are facing is very good at what it does; we are not failing. We need everyone to hold the line as the epidemic inevitably gets worse. This is not an opinion. This is the unforgiving math of epidemics, which my colleagues and I have dedicated our lives to understanding with great nuance, and this disease is no exception. Stay strong and in solidarity knowing that what you are doing is saving lives, even as people continue getting sick and dying. You may feel like giving in. Don’t. Second, although social distancing measures have been (at least temporarily) well received, there is an obvious-but-overlooked phenomenon when considering groups (i.e., households) in transmission dynamics. While social distancing decreases contact with members of society, it of course increases contact within a group (i.e., a family). This small and obvious fact has surprisingly profound implications for disease transmission dynamics. The basic mechanics of this mathematical principle dictate that even if there is only a little bit of additional connection between groups (i.e., social dinners, playdates, unnecessary trips to the store, etc.), the epidemic likely won’t be much different than if there were no measure in place. The same underlying fundamentals of disease transmission apply, and the result is that the community is left with all of the social and economic disruption but very little public health benefit. You should perceive your entire family to function as a single individual unit: If one person puts themselves at risk, everyone in the unit is at risk. Seemingly small social chains get large and complex with alarming speed. If your son visits his girlfriend, and you later sneak over for coffee with a neighbor, your neighbor is now connected to the infected office worker that your son’s girlfriend’s mother shook hands with. This sounds silly, but it’s not. This is not a joke or a hypothetical. We as epidemiologists see it borne out in the data time and time again. Conversely, any break in that chain breaks disease transmission along that chain. In contrast to hand-washing and other personal measures, social distancing measures are not about individuals; they are about societies working in unison.
These measures also require sustained action before results are evident. It is hard (even for me) to conceptualize how, on a population level, ‘one quick little get-together’ can undermine the entire framework of a public health intervention, but it can. I promise you it can. I promise. I promise. I promise. You can’t cheat it. People are already itching to cheat on the social distancing precautions just a “little”: a short playdate, a quick haircut, or picking up a needless item from the store. From a transmission dynamics standpoint, this very quickly recreates a highly connected social network that undermines much of the good work our communities have done thus far. This outbreak will not be overcome in one grand, sweeping gesture, but rather by the collection of individual choices we make in the coming months. This virus is unforgiving to unwise choices. As this epidemic continues, it will be easy to be drawn to the idea that what we are doing isn’t working, and we may feel compelled to “cheat” with unnecessary breaches of social distancing measures. By knowing what to expect, and knowing the critical importance of maintaining these measures, my hope is to encourage continued community spirit and strategizing to persevere in this time of uncertainty.
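The author’s point that a few between-household contacts can undo the benefit of distancing is a standard result from network epidemiology, and a toy simulation makes it visible. The article itself contains no code or model parameters; the sketch below is my own illustration with made-up numbers (household size, transmission probability, breach counts), not the author’s analysis.

```python
import random
from collections import defaultdict

def simulate(n_households=2000, household_size=4, breaches=0,
             p_transmit=0.5, seed=0):
    """Toy chain-of-transmission model. Under strict distancing, people
    contact only their own household; each `breach` adds one random
    between-household contact (a playdate, a dinner, a store trip)."""
    rng = random.Random(seed)
    n = n_households * household_size
    contacts = defaultdict(set)
    # Within-household contacts are always present.
    for h in range(n_households):
        members = range(h * household_size, (h + 1) * household_size)
        for a in members:
            for b in members:
                if a != b:
                    contacts[a].add(b)
    # A few "cheats" reconnect otherwise isolated households.
    for _ in range(breaches):
        a, b = rng.sample(range(n), 2)
        contacts[a].add(b)
        contacts[b].add(a)
    # Spread from one seed case; each contact transmits with p_transmit.
    infected = {rng.randrange(n)}
    frontier = list(infected)
    while frontier:
        nxt = []
        for p in frontier:
            for q in contacts[p]:
                if q not in infected and rng.random() < p_transmit:
                    infected.add(q)
                    nxt.append(q)
        frontier = nxt
    return len(infected)

for breaches in (0, 1000, 4000):
    sizes = [simulate(breaches=breaches, seed=s) for s in range(5)]
    print(breaches, "breaches -> mean outbreak size", sum(sizes) / len(sizes))
```

In typical runs, the zero-breach case stays confined to a single household, while a few thousand scattered “cheats” across 2,000 households are enough to let one seed case reach a large fraction of the population: exactly the all-of-the-disruption, little-of-the-benefit outcome described above.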
https://elemental.medium.com/hold-the-line-17231c48ff17
['Jonathan Smith']
2020-03-28 18:18:08.189000+00:00
['Health', 'Public Health', 'Social Distancing', 'Coronavirus', 'Covid 19']
The Hallmarks of Successful Graduate Software Engineers
The Application Cover letter I am sure you have heard this over and over again, “Make sure you have a cover letter”, but let me expand on it. First, when it comes to applying for a position, there are a lot of applicants I would regard as spammers. These are clearly people who are going for quantity over quality. Typical giveaways from these individuals are: no cover letter; a cover letter addressed to another company; a generic cover letter full of “stats and facts.” For instance, “I had a sales increase of 300%,” when the job isn’t sales! What to look for in a cover letter You introduce yourself as a human being, right from the get-go. “My name is X and I am passionate about Y”. I have now connected your name with a personality. It really doesn’t matter what Y is, either. But now I know that you care about something. If there is some connection from your passion to the company you’re applying to, you’re getting a head start before I’ve even looked at your CV. The cover letter should show why you want to work for the company, based on what you’re interested in and what the company has to offer. From there, touch on personal interest areas (even outside of technicals), side projects, and university paper experiences: “I did a Data Science 101 paper at university and fell in love with it”. Basically, your cover letter shouldn’t sell your skillset as a commodity; rather, it should showcase your personality and show you as a human being. The resume The resume can be interesting. One thing to understand as an applicant is that there is a large likelihood your resume will look like everyone else’s if you only add your education. The problem with universities is that they ultimately make cookie-cutter copies of their alumni. Sure, there may be variation in grades, in specific papers or majors, but the reality is, they’re much the same. What to look for in a resume An introduction. This should be different from that of the cover letter. It’s more generic and mainly gives the generic facts about yourself in short sentences: “Throughout my time at university, I have had a growing desire to work in the field of Computer Science; my time at university has taught me the importance of time management and the necessity to discuss and consider solutions before running into them head-on.” Appropriate skills Don’t list papers ranked by grade. If the company you are applying to is heavily into data science, put down a couple of relevant papers or projects associated with that, but don’t go too heavy. Projects, Projects, Projects This is the big one. This might be the single most significant thing I look for and potentially the biggest hallmark of successful software graduates. If you want to instantly put yourself above other candidates, do quality projects. Open a free account on GitHub and get active. If you can show multiple projects under a single “active” GitHub account, this will allow the business to explore areas of what you have done, which you may not even realise are significant. What’s a “quality” project? A quality project is one which is obviously not a tutorial. The number of React Todo lists I’ve seen boggles the mind. While these may show a willingness to learn a language and do tutorials, the reality is that all they show is that you followed instructions. Extending tutorials is good. Going off-road and doing something with that skill to try something new and interesting is gold.
The best quality project I came across was a graduate software engineer applicant who was passionate about Te Reo Māori (the indigenous language of Aotearoa, New Zealand). They had used the Google Maps API to rename all of New Zealand’s street names to their Te Reo Māori equivalents. In reality, this wasn’t necessarily groundbreaking work, but from that project alone I got more information about the candidate than from any other material they provided. Let me explain. Firstly, I’m from New Zealand. Te Reo is one of our three officially recognised languages, although it isn’t translated on Google Maps. The fact that someone took the time to do something like that, using a newly learnt skill to do so and to raise awareness along the way, was truly special. It spoke to the type of person they were more than any other project I’ve seen.
https://medium.com/better-programming/the-hallmarks-of-successful-graduate-software-engineers-e1e771d034c1
['Lindsay Jopson']
2020-09-01 11:39:47.735000+00:00
['Software Engineering', 'Startup', 'Software Development', 'Life Lessons', 'Programming']
Conversations with Creatives — National Best Selling Author, Jennifer Pastiloff
Conversations with Creatives — National Best Selling Author, Jennifer Pastiloff My newly discovered cousin Source: selfie by the author — The first time I met my cuzzies, Jen & Charlie — May 8, 2019 I recently found out that I had a family member I never knew about. Another cousin Facebook messaged me about Jen in 2019 and said that she lived not far from me. Not only that, but she’s also a writer, a yogi, and around my age. Why didn’t we know about each other sooner? Something about the family name being changed at Ellis Island, using an “a” instead of an “o.” That one tiny letter kept us from knowing one another. Until now. After we met, I took her best-selling book, On Being Human, with me on vacation. On the beaches of Rimini, Italy, I listened to the audiobook that she recorded. I got to know my newly discovered cousin for the first time, through her own voice, literally. And I loved it. Here’s our conversation about creativity. I interviewed Jen via IG Live — source: screenshot by author Annie: What does creativity mean to you, and do you feel that you’ve always been in touch with it? Jen: Good question… when I’m feeling creative, it’s when I feel connected. And it could be connected deeply to myself, to someone else, to an idea. I know the difference between being connected and disconnected because I’ve spent a lot of my life being disconnected. Creativity to me also means being uninhibited. In traditional ways, no, I haven’t always been touched by creativity, but in other ways, yes. Creativity to me is being deeply connected to my imagination. Annie: You’ve been writing since you were a kid, same as me. So do you feel like you’ve always had this creativity in you from the get-go? Jen: Yes, from the get-go. Before my dad died I was writing stories, when I was 6, 7, 8… then I felt like there was so much pressure because everyone was like, “Jennifer is going to be a writer.” So as I got older, like in high school, I got resentful, like, “Don’t tell me what I’m going to do.” I had studied poetry at NYU; my mom suggested I write for TV, and I was like, “No, I want to be an actor, stop telling me what to do.” Annie: But did you love writing? Was it just the fact that people were telling you to do it that was annoying you? You rebelled against it, even though you had a passion for writing, clearly. Jen: Yeah, yes, yes… part of it was yes, I hate being told what to do even though I crave it. The other part was ego — there was a real thing with wanting to be an actor, even though I really didn’t want it. Wayne Dyer used to talk about this: does it feel natural to you? And can you imagine it? And this is the truth that I can tell you now, in my body now: I could never, all through my 20s, I could never visualize myself on set. And if I’m honest about that, this life I have now is so easy for me to visualize. I could always visualize myself connecting. I didn’t know what it would look like, but that should have been a red flag for me, “oh, I want to be an actor,” even though I was just waiting for someone to discover me at the restaurant. Annie: The actor thing for you, was it more that you wanted the idea of what that would look like and not the actual thing? Jen: Yeah, it was fame and, losing my dad so young, I craved love and that feeling of “want me, want me, want me.” Annie: So, the acting was more about getting attention, being seen… and the rest? Jen: Well, I have impostor syndrome with creativity because I’m not an artist, in the way you are, or the traditional way.
So sometimes I’m like, “Oh, I’m not creative.” But that’s a bullshit story. Because being creative is being deeply connected to your imagination and not letting my inner asshole say, “that’s stupid” or “that’s cliché” or, “other people have done that.” Annie: Right. You know, for me, the visual artist part of me is fairly recent. My entire life I’ve been an actor and writer, and my feeling about creativity is that we are ALL born creative and it comes out in all the ways that we are. In our daily life, how we behave, in how we talk to one another, in how we dress, cook, in our… everything. Jen: Yeah, I totally agree. Annie: And in whatever we make. Jen: Yeah, just make it. Whatever it is. Annie: Exactly. Jen holds up my artwork… Jen: Look at this, I love this. Annie: Thank you. Jen: It reminds me of my Beauty Hunting. Annie: It’s funny because you talk about beauty hunting and I used to say FTB, Find the Beauty. Jen: I almost named the book Beauty Hunting but went with On Being Human. Annie: Switching gears… I’m curious about your hearing loss, and I know you read lips. How did you learn to read lips? Jen: I’m not completely deaf with a capital D; I can hear sounds with the hearing aids, but it all sounds very muffled, so I supplement by reading lips. I can’t watch things without subtitles. I learned to read lips at The Newsroom (a restaurant in Los Angeles where Jen was a waitress for several years). My hearing loss kept getting worse, and it was survival that got me to read lips. I didn’t even know that I was doing it. Annie: You learned because you needed to learn, so it just happened over time? Jen: Yeah. I actually think a lot of people, now that we are living in a masked world, are realizing how much they were relying on reading lips. Annie: Interesting, even without hearing loss? Jen: Yes. Jen on the beach. Source: Jen’s Instagram Annie: You’re working on a new book now. Jen: Well, “working.” (Jen makes air quotes.) Annie: No? Jen: Well, I haven’t been writing. I haven’t written since my book. Annie: Oh. Jen: Yeah, it’s really bumming me out. I feel bad about myself, so I can’t speak about writing currently. Annie: Okay, well, when you do write, what’s your ideal environment like? Do you need to shut things out? Jen: In a perfect world, yes, I would shut everything out because that’s what I need. I cannot multitask, I cannot have music on, cannot have anyone talk to me. I’m fascinated by people who have music on or TV or sound… Annie: … Yeah, I have to have some kind of sound around me. Jen: I would clench my butthole the whole time, I need SILENCE. Annie: And about not writing, maybe it’s because you’ve been busy lately promoting your book. Jen: I mean, yeah, at the beginning of the pandemic I lost everything, all my in-person events. I reinvented myself and started doing everything online: coaching, a podcast we started, my retreats turned into virtual retreats. And I have a 4-year-old, so… I haven’t been able to write. Annie: What do you need to do to make that happen? Jen: Well, it’s interesting you say that because I’m going to go on my calendar today and book out time. I need to block it out. Annie: Do you have a deadline? Deadlines are huge. Jen: I do better when other people give me one. Annie: Okay, I’ll give you a deadline. Jen: Okay. Annie: Well, I don’t know how long it usually takes you to write, so I’ll have to get back to you with a deadline. (Time has passed since our IG Live, so I’ve had time to come up with a deadline.
Jen, your rough first draft is due June 15, 2021. I will text you. You’re welcome.) Annie: Your book started out as a blog first, is that right? Jen: Well, 12 years ago I wanted to write a book, and really I was writing essays and keeping them on my computer. But I started a blog, “The Manifest-Station,” where I’d post a lot. It’s still there, but I don’t write on it. I had workshops called “On Being Human,” and I had an agent who found me online a long time ago because I had an online presence. Annie: Tell me about your new podcast. Jen: It’s called What Are You Bringing? We’re best friends, and we talk about what you’re bringing to life. It’s a virtual barbecue. Annie: Will there be food? Jen: Well, no. Annie: Okay, I’ll still come. Jen: I do it with my friend Alicia Easter; it’s great. Annie: I will check it out for sure. So, let’s talk advice. What is some advice someone gave you that really stuck with you? Jen: The one that popped into my head just now is Wayne Dyer, “Don’t die with your music still in you.” Annie: I love that one too. Jen: Yeah, I’ve been thinking about that a lot, being intentional with how I want to live my life. How to allow myself to be self-expressed. Annie: Don’t you feel you do that? Jen: I do, but there are still so many ways that I don’t. Like, why am I getting in my way with this next book and with my shame loss course? It’s me getting in my own way. There’s still music I’m not letting out yet. Annie: Do you feel successful? Jen: Absolutely, yeah, I do. Financially, I’d love to be more successful. I would love to own a home, to have a bigger place. The way I personally define success is this: when I lay my head down at night, I get to say I told the truth today. So, as far as that goes, do I feel successful… yes. Period. Annie: Good. I love that. Yes. Period. Such a perfect ending to this chat. Thanks, cuz! 💌 Sign up for Annie’s monthly newsletter, What I Liked, Wrote & Drew
https://medium.com/the-innovation/conversations-with-creatives-national-best-selling-author-jennifer-pastiloff-d31031944a21
[]
2020-12-18 19:32:44.938000+00:00
['Self Improvement', 'Writing', 'Writers On Writing', 'Interview', 'Creativity']
Remembering American Songwriter Legend John Prine
America, we have lost a true legend and treasure. I was so sad to hear about the passing of John Prine, whose music I have enjoyed over the years. When I heard the news that he had contracted COVID-19, I was among the many people wishing that he could beat it. Courtesy of John Prine Originally from Maywood, Illinois, John Prine learned to play guitar at age 14. He got his start in classes at the Old Town School of Folk Music in Chicago. Not many know this, but John Prine was an army veteran who served in Germany in 1966. After serving in the army, he moved to Chicago in the late 1960s and worked for the US Postal Service while writing songs as a hobby. The rest is history. Roger Ebert, the famous film critic, had this to say about the late songwriter legend in his 1970 Chicago Sun-Times article. “He sings rather quietly, and his guitar work is good, but he doesn’t show off. He starts slow. But after a song or two, even the drunks in the room begin to listen to his lyrics. And then he has you.” Here are my top 3 songs by the late legend: That’s the Way the World Goes Round I remember hearing this song in the early 90s at the age of 8 and falling in love with his music. The simplicity and bare emotion of this song stuck with me at a young age. The flute harmony and the shuffling beat add a nice touch. I also remember thinking that the lyric “Naked as the eyes of a clown” was the funniest thing an 8-year-old could hear at the time. Angel From Montgomery This song comes from his self-titled 1971 record and is one of my favorites. As a musician/keyboardist, I love the groovy and straightforward Hammond organ intro that fits so well with the piano, guitar, and John’s raspy voice. This song is a classic that has aged well over the years. Dear Abby The song “Dear Abby” was another childhood favorite of mine. I loved his simple storytelling and tongue-in-cheek lyrics: “My feet are too long/My hair’s falling out, and my rights are all wrong/My friends they all tell me that I’ve no friends at all.” This classic from the 1973 album Sweet Revenge was a favorite of mine as I was figuring out my awkward teenage years, and the advice in the lyrics “So listen up buster, and listen up good/Stop wishing for bad luck and knocking on wood” would be the kind of sage, wise advice my late grandmother would have given me. In Memory As I was reading through people’s memories of John Prine on Twitter, I came across Hanif Abdurraquib, editor of GENMag, who, in my opinion, had the best response to John Prine’s passing. While I was in my home studio last night, I raised a glass of Evan Williams to Mr. Prine and recorded a short video of my playing my favorite song of his on the piano. You can watch it here. That’s the way that the world goes ’round. You’re up one day and the next you’re down. Thanks for the memories, John. What are some of your favorite John Prine songs? I would love to hear them in the comments below.
https://medium.com/theentertainmentbreakdown/remembering-american-songwriter-legend-john-prine-26b3a81088c5
['Judson Hurd']
2020-10-10 01:19:31.505000+00:00
['RIP', 'Death', 'Music', 'Coronavirus', 'John Prine']
You Need a Co-Founder For Your Startup.
Almost every startup is better off with a co-Founder. Especially the bootstrapped ones. If you have a super cool idea and have thought about starting a startup, then you must have already researched the reasons for having a co-Founder. But worry not: if this has slipped your mind, I will give you a summary of some reasons for having one. Dan Lok, the Asian Dragon investor, advocates that you should bootstrap as long as you can with your friends in a garage. “Eat ramen noodles, that’s ok for a period of time because it teaches you how to be resourceful.” Lower Risks With shared responsibility, it is easier to lay your head on your pillow at the end of the day, don’t you think? Two or more people responsible for a startup need to share the gains, but also the losses. Chances are that having an accountability partner will enable you to tie up most of the loose ends that could jeopardize your business. You may be full of energy now, but a company’s variables can drive you crazy. It also means that the venture will not stop progressing due to a single person’s unavailability or burnout. Not to mention, having a co-Founder will lower your personal risks as well. Depending on how you sort things out with your new partner, you can now split the costs and ease the stress and potential losses. Greater Possibility of FFS investments (Friends, Family, and Suckers) Photo by Timon Studler on Unsplash “Less than 1% of companies are funded by angel investors. Less than 0.5% is funded by venture capital.[…] Most people, they actually start off with just family and friends and loans from the bank” — Dan Lok This should not be the aim of having a co-Founder, but you cannot deny that it is a bonus. Bootstrapped companies always benefit from having some money coming in from the initial stage… In this stage, you should not need to prove your idea too much. You don’t know yet what you are doing, and you probably do not have an MVP ready to actually show something. Thought that not knowing how to lower your CAC (Customer Acquisition Cost) or to increase your MRR (Monthly Recurring Revenue) will prevent you from getting a good investor or being approved into funds? Then you are absolutely right. But don’t worry about it just yet. That’s what the last F stands for (or S, as I gently put above). Your family and friends don’t have enough technical knowledge about what you’re doing, so make your idea compelling to them and hope for the best. If they do have technical knowledge, then CALL THEM TO BE YOUR CO-FOUNDER. Disclaimer: I’m not encouraging you to scheme or lie to anybody. The fact is: you don’t have those answers yet, but you still need the money to figure anything out. Moral Support Starting a business is a task that isolates you from the external world. There is no one to talk to. No one who knows all the obstacles your company’s going through and still supports you. You may start to feel abandoned. The moral support a partner might give us is different from anything else we could hope for. Firstly, he will be building your company with you. I don’t know if you are aware, but a startup often urges you to work way more than the 9–5 business hours. Having somebody to share that with will be a big relief. Studies have linked loneliness to harms such as depression, Alzheimer’s disease, and poor decision-making. “Lacking encouragement from family or friends, those who are lonely may slide into unhealthy habits. In addition, loneliness has been found to raise levels of stress, impede sleep, and, in turn, harm the body.
Loneliness can also augment depression or anxiety.” — Newcastle University epidemiologist Nicole Valtorta, PhD “What’s wrong with having one founder? To start with, it’s a vote of no confidence. It probably means the founder couldn’t talk any of his friends into starting the company with him. That’s pretty alarming because his friends are the ones who know him best.” — Paul Graham, co-Founder of Y Combinator A Unique Set of Skills Photo by Aaron Huber on Unsplash Can you whistle and clap at the same time? Well, I can. When you bring on a co-Founder, you are getting twice as much work done without having to invest more money, which is great, but this work needs to be directed at the company’s greatest difficulties. Remember: You are not spending money yet on a co-Founder, but as soon as your startup starts to bear fruit — as we hope — it will be harvested by more hands. And it is fair. Let’s just make sure to put those hands to good use. Let’s picture it a lil’ bit: You are an excellent administrator, you had a wonderful idea — sounds familiar? — and you are building an app. Ok. Who develops apps? App developers. If you already have an FFS investment — you remember that, right? — great, you can hire an app developer, but there are substantial roles that must be filled within your company. Now you need a good marketer to build your launch strategy. Are you running out of money? Maybe you know a good professional in the field who believes in your idea. See? Now, besides all the significant points made above, you have also granted your company someone who will bring essential skills to your idea. Double win. Investors Like It Photo by Lloyd Blunk on Unsplash If I haven’t convinced you so far, I think this is the turning point in our brief story. Investors tend to look more favorably on companies with more than one founder. Perhaps, in addition to all the advantages I mentioned above, a VC might think: “hm … if this guy convinced another human being so skilled to work on his idea, basically for free, it must be a very good one.” But do not take just my word for that. Y Combinator’s co-Founder, Paul Graham, says in “18 Mistakes That Kill Startups”: “The low points in a startup are so low that few could bear them alone. When you have multiple founders, esprit de corps binds them together in a way that seems to violate conservation laws. Each thinks “I can’t let my friends down.” This is one of the most powerful forces in human nature, and it’s missing when there’s just one founder.” Ed Zimmerman, a VC/angel investor and adjunct professor at Columbia Business School, is one of the many multiple-founder advocates. He says he observes everything about the relationship between founders: “Do they appear to be off message with one another? Do they talk over each other? Do they seem disrespectful or discourteous toward each other? Or, do they seem supportive of each other? I love working with founders where it appears that each has the other’s back.” If you are interested in a specific post on these topics, say so in the comment section below; that’s the only kind of pressure that works for me. Now that you know why you need a co-Founder, let’s go shopping! What to look for in a startup co-Founder Photo by Tachina Lee on Unsplash Looking for a co-Founder shouldn’t be much different than looking for an investor; after all, that person will invest time to execute your idea. Someone like you I’ve tried to make as clear as possible the importance of a business partner when starting your startup.
One of the things that you should appreciate above all is your partner-to-be having the same values as you. He doesn’t have to be a soulmate, but he does need to be the soulmate of your business. If you are creating a health tech startup, for example, and are interested in doing something that may challenge some religion, it will be difficult to partner up with someone extremely religious. Write down all the skills you will prioritize Yes, just like in a job description for any other position. Some aspects may be desirable and others are prerequisites. You must know exactly what you need from a co-Founder. If you have little or no acuity with numbers, looking for someone with those skills can be a plus (unless you are creating a fintech, in which case dealing with numbers should be paramount). Different backgrounds. As mentioned at the beginning of the article, looking for someone with skills different from yours is a way to enhance the success of your Startup. Give preference to those who have expertise in areas relevant to your project and who complement your skills. Extra information Get to know your co-Founder better. Ask about hobbies, childhood stories, school life. These are not questions you ask in a form, but important information to take into account, since the chemistry between you must be pulsating. It is not easy to work with someone with whom we cannot relate well. Where to look My main advice at this point is that you should look into your network of friends. People you already know and trust and can add to your project. On the Internet If you don’t know anyone in your circle who fits, you can look for people willing to be co-Founders on the internet. Search through LinkedIn groups, Quora responses, Medium authors, and of course, Twitter. Looking for people to relate to professionally shouldn’t be a problem for you. If there’s one thing the Founders I know have in common, it’s that they LOVE to talk about their businesses. Some specific relationship platforms are also available for this purpose. The best-known examples are YouNoodle, Founder2be, and FounderDating. These platforms serve to connect you with people who already have some background in tech, with the most varied interests. In addition to finding founders, you will also find experts who can advise you or even join your team. Face-to-face events Good old face-to-face networking does not usually fail. You can attend technology development events at universities and development hubs. Introduce yourself and talk to other founders and experts in areas related to your business. If you like to participate in events like this, you are likely to find a partner. Maybe even an entire team. What to watch Photo by Gaspar Uhas on Unsplash Some founders do not take the CEO position because they are not able to run a company, or because their skills will be better used as CTO (Chief Technology Officer) or CFO (Chief Financial Officer), and there is no problem with that, as long as it has been talked about from the beginning, to prevent it from motivating future disputes. For the same reason, always put what you have negotiated into a contract. Discuss scenarios. Percentages. Papers. Discuss the future. Go out for ice cream together. Seriously, don’t rush the important task of finding a co-Founder for your Startup. “As the proverb says, men cannot know each other until they have eaten salt together.” –Aristotle What important tip did I miss?
Don’t forget to leave in the comments how your process of finding a co-Founder went ;) — DR
https://medium.com/datadriveninvestor/you-need-a-co-founder-for-your-startup-a6560232b836
['Dominique Rocha']
2020-11-09 03:57:00.534000+00:00
['Business', 'Startup Lessons', 'Founder', 'Startup', 'Entrepreneurship']
Medium Is Building a Monetized Alternative to Twitter
Medium Is Building a Monetized Alternative to Twitter How writers can leverage new shortform posts Photo by Sara Kurfeß on Unsplash If you got paid $0.50 to $1.00 (or more) for every Tweet you wrote, would you use Twitter more often? Most content creators would almost certainly answer with an emphatic “Yes!” As part of recent sweeping changes to the platform, Medium has rolled out new options for shortform posts. Michelle Legro covers these in detail in a new post on the Medium Creator Hub. Medium’s shortform options essentially serve as a members-only, monetized version of Twitter. Writers on Medium (especially those with a following) can use shortform posts to perform many of the same functions as a Tweet, while potentially getting paid for their posts through Medium’s Partner Program — or taking advantage of expanded reach through Medium curation functions and their existing Medium audience. Here’s how Medium writers can leverage the platform’s new shortform writing functions to grow their reach, create more content, and earn more. The Basics: How to Write a Shortform Post Let’s start with the basics — what’s a shortform post, and how do you make one? Shortform posts are Medium articles which are 150 words or less. Because the average English word is 4.9 characters long, a 150-word shortform article runs to roughly 735 characters (150 × 4.9), not counting spaces. Tweets are currently limited to 280 characters, so that means shortform Medium articles can be about 2.5x longer than your longest Tweet (735 ÷ 280 ≈ 2.6). Shortform articles are generally a 1–2 minute read. Why 150 words? Any article longer than 150 words displays with a Read More button on Medium. If you keep your articles under 150 words, Medium says in their recent post about shortform, they’ll display in their entirety on your profile, or on the landing page of your publication. Readers can consume the entire shortform post without having to click through to a second page via a Read More button. You can write shortform posts longer than 150 words, but if you want to take advantage of having the full post display without a Read More, you should keep shortform posts to 150 words or less. Again, this is coming straight from Medium, so writers would be wise to heed this cutoff. Medium has hinted about new shortform options since they announced sweeping platform changes in mid-2020. But one recent surprise was the revelation that the systems for creating shortform posts are actually already live, and likely have been for months. You may have seen these posts already, if you follow Medium’s in-house publications, which have been using them at least since July 2020. How do you actually create one? It’s simple. As Medium explains on the Creator Hub, you start a new story, and instead of including a headline and photo at the beginning, you bold the first sentence. You then write your story as normal, ideally keeping the whole thing under 150 words. You can tag your story as you would normally. Here’s an example of a shortform post from Medium’s Creator Hub article: Courtesy Medium. There’s a bolded first line, no headline/image, and then less than 150 words of text. This cues Medium into the fact that you’re writing a shortform post. What to Include What should you include in your shortform posts? Really, it can be anything you want — or anything you feel provides value to your own audience.
For their part, Medium says “When it comes to publishing shortform, variation — and consistency within that variation — is the key.” Here are some things a shortform post can do: Share a great story from another Medium writer that you think would be relevant to your audience. Discuss breaking news or a new announcement in your industry; because shortform posts are fast to write, you can respond to events as they happen and keep your audience informed. Highlight one of your older stories which has new relevance based on the time of year, a new story you’ve just written, etc. For example, I recently highlighted a story I wrote in 2019 about Black Friday, since it’s now Black Friday again and the story was once again relevant. Quickly respond to another writer’s story. Just read a story that got you really excited — or fired up and ready to argue? Publish a response sharing your opinion on another writer’s piece as a shortform post. Highlight recent work that fits into a trend. Shortform posts are great for “roundups” of your recent work. Maybe you just published 5 stories over the last month about home decor. Tie them together with links in a single shortform post, so readers can see them all organized together in a thread. Medium’s Coronavirus Blog does a great job of doing this to summarize recent Covid-19 news. Share an off-platform story. Not all great content lives on Medium. Shortform posts are a great way to link out to and discuss great articles published on another platform. Share a photo or video. Shortform posts are perfect for sharing a single video or photo. Write your intro sentence or sentences, then embed the photo or video into your post. Tease a future story. Planning something great? You can use a shortform post to “tease” a future story that you have in the works, and get readers excited about it. You can even include a few lines from the story in your teaser post, so readers get a taste of what you’re developing and want to come back to see the whole piece when it’s published. This is also a great way to grow your reach by encouraging readers to follow you, so they can see the full story the moment it goes live. From a practical standpoint, Medium says it’s a great idea to tag other writers in your shortform posts, and to tag stories that you mention in the post.
To tag a Medium writer, start with the @ sign, and begin to type their Medium handle. The platform will show a list of writers. Click on the writer you want to tag, and their name will appear in green in your story. When you publish your shortform post, they’ll receive a notification. To tag a specific Medium story, copy the link to the story, paste it into your post, and hit enter. The story will appear in a box in your post. Benefits of Shortform Posts for Writers Medium’s shortform stories, to be blunt, are very much like Tweets. They serve many of the same functions as a Tweet (discussing breaking news, updating your audience about what you’re working on, etc.), except that they can be members-only and monetized. What are the benefits to writers of creating shortform posts on Medium? They’re super easy and fast to create. How long does it take you to write 150 words? Probably no time at all. Shortform posts are great because they’re simple and fast to create. That means you can respond immediately to breaking news events, or be prolific in your content creation by publishing much more frequently. They allow you to curate. Tim Denning recently said that audiences don’t seek content — they seek curated content. Your role as a creator is sometimes to create things totally from scratch. But sometimes it’s just to serve as a curator, gathering the best content from your own work or other sources which will have relevance to your audience. Curation isn’t “cheating” — choosing the best content and sharing it (with attribution, of course) is extremely valuable to your audience. Shortform posts make it super easy to find a great article, write a brief comment on it, and share it with your audience, curating the best content for them to consume. They’re monetized. If you choose to lock your post down behind Medium’s paywall, it will be monetized just like any other Medium post if you’re in the Medium Partner Program. Shortform posts are short, so you’re unlikely to get a ton of reading time. But I find that with an audience of 5,000+ followers on Medium, my shortform posts earn about $0.25 to $2.00 each. Writing these shortform posts is a bit like being paid around $1 for each of your Tweets.
They’re eligible for curation and can go in publications. Medium used to only rarely curate posts shorter than 3 minutes. They’ve since removed this restriction. While curation is a lower priority overall on the new Medium, shortform posts are still eligible for curation. You can also publish them in publications — either your own or another publication on the platform. They connect you to other writers and readers. When you tag another writer in your shortform post, they get a notification. It’s a great way to ping other writers, share something you love about their piece, promote their work to your audience, and build community overall. They’re a great way to experiment. Shortform posts are fast for you to create, and fast for your audience to read. That makes them perfect for experimentation. You can try something totally new with a shortform post, and see if your audience responds to it. If they love it, you can build the idea into longform content. If they hate it, you’ve only wasted 1–2 minutes of their time. An Example Here’s an example of a recent shortform post that I wrote: It’s a 1-minute read. There’s a bolded first sentence, no featured image, and no headline. It was curated in Marketing. It will likely be published in The Startup. It shares a photo and a previous article I wrote. It’s gotten 78 views and earned me $0.45 in the first three days online. That’s not a lot, but it’s also not bad for a story that took about 60 seconds to write. It probably drove some views to my old story, too. Make Shortform Your Own Medium’s shortform posts are a bit like a monetized, promoted version of Twitter. Shortform posts serve many of the same functions as Tweets, but on the Medium platform. You can even repurpose your best Tweets as shortform stories if you want. They’re super fast and easy to write, allow you to experiment with new ideas, and can be used to highlight your own relevant work, the work of others, or to share and comment on breaking events. As a Medium writer, start testing out shortform posts and see what your audience engages with. As Medium says, each writer and each audience will find different value in this new format — whether that’s sharing stories you’ve curated, sharing industry announcements, building community, or more. You can write a shortform post in a few minutes. Try it right now — press the Write button, think of something brief to say, and create your first shortform post today.
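Since the 150-word cutoff is the one hard rule in all of this, it’s easy to check a draft programmatically before you publish. The article contains no code; this is a small sketch of my own that reuses its 4.9-characters-per-word and 280-character-Tweet figures, and the function names are made up for illustration.

```python
AVG_WORD_LEN = 4.9       # average English word length cited in the article
SHORTFORM_LIMIT = 150    # Medium hides longer posts behind a Read More button
TWEET_LIMIT = 280        # current Tweet character limit, for comparison

def is_shortform(draft: str) -> bool:
    """True if the draft fits Medium's 150-word shortform cutoff."""
    return len(draft.split()) <= SHORTFORM_LIMIT

def report(draft: str) -> str:
    words = len(draft.split())
    est_chars = round(words * AVG_WORD_LEN)  # the article's rough estimate
    verdict = "shortform" if is_shortform(draft) else "longform (Read More)"
    return (f"{words} words, ~{est_chars} chars "
            f"(~{est_chars / TWEET_LIMIT:.1f}x a max-length Tweet): {verdict}")

print(report("Some brief thought you want to publish as a shortform post."))
```

Run on a full 150-word draft, the estimate comes out to roughly 735 characters, matching the 2.5x-a-Tweet figure quoted earlier in the article.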
https://medium.com/swlh/medium-is-building-a-monetized-alternative-to-twitter-bda70900de68
['Thomas Smith']
2020-12-02 09:39:56.062000+00:00
['Medium', 'Creativity', 'Twitter', 'Shortform', 'Writing']
One Way to Assess the Conflict Within Your Work in Progress
One Way to Assess the Conflict Within Your Work in Progress Photo by Olav Ahrens Røtne on Unsplash Cheryl St. John, the author of Writing with Emotion, Tension, and Conflict (Writer’s Digest Books, 2013), writes, “You can’t write a book and then go back later and try to add conflict.” Weak conflict is one way to have a manuscript rejected, St. John explains. Her advice makes sense. That’s why I often use St. John’s book to plot and brainstorm conflict. But what if I’m 50,000 words into my first draft, and it’s not entirely devoid of conflict, yet it needs improvement? I didn’t want my eventual manuscript rejected. What nagged at me was that I had never sought any objective, professional feedback about my story idea. Seeking advice seemed important now that doubt had me asking, “What am I doing?” I sat at my desk, staring out my window, waiting for the answers to appear before my eyes. “How did I stray? Why isn’t my fantasy work in progress turning out as I want it to?” Nothing felt right as I pounded away at the keyboard, producing a daily word count, which is good for training my writing muscle but not useful for my problem-solving process. What helped me the most was discussing the structure of my book idea with my writing class, “Write Right Now,” led by author Catherine Jordan. From Jordan, I learned what I was missing. You probably guessed it by now. Conflict. More conflict. After talking with Jordan, I was able to find direction, understand my book structure, and see where I’d wandered off. Her objective review allowed me to take a step back and see that my story had many more possibilities. I also listened to my classmates’ book structures and premises, giving me more perspective on what my first draft was lacking. Jordan’s words were polite and encouraging. The tip she gave me was to “increase my pressure” and to mind the importance of that pressure throughout my story. I learned that pressure is a component of your story structure, one part, alongside seven more elements, needed when writing a premise that gives your story direction. Jordan points out: If you can’t explain one of your story structure aspects, or if it’s too weak, more brainstorming and re-working are needed. The day after Jordan’s advice, I drove to work, having just finished my usual early-morning writing. I was in the car with my coffee, thinking about my fantasy project, when Jordan’s words truly hit me and sank in. I had pressure, but in a more generalized way, applying to all the citizens of my world. My current pressure wasn’t unique enough for my main character’s goals. I needed to figure out how to take Jordan’s advice and apply it. Pressure defines how to keep my character from getting what they want, which boils down to a discussion we had during one of our Tuesday evening classes when Jordan asked us: “What is conflict?” Our answers went up on the whiteboard. The definition that came from those answers was stopping your main character from achieving their goal. Next, Jordan asked us, “How can I stop my character from getting what they want?” When we know our character’s goals, we can stop them, making it incredibly hard for them to get what they want by setting up their conflict. Conflict can be split into internal and external conflict. I’d learned internal conflict was the emotional aspect of my character. It can come from my character’s backstory: how she grew up, different experiences in her life, phobias, likes, dislikes, and wants.
It shapes how she will react to certain situations when the pressure is on. External conflict is the part of the book world I create for my story people. The main characters are marooned on a deserted island or live on a new planet that’s going to explode in 30 days. This conflict holds your characters together in a bubble. They need to react to their story world’s limitations and benefits. Reviewing conflict with Jordan in her class helped me re-assess my pressure and conflict. My overwhelm with my first draft and getting it done had me losing my way down the “rabbit hole.” This, Jordan explained to us, is a way to avoid writing mistakes. With advice from Jordan, I took an honest look at my conflict. First, I wrote down what my main character wants (goals). Next, I wrote down ways to stop my main character from getting what she wants, along with her responses to these pressures based on her internal and external conflict. I also went over conflict and pressure between my characters and their individual responses. Then I started my outline. I reviewed the scenes in my story to determine what I could keep, tweak, or delete while brainstorming new scenes. And of course, I’m writing again. Luckily, I hadn’t finished my story. I wasn’t querying or getting rejection letters. I was in a dark, veiled place of writing my first draft. My eyesight and sense of direction were obscured. Receiving feedback from Jordan opened up fresh conflict for my story that I’d overlooked. I had to work at re-assessing and improving my story’s conflict. All of that evaluating and problem-solving was worth it to see a new story shape that I’m more excited to write about. I’m learning, and I’m thankful for those lessons, even if it happened 50,000 words into my process. Because now it feels more like the sun is breaking through the fog of dark clouds around my work in progress. My eyes focus, revealing a more precise picture to follow.
https://medium.com/ninja-writers/one-way-to-assess-the-conflict-in-your-work-in-progress-6eecaa4b3639
['Amie Destefano']
2020-11-09 13:39:16.726000+00:00
['Conflict', 'Creativity', 'Writers Life', 'Problem Solving', 'Writing Tips']
How (And Why) to Keep an Everyday Notebook
“Keep a notebook. Travel with it, eat with it, sleep with it. Slap into it every stray thought that flutters up in your brain. Cheap paper is less perishable than gray matter, and lead pencil markings endure longer than memory.” — Jack London I am a compulsive notebook keeper. Only, until the last couple of years, my habit was to just pick up whatever notebook was lying around and write in it. So, at any given time I might have a dozen of them, and actually finding the one that held a particular note? Forget it. I’ve tried Bullet Journaling, oh, every January since it was invented. On the surface, it was designed for me. Creative, intuitive, malleable. Just the thing for my maker’s mind. Only it never works. I make the first layout. I use it. Sort of. But it’s so much work that I’m afraid of screwing it up. So I don’t really use it. And also? At some point, usually week three or four, I just don’t want to do the work of setting it up anymore. Also, I have a problem with anything that starts to feel arbitrary. So those cute little bubbles to fill in for tracking my water consumption? They seemed like a good idea, but it doesn’t take long for my rebellious side to kick in. I mean, why exactly do I need to fill in water bubbles anyway? A Couple of Years Ago, I Decided to Try Something Else There was a little confluence of things that changed my thinking. I read this little post by Austin Kleon about the notebooks he keeps, and bought the big, fat, flexible notebook he said he uses. And I read the Jack London quote I posted at the top of this post. It’s funny how tiny things can shift everything, isn’t it? I started carrying that notebook with me. And instead of making layouts or trying to do anything artsy or creative, I just started using it. I Just Write In It, Every Day It’s not a planner, which is what my Bullet Journal tried to be. If I feel the need to keep track of something for a while, I just start a page for it. No water consumption bubbles needed. If I go to a meeting or a conference or talk to someone, I just open my notebook to the next page and start writing. If I come across a quote that strikes me, I write it in my notebook. I take notes on books I read, ideas I have, things I want to remember. I keep lists — grocery lists, menu plans, bills to pay, ideas for everything. Like Jack London said, I slap every stray thought into my Everyday Notebook. It’s Not as Pretty or Put Together as a Bullet Journal I’m sure it will never become a thing. But it serves me better. If you’d like to keep your own Everyday Notebook, start with a notebook that’s big enough, but not too big. Is that vague enough for you? The truth is that I like keeping one book a year, but I carry a bag, so I can do that. If you need to carry yours in your pocket, you might just want one that’s big enough for a month. That’s okay, too. Get in the habit of having it with you every day, though. That’s key and not vague at all. Train yourself to write your notes in your Everyday Notebook. Don’t worry about organizing them. Just fill your pages. Date the notes. Draw a line under a thought, if you’ve got a lot of pages left. Taking Action on Your Notes Because all of my notes for my whole life are in one place, I sometimes need to actually do something with them. Maybe I need to act on something — make a call, reach out to someone, buy something. I might have to do something with notes I’ve taken for a story I’m working on. Maybe I want to put the notes I’ve taken on a book or conference into my Commonplace Book. 
Once a week or so, I go through my Everyday Notebook and I look at my notes. I act on what needs to be acted on and I mark through the note with a highlighter so that I know that it’s been addressed.
https://medium.com/mind-cafe/how-and-why-to-keep-an-every-day-notebook-instead-of-a-bullet-journal-80862f3e27dd
['Envy Writer']
2019-09-17 02:51:24.895000+00:00
['Self', 'Notebook', 'Productivity', 'Creativity', 'Life']
America, a Shared Madness
American English is clumsy, lacking in grace and subtlety. It’s very good at expressing stupid ideas, trivial things, but not so much at conveying complex thoughts or emotions. Derp is the most American of words. Derp can be a guttural belch designed to express confusion, or derp can be an accusatory finger pointed at a very dumb person. It doesn’t so much describe as it does assault, less a coherent thought than a noise one instinctively retches from their lizard brain. German is complex. Words and phrases are somehow both unwieldy and concise. To outsiders, a word might appear an endless string of letters that takes more space than entire sentences in their native tongue, yet the sum is capable of describing a thing others might have trouble articulating in dozens of sentences or even whole books. Vergangenheitsbewältigung refers to a collective trauma that may haunt a nation, specifically in the context of post-World War II German history when the country tried to understand and address its descent into fascism. It describes the mood of such a period as much as it does the process of working through the past. French is the most beautiful of all languages. Words and phrases aren’t as cumbersome as in German, but also not stunted or aggressive as in American English. Much like German, it can condense complex thoughts or ideas down into one word or a short string to identify concepts we might struggle to recognize over a lifetime. French can take something as difficult to put into words as a collective paranoia and make it feel romantic. Such is the case with folie à deux. In English, the phrase translates to “madness for two” and describes when a person who suffers from a delusional belief passes it on to others. In its simplest terms, it means a shared psychosis, though this description is reductive because it doesn’t account for transmission, it only articulates a state of collective hysteria. There is no word or phrase in American English like folie à deux, especially on a national level. Mass delusion, for example, explains how a group can experience a collective hysteria but it isn’t concerned with a source. It lacks precision. Worse, it’s often restricted to smaller groups across brief moments in time. It has meant everything from the Salem Witch Trials to a series of “evil clown” sightings in small towns throughout the United States in 2016. It doesn’t capture our prolonged madness, the generational fear that travels between parent and child, teacher and student, hostage taker and victim. America breeds a special kind of crazy. You see it among rich and poor alike. It starts at the highest levels and trickles down so all can share of it, the only kind of redistribution that happens here. This isn’t something new, a recent development that emerged out of a national panic; it’s a condition that has been with us since this country’s founding, though the last four years have thrown it into sharp relief. I first noticed it in 2015 when Donald Trump announced he was running for President. There were brief moments of rupture in media coverage that revealed an intense loathing for the man irrespective of political party. George Will called Trump a counterfeit Republican and the National Review devoted an entire issue of its magazine, titled “Against Trump,” to attacking him. MSNBC rebranded as the Anti-Trump Network by obsessively covering his every word, and its biggest star, Rachel Maddow, began comparing Trump to Adolf Hitler on her show and in interviews. 
The disease spread to the rest of America in the form of public spectacle. In February 2017, witches began hexing Trump in public rituals, casting what they called a binding spell so that “his malignant works may fail utterly”; this continued for months and a group formalized online under the moniker #MagicResistance. But this being America, no action can occur without a corresponding or more absurd reaction. Christians responded to the hexes with coordinated prayer campaigns, and the online forum 4chan created its own occult belief system, the Cult of Kek, to first mock the rituals and then spin off its own minor religion based around “meme magic.” Trump, his campaign, his Presidency — none are unique. When I say I see something in Trump, I’m not referring to the man but instead how Americans respond to him and figures like him. Conservatives have called it Trump Derangement Syndrome, a generalized paranoia in response to the man’s very existence, but it’s not distinct to Trump alone. The phrase was first coined, as Bush Derangement Syndrome, by political commentator Charles Krauthammer to describe a Howard Dean quote about George W. Bush. Center-left news outlets revived and transformed it into Obama Derangement Syndrome. Seemingly every new President drives the country crazier, every election becomes an exercise in greater self-harm. But this past year may be the pinnacle of American derangement. (For now.) There’s a global pandemic affecting every nation but America is falling apart in ways both unique and unusual. The government provided only one check of $1,200 to its citizens despite near-universal state mandates that businesses shut down or operate at minimum capacity. Armed protestors stormed state capitols to demand those states reopen. The death of Supreme Court Justice Ruth Bader Ginsburg inspired widespread protests and counter-protests. One of the most stunning images of 2020 came a little over a week after her passing when, in response to the nomination of Amy Coney Barrett, Trump supporters began holding prayer sessions in front of the Court; a staff photographer at the New York Times caught one such example, where supporters were cast in stark contrast to a woman lying on the ground, in tears over Ginsburg’s death. The following day evangelicals flocked to the Washington Monument as part of a prayer rally in support of Israel. But instead of holding a traditional prayer session, members treated the structure as if it were the Western Wall, connecting one of Judaism’s holiest sites to the founding of America. Protests have been ongoing since May when Minneapolis police murdered George Floyd. This one event may be the defining American moment of the 21st century. A police station burned, cities lied about their use of chemical weapons on civilian protestors, Americans established their own free states. At a certain point though, it became unclear who was protesting what or why. In June, after protestors nationally tore down Confederate monuments, conservative residents of South Philadelphia began holding protests in defense of a statue of Christopher Columbus… but no one had threatened to attack or deface it. That same month caravans of Trump supporters began rolling through cities and suburbs around the country. The South Philadelphia protest saw police stand by and watch as protestors fought in the streets, while the caravans culminated in an October 31st protest in which a Texas caravan harassed and attacked a Biden campaign bus and another group shut down the Mario Cuomo Bridge in New York. 
There’s no phrase like folie à deux in our language, nothing like Vergangenheitsbewältigung. We have no concise way of capturing this feeling, this moment, the process of a country losing its mind. There is no turn of phrase or collection of grunts that portrays both the disease and its host — nothing describing transmission or receipt. The best we can do is approximate. We call it simply: America. Because America isn’t only a place. It’s not just a system of lines and boundaries, somewhere you can travel to and from. It’s a pathology, a psychological condition which afflicts 328 million people. America is a light at the end of the tunnel, fast approaching, until you realize it’s the headlights of a truck bearing down on you. It will run you over, leave you for dead, but you keep getting back up and walking towards it again. America is a fleeting memory you move further away from every day. It gnaws at you, and you resent it because you lose a little bit more each time you try to remember what it once was, but still you keep trying. America is that thing that drives us all crazy — a collective psychosis, a mass delusion, a shared madness.
https://robert-skvarla.medium.com/america-a-shared-madness-694c38f92454
['Robert Skvarla']
2020-11-04 15:02:25.367000+00:00
['Politics', 'Society', 'Nonfiction', 'Culture', 'Essay']
How to use DeepLab in TensorFlow for object segmentation using Deep Learning
Modifying the DeepLab code to train on your own dataset for object segmentation in images Photo by Nick Karvounis on Unsplash I work as a Research Scientist at FlixStock, focusing on Deep Learning solutions to generate and/or edit images. We identify coherent regions belonging to various objects in an image using Semantic Segmentation. DeepLab is an ideal solution for Semantic Segmentation. The code is available in TensorFlow. In this article, I will be sharing how we can train a DeepLab semantic segmentation model for our own dataset in TensorFlow. But before we begin… What is DeepLab? DeepLab is one of the most promising techniques for semantic image segmentation with Deep Learning. Semantic segmentation is understanding an image at the pixel level, then assigning a label to every pixel in an image such that pixels with the same label share certain characteristics. Installation The DeepLab implementation in TensorFlow is available on GitHub here. Preparing the Dataset Before you create your own dataset and train DeepLab, you should be very clear about what you want to do with it. Here are the two scenarios: Training the model from scratch: you are free to have any number of classes of objects (number of labels) for segmentation. This requires a very long time for training. Use the pre-trained model: you are free to have any number of classes of objects for segmentation. Use the pre-trained model and only update your classifier weights with transfer learning. This will take far less time for training compared to the prior scenario. Let us name your new dataset “PQR”. Create a new folder “PQR” as: tensorflow/models/research/deeplab/datasets/PQR . To start, all you need is input images and their pre-segmented images as ground truth for training. Input images need to be color images and the segmented images need to be color indexed images. Refer to the PASCAL dataset. Create a folder named “dataset” inside “PQR”. It should have the following directory structure: + dataset - JPEGImages - SegmentationClass - ImageSets + tfrecord JPEGImages It contains all the input color images in *.jpg format. A sample input image from PASCAL VOC dataset SegmentationClass This folder contains all the semantic segmentation annotation images for each of the color input images; these are the ground truth for the semantic segmentation. The images should be color indexed. Each color index represents a unique class (with unique color) known as a color map. Sample Color Map [source: https://github.com/DrSleep/tensorflow-deeplab-resnet] Note: Files in the “SegmentationClass” folder should have the same name as in the “JPEGImages” folder for each corresponding image-segmentation file pair. A sample semantic segmentation ground truth image from PASCAL VOC dataset ImageSets This folder contains: train.txt: list of image names for the training set val.txt: list of image names for the validation set trainval.txt: list of image names for the training + validation set A sample *.txt file looks something like this: pqr_000032 pqr_000039 pqr_000063 pqr_000068 pqr_000121 Remove the color-map in the ground truth annotations If your segmentation annotation images are RGB images instead of color indexed images, a short Python script like the one sketched below will be of help. Here, the palette defines the “RGB:LABEL” pairs. In this sample code (0,0,0):0 is the background and (255,0,0):1 is the foreground class. 
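The script embedded in the original post isn’t reproduced here, so the following is only a reconstruction of what the article describes; the names palette, old_label_dir, and new_label_dir are assumptions taken from the surrounding text.

```python
# Sketch of the color-map removal step described above (not the author's
# exact script). Each RGB color in an annotation image is mapped to an
# integer label and the result is saved into new_label_dir.
import os
import numpy as np
from PIL import Image

palette = {(0, 0, 0): 0,    # background
           (255, 0, 0): 1}  # foreground class

old_label_dir = 'dataset/SegmentationClassRGB'  # RGB annotations (assumed path)
new_label_dir = 'dataset/SegmentationClass'     # raw segmentation data goes here
os.makedirs(new_label_dir, exist_ok=True)

for name in os.listdir(old_label_dir):
    rgb = np.array(Image.open(os.path.join(old_label_dir, name)).convert('RGB'))
    labels = np.zeros(rgb.shape[:2], dtype=np.uint8)
    for color, label in palette.items():
        labels[(rgb == color).all(axis=-1)] = label  # match pixels of this color
    Image.fromarray(labels).save(os.path.join(new_label_dir, name))
```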
Note, the new_label_dir is the location where the raw segmentation data is stored. Next, the task is to convert the image dataset to a TensorFlow record. Make a new copy of the script file ./dataset/download_and_convert_voc2012.sh as ./dataset/convert_pqr.sh . Below is the modified script. The converted dataset will be saved at ./deeplab/datasets/PQR/tfrecord Defining the dataset description Open the file segmentation_dataset.py present in the research/deeplab/datasets/ folder. Add the following code segment defining the description for your PQR dataset. _PQR_SEG_INFORMATION = DatasetDescriptor( splits_to_sizes={ 'train': 11111, # number of files in the train folder 'trainval': 22222, 'val': 11111, }, num_classes=2, # number of classes in your dataset ignore_label=255, # white edges that will be ignored (not treated as a class) ) Make the following changes as shown below: _DATASETS_INFORMATION = { 'cityscapes': _CITYSCAPES_INFORMATION, 'pascal_voc_seg': _PASCAL_VOC_SEG_INFORMATION, 'ade20k': _ADE20K_INFORMATION, 'pqr': _PQR_SEG_INFORMATION } Training In order to train the model on your dataset, you need to run the train.py file in the research/deeplab/ folder. So, we have written a script file train-pqr.sh to do the task for you. Here, we have used xception_65 for local training. You can specify the number of training iterations via the variable NUM_ITERATIONS, and set --tf_initial_checkpoint to the location where you have downloaded or pre-trained the model *.ckpt. After training, the final trained model can be found in the TRAIN_LOGDIR directory. Finally, run the above script from the …/research/deeplab directory. # sh ./train-pqr.sh Voilà! You have successfully trained DeepLab on your dataset. In the coming months, I will be sharing more of my experiences with Images & Deep Learning. Stay tuned and don’t forget to spare some claps if you like this article. It will encourage me immensely.
https://medium.com/free-code-camp/how-to-use-deeplab-in-tensorflow-for-object-segmentation-using-deep-learning-a5777290ab6b
['Beeren Sahu']
2018-09-24 20:20:58.565000+00:00
['Deep Learning', 'Image Processing', 'Artificial Intelligence', 'TensorFlow', 'Tech']
The Blackest City in the U.S. Is Facing an Environmental Justice Nightmare
Growing up in southwest Detroit, Vince Martin thought it was normal for the sky to be orange. When he was three years old, his family moved from Cuba to one of the Black areas of town. At the time, discriminatory housing practices segregated the city. His Afro-Cuban family settled in the 48217, now Michigan’s most polluted zip code, where 71% of the population is Black and air pollution makes the sky look like it’s on fire. Specifically, the Martins moved to Boynton, a working-class neighborhood. The neighborhood sits next door to a Marathon oil refinery and its sprawling industrial campus. Martin, now an environmental activist in Detroit, remembers the refinery being made up of “one or two tankers” when his family settled there in the 1960s. Now, Marathon is a 250-acre tank farm that emits so much air pollution it’s received 15 violation notices from the Michigan Department of Environment, Great Lakes, and Energy since 2013 for surpassing state and federal emission limits. (Marathon denies any wrongdoing, claiming it has reduced emissions by 75% over the last 20 years and contributes only 3% of emissions in the area.) But Martin saw air quality worsen as the refinery grew over the decades. He believes he escaped the worst of it in his youth because he traveled so often for sports, but others “weren’t so fortunate.” At his 30-year high school reunion, it seemed to Martin that more people in his class were dead than living. He knew many had died from cancer. As a child, Martin’s younger brother David developed asthma and juvenile diabetes, both of which have been linked with air pollution. Every few days, Martin remembers, David was rushed to the hospital with respiratory issues. “These episodes kept happening every time he’d try to go outside and enjoy his environment,” says Martin. After a life of health complications, David died at age 45 from what Martin calls “toxic poisoning.” “Seeing someone with such joy in life, seeing it stripped away little by little, it’s a terrible thing,” Martin says. “To be in a community like that and be exposed to those kinds of pollutants. It’s a sad story.” These stories are common in the 48217. Four of the state’s top emitters of particulate matter, sulfur dioxide, and nitrogen oxides (pollutants that, respectively, cause respiratory issues and create acid rain) are located within a five-mile radius of Boynton. A portion of I-75, one of the busiest highways in Michigan, runs along the northern border of the neighborhood. The neighborhood is nine minutes from the traffic-choked Ambassador Bridge, the busiest international border crossing in North America. Plans to open the new Gordie Howe International Bridge next to the Ambassador Bridge in 2020 are expected to increase diesel truck traffic by 125%.
https://onezero.medium.com/the-blackest-city-in-the-u-s-is-facing-an-environmental-justice-nightmare-788e0fb5c6b9
['Drew Costley']
2020-01-15 16:13:02.425000+00:00
['Health', 'Pollution', 'Environment', 'Equality', 'Black In Climate Change']
The Magic Key to Making Habits Sticky
I have some good news for you. You’re already really good at creating habits. You are, I promise. You probably have dozens that are so ingrained in you, you don’t even realize you do them. Here are some of mine: I check my email on my phone first thing after I open my eyes every morning. I grab a pre-packaged protein shake for breakfast every morning. I brush my teeth right after the first time I go to the bathroom after my breakfast protein shake every morning. I check my calendar while I’m drinking that shake every morning. I read on my Kindle while I eat lunch every afternoon. I take my vitamins right after lunch, before I go back to work, every afternoon. When I’m on my computer, I check my email, my Facebook, and my Amazon book sales, every time. I think about dinner at 5 p.m. every afternoon. I brush my teeth, take my medication, and wash my face when I’m in the bathroom for the last time before I go to bed, every night. Those are just the habits I came up with in two minutes while I’m sitting here writing this post. Habits come and go. I used to be in the habit of checking my Medium stats every time I was on the computer, but lately that one’s fallen off. I’m more into self-publishing than blogging at the moment. And I’ve only had my Kindle for a month, so that one’s new. It developed quickly. Sticky habits that happen without you thinking about them. You have them. I have them. We all have them. The trick is to figure out why and tap into the ability to develop those habits so that we can do it intentionally. There’s a magic key. Habits that become ingrained are tied to either time or an action. Either you are in the habit of doing something at a specific time every day (I always think about dinner at 5 p.m., for instance) or in reaction to a specific action (I always brush my teeth when I’m in the bathroom after breakfast and before bed). In other words, it’s not enough to decide you want to develop a habit. If you want to make it stick, you need to tie it to a time or an action. I can tell you until I’m blue in the face that a teeny, tiny habit of writing for ten minutes every day is powerful. It won’t stick for you if you don’t decide when you’re going to do it. Either I’m going to write for ten minutes every day at 8 a.m. OR I’m going to write for ten minutes every day right after breakfast. It doesn’t matter which. What does matter is that now your habit has parameters. It has boundaries. It’s real. You’ve set aside time for it. Stack That Habit Even more powerful is if you stack your new habit, time- or action-bound, with another one that’s already ingrained. So: I’m going to write for ten minutes every day at 8 a.m., right after I check my email. Or: I’m going to write for ten minutes every day, right after breakfast. Oh, whoops. See? Tying your new habit to an action often automatically ties it to an ingrained habit. If you don’t actually eat breakfast every morning, then that habit is going to be less powerful. Resistance Is Often Confusion I don’t mean top-level confusion. If you make a teeny-tiny goal — like writing for ten minutes a day, for instance — there’s no confusion there. It’s simple and straightforward. But the part of your brain that needs to engage for it to become ingrained? That part of your brain runs on a deeper level and it’s more easily confused. Because maybe you’re saying ‘I’ll write ten minutes a day’ but you’re thinking ‘actually, I’ll write for an hour a day.’ Yikes. Confusion. 
Or maybe you’re thinking ‘I’ll write for ten minutes a day at 8 a.m.’, but you don’t actually ever get out of bed before 9 a.m. Mmmhmm. Confusion. Next thing you know, you’re looking around wondering why you can’t manage to write for ten stinking minutes a day. It’s ten minutes! Resistance is often a mask for deep-brain confusion. I’m sure there’s a more technical term for that. Whatever. I think you know what I mean. Your unconscious monkey brain hasn’t got the memo yet. You need to lay it out super clear and precise for the monkey brain. It’s hard to do that, because the goal here is a minimum, not a maximum. There’s nothing wrong with writing for an hour a day. In fact, that might be the reason for the tiny goal. But you get full credit for hitting your tiny goal. Talk to your monkey brain and make sure it gets that. So, think about what time you actually get out of bed before you decide to tie a habit to an early morning hour. And make sure that you’ve actually accepted that tiny goal as the whole, real goal. The Steps to Creating a New Sticky Habit If you want to create a new, sticky, small habit — here’s how:
https://medium.com/the-write-brain/the-magic-key-to-making-habits-sticky-5b7f1bab6d27
['Shaunta Grimes']
2020-12-06 16:57:23.890000+00:00
['Creativity', 'Life', 'Goals', 'Productivity', 'Habits']
The two Google Search Python Libraries you should never miss!
In this article, let us discover the usage of the two most used Google Search Python libraries. Photo by Benjamin Dada on Unsplash This morning while I was looking for a search-related Python API, I saw these two amazing Google Search libraries written in Python. So I thought of bringing them to light and sharing them with you guys, so that next time when you are in need, these libraries will be at your fingertips. Now that we know the name of these two amazing Python libraries, let’s try to use them one at a time and perform some searching. Before diving into the APIs, the whole code of this article can be found in my GitHub repository down below: googlesearch It is a Python library used to perform effective Google searches. It uses the requests and BeautifulSoup4 libraries to scrape the data from Google. Installation To install googlesearch, run the following command !pip install googlesearch-python Note: If you are using any IDE other than Google Colab, use pip install googlesearch-python with no need to append the ! at the beginning. Also, make sure to restart the runtime/editor in order to use newly installed versions. Perform basic searching Now, with the help of this library, let’s search for Katy Perry. I mean, why not? To do this, you need to import the library first and then search for the topic. import googlesearch search = googlesearch.search('Katy Perry') print(search) As soon as you execute the above code, you will be presented with Katy Perry’s search results as a list. Yes, you heard it right, the return type of the search method is a list. Playing with the search method You can also control the number of search results with num_results and get the results in other languages using the lang parameter. Let’s use the same Katy Perry example, limit the search results to 5, and get the results in French. import googlesearch search = googlesearch.search('Katy Perry', num_results = 5, lang = 'fr') print(search) You know the outcome of the above code. The output will not be displayed in French; rather, the sources the results point to will be French websites. And this time there will only be 5 search results displayed. Reference To go through the full source code and more details of the googlesearch library, please visit the repository given down below:
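Pulling the snippets above together, a minimal runnable sketch might look like the following; it assumes the googlesearch-python package is installed and wraps the results in list() so they print the same way whether the library hands back a list or a lazy iterator.

```python
# A minimal sketch combining the calls shown above; assumes
# `pip install googlesearch-python` has already been run.
from googlesearch import search

# Limit to 5 results pointing at French-language sources, mirroring
# the num_results and lang parameters described in the article.
results = list(search('Katy Perry', num_results=5, lang='fr'))
for url in results:
    print(url)
```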
https://medium.com/analytics-vidhya/the-two-google-search-python-libraries-you-should-never-miss-dfb2ec324a33
['Tanu N Prabhu']
2020-12-22 20:51:20.662000+00:00
['Python', 'API', 'Google', 'Google Search', 'Programming']
Testing in React with Jest and Enzyme: An Introduction
Not using Create React App? If you are not using Create React App, you can install Jest with npm or yarn: yarn add --dev jest #or npm install --save-dev jest In order to run jest with npm test , replace or append a test script to your scripts block within your package.json file: ... scripts: { "test": "jest", ... } Within a Create React App environment this will look slightly different, as react-scripts is called instead of jest : (Nothing needs to be changed here) ... scripts: { "test": "react-scripts test", ... } To verify things are working, run npm test now in your Terminal at your project root directory. You will now be in watch mode, and should see a prompt displaying something similar to “No tests found related to files changed since last commit”, along with some shortcuts to run tests in various ways. So what is the best way to introduce tests to your React project? Let’s visit this first before diving into some more useful tools. Start Using Jest Jest discovers test files within your project via their filenames, which can be located at any depth of your project. There are 3 naming conventions we can adopt in order for Jest to pick up our tests: Any file with a .test.js suffix or a .spec.js suffix. This may be preferable when a component (e.g. App.js ) can be accompanied by an App.test.js file in the same directory, where they will reside next to each other. This optimises discoverability and keeps import statements to a minimum. Any .js file within __tests__ folders throughout your project. If you have multiple test files to test a particular component or component directory, a __tests__ folder allows a more coherent structure whereby your tests and components are not mixed. This may be preferable in larger projects. Which method you adopt is your call and will depend on your project. Now let’s write a simple test to examine Jest syntax. Let’s add an App.test.js file within the same directory as App.js . This test will have no relation to App.js just yet, but instead will introduce some key methods of Jest: describe() and it() , as well as the expect() methods: // App.test.js describe('Examining the syntax of Jest tests', () => { it('sums numbers', () => { expect(1 + 2).toEqual(3); expect(2 + 2).toEqual(4); }); }); Upon first glance you may find this test quite obvious, and that is a good thing; readability is an intentional design choice to keep tests simple and easy to understand. Now, upon saving this test and provided you have Jest running in the Terminal, you should receive an output similar to the following: Let’s break down the above example to understand this syntax: describe() : An optional method to wrap a group of tests with. describe() allows us to write some text that explains the nature of the group of tests conducted within it. As you can see in the Terminal, the describe() text acts as a header before the test results are shown. it() : Similar in nature to describe() , it() allows us to write some text describing what a test should successfully achieve. 
You may see that the test() method is used instead of it() throughout the Jest documentation, and vice-versa in the Create React App documentation. Both are valid methods. expect() and .toEqual() : Here we carry out the test itself. The expect() method carries the result of a function, and toEqual() , in this case, carries a value that expect() should match. expect() makes assertions: what is an assertion? expect() is a global Jest function for making assertions. An assertion takes a boolean valued function and always expects that return value to be true — hence the name expect. In the event that false is returned, the test fails and execution stops within the corresponding it() or test() block. toEqual() is a matcher: what is a matcher? toEqual() is called a matcher. A matcher is a function whose resulting value must be true relative to what it is testing from expect() . Jest has documented the full list of matchers here, ranging from toContain and toBeFalsy to toMatch(regexpOrString) and toThrow(error) . It’s worthwhile familiarising yourself with what is available to optimise your test syntax. We get a full breakdown of each test result in the Terminal. Green ticks highlight a success whereas red crosses highlight a failure. To highlight a failure (hopefully a rare occasion!) let’s change the last test to expect(2 + 2).toEqual(5) : Our terminal now displays some handy output, including exactly where our test failed within the script itself, line number and code. Why test two or more sums instead of just one? Our first example above tested a sum function twice, the first time with 1 + 2 and then with 2 + 2. This is important, and to demonstrate why, let’s now introduce a sum() function instead of hard-coding our sums: // math.js export const sum = (x, y) => x + y; And run the test with the updated sum() : // App.test.js import {sum} from './math'; describe('Examining the syntax of Jest tests', () => { it('sums numbers', () => { expect(sum(1, 2)).toEqual(3); expect(sum(2, 2)).toEqual(4); }); }); Making these changes will again yield successful tests. However, what if we change sum() to the following: export const sum = (x, y) => 3; What we have done here is simply return a hard-coded value, 3, from our sum function. Now, if we had left our test suite with a singular assertion of a 1 + 2 sum, the tests would have passed even though the sum function itself does not function as we wish — it always returns 3!
https://rossbulat.medium.com/testing-in-react-with-jest-and-enzyme-an-introduction-99ce047dfcf8
['Ross Bulat']
2019-07-12 06:29:35.114000+00:00
['React', 'ES6', 'Programming', 'JavaScript', 'Software Engineering']
Coupling and Cohesion
Thoughts on Coupling Coupling is the measure of how dependent your code modules are on each other. High coupling is bad and low coupling is good. High coupling means that your modules cannot be separated. It means that the internals of one module know about and are mixed up with the internals of the other module. When your system is really badly coupled, it is said to be “spaghetti” code, as everything is all mixed up together like a bowl of spaghetti noodles. High coupling means that a change in one place can have unknown effects in unknown other places. It means that your code is harder to understand because complex, intertwined relationships are difficult to understand. Heavily coupled code is difficult to reuse because it is difficult to remove from the system for use elsewhere. One should strive to reduce coupling in one’s code to as high a degree as possible. Of course, your code can’t be completely decoupled. A collection of completely decoupled modules can’t do anything. They need to be coupled in a thin and light manner. I’ve said before that an interface is like a wisp of smoke — there is something there, but it is really hard to grab onto it. Code that is coupled via interfaces is coupled as thinly as it can possibly be coupled. A real-world example of heavy coupling is the space shuttle. Now, I’m no rocket scientist, but my guess is that a space shuttle is made up of hundreds of thousands — if not millions — of unique parts that fit together in one way and one way only. The parts are not reusable, and if you need to alter one part — especially if you have to alter an interface of a part — you have a problem because the other parts around it will very likely need to be altered as well. That initial alteration could spread a long way throughout the craft and have far-reaching ramifications. The space shuttle, then, is a highly coupled system. What’s an example of loose coupling? How about a space shuttle built out of Legos. Sure, it won’t go anywhere, but it would be easy to change its design as Lego pieces are easily put together and taken apart to make whatever you want. A Lego space shuttle would have low coupling. In summary, here’s what you should consider when writing uncoupled code: Your code will be easier to read because it will be simple and not complex. Your code will be reusable because it will be connected only by very thin interfaces. Your code will be easy to change and maintain because changes to the code will be isolated. So, if you don’t want your code to be tightly coupled, what do you want your code to be?
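Returning to the "wisp of smoke" point above, here is a minimal sketch in Python; the names (Storage, FileStorage, archive_report) are hypothetical, not from the article. The report code depends only on a one-method interface, so concrete backends can be swapped without touching it, which is the thin, interface-only coupling the article describes.

```python
# A minimal sketch of coupling "via an interface" (hypothetical names,
# not from the article). archive_report depends only on the one-method
# Storage protocol, never on a concrete backend.
from typing import Protocol

class Storage(Protocol):
    def save(self, key: str, value: str) -> None: ...

class FileStorage:
    def save(self, key: str, value: str) -> None:
        with open(key + ".txt", "w") as f:
            f.write(value)

class MemoryStorage:
    def __init__(self):
        self.data = {}

    def save(self, key: str, value: str) -> None:
        self.data[key] = value

def archive_report(storage: Storage, report: str) -> None:
    # Coupled to a wisp of smoke: one method, no knowledge of the backend.
    storage.save("report", report)

# Either backend satisfies the interface, so swapping is a one-line change.
archive_report(MemoryStorage(), "quarterly numbers")
```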
https://medium.com/better-programming/coupling-and-cohesion-75aa16bc9adf
['Nick Hodges']
2020-03-09 01:47:42.699000+00:00
['Design Patterns', 'Software Development', 'Software Engineering', 'Software Design', 'Programming']
I Quit Coffee for 30 Days and Drank Black Tea Instead
Tea can give you a lot of energy I was surprised to learn that, energy-wise, tea can be a great substitute for coffee. Before, when I regularly drank coffee, I could drink tea like water. It had no effect on me. But after my coffee detox, I could feel my energy levels skyrocketing before I even finished my cup of tea. Tea leaves contain more caffeine (3.5%) than coffee beans (1.1–2.2%). But because of the water temperature used to brew coffee and the number of beans used to make the drink, the average amount of caffeine that ends up in your cup of joe will be higher (96mg). The average amount of caffeine in black tea is 47mg per serving. But if you like your tea strong, as I do, your cup can contain up to 90mg, which is almost equal to the average amount in a cup of coffee. You can enjoy more cups during the day Because the average amount of caffeine in a cup of tea is about half that in coffee, you can enjoy more cups of the drink during the day. Several resources recommend not exceeding 400mg of caffeine per day. That roughly equals four cups of coffee or eight cups of tea. It might be quite hard to drink more than eight cups of tea, so you can drink it without worrying about the negative effects of too much caffeine on your health; it would be twice as hard to exceed the recommended dosage if you choose to drink tea. But please don’t forget to stay hydrated and drink enough water! Drinking too much can cause bad symptoms, just like coffee Though tea has a smaller amount of caffeine, drinking more than 3–4 cups can lead to some negative side effects, such as: Reduced iron absorption Increased anxiety and stress Trouble falling asleep and poor sleep quality Nausea Heartburn Headaches Caffeine dependence I learned that I could drink 2–3 cups of tea per day without it having a negative effect on how I feel. But this may vary from person to person since a lot of people are more sensitive to caffeine. It is crucial to listen to your body and cut down your intake if you experience any bad symptoms. You can also consult your doctor if you’re unsure about the amount of caffeine you can tolerate. Tea tastes great When my taste buds were spoiled with the amount of coffee I used to consume, I was not too fond of the taste of tea. But when tea became my only option, I grew to love it. Now I’m as excited about my cup of English breakfast as I would be about my almond milk latte. These days I don’t just use tea as a substitute for coffee; I also crave it every day and drink it as a treat like I used to do with coffee. Final thought During my 30-day experiment, I discovered that tea could become a good replacement for coffee. If you’re struggling to cut down your caffeine intake, drinking tea can be a great way to do it. You can easily cut the amount of caffeine you consume in half by replacing your cup of coffee with a cup of tea. This helped me reduce the negative symptoms that were connected with my coffee intake. I still love coffee and consider it a pretty important part of my life. Even my bio says I do. I’m not sure if I’m ready to say goodbye to it forever. My goal is to cut down on coffee as much as I can and drink it as a treat a couple of times a month. But for now, I have a great new alternative.
https://medium.com/age-of-awareness/i-quit-coffee-for-30-days-and-drank-black-tea-instead-11cb21f3dbb8
['Alice White']
2020-11-04 01:46:10.393000+00:00
['Health', 'Diet', 'Productivity', 'Habits', 'Coffee']
Introduction To Deep Learning
What Is Deep Learning And How Can I Study It? This is the first article in this series, and is associated with our Intro to Deep Learning Github repository where you can find practical examples of many deep learning applications and tactics. Read the second article here, the third here, and the fourth here. Although normally the “prework” comes before the introduction, I’m going to give the 30,000-foot view of the fields of artificial intelligence, machine learning, and deep learning at the top. I have found that this context can really help us understand why the prerequisites seem so broad, and help us study just the essentials. Besides, the history and landscape of artificial intelligence is interesting, so let’s dive in! Artificial Intelligence, Machine Learning, and Deep Learning Deep learning is a subset of machine learning. Machine learning is a subset of artificial intelligence. Said another way — all deep learning algorithms are machine learning algorithms, but many machine learning algorithms do not use deep learning. As a Venn Diagram, it looks like this: Deep learning refers specifically to a class of algorithm called a neural network, and technically only to “deep” neural networks (more on that in a second). The first neural network was invented in 1949, but back then neural networks weren’t very useful. In fact, from the 1970s to the 2010s traditional forms of AI would consistently outperform neural network based models. These non-learning types of AI include rule-based algorithms (imagine an extremely complex series of if/else blocks); heuristic-based AIs such as A* search; constraint satisfaction algorithms like Arc Consistency; tree search algorithms such as minimax (used by the famous Deep Blue chess AI); and more. There were two things preventing machine learning, and especially deep learning, from being successful: lack of availability of large datasets and lack of availability of computational power. In 2018 we have exabytes of data, and anyone with an AWS account and a credit card has access to a distributed supercomputer. Because of the new availability of data and computing power, machine learning — and especially deep learning — has taken the AI world by storm. You should know that there are other categories of machine learning, such as unsupervised learning and reinforcement learning, but for the rest of this article I will be talking about a subset of machine learning called supervised learning. Supervised learning algorithms work by forcing the machine to repeatedly make predictions. Specifically, we ask it to make predictions about data that we (the humans) already know the correct answer for. This is called “labeled data” — the label is whatever we want the machine to predict. Here’s an example: let’s say we wanted to build an algorithm to predict if someone will default on their mortgage. We would need a bunch of examples of people who did and did not default on their mortgages. We would take the relevant data about these people; feed it into the machine learning algorithm; ask it to make a prediction about each person; and after it guesses, tell the machine what the right answer actually was. Based on how right or wrong it was, the machine learning algorithm changes how it makes predictions. We repeat this process many, many times, and through the miracle of mathematics, our machine’s predictions get better. The predictions get better relatively slowly though, which is why we need so much data to train these algorithms. 
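That predict-correct-adjust loop is easy to see in code. The toy below is not from the article or its repository; it fits a single weight with gradient descent so that a prediction w * x approaches the labeled answers, just to make the loop concrete.

```python
# A toy supervised-learning loop: make a prediction, receive a correction,
# adjust the prediction mechanism. The "dataset" is labeled with y = 2x.
data = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]  # (input, known correct answer)
w = 0.0    # the prediction mechanism we will adjust
lr = 0.05  # how strongly each correction nudges the weight

for epoch in range(200):            # repeat the process many, many times
    for x, y in data:
        prediction = w * x          # make a prediction
        error = prediction - y      # compare against the labeled answer
        w -= lr * error * x         # adjust (gradient of the squared error)

print(round(w, 3))  # ~2.0: the machine's predictions got better
```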
Machine learning algorithms such as linear regression, support vector machines, and decision trees all “learn” in different ways, but fundamentally they all apply this same process: make a prediction, receive a correction, and adjust the prediction mechanism based on the correction. At a high level, it’s quite similar to how a human learns. Recall that deep learning is a subset of machine learning which focuses on a specific category of machine learning algorithms called neural networks. Neural networks were originally inspired by the way human brains work — individual “neurons” receive “signals” from other neurons and in turn send “signals” to other “neurons”. Each neuron transforms the incoming “signals” in some way, and eventually an output signal is produced. If everything went well that signal represents a correct prediction! This is a helpful mental model, but computers are not biological brains. They do not have neurons, or synapses, or any of the other biological mechanisms that make brains work. Because the biological model breaks down, researchers and scientists instead use graph theory to model neural networks — instead of describing neural networks as “artificial brains”, they describe them as complex graphs with powerful properties. Viewed through the lens of graph theory, a neural network is a series of layers of connected nodes; each node represents a “neuron” and each connection represents a “synapse”. Different kinds of nets have different kinds of connections. The simplest form of deep learning is a deep neural network. A deep neural network is a graph with a series of fully connected layers. Every node in a particular layer has an edge to every node in the next layer; each of these edges is given a different weight. The whole series of layers is the “brain”. It turns out that if the weights on all these edges are set just right, these graphs can do some incredible “thinking”. Ultimately, the Deep Learning Course will be about how to construct different versions of these graphs; tune the connection weights until the system works; and try to make sure our machine does what we think it’s doing. The mechanics that make deep learning work, such as gradient descent and backpropagation, combine a lot of ideas from different mathematical disciplines. In order to really understand neural networks we need some math background. Background Knowledge — A Little Bit Of Everything Given how easy to use libraries like PyTorch and TensorFlow are, it’s really tempting to say, “you don’t need the math that much.” But after doing the required reading for the two classes, I’m glad I have some previous math experience. A subset of topics from linear algebra, calculus, probability, statistics, and graph theory have already come up. Getting this knowledge at university would entail taking roughly 5 courses: calculus 1, 2, and 3; linear algebra; and computer science 101. Luckily, you don’t need each of those fields in their entirety. Based on what I’ve seen so far, this is what I would recommend studying if you want to get into neural networks yourself: From linear algebra, you need to know the dot product, matrix multiplication (especially the rules for multiplying matrices with different sizes), and transposes. You don’t have to be able to do these things quickly by hand, but you should be comfortable enough to do small examples on a whiteboard or paper. You should also feel comfortable working with “multidimensional spaces” — deep learning uses a lot of many-dimensional vectors. 
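As a small illustration of why those fully connected layers live in linear-algebra land (this snippet is mine, not the article's): one layer's worth of edge weights is just a matrix, and passing signals through the layer is a matrix-vector product.

```python
# One fully connected layer as a matrix-vector product (illustrative only).
import numpy as np

x = np.array([0.5, -1.0, 2.0])   # signals from a layer of 3 "neurons"
W = np.random.randn(4, 3)        # one weight per edge: 4 nodes x 3 nodes
b = np.zeros(4)                  # one bias per receiving node

signal = np.maximum(0, W @ x + b)  # transform the signals and pass them on
print(signal.shape)                # (4,): one output per next-layer node
```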
I love 3Blue1Brown’s Essence of Linear Algebra for a refresher or an introduction into linear algebra. Additionally, compute a few dot products and matrix multiplications by hand (with small vector/matrix sizes). Although we use graph theory to model neural networks these graphs are represented in the computer by matrices and vectors for efficiency reasons. You should be comfortable both thinking about and programming with vectors and matrices. From calculus you need to know the derivative, and you ideally should know it pretty well. Neural networks involve simple derivatives, the chain rule, partial derivatives, and the gradient. The derivative is used by neural nets to solve optimization problems, so you should understand how the derivative can be used to find the “direction of greatest increase”. A good intuition is probably enough, but if you solve a couple simple optimization problems using the derivative, you’ll be happy you did. 3Blue1Brown also has an Essence of Calculus series, which is lovely as a more holistic review of calculus. Gradient descent and backpropagation both make heavy use of derivatives to fine tune the networks during training. You don’t have to know how to solve big complex derivatives with compounding chain and product rules, but having a feel for partial derivatives with simple equations helps a lot. From probability and statistics, you should know about common distributions, the idea of metrics, accuracy vs precision, and hypothesis testing. By far the most common applications of neural networks are to make predictions or judgements of some kind. Is this a picture of a dog? Will it rain tomorrow? Should I show Tyler this advertisement, or that one? Statistics and probability will help us assess the accuracy and usefulness of these systems. It’s worth noting that the statistics appear more on the applied side; the graph theory, calculus, and linear algebra all appear on the implementation side. I think it’s best to understand both, but if you’re only going to be using a library like TensorFlow and are not interested in implementing these algorithms yourself — it might be wise to focus on the statistics more than the calculus & linear algebra. Finally, the graph theory. Honestly, if you can define the terms “vertex”, “edge” and “edge weight” you’ve probably got enough graph theory under your belt. Even this “Gentle Introduction” has more information than you need. In the next article in this series I’ll be examining Deep Neural Networks and how they are constructed. See you then! Part 2: Deep Neural Networks as Computational Graphs Part 3: Classifying MNIST Digits With Different Neural Network Architectures
https://medium.com/tebs-lab/introduction-to-deep-learning-a46e92cb0022
['Tyler Elliot Bettilyon']
2020-10-13 22:39:37.793000+00:00
['Deep Learning', 'Machine Learning', 'Technology', 'Artificial Intelligence', 'AI']
How Cohesion And Coupling Correlate
As I was finishing my blog post about defining service boundaries, I had a very strong feeling that there must be some abstract concept of what I was trying to explain on concrete examples… Of course, there is! It’s the concept of cohesion and coupling I will discuss in this post. In the rain (cut) by Franz Marc Let’s start with two short definitions: Cohesion: the degree to which the elements inside a module belong together. Coupling: the degree of interdependence between software modules. High cohesion and loose coupling are the most important principles in software engineering. They manifest themselves everywhere, from code to team organization. Cohesion and coupling are tightly related. Why are they so important? Both help us reduce complexity, the true fun killer of software development. To a lot of people, sadly, the concepts sound too academic and are therefore often poorly understood. What is cohesion, anyway? Tough question. The definition is pretty broad and there are several interpretations out there. Not all of them are necessarily wrong; the valid question is: which one is the most beneficial? I use the following definition as I believe it always leads to cohesive components with tight coupling inside and loose coupling outside, which is exactly what we want: The degree of cohesion of a component by a particular key equals the number of elements cohesive by the key within the component divided by the sum of the total number of elements cohesive by the key in the whole system and the number of elements not cohesive by the key inside the component. This long definition can be expressed as a simple formula: cohesion(c, k) = Nk(c) / (Nk(S) + N¬k(c)), where c stands for the component, k stands for the key, S stands for the whole system, and N stands for the number of elements (Nk counting the elements cohesive by the key, N¬k those that are not). Obviously, the maximal cohesion of a component is equal to one. This is what we strive for. I want to emphasize that cohesion doesn’t depend on the number of connections between elements; that’s what coupling is all about. Cohesion is rather about belonging together. However, cohesive components do tend to have a higher degree of coupling within the component, but that’s just a symptom of high cohesion, not the cause. The definition above might look complicated, but it’s actually quite easy. Let’s illustrate it with some examples. We measure the degree of cohesion by the violet key for the components bordered with a dashed line in the following systems: Example measurements of cohesion Functionality (business) is always the right key to use. Violet and blue can stand for sales and accounting, a product and an invoice, or user registration and ordering. Notice that my definition may be a bit oversimplified as the boundaries are not always as solid and obvious. This is why business experts must be involved. Myth busted Cohesion and coupling are almost always discussed together as they tightly correlate. The relation is sometimes a source of confusion as well, although understanding it is very useful for getting the most out of the software system under development. A typical myth I often hear people believe puts cohesion and coupling in opposition. Practically, they say that “the higher cohesion the tighter coupling”. I’ll show you how wrong this statement is. This is usually illustrated with an example: Consider the highest possible cohesion of the system where every module is represented by a single line of code (or a single function, an object with a single method, etc.). Such a degree of cohesion will inevitably increase the coupling between modules to the maximum. 
While the conclusion is true, there is a small problem in the premise. To find it, we have to recall the definition of cohesion once again. It talks about belonging together, the strength of the relationship between elements, and a common purpose. What does this mean in practice? In fact, splitting elements that belong together actually makes cohesion lower. So, in the example above, the system really doesn't have the highest possible cohesion; quite the opposite: breaking modules into the smallest possible elements separates related concepts and leads to pretty low cohesion. The moral here is: Cohesion is not something you can create automatically. Cohesion is discovered in a particular context. That is why cohesion is so hard to measure reliably. We will discuss this in detail later, stay tuned. Cohesion and coupling Let me show you some pictures. In each figure below, there are the very same elements with the very same dependencies, organized differently in each case. Related domain concepts are represented with the same color: Low cohesion, tight coupling Elements in the first picture have no explicit boundaries; it's an example of so-called coincidental cohesion. Such architecture is known as the Big Ball of Mud or the God Object (in OOP code). High cohesion, tight coupling The second picture shows a system with three modules and a lot of dependencies between them. Although the modules are highly cohesive, they are cohesive by the wrong key. This happens when code is organized by something other than a domain relationship. A typical example is the logical organization of code in the Layered Architecture: just imagine modules such as controllers, repositories, services, etc. Have you seen these somewhere already? Hell yeah! High cohesion, loose coupling The system in the third picture shows the ideal case: correctly organized modules leading to high cohesion and loose coupling. The right key for organization is functionality, in other words, a business domain. The domain defines abstractions with a stable purpose upon which the cohesion is driven. By the way, that's the main idea of Domain-Driven Design. Focus on cohesion, not coupling We have exhausted all variants except one: a system with low cohesion and loose coupling. Is it even possible to have such an architecture? Unfortunately, it is, and it's actually pretty common. Systems with low cohesion and loose coupling are the result of an incorrect understanding of the domain and of applying purely technical approaches to decouple the modules in an arbitrary way. Interfaces everywhere with no abstraction representing a domain purpose are typical for systems built in this way. Misuse of interfaces won't actually reduce coupling anyway; it just moves it into the runtime. Striving for loose coupling at any cost can (and will) harm cohesion. As loose coupling is driven by high cohesion, we should strive for high cohesion in the first place. Level of abstraction Yes, high cohesion doesn't only make the system easy to understand and change, it also reduces the level of coupling. How is this even possible? Common sense says that dependencies don't disappear simply by reorganizing elements. While this is true for the overall system dependencies, high cohesion does reduce dependencies on a higher level of abstraction. That is, although the absolute number of dependencies remains the same, the coupling is tackled on different levels of abstraction.
“The whole is greater than the sum of the parts.” ~ Aristotle Indeed, we can ignore the interdependencies inside modules and get a simplified big picture with only three loosely coupled elements: Coupling on the higher level of abstraction is dramatically reduced Neat. As we see, high cohesion actually results in loose coupling! Talk to me in code! Pictures are nice, but as software developers, we trust only code, don't we? Alright, I have some code for you. Consider a simple class for a Book Store (in JavaScript, whatever):
class BookStore {
  add(book) { … }
  remove(book) { … }
  sale(book) { … }
  receiptFor(book) { … }
}
This class does literally everything. Its cohesion is pretty low, and all clients, whatever their needs are, will be coupled to it. It's an example of a God Object. We can do better:
class Inventory {
  add(book) { … }
  remove(book) { … }
}
class Sales {
  sale(book) { … }
  receiptFor(book) { … }
}
The Inventory class looks fine, but what about Sales? Must sales and accounting really be so tightly related? Maybe it'd be better to split the functionalities into more cohesive classes:
class Sales {
  sale(book) { … }
}
class Accounting {
  receiptFor(book) { … }
}
But what if our Book Store is just a small family business with one clerk doing sales together with accounting on one old cash desk? We just hit the nail on the head: we can't really know what the right cohesion key is unless we know the domain really well. True cohesion is defined by the clients. High cohesion is achieved when there's no way to split the module any further while still satisfying the clients' needs. By the way, this is exactly what the Single Responsibility Principle teaches us. Conclusion High cohesion and loose coupling are the main design drivers towards a simple system architecture that is easy to understand, change, and maintain. High cohesion and loose coupling help us reduce accidental complexity and create modules with well-defined boundaries. Coupling is about connections; cohesion is about belonging together. Cohesion can't be created automatically; instead, it's discovered in a context. Cohesion is defined by the clients. True cohesion is domain-driven. High cohesion results in loose coupling. High cohesion is to die for. It enables all the others, loose coupling included.
https://ttulka.medium.com/how-cohesion-and-coupling-correlate-dd1716ca04fa
['Tomas Tulka']
2020-12-09 06:57:43.816000+00:00
['Programming', 'Software Development', 'Design Patterns', 'Software Engineering', 'Software Architecture']
Headline Analyzers Are Useful
Sup Drake! 7 Reasons All The Greatest Writers in The History of The World Aren't Afraid to Always Choose Powerful Headlines to Alert John Cena's Eyes Attention You don't want to read this, but you will Courtesy WWE All Rights Reserved On Facebook there was a question posted about headline analyzers. The true purpose was attracting readers, because the poster was promoting a story they wrote about headline analyzers. I gave my opinion anyway because I'm a writer and it was a prompt. I find headline generators more useful than analyzers. I can spot a headline created with the help of an analyzer and usually stay away. "7 reasons all the greatest writers in the history of the world always use powerful headlines to capture Bon Jovi's full attention" would probably get a high score but… Heh. I got an idea. Brb. — Hogan Torah Welcome to my idea. The quest for the perfect headline I free-styled the headline "7 reasons all the greatest writers in the history of the world always use powerful headlines to capture Bon Jovi's full attention" off the top of my head. I knew I was above a 70, but it would need a little fine-tuning if I wanted to hit the hundred.
https://medium.com/wreader/7-reasons-all-the-greatest-writers-in-the-history-of-the-world-arent-afraid-to-always-choose-ec9b39804420
['Hogan Torah']
2020-12-02 02:26:28.639000+00:00
['Headlines', 'AI', 'Writing', 'Success', 'Machine Learning']
Our Shared Vision of the Future
It's time to start being honest with ourselves. Blockchains are confusing, and kind of scary. They're not easy to understand. People use words like "trustless" and "consensus" and "incentive structure" when describing them. Like, what does that even mean? Photo by Andreas P. on Unsplash Well, society is evolving. Right now, we're going through a rebirth of trust and values. We started out trusting each other using facial expressions, body language, and eventually spoken language. Trust was based on social reputation. As technology continued to grow, trust became "institutionalized" through the rise of centralized entities, like Facebook, Google, or the Federal Reserve. This was the natural evolution. This new global, institutionalized trust model allowed strangers all over the world to communicate, share knowledge, participate in markets, and do business. That is, so long as we trusted the institutions in the middle to properly handle our data and money. Things are good until they aren't. But the shift is happening. I can sense it. Can you? Public blockchain systems, beginning with Bitcoin, use our common interest — money — to incentivize immutability and finality of cryptographically pure data. That's at the core of what's happening. This is a beautiful base on which to build new trust models. In fact, the builders among us are doing just that: hard at work designing, prototyping, and creating new businesses and industries based on a provably better way to participate in self-sovereign digital activities. And with these new systems, the world is figuring out what it means to continue participating in a global digital economy like we're used to, but without that traditional centralized institutional elephant in the room that we're currently begrudgingly using to power our digital lives and businesses. Like all good movements, this one happens from the bottom up. These new systems have been important since their ideas began to form over a decade ago. But they only truly matter once integrating with them is the norm. When the fabric of secured blockchain systems underpins our digital activities, considering the alternative will be ludicrous. It's time for us, as individuals, as businesses, as a community, as a species, to demand better for ourselves. We're entrenched in our current ways, but we don't have to stay here. Let's walk down this path together.
https://medium.com/decentlabs/our-shared-vision-of-the-future-d518791f1a26
['Adam Gall']
2018-12-18 17:20:03.190000+00:00
['Blockchain', 'Future', 'Trust', 'Evolution', 'Bitcoin']
Public Relations: Using the Media as a Third-Party to Influence Your Audience
What are Public Relations? Public relations (PR) is the practice of managing the release and spread of publicity from a firm or individual to the public in order to influence opinions, attitudes, or behaviours. PR aims to build and maintain relationships with stakeholders and those who influence the target audience, to enhance the organisation's public reputation. Public relations professionals are storytellers and image shapers who create a positive narrative for their clients by working closely with journalists and other media. This allows them to manage and generate positive publicity for their clients to enhance their reputations. Public relations is controlled internally as a strategy, but publicity is controlled and distributed externally. "(Public Relations) helps establish and maintain mutual lines of communication, understanding, acceptance and cooperation between an organisation and its publics; involves the management of problems and issues; helps management keep abreast of and effectively utilise change." (Harlow, 1976) PR has been a profession since the dawn of the 20th century, but the roots of the idea of widely influencing public opinion and action can be found in the movement to abolish slavery in England 100 years before that. Because of these beginnings, one of the underlying assumptions of PR is that it should be socially responsible and go beyond organisational goals to play a constructive role in society. Depending on the situation, PR will have a particular tone. For example, it could be focused on showing empathy and understanding, storytelling and creativity, or more persuasive messaging. Messages are tailored to the relevant target audiences. PR applies to all organisations, from small businesses to corporations to governments or activists. They could be from the private, public or third sector. The third sector is an umbrella term for voluntary and community organisations such as social enterprises. Some of the tools used for PR are: Owned media (e.g. website) Earned media (e.g. newspapers) Shared media (e.g. social networks) Sponsorships and fundraising Face-to-face Photography Moving images (video) Print (e.g. newsletters) Events (e.g. conferences) Public Speaking
https://medium.com/the-innovation/public-relations-using-the-media-as-a-third-party-to-influence-your-audience-3a9bf00281fa
['Daniel Hopper']
2020-11-11 19:42:09.848000+00:00
['Public Relations', 'Strategy', 'Business', 'Startup', 'Marketing']
My Ego Sabotaged My Writing
Illustration by Fresh Idea Full manuscript in two months. That was the goal. I was on vacation in Mexico, no less, when I checked my email and saw the opportunity. It was mid-June and the conference was happening near the end of August. Agents would be there, and I told myself that nothing was going to stop me from getting this done, even though I'd never written a manuscript in less than a year. But I was determined. I put myself on a rigorous schedule: Write from 5:00 am to 8:30 am. Do freelance work from 9–5 (I still needed to make money). Write again from 7:00 pm to 10:00 pm, or whenever my brain shut down. I was literally writing all day with only a short break. On many of those days, I'd be eating while typing, I'd be in the bathroom ruminating about what I'd written and how to make it better, and my showers were dedicated to new ideas for the next chapter. This went on for two months. I made the deadline, and to be honest, I was proud of myself. But as I started pitching the manuscript, the reactions weren't what I expected. "The concept of the story is good but the writing is not what I expected." "It's just not all the way thought out." "I like it but don't love it." "You have something here, but it's not fully baked." My disappointment felt physical. With each negative response, my body convulsed. Part of it was trying to cope with the rejection, but the other half was knowing that the work I was showing wasn't my best. Not yet. I needed to spend more time with it, to live with the characters a little bit longer so I better understood their motivations. I knew this. I knew this the whole time. Fighting Urgency There's an urgency we feel as writers. It's born from the reality that nothing we do is guaranteed to be heard, much less appreciated or revered. This causes us to pounce on any opportunity that may increase the chance that our writing can transcend our screens and find its way to an audience. And if it takes writing 60,000 words in two months, so be it. But consider the repercussions of this urgency. First, the mental anguish caused by my writing schedule. I was sitting in front of a laptop for 12–13 hours a day. I'm a big proponent of giving creativity some space to breathe, and there was no space over that two-month time span. What filled that space was pressure, all of it self-inflicted. And while a little bit of pressure keeps you sharp, the balance wasn't right and it wore on me daily. Instead of waking up excited to write, I woke up feeling like every morning was the Super Bowl and I had to win. The other repercussion of this urgency was delusion. Because of the pace of my writing, I never gave myself time to reflect on whether or not it was actually good. Whatever idea came to my mind made its way to the page. If you're a writer, you'd have to be delusional to think that all of your ideas are good ideas. And they weren't. Instead, the manuscript was more like a sketch that hadn't been painted. All of the colours were missing. And then there was me. After my initial pride in having completed a manuscript in record time, I was left to face the torment of my own doing. The disappointment came even before the rejection. I knew what I had created wasn't done with the passion, thoughtfulness and technique that's required of effective storytelling. More than anything, that was the most difficult part to accept. That I could've done better. What I should've done I let the thought of "making it" dictate my creative process.
I saw an opportunity to prove that all these years of writing, all the rejection and the disappointment, actually meant something. My ego needed to be satiated. It was tired of feeling empty. I was tired of feeling invisible. Someone needed to tell me I was great. Even though this had nothing to do with me and everything to do with my writing, I didn't make that distinction. Who I am and the words I put together exist in concert. One can't be criticized without the other feeling its sting. That concept is another article in itself. What I should've reminded myself was that the work comes first. Before any acknowledgement, before any luck, before any peek at the kind of success I expect of myself, the work must be done. There is no skipping steps, no jumping ahead of the line if the work hasn't been properly prepared to match the opportunity. I learned that the hard way. When I tucked my ego back inside, I realized how much work had to be done. I needed to get better, to push myself to become a better writer, and that would take time and an investment in my education. With my ego no longer leading the way, those decisions became clear. The path wasn't paved with stone, but I could see the way. Despite all of this, I'm thankful for the experience. It led me to a better understanding of what I'm capable of as a writer. It also made me realize how important it is to take my time through the process. I'm not one of those writers who can bang out 50,000 words in a month and have it be any good, and that's OK. I know what it takes for me to be my best and I'll keep doing that till I reach my goals.
https://medium.com/cry-mag/my-ego-sabotaged-my-writing-b5abee61ac76
['Kern Carter']
2020-12-15 13:49:30.516000+00:00
['Creativity', 'Creative Writing', 'Publishing', 'Ego', 'Writing']
New Writers: Submission Guidelines
New Writers: Submission Guidelines Write for Inspired Writer Photo by Corinne Kutz on Unsplash Are you an emerging writer with a story to share? You're in the right place! Inspired Writer aims to bring together experienced and emerging writers to share their knowledge and stories. We want to support new writers, like you, to share your story, develop your writing skills, and kick-start your career. How to Apply to Be a Writer Remember to become a follower of Inspired Writer and join our newsletter to keep up with the best stories, opportunities and mentoring, and writing tips. Read all the guidelines carefully before submitting. If we accept you as a writer, you can then submit your story directly to Inspired Writer. *Please Note: We ONLY accept unpublished drafts from new writers. Please only submit one story at a time.
https://medium.com/inspired-writer/new-writers-submission-guidelines-59ace27d9e4a
['Kelly Eden']
2020-10-03 21:11:05.173000+00:00
['Write For Us', 'Writing', 'Submission Guidelines', 'Writing Tips', 'Creativity']
Creating Python Functions for Exploratory Data Analysis and Data Cleaning
Exploratory Data Analysis and Data Cleaning are two essential steps before we start to develop Machine Learning models, and they can be time-consuming, especially for people who are still familiarizing themselves with the whole process. EDA and Data Cleaning are rarely a one-time, linear process: you might find yourself going back to earlier sections and modifying the way you treat the dataset quite often. One way to speed up this process is to recycle some of the code you find yourself using over and over again. This is why we should create functions to automate the repetitive parts of EDA and Data Cleaning. Another benefit of using functions in EDA and Data Cleaning is to eliminate the inconsistency of results caused by accidental differences in the code.
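As a minimal sketch of the idea (the function name and the particular checks are my own illustrative choices, not from the original post), a reusable summary helper might look like this:
import pandas as pd

def summarize(df: pd.DataFrame) -> pd.DataFrame:
    # One row per column: dtype, missing-value counts, and cardinality.
    return pd.DataFrame({
        'dtype': df.dtypes,
        'n_missing': df.isna().sum(),
        'pct_missing': df.isna().mean().round(3),
        'n_unique': df.nunique(),
    })

# The same call works on every dataset, so repeated analyses stay consistent:
# df = pd.read_csv('some_dataset.csv')
# print(summarize(df))
Because every dataset goes through the identical function, two runs of the analysis can no longer diverge due to a stray typo in hand-copied code.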
https://towardsdatascience.com/creating-python-functions-for-exploratory-data-analysis-and-data-cleaning-2c462961bd71
['Freda Xin']
2020-01-21 22:58:04.781000+00:00
['Data Visualization', 'Exploratory Data Analysis', 'Python', 'Data Science']
6 Pandas Operations You Should Not Miss
6 Pandas Operations You Should Not Miss Advanced methods and functions to crunch some data Pandas is used mainly for reading, cleaning, and extracting insights from data. We will look at some advanced uses of Pandas that are very important to a Data Scientist. These operations are used to analyze data and manipulate it if required, in the steps performed before building any machine learning model. Summarising Data Concatenation Merge and Join Grouping Pivot Table Reshaping multi-index DataFrame We will be using the very famous Titanic dataset to explore the functionalities of Pandas. Let's just quickly import NumPy and Pandas, and load the Titanic dataset from Seaborn.
import numpy as np
import pandas as pd
import seaborn as sns
df = sns.load_dataset('titanic')
df.head()
Summarizing data The very first thing any data scientist would like to know is the statistics of the entire data. With the help of the Pandas .describe() method, we can see the summary stats of each feature. Notice that the stats are given only for numerical columns, which is the default behavior; we can ask the describe function to include categorical columns as well with the parameter 'include' set to 'all' (include='all').
df.describe()
Another method is .info(). It gives the metadata of a dataset. We can see the size of the dataset, the dtype, and the count of null values in each column.
df.info()
Concatenation Concatenation of two DataFrames is very straightforward, thanks to the Pandas method concat(). Let us take a small section of our Titanic data with the help of vector indexing. Vector indexing is a way to specify the row and column names/integers we would like to index, in any order, as a list.
smallData = df.loc[[1,7,21,10], ['sex','age','fare','who','class']]
smallData
Also, I have created a dataset with matching columns to explain concatenation. By default, the concatenation happens row-wise. Let's see how the new dataset looks when we concat the two DataFrames.
pd.concat([smallData, newData])
What if we want to concatenate ignoring the index? Just set the ignore_index parameter to True.
pd.concat([newData, smallData], ignore_index=True)
If we wish to concatenate along the columns, we just have to change the axis parameter to 1.
pd.concat([newData, smallData], axis=1)
Left table-smallData, Right table-newData Notice the changes? As soon as we concatenated column-wise, Pandas arranged the data in order of the row indices. In smallData, rows 0 and 2 are missing but present in newData, hence Pandas inserts them in sequential order. Row 1, however, is present in both datasets, and Pandas retained the data of the first dataset because that was the first one we passed as a parameter to concat. Also, the missing data is represented as NaN. We can also perform concatenation in SQL-join fashion. Let's create a new DataFrame 'newData' having a few columns the same as smallData, but not all. If you are familiar with SQL join operations, you will notice that .concat() performs an outer join by default. Missing values for unmatched columns are filled with NaN.
pd.concat([smallData, newData])
We can control the type of join operation with the 'join' parameter. Let's perform an inner join that takes only the columns common to the two.
pd.concat([smallData, newData], join='inner')
Merge and Join Pandas provides us an exclusive and more efficient method, .merge(), to perform in-memory join operations. The merge method implements a subset of relational algebra, the theory that underlies SQL joins.
I will be moving away from our Titanic dataset only for this section, to ease the understanding of join operations with less complex data. There are different types of join operations: One-to-one Many-to-one Many-to-many The classic data used to explain joins in SQL is the employee dataset. Let's create the DataFrames. Left table-df1, Right table-df2 One-to-one A one-to-one merge is very similar to column-wise concatenation. To combine 'df1' and 'df2' we use the .merge() method. Merge is capable of recognizing common columns in the datasets and using them as the key, in our case the column 'employee_name'. Also, the names are not in order. Let's see how merge does the work for us by ignoring the indices.
df3 = pd.merge(df1, df2)
df3
Many-to-one Many-to-one is a type of join in which one of the two key columns has duplicate values. Suppose we have supervisors for each department and there are many employees in each department; hence, many employees to one supervisor. Many-to-many This is the case where the key column in both datasets has duplicate values. Suppose many skills are mapped to each department; then the resulting DataFrame will have duplicate entries. Merge on uncommon column names and values Uncommon column names Many times merging is not that simple, since the data we receive will not be so clean. We saw how merge does all the work provided we have one common column. What if we have no common columns at all? Or more than one? Pandas provides us the flexibility to explicitly specify the columns to act as the key in both DataFrames. Suppose we change our 'employee_name' column to 'name' in 'df2'. Let's see how the datasets look and how to tell merge explicitly which columns are the keys. The parameter 'left_on' specifies the key of the first DataFrame and 'right_on' the key of the second. Remember, the value of 'left_on' should match a column of the first DataFrame you passed, and 'right_on' a column of the second. Notice we get a redundant column 'name'; we can drop it if not needed. Uncommon values Previously we saw that all the employee names present in one dataset were also present in the other. What if some names are missing? By default merge applies an inner join, meaning the join is performed only on common values. This is not always the preferred way, since there will be data loss. The method of joining can be controlled using the parameter 'how'. We can perform a left join or a right join to overcome the data loss. The missing values will be represented as NaN by Pandas.
print('-------left join--------', pd.merge(df1, df2, how='left'))
print('-------right join--------', pd.merge(df1, df2, how='right'))
GroupBy GroupBy is a very flexible abstraction; we can think of it as a collection of DataFrames. It allows us to do many different powerful operations. In simple words, it groups the entire dataset by the values of the column we specify and allows us to perform operations to extract insights. Let's come back to our Titanic dataset. Suppose we would like to see how many male and female passengers survived.
print(df.groupby('sex'))
df.groupby('sex').sum()
Notice that printing only the groupby, without performing any operation, gives a GroupBy object. Since there are only two unique values in the column 'sex', we can see a summation of every other column grouped by male and female. More insightful would be to get the percentage. We will capture only the 'survived' column of the groupby result above, sum it, and calculate percentages.
data = df.groupby('sex')['survived'].sum()
print('% of male survivors', (data['male']/(data['male']+data['female']))*100)
print('% of female survivors', (data['female']/(data['male']+data['female']))*100)
Output
% of male survivors 31.87134502923976
% of female survivors 68.12865497076024
Under the hood, the GroupBy function performs three operations: split-apply-combine. Split - breaking up the DataFrame in order to group it by the specified key. Apply - computing the function we wish, like an aggregation, transformation, or filter. Combine - merging the output into a single DataFrame. Courtesy-Python Data Science Handbook by Jake VanderPlas Perhaps the most powerful operations that can be performed on a groupby are: Aggregate Filter Transform Apply Let's see each one with an example. Aggregate The aggregate function allows us to perform more than one aggregation at a time. We need to pass the list of required aggregates as a parameter to the .aggregate() function.
df.groupby('sex')['survived'].aggregate(['sum', np.mean, 'median'])
Filter The filter function allows us to drop data based on a group property. Suppose we want to see data where the standard deviation of 'fare' is greater than a threshold value, say 50, when grouped by 'survived'.
df.groupby('survived').filter(lambda x: x['fare'].std() > 50)
Since the standard deviation of 'fare' is greater than 50 only for values of 'survived' equal to 1, we see data only where 'survived' is 1. Transform Transform returns a transformed version of the entire data. The best example to explain it is centering the dataset. Centering the data is nothing but subtracting from each value the mean of its respective column.
df.groupby('survived').transform(lambda x: x - x.mean())
Apply Apply is very flexible; unlike filter and transform, the only criterion is that it takes a DataFrame and returns a Pandas object or a scalar. We have the flexibility to do anything we wish inside the function.
def func(x):
    x['fare'] = x['fare'] / x['fare'].sum()
    return x
df.groupby('survived').apply(func)
Pivot tables Previously in GroupBy, we saw how 'sex' affected survival: the survival rate of females is much higher than that of males. Suppose we would also like to see how 'pclass' affected survival, with both 'sex' and 'pclass' side by side. Using GroupBy we would do something like this.
df.groupby(['sex', 'pclass'])['survived'].aggregate('mean').unstack()
This is more insightful; we can easily make out that passengers in the third-class section of the Titanic were less likely to survive. This type of operation is very common in analysis. Hence, Pandas provides the function .pivot_table(), which performs the same with more flexibility and less complexity.
df.pivot_table('survived', index='sex', columns='pclass')
The result of the pivot table function is a DataFrame, unlike groupby, which returned a groupby object. We can perform all the usual DataFrame operations on it. We can also add a third dimension to our result. Suppose we want to see how 'age' has also affected the survival rate along with 'sex' and 'pclass'. Let's divide 'age' into groups: 0–18 child/teenager, 18–40 adult, and 41–80 old.
age = pd.cut(df['age'], [0, 18, 40, 80])
pivotTable = df.pivot_table('survived', ['sex', age], 'class')
pivotTable
Interestingly, female children and teenagers in the second class have a 100% survival rate. This is the kind of power the pivot table of Pandas has. Reshaping Multi-index DataFrame To see a multi-index DataFrame from a different view, we reshape it.
Stack and unstack are the two methods to accomplish this. unstack() It is the process of converting a row index to a column index. The pivot table we created previously is multi-indexed row-wise. We can move the innermost row index (age groups) into the innermost column index.
pivotTable = pivotTable.unstack()
pivotTable
We can also convert the outermost row index (sex) into the innermost column index by using the parameter 'level'.
pivotTable = pivotTable.unstack(level=0)
pivotTable
stack() Stacking is exactly the inverse of unstacking. We can convert a column index of a multi-index DataFrame into a row index. The innermost column index 'sex' is converted to the innermost row index. The result is slightly different from the original DataFrame because we unstacked with level 0 previously.
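The stack() call itself is not shown above; as a short sketch continuing the pivotTable variable (the level choice is just for illustration), it is simply:
# Move the innermost column level back into the row index.
stacked = pivotTable.stack()

# A specific column level can be chosen as well, e.g. the outermost one:
# stacked = pivotTable.stack(level=0)
print(stacked)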
https://medium.com/towards-artificial-intelligence/6-pandas-operations-you-should-not-miss-d531736c6574
['Sujan Shirol']
2020-10-03 05:51:47.992000+00:00
['Python', 'Exploratory Data Analysis', 'Data Analysis', 'Data Science', 'Pandas']
Think of Fiction Writing as an Act of Telepathy
Is Fiction Writing Really an Act of Telepathy? You bet it is! In all my years of writing fiction, especially in the last ten years when I've primarily been writing novels, I've never really thought of the practice as telepathy. Because for the most part, I'm telling myself the story, not talking to anyone else. A reader picking up your book months, maybe years, down the road is an act completely separate from the author's own experience, after all. Being an author isn't like being a filmmaker in this regard. If you're the director of a movie, when everything is done, all the shooting and all the intensive post-production, you eventually get to screen your movie for an audience and see how you did. You can sit in a large theater and watch what moments make people laugh, watch what parts make them jump out of their seats. I used to make movies, and this part of the process was always a huge thrill. I remember screening one of my movies for a crowd of more than 100 people, and at the end, as the credits rolled, I heard at least five people sniffling, trying to hold back tears. A comedic movie of mine a year later got big laughs, followed by a huge round of applause at the end. Screening your own movie for an audience is enthralling like nothing else, it really is.
https://medium.com/the-partnered-pen/think-of-fiction-writing-as-an-act-of-telepathy-beb5bd0fe130
['Brian Rowe']
2020-08-10 13:31:01.361000+00:00
['Creativity', 'Mystery', 'Life', 'Film', 'Writing']
Write Books in React With Next.js and MDX
Dynamic Paths in Next.js This section is confusing if you don't know how dynamic routes work in Next.js! Just try to get a general picture of what is going on… but the code is there for you to ponder. Now that we have a list of routes and locations of content, we need to feed this into Next.js, create routes, and serve the content. I will not go too deep into the source code, but essentially we are performing a depth-first search of the URL tree and making a list of all nodes in the tree that we need to generate routes from. We make our object look something like this: This might look kind of funny, but the point of this object of objects is that we can map over the keys in Next.js's getStaticPaths. This generates pages for a catch-all page [...id].js: And you can use the path provided to you in getStaticProps to retrieve metadata like the GitHub URL: Now everything is hooked up.
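The snippets referenced above are embedded as gists and not reproduced here. As a language-neutral illustration of the traversal being described (collecting every node of a URL tree, parents before children, so each node can become a route), here is a short sketch in Python with an invented tree shape; the original project does this in JavaScript:
def collect_paths(tree, prefix=()):
    # Depth-first search: yield the path of every node in the tree.
    for segment, subtree in tree.items():
        path = prefix + (segment,)
        yield list(path)
        if isinstance(subtree, dict):
            yield from collect_paths(subtree, path)

# Invented tree for illustration:
routes = {'book': {'chapter-1': {'section-a': {}}, 'chapter-2': {}}}
for p in collect_paths(routes):
    # A shape comparable to what getStaticPaths expects for a [...id].js route.
    print({'params': {'id': p}})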
https://medium.com/better-programming/write-books-in-react-with-next-js-and-mdx-8deec9fec761
['Matthew Caseres']
2020-12-22 16:37:09.757000+00:00
['Programming', 'React', 'JavaScript', 'Nextjs', 'Typescript']
I Published Six Articles on Vocal
Possibilities / Entrepreneurship I Published Six Articles on Vocal This is what I learned. Introduction I published a story about my initial experience of Vocal. Readers received that story on my experimentation well. More importantly, the responses motivated me to write a follow-up on my findings. My purpose was to introduce my findings and initial impressions of Vocal+. I mentioned the basics and provided adequate information to get you started writing for Vocal. Many writers found Vocal useful as a freelance opportunity. Several writers added to my messages. From your comments, I learned about new features. Thank you all for your excellent input. Through this initiative, I met new writers and started collaborating. One simple story can open a new communication path. I am grateful for the outcome. Purpose of this post In this post, I will share my findings based on six articles I submitted to Vocal. Here are the links to my six articles, for two reasons: a) To show you the type of articles suited to Vocal; b) To indicate the communities and topics. I highlighted the communities in bold format. You can find the topics when you click on each link. Some topics were a nice surprise, as I did not know they existed on Vocal. I deliberately selected these six articles, representing different topics and audiences, for my experimentation. I am still experimenting to discover new features. I will continue to record my observations and share them with you in another post. You can find my observations in the following sections. I created subheadings to make them easy to read and understand. Power of Vocal communities One of the key findings is the power of communities on Vocal. I am discovering new communities with each submission. In the last three submissions, I found the Blush, 01, and Motivation communities. Even though there is no follow feature for writer profiles, it is easy to find the contributors. Each contributor has a profile. The profile includes a list of stories. The stories give a hint about the communities the writers follow. They are highlighted in various colors. I came across several Medium writers publishing stories on Vocal. It was a pleasure to notice familiar faces, especially contributors to my publication. Challenges and nominations on Vocal I discovered a new feature: you can nominate your stories for competitive challenges. The latest competition is called "The Perfect Pairing". Your nominated story has a chance to win a prize. How cool is this! Here is some information about this specific competition, which I extracted for you from the Vocal site: First place: $2,500 + $100 Gift Card Second place: $1,000 + $100 Gift Card Third place: $500 + $100 Gift Card Indication of views and reads on Vocal My six stories received reasonably good views. When I say reasonable, I mean that the number of views for my stories exceeded my expectations. To give you an idea, my stories on Vocal received at least 20 times more views than they did on Medium in my first week. To put this into perspective, it took me six months to get reasonable views on Medium, but it took only one week on Vocal. So, it looks promising. The main reason for the 20 times more views is leveraging my Medium experience. I had no idea what I was doing when I joined Medium. I was totally in the dark and felt lonely. With Vocal, by contrast, I knew how to promote my content. It is not only the platform; my informed approach to content marketing helped me gain 20 times more views on Vocal in my first week.
This approach indicates that success for writers depends on taking personal responsibility for marketing. Unless you meaningfully engage and promote your content, the platforms, whatever their sizes, will not make a real impact on the outcome. Self-publishing versus moderated publishing It is a great benefit to self-publish our stories. I found this feature useful on Medium. We cannot self-publish on Vocal. But it is not a big deal. In fact, it served as a blessing in disguise for me. When we submit our stories, they are reviewed by moderators and published in relevant communities and topics. Even though I was initially concerned about not being able to self-publish, moderated publishing brought a new advantage to me. Let me explain. There are many communities on Vocal. I couldn't find any description of those communities, and some names are not even descriptive. For example, I did not know what the community 01 meant until I saw my cybersecurity story in this community. However, moderators found the right community and topic fitting the purpose of my content. With the help of moderators, my six stories were published in different communities, and I discovered the topics of specific communities. Learning about a topic is an advantage because, when I click on the link for each topic, I can find the writers who write about that specific topic. My initial frustration and confusion turned into joy as I discovered new topics and writers. Maybe Vocal created this ambiguity on purpose. Sometimes mystery can be a good thing. I can see some psychological benefits from my experience. Earning and compensation I shared the earning scheme in my previous post. Vocal pays based on views. But in reality, they mean actual reading time for your story. Just clicking on a story and scrolling through it will not register as a view. There are two payment streams based on membership levels. For the free account, they pay $3.80 per 1,000 reads. For the paid version (Vocal+), they pay $6.00 per 1,000 reads. The paid account (Vocal+) subscription price is USD 9.99. There are some additional benefits for Vocal+ membership mentioned on their website. I mentioned the creative tips feature in my previous story. Readers can give tips using their credit card, which can be paid to the writers via their Stripe accounts. Tips range between $1 and $20. Since the last post, I found out that there is a minimum payout notice: Non-Member — $35 and Vocal+ Member — $20. My understanding is that you cannot withdraw tips under $20 if you are a Vocal+ member. There is also a small fee deducted by Vocal from tip earnings. As earning money is not the primary motivation for me, I prefer not to focus on it and not go into details. The income for my level of writing is negligible on both Medium and Vocal, hence not worth fussing about. However, to give you an indication of earnings from Vocal stories, 40 views equated to around 20 cents per the stats of my stories in the Vocal dashboard. You can make comparative calculations based on this indicative figure. Major benefits for me My key focus in writing on content platforms, including Medium and Vocal, is to enhance my network by meeting new writers, readers, and collaborators. Medium helped me meet thousands of new writers, readers, and collaborators. I am grateful. I started experiencing a similar effect with Vocal. With my engagement on Vocal in a week, I noticed some improved traffic to the content on my websites. The number of views of my blogs increased by at least 10%.
I believe readers from Vocal communities are discovering my content, exploring my profile, and subscribing to my blogs. This is one of the essential benefits and a desired by-product of publishing on Vocal. You may find other reasons to publish on Vocal. Anecdotes and Validation There are always good and bad comments about any platform on social media. It is inevitable. Based on my experience with social media, I take reader comments with a pinch of salt, as there may be motives that we do not know. I read some negative and discouraging comments about Vocal on social media. However, most of them turned out to be noise. When I fact-checked those statements, I found out that some people just make arbitrary claims with no evidence. For example, some writers were trying to judge the platform on only one story, which they submitted and which did not perform. Some commented that it was a scam, with allegations such as that they don't pay earnings to the writers. However, I noticed that one of the engaging Medium writers, Aamir Kamal 🚀🚀🚀, debunked most of the claims by providing his personal earning history publicly. Aamir also interviewed one of the high-earning members on YouTube. I noticed that the high earner is a founding member. I don't know yet what a founding member means. I will update you when I discover this membership type. Will I continue publishing on Vocal? Several writers asked me whether I would continue submitting content to Vocal. Based on my current findings, the answer is yes. The main reason is that I saw immediate benefits from publishing six stories on Vocal. My experimentation proved it. I want to continue experimenting and discovering opportunities. Should you write for Vocal? Another frequently asked question from writers is "should we create an account and write for Vocal?". This is entirely up to you. I don't endorse any writing platform, tools, or services. However, if I were you, I would give it a go. Let me briefly share my reasons for the encouragement with an example. A year ago, I did not know anything about Medium. In such a short time, I gained thousands of followers, met amazing people across the globe, and created significant publications followed by 45K readers and contributed to by 5.6K writers. This all happened with an open mind and by giving it a go despite the initial ambiguity on Medium. Yes, in the first six months, exposure to my content was extremely slow on Medium, but once my content and profile were discovered in the second six months, the exposure was exponential. The reason I share my Medium experience is that Vocal has the same potential. Since there are many communities on Vocal, who knows, your profile and content may be discovered by some communities, and you may gain substantial exposure to a new audience. Vocal offers a free account. It is risk-free. You have nothing to lose. If you see some benefits, you can upgrade it to Vocal+. The best thing is that you can cancel it anytime. They even provide a one-month free membership for Vocal+. Why not try and see? Simply put, if you don't buy a lotto ticket, you will never have a chance to win. Re-purposing Content One of the benefits of writing for another platform is re-purposing your content. This is a critical content marketing strategy. I have been successfully re-purposing my content for three decades. Content management experts such as Jeff Herring, Tim Maudlin, and MaryJo Wagner, PhD are big on this topic. Content re-purposing has provided me many benefits.
For example, I published academic papers in peer-reviewed journals for a narrow audience. By converting the content to public language and publishing it on popular platforms, my content gained a new audience, in fact a bigger one. I use the same approach for my Medium content. For example, I re-write some of my Medium stories to appeal to a specific community on Vocal. As long as you provide a disclaimer indicating that the original content belongs to you, Vocal allows you to publish your previously published material. The ability to publish your prior content is an excellent opportunity for content re-purposing. This is a substantial benefit for freelancers. I can now answer the question of whether you can publish your previously published stories on Vocal. The answer is yes, based on my recent finding. All you need to do is provide a disclaimer with a link, so that moderators don't have to waste their time validating the copyright ownership. Conclusion In this story, I shared my recent experience with a new content development and publishing platform called Vocal. As I mentioned in my previous article, the learning curve for Vocal was gentle for me. I became self-sufficient in a week. Publishing as few as six articles brought immediate, noticeable benefits. The biggest gain was visibility for my blogger profile and content across multiple platforms. Yes, there will always be negative comments or hype about any platform. We have the power to filter these extreme viewpoints by applying logic and by low-risk trial and error. It can be useful to give a new platform a try. We never know unless we try it. Keeping an open mind is a sign of a growth mindset. You have nothing to lose but a lot to gain with a growth mindset. I wish you all the best in your writing career. If you want to collaborate for success on Vocal, please join my Quora Space specifically designed for Vocal writers and stories. I will help you succeed. You can find my Vocal profile from this link. Thank you for reading my perspective.
https://medium.com/illumination/i-published-six-articles-with-vocal-f63309c446d4
['Dr Mehmet Yildiz']
2020-11-21 10:45:29.982000+00:00
['Leadership', 'Entrepreneurship', 'Business', 'Writing', 'Freelancing']
PAIRWISE RANKING : EPL PREDICTION — Part2
Overview Pairwise ranking has multiple use cases in fields like search ranking optimization and product comparison. It holds an edge over traditional cumulative comparison: with pairwise ranking we can say A is n% better than B/C/D, while also weighting the effect of each variable by its relevance. The theory behind the method is detailed in the previous article of this series: https://medium.com/@davidacad10/pairwise-ranking-epl-prediction-3ce755575958 Image By : thinkersnewsng Assuming we now have a brief idea of how pairwise ranking works behind the curtains, let's move to an interesting application: predicting the table standings of the English Premier League at the end of the current season, 2020–21. The parameter we use to rank teams is the head-to-head results in the last 5 matches. It's quite interesting to see how close it gets to the real table standings when validated on the 2019–20 season. Case In Action The parameter we use to rank teams is the head-to-head results each team had in the last 5 matches. A win is awarded 3 points, a draw 1, and a loss 0, just as in the EPL. So the maximum rating a team can have is 15 and the minimum is 0. The objective we are aiming for is predicting the final standings at the end of the season, at the very beginning of the season, without even a match being played. [We can try more real-time models for match-by-match prediction later :D] Just as in every data science problem, the important part is to get the data ready. There are quite a lot of good sources available. I have collected the match-by-match results from 2004 onward from football.com and collated them into a single file. You can access the repo at: https://github.com/davidacad10/EPL_Analytics/tree/master/Pairwise_Ranking. Let's load the data and get to it:
library(rio)
library(dplyr)
library(xgboost)
library(knitr)
rm(list = ls())
data = import("Pairwise_Ranking_Data.xlsx")
data = data %>%
  mutate(Season_Start_Date = as.Date(Season_Start_Date),
         Season_End_Date = as.Date(Season_End_Date),
         Prev_SSD = as.Date(Prev_SSD),
         Prev_SED1 = as.Date(Prev_SED1))
The data frame has the H2H results from the 2004 season onward, with ratings as described before. In the first week of a season, the only H2H available is that of previous seasons. Hence for every season N, the target rank is the final table standing the team achieved in season N+1. For example, at the end of season 2018–19, when Chelsea had a 9-point H2H rating over Man City and 6 over Liverpool, they ended up finishing 3rd the next season, 2019–20. Hence for a team Ti, the H2H ratings over all teams T at the end of season N are the parameters used to predict the rank in season N+1. Let's divide the data into train and test. There was a shift in the EPL after the 2012–13 season, when all the teams got highly competitive. So we can select training data from the 2013–14 season to 2018–19. We can test the model on 2019–20, which itself can be called an outlier season for many reasons (Covid-19, Liverpool winning the title, etc. :D).
train = data %>% filter(Prev_SSD <= as.Date('2018-08-10')) %>%
  filter(Prev_SSD >= as.Date('2013-08-01'))
test1 = data %>% filter(Prev_SSD > as.Date('2018-08-10'))

## Getting the previous records available for the promoted teams
## and adding them to the other 17 teams to have the whole 20-team records
promoted = c("Norwich", "Sheffield United", "Aston Villa")
test2 = data %>% filter(Team1 %in% promoted) %>%
  ungroup() %>%
  group_by(Team1) %>%
  arrange(desc(Season_Start_Date)) %>%
  mutate(rpp = row_number()) %>%
  filter(rpp == 1) %>%
  select(-rpp)
test = bind_rows(test1, test2)

train_data = train %>% ungroup() %>% select(Arsenal:Wolves)
test_data = test %>% ungroup() %>% select(Arsenal:Wolves)
target = as.numeric(as.character(train$Rank))

set.seed(1000)
xgbTrain <- xgb.DMatrix(as.matrix(train_data), label = train$Rank)
xgbTest <- xgb.DMatrix(as.matrix(test_data))

## No extra parameter tuning applied except ntrees = 5000 and
## early stopping rounds = 20
params <- list(booster = 'gbtree', objective = 'rank:pairwise')

## Metric used is NDCG, as described in the tutorial
rankModel <- xgb.train(params, xgbTrain, 5000, watchlist = list(tr = xgbTrain),
                       eval_metric = 'ndcg', early_stopping_rounds = 20)

## Stopping. Best iteration:
## [55] tr-ndcg:0.983104

## Predicting the model on test data for 2019-20
pred = predict(rankModel, xgbTest, reshape = TRUE)
test$Pred = pred
test_pred = test %>%
  ungroup() %>%
  select(Team1, Prev_SSD1, Rank, Pred) %>%
  arrange(Pred) %>%
  mutate(Pred_Rank = row_number(),
         Probability = 1 / (1 + exp(Pred))) %>%
  select(Prev_SSD1, Team1, Rank, Pred_Rank, Pred, Probability) %>%
  rename(Actual_Rank = Rank, Title_Probability = Probability, Season_Start = Prev_SSD1)

## Print the prediction frame
view(test_pred)
2019–20 Season Prediction Voila. We were able to get the title winner and the runner-up correctly. Note that in none of the training data did Liverpool have rank 1; only City, Chelsea, and Leicester did. The model is able to pick up the pattern that Liverpool can finish on top just by looking at their improvement in head-to-heads alone. We have Tottenham at 3, whereas they finished 6th (fun breakers, huh!). This was an off season for them, without even a top-4 finish, although they were coming off their best season ever and a UCL final the season before. Setting this outlier aside, the next rankings, Man United and Chelsea, are spot on. The lambda rank method gives more weight to the top rankings than the bottom ones. Even so, except for Wolves (due to little data availability), 4 out of the bottom 5 were in the relegation battle, with 2 of them actually getting relegated. Now that we have seen the accuracy of the predictions on last season, let's see what the case would be at the end of this season.
## Load the 2020-21 prediction data
newtest = import("Pairwise_Ranking_Data_Pred_2020_data.xlsx")
rel = c('Norwich', 'Watford', 'Bournemouth')
newtest = newtest %>% filter(!Team1 %in% rel)
newtest2 = newtest %>% select(-Team1)
newtest_xgb <- xgb.DMatrix(as.matrix(newtest2))
pred = predict(rankModel, newtest_xgb, reshape = TRUE)
newtest$Pred = pred
newtest_pred = newtest %>%
  ungroup() %>%
  select(Team1, Pred) %>%
  arrange(Pred) %>%
  mutate(Prev_SSD1 = as.character('2020-09-12'),
         Pred_Rank = row_number(),
         Probability = 1 / (1 + exp(Pred))) %>%
  select(Prev_SSD1, Team1, Pred_Rank, Pred, Probability) %>%
  rename(Title_Probability = Probability, Season_Start = Prev_SSD1)

## Print the 2020-21 season prediction
view(newtest_pred)
2020–21 Season Prediction Note that this prediction doesn't include Leeds, as they are in the EPL for the first time since 2002.
With the 19 teams available, Chelsea are predicted to win the league, with Liverpool coming in second, United 3rd, and City 4th. With the new signings Chelsea have, they could in fact aim for it. Although for a football-infested mind it's hard to digest City finishing 4th and neither City nor Liverpool winning the league. But numbers don't lie. Fingers crossed :D And the relegation battle is predicted to be between Fulham, West Brom, Crystal Palace, and Aston Villa. Note that 3 of those 4 were promoted in the last two seasons. Conclusion There are a few biases that could affect the prediction: The parameter under consideration is effectively the head-to-heads of the last 2 seasons; it doesn't capture squad changes via transfers. More priority is given to getting the top rankings correct than the bottom ones. Teams that are new to the EPL, like Leeds, don't have associated data, since we only consider EPL head-to-heads, and hence can't be added. Now that we have completed a cool application of pairwise ranking, I believe we are good with the application side too. In fact, there are multiple packages available other than xgboost which you can pick up if interested. If you want to brainstorm on the theory behind it, give this article a read: https://medium.com/@davidacad10/pairwise-ranking-epl-prediction-3ce755575958
https://medium.com/analytics-vidhya/pairwise-ranking-epl-prediction-part2-756542b85c2b
['David Babu']
2020-12-22 16:39:26.814000+00:00
['Machine Learning', 'Premier League', 'AI', 'Applied Machine Learning', 'Data Science']
Convolutional Neural Networks — Part 3: Convolutions Over Volume and the Convolutional Layer
Convolutional Neural Networks — Part 3: Convolutions Over Volume and the Convolutional Layer Brighton Nkomo · Oct 5 · 7 min read This is the third part of my blog post series on convolutional neural networks. Here are the pre-requisite parts for this blog post: Here are the subsequent parts of this series: A pre-requisite here is knowing matrix convolution, which I have briefly explained in the first section of part one of this series. Also, knowing something about the rectified linear unit (ReLU) activation function could help you understand some points that I mention in section 2 here. A Rubik's Cube. Some readers may find it useful to visualize the 3D filters, which will appear later in this post, as Rubik's Cubes. In the previous parts, we were considering grayscale (black and white) images. So what if the input images have color? It turns out that now we have to consider 3D convolutions instead of 2D convolutions. Did you know that our smartphones use what is called an organic light-emitting diode (OLED) display to generate the high-quality images that we see whenever we look at our phones? An OLED display is composed of a massive grid of individual pixels, and each pixel is composed of a red, green, and blue subpixel. This picture shows an OLED grid of microscopic, individually controlled, dimmable red, green and blue lights. There could be over 10 million of these lights on your phone! 1. Convolutions Over Volume 1.1 Convolutions on RGB images FIGURE 1: Convolving a 6 by 6 by 3 volume with a 3D filter (a 3 by 3 by 3 filter). An RGB image is represented as a 6 by 6 by 3 volume, where the 3 corresponds to the 3 color channels (RGB). In order to detect edges or some other feature in this image, you could convolve the 6 by 6 by 3 volume with a 3D filter (a 3 by 3 by 3 filter) as shown in figure 1, not with a 3 by 3 filter as we have seen in the previous parts. So the filter itself will also have 3 layers corresponding to the red, green, and blue channels. From figure 1, notice that the first 6 is the height of the image, the second 6 is the width, and the 3 is the number of channels. Your filter similarly has a height, a width, and a number of channels, and the number of channels in your image must match the number of channels in your filter (notice the last numbers are connected by a curved line in figure 1). Also, notice that the output image is a 4 by 4 image (or a 4 by 4 by 1 image) instead of a 4 by 4 by 3 image. This will become clear once we look at multiple filters in section 1.2. FIGURE 2: A representation of a 3D convolution. Here's a 6 by 6 by 3 image and a 3 by 3 by 3 filter once again, shown in figure 2. Notice that the 3 by 3 by 3 filter has 27 numbers, or 27 parameters: that's 3 cubed.
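To make the arithmetic concrete, here is a small NumPy sketch of one convolution over volume (the random image and filter values are just placeholders; following the usual deep-learning convention, the kernel is not flipped, so strictly speaking this is cross-correlation):
import numpy as np

rng = np.random.default_rng(0)
image = rng.random((6, 6, 3))    # height x width x channels (RGB)
kernel = rng.random((3, 3, 3))   # 3D filter; channel count must match the image

# Valid convolution, stride 1: the 3x3x3 filter fits at 4x4 positions,
# and each position sums all 27 products, collapsing the channels,
# so the output is 4 by 4 (by 1).
out = np.zeros((4, 4))
for i in range(4):
    for j in range(4):
        out[i, j] = np.sum(image[i:i+3, j:j+3, :] * kernel)

print(out.shape)  # (4, 4)
This also shows why the output has a single channel: the sum runs over all three input channels at once.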
https://medium.com/swlh/convolutional-neural-networks-part-3-convolutions-over-volume-and-the-convnet-layer-91fb7c08e28b
['Brighton Nkomo']
2020-10-10 15:26:35.122000+00:00
['Neural Networks', 'Machine Learning', 'Deep Learning', 'Artificial Intelligence', 'Computer Vision']
A gentle introduction to the 5 Google Cloud BigQuery APIs
1. BigQuery API The principal API for core interaction. Using this API you can interact with core resources such as datasets, views, jobs, and routines. As of today there are 7 client libraries: C#, Go, Java, Node.js, PHP, Python, and Ruby. Example For this example, I will use the Python client library for the BigQuery API on my personal computer. Note that you need to have Python already installed. As a recommendation, install Visual Studio Code and use its terminal; I'll use it for all the examples. 1.1 Install the client library pip install --upgrade google-cloud-bigquery Installing the client library 1.2 Setting up authentication To access the BigQuery service, you first need to create a service account and set an environment variable. A service account is a special type of Google account intended to represent a non-human user that needs to authenticate and be authorized to access data in Google APIs [GCP Doc]. Enter the Google Cloud Console and then APIs & Services > Credentials > + Create Credentials > Service Account Creating a service account After the SA is created we need to set up a JSON key. A .json file will be downloaded automatically; keep it safe, since it is the key to your BigQuery resources. Back in the Visual Studio Code terminal, provide authentication credentials to your application code by setting the environment variable GOOGLE_APPLICATION_CREDENTIALS set GOOGLE_APPLICATION_CREDENTIALS=D:\medium\bigquery-apis\key\key_bqsa.json For this example, I'll query the Covid dataset. You could also get the result in a pandas dataframe; first install the following libraries pip install pandas pip install pyarrow pip install google-cloud-bigquery-storage 2. BigQuery Data Transfer API The API used for ingestion workflows. Use it if you want to include periodic ingestion from Google Cloud Storage, to get analytics data from other Google services like Search Ads 360, Campaign Manager, or YouTube, or to pull from third-party services like Amazon S3, Teradata, or Amazon Redshift. 2.1 First install the library pip install --upgrade google-cloud-bigquery-datatransfer 2.2 Enable the API Important: You need to enable billing https://console.developers.google.com/billing Let's move the data from a YouTube channel to BigQuery. Go to Marketplace, look for YouTube Channel Transfers, and click on Enroll. Click on Configure Transfer. Let's set some inputs like schedule options and destination settings. This will prompt a window asking for access to your YouTube data. (This API is also used whenever you schedule a query.) Now let's start using this API. First we show all the data transfers created. The output gives us the transfer ID. With the transfer ID, using the BigQuery Data Transfer API, I'm able to run it. Executing the code returns the job values. In the BigQuery interface you can follow the execution. In this case I didn't have YouTube data, so this triggered an error. 3. BigQuery Storage API This API exposes high throughput data reading for consumers who need to scan large volumes of managed data from their own applications and tools [Google Doc]. Let's build a simple example 3.1 Install the client library pip install --upgrade google-cloud-bigquery-storage 3.2 Set up the authentication Follow the steps from 1 to create a service account and get the JSON with the service account key. 3.3 Example 1 Getting the top 20 page views from Wikipedia. This example uses both APIs and runs a custom query.
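The example notebook itself is not reproduced in this text, so here is a hedged sketch of the pattern; the public Wikipedia pageviews table and its field names are my assumptions, not necessarily the author's exact query. The core client runs the SQL, and to_dataframe() can use the Storage API under the hood for a faster download.

```python
# Hedged sketch: run a query with the core BigQuery client and load the
# result into pandas. Assumes GOOGLE_APPLICATION_CREDENTIALS is set as in
# step 1.2; the dataset, table, and columns below are illustrative.
from google.cloud import bigquery

client = bigquery.Client()

sql = """
    SELECT title, SUM(views) AS views
    FROM `bigquery-public-data.wikipedia.pageviews_2020`
    WHERE DATE(datehour) = '2020-01-01' AND wiki = 'en'
    GROUP BY title
    ORDER BY views DESC
    LIMIT 20
"""
# create_bqstorage_client=True lets the download go through the faster
# BigQuery Storage API when google-cloud-bigquery-storage is installed.
df = client.query(sql).to_dataframe(create_bqstorage_client=True)
print(df.head())
```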
3.4 Example 2 If we need fine-grained control over filtering and parallelism, a BigQuery Storage API read session can be used instead of a query. The following code changes only in that it uses the Apache Arrow data format. 4. BigQuery Connection API This API is for establishing a remote connection to allow BigQuery to interact with remote data sources like Cloud SQL. This means BigQuery can query data residing in Cloud SQL without moving the data. Let's build a simple example 4.1 Install the client library and enable the API pip install --upgrade google-cloud-bigquery-connection Enable the API by accessing the console. 4.2 Set up the authentication Follow the steps from 1 to create a service account and get the JSON with the service account key. 4.3 Add an external source For this example, I've deployed a Cloud SQL instance in the same project. Next, I've added a simple MySQL database as an external source in the BigQuery interface. Adding the connection parameters Running the federated query 4.4 Example This example uses the API to list the connections; a hedged sketch appears after the links at the end of this article. For more examples check the official repository. 5. BigQuery Reservation API This API allows you to provision and manage dedicated resources like slots (the virtual CPUs used by BigQuery to execute SQL queries) and BigQuery BI Engine (a fast, in-memory analysis service) memory allocation. 5.1 Install the client library pip install --upgrade google-cloud-bigquery-reservation 5.2 Set up the authentication Follow the steps from 1 to create a service account and get the JSON with the service account key. 5.3 Enable the Reservation API Go to the BigQuery UI and click on Reservations 5.4 Buy Slots This feature is most suitable for organizations that want predictable (flat-rate) pricing. Remember that by default on BigQuery you pay per query (on-demand). The next images simulate a slot reservation process in order to use the API later. 5.5 Example Use the API to list the reservations (see the same sketch below). For more actions check the repository. Conclusions BigQuery gives us many features, and with the help of these APIs we can extend the functionality. All the code is available on GitHub. PS: if you have any questions, or would like something clarified, ping me on Twitter or LinkedIn; I like having a data conversation 😊 Useful Links Core API Transfer API Storage API Connection API Reservation API
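The listing calls in sections 4.4 and 5.5 appear only as screenshots in the original; below is a hedged sketch of what they might look like using the two client libraries installed above. The project ID and location are placeholders.

```python
# Hedged sketch of sections 4.4 and 5.5: list connections and reservations.
# Replace the project ID and location with your own values.
from google.cloud import bigquery_connection_v1, bigquery_reservation_v1

parent = "projects/my-project/locations/us"

conn_client = bigquery_connection_v1.ConnectionServiceClient()
for conn in conn_client.list_connections(parent=parent):
    print("connection:", conn.name)

res_client = bigquery_reservation_v1.ReservationServiceClient()
for res in res_client.list_reservations(parent=parent):
    print("reservation:", res.name)
```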
https://towardsdatascience.com/a-gentle-introduction-to-the-5-google-cloud-bigquery-apis-aafdf4ef0181
['Antonio Cachuan']
2020-12-28 04:45:08.138000+00:00
['Python', 'Bigquery', 'Data Engineering', 'Data Science', 'Towards Data Science']
5 Things Super Productive People Never Do In The Morning
5 Things Super Productive People Never Do In The Morning How you spend the first few hours of your morning can make or break the rest of your day A solid morning routine is the backbone of a productive day. What you do in the morning can make or break the rest of your day. As Richard Whately once said, “Lose an hour in the morning, and you will spend all day looking for it.” The best morning habits put you in the perfect mood to get things done. Highly productive people choose what to focus on in the morning carefully. “Focused, productive, successful mornings generate focused, productive, successful days — which inevitably create a successful life — in the same way that unfocused, unproductive, and mediocre mornings generate unfocused, unproductive, and mediocre days, and ultimately a mediocre quality of life,” writes Hal Elrod in his book, The Miracle Morning. Is there a right way to have a productive morning? Some people kick off their day with morning exercise to increase blood flow and improve mental clarity. Others meditate, write their thoughts down, read a book, listen to their favourite podcast, learn and absorb a few new things, write ideas down, do creative work, or tackle their most important task. Highly productive people can also work on their side hustles, or spend time on a hobby. There are dozens of productive things you can do to start your day right. There are so many good habits you can incorporate into your morning routine, but what makes a productive morning depends on what you personally feel you need to do in the morning. While good habits can set you up for a productive morning, some morning activities hinder your productivity and overall mood and can derail the rest of your day. How you spend the first few hours of your morning is as important as how you spend the last few hours of the day. Here's what super-productive people don't do every morning. They don't forget to hydrate first thing in the morning Staying hydrated is a key component of your overall brain and body health. Dehydration can leave you feeling tired in the morning. Productive people drink at least a glass of water first thing in the morning. “Hydration is incredibly important, especially after waking up. I always find that this larger quantity of water provides incredible energy and prepares my body for the day ahead,” says Jeff Sanders, the author of The 5 A.M. Miracle: Dominate Your Day Before Breakfast. Instead of a cup of coffee, drink a glass of water first. Starting your day with water helps rehydrate your brain, restore natural metabolism, and refresh your body so you can gradually focus on the day ahead. As you sleep, the natural reservoir in your body is depleted of water — that means your body can be dehydrated and needs nutrients to jumpstart your bodily functions during the day. Water improves your circulatory system and keeps it running as soon as you wake up. Give your body exactly what it needs to start the day right. Super productive people don't immediately drink coffee There's nothing wrong with your morning coffee. Coffee might help you start your day, but drinking water before your caffeine can be more beneficial to you in the long run. Most people grab a cup of coffee first thing after waking up — this habit can interfere with the natural ‘wake up process’ of the body.
“Drinking coffee at peak cortisol times not only diminishes the energy-boosting effects of caffeine but causes your body to build a tolerance to it, meaning the caffeine jolt you get will diminish over time,” says Dr Steve Miller. Cars run off fuel. Humans run off water. You don't need caffeine immediately upon waking. Prioritise water first thing in the morning over coffee. Timing your morning coffee can help you get the most benefit from it. Drinking a glass or two of water right when you wake up will aid in better digestion when you have breakfast. Highly productive people don't choose breakfast options without protein and fibre A low-fibre, low-protein (or high-sugar) breakfast will often set you up for a poor experience later in the day. Your body uses up sugar energy quickly, and you will soon start feeling sluggish. High-fibre foods and nutritious options like hot oatmeal, multi-grain cereal, eggs, nuts and berries stick with you longer than a sweet roll or pastry. “Getting healthy fat and protein can create a positive and sustainable boost in energy without giving you a spike in blood sugar and crash,” says Dr Brady Salcido, a personal health expert. Stacey Morgenstern, co-founder of Health Coach Institute, recommends we eat high-quality proteins to keep our energy levels high. “…eat high-quality proteins and fats for a long-lasting source of energy to keep you focused and productive,” she tells Bustle. Breakfast is an important meal of the day — make it work for your brain and body, and you can boost your performance and energy consistently throughout the day. They don't make too many decisions in the morning Highly effective and productive people don't spend the first hours of the morning making decisions. Making too many decisions in the morning depletes your energy quickly. The best way to spend your mornings is to get straight to activities that bring out the best in you and help you get things done. If you spend your mornings deciding when to work out, what to wear, what to eat for breakfast or which tasks to work on, you are wasting brain energy. Want to read a book in the morning? Pick it out the night before. Want to work on a specific task first thing in the morning? Choose the task the night before and get right to it in the morning. Want to work out in the morning? Choose your gym clothes and put them in a strategic place for easy access. Don't get caught up in small or important decisions in the morning. Highly productive people don't plan their day in the morning You can get a lot of things done if you start your day on purpose and plan well in advance. Planning what to do the night before Many successful people spend their evenings preparing for the next morning because it makes their mornings free to get an early start on important tasks. “Planning the evening before is effective because we have a limited amount of willpower and decision-making ability every day. The thought of making too many decisions in the morning will slow you down and drain your brain for the rest of the day,” writes Britt Joiner of Trello. So write out your daily to-do list and how you want to spend your morning the night before. After a few weeks of practice, some habits will become automatic and the rest will get easier. It pays to create your schedule and set your priorities the night before. Waking up to an actionable plan for the day will help you to focus more easily.
A maximum of 3 priorities is a good start for a productive morning. If you're setting your to-do list first thing in the morning, you're already too late!
https://medium.com/personal-growth/5-things-super-productive-people-dont-do-every-morning-29281cf0b1e
['Thomas Oppong']
2020-11-19 15:04:51.839000+00:00
['Creativity', 'Self', 'Self Improvement', 'Productivity', 'Work']
Why Depression Is Becoming an Issue for Software Engineers — And How to Deal With It
Practical Steps 1. Ask for help Software developers are often too proud to admit they need help. My first and most important piece of advice is: ask for help. There are a ton of ways to do that, like online or in-person psychological sessions (highly recommended) or speaking with a close friend/husband/spouse/mate/preacher. This will produce two important fruits: a) you will acknowledge to yourself that you need help and, more importantly, that you're progressing in healing, and that releases dopamine; and b) it will bring outside wisdom and accountability to the table. 2. Set small fitness goals Yes, you heard that. F-I-T-N-E-S-S. A person who lives a life based on healthy habits has a far lower chance of facing depression. Additionally, habits are built upon goals, and goals, once achieved, produce happiness and fulfillment, and then dopamine. Remember, developer, we are logical people. We are breaking a bad cycle, and in order to succeed, we need to fill this place with something else. Walking, running, some sit-ups, etc. Start slow, but start. 3. Stop eating garbage Your body needs some fuel to help you get out of this cycle, so make a deal with yourself. Set a goal of small changes, e.g., Tuesdays without eating or drinking sugar, or drinking three liters of water three days a week. Again, the main objective is to define small yet achievable goals that will produce direct and indirect benefits for you. 4. Join a community and socialise In times like we're living in, it's really hard to be physically connected to a community, but if possible, I'd highly recommend it. Churches, small interest groups, meetups, mentorship meetings, etc. We are social animals and we need to interact with others, for real. 5. Help others Yes, help others, even if you're not done with your own healing. There's tremendous power in helping others. As a side effect, a different perspective based on somebody else's problems can shift your mind and change how you feel inside almost instantly. Teach a course for free, mentor junior developers, or do a Q&A session with people trying to get into our profession. “The best way to find yourself is to lose yourself in the service of others.” — Mahatma Gandhi 6. Find an accountability partner This is a golden rule. There's no need for it to be a close friend if you don't want it to be. Find someone you can drop a daily message to, sharing your progress. This person will be like your personal trainer. You will be amazed at how many people are willing to play this role and help others.
https://medium.com/better-programming/why-depression-is-becoming-an-issue-for-software-engineers-and-how-to-deal-with-it-8478f361033d
['Jonathan Ferreira']
2020-09-02 04:37:13.491000+00:00
['Software Development', 'Mental Health', 'Self-awareness', 'Self Improvement', 'Programming']
3 to read: LA Times debacle | Goodbye, alt weeklies | Amazing 1950s dataviz
By Matt Carroll <@MattCData> Aug. 29, 2017: Cool stuff about journalism, once a week. Get notified via email? Subscribe: 3toread (at) gmail. Originally published on 3toread.co 1. What's the matter with the L.A. Times?: The editorial leadership of the L.A. Times was fired this week, in a stunning move. Well, here's some of the jaw-dropping backstory that led to the axings. Ed Leibowitz of L.A. Magazine details the ineptness of editor-in-chief and publisher Davan Maharaj, who sat on major investigative pieces for years, but had time to ask reporters to check out his new Italian shoes. The story ran in December, but it's worth resurrecting for its excruciating detail of the dysfunction at the top of what was once a great newsroom. Logo by Leigh Carroll (Instagram: @leighzaah) 2. What we lose when we say goodbye to alt weeklies: The Village Voice is dead, at least in print. So, pretty much, are alt weeklies across the country. Paul Farhi of the WaPo has a nice elegy for what we have lost now that those quirky mixes of band reviews, massage parlor ads, and hard-hitting political coverage have closed up shop. A good read. 3. Amazing graphics from the 1950s NYT archive: Most people tend to think of “data viz” as a relatively new phenomenon. Not quite. News orgs have been doing impressive data viz for as long as they have existed. Stuart A. Thompson on Medium has a wonderful piece on some spectacular (and forgotten) visualizations done at the NYT during the 1950s. Check out the cool space travel illustrations (all b&w, by the way). Get notified via email: Send a note to 3toread (at) gmail.com Matt Carroll is a journalism professor at Northeastern University.
https://medium.com/3-to-read/3-to-read-la-times-debacle-goodbye-alt-weeklies-amazing-1950s-dataviz-6f9a51367229
['Matt Carroll']
2017-08-28 17:47:47.348000+00:00
['Journalism', 'Dataviz', 'Alt Weeklies', 'Media', 'Data Visualization']
Let’s Help Each Other Grow
This idea was born as a result of seeing Darrin Atkins' article We Can Help Each Other. I'm not even entirely sure if this is allowed on this platform, but at this point all I have to say is eeF-it. We need to come together and support each other. I want nothing more than to see others succeed in this world. And that's why I feel compelled to post this article. So, if you have a business, comment with the link to your website or social media, or simply explain how we can help support your business. No request is too big or too small. You may be surprised by the type of support you get from this article. I look forward to seeing how I can help. Let's Help Each Other Grow! P.S. This article is monetized, but I will donate all of the proceeds of this article, if any, to a business or entrepreneur that has commented, picked at random on Jan 15, 2021. Gain Access to Expert View — Subscribe to DDI Intel
https://medium.com/datadriveninvestor/lets-help-each-other-grow-4310a6475dee
['Tim Lekach']
2020-12-29 15:15:59.268000+00:00
['Entrepreneurship', 'Small Business', 'Business', 'Entrepreneur', 'Marketing']
The greatest dataviz, that nobody has seen…
One step back. “Remove to improve” — it's an expression used in countless professions and, crazily enough, it can also be applied to data visualizations. Edward R. Tufte introduced the concept of the data-ink ratio in his classic “The Visual Display of Quantitative Information” in 1983. The concept is about removing most of the non-data ink/pixels so that the data can tell its story. Most people who are serious about data visualization have probably come to know this concept, or will. The data-ink ratio can easily be applied to dashboards and other visualizations where you expect the audience to already be interested. But what if you need to convert people and get them engaged with your story? By applying the data-ink ratio and holding onto it too tightly, we could lose our users during the onboarding process, because maybe we've removed an easy-to-digest piece of context. I think there are several reasons why designers should consider onboarding, but I want to highlight three of my main reasons: ambiguity, the irreversible growth of data, and conversion. Ambiguous figures One of the reasons why we should consider onboarding is the ambiguity of data visualization. Most data visualizations are telling multiple stories at once. Now you might think: how is that possible, it's only one figure, right? It has everything to do with the “curse” of knowledge, and Steven Franconeri pointed it out during the OpenVis Conference 2018: “Knowledge and expertise can literally change what you see.” When I create a more complex visualization about a certain topic, experts on this topic are likely to see this visualization differently than I do. The visualization tells them a more detailed story. It's like an “Ambiguous Wally” figure: whenever you spot Wally in a “Where's Wally” figure, the figure looks different because you're now cursed with the knowledge of Wally's location. Onboarding can help a lot by making data visualization less ambiguous. Ambiguous Wally Growth of data Besides being responsible for the growing amount of plastic in our oceans, we humans are also responsible for the ongoing growth of data. Data is growing, but with the help of intelligent software we can unfold these big piles of data and discover the relations within. But we still have to tell the story of the unfolded, complicated data. Most graphs, for example line graphs, bar charts or scatterplots, are well known to most people. The main reason for this is that we were taught the function of these graphs at primary/high school. The only problem with these kinds of graphs is that they're not always able to tell highly complicated stories efficiently. Maarten Lambrechts (2018) suggests that we should all start acting like William Playfair (great inventor of the bar chart and line graph, but also the pie chart 😢). By creating more custom and perhaps weird visualizations we could solve this problem. Within these custom visualizations we can combine multiple dimensions at once, which provides the possibility of getting better insights out of complex data. But these custom visualizations need some kind of onboarding to reveal their true power. Conversion For this it's maybe nice to have a look at Reif Larssen's Fairly Unscientific Graph of an Infogasm. As mentioned, it's unscientific, but it shows Reif's process of processing information when he looks at an infographic.
First, Reif is really excited about all the information he instantly understands… then he realises he doesn't… and then he gradually starts to unfold the infographic like reading a paragraph. From a conversion point of view, it's important to realise that people need to understand quickly and easily how the data visualization works and what it could mean to them. According to research conducted by Lindgaard, Fernandes, Dudek, & Brown (2006), people make up their minds about a website in the blink of an eye. I don't know to what extent this conclusion applies to data visualization, but it tells us something about people's attention spans. We might grab their attention by gradually telling them a story.
https://medium.com/adyen-design/the-greatest-dataviz-that-nobody-has-seen-bf71e385090b
['Luuk Van Der Meer']
2019-04-04 12:08:51.076000+00:00
['Dataviz', 'Fintech', 'Data Visualization', 'Design']
A guide to IBM’s complete set of data & AI tools and services
A guide to IBM's complete set of data & AI tools and services So you can be a fearless wielder of AI Before you scroll down through this page and think “There is no way I'm getting into all this, no ma'am,” just hold up a sec and let me tell you one, er… four, very quick things first: As someone who was trained in Swiss grids, rubylith, and letterpress — and has had to make those archaic skills work through 20+ years of design jobs evolving from print, branding, websites, apps, platform strategies, software and most recently AI — I 100% promise that if you follow these tried-and-true rules, you'll be fine no matter how much of the tech stuff you do or don't absorb. Ready? Burn this into your brain: No matter what new technology comes along… Designing for any medium will always be, first and foremost, about satisfying user and business needs. Starting with why and designing with the end in mind always leads to the right results. And finally, good design is still good design. Poop is still poop. And the one tip I'll add to this list specifically for AI… For every single step of every AI use case you come up with, ask yourself, “Couldn't AI just do that for the user?” and “How does the user know if what the AI is doing/recommending is right?” All this said, the more of the tech stuff you DO understand, the more confidently you'll be able to push your concepts and your team to build better, cooler AI features. So here it is, my design lovelies… the shortest, sweetest summary of all of IBM's data & AI tools and services, and what you can use them for, that you'll find on the interwebs as of 3:27pm, Monday, October 12th, 2020! What we're about to cover: The AI Ladder Cloud Pak for Data Watson Studio — Watson Natural Language Classifier in Watson Studio — Data Refinery in Watson Studio — Cognos Dashboard Embedded Service in Watson Studio Watson Assistant Watson Discovery Watson Natural Language Understanding Watson OpenScale Watson Knowledge Catalog Watson AIOps Which IBM services to use for the six main AI capabilities Recommended links — — Note: The vast majority of this content is consolidated and simplified from other sources that I've pointed to in the Recommended Links sections. I can't take any credit for real authorship here, just a clean-up for your speed-learning enjoyment :)
https://medium.com/design-ibm/a-guide-to-ibms-complete-set-of-data-ai-tools-and-services-29662433ad07
['Jennifer Aue']
2020-10-16 15:17:05.997000+00:00
['Machine Learning', 'Design', 'Ibm Data Science', 'AI', 'Ibm Watson']
Despite Dying at 25, Here’s How This Poet Wrote Some of the Best Poems in History
After some time, Keats had a desire to try his hand at poetry. But when he started writing, he discovered a huge setback in his learning process — he had no instructor or group he could exchange ideas with and get feedback from. To overcome this setback, he would have to read as many great poets as possible. He began to structure his writing to mimic the style of the writers he wanted to be like. Soon, he was creating verses by the dozen, slowly but steadily getting better and discovering his own voice through the works of the writers he modeled himself after. As his love for poetry grew, he needed to find better and harder challenges to help him master this skill that he loved so much. Next, he set out on a rigorous task: to write a long poem of precisely 4,000 lines. This poem, which revolved around the ancient Greek myth of Endymion, would, he decided, be a test of his imaginative and inventive power. For Keats, it was all about the tension and difficulty. To make this task even more rigorous, he set his deadline at an impossible seven months, during which he would write fifty lines per day. Draft of Endymion by John Keats; from the collection in the Morgan Library. Via Wikimedia Commons This task wasn't just difficult; it made Keats hate his own writing. Just a few lines into the poem, he discovered another flaw in his writing — it contained too many unnecessary words (and clichés). Though it frustrated him that he wasn't half as good as he'd fancied himself to be, he found solace in the fact that this exercise was helping him discover his flaws. Despite the setback, he finished the poem. Though he considered Endymion to be a mediocre piece, the lessons he learned from the rigorous exercise would change his writing forever. He had learned never to depend on how he felt in order to write. He now knew that no matter the mood he was in, what mattered was to get himself to start and continue writing. The best ideas came while he was writing the poem itself, not when he was sitting around ruminating over his inadequacies. He had acquired the habit of writing quickly, with intensity and focus. He'd learned to write through drudgery, fishing out his own faults. Armed with these lessons from years of practice and self-criticism, Keats would go on to produce some of the most memorable poems and odes, which would be talked about long after he was gone. The years between 1818 and 1819, before Keats became gravely ill, became perhaps the most productive two years in the history of Western literature. Though he died in Rome of a tuberculosis infection in 1821 (at the early age of 25), his reputation grew rapidly because of the magnificence of his writings.
https://destinyfemi.medium.com/despite-dying-at-25-heres-how-this-poet-wrote-some-of-the-best-poems-in-history-6b0d6fee4574
['Destiny Femi']
2020-09-20 15:31:59.757000+00:00
['Self Improvement', 'History', 'Writing', 'Poetry', 'Creativity']
Where To Live If You Want To Escape Climate Change
Where To Live If You Want To Escape Climate Change You don’t need to be a doomsday prepper to prepare for a warmer world. Photo by Fabian Mardi on Unsplash The world’s getting pretty crazy, isn’t it? I’m only 25, but I can’t recall living through times as strange and surreal as these. And lurking in the background of all the distressing headlines we see every day is the omnipresent threat of climate change. The climate crisis is rearing its very ugly head these days. From massive wildfires in California to deadly hurricanes in the Atlantic to record-breaking monsoon flooding in South Asia, the natural world is reminding us every day that thanks to our exploitative attitude toward the planet, we’re running on borrowed time. I’m cautiously optimistic we’ll solve the climate crisis in due time. But while you should really hope for the best (since we’re talking about the whole freaking planet), you should prepare for the worst. If you’re like me, you might be thinking about how you can get away from all this craziness while protecting yourself from the worst long-term impacts of climate change. Out of personal curiosity, I recently did some Internet digging to evaluate where I might relocate in the long run to keep my head above water (literally and figuratively). I’m assuming we won’t all join hands and start a love train to Mars or the Moon with Elon Musk and/or Jeff Bezos as our overlords, although nothing is out of the realm of possibility these days! I limited my search to the United States, although you can apply the rationale I detail below to consider where else you can live in a climate apocalypse.
https://medium.com/climate-conscious/where-to-live-if-you-want-to-escape-climate-change-a55e3a52eb22
['Danny Schleien']
2020-09-01 13:01:01.965000+00:00
['Environment', 'Disaster Preparedness', 'Real Estate', 'Climate Change', 'Science']
Have a Happy, Healthy Holiday!
Have a Happy, Healthy Holiday! Merry Christmas! Happy Kwanzaa! Happy New Year! Photo by freestocks on Unsplash Happy, Happy Holidays from all of us at Middle-Pause! Whatever and however you celebrate, may your joy overflow and spread out in all directions, blessing everyone and everything. We are pleased to announce that our Podcast Episode #3 launches on January 1, 2021. Start the new year with our very own Debbie Walker interviewing yours truly! If you've missed our first two episodes, catch them here: STOMP! Stronger Together on Middle-Pause. We aim to inspire you while doing what you gotta do! In the meantime, lift your spirits with these posts… Delighted by a Magician's Mistake — Aikya Param All I want for Christmas is a Good Night's Sleep — Marilyn Flower Putting Up the Lights on Remembrance Day — Alison Acheson How to Choose the Power of Joy in Your Life — Debbie Walker
https://medium.com/middle-pause/have-a-happy-healthy-holiday-30f74b328061
['Marilyn Flower']
2020-12-24 16:50:45.717000+00:00
['Holidays', 'Health', 'Mental Health', 'Christmas', 'Wisdom']
How I Built a Lasting Exercise Habit
I chase a feeling, not a size or shape It starts with having the right goals. What do you want to get out of exercise? In too many instances, people prioritize aesthetics above all else. “I want to be thinner” “I want to look better in a bathing suit” “I want to be more muscular” “I want heads to turn as I walk by” And so they grind away, forever in pursuit of their ideal. But they never quite get there, often resulting in them taking extensive time off or quitting altogether. It's because they have the wrong goal. Aesthetics should be a side effect of a larger commitment to health and fitness, not the root motivator. What should motivate you is how exercise makes you feel, because exercise feels good. It's this feeling I chase each and every day. When I'm out running, I'm searching for the “groove”. That place where things are firing on all cylinders, everything is clicking, and movement feels smooth and effortless. Time flies by. Motivation is high. When I'm done I feel a sense of accomplishment and satisfaction, knowing I just spent time doing something healthy that's highly productive. It's an addicting feeling. Once you get a taste of it, you'll want it more and more. And so you'll keep coming back for more and more. Next time you're not feeling motivated, exercise by feel instead. Don't worry about weight or sets or reps or time or distance or anything. Just show up and do whatever the spirit moves you to do. It's a lot easier to find the “groove” when you're not limited by your (or others') expectations. Prioritize completion, not competition Deep down, humans are hardwired for competition. According to Psychology Today's Sander van der Linden: “There is something inexplicably compelling about the nature of competition. Perhaps that's because, as some scholars argue, ‘competitiveness' is a biological trait that co-evolved with the basic need for human survival.” We compete against others in virtually every aspect of our lives. From how much money we make and what type of car we drive to how much weight we lift and how fast we run. Some competition is good. Competition fosters ambition, conscientiousness, and tenacity. These are very useful traits for getting ahead in society. But too much competition is a path to the dark side, as Yoda would say. Too much competition fosters negative traits like obsession, selfishness and dissatisfaction. And ultimately, failure. For exercisers, this may manifest itself in the form of an injury, a plateau, or total burnout. All of which leads to breaking our exercise routine. Prioritizing completion takes the pressure off. Don't worry about how fast you run, just get your mileage in. Don't worry about how much weight you lift, just get your exercises in. Don't worry about how you look compared to the yoga instructor, just get your class in. Personally, once I started prioritizing completing my runs vs timing my runs, my overall mileage increased, my satisfaction with running increased, and my injury propensity decreased dramatically. Worry less about what others think I've learned to ignore the naysayers. For whatever reason, it seems like other people take a curiously high, often inappropriate level of interest in my day-to-day decision-making. I'm always astounded by how my exercise, diet and daily habits have the ability to influence (and often offend) other people when they have nothing to do with them. Why do you care how many miles I run? Why do you care in what order I perform my exercises? Why do you care if I switch up my diet?
Why do you care what time I go to bed or wake up in the morning? And any number of other examples. It’s as if doing what’s best for me is a personal affront to them. A curious phenomenon indeed. Learn to ignore these people. I say learn because it won’t happen overnight. It will take practice. It’s not easy to fend off resistance from your workout partner about you changing your routine, or pushback from your buddies about you taking some time away from alcohol. I’m a huge proponent of exercising alone. I know the science behind increased exercise motivation in a group setting, but I remain steadfast on this one. Exercising alone will help you build confidence in yourself and your plan, because there isn’t anyone else you need to please. Work group exercise sessions back into your routine in time if that’s what you want to do, after you’ve developed the confidence and assurance in yourself. Plus, I find it much easier to find the “groove” in a solo setting. And it’s all about finding the “groove”. Be strong, be safe, be well.
https://medium.com/in-fitness-and-in-health/how-i-built-a-lasting-exercise-habit-d72f77d6dc4
['Scott Mayer']
2020-10-29 20:42:44.941000+00:00
['Life', 'Fitness', 'Lifestyle', 'Health', 'Motivation']
What It’s Like to Be Transgender in a Psychiatric Hospital.
Going to a psychiatric hospital can be a scary experience, especially if it's your first time. However, if you're transgender it can go from extremely scary to an absolute nightmare in a matter of seconds. In this article I will be sharing my experience in a psychiatric facility last week. It was a Tuesday night, and I was having extreme anxiety as well as the early signs of a panic attack. I've always had anxiety, but it was never this bad. My regular coping skills weren't working; I was ready to do anything to make it stop. Once I felt myself progressing to this state, I walked into Children's Hospital Los Angeles. Upon a full psychological evaluation, they determined that I wasn't safe. I was cooperative and agreed to be transferred to a psychiatric hospital for a 5150 (72-hour) hold. Not that I had a choice; I just knew that being cooperative would be helpful in this kind of situation. Since CHLA doesn't offer inpatient psychiatric care, they had to transport me to a hospital that did. My stomach dropped to my feet once I found out that they couldn't treat me there. I asked the ER doctor several times if they were going to send me to a place that was transgender friendly. The doctor looked me in the eyes and said, “Yes Jaden, that is our top priority.” I trusted him. Since they didn't want me running away, they strapped me down very tightly to the hospital bed I was in, then I was moved onto a gurney to be transported to Del Amo Hospital in Torrance, California. The ambulance driver handed my California ID card and Medi-Cal health insurance card over to the intake nurse of the Youth Psychiatric Unit. She began filling out my paperwork while asking me 5 million questions about why I was here. Let me remind you, I'm still tightly strapped to the gurney. I wish I could have taken pictures of the marks those straps left on my shoulders, but they were gone by the time I got out. Then came the awkward skip of the pen when she reached the “sex” question on the intake packet. She looked at my ID and Medi-Cal card (which both said male), then looked at me while biting her lip. She wasn't positive I was actually male. She just skipped over that question and finished the rest of the packet. I was finally let off the gurney; at that point my legs were getting sore from being locked in a straight position for so long. I stumbled to the ground as I attempted to walk again. She pulled a male intake nurse from the nurses' station, and then directed me to a restroom with a new hospital gown. “We're conducting a safety check. We're not sure if you're male or female, so we need to check to make sure you and the other patients here are safe.” It was 2AM, and I was in no state to argue with anyone, so I just shrugged my shoulders and followed instructions. A big, bold F was stamped on my intake papers. I put on the hospital gown, and off I was to observation. I was placed in this observation room for three hours, while the nurses wrote down my every move and breath. After the three hours, I was placed into an isolation room at the end of the hallway in the female wing of the youth unit. This room had no windows, just four walls and a toilet attached to the wall. The door closed, and then it was locked. The nurses checked on the patients every two hours, or at least that's what it felt like. Every time you cried out, you were given more sedatives to sleep. That morning my breakfast was brought to me while everyone else was able to eat together in the common room. Why was I being isolated?
All I'd done was cooperate with everything they were saying. I just assumed that's what all new patients went through. I lost track of time in that small room. The four off-white walls were driving me insane. I was more anxious than I was when I came in, since anxiety was what I came in for anyway. That afternoon it was my turn to see the doctor. The nurse came over to my room, unlocked it, and then directed me towards the doctor's office. She stood in the corner of the room while I had a five-minute conversation with the doctor. In those five minutes, he was on his cell phone, texting and scrolling through Facebook. He asked me questions, and then interrupted me as I tried to answer. From that five-minute conversation, he was able to diagnose me with Borderline Personality Disorder, Depression, and Anxiety. He was able to prescribe me medications that he “knew” would cure my problems. I was directed back to my room. The nurse came over with a small cup full of 5 pills. I asked her what they were and what the side effects were, so I could make an informed decision about whether or not I wanted to take them, since I do have the legal right to refuse medications in a psychiatric hospital. She refused to tell me what they were, and said that if I didn't take them I would have to stay in the hospital longer. I took them. Now, in this room there is no clock. I don't recall much of that day; I just remember sleeping A LOT. I have no idea how long I was sleeping, and I still don't know what kind of medications she gave me. The next morning, I was woken up by a nurse checking my vitals. I asked her what time it was, and what the date was. She gave me an angry look, wrote down the information, and left me. I forced myself to stay awake, even though the medication was making me sleep, for the two-hour check-up, to get more information about my treatment plan here. When I was checked on, I asked when I would be able to attend support groups and interact with other patients. She told me that it was hospital policy for me to be isolated. I asked her why. She then told me that transgender people have a high risk of sexual assault and violence, and that it was therefore for MY safety to be isolated. At this point I knew my 72-hour hospital stay was turning into an awful nightmare. I asked to use the phone, which is my right, by the way. She agreed, and let me make a phone call. This was when I found out it was still my first full day in the facility. Since this conversation was monitored, I was extremely careful with what I said. The person I was talking to knew something was wrong, but had no idea how to fix it. I assured everyone I was okay for the sake of getting out of there ASAP. There are lots of myths about testosterone making you unable to cry. I've been on T for six months, and hadn't had any luck crying since my first shot. However, when I was stuck in isolation I cried about 75% of the time, quickly wiping away the tears when the nurses did their bi-hourly checks. (Yes, any time you are caught crying they keep you a day longer.) Now, I didn't spend those 72 hours alone. They did put me in a room with another transgender male. He didn't really talk much, but he was compassionate. He did help the time pass, whether it was secretly holding hands from behind the bed, a long five-minute crying hug, or a verbal identity validation every time the nurse misgendered us or deadnamed him. He was in just as much pain as me, only he was 3 years younger than me. So much respect for that kid.
Soon enough, after 72 hours of not seeing sunlight and being completely isolated from the world around us, I was released. I gave my roommate one last hug, while secretly passing him a piece of paper with my contact information for when he got out. He's being held on a 5270 (30-day) hold. Ever since my discharge he's called my cell phone from the hospital phone every day. There's only so much I can do from the outside, but I do plan on visiting him, even though going back there is extremely triggering. Upon discharge I posted about my awful experience on social media, and got a wonderful response. Hundreds of messages from strangers, showing their sympathies. Other trans people who have been mistreated at this hospital also reached out to me to encourage legal action for the violation of patients' rights. I did file a complaint with the hospital. A patient advocate called me back, dismissing my complaint because I was “hospitalized for a reason.” Seeking help from free LGBT legal aid, I am determined to take this place down for their transphobic policies that go against basic civil rights. After this experience, I now understand why transgender people hate psychiatric hospitals so much. This is how we're treated when we simply reach out for help.
https://medium.com/psych-ward-experiences/what-its-like-to-be-transgender-in-a-psychiatric-hospital-f0550565e66b
['Jaden Prendergast']
2017-03-29 03:09:23.235000+00:00
['Health', 'Psychiatric Disorders', 'Mental Health', 'LGBTQ', 'Transgender']
10 Products that Hide Plastic in Plain Sight
10 Products that Hide Plastic in Plain Sight It may surprise you. Photo by Bernard Hermant on Unsplash Plastic is fantastic as a material. Its versatility means it's used to make everything from food packaging, bottles, and containers to durable objects like tables and chairs, and parts of electric appliances. As such, it has crept into all facets of our lives. Here's a list of 10 everyday objects that you may not realize contain plastic. Pillow stuffing These days, pillow stuffing is usually made with polyester. This could be a good use of recycled plastic, but most pillow manufacturers wouldn't use recycled plastic because of the higher price tag. I'm sure there are pillow makers who make pillows from recycled polyester, so keep an eye out for them! Even then, the pillows still can't be recycled at the end of their lifespan. It doesn't help that pillows stuffed with synthetic fibers also have a shorter lifespan than their foam and latex counterparts. So, the next time you want to buy a new set of pillows, be it regular pillows or throw pillows, or even a dog/cat bed, remember what they're made with. And keep these questions in mind: Can you wash your old throw pillow? Use fewer throw pillows? Is it possible to repurpose old pillows into pet beds? Can you use old pillows to cushion your knees in the garden? Make soft toys for your pet? Can you give them to someone who's moving, to use as packing cushions? I don't see pillow fights in the same light as I used to. Can you imagine the plume of dead skin and polyester fibers that bursts into the air when the pillows are beaten together? Heh. Teabags / Sachets Teabags used to be made with silk or cotton. These days, they're more often made with paper coated with a thin layer of plastic. Some are made entirely of polyester. The same goes for sugar/salt sachets too. They look like they're made of paper, but they're lined with plastic. The solution to this is very simple. Just go for loose tea leaves and buy sugar in bulk! All you'll need is a tea ball/tea strainer. Maybe I'm paranoid, but I never liked the idea of submerging plastic in boiling/hot water. I'll pick the tea ball over the teabag any day. Besides, loose tea leaves are usually better in quality than the store-bought variety of tea bags and have a lower carbon footprint — simply because they aren't individually bagged. Chewing gum Traditionally, chewing gum was made from chicle, a resin from the sapodilla trees of Mexico. These days, the chicle has been replaced with synthetic rubber, which is cheaper and is also… a type of plastic. Gross, if you ask me. I don't think I'll ever want a stick of plastic gum anymore. That said, there are still gums that are plastic-free. Two brands that I came across include Glee Gum and Simply Gum. If you're an avid gum chewer who doesn't mind regular gum, check this company out. They recycle chewed gum! Aluminum and tin cans Boo! Out of this list, this is the one that disappointed me the most, but what did I expect? Most aluminum and tin cans are coated with a thin layer of plastic to prevent the metal from being corroded by the food. Still better than plastic bottles though! I'm for the use of canned food and drinks, but if you aren't, you can always buy beans and grains at places that sell them in bulk. Or choose products sold in glass jars. Do note that COVID-19 has caused a temporary suspension of the use of personal bags at certain shops, so buying beans and grains with your own bag may not be possible at the moment.
Tetra Paks You probably know this one already. Tetra Paks contain thin layers of plastic on both the internal and external walls of the packaging. Tetra Paks still beat plastic bottles if there's a recycling facility near you. However, I'd choose aluminum and tin over a Tetra Pak if I'm uncertain, because aluminum and tin have no problems getting recycled. Clothes, bed linen, curtains — most things fabric Thanks to the “affordability” of polyester, it's used to make everything from socks to clothes to carpets and bedsheets. The unseen price is microfiber pollution. There's no easy solution to this. Tossing them away would create waste, but washing them causes microfiber pollution. I recommend switching to preloved items made with natural fiber as the polyester ones wear out. If you can afford it, you can get Guppyfriend bags to wash your polyester clothes in, install a Lint LUV-R in your washing machine to filter out the microfibers, or use these nifty Vermont-made Cora balls to catch the nasty microfibers. I'm not affiliated with these products, but they're recommended by authoritative sites. Produce stickers Most produce stickers contain plastic. That's why zero-wasters peel them off and put them into their jars. These little buggers are the bane of my compost bin! Nah, I'm just kidding. It's a breeze to peel them off. Owing to their size, we may think either that produce stickers are too small to matter in a compost bin, or that they'll disintegrate because they're so small. But they contain plastic, so while they do disintegrate with time, that only means the plastic breaks down into smaller bits that will then contaminate the soil. Plastic has no place in the compost bin. Trash the produce stickers! To avoid produce stickers, go to a farmer's market. Glitter Glitter is a mixture of tiny plastic and aluminum fragments, which means glitter contains microplastic. If you've used glitter, you'll know that it gets everywhere and is difficult to clean off surfaces! What do we do when we get our hands full of glitter? We wash our hands at the sink. The glitter goes down the drain. At the water treatment plant, it isn't completely caught by the filters — it's tiny! So the glitter enters the environment, where it attracts contaminants and then gets eaten by little fish. It's definitely not as good as it looks. If you have to use glitter, go for the biodegradable or edible type, especially if you're buying it for your child! Craft glue Maybe you already know this, but I never really thought about glue. I remember it as a stinky, sticky thing I loved to peel off my fingers as a kid. Craft glue is made with polyvinyl acetate (PVA), a type of thermoplastic. I found it difficult to wrap my mind around a liquid type of plastic, but PVA is a type of plastic. There are alternatives, though. For example, if your kids love paper crafts, have them make their own glue with starch! However, since we don't use glue in huge quantities, I think there's no need to be uptight about finding an alternative. Do consider giving up slime-making, though. Personal products Masks, mascara, and hairspray can contain PVA too! But this is a simple category to avoid. There are many products out there that don't contain PVA. Just be sure to look at the ingredients!
https://medium.com/climate-conscious/10-products-that-hide-plastic-in-plain-sight-ed8a8f67537
['Julie X']
2020-07-29 13:31:01.614000+00:00
['Sustainability', 'Climate Change', 'Climate Action', 'Plastic Pollution', 'Environment']
When Parallelism Beats Concurrency
Introduction To start with, let's take a brief look at what we should understand as concurrency and parallelism. Concurrency In layman's terms, concurrency is the situation where, in order to solve a problem, we process it in such a way that one single task gets processed concurrently by multiple workers. That said, let's imagine a big array where we have multiple workers, and each worker does some work on the next element to be processed in the array, until we reach the end of the array. When concurrency takes place, some synchronisation is required in order to access the resource that gets shared among all the existing workers (in our example, this is the array). The complexity and performance overhead involved in this approach can be very significant in some cases; we'll try to demonstrate this later on in this article. Parallelism On the other hand, parallelism is the situation where, in order to solve a problem, we decide to take a “divide and conquer” approach and split the problem into multiple smaller problems. This allows us to solve multiple smaller problems in parallel. What would that look like using the same example we've shown above? Let's imagine that we have four parallel workers. A parallel solution would be as shown below: As you can see, the difference is substantial; we now have four smaller tasks that can be solved independently. Knowing this, we can affirm that the elements get processed sequentially by each worker! When each worker has completed its task, we can then combine their results to produce one single, final result. The main benefit of this is that we don't need to synchronise the workers' access to a shared resource; each worker now has its own independent chunk of work to process. I hope the difference between these two is clear, but what's left? We have one more case, which is quite obvious: sequential processing. In a standard sequential process, we'd be processing our task in sequential order with a single worker. This is the most common approach, and in some cases it could, surprisingly, be the fastest! If you need a better understanding of concurrency and multi-threading and why we do things the way we do them nowadays, I'd recommend reading the book “Java Concurrency in Practice” by Brian Goetz.
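To make the contrast concrete, here is a minimal Python sketch of the array example (my own illustration; the article's examples are figures and its reference material is Java): a concurrent version where workers synchronise on a shared cursor, and a parallel divide-and-conquer version where each worker sums an independent chunk and only the partial results are combined.

```python
# Concurrent vs parallel processing of the same array-summing task.
import threading
from concurrent.futures import ProcessPoolExecutor

N = 1_000_000

def chunk_sum(bounds):
    lo, hi = bounds
    return sum(range(lo, hi))   # each chunk is an independent sub-problem

if __name__ == "__main__":
    # Concurrent: every element requires taking a lock on shared state.
    lock = threading.Lock()
    state = {"cursor": 0, "total": 0}

    def worker():
        while True:
            with lock:                     # synchronisation overhead
                i = state["cursor"]
                if i >= N:
                    return
                state["cursor"] += 1
                state["total"] += i

    threads = [threading.Thread(target=worker) for _ in range(2)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()

    # Parallel: no shared state; combine the four partial sums at the end.
    chunks = [(i * N // 4, (i + 1) * N // 4) for i in range(4)]
    with ProcessPoolExecutor(max_workers=4) as pool:
        parallel_total = sum(pool.map(chunk_sum, chunks))

    print(state["total"] == parallel_total)  # True
```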
https://medium.com/better-programming/when-parallelism-beats-concurrency-5f52d7012944
['The Bored Dev']
2020-07-23 09:48:51.096000+00:00
['Programming', 'Java', 'Concurrency', 'Software Development', 'Software Engineering']
How to read out your smart gas meter with a raspberry pi
How to read out your smart gas meter with a raspberry pi A quick guide on how to set up a Raspberry Pi to read out a smart gas meter and set up a website to plot gas consumption over time. In the Netherlands, most houses are heated with natural gas and most people use gas for cooking. Over the last 10 years, the majority of houses switched to smart meters for gas and electricity. These meters make it possible for suppliers to read out your consumption remotely. Unfortunately, less than 10% of homeowners actually read out their gas meter themselves. This is a shame, as it could help reduce consumption. reading out my smart gas meter with a raspberry pi zero I have a couple of Raspberry Pis lying around, and I took up a project to read out my smart gas meter and show the readings in a graph on a website. I used AWS IoT Core to process the messages from the smart meter and AWS DynamoDB to store the readings. Finally, the website is hosted on AWS and protected by AWS Cognito (not everyone needs to know how much gas I use). To keep this quick guide relatively short, I assume you have some affinity with the Raspberry Pi, Python, and AWS. The end result looks like this: cumulative gas consumption over time Let's get started Step 1: Read out the smart gas meter with Python Each smart meter comes with a P1 port (one of those old telephone connectors), which you can connect to USB with the right cable. I used a Pi Zero without wifi, so I had to use a USB-to-RJ45 Ethernet adapter (the full setup is shown in the picture above; the black cable is the P1-to-USB connector). To read out the meter I used this Python script (heavily borrowed from http://gejanssen.com/howto/Slimme-meter-uitlezen/). Each meter type has a different message layout, which means you need to try different values in row 29 and row 43. The next step is to set up a messaging service to push the readings to AWS IoT Core. Step 2: Set up AWS IoT Core AWS IoT Core provides secure communication between internet-connected devices such as sensors, actuators, embedded micro-controllers, or smart appliances and the AWS Cloud. It uses the MQTT machine-to-machine connectivity protocol. To set up a new thing, follow these steps in the AWS IoT Core developer guide. register your thing in AWS IoT core Download the thing's certificates and store them on your Pi. You also need to download the AWS IoT Python SDK. To send messages to AWS IoT Core I enhanced the previous script with an MQTT messaging script, see below. This script sends a message with the gas meter's reading to a so-called “topic”. Within AWS IoT Core you can subscribe to this topic and set up rules to process the incoming messages. Before we do that, we need to create a DynamoDB database. Step 3: Set up DynamoDB and a processing rule As I want to plot a historic trend of gas consumption, I need a database. For this project, DynamoDB is very suitable; it is easy to set up and cheap. Simply create a new table with datetime as the primary key. If you have multiple things, you can use deviceID as the primary key and datetime as the sorting key and store everything in a single table. setup dynamoDB to store gas meter readings Now that we have a database, we need to set up a rule in AWS IoT Core to process incoming messages and push them to the database. In my case the rule is quite simple: send all messages from the topic gas_reading to the DynamoDB table gas_meter. You can do this in the Act section of AWS IoT Core.
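The author's reader and MQTT scripts are linked as gists rather than reproduced here; as a stand-in, below is a hedged Python sketch combining steps 1 and 2. The serial settings, the OBIS line code for the gas reading, the endpoint, and the certificate paths are all placeholders that vary per meter and per AWS account.

```python
# Hedged sketch of steps 1 and 2 (not the author's exact script): read one
# gas reading from the P1 port with pyserial, then publish it to the
# gas_reading topic with the AWS IoT Python SDK.
import json
import time

import serial
from AWSIoTPythonSDK.MQTTLib import AWSIoTMQTTClient

# P1 serial settings; older meters may need 9600 baud with different parity.
ser = serial.Serial("/dev/ttyUSB0", 115200, timeout=10)

reading = None
for _ in range(40):                       # scan one telegram for the gas line
    line = ser.readline().decode("ascii", errors="ignore")
    if line.startswith("0-1:24.2.1"):     # gas OBIS code; meter-specific
        # e.g. "0-1:24.2.1(201228160000W)(01234.567*m3)"
        reading = float(line.split("(")[-1].rstrip(")\r\n").replace("*m3", ""))
        break
ser.close()

client = AWSIoTMQTTClient("gas-meter-pi")  # client ID, must be unique
client.configureEndpoint("xxxxxxxx-ats.iot.eu-west-1.amazonaws.com", 8883)
client.configureCredentials("root-CA.crt", "private.pem.key", "certificate.pem.crt")
client.connect()
client.publish(
    "gas_reading",
    json.dumps({"datetime": time.strftime("%Y-%m-%d %H:%M:%S"),
                "reading": reading}),
    1,  # QoS 1
)
client.disconnect()
```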
setup a rule to push topic messages to dynamoDB

Step 4: Schedule the Python script with crontab

Now that the 'pipeline' is set up, the Python script can be scheduled at the desired interval. The easiest way to do this is with crontab. On your Pi, run crontab -e and add a line at the bottom that runs the script every 5 minutes (the cron pattern */5 * * * * does this). Now, every 5 minutes a reading is taken from the smart meter, published to a topic in AWS IoT, and pushed into DynamoDB.

add a line to crontab to schedule the python script at the desired frequency

Step 5: Set up a website to plot the graph

The last step is to plot the values in a graph on a website. I am pretty new to front-end development, so bear with me on this one. I assume you have some affinity with node.js, as you need to install two modules. The first one is chart.js, a powerful module for creating nice graphs, and the second one is moment.js, as you are working with datetime values and these can be quite tricky. To be able to connect to DynamoDB in AWS, you need to add a link to the AWS SDK script (through a CDN). The most bare-bones version of the HTML script is shown below.

most bare-bones html script to show the graph

The function to create the graph is shown below. The trickiest part is working with AWS credentials, as you don't want everyone to be able to query your cloud-based database. For local development it's fine to hard-code AWS credentials (Access key ID and Secret access key), but on a public website you need to secure access much more thoroughly. I used AWS Cognito for that.

chart.js file for the website

To set up AWS Cognito you need to define three things:

user pool — who can log in

app client — to which application do they have access

identity pool — which resources within AWS can be accessed for this application by these users

Within AWS, go to Amazon Cognito and click on 'Manage User Pools' first.

start by setting up a user pool in AWS Cognito

Give your user pool a name and proceed with entering the details. Note down the user pool ID, as you need it later.

give your user pool a name and proceed with setup

define details of the user pool

Once it is set up, go to 'Users and Groups' in the menu on the left and create a test user (using a working email address).

once setup, add a user for testing purposes

The next step is to define an app client. Do this by clicking on App client settings in the menu on the left. Note down the app client ID, as you need it later.

setup AppClient

As part of the app client setup, you need to define sign-in and sign-out URLs. For development purposes you can use localhost over http, but all other URLs have to use https (keep this in mind when setting up your website).

setup URLs for login and logout

As a final step in setting up the app client, you need to define the domain that is used in the authentication process.

setup domain url

This sounds like a whole lot of work, but things make sense as soon as you integrate it with your website. To trigger the authentication process, you can define a button on your website with a URL that combines the parameters you defined above. The login URL is set up as follows: https://<your domain prefix>.auth.<your region>.amazoncognito.com/login?client_id=<your app client id>&response_type=token&scope=aws.cognito.signin.user.admin+email+openid+phone&redirect_uri=<your callback url>

Once logged in, you want to be able to access the DynamoDB table with the gas meter readings. You do this by setting up a so-called Identity Pool (i.e. Federated Identity), which exchanges the Cognito login token for temporary AWS credentials, as sketched below.
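Before walking through the console setup, it may help to see what an identity pool actually does. The sketch below performs the same credential exchange that the website's JavaScript performs, written here in Python with boto3 for clarity; the region, pool IDs, and ID token are placeholders.

```python
# Sketch: exchange a Cognito login token for temporary AWS credentials,
# then use those credentials to read the gas readings.
import boto3

REGION = "eu-west-1"                                  # assumption
USER_POOL_ID = "eu-west-1_XXXXXXXXX"                  # from the user pool setup above
IDENTITY_POOL_ID = "eu-west-1:1234abcd-..."           # from the identity pool setup below
ID_TOKEN = "<JWT returned by the Cognito login URL>"  # placeholder

ci = boto3.client("cognito-identity", region_name=REGION)
logins = {f"cognito-idp.{REGION}.amazonaws.com/{USER_POOL_ID}": ID_TOKEN}

# 1) Exchange the user pool token for an identity ID...
identity = ci.get_id(IdentityPoolId=IDENTITY_POOL_ID, Logins=logins)

# 2) ...then for temporary AWS credentials scoped by the authenticated IAM role.
creds = ci.get_credentials_for_identity(
    IdentityId=identity["IdentityId"], Logins=logins
)["Credentials"]

# 3) Use those credentials to query the readings (read-only, per the IAM role).
dynamodb = boto3.client(
    "dynamodb",
    region_name=REGION,
    aws_access_key_id=creds["AccessKeyId"],
    aws_secret_access_key=creds["SecretKey"],
    aws_session_token=creds["SessionToken"],
)
print(dynamodb.scan(TableName="gas_meter", Limit=5)["Items"])
```

The IAM role you attach to authenticated identities in the next step determines what these temporary credentials may do; here, that should be read-only access to the gas_meter table.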
now click on Manage Identity Pools

The next step is to give the identity pool a name and specify the options (the defaults work fine).

give your identity pool a name and create a new pool

In the next screen, take note of the identity pool ID (you need it later) and create new IAM roles for authenticated and unauthenticated users. When creating the roles, you need to specify which AWS services they may access and to what extent (e.g. read or write). In this case, you want users to have read access to the DynamoDB table containing the gas readings.

take note of the pool ID and define IAM roles

Once this is set up, authenticated users get credentials to query the DynamoDB table with the gas readings. Now update the chart.js file for the website by entering the IdentityPoolId and userPoolId in the correct places. The end result will be the graph I showed at the beginning of the article. The working website can be found at myhome.schrama.io. You will find some other features there as well, which I will write about in a next article. You can find the full code on GitHub:

Summary

In this article I talked you through how you can use a Raspberry Pi to read out a smart gas meter, push the readings to DynamoDB using AWS IoT, and visualize the readings on a website protected by credentials. For me this was a nice project to learn more about Python, the Raspberry Pi, AWS, databases, IoT, and front-end development. I also learned a lot about our gas consumption at home. By being more conscious about how much we consume, we now limit the time we shower and set the thermostat a few degrees lower. So, something positive for the environment as well. I hope you enjoyed reading this article and got enthusiastic enough to pick up a project like this yourself!
https://medium.com/python-in-plain-english/how-to-read-out-your-smart-gas-meter-with-a-raspberry-pi-f28168b9658c
['Erik Schrama']
2020-07-02 11:07:41.951000+00:00
['Front End Development', 'Python', 'IoT', 'AWS', 'Raspberry Pi']
The Ghost Doll
I’m going to tell you a story. It started with the rain.

Summer storms are something we’re used to in the Midwest. The day begins hot and dry, and you spend the whole afternoon looking for shade. Or, as it happened on that day, sitting in the front yard waiting for the ice cream truck to come rolling down the street. Anything to beat the heat. Until, that is, the skies opened up and a deluge followed. We were chased into the house immediately. We barely had time to grab any of our things before we were soaked to the bone. The rain fell furiously. Within minutes, the dry cracked dirt turned to mud and the front yard was already flooding.

At first, this meant no more playing outside. However, the rain did have a silver lining. Our mom couldn’t make us go outside, and we’d recently been indoctrinated into the cult of PlayStation. So we proceeded to spend the rest of the day playing Crash Team Racing.

Eventually, the rain stopped. It was like any other day. We ate dinner. We played a few more video games. Before long, it was time for bed. I waited till the door was shut and quietly turned the television back on. It was summer, and I’d reached the age where staying up late was getting more enjoyable, even though there wasn’t really anything on but infomercials.

And that’s when it started.

I was sitting in my bed when I heard something. It was faint at first. It almost sounded like…like singing? I tried to ignore it for as long as possible. But as the evening went on it kept happening. I tried to wake my little brother up, but he was out cold. I decided that it had to be the television. So I turned it off. Without the ambient noise, the sound became clear, and all the more unsettling. It was someone singing. It was clearer now. It was definitely a little girl’s voice.

“Ring-around-the-rosie, Pocketful of posies, Ashes, ashes, We all fall down.”

It repeated over and over again. Now imagine you’re ten years old. You’re sitting in the dark, and then you hear some little girl singing Ring Around the Rosie. I was terrified. I was convinced that I was being haunted. I’d only seen a few horror movies at this point, but I knew what was coming.

I tried to get my mom to come listen, but she thought it was just me trying to get out of going to bed. To be fair, I had a rather active imagination at this time and had a penchant for making up stories. Thrice I was rejected in my pleas to have her come and listen because someone was singing outside my room. I must have sounded insane. Finally, she came to my room with me. Sure enough, she could hear it too. Ever the smug little rascal, I immediately quipped, “I told you.”

However, she was far too curious and downright worried about why there was someone singing outside the house. She was worried enough to call my father at work. He asked if the dog was barking or seemed to be worried. At the time, we had a truly massive canine guardian who loved patrolling the house at night. But she was sleeping soundly. He assured her it had to be nothing. He offered to come home and take a look, but she declined, figuring there had to be an explanation.

We let the dog out. She wandered into the night, but we heard no commotion. If someone had physically been in our yard, they would have been mincemeat. But the dog roamed around casually. We joined her outside. I was holding the flashlight. This was well before the days of high-powered LED lights that could brighten the whole yard. The only light source we had was a Maglite the size of a little league bat.
We could hear the singing more clearly now. Still going strong. I cast the light around the yard. Nothing. The yard was empty, or so I thought. So we wandered onto the grass. This would be the point in the story where people would be screaming “No!” at the screen in a horror movie. We walked further into the black void, with just a small cone of light in front of us. The light chose that evening to start waning. We went farther out. The singing got louder. Finally, I looked down, and found out what was making the noise.

This is where I interrupt the story to insert a critical detail. You see, about six months prior, my sister got a doll for Christmas. It was one that sang when you held both its hands. However, my sister grew tired of it rather quickly, and no one saw much of the doll after a week. Until she happened to take that very doll outside during the day. The same day we got the surprise summer storm and didn’t have time to pick everything up. No one knew the doll was outside. Heck, everyone forgot the doll existed. Guess what song the doll sang when you pressed its hands?

Now, the doll was lying out on the ground just singing away, but no one was holding it. That’s when my mother figured the rain had shorted something, and as soon as she pulled the battery out, the singing stopped. I don’t know if I’d ever been that relieved. My mom put on a brave face, but later she told me how freaked out she was hearing the singing.

After that we went back inside. My mom let me play video games even though it was well past my bedtime, because I had been so freaked out that I don’t think she really wanted to say no. And I went back to the bedroom and turned on the PlayStation, excited that the only sound was coming from the television. My mom kissed me on the forehead and told me not to get too loud so I wouldn’t wake my brother.

Now we laugh about it, but man, that was a freaky night. We were both so relieved that it was just a malfunctioning electronic toy. It was kind of funny though. Even though it wasn’t singing anymore, my mom still left the thing outside. It went out in the trash the next day.
https://medium.com/the-inkwell/the-ghost-doll-23133e63a5a6
['Matthew Donnellon']
2020-12-15 06:36:34.380000+00:00
['Life', 'Books', 'Life Lessons', 'Short Story', 'Creativity']
Why Carbon Capture Is the Future
Details about our buddy CO2

CO2 (or carbon dioxide) is a natural gas that allows sunlight to reach the Earth. How so? Well, let’s take a look at the diagram below first:

We can see that when the sun emits solar energy, it can go two ways: out into space, or down to the Earth’s surface. However, this distribution is not proportionate; the majority goes to the Earth, warming it and increasing its temperature.

Hold up! Why does the heat of the sun get absorbed by the Earth’s surface more than by space?

So, remember our buddy CO2? Carbon dioxide is a greenhouse gas (GHG), which means that it is a very weak absorber of incoming solar radiation. Because it does not block the solar radiation that enters the Earth’s atmosphere, the sunlight comes rushing through to the surface without further protection. Think of CO2 like a phony soldier. It tries to give the allure that it can protect the noble Earth, but in fact it lets the solar radiation (i.e. the “enemies”) race through, which heats the Earth.

While CO2 can’t absorb the incoming solar energy, it is very good at absorbing the infrared radiation (heat) that the warmed Earth sends back out, and it re-emits that heat. Much of this emission is propagated and directed downwards towards the Earth. That’s not to say that it’s only directed towards the Earth’s surface. Sure, some of the rays will go outward towards space, but only minimally.

Should I curse CO2 molecules? Of course not. In fact, if it weren’t for this process, which is called the greenhouse effect, our Earth would be approximately -18 degrees Celsius! That’s like living on an iceberg in Iceland! Without this entrapment of heat on our planet, the Earth would not be habitable.

Here’s the difference between what happens with incoming radiation (coming from the sun) vs. outgoing radiation (re-emitted by the Earth):

Uhh… remind me what the problem is?

Anything that is done excessively to the environment is by consequence adverse. Because there has been an excess amount of mining and burning of fossil fuels, a temperature imbalance has been created. The more CO2 released into the Earth’s atmosphere, the warmer the Earth becomes (because the majority of the outgoing infrared radiation is redirected back to the Earth by the CO2 molecules). Greenhouse gases such as CO2 are the main contributors to global warming, which is deteriorating the environment (icebergs melting, wildlife losing their habitats, flooding caused by meltwater, etc.).

You might be wondering: “Well Kawtar, why are you only talking about CO2?” Good question! It’s because not only is CO2 the most abundant of the greenhouse gases we emit, but it’s also the one that shows the most potential to improve our Earth’s circumstance. I know, seems pretty contradictory. It’s the most plentiful but it also shows the most potential to help? Let me explain.

The Process of Capturing CO2

One of the most promising initiatives for slowing climate change is to quite literally capture the CO2 emissions that amplify the Earth’s immoderate heating. In other words, it’s putting the carbon dioxide emissions back where they came from (underground)!

There are three steps involved in carbon capture:

Separating and trapping the CO2 from other gases.

Transporting the substance to a storage location.

Storing and secluding it away from the atmosphere.

Mechanism of the Separation

There are various methods to separate CO2 from other gases.
Some methods utilize a “filter” containing a solvent that absorbs CO2. This solvent is later heated, which liberates water vapor and leaves a concentrated amount of CO2 as residue. There is also the process of calcination (essentially heating a solid to a very high temperature to drive off the CO2), selective membranes (which act like the filter, trapping the CO2), etc.

If we have all these mechanisms, how come the progress of carbon capture has been so slow, with only 22 projects around the world? Well, let’s just say that these processes tend to require a lot of cha-ching $$$ (specifically direct air capture), time, and resources. So, to make carbon capture a universal objective, companies need to fund these facilities and take the initiative of becoming carbon negative.

Companies like Climeworks are pioneering the carbon capture industry by installing the world’s first direct air capture facility. Air is drawn into the collector by a fan. Then a filter, which acts as a selective layer, collects CO2 on its surface; once the filter is full, the collector is closed. Finally, the container is heated to the temperature that releases (and separates) the carbon dioxide from the filter.

Transportation of the Substance

Firstly, what do we use to transport the substance? Pipelines. They start at the source of capture and go to the storage location. CO2 usually travels in its gaseous state. However, I am quite uneasy and hesitant about the usage of pipelines. Although they are the safest way to transport oils (in terms of their quality), there are still ethical concerns tied to this method, like the disruption this would pose to Indigenous communities. It wouldn’t make sense to solve a problem while simultaneously creating another.

One of the greatest flaws of carbon capture is that the actual transportation of the matter disturbs Indigenous communities. Yes, pipelines are the most environmentally friendly option compared to trains or trucks. However, that should not infringe on or pose a threat to the land that other people live on.

Where the heck is it stored?!

Now, when the matter finally arrives at its destination (think of CO2 like a spoiled yet posh knight that uses pipelines as its plane and sits in the first-class area), it can be snugly stored away from the atmosphere. It’s kind of like a whole kingdom (the deteriorated Earth) actually resents the CO2 soldier, so instead of firing it, they find a cowardly way to just hide it away from civilization. So where will Sir CO2 go? Either underground in the soil or deep underwater (geez, this spoiled molecule is going to take some time to adjust from its once comfy first-class seat to literal dirt, in fear of sharks…).

Storing gases and oils underground isn’t new for these industries; in fact, this kind of underground storage even has a term: geological sequestration. These reservoirs are ideal for storing CO2 because they have a multitude of overlying rocks. These rocks form a seal which ensures the entrapment of the carbon dioxide molecules.

We can also dump the CO2 in the ocean. Doesn’t sound the most appealing, right? Hear me out: when it is dumped at a depth greater than 3,500 meters, scientists affirm that it will convert to a slushy form while falling to the ocean floor. Thus, it shouldn’t make its way back up to the surface of the ocean, where it would deteriorate the water and marine life. Nonetheless, this is still a prediction; because it hasn’t been largely tested, it remains a hypothetical antidote.
Thoughts

To conclude, carbon capture has its benefits and risks. It is one of the most promising emerging technologies, directly capturing the excess CO2 that drives the warming (aka global warming) of our planet. On the other hand, the mechanisms involved may pose ethical issues for Indigenous communities by amplifying the quantity of pipelines (which disrupt their land), and its novelty creates a marketing problem, as it is difficult to commercialize when there are still so many concerns about the predictions.

All in all, we can see that the underlying problems with this technology are centered around the social aspect. So, if we can find a way to use pipelines without disrupting Indigenous land (specifically in Canada), conduct more tests to confirm what are for now only predictions, and commercialize the hell out of it… then there’s really no downside to helping prevent climate change.
https://medium.com/carre4/why-carbon-capture-is-the-future-474ef7eb715c
['Kawtar Karmouni']
2020-12-23 17:52:31.078000+00:00
['Science', 'Climate Change', 'Global Warming', 'Environment', 'Green Energy']
How to Choose the Best Open Source Module For Your Needs
Consider the Code

I don’t necessarily take the time to read every line of code in a module that I am considering using in my project. But I do check for the basics:

Does the module follow a coding standard? A good way to check is if the module includes a lint command, or if the README indicates that a standard is being followed. I do have my own preference for coding in JS, but I’m not super concerned about which standard is used, as long as a standard is used. This is important because there’s a good chance that I will need to consult a module’s source code at some point in time, and I do not want to deal with code that is poorly formatted.

Does the module include automated tests? Are those tests automatically run by a CI system, and does the README include a badge indicating the current build/test state? Cloud-based continuous integration services are so ubiquitous these days that I don’t think there is a reasonable excuse not to use them, even for the smallest of projects.

(There is a side note that I think is important to mention here. Just because a badge indicates that the latest build failed, it doesn’t mean the build actually failed. Sometimes web caching gets in the way of indicating the true status, and so I always click on these badges to check for myself. I’ve also found that the build matrix may include environments or versions that I’m not concerned about, and their failure may or may not be an issue.)

Does the module use Semantic Versioning, and is the current release at 1.0.0 or higher? SemVer isn’t always the easiest concept to apply correctly, but I always award points to those who are trying. I also tend to avoid modules that are stuck in “not ready for prime time” mode (meaning, they proudly indicate they are major version 0 and were initially released months or years ago), even if they are at least somewhat popular and fill an important need. To me, this is a big red flag. Remember what I said above about reliability and consistency? I’ve come to the conclusion that using SemVer is a sign of respect and consideration for others. We should stand by our work, and if we’re ready to release something “into the wild,” so to speak, then we should at least provide and communicate some guarantee about stability.

Does the API generally conform to a style or pattern (functional vs. imperative, fluent, declarative, etc.) that I’m comfortable programming against? API design is a little tricky to get right, and some APIs definitely make more sense than others. One of the good things about having multiple choices between different modules is that we can often pick one that matches our preferences.
https://medium.com/better-programming/how-to-choose-the-best-open-source-module-for-your-needs-a205c1defd62
['David Passarelli']
2020-07-15 22:03:25.912000+00:00
['Programming', 'Software Engineering', 'Software Development', 'Open Source', 'Startup']
Medium Has Changed Again, What Does It Mean for You?
Medium Has Changed Again, What Does It Mean for You?

Does curation still exist and other pressing questions

Photo by Viktor Vasicsek on Unsplash

Medium has done it again. A new announcement regarding a change to the curation and distribution system has just been dropped. But first things first. Yes, there is still curation, and yes, your story will still be curated into topics. More on that later. Here is the official Medium article on the updates to the curation and distribution system:

The changes to curation and distribution are all in sync with Medium’s new plan of a “relational Medium.” So, before we delve into curation changes, we should take a quick look at the latest from Ev Williams. To sum it up, Medium is rolling out a new mobile app. The big difference is that the algorithm will not be based on topics but on followings.

What does this mean for you? If you have a large following on Medium, your stories will become more accessible to your readers. This is great for the top writers with thousands of followers. On the other hand, if you have a modest following, it doesn’t appear you’re going to get much visibility.

For example, under the old method, I follow the topic of “fitness”; therefore, I see stories about “fitness” in my feed. I might see a writer whose work I read regularly, but I can also discover new writers who write about “fitness.” But now, what stories I see are driven by whom I follow, not the topics I follow.

I don’t have the updated app, so I can’t review it in action. But from a screenshot in Williams’s article, you can see that you will be presented with a row of the authors you follow. You can click on any one of them and be taken directly to their profile page, where you can read their latest articles. Again, this is fantastic for the top writers. I’m sure they are thrilled, as this means even more exposure for them. But if you’re a new writer without much of a following, I am concerned you will struggle to get your stories in front of readers.

The other casualty of this new method is the writer with multiple niches. I mainly write about health and fitness. But I also write about parenting and the environment. Occasionally I write a poem. The beauty of Medium is that I’m allowed to express my creativity and write about anything that interests me. Medium’s algorithms made this all work. My fitness followers could see my fitness articles. My poetry followers, if they existed, could see my poetry. But now, how many of them will be turned away if they click my profile and get hit with a barrage of mediocre poems?

My other thoughts are about how I consume media. When I read the New York Times or scroll through Google News, I’m not searching out my favorite columnists. I’m reading whatever headlines catch my eye in the topics I am interested in. Williams is moving away from this in an effort for authors to “develop deeper relationships” with their readers. Sadly, the side effect is that it will be even more difficult to discover new voices on Medium.
https://medium.com/illumination/medium-has-changed-again-what-does-it-mean-for-you-c6a9e4aef5b8
['Jennifer Geer']
2020-10-12 19:19:05.857000+00:00
['Creativity', 'Writing', 'Blogging', 'Medium', 'Writing On Medium']
The Roles of Service Mesh and API Gateways in Microservice Architecture
Service Mesh

A service mesh is a technology that manages service-to-service communication within a distributed software system. Service meshes manage east-west network communications: east-west traffic is the traffic that flows inside a data center, Kubernetes cluster, or other distributed system.

Service meshes consist of two important components:

Control plane

Data plane

The proxies residing next to the app are called the data plane, while the management components coordinating the behavior of the proxies are called the control plane.

Service Mesh — Image credit: Author

A service mesh allows you to separate the application’s business logic from networking, reliability, security, and observability concerns.

Networking and traffic management

A service mesh allows you to perform dynamic service discovery. A sidecar proxy can help you with load balancing and rate limiting. It can also help you do traffic splitting to perform A/B testing, which is useful for canary releases.

Observability and reliability

A service mesh supports distributed tracing, which helps you do advanced monitoring (number of requests, success rates, and response latencies) and debugging. It can even tap service-to-service communication to give you a better understanding of how your services talk to each other. Since the service mesh provides health checks, retries, timeouts, and circuit breaking, it improves the baseline reliability of your application.

Security

A service mesh allows mutual TLS among the services, which increases the security of service-to-service communication. You can also implement access-control lists (ACLs) as security policies.

A true service mesh/sidecar proxy supports a wide range of services and implements L4/L7 traffic policies. There are numerous service meshes available on the market, and you can find a number of articles on the internet comparing them.
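As a concrete taste of these traffic-management features, here is a minimal sketch using the configuration API of one popular mesh, Istio. The service name, subsets, and numbers are purely illustrative, and the subsets themselves would be defined in a companion DestinationRule:

```yaml
# Hypothetical Istio VirtualService: split traffic 90/10 between two versions
# of a service (canary style) and add retries and a timeout.
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: orders
spec:
  hosts:
    - orders
  http:
    - route:
        - destination:
            host: orders
            subset: v1
          weight: 90
        - destination:
            host: orders
            subset: v2   # canary release receives 10% of traffic
          weight: 10
      retries:
        attempts: 3
        perTryTimeout: 2s
      timeout: 10s
```

Comparable retry, timeout, and traffic-splitting primitives exist in the other meshes, each with its own configuration format.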
https://medium.com/better-programming/the-roles-of-service-mesh-and-api-gateways-in-microservice-architecture-f6e7dfd61043
['Tanmay Deshpande']
2020-10-03 01:34:55.343000+00:00
['Software Development', 'API', 'Microservices', 'Software Engineering', 'Programming']
3 to read: LA Times debacle | Goodbye, alt weeklies | Amazing 1950s dataviz
By Matt Carroll <@MattCData>

Aug. 29, 2017: Cool stuff about journalism, once a week. Get notified via email? Subscribe: 3toread (at) gmail. Originally published on 3toread.co

1. What’s the matter with the L.A. Times?: The editorial leadership of the L.A. Times was fired this week, in a stunning move. Well, here’s some of the jaw-dropping backstory that led to the axings. Ed Leibowitz of L.A. Magazine details the ineptness of editor-in-chief and publisher Davan Maharaj, who sat on major investigative pieces for years, but had time to ask reporters to check out his new Italian shoes. The story ran in December, but it’s worth resurrecting for its excruciating detail of dysfunction at the top of what was once a great newsroom.

Logo by Leigh Carroll (Instagram: @leighzaah)

2. What we lose when we say goodbye to alt weeklies: The Village Voice is dead, at least in print. So, pretty much, are alt weeklies across the country. Paul Farhi of the WaPo has a nice elegy for what we have lost as those quirky mixes of band reviews, massage parlor ads, and hard-hitting political coverage closed up shop. A good read.

3. Amazing graphics from the 1950s NYT archive: Most people tend to think of “data viz” as a relatively new phenomenon. Not quite. News orgs have been doing impressive data viz for as long as they have existed. Stuart A. Thompson on Medium has a wonderful piece on some spectacular (and forgotten) visualizations done at the NYT during the 1950s. Check out the cool space travel illustrations (all b&w, by the way).

Get notified via email: Send a note to 3toread (at) gmail.com

Matt Carroll is a journalism professor at Northeastern University.
https://medium.com/3-to-read/3-to-read-la-times-debacle-goodbye-alt-weeklies-amazing-1950s-dataviz-5ad26f2a65f1
['Matt Carroll']
2017-08-26 14:18:28.959000+00:00
['Data Visualization', 'Media Criticism', 'Journalism', 'Media', 'Dataviz']
3 Mistakes Junior Developers Make With React Function Component State
2. Setting State That Relies on the Previous State Without Using a Function

There are two ways to use the setter method returned by the useState hook. The first way is to provide a new value as an argument. The second way is to provide a function as an argument. So, when would you want to use one over the other?

If you were to have, for example, a button that can be enabled or disabled, you might have a piece of state called isDisabled that holds a boolean value. If you wanted to toggle the button from enabled to disabled, it might be tempting to write something like this, using a value as the argument:

```js
// Initial setup
const [isDisabled, setIsDisabled] = useState(false)

// Later, modifying the state
setIsDisabled(!isDisabled)
```

So, what’s wrong with this? The problem lies in the fact that React state updates can be batched, meaning that multiple state updates can occur in a single update cycle. If your updates were to be batched and you had multiple updates to the enabled/disabled state, the end result may not be what you expect.

A better way to update the state here would be to provide a function of the previous state as the argument:

```js
// Initial setup
const [isDisabled, setIsDisabled] = useState(false)

// Later, modifying the state
setIsDisabled(isDisabled => !isDisabled)
```

Now, even if your state updates are batched and multiple updates to the enabled/disabled state are made together, each update will rely on the correct previous state so that you always end up with the result you expect.

The same is true for something like incrementing a counter. Don’t do this:

```js
// Initial setup
const [counterValue, setCounterValue] = useState(0)

// Later, modifying the state
setCounterValue(counterValue + 1)
```

Do this:

```js
// Initial setup
const [counterValue, setCounterValue] = useState(0)

// Later, modifying the state
setCounterValue(counterValue => counterValue + 1)
```

The key here is that if your new state relies on the value of the old state, you should always use a function as the argument. If you are setting a value that does not rely on the value of the old state, then you can use a value as the argument.
https://medium.com/better-programming/3-mistakes-junior-developers-make-with-react-function-component-state-8a744ab99a0d
['Tyler Hawkins']
2020-07-27 12:13:27.622000+00:00
['JavaScript', 'React', 'Software Engineering', 'Hooks', 'Programming']
Selling Russian Hackers
The Russian Trade Delegation has its headquarters in a 1970s-style building hidden in a leafy lane near Highgate Woods, North London. I was due to attend a trade delegation of Russian cyber-security companies. Russian hackers.

The week before, I’d been researching the creation of fake social media profiles and had been following a long string of forum posts by a nameless Russian hacker on the BlackhatSEOworld forum. A strangely compelling live log of his journey building (and ultimately selling) huge numbers of fake Facebook accounts.

This event, however, was a government-sponsored promotion of the Skolkovo Foundation. The Skolkovo Foundation is a huge planned startup town just outside Moscow aiming to have 1,400 startups by 2020 across multiple different sectors, complete with shared laboratories and its own university, Skoltech. All of the companies pitching here were not so much startups as scale-ups. They already had some big-name clients, primarily focused on the Russian sphere, but were eager to grow Western connections. So it was all about pitching trust. Some succeeded, and some missed the mark of British sensitivities.

Ilya Sachkov, CEO of Group-IB, opened with a story about how a movie company had approached them with a script about cyber security. His company does threat detection: not software to protect against attacks, but proactively following and detecting the agents that might carry them out. The scriptwriter came in, asked questions, looked around. Some months later, they apprehended a number of criminals and were able to filter through their assets. This turned out to be the very organisation that had financed the movie company. “We now have a new level of internal paranoia,” said Ilya.

Ilya Sachkov, Group-IB, explains threat detection

Sergey Khodakov, of the Skolkovo Foundation, put the case: “Countries that build good knives know best how to defend against them. Our cyber security companies typically begin in penetration testing. Once they’re trusted with that, it becomes much easier to sell product into corporations.”

Others enthusiastically pitched the features of employee monitoring software, perhaps missing British Big Brother sensitivities in an unashamedly Russian way. A question from the audience about whether it was legal to monitor employees quite that much led to the following response:

“…don’t worry…there is checkbox you can untick”

So just how much business goes on between Russian and British companies? One of the British attendees from Ixcellerate, a data centre with facilities in Moscow, talked about the changes: “There are blacklists of people you can’t deal with, but there are good people doing good business everywhere. Those blacklists have stretched a little since 2014 [the annexation of Crimea] but everything still continues.”

Sergey Khodakov, head of Infosecurity track, Skolkovo Foundation, Dmitry Politov, director of international relations, Skolkovo Foundation.

Paul Graham of premier startup incubator Y Combinator lists five qualities that he looks for in entrepreneurs. One of these is “naughtiness”. In his words: “Though the most successful founders are usually good people, they tend to have a piratical gleam in their eye. They’re not Goody Two-Shoes type good. Morally, they care about getting the big questions right, but not about observing proprieties.”

After the Skolkovo pitches were over, one entrepreneur leaned in to me. “Did you know Kim Philby came here, to this building?”, he said.
Kim Philby, the high-ranking MI6 officer, was of course a Russian spy. For inspiring trust in corporate clients, this was utterly inappropriate. But for reckless entrepreneurs, it was a uniquely bonding comment.

Dr. Finn Macleod works for zenlikefocus.com, a service for startups that intelligently grows presence and sales through social media.
https://medium.com/the-data-experience/selling-russian-hackers-13e30fb487c6
[]
2016-07-12 01:37:37.272000+00:00
['Data Visualization', 'Cybersecurity', 'Data Science', 'Startup', 'Marketing']
Be a champion today
2. Hike like you’re in Ahaggar National Park When you hike through Ahaggar National Park in Algeria, each step builds off the last. Great views will come into your vision. And, if you’re in the right place, you’ll catch a sunset behind Mount Tahat. Your career should resemble such a wonderful hike. Or, perhaps your career is a wonderful hike if you’re lucky enough to be a pro hiker. Anyway, everything you’re doing should be working towards a beautiful peak or finish. If you ask Kamel about his career, you get this feeling. Having started with Djezzy in 2006, Kamel first worked as a call center advisor, learning a lot from his direct communications with customers. Then, he took a position as a copywriter for the company before working in advertising management. This eventually led to his role as a Brand & Communications Director today. Kamel and the team “Each position has been a step to the next one,” notes Kamel. “I completely understand the vision Djezzy has and I’m getting better and better at fulfilling that vision for this awesome company.” For Kamel, success hinges on continuous improvement and a willingness to learn and try new things. If you ask him what advice he would give his childhood self, he echoes similar sentiments. “I would tell myself: Never give up. Learn from your mistakes. Always work on getting better. And be open and listen to others” Read Kamel’s last tidbit of advice to his childhood self (about listening to others). It brings us to this point: If you are to be a champion, you can’t do it all alone (it’s best to hike through the Ahaggar Mountains with partners, too). “Reaching your highest potential goes beyond just improving your skills and self-confidence,” asserts Kamel. “You must understand teamwork and know how to work with others to get to where you want to be. That’s why at Djezzy I’m constantly working on building team spirit.” The new recruitment space at Djezzy 3. Airport signs say “welcome” in all sorts of languages You’ve probably seen those signs in the international airport that say “welcome” in many of the world’s languages. Kudos to you if you can read all or most of them. Anyway, the lesson here is that you have to speak to your customers in their language. Kamel recognized the importance of this while managing copywriting campaigns for Djezzy. That’s because there is a unique mix of languages and dialects used in Algeria. Throughout Algeria, Berber languages and dialects are spoken. In the north and northeast, you’ll commonly hear Kabyle; in the east, Chaoui is used; in the Algerian Sahara, it’s Tuareg. Arabic is a standard language in Algeria and is the mother tongue for many in the country, which means it’s heard everywhere. French is also used in commerce, government, and education, and is frequently encountered in major cities. In the west, there is even an influence from Spanish. In many regions, Algerians blend words from other languages with their native tongue, creating their own dialect. This is part of what makes Algeria so magical. But it also makes work as a copywriter more complex. “We had to use language people were using in their daily lives — because Djezzy is all about connecting to the person,” says Kamel. “We found ways to write customized copy for each region, while maintaining our unifying campaign theme.” Kamel and the team To make sure such copywriting campaigns work, Kamel believes in testing and looking at the data. Because you have to speak to the customer on their level. 
“For instance, we found out from our Facebook interactions that many users wanted Arabic subtitles for our French press conferences,” describes Kamel. “So now we’re doing that for all our videos.” So, if you want to succeed with your customers, find the right language and words to reach them. As you make your pitch to them, listen to your customer’s reactions — and adjust accordingly. 4. Cook something special for your customers (literally and figuratively) “Djezzy wants to connect to Algerians’ lives,” exclaims Kamel. “We have a show called Djezzy CLYC on our YouTube channel. This is where we talk about new product launches, new offers, and more. We’ve also sponsored MasterChef Algeria — a show all about making great food!” Like Kamel has stated, culinary art is one of the pillars of Djezzy’s brand development. Djezzy sponsors MasterChef so that the company can align its brand with Algerian culture. That means being there when it comes time to discover culinary arts (we want to be there for the delicious meal, too!). Before you go check out the MasterChef show, which will make you super hungry by the way, understand the deeper meaning here. It’s that you must prepare something special for your customers. To satisfy your customers, you must give them something they would like. After all, you wouldn’t cook a dish for guests if they hate that dish. This is why Kamel is always looking at the data: He wants to know what his customers like. The idea here is simple: You cook the flavors your guests prefer (i.e. your customers). If you ignore their tastes, you risk a meal going to waste. Make a dish they like, and you’ll be a champion. 5. Some experts are saying even robots need sleep Even if you’re actually a robot, you still need sleep. You can’t go at full blast all the time; you’ll burn out before becoming a champion. In fact, some experts are saying that robots do need sleep to function better. So don’t feel bad for taking a break. “I love spending time with my family and having meals with them,” says Kamel. “I like to play video games, like FIFA. I like to play football with friends and also with my son!” Again, we’re back at football. We wonder how good robots will be at football. Anyway, you get the point here. If you want to be a champion, take time for rest and fun along the journey. It’s the only way you can reach the pinnacle of excellence. Keep on pushing towards the top Kamel has given you his tips on how to be a champion. Now, your mission is to just do it. As you go on your journey, remember the value of persistence — a value Kamel’s team members say he’s instilled in them. Persistence is what will get you to that beautiful destination (well, that and culinary art and football).
https://medium.com/djezzy-careers/be-a-champion-today-fa17447a5aaf
['Veon Careers']
2018-12-14 13:26:56.037000+00:00
['Technology', 'Social Media', 'Marketing', 'Music']
Could the Flu Shot Offer Protection Against Covid-19?
Could the Flu Shot Offer Protection Against Covid-19?

Experts say it’s more important than ever that people get the flu vaccine

Earlier this year, a team of researchers based primarily at Johns Hopkins University examined county-level health data collected from all 50 states and the District of Columbia. The team was looking for associations between last season’s flu vaccine and deaths attributed to Covid-19. After adjusting for more than a dozen variables that could have confused their findings — such as race, income, education level, health status, and access to hospital care — the researchers concluded that among adults age 65 and older, a group that accounts for the vast majority of coronavirus-related deaths, high rates of vaccine coverage were associated with a significant drop in Covid-19 deaths. Each 10% increase in flu vaccine coverage within a community corresponded with a 28% reduction in coronavirus deaths, they found.

“Our findings suggest that influenza vaccination can possibly play a protective role in preventing the worst Covid-19 outcomes,” says Luigi Marchionni, co-author of the paper and an associate professor at Johns Hopkins Medicine.

In his team’s paper, which was posted online June 26 and has not yet undergone peer review or formal publication, Marchionni and his colleagues were quick to highlight their study’s limitations — which included a lack of Covid-19 reporting consistency among states and localities. But he says that he and his group have since performed additional analyses on the data, and their conclusions have held up. He also references a similar study from Brazil, which found evidence that people vaccinated against the flu were less likely than the unvaccinated to suffer from severe or deadly cases of Covid-19.

“We’re most likely going to face a dual pandemic this fall,” he says, referring to the ongoing SARS-CoV-2 crisis and the predictable resurgence of seasonal influenza. “Our findings point to some obvious reasons for getting the influenza vaccination.”

How the flu vaccine could provide an immune boost

Marchionni takes pains to explain that his team’s work was observational, meaning they were examining population-level data in an effort to find patterns and associations. They were not assessing individual Covid-19 patients in order to identify underlying explanations for their findings. But after delivering that caveat, he says that there’s some evidence that the flu vaccine may trigger one or more immune system changes that may repel the novel coronavirus.

“There is an epitope, meaning a piece of antigen that the immune system can recognize, that is similar between influenza and SARS-CoV-2,” he says. It’s possible that the flu vaccine could teach the immune system to be on guard for this antigen, which could provide a coronavirus-blocking benefit. He also says that the flu vaccine could induce a general “boost” to the immune system’s innate defenses, which could help it repel SARS-CoV-2.

There are other possible explanations for the observed association between flu vaccine coverage and reduced Covid-19 risk. During the spring, a number of reports documented cases in which people were infected with both the flu and SARS-CoV-2 at the same time. It’s not yet certain that coinfection with both viruses results in more extreme illness, but experts say that this is possible — and probably likely.
“Coinfection could absolutely make things worse,” says Steven Pergam, MD, an infectious disease specialist and associate professor at the Fred Hutchinson Cancer Research Center in Seattle. “If someone gets sick from the flu and their immune system is already on high alert and then they get Covid-19 on top of that, I worry that could shift them into a really severe Covid situation.”

The flu vaccine can prevent influenza infections, Pergam explains, and it can also reduce the severity of illness among those who become infected with flu. Both of these benefits could theoretically reduce the risk of life-threatening illness among those also infected with SARS-CoV-2, he says.

It’s also possible that someone who contracts the flu and then recovers from it is at greater risk for a bad case of Covid-19 due to lingering immune system changes or deficiencies. T-cells are a category of specialized white blood cells that help regulate the immune system. Research has shown that, following a case of the flu, people have reduced T-cell diversity and also a greater proportion of influenza-specific T-cells in the lungs. If a person is infected with the novel coronavirus, these T-cell imbalances may lead to the “exaggerated inflammatory response” that is associated with severe Covid-19 disease, Marchionni says.

But others say there could be a simpler explanation for the observed associations between the flu shot and reduced Covid-19 risks. “Patients who get the [flu] vaccine are also likely to be more health-conscious [than those who don’t],” points out Michael Ison, MD, a professor in the Division of Infectious Diseases at Northwestern University’s Feinberg School of Medicine. If vaccinated people are generally healthier, then that alone could explain the association between the flu shot and lower rates of Covid-19 deaths, he says.

Ison says that, for other reasons, it’s important for everyone to get the flu shot this year. But he’s not convinced that the influenza vaccine provides any special Covid-19-weakening immune benefits.

Other vaccines could offer an immune-system boost

The influenza vaccine isn’t the only one that researchers believe may offer some coronavirus-related benefits. Some are exploring the possibility that active but weakened polio or tuberculosis vaccines could stimulate the immune system in ways that protect people from SARS-CoV-2.

“When you vaccinate people with some of these live attenuated viral vaccines, the immune system produces a number of factors, such as interferons, which have a general antiviral effect,” says Paul Offit, MD, director of the Vaccine Education Center at Children’s Hospital of Philadelphia. Offit says that the National Institutes of Health is exploring the potential benefits of these vaccines, which could theoretically be deployed cheaply and widely among at-risk groups in an effort to block the spread of SARS-CoV-2. “Some people are pushing for this, but it’s not clear yet whether that will move forward,” he adds.

The importance of getting the flu vaccine

While experts say that the coronavirus-weakening power of the flu vaccine is far from proven, they unanimously agree that everyone should get a flu shot this fall.
“Every year, the recommendation for the influenza vaccine is for everyone over six months of age to get it, and it’s more important to get it this year than normal,” says Offit. Why? During the 2018–19 flu season, the Centers for Disease Control and Prevention estimates that roughly 36 million Americans came down with the flu, which led to more than 16 million health care visits or consultations and 500,000 hospitalizations. “If all this happens concurrently with Covid-19, you can see how that would quickly overwhelm the health care system,” Offit says. Northwestern’s Ison reiterates these concerns and says that they underscore the need for people to get the flu shot, which is already available at many nationwide pharmacies. “I think that most individuals should go ahead and get the vaccine as early as possible,” he says. Those who are immunocompromised may want to wait until October, he adds, because their body’s protective immune response to the vaccine may not last until the end of the flu season. Apart from the flu shot, he says that Covid-19-related safety measures like social distancing and masks can also protect people from seasonal respiratory infections like influenza. Good compliance with these protocols has led to an unusually mild flu season in countries like Australia, where the flu tends to circulate earlier in the year than it does in the U.S. But, due in part to poor adherence to these safety measures, Ison says that the U.S. “hasn’t done a good job of controlling Covid-19,” and so may be in for a rough flu season. “Flu vaccination this year is going to be very important,” he adds.
https://elemental.medium.com/could-the-flu-shot-offer-protection-against-covid-19-9b2c4ecf055f
['Markham Heid']
2020-08-21 14:53:48.976000+00:00
['The Nuance', 'Covid 19', 'Coronavirus', 'Disease', 'Health']
My top viewed posts, and thank you. ❤
If the follow button isn’t green, you should click it so my writing lands in your feed. Thanks! :) Follow
https://medium.com/linda-caroll/my-top-viewed-posts-and-thank-you-d975424cadbb
['Linda Caroll']
2020-12-10 00:54:00.257000+00:00
['Creativity', 'Advice', 'Reading', 'Inspiration', 'Writing']
6 Ways Trauma Might Inform Your Current Life
By Noel Hunter

An all too common experience of trauma survivors is hearing the suggestion, “Why don’t you just get over it?” The idea is that, well, it happened in the past, so it shouldn’t still be affecting you now. It’s as if each moment in life exists in a vacuum, separate and untouched by anything that happened prior to this moment.

The thing is, everything that’s happening right now is impacted by everything that has preceded it. Our brain filters each perception through a lens of past experience, it predicts the next moment based on past experience, and it reacts with primitive automatic reactions that are, you guessed it, based on past experience. Obviously, then, our past experiences influence everything in our current life.

Of course, not everyone who has survived trauma in their life will continue to be haunted and controlled by it. Many can and do heal, going on to live content and successful lives with the past nothing more than a fading scar. At the same time, many others continue to struggle in various ways. Many mental health systems push for positive, trauma-informed practices and awareness. Unfortunately, their policies are typically based in re-traumatizing narratives and reinforcement of trauma-based self-perceptions.

The following are some ways in which trauma commonly impacts a trauma survivor’s life. Imagine, as you read through, how different our society might be if systems of care and justice were as trauma-informed as your life might be…

Sense of Self

War, violence, sexual assault, physical and emotional abuse, physical and emotional neglect, chronic invalidation, chronic racism, chronic oppression, poverty: these things profoundly shape and/or re-shape how people view themselves. If you’ve been ignored, gaslit, blamed, or chronically invalidated, you’re probably going to pretty quickly get the idea that you don’t matter. Worthlessness, feeling invisible, constantly doubting yourself, and feeling two inches tall become a way of life.

Being violated, assaulted, or even assaulting another creates such a fundamental sense of shame deep within one’s soul that it festers like hidden mold behind a layer of glossy paint. Feelings of defectiveness and self-hatred appear reasonable and can be taken as objective fact. If you’ve been told your whole life that you’re a piece of shit, guess what? You’re probably going to believe that you’re a piece of shit. You don’t just snap out of this kind of thinking simply because you get older. And you definitely don’t stop thinking this way just because someone tells you to get over it.

Relationships

Our sense of self directly impacts how we interact with others. Self-hatred and feelings of inferiority and worthlessness make it pretty difficult to make small talk or engage in light-hearted verse. Not to mention that these feelings get projected onto others: we love to assume that everyone thinks just like us. If you look at yourself with disdain and fury, you’re pretty likely to assume everyone’s looking at you like that. And who wants to engage with someone who thinks you’re defective and awful?

We are taught from our first breath how to interact with those around us and what to expect from others. If those people to whom you are closest hurt you, then you learn that everyone will hurt you. Despite “knowing” that it’s inevitable, you will likely spend a great deal of energy trying your darndest to stop that hurt from happening; never really allowing yourself to just be with another.
It is fairly common for those who have experienced interpersonal trauma, specifically, to view people through the lens of the “drama triangle.” This trauma lens perceives all humans to play one of three roles: the perpetrator, the victim, or the rescuer. The thing is, these roles are ever-shifting. Any one individual will be perceived as taking on one of these roles at any given time… including oneself. When people are perceived as always playing the role of a savior, someone to feel sorry for, or a monster, it becomes really difficult to actually see the person before you (or yourself!) for who they really are. Worse, someone always has to be a monster. The fear and anger never end.

Parenting

We all know that trauma tends to repeat itself across generations. Perhaps there’s some epigenetic piece to this, but there is no doubt as to the role of direct trauma and stress as well. If you (or those closest to you) are always a potential monster, this sets up an extremely precarious situation in which to bring up a helpless and totally dependent baby. You fear becoming the monster and so might become passive or over-protective. Or perhaps you do become the monster and repeat the cycle of abuse. Worse, you might start to perceive the child to be the monster.

Often, parents whose needs were never met as a child will look to their own children to get their needs met. This sets up a cycle of parentification, lack of attunement, and emotional neglect. This is the cycle of trauma that stays hidden behind layers of enmeshed love and toxic interdependence. Parenting is hard when you’ve never had a healthy parent to learn from.

Career

Trauma is a tricky devil. There is no singular path away from pain; what for one person might be a life of self-sabotage and expected failures is, for another, a life of over-achievement and incredible success. What is shared underneath these seemingly opposite paths is a fundamental sense of inferiority.

For the person who struggles to get ahead and/or to keep a job, there is a plethora of converging factors that can cause this. Perhaps academic capabilities have been stunted by severe neglect. Intense stress and emotional overwhelm make it nearly impossible for many to focus on silly things like algebra or Charles Dickens. If you’re the sort to act out your pain, from the get-go you might be labeled a troublemaker and have others instill repeated messages about your hopeless future. Authority figures are frequently seen as dangerous and hypocritical; if you can’t get along with authority, you’re not likely to do so well in school or at a job. Worse, when in chronic survival mode, the future is bleak, if it’s possible to believe in its existence at all. And so going to college or saving for what’s ahead just seems, frankly, dumb. Let’s not forget, of course, that having a mental health diagnosis, especially the more severe ones that are themselves directly associated with trauma, leads to prejudiced hiring and discrimination in the workplace.

On the other hand, hyperfocusing on academics and/or a job might itself become a coping tool to escape the horrors of home, community, peers, etc. Feelings of inferiority might fuel a never-ending effort to prove yourself. Life might become motivated solely or largely by fleeting moments of praise and accolades. Whichever divergent path you might find yourself on, it’s an exhausting one that rarely is fulfilling and often reinforces that gnawing sense of emptiness and self-hatred.
Freedom If you’re controlled by the past, it’s hard to feel free regardless of your external circumstances. It also exponentially increases your odds of losing your external freedom as well. Increased traumatic experiences directly relate to increased chances of jail, hospitalization, addiction, chronic psychosis, AOT orders, guardianships, and severe health issues. If you’re trapped by the past, you’re very likely to be trapped, literally, in the present. Experiencing chronic and multiple traumatic events drastically increases the odds that someone will be arrested and incarcerated. Almost half of all women in jail and a third of men have a lifetime history of PTSD. And that’s just including overt, DSM-defined traumatic experiences. Add in racism, oppression, emotional abuse, and emotional neglect and I would venture to guess that the prevalence approaches 100%. Being locked up in a psychiatric hospital is inherently associated with past trauma. One study showed that 91% of admitted patients report overt trauma, with 69% reporting repeated, chronic trauma. Another found that almost 100% report overt trauma. Not to mention how common it is for people to be directly traumatized by the treatment experience itself. The homeless population consists nearly entirely of trauma survivors, particularly childhood trauma. And once homeless, it is common to be trapped in a cycle of housing problems, jailtime for minor infractions, and being sent to the psych ward. Physical Health It shouldn’t be that surprising that chronic stress and trauma would leave your body in a toxic state. Traumatic experiences have been shown repeatedly to be directly associated with: autoimmune disease, heart disease, stroke, cancer, diabetes, obesity, adolescent obesity, drug and alcohol abuse, and Alzheimer’s. And when you consider all of the above, you can see how quickly things compound. If you’re lonely and filled with hate for yourself, you’re not likely to be so interested in healthy eating. Conversely, you might become obsessed with some factor of your body like, say, weight, and starve or purge or excessively exercise to make up for your perceived defectiveness. If you aren’t making money, you can’t even afford to eat healthy or go to some fancy gym. If you live in the United States, you’ll also likely have terrible healthcare, if you have it at all. The meals in locked facilities are about as healthy as what is fed to lab rats to keep them merely alive. The drugs given and/or forced in these places not only can result in obesity, digestive issues, brain damage, and blood problems, they also can make a person numb, hyperactive and/or shut down, and agitated, leading to its own cascade of health issues. If you’re addicted to street drugs, alcohol, food, and/or risky lifestyles, health declines fairly rapidly. Even if you are relatively healthy, trauma can be felt in the body through chronic pain, pseudoseizures, unexplained pain, digestive issues, memory problems, numbness, clumsiness, tight muscles, headaches, teeth erosion and jaw pain, and breathing issues. Any one of these can itself directly lead to more severe injury or health issues over time. Having a diagnosis, history of hospitalization, and/or history of incarceration directly results in discrimination by healthcare workers. So, even if you’re fortunate to have access to decent healthcare, you’re still not likely to receive it. 
Trauma, and the additive effects of its sequelae, may understandably lead to suicidality, self-harm, and passive suicidal behaviors. Is it that surprising, then, that those with a significant trauma history have a lifespan decreased by 20 years? Imagine if our systems of care and our governments understood this. Imagine if they developed programs around these ideas. Imagine if all the money spent on finding the genetic roots of everything mentioned here, or on pharmaceutical interventions, were instead spent on helping people heal from trauma. Better, what if such funds were directed towards creating a less traumatic society? More equality, more compassion, more social services, greater access to quality healthcare, access to healing modalities not couched in further prejudice, more protection for children, more rehabilitative rather than punitive reactions, more relaxation, and more love and connection. Imagine.
https://medium.com/mad-in-america/6-ways-trauma-might-inform-your-current-life-2665ef2d791f
['Mad In America']
2020-06-07 13:20:01.261000+00:00
['Trauma', 'Health', 'Relationships', 'Mental Health', 'Self']
Understanding the Application
You can pass data to a method using method parameters. The values passed to a method can be used inside of the method to influence the return value. For example, you may define a method that adds two numbers together. The parameters for the method are two numbers, a and b. The return value of the method is the sum a + b. Each time you call the method in your program, you can pass different numbers to the method to get a different return value. The Main method has a parameter called args. We’ll talk about args more in the next section. Sometimes, there is confusion about the terms ‘parameters’ and ‘arguments’. When you define a method, you also define the names and the types of the values that are passed to the method. These are the method’s parameters. When you call the method in your program, you pass an actual value, called an argument, to the method for each parameter. WriteLine Finally, on Line 9 we have a call to the method WriteLine, which is part of the Console class. The method WriteLine accepts a parameter of the data type String. In our application, we pass the argument “Hello Ken!” to the method. WriteLine takes this argument and prints it to the Command Prompt window. What’s Next? In this part of the series, we explored the structure of the Program.cs file. Along the way, we got a brief introduction to concepts like directives and namespaces, classes, and methods. Don’t worry if those concepts are unclear to you — we’ll go into much more detail as the series progresses. As you work through examples, you will begin to understand each concept better. In the next part of the series, we’ll learn how to make our application interactive by accepting user input and using that input to modify the output of our application. You can find that part of the series here.
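To make the parameter/argument distinction concrete, here is a minimal sketch (ours, not from the article); the Add method is hypothetical and used only for illustration, while Main and Console.WriteLine match what the article describes:

using System;

class Program
{
    // a and b are Add's parameters: named, typed placeholders for the values passed in.
    static int Add(int a, int b)
    {
        return a + b;
    }

    static void Main(string[] args)
    {
        // "Hello Ken!" is the argument supplied for WriteLine's string parameter.
        Console.WriteLine("Hello Ken!");

        // 2 and 3 are arguments; each call can pass different values for a different result.
        Console.WriteLine(Add(2, 3));  // prints 5
        Console.WriteLine(Add(10, 4)); // prints 14
    }
}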
https://kenbourke.medium.com/understanding-the-application-1ee031cc40fa
['Ken Bourke']
2020-12-28 18:59:11.971000+00:00
['Software Development', 'Csharp', 'Dotnet', 'Software Engineering', 'Programming']
Sky News presents UN report findings which reveal that the ozone layer hole above Antarctica is…
Sky News presents UN report findings which reveal that the ozone layer hole above Antarctica is recovering and could be fully repaired by 2060. The UN attributes the repair, happening at a rate of 3% per decade, to the banning of chlorofluorocarbons (CFCs) in the 1980s. Amongst all the dismal news surrounding the current state of our planet, this bright light at the end of the tunnel is much welcomed, although there is still a lot more that we must do if we want to nurse our home back to health. A CFC is an organic compound, made up of carbon, chlorine, and fluorine, which is produced as a volatile derivative of methane and ethane. Since the 1930s, CFCs have been used in the manufacture of solvents, refrigerants, aerosol sprays, and blowing agents for packing materials. Initially, the introduction of CFCs was a great substitution for dangerous substances like ammonia, and they were labelled as safe, non-toxic, and non-flammable. However, the planet suffered greatly: the chlorine subsequently emitted into the atmosphere has the potential to significantly damage and destroy large parts of the ozone layer, as witnessed particularly over Antarctica. Thanks to the 1987 Montreal Protocol, an international treaty with 197 signatories banning ozone-depleting chemicals such as CFCs, alongside new technology, we are now witnessing the increasing health of the ozone layer. The UN reports that the upper layer of ozone over the northern hemisphere will be repaired by 2030, and the damage over the southern hemisphere will return to normal by mid-century. The healing of the ozone layer is fundamental to the survival of our planet and the health of the human race, as it is responsible for absorbing most of the sun’s cancer-causing ultraviolet radiation. Sadly, despite this great news, there are still areas of the world that continue to destroy the ozone layer with CFCs. Namely, the UN report shows evidence of an unexpected and unidentified increase in CFC-11 emissions from eastern Asia since 2012. This follows earlier reports by The Guardian that pinpointed the source of some of these illegal emissions to the Chinese plastic foam industry. If continued, these emissions will delay the recovery of the ozone layer by up to 20 years, Sky News reports. Additionally, there has been a trend of pollutant industries moving to eastern countries, which accounts for their high levels of CO2 emissions. Combining damaging CFCs and harmful greenhouse gases is a sure way to prevent sustainability and the recovery of our planet. To mitigate this, producers and manufacturers, businesses, and individuals in eastern Asia, as well as globally, need to start acting more energy-efficiently and more environmentally consciously. Human behaviour is habitual and takes repeated and tangible incentives to change. EnergiToken, supported by a variety of energy-efficient partners, offers individuals a solution. Present predominantly across Europe and Asia, EnergiToken is a rewards platform which rewards energy efficiency with EnergiTokens (ETK), a financially tangible cryptocurrency which can be spent within its ecosystem of approved vendors. When individuals prove that they have completed an act that reduces consumption through one of its partners, such as purchasing energy-efficient appliances or travelling via low-carbon transport, they are rewarded with a sum of ETK ready to be spent. Some of the recent partners include Solisco, nextbike, and the Korean-based Hotel Cappuccino.
Hotel Cappuccino is an urban lifestyle hotel that is tackling the growing issue of energy over-consumption by encouraging its guests to act efficiently, for example by reusing towels. Guests who do so receive rewards from the hotel in the form of Angel Coupons, which can be used within the hotel café or to send a donation to a charity. Now, after partnering with Energi Mine, Hotel Cappuccino can integrate the EnergiToken rewards platform, which will further reward its energy-mindful guests with ETK that can be spent in a wider ecosystem of vendors outside the hotel itself. Rewards will motivate more people to take up energy-efficient behaviour until eco-friendly and energy-reducing actions become habitual. This will subsequently have a great impact on creating a sustainable planet as green behaviour becomes the natural choice. Together we can change the future of the planet and nurse it back to health. Visit energitoken.com today to find out how you can help save the world and get rewarded for it!
https://medium.com/energitokennews/sky-news-presents-un-report-findings-which-reveal-that-the-ozone-layer-hole-above-antarctica-is-2691e3462597
[]
2018-11-09 09:50:42.175000+00:00
['Pollution', 'Sustainability', 'Climate Change', 'Global Warming', 'Environment']
Marketing Fear, Uncertainty, Doubt and, Ultimately, Positivity for Profit
The Default Option Guess how many vets are open on Thanksgiving morning? None. It’s emergency rooms only. That’s powerful right there. Monopolies don’t need to worry about marketing or customer service (just look at cable and telecom companies). You only have one option, and they get away with murder. The most valuable businesses are built on defaults. Whether it’s Google’s colossal search empire or Facebook’s massive ad engine, becoming a default in today’s economy is often associated with monopoly. And monopolies are the best businesses to build — certainly, they’re the most profitable. (Want to build a monopoly? Hacking the 5 types of network effects helps.) That emergency vet had a monopoly all right. There wasn’t another within an hour’s drive. Who wants to waste that much time while your dog’s “dying?” Besides, I wanted to get back for turkey day. It was a lot of pain and an easy decision — no other choice. Not all businesses can become the default choice. Think about this in relation to your own business. What are you selling, who are your competitors and how do you differentiate? That ER vet didn’t have to advertise, they didn’t need SEO — when they’re the only one open, guess where you end up. What is that for your business? How can you become a default, even if only for a small niche? If you’re the only supplier of XYZ or the exclusive partner of ABC, that’s differentiation and the beginnings of a moat. And without a moat, you’re not fundable, not really (many companies — including Uber and Blue Apron — get this wrong. Here’s why!)
https://medium.com/better-marketing/marketing-fear-uncertainty-doubt-and-ultimately-positivity-for-profit-8d973f5152b1
['Matt Ward']
2019-12-18 22:12:22.044000+00:00
['Marketing', 'Sales', 'Business', 'Branding', 'Startup']
5 Myths About Running That You Shouldn’t Believe and 5 Absolutely True Facts
Let’s separate the fact from the fiction. Image by skeeze from Pixabay It’s no surprise that a sport that’s been around for centuries would have developed a lot of myths about it. In 776 B.C., the first event at the first Olympic Games was the footrace. Then came the running boom of the 1970s, when jogging for fitness exploded onto the scene. Fast forward to the 90s, the decade when I began running, and the running world was full of advice. Lots of it involved carbo-loading and stretching. That, and bad knees: I spent two decades listening to nonrunners tell me how my knees were going to fall apart if I kept running. Well, I kept on running. And now, more than 20 years later, my knees are still going strong. So what are the real facts about running, and what ideas should we leave behind? The myths Myth #1: Running will ruin your knees. Runners hear this a lot. Hopefully, this piece of advice will fade away, as it is completely false. For one thing, running helps keep weight off, and not carrying around extra weight is beneficial to your joints. And researchers have conducted studies that have shown that running is beneficial to the cartilage in your knee joints. Myth #2: You should always stretch before running. Static stretching before a run will not do you any good. I wrote all about it here. A short warm-up and some dynamic stretches are better before you head out for your run. Myth #3: You need to eat pasta for dinner before a big race. Sadly, this one is not true. The carbo-loading craze of the ’90s is over. You don’t need to eat a giant bowl of pasta the night before a big race. Eating that big bowl of spaghetti can have the effect of making you feel bloated the next day. Instead, increase your carbs slightly in the days leading up to your race. And think healthy and complex carbohydrates, like whole grains, black beans, and lentils. Myth #4: Real runners don’t walk. Don’t let anyone tell you this. It’s simply not true. Popular running coach Jeff Galloway has a program that incorporates running with walking. Myth #5: Runners can eat anything they want without gaining weight. Like myth #3, I wish this one were true. Unfortunately, running is not a license to eat anything you want. If at the end of the day your calorie expenditure is less than what you’re taking in, you’re going to end up gaining weight. This is true even if you ran 20 miles that day. Runners still need to be conscious of their diet, although a lot of running does mean you’ll need to eat more calories than someone less active to keep up your energy levels. The facts Fact #1: Running can boost your mood. Exercise reduces stress and increases your ability to deal with anxiety. It does this by increasing concentrations of norepinephrine, a chemical in the brain that helps people deal with stress. Many studies have shown that people who exercise are less likely to be depressed and show higher levels of emotional well-being. Fact #2: Running helps build strong bones. Running is a weight-bearing exercise. Weight-bearing exercises reduce the loss of bone strength. Some studies have shown that, more than just slowing down bone loss, running can build bones. Fact #3: Running helps you live longer. Researchers looked at 14 previous studies, which included 232,149 adults. They found that, compared to people who didn’t run at all, runners were 27% less likely to die during the study.
The researchers concluded that running improves overall health and longevity. Fact #4: Running can improve your sleep quality. Exercising regularly can help you get deep sleep. Deep sleep, or slow-wave sleep, is how your body and mind rejuvenate during the night. Miss out on deep sleep and, even if you’ve slept enough hours, you may feel fatigued the next day. Researchers at the Johns Hopkins Center for Sleep found that people who exercise for more than 30 minutes per day have better sleep quality. You may want to experiment with what time of day works best for you. Going for a run right before bedtime may make it harder to get to sleep. Fact #5: You need new shoes every 300 to 500 miles. Running is a simple sport and doesn’t require much equipment. However, your shoes are critical, and once they begin to wear down, you may experience injuries. Keep track of how many miles you run in your shoes. Many apps can help you do this. You can also watch for the following signs: pain when running, worn-out treads, and loss of shock absorption.
https://medium.com/runners-life/5-myths-about-running-that-you-shouldnt-believe-and-5-absolutely-true-facts-487f5edc3565
['Jennifer Geer']
2020-06-14 12:01:03.248000+00:00
['Fitness', 'Health', 'Exercise', 'Running', 'Wellness']
My Ukulele
Welt Aus Quark collects small everyday moments that are worth honouring and turns them into comic strips and drawings.
https://medium.com/welt-aus-quark/my-ukulele-9e3df4c66d8e
['Paul Götze']
2020-08-14 10:46:23.572000+00:00
['Comics', 'Ukulele', 'Musicians', 'Illustration', 'Music']
Tech Can’t Handle Criticism: A Conversation with Anna Wiener and Jessica Powell
OneZero: Let’s talk about how both of you first came to Silicon Valley. What drew your interest to the field? Jessica Powell: I was not remotely interested in tech. But I ended up in London and had no visa or anything. So I applied for every job under the sun. No one, no one would call me back. I applied for every job in the Guardian ads. I applied for waitressing jobs, I applied for a CEO job, and across the board no one called me back. Google was the only place that I got an interview at, and they hired me as a contractor. Anna Wiener: My peer group wasn’t thinking about tech at all, and I was an assistant at a literary agency. I was applying for other assistant jobs — this was just considered forward momentum — and I sort of fell backwards into the [tech] industry. I read on the Paris Review blog — the source for business information — that this startup was doing like a Netflix for ebooks product and they had raised $3 million. [In the article, there] was just this picture of these three men. They looked so happy, and I was like, ‘Yeah, I guess I would also be happy if I had $3 million to make something.’ This just seemed like they had a license, like this was the future. How can it not be? They’ve been given this funding, and they can just go forth and disrupt publishing, and I wanted to try that. It seemed like the version of tech that was related to my interests, which at the time were pretty much books and Tumblr. So that was how I got my start in tech. That job was also a contract position. OneZero: Once you arrived in Silicon Valley, it seems as if some of the broader structural cultural issues become apparent pretty quickly. Anna, there’s a great line in your book about how it became clear that so much of the tech culture is built by young white men from “the soft suburbs.” And then that is being exported. Anna Wiener: Well, one might say that a lot of American society is run by young white men from American suburbs, so it wasn’t like a total fish-out-of-water situation. When I started at my first company in Silicon Valley, it was almost entirely young men. I think this manifests in different ways: You have people who are quite young and who are quite inexperienced — professionally and also in life. They’re learning how to manage other people — some of whom are older than they are and have more experience — at the same time that they are figuring out how to scale, how to grow, how to be a professional, how to be independent in their twenties. You end up with these cultures that, with that youthfulness combined with certain industry values, tend to lean toward the irreverent — this idea that this is like an anti-authority, anti-government, anti-bureaucratic form of business. When those two things collide, you get a workplace culture that can be quite alienating to someone who is on the outside in any way. I think you also just get a workplace culture that’s governed by the sort of business advice one might read on Medium from a semi-experienced venture capitalist. OneZero: And that often results in what Jessica details in her book — institutional sexism. In Jessica’s book, it’s very explicit: The engineers are building internal hookup apps to keep the engineers happy.
The message is pretty clear that the company culture in a lot of these places is serving a very narrow interest, which is the interest of the men working there. Jessica, can you talk about what in your experience inside Silicon Valley led you to register that critique? Jessica Powell: Everyone thinks I wrote the book when I was at Google. I was at Google for a long time. Then I went to a startup in London and then went back to Google. This startup was pretty horrific. Like one time I came in and there were, like, dildos on my desk. Not like dildos. There were dildos on my desk. When I started writing, I had no idea it was gonna be a book. I was just writing because I was mad, but I didn’t even know if I had the right to be mad. On the one hand, it seemed so obvious that what was happening around me was bad. But no one seemed to care. That’s probably not true. I think probably people did care. But when something becomes normalized, you really doubt yourself. Whenever I would raise some of these issues, people would be like, “Oh, don’t freak out about it,” — I’d be the only woman on the management team — and, “Oh, but you just care about that because you’re a woman.” Maybe. Generally in the Valley, we really have bought into this idea of it being a meritocracy. And certainly if I think of my own career, going into Google at 26 and, you know, running my department by the time I was, I think, 37 or 36, I don’t think that would have actually happened in any other industry. If you were there to catch the ball, and you could catch it, then you kind of continued to advance. So, I do think there is something really special about the Silicon Valley culture in that sense. But it’s a total myth that it’s a meritocracy. The problem is that we think of our platforms as neutral. We think of our company structures as neutral, and we don’t always look at the inequalities that exist within them. At Google, we had a TGIF [meeting]… where you have the Q&A and the founders get up and they’ll answer any question, which is great. That does not happen in every industry. But we would assume that because we put the mic in the room, that meant everyone had equal access to the mic. But that’s not how it works. Guess what happens? Who are the majority of people who go to the mic? It’s a bunch of men, primarily white men. They should absolutely have the right to ask those questions. There’s nothing wrong with them, but we’re wrong to assume that tech is neutral. And I think that is how a lot of the sexism and racism and ageism — we never talk about ageism, but my God ageism! — I think that’s how that all happens. We keep thinking we’re all wonderful, fair people, and so we don’t interrogate enough how our tools and how our structures in the company are used. OneZero: You both explicitly said that you hope your work has a political element, that you hope it engenders change. What would that look like? Anna Wiener: I think [tech] is an industry that’s quite hostile to criticism — not just [an industry that] doesn’t want to think about any critique, but is actively dismissive of criticism. I think criticism is inherently political, especially when you’re criticizing power. And I do hope my book is politically useful in the sense that I want people to see how people speak and how they think and what the intellectual figureheads are in a culture that has amassed such a high concentration of wealth and power so quickly. 
My reason for writing the book as memoir was I felt that—especially as a nontechnical woman with the story I had—if I had disguised it as fiction, it would not be taken seriously. I wanted people to understand that these things all happen, the good and the bad, that my experience here was as fortuitous as it was disenchanting. [By not naming companies or executives in the book,] my point was that this is more about people or institutions in a structural position, and the behavior is what’s important — not any particular company. People have come to me since the book’s come out and asked, “Did you work at X analytics company?” and I did not. OneZero: Because their experiences so mirrored what they saw at their workplace? Anna Wiener: A concrete example: There’s a part in the book where I talk about my team being brought into a conference room and our manager asking us to write down the names of the five smartest people we knew. We all wrote down our friends’ names, and then he was like, “Why don’t they work here?” A woman I know emailed me after reading this, and she was like, “The same exact thing happened at a company that I worked for in a totally different part of the industry — they must have just read it on the same blog post.” I just feel like there are these cultural tics — I want to point out that this is not hyperspecific. OneZero: Jessica, what’s been the response to your book as a piece of satire? If you’re reading between the lines, some people may or may not be super favorably rendered in there. Jessica Powell: Yeah, there’s no one-for-one matching. Everyone has an opinion about the [character of the] nymphomaniac CEO. They all have guesses, which are actually super-good guesses. But the biggest inspiration [for this book] was definitely working at that startup. I don’t think the head of that startup liked the book. In fact, there was a huge exposé of him last summer, about all the misogyny and racism at their headquarters. I gave a quote for the article, and his response was, “Jessica’s just trying to promote her book, along with all the other people who had spoken out.” And I was just like, no one even cares about your company. If I was going to promote my book, I think I’d talk about Google’s sexism. As for Google, the people I consider my friends have all been great. There were a ton of Google employees who reached out and were just like, “Wow, I feel like you’re making us eat our vegetables, and we’re talking a lot about it.” That is super rewarding to hear. There were definitely some people who were not happy about it, who all of a sudden stopped talking to me. And that hurt, definitely, in the first beat of it. And then I had a moment where I was like, wait a second, so we are not friends because I criticized the industry you worked in? That’s way more dystopian than anything I wrote in that book. Anna Wiener: Jess, I’m curious if you’re worried about credibility. As a fairly low-level employee at these companies, I worried about people dismissing my account because I’d only worked in the industry for five years. I wasn’t even a programmer. I wasn’t super ambitious in my career after a certain point. Did the question of credibility come up for you as someone who had worked in the industry for a much longer period and had become an executive? Was that on your mind at all? Jessica Powell: People like to position critics as outsiders trying to undermine the credibility of [the industry]. And even more so if you’re not technical.
Then they can kind of, you know, bucket you as, “Eh, they don’t really understand the tech.” OneZero: In 2014, the founder of GitHub stepped down after serious sexual harassment allegations led to a companywide probe of the startup’s culture. Anna, when you joined the company, it had adopted a number of measures to try to make structural changes, to deter harassment and promote inclusion. You detail some of the ways those were and weren’t successful. What do you think of such efforts put forward by the industry so far? Anna Wiener: We have to look at these as structural issues. We have to look at the incentives of the industry, the incentives of the business model, the incentives of venture capital. I think there’s a lot of really important work being done at a lot of startups and larger corporations with diversity and inclusion. I think there are people working really, really hard to change cultures internally. But I think until those people are really given a seat at the table, really given power internally, we’re unlikely to see any sort of meaningful cultural shift. Unless you see a shift in the business model or in these incentives, we’re unlikely to see a shift in the way the products are used. I’m not saying we should dismantle capitalism, exactly, but I do think these problems can’t be solved at the company level, because it’s not in anyone’s interest to do so. Until we can really acknowledge that and reckon with that, we’re unlikely to see meaningful shifts. I can’t speak to what happened at GitHub. I wasn’t in the room for important meetings, but my understanding of the work people were doing around diversity and inclusion is that [it] was going to take time, and it was going to require people to slow down. It’s going to require certain changes to the product itself. And that’s not in the interest of the company that is gunning for an IPO or for a multibillion-dollar acquisition by Microsoft. I think there are things we haven’t tried yet, which makes me really excited. Some of the recent organizing we’re seeing in tech is really compelling. I think employees having a seat at the table or employees on the board, that’s a step in an interesting direction — collective ownership. For an industry that has so many smart people and so many optimistic, idealistic people, we haven’t been super imaginative. Jessica Powell: I’d like to see more engagement from tech with the outside world. [Anna was] saying earlier that it’s an industry that does not take kindly to criticism. I think that’s very, very much true. And so what happens is, say Facebook does something dumb or controversial on the content moderation side, and everyone gets all worked up. What happens if you are at Reddit, YouTube, Twitter, any kind of UGC platform? You totally close ranks. You don’t say a thing. You don’t want to get pulled into that, on a policy side, on a press side. So you don’t say anything. And so then you have this vacuum of largely reporters or politicians weighing in on these debates. You get this vacuum and anti-tech vitriol. [It’s] not always super informed. But it’s a problem of our own making, because we don’t do anything. We keep our heads below the parapet. And so I think we kind of deserve it. On Twitter, I’ve got my media feed and I’ve got my tech feed. It’s very funny to watch them sometimes.
All the tech people are like, “Oh, the media. It’s all fake news and they hate us and so they make things up.” And yeah, sometimes it’s a little extreme, but it’s also because we don’t do a whole lot. There’s so much anti-tech sentiment. Everyone just kept on trying to put me into this hole of, like, “You’re this whistleblower, you hate tech, da, da, da, da.” The thing is, I don’t. I still work in tech. I think it’s something that’s really, really special — the questioning of assumptions and trying to do stuff even if you don’t know that you can do it. I love that there are people trying and failing. [But] can we all just behave better, please? Update: This article has been updated to clarify Jessica Powell’s role at Google.
https://onezero.medium.com/tech-cant-handle-criticism-a-conversation-with-anna-wiener-and-jessica-powell-b4dfca6bb2de
['Brian Merchant']
2020-02-27 15:49:09.639000+00:00
['Books', 'Google', 'Tech', 'Culture', 'Into The Valley']
Genaro Network aimed at Sustainability
Mining cryptocurrencies became one of the hottest and most profitable sources of income across the globe. People who began mining early became wealthy fast. At the start, miners could use a simple home computer to earn a bitcoin. With more people mining bitcoin, the amount of energy required increases. Nowadays it takes a massive amount of energy to power the computers needed to solve the algorithms necessary to receive the rewards. The costs of setting up and running a mining operation make it unprofitable for many today. The amount of power needed to run a mining computer such as the Antminer S9 (cost: $8,200) constantly for one year is around 15,000 kilowatt-hours, producing approx. 0.85 bitcoins per year. Energy costs of mining a single bitcoin can range from $3,200 (in Louisiana) to $9,400 (in Hawai’i). Smog in Shanghai, China Sustainability, and consideration of the impact being made on the environment, should be at the forefront of people’s minds. While I do not think that all of the world’s energy will be consumed by 2020 because of bitcoin mining, I do think that mining the coin is becoming harmful to the environment. In China, where most mining takes place, fossil fuels are being used to power operations, causing further damage to China’s ecosystem. ‘Mining’ on the Genaro Network does not cause such harm to the environment. It can be done on a home computer, because miners on the Genaro Network share hard drive space instead of using exorbitant amounts of energy. Genaro Eden is the first Dapp built on the Genaro Network. It allows users to store their data on Genaro’s public blockchain, giving them unparalleled security. That data is encrypted and separated into different pieces, which are stored on different nodes across the network. A benefit of becoming an Eden user is that as more users join, the cost of storing data goes down. Genaro Sharer allows users to earn rewards for sharing their hard drive space. This creates a global ecosystem. The Genaro Network ensures that data is stored on the nearest node, which means data can be accessed efficiently. Sharer is sustainable because it allows for the optimization of empty space, benefiting both parties. Unused space does not remain idle; rather, it can store others’ data while enabling ‘Sharers’ to earn rewards as well. Genaro Eden sells unused space to make storage more economical and environmentally friendly, and sets the stage for the optimal allocation of resources. That is what we call “green computing!” The Genaro Network understands that taking care of the environment is important. Genaro Network is dedicated to sustainable solutions and is pursuing the elimination of the damaging environmental impact that has been associated with the mining of cryptocurrencies. The ecosystem created enables parties storing data and sharing unused space to benefit, while also optimizing hard disk space that is often left unused.
https://medium.com/genaro-network/genaro-network-aimed-at-sustainability-4efdc226bdff
['Kauwila Johsens']
2018-07-04 09:20:20.656000+00:00
['Environment', 'Energy', 'Sustainability', 'Data', 'Bitcoin']
Watch Out!!
Lamborghini Aventador from en.wikipedia.org The Lamborghini posters on my walls have faded. I canceled my membership of the Rotary and other clubs, sold off my stocks, albeit at a loss, and stopped buying lottery tickets. Best of all, I chucked all the motivational books that I had in the bin. I consoled myself with only one thought — at least I tried. It's better to have tried and failed than to live life wondering what would’ve happened if I had tried — Alfred Lord Tennyson It took me well over thirty years to achieve this state of tranquility. Yes, everyone must strive to attain great heights. A few will make it through sheer grit and determination. But this is not enough. You need to have the foresight to know what people want and smart business acumen to bring your idea to fruition. The rest of us fall by the wayside. In the latter group, there are people like me who, for the love of Mike, never succeed in any venture. I am happy that I tried. I am also happy that I gave up trying, thus saving money and, most important of all, enormous stress that was taking a toll on my health. When you have tried and lost, not just once, twice, or any number of times, be graceful enough to admit defeat. With every failure, step back, and assess. Is it achievable? Do I have the requisite knowledge, training, skills, and finances to continue without plunging into poverty? A friend of mine, a general practitioner, lost his entire bank balance trading penny stocks. He had to sell his house and now, his family lives in a caravan. My pension is just about enough to pay for my bills, health insurance, and daily necessities. Having given up striving for anything, I am now writing stories for Medium and other blogs just to kill time waiting for my Maker. I am not going to stress out on this. I know I can never aspire to become a Tim Denning or Alexandra Sifferelin. But the best part is — I am currently at peace with myself. Are you?
https://medium.com/open-house/watch-out-55c6d32d693b
['Open House']
2020-10-23 17:08:36.872000+00:00
['Money', 'Business', 'Success', 'Startup', 'Motivation']
How to Submit to Medium Publications. 11 More to Try
Where to submit Medium has hundreds of publications you can submit your work to from large Medium owned magazines such as Forge, GEN, and OneZero, to small publications created by individual writers. Anyone can create a publication. Here’s a list of all the current top Medium Publications. Initially, you might want to reach out to a small to medium-sized one. Here are some publications to try: 1. PS I Love You This publication has over 200K subscribers. It’s all about love, but not just romantic love — they cover a wide variety of topics: family, parenting, self love, friendship, dating, marriage, divorce… They also run Fiction Friday for short stories and Poetry Sunday. It’s a supportive community with a Facebook page and a large group of engaged writers and readers. They also, at the time of writing, have a contract with Medium for curation — so if you get published by them you get curated immediately. On their guidelines page they now have this request: Moving forward, due to overwhelming demand, P.S. I Love You will be reviewing new writer applications for only the first week of each month — the first through the 7th. Please only reach out during that window each month. 2. The Writing Cooperative This publication also has a large following. It has a very positive engaged Facebook group for writers. To get a story accepted you need to really check that you’ve met their requirements as they have a very high standard and specific requirements. Even an incorrectly formatted heading will have your story rejected for violating the rules. 3. The StartUp The StartUp has the largest following of all publications on Medium (as of writing this) and accepts a wide range of stories. You will get a “yes” or nothing from the StartUp: they say in their guidelines that if you don’t hear back in 36 hours, assume your story has not been accepted. 4. Slackjaw Humor writers should aim for this one! With a fantastic range of stories and a supportive Facebook group of comedy writers, Slackjaw is the place to hone your comedy skills. They also run competitions. 5. The Ascent The Ascent has around 90K subscribers and describe themselves as: “equal parts real life and true self, with a healthy dose of improvement, development, and growth in between.” They like self improvement, personal growth, and life lessons. If you like to inspire and educate, give them a go. Here are their guidelines: 6. Home Sweet Home A publication where you can send anything to do with family: parenting, step-parenting, adoption, grandchildren, single parenting….. Home Sweet Home editors won’t turn you down if you’re not a parent either. They say: “If you want to write about your hopes for when you become a parent or about family in general, you are very welcome to.” Here are their guidelines: 7. Sexography Not your typical sex stories! Sexography is not erotica but rather “an open forum for people to educate, discuss ideas, recount impactful memories and explore how people all over the world experience sex.” They accept serious and funny stories with a personal touch. From Sexography Editors: “We are especially interested in topics with a cultural twist exploring how sex is treated differently around the world.” 8. Better Marketing If you have a marketing, sales, business or leadership background, or just a great marketing-related story to tell, try this publication. Better Marketing say they “help you succeed at marketing your work or products.” They like how-to’s, case studies, templates, tools, and strategies. 
This is a fast-growing publication that really markets itself and encourages you to do the same as a writer (surprise, surprise!) 9. Mind Cafe This publication is fast growing and has an engaged readership. They look for stories that do three things: teach readers how to live a better, happier life; provide value to the average reader, not just a niche group of people; and have a clear and actionable takeaway. Mind Cafe editors say, “we pride ourselves on simplicity and honesty. Our tone is conversational and relaxed.” If that suits you, here are their guidelines: 10. Invisible Illness This is Medium’s largest mental health publication, with a growing audience of over 40K subscribers and good engagement. Invisible Illness wants mental illness/health (and other invisible illnesses) talked about openly. Here’s a writing prompt they offer new writers: Describe your idea of normalcy and how that’s changed and developed. What is “normal” to you? Do you believe in being “normal”, and if so, do you think the standard is a good thing? What experiences have shaped these beliefs? 11. Inspired Writer And of course, you can write for us! We have two ways to submit work. If you’re a new writer and want to develop your skills, apply to our mentoring program or sign up for The Personal Essay Workshop. Otherwise, we accept polished stories from three areas: Advice for writers (writing on and off Medium, marketing, publishing, etc.) Personal or inspirational stories from emerging writers. We also publish a limited amount of fiction. Check out our full guidelines here:
https://medium.com/inspired-writer/how-to-submit-to-medium-publications-11-more-to-try-5048d2ea0e38
['Kelly Eden']
2020-05-09 23:16:41.722000+00:00
['Creativity', 'Writing Tips', 'Writing', 'Submission', 'Writing On Medium']
Leading the Charge 🔌 🚘: 10 Charts on Electric Vehicles in Plotly
Nissan, Tesla, BMW, and many other car companies use Dash or are customers of Dash Deployment Server (DDS). To see what Dash is all about, check out Plotly’s Dash Gallery, or check out our recent post on Dash apps. Interested? Get in touch. This post is the first in a two-part series. We’ll share some fresh visualizations on the world of electric cars: who’s leading the way, the costs of going electric, which manufacturers are most represented out on the road, and more. 🚗 🚗 🚗 1. Who drives the most? Following the low-population Republic of San Marino, the United States has the dubious distinction of having the most motor vehicles per 1,000 people. There are 910 motor vehicles for every 1,000 people. Of countries with more than a million inhabitants, New Zealand is next on the list. The country, whose tagline is “clean and green,” has 774 cars per 1,000 folks. Italy (679), Canada (662), and Finland (612) follow closely. On the other side of the spectrum, West African Togo (2), Bangladesh (3), Liberia (3), and the Solomon Islands (3) have the fewest cars per 1,000 individuals. 2. Which country boasts the largest proportion of electric vehicles? Norway. Nearly 40 percent of total new car sales between 2013–2017 were electric, meaning this Scandinavian nation has the most plug-in electric vehicles (EV) per capita in the world. In fact, Oslo, Norway, is recognized as the EV capital of the world. What’s the secret? Ninety-eight percent of Norway’s electricity comes by way of hydropower, making the country’s fleet of electric cars one of the cleanest on the globe. All electric cars and vans in Norway are exempt from on-purchase taxes. In second place is Iceland at 14 percent electric, and in third place, Andorra at 5.6 percent. Although the United States has the most motor vehicles per 1,000 people, only 1.1 percent of them are electric. 3. Which U.S. states are winning the EV race? 🏎️ California, Hawaii, Washington, Oregon, Vermont, and Georgia — in that order. Those states have more than two EVs registered per 1,000 people. On the flip side of the coin, Mississippi is dead last with one EV for every 6,667 people. Arkansas, West Virginia, and Louisiana aren’t far behind. EV registrations within a state are influenced by a variety of factors, including state and local incentive programs, charging infrastructure, and fuel pricing. 4. Who’s who in the EV biz? Perhaps it doesn’t come as a surprise, but over 90 percent of Tesla’s manufactured vehicles are electric. This number dwarfs the second place company, China’s Zhidou (31 percent), and third place BYD Auto (12 percent), also of China. Other notable electric producers include BMW (0.7 percent), Nissan (0.6 percent), and General Motors (0.3 percent). 5. Tesla’s slide in European markets Tesla’s sales fell in several European countries in early 2018. A blend of stiffer competition, like Porsche’s all-electric Taycan (pictured below), and a lack of significant rebates, contributed to the downturn. Porsche’s all-electric Taycan, the first battery-powered car from the iconic German sports car maker. Jaguar is also delivering its new and (relatively) affordable I-Pace, an all-electric sport-utility vehicle, seen as direct competition to Tesla. “The data from both IHS and regional transportation departments is not reflective of actual sales data,” Tesla countered (Behrmann and Kehnscherper, Bloomberg, 2018). 6. What does it cost to charge an EV? 💰 Average retail electricity prices have gradually ticked upward over the past several decades. 
During 2017, the average price for one kilowatt hour of electricity was about 11 cents — although there is significant regional variability. Prices even change depending on the time of day. Every EV has a kWh/100 mile figure. Take the Nissan Leaf, for instance: 29 kWh per 100 miles. Simple math tells us that it would cost $3.19 to travel 100 miles (29 × 11 cents). However, the cost of electricity is based on the rates set by utility companies, and typically, the more you use, the more you pay. Therefore, you might find yourself paying much more than the national average to charge your EV. You’d be better off juicing up the battery at the mall during your lunch break! 7. EV batteries have become much less expensive Despite how pricey it is to charge them, lithium-ion batteries used in EVs have become dramatically cheaper. From 2009 to 2017, the average price decreased from $1,000 per kWh to $209 per kWh. For EVs to become cost-competitive with internal combustion engine vehicles, the price needs to reach approximately $100 per kWh, which isn’t far off. 8. China is the lithium ion battery powerhouse The Tesla Gigafactory 1, based in Sparks, Nevada, in the U.S., will not only be the world’s biggest lithium ion battery plant when construction finishes — but possibly the biggest building in the world, full stop. Tesla’s Gigafactory 1, Sparks, Nevada, USA [image source] Despite that impressive feat, China is still expected to dominate EV battery production over the next decade. 9. and 10. Which countries offer the most charging ports? In 2017, France was the king of EV charge points 🥇 A whopping 11,987 additional charge points were installed in the country, which was 4,050 more than Germany, which came in second. The remainder of the top 10 included many European countries: U.K., Switzerland, Austria, Norway, Italy, Belgium, and Sweden, in that order. Canada followed in 10th place with 1,070 charge points installed during 2017. By 2020, the City of Montreal, where Plotly is based, will have a network of 1,000 charging stations of its own, with a price tag of $10 million. Tesla’s “Supercharger network” accounts for 1,317 chargers globally. This particular charger gives the Tesla Model S one hundred and seventy miles of range in just 30 minutes and a full charge in 75 minutes. Tesla vehicle charging station, Mt Ruapehu, New Zealand In our next post, we’ll look at how Plotly’s home base of Quebec, Canada, is trying to make it easier for drivers to go electric ⚡️
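A quick sketch of the charging-cost arithmetic in chart 6 above (the variable names are ours; the figures are the article's):

'use strict';

// Rough cost to drive 100 miles in an EV, using the figures cited above.
var kwhPer100Miles = 29;   // Nissan Leaf efficiency figure
var dollarsPerKwh = 0.11;  // 2017 U.S. average retail electricity price

// Energy used over 100 miles times price per unit of energy.
var costPer100Miles = kwhPer100Miles * dollarsPerKwh;
console.log('$' + costPer100Miles.toFixed(2) + ' per 100 miles'); // $3.19 per 100 miles

As the article notes, tiered utility rates mean the real figure can run well above this national average.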
https://medium.com/plotly/leading-the-charge-10-charts-on-electric-vehicles-in-plotly-d951acdc49c1
[]
2018-10-17 15:01:08.955000+00:00
['Plotly', 'Data Visualization', 'Visualization', 'Tesla', 'Electric Car']
A Guide to Serverless Computing with AWS Lambda
The name ‘Serverless Architecture’ misleadingly implies the server’s magical absence. In serverless computing, a third-party service provider takes responsibility for processes, operating systems, and servers. Developers can now focus on just building great software. All they need to do is code. Resource considerations (deploying, configuring, managing) are no longer their concern; the cloud provider takes care of that. The less you are meant to manage the instance, the more serverless it is. Serverless goes well with certain functions. It is for companies to learn to optimize their use and integrate them into broader data systems. The 2014 AWS Lambda launch was the first major serverless offering. Google Cloud Functions, Iron.io, Microsoft Azure Functions, and IBM Bluemix OpenWhisk have since entered the market with similar service offerings on their clouds. Backend as a service (BaaS), Mobile backend as a service, and Function as a Service (FaaS) are approaches that go with serverless. Serverless Architecture & AWS Lambda AWS Lambda offers competitive pricing and an event-driven computing model. That is, Lambda executes your code in response to events. An event can be a change to an Amazon S3 bucket, an update to an Amazon DynamoDB table, or a custom event generated by various applications or devices. Moments after an event trigger, Lambda automatically prepares compute resources and runs the Lambda function or code. It eradicates issues related to underutilized server capacity without compromising on scalability or speed of response. Startups like group chat and messaging app SendBird, analytics platform Wavefront, and Click Travel have moved from monolithic application architecture to microservice-driven architecture using AWS Lambda. For example, developers working on a hotel booking website experience a surge in users on holidays. Usually, developers must create mechanisms to deal with this demand surge. Instead, Lambda takes care of this aspect for you. It supports multiple languages and libraries, including Python and JavaScript. AWS Lambda blueprints: a collection of organized, reusable, and extendable Lambda functions. Blueprints can be compared to recipes with sample event sources and Lambda function configurations. The AWS Lambda Effect To break down how AWS Lambda does it for you, in four steps (image omitted; see the source note at the end): AWS Lambda & Microservices: AWS Lambda is great for companies who want to direct their entire resources toward solving the problem. With its elastic ability to scale and handle heavy traffic, microservices go well with AWS Lambda. Thanks to the new versioning feature and Amazon API Gateway, building microservices is much easier now. Scalable microservices can now be built for IoT apps, web, and mobile. View a presentation about moving from monolithic architecture to microservices using Lambda in this video. Testing: The AWS Lambda blueprint library has an uncomplicated framework for conducting Lambda function tests. The blueprints include those for load testing and running unit tests. Test functions are no different than other Lambda functions. When a test function is invoked, it accepts the name of the function to be tested as a parameter. You can read more about developing and testing AWS Lambda functions here.
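To make the event-driven model concrete before the walkthrough below, here is a minimal sketch (ours, not the article's) of a Node.js handler responding to the kind of S3 bucket change mentioned above; the Records structure follows the standard shape of S3 event notifications:

'use strict';

// Minimal sketch: a Lambda handler triggered by an Amazon S3 event.
// Lambda invokes the handler with an event describing what changed.
exports.handler = (event, context, callback) => {
    // An S3 event carries one or more records describing the change
    var record = event.Records[0];
    var bucket = record.s3.bucket.name;
    var key = record.s3.object.key;

    console.log('Object ' + key + ' was added to bucket ' + bucket);

    // Call back so Lambda knows the invocation succeeded
    callback(null, { statusCode: 200, message: 'Processed ' + key });
};

The same handler signature (an exported function receiving event, context, and callback) is used by the newsletter example later in the article.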
Machine Learning: Lambda can be used in alignment with Amazon Machine Learning. You can either do streaming data predictions or customized predictions by exposing a Lambda function that wraps machine learning as an API. Check how to build a machine learning app with AWS Lambda here. Analytics: AWS Lambda can double up as an analytics option too. Enter Amazon Kinesis Analytics, which allows you to process streaming data in real time with standard SQL; no losing time on processing frameworks or learning a new programming language. A Kinesis stream can be created to relentlessly capture and store mammoth data chunks from a hundred thousand sources or more. Kinesis helps in continuous low-cost collection, storage, and processing of streaming data. You can build customized data applications to serve any specific need. Either Amazon Elasticsearch or EMR picks data from the Kinesis stream for analysis. The results are up for display on a real-time dashboard. To know more about Amazon Kinesis, click here. Big Data Pipelines: Big Data ‘pipelines’ can be built using serverless architecture. How does a serverless pipeline help, though? First, it is highly scalable; second, payment is on a per-execution basis; and third, there is no need to manage a group of EC2 instances. The process of moving data from a data source to a data target through a Big Data pipeline is called ETL (extract, transform, load). Working with AWS Lambda: An Example Apart from working with one of our Fortune 500 clients, we have used AWS Lambda for a variety of interesting use cases, from IoT startup clients to integrating a revolutionary kitchen appliance. For illustration purposes, we reproduce here an example of how Lambda plays a key part in the creation of a news blog app. Users can visit the app and sign up for the news blog newsletter. Admins can log in and collect the newsletter subscriber list. The requirements are as follows: Newsletter signups handled by a Lambda function and API endpoint. Subscriber list retrieved by a Lambda function and API endpoint. Enabling admins to log in. Multiple AWS services need to be employed to attain our goal. Microservices will be written using Lambda. To expose Lambda functions to the web, API Gateway will come in handy. IAM and Cognito will handle user authentication. DynamoDB is the database for storing newsletter subscriber info. You first need to set up DynamoDB. You can learn more about DynamoDB here. Onto Lambda: after the database is set up, we implement the Lambda functions. Two Lambda functions will be created: the first to store user email addresses, the second to retrieve the email list from the database. A function that accepts multiple parameters is exported to help set the request’s context. The implementation is written within the function. To finish the operation, the callback function is called and passed the data we would like to respond with. We use Node.js to write our functions inline within the Lambda dashboard. STORE NEW SUBSCRIBER AWS LAMBDA FUNCTION The first function is implemented by navigating to the Lambda homepage in the AWS dashboard and creating a Lambda function. A few settings require configuration before commencing to write code. Set the Lambda function name, and for runtime select Node 4.3. The remaining settings can be left at their defaults. Set role to “Choose Existing Role”; now you can select ‘server role/admin’. This gives the Lambda function the ability to call and execute code from various AWS services like DynamoDB. An incorrect role setting will cause Lambda function errors.
The AWS SDK is used within the code to make it easy to interact with other AWS services.

'use strict';

// Require the AWS SDK and get an instance of our DynamoDB
var aws = require('aws-sdk');
var db = new aws.DynamoDB();

// Set up the model for the email
var model = {
  email: { "S": "" },
};

// This is the function called when our Lambda function is executed
exports.handler = (event, context, callback) => {
  // We'll use the same responses we used in our Webtask
  const RESPONSE = {
    OK: {
      statusCode: 200,
      message: "You have successfully subscribed to the newsletter!",
    },
    DUPLICATE: {
      statusCode: 400,
      message: "You are already subscribed."
    },
    ERROR: {
      statusCode: 400,
      message: "Something went wrong. Please try again."
    }
  };

  // Capture the email from our POST request
  var email = event.body.email;

  if (!email) {
    // If we don't get an email, we'll end our execution and send an error
    return callback(null, RESPONSE.ERROR);
  }

  // If we do have an email, we'll set it on our model
  model.email.S = email;

  // Insert the email into the database, but only if it does not already exist
  db.putItem({
    TableName: 'Emails',
    Item: model,
    Expected: {
      email: { Exists: false }
    }
  }, function (err, data) {
    if (err) {
      // If we get an error, we'll assume it's a duplicate email and send an
      // appropriate message
      return callback(null, RESPONSE.DUPLICATE);
    }
    // If the data was stored successfully, we'll respond accordingly
    callback(null, RESPONSE.OK);
  });
};
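Before wiring the function up to API Gateway, it can be smoke-tested with a hand-built event that mimics the shape the handler expects. This is an illustrative sketch, not part of the original article: it assumes the handler above is saved locally as index.js, and that AWS credentials and a region are configured, since the call still reaches DynamoDB.

'use strict';

// Load the handler exported by the store-subscriber function above
// (assumed to be saved as index.js alongside this file).
var handler = require('./index').handler;

handler(
  { body: { email: 'reader@example.com' } }, // event shaped as the handler expects
  {},                                        // context (unused by this handler)
  function (err, response) {
    // Expect RESPONSE.OK on the first run and RESPONSE.DUPLICATE on a repeat
    console.log(err || response);
  }
);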
In the store function above, the event object passed in carries the email when the function is invoked. Moving on to the second Lambda function, which retrieves the newsletter subscribers.

RETRIEVE SERVERLESS STORIES NEWSLETTER SUBSCRIBERS

The earlier process is followed to create the new Lambda function; the only difference is changing the name of the function to subscribers. Once the function is created, the code logic is implemented as follows.

'use strict';

// We'll again use the AWS SDK to get an instance of our database
var aws = require('aws-sdk');
var db = new aws.DynamoDB();

exports.handler = (event, context, callback) => {
  // We'll modify our response object a little bit so that when the response
  // is OK, we'll return the list of emails in the message
  const RESPONSE = {
    OK: {
      statusCode: 200,
      message: [],
    },
    ERROR: {
      statusCode: 400,
      message: "Something went wrong. Please try again."
    }
  };

  // We'll use the scan method to get all the data from our database
  db.scan({ TableName: "Emails" }, function (err, data) {
    if (err) {
      callback(null, RESPONSE.ERROR);
    } else {
      // If we get data back, we'll reshape it to make it easier to read
      for (var i = 0; i < data.Items.length; i++) {
        RESPONSE.OK.message.push({ 'email': data.Items[i].email.S });
      }
      callback(null, RESPONSE.OK);
    }
  });
};

We now test the function. Lambda functions can be tested easily by clicking the TEST button at the top of the page. The code will execute and display the results of the operation, along with a log to help debug any issues.

The AWS Lambda Edge: Summary

Though AWS Lambda has been around only since 2014, major global businesses are already adopting it. The main factors fueling its popularity:

Product launch time is significantly reduced.
IT costs are reduced to a huge extent, since businesses don’t have to budget for underutilized or wasted computing and engineering capacity.

It’s still early days for Lambda, though. It must prove its mettle for enterprise IT functions like testing, security and configuration management. For more on serverless architecture, AWS Lambda, comparisons with similar computing services and other aspects of technology, keep visiting our blog section. (The AWS Lambda Effect image idea source: https://aws.amazon.com/lambda/. Code example courtesy). Source: Cuelogic Blog
https://medium.com/cuelogic-technologies/a-guide-to-serverless-computing-with-aws-lambda-ba613e2d8d5f
['Cuelogic Technologies']
2019-02-04 04:37:29.723000+00:00
['Microservices', 'Machine Learning', 'Serverless Computing', 'AWS', 'AWS Lambda']
Freelance Writing Business: How to Have Early Success
The old adage “time is money” certainly applies to freelance businesses. It is crucial at every stage of business development, but especially in the beginning, when you are mapping out your business strategies, implementing new software, getting your organization in place, and setting yourself up for success. With online services on the rise, now is the time to build that side hustle into dependable income. What fulfills a part-time interest today could be the soil for a full-time writing career (or other entrepreneurial opportunity) to grow and flourish in the future.

Early Freelancing Success Is Possible

I launched a full-time writing business in July of 2020 after not having worked a “real job” in over 5 years due to some health issues. In July, I decided the time was right. The time was now. My expectations, though low, have been exceeded by the early success I am seeing for Fiddleheads & Floss Writing Services. Whatever freelance business you are dreaming of, there are a few things you can do to bring about early success. I’ll cover those here, along with some tips and strategies for finding early success in your freelance or solopreneur business.

In a previous article in The StartUp, Building a Freelance Business From Scratch, I discussed the components of starting your own freelance business: things like business establishment, effective communication, organization, and using good tools for running your business. These were all things I got in “crash course” form as a new business owner. What I didn’t share in depth was that I had previously been an entrepreneur in a different business. I was a self-employed hairdresser for 16 years and became rather successful in that field. I ran a full-time business and also worked for several large corporations, travelling the east coast and teaching small seminars. What made my business more profitable than others was my dedication to running it as a business. But what took me approximately 2 years in the beauty industry to see in terms of profits, I have done with a writing career in less than 3 months.

Here’s How I Have Found Early Freelancing Success (and you can too!)

First, let me be clear: success is a subjective term. What you may consider success could differ greatly from what I consider success. For our purposes here I define it as:

Early measurable profit
A pattern of increasing financial growth
Regularity of sales / work / client orders

Eventually, with all these things securely in place, success will be measured less in terms of profit and more in terms of self-satisfaction and quality of life.

Sow the Soil

When beginning any freelancing or small business, timing is everything. You have to have the wisdom, self-awareness, and a critical eye for the business you are entering to know when the timing is right to launch. Before launch date, you need to sow the soil; meaning, create fertile ground for your business to grow in and flourish. Expand your reach. Grow your social media support and begin your email list. Begin developing the connections you need to support your business. This includes mentors and business professionals from whom you can learn more about your business, potential clients, and people who know things you don’t know, like IT stuff or legal jargon. Begin developing rapport with people who will be your potential clients.
Take on some jobs even if they are at very cheap pricing — you will need references. Develop a portfolio of your work and some references for each type of service you will offer. Have your soil sown, seeded, and watered before you launch your business.

Clearly Defined Services & Branding

To launch any kind of business, you need direction. This is not to say you cannot try new things or branch out in new directions once established, but starting out, your potential clients need to know exactly who you are and what you have to offer them. Do all of your brainstorming and toe-in-the-water testing on your own and let your clients feel a sense of stability with you. They need to know they can trust you in order to work with you. Your reputation is everything — with every client. Every job, no matter how small, is an opportunity to build or break your business and should be treated as such.

Set Realistic Goals and Monitor Your Progress

I started out with a simple goal: double my income each month. This is exactly what is happening. I adjusted my schedule for this goal. My pricing and how I spent my time — all geared toward that goal. I am projected to double what you see below by the 3rd week of December, a full week ahead of schedule. Then, I plan to double it again.

Current profit analysis and outstanding revenue, on track to more than double for December. Author screenshot.

When you set a goal for your business, it holds you accountable for all those “I’ll do it later” moments which can be so tempting when you self-manage. To make your business take off, you must be committed to its success. Goal-setting and holding yourself accountable will force you to analyze risk vs. reward, time spent vs. gain, expenditures vs. necessity, and all other important aspects of your business in a logical, purposeful manner.

Diversified Revenue Streams

For most small businesses, you will have one main revenue stream, whether sales, services, or a combination of both. To be truly successful you need to diversify your revenue streams as much as possible, create passive income to help feed into your profits, and consider how you can monetize at all levels of your business. Below you can see some of the different revenue streams I have coming in, all under the same umbrella of Fiddleheads & Floss Writing Services. There are jobs from Fiverr, jobs from private writing clients, income from paid content sites, and book sales. No matter how small a revenue stream is, it all adds up.

Various revenue streams make for a more successful start. Author’s screenshot.

Self-Discipline Is Key

Need I expand on this? This is your business; treat it as such. Self-discipline and time management will be the foundation of your everyday schedule. No one is there to tell you to get up in the morning and get busy, or to check your emails, or to touch base with clients. Only you. I often told my boys as they were growing up: there are two types of people in the world, those who make excuses and those who make things happen. You get to decide each day which you want to be.

Curb Unnecessary Spending — But DO Invest in Your Business

As a brand new business owner, you can bet it was not easy for me to shell out $139.50 for FreshBooks, $19.99 for Hemingway, $129.00 for Dragon NaturallySpeaking, $50.00 for a headset, and other subscription services my business uses, but these programs keep my business running smoothly and professionally.
Author’s note: The screenshots in this article are from my FreshBooks account, which I use to monitor all aspects of my business. I highly recommend this software for creative professional business owners. The program is very user friendly and the customer service is stellar.

Imagine the difference between giving a client your PayPal account and asking them to send you a payment — and sending them a business invoice detailing what they have ordered from you and the cost breakdown, and offering several methods for easy payment. Clients love it when you take them seriously, and when you are a legit, organized, reputable business. Suddenly, people were much more prepared to pay me for my time. It bears repeating: reputation and client perception of your business are everything.

Regarding curbing your spending: while yes, you must invest in your business, you must do it wisely. You need functionality, not bells and whistles. It’s that simple.

Be Completely Honest with Yourself About Your Strengths and Weaknesses

Lastly, there are some things you know you suck at doing. Develop strategies to manage these weaknesses. For example, I am more productive with fewer jobs. I know this, so I keep my prices higher and aim for fewer, bigger clients. Some people prefer to have many clients and roll out their orders more quickly, but larger clients will expect you to take your time with their work. They may also require company training, which you may or may not get paid to complete. Consider it an investment in your skills, only with your time rather than your debit card. This training can prove invaluable for your work across the board, and it will make you better at what you do. Be honest with yourself about what kind of work you are really good at and aim to develop your business around your strengths. Start with what you are good at and be willing to develop your skills in areas where you are lacking. This is no time for conceit. Honest analysis of your own working methods, your strengths, and your skill levels is important for developing your business and building a client load that works for you. Invest in courses to shore up your skills. OpenLearn has loads of free courses you can take online.

The Takeaway

Hard work means nothing without direction and without end results in mind. You can find early success in your freelance business if you are clear on what you want from your business and set yourself up to receive it. I wish you great success in your business.
https://medium.com/datadriveninvestor/freelance-writing-business-how-to-have-early-success-f97410cb262f
['Christina M. Ward']
2020-12-28 05:55:31.365000+00:00
['Freelance', 'Business', 'Entrepreneurship', 'Self Improvement', 'Writing']
What’s Left to Say About Maps? Tackling the sticky questions about…
Our third Fireside Chat is all about maps. We tackle the sticky questions about best practices, when to break the “rules” of cartography, and let our panelists debate the merits of proportional symbols compared to choropleths. Not sure what those words mean? That’s fine; if you’ve been consuming maps in the news lately, we promise this session will be just as fun and exciting for you. After an initial discussion among the panel, we opened the floor for questions from viewers.

Panelists include:

Kenneth Field — Principal cartographic product engineer at ESRI
Madison Draper — Designer at Mapbox
Elijah Meeks — Chief visualization officer at Noteable

Moderated by Alberto Cairo

Subscribe to the DVS YouTube channel here.
https://medium.com/nightingale/whats-left-to-say-about-maps-dvs-fireside-chat-d66c9e5d2539
['Data Visualization Society Team']
2020-06-25 15:25:45.490000+00:00
['Maps', 'GIS', 'Data Visualization', 'Mapping', 'Design']
Chasing Immortality
The year is 2573: Lilara’s laughter fades into awed silence as the autocab drifts to a stop. It was a long ride from the Johannesburg spaceport, but finally, the family had arrived at their destination: Lilara’s mother’s old family estate. Her mom met her father there when he was just a 24-year-old graduate student performing ecological research on the 50,000-acre preserve. That was in 2088, when the desert was threatening to overtake everything completely. Eventually, Lilara’s parents moved together to Mars City Two, also known as Olympia, where they became leaders in the terraforming initiatives. Over the next few centuries, Mom and Dad would end up holding some of the most senior roles in Mars’Sci. Because of them, Mars was fast becoming a wet, green planet that would be on par with Earth in another couple of centuries.

After dinner in the guest house, the little family spent a few hours with Lilara’s grandparents in the VR world Archeon, where all of her relatives who had died since about 2100 continued to live. It was always bittersweet, since everyone who visited knew that one day they would be there, too. All the Archeonese assured the Living, however, that they couldn’t tell any difference between their biological lives and their virtual ones.

Mom and Dad waited so long to become parents because they were busy, important people, and there was so much to do. But Lilara knew that when they did decide to have her, it was the happiest choice they’d ever made. They had spent a long time making sure the people of Earth could have a second home if they needed it, and now they would spend just as much time making sure their daughter could have the same opportunity if she ever chose to.

Immortality

More than a few scientists believe that a baby born today might never die. “Immortality” is a loaded word, for sure: in the broadest sense, it refers to not being able to die, ever. To know what immortality is, however, we need to define death. For us, death is a permanent cessation of all vital functions: the end of life. (Merriam-Webster) There is cell death, when all cell processes end and the cells begin to degenerate. There is also brain death. And then we have “poetic death”, when the light goes out of a person’s eyes, a soul escaping a damaged vessel and moving to a higher plane. Death, as with life, is a complicated matter, with roots in both philosophy and science. For our purposes, we will say that death is when the body dies and takes with it a person’s memory, personality, and consciousness. “Immortality” will be any state in which a person’s memory, personality, and consciousness remain intact, in some accessible form, indefinitely. To many people, we are already immortal, in the sense that what makes us “us”, in the form of a soul, leaves our bodies upon biological death but then moves on to an afterlife.

What might keep Lilara and her family going for centuries? We may find some of the answers in creatures that we’ve shared this pale blue dot with for aeons.

The Undying Things Among Us

A German marine biology student discovered Turritopsis dohrnii in 1988 when he was researching hydrozoan invertebrates on the Italian Riviera. He kept this strange species, which he didn’t recognise, in a petri dish, just as he did with other findings, and observed it over the course of the next week. Rather than reproducing or dying, as expected, his Turritopsis dohrnii specimens behaved very differently: they appeared to be ageing in “reverse”, returning to earlier stages of their life cycle.
Here he had found a real-life Curious Case of Benjamin Button, or, shall we say, Jeremy Jellyfish. Other researchers continued to study Turritopsis dohrnii over the ensuing decade, ultimately resulting in a 1996 paper by Ferdinando Boero et al. titled “Reversing the Life Cycle”. The creature was examined in great detail and observed returning to its earliest jelly life stage (the polyp form). In the decades since, we have come to understand this a bit better. When stressed by its environment, members of this jelly’s genus undergo the process of cellular transdifferentiation, in which various cells transform into other types of cell, which is what occurs with human stem cells.

Now popularised as the “immortal jellyfish”, Turritopsis dohrnii still holds intricacies we don’t completely grasp. The little creatures have been spreading around the world, moving from the Mediterranean in cargo ship seawater ballast, and can now be found from the Caribbean to Japanese waters.

Kevin J. Peterson is a molecular biologist at Dartmouth studying the mechanism by which our immortal jellyfish continues in its constant forward, backward, forward cycle. This is controlled, seemingly, by a genetic material called microRNA, which appears to cause stem cells to move from their undifferentiated state to their fated forms (heart cell, brain cell, skin cell, etc.). James Carlton, a professor of marine sciences at Williams College in Massachusetts, says, “That word ‘immortal’ is distracting. If by ‘immortal’ you mean passing on your genes, then yes, it’s immortal. But those are not the same cells anymore. The cells are immortal, but not necessarily the organism itself.” In essence, when our immortal jellyfish’s cells transdifferentiate, it is just returning its cells to their primary undifferentiated forms. It then re-builds them into new versions of their differentiated forms, creating what is essentially a clone of the original jelly.

Other Earthly living things may be immortal (and could be included along with some of the most extreme creatures here), or at the least are astonishingly long-lived:

Remarkably similar to the immortal jelly is the freshwater hydra polyp, which also seems to be able to live indefinitely. It regenerates lost appendages and never shows signs of cellular senescence or ageing.

Red sea urchins seem to be able to continue living, reproducing and regenerating almost indefinitely once they grow out of their larval stage.

The bristlecone pine of North America can live thousands of years. One specimen, nicknamed “Methuselah” after the longest-lived human named in the Old Testament, has been dated at over 4,700 years. This nigh-immortality also seems to be related to stem cells, as described in this research paper from 2013.

The Arctica islandica ocean quahog clam can live for over 500 years. One was found in 2006, nicknamed “Ming the Mollusc”, but was summarily placed in a freezer by researchers who did not realise it would turn out to be of record-breaking age. These clams, along with numerous other long-lived bivalves, tend to follow a rule common among many animals: the longer it takes to reach reproductive maturity, and the slower the animal’s growth rate in general, the longer its lifespan tends to be post-maturity.

Long life isn’t just reserved for small, less complex animals. The Aldabra giant tortoise, tuatara lizards, rougheye rockfish, Greenland shark, koi, and even the huge bowhead whale have all been found in many cases to live longer than 200 years.
And let us not forget ourselves, Asian elephants and macaws, which all share lifespans of 70–80 years when provided with proper care and nutrition.

Technology as Savior of Biology

Immortality for humans could take many forms. As quantum computing progresses, thanks to the continued efforts of IBM and others, our potential to store the immense complexity that is a human mind grows. In the March 2018 issue of Fortune, writer Grace Donnelly reports on the Y Combinator venture Nectome, which is aiming to preserve brain structure through a chemical freezing process, with the hope that memory and knowledge will also be preserved and can somehow be copied.

Photo by Andy Kelly on Unsplash

Dr Ian Pearson, a British futurologist and author of You Tomorrow, places paths to immortality into three buckets:

“Living” in virtual worlds.
“Living” in android bodies.
And renewing the human body.

The first two are essentially the same and would likely both be possible if either one comes to be. This is because the fundamental problem that needs to be solved for each is the same: can the mind be somehow downloaded or otherwise copied?

“A complete map of the human brain containing detailed information about each neuron and synapse would occupy about 20,000 terabytes and require 10^16 flops (floating point operations per second) of processing power to function. Currently, only the world’s fastest supercomputer possesses the capability of crunching that many numbers in a second.” -Jordan Inafuku et al.

If what makes a person a person is all in the structure of the brain, many scientists feel that this is essentially possible, and with enough advances in technology we will eventually be able to make a copy of ourselves. If there is a “missing ingredient” that results in consciousness and that is responsible for making you “you”, something that cannot be recreated by just building an exact structure of neurons, we may be out of luck. We may find we can copy the structure and information of a mind, but when we “boot” it up, all we see is hard data and a complete lack of anything like self-awareness.

Renewing the human body could take numerous forms, and would likely be a combination of replacement parts and gene therapy, based on trends in current medical technology. Advances in lab-grown organs, such as these fallopian tubes, and in 3D printing (called “bioprinting”) will probably make traditionally transplanted and artificial organs, and even limbs, seem antique by the middle of this century. Stem cell infusion therapy is also proving quite promising, as detailed in the results of a study published in April 2017 in The Journals of Gerontology: fifteen frail patients between the ages of 60 and 95 each received stem cell transfusions from donors between the ages of 20 and 45, and six months later, all patients were healthier.

And what happens when we achieve immortality? If we can keep our bodies and minds intact for many multiples of our natural lifespan, this will not be a guarantee of eternal bliss. Unless all of us become “immortal” at once, there would be a long time during which only small numbers of people might be transcending our biological limits. Perhaps the treatments and procedures are only affordable by the very wealthy, resulting in situations much like those found in the recent Netflix series Altered Carbon, the 2013 Neill Blomkamp film Elysium and the 1990 Orson Scott Card novel The Worthing Saga.
And whether our minds end up in a massive VR simulation or in android bodies, or remain in biological or cloned bodies, there is the problem of how much moving through the vastness of time itself might affect us. How long can one consciousness continue to operate in an uninterrupted state before a psychological problem develops? The boredom of everyday existence generally affects most modern humans, with varying psychological impact, even before their eighth decade of life. Imagine eight centuries. And time would eventually seem to become a blur, as we experience and remember things relative to how long we’ve lived in total. Beyond those issues, there is the question of how we would handle the increased overpopulation that would result from widespread “immortality”.

Ultimately, one thing that can’t be ignored is that immortality may not be possible at all. Given enough time for the science to catch up to our imaginations, we will likely be able to prolong the lifespans of our bodies significantly, and perhaps even copy and download our minds and preserve self-awareness and consciousness. But just as with the tiny Turritopsis dohrnii, which must completely remake itself in order to never truly die, most paths to our immortality will require that same sacrifice. Copying a mind would be similar to creating another instance of a database. For a while, at least, you would have two of the same thing, and the original must either be kept or destroyed. But the copy would not be the original. It may seem that way to an outside observer, and the copy may not even know it’s a copy. The fact is, though, that the copy is a new thing, a new person, whether built from a brain scan or regenerated cell by cell in a lab.

Eugen Suman, a fellow thinker and futurist based in Romania, says regarding the above problem (dubbed the “continuity paradox”) that we can:

…imagine this scenario: you get injected with an army of nanobots that, over a period of ten or more years, will replace your brain cells one by one with “artificial cells” so that at the end of the ten years you will have a new artificial brain instead of the old one you possessed. Most people would agree that in the second scenario it would still be you…

This is a variation on ideas put forth by famed futurists Ray Kurzweil and Hans Moravec. Replacing our mortal brain with an immortal substrate, the smallest possible piece at a time, while keeping it all connected offers the best — if not yet feasible — chance at keeping our identity. So there may well be a way to create a true immortality loop for humans, given enough time and technological advancement. Still, we are left with the potential development of various psychoses or other unforeseen problems that will arise with extremely long-lived consciousnesses. Immortality can only ever go so far.

Thank you for reading and sharing.
https://medium.com/predict/chasing-immortality-49bbf3a11a61
['A. S. Deller']
2020-12-27 13:18:29.083000+00:00
['Immortality', 'Science', 'Future', 'Consciousness', 'Biology']
Class 8: Preparing an AI Workshop
Class 8: Preparing an AI Workshop

The steps and tools for setting up an AI Design Camp or Workshop

Watch Class 8 >

Any time you’re asked to run an AI-focused design workshop or camp, the very first step is to make a duplicate of this folder and move it to your team’s project folder, where you can start customizing it: The AI Jumpstart Toolkit: Original >

The “AI Jumpstart Toolkit: Original” Box folder contents

The documents in this folder are numbered to match the order you’ll need them in:

01_AI Jumpstart Guide: Review this first for a how-to on customizing, preparing for, and conducting an AI Jumpstart Camp. This is also a good reference to have printed and on hand throughout camp.

02_AI Jumpstart Agendas: 1-, 2-, and 3-day agenda examples you can customize.

03_Create your AI Camp Mural: Links to the design thinking Mural templates you can use or customize for your camp.

04_AI Jumpstart Prep Work: Once you have your Mural and agenda established, share the prep work assignments with the camp attendees.

05_AI Jumpstart Kickoff: A deck template you can use to kick off day one of camp.

06_ML 4 Design Guide: A good reference to have printed and on hand throughout camp.

07_AI Jumpstart Retro: Have the team complete this 15-minute retro to conclude your camp.

08_Journey to GA: A guide through your next steps — preparing to build and deploy AI, and setting measurable key results.

09_AI Camp Results: A PDF of measurable AI Design Camp results captured between 2018 and 2019.
https://medium.com/ai-design-thinkers/class-8-6c67402053
['Jennifer Aue']
2020-01-27 23:41:16.394000+00:00
['AI', 'Design Thinking', 'Design', 'Adfclass']
We designers serve others
However, the path very often turns out to be rough. There are a lot of obstacles to deal with just to make something good. Between the 18th and 19th centuries, when the Industrial Revolution took place and technological progress made it possible to manufacture faster and at high volume, the Three Pillars of Design were defined:

Voice of Business. How do you as a designer respond to a company’s business model to make a product relevant?

Voice of Technology. How do you design to create something within a budget and yet well made?

Voice of Customer. How do you answer people’s needs? How do you solve people’s problems? How effectively do you solve those problems? How do you design to embrace individualities? How do you design for people to enrich their lives?

That looks like a challenge. And it is. You see, I think we as designers need a tremendous amount of self-discipline to be good at what we do. Answering the questions above, facing them in the most successful way, requires a thing that I find the most precious quality of us designers, a thing that we constantly have to learn — humility.

Humility — ענוותנות, Hebrew “anavah” — a sign of strength and purpose, not weakness.

We often mistake humility for weakness, whereas it is a strength. We can be highly skilled specialists using the most advanced tools, but the result of our work comes from a process, from our relationship with those we collaborate with. Way before a product meets customers, we designers meet business stakeholders and engineers, the people responsible for revenue and for the fact that our ideas can come to life. That is the place for us designers to serve others with humility, which is represented by:

the ability to listen
not stealing somebody’s air
being aware that sometimes we may be wrong

There is a beauty in the service of being responsible for giving form to business intentions and implementing solutions. We designers tend to communicate brilliantly; however, we do it by asking questions. So much can go unheard when we are not attentively listening to barely formed, fragile thoughts that have a chance to become the foundation of a fabulous solution. During ideation and execution, any contribution is a new point of view that can pivot a project in a way more beneficial to the final result; therefore we designers are expected to create a comfortable environment for others to collaborate in. And finally, constant learning means being ready to fail and to seek an outcome from failure that brings value to our teams, to the project, and therefore to customers.

Sadly, customers can very often be victims of design instead of beneficiaries of it. Unfortunately, we get stuck looking for a balance between look and function. Obviously, a product that looks good but has very poorly solved functionality can be ugly. At the same time, we often try to believe that the thing that predominantly matters is functionality. But we very rarely ask the question: “What do we want people to feel?” You see, designing is three-dimensional. I believe that we designers do what we do in the service of aesthetics, function, and the feelings that someone, somewhere had using a product we designed. Do you remember the smell of a just-unboxed product that you had waited for a while? Do you remember the joy of the unboxing? Do you remember the texture of that shiny new product? Do you remember the sound of taking off the foil?
We forget what a company did and said to convince us to buy their product, but we never forget how they made us feel. Those are the outcomes of unreasonable intentions that go far beyond being just aesthetically and functionally correct. These are the outcomes of the amount of care invested in every meticulously considered detail: the care invested in expertise and research to find a solution, a creative direction, a material, a manufacturing process that allows translating reckless ideas into something that will definitely be discerned yet barely articulated.

Voice of Environment. There is one other thing that was not mentioned in the Three Pillars and is ever so important. We designers also serve the environment. We, more than anyone, have a civic and moral responsibility to create solutions and choose materials, tools, and processes that will serve not only business, technology, and customers but also our common home, the only one we have — the planet. Our decisions are made within minutes but often impact years to come. Ignorance can turn into evil. But proactivity can avoid disasters and resolve problems. With deliberate solutions, we can address the quality of life of future generations.

Being a designer requires maturity and self-awareness. We have the power to build, to enrich, to embrace. We carry the responsibility of connecting creative voices. We can make a trustworthy room for incubating ideas. We can shape the future. To serve others is delightful, isn’t it?
https://medium.com/the-supersymmetry/we-designers-serve-others-49cdbfcccf0e
['Radek Szczygiel']
2020-11-21 13:37:32.737000+00:00
['Environment', 'Business', 'Customers', 'Technology', 'Design']
How to buy Bitcoin by programming language?
This is the first time that programmers can buy and sell Bitcoin through their favorite programming language:

PHP
Java
C#
Node.js
Go
Python
https://medium.com/mixinnetwork/how-to-buy-bitcoin-by-programming-language-17b0620bdd4d
['Lin Li']
2019-04-22 14:29:09.900000+00:00
['Nodejs', 'Go', 'PHP', 'Java', 'Python']